
Sechel

logic, language and tools to manage any organization as a network

Domenico Lepore

with Giovanni Siepe, Sergio Pagano, Francesco Siepe, Yulia Pakka, Larry Dries,
Gianlucio Maci

Edited by Angela Montgomery


Copyright 2010 Intelligent Management Inc.

Toronto, Canada.

ISBN 978-0-557-58884-8
Contents
Acknowledgements
Preface
Introduction
PART ONE: Intuition (birth of an idea)
1. Freedom and responsibility: designing the new systemic organization
2. Predictability vs. Speed: Back to the Deming and Goldratt conflict
3. Unleashing potential from the trap of hierarchy
4. Operating a systemic organization: who, when, why and how
PART TWO: Understanding (Analysis and development)
5. Transforming Industry
6. Intelligent Industry: Quality, Speed, Network
7. Creating Conscious Industry, and Industry with a Conscience
8. New Leadership, New Economics
9. The Ten Steps of the Decalogue
PART THREE: Knowledge (application/execution)
10. Sechel: Fostering a higher intelligence with the Thinking Process Tools
Dr. Domenico Lepore
11. Measurements: Throughput Accounting and Statistical Process Control for Effective Decision Making
Larry Dries JD
12. Managing Variation
Dr. Giovanni Siepe
13. Using the External Constraint in an Integrated Network for Marketing and Sales
Yulia Pakka MScBA
14. The Information System within Intelligent Management
Francesco Siepe PhD and Prof. Sergio Pagano PhD
PART FOUR: The New Intuition of the Enterprise as a Network of Projects
15. A Systemic Approach to Complex Networks
Gianlucio Maci PhD
Summary of the main ideas in this book
Biography of Domenico Lepore
Bibliography

This book is dedicated to the blessed memory of
Dr. W. Edwards Deming and Rabbi Menachem Mendel
Schneerson, the Lubavitcher Rebbe. In my mind, they
represent the quintessential form of sechel the world so
greatly needs today.
Domenico Lepore

Dr. Lepore graduated as dottore in fisica from the University of Salerno with an experimental
thesis in quantum metrology. He became an expert in the systems-based approaches to
management of W. Edwards Deming and the Theory of Constraints (TOC), and subsequently
developed a systems thinking management methodology named the Decalogue™. Lepore
has led successful improvement and turnaround implementations in over 30 national and
multinational organizations, primarily in Italy and the United States, in sectors as varied as
aluminium and care for the elderly.
Lepore co-authored the book Deming and Goldratt: the Decalogue with friend and
foremost TOC expert Oded Cohen. The book, published in 1999 by North River Press in the
U.S., has been translated into several languages. It contains the basic tenets of the
methodology Lepore has implemented over the last 15 years and is recommended reading
for various universities around the world.
Lepore is the founder of Intelligent Management Inc., an organization with the goal of
promoting a systems-thinking approach to organizations and boosting management
intelligence to deal with an increasingly interconnected and interdependent world. He is co-founder
of Invictus IM Corp., a strategic advisory and investment firm which assists corporate
management teams to recognize and achieve full potential for their companies. Invictus IM
uses the Decalogue methodology in all its activities. (A detailed biography is provided at the
end of this book.)
Further information on the Decalogue and the contents of this book can be found by
visiting:

www.intelligentmanagement.ws
Acknowledgements

There are many people I wish to acknowledge and thank for their support; without them this
book would not exist.
First and foremost, the friends and colleagues who agreed to illustrate with examples
the points I have tried to make throughout the book;
Oded Cohen and Martin Powell for their teachings and for standing by me through thick
and thin;
Rabbi Aaron Raskin of Chabad of Brooklyn Heights, who accepts the burden of teaching
and dealing with a secular gentile without ever losing his cool;
Rabbi Yanki Tauber, who so beautifully transposes the sichos of Rav Menachem Mendel
Schneerson into essays that will always educate and inspire;
Larry Dries, Lorne Albaum and Seth Dalfen for helping me through very difficult times
and for making Canada home;
Corrado De Gasperis, Karen Narwold and Jim Guddy for their many years of effort to
make management an intelligent activity;
Piero Alviggi for being more than a brother ever could be, and Alberto Mizzotti for his
unfailing professional support;
I would need a new, not yet invented, system of words to describe what I owe to my wife,
Angela Montgomery, the life and soul of Intelligent Management. Only this system could
define the breadth of sentiments and emotions I experience every day with her. This book is
hers just as much as it is mine.
PREFACE

In the last thirty years a rapidly increasing number of fields of human knowledge, from
science to medicine, from epistemology to environmental studies, have turned to systems
theory and systems thinking to gain a deeper insight into the basic mechanisms of life and its
evolution. The findings have all pointed to certain basic features that we all, as living
entities, share.
What every rigorous study in any field has proved is that not only are divisiveness and
individualism unsustainable in a globally interconnected world, they are contrary to the basic
biochemical fabric of our very existence. Economics and politics, as well as management, have
been almost oblivious to these findings and for the most part continue to regurgitate the same old
recipes; they continue to apply patches instead of acknowledging the new emerging paradigms.
Win-Win conflict resolution, cooperation instead of competition, symbiosis instead of
survival of the fittest, patterns not just structures; these today are some of the basic, well
understood elements that make up a society that can sustain its ambition to evolve and
prosper, as well as the founding elements of our biological existence. Life, as we experience
it on this planet at every level, is based on interdependencies and interconnections. We exist,
as Fritjof Capra brilliantly pointed out, within a “web of life”, a network of interdependencies
that cannot be understood solely in terms of its basic components but has to be studied in
terms of its interrelations.
This book deals with economics and management, two human endeavours that are
tragically trailing behind in the quest for understanding how we, as humans, can live together.
It does so by looking at the single most fundamental aspect of our existence that makes us
humans different from any other animal: the ability to think and learn. More precisely, it deals
with our inability to transform what we know into coherent action at an appropriate pace. We
seem to be lacking the kind of “intelligence” required to tap into that higher level of
consciousness that makes us see what we know and who we are as one.
We cannot surrender to this deficiency because the ever-increasing complexity of our world
demands more, and better, abilities to live and work in constantly changing environments. We
need to transform our cognitive patterns to adapt to this unprecedented complexity.

More specifically, there are three faculties of the intellect that we need to learn to link together
if we want to remain at the helm of the transformation process:
1. the ability to generate new ideas (intuition);
2. the ability to understand the full spectrum of implications of these newly developed
ideas (understanding);
3. the ability to design and execute a plan coherent with this understanding
(knowledge).
The “new intelligence” we need to develop is called, in Hebrew, sechel and in this book
we look at how it can be acquired and applied.
In order to produce results, this sechel has to be complemented by a rigorous method of
investigation that is typical of science. Dr. Edwards Deming embedded this rigorousness into
the PDSA (Plan, Do, Study, Act) cycle and its statistical underpinnings.
Last, but not least, sechel and PDSA have to be supported by a coherent organizational
structure. Such a structure must be systemic in nature; this allows us to overcome the
strictures of the traditional hierarchical/functional organization and free individuals from the
prison of wrong interdependencies.
This book is about these three elements: sechel, PDSA and systemic organizational
structure, but it is also about what enables these elements to generate results, i.e. leadership.
This is not a “self-help” book nor a “how to” guide; at the same time it is not a book mired in
philosophical speculations or, even worse, in “new age”, feel-good, psycho-behavioural
tendencies. It is about the founding principles of a new epistemology of wealth; it is a bridge
between rock solid elements of foundational knowledge and their practical application.
Ultimately, it is a book about the pattern to reunite what we do, how we do it and who we are.
It is very likely that most of you will think that what you read here makes sense. Still,
I believe that many will not try to embrace or adopt the ideas presented here. I decided to use
the term “Intelligent Management” to name the interrelated set of principles, methods, beliefs
and behaviours that, in my opinion, make the difference between what is “intelligent” and
what is, let’s say, something else. In the end, it is important to take responsibility for the
amount of intelligence with which we accept to live our lives.
The book tries to explain Intelligent Management in three ways: by elucidating upon its
founding concepts, by illustrating with examples how these concepts can be applied, and by
connecting the semi-empirical nature of managerial findings with a fast-growing
piece of mainstream science, network theory.
Domenico Lepore, May 2010

INTRODUCTION

In our lifetime we are witnessing the exhilarating sight of artificial walls beginning to
crumble. Twenty years after the fall of the Berlin wall, we are moving slowly but steadily
towards the understanding that if we are to flourish then artificial barriers must be removed,
from apartheid and discrimination in all its manifestations, to ecology and global economies.
As communication becomes more instantaneous, we are increasingly aware that we are not
separate from others the way we perceived ourselves to be in the past. Every one of us, at
the level of individual, family, community, nation and beyond is interconnected; indeed, we
are part of a network of interdependencies.
This is not simply a political and sociological stance. There is an underlying scientific
basis for this shift in worldview away from a Newtonian, industrial age notion of the whole
being made up of separate parts towards one of dynamic interconnection. We know from
physics that natural entities can be viewed as systems, be they cells or solar systems, and
as such they obey certain laws. We define a system as a network of interdependent
processes that work together to achieve a goal. Such systems are made up of a web of
interconnections. In other words, we are coming to understand more and more that our reality is
systemic. For this reason, any attempt to govern countries or manage organizations that is
not based on this awareness is doomed to create damage. The negative effects may not be
immediate, but they will inevitably come.
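The definition given above — a system as a network of interdependent processes working towards a goal — can be sketched in code. The following is a minimal illustration (the process names and their dependencies are invented for the example):

```python
# A system modeled as a directed graph: each process depends on the
# output of the processes upstream of it. Names are purely illustrative.
system = {
    "procure":  [],            # no upstream dependencies
    "produce":  ["procure"],
    "assemble": ["produce"],
    "sell":     ["assemble"],
    "deliver":  ["sell"],
}

def upstream_of(process, graph):
    """Return every process whose output, directly or indirectly,
    feeds the given process."""
    seen = set()
    stack = list(graph[process])
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(graph[p])
    return seen

# A disturbance in "procure" eventually reaches everything downstream:
print(upstream_of("deliver", system))
```

The point of the sketch is the systemic one made in the text: no process in the network can be understood, or managed, in isolation from the processes it is connected to.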
If we have this knowledge, then why is it that decision makers fail to understand the long-
term, systemic implications of their actions in virtually every sector of society? Why do we
lack the basic intelligence needed to realize that every decision made for the advantage of a
privileged minority and to the detriment of the majority will eventually lead to disaster? What
is it that leads reasonably educated and, generally speaking, well-intentioned individuals to
make one shortsighted, half-baked, non-systemic decision after another?
In November 2009 the US Department of Agriculture announced that in the previous
year 49 million people in the USA had lacked consistent access to adequate food. In New
York City, as of December 2009, there were 1.5 million people with no paid sick days and
nearly 2 million who cannot read and write in English. Perhaps some business owners and
city officials believe there is an advantage in keeping the population ignorant and unhealthy.
What is more likely is that these people have a cognitive inability: they simply cannot draw
basic cause-and-effect relationships between a sick person going to work in the morning and
the spreading of illness (not to mention their inability to perform); between the growth of
illiteracy and the drop in quality of necessary services to the community; between an increase
in hunger and an increase in crime.
In the USA, over five years, ordinary people borrowed 1.4 trillion dollars that they could
not repay and lost their homes; the “financial industry” developed 14 trillion dollars of various
assets on the back of those loans and borrowed much more using these assets as collateral.
In 2009 the financial industry granted many tens of millions of dollars in bonuses to the 'heroes' who
generate so much money in tax revenue for the government. The 'only way out' of the crisis
seemed to be to rescue the banks. At the same time, a health bill to spend just under 1 trillion
dollars over ten years raised concerns. The list of nonsense is endless.
Greed, lust for power, evil inclinations, cultural beliefs and historic circumstances are all
part of human existence. However, the sheer and disconcerting inability displayed over the
last forty years by the leaders of the western world, with few exceptions, to exert meaningful
guidance, from world peace to the environment, from poverty and inequalities to the folly of
modern finance, leads me to believe that the issue at stake is much deeper than a host of
unguarded sentiments and unbridled instincts.
The recurrent inability of leaders to provide meaningful guidance has prompted me to
write this book. I believe that, in spite of the disasters around us, we are experiencing a
unique phase in our human history: we have the science, the knowledge and the tools to
foster a more systemic use of the intellect to capitalize on our intuition, to develop thorough
analyses, and to design and take correct actions for the common good.
We are talking about a more evolved form of human intelligence that allows intelligent,
thoroughly thought-through decisions to be manifested through intelligent action. There is no
precise term for this evolved intelligence in English, but it is perfectly described by the
Hebrew word ‘sechel’. By integrating the scientific method with a more systemic use of the
intellect we will achieve a greater level of sechel. In order to foster a more systemic
intelligence, or sechel, we must increase the ability of our minds to connect three faculties of
the intellect:
• intuition (the birth of an idea),
• understanding (development and analysis)
• knowledge (application/execution).
This sechel will enable a re-foundation of economics and management. My focus in this
book is the systems-based management of industry. This is the area I have worked in for
almost two decades. I believe that industry is where long-term, sustainable wealth for
countries is created. Quite simply, no real economy can exist without a solid industrial
infrastructure. The hope for the future of industry is Intelligent Management, and the purpose
of this book is to provide the vision, the method and the know-how for achieving this obvious,
but surprisingly rare, ability.

Systems-based management for a better future


We can define Intelligent Management as the cohesive integration of the scientific
method with a more systemic use of the intellect. In the realm of management, by scientific
method we mean the continuous improvement PDSA cycle (Plan Do Study Act) and its
foundational element the Theory of Variation with its offspring Statistical Process Control
(SPC). This is the bedrock of W. Edwards Deming’s Theory of Profound Knowledge. The
discipline and precision of the scientific method prepares and habituates the mind to a
systemic and constant effort towards continuous improvement. This mindset and cognitive
ability can be reinforced and empowered through a formidable set of logical tools developed
by the Israeli physicist Eliyahu Goldratt, the creator of the Theory of Constraints. By
integrating the scientific method with this systemic use of the intellect we will achieve a
greater level of sechel, which is precisely what the world today needs so desperately.
For the last twenty years, in different capacities and in both Europe and North America,
I have worked to enhance the business performances of a variety of organizations; industry
primarily, but also government, healthcare and education. As a physicist turned
‘organizational scientist’, I have always acted by following a theory, i.e. a set of assumptions
that I went on to validate or disprove. The main body of knowledge I was armed with was the
Theory of Profound Knowledge set forth by Dr. W. Edwards Deming and the Theory of
Constraints developed by Dr. Eliyahu Goldratt. From the mid-1990s with Oded Cohen
I started to integrate these two management theories and this integration evolved into a
coherent systems-based methodology that we called the Decalogue™. We published our
findings in 1999 in our book Deming and Goldratt: the Decalogue (North River Press).

Putting it all together: an algorithm for systemic management


From a philosophical as well as scientific viewpoint, the Decalogue attempts to shift
management from the obsolete, Newtonian worldview in which the results of the whole
organization equal the sum of its individual, separate and hierarchical parts, towards a
systemic and interdependent network. The shift is achieved by combining the allegedly
“reductionist” approach of the Theory of Constraints with a purely systemic view based on
interdependencies and interactions. It does so in practical terms by:
1. building interdependent processes managed through the control of variation
2. subordinating these interdependencies to a strategically chosen element of the
system called constraint
3. designing the organization as a network of interdependent projects with a goal
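Point 2 rests on a simple arithmetic fact from the Theory of Constraints: the throughput of a chain of interdependent processes can never exceed the capacity of its slowest element, the constraint, which is why the rest of the system is subordinated to it. A minimal sketch (process names and capacities are invented for the example):

```python
# Hourly capacities of four interdependent processes (illustrative figures).
capacities = {"cutting": 120, "welding": 85, "painting": 100, "packing": 140}

# System throughput is set by the constraint, not by the sum or average
# of the parts; improving any non-constraint step changes nothing.
constraint = min(capacities, key=capacities.get)
throughput = capacities[constraint]

print(constraint, throughput)  # welding 85
```

This is why local optima mislead: raising "cutting" from 120 to 200 leaves the system producing 85 per hour, while any improvement at "welding" improves the whole.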
Over the years, I had ample proof of the validity of Deming’s Theory of Profound
Knowledge and Goldratt’s Theory of Constraints, and the results that were systematically
achieved confirmed the solidity of the Decalogue. One of the cornerstones upon which both
theories, and the Decalogue that brings them together, are founded is the desire and
propensity that people have to learn. What I systematically ascertained is that this learning,
almost invariably, does not translate into coherent applications. The “constraint” seems to be
people’s ability to make good use of what has been learnt.
Naturally, I do not overlook the intrinsic difficulty of mastering the systems thinking
required by the Decalogue; breaking free from the prison of analytical thinking that is taught in
schools and universities is an enormous challenge. Moreover, business schools continue to
churn out students who are taught that managing an organization means managing its parts
individually and, to add insult to injury, the prevailing measurement
and accounting systems are based on local optima. I have plenty of experimental data
(and emotional exit wounds) to prove that people are ill at ease when asked to change
behaviours as a result of new learning. However, I am optimistic: sechel, the Hebrew word to
describe the purely human ability to modify behaviours as a result of learning, is an ability we
can systematically pursue and develop.
The ten steps presented in Deming and Goldratt: the Decalogue are the founding
elements and the algorithm to sustain the transformational effort required to create and
manage an enterprise as a system, as opposed to the hierarchical/functional model.
Intelligent Management covers the global cognitive and philosophical landscape out of which
the Decalogue emerges and exercises the three faculties of the intellect that have to be
addressed: intuition, understanding and knowledge, i.e. sechel. Intelligent Management
essentially ‘rethinks’ the main debaser of these three faculties in industry, i.e. the
organizational design. Increased sechel and a suitable organizational design facilitate the
adoption of the scientific method embodied in management by the Plan Do Study Act (PDSA)
cycle and its underlying statistical content. Likewise, the adoption of the scientific method
facilitates the creation of a suitable organizational design and an increased sechel: all three
strands are interdependent and continuous, like the single surface of a Möbius strip. This is Intelligent
Management and the only limits to what it can achieve are the constraints we impose upon
ourselves.

The shape of this book


We have shaped the material of this book into the form that is most natural for the
pattern of knowledge and development we are describing and which we refer to as sechel.
Therefore, Part One deals with Intuition, Part Two deals with the Development/Analysis of
this Intuition, Part Three describes the Application/Execution, and Part Four spirals back to
the new Intuition that allows the cycle to begin again. These parts correspond with the phases
in the Theory of Constraints referred to as What to Change, What to Change to, and How to
make the Change happen, for which there is a precise set of thinking process tools.
Parts One and Two deal with the line of reasoning I would like the reader to become
acquainted with, i.e. the enterprise as a network of interdependent projects designed around
a constraint; Part Three provides examples and elucidations of this reasoning, illustrating the
‘know-how’, and Part Four deals with the new intuition of what is to come: network theory as
applied to organizations.

The chapters in Part Three deal with the basic elements that this newly acquired sechel should
enable industry to pursue. We focus on precise applications of Intelligent Management to:
• The fallacy of the prevailing measurement system
• Statistical thinking and synchronization
• Organizational design (project-based) and the role of the Information System
• Marketing and sales (creating systematic breakthroughs)
• The ‘choked tube’ enterprise and network theory

These are the fundamentals that not only create the foundation for management in the
twenty-first century, they also point to a model for a new kind of leadership. Needless to say,
the examples are all interdependent, and no attempt is made to suggest that readers could
simply replicate what they read here in their own organization.
This is not a self-help book; indeed, I doubt that such books truly exist. This book is
dense and, at times, challenging. It is not an easy read and parts will require actual study. Its
aim is to encourage leaders and managers, especially in industry, to develop a better sechel.
This ability is mandatory if we want to cope with the challenges posed by the increased
complexity of our world and, I believe, even more importantly, if we want to live a more
meaningful life.

PART ONE

Intuition
(birth of an idea)

In Part One we examine the conscious and connected organization from the perspective of
Intuition (birth of an idea). The intuition here is that an inherent conflict between hierarchy and
system underlies any organization, and that it can be resolved through the adoption of a
systemic model, using the Decalogue as the algorithm to achieve that transformation.
1.

FREEDOM AND RESPONSIBILITY: DESIGNING THE NEW


SYSTEMIC ORGANIZATION

Shaping our ideas within the organization


There is undoubtedly within mankind a profound drive to achieve fulfilment and, at the same
time, a profound drive to serve a higher purpose, whatever form that may take. Some strive
for personal fulfilment within the workplace. For many, sadly, the workplace is in no way seen
as connected with any form of higher purpose; they relegate that particular drive to other
parts of their life.
This split between work and deeper meaning is exacerbated when we are forced to
function within an organizational structure that inherently debases what we do by creating
artificial barriers and ceilings. This is the reality of the traditional hierarchical organization
divided up into functions. When the structure interrupts the natural flow of the goals we are
trying to achieve, when nonsensical policies and agendas force us to do things we fully
perceive to be without meaning, or worse, prevent us from improving our performance, we
naturally look elsewhere to express our potential or vent our frustration.
It does not have to be like that and we do not have to be like that. Science, civilization
and individuals have evolved beyond the strictures of hierarchy/functions as we understand
them today. Contemporary science has revealed the inadequacy of that model. Today we
have the science, technology and knowhow to shape organizations into what they intrinsically
are - a system. We know that every living system is a network, and we can work within these
networks to maximize quality, flow and involvement.
There is a price to be paid, however, for growing our intelligence and our freedom. When
we remove the barriers, the ceilings, the ‘stupid’ boss who forces us to underachieve, when
we take away all this paraphernalia, we are left with ourselves. We are left with the
responsibility of no excuses, and the stark and terrifying prospect of truly exploring the
boundaries of what we can achieve. In this chapter we outline an evolved form of
organization. This organization is neither for the faint-hearted nor for bullies, and it is one that
many will continue to reject and undermine.

Changing the shape and hitting walls


My quest for a unified management approach that would bring together all the
fundamental elements of system management originated from studying and applying the
theories of W. Edwards Deming, founding father of the Quality Movement, and Eliyahu Goldratt,
creator of the Theory of Constraints. These two giants in the development of management
theory have taken organizations beyond the realm of military-style hierarchies. I believe nothing
in the last sixty years of management studies comes even close to the depth, insightfulness,
rigor and vibrancy of their work. And yet, a truly successful, groundbreaking, internationally
recognized, full-blown application of their teachings has yet to come.
For nearly twenty years, I have been working with organizations to help them understand
Quality and Synchronization operationally. After a four-year collaboration with Oded Cohen,
in 1999 we published Deming and Goldratt: the Decalogue. The Decalogue is a ten-step
algorithm that contains the elements of understanding required for a systemic approach to
management. We will examine some practical applications in Part Three of this book.
Since our book came out, through successes and failures, I have considerably
broadened our initial understanding. I have been personally involved in many dozens of
implementations of the Decalogue as an advisor, consultant, mentor, top manager and board
member. I was able to gain increasing confidence in the scientific validity of the Decalogue
thanks to the results achieved, and also realize which further elements of knowledge were
needed to make it even more operational. The results throughout a wide range of
organizations were solid and consistent:
• reduced lead times
• increased production capacity that was previously unavailable
• reduced work-in-process and finished product inventory
• reduced delays in delivery
• improved quality resulting in reduced waste and rejections
• enhanced project management reliability
• increased access to new market segments
• last, and most certainly not least, increased cash (throughput) as a result of
increased sales.
However, in spite of the undoubted successes, the results were not always on the scale
it was fair to expect. Some fundamental issue was still unresolved and at a certain point we
were hitting a wall.
The wall against which all these heartfelt and knowledge-guided implementations
crashed has invariably been the organizational structure. Something in the shape of the
organizations where implementations of the Decalogue were carried out created an inevitable
barrier to full success. The question then became, how do we create a structure that bears
the weight and flow of a truly systemic organization? Why is it that people, in spite of their
genuine desire to create this kind of organization, get stuck?
I believe this issue has two facets; one concerns the practical mechanism with which
such a structure can be created, the other runs much deeper and touches the very essence
of the relationship between who we are and what we do. The issue of organizational design
and the way people carry out their responsibilities within that design becomes then critical for
the successful creation of a systemic enterprise.


Climbing for success


Let’s look first at the shape of the organization, the design. What kind of organization are we
talking about when we say a ‘systemic organization’ and we use the Decalogue as an algorithm to
create it? It is not a traditional hierarchical/functional structure where the sum of individual efforts
and departments is considered equal to the performance of the company. This form is inadequate
as it does not contain the whole picture, including suppliers and customers, nor does it allow for the
free flow of communication along the processes that are inevitably cross functional. Indeed,
departments may even compete with each other within the same organization.
Due to the artificial sealing/ceiling off of departments and levels, the internal policies of
this kind of traditional structure will be based on local optima at the expense of global
performance. We will try to do what is best for ourselves or for our department because that
is how we get measured and rewarded. In this kind of organization we are not able to see and
measure the damage we can be doing to other ‘departments’ and the results of the
organization as a whole by serving local optima.
Moreover, the traditional hierarchical/functional organization is perfect if we want to
concern ourselves with power games and personal success as opposed to achieving the
goal of the organization. Its very structure invites us to ‘step on the heads’ of others to ‘get
ahead’. Having a career path becomes synonymous with ‘climbing the corporate ladder’.
Depending on who is ‘in charge’, talented and able people may remain hidden if they are
not functional to the agenda of their boss. This is clearly a tragic waste of human ability and
resources. Do we therefore need to throw the hierarchy out? The answer is no, but we need
to redeploy it.

All that matters: variation and constraint, i.e. predictability and synchronization
Compared with the traditional hierarchy, the systemic organization is alarmingly simple.
This simplicity derives from the fact that instead of imposing a conceptual model, such as a
function, in a systemic organization we reveal the way the organization intrinsically is.
In order to create an organization that combines the two most fundamental elements of a
successful system as explained by the theories of Deming and Goldratt, we must:
1) understand the system we are operating and its intrinsic variation
2) provide a synchronization and protection mechanism that enables its effective
management
Let’s look at what that means. Deming’s major contribution was to insist on the
understanding and management of variation (see Chapter 12). Every human process, from
waking up in the morning to sending a man to the moon, is affected by variation; a process
can never be repeated in an identical way. Incorrectly managed variation in manufacturing,
for example, leads to scrap, waste and money lost.
It is impossible to eliminate all variation because entropy exists and is intrinsic to any
process. However, through statistical methods it is possible to understand variation, measure it,
manage it and take actions to reduce it. This requires a mindset of continuous improvement as
opposed to monitoring. In spite of the disastrous and costly effects of ignoring this reality,
surprisingly few managers are conversant with Statistical Process Control.
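For readers who want to see the mechanics, here is a minimal sketch of one such statistical method: an individuals (XmR) process behaviour chart, which separates routine (common-cause) variation from signals of special causes. The Python code and the lead-time data are purely illustrative, not taken from any real implementation.

```python
# Minimal individuals (XmR) process behaviour chart: computes the natural
# process limits and flags points outside them as signals of special causes.
def process_limits(data):
    mean = sum(data) / len(data)
    # average moving range between consecutive points
    moving_ranges = [abs(a - b) for a, b in zip(data, data[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant that converts the average
    # moving range into three-sigma limits for individual values
    return mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

def special_causes(data):
    lcl, ucl = process_limits(data)
    return [(i, x) for i, x in enumerate(data) if x < lcl or x > ucl]

# Invented order lead times in days; the 7.5 is an outlier worth investigating.
lead_times = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 7.5, 4.0, 4.2]
print(special_causes(lead_times))  # → [(7, 7.5)]
```

A point inside the limits is just the voice of the process; reacting to it as if it were a signal (what Deming called tampering) only increases variation.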

5
Sechel: logic, language and tools to manage any organization as a network

Once we have achieved statistical predictability in our processes, we need to
synchronize them and protect them from disruption. This can be done most effectively by
identifying the constraint of the system, i.e. the element in the system that determines the
pace at which the system generates units of the goal. Goldratt’s fundamental insight was to
understand that we can manage a system by focusing on the constraint, i.e. subordinating the
other processes of the system to it to ensure it works to the maximum. We protect the
constraint from the impact of variation affecting the other processes by placing a buffer before
it. The entire system is scheduled around the constraint using a very precise finite capacity
based algorithm.
All that matters for the success of the organization and those who work within it is speed
and reliability. Our goal, therefore, is to create and manage a systemic organization based on
process predictability and high synchronization of these processes. The only way to achieve
this is to have an organizational structure that is built for and consistent with that very
purpose. It is a structure where:
• interdependencies are clearly laid out through detailed mapping of the processes
within the organization
• variation is understood and managed through relentless application of statistical
methods
• a physical constraint has been identified
• a subordination process (to the constraint) is created
• a buffer is placed in front of the constraint
The constraint dictates the performance of the entire organization, therefore a minute
lost by the constraint is a minute lost by the whole system. The purpose of the buffer in front
of the constraint is to absorb the cumulative variation generated by the system and to prevent
this variation from generating disruption to the constraint.
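A deterministic toy run, with invented numbers, can make the role of the buffer tangible: the middle step below is the constraint, processing at most one unit per hour, and a starved hour at the constraint is an hour lost by the whole system.

```python
# Variable hourly output of the step feeding the constraint (made-up numbers).
arrivals = [2, 0, 1, 0, 2, 0, 1, 2, 0, 1]

def shipped(arrivals, initial_buffer):
    buf, out = initial_buffer, 0
    for units in arrivals:
        buf += units          # upstream delivers into the buffer
        if buf >= 1:          # the constraint works whenever the buffer holds a unit
            buf -= 1
            out += 1          # downstream steps have spare capacity, so the unit ships
    return out

print(shipped(arrivals, 0))   # → 9  (the constraint starves for one hour)
print(shipped(arrivals, 2))   # → 10 (the buffer absorbs the upstream variation)
```

The average upstream output is identical in both runs; only the buffer in front of the constraint turns erratic arrivals into uninterrupted throughput.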

Conventional structure vs. coherent structure


If we understand all this, how do we go about creating a coherent organizational design?
How can we avoid the trap of conventional thinking and leverage our sechel, connecting
intuition (birth of an idea), understanding (analysis and development) and knowledge
(application/execution)? Is there an organizational structure that accommodates for these
needs and provides a general template for the enactment of a truly systemic organization?
How much are we willing to trade the largely and blatantly unsuitable yet mesmerizing
security of a functional organization for something that can be utterly unconventional,
dangerously challenging, but fully coherent with our needs?
Challenging the traditional concept of the hierarchy is a step for which many people are
unprepared. What is it about the hierarchy that makes it so untouchable even though it
creates so many of the problems that companies experience daily? Clearly, we will not find
the answer just by offering the rational explanation of why it does not work, as Oded Cohen
and I did when we explained it ten years ago. It must be in some doubly-wired synaptic
connection that is formed very early in our life and continuously reinforced by the
environment we live in. We will come back to this question in more detail in the following
chapters.
An organization brings individuals with competencies together. By combining these
competencies in a suitable manner we achieve the goal of the organization. In other words,
individual efforts can lead to a global result if we devise a mechanism that creates orderly
coherence in the combination of these efforts. A hierarchy should facilitate the creation of this
order. Accordingly, it is not the hierarchy that we challenge but the kind of subordination that
a traditional hierarchy calls for.
Traditionally, the way we translate the hierarchy into a company structure resembles the
well-known pyramid below.

Hierarchy pyramid

As we know, the vertical lines that connect the boxes clearly define the span of control
and the accountability of those boxes. Along with the accountability goes the amount of
individual power that those boxes have. A hierarchy is then conventionally translated into a
mechanism of control and reporting. Accordingly, we can easily conclude that the pyramid
cannot achieve the goal of creating the orderly coherence among the individual efforts that
we were seeking due to the separations it imposes.
Is there another way to create that coherence, hence giving the hierarchy its natural role
of creating order? There is, but we do need to make a cognitive effort to grasp it. One step at
a time.
At its most basic level, if we imagine an x-ray of a company, what we see is a set of
recurring and non-recurring activities. For instance, ordinary maintenance,
book closing, production scheduling, shipping, etc. are recurring activities; the introduction of
a new technology, the expansion of premises, the launch of a new product, can be seen, in a
way, as non-recurring activities. If we view a company as a system then we must understand
how to build the interdependencies that make up this set of activities and how these
interdependencies evolve in time.

This is an important point: not only do we need to know how these activities are going to
take place (this can be done by mapping processes with a deployment flowchart) but also the
timing of this evolution, and we need a method to control it. As a deployment flowchart in
Chapter 12 clearly shows, virtually any activity that an organization undertakes is cross-
functional; this realization alone should be enough to dismantle the idea of the validity of a
functional organization. Moreover, these interdependencies must be staffed and managed
within a temporal spread.

Example of Deployment Flowchart

The new organization: coordination not functional reporting


So, to recap, we know that the hierarchical/functional model is inadequate as it creates
artificial barriers and does not allow a true understanding of the organization as a series of
recurring and non-recurring activities. An organizational structure should therefore be
designed to facilitate the orderly management of sets of activities that are continuously
created, coordinated, cross-functional, and that evolve in time. There is a precise name for
this in English: projects.
A project is exactly this: a network of interdependencies created to achieve a precise
goal in a well-defined time frame. A project is a system with a precise duration. A company
viewed as a system is therefore a network of projects, and the orderly creation and timely
completion of these projects should accomplish the stated goal of the network.
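To make the idea concrete, here is a small sketch in Python of a project seen as a network of interdependencies; the activity names and durations are invented. The completion date is set by the longest chain of dependent activities, not by the sum of individual efforts.

```python
from functools import lru_cache

# Toy project: each activity has a duration (days) and its prerequisites.
tasks = {
    "design":  (3, []),
    "procure": (2, ["design"]),
    "build":   (4, ["design"]),
    "test":    (2, ["procure", "build"]),
}

@lru_cache(maxsize=None)
def finish_time(task):
    # An activity finishes after its own duration plus the latest
    # finish among the activities it depends on.
    duration, deps = tasks[task]
    return duration + max((finish_time(d) for d in deps), default=0)

# The project's duration is dictated by its longest dependency chain.
print(max(finish_time(t) for t in tasks))  # → 9 (design → build → test)
```

Note that "procure" could take an extra two days without delaying anything: in a network of projects, only some chains of interdependencies constrain the whole.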
Let’s pause for a minute. What are we saying? One more time, as the Romans used to
recommend: repetita iuvant (repetition helps).


1) A functional structure is not suitable to support the systemic approach to managing
organizations, and organizations by the nature of their work are cooperative and
cross-functional. This is because none of the activities of any company can be
performed within the narrow boundaries of a single function;
2) Any plausible template for an organizational structure that can foster cooperative
work must also take into account the evolution in time of the interdependencies
needed to accomplish any activity;
3) In essence, the management of any organization becomes the management of a
network of recurring, orderly and evolving-in-time activities. We call them projects.
The appropriate hierarchy is exercised through ensuring orderly coordination, not
functional reporting;
4) The backbone of any organizational effort becomes then the ability to manage the
network of projects any organization is made up of;
5) We can safely say that the springboard to overcome the seemingly untouchable
functional structure is the idea of a company seen as a network (with a stated
goal) of projects.

Help, where am I?
There is a real cognitive ordeal to overcome before we can embrace the idea of a
project-based organizational structure: we feel we lose all our familiar reference points. We
are accustomed to having “a career” in a functional area with a “boss” that assesses our
functional performances and a “bonus” paid to reward a local optimum. This is so true that
even if we understand that these things do not make sense we still struggle to give them up.
Yes, we may like the idea of a “systemic organization”, it makes sense to us but…hey, don’t
take away my local certainties: I need a boss to report to, I want to be measured locally (how
else?) and I do want my bonus for doing my job well and loyally (to the boss). What the heck!

Let’s see how we can accommodate, at least rationally, for these seemingly indispensable
features of our professional life. A systemic organizational structure that is based on process
predictability and a high level of synchronization must safely rest on:
a) A clearly, indeed “super-clearly” laid out network of conversations (what everyone
needs to say to everyone else to make processes work: input, output, how to
measure it and how to improve it; indeed, how the process should work). We can call
it “The Playbook”;
b) A suitable Information System structure to support these conversations.
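As an illustration only, one entry of such a Playbook could be recorded as plain data like this; every field name below is our assumption of what a conversation would need to capture, not a prescription from the method itself.

```python
from dataclasses import dataclass

# One illustrative entry of the "Playbook": what a process receives,
# what it must hand over, and how the hand-off is measured and improved.
@dataclass
class Conversation:
    process: str
    supplier: str       # who provides the input
    customer: str       # who receives the output
    input_spec: str     # what the process needs to receive
    output_spec: str    # what it must hand over
    measurement: str    # how the predictability of the hand-off is tracked

playbook = [
    Conversation("order entry", "sales", "scheduling",
                 "signed customer order", "scheduled job",
                 "orders entered per day, entry errors per week"),
]
print(playbook[0].customer)  # → scheduling
```

The point is that each conversation names a supplier and a customer inside the network, not a boss and a subordinate inside a function.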
These two issues are neither conceptually nor technologically difficult to address. The
network of conversations requires clear ideas and sufficient knowhow on how to operate the
company’s processes and how to link them together. It can be built in weeks, not months, for
any midsized company and a few more weeks are needed to expose anyone in the company
to the outcome of this work. The suitable Information System structure is even simpler: the
different pieces that would make up this IS are already available as open source and all it
takes is clarity on what an IS should be for (see Chapter 14). Unfortunately, concepts and
technologies are the offspring of paradigms and these paradigms are originated by forces;
often we are not trained to understand and control these forces. Let’s digress for a second.

Career paths and the systemic organization


Let’s ask ourselves a basic question: why do we choose/accept a certain job rather than
another one? Why do we take a position in accounting, production, R&D, marketing, etc. in
Company X rather than in Company Y? In the last twenty years of work on both sides of the
pond the overwhelming majority of the people I met would answer: “it would offer me the best
chances for a career”.
I cannot prove it, but I have strong empirical evidence that jobs are offered (and
accepted) by leveraging people’s legitimate desire to have a successful career. Even if a
functional position in no time will take away most of my professional pride because it forces
me to act “unintelligently” and work ludicrous hours, often for no reason other than my boss’s
agenda, even if it provides me with little learning and, invariably, sooner or later will stifle my
possibility to advance, the need to know that “I have a career pattern” is overriding. If this
force is so brutally prevailing, how can we accommodate for it in a systemic organization?
How can we design a career pattern in a systemic organization built as a network of projects?
Let’s take another step.
Let me repeat it: the structure of a systemic organization must be based on predictability
and synchronization. Predictability is ensured by technical/subject matter prowess and
synchronization is ensured by the ability to combine these competencies effectively.
A person can have a strong technical competence and love the idea of exercising that
competence all day long; another may have a lesser inclination for any particular subject
matter but be versed in the art and science of managing complex projects. In a systemic
organization both these people have a chance to progress and this progression doesn’t need
to be artificially hindered by functional boundaries.
In a systemic organization, anyone, at any time, is part of a project. They lend their
competence to a project that is designed, along with all the other projects the company is
made up of, to maximize the results of the whole company towards its goal. Some people will
develop competencies for managing increasingly complex projects, some others will continue
to deepen their competencies and enrich the content base of the company. Most importantly,
all of them will naturally be placed on a continuous learning pattern. Let’s talk about learning
(and the need for a boss).

Learning and the interconnected future


One of the most profound and, I believe, misunderstood elements of Deming’s doctrine
is the notion of “Joy in learning”. In his seminal 4-day seminars Deming never missed the
opportunity to remind the audience, often in a deceptively semi-serious tone, that joyfulness
and learning go hand in hand. Learning happens when positive emotions are created and
curiosity is triggered in the student. Learning happens when something, hopefully a good
teacher, inspires us to want to know more. Joyfulness is the state of mind that is conducive to
openness and availability to receive; it is a state of grace that makes us see possibilities, that
lifts our spirit and originates positive feelings. As one of the greatest intellects of our times,
Rav Menachem Mendel Schneerson, has taught us, joy is the force that breaks all the
boundaries.
Far too often in our world, instead, learning is incentivized by the promise of a reward other
than the learning itself and joy is confined to some material achievement. In the western world, by
and large, we have replaced the joy of learning with the acknowledgement that comes with it.
Learning, then, becomes merely functional to achieving some grade or certificate, and completely
disconnected from what that learning should be for: to open our mind to the endless possibilities
that exist and encourage more learning. The current education system contributes very heavily to
the narrowing of our horizons by providing courses on “functional” competencies, the ones that are
going to be rewarded in the workplace, and trains us through grades and competition to see joy as
unnecessary or even counterproductive. The current education system triggers the cycle of despair
and the debasement of our innate desire to learn, and the prevailing management style reinforces
that cycle.
The complete disassociation that managers have from learning is exemplified by the total
failure of the majority of the training efforts that take place in organizations. Simply put: real
learning, not the kind that comes from reading quick fix books, but the kind that changes
behaviours, is not considered “strategic” for career advancement.
What is the real difficulty that we face in wanting to create a true learning organization
that dismantles the functional structure and replaces it with the far more suitable network of
projects? It is not connected with lack of knowledge of how to do it, nor with the lack of
technologies to support it. The real issue is the mental barrier, or cognitive constraint that
prevents individuals and organizations from working together as a system for a common goal.
We cannot afford to be pessimistic about the possibility to elevate that cognitive
constraint. We cannot afford to surrender to the lack of intelligence that permeates the way
politics and businesses are run today. We cannot afford to see as inevitable that insurance
companies and Big Pharma control our health and we cannot afford to continue to believe
that one day we could be part of that 1% of the population that owns 80% of the wealth. We
cannot afford to trust the claims made by the financial pundits and Wall Street wizards of how
they boost our economy. We cannot afford to be persuaded that only our individual efforts
and personal drive will, eventually, be the cause of our success as a country. Those days are
gone forever.
We live in a completely interconnected, interdependent, increasingly complex world
where the levers for success have definitively shifted from competition to cooperation, from
win-lose to win-win, from me against you to you and me against the problem. We don’t just
need new knowledge; we need a new form of organization and a new covenant with our
mind. In order to live and prosper in this world of unprecedented interconnection we have to
learn at a much faster pace and we can only do it if we improve our ability to leverage our
intellect. This book was written to show that we have the necessity and the way to do
precisely that.

2.

PREDICTABILITY VS. SPEED:


BACK TO THE DEMING AND GOLDRATT CONFLICT

There are situations of stasis in our lives and in our work that can seem insurmountable. Often
we feel forced into polarized positions that keep us far from a solution. At best, we may adopt a
compromise, but a compromise is not a solution, it merely leaves both sides dissatisfied. What is
a real solution? It is a breakthrough that moves us forward while protecting the true needs of both
sides. In this way the ‘conflicting’ positions simply cease to exist.
The intuition that an organization is a network of projects and therefore must be
designed and managed accordingly evolved over fifteen years of work. The validity of this
solution can be better understood if we take a step back and trace that evolution. An
important and fundamental conflict first had to be solved. In this chapter and Chapter 3
I describe the underlying conflict that engendered that solution, and the process it took to
overcome and solve it.

Discovering Deming
In 1993 I joined a network of academics and professionals based in the UK called the
British Deming Association (BDA). It had been created a few years earlier, with Dr. Deming
himself as its honorary Chair, and its goal was to promote and disseminate the teachings of
Deming through seminars, real life testimonials, conferences, and publications.
Dr. Deming was a physicist and statistician whose work founded the Quality movement. He
is perhaps best known for his work in Japan. His advice on improving design, production, quality
and sales, founded on a thorough understanding and application of Statistical Process Control,
helped Japan rise from the ashes after World War II to become a world leader in manufacturing.
A basic tenet of his philosophy was to see and manage an organization as a system, in other
words a network of interdependent processes that work together to achieve a goal.
I had been studying Deming extensively over the previous three years as part of my job at
the Milan management school of the Camera di Commercio, part of the Department of Trade
and Industry. My brief was to initiate small businesses in the basic principles of Quality
Management. As a physicist now working with organizations, I found Deming’s philosophy
exhilarating. By 1993 I had transformed my far too quiet life as a government agency
employee into a vibrant crusade to promote Deming’s philosophy, and I was gaining some
traction. It was as if I had known Deming forever; his words, full of power, wisdom and unrivalled
scientific rigor, resonated with me more deeply than any other form of organizational study.
Meeting the British Deming Association further boosted my enthusiasm and enriched
my learning manifold through continuous contact with illustrious Deming scholars. Dr. Deming’s
zest for knowledge has been one of the strongest and most profound sources of inspiration in my
whole life and the Theory of Profound Knowledge the blueprint for my professional development.
By 1995, may I say with some pride, I was successfully advising dozens of companies in
northern Italy and training hundreds of managers in Deming’s philosophy. Dr. Deming never
slowed down his efforts to promote better management through better understanding of
variation and how this variation permeates every aspect of our lives. This is what I was
passing on to businesses and the message was loud and clear: reduce variation, promote
statistical predictability and improve Quality.

Learning about The Theory of Constraints (TOC)


In those years of unbridled enthusiasm and unshakable commitment to Deming, a
colleague introduced me to a book called What Is This Thing Called TOC, by Dr. Goldratt.
I found this little black book about the Theory of Constraints electrifying. Like Deming, whom
he acknowledges, Goldratt sees the organization as a system. In a few pages Goldratt
sketches conceptual patterns from Socrates to modern science. He outlines the steps for
identifying what is blocking an organization from growing, how to identify what to change and
how to make the change happen in a pattern of continuous improvement. He introduces the
conflict cloud, a powerful logical tool, as a link between the scientific method and what he
calls the “intuitive” method. That short and largely ignored book prompted me to read every
business novel with which Dr. Goldratt has chosen to communicate his scientific findings.
I then became conversant with, and frankly enthused by, this whole new TOC terminology:
constraint, buffer, Drum Buffer Rope (DBR), Throughput, the lot. Goldratt’s slant on
management was unique and by all means appealing.
Were someone to conduct a comparative literature-type study of Deming and Goldratt they
would find no intersection between the vocabulary, syntax, tone, or the format of the books of
these two giants. While the business and scientific communities that gathered around these two
gentlemen happily ignored each other, to me, it was crystal clear that Deming and Goldratt were
aiming at the same goal. They were both using the same scientific rigor and pursuing
“intelligence” as a main driver for the transformation of the management style in the western
world. Moreover, I realized that both these approaches complemented each other perfectly. Each
one provided a missing link for the other. The Theory of Constraints provides powerful focus for a
continuous improvement methodology by focusing attention on the constraint, but it lacks an
explicit involvement of statistical methods in managing organizations. Deming’s philosophy is
fundamental, but it can be very difficult to implement as it lacks the practical thinking process
tools from the Theory of Constraints. However, it was not a perfect fit as there also seemed to be
certain incompatibilities between the two.
In 1995, after two years of fruitless attempts to engage the Deming and Goldratt
communities with each other, I met Oded Cohen, second only to Goldratt in knowledge of and
experience with the Theory of Constraints. Relying on his almost unlimited patience and
wisdom, I very inelegantly vomited on him my thoughts on the impasse I was experiencing
with the two camps. True to his nature as teacher and mentor, Oded asked me to solve the
‘conflict’ by writing the conflict cloud. The conflict cloud is the Thinking Process tool from the
Theory of Constraints for resolving conflicts. This tool is fundamental for making a logical,
cause-effect analysis of any situation in which we are stuck. When used in concert with the
other Thinking Process tools, the conflict cloud can help us generate powerful solutions that
capitalize on our intuition, develop our understanding through analysis, and it leads us to
design a set of coherent actions that we can execute. The conflict cloud is a tool that
everyone needs to learn. What follows is the journey out of the Deming vs. Goldratt ‘cloud’.

Deming “vs.” Goldratt: speed “vs.” predictability


How would it be possible to unite these two theories when there were apparently
areas of incompatibility? The conflict was fairly straightforward: Manage according to the
Theory of Profound Knowledge (Deming) vs. Manage according to the Theory of
Constraints (Goldratt).
In the conflict cloud diagram we write the two opposing positions in the two boxes
labelled D and D’ to the right of the diagram as shown below:

Deming vs. Goldratt conflict cloud - 1

Whereas the Theory of Profound Knowledge is clearly based on statistical predictability,
the Theory of Constraints is obviously based on speed. These are the needs that lead to
adopting the two seemingly conflicting positions.
In the conflict cloud diagram we write the need that spurs the position D in box B and the
need that spurs position D’ in box C as below:


Deming vs. Goldratt conflict cloud - 2

What is common to both these needs? The common goal, i.e. what both Deming and
Goldratt pursue is a sustainable process of continuous improvement of performances.
We write the goal that is common to both need B and need C in the box labelled A as below:

Deming vs. Goldratt conflict cloud - 3


The reality in which this conflict was rooted appeared in all its clarity when I surfaced all
the assumptions. This is the process we use with the conflict cloud in order to ‘evaporate’ any
conflict. We verbalize all the assumptions we make that lead us to make the statements in
each of the five main boxes, A, B, C, D and D’. Whereas basic and robust assumptions about
reality allow us to live our lives (we ‘assume’ that when we open our front door we can step
out onto a hard surface) weaker assumptions are mental models or limiting beliefs that keep
us blocked in certain situations. We can derive these assumptions by using the word
‘because’:
IF A (the goal) THEN B (the need behind position D) BECAUSE (assumptions A-B);
IF B THEN D BECAUSE (assumptions B-D);
IF A (the goal) THEN C (the need behind position D') BECAUSE (assumptions A-C);
IF C THEN D' BECAUSE (assumptions C-D').

Here are the assumptions I surfaced for the Deming vs. Goldratt conflict:
A-B: the only way to sustain any process is to ensure its predictability
B-D: Deming’s philosophy and management approach is designed to ensure (and it is
based upon) process stability
A-C: continuous improvement of performances cannot be separate from the pace at
which these performances are achieved
C-D’: The management of finite capacity entailed in the Theory of Constraints maximizes
the pace at which units of the goal can be achieved.

Deming vs. Goldratt conflict cloud - 4
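For the logically minded, the structure of the cloud can be captured in a few lines of illustrative Python; the wording of the boxes and arrows below is condensed from the analysis above, and the representation itself is only a sketch, not part of the Thinking Processes.

```python
# The five boxes of the conflict cloud, written as plain data.
cloud = {
    "A":  "a sustainable process of continuous improvement of performances",
    "B":  "ensure process predictability",
    "C":  "maximize speed",
    "D":  "manage according to the Theory of Profound Knowledge",
    "D'": "manage according to the Theory of Constraints",
}

# Surfacing the assumptions means attaching a "because" to every arrow.
assumptions = {
    ("A", "B"):  "the only way to sustain any process is to ensure its predictability",
    ("B", "D"):  "Deming's approach is designed to ensure process stability",
    ("A", "C"):  "improvement cannot be separate from the pace at which it is achieved",
    ("C", "D'"): "finite capacity management maximizes the pace of the goal",
}

def verbalize(tail, head):
    # Each arrow of the cloud reads: IF <tail> THEN <head> BECAUSE <assumption>.
    return f"IF {cloud[tail]} THEN {cloud[head]} BECAUSE {assumptions[(tail, head)]}"

for edge in assumptions:
    print(verbalize(*edge))
```

Writing the arrows out this way makes each assumption an explicit, challengeable statement; the conflict evaporates when one of them is shown to be invalid.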


The picture was very clear and I looked at it for quite a while before I was able to take
another step. Clearly, I was massively leveraging my intuition because there was no literature
supporting my claim that Deming and Goldratt could be brought together in a cohesive and
rigorous manner. Actually, I had been given many warnings not to try this because, unlike
myself, quite a few respectable scholars firmly believe that the two men represent different
and irreconcilable paradigms.
Meanwhile, Oded kept pushing me to think harder. He had no vested interest in this
pursuit but he did take sheer pleasure in any endeavour that would promote people coming
together rather than being divided. Moreover, he was interested in developing Theory of
Constraints (TOC) professionals in Italy and thought I was a worthwhile student, willing and
ready to embark on a business journey.
In 1996 I left my safe job in the Department of Trade and Industry to found the company
MST, Methods for Systems Thinking, with the goal of promoting a Deming-Goldratt approach
to management. The business plan sounded interesting to the CEO of the newly created
business incubator in Milan and I was granted a little office in their beautiful premises. I had
an office, a small but talented team, some customers but not yet the complete conceptual
solution I was seeking.
Oded and I used to meet every two-three months in different parts of the world and for
different occasions. We would always find time to sit down and further our discussions and
examine the conflict more deeply. The solution came one day, almost seamlessly.
How could I retain (protect) the need for speed and the need for predictability without
ending up in a conflict? Please, take a look at the historical ‘Production Viewed as a System’
picture that Deming drew in 1950.

Production Viewed as a System - 1

This is a system, and its three main components (the customers, the feedback
mechanism and the interdependencies) are clearly laid out. But it does not have to be limited
to Production alone. Instead of ‘Production Viewed as a System’, we can just as easily write
‘Enterprise Viewed as a System’ if we change the names of all those arrows (to, for example,
production, sales, marketing). In other words, using the same logic we can use this model to
portray any system we want to investigate. Indeed, we need to elaborate further on the
individual arrows, but the concept is pretty clear. Each of those arrows, and the system as a
whole, must be understood and managed in its statistical evolution.

Production Viewed as a System - 2

Managing variation AND constraint(s)


Now we have our enterprise clearly depicted as a system. We can manage and improve
our enterprise by leveraging the insight that comes from a statistical understanding of it. We
can manage the variation in the system in the best possible way by increasing reliability,
streamlining procedures and improving the quality of every process; we can minimize the
impact of market cycles and we can optimize the speed of new product development.
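What "managing variation" looks like in practice can be sketched with a process behavior chart. The code below is a hypothetical illustration (the data and variable names are invented, not from the book): it computes the limits of an individuals (XmR) chart and flags the points that signal a special cause worth investigating.

```python
# Hypothetical illustration (not from the book): limits for an
# individuals (XmR) process behavior chart, the basic tool for
# deciding whether a point signals a special cause of variation.

def xmr_limits(data):
    """Return (mean, lower limit, upper limit) for an individuals chart."""
    mean = sum(data) / len(data)
    # Average moving range between consecutive points
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 = 3 / d2 with d2 = 1.128, the standard XmR chart constant
    spread = 2.66 * avg_mr
    return mean, mean - spread, mean + spread

# Invented data: weekly order lead times in days
lead_times = [12, 14, 11, 13, 15, 12, 14, 13, 30, 12]
mean, lcl, ucl = xmr_limits(lead_times)
special_causes = [x for x in lead_times if not lcl <= x <= ucl]
print(f"mean={mean:.1f}  limits=({lcl:.1f}, {ucl:.1f})  signals={special_causes}")
```

Points inside the limits are the voice of the process (common causes), and chasing them one by one is tampering; only the flagged points deserve a search for a special cause.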

So where does the idea of constraint, Goldratt’s main tenet, come into the picture? Goldratt’s
majestic contribution to management is the understanding that no matter what we do, the
speed at which we proceed will be dictated by one or very few element(s) of the system. The
Theory of Constraints allows us to leverage this reality and use it to increase performance by:
• identifying the constraint
• exploiting the constraint, i.e. making it work at full speed
• subordinating all the processes of the system to the constraint, i.e. building the system
around the constraint
• placing a buffer of material (or time) in front of the constraint so it can work constantly
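The focusing logic above can be sketched in a few lines. This is a toy model (the step names and capacities are hypothetical, not from the book) showing why the constraint dictates the pace of the whole system:

```python
# Toy illustration (not from the book): in a chain of dependent steps,
# system throughput is dictated by the slowest element, the constraint.
# Step names and capacities are hypothetical.

capacities = {  # units per hour each resource could process in isolation
    "cutting": 90,
    "welding": 40,   # the slowest step
    "painting": 70,
    "assembly": 85,
}

constraint = min(capacities, key=capacities.get)   # step 1: identify
system_throughput = capacities[constraint]          # the system can do no more

print(f"Constraint: {constraint}")
print(f"System throughput: {system_throughput} units/hour")
# An hour lost on the constraint is an hour lost for the whole system:
print(f"Cost of 1 idle hour on '{constraint}': {system_throughput} units")
```

However much capacity the other steps have, the system can never produce faster than the constraint; that is why exploiting the constraint and subordinating everything else to it pays off system-wide.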
This insight is absent from a purely Deming-based approach. But there was more to it
which I was able to see immediately: if we decide to ignore the idea of constraint, we will
19
Sechel: logic, language and tools to manage any organization as a network

soon get ourselves into a situation where the system we operate, no matter how good we are
at managing variation, will be made up of interacting and constantly shifting constraints.
Our system is, obviously, a finite one. Its components, as well as the whole system, are
finite. This means that we have a choice: we can allow random fluctuations (variation) internal
or external to the system to move the constraint erratically, or we can decide strategically by
which element of the system we want to be limited (constrained).
In other words, constraint(s), just like variation, are integral to any system.
The idea of process stability combines very naturally and elegantly with the concept of
constraint when we place it within the context of the enterprise as a dynamic system. We can
have an idea of this from the picture below:

System choked by a tube - 1

This picture could certainly be improved, but it does capture what I came to realize. In
order to bring together cohesively the idea of constraint with the idea of a statistically
controlled system we have to orchestrate statistically controlled processes and subordinate
them to a well-defined (and very stable) part of our system, the constraint. If these statistically
stable, well orchestrated, processes all have the capacity to subordinate to the chosen
constraint, then all we need in order to protect the whole system is a “buffer” in front of the
constraint; needless to say, the oscillation of the buffer must be controlled statistically.
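The statistical control of the buffer can be pictured with the classic TOC practice of dividing it into three zones. The sketch below is an illustration with assumed numbers, not the book's implementation: penetration into the red zone is the signal to act, and the penetration history can itself be tracked on a control chart.

```python
# Sketch of buffer management (illustrative, not the book's implementation):
# the buffer protecting the constraint is split into three equal zones,
# and how deeply an order has eaten into it tells us whether to act.

def buffer_zone(remaining_hours, buffer_size_hours):
    """Classify how deeply an order has penetrated the constraint buffer."""
    fraction_left = remaining_hours / buffer_size_hours
    if fraction_left > 2 / 3:
        return "green"    # comfortable: do nothing
    if fraction_left > 1 / 3:
        return "yellow"   # watch: plan a recovery
    return "red"          # act: expedite to protect the constraint

BUFFER = 24  # hours of protection in front of the constraint (assumed)
for order, hours_left in [("A-101", 20), ("A-102", 10), ("A-103", 3)]:
    print(order, buffer_zone(hours_left, BUFFER))
```

The design choice is deliberate: the buffer absorbs ordinary fluctuation silently (green), while only statistically meaningful penetrations trigger intervention.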

System choked by a tube - 2


In the years following this realization, I spent a considerable part of my time on both sides
of the Atlantic trying to explain, not always successfully, why this idea of organization would
work, while achieving considerable results with clients. Oded Cohen and Larry Gadd of North
River Press believed in the model and so the next step was to write a book about it. This
book, Deming and Goldratt: the Decalogue, explains the ten steps that provide a map to
guide and sustain organizations in a continuous improvement pattern. It has since sold
thousands of copies, been translated into several languages, is recommended reading for
university courses internationally and is cited in numerous academic publications.
When we find a solution to a conflict by invalidating the assumptions that lead us to
adopt the two conflicting positions, in TOC we call this solution an injection. The idea of a
system that is statistically predictable and designed around a buffered constraint is a powerful
injection and can be very easily explained to anyone with a basic flair for statistics and
physics. However, the real issue became how to manage such a system. Oded’s
fundamental question was: “What is the set of assumptions that we invalidate by bringing
about such an injection?”
He had a point, and a big one. When we published Deming and Goldratt: the Decalogue
we knew that each of the ten steps was scientifically solid and all together the solution was
complete. However, there was still a big question: in what order should these steps be
deployed to be the most effective and powerful? The answer came in the ensuing ten years
of professional practice. As a result of thorough analysis, the assumptions behind the
Deming vs. Goldratt conflict became clear, together with the realization of what it would really
take to invalidate those limiting beliefs.

3.

UNLEASHING POTENTIAL FROM THE TRAP OF HIERARCHY

Intuition, understanding and knowledge


There is an intuition, and then there is a viable and tested model that translates that intuition
into tangible results. The distance between the two is huge. It is like the distance between
Einstein’s field equations and a man walking on the moon. I had precisely that feeling of
distance when I began to develop the basic set of assumptions underpinning the Deming vs.
Goldratt conflict.
I was already helping organizations to apply successfully elements of Deming’s
theory and elements of Goldratt’s theory so I knew I was on to something that was not just
theoretically valid. I was still missing the cohesive whole that I knew intuitively these two
approaches could create and the way to apply it. However, before I could get to the level
of application/execution, I had to complete the development, i.e. the logical analysis I had
begun. What was missing was a complete picture of the conflict between the Deming
approach and the Goldratt approach. The complete picture would only come when it was
clear which were the assumptions or limiting beliefs that led to the two conflicting
positions.

So far, my intuition had told me:


• the Deming and Goldratt approaches, far from being incompatible, truly
complemented and enhanced one another;
• from studying and applying both approaches it was clear that each provided the
other with a missing link;
• the ‘conflict’ between them was only a matter of assumptions: limiting beliefs or
mental models;
• the solution to the conflict was the enterprise viewed as a system with one chosen
constraint.
What I could see evolving was a blueprint for an organizational design. This blueprint
was a completely new way of organizing the processes and functioning of an enterprise as a
system. By combining the power and sustainability of predictability with the leveraging and
focusing power of the constraint, it had the right shape to truly unleash the potential of any
organization.

The only way to verify my intuition was to build a solid understanding of the root problem
my intuition was addressing. I had to work backwards and ask myself: if the enterprise viewed
as a system with one chosen constraint is the answer, what is the question?
Put another way, if this was the new way of designing an organization, what was wrong
with the old way?

Don’t mess with the hierarchy


“Let’s go and say hello to Eli”. I was visiting Oded Cohen in Kfar Saba, a fast growing
town just outside Tel Aviv. Oded and Eli Goldratt live a few hundred yards apart and Oded
thought it would be nice to have a cup of coffee with “Da Man”. True to his nature, Eli started
to push me hard on every front: business, implementations, theory, etc. I pushed back and it
was not an easy evening. When we left, Eli walked us to the door and said: “Remember,
whatever you intend to do or write, never challenge the idea of hierarchy, the world is not
ready.” We hadn’t even mentioned the issue in our conversation and I paid little attention to
that statement at the time.
It was not so surprising that Goldratt’s warning was about leaving the hierarchy alone.
While Deming and Goldratt have repeatedly pointed out the gigantic fallacies of
prevailing organizational structures and insisted that the organization must be seen as a system,
neither of them has ever described how to link these elements to provide a clear-cut picture
of a workable organizational structure. There is no literature by Deming or Goldratt on how
to practically replace the prevailing organizational structures in spite of their obvious
inadequacies.
Both Deming and Goldratt have contributed majestically to the unveiling of the
foundational scientific roots of continuous improvement. Sadly, still today few people think of
Deming outside of Quality Management or Goldratt’s theory much beyond its applications to
manufacturing and logistics. Nobody seems to recognize Deming and Goldratt for what they
truly are: scientists who have taken organizational studies into the realm of science. Deming
and Goldratt are the epitome of the organizational scientist.
Why, then, do applications of their theories inevitably fail to deliver what they
promise? What is it that blocks organizations and individuals from achieving their true
potential? What is the inherent conflict, even when applying Deming or Goldratt, that
keeps them stuck?
A few weeks after meeting with Goldratt, the answer I was searching for hit me.
I realized where the main assumptions underlying any possible conflict between a
Deming vs. Goldratt approach to management were coming from: they were connected
with the way Deming and Goldratt approached all the issues concerning organizational
design, particularly the idea of control. Their ideas were fundamentally different. We can
summarize those differences:

Control for Deming:


The only intelligent form of control happens through the relentless application of
statistical studies and the insight thus gained.


Control for Goldratt:


The management of the constraint allows maximum control over the performance of the
company.
These different approaches to control were the assumptions underlying the conflicting
positions D and D’ in the Deming vs. Goldratt conflict cloud. Accordingly, if, as I truly believed,
the choked system, i.e. the enterprise as a system organized around the constraint, was the
answer, the question I was trying to formulate had to have something to do with the
organizational structure.

Deming vs. Goldratt conflict cloud assumptions D-D’

The inherent conflict in any organization: hierarchy vs. system


What is the prevailing organizational structure and what is wrong with it? I did not have to
go too far to have an answer. In the early 1990s I had been actively involved with the ISO TC
176 International Committee responsible for the development of a family of Quality
Management Standards named ISO 9000. As a result of that work, many small companies
would come to my courses at the Department of Trade and Industry in Milan looking for a way
out of what seemed a preposterous way of organizing and managing a business.
They were right. The ISO Standards had been designed to standardize the way large
industrial conglomerates conducted basic quality assurance activities, not to help
businesses make more money or sustain continuous improvement
efforts. Moreover, as one of the Big Kahunas of the committee told me informally, “I do not
believe this is applicable to any company with less than 500 employees.” There was an
inherent conflict within the ISO Standard that made it almost impossible to fulfil operationally:
the Standard allegedly encourages companies to adopt a PDSA cycle to pursue
improvement, but when it comes to control and accountability it invariably turns to the
hierarchical/functional organization.

The conflict was blatant. On the one hand:


a hierarchy fails to acknowledge three critical aspects of the life of a successful
company: interdependencies, feedback cycle and the customer
For these reasons it is unsuitable for sustaining a continuous improvement effort. On the
other hand:
a hierarchy seems to cater for accountability and provides a sense of control
The conflict between a Deming-based approach vs. a Goldratt-based approach to
management rests then on the idea of control and how this idea is translated into a coherent,
and coherently measured, organizational system.
If we want to find a way ahead, then the assumptions underlying the Deming vs. Goldratt
conflict could be more suitably and operationally translated into a conflict between
conventionally hierarchical visions of the organization vs. “something different, i.e. not
hierarchical”. I was on the right track.
Let’s recap:
The basic assumptions of the Deming vs. Goldratt conflict are rooted in the idea of
control and how this control can be exerted within an organizational structure suitable for
continuous improvement. As the prevailing organizational design is still the
hierarchical/functional one, then a practical way to look at this conflict is to translate it into
“Hierarchy vs. Not Hierarchy”.
I had my answer to the conflict and I had my question to get the answer. It was clear that
the hierarchical model was inadequate and prevented organizations from unleashing their
true potential. Not only that, but by confining individuals within hierarchical roles, it stifled the
possibility for talent and abilities to fully emerge. What I could not know at this stage was how
deeply embedded the paradigm of hierarchy is within people’s psyche. No matter how
uncomfortable the ‘imprisonment’ of the hierarchy might be, for many it would be more
comfortable than the freedom allowed by creating the enterprise as a system. Freedom
carries a heavy burden of responsibility. I will return to this subject in more detail further on.
The traditional model for control is a hierarchical pyramid.

Hierarchy and control


The reasons why a hierarchical pyramid exists are the assumptions between B and D:

Control assumptions

Increasing our capacity to listen to the customer so we can satisfy the needs of the market
leads us to NOT adopt a hierarchical model:

NOT adopt a hierarchical/functional structure


Here are the pieces of the conflict we have looked at put together:

Control cloud with assumptions between B-D and C- D’

The two positions are in conflict because we believe the assumptions in the box on the right:

Control cloud assumptions between D-D’


The new organization: rolling out the solution of the ‘choked’ system
In my quest to unify the work of Deming with Goldratt’s Theory of Constraints, I had
developed a strong intuition for a new idea of organization: Deming’s system constrained in
one point. In order to verify the validity of my intuition I had to develop an understanding of
the fundamental, never verbalized, assumptions that would keep the two models in conflict; in
doing so I was able to connect my findings on Deming and Goldratt to the “inherent conflict”
of any organization.

So, Deming’s system constrained in one point is the injection (solution) that not only unifies
the approaches of Deming and Goldratt, it re-defines the ideas of how we:
• Control the system: through the constraint using buffer management and
relentless application of statistical methods
• Measure the performance of the system: throughput accounting
• Design the system for continuous improvement: the ‘choked’ system
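To make the measurement point concrete: throughput accounting works with system-level measures rather than cost allocation. A worked sketch with hypothetical numbers (the figures are invented for illustration):

```python
# Worked sketch of throughput accounting (numbers are hypothetical).
# Throughput (T) = revenue minus totally variable costs (e.g. raw materials);
# Net Profit = T - Operating Expense; ROI = Net Profit / Investment.

revenue = 100_000           # monthly sales
totally_variable = 40_000   # materials and other truly variable costs
operating_expense = 45_000  # all the money spent turning Investment into T
investment = 150_000        # money tied up in the system

T = revenue - totally_variable
net_profit = T - operating_expense
roi = net_profit / investment

print(f"Throughput: {T}, Net Profit: {net_profit}, ROI: {roi:.1%}")
```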

The ‘choked’ system

The understanding provided by this injection was in need of a massive dose of
knowledge in order to be transformed into a reality. The ten steps forming the Decalogue
were the answer Oded and I came up with. This was the algorithm for creating and managing
a systemic organization that is Deming’s system constrained in one point.
We tested the Decalogue logically for validity using the Thinking Process tool called the
Transition Tree. The validation of this algorithm gave us confidence in the sufficiency of the
solution.
In our book Deming and Goldratt: the Decalogue we fully describe the ten steps and how
they follow the pattern of the PDSA cycle. This cycle firmly embeds the scientific method
within the application of the steps. This was only the beginning of a ten-year journey applying
and verifying the validity of the solution. At the end of Part Two of this book we outline the ten
steps. Through the examples provided in Part Three we offer further elucidation on the
implementation of the steps and how they can contribute to a manifold increase in the
performance and continuous improvement of an organization.


The ten steps of the Decalogue:


1. Establish the goal of the system, the units of measurement and the operational
measurements
2. Understand the System
3. Make the System stable
4. Build the System around the constraint
5. Manage the constraint - Buffer Management
6. Reduce variation in the constraint and the main processes
7. Create a suitable management structure
8. Eliminate the external constraint - sell excess capacity
9. Where possible, bring the constraint back inside the organization and fix it there
10. Create a continuous learning program

What it takes to do the Decalogue: intuition, understanding and knowledge


The ten steps of the Decalogue highlight the knowledge needed to overcome the
Deming vs. Goldratt conflict and execute successfully the idea of a system constrained in one
point. I am proud to say that these steps have stood the test of the last ten years. However,
both the successes and failures I experienced in these years have clearly pointed to areas
that were in need of some meaningful upgrade. The most important lessons have come from
the area of organizational design, how people see themselves within the organization, and
what is their level of personal responsibility. Goldratt was right in being wary about
challenging such a deeply embedded paradigm. But challenge it we must if we are to evolve
beyond our current capabilities.
The Decalogue is effectively the algorithm that enables us to bring about the solution ‘the
enterprise as a system choked in one point’. However, this solution can only be carried out
effectively if management is able to interconnect, as we described in the Introduction, three
faculties of the intellect:

• intuition (birth of an idea)
• understanding (analysis/development)
• knowledge (execution/application)
This means managers have to move, cognitively, through the three phases that in TOC
are described as what to change, what to change to, and how to make the change happen.
Without truly internalizing these three phases the ten steps will not function. You can have the
best intuition, the best analysis, and the best strategy and detailed roll out plan, but if managers
do not fully comprehend the serious implications of not acting, the plan will not happen. I still
have the exit wounds intellectually and emotionally from direct experience of this.


If we have the analysis, and we have the solution and we have a precise action plan,
why do people not act? It can be through their inability to comprehend the validity of the
solution because what they are required to do is different from what they are accustomed to
doing. This translates immediately into paralysis and these managers need further coaching
in the approach. On a more subtle level, if a manager does not act in accordance with the
agreed plan it can be because they are unwilling to subordinate to the project and unwilling to
give up on local optima. This is much worse than paralysis as it can lead to sabotage,
deliberate or otherwise.
We may say that a person has intelligent emotions, or sechel, when they are able to see
and live the interconnections of intuition, understanding and knowledge. How can they
achieve this? Using the Thinking Process Tools fortifies this ability, but even that is not
enough. The mental skills of sechel have to be grounded in a method, and that is the
scientific method embodied in the PDSA (Plan Do Study Act) cycle, which rests on a
thorough understanding of variation. We may think of the theory of variation as the pulsing
heart of the PDSA cycle.
This scientific method enables us to develop sechel by seeing how to connect intuition,
understanding and knowledge in a rigorous and consistent way. With this higher level of
understanding we can create the interdependencies of our system correctly. When we
achieve this we are able to develop a more evolved and intelligent organization. What does
that look like in practice? Creating this new kind of organization means building a network of
projects with a goal.

4.

OPERATING A SYSTEMIC ORGANIZATION: WHO, WHEN, WHY AND HOW

Why do we work?
Work has almost completely changed its shape in the last 40-50 years, and the aim that work is
designed to accomplish should change accordingly. Work can no longer simply be the
organization of many elements to achieve a profit for a tiny minority. People today have different
expectations and a different covenant with their working life. Our work can be fruitful, intelligent,
and lead to success, but people also have to understand that they cannot be totally separate
from what they do. The ultimate goal of work should be to elevate people.
An organization has to build the right kind of interdependencies so people do not
feel imprisoned in their work and there is intrinsic meaning in what they do. This means
neither dependence nor independence, but interdependence. In this way, the individual
can understand that by contributing with intelligence, passion and adherence to the
company goals, they derive more benefit than they would by working alone (being
independent).
How can the company leverage people’s natural desire for elevation? By providing
opportunities to participate in something from which the benefit they derive is greater than
the effort they put in. For an organization to allow this meaningfulness to exist in the
workplace it must start from a vision. It must then create the right interdependencies, after
which it is up to the individual to take part and become one with the organization knowing
that their own personal life will be enhanced. This is precisely what the multi project
environment can offer: people contribute what they are capable of and the system
capitalizes on this. By enabling people to do what they are good at there are immediate
good results that reflect back on the worker in a plurality of positive ways.
The way we live is increasingly shaped by the limited availability of resources and this
fact cannot be separated from a different, indeed radically different way of generating and
distributing wealth. We are not talking about socialism but a more intelligent way of
operating organizations. The way organizations and their work are organized, the way they
interact with each other, the way they cater for the wellbeing and development of their
members is a foundational part of a major shift in our productive lives.

Who, when, why and how


If we want to concentrate into two words everything we have said about the nature of the
organization as a system, those two words are: Quality and Synchronization. On a more
abstract level this translates into variation and network. This applies not only to production but
also to the entire organization. When we address Quality and Synchronization correctly, and
the two are completely interdependent, we can guarantee that we achieve our goal in a
continuous, predictable and economically viable manner. Let’s look at what we need to do on
a practical basis to allow this to happen.
Quality means understanding that variation affects every human process, understanding
exactly what those processes are, and never ceasing to act to bring and keep our processes
within statistical control. (We deal with this fundamental subject matter at length in
Chapter 12.) In principle, this could be accomplished within a traditional hierarchical structure
with better performance indicators and a culture of teamwork. We would not need the
Decalogue. The real shift in performance comes from the understanding of finite capacity and
the need for resource optimization and synchronization.
A systemic approach to the economics and management of resources focuses on what
can be achieved by combining the resources at hand. The goal of synchronization is to make
the most out of what we have as a system. The cognitive ordeal that we face in embracing
such a model is that the resources do not “belong” to something; they cannot be allocated in
any conventional “functional” way. Achieving a systemic optimum is a very different sport
from achieving a set of functional optima. Everything has to be subordinated to the constraint.
In fact, in the systemic game resources are generally sub-optimized locally in order to
optimize the global systemic result.

How do we operationally reinforce Quality and Synchronization within our system? When we
build our organization as a systemic structure there are four vital components that keep the
structure alive and working. These are:
1. the ‘Playbook’: a detailed map of all our processes and the repository of all the
knowledge needed to operate the interdependencies
2. an Information System that gives us the ability to sustain the flow of information
connected with the Playbook
3. a scheduler: a mechanism to synchronize, hence maximize this flow
4. a Learning Centre: a method to ensure that all the continuous learning necessary to
operate such a fast system is in place

Vital component number one


The Playbook: learn it, love it, live it

Let’s say it once again. A company seen as a system is a network of interdependent
processes/projects with a goal. No ambition to manage a company systemically could ever be
legitimate without a mechanism to ensure that everyone in the system knows where he fits
into the life of the company. We call this the Playbook because in some way it resembles the
collection of plays that every football player must know to win the game. Our systemic
Playbook is a map that details the interdependencies/linkages, thus giving a true operational
meaning to the expression “organizational design.” We are able to create this map by using
Deployment Flowcharts to map out every process within the company, identifying who does
what and when. Functional roles disappear. Instead, the Playbook details the network of
conversations that must occur (daily, weekly, monthly, etc.) in the system in order to make
these linkages effective.

The Playbook is the nervous system of the organization; it captures all the connections that
make the working of the company possible. The Playbook is not the work of a notary and it is
not carved in stone. It is the living, ever-evolving document that portrays the life of the
organization. It does so at different levels by:
• depicting how all the processes are linked
• describing how these processes must be performed
• specifying which activities these processes entail and who is supposed to perform
them
• illustrating the inputs and outputs of these activities
• recording the expected outcome of these activities
• designing, validating and testing all the improvement activities
and, most importantly, by devising all the statistical studies necessary to gain insight into
the life of the organization.
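One minimal way to picture what a Playbook entry might contain, as structured data. The schema and all names here are hypothetical illustrations of the elements listed above, not the book's format:

```python
# Hypothetical sketch of a Playbook entry as structured data; the schema,
# process names and people are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    owner: str            # who performs it (a person, not a functional role)
    inputs: list[str]
    outputs: list[str]
    cadence: str          # how often the related conversation occurs

@dataclass
class Process:
    name: str
    activities: list[Activity] = field(default_factory=list)
    feeds: list[str] = field(default_factory=list)  # downstream processes

order_entry = Process(
    name="order entry",
    feeds=["scheduling"],
    activities=[
        Activity("capture order", "Dana", ["customer request"],
                 ["confirmed order"], "daily"),
        Activity("check load on constraint", "Raj",
                 ["confirmed order", "constraint schedule"],
                 ["promised date"], "daily"),
    ],
)
# Every handoff is explicit: the outputs of one activity feed the next.
print([a.owner for a in order_entry.activities])
```

Making the interdependencies explicit in this way is what turns "organizational design" into something that can be inspected, discussed and improved.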
Quality and Statistical Process Control are not about techniques. They are a worldview
and a mindset where continuous improvement is continuous, because variation and entropy
never stop. The Playbook is the practical device that embodies and enacts Deming’s PDSA
cycle. It is the offspring of Deming’s vision of a company guided by the all-encompassing
concept of Quality. The Playbook supports Steps Two and Three of the Decalogue and it is
grounded in the idea of process predictability as a prerequisite for knowledge-based
management. It also provides a meaningful mechanism to enhance company communication
because it is based upon open and transparent information and knowledge flow. The
Playbook is the open book where no personal agenda can hide. The writing up and, most
importantly, the enactment of the Playbook is not the job of a “company function”; it is the
first and most important job of top management.

Vital component number two


The Information System: what is it and what is it for?

I once spoke to a highly successful owner of a business software house about the kind of
software that would be required to manage companies systemically. He listened attentively,
and then told me that what I was proposing was absolutely valid, but so simple he would not
be able to justify charging tens of thousands of dollars for it.
My generation has seen the birth and the development of computers and their
exponential growth in terms of importance for virtually every human activity. One of the most
important applications of computers is in the almost endless possibilities that they have to
support human interaction. The power of these machines increases relentlessly and so does
the software that makes them useful. In principle, software should exist only to help humans
live and work better. It is astonishing, instead, how software becomes more and more an
artificial constraint to the development of organizations. This is not only due to the
principle of greed expressed by the software owner I spoke to. The unsuitability of the
majority of software to add any meaningful value to the work of organizations is easily
explained by looking at the way companies are structured.
Software specialists develop their applications following inputs; the more wrong these inputs
are, the more useless the software will be, and the more developers will be asked to
produce yet more software. In a function-based organization there is no possibility for software to
generate any companywide value because inputs will be by definition aimed at supporting
local optima. Moreover, in a function-based organization software specialists will be “stored
away” in a box called IT services and will always be considered nothing more than a
necessary evil, just like accounting.
In a systemic organization the purpose of an Information System is clear and relatively
simple. As the name implies, in any company there should be a system that enables timely
access to the creation, storage and retrieval of information. Needless to say, the first step
towards creating such a system is to understand what this information should be for. It is very
simple (too simple for some developers): this information should serve to remove limitations
(constraints) towards a stated goal. Accordingly, the role of an information system should be
to facilitate the functioning of a chosen constraint.
An Information System should certainly mirror the Playbook and support its enactment
but should also facilitate the management of the constraint(s). An Information System should
consist of a database, where all the information is stored, connected with a scheduler for
the optimization of the chosen physical constraint(s). With very few exceptions, essentially
very large and very spread out companies, an IS could and should be built “in house” and
should be almost free. An IS should provide the few vital pieces of information a company
must always monitor, for instance daily cash in and cash out, and on-time payments to
suppliers and from customers; it should give customers and suppliers visibility on our
inventory and WIP, as well as facilitating order entry. Most importantly, an IS should make it
possible to perform, easily and in a timely fashion, all the statistical analyses
needed to understand and improve the performance of the company.
The most critical part of an Information System, however, is the ability to synchronize the
work of the organization. A thorough understanding of what finite capacity and
synchronization mean, both conceptually and operationally, must precede any attempt to
design an organizational structure suitable to sustain the Decalogue effort to bring about
effective systemic management.
As we have stated, the systemic approach to the economics and management of
resources focuses on what can be achieved by combining the resources at hand. This is
difficult because we are accustomed to thinking of resources as “belonging” to something, i.e.
a function. In this paradigm resources are allocated to achieve local, functional optima to the
detriment of the goal of the system. Instead, in the systemic approach resources are
generally sub-optimized locally in order to optimize the global systemic result. The goal of
synchronization is to make the most out of what we have “as a system”. Resources are
therefore deployed to maximize the global result (we call it Throughput) NOT what these
resources could achieve if they were to operate in isolation. Accordingly, it is critical to
understand what we are synchronizing. Once the system has been designed and the
constraint of the system has been chosen, there are two levels of synchronization that must
take place at the same time in any organization.

Synchronization level one: scheduling of the chosen physical constraint.

In a production environment, for instance, the scheduler will have to maximize the ability of
the constraint to generate throughput, ensuring both that no time is wasted on it AND that the
constraint always works on the right mix. In order to be scheduled, the constraint only needs
to be fed a few variables, namely:
• delivery date (and associated Throughput)
• bill of materials
• routing
• WIP
• inventory and replenishment time
In essence, the scheduler enables the effective coordination that must exist between
sales and replenishment taking into account process and lead times for the manufacturing
and shipping of the products.
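The handful of inputs listed above can be sketched in code. The following Python fragment is an illustration only — the book prescribes no implementation, and the data structures, order names, capacity figure and earliest-due-date priority rule are all assumptions (routing, WIP and replenishment time are omitted for brevity):

```python
from dataclasses import dataclass, field

# Illustrative data structures only: a subset of the variables the text says
# the constraint scheduler needs (delivery date, throughput, bill of
# materials); routing, WIP and replenishment time are omitted for brevity.
@dataclass
class JobOrder:
    name: str
    delivery_day: int        # promised delivery date
    throughput: float        # Throughput the order generates when shipped
    constraint_hours: float  # time the order requires on the constraint
    bill_of_materials: list = field(default_factory=list)

def schedule_constraint(orders, hours_per_day):
    """Load the constraint in due-date order, never exceeding its finite capacity."""
    day, used, plan = 1, 0.0, []
    for order in sorted(orders, key=lambda o: o.delivery_day):
        if used + order.constraint_hours > hours_per_day:
            day, used = day + 1, 0.0  # the constraint is full: move to next day
        used += order.constraint_hours
        plan.append((order.name, day))
    return plan

orders = [JobOrder("A", 3, 900.0, 5.0), JobOrder("B", 2, 400.0, 4.0),
          JobOrder("C", 2, 700.0, 3.0)]
print(schedule_constraint(orders, hours_per_day=8.0))  # → [('B', 1), ('C', 1), ('A', 2)]
```

Everything else in the plant is then subordinated to this schedule: components are released at the pace the plan dictates, neither faster nor slower.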

Synchronization level two: optimization of the resources companywide.

If we want to manage our organization as a network of synchronized projects we must
have a way to avoid resource contention. Resources contribute with their competencies and
these competencies are assigned to projects. What we schedule is the finite capacity (in
terms of time) of the competencies we have and we then allocate the person that is available.
In order to do this effectively we must have a database of competencies (name + what they
are capable of doing) and a finite capacity scheduler that can draw from this database only
those resources that at any given time are available.
Let me stress this point again: individuals with a set of competencies do not belong
to a company function. They are available any time that they are not sick or on holiday
and this availability is captured by the scheduler and synchronized in a way that
maximizes the Throughput the company can generate. Indeed, not ALL the available
time of ALL the resources needs to be scheduled, inasmuch as there are many activities
that need to be attended to at all times by certain resources, typically shop-floor and
very repetitive ones.
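A toy sketch of this idea in Python (the names, competencies and first-available rule are all hypothetical — a real finite capacity scheduler would of course also reason about time):

```python
# Competencies, not people, are what gets scheduled; the scheduler draws
# from the competency database only the people currently available.
competencies = {
    "welding":  ["Anna", "Boris"],
    "testing":  ["Carla"],
    "drafting": ["Boris", "Dmitri"],
}
unavailable = {"Boris"}  # sick, on holiday, or already allocated elsewhere

def allocate(required_competency):
    """Return the first available person with the required competency."""
    for person in competencies.get(required_competency, []):
        if person not in unavailable:
            return person
    return None  # a genuine capacity shortage: escalate, do not multitask

print(allocate("welding"))   # → Anna
print(allocate("drafting"))  # → Dmitri (Boris is unavailable)
```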
Both levels of synchronization, i.e. scheduling of the constraint and optimization of the
resources companywide, can be managed following the Focusing Steps provided by the
Theory of Constraints. An example may be useful.


Let’s say that we want to synchronize the work of a company producing industrial robots and
automation tools. Such a company can be seen as an assembly operation and we have to
decide where we want the physical constraint to be. Let’s say, for the sake of simplicity, that
we choose as constraint the part of the process flow where all the different components
making up the final product are assembled. The Focusing Steps would then tell us that:
A. We release the different components at the pace at which we can physically
assemble them, neither faster nor slower
B. We ensure that ALL the components making up the customer order we want to
process on that date get to the assembly line ONE buffer time ahead
C. Job order by job order, we monitor statistically what percentage of the buffer has
been eaten into or gained
D. We assess the predictability of this process (that delivers the pieces to the
assembly line) and then:
i. If the process is in control and the upper limit is within the buffer, we carry
on
ii. If the process is in control but the upper limit is outside the buffer, we re-
size the buffer
iii. If the process is out of control and all the data points are within the buffer,
we search for the reasons that send the process out of control and we fix
them
iv. If the system is out of control and some of the data points are outside the
buffer, we stop the line and fix the problem
E. If we perform these steps and the market demand does not exceed the capacity of
the chosen constraint, presumably measured in units of assembled product per
time period, then we probably ship everything on time.
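Steps C and D can be read as an ordinary process behaviour study. The sketch below is one possible rendering in Python; the individuals chart with three-sigma limits is an illustrative choice of ours, since the text does not fix the type of control chart:

```python
from statistics import mean, stdev

def buffer_status(consumption_pct, buffer_limit=100.0):
    """Classify buffer consumption per cases D.i-D.iv above.

    consumption_pct: percentage of buffer eaten, job order by job order.
    """
    m, s = mean(consumption_pct), stdev(consumption_pct)
    ucl, lcl = m + 3 * s, m - 3 * s  # three-sigma control limits
    in_control = all(lcl <= x <= ucl for x in consumption_pct)
    within_buffer = all(x <= buffer_limit for x in consumption_pct)
    if in_control and ucl <= buffer_limit:
        return "carry on"                         # D.i
    if in_control:
        return "re-size the buffer"               # D.ii
    if within_buffer:
        return "find and fix the special causes"  # D.iii
    return "stop the line and fix the problem"    # D.iv

print(buffer_status([40, 45, 50, 42, 48, 44]))  # → carry on
```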

However, in order for this chosen constraint to maximize the Throughput of the company
many of the processes making up the system (virtually all) must be synchronized:
• Engineering must issue flawless drawings
• Replenishment must deliver the subcomponents to the warehouse in time
• Accounting and administration must pay and collect promptly
• Marketing and sales must identify suitable customers and keep the line in “pull”
with the highest Throughput mix, etc.
There can be no ‘heroic’ attempts here by one function to try and outshine another. Any
attempt to oversell, squeeze suppliers on price, delay collection and payments and reduce
the thoroughness of drawings in the name of cost optimization, will result in sub-optimization
of the system’s performance. None of these activities can be done in isolation; any attempt by
functions to outperform each other and claim more “functional” relevance in the system will
not produce one more unit shipped. Accordingly, we need an algorithm and an organizational
structure that helps the coordination of all the activities that maximize the throughput that we
can achieve with the designated constraint.

Let’s summarize. In order to manage a synchronized system we must:


a) Designate a physical constraint, buffer it and manage the buffer
b) Orchestrate the work of the whole organization with a powerful algorithm that allows
the best use of all the resources. This algorithm will be useless if we do not elect it to
be the main driver of a suitable, i.e. systemic, organizational structure.
Generally speaking, the set of resources that is identified in the company as suitable for
project-like activities should be allocated to projects for 70-80% of their time. Dr. Goldratt,
over twenty years ago, developed the Critical Chain algorithm to optimize the allocation of
resources on projects and some software exists based on that algorithm. In the appendix we
will show simulators for finite capacity scheduling as well as software for the optimization of
finite resources in projects.

The Critical Chain approach to Project Management


In his 1997 novel called Critical Chain Dr. Goldratt tackles the issue of project
management. Goldratt provides a unique and revolutionary insight into this very
underdeveloped part of managerial literature. The main tenets of the book are the following:
• Wrong behaviours and mental habits like multitasking and putting issues off until the
last minute (“Student Syndrome”) artificially slow down the completion of projects;
• Our minds are not trained to assess risks associated with probability distributions;
• We must not protect individual tasks but the project as a whole – no milestones;
• The traditional Critical Path method for scheduling projects often creates resource
contention;
• Resolving this resource contention leads to a very different series of dependent
events that determine the length of the project; we call it Critical Chain;
• It is this chain that we protect with a project buffer that absorbs the covariance of
the project;
• Non-critical branches, called feeders, are also protected with a cumulative buffer
(not individually) placed at the end of the “feeding chain”;
• If we manage several projects in parallel, we must select a finite set of resources
called “pacing resources”; they will dictate the pace at which the organization as a
whole is capable of achieving its goals.
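The buffering idea in the tenets above can be illustrated numerically. The Python fragment below uses the “square root of the sum of squares” sizing heuristic, one common way to aggregate the safety stripped from individual tasks — an illustrative choice of ours, not something the novel prescribes:

```python
from math import sqrt

def chain_with_buffer(safe_estimates, aggressive_estimates):
    """Strip per-task safety and pool it into one project buffer."""
    chain = sum(aggressive_estimates)  # the chain, protected as a whole
    safety = [s - a for s, a in zip(safe_estimates, aggressive_estimates)]
    project_buffer = sqrt(sum(x * x for x in safety))  # pooled protection
    return chain, project_buffer

# Three tasks estimated at 10, 8 and 12 "safe" days vs. 6, 5 and 7 aggressive days.
chain, project_buffer = chain_with_buffer([10, 8, 12], [6, 5, 7])
print(chain, round(project_buffer, 1))  # → 18 7.1
```

The chain plus its shared buffer (about 25 days) is shorter than the sum of the padded estimates (30 days): pooled protection absorbs variation more efficiently than task-by-task padding, which is exactly why milestones on individual tasks are abandoned.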
Critical Chain is a very intense, heartfelt and sometimes abrasive novel. In this
deceptively simple and fast paced book, Dr. Goldratt throws down the gauntlet to academics
and industrialists alike on what it takes to use knowledge to achieve results. Critical Chain is
the offspring of a vision of the world and, too often, the elucidation of this vision has been
insufficient compared with the wealth of practical details that Dr. Goldratt’s applications often
attract from pundits all over the world. Like all of Goldratt’s revolutionary contributions to
management, Critical Chain has achieved very partial results in industry and none at
corporate level.
Looking at Critical Chain as a technique for managing projects means essentially
missing the point. The reason why, after thirteen years of relentless efforts to disseminate
Critical Chain, tools like Microsoft Project still dominate the way projects are “managed” is
that any attempt to use Critical Chain without embracing a purely systemic view of the
organization is doomed to failure.
Critical Chain represents the embodiment of a vision of the organization based on pace
of flow, people’s involvement and great emphasis on quality. Quality, involvement and flow
are the basic philosophical pillars of the systemic organization so we shall investigate how
Critical Chain can play a much greater role in the building of an intrinsically systemic
organization (see appendix on Information System).

Vital component number four:


A Centre for Learning

What do we need to make all this work? We need a new relationship with what we do.
Companies need to establish a very precise, upfront and clearly understood covenant with
their people; top management must show a total commitment to this idea of organization.
This entails, along with a career path, the addressing of very legitimate issues like
compensation, authorities and responsibilities, status, etc.
Once this commitment from top management has been achieved and the above-
mentioned issues addressed, how do we practically go about the “reinvention of man”
foreshadowed by this new covenant with labour? The answer is a Centre where everyone in
the company can learn how to be and act in this new organization.
Let’s be explicit. The time necessary for a competent and willing individual to learn what
he needs to know to operate in a project-based structure is measured in (many) months;
however, the time it takes to transform this learning into a truly metabolized behavioural
change could be years. Obviously, while it would be desirable to shorten the former, we must
absolutely find a way to accelerate the latter. A Centre for Learning is not a training centre, it
is not a management school, it is not R&D, it is not an Academy and it does not look like the
couch of a psychotherapist.
A Centre for Learning is a (possibly physical) space where managers are first taught
and then coached and mentored on how to change their way of conducting business. It is
a space where, under the guidance of highly skilled and knowledgeable professionals, the
intuition, understanding and knowledge of management needed to develop and enact
business strategies are leveraged. A Centre for Learning is where managers go to
develop, test, validate and refine plans and activities aimed at propelling the business of
the Company.


The success of a Centre for Learning is measured like anything else in the company, by
Throughput and cash increase. The Centre for Learning is where managers learn how to see
solutions, put them to work and where they give and get feedback. It is where all the relevant
business decisions are developed, managers are nurtured and the future of the company
planned. The Centre for Learning is not just for the Company; it is where customers and
suppliers (and even competitors in some cases) come to share in the knowledge needed to
operate like a chain; it is where the concept of partnership is embodied in the reality of the
relationship and where bold innovation finds its cradle. It is where, through cooperation and
not competition, we have the possibility to understand the power of a network.
What I describe in this chapter is neither a fantasy nor is it particularly complicated to
create. What it does require, however, is the sharing of a paradigm and a sense of urgency.
The paradigm underpinning this transformation, from the prevailing management style into
one of optimization, is sustainability; and the urgency stems from the understanding that the
world is on the very eve of a tectonic shift and this shift calls for a completely different style of
leadership. That leadership must be inspired and informed by a higher form of intelligence. It
is an intelligence able to leverage the interdependence among three faculties of the intellect:
intuition, understanding and knowledge. It is an intelligence that connects cause and effect
and governs decisions always in the awareness of their wider, systemic implications.

PART TWO

Understanding:
Analysis/Development

In Part Two we address the conscious and connected organization from the perspective of
Understanding, i.e. Analysis/Development. Focusing on Industry, we consider the
organization as part of a chain of value based on speed and quality. We look at the shift in
consciousness and use of the mind necessary to foster a new kind of knowledge for wealth
creation. This new knowledge calls for a new Economics, as proposed by Deming, and a
consistent model of Leadership.
5.

TRANSFORMING INDUSTRY

Everyone doing his best is not the answer. It is first necessary that people know what
to do. Drastic changes are required. The first step in the transformation is to learn how
to change…Long-term commitment to new learning and new philosophy is required of
any management that seeks transformation. The timid and the fainthearted, and
people that expect quick results, are doomed to disappointment.

W. Edwards Deming

In Part One we examined an intuition (birth of an idea) about fundamental problems in
managing organizations as systems. The intuition was about the inherent conflict faced by
any organization. Here in Part Two we take this intuition into the phase of understanding
(Analysis/Development), and consider the outcome of this intuition and what would be
needed to make it happen. We look ahead to the goal we are trying to achieve and the
possible obstacles that could prevent its achievement. In Part Three we will provide examples
and explanations on how to make the change happen.

From exile to freedom: splitting the Red Sea


Stupidity, as Dr. Deming has reminded us, is a choice. We are not talking about a low IQ
but the refusal to use the mental capacity we have. We do not lack the knowledge needed to
build and operate a truly systemic company guided by the idea of throughput; what we lack is
the ability to decide to use that knowledge. The difficulty is in the transition from knowing what
to do to making it happen.
Why is this so difficult? Once we take action, we change our reality and nothing is quite
the same. This is profoundly linked with the way we see ourselves in the world. Taking those
steps is scary. When the Red Sea parted for Moses, it was not simply divine intervention, but
divine intervention coupled with human decision and action. Torah scholars inform us that the
sea did not part until someone took the first step into it.
As Deming says, it is all about transformation. Our goal, as individuals and organizations,
should be nothing less than transformation. Into what? Into being the fastest and most effective
vessels for making our potential real.
The most obvious and suitable environment for transformation is Industry. This is where
raw material is literally transformed into saleable goods. I truly believe that the solidity of any
country’s economy must be based on a strong and competitive Industry because that is where
materials, processes and methods guided by human intelligence can produce the maximum
wealth. Industry is the natural cradle for the development of a new ecology of wealth.

The focus of Part Two of this book is Industry and how to transform it through a systemic
approach. Before we look at this in a more hands-on light (I purposefully do not say ‘practical’
because to paraphrase Einstein, nothing is more practical than a good theory), we need to
step back and examine the cultural and philosophical forces that can inhibit such a
transformation, and identify the pattern that will promote and sustain it.

Not Change Management but transformation


Our goal when we adopt a truly systemic approach in an organization is to transform the
way the organization pursues its goals. It is not Change Management, nor re-engineering, nor
improving, but transforming. The algorithm of the Decalogue is designed precisely for this. It
is not Kaizen, it is not Lean, and it is not Business Process Reengineering: its goal is not the
achievement of better performances (those are a natural by-product); it is a complete
paradigm shift in the way the business of the organization is conducted.
The transformational nature of the Decalogue is reflected in its ambition to bring together the
elements of the New Economics advocated by Dr. Deming and the largely unexplored power
provided by the thinking approach to systems of TOC. Dr. Goldratt has labelled this thinking as
“common sense” and called its offspring “thinking tools”. After 15 years of relentless work I am
adamant that there is nothing common about the “sense” Dr. Goldratt talks about and the “tools”
he designed resemble tools just like my three pointer resembles Kobe’s. As for thinking in a way
that produces coherent results, this seems to be one of mankind’s greatest challenges.
In a systemic attempt to provide a unified approach to economics, finance and
management, the Decalogue leads us inevitably to create the foundation for a new
epistemology of wealth creation, something that through rigorous investigation separates
knowledge (what we know, which is part of who we are) from superstition (what we believe is
true and for which we have no, or only a shaky, basis of explanation).
In a world where the only cure to the folly of the financial markets seems to be stricter
rules on the trading of derivatives and salary caps and where financial institutions still believe
they can perpetuate moral hazards because the government will eventually bail them out, we
need to dig deeper on what causes the state we are in and find an option on how to come out
of it. That is the ultimate purpose and scope of the Decalogue.

An ecology of the mind


The starting point of any scientific investigation of the natural world is the understanding
of the forces that act on and shape the subject under scrutiny. In a similar manner, when we
seek to activate a transformational change in the way humans translate what they know into
coherent behaviours, we have to understand the fundamental forces that act upon them and
make them who they are. This is because these forces do influence the cognition process
and limit or enhance the ability to learn.
Sometimes examples can help: the unsuitability of statistical models based on Gauss-
like probability distributions to assess financial risk was unequivocally proven well over thirty
years ago by well known and respected mathematicians. Disasters like Black Monday in 1987
and LTCM in 1998, not to mention what we have witnessed in the last two years, have been
the results of the sheer inadequacy of the relevant institutions to cope with the cognition
process that is associated with the development of new knowledge.
The transformation that the Decalogue advocates cannot be undertaken without
understanding and embracing a basic set of values, and it is mandatory to realize that the
achievement of economic results must be connected with a precise ecology of the mind. In
this way, the legitimate pursuit of personal monetary wealth will not be disjointed from the
quest for purposefulness that should be the prime motivator for any human endeavour.

Change and cognition: what drives us and what stops us


Greed, incompetence, evil inclinations and lack of a moral compass are effects of a
profound cause: the very limited ability of human cognition to cope with structural
changes taking place in a learning domain. Indeed, the ability that humans have to adapt to
physical changes is far greater than the ability needed to adapt to mental changes. Moving
from a probabilistic view of the world with a centre and tails to one made of poorly predictable
spikes (the fractal geometry of nature and its underpinning power law) simply has not yet
registered with economists, bankers, politicians and regulators. It is not a matter of education
(although that has played some role), it is a matter of cognition processes.
The forces that shape how we learn and translate what we learn into consistent actions
are not of the same kind that we study in physics. These forces shape individuals as well as
organizations and they are responsible for the mental models that limit the way we perceive
the world around us.
We can understand these forces only if we pierce the outer layer of the self and we
connect with the energy within. This energy, we can call it “life”, is what originates all of our
faculties. Whether we look at this from a business, scientific, philosophical or religious
perspective, the issue of our inner energy is central for any development in our understanding
of the human condition.

These forces can be broadly divided into two main categories:


1) purely physical drives
2) the equally strong drive that humans have for transcendence
Purely physical drives are connected with the primal fears that any individual brings with
them. These forces trigger all those actions aimed at restraining our behaviours and act on us
in a way that makes us develop the need for the control of the environment we are in. The
development of these forces is very heavily influenced by the way we experience the world,
by the defining moments of our life and by the social and cultural fabric in which we develop
our relationships. The forces that shape the fundamental need for control that is common to
any person need to be harnessed and refined if we want to avoid being dominated by them.
The drive humans have for transcendence is an innate desire that we all share to go
beyond our current state and see ourselves projected into a different and greater dimension
of existence. This is the reflection of an often untapped level of consciousness. This is what
many call “soul”, or that part of us that aims high, that is projected above, that calls for new
challenges and yearns for a higher level of spirituality and meaningfulness. In the very same
way in which we need to exert “bodily” control over our environment we also need to project a
“soulful” vision of ourselves into the future.

Body, soul and corporate unculture


The body and the soul that make up our being must both be addressed in the pursuit of
happiness; they are like the wick and the flame, both indispensable to create light. Control
and vision are then the two main needs that any human must see satisfied in order to live a
harmonious life. How does all this apply to the transformation of an organization?
As Dr. Deming repeatedly pointed out, individual performances within an organization are
the result of what the individual is capable of PLUS all the interaction and interdependencies in
which they are immersed. As a result of these interactions Dr. Deming rightly claims that
assessing individual performances in an organization is both dangerous and useless.
An organization is different from a group of people waiting for a bus not because they do
not have a common goal, they do, but because an organization ties the individuals and their
actions within a network of interdependencies. These ties have the power to multiply the
outcome of these actions or, sadly, debase it. These ties, in creating a unique way in which
individuals interact, originate an entity, “the organization”, that is subject, just like any
individual, to the forces we described above. The modern lingo to describe the shaping
impact of these forces is “corporate culture” and the job of HR seems to have become the
development of appropriate metrics. Such metrics are almost invariably psycho-statistical
hallucinations that more often than not stifle innovation and beef up controlling syndromes.
These forces act upon the organization, guide its choices and reinforce the corporate
culture; the individuals that are part of it become entrenched in this culture, and the feedback
loop “organization-individual” is constantly kept in motion.

Control vs. vision and the decision-making process


The need for control and the need for vision, common to individuals as well as
organizations, each have different verbalizations when focused on practical applications; the
most relevant of these applications is linked to the decision-making process that
organizations undertake. Before we proceed, let me make sure I got my message across.
Every organization, by creating the network of interdependencies that support their ability to
reach their goals, generates the forces that shape the life of the organization. These forces are
responsible for developing and sustaining the two fundamental needs of any organization (and
individual for that matter): the need for “control” and the need for “vision”. Control and vision are
generic verbalizations; each of these needs, in any practical situation, can be more appropriately
verbalized in order to facilitate the understanding of its relevance to the subject matter.
As we are dealing with the cognition process and its impact on our lives, I believe we can
fruitfully concentrate on the very critical process of decision-making. The question is: how do
“control” and “vision” translate in the decision-making process?
A decision point is where we are confronted with the possibility of change. Whether
we are cognizant of it or not, every day we take myriad decisions; these decisions
foreshadow in our mind a “change”, hence trigger a conscious decision point, only when
we feel that by changing we alter irreversibly the current state of reality AND we are in
some way comfortable in the current state. In other words, we truly feel the need to take a
“life changing” decision only when the current state of our reality shows some or many
elements of discomfort. On the other hand, regardless of our current situation, our innate
drive towards elevation and improvement will push us towards wanting to take life-
changing decisions.
In this process, “control” takes the shape of “security” and this need leads to not making
the change; “vision”, on the other hand, will be translated as “satisfaction” and will gear our
decision towards making that change.
The decision process takes us then to a very precise and fundamental dilemma: Change
vs. Do not change. This dilemma is originated by the two core needs of “control” and “vision”
that in the decision process can be verbalized as “security” and “satisfaction”.

How to transform Industry: intuition (birth of an idea), understanding
(analysis/development) and knowledge (application/execution)

From here we will start our journey to understand how to grow the seeds of
transformation.
The road that leads from the bondage of the wrong interdependencies that imprison
Industry to the freedom that comes from the ability to systematically acquire new knowledge
and use it wisely for the betterment of performances and, ultimately, mankind, passes
through a more powerful use of the faculties of our intellect.
We all know that taking on new behaviours or modifying existing ones is excruciatingly
painful. This is because behaviours are the last link of a chain of events that take place in our
mind as we progress in our experience of the world. The experience we gain everyday
provides us with a “wisdom” that endows our mind with the ability to develop intuition.
Intuition is what we feel is true (or valid) but for which we have no proof. Intuition is the
hunch that makes us opt for X instead of Y, it is the voice within, it is that neural connection
that makes us see things nobody else sees. Intuition is the starting point, in science and in
any human activity. Intuition can be electrifying but it is weak and does not last. In
management, intuition is important but “hunch management” is what has kept the human
endeavour of management from evolving into a real science.
Intuition should and must be followed by understanding. Intuition uncovers the way
ahead but is far from sufficient to get to the goal. Understanding, instead, has got to do with
the big picture, with clarity on the cause-effect relationships that define the present reality and
how this reality can evolve in the desired direction, including potential pitfalls. Understanding
enables prediction and is rooted in the epistemological tenet that without prediction there is
no management. Understanding is what enables connection and makes us aware that we are
all part of a network of networks. Understanding accelerates the transition from a non-
sustainable, win-lose approach to problems to a win-win mindset. Understanding allows
planning and fosters a long-term vision as well as creating the conditions for acting
meaningfully.


Understanding provides the ideal platform for change but it still lives in the realm of
planning. What moves us ahead is the action that comes from the intimate knowledge that we
have of the subject matter we intend to act upon. This knowledge is something deeper than
expertise. It is that intimacy that comes with awareness; it is that part of the self that is
actualized in what we know; it is our ultimate being projected in what we do; it is our ability to
do who we are.
Intuition, understanding and knowledge are three very different faculties of our intellect
that must all be activated to accomplish any behavioural transformation. Only if we become
conversant with the mechanisms that activate and sustain these faculties can we capitalize
on that form of intelligence peculiar to humans that in Hebrew is called sechel, the ability to
acquire and deploy knowledge coherently.
Remember what is at stake: the transformation of the way Industry pursues its goals.
I claim that this transformation is only possible if we improve the quality of our intellect, if we
learn how to use our mind in a much more powerful way. Only if we accomplish this feat will
we be able to capitalize on the knowledge that is available and that the world continues to
develop at an unprecedented rate.
Improving these abilities is neither simple nor is it an activity for the fainthearted.
Learning, actually “re-learning”, how to think is a profoundly disturbing activity. It may be
thrilling for a very few, but it is certainly disconcerting for the majority.
Why is it so difficult?

Re-learning to think
The Theory of Constraints (TOC), the largely untapped body of knowledge developed
originally by Dr. Goldratt, is deceptively simple. The monumental effort that the Goldratt
Institutes and Schools have made in the last 20 years to make this knowledge available
has produced, for the time being, disappointing results if compared with its potential. At least
on this, Dr. Goldratt would agree with me.
TOC, by its very nature, goes too fast and too deep in unveiling the myriad of flawed
assumptions we make about the world and how we act in it. When we come to management
and economic decision-making the gap between what is “intelligent” and what people do
everyday is frightening.
Deciding to learn a better way to think is intimately connected with the decision to act
accordingly. Thinking and acting in a different, more powerful and focused way must also be
accompanied by a different way of speaking. Words are powerful and create meaning; words,
and the letters they are made of, are the messengers of information needed to communicate:
they create meaning and, with it, reality. Humans love to speak and they are very connected
with their words because it is through their words that they acquire and develop
consciousness.
Changing behaviour is difficult because the new thinking and speaking that goes with
new acting de facto portrays an existential transformation. In order to tap the sechel that
we all possess we have to decide to transcend ourselves and, up to a point, reinvent
ourselves.

50
Domenico Lepore

In other words, it is emotionally burdensome to accept this truth: we are not at the centre
of the world but simply a part of a complex chain and all our flawed assumptions are simply
the result of our very limited ability to perceive this complexity.
When confronted with the step-by-step guidance to change that TOC proposes, we find
ourselves, in virtually no time, coping with the monumentality of the transformational change
we must undergo in order to achieve what we thought we really wanted. And it is precisely at
that point that we unplug; we reconsider our reality and conclude that it’s not too bad after all.
Humans, with few exceptions, are simply not geared for change.
So what shall we do? Give it up? Shall we abandon any hope for mankind to be
fundamentally better and more intelligent? No, of course not.

The way out of the crisis


The way out of the crisis created by prevailing economic and managerial thinking is to embrace a new paradigm of openness and to accept that we, as individuals, companies and states, are not “central”: the world does not revolve around us. We, as
individuals, companies and states are part of a network of interdependencies that can only
have one goal: the betterment of the human condition.
Industry must be at the forefront of this transformation in our way of being through better
thinking because this is the engine for growth and prosperity. The last century has seen,
along with unspeakable tragedies, the development of an unprecedented freedom to
innovate, the tumbling down of physical and spiritual walls, and the birth of a new era of
knowledge and information flow. The “New Economics” advocated by Dr. Deming is based on
a strong Industry fuelled by an ever-growing speed of innovation and supported by a system
of physical and cultural infrastructures provided by governments cognizant that their role is to
facilitate the growth of talent, protect everybody’s freedom to participate and venture, and
help the weak and the poor to live with dignity.
The last century has also seen the development of all the foundational work needed for
management to become a real science and, as such, a major contributor to wealth creation.
This book is inspired by the men who, in these years of quest for a better way to live a meaningful professional and personal life, have provided me with all the right questions and
pointed me towards possible answers.

6.

INTELLIGENT INDUSTRY:
QUALITY, SPEED, NETWORK

Stamp out the fire and get nowhere. Stamp out the fires puts us back to where we
were in the first place. Taking action on the basis of results without theory of
knowledge, without theory of variation, without knowledge about a system. Anything
goes wrong, do something about it, overreacting; acting without knowledge, the effect
is to make things worse.

(W. Edwards Deming in documentary ‘Deming of America’.)

Goal, values, and vision


Any industry is an organization. What do we mean by an organization? The word organization
brings to mind the idea of a structured way for individuals to accomplish some defined tasks.
This differs from individual labour, essentially, because in an organization individual efforts
must be coordinated in order to produce a result.
The successful coordination of individual efforts is possible only if there is a goal
common to all the producers of these efforts. The goal is by definition a measurable entity, so
along with the goal there has to be a way to measure its achievement.
A goal, in order not to be an arbitrary one, must be the offspring of a vision and, in turn,
this vision should reflect a basic set of values. I claim that the most fundamental prerequisite
for a successful industry, as for any organization, is that all of its members share its vision
and the basic values that inspire this vision. Goal, vision and values must portray an idea of
future and how their fulfilment will result in the betterment of the lives of everyone in the
organization and in society at large.
Once goal, vision and values are clear and shared by everyone in the organization, then
we need to decide how we want to coordinate these efforts. In other words, we have to
design how Industry should operate. Hundreds of books have been written on “organizational
design” and all of them contain some or many elements of validity. However, if we want this
design to have any resemblance to a scientific way of looking at the issue, if we want
organizational studies to become a science, then we have to understand, just as in nature,
the basic forces that keep the design together and explain how organizations can function.

Hierarchy vs. system in Industry


As we analyzed in Chapter 3, let’s recap: the main endeavour in creating an organization or industry is orchestrating the work of individuals towards a common goal.
The higher the number of individuals to be coordinated, the bigger the effort to control the
way they work. Accordingly, a fundamental need of any (growing) organization or industry is
to exert an effective control on how this coordination produces results.
By the same token, industries are built for a purpose and, invariably, the fulfilment of
their purpose is linked to the way they deliver to their customers. In other words, any
successful industry must devise the way it functions so as to address its customers’ needs better and better; it must be structured to listen very clearly to “the voice of the customer”.
This does not mean having a team of people receiving calls and handling customer
complaints or suggestions. That would be purely palliative. An industry can cater for
customers’ needs effectively ONLY if the interdependencies that lead to their satisfaction are
well designed and operated.
The lines that interconnect departments and functions are clearly not vertical. So, a
suitable organizational structure aimed at listening to the customer cannot be
hierarchical/functional.

Let’s summarize:
A successful industry must cater for its need for control and this leads almost invariably
to a hierarchical structure because accountability in a hierarchy is easily defined. On the other
hand, any successful industry must listen carefully to the voice of the customer and this
prompts us to believe that hierarchy may not be the right solution because it fails to
acknowledge the very interdependent nature of the work of an industry.

Conflict means opportunity


If we accept the idea that any conflict is an opportunity to dig deeper into the way we
think and unveil new possibilities, then all we have to do is surface the basic assumptions we
make about this conflict. We have to verbalize, as clearly as we can, why we perceive there
to be a conflict between a hierarchical structure and some different kind of structure.

There are three categories of assumptions that we make and they reflect very ingrained
mental models that we have about:
1) the idea of control
2) how we measure
3) how an industry can be modelled
Challenging these mental models will pave the way to a powerful solution. The
monumental (and consistently under-heeded) work of Dr. W. Edwards Deming has already
provided the conceptual background to develop solutions to this conflict.

Not companies but value chains


Any industry develops very naturally as a network of interdependent components; what makes this network a system, as opposed to a grouping of efforts, is the definition of (and the


adherence to) a common goal. In other words, when people start collaborating for a common
goal the interdependencies necessary to achieve it can be very easily defined. When we pay
attention to these interdependencies and interrelations we can have a clear understanding of
how the system should operate. To be effective, the design of this system cannot just be
about the people in the organization. It must include customers and suppliers and the way in
which all the components of the system are going to benefit from the achievement of the goal.
Let’s be as clear as possible on this issue.
Our suppliers (and their suppliers), our customers (and their customers) and we,
(including our competitors) are all links in chains of value. This value is realized when an end
user benefits from the product/service these chains deliver. Unless the end user pays for and
enjoys the product the chains have delivered, in time nobody truly gains. On the other hand, if
chains get better and better at delivering better and better products that end-users enjoy, then
the market grows and so does everybody’s wealth.
The “Market” is far bigger than we think, and its development is artificially limited by organizations’ inability to innovate and to bring the potential of innovation to fruition. We already have the mental technology to fully exploit the potential of any organization and naturally expand the markets we are in, rather than fighting over the share of an existing market. It is not about “my piece of the pie is bigger than yours”; it is about “let’s all work to make the pie bigger”. We will look at this in detail in Part Three.

A new kind of wealth


What is the meaningful outcome of organizing commerce as a value chain? It is wealth, not just money; wellbeing, not just profit: an ethical, transparent, environmentally-conscious, safe and sustainable chain of efforts aimed at improving everybody’s quality of life through ever-improving products and services.
We need to see organizations, and in particular industries, for what they really are. They
are not simply boxes housed within company premises with a geographical location. They are
vessels, i.e. channels that allow products and services to flow from raw material through to a
satisfied end user. The intrinsically collaborative nature of their work clearly shows that the
only meaningful way for any industry to prosper is to become an active vessel for the
development and distribution of products that help people to live better.
Let us state this with a little more emphasis. The only sustainable role for any industry in
today’s world is to allow products to fulfil their role in the world. How? By getting them to the
market as speedily as possible with the highest quality possible. The opposite of this is
hedging, an unethical practice that deliberately deprives the market in order to gain an
‘advantage’. Any form of “hedging”, whether of money or goods, that hinders this flow only
creates artificial scarcity for the advantage of a few. This advantage is temporary and illusory
because in the end everybody loses. Speed of flow, instead, is the essence of a new, systems-based economics that will soon have to be formalized and adopted if we are to take commerce to an unprecedented new level.


Control vs. anarchy: Statistical Process Control and Constraint Management


How do we protect the legitimate need to have a handle on the growth of our Industry
and ensure accountability for the people that work in it? In other words, how do we control it?
The fundamental question we have to ask ourselves is: what is control? I think it is safe
to say that control has to do with taking decisions and influencing the outcome of these
decisions. I may say that I have control over a situation if I can predict the outcome of my
decisions and I have the ability to determine that outcome.
How do we exert this kind of prediction-based control over an organizational system that
is not hierarchical/functional? Two elements here are relevant: predictability and practicality.
A thorough understanding of predictability is achieved through the theory of variation and
its useful application, Statistical Process Control (SPC); practicality comes from the
realization that any system we intend to manage is limited in its ability to achieve its goal by
very few elements that determine its outcome. Dr. Goldratt, as we know, calls them
“constraints” and his fundamental contribution to the advancement of the science of
management, Theory of Constraints, describes very clearly how to deal with them.
Managing a system becomes immensely simpler if we focus our attention on these constraints, and in the chapters in Part Three we will explain how. What is important here is to understand that the viable organizational model that stems from invalidating the assumptions underpinning the “inherent conflict” of hierarchy vs. system described previously can be based on a system constrained in one (or a few) point(s).
If we agree to grapple with such a model, we can quickly move out of the conflict and into a much wider range of opportunities. The Decalogue is the algorithm that allows industries to organize themselves to embrace these opportunities and possibilities.

Quality is the key


If organizations in general are the vessels for the development and distribution of wealth,
Industry must be its flagship. Industry is built on Quality; an all-encompassing, totally
pervasive, larger than life, Quality.
What is Quality? As Dr. Deming reminds us: “a product or a service possesses quality if
it helps somebody and enjoys a good and sustainable market. Trade depends on quality.”
With a seemingly simple sentence, Dr. Deming teaches us that Quality embraces every
single aspect of a company’s life; from the basic organizational setup, to the way products are
manufactured and sold, from the mandatory goal of satisfying the customer to representing
the foundation for trade and commerce, Quality must go far beyond the boundary of the
company and embrace the whole value chain.
When applied to Industry, the Decalogue focuses on the maximum Throughput the
company can generate through sales. Throughput measures pace: it is the time derivative of cash, i.e. cash per unit of time, and as such it is connected with the concept of speed. For the
sake of clarity, we shall talk of the three elements that determine that speed and, needless to
say, these elements are all part of the same reality, the systemic organization.
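To make the definition concrete, here is a minimal, hypothetical calculation (all figures are invented for illustration; in Throughput Accounting, Throughput is sales revenue minus totally variable costs, expressed per unit of time):

```python
# Hypothetical figures: Throughput as cash generated per unit of time.
selling_price = 100.0          # cash received per unit sold
totally_variable_cost = 40.0   # e.g. raw materials per unit
units_sold_per_week = 50

# Throughput rate: cash per unit of time flowing through the system.
throughput_per_week = (selling_price - totally_variable_cost) * units_sold_per_week
print(throughput_per_week)  # 3000.0 cash units per week
```

Anything that increases this rate of flow, rather than any local cost figure, is what the three elements below are for.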


Speed of Throughput

The three elements that determine the speed of Throughput are:


1) the infrastructure
2) what the infrastructure is for (marketing, sales and after sales, and NPD)
3) how to keep the infrastructure up and running (learning for change)

The founding elements of an infrastructure suitable for speed are:


a. the ability to move material from suppliers to customers (conventionally, we may call it “replenishment and logistics”)
b. the ability to move material within the company (manufacturing)
c. the ability to process all the information needed to keep up with this physical flow
(planning and scheduling as well as administration, accounting and finance)

What keeps all these activities synchronized, properly staffed and in check with the goals of the company is a simple but effective organizational mechanism called “Quality System”. To be clear, I am not talking about the ill-conceived, painfully inadequate, legally minded, improvement-proof, function-driven registration schemes that have plagued Industry for more than two decades. By Quality System I mean the organizational algorithm that enables the
best possible utilization of the company’s finite resources. If we want to manage an industry
like a system we need to understand how we can best coordinate the activities that are
carried out daily in the system.

A network of activities
An industry viewed as a system is a network of recurring and non-recurring activities.
These activities need to be staffed with people with suitable competencies. It will be very cumbersome to deploy these competencies in a timely and systemic way if we organize them
functionally. Why? Because any attempt to use a resource allocated to “function A” to perform
its competence beyond the boundaries of its function will immediately result in a conflict
between the head of the function and whoever has been given the task to deploy that
competence, usually a project manager. Unfortunately, virtually every minimally complex
activity needed by any company to achieve any goal is cross-functional in nature, hence we
are stuck in a real dilemma on how to best utilize the resources at hand. The more we realize
how paralyzing this dilemma is, the closer we come to understanding the paradigm shift
needed in Industry.
The clash between a functional organization and achieving cross-functional goals is,
quite plainly and simply, what keeps Industry stuck. This dilemma is the chronic conflict that
keeps the science of management from evolving into the real engine of economic growth.
Addressing the multilayered issue of how to optimize finite resources to maximize Throughput
is critical if we are to elevate the role of Industry into the generation of wealth.


Two key elements enable the optimal management of finite resources: predictability in
the execution of activities and their synchronization. In other words: IF individual activities are
performed with a high degree of reliability, i.e. with quality, AND these activities are
orchestrated with a powerful algorithm that allows the best possible synchronization towards
the stated goal, THEN we have an infrastructure that can maximize the Throughput that the
organizational system can generate.
As we have stated, the “inherent conflict” of any organization applies to Industry too:
hierarchical/functional company vs. non-hierarchical/functional company. There are two
needs to be simultaneously satisfied: how to control the system and how to increase its ability
to reach the customer. In other words: how can we control a system that evolves and grows
as a result of a better and better understanding of the customer?
In 1997 Dr. Goldratt published a book, in the form of a novel, called Critical Chain.
The theme of the book is that we can maximize the speed of new product development by
adopting a particular approach to Project Management (PM). The implications of that
approach are truly far reaching and pave the way for a complete and yet unexplored solution
to the inherent conflict. I believe it is worth discussing it briefly here, and much more comprehensively in a separate chapter in Part Three (see Chapter 14).

A network of projects
Recurring and non-recurring activities in an industry can be seen as “projects”.
Whether we seek to improve the speed at which we manufacture products, install new
equipment, organize shipments or complete the quarterly closing, we need the coordinated efforts of
many different competencies. Deploying these competencies in a logical sequence is
relatively easy. However, breaking assumptions about the way performances should be
controlled and measured seems to be a true cognitive ordeal. The measurement of performance seems inextricably connected with local, i.e. functional, indicators, while we all know that what matters is the company’s global bottom line. How do we come
out of this seemingly irreconcilable conflict? We do it by asking ourselves what company
functions are for, and uncovering the obvious truth that functions should house
competencies, not power.
Engineers, accountants, scientists, subject matter experts, should not be considered
members of a “company function”. Rather, they should be seen as valuable competencies
that can be deployed for the goal of the whole company. These resources, ALL the
resources, should be available for whatever “project” the company needs to accomplish.
What I am saying is that any company should be seen as a network of projects with the
global goal of maximizing the Throughput of the company. The Critical Chain algorithm that
Dr. Goldratt developed can be used to maximize the use of the finite resources of a company.
As a matter of fact, this algorithm can be used to redesign the way any company works.
Critical Chain becomes then much more than simply an algorithm to accelerate project
completion; it is the vehicle to integrate, control and deploy the resources of the organization.
Instead of company functions, there should be networks of projects; instead of heads of
functions, there should be managers of increasingly complex projects that draw their resources from a pool of available competencies with no resource contention; instead of
executives that fight for power, there should be cooperative work that is in synch with the goal
of the company. Instead of often conflicting local indicators of performance, there should be
one single driver for everybody.
Anyone who believes that stupidity and lust for power are not inevitable should try to understand this idea of organization. In Part Three we will deal at length not only with how to
achieve it practically but more importantly, with the elements of cognition that must be
triggered to make this change work.
In summary: an infrastructure built for speed must be based on Quality and a “Quality
System” must be in place to make it work. What do we need this infrastructure for?

(Figure: the Independence scheduling software, based on finite capacity)

Satisfying the customer


It may sound preposterous, but far too often functional companies, in their quest for local power and autonomy, forget the simple fact that a company’s success is based on customer satisfaction. Any infrastructure should be built bearing in mind the idea of satisfying the customer in their present and future needs. How do we do that? What does “satisfying” really mean?
The only real value we can bring to a customer (and to a supplier for that matter) is
connected with increasing their ability to compete in the marketplace and, clearly, this ability
can take different shapes.

Our customers (and suppliers) will be sensitive to our offer of products and services if:
a) they add a measurable benefit
b) they clearly remove a current limitation
Benefits and limitations are related to the goal that our customer (or supplier) seeks to
achieve. Industry should be guided by this simple principle: how can I help my customers
(and suppliers) to achieve their goals? This is the job of marketing; understanding how to help
customers (and suppliers) to achieve their goals.


Let’s make this broader. Industries do not exist in a vacuum; they play their role in the
very intricate networks of value creation for customers. Industries have the role of channelling
value through these networks and can only do that effectively if they understand clearly how
they can add that value, and if they do it ethically and for long-term, win-win purposes.
Industries exist in the market place, which is almost unlimited; the legal boundaries to
their activities (legal entities) are practical and necessary limitations whose only real goal is to
make it possible to manage them. By creating a limiting boundary managers do not feel they
are losing control. Unfortunately, these boundaries, as necessary as they can be, only exist
on paper. Interdependencies and interrelations are so strong and continuously created that
thinking of a company in terms of its legal boundaries is a recipe for disaster, and so is the
idea of marketing by only looking at customers’ current requests.

Intelligent Marketing
What should Marketing really be doing? Marketing is that aspect of Quality that aims at understanding, in its entirety, the business environment in which the company exists. Marketing
certainly has the job of designing the best possible offer the company can deliver (and we will
delve further into this aspect of Intelligent Marketing in Part Three) but its goal must be much
more comprehensive. The role of Marketing is to understand “markets” and provide a clear
cause-effect analysis of the most suitable way for a company to fit into those markets.

Marketing leverages a “market driven” infrastructure to provide the most suitable positioning
of the company in the networks of which the company is part. It does so by:
• equipping the sales-force with the appropriate intelligence to facilitate sales
• creating an open channel of communication with customers and suppliers via all
the post-sales service related activities
• actively engaging the supporting infrastructure in customer and supplier related
improvement initiatives
• relentlessly communicating findings connected with increasing the possibilities of
the company
• providing orderly and thoughtful insights into the often chaotic evolution of the markets
Intelligent Marketing calls for intelligent salespeople. Why do people go into sales? Why
would someone want to get up in the morning and travel, be away from home most of the
time, eat out every day often not in nice places, face rejections, argue with strangers as well
with their own production colleagues? Who is Willie Loman? Salespeople live a mystique,
which is hard to understand, hence to tackle rationally. Empirically, after many years, all I can
say is that salespeople like the challenge, the freedom and the discretionary power to take
decisions. Salespeople like to be heroes. Unlike financial people who are often motivated by
greed (and the invariable blindness that comes with it), salespeople are motivated by the
challenge to win customers’ hearts. A real salesman sells himself, and he loves it.


Salesmen like to feel they are breadwinners; in their own eyes they do not contribute to the success of the company like everyone else: they are special. They follow their instinct, not procedures.
Unfortunately, salespeople can singlehandedly jeopardize any systemic endeavour unless we
integrate them organically in the way the company operates. We shall return to this in Part
Three in Chapter 13 dedicated to the External Constraint.
Let’s summarize what we’ve said so far:
An industry poised for long-term, ever-growing success must be based on a solid, quality
driven infrastructure, and a critical part of that quality is related to how much and how well
customer needs are understood and continuously satisfied. Marketing is part of the Quality
activities. Such a company replaces the conventional functional design and modus operandi
with an organizational design inspired by resource optimization that leverages competencies
for the completion of resource-contention-free, finite-capacity-based networks of projects.

Science, theory and validation


The essence of the scientific method is to ascertain the realm of validity of a theory, i.e.
the conditions that have to be satisfied for that theory to be valid. A theory has to be proved.
The difference between true science and “social” sciences is in the rigorousness of that proof.
If organizational science wants to leave the quicksand of behavioural studies and move onto
the firm ground of applied science it must develop the ability to ask, just as science does,
fundamental questions.
Even a much less powerful organizational design than the one originated by combining
high reliability and effective synchronization would yield a more viable organizational structure
than the functional structure. Every day, every industry experiences the drawbacks of a
functional structure. The associations of Quality professionals and the registration bodies,
while promoting a loosely defined system approach to management, always refer to the
hierarchical structure and seek better and better gimmicks to minimize the damage.
We do not lack the knowledge needed to build and operate a truly systemic company
guided by the idea of Throughput; what we lack is the ability to decide to use that knowledge. The
ability to choose what is “right” goes hand in hand with the ability to develop and reveal our
consciousness. Any time we have to make a decision, thus triggering a change, we have the
need to “name” that change in a way that we perceive as “good” for us. We have to perceive the
value of our choice at every level of our consciousness: intellectual, emotional and pragmatic.
The successful adoption of a systemic approach to the management of Industry must
address in an unprecedented way people’s consciousness, the deeply rooted images that they
build about themselves, their role in the company, and their perceived status in the world.
As sad as it may seem, people accept the nonsense and frustration they have to cope
with in a functional company because they can relate to it. And so they accept the
progressive separation between who they think they are and what they do. Such an accepted
separation deteriorates into an even worse disconnection: the one between what we learn
and what we accept to change as a result of that learning. It is time for that to change.

7.

CREATING CONSCIOUS INDUSTRY, AND INDUSTRY WITH A CONSCIENCE

Knowledge is theory. We should be thankful if action of management is based on theory. Knowledge has temporal spread. Information is not knowledge. The world is
drowning in information but is slow in acquisition of knowledge. There is no substitute
for knowledge.

W. Edwards Deming, The New Economics

Towards a different use of the mind


We are living in times of unprecedented access to information and unprecedented depth of
knowledge. While useful, information is not the same thing as knowledge, and only
knowledge can actually change us.
What I believe, but cannot prove, is that the mechanisms of our cognition, the ability to
leverage emotions and rationality to facilitate learning, are inadequate to support the pace at
which knowledge is being generated. I believe that we, as humans, are currently cognitively
limited in our ability to cope with change.
In the last three hundred and fifty years the scientific revolution begun by Galileo Galilei
and Sir Isaac Newton, and the method of investigation that is typical of science, have allowed
mankind to gain insight into virtually every field of human endeavour, from the immensely big
to the minutely small. Technology, a spectacular offspring of this knowledge, has accelerated
our understanding of many of the underlying laws of the natural world. In spite of all this
progress, still today we know very little about how we think. If our minds were able to absorb
and adapt to what we know, if our ability to acquire and deploy what we learn were only
minimally sufficient, we would not be in the crisis we are in.
I believe, but cannot prove, that greed, stupidity, arrogance and all the evil that debases
the human condition every day in the world is the result of a very limited ability to use our
intellect.
We live in an extraordinarily complex world, where interdependencies and interconnections
multiply at an ever-increasing speed. The cause-effect relationships that govern the world as we
experience it form a super intricate ‘network of networks’ and we have a very limited
understanding of the underlying properties of these networks and the laws that govern them.
Our mind grasps at straws any time we have to come to terms with the implications of
the non-linear laws that govern networks. Indeed, Nobel Prizes are being won for providing
strong empirical evidence that “rational decisions” do not belong to humans. We have to
come to terms with the fact that reality is far more complex than we would like it to be.
A blatant example of our refusal to recognize this is the sheer inability of financial
institutions to cope with probability models that are not built on Gauss-like assumptions.
In spite of the work of Benoit Mandelbrot decades ago showing the inadequacy of these
assumptions, they continue to form the backbone of the way people interpret and interact with
financial markets. The damage created by this inability to embrace more appropriate models
is incalculable. On the positive side, today very sophisticated computational models allow us
to penetrate complexities by connecting simple structures. This allows us an increasingly
profound understanding of complexity. Nevertheless, we can see how completely disconnected
mainstream thought is from this awareness when today undergraduate students are still
taught the ludicrous concept of “the invisible hand” of the markets and the frankly hilarious
models that sustain the “supply and demand” approach to economics. In other words, we
already have the knowledge and the tools to do so much better than this, but we continue to
lag behind that potential.
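The gap between Gaussian assumptions and the heavy tails Mandelbrot documented can be made concrete with a short simulation. This is an illustrative sketch, not taken from the book, and all the numbers in it are invented: it compares how often “extreme” moves occur under a Gaussian model versus a heavy-tailed one (a Student-t with 3 degrees of freedom, closer in spirit to what Mandelbrot observed in price changes).

```python
import random
import statistics

random.seed(7)
N = 100_000

def student_t3():
    # A t-variate with 3 degrees of freedom: a standard normal divided
    # by the square root of a chi-squared (Gamma(1.5, 2)) over its df.
    z = random.gauss(0.0, 1.0)
    chi2 = random.gammavariate(3 / 2, 2)
    return z / (chi2 / 3) ** 0.5

gauss = [random.gauss(0.0, 1.0) for _ in range(N)]
heavy = [student_t3() for _ in range(N)]

def share_beyond_4_sigma(xs):
    # Fraction of observations farther than four standard deviations
    # from zero -- the kind of "impossible" event that sinks models.
    sigma = statistics.pstdev(xs)
    return sum(abs(x) > 4 * sigma for x in xs) / len(xs)

print(share_beyond_4_sigma(gauss), share_beyond_4_sigma(heavy))
# the heavy-tailed share is far larger than the Gaussian one
```

A model calibrated on the first series would treat the extreme events of the second as once-in-a-lifetime accidents; in a fat-tailed world they are routine.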
I believe, but cannot prove, that if we want to close the gap between the knowledge
available and what we are willing to use, we have to tap into a different use of the mind. We
have to learn how to see change as not simply something to be feared, but a natural, intrinsic
part of our life. When we learn how to use our intuition and intellect to implement consistent
action, change is not a threat and a hazard; it is a continuous source of new opportunities.

Transformation through better thinking and learning


It is possible to transform the way individuals and organizations pursue their goals. That
possibility is connected with a new and more powerful ability to use the three faculties of the
intellect connected with intuition (birth of an idea), understanding (analysis/development) and
knowledge (application/execution). Tapping into these largely unexploited faculties increases
our ability to leverage our sechel, or intelligence, and our ability to learn and act coherently
with this learning.

The issues then become:


• how can I fortify my intellect?
• how can I develop a better ability to generate intuition, understanding and
knowledge?
• is there a practical way of doing it?
As I pointed out previously, gaining a better, more disciplined handle on the way we
think, speak and act requires a non-negligible effort. This effort can be fuelled and
sustained only if the achievement of this new mastery addresses simultaneously the two
main needs that both individuals and organizations have: control and vision.
From now on, for the sake of clarity, we will always refer to organizational transformation,
as opposed to individual transformation, through better thinking and learning. Such a
transformation, of course, cannot be separated from the transformation of the individuals that
belong to it, but organizations have their own set of paradigms that are the offspring of the
way interdependencies are created and managed. In other words, organizations have a life of
their own that is generated by the somewhat unknowable combination of the individuals in
them and the way they interact with each other. It is this complexity that calls for a new,
stronger and deeper insight.

Change and intuition, understanding and knowledge


Dr. Goldratt has repeatedly pointed out that change (like any other entity we wish to
investigate) needs to be in some way definable and measurable and he has devised a
powerful set of “tools” to help us manage the change process. These tools have been
described in many publications, including the book Oded Cohen and I published in 1999, and
examples of their use will be provided and explained in Part Three of this book.

Goldratt talks about the three main phases of change and these are:
• what to change
• what to change to
• how to make the change happen

Change can be better understood and managed if we can link it to one of the intellectual
faculties of intuition, understanding and knowledge that are responsible for effectively
enacting the change. In other words, the task at hand is:
1) identify the phases of change
2) link them to the faculty of the intellect responsible for its enactment
3) develop an adequate mechanism to support the appropriate faculty
4) connect the phases as one process
5) ensure a metric is in place for the deployment of the change

Within the Decalogue approach we add a further stage that ensures the scientific validity and
robustness of our analysis and actions:
6) adopt the scientific approach, PDSA, for the management of this change
As we have already stated, there are three distinct phases in any transformational/change
process. The first one is when we have “intuition” of the current state of reality that needs
change (what to change). This intuition is fuzzy, and the blur originates from the many emotions
triggered by effects in the life of the organization. In TOC we call these “Undesirable Effects”
(UDEs); they are the light bulbs going on and off to warn us that a change is needed. We can
capture the intuition stemming from these effects with a diagram that clearly displays the
cause-effect relationships that unequivocally lead us to a clear picture of the present state of
reality: this support is provided by the TOC diagram called the “Current Reality Tree” (CRT).
Current Reality Trees leverage some categories of speech and bring them together to
form a clear-cut, very powerful image of the reality we are trying to change, in this way

“anchoring” the intuition to the fertile ground of precise verbalization. We will illustrate these
categories along with guidelines for building a Current Reality Tree in Chapter 10 of Part
Three.
A clear verbalization of our intuition helps us see our current reality in (almost) all its
facets and makes us aware of why this reality exists. Now that our intuition is strong and
clearly articulated, we can decide that we want to change this reality. As we will see when we
learn the mechanics, the Current Reality Tree is built by verbalizing all our mental models, i.e.
the profound images of the world that we have built for ourselves through the years of our life.
We quickly (sometimes too quickly for comfort) realize that our “reality” is nothing but the
result of these mental models, i.e. the assumptions we make about our reality. These
assumptions are the limiting beliefs that generate the undesirable effects we experience, i.e.
the Current Reality we want to change.
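As a sketch of the underlying logic (the UDEs below are hypothetical examples, not taken from the book), a Current Reality Tree can be represented as a map from each effect to the cause(s) that produce it; reading the map bottom-up surfaces the candidate root cause behind the undesirable effects:

```python
# A Current Reality Tree reduced to its logical skeleton: each
# undesirable effect mapped to the cause(s) that produce it.
# The statements below are hypothetical, for illustration only.
effect_to_causes = {
    "Orders are frequently late": ["Priorities change daily"],
    "Priorities change daily": ["We optimize local efficiencies"],
    "Quality problems reach customers": ["We optimize local efficiencies"],
}

def candidate_root_causes(tree):
    """Causes that are not themselves explained by anything in the tree."""
    all_causes = {c for causes in tree.values() for c in causes}
    return sorted(all_causes - tree.keys())

print(candidate_root_causes(effect_to_causes))
# → ['We optimize local efficiencies']
```

The real tool does far more than this (it checks the validity of every cause-effect link), but the structure is the same: many effects, traced back to very few driving assumptions.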

Creating a desirable Future Reality: solutions, roadmap, and actions


The way to create a more desirable Future Reality happens in steps. The first of
these steps is to challenge our assumptions, invalidate them and define a less restricting
paradigm more in line with the future reality we want to live and experience. Let’s clarify
this further.
A limiting belief is not something we have chosen consciously. It is something that has
grown over the years nurtured by experience; it is the inextricable combination of
unchallenged empirical evidence, cultural substrate and some natural, intrinsic disposition of
our mind to form images that is difficult to dismantle. Moreover, assumptions serve us well
most of the time: they help us make quick decisions without reinventing the wheel every time. As a
result, challenging assumptions is very difficult. Far too often transformational efforts
are shipwrecked on the shores of people’s inability or unwillingness to challenge their
assumptions.
As you will see in Part Three, the mechanics of the thinking tools are fairly straightforward
and easily learnt, but I know very few people who can do much with these mechanics. The tools
may, at first sight, seem simple because they are made up of words, but rigorously tracing the
cause-effect relationships between our intuition, our current reality, and the mental models
that keep us trapped in that reality is intellectually demanding and mentally tiring. Most people
are simply not accustomed to the level of focus, analysis and open-mindedness that a precise
use of the tools requires. This is not to discourage anyone, on the contrary; this is to help
anyone who is serious about transforming their company to adopt the correct approach to
something which is truly, literally, life changing.
A different paradigm opens new and hitherto unexplored avenues towards the future.
However, these avenues have to be defined with precise solutions, called “injections” in TOC.
These are powerful solutions that translate a new paradigm into a new course of action.
Intuition will dictate which ones they are. These injections point in the right direction and
make us see where we want to go more clearly. They are the road signs to the future. Now
we need a full-blown picture of the road in front of us.

The second step is to ensure that this picture is a) complete, and b) highlights all the
possible pitfalls. This is achieved through understanding (analysis/development). This is when
we need the ability to see ahead to both the goal and what could prevent us from achieving it;
this is when we leave behind the comfortable habits of “hunch management” to embrace an
epistemological view of the organization. Not the forceful, chaotic, nonsensical,
life-consuming, relationship-abrasive fire-fighting of common practice, but an orderly, methodical,
reflective, all-encompassing, long-term oriented, purposeful vision of the future.
Understanding is the human ability to imagine and plan beyond the contingencies of the
present and towards a meaningful future. The Future Reality Tree (FRT) and the Negative
Branch Reservation (NBR) are the Thinking Process tools that support and enhance our
understanding.
If the Future Reality we yearn for has been delineated, the potential pitfalls identified and
a precise strategy crafted, then all we need is a step-by-step procedure to walk into the
future. This procedure has two aspects; on the one hand it is a protocol with detailed
instructions, pretty much like the one used by NASA to bring Apollo 13 back from the moon
they had lost. On a deeper level, what we need to operate this procedure is a new kind of
knowledge.

The Future Reality Tree

A new kind of knowledge


Knowledge and know-how are two different things and, generally speaking, they
belong to different cognitive domains. A professor of Quantum Mechanics has a complete,
detailed and thorough understanding of the laws that govern all the phenomena
connected with electromagnetic waves, but I bet that only a small subset of them would be
able to design and build the wiring of a building. On the other hand, I am equally confident
that the most skilled and technically competent electrician that we would confidently hire
for the wiring of the building would probably grasp at straws when it comes to the basic
mathematical laws that govern the skills that they master. To accomplish the
organizational transformation that we have planned by leveraging our understanding, we
need a third step. We need both knowledge and know-how, but also something more.
We need this knowledge to trigger a higher level of consciousness; we need this
knowledge to become an active vehicle of self-actualization; we need this knowledge, and
the power that stems from it, to address the fundamental needs that move us towards our
future reality. We need this knowledge to be one with ourselves.
For an industry this means that the protocol we have devised must not only be
completely within the realm of knowledge and know-how of the people that have to enact
it but, even more importantly, they have to feel that they are taking steps towards a future
they want to be part of. People have to truly perceive that by impacting their lives, this
organizational transformation will produce a better future. In that way there is no need for
them to feel that their personal life is something ‘suspended’ during the hours at work.
In an organizational transformation, knowledge is as separate from sterile erudition
as it is from the aloofness of know-how; knowledge, on the contrary, is the vibrant,
conscious engine that truly drives the transformation. As Dr. Deming used to say: “We do
not install knowledge, I wish we could”.
How do we support and enhance this knowledge? We can obtain “packets” of
knowledge from the people that have it by prompting them to participate in this knowledge
creation and sharing. We can then combine these packets into an easily understood
sequence and we can assign to each person the job of executing tasks coherent with
their knowledge. The Thinking Process tools called the Prerequisite Tree (PRT) and
Transition Tree (TRT) enhance our ability to make a proficient use of our knowledge.
Moreover, powerful synchronization techniques have been designed to provide the best
possible metrics and control for the roll out of the transformation.
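Sequencing these “packets” of knowledge is, at its core, a dependency-ordering problem. The sketch below (the intermediate objectives are hypothetical, not from the book) shows a Prerequisite-Tree-style ordering using the Python standard library:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each intermediate objective is mapped to the set of objectives that
# must be achieved before it. The objectives are invented examples.
prerequisites = {
    "Launch the new process": {"Train the operators", "Size the buffers"},
    "Train the operators": {"Write the operating protocol"},
    "Size the buffers": {"Write the operating protocol"},
    "Write the operating protocol": set(),
}

# static_order() yields a sequence in which every objective appears
# only after all of its prerequisites have appeared.
sequence = list(TopologicalSorter(prerequisites).static_order())
print(sequence)
```

The output always starts with the objective that depends on nothing and ends with the one everything else enables, which is exactly the step-by-step quality we want from a Prerequisite Tree.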

The Transition Tree

Understanding, knowledge and science


Accomplishing the organizational transformation required for Industry to fulfil its role of
wealth creation does call for a higher and better use of our intellect. I strongly believe that the
thinking leveraged by the tools designed by Dr. Goldratt is critical to this end. I also believe
that understanding and knowledge must be solidly linked to the scientific approach and the
wealth of discoveries it has generated in the last 350 years and that it continues to generate.
The essence of this scientific approach is embodied in the Plan-Do-Study-Act (PDSA)
cycle, advocated by Dr. Deming as the main mechanism to generate and sustain the
application of knowledge within any organization. Each of these four steps must be guided by
statistical insight. The PDSA cycle is rooted in the epistemological belief that phenomena
must be described and understood in statistical terms; this statistical vision of the world,
largely applied in the investigation of the natural world, has been adopted by Industry in a
limited way and completely misapplied or ignored by economists and financiers, not to
mention accountants.
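A minimal sketch of what “statistical insight” in the Study step can mean in practice is a Shewhart-style control check: points beyond the mean plus or minus three standard deviations signal special-cause variation that must be investigated before acting. The data below are invented for illustration; this is one simple instance of the idea, not the full method.

```python
import statistics

def study(measurements):
    # Study step of a PDSA cycle, sketched as a Shewhart-style check:
    # compute three-sigma control limits and flag any point outside them.
    mean = statistics.mean(measurements)
    sigma = statistics.pstdev(measurements)
    lower, upper = mean - 3 * sigma, mean + 3 * sigma
    signals = [x for x in measurements if not lower <= x <= upper]
    return mean, (lower, upper), signals

# Invented daily lead times (days); one day is clearly out of control.
lead_times = [4.1, 3.9, 4.0, 4.2, 3.8, 4.1, 4.0, 3.9, 4.1, 4.0,
              4.2, 3.8, 4.0, 4.1, 3.9, 4.0, 4.2, 3.9, 4.1, 9.5]

mean, limits, signals = study(lead_times)
print(signals)  # → [9.5]
```

The point of the check is managerial, not mathematical: a process showing only common-cause variation is improved by changing the system, while a special-cause signal calls for investigating that specific event.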
This lagging behind the sciences has created a divorce between industry/finance and the
intrinsic statistical and largely non-linear nature of phenomena. Conversely, the constant
attempt of social sciences to filter everything through deterministic lenses has created an
anachronistic gap between Industry and Capital. Accordingly, to successfully accomplish an

organizational transformation, Industry has to adopt a rigorous scientific approach and close
the gap with financial institutions by confronting them on the validity of their measurements.
The purpose of Industry is not to serve capital markets but the other way round. Capital
serves to enable Industry to produce. Industry can only be served by financial markets when
the ‘analysts’ of those markets start to ask Industry relevant questions, ones that truly unveil
how the Industry is doing, not questions that serve to produce fictitious numbers on which
delusional financial products can be constructed.
I hope the path to the transformation of Industry is clear: we need to improve our ability
to draw from our intellect and connect together its elements using the powerful catalysts of
our ability to think. Industry, also, needs to embrace wholeheartedly and without hesitation a
scientific approach underpinned by the PDSA cycle and its statistical nature. Industry must
also close the gap that separates statistically based performance measures from the
deterministic and linear thinking of the financial world.
An Industry with a high level of consciousness as we have described has, by definition, a
high level of interconnection with those who work within its system, with those who supply it
and those who are its end-users. The ongoing task is to satisfy ever better the needs for
control and vision of Industry, of those who work within it, and of those who interact with it up
and downstream. The inevitable outcome of this level of consciousness and connection is a
more ethical way of operating. There is automatically no space or use for hedging and
undercutting, or for pollution of the environment. The inevitable outcome is Industry with a
conscience.
Our ability to create and follow this kind of organization is directly linked, as we examined
in Chapter 5, with our ability to increasingly do who we are. This requires us to accept a level
of personal freedom to which few are accustomed and even fewer feel comfortable with.
What do I mean by freedom? It does not mean laissez-faire, it does not mean just doing
whatever you want. It means accepting the responsibility of understanding that our only true
limits are ourselves and what we are able to perceive for ourselves. Our limits are our mental
models, and these mental models dictate the boundaries of our actions. As we begin to
challenge these models/assumptions, we begin to taste the vivifying experience of exploring
our true potential. Fulfilling that potential is not a question of luck but one of choice.
The prevailing management style (or lack of it) has taken the separation between
knowledge and consciousness to an extreme and impaired people’s ability to choose
intelligence over stupidity. This ability needs to be given back and enhanced manifold.
Industry must rebuild itself on the debris caused by this disconnection by leveraging this
intrinsic unity between consciousness and knowledge, by re-learning how to connect learning
and choice, by retooling its people’s ability to manage intelligently. A new covenant with
Industry needs a new covenant with its people, one where each individual has the opportunity
to better themselves and in so doing better their industry. We have the intuition, we have the
understanding and we have the tools to make it happen. However, in order for the
transformation to truly take hold we need to create a new kind of leadership and that
leadership must be the expression of a new Economics.


PHASE OF CHANGE                 FACULTY OF INTELLECT            THINKING PROCESS TOOLS

What to change                  Intuition                       • Collecting UDEs
                                (birth of an idea)              • Core Conflict Cloud
                                                                • Current Reality Tree

What to change to               Understanding                   • Injections
                                (analysis/development)          • Future Reality Tree
                                                                • Negative Branch Reservation

How to make the change happen   Knowledge                       • Prerequisite Tree
                                (application/execution)         • Transition Tree

8.

NEW LEADERSHIP, NEW ECONOMICS

…the job of a leader is to accomplish transformation of his organization.

Deming, The New Economics

Intelligent, from “intelligere”, the ability to acquire and deploy knowledge;

Management, from “manus”, hand, and “agere”, to act, i.e. to act with the hands.

The views, experience, analysis and practice expressed in this book are the fruit of a
systems-based paradigm of organizations and management based on knowledge and
transformation. By transformation we mean the relentless effort of organizations to transform
themselves into and consistently perform as the most effective vessels to fulfil their role in the
world with the highest quality and speed. The umbrella term we use to cover all the various
aspects of theory and practice that allow this transformation to take place is Intelligent
Management. In order to have Intelligent Management, we need a different kind of manager
and leader.
Moses was the leader of the Jewish people and so was Muhammad for the Muslims. In
much more recent times everyone would agree that Nelson Mandela is a leader, and so was
Mahatma Gandhi. In the more mundane world of sport it is undeniable that Michael Jordan
has been a leader and so was Pelé. Some organizations “lead” in their field and so do some
editorial ventures. In other words, virtually every human endeavour has someone that stands
out from the crowd.
If we were to draw a histogram of the words most frequently used in management
lingo, “leadership” would probably have the tallest bar. This is in direct conflict with
the relatively few names that come to mind when we think of “leaders”, and it is perplexing to
witness how often the noun “leader” is attributed to utterly debatable characters.
We cry out for leadership when it is missing; I will try to clarify what I mean by
leadership within the context of Intelligent Management.

Leadership and knowledge


A leader is somebody who has a theory; somebody who “owns” a body of knowledge
that backs his claims that they can accomplish a transformation within their span of control.
The transformation that Intelligent Management advocates is one of system optimization as a
prerequisite for innovation; one in which competition is replaced by cooperation, where
performances are managed using appropriate statistical thinking and not assessed
deterministically, where teamwork is fostered and not the ranking of individuals.
Knowledge is mandatory but it is not enough. A leader must be able to get their message
across; they must be able to address their people in a way that touches their brains but also
their hearts. A leader must also have fortitude; they have to “walk the talk” and be an
example. A leader has to have the strength that it takes to accomplish something that
probably only they have the vision of, and they must be able to clearly communicate that
vision. A leader must have an action plan, a step-by-step guidance that people can
understand and execute.
These are necessary attributes for a leader, but they are not sufficient.

Leadership and selflessness


A leader must provide their people with a vision of life and this vision must be permeated
with the idea of justice; a leader with their words and actions must elevate people’s faith in
the possibility to build a better world. Jesse James and John Dillinger captured the
imagination of many and, sadly, they had many followers. But they were not leaders because
their vision was against any acceptable ethics.
A leader is selfless and seeks no power; on the contrary they see their job as servitude.
This is not because they are weak but because they understand their role in the world.
A leader is, at their very core, an enabler of people’s potential.
One of the greatest spiritual leaders of our times, Rabbi Menachem Mendel Schneerson,
The Rebbe, says that a “leader has to be a reflection of our own light back to us, so we may
see ourselves anew”.
A leader must aim at creating other leaders not followers; they do so by elevating their
people’s ability to address their inner drives and cater for their primal fears. They seek no
control over people’s emotions; they empower them, they give them the possibility to be the
best they can. A leader understands that we live in a world where interconnections and
interdependencies easily go beyond our ability to fully comprehend them. The only
meaningful role we can play in this interconnected network is to act as a vessel.
The transformation that Intelligent Management pursues is one where emphasis is
placed on helping ventures to fulfil their role in the world, to pursue the goal they set for
themselves, to accomplish the vision that has inspired their creation. Almost invariably, this
has to deal with enabling the organization to focus on the key elements that determine
success. Conceptually, these elements can be summarized as Quality, Involvement and
Flow.

Quality, Involvement, Flow


A leader sees Quality as larger than life, an all-encompassing attribute that permeates
everything we do. Quality is grounded in the belief that management in any shape or form is
prediction, and the ability to predict is the essence of any epistemology.
Flow is the result of our actions; flow of materials, information and money. Flow is
associated with motion, hence with evolution and the pace of change. It is the derivative of
space and time, it reminds us that no real value can be created unless a certain amount of
change is taken into consideration.

Involvement of people is what enables Quality and Flow. The foundation for any
involvement is the teamwork that underpins the functioning of organizational systems. We
can build a system by asking everyone to subordinate to the goal of a system, to give up on
something personally for the greater good. It often works but invariably leads to people
working mechanically; in time, it takes away some pride in workmanship, it replaces
innovation with compliance. A leader understands that a system can develop and continually
improve its results towards the stated goal if people in the system see in what they do for the
system an enhancement of their personal life. A leader understands that there is no conflict
between self-fulfilment and subordination to the goal because a correctly designed system
allows an individual to gain more by subordinating to the goal than they would achieve
independently. A leader understands that what people do and what people are must be one.
A leader does not exist in a vacuum; they can only exist if surrounded by people that
acknowledge them. They select their people and those people recognize the leader as such.
A leader must evolve and so must their leadership. While the core tenets of their leadership
can be everlasting they, personally, may not be. A leader must be ready to pass on the baton
at the right time. A leader is a leader when they lead, but also when they stop doing so.

A new economics
Best efforts and hard work, not guided by new knowledge, they only dig deeper the pit
we are in. The aim of this book is to provide new knowledge.

Deming, The New Economics.

The new knowledge for creating systems-based organizations provided by Deming calls
for a new economics. Economics belongs to the realm of the so-called “social sciences”. It
aims at investigating the production, distribution and consumption of goods and services.
Economics also concerns itself with the study of economies and how the players, the decision
makers, act to guide economic choices.
At the most fundamental level, economics should pertain to the understanding of how to
deal with the resources at hand and optimize their use for a stated goal. In this sense,
economics is also a “political science” because the use of these resources should be guided
by political decisions. (Indeed, politics should be guided by a philosophical vision, an ethical
one, but this is another story).
Economics mimics science by developing models, economic models that should explain
the economic outcome of certain decisions and these models are, or should be, inspired by a
vision of the world. In recent decades, The Royal Swedish Academy of Sciences has instituted
a Nobel Prize for Economics and several economists have been awarded for their models.
In summary: an economist, hopefully inspired by a vision of the world, develops models
that should guide the economies of countries to an optimal utilization of their resources for a
stated goal. Governments, hopefully guided by a vision, embrace economic models based on
their adherence to their vision.
Any model is based on a set of assumptions; it must be. When these assumptions are
not verified and validated the model is bound to fail in providing the results it was designed

for. Of course, the political circumstances of any democratic country change very frequently
and the ability to translate models into effective policy making is always less optimal than one
would wish; moreover, an increasingly interconnected world calls for increasingly complex
models with assumptions that are harder and harder to validate. Indeed, governments are
pressed to take actions and these actions have to accommodate for political agendas not
necessarily driven by the vision that inspired the economic model. By the way, when time
(and perceived risk/reward) comes into the picture and we slide into the field of “finance”, we
witness the full potential of the prevailing economic paradigms as reflected in the models that,
tragically, still today purport to create value.
If we continue the analysis of this chain of causes and effects we can understand why
the world is experiencing the current economic situation. I would like to prompt the reader to
broaden their view on what economics should be.

Flawed models
Mainstream economic and financial models, the ones that currently rule the markets and
determine value, have shifted their focus over the years from what is best for the society they
should try to model to what is mathematically possible to achieve for the benefit of a few.
I use the word “mathematically” with a sense of grief. Mathematics is a very serious business;
it is thanks to mathematics that we understand the physical world and it is thanks to its
rigorousness that we are confident that the scientific method can provide acceptable validity.
Sadly, most economists and financiers at best can be considered “hands on” mathematical
labourers and their models are far from being the offspring of any scientific method.

Current mainstream economic and financial models are flawed for two sets of reasons:
1. They are often divorced from realistic assumptions about the situation they seek to
model AND from the managerial actions that should ensure the predicted outcome. In
other words, the modelling happens in the vacuum of second tier “mathematical”
speculations with flawed assumptions about what is possible or impossible to achieve
managerially.
2. Mainstream economic and financial models pursue an idea of value that is divorced
from any concept of the general wealth and wellbeing of individuals and society, with
notable exceptions such as Amartya Sen. Prevailing models are based on a
systematically disproven “rational” behaviour that is driven by the lust for individual
profit. These models are rooted in the paradigm that if somebody wins somebody
else has to lose. They call it “competition” and a gigantic and ineffective apparatus
has been created to “ensure” fair competition.

A new kind of freedom


In order to re-assert economics as a useful field of investigation we have to re-ground it
in a new paradigm; a new economics can only be originated by a new outlook on value and

wealth. As Senator Robert F. Kennedy said more than once, “The GDP cannot be considered
a measure for the standard of our lives”.
The starting point is to define what the role of the government should be and which
policies an economic model should mirror. Any government should first and foremost protect
the freedom of its citizens: freedom from any risk of slavery. Beyond the ability to protect
ourselves from enemies and to practice the religion of our choice, three major factors
impact our freedom: freedom from ignorance, freedom from the tyranny of diseases we
cannot afford to cure, and freedom to start or adhere to ventures, business or otherwise.
So, the role of the government in establishing and endorsing an economic model is clear:
a solid education and research system, affordable healthcare for everyone, a network of
support for the development of any form of free enterprise.
How we build these systems, how we manage them and what set of values should
inspire them is the kernel of the new economics. Economics then really becomes the science
that studies how countries should develop.
The new economics should not just be concerned with better mathematical models to
portray scenarios; any serious mathematician would always alert decision makers to the
probable fallacy of such models. The new economics must be intimately connected with the
ways wealth can be created and the best ways to increase the distribution of this wealth.
The world we live in is becoming exponentially more interconnected, and wealth
(and its creation) is a multifaceted entity. Which is the wealthier country, one where the GDP
is high but millions of people cannot afford serious education and healthcare, or one where
the GDP is lower but these “freedom rights” are guaranteed? This conflict exists only because
“economics” is anchored to flawed assumptions about wealth and value.
The distribution of wealth, seen as conflicting with the right of the individual to amass
personal wealth to the detriment of others, has always been labelled as “socialist” and as
such unsuitable for the free world. The ugly truth is that prevailing economic and financial
thinking has led to the squandering of the resources that the planet has available and to the
stifling of innovation. This thinking has systematically favoured short-term decisions over
long-term planning. This thinking has swayed tens of thousands of talented people away from
applying their minds to constructive and foundational work and towards the sterile and
artificial domain of “financial products”. This thinking has led us to believe that we can create
something out of nothing.
Never in human history has the word “scarcity” meant so much. Our resources are
scarce and we need to learn how to use them; the name of the game of any serious
economic effort then becomes “sustainability”. The new economics must become the science
that studies the optimization of scarce resources and in order to do so must tap into the
bodies of knowledge that deal with how finite resources can be successfully managed.
The new economics must also be based on the founding assumption that no win can be
based on somebody losing; that we are all interdependent and the wellbeing of individuals is
critical for the wellbeing of society; that wealth must be created in order for it to be distributed
and any form of imbalance will soon turn into a global loss; that individual success to the
detriment of others cannot be sustained. The new economics is founded on the assumption
that individuals, organizations, large systems and networks and, ultimately, countries are
vessels for the creation and distribution of ideas, products, services that help everyone to live
better, more intelligently and harmoniously with our environment.
The new economics will strive to provide not just mathematical platforms but also the
practical means to achieve a meaningful life.

9.

THE TEN STEPS OF THE DECALOGUE

Ten years, ten steps: the Decalogue and Intelligent Management


The goal of this chapter is to describe the ten steps of the Decalogue™ in further detail.
Before I do this, I think it is useful to summarize how the ten steps of the Decalogue™ were
developed and outline their basic tenets.
The book I wrote with Oded Cohen was a snapshot of the results of collaborative work that
we had begun four years earlier. Oded was (and indeed, still is) my friend, mentor and teacher;
he believed that my drive towards a unifying approach to management based on Dr. Goldratt’s
Theory of Constraints and the teachings of Dr. Deming was worth pursuing and decided to put
the weight of his wisdom and expertise behind this endeavour. Why did these two approaches need to
be unified? Because as powerful as his philosophy may be, Deming’s ideas are dauntingly
difficult to implement, and TOC, while revolutionary and effective, lacks completeness as a
solution without the rigour of understanding and using Statistical Process Control.
In 1999, North River Press published our work as Deming and Goldratt: the Decalogue.
Larry Gadd, its owner, had published all of Dr. Goldratt’s books over the previous 17 years and
believed that Oded Cohen and I had developed a significant piece of new knowledge.
During the writing of our book, it became clear that we were talking about a precise
algorithm for leading an organization through a systemic transformation. This algorithm had
ten steps, hence it was named the Decalogue. The gestation was emotionally cumbersome
and the end result was a book that was less than optimal. However, our efforts served the
purpose of opening the debate on how the work of these two giants of management could be
realistically furthered.

The ten steps of the Decalogue in three fundamental phases


The ten steps that make up the Decalogue were derived semi-empirically and validated
by a Transition Tree that Oded and I wrote to ensure completeness and to form the basis for
our collaboration. These steps are very operational and in order for them to be fully
understood it is necessary to elucidate upon their conceptual background. I will do so by
referring to them within three fundamental phases.

Phase one: the system


The Decalogue was developed, scientifically and otherwise, within the framework of
system management; a system is defined as a network of interdependent components that
work together for a clearly stated goal.

Accordingly, at the onset of any Decalogue effort it is mandatory to define:


a. the goal of the system we intend to operate (well beyond the boundaries of the
legal entity that defines “the company”);
b. the units with which we measure the achievement of the goal (e.g. cash per year);
c. the method we intend to use (the “how”) to measure what we want to achieve;
d. the interdependencies (the links and linkages of the network) that allow the system
to achieve the stated goal.

Phase two: managing the system


The system must be managed. In my opinion, the most relevant part of the monumental
contribution that Dr. Deming gave to management, allowing this human endeavour to enter
the realm of science, was his realization that any system, in order to be managed, must be
understood in terms of the variation of its components, i.e. the processes that make up the
system. Dr. Deming understood that, at the most fundamental level, the interaction of people,
methods, materials and the environment in which these elements exist, generates variation.
By leveraging the rigorousness of the statistical work of Walter Shewhart, Dr. Deming created
a unique and unrivalled management philosophy that provides, if properly understood and
metabolized, the guiding elements for a transformation. Dr. Deming’s work, named by him at
the end of his long life as the Theory of Profound Knowledge, is for the science of
management what the Theory of Relativity is for our understanding of the physical world: a
paradigm shift. Understanding and managing variation in the system is foundational for the
successful adoption of the Decalogue and for the making of that shift.

Leveraging the constraint


About thirty years ago, Israeli scientist Dr. Eli Goldratt started to point out, with as much
lucidity as vehemence, an obvious but neglected fact: any system, organizational or
otherwise, is intrinsically limited in what it can achieve by one or more limiting factors. If we
choose these limiting factors (“the constraints”) strategically rather than opportunistically,
and take full advantage of (exploit) them by organizing (subordinating) the whole company to
that purpose, we can maximize what the system can realistically achieve towards its stated
goal. Dr. Goldratt has amply illustrated his view on how organizations can achieve their goals
following his approach in the vast body of knowledge he has developed named the Theory of
Constraints (TOC). Dr. Goldratt has developed a wealth of very powerful applications for the
proper utilization of the finite element(s), the constraint(s), that limit organizations in achieving
their goal and has trained a whole host of instructors capable of teaching them.
Nearly twenty years ago Dr. Goldratt started to develop what he calls the ‘Thinking
Processes Tools’ and described them in a novel called It’s Not Luck, dedicated to illustrating
how it is possible to systematically overcome “constraints” that are not physical.
TOC has been nothing short of a revolution and I had the great good fortune to learn it
from the most profound of its scholars, Oded Cohen. Needless to say, the applications of
TOC to managing the constraint are critical for an effective implementation of the Decalogue.
Just like Statistical Process Control, the most famous offspring of the theory of variation,
these applications need to be fully understood if they are not to become a mere expedient.

Phase three: grow the system in a sustainable way


A system must grow in its ability to achieve its goal. This growth can only happen in a
sustainable way if the people that work in the system and the people that work on the system
are capable of learning about methods, products and markets in a continuous, systematic,
orderly manner and are capable of and willing to apply their learning. The organizational
design that supports this continuous growth through learning is probably the most complex
and least understood aspect of the systemic approach to the management of organizations.

To summarize: The basic elements for sustaining the long-term development and success of
an organizational system are:
1. Clarity on the goal and the network of interdependencies that enable its achievement;
2. Understanding and management of the variation associated with each process and
their interactions as well as identification, exploitation and subordination of the
system to a set of limiting factors, the constraint(s) of the system;
3. Establishing an organizational mechanism that ensures appropriate capitalization on
people’s continuous learning about methods, markets and products.

Beyond the Decalogue: Intelligent Management


Whereas the basic scientific principles postulated in the book Deming and Goldratt
remain completely valid, the last ten years of relentless application of these principles have
yielded a new methodological approach in its own right.
Practicing the application of the ten steps has highlighted their inner strengths and
weaknesses, prompted new and deeper questions and provided an opportunity for creating a
much more integrated, cohesive and defined algorithm. What has emerged out of this ten-
year process is a powerful management philosophy that provides a challenging method for
the transformation of the present style of management into one of optimization. Optimization
means that every component of the system is optimized (where appropriate, this means sub-
optimized in the traditional sense) to achieve the goal of the system. I do not yet have a name
for it; for now let’s call it “Intelligent Management” or IM.
This transformation is a lofty goal, and I believe that it can only be achieved if we shake
at the roots not only the way we think about performance, but also who we are and how we
intend to be meaningful in what we do. The freedom of choice that most of us enjoy comes
with a burden of responsibilities and we have to learn how to live and honour these
responsibilities in order to preserve this freedom.

Towards a unified approach


What are we missing? From a strictly scientific viewpoint, not very much; in other parts of
this book you will find reasonably detailed explanations on how to deploy the above-mentioned
concepts in a practical yet coherent manner. Moreover, if we accept that management,
as a human activity, has a non-negligible empirical component, we must also
be open to many more “techniques” that can help companies to achieve their goals.
Bookstores are filled with “one minute manager” type publications.
I believe that what is missing is what holds all this knowledge together. I believe that
what we are missing is the global framework within which this knowledge can flourish.
I believe that if we want to attempt to build an epistemologically unified Theory of
Management we have to deal with the fundamental elements that make up who we are:
speech, thought and action. And we have to do it not in the vacuum of a psychological
investigation, nor in the empirical realm of psychiatry, nor within the ill-defined parameters of
organizational development. We have to do it within the comprehensive, holistic, systemic
ground of human cognition. We have to explore how our mind connects elements of intuition
with their orderly application in order to achieve clearly defined objectives. In doing so, we will
unveil the basic, common threads that make us human, hence more alike than we like to
think.
The last ten years have seen an exponential growth in the planetary interconnections of
which we are all part. The artificial walls built by our inability to conceive and experience
higher levels of interdependency are crumbling. It is time for the science of management to
play its part in helping our lives to be meaningful in the work place.
The ten steps forming the Decalogue were the answer Oded and I came up with and the
Transition Tree that validated this algorithm made us feel confident in the sufficiency of the
solution.
This was the beginning of a ten-year journey in applying and verifying the validity of the
solution. What follows is a brief illustration of the ten steps amply described in our book Deming
and Goldratt. In Part Three we shall provide examples and further elucidation on these steps.

The ten steps of the Decalogue


Step One: Establishing the goal of the system, the units of measurements and the operating
measurements

Without a goal there is no system and without clarity on what to measure in the system and
how to measure it, talking about a goal becomes lip service. GAAP accounting utilized to
support managerial decisions is enemy number one of productivity. EBIT, EBITDA, EPS
and any form of GAAP derived measurements totally miss the point of what a company
should strive for. If the goal of a company is connected in any way with making money, then
all we need to know is:
• What comes in (sales);
• What goes out to purchase materials and services that go into the products we sell
(TVC, Totally Variable Costs);
• What we need to make the system function (fixed costs + investments), Operating
Expenses (OE);
• The inventory (I) we need to keep in the system to ensure that we always have
enough “material” to produce and ship.

These basic variables are connected in the following way:


Sales minus TVC = Throughput (T)
T minus OE = Net Profit
T minus OE minus I = cash profit, the physical money we see in the bank account before
tax. Indeed, from year to year, I becomes ΔI.
Throughput is overwhelmingly more important than Inventory and OE because it can
potentially grow far more than OE and I can ever be reduced. Sadly, the whole GAAP effort
revolves around “understanding OE” with devastating effects on decision making. Let me say
it as clearly as I can: the industrial world is where real wealth can be created; industry needs
to learn how to measure its performance based on T, I, and OE. Banks, the stock market,
financial institutions and the guild of accountants MUST understand that their job is to support
true wealth creation by fostering in industry the pursuit of these measurements instead of
thrusting upon them their preposterous ones.
Nearly thirty years ago Dr. Goldratt explained these simple concepts and all the financial
and industrial catastrophes we have experienced in the last 25 years are, directly or
indirectly, the result of not having understood them. As we will try to show in the chapter on
Measurements, throughput is the most powerful way to link the industrial and the financial
worlds and the accounting that goes with it provides a whole new and meaningful scope to
the endeavours of the accounting profession.
It is also critically important to measure how the system performs in its effort to deliver to
customers. We can have a pretty good handle on it by measuring on-time delivery and
assigning a dollar value to any delay; we call it T$D (Throughput $ Day). The longer the delay,
the higher the T$D. Similarly, we want to know how much cash we keep trapped in inventory to
achieve an optimal T$D; we call it I$D (Inventory $ Day).
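These relationships are simple arithmetic, and a short sketch may make them concrete. The code below is purely illustrative; the function names and figures are mine, not part of the Decalogue material:

```python
# Throughput Accounting relationships as described above.
# Function names and example figures are illustrative only.

def throughput(sales: float, tvc: float) -> float:
    """T = Sales minus Totally Variable Costs."""
    return sales - tvc

def net_profit(t: float, oe: float) -> float:
    """Net Profit = T minus Operating Expenses."""
    return t - oe

def cash_profit(t: float, oe: float, delta_i: float) -> float:
    """Cash profit before tax = T minus OE minus the change in Inventory."""
    return t - oe - delta_i

def throughput_dollar_days(order_throughput: float, days_late: int) -> float:
    """T$D: a dollar value assigned to a delay -- the longer the delay,
    the higher the T$D."""
    return order_throughput * days_late

# Illustrative yearly figures
t = throughput(sales=1_000_000, tvc=400_000)        # T = 600,000
profit = net_profit(t, oe=450_000)                  # Net Profit = 150,000
cash = cash_profit(t, oe=450_000, delta_i=30_000)   # cash profit = 120,000
```

Note how every figure a decision maker needs flows from only four measured quantities: Sales, TVC, OE and I.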
Step Two: Understanding the system
If we do not know who does what, what the input and output are and how everybody’s
work is connected then we are not managing. As Dr. Deming used to say, “business schools
teach you how to raid a company not how to manage it”. It is disconcerting to discover how little
we know about our system until we have a clear picture of how our interdependencies are laid
out and how neglected by top management this issue is. Step Two provides the foundational
elements of understanding that will enable the building of a truly effective Quality System.
Step Three: Making the system stable
Dr. Deming used to say that if he had to reduce his message to Management to just a
few words, he would say that all their job is about is reducing variation. Dr. Deming’s
unrivalled contribution to the science of management comes from having understood the
importance (and all the implications) of a body of statistical studies developed by Dr. Walter
Shewhart of Bell Laboratories in the 1920s. Shewhart found out that any process is affected
by variation, which is enemy number one of Quality and reliability; however this variation can
be attributed to either common causes or special ones. Distinguishing “noise” from “signal”
was then critical to devising actions aimed at managing this variation. Dr. Shewhart
developed an important part of the so-called Theory of Variation known as Statistical Process
Control (SPC) and a very useful mechanism named Control Chart. His work was foundational
for the improvement of productivity, first at Bell and then in a myriad of organizations
countrywide, and certainly served as a springboard for the gigantic work of Dr. Deming.

Dr. Deming realized that in order to manage the variation associated with a process people
make two kinds of errors:
1. They attribute variation to a special cause when it is instead due to a common cause;
2. They attribute variation to a common cause when it is instead due to a special cause.
The ramifications of these errors are endless and still today plague the way companies
make decisions; Statistical Process Control, the main body of knowledge out of which the
Theory of Profound Knowledge has evolved, is largely ignored at Top Management and
Corporate level. Certainly, the lack of scientific background that is typical of the average
western corporate manager has contributed to the dismissal of SPC but, more profound
reasons, I believe, have sealed its fate so far.
SPC is neither a purely mathematical tool nor is it a conventional financial management
tool. The process behaviour charts (a better, less intimidating name for “control chart”) mirror
the outcome of our managerial decisions (often taken with some kind of local optima in mind).
The image they portray is not the reassuringly deterministic one that accountants and
financial people are accustomed to; on the contrary, what these charts display are predictable
or unpredictable ranges of oscillation. These oscillations very often reflect, mercilessly, the
conflicting confusion that dictates our choices and the course of action that their interpretation
calls for flies in the face of “conventional wisdom”.
In many years of professional practice I have constantly witnessed the psychological
disarray that follows a statistical study carried out with the use of process charts. Understanding
SPC, let alone using it properly and accurately, requires a paradigm shift in the way we look at
data and make sense of them for business decisions. From a mathematical standpoint, process
charts are based on average dispersion statistics and use a three-sigma approach for the
calculation of limits. What has always generated confusion about statistical process
control charts is that, although connected with probability theory, they do not work because of it.
The essence of the charts is in their predictive role and in the possibility they provide to build an
epistemological approach to management based on prediction.
Control charts capture the most fundamental feature of the work of individuals and their
interaction within organizations: the variation associated with processes. In building a
systemic organization based on the Decalogue the driver’s seat belongs to SPC.
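As a sketch of the kind of calculation involved — my own minimal example, not a substitute for a proper study of SPC — the three-sigma limits of a process behaviour chart for individual values can be computed from the average moving range (2.66 is the conventional scaling constant for this chart):

```python
# Minimal sketch (mine) of process behaviour chart limits for individual
# values, using average dispersion statistics: the mean moving range.
# The constant 2.66 scales the average moving range to three-sigma limits.

def process_limits(data):
    """Return (lower limit, centre line, upper limit) for an XmR chart."""
    n = len(data)
    mean = sum(data) / n
    moving_ranges = [abs(data[i] - data[i - 1]) for i in range(1, n)]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

def special_causes(data):
    """Points outside the limits signal special causes; the rest is noise."""
    lo, _, hi = process_limits(data)
    return [x for x in data if x < lo or x > hi]

# Example: daily output of a mostly stable process with one anomalous day
daily_output = [52, 49, 51, 50, 48, 53, 50, 75, 51, 49]
print(special_causes(daily_output))  # the 75 stands out as a signal
```

Everything inside the limits is noise and calls for no individual explanation; chasing it as if it were a signal is exactly the first of Dr. Deming’s two errors.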
Steps Four and Five: Identify the constraint and implement buffer management
The new kind of organization that is the solution to the conflict between Deming’s
approach and Goldratt’s approach is a network of interdependent processes with one
common goal where we have achieved a good level of statistical predictability. It can be
successfully managed, but the question is, how? Dr. Goldratt’s main contribution to the
Theory of Management has been to point out that any system is limited towards its goal by
very few elements, the constraints. If we identify them we can manage them following the
steps of focusing that he developed. The Decalogue, leveraging the intrinsic stability of a
Deming-based system, suggests that the constraint can be “chosen” (one constraint) instead
of being identified. In other words, we can always decide which constraint it is strategically
more convenient to focus on and build the system accordingly. Let me stress this: we can do
it ONLY because we have already built a system made of low variation processes; this is why
we can safely design our company around a strategically chosen constraint. Instead of
cycling the five focusing steps of TOC - (1) Identify the constraint, (2) exploit the constraint,
(3) subordinate to the constraint, (4) elevate the constraint, and, if the constraint has moved,
(5) go back to step (1) - we can make the system grow by appropriately choosing the
constraint and sizing the capacity of all the feeding/subordinating processes coherently.
Again, this is possible only because the variation in the system is low.
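The cycle of the five focusing steps can be restated as a simple loop. The toy model below is entirely mine — a dict of process capacities stands in for the system, and “elevating” simply adds capacity — but it shows why, without a strategically chosen constraint, the constraint keeps moving:

```python
# Toy restatement (mine, not Dr. Goldratt's formulation) of the five
# focusing steps. Capacities are in units/hour; the constraint is the
# process with the least capacity.

def identify_constraint(capacities):
    """Step 1: the least-capacity process limits the whole system."""
    return min(capacities, key=capacities.get)

def five_focusing_steps(capacities, target):
    """Cycle steps 1-5 until system throughput reaches the target;
    return the sequence of constraints encountered along the way."""
    constraints_seen = []
    while True:
        constraint = identify_constraint(capacities)   # step 1
        constraints_seen.append(constraint)
        # steps 2-3: exploit the constraint and subordinate to it;
        # system throughput equals the constraint's capacity
        if capacities[constraint] >= target:
            return constraints_seen
        capacities[constraint] += 10                   # step 4: elevate
        # step 5: the constraint may have moved -- go back to step 1

processes = {"cutting": 40, "welding": 25, "painting": 35}
print(five_focusing_steps(processes, target=45))
# the constraint wanders: welding, welding, painting, cutting, welding
```

The Decalogue’s point here is precisely to escape this wandering: with low process variation, the constraint can be chosen strategically and the rest of the system sized around it.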

Protecting and controlling the system: Buffer Management


The performance of this system is ensured by its predictability but we need a mechanism
to protect it and control it. This is provided by the buffer and by its management. What is a
buffer? Individual processes exhibit variation; two or more interacting processes do too, and
this is called covariance. The effect of covariance is a cumulative variation that can result in
any combination of these variances. Regardless of how little individual variation the processes of
the system have, we do need a mechanism to protect the most critical part of our
organization, the constraint. A buffer is a quantity of “time” that we position in front of the
constraint to protect it from the cumulative variation that the system generates. Simply, in
synchronizing the processes that deliver the output of our organization, we ensure that what
has to be worked on by the constraint gets in front of it “one buffer time ahead”.
Is there a precise and unequivocal recipe to size the buffer appropriately? No, and it
doesn’t matter. If you understand control charts the sizing is fairly obvious; if you don’t, you
have bigger problems to cope with. As Dr. Deming would most probably comment: “let’s go
back to serious business”. The real importance of the buffer is not in the protection from
disruption, certainly not an irrelevant issue, but in the possibility to exert control over the
functioning of the whole organization: if our processes are all predictable, then we can truly
use the buffer as a mechanism to gather succinct, effective and comprehensive information
about the “state of the synchronization” of our company. Indeed, if we monitor the buffer with
the use of a control chart, then the control becomes real insight, the goal of management.
The management of the buffer entails a Copernican shift in the way we mean control
because it prompts decision makers to rely on understanding and knowledge rather than
hunches; it forces them into leveraging arguably esoteric statistical studies instead of
salt-of-the-earth accounting; it constrains the unbridled vibrancy of managers into the straitjacket
of rational thinking. As a result of those considerations, over the decades, the repeated
attempts to introduce buffer management to Top Management have failed.

Step Six: Reduce the variability of the constraint(s) and the main processes
Step Six is obviously connected to Step Five but less obviously to Step Seven. Clearly,
we do understand the impact that variation has on our system and the need to reduce it but,
when push comes to shove, we are not prepared to continue to work on variability reduction.
Why? The answer is in our ability and desire to understand the purpose of system
management.
The culturally irritating translation of Deming’s Philosophy into the myriad of “Kaizen-like”
management techniques that have bamboozled western management for the last 30 years
has transformed it from an innovation and wealth creation driven vision of the world into an
efficiency game. If, on top of this, we continue to view our company in “functional” terms, then
reducing variation simply means reducing costs. Of course, no function would ever easily
surrender to that because it would imply “cutting the budget”, hence any serious attempt to
reduce variation is nipped in the bud. A relentless effort towards continuous reduction in
variation can only stem from a systemic vision of our company and the understanding that
only this reduction would provide the insight needed for triggering real jumps in performance.
The way to link a relentless, focused and companywide variation reduction crusade to
financial performance is through the adoption of a suitable organizational structure.
Step Seven: Create a suitable Management structure
When we wrote our book, Oded and I were perfectly aware of the inadequacy of the
prevailing organizational structure to support the systemic endeavour we were preaching, but,
in fairness, we had no generic answer to the problem. What you have read in the previous
chapters regarding a project-based structure is the result of subsequent analysis and
elaboration. At the time of publication (1999), we just wanted to stress that, without a suitable
structure, the realistic possibility to sell all the capacity of the constraint would be hindered by
local optima considerations. In other words, the design of a suitable structure was a
prerequisite for enabling the true expansion of the system.
Step Eight: Eliminate the external constraint; sell the excess capacity
When we design a system that caters for a high degree of process predictability and
synchronization, where control and protection are ensured by buffers, and where all the
policies, behavioural and measurement “constraints” are dealt with by an appropriate
organizational structure, we do so to maximize sales. The most important part of the chain is
the customer and any company should always be designed to ever improve its ability to
satisfy its customers’ verbalized and hidden needs. The Decalogue, if understood and
embedded in the appropriate structure, should very quickly unveil capacity that is not
currently being sold.
Another way of looking at this issue is the following. Let’s say that, day one, the rate of
sales of a company starting its Decalogue journey was such that some shipments were
missed and constant fire fighting would create friction between production and sales. The
Decalogue would call for a disciplined process mapping aimed at understanding process
variability, the choice and management of a suitable constraint as well as the devising of a
coherent measurement system. Moreover, in order for this level of synchronization not to be
hindered by the evil inclination towards local optima that functional organizations invariably exhibit,
we design an appropriate, coherent organizational structure. Almost invariably, the constraint
will shift outside, i.e. become an external constraint: our capacity, what we can realistically
design, manufacture and ship becomes greater than what we are currently capable of selling.
At this point, it will be blatantly obvious that our real understanding of the market is woefully
limited and we, in truth, do not know how to sell. The most spectacular application of the use
of the Thinking Tools is in the management of the external constraint. This is a very critical
point in the pattern of a successful Decalogue implementation. Why? As we previously
stated, salespeople can singlehandedly jeopardize any systemic endeavour unless we
integrate them organically in the way the company operates. We shall look at this in more
detail in Chapter 13 dedicated to External Constraint.
Step Nine: where possible, bring the constraint inside the organization (and lock it)
The Decalogue approach to management is based on process stability; indeed the most
critically important part of the system is the constraint. Hence, we want to ensure maximum
predictability especially on the constraint. Clearly, when the constraint is external such
predictability is more difficult to achieve. This is the reason why, whenever possible, we want
to manage an internal constraint. Moreover, this is also the easiest way to make the system
grow without stirring company dynamics conducive to mayhem.
The need for constraint reliability is so strong that even when organizations have a
virtually unlimited internal capacity, like supermarkets, we should always elect to appoint an
internal constraint and subordinate the whole organization to it. The growth of the system will
then happen through a systematic, orderly and relentless exploitation of the capacity of the
constraint. When such a capacity is not sufficient to meet market demand we will first
increase the appropriate non-constraint areas, making them capable of subordinating to the
constraint, and only then will we elevate the constraint. One more time: the name of the game
is process predictability.
Step Ten: Set up a continuous learning program
The possibility for a Decalogue based management system to produce in time the hoped
for results rests on the ability that the organization has to continually learn what is needed to
constantly improve its performance.
Learning does not happen in a vacuum and cannot be based solely on individual desire.
Learning must become part of the way the organization functions and the change associated
with it a way of life for the company. Learning cannot be “installed”, nor can it be forced on
people. It must be a personal choice but also an integral part of the way the company has
structured itself to conduct business. Learning must be promoted companywide and from the
Top Management but must come from a designated and empowered source. Over the years
we have come to call it a Centre for Learning.
The ten steps highlight the knowledge needed to overcome the Deming vs. Goldratt
conflict and to successfully enact the idea of a system constrained in one point; I am proud to
say that the steps have stood the test of the last ten years. However, both the successes and
failures I experienced in these years have clearly pointed to areas in need of meaningful
upgrades. The most important lessons have come from the area of organizational
design. At the end of this book we will look at the emerging model for the organization of the
21st century.

The phases of the Decalogue mapped onto the PDSA cycle

The Decalogue is founded on the principle of continuous improvement. Not only does it
employ the Plan, Do, Study, Act (PDSA) continuous improvement cycle designed by Deming;
the ten steps as a whole embody that cycle in the way they are carried out. This can be seen
clearly when we map the ten steps of the Decalogue and their four main phases onto the
PDSA cycle itself.

PART THREE

Knowledge
(application/execution)

ARE THE NEXT CHAPTERS USEFUL?

The goal of Part Three is not to provide an illustration of how the concepts of the Decalogue
have been applied over the last 15 years. Neither is it to elucidate successes and failures,
which are the hallmark of any innovation process. My reluctance to provide “examples”
originates from the wealth of empirical evidence I have collected over the years: people can
very rarely relate to other people’s experience. Moreover, as Dr. Deming reminds us:
“without a theory experience teaches nothing”.
And yet I did feel the need, at this point, to provide a further insight, a visualization of
what I discussed so far. I felt the need to show, with some level of detail, how the concepts
presented in the previous parts of the book can be made operational.
I was clearly in a conflict.
“Provide examples” protects the need to “satisfy expectations”; readers expect
examples; it makes them feel the author is worth their trust: “he has been there”, “he has
done that”. “Do not provide examples” protects the need for “intellectual coherence”; this book
is about scientific and philosophical tenets concerning the creation of a sustainable
economics, not a “one minute manager” handbook.
There are two basic assumptions at the core of this conflict. The first is that
examples have to be “real life” to feel “real”. The second is that it has to be “me” who talks
about it; in other words, it has to be “the organizational scientist” who provides the practical,
detailed answers. What follows is the result of tackling these assumptions.
What you will find in Part Three is a series of chapters (only one is by me) written by a
group of professionals, all of whom have several years of “theory and practice” with the
Decalogue. What each of them presents is his/her take on the Decalogue as applied to a
particular area. Indeed, all of the contributors also have several years of experience in the
subject matter at hand. In other words: the writers of these articles are subject matter experts
with a solid grasp of the Decalogue.
In these chapters there are no names and only vague reference is made to the particular
business of the company the authors worked with. What you will find is a practical analysis of
how some foundational aspects of a systemic organization are addressed; each author has
provided his/her own slant to the issue and I have made no attempt to make his or her style
uniform with the others. In essence, what you will read is their view on how the issue can be
faced and, in some cases, fairly detailed instructions are provided. Again: these instructions
will teach nothing unless the underpinning scientific and philosophical tenets are understood.
Larry Dries JD addresses the highly controversial (and largely misunderstood) theme of
accounting. Unfortunately, the world we live in, particularly that of Public Companies, is a slave
to the conventions of GAAP and cost accounting. While GAAP and cost accounting are not evil
in themselves, the use that the industrial and financial world has decided to make of them stifles
innovation and distorts data, thus leading companies to make wrong decisions. Larry makes a
clear case for Throughput Accounting and provides a non-negligible insight into the
psychology of accountants and finance professionals deriving from the assumptions that form
the foundational basis of their particular world view: cost reduction. Larry’s background in law
and investment banking has given him experience and perspective on how the manipulation
and interpretation of financial data can materially influence company behaviour, often leading
to unfortunate results. Larry’s discussion on measuring also briefly mentions the use of
Statistical Process Control and how it drives good decision making through objective
analysis. Such analysis permits accurate measurements, which then can form the basis for
the design of precise solutions to problems flowing from special cause variation. This entire
discussion on measuring and accounting strongly highlights the importance of the
philosophical underpinnings of the Decalogue itself.
Dr. Giovanni Siepe addresses variation and its implications for the management of
operations. Giovanni, a theoretical physicist who has spent two decades in industry, draws
heavily from the monumental body of work by Dr. Donald Wheeler; Don’s books have been
for me as well as for Giovanni a constant source of inspiration and guidance in the application
of Dr. Deming’s philosophy and we are eternally grateful to him for his teachings. Giovanni
has worked extensively on simulations and has gained over the years a profound
understanding of how critical the management of variation is for the successful running of
operations: from replenishment to manufacturing to sales.
Yulia Pakka MScBA took on the burdensome task of summarizing the External
Constraint approach to Marketing and Sales. A few words of introduction are in order.
Marketing (and its links to sales) within the framework of the Decalogue takes on a formidably
important and delicate role. Far too many people think of Goldratt and Deming as “Production
and Quality” experts and their management philosophies as, in essence, ways to improve
“efficiencies”; nothing could be more wrong. Deming and Goldratt, and hence the Decalogue,
provide a holistic framework for everlasting, sustainable growth, and Marketing is the vessel
for that growth.
Yulia has a multifaceted and international professional experience in highly competitive
and innovative markets. She explains very clearly, rigorously and comprehensively how a
statistically stable organizational system can reliably provide customers with well priced,
quality products that address well-identified needs. She provides clarity on the connections
that should exist between pre-sales and post-sales activities and warns about the risks of a
sales force that is not cognitively aligned with the marketing process. Yulia gives a brilliant, yet
challenging, illustration of the mechanism by which a company can systematically win the
“heart and mind” of its customer and tackles head on the mystique of the sales process.
Anyone who is serious about understanding what Dr. Goldratt has described in his seminal
book It’s Not Luck should read Yulia’s chapter very carefully.
Francesco Siepe PhD and Professor Sergio Pagano PhD deal with Information
Systems and what should be their main feature: the ability to support an organization in
improving its performance towards a stated goal. Francesco and Sergio are, respectively, a
mathematician and a physics professor. Their professional collaboration is aimed at
designing and building software technologies that can support a Decalogue based approach
to the management of organizations.
A common background in non-linear systems studies has helped them devise
computational tools for the proper management and synchronization of finite resources. Our
continuous conversations on the usefulness of software for the enhancement of business
performance are leading to a set of tools that maximize the impact of any existing Information
System. In this chapter, Francesco and Sergio illustrate an application of their approach to
Project Management and show how simple software tools can support the adoption of a
completely systemic organizational design based on the management of projects. Their work
is based on, and expands upon, the concepts that Dr. Goldratt illustrated in his book
Necessary but not Sufficient.
Gianlucio Maci PhD offers in his chapter a preview of where the development of the
Decalogue is heading. The initial inspiration (and motivation) for writing this book was
provided by the idea of sechel and how such sechel can be tapped into if we develop the
three faculties of the mind that I discuss in the book. Connecting the dots among intuition
(birth of an idea), understanding (of its full spectrum of ramifications) and
knowledge/consciousness (successful application) invariably brings a new “spin of the wheel”
of innovation and development. In our case, such a development found its scientific roots in
Network Theory.
Over the last three years Gianlucio has engaged in the theoretical development and
hands-on testing of models aimed at providing enough experimental validity to the intuition
that companies can be better managed if seen as networks of interdependent, variation-
poised components, hence introducing the language of networks into the field of
organizational studies. His research work on complex systems will be part of the groundwork
needed to evolve the Decalogue into a full-blown Intelligent Management approach.
Angela Montgomery PhD has edited and shaped the entire book, from a set of disparate
parts into a systemic whole. Angela masters the art and science of crossing boundaries,
geographical and conceptual. Her zest for knowledge, intellectual prowess and flexibility have
allowed her to contribute meaningfully over the years to the development of the Decalogue
knowledge base. She is co-founder of Intelligent Management Inc., an organization with the
goal of promoting awareness and adoption of the Intelligent Management approach.

10.

SECHEL: FOSTERING A HIGHER INTELLIGENCE WITH THE THINKING PROCESS TOOLS

Dr. Domenico Lepore

In 1994 Dr. Goldratt published It’s Not Luck, a novel where he addresses constraints that are
not physical. In this book he tackles such constraints with the aid of a powerful set of logical
(and emotional) diagrams that he labelled “Thinking Process Tools” (TPT). Many books have
been written to exemplify the use of the TPT. In our book, Oded Cohen and I dedicated 27
pages to this end. It seems to me that none of us did a particularly good job, given how little
use people have made of the TPT over the years. I will therefore try not to make the same
mistake; whoever is interested in learning the use of the tools will probably find useful the
website that accompanies this book: www.intelligentmanagement.ws
Here I will attempt to provide a different slant on the issue.
The Thinking Process Tools were created to fortify people’s ability to reason in terms of
cause and effect. This is a daunting task because our mind simply does not work that way. In our
daily lives, most of the time we “re-act” instead of acting, and we very rarely understand the
full spectrum of the consequences of our “re-actions”. Simply put, human cognition is heavily
constrained in understanding the network of interdependencies that our actions trigger.
Dr. Deming used to warn about the consequences (cause and effect) of seemingly simple
actions: “if we kick a dog in the street, we are responsible for the attack that, probably out of
fear, the dog will take against the next passerby”.

Cause and effect


Our mind is severely limited in its capacity to understand the field of forces within which
we are all immersed. Moreover, the system we are all part of has a “delaying effect” that
makes it virtually impossible to connect our actions to a precise output. Indeed, we all
understand the basic implications of certain actions: if I eat too much I will have a stomach
ache later. However, as soon as implications are slightly more complex many of them will not
be grasped.
The ability to reason cause-and-effect is foundational for any meaningful change we
decide to bring to our life. This is even more so if the change we want to effect concerns a
complex entity like a company. “The times they are a-changing” should hopefully turn into
“times we are a-changing”; Bob will forgive me. Let’s take a look.
Non-physical constraints are all of those limitations that are created by our mind as a
result of past experiences. Some of these constraints are very helpful and make our life
possible; restraints can play an extremely important part in our lives, as can the ability to lift
them when appropriate. These limitations do not belong only to individuals
but also to organizations; the field of forces created by the structure of the organization, the
paradigms of its founders and directors, and the socio-political environment at large all shape
these constraints. The issue, for individuals and organizations alike, is not so much the
existence of these constraints but rather the acknowledgement that they are such;
unfortunately, far too often, these mind-created constraints become the “reality” of the
organization. The life cycle of companies, and even entire industries, can be measured in
terms of their appreciation (or lack of it) for these deeply rooted images of the world that over
the years I have come to call “cognitive constraints”.

Cognitive constraints
Cognitive constraints are connected neither with (a lack of) education, nor with the rational
understanding of the debasing role they play in our lives. Simply, our mind adapts very
quickly to seemingly “stationary states” and sees moving out of these states as too
challenging. In life as well as in business, there is no such thing as a “stationary state”; as
Dr. Deming used to say: “the only thing that does not require maintenance is obsolescence”.
This is a fundamental truth: if we do not evolve, we regress. What makes everything more
complicated is the pace at which we must evolve to survive and the anthropological direction
of this evolution.
Let me be as clear as I can on this issue: management (and economics, for
that matter) has seen a slew of socio-psycho-behavioural studies over the last 40 years. An
incredible avalanche of “techniques” has been devised to help managers cope with their
duties; legions of lawyers and accountants have patrolled the field where “wealth” is
generated, and herds of consultants have galloped across the wild and undefined prairies of
performance improvement. We can easily label these efforts as “upgrade management”
because, in some way, (almost) all of them have contributed to some advancement. Not
anymore.

The reinvention of work


The challenge in front of the science of management (and economics) is the reinvention
of work: its sociology, its psychology and its ethics. We have to start almost from scratch
because, from toddlers up, we programme individuals for goals that cannot be sustainably
attained. Moreover, universities and business schools must launch a totally new message
about what needs to be learned to really help the development of sustainable wealth
creation.
The reinvention of work must contemplate the reinvention of the engine that makes work
possible: individuals and organizations. This is the feat in front of the science of management
and it is nothing short of a transformation. It starts with the development of a new way of
learning and of making use of what we learn; it starts with a superior ability to leverage the
faculties of our intellect and use them together to access an unprecedented level of
“intelligence”, which is what these times require.


The “new individual” and the “new organization” need to be crafted in a very different,
much broader paradigm. The founding element of this paradigm, this collective perception of
the world, will be cooperation not competition; interdependence not dependence or
independence; networks of value not company book profit; comprehensive wealth not just
money; centrality of the individual not individualism.
In order to cope with this massive shift in perspective, we need to be able to understand
the connections that exist among different elements of our current reality, the implications
triggered by their interdependencies, and what makes us perceive their coming together as
inevitable. We need a mechanism that supports and fortifies our intuition of why things
appear the way they do.
Unveiling the cause-and-effect relationships that we make in crafting our reality provides
a unique opportunity to transform this reality. It is therefore important to stress that the
mechanism, the tool we use to accomplish this feat is a genuinely transformational tool.
Without any delusion of doing a better job than ten years ago, I will try to illustrate it.

Embracing conflicts
In virtually every culture I have had the opportunity to come into contact with, the word “conflict”
is evocative of something negative: an often prolonged struggle, a situation of distress, an
incompatibility difficult to reconcile. Whether you look in the dictionary or try to fish in some
hidden ravine of your memory, you will invariably connect “conflict” with something you would
want to avoid. Unless you are a lawyer, you will probably derive very little pleasure and
benefit from a situation of conflict.
Conflicts, with their heavy burden of emotions, act as debasers of our intellect, allowing
our evil inclinations to take over. Conflicts unleash powerful forces that make up an important
part of what we call “life”; like any force, they can overwhelm us, or we can harness them for
a good purpose.
Conflicts are inevitable; they arise from the simple fact that we are all different and we
see the world in different ways; we have different agendas, different priorities and we are
subject to different stimuli. Sometimes we have conflicts with ourselves, we struggle to take
decisions. The issue, then, is not so much to avoid conflicts; rather it is to turn their potentially
devastating power into something useful. Dr. Goldratt in his lifetime crusade for better
thinking as a prerequisite for better acting, dedicated a book, It’s Not Luck, and endless
seminars to the subject of conflict resolution. Here the attempt is not so much to explain the
mechanics of it but to unveil its underlying paradigm.

Separating wants from needs


A conflict invariably originates from two statements that we perceive as conflicting; these
statements always take the shape of a “want”: I want X, you want Y, and this difference of
“wants” originates a conflict. Unless we enjoy the arguing that normally follows a conflict, or
we decide that its resolution is not worth pursuing, we may want to consider unveiling what
causes the conflict. Choosing to unveil the causes of a conflict is an act of self-determination,
a point of strength and, as such, a step in the direction of change.


A want always hides a “need”. Anytime we state a want, consciously or not, we verbalize
with our statement the way we intend to satisfy our need. I want a new car to satisfy a need
for safety or prestige; I want a steak to satisfy my craving for meat or need for proteins; I want
to read a book to protect my need for knowledge or because I need the information contained
in that book. The list is obviously endless.
Separating the “want” from the “need” is the first step; once we do that and we look at
each need that has originated the “want”, we immediately realize that these needs are never
in conflict. Why do we claim this? Because we can quickly link these needs to a plausible
common goal, the achievement of which is only possible if the two needs are simultaneously
satisfied. In the end, we only argue and want to solve a conflict with somebody with whom we
think we have some common goal to achieve, otherwise we are indifferent.
Goal, needs and wants must be verbalized as clearly as possible; it is only through a
correct and precise verbalization that we disclose the true nature of the categories of speech
that make up the conflict. (See Chapter 2 for a description of the Conflict Cloud, how to build
and read it.)
Virtually any situation of blockage can be portrayed in terms of conflict. There are at least
three main advantages in doing so: first, we see with true clarity the issue at stake, we
understand “why” we are in conflict; second, we deflate the potentially growing bubble of
resentment associated with a conflict and keep ill feelings at bay as much as possible. The
third and most important advantage is that by building a conflict with precision we are
automatically poised to solve it; I will try to explain why.
So far we have analyzed a conflicting situation in terms of its three founding statements:
the “wants”, our claims; the “needs”, what we try to satisfy/protect with our claims; the
“common goal”, something that can only be achieved if both needs are satisfied/protected.
The goal then, really becomes “common” to both conflicting parties. We have also seen how
to connect these logical entities and how the conflict reads.

Challenging our mental models


What paves the way to the resolution of the conflict is a fourth set of statements, the
assumptions. What connects the boxes and provides conceptual solidity to the conflict is the
set of mental models, the beliefs we call “assumptions”; these portray the logic by which we
see the conflict as inevitable. Once these mental models are in the open it becomes possible
to challenge them and replace them with less “constraining” ones; in other words, by
surfacing our limiting beliefs we provide ourselves with the practical possibility to verify the
validity of such beliefs and find a way out of them.
The way we do this is through the development of “injections”, solutions that while
invalidating the assumptions that led to the conflict, protect both needs in the conflict.
I think it is important to stress the inherent difficulty of this exercise. By verbalizing the
profound images that shape the way we think, by exposing the way we look at the world, by
making explicit how we interpret reality, we simultaneously show a very intimate part of
ourselves and we agree to challenge it too. Moreover, by agreeing to modify these mental
images, we implicitly agree to change our outlook on the world and, in some cases, to change
ourselves. Changing mental models that have been created over time is very complicated
and requires a serious commitment towards a sincere betterment of the most relevant part of
our being, the way we think.
This is because, and I will never stress this enough, the seemingly simple conflict
diagram is far from being a “tool”, like a hammer or a piece of software. Embracing the
conflict cloud approach to problem solving calls for the acceptance of a new paradigm of
openness and transparency about ourselves; it requires the systematic and relentless
acceptance of others’ views; it is based on the deep conviction that sundering ourselves from
others is artificial and never truly possible. The conflict cloud, through its paradoxical
nomenclature, is the key to creating connection, to linking positions, and to building bridges.
In pursuing the overcoming of conflicts through the identification of their ultimate root
(our assumptions) we create the mental circuitry that enables us to develop seemingly
inconceivable solutions; we open up a realm of new possibilities, we transcend ourselves and
create a much broader range of possibilities for interactions.
This is why the conflict cloud is not simply a “tool” to address disputes; rather it is the
starting point for transforming how organizations can be built and work in a much more
meaningful way. The conflict cloud triggers and sustains a new kind of management where
we can really tap into the unexplored possibilities that human cooperation can generate.
The conflict cloud supports and fortifies intuition.

The Current Reality Tree


The tool we use to portray the current reality we want to investigate is a particular kind of
conflict cloud called “Current Reality Tree”. This cloud does not address a traditional conflict
like “What I want vs. what you want”, rather it is the direct offspring of the two fundamental
needs we discussed earlier in the book. Let me recap it.
Since our childhood we have been immersed in a field of forces that shapes the two basic
needs that all of us have; the need for control (over ourselves) and the need for vision (of
ourselves). These two needs reflect the dual nature, physical and spiritual, that only humans
have. While the spiritual, transcendent part of ourselves knows no boundaries and aims
high, the physical part knows and undergoes fears. Invariably, the simultaneous satisfaction of
these two needs will prompt us into a situation of conflict. This time though, the conflict will not be
between two people wanting different things and neither will it be a dilemma between two equally
desirable wants. The conflict originated by these two needs, for individuals as well as for
organizations, will be between something that we strongly desire (and we do not seem to be able
to access) and a highly undesirable situation that is the result of how our fears force us to cope
with the need for control. This kind of conflict is called “core conflict cloud”.

The Core Conflict Cloud


Another, more conventional, way of building a core conflict cloud is to start from the elements
of our reality that we perceive as undesirable; historically they are named Undesirable Effects
(UDEs). If we go down this route, then the procedure is the following:
One: we collect all the Undesirable Effects (UDEs)


Two: we find a verbalization that summarizes them all; we call it D. (We may want to do
this in steps: a) we stratify the UDEs into homogeneous categories; b) we summarize each
category with one statement; c) we consolidate these statements into one.)

Position D

Three: we find a verbalization that summarizes all the Desirable Effects (DEs) we would
like to experience; we call it D’:

Positions D and D’

Four: we state the need for “control” that forces us to accept, to cope with D; we call it B:

The need B


Five: we state the need for “vision” that prompts us to say that D’ is the reality we would
like to live in; we call it C

The need C

Six: we verbalize the most basic goal whose achievement must pass through the
simultaneous satisfaction of B and C; we call it A. In other words, B and C must be
simultaneously satisfied in order to achieve A.

Positions, needs and goal
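The six steps above can be sketched as a simple data structure. The following Python sketch is purely illustrative: the class, its field names and the example verbalizations are my own assumptions, not part of the Decalogue; it only shows how A, B, C, D and D’ fit together when the cloud is read aloud.

```python
# A minimal sketch of a Core Conflict Cloud as a data structure.
# All names and example statements below are illustrative, not canonical.
from dataclasses import dataclass, field

@dataclass
class CoreConflictCloud:
    goal: str           # A: the common goal
    need_control: str   # B: the need for "control" that forces us to cope with D
    need_vision: str    # C: the need for "vision" that points at D'
    undesired: str      # D: one statement summarizing the Undesirable Effects (UDEs)
    desired: str        # D': one statement summarizing the Desirable Effects (DEs)
    assumptions: list = field(default_factory=list)  # mental models connecting the boxes

    def read(self) -> str:
        """Read the cloud aloud, following the logic of the six steps."""
        return (f"In order to achieve '{self.goal}', we must satisfy both "
                f"'{self.need_control}' and '{self.need_vision}'; "
                f"to satisfy '{self.need_control}' we cope with '{self.undesired}', "
                f"while to satisfy '{self.need_vision}' we would need '{self.desired}'.")

# An invented example, for illustration only.
cloud = CoreConflictCloud(
    goal="Sustainable growth of the company",
    need_control="Keep operations predictable",
    need_vision="Continuously expand what we offer the market",
    undesired="We keep firefighting local emergencies",
    desired="We work on systemic improvement",
    assumptions=["Predictability requires freezing the current way of working"],
)
print(cloud.read())
```

Reading the cloud in this way makes the role of the assumptions explicit: they are the only element that can be challenged to dissolve the conflict between D and D’.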

The exercise of building a core conflict cloud for an organization is invaluable and the
process can be exhilarating. Let me try and give you a sense of this. In the last 15 years
I have worked with hundreds of top and middle managers to build custom-made
implementations of the Decalogue, and the starting point has always been the writing of the
core conflict. A group of managers sits for two or three days in a room, starting with a “bitching
and moaning” session where all the UDEs are verbalized. This first phase is a very “feel
good” one, everybody agrees that the company is plagued by these effects. These effects are
and feel “real” and everybody would like to get rid of them.
Summarizing all the UDEs in one single statement is normally a little cumbersome, but it
is generally done in a few hours. At this point the procedure I listed above begins, and the end
result is normally welcomed as a breakthrough. What happened?
Remember, the conflict cloud helps to sharpen our intuition. In just a few days the group
of managers has moved from an often disparate set of non-verbalized hunches to a clear-cut
picture of the forces that keep them from achieving their goal. Moreover, a precise description
of the needs that craft the psyche of the organization goes a long way towards helping to
understand why we are trapped in this conflict, the reason for it. I can safely say that no
top management strategic retreat session delivers a tangible and operational output like this
one. Now that the intuition is strong we can make it stronger.
What transforms a core conflict into a full-blown picture of our current reality is a
disciplined, orderly elucidation of all the mental models that give birth to the conflict. These
mental models are deeply rooted images that we have of ourselves and the world around us.
These mental models, which we may also call “assumptions”, are the cognitive lenses
through which we perceive reality.
Assumptions are, like any other mental construction, the result of external (the
environment, education, experiences, values, etc.) and internal (the chemistry and physics of
our mind) factors. The difference between an assumption and a statement of reality is only
the realm of validity, often determined by cultural circumstances. (If you want a practical
example of this last statement, take a sentence like “in a democracy every citizen is entitled
to decent, affordable and reliable healthcare” and ask for a comment from a statistically
representative sample of individuals in the US, Canada, and Europe).

The conflict cloud


Assumptions are the logical connectors between goal, needs and wants; they help us
see the logic that shapes the conflict. A conflict with its set of clearly verbalized assumptions
portrays the current reality precisely in the way we experience it and is the strongest possible
support we can provide to our intuition.
I hope that these pages have helped clarify what is at stake here: the development of a
new, better and stronger ability of our mind to develop and fortify the faculties of our intellect.
Such a development is greatly enhanced by the TPT, and what we have presented here, the
core conflict cloud/CRT, forms the basic mechanism to strengthen intuition. We provide an
example of a core conflict cloud/CRT in the following pages regarding the case of the ProPrint
company.

Triggering a superior level of consciousness


Dr. Goldratt developed the TPT (Thinking Process Tools) to help people think better and
faster. The TPT support what have been called the three phases of change: What to change
– what to change to – how to cause the change.
I think it is important to understand that the successful completion of these phases can
only be achieved if we strengthen our ability to leverage the faculties of our intellect: intuition,
understanding and knowledge, and trigger a superior level of consciousness. I believe that
what ignites the desire to achieve more sechel, to be more “intelligent”, to go beyond the
somewhat comfortable boundaries of “common wisdom” and the platitudes we are
accustomed to, is a desire to live in and contribute to a better world.
Addressing and overcoming the restraints we experience, the cognitive constraints that
limit our existence, is emotionally burdensome. However, it is ultimately rewarding because
each time we do it we access a superior level of ourselves and, with it, we acquire a more
meaningful level of existence.
The new covenant with work that management has to create must be based on that
meaningfulness. What we do must have a purpose; it will generate monetary wealth that
everybody will benefit from, but money cannot be the prime motivator. The purpose of what
we do cannot be disconnected from the creation of shared, fairly distributed and
comprehensive levels of wellbeing.
The philosophically fundamental conflict between body and soul, between the physical
and the spiritual sides of our being as humans, cannot remain confined to scholarly and
religious debates; it must find a solution and its embodiment in the way we act in the world
and in the workplace. It is a precise responsibility of management, capital and institutions to
rebuild on these ethical foundations the work of industry and organizations at large. If the
core conflict and the CRT (Current Reality Tree) address the intuition, how about making
understanding and knowledge equally stronger?

Freeing ourselves of cognitive constraints and getting to the goal


The CRT contains the four critical elements that anchor our intuition and provide a well-
grounded picture of the way internal and external forces combine to craft the reality we
experience. Goal, needs, wants and assumptions tell us why we are in the current state of

103
Sechel: logic, language and tools to manage any organization as a network

reality. But they also pave the way to come out of this reality and move towards a future that
is more desirable.
As we said, assumptions are mental models we have about the world; they are formed
as a result of experiences and socio-cultural circumstances. Assumptions are, in every
respect, a reality for the person that develops them. These assumptions, particularly the ones
that we verbalize between D and D’ in the conflict/CRT are, de facto, the constraining
element of our reality; they are our cognitive constraint.

The cognitive constraint

If these constraining beliefs were challenged and invalidated, i.e. we were to identify logical
statements that would disprove them, then these “constraints” would be “removed”. As a
result of this removal (the lingo is: “elevation”) of the constraint, our ability to achieve our goal
would be magnified. In order to qualify as “assumption sweepers” these logical statements
disproving our assumptions must fulfil two prerequisites:
1) they must logically invalidate one or more assumptions;
2) they must protect/address both needs, or address one of them while remaining neutral to the other. Taken together, the totality of these statements must address/protect both needs. If these
prerequisites are satisfied, we call these statements “injections”. The need for control (B) and the need for vision (C) are captured by the two statements: on one side, the “vision” of a company that can overcome with ease the limitations they clearly see as artificial; on the other, the “controlling” need to remain as faithful as possible to the perception they have of themselves, professionally and otherwise.


Injections are solutions to the conflict; by invalidating all the assumptions they
“evaporate” (nothing like jargon, eh?) the conflict cloud (D and D’ disappear) and can
potentially move us from our Current Reality to a more desirable, less constraining Future
Reality.
However, in order for this to happen, we have to ensure that this set of injections is both complete and as free as possible from potential negative implications. Only then will we have a
full understanding of the pattern in front of us. Only then will we have a thorough
comprehension of all the potential ramifications of the solutions we identified (the injections).

Completing the solution: the Future Reality Tree


What ensures the completeness of the set of injections identified, hence providing a
conceptually reliable path to the future, is another TPT called the Future Reality Tree (FRT).
The process of building an FRT requires some skill, a bit of experience and a fierce
determination. I want to stress that this is neither academia nor is it an exercise in
conventional logic. Building an FRT is only possible if we have embraced the vision and the
method that supports it; the vision is that of a company that takes very seriously its
commitment to the future, that sees itself as an ongoing generator of wealth for all its
stakeholders and society at large. The method is the orderly, relentless identification of all the
cause-and-effect relationships that are likely to shape the future if certain actions are carried
out successfully. In this sense, the FRT is similar in nature to the PDSA cycle because it
prompts a rigorous, scientific investigation of the subject matter.

ProPrint: a real-life example

PROPRINT conflict cloud


The way to read this conflict is the following: “If our goal is A then we must B; If we
must B then/therefore we are forced to cope with/accept D. On the other hand, if our goal is
A then we must C; If we must C then/therefore we want/desire D’ ”.
The boxes between A, B and D; A, C and D’; as well as the one between D and D’, contain the assumptions, and they read as follows: “If our goal is A AND [box with assumptions], then we must B (or C)”. The same reading applies to the other boxes with assumptions. The box between D and D’ reads: “We would like to have D’ but we are forced to live/cope with D BECAUSE…”.
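The reading template above can be made concrete with a small sketch. The A-B-C-D-D’ labels follow the book; the example sentences below are invented for illustration only.

```python
# Minimal sketch of reading a conflict cloud; all entity sentences are
# hypothetical examples, not taken from the ProPrint case.
cloud = {
    "A": "run a thriving, growing company",     # goal
    "B": "keep tight operational control",      # need
    "C": "pursue an ambitious vision",          # need
    "D": "centralize every decision",           # want
    "D'": "delegate decisions widely",          # want, in conflict with D
}

reading = (
    "If our goal is to {A} then we must {B}; "
    "if we must {B} then we are forced to {D}. "
    "On the other hand, if our goal is to {A} then we must {C}; "
    "if we must {C} then we want to {Dp}."
).format(A=cloud["A"], B=cloud["B"], C=cloud["C"],
         D=cloud["D"], Dp=cloud["D'"])

print(reading)
```

Substituting any management team's own verbalizations for the five entries produces the full reading of their cloud in the same way.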
The core conflict cloud above portrays the current (at that time) reality of a
manufacturing company producing special equipment for the printing industry. In my
experience no person or company has ever benefitted from reading another person or
company’s core conflict. The reason for this is multifaceted. First, every company believes it
is special and unique – nobody compares; moreover, it is difficult to penetrate somebody else’s subject matter quickly. Second, reading a conflict is not the same as building it;
the focus and comprehension in the two scenarios are radically different. Third, the
language that management uses to portray its reality is peculiar to that group of people.
This is why, in fairness, I do not believe reading this cloud and what follows will be of great
help. However, in order to increase even minimally the chances of being successful, I have
done some editing work to the original conflict so as to make it as general as possible.
The company, which we shall call ProPrint, had about 200 employees and two
facilities. It had been on the market for 50 years and had been bought out by some of the
existing management who had teamed up six years previously with some keen, willing and
long-term focused capital. ProPrint was well known for the quality of its products, had a
well-established customer base, no union issues, and enjoyed a respectable cash profit
generation.
The main UDEs of ProPrint were all connected with its legacy. It had been a small,
family-owned company for decades, where hands-on experience and closeness to the
owner/founder were valued more than formal education and knowledge. The skills
considered critical for the product were in the secretive hands of a few elderly employees,
and the focus of the company was exclusively on the product rather than the customer. The
sudden jump in revenue following the takeover by management had created a clear
disconnect between traditional work habits and what was needed to cope with the pace of
growth. Moreover, ProPrint did not seem to be able to take advantage of a growing demand
coming from abroad.
In the course of a three-day session, the top management team of eight people were
able to clarify and fortify their intuition about the issues they were facing, i.e. the conflict that
was trapping them. They were able to verbalize clearly the forces that were keeping
ProPrint from evolving. The management at ProPrint invested heavily towards this end and
they came out with the considerable result we reproduce here. I took the liberty of making
some simplifications to make it more legible.


ProPrint Future Reality Tree

This is the part of the FRT that connects the injections to the achievement of the goal via
the simultaneous satisfaction of both needs. (A more complete and, for the purpose of this
book, unnecessary version of this tree should show the links between the achievement of the
goal and all the DEs that would replace the UDEs out of which the core conflict was
developed).
The set of injections developed by ProPrint was validated in terms of their completeness
by linking each injection through cause-and-effect logic with “future” intermediate states of
reality that are achieved as a result of the accomplishment of the injection. Indeed, these
injections are “pie in the sky”; they do not exist in reality and they are, at this stage, only
logical statements.


The strength and the value of the FRT are in the understanding it provides of the
comprehensiveness and the breadth of the effort needed to transition from the current to
the future state of reality. The thoroughness of this effort and the completeness of the
understanding we derive from it are further enhanced by the meticulous test we carry out on
potential negative implications deriving from the coming into existence of the injections. The
TPT that supports our understanding in this effort is called Negative Branch Reservation (NBR).

The logic of this tool is simple but the mechanics require some attention. In essence, with this
tool we verify whether the suggested injection carries with it potential negativities; if we can identify them we can try to defuse or reduce their impact. The steps to build this “branch” are the
following:
1) we state the injection;
2) we state the potential negative outcome;
3) we build the cause-and-effect chain of events that we anticipate would determine
the negative outcome;
4) we identify the logical statement/statement of future reality that would turn the
positive of the injection into a negative;
5) we devise an action that trims that negative;
6) we incorporate the needed change into the original injection so as to make it more
effective.
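The six steps above can be sketched as a simple data structure. This is our own illustrative rendering, not part of the TPT vocabulary; all field names and example content are invented.

```python
from dataclasses import dataclass, field

@dataclass
class NegativeBranch:
    """Minimal, hypothetical sketch of a Negative Branch Reservation."""
    injection: str                    # step 1: the proposed injection
    negative_outcome: str             # step 2: the feared negative outcome
    causal_chain: list = field(default_factory=list)  # step 3: chain of events
    turning_statement: str = ""       # step 4: what flips positive to negative
    trimming_action: str = ""         # step 5: action that trims the negative

    def trimmed_injection(self) -> str:
        # step 6: fold the trimming action back into the original injection
        if not self.trimming_action:
            return self.injection
        return f"{self.injection}, AND {self.trimming_action}"

nbr = NegativeBranch(
    injection="We hire an external sales manager",
    negative_outcome="Veteran employees feel bypassed and disengage",
    causal_chain=[
        "A newcomer controls customer relationships",
        "Veterans see their informal status eroded",
    ],
    turning_statement="Status inside the company is tied to customer contact",
    trimming_action="veterans formally mentor the new manager on key accounts",
)
print(nbr.trimmed_injection())
```

The point of the sketch is only to show how step 6 produces a richer injection that carries its own protection against the anticipated negative.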
Anticipating negative implications is an activity in which many people are well versed.
The point here is not to encourage scepticism, fuel negativity or stifle creativity and
enthusiasm. The goal of an NBR is to ensure full understanding of the ramifications of our
decisions. In some cases this tool helps us abort in time a half-baked decision. In some
others, regardless of our inability to trim that negative implication, it reinforces our desire to
go ahead anyway. More frequently, NBRs help us craft better and more rounded injections, giving them a higher chance of accomplishing what they were designed for.
To exemplify the use of the NBR we can see how ProPrint raised and trimmed a
potential negative associated with one of their injections on the FRT. (The injection is in the
bottom left box.)


ProPrint Negative Branch


ProPrint Negative branch is trimmed


ProPrint FRT with modified injection

So far, we have worked with the basic starting situation and pictured it using the tree of
our current reality. We have seen how ProPrint built its own current reality. Depicting their
reality so clearly helped the management team to develop the focus and the desire to build a
thorough understanding of how it would be possible to evolve out of that reality towards a
better future. The FRT and the NBRs provided that understanding and the global vision of
how the company could move ahead without losing itself. The bridge between the present
and the future had been built.


What is the future?


Let’s allow ourselves a moment of speculation. What is the future? This is not a rhetorical question; quite the contrary. The life of an enterprise, its ability to stand the test of
time, its possibility to wade through the ups and downs of the markets rests firmly on its ability
to pursue its vision of the future. A vision of the future for an enterprise cannot be the
outcome of a strategic meeting where a few executives develop a plan to react to some ill-
understood events or to push something based on personal agendas. Nor can it be some
opportunistic decision driven by purely monetary reasons.
The future is not the present in a few years’ time, nor is it what happens regardless of
what we do. Building the future is about taking the responsibility to make the future happen; it
is about striving to understand better and better the implications of what we do. A vision of the
future is certainly the responsibility of the top management but calls, first and foremost, for
the ability to subordinate to that vision, regardless of the circumstances. People’s involvement
in building a future they can associate themselves with is essential.
If we take our understanding seriously, if we value our ability to anticipate what future
events will be like as a result of our actions, if we truly believe in cause-and-effect thinking,
then we must be able to subordinate to that future reality and walk on that bridge.
The bridge that brings us to the desired future and that makes that future possible is
knowledge. Let me repeat what I said earlier in the book. Knowledge is not erudition and it is
not information. Knowledge is not know-how and it is not experience.
Knowledge is based on theory and must allow prediction. Without the ability to predict
there is no management. However, in order for this knowledge to deliver results we must be
one with it. Knowledge cannot be disconnected from the consciousness that should permeate
it. If the knowledge needed to execute the injections of the FRT is not in synch with the
change in reality that the application of that knowledge will create; if the unfolding of the
pattern we have chosen creates a disconnection between the holder of the knowledge and
the sense of who they are in the new reality; if what people know does not match the image
they have of themselves in the new reality, then the transformation from the present to the
future will not happen.
There are two TPTs that help us gather and deploy the necessary knowledge, together with an increased self-consciousness: the Prerequisite Tree (PRT) and the Transition Tree (TRT).

Gathering and deploying knowledge to reach the goal


The PRT and the TRT have been designed to gather and deploy all the knowledge
needed to achieve the injections in the FRT. The PRT and TRT are the backbone of an
actionable plan. They pursue the goal of providing guidance and a logical pattern as well as
detailing all the steps that have to be taken in order to achieve the stated objective. Let’s see
how they work.
When we face the ordeal of achieving an ambitious target we are often daunted by the
‘mountain’ we have to climb; all we see are the obstacles in front of us. We perceive these obstacles as undefined, scary entities that hinder our ability to achieve our goal. When we are
in front of an injection (or any other arduous task) the first reaction is: “Oh my goodness, how
am I going to do this?” This is understandable but it is not rational. First of all, why are we
faced with this feat? Because either we have chosen to be, or because it is part of an
evolution, personal and/or organizational, we have decided to be part of. Very rarely do we
get ourselves in front of a task we have no possibility of overcoming. Indeed, at the outset, we might not have clear ideas on how to tackle the ordeal. This is good; let’s leverage it.
The basic raw materials to build a core conflict are Undesirable Effects (UDEs); the
starting ingredients for a PRT are the obstacles we envisage in our journey to the goal: let’s
list them. The writing of this list is going to be both liberating and informative: we shall have
full clarity of what we perceive as an impediment and with it we’ll develop a pacifying sense of
control over it. When the list is complete we will have split “a big and undefined problem” into
a set of much smaller ones. How do we attack them? We make a list of the obstacles on the
left side of the page, from top to bottom; then on the same line on the right side of the page
we can list a corresponding Intermediate Objective. An I.O. can be either a logical statement
describing what would enable us to overcome that obstacle or, better, how we intend to do it.
For instance, if the obstacle is “I have insufficient funds” the I.O. could either be “I have
provided for sufficient funds” or, better, the solution, i.e. “I have persuaded my aunt to
sponsor me”.

How to cause the change – obstacles along the road to change


In order to build the Prerequisite Tree, we start by listing all the obstacles we see
preventing us from implementing the injection.

Obstacles and injection

Now that we have the solutions (IOs) to the individual obstacles (very often one I.O. addresses several obstacles) we can forget about the obstacles and concentrate on the Intermediate Objectives. The next step is to provide a basic sequence to these IOs. This is for
two reasons: first, we need to ascertain “what is prerequisite to what”; second, we need to
know how many things we could theoretically run in parallel. (In reality, resources will dictate
that pace, as we will see).

Prerequisite Tree
Then we sequence the IOs, placing at the bottom of the tree the first ones we have to achieve and then adding the others, according to a logical and temporal sequence. The Prerequisite
Tree is based on necessity.

The Prerequisite Tree

The PRT and the TRT were both developed for the same purpose but they are
“genetically” different. A PRT is a map; it is based on a logic of necessity and does not
foresee any check based on conditions of sufficiency. It is, essentially, a roadmap, a guide
that provides a suitable route to the goal. Its main value lies in the collaborative effort required
to build it and it is an ideal tool for teamwork. It is the simplest of the TPT to learn but it can be
misleading; it is not just a list, it is a list with priorities dictated by logical prerequisites and
these prerequisites are, in turn, altered by the amount of resources at hand. You will be surprised to see how many different points of view there are in a group trying to address
“what is logically a prerequisite to what”.
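The question of “what is prerequisite to what”, and of which IOs could theoretically run in parallel, can be sketched with a standard topological ordering. The IO names and their dependencies below are invented for illustration; this is our own sketch, not a Decalogue artifact.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical Intermediate Objectives: each IO maps to the set of IOs
# that must be achieved first (the logic of necessity of a PRT).
prerequisites = {
    "secure funding": set(),
    "hire a trainer": {"secure funding"},
    "buy equipment": {"secure funding"},
    "train the team": {"hire a trainer", "buy equipment"},
}

ts = TopologicalSorter(prerequisites)
ts.prepare()

levels = []  # each entry: IOs that could, resources permitting, run in parallel
while ts.is_active():
    batch = sorted(ts.get_ready())  # IOs whose prerequisites are all met
    levels.append(batch)
    ts.done(*batch)

for i, batch in enumerate(levels, start=1):
    print(f"level {i}: {batch}")
```

Each level groups the IOs that are logically free to proceed together; as the text notes, in reality the available resources, not the logic alone, will dictate the actual pace.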
A Transition Tree is a conveyor of understandable, usable and comprehensive
knowledge aimed at executing a precise task with very little variation. Transition trees are
developed any time it is necessary to convey to somebody a precise procedure to transform
an input into an output. If it were used within the framework of conventional quality assurance documentation, it would belong among the so-called “work instructions”. TRTs use
an extremely powerful cause-and-effect sufficiency logic; in other words, every intermediate
output is checked in terms of the sufficiency clauses that determine it. The transition tree is
built by stating the final result we want to achieve, normally an intermediate objective (I.O.) of
a PRT. The basic, repetitive, five-block structure of a transition tree is the following:
1. a starting situation
2. the prevailing logic of the situation
3. the need triggered by the prevailing logic
4. the action taken to satisfy the need
5. the logic of the action – why we have taken that action
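The five-block structure can be sketched as a simple record. This is our own illustrative rendering; the field names paraphrase the five blocks and the example content is invented.

```python
from dataclasses import dataclass

@dataclass
class TRTStep:
    """One five-block step of a Transition Tree (hypothetical sketch)."""
    starting_situation: str  # block 1
    prevailing_logic: str    # block 2
    need: str                # block 3
    action: str              # block 4
    action_logic: str        # block 5

steps = [
    TRTStep(
        starting_situation="Orders arrive by phone and are noted on paper",
        prevailing_logic="Paper notes get lost and cannot be checked",
        need="We need every order recorded in one verifiable place",
        action="Enter each order into the shared order log on receipt",
        action_logic="A single log makes every order traceable",
    ),
]

for step in steps:
    # the action changes reality; the resulting state becomes the next
    # step's starting situation, which is how a TRT chains forward
    print(f"NEED: {step.need}\nACTION: {step.action}")
```

Chaining several such steps, each one's resulting state feeding the next starting situation, reproduces the repetitive structure described above.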

Transition Tree


As a result of the action taken, the reality will change to a new state and we can easily
verify if we are getting closer to the final result we want to achieve with the TRT. At every step
we can scrutinize the effectiveness of our action, the validity of its logic and its coherence with the identified need.
PRTs and TRTs are the Thinking Process Tools we use to ensure that all the knowledge
available is captured and used in an orderly and proficient manner. Moreover, we can use
them as a backbone for any project we want to undertake, inasmuch as they provide a clear-cut set of actions that can be easily timed and staffed. In other words, the actions on the
TRTs can be easily transformed into tasks of a project with an appropriate duration and
allocated resources.
What I have tried to describe is the pattern that links the different phases of a
transformational project: what to change – what to change to – how to cause the change.
These phases activate very distinct elements of our intellect: intuition, understanding and
knowledge, which the human mind struggles to hold together with the cohesiveness required
for a minimally complex project.
The TPTs have been devised to improve this cohesiveness and help us access a much
higher level of sechel. The TPTs are the quintessential support to systems thinking and, just
as with Statistical Process Control (SPC), it will take mankind several decades to appreciate
their depth and scope.
ProPrint is a success story, but not because of its ability to execute the FRT and enact successfully all the trees needed to accomplish the stated goal. It is a success because the process provided a vibrant and ambitious management team with the possibility to gauge their strengths and weaknesses with respect to their legitimate ambitions. As a result of this
exercise they realized that the effort required to truly transform successfully the organization
they had taken over was beyond the desire they thought they had. They realized that what
they had achieved could have been very easily (and very rewardingly) incorporated by a
larger entity and become a part of a bigger organization. Accordingly, they accepted a very
generous offer from a large multinational to become one of their subsidiaries, performing
better and better what they already knew how to do. Everybody won.
In Chapter 13 on External Constraint we shall deal at length with the use of the TPTs as
applied to Marketing and Sales.

11.

MEASUREMENTS: THROUGHPUT ACCOUNTING AND STATISTICAL PROCESS CONTROL FOR EFFECTIVE DECISION MAKING

Larry Dries JD

Measuring
All organizations have a goal. Business corporations have the goal of making money. It’s
important for not-for-profit businesses to make money too, even though their goal is generally
to use the money they make to support an identified purpose, i.e., curing a disease,
supporting a disadvantaged class of persons, etc. Still, achieving the goal of the not-for-profit is crucially dependent upon its ability to generate profit. This chapter will primarily focus upon ensuring
that a business’ goal of making money is achieved. After all, that really is the goal of for profit
business organizations, and an important step towards achieving the goals of not-for-profit
organizations. An organization may produce goods or deliver services, but the goal of that business is to make money through the production of goods or delivery of services – the goal is not the mere making of the goods or the delivery of the services themselves – let’s not be misled.
Good decision making is the sine qua non of good management. The Decalogue is a
management methodology which utilizes a set of tools to ensure good decision making. Step
One of the Decalogue, however, recognizes that before any decisions can be taken, an
organization must “establish the goal of the system (the organization), the units of
measurement, and the operating measurements.” Why? Because the system cannot be
directed towards a goal if a goal is not first established, and decisions cannot be assessed as
right or wrong if there exists no way in which to measure the extent to which those decisions
are achieving the goal.
Measuring has historically been within the purview of the accounting discipline.
Accounting, it has been said, is the language of business. We must ensure therefore that we
are speaking a language that conveys concepts that support good decision making.
The preparation of financial statements is generally regulated by a set of rules and
regulations known as generally accepted accounting principles (GAAP). GAAP, though, is not
a single body of principles. It depends upon where you are. That fact, in and of itself, should
generate concern. Is good decision making really dependent upon where you are? I think not.
So why should our accounting language differ, subject to place? This is an indication that
perhaps the “language” is not meant to support decision making.

If you are a company doing business in the United States you probably report your
financial performance according to US GAAP, which on a federal level is established by the
Financial Accounting Standards Board. Outside of the United States there has been a
movement toward using International Financial Reporting Standards (IFRS), which is a set of
principles adopted by the International Accounting Standards Board. In Canada, up until 2006, the regulating body, the Accounting Standards Board, used a set of principles similar to, but not identical with, US GAAP. These principles were called Canadian GAAP. In 2006 the Accounting Standards Board in Canada decided to abandon Canadian GAAP in favour of IFRS. For purposes of discussion here, we shall simply refer to all of these various
standards as GAAP.
During the days of the industrial revolution, another body of accounting principles was
developed, which is generically called cost accounting. As in the case of GAAP, there exist
various forms of this generic body of knowledge, like activity-based costing, but again for our
purposes here we shall refer to these forms of accounting used by managers to understand
the costs of running a business as cost accounting. Cost accounting developed during the
early industrial age to assist large scale businesses to record and track costs, and thereby
help business owners and managers to make good decisions. The idea is correct. The
implementation, however, has fallen far short of being helpful in achieving good decision
making.
We concern ourselves with both GAAP and cost accounting because together, they are
the fundamental accounting principles which most accountants, finance professionals, and
managers use to assess organizational performance, and drive decisions. GAAP and cost
accounting are neither mutually exclusive, nor wholly consistent with one another. Cost
accounting need not comport with GAAP, as it is a form of management accounting for inside
managers. Unfortunately, as GAAP controls financial reporting to outsiders, it often gets
lumped in, to some degree, with cost accounting measures, which results in an
incomprehensible set of reports to all except the most technically trained financial
professional. That doesn’t really help managers make good business decisions. You can’t
successfully use what you can’t understand.
Moreover, many GAAP reports themselves are not really that helpful in supporting
operational decision making. For one thing, these reports are often generated significantly after operational decisions are made, and thereby contribute little to arriving at those decisions. Further, many GAAP reports accumulate several items under one
generic line item, which is also not that useful in driving decisions. GAAP reports may, at best
in such cases, provide a historical perspective of the efficacy of past operational decisions.
As the Decalogue is focused upon the promotion of good decision making, it requires a
set of measurements which can be used by all managers (not just financial professionals),
and drive good decision making by delivering the correct information as and when required;
that is, the measurements which every decision maker needs to know in order to reach a
valid decision. What is a valid decision? For our purposes we shall define a valid decision as
one which logically flows from a consideration of the relevant information which should be
considered, in the circumstances, to support the organizational goal.


Why can’t practitioners of the Decalogue rely upon GAAP and/or cost accounting to
provide the measurements to support good decision making? Neither GAAP nor cost
accounting, alone or in combination with one another, generates meaningful measurements
to support good decision making as and when required. This being the case, The Decalogue
has turned to another form of measuring to support it. This other form of measurement is
known as Throughput Accounting.

Throughput Accounting
Throughput Accounting is an alternative to cost accounting proposed by Dr. Eliyahu Goldratt, the developer of the Theory of Constraints. It is not a costing
approach, as it does not allocate costs to products or services. It is unrelated to GAAP, as it
makes no attempt to obey any of GAAP’s rules. As such, Throughput Accounting cannot be
reconciled with either cost accounting or GAAP. Such reconciliation is a meaningless
exercise, so there is no point in wasting any effort trying to do so.
Throughput Accounting has only one aspect that makes it similar to cost accounting; that
is, it is an internal methodology. It is meant to provide an organization’s leaders and
managers with measurements that support valid decisions, and reveal the progress, or lack of
progress, the organization is making towards the achievement of its goal. In today’s world, an
organization will still require GAAP accounting. Why? Not to assist it with decision making. It
will require GAAP accounting to report to outsiders. Those are the rules of our various
societies. The bottom line is that Decalogue organizations, like non-Decalogue organizations,
speak to outsiders in the language of GAAP. Decalogue organizations, however, ignore the
language of GAAP and cost accounting in reaching decisions. Decision making, in Decalogue
organizations, is supported by the principles of Throughput Accounting.
Throughput Accounting is predicated upon a set of three variables, and the manner in
which they interact, to affect an organization’s profitability. These three variables are called
Throughput, Inventory and Operating Expense. Using these three variables, Throughput
Accounting defines a set of relationships to measure and define how revenues and expenses
of an organization are related.

The Measurements of Throughput Accounting


Throughput (T) is the rate at which the system produces units of the goal (through
sales). When the goal units are money, as is generally the case in most organizations,
throughput equals the sales revenues (S) minus the totally variable cost (TVC) of what it pays
to produce the goods or services sold. (T = S – TVC) Note that although the foregoing
formula for throughput is written T = S – TVC, Throughput is really the derivative function, the
rate at which cash is generated by the system; that is, it recognizes that a dollar earned in one day is not equivalent to a dollar earned in one hour. The rate of cash generation matters.
As such, we will later discuss the concepts of throughput dollar days and inventory dollar
days.
Inventory (I) is the money tied up in the system to be transformed later into sales. This
money is in the form of what is generally understood as inventory. Inventory is only valued at the totally variable cost associated with its creation or procurement. It does not include any
allocation from overhead or fixed expense.
Operating Expense (OE) is the money the system spends in generating units of the
goal. Examples of operating expense include rent, utilities, taxes, payroll, maintenance, advertising and training, as well as investments in buildings, machines, etc.
Valid decisions are ones that support the attainment of the organization’s goal. Decision
makers therefore must test their proposed decisions against three questions. Will the
decision:
1. Increase Throughput?
2. Reduce Inventory?
3. Reduce Operating Expense?
The answers to these three questions determine the effect of the decision on the system,
through relationships such as the following:
1. Net Profit (NP) = T – OE
2. Cash Profit (CP) = T – OE – I
3. Change in Net Profit (ΔNP) = ΔT – ΔOE (which should be > 0 to support a valid decision)
4. Change in Cash Profit (ΔCP) = ΔT – ΔOE – ΔI
By testing decisions against the manner in which they influence the foregoing variables, and the relationships briefly shown above, the genuine effect a decision has upon the organization can be quantified and understood. These are the measurements that really drive good decision making.
Throughput Accounting is not concerned with calculating the cost of a product, as is cost
accounting. It strategically supports and monitors the decision making processes connected
with the generation of cash.
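As a quick sketch of how these relationships drive a decision test, the following Python fragment computes T, NP and the ΔNP check. All figures and function names are invented for illustration; this is not a prescribed implementation.

```python
# Throughput Accounting relationships, as sketched in the text.
# T = S - TVC; NP = T - OE; a valid decision should give dNP = dT - dOE > 0.
# All numbers below are invented for this example.

def throughput(sales, tvc):
    """T = S - TVC: sales revenue minus totally variable cost."""
    return sales - tvc

def net_profit(t, oe):
    """NP = T - OE."""
    return t - oe

def supports_goal(delta_t, delta_oe):
    """Decision test: does the change in net profit come out positive?"""
    return (delta_t - delta_oe) > 0

t = throughput(sales=100_000, tvc=40_000)   # T = 60,000
np_ = net_profit(t, oe=45_000)              # NP = 15,000

# Proposed decision: spend 5,000 more OE to generate 12,000 more T.
print(t, np_, supports_goal(delta_t=12_000, delta_oe=5_000))
```

A decision that cut operating expense by 2,000 but lost 4,000 of throughput would fail the same test, which is the point of measuring against the relationships rather than against cost alone.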

Comparison of Throughput Accounting with Cost Accounting and GAAP Measurements


Throughput Accounting fills a void, which is the need for real information to enable the
optimization of sustainable cash flow in any organization. Most businesses do not fail for lack
of profitability; they fail because they are cash starved.
Understanding the rate of cash generation is a fundamentally significant concept for all
organizations, and neither GAAP nor cost accounting provides the real-time measurement of cash generation that Throughput Accounting does. GAAP itself has no measurement to
monitor cash generation. That is why cost accounting has developed the measurement of
EBITDA (earnings before interest, taxes, depreciation and amortization). EBITDA purports to
measure cash earnings without accrual accounting, cancelling tax-jurisdiction effects, and
cancelling the effects of different capital structures. Let’s not be fooled, it’s still not a measurement of throughput. Also, a report such as the GAAP cash flow statement, for
instance, is merely a reconciliation of the income statement earnings to the change in the
organization’s bank account, which provides a manager with absolutely no meaningful
information upon which to base any decision.
As previously noted, throughput recognizes the time component of cash generation. (It is
important to note at this point that throughput is not recognized until the organization actually
receives the money. It is not an account receivable.) This then, highlights how the
measurement of throughput provides meaningful information concerning actual cash flow.
GAAP reports, and cost accounting to a lesser but still unacceptable degree, fail to recognize
the time aspect of cash generation. A Throughput Accounting net cash profit statement is a
far more valuable report than GAAP’s income statement, balance sheet and cash flow
statement.
Cost accounting principles are in many ways worse still. Cost accounting burdens its
outputs with all types of artificial notions. Cost accountants are expert at allocating an
organization’s fixed costs in a given time period over the products produced in that given
time period. The problem with this exercise is that it does not really attribute costs for the
reasons the costs are genuinely incurred. This skews decision making, as it introduces
artificial goals (i.e. higher gross margin), as compared to the genuine goal of the
organization (i.e., making money). For example, research and development costs are items
of operating expense, not items of fixed cost that should be allocated against products. The
decision to engage in research and development was driven by the need to ensure the development of products in the future; in other words, the need to have sustainable sources of revenue.
However, if decision making regarding research and development efforts is based upon
their effect upon product (as a result of an allocation as part of the expense of already
existing products by cost accountants) then the organizational goal is not addressed and
the decision to fund research and development is flawed. It simply becomes the result of an
artificially created accounting measurement unrelated to the genuine purpose of research
and development. Such a measuring system has no place in any credible decision making
paradigm.
Let’s see some other ways in which Throughput Accounting treats certain
measurements differently than how GAAP treats those same measurements. These
differences are materially significant because the measurements themselves, as a result of
the manner in which they are calculated, drive vastly differing decisions. To be perfectly
clear, this means that the decision making is materially impacted by the measuring system
itself. That being true, the organization’s achievements are fundamentally driven by the
measuring system. That’s why it is vitally important to have a measuring system that
supports decision making by providing information that has a cause and effect relationship
to real world outcomes, not one that relies upon artificial constructs that do not support real
world causal relationships.


• Sales: Throughput Accounting recognizes sales when cash is received; GAAP recognizes them when the product ships (typically).
• Cost of sales: Throughput Accounting counts TVC, raw material only; GAAP counts raw material plus allocated overhead incurred when the product was produced.
• Plant expenses: Throughput Accounting recognizes them when incurred; GAAP capitalizes them into the inventory produced in a given period and recognizes them when the product is later sold.
• Balance sheet: not necessary in Throughput Accounting; required under GAAP due to accrual accounting and the matching of revenues to expenses.
• Capital expenditures: Throughput Accounting recognizes them when cash is paid; GAAP recognizes them over the useful life of the asset.
The GAAP measurements in the above table are artificial constructs. They are not
reflective of real cause-effect relationships; as such, they can lead to bad decision making.

The Role of Measuring in the Decalogue Organization


The Decalogue was developed to satisfy a need. That need is the need to manage
organizations effectively. Why has effective management proven to be so elusive? According
to Dr. Domenico Lepore, the Decalogue’s co-developer, it is because the assumptions
underlying our management methods come from a world view that does not match our needs.
These assumptions include:
• A vision of an organization that is hierarchical, rather than systemic
• The pursuit of local optimization at the expense of global optimization
• A management approach oriented toward a “cost reduction world”, rather than an
“increasing performance world”
Let’s focus on the last bullet point. You cannot save your way to prosperity. Many
organizations, however, continually attempt to do just that. Rather than concentrating on the
generation of throughput, executive teams spend a vast amount of time and effort dedicated
to cutting expenses. Fundamentally, this cannot in and of itself ensure sustainable profitability. We earlier noted that decision makers must test their decisions against three
questions. Will the decision:
1. Increase Throughput?
2. Reduce Inventory?
3. Reduce Operating Expense?
These three simple questions embody the only things a decision can meaningfully
impact. Interestingly, only throughput generation is absolutely necessary to ensure a
sustainable business. That’s not to say that reducing inventory or operating expense cannot
have positive impacts upon a business, rather it is simply a recognition that without
throughput there is no business. This is the assumption that must be invalidated.
Organizations must become performance oriented, rather than obsessively cost conscious. How
does this relate to measuring?
As Throughput Accounting concerns itself with measuring throughput, it really is
fundamentally dependent upon, and related to, the needs of the customers of the
organization. How do you increase throughput? It’s not an accounting problem. It’s not a cost
cutting exercise. It’s a realization that the organization’s success is dependent upon meeting
the needs of its customers. Hence, Throughput Accounting, like all organizational activities, is
an activity that depends on and affects other activities. That’s the definition of
interdependence. Measuring is just one of the interdependent activities of any organization. It
certainly relates to customer needs through the marketing and sales functions; and,
Throughput Accounting should support decision making in, among other areas, the areas of
marketing and sales.
This brings us to the measurements referred to earlier, throughput dollar days and
inventory dollar days. Throughput dollar days measure the organization’s ability to deliver to
its customers on time. By delivering on time, throughput is increased in the short term by
realizing sales sooner. Customer confidence is also increased. Throughput dollar days are
calculated by multiplying the throughput of a late shipment by the number of days it is late.
The aim is for throughput dollar days to equal zero.
Inventory dollar days measure the organization’s ability to turn its inventory; it’s a
measurement of the effectiveness of the supply chain. Throughput Accounting recognizes
inventory as a liability, being cash trapped in the system. Inventory dollar days are calculated
by multiplying the value of an item of inventory by the number of days that item of inventory
has been in the system. Inventory dollar days can never be zero, but less is definitely more.
The goal is to carry only as much inventory as is required to reduce throughput dollar days to
zero. These are the measurements of Throughput Accounting that support a “performance
world view”, as compared to a “cost reduction world view.”
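The two calculations just described can be sketched as follows; the shipment and inventory figures are invented for the example:

```python
# Throughput dollar days (TDD): throughput value of a late shipment x days late.
# Inventory dollar days (IDD): value of an inventory item x days in the system.
# All figures are invented for this sketch.

def throughput_dollar_days(shipments):
    """Sum of throughput x days late over all shipments; the aim is zero."""
    return sum(t * max(days_late, 0) for t, days_late in shipments)

def inventory_dollar_days(items):
    """Sum of item value x days held; it can never be zero, but less is more."""
    return sum(value * days_in_system for value, days_in_system in items)

# (throughput, days late): one shipment on time, one three days late.
tdd = throughput_dollar_days([(8_000, 0), (5_000, 3)])   # 15,000 dollar days
# (value, days held) for two items of inventory.
idd = inventory_dollar_days([(2_000, 10), (500, 4)])     # 22,000 dollar days
print(tdd, idd)
```

Note that the results are in dollar days, not dollars: as the text says, they can only be compared with other dollar days.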
Dollar days can only be compared to other dollar days; they are neither a monetary nor a time-based measurement. Therefore, throughput dollar days are only comparable to other
throughput dollar days, and likewise inventory dollar days are only comparable to other
inventory dollar days. There exist no similar GAAP or cost accounting measurements that can
be analogously compared. These are genuine measurements of organizational activities - the effectiveness of on time delivery, and the effectiveness of the supply chain operation. They
are real examples of how Throughput Accounting genuinely supports essential decision
making concerning matters that every organization faces.

Statistical Process Control


Statistical Process Control (SPC) is the practice by which statistical methods are applied to the monitoring and control of organizational processes. SPC is required because in the real world no two things are identical. Each process is influenced by
many causes that are difficult to identify, and processes can thereby generate outputs with
varying characteristics, some of which are unacceptable. SPC is concerned with
measurements inasmuch as it recognizes that variation exists, and that variation can be
quantified. Measuring is, after all, the act of ascertaining the extent or quantity of a thing. Our
discussion of measuring in a Decalogue organization cannot then be complete without some
mention of SPC.
A science based methodology, like the Decalogue, requires decisions to be based upon
reliable data, not guesses. SPC examines processes and sources of variation in processes
using objective analysis, not subjective opinions. It is this objective analysis that allows
quantification, which then permits the design of precise solutions to problems of unacceptable
variation. Our discussion of SPC here is not to examine all of its intricacies; rather it is to
reveal how SPC is part of the measuring that is intrinsic to any systemic methodology.
Dr. Walter A. Shewhart is the early pioneer of SPC. Later, Dr. W. Edwards Deming
evolved SPC methods into a comprehensive management philosophy known as Quality
Management. Dr. Deming is recognized worldwide as the founding father of the Quality
movement and SPC is, fundamentally, a method to ensure and sustain Quality.
SPC permits us to understand the predictability of an outcome from past experience,
within understood limits. This understanding allows decisions to be knowledge based, rather
than suspicion based. Knowledge based actions are infinitely superior to actions based upon
guesses.
SPC enables the understanding of a process in terms of its causes of variation and
facilitates the elimination of sources of special cause variation. (Variation is classified as
either common cause variation or special cause variation. Common cause variation consists
of variation inherent in the process. Special cause variation then is variation caused by
anything not inherent in the process as designed.) In understanding a process, the process is
mapped out and monitored using a control chart. The control chart identifies variation that
may be due to special cause. The control chart, by identifying special cause variation, frees
the user from concern over common cause variation. When excessive variation is identified
by the control chart through the application of a set of precise rules, a signal is thereby identified and further analysis can be performed to determine the special causes of
the variation. In other words, all organizational processes are measured. It is this dedication
to the measurement of all organizational processes that is a distinguishing characteristic of a
science based methodology. The only way to effectively measure organizational processes is
through the use of SPC.

SPC is another management tool applied in a Decalogue organization. It is a measuring system that helps achieve the goal of the organization by ensuring that decisions are taken
as a result of a precise understanding of what influences the achievement of units of the goal.

Some Final Matters


We started by pointing out that good decision making is the sine qua non of good
management; and, decisions cannot be assessed as right or wrong if there exists no way in
which to measure the extent to which those decisions are achieving the goal of the
organization. Hopefully we have shown how a system of measuring forms an inherent part of
any valid decision making paradigm. More importantly, however, it is essential to deploy a
measuring system that generates meaningful measurements that encourage a sustainable
world view.
Unfortunately, Throughput Accounting is not the “gorilla” in the space. Corporate finance
is driven by cost accounting principles, reported to the world in the language of GAAP. Cost
accounting and GAAP are firmly mired in a “cost reduction world.”
Organizations are under pressure to discover the “new normal.” We suggest the “new
normal” should be the adoption of the “increasing performance world” assumption. It’s the
way to ensure throughput and the way to ensure that you have a business.
In order to adopt this new paradigm though, organizations must adopt new measuring
systems. What measurements you think about, and how you think about those
measurements, determines the performance of your organization. So in the final
determination, “bean counting” does matter. But it is not so much the counting that counts, it
is how you count and what you do with the results that really make the difference.

12.

MANAGING VARIATION

Dr. Giovanni Siepe

What is variation?
It is beyond the scope of this chapter to teach the fundamentals of Statistical Process Control;
Don Wheeler and other experts have already done that majestically. The goal of this chapter
is to examine the fundamental role of managing variation in any effort to manage an
organization systemically, in particular using the Decalogue methodology.
If we examined the arrival times of employees to their work place every day, we would
immediately notice that nobody ever manages to get to the office at exactly the same time.
No matter how optimized their routine is, the arrival time is always different. What prevents everybody from getting to work every day at exactly the same time is variation.
Why can’t any process be standardized so that no variation occurs? It is because in
nature there is a ‘variable’ called entropy that accounts for the variation associated with every
process. The 2nd law of thermodynamics states that any “spontaneous” change in a “closed”
system is accompanied by an overall increase in entropy. When water evaporates molecules
are dispersed and tend to occupy the whole space, resulting in an increase of entropy. The
entropy of the universe, for example, is always increasing.
Entropy is a measure of disorder, or randomness (variation) in a system. Any
organization, or system, in its spontaneous evolution, is naturally affected by the increase of
entropy. The day-by-day repetition of simple actions at our work place will never be the same
because of the natural increase of entropy.
Variation affects all aspects of our life, and all processes in an organization. It is of
profound importance to managers who, in order to exert their role, must ensure a stable and
predictable environment. Indeed, the essence of management is prediction. Let’s have a look
at few key points we have to consider when we talk about variation.
Walter Shewhart was the first to have an intuition about this phenomenon, which is
intrinsic to every process and system. We can define a process as a set of actions/activities
that happen over time, following a rationale/procedure and aimed at a specified goal.
Shopping for food at the market with our family on a Saturday is a process and so is the set
of actions that get us to the office every day.

Let’s examine the set of actions that everybody goes through to get to work, and the relevant
variations:

• How many minutes do people stay in bed after switching off the alarm? Perhaps
they stayed up late the night before, so they want to stay in bed a few more
minutes;
• Do they make the coffee or does another family member make it?
• Is the bathroom free or do they have to wait to get in?
• How long do they stay in the shower and how long will they take to get dressed?
Did they decide the night before what they were going to wear?
• Will the car start first time or will they have to let the engine warm up?
• How many red lights and how many green lights will they come across on their way to work?
• Will they find a parking spot near the entrance to the office?
The arrival time depends on how these simple actions, with their associated variation,
are combined together.
The variation associated with each action is identified mathematically by its ‘variance’,
whereas variation associated with the whole process is identified by the combination of the
set of variances relevant to each action, which is the co-variance. As a matter of fact,
processes inside an organization are highly interconnected and interdependent, and
predicting the outcome of a sequence of events becomes very difficult.
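A small numerical sketch, with invented commute durations, shows how co-variance inflates the variation of the whole process beyond the sum of its parts:

```python
import statistics

# Durations (minutes) of two commute steps over five days; invented numbers.
# The steps vary together: a slow start also means worse traffic.
step_a = [10, 12, 11, 14, 13]
step_b = [20, 24, 22, 28, 26]
total = [a + b for a, b in zip(step_a, step_b)]

var_a = statistics.pvariance(step_a)       # variance of step A alone
var_b = statistics.pvariance(step_b)       # variance of step B alone
var_total = statistics.pvariance(total)    # variance of the whole process
cov_term = var_total - var_a - var_b       # equals 2 * Cov(A, B)

# Var(A + B) = Var(A) + Var(B) + 2*Cov(A, B): when the steps are
# interdependent, the total varies more than the sum of the parts.
print(var_total > var_a + var_b)
```

With independent steps the covariance term would vanish and the variances would simply add, which is the additive case discussed below for non-interacting components.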

Variation and processes

Let’s consider the sales process of a company. ‘Scoring’ a sale is just the last step of a
process that starts with purchasing the raw material, goes through manufacturing and
assembly, up to shipment. To be sure we score the sale we have to consider the variation
associated with every part of the process:
• the arrival time of raw material
• time to inspect it
• time for moving material from the raw material warehouse to manufacturing
• time for manufacturing
• time for assembly
• time for moving material to the warehouse of finished products
• shipment time
There is no way to ‘predict’ the final outcome of the process without considering the
different steps that brought us to the finalization of the sale; the only way is to approach the
process ‘holistically’.

Understanding and improving the performance of highly interdependent processes in complex organizations is possible only if we understand the variation associated with single
processes, the cause-effect relationships among them, and the impact that they have
individually and cumulatively on the final result; in other words, we have to approach the
organization (system) as a whole.
As we know, a system is a network of interdependent components that work together to
achieve a common goal. If there is no goal, there is no system. If we have a set of segregated
parts, neither interacting, nor interdependent, A, B, C, the Theory of Systems shows that the
performance of this kind of network is ‘additive’:
Perf.(system)=Perf.(A)+Perf.(B)+Perf.(C)+…

Independent network

On the other hand, if the components are dependent and/or interacting, the cause-effect
relations among the various parts of the system are “non-linear”, and the Theory of Systems
shows that:
Perf.(system) ≠ Perf.(A)+Perf.(B)+Perf.(C)+…

Interdependent network

Interdependence and co-variance are elements that influence the “dynamics” of a system.
When we have large interconnected systems, it is virtually impossible to predict the
effect that changing a single component will have on the system overall. We face a problem
of “complexity”, where non-linear interactions play an important role.


Complex system

Variation and management decisions


A company is a “complex system” ruled by non-linear interactions that make it difficult to
make any prediction when any change is effected. That’s why any approach to improve the
system has to be “holistic”, and consider all the interactions/interdependencies.
The description of what everybody does, input and output, understanding the
interdependencies and, finally, understanding variation associated with each process, is
foundational to any form of “intelligent management”.
What is the guiding rule that should inspire any managerial decision? The job of
Management is to work on the system and create a predictable environment; as Dr. Deming
reminds us “the job of managers is to reduce variation”.
Statistical Process Control (SPC), the main offspring of the Theory of Variation, is the
body of knowledge that helps managers to act rationally for the improvement of the system.
SPC is not a technique; it is a way of thinking that should be fostered by Top Management at
every level in the organization.
The data and the way we collect them are as important as the analysis we perform on
them. The starting point to understand the system and its different processes is to select key
points where we monitor variation. Once we have identified these points, we can start with
the analysis.

Understanding processes
The most practical way to understand processes is to draw a picture of them, by means
of a flowchart. A flowchart is a diagram that describes a sequence of events, tasks and
decisions that transform input in a process/system into output. Flowcharts use some standard symbols and conventions to make them easier to communicate and understand, and they portray a process as a map or chain of activities and decisions.
We can describe the flow of materials, information and documentation; we can show the
various activities included in the process, explain how these activities transform input into
output, indicate the decisions that have to be made along the chain; we can depict
interrelations and interdependencies among the various phases of the process that are
important, and we can easily acknowledge that the strength of a chain depends on its
weakest link.
In describing processes we realize that it is very difficult to establish very precise borders
between departments and functions. In order to deliver a product, or provide a service, these
fictitious separations are often crossed. We see, therefore, that these processes cut across a
traditional organization chart or organization pyramid. Flowcharts are then the key to
developing an understanding of whether, how, and where every single link is adding value to the chain.
“…a flow diagram also assists us to predict what components of the system will be
affected, and by how much, as a result of a proposed change in one or more components…”
(Deming, The New Economics).
What we do is to map out processes and compare them with how they should ideally
work, so as to understand where complications lie, identify misalignments between authority
and responsibility if any, look for critical points, and determine breakage points in the chain
that link the supplier with the customer.
From this analysis we have to recognize ‘Key Quality Characteristics’ (KQC), i.e. aspects
of the processes that heavily influence their capacity to contribute to the goal of the system.
By highlighting these critical characteristics, we can identify the points where it is more useful
to gather data on the variables of the processes. From the analysis of these data we can
understand whether the processes are in control (predictable) or not before taking any action
to improve them, and whether improvement actions are effective or not.

Using flowcharts
The purpose of flowcharting is to understand the system and its interdependencies,
understand its behaviour (monitoring its variation), and take the right decisions to improve it,
and not just do a nice drawing for our boss.
There are two kinds of flowcharts, the process flowchart, and the deployment flowchart.
A process flowchart simply presents the sequence of activities and the decision points in
a process. It does not show the people who are involved in every phase.
A deployment flowchart (DFC) describes who does what. It shows the interactions
among people in the various phases of the process. It is crucial to know what these
interactions are to really understand how the process works and how to improve it.

The steps for drawing a DFC are:


• Identify the boundaries of the process, go through the process following the
sequence of events, and achieve an overall vision without too many details

• Identify the key areas (competencies)
• Put all the stages of the process on a flowchart by competency, according to the
sequence in which they are carried out
The basic symbols are rectangles (tasks), and diamonds (decisions) connected by
arrows, and every task or decision is summarized inside its box.
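As a rough illustration of the “who does what” idea, a deployment flowchart can also be held as data. The owners, tasks and structure below are invented for this sketch and are not a prescribed notation:

```python
# A minimal way to hold a deployment flowchart as data: each step records
# who owns it, whether it is a task or a decision, and what follows it.
# All names are invented for this illustration.
steps = [
    {"id": 1, "owner": "Production", "task": "Request item",         "kind": "task",     "next": [2]},
    {"id": 2, "owner": "Purchasing", "task": "Item in stock?",       "kind": "decision", "next": [3, 4]},
    {"id": 3, "owner": "Warehouse",  "task": "Replenish from stock", "kind": "task",     "next": []},
    {"id": 4, "owner": "Purchasing", "task": "Place order",          "kind": "task",     "next": []},
]

# "Who does what": group tasks by owner - the view a process flowchart,
# which omits the people involved, cannot give.
by_owner = {}
for s in steps:
    by_owner.setdefault(s["owner"], []).append(s["task"])
print(by_owner["Purchasing"])
```

Even this toy structure shows the point of a DFC: the same competency (Purchasing) appears at several stages, so improving the process means looking at its interactions, not at one box.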
Let’s examine a few examples of Deployment Flowcharts, and identify the KQCs
associated with them.

Deployment Flowchart Purchasing

This is a very simplified example of a DFC of a purchasing process. The KQC identified are:
Q1: types of items to be purchased (number and frequency)
Q2: suppliers with covenant
Q3: suppliers with no covenant
Q4: item replenished (it is in stock)
Q5: order placed with key supplier
Q6: order placed with chosen supplier
Q7: time to deliver the product
In principle, we can have one indicator for any point identified in the process. What we
have to understand though, is that not all the points have the same relevance, and there are
some that impact the system dramatically. It is by understanding interdependencies that we
can focus on those points.

A real case: Replenishment


Let’s analyze a real case. Below are three DFCs relevant to three major, completely interdependent processes; let’s see how we can monitor the behaviour of the system in order to improve it.
The process we analyze here is ‘Replenishment’, the process that links sales to
purchasing.
In order to know how much raw material we have to buy, and how much we have to
release to production, we have to monitor stock levels based on consumption, the reliability of
the supplier and, essentially, the pace of sales.
Accordingly, the KQCs here are set so as to monitor the stock levels and the pace of sales.
We can then collect information about the consumption, and analyze these data using SPC.
Stock levels and the pace at which we sell are important elements to monitor so we can
understand how much raw material we have to buy (given the reliability and the delivery time
of the supplier), but at the same time operations have to be reliable. The ‘uptime’ of the
machines is important as well, since a high and consistent uptime is what makes it possible to
have a regular and reliable flow of material through the system. Monitoring the ‘uptime’ is,
thus, another important KQC to consider.
Logistics is the last step of this chain. In order to understand the reliability of our system,
we have to monitor the Expected Time of Departure (ETD), and the Actual Time of Departure
(ATD) of the finished products, or the relevant arrival times to the customers (if we own the
vehicles) so as to allow us to understand the ‘effectiveness’ of our service.

Deployment Flowchart for Replenishment

Deployment Flowchart Manufacturing Operations & Plant Shipping

Deployment Flowchart Dispatch & Logistics

Making sense of the data: variation and Control Charts


Monitoring essentially three or four points in this very complex chain, from purchasing of
raw material to shipment of finished product, will allow us to control how the system is
performing, and where inefficiencies happen.
Once we have collected the data, we have to organize them so they are useful for
analysis and decision-making. How do we extract the information from the data?
A histogram, or bar chart, is a graphical representation of the data with possible values
along the horizontal axis and their frequency along the vertical one. A histogram is the easiest
way to see the distribution of the data, and how they spread around some average value.

A histogram or bar chart

A different way to represent and/or summarize data is the run chart. A run chart is a
graphical representation of the evolution of the data. This representation makes evident any
change in the system over time.


A run chart

A Control Chart, or Process Behaviour Chart, is a graphical representation of the evolution of the data over time compared with its average and its ‘natural limits’ of variation:

A Control Chart or Process Behaviour Chart

This representation makes evident how the process oscillates, its range of oscillation and
provides a framework for making sense of data.
Variation cannot be eliminated; it can be reduced, but not eliminated. Entropy exists.

The main problem we face when we deal with variation is to understand it. Variation
associated with processes can be of two different types:


1. variation due to causes intrinsic to the process/system; these causes are always
present, and are called common causes of variation
2. variation due to special causes; these are not part of the system (that has generated
the common causes)
In the first case we say the process/system is in “statistical control” because it is subject
to a pattern of variation that is predictable over time. In the second case we say that the
process/system is “out of statistical control”, as it is subject to variation that is unpredictable
over time. Management in these two states is radically different.
We use statistical methods in order to be able to filter ‘noise’ (routine or intrinsic
variation) from the data in order to detect ‘signals’ (special causes of variation).
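As an illustration of filtering noise from signals, the sketch below computes the natural limits of an individuals (XmR) chart using the conventional 2.66 moving-range factor; the data are invented for the example:

```python
# Natural process limits for an individuals (XmR) chart:
# limits = mean(X) +/- 2.66 * mean(moving range).
# Data are invented for this sketch; the last point is a possible signal.
data = [21, 19, 22, 20, 23, 21, 20, 22, 21, 35]

moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mean_x = sum(data) / len(data)
mean_mr = sum(moving_ranges) / len(moving_ranges)

ucl = mean_x + 2.66 * mean_mr   # upper natural limit
lcl = mean_x - 2.66 * mean_mr   # lower natural limit

# Points outside the natural limits are signals (special causes);
# everything inside them is routine, common cause variation.
signals = [x for x in data if x > ucl or x < lcl]
print(round(ucl, 2), signals)
```

Here the first nine points stay inside the limits (noise), while the tenth falls above the upper limit and deserves investigation as a possible special cause.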

Intelligent Management and SPC


SPC is not a technique; it is a way of thinking. SPC is foundational for Intelligent
Management.
The aim of control charts is to provide insight into processes and trigger rational thinking.
After many years of relentless application of Shewhart’s charts, I can safely say that they are
a powerful tool to manage intelligently the emotions triggered by the analysis of a process.
We would discourage anybody from managing organizations using probability theory; what
we suggest here is to leverage some basic, well-proven scientific ideas concerning the
intrinsic variation of any human and organizational activity to take decisions that have
economic value.
The way we deal with variation is totally different depending on whether the system is ‘in
control’ or ‘out of control’.
When a process is in control, it is working with the minimum variation possible given
specific conditions of use. If these conditions remain stable, the process is in the best
possible state as its behaviour is predictable over time.

In order to further improve this process we can only try to reduce its variation with the
following actions:
• Stratify the data by dividing them into categories based on different factors and
analyze how the data fall into subgroups
• Separate the data by dividing them into various categories and treating them
separately from the others
• Gain experience by applying the Deming Continuous Improvement Cycle (PDSA):
plan, do the experiment, monitor its results, learn from the effects observed and
act
When a process is out of control, there is not a lot we can say about it.
Indeed, its behaviour is not predictable over time. It is subject to unpredictable jumps and
all the data relating to it lose their predictive potential and become ‘historical’ data.


In order to act on an out of control process, i.e. try to bring it into control we must:
• Gather data as quickly as possible to identify rapidly the special causes that
generate instability in the system
• Activate an emergency solution to limit damage
• Find out what made the special cause occur
• Implement a long-term solution
There are at least two excellent reasons for not wanting a process to be out of control.
The first is the impossibility of prediction, which often makes it impossible to plan and carry
out programs.
The second is linked to the costs incurred by a company that confuses common causes
with special causes of variation.
Indeed, performance that seems good will often disguise poorly optimized use of
resources.
However, this does not mean that ‘out of control’ is always bad. When we have a stable
process, say a sales data series that oscillates predictably between the upper and lower
control limits, and we enforce a plan to increase sales, we actually hope to see the system go
OUT of control on the upper side: this is how we detect that an ACTION has shifted the
system in the desired direction, namely an increase in sales. By the same token, a process in
statistical control is not necessarily a desirable process; oscillation limits that are too wide are
often the result of poor understanding and execution of the process and force unnecessary
costs on the system.
Again, the analysis must be performed with intelligence and common sense, and the
charts always have to be read considering the operational context.

Using SPC to monitor and improve processes


Let’s see now how we can use SPC to monitor and improve processes. We have already
seen some DFCs relevant to different interconnected processes; the first one was
Replenishment.
Replenishment is an algorithm that uses statistical process control (SPC) to replenish
warehouses with raw materials and/or finished products. This involves an understanding of
variation, sizing based on production needs and cash immobilization, and reviews of
purchasing and sales policies.


Warehouse Control Chart

We make a control chart for the material flow in and out of the warehouse and determine
its average value and its variation.
We make the control chart of the sales (consumption), and if sales are in control, i.e.
predictable, we can trust that the value will stay within 3 sigma (sales) of its average value,
let’s call it AC (average consumption per unit of time).
Our supplier delivers every N units of time, with a variation sigma(supplier) (also
assessed with a control chart). What is the minimum safe level of products in the warehouse?

The maximum flow we can expect in material consumption is


AC + 3 sigma(sales) per unit of time

The longest delivery time we can expect from the supplier is


N + 3 sigma(supplier) units of time.

The replenishment level is then just the product of these numbers:

(AC + 3 sigma(sales)) x (N + 3 sigma(supplier))
We have thus established the right level of inventory. Any agreement with the supplier
will be made on the basis of our average consumption, but we only replenish what we sell. If
we sell more, we ask for more. If we sell less, we ask for less. “Buffering” the system with the
right amount of raw material and finished product, based on the statistical understanding of
the consumption, will allow us to free up cash immobilized in inventory.
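The sizing rule above can be sketched in a few lines of code. This is a minimal sketch; the numbers are illustrative, chosen to be consistent with the worked example in this section (an upper sales limit of 2500MT per day and a maximum lead time of 3 days).

```python
def replenishment_level(avg_consumption, sigma_sales, avg_lead_time, sigma_supplier):
    """(AC + 3 sigma(sales)) x (N + 3 sigma(supplier)).
    Valid only if both processes are in statistical control."""
    max_flow = avg_consumption + 3 * sigma_sales        # worst-case consumption per day
    max_lead_time = avg_lead_time + 3 * sigma_supplier  # worst-case delivery, in days
    return max_flow * max_lead_time

# AC = 2000MT/day with an upper natural limit of 2500MT/day (sigma ~ 500/3);
# lead time N = 1.5 days with an upper limit of 3 days (sigma = 0.5):
print(round(replenishment_level(2000, 500 / 3, 1.5, 0.5)))  # 7500 MT
```

The formula only makes sense once both consumption and supplier lead time are in statistical control; applied to an out-of-control process, the sigmas have no predictive meaning.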

This is how the two processes under discussion, daily inventory of raw material and daily
sales, could appear:


Inventory of Raw Material Control Chart

This process is in control, and shows a wide range of variation, from 400MT to 11000MT. Is the
reason for this wide variation linked in any way to the sales process? This is the relevant control chart:

Sales Control Chart


The above process is out of control, i.e. unpredictable, even though it shows a drastically
narrower variation compared with the inventory of raw material.
Let’s analyze and compare the two charts. The daily average stock of raw material is
about 6000MT, with a range of variation between 400MT and 11000MT, and the process
oscillates in a predictable way between the two natural limits. Average daily sales are
about 2000MT, with a range of variation between 1500MT and 2500MT, and the process is
not predictable.
We are still missing one parameter to establish the right level of inventory; let’s see what
the control chart relevant to the lead-time from the supplier to our warehouse looks like:

Lead Time Control Chart

The process is in control, with an average delivery time of 1.5 days, and a range
oscillating between less than one half day up to 3 days.
Based on the above numbers, the correct stock level according to the pace of our sales is:
(AC + 3 sigma(sales)) x (N + 3 sigma(supplier))
that means:
2500MT per day x 3 days = 7500MT
This would be the case if the processes were in control, but unfortunately sales are out
of control; so, in order to set the stock level, we have to bring sales back into control, then
monitor and assess the pace of our sales, calculate the average, and only then can we apply
the above formula.


This level of inventory will “absorb” any variation due to common causes that we can ascribe
to the pace at which we sell or the lead time from the supplier to our warehouse. In the example
above, this could be done by ascertaining what sent the system out of control; if the special
cause is identified (and the problem fixed), we can remove that data point from the calculation
and see if the “new” set of data (without the out of control point) displays predictability.
Once we have established the stock level, let’s say 7500MT, we monitor the pace at
which we use it, and we replenish exactly what we use. If we sell 1000MT we replenish
1000MT, if we sell 4000MT we replenish 4000MT. We only have to ascertain that the
consumption is in control in order to be sure that we can continue to be reliable.

Management and control


How can managers exert control effectively? As we saw in Chapter 3, there is an
inherent conflict underpinning the problem of control. The traditional model for control is the
hierarchical model, and the reason for its existence is that personal capacity for control is
limited, and adding hierarchical levels increases that capacity. On the other hand, if we want
to increase the capacity to listen to customers and suppliers, so as to satisfy the needs of the
market, we should not adopt a hierarchical model.
The model we propose, instead, is “the enterprise as a system”, a direct reference to
Dr. Deming’s model of “production viewed as a system”:

Production viewed as a system

The adoption of this model automatically includes in the picture customers, suppliers,
and interdependencies, which are completely absent in the hierarchical model. But this
approach alone is not enough to manage an organization effectively. The capacity of a
system/network of interdependent components to achieve its goal is limited by a very small
number of factors, indeed often only one: the constraint.


The constraint is what determines the pace/speed at which a system generates
units of its goal. In for-profit companies the units of the goal are linked to the generation of
cash profit (value).
A system that is unbalanced around its constraint is like a tube with one section that is
narrower than the rest: there is one phase/process/group of resources with less capacity than
all the others.
Why do we manage a system through its constraint? It is because an unbalanced system is
simpler and cheaper to manage. In an unbalanced system everything revolves around the
constraint phase, and a detailed plan is necessary for this phase only. This schedule allows us to
manage the whole plant. Reducing (global) variation in an unbalanced system means
concentrating on and investing in the constraint phase only, not every single part of the process.
If we have to manage a plant, increasing the productivity or improving its performance is
considerably cheaper, and less wasteful in terms of time and energy, if it is unbalanced
around the constraint.
Markets that are stable and repetitive over time, both regarding quantity and product mix,
are not very common. That is why following the variation in demand is very difficult. By
unbalancing the system around its constraint, we can achieve a flexibility that is much more
manageable because the problem of sizing capacity (and making any changes) only
concerns the constraint and not every phase of the process.
The algorithm we use to manage production through its constraint is Drum Buffer Rope
(DBR). Lead times in plants managed using DBR tend to get close to the time it takes to
technically complete the process. This is made possible by eliminating almost completely
queues and piles of inventory. Obviously, this short and controllable lead time can become a
considerable competitive advantage.

The mechanism to manage a ‘physical’ constraint is, as we said, DBR. The three steps of
managing the constraint are:
• Identify
• Exploit
• Subordinate
Identification is relatively simple, since we can very quickly identify the machine, or the
process, that limits the capacity of the system. However, considerations about capacity are
necessary but not sufficient to identify the constraint. Indeed, in a real plant the product-
market demand mix will vary over time and influence the capacity required by the various
work centres. It is crucial that, once it has been chosen, the constraint remain the same over
time, even when there is an increase in production capacity (“unbalanced” increase). That is
why strategic considerations come into play in the identification of the constraint.

The constraint must be chosen bearing in mind that it is the element that determines the
speed of cash generation for the whole organization. The main criteria include:
• Assessment of future strategies and scenarios in the market


• Comparative analysis of machines and resources (technical characteristics of the
machines, skills of resources, interchangeability)
• Assessment of the investment required and ways of increasing constraint capacity

Subordinating to the constraint: buffers and variation

Subordinating means making sure the system acts in such a way that the constraint is
allowed to work flat out on the right product mix. Subordinating means having an action plan
for the resources, i.e. a scheduling that takes into account the presence of the constraint (we
must never starve the constraint). The data/elements that are fundamental for scheduling are:
• BOM (Bill Of Material) / ROUTING
• Raw material
• WIP (Work In Progress)
• Resource availability/calendars
• Buffer
• Sales orders

By definition, every minute lost on the constraint is lost forever, with an evident loss of profit.
Once we have identified/chosen the constraint strategically, we create a detailed plan for
it bearing in mind two things:
1. The constraint must work at 100% of its capacity
2. The constraint must work on the right product mix
As far as the rest of the plant is concerned, it is enough to know, order by order, the overall
lead times: lead time 1, the time between the release of raw material and arrival at the
constraint, and lead time 2, from after the constraint to the end of the process.

Constraint


The orders from clients are the starting point of scheduling, which is performed working
backwards from the delivery date promised to the client.
If lead time 1 = T1 minutes, and lead time 2 = T2 minutes, the DBR algorithm introduces
two time intervals to protect the process that are called buffers.
Shipping buffer (before delivery date) = 20% * T2 minutes
Constraint buffer (before constraint) = 20% * T1 minutes
These buffers have the purpose of protecting the customer’s orders by protecting the
constraint from the intrinsic variation of the process. The unit of measurement of the buffer is time.

Buffer time

In order to protect the constraint, we have to make sure that material is ready a “buffer of
time” in advance in front of the constraint itself.
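The backward scheduling described above can be sketched as follows; this is a minimal sketch for a single order, and the due date, lead times and time units are illustrative.

```python
# A sketch of DBR backward scheduling for one order, assuming lead times
# T1 (material release to constraint) and T2 (constraint to completion)
# in minutes. The 20% buffer sizing follows the text.

def dbr_schedule(due_date, t1, t2, buffer_fraction=0.2):
    """Work backwards from the promised delivery date.

    Returns (material_release, constraint_start): when raw material must
    be released and when the constraint must start the order.
    """
    shipping_buffer = buffer_fraction * t2    # protection before the delivery date
    constraint_buffer = buffer_fraction * t1  # protection in front of the constraint
    constraint_start = due_date - t2 - shipping_buffer
    material_release = constraint_start - t1 - constraint_buffer
    return material_release, constraint_start

# An order due at minute 10000, with T1 = 600 and T2 = 1200:
release, start = dbr_schedule(10000, 600, 1200)
print(round(release), round(start))  # 7840 8560
```

Material released at minute 7840 reaches the constraint, on average, a constraint buffer of time before the constraint needs it; the shipping buffer plays the same protective role between the constraint and the promised delivery date.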
In the case of buffer consumption, managers find themselves faced with a basic conflict:
intervene on the system vs. not intervene on the system.

The buffer consumption conflict


The two basic assumptions, ‘I don’t know in which cases to intervene’ and ‘I don’t know how
to intervene’, are invalidated by defining the rules of intervention (the injection):

Injection to buffer consumption conflict

The traditional TOC approach divides the buffer into three zones and suggests different
behaviours according to the zone of the buffer that has been ‘emptied’. The buffer will ‘empty
out’ or ‘fill up’ depending on whether tasks or operations are completed early or late with
respect to the schedule.
However, as we said, the ability to predict is the essence of management, and only if we
are able to foresee the outcome of our actions on the system can we take decisions that
anticipate the development of events and therefore have more control over them. In the
traditional TOC approach we ‘react’ to a situation; when the buffer is emptied, we react. This
is not the Decalogue approach.

Using control charts to size the buffer


We have defined a stable system as one that produces predictable results, so the
question here is: is the system stable? In the Decalogue approach we use SPC to monitor
and control the system. In our approach to buffer management, SPC is used to monitor,
control and improve the way the constraint is exploited and, in general, to size the buffer adequately.
What we do is size the buffer and monitor the oscillation of its consumption to see if it
is in control. If this is the case, no matter how much the buffer is emptied, we do not have to
take any action. On the other hand, even if the buffer is not substantially emptied, when the
oscillation of its consumption is not in control we have to act on the system to bring it back into
control, because we cannot predict what the consumption will be in the future.
In conclusion, if the processes that impact the buffer are not stable, we cannot define the
amount of protection needed (sizing the buffer).
The buffer acts as a means of control only if the processes of the system are in control
and buffer consumption oscillates stably with an upper limit that is lower than the maximum
width of the buffer.
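This rule can be expressed in code: chart the buffer consumption and treat the buffer as a control mechanism only when consumption is in control with an upper natural limit below the full buffer width. This is a sketch under our own simplifying assumptions (consumption recorded order by order as a fraction of the buffer, and a one-sided XmR check).

```python
def upper_natural_limit(consumption):
    """Upper natural limit of an XmR chart of buffer consumption."""
    avg = sum(consumption) / len(consumption)
    moving_ranges = [abs(b - a) for a, b in zip(consumption, consumption[1:])]
    return avg + 2.66 * sum(moving_ranges) / len(moving_ranges)

def buffer_is_control_mechanism(consumption, buffer_width=1.0):
    """True only if consumption oscillates in control (simplified
    one-sided check) with an upper limit below the buffer width."""
    ucl = upper_natural_limit(consumption)
    in_control = all(x <= ucl for x in consumption)
    return in_control and ucl < buffer_width

# consumption recorded order by order as a fraction of the buffer
print(buffer_is_control_mechanism([0.3, 0.4, 0.35, 0.45, 0.4, 0.38]))  # True
print(buffer_is_control_mechanism([0.1, 0.1, 0.1, 0.1, 0.9, 0.1]))    # False
```

Note that in the second series no single point exhausts the buffer, yet the erratic oscillation pushes the natural limit beyond the buffer width: exactly the case where we must act on the system even though the buffer was never emptied.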


We can summarize the ‘mechanics’ of this process of buffer management thus:


Material is released at the pace at which the constraint can process it, so that it arrives a
set amount of time early (called buffer) in front of the constraint in relation to the moment it is
due to be processed by the constraint.
Order by order, a check is made so that the time when material arrives physically in front
of the constraint varies in a controlled way around the established value (buffer). Variation is
studied and actions are taken to reduce it (buffer management).

Our manufacturing simulator


Perfect scheduling of a plant is not a simple endeavour. Taking into account all the
details, such as variation and the complexity of a plant, may become a very difficult task. We
have developed a manufacturing simulator to study the problem in more detail.
Manufacturing networks often show a considerable level of complexity that makes them
difficult to manage. In a way they represent a model of ‘complex systems’. Such networks are
highly interdependent, due to the finite number of resources, and at the same time have to
show flexibility to respond to the oscillation of the market demand. As a consequence, proper
network management is measured on the ability to schedule any product mix the market
requires in an optimal way.
A number of process simulators have been developed to help perform optimal
scheduling in a process network. The simulator allows us to schedule specific operations,
such as purchasing of raw material, or activation of a process, or setting automatic process
activation whenever there is material to process. The goal of each simulation is to produce a
pre-determined set of products, within a given time frame and with assigned economic
resources. The degree of achievement of the goal is determined at the end of the simulation
using the TOC economic measurements.
However, most of the models developed are deterministic, i.e. the system parameters,
such as process time, setup time, etc., are all fixed. In real life, instead, processes have an
intrinsic, unavoidable variability which, if not understood, may have a significant impact on the
overall performance of the system.
In order to properly simulate real scenarios and get reliable results, we have
developed a Manufacturing Network Simulator where a certain degree of randomness can
be added to each node in the system. As a result, we can simulate more realistic
situations, and verify the impact of variation on the schedule of the plant. This simulator
calculates the TOC parameters during each simulation and, as a result, the effectiveness
of the scheduling.
Let’s look at the network we used, describing all the interdependencies and the details
regarding the production processes.
There are three selling nodes and four purchasing nodes; the network develops as a set
of complex interdependencies made up of two segments: a linear segment, which has
one purchasing node and ends, after the manufacturing process, in a selling node, and
a second segment, which has three purchasing nodes and two selling nodes, and goes
through an A-shaped network and a V-shaped network connected to each other, as shown
below:

Simulator

The cost of the raw material, the selling price for each product, and the market demand
by product are established at the beginning of the simulation, and production time, setup
times and number of resources have been specified.
The goal of each simulation is to produce a pre-determined weekly market demand that
is to deliver 40 items of the first product, 52 of the second product and 40 of the third one.
In an ideal world we have a fixed weekly demand; the moment we finish producing a
product we sell it and get the money; the moment we purchase raw material we receive it
(and pay for it); and production times are fixed and 100% reliable. Even though we have
finite capacity (in machines and cash) we manage to schedule the machines
to satisfy market demand. However, as soon as we introduce variation into the system the
same scheduling plan fails to satisfy demand. Managing variation entails, as we have said,
protecting the resources with a ‘buffer’.
Once we have identified the constraint of the system, which can easily be done by
calculating the time each resource is taking to produce the material to satisfy the demand, we
protect it with a buffer. What the repeated simulations show us is that introducing variation, up
to a certain extent (even 50%), in the production time of every resource except the
constraint does not affect our ability to satisfy market demand. The extra capacity of the non-
constraint machines is enough to absorb a considerable amount of variation and, as a
consequence, there is no immediate need for a buffer, provided that the variation is not huge.
On the contrary, a small amount of variation in the constraint’s production time (5% already
makes a considerable difference) is sufficient to shift the result significantly away from the ideal.
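The mechanism at work can be illustrated with a toy model in the same spirit (our own much-simplified sketch, not the simulator described in the text): a feeder station with noisy process times supplies a constraint, and releasing material a buffer of time early protects the constraint from starvation. All parameters are illustrative.

```python
import random

# Toy single-constraint line: every minute the constraint starves is lost
# forever and extends the time needed to complete the batch.

def makespan(n_parts, feeder_var, buffer_time, seed=7):
    rng = random.Random(seed)
    feeder_done = -buffer_time   # feeder starts a buffer of time ahead
    constraint_free = 0.0
    for _ in range(n_parts):
        # feeder: 14 min/part on average, +/- feeder_var relative variation
        feeder_done += 14.0 * (1 + rng.uniform(-feeder_var, feeder_var))
        # the constraint starts when both it and the material are available
        start = max(constraint_free, feeder_done)
        constraint_free = start + 15.0   # constraint: 15 min/part, no variation
    return constraint_free

ideal = 500 * 15.0   # the constraint, never starved, paces the whole line
unbuffered = makespan(500, feeder_var=0.5, buffer_time=0)
buffered = makespan(500, feeder_var=0.5, buffer_time=60)
print(ideal, unbuffered, buffered)  # the buffer absorbs the feeder's variation
```

Even with 50% variation on the feeder, the buffered makespan stays close to the ideal one: the feeder’s spare capacity plus the time buffer absorb the variation before it reaches the constraint.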

Project Management and variation


The technique of Buffer Management is not only used to protect the constraint in the
case of a manufacturing or production process. Every time we have finite resources, and this
is always the case, we have to manage and protect the ‘finite capacity’ available. In project
management we are faced with the problem of completing the project successfully within an
established timeframe and within a precise budget.
The approach to project management in the Decalogue is a ‘systemic’ approach. We
have already said that a system is a set of interdependent processes that work together to
achieve a goal. A project is a set of interdependent tasks, which must be carried out within
precise specifications; hence, a project is a system.
We start by trying to define the limiting factor of the project, its constraint, in order to
understand how to manage and protect it. We give the name Critical Chain to the longest
sequence of dependent events taking into consideration the sharing of resources. This sequence
determines the length of the project, and this is the limiting factor (constraint) of the project itself.
There is a ‘cognitive constraint’ that characterizes project management, basically
dictated by the inability to understand how to buffer the project in order to be successful.

Project Manager’s cloud


The injection to this cloud is the ‘project buffer’.


The project buffer provides us with the injection “add protection only to the sequence of
tasks that determines the length of the project”.
Instead of protecting every single task in the critical chain, the protection is accumulated
at the end of it in one buffer: the project buffer.

The project buffer

The question is now how to manage the buffer. A system is stable if it produces
predictable results, and the ability to predict is the true essence of management. What really
matters here is: is the system stable? There is a simple and powerful mechanism that can
answer this question. It is called buffer management.
If the processes that influence the buffer are not stable, we cannot decide on the amount
of protection (sizing of the buffer).
The buffer becomes a control mechanism only if the processes of the system are in
control and consumption of the buffer oscillates in a stable way with an upper limit that is
lower than the maximum width of the buffer.
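The pooling of protection can be sketched numerically. Each task on the critical chain has an aggressive (50/50) estimate and a padded ‘safe’ estimate; instead of padding every task, the removed safety is aggregated at the end of the chain. The square-root-of-sum-of-squares sizing used below is a common critical-chain heuristic, assumed here for illustration rather than taken from the text, and the task durations are invented.

```python
import math

# (aggressive_days, safe_days) for each task along the critical chain
tasks = [
    (4, 8), (6, 10), (3, 6), (5, 9),
]

padded_plan = sum(safe for _, safe in tasks)
critical_chain = sum(aggressive for aggressive, _ in tasks)
# pooled protection: variation in independent tasks partly cancels out,
# so the aggregated buffer can be smaller than the sum of removed safeties
project_buffer = math.sqrt(sum((safe - aggressive) ** 2 for aggressive, safe in tasks))

print(padded_plan)                      # 33 days if every task keeps its own padding
print(critical_chain + project_buffer)  # a shorter plan with pooled protection
```

The pooled plan is shorter than the fully padded one because the buffer protects the only thing that matters, the length of the chain, rather than each individual task estimate.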

Width of the buffer


SPC has a large variety of applications but, as Dr. Deming said, it is a ‘way of thinking’.
Becoming accustomed to this way of thinking is not easy, and requires a great deal of effort,
as does acknowledging the existence of variation. As strange as it may seem, the majority of
people do not take variation into consideration at all, and they perceive any change in
processes as something ‘good’ or something ‘bad’. If you compare two numbers you will
always have one bigger than the other, or equal. With SPC we have the possibility to
compare two numbers and understand that they are generated by the same process, and that
any difference is due to one thing alone: entropy.
Managing the inevitable increase of entropy to maximize throughput is the only possible
goal for a manager.

13.

USING THE EXTERNAL CONSTRAINT IN AN INTEGRATED NETWORK FOR MARKETING AND SALES

Yulia Pakka MScBA

A company’s success is based on the level of satisfaction of its customers. Therefore, any
organization that wishes to be successful in the long term must build one reliable system that
keeps customers and their needs at the forefront. Marketing and Sales act as the company’s
representatives towards the customer.
The role of Marketing is to act as a constantly updated repository of knowledge about:
• who we are – traditionally, companies formulate vague and generic mission
statements; instead, they should aim at understanding, in its entirety, the business
environment in which the company exists;
• what we do – products and/or services that the company has to offer and how these
differ from other offerings;
• who can benefit from our offerings and how to find them – a thorough understanding of
the current customer base, the analytical ability to identify potential customers, and the
design of precise plans on how to approach these customers and win them over;
• how to communicate all of the above;
• how to ensure consistency.
Well organized and managed systems will deliver consistent and predictable results. For
optimal management of resources all activities related to the five points above must be
executed with predictability and in a synchronized way.
To satisfy both predictability and synchronization, Marketing and Sales are built as a
‘production chain’ where every step has inputs and outputs, every step has competent
resources allocated to it and has increased Throughput as a goal.

Marketing Plan
Predictable and repeatable results cannot be achieved through unorganized activities. The
Marketing Plan provides a solid structure where each element requires conscious thought
about the customer we are focused on. The Marketing Plan allows us to develop our potential
customers (leads) into loyal, long-term customers and, ideally, promoters of our offer.

Customer stages

The Marketing Plan must identify the right customer and the right need that will be
satisfied with our solution. It should not be limited to the traditional 4Ps (Product, Price, Place,
Promotion) but should incorporate all aspects needed to drive the business. The Marketing
Plan is the tool to guide the thinking and as such cannot be a constraint.

Customer steps

Our customers’ experience is not limited to the product. The size of our product portfolio
and range of services offered, ordering and invoicing, Sales support and the possibility of
providing feedback all play a great role in customer satisfaction.
The continuous feedback channel provides constant updates on reality changes and, as
a result, companies must adjust product/solution to the customer, rather than expect that
customers will adjust themselves to the product.
By satisfying the customer needs of today we are changing their current reality and
creating demand for different/new solutions. By constantly delivering desired solutions to our
customer we ensure consistency. We plan for change using the PDSA:


PDSA cycle

Act:
• adopt the change
• or abandon it
• or run through the cycle again, possibly under different environmental conditions

Marketing structure
Traditionally, Marketing is structured according to product groups/families, and success
is measured based on the achievement of budgetary numbers. Every person/group acts
in isolation, with a very low degree of cooperation.
Such an approach does not provide an optimal utilization of resources and does not have the
overall goal of the company in mind.


Instead, recurring Marketing activities should be viewed and managed as projects where
tasks are performed by the resources that are available and have the competencies to
perform these tasks.

Marketing process

Market Segmentation and Lead Identification


To create a focused approach to the market place, Marketing pinpoints different
segments which could be sensitive to the company’s offering. This segmentation could be
product/solution based or could have other commonalities as a foundation, e.g. common
problems that this segment is facing. Within those segments there will be a certain number of
potential clients – leads – which need to be identified.

Marketing steps as cycle


Lead qualification
Lead qualification is a complex and somewhat intricate process with multiple feedback
loops between Sales and Marketing. It starts with a clear definition of which leads are
considered sales-ready. The goal is not to pursue every lead but to identify which leads
will produce the highest return for the company and how long it will take to produce results.
Depending on their readiness, some leads will be placed into the ‘leads bank’,
which serves as the raw material for the Sales process. Other leads will be kept for
maintenance and nurturing until they become more sensitive to the company’s offering and
are ready to be processed by Sales.
Leads can be acquired in many ways – research, trade and industry events, personal
network, third party databases and ‘walk-ins’, but the truth is that most leads will not be ready
to engage, and passing them on to Sales will strengthen the general perception that
Marketing-generated leads are no good.
A qualified lead will be open and willing to meet with the Sales person and it will have a
need for the product/solution the company has to offer; however it should also be checked for
possible pitfalls such as non-payment or bad credit before it is placed into the leads bank. In
traditional organizations Sales people’s compensation is based on revenue, and therefore
they have no interest in lead-nurturing. Sales people do not want to waste their time on
potential customers who will not be buying today, therefore lead-nurturing is often viewed as
a ‘Marketing’ task. This should not be so. Marketing and sales in a systemic organization
underpinned by Intelligent Management should epitomize the interdependencies that must
exist among different professionals in the organization.
Integrating Marketing and Sales into one interdependent network will provide a solid
base for converting leads into winnable opportunities. The successful nurturing of leads will
result in a relationship with the customer that is built through an informative dialog about the
company’s offerings until the customer is ready for the next step of the Sales process.

Offer preparation and presentation


The ultimate goal of every company is to have more customers than it can supply.
This will allow the company to choose whom to sell to and create the largest possible
Throughput mix with its products.
What follows aims to present a generic solution to an issue many companies are facing: more
capacity than customers to buy it – in other words, companies that are constrained externally.
In order to bring the constraint back inside the company by selling all their potential
capacity (as opposed to cutting it), companies must focus their activities on identifying the
right market – a market at least 10 times bigger than their capacity – identifying the potential
customers in this segment that could benefit from their solution, finding the right dialogue
partners within these companies, and creating and successfully presenting the offer to them.

159
Sechel: logic, language and tools to manage any organization as a network

Using the Thinking Process Tools in Marketing


In this chapter we illustrate what an Undesirable Effect (UDE) and a Transition Tree
(TRT) are, and how to collect UDEs from a potential customer.
We also add a TRT for offer presentation aimed at generating predictable results.
In order to present a win-win offer to the customer, the offer preparation must be
executed as a project with the involvement of different competencies. A complete integration
of Marketing and Sales as well as close cooperation with other relevant competencies in the
company is critical for success; in this way we can ensure that all customers’ UDEs are
addressed and that our offer is as much of a breakthrough solution as possible.
During offer preparation the chosen team performs a cause and effect analysis that results
in the Core Conflict Cloud of the customer. Using Sales input and information gathered during the
UDE collection process we surface the assumptions relevant to the cloud and develop injections
that will lead to a win-win solution. Along with the development of the offer, possible negative
implications must be surfaced, investigated and trimmed where possible.
In order to communicate our offer we must develop a thorough understanding of the essence
of our injections: what is new about them, which of the UDEs they will address and what “desirable effects”
(benefits) the implementation of such a solution will cause. An open discussion of possible
negative implications and how to minimize them is necessary. Certain negative implications can
only be trimmed if the customer is actively participating in the designing of the solution.
A good presentation of the well-prepared offer will lay the foundation for a future partnership.

External Constraint
As we said before, companies often find themselves in a situation where they cannot sell all
their potential capacity. Their constraint is the market, i.e. the company is constrained externally.
By analyzing the causes that prevent our company from selling more and
customers/prospects from buying more we build a basis for an agreement that will maximize
benefits for the company and its customers.

A successful agreement for us and for the customer must overcome six levels of resistance:
1. Disagreement about the problem
2. Disagreement about the direction of the solution
3. Lack of faith in the completeness of the solution
4. Fear of negative consequences generated by the solution
5. Too many obstacles along the path that leads to the change
6. Reservations about our ability/willingness to implement the solution (and about the
ability/willingness of others)
Each of the six levels of resistance can be overcome by using the Thinking Process
Tools from the Theory of Constraints (TOC).

Domenico Lepore

1. Disagreement about the problem – Conflict Cloud
2. Disagreement about the direction of the solution – Injections
3. Lack of faith in the completeness of the solution – Future Reality Tree (FRT)
4. Fear of negative consequences generated by the solution – Negative implications branch
5. Too many obstacles along the path that leads to the change – Prerequisite Tree (PRT)
6. Reservations about our ability/willingness to implement the solution (and about the ability/willingness of others) – Transition Tree (TRT)

What follows is an explanation of how the completely generic solution for the External
Constraint was developed. Indeed, this is no replacement for reading It’s Not Luck, the
novel written by Dr. Goldratt in 1994 where this approach was first described (see also
Deming and Goldratt: the Decalogue).
We start by grouping the Undesirable Effects (UDEs) and performing the cause/effect
analysis that leads to the development of the Core Conflict Cloud/ Current Reality Tree of the
customer.

EXAMPLES OF UDEs FOR A COMPANY WITH AN EXTERNAL CONSTRAINT:

Market UDEs:
• We do not believe it is possible to identify new segments of the market willing to
accept our offer;
• Competition is fiercer than ever;
• The speed with which we design and launch new products/services does not keep
up with demand and changes in the market;
• There is no way to predict the taste, trends and choices of the market;
• The market we deal with (clients and suppliers) is influenced by non-ethical
practices.

Product/Service UDEs:
• There is increasing pressure to reduce prices;
• We perceive that we are selling at a low price;


• Suppliers and partners do not support us in speeding up and improving the
company’s Production and Sales processes;
• We are still too attached to products instead of solutions/services;
• We do not have a “strong” brand (we are not always able to protect intellectual
property and/or make our product stand out);
• After a certain amount of time any product/service becomes a commodity.

Organizational UDEs:
• Actions aimed at increasing sales are increasingly less effective (Marketing
campaign, communication, contacts and activities to increase client loyalty);
• There is no structured Marketing to keep us highly in tune with the market;
• We are not able to transform contacts into contracts fast enough. The Sales force
is not well aligned with the rest of the organization. We chase all of the sales
opportunities without having well-defined priorities;
• The Sales force does not have adequate skills (not enough technical knowledge,
insufficient inter-personal skills etc.)
• There is not a sufficiently wide and adequate network to cover the geographical
distribution and market segment we have chosen;
• Sales forecasts are not very reliable;
• The Sales process is very unpredictable.

Based on the UDEs listed above we can build the following Core Conflict Cloud for the
company that is constrained externally:


External constraint core conflict cloud

External Constraint Core Conflict assumptions between D and D’


The original set of injections:


1. We undertake actions that significantly increase the market’s perception of the value
of our product;
2. We identify a market that is bigger than the production capacity our company has
available;
3. We segment the market;
4. We check the range of oscillation within which the market is willing to buy;
5. We establish the price of the product/service on the basis of two factors only: the
TVC of the product and the amount of constraint time it absorbs;
6. The actions our company undertakes are difficult to copy;
7. We design and implement a flow of marketing and sales tasks that takes into account
finite capacity and the actual time it takes to complete tasks;
8. We identify the entire value chain that we and the client are part of.
As outlined earlier, in every situation where a company cannot sell all its potential
capacity we must start by identifying the market and its segments. We know what we can
offer, how it differs from anything else on the market, that it is difficult to copy and has
increased value perception. We know what our finite capacity is and how much time it takes
us to perform certain actions. Our pricing policy considers only Totally Variable Cost (TVC)
and we know our break-even point as well as the range of price oscillation for the specific
segment. We identify and understand the entire value chain and do not focus only on the
customer and ourselves.
The injections described above are completely generic. In other words, they represent a
valid solution to the External Constraint problem. The next step is to devise a sequence for an
orderly implementation of the injections and a process to validate the solution’s completeness. Such a
process is called a Future Reality Tree (FRT).
The FRT called for the development of two extra injections: no. 9 ‘We present our
unrefusable offer’ and injection no. 10 ‘We show those in the supply chain the advantages of
cooperation’. The complete solution provided by the FRT satisfies both needs B and C and
this leads to the goal.


Future Reality Tree (FRT)

Each injection demands a series of actions that must be performed. Injections 2 and 3
relate to market segmentation and lead identification followed by qualification and placement
into the lead bank.


Injections 1, 4 and 5 deal with the pricing of our solution; a thorough understanding of
customer UDEs and how our solution will eliminate or minimize them will provide us with
strong benefit points that will gain a customer’s attention. In our customers’ opinion, a fair price
has no correlation to the GAAP-determined cost that went into the creation of the
product/solution. A fair price reflects the value and benefits our offering brings to the
customer’s reality.
By determining price oscillation for the segment our target customer is in, we understand
the range of prices this customer would consider fair. Since after injection 2 we can choose
our target segments and injection 5 has provided us with the true cost (TVC) of our offer we
clearly want to work only in those segments where the lowest price point is still higher than
our TVC.
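As a minimal illustration of this selection rule, we keep only those segments whose lowest fair-price point exceeds our TVC. All prices and segment names below are invented:

```python
# Invented figures: filter market segments so that even the lowest fair
# price a segment would accept is above our Totally Variable Cost (TVC).

TVC = 40.0  # true cost of our offer, per unit (injection 5)

segments = {                      # segment -> (lowest, highest) fair price
    "commodity buyers": (35.0, 45.0),
    "solution seekers": (55.0, 80.0),
    "premium niche": (90.0, 120.0),
}

viable = {name for name, (low, high) in segments.items() if low > TVC}
```

Only ‘solution seekers’ and ‘premium niche’ survive the filter; even at the bottom of their price range we still generate Throughput.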
By taking time to understand our customer’s issues (UDEs) and crafting our solution
specifically to address these, we are executing injection 6. Offer preparation is a recurring
activity that draws on multiple competencies of the organization. Traditionally organized
companies that have a silo structure promote the isolation of each function. In order to make
offer preparation successful it must be viewed and managed as a project.

Generic clouds for identified segments - examples


What is an Undesirable Effect (UDE)? First of all, it is an ‘effect’ – its existence is
indisputable; we can argue about its magnitude, but not its existence. Secondly, it is
‘undesirable’ because this effect endangers or prevents the company from satisfying a
legitimate need.
The undesirable effects (UDEs) can be worded differently for different companies.
Determining these effects and understanding the interactions among them will lead to the
understanding of the Core Conflict of the company we are targeting.
UDEs are, in their essence, a complaint about an ongoing problem that exists in the
company’s reality. But are these problems unique to each company? By segmenting the
customers according to their UDEs and performing cause and effect analysis to determine
their core problem we will have a group of customers with the same Core Conflict that we can
express through a generic cloud.
Market segment by market segment, following the External Constraint approach that we
just described, we can build a number of Core Clouds which we call generic. Such clouds
capture the essence of the cognitive reality those market segments experience and pave the
way to the systematic development of win-win offers.
Working alongside the Sales team in the development of generic clouds increases the
speed at which the offer can be created and presented. During UDE collection, summarized
in the TRT at the end of this chapter, the Sales team will be looking for signs and validation of
one of the generic clouds; they will pay attention to customer-specific language and
terminology which will be incorporated in the customer’s Core Conflict Cloud during the offer
preparation stage. What follows are examples of such generic clouds.


The Make or Buy decision represents one of the most common dilemmas companies face, as in
the following diagram:

Make or Buy cloud

Resource Engagement Cloud 1: if the goal of the company is to maximize profit, it has
to decide whether its team should focus on obtaining the lowest purchase price or on building
a reliable supply chain at a fair price.

Resource Engagement Cloud – maximizing profit


Resource engagement cloud 2: in this cloud the goal of the company is to become the
market leader and keep this position. Their conflict can be represented as in the following
diagram:

Resource Engagement Cloud – market leadership

Examples of Suppliers Cloud:

Conflict Cloud: Buy large quantities overseas vs. buy locally in small quantities – best price


Conflict Cloud: Buy large quantities overseas vs. buy locally in small quantities – stability of supply

Deriving the Core Conflict: Single Supplier vs. Multiple Suppliers

UDE Segments
1. Undesirable effects related to Suppliers:
• Suppliers are unreliable
• Our products are complex and it is difficult to find suppliers
• Qualifying new suppliers takes too long
• We don’t understand our suppliers
• We don’t understand how their changes impact our supply chain
• The process of choosing and maintaining supplier relationships is cumbersome
2. Undesirable effects related to Organizational Structure:
• Our division does not have the authority to make decisions on material and
specifications
• Reaching a decision (gaining approval) takes too long
• We are unfairly losing business to our sister company
3. Undesirable effects related to Product/Service and Pricing:
• Qualifying new products takes too long
• There is increasing pressure to reduce prices
• There is internal pressure to maximize profit year after year


Conflict Cloud Core Supplier vs. Multiple Suppliers – improved profitability

Conflict Cloud Core Supplier vs. Multiple Suppliers – reliable growth

The production of sales


Traditional Sales organizations firmly allocate products and/or customers to each Sales
resource. Sales people are seen as ‘masters of their domain’ (customers) who possess all
the knowledge, control relationships and ‘take care’ of business. One person is a resource for
all the sales production tasks from opening to closing, regardless of their personal strengths
and weaknesses.
Due to the highly competitive nature of the Sales force, which is often encouraged by top
management, there is very limited interaction among the different members of the Sales
team. There is no knowledge sharing as this is considered a debaser of their individual
advantage and very limited feedback is provided to Marketing. As each of the members of the
Sales force has their own target which is directly linked to their compensation, it is hard to
imagine how some of them would work together as a team.
Such lack of cohesiveness and teamwork artificially limits the company’s ability to ever
improve service to customers. Invariably, each customer will be seen as an isolated case and
no commonalities will be investigated in order to design and implement win-win solutions.
In a ‘market driven’ organization Marketing activities are intertwined with the Sales
process. Such an infrastructure has one goal: to provide an open channel of communication
and encourage knowledge sharing. Every step of the sales production chain is performed by
competent resources that interact with each other.
Marketing and Sales work together to convert contacts into contracts at the fastest possible
pace. Priorities are given to the projects that generate the highest Throughput for the company.

Sales process

Qualified leads provided by Marketing serve as raw material for the Sales process which
starts by initiating a contact with the customer. The goal of the contact is to set up an
appointment/visit during which we can learn about our customer’s reality, their issues and
wishes by collecting their undesirable effects.
These UDEs are inputs for a cause-and-effect analysis that is performed during the offer
preparation stage. Understanding UDEs and their interaction will enable us to identify the
most suitable way to design win-win solutions. The process starts by identifying the generic
cloud applicable to the customer we want to address and continues with a detailed ‘tailoring’
of the cloud through a customer-specific verbalization. The Core Cloud of the customer helps
express their “core problem”, i.e. the set of assumptions that keep them in the conflict, and the
logical reasons for the existence of UDEs.


From UDEs to Cloud

During the UDE collection process we want to make sure that we collect all or most of the
valid UDEs. This is achieved by following the steps and the logic of UDE collection. The TRT was
designed to give clear instructions on what to do, when to do it and why we’re doing it.


UDE collection Transition Tree 1


UDE collection Transition Tree 2


UDE collection Transition Tree 3


Offer presentation
Once our offer is prepared and validated, we need to make the customer aware of it. The
outcome of our offer presentation must be predictable and repeatable. This cannot result from
ad-hoc activities. The offer presentation stage follows a very precise process that is described
by the following Transition Tree (TRT).

Offer presentation Transition Tree 1


Offer Presentation Transition Tree 2


Offer Presentation Transition Tree 3

Post-presentation
Once our offer is presented and accepted we work with the customer on the
implementation. During this stage we tackle negative implications and assist our customer
with the transition from their current state to our solution.
Internally, our organization has to be prepared to service new customers. In addition to
purely administrative tasks (account set-up, credit limits etc.), Sales works with Customer
Service on transitioning the knowledge and history to date, establishing supply chain rules,
required follow-up and feedback on possible changes on the customer’s side.
When deciding on the priorities for offer presentation (and the ensuing transition to post-
sales support) it is extremely important to consider the speed at which the offer will generate
Throughput. No sale should be recognized until we have received the payment. Part of the
pricing process should include consideration of the size of the prompt-payment discount, as
some segments will be very sensitive to it.
Ideally, after presenting our offer the customer starts purchasing our products
immediately on a pre-paid basis.
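The prioritization idea above can be sketched as follows: rank offers by the Throughput they generate per day until the cash is received, net of any prompt-payment discount. All figures below are invented for illustration:

```python
# Invented example: prioritize offers by speed of Throughput generation.
# Throughput = Sales - TVC; a prompt-payment discount reduces Sales but
# shortens the time until the money is in our account.

offers = [
    # (customer, sales $, TVC $, prompt-payment discount, days until cash)
    ("X", 100_000.0, 60_000.0, 0.02, 10),
    ("Y", 120_000.0, 70_000.0, 0.00, 90),
]

def throughput_per_day(sales, tvc, discount, days):
    throughput = sales * (1 - discount) - tvc
    return throughput / days

ranked = sorted(offers, key=lambda o: throughput_per_day(*o[1:]), reverse=True)
```

Here customer X, paying quickly thanks to a 2% discount, generates $3,800 of Throughput per day against roughly $556 for Y, so X comes first despite the smaller contract.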

Customer Service and post-sales support


The success of our offer is not in closing the deal but in ensuring that the customer is
satisfied with the service we provide. We don’t recognize the sale until the money is in our
account; the work therefore continues, but it is now performed by a different set of
competencies within our system – Customer Service.
After the final steps of the Sales process are completed, i.e. the offer has been presented
and the details finalized and documented, Sales, Marketing and Customer Service
perform another recurring activity – the transfer of the account.
This step is critical as it ensures that knowledge about the customer that was gathered
during previous steps is passed on to Customer Service. After the transfer is completed
Customer Service representatives know whom they will be dealing with, what issues the
customer had in the past and how we resolved them with our offer. Customer Service is
advised of particulars and details that are critical for the customer.
From that point on, ALL tasks related to processing the order, tracking the progress,
invoicing and informing the customer of the status fall under Customer Service. Customer
Service receives the order and enters it after verifying the details (pricing, delivery condition
etc.) with our agreement. Customer Service also tracks what stage of the processing the
order is in, schedules delivery or pick up times and issues the invoice.
Customer Service also works on resolving possible day-to-day issues that might arise,
such as wrong/lost invoices or late deliveries. They monitor the customer’s credit status and
limits and provide constant feedback to Sales and Marketing.
Customer Service representatives become the daily contact for all customer needs
related to the order but do not block Sales people from staying in touch with the customer.
Moreover, there is a constant information flow between Marketing, Sales and Customer
Service. Through daily contacts with the customer, Customer Service will recognize the signs
of changed reality and involve Sales and Marketing when needed.
We understand that after some time our offer and continuous service will change our
customer’s reality and their UDEs. It might manifest in the customer’s demand for a lower
price, different delivery conditions or other products.
It is important to recognize these signs and arrange for Sales representatives to visit and
collect a new set of UDEs. This is where we start the cycle of ongoing improvement. From the
UDE collection we continue with the process of analyzing cause and effect, crafting and
presenting a new offer and the cycle runs the above-described course resulting in a
successful presentation.
The sustainable growth of the company cannot be achieved with an uncoordinated,
function-based structure. Crafting and presenting successful offers that cannot be duplicated
requires close integration of Marketing and Sales with constant availability of other
competencies within the company. This can only be achieved in a project-based organization
that rigorously follows the tools and logic described in this chapter.

14.

THE INFORMATION SYSTEM WITHIN INTELLIGENT MANAGEMENT

Francesco Siepe PhD and Prof. Sergio Pagano PhD

Every modern manufacturing company has some sort of computer-based Information System
to support daily operations and to help managers take strategic decisions. A large variety
of software and hardware has been developed for this purpose.
Solutions for automating accounting in companies have been proposed based on all
available technologies. Large computers, once called mainframes, were used for this purpose,
but very small personal computers, like the glorious Apple II of the 1980s, were also used
successfully.
If you compare the characteristics of these old machines (clock speed 1 MHz, 8 bit, 48
KB RAM, 128 KB disk) with those of an average modern PC (clock speed 2 GHz, 64
bit, 2 GB RAM, 512 GB disk) you get an increase of more than 2,000 times in speed and
100,000 times in storage capacity. The same improvement to your 1980s car, travelling at
150 km/h and carrying 5 people, would now result in a speed of 300,000 km/h and the capacity
to carry 20,000 people!
Clearly, with such a big improvement in performance, one would expect modern
computer systems to make managing any organization a breeze. In reality, we have
witnessed, at best, an increase in performance of a factor of 10.
The reason is that the software, i.e. the way people use all the power available through
automatic computers, has been oriented toward implementing better machine-human
interfaces (e.g. the little shadow that you can see under each icon on your desktop or the little
“ding” that you can hear when you get a new e-mail) rather than toward providing useful
information to the user.

What does information mean?


Traditionally, the main reason for implementing a computer-based Information System
in a company – in modern parlance, an Enterprise Resource Planning (ERP)
system – is to get better visibility into operations. Consequently, the bottom line (profit) effects
expected from an ERP system are higher speed in communication between company
departments, higher speed in the handling of orders and fewer errors in commercial
documents. All these advantages result from an increased speed of integration of all the
information coming from different parts of the company. The smaller the company, the fewer
are the advantages of implementing an ERP.

Moreover, the cost of an ERP implementation does not scale with company size.
This implies that, while for large organizations with revenues above $5-10 billion the
ERP cost (well over US $1,000,000) is repaid in about 12 months, in companies with
revenues below $1 billion the cost of the ERP (often between $500,000 and $1 million) might
never be repaid.

Typical drawbacks of an ERP implementation are:


• difficulty of use
• long implementation time
• need for adaptation of software to specific needs
• long cycle time for bug correction
• difficulty in justifying the investment made in terms of return for the company
Again, the main point is the meaning of information. What is the difference between data
and information? The content of a warehouse can be called data, but if we want to find out
whether we can satisfy an urgent order from a customer, then it is information. Whether or not
it looks like information depends on the observer.
We can define information as that part of data that has an impact on our actions and/or
decisions and whose absence would affect them.
For different people, or for the same person at different times, the same piece of data
can be data or information. Therefore it is no surprise that people so often confuse databases
with information systems. If we don't know in advance what kind of decision we have to take
then we will not know the difference between data and information.
In essence: information can be defined only within the process we use to take decisions.
Information is not the data necessary to answer questions; it is the answer to the question.
Information is not the input to a decision process; it is the output.
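To make the warehouse example above concrete (the stock figures are invented): the stock list is data; the answer to the question “can we satisfy this urgent order?” is the information.

```python
# Data vs information: the warehouse content is data; the yes/no answer
# produced by the decision process is the information.

warehouse = {"item-101": 350, "item-202": 40}   # data: quantities on hand

def can_satisfy(order):
    """The decision process: turns warehouse data into an answer."""
    return all(warehouse.get(item, 0) >= qty for item, qty in order.items())

answer = can_satisfy({"item-101": 300, "item-202": 25})  # the information
```

The same warehouse dictionary is mere data until a decision process – here, the urgent-order check – extracts an answer from it.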
Accepting this definition implies that the Information System must reflect the decision
process employed. As a consequence, every discussion about the composition and logical
structure of the Information System has to be made within the frame of the decision process
adopted. Moreover, as the decision process depends on the management philosophy
(Intelligent Management), the Information System can only derive from that. We believe that
an Information System makes sense and generates value only if it supports effectively the
decision process. Information Systems have been traditionally thought of as the main
repository of the information necessary for financial accounting and reporting. However, this
role is necessary but not sufficient to generate value for the company.
In order to generate value for the company an Information System must be able to
effectively support management decisions.
Accordingly, an IS must follow a well-defined and agreed-upon enterprise-wide
managerial logic and must unify and create consistency between the language of finance and
accounting and the language of operations, as well as supporting Marketing and Sales.


How do we make decisions in a company managed as a system?


Managing a company as a system requires skills and behaviours that are very different
from those required by the management of a traditional hierarchical-functional organization,
and this is relevant for the IS.
In a system the emphasis is on the interdependencies, on the way a team has to work to
achieve a common goal. The focus shifts from local optimum (functional efficiency) to global
optimum (system efficiency). In a system the value of an action is measured based on the
impact it has on the overall result.
The Theory of Constraints (TOC) rests on the assumption that every system has one (or
very few) constraints that limit its overall performance. TOC proceeds by identifying and
overcoming, through adequate exploitation, the constraints that limit the system’s
performance. These constraints can be physical (machines, people, know-how, etc.) but
more often they are the result of the mental models with which we look at the work of the
organization.

In order to manage physical constraints we must:


1) Identify/strategically choose the constraint
2) Exploit the constraint to achieve maximum Throughput (work all the time and on
the best possible Throughput mix)
3) Subordinate to the constraint (all the activities of the company)
To perform the second and third steps, TOC uses powerful algorithms: Drum-Buffer-Rope
(DBR) for production and Critical Chain (CC) for project management and new product
development.
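A minimal sketch of step 2, exploiting the constraint on the best possible Throughput mix: a common TOC heuristic is to rank products by Throughput per unit of constraint time. The products and figures below are invented for illustration:

```python
# Invented figures: when the constraint's time is the scarce resource, load
# it first with the products generating the most Throughput per constraint minute.

products = [
    # (name, sales price $, TVC $, minutes on the constraint per unit)
    ("P1", 100.0, 45.0, 10.0),
    ("P2", 80.0, 30.0, 4.0),
    ("P3", 150.0, 90.0, 15.0),
]

def throughput_per_constraint_minute(price, tvc, minutes):
    return (price - tvc) / minutes

mix = sorted(products, key=lambda p: throughput_per_constraint_minute(*p[1:]),
             reverse=True)
```

Although P3 has the highest Throughput per unit, P2 yields $12.5 of Throughput per constraint minute against $5.5 for P1 and $4.0 for P3, so P2 leads the mix.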

Moreover, TOC has introduced a set of system measurements that can effectively help in
making the correct managerial decisions:
• Throughput (TPUT): The pace at which the system generates cash
• Sales: What comes in
• TVC (Totally Variable Costs): What goes out to purchase materials and services
that go into the products we sell
• Operating Expenses (OE): What we need to make the system function (fixed costs
+ investments)
• Inventory (I): What we need to keep in the system to ensure that we always have
enough “material” to produce and ship

These basic variables are connected in the following way:


Sales – TVC = Throughput (T)
T – OE = Net Profit


T – OE – ΔI = Cash Profit, the physical money we see in the bank account before tax.
Indeed, from year to year, the relevant quantity is not I but its change, ΔI.
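These relationships translate directly into a small calculation. All figures below are invented for illustration:

```python
# Throughput accounting measurements, as defined above (invented numbers):

sales = 1_000_000.0          # what comes in
tvc = 400_000.0              # Totally Variable Costs
oe = 450_000.0               # Operating Expenses
delta_inventory = 30_000.0   # year-over-year change in Inventory (delta I)

throughput = sales - tvc                          # Sales - TVC = T
net_profit = throughput - oe                      # T - OE = Net Profit
cash_profit = throughput - oe - delta_inventory   # T - OE - delta I
```

With these numbers the system generates $600,000 of Throughput, a Net Profit of $150,000 and a cash profit of $120,000 once the growth in Inventory is accounted for.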

Moreover, TOC has introduced two additional measurements to monitor the flow of finished
products and inventory:
The “Throughput $ Day” (T$D) measures the loss in Throughput caused by delays in
delivery of finished products: the Throughput generated by the sale in $ times the number of
days of delay.
The “Inventory $ Day” (I$D) measures the amount of inventory and the pace at which it
moves through the company: value of inventory in $ times number of days of presence in the
company.
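Both measurements are simple products, as the following sketch shows (figures invented):

```python
# T$D and I$D as defined above (invented figures).

def throughput_dollar_days(throughput_of_sale, days_of_delay):
    """T$D: Throughput of the delayed sale ($) x days of delay."""
    return throughput_of_sale * days_of_delay

def inventory_dollar_days(inventory_value, days_in_company):
    """I$D: value of inventory ($) x days of presence in the company."""
    return inventory_value * days_in_company

t_dollar_days = throughput_dollar_days(5_000.0, 3)    # $5,000 of T, 3 days late
i_dollar_days = inventory_dollar_days(20_000.0, 45)   # $20,000 held for 45 days
```

A sale worth $5,000 of Throughput delivered 3 days late scores 15,000 T$D; $20,000 of stock sitting for 45 days scores 900,000 I$D. Both measures should trend toward zero.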
These measurements allow managers to focus on the elements that determine the
performance of the company. However, these measurements need to be understood in order to
take the correct managerial actions. As we have seen in Chapter 12, Statistical Process Control
provides a very powerful means for monitoring the variability of the important processes in the
company, thus allowing managers to decide when to take corrective action.

IS and value generation


How, then, does an IS generate value? In order to generate value for a company, an
Information System has to be able to help with reducing and managing the organization’s
“critical limitations”. An Information System has to help manage the constraint.

It does so by:
• performing the three fundamental steps of TOC for the management of physical
constraints (identify, exploit, subordinate)
• giving information on how much time the constraint is used
• scheduling the constraint
• producing management reports that evidence the cash generated by the
management of the constraint
• exchanging dynamically data with existing databases

Which data does an Information System need in order to provide information?


• Bill of Materials
• Physical Process flow (routing)
• Work in Progress (WIP)
• Warehouse content (I)
• Replenishment times, i.e. the time factor that helps us gauge how much inventory
we need to keep in the system so as to never miss shipments

• Sales (prices and delivery dates), i.e. when TVC is transformed into Value (cash).
All these data are present in any modern ERP system, so extracting them from the
databases is straightforward.
What is less immediate is the capability to produce a good schedule of the constraint
according to the TOC algorithms. However, it is well within the capability of any reasonable
computer-programming workforce to come up with schedulers that implement these
techniques, and some software packages can also be found on the market.
This also applies to constraints in the management of projects; an example is given
below.

What is a good IS?


A good Information System is a complete and flexible database with a good definition of
the system’s interdependencies and with the capacity to schedule the constraint. The
limitation that an Information System overcomes is the need to act without all the necessary
information. In summary, an Information System is the instrument that allows us to focus the
attention of the whole company on the few factors that determine its overall performance. It
must provide value, not just technology.

The implementation of a Decalogue ERP system: a real life example


In this section we use a real-life example to illustrate the implementation of a
company-wide ERP based on the Decalogue. This can also be seen as an example of how a
project is developed and scheduled using the Critical Chain approach.

The implementation of the company-wide ERP system that we describe presented several
challenges:
• The Company was made up of many facilities, spread out all over the eastern
seaboard of North America;
• These facilities used local ERP systems, different from and incompatible with one
another as far as their data were concerned (e.g. they used different codes for the
same item);
• Changing an ERP system invariably creates resistance in people who have been
accustomed for years to working with their system of order entry, data queries,
accounting and reporting;
• The existing ERP systems were obsolete;
• None of these systems lent itself to the statistical studies and Throughput
accounting practices required by the Decalogue;
• None of the existing ERP systems provided a module for cataloguing and
managing people’s skills (for finite capacity scheduling purposes).
A Decalogue-based ERP system must have some well-defined features.

First, it must be philosophically and architecturally poised for an enterprise-wide
application; it must provide the possibility to truly support company-wide activities and enable
system-based, global optima decisions.
Second, it must be intuitive and easy to use.
Third, it must accommodate Decalogue-based measures and parameters, which are
statistical in nature. The interdependencies among these parameters, the corresponding units
of measure and the control charts that the system should generate are outlined in the
following table:

Parameter                                    Unit of measure   Control chart
Sales                                        $                 Sales
Throughput                                   $                 Throughput
Delivery date                                Day               T$D
BOM/WIP                                      Quantity          WIP
Routing (material path through production)   Position/time     I$D
Stocks (inventory buffers)                   Quantity          Inventory, raw materials

Fourth, it must be flexible enough to absorb and incorporate pre-existing systems.


Once such a system was identified, we used the Prerequisite tree to build the Plan.


PRT Go live

The most relevant steps of the project, as far as the Decalogue is concerned, are
Intermediate Objectives (IOs) 7, 8 and 9. IO 8 relies on the introduction of Throughput
Accounting parameters into the new ERP system and the development of the corresponding
reports through which we can keep the Company business under control.
IO 9 (which is strictly related to IO 4) is about managing the resources and the
Company itself as a collection of projects. This subject will be addressed in more detail in the
last section of this chapter.


IO 7 is crucial for the production strategy and is closely linked to the concept of Drum
Buffer Rope (DBR) and Buffer Management (BM). DBR and BM form the method to manage
and protect the constraint.
The Drum is the constraint process, the critical resource, or the “weak link”, of our
production chain. It dictates the pace of the system downstream (since the constraint by
definition is a limiting factor) and upstream.
The Rope connects the Drum to the releasing of raw material or of semi-finished goods
at the beginning of the chain. In this way the production chain processes only the quantities
that the constraint is able to work on, without accumulating too much material before the
constraint. Each intermediate link between the beginning of the chain and the constraint
process (and after it, for that matter) must be able to process all the material it receives; this
amount is lower than the link’s maximum capacity, since it is dictated by the constraint,
the weakest link.
To protect the system against the intrinsic variability of every process we put Buffers
before the constraint and at the final link of the chain (shipping buffer). The dimensions of
those buffers are calculated according to the variability we measured in the constraint and in
the system.
The higher the variability (assuming that it is in statistical control) the bigger the buffer
needs to be (n.b. the buffer is measured in time units).
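The text does not give a sizing formula; as one plausible sketch (an assumption, not the book's method), a time buffer can be derived from the measured mean and standard deviation of the times material takes to reach the constraint, so that higher measured variability yields a bigger buffer:

```python
from statistics import mean, stdev

def buffer_days(observed_feed_times, sigmas=3):
    # One plausible sizing rule (an assumption, not the book's formula):
    # cover the mean feeding time plus a chosen number of standard
    # deviations, so that a process in statistical control rarely
    # starves the constraint.
    return mean(observed_feed_times) + sigmas * stdev(observed_feed_times)

# Observed days for material to reach the constraint (illustrative data):
stable   = [4.0, 4.2, 3.9, 4.1, 4.0]
variable = [3.0, 6.5, 2.5, 7.0, 4.0]
print(buffer_days(stable))    # modest buffer
print(buffer_days(variable))  # noticeably larger buffer
```

The point of the sketch is only the direction of the relationship: more measured variability, bigger time buffer.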
Each of the Intermediate Objectives must be reached through a path of tasks, a set of
well-defined actions, organized by means of the Transition Tree. The Transition Tree
highlights the logic that we use to move from the present (the obstacle) to the desired future
(Intermediate Objective).

Building a Transition Tree


Let’s take as an example how to build a Transition Tree for IO 6 and, in particular, for
the purchase order structure. In the Prerequisite Tree shown here we have joined some of
the IOs in order to have a more simplified view. This makes sense when the IOs are similar.
IO 6 should indeed be seen as three different IOs: purchase orders, sales, and job orders.
The same is also true in some other cases.

This was our state of reality:


We have not developed the Purchase Order structure in the new ERP system and we
have not trained the corresponding people.

The corresponding intermediate objective was, of course:


We have developed the Purchase Order structure in the new ERP system and we have
trained the corresponding people.
What we have to do now is find a sequence of actions that leads us to achieve the
objective starting from the present state of reality. This sequence of actions must be seen as
a sequence of changes of states of reality. More precisely, starting from the present, we have
to identify the need that we want to satisfy and the action to perform in order to satisfy this
need. This will allow us to reach the next state of reality. This line of reasoning can be
represented as shown below.

Building a Transition Tree


Then, starting from the new state of reality, we can apply the same scheme again until we
reach the goal (i.e. the Intermediate Objective). Moreover, we have to check the logic of the
path that leads us to the objective; we have to ascertain if the actions we have found are
sufficient to reach our objective (logic of the action) and why the next need is unavoidable
(logic of the sequence). This will add more ‘leaves’ to the Transition Tree:

Model of Transition Tree


In our specific case, after identifying the necessary steps to achieve the goal (IO 6) starting
from our initial state, we derived the following Transition Tree. In the picture below, the
abbreviation EDI stands for Electronic Data Interchange and indicates a way for commercial
partners to communicate by means of telecom networks:

Purchasing Order Structure Transition Tree

The same thing has been done for all the Intermediate Objectives. On average, four
to six actions should be sufficient to achieve each IO. If more actions are needed, it usually
means that the IO should be split into two or more sub-IOs.


When we finished building all the Transition Trees, we had about 170 tasks to manage,
in order to start the transformation of the company’s ERP.
The Transition Trees provided us with a well-defined and actionable list of tasks. The
next step was to schedule those tasks.

Scheduling the tasks in the project

The tool we used to schedule the project is a software application called Independence,
developed by Domenico Lepore and a team of mathematicians and software specialists (see
www.independence.it). Independence follows the algorithm illustrated by Dr. Goldratt in his
book Critical Chain and incorporates elements of SPC-based buffer management to make it
more consistent with the Decalogue. The steps to create a project schedule are:
1. Enter the task names and their estimated duration. People usually tend to over-buffer
themselves by declaring task durations that are much longer than they really need. We
assumed that if we were to focus the entire working day (usually eight hours) on the
task, there would be no need to consider more than three days for the most difficult
task. NO MULTITASKING IS ALLOWED. Of course, some unforeseen event can
delay the operations, and this is what the buffer is for.
2. Enter each task owner name (the person responsible for the task) and all the other
resources needed to perform it. People assigned to tasks must have the right
competencies to execute them, but it is not good practice to assign too many tasks to
the same resources; they would soon become critical and would unnecessarily delay
the completion of the project. The problem of resource contention will be dealt with later.
3. Establish the connection between tasks, i.e. their real natural sequence. Indeed, it
may happen that a task belonging to a later IO can be brought forward because its
execution is independent of many of the tasks that should precede it according to the
Prerequisite Tree. For example, even if according to our Prerequisite Tree IO 6 can be
reached only after the first 5 IOs have been completed, the task concerning EDI support
could be started immediately, because its execution does not depend on any of the tasks
contained in IOs 1 to 5. This step in Independence can be done dynamically, that is,
by directly manipulating the links on a map containing all the project tasks.
An example is shown in the picture below.

Map of project tasks


As we can easily see, many tasks can be executed in parallel because they are
independent of one another.
4. Identify the Critical Chain. This is an essential step of the schedule. The Critical
Chain is the longest path of project tasks (dependent events), taking into
consideration resource contention. The Critical Chain determines the length of the
project and is therefore its limiting factor, its constraint. Independence puts the
tasks in time sequence and considers task duration, task network, the assigned
resources and the resources shared with other running projects. Then, it
calculates the Critical Chain as the longest continuous path from the beginning to
the end of the project. Of course, a resource which is assigned to many tasks is
likely to be on the Critical Chain.
5. In order to shorten the length of the project (i.e. the length of its Critical Chain) we
can try to resolve manually some of the resource contentions. To this end, we can
operate on those tasks of the Critical Chain that are consecutive and have some
common resources. Here the difficulty lies in challenging the assumptions that we
make about the impossibility of replacing some of the assigned resources with
idle ones. Naturally, we do not underestimate the skills needed for the task,
but we firmly believe that training can help elevate people’s skills and unleash
more necessary resources into the system.
6. Assign the project buffer and feeding buffers. Usually, when we manage a project, we
tend to over-protect each and every task. This is wrong. If we know that we have
more time than we really need, we invariably tend to use it all, either by starting
late or by not focusing our time solely on the execution of the task at hand.
Multitasking is the offspring of deeply rooted assumptions that people make about
efficiency in the workplace; sadly, it is fuelled by a host of software for “personal
productivity”. Multitasking, on the contrary, is a fierce enemy of a system-based
approach to management and a debaser of people’s ability to focus.
In our scheduling we use a buffer to protect the chain, not the individual tasks. The
project buffer is usually chosen to be between 20% and 30% of the length of the Critical
Chain. But we also have to be sure that the non-critical tasks that feed the Critical Chain are
protected, so that disruption in their execution does not affect the length of the project. For
this reason we add a feeding buffer to every non-critical task that precedes a Critical Chain
task. These feeding buffers usually have a length that is between 20% and 30% (the same
percentage as the project buffer) of the non-critical task sequence they follow.
7. Task update and Buffer Management. Independence allows us to enter the project
due date so that, once the Critical Chain length has been derived and the buffers
are calculated, we know exactly the day in which the project should start in order
to finish it by the due date. As the project execution begins, the project manager
updates Independence with task execution day by day and the program warns us
about possible delays to, or earliness of, project completion. At the same time, the
program generates control charts that measure buffer consumption so we know
that project execution is predictable or that we have to investigate reasons for
anomalous buffer consumption (or increase of buffer in case of systematic early
completion).
By scheduling the ERP project with the above algorithm we were able to create a 5-month
project, including the project buffer. This was a remarkable result considering the
much longer average durations for similar endeavours.
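The core of steps 4 and 6 can be sketched as follows. This is a simplification: resource contention, which Independence also considers, is ignored; the task names and durations are hypothetical; and 25% is used as an illustrative midpoint of the 20%–30% buffer range:

```python
def longest_path(durations, deps):
    # Length of the longest chain of dependent events in the task
    # network (resource contention is deliberately ignored here).
    memo = {}
    def finish(task):
        if task not in memo:
            memo[task] = durations[task] + max(
                (finish(d) for d in deps.get(task, [])), default=0)
        return memo[task]
    return max(finish(t) for t in durations)

def project_buffer(chain_length, fraction=0.25):
    # Buffer sized between 20% and 30% of the chain length;
    # 25% is an illustrative midpoint, not a prescription.
    return chain_length * fraction

# Hypothetical tasks (durations in days) and their prerequisites:
durations = {"A": 3, "B": 2, "C": 3, "D": 1}
deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
chain = longest_path(durations, deps)
print(chain, project_buffer(chain))  # chain of 7 days, 1.75-day buffer
```

With resource contention added, the longest path can pass through tasks that share a resource even when they have no precedence link; that is what turns the critical path into the Critical Chain.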
When we follow the steps described above and introduce all our data correctly into
Independence, the result is the so-called Gantt chart. In the picture below we can see a
complete Gantt chart of an ERP project implementation as well as its Critical Chain.


Gantt and Critical Chain of an ERP implementation project

In the next picture we have blown up a small section of the above project. In particular,
we can appreciate the different tasks as shown by Independence.


Detail of project

Multi-Project management: the future of the Organization


In the previous section we dealt with how a single project is scheduled. Generally
speaking, a project is an orderly sequence of actions performed in order to achieve a goal
by a certain date and within a certain budget. In a systemic company there will always be
a number of projects running concurrently and, naturally, the problem of resource allocation
arises.
In order to sustain a systemic endeavour, a company that embraces the Decalogue must
identify its goal and the set of injections needed in order to achieve it. Each injection can be
seen as the goal of a project in its own right. All these projects are normally scheduled taking
into account the correct precedence order. However, as time goes by, new injections will be
developed to account for new scenarios or possibilities, and more projects will have to be
launched.

How to run a multi-project company


The intelligently managed company that we are envisaging is run by launching a
sequence of projects. Therefore, the correct management of parallel projects is crucial for the
success of company operations.
In the Intelligent Management approach, each project is managed by concentrating on
the system constraint, the Critical Chain, by buffering against variation, through the Project
Buffer and the Feeding Buffers, and by controlling the buffer consumption through the use of
Statistical Process Control.


However, the correct implementation of the projects depends on how timely and
responsive the allocated resources are. A great advantage of this approach to project
management comes from removing unnecessary protection from each task and using
protection only where it is really needed, i.e. on the constraint. This implies that we must be
ready to accept late task completions as well as early ones. The Project Manager exerts
special control over the resources involved in tasks belonging to the Critical Chain and alerts
the resources next in line in order to capitalize on early task completions.
The situation becomes more complex when multiple projects have to be managed, and
possibly by different people. When a resource becomes unavailable, let’s say through illness,
there must be ways to take corrective action other than simply delaying the related tasks.
Although ultimately the decision is taken by the project manager, the Information System can
provide useful information to support such decisions.
The correct way to handle multiple projects is to have a pool of resources with some
degree of interchangeability and a way to communicate across different projects so as to be
able to reallocate some of the tasks.
In order to perform the tasks of the projects, certain capabilities (skills) are needed. These
skills are normally present in the company, but could also be partially missing. The IS has to
provide a database of the skills that are needed, of how those skills map onto the internal
resources, and of each resource’s degree of competency.
At the project task definition phase, the project manager, possibly helped by a team of
knowledgeable people, chooses the required skills and the necessary level of competency.
A quick search in the system database will show which available resources fulfil the
requirements (or exceed them), and their calendar of allocation on other
projects. At this stage we do not yet know whether a required resource will be available at the
time of execution of a specific task. The available resources are tentatively allocated and the
project goes into the scheduling phase.
Once the project is scheduled, i.e. once it is known when each task has to be executed, any
possible resource contentions are examined and the option is given to choose a different
resource. If any resource is changed, the project has to be rescheduled, as there could be a
different, hopefully shorter, Critical Chain.
We thus have an iterative process in place that ends when the project manager is satisfied
with the results obtained.
The degree of flexibility given by the skill/competency level of abstraction allows us, for
instance, to take a resource from an already allocated task on a running project and
substitute it with an equivalent resource. This substitution is straightforward when the
originating task is not critical while the destination task is, because the impact on the original
project is limited. Where both tasks belong to project Critical Chains, the
substitution should be done with more care and the project managers involved should be
informed.

So the tools needed to manage a multi-project company are:


• a project scheduler based on the Critical Chain algorithm, such as Independence;
• a database of the skills and of the resources, with the relative interconnections;

• a calendar of all resource allocation on the various projects, as well as other
necessary information like vacation periods, holidays, etc.
All these tools, except perhaps the first, are readily available and easily implementable
pieces of software. Together, they represent a very powerful instrument for the definition
and management of projects running concurrently in a company.

Multi-projects and flexibility


From an organizational standpoint, we need some degree of flexibility in the
management of these projects. During the project-scheduling phase, all
the necessary resources are allocated to the tasks to be performed.
Subsequent projects must avoid the use of already allocated resources or, alternatively,
push back in time the tasks that require those resources. This situation is not completely rigid;
each project has a certain degree of flexibility given by the buffers (feeding buffers and
project buffer) and this allows for some limited resource rescheduling.
However, if the number of newly scheduled projects is not small, it may well be that a
“more important” recent project has to be delayed because key resources are fully
booked by projects already “in due course”. Also, in any company there are activities with a
well-defined periodicity and different levels of criticality for the success of the company; these
recurring projects do absorb resources, often for ill-defined Throughput goals.
In order to efficiently manage a multi project company, a way to handle resource
allocation and re-allocation has to be found.
What we propose is to associate all the resources with a set of “skills”, each of which has
a degree of competency, as shown on the following table:

Resource Name   Skill              Degree of competency (out of 3)
Mark            Programmer         1
                Data entry         2
John            Accounting         1
                Data entry         3
                Customer service   2
Alice           Secretary          1
                Data entry         2
                Call centre        2


When a resource has to be allocated to a specific task, the required skill and level of
competency are looked up in the resource database. The names that match the required
characteristics are examined and one is chosen. At a later time, if a resource contention
arises, either during the project scheduling or when another project has to be scheduled,
another compatible resource is searched for and, if found, allocated to the required task.
In the example above, Mark, whose activity was requested for data entry in a project
task, could be replaced by Alice, if she is available, or by John, who has a greater
competency.
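The search just described can be sketched over the table's data; the function name and the dictionary layout are assumptions for illustration, not a specification of the IS:

```python
def find_candidates(resources, skill, min_level):
    # Return (name, level) pairs for resources holding the required
    # skill at the required competency or better, highest level first.
    return sorted(
        ((name, skills[skill]) for name, skills in resources.items()
         if skills.get(skill, 0) >= min_level),
        key=lambda pair: -pair[1])

# The skill table from the text as a small database:
resources = {
    "Mark":  {"Programmer": 1, "Data entry": 2},
    "John":  {"Accounting": 1, "Data entry": 3, "Customer service": 2},
    "Alice": {"Secretary": 1, "Data entry": 2, "Call centre": 2},
}
print(find_candidates(resources, "Data entry", 2))
# John (level 3), Mark and Alice (level 2) all qualify
```

A real IS would cross-check each candidate against their calendar of allocations before proposing the substitution.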
The introduction of the “skill level” helps to create enough flexibility in the
project-scheduling phase, solving potential contention cases by using equivalent resources
and allowing the reallocation of specific resources while keeping the same, or better,
competency required by the tasks.
Indeed, every degree of flexibility requires some level of redundancy. That is, we cannot
have all resources working full time and expect to be able to allocate people to new tasks.
On the other hand, in a “cost world” it is considered a sin to have the workforce not working
full time. The issue of excess capacity for non-constraint resources must be addressed when
the basic performance measurement system is devised. We need a certain degree of excess
capacity in order to cover the intrinsic variability of resource performance; the correct
amount can be estimated by means of statistical methods.
Another source of flexibility can come from the separation between recurring projects
and non-recurring ones. The former can be easily engineered and appropriate time for their
execution can be determined. The latter will benefit from the proficient use of the Critical
Chain method and its non-contentious resource allocation method.
The future of the organization lies in moving beyond functional/hierarchical
grids, which are clearly inadequate due to their inability to fully support, measure and promote
intrinsically cross-functional activities. The solution is to design organizations as a network of
interdependent projects that work together to achieve the goal of the organization.
In conclusion, the allocation of people to tasks according to their skills and level of
competency produces an effective use of the available resources and a more ethical
consideration of people’s capabilities, and allows us to unleash the power provided by the
“intelligent” implementation of project management.

PART FOUR

The new intuition


THE ENTERPRISE AS A NETWORK OF PROJECTS

Angela Montgomery PhD

In the first three parts of this book, we started with the intuition of the systemic enterprise as
opposed to a traditional hierarchical and functional organization. We analyzed what this
means in terms of how organizations, Industry in particular, can be organized systemically,
and the new kind of economics and leadership this requires. We then saw practical
indications of how several aspects of the systemic organization can be implemented: the
thinking tools we need to move from intuition to scheduled actions, the accounting method,
statistical methods, Marketing and IT systems.
In Part Four, we move in our cognitive spiral back around to a new beginning. We
consider the conscious and connected organization with a new intuition of how Network
Theory can be used to enhance and expand the design and management of an organization
as a system. Networking is not just a buzzword or a current trend. It represents some of the
most cutting-edge research being carried out within physics, biology and the neural sciences.
Once again, just as the intuition of the organization as a system is not an imposition but a
revealing of the underlying nature of any organization, the intuition of the organization as a
complex network uses our advancing knowledge of nature to understand how organizations
work and, therefore, how to design them so as to maximize their output.
The coming years will offer the opportunity for further analysis, development and
execution based on this intuition. In the meantime, we can attempt to understand the
relevance of this new science to organizations and what practical implications this may have
for design and management. We attempt to do so here with this brief introduction followed by
a scientific article outlining the research the Intelligent Management group has facilitated in
this area to date.

What do we mean by a network, and how do networks behave?

There are various kinds of network in the real world. Some occur in nature, such as a
beehive. Others are man-made, such as the London Underground. These networks
are a collection of nodes that are all interconnected, with various degrees of separation
among them.
These ‘simple’ networks are not designed with a specific goal that affects how the nodes
interact with each other. They simply allow connections to occur randomly; hence they are
called ‘random’ networks. The statistical distribution that describes the probability with which
these nodes are connected to each other follows a ‘normal’ Gaussian distribution (i.e. the
data cluster around the mean with a few outliers).
A different kind of network exists where some nodes are far more interconnected
than others. The nodes with more nodes connected to them are called
hubs. These networks, known as ‘scale-free’ networks, thus have a hierarchy consisting of
‘visited’ hubs and more isolated nodes.
The statistical distribution that describes the probability with which these nodes are
connected to each other follows a Power Law. This distribution follows an inverse power
relation: e.g. a tsunami twice as large as the one that occurred in Asia in 2004 would be four
times as rare (less likely to occur) and a tsunami three times as large would be nine times as
rare.
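The inverse relation of the example (the "twice as large, four times as rare" figures imply an exponent of 2) can be checked numerically:

```python
def relative_rarity(size_ratio, exponent=2):
    # Under a power law P(x) proportional to x**(-exponent), an event
    # size_ratio times larger is size_ratio**exponent times rarer.
    return size_ratio ** exponent

print(relative_rarity(2))  # twice as large -> 4 times as rare
print(relative_rarity(3))  # three times as large -> 9 times as rare
```

Real scale-free networks have various exponents; 2 is simply the value implied by the tsunami illustration.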
When we examine an organization in the light of network theory, we are able to unveil
the reticular nature of the organization and analyze its behaviour and development with a
much deeper level of understanding. More importantly, we can consciously design, manage
and operate the organization with a much higher level of optimization.
An organization can be viewed as a living, complex network with a precise goal. As such
it has nodes and hubs and it exhibits emergent properties. These are new structures or
behaviours that emerge spontaneously from interconnection. We know that we can optimize
the performance of a system by choosing a constraint and subordinating the rest of the
system to the constraint. We can therefore optimize the performance of our organization by
designing it as a network of projects in which certain resources are the hubs/constraints.
As we saw in previous chapters, by buffering the constraint, we are able to protect it from
the accumulation of variation from the processes that feed it. In the same way, we need to
protect the hub from variation, and by protecting the hub and subordinating to it, we ensure
maximum performance for the entire network.
Any Undesirable Effects that emerge from the interaction of nodes can be recognized as
emergent properties of the network. By collecting the Undesirable Effects (UDEs) and
deriving the core conflict of the network/organization (as described in Chapter 10), we will be
able to build an injection/solution that defuses the possibility for these interactions to disrupt
the functioning of the network. The use of the Thinking Process Tools acts as a catalyst,
translating our intuition of how networks behave into understanding. They enable the
building of a plan to guide the network more powerfully towards its goal.

15.

A SYSTEMIC APPROACH TO COMPLEX NETWORKS

Gianlucio Maci PhD

The Theory of Complexity first emerged at the end of the 1960s as an evolution of the studies
concerning Systems Theory, Dynamic Systems Theory and Cybernetics. At that time, the
reductionist approach had been struggling to provide satisfactory answers regarding the
non-linear interactions identified in all organized systems; no reductionist law provided
reliable solutions.
In this context, systems were no longer studied from the standpoint of their single
components, but from the standpoint of the behaviour of the system as a whole. From then
on, systems exhibiting such emergent behaviours began to be analyzed as wholes and
identified as complex systems.
We define a complex system as a set of interconnected parts that interact in a non-linear
way. The collective behaviour of this set of parts exhibits emergent properties which cannot
be found in each of the individual parts. The dynamics of such a complex system generate
new properties whose characteristics are the subject of study. Network theory takes into
consideration these emergent properties and studies the structural evolution of a complex
system.
At first sight, as it evolves every network presents a kind of random and unpredictable
behaviour. However, there are a few fundamental laws and organizing principles that
networks do follow. These laws and principles help us to understand the topological features
of many different types of systems, from cells, to commercial organizations, and even the
Internet.
A network is a set of items, called nodes or vertices, with connections between
them, called links or edges. Systems taking the form of networks (also called “graphs” in
much of the mathematical literature) abound in the world.
A link is directed if it runs in only one direction (such as a one-way road between two
points), and undirected if it runs in both directions. Directed links can be thought of as arrows
indicating their orientation. A graph is directed if all of its links are directed. An undirected
graph can be represented by a directed graph in which, instead of a single undirected link,
each pair of connected nodes has two directed links, one in each direction (see figure below).
Sechel: logic, language and tools to manage any organization as a network

[Figure: Undirected graph]

The degree of a node can be defined as the number of links connected to it. Those links can
be directed or undirected; in a directed network, therefore, each node has both an in-degree
and an out-degree, which are the numbers of in-coming and out-going links respectively.
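As a minimal sketch of these definitions (plain Python; the node names and links are illustrative, not taken from the book's figures), in-degree, out-degree and total degree can be counted directly from a list of directed links:

```python
from collections import Counter

# A small directed network given as a list of (source, target) links.
# Node names are purely illustrative.
links = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "A"), ("D", "C")]

out_degree = Counter(src for src, _ in links)  # out-going links per node
in_degree = Counter(dst for _, dst in links)   # in-coming links per node

# Total degree of each node: in-degree plus out-degree.
nodes = {n for link in links for n in link}
total_degree = {n: in_degree[n] + out_degree[n] for n in nodes}

print(total_degree["C"])  # node C: 3 in-coming + 1 out-going = 4
```

Node C, with the highest total degree, already plays the role of a small hub in this toy network.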
A geodesic path is defined as the shortest path through the network from one node to
another. Note that there may be, and often is, more than one geodesic path between two
nodes.
The diameter of a network is therefore the length (in number of links) of the longest
geodesic path between any two nodes. A few authors have also used this term to mean the
average geodesic distance in a network, although strictly speaking the two quantities are
quite distinct.
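These definitions can be illustrated with a short breadth-first-search sketch over a toy five-node chain (the network and names are invented for the example):

```python
from collections import deque

# A toy undirected network of five nodes in a chain (names illustrative),
# stored as an adjacency list.
adj = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C", "E"],
    "E": ["D"],
}

def geodesic_lengths(start):
    """Breadth-first search: length (in links) of the geodesic path
    from start to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nb in adj[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist

# Diameter: the longest geodesic over all pairs of nodes.
diameter = max(max(geodesic_lengths(n).values()) for n in adj)
print(diameter)  # the chain A-B-C-D-E spans 4 links
```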

[Figure: Hub]

The total degree of a node i is then defined as k_i = k_i^in + k_i^out.

[Figure: Diameter]

The path length between two nodes A and B is the smallest number of links connecting
them; in the figure above, the path length between A and B is 3, whereas the diameter of the
network shown is 6.
Watts and Strogatz, in their article “Collective dynamics of ‘small-world’ networks”,
introduced a measure called the local clustering coefficient C. For any node, this quantity
measures how connected its neighbours are.
The neighbourhood of a node A can be defined as the set of k_A nodes at distance 1 from A.
The clustering coefficient of a node A with k_A links is then the number of links n_A existing
among those neighbours, divided by the maximum possible number k_A(k_A − 1)/2, as follows:

C_A = 2n_A / [k_A(k_A − 1)]     (1)

The clustering coefficient of the network is then given by the average of C_A over all N
nodes:
C = ⟨C⟩ = (1/N) Σ_i C_i     (2)

[Figure: Network of 5 nodes]


The individual nodes have local clustering coefficients, Eq. 1, of 1, 1, 1/6, 0 and 0, giving a
mean value, Eq. 2, of C = 13/30.
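The computation can be verified with a short sketch. Since the original figure is not reproduced here, the five-node network below is an assumed reconstruction consistent with the quoted values: a hub H linked to the four other nodes, plus a single link A-B between two of the hub's neighbours:

```python
from itertools import combinations

# An assumed reconstruction of the 5-node figure (the original image is
# not reproduced): hub H linked to A, B, D and E, plus one link A-B
# among H's neighbours. This graph yields the coefficients quoted above.
links = {frozenset(p) for p in
         [("H", "A"), ("H", "B"), ("H", "D"), ("H", "E"), ("A", "B")]}
nodes = {n for link in links for n in link}
neighbours = {n: {m for m in nodes if frozenset((n, m)) in links}
              for n in nodes}

def clustering(n):
    """Eq. (1): C_n = 2*n_n / [k_n*(k_n - 1)]; 0 by convention if k_n < 2."""
    k = len(neighbours[n])
    if k < 2:
        return 0.0
    among = sum(1 for a, b in combinations(neighbours[n], 2)
                if frozenset((a, b)) in links)
    return 2 * among / (k * (k - 1))

values = {n: clustering(n) for n in nodes}   # 1, 1, 1/6, 0, 0
C = sum(values.values()) / len(nodes)        # Eq. (2): network average
print(C)  # 13/30 ≈ 0.4333
```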

Emergent properties and Power Laws


Networks exist everywhere and at every level of complexity. The only way to study them
is through the analysis of their emergent properties, which arise from the non-linear
interactions among the nodes.
Nodes and links generate a network. However these nodes and links are connected, the
degree distribution of the network defines its topology. Random networks have a
Gaussian-type distribution: the peak in the distribution suggests that most nodes have
roughly the same number of links, and so there are no hubs. Nodes that deviate
significantly from the average are rare.
Scale-free networks, instead, have a few nodes that are highly connected to the other
nodes, and these are called hubs. The nodes in a scale-free network follow a power-law
degree distribution, and this generates a hierarchy of nodes, ranging from rare hubs to
numerous small nodes.

[Figure: Random network and scale-free network]

Let’s compare two kinds of networks. A random network, for example a national highway
network (see diagram above), appears to be equally distributed; the goal of the nodes and
the links is simply to provide geographical coverage of the land area which has to be mapped.
Therefore, there are a certain number of nodes with an average number of connections that
roughly follows a bell curve distribution. No reference is made, in this type of network, to the
capacity of the links that connect the nodes, and there are no flows of processes associated
with them.
contrast, if we focus on an air traffic system, the scenario is quite different. The power law

208
Domenico Lepore

degree of the distribution of a scale-free network predicts that most nodes have only a few
links, held together by a few highly connected hubs. It is obviously important to measure how
great the demand is by all the nodes to be connected to a specific node. The higher the
request and the creation of hubs within the network, the higher is the probability of a scale-
free network.
In the first network, the goal is ‘to allow people to drive within the US, moving from node
to node through the links’, whereas in the second network, the goal is ‘to create a map that
allows a flow of people to move from one node to another’. Networks have to be defined
through the definition of a goal; different goals define different topologies. Scale-free
networks follow a power-law distribution in which a few nodes are highly connected and
therefore crucial for the functioning of the network. Attacks (disruptions) on these nodes
can compromise the evolution of the entire network.
As networks grow and preferential attachment takes place, hubs are created, followed by
clusters. Thus the hierarchical structure of the network evolves. As clusters become
connected, one giant cluster will emerge. The emergence of this giant component represents
a phase transition, which is called percolation.
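A minimal preferential-attachment sketch (in the style of the Barabási-Albert model, simplified to one new link per arriving node; the seed and size are illustrative) shows how hubs emerge as a network grows:

```python
import random

random.seed(1)

# Minimal preferential-attachment sketch (Barabási-Albert style, one new
# link per arriving node for brevity): a new node attaches to an existing
# node with probability proportional to its current degree, so early,
# well-connected nodes grow into hubs.
degree = {0: 1, 1: 1}   # seed network: two nodes joined by one link
endpoints = [0, 1]      # each link contributes both of its endpoints

for new in range(2, 2000):
    # Picking a random endpoint is a degree-proportional choice.
    target = random.choice(endpoints)
    degree[new] = 1
    degree[target] += 1
    endpoints += [new, target]

hub_degree = max(degree.values())
avg_degree = sum(degree.values()) / len(degree)
print(hub_degree, round(avg_degree, 2))  # the hub far exceeds the ~2 average
```

The heavy tail is the signature of the power-law degree distribution discussed above: most nodes keep their single link, while a handful of early nodes accumulate many.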

Complex networks and real organizations


How does all this relate to the creation and development of real organizations? For the
purposes of this discussion, different types of real organizations are dynamic systems that
can be considered as complex networks, defined as structures of interacting nodes and links
connected to each other. In these organizations the greatest undesirable effect is ‘not being
able to work in a synchronized way’. Synchronization implies that every node of the network
works toward the goal, and that there is one, and only one, network/company goal.
We can think of a hierarchical organization as being made up of clusters whose hubs are
not connected to each other. This type of organization can shift to a systemic network in
which each node is attached to any other node through only a few links. The systemic
approach in such a network would then support the creation of interdependencies in order to
allow emergent properties to shorten the path toward achieving the goal of the network.

How do we design such a systemic network? Such networks have to be built and designed
through:
• sharing a common goal, because this allows the subordination of the single nodes
to the goal
• creating the right interdependencies among the nodes
• identifying the right number of links to connect, and predicting how the system
will evolve through the statistical emergence of the network’s properties
It is only the goal of an organizational network that enables the proper identification of
the logic with which the network has to be designed, the direction in which the network has to
evolve, and the alternative paths to follow in case of ‘attacks’ on nodes due to the intrinsic
variation of its processes.

Connecting ‘organizations as complex networks’ with the Decalogue


How do we connect the theory and practice of the Decalogue with understanding
organizations as complex networks? We do so by recognizing that the Undesirable Effects
(UDEs) of an organization are in fact emergent properties. By understanding the emergent
properties of an organization we can find many clues regarding the structure of the network,
its interdependencies and the design of the organization. Such properties allow us to surface:
• mistakes in connections and information flow due to an erroneous mapping of the
links
• undesirable formation of clusters that constrain the functioning of the network,
preventing it from working in a systemic way
What are the nodes in an organization? They can be employees, machines within a
manufacturing environment, or any task as part of processes within an organizational
network. Gathering UDEs leads to the choice of a common goal and the correct creation of
interdependencies among these nodes.
As a result, we are able to see that the achievement of the goal of the production
network is limited by a small number of nodes that have less capacity than the others. Among
these nodes we identify one, which we call the constraint, which has to be managed properly
and protected from the intrinsic variation of the system.
The constraint is a node whose finite capacity is the most limiting factor for achieving the
desired goal of the production network. This node determines the speed at which the network
generates products and, as a consequence, is strongly connected to the pace of sales.
We can define the constraint as ‘the node with least capacity that has to be chosen
strategically’ because it represents the measurement point of the business network. All other
nodes are fully linked, within very few degrees of separation, to the constraint node in a way
that increases the network clustering coefficient and supports the systemic design.
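A toy illustration (the station names and capacities are invented for the example): in a serial flow, the pace of the whole network is set by the least-capacity node:

```python
# Toy serial flow: each node's capacity is the number of units it can
# process per hour. Names and figures are illustrative only.
capacities = {"cutting": 12, "welding": 7, "painting": 10, "packing": 15}

# In a serial network the pace of the whole flow is set by the node
# with the least capacity: the constraint.
constraint = min(capacities, key=capacities.get)
throughput_pace = capacities[constraint]

print(constraint, throughput_pace)  # welding 7
```

Adding capacity anywhere except at `welding` leaves the pace of the whole flow unchanged, which is why the constraint is the natural measurement point of the network.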
From a Network Theory point of view, the constraint is the hub of the network, and it is
helpful to build the structure of the network itself around the constraint, both to design its
mechanics properly and to model the network in the direction of the goal.
Hubs, the few highly connected nodes in the tail of the power-law degree distribution,
are known to play a key role in keeping complex networks together, and thus in the
robustness of the network.
To summarize, hubs play the important role of bridging the many small clusters into a
single, systemically integrated network.

Networks, organizations and variation


The introduction of the concept of variation within a network enables us to look at the
output of any system as a process, and not solely at its single nodes. Every process made
up of the work of many nodes is affected by oscillation due to the interaction and the
evolution of the network.


Usually, organizations are accustomed to comparing the average output values of their
systems as indicators of the system’s stability. This forces decision makers to assess
systems based on incorrect assumptions. A value below average does not, in itself, give any
indication of the stability of the system because, by definition, roughly half of the values will
fall above the average and half below.
The single value, the single sold product and the single processed item are simply parts
of a system whose evolution and reliability are displayed within a process. There are
therefore fluctuations in processing items, banking services, information given via phone,
product assembly and all the other repetitive activities completed through interconnected
nodes, each with its own intrinsic statistical oscillation. Process Behaviour Charts display all
these local interacting variations of the single nodes as a process and detect the specific
limits within which the whole process oscillates in a predictable way. This fluctuation in the
predictability of the system is called variation.
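As a sketch of how such limits are computed for an individuals (XmR) chart, the most common form of Process Behaviour Chart (the 2.66 scaling constant is the standard XmR factor; the data are illustrative):

```python
# Natural process limits for an individuals (XmR) Process Behaviour Chart.
# The data are illustrative; 2.66 is the standard XmR scaling constant
# applied to the average moving range.
data = [52, 47, 53, 49, 51, 48, 54, 50, 46, 50]

mean = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

upper = mean + 2.66 * avg_mr  # upper natural process limit
lower = mean - 2.66 * avg_mr  # lower natural process limit

# Points inside [lower, upper] reflect routine (predictable) variation;
# points outside signal an exceptional cause worth investigating.
print(round(lower, 2), round(upper, 2))
```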
The behaviour of processes within a network becomes non-deterministic in the case of
interacting systems. The consequent emergence of new properties is an important
characteristic to be kept under statistical control. At the same time, awareness of intrinsic
variation allows us to monitor the development of the network, and a statistical understanding
of that development helps us predict the emergent properties of the entire system.
These two interrelated and cyclical observations are practically implemented by projects
developed from the injections to the core conflict of the organization. Therefore, identifying
the constraint and managing the intrinsic variation of the processes occurring in the company
by using the methods of Statistical Process Control enables us to create a systemic network.
Let’s introduce an experiment regarding the behaviour of a manufacturing network under
variation. In the diagrams below we show the behaviour of the Throughput and of the
efficiency of the constraint machine as a function of the variation of a specific manufacturing
network:

[Figure: Variability: Throughput]


[Figure: Variability: usage]

In the first diagram we show the Throughput trend as a function of the intrinsic variation of
a manufacturing network. In the dashed curve, variation is applied to all processes except the
constraint process; as the graph shows, the Throughput value is affected only beyond 60%
variability. In contrast, the dotted curve, in which variability is applied all over the network,
decreases quite quickly, and the same behaviour is shown by the solid curve, in which
variability is applied only to the constraint node.
In the second diagram we show how the efficiency of the constraint node oscillates as a
function of the variability. As we can see, the trend is similar to that of the first diagram.
The diagram below shows the statistical correlation between the variability considered at
input and the resulting statistical error at output:

[Figure: Correlation]


Ultimately, this diagram shows the trend of the standard deviation of the constraint node
usage (solid line) as a function of the variability, where the dotted line represents the
Throughput trend as a function of the variability. It is clear that by increasing the variability all
over the network, the error bands of both the dotted and the solid lines become wider; these
represent the bands within which the constraint node usage and the Throughput curve
oscillate in a predictable way.
This experiment shows that any kind of variability applied to the constraint node affects
the output of the whole network; for this reason, the constraint has to be properly protected
from any statistical oscillation within the network.
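A toy Monte Carlo sketch, not the authors' original experiment, reproduces the qualitative behaviour: in a synchronous five-station line with no buffers, variation on the constraint drags Throughput down, while protecting the constraint keeps the line close to its nominal pace. All capacities and the ±30% variability are assumptions made for the illustration:

```python
import random

random.seed(0)

# A toy Monte Carlo sketch (not the authors' original experiment): a
# synchronous five-station line with no buffers, where each period's
# output is the minimum of the stations' realized capacities.
# Station 2 is the constraint (nominal capacity 10); all others have 14.
nominal = [14, 14, 10, 14, 14]
CONSTRAINT = 2

def mean_throughput(variability, protect_constraint, periods=20000):
    """Average output per period when each station's capacity fluctuates
    uniformly by +/- variability, optionally exempting the constraint."""
    total = 0.0
    for _ in range(periods):
        realized = [
            c if (protect_constraint and i == CONSTRAINT)
            else c * (1 + random.uniform(-variability, variability))
            for i, c in enumerate(nominal)
        ]
        total += min(realized)
    return total / periods

t_all = mean_throughput(0.3, protect_constraint=False)
t_protected = mean_throughput(0.3, protect_constraint=True)

# Variation on the constraint drags Throughput well below its nominal
# pace of 10; protecting the constraint keeps the line close to it.
print(round(t_all, 2), round(t_protected, 2))
```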
To this end, a time buffer is located in front of the constraint to protect it from disruption as
well as to monitor the system’s behaviour. The capacity constraint, which represents the drum
of the system, therefore has to be perfectly aligned with the network capacity, and its selling
signal, the rope, has to be sent to the replenishment node to dictate the purchasing of material.
We can conclude by saying that Drum Buffer Rope and intrinsic variability enable the
design of a scale-free network made of hubs that connect nodes in a systemic way; buffers
will take into consideration (and protect from) the intrinsic variation of the complex system.
Such a network, therefore, will drive its evolution toward a common goal. A feedback system
will continuously feed the network through a few designated links that spread the information
throughout the nodes.


Summary of the main ideas in this book

In conclusion, it may be useful to summarize the main ideas contained in this book:
• The work of an organization and the way it interacts with its environment are
systemic in nature. In other words, the organization viewed as a system is not an
‘invention’ but rather a ‘discovery’: it is the unveiling of something that is
structurally inherent to the life of any organization. Organizations are, and must be,
considered as systems. The conventional hierarchical/functional organizational
chart is far from adequate to portray what an organization should do and how it
should work.
• The most fundamental feature of any system is the way its components (its
processes) interact and are interdependent with each other in the pursuit of a
stated and agreed upon goal. Such a network of interdependencies shapes and
determines the possibilities of the system towards its goal.
• The most effective way to manage the performances of a system is through the
understanding of the variation of its processes. Such understanding must be
statistical. Hence, managing a system means, in essence, managing its variations.
Any meaningful leadership can only be originated by a profound understanding of
the nature of process variation and its impact on the system and its environment. A
leader must strive to ensure statistical predictability everywhere in their
organization in order to allow meaningful managerial decisions.
• The performances of a system made up of processes with well understood
variations can be greatly enhanced if we determine one element to be its
“physical” constraint. In such a variation-managed, constraint-based system the
performances of the whole are essentially linked to the performances of the
constraint. A new measurement system is required, based on Throughput,
Inventory and Operating Expense and their basic interrelations. The Decalogue
provides a simple algorithm and guidelines for managing such a system.
• What is the most logical and practical way to coordinate the work of a constraint-
based system? In other words, how can we proficiently organize the network of
interdependencies making up our organization? What is the organizational
structure most suitable to sustain the systemic endeavour?
• Such a structure is a multi-project environment. Any organization that accepts the
idea of system will find in the ‘network of projects’ the organizational structure that
most naturally leverages the power of a system.


• Leading and managing an organization as a network of projects certainly requires
a precise algorithm but, just as importantly, it requires from the members of the
organization the development of a new way of thinking, faster learning and a much
greater ability to act coherently with the new learning. We call this ‘enhanced
intelligence’ sechel.
• Tapping into this exclusively human kind of higher intelligence becomes possible
when we learn how to connect three basic faculties of the human mind: the ability
to generate new ideas (intuition); the ability to analyze the full spectrum of
implication of the newly generated ideas and plan accordingly (understanding); the
ability to execute coherently and proficiently upon the plan (knowledge).
• These faculties preside over the ability to accomplish change and, more precisely:
a) the ability to identify what has to be changed; b) the ability to identify the
direction of the change (what to change to); c) the ability to cater for the concerted
actions needed to bring about the change.
• The Theory of Constraints (TOC) provides a set of logical tools to address and
govern these three phases of change. These tools, seemingly simple and easy to
learn, if used properly and methodically enhance our ability to connect the above-
mentioned faculties of the mind: intuition, understanding and knowledge. They
help us acquire a better sechel.
• The pillars of the conscious and connected organization of the 21st century are
then: an increased intelligence (sechel), a statistical understanding of the systemic
nature of the work of the organization, an organizational structure based on a
‘network of projects’ that replaces the obsolete hierarchical/functional structure.
• A new leadership is needed to manage in this new scenario. Such a leadership will
drive the transformation from the present state to one of optimization: cooperation
NOT competition; win-win NOT win-lose; statistical understanding NOT forecast;
people development NOT performance appraisal; sustainability NOT short term
gains; long term planning and careful execution NOT quarterly results.
• Part Three contains several examples of the application of the knowledge and
method described in Parts One and Two. Such examples will be illuminating for
those who understand the underpinning theory, and of no use to the hasty and
unfocused reader.
• Part Four tackles the new frontier of network theory as a basis for managing
organizations. This can be achieved on a practical level by designing the
organization as a network of projects with a strategic constraint, and by using
statistical methods for continuous improvement.
The diagram below maps out the complete implementation cycle using the Thinking
Process Tools. This cycle can be used both on a macro and micro scale: to transform an
entire organization into a thinking system, or more simply to transform a situation of blockage
within an organization into a systemic project for increased Throughput.
This cycle begins with the collection of Undesirable Effects (UDEs), which allows the
core conflict, i.e. the cognitive constraint preventing an organization from achieving its full
potential, to be verbalized in the form of the conflict cloud. This conflict cloud includes the
goal of the organization and the two fundamental needs underpinning the vision and structure
of the organization.
Once the underlying assumptions that create the core conflict are surfaced, breakthrough
solutions, known as ‘injections’, can be devised. The Future Reality Tree (FRT) uses a logic
of sufficiency to connect the injections with statements of reality, ensuring the achievement
of the goal while satisfying the two fundamental needs identified in the conflict cloud. Any
negative implications identified during the building of the FRT are verbalized and addressed
using the Negative Branch Reservation (NBR).
In order to implement the injections/solutions, all obstacles are identified and re-verbalized
in terms of Intermediate Objectives to be achieved. These Intermediate Objectives are mapped
using the Prerequisite Tree. Each Intermediate Objective is further broken down into actions
using the Transition Tree, which reveals the logic, need and resulting change in reality of each
action to be taken. Once the actions have been specified, they can be scheduled into a project
using the Critical Chain algorithm based on finite capacity.

[Figure: The Implementation Cycle]

Domenico Lepore: biography

Domenico Lepore studied Physics at Università degli Studi di Salerno, Italy, and became
Dottore in Fisica in 1988 with an experimental thesis on quantum metrology. The device
obtained from this thesis was adopted as the Italian National Voltage Standard. His pursuit of
a research program was motivated by the goal of learning a method of investigation that
would be applicable to areas other than just the natural world.
From 1991 to 1996, Lepore worked for the Management School of the Italian equivalent of the
Department of Trade and Industry, where he became an expert in the work of W. Edwards
Deming and Quality Management. He was the Italian representative for the ISO continuing
education group and a member of the Italian Standards Organization. During this period Lepore
studied the Theory of Constraints (TOC) developed by Israeli physicist Eliyahu Goldratt, and
quickly saw how TOC was a natural catalyst for implementing Deming’s philosophy. He
designed and delivered unique training modules and consultancy for Small to Medium
companies combining statistical methods and TOC.
In 1996, Lepore formed his own company to further develop and test this combined
approach. He formalized this methodology integrating the work of Deming and Goldratt into a
rigorous, ten-step algorithm named the Decalogue™. The Decalogue interconnects the
systems approach based on understanding variation with the effectiveness of managing an
organization around a strategically chosen constraint. Lepore commissioned a team of
mathematicians to develop made-to-measure software to support and accelerate
implementations, including a finite capacity project scheduler called Independence.
Dr. Lepore co-authored the book Deming and Goldratt: the Decalogue with friend and
foremost TOC expert Oded Cohen, published in 1999 by North River Press in the U.S. The
book, translated into several languages, contains the basic tenets of the methodology Lepore
has implemented over the last 15 years and is recommended reading for several universities
around the world.
Lepore’s Decalogue™ methodology has led to the successful improvement and
turnaround in management and performance at over 30 national and multinational
organizations, primarily in Italy and the United States. Thanks to the robustness and
repeatability of the methodology, success has been demonstrated in a wide variety of fields –
from aluminium to nursing. Implementations led by Lepore have produced dramatic
improvement in performance by focusing on quality, speed and flow. Increased cash from
sales becomes the result of reduced lead times, expanded production capacity previously
unavailable, reduced inventories, fewer delivery delays, and greater access to new markets.
In 2002, Lepore became Senior Advisor to GrafTech International, a world leader in the
manufacturing of Graphite for industrial application. The Decalogue was applied company
wide and over the three-year consultancy period, GrafTech achieved outstanding results.

Thanks to this success, the former Chairman and CFO of GrafTech invited Lepore to join
them in a venture to raise capital, acquire companies and manage them with the Decalogue
methodology. Thus, in 2006 Lepore became President of Symmetry Holdings Inc, the first
holding company to utilize the Decalogue methodology. In 2007 Symmetry Holdings
consummated the fastest acquisition ever completed by a Special Purpose Acquisition
Company and took control of a highly traditional public company in the Steel sector with the
aim of creating one system and increasing throughput based on speed and transparency. By
the end of 2008 the company had achieved all their goals for the year: reduction in debt,
dramatic reduction in inventory, unmatched speed of replenishment within the industry, and
unification of 21 unconnected plants into a system organized around a strategically chosen
constraint.
The unprecedented economic crisis did not hit their operations until 2009. A lack of
alignment among managers old and new regarding the vision and goal of the new company,
renamed Barzel Industries, translated into errors in the timely transformation of a fragmented
organization into a profitable and unified system. Sales did not increase sufficiently to avoid a
liquidity crisis, and a tightening on lending from the banks led to a change in ownership.
The loss of this unique opportunity prompted Lepore more than ever to formalize in a
book the kind of intelligence that must be applied in order to succeed in bringing business up
to speed in an interconnected and interdependent world. He dedicates his working life
to facilitating the ability of organizations to interconnect intuition, understanding and knowledge,
from the birth of an idea to its detailed implementation, and to managing and monitoring
projects systemically.
Lepore is the founder of Intelligent Management Inc., an organization with the goal of
promoting a systems-thinking approach to organizations and boosting management
intelligence. He is co-founder of Invictus IM Corp., a strategic advisory and investment firm
which assists corporate management teams to recognize and achieve full potential for their
companies. Invictus IM uses the Decalogue methodology in all its activities.

BIBLIOGRAPHY

Lepore, Domenico et al.:


Domenico Lepore and Oded Cohen, Deming and Goldratt: the Decalogue. Great Barrington
Mass.: North River Press, 1999. Translated into four languages.
G. Maci, D. Lepore, S. Pagano and G. Siepe. “Systemic Approach to Management: a case
study”. (Poster presented at 5th European Conference on Complex Systems, Hebrew
University, Givat Ram Campus, Jerusalem, Israel, September 14-19, 2008).
G. Maci, D. Lepore, S. Pagano and G. Siepe. “Managing organizations as a system: the
Novamerican case study”. (Poster presented at International Workshop and Conference on
Network Science, Norwich Research Park, UK, June 23-27 2008).
D. Andreone, V. Lacquaniti, G. Costabile, D. Lepore, R. Monaco, S. Pagano, M. Russo and
G. Costabile Eds., “Twenty-junction arrays for a Josephson voltage standard at 100 mV
level”. World Scientific Pub. Co, Singapore, 1988, p. 1-12.
Deming, W. Edwards:
Out of the Crisis. Cambridge, Mass.: Massachusetts Institute of Technology Center for
Advanced Engineering Study, 1986.
The New Economics for Industry, Government, Education. Cambridge, Mass.: Massachusetts
Institute of Technology Center for Advanced Engineering Study, 1993.
Killian, Cecelia S. The World of W. Edwards Deming. Washington D.C.: CEEPress, 1988.
Neave, Henry: The Deming Dimension. Knoxville, Tenn.: SPC Press, 1990.
Deming of America. (Documentary) Cincinnati, OH: The Petty Consulting/Productions, 1991.
Shewhart, Walter A.:
Economic Control of Quality of Manufactured Product. New York: van Nostrand Company
Inc., 1931; American Society for Quality Control, 1980.
Statistical Method from the Viewpoint of Quality Control. Edited by W. Edwards Deming.
Mineola, New York: Dover, 1986.
Wheeler, Donald J.
Four Possibilities. Knoxville, Tenn.: SPC Press, 1983.
Understanding Statistical Process Control. Knoxville, Tenn.: SPC Press, 1992.
Understanding Variation. Knoxville, Tenn.: SPC Press, 1993.
Advanced Topics in Statistical Process Control. Knoxville, Tenn.: SPC Press, 1995.
Sechel: logic, language and tools to manage any organization as a network

Building Continual Improvement. Knoxville, Tenn.: SPC Press, 1998.


Avoiding Manmade Chaos. Knoxville, Tenn.: SPC Press, 1998.
Goldratt, Eliyahu:
What is this thing called the Theory of Constraints and How Should It Be Implemented? Great
Barrington, Mass.: North River Press, 1990.
The Haystack Syndrome: Sifting Information from the Data Ocean. Great Barrington, Mass.:
North River Press, 1990.
The Goal: A Process of Ongoing Improvement. Great Barrington, Mass.: North River Press,
1984.
It’s Not Luck. Great Barrington, Mass.: North River Press, 1994.
Critical Chain. Great Barrington, Mass.: North River Press, 1997.
The Theory of Constraints Journal. (Vol. 1-6) Avraham Goldratt Institute, 1987.
Corbett, Thomas. Throughput Accounting. Great Barrington, Mass.: North River Press, 1998.
Dunbar, Nicholas. Inventing Money. West Sussex, England: John Wiley & Sons, Ltd.. 2000.
Mandelbrot, Benoit. The Fractal Geometry of Nature. New York: W.H. Freeman, 1982.
Mandelbrot Benoit and Richard L. Hudson. The Misbehavior of Markets: A Fractal View of
Financial Turbulence. New York: Basic Books, 2004.
Capra, Fritjof. The Web of Life: A New Scientific Understanding of Living Systems. New York:
Anchor Books, 1996.
Barabási, Albert-László. Linked: the New Science of Networks. Cambridge, Mass.:
Perseus Publishing, 2002.
Barabási, Albert-László and Réka Albert. “Emergence of Scaling in Random Networks.”
Science Vol. 286, no. 5439 (1999): 509-12.
Watts, Duncan J. and Steven Strogatz. “Collective dynamics of ‘small-world’ networks.”
Nature 393 (1998): 440-42.
Newman, M.E.J. “The Structure and function of complex networks.”
http://citeseerx.ist.psu.edu. PDF, 2004.
R. Cohen, K. Erez, D. ben-Avraham, and S. Havlin. “Breakdown of the Internet under
Intentional Attack”. Physical Review Letters 86 (2001): 3682-85.
R. Pastor-Satorras and A. Vespignani. “Epidemic spreading in scale-free networks.” Physical
Review Letters 86 (2001): 3200-03.
Ginsburgh, Harav Yitzchak. The Dynamic Corporation.
http://www.inner.org/dynamic/dynamic.htm

222
Domenico Lepore

Jacobson, Simon. Towards a Meaningful Life: the wisdom of the Rebbe Menachem Mendel
Schneerson. New York: William Morrow and Co. Inc., 2002.
Bonder, Nilton. The Kabbalah of Money. Boston: Shambhala Publications Inc., 1996.
The Kabbalah of Envy. Boston: Shambhala Publications Inc., 1997.
