Domenico Lepore
with Giovanni Siepe, Sergio Pagano, Francesco Siepe, Yulia Pakka, Larry Dries,
Gianlucio Maci
Toronto, Canada.
ISBN 978-0-557-58884-8
Contents
Acknowledgements
Preface
Introduction
PART ONE: Intuition (birth of an idea)
  1. Freedom and responsibility: designing the new systemic organization
  2. Predictability vs. Speed: Back to the Deming and Goldratt conflict
  3. Unleashing potential from the trap of hierarchy
  4. Operating a systemic organization: who, when, why and how
PART TWO: Understanding (Analysis and development)
  5. Transforming Industry
  6. Intelligent Industry: Quality, Speed, Network
  7. Creating Conscious Industry, and Industry with a Conscience
  8. New Leadership, New Economics
  9. The Ten Steps of the Decalogue
PART THREE: Knowledge (application/execution)
  10. Sechel: Fostering a higher intelligence with the Thinking Process Tools
      Dr. Domenico Lepore
  11. Measurements: Throughput Accounting and Statistical Process Control for Effective Decision Making
      Larry Dries JD
  12. Managing Variation
      Dr. Giovanni Siepe
  13. Using the External Constraint in an Integrated Network for Marketing and Sales
      Yulia Pakka MScBA
  14. The Information System within Intelligent Management
      Francesco Siepe PhD and Prof. Sergio Pagano PhD
PART FOUR: The New Intuition of the Enterprise as a Network of Projects
  15. A Systemic Approach to Complex Networks
      Gianlucio Maci PhD
Summary of the main ideas in this book
Biography of Domenico Lepore
Bibliography
This book is dedicated to the blessed memory of
Dr. W. Edwards Deming and Rabbi Menachem Mendel
Schneerson, the Lubavitcher Rebbe. In my mind, they
represent the quintessential form of sechel the world so
greatly needs today.
Domenico Lepore
Dr. Lepore graduated as dottore in fisica from the University of Salerno with an experimental
thesis in quantum metrology. He became an expert in the systems-based approaches to
management of W. Edwards Deming and the Theory of Constraints (TOC), and subsequently
developed a systems thinking management methodology named the Decalogue™. Lepore
has led successful improvement and turnaround implementations in over 30 national and
multinational organizations, primarily in Italy and the United States, in sectors as varied as
aluminium and care for the elderly.
Lepore co-authored the book Deming and Goldratt: the Decalogue with his friend Oded Cohen, a foremost TOC expert. The book, published in 1999 by North River Press in the U.S., has been translated into several languages. It contains the basic tenets of the methodology Lepore has implemented over the last 15 years and is recommended reading at various universities around the world.
Lepore is the founder of Intelligent Management Inc., an organization with the goal of
promoting a systems-thinking approach to organizations and boosting management
intelligence to deal with an increasingly interconnected and interdependent world. He is co-founder of Invictus IM Corp., a strategic advisory and investment firm that assists corporate management teams in recognizing and achieving their companies' full potential. Invictus IM uses the Decalogue methodology in all its activities. (A detailed biography is provided at the end of this book.)
Further information on the Decalogue and the contents of this book can be found by
visiting:
www.intelligentmanagement.ws
Acknowledgements
There are many people I wish to acknowledge and thank for their support; without them this
book would not exist.
First and foremost, the friends and colleagues who agreed to illustrate with examples the points I tried to make throughout the book;
Oded Cohen and Martin Powell for their teachings and for standing by me through thick
and thin;
Rabbi Aaron Raskin of Chabad of Brooklyn Heights, who accepts the burden of teaching and dealing with a secular gentile without ever losing his cool;
Rabbi Yanki Tauber, who so beautifully transposes the sichos of Rav Menachem Mendel
Schneerson into essays that will always educate and inspire;
Larry Dries, Lorne Albaum and Seth Dalfen for helping me through very difficult times
and for making Canada home;
Corrado De Gasperis, Karen Narwold and Jim Guddy for their many years of effort to
make management an intelligent activity;
Piero Alviggi for being more than a brother ever could, and Alberto Mizzotti for his
unfailing professional support;
I would need a new, not yet invented, system of words to describe what I owe to my wife,
Angela Montgomery, the life and soul of Intelligent Management. Only this system could
define the breadth of sentiments and emotions I experience every day with her. This book is
hers just as much as it is mine.
PREFACE
In the last thirty years a rapidly increasing number of fields of human knowledge, from
science to medicine, from epistemology to environmental studies, have turned to systems
theory and systems thinking to gain a deeper insight into the basic mechanisms of life and its
evolution. The findings all point to certain basic features that we all, as living entities, share.
What every rigorous study in any field has proved is that not only are divisiveness and
individualism unsustainable in a globally interconnected world, they are contrary to the basic
biochemical fabric of our very existence. Economics and politics, as well as management, have
been almost oblivious to these findings and for the most part continue to regurgitate the same old
recipes; they continue to apply patches instead of acknowledging the new emerging paradigms.
Win-Win conflict resolution, cooperation instead of competition, symbiosis instead of
survival of the fittest, patterns not just structures; these today are some of the basic, well
understood elements that make up a society that can sustain its ambition to evolve and
prosper, as well as the founding elements of our biological existence. Life, as we experience
it on this planet at every level, is based on interdependencies and interconnections. We exist,
as Fritjof Capra brilliantly pointed out, within a “web of life”, a network of interdependencies
that cannot be understood solely in terms of its basic components but has to be studied in
terms of its interrelations.
This book deals with economics and management, two human endeavours that are
tragically trailing behind in the quest for understanding how we, as humans, can live together.
It does so by looking at the single most fundamental aspect of our existence that makes us
humans different from any other animal: the ability to think and learn. More precisely, it deals
with our inability to transform at an appropriate pace what we know into coherent actions. We
seem to be lacking the kind of “intelligence” required to tap into that higher level of
consciousness that makes us see what we know and who we are as one.
We cannot surrender to this deficiency because the ever-increasing complexity of our world
demands more, and better, abilities to live and work in constantly changing environments. We
need to transform our cognitive patterns to adapt to this unprecedented complexity.
More specifically, there are three faculties of the intellect that we need to learn to link together
if we want to remain at the helm of the transformation process:
1. the ability to generate new ideas (intuition);
2. the ability to understand the full spectrum of implications of these newly developed
ideas (understanding);
3. the ability to design and execute a plan coherent with this understanding
(knowledge).
The “new intelligence” we need to develop is called, in Hebrew, sechel, and in this book
we look at how it can be acquired and applied.
In order to produce results, this sechel has to be complemented by a rigorous method of
investigation that is typical of science. Dr. Edwards Deming embedded this rigorousness into
the PDSA (Plan, Do, Study, Act) cycle and its statistical underpinnings.
Last, but not least, sechel and PDSA have to be supported by a coherent organizational
structure. Such a structure must be systemic in nature; this allows us to overcome the
strictures of the traditional hierarchical/functional organization and free individuals from the
prison of wrong interdependencies.
This book is about these three elements: sechel, PDSA and systemic organizational
structure, but it is also about what enables these elements to generate results, i.e. leadership.
This is not a “self-help” book nor a “how to” guide; at the same time it is not a book mired in
philosophical speculations or, even worse, in “new age”, feel-good, psycho-behavioural
tendencies. It is about the founding principles of a new epistemology of wealth; it is a bridge
between rock solid elements of foundational knowledge and their practical application.
Ultimately, it is a book about the pattern to reunite what we do, how we do it and who we are.
It is very likely that most of you will think that what you read here makes sense. Still, I believe that many will not try to embrace or adopt the ideas presented here. I decided to use the term “Intelligent Management” to name the interrelated set of principles, methods, beliefs and behaviours that, in my opinion, make the difference between what is “intelligent” and what is, let’s say, something else. In the end, it is important to take responsibility for the amount of intelligence with which we choose to live our lives.
The book tries to explain Intelligent Management in three ways: by elucidating its founding concepts, by illustrating with examples how these concepts can be applied, and by connecting the somewhat semi-empirical nature of managerial findings with a fast-growing branch of mainstream science: network theory.
Domenico Lepore, May 2010
INTRODUCTION
In our lifetime we are witnessing, with an exhilarating sense of freedom, artificial walls beginning to crumble. Twenty years after the fall of the Berlin Wall, we are moving slowly but steadily
towards the understanding that if we are to flourish then artificial barriers must be removed,
from apartheid and discrimination in all its manifestations, to ecology and global economies.
As communication becomes more instantaneous, we are increasingly aware that we are not
separate from others the way we perceived ourselves to be in the past. Every one of us, at
the level of individual, family, community, nation and beyond is interconnected; indeed, we
are part of a network of interdependencies.
This is not simply a political and sociological stance. There is an underlying scientific
basis for this shift in worldview away from a Newtonian, industrial age notion of the whole
being made up of separate parts towards one of dynamic interconnection. We know from
physics that living organisms can be viewed as systems, be they cells or solar systems, and
as such they obey certain laws. We define a system as a network of interdependent
processes that work together to achieve a goal. Such systems are made up of a web of
interconnections. In other words, we understand more and more that our reality is
systemic. For this reason, any attempt to govern countries or manage organizations that is
not based on this awareness is doomed to create damage. The negative effects may not be
immediate, but they will inevitably come.
If we have this knowledge, then why is it that decision makers fail to understand the long-
term, systemic implications of their actions in virtually every sector of society? Why do we
lack the basic intelligence needed to realize that every decision made for the advantage of a
privileged minority and to the detriment of the majority will eventually lead to disaster? What
is it that leads reasonably educated and, generally speaking, well-intentioned individuals to
make one shortsighted, half-baked, non-systemic decision after another?
In November 2009 the US Department of Agriculture announced that in the previous
year 49 million people in the USA had lacked consistent access to adequate food. In New
York City, as of December 2009, there were 1.5 million people with no paid sick days and
nearly 2 million who cannot read and write in English. Perhaps some business owners and
city officials believe there is an advantage in keeping the population ignorant and unhealthy.
What is more likely is that these people have a cognitive inability: they simply cannot draw
basic cause-and-effect relationships between a sick person going to work in the morning and
the spreading of illness (not to mention their inability to perform); between the growth of
illiteracy and the drop in quality of necessary services to the community; between an increase
in hunger and an increase in crime.
In the USA, over five years, ordinary people borrowed 1.4 trillion dollars that they could
not repay and lost their homes; the “financial industry” developed 14 trillion dollars of various
assets on the back of those loans and borrowed much more using these assets as collateral.
In 2009 the financial industry granted many tens of millions of dollars in bonuses to the ‘heroes’ who generate so much tax revenue for the government. The ‘only way out’ of the crisis
seemed to be to rescue the banks. At the same time, a health bill to spend just under 1 trillion
dollars over ten years raised concerns. The list of nonsense is endless.
Greed, lust for power, evil inclinations, cultural beliefs and historic circumstances are all
part of human existence. However, the sheer and disconcerting inability displayed over the
last forty years by the leaders of the western world, with few exceptions, to exert meaningful
guidance, from world peace to the environment, from poverty and inequalities to the folly of
modern finance, leads me to believe that the issue at stake is much deeper than a host of
unguarded sentiments and unbridled instincts.
The recurrent inability of leaders to provide meaningful guidance has prompted me to
write this book. I believe that, in spite of the disasters around us, we are experiencing a
unique phase in our human history: we have the science, the knowledge and the tools to
foster a more systemic use of the intellect to capitalize on our intuition, to develop thorough
analyses, and to design and take correct actions for the common good.
We are talking about a more evolved form of human intelligence that allows intelligent,
thoroughly thought-through decisions to be manifested through intelligent action. There is no
precise term for this evolved intelligence in English, but it is perfectly described by the
Hebrew word ‘sechel’. By integrating the scientific method with a more systemic use of the
intellect we will achieve a greater level of sechel. In order to foster a more systemic
intelligence, or sechel, we must increase the ability of our minds to connect three faculties of
the intellect:
• intuition (the birth of an idea);
• understanding (development and analysis);
• knowledge (application/execution).
This sechel will enable a re-foundation of economics and management. My focus in this
book is the systems-based management of industry. This is the area I have worked in for
almost two decades. I believe that industry is where long-term, sustainable wealth for
countries is created. Quite simply, no real economy can exist without a solid industrial
infrastructure. The hope for the future of industry is Intelligent Management, and the purpose
of this book is to provide the vision, the method and the know-how for achieving this obvious,
but surprisingly rare, ability.
systemic and constant effort towards continuous improvement. This mindset and cognitive ability can be reinforced and empowered through a formidable set of logical tools developed by the Israeli physicist Eliyahu Goldratt, the creator of the Theory of Constraints. By
integrating the scientific method with this systemic use of the intellect we will achieve a
greater level of sechel, which is precisely what the world today needs so desperately.
For the last twenty years, in different capacities and in both Europe and North America,
I have worked to enhance the business performance of a variety of organizations: industry
primarily, but also government, healthcare and education. As a physicist turned
‘organizational scientist’, I have always acted by following a theory, i.e. a set of assumptions
that I went on to validate or disprove. The main body of knowledge I was armed with was the
Theory of Profound Knowledge set forth by Dr. W. Edwards Deming and the Theory of
Constraints developed by Dr. Eliyahu Goldratt. From the mid-1990s with Oded Cohen
I started to integrate these two management theories and this integration evolved into a
coherent systems-based methodology that we called the Decalogue™. We published our
findings in 1999 in our book Deming and Goldratt: the Decalogue (North River Press).
(and emotional exit wounds) to prove that people are ill at ease when asked to change
behaviours as a result of new learning. However, I am optimistic: sechel, the Hebrew word that describes the purely human ability to modify behaviours as a result of learning, is an ability we
can systematically pursue and develop.
The ten steps presented in Deming and Goldratt: the Decalogue are the founding
elements and the algorithm to sustain the transformational effort required to create and
manage an enterprise as a system, as opposed to the hierarchical/functional model.
Intelligent Management covers the global cognitive and philosophical landscape out of which
the Decalogue emerges and exercises the three faculties of the intellect that have to be
addressed: intuition, understanding and knowledge, i.e. sechel. Intelligent Management
essentially ‘rethinks’ the main debaser of these three faculties in industry, i.e. the
organizational design. Increased sechel and a suitable organizational design facilitate the
adoption of the scientific method embodied in management by the Plan Do Study Act (PDSA)
cycle and its underlying statistical content. Likewise, the adoption of the scientific method
facilitates the creation of a suitable organizational design and an increased sechel: all three
strands are interdependent and continuous, like the sides of a Möbius strip. This is Intelligent
Management and the only limits to what it can achieve are the constraints we impose upon
ourselves.
The chapters in Part Three deal with the basic elements that this newly acquired sechel should
enable industry to pursue. We focus on precise applications of Intelligent Management to:
• The fallacy of the prevailing measurement system
• Statistical thinking and synchronization
• Organizational design (project-based) and the role of the Information System
• Marketing and sales (creating systematic breakthroughs)
• The ‘choked tube’ enterprise and network theory
These are the fundamentals that not only create the foundation for management in the
twenty-first century, they point at a model for a new kind of leadership. Needless to say, the examples are all interdependent, and no attempt is made to suggest that readers could simply replicate what they read here in their own organizations.
This is not a self-help book, indeed I doubt that such books truly exist. This book is
dense and, at times, challenging. It is not an easy read and parts will require actual study. Its
aim is to encourage leaders and managers, especially in industry, to develop a better sechel.
This ability is mandatory if we want to cope with the challenges posed by the increased
complexity of our world and, I believe, even more importantly, if we want to live a more
meaningful life.
PART ONE
Intuition
(birth of an idea)
In Part One we examine the conscious and connected organization from the perspective of
Intuition (birth of an idea). The intuition here is that an inherent conflict between hierarchy and system underlies any organization, and that it can be resolved through the adoption of a systemic model, using the Decalogue as the algorithm to achieve that transformation.
1. Freedom and responsibility: designing the new systemic organization
theory have taken organizations beyond the realm of military-style hierarchies. I believe nothing
in the last sixty years of management studies comes even close to the depth, insightfulness,
rigor and vibrancy of their work. And yet, a truly successful, groundbreaking, internationally
recognized, full-blown application of their teachings has yet to come.
For nearly twenty years, I have been working with organizations to help them understand
Quality and Synchronization operationally. After a four-year collaboration with Oded Cohen,
in 1999 we published Deming and Goldratt: the Decalogue. The Decalogue is a ten-step
algorithm that contains the elements of understanding required for a systemic approach to
management. We will examine some practical applications in Part Three of this book.
Since our book came out, through successes and failures, I have considerably
broadened our initial understanding. I have been personally involved in many dozens of
implementations of the Decalogue as an advisor, consultant, mentor, top manager and board
member. I was able to gain increasing confidence in the scientific validity of the Decalogue
thanks to the results achieved, and also realize which further elements of knowledge were
needed to make it even more operational. The results throughout a wide range of
organizations were solid and consistent:
• reduced lead times
• recovered production capacity that was previously unavailable
• reduced work-in-process and finished product inventory
• reduced delays in delivery
• improved quality resulting in reduced waste and rejections
• enhanced project management reliability
• increased access to new market segments
• last, and most certainly not least, increased cash (throughput) as a result of
increased sales.
However, in spite of the undoubted successes, the results were not always on the scale
it was fair to expect. Some fundamental issue was still unresolved and at a certain point we
were hitting a wall.
The wall against which all these heart-felt and knowledge-guided implementations
crashed has invariably been the organizational structure. Something in the shape of the
organizations where implementations of the Decalogue were carried out created an inevitable
barrier to full success. The question then became, how do we create a structure that bears
the weight and flow of a truly systemic organization? Why is it that people, in spite of their
genuine desire to create this kind of organization, get stuck?
I believe this issue has two facets; one concerns the practical mechanism with which
such a structure can be created, the other runs much deeper and touches the very essence
of the relationship between who we are and what we do. The issue of organizational design
and the way people carry out their responsibilities within that design becomes then critical for
the successful creation of a systemic enterprise.
All that matters: variation and constraint, i.e. predictability and synchronization
Compared with the traditional hierarchy, the systemic organization is alarmingly simple. This simplicity derives from the fact that, instead of imposing a conceptual model such as a function, a systemic organization reveals the way the organization intrinsically works.
In order to create an organization that combines the two most fundamental elements of a
successful system as explained by the theories of Deming and Goldratt, we must:
1) understand the system we are operating and its intrinsic variation;
2) provide a synchronization and protection mechanism that enables its effective management.
Let’s look at what that means. Deming’s major contribution was to insist on the
understanding and management of variation (see Chapter 12). Every human process, from
waking up in the morning to sending a man to the moon, is affected by variation; a process
can never be repeated in an identical way. Incorrectly managed variation in manufacturing,
for example, leads to scrap, waste and money lost.
It is impossible to eliminate all variation because entropy exists and is intrinsic to any
process. However, through statistical methods it is possible to understand variation, measure it,
manage it and take actions to reduce it. This requires a mindset of continuous improvement as
opposed to monitoring. In spite of the disastrous and costly effects of ignoring this reality,
surprisingly few managers are conversant with Statistical Process Control.
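As a minimal, hypothetical illustration of what Statistical Process Control makes possible (the data and names below are my assumptions, not taken from this book), the following Python sketch computes the three-sigma limits of a Shewhart individuals chart, the basic SPC tool for deciding whether observed variation is routine or signals a special cause:

```python
# Shewhart individuals (X-mR) chart: separates common-cause variation
# (points within the control limits) from special causes (points outside).
def control_limits(data):
    mean = sum(data) / len(data)
    # Average moving range between consecutive observations.
    mr_bar = sum(abs(b - a) for a, b in zip(data, data[1:])) / (len(data) - 1)
    # 2.66 = 3 / d2, with d2 = 1.128 for moving ranges of size 2.
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

def special_causes(data):
    lcl, _, ucl = control_limits(data)
    return [(i, x) for i, x in enumerate(data) if x < lcl or x > ucl]

# Hypothetical daily scrap percentages from a production line.
daily_scrap = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 9.7, 5.1, 4.7, 5.0]
lcl, mean, ucl = control_limits(daily_scrap)
print(f"LCL={lcl:.2f}  mean={mean:.2f}  UCL={ucl:.2f}")
print("special causes:", special_causes(daily_scrap))
```

Only the point at index 6 falls outside the limits; all the other day-to-day fluctuation is common-cause variation, which no amount of blaming individual days will reduce. Only improving the process itself will.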
Sechel: logic, language and tools to manage any organization as a network
connection that is formed very early in our life and continuously reinforced by the
environment we live in. We will come back to this question in more detail in the following
chapters.
An organization brings individuals with competencies together. By combining these
competencies in a suitable manner we achieve the goal of the organization. In other words,
individual efforts can lead to a global result if we devise a mechanism that creates orderly
coherence in the combination of these efforts. A hierarchy should facilitate the creation of this
order. Accordingly, it is not the hierarchy that we challenge but the kind of subordination that
a traditional hierarchy calls for.
Traditionally, the way we translate the hierarchy into a company structure resembles the
well-known pyramid below.
Hierarchy pyramid
As we know, the vertical lines that connect the boxes clearly define the span of control
and the accountability of those boxes. Along with the accountability goes the amount of
individual power that those boxes have. A hierarchy is then conventionally translated into a
mechanism of control and reporting. Accordingly, we can easily conclude that the pyramid
cannot achieve the goal of creating the orderly coherence among the individual efforts that
we were seeking due to the separations it imposes.
Is there another way to create that coherence, hence giving the hierarchy its natural role
of creating order? There is, but we do need to make a cognitive effort to grasp it. One step at
a time.
At its most basic level, if you can imagine an x-ray of a company, any company can be
seen as a set of recurring and non-recurring activities. For instance, ordinary maintenance,
book closing, production scheduling, shipping, etc. are recurring activities; the introduction of
a new technology, the expansion of premises, the launch of a new product, can be seen, in a
way, as non-recurring activities. If we view a company as a system then we must understand
how to build the interdependencies that make up this set of activities and how these
interdependencies evolve in time.
This is an important point: not only do we need to know how these activities are going to
take place (this can be done by mapping processes with a deployed flowchart) but also the
timing of this evolution and we need a method to control it. As a deployment flowchart in
Chapter 12 clearly shows, virtually any activity that an organization undertakes is cross-
functional; this realization alone should be enough to dismantle the idea of the validity of a
functional organization. Moreover, these interdependencies must be staffed and managed
within a temporal spread.
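The "x-ray" of recurring and non-recurring activities described above can be sketched as a small data structure. The activity and function names below are purely illustrative assumptions:

```python
# A company x-ray: activities tagged as recurring or non-recurring,
# together with the functions each activity crosses.
activities = {
    "book closing":          {"recurring": True,
                              "functions": ["finance"]},
    "production scheduling": {"recurring": True,
                              "functions": ["sales", "production", "purchasing"]},
    "shipping":              {"recurring": True,
                              "functions": ["production", "logistics"]},
    "new product launch":    {"recurring": False,
                              "functions": ["marketing", "production", "finance"]},
}

def cross_functional(activities):
    """Activities that span more than one function."""
    return [name for name, a in activities.items()
            if len(a["functions"]) > 1]

# Virtually every activity turns out to be cross-functional.
print(cross_functional(activities))
```

Even in this toy example, three of the four activities cut across functional boundaries, which is precisely the observation that undermines the validity of a purely functional organization.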
Help, where am I?
There is a real cognitive ordeal to overcome before we can embrace the idea of a
project-based organizational structure: we feel we lose all our familiar reference points. We
are accustomed to having “a career” in a functional area with a “boss” that assesses our
functional performances and a “bonus” paid to reward a local optimum. This is so true that
even if we understand that these things do not make sense we still struggle to give them up.
Yes, we may like the idea of a “systemic organization”, it makes sense to us but…hey, don’t
take away my local certainties: I need a boss to report to, I want to be measured locally (how
else?) and I do want my bonus for doing my job well and loyally (to the boss). What the heck!
Let’s see how we can accommodate, at least rationally, these seemingly indispensable
features of our professional life. A systemic organizational structure that is based on process
predictability and a high level of synchronization must safely rest on:
a) A clearly, indeed “super-clearly” laid out network of conversations (what everyone
needs to say to everyone else to make processes work: input, output, how to
measure it and how to improve it; indeed, how the process should work). We can call
it “The Playbook”;
b) A suitable Information System structure to support these conversations.
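A minimal sketch of what such a "Playbook" might look like as a data structure, following the input/output/measurement scheme described above (the process names and field values are my illustrative assumptions, not the book's):

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    """One Playbook entry: what one process must say to another."""
    source: str    # process that produces the output
    target: str    # process that consumes it as input
    output: str    # what is handed over
    measure: str   # how the handover is measured
    improve: str   # how the handover is improved

playbook = [
    Conversation("order entry", "production scheduling",
                 "confirmed order with due date",
                 "orders entered complete and on time (%)",
                 "weekly review of rejected orders"),
    Conversation("production scheduling", "shipping",
                 "finished goods ready on schedule",
                 "due-date performance (%)",
                 "daily buffer management meeting"),
]

def conversations_of(process):
    """Every conversation a given process takes part in."""
    return [c for c in playbook if process in (c.source, c.target)]

for c in conversations_of("production scheduling"):
    print(f"{c.source} -> {c.target}: {c.output}")
```

Laid out this way, the network of conversations becomes something that can be inspected, completed and improved process by process, rather than living implicitly in people's heads.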
These two issues are neither conceptually nor technologically difficult to address. The
network of conversations requires clear ideas and sufficient knowhow on how to operate the
company’s processes and how to link them together. It can be built in weeks, not months, for
any midsized company and a few more weeks are needed to expose anyone in the company
to the outcome of this work. The suitable Information System structure is even simpler: the
different pieces that would make up this IS are already available as open source and all it
takes is clarity on what an IS should be for (see Chapter 14). Unfortunately, concepts and
technologies are the offspring of paradigms and these paradigms are originated by forces;
often we are not trained to understand and control these forces. Let’s digress for a second.
teacher, inspires us to want to know more. Joyfulness is the state of mind that is conducive to
openness and availability to receive; it is a state of grace that makes us see possibilities, that
lifts our spirit and originates positive feelings. As one of the greatest intellects of our times,
Rav Menachem Mendel Schneerson, has taught us, joy is the force that breaks all the
boundaries.
Far too often in our world, instead, learning is incentivized by the promise of a reward other
than the learning itself and joy is confined to some material achievement. In the western world, by and large, we have replaced the joy of learning with the acknowledgement that comes with it.
Learning, then, becomes merely functional to achieving some grade or certificate, and completely
disconnected from what that learning should be for: to open our mind to the endless possibilities
that exist and encourage more learning. The current education system contributes very heavily to
the narrowing of our horizons by providing courses on “functional” competencies, the ones that are
going to be rewarded in the workplace, and trains us through grades and competition to see joy as
unnecessary or even counterproductive. The current education system triggers the cycle of despair
and the debasement of our innate desire to learn, and the prevailing management style reinforces
that cycle.
The complete disassociation that managers have from learning is exemplified by the total
failure of the majority of the training efforts that take place in organizations. Simply put: real
learning, not the kind that comes from reading quick fix books, but the kind that changes
behaviours, is not considered “strategic” for career advancement.
What is the real difficulty that we face in wanting to create a true learning organization
that dismantles the functional structure and replaces it with the far more suitable network of
projects? It is not connected with lack of knowledge of how to do it, nor with the lack of
technologies to support it. The real issue is the mental barrier, or cognitive constraint that
prevents individuals and organizations from working together as a system for a common goal.
We cannot afford to be pessimistic about the possibility to elevate that cognitive
constraint. We cannot afford to surrender to the lack of intelligence that permeates the way
politics and businesses are run today. We cannot afford to see as inevitable that insurance
companies and Big Pharma control our health and we cannot afford to continue to believe
that one day we could be part of that 1% of the population that owns 80% of the wealth. We
cannot afford to trust the claims made by the financial pundits and Wall Street wizards of how
they boost our economy. We cannot afford to be persuaded that only our individual efforts
and personal drive will, eventually, be the cause of our success as a country. Those days are
gone forever.
We live in a completely interconnected, interdependent, increasingly complex world
where the levers for success have definitively shifted from competition to cooperation, from
win-lose to win-win, from me against you to you and me against the problem. We don’t just
need new knowledge; we need a new form of organization and a new covenant with our
mind. In order to live and prosper in this world of unprecedented interconnection we have to
learn at a much faster pace and we can only do it if we improve our ability to leverage our
intellect. This book was written to show that we have the necessity and the way to do
precisely that.
2. Predictability vs. Speed: Back to the Deming and Goldratt conflict
There are situations of stasis in our lives and in our work that can seem insurmountable. Often
we feel forced into polarized positions that keep us far from a solution. At best, we may adopt a
compromise, but a compromise is not a solution, it merely leaves both sides dissatisfied. What is
a real solution? It is a breakthrough that moves us forward while protecting the true needs of both
sides. In this way the ‘conflicting’ positions simply cease to exist.
The intuition that an organization is a network of projects and therefore must be
designed and managed accordingly evolved over fifteen years of work. The validity of this
solution can be better understood if we take a step back and trace that evolution. An
important and fundamental conflict first had to be solved. In this chapter and Chapter 3
I describe the underlying conflict that engendered that solution, and the process it took to
overcome and solve it.
Discovering Deming
In 1993 I joined a network of academics and professionals based in the UK called the
British Deming Association (BDA). It had been created a few years earlier, with Dr. Deming
himself as its honorary Chair, and its goal was to promote and disseminate the teachings of
Deming through seminars, real life testimonials, conferences, and publications.
Dr. Deming was a physicist and statistician whose work founded the Quality movement. He
is perhaps best known for his work in Japan. His advice on improving design, production, quality
and sales, founded on a thorough understanding and application of Statistical Process Control,
helped Japan rise from the ashes after World War II to become a world leader in manufacturing.
A basic tenet of his philosophy was to see and manage an organization as a system, in other
words a network of interdependent processes that work together to achieve a goal.
I had been studying Deming extensively over the previous three years as part of my job at
the Milan management school of the Camera di Commercio, part of the Department of Trade
and Industry. My brief was to initiate small businesses in the basic principles of Quality
Management. As a physicist now working with organizations, I found Deming’s philosophy
exhilarating. By 1993 I had transformed my far too quiet activity of government agency
employee into a vibrant crusade to promote Deming’s philosophy and I was gaining some
traction. It was as if I had known Deming forever; his words, full of power, wisdom and unrivalled
scientific rigor, resonated with me more deeply than any other form of organizational study.
Meeting the British Deming Association further boosted my enthusiasm and enriched my
learning manifold through continuous contact with illustrious Deming scholars. Dr. Deming's
Sechel: logic, language and tools to manage any organization as a network
zest for knowledge has been one of the strongest and most profound sources of inspiration in my
whole life and the Theory of Profound Knowledge the blueprint for my professional development.
By 1995, may I say with some pride, I was successfully advising dozens of companies in
northern Italy and training hundreds of managers in Deming’s philosophy. Dr. Deming never
slowed down his efforts to promote better management through better understanding of
variation and how this variation permeates every aspect of our lives. This is what I was
passing on to businesses and the message was loud and clear: reduce variation, promote
statistical predictability and improve Quality.
Theory of Constraints for resolving conflicts. This tool is fundamental for making a logical,
cause-effect analysis of any situation in which we are stuck. When used in concert with the
other Thinking Process tools, the conflict cloud can help us generate powerful solutions that
capitalize on our intuition, develop our understanding through analysis, and lead us to
design a set of coherent actions that we can execute. The conflict cloud is a tool that
everyone needs to learn. What follows is the journey out of the Deming vs. Goldratt ‘cloud’.
What is common to both these needs? The common goal, i.e. what both Deming and
Goldratt pursue is a sustainable process of continuous improvement of performances.
We write the goal that is common to both need B and need C in the box labelled A as below:
The reality in which this conflict was rooted appeared in all its clarity when I surfaced all
the assumptions. This is the process we use with the conflict cloud in order to ‘evaporate’ any
conflict. We verbalize all the assumptions we make that lead us to make the statements in
each of the five main boxes, A, B, C, D and D’. Whereas basic and robust assumptions about
reality allow us to live our lives (we ‘assume’ that when we open our front door we can step
out onto a hard surface) weaker assumptions are mental models or limiting beliefs that keep
us blocked in certain situations. We can derive these assumptions by using the word
‘because’:
IF A (the goal) THEN B (the need behind position D) BECAUSE (assumption(s) A-B);
IF B THEN D BECAUSE (assumption(s) B-D);
IF A (the goal) THEN C (the need behind position D') BECAUSE (assumption(s) A-C);
IF C THEN D' BECAUSE (assumption(s) C-D').
Here are the assumptions I surfaced for the Deming vs. Goldratt conflict:
A-B: the only way to sustain any process is to ensure its predictability
B-D: Deming’s philosophy and management approach is designed to ensure (and it is
based upon) process stability
A-C: continuous improvement of performances cannot be separate from the pace at
which these performances are achieved
C-D’: The management of finite capacity entailed in the Theory of Constraints maximizes
the pace at which units of the goal can be achieved.
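The five boxes and the assumptions connecting them can be sketched as a simple data structure; this is a hypothetical illustration (the class and field names are my own), populated with the Deming vs. Goldratt wording from the text above:

```python
from dataclasses import dataclass, field

@dataclass
class ConflictCloud:
    """The five boxes of a TOC conflict cloud plus the assumptions
    that connect them (surfaced with the word 'because')."""
    goal: str               # box A
    need_b: str             # box B (the need behind position D)
    need_c: str             # box C (the need behind position D')
    position_d: str         # box D
    position_d_prime: str   # box D'
    assumptions: dict = field(default_factory=dict)  # e.g. {"A-B": [...]}

# The Deming vs. Goldratt cloud as verbalized in the text
cloud = ConflictCloud(
    goal="A sustainable process of continuous improvement of performances",
    need_b="Ensure process predictability",
    need_c="Maximize the pace at which performances are achieved",
    position_d="Adopt Deming's management philosophy",
    position_d_prime="Adopt Goldratt's Theory of Constraints",
    assumptions={
        "A-B": ["the only way to sustain any process is to ensure its predictability"],
        "B-D": ["Deming's approach is designed to ensure process stability"],
        "A-C": ["improvement cannot be separate from the pace at which it is achieved"],
        "C-D'": ["TOC's finite-capacity management maximizes the pace of goal units"],
    },
)

# 'Evaporating' the cloud starts by making every assumption explicit
for arrow, assumption_list in cloud.assumptions.items():
    source, target = arrow.split("-")
    for a in assumption_list:
        print(f"IF {source} THEN {target} BECAUSE {a}")
```

Surfacing the assumptions this explicitly is what makes them available for invalidation in the steps that follow.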
The picture was very clear and I looked at it for quite a while before I was able to take
another step. Clearly, I was massively leveraging my intuition because there was no literature
supporting my claim that Deming and Goldratt could be brought together in a cohesive and
rigorous manner. Actually, I had been given many warnings not to try this because, unlike
myself, quite a few respectable scholars firmly believe that the two men represent different
and irreconcilable paradigms.
Meanwhile, Oded kept pushing me to think harder. He had no vested interest in this
pursuit but he did take sheer pleasure in any endeavour that would promote people coming
together rather than being divided. Moreover, he was interested in developing Theory of
Constraints (TOC) professionals in Italy and thought I was a worthwhile student, willing and
ready to embark on a business journey.
In 1996 I left my safe job in the Department of Trade and Industry to found the company
MST, Methods for Systems Thinking, with the goal of promoting a Deming-Goldratt approach
to management. The business plan sounded interesting to the CEO of the newly created
business incubator in Milan and I was granted a little office in their beautiful premises. I had
an office, a small but talented team, some customers but not yet the complete conceptual
solution I was seeking.
Oded and I used to meet every two to three months in different parts of the world and for
different occasions. We would always find time to sit down and further our discussions and
examine the conflict more deeply. The solution came one day, almost seamlessly.
How could I retain (protect) the need for speed and the need for predictability without
ending up in a conflict? Please, take a look at the historical ‘Production Viewed as a System’
picture that Deming drew in 1950.
This is a system, and its three main components (the customers, the feedback
mechanism and the interdependencies) are clearly laid out. But it does not have to be limited
to Production alone. Instead of ‘Production Viewed as a System’, we can just as easily write
‘Enterprise Viewed as a System’ if we change the names of all those arrows (to, for example,
production, sales, marketing). In other words, applying the same logic we can use this model to
portray any system we want to investigate. Indeed, we still need to elaborate on the
individual arrows, but the concept is pretty clear. Each of those arrows, and the system as a
whole, must be understood and managed in its statistical evolution.
So where does the idea of constraint, Goldratt’s main tenet, come into the picture? Goldratt’s
majestic contribution to management is the understanding that no matter what we do, the
speed at which we proceed will be dictated by one or very few element(s) of the system. The
Theory of Constraints allows us to leverage this reality and use it to increase performance by:
• identifying the constraint
• exploiting the constraint to make the constraint work at full speed
• subordinating all the processes of the system to the constraint, i.e. build the system
around the constraint
• placing a buffer of material (or time) in front of the constraint so it can work constantly
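The claim that the speed of the whole system is dictated by one element can be shown with a minimal sketch (the process names and capacities are made up for illustration):

```python
# Hypothetical daily capacities of the processes in a simple flow line
capacities = {
    "cutting": 120,
    "welding": 95,
    "assembly": 60,   # the constraint
    "testing": 110,
    "shipping": 140,
}

# No matter how fast the non-constraints run, system throughput is
# bounded by the slowest element: the constraint.
constraint = min(capacities, key=capacities.get)
system_throughput = capacities[constraint]

print(constraint, system_throughput)  # assembly 60
```

Exploiting and subordinating, in this picture, mean keeping `assembly` always fed and never asking the other processes to run faster than 60 units a day toward it.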
This insight is absent from a purely Deming-based approach. But there was more to it
which I was able to see immediately: if we decide to ignore the idea of constraint, we will
soon get ourselves into a situation where the system we operate, no matter how good we are
at managing variation, will be made up of interacting and constantly shifting constraints.
Our system is, obviously, a finite one. Its components, as well as the whole system, are
finite. This means that we have a choice: we can allow random fluctuations (variation) internal
or external to the system to move the constraint erratically, or we can decide strategically by
which element of the system we want to be limited (constrained).
In other words, constraint(s), just like variation, are integral to any system.
The idea of process stability combines very naturally and elegantly with the concept of
constraint when we place it within the context of the enterprise as a dynamic system. We can
have an idea of this from the picture below:
This picture could certainly be improved, but it does capture what I came to realize. In
order to bring together cohesively the idea of constraint with the idea of a statistically
controlled system we have to orchestrate statistically controlled processes and subordinate
them to a well-defined (and very stable) part of our system, the constraint. If these statistically
stable, well orchestrated, processes all have the capacity to subordinate to the chosen
constraint, then all we need in order to protect the whole system is a “buffer” in front of the
constraint; needless to say, the oscillation of the buffer must be controlled statistically.
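Controlling the oscillation of the buffer statistically can be done with a standard Shewhart individuals (X-mR) chart; here is a minimal sketch with made-up buffer-consumption percentages:

```python
# Buffer consumption (%) observed job order by job order (made-up data)
data = [42, 55, 48, 61, 50, 47, 58, 44, 52, 49]

mean = sum(data) / len(data)

# Average moving range between consecutive points
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard individuals-chart limits: mean +/- 2.66 * average moving range
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar

# The buffer oscillation is predictable if all points fall within the limits
in_control = all(lcl <= x <= ucl for x in data)
print(round(lcl, 1), round(ucl, 1), in_control)
```

A statistically stable buffer is precisely what allows the constraint to be protected without constant firefighting.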
In the years following this realization, I spent a considerable part of my time on both sides
of the Atlantic trying to explain, not always successfully, why this idea of organization would
work, while achieving considerable results with clients. Oded Cohen and Larry Gadd of North
River Press believed in the model and so the next step was to write a book about it. This
book, Deming and Goldratt: the Decalogue, explains the ten steps that provide a map to
guide and sustain organizations in a continuous improvement pattern. It has since sold
thousands of copies, been translated into several languages, is recommended reading for
university courses internationally and is cited in numerous academic publications.
When we find a solution to a conflict by invalidating the assumptions that lead us to
adopt the two conflicting positions, in TOC we call this solution an injection. The idea of a
system that is statistically predictable and designed around a buffered constraint is a powerful
injection and can be very easily explained to anyone with a basic flair for statistics and
physics. However, the real issue became how to manage such a system. Oded’s
fundamental question was: “What is the set of assumptions that we invalidate by bringing
about such an injection?”
He had a point, and a big one. When we published Deming and Goldratt: the Decalogue
we knew that each of the ten steps was scientifically solid and all together the solution was
complete. However, there was still a big question: in what order should these steps be
deployed to be the most effective and powerful? The answer came in the ensuing ten years
of professional practice. As the result of a thorough analysis the assumptions behind the
Deming vs. Goldratt conflict became clear, together with the realization of what it would really
take to invalidate those limiting beliefs.
3. Unleashing potential from the trap of hierarchy
The only way to verify my intuition was to build a solid understanding of the root problem
my intuition was addressing. I had to work backwards and ask myself: if the enterprise viewed
as a system with one chosen constraint is the answer, what is the question?
Put another way, if this was the new way of designing an organization, what was wrong
with the old way?
The reasons why a hierarchical pyramid exists are the assumptions between B and D:
Control assumptions
Increasing our capacity to listen to the customer so we can satisfy the needs of the market
leads us to NOT adopt a hierarchical model:
Here are the pieces of the conflict we have looked at put together:
The two positions are in conflict because we believe the assumptions in the box on the right:
The new organization: rolling out the solution of the ‘choked’ system
In my quest to unify the work of Deming with Goldratt’s Theory of Constraints, I had
developed a strong intuition for a new idea of organization: Deming’s system constrained in
one point. In order to verify the validity of my intuition I had to develop an understanding of
the fundamental, never verbalized, assumptions that would keep the two models in conflict; in
doing so I was able to connect my findings on Deming and Goldratt to the “inherent conflict”
of any organization.
So, Deming’s system constrained in one point is the injection (solution) that not only unifies
the approaches of Deming and Goldratt, it re-defines the ideas of how we:
• Control the system: through the constraint using buffer management and
relentless application of statistical methods
• Measure the performance of the system: throughput accounting
• Design the system for continuous improvement: the ‘choked’ system
If we have the analysis, and we have the solution and we have a precise action plan,
why do people not act? It can be due to their inability to comprehend the validity of the
solution, because what they are required to do is different from what they are accustomed to
doing. This translates immediately into paralysis and these managers need further coaching
in the approach. On a more subtle level, if a manager does not act in accordance with the
agreed plan it can be because they are unwilling to subordinate to the project and unwilling to
give up on local optima. This is much worse than paralysis as it can lead to sabotage,
deliberate or otherwise.
We may say that a person has intelligent emotions, or sechel when they are able to see
and live the interconnections of intuition, understanding and knowledge. How can they
achieve this? Using the Thinking Process Tools fortifies this ability, but even that is not
enough. The mental skills of sechel have to be grounded in a method, and that is the
scientific method embodied in the PDSA (Plan Do Study Act) cycle, which rests on a
thorough understanding of variation. We may think of the theory of variation as the pulsing
heart of the PDSA cycle.
This scientific method enables us to develop sechel by seeing how to connect intuition,
understanding and knowledge in a rigorous and consistent way. With this higher level of
understanding we can create the interdependencies of our system correctly. When we
achieve this we are able to develop a more evolved and intelligent organization. What does
that look like in practice? Creating this new kind of organization means building a network of
projects with a goal.
4. Operating a systemic organization: who, when, why and how
Why do we work?
Work has almost completely changed its shape in the last 40-50 years, and the aim that work is
designed to accomplish should change accordingly. Work can no longer simply be the
organization of many elements to achieve a profit for a tiny minority. People today have different
expectations and a different covenant with their working life. Our work can be fruitful, intelligent,
and lead to success, but people also have to understand that they cannot be totally separate
from what they do. The ultimate goal of work should be to elevate people.
An organization has to build the right kind of interdependencies so people do not
feel imprisoned in their work and there is intrinsic meaning in what they do. This means
neither dependence nor independence, but interdependence. In this way, the individual
can understand that by contributing with intelligence, passion and adherence to the
company goals, they derive more benefit than they would by working alone (being
independent).
How can the company leverage people’s natural desire for elevation? By providing
opportunities to participate in something from which the benefit they derive is greater than
the effort they put in. For an organization to allow this meaningfulness to exist in the
workplace it must start from a vision. It must then create the right interdependencies, after
which it is up to the individual to take part and become one with the organization knowing
that their own personal life will be enhanced. This is precisely what the multi project
environment can offer: people contribute what they are capable of and the system
capitalizes on this. By enabling people to do what they are good at there are immediate
good results that reflect back on the worker in a plurality of positive ways.
The way we live is increasingly shaped by the limited availability of resources and this
fact cannot be separated from a different, indeed radically different way of generating and
distributing wealth. We are not talking about socialism but a more intelligent way of
operating organizations. The way organizations and their work are organized, the way they
interact with each other, and the way they cater for the wellbeing and development of their
members are a foundational part of a major shift in our productive lives.
abstract level this translates into variation and network. This applies not only to production but
also to the entire organization. When we address Quality and Synchronization correctly, and
the two are completely interdependent, we can guarantee that we achieve our goal in a
continuous, predictable and economically viable manner. Let’s look at what we need to do on
a practical basis to allow this to happen.
Quality means understanding that variation affects every human process, understanding
exactly what those processes are, and never ceasing to act to bring and keep our processes
within statistical control. (We deal with this fundamental subject matter at length in
Chapter 12.) In principle, this could be accomplished within a traditional hierarchical structure
with better performance indicators and a culture of teamwork. We would not need the
Decalogue. The real shift in performance comes from the understanding of finite capacity and
the need for resource optimization and synchronization.
A systemic approach to the economics and management of resources focuses on what
can be achieved by combining the resources at hand. The goal of synchronization is to make
the most out of what we have as a system. The cognitive ordeal that we face in embracing
such a model is that the resources do not “belong” to something; they cannot be allocated in
any conventional “functional” way. Achieving a systemic optimum is a very different sport
from achieving a set of functional optima. Everything has to be subordinated to the constraint.
In fact, in the systemic game resources are generally sub-optimized locally in order to
optimize the global systemic result.
How do we operationally reinforce Quality and Synchronization within our system? When we
build our organization as a systemic structure there are four vital components that keep the
structure alive and working. These are:
1. the ‘Playbook’: a detailed map of all our processes and the repository of all the
knowledge needed to operate the interdependencies
2. an Information System that gives us the ability to sustain the flow of information
connected with the Playbook
3. a scheduler: a mechanism to synchronize, hence maximize this flow
4. a Learning Centre: a method to ensure that all the continuous learning necessary to
operate such a fast system is in place
Deployment Flowcharts to map out every process within the company, identifying who does
what and when. Functional roles disappear. Instead, the Playbook details the network of
conversations that must occur (daily, weekly, monthly, etc.) in the system in order to make
these linkages effective.
The Playbook is the nervous system of the organization; it captures all the connections that
make the working of the company possible. The Playbook is not the work of a notary and it is
not carved in stone. It is the living, ever-evolving document that portrays the life of the
organization. It does so at different levels by:
• depicting how all the processes are linked
• describing how these processes must be performed
• specifying which activities these processes entail and who is supposed to perform
them
• illustrating the inputs and outputs of these activities
• recording the expected outcome of these activities
• designing, validating and testing all the improvement activities
and, most importantly, by devising all the statistical studies necessary to gain insight into
the life of the organization.
Quality and Statistical Process Control are not about techniques. They are a worldview
and a mindset in which improvement is continuous, because variation and entropy
never stop. The Playbook is the practical device that embodies and enacts Deming’s PDSA
cycle. It is the offspring of Deming’s vision of a company guided by the all-encompassing
concept of Quality. The Playbook supports Steps Two and Three of the Decalogue and it is
grounded in the idea of process predictability as a prerequisite for knowledge-based
management. It also provides a meaningful mechanism to enhance company communication
because it is based upon open and transparent information and knowledge flow. The
Playbook is the open book where no personal agenda can hide. The writing up and, most
importantly, the enactment of the Playbook, is not the job of a “company function”; it is the
first and most important job of top management.
I once spoke to a highly successful owner of a business software house about the kind of
software that would be required to manage companies systemically. He listened attentively,
and then told me that what I was proposing was absolutely valid, but so simple he would not
be able to justify charging tens of thousands of dollars for it.
My generation has seen the birth and the development of computers and their
exponential growth in terms of importance for virtually every human activity. One of the most
important applications of computers is in the almost endless possibilities that they have to
support human interaction. The power of these machines increases relentlessly and so does
the software that makes them useful. In principle, software should exist only to help humans
live and work better. It is astonishing, instead, how software becomes more and more an
artificial constraint to the development of the organizations. This is not only due to the
principle of greed expressed by the software owner I spoke to. The unsuitability of the
majority of software to add any meaningful value to the work of the organizations is easily
explained by looking at the way companies are structured.
Software specialists develop their applications following inputs; the more these inputs
are wrong, the more useless the software will be, and the more developers will be asked to
produce yet more software. In a function-based organization there is no possibility for software to
generate any companywide value because inputs will be by definition aimed at supporting
local optima. Moreover, in a function-based organization software specialists will be “stored
away” in a box called IT services and will always be considered nothing more than a
necessary evil, just like accounting.
In a systemic organization the purpose of an Information System is clear and relatively
simple. As the word implies, in any company there should be a system that enables timely
access to the creation, storage and retrieval of information. Needless to say, the first step
towards creating such a system is to understand what this information should be for. It is very
simple (too simple for some developers): this information should serve to remove limitations
(constraints) towards a stated goal. Accordingly, the role of an information system should be
to facilitate the functioning of a chosen constraint.
An Information System should certainly mirror the Playbook and support its enactment
but should also facilitate the management of the constraint(s). An Information System should
be made of a database where all the information is stored and connected with a scheduler for
the optimization of the chosen physical constraint(s). With very few exceptions, essentially
very large and very spread out companies, an IS could and should be built “in house” and
should be almost free. An IS should provide the very few vital pieces of information a
company must always monitor, for instance: daily cash in and cash out; on-time payments to
suppliers and from customers; visibility for customers and suppliers on our inventory and
WIP; and facilitation of order entry. Most importantly, an IS should make it possible to
perform, easily and in a timely manner, all the statistical analyses needed to understand
and improve the performances of the company.
The most critical part of an Information System, however, is the ability to synchronize the
work of the organization. A thorough understanding of what finite capacity and
synchronization mean, both conceptually and operationally, must precede any attempt to
design an organizational structure suitable to sustain the Decalogue effort to bring about
effective systemic management.
As we have stated, the systemic approach to the economics and management of
resources focuses on what can be achieved by combining the resources at hand. This is
difficult because we are accustomed to thinking of resources as “belonging” to something, i.e.
a function. In this paradigm resources are allocated to achieve local, functional optima to the
detriment of the goal of the system. Instead, in the systemic approach resources are
generally sub-optimized locally in order to optimize the global systemic result. The goal of
synchronization is to make the most out of what we have “as a system”. Resources are
therefore deployed to maximize the global result (we call it Throughput) NOT what these
resources could achieve if they were to operate in isolation. Accordingly, it is critical to
understand what we are synchronizing. Once the system has been designed and the
constraint of the system has been chosen, there are two levels of synchronization that must
take place at the same time in any organization.
In a production environment, for instance, the scheduler will have to maximize the ability of
the constraint to generate throughput, ensuring that no time is wasted on it AND that the
constraint always works on the right mix. In order to be scheduled, the constraint only needs
to be fed a few variables, namely:
• delivery date (and associated Throughput)
• bill of materials
• routing
• WIP
• inventory and replenishment time
In essence, the scheduler enables the effective coordination that must exist between
sales and replenishment taking into account process and lead times for the manufacturing
and shipping of the products.
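The few variables listed above as inputs for scheduling the constraint can be sketched as a data model; the names and the sequencing rule below are illustrative assumptions, not taken from any specific scheduler:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class JobOrder:
    """The variables the text lists as inputs for scheduling the constraint."""
    delivery_date: date
    throughput: float             # Throughput associated with the order
    bill_of_materials: list[str]
    routing: list[str]            # sequence of operations
    wip: int                      # work in process already released
    replenishment_time_days: int

def schedule_on_constraint(orders: list[JobOrder]) -> list[JobOrder]:
    """A naive exploitation rule: sequence orders on the constraint by
    delivery date, breaking ties in favour of higher Throughput."""
    return sorted(orders, key=lambda o: (o.delivery_date, -o.throughput))

orders = [
    JobOrder(date(2024, 5, 10), 900.0, ["frame", "arm"], ["weld", "assemble"], 2, 7),
    JobOrder(date(2024, 5, 3), 500.0, ["base"], ["assemble"], 1, 5),
    JobOrder(date(2024, 5, 3), 800.0, ["frame"], ["assemble"], 0, 5),
]
for o in schedule_on_constraint(orders):
    print(o.delivery_date, o.throughput)
```

A real scheduler would of course weigh buffer status and replenishment times as well; the point here is only that the constraint needs this handful of variables, not a plant-wide optimization.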
Let’s say that we want to synchronize the work of a company producing industrial robots and
automation tools. Such a company can be seen as an assembly operation and we have to
decide where we want the physical constraint to be. Let’s say, for the sake of simplicity, that
we choose as constraint the part of the process flow where all the different components
making up the final product are assembled. The Focusing Steps would then tell us that:
A. We release the different components at the pace at which we can physically
assemble them, neither faster nor slower
B. We ensure that ALL the components making up the customer order we want to
process on that date get to the assembly line ONE buffer time ahead
C. Job order by job order, we monitor statistically what percentage of the buffer has
been eaten into or gained
D. We assess the predictability of this process (that delivers the pieces to the
assembly line) and then:
i. If the process is in control and the upper limit is within the buffer, we carry
on
ii. If the process is in control but the upper limit is outside the buffer, we re-
size the buffer
iii. If the process is out of control and all the data points are within the buffer,
we search for the reasons that send the process out of control and we fix
them
iv. If the system is out of control and some of the data points are outside the
buffer, we stop the line and fix the problem
E. If we perform these steps and the market demand does not exceed the capacity of
the chosen constraint, presumably measured in units of assembled product per
time period, then we probably ship everything on time.
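The decision logic of steps C and D can be sketched in code. A minimal sketch, assuming the percentage of buffer consumed is tracked job order by job order on an XmR-style process behaviour chart (the 2.66 factor is the standard moving-range constant; everything else is illustrative, not the book's algorithm):

```python
def buffer_decision(consumption_pct):
    """Apply steps D.i-D.iv to a series of buffer-consumption readings.

    consumption_pct: % of buffer eaten into for each completed job order
    (values above 100 mean the job overran the buffer)."""
    mean = sum(consumption_pct) / len(consumption_pct)
    moving_ranges = [abs(a - b) for a, b in zip(consumption_pct[1:], consumption_pct)]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    ucl = mean + 2.66 * avg_mr  # upper natural process limit (XmR chart)
    lcl = mean - 2.66 * avg_mr
    in_control = all(lcl <= x <= ucl for x in consumption_pct)
    within_buffer = all(x <= 100 for x in consumption_pct)
    if in_control and ucl <= 100:
        return "carry on"                            # D.i
    if in_control:
        return "re-size the buffer"                  # D.ii  (upper limit outside buffer)
    if within_buffer:
        return "find and remove the special causes"  # D.iii (out of control, points inside)
    return "stop the line and fix the problem"       # D.iv  (out of control, points outside)

print(buffer_decision([40, 45, 50, 42, 48]))  # carry on
```

A stable process that occasionally eats most of the buffer triggers a re-sizing (D.ii), while a single wild point beyond both the limits and the buffer stops the line (D.iv).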
However, in order for this chosen constraint to maximize the Throughput of the company
many of the processes making up the system (virtually all) must be synchronized:
• Engineering must issue flawless drawings
• Replenishment must deliver the subcomponents to the warehouse in time
• Accounting and administration must pay and collect promptly
• Marketing and sales must identify suitable customers and keep the line in “pull”
with the highest Throughput mix, etc.
There can be no ‘heroic’ attempts here by one function to try and outshine another. Any
attempt to oversell, squeeze suppliers on price, delay collections and payments, or reduce the thoroughness of drawings in the name of cost optimization will result in sub-optimization
of the system’s performance. None of these activities can be done in isolation; any attempt by
functions to outperform each other and claim more “functional” relevance in the system will
not produce one more unit shipped. Accordingly, we need an algorithm and an organizational structure that help coordinate all the activities that maximize the Throughput we can achieve with the designated constraint.
and industrialists alike on what it takes to use knowledge to achieve results. Critical Chain is
the offspring of a vision of the world and, too often, the elucidation of this vision has been
insufficient compared with the wealth of practical details that Dr. Goldratt’s applications often
attract from pundits all over the world. Like all of Goldratt’s revolutionary contributions to
management, Critical Chain has achieved very partial results in industry and none at
corporate level.
Looking at Critical Chain as a technique for managing projects means essentially
missing the point. The reason why, after thirteen years of relentless efforts to disseminate
Critical Chain, tools like Microsoft Project still dominate the way projects are “managed” is
that any attempt to use Critical Chain without embracing a purely systemic view of the
organization is doomed to failure.
Critical Chain represents the embodiment of a vision of the organization based on pace
of flow, people’s involvement and great emphasis on quality. Quality, involvement and flow
are the basic philosophical pillars of the systemic organization so we shall investigate how
Critical Chain can play a much greater role in the building of an intrinsically systemic
organization (see appendix on Information System).
What do we need to make all this work? We need a new relationship with what we do.
Companies need to establish a very precise, upfront and clearly understood covenant with
their people; top management must show a total commitment to this idea of organization.
This entails, along with a career path, the addressing of very legitimate issues like
compensation, authorities and responsibilities, status, etc.
Once this commitment from top management has been achieved and the above-
mentioned issues addressed, how do we practically go about the “reinvention of man”
foreshadowed by this new covenant with labour? The answer is a Centre where everyone in
the company can learn how to be and act in this new organization.
Let’s be explicit. The time necessary for a competent and willing individual to learn what
he needs to know to operate in a project-based structure is measured in (many) months;
however, the time it takes to transform this learning into a truly metabolized behavioural
change could be years. Obviously, while it would be desirable to shorten the former, we must
absolutely find a way to accelerate the latter. A Centre for Learning is not a training centre, it
is not a management school, it is not R&D, it is not an Academy and it does not look like the
couch of a psychotherapist.
A Centre for Learning is a (possibly physical) space where managers are first taught
and then coached and mentored on how to change their way of conducting business. It is
a space where, under the guidance of highly skilled and knowledgeable professionals, the
intuition, understanding and knowledge of management needed to develop and enact
business strategies are leveraged. A Centre for Learning is where managers go to
develop, test, validate and refine plans and activities aimed at propelling the business of
the Company.
The success of a Centre for Learning is measured, like anything else in the company, by the increase in Throughput and cash. The Centre for Learning is where managers learn how to see solutions, put them to work, and where they give and get feedback. It is where all the relevant
business decisions are developed, managers are nurtured and the future of the company
planned. The Centre for Learning is not just for the Company; it is where customers and
suppliers (and even competitors in some cases) come to share in the knowledge needed to
operate like a chain; it is where the concept of partnership is embodied in the reality of the
relationship and where bold innovation finds its cradle. It is where, through cooperation and
not competition, we have the possibility to understand the power of a network.
What I describe in this chapter is neither a fantasy nor is it particularly complicated to
create. What it does require, however, is the sharing of a paradigm and a sense of urgency.
The paradigm underpinning this transformation, from the prevailing management style into
one of optimization, is sustainability; and the urgency stems from the understanding that the world is on the very eve of a tectonic shift, and this shift calls for a completely different style of
leadership. That leadership must be inspired and informed by a higher form of intelligence. It
is an intelligence able to leverage the interdependence among three faculties of the intellect:
intuition, understanding and knowledge. It is an intelligence that connects cause and effect
and governs decisions always in the awareness of their wider, systemic implications.
PART TWO
Understanding:
Analysis/Development
In Part Two we address the conscious and connected organization from the perspective of
Understanding, i.e. Analysis/Development. Focusing on Industry, we consider the
organization as part of a chain of value based on speed and quality. We look at the shift in
consciousness and use of the mind necessary to foster a new kind of knowledge for wealth
creation. This new knowledge calls for a new Economics, as proposed by Deming, and a
consistent model of Leadership.
5.
TRANSFORMING INDUSTRY
Everyone doing his best is not the answer. It is first necessary that people know what
to do. Drastic changes are required. The first step in the transformation is to learn how
to change…Long-term commitment to new learning and new philosophy is required of
any management that seeks transformation. The timid and the fainthearted, and
people that expect quick results, are doomed to disappointment.
W. Edwards Deming
The focus of Part Two of this book is Industry and how to transform it through a systemic
approach. Before we look at this in a more hands-on light (I purposefully do not say ‘practical’
because to paraphrase Einstein, nothing is more practical than a good theory), we need to
step back and examine the cultural and philosophical forces that can inhibit such a
transformation, and identify the pattern that will promote and sustain it.
the results of the sheer inadequacy of the relevant institutions to cope with the cognition
process that is associated with the development of new knowledge.
The transformation that the Decalogue advocates cannot be undertaken without
understanding and embracing a basic set of values, and it is mandatory to realize that the
achievement of economic results must be connected with a precise ecology of the mind. In
this way, the legitimate pursuit of personal monetary wealth will not be disjointed from the
quest for purposefulness that should be the prime motivator for any human endeavour.
challenges and yearns for a higher level of spirituality and meaningfulness. In the very same
way in which we need to exert “bodily” control over our environment we also need to project a
“soulful” vision of ourselves into the future.
foreshadow in our mind a “change”, hence trigger a conscious decision point, only when we feel that by changing we alter irreversibly the current state of reality AND we are in some way uncomfortable in the current state. In other words, we truly feel the need to take a
“life changing” decision only when the current state of our reality shows some or many
elements of discomfort. On the other hand, regardless of our current situation, our innate
drive towards elevation and improvement will push us towards wanting to take life-
changing decisions.
In this process, “control” takes the shape of “security” and this need leads to not making
the change; “vision”, on the other hand, will be translated as “satisfaction” and will gear our
decision towards making that change.
The decision process takes us then to a very precise and fundamental dilemma: Change
vs. Do not change. This dilemma is originated by the two core needs of “control” and “vision”
that in the decision process can be verbalized as “security” and “satisfaction”.
Understanding provides the ideal platform for change but it still lives in the realm of
planning. What moves us ahead is the action that comes from the intimate knowledge that we
have of the subject matter we intend to act upon. This knowledge is something deeper than
expertise. It is that intimacy that comes with awareness; it is that part of the self that is
actualized in what we know; it is our ultimate being projected in what we do; it is our ability to
do who we are.
Intuition, understanding and knowledge are three very different faculties of our intellect
that must all be activated to accomplish any behavioural transformation. Only if we become
conversant with the mechanisms that activate and sustain these faculties can we capitalize
on that form of intelligence peculiar to humans that in Hebrew is called sechel, the ability to
acquire and deploy knowledge coherently.
Remember what is at stake: the transformation of the way Industry pursues its goals.
I claim that this transformation is only possible if we improve the quality of our intellect, if we
learn how to use our mind in a much more powerful way. Only if we accomplish this feat will
we be able to capitalize on the knowledge that is available and that the world continues to
develop at an unprecedented rate.
Improving these abilities is neither simple nor is it an activity for the fainthearted.
Learning, actually “re-learning”, how to think is a profoundly disturbing activity. It may be
thrilling for a very few, but it is certainly disconcerting for the majority.
Why is it so difficult?
Re-learning to think
The Theory of Constraints (TOC), the largely untapped body of knowledge developed
originally by Dr. Goldratt, is deceptively simple. The monumental effort that the Goldratt Institutes and Schools have made in the last 20 years to make this knowledge available has produced, for the time being, disappointing results compared with its potential. At least
on this, Dr. Goldratt would agree with me.
TOC, by its very nature, goes too fast and too deep in unveiling the myriad of flawed
assumptions we make about the world and how we act in it. When we come to management
and economic decision-making, the gap between what is “intelligent” and what people do every day is frightening.
Deciding to learn a better way to think is intimately connected with the decision to act
accordingly. Thinking and acting in a different, more powerful and focused way must also be
accompanied by a different way of speaking. Words are powerful and create meaning; words,
and the letters they are made of, are the messengers of information needed to communicate:
they create meaning and, with it, reality. Humans love to speak and they are very connected
with their words because it is through their words that they acquire and develop
consciousness.
Changing behaviour is difficult because the new thinking and speaking that goes with
new acting de facto portrays an existential transformation. In order to tap the sechel that
we all possess we have to decide to transcend ourselves and, up to a point, reinvent
ourselves.
In other words, it is emotionally burdensome to accept this truth: we are not at the centre
of the world but simply a part of a complex chain and all our flawed assumptions are simply
the result of our very limited ability to perceive this complexity.
When confronted with the step-by-step guidance to change that TOC proposes, we find
ourselves, in virtually no time, coping with the monumentality of the transformational change
we must undergo in order to achieve what we thought we really wanted. And it is precisely at
that point that we unplug; we reconsider our reality and conclude that it’s not too bad after all.
Humans, with few exceptions, are simply not geared for change.
So what shall we do? Give it up? Shall we abandon any hope for mankind to be
fundamentally better and more intelligent? No, of course not.
6.
INTELLIGENT INDUSTRY:
QUALITY, SPEED, NETWORK
Stamp out the fire and get nowhere. Stamping out fires puts us back to where we
were in the first place. Taking action on the basis of results without theory of
knowledge, without theory of variation, without knowledge about a system. Anything
goes wrong, do something about it, overreacting; acting without knowledge, the effect
is to make things worse.
W. Edwards Deming
The higher the number of individuals to be coordinated, the bigger the effort to control the
way they work. Accordingly, a fundamental need of any (growing) organization or industry is to exert effective control over how this coordination produces results.
By the same token, industries are built for a purpose and, invariably, the fulfilment of
their purpose is linked to the way they deliver to their customers. In other words, any
successful industry must design the way it functions so as to address its customers’ needs better and better; it must be structured to listen very clearly to “the voice of the customer”.
This does not mean having a team of people receiving calls and handling customer
complaints or suggestions. That would be purely palliative. An industry can cater for
customers’ needs effectively ONLY if the interdependencies that lead to their satisfaction are
well designed and operated.
The lines that interconnect departments and functions are clearly not vertical. So, a
suitable organizational structure aimed at listening to the customer cannot be
hierarchical/functional.
Let’s summarize:
A successful industry must cater for its need for control and this leads almost invariably
to a hierarchical structure because accountability in a hierarchy is easily defined. On the other
hand, any successful industry must listen carefully to the voice of the customer and this
prompts us to believe that hierarchy may not be the right solution because it fails to
acknowledge the very interdependent nature of the work of an industry.
There are three categories of assumptions that we make and they reflect very ingrained
mental models that we have about:
1) the idea of control
2) how we measure
3) how an industry can be modelled
Challenging these mental models will pave the way to a powerful solution. The
monumental (and consistently under-heeded) work of Dr. W. Edwards Deming has already
provided the conceptual background to develop solutions to this conflict.
adherence to) a common goal. In other words, when people start collaborating for a common
goal the interdependencies necessary to achieve it can be very easily defined. When we pay
attention to these interdependencies and interrelations we can have a clear understanding of
how the system should operate. To be effective, the design of this system cannot just be
about the people in the organization. It must include customers and suppliers and the way in
which all the components of the system are going to benefit from the achievement of the goal.
Let’s be as clear as possible on this issue.
Our suppliers (and their suppliers), our customers (and their customers) and we,
(including our competitors) are all links in chains of value. This value is realized when an end
user benefits from the product/service these chains deliver. Unless the end user pays for and
enjoys the product the chains have delivered, in time nobody truly gains. On the other hand, if
chains get better and better at delivering better and better products that end-users enjoy, then
the market grows and so does everybody’s wealth.
The “Market” is far bigger than we think, and its development is artificially limited by organizations’ inadequacy to innovate and to bring the potential of innovation to fruition. We
already have the mental technology to fully exploit the potential of any organization and
naturally expand the markets we are in, rather than fighting over the share of an existing
market. It is not about “my piece of the pie is bigger than yours”; it is about “let’s all work to make the pie bigger”. We will look at this in detail in Part Three.
Speed of Throughput
A network of activities
An industry viewed as a system is a network of recurring and non-recurring activities.
These activities need to be staffed with people with suitable competencies. It will be very
cumbersome to deploy these competencies in a timely and systemic way if we organize them functionally. Why? Because any attempt to use a resource allocated to “function A” to perform
its competence beyond the boundaries of its function will immediately result in a conflict
between the head of the function and whoever has been given the task to deploy that
competence, usually a project manager. Unfortunately, virtually every minimally complex
activity needed by any company to achieve any goal is cross-functional in nature, hence we
are stuck in a real dilemma on how to best utilize the resources at hand. The more we realize
how paralyzing this dilemma is, the closer we come to understanding the paradigm shift
needed in Industry.
The clash between a functional organization and achieving cross-functional goals is,
quite plainly and simply, what keeps Industry stuck. This dilemma is the chronic conflict that
keeps the science of management from evolving into the real engine of economic growth.
Addressing the multilayered issue of how to optimize finite resources to maximize Throughput
is critical if we are to elevate the role of Industry into the generation of wealth.
Two key elements enable the optimal management of finite resources: predictability in
the execution of activities and their synchronization. In other words: IF individual activities are
performed with a high degree of reliability, i.e. with quality, AND these activities are
orchestrated with a powerful algorithm that allows the best possible synchronization towards
the stated goal, THEN we have an infrastructure that can maximize the Throughput that the
organizational system can generate.
As we have stated, the “inherent conflict” of any organization applies to Industry too:
Hierarchical/functional company vs. non-hierarchical/functional company. There are two
needs to be simultaneously satisfied: how to control the system and how to increase its ability
to reach the customer. In other words: how can we control a system that evolves and grows
as a result of a better and better understanding of the customer?
In 1997 Dr. Goldratt published Critical Chain, a book written in the form of a novel.
The theme of the book is that we can maximize the speed of new product development by
adopting a particular approach to Project Management (PM). The implications of that
approach are truly far reaching and pave the way for a complete and yet unexplored solution
to the inherent conflict. I believe it is worth talking about it here briefly and much more
comprehensively in a separate chapter in Part Three (see Chapter 14).
A network of projects
Recurrent and non-recurrent activities in an industry can be seen as “projects”.
Whether we seek to improve the speed at which we manufacture products, install new
equipment, organize shipments or file quarterly closing we need the coordinated efforts of
many different competencies. Deploying these competencies in a logical sequence is
relatively easy. However, breaking assumptions about the way performance should be controlled and measured seems to be a true cognitive ordeal. The measurement of performance seems to be inextricably connected with a local, i.e. functional, indicator, while
we all know that what matters is the global bottom line of the company. How do we come
out of this seemingly irreconcilable conflict? We do it by asking ourselves what company
functions are for, and uncovering the obvious truth that functions should house
competencies, not power.
Engineers, accountants, scientists, subject matter experts, should not be considered
members of a “company function”. Rather, they should be seen as valuable competencies
that can be deployed for the goal of the whole company. These resources, ALL the
resources, should be available for whatever “project” the company needs to accomplish.
What I am saying is that any company should be seen as a network of projects with the
global goal of maximizing the Throughput of the company. The Critical Chain algorithm that
Dr. Goldratt developed can be used to maximize the use of the finite resources of a company.
As a matter of fact, this algorithm can be used to redesign the way any company works.
Critical Chain becomes then much more than simply an algorithm to accelerate project
completion; it is the vehicle to integrate, control and deploy the resources of the organization.
Instead of company functions, there should be networks of projects; instead of heads of
functions, there should be managers of increasingly complex projects that draw their
Our customers (and suppliers) will be sensitive to our offer of products and services if:
a) they add a measurable benefit
b) they clearly remove a current limitation
Benefits and limitations are related to the goal that our customer (or supplier) seeks to
achieve. Industry should be guided by this simple principle: how can I help my customers
(and suppliers) to achieve their goals? This is the job of marketing; understanding how to help
customers (and suppliers) to achieve their goals.
Let’s make this broader. Industries do not exist in a vacuum; they play their role in the
very intricate networks of value creation for customers. Industries have the role of channelling
value through these networks and can only do that effectively if they understand clearly how
they can add that value, and if they do it ethically and for long term, win-win purposes.
Industries exist in the market place, which is almost unlimited; the legal boundaries to
their activities (legal entities) are practical and necessary limitations whose only real goal is to
make it possible to manage them. By creating a limiting boundary managers do not feel they
are losing control. Unfortunately, these boundaries, as necessary as they can be, only exist
on paper. Interdependencies and interrelations are so strong and continuously created that
thinking of a company in terms of its legal boundaries is a recipe for disaster, and so is the idea of marketing by looking only at customers’ current requests.
Intelligent Marketing
What should Marketing really be doing? Marketing is that aspect of Quality that aims at understanding, in its entirety, the business environment in which the company exists. Marketing
certainly has the job of designing the best possible offer the company can deliver (and we will
delve further into this aspect of Intelligent Marketing in Part Three) but its goal must be much
more comprehensive. The role of Marketing is to understand “markets” and provide a clear
cause-effect analysis of the most suitable way for a company to fit into those markets.
Marketing leverages a “market driven” infrastructure to provide the most suitable positioning
of the company in the networks of which the company is part. It does so by:
• equipping the sales-force with the appropriate intelligence to facilitate sales
• creating an open channel of communication with customers and suppliers via all
the post-sales service related activities
• actively engaging the supporting infrastructure in customer and supplier related
improvement initiatives
• relentlessly communicating findings connected with increasing the possibilities of
the company
• providing orderly and thoughtful insights into the often chaotic evolution of the markets
Intelligent Marketing calls for intelligent salespeople. Why do people go into sales? Why
would someone want to get up in the morning and travel, be away from home most of the
time, eat out every day, often not in nice places, face rejections, and argue with strangers as well as with their own production colleagues? Who is Willie Loman? Salespeople live a mystique,
which is hard to understand, hence to tackle rationally. Empirically, after many years, all I can
say is that salespeople like the challenge, the freedom and the discretionary power to take
decisions. Salespeople like to be heroes. Unlike financial people who are often motivated by
greed (and the invariable blindness that comes with it), salespeople are motivated by the
challenge to win customers’ hearts. A real salesman sells himself, and he loves it.
Salesmen like to feel they are breadwinners; they do not contribute to the success of the company like everyone else: they are special. They follow their instinct, not procedures.
Unfortunately, salespeople can singlehandedly jeopardize any systemic endeavour unless we
integrate them organically in the way the company operates. We shall return to this in Part
Three in Chapter 13 dedicated to the External Constraint.
Let’s summarize what we’ve said so far:
An industry poised for long-term, ever-growing success must be based on a solid, quality
driven infrastructure, and a critical part of that quality is related to how much and how well
customer needs are understood and continuously satisfied. Marketing is part of the Quality
activities. Such a company replaces the conventional functional design and modus operandi
with an organizational design inspired by resource optimization that leverages competencies
for the completion of resource-contention-free, finite-capacity-based networks of projects.
7.
CREATING CONSCIOUS INDUSTRY,
AND INDUSTRY WITH A CONSCIENCE
come to terms with the fact that reality is far more complex than we would like it to be.
A blatant example of our refusal to recognize this is the sheer inability of the financial institutions to cope with probability models that are not based on Gauss-like assumptions.
In spite of the work of Benoit Mandelbrot decades ago showing the inadequacy of these
assumptions, they continue to form the backbone of the way people interpret and interact with
financial markets. The damage created by this inability to embrace more appropriate models
is incalculable. On the positive side, today very sophisticated computational models allow us
to penetrate complexities by connecting simple structures. This allows us an increasingly
profound understanding of complexity. Nevertheless, we can see how completely disconnected mainstream thought is from this awareness when, today, undergraduate students are still taught the ludicrous concept of the “invisible hand” of the markets and the frankly hilarious
models that sustain the “supply and demand” approach to economics. In other words, we
already have the knowledge and the tools to do so much better than this, but we continue to
lag behind that potential.
I believe, but cannot prove, that if we want to close the gap between the knowledge
available and what we are willing to use, we have to tap into a different use of the mind. We
have to learn how to see change as not simply something to be feared, but a natural, intrinsic
part of our life. When we learn how to use our intuition and intellect to implement consistent
action, change is not a threat and a hazard; it is a continuous source of new opportunities.
way interdependencies are created and managed. In other words, organizations have a life of
their own that is generated by the somewhat unknowable combination of the individuals in
them and the way they interact with each other. It is this complexity that calls for a new,
stronger and deeper insight.
Goldratt talks about the three main phases of change and these are:
• what to change
• what to change to
• how to make the change happen
Change can be better understood and managed if we can link it to one of the intellectual
faculties of intuition, understanding and knowledge that are responsible for effectively
enacting the change. In other words, the task at hand is:
1) identify the phases of change
2) link them to the faculty of the intellect responsible for its enactment
3) develop an adequate mechanism to support the appropriate faculty
4) connect the phases as one process
5) ensure a metric is in place for the deployment of the change
Within the Decalogue approach we add a further stage that ensures the scientific validity and
robustness of our analysis and actions:
6) adopt the scientific approach, PDSA, for the management of this change
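For illustration only, the six tasks above can be sketched as a small data structure. The phase-to-tool pairing follows this chapter (the CRT supports intuition, the FRT and NBR support understanding); the tool supporting the third phase is introduced later in the book, so it is left open here.

```python
# Illustrative only: the phases of change, the intellectual faculty that
# enacts each, and the Thinking Process tool that supports it.
CHANGE_PROCESS = [
    {"phase": "what to change", "faculty": "intuition",
     "tool": "Current Reality Tree (CRT)"},
    {"phase": "what to change to", "faculty": "understanding",
     "tool": "Future Reality Tree (FRT) + Negative Branch Reservation (NBR)"},
    {"phase": "how to make the change happen", "faculty": "knowledge",
     "tool": None},  # step-by-step execution tools, covered in Part Three
]

def pdsa(phase):
    """Decalogue step 6: wrap any phase in the Plan-Do-Study-Act cycle."""
    return [f"{stage}: {phase['phase']}" for stage in ("Plan", "Do", "Study", "Act")]

print(pdsa(CHANGE_PROCESS[0])[0])  # Plan: what to change
```

Treating the mapping as data makes the point of task 4 concrete: the three phases are one connected process, each deployed through the same PDSA loop.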
As we have already stated, there are three distinct phases in any transformational/change
process. The first one is when we have “intuition” of the current state of reality that needs
change (what to change). This intuition is fuzzy and the blur is originated by the many emotions
triggered by effects in the life of the organization. In TOC we call these “Undesirable Effects”
(UDEs); they are the light bulbs going on and off to warn us that a change is needed. We can
capture the intuition stemming from these effects with a diagram that clearly displays the
cause-effect relationships that unequivocally lead us to a clear picture of the present state of
reality: this support is provided by the TOC diagram called the “Current Reality Tree” (CRT).
Current Reality Trees leverage some categories of speech and bring them together to
form a clear-cut, very powerful image of the reality we are trying to change, in this way
“anchoring” the intuition to the fertile ground of precise verbalization. We will illustrate these
categories along with guidelines for building a Current Reality Tree in Chapter 10 of Part
Three.
A clear verbalization of our intuition helps us see our current reality in (almost) all its
facets and makes us aware of why this reality exists. Now that our intuition is strong and
clearly articulated, we can decide that we want to change this reality. As we will see when we
learn the mechanics, the Current Reality Tree is built by verbalizing all our mental models, i.e.
the profound images of the world that we have built for ourselves through the years of our life.
We quickly (sometimes too quickly for comfort) realize that our “reality” is nothing but the
result of these mental models, i.e. the assumptions we make about our reality. These
assumptions are the limiting beliefs that generate the undesirable effects we experience, i.e.
the Current Reality we want to change.
The second step is to ensure that this picture is a) complete, and b) highlights all the
possible pitfalls. This is achieved through understanding (analysis/development). This is when
we need the ability to see ahead to both the goal and what could prevent us from achieving it;
this is when we leave behind the comfortable habits of “hunch management” to embrace an
epistemological view of the organization. Not the forceful, chaotic, nonsensical, life-consuming, relationship-abrasive common practice of fire-fighting, but an orderly, methodical,
reflective, all-encompassing, long-term oriented, purposeful vision of the future.
Understanding is the human ability to imagine and plan beyond the contingencies of the
present and towards a meaningful future. The Future Reality Tree (FRT) and the Negative
Branch Reservation (NBR) are the Thinking Process tools that support and enhance our
understanding.
If the Future Reality we yearn for has been delineated, the potential pitfalls identified and
a precise strategy crafted, then all we need is a step-by-step procedure to walk into the
future. This procedure has two aspects; on the one hand it is a protocol with detailed
instructions, pretty much like the one NASA used to bring the crippled Apollo 13 safely back from its aborted moon mission. On a deeper level, what we need to operate this procedure is a new kind of
knowledge.
organizational transformation, Industry has to adopt a rigorous scientific approach and close
the gap with financial institutions by confronting them on the validity of their measurements.
The purpose of Industry is not to serve capital markets but the other way round. Capital
serves to enable Industry to produce. Industry can only be served by financial markets when
the ‘analysts’ of those markets start to ask Industry relevant questions, ones that truly unveil
how the Industry is doing, not questions that serve to produce fictitious numbers on which
delusional financial products can be constructed.
I hope the path to the transformation of Industry is clear: we need to improve our ability
to draw from our intellect and connect together its elements using the powerful catalysts of
our ability to think. Industry also needs to embrace wholeheartedly and without hesitation a scientific approach underpinned by the PDSA cycle and its statistical nature. Industry must
also close the gap that separates statistically based performance measures from the
deterministic and linear thinking of the financial world.
An Industry with a high level of consciousness as we have described has, by definition, a
high level of interconnection with those who work within its system, with those who supply it
and those who are its end-users. The ongoing task is to constantly satisfy better the needs for
control and vision of Industry, of those who work within it, and of those who interact with it up
and downstream. The inevitable outcome of this level of consciousness and connection is a
more ethical way of operating. There is then no room for hedging and undercutting, or for polluting the environment. The inevitable outcome is Industry with a
conscience.
Our ability to create and follow this kind of organization is directly linked, as we examined
in Chapter 5, with our ability to increasingly do who we are. This requires us to accept a level
of personal freedom to which few are accustomed and even fewer feel comfortable with.
What do I mean by freedom? It does not mean laissez-faire, it does not mean just doing
whatever you want. It means accepting the responsibility of understanding that our only true
limits are ourselves and what we are able to perceive for ourselves. Our limits are mental models, and our mental models dictate the boundaries of our actions. As we begin to
challenge these models/assumptions, we begin to taste the vivifying experience of exploring
our true potential. Fulfilling that potential is not a question of luck but one of choice.
The prevailing management style (or lack of it) has taken the separation between
knowledge and consciousness to an extreme and impaired people’s ability to choose
intelligence over stupidity. This ability needs to be given back and enhanced manifold.
Industry must rebuild itself on the debris caused by this disconnection by leveraging this
intrinsic unity between consciousness and knowledge, by re-learning how to connect learning
and choice, by retooling its people’s ability to manage intelligently. A new covenant with
Industry needs a new covenant with its people, one where each individual has the opportunity
to better themselves and in so doing better their industry. We have the intuition, we have the
understanding and we have the tools to make it happen. However, in order for the
transformation to truly take hold we need to create a new kind of leadership and that
leadership must be the expression of a new Economics.
8. New Leadership, New Economics
The views, experience, analysis and practice expressed in this book are the fruit of a
systems-based paradigm of organizations and management based on knowledge and
transformation. By transformation we mean the relentless effort of organizations to transform
themselves into and consistently perform as the most effective vessels to fulfil their role in the
world with the highest quality and speed. The umbrella term we use to cover all the various
aspects of theory and practice that allow this transformation to take place is Intelligent
Management. In order to have Intelligent Management, we need a different kind of manager
and leader.
Moses was the leader of the Jewish people and so was Mohamed for the Muslims. In
much more recent times everyone would agree that Nelson Mandela is a leader, and so was
Mahatma Gandhi. In the more mundane world of sport it is undeniable that Michael Jordan has been a leader and so was Pelé. Some organizations “lead” in their field and so do some
editorial ventures. In other words, virtually every human endeavour has someone that stands
out from the crowd.
If we were to draw a histogram representing the frequency with which words are quoted in management lingo, the word “leadership” would probably have the tallest bar. This stands in stark contrast to the relatively few names that come to mind when we think of “leaders”, and it is perplexing to
witness how often the noun “leader” is attributed to utterly debatable characters.
We cry out for leadership when it is missing; I will try to clarify what I mean by leadership within the context of Intelligent Management.
Knowledge is mandatory but it is not enough. A leader must be able to get their message
across; they must be able to address their people in a way that touches their brains but also
their hearts. A leader must also have fortitude; they have to “walk the talk” and be an
example. A leader has to have the strength that it takes to accomplish something that
probably only they have the vision of, and they must be able to clearly communicate that
vision. A leader must have an action plan, a step-by-step guidance that people can
understand and execute.
These are necessary attributes for a leader, but they are not sufficient.
Involvement of people is what enables Quality and Flow. The foundation for any
involvement is the teamwork that underpins the functioning of organizational systems. We
can build a system by asking everyone to subordinate to the goal of a system, to give up on
something personally for the greater good. It often works but invariably leads to people
working mechanically; in time, it takes away some pride in workmanship, it replaces
innovation with compliance. A leader understands that a system can develop and continually
improve its results towards the stated goal if people in the system see in what they do for the
system an enhancement of their personal life. A leader understands that there is no conflict
between self-fulfilment and subordination to the goal because a correctly designed system
allows an individual to gain more by subordinating to the goal than they would achieve
independently. A leader understands that what people do and what people are must be one.
A leader does not exist in a vacuum; they can only exist if surrounded by people that
acknowledge them. They select their people and those people recognize the leader as such.
A leader must evolve and so must their leadership. While the core tenets of their leadership
can be everlasting they, personally, may not be. A leader must be ready to pass on the baton
at the right time. A leader is a leader when they lead, but also when they stop doing so.
A new economics
Best efforts and hard work, not guided by new knowledge, they only dig deeper the pit
we are in. The aim of this book is to provide new knowledge.
The new knowledge for creating systems-based organizations provided by Deming calls
for a new economics. Economics belongs to the realm of the so-called “social sciences”. It
aims at investigating the production, distribution and consumption of goods and services.
Economics also concerns itself with the study of economies and how the players, the decision
makers, act to guide economic choices.
At the most fundamental level, economics should pertain to the understanding of how to
deal with the resources at hand and optimize their use for a stated goal. In this sense,
economics is also a “political science” because the use of these resources should be guided
by political decisions. (Indeed, politics should be guided by a philosophical vision, an ethical
one, but this is another story).
Economics mimics science by developing models, economic models that should explain
the economic outcome of certain decisions and these models are, or should be, inspired by a
vision of the world. In recent decades a Nobel Memorial Prize in Economic Sciences has been instituted, awarded by the Royal Swedish Academy of Sciences, and several economists have received it for their models.
In summary: an economist, hopefully inspired by a vision of the world, develops models
that should guide the economies of countries to an optimal utilization of their resources for a
stated goal. Governments, hopefully guided by a vision, embrace economic models based on
their adherence to their vision.
Any model is based on a set of assumptions; it must be. When these assumptions are not verified and validated, the model is bound to fail in providing the results it was designed
for. Of course, the political circumstances of any democratic country change very frequently and the ability to translate models into effective policy making is always weaker than one would wish; moreover, an increasingly interconnected world calls for increasingly complex
models with assumptions that are harder and harder to validate. Indeed, governments are
pressed to take actions and these actions have to accommodate political agendas not
necessarily driven by the vision that inspired the economic model. By the way, when time
(and perceived risk/reward) comes into the picture and we slide into the field of “finance”, we
witness the full potential of the prevailing economic paradigms as reflected in the models that,
tragically, still today purport to create value.
If we continue the analysis of this chain of causes and effects we can understand why
the world is experiencing the current economic situation. I would like to prompt the reader to
broaden their view on what economics should be.
Flawed models
Mainstream economic and financial models, the ones that currently rule the markets and
determine value, have shifted their focus over the years from what is best for the society they
should try to model to what is mathematically possible to achieve for the benefit of a few.
I use the word “mathematically” with a sense of grief. Mathematics is a very serious business;
it is thanks to mathematics that we understand the physical world and it is thanks to its
rigorousness that we are confident that the scientific method can provide acceptable validity.
Sadly, most economists and financiers at best can be considered “hands on” mathematical
labourers and their models are far from being the offspring of any scientific method.
Current mainstream economic and financial models are flawed for two sets of reasons:
1. They are often divorced from realistic assumptions about the situation they seek to
model AND from the managerial actions that should ensure the predicted outcome. In
other words, the modelling happens in the vacuum of second tier “mathematical”
speculations with flawed assumptions about what is possible or impossible to achieve
managerially.
2. Mainstream economic and financial models pursue an idea of value that is divorced
from any concept of the general wealth and wellbeing of individuals and society, with
notable exceptions such as Amartya Sen. Prevailing models are based on a
systematically disproven “rational” behaviour that is driven by the lust for individual
profit. These models are rooted in the paradigm that if somebody wins somebody
else has to lose. They call it “competition” and a gigantic and ineffective apparatus
has been created to “ensure” fair competition.
wealth. As Senator Robert F. Kennedy said more than once, “The GDP cannot be considered
a measure for the standard of our lives”.
The starting point is to define what the role of the government should be and which
policies an economic model should mirror. Any government should first and foremost protect the freedom of its citizens: freedom from any risk of slavery. Beyond the ability to protect ourselves from enemies and to practice the religion of our choice, three major factors impact our freedom: freedom from ignorance, freedom from the tyranny of diseases we cannot afford to cure, and freedom to start or join ventures, business or otherwise.
So, the role of the government in establishing and endorsing an economic model is clear:
a solid education and research system, affordable healthcare for everyone, a network of
support for the development of any form of free enterprise.
How we build these systems, how we manage them and what set of values should
inspire them is the kernel of the new economics. Economics then really becomes the science
that studies how countries should develop.
The new economics should not just be concerned with better mathematical models to
portray scenarios; any serious mathematician would always alert decision makers to the
probable fallacy of such models. The new economics must be intimately connected with the
ways wealth can be created and the best ways to increase the distribution of this wealth.
The world we live in is becoming exponentially more interconnected, and wealth (and its creation) is a multifaceted entity. Which is the wealthier country, one where the GDP
is high but millions of people cannot afford serious education and healthcare, or one where
the GDP is lower but these “freedom rights” are guaranteed? This conflict exists only because
“economics” is anchored to flawed assumptions about wealth and value.
The distribution of wealth, seen as conflicting with the right of the individual to amass
personal wealth to the detriment of others, has always been labelled as “socialist” and as
such unsuitable for the free world. The ugly truth is that prevailing economic and financial thinking has led to the squandering of the resources that the planet has available and to the
stifling of innovation. This thinking has systematically favoured short-term decisions over
long-term planning. This thinking has swayed tens of thousands of talented people away from
applying their minds to constructive and foundational work and towards the sterile and
artificial domain of “financial products”. This thinking has led us to believe that we can create
something out of nothing.
Never in human history has the word “scarcity” meant so much. Our resources are
scarce and we need to learn how to use them; the name of the game of any serious
economic effort then becomes “sustainability”. The new economics must become the science
that studies the optimization of scarce resources and in order to do so must tap into the
bodies of knowledge that deal with how finite resources can be successfully managed.
The new economics must also be based on the founding assumption that no win can be
based on somebody losing; that we are all interdependent and the wellbeing of individuals is critical for the wellbeing of society; that wealth must be created in order for it to be distributed
and any form of imbalance will soon turn into a global loss; that individual success to the
detriment of others cannot be sustained. The new economics is founded on the assumption
that individuals, organizations, large systems and networks and, ultimately, countries are
vessels for the creation and distribution of ideas, products, services that help everyone to live
better, more intelligently and harmoniously with our environment.
The new economics will strive to provide not just mathematical platforms but also the
practical means to achieve a meaningful life.
9. The Ten Steps of the Decalogue
Just like the most famous offspring of the theory of variation, Statistical Process Control, these applications need to be fully understood in order not to become a mere expedient.
To summarize: The basic elements for sustaining the long-term development and success of
an organizational system are:
1. Clarity on the goal and the network of interdependencies that enable its achievement;
2. Understanding and management of the variation associated with each process and
their interactions as well as identification, exploitation and subordination of the
system to a set of limiting factors, the constraint(s) of the system;
3. Establishing an organizational mechanism that ensures appropriate capitalization on
people’s continuous learning about methods, markets and products.
Without a goal there is no system and without clarity on what to measure in the system and
how to measure it, talking about a goal becomes lip service. GAAP accounting utilized to support managerial decisions is enemy number one of productivity. EBIT, EBITDA, EPS
and any form of GAAP derived measurements totally miss the point of what a company
should strive for. If the goal of a company is connected in any way with making money, then
all we need to know is:
• What comes in (sales);
• What goes out to purchase materials and services that go into the products we sell
(TVC, Totally Variable Costs);
• What we need to make the system function (fixed costs + investments), Operating
Expenses (OE);
• The inventory (I) we need to keep in the system to ensure that we always have
enough “material” to produce and ship.
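In the Theory of Constraints these four quantities combine into the standard Throughput Accounting measures: Throughput (T) = Sales − TVC, Net Profit = T − OE, and Return on Investment = Net Profit / I. A minimal sketch, with purely illustrative figures:

```python
def throughput_measures(sales, tvc, oe, inventory):
    """Standard TOC Throughput Accounting relations.

    Throughput (T)       = Sales - Totally Variable Costs (TVC)
    Net Profit (NP)      = T - Operating Expenses (OE)
    Return on Investment = NP / I, treating Inventory/Investment (I)
                           as the investment base, as TOC does
    """
    t = sales - tvc
    net_profit = t - oe
    return {"T": t, "NP": net_profit, "ROI": net_profit / inventory}

# Hypothetical figures for a single period:
m = throughput_measures(sales=1_000_000, tvc=400_000, oe=450_000, inventory=500_000)
# T = 600,000; NP = 150,000; ROI = 0.3
```

Unlike GAAP-derived ratios, these three numbers answer directly the only question that matters here: how much money the system generates, spends, and ties up.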
by variation, which is enemy number one of Quality and reliability; however, this variation can be attributed to either common causes or special ones. Distinguishing “noise” from “signal”
was then critical to devising actions aimed at managing this variation. Dr. Shewhart
developed an important part of the so-called Theory of Variation known as Statistical Process
Control (SPC) and a very useful mechanism named Control Chart. His work was foundational
for the improvement of productivity, first at Bell and then in a myriad of organizations
countrywide, and certainly served as a springboard for the gigantic work of Dr. Deming.
Dr. Deming realized that in order to manage the variation associated with a process people
make two kinds of errors:
1. They attribute variation to a special cause when it is instead due to a common cause;
2. They attribute variation to a common cause when it is instead due to a special cause.
The ramifications of these errors are endless and still today plague the way companies
make decisions; Statistical Process Control, the main body of knowledge out of which the
Theory of Profound Knowledge has evolved, is largely ignored at Top Management and
Corporate level. Certainly, the lack of scientific background that is typical of the average
western corporate manager has contributed to the dismissal of SPC, but more profound reasons, I believe, have sealed its fate so far.
SPC is neither a purely mathematical tool nor is it a conventional financial management
tool. The process behaviour charts (a better, less intimidating name for “control chart”) mirror
the outcome of our managerial decisions (often taken with some kind of local optima in mind).
The image they portray is not the reassuringly deterministic one that accountants and
financial people are accustomed to; on the contrary, what these charts display are predictable
or unpredictable ranges of oscillation. These oscillations very often reflect, mercilessly, the
conflicting confusion that dictates our choices, and the course of action that their interpretation calls for flies in the face of “conventional wisdom”.
In many years of professional practice I have constantly witnessed the psychological disarray that follows a statistical study carried out with the use of Process Charts. Understanding
SPC, let alone using it properly and accurately, requires a paradigm shift in the way we look at
data and make sense of them for business decisions. From a mathematical standpoint, process charts are based on an average dispersion statistic, and their limits are calculated with the three-sigma approach. What has always generated confusion about statistical process
control charts is that although connected with probability theory they do not work because of it.
The essence of the charts is in their predictive role and in the possibility they provide to build an
epistemological approach to management based on prediction.
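To make “an average dispersion statistic with three-sigma limits” concrete, here is a minimal sketch of the limits calculation for an individuals (XmR) process behaviour chart. This is generic SPC, not a fragment of the Decalogue method itself; the factor 2.66 is the standard constant (3/1.128) that converts the average moving range into three-sigma natural process limits:

```python
def xmr_limits(data):
    """Natural process limits for an individuals (XmR) chart:
    centre line = mean; limits = mean +/- 2.66 * average moving range."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(a - b) for a, b in zip(data, data[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

def classify(data):
    """Label each point: outside the limits -> signal of a special cause;
    inside the limits -> indistinguishable from common-cause noise."""
    lcl, _, ucl = xmr_limits(data)
    return ["special" if x < lcl or x > ucl else "common" for x in data]
```

A point flagged “special” warrants investigating a specific event; reacting one by one to points that are merely “common” noise is exactly the tampering Deming warned against.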
Control charts capture the most fundamental feature of the work of individuals and their
interaction within organizations: the variation associated with processes. In building a
systemic organization based on the Decalogue the driver’s seat belongs to SPC.
Steps Four and Five: Identify the constraint and implement buffer management
The new kind of organization that is the solution to the conflict between Deming’s
approach and Goldratt’s approach is a network of interdependent processes with one
common goal where we have achieved a good level of statistical predictability. It can be
successfully managed, but the question is, how? Dr. Goldratt’s main contribution to the
Theory of Management has been to point out that any system is limited towards its goal by
very few elements, the constraints. If we identify them we can manage them following the
steps of focusing that he developed. The Decalogue, leveraging the intrinsic stability of a
Deming-based system, suggests that the constraint can be “chosen” (one constraint) instead
of being identified. In other words, we can always decide which constraint it is strategically
more convenient to focus on and build the system accordingly. Let me stress this: we can do
it ONLY because we have already built a system made of low variation processes; this is why
we can safely design our company around a strategically chosen constraint. Instead of
cycling the five focusing steps of TOC - (1) Identify the constraint, (2) exploit the constraint,
(3) subordinate to the constraint, (4) elevate the constraint, and, if the constraint has moved,
(5) go back to step (1) - we can make the system grow by appropriately choosing the
constraint and sizing the capacity of all the feeding/subordinating processes coherently.
Again, this is possible only because the variation in the system is low.
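The contrast can be made visible with a toy model. In the sketch below (the System class and its capacity figures are purely hypothetical), cycling the five focusing steps keeps chasing the constraint as it moves; the Decalogue's alternative is to choose the constraint once and size every other process around it:

```python
class System:
    """Toy model: named processes with capacities; the constraint is
    simply the process with the least capacity."""
    def __init__(self, capacities):
        self.capacities = dict(capacities)
    def identify_constraint(self):
        return min(self.capacities, key=self.capacities.get)
    def exploit(self, c):
        pass   # squeeze the most out of the constraint as it is
    def subordinate_to(self, c):
        pass   # pace every other process to the constraint's rhythm
    def elevate(self, c):
        self.capacities[c] *= 2   # invest to raise constraint capacity

def five_focusing_steps(system, max_cycles=10):
    """Goldratt's five focusing steps as a control loop."""
    for _ in range(max_cycles):
        c = system.identify_constraint()       # 1. identify
        system.exploit(c)                      # 2. exploit
        system.subordinate_to(c)               # 3. subordinate
        system.elevate(c)                      # 4. elevate
        if system.identify_constraint() == c:  # 5. if it moved, start over
            break
    return system.identify_constraint()
```

Because elevation makes the constraint jump elsewhere, the toy never settles; that perpetual chase is precisely what a strategically chosen, stable constraint in a low-variation system avoids.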
Step Six: Reduce the variability of the constraint(s) and the main processes
Step Six is obviously connected to Step Five but less obviously to Step Seven. Clearly,
we do understand the impact that variation has on our system and the need to reduce it but,
when push comes to shove, we are not prepared to continue to work on variability reduction.
Why? The answer is in our ability and desire to understand the purpose of system
management.
The culturally irritating translation of Deming’s Philosophy into the myriad of “Kaizen-like”
management techniques that have bamboozled western management for the last 30 years
has transformed it from an innovation and wealth creation driven vision of the world into an
efficiency game. If, on top of this, we continue to view our company in “functional” terms, then
reducing variation simply means reducing costs. Of course, no function would ever easily
surrender to that because it would imply “cutting the budget”, hence any serious attempt to
reduce variation is nipped in the bud. A relentless effort towards continuous reduction in
variation can only stem from a systemic vision of our company and the understanding that
only this reduction would provide the insight needed for triggering real jumps in performance.
The way to link a relentless, focused and companywide variation reduction crusade to financial performance is through the adoption of a suitable organizational structure.
Step Seven: Create a suitable Management structure
When we wrote our book, Oded and I were perfectly aware of the inadequacy of the
prevailing organizational structure to support the systemic endeavour we were preaching, but,
in fairness, we had no generic answer to the problem. What you have read in the previous
chapters regarding a project-based structure is the result of subsequent analysis and
elaboration. At the time of publication (1999), we just wanted to stress that, without a suitable
structure, the realistic possibility to sell all the capacity of the constraint would be hindered by
local optima considerations. In other words, the design of a suitable structure was a
prerequisite for enabling the true expansion of the system.
Step Eight: Eliminate the external constraint; sell the excess capacity
When we design a system that caters for a high degree of process predictability and
synchronization, where control and protection are ensured by buffers, and where all the
policies, behavioural and measurement “constraints” are dealt with by an appropriate
organizational structure, we do so to maximize sales. The most important part of the chain is
the customer and any company should always be designed to ever improve its ability to
satisfy its customers’ verbalized and hidden needs. The Decalogue, if understood and
embedded in the appropriate structure, should very quickly unveil capacity that is not
currently being sold.
Another way of looking at this issue is the following. Let’s say that, on day one, the rate of sales of a company starting its Decalogue journey was such that some shipments were missed and constant fire-fighting created friction between production and sales. The
Decalogue would call for a disciplined process mapping aimed at understanding process
variability, the choice and management of a suitable constraint as well as the devising of a
coherent measurement system. Moreover, in order for this level of synchronization not to be hindered by the evil inclination towards local optima that functional organizations invariably undergo, we design an appropriate, coherent organizational structure. Almost invariably, the constraint
will shift outside, i.e. become an external constraint: our capacity, what we can realistically
design, manufacture and ship becomes greater than what we are currently capable of selling.
At this point, it will be blatantly obvious that our real understanding of the market is woefully
limited and we, in truth, do not know how to sell. The most spectacular application of the use
of the Thinking Tools is in the management of the external constraint. This is a very critical
point in the pattern of a successful Decalogue implementation. Why? As we previously
stated, salespeople can singlehandedly jeopardize any systemic endeavour unless we
integrate them organically in the way the company operates. We shall look at this in more
detail in Chapter 13 dedicated to External Constraint.
Step Nine: where possible, bring the constraint inside the organization (and lock it)
The Decalogue approach to management is based on process stability; indeed the most
critically important part of the system is the constraint. Hence, we want to ensure maximum
predictability especially on the constraint. Clearly, when the constraint is external such
predictability is more difficult to achieve. This is the reason why, whenever possible, we want
to manage an internal constraint. Moreover, this is also the easiest way to make the system
grow without stirring company dynamics conducive to mayhem.
The need for constraint reliability is so strong that even when organizations have a
virtually unlimited internal capacity, like supermarkets, we should always elect to appoint an
internal constraint and subordinate the whole organization to it. The growth of the system will
then happen through a systematic, orderly and relentless exploitation of the capacity of the
constraint. When such a capacity is not sufficient to meet market demand we will first
increase the appropriate non-constraint areas, making them capable of subordinating to the constraint, and only then will we elevate the constraint. One more time: the name of the game
is process predictability.
Step Ten: Set up a continuous learning program
The possibility for a Decalogue-based management system to produce the hoped-for results in time rests on the organization’s ability to continually learn what is needed to constantly improve its performance.
Learning does not happen in a vacuum and cannot be based solely on individual desire.
Learning must become part of the way the organization functions and the change associated
with it a way of life for the company. Learning cannot be “installed”, nor can it be forced on people. It must be a personal choice but also an integral part of the way the company has
structured itself to conduct business. Learning must be promoted companywide by Top Management, but it must come from a designated and empowered source. Over the years
we have come to call it a Centre for Learning.
The ten steps highlight the knowledge needed to overcome the Deming vs. Goldratt
conflict and enact successfully the idea of a system constrained in one point; I am proud to
say that the steps have stood the test of the last ten years. However, both the successes and
failures I experienced in these years have clearly pointed at areas that were in need of some
meaningful upgrade. The most important lessons have come from the area of organizational
design. At the end of this book we will look at the emerging model for the organization of the
21st century.
The Decalogue is founded on the principle of continuous improvement. Not only does it
employ the Plan, Do, Study, Act (PDSA) continuous improvement cycle designed by Deming;
the ten steps as a whole embody that cycle in the way they are carried out. This can be seen
clearly when we map the ten steps of the Decalogue and their four main phases onto the
PDSA cycle itself.
PART THREE
Knowledge
(application/execution)
Domenico Lepore
The goal of Part Three is not to provide an illustration of how the concepts of the Decalogue
have been applied over the last 15 years, nor to elucidate the successes and failures
that are the hallmark of any innovation process. My reluctance to provide “examples”
originates in the wealth of empirical evidence I have collected over the years: people can
very rarely relate to other people's experience. Moreover, as Dr. Deming reminds us:
“without a theory, experience teaches nothing”.
And yet I did feel the need, at this point, to provide a further insight, a visualization of
what I discussed so far. I felt the need to show, with some level of detail, how the concepts
presented in the previous parts of the book can be made operational.
I was clearly in a conflict.
“Provide examples” protects the need for “satisfy expectations”; readers expect
examples, it makes them feel the author is worth their trust, “he has been there”, “he has
done that”. “Do not provide examples” protects the need for “intellectual coherence”; this book
is about scientific and philosophical tenets concerning the creation of a sustainable
economics, not a “one minute manager” handbook.
There are two basic assumptions at the core of this conflict. The first is that examples
have to be “real life” in order to feel “real”. The second is that it has to be “me” who talks
about them; in other words, it has to be “the organizational scientist” who provides the
practical, detailed answers. What follows is the result of tackling these assumptions.
What you will find in Part Three is a series of chapters (only one is by me) written by a
group of professionals, all of whom have several years of “theory and practice” with the
Decalogue. What each of them presents is his/her take on the Decalogue as applied to a
particular area. Indeed, all of the contributors also have several years of experience in the
subject matter at hand. In other words: the writers of these articles are subject matter experts
with a solid grasp of the Decalogue.
In these chapters there are no names, and only vague reference is made to the particular
business of the companies the authors worked with. What you will find is a practical analysis of
how some foundational aspects of a systemic organization are addressed; each author has
provided his/her own slant to the issue and I have made no attempt to make his or her style
uniform with the others. In essence, what you will read is their view on how the issue can be
faced and, in some cases, fairly detailed instructions are provided. Again: these instructions
will teach nothing unless the underpinning scientific and philosophical tenets are understood.
Larry Dries JD addresses the highly controversial (and largely misunderstood) theme of
accounting. Unfortunately, the world we live in, particularly that of Public Companies, is a slave to the
conventions of GAAP and cost accounting. While GAAP and cost accounting are not evil in
themselves, the use that the industrial and financial world has decided to make of them stifles
innovation and distorts data, thus leading companies to make wrong decisions. Larry makes a
clear case for Throughput Accounting and provides a non-negligible insight into the
psychology of accountants and finance professionals, a psychology that derives from the
assumption at the foundation of their particular world view: cost reduction. Larry's background in law
and investment banking has given him experience and perspective on how the manipulation
and interpretation of financial data can materially influence company behaviour, often leading
to unfortunate results. Larry’s discussion on measuring also briefly mentions the use of
Statistical Process Control and how it drives good decision making through objective
analysis. Such analysis permits accurate measurements, which then can form the basis for
the design of precise solutions to problems flowing from special cause variation. This entire
discussion on measuring and accounting strongly highlights the importance of the
philosophical underpinnings of the Decalogue itself.
Dr. Giovanni Siepe addresses variation and its implications for the management of
operations. Giovanni, a theoretical physicist who has spent two decades in industry, draws
heavily from the monumental body of work by Dr. Donald Wheeler; Don’s books have been
for me as well as for Giovanni a constant source of inspiration and guidance in the application
of Dr. Deming’s philosophy and we are eternally grateful to him for his teachings. Giovanni
has worked extensively on simulations and has gained over the years a profound
understanding of how critical the management of variation is for the successful running of
operations: from replenishment to manufacturing to sales.
Yulia Pakka MScBA took on the burdensome task of summarizing the External
Constraint approach to Marketing and Sales. A few words of introduction are in order.
Marketing (and its links to sales) within the framework of the Decalogue takes on a formidably
important and delicate role. Far too many people think of Goldratt and Deming as “Production
and Quality” experts and their management philosophies as, in essence, ways to improve
“efficiencies”; nothing could be more wrong. Deming and Goldratt, hence the Decalogue,
provide a holistic framework for everlasting, sustainable growth, and Marketing is the vessel
for that growth.
Yulia has a multifaceted and international professional experience in highly competitive
and innovative markets. She explains very clearly, rigorously and comprehensively how a
statistically stable organizational system can reliably provide customers with well-priced,
quality products that address well-identified needs. She provides clarity on the connections
that should exist between pre-sales and post-sales activities and warns about the risks of a
sales force that is not cognitively aligned with the marketing process. Yulia gives a brilliant, yet
challenging, illustration of the mechanism by which a company can systematically win the
“heart and mind” of its customer and tackles head on the mystique of the sales process.
Anyone who is serious about understanding what Dr. Goldratt has described in his seminal
book It’s Not Luck should read Yulia’s chapter very carefully.
Francesco Siepe PhD and Professor Sergio Pagano PhD deal with Information
Systems and what should be their main feature: the ability to support an organization in
improving its performance towards a stated goal. Francesco and Sergio are, respectively, a
mathematician and a physics professor. Their professional collaboration is aimed at
designing and building software technologies that can support a Decalogue based approach
to the management of organizations.
A common background in non-linear systems studies has helped them devise
computational tools for the proper management and synchronization of finite resources. Our
continuing conversations on the usefulness of software for enhancing business
performance are leading to a set of tools that maximize the impact of any existing Information
System. In this chapter, Francesco and Sergio illustrate an application of their approach to
Project Management and show how simple software tools can support the adoption of a
completely systemic organizational design based on the management of projects. Their work
is based on, and expands upon, the concepts that Dr. Goldratt illustrated in his book
Necessary but not Sufficient.
Gianlucio Maci PhD offers in his chapter a preview of where the development of the
Decalogue is heading. The initial inspiration (and motivation) for writing this book was
provided by the idea of sechel and how it can be tapped into if we develop the
three faculties of the mind that I discuss in the book. Connecting the dots among intuition
(birth of an idea), understanding (of its full spectrum of ramifications) and
knowledge/consciousness (successful application) invariably brings a new “spin of the wheel”
of innovation and development. In our case, such a development found its scientific roots in
Network Theory.
Over the last three years Gianlucio has engaged in the theoretical development and
hands-on testing of models aimed at providing enough experimental validity to the intuition
that companies can be better managed if seen as networks of interdependent, variation-
poised components, hence introducing the language of networks into the field of
organizational studies. His research work on complex systems will be part of the groundwork
needed to evolve the Decalogue into a full-blown Intelligent Management approach.
Angela Montgomery PhD has edited and shaped the entire book, from a set of disparate
parts into a systemic whole. Angela masters the art and science of crossing boundaries,
geographical and conceptual. Her zest for knowledge, intellectual prowess and flexibility have
allowed her to contribute meaningfully over the years to the development of the Decalogue
knowledge base. She is co-founder of Intelligent Management Inc., an organization with the
goal of promoting awareness and adoption of the Intelligent Management approach.
10. Sechel: Fostering a higher intelligence with the Thinking Process Tools
In 1994 Dr. Goldratt published It's Not Luck, a novel in which he addresses constraints that are
not physical. In this book he tackles such constraints with the aid of a powerful set of logical
(and emotional) diagrams that he labelled “Thinking Process Tools” (TPT). Many books have
been written to exemplify the use of the TPT. In our book, Oded Cohen and I dedicated 27
pages to this end. It seems to me that none of us did a particularly good job, given the
abysmally scarce use that people have made of the TPT over the years. I will therefore try
not to make the same mistake; whoever is interested in learning the use of the tools will
probably find the website that accompanies this book useful: www.intelligentmanagement.ws.
Here I will attempt to provide a different slant on the issue.
The Thinking Process Tools were created to fortify in people the ability to reason in terms
of cause and effect. This is a daunting task because our mind simply does not work that way. In our
daily lives, most of the time we “re-act” instead of acting, and we very rarely understand the
full spectrum of the consequences of our “re-actions”. Simply put, human cognition is heavily
constrained in understanding the network of interdependencies that our actions trigger.
Dr. Deming used to warn about the consequences (cause and effect) of seemingly simple
actions: “if we kick a dog in the street, we are responsible for the attack that, probably out of
fear, the dog will take against the next passerby”.
possible; restraints can play an extremely important part in our lives, as can the
possibility of lifting them when appropriate. These limitations do not belong only to individuals
but also to organizations; the field of forces created by the structure of the organization, the
paradigms of its founders and directors and the socio-political environment at large do shape
these constraints. The issue, for individuals and organizations alike, is not so much the
existence of these constraints but rather the acknowledgement that they are such;
unfortunately, far too often, these mind-created constraints become the “reality” of the
organization. The life cycle of companies, and even entire industries, can be measured in
terms of their appreciation (or lack of it) for these deeply rooted images of the world that over
the years I have come to call “cognitive constraints”.
Cognitive constraints
Cognitive constraints are connected neither with (lack of) education nor with any rational
understanding of the debasing role they play in our lives. Simply, our mind adapts very
quickly to seemingly “stationary states” and sees moving out of these states as too
challenging. In life, as well as in business, there is no such thing as a “stationary state”; as
Dr. Deming used to say: “the only thing that does not require maintenance is obsolescence”.
This is a fundamental truth: if we do not evolve, we regress. What makes everything more
complicated is the pace at which we must evolve to survive and the anthropological direction
of this evolution.
Let me try to be clear on this issue as much as I can: management (and economics for
that matter) has seen a slew of socio-psycho-behavioural studies over the last 40 years. An
incredible avalanche of “techniques” has been devised to help managers cope with their
duties; legions of lawyers and accountants have patrolled the field where “wealth” is
generated, and herds of consultants have galloped across the wild and undefined prairies of
performance improvement. We can easily label these efforts as “upgrade management”
because in some way (almost) all of them have contributed to some advancement. Not
anymore.
The “new individual” and the “new organization” need to be crafted in a very different,
much broader paradigm. The founding element of this paradigm, this collective perception of
the world, will be cooperation not competition; interdependence not dependence or
independence; networks of value not company book profit; comprehensive wealth not just
money; centrality of the individual not individualism.
In order to cope with this massive shift in perspective we need to be able to understand
the connections that exist among different elements of our current reality, the implications
triggered by their interdependencies, and what makes their coming together seem
inevitable to us. We need a mechanism that supports and fortifies our intuition of why things
appear the way they do.
Unveiling the cause-and-effect relationships that we make in crafting our reality provides
a unique opportunity to transform this reality. It is therefore important to stress that the
mechanism, the tool we use to accomplish this feat is a genuinely transformational tool.
Without any delusion of doing a better job than ten years ago, I will try to illustrate it.
Embracing conflicts
In virtually every culture I have had the opportunity to come into contact with, the word “conflict” is
evocative of something negative; an often prolonged struggle, a situation of distress, an
incompatibility difficult to reconcile. Whether you look in the dictionary or try to fish in some
hidden ravine of your memory, you will invariably connect “conflict” with something you would
want to avoid. Unless you are a lawyer, you will probably derive very little pleasure and
benefit from a situation of conflict.
Conflicts, with their heavy burden of emotions, act as debasers of our intellect, allowing
our evil inclinations to take over. Conflicts unleash powerful forces that make up an important
part of what we call “life”; like any force, they can overwhelm us, or we can harness them for
a good purpose.
Conflicts are inevitable; they arise from the simple fact that we are all different and we
see the world in different ways; we have different agendas, different priorities and we are
subject to different stimuli. Sometimes we have conflicts with ourselves and struggle to make
decisions. The issue, then, is not so much avoiding conflicts; rather, it is turning their potentially
devastating power into something useful. Dr. Goldratt, in his lifelong crusade for better
thinking as a prerequisite for better acting, dedicated a book, It's Not Luck, and endless
seminars to the subject of conflict resolution. Here the attempt is not so much to explain its
mechanics as to unveil its underlying paradigm.
A want always hides a “need”. Anytime we state a want, consciously or not, we verbalize
with our statement the way we intend to satisfy our need. I want a new car to satisfy a need
for safety or prestige; I want a steak to satisfy my craving for meat or need for proteins; I want
to read a book to protect my need for knowledge or because I need the information contained
in that book. The list is obviously endless.
Separating the “want” from the “need” is the first step; once we do that and look at
each need that originated the “want”, we immediately realize that these needs are never
in conflict. Why do we claim this? Because we can quickly link these needs to a plausible
common goal, the achievement of which is only possible if the two needs are simultaneously
satisfied. In the end, we only argue and want to resolve a conflict with somebody with whom we
think we have some common goal to achieve; otherwise we are indifferent.
Goal, needs and wants must be verbalized as clearly as possible; it is only through a
correct and precise verbalization that we disclose the true nature of the categories of speech
that make up the conflict. (See Chapter 2 for a description of the Conflict Cloud, how to build
and read it.)
Virtually any situation of blockage can be portrayed in terms of conflict. There are at least
three main advantages in doing so: first, we see with true clarity the issue at stake, we
understand “why” we are in conflict; second, we deflate the potentially growing bubble of
resentment associated with a conflict and keep ill feelings at bay as much as possible. The
third and most important advantage is that by building a conflict with precision we are
automatically poised to solve it; I will try to explain why.
So far we have analyzed a conflicting situation in terms of its three founding statements:
the “wants”, our claims; the “needs”, what we try to satisfy/protect with our claims; the
“common goal”, something that can only be achieved if both needs are satisfied/protected.
The goal then, really becomes “common” to both conflicting parties. We have also seen how
to connect these logical entities and how the conflict reads.
and requires a serious commitment towards a sincere betterment of the most relevant part of
our being, the way we think.
This is because, and I will never stress this enough, the seemingly simple conflict
diagram is far from being a “tool”, like a hammer or a piece of software. Embracing the
conflict cloud approach to problem solving calls for the acceptance of a new paradigm of
openness and transparency about ourselves; it requires the systematic and relentless
acceptance of others’ views; it is based on the deep conviction that sundering ourselves from
others is artificial and never truly possible. The conflict cloud, through its paradoxical
nomenclature, is the key to creating connection, to linking positions, and to building bridges.
In pursuing the overcoming of conflicts through the identification of their ultimate root
(our assumptions) we create the mental circuitry that enables us to develop seemingly
inconceivable solutions; we open up a realm of new possibilities, we transcend ourselves and
create a much broader range of possibilities for interactions.
This is why the conflict cloud is not simply a “tool” to address disputes; rather it is the
starting point for transforming how organizations can be built and work in a much more
meaningful way. The conflict cloud triggers and sustains a new kind of management where
we can really tap into the unexplored possibilities that human cooperation can generate.
The conflict cloud supports and fortifies intuition.
Two: we find a verbalization that summarizes them all; we call it D. (We may want to do
this in steps: a) we stratify the UDEs into homogeneous categories; b) we summarize each
category with one statement; c) we consolidate these statements into one.)
Position D
Three: we find a verbalization that summarizes all the Desirable Effects (DEs) we would
like to experience; we call it D':
Positions D and D’
Four: we state the need for “control” that forces us to accept, to cope with D; we call it B:
The need B
Five: we state the need for “vision” that prompts us to say that D' is the reality we would
like to live in; we call it C:
The need C
Six: we verbalize the most basic goal whose achievement must pass through the
simultaneous satisfaction of B and C; we call it A. In other words, B and C must be
simultaneously satisfied in order to achieve A.
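For readers who think in code, the five entities and the conventional way of reading them can be captured in a minimal sketch. The Python below is purely illustrative: the class name, field names and the sample statements are assumptions of mine, not Decalogue nomenclature, and the example cloud is hypothetical, not drawn from any real engagement.

```python
from dataclasses import dataclass

@dataclass
class ConflictCloud:
    """Minimal sketch of a core conflict cloud; field names are illustrative."""
    a: str        # common goal
    b: str        # need for "control" (forces us to accept D)
    c: str        # need for "vision" (makes us desire D')
    d: str        # want summarizing the Undesirable Effects (UDEs)
    d_prime: str  # want summarizing the Desirable Effects (DEs)

    def read(self) -> str:
        """Render the cloud the way it is conventionally read aloud."""
        return (
            f"If our goal is {self.a}, then we must {self.b}; "
            f"if we must {self.b}, then we are forced to cope with {self.d}. "
            f"On the other hand, if our goal is {self.a}, then we must {self.c}; "
            f"if we must {self.c}, then we want {self.d_prime}."
        )

# A purely hypothetical example:
cloud = ConflictCloud(
    a="grow the company sustainably",
    b="keep our processes predictable",
    c="seize new market opportunities",
    d="stick to established work habits",
    d_prime="change the way we operate",
)
print(cloud.read())
```

The structure makes the asymmetry of the cloud explicit: D and D' are claims, B and C are what the claims protect, and A is reachable only through both B and C.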
The exercise of building a core conflict cloud for an organization is invaluable and the
process can be exhilarating. Let me try and give you a sense of this. In the last 15 years
I have worked with hundreds of top and middle managers to build custom made
implementations of the Decalogue and the starting point has always been the writing of the
core conflict. A group of managers sits for two or three days in a room starting with a “bitching
and moaning” session where all the UDEs are verbalized. This first phase is a very
“feel-good” one: everybody agrees that the company is plagued by these effects. These effects are
and feel “real” and everybody would like to get rid of them.
Summarizing all the UDEs in one single statement is normally a little cumbersome, but it
is generally done in a few hours. At this point the procedure I listed above begins, and the end
result is normally welcomed as a breakthrough. What happened?
Remember, the conflict cloud helps to sharpen our intuition. In just a few days the group
of managers has moved from an often disparate set of non-verbalized hunches to a clear cut
picture of the forces that keep them from achieving their goal. Moreover, a precise description
of the needs that craft the psyche of the organization goes a long way towards helping to
understand the “why” we are trapped in this conflict, the reason for it. I can safely say that no
top management strategic retreat session delivers a tangible and operational output like this
one. Now that the intuition is strong we can make it stronger.
What transforms a core conflict into a full-blown picture of our current reality is a
disciplined, orderly elucidation of all the mental models that give birth to the conflict. These
mental models are deeply rooted images that we have of ourselves and the world around us.
These mental models, which we may also call “assumptions”, are the cognitive lenses
through which we perceive reality.
Assumptions are, like any other mental construction, the result of external (the
environment, education, experiences, values, etc.) and internal (the chemistry and physics of
our mind) factors. The difference between an assumption and a statement of reality is only
the realm of validity, determined often by cultural circumstances. (If you want a practical
example of this last statement, take a sentence like “in a democracy every citizen is entitled
to decent, affordable and reliable healthcare” and ask for a comment from a statistically
representative sample of individuals in the US, Canada, and Europe).
Assumptions are the logical connectors between goal, needs and wants; they help us
see the logic that shapes the conflict. A conflict with its set of clearly verbalized assumptions
portrays the current reality precisely in the way we experience it and is the strongest possible
support we can provide to our intuition.
I hope that these pages have helped clarify what is at stake here: the development of a
new, stronger ability of our mind to fortify the faculties of our intellect.
Such a development is greatly enhanced by the TPT, and what we have presented here, the
core conflict cloud/CRT, forms the basic mechanism for strengthening intuition. We provide an
example of core conflict cloud/CRT in the following pages regarding the case of the ProPrint
company.
reality. But they also pave the way to come out of this reality and move towards a future that
is more desirable.
As we said, assumptions are mental models we have about the world; they are formed
as a result of experiences and socio-cultural circumstances. Assumptions are, in every
respect, a reality for the person who develops them. These assumptions, particularly the ones
that we verbalize between D and D' in the conflict/CRT, are, de facto, the constraining
element of our reality; they are our cognitive constraint.
If these constraining beliefs were challenged and invalidated, i.e. we were to identify logical
statements that would disprove them, then these “constraints” would be “removed”. As a
result of this removal (the lingo is: “elevation”) of the constraint, our ability to achieve our goal
would be magnified. In order to qualify as “assumption sweepers” these logical statements
disproving our assumptions must fulfil two prerequisites:
1) they must logically invalidate one or more assumptions;
2) they must protect/address both needs OR one of them and be neutral to the other.
Indeed, the totality of these statements must address/protect both needs. If these
prerequisites are satisfied, we call these statements “injections”. The need for control (B) and
the need for vision (C) are captured by the two statements: on one side the “vision” of a
company that can overcome with ease the limitations they clearly see as artificial, on the
other the “controlling” need for remaining as faithful as possible to the perception they have of
themselves professionally and otherwise.
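These prerequisites can be expressed as a simple check. The Python sketch below rests on assumptions of my own: each candidate injection is represented by the list of assumptions it invalidates and by its effect on each of the two needs, encoded as “protects” or “neutral” (anything else disqualifies it).

```python
def injection_ok(invalidated_assumptions, effect_on_b, effect_on_c):
    """Prerequisites for a single injection: it must invalidate at least
    one assumption, and it must protect both needs, or protect one need
    and be neutral to the other."""
    if not invalidated_assumptions:
        return False
    effects = {effect_on_b, effect_on_c}
    return effects <= {"protects", "neutral"} and "protects" in effects

def injection_set_ok(injections):
    """Collective prerequisite: the totality of the injections must
    address/protect both needs, B and C."""
    if not all(injection_ok(a, b, c) for a, b, c in injections):
        return False
    protects_b = any(b == "protects" for _, b, _ in injections)
    protects_c = any(c == "protects" for _, _, c in injections)
    return protects_b and protects_c

# Two hypothetical injections that together protect both needs:
injections = [
    (["assumption between B and D"], "protects", "neutral"),
    (["assumption between D and D'"], "neutral", "protects"),
]
print(injection_set_ok(injections))  # → True
```

Note that a single injection that is neutral to both needs fails the check, just as the text requires: neutrality is acceptable towards one need only if the other is protected.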
Injections are solutions to the conflict; by invalidating all the assumptions they
“evaporate” (nothing like jargon, eh?) the conflict cloud (D and D’ disappear) and can
potentially move us from our Current Reality to a more desirable, less constraining Future
Reality.
However, in order for this to happen, we have to ensure that this set of injections is both
complete and as free as possible from potential negative implications. Only then will we have a
full understanding of the pattern in front of us. Only then will we have a thorough
comprehension of all the potential ramifications of the solutions we identified (the injections).
The way to read this conflict is the following: “If our goal is A, then we must B; if we
must B, then/therefore we are forced to cope with/accept D. On the other hand, if our goal is
A, then we must C; if we must C, then/therefore we want/desire D'.”
The boxes between A, B and D, between A, C and D', as well as the one between D and D',
contain the assumptions, and they read as follows: “If our goal is A AND [box with assumptions],
then we must B (or C).” The same reading applies to the other boxes with assumptions. The box
between D and D' reads: “We would like to have D', but we are forced to live with/cope with D
BECAUSE….”
The core conflict cloud above portrays the current (at that time) reality of a
manufacturing company producing special equipment for the printing industry. In my
experience no person or company has ever benefitted from reading another person or
company's core conflict. The reason for this is multifaceted. First, every company believes it
is special and unique, beyond comparison; moreover, it is difficult to penetrate somebody
else's subject matter quickly. Second, reading a conflict is not the same as building it;
the focus and comprehension in the two scenarios are radically different. Third, the
language that management uses to portray its reality is peculiar to that group of people.
This is why, in fairness, I do not believe reading this cloud and what follows will be of great
help. However, in order to increase even minimally the chances of being successful, I have
done some editing work to the original conflict so as to make it as general as possible.
The company, which we shall call ProPrint, had about 200 employees and two
facilities. It had been on the market for 50 years and had been bought out by some of the
existing management who had teamed up six years previously with some keen, willing and
long-term focused capital. ProPrint was well known for the quality of its products, had a
well-established customer base, no union issues, and enjoyed a respectable cash profit
generation.
The main UDEs of ProPrint were all connected with its legacy. It had been a small,
family owned company for decades, where hands on experience and closeness to the
owner/founder were valued more than formal education and knowledge. The skills
considered critical for the product were in the secretive hands of a few elderly employees,
and the focus of the company was exclusively on the product rather than the customer. The
sudden jump in revenue that ensued from the takeover by management had created a clear
disconnect between traditional work habits and what was needed to cope with the pace of
growth. Moreover, ProPrint did not seem to be able to take advantage of a growing demand
coming from abroad.
In the course of a three-day session, the top management team of eight people were
able to clarify and fortify their intuition about the issues they were facing, i.e. the conflict that
was trapping them. They were able to verbalize clearly the forces that were keeping
ProPrint from evolving. The management at ProPrint invested heavily towards this end and
they came out with the considerable result we reproduce here. I took the liberty of making
some simplifications to make it more legible.
This is the part of the FRT that connects the injections to the achievement of the goal via
the simultaneous satisfaction of both needs. (A more complete and, for the purpose of this
book, unnecessary version of this tree should show the links between the achievement of the
goal and all the DEs that would replace the UDEs out of which the core conflict was
developed).
The set of injections developed by ProPrint was validated in terms of its completeness
by linking each injection through cause-and-effect logic with “future” intermediate states of
reality that are achieved as a result of the accomplishment of the injection. Indeed, these
injections are “pies in the sky”; they do not exist in reality and they are, at this stage, only
logical statements.
The strength and the value of the FRT are in the understanding it provides of the
comprehensiveness and the breadth of the effort needed to transition from the current to
the future state of reality. The thoroughness of this effort and the completeness of the
understanding we derive from it are further enhanced by the meticulous test we carry out on
potential negative implications deriving from the coming into existence of the injections. The
TPT that supports our understanding in this effort is called Negative Branch Reservation (NBR).
The logic of this tool is simple but the mechanics require some attention. In essence, with this
tool we verify whether the suggested injection carries with it potential negative effects; if we can identify
them, we can try to defuse or reduce their impact. The steps to build this “branch” are the
following:
1) we state the injection;
2) we state the potential negative outcome;
3) we build the cause-and-effect chain of events that we anticipate would determine
the negative outcome;
4) we identify the logical statement/statement of future reality that would turn the
positive of the injection into a negative;
5) we devise an action that trims that negative;
6) we incorporate the needed change into the original injection so as to make it more
effective.
Anticipating negative implications is an activity in which many people are well versed.
The point here is not to encourage scepticism, fuel negativity or stifle creativity and
enthusiasm. The goal of an NBR is to ensure full understanding of the ramifications of our
decisions. In some cases this tool helps us abort a half-baked decision in time. In others, even
when we are unable to trim a negative implication, it reinforces our desire to go ahead anyway.
More frequently, NBRs help us craft better and more rounded injections, giving us a higher
chance of accomplishing what they were designed for.
To exemplify the use of the NBR, we can see how ProPrint raised and trimmed a
potential negative associated with one of their injections on the FRT. (The injection is in the
bottom left box.)
So far, we have worked with the basic starting situation and pictured it using the tree of
our current reality. We have seen how ProPrint built its own current reality. Depicting their
reality so clearly helped the management team to develop the focus and the desire to build a
thorough understanding of how it would be possible to evolve out of that reality towards a
better future. The FRT and the NBRs provided that understanding and the global vision of
how the company could move ahead without losing itself. The bridge between the present
and the future had been built.
We tend to perceive obstacles as undefined, scary entities that hinder our ability to achieve our goal. When we are
in front of an injection (or any other arduous task) the first reaction is: “Oh my goodness, how
am I going to do this?” This is understandable, but it is not rational. First of all, why are we
faced with this feat? Because either we have chosen to be, or because it is part of an
evolution, personal and/or organizational, we have decided to be part of. Very rarely do we
get ourselves in front of a task we have no possibility of overcoming. Indeed, at the onset, we
might not have clear ideas on how to tackle the ordeal. This is good; let’s leverage it.
The basic raw materials to build a core conflict are Undesirable Effects (UDEs); the
starting ingredients for a Prerequisite Tree (PRT) are the obstacles we envisage in our journey to the goal: let’s
list them. The writing of this list is going to be both liberating and informative: we shall have
full clarity of what we perceive as an impediment and with it we’ll develop a pacifying sense of
control over it. When the list is complete we will have split “a big and undefined problem” into
a set of much smaller ones. How do we attack them? We make a list of the obstacles on the
left side of the page, from top to bottom; then on the same line on the right side of the page
we can list a corresponding Intermediate Objective. An I.O. can be either a logical statement
describing what would enable us to overcome that obstacle or, better, how we intend to do it.
For instance, if the obstacle is “I have insufficient funds” the I.O. could either be “I have
provided for sufficient funds” or, better, the solution, i.e. “I have persuaded my aunt to
sponsor me”.
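The obstacle/IO listing can be sketched as a simple mapping. The aunt example comes from the text above; the other obstacles and IOs are invented for illustration.

```python
# A minimal sketch of the obstacle -> Intermediate Objective (IO) listing.
# One IO often addresses several obstacles, so the mapping is many-to-one.
# Only the "aunt" example is from the text; the rest are hypothetical.
obstacles_to_ios = {
    "I have insufficient funds": "I have persuaded my aunt to sponsor me",
    "I have no venue deposit": "I have persuaded my aunt to sponsor me",
    "I have no venue booked": "I have reserved the community hall",
}

# Once every obstacle has an IO, we can forget the obstacles and
# work only with the (deduplicated) set of Intermediate Objectives.
intermediate_objectives = sorted(set(obstacles_to_ios.values()))

for io in intermediate_objectives:
    covered = [o for o, v in obstacles_to_ios.items() if v == io]
    print(f"{io}  <- addresses {len(covered)} obstacle(s)")
```

The deduplication step mirrors the observation in the text that, very often, one IO addresses several obstacles.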
Now that we have the solutions, the IOs, to the individual obstacles (very often one I.O.
addresses several obstacles) we can forget about the obstacles and concentrate on the
Intermediate Objectives. The next step is to provide a basic sequence to these IOs. This is for
two reasons: first, we need to ascertain “what is prerequisite to what”; second, we need to
know how many things we could theoretically run in parallel. (In reality, resources will dictate
that pace, as we will see).
Prerequisite Tree
Then we sequence the IOs, placing at the bottom of the tree the first ones we have to
achieve and then adding the others, according to a logic and time sequence. The Prerequisite
Tree is based on necessity.
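Sequencing the IOs by "what is prerequisite to what", and seeing which could theoretically run in parallel, behaves like a dependency-level sort. A minimal sketch, with hypothetical IOs and prerequisites:

```python
# Hypothetical IOs with their prerequisites (the necessity logic of a PRT).
# Level 0 sits at the bottom of the tree; IOs on the same level could,
# resources permitting, be pursued in parallel.
prerequisites = {
    "funds secured": [],
    "venue reserved": [],
    "team trained": ["funds secured"],
    "launch done": ["venue reserved", "team trained"],
}

def sequence(prereqs):
    """Group IOs into levels: an IO's level is one more than its deepest prerequisite."""
    levels = {}
    def level(io):
        if io not in levels:
            levels[io] = 1 + max((level(p) for p in prereqs[io]), default=-1)
        return levels[io]
    for io in prereqs:
        level(io)
    grouped = {}
    for io, lv in levels.items():
        grouped.setdefault(lv, []).append(io)
    return [sorted(grouped[lv]) for lv in sorted(grouped)]

print(sequence(prerequisites))
# -> [['funds secured', 'venue reserved'], ['team trained'], ['launch done']]
```

Note that the levels only say what is logically possible in parallel; as the text observes, the resources at hand will dictate the actual pace.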
The PRT and the TRT were both developed for the same purpose but they are
“genetically” different. A PRT is a map; it is based on a logic of necessity and does not
foresee any check based on conditions of sufficiency. It is, essentially, a roadmap, a guide
that provides a suitable route to the goal. Its main value lies in the collaborative effort required
to build it, and it is an ideal tool for teamwork. It is the simplest of the TPTs to learn, but it can be
misleading: it is not just a list; it is a list with priorities dictated by logical prerequisites, and
these prerequisites are, in turn, altered by the amount of resources at hand. You will be
surprised to see how many different points of view there are in a group trying to address
“what is logically a prerequisite to what”.
A Transition Tree is a conveyor of understandable, usable and comprehensive
knowledge aimed at executing a precise task with very little variation. Transition trees are
developed any time it is necessary to convey to somebody a precise procedure to transform
an input into an output. If it were used within the framework of conventional quality
assurance documentation, it would certainly belong among the so-called “work instructions”. TRTs use
an extremely powerful cause-and-effect sufficiency logic; in other words, every intermediate
output is checked in terms of the sufficiency clauses that determine it. The transition tree is
built by stating the final result we want to achieve, normally an intermediate objective (I.O.) of
a PRT. The basic, repetitive, five-block structure of a transition tree is the following:
1) a starting situation;
2) the prevailing logic of the situation;
3) the need triggered by the prevailing logic;
4) the action taken to satisfy the need;
5) the logic of the action – why we have taken that action.
Transition Tree
As a result of the action taken, the reality will change to a new state and we can easily
verify if we are getting closer to the final result we want to achieve with the TRT. At every step
we can scrutinize the effectiveness of our action, the validity of its logic and their coherence
with the identified need.
PRTs and TRTs are the Thinking Process Tools we use to ensure that all the knowledge
available is captured and used in an orderly and proficient manner. Moreover, we can use
them as a backbone for any project we want to undertake, inasmuch as they provide a clear-cut
set of actions that can be easily timed and staffed. In other words, the actions on the
TRTs can be easily transformed into tasks of a project with an appropriate duration and
allocated resources.
What I have tried to describe is the pattern that links the different phases of a
transformational project: what to change – what to change to – how to cause the change.
These phases activate very distinct elements of our intellect: intuition, understanding and
knowledge, which the human mind struggles to hold together with the cohesiveness required
by even a minimally complex project.
The TPTs have been devised to improve this cohesiveness and help us access a much
higher level of sechel. The TPTs are the quintessential support to systems thinking and, just
as with Statistical Process Control (SPC), it will take mankind several decades to appreciate
their depth and scope.
ProPrint is a success story not because of its ability to execute the FRT and successfully
enact all the trees needed to accomplish the stated goal. It is a success because the process
provided a vibrant and ambitious management team with the possibility to gauge their
strengths and weaknesses with respect to their legitimate ambitions. As a result of this
exercise they realized that the effort required to truly transform successfully the organization
they had taken over was beyond the desire they thought they had. They realized that what
they had achieved could have been very easily (and very rewardingly) incorporated by a
larger entity and become a part of a bigger organization. Accordingly, they accepted a very
generous offer from a large multinational to become one of their subsidiaries, performing
better and better what they already knew how to do. Everybody won.
In Chapter 13 on External Constraint we shall deal at length with the use of the TPTs as
applied to Marketing and Sales.
11.
Measuring
Larry Dries JD
All organizations have a goal. Business corporations have the goal of making money. It’s
important for not-for-profit businesses to make money too, even though their goal is generally
to use the money they make to support an identified purpose, e.g. curing a disease,
supporting a disadvantaged class of persons, etc. Still, the goal of the not-for-profit is crucially
dependent upon its ability to generate profit. This chapter will primarily focus upon ensuring
that a business’ goal of making money is achieved. After all, that really is the goal of for profit
business organizations, and an important step towards achieving the goals of not-for-profit
organizations. An organization may produce goods or deliver services, but the goal of that
business is to make money through the production of goods or the delivery of services; the goal
is not the mere making of the goods or the delivery of the services themselves. Let’s not be
misled.
Good decision making is the sine qua non of good management. The Decalogue is a
management methodology which utilizes a set of tools to ensure good decision making. Step
One of the Decalogue, however, recognizes that before any decisions can be taken, an
organization must “establish the goal of the system (the organization), the units of
measurement, and the operating measurements.” Why? Because the system cannot be
directed towards a goal if a goal is not first established, and decisions cannot be assessed as
right or wrong if there exists no way in which to measure the extent to which those decisions
are achieving the goal.
Measuring has historically been within the purview of the accounting discipline.
Accounting, it has been said, is the language of business. We must ensure therefore that we
are speaking a language that conveys concepts that support good decision making.
The preparation of financial statements is generally regulated by a set of rules and
regulations known as generally accepted accounting principles (GAAP). GAAP, though, is not
a single body of principles. It depends upon where you are. That fact, in and of itself, should
generate concern. Is good decision making really dependent upon where you are? I think not.
So why should our accounting language differ, subject to place? This is an indication that
perhaps the “language” is not meant to support decision making.
If you are a company doing business in the United States you probably report your
financial performance according to US GAAP, which is established by the
Financial Accounting Standards Board. Outside of the United States there has been a
movement toward using International Financial Reporting Standards (IFRS), which is a set of
principles adopted by the International Accounting Standards Board. In Canada, up until
2006, the regulating body, the Accounting Standards Board, used a set of principles similar
to, but not identical with, US GAAP. These principles were called Canadian GAAP. Since
2006 the Accounting Standards Board in Canada has abandoned Canadian GAAP and
adopted IFRS. For purposes of discussion here, we shall simply refer to all of these various
standards as GAAP.
During the days of the industrial revolution, another body of accounting principles was
developed, which is generically called cost accounting. As in the case of GAAP, there exist
various forms of this generic body of knowledge, like activity based costing, but again for our
purposes here we shall refer to these forms of accounting used by managers to understand
the costs of running a business as cost accounting. Cost accounting developed during the
early industrial age to assist large scale businesses to record and track costs, and thereby
help business owners and managers to make good decisions. The idea is correct. The
implementation, however, has fallen far short of being helpful in achieving good decision
making.
We concern ourselves with both GAAP and cost accounting because together, they are
the fundamental accounting principles which most accountants, finance professionals, and
managers use to assess organizational performance, and drive decisions. GAAP and cost
accounting are neither mutually exclusive, nor wholly consistent with one another. Cost
accounting need not comport with GAAP, as it is a form of management accounting for inside
managers. Unfortunately, as GAAP controls financial reporting to outsiders, it often gets
lumped in, to some degree, with cost accounting measures, which results in an
incomprehensible set of reports to all except the most technically trained financial
professional. That doesn’t really help managers make good business decisions. You can’t
successfully use what you can’t understand.
Moreover, many GAAP reports themselves are not really that helpful in supporting
operational decision making. For one thing, these reports are often generated at a time
significantly after operational decisions are made, and thereby contribute little to arriving at
operational decisions. Further, many GAAP reports accumulate several items under one
generic line item, which is also not that useful in driving decisions. GAAP reports may, at best
in such cases, provide a historical perspective of the efficacy of past operational decisions.
As the Decalogue is focused upon the promotion of good decision making, it requires a
set of measurements which can be used by all managers (not just financial professionals),
and drive good decision making by delivering the correct information as and when required;
that is, the measurements which every decision maker needs to know in order to reach a
valid decision. What is a valid decision? For our purposes we shall define a valid decision as
one which logically flows from consideration of the information that is relevant, in the
circumstances, to supporting the organizational goal.
Why can’t practitioners of the Decalogue rely upon GAAP and/or cost accounting to
provide the measurements to support good decision making? Neither GAAP nor cost
accounting, alone or in combination with one another, generates meaningful measurements
to support good decision making as and when required. This being the case, The Decalogue
has turned to another form of measuring to support it. This other form of measurement is
known as Throughput Accounting.
Throughput Accounting
Throughput Accounting is an alternative approach to cost accounting which is proposed
by Dr. Eliyahu Goldratt, the developer of the Theory of Constraints. It is not a costing
approach, as it does not allocate costs to products or services. It is unrelated to GAAP, as it
makes no attempt to obey any of GAAP’s rules. As such, Throughput Accounting cannot be
reconciled with either cost accounting or GAAP. Such reconciliation is a meaningless
exercise, so there is no point in wasting any effort trying to do so.
Throughput Accounting has only one aspect that makes it similar to cost accounting; that
is, it is an internal methodology. It is meant to provide an organization’s leaders and
managers with measurements that support valid decisions, and reveal the progress, or lack of
progress, the organization is making towards the achievement of its goal. In today’s world, an
organization will still require GAAP accounting. Why? Not to assist it with decision making. It
will require GAAP accounting to report to outsiders. Those are the rules of our various
societies. The bottom line is that Decalogue organizations, like non-Decalogue organizations,
speak to outsiders in the language of GAAP. Decalogue organizations, however, ignore the
language of GAAP and cost accounting in reaching decisions. Decision making, in Decalogue
organizations, is supported by the principles of Throughput Accounting.
Throughput Accounting is predicated upon a set of three variables, and the manner in
which they interact, to affect an organization’s profitability. These three variables are called
Throughput, Inventory and Operating Expense. Using these three variables, Throughput
Accounting defines a set of relationships to measure and define how revenues and expenses
of an organization are related.
Throughput (T) is the money the system generates through sales: the selling price of
what is sold minus its totally variable cost (TVC). Inventory (I) is the money the system
invests in items it intends to sell; each item is valued at
the totally variable cost associated with its creation or procurement. It does not include any
allocation from overhead or fixed expense.
Operating Expense (OE) is the money the system spends in generating units of the
goal. Examples of operating expense include: rent, utilities, taxes, payroll, maintenance,
advertising and training, as well as investments in buildings, machines, etc.
Valid decisions are ones that support the attainment of the organization’s goal. Decision
makers therefore must test their proposed decisions against three questions. Will the
decision:
1. Increase Throughput?
2. Reduce Inventory?
3. Reduce Operating Expense?
The answers to these three questions determine the effect of the decision on the system,
through relationships such as the following:
1. Net Profit (NP) = T – OE
2. Cash Profit (CP) = T – OE – I
3. Change in Net Profit (ΔNP) = ΔT – ΔOE (which should be > 0 to support a valid
decision)
4. Change in Cash Profit (ΔCP) = ΔT – ΔOE – ΔI
By testing decisions against the manner in which they influence the foregoing
variables, and the relationships among those variables as those relationships are briefly
shown above, the genuine effect a decision has upon the organization can be quantified
and understood. These are measurements that really drive good decision making.
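The relationships above can be sketched directly as a decision test; the function names and all the figures are invented for illustration.

```python
def net_profit(T, OE):
    """NP = T - OE"""
    return T - OE

def cash_profit(T, OE, I):
    """CP = T - OE - I"""
    return T - OE - I

def is_valid_decision(dT, dOE):
    """A decision is supported when the change in net profit is positive:
    delta NP = delta T - delta OE > 0."""
    return (dT - dOE) > 0

# Hypothetical decision: spend 10,000 more in OE to generate 25,000 more T.
dT, dOE = 25_000.0, 10_000.0
print(net_profit(200_000, 150_000))   # current NP on invented figures
print(is_valid_decision(dT, dOE))     # delta NP = 15,000 > 0 -> supports the goal
```

Testing a proposed decision then reduces to estimating its effect on T, I and OE and checking the sign of the resulting changes.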
Throughput Accounting is not concerned with calculating the cost of a product, as is cost
accounting. It strategically supports and monitors the decision making processes connected
with the generation of cash.
measurement of throughput. Also, a report such as the GAAP cash flow statement, for
instance, is merely a reconciliation of the income statement earnings to the change in the
organization’s bank account, which provides a manager with absolutely no meaningful
information upon which to base any decision.
As previously noted, throughput recognizes the time component of cash generation. (It is
important to note at this point that throughput is not recognized until the organization actually
receives the money. It is not an account receivable.) This then, highlights how the
measurement of throughput provides meaningful information concerning actual cash flow.
GAAP reports, and cost accounting to a lesser but still unacceptable degree, fail to recognize
the time aspect of cash generation. A Throughput Accounting net cash profit statement is a
far more valuable report than GAAP’s income statement, balance sheet and cash flow
statement.
Cost accounting principles are in many ways worse still. Cost accounting burdens its
outputs with all types of artificial notions. Cost accountants are expert at allocating an
organization’s fixed costs in a given time period over the products produced in that given
time period. The problem with this exercise is that it does not really attribute costs for the
reasons the costs are genuinely incurred. This skews decision making, as it introduces
artificial goals (i.e. higher gross margin), as compared to the genuine goal of the
organization (i.e., making money). For example, research and development costs are items
of operating expense, not items of fixed cost that should be allocated against products. The
decision to engage in research and development is driven by the need to ensure the development
of products in the future; in other words, the need to have sustainable sources of revenue.
However, if decision making regarding research and development efforts is based upon
their effect upon product cost (as a result of cost accountants allocating them as part of the
expense of already existing products), then the organizational goal is not addressed and
the decision to fund research and development is flawed. It simply becomes the result of an
artificially created accounting measurement unrelated to the genuine purpose of research
and development. Such a measuring system has no place in any credible decision making
paradigm.
Let’s see some other ways in which Throughput Accounting treats certain
measurements differently than how GAAP treats those same measurements. These
differences are materially significant because the measurements themselves, as a result of
the manner in which they are calculated, drive vastly differing decisions. To be perfectly
clear, this means that the decision making is materially impacted by the measuring system
itself. That being true, the organization’s achievements are fundamentally driven by the
measuring system. That’s why it is vitally important to have a measuring system that
supports decision making by providing information that has a cause and effect relationship
to real world outcomes, not one that relies upon artificial constructs that do not support real
world causal relationships.
                      Throughput Accounting              GAAP
Sales                 Recognized when cash is received   Recognized when product ships (typically)
Cost of sales         TVC, raw material only             Raw material plus allocated overhead incurred when the product was produced
Plant expenses        Recognized when incurred           Capitalized into inventory produced in a given period and recognized when the product is later sold
Balance sheet         Not necessary                      Required due to accrual accounting and matching of revenues to expenses
Capital expenditures  Recognized when cash is paid       Recognized over the useful life of the asset
The GAAP measurements in the above table are artificial constructs. They are not
reflective of real cause-effect relationships; as such, they can lead to bad decision making.
profitability. We earlier noted that decision makers must test their decisions against three
questions. Will the decision:
1. Increase Throughput?
2. Reduce Inventory?
3. Reduce Operating Expense?
These three simple questions embody the only things a decision can meaningfully
impact. Interestingly, only throughput generation is absolutely necessary to ensure a
sustainable business. That’s not to say that reducing inventory or operating expense cannot
have positive impacts upon a business, rather it is simply a recognition that without
throughput there is no business. This cost-centred assumption is the one that must be invalidated:
organizations must become performance oriented rather than obsessively cost conscious. How
does this relate to measuring?
As Throughput Accounting concerns itself with measuring throughput, it really is
fundamentally dependent upon, and related to, the needs of the customers of the
organization. How do you increase throughput? It’s not an accounting problem. It’s not a cost
cutting exercise. It’s a realization that the organization’s success is dependent upon meeting
the needs of its customers. Hence, Throughput Accounting, like all organizational activities, is
an activity that depends on and affects other activities. That’s the definition of
interdependence. Measuring is just one of the interdependent activities of any organization. It
certainly relates to customer needs through the marketing and sales functions; and,
Throughput Accounting should support decision making in, among other areas, the areas of
marketing and sales.
This brings us to the measurements referred to earlier, throughput dollar days and
inventory dollar days. Throughput dollar days measure the organization’s ability to deliver to
its customers on time. By delivering on time, throughput is increased in the short term by
realizing sales sooner. Customer confidence is also increased. Throughput dollar days are
calculated by multiplying the throughput of a late shipment by the number of days it is late.
The aim is for throughput dollar days to equal zero.
Inventory dollar days measure the organization’s ability to turn its inventory; it’s a
measurement of the effectiveness of the supply chain. Throughput Accounting recognizes
inventory as a liability, being cash trapped in the system. Inventory dollar days are calculated
by multiplying the value of an item of inventory by the number of days that item of inventory
has been in the system. Inventory dollar days can never be zero, but less is definitely more.
The goal is to carry only as much inventory as is required to reduce throughput dollar days to
zero. These are the measurements of Throughput Accounting that support a “performance
world view”, as compared to a “cost reduction world view.”
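The two calculations described above can be sketched directly; the shipment and inventory figures are invented for illustration.

```python
def throughput_dollar_days(shipments):
    """Sum of (throughput of a late shipment x days late); the aim is zero."""
    return sum(t * max(days_late, 0) for t, days_late in shipments)

def inventory_dollar_days(inventory):
    """Sum of (value of an inventory item x days it has been in the system)."""
    return sum(value * days_in_system for value, days_in_system in inventory)

# Hypothetical data: (throughput, days late) and (value, days in the system).
late_shipments = [(5_000, 3), (2_000, 0), (1_000, 7)]
stock = [(4_000, 12), (1_500, 45)]

print(throughput_dollar_days(late_shipments))  # 5000*3 + 1000*7 = 22000
print(inventory_dollar_days(stock))            # 4000*12 + 1500*45 = 115500
```

As the text notes, these numbers are only comparable to other dollar days of the same kind: the point of tracking them is the trend towards zero TDD and ever-lower IDD.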
Dollar days can only be compared to other dollar days; they are neither monetary nor time-based
measurements. Therefore, throughput dollar days are only comparable to other
throughput dollar days, and likewise inventory dollar days are only comparable to other
inventory dollar days. There exist no similar GAAP or cost accounting measurements that can
be analogously compared. These are genuine measurements of organizational activities - the
effectiveness of on time delivery, and the effectiveness of the supply chain operation. They
are real examples of how Throughput Accounting genuinely supports essential decision
making concerning matters that every organization faces.
12.
MANAGING VARIATION
What is variation?
It is beyond the scope of this chapter to teach the fundamentals of Statistical Process Control;
Don Wheeler and other experts have already done that majestically. The goal of this chapter
is to examine the fundamental role of managing variation in any effort to manage an
organization systemically, in particular using the Decalogue methodology.
If we examined the arrival times of employees at their workplace every day, we would
immediately notice that nobody ever manages to get to the office at exactly the same time.
No matter how optimized their routine is, the arrival time is always different. What prevents
everybody from getting to work at exactly the same time every day is variation.
Why can’t any process be standardized so that no variation occurs? It is because in
nature there is a ‘variable’ called entropy that accounts for the variation associated with every
process. The 2nd law of thermodynamics states that any “spontaneous” change in an isolated
system is accompanied by an overall increase in entropy. When water evaporates, molecules
are dispersed and tend to occupy the whole space, resulting in an increase of entropy. The
entropy of the universe, for example, is always increasing.
Entropy is a measure of disorder, or randomness (variation) in a system. Any
organization, or system, in its spontaneous evolution, is naturally affected by the increase of
entropy. The day-by-day repetition of simple actions at our work place will never be the same
because of the natural increase of entropy.
Variation affects all aspects of our life, and all processes in an organization. It is of
profound importance to managers who, in order to exert their role, must ensure a stable and
predictable environment. Indeed, the essence of management is prediction. Let’s have a look
at a few key points we have to consider when we talk about variation.
Walter Shewhart was the first to have an intuition about this phenomenon, which is
intrinsic to every process and system. We can define a process as a set of actions/activities
that happen over time, following a rationale/procedure and aimed at a specified goal.
Shopping for food at the market with our family on a Saturday is a process and so is the set
of actions that get us to the office every day.
Let’s examine the set of actions that everybody goes through to get to work, and the relevant
variations:
• How many minutes do people stay in bed after switching off the alarm? Perhaps
they stayed up late the night before, so they want to stay in bed a few more
minutes;
• Do they make the coffee, or does some other family member make it?
• Is the bathroom free or do they have to wait to get in?
• How long do they stay in the shower and how long will they take to get dressed?
Did they decide the night before what they were going to wear?
• Will the car start first time or will they have to let the engine warm up?
• How many red lights and how many green lights will they come across on their way
to work?
• Will they find a parking spot near the entrance to the office?
The arrival time depends on how these simple actions, with their associated variation,
are combined together.
The variation associated with each action is described mathematically by its ‘variance’,
whereas the variation associated with the whole process results from combining the
variances of the individual actions together with their covariances, i.e. the way the actions
vary jointly. As a matter of fact, processes inside an organization are highly interconnected
and interdependent, and predicting the outcome of a sequence of events becomes very difficult.
Let’s consider the sales process of a company. ‘Scoring’ a sale is just the last step of a
process that starts with purchasing the raw material, goes through manufacturing and
assembly, up to shipment. To be sure we score the sale we have to consider the variation
associated with every part of the process:
• the arrival time of raw material
• time to inspect it
• time for moving material from the raw material warehouse to manufacturing
• time for manufacturing
• time for assembly
• time for moving material to the warehouse of finished products
• shipment time
There is no way to ‘predict’ the final outcome of the process without considering the
different steps that brought us to the finalization of the sale; the only way is to approach the
process ‘holistically’.
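A rough way to see why the process must be approached holistically is to simulate it. The step durations, their spread, and the way a delay in one step stretches the next are all invented assumptions, not data from the text.

```python
import random

random.seed(7)

STEPS = ["raw material arrival", "inspection", "move to manufacturing",
         "manufacturing", "assembly", "move to finished goods", "shipment"]

def run_process():
    """One run of the sales process: each step's duration varies, and a delay
    in one step also stretches the next one (a crude interdependence)."""
    total, carry = 0.0, 0.0
    for _ in STEPS:
        duration = random.gauss(10, 2) + 0.5 * carry  # hours; invented figures
        carry = max(duration - 10, 0)                  # delays propagate
        total += duration
    return total

runs = [run_process() for _ in range(10_000)]
mean = sum(runs) / len(runs)
var = sum((r - mean) ** 2 for r in runs) / len(runs)
print(round(mean, 1), round(var, 1))  # the total time varies from run to run
```

Because delays propagate, the variance of the total is larger than the sum of the individual step variances would suggest: predicting the outcome requires looking at the whole chain, not at one step at a time.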
If the components of a system are independent (non-interacting), the cause-effect
relations among the parts are linear and the performance of the system is simply the sum
of the performances of its components:

Perf.(system) = Perf.(A) + Perf.(B) + Perf.(C) + …

Independent network

On the other hand, if the components are dependent and/or interacting, the cause-effect
relations among the various parts of the system are “non-linear”, and the Theory of Systems
shows that:

Perf.(system) ≠ Perf.(A) + Perf.(B) + Perf.(C) + …

Interdependent network
Complex system
Understanding processes
The most practical way to understand processes is to draw a picture of them, by means
of a flowchart. A flowchart is a diagram that describes a sequence of events, tasks and
decisions that transform the input of a process/system into output. Flowcharts use some standard
symbols and conventions that make the process easier to communicate and understand,
portraying it as a map or chain of activities and decisions.
We can describe the flow of materials, information and documentation; we can show the
various activities included in the process, explain how these activities transform input into
output, and indicate the decisions that have to be made along the chain; we can depict the
important interrelations and interdependencies among the various phases of the process;
and we can easily acknowledge that the strength of a chain depends on its weakest link.
In describing processes we realize that it is very difficult to establish very precise borders
between departments and functions. In order to deliver a product, or provide a service, these
fictitious separations are often crossed. We see, therefore, that these processes cut across a
traditional organization chart or pyramid. Flowcharts are then the key to understanding
whether, how, and where every single link adds value to the chain.
“…a flow diagram also assists us to predict what components of the system will be
affected, and by how much, as a result of a proposed change in one or more components…”
(Deming, The New Economics).
What we do is to map out processes and compare them with how they should ideally
work, so as to understand where complications lie, identify misalignments between authority
and responsibility if any, look for critical points, and determine breakage points in the chain
that link the supplier with the customer.
From this analysis we have to recognize ‘Key Quality Characteristics’ (KQC), i.e. aspects
of the processes that heavily influence their capacity to contribute to the goal of the system.
By highlighting these critical characteristics, we can identify the points where it is more useful
to gather data on the variables of the processes. From the analysis of these data we can
understand whether the processes are in control (predictable) or not before taking any action
to improve them, and whether improvement actions are effective or not.
Using flowcharts
The purpose of flowcharting is to understand the system and its interdependencies,
understand its behaviour (monitoring its variation), and take the right decisions to improve it,
and not just do a nice drawing for our boss.
There are two kinds of flowcharts, the process flowchart, and the deployment flowchart.
A process flowchart simply presents the sequence of activities and the decision points in
a process. It does not show the people who are involved in every phase.
A deployment flowchart (DFC) describes who does what. It shows the interactions
among people in the various phases of the process. It is crucial to know what these
interactions are to really understand how the process works and how to improve it.
This is a very simplified example of a DFC of a purchasing process. The KQC identified are:
Q1: types of items to be purchased (number and frequency)
Q2: suppliers with covenant
Q3: suppliers with no covenant
Deployment Flowchart: Manufacturing Operations & Plant Shipping
A different way to represent and/or summarize data is the run chart. A run chart is a
graphical representation of the evolution of the data. This representation makes evident any
change in the system over time.
A run chart
This representation makes evident how the process oscillates, its range of oscillation and
provides a framework for making sense of data.
Variation can be reduced, but it can never be eliminated. Entropy exists.
The main problem we face when we deal with variation is to understand it. Variation
associated with processes can be of two different types:
1. variation due to causes intrinsic to the process/system; these causes are always
present, and are called common causes of variation
2. variation due to special causes; these are not part of the system (that has generated
the common causes)
In the first case we say the process/system is in “statistical control” because it is subject
to a pattern of variation that is predictable over time. In the second case we say that the
process/system is “out of statistical control”, as it is subject to variation that is unpredictable
over time. Management in these two states is radically different.
We use statistical methods to filter ‘noise’ (routine or intrinsic variation) from the data
in order to detect ‘signals’ (special causes of variation).
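This filtering of noise from signals can be made operational with an individuals (XmR) control chart. The sketch below is a minimal illustration, not a method prescribed by the book; the 2.66 constant is the standard Shewhart/Wheeler scaling factor that converts the average moving range into approximately three-sigma natural process limits:

```python
def xmr_limits(data):
    """Natural process limits for an individuals (XmR) chart:
    mean +/- 2.66 * average moving range (~ mean +/- 3 sigma)."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def signals(data):
    """Points outside the natural limits: candidate special causes."""
    lcl, ucl = xmr_limits(data)
    return [(i, x) for i, x in enumerate(data) if x < lcl or x > ucl]

# Hypothetical daily sales (MT): a stable series with one injected spike.
sales = [2000, 2100, 1950, 2050, 1980, 2020, 2080, 1990, 3500, 2010]
print(signals(sales))  # only the 3500 MT day is flagged as a signal
```

Everything inside the limits is common-cause noise; reacting to it as if it were a signal is exactly the confusion between the two types of variation discussed below.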
In order to further improve this process we can only try to reduce its variation with the
following actions:
• Stratify the data by dividing them into categories based on different factors and
analyze how the data fall into subgroups
• Separate the data by dividing them into various categories and treating them
separately from the others
• Gain experience by applying the Deming Continuous Improvement Cycle (PDSA):
plan, do the experiment, monitor its results, learn from the effects observed and
act
When a process is out of control, there is not a lot we can say about it.
Indeed, its behaviour is not predictable over time. It is subject to unpredictable jumps and
all the data relating to it lose their predictive potential and become ‘historical’ data.
In order to act on an out of control process, i.e. try to bring it into control we must:
• Gather data as quickly as possible to identify rapidly the special causes that
generate instability in the system
• Activate an emergency solution to limit damage
• Find out what made the special cause occur
• Implement a long-term solution
There are at least two excellent reasons for not wanting a process to be out of control.
The first is the impossibility of prediction, which often makes it impossible to plan and
carry out programs.
The second is linked to the costs associated with activities in a company that confuses
common causes with special causes of variation.
Indeed, performance that seems good will often disguise poorly optimized use of
resources.
However, this does not mean that ‘out of control’ is always bad. When we have a stable
process, say a series of sales data that oscillates predictably within the upper and lower
control limits, and we are trying to enforce a plan to increase sales, we actually hope to see
the system go OUT of control on the upper side, so as to detect that an ACTION caused a
shift in the system toward the desired direction, namely an increase in sales. By the same
token, a process in statistical control is not necessarily a desirable process; oscillation limits
that are too wide are often the result of poor understanding and execution of the process and
force unnecessary costs on the system.
Again, the analysis must be performed with intelligence and common sense, and the
charts have always to be read considering the operational context.
We make a control chart for the material flow in and out of the warehouse and determine
its average value and its variation.
We make the control chart of the sales (consumption), and if sales are in control, i.e.
predictable, we can trust that the value will stay within 3 sigma (sales) of its average value,
let’s call it AC (average consumption per unit of time).
Our supplier delivers every N units of time, with a variation sigma (supplier) (also
assessed with a control chart). What is the minimum safe level of products in the warehouse?
This is how the two processes under discussion, daily inventory of raw material and daily
sales, could appear:
This process is in control, and shows a wide range of variation, 400–11000 MT. Is the reason
for this wide variation linked in any way to the sales process? This is the relevant control chart:
The above process is out of control, i.e. unpredictable, even though it shows a drastically
narrower variation compared with the inventory of raw material.
Let’s analyze and compare the two charts. The daily average stock of raw material is
about 6000MT, with a range of variation between 400MT and 11000MT, and the process
oscillates in a predictable way between the two natural limits. The daily average sales is
about 2000MT, with a range of variation between 1500MT and 2500MT, and the process is
not predictable.
We are still missing one parameter to establish the right level of inventory; let’s see what
the control chart relevant to the lead-time from the supplier to our warehouse looks like:
The process is in control, with an average delivery time of 1.5 days, and a range
oscillating between less than one half day up to 3 days.
Based on the above numbers, the correct stock level according to the pace of our sales is:
Stock level = (AC + 3 sigma(sales)) × (N + 3 sigma(supplier))
that means:
2500 MT/day × 3 days = 7500 MT
This would be the case if the processes were in control, but unfortunately sales are out
of control so, in order to set the stock level, we have to bring sales back in control, then we
monitor and assess the pace of our sales, calculate the average, and only then can we apply
the above formula.
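Using the notation of the text (AC, sigma(sales), N, sigma(supplier)), the stock-level formula can be sketched as a small calculation; the sigma values below are backed out from the chart ranges quoted above (3·sigma(sales) = 500 MT, 3·sigma(supplier) = 1.5 days):

```python
def stock_level(avg_consumption, sigma_sales, lead_time, sigma_supplier):
    """Minimum safe stock: worst-case daily consumption (AC + 3 sigma)
    multiplied by worst-case replenishment time (N + 3 sigma)."""
    worst_daily = avg_consumption + 3 * sigma_sales
    worst_lead = lead_time + 3 * sigma_supplier
    return worst_daily * worst_lead

# AC = 2000 MT/day, 3*sigma(sales) = 500 MT,
# N = 1.5 days, 3*sigma(supplier) = 1.5 days.
print(stock_level(2000, 500 / 3, 1.5, 0.5))  # ~2500 MT/day * 3 days = 7500 MT
```

The calculation is only meaningful once both processes display statistical control; with sales out of control, the averages and sigmas have no predictive value.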
This level of inventory will “absorb” any variation due to common causes that we can ascribe
to the pace at which we sell or the lead time from the supplier to our warehouse. In the example
above, this could be done by ascertaining what sent the system out of control; if the special
cause is identified (and the problem fixed), we can remove that data point from the calculation
and see if the “new” set of data (without the out of control point) displays predictability.
Once we have established the stock level, let’s say 7500MT, we monitor the pace at
which we use it, and we replenish exactly what we use. If we sell 1000MT we replenish
1000MT, if we sell 4000MT we replenish 4000MT. We only have to ascertain that the
consumption is in control in order to be sure that we can continue to be reliable.
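The replenish-what-you-use rule can be simulated in a few lines. This is a toy sketch, not the book's model: the supplier lead time is fixed here at 2 days purely for illustration, and each day's order equals that day's consumption:

```python
def simulate(target, daily_sales, lead_days=2):
    """Pull replenishment: ship sales from stock, order back exactly what
    was consumed; orders arrive after a fixed (hypothetical) lead time."""
    stock, pipeline, history = target, [], []
    for day, sold in enumerate(daily_sales):
        # receive replenishment orders whose lead time has elapsed
        arrived = sum(q for d, q in pipeline if d == day)
        pipeline = [(d, q) for d, q in pipeline if d != day]
        stock += arrived - sold
        pipeline.append((day + lead_days, sold))  # replenish what we used
        history.append(stock)
    return history

# Starting from the 7500 MT level, uneven sales are absorbed by the buffer.
print(simulate(7500, [1000, 4000, 2000, 1500]))
```

As long as consumption stays in control, the stock oscillates but never runs dry; the target level sized at +3 sigma absorbs the common-cause variation.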
The adoption of this model automatically includes in the picture customers, suppliers,
and interdependencies, which are completely absent in the hierarchical model. But this
approach alone is not enough to manage an organization effectively. The capacity of a
system/network of interdependent components to achieve its goal is limited by a very small
number of factors, indeed often only one: the constraint.
The constraint is what determines the pace at which a system generates
units of its goal. In for-profit companies the units of the goal are linked to the generation of
cash profit (value).
A system that is unbalanced around its constraint is like a tube with one section that is
narrower than the rest: there is one phase/process/group of resources with less capacity than
all the others.
Why do we manage a system through its constraint? It is because an unbalanced system is
simpler and cheaper to manage. In an unbalanced system everything revolves around the
constraint phase, and a detailed plan is necessary for this phase only. This schedule allows us to
manage the whole plant. Reducing (global) variation in an unbalanced system means
concentrating on and investing in the constraint phase only, not every single part of the process.
If we have to manage a plant, increasing the productivity or improving its performance is
considerably cheaper, and less wasteful in terms of time and energy, if it is unbalanced
around the constraint.
Markets that are stable and repetitive over time, both regarding quantity and product mix,
are not very common. That is why following the variation in demand is very difficult. By
unbalancing the system around its constraint, we can achieve a flexibility that is much more
manageable because the problem of sizing capacity (and making any changes) only
concerns the constraint and not every phase of the process.
The algorithm we use to manage production through its constraint is Drum Buffer Rope
(DBR). Lead times in plants managed using DBR tend to get close to the time it takes to
technically complete the process. This is made possible by eliminating almost completely
queues and piles of inventory. Obviously, this short and controllable lead time can become a
considerable competitive advantage.
The mechanism to manage a ‘physical’ constraint is, as we said, DBR. The three steps of
managing the constraint are:
• Identify
• Exploit
• Subordinate
Identification is relatively simple, since we can very quickly identify the machine, or the
process, that limits the capacity of the system. However, considerations about capacity are
necessary but not sufficient to identify the constraint. Indeed, in a real plant the product-
market demand mix will vary over time and influence the capacity required by the various
work centres. It is crucial that, once it has been chosen, the constraint remain the same over
time, even when there is an increase in production capacity (“unbalanced” increase). That is
why strategic considerations come into play in the identification of the constraint.
The constraint must be chosen bearing in mind that it is the element that determines the
speed of cash generation for the whole organization. The main criteria include:
• Assessment of future strategies and scenarios in the market
Subordinating means making sure the system acts in such a way that the constraint is
allowed to work flat out on the right product mix. Subordinating means having an action plan
for the resources, i.e. a scheduling that takes into account the presence of the constraint (we
must never starve the constraint). The data/elements that are fundamental for scheduling are:
• BOM (Bill Of Material) / ROUTING
• Raw material
• WIP (Work In Progress)
• Resource availability/calendars
• Buffer
• Sales orders
By definition, every minute lost on the constraint is lost forever, with an evident loss of profit.
Once we have strategically identified/chosen the constraint, we create a detailed plan for
it bearing in mind two things:
1. The constraint must work at 100% of its capacity
2. The constraint must work on the right product mix
As far as the rest of the plant is concerned, it is enough to know, order by order, the overall
lead times, i.e. lead time 1, that takes into consideration the time between release of raw
material and the constraint, and lead time 2 from after the constraint to the end of the process.
Constraint
The orders from clients are the starting point of scheduling, which is performed working
backwards from the delivery date promised to the client.
If lead time 1 = T1 minutes, and lead time 2 = T2 minutes, the DBR algorithm introduces
two time intervals to protect the process that are called buffers.
Shipping buffer (before the delivery date) = 20% × T2 minutes
Constraint buffer (before the constraint) = 20% × T1 minutes
These buffers have the purpose of protecting the customer’s orders by protecting the
constraint from the intrinsic variation of the process. The unit of measurement of the buffer is time.
Buffer time
In order to protect the constraint, we have to make sure that material is ready a “buffer of
time” in advance in front of the constraint itself.
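One plausible way to sketch this backward scheduling in code is shown below. The buffer formulas (20% of the lead time they protect) come from the text; the exact placement of the constraint's own processing time within the schedule is an assumption for illustration:

```python
def dbr_schedule(due_date, t1, t2, constraint_time, buffer_factor=0.2):
    """Work backwards from the promised delivery date (all times in minutes).
    Buffers are sized at 20% of the lead time they protect, as in the text."""
    shipping_buffer = buffer_factor * t2
    constraint_buffer = buffer_factor * t1
    constraint_end = due_date - t2 - shipping_buffer
    constraint_start = constraint_end - constraint_time
    material_release = constraint_start - t1 - constraint_buffer
    return {"material_release": material_release,
            "constraint_start": constraint_start,
            "shipping_buffer": shipping_buffer,
            "constraint_buffer": constraint_buffer}

# Hypothetical order: due at minute 1000, T1 = 300, T2 = 200,
# 50 minutes of processing on the constraint.
print(dbr_schedule(1000, t1=300, t2=200, constraint_time=50))
```

The only detailed schedule is the constraint's; everything else is derived from it, which is what makes the unbalanced plant cheap to manage.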
In the case of buffer consumption, managers find themselves faced with a basic conflict:
intervene on the system vs. not intervene on the system.
The two basic assumptions, ‘I don’t know in which cases to intervene’ and ‘I don’t know how
to intervene’, are invalidated by injections, i.e. by defining the rules of intervention:
The traditional TOC approach divides the buffer into three zones and suggests different
behaviours according to the zone of the buffer that has been ‘emptied’. The buffer will ‘empty
out’ or ‘fill up’ depending on whether tasks or operations are completed early or late with
respect to the schedule.
However, as we said, the ability to predict is the essence of management, and only if we
are able to foresee the outcome of our actions on the system can we take decisions that
anticipate the development of events and therefore have more control over them. In the
traditional TOC approach we ‘react’ to a situation; when the buffer is emptied, we react. This
is not the Decalogue approach.
through an A-shape network and V-shape network connected to each other as shown
below:
Simulator
The cost of the raw material, the selling price for each product, and the market demand
by product are established at the beginning of the simulation, and production time, setup
times and number of resources have been specified.
The goal of each simulation is to produce a pre-determined weekly market demand that
is to deliver 40 items of the first product, 52 of the second product and 40 of the third one.
In an ideal world, where we have a fixed weekly demand, the moment we finish producing a
product we sell it and collect the money, the moment we purchase raw material we receive it
(and pay for it), and production times are fixed and 100% reliable. Even though we have
finite capacity (in machines and cash) we manage to schedule the machines
to satisfy market demand. However, as soon as we introduce variation into the system the
same scheduling plan fails to satisfy demand. Managing variation entails, as we have said,
protecting the resources with a ‘buffer’.
Once we have identified the constraint of the system, which can easily be done by
calculating the time each resource is taking to produce the material to satisfy the demand, we
protect it with a buffer. What the repeated simulations show us is that introducing variation, up
to a certain extent (even 50% of variation), on production time on every resource except the
constraint does not affect our ability to satisfy market demand. The extra capacity of non-
constraint machines is enough to absorb a considerable amount of variation and, as a
consequence, there is no immediate need to buffer, provided that the variation is not huge.
On the contrary, even a small measure of variation on the constraint's production time (5%
already makes a considerable difference) is sufficient to shift the outcome significantly away
from the ideal result.
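Identifying the constraint "by calculating the time each resource is taking to produce the material to satisfy the demand" can be sketched as follows. The weekly demand (40, 52 and 40 items) comes from the simulation above; the per-unit routing minutes are hypothetical, since the book's simulator uses its own figures:

```python
# Weekly demand from the simulation: 40, 52 and 40 items of three products.
demand = {"A": 40, "B": 52, "C": 40}

# Minutes each resource spends per unit of each product (hypothetical routing).
minutes_per_unit = {
    "R1": {"A": 10, "B": 5, "C": 8},
    "R2": {"A": 6, "B": 12, "C": 4},
    "R3": {"A": 7, "B": 6, "C": 9},
}

def constraint(demand, minutes_per_unit):
    """The resource carrying the highest total load to satisfy demand
    is the capacity-constraint candidate."""
    load = {r: sum(times[p] * demand[p] for p in demand)
            for r, times in minutes_per_unit.items()}
    return max(load, key=load.get), load

print(constraint(demand, minutes_per_unit))
```

Capacity load alone is necessary but not sufficient, as the text stresses: the final choice of constraint is strategic, since it must stay put as the product mix shifts.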
The question is now how to manage the buffer. A system is stable if it produces
predictable results, and the ability to predict is the true essence of management. What really
matters here is: is the system stable? There is a simple and powerful mechanism that can
answer this question. It is called buffer management.
If the processes that influence the buffer are not stable, we cannot decide on the amount
of protection (sizing of the buffer).
The buffer becomes a control mechanism only if the processes of the system are in
control and consumption of the buffer oscillates in a stable way, with an upper limit lower
than the maximum width of the buffer.
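That stability condition can be checked with the same individuals-chart limits used earlier. A minimal sketch, assuming XmR limits and checking only the upper side (the lower limit is irrelevant to buffer overrun):

```python
def buffer_is_usable(consumption, buffer_size):
    """Buffer management works as a control mechanism only if buffer
    consumption is stable AND its upper natural limit (XmR chart,
    mean + 2.66 * average moving range) stays below the full buffer."""
    mean = sum(consumption) / len(consumption)
    mr = [abs(b - a) for a, b in zip(consumption, consumption[1:])]
    ucl = mean + 2.66 * sum(mr) / len(mr)
    in_control = all(x <= ucl for x in consumption)
    return in_control and ucl < buffer_size

# Daily buffer consumption in minutes against a 90-minute buffer.
print(buffer_is_usable([30, 35, 28, 40, 33, 36, 31], buffer_size=90))
```

If the upper limit exceeds the buffer width, the buffer is undersized (or the feeding processes are too erratic), and no sizing decision can be trusted.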
SPC has a large variety of applications but, as Dr. Deming said, it is a ‘way of thinking’.
Becoming accustomed to this way of thinking is not easy, and requires a great deal of effort,
as does acknowledging the existence of variation. As strange as it may seem, the majority of
people do not take variation into consideration at all, and they perceive any change in
processes as something ‘good’ or something ‘bad’. If you compare two numbers you will
always have one bigger than the other, or equal. With SPC we have the possibility to
compare two numbers and understand that they are generated by the same process, and that
any difference is due to one thing alone: entropy.
Managing the inevitable increase of entropy to maximize throughput is the only possible
goal for a manager.
13.
A company’s success is based on the level of satisfaction of its customers. Therefore, any
organization that wishes to be successful in the long term must build one reliable system that
keeps customers and their needs at the forefront. Marketing and Sales act as the company’s
representatives towards the customer.
The role of Marketing is to act as a constantly updated repository of knowledge about:
• who we are – traditionally, companies formulate vague and generic mission
statements; instead, they should aim at understanding, in its entirety, the business
environment in which the company exists;
• what we do – products and/or services that the company has to offer and how these
differ from other offerings;
• who can benefit from our offerings and how to find them. A thorough understanding of
current customer base, analytical ability to identify potential customers and design
precise plans on how to approach these customers and win them over;
• how to communicate all of the above;
• how to ensure consistency.
Well organized and managed systems will deliver consistent and predictable results. For
optimal management of resources all activities related to the five points above must be
executed with predictability and in a synchronized way.
To satisfy both predictability and synchronization, Marketing and Sales are built as a
‘production chain’ where every step has inputs and outputs, every step has competent
resources allocated to it and has increased Throughput as a goal.
Marketing Plan
Predictable and repeatable results cannot be achieved through unorganized activities. The
Marketing Plan provides a solid structure where each element requires conscious thought
about the customer we are focused on. The Marketing Plan allows us to develop our potential
customers (leads) into loyal, long-term customers and, ideally, promoters of our offer.
Customer stages
The Marketing Plan must identify the right customer and the right need that will be
satisfied with our solution. It should not be limited to the traditional 4Ps (Product, Price, Place,
Promotion) but should incorporate all aspects needed to drive the business. The Marketing
Plan is the tool to guide the thinking and as such cannot be a constraint.
Customer steps
Our customers’ experience is not limited to the product. The size of our product portfolio
and range of services offered, ordering and invoicing, Sales support and the possibility of
providing feedback all play a great role in customer satisfaction.
The continuous feedback channel provides constant updates on changes in the customer's
reality; as a result, companies must adjust the product/solution to the customer, rather than
expect customers to adjust themselves to the product.
By satisfying the customer needs of today we are changing their current reality and
creating demand for different/new solutions. By constantly delivering desired solutions to our
customer we ensure consistency. We plan for change using the PDSA:
PDSA cycle
Act:
• adopt the change
• or abandon it
• or run through the cycle again, possibly under different environmental conditions
Marketing structure
Traditionally, Marketing is structured according to product groups/families, where
performance is measured by the achievement of budgetary numbers. Every person/group
acts in isolation, with a very low degree of cooperation. Such an approach does not provide
optimal utilization of resources and does not keep the overall goal of the company in mind.
Instead, recurring Marketing activities should be viewed and managed as projects where
tasks are performed by the resources that are available and have the competencies to
perform these tasks.
Marketing process
Lead qualification
Lead qualification is a complex, and somewhat intricate process with multiple feedback
loops between Sales and Marketing. It starts with a clear definition of when a lead is
considered sales-ready. The goal is not to pursue every lead but to identify which leads
will produce the highest return for the company and the time it will take to produce results.
Depending on their readiness, some leads will be placed into the ‘leads bank’,
which serves as raw material for the Sales process. Other leads will be kept for
maintenance and nurturing until they become more sensitive to the company’s offering and
are ready to be processed by Sales.
Leads can be acquired in many ways – research, trade and industry events, personal
network, third party databases and ‘walk-ins’, but the truth is that most leads will not be ready
to engage, and passing them on to Sales will strengthen the general perception that
Marketing-generated leads are no good.
A qualified lead will be open and willing to meet with the Sales person and will have a
need for the product/solution the company has to offer; however, it should also be checked for
possible pitfalls such as non-payment or bad credit before it is placed into the leads bank. In
traditional organizations Sales people’s compensation is based on revenue, and therefore
they have no interest in lead-nurturing. Sales people do not want to waste their time on
potential customers who will not be buying today, therefore lead-nurturing is often viewed as
a ‘Marketing’ task. This should not be so. Marketing and sales in a systemic organization
underpinned by Intelligent Management should epitomize the interdependencies that must
exist among different professionals in the organization.
Integrating Marketing and Sales into one interdependent network will provide a solid
base for converting leads into winnable opportunities. The successful nurturing of leads will
result in a relationship with the customer that is built through an informative dialog about the
company’s offerings until the customer is ready for the next step of the Sales process.
External Constraint
As we said before, often companies find themselves in the situation where they cannot sell all
their potential capacity. Their constraint is the market, i.e. the company is constrained externally.
By analyzing the causes that prevent our company from selling more and
customers/prospects from buying more we build a basis for an agreement that will maximize
benefits for the company and its customers.
A successful agreement for us and for the customer must overcome six levels of resistance:
1. Disagreement about the problem
2. Disagreement about the direction of the solution
3. Lack of faith in the completeness of the solution
4. Fear of negative consequences generated by the solution
5. Too many obstacles along the path that leads to the change
6. Reservations about our ability/willingness to implement the solution (and about the
ability/willingness of others)
Each of the six levels of resistance can be overcome by using the Thinking Process
Tools from the Theory of Constraints (TOC).
What follows is an explanation of how the completely generic solution for the External
Constraint was developed. Indeed, this is no replacement for reading It’s Not Luck, the
novel written by Dr. Goldratt in 1994 where this approach was first described (see also
Deming and Goldratt: the Decalogue).
We start by grouping the Undesirable Effects (UDEs) and performing the cause/effect
analysis that leads to the development of the Core Conflict Cloud/ Current Reality Tree of the
customer.
Market UDEs:
• We do not believe it is possible to identify new segments of the market willing to
accept our offer;
• Competition is fiercer than ever;
• The speed with which we design and launch new products/services does not keep
up with demand and changes in the market;
• There is no way to predict the taste, trends and choices of the market;
• The market we deal with (clients and suppliers) is influenced by non-ethical
practices.
Product/Service UDEs:
• There is increasing pressure to reduce prices;
• We perceive that we are selling at a low price;
Organizational UDEs:
• Actions aimed at increasing sales are increasingly less effective (Marketing
campaign, communication, contacts and activities to increase client loyalty);
• There is no structured Marketing to keep us highly in tune with the market;
• We are not able to transform contacts into contracts fast enough. The Sales force
is not well aligned with the rest of the organization. We chase all of the sales
opportunities without having well-defined priorities;
• The Sales force does not have adequate skills (not enough technical knowledge,
insufficient inter-personal skills etc.)
• There is not a sufficiently wide and adequate network to cover the geographical
distribution and market segment we have chosen;
• Sales forecasts are not very reliable;
• The Sales process is very unpredictable.
Based on the UDEs listed above we can build the following Core Conflict Cloud for the
company that is constrained externally:
Each injection demands a series of actions that must be performed. Injections 2 and 3
relate to market segmentation and lead identification followed by qualification and placement
into the lead bank.
Injections 1, 4 and 5 deal with the pricing of our solution; a thorough understanding of
customer UDEs and how our solution will eliminate or minimize them will provide us with
strong benefit points that will gain the customer’s attention. In our customers’ opinion, a fair
price has no correlation to the GAAP-determined cost that went into the creation of the
product/solution. Fair is the price of the value and benefits our offering brings to the
customer’s reality.
By determining price oscillation for the segment our target customer is in, we understand
the range of prices this customer would consider fair. Since after injection 2 we can choose
our target segments and injection 5 has provided us with the true cost (TVC) of our offer we
clearly want to work only in those segments where the lowest price point is still higher than
our TVC.
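The segment-selection rule in the paragraph above can be sketched as a one-line filter. The segment names and fair-price ranges below are hypothetical, as is the TVC figure:

```python
def viable_segments(segments, tvc):
    """Keep only segments whose lowest perceived-fair price still exceeds
    the totally variable cost (TVC) of our offer."""
    return [name for name, (low, high) in segments.items() if low > tvc]

# Hypothetical fair-price ranges per segment, and a TVC of 120 per unit.
segments = {"premium": (200, 350), "mid": (130, 180), "budget": (90, 140)}
print(viable_segments(segments, tvc=120))  # ['premium', 'mid']
```

Any segment whose price floor sits below TVC would force us to sell at a loss at the bottom of the range, so it is excluded outright.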
By taking time to understand our customer’s issues (UDEs) and crafting our solution
specifically to address these, we are executing injection 6. Offer preparation is a recurring
activity that draws on multiple competencies of the organization. Traditionally organized
companies that have a silo structure promote the isolation of each function. In order to make
offer preparation successful it must be viewed and managed as a project.
The Make or Buy decision represents one of the most common dilemmas companies face, as
in the following diagram:
Resource Engagement Cloud 1: if the goal of the company is to maximize profit, it has to
decide whether its team should focus on getting the lowest purchase price or on building a
reliable supply chain at a fair price.
Resource engagement cloud 2: in this cloud the goal of the company is to become the
market leader and keep this position. Their conflict can be represented as in the following
diagram:
Conflict Cloud: Buy large quantities overseas vs. buy locally in small quantities – best price
Conflict Cloud: Buy large quantities overseas vs. buy locally in small quantities – stability of supply
UDE Segments
1. Undesirable effects related to Suppliers:
• Suppliers are unreliable
• Our products are complex and it is difficult to find suppliers
• Qualifying new suppliers takes too long
• We don’t understand our suppliers
• We don’t understand how their changes impact our supply chain
• The process of choosing and maintaining supplier relationships is cumbersome
2. Undesirable effects related to Organizational Structure:
• Our division does not have the authority to make decisions on material and
specifications
• Reaching a decision (gaining approval) takes too long
• We are unfairly losing business to our sister company
3. Undesirable effects related to Product/Service and Pricing:
• Qualifying new products takes too long
• There is increasing pressure to reduce prices
• There is internal pressure to maximize profit year after year
Sales process
Qualified leads provided by Marketing serve as raw material for the Sales process, which
starts by initiating contact with the customer. The goal of the contact is to set up an
appointment/visit during which we can learn about our customer’s reality, their issues and
wishes by collecting their undesirable effects.
These UDEs are inputs for a cause-and-effect analysis that is performed during the offer
preparation stage. Understanding UDEs and their interaction will enable us to identify the
most suitable way to design win-win solutions. The process starts by identifying the generic
cloud applicable to the customer we want to address and continues with a detailed ‘tailoring’
of the cloud through a customer-specific verbalization. The Core cloud of the customer helps
express his “core problem”, i.e. the set of assumptions that keep him in the conflict, and the
logical reasons for the existence of UDEs.
During the UDE collection process we want to make sure that we collect all or most of the
valid UDEs. This is achieved by following the steps and the logic of UDE collection. The Transition
Tree (TRT) was designed to give clear instructions on what to do, when to do it and why we are doing it.
Offer presentation
Once our offer is prepared and validated, we need to make the customer aware of it. The
outcome of our offer presentation must be predictable and repeatable. This cannot result from
ad-hoc activities. The offer presentation stage follows a very precise process that is described
by the following Transition Tree (TRT).
Post-presentation
Once our offer is presented and accepted we work with the customer on the
implementation. During this stage we tackle negative implications and assist our customer
with the transition from their current state to our solution.
Internally, our organization has to be prepared to service new customers. In addition to
purely administrative tasks (account set-up, credit limit etc.), Sales works with Customer
Service on transitioning the knowledge and to-date history, establishing supply chain rules,
required follow-up and feed-back on possible changes on the customer’s side.
When deciding on the priorities for offer presentation (and the ensuing transition to post-
sales support) it is extremely important to consider the speed at which the offer will generate
Throughput. No sale should be recognized until we have received the payment. Part of the
pricing process should include consideration of the size of the prompt-payment discount, as
some segments will be very sensitive to it.
Ideally, after presenting our offer the customer starts purchasing our products
immediately on a pre-paid basis.
account; the work therefore continues, but it is now performed by a different set of
competencies within our system: Customer Service.
After the final steps of the Sales process are completed, i.e. we have presented our offer
and the details are finalized and documented, Sales, Marketing and Customer Service
perform another recurring activity: the transfer of the account.
This step is critical as it ensures that knowledge about the customer that was gathered
during previous steps is passed on to Customer Service. After the transfer is completed
Customer Service representatives know whom they will be dealing with, what issues the
customer had in the past and how we resolved them with our offer. Customer Service is
advised of particulars and details that are critical for the customer.
From that point on, ALL tasks related to processing the order, tracking the progress,
invoicing and informing the customer of the status fall under Customer Service. Customer
Service receives the order and enters it after verifying the details (pricing, delivery conditions
etc.) against our agreement. Customer Service also tracks what stage of processing the
order is in, schedules delivery or pick up times and issues the invoice.
Customer Service also works on resolving possible day-to-day issues that might arise,
such as wrong/lost invoices or late deliveries. They monitor the customer's credit status and
limits and constantly feed information back to Sales and Marketing.
Customer Service representatives become the daily contact for all customer needs
related to the order but do not block Sales people from staying in touch with the customer.
Moreover, there is a constant information flow between Marketing, Sales and Customer
Service. Through daily contacts with the customer, Customer Service will recognize the signs
of changed reality and involve Sales and Marketing when needed.
We understand that after some time our offer and continuous service will change our
customer's reality and their UDEs. This might manifest itself in the customer's demand for a
lower price, different delivery conditions or other products.
It is important to recognize these signs and arrange for Sales representatives to visit and
collect a new set of UDEs. This is where we start the cycle of ongoing improvement. From the
UDE collection we continue with the process of analyzing cause and effect, then crafting and
presenting a new offer, and the cycle runs the course described above, resulting in a
successful presentation.
The sustainable growth of the company cannot be achieved with an uncoordinated, purely
functional structure. Crafting and presenting successful offers that cannot be duplicated
requires close integration of Marketing and Sales, with the constant availability of other
competencies within the company. This can only be achieved in a project-based organization
that rigorously follows the tools and logic described in this chapter.
14.
Every modern manufacturing company has some sort of computer-based Information System
to support its daily operations and to help managers make strategic decisions. A large variety
of software and hardware has been developed for this purpose. Solutions for automating
company accounting have been proposed based on every available technology: large
computers, once called mainframes, were used for this purpose, but very small personal
computers, like the glorious Apple II of the 1980s, were also used successfully.
If you compare the characteristics of these old machines (clock speed 1 MHz, 8-bit, 48
KB RAM, 128 KB disk) with those of an average modern PC (clock speed 2 GHz, 64-bit,
2 GB RAM, 512 GB disk) you get an increase of more than 2,000 times in speed and roughly
100,000 times in storage capacity. The same improvement applied to your car of the 1980s,
travelling at 150 km/h and carrying 5 people, would now result in a speed of 300,000 km/h
and the capacity to carry 500,000 people!
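These ratios can be checked with a few lines of arithmetic; note that the quoted storage factor is a rough figure sitting between the RAM ratio (about 43,700) and the disk ratio (about 4.2 million):

```python
# Specs quoted in the text: Apple II (early 1980s) vs. an average modern PC.
apple_ii = {"clock_hz": 1e6, "ram_bytes": 48 * 1024, "disk_bytes": 128 * 1024}
modern_pc = {"clock_hz": 2e9, "ram_bytes": 2 * 1024**3, "disk_bytes": 512 * 1024**3}

speed_ratio = modern_pc["clock_hz"] / apple_ii["clock_hz"]     # 2,000x
ram_ratio = modern_pc["ram_bytes"] / apple_ii["ram_bytes"]     # ~43,691x
disk_ratio = modern_pc["disk_bytes"] / apple_ii["disk_bytes"]  # 4,194,304x

print(f"speed {speed_ratio:,.0f}x, RAM {ram_ratio:,.0f}x, disk {disk_ratio:,.0f}x")

# The car analogy: the same speed factor applied to a 1980s car doing 150 km/h.
print(f"car: {150 * speed_ratio:,.0f} km/h")  # 300,000 km/h
```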
Clearly, with such a big improvement in performance, one would expect modern
computer systems to make managing any organization a breeze. In reality, we have
witnessed, at best, an increase in performance by a factor of 10.
The reason is that the software, i.e. the way people use all the power available through
automatic computers, has been oriented toward implementing better machine-human
interfaces (e.g. the little shadow that you can see under each icon on your desktop or the little
“ding” that you can hear when you get a new e-mail) rather than toward providing useful
information to the user.
Moreover, the cost of an ERP implementation does not scale with company size.
This implies that, while for large organizations with revenues above $5-10 billion the
ERP cost (well over US $1,000,000) is repaid in about 12 months, in companies with
revenues below $1 billion the cost of the ERP (often between $500,000 and $1m) might
never be repaid.
Moreover, TOC has introduced a set of system measurements that can effectively help in
making the correct managerial decisions:
• Throughput (TPUT): The pace at which the system generates cash
• Sales: What comes in
• TVC (Totally Variable Costs): What goes out to purchase materials and services
that go into the products we sell
• Operating Expenses (OE): What we need to make the system function (fixed costs
+ investments)
• Inventory (I): What we need to keep in the system to ensure that we always have
enough “material” to produce and ship
T – OE – I = cash profit, the physical money we see in the bank account before tax.
Indeed, from year to year, the I in this formula becomes ΔI, the change in Inventory.
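These measurements can be sketched in a few lines; the figures used are illustrative, not from the book:

```python
def throughput(sales: float, tvc: float) -> float:
    """Throughput (T): the pace at which the system generates cash, Sales - TVC."""
    return sales - tvc

def cash_profit(t: float, oe: float, delta_i: float) -> float:
    """T - OE - dI: the physical money we see in the bank account before tax."""
    return t - oe - delta_i

# Illustrative figures:
t = throughput(sales=1_000_000, tvc=400_000)         # 600,000 of Throughput
profit = cash_profit(t, oe=450_000, delta_i=30_000)  # 120,000 of cash profit
print(t, profit)
```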
TOC has also introduced two additional measurements to monitor the flow of finished
products and inventory:
The “Throughput $ Day” (T$D) measures the loss in Throughput caused by delays in
delivery of finished products: the Throughput generated by the sale in $ times the number of
days of delay.
The “Inventory $ Day” (I$D) measures the amount of inventory and the pace at which it
moves through the company: value of inventory in $ times number of days of presence in the
company.
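Both flow measurements are simple products, as a sketch with illustrative figures shows:

```python
def throughput_dollar_days(throughput_of_sale: float, days_late: int) -> float:
    """T$D: Throughput of the delayed sale ($) x days of delay in delivery."""
    return throughput_of_sale * days_late

def inventory_dollar_days(inventory_value: float, days_in_system: int) -> float:
    """I$D: value of inventory ($) x days it has sat in the company."""
    return inventory_value * days_in_system

# Illustrative: an order worth $12,000 of Throughput shipped 5 days late,
# and $80,000 of material that has been in the plant for 10 days.
print(throughput_dollar_days(12_000, 5))   # 60000
print(inventory_dollar_days(80_000, 10))   # 800000
```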
These measurements allow managers to focus on the elements that determine the
performance of the company. However, these measurements need to be understood in order to
take the correct managerial actions. As we saw in Chapter 12, Statistical Process Control
provides a very powerful means for monitoring the variability of the important processes in the
company, thus allowing managers to decide when to take corrective action.
It does so by:
• performing the three fundamental steps of TOC for the management of physical
constraints (identify, exploit, subordinate)
• giving information on how much time the constraint is used
• scheduling the constraint
• producing management reports that evidence the cash generated by the
management of the constraint
• dynamically exchanging data with existing databases
• Sales (prices and delivery dates), i.e. when TVC is transformed into Value (cash).
All these data are present in any modern ERP system, so extracting them from the
databases is straightforward.
What is less immediate is the capability to produce a good schedule for the constraint
according to the TOC algorithms. However, it is well within the capability of any reasonable
computer-programming workforce to come up with schedulers that implement these
techniques, and some software packages can also be found on the market.
This applies also to constraints in the management of projects and an example is given
below.
The implementation of the companywide ERP system that we describe presented several
challenges:
• The Company was made up of many facilities, spread out all over the North
American eastern sea-board;
• These facilities used local ERP systems, different from and incompatible with each
other as far as their data was concerned (e.g. they used different codes for the
same item);
• Changing the ERP system invariably creates resistance in people who have been
accustomed for years to working with their own system of order entry, data queries,
accounting and reporting;
• The existing ERP systems were obsolete;
• None of these systems would lend itself to the statistical studies and Throughput
accounting practices required by the Decalogue;
• No existing ERP system foresaw a module for cataloguing and managing people’s
skills (for finite capacity scheduling purposes).
A Decalogue based ERP system must have some well-defined features.
PRT Go live
The most relevant steps of the project, as far as the Decalogue is concerned, are
Intermediate Objectives (IOs) 7, 8 and 9. IO 8 relies on the introduction of Throughput
Accounting parameters into the new ERP system and the development of the corresponding
reports through which we can keep the Company business under control.
IO 9 (which is strictly related to IO 4) is about managing resources, and the Company
itself, as a collection of projects. This subject will be addressed in more detail in the last
section of this chapter.
IO 7 is crucial for the production strategy and is closely linked to the concept of Drum
Buffer Rope (DBR) and Buffer Management (BM). DBR and BM form the method to manage
and protect the constraint.
The Drum is the constraint process, the critical resource, or the “weak link”, of our
production chain. It dictates the pace of the system downstream (since the constraint by
definition is a limiting factor) and upstream.
The Rope connects the Drum to the releasing of raw material or of semi-finished goods
at the beginning of the chain. In this way the production chain processes only the quantities
that the constraint is able to work on, without accumulating too much material before the
constraint. Each intermediate link between the beginning of the chain and the constraint
process (and after it, for that matter) must be able to process all the material it receives (an
amount lower than the maximum capacity of the link, since it is dictated by the constraint,
the weakest link).
To protect the system against the intrinsic variability of every process we put Buffers
before the constraint and at the final link of the chain (shipping buffer). The dimensions of
those buffers are calculated according to the variability we measured in the constraint and in
the system.
The higher the variability (assuming that it is in statistical control) the bigger the buffer
needs to be (n.b. the buffer is measured in time units).
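One way to translate this into numbers, assuming the feeding processes are in statistical control, is to size the buffer from the mean and standard deviation of measured feeding times; the three-sigma choice below is an illustrative assumption echoing control-chart limits, not a prescription from the book:

```python
import statistics

def buffer_hours(feeding_times: list[float], sigmas: float = 3.0) -> float:
    """Size a time buffer for the constraint from measured feeding times.

    Assumes the feeding process is in statistical control; the buffer covers
    the mean feeding time plus `sigmas` standard deviations (3 echoes the
    control-chart convention; an illustrative choice, not a fixed rule).
    """
    return statistics.mean(feeding_times) + sigmas * statistics.stdev(feeding_times)

# Illustrative measurements: hours for material to reach the constraint.
times = [10.0, 12.0, 11.0, 13.0, 11.5, 12.5]
print(round(buffer_hours(times), 1))  # 14.9 -> higher variability, bigger buffer
```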
Each of the Intermediate Objectives must be reached through a path of tasks, a set of
well-defined actions, organized by means of the Transition Tree. The Transition Tree
highlights the logic that we use to move from the present (the obstacle) to the desired future
(Intermediate Objective).
need. This will allow us to reach the next state of reality. This line of reasoning can be
represented as shown below.
Then, starting from the new state of reality, we can apply the same scheme again until we
reach the goal (i.e. the Intermediate Objective). Moreover, we have to check the logic of the
path that leads us to the objective; we have to ascertain if the actions we have found are
sufficient to reach our objective (logic of the action) and why the next need is unavoidable
(logic of the sequence). This will add more ‘leaves’ to the Transition Tree:
In our specific case, after identifying the necessary steps to achieve the goal (IO 6) starting
from our initial state, we derived the following Transition Tree. In the picture below, the
abbreviation EDI stands for Electronic Data Interchange and indicates a way to communicate
between commercial partners by means of telecom networks:
The same thing was done for all the Intermediate Objectives. On average, four to six
actions should be sufficient to achieve each IO. If more actions are needed, it usually
means that the IO should be split into two or more sub-IOs.
When we finished building all the Transition Trees, we had about 170 tasks to manage,
in order to start the transformation of the company’s ERP.
The Transition Trees provided us with a well-defined and actionable list of tasks. The
next step was to schedule those tasks.
The tool we used to schedule the project is a software package called Independence, developed
by Domenico Lepore and a team of mathematicians and software specialists (see
www.independence.it). Independence follows the algorithm illustrated by Dr. Goldratt in his
book Critical Chain and incorporates elements of SPC-based buffer management to make it
more consistent with the Decalogue. The steps to create a project schedule are:
1. Enter the task names and their estimated durations. People usually tend to over-buffer
themselves by declaring task durations much longer than they really need. We
assumed that if we were to focus the entire working day (usually eight hours) on the
task, there would be no need to allow more than three days for the most difficult
task. NO MULTITASKING IS ALLOWED. Of course, some unforeseen event can
delay the operations, and this is what the buffer is for.
2. Enter each task owner’s name (the person responsible for the task) and all the other
resources needed to perform it. People assigned to tasks must have the right
competencies to execute them, but it is not good practice to assign too many tasks to the
same resources; they would soon become critical and would unnecessarily delay the
completion of the project. The problem of resource contention will be dealt with later.
3. Establish the connection between tasks, i.e. their real natural sequence. Indeed, it
may happen that a task belonging to a later IO can be brought forward because its
execution is independent of many of the tasks that should precede it according to the
Prerequisite Tree. For example, even if by our Prerequisite Tree IO 6 can be reached
only after the first 5 IOs have been completed, the task concerning EDI support could
be started immediately, because its execution does not depend on any of the tasks
contained in IOs 1 to 5. This step in Independence can be done dynamically, that is,
by directly manipulating the links on a map containing all the project tasks.
An example is shown in the picture below.
As we can easily see, many tasks can be executed in parallel because they are
independent of one another.
4. Identify the Critical Chain. This is an essential step of the schedule. The Critical
Chain is the longest path of project tasks (dependent events), taking into
consideration resource contention. The Critical Chain determines the length of the
project and is therefore its limiting factor, its constraint. Independence puts the
tasks in time sequence and considers task duration, task network, the assigned
resources and the resources shared with other running projects. Then, it
calculates the Critical Chain as the longest continuous path from the beginning to
the end of the project. Of course, a resource which is assigned to many tasks is
likely to be on the Critical Chain.
5. In order to shorten the length of the project (i.e. the length of its Critical Chain) we
can try to resolve some of the resource contentions manually. To this end, we can
operate on those tasks of the Critical Chain that are consecutive and share
common resources. The difficulty here lies in challenging the assumptions we
make about the impossibility of replacing some of the assigned resources with
idle ones. Naturally, we do not underestimate the skills needed for the task,
but we firmly believe that training can help elevate people’s skills and release
more of the necessary resources into the system.
6. Assign the project buffer and feeding buffers. Usually, when we manage a project, we
tend to over-protect each and every task. This is wrong. If we know that we have
more time than we really need, we invariably tend to use it all, either by starting
late or by not focusing our time solely on the execution of the task at hand.
Multitasking is the offspring of deeply rooted assumptions that people make about
efficiency in the workplace; sadly, it is fuelled by a host of software for “personal
productivity”. Multitasking is, on the contrary, a fierce enemy of a system-based
approach to management and a debaser of people’s ability to focus.
In our scheduling we use a buffer to protect the chain, not the individual tasks. The
project buffer is chosen usually to be between 20% and 30% of the length of the Critical
Chain. But we also have to be sure that the non-critical tasks that feed the Critical Chain are
protected, so that disruption in their execution does not affect the length of the project. For
this reason we add a feeding buffer to every non-critical task that precedes a Critical Chain
task. These feeding buffers usually have a length between 20% and 30% (the same
percentage as the project buffer) of the non-critical task sequence they follow.
7. Task update and Buffer Management. Independence allows us to enter the project
due date so that, once the Critical Chain length has been derived and the buffers
are calculated, we know exactly the day on which the project should start in order
to finish by the due date. As project execution begins, the project manager
updates Independence with task execution day by day and the program warns us
about possible delays to, or earliness of, project completion.
In the next picture we have blown up a small section of the above project. In particular,
we can appreciate the different tasks as shown by Independence.
Detail of project
However, the correct implementation of the projects depends on how timely and responsive
the allocated resources are. A great advantage of this approach to project
management comes from removing unnecessary protection from each task and using
protection only where it is really needed, i.e. on the constraint. This implies that we must be
ready to accept late task completion as well as early completion. The Project Manager exerts
a special control on the resources involved in tasks belonging to the Critical Chain and alerts
the resources next in line in order to capitalize on early task completions.
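The buffer-sizing rule from step 6 above (project buffer at 20-30% of the Critical Chain, feeding buffers at the same fraction) can be sketched as follows; the 25% figure is an illustrative choice within that range:

```python
def project_buffer(critical_chain_days: float, fraction: float = 0.25) -> float:
    """Project buffer: a fraction (usually 20-30%) of the Critical Chain length."""
    return critical_chain_days * fraction

def feeding_buffer(feeding_chain_days: float, fraction: float = 0.25) -> float:
    """Feeding buffer: the same fraction, applied to the non-critical sequence."""
    return feeding_chain_days * fraction

# Illustrative: a 60-day Critical Chain fed by a 12-day non-critical branch.
cc, branch = 60.0, 12.0
start_to_due = cc + project_buffer(cc)  # 75.0 days from project start to due date
print(project_buffer(cc), feeding_buffer(branch), start_to_due)  # 15.0 3.0 75.0
```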
The situation becomes more complex when multiple projects have to be managed, and
possibly by different people. When a resource becomes unavailable, say through illness,
ways of taking corrective action other than simply delaying the related tasks must be available.
Although ultimately the decision is taken by the project manager, the Information System can
provide useful information to support such decisions.
The correct way to handle multiple projects is to have a pool of resources with some degree
of interchangeability, and a way to communicate across different projects so as to be able to
partially reallocate the tasks.
In order to perform the tasks of the projects, certain capabilities (skills) are needed. These
skills are normally present in the company, but could also be partially missing. The Information
System has to provide a database of the skills needed, of how those skills map onto the
internal resources, and of each resource’s grade of competency.
At the project task definition phase, the project manager, possibly helped by a team of
knowledgeable people, chooses the required skills and the necessary level of competency.
A quick search in the system database will show which resources are available that
fulfil the requirements (or exceed them), and what their calendar of allocation on other
projects is. At this stage we do not yet know whether a required resource will be available at
the time of execution of a specific task. The available resources are tentatively allocated and
the project goes into the scheduling phase.
Once the project is scheduled, i.e. it is known when each task has to be executed, the
possible resource contentions are examined and the opportunity is given to choose a different
resource. If any resource is changed, the project has to be rescheduled, as there could be a
different, hopefully shorter, Critical Chain.
So we have an iterative process in place that ends when the project manager is satisfied
with the results obtained.
The degree of flexibility given by the skill/competency level of abstraction allows us, for
instance, to take a resource from an already allocated task on a running project and
substitute it with an equivalent resource. This substitution is straightforward when the
originating task is not critical while the destination task is, because the impact on the original
project is limited. In the case where both tasks belong to project Critical Chains, the
substitution should be done with more care and the project managers involved should be
informed.
Resource   Skill              Competency level
--------   -----              ----------------
Mark       Programmer         1
           Data entry         2
John       Accounting         1
           Data entry         3
           Customer service   2
Alice      Secretary          1
           Data entry         2
           Call centre        2
When a resource has to be allocated to a specific task, the required skill and level of
competency are searched for in the resource database. The names that match the required
characteristics are examined and one is chosen. At a later time, if a resource contention is
created, either during the project scheduling or when another project has to be scheduled,
another compatible resource is searched for and, if found, allocated to the required task.
In the example above, Mark, whose activity was requested for data entry in a project
task, could be replaced by Alice, if she is available, or by John, who has a greater
competency.
The introduction of the “skill level” helps to create enough flexibility in the project-
scheduling phase, solving potential contention cases by using equivalent resources and
allowing the reallocation of specific resources while keeping the same, or better, competency
required by the tasks.
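The substitution logic described above can be sketched as a simple lookup over the skill/competency table; the data structure and function names here are illustrative assumptions, not part of the Independence software:

```python
# Skill/competency table from the example above: resource -> {skill: level}.
resources = {
    "Mark":  {"Programmer": 1, "Data entry": 2},
    "John":  {"Accounting": 1, "Data entry": 3, "Customer service": 2},
    "Alice": {"Secretary": 1, "Data entry": 2, "Call centre": 2},
}

def candidates(skill: str, min_level: int) -> list[str]:
    """Names of resources holding `skill` at `min_level` or better, best first."""
    found = [(name, skills[skill]) for name, skills in resources.items()
             if skills.get(skill, 0) >= min_level]
    return [name for name, _ in sorted(found, key=lambda pair: -pair[1])]

# Who could take over a level-2 "Data entry" task assigned to Mark?
print(candidates("Data entry", 2))  # ['John', 'Mark', 'Alice'] -> John is level 3
```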
Indeed, every degree of flexibility requires some level of redundancy. That is, we cannot
have all resources working full time and expect to be able to allocate people for new tasks.
On the other hand, in a “cost world” it is considered a sin to have the workforce not working
full time. The issue of excess capacity for non-constraint resources must be addressed when
the basic performance measurement system is devised. We need a certain degree of excess
capacity in order to cover the intrinsic variability of resource performances; the correct
amount can be estimated by means of statistical methods.
Another source of flexibility can come from the separation between recurring projects
and non-recurring ones. The former can be easily engineered and appropriate time for their
execution can be determined. The latter will benefit from the proficient use of the Critical
Chain method and its non-contentious resource allocation method.
The future lies in designing organizations not as functional/hierarchical grids, which are
clearly inadequate given their inability to fully support, measure and promote intrinsically
cross-functional activities, but as networks of interdependent projects that work together to
achieve the goal of the organization.
In conclusion, the allocation of people to tasks according to their skills and level of
competency produces an effective use of the resources available, a more ethical
consideration of people’s capabilities, and allows us to unleash the power provided by the
“intelligent” implementation of project management.
PART FOUR
In the first three parts of this book, we started with the intuition of the systemic enterprise as
opposed to a traditional hierarchical and functional organization. We analyzed what this
means in terms of how organizations, Industry in particular, can be organized systemically,
and the new kind of economics and leadership this requires. We then saw practical
indications of how several aspects of the systemic organization can be implemented: the
thinking tools we need to move from intuition to scheduled actions, the accounting method,
statistical methods, Marketing and IT systems.
In Part Four, we move in our cognitive spiral back around to a new beginning. We
consider the conscious and connected organization with a new intuition of how Network
Theory can be used to enhance and expand the design and management of an organization
as a system. Networking is not just a buzzword or a current trend. It represents some of the
most cutting-edge research being carried out in physics, biology and the neural sciences.
Once again, just as the intuition of the organization as a system is not an imposition but a
revealing of the underlying nature of any organization, the intuition of the organization as a
complex network uses our advancing knowledge of nature to understand how organizations
work and, therefore, how to design them to maximize their output.
The coming years will offer the opportunity for further analysis, development and
execution based on this intuition. In the meantime, we can attempt to understand the
relevance of this new science to organizations and what practical implications this may have
for design and management. We attempt to do so here with this brief introduction followed by
a scientific article outlining the research the Intelligent Management group has facilitated in
this area to date.
There are various kinds of network in the real world. Some occur in nature, such as a
beehive. Other networks are manmade, such as the London Underground. These networks
are a collection of nodes that are all interconnected with various degrees of separation
among them.
These ‘simple’ networks are not designed with a specific goal that affects how the nodes
interact with each other. They simply allow connections to occur randomly; hence they are
called ‘random’ networks. The statistical distribution that describes the probability with which
these nodes are connected to each other follows a ‘normal’ Gaussian distribution (i.e. the
data cluster around the mean with a few outliers).
A different kind of network exists in which some nodes are much more highly connected
than others. The nodes with more nodes connected to them are called
hubs. These networks, known as ‘scale free’ networks, thus have a hierarchy consisting of
‘visited’ hubs and more isolated nodes.
The statistical distribution that describes the probability with which these nodes are
connected to each other follows a Power Law. This distribution follows an inverse power
relation: e.g. a tsunami twice as large as the one that occurred in Asia in 2004 would be four
times as rare (less likely to occur) and a tsunami three times as large would be nine times as
rare.
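The tsunami example corresponds to a power law with exponent 2, which a one-line sketch makes explicit:

```python
def relative_rarity(size_factor: float, exponent: float = 2.0) -> float:
    """Under a power law P(size) ~ size**(-exponent), an event `size_factor`
    times larger is `size_factor**exponent` times rarer. The exponent of 2
    is the value implied by the tsunami example in the text."""
    return size_factor ** exponent

print(relative_rarity(2))  # 4.0 -> twice as large, four times as rare
print(relative_rarity(3))  # 9.0 -> three times as large, nine times as rare
```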
When we examine an organization in the light of network theory, we are able to unveil
the reticular nature of the organization and analyze its behaviour and development with a
much deeper level of understanding. More importantly, we can consciously design, manage
and operate the organization with a much higher level of optimization.
An organization can be viewed as a living, complex network with a precise goal. As such
it has nodes and hubs and it exhibits emergent properties. These are new structures or
behaviours that emerge spontaneously from interconnection. We know that we can optimize
the performance of a system by choosing a constraint and subordinating the rest of the
system to the constraint. We can therefore optimize the performance of our organization by
designing it as a network of projects in which certain resources are the hubs/constraints.
As we saw in previous chapters, by buffering the constraint, we are able to protect it from
the accumulation of variation from the processes that feed it. In the same way, we need to
protect the hub from variation, and by protecting the hub and subordinating to it, we ensure
maximum performance for the entire network.
Any Undesirable Effects that emerge from the interaction of nodes can be recognized as
emergent properties of the network. By collecting the Undesirable Effects (UDEs) and
deriving the core conflict of the network/organization (as described in Chapter 10), we will be
able to build an injection/solution that defuses the possibility for these interactions to disrupt
the functioning of the network. The use of the Thinking Process Tools acts as a catalyst for
our understanding of how networks behave by translating our intuition into understanding.
They enable the building of a plan to guide the network more powerfully towards its goal.
15.
The Theory of Complexity first emerged at the end of the 1960s as an evolution of studies
concerning Systems Theory, Dynamic Systems Theory and Cybernetics. At that time, the
reductionist approach was struggling to provide satisfactory answers regarding the non-linear
interactions identified in all organized systems; no reductionist law provided reliable
solutions.
In this context, systems were no longer studied from the standpoint of their single
components, but from the standpoint of the behaviour of the system as a whole. From then
on, systems exhibiting emergent behaviours were analyzed as wholes and identified as
complex systems.
We define a complex system as a set of interconnected parts that interact in a non-linear
way. The collective behaviour of this set of parts exhibits emergent properties which cannot
be found in each of the individual parts. The dynamics of such a complex system generate
new properties whose characteristics are the subject of study. Network theory takes into
consideration these emergent properties and studies the structural evolution of a complex
system.
At first sight, every network appears, as it evolves, to present a kind of random and
unpredictable behaviour. However, there are a few fundamental laws and organizing principles that
networks do follow. These laws and principles help us to understand the topological features
of many different types of systems, from cells, to commercial organizations, and even the
Internet.
A network is a set of items, called nodes or vertices, with connections between them,
called links or edges. Systems taking the form of networks (also called “graphs” in much of
the mathematical literature) abound in the world.
A link is directed if it runs in only one direction (such as a one-way road between two
points), and undirected if it runs in both directions. Directed links can be thought of as arrows
indicating their orientation. A graph is directed if all of its links are directed. An undirected
graph can be represented as a directed graph by replacing each undirected link with two
directed links, one in each direction, between each pair of connected nodes (see figure below).
Undirected graph
The degree of a node can be defined as the number of links connected to it. Links can be
directed or undirected; in a directed network each node therefore has both an in-degree and
an out-degree, which are the numbers of in-coming and out-going links respectively.
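These definitions are straightforward to make concrete. The sketch below (the node names and the adjacency-set representation are our own, purely illustrative) stores a directed graph, computes in-degree and out-degree, and builds the two-links-per-edge directed representation of an undirected graph described above.

```python
# A directed graph as adjacency sets: node -> set of nodes it links to.
digraph = {
    "A": {"B", "C"},
    "B": {"C"},
    "C": set(),
}

def in_degree(g, node):
    """Number of in-coming links of `node`."""
    return sum(1 for src in g if node in g[src])

def out_degree(g, node):
    """Number of out-going links of `node`."""
    return len(g[node])

def to_undirected(g):
    """Represent an undirected graph as a directed one by adding,
    for every link, a second link in the opposite direction."""
    u = {n: set(links) for n, links in g.items()}
    for src, links in g.items():
        for dst in links:
            u[dst].add(src)
    return u

print(in_degree(digraph, "C"), out_degree(digraph, "A"))  # 2 2
ug = to_undirected(digraph)
print(len(ug["C"]))  # degree of C in the undirected version: 2
```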
The geodesic path between two nodes is defined as the shortest path through the
network from one to the other. Note that there may be, and often is, more than one geodesic
path between two nodes.
The diameter of a network is the length (in number of links) of the longest geodesic path
between any two nodes. A few authors have also used this term to mean the average
geodesic distance in a network, although strictly speaking the two quantities are quite
distinct.
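Both quantities can be computed by breadth-first search. A minimal sketch (the example graph is invented for illustration):

```python
from collections import deque

def geodesic_length(adj, start, end):
    """Length (in links) of a shortest path, or None if unreachable."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == end:
            return dist[node]
        for nb in adj[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return None

def diameter(adj):
    """Longest geodesic distance over all connected node pairs."""
    nodes = list(adj)
    return max(
        geodesic_length(adj, a, b)
        for i, a in enumerate(nodes) for b in nodes[i + 1:]
        if geodesic_length(adj, a, b) is not None
    )

# A small chain with a shortcut: A-B-C-D-E plus link A-C.
chain = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"},
         "D": {"C", "E"}, "E": {"D"}}
print(geodesic_length(chain, "A", "D"))  # 2  (A-C-D)
print(diameter(chain))                   # 3  (A or B to E)
```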
Hub
Diameter
The path length between two nodes A and B is the smallest number of links connecting
them; in the figure above, the path length between A and B is 3, whereas the diameter of the
network shown is 6.
Watts and Strogatz, in their article “Collective dynamics of ‘small-world’ networks”,
introduced a measure called the local clustering coefficient C. For any node, this quantity
measures how connected its neighbours are to each other.
The neighbourhood of a node A can be defined as the set of k_A nodes at distance 1 from
A. The clustering coefficient of A is the number of links n_A existing among those k_A
neighbours, divided by the maximum possible number of such links, k_A(k_A − 1)/2:
C_A = 2n_A / [k_A(k_A − 1)] (1)
The clustering coefficient of the network is then given by the average of C_A over all the N
nodes:
C = ⟨C⟩ = (1/N) ∑_i C_i (2)
Network of 5 nodes
The individual nodes have local clustering coefficients, Eq. 1, of 1, 1, 1/6, 0 and 0, for a
mean value, Eq. 2, of C = 13/30.
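The five-node figure is not reproduced here, but a hypothetical graph consistent with the stated coefficients (a hub E linked to A, B, C and D, plus one link between A and B) lets us verify Eq. 1 and Eq. 2 directly:

```python
from fractions import Fraction

def clustering(adj, node):
    """Local clustering coefficient C_A = 2*n_A / (k_A*(k_A - 1)), Eq. 1.
    Nodes with fewer than two neighbours get C = 0 by convention."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return Fraction(0)
    # Count each link among the neighbours once (ordered pairs / 2).
    links = sum(1 for a in nbrs for b in adj[a] if b in nbrs) // 2
    return Fraction(2 * links, k * (k - 1))

# Hypothetical 5-node network consistent with the text: hub E linked to
# A, B, C and D, plus one link A-B among the hub's neighbours.
net = {"A": {"E", "B"}, "B": {"E", "A"}, "C": {"E"}, "D": {"E"},
       "E": {"A", "B", "C", "D"}}
coeffs = [clustering(net, n) for n in "ABCDE"]
print(coeffs)                          # A and B: 1, E: 1/6, C and D: 0
print(sum(coeffs, Fraction(0)) / 5)    # 13/30, matching Eq. 2
```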
Let’s compare two kinds of networks. A random network, for example a national highway
network (see diagram above), appears to be evenly distributed; the goal of the nodes and
the links is simply to cover the geographical area to be mapped. Therefore, there are a
certain number of nodes whose number of connections roughly follows a bell curve
distribution. No reference is made, in this type of network, to the capacity of the links that
connect the nodes, and there are no flows of processes associated with them. In
contrast, if we focus on an air traffic system, the scenario is quite different. The power-law
degree distribution of a scale-free network predicts that most nodes have only a few links,
held together by a few highly connected hubs. It is important to measure how great the
demand is from all the nodes to be connected to a specific node: the greater this demand,
and the more hubs form within the network, the more likely it is that the network is
scale-free.
In the first network, the goal is ‘to allow people to drive within the US moving from node
to node through the links’, whereas in the second network, the goal is ‘to create a map to
allow a flow of people to move from one node to another’. Networks have to be defined
through their goal: different goals define different network topologies. Scale-free networks
follow a power law distribution in which a few nodes are highly connected and are therefore
crucial for the functioning of the network. Attacks (disruptions) on these hubs can
compromise the evolution of the entire network.
As networks grow and preferential attachment takes place, hubs are created, followed by
clusters. Thus, the hierarchical structure of the network evolves. As clusters become
connected, one giant cluster will emerge. The emergence of a giant component represents a
phase transition, which is called percolation.
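The mechanism can be sketched in a few lines of Python (an illustrative toy in the spirit of the Barabási-Albert model; the network size and seed are arbitrary): each new node attaches to an existing node with probability proportional to that node's degree, and a few heavily connected hubs emerge while most nodes keep only one or two links.

```python
import random

def preferential_attachment(n, seed=7):
    """Grow a network of n nodes; each new node links to an existing
    node chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}   # start from a single link 0-1
    stubs = [0, 1]          # one entry per link endpoint, so choosing
                            # uniformly from stubs is degree-weighted
    for new in range(2, n):
        target = rng.choice(stubs)
        degree[target] += 1
        degree[new] = 1
        stubs += [target, new]
    return degree

deg = preferential_attachment(2000)
print(max(deg.values()))  # the biggest hub: far above the mean degree of ~2
```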
How do we design such a systemic network? Such networks have to be built and designed
through:
• sharing a common goal, because this allows the subordination of the single nodes to
the goal
• creating the right interdependencies among the nodes
• identifying the right number of links to be connected, and predicting how the system
will evolve through the statistical emergence of the network’s properties
It is only the goal of an organizational network that can enable the proper identification of
the logic with which the network has to be designed, the direction in which the network has to
evolve, and the alternative paths to follow in case of ‘attacks’ on nodes due to the intrinsic
variation of its processes.
Usually, organizations are accustomed to comparing the average output values of their
systems as indicators of the system’s stability. This forces decision makers to assess
systems based on incorrect assumptions. A value below average gives no indication of the
stability of the system because, by construction, roughly half of the values will fall above
the average and half below.
The single value, the single sold product and the single processed item are simply parts
of a system whose evolution and reliability are displayed within a process. There are
fluctuations in processing items, banking services, information given over the phone, product
assembly and all other repetitive activities completed through interconnected nodes, which
also have intrinsic statistical oscillation. Process Behaviour Charts display all these local
interacting variations of the single nodes as a process and detect the specific limits within
which the whole process oscillates in a predictable way. This intrinsic fluctuation is called
variation.
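A common recipe for such limits, for a chart of individual values (an XmR chart; the data below are invented for illustration), is the mean plus or minus 2.66 times the average moving range:

```python
def process_limits(data):
    """Natural process limits for an individuals (XmR) chart:
    mean +/- 2.66 * average moving range (2.66 is the standard
    scaling constant for individual values)."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    amr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * amr, mean + 2.66 * amr

daily_output = [52, 48, 55, 50, 47, 53, 49, 51, 54, 46]
lo, hi = process_limits(daily_output)
print(round(lo, 1), round(hi, 1))  # 38.1 62.9
```

Points falling inside these limits are the predictable oscillation of the process; points outside them signal something beyond its intrinsic variation.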
The behaviour of processes within a network becomes non-deterministic in the case of
interacting systems. The consequent emergence of new properties is an important
characteristic to be kept under statistical control. At the same time, being aware of intrinsic
variation allows us to monitor the development of the network, the statistical understanding of
which helps us predict the emergent properties of the entire system.
These two interrelated and cyclical observations are practically implemented by projects
developed from the injections to the core conflict of the organization. Therefore, identifying
the constraint and managing the intrinsic variation of the processes occurring in the company
by using the methods of Statistical Process Control enables us to create a systemic network.
Let’s introduce an experiment regarding the behaviour of a manufacturing network and
variation.
In the graph below we show the behaviour of the Throughput and the efficiency of the
constraint machine as a function of the variation of a specific manufacturing network:
Variability: Throughput
Variability: usage
In the first diagram we show the Throughput trend as a function of the intrinsic variation of
a manufacturing network. In the dashed curve, variation is applied to all processes except
the constraint process, and, as the graph shows, the Throughput value is affected only
beyond 60% variability. In contrast, the dotted curve, in which variability is applied across
the whole network, decreases quite quickly, and the same behaviour is shown by the solid
curve, in which variability is applied only to the constraint node.
In the second diagram we show how the efficiency of the constraint node oscillates as a
function of the variability. As we can see, the trend is similar to the first diagram.
The diagram below shows the statistical correlation between the considered variability at
input and the resulting statistical error at output:
Correlation
Ultimately, this diagram shows the trend of the standard deviation of the constraint node
usage (solid line) as a function of the variability, where the dotted line represents the
Throughput trend as a function of the variability. It is clear that by increasing the variability all
over the network, the error bands of both the dotted and the solid lines become wider; these
represent the bands within which the constraint node usage and the Throughput curve
oscillate in a predictable way.
This experiment shows that any variability applied to the constraint node affects the output
of the whole network and, for this reason, the constraint has to be properly protected from
any statistical oscillation within the network.
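The asymmetry behind this result can be sketched with a toy model (ours, not the simulation used for the graphs above): capacity the constraint cannot use while starved of material is lost forever, whereas unprocessed material merely waits in a finite buffer, so even symmetric fluctuations in the constraint's capacity lower average output.

```python
import random

def avg_throughput(periods, feed, cap_mean, cap_var, buffer_max=20, seed=1):
    """Toy model: a constraint fed at a fixed rate through a finite buffer.
    Each period its capacity fluctuates uniformly within +/- cap_var."""
    rng = random.Random(seed)
    buffer, produced = 0.0, 0.0
    for _ in range(periods):
        cap = cap_mean * (1 + cap_var * (2 * rng.random() - 1))
        out = min(buffer + feed, cap)   # limited by material AND capacity
        produced += out
        # Unused capacity is lost; unprocessed material waits (up to buffer_max).
        buffer = min(buffer_max, buffer + feed - out)
    return produced / periods

stable = avg_throughput(10_000, feed=10, cap_mean=10, cap_var=0.0)
noisy  = avg_throughput(10_000, feed=10, cap_mean=10, cap_var=0.5)
print(round(stable, 2), round(noisy, 2))  # noisy is strictly below stable
```

With no variation the line delivers exactly the constraint's mean capacity; with a symmetric 50% fluctuation on the constraint, average output falls below that mean even though the mean capacity itself is unchanged.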
To this end, a time buffer is placed in front of the constraint to protect it from disruption as
well as to monitor the system’s behaviour. The capacity constraint, which represents the
drum of the system, has to be perfectly aligned with the capacity of the network, and its
selling signal, the rope, has to be sent to the replenishment node to dictate the purchasing
of material.
We can conclude by saying that Drum Buffer Rope, together with knowledge of the intrinsic
variability, enables the design of a scale-free network made of hubs that connect nodes in a
systemic way; buffers will take into consideration (and protect from) the intrinsic variation of
the complex system. Such a network, therefore, will drive its evolution toward a common
goal. A feedback system will continuously feed the network through a few designated links
that spread the information throughout the nodes.
In conclusion, it may be useful to summarize the main ideas contained in this book:
• The work of an organization and the way it interacts with its environment are
systemic in nature. In other words, the organization viewed as a system is not an
‘invention’ but rather a ‘discovery’: it is the unveiling of something that is
structurally inherent to the life of any organization. Organizations are, and must be,
considered as systems. The conventional hierarchical/functional organizational
chart is far from adequate to portray what an organization should do and how it
should work.
• The most fundamental feature of any system is the way its components (its
processes) interact and are interdependent with each other in the pursuit of a
stated and agreed upon goal. Such a network of interdependencies shapes and
determines the possibilities of the system towards its goal.
• The most effective way to manage the performance of a system is through the
understanding of the variation of its processes. Such understanding must be
statistical. Hence, managing a system means, in essence, managing its variation.
Any meaningful leadership can only originate from a profound understanding of
the nature of process variation and its impact on the system and its environment. A
leader must strive to ensure statistical predictability everywhere in their
organization in order to allow meaningful managerial decisions.
• The performance of a system made up of processes with well understood
variation can be greatly enhanced if we determine one element to be its
“physical” constraint. In such a variation-managed, constraint-based system the
performance of the whole is essentially linked to the performance of the
constraint. A new measurement system is required, based on Throughput,
Inventory and Operating Expense and their basic interrelations. The Decalogue
provides a simple algorithm and guidelines for managing such a
system.
• What is the most logical and practical way to coordinate the work of a constraint-
based system? In other words, how can we proficiently organize the network of
interdependencies making up our organization? What is the organizational
structure most suitable to sustain the systemic endeavour?
• Such a structure is a multi-project environment. Any organization that accepts the
idea of system will find in the ‘network of projects’ the organizational structure that
most naturally leverages the power of a system.
The Thinking Process Tools, applied in sequence, form a cycle to transform an entire
organization into a thinking system, or more simply to transform a situation of blockage
within an organization into a systemic project for increased Throughput.
This cycle begins with the collection of Undesirable Effects (UDEs), which allows the
core conflict, i.e. the cognitive constraint preventing an organization from achieving its full
potential, to be verbalized in the form of the conflict cloud. This conflict cloud includes the
goal of the organization and the two fundamental needs underpinning the vision and structure
of the organization.
Once the underlying assumptions that create the core conflict are surfaced, one or more
breakthrough solutions, known as ‘injections’, can be devised. The Future Reality Tree (FRT)
uses a logic of sufficiency to connect the injections with statements of reality, ensuring the
achievement of the goal while satisfying the two fundamental needs identified in the conflict
cloud. Any negative implications identified during the building of the FRT are verbalized and
addressed using the Negative Branch Reservation (NBR).
In order to implement the injections/solutions, all obstacles are identified and re-verbalized
in terms of Intermediate Objectives to be achieved. These Intermediate Objectives are mapped
using the Prerequisite Tree. Each Intermediate Objective is further broken down into actions
using the Transition Tree, which reveals the logic, need and resulting change in reality of each
action to be taken. Once the actions have been specified, they can be scheduled into a project
using the Critical Chain algorithm based on finite capacity.
Domenico Lepore: biography
Domenico Lepore studied Physics at Università degli Studi di Salerno, Italy, and became
Dottore in Fisica in 1988 with an experimental thesis on quantum metrology. The device
obtained from this thesis was adopted as the Italian National Voltage Standard. His pursuit of
a research program was motivated by the goal of learning a method of investigation that
would be applicable to areas other than just the natural world.
From 1991 to 1996, Lepore worked for the Management School of the Italian equivalent of
the Department of Trade and Industry, where he became an expert in the work of W. Edwards
Deming and Quality Management. He was the Italian representative for the ISO continuing
education group and a member of the Italian Standards Organization. During this period Lepore
studied the Theory of Constraints (TOC) developed by Israeli physicist Eliyahu Goldratt, and
quickly saw how TOC was a natural catalyst for implementing Deming’s philosophy. He
designed and delivered unique training modules and consultancy for small and medium-sized
companies, combining statistical methods and TOC.
In 1996, Lepore formed his own company to further develop and test this combined
approach. He formalized this methodology integrating the work of Deming and Goldratt into a
rigorous, ten-step algorithm named the Decalogue™. The Decalogue interconnects the
systems approach based on understanding variation with the effectiveness of managing an
organization around a strategically chosen constraint. Lepore commissioned a team of
mathematicians to develop made-to-measure software to support and accelerate
implementations, including a finite capacity project scheduler called Independence.
Dr. Lepore co-authored the book Deming and Goldratt: the Decalogue with friend and
foremost TOC expert Oded Cohen, published in 1999 by North River Press in the U.S. The
book, translated into several languages, contains the basic tenets of the methodology Lepore
has implemented over the last 15 years and is recommended reading for several universities
around the world.
Lepore’s Decalogue™ methodology has led to the successful improvement and
turnaround in management and performance at over 30 national and multinational
organizations, primarily in Italy and the United States. Thanks to the robustness and
repeatability of the methodology, success has been demonstrated in a wide variety of fields –
from aluminium to nursing. Implementations led by Lepore have produced dramatic
improvement in performance by focusing on quality, speed and flow. Increased cash from
sales is the result of reduced lead times, previously unavailable production capacity,
reduced inventories, fewer delivery delays, and greater access to new markets.
In 2002, Lepore became Senior Advisor to GrafTech International, a world leader in the
manufacture of graphite for industrial applications. The Decalogue was applied company-wide
and, over the three-year consultancy period, GrafTech achieved outstanding results.
Thanks to this success, the former Chairman and CFO of GrafTech invited Lepore to join
them in a venture to raise capital, acquire companies and manage them with the Decalogue
methodology. Thus, in 2006 Lepore became President of Symmetry Holdings Inc, the first
holding company to utilize the Decalogue methodology. In 2007 Symmetry Holdings
consummated the fastest acquisition ever completed by a Special Purpose Acquisition
Company and took control of a highly traditional public company in the Steel sector with the
aim of creating one system and increasing throughput based on speed and transparency. By
the end of 2008 the company had achieved all its goals for the year: reduction in debt,
dramatic reduction in inventory, unmatched speed of replenishment within the industry, and
unification of 21 unconnected plants into a system organized around a strategically chosen
constraint.
The unprecedented economic crisis did not hit their operations until 2009. A lack of
alignment among managers old and new regarding the vision and goal of the new company,
renamed Barzel Industries, translated into errors in the timely transformation of a fragmented
organization into a profitable and unified system. Sales did not increase sufficiently to avoid a
liquidity crisis, and a tightening on lending from the banks led to a change in ownership.
The loss of this unique opportunity prompted Lepore more than ever to formalize in a
book the kind of intelligence that must be applied in order to succeed in bringing business up
to speed in an interconnected and interdependent world. He dedicates his working life to
facilitating the ability of organizations to interconnect intuition, understanding and knowledge,
from the birth of an idea to its detailed implementation, and to manage and monitor projects
systemically.
Lepore is the founder of Intelligent Management Inc., an organization with the goal of
promoting a systems-thinking approach to organizations and boosting management
intelligence. He is co-founder of Invictus IM Corp., a strategic advisory and investment firm
which assists corporate management teams to recognize and achieve full potential for their
companies. Invictus IM uses the Decalogue methodology in all its activities.