DOI 10.1007/s11191-011-9385-9
Abstract Our purpose in this paper is to try to make a significant contribution to the
analysis of cognitive capabilities of the organization of active social systems such as the
business enterprise by re-examining the concepts of organizational intelligence, organi-
zational memory and organizational learning in light of the findings of modern neurosci-
ence. In fact, in this paper we propose that neuroscience shows that sociocognitivity is for
real. In other words, cognition, in the broad sense, is not exclusive to living organisms:
Certain kinds of social organizations (e.g. the enterprise) possess elementary cognitive
capabilities by virtue of their structure and their functions. The classical theory of orga-
nizational cognition is the theory of Artificial Intelligence. We submit that this approach
has proven to be false and barren, and that a materialist emergentist neuroscientific
approach, in the tradition of Mario Bunge (2003, 2006), leads to a far more fruitful
viewpoint, both for theory development and for eventual factual verification. Our proposals
for sociocognitivity are based on findings in three areas of modern neuroscience and
biopsychology: (1) The theory of intelligence and of intelligent systems; (2) The neuro-
logical theory of memory as distributed, hierarchical neuronal systems; (3) The theory of
cognitive action in general and of learning in particular. We submit that findings in every
one of these areas are applicable to the social organization.
1 Introduction
Approximately 50 years ago, the scientific polymath and Nobel prize winner Herbert
Simon published his book Organizations with James March, a sociologist of organizations.
In their book (March and Simon 1958) they laid the general groundwork for a broad
research program in organization theory, which, in spite of the many fashionable but failed
attempts at post-modern contributions to organization theory, remains the only central
paradigm (in the sense of Thomas Kuhn) for a science of organizations.
D. A. Seni (&)
University of Quebec, Montreal, QC, Canada
e-mail: seni.dan@uqam.ca
This remains, to this day, the seminal work on organization theory and the sociocog-
nitive hypothesis. A broad and fundamental piece of work whose full impact has yet to be
felt in organization theory, it predated and even anticipated modern understanding of
cognition and memory.
In the introduction to the chapter on cognitive limits the authors state the following
concerning organizational learning:
The innovative processes that are essential in initiating new programs in organization are closely
related to the various intellective processes referred to by psychologists as problem-solving,
productive thinking, creative thinking, invention and the like. Our starting point will be to
examine briefly what is known of the problem-solving process at the individual level, and then to
introduce organizational considerations. (March and Simon 1958, p. 177).
March and Simon's book was divided into three parts, each more abstract than the
preceding, culminating in a last section on a sociocognitive hypothesis. We cite from their
postscript (italics ours):
We have surveyed the literature on organization theory, starting with those theories that viewed the
employee as an instrument and physiological automaton, proceeding through theories that were
centrally concerned with the motivational and affective aspects of human behavior, and concluding
with theories that placed particular emphasis on cognitive processes. … What evidence we have been
able to gather for our propositions has mostly related to the middle chapters of the book, those
dealing with motivation and attitudes. The cognitive aspects of organizational behavior are to date
almost unexplored terrain. (March and Simon 1958, p. 211).
Neurosciences Call for a New Model 1487
(2006). von Krogh and Roos (1995), Tsoukas (2005, Chapter 4) and Baum (2002) discuss
the general programme for an organizational epistemology. On the concept of organiza-
tional memory see Seville (1996). And on the strategic management of knowledge in the
firm see Baumard (1996). On the relation of management to organizational cognition see
Bouvier (2004). Finally, Moessinger (1991, 2008) restates organization theory from a
materialist and realist perspective.
The second trend is, of course, that neuroscience has made explosive progress. So much
so, that what once appeared programmatic and rested mainly on analogies and metaphors
between brain and organization (Beer 1972) may eventually lead to a full elucidation of
March and Simon's sociocognitive hypothesis.
Our objective in this paper is to propose a scientific (materialist and realist) reading of
the concepts of organizational intelligence, organizational memory and organizational
learning so as to establish a scaffold on which knowledge management and the manage-
ment of organizational capabilities can be cast. In order to meet this goal we propose a
sociocognitive hypothesis: Cognition is not, in the general sense, limited to individual
animals. Certain kinds of organizations, namely those that can be deemed to be intelligent,
have primitive forms of cognition at a systemic level.
Our proposals concerning the sociocognitive hypothesis involve a synthesis of three
main trains of thought in the neuroscientific literature: (1) the general theory of intelligence
and of intelligent systems; (2) the neuroscience of memory, of mental modeling and of
cognitive epistemology; (3) the theory of cognitive action, of thinking and reasoning, but
particularly the theory of learning.
Each of these areas of neuroscience and cognitive psychology may be extended to the
theory of organizations, and each may then be interpreted either in terms of AI (artificial
intelligence), an approach which has proven to be barren (Searle 1994, 1995a, b) and
dualistic, or in terms of neuroscience. Moreover, our thesis concerning these two inter-
pretations of cognition is the following: Thanks in large part to advancement in neuro-
science, neurobiology and psychiatry, our understanding of brain, mind and mental
phenomena need no longer be idealistic; they are to be understood strictly in materialistic
and realistic terms (Bunge 2006). All forms of dualism become archaic and disappear as
the mind is seen to emerge from brain function, i.e. as we adopt a scientific emergent
materialism. As Hawkins (2004, p. 43) noted: "That cells in our brains create mind is a
fact, not a hypothesis."
Our position on the sociocognitive hypothesis is organized in this paper in a number of
propositions and homologies.
There is much ambiguity in the literature on organizational learning concerning the nature
of organization-level cognition, knowledge and learning. The ambiguity and confusion
stem from a lack of clarity concerning the part-whole relation in organizations as systems,
and the individualistic versus holistic conception of organizational knowledge. Argyris and
Schon (1974) state the problem thus:
There is something paradoxical here. Organizations are not merely collections of individuals, yet
there are no organizations without such collections. Similarly, organizational learning is not merely
individual learning, yet organizations learn only through the experience and action of individuals.
What then, are we to make of organizational learning? What is an organization that it may learn?
(Argyris and Schon 1974, p. 9.)
Yet, having raised the problem, the authors offer no clear mechanism of organizational
learning at the systemic level. Consequently, they, and several others, propose that orga-
nizational knowledge is contextually shared individual knowledge; organizational
knowledge would, in some sense, be the intersection of individual knowledge (Nonaka and
Takeuchi 1995). Others use the concept of organizational knowledge metaphorically (Beer
1972) and remain silent on the mechanisms that bring it about. Others propose that organizational
knowledge is somehow embedded or incarnated in the routines of organizations, in
organizational cultures, in processes and in action programs (Levitt and March 1988).
Finally, some (Weick and Roberts 1993) propose a holistic concept of collective orga-
nizational mind.
In this section we define organizational knowledge and organizational learning as
emergent systemic properties of the organization, not by reduction to individual knowledge.
Moreover, organizational learning emerges from the interaction between components
of the organization (individuals and groups) and is not a disembodied property of the
whole. In other words we adopt a systemic rather than either an individualistic or a holistic
approach (Bunge 1977a, b, c, 1979a, b, c, 2000).
The problem of organizational knowledge and learning can be stated as follows:
Assume an epistemology capable of characterizing the notion of the state of knowledge
of a system or an agent (Bunge 1983a, b). For example, as a first approximation, assume
the commonsense stock and flow model of knowledge which views knowledge as being
stored and accumulated in quantity and quality and as being transported or transmitted
from one place to another and from one agent to another. (Of course, this model, the basis
of the artificial intelligence view, is false, as will be discussed later). Let I1 and I2 be two
agents (individuals, teams, groups, units, and so on) that cooperate and do knowledge work
together. Moreover, let K(Ii) be the state of knowledge of agent Ii. Then ΔK(Ii) is the change
in the state of knowledge of agent Ii, either learning or forgetting. Let (I1, I2) be a system (e.g.
an organization) composed of agents I1 and I2 whose state of knowledge can be represented
by K(I1, I2). Then, the system or organization (I1, I2) is a learning organization (over some
period of time during which there is a change in the state of knowledge of its components)
if
ΔK(I1, I2) > ΔK(I1, 0) + ΔK(0, I2)
A measure of organizational learning above and beyond individual learning (i.e. the
quantity of systemic knowledge) is represented as
ΔK(I1, I2) − ΔK(I1, 0) − ΔK(0, I2)
Note that according to this definition organizational knowledge is not the same as shared
individual knowledge since the latter is simply
ΔK(I1, 0) ∩ ΔK(0, I2)
Indeed, organizational knowledge, according to this definition, is an emergent property of
system (I1, I2) rather than a resultant property of its components. Finally, note that this
definition can be generalized to n agents or individuals in the organization.
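These definitions can be made concrete in a short sketch. The following Python toy model treats knowledge states as sets of items and measures ΔK by set cardinality; both choices, and the sample data, are illustrative assumptions, since the underlying epistemic measure is left open in the text.

```python
# Toy model of the emergence condition for organizational learning.
# Knowledge changes are modeled as sets of "items learned" and the
# measure of a change is simply set cardinality -- illustrative
# assumptions, not a commitment of the definitions themselves.

def is_learning_organization(dK_system: set, dK_1: set, dK_2: set) -> bool:
    """Condition ΔK(I1, I2) > ΔK(I1, 0) + ΔK(0, I2): systemic learning
    exceeds the sum of what each agent learned working alone."""
    return len(dK_system) > len(dK_1) + len(dK_2)

def emergent_learning(dK_system: set, dK_1: set, dK_2: set) -> int:
    """Measure of learning above and beyond individual learning."""
    return len(dK_system) - len(dK_1) - len(dK_2)

# Two agents each learn two items alone; cooperating, the system also
# acquires joint routines that neither agent holds individually.
dK_1 = {"a", "b"}
dK_2 = {"b", "c"}
dK_system = {"a", "b", "c", "routine-1", "routine-2", "routine-3"}

print(is_learning_organization(dK_system, dK_1, dK_2))  # True
print(emergent_learning(dK_system, dK_1, dK_2))         # 2
print(dK_1 & dK_2)  # shared individual knowledge: {'b'}
```

Note that the intersection `dK_1 & dK_2` (shared individual knowledge) is distinct from the emergent surplus, mirroring the distinction drawn above.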
In the context of the business enterprise, as in most organizations, organizational
knowledge is an intangible organizational asset or a form of social capital (Coleman 1988)
which contributes to organizational skills, economies of scope, competencies and capa-
bilities which support multiple coordinated (that is, non-individual) tasks. The concept of
organizational knowledge and learning then allows for the clarification of the classical
distinction between social capital and human or individual capital (e.g. education, skills,
capacities, and so on.) Social capital emerges from human capital in an organization if the
agents of a system are coupled such that
C(K(I1, I2)) > C(K(I1, 0)) + C(K(0, I2))
As before, C(K(I1, I2)) is the social capital of the system and C(K(Ii)) the human
capital of individual Ii.
Most active organizations, that is, organizations that are agents of change, are set up
because they use resources to perform their functions more efficiently than do collections
of individuals. In principle, therefore, organizational knowledge and learning contributes to
resource efficiency. If R(K(I1, I2)) are the resources used by the system and R(K(Ii)) the
resources employed by individual Ii, then organizational knowledge contributes to the
efficiency of the system if
R(K(I1, I2)) < R(K(I1, 0)) + R(K(0, I2))
Once established, these definitions raise the issue of proposing a mechanism that
explains the emergence of organizational knowledge and learning. The mechanism we
propose is the following: Knowledge, as will be discussed later, is a change in the
representational internal state (e.g. semantic memory in brain) of a system. A system or
an agent knows something if the system has a memory which enables it to recall it. In
turn a system has memory if it is intelligent. Finally, a system is an intelligent agent if
its agency is model-based, that is, if its action is based on an anticipation (and evalu-
ation, in the case of anticipatory control) of future states of its environment as repre-
sented in an internal model. In the following sections we propose just such a mechanism
and clarify these ideas with a number of hypotheses and a number of homologies drawn
from neuroscience.
The theoretical school of thought derived from the cognitive approach is called cogni-
tivism. The phenomenal success of the cognitive approach can be seen by its dominance as
the core model in contemporary psychology (replacing the behaviorism of the late 1950s).
This success has led to it being applied in a wide range of areas such as psychology
(particularly cognitive psychology), cognitive science and psychophysics, cognitive neu-
roscience, neurology and neuropsychology, behavioral economics and behavioral finance,
artificial intelligence and cybernetics, ergonomics and user interface design, philosophy of
mind, linguistics and especially psycholinguistics and cognitive linguistics, economics and
especially experimental economics.
The term cognition (Latin: cognoscere, to know) is used in several related ways to
refer to a facility for processing information, applying knowledge and changing prefer-
ences. Cognition or cognitive processes can be natural or artificial, conscious or not
conscious; therefore they are analyzed from different perspectives and in different con-
texts, in neurology, psychology, philosophy, systemics and computer science. The concept
of cognition is closely related to such abstract concepts as mind, reasoning, perception,
intelligence, learning, and many others that describe numerous capabilities of mind and
expected properties of artificial or synthetic intelligence. Cognition is an abstract property
of living organisms; therefore it is studied as a direct property of the brain or of an abstract
mind on sub-symbolic and symbolic levels.
the extent to which its present state is determined not only by past states but also and
mostly by internal states that anticipate the future as well. In this sense, intelligent systems
break with the standard view of causality in that they are teleological.
Hypothesis 2 What is intelligence? Intelligence qualifies behavior. It is not a material
property or an organ. Nor does it refer to specific mental capabilities. It is a broad class of
behavior of a certain kind, a form of action characterized by the ability to represent, anticipate,
and behave accordingly.
The internal state of an intelligent system is representational; it represents the state of its
environment (as well as system-environment interaction). Thus, intelligent behavior is
based on the anticipation of the system's state in interaction with its environment.
Robert Rosen (1974), in his work on anticipatory systems, proposes the idea that
organisms exhibit emergent properties of intelligence based on anticipation and
representation. Moreover, he proposes that his concept of model-based anticipatory behavior
can be extended to the behavior of all intelligent systems (Rosen 1985). We cite
extensively from Rosen and Kineman (2005) on Rosen's idea:
Robert Rosen claimed that life, as a property of a living system (an organism), is not caused by the
physical nature of what it is composed of, but rather, is a consequence of complex organization of a
certain type in a material system. In other words, the causal basis of life is a matter of relational
causality rather than physical, material causality. Living systems are characterized by a unique set of
behaviors and capabilities. Among these is the ability to employ information encoded into the
organization itself as a means of maintaining system stability in an ever-changing environment. This
encoded information can allow organisms to act in a way that Rosen described as characteristically
model-based behavior. He theorized that the encoded information could act as a set of internal
predictive models which pertain to both the internal environment of the system and the external
environment, as well as to the relational interactions between the two, and which actually guide
system behavior. Collectively, such guidance amounts to an anticipatory mode of system control.
Rosen concluded that encoded information is an integral aspect of any living system's organization
and, based on the relation of this information to organismic behavior, he categorized all living
organisms as anticipatory systems. (Rosen and Kineman 2005, p. 399).
In Robert Rosen's view, the most curious observable characteristic of life with respect to time is
anticipatory behavior. All organisms exhibit anticipatory behavior, which Rosen has described as
being characteristically model based: behavioral changes the system is undergoing in the present are
caused by events that have not yet happened, but are entailed to happen in the future. This is the
essence of prediction. He contended that there is a causal, verifiable impact on living systems in the
present time that cannot be explained by a purely reactive mechanism of system control or by
classical definitions of time.
It should be clarified that anticipation in Rosen's usage does not refer to an ability to see or
otherwise sense the immediate or the distant future: there is no prescience or psychic phenomena
suggested here. Instead, Rosen suggested that there must be information about self, about species,
and about the evolutionary environment, as it behaved through time, encoded into the organization
of all living systems. He observed that this information is capable of acting causally on the
organism's present behavior, based on relations projected to be applicable in the future. Thus, while not
violating time established by external events, organisms are capable of constructing an internal
surrogate for time as part of a model that can indeed be manipulated to produce anticipation. It is in
this sense that degrees of freedom in internal models of time allow its reversibility to produce new
information. (Rosen and Kineman 2005, p. 404).
(The idea) of an internal predictive model built into the organization of living systems does not
necessarily imply a distinct, tangible thing, such as an organ, that the living organism consults for
predictions. Rather, the idea is really just a convenient name referring to the collective source of a
natural capability. The source can be described as a complex, interactive relationship between time
and information that exists both independently as part of the external environment and is also
internalized (encoded) directly into the living system itself such that the capability is hard-wired and
can be passed from one generation to the next. … The multi-faceted aspect of reality that we call
"time" is embedded and encoded into living systems at the organizational level and this must be
viewed as part of the material organization of their anticipatory capabilities. (Rosen and Kineman
2005, pp. 406–407).
Hypothesis 3 Over the last fifty years, neuroscience, neurobiology and psychobiology
have advanced a strict materialist and realist theory of the mind as brain. Thus, all kinds of
dualism stem from archaic forms of thought and are to be replaced by monistic emergent
materialism (Bunge 2006). Accordingly, the internal model of anticipatory systems is
material. It is an integral part of the very structure of the system. No real system contains
disembodied representations; indeed representations are embodied, rendered concrete, as a
physical or a material homology.
Hypothesis 4 Our next hypothesis concerns the function of intelligence in a system: The
function of an intelligent system's anticipatory mechanism is a semantic function of reference
or representation that follows from the modeling relation between the internal
mechanism and the external world. The internal model allows the system to perform its
functions. Thus we don't see or know the world as it is; we see it through a model that the
brain constructs (over evolutionary time) so as to cope and thrive. In fact the model that our
brain constructs produces our very perception of the world. Thus color is a construct of the
brain and the central nervous system; there are only light waves in reality. The same is true
of the sense of smell. And so on.
discarded first, only to be recaptured later in terms of realizations of the relational properties held in
common by large classes of organisms.
In any case, one of the novel consequences of the relational picture (as opposed to the experimental
analytical picture of living systems which culminates in biochemistry and molecular biology) is that
many (if not all) of the relational properties of organisms can be realized in contexts which are not
normally "biological." Or, what is more germane to the present discussion, they may be realized in
the context of human activities, in the form of social, political and economic systems which deter-
mine the character of our social lives. (Rosen 1985, p. 24).
Examples of human behavior as anticipatory are almost trivial given the fact that we are conscious,
cognate, we perceive and conceive, and we act on our knowledge, our beliefs and our representations.
However, more surprising is the manifestation of similar anticipatory behavior at lower levels where
there is no question of learning or of consciousness. For instance, many primitive organisms are
negatively phototropic; they move towards darkness. Now darkness in itself has no physiological significance;
in itself, it is biologically neutral. However, darkness can be correlated with characteristics which are
not physiologically neutral; e.g. with moisture or with the absence of sighted predators. The relation
between darkness and such positive features comprises a model through which the organism predicts
that by moving through darkness, it will gain an advantage. Of course, this is not a conscious
decision; the organism has no real option, because the model is, in effect, wired-in. Another
example of such a wired-in model may be found in the wintering activity of deciduous trees. The
shedding of leaves and other physiological changes that occur in the autumn are clearly an adaptation
to winter conditions. What is the cue for such activity? It so happens that the cue is not the ambient
temperature but rather the length of the day. In other words, the tree possesses a model that antic-
ipates low temperature on the basis of a shortening of the day regardless of what the present ambient
may be. Once again, the adaptive behavior arises because of a wired-in predictive model which
associates a shortening day (which in itself is physiologically neutral) with a future drop in tem-
perature (which is not physiologically neutral.) (Rosen 1985, p. 7).
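The wintering-tree example in the quoted passage can be sketched as a contrast between reactive and anticipatory control. The thresholds and the linear "internal model" below are illustrative assumptions, not Rosen's own formalism.

```python
# Sketch of Rosen-style model-based anticipation, using the deciduous-tree
# example: the cue is day length (physiologically neutral), not the
# present temperature. All numbers are illustrative assumptions.

def internal_model(day_length_hours: float) -> float:
    """Wired-in predictive model: maps the neutral cue (day length)
    to an anticipated future temperature in degrees C."""
    return 3.0 * day_length_hours - 25.0  # toy linear relation

def reactive_controller(current_temp: float) -> str:
    """Purely reactive control: responds only to the present state."""
    return "shed leaves" if current_temp < 5.0 else "keep leaves"

def anticipatory_controller(day_length_hours: float) -> str:
    """Anticipatory control: acts on the model's prediction of a future
    state, regardless of the present ambient temperature."""
    predicted_temp = internal_model(day_length_hours)
    return "shed leaves" if predicted_temp < 5.0 else "keep leaves"

# A mild autumn day: 12 degrees C now, but only 9 hours of daylight.
print(reactive_controller(current_temp=12.0))         # keep leaves
print(anticipatory_controller(day_length_hours=9.0))  # shed leaves
```

The two controllers diverge precisely where Rosen's point bites: the anticipatory system acts in the present on an event (the temperature drop) that has not yet happened.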
Hypothesis 6 What is memory? A system with an internal model of its environment, its
past and its future is a memory system. Thus memory is a general or systemic concept. It is
the (relative) persistence of structure or invariance of state relative to another system of
which it may be a part or with which it is in interaction and which is of higher variability.
Thus razor blades, springs and, of course, all living things have memory in this general
sense.
Some kinds of memory are rigid or "hard-wired"; that is, the structure of the system
remains constant under the influence of variability. Thus, pine trees have relatively more
rigid memory than deciduous trees inasmuch as the chemical structures and processes
that underpin photosynthesis are more invariant with respect to the seasonal cycles. Other
kinds of memory are flexible insofar as a particular set of structural relations (responses) is
elicited by specific environmental conditions (stimulus). Finally, some kinds of memory
are plastic in the sense that they are produced or reproduced by environmental conditions.
Thus mental memory is both plastic and productive: neuronal assemblies form when the
firing of one cell is associated and linked with the firing of another. The phrase "Cells that
fire together are wired together" summarizes, far too simply, Hebb's (1949) famous theory
of the mechanism of mental ideation and memory. The activation of a neuronal assembly
(the production, recall or invention of a thought or a memory) is produced, constructed or
reconstructed by neurons firing each time a mental act occurs. Otherwise, neuron cells
remain overproduced, inactive and potentially free to participate in other assemblies. Thus,
unlike coded or rigid memory, physical memory reconstructs a memory or a mental act
anew every time the event is elicited. There is no permanent structure, text or code in the
brain; there is only neural propensity or action potential. There are no RAM, ROM or
hard disks in the brain. In fact, this is both a strength and a weakness; because of plasticity the
brain learns and invents, and unlike the computer it is imprecise and forgets easily.
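The Hebbian mechanism just described can be illustrated numerically. A single connection weight stands in for one link in a neuronal assembly; the learning rate, decay rate and saturating update rule below are illustrative assumptions, not Hebb's own formulation.

```python
# Minimal sketch of "cells that fire together are wired together":
# a weight grows under co-activation and fades with disuse, so a
# "memory" is nothing but the propensity of the link to reactivate.

def hebbian_update(w: float, pre: int, post: int,
                   eta: float = 0.5, decay: float = 0.05) -> float:
    """Strengthen the link when both cells fire; let it fade otherwise."""
    if pre and post:
        return w + eta * (1.0 - w)  # co-activation wires the pair (saturates at 1)
    return w * (1.0 - decay)        # disuse: the trace fades (the brain forgets)

w = 0.0
for _ in range(5):                  # repeated co-activation (rehearsal)
    w = hebbian_update(w, pre=1, post=1)
print(round(w, 3))                  # a strong, easily reactivated link

for _ in range(20):                 # prolonged disuse
    w = hebbian_update(w, pre=0, post=0)
print(round(w, 3))                  # the trace has partially faded
```

Note there is no stored "text" anywhere in this sketch: only a propensity, reconstructed each time the pair is co-activated, which is the point made above against the RAM/ROM picture.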
Hypothesis 9 Learning systems are cognitively intelligent and have a plastic or mal-
leable internal model, given that learning is the product of this malleability. Their internal
model, memory or knowledge system evolves, is modulated and is transformed with new
lower level input. On the other hand, non-cognitive intelligent systems are hard-wired for
intelligent behavior but do not evolve or learn individually.
Hypothesis 10 What then, in the above terms, is learning? Learning becomes simply
the improvement of memory in action. Learning is a directed or constructive process of
a system's internal model, its memory or its knowledge. It is the product of system
plasticity. And system plasticity, as we explained, requires redundancy, surplus and over-
production of available components or resources.
Hypothesis 11 Modeling, memorizing and learning are productive or constructive
processes. Consequently, there are no "models" as such; in reality there is only modeling.
Similarly, there is no such thing as "memory," only recalling, forgetting and memorizing.
Finally, there is no such thing as "knowledge," only the act of knowing. Cognitive
memorizing, knowing, modeling and learning are all constructive processes. It is worth
emphasizing; there is no cognitive memory, knowledge or model per se.
Hypothesis 12 Consequently, there is no "software" as distinct from "hardware" in
intelligent systems. All software is code (a physical, material sign) controlling lower-
level hardware processes. All software engineering is really hardware engineering.
Hypothesis 13 Thus the functionalist AI theory of intelligence and mind, which asserts
that the mind is to brain as software is to hardware, is a form of dualism and is both false
and barren.
Hypothesis 14 In the neuroscientific model of intelligence (Hawkins, op. cit.) knowledge
is not a self-existing entity. Knowledge is memory, and memory is a materially organized
model. Thus knowledge creation or learning is identical to the expansion and contraction
of memory. Memory is changed and improves with learning. But in this view, what is this
knowledge that it can improve? What is the stuff of knowledge?
The elements of knowledge such as concepts and ideas constitute the building blocks of
thought (Garnham and Oakhill 1994). Yet there is no real thing to which the term
"knowledge" refers; the term is a useful fiction referring to the diachronous aspect of
knowing: the state of the brain at a moment in time in the process of thinking or knowing.
To know or to have knowledge is to think. If we are to think we must have thoughts; we
must act, mentally, so to speak. To think these thoughts we think (perform actions) with
ideas.
Knowing and thinking involve the construction of trains of thought. Trains of thought
are sequences of thoughts each made up of the more basic components of cognition,
namely, concepts and images. These are intimately related to memory which is also
constructive. The construction of new trains of thought involves the formation of novel
neural pathways and assemblies whose trail persists as memory. The pathways are
reinforced with use and fade with disuse. We eventually forget a train of thought (e.g. a
memory), as these constructs remain inactive. The thinking of a familiar train of thought is
simply the reactivation of the pathway to which it corresponds in memory.
Of course, the concept of "memory" is itself as fictitious as the concept of "knowledge."
There is no storage area for trains of thought, just the reactivation of neural
assemblies and pathways.
Apart from concepts and ideas, there are, it seems, other categories of cognitive con-
struction such as music and fragrance. These constitute knowledge as well.
Many thoughts are built from concepts that we use to categorize things in the world
following the distinctions (analysis) we are able to make from perception and the models
we are able to synthesize. Early models of conceptual representation in the brain were
based on semantic networks or on sets of semantic features. Though these approaches
initially seemed dissimilar, they were shown to be formally equivalent. Both had to be modified to
accommodate the typicality effects that inspired a rival theory based on the notion of
interpretation according to a prototype or model, together with information concerning how
far something can differ from the prototype and still exemplify the same concept. Another
theory is that a concept is not represented by an abstract prototype but simply by its
exemplars. A final theory of concepts, which applies primarily to concepts of natural kinds,
is that lay or personal theories play a crucial role in categorization and that concepts are not
independent of the theories of the domains to which they apply.
A theory of concepts also has to provide a mechanism of conceptual combination that
explains how concepts can be linked into chains of thought (how the concept of "pet fish"
can be constructed from the concepts of "pet" and "fish"). It also has to explain the idea of
conceptual scheme or context within which some concepts are more basic than others.
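The contrast between the prototype and exemplar theories sketched above can be made concrete in a small classifier. The feature space, distance measure and data below are illustrative assumptions; the point is only that one theory compares a new item to an averaged summary while the other compares it to stored instances.

```python
# Toy contrast between prototype-based and exemplar-based categorization.
# Items are points in a 2-D feature space; both classifiers pick the
# nearest category, but they represent a category differently.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def prototype(exemplars):
    """Average the stored exemplars into one summary representation."""
    n = len(exemplars)
    return tuple(sum(e[i] for e in exemplars) / n
                 for i in range(len(exemplars[0])))

def classify_by_prototype(item, categories):
    """Prototype theory: compare the item to each category's prototype."""
    return min(categories,
               key=lambda c: distance(item, prototype(categories[c])))

def classify_by_exemplars(item, categories):
    """Exemplar theory: nearest stored instance wins; no prototype formed."""
    return min(categories,
               key=lambda c: min(distance(item, e) for e in categories[c]))

# Two toy categories in a (size, ferocity) feature space.
categories = {
    "pet":  [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0)],
    "wild": [(8.0, 9.0), (9.0, 8.0), (9.0, 9.0)],
}
print(classify_by_prototype((2.0, 2.0), categories))   # pet
print(classify_by_exemplars((8.5, 8.5), categories))   # wild
```

A distance threshold around the prototype would capture the "how far something can differ and still exemplify the same concept" component mentioned above.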
Some kinds of thinking are mediated or encoded, not by language, but by images. One
theory suggests that ideas are encoded both verbally and in images. It suggests that mental
action is performed using an image rather than a language form of representation. The
image, in this theory, is not a picture of the referent but a model that preserves certain
spatial relationships. Finally, image representation is not exclusive and may incorporate
concepts as well.
Much work on learning organizations masks an underlying yet unfounded fear of the
consequences of extending the AI view of the brain to organizations: that the brain is
essentially a computer, that the mind is software, and that knowledge can somehow be
stored in memory. Eventually, according to this view, if organizations are learning
systems, computers will one day replace them. Although the AI program for explaining
brain and mind has been a failure (Searle 1994, 1995a, b), it remains the dominant view of
organizational knowledge and learning.
Hypothesis 15 The organization as theoretical object is a material system and its
properties, including its cognitive ones, need to be explained realistically. Yet, almost
universally, the objective idealistic ("brainless") theory of knowledge and mind is adopted
in organization theory; it stems from the unfortunate adoption of Polanyi's "tacit
knowledge" epistemology (Polanyi 1958, 1967), which confuses knowledge with cognition. In
the following table we compare the view concerning various categories of organizational
cognition that results from the integration of the AI stock and flow model of knowledge
with a tacit knowledge epistemology, and the interpretation that results from a constructive
neuroscientific viewpoint.
As the table shows, there is no such thing as memory: there is only the action of
memorizing and the action of memory construction and use. In other words, the AI
computational or instructional stock and flow model of memory and knowledge is false. The
brain does not behave like a universal Turing machine. Similarly, there is no knowledge,
but only knowing (Table 1).
Hypothesis 16 What, then, is the constructive cognitive mechanism that allows an
organization to learn? The concept of a learning organization implies that the organization
is in some sense intelligent. Intelligent behavior, as previously described, need not
involve highly developed cognitive capacities, that is, it need not be restricted to organisms
with evolved brains or highly developed central nervous systems. Indeed, the concept
applies to all systems, both natural and artificial, that exhibit intelligent behavior. The
simple yet general model and mechanism of an anticipatory system explains intelligent
behavior. All that is required for an organization to learn is that it exhibits the properties of
an anticipatory system with flexible memory.

Table 1  Stock and flow categories of organizational cognition and their constructive
neuroscientific interpretation

  Stock and flow view           Constructive view
  Knowledge                     Knowing
  Belief                        Believing
  Accumulated understanding     Process of understanding
  Information                   Informing, communicating, signaling, transmitting
  Tacit knowledge (ineffable)   Tacit knowing or structural emergent knowing: "the way
                                in which we are aware of neural processes in terms of
                                perceived objects" (Polanyi 1967, p. x)
  Memory                        Memorizing and recalling
Hypothesis 17 The organization, like the brain, is a uniform hierarchical semantic
memory system (Hawkins and Dileep 2006) with the capacity for self-referential multi-
level semantic processing. It functions by construction, anticipation and planning. The
organization, like the brain, learns by anticipation, planning, and feedback.
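This anticipation-planning-feedback loop can be put in code form. What follows is a deliberately minimal sketch under our own assumptions, not a model from the paper: the class name, the one-number internal model, and the fixed learning rate are illustrative inventions.

```python
# Toy sketch of an anticipatory system (illustrative assumption, not the
# paper's model): the system keeps an internal model of its environment,
# anticipates the next observation, and revises the model from feedback.

class AnticipatorySystem:
    def __init__(self, learning_rate=0.5):
        self.model = 0.0              # internal model: one estimated quantity
        self.learning_rate = learning_rate

    def anticipate(self):
        """Predict the next observation from the internal model."""
        return self.model

    def learn(self, observation):
        """Feedback: revise the model in proportion to prediction error."""
        error = observation - self.anticipate()
        self.model += self.learning_rate * error
        return abs(error)

system = AnticipatorySystem()
errors = [system.learn(obs) for obs in [1.0] * 5]
# Prediction error shrinks as the model comes to anticipate its input:
# errors == [1.0, 0.5, 0.25, 0.125, 0.0625]
```

Learning here is nothing more than the progressive reduction of anticipation error, which is all the hypothesis requires of an organization: an anticipatory mechanism coupled to a flexible (revisable) memory.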
In the organization, as in the brain, therefore, the general mechanism for learning is
anticipation. Explicit anticipation in the organization at the sociological level is organized
as coordinated and collaborative plan making. This implies that the basic sociocognitive
unit or learning mechanism of the organization is the plan; plans and planning are ubiq-
uitous throughout all organizations and at all levels (Bratman 1987; Seni 1990).
Moreover, groups, not individuals, perform plan making in the organization through
various processes of cooperation and collaboration (Tomasello 2009, 2010; Seni 2009,
2010). Although individuals are the "atoms" of the organization, or in neuroscientific
terms, its "cells" or "neurons", groups of individuals are the "molecules" or the "neural
assemblies" whose organization exhibits memory or semantic content. Knowledge in the
organization is grounded in group processes. The components of the organization and the
brain (i.e. the groups in the organization and the various regions of the neocortex) operate
not on the world itself but on a model that represents or stands in semantical relation to the
world (the relation of so-called intentionality; see Searle 1983). This hierarchical
semantical model refers to the world and in so doing allows these systems to infer
from, and predict, events in the world.
Hypothesis 18 Since plans are epistemological objects, organizational cognitive capa-
bilities are reducible to the capacity to plan and to implement plans. Thus organizational
capabilities are a function of system plasticity, organizational redundancy and surplus.
Hypothesis 19 How then to study knowledge in the organization, and what considerations
follow from adopting an anticipatory systems view of organizations? Three considerations
need to be borne in mind: First, the idea that multiple aspects of a system's external
context are encoded within the organization of the system itself as mechanisms of
anticipatory control. Second, the idea that multiple aspects of time (rates, sequences,
duration) are encoded into the organization of the system and parameterized differently
from objective properties of time. Moreover, these aspects of time are related to each
other within the system's organization, making anticipation possible. And third, the idea that
organizations are complex systems and that organization or system structure (including
organizational routines) contributes to the bulk of the causal information about the system.
Modes of analysis and explanation preserving this organization need to be used alongside
those from reductionist (individualistic) modes. In short, a system's knowledge emerges
from the material organization of anticipation and evaluation. This is the essence of the
sociocognitive hypothesis.
Analogies between brains and organizations are nothing new; they go back at least to the
work of Stafford Beer (1972). However, for Beer this was mere analogy, simply a hunch
in thinking about systems. The advances in neuroscience allow us to now move from mere
Neurosciences Call for a New Model 1499
composition and its structure, self-organizing processes are themselves causally determined.
Thus the system responds to circumstance or behaves according to its memory (the structural
neuroscientific approach) or to its knowledge (the cognitive psychological approach).
Homology 2 The AI (artificial intelligence) program for explaining the brain is, as has
been pointed out, unfruitful. Similarly, the homologous stock and flow theories of organizational
intelligence, knowledge and learning common in the literature do not explain the emergent
cognitive properties of the organization.
Homology 3 Every mental state is a brain state. The mind is an emergent property of the
brain. A mental disorder is therapeutically treated when the brain's structure and function
are altered. In homologous fashion, every piece of organizational knowledge is in fact a state
of the organization, i.e. a piece of materially coded or embedded information.
Organizational learning alters the organization's structure and function.
Homology 4 The fundamental constituents of the brain are not chemical molecules; the
atoms of neurological functions are individual cells or neurons. However, the basic units of
cognition are groups of cells organized in neocortical columns acting as modules. Modules
are grouped into entities by a common extrinsic connection, by the need to replicate a
function, etc. Closely linked and multiply interconnected subsets of modules in separated
entities form connected distributed systems throughout the brain.
In homologous fashion, and contrary to the conventional view, the basic unit of orga-
nizational cognition is not the individual person, but groups of individuals (Gureckis and
Goldstone 2006). These groups of individuals are organized as social subsystems and agents
at work (i.e. they are set in cooperative and collaborative relations). In large organizations,
such groups may be part of larger entities linked by both formal and informal relations in the
organization. Informal relations give rise to distributed systems throughout the organiza-
tion: In other words, the organization is a distributed system of groups.
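Homology 4 admits a toy data-structure reading (our own illustration; the group names, relations, and helper are hypothetical): working groups are the nodes, formal and informal relations are the links, and "one distributed system of groups" amounts to connectivity of the resulting graph.

```python
# Toy reading of Homology 4: groups (not individuals) as the basic units,
# knit into one distributed system by formal and informal relations.
from collections import deque

groups = {"sales", "engineering", "support", "finance"}
relations = {                      # hypothetical inter-group links
    ("sales", "support"),
    ("support", "engineering"),
    ("engineering", "finance"),
}

def is_distributed_system(nodes, edges):
    """True iff the relations connect every group to every other, i.e.
    the organization forms a single distributed system of groups."""
    adjacency = {n: set() for n in nodes}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, queue = set(), deque([next(iter(nodes))])
    while queue:                   # breadth-first traversal from any group
        n = queue.popleft()
        if n not in seen:
            seen.add(n)
            queue.extend(adjacency[n] - seen)
    return seen == nodes

assert is_distributed_system(groups, relations)
```

Removing a link (say, support-engineering) splits the graph into disconnected clusters, and the predicate fails: the groups would no longer constitute one distributed system.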
Homology 5 The structure of the neocortex is homogeneously replicated across the brain.
There are no specialized organs for mental functions in the neocortex. This implies that the
basic structure of the parts of the brain involved in cognitive functions is relatively
homogeneous. This in turn leads one to suppose that there is a basic mechanism common to
all mental function. This idea has been clearly proposed and expressed in the basic work of
Vernon Mountcastle and Gerald Edelman (1982). We cite from the introduction to their
book by Francis O. Schmitt:
The Mindful Brain is a proposal by two eminent biological scientists for a mechanism whereby mind
becomes manifest from the operations of brain tissue. The proposal is detailed and in many respects
sophisticated.
This significant contribution to neuroscience consists of two papers, the first by Mountcastle and the
second by Edelman. Between them, they examine from different but complementary directions the
relationships that connect the higher brain functions (memory, learning, perception, thinking) with
what goes on at the most basic levels of neural activity, with particular stress on the role of local
neuronal circuits.
Mountcastle's paper reviews what is known about the actual structure of various parts of the
neocortex. He relates the large entities of the neocortex to their component modules (the local
neuronal circuits) and shows how the complex interrelationships of such a distributed system can yield
dynamic distributed functioning. Mountcastle proposes that higher functions depend on the ensemble
actions of very large populations of neurons in the forebrain organized into complex interaction
systems of multiply replicated local neuronal circuits of columns, which in turn are composed of
closely linked subsets of minicolumns. These columns are the fundamental units of the neocortex
involved in higher brain function. He postulates that the ways in which local cortical columns, as
processing and distributing units, operate upon their inputs to produce outputs is qualitatively similar
in all neocortical areas and are basic to the carrying out of high-order functions by the brain.
Although phylogenetically older parts of the brain may play a significant role, the key to theories of
higher brain functions is in the unique structure and properties of the cerebral cortex.
For his part, Edelman proposed a general mechanism, selectionist rather than instructionalist, for
the higher brain functions: memory, perception, self-awareness, learning, and consciousness. This
mechanism relied on the finding that groups of cells, not single cells, are the main units of selection
in higher brain functions. (Mountcastle and Edelman 1982, introduction)
decision-making and planning. How does neuroscience explain these functions, how do
thoughts flow and how is knowledge constructed?
We have known for some time, of course, that cells or neurons constitute the brain's
most basic functional components. But even though imaging technology allows us to see
individual neurons, our models of how their action and interaction are coordinated are still
very partial. The general idea has been that each cell or neuron functions like a
microprocessor and that each is linked to billions of others in functional assemblies.
Although more recent findings show this idea to be oversimplified, the two prevailing
classical models of neural function have been at odds since their founders were jointly
awarded the Nobel Prize over 100 years ago; both are still relevant.
On the one hand, the Italian physician Camillo Golgi conceived of the working of the
brain as composed of the activity of neurons, each being a node (or switch) in a uniform
interconnected network. This reticularist model was formalized as the McCulloch-Pitts
(1943) model. On the other hand, the Spanish anatomist Santiago Ramón y Cajal proposed
the neuron doctrine, in which each neuron is an island unto itself rather than a node in a
network.
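The McCulloch-Pitts (1943) formalization reduces each node to a threshold unit: binary inputs, fixed weights, and an all-or-none output. A minimal sketch (the function name and weight choices are our own, for illustration):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) iff the weighted input sum
    reaches the threshold, otherwise stay silent (0)."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Such units already compute logical functions, the basis of the
# reticularist picture of the brain as a uniform switching network.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)  # both must fire
OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)   # either suffices
```

McCulloch and Pitts showed that networks of such units can realize any finite logical expression, which is what licensed the later computational reading of the brain that the paper criticizes.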
One of the great discoveries made in applying the neuron doctrine is that neural impulses
(called action potentials) carry information in one direction, from the cell body to the axon
tip. Every morsel we taste, every concept we form, every idea we have, and every piece of
knowledge we construct is described by patterns of impulses firing through the axons.
Moreover, neuroscientists have found that the same frequency of impulses can signify
different events: the same frequencies code for very bright light when we are outside in the
day and for relatively dim light when we are inside at night. That is because the impulse code reports
changes of state rather than transcriptions of states (e.g. sensations). This is of immense
importance since it means that the brain does not map the world to recall it on demand;
rather it reconstructs it anew every time it is called on, following potential pathways that
may be reinforced or inhibited. Action potential coding explains the working of the brain as
a constructive and reconstructive organ rather than as a storage and retrieval system. "What
you saw in the mirror when you shaved this morning is never the same as what you saw
before" (William James). How does the brain reconstruct? It succeeds in doing this by
imposing and using a higher-order level of invariance or a model of categorization.
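The contrast between transcribing states and reporting changes of state can be illustrated with a toy delta code. This is our own analogy, not a claim about actual neural coding: the receiver never stores the signal, it reconstructs it anew by accumulating the reported changes over its own running state.

```python
def encode_changes(signal, initial_state=0):
    """Report changes of state rather than the states themselves."""
    deltas, prev = [], initial_state
    for s in signal:
        deltas.append(s - prev)
        prev = s
    return deltas

def reconstruct(deltas, initial_state=0):
    """Rebuild the signal anew by integrating the reported changes."""
    state, rebuilt = initial_state, []
    for d in deltas:
        state += d
        rebuilt.append(state)
    return rebuilt

brightness = [3, 3, 8, 8, 2]          # arbitrary stimulus levels
pulses = encode_changes(brightness)   # [3, 0, 5, 0, -6]: only transitions
assert reconstruct(pulses) == brightness
```

The same delta (say, +5) stands for different absolute levels depending on the receiver's current state, just as the text notes that the same impulse frequency can stand for bright daylight or dim indoor light.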
The debate between the doctrinaires and the reticularists has raged on for decades as
every new advance turned up arguments for both views (see Fields 2006; Ramón y Cajal
1995; Bullock 2005). But there has been a startling expansion in thinking beyond both
the classical models since the late eighties: most of the cells in the brain are not neurons but
glia. Nearly ten times as many of them fill the spaces between neurons, and the ratio of glia
to neurons increases in animals higher up the tree of evolution. Moreover, these cells
have information-processing functions as well, and are coordinated by a variety of
chemical mechanisms. Glia violate both the neuron and reticularist doctrines in two
ways. First, information flows through cells in the brain that are not neurons. Second,
unlike neurons, which interact through a series of fixed links or pathways, glia interact by
broadcasting signals the way cell phones do. Glia make shapeless connections that flow
across the hardwired connections among neurons. To the neuron doctrine we must now add
the glia doctrine. Both Golgi and Ramón y Cajal were right. Yet we still don't know how
brain function and memory are coordinated in constructing a piece of knowledge, much
less a set of beliefs.
Although they never attain the level of complexity of animal cognition, we do know that
the higher-order socio-cognitive functions of the organization are memory, perception,
The remarkable diversity of nervous systems and their exquisite capacity for adaptive function are
both intriguing and confounding to the neurobiologist. Despite their complexity, however, all nervous
systems appear to obey similar general principles at the level of morphological expression of neu-
ronal structures and in their mechanism of signal transmission. The recognition of these general
principles and their application to subsystems in more complex brains has been among the greatest
triumphs of neurobiology in this century. (Edelman 1982, p. 51)
(The) complexity of mammalian neuronal systems and their behavioral repertoires is enormous, and
it would be premature to elaborate a detailed theory to account for their function. It may not be
premature, however, to ask a more general question related to the evolution, development, and
function of higher brain systems, particularly those in man: Does the brain operate according to a
single principle in carrying out its higher order cognitive functions? That is, despite the manifold
differences in brain subsystems and the particularities of their connections, can one discern a general
mechanism or principle that is required for the realization of cognitive faculties? If so, at what level
does the mechanism operate: cells, molecules or circuits of cells? (Edelman 1982, p. 52)
6 Conclusions
In this paper we have argued for materialist and emergent concepts of organizational
intelligence, memory and learning based on some of the findings of modern neuroscience.
Indeed, we have argued for a sociocognitive hypothesis rooted in neuroscience: that
neuroscience research has begun to confirm that some brain connections are formed
socially; when two people connect through cooperative and collaborative tasks, their neural
pathways, as illuminated by fMRI scans, take on similar patterns (Lindenberger et al.
2009). These changes in brain patterns are reinforced by further interaction, so that an
organization that successfully draws people into repeated patterns of thought and action
may literally rewire their neural pathways. These pathways are further reinforced by the
reactions of hormones, neurotransmitters, and other chemicals within the body.
Organizational cognition makes organizations "reconstitutable", as Krippendorff (2008)
likes to say; that is, organizations persist in history, over time, and in spite of personnel
changes. What makes reconstitutability possible is systemic intelligence and memory,
properties not of individual brains but of the communication system (the relations of
cooperation and collaboration) in the organization. In this sense, the organization wires
or rewires neural pathways in individuals and the sociocognitive relation of individual-
to-system emergence is reversed.
To elaborate on these ideas we have proposed a number of hypotheses and homologies
concerning the cognitive capabilities of the organization. We have inferred these ideas
from results in modern neuroscience. We have based these hypotheses and homologies on
(1) the theory of intelligence and of intelligent systems; (2) the material theory of memory,
of mental modeling and of cognitive epistemology; (3) the theory of cognitive action, and
more particularly the theory of learning. At first blush, these hypotheses and homologies
seem plausible and fill a yawning void in the theory. Yet, they remain to be fleshed out and
further refined. They will also have to be tested eventually in the context of real social
organizations.
References
Argote, L. (1999). Organizational learning: Creating, retaining and transferring knowledge. Netherlands:
Kluwer.
Argyris, C. (1982). Reasoning, learning and action. San Francisco, CA: Jossey Bass.
Argyris, C. (1997). On organizational learning. Malden, MA: Blackwell.
Argyris, C., & Schon, D. A. (1974). Organizational learning: A theory of action perspective. Reading, MA:
Addison-Wesley.
Arrow, K. (1962). The economic implications of learning by doing. Review of Economic Studies, 29,
166–170.
Baum, J. (Ed.). (2002). The Blackwell companion to organizations. London: Blackwell.
Baumard, P. (1996). Organisations déconcertées: La gestion stratégique de la connaissance. Paris: Masson.
Beer, S. (1972). The brain of the firm: The managerial cybernetics of organization. London: Allen Lane the
Penguin Press.
Bood, R. P. (1998). Charting organizational learning: A comparison of multiple mapping techniques.
In C. Eden & J.-C. Spender (Eds.), Managerial and organizational cognition. London: Sage.
Boulding, K. (1968). The image: Knowledge and life in society. Ann Arbor: University of Michigan.
Bouvier, A. (2004). Management et sciences cognitives. Paris: Presses Universitaires de France.
Bratman, M. E. (1987). Intention, plans and practical reason. Cambridge, MA: Harvard University Press.
Bullock, T. H. (2005). The neuron doctrine, redux. Science, 310(5749), 791–793.
Bunge, M. (1977a). General systems and holism. General Systems, 22, 87–90.
Bunge, M. (1977b). The GST challenge to the classical philosophies of science. International Journal of
General Systems, 4, 329–376.
Bunge, M. (1977c). Emergence and the mind. Neuroscience, 2, 501–509.
Bunge, M. (1979a). Treatise on basic philosophy, Vol. 3, ontology I: The furniture of the world. Dordrecht,
Holland: D. Reidel.
Bunge, M. (1979b). Treatise on basic philosophy, Vol. 4, ontology II: A world of systems. Dordrecht:
D. Reidel.
Bunge, M. (1979c). A systems concept of society: Beyond individualism and holism. Theory and Decision,
10, 13–30.
Bunge, M. (1980). The mind-body problem. Oxford: Pergamon Press.
Bunge, M. (1983a). Treatise on basic philosophy, Vol. 5, epistemology and methodology I: Exploring the
world. Dordrecht, Holland: D. Reidel.
Bunge, M. (1983b). Treatise on basic philosophy, Vol. 6, epistemology and methodology II: Understanding
the world. Dordrecht, Holland: D. Reidel.
Bunge, M. (2000). Systemism: The alternative to individualism and holism. Journal of Socio-Economics,
29, 147–157.
Bunge, M. (2003). Emergence and convergence: Qualitative novelty and the unity of knowledge. Toronto:
University of Toronto Press.
Bunge, M. (2006). Chasing reality: Strife over realism. Toronto: University of Toronto Press.
Bunge, M., & Ardila, R. (1987). Philosophy of psychology. New York, NY: Springer.
Coleman, J. S. (1988). Social capital in the creation of human capital. American Journal of Sociology,
94(Supplement), S95–S120.
Cossette, P. (2004). L'Organisation: Une perspective cognitiviste. Québec, QC: Presses de l'Université
Laval.
Daft, R. L., & Weick, K. E. (1984). Towards a model of organizations as interpretive systems. Academy of
Management Review, 9, 284–295.
Dodgson, M. (1993). Organizational learning: A review of the literature. Organization Studies, 14(3),
375–394.
Edelman, G. E. (1982). Group selection and phasic reentrant signaling: A theory of higher brain function.
In G. E. Edelman & V. Mountcastle (Eds.), The mindful brain: Cortical organization and the
group-selective theory of higher brain function. Cambridge, MA: MIT Press.
Engel, A., Debener, S., & Krazciach, C. (2006). Coming to attention: How the brain decides what to focus
conscious attention on. Scientific American Mind, 17(4), 46–53.
Fields, R. D. (2006). Beyond the neuron doctrine. Scientific American Mind, 17(1), 21–27.
Fiol, C. M., & Lyles, M. A. (1985). Organizational learning. Academy of Management Review, 10(4),
803–813.
Garnham, A., & Oakhill, J. (1994). Thinking and reasoning. Oxford: Blackwell.
Grueter, T. (2006). Picture this: How does the brain create images in our minds? Scientific American Mind,
17(1), 18–23.
Gureckis, T. M., & Goldstone, R. L. (2006). Thinking in groups. Pragmatics & Cognition, 14(2), 293–311.
Hawkins, J. (2004). On intelligence. New York, NY: Henry Holt & Co.
Hawkins, J., & Dileep, G. (2006). Hierarchical temporal memory: Concepts, theory and terminology.
Menlo Park, Ca: Numenta, Inc.
Hebb, D. O. (1949). The organization of behavior. New York, NY: Wiley.
Hedberg, B. (1981). How organizations learn and unlearn. In P. C. Nystrom & W. H. Starbuck (Eds.),
Handbook of organizational design (Vol. 1, pp. 3–27). New York: Oxford University Press.
Kim, D. H. (1993). The link between individual and organizational learning. Sloan Management Review,
35(1), 37–50.
Klimecki, R., & Lassleben, H. (1998). Modes of organizational learning: Indications from an empirical
study. Management Learning, 29(4), 405–430.
Kosslyn, S. M. (1996). Image and the brain. Cambridge, MA: MIT Press.
Krippendorff, K. (2008). Social organizations as reconstitutable networks of conversation. Cybernetics and
Human Knowing, 15(3–4), 149–161.
Levitt, B., & March, J. G. (1988). Organizational learning. Annual Review of Sociology, 14, 319–340.
Lindenberger, U., Li, S.-C., Gruber, W., & Müller, V. (2009). Brains swinging in concert: Cortical phase
synchronization while playing guitar. BMC Neuroscience, 10(22).
March, J. G., & Olsen, J. P. (1975). The uncertainty of the past: Organizational learning under ambiguity.
European Journal of Political Research, 3, 147–171.
March, J. G., & Simon, H. A. (1958). Organizations. New York: Wiley.
Mayr, E. (1982). The growth of biological thought: Diversity, evolution and inheritance. Cambridge, MA:
Belknap Press of Harvard University Press.
McCulloch, W. S., & Pitts, W. H. (1943). A logical calculus of the ideas immanent in nervous activity. The
Bulletin of Mathematical Biophysics, 5, 115–133.
Moessinger, P. (1991). Les Fondements de l'Organisation. Paris: Presses Universitaires de France.
Moessinger, P. (2008). Voir la Société: Le micro et le macro. Paris: Hermann Editeurs.
Mountcastle, V. (1982). An organizing principle for cerebral function: The unit module and the distributed
system. In G. E. Edelman & V. Mountcastle (Eds.), The mindful brain: Cortical organization and the
group-selective theory of higher brain function. Cambridge, MA: MIT Press.
Nonaka, I. (1991). The knowledge creating company [Reprint 91608]. Harvard Business Review.
Nonaka, I. (1994). A dynamic theory of organizational knowledge creation. Organization Science, 5(1),
14–36.
Nonaka, I., & Takeuchi, H. (1995). The knowledge creating company: How Japanese companies create the
dynamics of innovation. Oxford: Oxford University Press.
Nonaka, I., von Krogh, G., & Voelpel, S. (2006). Organizational knowledge creation theory: Evolutionary
paths and future advances. Organization Studies, 27(8), 1179–1208.
Pylyshyn, Z. (2003). Return of the mental image: Are there really pictures in the brain? Trends in Cognitive
Sciences, 7(3), 113–118.
Ramón y Cajal, S. (1995). Histology of the nervous system of man and vertebrates. Oxford: Oxford
University Press.
Rosen, R. (1974). Planning, management, policies and strategies: Four fuzzy concepts. International Journal
of General Systems, 1(4), 245–252.
Rosen, R. (1985). Anticipatory systems: Philosophical, mathematical, and methodological foundations.
Oxford: Pergamon.
Rosen, J., & Kineman, J. J. (2005). Anticipatory systems and time: A new look at Rosennean complexity.
Systems Research and Behavioral Science, 22, 399–412.
Scott, R. W. (1987). Organizations: Rational, natural and open systems. Englewood Cliffs, N.J.: Prentice-Hall.
Searle, J. (1983). Intentionality: An essay in the philosophy of mind. Cambridge, UK: Cambridge University
Press.
Searle, J. (1994). The rediscovery of the mind. Boston: MIT Press.
Searle, J. (1995a). The mystery of consciousness: Part I. New York Review of Books, 52(17), 60–65.
Searle, J. (1995b). The mystery of consciousness: Part II. New York Review of Books, 52(18), 54–61.
Seni, D. A. (1990). The sociotechnology of sociotechnical systems: Elements of a theory of plans. In P.
Weingartner & G. Dorn (Eds.), Studies on Mario Bunge's Treatise (pp. 431–454). Amsterdam: Rodopi
Press.
Seni, D. A. (2009). Capacités coopératives et capacités collaboratives des organisations: Une esquisse.
5ième Colloque sur le Management des Capacités Organisationnelles, Actes du 77e Congrès de
l'Association Canadienne-Française pour l'Avancement des Sciences (ACFAS), May 2009,
Université d'Ottawa, Ottawa, Canada.
Seni, D. A. (2010). La coordination dans les organisations: Une approche axiomatique. 6ième Colloque
sur le Management des Capacités Organisationnelles, Actes du 78e Congrès de l'Association
Canadienne-Française pour l'Avancement des Sciences (ACFAS), May 2010, Université de
Montréal, Montreal, Canada.
Seville, M. G. (1996). La mémoire des organisations. Paris: L'Harmattan.
Spender, J.-C. (1996). Making knowledge the basis of a dynamic theory of the firm. Strategic Management
Journal, 17(Winter special issue), 45–62.
Spender, J.-C., & Eden, C. (Eds.). (1998). Managerial and organizational cognition: Theory, methods and
research. London: Sage.
Tomasello, M. (2009). Why we cooperate. Cambridge, MA: MIT Press.
Tomasello, M. (2010). Origins of human communication. Cambridge, MA: MIT Press.
Tsoukas, H. (2005). Organizations as knowledge systems: A brief overview. In H. Tsoukas (Ed.), The firm as
a distributed knowledge system: A constructivist approach (pp. 97101). New York: Oxford University
Press.
von Krogh, G., & Roos, J. (1995). Organizational epistemology. Vhps Distribution.
Weick, K. E., & Roberts, K. H. (1993). Collective mind in organizations: Heedful interrelating on flight
decks. Administrative Science Quarterly, 38, 357–381.
Weick, K. E., & Westley, F. (1996). Organizational learning: Affirming an oxymoron. In S. R. Clegg,
C. Hardy, & W. R. Nord (Eds.), Handbook of organizational studies (pp. 440–459). London: Sage.
Williams, A. P. O. (2001). A belief-focused process model of organizational learning. Journal of
Management Studies, 38(1), 67–85.