
Towards an Enactive Epistemology of Technics

Armen Khatchatourov
John Stewart
Charles Lenay
COSTECH UTC, France
E-mail: armen.khatchatourov@utc.fr, john.stewart@utc.fr, charles.lenay@utc.fr

Abstract
The aim of this paper is to introduce an epistemological framework in which technics are considered from the very start in their intertwining with the human. We show in which sense the enactive paradigm can provide the basis for such an epistemology.
Our thesis is that all technical artifacts, from stone tools to cars to computers, are "enactive interfaces" that mediate the structural coupling between human beings and the world they live in, and hence bring forth a particular world of lived experience.
The social dimension of this approach to technics is also discussed.

1. Introduction
The basic scheme for considering enaction is the dynamic sensory-motor coupling between an organism and its environment. The sensory inputs S are used to guide the actions A; the actions A modify the environment and/or the relation of the organism to its environment, and hence modify in return the sensory inputs. This basic scheme applies to all living beings. In the 1920s, von Uexküll characterized animal worlds on the basis of sensorimotor contingencies as they function in an ecological context.
What the world is for the organism amounts to neither more nor less than the consequences of its actions for its sensory inputs, i.e. to its sensory-motor contingencies [1]; and this in turn clearly depends on the repertoire of possible actions. Without action, there is no world and no perception.
There is a deep affinity between this approach, the enactive approach of Varela [2], and ecological psychology [3], according to which perception is not a matter of computational representation but rather a direct perception of affordances, i.e. of potential actions as such. This affinity lies, as we understand it, in (a) a non-representationalist framework, and (b) the fact that rules or laws of control [3] or contingencies [2] are not pre-given but emerge from the interaction between an organism and its environment. For the purposes of this article we will not go any further into the description of (or the discrepancies between) these approaches; we propose rather to consider what is specific to human beings. One of the major characteristics of human worlds is that the sensory-motor coupling is mediated by technical artifacts, and this leads to two radical innovations (Fig. 1).

Figure 1. Mediated sensorimotor coupling.
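Purely as an illustration of the scheme in Fig. 1, the coupling loop can be sketched computationally; the names below (Environment, Artifact, Agent, step) and the toy dynamics are merely illustrative assumptions, not a model of any particular device.

# Minimal sketch of (mediated) sensory-motor coupling: S guides A, A modifies
# the environment, which in turn modifies S. The "artifact" is an optional
# transformation inserted into both channels, as in Fig. 1. Purely illustrative.

class Environment:
    def __init__(self, state=0.0):
        self.state = state

    def apply(self, action):
        self.state += action          # action A modifies the environment

    def sense(self):
        return self.state             # sensory input S depends on the environment


class Artifact:
    """A mediating artifact: it reshapes both what can be done and what is felt."""
    def transform_action(self, action):
        return 2.0 * action           # e.g. a tool amplifies the repertoire of action

    def transform_sensation(self, s):
        return round(s, 1)            # e.g. an instrument re-organizes sensory input


class Agent:
    def act(self, sensation):
        return -0.1 * sensation       # a simple sensory-motor contingency: S guides A


def step(agent, env, artifact=None):
    s = env.sense()
    if artifact:
        s = artifact.transform_sensation(s)
    a = agent.act(s)
    if artifact:
        a = artifact.transform_action(a)
    env.apply(a)                      # closes the loop: A modifies S in return
    return s, a


if __name__ == "__main__":
    env, agent = Environment(state=1.0), Agent()
    for _ in range(3):
        print(step(agent, env, artifact=Artifact()))

The only point of the sketch is structural: the artifact does not add a second pole of interaction; it reshapes the single coupling that already holds between the agent and its environment.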

Firstly, the range of possible sensory inputs and the repertoire of possible actions are greatly increased, without any limits other than the invention and fabrication of new artifacts. This is clear for the new possibilities of action which are created by tools, from hammers to power tools. It is also clear for instruments (microscopes, telescopes, radios and so on), which result in sensory inputs that are strictly impossible without the devices in question.
More generally, but less obviously, technical artifacts organize sensory experience: think of the world of the skier, which is impossible without the artifact. Even when we are not actually skiing, our perception of the mountain is determined by the possibility (i.e. the virtual action) of skiing and the correlative sensations. So this first point can be understood more profoundly: in the case of contemporary humans, there are hardly any natural perceptions or relations to the world; our sensory-motor coupling is always fashioned, at least virtually, by technical artifacts [4].
Secondly, technical artifacts are not irremediably fixed to the body. More precisely, technical artifacts exist in two modes: "in hand" and "put down". When a
technical artifact is "in hand", being used, it becomes a prosthetic extension of the body; correlatively, the
artifact disappears from consciousness, and the
attention of the human subject is focused on the
world that comes about (think again of the world of
the skier, for example). Artifacts, like the body, are
normally transparent to the subject; as Heidegger [5]
has pointed out, they are only noticed when they are
dysfunctional (a wobbly hammer or a twisted ankle).
However, unlike biological organs, technical artifacts
can also be put down: separated from the body, they
can now become objects of attention. In this mode,
their objective physical properties can be perceived;
they can be invented, fabricated, repaired and so on [6].
The whole question of learning can be seen as the
back-and-forth movement between these two modes.
This also explains the radical innovative potential of technical artifacts.

2. A categorization of technical artifacts
Before going further, it is useful to give a more complete categorization of technical artifacts, which can be roughly divided into three types. The first type, discussed above, can be called "extensions of the body": tools and sensory instruments. But there is also a second type of artifact, consisting of deliberate modifications of the environment: roads, buildings, fields and so on. It is even more obvious that this second type of artifact also modifies the world that human beings live in.
A third sort can be called "semiotic artifacts". Here, the actions consist in emitting signals, and the sensory input is specifically geared to the reception of these signals. If the conditions that trigger the emission of a signal and the response of the receiver are appropriate, this leads to a co-ordination of actions, and constitutes the basic form of communication, already present in the animal world. The human inventions are, first of all, language itself, and then multiple technical inventions: writing, printing and computers. It is worth noting that computers are not only semiotic artifacts but also sensory-motor devices which comprise a certain repertoire of real actions (moving a mouse, joysticks, etc.) with, in return, an increasing range of sensory inputs (visual patterns, sounds, etc.); regularities are established between action and sensation in this case, as for the first type of artifacts.
This categorization can be useful for analytical purposes; but it is important to note that in practice, technical artifacts do not function in isolation from each other, but form technical systems with a synergy between these three types. For example, roads (type 2) go together with cars (type 1), their synergy being organized by maps and plans (type 3). A possible use of the term "technology" (techno-logos) is to designate the situation where there is linguistic communication about the design, fabrication and use of technical artifacts.
Following [5], some precisions can also be brought to the distinction between the two modes we mentioned above. The difference between "in hand" and "put down" is not simply that between attached and not attached to the body. First of all, there are two relatively independent levels of distinction:
1. Between "put-down" and "in-hand": "put-down" corresponds to the mode in which the artifact is the object of explicit attention, as an assembly of matter with certain properties (the specifically scientific mode of relation to the object). One can think of the difference between designing and riding a bicycle. The "in-hand" mode is the mode in which the user is engaged in the activity, and in which, under normal conditions, the artifact is transparent: one feels it as an extension of the body, not as an object of physics.
2. Between a normally functioning and a broken artifact. Consider the situation in which the artifact is broken. In this situation, the artifact switches from in-hand to put-down: instead of riding the bicycle and being engaged in the sensory-motor activity, one examines the broken chain as something made of a material with poor resistance, etc. One can thus adopt, at will, different attitudes towards the artifact, considering it as in-hand or put-down (when maintaining a technical device, one puts it in the put-down mode). But the situation when the artifact breaks is particular, because it forces the user to consider it as put-down.
It is the same with computers, even in Virtual Reality. As a user, one does not care about what is going on in the computer; it becomes transparent equipment. When the artifact is broken, the user will check cables and electricity; she will consider it as an object of science and technology, and the artifact is no longer a transparent means of action.
The difficulty comes when we consider the fact that in the put-down mode the designer is also engaged in an activity, but in a different way: the artifact is not a means of action. In fact, when one is maintaining, designing or doing scientific research, one is using other artifacts (pencils, CAD tools, hammers or measurement instruments) which are in-hand as means of action, and which are transparent to the user. Thus one can see the put-down mode as derivative from the more fundamental in-hand mode.
Now, the in-hand mode was provisionally defined as an attachment to the body, in order to underline the fact that it is transparent and fits into action. But in fact an artifact can be not attached to the body and yet still in-hand. The road, for example, is not attached to the body, but is still in-hand as a transparent means of action. Being on the road, one does not consider the road in terms of the physical properties of tarmac, in the way science and technology do, but rather as a possibility to get where one wants to.
The lighting pole on the road is not attached to the body either, but it is still in-hand, because it is also a means of action for going there; moreover, since the light falls in a certain way, one takes it into account without explicitly thinking about its properties, and adapts one's sensory-motor activity when riding a bicycle.
It stands to reason that there is still a difference between artifacts that are actually attached to the body and those which are not, but the first level of distinction seems to be between in-hand (in a broad sense) and put-down. In this broad sense, artifacts are in the in-hand mode when they (a) fit into action, (b) change sensory-motor loops, and (c) are transparent, i.e. not explicitly noticed: they disappear from consciousness in favour of the world they bring forth.

3. The social dimension
The fact that technical artifacts exist in the mode of being put down has an important consequence: the persons who design and make technical artifacts are, generally, not the same as those who use them. Thus, technological development goes together with a division of labour and, correlatively, with the development of mechanisms of social synthesis (exchange, market economies) which organize the integration of technical systems as functional wholes.
Traditionally, technology is considered as a black box, as an intrinsically neutral means to predefined ends. The approach outlined here leads to a new perspective in which technology occupies a central position. The work of engineers has immense social significance because, in fine, the choices of technological devices fashion the human condition itself, by manufacturing interfaces that change the means of action and influence sensations.
This introduces the debate about the usage of artifacts, which is of particular importance in the field of Human-Computer Interaction, where the design of interfaces is supposed to be adapted to the user. The question is then to articulate the design with the user's needs, abilities and knowledge, and to build possible enactive interfaces. This raises multiple epistemological questions: Is the knowledge of the usage situated in the user, as an acquired sensorimotor knowledge? Does the quality of the interaction depend only on this knowledge of the user? How can one build an artifact responding to the sensorimotor knowledge of the user?
Since sensorimotor knowledge is not something independent of the practice of artifacts, it seems difficult to say that it is situated in the user. If the artifact modifies the established sensory-motor contingencies, then the enactive knowledge depends on the artifacts. In other words, for human beings, it seems impossible to talk about a standalone user on whose knowledge the use of the artifact, and the ability to make it enactive, would depend.
From our point of view, we need to understand how the enaction takes place between the two terms, the user and the artifact.
Still, "enactive" is a quality that does relate to the individual, and the experience of an enacted world as a world of possibilities is always for a human (who is always technically equipped, even if she does not actually use any interface); the artifact alone does not enact anything. But if the capacity to enact lies in the user, artifacts do always change the quality of enaction and of human experience.
So already for a single user, enactive knowledge is something situated between the user and the artifact; but what about the social exposure? The couple artifact / sensory-motor contingencies is something that evolves on the scale of a society, and the problem of usage is something intrinsically social. That is why it is difficult to relate this problem to the enactive knowledge of a single user.
What is enactive is neither the interface itself nor the usage alone, but the combination of the two. If one designs a very "enactive" interface but there is no social acceptance or implication, in the best case the usage will be restricted to a narrow community. But the contrary is also true: if the interface is not appropriated, there will be no enaction (in the following sense: no good quality of relation between the human and the world), even if there is a wide social exposure. So we need to distinguish two sorts of Enactive Interfaces: in a broad sense, every technical artifact is enactive because it does modify the sensory-motor contingencies and bring forth a particular lived experience, even if the artifact is really constraining; in a strong sense, the criteria for an interface to be enactive (good quality of interaction, transparency, etc.) are actually still to be found.
But this is probably not enough. If we continue to think, as was the mainstream view among industrial engineers, that it is sufficient to design an interface that seems good to its designers, we would probably be wrong. Many works on the anthropology of usage and on involving end-users in the design process seem to go in this direction.
Moreover, what one accepts as a quality of interaction is not something independent of technology itself, more precisely of the socially accepted aspect of technology or, let us say, of its historical aspect. (It is not certain that today's cameraphones are really useful and enactive interfaces; they are, however, widely socially accepted as something having a quality of interaction.) In other words, artifacts do not only respond to functional criteria; they are also, as Leroi-Gourhan [7] for example has pointed out, a support of figurative aesthetics, and this may be to the detriment of pure functionality. This could help us to understand in which way the acceptance of artifacts is related to sensory-motor knowledge: this knowledge is always socially and technically
transmitted and determined. However, it is worth noting that in any case we are not talking about a technological determinism: the question is how social structures come to terms with technology, and not what technology imposes by itself. The core question is that it is difficult to know which interfaces will have social implications.
Whether or not an artifact will gain social exposure is not something lying in the technology if one considers technology as the pure functionality of the artifact; but it is something lying in the technology if one considers technology also as something intrinsically socially constructed, and if one also considers the social structures (for example, the exposure of the artifact related to socially accepted criteria of aesthetics) as something technically transmitted.

4. Discussion
The preceding considerations may not seem particularly controversial, but they have some controversial consequences. The term "interface" is clearly of central importance. However, the term itself is the vehicle of an ambiguity that requires clarification: an interface between what and what?
As we understand it, the term "interface" is properly used for the interface between an organism (human or otherwise) and its environment. Thus, the basic interfaces are the biological sensory and motor organs; for humans, technical artifacts are extensions of these basic interfaces, but they remain interfaces. New technical devices constitute new worlds: think for example of the world of the skier. But note this: we do not talk about the interface between the man and the ski; the ski is the interface between the man and the snowy mountain, or better still, between the skier and the skiing world that is brought forth.
Does this change in the case of computers? Our point of view is that computers are basically technical devices, and should be treated in the same way as other technical devices. Certainly, they are devices of a special sort, and the worlds that are brought forth when a human being uses them are worlds of a special sort; but the interaction that occurs (and that is mediated by the machine) is between the human being and this world; it is not an interaction between the human being and the machine. Thus, there is something deeply wrong in the very phrase "Human-Computer Interface". Of course, HCI has become a hackneyed term, but this engrained (mis)use does not make it correct. The basic problem lies in the implication that human beings and computers are entities of the same sort, so that they could interact on a basis of equality. This would only be correct if one whole-heartedly embraced the representational paradigm according to
which humans function like computers; but, as we understand it, the enactive approach rejects this classical paradigm in cognitive science.
Finally, an interesting question that arises is the status of virtual reality. In this case, it does seem as though the computer is playing the role of the world, by providing the sensory consequences of actions on the part of the human being. But even here, note that the experience of a human being immersed in a virtual reality is not that of interacting with a computer; the human interacts with the entities that populate the world that has been brought about. We only become conscious of the computer (the interface) when a malfunction triggers the switch to the put-down mode; in normal functioning (the in-hand mode) the computer-interface disappears from consciousness.
This remark is in no way meant to decry the interest of virtual realities; on the contrary, such experiments are deeply revealing. What they show is that in order to create a virtual reality, it is neither necessary nor sufficient to compute (in all its gory detail) the total physical reality, which is an impossible task anyway, as shown by flight simulators that have to fall back on analog models; what is required is neither more nor less than to provide the appropriate sensory returns to human actions. This helps greatly to bring home the point that what human beings experience in natural situations is not an objective world as-it-is, but the sensory-motor contingencies of their embodied situation. Thus, interfaces and tools can permit (or not) humans to enact the world, and the world we live in depends on their design.
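To make the point about sensory returns concrete, here is a minimal sketch (the names VirtualWorld and sensory_return are purely hypothetical, and no claim is made about any actual VR system) in which the "virtual world" computes nothing but the sensory consequences of the user's actions:

# Minimal sketch: a "virtual world" reduced to the sensory consequences of
# actions. No total physics is computed; the loop only answers the question:
# given what the user has done, what should she now sense?

class VirtualWorld:
    def __init__(self):
        self.actions = []             # the only "state" is the history of actions

    def sensory_return(self, action):
        self.actions.append(action)
        # The sensory return is a (crude) function of the action history,
        # standing in for whatever regularities the designer wants to enact.
        forward_steps = sum(a == "step_forward" for a in self.actions)
        return {"optic_flow": forward_steps,
                "footstep_sound": action == "step_forward"}


if __name__ == "__main__":
    world = VirtualWorld()
    for action in ["step_forward", "turn_left", "step_forward"]:
        print(action, "->", world.sensory_return(action))

However crude, such a loop already brings forth a minimal world for its user; what matters is the appropriateness of the sensory return to action, not the exhaustiveness of the underlying physical model.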

References
[1] O'Regan, K. J. and Noë, A., "A sensorimotor account of vision and visual consciousness", Behavioral and Brain Sciences, 24, 2001, pp. 5-115.
[2] Varela, F., Thompson, E. and Rosch, E., The Embodied Mind, MIT Press, Boston, 1991.
[3] Warren, W. H., "Visually controlled locomotion: 40 years later", Ecological Psychology, 10, 1998, pp. 177-219.
[4] Khatchatourov, A. and Auvray, M., "L'outil modifie-t-il la perception ou la rend-il possible?", Arob@se, vol. 1, 2005. www.univ-rouen.fr/arobase
[5] Heidegger, M., Being and Time, State University of New York Press, 1996.
[6] Lenay et al., "Sensory Substitution, Limits and Perspectives", in Hatwell et al. (eds), Touching for Knowing, John Benjamins Publishers, Amsterdam, 2004.
[7] Leroi-Gourhan, A., Gesture and Speech, MIT Press, 1993.

ENACTIVE/07

132
