
ERGONOMICS, 2000, VOL. 43, NO. 7, 833-843

From human-machine interaction to human-machine cooperation

JEAN-MICHEL HOC
Centre National de la Recherche Scientifique (CNRS), Université de Valenciennes et du Hainaut-Cambrésis (UVHC), Laboratoire d'Automatique et de Mécanique Industrielles et Humaines (LAMIH), Psychologie et Ergonomie de la Cognition dans les Environnements Technologiques (PERCOTEC), Le Mont Houy, F-59313 Valenciennes Cedex 9, France
e-mail: Jean-Michel.Hoc@univ-valenciennes.fr

Keywords: HCI; Human-machine cooperation; Reliability; Dynamic situations; Automation.

Since the 1960s, the rapid growth of information systems has led to the wide development of research on human-computer interaction (HCI) that aims at designing human-computer interfaces with ergonomic properties such as friendliness, usability, transparency, etc. Various work situations have been covered: clerical work, computer programming, design, etc. However, they were mainly static, in the sense that the user fully controls the computer. More recently, public and private organizations have engaged in managing more and more complex and coupled systems by means of automation. Modern machines not only process information, but also act on dynamic situations as humans have done in the past, managing stock exchanges, industrial plants, aircraft, etc. These dynamic situations are not fully controlled and are affected by uncertain factors. Hence, degrees of freedom must be maintained to allow the humans and the machines to adapt to unforeseen contingencies. A human-machine cooperation (HMC) approach is necessary to address the new stakes introduced by this trend. This paper describes the possible improvement of HCI by HMC, the need for a new conception of function allocation between humans and machines, and the main problems encountered within the new forms of the human-machine relationship. It proposes a conceptual framework to study HMC from a cognitive point of view in highly dynamic situations like aircraft piloting or air-traffic control, and concludes on the design of 'cooperative machines'.

1. Introduction
If the abbreviation HCI (human-computer interaction) can be maintained to cover every form of human-machine relationship, it should be considerably extended to cover the ergonomics challenges presented by contemporary, highly complex and dynamic human-machine systems. The notion of a human-computer interface remains a key point, with its properties of friendliness, usability, transparency, etc. However, cognitive ergonomics has developed a large number of studies on human-automation relationship failures, which concern what is behind the interface (Billings 1991, Parasuraman and Mouloua 1996). From the front-line human operators' point of view, machines are not only tools, but also possibly autonomous agents, and some coherence must be maintained between humans' and machines' actions on the environment, whatever the interface is. That is why introducing the framework of cooperation into the study of the human-machine relationship is attractive.
This paper focuses on human-machine cooperation (HMC) in dynamic situations like aircraft piloting, air-traffic control (ATC), industrial process control or anaesthesiology. As opposed to static situations, which are fully controlled by the human-machine systems, dynamic situations can change autonomously. As far as human operators are concerned, control is partial in two senses. First, the situation has its own dynamics, with which the operators' actions and the machines' actions are combined to produce effects. Second, automation introduces autonomous actions of machines on the situation. That is why the human activity integrates control and supervision. The sources of complexity and difficulty of this kind of situation have been largely described in the literature and will not be detailed here (Hollnagel et al. 1988, Woods 1988, Hoc 1993, Hoc et al. 1995b). For the purpose of this paper, only two sources of complexity will be stressed: uncertainty and risk.
Dynamic situations are uncertain since they are only partially controlled. Unexpected factors can modify the dynamics of the process under supervision, and internal incidents or breakdowns can affect the process itself. Such uncertainty cannot be managed by probabilistic models alone. It implies implementing adaptive mechanisms in real time to face events unforeseen by the designers of machines, regulations or work procedures. Moreover, action effects are often subject to response delays, so that diagnosis, decision-making and planning must be performed before all the necessary information is accessible. Dynamic situations are also risky. The cost of errors can be very high in terms of accidents or money. Hence, human operators must manage risks (associated with costs and probabilities) when implementing a decision. For them, the major risk is the loss of control of the situation by acting too late, even though comprehension may have reached a high level by that time. Consequently, their main objective is to maintain the situation within acceptable limits, including their own cognitive limits, accepting only a partial understanding of what is going on (Hoc et al. 1995a, Amalberti 1999).
Uncertainty and risk management implies keeping some degrees of freedom available in the system for adaptation and for calibrating an acceptable compromise between comprehension and maintenance of the situation under control. For these reasons, it is very dangerous to lock the human-machine system into overly rigid strategies, procedures and function allocations between humans and machines. For two decades, human-machine communication and cooperation have progressively enriched the human-machine interaction research trends. Communication and cooperation require the introduction of more aspects of the human-human relationship into human-machine relationships. The notion of 'joint cognitive systems' applied to human-machine systems was a decisive step towards this conception (Hollnagel and Woods 1983, Rasmussen et al. 1994, Woods and Roth 1995). This notion revealed the fact that the human-machine system's task can have no sense if one considers the human or the machine as units in isolation. On the contrary, there is an overall task, dynamically decomposed into subtasks, to be integrated and distributed among human and artificial agents. New approaches to system design following the ecological perspective (Vicente 1999) or, more generally, the cognitive engineering methodology (Dowell and Long 1998) have translated this theoretical approach into
practical methods of human-machine system design. Their main principle consists in describing the work domain constraints, postponing activity analysis or prescription as late as possible to avoid introducing unnecessary constraints that could reduce the degrees of freedom for adaptation.
Function allocation between humans and machines is a key question for the study or design of human-machine cooperation (HMC). The second section of this paper will be devoted to the evolution of its conception during the past two or three decades. In parallel, the third section will propose a short review of the main difficulties humans encounter with automation in highly dynamic situations, as they have mainly been identified in aviation. Then, these two approaches to the human-machine relationship will lead to the fourth section, which proposes an attempt to apply the cooperation paradigm to this relationship.

2. Function allocation between humans and machines


The allocation of functions between humans and machines is a very old topic in human engineering. The first type of solution to the allocation problem was an attempt to decompose an activity, on a general basis, into elementary functions and to allocate each one to the most efficient device (human or machine) for that function (Fitts 1951). For example, machines are considered to be very good at calculating complex formulas, whereas humans are the best at facing unexpected or unknown situations. Such an approach has been repeatedly criticized (Bainbridge 1987, Older et al. 1997).
First, functions are rarely defined in such a way as to avoid the need for frequent cooperation during task execution when tasks explode into several functions, so that tasks (including several functions), rather than functions, are allocated. For example, in ATC, it has long been known that aircraft conflict detection and resolution are two tightly related functions (Bisseret 1970). If conflict detection is defined as the identification of a type of conflict relevant to the choice of an appropriate solution, the allocation of the detection function to a machine and of the resolution function to the human is not very efficient. If one considers that the machine cannot resolve the conflict, the reason lies in the machine's limitation in identifying the conflict type appropriately. Thus, some cooperation should be introduced between the two agents to closely integrate the two functions (and the two agents) during task execution. In a recent study on ATC automation, control theorists preferred the allocation of tasks (conflicts) rather than the allocation of functions between the human and the machine, to avoid having to resolve this difficult cooperation problem (Vanderhaegen et al. 1994).
Second, human operators are ultimately (legally) responsible for the whole human-machine system's performance. How can they control a machine that operates very differently from them? Function allocation should integrate several levels of abstraction and not only a single one. Allocating a low-level function to a machine does not alleviate the human workload of supervising the overall task execution, and the 'human-out-of-the-loop' syndrome has very often been stressed. That is probably one of the reasons why human air-traffic controllers rejected an automatic mode of conflict allocation between the human and the machine in the study of Vanderhaegen et al. (1994).
Third, the definition of functions is strongly determined by cognitive theories, which are various and competing. More often than not, the decomposition is based on inefficient 'common sense' cognitive science. Thus, any decomposition should be considered as temporary and regularly updated, taking advantage of the progress made in cognitive psychology and technology. For example, Sheridan and Verplanck
(1978) or Endsley and Kaber (1999) have proposed taxonomies largely influenced by recent technologies, that is, decompositions used in actual human-machine systems, for example manual control, action support, shared control, decision support, supervisory control, etc.
The argument here is not to say that decomposing human-machine systems into elementary functions is useless. When designing operator training or machines, this is an inescapable problem. However, it is unclear whether the objects to be allocated are functions or tasks. In single-task situations, the consideration of functions is probably the right one, provided that some cooperative mechanisms are introduced between human and machine to allow the human to recompose the whole task. In multitask situations, like ATC, the case is very different because the human operators must share their time between several tasks (e.g. several simultaneous aircraft conflicts in ATC). From the human point of view, each task corresponds to an intention to be protected. When task planning or execution is again split into several functions, the management of working memory can become overloaded. Nevertheless, whatever should be allocated (functions or tasks), efficiency is not the only criterion to consider, especially in dynamic situations where the precise conditions of the actual performance are not known beforehand. For example, Rasmussen et al. (1994) or Older et al. (1997) suggest several other criteria, like organizational or financial constraints, instantaneous workload, resource requirements, access to data, etc. Hence, a dynamic allocation paradigm is needed (Rieger and Greenstein 1982, Lemoine et al. 1996) that can take advantage of an instantaneous evaluation of the situation to choose the best allocation.
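A minimal sketch, not taken from the cited studies, may help fix the idea of such a dynamic allocation paradigm: each new task (e.g. an aircraft conflict) is assigned to the human or to the machine from an instantaneous assessment of the situation, here reduced to a workload estimate, a competence check and a human veto. All names, the workload threshold and the decision rule are assumptions made purely for illustration.

```python
# Illustrative sketch only: a naive dynamic task-allocation rule in the spirit of the
# paradigm discussed above (Rieger and Greenstein 1982, Lemoine et al. 1996).
# Task, WORKLOAD_LIMIT and the decision rule are hypothetical, not from the studies.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    machine_competent: bool   # can the machine handle this task reliably?
    demand: float             # estimated workload the task would add to the human

WORKLOAD_LIMIT = 0.8          # arbitrary threshold on the human workload estimate

def allocate(task: Task, human_workload: float, human_veto: bool = False) -> str:
    """Choose an agent for a new task from an instantaneous situation assessment."""
    # Offload to the machine only if it is competent for this task, the human is
    # close to overload, and the human has not vetoed the automatic allocation.
    if task.machine_competent and human_workload + task.demand > WORKLOAD_LIMIT and not human_veto:
        return "machine"
    return "human"

# Example: with the human already heavily loaded, a routine conflict is offloaded.
print(allocate(Task("routine conflict", True, 0.3), human_workload=0.7))   # -> machine
print(allocate(Task("unusual conflict", False, 0.3), human_workload=0.7))  # -> human
```

In the terms used below, an implicit allocation paradigm would run such a rule fully automatically, whereas an explicit paradigm would present its proposal to the controller for decision or veto.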
Several studies on ATC have explored the dynamic allocation problem on the basis of workload regulation, using an automated aircraft conflict detection and resolution device. Vanderhaegen et al. (1994) showed that performance and reliability are better within an implicit allocation paradigm, where the allocation of conflicts is fully automated, than within an explicit allocation paradigm, where allocation is decided by the human controllers. However, expert controllers preferred the explicit allocation principle, in spite of an increase in workload due to task allocation. Hoc and Lemoine (1998) explored the benefit of further assistance within the explicit paradigm and showed an improvement in reliability by means of more planned strategies. However, the well-known phenomenon of complacency (see below) was observed in the more assisted situation, where the human controllers could not decide the allocation by themselves but validated the decision of the automated system (with a possibility of veto). In this situation, they were more willing to let the system act without their supervision and to adopt conflict resolution strategies that did not interfere with the system's strategies. These studies raise the question of the compatibility between an overall plan to manage the traffic (in the charge of the controllers) and the plans elaborated to perform the subtasks. Task or function allocation should consider the overall organization of the human-machine system's task under the human's responsibility.

3. Failures in the human-machine relationship


Many studies have been devoted to the failures in coupling humans and automation, especially within the aviation domain, where such failures have obvious consequences on reliability (for an overview, see Parasuraman and Mouloua 1996). As far as human error is considered, that is, those cases where humans contribute to the
multicausality of the accident, inquiry boards have analysed some poor ergonomic properties of the cockpit. However, on a more general basis including industrial process control, anaesthesiology, etc., four main kinds of failure can be summed up that not only have implications for human-machine interface design, but also have something to do with HMC.

3.1. Loss of expertise


Human loss of expertise is the consequence of designing machines that play autonomous roles, either performing low-level functions (decision implementation) or high-level functions (diagnosis, decision-making). Bainbridge (1987) stressed these 'ironies of automation', which consist at the same time in assisting the operators and in reducing the occasions for maintaining their skills. The effect of this phenomenon has often been described as resulting in a loss of vigilance, the operators becoming 'passive' (Endsley 1996). Some studies of dynamic task allocation have adopted this point of view, defining the allocation to the machine in relation to the human operators' workload (Millot and Mandiau 1995). However, the main problem with the loss of expertise is the loss of the necessary conditions to allow the operators to exert their responsibility over the entire human-machine system (Jones and Mitchell 1994). In addition, when operators must again take up manual control, they are likely to reach only a poor performance. If the operators' role within the human-machine system is taken seriously, the loss of expertise leads to the loss of a particularly important kind of agent in the system, one who will not be able to cooperate with automation when the latter is out of its depth.

3.2. Complacency
Human complacency, or over-reliance on automation, has been described several times, especially in relation to 'intelligent' machines performing high-level functions (Layton et al. 1994, Endsley 1996, Smith et al. 1997, Mosier et al. 1998). Even when operators are sufficiently expert and can reasonably be aware of the limitations of the machines, which cannot find optimal solutions in circumstances where they cannot consider some important factors, operators take the solutions for granted without questioning them. For example, Layton et al. (1994) and Smith et al. (1997) studied the use of an automatic replanner during air cruise. In a particular scenario, it was obvious that the system could not find the optimal re-routing. However, expert pilots did not question the proposed solution. This phenomenon poses the difficult problem of shared supervision. More often than not, automation aims at reducing the operators' workload when they are confronted with increasing requirements. The automation process does not alleviate the workload in terms of the operators' supervision of the overall human-machine system activity. A possible reason for complacency could be a division of the supervision field into two independent fields, the operators' field and the machine's field (as shown for example in ATC automation; Hoc and Lemoine 1998). From the HMC point of view, the benefits of mutual control of the operator over the machine have been demonstrated (e.g. in trouble-shooting with the assistance of an expert system; Roth et al. 1988) or suggested (Clark and Smyth 1993). In response to complacency, Smith et al. (1997) suggested that the machine should propose several solutions instead of only one. However, this mutual control attitude is not easy to induce (Mosier et al. 1998, Endsley and Kaber 1999).

3.3. Trust and self-confidence


The seminal studies of Muir (1988) and Lee and Moray (1992, 1994) have drawn attention to an important factor in the operators' utilization of automation when they have the choice between an automatic and a manual mode. In the experiments (control of a pasteurizer) the latter was designed to be more efficient than the former. These studies have shown that the shifts between the two modes can be predicted by the ratio between trust (in the machine) and self-confidence. More importantly, Muir described trust as a dynamic phenomenon related to the operators' experience of the machine, going through three steps. At first, without any precise model of the machine, only faith or doubt can develop, as described in cockpit automation for example (Amalberti 1992). When first confronted with an automatic landing system, pilots either used the system blindly (faith) or bypassed it (doubt). As experience increases, operators can reach a feeling of dependability and then of predictability. In terms of HMC, such a development is related to the elaboration of a model of oneself (self-confidence) and of a model of the other agent (trust), qualified by Castelfranchi (1998) as 'mind reading' within cooperation situations.
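The qualitative prediction can be sketched very simply. The following is not Muir's or Lee and Moray's actual model; it only illustrates, under assumed names and an assumed threshold of 1.0, the rule that the automatic mode is chosen when trust in the machine outweighs self-confidence, and the manual mode otherwise.

```python
# Minimal illustrative sketch of the trust / self-confidence ratio rule.
# The threshold and function names are assumptions, not the published model.
def predicted_mode(trust: float, self_confidence: float, threshold: float = 1.0) -> str:
    """Predict mode choice from the ratio of trust (in the machine) to self-confidence."""
    if self_confidence == 0:
        return "automatic"   # no self-confidence at all: rely on the machine
    return "automatic" if trust / self_confidence > threshold else "manual"

print(predicted_mode(trust=0.9, self_confidence=0.5))  # -> automatic
print(predicted_mode(trust=0.4, self_confidence=0.8))  # -> manual
```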

3.4. Loss of adaptability


Cognition is the means by which systems can adapt to the variety of their environments, relying on knowledge represented in their memories. 'Intelligent' machines are designed to increase the machines' adaptive power. However, they cannot equal operators in adaptive skill. Beyond the loss of expertise, automation has also been considered as reducing the adaptability of the human-machine system (Ephrath and Young 1981, Mosier 1997, Parasuraman 1997). The main reason for this drawback is the lack of feedback to the operator when the machine performs a task. This results in the well-known 'human-out-of-the-loop' syndrome (Ephrath and Young 1981, Endsley and Kaber 1999), leading the operators, when necessary, to take back control without any clear situation awareness. Moreover, to put it in HMC terms, the lack of feedback does not allow operators to anticipate possible interference between their own tasks and the machine's tasks. Finally, operators are compelled to adopt reactive strategies instead of the anticipative strategies that ensure long-term adaptation (Cellier et al. 1997). More generally, all these HMC failures reduce the real-time dynamic cooperation between humans and machines that adapts the human-machine system to situations unforeseen by the designers. HMC is an important ingredient of the adaptive power of the human-machine system.

4. A minimal conception of cognitive cooperation to apply to human-machine cooperation
Like many other concepts, such as communication or intelligence, cooperation refers to the human (human-human cooperation), and HMC can be seen as a chimera, trying to introduce a human-like relationship between two very different entities. Nevertheless, the involvement of a human in this kind of relationship, even if it includes a machine, may be justified as a way to contribute to resolving part of the automation failures in terms of human-machine relationships. Decisive progress has been made in introducing 'know-how' into machines, and it is time to provide them with some 'know-how-to-cooperate' at the same time as distributed artificial intelligence is developing (Castelfranchi 1998, Millot and Lemoine 1998). Certainly, one cannot imagine building machines capable of generating all the complexity of the human cooperation skill. However, a restriction of this skill to its cognitive aspects appears to be a reasonable medium-term aim. Highly dynamic situations implying small, very goal-oriented human and machine teams have been selected in order to elaborate on cognitive cooperation modelling (e.g. in ATC, Hoc and Lemoine 1998, Carlier and Hoc 1999; or in fighter aircraft piloting, Loiselet and Hoc 1999). In this kind of situation, the cognitive aspects of cooperation are major, they are limited by time constraints, and they can result in manageable models. The main objective for a fighter aircraft crew is returning to base, and the social aspects of cooperation, for example, are minor.
In line with Piaget (1965), one considers cooperation from a functional point of view, that of the cooperative activities performed in real time by individuals within a team (activities which are not performed when working alone), as opposed to a structural point of view (e.g. network organization). One considers that two agents are in a cooperative situation if they meet two minimal conditions:

• Each strives towards goals and can interfere with the others on goals, resources, procedures, etc. Interference can take several forms, for example precondition (an agent's action being a precondition for another agent's action), mutual control (contributing to correcting the others' mistakes), redundancy (replacing another agent for diverse reasons), etc. If there is no interference, coordination is prebuilt and is not questioned during task execution; thus, the agents' activities are independent.

• Each tries to manage interference to facilitate the individual activities and/or the common task when it exists (e.g. cooperation on resource utilization does not imply a common task). Cooperation is very close to competition, the sole difference being that in competition interference is managed so that the others' activities are made more difficult.

The symmetric nature of this definition may be only partly satisfied. Even in human-human cooperation, only one agent may be in charge of managing interference, due to the other's workload (e.g. in ATC, the planning controller, in charge of coordination with adjacent sectors, mainly facilitates the radar controller's task of controlling the sector). Consequently, it is possible to improve HMC even though the machine has a limited cooperation skill. A minimal solution is to help the human to identify human-machine interference.
From a cognitive point of view, interference management to facilitate collective tasks is performed by means of cooperative activities that are part of those described in the human-human cooperation literature, which also covers leadership, engagement, etc. (Militello et al. 1999). Following this restrictive view, cooperative activities can be organized into three levels, implying at the same time an increase in abstraction and an enlargement of the time span referred to:

• Cooperation in action groups together activities that have direct implications in the short term and that can rely on a local analysis of the situation. This level includes local interference creation (e.g. mutual control), detection, and resolution. It also includes interference anticipation by identifying the other agents' goals in the short term.

• Cooperation in planning consists in maintaining and/or elaborating a common frame of reference (COFOR; De Terssac and Chabaud 1990). COFOR includes not only a shared situation awareness (Salas et al. 1995), in the sense of a common representation of the environment, but also a representation of the team considered as a resource. COFOR maintenance and elaboration concern common goals, common plans, role allocation, action monitoring and evaluation, and common representations of the environment. These activities need a certain abstraction to develop and determine the team's activity in the medium term.

• Meta-cooperation, situated at a much higher abstraction level, allows the agents to improve the cooperative activities described above by elaborating long-term constructs, such as a common code to communicate easily and concisely, compatible representation formats, and, above all, models of oneself and of the other agents.

Studies of human-human cooperation in the highly dynamic situations referred to above (ATC, with aircraft in the same sector shared between two radar controllers, and two-seater fighter aircraft piloting, with a pilot and a weapon system officer) have stressed three main facts:

• COFOR management represents at least half of the cooperative activities. Communications on plans, goals and roles are very efficient in reducing most of the local interference management.

• COFOR management is more concerned with the team's process control activity itself (plans, goals, role allocation, etc.) than with the process under control ('situation awareness', strictly speaking).

• COFOR maintenance activities are almost as frequent as COFOR elaboration activities. Thus, a large part of the activities at the planning level consists in maintaining a common representation without needing a long and tedious elaboration.

5. Conclusion
The result of this application of a general framework for cooperative activities to highly dynamic situations allows one to be quite optimistic as regards the design of 'cooperative machines'. Some engineering studies already aim at designing machines capable of performing some of the cooperative activities described here, for example identification of the human agents' goals, cooperative plan generation, etc. (Castelfranchi 1998, Millot and Lemoine 1998). Certainly, 'meta-cooperative' machines are still not attainable, but the designer can introduce some operator models into machines. Some of the activities described at the first two levels can be implemented, at least in specific situations. The development of this kind of enterprise is crucial to improve the human-machine system's adaptability and reliability by introducing a flexible cooperation in real time.
However, even if the cooperative skills of the machines are very restricted, machines can at least help the operators to perform the HMC activities in better conditions than the current ones. The development of tools to maintain a COFOR between humans and machines is probably a first and important step towards such an improvement (e.g. along the lines of Lemoine et al. 1996 or Jones and Jasek 1997). Nevertheless, serious problems remain to be solved in terms of alleviating the human workload of informing the machine of the operator's activity. Currently, attempts are being made to solve these problems in ATC with a very simple machine capable of implementing the controllers' schematic plans for subtasks, but communicating with the controllers through a COFOR-like interface. Such an interface aims at supporting the controllers' problem space, updating it regularly.
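As a purely illustrative sketch, and not the ATC tool just described, a COFOR-like support could be thought of as a shared frame-of-reference record that both the human and the machine agents read and update, so that each can anticipate interference with the other. Field names and the update protocol below are assumptions made for illustration only.

```python
# Illustrative sketch of a minimal shared "common frame of reference" (COFOR) record.
# The structure and its fields are hypothetical, not taken from the cited studies.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class COFOR:
    common_goals: List[str] = field(default_factory=list)
    role_allocation: Dict[str, str] = field(default_factory=dict)   # subtask -> agent
    current_plans: Dict[str, str] = field(default_factory=dict)     # agent -> plan summary

    def update_plan(self, agent: str, plan: str) -> None:
        """An agent publishes its current plan so the other can anticipate interference."""
        self.current_plans[agent] = plan

    def allocate(self, subtask: str, agent: str) -> None:
        """Record who is in charge of a subtask (e.g. a given aircraft conflict)."""
        self.role_allocation[subtask] = agent

# Example: the controller keeps the overall traffic plan; the machine takes a routine conflict.
cofor = COFOR(common_goals=["maintain separation in the sector"])
cofor.update_plan("controller", "descend AF123 early to avoid the upper flow")
cofor.allocate("conflict AF123/BA456", "machine")
print(cofor.role_allocation)
```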

References
AMALBERTI, R. 1992, Safety in risky process control, Reliability Engineering and System Safety, 38, 99-108.
AMALBERTI, R. 1999, Automation in aviation: a human factors perspective, in D. J. Garland, J. A. Wise and V. D. Hopkin (eds), Handbook of Aviation Human Factors (Mahwah: Lawrence Erlbaum), 173-192.
BAINBRIDGE, L. 1987, Ironies of automation, in J. Rasmussen, K. D. Duncan and J. Leplat (eds), New Technology and Human Error (Chichester: Wiley), 271-284.
BILLINGS, C. E. 1991, Human-centered Aircraft Automation: A Concept and Guidelines. Technical Memorandum no. 103885 (Moffett Field: NASA, Ames Research Center).
BISSERET, A. 1970, Mémoire opérationnelle et structure du travail [Operational memory and work structure], Bulletin de Psychologie, 24, 280-294.
CARLIER, X. and HOC, J. M. 1999, Role of a common frame of reference in cognitive cooperation: sharing tasks in air-traffic control. Paper presented at CSAPC '99, Villeneuve d'Ascq, France, September.
CASTELFRANCHI, C. 1998, Modelling social action for agents, Artificial Intelligence, 103, 157-182.
CELLIER, J. M., EYROLLE, H. and MARINE, C. 1997, Expertise in dynamic environments, Ergonomics, 40, 28-50.
CLARK, A. A. and SMYTH, G. G. 1993, A co-operative computer based on the principles of human co-operation, International Journal of Man-Machine Studies, 38, 3-22.
DE TERSSAC, G. and CHABAUD, C. 1990, Référentiel opératif commun et fiabilité [Common operative frame of reference and reliability], in J. Leplat and G. De Terssac (eds), Les facteurs humains de la fiabilité (Toulouse: Octarès), 110-139.
DOWELL, J. and LONG, J. 1998, Conception of the cognitive engineering design problem, Ergonomics, 41, 126-139.
ENDSLEY, M. 1996, Automation and situation awareness, in R. Parasuraman and M. Mouloua (eds), Automation and Human Performance: Theory and Applications (Mahwah: Lawrence Erlbaum), 163-181.
ENDSLEY, M. and KABER, D. B. 1999, Level of automation effects on performance, situation awareness and workload in a dynamic control task, Ergonomics, 42, 462-492.
EPHRATH, A. R. and YOUNG, L. R. 1981, Monitoring versus man-in-the-loop detection of aircraft control failures, in J. Rasmussen and W. B. Rouse (eds), Human Detection and Diagnosis of System Failures (New York: Plenum), 143-154.
FITTS, P. M. (ed.) 1951, Human Engineering for an Effective Air Navigation and Traffic Control System (Washington, DC: National Research Council).
HOC, J. M. 1993, Some dimensions of a cognitive typology of process control situations, Ergonomics, 36, 1445-1455.
HOC, J. M., AMALBERTI, R. and BOREHAM, N. 1995a, Human operator expertise in diagnosis, decision-making, and time management, in J. M. Hoc, P. C. Cacciabue and E. Hollnagel (eds), Expertise and Technology: Cognition & Human-Computer Cooperation (Hillsdale: Lawrence Erlbaum), 19-42.
HOC, J. M., CACCIABUE, P. C. and HOLLNAGEL, E. (eds) 1995b, Expertise and Technology: Cognition & Human-Computer Cooperation (Hillsdale: Lawrence Erlbaum).
HOC, J. M. and LEMOINE, M. P. 1998, Cognitive evaluation of human-human and human-machine cooperation modes in air traffic control, International Journal of Aviation Psychology, 8, 1-32.
HOLLNAGEL, E., MANCINI, G. and WOODS, D. D. (eds) 1988, Cognitive Engineering in Complex Dynamic Worlds (London: Academic Press).
HOLLNAGEL, E. and WOODS, D. D. 1983, Cognitive systems engineering: new wine in new bottles, International Journal of Man-Machine Studies, 18, 583-600.
JONES, P. M. and JASEK, C. A. 1997, Intelligent support for activity management (ISAM): an architecture to support distributed supervisory control, IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans, 27, 274-288.
JONES, P. M. and MITCHELL, C. M. 1994, Model-based communicative acts: human-computer collaboration in supervisory control, International Journal of Human-Computer Studies, 41, 527-551.
LAYTON, C., SMITH, P. J. and MCCOY, E. 1994, Design of a cooperative problem-solving system for en-route flight planning: an empirical evaluation, Human Factors, 36, 94-119.
LEE, J. and MORAY, N. 1992, Trust, control strategies and allocation of function in human-machine systems, Ergonomics, 35, 1243-1270.
LEE, J. and MORAY, N. 1994, Trust, self-confidence, and operators' adaptation to automation, International Journal of Human-Computer Studies, 40, 153-184.
LEMOINE, M. P., DEBERNARD, S., CREVITS, I. and MILLOT, P. 1996, Cooperation between humans and machines: first results of an experiment with a multi-level cooperative organization in air traffic control, Computer Supported Cooperative Work, 5, 299-321.
LOISELET, A. and HOC, J. M. 1999, Assessment of a method to study cognitive cooperation in fighter aircraft piloting. Paper presented at CSAPC '99, Villeneuve d'Ascq, France, September.
MILITELLO, L. G., KYNE, M. M., KLEIN, G., GETCHELL, K. and THORDSEN, M. 1999, A synthesised model of team performance, International Journal of Cognitive Ergonomics, 3, 131-158.
MILLOT, P. and LEMOINE, M. P. 1998, An attempt for generic concepts toward human-machine cooperation. Paper presented at IEEE SMC, San Diego, CA, October.
MILLOT, P. and MANDIAU, R. 1995, Man-machine cooperative organizations: formal and pragmatic implementation methods, in J. M. Hoc, P. C. Cacciabue and E. Hollnagel (eds), Expertise and Technology: Cognition & Human-Computer Cooperation (Hillsdale: Lawrence Erlbaum), 213-228.
MOSIER, K. L. 1997, Myths of expert decision-making and automated decision aids, in C. E. Zsambok and G. Klein (eds), Naturalistic Decision Making (Mahwah: Lawrence Erlbaum), 319-330.
MOSIER, K. L., SKITKA, L. J., HEERS, S. and BURDICK, M. 1998, Automation bias: decision-making and performance in high-tech cockpits, International Journal of Aviation Psychology, 8, 47-63.
MUIR, B. M. 1988, Trust between humans and machines, and the design of decision aids, in E. Hollnagel, G. Mancini and D. D. Woods (eds), Cognitive Engineering in Complex Dynamic Worlds (London: Academic Press), 71-84.
OLDER, M. T., WATERSON, P. E. and CLEGG, C. W. 1997, A critical assessment of task allocation methods and their applicability, Ergonomics, 40, 151-171.
PARASURAMAN, R. 1997, Humans and automation: use, misuse, disuse, abuse, Human Factors, 39, 230-253.
PARASURAMAN, R. and MOULOUA, M. (eds) 1996, Automation and Human Performance: Theory and Applications (Mahwah: Lawrence Erlbaum).
PIAGET, J. 1965, Etudes Sociologiques [Sociological Studies] (Geneva: Droz).
RASMUSSEN, J., PEJTERSEN, A. M. and GOODSTEIN, L. P. 1994, Cognitive Systems Engineering (New York: Wiley).
RIEGER, C. A. and GREENSTEIN, J. S. 1982, The allocation of tasks between the human and computer in automated systems, in Proceedings of the IEEE 1982 International Conference on 'Cybernetics and Society' (New York: IEEE), 204-208.
ROTH, E. M., BENNETT, K. B. and WOODS, D. D. 1988, Human interaction with an 'intelligent' machine, in E. Hollnagel, G. Mancini and D. D. Woods (eds), Cognitive Engineering in Complex Dynamic Worlds (London: Academic Press), 23-69.
SALAS, E., PRINCE, C., BAKER, D. P. and SHRESTHA, L. 1995, Situation awareness in team performance: implications for measurement and training, Human Factors, 37, 123-136.
SHERIDAN, T. B. and VERPLANCK, W. L. 1978, Human and Computer Control of Undersea Teleoperators. Technical report for the Office of Naval Research (Cambridge, MA: MIT Man-Machine Systems Laboratory).
SMITH, P. J., MCCOY, E. and LAYTON, C. 1997, Brittleness in the design of cooperative problem-solving systems: the effects on user performance, IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans, 27, 360-371.
VANDERHAEGEN, F., CREVITS, I., DEBERNARD, S. and MILLOT, P. 1994, Human-machine cooperation: toward an activity regulation assistance for different air-traffic control levels, International Journal of Human-Computer Interaction, 6, 65-104.
VICENTE, K. 1999, Cognitive Work Analysis (Mahwah: Lawrence Erlbaum).
WOODS, D. 1988, Coping with complexity: the psychology of human behaviour in complex systems, in J. Goodstein, H. Andersen and B. Olsen (eds), Tasks, Errors, and Mental Models (London: Taylor & Francis), 128-148.
WOODS, D. D. and ROTH, E. M. 1995, Symbolic AI computer simulations as tools for investigating the dynamics of joint cognitive systems, in J. M. Hoc, P. C. Cacciabue and E. Hollnagel (eds), Expertise and Technology: Cognition & Human-Computer Cooperation (Hillsdale: Lawrence Erlbaum), 75-90.
