HUMAN FACTORS, 1988, 30(4), 415-430
Cognitive Engineering: Human Problem
Solving with Tools
D. D. WOODS1 and E. M. ROTH,2 Westinghouse Research and Development Center,
Pittsburgh, Pennsylvania
Cognitive engineering is an applied cognitive science that draws on the knowledge and techniques of cognitive psychology and related disciplines to provide the foundation for principle-driven design of person-machine systems. This paper examines the fundamental features that characterize cognitive engineering and reviews some of the major issues faced by this nascent interdisciplinary field.
INTRODUCTION
Why is there talk about a field of cognitive
engineering? What is cognitive engineering?
What can it contribute to the development of
more effective person-machine systems?
What should it contribute? We will explore
some of the answers to these questions in this
paper. As with any nascent and interdisci-
plinary field, there can be very different per-
spectives about what it is and how it will de-
velop over time. This paper represents one
such view.
The same phenomenon has produced both
the opportunity and the need for cognitive
engineering. With the rapid advances and
dramatic reductions in the cost of compu-
tational power, computers have become
ubiquitous in modern life. In addition to tra-
ditional office applications (e.g., word-pro-
cessing, accounting, information systems),
computers increasingly dominate a broad
1Requests for reprints should be sent to David D.
Woods, Department of Industrial and Systems Engineer-
ing, Ohio State University, 1971 Neil Ave., Columbus, OH
43210.
2Emilie Roth, Department of Engineering and Public
Policy, Carnegie-Mellon University, Pittsburgh, PA 15213.
range of work environments (e.g., industrial
process control, air traffic control, hospital
emergency rooms, robotic factories). The
need for cognitive engineering occurs be-
cause the introduction of computerization
often radically changes the work environ-
ment and the cognitive demands placed on
the worker. For example, increased automa-
tion in process control applications has re-
sulted in a shift in the human role from a
controller to a supervisor who monitors and
manages semiautonomous resources. Al-
though this change reduces people's physical
workload, mental load often increases as the
human role emphasizes monitoring and com-
pensating for failures. Thus computerization
creates an increasingly larger world of cogni-
tive tasks to be performed. More and more
we create or design cognitive environments.
The opportunity for cognitive engineering
arises because computational technology
also offers new kinds and degrees of machine
power that greatly expand the potential to
assist and augment human cognitive activi-
ties in complex problem-solving worlds, such
as monitoring, problem formulation, plan
generation and adaptation, and fault man-
agement. This is a highly creative time when
people are exploring and testing what can be
created with the new machine power-dis-
plays with multiple windows and even rooms
(Henderson and Card, 1986). The new capa-
bilities have led to large amounts of activity
devoted to building new and more powerful
tools-how to build better-performing ma-
chine problem solvers. The question we con-
tinue to face is how we should deploy the
power available through new capabilities for
tool building to assist human performance.
This question defines the central focus of
cognitive engineering: to understand what is
effective support for human problem solvers.
The capability to build more powerful ma-
chines does not in itself guarantee effective
performance, as witnessed by early attempts
to develop computerized alarm systems in
process control (e.g., Pope, 1978) and at-
tempts to convert paper-based procedures to
a computerized form (e.g., Elm and Woods,
1985). The conditions under which the ma-
chine will be exercised and the human's role
in problem solving have a profound effect on
the quality of performance. This means that
factors related to tool usage can and should
affect the very nature of the tools to be used.
This observation is not new-in actual work
contexts, performance breakdowns have
been observed repeatedly with support sys-
tems, constructed in a variety of media and
technologies including current AI tools, when
issues of tool use were not considered (see
Roth, Bennett, and Woods, 1987). This is the
dark side: the capability to do more amplifies
the potential magnitude of both our suc-
cesses and our failures. Careful examination
of past shifts in technology reveals that new
difficulties (new types of errors or accidents)
are created when the shift in machine power
has changed the entire human-machine sys-
tem in unforeseen ways (e.g., Hirschhorn,
1984; Hoogovens Report, 1976; Noble, 1984;
Wiener, 1985).
Although our ability to build more power-
ful machine cognitive systems has grown and
been promulgated rapidly, our ability to un-
derstand how to use these capabilities has
not kept pace. Today we can describe cogni-
tive tools in terms of the tool-building tech-
nologies (e.g., tiled or overlapping windows).
The impediment to systematic provision of
effective decision support is the lack of an ad-
equate cognitive language of description
(Clancey, 1985; Rasmussen, 1986). What are
the cognitive implications of some applica-
tion's task demands and of the aids and in-
terfaces available to the practitioners in the
system? How do people behave/perform in
the cognitive situations defined by these de-
mands and tools? Because this independent
cognitive description has been missing, an
uneasy mixture of other types of description
of a complex situation has been substituted
-descriptions in terms of the application it-
self (e.g., internal medicine or power plant
thermodynamics), the implementation tech-
nology of the interfaces/aids, or the user's physi-
cal activities.
Different kinds of media or technology may
be more powerful than others in that they en-
able or enhance certain kinds of cognitive
support functions. Different choices of media
or technology may also represent trade-offs
between the kinds of support functions that
are provided to the practitioner. The effort
required to provide a cognitive support func-
tion in different kinds of media or technology
may also vary. In any case, performance aid-
ing requires that one focus at the level of the
cognitive support functions required and
then at the level of what technology can pro-
vide those functions or how those functions
can be crafted within a given computational
technology.
This view of cognitive technology as com-
plementary to computational technology is
in stark contrast to another view whereby
cognitive engineering is a necessary but
bothersome step to acquire the knowledge
fuel necessary to run the computational en-
gines of today and tomorrow.
COGNITIVE ENGINEERING IS ...
There has been growing recognition of this
need to develop an applied cognitive science
that draws on knowledge and techniques of
cognitive psychology and related disciplines
to provide the basis for principle-driven de-
sign (Brown and Newman, 1985; Newell and
Card, 1985; Norman, 1981; Norman and
Draper, 1986; Rasmussen, 1986). In this sec-
tion we will examine some of the characteris-
tics of cognitive engineering (or whatever
label you prefer-cognitive technologies,
cognitive factors, cognitive ergonomics,
knowledge engineering). The specific per-
spective for this exposition is that of cognitive
systems engineering (Hollnagel and Woods,
1983; Woods, 1986).
... about Complex Worlds
Cognitive engineering is about human be-
havior in complex worlds. Studying human
behavior in complex worlds (and designing
support systems) is one case of people en-
gaged in problem solving in a complex world,
analogous to the task of other human prob-
lem solvers (e.g., operators, troubleshooters)
who confront complexity in the course of
their daily tasks. Not surprisingly, the strate-
gies researchers and designers use to cope
with complexity are similar as well. For ex-
ample, a standard tactic to manage complex-
ity is to bound the world under consider-
ation. Thus one might address only a single
time slice of a dynamic process or only a sub-
set of the interconnections among parts of a
highly coupled world. This strategy is limited
because it is not clear whether the relevant
aspects of the whole have been captured.
First, parts of the problem-solving process
may be missed or their importance underes-
timated; second, some aspects of problem
solving may emerge only when more com-
plex situations are directly examined.
For example, the role of problem formula-
tion and reformulation in effective perfor-
mance is often overlooked. Reducing the
complexity of design or research questions by
bounding the world to be considered merely
displaces the complexity to the person in the
operational world rather than providing a
strategy to cope with the true complexity of
the actual problem-solving context. It is one
major source of failure in the design of ma-
chine problem solvers. For example, the de-
signer of a machine problem solver may as-
sume that only one failure is possible in order to
be able to completely enumerate possible solu-
tions and to make use of classification prob-
lem-solving techniques (Clancey, 1985). How-
ever, the actual problem solver must cope
with the possibility of multiple failures, mis-
leading signals, and interacting disturbances
(e.g., Pople, 1985; Woods and Roth, 1986).
The result is that we need, particularly in
this time of advancing machine power, to un-
derstand human behavior in complex situa-
tions. What makes problem solving complex?
How does complexity affect the performance
of human and machine problem solvers?
How can problem-solving performance in
complex worlds be improved and deficien-
cies avoided? Understanding the factors that
produce complexity, the cognitive demands
that they create, and some of the cognitive
failure forms that emerge when these de-
mands are not met is essential if advances in
machine power are to lead to new cognitive
tools that actually enhance problem-solving
performance. (See Dorner, 1983; Fischhoff,
Lanir, and Johnson, 1986; Klein, in press;
Montmollin and De Keyser, 1985; Rasmus-
sen, 1986; Selfridge, Rissland, and Arbib,
1984, for other discussions on the nature of
complexity in problem solving.)
... Ecological
Cognitive engineering is ecological. It is
about multidimensional, open worlds and
not about the artificially bounded closed
worlds typical of the laboratory or the engi-
neer's desktop (e.g., Coombs and Hartley,
1987; Funder, 1987). Of course, virtually all
of the work environments that we might be
interested in are man-made. The point is that
these worlds encompass more than the de-
sign intent-they exist in the world as natu-
ral problem-solving habitats.
An example of the ecological perspective is
the need to study humans solving problems
with tools (i.e., support systems) as opposed
to laboratory research that continues, for the
most part, to examine human performance
stripped of any tools. How to put effective cog-
nitive tools into the hands of practitioners is
the sine qua non for cognitive engineering.
From this viewpoint, quite a lot could be
learned from examining the nature of the
tools that people spontaneously create to
work more effectively in some problem-solv-
ing environment, or examining how preexist-
ing mechanisms are adapted to serve as tools,
as occurred in Roth et al. (1987), or examin-
ing how tools provided for a practitioner are
really put to use by practitioners. The studies
by De Keyser (e.g., 1986) are rare examples
of the latter.
In reducing the target world to a tractable
laboratory or desktop world in search of pre-
cise results, we run the risk of eliminating the
critical features of the world that drive be-
havior. This creates the problem of deciding
what counts as an effective stimulus (as Gib-
son has pointed out in ecological perception)
or, to use an alternative terminology, decid-
ing what counts as a symbol. To decide this
question, Gibson (1979) and Dennett (1982),
among others, have pointed out the need for
a semantic and pragmatic analysis of envi-
ronment-cognitive agent relationships with
respect to the goals/resources of the agent
and the demands/constraints in the environ-
ment. As a result, one has to pay very close
attention to what people actually do in a
problem-solving world, given the actual de-
mands that they face (Woods, Roth, and
Pople, 1987). Principle-driven design of sup-
port systems begins with understanding
the difficult aspects of a problem-
solving situation (e.g., Rasmussen, 1986;
Woods and Hollnagel, 1987).
... about the Semantics of a Domain
A corollary to the foregoing points is that
cognitive engineering must address the con-
tents or semantics of a domain (e.g., Coombs,
1986; Coombs and Hartley, 1987). Purely
syntactic and exclusively tool-driven ap-
proaches to develop support systems are vul-
nerable to the error of the third kind: solving
the wrong problem. The danger is to fall into
the psychologist's fallacy of William James
(1890) whereby the psychologist's reality is
confused with the psychological reality of the
human practitioner in his or her problem-
solving world. To guard against this danger,
the psychologist or cognitive engineer must
start with the working assumption that prac-
titioner behavior is reasonable and attempt
to understand how this behavior copes with
the demands and constraints imposed by the
problem-solving world in question. For ex-
ample, the introduction of computerized
alarm systems into power plant control
rooms inadvertently so badly undermined
the strategies operational personnel used to
cope with some problem-solving demands
that the systems had to be removed and the
previous alarm system restored (Pope, 1978).
The question is not why people failed to ac-
cept a useful technology but, rather, how the
original alarm system supported and the new
system failed to support operator strategies
to cope with the world's problem-solving de-
mands. This is not to say that the strategies
developed to cope with the original task de-
mands are always optimal, or even that they
always produce acceptable levels of perfor-
mance, but only that understanding how
they function in the initial cognitive environ-
ment is a starting point to develop truly ef-
fective support systems (e.g., Roth and
Woods, 1988).
Semantic approaches, on the other hand,
are vulnerable to myopia. If each world is
seen as completely unique and must be in-
vestigated tabula rasa, then cognitive engi-
neering can be no more than a set of tech-
niques that are used to investigate every
world anew. If this were the case, it would
impose strong practical constraints on prin-
ciple-driven development of support systems,
restricting it to cases in which the conse-
quences of poor performance are extremely
high.
To achieve relevance to specific worlds and
generalizability across worlds, the cognitive
language must be able to escape the language
of particular worlds, as well as the language
of particular computational mechanisms,
and identify pragmatic reasoning situations,
after Cheng and Holyoak (1985) and Cheng,
Holyoak, Nisbett, and Oliver (1986). These
reasoning situations are abstract relative to
the language of the particular application in
question and therefore transportable across
worlds, but they are also pragmatic in the
sense that the reasoning involves knowledge
of the things being reasoned about. More am-
bitious are attempts to build a formal cogni-
tive language-for example, that by Coombs
and Hartley (1987) through their work on co-
herence in model generative reasoning.
... about Improved Performance
Cognitive engineering is not merely about
the contents of a world; it is about changing
behavior/performance in that world. This is
both a practical consideration-improving
performance or reducing errors justifies the
investment from the point of view of the
world in question-and a theoretical consid-
eration-the ability to produce practical
changes in performance is the criterion for
demonstrating an understanding of the fac-
tors involved. Basic concepts are confirmed
only when they generate treatments (aiding
either on-line or off-line) that make a differ-
ence in the target world. Cheng's concepts
about human deductive reasoning (Cheng
and Holyoak, 1985; Cheng et al., 1986) gener-
ated treatments that produced very large
performance changes both absolutely and
relative to the history of rather ineffectual al-
ternative treatments to human biases in de-
ductive reasoning.
To achieve the goal of enhanced perfor-
mance, cognitive engineering must first iden-
tify the sources of error that impair the per-
formance of the current problem-solving
system. This means that there is a need for
cognitive engineering to understand where,
how, and why machine, human, and human-
plus-machine problem solving breaks down
in natural problem-solving habitats.
Buggy knowledge-missing, incomplete,
or erroneous knowledge-is one source of
error (e.g., Brown and Burton, 1978; Brown
and VanLehn, 1980; Gentner and Stevens,
1983). The buggy knowledge approach pro-
vides a specification of the knowledge struc-
tures (e.g., incomplete or erroneous knowl-
edge) that are postulated to produce the
pattern of errors and correct responses that
characterize the performance of particular
individuals. The specification is typically em-
bodied as a computer program and consti-
tutes a "theory" of what these individuals
"know" (including misconceptions). Human
performance aiding then focuses on provid-
ing missing knowledge and correcting the
knowledge bugs. From the point of view of
computational power, more knowledge and
more correct knowledge can be embodied
and delivered in the form of a rule-based ex-
pert system following a knowledge acquisi-
tion phase that determines the fine-grained
domain knowledge.
But the more critical question for effective
human performance may be how knowledge
is activated and utilized in the actual prob-
lem-solving environment (e.g., Bransford,
Sherwood, Vye, and Rieser, 1986; Cheng et
al., 1986). The question concerns not merely
whether the problem solver knows some par-
ticular piece of domain knowledge, such as
the relationship between two entities. Does
he or she know that it is relevant to the prob-
lem at hand, and does he or she know how to
utilize this knowledge in problem solving?
Studies of education and training often show
that students successfully acquire knowledge
that is potentially relevant to solving domain
problems but that they often fail to exhibit
skilled performance-for example, differ-
ences in solving mathematical exercises
versus word problems (see Gentner and Ste-
vens, 1983, for examples).
The fact that people possess relevant
knowledge does not guarantee that that
knowledge will be activated and utilized
when needed in the actual problem-solving
environment. This is the issue of expression of
knowledge. Education and training tend to
assume that if a person can be shown to pos-
sess a piece of knowledge in any circum-
stance, then this knowledge should be acces-
sible under all conditions in which it might
be useful. In contrast, a variety of research
has revealed dissociation effects whereby
knowledge accessed in one context remains
inert in another (Bransford et al., 1986;
Cheng et al., 1986; Perkins and Martin, 1986).
For example, Gick and Holyoak (1980) found
that unless explicitly prompted, people will
fail to apply a recently learned problem-solu-
tion strategy to an isomorphic problem (see
also Kotovsky, Hayes, and Simon, 1985).
Thus the fact that people possess relevant
knowledge does not guarantee that this
knowledge will be activated when needed.
The critical question is not to show that the
problem solver possesses domain knowledge,
but rather the more stringent criterion that
situation-relevant knowledge is accessible
under the conditions in which the task is per-
formed. This has been called the problem of
inert knowledge-knowledge that is accessed
only in a restricted set of contexts.
The general conclusion of studies on the
problem of inert knowledge is that possession
of the relevant domain knowledge or strate-
gies by themselves is not sufficient to ensure
that this knowledge will be accessed in new
contexts. Off-line training experiences need
to promote an understanding of how con-
cepts and procedures can function as tools for
solving relevant problems (Bransford et al.,
1986; Brown, Bransford, Ferrara, and Cam-
pione, 1983; Brown and Campione, 1986).
Training has to be about more than simply
student knowledge acquisition; it must also
enhance the expression of knowledge by con-
ditionalizing knowledge to its use via infor-
mation about "triggering conditions" and
constraints (Glaser, 1984).
Similarly, on-line representations of the
world can help or hinder problem solvers in
recognizing what information or strategies
are relevant to the problem at hand (Woods,
1986). For example, Fischhoff, Slovic, and
Lichtenstein (1979) and Kruglanski, Fried-
land, and Farkash (1984) found that judg-
mental biases (e.g., representativeness) were
greatly reduced or eliminated when aspects
of the situation cued the relevance of statisti-
cal information and reasoning. Thus one di-
mension along which representations vary is
their ability to provide prompts to the knowl-
edge relevant in a given context.
The challenge for cognitive engineering is
to study and develop ways to enhance the ex-
pression of knowledge and to avoid inert
knowledge. What training content and expe-
riences are necessary to develop condition-
alized knowledge (Glaser, 1984; Lesh, 1987;
Perkins and Martin, 1986)? What representa-
tions cue people about the knowledge that is
relevant to the current context of goals, sys-
tem state, and practitioner intentions (Wie-
cha and Henrion, 1987; Woods and Roth,
1988)?
... about Systems
Cognitive engineering is about systems. One
source of tremendous confusion has been an
inability to clearly define the "systems" of in-
terest. From one point of view the computer
program being executed is the end applica-
tion of concern. In this case, one often speaks
of the interface, the tasks performed within
the syntax of the interface, and human users
of the interface. Notice that the application
world (what the interface is used for) is
deemphasized. The bulk of work on human-
computer interaction takes this perspective.
Issues of concern include designing for learn-
ability (e.g., Brown and Newman, 1985; Car-
roll and Carrithers, 1984; Kieras and Polson,
1985) and designing for ease and pleasurable-
ness of use (Malone, 1983; Norman, 1983;
Shneiderman, 1986).
A second perspective is to distinguish the
interface from the application world (Hollna-
gel, Mancini, and Woods, 1986; Mancini,
Woods, and Hollnagel, in press; Miyata and
Norman, 1986; Rasmussen, 1986; Stefik et
al., 1985). For example, text-editing tasks are
performed only in some larger context such
as transcription, data entry, and composi-
tion. The interface is an external representa-
tion of an application world; that is, a me-
dium through which agents come to know
and act on the world-troubleshooting elec-
tronic devices (Davis, 1983), logistic mainte-
nance systems, managing data communica-
tion networks, managing power distribution
networks, medical diagnosis (Cohen et al.,
1987; Gadd and Pople, 1987), aircraft and
helicopter flight decks (Pew et al., 1986), air
traffic control systems, process control acci-
dent response (Woods, Roth, and Pople,
1987), and command and control of a battle-
field (e.g., Fischhoff et al., 1986). Tasks are
properties of the world in question, although
performance of these fundamental tasks (i.e.,
demands) is affected by the design of the ex-
ternal representation (e.g., Mitchell and
Saisi, 1987). The human is not a passive user
of a computer program but is an active prob-
lem-solver in some world. Therefore we will
generally refer to people as domain agents or
actors or problem solvers and not as users.
In part, the difference in the foregoing
views can be traced to differences in the cog-
nitive complexity of the domain task being
supported. Research on person-computer in-
teraction has typically dealt with office ap-
plications (e.g., word processors for docu-
ment preparation or copying machines for
duplicating material), in which the goals to
be accomplished (e.g., replace word 1 with
word 2) and the steps required to accomplish
them are relatively straightforward. These
applications fall at one extreme of the cogni-
tive complexity space. In contrast, there are
many decision-making and supervisory envi-
ronments (e.g., military situation assess-
ment; medical diagnosis) in which problem
formulation, situation assessment, goal defi-
nition, plan generation, and plan monitoring
and adaptation are significantly more com-
plex. It is in designing interfaces and aids for
these applications that it is essential to dis-
tinguish the world to be acted on from the
interface or window on the world (how one
comes to know that world), and from agents
who can act directly or indirectly on the
world.
The "system" of interest in design should
not be the machine problem solver per se,
nor should the focus of interest in evaluation
be the performance of the machine problem
solver alone. Ultimately the focus must be
the design and the performance of the
human-machine problem-solving ensemble
-how to "couple" human intelligence and
machine power in a single integrated system
that maximizes overall performance.
... about Multiple Cognitive Agents
A large number of the worlds that cognitive
engineering should be able to address con-
tain multiple agents who can act on the
world in question (e.g., command and con-
trol, process control, data communication
networks). Not only do we need to be clear
about where systemic boundaries are drawn
with respect to the application world and in-
terfaces to or representations of the world,
we also need to be clear about the different
agents who can act directly or indirectly on
the world. Cognitive engineering must be able
to address systems with multiple cognitive
agents. This applies to multiple human cog-
nitive systems (often called distributed
decision making; e.g., Fischhoff, 1986;
Fischhoff et al., 1986; March and Weisinger-
Baylon, 1986; Rochlin, La Porte, and Rob-
erts, in press; Schum, 1980).
Because of the expansions in computa-
tional powers, the machine element can be
thought of as a partially autonomous cogni-
tive agent in its own right. This raises the
problem of how to build a cognitive system
that combines both human and machine cog-
nitive systems or, in other words, joint cogni-
tive systems (Hollnagel, Mancini, and Woods,
1986; Mancini et al., in press). When a system
includes these machine agents, the human
role is not eliminated but shifted. This means
that changes in automation are changes in
the joint human-machine cognitive system,
and the design goal is to maximize overall
performance.
One metaphor that is often invoked to
frame questions about the relationship be-
tween human and machine intelligence is to
examine human-human relationships in
multiperson problem-solving or advisory sit-
uations and then to transpose the results to
human-intelligent machine interaction (e.g.,
Coombs and Alty, 1984). Following this meta-
phor leads Muir (1987) to raise the question
of the role of "trust" between man and ma-
chine in effective performance. One provoca-
tive question that Muir's analysis generates
is, how does the level of trust between human
and machine problem solvers affect perfor-
mance? The practitioner's judgment of ma-
chine competence or predictability can be
miscalibrated, leading to excessive trust or
mistrust. Either a system will be underuti-
lized or ignored when it could provide effec-
tive assistance, or the practitioner will defer
to the machine even in areas that challenge
or exceed the machine's range of competence.
Another question concerns how trust is es-
tablished between human and machine.
Trust or mistrust is based on cumulative ex-
perience with the other agent that provides
evidence about enduring characteristics of
the agent such as competence and predict-
ability. This means that factors about how
new technology is introduced into the work
environment can play a critical role in build-
ing or undermining trust in the machine
problem solver. If this stage of technology in-
troduction is mishandled (for example, prac-
titioners are exposed to the system before it
is adequately debugged), the practitioner's
trust in the machine's competence can be un-
dermined. Muir's analysis shows how varia-
tions in explanation and display facilities af-
fect how the person will use the machine by
affecting his or her ability to see how the ma-
chine works and therefore his or her level of
calibration. Muir also points out how human
information processing biases can affect how
the evidence of experience is interpreted in
the calibration process.
A second metaphor that is frequently in-
voked is supervisory control (Rasmussen,
1986; Sheridan and Hennessy, 1984). Again,
the machine element is thought of as a se-
miautonomous cognitive system, but in this
case it is a lower-order subordinate, albeit
partially autonomous. The human supervisor
generally has a wider range of responsibility,
and he or she possesses ultimate responsibil-
ity and authority. Boy (1987) uses this meta-
phor to guide the development of assistant
systems built from AI technology.
In order for a supervisory control architec-
ture between human and machine agents to
function effectively, several requirements
must be met that, as Woods (1986) has
pointed out, are often overlooked when tool-
driven constraints dominate design. First,
the supervisor must have real as well as titu-
lar authority; machine problem solvers can
be designed and introduced in such a way
that the human retains the responsibility for
outcomes without any effective authority.
Second, the supervisor must be able to redi-
rect the lower-order machine cognitive sys-
tem. Roth et al. (1987) found that some prac-
titioners tried to devise ways to instruct an
expert system in situations in which the ma-
chine's problem solving had broken down,
even when the machine's designer had pro-
vided no such mechanisms. Third, in order to
be able to supervise another agent, there is
need for a common or shared representation
of the state of the world and of the state of the
problem-solving process (Woods and Roth,
1988b); otherwise communication between
the agents will break down (e.g., Suchman,
1987).
Significant attention has been devoted to
the issue of how to get intelligent machines
to assess the goals and intentions of humans
without requiring explicit statements (e.g.,
Allen and Perrault, 1980; Quinn and Russell,
1986). However, the supervisory control met-
aphor highlights that it is at least as impor-
tant to pay attention to what information or
knowledge people need to track the intelli-
gent machine's "state of mind" (Woods and
Roth, 1988a).
A third metaphor is to consider the new
machine capabilities as extensions and ex-
pansions along a dimension of machine
power. In this metaphor machines are tools;
people are tool builders and tool users. Tech-
nological development has moved from phys-
ical tools (tools that magnify capacity for
physical work) to perceptual tools (exten-
sions to human perceptual apparatus such as
medical imaging) and now, with the arrival
of AI technology, to cognitive tools. (Although
this type of tool has a much longer history-
e.g., aide-memoires or decision analysis-AI
has certainly increased the interest in and
ability to provide cognitive tools.)
In this metaphor the question of the rela-
tionship between machine and human takes
the form of what kind of tool is an intelligent
machine (e.g., Ehn and Kyng, 1984; Such-
man, 1987; Woods, 1986). At one extreme, the
machine can be a prosthesis that compen-
sates for a deficiency in human reasoning or
problem solving. This could be a local defi-
ciency for the population of expected human
practitioners or a global weakness in human
reasoning. At the other extreme, the machine
can be an instrument in the hands of a funda-
mentally competent but limited-resource
human practitioner (Woods, 1986). The ma-
chine aids the practitioner by providing in-
creased or new kinds of resources (either
knowledge resources or processing resources
such as an expanded field of attention).
The extra resources may support improved
performance in several ways. One path is to
off-load overhead information-processing ac-
tivities from the person to the machine to
allow the human practitioner to focus his or
her resources on "higher-level" issues and
strategies. Examples include keeping track of
multiple ongoing activities in an external
memory, performing basic data computa-
tions or transformations, and collecting the
evidence related to decisions about particu-
lar domain issues, as occurred recently with
new computer-based displays in nuclear
power plant control rooms. Extra resources
may help to improve performance in another
way by allowing a restructuring of how the
human performs the task, shifting perfor-
mance onto a new higher plateau (see Pea,
1985). This restructuring concept is in con-
trast to the usual notion of new systems as
amplifiers of user capabilities. As Pea (1985)
points out, the amplification metaphor im-
plies that support systems improve human
performance by increasing the strength or
power of the cognitive processes the human
problem solver goes through to solve the
problem, but without any change in the un-
derlying activities, processes, or strategies
that determine how the problem is solved.
Alternatively, the resources provided (or not
provided) by new performance aids and in-
terface systems can support restructuring of
the activities, processes, or strategies that
carry out the cognitive functions relevant to
performing domain tasks (e.g., Woods and
Roth, 1988). New levels of performance are
now possible, and the kinds of errors one is
prone to (and therefore the consequences of
errors) change as well.
The instrumental perspective suggests that
the most effective power provided by good
cognitive tools is conceptualization power
(Woods and Roth, 1988a). The importance of
conceptualization power in effective prob-
lem-solving performance is often overlooked
because the part of the problem-solving pro-
cess that it most crucially affects, problem
formulation and reformulation, is often left
out of studies of problem solving and the de-
sign basis of new support systems. Support
systems that increase conceptualization
power (1) enhance a problem solver's ability
to experiment with possible worlds or strate-
gies (e.g., Hollan, Hutchins, and Weitzman,
1984; Pea, 1985; Woods et al., 1987), (2) en-
hance their ability to visualize or to make
concrete the abstract and uninspectable
(analogous to perceptual tools) in order to
better see the implications of a concept and to
help one restructure one's view of the prob-
lem (Becker and Cleveland, 1984; Coombs
and Hartley, 1987; Hutchins, Hollan, and
Norman, 1985; Pople, 1985); and (3) en-
hance error detection by providing better
feedback about the effects/results of actions
(Rizzo, Bagnara, and Visciola, 1987).
... Problem-Driven
Cognitive engineering is problem-driven,
tool-constrained. This means that cognitive
engineering must be able to analyze a prob-
lem-solving context and understand the
sources of both good and poor performance
-that is, the cognitive problems to be solved
or challenges to be met (e.g., Rasmussen,
1986; Woods and Hollnagel, 1987).
To build a cognitive description of a prob-
lem-solving world, one must understand how
representations of the world interact with
different cognitive demands imposed by the
application world in question and with char-
acteristics of the cognitive agents, both for
existing and prospective changes in the
world. Building of a cognitive description is
part of a problem-driven approach to the ap-
plication of computational power.
The results from this analysis are used to
define the kind of solutions that are needed to
enhance successful performance-to meet
cognitive demands of the world, to help the
human function more expertly, to eliminate
or mitigate error-prone points in the total
cognitive system (demand-resource mis-
matches). The results of this process then can
be deployed in many possible ways as con-
strained by tool-building limitations and
tool-building possibilities-exploration
training worlds, new information, represen-
tation aids, advisory systems, or machine
problem solvers (see Roth and Woods, 1988;
Woods and Roth, 1988).
In tool-driven approaches, knowledge ac-
quisition focuses on describing domain
knowledge in terms of the syntax of compu-
tational mechanisms-that is, the language
of implementation is used as a cognitive lan-
guage. Semantic questions are displaced ei-
ther to whomever selects the computational
mechanisms or to the domain expert who
enters knowledge.
The alternative is to provide an umbrella
structure of domain semantics that organizes
and makes explicit what particular pieces of
knowledge mean about problem solving in
the domain (Woods, 1988). Acquiring and
using a domain semantics is essential to
avoiding potential errors and specifying per-
formance boundaries when building "intelli-
gent" machines (Roth et aI., 1987). Tech-
niques for analyzing cognitive demands not
only help characterize a particular world but
also help to build a repertoire of general cog-
nitive situations that are transportable.
There is a clear trend toward this conception
of knowledge acquisition in order to achieve
more effective decision support and fewer
brittle machine problem solvers (e.g., Clan-
cey, 1985; Coombs, 1986; Gruber and Cohen,
1987).
AN EXAMPLE OF HOW COGNITIVE
AND COMPUTATIONAL
TECHNOLOGIES INTERACT
To illustrate the role of cognitive engineer-
ing in the deployment of new computational
powers, consider a case in human-computer
interaction (for other cases see Mitchell and
Forren, 1987; Mitchell and Saisi, 1987; Roth
and Woods, 1988; Woods and Roth, 1988). It
is one example of how purely technology-
driven deployment of new automation capa-
bilities can produce unintended and unfore-
seen negative consequences. In this case an
attempt was made toimplement acomputer-
ized procedure system using a commercial
hypertext system for building and navigating
large network data bases. Because cognitive
engineering issues were not considered in the
application of the new technology, a high-
level person-machine performance problem
resulted-the" getting lost" phenomenon
(Woods, 1984). Based on acognitive analysis
of the world's demands, it was possible tore-
design the system to support domain actors
and eliminate the getting-lost problems (Elm
and Woods, 1985). Through cognitive engi-
neering it proved possible to build a more ef-
fective computerized procedure system that,
for the most part, was within the technologi-
cal boundaries set by the original technology
chosen for the application.
The data base application in question was
designed to computerize paper-based in-
structions for nuclear power plant emer-
gency operation. The system was built based
on a network data base "shell" with a built-
in interface for navigating the network (Rob-
ertson, McCracken, and Newell, 1981). The
shell already treated human-computer inter-
face issues, so it was assumed possible to
create the computerized system simply by
entering domain knowledge (i.e., the current
instructions as implemented for the paper
medium) into the interface and network data
base framework provided by the shell.
The system contained two kinds of frames:
menu frames, which served to point to other
frames, and content frames, which contained
instructions from the paper procedures (gen-
erally one procedure step per frame). In pre-
liminary tests of the system it was found that
people uniformly failed to complete recovery
tasks with procedures computerized in this
way. They became disoriented or "lost," un-
able to keep procedure steps in pace with
plant behavior, unable to determine where
they were in the network of frames, unable to
decide where to go next, or unable even to
find places where they knew they should be
(i.e., they diagnosed the situation, knew the
appropriate responses as trained operators,
yet could not find the relevant procedural
steps in the network). These results were
found with people experienced with the
paper-based procedures and plant operations
as well as with people knowledgeable in the
frame-network software package and how
the procedures were implemented within it.
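To make the frame structure concrete, the following sketch (ours, not the original shell; the frame names, fields, and steps are hypothetical) models a network of menu frames, which only point to other frames, and content frames, which each hold roughly one procedure step. The closing comment notes why this organization yields only a narrow, local view of the procedure.

from dataclasses import dataclass, field

# Minimal sketch, assuming a hypothetical frame network like the one described above:
# menu frames only point to other frames; content frames hold one procedure step each.

@dataclass
class Frame:
    frame_id: str
    kind: str                                   # "menu" or "content"
    text: str = ""                              # instruction text for content frames
    links: list = field(default_factory=list)   # frame_ids reachable from this frame

class ProcedureNetwork:
    def __init__(self):
        self.frames = {}

    def add(self, frame):
        self.frames[frame.frame_id] = frame

    def next_options(self, frame_id):
        # The only context the interface offered: the links out of the current frame.
        return [self.frames[f] for f in self.frames[frame_id].links]

# One step per content frame means an operator sees only local links -- the
# "narrow window on the world" that contributed to the getting-lost problem.
net = ProcedureNetwork()
net.add(Frame("menu-1", "menu", links=["step-1"]))
net.add(Frame("step-1", "content", "Verify reactor trip.", links=["step-2"]))
net.add(Frame("step-2", "content", "Verify turbine trip.", links=["menu-1"]))
print([f.frame_id for f in net.next_options("step-1")])    # only ['step-2'] is visible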
What was the source of the disorientation
problem? It resulted from a failure to analyze
the cognitive demands associated with using
procedures in an externally paced world. For
example, in using the procedures the opera-
tor often is required to interrupt one activity
and transition to another step in the proce-
dure or to a different procedure depending on
plant conditions and plant responses to oper-
ator actions. As a result, operators need to be
able to rapidly transition across procedure
boundaries and to return to incomplete steps.
Because of the size of a frame, there was a
very high proportion of menu frames relative
to content frames, and the content frames
provided a narrow window on the world.
This structure made it difficult to read ahead
to anticipate instructions, to mark steps
pending completion and return to them eas-
ily, to see the organization of the steps, or to
mark a "trail" of activities carried out during
the recovery. Many activities that are inher-
ently easy to perform in a physical book
turned out to be very difficult to carry out-
for example, reading ahead. The result was a
mismatch between user information-process-
ing activities during domain tasks and the
structure of the interface as arepresentation
of the world of recovery fromabnormalities.
These results triggered a full design cycle
that began with a cognitive analysis to deter-
mine the user information-handling activi-
ties needed to effectively accomplish recov-
ery tasks in emergency situations. Following
procedures was not simply a matter of lin-
ear, step-by-step execution of instructions;
rather, it required the ability to maintain a
broad context of the purpose and relation-
ships among the elements in the procedure
(see also Brown et al., 1982; Roth et al.,
1987). Operators needed to maintain aware-
ness of the global context (i.e., how a given
step fits into the overall plan), to anticipate
the need for actions by looking ahead, and to
monitor for changes in plant state that would
require adaptation of the current response
plan.
A variety of cognitive engineering tech-
niques were utilized in a new interface design
to support the demands (see Woods, 1984).
First, a spatial metaphor was used to make
the system more like a physical book. Sec-
ond, display selection/movement options
were presented in parallel, rather than se-
quentially, with procedural information (de-
fining two types of windows in the interface).
Transition options were presented at several
grains of analysis to support moves from step
to step as easily as moves across larger units
in the structure of the procedure system. In
addition, incomplete steps were automati-
cally tracked, and those steps were made di-
rectly accessible (e.g., electronic bookmarks
or placeholders).
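The automatic placekeeping just described can be illustrated with a small sketch (ours and hypothetical, not the actual implementation): steps that are suspended rather than completed are recorded without operator effort and remain directly accessible as electronic bookmarks.

# Hypothetical sketch of automatic placekeeping: suspended, incomplete steps are
# bookmarked as a side effect of navigation and stay directly reachable.

class StepTracker:
    def __init__(self):
        self.current = None
        self.pending = []                  # electronic bookmarks for incomplete steps

    def open_step(self, step_id):
        self.current = step_id

    def suspend(self):
        # The operator transitions away before finishing; bookmark the step.
        if self.current and self.current not in self.pending:
            self.pending.append(self.current)
        self.current = None

    def complete(self):
        # A finished step no longer needs a bookmark.
        if self.current in self.pending:
            self.pending.remove(self.current)
        self.current = None

    def bookmarks(self):
        return list(self.pending)

tracker = StepTracker()
tracker.open_step("procedure A, step 4")
tracker.suspend()                          # plant conditions force a transition
tracker.open_step("procedure B, step 2")
tracker.complete()
print(tracker.bookmarks())                 # ['procedure A, step 4'] remains accessible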
To provide the global context within which
the current procedure step occurs, the step of
interest is presented in detail and is embed-
ded in a skeletal structure of the larger re-
sponse plan of which it is a part (Furnas,
1986; Woods, 1984). Context sensitivity was
supported by displaying the rules for possible
adaptation or shifts in the current response
plan that are relevant to the current context
in parallel with current relevant options and
the current region of interest in the proce-
dures (a third window). Note how the cogni-
tive analysis of the domain defined what
types of data needed to be seen effectively in
parallel, which then determined the number
of windows required. Also note that the cog-
nitive engineering redesign was, with a few
exceptions, directly implementable within
the base capabilities of the interface shell.
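The global-context display described above follows the fisheye idea cited from Furnas (1986): an item's degree of interest is its a priori importance minus its distance from the current focus, and items above a threshold form the skeletal structure in which the focal step is embedded. The sketch below is our own illustration of that computation on an invented response plan, not the system's actual code.

# Sketch of a fisheye view over a hierarchical response plan (after Furnas, 1986):
# degree of interest = a priori importance - distance from the current focus.
# Steps above threshold form the skeletal context around the focal step.

def path_to_root(node, parents):
    p = [node]
    while parents.get(node):
        node = parents[node]
        p.append(node)
    return p[::-1]                                   # root ... node

def tree_distance(a, b, parents):
    pa, pb = path_to_root(a, parents), path_to_root(b, parents)
    common = sum(1 for x, y in zip(pa, pb) if x == y)
    return (len(pa) - common) + (len(pb) - common)

def fisheye_view(steps, parents, focus, threshold=-2):
    view = []
    for n in steps:
        a_priori = -(len(path_to_root(n, parents)) - 1)    # higher-level steps matter everywhere
        doi = a_priori - tree_distance(n, focus, parents)
        if n == focus or doi >= threshold:
            view.append(n)
    return view

# Invented plan: procedure E-1 with substeps; E-1.2a is the step currently being executed.
parents = {"E-1": None, "E-1.1": "E-1", "E-1.2": "E-1", "E-1.2a": "E-1.2",
           "E-1.2b": "E-1.2", "E-1.3": "E-1", "E-1.3a": "E-1.3", "E-2": None}
steps = ["E-1", "E-1.1", "E-1.2", "E-1.2a", "E-1.2b", "E-1.3", "E-1.3a", "E-2"]
print(fisheye_view(steps, parents, focus="E-1.2a"))  # the focal step plus its ancestors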
As the foregoing example saliently demon-
strates, there can be severe penalties for fail-
ing to adequately map the cognitive demands
of the environment. However, if we under-
stand the cognitive requirements imposed by
the domain, then a variety of techniques can
be employed to build support systems for
those functions.
SUMMARY
The problem of providing effective decision
support hinges on how the designer decides
what will be useful in a particular applica-
tion. Can researchers provide designers with
concepts and techniques to determine what
will be useful support systems, or are we con-
demned to simply build what can be built
practically and wait for the judgment of ex-
perience? Is principle-driven design possi-
ble?
A vigorous and viable cognitive engineer-
ing can provide the knowledge and tech-
niques necessary for principle-driven design.
Cognitive engineering does this by providing
the basis for a problem-driven rather than a
technology-driven approach whereby the re-
quirements and bottlenecks in cognitive task
performance drive the development of tools
to support the human problem solver. Cogni-
tive engineering can address (a) existing cog-
nitive systems in order to identify deficien-
cies that cognitive system redesign can
correct and (b) prospective cognitive systems
as a design tool during the allocation of cog-
nitive tasks and the development of an effec-
tive joint architecture. In this paper we have
attempted to outline the questions that need
to be answered to make this promise real and
to point to research that already has begun to
provide the necessary concepts and tech-
niques.
REFERENCES
Allen, J., and Perrault, C. (1980). Analyzing intention in
utterances. Artificial Intelligence, 15, 143-178.
Becker, R. A., and Cleveland, W. S. (1984). Brushing the
scatterplot matrix: High-interaction graphical methods
for analyzing multidimensional data (Tech. Report).
Murray Hill, NJ: AT&T Bell Laboratories.
Boy, G. A. (1987). Operator assistant systems. Interna-
tional Journal of Man-Machine Studies, 27, 541-554.
Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.).
(in press). Cognitive engineering in dynamic worlds.
London: Academic Press.
Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986).
Teaching and problem solving: Research foundations.
American Psychologist, 41, 1078-1089.
Brown, A. L., Bransford, J. D., Ferrara, R. A., and Cam-
pione, J. C. (1983). Learning, remembering, and under-
standing. In J. H. Flavell and E. M. Markman (Eds.),
Carmichael's manual of child psychology. New York:
Wiley.
Brown, A. L., and Campione, J. C. (1986). Psychological
theory and the study of learning disabilities. American
Psychologist, 41,1059-1068.
Brown, J. S., and Burton, R. R. (1978). Diagnostic models
for procedural bugs in basic mathematics. Cognitive
Science, 2, 155-192.
Brown, J. S., Moran, T. P., and Williams, M. D. (1982). The
semantics of procedures (Tech. Report). Palo Alto, CA:
Xerox Palo Alto Research Center.
Brown, J. S., and Newman, S. E. (1985). Issues in cogni-
tive and social ergonomics: From our house to Bau-
haus. Human-Computer Interaction, 1, 359-391.
Brown, J. S., and VanLehn, K. (1980). Repair theory: A
generative theory of bugs in procedural skills. Cogni-
tive Science, 4, 379-426.
Carroll, J. M., and Carrithers, C. (1984). Training wheels in
a user interface. Communications of the ACM, 27,
800-806.
Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reason-
ing schemas. Cognitive Psychology, 17, 391-416.
Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986).
Pragmatic versus syntactic approaches to training de-
ductive reasoning. Cognitive Psychology, 18, 293-328.
Clancey, W. J. (1985). Heuristic classification. Artificial In-
telligence, 27, 289-350.
Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R.,
and Suthers, D. (1987). Management of uncertainty in
medicine. In Proceedings of the IEEE Conference on
Computers and Communications. New York: IEEE.
Coombs, M. J. (1986). Artificial intelligence and cognitive
technology: Foundations and perspectives. In E. Holl-
nagel, G. Mancini, and D. D. Woods (Eds.), Intelligent
decision support in process environments. New York:
Springer-Verlag.
Coombs, M. J., and Alty, J. L. (1984). Expert systems: An
alternative paradigm. International Journal of Man-
Machine Studies, 20, 21-43.
Coombs, M. J., and Hartley, R. T. (1987). The MGR algo-
rithm and its application to the generation of explana-
tions for novel events. International Journal of Man-
Machine Studies, 27, 679-708. Also in Mancini, G.,
Woods, D., and Hollnagel, E. (Eds.). (in press). Cogni-
tive engineering in dynamic worlds. London: Academic
Press.
Davis, R. (1983). Reasoning from first principles in elec-
tronic troubleshooting. International Journal of Man-
Machine Studies, 19,403-423.
De Keyser, V. (1986). Les interactions hommes-machine:
Caractéristiques et utilisations des différents supports
d'information par les opérateurs (Person-machine inter-
action: How operators use different information chan-
nels). Rapport Politique Scientifique/FAST no. 8.
Liège, Belgium: Psychologie du Travail, Université de
l'État à Liège.
Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.),
Thought and object. Oxford: Clarendon Press.
Dorner, D. (1983). Heuristics and cognition in complex
systems. In R. Groner, M. Groner, and W. F. Bischof
(Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.
Ehn, P., and Kyng, M. (1984). A tool perspective on design of
interactive computer support for skilled workers. Unpub-
lished manuscript, Swedish Center for Working Life,
Stockholm.
Elm, W. C., and Woods, D. D. (1985). Getting lost: A case
study in interface design. In Proceedings of the Human
Factors Society 29th Annual Meeting (pp. 927-931).
Santa Monica, CA: Human Factors Society.
Fischhoff, B. (1986). Decision making in complex systems.
In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.),
Intelligent decision support. New York: Springer-Ver-
lag.
Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Im-
proving intuitive judgment by subjective sensitivity
analysis. Organizational Behavior and Human Perfor-
mance, 23, 339-359.
Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military
risk taking and modern C3I (Tech. Report 86-2). Eugene,
OR: Decision Research.
Funder, D. C. (1987). Errors and mistakes: Evaluating the
accuracy of social judgments. Psychological Bulletin,
101,75-90.
Furnas, G. W. (1986). Generalized fisheye views. In M.
Mantei and P. Orbeton (Eds.), Human factors in com-
puting systems: CHI'86 Conference Proceedings (pp.
16-23). New York: ACM/SIGCHI.
Gadd, C. S., and Pople, H. E. (1987). An interpretation syn-
thesis model of medical teaching rounds discourse:
Implications for expert system interaction. Interna-
tional Journal of Educational Research, 1.
Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental
models. Hillsdale, NJ: Erlbaum.
Gibson, J. J. (1979). The ecological approach to visual per-
ception. Boston: Houghton Mifflin.
Gick, M. L., and Holyoak, K. J. (1980). Analogical problem
solving. Cognitive Psychology, 12, 306-365.
Glaser, R. (1984). Education and thinking: The role of
knowledge. American Psychologist, 39, 93-104.
Gruber, T., and Cohen, P. (1987). Design for acquisition:
Principles of knowledge system design to facilitate
knowledge acquisition (special issue on knowledge ac-
quisition for knowledge-based systems). International
Journal of Man-Machine Studies, 26, 143-159.
Henderson, A., and Card, S. (1986). Rooms: The use of mul-
tiple virtual workspaces to reduce space contention in a
window-based graphical user interface (Tech. Report).
Palo Alto, CA: Xerox PARC.
Hirschhorn, L. (1984). Beyond mechanization: Work and
technology in a postindustrial age. Cambridge, MA: MIT
Press.
Hollan, J., Hutchins, E., and Weitzman, L. (1984).
Steamer: An interactive inspectable simulation-based
training system. AI Magazine, 4, 15-27.
Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986).
Intelligent decision support in process environments.
New York: Springer-Verlag.
Hollnagel, E., and Woods, D. D. (1983). Cognitive systems
engineering: New wine in new bottles. International
Journal of Man-Machine Studies, 18, 583-600.
Hoogovens Report. (1976). Human factors evaluation: Hoo-
govens No. 2 hot strip mill (Tech. Report FR251). Lon-
don: British Steel Corporation/Hoogovens.
Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct
manipulation interfaces. Human-Computer Interac-
tion, 1,311-338.
James, W. (1890). The principles of psychology. New York:
Holt.
Kieras, D. E., and Polson, P. G. (1985). An approach to the
formal analysis of user complexity. International Jour-
nal of Man-Machine Studies, 22, 365-394.
Klein, G. A. (in press). Recognition-primed decisions. In
W. B. Rouse (Ed.), Advances in man-machine research,
vol. 5. Greenwich, CT: JAI Press.
Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why
are some problems hard? Evidence from Tower of
Hanoi. Cognitive Psychology, 17, 248-294.
Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay
persons' sensitivity to statistical information: The case
of high perceived applicability. Journal of Personality
and Social Psychology, 46, 503-518.
Lesh, R. (1987). The evolution of problem representations
in the presence of powerful conceptual amplifiers. In
C. Janvier (Ed.), Problems of representation in the teach-
ing and learning of mathematics. Hillsdale, NJ: Erl-
baum.
Malone, T. W. (1983). How do people organize their desks:
Implications for designing office automation systems.
ACM Transactions on Office Information Systems, 1,
99-112.
Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in
press). Cognitive engineering in dynamic worlds. Lon-
don: Academic Press. (Special issue of International
Journal of Man-Machine Studies, vol. 27).
March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Am-
biguity and command. Marshfield, MA: Pitman Pub-
lishing.
McKendree, J., and Carroll, J. M. (1986). Advising roles of
a computer consultant. In M. Mantei and P. Orbeton
(Eds.), Human factors in computing systems: CHI'86
Conference Proceedings (pp. 35-40). New York: ACM/
SIGCHI.
Miller, P. L. (1983). ATTENDING: Critiquing a physician's
management plan. IEEE Transactions on Pattern Anal-
ysis and Machine Intelligence, PAMI-5, 449-461.
Mitchell, C., and Forren, M. G. (1987). Multimodal user
input to supervisory control systems: Voice-aug-
mented keyboard. IEEE Transactions on Systems,
Man, and Cybernetics, SMC-17, 594-607.
Mitchell, C., and Saisi, D. (1987). Use of model-based qual-
itative icons and adaptive windows in workstations for
supervisory control systems. IEEE Transactions on
Systems, Man, and Cybernetics, SMC-17, 573-593.
Miyata, Y., and Norman, D. A. (1986). Psychological issues
in support of multiple activities. In D. A. Norman and
S. W. Draper (Eds.), User-centered system design: New
perspectives on human-computer interaction. Hillsdale,
NJ: Erlbaum.
Montmollin, M. de, and De Keyser, V. (1985). Expert logic
vs. operator logic. In G. Johannsen, G. Mancini, and L.
Martensson (Eds.), Analysis, design, and evaluation of
man-machine systems. CEC-JRC Ispra, Italy: IFAC.
Muir, B. (1987). Trust between humans and machines. In-
ternational Journal of Man-Machine Studies, 27,
527-539. Also in Mancini, G., Woods, D., and Hollna-
gel, E. (Eds.). (in press). Cognitive engineering in dy-
namic worlds. London: Academic Press.
Newell, A., and Card, S. K. (1985). The prospects for psy-
chological science in human-computer interaction.
Human-Computer Interaction, 1, 209-242.
Noble, D. F. (1984). Forces of production: A social history of
industrial automation. New York: Alfred A. Knopf.
Norman, D. A. (1981). Steps towards a cognitive engineering
(Tech. Report). San Diego: University of California,
San Diego, Program in Cognitive Science.
Norman, D. A. (1983). Design rules based on analyses of
human error. Communications of the ACM, 26,
254-258.
Norman, D. A., and Draper, S. W. (1986). User-centered
system design: New perspectives on human-computer in-
teraction. Hillsdale, NJ: Erlbaum.
Pea, R. D. (1985). Beyond amplification: Using the com-
puter to reorganize mental functioning. Educational
Psychologist, 20, 167-182.
Perkins, D., and Martin, F. (1986). Fragile knowledge and
neglected strategies in novice programmers. In E. So-
loway and S. Iyengar (Eds.), Empirical studies of pro-
grammers. Norwood, NJ: Ablex.
Pew, R. W., et al. (1986). Cockpit automation technology
(Tech. Report 6133). Cambridge, MA: BBN Laborato-
ries Inc.
Pope, R. H. (1978). Power station control room and desk
design: Alarm system and experience in the use of CRT
displays. In Proceedings of the International Sympo-
sium on Nuclear Power Plant Control and Instrumenta-
tion. Cannes, France.
Pople, H., Jr. (1985). Evolution of an expert system: From
internist to caduceus. In I. De Lotto and M. Stefanelli
(Eds.), Artificial intelligence in medicine. New York: El-
sevier Science Publishers.
Quinn, L., and Russell, D. M. (1986). Intelligent interfaces:
User models and planners. In M. Mantei and P. Orbe-
ton (Eds.), Human factors in computing systems:
CHI'86 Conference Proceedings (pp. 314-320). New
York: ACM/SIGCHI.
Rasmussen, J. (1986). Information processing and human-
machine interaction: An approach to cognitive engineer-
ing. New York: North-Holland.
Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human
error detection processes. International Journal of
Man-Machine Studies, 27, 555-570. Also in Mancini,
G., Woods, D., and Hollnagel, E. (Eds.). (in press). Cog-
nitive engineering in dynamic worlds. London: Aca-
demic Press.
Robertson, G., McCracken, D., and Newell, A. (1981). The
ZOG approach to man-machine communication. Inter-
national Journal of Man-Machine Studies, 14, 461-488.
Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in
press). The self-designing high-reliability organiza-
tion: Aircraft carrier flight operations at sea. Naval
War College Review.
Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human
interaction with an "intelligent" machine. Interna-
tional Journal of Man-Machine Studies, 27, 479-525.
Also in Mancini, G., Woods, D., and Hollnagel, E.
(Eds.). (in press). Cognitive engineering in dynamic
worlds. London: Academic Press.
Roth, E. M., and Woods, D. D. (1988). Aiding human per-
formance: I. Cognitive analysis. Le Travail Humain,
51(1), 39-64.
Schum, D. A. (1980). Current developments in research on
cascaded inference. In T. S. Wallsten (Ed.), Cognitive
processes in decision and choice behavior. Hillsdale,
NJ: Erlbaum.
Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984).
Adaptive control of ill-defined systems. New York: Ple-
num Press.
Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and
modeling of supervisory control behavior. Washington,
DC: National Academy Press.
Shneiderman, B. (1986). Seven plus or minus two central
issues in human-computer interaction. In M. Mantei
and P. Orbeton (Eds.), Human factors in computing
systems: CHI'86 Conference Proceedings (pp. 343-349).
New York: ACM/SIGCHI.
Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S.,
and Suchman, L. (1985, September). Beyond the chalk-
board: Using computers to support collaboration and
problem solving in meetings (Tech. Report). Palo Alto,
CA: Intelligent Systems Laboratory, Xerox Palo Alto
Research Center.
Suchman, L. A. (1987). Plans and situated actions: The
problem of human-machine communication. Cam-
bridge: Cambridge University Press.
Wiecha, C., and Henrion, M. (1987). A graphical tool for
structuring and understanding quantitative decision
models. In Proceedings of Workshop on Visual Lan-
guages. New York: IEEE Computer Society.
Wiener, E. (1985). Beyond the sterile cockpit. Human Fac-
tors, 27, 75-90.
Woods, D. D. (1984). Visual momentum: A concept to im-
prove the cognitive coupling of person and computer.
International Journal of Man-Machine Studies, 21,
229-244.
Woods, D. D. (1986). Paradigms for intelligent decision
support. In E. Hollnagel, G. Mancini, and D. D. Woods
(Eds.), Intelligent decision support in process environ-
ments. New York: Springer-Verlag.
Woods, D. D. (1988). Coping with complexity: The psy-
chology of human behavior in complex systems. In
L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.),
Mental models, tasks and errors: A collection of essays to
celebrate Jens Rasmussen's 60th birthday. London: Tay-
lor and Francis.
Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive
demands in complex problem solving worlds (special
issue on knowledge acquisition for knowledge-based
systems). International Journal of Man-Machine Stud-
ies, 26, 257-275.
Woods, D. D., and Roth, E. M. (1986). Models of cognitive
behavior in nuclear power plant personnel. (NUREG-
CR-4532). Washington, DC: U.S. Nuclear Regulatory
Commission.
Woods, D. D., and Roth, E. M. (1988a). Cognitive systems
engineering. In M. Helander (Ed.), Handbook of
human-computer interaction. New York: North-Hol-
land.
Woods, D. D., and Roth, E. M. (1988b). Aiding human per-
formance: II. From cognitive analysis to support sys-
tems. Le Travail Humain, 51, 139-172.
Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive
Environment Simulation: An artificial intelligence sys-
tem for human performance assessment (NUREG-CR-
4862). Washington, DC: U.S. Nuclear Regulatory Com-
mission.
