Abstract
We describe a project to capitalize on newly available levels of computational resources in order to understand human
cognition. We are building an integrated physical system including vision, sound input and output, and dextrous
manipulation, all controlled by a continuously operating large scale parallel MIMD computer. The resulting system
will learn to "think" by building on its bodily experiences to accomplish progressively more abstract tasks. Past
experience suggests that in attempting to build such an integrated system we will have to fundamentally change the
way artificial intelligence, cognitive science, linguistics, and philosophy think about the organization of
intelligence. We expect to be able to better reconcile the theories that will be developed with current work in
neuroscience.
The underlying computational model used on that robot — and with many tens of other autonomous mobile robots we have built — consisted of networks of message-passing augmented finite state machines (AFSMs). Each of these AFSMs was a separate process. The messages were sent over predefined 'wires' from a specific transmitting to a specific receiving AFSM. The messages were simple numbers (typically 8 bits) whose meaning depended on the designs of both the transmitter and the receiver. An AFSM had additional registers which held the most recent incoming message on any particular wire. This gives a very simple model of parallelism, even simpler than that of CSP (Hoare 1985). The registers could have their values fed into a local combinatorial circuit to produce new values for registers or to provide an output message. The network of AFSMs was totally asynchronous, but individual AFSMs could have fixed duration monostables which provided for dealing with the flow of time in the outside world. The behavioral competence of the system was improved by adding more behavior-specific network to the existing network. This process was called layering. It was a simplistic and crude analogy to evolutionary development: as with evolution, at every stage of the development the systems were tested. Each of the layers was a behavior-producing piece of network in its own right, although it might implicitly rely on the presence of earlier pieces of network. For instance, an explore layer did not need to explicitly avoid obstacles, as the designer knew that a previous avoid layer would take care of it. A fixed priority arbitration scheme was used to handle conflicts.

On top of the AFSM substrate we used another abstraction known as the Behavior Language, or BL (Brooks 1990a), which was much easier for the user to program with. The output of the BL compiler was a standard set of augmented finite state machines; by maintaining this compatibility all existing software could be retained. When programming in BL the user had complete access to full Common Lisp as a metalanguage by way of a macro mechanism. Thus the user could easily develop abstractions on top of BL, while still writing programs which compiled down to networks of AFSMs. In a sense, AFSMs played the role of assembly language in normal high level computer languages. But the structure of the AFSM networks enforced a programming style which naturally compiled into very efficient small processes. The structure of the Behavior Language enforced a modularity where data sharing was restricted to smallish sets of AFSMs.

In the humanoid project much of the computation, especially for the lower levels of the system, will naturally be of a similar nature. We expect to perform different experiments where in some cases the higher level computations are of the same nature, and in other cases the higher levels will be much more symbolic in nature, although the symbolic bindings will be restricted to within individual processors. We need to use software and hardware environments which support these requirements without sacrificing the high levels of performance of which we wish to make use.

5.1.1 Software. For the software environment we have a number of requirements:

• There should be a good software development environment.

• The system should be completely portable over many hardware environments, so that we can upgrade to new parallel machines over the lifetime of this project.

• The system should provide efficient code for perceptual processing such as vision.

• The system should let us write high level symbolic programs when desired.

• The system language should be a standardized language that is widely known and understood.

In summary, our software environment should let us gain easy access to high performance parallel computation.

We have chosen to use Common Lisp (Steele Jr. 1990) as the substrate for all software development. This gives us good programming environments including type checked debugging, rapid prototyping, symbolic computation, easy ways of writing embedded language abstractions, and automatic storage management. We believe that Common Lisp is superior to C (the other major contender) in all of these aspects.

The problem then is how to use Lisp in a massively parallel machine where each node may not have the vast amounts of memory that we have become accustomed to feeding Common Lisp implementations on standard Unix boxes.
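The AFSM communication model described at the start of this section can be sketched in a few lines. The sketch below is our own illustration in Python, not the Behavior Language itself; all names (Wire, AFSM, arbitrate) are invented. It shows the two key properties: wires are 1-deep registers holding only the most recent message, and conflicts between layers are resolved by fixed priority arbitration.

```python
# Minimal sketch of message-passing augmented finite state machines (AFSMs).
# Wires are 1-deep registers: a new message simply overwrites the old one.
# All names here (Wire, AFSM, arbitrate) are illustrative, not from BL.

class Wire:
    def __init__(self):
        self.value = None          # most recent message (typically an 8-bit number)
    def send(self, value):
        self.value = value         # overwrite: no queuing, no acknowledgment

class AFSM:
    """One machine: input registers plus a local combinatorial update function."""
    def __init__(self, inputs, output, update):
        self.inputs = inputs       # incoming wires (registers)
        self.output = output       # outgoing wire
        self.update = update       # combinatorial function of register values
    def step(self):
        msg = self.update([w.value for w in self.inputs])
        if msg is not None:
            self.output.send(msg)

def arbitrate(wires):
    """Fixed-priority arbitration: first wire (in priority order) with a message wins."""
    for w in wires:
        if w.value is not None:
            return w.value
    return None

# Example layering: an 'avoid' layer takes priority over an 'explore' layer.
explore_out, avoid_out, motor_in = Wire(), Wire(), Wire()
explore = AFSM([], explore_out, lambda vals: 1)     # always: drive forward
avoid   = AFSM([], avoid_out,   lambda vals: None)  # quiet unless triggered
explore.step(); avoid.step()
motor_in.send(arbitrate([avoid_out, explore_out]))  # explore wins while avoid is quiet
```

As in the layered systems described above, the explore layer never reasons about obstacles; if the avoid layer ever does emit a message, the fixed-priority arbiter lets it override the explore output.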
We have a long history of building high performance Lisp compilers (Brooks, Gabriel & Steele Jr. 1982), including one of the two most common commercial Lisp compilers on the market, Lucid Lisp (Brooks et al. 1986).

Recently we have developed L (Brooks 1993), a retargetable, small, efficient Lisp which is a downwardly compatible subset of Common Lisp. When compiled for a 68020 based machine the load image (without the compiler) is only 140 Kbytes, but includes multiple values; strings; characters; arrays; a simplified but compatible package system; all the "ordinary" aspects of format, backquote and comma, setf, etc.; full Common Lisp lambda lists including optionals and keyword arguments; macros; an inspector; a debugger; defstruct (integrated with the inspector); block, catch, and throw; full dynamic closures; a full lexical interpreter; floating point; fast garbage collection; and so on. The compiler runs in time linear in the size of an input expression, except in the presence of lexical closures. It nevertheless produces highly optimized code in most cases. L is missing flet and labels, generic arithmetic, bignums, rationals, complex numbers, the library of sequence functions (which can be written within L), and esoteric parts of format and packages.

The L system is an intellectual descendent of the dynamically retargetable Lucid Lisp compiler (Brooks et al. 1986) and the dynamically retargetable Behavior Language compiler (Brooks 1990a). The system is totally written in L, with machine dependent backends for retargeting. The first backend is for the Motorola 68020 (and upwards) family, but it is easily retargeted to new architectures. The process consists of writing a simple machine description, providing code templates for about 100 primitive procedures (e.g., fixed precision integer +, *, =, etc., string indexing via CHAR and other accessors, CAR, CDR, etc.), macro expansion code for about 20 pseudo instructions (e.g., procedure call, procedure exit, checking the correct number of arguments, linking CATCH frames, etc.), and two corresponding sets of assembler routines which are too big to be expanded as code templates every time, but are so critical in speed that they need to be written in machine language, without the overhead of a procedure call, rather than in Lisp (e.g., CONS, spreading of multiple values on the stack, etc.). There is a version of the I/O system which operates by calling C routines (e.g., fgetchar, etc.; this is how the Macintosh version of L runs), so it is rather simple to port the system to any hardware platform we might choose to use in the future.

Note carefully the intention here: L is to be the delivery vehicle running on the brain hardware of the humanoid, potentially on hundreds or thousands of small processors. Since it is fully downward compatible with Common Lisp, however, we can carry out code development and debugging on standard workstations with full programming environments (e.g., in Macintosh Common Lisp, or Lucid Common Lisp with Emacs 19 on a Unix box, or in the Harlequin programming environment on a Unix box). We can then dynamically link code into the running system on our parallel processors.

There are two remaining problems: (1) how to maintain super-critical real-time performance when using a Lisp system without hard ephemeral garbage collection, and (2) how to get the level of within-processor parallelism described earlier.

The structure of L's implementation is such that multiple independent heaps can be maintained within a single address space, sharing all the code and data segments of the Lisp proper. In this way super-critical portions of a system can be placed in a heap where no consing is occurring, and hence there is no possibility that they will be blocked by garbage collection.

The Behavior Language (Brooks 1990a) is an example of a compiler which builds special purpose static schedulers for low overhead parallelism. Each process ran until blocked, and the syntax of the language forced there always to be a blocking condition, so there was no need for pre-emptive scheduling. Additionally, the syntax and semantics of the language guaranteed that zero stack context needed to be saved when a blocking condition was reached. We have built a new scheduling system with L to address similar issues in this project. To fit with the philosophy of the rest of the system it has a dynamic scheduler, so that new processes can be added and deleted as a user types to the Lisp listener of a particular processor. Reasonably straightforward data structures keep these costs to manageable levels. It was rather straightforward to build a phase into the L compiler which recognizes the situations described above, and thus to implement a set of macros which provide a language abstraction on top of Lisp with all the functionality of the Behavior Language while additionally allowing dynamic scheduling. A pre-emptive scheduler is used in addition, as it would be difficult to enforce a computation time limit syntactically when Common Lisp is essentially available to the programmer. At the very least, the case of the pre-emptive scheduler having to strike down a process is useful as a safety device, and acts as a debugging tool for the user to identify time critical computations which are stressing the bounded computation style of writing. In other cases static analysis is able to determine the maximum stack requirements for a particular process, and so heap allocated stacks are usable.
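The bounded-computation style described above, in which every process runs until it reaches a blocking condition and no stack context needs to be saved at the block point, can be sketched with Python generators. This is our own illustration of the idea, not the actual L macro package; all names (Scheduler, spawn, kill) are invented.

```python
# Sketch of cooperative "run until blocked" scheduling in the spirit of the
# Behavior Language. Generators play the role of processes: each `yield`
# marks a blocking condition. Names here are illustrative only.

class Scheduler:
    def __init__(self):
        self.processes = {}        # name -> generator

    def spawn(self, name, gen):    # dynamic: processes can be added at any time
        self.processes[name] = gen

    def kill(self, name):          # ... and deleted, e.g. from a Lisp listener
        self.processes.pop(name, None)

    def step(self):
        """Run every live process up to its next blocking condition."""
        for name, gen in list(self.processes.items()):
            try:
                next(gen)          # runs until the process blocks (yields)
            except StopIteration:
                self.kill(name)    # process finished

def blinker(log):
    while True:
        log.append('on')
        yield                      # blocking condition: wait for next cycle
        log.append('off')
        yield

log = []
sched = Scheduler()
sched.spawn('blink', blinker(log))
sched.step(); sched.step(); sched.step()
# log now records the alternating on/off steps
```

The analogy is loose (Python generators do save a small frame per process, which the BL compiler avoided entirely), but it captures why no pre-emption is needed when every process is syntactically guaranteed to block.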
The software system so far described is being used to implement crude forms of 'brain models', where computations will be organized in ways inspired by the sorts of anatomical divisions we see occurring in animal brains. Note that we are not building a model of a particular brain, but rather using a modularity inspired by such components as visual cortex, auditory cortex, etc., with further modularity within and across these components, e.g., a particular subsystem to implement the vestibulo-ocular response (VOR).

Thus besides on-processor parallelism we need to provide a modularity tool that packages processes into groups and limits data sharing between them. Each package resides on a single processor, but often processors host many such packages. A package that communicates with another package should be insulated at the syntax level from knowing whether the other package is on the same or a different processor. The communication medium between such packages will again be 1-deep buffers without queuing or receipt acknowledgment; any such acknowledgment will need to be implemented as a backward channel, much as we see throughout the cortex (Churchland & Sejnowski 1992). This packaging system can be implemented in Common Lisp as a macro package.

5.1.2 Computational Hardware. The computational model presented in the previous section is somewhat different from that usually assumed in high performance parallel computer applications. Typically (Cypher et al. 1993) there is a strong bias on system requirements from the sort of benchmarks that are used to evaluate performance. The standard benchmarks for modern high performance computation seem to be Fortran code for hydrodynamics, molecular simulations, or graphics rendering. We are proposing a very different application with very different requirements; in particular we require real-time response to a wide variety of external and internal events, we require good symbolic computation performance, we require only integer rather than high performance floating point operations (note vii), we require delivery of messages only to specific sites determined at program design time, rather than at run-time, and we require the ability to do very fast context switches because of the large number of parallel processes that we intend to run on each individual processor.

The fact that we do not need to support pointer references across the computational substrate means that we can rely on much simpler, and therefore higher performance, parallel computers than many other researchers — we do not have to worry about a consistent global memory, cache coherence, or arbitrary message routing. Since these are different requirements than those that are normally considered, we have to make some measurements with actual programs before we can make an intelligent off the shelf choice of computer hardware.

In order to answer some of these questions we have built a zero-th generation parallel computer. It is being built on a very low budget with off the shelf components wherever possible (a few fairly simple printed circuit boards need to be fabricated). The processors are 16 MHz Motorola 68332s on a standard board built by Vesta Technology. These plug 16 to a backplane. The backplane provides each processor with six communications ports (using the integrated timing processor unit to generate the required signals, along with special chip select and standard address and data lines) and a peripheral processor port. The communications ports are hand-wired with patch cables, building a fixed topology network. (The cables incorporate a single dual ported RAM (8 K by 16 bits) that itself includes hardware semaphores writable and readable by the two processors being connected.)

Background processes running on the 68332 operating system provide sustained rate transfers of 60 Hz packets of 4 Kbytes on each port, with higher peak rates if desired. These sustained rates do consume processing cycles from the 68332. On non-vision processors we expect much lower rates will be needed, and even on vision processors we can probably reduce the packet frequency to around 15 Hz. Each processor has an operating system, L, and the dynamic scheduler residing in 1M of EPROM. There is 1M of RAM for program, stack and heap space. Up to 256 processors can be connected together.

Up to 16 backplanes can be connected to a single front end processor (FEP) via a shared 500 K baud serial line to a SCSI emulator. A large network of 68332s can span many FEPs if we choose to extend the construction of this zero-th prototype. Initially we use a Macintosh as a FEP. Software written in Macintosh Common Lisp on the FEP provides disk I/O services to the 68332s, monitors status and health packets from them, and provides the user with a Lisp listener to any processor they might choose.

The zero-th version uses the standard Motorola SPI (serial peripheral interface) to communicate with up to 16 Motorola 6811 processors per 68332. These are single chip processors with onboard EEPROM (2 K bytes) and RAM (256 bytes), including a timer system, an SPI interface, and 8 channels of analog to digital conversion. We are building a small custom board for this processor that includes opto-isolated motor drivers and some standard analog support for sensors.
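As a rough consistency check on the communication budget quoted above, the sustained rate of 60 Hz packets of 4 Kbytes works out to 240 Kbytes per second per port, and a subsampled 128 by 128 8-bit image (see 5.2.1) fits in exactly four packets, giving roughly 15 frames per second. The arithmetic below is ours, not the authors':

```python
# Back-of-envelope check of the zero-th generation machine's port bandwidth,
# using figures quoted in the text (taking 1 Kbyte = 1024 bytes).

PACKET_BYTES = 4 * 1024      # 4 Kbyte packets
PACKET_HZ = 60               # sustained packets per second per port

port_bytes_per_s = PACKET_BYTES * PACKET_HZ        # bytes/s on one port

# A subsampled 128 x 128 8-bit image is 16 Kbytes, i.e. four packets,
# so one port sustains about 15 such frames per second.
image_bytes = 128 * 128
packets_per_image = image_bytes // PACKET_BYTES
frames_per_s = PACKET_HZ // packets_per_image

print(port_bytes_per_s, packets_per_image, frames_per_s)
```

This matches the reduced vision packet frequency of around 15 Hz mentioned above: moving full subsampled frames between vision processors just saturates one port's sustained rate.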
There are certain developments on the horizon within the MIT Artificial Intelligence Lab which we expect to capitalize upon in order to dramatically upgrade our computational systems for early vision, and hence the resolution at which we can afford to process images in real time. The first of these will be a somewhat similar distributed processing system based on the much higher performance Texas Instruments C40, which comes with built in support for fixed topology message passing. In late '95 we expect to be able to make use of the Abacus system, a bit level reconfigurable vision front-end processor being built under ARPA sponsorship which promises Tera-op performance on 16 bit fixed precision operands. Both these systems will be simply integrable with our zero-th order parallel processor via the standard dual-ported RAM protocol that we are using.

5.2 Bodies

As with the computational hardware, we are also currently engaged in building a zero-th generation body for early experimentation and design refinement towards more serious constructions within the scope of this project. We are presently limited by budgetary constraints to building an immobile, armless, deaf torso with only black and white vision.

In the following subsections we outline the constraints and requirements on a full scale humanoid body, and also include, where relevant, details of our zero-th level prototype.

5.2.1 Eyes. There has been quite a lot of recent work on animate vision using saccading stereo cameras, most notably at Rochester (Ballard 1989, Coombs 1992), but also more recently at many other institutions, such as Oxford University.

The humanoid needs a head with high mechanical performance eyeballs and foveated vision if it is to be able to participate in the world with people in a natural way. Even our earliest heads will include two eyes, with foveated vision, able to pan and tilt as a unit, and with independent saccading ability (three saccades per second) and vergence control of the eyes. Fundamental vision based behaviors will include a visually calibrated vestibulo-ocular reflex, smooth pursuit, visually calibrated saccades, and object centered foveal relative depth stereo. Independent visual systems will provide peripheral and foveal motion cues, color discrimination, human face pop-outs, and eventually face recognition. Over the course of the project, object recognition based on "representations" from body schemas and manipulation interactions will be developed. This is completely different from any conventional object recognition scheme, and cannot be attempted without an integrated vision and manipulation environment such as we propose.

The eyeballs need to be able to saccade up to about three times per second, stabilizing for 250 ms at each stop. Additionally the yaw axes should be controllable for vergence to a common point, and drivable in a manner appropriate for smooth pursuit and for image stabilization as part of a vestibulo-ocular response (VOR) to head movement. The eyeballs do not need to be force or torque controlled, but they do need good fast position and velocity control. We have previously built a single eyeball, A-eye, on which we implemented a model of VOR, the opto-kinetic response (OKR), and saccades, all of which used dynamic visually based calibration (Viola 1990).

Other active vision systems have had both eyeballs mounted on a single tilt axis. We will begin experiments with separate tilt axes, but if we find that relative tilt motion is not very useful we will back off from this requirement in later versions of the head.

The cameras need to cover a wide field of view, preferably close to 180 degrees, while also giving a foveated central region. Ideally the images should be RGB (rather than the very poor color signal of standard NTSC). A resolution of 512 by 512 at both the coarse and fine scale is desirable.

Our zero-th version of the cameras is black and white only. Each eyeball consists of two small lightweight cameras mounted with parallel axes. One gives a 115 degree field of view and the other gives a 20 degree foveated region. In order to handle the images in real time in our zero-th parallel processor we subsample the images to 128 by 128, which is much smaller than the ideal.

Later versions of the head will have full RGB color cameras, wider angles for the peripheral vision, much finer grain sampling of the images, and perhaps a colinear optics set up using optical fiber cables and beam splitters. With more sophisticated high speed processing available we will also be able to do experiments with log-polar image representations.

5.2.2 Ears, Voice. Almost no work has been done on sound understanding, as distinct from speech understanding. This project will start on sound understanding to provide a much more solid processing base for later work on speech input. Early
behavior layers will spatially correlate noises with visual events, and spatial registration will be continuously self calibrating. Efforts will concentrate on using this physical cross-correlation as a basis for reliably pulling out interesting events from background noise, and mimicking the cocktail party effect of being able to focus attention on particular sound sources. Visual correlation with face pop-outs, etc., will then be used to extract human sound streams. Work will proceed on using these sound streams to mimic infants' abilities to ignore language dependent irrelevances. By the time we get to elementary speech we will therefore have a system able to work in noisy environments and accustomed to multiple speakers with varying accents.

Sound perception will consist of four high quality microphones. (Although the human head uses only two auditory inputs, it relies heavily on the shape of the external ear in determining the vertical component of a directional sound source.) Sound generation will be accomplished using a single speaker.

Sound is critical for several aspects of the robot's activity. First, sound provides immediate feedback for motor manipulation and positioning. Babies learn to find and use their hands by batting at and manipulating toys that jingle and rattle. Adults use such cues as contact noises — the sound of an object hitting the table — to provide feedback to motor systems. Second, sound aids in socialization even before the emergence of language. Patterns such as turn-taking and mimicry are critical parts of children's development, and adults use guttural gestures to express attitudes and other conversational cues. Certain signal tones indicate encouragement or disapproval to all ages and stages of development. Finally, even preverbal children use sound effectively to convey intent; until our robots develop true language, other sounds will necessarily be a major source of communication.

5.2.3 Torsos. In order for the humanoid to be able to participate in the same sorts of body metaphors as are used by humans, it needs to have a symmetric human-like torso. It needs to be able to experience imbalance, feel symmetry, learn to coordinate head and body motion for stable vision, and be able to experience relief when it relaxes its body. Additionally the torso must be able to support the head, the arms, and any objects they grasp.

The torsos we build will initially have a three degree of freedom hip, with the axes passing through a common point, capable of leaning and twisting to any position in about three seconds — somewhat slower than a human. The neck will also have three degrees of freedom, with the axes passing through a common point which will also lie along the spinal axis of the body. The head will be capable of yawing at 90 degrees per second — less than peak human speed, but well within the range of natural human motions. As we build later versions we expect to increase these performance figures to more closely match the abilities of a human.

Apart from the normal sorts of kinematic sensors, the torso needs a number of additional sensors specifically aimed at providing input fodder for the development of bodily metaphors. In particular, strain gauges on the spine can give the system a feel for its posture and the symmetry of a particular configuration, plus a little information about any additional load the torso might bear when an arm picks up something heavy. Heat sensors on the motors and the motor drivers will give feedback as to how much work has been done by the body recently, and current sensors on the motors will give an indication of how hard the system is working instantaneously.

Our zero-th level torso is roughly 18 inches from the base of the spine to the base of the neck. This corresponds to a smallish adult. It uses DC motors with built in gearboxes. The main concern we have is how quiet it will be, as we do not want the sound perception system to be overwhelmed by body noise.

Later versions of the torsos will have touch sensors integrated around the body, will have more compliant motion, will be quieter, and will need to provide better cabling ducts so that the cables can all feed out through a lower body outlet.

5.2.4 Arms. The eventual manipulator system will be a compliant multi-degree of freedom arm with a rather simple hand. (A better hand would be nice, but hand research is not yet at a point where we can get an interesting, easy-to-use, off-the-shelf hand.) The arm will be safe enough that humans can interact with it, handing it things and taking things from it. The arm will be compliant enough that the system will be able to explore its own body — for instance, by touching its head system — so that it will be able to develop its own body metaphors.

We want the arms to be very compliant yet still able to lift weights of a few pounds so that they can interact with human artifacts in interesting ways. Additionally we want the arms to have redundant degrees of freedom (rather than the six seen in a standard commercial robot arm), so that in many circumstances we can 'burn' some of those degrees of freedom in order to align a single joint so that the joint coordinates and task coordinates very nearly
match. This will greatly simplify control of manipulation. It is the sort of thing people do all the time: for example, when bracing an elbow or the base of the palm (or even their middle and last two fingers) on a table to stabilize the hand during some delicate (or not so delicate) manipulation. Our zero-th version arms have six degrees of freedom and a novel spring-based transmission system to introduce passive compliance at every joint.

The hands in the first instances will be quite simple: devices that can grasp from above, relying heavily on mechanical compliance — they may have as few as one degree of control freedom.

More sophisticated, however, will be the sensing on the arms and hands. We will use forms of conductive rubber to get a sense of touch over the surface of the arm, so that it can detect (compliant) collisions it might participate in. As with the torso there will be liberal use of strain gauges, heat sensors and current sensors so that the system can have a 'feel' for how its arms are being used and how they are performing.

We also expect to move towards a more sophisticated type of hand in later years of this project. Initially, unfortunately, we will be forced to use motions of the upper joints of the arm for fine manipulation tasks. More sophisticated hands will allow us to use finger motions, with much lower inertias, to carry out these tasks.

6 Development Plan

We plan on modeling the brain at a level above the neural level, but below what would normally be thought of as the cognitive level.

We understand abstraction well enough to know how to engineer a system that has similar properties and connections to the human brain without having to model its detailed local wiring. At the same time it is clear from the literature that there is no agreement on how things are really organized computationally at higher or modular levels, or indeed whether it even makes sense to talk about modules of the brain (e.g., short term memory and long term memory) as generative structures.

Nevertheless, we expect to be guided, or one might say inspired, by what is known about the high level connectivity within the human brain (although admittedly much of our knowledge actually comes from macaques and other primates and is only extrapolated to be true of humans).
Brooks, R.A. 1990a. The Behavior Language User's Guide, Memo 1227, Massachusetts Institute of Technology Artificial Intelligence Lab, Cambridge, Massachusetts.

Brooks, R.A. 1990b. Elephants Don't Play Chess, in P. Maes, ed., Designing Autonomous Agents: Theory and Practice from Biology to Engineering and Back, MIT Press: Cambridge, Massachusetts, pp. 3-15.

Brooks, R.A. 1991a. Intelligence Without Reason. In Proceedings of the 1991 International Joint Conference on Artificial Intelligence, pp. 569-595.

Brooks, R.A. 1991b. New Approaches to Robotics, Science 253, 1227-1232.

Brooks, R.A. 1993. L: A Subset of Common Lisp, Technical report, Massachusetts Institute of Technology Artificial Intelligence Lab.

Brooks, R.A., Gabriel, R.P., and Steele Jr., G.L. 1982. An Optimizing Compiler for Lexically Scoped Lisp. In Proceedings of the 1982 Symposium on Compiler Construction, ACM SIGPLAN, Boston, Massachusetts, pp. 261-275. Published as ACM SIGPLAN Notices 17, 6 (June 1982).

Brooks, R.A., Posner, D.B., McDonald, J.L., White, J.L., Benson, E., and Gabriel, R.P. 1986. Design of An Optimizing Dynamically Retargetable Compiler for Common Lisp. In Proceedings of the 1986 ACM Symposium on Lisp and Functional Programming, Cambridge, Massachusetts, pp. 67-85.

Caudill, M. 1992. In Our Own Image: Building An Artificial Person, Oxford University Press: New York, New York.

Churchland, P.S. and Sejnowski, T.J. 1992. The Computational Brain, MIT Press: Cambridge, Massachusetts.

Connell, J.H. 1987. Creature Building with the Subsumption Architecture. In Proceedings of the International Joint Conference on Artificial Intelligence, Milan, Italy, pp. 1124-1126.

Connell, J.H. 1990. Minimalist Mobile Robotics: A Colony-style Architecture for a Mobile Robot, Academic Press: Cambridge, Massachusetts. Also MIT TR-1151.

Coombs, D.J. 1992. Real-time Gaze Holding in Binocular Robot Vision, PhD thesis, University of Rochester, Department of CS, Rochester, New York.

Crick, F. and Jones, E. 1993. Backwardness of human neuroanatomy, Nature 361, 109-110.

Cypher, R., Ho, A., Konstantinidou, S., and Messina, P. 1993. Architectural Requirements of Parallel Scientific Applications with Explicit Communication. In IEEE Proceedings of the 20th International Symposium on Computer Architecture, San Diego, California, pp. 2-13.

Damasio, H. and Damasio, A.R. 1989. Lesion Analysis in Neuropsychology, Oxford University Press: New York, New York.

Dennett, D.C. 1991. Consciousness Explained, Little, Brown: Boston, Massachusetts.

Dennett, D.C. and Kinsbourne, M. 1992. Time and the Observer: The Where and When of Consciousness in the Brain, Behavioral and Brain Sciences 15, 183-247.

Drescher, G.L. 1991. Made-Up Minds: A Constructivist Approach to Artificial Intelligence, MIT Press: Cambridge, Massachusetts.

Edelman, G.M. 1987. Neural Darwinism: The Theory of Neuronal Group Selection, Basic Books: New York, New York.

Edelman, G.M. 1989. The Remembered Present: A Biological Theory of Consciousness, Basic Books: New York, New York.

Fendrich, R., Wessinger, C.M., and Gazzaniga, M.S. 1992. Residual Vision in a Scotoma: Implications for Blindsight, Science 258, 1489-1491.

Ferrell, C. 1993. Robust Agent Control of an Autonomous Robot with Many Sensors and Actuators, Master's thesis, MIT, Department of EECS, Cambridge, Massachusetts.

Fodor, J.A. 1983. The Modularity of Mind, Bradford Books, MIT Press: Cambridge, Massachusetts.

Harris, C.L. 1991. Parallel Distributed Processing Models and Metaphors for Language and Development, PhD thesis, University of California, Department of Cognitive Science, San Diego, California.

Haugeland, J. 1985. Artificial Intelligence: The Very Idea, MIT Press: Cambridge, Massachusetts.

Hoare, C.A.R. 1985. Communicating Sequential Processes, Prentice-Hall: Englewood Cliffs, New Jersey.

Hobbs, J. and Moore, R., eds. 1985. Formal Theories of the Commonsense World, Ablex Publishing Co.: Norwood, New Jersey.

Horswill, I.D. 1993. Specialization of Perceptual Processes, PhD thesis, MIT, Department of EECS, Cambridge, Massachusetts.

Horswill, I.D. and Brooks, R.A. 1988. Situated Vision in a Dynamic World: Chasing Objects. In Proceedings of the Seventh Annual Meeting of the American Association for Artificial Intelligence, St. Paul, Minnesota, pp. 796-800.

Johnson, M. 1987. The Body In The Mind, University of Chicago Press: Chicago, Illinois.

Kinsbourne, M. 1987. Mechanisms of unilateral neglect, in M. Jeannerod, ed., Neurophysiological and Neuropsychological Aspects of Spatial Neglect, Elsevier: North Holland.

Kinsbourne, M. 1988. Integrated field theory of consciousness, in A. Marcel and E. Bisiach, eds, The Concept of Consciousness in Contemporary Science, Oxford University Press: London, England.

Kosslyn, S. 1994. Image and brain: The resolution of the imagery debate, MIT Press: Cambridge, Massachusetts.

Kuipers, B. and Byun, Y.-T. 1991. A robot exploration and mapping strategy based on a semantic hierarchy of spatial representations, Robotics and Autonomous Systems 8, 47-63.

Lakoff, G. 1987. Women, Fire, and Dangerous Things, University of Chicago Press: Chicago, Illinois.

Lakoff, G. and Johnson, M. 1980. Metaphors We Live By, University of Chicago Press: Chicago, Illinois.

Langacker, R.W. 1987. Foundations of Cognitive Grammar, Volume 1, Stanford University Press: Palo Alto, California.

Lempert, H. and Kinsbourne, M. 1985. Possible origin of speech in selective orienting, Psychological Bulletin 97, 62-73.

Lisberger, S.G. 1988. The neural basis for motor learning in the vestibulo-ocular reflex in monkeys, Trends in Neuroscience 11, 147-152.

Marr, D. 1982. Vision, W.H. Freeman: San Francisco, California.

Mataric, M.J. 1992a. Designing Emergent Behaviors: From Local Interactions to Collective Intelligence. In Proceedings of the Second International Conference on Simulation of Adaptive Behavior, MIT Press: Cambridge, Massachusetts, pp. 432-441.

Mataric, M.J. 1992b. Integration of Representation Into Goal-Driven Behavior-Based Robots, IEEE Journal of Robotics and Automation 8(3), 304-312.

McCarthy, R.A. and Warrington, E.K. 1988. Evidence for Modality-Specific Systems in the Brain, Nature 334, 428-430.

McCarthy, R.A. and Warrington, E.K. 1990. Cognitive Neuropsychology, Academic Press: San Diego, California.

Minsky, M. 1986. The Society of Mind, Simon and Schuster: New York, New York.

Minsky, M. and Papert, S. 1969. Perceptrons, MIT Press: Cambridge, Massachusetts.

Newcombe, F. and Ratcliff, G. 1989. Disorders of Visuospatial Analysis. In Handbook of Neuropsychology, Volume 2, Elsevier: New York, New York.

Teitelbaum, P., Pellis, V.C., and Pellis, S.M. 1990. Can Allied Reflexes Promote the Integration of a Robot's Behavior. In Proceedings of the First International Conference on Simulation of Adaptive Behavior, MIT Press: Cambridge, Massachusetts, pp. 97-104.

Notes

i. Different sources cite 1947 and 1948 as the time of writing, but it was not published until long after his death.

ii. A more egregious version of this is (Penrose 1989), who not only makes the same Turing-Gödel error, but then, in a desperate attempt to find the essence of mind by applying the standard methodology of physics, namely to find a simplifying underlying principle, resorts to an almost mystical reliance on quantum mechanics.