
Decomposing the Will

PHILOSOPHY OF MIND
Series Editor
David J. Chalmers, Australian National University and
New York University

Self Expression
Owen Flanagan

Deconstructing the Mind
Stephen Stich

The Conscious Mind
David J. Chalmers

Minds and Bodies
Colin McGinn

What’s Within?
Fiona Cowie

The Human Animal
Eric T. Olson

Dreaming Souls
Owen Flanagan

Consciousness and Cognition
Michael Thau

Thinking Without Words
José Luis Bermúdez

Identifying the Mind
U. T. Place (author), George Graham,
Elizabeth R. Valentine (editors)

Purple Haze
Joseph Levine

Three Faces of Desire
Timothy Schroeder

A Place for Consciousness
Gregg Rosenberg

Ignorance and Imagination
Daniel Stoljar

Simulating Minds
Alvin I. Goldman

Gut Reactions
Jesse J. Prinz

Phenomenal Concepts and Phenomenal Knowledge
Torin Alter, Sven Walter (editors)

Beyond Reduction
Steven Horst

What Are We?
Eric T. Olson

Supersizing the Mind
Andy Clark

Perception, Hallucination, and Illusion
William Fish

Cognitive Systems and the Extended Mind
Robert D. Rupert

The Character of Consciousness
David J. Chalmers

Perceiving the World
Bence Nanay (editor)

The Senses
Fiona Macpherson

The Contents of Visual Experience
Susanna Siegel

Attention Is Cognitive Unison
Christopher Mole

Consciousness and the Prospects of Physicalism
Derk Pereboom

Introspection and Consciousness
Declan Smithies and Daniel Stoljar (editors)

Decomposing the Will
Andy Clark, Julian Kiverstein, and Tillmann Vierkant (editors)
Decomposing the Will

EDITED BY
ANDY CLARK, JULIAN KIVERSTEIN,
AND TILLMANN VIERKANT

Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide.

Oxford New York


Auckland Cape Town Dar es Salaam Hong Kong Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto

With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam

Oxford is a registered trademark of Oxford University Press in the UK and certain other
countries.

Published in the United States of America by


Oxford University Press
198 Madison Avenue, New York, NY 10016

© Oxford University Press 2013

All rights reserved. No part of this publication may be reproduced, stored in a


retrieval system, or transmitted, in any form or by any means, without the prior
permission in writing of Oxford University Press, or as expressly permitted by law,
by license, or under terms agreed with the appropriate reproduction rights organization.
Inquiries concerning reproduction outside the scope of the above should be sent to the
Rights Department, Oxford University Press, at the address above.

You must not circulate this work in any other form


and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data


Decomposing the will / edited by Andy Clark, Julian Kiverstein, and Tillmann Vierkant.
p. cm.—(Philosophy of mind)
ISBN 978–0–19–974699–6 (hardcover : alk. paper)—ISBN 978–0–19–987687–7 (e-book)
1. Free will and determinism. I. Clark, Andy, 1957– II. Kiverstein, Julian. III. Vierkant, Till.
BJ1461.D37 2013
128′.3—dc23
2012022839

ISBN 978–0–19–974699–6
ISBN 978–0–19–987687–7

9 8 7 6 5 4 3 2 1
Printed in the United States of America
on acid-free paper
CONTENTS

Contributors vii

1. Decomposing the Will: Meeting the Zombie Challenge 1
Tillmann Vierkant, Julian Kiverstein, and Andy Clark

PART ONE The Zombie Challenge

2. The Neuroscience of Volition 33
Adina L. Roskies

3. Beyond Libet: Long-Term Prediction of Free Choices from Neuroimaging Signals 60
John-Dylan Haynes

4. Vetoing and Consciousness 73
Alfred R. Mele

5. From Determinism to Resignation; and How to Stop It 87
Richard Holton

PART TWO The Sense of Agency

6. From the Fact to the Sense of Agency 103
Manos Tsakiris and Aikaterini Fotopoulou

7. Ambiguity in the Sense of Agency 118
Shaun Gallagher

8. There’s Nothing Like Being Free: Default Dispositions, Judgments of Freedom, and the Phenomenology of Coercion 136
Fabio Paglieri

9. Agency as a Marker of Consciousness 160
Tim Bayne

PART THREE The Function of Conscious Control: Conflict Resolution, Emotion, and Mental Actions

10. Voluntary Action and the Three Forms of Binding in the Brain 183
Ezequiel Morsella, Tara C. Dennehy, and John A. Bargh

11. Emotion Regulation and Free Will 199
Nico H. Frijda

12. Action Control by Implementation Intentions: The Role of Discrete Emotions 221
Sam J. Maglio, Peter M. Gollwitzer, and Gabriele Oettingen

13. Mental Action and the Threat of Automaticity 244
Wayne Wu

14. Mental Acts as Natural Kinds 262
Joëlle Proust

PART FOUR Decomposed Accounts of the Will

15. Managerial Control and Free Mental Agency 283
Tillmann Vierkant

16. Recomposing the Will: Distributed Motivation and Computer-Mediated Extrospection 298
Lars Hall, Petter Johansson, and David de Léon

17. Situationism and Moral Responsibility: Free Will in Fragments 325
Manuel Vargas

Index 351
CONTRIBUTORS

John A. Bargh is a professor of psychology and cognitive science at Yale University.


After his doctoral research at the University of Michigan (PhD, 1981), he spent 23
years as a faculty member at New York University before moving to Yale in 2003.
Professor Bargh holds an honorary doctorate from the University of Nijmegen and
is a member of the American Academy of Arts and Sciences. His research has always
focused on automatic or unconscious influences on the higher mental processes,
such as goal pursuits and social behavior.
Tim Bayne has taught at the University of Canterbury, Macquarie University, and
the University of Oxford. He is currently professor of philosophy at the University of
Manchester. He is the author of The Unity of Consciousness (2010) and the editor of a
number of volumes in the philosophy of mind, the most recent of which is Cognitive
Phenomenology (2012). He is currently writing a textbook on the philosophy of mind.
David de Léon spent a number of agreeable years with the Lund University
Cognitive Science group, grappling with issues relating to the relations between
artifacts and cognition, as well as teaching user interface design. After receiving his
PhD, David left academia for the mobile phone industry, where he has been vari-
ously inventing, designing mobile user interfaces, and leading designers since 2004.
He is currently global interaction director at Sony Mobile Communications.
Tara C. Dennehy is a doctoral student at the University of Massachusetts, Amherst.
Her research interests lie at the intersection of cognitive and social psychology, with
a focus on examining basic processes and their implications for stereotyping, preju-
dice, and social inequity.
Aikaterini (Katerina) Fotopoulou is a senior lecturer of cognitive neuroscience
and neuropsychology at the Institute of Psychiatry, and a Research Associate at the
Institute of Cognitive Neuroscience, University College London. Dr. Fotopoulou’s
team has been conducting research in the cognitive and social neuroscience of body
representation in healthy volunteers, stroke patients, and patients with conversion
disorders. She is the coeditor of the volume From the Couch to the Lab: Trends in
Psychodynamic Neuroscience (2012).
Nico H. Frijda was a professor extraordinarius in the psychology of emotion,
Amsterdam University, 1992–2002. He was also the chairman, Netherlands
Psychonomics Foundation, 1965–1975; the executive officer, International Society
for Research on Emotion (ISRE), 1987–1990; and the director, Institute for
Emotion and Motivation, Amsterdam University, 1988–1992. He also is a member,
Royal Netherlands Academy of Sciences, and a member of the American Academy
of Arts and Sciences. He has also received a knighthood in the Royal Order of the
Netherlands Lion. Major works include The Understanding of Facial Expression of
Emotion (1956); The Emotions (1986); The Laws of Emotion (2007).
Shaun Gallagher is the Lillian and Morrie Moss Professor of Excellence in
Philosophy at the University of Memphis. He has a secondary appointment at
the University of Hertfordshire (UK) and is honorary professor of philosophy at
the University of Copenhagen (Denmark). He has held visiting positions at the
Cognition and Brain Science MRC Unit at the University of Cambridge, the École
Normale Supérieure in Lyon, and the Centre de Recherche en Epistémologie
Appliquée (CREA), Paris. He is currently a Humboldt Foundation Anneliese
Maier Research Fellow (2012–2017). His publications include How the Body
Shapes the Mind (2005); The Phenomenological Mind (with Dan Zahavi 2008); and
as editor, The Oxford Handbook of the Self (2011). He is editor in chief of the journal
Phenomenology and the Cognitive Sciences.
Peter M. Gollwitzer is a professor of psychology at the Department of Psychology
of New York University. Throughout his academic career, he has developed various
models of action control: the Theory of Symbolic Self-Completion (with Robert
A. Wicklund); the Mindset Model of Action Phases (with Heinz Heckhausen);
the Auto-Motive Model of Automatic Goal Striving (with John A. Bargh); and the
Theory of Intentional Action Control (that makes a distinction between goal inten-
tions and implementation intentions). In all these models, various mechanisms
of behavior change are delineated, and respective moderators and mediators are
distilled.
Lars Hall is a researcher at Lund University Cognitive Science, where his work cen-
ters on the relationship between decision making and self-knowledge. He has com-
pleted a postdoc in experimental social psychology at Harvard University, and his
doctoral work investigated the use of sensor and computing technology to augment
human metacognition.
John-Dylan Haynes has been professor for “Theory and Analysis of Large-Scale
Brain Signals” at the Bernstein Center of the Charité Berlin since 2006. Since 2009,
he has also been the director of the Berlin Center for Advanced Neuroimaging.
He obtained his PhD on neuroimaging of visual awareness from the University of
Bremen in 2003, after which he spent time at the Plymouth Institute of Neuroscience,
the Institute of Cognitive Neuroscience in London, and the Max Planck Institute
for Human Cognitive and Brain Sciences in Leipzig.
Richard Holton is professor of philosophy at MIT, having previously taught at
Edinburgh, Australian National University, Sheffield, and Monash. He is the author
of Willing, Wanting, Waiting (2009) and of works mainly in moral psychology,
philosophy of law, and philosophy of language.
Petter Johansson completed his PhD at Lund University Cognitive Science in
2006. He was subsequently a postdoc at Tokyo University and at University College
London, and he is currently a Pro Futura Scientia Fellow at Lund University. His
research centers on self-knowledge, especially in relation to the phenome-
non of choice blindness.
Sam Maglio is an assistant professor of marketing in the Rotman School of
Management at the University of Toronto. He received his PhD from New York
University and his bachelor’s degree from Stanford University.
Alfred R. Mele is the William H. and Lucyle T. Werkmeister Professor of Philosophy
at Florida State University and director of the Big Questions in Free Will Project
(2010–2013). He is the author of Irrationality (1987); Springs of Action (1992);
Autonomous Agents (1995); Self-Deception Unmasked (2001); Motivation and Agency
(2003); Free Will and Luck (2006); Effective Intentions (2009); and Backsliding
(2012). He also is the editor or coeditor of Mental Causation (1993); The Philosophy
of Action (1997); The Oxford Handbook of Rationality (2004); Rationality and the
Good (2007); and Free Will and Consciousness: How Might They Work? (2010).
Ezequiel Morsella conducted his doctoral research at Columbia University and his
postdoctoral training at Yale University. He is now a faculty member at San Francisco
State University and the Department of Neurology at the University of California,
San Francisco. His research on the brain’s conscious and unconscious processes
involved in action production has appeared in journals such as Psychological Review.
He served as chief editor of The Oxford Handbook of Human Action.
Gabriele Oettingen is professor of psychology at New York University. In her
research, she is exploring how conscious and nonconscious processes interact in
influencing people’s control of thought, feelings, and action. She distinguishes
between self-regulatory processes involving fantasies versus expectations, and their
differential short-term and long-term influences on information processing, effort,
and successful performance. She also created the model of mental contrasting that
specifies which conscious and nonconscious processes collaborate in turning
wishes and fantasies into binding goals and plans, and eventually goal attainment.
Fabio Paglieri is a researcher at the Institute for Cognitive Sciences and Technolog-
ies of the Italian National Research Council (ISTC-CNR) in Rome. He is a member
of the Goal-Oriented Agents Lab (GOAL) and works on decision making, inten-
tional action, goal-directed behavior, self-control, genesis and dynamics of mental
states, and argumentation theory. His research combines theoretical and empirical
approaches to the study of cognition and behavior, such as conceptual analysis, logi-
cal formalization, game theory, behavioral experiments, and computer-based simula-
tions. He is editor in chief of Topoi: An International Review of Philosophy, and Sistemi
Intelligenti; he is a member of the editorial board of Argument & Computation, and of
the book series Studies in Logic and Argumentation (College Publications, London).
Joëlle Proust presently works at Institut Jean-Nicod as a director of research for
the Pierre-Gilles de Gennes Foundation for Research (École Normale Supérieure,
Paris). As a member of CNRS (the National Center for Scientific Research) from
1986 to 2012, she conducted research in the domains of the history and philosophy
of logic and the philosophy of mind. For the past 10 years, she has concentrated on
metacognition, that is, epistemic self-evaluation. She has studied, in particular, its
evolutionary and conceptual relations with mindreading (ESF-EUROCORE pro-
ject, 2006–2009), and the variety of epistemic norms to which evaluators from dif-
ferent cultures are implicitly sensitive (ERC senior grant, 2011–2016).
Adina L. Roskies is an associate professor of philosophy at Dartmouth College. She
has PhDs in neuroscience and cognitive science and in philosophy. Previous posi-
tions include a postdoctoral fellowship in cognitive neuroimaging at Washington
University, and a job as senior editor of the journal Neuron. Dr. Roskies’s philo-
sophical research interests lie at the intersection of philosophy and neuroscience,
and include philosophy of mind, philosophy of science, and ethics. She was a
member of the McDonnell Project in Neurophilosophy and the MacArthur Law
and Neuroscience Project. She has published many articles in both philosophy and
the neurosciences, among which are several devoted to exploring and articulat-
ing issues in neuroethics. Recent awards include the William James Prize and the
Stanton Prize, awarded by the Society of Philosophy and Psychology; a Mellon
New Directions Fellowship to pursue her interest in neurolaw; and the Laurance S.
Rockefeller Visiting Faculty Fellowship from the Princeton University Center for
Human Values. She is coeditor of a forthcoming primer for judges and lawyers on
law and neuroscience.
Manos Tsakiris is reader in neuropsychology at the Department of Psychology,
Royal Holloway, University of London. His research focuses on the neurocogni-
tive mechanisms that shape the experience of embodiment and self-identity using
a wide range of research methods, from psychometrics and psychophysics to func-
tional neuroimaging.
Manuel Vargas is professor of philosophy at the University of San Francisco. He
is the author of Building Better Beings: A Theory of Moral Responsibility (2013); a
coauthor of Four Views on Free Will (2007); and coeditor of Rational and Social
Agency (in progress). Vargas has held fellowships from the National Endowment
for the Humanities, the Radcliffe Institute for Advanced Studies, and the Stanford
Humanities Center. He has also been a fellow at the McCoy Family Center for Ethics
at Stanford, and held visiting appointments at the University of California, Berkeley,
and at the California Institute of Technology. Vargas’s main philosophical interests
include moral psychology, philosophy of agency, philosophy of law, and the history
of Latin American philosophy.
Wayne Wu is associate professor in and associate director of the Center for the
Neural Basis of Cognition at Carnegie Mellon University. He graduated with a
PhD in philosophy from the University of California, Berkeley, working with John
Searle and R. Jay Wallace. His current research focuses on attention, agency, schizo-
phrenia, spatial perception, and how empirical work can better engage traditional
philosophical questions about the mind.
1

Decomposing the Will


Meeting the Zombie Challenge

TILLMANN VIERKANT, JULIAN KIVERSTEIN,
AND ANDY CLARK

The belief in free will is firmly entrenched in our folk understanding of the mind
and among the most deep-rooted intuitions of Western culture. The intuition that
humans can decide autonomously is absolutely central to many of our social institu-
tions from criminal responsibility to the markets, from democracies to marriage. Yet
despite the central place free will occupies in our commonsense understanding of
human behavior, the nature of this very special human capacity remains shrouded
in mystery. It is widely agreed that the concept of free will has its origins in Christian
thought (Arendt 1971, pt. 2, chap. 1) and yet from the earliest days philosophers,
theologians, and lawyers have disagreed about the nature of this capacity. Many
have doubted its existence. Some have gone further, charging the very idea of free
will with incoherence: Nietzsche famously declared the idea of free will “the best
self-contradiction that has been conceived” (Nietzsche 1966, 21). The philosophi-
cal controversies surrounding free will roll on to this day (see, e.g., Kane 2002; Baer
et al. 2008), but the folk, for the most part, continue to employ the concept as if its
meaning were wholly transparent, and its reality beyond question.
Into this happy state of affairs the cognitive sciences dropped a bomb. All of a
sudden it seemed as if the folk’s happy state of ignorance might have come to an
end, and the questions surrounding the existence of free will might now be settled
once and for all using the experimental tools of the new sciences of the mind. The
truth the scientists told us they had uncovered was not pretty: some claimed to
have decisively demonstrated that there is no free will, and any belief to the con-
trary was a relic of an outdated and immature folk understanding we must leave
behind once and for all (Libet 1985; Roth 1994; Wegner 2002; Prinz 2003; Lau
et al. 2004; Soon et al. 2008).1 The scientists told us that we are the puppets of
our unconscious brain processes, which “decide” what we will do quite some time
before we know anything about it. Some of the scientists’ statements were clearly
intended as polemical provocations. However, they were also giving expression to
an increasingly well-founded skepticism that free will as it is ordinarily understood
might be a particularly powerful and striking illusion on a par with the magic tricks
of the best conjurers. Unsurprisingly, the response from the intelligentsia was swift
(see, e.g., Wolfe 1996; Horgan 2002; Brooks 2007). The findings also generated a
wave of interest within the academic world, with a number of excellent monographs
and collections of essays emerging, aimed at evaluating the scientists’ findings (see,
e.g., Pockett et al. 2006; Baer et al. 2008; Vierkant 2008; Mele 2010). However, so
far what has been largely missing from the debate is any serious attempt to use the
knowledge we undoubtedly gain from experimental work to enhance our under-
standing of human behavior and its causes.
This collection aims to fill that void; it is designed to work as a tool for anybody
who is interested in using the advances in the sciences of the mind to better under-
stand the hitherto mysterious capacity for free will. Our talk of decomposition in
the title of our collection should be understood as an explanatory aim: we propose
to take the conscious will, the original home of the homunculus, and to explore
some of the ways in which scientists think it can be broken down into simple unin-
telligent mechanisms whose interactions make us into free agents. In doing so, we
aim for a better understanding of the relationship between psychological mecha-
nisms and the experience we have of authoring and controlling our own actions.
We embrace the scientific advances as an opportunity for a deeper, scientifically
informed self-understanding. We leave it as an open question, to be settled through
a careful dialogue between philosophy and science, the extent to which such a self-
understanding will turn out to be consistent with the folk belief in free will.

CATEGORY MISTAKE?
Before we turn to the real substance of the book, we must pause to consider some
objections to such an explanatory enterprise. Many philosophers have pointed
out that the sciences might be well placed to help us better understand volition,
but there are important limits on what science can tell us about autonomy (see,
e.g., Roskies, this volume). Science might be able to help us with the “will” part of
“free will,” but there are limits to what it can tell us about the “free” part. Science
cannot really help us with the big question of what free will consists in because
this is a metaphysical question about whether or not our actions are fully caus-
ally determined by nature, and the implications this has for our status as autono-
mous agents. This is a debate that has been mainly played out between parties that
believe free will exists. Libertarians (who believe that determinism and freedom are
incompatible and that we are free) have argued with compatibilists (who believe
that freedom and determinism are fully compatible so that our actions can be caus-
ally determined and also free under the right conditions). The position of hard
determinism that many neuroscientists seem to favor also exists in the philosophical
debate but is much less prominent.2 Hard determinists agree with the libertarians
that freedom of the will and determinism are incompatible, but they agree with the
compatibilists that determinism is probably true, and therefore they conclude that
we do not have free will.
It is easy to see why the participants in the dispute between libertarians and
compatibilists were not particularly moved by the findings from cognitive science.
At best, neuroscience can help with the question whether or not human behavior
and decision making really are causally determined. (Even here you might won-
der whether that really is possible, given that scientific experiments seem to simply
assume the truth of determinism.) It is difficult to see how the sciences can help
settle the conceptual question of the compatibility or otherwise of freedom of the
will with determinism. How could advances within neuroscience possibly help us
with this metaphysical question (Roskies, this volume)?3
In our volume we start from the assumption that the cognitive sciences will not
be able to help us directly with the discussion between libertarians and compati-
bilists. However, a simple dismissal of the scientific findings as irrelevant to the free
will debate would be, to say the least, premature. Even though these findings might
not have any bearing on the truth or falsity of compatibilism or libertarianism, it
does not follow that the scientific debate about volition is unrelated to the philo-
sophical one about free will. To see why this is the case, suppose for a moment that
compatibilism is correct, and freedom of the will is consistent with a belief in deter-
minism. Obviously this does not mean that everything that is determined would
also be free. Compatibilists are keen to work out which kind of determinants of our
actions make us free and which ones do not. What does it take to be an agent and for
an instance of behavior to count as a free action (see, e.g., Velleman 1992)? If com-
patibilism were true, human autonomy would consist in behaviors being produced
by the right kinds of causal mechanisms, but which ones are those? Is it necessary
that we are able to evaluate action options rationally, for example? Must we be able
to initiate actions in a context-independent (or stimulus-independent) manner? Is
a capacity for metarepresentation, the ability to think about thinking, necessary for
free will? Is conscious control necessary if we are to be treated as responsible for an
action (see Roskies, this volume)? These are all important philosophical questions
about necessary conditions for human autonomy. There are, however, a number of
other questions concerning the nature of autonomy that are undeniably empirical
questions. It is an empirical question, for instance, whether we have any of the abili-
ties our best philosophical theory tells us are necessary for autonomy. Supposing
science tells us we do have these abilities, it is also an open empirical question to
what extent we actually deploy these abilities in everyday decision making. Suppose
we say that conscious control is necessary for free will; the question of whether we
are free or not will then depend on whether we have the capacity for conscious con-
trol, and the extent to which we exercise this capacity in making everyday decisions.
Hence, the question of whether we are free or not turns out to crucially depend on
a better scientific understanding of the machinery of our minds.
Now, take back our initial assumption about the truth of compatibilism, and
suppose libertarianism is the right answer to the determinism question. It seems
as if not a lot changes. If you want to know the conditions for libertarian human
autonomy you look at the compatibilist ones and add the crucial indeterminist
extra. Libertarians will make use of exactly the same or at least very similar ingre-
dients (rationality, action initiation, self-knowledge) to give the necessary condi-
tions an agent needs to fulfill, before adding “true” freedom of choice that turns
merely self-controlled behavior into truly free action.4 If we do not have or rarely
exercise self-knowledge and rationality in generating our behavior, this would seem
to spell trouble for a libertarian account of free will just as much as for a compati-
bilist account.
Still one might worry that an account of free will need not be interested in
questions about mechanisms. Philip Pettit (2007) has, for instance, argued that
neuroscience is only threatening to a notion of free will that is built upon an indi-
vidualist “act of will” picture.5 Such a picture might attempt to identify specific cog-
nitive mechanisms within the individual agent that causally enable “acts of will.”
According to Pettit, this would be to underestimate the social dimension of the free
will discourse. Whether somebody is free or not crucially depends on their ability
to justify their actions according to the rules set by society. The idea of free will
works because we take ourselves to be free agents and have knowledge of the social
norms that tell us what the behavior of a free and responsible agent should look like.
As we all strive to fulfill the normative ideal of a free agent, our behavior begins to
resemble the ideal more and more. This assimilation to the ideal is not dependent
on there being a prior independent mechanism in the agents that would lead to such
behavior naturally without normative guidance.
This move, familiar to the philosopher of mind from the discussion of folk
psychology (Dennett 1987), seems particularly appealing in the case of free will
and autonomy. Free will is such a loaded concept, a concept on which so many
high-minded ideals are based, that it seems rather likely that it is an idealization, an
abstraction, that is not reducible to actual mechanisms. We have suggested, how-
ever, that both compatibilists and libertarians cannot afford to ignore the science of
voluntary action. These sciences have implications for whether we have the cogni-
tive capacities required for free agency. However, if Pettit is right, our argument
rests on false assumptions. We are looking for freedom in the wrong place. Instead
of looking inside of free agents, we need to look to the social contexts in which the
actions of responsible agents take place.
We agree with Pettit about the importance of the social context for our folk con-
cept of free will. This does not, however, render questions about mechanisms irrele-
vant. Even if free agency is a matter of our social practices in which we come to think
of ourselves as answerable to others for our actions, this does nothing to undermine
the thought that there may be specific functions and mechanisms essential for such
a self-understanding to influence behavior. Pettit argues that thinking of ourselves
as accountable for our actions can shape the cognitive processes that bring about
our actions in ways that result in us acting as responsible agents. Thus free will is
important only as a self-attribution. We are happy to concede this point, but still
this invites the question of why such a self-attribution is important. Surely such a self-
attribution is important only if it exerts a causal influence on behavior. Whether this
is the case or not is a question that will be answered in part by looking to science. It
might well turn out that even though free will might be conceptually quite coher-
ent, the machinery of the mind works in ways that are inconsistent with our being
free agents. The obvious example of such an empirical way to undermine free will
is the recent surge in findings that seem to show that consciousness plays a much
smaller role in causing behavior than previously thought. If conscious awareness
and control is necessary for free will, then these findings undermine free will.6 We
will label this worry the zombie challenge.

THE ZOMBIE CHALLENGE


The zombie challenge is based on an amazing wealth of findings in recent cogni-
tive science that demonstrate the surprising ways in which our everyday behavior
is controlled by automatic processes that unfold in the complete absence of con-
sciousness. One of the key aims of this volume is to see whether and how these
findings are relevant for our thinking about free will and, even more important, to
give examples of how these findings might form the basis for empirically informed
accounts of autonomy.
What we are calling the zombie challenge is quite different from the debate in
the metaphysics of mind (Chalmers 1996) about the logical possibility of crea-
tures physically and functionally like us that lack phenomenal consciousness. Our
question is about the actual world and whether consciousness plays a significant
functional role in the causation of our behavior as is normally assumed. The zom-
bie challenge suggests that the conscious self takes a backseat when it comes to the
control of much of our behavior. The functional machinery that initiates and con-
trols much of our everyday behavior does much of its work without involving the
conscious self. If the zombie challenge is effective, consciousness will turn out to
be epiphenomenal. Such a finding would seem to be bad news for any belief in free
will, even if we grant the correctness of compatibilism. Most compatibilists think
that some form of control is what is special about freedom.7 Exactly what this con-
trol consists in is up for grabs, but in most accounts it seems to be assumed that it is
consciously exercised. If the zombie challenge is upheld, the capacity for conscious
control would have been revealed to be illusory or at least mostly idle. Even if we
have a capacity for conscious control, if it does not do any work in the production of
most of our behavior, this would seem to have major implications for views that take
conscious control to be necessary for the exercise of free will.8 It would show that
one of the conditions the compatibilist identifies as necessary for free will does little
or no work in generating our behavior. This increases the temptation to conclude
the same is true for free will. We believe it is something like the zombie challenge
that motivates scientists to claim that free will is an illusion. The experiments that
motivate free will skepticism have very little to do with the truth or falsity of deter-
minism.9 The experiments all seem to point to the conclusion that the conscious self
is an epiphenomenon.
In his essay for our collection, Richard Holton helpfully diagnoses why determin-
ism might have been thought to be a problem for free will by showing how the folk
often mistake the doctrine of determinism for an alternative doctrine that he labels
“predictability.” Holton is responding to recent studies that show how manipulat-
ing a person’s belief in free will can make them more likely to lie, cheat, and steal
(Vohs & Schooler 2008; Baumeister et al. 2009). Commenting on these findings,
Baumeister and colleagues attribute to the participants in these studies the thought
“I should not be blamed for acting as I did, because I could not possibly have done
otherwise.” It is something like this thought that Holton aims to capture with his
doctrine of predictability, which claims that complete knowledge of the conditions of
the world at a given time together with knowledge of the laws of nature will allow
one to predict what will happen at the next instant. Holton argues that the truth
of predictability would indeed justify disbelief in free will. It would encourage the
thought behind the Stoic’s lazy argument that our decisions are powerless to make
a causal difference to what will happen. What will happen was always going to hap-
pen, and our decisions and choices can make no difference. Holton points out, how-
ever, that predictability (which of course amounts to fatalism) is a metaphysically
much more demanding and stronger hypothesis than determinism, and is moreover
probably false.10 Now according to the zombie challenge, our conscious choices are
epiphenomenal: our behavior is determined automatically, and the conscious self
is an impotent bystander. If choices do not make a difference, it seems permissible,
and indeed rational, to stop trying.11 The predictability Holton talks about is clearly
science fiction, but as John-Dylan Haynes shows (this volume), worries about pre-
dictability feel increasingly real. Haynes can already predict simple decisions such
as whether we will use our left or right index finger in making a button press using
machine learning algorithms that identify patterns of fMRI BOLD signals in vari-
ous brain regions. There are many more findings along similar lines that seem to
indicate that full predictability of behavior is just around the corner (e.g., Haynes
et al. 2007; Kay et al. 2008; Soon et al. 2008; Falk et al. 2010; Tusche et al. 2010;
Shirer et al. 2012). By disentangling determinism from predictability, Holton helps
us to understand how the threat to free will comes not from determinism but from
predictability. Predictability leads to resignation, the feeling that the conscious self
can make no difference to what will happen. If predictability undermines free will,
so too does the zombie challenge. Both imply that the conscious self can make no
difference to what happens.12 If we live in a world where our capacity for conscious
control is idle and does no work in initiating action, this is a world in which the con-
scious self makes no difference. It is a world in which our conscious choices make no
difference, so we might just as well give up trying. While Holton is clearly right that
predictability is a red herring when it comes to belief in free will, the zombie chal-
lenge generates the same worry as predictability. It tells us that the conscious self is
an impotent bystander that can make no difference to what happens in a universe of
mindless mechanisms in much the same way as Oedipus could do nothing to avoid
his fate.

ARE YOU A ZOMBIE?


We will discuss two prominent strands of evidence that are often taken to motivate
and support the zombie challenge. There is, on the one hand, the work done in
the wake of Libet’s seminal studies that seem to show that conscious intentions are
preceded by unconscious brain processes that prepare the agent to act. As Adina
Roskies notes in her review of the Libet-related literature, initiating actions is one of
the key concepts associated with the will, and it seems at first glance as if Libet has
given us strong empirical evidence that we do not consciously initiate actions. The
findings themselves have stood up surprisingly well to very close scrutiny, but the
discussion as to what the findings mean is still raging (see, e.g., the recent anthology
edited by Sinnott-Armstrong & Nadel 2011). A standard reply to Libet’s findings
has been that consciousness cannot be measured in milliseconds (see, e.g., Dennett
2003; Gallagher 2006). This criticism is partially undermined by the findings of
John-Dylan Haynes’s lab where they used fMRI to predict simple left/right deci-
sions many seconds (9–11 seconds) before subjects were aware of their decisions
(Haynes, this volume).
Libet himself did not believe that his findings showed that we do not have free
will, arguing that our capacity for free will is to be found in the ability to exercise
a veto on any unconsciously generated action. Libet does not deny that conscious
intentions might have a causal role to play in action initiation (in contrast to Wegner
2002, discussed later), but he does worry that these conscious intentions are simply
the product of an unconscious decision and that there might be no role for conscious-
ness in the decision-making process. The veto is supposed to speak to this worry. If
the veto were possible, it would give the conscious self the final say in whether an
unconsciously generated decision gets acted upon. Much has been written about
the possibility of the veto. There have been studies that seem to support the possi-
bility of the veto (Brass & Haggard 2007), while other studies seem to show that the
conscious veto is impossible (Lau & Passingham 2007). Mele (this volume) shows
that the empirical work, crucial as it is, does not suffice to establish the possibility of
a conscious veto. Mele discusses four different interpretations a subject might place
on an instruction to veto. He shows that the strategy that most closely fits the
conceptual requirements of a real veto is also one that is most difficult to make sense
of. Mele’s final verdict is mainly negative: that existing empirical work does not tell
us either positively or negatively whether the vetoing of conscious proximal inten-
tions is possible. Mele suggests that the veto is crucially important for healthy moral
development, but before we can empirically investigate its underlying mechanisms,
we must resolve some very difficult questions about its operationalization.
The Libet studies were primarily concerned with decision, and volition was oper-
ationalized as open response selection. From the go/no go scenario in the original
Libet experiments to the left/right presses in the Haynes studies, what these studies
have in common is that they are built on the intuition that subjects should be able
to decide what they want to do. These experiments buy into the compelling intu-
ition that free will consists essentially in the ability to do otherwise, an intuition at
the heart of incompatibilist notions of free will. The zombie challenge arises to the
extent that seemingly conscious choices in open response selection paradigms are
screened off from doing any causal work by an unconscious process that gets started
well in advance of any conscious decision by the subject.
The zombie challenge is, however, not restricted to questioning the efficacy of
proximal conscious decisions in the etiology of our actions.13 In the social psychol-
ogy literature, the zombie challenge takes a rather different form. Here volition is
located in the control of behavior, and the zombie challenge takes the form of an
argument to the effect that consciousness is not in control of our actions, and we are
ignorant of the automatic, unconscious processes that are really in the driver’s seat.14
Morsella and colleagues in their contribution to this volume begin by reviewing a
host of striking findings that show how much power and influence the automatic
juggernaut exerts in steering our behavior. From the speed of walking to the elevator
after a psychology experiment to cooperation in moral games, Bargh and colleagues
have shown time and again how much of our behavior is subject to capture through
priming. Daniel Wegner’s lab has shown that not only are we frequently ignorant of
the causal factors that are influencing our behavior, but sometimes we do not even
know whether we have initiated an action. Wegner (2002) describes a whole range
of cases from facilitated communication to automatic writing where people have
no sense of authorship for actions they have initiated and controlled. Wegner and
Wheatley (1999) report an ingenious and complex experiment in which they create
the illusion of authoring an action in their participants. Again, there is much that can
and has been said about the philosophical implications of the Wegner studies (see,
e.g., Nahmias 2005; Bayne 2006; Roskies, this volume). Regardless of what one makes
of Wegner’s conclusions, he has certainly succeeded in planting the seeds of skepti-
cism about conscious will firmly in the empirical discourse concerned with agency.
Social psychology is replete with examples of our self-ignorance about the rea-
sons behind our decisions (Wilson 2002). Johansson and colleagues (2005) have
come up with a particularly striking paradigm that seems to demonstrate that people
can be quite easily fooled into believing they have made decisions, which in reality
they never made. In the original paradigm, participants are shown photographs of
two faces and then asked to rate which of the two faces they judge more attractive.
In random trials they are asked to justify their choice. Sometimes the experiment-
ers switch photographs, showing the subjects a face they did not choose. In 75 per-
cent of trials, participants fail to notice the mismatch between the photograph they
are shown and the one they chose, and freely volunteer an explanation for a choice
they have not in fact made. In their contribution to this volume, Hall and Johansson
report evidence that this choice blindness extends into the realm of moral decision
making. In one study (Hall et al. 2012), participants are given a two-page ques-
tionnaire and asked to rate a list of morally charged statements relating to topical
news stories. After all the questions had been answered, participants were asked to
read aloud statements and explain their reasons for agreement or disagreement. In
manipulated trials, the statements they read out had been tampered with in ways that
reversed the meaning of the original statements the subjects had read. The reversal of
meaning tended to be noticed when people either strongly agreed or disagreed with
a statement, but in 60 percent of manipulated trials it went undetected. Particularly
striking was the finding that just as in the earlier choice-blindness studies, subjects
that did not detect the change nevertheless proceeded to construct “detailed and
coherent arguments clearly in favor of moral positions they had claimed that they
did not agree with just a few minutes earlier” (Hall & Johansson, ms, p.304).
We have described two strands of a vast empirical literature that strongly sup-
ports the zombie challenge. The evidence clearly does not establish that there is no
role for consciousness in decision making (see the chapters by Roskies and Vargas
for more on this point). It does, however, show that conscious intention and con-
scious control are not necessary causal precursors for much of what we do. It is all
too easy to slip back into a Cartesian way of thinking according to which the mind
is absolutely transparent and we have privileged access to the processes that lead
to our actions. This is strongly countered by the empirical findings we have
outlined here.
Even if the extent of the zombie challenge and the threat to our commonsense
understanding of responsible agency may have been slightly exaggerated, they nev-
ertheless invite an important question that has hitherto not been on the agenda in
philosophical discussions of agency. Can we come up with a testable function for
conscious behavioral control? Given how much of our everyday behavior can pro-
ceed without conscious guidance and control, what is the role of consciousness in
agency, if any?15

MEETING THE ZOMBIE CHALLENGE


One tempting response to the zombie challenge is to try to disentangle questions
about free will from their traditional associations with consciousness (see Ross et al.
2007). The idea of conscious control seems to be inextricably linked with a Cartesian
homunculus, an inner conscious self that makes free decisions (whatever those may
be). We know that there is no Cartesian homunculus populating our minds: our
cognitive machinery is composed of soft assembled, decentralized, self-organizing
distributed systems (Clark 2008). What holds for cognition in general also holds
for free will. If we are going to find free will, it will turn out to be spread across space
and time in the distributed systems that coalesce to make up our minds. This means
dispensing altogether with the idea of a central executive that plans and microman-
ages each of the actions we freely decide upon. We agree with Dennett (2003) that
if free will requires a Cartesian self, then free will is an illusion, in just the same way
that true love would be an illusion if it required the intervention of Cupid. Of course
true love does not require Cupid’s arrow, and then again nor does free will require
the orchestration of action by a central executive, a Cartesian self.
Could the idea that conscious control is necessary for free will be a part of the
intellectual baggage we have inherited from a Cartesian philosophical tradition,
which we are still engaged in the struggle to free ourselves from? Perhaps what
makes human beings free and flexible is largely a matter of culture and upbringing
(see our earlier discussion of Pettit). Social norms and the enforcement of those
norms allow us to live rich and varied lives and multiply what Dennett (2003) calls
evitability, a term he coins to refer to happenings in a deterministic universe that
are avoidable. Culture, Dennett argues, opens up a “world of imagination to human
beings that would otherwise be closed off ” (2003: 179), and through imagina-
tion we can expand our space of options, enabling us to separate means from ends.
Control remains absolutely central to free will (how else will an agent make her
actions conform with social norms?), but it does not much matter whether the exer-
cise of that control is conscious or not.16 We exercise control over our actions when
we act in accordance with reasons, but Dennett locates this ability in the habits that
are formed in our upbringing and our training in the practices of demanding and
giving reasons (2003: 286).
The conclusion that conscious control is not necessary for free will finds an echo
in the Christian literature. Anthony Freeman (2000) has argued that the lesson we
should draw from cognitive science is not that our responsibility for our behavior
is reduced because it is outside our conscious control. We should accept respon-
sibility for all our behavior whether it is consciously or unconsciously controlled.
Freeman ascribes full responsibility even in cases where conscious control is wholly
absent. Many scientists argue that because we do not have conscious control, we
do not have full responsibility either. Dennett and Freeman argue that this cannot
be right, ascribing to us full responsibility because we have all the control that we
could reasonably want. They invite us to give up on the deep-rooted intuition that
free will and conscious control are necessarily intertwined, and that you cannot have
one without the other.
We agree with Dennett that any scientifically viable concept of free will must be
severed from its connection with the Cartesian homunculus. Indeed, we take it to be
implicit in our title that if free will is real, then it is the result of interactions among
many mindless, robot-like mechanisms. However, we wonder whether the willing
agent might be decomposed without entirely relinquishing the commonsense intu-
ition that there is a connection between conscious control and free will. One place
we might look for a defense of this intuition is to the phenomenology of agency.
The phenomenology of agency is as complex as it is elusive and recessive, some-
thing that is pointedly brought out in our collection by the contribution from Shaun
Gallagher. However, at least part of the phenomenology of being an agent resides
in the experience of control. When Penfield directly stimulated his patient’s motor
cortex, causing them to raise an anesthetized arm, one of the reasons the result-
ing movement did not feel like an action of the patient’s was that they had no
experience of controlling the action. If we sever the connection between conscious
control and free will in the way Dennett is recommending, we are saying an agent
can perform a free action without having a feeling that they are in control of the
action. The subject may judge that the action was one that was under their control
after they’ve acted perhaps because the action was in accordance with reasons and
values they endorse. However, to the extent that the control is unconscious, the
agent will have no feeling of control over the action. Once we allow that the type of
control required for free agency does not require consciousness, we can agree with
Wegner that the experience of being in control is an illusion, the outcome of a post
hoc inference, as Wegner has so powerfully argued on the basis of his many inge-
nious experiments. A number of our contributors take issue with Wegner’s claim
that the phenomenology of agency is an illusion. In the next section we will con-
sider whether phenomenological considerations might therefore be marshaled to
deliver the response to the zombie challenge, allowing us to hold on to something
of the commonsense idea that free will requires conscious control.

THE SENSE OF AGENCY


We all have a feeling that we are the authors of our own actions. We experience
ourselves choosing what to do, from the most trivial decisions about what to select
from the menu at our favorite restaurant to the potentially life-changing decisions
about whether to accept a job offer or a marriage proposal. This sense of self-efficacy
is one that we have from the very early days of our childhood, as every parent can testify.
Nico Frijda recounts the story of his son as a toddler who, when offered help, would
fly into a rage with the cry “David do.”17 It is this strong intuition that we can choose
how we act that can make the zombie challenge seem so deeply counterintuitive.
Proposals that ask us to give up on the intuitive connection between conscious
control and free will naturally inherit something of this counterintuitiveness. They
invite us to give up on the intuition that we exercise conscious control over our
behavior, to treat this intuition as little more than an illusion.
A large body of research has emerged from cognitive neuroscience in recent years
explicitly aimed at understanding the neural mechanisms that underpin the experi-
ence of being an agent (for some discussion, see the essay by Tsakiris and Fotopoulou
and the essay by Gallagher). We will return to some of this work later. First, however,
we want to briefly point to the important place the experience of agency has in the
literature on philosophy of action concerned with free will. Carl Ginet (1990) has
described free actions as actions that the agent feels have issued directly from him, a
feeling Ginet describes as an actish quality. Ginet takes this feeling to be at the heart
of all libertarian intuitions. He describes “my impression at each moment that I at
that moment, and nothing prior to that moment, determine which of several open
alternatives is the next sort of bodily exertion I make” (1990: 90). One might not
want to follow Ginet in his libertarian views while nevertheless retaining his gloss
on the experience of agency. David Velleman (1992) in a seminal paper accepts on
behalf of compatibilists the obligation to account for the experience of agency, an
obligation he discharges by appealing to the desire to act rationally. The details of
Velleman’s proposal are intriguing, but we will forgo any discussion of them here.
What we want to take from Velleman is the thought that any account of free will,
compatibilist or otherwise, owes an account of the distinct experience of agency.18
Indeed, if cognitive science could reveal this experience to be veridical, this would
be one strategy for meeting the zombie challenge. Conversely, if cognitive science
revealed the experience to be illusory (see Wegner 2002), this would provide fur-
ther support for the zombie challenge.
In his essay for this volume, Paglieri argues that there is no positive experience
of freedom. Paglieri considers a number of candidates that have been appealed to in
the literature on the phenomenology of agency, and he argues that in each case we
do not make our judgments of agency based on an experience of freedom. Rather,
our judgment is based on absence of coercion. Our belief that an action is a free
action is our default attitude unless we are presented with evidence to the contrary.
If Paglieri is right, this would block any appeal to the experience of freedom in
responding to the zombie challenge, since Paglieri claims there is no positive expe-
rience of freedom.
Paglieri, however, also recognizes that the phenomenology of agency is excep-
tionally complex, and that many varieties of experience, each with its own
distinctive content, feed into our experience of being free agents. Gallagher (this
volume) distinguishes between three layers to the sense of agency, each with its
own complex character that is part retrospective and part prospective. He begins
by distinguishing a prereflective or first-order sense of agency from a higher-or-
der reflective sense of agency that depends on taking up an introspective attitude
toward our first-order experience of initiating and controlling an action.19 This pre-
reflective experience involves a fairly coarse-grained awareness of what I am doing,
or trying to do. Gallagher is careful to distinguish aspects of the phenomenology of
agency that are prospective and retrospective from aspects of the phenomenology
of agency that relate to acting in the here and now. Actions that are the outcome of
a future-directed and/or present-directed intention are accompanied by a sense of
agency, but the sense of agency will derive in part from the prior planning. The
sense of agency can also take a retrospective form that derives from my ability to
explain my actions in terms of my beliefs, desires, and intentions. I feel like I am in
control of my actions because they fit with my beliefs and desires that rationalize
the action. Both the retrospective and the prospective ingredients are associated
with a reflective sense of agency. In the case of the prospective ingredients, the
actions concerned are the outcome of some kind of reflective deliberation, and the
sense of agency derives from this deliberative process. In the case of the retrospec-
tive ingredient, the sense of agency comes from the process of reflecting on an
action and one’s reasons for performing it. One can have a thin recessive experi-
ence of being the agent of an action without either of these ingredients being in
place. This is arguably the case with many of our skillful behaviors—the skilled
pianist does not have to reflectively deliberate on the finger movements he makes
in performing a piece of music. So long as his performance of the piece is going
smoothly, there is no need for him to reflect on what he is doing; his attention can be wholly absorbed in the playing itself. However, it would be a mistake to
conclude that just because there is no reflective sense of agency, there is no sense
of agency whatsoever for skilled behaviors. It is not as though skilled behaviors are
performed unconsciously, as we find in cases of somnambulism or automatic writ-
ing. There seems to be a clear phenomenological difference between performing
an action while sleepwalking and performing a skilled behavior. In the latter case
the subject has some awareness of what he is trying to accomplish and of the effects
of his actions on the world.
Gallagher distinguishes an agent's prereflective experience of what she is trying to accomplish from the motor control processes that give her the sense that she is moving her body, and that her actions are having certain effects on the world. This distinction
also provides the starting point for Tsakiris and Fotopolou in their chapter, more on
which shortly. One way that cognitive neuroscientists have set about studying the
sense of agency is to give subjects tasks where they are asked to judge whether they
caused a particular sensory event. Gallagher discusses an fMRI study by Farrer and
Frith (2002) in which subjects manipulate a joystick to move a colored circle on a
screen. Sometimes the subject causes the movement, sometimes the computer does,
and subjects must judge which of the movements they are seeing are effects of their
own actions. When subjects report causing a movement of a colored circle, Farrer
and Frith found bilateral activation of the anterior insula. Gallagher argues that
what is being measured in these studies is not the neural mechanisms that underpin
a prereflective sense of agency but neural mechanisms that are involved in motor
control. Echoing Gallagher’s worry, Tsakiris and Fotopolou argue that experiments
that ask subjects to judge whether they caused a given sensory event can tell us
very little about the experience of agency. At best they can tell us something about
the “cross-modal matching process” that integrates motor representations of one’s
voluntary actions with sensory representations of actions and their consequences.
They fail to advance our understanding of the experience of initiating and control-
ling an action, key constituents of the prereflective experience of agency.
Tsakiris and Fotopolou make some positive proposals about how to investigate
the prereflective experience of agency, which they characterize as “the feeling that
I voluntarily move my body.” They argue that a key requirement is a control condi-
tion in which the movement parameters are kept constant: for example, subjects
are asked to press a button, but the movement is made passively or involuntarily
(also see Tsakiris et al. 2005). The question they suggest we must answer if we are
to scientifically investigate the experience of agency is in what way agency changes
the experience of the body. This is an experience that is present both when we pas-
sively move and when we actively move. Is the sense of agency simply an addition
to an "omnipresent" sense of body-ownership, or is it a different kind of experience from the experience of body-ownership? Tsakiris and Fotopolou
report neuroimaging experiments that support the latter view that the experience
of agency is qualitatively different (Tsakiris et al., 2009). They find no activations
common to active movement and passive movement conditions, and distinct pat-
terns of activation in the two conditions, thus strongly supporting the view that an
experience of agency is a qualitatively distinct kind of experience from the experi-
ence of body-ownership.
Does this prereflective sense of agency provide evidence for free will and the neu-
ral mechanisms that underpin voluntary action? One conclusion we do seem to be
warranted in drawing is that the sense of agency is not an illusion, as it has been powerfully argued to be by Wegner (2002). Wegner and colleagues have amassed substantial
evidence that our experience of mental causation—the experience that our con-
scious proximal intentions are the causes of our actions—may be based on a post
hoc inference. Wegner argues that this post hoc inference is based on the satisfac-
tion of the following three conditions. First, we undergo a conscious thought about
an action at an appropriate interval of time before acting; second, we find that the
action we perform is consistent with the action we thought about performing; and
third, we establish there are no other rival causes of the action. The sense of agency
we arrive at via this route is a reflective sense of agency, and we have seen that this
does not exhaust our experience of agency. In addition, there is what we have been
calling, following Gallagher, a prereflective sense of agency, which is the outcome of
neural processes involved in motor control.
However, one might worry about whether the prereflective sense of agency
really adds up to the experience of freedom that is required for a robust response to
the zombie challenge. What matters for responsibility is precisely that our actions
accord with reasons we endorse (see Vargas, this volume). Perhaps this is a condi-
tion that cannot be satisfied unless we have a prereflective sense of agency for an
action. Thus we can allow that a prereflective sense of agency may well be necessary
for free will, but it looks too thin to give us a notion of conscious control sufficient
for free will.

EXPERIENCING THE WILL


It seems clear that cognitive science can help us to elucidate some notions of
agency, but we are brought back once again to a question with which we started
as to whether cognitive science can help us to understand what it means to have
free agency. The sense of agency was supposed to provide us with an answer to this
question, but it turns out that there is no single sense of agency; there are rather
“multiple senses of agency” (Gallagher 2012, this volume; see also Pacherie 2007).
Cognitive science shows how these different aspects to the experience of agency
can be operationalized. However, as Paglieri’s essay argues, one might doubt that
the sense of agency as studied by Tsakiris is related to the experience of free agency
that Ginet and Velleman discussed.
It might also be the case that some commonsense ideas relating to free agency
do not have anything to do with the phenomenology of agency while others do.
Richard Holton (2010),20 for example, argues that the folk notion of free will is
made up of at least three very distinct and probably incompatible ideas. First, there
is the mental capacity for free agency or the ability to act freely as described by
philosophy of mind. Holton argues that careful phenomenological reflection can
be a rich source of insight for learning about this mental capacity. Second, there is
the conception of free agency required for moral action and responsibility. Finally,
he suggests that both of these conceptions of freedom should be distinguished
from a third metaphysical notion of being able to do otherwise, which is so crucial
to libertarians. Holton finds within the phenomenology of free will two distinct
types of experience, which he labels the “experience of choice” and the “experience
of agency,” respectively. He argues that neither of these experiences tells us much
about our practices of assigning moral responsibility. Experience of choice is not
required for moral responsibility, since we have no hesitation in assigning respon-
sibility to agents for harms that have arisen from habitual or automatic actions they
do not experience choosing. Somewhat more controversially, Holton argues that
the experience of agency is not necessary for moral responsibility either. Holton
considers views that take the experience of agency to be connected with “the capac-
ity to choose rationally” (2010: 91); we will call these accounts “rationality-based
accounts.” Holton argues that our moral practices require us to hold a person mor-
ally culpable even when they lack a capacity for rationally assessing the reasons for
and against a particular course of action. A person might be quite ignorant of her
bad motives, for instance, and so lack the capacity to choose rationally, yet our moral
practices still allow us to hold the person responsible. Holton writes: “A person can
be spiteful, or selfish, or impatient without knowing, or being able to know that they
are, and such ignorance does not excuse the fault” (93). Suppose there is a connec-
tion between having an experience of agency and the capacity to choose rationally as
is claimed by rationality-based accounts. It seems we must conclude that our moral
practices allow us to hold agents responsible for acting even when the agent has no
experience of agency. Thus our moral practices of attributing responsibility do not
line up at all well with cases in which a person has an experience of acting freely.
Even if we agree with Holton about the disconnect between phenomenology
and our moral practices, still one might be reluctant to give up on the connection
between the capacity to choose rationally and ascriptions of moral responsibility.
In his contribution to this volume, Manuel Vargas shows how rationality-based
accounts can accommodate the limited access we have to our motives. He argues in
agreement with Holton that there need not be any incompatibility between ascrip-
tions of moral responsibility and self-blindness, but he denies that such a result is in
tension with the rationality-based account he develops in his essay (see also Vargas
2007). Vargas argues that the situationist tradition in social psychology forces
philosophers to accept that humans probably do not possess one stable capacity
to react to reasons, but this does not mean that humans cannot act rationally at
all. All it means is that there might be many heterogeneous abilities to respond to
reasons and that these might be far more context dependent than we might have
assumed. In the case of the person ignorant of her bad motives, this could mean that
there might be a good sense in which this person is still responsive to reasons in her
decision. On the other hand, if there is not, then perhaps our practices of ascribing
responsibility in such situations are simply mistaken. In either case, it would not be
necessary to completely sever the link between mental capacity and an understand-
ing of free will in terms of social practice.
Setting this debate to one side, even Holton admits there must be some link
between a person’s mental capacity to act freely and our moral practices of ascribing
responsibility. This is illustrated well by the insanity defense, which clearly demon-
strates that our practices of responsibility ascription are sensitive to the proper func-
tioning of a person’s cognitive machinery. Holton has argued persuasively that we
can learn a good deal about the nature of the psychological capacities that make us
free from our experience of being free agents. He has also argued that what we can
learn from this kind of phenomenological reflection does not necessarily advance
our understanding of what makes an agent responsible for an action. However,
Holton has not shown (nor does he claim to have shown) that the psychological
capacities that make us agents have no bearing on the question of what makes us
responsible agents. The most we can conclude from his arguments is that the mech-
anisms necessary for an agent to be a target of responsibility ascriptions turn out to
be quite distinct from the mechanisms that ground the experience of agency.
The zombie challenge presents a threat to our practices of ascribing responsibility
to an agent by purporting to show that the conscious self is an impotent bystander,
and so is not causally responsible for bringing about actions we praise or blame.
In what follows we will attempt to meet the challenge head-on by defending the
idea that there is a necessary connection between conscious control and respon-
sible agency. However, let us pause briefly to consider this strategy in the light of
Holton’s arguments. If Holton’s reasoning is correct, the mechanisms that support
our experiences of freedom are not sufficient for responsible agency. So in order
for any form of conscious control to be sufficient for responsible agency, either Holton has
to be wrong, or it has to be possible for an agent to be consciously in control of her
actions but not experience her own agency. Whether this really is possible would
seem to turn on how we understand the experience of agency, a question we have
briefly touched upon earlier. There we argued for a distinction between prereflec-
tive and reflective experience of agency. Once we have this distinction on the table,
we should fully expect an agent could be in conscious control of an action but not
have a reflective experience of her own agency, even though she might well have a
prereflective experience.21
We find further support for this possibility in Tim Bayne’s arguments in his con-
tribution to this volume for treating intentional agency as the marker of conscious-
ness. A “marker” of consciousness is a criterion we use to decide whether a creature
or the state a creature is in qualifies as conscious. Typically scientists and clinicians
have used introspective report as the favored method for testing for the presence of
consciousness. Bayne argues, however, that intentional agency is a better criterion,
able to capture all the cases where we would intuitively want to ascribe conscious-
ness, and better able to avoid the threat of false negatives than the introspective
report criteria. Bayne’s suggestion fits well with our claim that conscious control
and the reflective sense of agency may come apart. A subject might well be exercis-
ing intentional control over her actions in a way that implies consciousness while
being incapable of introspective report.22
If Bayne is right, what we have been calling the zombie challenge may turn out to
be based on a mistaken conception of consciousness. All the experiments that sup-
posedly undermine responsible agency take report to be the marker of conscious-
ness. Based on this understanding of consciousness, they then proceed to argue that
consciousness lags behind the unconscious brain processes that cause our inten-
tional behavior. However, if Bayne is right, perhaps all we are really entitled to con-
clude is that it is introspective report that lags behind the causes of our intentional
actions. The zombie challenge claims that wherever you find intentional agency, this
is the product of unconscious brain processes, but Bayne tells us “the presence of
fully intentional agency” is “good evidence of consciousness” (this volume, p.165).
If you have a case of fully intentional agency, this implies that the agent must be con-
scious. It could be that the agent cannot report on his reasons for acting, but if he is
behaving intentionally, he could not as a matter of fact be behaving unconsciously,
or so Bayne seems to argue.
We will briefly raise two questions for this line of argument. First, Bayne’s argu-
ment would seem to turn on how we understand intentional agency. Bayne makes a
distinction between subpersonal- and personal-level control and says that we attri-
bute agency to the person and not to her parts when her action is “suitably inte-
grated into her cognitive economy” (this volume, p.163). Bayne suggests in passing
that this kind of cognitive integration might be the product of what consciousness
scientists call “global broadcasting” of perceptual information. If this is right, there
is a connection between intentional agency and consciousness because of the con-
nection between global broadcasting and consciousness. However, one might still
wonder about whether intentional agency understood in this way is sufficient for
responsible agency. A clue that it might not be comes from Bayne’s discussions of
infants and animals. Bayne tells us that a lion tracking a gazelle lacks a capacity for
“high-level deliberative and reasons-responsive agency.” Yet the lion exhibits suf-
ficient behavioral flexibility to warrant our treating the lion as an intentional agent.
Bayne makes similar claims about infants. We agree, but to the extent that he dis-
tinguishes intentional agency from what he calls "high-level agency," he seems to us to concede that the kind of consciousness intentional agency buys us may
not be directly relevant to responsible agency. At least this will follow if high-level,
reasons-responsive agency is required for responsible agency.
Second, one might worry whether the kind of consciousness that intentional
agency implies is really up to the task of undermining the zombie challenge. Bayne
argues persuasively that the presence of intentional agency implies perceptual con-
sciousness. Even when you are absentmindedly pouring coffee into your coffee mug,
you must be able to identify the cup of coffee and factor this into your behavioral
planning, but this suffices for perceptual consciousness of the coffee cup, says Bayne.
Bayne certainly succeeds in showing that wherever you have intentional agency, you
most likely also have perceptual consciousness. However, he does not succeed in
showing that the conscious self is responsible for bringing about intentional agency.
This worry is driven home by the care Bayne takes to sever his thesis that agency is
the marker of (perceptual) consciousness from the claim that when the agent acts
intentionally, he must be conscious of his intentions and motives. However, it is the
absence of conscious intentions and motives in causing our behavior that gener-
ates the worry about self-blindness, which in turn fuels the zombie challenge.23 If
intentional agency can be the marker of consciousness without this implying that the
agent is conscious of his intentions and motives, the zombie challenge would seem to
remain in place. We think Bayne is absolutely right to stress the connection between
consciousness and intentional agency,24 but the threat the zombie challenge presents
to responsible agency will persist so long as we lack an account of what it is that con-
sciousness does in generating our intentional behavior. Our strategy in the remain-
der of the volume is therefore to take up the question of the function of the conscious
will. If we can establish possible functions for the conscious will, this would go some
way to furthering our understanding of what makes us responsible agents.

WHAT IS THE FUNCTION OF CONSCIOUS CONTROL?


Suppose you are not yet swayed by the idea that the role of consciousness for the will
is overrated. Yet you also feel the force behind the zombie challenge. What are you to
do? One obvious strategy would be to try to identify a functional role for conscious
control that is compatible with the automaticity and neuroscientific findings but
also tells us what consciousness might be needed for when it comes to acting freely.
A possible role for consciousness might be that of enabling self-knowledge, a capac-
ity necessary for humans to evaluate their reasons in advance of acting. This link can
be found in Aquinas, but it is also implicit in traditional hierarchical compatibilist
accounts of free will. In his seminal work, Harry Frankfurt (1971) equates freedom
of the will with the ability to have self-directed desires. Frankfurt’s account requires
the agent to know that she has first-order desires about which she in turn can have
second-order desires. The agent must be able to exercise reflective self-evaluation in
order to make it the case that the first-order desires that move her to act are desires
she wants to have.
Hierarchical accounts like Frankfurt’s that place self-knowledge at the heart of
free will seem to conflict with much of the evidence we presented earlier in intro-
ducing the zombie challenge. One of the prominent themes in the social psychol-
ogy literature is our pervasive self-blindness. We often do not have knowledge of the
desires that move us to act, and our ignorance can even extend to the choices and
decisions we have made, as is evidenced by the choice-blindness studies discussed
earlier. Perhaps, however, this clash between hierarchical accounts of volition and
science is only apparent and not real?
Morsella et al., certainly no strangers to the literature on self-blindness, argue
that the conscious will is necessary in order to resolve conflicts between high-level
processes that are vying for control of the musculoskeletal system. Consciousness
allows for what Morsella and Bargh describe as “cross-talk” to take place between
competing action-generating systems. Without the intervention of conscious-
ness, each action-generating system is unable to take information from other
action-generating systems into account, with the consequence that the agent may
act in ways that do not cohere with other plans they might have. Morsella and col-
leagues give as examples anarchic hand syndrome and utilization behavior (UB). In anarchic hand syndrome, a patient's hand performs well-executed, goal-directed movements that the patient claims are unintentional. Patients will often complain that the hand is
behaving as if it has a mind of its own. Morsella et al. suggest that these disorders
are the result of a failure of cross-talk among competing action-generating systems.
Similarly, in UB, a patient will find himself responding to affordances that are irrel-
evant to his projects, interests, and goals at the time. According to Morsella and col-
leagues, this is because consciousness is failing to enable the system that is guiding his behavior to speak with other action-generating systems, and so that system is not influenced
by the patient’s wider concerns. Elsewhere, Bargh (2005) has compared UB patients
to participants in his priming studies, arguing that in both cases we find a dissocia-
tion of the system that represents intentional movements and the system that gen-
erates motor representations used to guide behavior. Just as with UB patients, the
behavior of subjects in priming studies is generated unconsciously, which is to say
quite independently of interaction with other behavior-generating systems.25
Morsella et al. may have succeeded in finding a function for consciousness that
is consistent with research in social psychology demonstrating the ubiquity of our
self-blindness. However, it would take more work to establish that their account
is sufficient to rescue hierarchical accounts of free will from the zombie challenge.
Resolving high-level conflicts between action plans is certainly a part of what it
takes for a person to have the will she wants, but it is surely only a part of the story.
It is still not entirely clear, for instance, how a mechanism for ensuring coherence
between action plans could deliver the species of free will we are interested in when
we praise or blame an agent for an action.
Nico Frijda also argues that the function of the conscious will resides in the resolution of inner conflict, but the conflicts he is concerned with are emotional in nature. A
conflict of emotion arises when two or more incompatible emotional inclinations
operate at the same time. You are angry with your partner, but at the same time
you wish to keep the peace so you say nothing. Frijda argues that conflicts between
emotions are resolved through emotion regulation. Sometimes this regulation can
proceed automatically and unconsciously, but Frijda argues that often emotion regu-
lation is effortful and voluntary. Frijda makes a distinction between self-maintenance
and self-control in developing his account of emotion regulation.26 Self-control is
exercised when we reevaluate an action inclination, deciding not to perform an action
we had decided upon previously. Self-maintenance, by contrast, is exercised when an
agent undertakes and persists with an action plan despite its anticipated unpleasant
consequences. Emotion regulation takes one of these two forms. The upshot of exer-
cising either of these forms of control, Frijda argues, is that one acts on the basis of
concerns that one can identify with. Thus, when a person gives up smoking for the
sake of his long-term health, this is the goal he identifies with even though he may
also desire the short-term pleasure and relaxation derived from smoking a cigarette.
Frijda is aware of Frankfurt’s hierarchical account, and he speaks favorably of the idea
that an agent acts freely when he acts on desires he can identify with wholeheartedly.
Emotion regulation as Frijda characterizes it involves acts of reflective self-evaluation
in which one reflects on the concerns that are motivating one’s action tendencies and
carefully considers which of the options one prefers. Free will, he says, “refers to free-
dom to pursue available options, in perception and thought, and in constructing pref-
erence, or in finding out what one’s preference is” (this volume, p.214). He is aware
of arguments that purport to show that the conscious self is an epiphenomenon, and
free will an illusion. His response to these arguments is to point to the possibility
of emotion regulation, which he argues buys us all the self-determination we could
want. Yet Frijda ends his essay by recognizing the point we have been stressing in
this section, that people are in general quite ignorant of their motivations. He agrees
with social psychologists that “awareness of what moves one is a construction,” and
that there can be “no direct reading-off of the causes of intentions and desires” (this
volume, p.216). Frijda seems to believe that there is no conflict between this kind of
self-ignorance and the reflective self-evaluation required for voluntary and effortful,
emotional regulation. Doesn’t reflective self-evaluation of the kind Frijda argues is
required for emotional regulation require us to know our own motivations? Almost
certainly the resistance fighters and political revolutionaries that pepper Frijda’s essay
are examples of people who know what they want. However, the suspicion remains
that in the end self-deception and self-ignorance may undercut the less heroic and
more mundane person’s capacity for emotional regulation.

MENTAL AGENCY
One of the morals we can take from our discussion of the function of conscious
control in the previous section is that conscious control has as much to do with self-regulation as with the regulation of action. We saw how Morsella and Bargh
argue that the function of conscious control is to allow for cross-talk between and
integration of different action-generating systems. In the absence of this cross-talk,
behavior is generated that does not reflect, and is indeed encapsulated from, the
projects that drive the agent. An important and direct consequence of integration
of action plans delivered by conscious control is the production of actions that fit
with the agent’s wider projects and concerns. Similarly, Frijda describes how the
concerns at play in emotion regulation are personal norms and values the violation
of which would result in “a loss of sense and coherence in the world,” and more dra-
matically still a “‘breakdown of the symbolic universe.’” Frankfurt talked about free
will as a capacity we exercise when the desires we act on mesh with our second-or-
der volitions—our second-order preferences as to which desires will move us to act.
When this kind of mesh obtains, the desires that cause the agent’s actions are not
alien to him, passively causing him to act in the way an external force might. Instead,
the desire is one that fits with the agent’s self-conception; it is one with which the
agent can self-identify.
There are of course many problems with Frankfurt’s account of free will, which
we do not intend to rehearse here.27 What we want to focus on from Frankfurt’s
account is the emphasis he gives to reflective self-evaluation in accounting for the
will. Hierarchical accounts like Frankfurt’s require the agent to engage in critical,
rational reflection on the desires that move them to act. The agent uses reflective
self-evaluation to exercise control over her attitudes. She literally makes up her
mind about which desires she wishes to be effective. Perhaps conscious control just
is the capacity that gives an agent the ability to exercise control over her attitudes.
If we want to know what work conscious control does in bringing about intentional
actions, perhaps it is mental agency we need to better understand.
One problem we immediately encounter in pursuing this strategy is the question
of the scope of mental agency. Galen Strawson has argued, for instance, that much
of our mental life is relatively passive, and action and intention play little or no role.
Mental activities like reasoning, thinking, and judging are not intentional actions.
In order for a thought to count as a mental action, it would have to be the case that
the content that individuates the thought is one that the agent intentionally brings
to mind. We cannot, however, form an intention to think a particular thought, since
to do so the thought would already have to be available to us for “consideration
and adoption” (Strawson 2003: 235; also see Proust, this volume). The content of
the thought we are intending would have to have already been somehow brought
before one’s mind in a completely passive way.28 The best we can do as active think-
ers, says Strawson, is foster conditions “hospitable to contents’ coming to mind”
(2003: 234); the entertaining of a content in a thought can never be an action. To
borrow a distinction from Mele (2009), one cannot try to remember any more than
one can try to sleep. One can, however, try to bring it about that one remembers just
as one can try to bring it about that one sleeps by, in Strawson’s words, “fostering the
right conditions.” Mental ongoings seem to happen automatically; once the relevant
parameters have been set, all that is left for the agent to do is sit back and wait for the rest to unfold automatically.
Wayne Wu (this volume) argues that the passivity and automaticity Strawson
finds in the mental realm may well generalize in such a way as to undermine alto-
gether any view of mental activity as agentive. Consider what Strawson calls the
fostering of the right conditions for some mental activity like deliberating or judg-
ing to take place. Strawson gives as examples of “setting one’s mind to a problem”
psychological processes like rehearsing inferential transitions, and talking oneself
through a problem. However, Wu points out that these processes happen just as
automatically, without the subject’s control, as any of the examples of mental actions
Strawson has discussed. At no stage in the unfolding of a mental action can we find
any role for control; the entire process from beginning to end unfolds automati-
cally. This is a conclusion that of course threatens the very status of mental actions:
if mental actions are mere happenings over which the agent has no control, it looks
like we make a mistake to call them actions. The worry does not end there, or so
Wu argues, but threatens to generalize also to bodily actions. Bodily movements
are of course ballistic once they are put in motion; nor can we say that what makes a bodily movement an action is the proximal or distal intention that causes it. We have just argued that the formation of a proximal or dis-
tal intention is equally automatic and ballistic. Once again, there is no stage in the
process that leads to action where the agent has any room to exert control. So we
seem required to conclude that there are no actions whatsoever, of either a mental
or a bodily kind.
What has gone wrong? Wu’s diagnosis involves a clever exploitation of Anscombe’s
insight that actions are intentional under a description. Wu’s twist on this is to argue
that one and the same action can have properties that are best explained by auto-
matic processes, and properties that are best explained by reference to an agent’s
intention. Wu suggests that the way in which we bring a thought to mind is either
through episodic recall or through a process of synthesizing previously encoded
thoughts. Of course both of these processes happen automatically. However, in epi-
sodic recall, there will be a wide variety of possible thoughts to select from. How
does our automatic process of recall select, from this wide variety of thoughts, the
particular thought that is relevant to the task we are engaged in? Wu calls this “the
selection problem.” Once the thought has been successfully retrieved, we then run
into a further problem of what to do with the thought if we are to accomplish our
task. There are many responses we could make, and we have to select from among
these possible responses which is the appropriate response for accomplishing the
agent’s goal. Therefore, the agent faces two kinds of selection problems: the selec-
tion of the relevant input, and of the appropriate response to this input. Wu labels
this the “Many-Many Problem.” Wu argues that the agent enters the loop in arriv-
ing at a solution to the Many-Many Problem. The agent selects a particular behav-
ioral trajectory in solving a Many-Many Problem through what he calls “cognitive
attention.” Wu follows William James in conceiving of cognitive attention as the
“selection of a possible train of thought” (Wu, this volume, p.252). It is this pro-
cess of directing thinking along a particular path that is, according to Wu, active
and agentive. He offers an account of the role of cognitive attention in the solving
of a Many-Many Problem in terms of hierarchical processing. An intention of the
agent resides at the top of the hierarchy, exerting a top-down causal influence on
the directing of attention in the solving of a Many-Many Problem. Wu’s response to
the threat from automaticity is therefore to argue that not all actions (both bodily
and mental) are passive because in many cases the action will be the execution of a
solution to a Many-Many Problem where the solution has been arrived at through
cognitive attention. Actions that have this kind of causal history are not wholly pas-
sive but are agentively controlled and therefore count as actions.
Wu argues that bodily and mental actions are agentive for the same reason, since
both types of action can count as solutions to Many-Many Problems arrived at via
the top-down causal influence of an agent’s intention. He therefore agrees with
Proust (this volume) in arguing that the agent is responsible for more than just
stage-setting or fostering the right conditions for mental action. Proust, however,
disagrees with Wu that bodily and mental actions are agentive for the same type
of reasons. According to Proust, mental actions differ from bodily actions in three
important ways: (1) they cannot have prespecified outcomes; (2) they contain a
passive element; and (3) they do not exhibit the phenomenology of intending.
Despite their disagreements, Proust and Wu agree that the automaticity challenge
for mental actions can be answered, and both deny that mental processes are as
ballistic as Strawson suggests, though for slightly different reasons. Wu appeals to
the role of cognitive attention in solving the Many-Many Problem to identify the
role of the agent in mental action. Proust agrees that the channelling of attention is
a crucial ingredient in mental agency, but her account emphasizes the role of meta-
cognitive monitoring in ensuring that the agent’s thinking conforms with the norms
of accuracy, simplicity, and coherence. For Proust, “a mental action results from the
sudden realization that one of the epistemic preconditions for a developing action is
not met.” You seem not to remember what was on the shopping list you left behind
at home, for instance. This epistemic feeling then leads you to try to remember what
to buy. Metacognitive self-evaluation allows for the sensitivity to epistemic norms
such as the norms of accuracy, simplicity, and coherence mentioned earlier. Proust
locates mental agency in part in this kind of norm responsiveness.29
Vierkant opts for a different route. He embraces the Strawsonian picture but
argues that the shepherding or stage-setting that Strawson allows for is what makes
human mental agency special. Vierkant argues that what makes human agency dif-
ferent from the agency of other creatures is humans’ ability to manipulate their own
mentality in an intentional way. In contrast to Wu and Proust, Vierkant does not
believe that most mental ongoings can be called intentional, but he argues that it is
a special ability of humans to be able to exercise any intentional control over their
mentality. Vierkant buys into a distinction introduced by Pamela Hieronymi (2009)
between two different forms of mental agency, only one of which is intentional.
On Hieronymi’s picture, the nonintentional (evaluative) form of mental agency is
fundamental, while the intentional form is characterized as only supplementary (i.e.,
in line with Strawson she believes that intentional mental agency can only be used
for stage-setting and shepherding). Vierkant agrees but insists that it is nevertheless
intentional (Hieronymi also speaks of manipulative) mental agency that makes
human mental agency free. He argues that the ability to self-manipulate is behind
the Frankfurtian intuition of the importance of second-order volitions for the will.
Vierkant argues that this ability allows humans to become free from their first-order
rational evaluations and to instead intentionally internalize desired norms, despite
not being able to assent to their validity by rational means. In other words, it allows
them to be who they want to be. The role of conscious intentional control on this
model is to help us to efficiently implement our desires about who we want to be.

REASON ACCOUNTS AND HIERARCHICAL ACCOUNTS


In the compatibilist literature on free will there are two main positions on the nature
of the mental capacities that make agents responsible for their actions. The first is
the hierarchical account originating with Frankfurt discussed extensively earlier. The
second is the rationality-based account developed in detail by Fischer and Ravizza
(1998). As already mentioned, Vargas (this volume) defends a version of the latter
modified in such a way that it can integrate the results from the new sciences of the
mind. Vierkant, on the other hand, opts for the hierarchical tradition going back
to Frankfurt. Importantly, though, and in contrast with Frankfurt, the higher-order
states Vierkant invokes are not important because they reveal something about the "real self" of the agent, as in traditional Frankfurtian positions, but simply because
they allow a specific and unique way to self-manipulate. What makes humans
responsible on his picture is that they are the only creatures that can try to have the
mind required of a responsible agent. They are the only creatures to have this ability
because only humans can intentionally manipulate their minds. Because this posi-
tion does not rely on self-knowledge for freedom of the will, it escapes the zombie
challenge that might seem threatening to traditional Frankfurtian approaches.
Peter Gollwitzer’s work on implementation intentions could be taken as provid-
ing empirical support for such a position. Gollwitzer’s work has a special place in
the empirical research on volition, because in addition to contributing to the social
psychology research that seems to support the zombie challenge, he has always also
been interested in investigating the function of consciousness in generating action.
In a series of fascinating papers (reviewed in the chapter by Maglio and colleagues)
Gollwitzer has shown that the conscious contribution to action execution might be
the formation of implementation intentions. Implementation intentions are inten-
tions that specify the circumstances under which behavior should be triggered in
order to reach a desired goal. The efficacy of implementation intentions conflicts
with the zombie challenge and the claim that consciousness is largely or completely
irrelevant for behavioral control. However, it also suggests a function for conscious
control that is somewhat counterintuitive. Traditionally, consciousness has been
associated with the rational evaluation of goals (see, e.g., Baumeister 2010 for a
recent version of that general idea), but Gollwitzer seems to indicate that the role of
consciousness is far more mundane, consisting mainly in an instrumental managing
function that ensures that the system implements its intentions in the most effective
way. In their contribution Gollwitzer and his collaborators describe some recent
experiments they have carried out on the circumstances under which
people form implementation intentions. These studies were concerned in particular
with the role of emotion in motivating agents to plan.
Further support for the view of self-control as deriving from self-manipulation
comes from the contribution by Hall and Johansson. Hall along with his colleagues
at Lund University were responsible for the choice-blindness experiments discussed
earlier and reviewed in the first part of their chapter. There they argue that what these
experiments and others establish is that our self-knowledge is largely the outcome
of self-interpretation (see also Carruthers 2009). If consciousness provides us with
self-knowledge that in turn allows us to control our behavior, Hall and colleagues
argue that this is accomplished by consciousness only via self-interpretation. We do
not have any kind of privileged access or first-person authority over our attitudes.
We come to know about our own minds more or less in the same way as we know
about the minds of others: through interpretation, or by adopting Dennett's inten-
tional stance in relation to our own behavior. Hall and colleagues go on to discuss
how we can use technologies such as ubiquitous computing to augment our capac-
ity for self-control. They argue that these technologies can allow us to make better
self-ascriptions that enhance our self-understanding. They can also provide us with
accurate self-monitoring in the form of sensory feedback, for instance, that we can
then use to regulate impulsive behavior. Finally, these technologies can allow us to
step back from our actions in the heat of the moment and consider what it is we
really want to do. They compare their Computer Mediated Extrospection (CME)
systems to a “pacemaker for the mind, a steady signal or beacon to orient our own
thinking efforts” (this volume, p.312). Like Vierkant, then, they believe that the
crucial ingredient for willed action is an enhanced ability for self-control, and they
argue that we can engineer our environments in such a way as to enhance this capac-
ity for self-control.30

FINALLY: THE ROLE OF THE SOCIAL


We will end by briefly noting the importance of the social for the will. It is an
open question to what extent the significance of the social meshes with the role of
consciousness that this collection examines. A dichotomy is sometimes assumed
between the Cartesian conscious self and distributed socially mediated automatic
mechanisms that actually cause our actions (Ross et al. 2007). We fully concur that
the Cartesian self is a thing of the past, but we dispute that the influence of the situa-
tion and the social on volition means that there can be no role for conscious control
in responsible agency. In his contribution to this collection, Vargas rightly points
out that our growing knowledge about the situatedness of human volition is not
only a threat but also an opportunity. Once it has been understood that environ-
ments matter for responsible agency, the design of these environments can become
a much higher priority than it might otherwise have been in a world where we insist
that choices reside exclusively in a stable internal conscious self.31 The situatedness
of human volition does not mean that consciousness plays no part in responsible
agency. In fact, as this collection shows, there are a multitude of ways in which we
can make sense of consciousness as having a function in volition.32
Unsurprisingly given the title of our collection, we find it plausible that the will
may not be a unitary phenomenon but might instead fractionate into a variety of
capacities. Consciousness will then play different roles depending on which of these
capacities is in play. However, this fragmentation of “the will” as a target phenome-
non should not be seen as motivating a rejection of the larger quest for an integrated
vision of minds and agency. Instead, appreciating the true joints in nature should
enable us to generate increasingly satisfying pictures of real human minds engaged
in solo and collective bouts of action selection and control. The essays in this vol-
ume are a critical step on that path to self-knowledge.

NOTES
1. Libet, in common with the other researchers we have cited, argues that our actions
are prepared for and initiated unconsciously, but unlike these other researchers he
does not deny the causal efficacy of conscious volition. He argues that we should
replace our concept of free will with a concept of “free won’t” that works “either by
permitting or triggering the final motor outcome of unconsciously initiated process
or by vetoing the progression of actual motor activation” (Libet 1985: 529).
2. Saul Smilansky (e.g., Smilansky 2002) is one of the more prominent contemporary
philosophers in favor of hard determinism.
3. It is an interesting sociology of science fact that even though most philosophers agree
broadly on this, it still does not seem to stop the publication of more and more books
on the question. John Baer and colleagues (2008) have edited an excellent volume on
free will and psychology in which the question of the relationship between determin-
ism and free will features prominently.
4. For a very good account of the conditions of libertarian freedom, see Mele (1995).
5. Similar arguments are found in many other contemporary compatibilist positions,
e.g., Fischer and Ravizza (1998).
6. Pettit himself does not explicitly talk about consciousness being important. However,
at a crucial juncture his account is ambiguous. Pettit argues that what matters for agent
control are behavioral displays of conversability and orthonomy, but it is unclear in
his account whether these abilities really would be enough if there were no awareness
by the agent of the orthos she is governed by. He writes that even though it is unclear
whether agents have the ability to control their behavior at the point of awareness,
they can still take a stance toward it and make sure that they will avoid such behavior in
the future (Pettit 2007: 86). Why would Pettit think that it is necessary for agent
control as he defines it that awareness of reasons has to play any causal role in the
shaping of present or future behavior, if orthonomy and conversability are sufficient
for responsibility? Even if it were the case that our conscious reasons are nothing more than made-up stories that we tell to justify our behavior post hoc, and even if it were
the case that these confabulations had very little influence on our future behavior, it
might still be true that the machinery is able to be governed by normative rules and
make us converse fluently. Whatever the reason, Pettit clearly does not seem to think
that this could be possible because he states that a conscious endorsing or disendors-
ing of our behavior ensures that our actions are “performed within the domain where
conversability and orthonomy rules” (86).
7. See, e.g., discussion of guidance control in Fischer and Ravizza (1998).
8. Obviously, most accounts will not require constant conscious control, but the zom-
bie challenge suggests that if there is any conscious control at all, it is far more frag-
ile than standardly assumed. It is not only about outsourcing the unimportant stuff
to automatisms while keeping control of the important stuff; the challenge is that even
our most cherished long-term decisions and value judgments might be the result of
unconscious automatisms.
9. In many conversations we had, compatibilism was described by scientists as a cheap
philosopher’s trick to simply redefine arbitrarily what free will is. In our collection
John-Dylan Haynes explicitly makes it clear at the beginning of his piece that he is an
incompatibilist.
10. It is important to emphasize that Holton’s use of predictability is unusual. On one
common understanding predictability is weaker than determinism. One might, e.g.,
think that God can predict our behavior without that entailing that our choices are
determined. On Holton’s use, in contrast, predictability is much stronger than deter-
minism for the reasons explained in the text.
11. Again, this is clearly what some scientists have in mind too. See Frijda (this volume)
on Prinz and fatalism.
12. But obviously even if the zombie challenge turned out to be correct, this would not
mean anything for the truth of predictability. This is because even though both doc-
trines entail the epiphenomenality of the conscious self, predictability is a much more
radical claim. The latter doctrine says that there is a possible world in which all control
mechanisms (whether they are conscious or zombie mechanisms) are powerless.
13. Adina Roskies shows that there are five ways in which volition is investigated in the
neuroscience literature today: (1) action initiation; (2) intention; (3) decision;
(4) inhibition and control; and (5) the phenomenology of agency.
14. For reviews of this literature, see the essays by Morsella et al.; Maglio et al.; and
Hall et al.
15. Many chapters in this volume deal with this question in one form or another (see
in particular the contributions by Bayne; Vargas; Vierkant; Maglio et al.; Frijda;
Morsella et al.).
16. Consider, in this light, Dennett’s discussion of a thought experiment from Mele
(1995) involving two characters, Ann and Beth. Ann’s actions are autonomous at
least sometimes, while Beth’s actions are the outcome of a psychology identical to
that of Ann except that her psychology is the product of brainwashing. Mele argues
that because Beth's actions have not been caused in the right way, they are not genuinely
free. Dennett argues persuasively that we should not regard Beth differently just
because she is ignorant of the springs of her action.
17. See as well the fascinating studies on three-month-old infants, who seem to enjoy
being able to control a mobile much more than if they get the exact same effect pas-
sively (Watson & Ramey 1987).
18. According to Velleman, traditional, purely belief-desire-based philosophical accounts
do not deliver this and therefore face the objection that their models are purely
hydraulic, which flies in the face of the phenomenology of agency. On Velleman’s
own account, the experience of agency can be explained by giving an account of the
functional role of the agent, which in humans is reducible to the desire to act ratio-
nally. Humans have a very strong desire to make sense of their actions. Whenever that
desire plays a part in influencing a decision, the resulting behavior, according to Velleman,
will feel agentive.
19. For a related distinction, see Bayne and Pacherie (2007) and Synofzik et al. (2008).
Tsakiris and Fotopolou (this volume) make a similar distinction between what they
call “feelings of agency” and “judgments of agency.”
20. Holton claims Nietzsche as the ancestor of this skepticism.
21. We are arguing, then, that the mechanisms that give us a prereflective sense of agency
are likely to be among those that make us responsible agents. We have already seen
earlier, however, that the prereflective sense of agency is too thin to account for
responsible agency. Thus we can still agree with Holton that reflection on the phe-
nomenology of agency probably will not help us to identify the mechanisms that
make us responsible agents.
22. See, e.g., Bayne’s discussion of how to distinguish patients in a minimally conscious
state from those in a vegetative state. Patients in a minimally conscious state may well
lack a capacity for introspective report while nevertheless being able to respond to
external stimuli in a way that suggests purpose or volition. Evidence for this comes
from the important studies by Owen et al. (2006).
23. We suggested earlier that Vargas’s rationality-based account may help us to respond
to zombie challenge–style arguments based on self-blindness, but this is not some-
thing we can pursue here. For more details, see his essay in this collection.
24. See Ward, Roberts, and Clark (2011) for an account of consciousness that also
stresses its connection with intentional agency, in particular our capacity to perform
epistemic actions like sifting, sorting, and tracking.
25. It is interesting to consider Bayne’s argument that intentional agency implies con-
sciousness in the light of Morsella and Bargh’s proposal. Bayne understands inten-
tional agency in terms of the cognitive integration of an action into an agent’s
cognitive economy. According to Morsella and Bargh, this cognitive integration
is what consciousness supplies by allowing for communication between distinct
action-generating systems.
26. Here he is following the work of Kuhl and Koole (2004).
27. For a concise overview, see §3 of Haji (2002).
28. Proust (this volume) offers a further reason why mental acts like thinking a particular
thought cannot be intentional. She points out that thoughts are normative, aiming
at truth or validity. Hence, success in performing a mental action is not a matter of
whether the mental act satisfies a thinker’s intention or preferences. Rather, it is a mat-
ter of whether the mental act possesses the right truth-evaluable property, and this is
not something that the agent can bring about just through their intentions or prefer-
ences. She writes: “Mental actions generally have normative-constitutive properties
that preclude their contents from being prespecifiable at will” (this volume, p.264).
29. We say “in part” because the other motivating factor Proust appeals to in her account
of mental agency is an instrumental one: the need to recover some information, such as the name of a friend's partner, on the fly in the heat of conversation.
30. See Heath and Anderson (2010) for a related account of how we can use kludges
and scaffolding so as to “stave off procrastination.”
31. One idea that is as fascinating as it is provocative in this context can be found in the
work of Professor Susan Hurley, who planned to contribute to this collection before
her untimely death in 2007. Hurley argues in her paper “Imitation, Media Violence,
and Freedom of Speech” (2004) that the distributors of violent media entertain-
ment should be held at least partially responsible if there is (as she argues there is) an
increase in real violence as a result of this. She argues that this is the case because the
link between media and real violence functions via an automatic imitative link that
individuals do not control.
32. See the essays by Morsella et al.; Frijda; Wu; and Vierkant.

REFERENCES
Arendt, H. (1971). The Life of the Mind. Orlando, FL: Harcourt.
Baer, J., Kaufman, J., & Baumeister, R. F. (2008). Are we free? Psychology and free will. New
York: Oxford University Press.
Bargh, J. A. (2005). Bypassing the will: Toward demystifying the nonconscious control
of social behavior. R. Hassin, J. Uleman, & J. Bargh (Eds.), The new unconscious. New
York: Oxford University Press.
Baumeister, R. F. (2010). Understanding free will and consciousness on the basis of cur-
rent research findings in psychology. R. F. Baumeister, A. Mele, & K. Vohs (Eds.), Free
will and consciousness: How might they work? Oxford: Oxford University Press: 24–43.
Baumeister, R. F., Masicampo, E. J., & DeWall, C. N. (2009). Prosocial benefits of feeling
free: Disbelief in free will increases aggression and reduces helpfulness. Personality and
Social Psychology Bulletin, 35, 260–268.
Bayne, T. (2006). Phenomenology and the feeling of doing: Wegner on the conscious
will. S. Pockett, W. Banks, & S. Gallagher (Eds.), Does consciousness cause behavior?
Cambridge, MA : MIT Press: 169–186.
Bayne, T., & Pacherie, E. (2007). Narrators and comparators: The architecture of agentive
self-awareness. Synthese, 159, 475–491.
Brass, M. & Haggard, P. (2007) To do or not to do: The neural signature of self-control.
Journal of Neuroscience, 27, 9141–9145.
Brooks, D. (2007). The morality line. New York Times, Opinion Pages, April 19 2007. http://
www.nytimes.com/2007/04/19/opinion/19brooks.html?_r=1&ref=davidbrooks.
Carruthers, P. (2009). How we know our own minds: The relationship between mindread-
ing and metacognition. Behavioral and Brain Sciences, 32, 121–138.
Chalmers, D. (1996). The conscious mind. Oxford: Oxford University Press.
Clark, A. (2008). Supersizing the mind: Embodiment, action and cognitive extension. New
York: Oxford University Press.
Dennett, D. (1987). The intentional stance. Cambridge, MA: MIT Press.
Dennett, D. (2003). Freedom evolves. London: Penguin.
Falk, E., Berkman, E. T., Mann, T., Harrison, B., & Lieberman, M. D. (2010). Predicting
persuasion-induced behavior change from the brain. Journal of Neuroscience, 30,
8421–8424.
Farrer, C., & Frith, C. D. (2002). Experiencing oneself vs another person as being the
cause of an action: The neural correlates of the experience of agency. NeuroImage, 15,
596–603.
Fischer, J., & Ravizza, M. (1998). Responsibility and control. Cambridge: Cambridge
University Press.
Frankfurt, H. (1971). Freedom of the will and the concept of a person. Journal of
Philosophy, 68, 5–20.
Freeman, A. (2000). Decisive action: Personal responsibility all the way down. B. Libet
(Ed.), The volitional brain. Exeter: Imprint Academic: 275–278.
Gallagher, S. (2006). Where’s the action? Epiphenomenalism and the problem of free
will. S. Pockett, W. Banks, & S. Gallagher (Eds.), Does consciousness cause behavior?
Cambridge, MA : MIT Press: 109–124.
Gallagher, S. (2012). Multiple aspects in the sense of agency. New Ideas in Psychology, 30,
15–31.
Ginet, C. (1990). On action. Cambridge: Cambridge University Press.
Haji, I. (2002). Compatibilist views of freedom and responsibility. Robert Kane (Ed.),
The Oxford handbook of free will. Oxford: Oxford University Press.
Hall, L., Johansson, P., & Strandberg, T. (2012). Lifting the veil of morality: Choice blindness
and attitude reversals on a self-transforming survey. PLoS ONE, 7(9), e45457.
doi:10.1371/journal.pone.0045457
Heath, J., & Anderson, J. H. (2010). Procrastination and the extended will. Chrisoula
Andreou & Mark White (Eds.), The thief of time: Philosophical essays on procrastination.
New York: Oxford University Press: 233–252.
Hieronymi, P. (2009). Two kinds of mental agency. L. O’Brien & M. Soteriou (Eds.),
Mental actions. Oxford: Oxford University Press.
Holton, R. (2010). Disentangling the will. R. F. Baumeister, A. Mele, & K. Vohs
(Eds.), Free will and consciousness: How might they work? Oxford: Oxford University
Press.
Horgan, J. (2002). More than good intentions: Holding fast to faith in free will. New York
Times, December 31, 2002. http://www.nytimes.com/2002/12/31/science/essay-more-than-good-intentions-holding-.
Hurley, S. (2004). Imitation, media violence, and freedom of speech. Philosophical Studies,
117, 165–218.
Johansson, P., Hall, L., Sikström, S., & Olsson, A. (2005). Failure to detect mismatches
between intention and outcome in a simple decision task. Science, 310, 116–119.
Kane, R. (2002). The Oxford handbook of free will. New York: Oxford University Press.
Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008). Identifying natural images
from human brain activity. Nature, 452, 352–355.
Kuhl, J., & Koole, S. L. (2004). Workings of the will: A functional approach. Handbook of
experimental existential psychology. New York: Guilford: 411–430.
Lau, H., & Passingham, R. E. (2007). Unconscious activation of the cognitive con-
trol system in the human prefrontal cortex. Journal of Neuroscience 27(21):
5805–5811.
Lau, H., Rogers, R., Ramnani, N., & Passingham, R. E. (2004). Willed action and the
attentional selection of action. Neuroimage, 21, 1407–1415.
Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in volun-
tary action. Behavioral and Brain Sciences, 8, 529–566.
Mele, A. (1995). Autonomous agents. Oxford: Oxford University Press.
Mele, A. (2009). Mental action: A case study. L. O’Brien & M. Soteriou (Eds.), Mental
actions. Oxford: Oxford University Press.
Mele, A. (2010). Effective intentions: The power of conscious will. Oxford: Oxford University
Press.
Moran, R. (2001). Authority and estrangement: An essay on self-knowledge. Princeton, NJ:
Princeton University Press.
Nahmias, E. (2005). Agency, authorship and illusion. Consciousness and Cognition, 14,
771–785.
Nietzsche, F. (1966). Beyond good and evil. Trans. W. Kaufmann. New York: Vintage.
Owen, A. M., Coleman, M. R., Boly, M., Davis, M. H., Laureys, S., & Pickard, J. D. (2006).
Detecting awareness in the vegetative state. Science, 313, 1402.
Pacherie, E. (2007). The sense of control and the sense of agency. Psyche, 13(1). http://
jeannicod.ccsd.cnrs.fr/docs/00/35/25/65/PDF/Pacherie_sense_of_control_
Psyche.pdf.
Pettit, P. (2007). Neuroscience and agent control. D. Ross, D. Spurrett, H. Kinkaid, & G.
Lynn Stephens (Eds.), Distributed cognition and the will. Cambridge, MA : MIT Press:
77–91.
Pockett, S., Banks, W., & Gallagher, S. (2006). Does consciousness cause behaviour?
Cambridge, MA : MIT Press.
Prinz, W. (2003). Emerging selves: Representational foundations of subjectivity.
Consciousness and Cognition, 12, 515–528.
Ross, D., Spurrett, D., Kinkaid, H., & Lynn Stephens, G. (2007). Distributed cognition and
the will. Cambridge, MA: MIT Press.
Roth, G. (1994). Das Gehirn und seine Wirklichkeit. Frankfurt: Suhrkamp.
Shirer, W. R., Ryali, S., Rykhlevskaia, E., Menon, V., & Greicius, M. D. (2012). Decoding
subject-driven cognitive states with whole brain connectivity patterns. Cerebral Cortex,
22, 158–165.
Sinnott-Armstrong, W., & Nadel, L. (Eds.). (2011). Conscious will and responsibility. Oxford:
Oxford University Press.
Smilansky, S. (2002). Free will, fundamental dualism, and centrality of illusion. R. Kane
(Ed.), The Oxford handbook of free will. Oxford: Oxford University Press.
Soon, C. S., Brass, M., Heinze, H-J., & Haynes, J-D. (2008). Unconscious determinants of
free decisions in the human brain. Nature Neuroscience, 11, 543–545.
Strawson, G. (2003). Mental ballistics or the involuntariness of spontaneity. Proceedings of
the Aristotelian Society, 103, 227–256.
Synofzik, M., Vosgerau, G., & Newen, A. (2008). Beyond the comparator model: A multi-
factorial two-step account of agency. Consciousness and Cognition, 17, 219–239.
Tsakiris, M., Carpenter, L., James, D., & Fotopoulou, A. (2009). Hands only illu-
sion: Multisensory integration elicits sense of ownership for body parts but not for
non-corporeal objects. Experimental Brain Research, 204, 343–352.
Tsakiris, M., Prabhu, G., & Haggard, P. (2005). Having a body versus moving your body:
How agency structures body-ownership. Consciousness and Cognition, 15, 423–432.
Tusche, A., Bode, S., & Haynes, J-D. (2010). Neural responses to unattended products
predict later consumer choices. Journal of Neuroscience, 30, 8024–8031.
Vargas, M. (2007). Revisionism: Four views on free will (Great Debates in Philosophy). Ed.
J. M. Fischer et al. Oxford: Oxford University Press.
Velleman, D. (1992). What happens when someone acts? Mind, 101(403), 461–481.
Vierkant, T. (2008). Willenshandlungen. Frankfurt: Suhrkamp.
Vohs, K., & Schooler, J. (2008). The value of believing in free will: Encouraging a belief in
determinism increases cheating. Psychological Science, 19, 49–54.
Ward, D., Roberts, T., & Clark, A. (2011). Knowing what we can do: Actions, intentions,
and the construction of phenomenal experience. Synthese, 181, 375–394.
Watson, G. (1982). Free agency. G. Watson (Ed.), Free will. Oxford: Oxford University Press.
Watson, J. S., & Ramey, C. T. (1987). Reactions to response-contingent stimulation in
early infancy. J. Oates & S. Sheldon (Eds.), Cognitive development in infancy. Hillsdale,
NJ: Erlbaum.
Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: MIT Press.
Wegner, D. M., & Wheatley, T. P. (1999). Apparent mental causation: Sources of the expe-
rience of will. American Psychologist, 54, 480–492.
Wilson, T. (2002). Strangers to ourselves: Discovering the adaptive unconscious. Cambridge,
MA : Belknap Press.
Wolf, S. (1990). Freedom within reason. Oxford: Oxford University Press.
Wolfe, T. (1996/2001). Sorry, but your soul just died. Originally printed in Forbes
Magazine, reprinted in T. Wolfe, Hooking-Up. New York: Farrar, Straus and Giroux:
89–113.
PART ONE

The Zombie Challenge


2

The Neuroscience of Volition

ADINA L. ROSKIES

The concept of volition plays an important role in everyday attributions of respon-
sibility, as well as in the law, in theology, and in a number of philosophical domains.
Because each of these domains tends to refer to a single entity, “the will,” or to
neatly distinguish behaviors undertaken intentionally from those that are auto-
matically generated or constrained, it is natural to suppose that there is a unified
mental faculty that is the faculty of volition, and that in any given circumstance,
either it is operative or it is not. Neuroscientific investigations into the biological
basis of voluntary action are revealing that volition may instead be a heterogeneous
concept, applicable in degree, and unified perhaps only in the embodied actor. This
view may lead to a reconceptualization of human agency as quite unlike the auton-
omous, unified, self-moved movers often supposed by philosophy. The continued
investigation of the nature of volition will further clarify the concept and perhaps
lead to new philosophical puzzles and surprises. The aim of this chapter is to sketch
the current state of knowledge about the neural bases of volition and to consider
the philosophical consequences of this knowledge. The chapter is unavoidably
idiosyncratic, since there is no uncontroversial, univocal concept of volition to be
found in philosophy or in the sciences (Audi 1993; Zhu 2004a, 2004b; Brass and
Haggard 2008).
Although there is no agreed-upon definition of volition, it is an old and vener-
able concept. It has been around since the ancient Greeks, and throughout the
ages the folk have readily used the concept, apparently without obvious difficulty.
The perceived difficulty rests instead in determining whether our will has a certain
property—that is, the property of being free. Since freedom of the will is of major
philosophical concern, I will when appropriate discuss what the neuroscientific
evidence may tell us about freedom. To foreshadow perhaps too much, I confess
now that current knowledge has little direct bearing on this age-old philosophical
problem. However, recent work on the biological basis of volition may highlight
problems rarely discussed about the nature of volition itself and may prompt us to
ask novel questions about the will, or to frame old questions in new ways.
Because we cannot rely upon definitions, I will presume that the folk conception
of the will is a legitimate shared starting point for discussion. Generally speaking,
volition is a construct used to refer to the ground for endogenous action, autonomy,
or choice. From here, intuitions vary. In some areas of research voluntary actions
are contrasted with actions that are reflexive or elicited by the environment. This
puts the focus of volition on the flexibility or initiating causes of action. In contrast,
in some strands of research the focus is not on bodily action at all but rather on the
mental process of decision making.
The heterogeneity of the preceding visions of the will explains in part why an
attempt to identify the neural basis of volition might be difficult, for the absence of
a clear concept of volition complicates the task of investigating it experimentally,
or even determining what counts as relevant data. Another difficulty arises from
the recognition that in order for neuroscientific research to bear upon the concep-
tion of volition, volition has to be operationalized in some way, so that it can be
approached experimentally. Despite these difficulties, it is now possible to make
some headway.
The intuitive, but less than clear, concept of the will is reflected in the many ways
in which volition has been operationalized. For example, if one takes voluntary
action to contrast with stimulus-generated action, examining the neural events that
distinguish self-initiated movements from similar movements that are responses to
external stimuli ought to provide insight into the proximal mechanisms underly-
ing endogenously generated action. However, if one conceives of volition instead
as primarily related to abstract plans for future action, the proximal mechanisms
that lead to simple movements may be of less interest than the longer-term plans or
intentions one has or forms. Some research on intentions attempts to address these
higher-level aspects of motor planning.
Philosophical discussions of volition often focus upon the ability to choose to
act in one way or another. Although historically this emphasis on choice may be a
vestige of an implicit dualism between the mentalism of choice and the physical-
ism of action, the processes underlying decision as a means for forming intentions
seem to be a central aspect of volition even if one rejects dualism, as have most con-
temporary philosophers and scientists. Yet a different approach to volition focuses
less on the prospective influence of the will on future action than on the occurrent
ability of the agent to inhibit or control action. Moreover, since some lines of evi-
dence from psychology and neuroscience suggest that actions are often initiated
unconsciously, a number of people have argued that if free will is to exist, it will
take the form of control or veto power over unconsciously initiated actions. Finally,
regardless of whether one believes that we can act or choose freely, we normally do
perceive certain actions as self-caused and others as not. There is a phenomenology
that accompanies voluntary action, involving both a sense of causal potency and
ownership. Neuroscience has begun to make significant headway in illuminating
the physiological basis of the phenomenology of agency.
Reflecting the distinct conceptions of will canvassed earlier, I organize my
discussion around the following five topics: (1) action initiation, (2) intention,
(3) decision, (4) inhibition and control, and (5) the phenomenology of agency.
Each of these maps readily to elements of the commonsensical conceptions of
volition and can be associated with identifiable bodies of neuroscientific research.
It is not surprising, and perhaps encouraging, that many of the research areas blend
into each other or intersect in suggestive ways. It is not clear, however, whether the
picture that emerges is one that comports with the view of the will as a unitary fac-
ulty, an identifiable source or spark of action. Volition may instead be the result of
the operation of a widely distributed, constantly varying system for planning and
action that operates on a number of spatial and temporal scales simultaneously.
A few caveats before we begin. First, in light of the lack of agreement about
the concept of volition itself, and its close relation to discussions of the will and
of agency, I will feel free to use the terms volition, will, and agency interchangeably,
hopefully without additionally muddying the waters. That is not to say that there
aren’t substantive distinctions to be made between them. This chapter does not
attempt to review the philosophical work regarding volition and agency (see, e.g.,
Mele 2005, 2006, 2009). Second, I focus more on discussions of how neurosci-
ence has, can, or may affect our conception of volition rather than on an exhaustive
review of the neuroscientific literature. The body of empirical work with some con-
nection to volition is too vast to review thoroughly, since it encompasses practically
all aspects of human behavior, from planning to action. In addition, there are many
disagreements within the field; here I often gloss over these points of contention if
I do not think that the resolution of the dispute substantially affects our philosophi-
cal views. Finally, I leave out large portions of arguably relevant literature. For exam-
ple, I do not here discuss the sizable and fascinating literature concerning the neural
basis of perceiving and attributing agency to others, despite the fact that it may be
relevant to understanding self-perception of agency. I note here that it is likely that
common mechanisms are involved in the perception of agency in self and other, and
leave it to the interested reader to explore in more detail (Rizzolatti and Craighero
2004; Fogassi et al. 2005; Cunnington et al. 2006; Farrer et al. 2008; Farrer and Frith
2002). Moreover, I do not discuss the growing psychological literature on automatic
processes, which also puts pressure on our commonsense notions of agency. I try
to restrict my focus to specifically neural investigations (Balaguer 2004; Glimcher
2005; Gold and Shadlen 2007; Mainen and Sejnowski 1995; Franks, Stevens, and
Sejnowski 2003; Nahmias, Coates, and Kvaran 2007; Monterosso, Royzman, and
Schwartz 2005).

THE NEUROSCIENCE OF VOLITION


I summarize the current state of research on the following topics: (1) action initia-
tion, (2) intention, (3) decision, (4) inhibition and control, and (5) the phenom-
enology of agency. The hope is that through a patchwork attempt at characterizing
what we know scientifically about volition, a larger picture will emerge, even if the
pieces do not fit neatly together.

Volition as Initiation of Action


The will is thought to be paradigmatically operative in endogenously generated or
self-initiated actions, as opposed to exogenously triggered actions, like reflexes or
simple stimulus-response associations. Another dichotomy, often conflated with
the first, is between flexible and stereotyped behavior. Stereotyped behavior is often
thought to be automatic, triggered, or reflex-like, as opposed to flexible behavior,
in which higher (volitional or conscious) processes are thought to adapt behavior
to changing demands. Both characterizations may be criticized on a number of
fronts. For example, these characterizations crosscut each other: there are stereo-
typed behaviors that are clearly consciously initiated, and flexible behaviors that
are externally driven. What is typically characterized as endogenous behavior may
be due to exogenous but not proximal cues, such as experimenter instruction, and
exogenously cued actions may still be species of voluntary movement. In addition,
it is likely that such dichotomies in action types don’t exist: actions fall along a spec-
trum, probably best characterized with these and other dimensions.
The foregoing discussion merely highlights the difficulties one faces when trying
to operationalize something as intangible as willed action. Despite these worries, the
earliest, and arguably the most influential, neuroscientific studies on volition have
compared brain activity during self-initiated action and externally cued actions and
have found differences in the functional architecture subserving these two types of
actions. Even if actions lie upon continua of flexibility and the proximity and diver-
sity of external triggers, important insight can be gained by comparing movements
that are clearly cued from similar movements whose proximal causes are internal.
Neuroimaging studies of endogenous generation of simple motor actions com-
pared with rest consistently show activation of primary motor cortex, cortical
areas such as supplementary motor area (SMA) and presupplementary motor area
(preSMA), dorsolateral prefrontal cortex (DLPFC), regions in the anterior cin-
gulate, and basal ganglia. These networks are involved in both cued and uncued
action. Increased activity in preSMA is often reported in uncued actions, whereas
cued responses seem to involve a network including parietal and lateral premotor
cortices that mediate sensory guidance of action. There seems to be a difference of
degree, rather than kind, in the activity of the basal ganglia/preSMA and parietal/
lateral premotor circuits in endogenously initiated and cued actions.
The SMA has long been an area of interest for action generation. Prior to the
availability of imaging techniques, EEG recordings at the vertex revealed a slow
negative electrical potential that precedes motor activity by 500 milliseconds or
more. This “readiness potential” (RP) was initially hypothesized to arise in the
SMA (Deecke and Kornhuber 1978; Jahanshahi et al. 1995). Further studies have
suggested that the RP reflects more than one component process (Haggard and
Eimer 1999; Shibasaki and Hallett 2006; Libet, Wright, and Gleason 1982), and the
source of the early components of the RP has been localized to preSMA (Shibasaki
and Hallett 2006). The magnitude of the RP is greater in self-paced than in cued
movements, and studies indicate that the late and peak phases of this electrical sig-
nal are associated with spontaneous or self-initiated motor acts, while the earliest
components may be more involved in cognitive processes related to preparation or
motivation (Libet, Wright, and Gleason 1982; Trevena and Miller 2002; Jahanshahi
et al. 1995). The RP has become famous, or infamous, for the role it played in Libet’s
theorizing about free will, and in the subsequent literature discussing Libet’s results.
Libet’s work will be discussed in greater detail in a following section.
Despite abundant evidence implicating medial frontal cortex in self-initiation of
movements, the source and function of the differences in brain
activity during self-initiated and cued actions are matters about which there is less
consensus. A positron emission tomography (PET) study controlling for predict-
ability of movement timing suggests that the signals associated with self-paced
actions arise in the rostral SMA, anterior cingulate, and DLPFC (Jenkins et al.
2000). Using functional magnetic resonance imaging (fMRI), Deiber et al. (1999)
found that activations in preSMA and rostral cingulate zone (rCZ) were greater
in self-initiated than externally cued movements. Cunnington et al. (2002) found
a difference in the timing, but not level of activation in preSMA with self-paced
movements, and also report activation in rCZ. Lau and colleagues (Lau, Rogers,
Ramnani, et al. 2004) try to disentangle attention to selection of action from ini-
tiation, and find that preSMA, but not DLPFC, is preferentially activated during
initiation. However, in this study the greater activity in this region is correlated with
time on task, and thus may not reflect specificity for initiation. Mueller et al. (2007)
argue that once other variables are controlled for, preSMA fails to show differences
between self-initiated and cued movement tasks, and they associate self-initiated
movements with activity in rCZ and not preSMA.
The importance of the preSMA for self-generated action is supported by direct
interventions on brain tissue. Lesions of the preSMA in the monkey inhibit
self-initiation of action but not cued action (Thaler et al. 1995). In humans, direct
electrical stimulation of regions in the SMA (including preSMA) produces an urge
to move; stronger stimulation results in action (Fried et al. 1991). In addition,
repetitive transcranial magnetic stimulation (rTMS) of preSMA disrupts initia-
tion of uncued motor sequences (Kennerley, Sakai, and Rushworth 2004) during
task switching. These studies provide further evidence of the involvement of these
areas in action initiation. These regions also may be involved in automatic inhibition
of competing responses: on the basis of unconscious priming studies with lesion
patients, Sumner et al. (2007) report that SMA (but not preSMA) mediates auto-
matic inhibition of motor plans in performance of alternative voluntary actions.
Lesions in these regions prevent that inhibition (Sumner et al. 2007) and appear
in some syndromes characterized by involuntary motor behavior, such as anarchic
hand syndrome.
In summary, current neuroscience suggests that the areas most consistently asso-
ciated with action initiation are the rCZ and preSMA, but interpretation of their
function is still controversial. A variety of factors, including differences in experi-
mental paradigms and the presence of task confounds that may complicate interpre-
tation, make it difficult to reconcile the results of this body of studies with confidence.
In addition, it would be a mistake to identify action initiation with any single brain
region. It is clear that networks of brain areas work together to produce behavior,
and evidence for different levels of brain activity in self-generated versus cued
action need not imply that the relevant signals originate there. Until more is known
about the computations involved, the precise identification of regions involved in
self-initiation does little to influence our conception of volition: in the absence
of other information about the functional role of brain regions, why would activ-
ity occurring in one area or another affect our philosophical views about the will?
Perhaps the finding that there are increases in brain activity with self-initiated action
supports the physicalist notion that volition is a manifestation of brain activity, but
since correlation cannot prove causation, even this conclusion is not mandated. We
can be sure that future work will better resolve the regions involved in self-initiated
and externally cued activity, and the circuits mediating such processing. While not
resolving any mysteries about the seat of volition, these results may provide clearer
targets for future experiments.

Volition as Intention
Intentions are representational states that bridge the gap between deliberation and
action. Arguably, intentions can be conscious or unconscious. Moreover, there may
be different types of intention, involved in different levels of planning for action.
If we assume that intentions are the proximal cause of all voluntary movement,
then studies of initiation of action and of intention may well concern the same phe-
nomena (we might call these proximal intentions or motor intentions, or as some call
them, volitions). However, we also commonly refer to intentions in a broader, more
abstract sense, as standing states that constitute conscious or purposeful plans for
future action, that exist prior to and independently of action execution. In moral and
legal contexts, when we ask whether a person acted intentionally, we often employ
this more general notion of intention.
In general, willed action involves the intention to act, and many presume that
freely willed actions must be caused by our conscious intentions. The efficacy of our
conscious intentions was challenged by the studies of Benjamin Libet, who exam-
ined the relative timing of awareness of the intention to move and the neural signals
reflecting the initiation of action. Libet reported that the time of onset of the RP
occurs approximately 350 milliseconds or more prior to the awareness of an urge
or intention to move (Libet, Wright, and Gleason 1982, 1983; Libet et al. 1983;
Libet 1985). Libet and others have viewed this discrepancy as evidence that actions
are not consciously initiated (Libet et al. 1983; Libet 1985; see Banks 2002). Many
have taken these results as a challenge to free will, on the supposition that conscious
intention must drive, and thus precede, initiation of action, for that action to be
freely willed. Although Libet’s basic neurophysiological findings about RP tim-
ing have withstood scrutiny (Trevena and Miller 2002; Haggard and Eimer 1999;
Matsuhashi and Hallett 2008), his interpretations have been widely criticized. For
example, Libet’s data do not enable us to determine whether the RP is always fol-
lowed by a movement, and thus whether it really reflects movement initiation, as
opposed to a general preparatory signal or a signal related to intention (Mele 2006;
Roskies 2011; Mele 2009). Haggard and Eimer use temporal correlation to explore
the possibility that the anticipatory brain processes identified by Libet and oth-
ers underlie the awareness of intention. Their results suggest that a different sig-
nal, the lateralized readiness potential (LRP), is a better candidate than the RP for
a brain process related to a specific motor intention (Haggard and Eimer 1999);
other data suggest that awareness of intention may precede the LRP (Trevena and
Miller 2002). These findings call into question Libet’s most revolutionary claim,
that awareness of intention occurs after the motor preparatory signals. If the RP is a
precursor to a proximal intention, rather than the neural correlate of such an inten-
tion, then the fact that it occurs prior to consciousness does not rule out conscious
intention as a necessary precursor to action, nor do his data speak to the relative
timing of action initiation and conscious intention. Moreover, if the RP regularly
occurs without movement occurring, then the RP cannot be thought of as the ini-
tial stages of a motor process. Others question the relevance of the paradigm used.
There is reason to think that Libet’s experimental design fails to accurately measure
the onset of conscious intention to move (Young 2006; Bittner 1996; Roskies 2011;
Lau, Rogers, and Passingham 2006, 2007), so that inferences about relative timing
may not be reliable. More philosophical objections to the paradigm are also com-
pelling. Are instructed self-generated finger movements an appropriate target task
for exploring questions of freedom? There are reasons to think not, both theoretical
and experimental. For instance, we are typically interested in freedom because we
are interested in responsibility, but an inconsequential act such as finger raising is
not the type of act relevant to notions of responsibility. In addition, the paradigm
may not measure the time of conscious intention but rather the time of the meta-
cognitive state of being aware that one has a particular conscious intention. It may
be that what we need to be conscious of when choosing are the options available
to us but not the motor intention itself (Mele 2009; Young 2006; Bittner 1996;
Roskies 2011; Lau, Rogers, and Passingham 2006, 2007). Thus, despite the fact that
Libet’s studies remain the most widely heralded neuroscientific challenge to free
will, a growing body of work questions their relevance. A great deal has been written
about Libet’s work, both sympathetic and critical. There is far too much to discuss
here (Mele 2009; Banks and Pockett 2007; Sinnott-Armstrong and Nadel 2011;
Banks 2002; Pacherie 2006). However, because of the experimental and interpre-
tive difficulties with his paradigms, in this author’s eyes, Libet’s studies do little to
undermine the general notion of human freedom.
Challenges to freedom from neuroscience are not restricted to Libet’s results. In
a recent event-related fMRI study probing the timing of motor intentions, Haynes
and colleagues used pattern classification techniques on data from regions of fron-
topolar and parietal cortex to predict a motor decision. Surprisingly, information
that aided prediction was available 7 to 10 seconds before the decision was con-
sciously made, although prediction success prior to the subject’s awareness was only
slightly better than chance (~60 percent; Soon et al. 2008). This study demonstrates
that prior brain states, presumably unconscious, are causally relevant to decision
making. This is unsurprising. Neural precursors to decision and action, and physi-
cal influences on behavior are to be expected from physically embodied cognitive
systems, and finding such signals does not threaten freedom. The authors’ sugges-
tion that this study poses a challenge to free will is therefore somewhat misleading.
Nonetheless, it is perhaps surprising that brain information could provide guidance
to future arbitrary decisions that long in advance: the best explanation of the data is
that there are biases to decision of which we are not conscious. The weak predictive
success of these studies does not undermine our notion of volition or freedom, since
nothing about these studies shows that our choices are dictated by unconscious
intentions, nor do they show that conscious intentions are inefficacious. First, the
subjects moved after making a conscious decision to do so, so the study does not
show that conscious intentions are not relevant to action. Second, only if our model
of free will held that (1) absolutely no factors other than the immediate exercise of
the conscious will could play into decision making, and (2) nothing about the prior
state of the brain could affect our decision making (the determination of the will)
would these results undermine our ideas about free will. But no plausible theory of
free will requires these things (indeed, if it did, then no rational decision could be
free). However, because this study shows that brain data provide some information
relevant to future decisions well prior to the act of conscious choice, it nonetheless
raises important challenges to ordinary views about the complete independence
and episodic or momentary nature of arbitrary choice.
Some studies have attempted to identify areas in which specific intentions are
encoded. For example, Lau and colleagues (Lau, Rogers, Haggard, et al. 2004)
instructed subjects to press a button at will, while attending either to the timing
of their intention to move, or to the movement itself. Attention to intention led to
increased fMRI signal in pre-SMA, DLPFC, and intraparietal sulcus (IPS) relative
to attention to movement. A large body of imaging results indicates that attention
to specific aspects of a cognitive task increases blood flow to regions involved in
processing those aspects (Corbetta et al. 1990; O’Craven et al. 1997). If so, it is rea-
sonable to interpret the aforementioned results as an indication that motor inten-
tion is represented in the pre-SMA. These results are consistent both with evidence
discussed earlier that proximal intentions leading to self-initiated motor activity are
represented in the pre-SMA, and also with the view that conscious intentions are
represented there as well. (For a discussion of how these differ, as well as the dif-
ficulty of determining what is meant by conscious intention, see chapter 2 of Mele
2009.)
In addition to pre-SMA, Lau’s study highlighted frontal and parietal regions often
implicated in intentional action. Hesse and colleagues (Hesse et al. 2006) identify a fronto-
parietal network involved in motor planning, including left supramarginal gyrus, IPS, and frontal
regions. The left anterior IPS has also been associated with goal representation, cru-
cial in motor planning (Hamilton and Grafton 2006). Lau’s results are consistent
with the view that posterior parietal regions represent motor intentions (Andersen
and Buneo 2003; Cui and Andersen 2007; Thoenissen, Zilles, and Toni 2002;
Quian Quiroga et al. 2006). Sirigu et al. (2004a) report that damage to parietal cor-
tex disrupts awareness of intention to act, although voluntary action is undisturbed.
The role of posterior parietal cortex in the experience of intention will be further
discussed in a later section.
Often we think of intentions as more abstract plans, not closely related to motor
activity. Little neuroscientific work has focused explicitly on abstract human inten-
tions, in part because it is so difficult to figure out how to measure them objectively.
Frontal cortex is generally thought to be the site of executive function. Many stud-
ies indicate that dorsal prefrontal cortex (DPFC) is active in tasks involving willed
action. Medial parts of DPFC may be involved in thinking about one’s own inten-
tions (den Ouden et al. 2005), whereas DLPFC may be involved in generating
cognitive as well as motor responses (Frith et al. 1991; Hyder et al. 1997; Jenkins
et al. 2000; Lau, Rogers, Haggard, et al. 2004). However, it is difficult to determine
whether the activity observed corresponds to selection, control, or attention to
action. Lau and colleagues (Lau, Rogers, Ramnani, et al. 2004) attempt to control
for working memory and attention in a task that had a free response condition and
an equally attention-demanding specified response condition in order to determine
what areas are involved in selection of action. DLPFC was not more active in the
free choice condition than in the externally specified selection condition, suggesting it had
more to do with attention to selection than with choice. In contrast, preSMA was
more active in free choice than in other conditions. This provides further evidence
that preSMA is involved in free selection of action. Moreover, attention to selec-
tion involves DLPFC. Since attention may be required for awareness of intention,
DLPFC activity may be important for conscious intention.
Thus far, the regions discussed are active during intentional tasks, but their
activity reveals little about neural coding of the content of intentions. Using pat-
tern analysis on fMRI data from regions of prefrontal and parietal cortex, Haynes
and colleagues (2007) were able to predict with up to 70 percent accuracy a subject’s
conscious but covert intention to add or subtract numbers. Information related to
specific cognitive intentions is thus present in these regions (including medial, lat-
eral, and frontopolar prefrontal regions) while the subject holds his intended action
in mind. Interestingly, the regions that are predictive appear to be distinct from the
ones generally implicated in representation of intention or endogenous actions,
raising the possibility that information related to the content of intention is rep-
resented differently depending on task. Although these studies do not yet provide
useful information about how intentional content is encoded, they do suggest that
the relevant information in this task is distributed in a coarse enough pattern that
differences are detectable with current technology.
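To make the logic of these pattern-classification (“decoding”) analyses concrete, the sketch below illustrates the general recipe: multivoxel activity patterns from a region of interest are labeled with the covert intention on each trial, a linear classifier is trained on those patterns, and decoding accuracy is estimated by cross-validation. This is a minimal illustration of the approach, not the authors’ actual analysis pipeline; the simulated data, the region, and all parameter choices are hypothetical.

```python
# Minimal sketch of fMRI pattern classification ("decoding") of covert intentions.
# Hypothetical data: rows are trials, columns are voxels from a region of interest;
# labels code the intention on each trial (0 = "add", 1 = "subtract").
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200

# Simulated voxel patterns: a weak, spatially distributed difference between the
# two intentions, buried in trial-to-trial noise.
labels = rng.integers(0, 2, size=n_trials)
signal = 0.3 * np.outer(labels, rng.normal(size=n_voxels))
patterns = signal + rng.normal(size=(n_trials, n_voxels))

# Linear support vector classifier, evaluated with 10-fold cross-validation.
clf = SVC(kernel="linear", C=1.0)
accuracy = cross_val_score(clf, patterns, labels, cv=10).mean()

# Cross-validated accuracy reliably above the 50 percent chance level indicates that
# the region's activity pattern carries information about the intention.
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```

The point of the exercise is the same as in the studies described above: above-chance decoding shows that information about the intention is carried in the region’s distributed pattern of activity.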
To date, neuroscience has shown that mechanisms underlying endogenous ini-
tiation and selection of action have some features that deviate from commonsensi-
cal conceptions of volition, largely with regard to the relative timing of neural events
and awareness. Recent studies do seem to indicate decisions can be influenced by
factors of which a subject is not conscious. However, this does not warrant the con-
clusion that conscious intentions are inefficacious, that our choices are determined
or predetermined, or that consciousness is epiphenomenal. Although in certain
contexts neural mechanisms of selection and motor intention may be unconsciously
activated, once one takes into account the variety of levels at which intentions oper-
ate (Pacherie 2006; Roskies 2011; Mele 2009), none of the current data undermine
the basic notions of volition or free will. Reports of the death of human freedom
have been greatly exaggerated.

Volition as Decision Making


On one prevalent view, the paradigmatic exercise of the will lies in our ability to
choose what course of action to take. Many philosophers have located freedom of the
will in the ability to choose freely which intentions to form. Decision often precedes
intention and initiation, and may be thought of as the point at which intentional
action originates.
Researchers in primate neurophysiology are constructing a rich picture of the
dynamics of perceptual decision making, using single-cell recording and population
modeling. Because of its cohesiveness and breadth, I concentrate on a body of work
from the laboratories of William Newsome, Michael Shadlen, and colleagues, who
have elucidated in detail the neural basis of decision making under uncertainty
using a visual motion paradigm. Because this work has been extensively reviewed
elsewhere (Glimcher 2001, 2003; Gold and Shadlen 2007), I only briefly summa-
rize the main findings here.
These studies share a common paradigm: rhesus macaques view random-dot
motion displays. The monkey’s task is to fixate on the stimulus, determine the
direction of net (coherent) motion, and indicate that direction by moving its eyes
to one of two targets placed to the right and left of the fixation point. The task is
made more or less difficult by changing the
percentage of dots with component motion vectors in a particular direction, thus
altering the coherence (or strength) of the net motion. By recording from cells in
different brain areas during task performance, neuronal contributions to decision
can be elucidated.
Cells in visual areas MT (middle temporal) and MST (medial superior tempo-
ral) are tuned to motion in particular directions. Recording from cells in these areas
whose receptive fields are coincident with the location of the visual stimulus indi-
cates that their neural activity reflects the momentary strength of the motion signal
in the cell’s preferred direction of motion (Newsome, Britten, and Movshon 1989;
Britten et al. 1992; Celebrini and Newsome 1994). Neural activity in the lateral
intraparietal area (LIP) shows a different profile. Neurons in LIP represent both
visual and motor information (Shadlen and Newsome 1996). LIP cells appear to
integrate signals from extrastriate motion areas over time (Huk and Shadlen 2005);
they are also active in the planning and execution of eye movements (Andersen and
Buneo 2002).
In the random-dot motion task, for example, a stimulus with rightward coherent
net motion will lead to a ramping up of activity in LIP neurons whose response field
encompasses the corresponding saccade target (Shadlen and Newsome 2001). The
rate of increase in firing is proportional to motion strength, and when the activity
in the LIP neurons reaches a certain absolute level, the monkey makes a saccade to
the target, and the firing of the LIP neurons ceases (Roitman and Shadlen 2002).
Another way of describing the results is as follows: LIP neurons seem to accumulate
evidence of motion strength from sensory areas, until a certain threshold is reached,
and a decision is made (Huk and Shadlen 2005). This interpretation is strength-
ened by the finding that if the monkey is trained to withhold her response until
cued, LIP neurons with response fields in the planned response direction maintain
elevated firing rates during the delay period and only cease their activity after the
saccade. This demonstrates that unlike neurons in MT and MST, these neurons are
not purely stimulus-driven, and their continued firing in the absence of the stimu-
lus is taken to reflect the online maintenance of the monkey’s “decision” about the
motion direction of the stimulus (or the corresponding required action), until the
completion of the task (Shadlen and Newsome 2001). Further evidence of LIP
neuron involvement in these perceptual decisions has been found. For instance,
activity in these neurons predicts the monkey’s response not only in trials in which
the monkey chooses correctly but also when he chooses incorrectly (Shadlen and
Newsome 2001). Moreover, microstimulation of LIP neurons with response fields
corresponding to a saccade target biases the monkey’s choice and affects the timing
of his responses (Hanks, Ditterich, and Shadlen 2006). This is perhaps the strongest
evidence for the causal involvement of LIP neurons in the decision-making process.
The large body of work characterizing this system has enabled researchers to formal-
ize the relevant computations performed by this neural system, allowing them to
design mathematical models that capture the behavioral patterns exhibited by the
monkeys (Mazurek et al. 2003; Gold and Shadlen 2007).
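The computational picture just described, in which momentary motion evidence is summed over time until a decision bound is reached and the rate of accumulation scales with motion strength, can be made concrete with a toy simulation. The sketch below is a rough illustration of a bounded-accumulator model of the general kind formalized in this literature (e.g., Mazurek et al. 2003; Gold and Shadlen 2007), not a reimplementation of any published model; all parameter values are invented for illustration.

```python
# Toy bounded-accumulation ("drift to bound") model of the random-dot motion decision.
# The running sum of momentary evidence plays the role of the LIP firing rate, and a
# choice is registered when it reaches one of two bounds. Parameters are illustrative.
import random

def decide(coherence, bound=30.0, noise=1.0, max_steps=2000):
    """Return (choice, decision_time) for a motion coherence in [-1, 1].

    Positive coherence favors "right," negative favors "left." Arbitrary units.
    """
    evidence = 0.0
    for t in range(1, max_steps + 1):
        # Mean drift is proportional to motion strength; Gaussian noise stands in for
        # variability in the momentary sensory signal.
        evidence += coherence + random.gauss(0.0, noise)
        if evidence >= bound:
            return "right", t
        if evidence <= -bound:
            return "left", t
    # If neither bound is reached, report the currently favored alternative.
    return ("right" if evidence > 0 else "left"), max_steps

# With 0 percent coherence the accumulator is driven by noise alone, so choices split
# roughly evenly; stronger coherence yields faster and more consistent choices.
for coh in (0.0, 0.05, 0.2):
    print(coh, [decide(coh) for _ in range(3)])
```

Stronger motion drives the accumulator to its bound sooner and more reliably, reproducing the qualitative relationship between motion strength, firing rate, and choice described above; the 0 percent coherence case corresponds to the ambiguous-stimulus trials discussed later in this section.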
What relevance does this have to human decision making? On the positive side,
human and monkey visual systems are highly homologous, and psychophysical
results from humans and monkeys are almost identical, indicating that monkeys
and humans perform this task the same way. These considerations suggest that
analogous neuronal processes are involved in human decisions about these motion
stimuli (Palmer, Huk, and Shadlen 2005). However, one can legitimately question
whether the task of making a decision based on perceptual stimuli has much to
do with the sorts of decisions we typically care about, especially when thinking of
decision making as a manifestation of volition. After all, one might argue that the
sorts of decisions to which moral responsibility applies, and for which the notion
of voluntariness is important, involve much more complex considerations of value,
consequences, reasons, feelings, and so on than this simple perceptual system does.
Furthermore, one could object that the stimulus-driven nature of this task is pre-
cisely what we do not mean by volition, which is (arguably) by definition endog-
enous. This perceptual paradigm may seem too impoverished to serve as a model
for voluntary choice in the light of these considerations.
This dismissal is too fast. There are ways of conceiving of these studies as simple
models that are generalizable to much more complex processes, making it easier to
imagine how this model could serve as the core of a model of human decision mak-
ing. For example, in these monkey studies, populations of neurons with particular
response properties represent the choices the monkey can make in the task, and
their relative firing rates appear to represent the weight given to them during the
process leading to decision. These neuronal populations may be conceived of as rep-
resenting distinct hypotheses or alternatives, such as “motion to the right/left,” or
alternatively “move eyes to the right/left target.” If this is accurate, one may conceive
of these neurons as participating in representations with propositional content cor-
responding to the decision alternatives (Gold and Shadlen 2007). It is not a great
leap, then, to accept that other neural populations represent other propositions.
Although we currently lack a general framework for conceiving of how propositional
content is represented in the nervous system, we know it can be, because humans
represent propositional content all the time, and they do so with neuronal machin-
ery. Once we accept that neural populations represent abstract propositions, it is but
a small step to think of them as representing reasons or considerations for action,
and to think of their relative firing rates as reflecting the weight given to reasons for
decision or action (in the case of the monkey, the firing rates reflect the weight of
the evidence, the only reason the monkey has to choose in this case). Thinking of
the representational properties in this way provides a clear link to the philosophical
literature on freedom and responsibility: when we think of free actions, or actions
for which we are morally responsible, those actions typically are—or are based
on—decisions in response to reasons (Fischer and Ravizza 1998).
In addition, further studies have extended this perceptual decision paradigm in
novel ways, providing new insight into how the general decision-making paradigm
can incorporate richer, more nuanced and abstract considerations that bear on
human decision making. For example, the firing rate of neurons in LIP that are asso-
ciated with decisions in the visual motion task is also influenced by the expected
value of the outcome and its probability, and these play a role in the decision cal-
culus (Yang and Shadlen 2007; Platt and Glimcher 1999). Outcomes (decisions)
associated with higher reward are more heavily weighted, and the time course of the
rise to threshold occurs more rapidly for outcomes with higher payoff or those the
animal has come to expect to be more likely to occur. The firing of these neurons
seems to encode subjective utility, a variable that incorporates the many aspects
of decision making recognized by classical decision theory (Platt and Glimcher
1999; Glimcher 2001; Dorris and Glimcher 2004). Other studies show that simi-
lar computations occur when the number of decision options is increased beyond
two, suggesting that this sort of model can be generalized to decisions with multiple
outcomes (Churchland, Kiani, and Shadlen 2008). In light of these considerations,
this model system can be considered to be a basic framework for understanding the
central elements of human decision making of the most subtle and nuanced sort.
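As a rough schematic of the decision-theoretic idea invoked here (a gloss offered for illustration, not the formalism used in the studies cited), the quantity such firing rates are taken to track can be written as a subjective expected utility for each option a:

\[
\mathrm{SEU}(a) = \sum_{i} p(o_i \mid a)\, u(o_i),
\]

where the o_i are the possible outcomes of choosing a, p(o_i | a) is the animal’s learned estimate of their probability, and u(o_i) their subjective value. On this reading, options with greater subjective expected utility receive more weight in the competition between neuronal populations and their associated signals rise to the decision threshold more quickly.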
Supposing that the monkey model is an apt model for understanding human
decision making, can it tell us anything about the question of freedom of the will?
Although in most trials the nature of the stimulus itself specifies the correct choice,
in some trials the stimulus does not provide sufficient evidence for the decision,
either because there is enough noise in the stimulus that the monkey must guess the
answer, or because the stimulus itself does not provide any determinative informa-
tion. Monkeys are occasionally presented with random-dot motion displays that
have 0 percent coherent motion. Although there is a visual stimulus, the informa-
tion in the stimulus is ambiguous and unrelated to a “correct” or rewarded choice.
Even in response to identical movies of 0 percent motion, monkeys choose right-
ward and leftward directions seemingly randomly. The monkey’s choices thus can-
not be driven entirely by the external stimulus but must rather be driven by factors
internal to the monkey herself. Recording from LIP neurons during these trials is
instructive: although the activity levels of the populations representing the alterna-
tive choices are nearly evenly matched, slight correlations are found between small
fluctuations in activity in LIP in one direction or another, and the monkey’s ultimate
response (Shadlen and Newsome 2001). This suggests that the monkey’s responses
are indeed driven by competition between these neuronal populations.
Some might take the existence of the correlation between neural firing levels and
choice even in these 0 percent motion cases to be evidence for determinism, while
others could view the stimulus-independent fluctuations as evidence for the exis-
tence and efficacy of random noise in decision making. I think neither position is
warranted, for reasons specified elsewhere (Roskies 2006). One person’s noise is
another person’s signal, and without being able to record from all the neural inputs to
a system, one cannot determine whether such activity is truly due to stochastic vari-
ability of neuronal firing or is activity due to inputs from other parts of a dynamically
evolving system, from local ongoing activity, or from non-stimulus-related environ-
mental inputs (Shadlen and Roskies 2012). Without being able to rule out these
alternatives, we cannot ascertain whether these fluctuations are due to indeterminis-
tic processes or not, and whether the inputs should be viewed as noise or just uniden-
tified signal. For these reasons, given our current knowledge, it seems insufficient to
point to them as a basis for libertarian freedom1 or for the absence thereof.
So while the work on the neural basis of decision making does not help adju-
dicate the traditional question of freedom, if taken to be a general model for the
neural basis of decision making, it is illuminating. This work provides a relatively
comprehensive model of a decision process, in that it incorporates all the basic ele-
ments we would intuitively expect—representation of options, value, evidence, a
dynamical characterization of the evolution of the system over time, with changing
inputs, and even confidence (Kiani and Shadlen 2009). It is only the first pass at a
characterization, and there are relevant differences with human decision making.
For example, this system is tightly circumscribed by the task the animal has been
trained to do, and the neural bases for decision and motor preparation are intimately
related (Gold and Shadlen 2000). If the same stimulus is used but the response indi-
cating the decision is not oculomotor, evidence suggests that other neuronal popu-
lations, not in LIP, will represent the decision of direction of motion (Gold and
Shadlen 2007; Cui and Andersen 2007). In contrast, some human decision mak-
ing may operate at a more abstract level—certainly humans can make decisions in
the absence of responses that necessitate concrete motor representations. Whether
monkeys can also make abstract decisions remains an open question. Moreover,
the picture we currently have is still only a partial and piecemeal view of what the
brain is doing during any decision process. Many other brain areas also contribute
to decision making. For example, neuronal activity in DLPFC was also predictive
of the monkey’s decision in the random-dot motion task (Kim and Shadlen 1999),
and responses were sensitive to expected reward value (Leon and Shadlen 1999).
This region of monkey cortex is reciprocally connected with the parietal regions
discussed earlier, and temporal coordination of these regions could be important
in decision making (Pesaran, Nelson, and Andersen 2008). Other areas involved in
reward processing are also undoubtedly involved (see, e.g., Schultz, Tremblya, and
Hollerman 2000; O’Doherty 2001).
How does the work on decision relate to work on intention? In the random-dot
motion paradigm discussed previously, it is tempting to identify the neural activity
in LIP with motor intention: that activity seems to be causally linked to the produc-
tion of a response, and when the monkey is required to delay his response, activ-
ity in LIP persists in the absence of the stimulus, exactly what one would expect
of an intention that bridges the temporal gap between deliberation and action.
However, as noted, activity in LIP is modality specific, reflecting a particular motor
intention, one that involves eye movements, and not an amodal response. It is pos-
sible that most intentions, even many intentions of human animals, are realized in
modality-specific motor programs. However, it is also possible that there are amodal
means of representing intentions for future action for which there is no clear motor
response, such as the intention to finish college, to search for a job, and so on. There
is some evidence in humans linking DLPFC to decisions independent of response
modality (Heekeren, Marrett, and Ungerleider 2008). Language may make possible
such representations in humans, or there may be nonlinguistically mediated ways
of encoding abstract intentions.
Despite some shortcomings as a model of human decision making, the mon-
key work on decision encourages us to think about volition mechanistically. Some
philosophers argue that it is not determinism, but the recognition that mechanism
underlies our decisions, that is the most potent challenge to freedom (Nahmias,
Coates, and Kvaran 2007). While there is some evidence to support this notion,
there is much we do not understand about the threat of mechanism, and the rela-
tion of mechanism to reductionism. If mechanism is inimical to freedom, it may
well be that our growing understanding of mechanisms underlying decision making
will undermine our conception of the will as free. Current work in philosophy sug-
gests that the threat of mechanism may arise from misunderstandings about what
mechanism entails (see, e.g., Nahmias and Murry 2010). Thus, it is more likely that
our views about freedom will adapt to embrace the insights this research provides
into the processes underlying our ability to choose among options when the correct
choice is not externally dictated.

Volition as Executive Control


Another notion that is closely allied with that of freedom is the notion of con-
trol. We act freely when we are in control of our actions. The importance of con-
trol for responsibility is present in legal doctrine: the capacity to control one’s
behavior by inhibiting inappropriate actions is recognized as important for legal
culpability. In neuroscientific terms, the control aspect of volition is the notion
that higher-order cortical regions can influence the execution of action by lower
regions. This may take several forms. For example, one conception is that voli-
tion involves the conscious selection of action (Hyder et al. 1997; Matsumoto,
Suzuki, and Tanaka 2003; Lau, Rogers, Ramnani, et al. 2004; Bunge 2004;
Rushworth 2008; Rowe et al. 2008; Donohue, Wendelken, and Bunge 2008;
Fleming et al. 2009). Another is that monitoring can affect the form an action
takes as it is executed (Schall, Stuphorn, and Brown 2002; Schall and Boucher
2007; Barch et al. 2000; Kerns et al. 2004; Ridderinkhof et al. 2004). It is but a
step further to think of control as including a capacity to inhibit an intended or
planned action (Aron et al. 2007; Brass and Haggard 2007; Brown et al. 2008;
Kuhn, Haggard, and Brass 2009).
As mentioned previously, frontal cortex is generally implicated in executive con-
trol, but frontal cortex is a large and heterogeneous area, and much remains to be
determined about the functional role of frontal subregions. Some regions of frontal
cortex appear to be of particular importance to executive control. Numerous stud-
ies implicate interactions between PFC and regions of parietal cortex in attentional
control and task switching (Rossi et al. 2009; Serences and Yantis 2007; Chiu and
Yantis 2009; Praamstra, Boutsen, and Humphreys 2005; Bode and Haynes 2009;
Badre 2008; Dosenbach et al. 2007; Dosenbach et al. 2008). Other regions of cor-
tex, such as some parietal regions, seem to play a role in guiding action that is under
way (Dosenbach et al. 2007; Dosenbach et al. 2008).
Several regions in frontal cortex appear time and time again in studies on voli-
tion. DLPFC is activated in many tasks involving choice or decision making
(Cunnington et al. 2006; Lau, Rogers, Haggard, et al. 2004; Jahanshahi et al. 1995;
Kim and Shadlen 1999; Heekeren et al. 2006). DLPFC has been implicated in
abstract and concrete decisions, as it is activated in choices between actions and in
rule selection (Assad, Rainer, and Miller 1998; Rowe et al. 2008; Bunge et al. 2003;
Bunge 2004; Bunge et al. 2005; Donohue, Wendelken, and Bunge 2008). As noted
earlier, there are competing hypotheses about the role of DLPFC in tasks involving
choice and selection of action, including response selection, conscious deliberation,
and conflict resolution. Although some work suggests that DLPFC activity is reflec-
tive of attention to selection of action (and thus, presumably, conscious control;
Lau, Rogers, Ramnani, et al. 2004), other studies indicate that DLPFC activation is
not always associated with conscious processes (Lau and Passingham 2007).
DLPFC has also been implicated in more abstract forms of control in humans. For
example, Knoch and Fehr’s (2007) rTMS studies indicate that the capacity to resist
temptation depends on right DLPFC.
Discerning the networks subserving voluntary inhibitory control of action
appears to be more straightforward. Libet, who argued on the basis of his experi-
mental evidence that conscious intention is not causally efficacious in producing
action, consoled himself with the view that the lag between the RP and action
could possibly allow for inhibition of unconsciously generated actions, thus pre-
serving the spirit of free will with “free won’t” (Libet et al. 1983). However, he
left this as pure conjecture. More recent studies have begun to shed light upon
the neural mechanisms of inhibition of intended actions. For example, Brass and
Haggard (2007) recently performed fMRI experiments in which they report
increased activity in frontomedial cortical areas in Libet-like tasks in which sub-
jects are required to intend to respond, and then to choose randomly whether
or not to inhibit that response. They conjecture that these frontomedial areas
are involved in voluntarily inhibiting self-generated action. Similar regions are
involved in decisions to inhibit prepotent responses (Kuhn, Haggard, and Brass
2009). Connectivity analyses suggest that medial frontal inhibition influences
preSMA in a top-down fashion (Kuhn, Haggard, and Brass 2009). Other evidence
suggests that inhibition occurs at lower levels in the motor hierarchy as well, for
example, in local cortical networks in primary motor areas (Coxon, Stinear, and
Byblow 2006).
While dorsal medial frontal regions appear to be involved directly in inhibitory
processes, the same regions that mediate voluntary decisions to act appear to be
involved in voluntary decisions to refrain from action. Evidence from both event-
related potential (ERP) and fMRI studies demonstrates that the neural signatures of
intentionally not acting, or deciding not to act after forming an intention to act, look
very much like those of decisions to act (Kuhn, Gevers, and Brass 2009; Kuhn and
Brass 2009b). For example, areas in anterior cingulate cortex and dorsal preSMA are
active in both freely chosen button presses and free decisions not to press a button.
The similar neural basis between decisions to act and to refrain from action lends
credence to the commonsensical notion that both actions and omissions are acts of
the will for which we can be held responsible.
Volition as a Feeling
The experience of willing is an aspect of a multifaceted volitional capacity. Some
think that the conscious will is an illusion, so all there is to explain is the experience
or belief that one wills or intends actions. There are at least two phenomenological
aspects of agency to consider: the awareness of an intention or urge to act that we
identify as occurring prior to action, and the post hoc feeling that an action taken
was one’s own.
With respect to the first, recent results reveal that the experience of voluntary
intention depends upon parietal cortex. Electrical stimulation in this area elicited
motor intentions, and stronger stimulation sometimes led to the erroneous belief
that movement had occurred (Desmurget et al. 2009). In contrast, stimulation of
premotor cortex led to movements without awareness of movement (Desmurget
et al. 2009). In addition, lesions in the inferior parietal lobe alter the awareness of
timing of motor intention. Instead of becoming aware of intentions prior to move-
ment, these lesion patients reported awareness only immediately prior to the time
of movement (Sirigu et al. 2004a). This was not due to an impairment in time per-
ception, as their ability to report movement timing accurately was not impaired.
Although this suggests that awareness of agency relies primarily on parietal rather
than premotor areas, Fried reported that stimulation in SMA also evoked desires to
move. These results may be reconciled, for intentions triggered by stimulation in
SMA, in contrast to those triggered by parietal stimulation, had the phenomenol-
ogy of compulsions more than of voluntary intentions (Fried et al. 1991). Thus, it is
possible that the experience of an impending but not necessarily self-willed action
or urge (like an oncoming sneeze) may be due to frontal areas, while the experience
of voluntarily moving, or being author of the willing, may involve parietal regions.
Considerable progress is also being made in identifying the neural signals
involved in production of the feeling of agency or ownership of action. The feeling
of agency seems to depend on both proprioceptive and perceptual feedback from
the effects of the action (Pacherie 2008; Kuhn and Brass 2009a; Moore et al. 2009;
Moore and Haggard 2008; Tsakiris et al. 2005). A number of studies indicate that
plans for action are often accompanied by efferent signals that allow the system to
form expectations for further sensory feedback that, if not violated, contribute to the
feeling of agency (Linser and Goschke 2007; Sirigu et al. 2004b). Grafton and col-
leagues found activation in right angular gyrus (inferior parietal cortex) in cases of
discrepancy between anticipated and actual movement outcome, and in awareness
of authorship (Farrer et al. 2008). Signals from parietal cortex when predictions of
a forward model match sensory or proprioceptive information may be important in
creating the sense of agency. Moreover, some aspects of awareness of agency seem
constructed retrospectively. A recent study shows that people’s judgments about
the time of formation of intention to move can be altered by time-shifting sensory
feedback, leading to the suggestion that awareness of intention is inferred at least in
part from responses, rather than directly perceived (Banks and Isham 2009). These
studies lend credence to criticisms that the Libet measurement paradigm may affect
the reported time of awareness of intention (Lau, Rogers, and Passingham 2006,
2007). In addition, perceived onset of action relative to effects is modulated by
whether the actor perceives the action as volitional (Engbert, Wohlschlager, and
Haggard 2008; Haggard 2008). TMS over SMA after action execution also affects
the reported time of awareness of intention (Lau, Rogers, and Passingham 2007),
further evidence that awareness of intention is in part reconstruction.
These results are consistent with a model in which parietal cortex generates motor
intentions and a predictive signal or forward model for behavior during voluntary
action. The motor plans are relayed to frontal regions for execution, and activation
of these regions may be crucial for aspects of awareness of intention and timing. At
the same time, parietal regions compare the internal predictions with sensory feed-
back, though some hypothesize that this comparison is performed in premotor cor-
tex (Desmurget and Sirigu 2009). Feedback signals alone are insufficient for a sense
of authorship (Tsakiris et al. 2005). When signals match, we may remain unaware
of our motor intentions (Sirigu et al. 2004a, 2004b), yet perceive the actions as
our own. We may only be made aware of our motor intentions when discrepancies
between the forward model and information from perception are detected. Thus,
both an efferent internal model and feedback from the environment are important
in the perception of agency and self-authorship (Moore et al. 2009).
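To make the comparator logic explicit, here is a toy sketch in Python. It assumes, purely for illustration, that the forward-model prediction is just an efference copy of the motor command and that a fixed threshold separates "matching" from "discrepant" feedback; neither assumption comes from the studies cited above.

```python
import numpy as np

def sense_of_agency(motor_command, sensory_feedback, match_threshold=0.5):
    """Toy comparator: an efference copy of the motor command serves as the
    forward-model prediction of feedback; the size of the prediction error
    determines whether the action feels self-generated and whether the
    mismatch becomes salient. All quantities are arbitrary illustrative scalars.
    """
    predicted_feedback = motor_command                       # forward model (identity here)
    prediction_error = abs(sensory_feedback - predicted_feedback)
    agency_felt = prediction_error < match_threshold         # match -> experienced as "mine"
    mismatch_salient = prediction_error >= match_threshold   # mismatch -> intention/error noticed
    return agency_felt, mismatch_salient

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    command = 1.0
    own_feedback = command + 0.1 * rng.standard_normal()   # self-generated: feedback tracks command
    perturbed_feedback = command + 2.0                     # external perturbation: large mismatch
    print("self-generated:", sense_of_agency(command, own_feedback))
    print("perturbed:     ", sense_of_agency(command, perturbed_feedback))
```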
Under normal circumstances, we experience our voluntary actions as voluntary.
Under abnormal circumstances, people may wrongly attribute, or fail to attribute,
agency to themselves (Wegner and Wheatley 1999; Wegner 2002). That feelings
of agency can mislead has led some to suggest that it is merely an illusion that the
will is causally efficacious (Wegner 2002; Hallett 2007). Some may take the neu-
roscience data to suggest that feelings of agency are post hoc inferences, and on a
certain view of inference, they are. However, inferences are often a good route to
knowledge, and although experience of agency is not always veridical, we should
not conclude that in general, feelings of agency do not reflect actual agency, that the
will is not causally efficacious, or that free will is nothing more than a feeling. The
mere fact that the experience of volition has neural underpinnings is also not a basis
for denying freedom of the will. Indeed, if the same neural mechanisms that under-
lie motor intention lead to movement and a predictive signal that is corroborated by
action, then under normal conditions feelings of agency should be good evidence
for the operation of the will. Understanding better the interactions between circuits
mediating the experience of agency and those involved in initiation of movement,
formation of intention, and so on, may explain how these various aspects of volition
are related and how they can be dissociated, with particular forms of brain damage
or given certain arrangements of external events.

PUTTING THE PIECES BACK TOGETHER


In the preceding sections, I discussed recent neuroscience research on five aspects
relevant to our understanding of volition. Clearly there are conceptual connec-
tions between many of the sections discussed. Action initiation deals with simple
motor behaviors, but the neural precursors to the motor activity are thought to
be or be related to proximal or motor intentions. Whether the difference between
self-initiated and externally cued actions relates to the presence or absence of a
proximal intention is unclear, since the neural basis for them is unknown. Motor
intentions as currently conceived are likely to underlie both kinds of action, but
they may be generated or activated differently (Pacherie 2008). It is likely that more
abstract or distal intentions are not closely tied to motor activity, but they are likely
to be formed by processes similar to those discussed in the section on decision
making (Bratman 1999, 2007). The particular paradigm discussed in that section
involves aspects of both exogenous and endogenous action initiation, since the deci-
sions are driven by perceptual information, but the corresponding motor response
is endogenously produced by the monkey. In all these studies there is some element
of executive control, usually involving conscious execution of the task at hand, and
occasionally involving inhibition of response, or at least the potential for such inhi-
bition or self-regulation. Although executive control is an imprecise, broad term,
and it does not suffice for characterizing a voluntary act, it is likely that the possibil-
ity of such control is a necessary condition on voluntary action. It may be
that the importance of consciousness in volition will be found to be at the level of
executive function rather than at lower levels of processing. The phenomenologi-
cal aspects of volition may also describe ways in which consciousness manifests in
volition, but whether the phenomenology is necessary is doubtful, and it is clearly
not sufficient. Illusions of agency are often invoked as evidence that free will is only
an illusion. However, even though the phenomenology of willing is clearly sepa-
rable from agency, because the phenomenology of willing is normally a result of the
operation of the neural systems subserving agency, the feeling of willing tends to be
good evidence for willing.
Paralleling the conceptual connections are neurobiological connections between
these various aspects. While preSMA and motor regions are important for action
initiation, and possibly for proximal intentions, these areas form part of a larger
network with parietal regions that are involved in planning and prediction of future
action and with aspects of decision, as well as with frontal regions mediating vari-
ous aspects of executive control. The overlap between the neurobiological networks
identified by these different research foci is reassuring, given the ways in which the
conceptual aspects of volition converge and interact. Despite the gradual unfolding
of a coherent picture, however, it is not yet possible to identify the specific neurocir-
cuitry of volition or agency, nor is it clear that the goal of doing so is well conceived.
Indeed, neuroscience appears to reveal volition not to be a single spark or unitary
faculty but rather a collection of largely separable processes that together make pos-
sible flexible, intelligent action. Further elucidation of brain networks may provide
a better way of taxonomizing the elements of volition (Brass and Haggard 2008;
Pacherie 2006, 2008). For now, however, one of the most significant contributions
neuroscience has made has been in allowing us to formulate novel questions about
the nature of voluntary behavior and in providing new ways of addressing them.

FINAL THOUGHTS
It is difficult, if not impossible, to disentangle our notion of volition from questions
about human freedom. The construct of volition largely exists in order to explain the
possibility, nature, or feeling of autonomous agency. On the whole, neuroscience
has not undermined our conception of volition, or of freedom. It has maintained
in large part notions of intention, choice, and the experience of agency. However,
although not posing a direct threat to freedom, neuroscience promises to alter some
of our views about volition. How radical an alteration remains to be seen.
First, neuroscience may affect views on volition and its relation to free will by
illuminating the mechanisms underlying these constructs. Merely demonstrating
mechanism seems to affect the layperson’s views on freedom (Nahmias, Coates,
and Kvaran 2007; Monterosso, Royzman, and Schwartz 2005). However, on the
assumption that dualism is false, mechanistic or causal explanation alone is insuf-
ficient for undermining freedom or for showing will to be an illusion. Thinking of
volition more mechanistically than we currently do may ultimately put pressure on
ordinary notions of what is required for freedom, and the changes may be salutary,
forcing the folk to abandon incoherent notions such as uncaused causes.
Do neuroscientific results show volition to have or lack characteristics that com-
port with our intuitive or theoretically informed notions of the requirements for
freedom of the will? So far, the greatest effect of neuroscience has been to challenge
traditional views of the relationship between consciousness and action. For exam-
ple, although neuroscience seems to ratify the role of intention in action, it does
alter our notions about the degree to which conscious processes alone are causally
effective in generating choice and initiating action. To the extent that the folk con-
ception of volition is wedded to the notion that action is caused and solely caused
by conscious intention, some results challenge this conception. Converging evi-
dence from neuroscience and psychology makes it clear that factors in addition to
consciousness influence our choices. Although the relevant literature on automatic
processes is not reviewed here, this is not unique to volition: more aspects of behav-
ior than previously imagined appear to be at least partly influenced by unconscious
processes. However, it would be a mistake to conclude from this that conscious pro-
cesses are not causally efficacious or that they are epiphenomenal. Moreover, merely
showing that we can be mistaken about our intentions or that there are unconscious
antecedents to conscious behavior does not warrant the conclusion that conscious
intentions do not often or usually play a role in voluntary action. At this time I do
not believe the data strongly support the claim that action initiation precedes con-
scious intention. However, future work may yet affect our beliefs about the relative
timing of conscious processes and action initiation.
More likely, neuroscience may change the way we conceive of conscious inten-
tion. The studies described here suggest that in normal circumstances we do not
experience our intentions as urges or feelings, but rather are made aware of our
intentions when our actions and intentions fail to match. While some might take
this to indicate that normally our intentions are not conscious, we could rather
modify our views of conscious intention. For example, perhaps conscious inten-
tions are not intentions that we are occurrently conscious of, but rather, they are
intentions whose goals or aims we are conscious of, or that we consciously adopt or
endorse, and that play a particular role in action. While this is consonant with some
recent views about the role of consciousness in intention (Mele 2009), it perhaps
marks a revision to the commonsense conception of the nature of conscious inten-
tion operative in volition. It may be that future advances in understanding the neural
basis of consciousness will show even the most sophisticated views to be mistaken.
However, none of the current challenges, to my mind, has succeeded in radically
undermining traditional views.

ACKNOWLEDGMENTS
This chapter was adapted from a paper in Annual Review of Neuroscience (Roskies
2010). The work was supported in part by an NEH collaborative research grant to
the Johns Hopkins Berman Institute of Bioethics, and by the MacArthur Project in
Law and Neuroscience. I would like to thank Nancy McConnell, Al Mele, Shaun
Nichols, Walter Sinnott-Armstrong, and Tillmann Vierkant for comments on an
earlier draft.

NOTE
1. Libertarian freedom refers to freedom that depends upon indeterministic events or
choices.

REFERENCES
Andersen, Richard A., and Christopher A. Buneo. 2002. Intentional maps in posterior
parietal cortex. Annual Review of Neuroscience 25 (1):189–220.
Andersen, R. A., and C. A. Buneo. 2003. Sensorimotor integration in posterior parietal
cortex. Advances in Neurology 93:159–177.
Aron, Adam R., Tim E. Behrens, Steve Smith, Michael J. Frank, and Russell A. Poldrack.
2007. Triangulating a cognitive control network using diffusion-weighted mag-
netic resonance imaging (MRI) and functional MRI. Journal of Neuroscience 27
(14):3743–3752.
Assad, Wael F., Gregor Rainer, and Earl K. Miller. 1998. Neural activity in the primate
prefrontal cortex during associative learning. Neuron 21 (6):1399–1407.
Audi, Robert. 1993. Volition and agency. In Action, intention, and reason, edited by R. Audi.
Ithaca, NY: Cornell University Press.
Badre, David. 2008. Cognitive control, hierarchy, and the rostro-caudal organization of
the frontal lobes. Trends in Cognitive Sciences 12 (5):193–200.
Balaguer, Mark. 2004. A coherent, naturalistic, and plausible formulation of libertarian
free will. Nous 38 (3):379–406.
Banks, W. P., ed. 2002. Consciousness and cognition. Vol. 11. Academic Press.
Banks, W. P., and E. A. Isham. 2009. We infer rather than perceive the moment we decided
to act. Psychological Science 20 (1):17–21.
Banks, William P., and Susan Pockett. 2007. Benjamin Libet’s work on the neuroscience of
free will. In Blackwell companion to consciousness, edited by M. Velmans and S. Schneider
(pp. 657–670). Malden, MA: Blackwell.
Barch, Deanna M., Todd S. Braver, Fred W. Sabb, and Douglas C. Noll. 2000. Anterior cin-
gulate and the monitoring of response conflict: Evidence from an fMRI study of overt
verb generation. Journal of Cognitive Neuroscience 12 (2):298–309.
Bittner, T. 1996. Consciousness and the act of will. Philosophical Studies 81:331–341.
Bode, S., and J. D. Haynes. 2009. Decoding sequential stages of task preparation in the
human brain. Neuroimage 45 (2):606–613.
Brass, M., and P. Haggard. 2007. To do or not to do: The neural signature of self-control.
Journal of Neuroscience 27 (34):9141–9145.
Brass, Marcel, and Patrick Haggard. 2008. The what, when, whether model of intentional
action. Neuroscientist 14 (4):319–325.
Bratman, Michael E. 1999. Intention, plans, and practical reason. Stanford, CA: Center for
the Study of Language and Information.
Bratman, Michael. 2007. Structures of agency: Essays. New York: Oxford University Press.
Britten, K. H., M. N. Shadlen, W. T. Newsome, and J. A. Movshon. 1992. The analysis of
visual motion: A comparison of neuronal and psychophysical performance. Journal of
Neuroscience 12 (12):4745–4765.
Brown, J. W., D. P. Hanes, J. D. Schall, and V. Stuphorn. 2008. Relation of frontal eye
field activity to saccade initiation during a countermanding task. Experimental Brain
Research 190 (2):135–151.
Bunge, Silvia A. 2004. How we use rules to select actions: A review of evidence
from cognitive neuroscience. Cognitive, Affective, and Behavioral Neuroscience 4
(4):564–579.
Bunge, Silvia A., Itamar Kahn, Jonathan D. Wallis, Earl K. Miller, and Anthony D. Wagner.
2003. Neural circuits subserving the retrieval and maintenance of abstract rules. Journal
of Neurophysiology 90 (5):3419–3428.
Bunge, Silvia A., Jonathan D. Wallis, Amanda Parker, Marcel Brass, Eveline A. Crone, Eiji
Hoshi, and Katsuyuki Sakai. 2005. Neural circuitry underlying rule use in humans and
nonhuman primates. Journal of Neuroscience. 25 (45):10347–10350.
Celebrini, S., and W. T. Newsome. 1994. Neuronal and psychophysical sensitivity to
motion signals in extrastriate area MST of the macaque monkey. Journal of Neuroscience.
14 (7):4109–4124.
Chiu, Yu-Chin, and Steven Yantis. 2009. A domain-independent source of cognitive con-
trol for task sets: Shifting spatial attention and switching categorization rules. Journal of
Neuroscience. 29 (12):3930–3938.
Churchland, Anne K., Roozbeh Kiani, and Michael N. Shadlen. 2008. Decision-making
with multiple alternatives. Nature Neuroscience 11 (6):693–702.
Corbetta, M., F. M. Miezin, S. Dobmeyer, G. L. Shulman, and S. E. Petersen. 1990.
Attentional modulation of neural processing of shape, color, and velocity in humans.
Science 248:1556–1559.
Coxon, J. P., C. M. Stinear, and W. D. Byblow. 2006. Intracortical inhibition during voli-
tional inhibition of prepared action. Journal of Neurophysiology 95 (6):3371–3383.
Cui, He, and Richard A. Andersen. 2007. Posterior parietal cortex encodes autonomously
selected motor plans. Neuron 56 (3):552–559.
Cunnington, R., C. Windischberger, L. Deecke, and E. Moser. 2002. The preparation and
execution of self-initiated and externally-triggered movement: A study of event-related
fMRI. Neuroimage 15 (2):373–385.
Cunnington, R., C. Windischberger, S. Robinson, and E. Moser. 2006. The selection of
intended actions and the observation of others’ actions: A time-resolved fMRI study.
Neuroimage 29 (4):1294–1302.
Deecke, L., and H. H. Kornhuber. 1978. An electrical sign of participation of the mesial
“supplementary” motor cortex in human voluntary finger movement. Brain Research
159:473–476.
Deiber, Marie-Pierre, Manabu Honda, Vicente Ibanez, Norihiro Sadato, and Mark Hallett.
1999. Mesial motor areas in self-initiated versus externally triggered movements
examined with fMRI: Effect of movement type and rate. Journal of Neurophysiology 81
(6):3065–3077.
den Ouden, H. E., U. Frith, C. Frith, and S. J. Blakemore. 2005. Thinking about intentions.
Neuroimage 28 (4):787–796.
Desmurget, M., C. M. Epstein, R. S. Turner, C. Prablanc, G. E. Alexander, and S. T.
Grafton. 1999. Role of the posterior parietal cortex in updating movements to a visual
target. Nature Neuroscience 2:563–567.
Desmurget, Michel, Karen T. Reilly, Nathalie Richard, Alexandru Szathmari, Carmine
Mottolese, and Angela Sirigu. 2009. Movement intention after parietal cortex stimula-
tion in humans. Science 324 (5928):811–813.
Desmurget, Michel, and Angela Sirigu. 2009. A parietal-premotor network for movement
intention and motor awareness. Trends in Cognitive Sciences 13 (10):411–419.
Donohue, S. E., C. Wendelken, and Silvia A. Bunge. 2008. Neural correlates of prepara-
tion for action selection as a function of specific task demands. Journal of Cognitive
Neuroscience 20 (4):694–706.
Dorris, Michael C., and Paul W. Glimcher. 2004. Activity in posterior parietal cor-
tex is correlated with the relative subjective desirability of action. Neuron 44
(2):365–378.
Dosenbach, Nico U. F., Damien A. Fair, Alexander L. Cohen, Bradley L. Schlaggar, and
Steven E. Petersen. 2008. A dual-networks architecture of top-down control. Trends in
Cognitive Sciences 12 (3):99–105.
Dosenbach, Nico U. F., Damien A. Fair, Francis M. Miezin, Alexander L. Cohen, Kristin
K. Wenger, Ronny A. T. Dosenbach, Michael D. Fox, Abraham Z. Snyder, Justin L.
Vincent, Marcus E. Raichle, Bradley L. Schlaggar, and Steven E. Petersen. 2007.
Distinct brain networks for adaptive and stable task control in humans. Proceedings of
the National Academy of Sciences 104 (26):11073–11078.
Engbert, K., A. Wohlschlager, and P. Haggard. 2008. Who is causing what? The sense of
agency is relational and efferent-triggered. Cognition 107 (2):693–704.
Farrer, C., S. H. Frey, J. D. Van Horn, E. Tunik, D. Turk, S. Inati, and S. T. Grafton. 2008.
The angular gyrus computes action awareness representations. Cerebral Cortex 18
(2):254–261.
Farrer, C., and C. D. Frith. 2002. Experiencing oneself vs. another person as being the
cause of an action: The neural correlates of the experience of agency. Neuroimage
15:596–603.
Fischer, J., and M. Ravizza. 1998. Responsibility and control: A theory of moral responsibility.
Cambridge: Cambridge University Press.
Fleming, Stephen M., Rogier B. Mars, Thomas E. Gladwin, and Patrick Haggard. 2009.
When the brain changes its mind: Flexibility of action selection in instructed and free
choices. Cerebral Cortex 19 (10):2352–2360.
Fogassi, L., P. F. Ferrari, B. Gesierich, S. Rozzi, F. Chersi, and G. Rizzolatti. 2005.
Parietal lobe: From action organization to intention understanding. Science 308
(5722):662–667.
Franks, Kevin M., Charles F. Stevens, and Terrence J. Sejnowski. 2003. Independent
sources of quantal variability at single glutamatergic synapses. Journal of Neuroscience
23 (8):3186–3195.
Fried, I., A. Katz, G. McCarthy, K. J. Sass, P. Williamson, S. S. Spencer, and D. D. Spencer.
1991. Functional organization of human supplementary motor cortex studied by elec-
trical stimulation. Journal of Neuroscience 11 (11):3656–3666.
Frith, C. D., K. Friston, P. F. Liddle, and R. S. J. Frackowiak. 1991. Willed action and the
prefrontal cortex in man: A study with PET. Proceedings of the Royal Society of London
B 244:241–246.
Glimcher, Paul W. 2001. Making choices: The neurophysiology of visual-saccadic deci-
sion making. Trends in Neurosciences 24 (11):654–659.
Glimcher, Paul W. 2003. The neurobiology of visual-saccadic decision making. Annual
Review of Neuroscience 26 (1):133–179.
Glimcher, Paul W. 2005. Indeterminacy in brain and behavior. Annual Review of Psychology
56 (1):25–56.
Gold, Joshua I., and Michael N. Shadlen. 2000. Representation of a perceptual decision in
developing oculomotor commands. Nature 404 (6776):390–394.
Gold, Joshua I., and Michael N. Shadlen. 2007. The neural basis of decision making.
Annual Review of Neuroscience 30 (1):535–574.
Haggard, P. 2008. Human volition: Towards a neuroscience of will. Nature Reviews
Neuroscience 9 (12):934–946.
Haggard, Patrick, and Martin Eimer. 1999. On the relation between brain potentials and
the awareness of voluntary movements. Experimental Brain Research 126:128–133.
Hallett, Mark. 2007. Volitional control of movement: The physiology of free will. Clinical
Neurophysiology 118 (6):1179–1192.
Hamilton, A. F., and S. T. Grafton. 2006. Goal representation in human anterior intrapari-
etal sulcus. Journal of Neuroscience 26 (4):1133–1137.
Hanks, Timothy D., Jochen Ditterich, and Michael N. Shadlen. 2006. Microstimulation
of macaque area LIP affects decision-making in a motion discrimination task. Nature
Neuroscience 9 (5):682–689.
Haynes, J. D., K. Sakai, G. Rees, S. Gilbert, C. Frith, and R. E. Passingham. 2007. Reading
hidden intentions in the human brain. Current Biology 17 (4):323–328.
Heekeren, H. R., S. Marrett, D. A. Ruff, P. A. Bandettini, and L. G. Ungerleider. 2006.
Involvement of human left dorsolateral prefrontal cortex in perceptual decision mak-
ing is independent of response modality. Proceedings of the National Academy of Sciences
103 (26):10023–10028.
Heekeren, Hauke R., Sean Marrett, and Leslie G. Ungerleider. 2008. The neural sys-
tems that mediate human perceptual decision making. Nature Reviews Neuroscience 9
(6):467–479.
Hesse, M. D., C. M. Thiel, K. E. Stephan, and G. R. Fink. 2006. The left parietal cortex
and motor intention: An event-related functional magnetic resonance imaging study.
Neuroscience 140 (4):1209–1221.
Huk, Alexander C., and Michael N. Shadlen. 2005. Neural activity in macaque parietal
cortex reflects temporal integration of visual motion signals during perceptual decision
making. Journal of Neuroscience 25 (45):10420–10436.
Hyder, Fahmeed, Elizabeth A. Phelps, Christopher J. Wiggins, Kevin S. Labar, Andrew
M. Blamire, and Robert G. Shulman. 1997. “Willed action”: A functional MRI study
of the human prefrontal cortex during a sensorimotor task. Proceedings of the National
Academy of Sciences 94 (13):6989–6994.
Jahanshahi, Marjan, I. Harri Jenkins, Richard G. Brown, C. David Marsden, Richard
E. Passingham, and David J. Brooks. 1995. Self-initiated versus externally triggered
movements: I. An investigation using measurement of regional cerebral blood
flow with PET and movement-related potentials in normal and Parkinson’s disease
subjects. Brain 118 (4):913–933.
Jenkins, I. Harri, Marjan Jahanshahi, Markus Jueptner, Richard E. Passingham, and David
J. Brooks. 2000. Self-initiated versus externally triggered movements: II. The effect of
movement predictability on regional cerebral blood flow. Brain 123 (6):1216–1228.
Kennerley, Steve W., K. Sakai, and M. F. S. Rushworth. 2004. Organization of action
sequences and the role of the pre-SMA. Journal of Neurophysiology 91 (2):978–993.
Kerns, John G., Jonathan D. Cohen, Angus W. MacDonald III, Raymond Y. Cho, V.
Andrew Stenger, and Cameron S. Carter. 2004. Anterior cingulate conflict monitoring
and adjustments in control. Science 303 (5660):1023–1026.
Kiani, Roozbeh, and Michael N. Shadlen. 2009. Representation of confidence associated
with a decision by neurons in the parietal cortex. Science 324 (5928):759–764.
Kim, Jong-Nam, and Michael N. Shadlen. 1999. Neural correlates of a decision in the dor-
solateral prefrontal cortex of the macaque. Nature Neuroscience 2 (2):176–185.
Knoch, D., and E. Fehr. 2007. Resisting the power of temptations: The right prefrontal
cortex and self-control. Annals of the New York Academy of Sciences 1104:123–134.
Kuhn, S., and M. Brass. 2009a. Retrospective construction of the judgement of free choice.
Consciousness and Cognition 18 (1):12–21.
Kuhn, Simone, and Marcel Brass. 2009b. When doing nothing is an option: The neural
correlates of deciding whether to act or not. Neuroimage 46 (4):1187–1193.
Kuhn, S., W. Gevers, and M. Brass. 2009. The neural correlates of intending not to do
something. Journal of Neurophysiology 101 (4):1913–1920.
Kuhn, Simone, Patrick Haggard, and Marcel Brass. 2009. Intentional inhibition: How the
“veto-area” exerts control. Human Brain Mapping 30 (9):2834–2843.
Lau, Hakwan C., and Richard E. Passingham. 2007. Unconscious activation of the cog-
nitive control system in the human prefrontal cortex. Journal of Neuroscience 27
(21):5805–5811.
Lau, Hakwan C., Robert D. Rogers, and Richard E. Passingham. 2006. On measuring the
perceived onsets of spontaneous actions. Journal of Neuroscience 26 (27):7265–7271.
Lau, H. C., R. D. Rogers, P. Haggard, and R. E. Passingham. 2004. Attention to intention.
Science 303 (5661):1208–1210.
Lau, H. C., R. D. Rogers, and R. E. Passingham. 2007. Manipulating the experienced onset
of intention after action execution. Journal of Cognitive Neuroscience 19 (1):81–90.
Lau, H. C., R. D. Rogers, N. Ramnani, and R. E. Passingham. 2004. Willed action and
attention to the selection of action. Neuroimage 21 (4):1407–1415.
Leon, Matthew I., and Michael N. Shadlen. 1999. Effect of expected reward magnitude on
the response of neurons in the dorsolateral prefrontal cortex of the macaque. Neuron
24 (2):415–425.
Libet, Benjamin. 1985. Unconscious cerebral initiative and the role of conscious will in
voluntary action. Behavioral and Brain Sciences 8:529–566.
Libet, Benjamin, Curtis A. Gleason, Elwood W. Wright, and Dennis K. Pearl. 1983. Time of
conscious intention to act in relation to onset of cerebral activity (readiness-potential):
The unconscious initiation of a freely voluntary act. Brain 106 (3):623–642.
Libet, Benjamin, E. W. Wright Jr., and Curtis A. Gleason. 1982. Readiness-potentials preced-
ing unrestricted “spontaneous” vs. pre-planned voluntary acts. Electroencephalography
and Clinical Neurophysiology 54:322–335.
Libet, Benjamin, E. W. Wright Jr., and Curtis A. Gleason. 1983. Preparation or intention-to-
act, in relation to pre-event potentials recorded at the vertex. Electroencephalography
and Clinical Neurophysiology 56:367–372.
Linser, K., and T. Goschke. 2007. Unconscious modulation of the conscious experience of
voluntary control. Cognition 104 (3):459–475.
Mainen, Zachary F., and Terrence J. Sejnowski. 1995. Reliability of spike timing in neo-
cortical neurons. Science 268 (5216):1503–1506.
Matsuhashi, M., and M. Hallett. 2008. The timing of the conscious intention to move.
European Journal of Neuroscience 28 (11):2344–2351.
Matsumoto, Kenji, Wataru Suzuki, and Keiji Tanaka. 2003. Neuronal correlates of
goal-based motor selection in the prefrontal cortex. Science 301 (5630):229–232.
Mazurek, Mark E., Jamie D. Roitman, Jochen Ditterich, and Michael N. Shadlen. 2003.
A role for neural integrators in perceptual decision making. Cerebral Cortex 13
(11):1257–1269.
Mele, Alfred. 2005. Motivation and agency. New York: Oxford University Press.
Mele, Alfred. 2006. Free will and luck. Oxford: Oxford University Press.
Mele, Alfred. 2009. Effective intentions: The power of conscious will. New York: Oxford
University Press.
Monterosso, John, Edward B. Royzman, and Barry Schwartz. 2005. Explaining away
responsibility: Effects of scientific explanation on perceived culpability. Ethics and
Behavior 15 (2):139–158.
Moore, James, and Patrick Haggard. 2008. Awareness of action: Inference and prediction.
Consciousness and Cognition 17 (1):136–144.
Moore, James W., David Lagnado, Darvany C. Deal, and Patrick Haggard. 2009.
Feelings of control: Contingency determines experience of action. Cognition 110
(2):279–283.
Mueller, Veronika A., Marcel Brass, Florian Waszak, and Wolfgang Prinz. 2007. The role
of the preSMA and the rostral cingulate zone in internally selected actions. Neuroimage
37 (4):1354–1361.
Nahmias, Eddy, D. Justin Coates, and Trevor Kvaran. 2007. Free will, moral responsi-
bility, and mechanism: Experiments on folk intuitions. Midwest Studies in Philosophy
31:215–242.
Nahmias, Eddy, and Dylan Murry. 2010. Experimental philosophy on free will: An error
theory for incompatibilist intuitions. In New waves in philosophy of action, edited by J.
Aguilar, A. Buckareff, and K. Frankish. New York: Palgrave-Macmillan.
Newsome, William T., K. H. Britten, and J. A. Movshon. 1989. Neuronal correlates of a
perceptual decision. Nature 341:52–54.
O’Craven, Kathy M., Bruce R. Rosen, Ken K. Kwong, Anne M. Treisman, and Robert L.
Savoy. 1997. Voluntary attention modulates fMRI activity in human MT-MST. Neuron
18 (4):591–598.
O’Doherty, John, et al. 2001. Abstract reward and punishment representations in the
human orbitofrontal cortex. Nature Neuroscience 4 (1):95–102.
Pacherie, Elisabeth. 2006. Toward a dynamic theory of intentions. In Does consciousness
cause behavior?, edited by S. Pockett, W. P. Banks, and S. Gallagher. Cambridge, MA:
MIT Press.
Pacherie, Elisabeth. 2008. The phenomenology of action: A conceptual framework.
Cognition 107 (1):179–217.
Palmer, John, Alexander C. Huk, and Michael N. Shadlen. 2005. The effect of stimu-
lus strength on the speed and accuracy of a perceptual decision. Journal of Vision 5
(5):376–404.
Pesaran, Bijan, Matthew J. Nelson, and Richard A. Andersen. 2008. Free choice activates a
decision circuit between frontal and parietal cortex. Nature 453 (7193):406–409.
Platt, Michael L., and Paul W. Glimcher. 1999. Neural correlates of decision variables in
parietal cortex. Nature 400 (6741):233–238.
Praamstra, P., L. Boutsen, and G. W. Humphreys. 2005. Frontoparietal control of spa-
tial attention and motor intention in human EEG. Journal of Neurophysiology 94
(1):764–774.
Quian Quiroga, R., L. H. Snyder, A. P. Batista, H. Cui, and R. A. Andersen. 2006.
Movement intention is better predicted than attention in the posterior parietal cortex.
Journal of Neuroscience 26 (13):3615–3620.
Ridderinkhof, K. Richard, Markus Ullsperger, Eveline A. Crone, and Sander Nieuwenhuis.
2004. The role of the medial frontal cortex in cognitive control. Science 306
(5695):443–447.
Rizzolatti, G., and L. Craighero. 2004. The mirror-neuron system. Annual Review of
Neuroscience 27:169–192.
Roitman, Jamie D., and Michael N. Shadlen. 2002. Response of neurons in the lateral
intraparietal area during a combined visual discrimination reaction time task. Journal
of Neuroscience 22 (21):9475–9489.
Roskies, A. L. 2006. Neuroscientific challenges to free will and responsibility. Trends in
Cognitive Sciences 10 (9):419–423.
Roskies, A. L. 2010. How does neuroscience affect our conception of volition? Annual
Review of Neuroscience 33:109–130.
Roskies, A. L. 2011. Why Libet’s studies don’t pose a threat to free will. In Conscious will
and responsibility, edited by W. Sinnott-Armstrong (pp. 11–22). New York: Oxford
University Press.
Rossi, A. F., L. Pessoa, R. Desimone, and L. G. Ungerleider. 2009. The prefron-
tal cortex and the executive control of attention. Experimental Brain Research 192
(3):489–497.
Rowe, J., L. Hughes, D. Eckstein, and A. M. Owen. 2008. Rule-selection and action-selection
have a shared neuroanatomical basis in the human prefrontal and parietal cortex.
Cerebral Cortex 18 (10):2275–2285.
Rushworth, M. F. 2008. Intention, choice, and the medial frontal cortex. Annals of the New
York Academy of Sciences 1124:181–207.
Schall, Jeffrey D., and Leanne Boucher. 2007. Executive control of gaze by the frontal
lobes. Cognitive, Affective, and Behavioral Neuroscience 7 (4):396–412.
Schall, Jeffrey D., Veit Stuphorn, and Joshua W. Brown. 2002. Monitoring and control of
action by the frontal lobes. Neuron 36 (2):309–322.
Schultz, Wolfram, Leon Tremblay, and Jeffrey R. Hollerman. 2000. Reward processing in
primate orbitofrontal cortex and basal ganglia. Cerebral Cortex 10 (3):272–283.
Serences, J. T., and S. Yantis. 2007. Spatially selective representations of voluntary and
stimulus-driven attentional priority in human occipital, parietal, and frontal cortex.
Cerebral Cortex 17 (2):284–293.
Shadlen, Michael N., and William T. Newsome. 1996. Motion perception: Seeing and
deciding. Proceedings of the National Academy of Sciences 93 (2):628–633.
Shadlen, Michael N., and William T. Newsome. 2001. Neural basis of a perceptual deci-
sion in the parietal cortex (area LIP) of the rhesus monkey. Journal of Neurophysiology
86 (4):1916–1936.
Shadlen, M. N., and Roskies, A. L. 2012. The neurobiology of decision-making and
responsibility: reconciling mechanism and mindedness. Frontiers in Neuroscience, 6,
56. doi:10.3389/fnins.2012.00056
Shibasaki, Hiroshi, and Mark Hallett. 2006. What is the Bereitschaftspotential? Clinical
Neurophysiology 117 (11):2341–2356.
Sinnott-Armstrong, W., and Nadel, L., eds. 2011. Conscious will and responsibility. New
York: Oxford University Press.
Sirigu, Angela, Elena Daprati, Sophie Ciancia, Pascal Giraux, Norbert Nighoghossian,
Andres Posada, and Patrick Haggard. 2004a. Altered awareness of voluntary action
after damage to the parietal cortex. Nature Neuroscience 7 (1):80–84.
Sirigu, Angela, Elena Daprati, Sophie Ciancia, Pascal Giraux, Norbert Nighoghossian,
Andres Posada, and Patrick Haggard. 2004b. Mere expectation to move causes attenu-
ation of sensory signals. Nature Neuroscience 7 (1):80–84.
Soon, Chun Siong, Marcel Brass, Hans-Jochen Heinze, and John-Dylan Haynes. 2008.
Unconscious determinants of free decisions in the human brain. Nature Neuroscience
11 (5):543–545.
Sumner, Petroc, Parashkev Nachev, Peter Morris, Andrew M. Peters, Stephen R. Jackson,
Christopher Kennard, and Masud Husain. 2007. Human medial frontal cortex medi-
ates unconscious inhibition of voluntary action. Neuron 54 (5):697–711.
Thaler, D., Y. C. Chen, P. D. Nixon, C. E. Stern, and R. E. Passingham. 1995. The func-
tions of the medial premotor cortex. I. Simple learned movements. Experimental Brain
Research 102 (3):445–460.
Thoenissen, D., K. Zilles, and I. Toni. 2002. Differential involvement of parietal and pre-
central regions in movement preparation and motor intention. Journal of Neuroscience
22 (20):9024–9034.
Trevena, Judy Arnel, and Jeff Miller. 2002. Cortical movement preparation before and
after a conscious decision to move. Consciousness and Cognition 11 (2):162–190.
Tsakiris, Manos, Patrick Haggard, Nicolas Franck, Nelly Mainy, and Angela Sirigu. 2005.
A specific role for efferent information in self-recognition. Cognition 96:215–231.
Wegner, Daniel. 2002. The illusion of conscious will. Cambridge, MA : MIT Press.
Wegner, Daniel, and T. Wheatley. 1999. Apparent mental causation: Sources of the experi-
ence of will. American Psychologist 54:480–492.
Yang, Tianming, and Michael N. Shadlen. 2007. Probabilistic reasoning by neurons.
Nature 447 (7148):1075–1080.
Young, Gary. 2006. Preserving the role of conscious decision making in the initiation of
intentional action. Journal of Consciousness Studies 13:51–68.
Zhu, Jing. 2004a. Intention and volition. Canadian Journal of Philosophy 34 (2):175–193.
Zhu, Jing. 2004b. Understanding volition. Philosophical Psychology 17 (2):247–273.
3

Beyond Libet
Long-Term Prediction of Free Choices from Neuroimaging Signals

JOHN-DYLAN HAYNES

INTRODUCTION
It is a common folk-psychological intuition that we can freely choose between
different behavioral options. Even a simple, restricted movement task with only a
single degree of freedom can be sufficient to yield this intuition, say in an experi-
ment where a subject is asked to “move a finger at some point of their own choice.”
Although such a simple decision might not be perceived as being as important as,
say, a decision to study at one university or another, most subjects feel it is a useful
example of a specific type of freedom that is often experienced when making deci-
sions: they have the impression that the outcome of many decisions is not predeter-
mined at the time they are felt to be made, and instead they are still “free” to choose
one or the other way.
This belief in the freedom of decisions is fundamental to our human self-concept.
It is so strong that it is generally maintained even though it contradicts several other
core beliefs. For example, freedom appears to be incompatible with the nature of our
universe. The deterministic, causally closed physical world seems to stand in the way
of “additional” and “unconstrained” influences on our behavior from mental facul-
ties that exist beyond the laws of physics. Interestingly, in most people’s (and even in
some philosophers’)1 minds, the incompatible beliefs in free will and in determin-
ism coexist happily without any apparent conflict. One reason most people don’t
perceive this as a conflict might be that our belief in freedom is so deeply embed-
ded in our everyday thoughts and behavior that the rather abstract belief in physical
determinism is simply not strong enough to compete. The picture changes, however,
with direct scientific demonstrations that our choices are determined by the brain.
People are immensely fascinated by scientific experiments that directly expose how
our seemingly free decisions are systematically related to prior brain activity.
In a seminal experiment Benjamin Libet and colleagues (1983, 1985) investigated
the temporal relationship between brain activity and a conscious intention to per-
form a simple voluntary movement. Subjects viewed a “clock” that consisted of a
light point moving on a circular path rotating once every 2.56 seconds. Subjects were
asked to flex a finger at a freely chosen point in time and to remember and report the
position of the moving light point when they first felt the urge to move. The reported
position of the light could then be used to determine the time when the person con-
sciously formed their intention, a time subsequently called “W” as a shorthand for
the conscious experience of “wanting” or “will.” Libet recorded EEG signals from
movement-related brain regions while subjects were performing this task. It had
previously been known that negative deflections of the EEG signal can be observed
immediately preceding voluntary movements (Kornhuber & Deecke 1965). These
so-called readiness potentials (RPs) originate from a region of cortex known as the
supplementary motor cortex (SMA) that is involved in motor preparation. Libet and
colleagues were interested in whether the RPs might begin to arise even before the
person had made up their mind to move. Indeed, they found that the RP already
began to arise a few hundred milliseconds before the “feeling of wanting” entered
awareness. This systematic temporal precedence of brain activity before a freely timed
decision was subsequently taken as evidence that the brain had made the decision to
move before this decision entered awareness. It was proposed that the RP reflects the
primary cortical site where the decision to move is made (Eccles 1982).
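As a concrete illustration of how a reported clock position is converted into a "W" time, consider the following sketch (Python). The function name and the example numbers are mine; only the 2.56-second rotation period is taken from the description above.

```python
ROTATION_PERIOD_S = 2.56  # the light point completes one revolution every 2.56 s

def w_time_from_clock(reported_angle_deg, rotation_start_s, revolutions_elapsed=0):
    """Convert a reported clock-hand angle (degrees from the top, clockwise) into
    an estimated time of the conscious urge ('W'), in seconds from trial onset.

    Illustrative only: assumes the experimenter knows when the rotation started
    and how many full revolutions had elapsed before the reported position.
    """
    fraction_of_cycle = (reported_angle_deg % 360) / 360.0
    return (rotation_start_s
            + revolutions_elapsed * ROTATION_PERIOD_S
            + fraction_of_cycle * ROTATION_PERIOD_S)

# Example: rotation starts 1.0 s into the trial and the subject reports the dot at
# 225 degrees during the third revolution, giving
# 1.0 + 2 * 2.56 + (225 / 360) * 2.56 = 7.72 s after trial onset.
print(w_time_from_clock(225, rotation_start_s=1.0, revolutions_elapsed=2))
```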
Due to their far-reaching implications that unconscious brain processes might
cause what appears to be a free choice, Libet’s groundbreaking experiments imme-
diately met severe criticism. Following the analysis of Hume (1777), two empirical
criteria are required to argue for a causal relationship between two events, say event
B (brain) causing event W (will). First, there has to be a temporal precedence of B
before W, and second there has to be a constant connection between events B and W. It
has been debated whether Libet’s experiments fulfill either of these criteria. Several
authors have questioned whether there is indeed a temporal precedence between
readiness potential and intention, in particular by arguing that the timing judgments
are unreliable (Breitmeyer 1985; Van de Grind 2002). It has long been known that
there are substantial inaccuracies in determining the timing and position of mov-
ing objects (Moutoussis & Zeki 1997; Van de Grind 2002; Wundt 1904). Thus, the
choice of a moving light point to report the timing is far from optimal.
A different line of argument addresses the constant connection between B and
W. Libet reports data averaged across a number of trials. Although this shows that
on average there is a RP before the urge to move, it doesn’t show whether this holds
for every single trial, which would be necessary to provide evidence for a constant
connection. For example, the early onset of the RP might be an artifact of temporal
smearing and might reflect only the onset of the earliest urges to move (Trevena &
Miller 2002). This could only be assessed by measuring the onset time of individ-
ual RPs, which is a particularly challenging signal processing problem that requires
advanced decoding algorithms (Blankertz et al. 2003).
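To see why single-trial onset estimation is difficult, the following sketch (Python/NumPy) applies a naive approach to simulated data: smooth one noisy trial and take the first point at which the signal crosses a baseline-derived threshold. The simulated ramp, noise level, and threshold are illustrative assumptions of mine, and the resulting estimate is sensitive to all of them, which is why more sophisticated decoding algorithms are needed for this problem.

```python
import numpy as np

def estimate_rp_onset(eeg, fs, movement_idx, smooth_ms=200, z_thresh=-3.0):
    """Crude single-trial readiness-potential onset estimate.

    eeg: 1-D array for one trial (baseline ... movement), negative-going RP.
    Returns the sample index at which the smoothed signal first crosses a
    z-score threshold computed from the early baseline. Illustrative only.
    """
    win = max(1, int(smooth_ms / 1000 * fs))
    smoothed = np.convolve(eeg, np.ones(win) / win, mode="same")
    baseline = smoothed[: int(0.5 * fs)]                 # first 500 ms as baseline
    z = (smoothed - baseline.mean()) / baseline.std()
    below = np.where(z[:movement_idx] < z_thresh)[0]
    return int(below[0]) if below.size else movement_idx

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    fs = 250                                             # sampling rate in Hz
    t = np.arange(0, 4.0, 1 / fs)                        # 4-second simulated trial
    true_onset_s = 2.5
    ramp = np.where(t > true_onset_s, -(t - true_onset_s) * 8.0, 0.0)  # slow negative ramp (a.u.)
    trial = ramp + 3.0 * rng.standard_normal(t.size)     # add noise
    onset_idx = estimate_rp_onset(trial, fs, movement_idx=t.size - 1)
    print(f"true onset: {true_onset_s:.2f} s, estimated onset: {onset_idx / fs:.2f} s")
```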
A further important shortcoming of Libet’s experiment is that it investigates
only RPs, which means it is restricted to signals originating from movement-related
brain regions. This leaves unclear how other areas might contribute to the buildup
of decisions. This is particularly important because several other regions of prefron-
tal cortex have frequently been shown to be involved in free choice situations (e.g.,
Deiber et al. 1991), although it remains unclear to what degree they are involved
in preparing a decision. Another shortcoming of RPs is that they only emerge in a
narrow time window immediately preceding a movement, leaving unclear whether
they indeed reflect the earliest stage where a decision is cortically prepared. In fact,
it has been argued that the close temporal proximity of RP and conscious awareness
of the urge to move means that these two processes are scientifically indistinguish-
able (Merikle & Cheeseman 1985).
Taken together, some of the problems with the original Libet experiment could
be overcome by investigating whether other brain regions might begin to prepare a
decision across longer time spans. Interestingly, it had been shown even before the
original Libet experiments that prefrontal cortex prepares voluntary movements
across longer periods than is visible from the readiness potential alone (Groll-Knapp
et al. 1977). Thus, activity in prefrontal brain regions might be a much better predic-
tor of the outcome of decisions than RPs. However, to date, studies on voluntary
movement preparation in prefrontal cortex have not simultaneously measured the
timing of the self-paced urge to move along with the corresponding brain activity.

THE MODIFIED LIBET EXPERIMENT


To overcome these and other shortcomings of the Libet experiments, we performed
a novel variant of the original task (Soon et al. 2008). We used functional magnetic
resonance imaging (fMRI), a technique that measures changes in the oxygenation
level of blood, which are in turn caused by neural activity, and that has a much higher
spatial resolution than EEG. It uses a measurement grid with a resolution of around
3 millimeters to independently measure the activity at each position in the brain.
Because the fMRI signal has a low temporal resolution (typically around 0.5 Hz)
and lags several seconds behind neural activity, it does not allow one to resolve the
fine-grained cascade of neural processes in the few hundred milliseconds just before
the will enters awareness. However, it is highly suitable for looking back from the
W event at each position in the brain and across longer time spans. Our focus on
longer time spans and the low sampling rate of the fMRI signal enabled us to relax
our requirement on temporal precision of the timing judgment, thus overcoming
a severe limitation of Libet’s original experiments. We replaced the rotating clock
with a randomized stream of letters that updated every 500 milliseconds. Subjects
had to report the letter that was visible on the screen when they made their con-
scious decision. This mode of report has the additional advantage of being unpre-
dictable, which minimizes systematic preferences for specific clock positions.
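One way to picture "looking back from the W event" is sketched below (Python/NumPy). For each trial, the fMRI volumes acquired at a set of fixed lags before the reported decision time are collected; the repetition time, lag range, and array shapes here are illustrative assumptions of mine, not the acquisition parameters actually used in the study.

```python
import numpy as np

def volumes_before_w(fmri_series, w_times_s, tr_s=2.0, lags_s=(-8, -6, -4, -2, 0)):
    """Collect, for every trial, the fMRI volumes at fixed lags before the
    reported decision time W.

    fmri_series: array (n_volumes, n_voxels), one row per acquired volume.
    w_times_s:   reported W time of each trial, in seconds from scan start.
    Returns an array of shape (n_trials, n_lags, n_voxels). Illustrative only.
    """
    fmri_series = np.asarray(fmri_series)
    epochs = []
    for w in w_times_s:
        rows = []
        for lag in lags_s:
            vol_idx = int(round((w + lag) / tr_s))
            vol_idx = np.clip(vol_idx, 0, fmri_series.shape[0] - 1)
            rows.append(fmri_series[vol_idx])
        epochs.append(rows)
    return np.asarray(epochs)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    series = rng.standard_normal((300, 1000))      # 300 volumes x 1000 voxels of fake data
    w_times = [35.5, 80.0, 123.0]                  # fake reported decision times (s)
    print(volumes_before_w(series, w_times).shape)  # -> (3, 5, 1000)
```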
Subjects were asked to freely decide between two response buttons while lying
in an MRI scanner (figure 3.1). They fixated on the center of the screen, where the
stream of letters was presented. While viewing the letter stream, they were asked to
relax and freely decide at some point in time to press either the left or the right but-
ton. In parallel they should remember the letter presented when their decision to
move reached awareness. After subjects made up their mind and pressed their freely
chosen response button, a “response mapping” screen appeared where subjects
Figure 3.1 (a) The revised Libet task. Subjects are given two response buttons, one
for the left and one for the right hand. In parallel there is a stream of letters on the screen
that changes every 500 milliseconds. They are asked to relax and to decide at some
spontaneous point of their own choice to press either the left or the right button. Once the
button is pressed, they are asked to report which letter was on the screen when they made
up their mind. (b) Pattern-based decoding and prediction of decisions ahead of time.
Using a searchlight technique (Kriegeskorte et al. 2006; Haynes et al. 2007; Soon
et al. 2008), we assessed for each brain region and each time point preceding the decision
whether it is possible to decode the choice ahead of time. Decoding is based on small local
spherical clusters of voxels that form three-dimensional spatial patterns. This allowed us
to systematically investigate which brain regions had predictive information at each time
point preceding the decision.

used a second button press to indicate at which time they had made their decision.
This screen showed three letters plus a hash symbol (#) arranged randomly on the
four corners of an imaginary square centered on fixation. Each of these positions
corresponded to one of four buttons operated by the left and right index and middle
fingers. Subjects were to press the button corresponding to the letter that was visible
on the screen when they consciously made their decision. When the letter was not
among those presented on the screen, they were asked to press the button corre-
sponding to the hash symbol. Then, after a delay the letter stream started again, and
a new trial began. Note that due to the randomization of the position of letters in the
response mapping screen, the second response is uncorrelated with the first, freely
chosen response. Importantly, in order to facilitate spontaneous behavior, we did
not ask subjects to balance the left and right button selections. This would require
keeping track of the distribution of button selections in memory and would also
encourage preplanning of choices. Instead, based on a behavioral selection test conducted before scanning, we selected subjects who spontaneously chose a balanced number of left and right button presses without prior instruction.

DECODING CHOICES FROM BRAIN ACTIVITY PATTERNS


An important innovation was that we used a “decoder” to predict how a subject
was going to decide from their brain activity (figure 3.2). We examined for each
time point preceding the intention whether a given brain region carried informa-
tion related to the specific outcome of a decision, that is, the urge to press either
a left or a right button, rather than reflecting unselective motor preparation. To
understand the advantage of “decoding,” it can help to review the standard analysis
techniques in fMRI studies. Most conventional neuroimaging analyses perform sta-
tistical analyses on one position in the brain at a time, and then proceed to the next
position (Friston et al. 1995). This yields a map of statistical parameters that plots
how strongly a certain effect is expressed at each individual position in the brain. But
this neglects any information that is present in the distributed spatial patterns of
fMRI signals. Typically, the raw data are also spatially smoothed, so any fine-grained
spatial patterning is lost. It has recently emerged, however, that these fine-grained
fMRI patterns contain information that is highly predictive of the detailed contents
of a person’s thoughts (Kamitani & Tong 2005; Haynes & Rees 2005, 2006). This
is in accord with a common view that each region of the brain encodes informa-
tion in a distributed spatial pattern of activity (Tanaka 1997). This information is
lost for conventional analyses. The full information present in brain signals can only
be extracted by jointly analyzing multiple locations using pattern-based decoding
algorithms. Conventional analyses can only reveal whether a brain area is more or
less active during a task (say immediately preceding a decision). In contrast, we
used the novel pattern-based decoding analyses not to investigate the overall level
of activity but to extract a maximal amount of predictive information contained in
the fine-grained spatial pattern of activity. This information allows one to predict
the specific choice a subject is going to make on each trial.
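
To make the contrast with conventional voxel-by-voxel analysis concrete, the following is a minimal hypothetical sketch of pattern-based decoding for a single local cluster of voxels; it is not the pipeline used in the study, and the simulated data, classifier choice, and parameters are purely illustrative.

```python
# Hypothetical sketch of pattern-based decoding for one local cluster of
# voxels. Not the study's actual pipeline; shapes and parameters are
# illustrative only.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 80, 33             # e.g., ~33 voxels in a small sphere
labels = rng.integers(0, 2, n_trials)   # 0 = left press, 1 = right press

# Simulated patterns: a weak choice-specific spatial pattern plus noise.
pattern = rng.normal(size=n_voxels)
X = rng.normal(size=(n_trials, n_voxels)) + 0.3 * np.outer(labels - 0.5, pattern)

# Decoding accuracy for this cluster, estimated by cross-validation, so that
# accuracy is always assessed on trials the classifier has not seen.
clf = LinearSVC(dual=False)
accuracy = cross_val_score(clf, X, labels, cv=5).mean()
print(f"cluster decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```

A whole-brain searchlight repeats this analysis for a sphere centered on every voxel, and for every time point preceding the decision, yielding a map of where and when choice-predictive information is present.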
In order to first validate our method, we investigated from which brain regions
the specific decision could be decoded after it had been made and the subject was
already executing the motor response (figure 3.2, top). This served as a sanity check
because it is clear that one would expect to find the decision to be encoded in the
motor cortex. We thus assessed for each brain area and each time point after the
decision whether it was possible to decode from the spatial pattern of brain signals
which motor response the subject was currently executing. As expected, two brain

[Figure 3.2 here: time courses of predictive accuracy (%) from –8 to 12 s around the decision (W) for left and right motor cortex, SMA, lateral and medial frontopolar cortex, and precuneus/posterior cingulate cortex.]

Figure 3.2 (Top) First we assessed which brain regions had information about a
subject’s decision after it had been made and the subject was currently pressing the button
corresponding to their choice. As expected, this yielded information in motor cortex and
supplementary motor cortex. (Bottom) Second, we assessed which brain regions had
predictive information about a subject’s decision even before the subject knew how they
were going to decide. This yielded regions of frontopolar cortex and precuneus/posterior
cingulate cortex, which had predictive information already seven seconds before the
decision was made.

regions encoded the outcome of the subject’s decision during the execution phase.
These were primary motor cortex and SMA. Thus, the sanity check demonstrates
the validity of the method. Please note that, as expected, the informative fMRI sig-
nals are delayed by several seconds relative to the decision due to the delay of the
hemodynamic response.
Next we addressed the key question of this study, whether any brain region encoded
the subject’s decision ahead of time. We found that, indeed, two brain regions predicted
prior to the conscious decision whether the subject was about to choose the left or
right response, even though the subject did not know yet which way they were about to
decide (figure 3.2, bottom). The first region was in frontopolar cortex (FPC), Brodmann
area 10 (BA 10). The predictive information in the fMRI signals from this brain region
was present already seven seconds prior to the subject’s decision. This period of seven
seconds is a conservative estimate that does not yet take into account the delay of the
fMRI response with respect to neural activity. Because this delay is several seconds, the
predictive neural information will have preceded the conscious decision by up to 10
seconds. There was a second predictive region located in parietal cortex (PC) stretch-
ing from the precuneus into posterior cingulate cortex. It is important to note that there
is no overall signal increase in the frontopolar and precuneus/posterior cingulate dur-
ing the preparation period. Rather, the predictive information is encoded in the spatial
pattern of fMRI responses, which is presumably why it has only rarely been noticed
before. Please note that due to the temporal delay of the hemodynamic response, the
small lead times in SMA/pre-SMA of up to several hundred milliseconds reported in
previous studies (Libet et al. 1983; Haggard & Eimer 1999) are below the temporal
resolution of our method. Hence, we cannot exclude that other regions contain predic-
tive information in the short period immediately preceding the intention.

The Role of BA 10
The finding of unconscious, predictive brain activity patterns in Brodmann area 10
(BA 10) is interesting because this area is not normally discussed in connection with
free choices. This is presumably due to the fact that conventional analyses will only
pick up regions with overall changes in activity but not regions where only the pat-
terning of the signal changes in a choice-specific fashion. However, it has been repeat-
edly demonstrated using other tasks that BA 10 plays an important role in encoding
and storage of intentions. It has long been known that lesions to BA 10 lead to a loss
of prospective memory, thus disrupting the ability to hold action plans in memory
for later execution (Burgess et al. 2001). In a previous study from our group, we have
shown that BA 10 also stores intentions across delay periods after they have reached
consciousness, especially if there is a delay between decision and execution (Haynes
et al. 2007). Although BA 10 has only rarely been implicated in preparation of volun-
tary actions, a direct comparison across different brain regions has revealed that the
earliest cortical region exhibiting preparatory signals before voluntary movements is
frontopolar cortex (Groll-Knapp et al. 1977). BA 10 is also cytoarchitectonically very
special. It has a very low cell density, but each cell forms a large number of synapses,
meaning that it is a highly associative brain region (Ramnani & Owen 2004). One
could speculate that this would allow for locally recurrent processing that could sup-
port the storage of action plans in working memory. Furthermore, BA 10 is believed
to be the area that has most disproportionately grown in size in humans compared
with nonhuman primates (Ramnani & Owen 2004).

Two Preparatory Circuits: “What” versus “When”


On closer inspection, it becomes apparent that multichoice versions of the
Libet experiment involve not just one but two decisions (Haggard &
Eimer 1999; Soon et al. 2008). On the one hand, a decision needs to be made as to
when to decide; on the other hand, a decision has to be made as to which button to
choose. Brass and Haggard (2008) have referred to this as “when” and “what” deci-
sions. So far we have decoded the "what" decisions; next we conducted a further decoding analysis in which we assessed to what degree the timing of the decision (as opposed to its outcome) can be decoded. The time of conscious intention
could be significantly predicted from pre-SMA and SMA. The earliest decodable
information on timing was available five seconds before a decision. This might sug-
gest that the brain begins to prepare self-paced decisions through two independent
networks that only converge at later stages of processing. The classical Libet experi-
ments, which were primarily concerned with “when” decisions, found short-term
predictive information in the SMA. This is compatible with our prediction of the
timing from pre-SMA and SMA. In contrast, as our results show, a “what” decision
is prepared much earlier and by a much more extended network in the brain.

SANITY CHECKS
Our findings point toward long-leading brain activity that is predictive of the out-
come of a decision even before the decision reaches awareness. This is a striking
finding, and thus it is important to critically discuss several possible sources of arti-
facts and alternative interpretations. Of particular interest is making sure that the report of the timing is correct and that the information does not reflect a carryover from previous trials.

Early Decision—Late Action?


One question is whether the subjects are really performing the task correctly. For
example, they might decide early, say, at the beginning of the trial, which button to
press, and then simply wait for a few seconds to execute their response. This could
be the case if, say, the entire group of subjects had been grossly disregarding the
instructions. A similar argument has already been made against the Libet experi-
ment. It is conceivable that as the decision outcome gradually enters awareness,
subjects adopt a very conservative criterion for their report and wait for the aware-
ness to reach its “peak” intensity (Latto 1985; Ringo 1985). Fortunately, there are
reasons that make it implausible that subjects simply waited to report the decision
that had already begun to reach awareness. In situations where subjects know which
button they are going to press, the corresponding movement is already prepared
all the way up to primary motor cortex. In contrast, in our study the motor cortex
contains information only at a very late stage of processing, following the conscious
decision regarding which movement to make. This suggests that subjects did not
decide early and then simply wait.

Carryover from Previous Trial?


Importantly, it is also possible to rule out that the early prediction merely
reflects a carryover of information from the previous trial. First, the distribution of
response sequences clearly resembles an exponential distribution without sequen-
tial order, as would be expected if subjects decide randomly from trial to trial which
button to press. This is presumably because, in contrast to previous studies, we did
not ask subjects to balance left and right button presses across trials, thus encour-
aging decisions that were independent of previous trials. Also, in our experiments
subjects often took a long time to make a decision, which might explain why they
behaved more randomly than in traditional random choice experiments,
where subjects systematically violate randomness when explicitly asked to rap-
idly generate random sequences (Nickerson 2002). Second, our chosen statistical
analysis method, fitting a so-called finite impulse response function, is designed to
separate the effects of the current trial from the previous and the following trial. This
approach is highly efficient as long as both types of responses are equally frequent,
with variable intertrial intervals, as here. Third, the early onset of predictive infor-
mation in prefrontal and parietal regions cannot be explained by any trailing brain
imaging signals from the previous trials. The onset of information occurs approxi-
mately 12 seconds after the previous trial, which is far beyond the relaxation time of
the hemodynamic response. Also, the predictive information increases with tempo-
ral distance from the previous trial, which is not compatible with the information
being an overlap from the previous trial. Fourth, time points that overlap into the
next trial also revealed no carryover of information. Taken together, the high predic-
tive accuracy preceding the decision reflects prospective information encoded in
prefrontal and parietal cortex related to the decision in the current trial.
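
For readers unfamiliar with the finite impulse response model mentioned in the second point above, here is a minimal hypothetical sketch of an FIR design matrix; the run length, onsets, and bin count are invented for illustration, and the study's actual model was of course fit to real data.

```python
# Hypothetical sketch of a finite impulse response (FIR) design matrix. Each
# trial onset gets one indicator regressor per post-onset time bin, so the
# responses of neighboring trials can be estimated separately even when they
# overlap in time. Simplified; not the study's actual analysis code.
import numpy as np

n_scans = 60            # number of fMRI volumes in this illustrative run
onsets = [5, 21, 40]    # trial onsets, in scans (variable intertrial intervals)
n_bins = 8              # number of post-onset time bins modeled per trial

def fir_design(onsets, n_scans, n_bins):
    """Column j is 1 at scan (onset + j) for every trial onset."""
    X = np.zeros((n_scans, n_bins))
    for onset in onsets:
        for j in range(n_bins):
            if onset + j < n_scans:
                X[onset + j, j] = 1.0
    return X

X = fir_design(onsets, n_scans, n_bins)
y = np.random.default_rng(1).normal(size=n_scans)   # stand-in voxel time series
beta, *_ = np.linalg.lstsq(X, y, rcond=None)         # estimated per-bin response
print(beta.round(2))
```

Because each trial contributes its own set of delayed regressors, overlapping responses from neighboring trials are attributed to their respective onsets, which is the sense in which the method separates the current trial from the previous and following ones.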

IMPLICATIONS FOR THE FREE WILL DEBATE?


Our study shows that the brain can begin to unconsciously prepare decisions several
seconds before they reach awareness. Does our study thus have any novel implica-
tions for the debate on free will that has so far heavily relied on Libet’s experiments?
The potential implications of Libet’s experiments for free will have been discussed
at great length in the literature, which has helped sharpen what the contribution of
such simple free choice paradigms might be. Obviously, they do not address real-
world decisions that have high motivational importance, are not based on long-term
reward expectations, and do not involve complex reasoning. Our and Libet’s deci-
sions have little motivational salience for the individual and are experienced as
random rather than being based on in-depth trial-to-trial reasoning. However, our
and Libet’s findings do address one specific intuition regarding free will, that is, the
naive folk-psychological intuition that at the time when we make a decision, the
outcome of this decision is free and not fully determined by brain activity. As dis-
cussed earlier, this intuition is scientifically implausible anyway, simply because it
stands in contradiction to our belief in a deterministic universe. However, the direct
demonstration that brain activity predicts the outcomes of decisions before they
reach awareness has additional persuasive power. Dissociations between awareness
and brain processing are nothing unusual; they have been demonstrated in motor
control before (Fourneret & Jeannerod 1998). What our findings now show is that
a whole cascade of unconscious brain processes unfolds across several seconds and
helps prepare subjectively free, self-paced decisions.
CAUSALITY?
An important point that needs to be discussed is to what degree our findings support
any causal relationship between brain activity and the conscious will. For the criterion
of temporal precedence there should be no doubt that our data finally demonstrate that
brain activity can predict a decision long before it enters awareness. A different point
is the criterion of constant connection. For a constant connection, one would require
that the decision can be predicted with 100 percent accuracy from prior brain activity.
Libet’s original experiments were based on averages, so no statistical assessment can
be made about the accuracy with which decisions can be predicted. Our prediction of
decisions from brain activity is statistically reliable but far from perfect. The predictive
accuracy of around 60 percent can be substantially improved if the decoding is custom-
tailored for each subject. However, even under optimal conditions this is far from 100
percent. There could be several reasons for this. One possibility is that the inaccuracy stems
from imperfections in our ability to measure neural signals. Due to the limitations of
fMRI in terms of spatial and temporal resolution, it is clear that the information we can
measure can only reflect a strongly impoverished version of the information available
from a direct measurement of the activity in populations of neurons in the predictive
areas. A further source of imperfection is that an optimal decoding approach needs a
large (ideally infinite) number of training samples to learn exactly what the predictive
patterns should be. In contrast, the slow sampling rate of fMRI imposes limitations on
the training information available. So, even if the populations of neurons in these areas
allowed a perfect prediction in principle, our ability to extract this information
would be severely limited. However, these limitations cannot be used to argue that one
day, with better methods, the prediction will be perfect; this would constitute a mere
“promissory” prediction. Importantly, a different interpretation could be that the inac-
curacy simply reflects the fact that the early neural processes might in principle simply
not be fully, but only partially predictive of the outcome of the decision. In this view,
even full knowledge of the state of activity of populations of neurons in frontopolar cor-
tex and in the precuneus would not permit us to fully predict the decision. In that case
the signals have the form of a biasing signal that influences the decision to a degree, but
additional influences at later time points might still play a role in shaping the decision.
Until a perfect predictive accuracy has been reached in an experiment, both interpreta-
tions—incomplete prediction and incomplete determination—remain possible.
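
To give a sense of what "statistically reliable but far from perfect" amounts to, the following hypothetical calculation shows that an accuracy of around 60 percent is easily distinguished from the 50 percent chance level once a few hundred trials are pooled; the trial count used here is illustrative and not the study's.

```python
# Hypothetical sketch: is ~60% decoding accuracy reliably above the 50% chance
# level? A one-sided binomial test; the trial count is illustrative only.
from scipy.stats import binomtest

n_trials = 400
n_correct = int(round(0.60 * n_trials))
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"{n_correct}/{n_trials} correct, p = {result.pvalue:.2e}")
# Reliable evidence of predictive information, yet a long way from the 100%
# accuracy that a strict "constant connection" criterion would require.
```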

FUTURE PERSPECTIVES
An important question for future research is whether the signals we observed are
indeed decision-related. This might sound strange given that they predict the choices.
However, this early information could hypothetically also be the consequence of sto-
chastic, fluctuating background activity in the decision network (Eccles 1985), simi-
lar to the known fluctuations of signals in early visual cortex (Arieli et al. 1996). In
this view the processes relevant for the decision would occur late, say, in the last sec-
ond before the decision. In the absence of any “reasons” for deciding for one or the
other option, the decision network might need to break the symmetry, for example,
by using stochastic background fluctuations in the network. If the fluctuations in the
network are, say, in one subspace, the decision could be pushed toward “left,” and
if the fluctuations are in a different subspace, the decision could be pushed toward
“right.” But how could fluctuations at the time of the conscious decision be reflected
already seven seconds before? One possibility is that the temporal autocorrelation of
the fMRI signal smears the ongoing fluctuations across time. However, the fMRI sig-
nal itself is presumably not causally involved in decision making; it is only an indirect
way of measuring the neural processes leading up to the decision. Thus the relevant
quantity is the temporal autocorrelation of neural signals, which seems incompatible
with a time scale of 7 to 10 seconds. Nonetheless, in future experiments we aim to
investigate even further how tightly the early information is linked to the decision.
One prediction of the slow background fluctuation model is that the outcome of the
decision would be predictable even in cases where a subject does not know that they
are going to have to make a decision or where a subject does not know what a deci-
sion is going to be about. This would point toward a predictive signal that does not
directly computationally contribute to decision making.
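
One way to get a feel for the autocorrelation argument above is a back-of-the-envelope calculation. Assuming, purely for illustration, that the autocorrelation of the neural fluctuations decays exponentially with some time constant tau, the fraction of a fluctuation that survives at a 7-second lag is exp(-7/tau):

```python
# Hypothetical back-of-the-envelope calculation. The exponential decay and the
# candidate time constants are modeling assumptions, not measured quantities.
import numpy as np

for tau_s in (0.5, 1.0, 2.0, 5.0):
    r = np.exp(-7.0 / tau_s)
    print(f"tau = {tau_s:>3} s -> autocorrelation at a 7 s lag = {r:.4f}")
# Unless neural fluctuations had implausibly long time constants, a fluctuation
# at decision time could not account for predictive information 7 s earlier.
```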
A further interesting point for future research is the comparison of self-paced with
rapid decisions that occur in response to sudden and unpredictable external events.
At first sight it seems implausible that rapid, responsive decisions could be predicted
ahead of time. How would we be able to drive a car on a busy road if it always took
us a minimum of seven seconds to make a decision? However, even unpredictable
decisions are likely to be determined by “cognitive sets” or “policies” that are likely to
have a much longer half-life in the brain than a mere seven seconds.
Finally, it would be interesting to investigate whether decisions can be predicted
in real time before a person knows how they are going to decide. Such a real-time
“decision prediction machine” (DP-machine) would allow us to turn certain thought
experiments (Marks 1985; Chiang 2005) into reality, for example, by testing whether
people can guess above chance which future choices are predicted by their current
brain signals even though a person might not have yet made up their mind. Such
forced-choice judgments would be helpful in revealing whether there is evidence
for subtle decision-related information that might enter a person’s awareness at an
earlier stage than would be apparent in the conventional Libet tasks (Marks 1985).
A different experiment could be to ask a person to press a button at a time point of
their own choice, with the one catch that they are not allowed to press it when a
lamp lights up (Chiang 2005). Using real-time decoding techniques, it might then
be possible to predict the impending decision to press the button and to control
the lamp to prevent the action. The phenomenal experience of performing such an
experiment would be interesting. For example, if the prediction is early enough, the
subject is not even aware that they are about to make up their mind and should have
the impression that the light is flickering on and off randomly. It would be possible to
use the DP-machine to inform the subject of their impending decision and get them
to “veto” their action and not press a button. Currently, such “veto” experiments rely
on trusting a person to make up their mind to press a button and then to rapidly
choose to terminate their movement (Brass & Haggard 2007). A DP-machine would
finally allow one to perform true “veto” experiments. If it were possible not only to
predict when a person is going to decide but also which specific option they are going to
take, one could ask them to change their mind and take the opposite option. It seems
plausible that a person should be able to change their mind across a period as long
as seven seconds. However, there is a catch: How can one change one’s mind if one
doesn’t even know what one has chosen in the first place? If it were one day realized,
such a DP-machine would be as useful a device for helping us appreciate the determination of our free decisions as an auto-cerebroscope (Feigl 1958) is for helping us understand the relationship between our conscious thoughts and our brain activity.

ACKNOWLEDGMENTS
This work was funded by the Max Planck Society, the German Research Foundation,
and the Bernstein Computational Neuroscience Program of the German Federal
Ministry of Education and Research. The author would like to thank Ida Momennejad
for valuable comments on the manuscript.
This text is based on a previous review article: J. D. Haynes, “Decoding and
Predicting Intentions,” Ann N Y Acad Sci 1224, no. 1 (2011): 9–21. This work was
funded by the Bernstein Computational Neuroscience Program of the German
Federal Ministry of Education and Research (BMBF Grant 01GQ0411), the
Excellence Initiative of the German Federal Ministry of Education and Research
(DFG Grant GSC86/1–2009), and the Max Planck Society.

NOTE
1. The author is an incompatibilist.

REFERENCES
Arieli A, Sterkin A, Grinvald A, & Aertsen A (1996). Dynamics of ongoing activity:
Explanation of the large variability in evoked cortical responses. Science 273, 1868–1871.
Blankertz B, Dornhege G, Schäfer C, Krepki R, Kohlmorgen J, Müller KR, Kunzmann V,
Losch F, & Curio G (2003). Boosting bit rates and error detection for the classification
of fast-paced motor commands based on single-trial EEG analysis. IEEE Trans Neural
Syst Rehabil Eng 11, 127–131.
Brass M & Haggard P (2007). To do or not to do: The neural signature of self-control.
J Neurosci 27, 9141–9145.
Brass M & Haggard P (2008). The what, when, whether model of intentional action.
Neuroscientist 14, 319–325.
Breitmeyer BG (1985). Problems with the psychophysics of intention. Behav Brain Sci 8,
539–540.
Burgess PW, Quayle A, & Frith CD (2001). Brain regions involved in prospective mem-
ory as determined by positron emission tomography. Neuropsychologia 39, 545–555.
Chiang T (2005). What’s expected of us. Nature 436, 150.
Deiber MP, Passingham RE, Colebatch JG, Friston KJ, Nixon PD, & Frackowiak RS
(1991). Cortical areas and the selection of movement: A study with positron emission
tomography. Exp Brain Res 84, 393–402.
Eccles JC (1982). The initiation of voluntary movements by the supplementary motor
area. Arch Psychiatr Nervenkr 231, 423–441.
Eccles JC (1985). Mental summation: The timing of voluntary intentions by cortical
activity. Behav Brain Sci 8, 542–543.
Feigl H (1958). The “mental” and the “physical.” University of Minnesota Press.
Fourneret P & Jeannerod M (1998). Limited conscious monitoring of motor performance
in normal subjects. Neuropsychologia 36, 1133–1140.
Friston KJ, Holmes AP, Poline JB, Grasby PJ, Williams SC, Frackowiak RS, & Turner R.
(1995). Analysis of fMRI time-series revisited. Neuroimage 2, 45–53.
Groll-Knapp E, Ganglberger JA, & Haider M (1977). Voluntary movement-related slow
potentials in cortex and thalamus in man. Progr Clin Neurophysiol 1, 164–173.
Haggard P & Eimer M (1999). On the relation between brain potentials and the aware-
ness of voluntary movements. Exp Brain Res 126, 128–133.
Haynes JD & Rees G (2005). Predicting the orientation of invisible stimuli from activity
in human primary visual cortex. Nat Neurosci 8, 686–691.
Haynes JD & Rees G (2006). Decoding mental states from brain activity in humans. Nat
Rev Neurosci 7, 523–534.
Haynes JD, Sakai K, Rees G, Gilbert S, Frith C. & Passingham RE (2007). Reading hidden
intentions in the human brain. Curr Biol 17, 323–328.
Hume D (1777). An enquiry concerning human understanding. Reprinted Boston 1910 by
Collier & Son.
Trevena JA & Miller JG (2002). Cortical movement preparation before and after a con-
scious decision to move. Conscious Cogn 10, 162–190.
Kamitani Y & Tong F (2005). Decoding the visual and subjective contents of the human
brain. Nat Neurosci 8, 679–685.
Kornhuber HH & Deecke L (1965). Hirnpotentialänderungen bei Willkürbewegungen
und passiven Bewegungen des Menschen: Bereitschaftspotential und reafferente
Potentiale. Pflügers Arch Ges Phys 284, 1–17.
Kriegeskorte N, Goebel R, & Bandettini P (2006). Information-based functional brain
mapping. Proc Natl Acad Sci USA 103, 3863–3868.
Latto R (1985). Consciousness as an experimental variable: Problems of definition, prac-
tice and interpretation. Behav Brain Sci 8, 545–546.
Libet B (1985). Unconscious cerebral initiative and the role of conscious will in voluntary
action. Behav Brain Sci 8, 529–566.
Libet B, Gleason CA, Wright EW, & Pearl DK (1983). Time of conscious intention to act
in relation to onset of cerebral activity (readiness-potential): The unconscious initia-
tion of a freely voluntary act. Brain 106, 623–642.
Marks LE (1985). Toward a psychophysics of intention. Behav Brain Sci 8, 547–548.
Merikle PM & Cheesman J (1985). Conscious and unconscious processes: Same or dif-
ferent? Behav Brain Sci 8, 547–548.
Moutoussis K & Zeki S (1997). Functional segregation and temporal hierarchy of the
visual perceptive systems. Proc Roy Soc London B 264, 1407–1414.
Nickerson RS (2002). The production and perception of randomness. Psych Rev 109,
330–357.
Ramnani N & Owen AM (2004). Anterior prefrontal cortex: Insights into function from
anatomy and neuroimaging. Nat Rev Neurosci 5, 184–194.
Ringo JL (1985). Timing volition: Questions of what and when about W. Behav Brain Sci
8, 550–551.
Soon CS, Brass M, Heinze HJ, & Haynes JD. (2008). Unconscious determinants of free
decisions in the human brain. Nat Neurosci 11, 543–545.
Tanaka K (1997). Mechanisms of visual object recognition: Monkey and human studies.
Curr Opin Neurobiol 7, 523–529.
Van de Grind W (2002). Physical, neural, and mental timing. Conscious Cogn 11, 241–264.
Wundt W (1904). Principles of physiological psychology. Vol. 2. New York: Macmillan.
4

Vetoing and Consciousness

ALFRED R. MELE

Benjamin Libet has argued for a pair of striking theses about free will. First, free
will never initiates actions. Second, free will may be involved in “vetoing” conscious
decisions, intentions, or urges to act (1985; 1999; 2004, 137–149).1 Elsewhere, I
have argued that Libet and others fail to provide adequate evidence for the first
thesis and even for the related thesis that conscious intentions to flex a wrist never
make a causal contribution to the production of a flexing action (Mele 2006, chap.
2; 2009, chaps. 3 and 4). My topic here is Libet’s thesis about vetoing. To veto a
conscious decision, intention, or urge is to decide not to act on it and to refrain,
accordingly, from acting on it. Libet associates veto power with some pretty fancy
metaphysics (see, e.g., Libet 1999). I set the metaphysical issues aside here and
concentrate on the empirical ones, focusing on recent neuroscientific research that
bears on vetoing.

1. GENERAL BACKGROUND
The conscious decisions, intentions, and urges that are candidates for being vetoed,
according to Libet, are limited to what I call proximal decisions, intentions, and
urges (Mele, 1992)—that is, decisions, intentions, or urges to do things at once.
(There are also distal decisions, intentions, and urges: for example, Al’s decision to
host a party next week, Beth’s intention to fly to Calgary next month, and Cathy’s
urge to scold Don when she returns home from work.) Libet attempts to generate
evidence about when his subjects become conscious of proximal decisions, inten-
tions, or urges. His method is to instruct subjects to perform a flexing action when-
ever they wish while watching a rapidly revolving dot on a clock face and to report
later—after they flex—on where the dot was when they first became aware of their
decision, intention, or urge to flex (Libet 1985). (The dot makes a complete revolu-
tion in less than three seconds.) Libet found (1985, 532) that the average time of
reported initial awareness was 200 milliseconds (ms) before the time at which an
electromyogram (EMG) shows relevant muscular motion to begin (time 0).
The following labels will facilitate discussion:

E-time: The time at which a proximal decision is made or a proximal intention
or urge is acquired.
C-time: The time of the onset of the subject’s consciousness of an item of one
of these kinds.
B-time: The time the subject believes to be C-time when responding to the
experimenter’s question about C-time.

How are these times related? Libet’s view is that average E-time is 550 ms before
time 0 (i.e., –550 ms) for subjects who are regularly encouraged to flex spontane-
ously and who report no “preplanning” of their movements, average C-time is –150
ms, and average B-time is –200 ms (1985, 532; 2004, 123–126).
Libet’s position on average E-time is based on his finding that, in subjects who
satisfy the conditions just mentioned, EEG readings—averaged over at least 40
flexings for each subject—show a shift in “readiness potentials” (RPs) beginning
at about –550 ms. The RP exhibited by these subjects is Libet’s “type II RP” (1985,
532). He contends that “the brain ‘decides’ to initiate or, at least, prepare to initiate
the act before there is any reportable subjective awareness that such a decision has
taken place” (1985, 536), and he apparently takes the unconscious decision to be
made when the shift in RPs begins. Libet arrives at his average C-time of –150 ms by
adding 50 ms to his average B-time (–200 ms) in an attempt to correct for what he
believes to be a 50-ms negative bias in subjects’ reports (see Libet 1985, 534–535;
2004, 128, for alleged evidence for the existence of the bias).
Whether subjects have time to veto conscious proximal decisions, intentions, or
urges, as Libet claims, obviously depends not only on their C-times but also on how
much time it takes to veto a conscious proximal decision, intention, or urge. For
example, if C-times are never any earlier than –150 ms, but vetoing a conscious prox-
imal decision, intention, or urge would require at least 200 ms, then such decisions,
intentions, and urges are never vetoed. Let V-time stand for the minimum time it
would take to veto a conscious proximal decision, intention, or urge. An informed,
plausible judgment about whether people ever veto such things would be supported
by good evidence both about people’s C-times and about V-time. I discuss some
(alleged) evidence about C-times and V-time in subsequent sections.
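
The role these quantities play in the argument can be made explicit with a toy check; the numbers below are simply the illustrative ones from the example above, not empirical estimates of C-time or V-time.

```python
# Toy check of the timing argument. Times are in ms relative to EMG onset
# (time 0); the values are the text's illustrative ones, not data.
def veto_possible(c_time_ms, v_time_ms, action_time_ms=0):
    """A veto is possible only if the interval between consciousness onset
    (C-time) and the action is at least V-time."""
    return (action_time_ms - c_time_ms) >= v_time_ms

print(veto_possible(c_time_ms=-150, v_time_ms=200))  # False: only 150 ms available
print(veto_possible(c_time_ms=-150, v_time_ms=100))  # True
```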

2. LIBET ON VETOING
Libet offers two kinds of alleged evidence to support the idea that we have veto
power. One kind is generated by an experiment in which subjects are instructed to
prepare to flex their fingers at a prearranged clock time and “to veto the develop-
ing intention/preparation to act . . . about 100 to 200 ms before [that] time” (1985,
538). Subjects receive both instructions at the same time. Libet writes:

A ramplike pre-event potential was still recorded . . . resembl[ing] the RP of
self-initiated acts when preplanning is present. . . . The form of the “veto” RP
differed (in most but not all cases) from those “preset” RPs that were followed
by actual movements [in another experiment]; the main negative potential
tended to alter in direction (flattening or reversing) at about 150–250 ms
before the preset time. . . . This difference suggests that the conscious veto
interfered with the final development of RP processes leading to action. . . . The
preparatory cerebral processes associated with an RP can and do develop even
when intended motor action is vetoed at approximately the time that con-
scious intention would normally appear before a voluntary act. (1985, 538)2

Does this study provide evidence about V-time or about the vetoing of, in
Libet’s words here, “intended motor action”? Keep in mind that the subjects were
instructed in advance not to flex their fingers but to prepare to flex them at the prear-
ranged time and to “veto” this. The subjects intentionally complied with the request.
They intended from the beginning not to flex their fingers at the appointed time.
So what is indicated by the segment of what Libet refers to as “the ‘veto’ RP” that
precedes the change of direction?3 Presumably, not the presence of an intention to
flex, for then, at some point in time, the subjects would have both an intention to
flex at the prearranged time and an intention not to flex at that time. And how can
a normal agent simultaneously intend to A at t and intend not to A at t? If you were
to intend now to pick up a tempting doughnut two seconds from now while also
intending now not to pick up the doughnut two seconds from now, what would you
do? Would you soon start reaching for it with one hand and quickly grab that hand
with your other hand to halt its progress toward the doughnut? This is far from nor-
mal behavior.4 In short, it is very plausible that Libet is mistaken in describing what
is vetoed as intended motor action.
In some talks I have given on Libet’s work, I tell the audience that I will count
from 1 to 5, and I ask them to prepare to snap their fingers when I say “5” but not to
snap them. (After I say “5” and hear no finger snapping, I jokingly praise my audi-
ence for being in control of their fingers.) Someone might suggest that these people
have conscious intentions not to flex when I get to 5 and unconscious intentions to
flex then and that the former intentions win out over the latter. But this suggestion is
simply a conjecture—an unparsimonious one—that is not backed by evidence.
Given that the subjects in Libet’s veto experiment did not intend (and did not
decide) to flex, the veto experiment provides no evidence about how long it takes
to veto a conscious proximal intention (or decision) to flex. Furthermore, we do not
know whether the subjects had conscious proximal urges to flex. So the veto study
tells us little about V-time.
I mentioned that Libet offered a second kind of alleged evidence for “veto control.”
Subjects encouraged to flex “spontaneously” (in nonveto experiments) “reported that
during some of the trials a recallable conscious urge to act appeared but was ‘aborted’
or somehow suppressed before any actual movement occurred; in such cases the
subject simply waited for another urge to appear, which, when consummated, con-
stituted the actual event whose RP was recorded” (Libet 1985, 538). Libet asserts
that subjects were “free not to act out any given urge or initial decision to act; and
each subject indeed reported frequent instances of such aborted intentions” (530).
Unfortunately, even if we accept the subjects’ reports, we do not know whether the
urges they vetoed were proximal ones, as opposed, for example, to urges to flex a
second or so later, when the dot hits a certain point on the clock, or urges to flex
pretty soon. Here again Libet fails to provide weighty evidence about V-time.

3. BRASS AND HAGGARD ON VETOING


Marcel Brass and Patrick Haggard conducted an experiment in which subjects “were
instructed to freely decide when to execute a key press while observing a rotating
clock hand” on a Libet-like clock and “to cancel the intended response at the last
possible moment in some trials that they freely selected” (2007, 9141–9142). They
report that “the mean proportion of inhibition trials was 45.5%, but that there were
large interindividual differences, with the proportion of inhibition trials ranging
from 28 to 62%,” and that “subjects reported the subjective experience of deciding
to initiate action a mean of –141 ms before the key press on action trials” (9142).
If the subjects actually did what Brass and Haggard say they were instructed to do,
they vetoed their decisions an average of 45.5 percent of the time.
In light of Brass and Haggard’s results, should everyone now grant that Libet was
right—that people have time to veto conscious proximal decisions or intentions?
Naturally, some researchers will worry that, “in inhibition trials,” subjects were simu-
lating vetoing conscious proximal decisions rather than actually making conscious
proximal decisions to press that they proceeded to veto. A reasonable question to ask
in this connection is what strategy subjects thought they were adopting for complying
with the instructions. There are various possibilities, and four of nineteen subjects in
a “preexperiment” were excluded from the actual experiment because they “reported
that they were not able to follow the instructions” (2007, 9141). Apparently, these
subjects failed to hit upon a strategy that they deemed satisfactory for complying
with the instructions. What strategies might the remaining subjects have used?
Here is one candidate for a strategy:

Strategy 1. On each trial, consciously decide in advance to prepare to press the key when
the clock hand hits a certain point p, but leave it open whether, when the hand hits p, I will
consciously decide to press right then or consciously decide not to press on that trial. On
some trials, when the hand hits p, decide right then to press at once; and on some other trials
decide right then not to press. Pick different p points on different trials.5

Subjects who execute this strategy as planned do not actually veto conscious
proximal decisions to press. In fact, they do not veto any conscious decisions. Their
first conscious decision on each trial is to prepare to press a bit later, when the clock
hand hits point p. They do not veto this decision; they do prepare to press at that
time. Nor do they veto a subsequent conscious decision. If, when they think the
hand reaches p, they consciously decide to press, they press; and if, at that point,
they consciously decide not to press, they do not press. (Inattentive readers may
wonder why I think I know all this. I know it because, by hypothesis, the imagined
subjects execute strategy 1 as planned.)
A second strategy is more streamlined:

Strategy 2. On some trials, consciously decide to press the key and then execute that decision
at once; and on some trials, consciously decide not to press the key and do not press it.
Obviously, subjects who execute this strategy as planned do not veto any con-
scious decisions.
Here is a third strategy:

Strategy 3. On some trials, consciously decide to press the key a bit later and execute that
decision. On other trials, consciously decide to press the key a bit later but do not execute
that decision; instead veto (cancel, retract) the decision.

Any subjects who execute this strategy as planned do veto some conscious deci-
sions, but the decisions they veto are not proximal decisions. Instead, they are
decisions to press a bit later. A subject may define “a bit later” in terms of some
preselected point on the clock or leave the notion vague.
The final strategy to be considered is even more ambitious:

Strategy 4. On some trials, consciously decide to “press now” and execute that decision at
once. On other trials, consciously decide to “press now” but do not execute that decision;
instead immediately veto (cancel, retract) the decision.

If any subjects execute the fourth strategy as planned, they do veto some con-
scious proximal decisions. But, of course, we are faced with the question whether
this strategy is actually executable. Do subjects have enough time to prevent
themselves from executing a conscious proximal decision to press? In a real-world
scenario, an agent might proximally decide to do something and then detect
something that warrants retracting the decision. For example, a quarterback
might proximally decide to throw a pass to a certain receiver and then detect the
threat of an interception. Perhaps he has time to veto his decision in light of this
new information. The situation of the subjects in the experiment under consider-
ation is very different. They never detect anything that warrants retracting their
arbitrary decisions. If they were to retract their arbitrary decisions, they would
arbitrarily retract them. This is quite unlike the nonarbitrary imagined vetoing by
the quarterback.6
I asked whether Brass and Haggard’s subjects can prevent themselves from exe-
cuting conscious proximal decisions to press. The results of their experiment leave
this question unanswered. If we knew that some subjects were successfully using
strategy 4, we would have an answer. But what would knowing that require? Possibly,
if asked about their strategy during debriefing, some subjects would describe it as
I have described strategy 4. However, that alone would not give us the knowledge at
issue. People are often wrong about how they do things.

4. RESEARCH ON CONSCIOUS MOTOR INTENTIONS: IMPLICATIONS FOR VETOING
A study by Hakwan Lau, Robert Rogers, and Richard Passingham purports to pro-
vide evidence that “motor intentions” do not cause actions because they “arise after
the actions” (2007, 81). If conscious proximal intentions to flex a wrist or to press
a button in a Libet-style study are never present until after subjects act, then these
intentions obviously cannot be vetoed.
Lau and his coauthors motivate their work partly by a reference (2007, 81)
to the following comment by Daniel Wegner on Libet’s results: “The position of
conscious will in the time line suggests perhaps that the experience of will is a link
in a causal chain leading to action, but in fact it might not even be that. It might
just be a loose end—one of those things, like the action, that is caused by prior
brain and mental events” (2002, 55). Lau et al. observe that Wegner “does not
show that motor intentions are in fact not causing the actions” and that “if inten-
tions, in fact, arise after the actions, they could not, in principle, be causing the
actions” (81).
The main experiment (Experiment 1) reported by Lau et al. (2007) combines
Libet’s “clock paradigm” with the application of transcranial magnetic stimulation
(TMS) over the presupplementary motor area. The dot on their Libet clock revolves
at 2,560 ms per cycle. While watching the clock, subjects pressed a computer mouse
button “at a random time point of their own choice” (82). In the “intention con-
dition,” after a delay of a few seconds, subjects were required to move a cursor to
where they believed the dot was “when they first felt their intention to press the
button.” In the “movement condition,” they followed the same procedure to indi-
cate where they believed the dot was “when they actually pressed the button.” There
were a total of 240 trials per subject. TMS was applied in half of the trials. Half of
the applications occurred “immediately after action execution,” and half occurred at
a delay of 200 ms. There were 10 subjects.
Lau et al. discovered an effect that was not observed in a second experi-
ment (Experiment 2) involving the application of TMS either at 500 ms after
the button press or between 3,280 and 4,560 ms after it.7 “The effect observed
in Experiment 1 [was] the exaggeration of the difference of the judgments for
the onsets of intention and movement” (Lau et al. 2007, 87).8 The mean of the
time-of-felt-intention reports and the mean of the time-of-movement reports
shifted in opposite directions from the baselines provided by the mean reports
when TMS was not applied (see note 9 for details). The purpose of the second
experiment, in which this effect was not found, was “to test whether the effect
obtained in Experiment 1 was actually due to memory or responding, rather than
the experienced onset itself ” (84).9
As Lau and his coauthors view matters, “The main question is about whether
the experience of intention is fully determined before action execution” (2007,
87). Their answer is no: “The data suggest that the perceived onset of intention
depends at least in part on neural activity that takes place after the execution of
action” (89).
I have discussed these experiments in some detail elsewhere (Mele 2008; 2009,
chap. 6). Here I will focus on just a pair of observations. The first is that the data pro-
vided by Lau et al. leave it open that the time of the onset of subjects’ consciousness
of proximal intentions (C-time) does not depend at all on “neural activity that takes
place after the execution of the action.” The second is a more general observation
about the bearing of B-times on C-times.
What, exactly, do Lau and his coauthors mean by the suggestion that “the per-
ceived onset of intention depends at least in part on neural activity that takes
place after the execution of action” (2007, 89)? For example, do they mean to
exclude the following hypothesis: the subjects are conscious of a proximal inten-
tion before they press the button even though they do not have a definite opinion
about when they perceived the onset of their intention—or when they first felt
the intention (see note 8)—until after they act? Apparently not, for they grant
that “it could be the case that some weaker form of experience of intention is
sufficiently determined by neural activity that takes place before the execution
of the action” (89). I do not know exactly what Lau et al. mean by “weaker form”
here, but two points need to be emphasized. First, there is a difference between
becoming conscious of an intention and having an opinion about when one per-
ceived the onset of one’s intention. Second, “neural activity that takes place after
the execution of action” may have an effect on one’s opinion about when one first
became conscious of one’s intention even if it has no effect on when one actually
became conscious of one’s intention. For example, neural activity produced by
TMS can have an effect on B-times without having an effect on C-times. Possibly,
subjects’ beliefs about when they first felt an intention to press the button are still
in the process of being shaped 200 ms after they press; this is compatible with
their having become conscious of the intention before 0 ms. (Incidentally, even
if the beliefs at issue are still in the process of being shaped at 200 ms, they may
be in place shortly thereafter, and the window for TMS to affect B-time may be
pretty small.) Experiments 1 and 2 cut little ice if what we want to know is when
the subjects first became conscious of proximal intentions (C-time)—as opposed
to when, after they act, they come to a definite opinion about when they first
became conscious of these intentions and as opposed, as well, to how the time the
subjects believe to be C-time when they make their reports (B-time) is influenced
by neural processes that take place after action.
C-time is not directly measured. Instead, subjects are asked to report, after
they act, what they believe C-time was. This is a report of B-time. It may be that
the beliefs that subjects report in response to the experimenter’s question about
C-time are always a product of their conscious experience of a proximal intention
(or decision or urge), their conscious perception of the clock around the time of
the experience just mentioned, and some subsequent events. They may always
estimate C-time after the fact based partly on various conscious experiences
rather than simply remembering it. And making the estimate is not a particularly
easy task, as I will explain after a brief discussion of another pair of experiments.
The results reported by Lau and his coauthors (2007) suggest that reports of
B-times are reports of estimates that are based at least partly on events that follow
action. In a recent article, William Banks and Eve Isham (2009) provide confirma-
tion for this suggestion. Subjects in a Libet-style experiment were asked to report,
shortly after pressing a response button, where the cursor was on a numbered Libet
clock “at the instant they made the decision to respond” (18). “The computer reg-
istered the switch closure and emitted a 200-ms beep . . . at 5, 20, 40, or 60 ms after
closure.” Obviously, subjects were not being asked to report on unconscious deci-
sions; conscious decisions are at issue.
Banks and Isham found that although the average time between the beep and
B-time did not differ significantly across beep delays, the following two average
times did differ significantly across delays: (1) the time between EMG onset and
B-time; and (2) the time between switch closure and B-time. The data display an
interesting pattern (see Banks and Isham 2009, 19):

Beep delay (ms)   B-time to EMG (ms)   B-time to beep (ms)   B-time to switch closure (ms)
+5                –21                  –127                  –122
+20               +4                   –124                  –104
+40               +4                   –135                  –95
+60               +21                  –137                  –77

The beep affected B-time, and the beep followed switch closure.
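
One way to see the force of this pattern is to note that B-time sits at a roughly constant interval before the beep while drifting relative to EMG onset and switch closure. A small calculation on the means reported above makes this explicit; the numbers are Banks and Isham's, but the re-tabulation is only illustrative.

```python
# Re-tabulation of the reported mean B-times (ms): B-time tracks the beep at a
# near-constant interval rather than tracking the action itself. Values are the
# means reported above; the calculation is illustrative only.
rows = {  # beep delay: (B-time to EMG, B-time to beep, B-time to switch closure)
    5: (-21, -127, -122),
    20: (+4, -124, -104),
    40: (+4, -135, -95),
    60: (+21, -137, -77),
}
for delay, (to_emg, to_beep, to_switch) in rows.items():
    print(f"delay {delay:>2} ms: {abs(to_beep)} ms before the beep, "
          f"{to_emg:+} ms from EMG onset, {to_switch:+} ms from switch closure")
# Across delays the beep-relative interval varies by only ~13 ms, whereas the
# EMG-relative interval shifts by ~42 ms, consistent with B-time being an
# estimate shaped in part by events that follow the action.
```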
In a second experiment, Banks and Isham used a delayed video image “to create
deceptive information about the time of response [i.e., a button press]” (2009, 19).
A delay of 120 ms moved average B-time from –131 ms relative to switch closure
(when there was no delay) to –87 ms—a significant shift.
Both findings provide confirmation for the hypothesis that B-times are esti-
mates based at least partly on events that follow action. But Banks and Isham draw
a conclusion that goes well beyond this hypothesis. They take their findings “to
indicate that . . . the intuitive model of volition is overly simplistic—it assumes a
causal model by which an intention is consciously generated and is the immedi-
ate cause of an action”; and they add: “Our results imply that the intuitive model
has it backwards; generation of responses is largely unconscious, and we infer the
moment of decision from the perceived moment of action” (2009, 20). In fact,
however, their findings do not contradict the following hypothesis: subjects made
their conscious proximal decisions to press before they pressed, those decisions
were among the causes of their pressing actions, and what subjects believed about
when they made their conscious decisions was affected by events that happened
after they pressed.
As a first step toward seeing why this hypothesis is not contradicted by the find-
ings, attend to the following cogent line of reasoning: subjects’ beliefs about when
“they made the decision to respond” are affected by events that occur after switch
closure and therefore after they pressed the button (i.e., pressed it far enough to
close the switch); so those beliefs are not in place until after these subjects pressed
the button. Now, how does one get from this cogent reasoning to the conclusion
that these subjects’ conscious proximal decisions to press are not among the causes
of their pressing actions? If it could be inferred from Banks and Isham’s findings
that, just as subjects’ beliefs about when they made these conscious decisions are
not in place until after they pressed the button, their conscious decisions are not
made until after that time, we would have our answer. An event that occurs after an
action cannot be among its causes. But what would warrant the inference at issue?
If we assented to the premise that subjects’ beliefs about when they made their con-
scious proximal decisions were acquired when they made those conscious decisions,
we could validly make the inference. However, Banks and Isham’s findings provide
no basis for accepting this premise, and the fact that subjects’ beliefs about when
they consciously decided to press are affected by events that follow the pressing
actions leaves it wide open that their conscious proximal decisions precede these
actions (and can be vetoed).
Banks and Isham assert that “there is no way to measure” C-time “other than by
report” (2009, 20), and, obviously, the reports are of B-times. But it should not be
inferred from this methodological point that subjects made their conscious deci-
sions at the time at which their beliefs about when they made them were finally
acquired—that is, that the time at which these beliefs were first present was the time
at which the decisions were made. The reports may express beliefs that are acquired
after action even though what they are beliefs about are conscious decisions made
before action.
Some of the points I have been emphasizing are reinforced by attention to the fol-
lowing question: How accurate are subjects’ reports about when they first became
conscious of a proximal decision, intention, or urge likely to have been? Framed in
terms of C-time (the time of the onset of the subject’s consciousness of an item of
one of these kinds) and B-time (the time the subject believes to be C-time when
answering the experimenter’s question about C-time), the question about inten-
tions is this: How closely does B-time approximate C-time?
There is a lively literature on how accurate B-times are likely to be—that is,
on how likely it is that they closely approximate C-times (for a review, see van de
Grind 2002). This is not surprising. Reading the position of a rapidly revolving
dot at a given time is a difficult task, as Wim van de Grind observes (2002, 251).
The same is true of relating the position of the dot to such an event as the onset
of one’s consciousness of a proximal intention to click a button. Patrick Haggard
notes that “the large number of biases inherent in cross-modal synchronization
tasks means that the perceived time of a stimulus may differ dramatically from its
actual onset time. There is every reason to believe that purely internal events, such
as conscious intentions, are at least as subject to this bias as perceptions of external
events” (2006, 82).
One fact that has not received sufficient attention in the literature on accu-
racy is that individuals display great variability of B-times across trials. Patrick
Haggard and Martin Eimer (1999) provide some relevant data. For each of their
eight subjects, they locate the median B-time and then calculate the mean of the
premedian (i.e., “early”) B-times and the mean of the postmedian (i.e., “late”)
B-times. At the low end of variability by this measure, one subject had mean
early and late B-times of –231 ms and –80 ms, and another had means of –542
ms and –351 ms (132). At the high end, one subject’s figures were –940 ms and
–4 ms, and another’s were –984 ms and –253 ms. Bear in mind that these figures
are for means, not extremes. These results do not inspire confidence that B-time
closely approximates C-time. If there were good reason to believe that C-times
vary enormously across trials for the same subject, we might not find enormous
variability in a subject’s B-times worrisome in this connection. But there is good
reason to believe this only if there is good reason to believe that B-times closely
approximate C-times; and given the points made about cross-modal synchroniza-
tion tasks in general and the cross-modal task of subjects in Libet-style experi-
ments, there is not.
Another factor that may make it difficult for subjects to provide B-times that
closely approximate C-times is their uncertainty about exactly what they are expe-
riencing. As Haggard observes, subjects’ reports about their intentions “are easily
mediated by cognitive strategies, by the subjects’ understanding of the experimen-
tal situation, and by their folk psychological beliefs about intentions” (2006, 81).
He also remarks that “the conscious experience of intending is quite thin and eva-
sive” (2005, 291). Even if the latter claim is an overstatement and some conscious
experiences of intending are robust, the claim may be true of many of the con-
scious experiences at issue in Libet-style studies. One can well imagine subjects
wondering occasionally whether, for example, what they are experiencing is an
intention (or urge) to act or merely a thought about when to act or an anticipation
of acting soon. Lau et al. say that they require their subjects to move a cursor to
where they believed the dot on a Libet clock was “when they first felt their inten-
tion to press the button” (2007, 82, emphasis mine). One should not be surprised
if some subjects given such an instruction were occasionally to wonder whether
they were experiencing an intention to press or just an urge to press, for example.
(Presumably, at least some lay folk treat intentions and urges as conceptually dis-
tinct, as dictionaries do.) Subjects may also wonder occasionally whether they are
actually feeling an intention to press or are mistakenly thinking that they feel such
an intention.
One way to seek to reduce variability in a subject’s B-times is to give him or her
a way of conceiving of, for example, making a conscious proximal decision that is
easily grasped and applied. Subjects in a Libet-style experiment may be given the
following instructions:

One way to think of deciding to flex your right wrist now is as consciously
saying “now!” to yourself silently in order to command yourself to flex at once.
Consciously say “now!” silently to yourself whenever you feel like it and then
immediately flex. Look at the clock and try to determine as closely as possible
where the dot is when you say “now!” You’ll report that location to us after you
flex. (Mele 2009, 125)

Subjects can also be regularly reminded to make their decisions “spontaneously”—
that is, to make them without thinking in advance about when to flex.
If, as I predict, subjects given these instructions individually show much less vari-
ability in B-times than subjects given typical Libet-style instructions, we would have
grounds for believing that their reports about when they consciously said “now!”
involve less guesswork and, accordingly, additional grounds for skepticism about the
reliability of B-times in typical studies (i.e., for skepticism about the hypothesis that
B-times closely approximate C-times in these studies).10
I asked how accurate subjects’ reports about when they first became conscious of
a proximal intention or urge are likely to have been. “Not very” certainly seems to be
a safe answer. But there may be ways to improve accuracy.11 If such B-times as have
actually been gathered are unreliable indicators of C-times, little weight can be put
on them in arguments about whether or not there is time enough to veto conscious
proximal urges and the like; the same is true of arguments about whether or not
C-time is too late for conscious proximal intentions and the like to play a role in
producing corresponding overt actions.
5. CONCLUSION
My conclusions about the work discussed here are as follows:

1. Libet has not shown that his subjects have time to veto conscious proximal
decisions, intentions, or urges.
2. Brass and Haggard have not shown that their subjects actually veto conscious
proximal decisions.
3. Lau and his coauthors have not shown that people never become conscious
of proximal urges or intentions early enough to veto them. Nor have Banks
and Isham.

Much work remains to be done on vetoing. In the meantime, it would be wise
for parents to continue encouraging their young children to veto urges of certain
kinds—including proximal urges to snatch a sibling’s toy or to strike a playmate. If
the capacity to veto proximal urges is real, it would seem to come in handy. Learning
how to veto urges—including proximal ones—appears to be part of learning how
to be a morally responsible agent. To be sure, appearances can be misleading, but
I have not yet encountered good grounds for believing that none of our proximal
urges can be vetoed.

ACKNOWLEDGMENTS
A draft of this chapter was written during my tenure of a 2007–2008 NEH fellow-
ship, and parts of this chapter derive from Mele 2009. (Any views, findings, conclu-
sions, or recommendations expressed in this article do not necessarily reflect those
of the National Endowment for the Humanities.) For discussion or written com-
ments, I am grateful to Seth Shabo, Tyler Stillman, an audience at the University of
Edinburgh ( June 2008), and the editors of this volume.

NOTES
1. For discussion of conceptual differences among decisions, intentions, and urges, see
Mele 2009, chap. 1.
2. For a detailed discussion of the experiment, see Libet, Wright, and Curtis 1983; or
Libet, Gleason, et al. 1983.
3. A potential source of confusion should be identified. According to a common use
of the expression “readiness potential” (RP), the RP is a measure of activity in the
motor cortex that precedes voluntary muscle motion, and, by definition, EEGs gen-
erated in situations in which there is no muscle burst do not count as RPs. Thus, given
that there is no muscle burst in the veto experiment, some scientists would not refer
to what Libet calls “the ‘veto’ RP” as an RP.
4. Sean Spence and Chris Frith suggest that people who display anarchic hand syn-
drome “have conscious ‘intentions to act’ [that] are thwarted by . . . ‘intentions’ to
which the patient does not experience conscious access” (1999, 24).
5. In a variant of this strategy, the clock hand’s getting very close to p replaces the hand’s
hitting p.
6. Brass and Haggard found some insula activation in inhibition trials, and they suggest
that it “represents the affective-somatic consequences of failing to implement a strong
intention” (2007, 9144). If this is right, subjects who display insula activation are not
using strategy 2. Possibly, subjects who decide to prepare to press when the clock
hand hits p and then refrain from pressing would also display insula activation, in
which case displaying such activation is compatible with using strategy 1. Strategies
3 and 4 both require the vetoing of decisions.
7. Lau et al. also report on two other experiments of theirs. They were designed to test
whether the effect observed in the first experiment was “actually due to the general
mechanism of cross-modal timing using the clock face” (85) and whether it was due
to “TMS noise added to the motor system” (86).
8. Actually, given that the subjects were asked to indicate where they believe the dot
was “when they first felt their intention to press the button” (82), what Lau et al. refer
to as judgments about the “onsets of intention” should be referred to as judgments
about onsets of the feeling (or consciousness) of intentions. Incidentally, whereas
Lau et al. seem to assume that all intentions are conscious intentions, I do not (see
Mele 2004; 2009, chap. 2). Some readers may be curious about how the time of a key
press is related to the time of the EMG activity that defines Libet’s time 0. Patrick
Haggard and Martin Eimer report that EMG onset typically precedes key presses by
30 to 50 ms (1999, 130).
9. In Experiment 1, TMS was applied at a delay of either (1) 0 ms or (2) 200 ms, and in
Experiment 2 it was applied at a delay of either (3) 500 ms or (4) between 3,280 and
4,560 ms. The group means for the TMS effects in the intention condition at these
delays were (1) –9 ms, (2) –16 ms, (3) 9 ms, and (4) 0 ms; and the group means
for the TMS effects in the movement condition at the same delays were (1) 14 ms,
(2) 9 ms, (3) 5 ms, and (4) 5 ms (Lau et al. 2007, 83–84). As I mentioned, the main
effect Lau et al. found in Experiment 1 was “the exaggeration of the difference of the
judgments for the onsets of intention and movement” (87). This is a stronger effect
than the effect of TMS on time-of-felt-intention reports alone. Accordingly, Lau et al.
focus on the “exaggeration of . . . difference” effect, even though the movement judg-
ments and the effects of TMS on them tell us little about E-time, C-time, or B-time.
10. Recall Banks and Isham’s assertion that “there is no way to measure” C-time “other
than by report” (2009, 20). EMG signals can be recorded from speech muscles in
silent speech (Cacioppo & Petty 1981; Jorgensen & Binsted 2005). Such recordings
may be made in variants of the Libet-style study I proposed. Subjects’ after-the-fact
reports provide some evidence about when it was that they consciously silently said
“now!”; and EMG recordings from, for example, the larynx in an experiment of the
kind at issue may provide another form of evidence about this. It would be interest-
ing to see how the results of the two different measures are related. A relatively simple
experiment would leave overt action out. Subjects would be instructed to watch a
Libet clock, to consciously and silently say “now!” to themselves whenever they feel
like it, and to be prepared to report after the silent speech act on where the dot was on
the clock when they said “now!” The times specified in the reports can be compared
to the times of the EMG activity from speech muscles.
11. Would subjects’ conscious, silent “now!”s actually express proximal decisions? Perhaps
not. To see why, consider an imaginary experiment in which subjects are instructed to
count—consciously and silently—from 1 to 3 and to flex just after they consciously
say “3” to themselves. Presumably, these instructions would be no less effective at
eliciting flexings than the “now!” instructions. In this experiment, the subjects are
treating a conscious event—the conscious “3”-saying—as a go signal. (When they
say “3,” they are not at all uncertain about what to do, and they make no decision
then to flex.) Possibly, in a study in which subjects are given the “now!” instruc-
tions, they would not actually make proximal decisions to flex but would instead
consciously simulate deciding and use the conscious simulation event as a go signal.
However, the possibility of simulation is not a special problem for studies featuring
the “now!”-saying instructions. In Libet’s own studies, some subjects may be treating
a conscious experience—for example, their initial consciousness of an urge to flex—
as a go signal (see Keller and Heckhausen 1990, 352).

REFERENCES
Banks, W., and E. Isham. 2009. “We Infer Rather Than Perceive the Moment We Decided
to Act.” Psychological Science 20: 17–21.
Brass, M., and P. Haggard. 2007. “To Do or Not to Do: The Neural Signature of
Self-Control.” Journal of Neuroscience 27: 9141–9145.
Cacioppo, J., and R. Petty. 1981. “Electromyographic Specificity during Covert Information
Processing.” Psychophysiology 18: 518–523.
Haggard, P. 2005. “Conscious Intention and Motor Cognition.” Trends in Cognitive Sciences
9: 290–295.
Haggard, P. 2006. “Conscious Intention and the Sense of Agency.” In N. Sebanz and W.
Prinz, eds., Disorders of Volition, 69–85. Cambridge, MA: MIT Press.
Haggard, P., and M. Eimer. 1999. “On the Relation between Brain Potentials and the
Awareness of Voluntary Movements.” Experimental Brain Research 126: 128–133.
Jorgensen, C., and K. Binsted. 2005. “Web Browser Control Using EMG Based Subvocal
Speech Recognition.” Proceedings of the 38th Hawaii International Conference on System
Sciences 38: 1–8.
Keller, I., and H. Heckhausen. 1990. “Readiness Potentials Preceding Spontaneous
Motor Acts: Voluntary vs. Involuntary Control.” Electroencephalography and Clinical
Neurophysiology 76: 351–361.
Lau, H., R. Rogers, and R. Passingham. 2007. “Manipulating the Experienced Onset of
Intention after Action Execution.” Journal of Cognitive Neuroscience 19: 81–90.
Libet, B. 1985. “Unconscious Cerebral Initiative and the Role of Conscious Will in
Voluntary Action.” Behavioral and Brain Sciences 8: 529–566.
Libet, B. 1999. “Do We Have Free Will?” Journal of Consciousness Studies 6: 47–57.
Libet, B. 2004. Mind Time. Cambridge, MA: Harvard University Press.
Libet, B., C. Gleason, E. Wright, and D. Pearl. 1983. “Time of Unconscious Intention
to Act in Relation to Onset of Cerebral Activity (Readiness-Potential).” Brain 106:
623–642.
Libet, B., E. Wright, and A. Curtis. 1983. “Preparation- or Intention-to-Act, in Relation
to Pre-event Potentials Recorded at the Vertex.” Electroencephalography and Clinical
Neurophysiology 56: 367–372.
Mele, A. 1992. Springs of Action: Understanding Intentional Behavior. New York: Oxford
University Press.
Mele, A. 2004. “The Illusion of Conscious Will and the Causation of Intentional Actions.”
Philosophical Topics 32: 193–213.
Mele, A. 2006. Free Will and Luck. New York: Oxford University Press.
Mele, A. 2008. “Proximal Intentions, Intention-Reports, and Vetoing.” Philosophical
Psychology 21: 1–14.
Mele, A. 2009. Effective Intentions: The Power of Conscious Will. New York: Oxford
University Press.
Spence, S., and C. Frith. 1999. “Towards a Functional Anatomy of Volition.” Journal of
Consciousness Studies 6: 11–29.
van de Grind, W. 2002. “Physical, Neural, and Mental Timing.” Consciousness and Cognition
11: 241–264.
Wegner, D. 2002. The Illusion of Conscious Will. Cambridge, MA: MIT Press.
5

From Determinism to Resignation; and How to Stop It¹

RICHARD HOLTON

DETERMINISM AND PREDICTABILITY


It could well be—the science is complicated and uncertain—that the world is deter-
ministic.2 What do we mean when we say that? Something like this: that given the
initial conditions, and the laws of nature, the way that the world will unroll is deter-
mined. Or, if we want to avoid using the idea of determination within the definition,
we might say this: that any two situations with identical beginnings working under
identical laws will unroll in the same way.3
Alternatively, we might put the point in terms of knowledge. On this approach,
to say that the world is deterministic is to say that if one knew the initial conditions
and the laws of nature, then one could, given enough time and computing power,
work out what would happen.
I shall reserve the term determinism for the first of these theses; the second I’ll call
the thesis of predictability. Predictability seems to require determinism: How could
knowledge of the initial conditions and laws enable one to know what will happen
if they did not determine what will happen? Whether there is an entailment in the
other direction is less obvious. Before investigating that issue, though, let us identify
another pair of doctrines.

FATALISM AND RESIGNATION


Oedipus was fated to kill his father and marry his mother. What do we mean when
we say that? Certainly it must have been true that Oedipus would kill his father and
marry his mother. But at least on one understanding, the claim seems to involve
more: it involves the idea that there was nothing that Oedipus could have done that
would have stopped him from killing his father and marrying his mother. Somehow,
no matter what he chose to do, no matter what actions he performed, circumstances
would conspire to guarantee those outcomes. Fatalism, understood this way, thus
amounts to powerlessness to avoid a given outcome.4
We can put the point in terms of a counterfactual:

There are some outcomes such that whatever action Oedipus were to per-
form, they would come about

or more formally:
[∃x: outcome x] [∀y: action y] (If Oedipus were to perform y,
then x would come about)

This is a very specific fatalism: two specific outcomes are fated. There is no impli-
cation that Oedipus could not effectively choose to do other things: he was free to
choose where he went, what he said, what basic bodily actions he performed. But
we can imagine Oedipus’s choices being progressively constrained, so that more
and more outcomes become fated. How far could the process go? We could cer-
tainly imagine a case in which he retained control only of his basic bodily actions;
all the distal outcomes that were the further consequences of those actions would
be fated. That gives us:
[∀x: actual outcome x][∀y: possible action y] (If Oedipus were
to perform y, then x would come about)

We could go further and imagine a global action fatalism, where Oedipus’s choices
would have no impact on his actions, even the most basic. In other words, whatever
Oedipus chose, his actions would be the same:
[∀x: actual action x][∀y: possible choice y] (If Oedipus were
to make y, then Oedipus would perform x)5

I’ll call counterfactuals like these three powerlessness counterfactuals.


Could we go further still, and imagine a form of fatalism in which Oedipus had
no control even over his choices? Here things become unclear. We have made sense
of Oedipus’s powerlessness in terms of the lack of impact of something that is in his
control on something else: his choices and actions lack the power to affect certain
outcomes. Once he loses control of everything, we can’t see it this way; and once
he loses control of his choices, it looks as though he has lost control of everything.
Perhaps there is some way of understanding a totally global fatalism, but I don’t see
what it is. So let us go no further than the idea of losing control over some or all of
one’s actions, as that is captured by the powerlessness counterfactuals. That will be
far enough.
Oedipus knew his fate, but knowledge is not essential to fatalism. One can be
fated without knowing one’s fate, or without knowing that one is fated at all. Indeed,
at least one prominent social psychologist has suggested that choice is never caus-
ally effective—that something like the third powerlessness counterfactual is true—
even though most of us are convinced that our choices are effective.6
Suppose, though, that one comes to realize that a certain outcome is fated. Does
it still make sense to strive to frustrate one’s fate? I shan’t go so far as to claim that
one is rationally bound to bow to one’s fate; certainly we might admire the resolve of
the person who did not, and might not judge them irrational. But once one realizes
that one will have no effect on the outcome, it is surely at least rationally permis-
sible to stop trying to control the outcome. More broadly, even if one doesn’t know
which outcome is fated, the knowledge that some outcome is fated seems to give one
rational permission to stop trying to control the outcome, since one knows that
such activity will be pointless. Knowledge of fatalism as we are understanding it
thus legitimates a fatalistic attitude in the popular sense: the view that since there
is nothing that one can do to affect the future, there is no sense in trying. Without
trying to define it rigorously, let us call such a position resignation.

RELATING THE DOCTRINES


So we have, on the one hand, determinism and predictability; and, on the other,
fatalism and the attitude of resignation. My interest is in the relations that hold
between these various positions. In particular, conceding that knowledge of fatal-
ism rationally legitimates resignation (if you know that an outcome is fated, there is
no point trying to influence it), my interest is in whether there is a path from either
determinism or predictability to resignation.
A few philosophers, typically those who are opposed to determinism, have held
that belief in determinism legitimates resignation: that if you think the world is
determined you are justified in not trying to influence it. Critics of the Stoics took
this line in propounding the Lazy Argument, and some influential recent writers
have made similar claims. More broadly, there is some evidence, as we shall see, that
ordinary people tend to conflate determinism and fatalism, and so tend to move
directly from a belief in determinism to resignation.
Nonetheless, I think that most philosophers have held that moving from deter-
minism to resignation is simply a mistake. Resignation is only justified if one thinks
that one is powerless to affect the future—in effect, if fatalism is true—and deter-
minism gives no grounds to think that. I agree, for reasons that I shall outline.
But that is not the end of the matter. For while one cannot move from determin-
ism to resignation, I will argue that things are rather different for the move from pre-
dictability to resignation. That is, there is an argument that takes us from the premise
that an outcome can be predicted, to the conclusion that there is no point in trying
to influence that outcome, and hence that an attitude of resignation is appropriate.
I am not completely sure that the argument is a good one, but it is not obviously
flawed. It certainly does not fall foul of the standard objections that can be raised
against the move from determinism to resignation.
I suggest that this is important. Predictability is frequently run together with
determinism. How many introductory classes on free will move without comment
from the idea that everything that happens is determined, to the idea that if you
knew enough you could predict what will happen?7 Certainly in the classes I have
taught I have often been guilty of such a slide. And if this is true of philosophers,
whose job it is to be careful about such things, isn’t it likely to be true of many other
people who have given it less thought? It is little wonder, then, that people tend to
move from a belief in determinism to a belief in fatalism and to an attitude of resig-
nation, for they may be conflating determinism with predictability.
The right response is to distinguish more clearly between determinism and pre-
dictability. In the last section I give further grounds for doing so. We should not
just reject predictability because it gives a plausible route to resignation. There is a
more direct argument for thinking that, in a world containing reflective beings like
ourselves, predictability must be false.

MOVING FROM DETERMINISM TO RESIGNATION: SOME EXAMPLES


I start with the argument that I reject, the one that takes us from determinism to
resignation. It will be useful to have some examples. The Stoics believed in deter-
minism. In response, critics argued that they were committed to resignation:

If it is your fate to recover from this illness, you will recover, regardless of
whether or not you call the doctor. Likewise, if it is your fate not to recover
from this illness, you will not recover, regardless of whether or not you call the
doctor. And one or the other is your fate. Therefore it is pointless to call the
doctor.8

Since it is patently not pointless to call the doctor, the critics concluded that deter-
minism must be false.
As an argument against determinism this is not terribly compelling, since it seems
simply to equate determinism with fatalism. The critics allege that in believing in
determinism the Stoics are committed to certain outcomes being fated; to embrac-
ing something like the powerlessness counterfactuals. But we need an argument for
such an attribution, and the critics failed to provide one. Others have tried to do
better. Recognizing the difference between the views, some more recent philoso-
phers have tried to give an argument that determinism nonetheless entails fatalism.
Richard Taylor writes:

A determinist is simply, if he is consistent, a fatalist about everything; or at
least, he should be. For the essential idea that a man would be expressing by
saying that his attitude was one of fatalism with respect to this or that event—
his own death, for instance—is that it is not up to him whether, or when or
where this event will occur, that it is not within his control. But the theory of
determinism, as we have seen, once it is clearly spelled out and not hedged
about with unresolved “ifs,” entails that this is true of everything that ever hap-
pens, that it is never really up to any man what he does or what he becomes,
and that nothing ever can happen, except what does in fact happen.9

He goes on to say that fatalism should lead to resignation:

A fatalist is best thought of, quite simply, as someone who thinks he cannot do
anything about the future. He thinks it is not up to him what will happen next
year, tomorrow, or the very next moment. He thinks that even his own behav-
iour is not in the least within his power, any more than the motion of distant
heavenly bodies, the events of remote history, or the political developments
in faraway countries. He supposes, accordingly, that it is pointless for him to
deliberate about anything, for a man deliberates only about those future things
he believes to be within his power to do and forego.

The idea of resignation is here put in terms of deliberation, but presumably the same
holds of choosing and of striving.
This way of thinking is not limited to philosophers. The psychologists Roy
Baumeister, William Crescioni, and Jessica Alquist also characterize deterministic
beliefs in terms of a kind of resignation:

To the lay determinist, everything that happens is inevitable, and nothing else
was possible. Thinking about what might have been is thus pointless if not
downright absurd, because nothing else might have been (other than what
actually happened).

They go on to suggest that such a response is, at the very least, consistent with deter-
minism, and perhaps that it is entailed by it:

The lack of counterfactual thinking in the no-free-will condition can be con-
sidered as a straightforward response that is consistent with determinism.
After all, if nothing could have happened other than what actually happened,
then there are no counterfactuals.10

By “no counterfactuals” I take it that they mean there are no true counterfactuals with
false antecedents, and hence that counterfactual thinking is pointless.
There is reason to think that a similar move from determinism to resignation
exists in popular thought. A number of studies have demonstrated that when ordi-
nary people are brought to believe in determinism, they are less likely to behave
morally. How should we explain this? Elsewhere I have suggested that it is because
they fall prey to resignation: they move from the idea of determinism to the idea of
fatalism, and so become convinced that there is no point in struggling against their
baser urges.11 Work by Eddy Nahmias and Dylan Murray provides support for this.
They find that people tend to conflate determinism with what they call “bypassing,”
that is, with the view that what determines action bypasses anything that the con-
scious self can do. Fatalism is a form of bypassing.12

WHY DETERMINISM DOESN’T ENTAIL FATALISM OR SUPPORT RESIGNATION
I take it that fatalism shouldn’t be conflated with determinism; they are clearly dif-
ferent views. But what of the arguments we have just seen that hold that deter-
minism entails fatalism? Those too I take to be in error. The central idea in fatalism
concerns what I called powerlessness counterfactuals: the claim that outcomes are not
dependent on agents’ actions or intentions. Returning to the example from the Lazy
Argument, fatalism means that either

If I were to call the doctor I would recover, and
If I were to not call the doctor I would recover

or

If I were to call the doctor I would not recover, and
If I were to not call the doctor I would not recover

will be true. And if one of them is true, then it does indeed seem pointless to call
the doctor. Calling the doctor would only be worthwhile if it made a difference to
recovery; ideally if recovery were counterfactually dependent upon it, that is, if the
following two counterfactuals were true:

If I were to call the doctor I would recover; and
If I were not to call the doctor I would not recover

But the truth of those counterfactuals is quite compatible with determinism. Indeed,
the central idea behind determinism is that outcomes are determined causally. And,
though the exact relation between counterfactuals and causation is controversial, it
seems that something like that pair of counterfactuals will be true whenever there
is causation.13
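To see the contrast at a glance, write C for “I call the doctor,” R for “I recover,” and □→ for the counterfactual conditional of the Lewis-style accounts cited in note 13 (the shorthand is mine and only restates the displayed conditionals). Fatalism about recovery is the claim that one of these pairs holds:

\[
(C \mathrel{\Box\!\!\rightarrow} R) \wedge (\neg C \mathrel{\Box\!\!\rightarrow} R)
\qquad \text{or} \qquad
(C \mathrel{\Box\!\!\rightarrow} \neg R) \wedge (\neg C \mathrel{\Box\!\!\rightarrow} \neg R)
\]

whereas calling is worthwhile when recovery counterfactually depends on the call:

\[
(C \mathrel{\Box\!\!\rightarrow} R) \wedge (\neg C \mathrel{\Box\!\!\rightarrow} \neg R)
\]

Determinism is compatible with this dependence pair; only the first two pairs would license resignation.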
So determinism does not entail fatalism, nor does it thereby support resignation;
the Lazy Argument against the Stoics does not work. For parallel reasons, it seems
to me that Taylor’s argument, at least as I have understood it, does not work; it sim-
ply isn’t true that if a man embraces determinism he should conclude that for all
events “it is not up to him whether, or when or where this event will occur, that it is
not within his control.”14 And the same can be said of the argument that Baumeister
et al. credit to the lay determinist.15
Indeed, it might seem that the move from determinism to fatalism and so to res-
ignation is so transparently flawed that no thoughtful person would ever make it,
and that it is quite uncharitable to attribute it to ordinary people. This is a response
that I have often received when I have suggested that ordinary thinking tends to run
determinism and fatalism together. Putting aside the issue of whether this is a gross
mistake—what can look obvious once it is carefully spelled out in a philosophi-
cal argument need not be obvious before—I will argue that there is another route
to resignation, from a starting point that is often, even among philosophers, taken
to be effectively equivalent to determinism. Perhaps, rather than making a simple
error, ordinary thinking is on to something that many philosophers have missed.

FROM PREDICTABILITY TO RESIGNATION


Fatalism leads naturally to resignation. I want to argue that predictability leads to it
too. This is clearest when the future is not just predictable but predicted. But even
when the predictions could be made but have not been, there is a slippery slope
argument that resignation still follows. Like most slippery slope arguments, it is not
completely conclusive, but it is strong enough to raise a serious worry.
Let us return to the Lazy Argument, and to whether you should call the doctor.
Let us fill in a few more details to ensure that the issues are clear. Suppose that you
are seriously ill. Suppose too that you live where medicine is private and that you
are poor and have no insurance, so that calling the doctor would cost a great deal of
what little you have. However, you have good grounds for thinking that doctors are
typically effective—you know, for instance, those who call them typically do bet-
ter than those who don’t. So you tend to believe of other people that the following
counterfactuals hold:

If they were to call the doctor they would recover; and
If they were not to call the doctor they would not recover

Putting all this together, you think that while both calling and not calling are open
options for you (you could easily imagine yourself doing either), it is rational for
you to call.
But now suppose that you simply know you will recover. This is not conditional
knowledge: it is not merely that you know that you will recover if you call the doc-
tor, and you intend to call the doctor. You know that you will recover simpliciter.
Perhaps you got the knowledge by divine revelation, or from some trustworthy and
uncharacteristically straightforward oracle; perhaps you have some telescope that
can look into the future; perhaps you were told by a returning time traveler, who
could even be yourself; or perhaps you have some machine that, given the laws and
initial conditions, can compute how certain events will turn out.16 There are real
question marks over the possibility of any of these, but put such doubts aside for
now. Assume simply that you have robust knowledge that you will recover, robust
enough that it can provide a fixed point in your practical reasoning.
Is it rational to try to call the doctor? I suggest that it is not; in this sense, robust
predictability is like fatalism. For if you know that you will recover, then there appear
to be two possibilities for you to consider:

I call the doctor and I recover; and
I do not call the doctor and I recover

Of these you clearly prefer the latter, since calling the doctor is expensive. In each
case the second conjunct you know to be true. But since it appears to be up to you
which of the first conjuncts you make true, and hence which of the two conjunc-
tions is true, then it is rational to make true the one that you prefer. So it is rational
not to call the doctor.
Now suppose that you know in a similarly robust fashion that you will
not recover. In this case the possibilities are:

I call the doctor and I do not recover; and
I do not call the doctor and I do not recover
Similar reasoning applies. You prefer the latter to the former, since at least you keep
your money to make your last days comfortable and to pass on to your loved ones;
so, given that you know you will not recover, it is rational not to call the doctor.
Note the difference between this case and the case in which determinism is true.
Determinism entails only that you know that you will either recover or not:

Know (p or not-p)

Prediction distributes that knowledge:

Know (p) or Know (not-p)

The interesting result is that if you either robustly know that you will recover, or you
robustly know that you will not, it is rational to not call the doctor.
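In epistemic-logic shorthand, with K read as “you robustly know that” (again my notation, simply restating the point), the difference is that knowledge of a disjunction does not distribute over its disjuncts:

\[
K(p \vee \neg p) \;\not\Rightarrow\; K(p) \vee K(\neg p)
\]

Determinism gives you only the left-hand side, which is trivial because p ∨ ¬p is a logical truth; robust prediction gives you one of the disjuncts on the right, and it is that known disjunct which supplies a fixed point for deliberation.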
Suppose, though, that you do not know what will happen, but you know that it is
predictable. This is where we encounter the slippery slope. Suppose first that while
you do not yet know whether or not you will recover, you know that your friend
knows whether or not you will; and, moreover, you know that very soon your friend
will tell you. Suppose, however, that you have to make your decision about whether
to call the doctor before you are told. Should you call? It seems to me that you might
reason as follows. If I could only delay my decision until after I was told, I know
whatever I was told I would decide to not call the doctor. But since that would be a
rational thing to do once I was told, that is good enough grounds for deciding not
to call before I am told.17
Now for the next step along the slippery slope: it’s not my friend who knows, it is
rather that the prediction has been made by a machine that can predict the future,
and I will not have a chance to read the prediction before I have to decide whether
or not to call. The practical conclusion is as before.
Now the next step: the machine is still doing the calculations; it will not finish
until after I have to decide whether to call. Same practical conclusion.
The next step: the machine has all the evidence it needs, but it hasn’t yet started
doing the calculations. Same practical conclusion.
Next: the machine isn’t actually going to do the calculations, but it could. Same
practical conclusion.
Next: the machine hasn’t actually been built yet, but we know how to build it, and
if it were built it could do the calculations. Same practical conclusion.
Next: we don’t know how to build the machine, but we know that in prin-
ciple a machine could be built that could do the calculations. Same practical
conclusion.
There are many places along this slope where someone might want to call a halt.
Perhaps it should be halted right at the beginning: there is all the difference in the
world between knowing something yourself, and having someone else know it. One
might make a case for that, but it does seem rather odd. Moreover, I think that there
is a principled reason for following the slope all the way to the end. The conclusion
of the slope is to move the condition that is needed for resignation from

Know (p) or Know (not-p)


to the weaker

Knowable (p) or Knowable (not-p)

The reason that knowledge justified resignation was that it provided a fixed
point around which deliberation could move. But the same is true if an outcome
is merely knowable in advance. If it is knowable in advance, then one could
come to know it. And so there is a fixed point, even if one doesn’t know what
it is.
Thought of in this way, though, we see why the slippery slope doesn’t take us all
the way to determinism. For determinism doesn’t by itself provide us with the pos-
sibility of a fixed point that could be used in deliberation about what will happen.
For determinism by itself doesn’t guarantee that what will happen is knowable. One
way in which it could fail to be knowable, quite compatible with determinism, is
if the very process of coming to believe something would have an impact on what
would happen. And it is this feature that I will use in arguing against the possibility
of predictability. Before that, though, let us address the question of whether predict-
ability entails fatalism.

DOES PREDICTABILITY ENTAIL FATALISM?


I argued that knowledge that you will recover presents you with two possible
outcomes

I call the doctor and I recover


I do not call the doctor and I recover

of which it is rational to choose the second. Now it might seem that your knowledge
that you will recover, combined with your control over whether or not you call,
gives you confidence in both of the following counterfactuals:

If I were to call the doctor I would recover


If I were to not call the doctor I would recover

But (assuming that there is nothing else that you can do to affect your recovery) that
is just to claim that your recovery is fated. So should we conclude that predictability
entails fatalism?
I think that we should not. These examples don’t show it so clearly. But imagine
another that does. Suppose that a coin is hidden under one of the two cups in front
of you. You have to choose the cup where it is. Suppose you come to know that you
will choose correctly. So there seem to be two outcomes:

I choose the left-hand cup and the coin is under the left-hand cup
I choose the right-hand cup and the coin is under the right-hand cup
Should you now go on to endorse the counterfactuals:

If I were to choose the left-hand cup, the coin would be under the left-hand
cup
If I were to choose the right-hand cup, the coin would be under the right-hand
cup?

Surely not. The position of the coin is fixed; my choice isn’t going to move it around.
Foreknowledge of the outcome does not commit one to the truth of the counterfac-
tuals. Likewise with the doctor. It could be that recovering or not recovering is fated,
so that nothing I could do would have any impact. But that is not a conclusion that
I can draw from the simple knowledge of what will happen.

DENYING PREDICTABILITY
As I have said, philosophers tend to move between determinism and predictability
without thinking much is at stake. I claim that, even if determinism is true, we have
excellent grounds for thinking that predictability is not. And this doesn’t result from
anything especially sophisticated about human beings and their essential unpredict-
ability. I think that predictability would be equally well refuted by the existence of
the kind of machine that many of my colleagues at MIT could construct in their
lunch hour. Let me explain.

THE CHALLENGE
Suppose that I offer you a challenge. Your job is to predict whether the lightbulb on
a certain machine will be on or off at a given time. Moreover, I will give you consid-
erable resources to make your prediction: you can have full information about the
workings of the machine, together with as much computing power as you wish. Lest
that seem to make things too easy, I point out that the makers of the machine had one
goal in mind: to render your prediction wrong.
Of course, a natural way for the makers to construct the machine would be to
equip it with a mechanism that could detect what you would predict, and then a
device to ensure that the prediction was false. Still, there are things that you might
do to avoid this. One technique would involve keeping your prediction a secret
from the machine; another would involve making it so late that the machine would
not have time to act on it. But unfortunately for you, I rule out both of these options.
Your prediction has to be made in full view of the machine. I give you a lightbulb
of your own. You must switch it on to indicate that you predict that the light on the
machine will be on at the set time; you must switch it off to indicate that you predict
the light on the machine will be off. And I require that the prediction be made a
full minute before the set time. So, for instance, if you predict that the light on the
machine will be on at noon, you have to have your light switched on at 11:59.
Now you seem to be in trouble. For, as we said at the outset, making a machine
capable of defeating you in the challenge is surely within the capabilities of many
of my colleagues, indeed many of the undergraduates, at MIT. All it would need is
a simple light sensor, and a circuit that gives control of its light to the output of the
sensor. If it senses that your light is on, it switches its light off, and vice versa.
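A minimal sketch of that control loop, written here in Python, may help fix ideas; the sensor and lamp interfaces (read_predictor_light, set_own_light) are hypothetical stand-ins for whatever hardware the builders would actually use, and the toy run at the bottom simply simulates a predictor whose light has been fixed in advance.

import time
from typing import Callable


def frustrate(read_predictor_light: Callable[[], bool],
              set_own_light: Callable[[bool], None],
              deadline: float,
              poll_interval: float = 0.01) -> None:
    """Keep this machine's light in the state opposite to the predictor's
    light until the set time, so any prediction displayed (and held) in
    advance comes out false when the deadline arrives."""
    while time.time() < deadline:
        set_own_light(not read_predictor_light())
        time.sleep(poll_interval)


# Toy run: the "predictor" commits to ON; by the set time the machine is OFF.
if __name__ == "__main__":
    predictor_light = True              # the prediction, fixed in advance
    machine_light = [False]             # a mutable cell standing in for the bulb
    frustrate(lambda: predictor_light,
              lambda on: machine_light.__setitem__(0, on),
              deadline=time.time() + 0.1)
    print("prediction:", predictor_light, "machine light at set time:", machine_light[0])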
At this point you might think that I have rigged the challenge against you. So
let me make one final offer. I will give you not simply all the knowledge about the
workings of the machine but all the knowledge about the workings of the universe.
Moreover—don’t ask how I do this—I guarantee that the universe will be determi-
nate. And, as before, I give you as much computing power as you wish.
Should you now accept the challenge? I think not. For how can this new knowl-
edge help? At best it seems that it will enable you to predict how you will lose: you
will perhaps be able to calculate what move you will make, and how the machine
will respond to it. But should you try to use that knowledge by changing your pre-
diction to beat the machine, either you will fail to make the change, or it will turn
out that what you thought was knowledge was not knowledge after all. The machine
will not be concerned with your machinations. Whatever it is that you predict, it
will simply control its light to ensure that your prediction is false.
I say that the best you will be able to do, despite your knowledge and your com-
puting power, is to predict how you will lose. I actually doubt that you will even be
able to do that reliably. But to see this, let me fill in some background.
My story might remind some readers of an article written by Michael Scriven, and
refuted by David Lewis and Jane Richardson.18 Scriven argued that human beings
are unpredictable because they have the capacity to act in a way that runs contrary
to any prediction, and that was the source of the refutation. But I am not making a
parallel claim about the machine that I have challenged you to predict. It would be
quite possible for someone else to predict the machine’s behavior. Indeed, it would
be quite possible for you to predict the machine’s behavior, provided that you didn’t
reveal the prediction to the machine. What I claim that you will not be able to do is
to make an accurate prediction that you reveal to the machine ahead of time.
However, like Scriven, I do want the conclusions to apply to human beings.
Obviously if they are countersuggestible they should be just as hard to predict as the
machine. But there is something more complicated about human beings, since they
are able to make predictions about themselves. This means that they can play the
role both of predictor and of frustrater. Suppose that Tiresias wants to predict what
he will do but also wants to ensure that he will not act in accord with his prediction.
Provided that he makes his prediction clearly to himself, and well enough in advance,
it will be his frustrating side, and not his predicting side, that will triumph.
This is why I doubt that you will even be able to make an accurate prediction
about how your game with the machine will go. Here your motivation is not to frus-
trate yourself; it is to trick the machine. But that is good enough to render any calcu-
lation you make highly unreliable.

CONCLUSION
Determinism and predictability are often run together, but they should be kept well
apart. While determinism does not lead to resignation, predictability plausibly does.
Fortunately, while determinism may well be true, there is a good argument against
predictability, and so, even if it does entail resignation, this poses no threat to our
ordinary way of thinking about things. The tangle of doctrines surrounding issues of
determinism and freedom is dense. It should be no surprise if ordinary people get
caught in it; philosophers are still some way from disentangling them all.

NOTES
1. Thanks to the members of the graduate class at MIT where I developed some of
these ideas; to the audience at Edinburgh who discussed an early draft; and to Rae
Langton, Damien Rochford, Stephen Yablo, and the editors for useful conversations.
Special thanks to Caspar Hare for much input; someday we might write a better ver-
sion of this essay together.
2. For a review of the science, see Earman, 2004.
3. Lewis, 1983.
4. I don’t say that this is the only way to understand it; we might think that part of the
poignancy of Oedipus’s story is that he could have avoided his fate if only he hadn’t
tried to do so. To this extent then, my use of the term fatalism will be stipulative.
5. I have here defined powerlessness in terms of the powerlessness of choices, which I
find to be the most intuitive way of characterizing the idea. Alternatively, it could be
put in terms of the powerlessness of desires, or of judgments. This would not affect
the points made here.
6. Wegner, 2002, and the work of Benjamin Libet on which Wegner draws. See Mele,
2010, for a convincing rebuttal of this and other similar positions. For more nuanced
conclusions from similar considerations see Haynes, this volume.
7. Even when the distinction is noted, little importance is placed on it. The much
used collection Reason and Responsibility says that determinism and predictabil-
ity are roughly equivalent while noting the distinction in a footnote (Feinberg and
Shafer-Landau, 1999, 410).
8. Cicero, 1987, 339. For a detailed examination of the argument as the Stoics faced it,
see Bobzien, 1998.
9. Taylor, 1963, 55.
10. Baumeister et al., 2011, 3.
11. Holton, 2009, chap. 8.
12. Nahmias and Murray, 2011.
13. The most influential counterfactual account is due to David Lewis (1973). For sub-
sequent work, see Collins et al., 2004. A recent development understands the coun-
terfactuals in terms of interventions; see Pearl, 2000, and Woodward, 2003, for the
account, and the papers collected in Gopnik and Schulz, 2007, for evidence of its
psychological applicability.
14. Perhaps I have misinterpreted Taylor; perhaps he is really aiming to give something
like the argument that if the past is fixed, and the past entails the future, then the
future is fixed—something along the lines of the consequence argument. My argu-
ment here doesn’t touch on that position. But his talk of determinism being global
fatalism certainly strongly suggests the interpretation I have given.
15. I give a longer response in Holton, 2011.
16. Does this mean that you will deny that one of the previous counterfactuals applies to
you, i.e., that if you were not to call the doctor you would not recover? Well you might
deny it; but equally you might not know quite what to think. For further discussion
see below.
17. Debates on the bearing of God’s knowledge of future contingents for our freedom
might be expected to have some relevance here. For comprehensive discussion see
Zagzebski, 1991. In fact though I think that they are not of very much use, for a num-
ber of reasons: (1) most of the discussion is of the broader issue of the compatibil-
ity of God’s foreknowledge with freedom, and not on the rationality of resignation;
(2) God is normally taken to be both essentially knowledgeable and immutable;
(3) there is no discussion of what difference it would make if God communicated his
knowledge with human actors.
18. Scriven, 1964; Lewis and Richardson, 1966. For a very useful review of various
authors who argued along similar lines to Scriven, and some helpful suggestions of
their own, see Rummens and Cuypers, 2010. I wish that I had seen their paper
before writing this one.

REFERENCES
Baumeister, Roy, A. William Crescioni, and Jessica Alquist, 2011, “Free Will as Advanced
Action Control for Human Social Life and Culture,” Neuroethics 4, 1–11.
Bobzien, Susanne, 1998, Determinism and Freedom in Stoic Philosophy (Oxford: Clarendon
Press).
Cicero, 1987, “On Fate 28,” in A. Long and D. Sedley (eds.), The Hellenistic Philosophers,
339 (Cambridge: Cambridge University Press).
Collins, John, Edward Hall, and Laurie Paul (eds.), 2004, Causation and Counterfactuals
(Cambridge, MA: MIT Press).
Earman, John, 2004, “Determinism: What We Have Learned, and What We Still Don’t
Know,” in J. Campbell, M. O’Rourke and D. Shier (eds.), Freedom and Determinism,
21–46 (Cambridge, MA: MIT Press).
Feinberg, Joel, and Russ Shafer-Landau, 1999, Reason and Responsibility, 10th ed.
(Belmont, CA: Wadsworth).
Gopnik, Alison, and Laura Schulz (eds.), 2007, Causal Learning (New York: Oxford
University Press).
Holton, Richard, 2009, Willing, Wanting, Waiting (Oxford: Clarendon Press).
Holton, Richard, 2011, “Comments on ‘Free Will as Advanced Action Control for Human
Social Life and Culture’ by Roy F. Baumeister, A. William Crescioni and Jessica L.
Alquist,” Neuroethics 4, 13–16.
Lewis, David, 1973 “Causation,” Journal of Philosophy, 70, 556–67, reprinted with post-
script in his Philosophical Papers, vol. 2, 159–213 (New York: Oxford University Press,
1986).
Lewis, David, 1983, “New Work for a Theory of Universals,” Australasian Journal of
Philosophy 61, 343–377, reprinted in his Papers in Metaphysics and Epistemology, 8–55
(Cambridge: Cambridge University Press, 1999).
Lewis, David, and Jane Richardson, 1966, “Scriven on Human Unpredictability,”
Philosophical Studies, 17, 69–74.
Mele, Al, 2010, Effective Intentions (Oxford: Oxford University Press).
Nahmias, Eddy, and Dylan Murray, 2011, “Experimental Philosophy on Free Will: An
Error Theory for Incompatibilist Intuitions,” in Jesús Aguilar, Andrei Buckareff and
Keith Frankish (eds.), New Waves in Philosophy of Action, 189–216 (Basingstoke:
Palgrave Macmillan).
Pearl, Judea, 2000, Causality (Cambridge: Cambridge University Press).
Rummens, Stefan, and Stefaan Cuypers, 2010, “Determinism and the Paradox of
Predictability,” Erkenntnis 72, 233–249.
Scriven, Michael, 1964, “An Essential Unpredictability in Human Behavior,” in Benjamin
B. Wolman and Ernest Nagel (eds.), Scientific Psychology: Principles and Approaches,
411–425 (New York: Basic Books).
Taylor, Richard, 1963, Metaphysics (Englewood Cliffs, NJ: Prentice Hall).
Wegner, Daniel, 2002, The Illusion of Conscious Will (Cambridge, MA: Harvard University
Press).
Woodward, James, 2003, Making Things Happen (New York: Oxford University Press).
Zagzebski, Linda, 1991, The Dilemma of Freedom and Foreknowledge (New York: Oxford
University Press).
PART TWO

The Sense of Agency


6

From the Fact to the Sense of Agency

MANOS TSAKIRIS AND AIKATERINI FOTOPOULOU

1. THE FACT OF AGENCY


Agency refers to a person’s ability to control their actions and, through them, events
in the external world. In this chapter, we use the term “sense of agency” to refer to the
experience of being in control of one’s own actions. We experience agency through-
out our waking lives to the extent that we control the movements of our body in
walking, talking, and other voluntary actions, and we also feel and know that we con-
trol them. As we perform actions in our daily lives, we have a coherent experience of
a seemingly simple fluent flow from our prior thoughts, to our body movements, to
the effects produced in the outside world. Agency seems to be a single experience
because it integrates these three successive stages of sensorimotor control.
We normally experience our own actions as being caused by our intentions that
are formed on the basis of our beliefs and desires (Haggard, 2008). However, it is
still debated whether intentions are indeed the true causes of our own actions. Libet
et al. (1983), who pioneered the experimental study of “free will,” suggested that it
is neural states preceding our conscious decision to act that cause the action, rather
than our conscious intentions. Recently, Wegner (2003) suggested that free will is
an illusory reconstructive perception of the relationship between unconscious brain
processes and events that occur in the world around us at the right time and the
right place. Independently of whether intentions are the true causes of our actions
(conscious) intentions and the sense of agency seem to be an integral part of human
experience and activity. It is this experiential aspect of the fact of agency that we
focus on. Here we will avoid the issue of “free will” and focus on how the different
elements of the sensorimotor sequence produce the experience of agency as studied
in experimental psychology and cognitive neurosciences.

2. EXPLICIT TASKS OF AGENCY


One influential experimental approach to agency has used tasks that explicitly ask
participants to judge whether they caused a particular sensory event. For example,
participants perform voluntary actions, typically hand movements, in response to
a cue or at a time of their own choice within a specific temporal window. They then
receive sensory feedback about their movement, which is sometimes distorted (e.g.,
a different hand configuration is shown, the spatial path of movement is disturbed,
or a temporal delay is inserted; see Farrer et al., 2003; Metcalfe & Greene, 2007;
Sato & Yasuda, 2005). The sensory feedback given is considered to be the effect
of the participant’s voluntary movement. Participants explicitly state whether they
experience agency over the effect (e.g., by answering the question “Did you produce
the movement you saw?” or “Did you cause that tone?”). Converging evidence sug-
gests that greater distortions lead to lower agency scores. However, the interpreta-
tions of these results vary. Judgments about agency have been interpreted (Sato &
Yasuda, 2005) as outputs from internal predictive models of the motor system
(Frith, Blakemore & Wolpert, 2000). Alternatively, the mind may infer and recon-
struct a causal connection between conscious intention and effect (Wegner, 2003).
Clearly, this reconstructive process only works when the effect was as intended.
Haggard and Tsakiris (2009) suggested that such experiments capture a rela-
tively small part of the experience of agency, for reasons that relate to the experi-
mental design, the dependent variable used, and the theoretical clarity behind the
experimental approach. In the agency tasks described earlier, we can assume that
a sense of agency is always present and is not manipulated by the experimental
design itself (Daprati et al., 1997; Sato & Yasuda, 2005), because participants are
instructed to perform self-generated voluntary movements across all conditions
(e.g., lifting their index finger or pressing a key) to produce a specific sensory event
either on their body or in the external world (e.g., move a visual cursor or produce
an auditory tone). In such cases, the participant is always an agent in the sense of
moving her hand voluntarily and controlling it. Because of the self-generated nature
of the movement, the neural activity and subjective experience associated with vol-
untary movements are always present, that is, participants both feel and know that
they moved their hand. Therefore, participants’ responses to the question “Was that
your action?” (or “Was that the effect that you caused?”) do not reflect whether
they experience agency or not. Instead, the question taps into whether the percep-
tual consequences of their action correspond to their experience of the action itself.
Such judgments can elucidate what information the initial experience of agency
contains, but they cannot identify the necessary and sufficient conditions for the
experience of agency itself. In fact, many studies that claim to investigate agency
focus, instead, on the cross-modal matching process between the internal repre-
sentation of one’s own voluntary actions and the sensory representation of those
actions, or their consequences, from an external perspective. For example, angular
discrepancies of up to 15 degrees between a voluntary action and its visual feedback
are generally not detected (Fourneret & Jeannerod, 1998). Such findings reveal a
low sensitivity of cross-modal matching, perhaps reflecting lack of spatial informa-
tion in efferent motor signals. However, they do not clarify agency in our sense of
the experience of being the generator and controller of one’s actions.
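On that reading, what these explicit tasks measure can be caricatured as a thresholded matching rule: the visual feedback is attributed to one's own action whenever its discrepancy from the executed movement falls below a detection threshold. The following sketch (in Python) is purely illustrative of this interpretation; the 15-degree value echoes the Fourneret and Jeannerod finding just cited, while the function and variable names are placeholders of our own rather than any published model.

# Caricature of the cross-modal matching interpretation of explicit agency tasks.
DETECTION_THRESHOLD_DEG = 15.0  # angular discrepancies below this often go unnoticed

def judged_as_own(executed_angle_deg, feedback_angle_deg,
                  threshold=DETECTION_THRESHOLD_DEG):
    # Attribute the seen movement to oneself if the visual feedback is close
    # enough to the executed movement; the feeling of moving voluntarily is
    # present on every trial anyway and is not modeled here.
    return abs(feedback_angle_deg - executed_angle_deg) < threshold

print(judged_as_own(0.0, 10.0))  # True: small distortion, still judged "my movement"
print(judged_as_own(0.0, 30.0))  # False: large distortion, feedback is disowned

Note that such a rule says nothing about whether the participant felt that she moved voluntarily; it only models the comparison between the executed movement and its seen consequences.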
The use of only voluntary movements within an experiment is problematic for at
least two reasons. First, there is not an adequate control condition, where movement
parameters would be the same, but the voluntary nature of the movement would be
selectively manipulated; for example, a voluntary key press is compared with a pas-
sive key press. This manipulation is important when we want to establish the rela-
tive contributions of different sensory and motor cues for agency. Second, to the
extent that all these movements are self-generated and voluntary, we can assume the
presence of the unique neural events that precede voluntary actions alone, such as
the readiness potential and the generation of an efference copy of the motor com-
mand. In addition, we can assume that participants experience (at least a minimal)
sense of effort as they produce them, and also that they are clearly aware of the fact
that they have moved on every trial, despite the fact that they may not be fully aware
of all the low-level movement parameters. These observations raise the question of what we are actually measuring when we ask participants about their sense of
agency. Clearly, we are not asking them whether they felt that they have moved vol-
untarily, because then the answer would always be affirmative, independently of the
discrepancy between their actual movement and the feedback. Thus, most experi-
mental paradigms do not actually investigate a crucial and possibly the most basic
experiential component of the sense of agency, namely, the feeling of agency, the
feeling that I voluntarily move my body.

3. FEELING AND JUDGMENTS OF AGENCY


A recent distinction between the feeling of agency and judgment of agency pro-
posed by Synofzik et al. (2008) can clarify the problems encountered when experi-
menting with agency. Feelings of agency refer to the subjective experience of
fluently controlling the action one is currently making, and are considered to be
nonconceptual. Judgments of agency refer to explicit conceptual attributions of
whether one did or did not make an action or cause an effect. As recent reviews of
the experimental literature on agency suggest (Haggard & Tsakiris, 2009; Synofzik
et al., 2008), most studies have focused on explicit judgments of agency rather than
feelings of agency. They therefore reflect metacognitive beliefs (Metcalfe & Greene,
2007) about agency rather than the more basic experiential component, namely,
the feeling that I voluntarily move my body.
What, then, is the link between the feeling of agency and the judgment of agency?
Under normal circumstances, the feeling seems to be a necessary condition for judg-
ment, and indeed forms the evidence base for the judgment: my belief that I turned
on my laptop depends on my phenomenal experience that I pressed the switch.
However, this principle fails in a few special circumstances, suggesting that the feel-
ing of agency alone might not be necessary. For example, when several people’s
actions simultaneously aim to produce a single effect, people accept agency over
events when they did not in fact make the relevant action (Wegner, 2003). Note,
however, that even in these special circumstances, such as the ones implemented
in Wegner’s experiments, the presence of an intention to perform an action seems a
necessary element for the sense of agency that people report.
In addition, as neuropsychiatric (e.g., schizophrenic delusions of control) and
neuropsychological (e.g., anarchic hand syndrome) cases aptly suggest, the experi-
ence of action by itself is not normally sufficient for veridical judgments of agency.
As suggested by accounts of the internal models of the motor system, a separate
cognitive function monitors the effects of actions (Frith, Blakemore & Wolpert,
2000). Explicit judgments of agency require successful completion of the monitor-
ing process: only when I see the laptop booting up, would I judge that I actually
turned it on. For mundane actions, this monitoring process is often unconscious:
indeed, the motor system includes specific mechanisms for predicting the sensory
consequences of our own actions (Frith, Blakemore & Wolpert, 2000).
An interesting neuropsychological condition reveals this interplay between an
advance intention-based prediction of the sensory feedback of action that may
underpin the feeling of agency, and a delayed postdictive attribution of sensory
feedback to the self that may underpin judgments of agency. Some patients with
hemiplegia deny that their affected limb is paralyzed (anosognosia for hemiplegia,
AHP). For example, the patient may assert that they performed an action using
their paralyzed limb, which in fact remains immobile. Judgments of agency in these
patients seem to be based only on the feeling that they prepare appropriate motor
commands for the action, and bypass the normal stage of monitoring whether
appropriate effects of limb movement actually occur (Berti et al., 2005). In AHP,
the feeling of intending an action becomes sufficient for a judgment of agency. This
suggests that the monitoring is a specific cognitive function that normally provides
the appropriate link between feelings of agency and explicit judgments of agency.
More recent accounts, capitalizing on computational models of motor
control (Frith et al., 2000), proposed that AHP results from specific impairments
in motor planning. Under normal circumstances, the formation of an intention to
move will be used by “forward models” to generate accurate predictions about the
impending sensory feedback. If an intended movement is not performed as planned,
a comparator will detect a mismatch between the predicted sensory feedback and
the absence of any actual sensory feedback. The error signal at the level of the com-
parator can then be used to inform motor awareness. Berti et al. (2007), following
Frith et al. (2000), hypothesized that patients with AHP form appropriate repre-
sentations of the desired and predicted positions of the limb, but they are not aware
of the discrepancy between their prediction and the actual position. On this view,
patients’ awareness is dominated by intention and does not take into account the
failure of sensory evidence to confirm the execution of the intended action. AHP
arises because action awareness is based on motor commands sent to the plegic
limb, and sensory evidence about lack of movement is not processed. Accordingly,
AHP may involve damage to the brain areas that underpin the monitoring of the
correspondence between motor outflow and sensory inflow (e.g., Brodmann pre-
motor areas 6 and 44 [BA6 and BA44]; Berti et al., 2005), or else contrary sensory
information is neglected (Frith et al., 2000). Consequently, the mismatch between
the predicted state (i.e., movement of the limb) and the actual state (i.e., no move-
ment) is not registered.
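To make the structure of this account concrete, the following sketch (in Python) separates the two comparisons the account distinguishes: intended versus predicted state, which on this view drives the awareness that one has moved, and predicted versus actual feedback, whose output is assumed to be damaged or neglected in AHP. The sketch is purely illustrative; the function names, arguments, and the simplifying assumption that the forward model's prediction simply mirrors the intention are ours, not part of the published models.

# Illustrative sketch of the comparator account of motor awareness; not the
# published implementation of Frith et al. (2000) or Berti et al. (2007).

def forward_model(intended_state):
    # Predict the sensory feedback the intended movement should produce.
    # For illustration, the prediction simply mirrors the intention.
    return intended_state

def motor_awareness(intended_state, actual_feedback, mismatch_registered=True):
    # intended_state: 1 if a movement was intended, 0 otherwise.
    # actual_feedback: 1 if the limb actually moved, 0 otherwise.
    # mismatch_registered: False models the hypothesized AHP impairment, in which
    # the predicted/actual comparison is damaged or its output neglected.
    predicted = forward_model(intended_state)
    # Comparison 1: intended vs. predicted -- on this account, the basis of the
    # (possibly illusory) awareness that "I moved".
    experienced_movement = (intended_state == predicted) and intended_state == 1
    # Comparison 2: predicted vs. actual -- normally signals the failure to move.
    error_registered = (predicted != actual_feedback) and mismatch_registered
    return experienced_movement, error_registered

print(motor_awareness(1, 0, mismatch_registered=True))   # healthy: (True, True)
print(motor_awareness(1, 0, mismatch_registered=False))  # AHP:     (True, False)

Run on the anosognosic scenario (a movement is intended but the limb stays still), the healthy case registers both the experience of moving and the error, whereas the simulated AHP case retains the experience of moving while the error goes unregistered.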
The account of Berti and Frith can explain why patients with AHP fail to perceive
the failure to move. However, an equally important question relates to the nonve-
ridical awareness of action that they exhibit, that is, their subjective experience that
they have moved. An experimental demonstration that motor intention dominates
over sensory information about the actual effects of movement in AHP patients was
provided by Fotopoulou et al. (2008). Four hemiplegic patients with anosognosia
(AHP) and four without anosognosia (nonAHP) were provided with false visual
feedback of movement in their left paralyzed arm using a prosthetic rubber hand.
This allowed for realistic, three-dimensional visual feedback of movement, and
deceived patients into believing the rubber hand was their own. Crucially, in some
conditions, visual feedback that was incompatible with the patient’s intentions was
given. For instance, in a critical condition, patients were instructed to move their
left hand, but the prosthetic hand remained still. This condition essentially mirrored
the classic anosognosic scenario within an experimentally controlled procedure. In
this way the study was able to examine whether the ability to detect the presence
or absence of movement, based on visual evidence, varied according to whether
the patient had planned to move their limb or not. The key measure of interest was
the patient’s response to a movement detection question (i.e., “Did your left hand
move?”), which required a simple yes/no response. The results revealed a selective
effect of motor intention in patients with AHP; they were more likely than non-
AHP controls to ignore the visual feedback of a motionless hand and claim that they
moved it when they had the intention to do so (self-generated movement) than
when they expected an experimenter to move their own hand (externally generated
movement), or when there was no expectation of movement. In other words, patients
with AHP only believed that they had moved their hand when they had intended
to move it themselves, while they were not impaired in admitting that the hand did
not move when they had expected someone else to move it. By contrast, the perfor-
mance of nonAHP patients was not influenced by these manipulations of intention,
and they did not claim they moved their hand when the hand remained still.
These results confirm that AHP is influenced by motor planning, and in partic-
ular that motor “awareness” in AHP derives from the processing of motor inten-
tions. This finding is consistent with the proposals made by Frith et al. (2000; see
also Berti et al., 2007) that the illusory “awareness” of movement in anosognosic
patients is created on the basis of a comparison between the intended and predicted
positions of the limbs, and not on the basis of a mismatch between the predicted
and actual sensory feedback. According to this hypothesis, patients with AHP are
able to form appropriate representations of the desired and predicted positions of
the limb. However, conflicting information derived from sensory feedback that
would indicate a failure of movement is not normally available, because of brain
damage to regions that would register the actual state of the limbs, or else because
this contrary information is neglected. A recent lesion mapping study suggested
that premotor areas BA6 and BA44, which are implicated in action monitoring, are
the most frequently damaged areas in patients with AHP (Berti et al., 2005). This
finding may explain why these patients fail to register their inability to move, but
it does not address the functional mechanism that underpins their illusory aware-
ness of action per se. The Fotopoulou et al. (2008) study, by contrast, provides direct evidence for the hypothesis that
awareness of action is based on the stream of motor commands and not on sensory
inflow. While previous studies have suggested that conflicting sensory information
may not be capable of altering anosognosic beliefs (Berti et al., 2005), they did not
demonstrate that sensory feedback about the affected limb was ignored even when
it was demonstrably available. Accordingly, this study demonstrated for the first
time why anosognosic beliefs are formed in the first place: the altered awareness
of action in AHP depends predominantly on motor intention rather than sensory inflow. Indeed, actual sensory feedback has a remarkably limited role in the experience of action even in neurologically healthy individuals (Sarrazin et al., 2008). To this extent,
AHP may be a pathological exaggeration of the role of proactive and predictive
information in motor awareness, arguing against the view that the sense of agency
is solely postdictive.

4. AN ALTERNATIVE EXPERIMENTAL APPROACH TO THE STUDY OF AGENCY: THE SEARCH FOR THE FUNCTIONAL SIGNATURES OF AGENCY

The careful analysis of the experimental designs used in agency studies and the distinction between feeling and judgment of agency raise a key theoretical question:
What should be the nonagency control condition to which agency is compared?
Most previous studies compare conditions where participants’ actions cause a
sensory effect relatively directly with conditions involving some appropriate trans-
formation between action and effect. We consider that a feeling of agency cannot
be experimentally manipulated in a consistent way, unless the action component
itself is systematically manipulated. Accordingly, an alternative approach put for-
ward by Tsakiris and Haggard (2005) involves the systematic manipulation of the
action component itself, by comparing voluntary action and passive movement
conditions. A voluntary movement and a passive displacement applied externally
may be physically identical but are psychologically different: the voluntary move-
ment supports a sense of agency while the passive movement does not. Implicit in
this alternative experimental approach to agency is the assumption that a sense of
body-ownership (i.e., the sense that this is my body, independently of whether it is
moving or not) is present during both active and passive movement. What there-
fore distinguishes the two conditions is the critical addition of agency: only during
an active voluntary movement do I have a sense of agency over my moving hand,
whereas during a passive movement or a purely sensory situation (e.g., see Rubber
Hand Illusion [RHI]), I have only a sense of body-ownership (e.g., that’s my hand
moving or I experience touch on my hand). This approach recalls Wittgenstein’s
(1953) question, “What is left over if I subtract the fact that my arm goes up from
the fact that I raise my arm?” Recent experimental studies have inverted the philo-
sophical question, to ask, “What is added when I raise my arm over and above the
fact that my arm goes up?” This view treats agency as an addition to or modification
of somatic experience.
Studies of this kind have manipulated the intention/preparation stage of the
motor sequence. However, since the experience of intention itself is thin and elu-
sive, most studies have measured the experience of later stages, such as bodily move-
ment and its external effects. The aim here is to understand how voluntary actions
structure the perception of events that relates to one’s own moving body and/or the
effects of such movements in the external world, and use this indirect or implicit
evidence to inform psychological theories about agency. Such an approach has been
adopted in recent studies that focus on time awareness, somatosensory perception,
and proprioceptive awareness during voluntary action. Importantly, a significant
methodological advantage of studying these domains is that one can directly com-
pare how the agentive nature of movement affects these three domains over and
above the mere presence of movement cues, that is, one can directly compare volun-
tary with passive movements. Consistent results have shown how the fact of agency
changes the experience of the body and the outside world, measured using depen-
dent variables such as temporal awareness and spatial representation of the body.
They thus provide indirect or implicit evidence about agency. Three fundamental
and robust features of agency emerge: a temporal attraction effect, a sensory attenu-
ation effect, and a change in the spatial representation of the body itself.

4.1. Agency and Temporal Attraction Effects


Action fundamentally changes the experience of time. Both actions and their effects
occur at specific measurable points in time, making correlation between subjective
(i.e., the perceived time onset of events) and objective (i.e., the actual time onset of
events) time possible. Therefore, time perception has been one of the most impor-
tant indirect methods for studying agency. In one approach, participants are asked
to judge the perceived onset of voluntary actions, and of a sensory event (a tone)
occurring shortly afterward. The perceived time of the action was shifted later in
time, toward the ensuing tone, compared with a baseline condition where no tone
occurred. The perceived time of tones, in contrast, was shifted earlier, back toward
the action that caused the tone, relative to a baseline condition in which no action
was made. This intentional binding effect (Haggard et al., 2002; Tsakiris & Haggard,
2003) suggests that the experience of agency reflects a specific cognitive function
that links actions and effects across time, producing a temporal attraction between
them (cf. Ebert & Wegner, 2010; Moore & Haggard, 2010). Crucially, no such
effects were found when passive, involuntary movements were applied, suggesting
intentional binding is a specific marker of the sense of agency.
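As a rough illustration of how the binding effect is quantified, the following sketch computes the perceptual shifts from judgment errors (perceived minus actual onset times). The numbers are invented for illustration and are not the values reported by Haggard et al. (2002); the logic, however, is the standard one: each shift is measured against the corresponding single-event baseline, and binding shows up as a shrinking of the perceived interval between action and tone.

# Hypothetical judgment errors in milliseconds (perceived minus actual onset);
# positive values mean the event was perceived later than it occurred.
baseline_action_error = 5     # action judged on its own
operant_action_error = 20     # action followed by a tone
baseline_tone_error = 10      # tone judged on its own
operant_tone_error = -35      # tone preceded by a voluntary action

# Shift of the perceived action toward the later tone (positive = later).
action_shift = operant_action_error - baseline_action_error
# Shift of the perceived tone back toward the action (negative = earlier).
tone_shift = operant_tone_error - baseline_tone_error

# Overall binding: how much the perceived action-tone interval shrinks.
binding = action_shift - tone_shift
print(action_shift, tone_shift, binding)  # 15 -45 60

On this way of scoring, the passive conditions just mentioned would yield shifts near zero, which is why the effect can serve as an implicit marker of agency.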

4.2. Agency and Sensory Attenuation Effects


Sensory stimulation of equal magnitude is perceived as being less intense when it is
self-generated than when it is externally or passively generated. This phenomenon of
sensory attenuation is a robust feature of voluntary motor control. Computational
models suggest that it reduces the possibility of computational overload by reaf-
ferent signals reflecting self-generated actions. Since the sensory consequences of
such actions can be predicted internally, there is no need to sense them, and they
are accordingly attenuated. This prediction is thought to involve efference copies
of the motor command, and internal neural models of the motor system (Frith,
Blakemore & Wolpert, 2000). This concept has been extended from computational
motor control to the experience of agency. On this view, the experience of one’s
own actions depends on the outcome of the comparison between the predicted
and the actual state of our bodies. Sensory stimulation generated by one’s voluntary
actions is predicted and attenuated. Therefore, when there is little or no discrep-
ancy between the predicted and actual state of the body, a subject can be reassured
that she was the agent. This approach can correctly discriminate between internally
generated and external sensory events and can therefore ascribe agency. However,
since it suppresses perception of self-generated information, it cannot explain why
there is a positive experience of agency at all. Models based on attenuation treat
agency as absence of exteroceptive perceptual experience, not as a positive experi-
ence in itself. However, the phenomenon of sensory attenuation may be a reliable
functional signature of agency, which can be used as an implicit measure in experi-
mental studies.
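The core computational idea can likewise be caricatured in a few lines: an efference copy feeds a forward model that predicts the reafferent input, and the prediction is subtracted from the incoming signal, so that physically identical stimulation is perceived as weaker when self-generated. The sketch below is a deliberately minimal illustration; the gain parameter and function names are invented rather than taken from any published model.

# Minimal caricature of predictive sensory attenuation (illustrative only).

def predicted_reafference(efference_copy, gain=0.8):
    # Forward-model prediction of the sensory consequences of one's own action;
    # the gain is a made-up parameter controlling how much is predicted away.
    return gain * efference_copy

def perceived_intensity(stimulus, efference_copy=0.0):
    # Perceived intensity = actual input minus whatever was predicted.
    return max(0.0, stimulus - predicted_reafference(efference_copy))

touch = 1.0  # physically identical stimulation in both cases
print(round(perceived_intensity(touch, efference_copy=1.0), 2))  # 0.2: self-generated, attenuated
print(round(perceived_intensity(touch, efference_copy=0.0), 2))  # 1.0: externally generated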

4.3. Agency and Spatial Body-Representation Effects


We previously defined agency as an addition to normal experience of the body.
Recent evidence suggests that agency transforms the experience of the body, as well
as adding to it. A number of studies have compared the effects of voluntary action
and passive movement on proprioceptive awareness of one’s body. Agency generally
enhances both spatial and temporal (Tsakiris et al., 2005) processing of propriocep-
tive information. Tsakiris, Prabhu, and Haggard (2006) used the RHI (Botvinick &
Cohen, 1998) to show that voluntary actions produce a more coherent and global
proprioceptive representation of the body than do passive movements. In the RHI,
synchronous stimulation of both a rubber hand, or a video image of the hand, and
the participant’s unseen hand produces a strong illusion that the rubber hand is part
of one’s own body. A reliable behavioral proxy of the illusion is a shift in the perceived
location of the participant’s hand toward the rubber hand. When the stimulation
involved passively displacing the participant's hand, and monitoring the movement
via a video image of the hand, the effect was confined to the individual finger that
was passively displaced. In contrast, when the participant actively moved the same
finger, the illusion transferred to other fingers also. Voluntary action appeared to
integrate distinct body parts into a coherent, unified awareness of the body, while
equivalent passive stimulation produced local and fragmented effects on proprio-
ceptive awareness. This result suggests that the unity of bodily self-consciousness
may be an important result of agency.
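Since the behavioral proxy here is proprioceptive drift, a few lines suffice to indicate how such a measure is typically computed: the judged position of the unseen hand is compared before and after stimulation, and the drift is the component of that change directed toward the rubber hand. The numbers and names below are invented for illustration; they are not the data of Tsakiris, Prabhu, and Haggard (2006).

# Illustrative proprioceptive-drift computation (hypothetical numbers, in cm).

def drift_toward_rubber_hand(pre_judgment, post_judgment, rubber_hand_pos, real_hand_pos):
    # Positive values mean the judged hand position moved toward the rubber hand.
    direction = 1 if rubber_hand_pos > real_hand_pos else -1
    return direction * (post_judgment - pre_judgment)

real_hand_pos, rubber_hand_pos = 0.0, 15.0  # lateral positions of the two hands
# Judged position of the unseen hand before and after synchronous stimulation:
print(drift_toward_rubber_hand(0.5, 3.0, rubber_hand_pos, real_hand_pos))  # 2.5

In the study just described, the comparison of interest is then whether such drift spreads from the stimulated finger to the other fingers under active, but not passive, movement.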

5. THE SEARCH FOR THE NEURAL CORRELATES OF AGENCY


The framework of comparing active to passive movements to study agency
implies that agency is added to the normally continuous and omnipresent sense
of body-ownership. Thus, we experience body-ownership not only during volun-
tary actions but also during passive movement and at rest. In contrast, only volun-
tary actions should produce a sense of agency. Several studies confirm that agency
is closely linked to the generation of efferent motor signals and the monitoring
of their effects (e.g., Blakemore, Wolpert & Frith, 2002). In contrast, the sense of
body-ownership can be induced by afferent sensory signals alone (Botvinick &
Cohen, 1998). However, the exact relation between agency and body-ownership
remains unknown. On one view, as argued earlier, the relation between agency and
body-ownership is additive, meaning that agency entails body-ownership. This view
follows from the observation that one can control movements of one’s own body,
but not other objects, at will, as Descartes suggested. Thus, agency offers a strong
cue to body-ownership. On this view, the sense of agency should involve the sense
of body-ownership, plus a possible additional experience of voluntary control. An
alternative view holds that sense of agency and sense of body-ownership are quali-
tatively different experiences, without any common component.
Previous accounts based on introspective evidence favor the additive model,
since they identified a common sense of body-ownership, plus an additional com-
ponent unique to action control (Longo & Haggard, 2009). Recent behavioral
and neuroimaging studies have also focused on the neurocognitive processes that
underpin body-ownership and agency (Fink et al., 1999; Farrer & Frith, 2002;
Farrer et al., 2003; Ehrsson, Spence & Passingham, 2004; Tsakiris et al., 2007), but
the exact neural bases of these two aspects of self-consciousness remain unclear.
For example, neuroimaging studies that investigated the sense of body-ownership
using the RHI (see Botvinick & Cohen, 1998) report activations in the bilateral
premotor cortex and the right posterior insula associated with the illusion of own-
ership of the rubber hand, and present only when visual and tactile stimulations
are synchronized (Ehrsson et al., 2004; Tsakiris et al., 2007). Studies investigating
the neural signatures of the sense of agency have used similar methods, such as the
systematic manipulation of visual feedback to alter the experience of one’s body in
action. Activity in the right posterior insula was correlated with the degree of match
between the performed and viewed movement, and thus with self-attribution
(Farrer et al., 2003). Conversely, activity in the right dorsolateral prefrontal cortex
(Fink et al. 1999; Leube et al., 2003), right inferior parietal lobe, and temporopa-
rietal junction (Farrer et al., 2003, 2008) was associated with degree of disparity
between performed and viewed movement, and thus with actions not attributed to
the self.
These studies were largely based on manipulating visual feedback to either match
or mismatch the participant’s manual action, similar to the behavioral experiments
on agency described earlier. However, such manipulations cannot separate the con-
tributions of efferent and afferent signals that are both inevitably present in manual
action. The imaging data from these studies may therefore confound the neural cor-
relates of agency and body-ownership. For example, with undistorted visual feed-
back of an action, there is a three-way match between efferent motor commands,
afferent proprioceptive signals, and vision. Thus, any effects seen in such condi-
tions could be due to congruence between (1) efferent and proprioceptive sig-
nals, (2) efferent signals and visual feedback, (3) proprioceptive signals and visual
feedback, or (4) some complex interaction of all three signals. Conversely, when
visual feedback is distorted (spatially or temporally), there is sensorimotor conflict
between efferent signals and vision, but also intersensory conflict between prop-
rioception and vision. As a result, any differences between match and mismatch
conditions could reflect sensorimotor comparisons (relating to sense of agency)
or proprioceptive-visual comparisons (relating to sense of body-ownership). As a
result, such experimental designs cannot distinguish between the additive and the
independence model of agency and body-ownership.
However, as suggested previously, the senses of agency and body-ownership
can be disentangled experimentally by comparing voluntary action with passive
movement, as shown earlier. Tsakiris, Longo, and Haggard (2010) implemented
this experimental design in a neuroimaging study to disentangle the neural basis
of the relation between the sense of body-ownership and agency using fMRI.
Body-ownership was manipulated by presenting real-time or delayed visual feed-
back of movements, and agency, by comparing voluntary and passive movements.
Synchronous visual feedback causes body parts and bodily events to be attributed to
one’s own self (Longo & Haggard, 2009). The experiment aimed at testing two spe-
cific models of the agency and body-ownership relations. The first, additive model,
holds that agency entails body-ownership. On this view, active movements of the
body should produce both a sense of body-ownership and a sense of agency. The feel-
ing of being in control of one’s body should involve the sense of body-ownership, plus
an additional sense of agency. This produces three concrete predictions about brain
activations in agency and ownership conditions: first, there should be some activations
common to agency and body-ownership conditions; second, there should be an addi-
tional activation in agency, which is absent from body-ownership; third, there should
be no activation in the body-ownership condition that is not also present in the agency
condition. A second model, the independence model, holds that sense of agency and
sense of body-ownership are qualitatively different experiences, without any com-
mon component. On this view, the brain could contain distinct networks for sense of
body-ownership and sense of agency. The independence model produces three con-
crete predictions: first, there should be no common activations between agency and
ownership; second, there should be a specific activation in agency conditions that is
absent from ownership; third, there should be a specific activation in ownership that
is absent from agency. In addition to the collection and analysis of fMRI data, partici-
pants were asked to answer a series of questions referring to their experience of agency
and/or body-ownership during the various experimental conditions.
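Stated abstractly, the two models make set-like predictions about the activation maps: the additive model requires a nonempty overlap, an agency-specific surplus, and nothing specific to ownership, whereas the independence model requires no overlap and a specific surplus on each side. The sketch below encodes these predictions and checks them against hypothetical activation sets; the region labels are placeholders standing in for the study's statistical maps, not the actual results.

# Encode the two models' predictions as relations between activation sets
# (placeholder region labels; not the actual fMRI maps of the study).

def additive_model_holds(agency, ownership):
    # Some shared activation, an agency-specific surplus, nothing ownership-specific.
    return bool(agency & ownership) and bool(agency - ownership) and not (ownership - agency)

def independence_model_holds(agency, ownership):
    # No shared activation, and a specific activation on each side.
    return not (agency & ownership) and bool(agency - ownership) and bool(ownership - agency)

# A hypothetical pattern resembling the dissociation described below:
agency_map = {"pre-SMA", "superior parietal lobe", "dorsal premotor cortex", "extrastriate body area"}
ownership_map = {"precuneus", "superior frontal gyrus", "posterior cingulate"}

print(additive_model_holds(agency_map, ownership_map))      # False
print(independence_model_holds(agency_map, ownership_map))  # True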
Overall, the introspective evidence broadly supported the additive model of
agency. According to the additive model, a similar sense of body-ownership would
be present for both active and passive movement conditions with synchronous visual
feedback, but the sense of agency would additionally be present following voluntary
movements. Indeed, participants reported significantly more agreement with ques-
tionnaire items reflecting agency in the active/synchronous condition compared
with the other three conditions. In particular, body-ownership questions were also
more highly rated in the active/synchronous condition as compared with the pas-
sive/synchronous condition, suggesting that agency strengthens the experience of
body-ownership. In terms of expected brain activations, if the addition of agency to
body-ownership enhances the same kind of experience, then we would expect to
find at least some shared activations between agency and body-ownership. Another
hypothesis suggests that agency is not simply an addition to body-ownership but
a qualitatively different process. This independence model would predict different
patterns of brain activity in the two cases.
To distinguish between the neural predictions of the additive and independence
models, the first analysis focused on brain areas that are commonly activated by
agency (induced via active movement) and a sensory-driven body-ownership
(induced via passive movement). This analysis revealed no suprathreshold activa-
tions common to the two conditions, inconsistent with the additive model that pre-
dicted at least some common activations. A second hypothesis derived from the
additive model is that there should be no activations for body-ownership that are not also present for agency. However, the analysis did not support this prediction, as
the activated networks for agency and body-ownership were sharply distinct. Both
body-ownership and agency were associated with distinct and exclusive patterns of
activation, providing direct evidence that their neural substrates differ. In particular,
agency was specifically associated with activations in the presupplementary motor
area, the superior parietal lobe, the extrastriate body area, and the dorsal premo-
tor cortex bilaterally (BA6). In relation to a purely sensory-driven body-ownership,
suprathreshold activations were observed in a network of midline cortical structures,
including the precuneus, the superior frontal gyrus, and the posterior cingulate.
Notably, these midline cortical activations recall recent suggestions of a dedicated
self-referential processing network (Northoff & Bermpohl, 2004; Northoff et al.,
2006) in the default mode network (Gusnard et al., 2001; Schneider et al., 2008).
Thus, neuroimaging data supported an independence model, while questionnaire
data supported an additive model. This somewhat surprising inconsistency may be
explained in at least two distinct ways. First, the questionnaire data may reflect a
limitation of the folk-psychological concepts used to describe our embodied experi-
ence during sensation and movement. Folk psychology suggests that agency is a very
strong cue for ownership, so that I experience ownership over more or less any events
or object that I control. However, the experience of ownership of action during
agency may represent a distinctive type of ownership that should not be necessarily
conflated with ownership of sensations or body parts.1 Second, the apparent dissocia-
tion between neural activity and introspective reports may suggest that there is not a
one-to-one mapping between brain activity and conscious experience. Qualitatively
similar subjective experiences of ownership appear to be generated by quite differ-
ent brain processes in the passive/synchronous and active/synchronous condition.
Models involving a single neural correlate of each specific conscious experience
have been highly successful in the study of individual sensory percepts, particularly
in vision (Haynes & Rees, 2006). However, the aspects of self-consciousness that we
call sense of body-ownership and sense of agency are not unique elemental percepts
or qualia in the same way. Rather, they may be a cluster of subjective experiences, feel-
ings, and attitudes (Synofzik, Vosgerau & Newen, 2008; Gallagher, this volume).
Suprathreshold activations unique to the experience of agency were observed in
the presupplementary motor area (pre-SMA), the superior parietal lobe, the extras-
triate body area, and the dorsal premotor cortex bilaterally (BA6). The pre-SMA is
strongly involved in the voluntary control of action (Goldberg, 1985). Neurosurgical
stimulation studies further suggest that it contributes to the experience of volition
itself: stimulation of pre-SMA can produce an “urge” to move, at stimulation lev-
els below threshold for evoking physical movement (Fried et al., 1991). Voluntary
action was present in both the active/synchronous and the active/asynchronous
conditions: these differed only in timing of visual feedback, and the resulting sense
of agency. However, the pre-SMA activation was greater in the active/synchronous
condition, where visual feedback confirms that the observed movement is tempo-
rally related to the voluntary motor command, suggesting that the pre-SMA plays
an important role not only in conscious intention (Lau et al., 2004) but also in the
sense of agency.
The observed premotor activation (BA6) is also of relevance to a different type
of action-awareness deficit. Anosognosia for hemiplegia involves denial of motor
deficits after right hemisphere stroke. It arises, in part, by a failure to monitor signals
related to one’s own movement and is associated with lesions in right BA44 and
BA6 (Berti et al., 2005). Interestingly, anosognosic patients seem to “ignore” the
conflict between their own intention to move and the manifest lack of movement
of the left hand. They appear to perceive their intention, but not the failure of their
intention to trigger appropriate proprioceptive and visual feedback (Fotopoulou
et al., 2008). The roles of pre-SMA and BA6 in this experiment could reflect either
an advance intention-based prediction of the sensory feedback of action or a delayed
postdictive attribution of sensory feedback to the self.

6. CONCLUDING REMARKS
This chapter has presented some of the ways in which experimental psychology and the cognitive neurosciences have investigated the sense of agency. Several experimental approaches to the study of agency have emerged with recent advances in research methods. From the design of nonecological situations where there is ambiguity over the authorship of an action to the implementation of control conditions of passive movements that make little sense in our everyday waking life, the reviewed studies have tried to identify some key elements of what constitutes the
sense of agency in humans. The exact interplay between conscious intentions and
behavior and the balance between predictive and postdictive processes remain con-
troversial. However, the empirical investigation of the fact of agency, that is, the study
of situations where people unambiguously produce voluntary actions, suggests that
self-generated behavior changes the perception of one’s body and the external world
by integrating temporal and spatial representations of movements and their effects
on the world. One important implication of the experiments described in this chap-
ter is that the sense of agency seems to be closely linked to the appropriate process-
ing of efferent information within the motor system. For example, the experiments
on intentional binding and sensory attenuation suggest that efferent signals are suf-
ficient for eliciting these effects, and support the conceptualization of the sense of
agency as an efferent-driven predictive process. From a conceptual point of view,
the efference copy can be considered as a pragmatic index of the origin of move-
ment that operates at the interface between the psychological and the physiological
sides of our actions. The psychological content can be described as an intention-in-
action, and the physiological side relates to the descending motor command and
the sensory feedback. The reviewed agentic effects, which are specific to the cascade of cognitive-motor processes that underpin voluntary movements, point to the important functional role of agency for biological organisms interacting with their environment.

ACKNOWLEDGMENTS
Dr. Tsakiris and Dr. Fotopoulou were supported by the “European Platform for Life
Sciences, Mind Sciences, and the Humanities” grant by the Volkswagen Stiftung for
“Body-Project: Interdisciplinary Investigations on Bodily Experiences.” Dr. Tsakiris
was supported by the European Science Foundation EUROCORES Programme
CNCC, supported by funds from the EC Sixth Framework Programme under con-
tract no. ERAS-CT-2003–980409.

NOTE
1. For example, Marcel distinguished between attributing an action to one’s self, and
attributing the intentional source of the action to one’s self. Patients with anarchic hand
have a clear sense that their involuntary movements are their own, but they strongly
deny intending them (Marcel, 2003). Since the patients often themselves report this
dissociation as surprising, folk psychology may not adequately capture the difference
between ownership of intentional action and ownership of bodily sensation.

REFERENCES
Berti, A., Bottini, G., Gandola, M., Pia, L., Smania, N., Stracciari, A., Castiglioni, I., Vallar,
G., & Paulesu, E. (2005). Shared cortical anatomy for motor awareness and motor con-
trol. Science, 309 (5733), 488–491.
Berti, A., Spinazzola, L., Pia, L., & Rabuffeti, M. (2007). Motor awareness and motor
intention in anosognosia for hemiplegia. In Haggard, P., Rossetti, Y., & Kawato, M.,
eds., Sensorimotor foundations of higher cognition series: Attention and performance num-
ber XXII, 163–182. New York: Oxford University Press.
Blakemore, S. J., Wolpert, D. M., & Frith, C. D. (2002). Abnormalities in the awareness of
action. Trends in Cognitive Sciences, 6, 237–242.
Botvinick, M., & Cohen, J. (1998). Rubber hands “feel” touch that eyes see. Nature, 391
(6669), 756.
Daprati, E., Franck, N., Georgieff, N., Proust, J., Pacherie, E., Dalery, J., & Jeannerod,
M. (1997). Looking for the agent: An investigation into consciousness of action and
self-consciousness in schizophrenic patients. Cognition, 65, 71–86.
Ebert, J. P., & Wegner, D. M. (2010). Time warp: Authorship shapes the perceived timing
of actions and events. Consciousness and Cognition, 19, 481–489.
Ehrsson, H. H., Spence, C., & Passingham, R. E. (2004). That’s my hand! Activity in pre-
motor cortex reflects feeling of ownership of a limb. Science, 305, 875–877.
Farrer, C., Franck, N., Georgieff, N., Frith, C. D., Decety, J., & Jeannerod, M. (2003).
Modulating the experience of agency: A positron emission tomography study.
NeuroImage, 18, 324–333.
Farrer, C., Frey, S. H., Van Horn, J. D., Tunik, E., Turk, D., Inati, S., & Grafton, S. T. (2008).
The angular gyrus computes action awareness representations. Cerebral Cortex, 18,
254–261.
Farrer, C., & Frith, C. D. (2002). Experiencing oneself vs another person as being the
cause of an action: The neural correlates of the experience of agency. NeuroImage, 15,
596–603.
Fink, G. R., Marshall, J. C., Halligan, P. W., Frith, C. D., Driver, J., Frackowiak, R. S., &
Dolan, R. J. (1999). The neural consequences of conflict between intention and the
senses. Brain, 122, 497–512.
Fotopoulou, A., Tsakiris, M., Haggard, P., Vagopoulou, A., Rudd, A., & Kopelman, M.
(2008). The role of motor intention in motor awareness: An experimental study on
anosognosia for hemiplegia. Brain, 131, 3432–3442.
Fourneret, P., & Jeannerod, M. (1998). Limited conscious monitoring of motor perfor-
mance in normal subjects. Neuropsychologia, 36, 1133–1140.
Fried, I., Katz, A., McCarthy, G., Sass, K. J., Williamson, P., & Spencer, D. D. (1991).
Functional organization of human supplementary motor cortex studies by electrical
stimulation. Journal of Neuroscience, 11, 3656–3666.
Frith, C. D., Blakemore, S. J., & Wolpert, D. M. (2000). Abnormalities in the awareness
and control of action. Philosophical Transactions Royal Society London Series B Biological
Sciences, 355 (1404), 1771–1788.
Goldberg, G. (1985). Supplementary motor area structure and function: Review and hypotheses. Behavioral and Brain Sciences, 8, 567–616.
Gusnard, D. A., Akbudak, E., Shulman, G. L., & Raichle, M. E. (2001). Medial prefrontal
cortex and self-referential mental activity: Relation to a default mode of brain function.
Proceedings of National Academy of Sciences, USA, 98, 4259–4264.
Haggard, P. (2008). Human volition: Towards a neuroscience of will. Nature Reviews
Neuroscience, 9, 934–946.
Haggard, P., Clark, S., & Kalogeras, J. (2002). Voluntary action and conscious awareness.
Nature Neuroscience, 5, 382–385.
Haggard, P., & Tsakiris, M. (2009). The experience of agency: Feeling, judgment and
responsibility. Current Directions in Psychological Science, 18, 242–246.
Haynes, J., & Rees, G. (2006). Decoding mental states from brain activity in humans.
Nature Reviews Neuroscience, 7, 523–534.
Lau, H. C., Rogers, R. D., Haggard, P., & Passingham, R. E. (2004). Attention to intention. Science, 303, 1208–1210.
Leube, D. T., Knoblich, G., Erb, M., & Kircher, T. T. (2003). Observing one’s hand
become anarchic: An fMRI study of action identification. Consciousness and Cognition,
12, 597–608.
Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious inten-
tion to act in relation to onset of cerebral activity (readiness-potential): The uncon-
scious initiation of a freely voluntary act. Brain, 106 (pt. 3), 623–642.
Longo, M. R., & Haggard, P. (2009). Sense of agency primes manual motor responses.
Perception, 38, 69–78.
Marcel, A. J. (2003). The sense of agency: Awareness and ownership of actions and inten-
tions. In Roessler, J., & Eilan, N., eds., Agency and self-awareness, 48–93. Oxford: Oxford
University Press.
Metcalfe, J., & Greene M. J. (2007). Metacognition of agency. Journal of Experimental
Psychology General, 136, 184–199.
Moore, J. W., & Haggard, P. (2010). Intentional binding and higher order agency experi-
ence. Consciousness and Cognition, 19, 490–491.
Northoff, G., & Bermpohl, F. (2004). Cortical midline structures and the self. Trends in
Cognitive Sciences, 8, 102–107.
Northoff, G., Heinzel, A., de Greck, M., Bermpohl, F., Dobrowolny, H., & Panksepp, J.
(2006). Self-referential processing in our brain—a meta-analysis of imaging studies on
the self. NeuroImage, 31, 440–457.
Sarrazin, J. C., Cleeremans, A., & Haggard, P. (2008). How do we know what we are doing?
Time, intention, and awareness of action. Consciousness and Cognition, 17, 602–615.
Sato, A., & Yasuda, A. (2005). Illusion of sense of self-agency: Discrepancy between
the predicted and actual sensory consequences of actions modulates the sense of
self-agency, but not the sense of self-ownership. Cognition, 94, 241–255.
Schneider, F., Bermpohl, F., Heinzel, A., Rotte, M., Walter, M., Tempelmann, C., Wiebking,
C., Dobrowolny, H., Heinze, H. J., & Northoff, G. (2008). The resting brain and our
self: Self-relatedness modulates resting state neural activity in cortical midline struc-
tures. Neuroscience, 157, 120–131.
Synofzik, M., Vosgerau, G., & Newen, A. (2008). Beyond the comparator model: A multi-
factorial two-step account of agency. Consciousness and Cognition, 17, 219–239.
Tsakiris, M., & Haggard, P. (2003). Awareness of somatic events associated with a volun-
tary action. Experimental Brain Research, 149, 439–446.
Tsakiris, M., & Haggard, P. (2005). Experimenting with the acting self. Cognitive
Neuropsychology, 22, 387–407.
Tsakiris, M., Haggard, P., Franck, N., Mainy, N., & Sirigu, A. (2005). A specific role for
efferent information in self-recognition. Cognition, 96, 215–231.
Tsakiris, M., Hesse, M., Boy, C., Haggard, P., & Fink, G. R. (2007). Neural correlates of
body-ownership: A sensory network for bodily self-consciousness. Cerebral Cortex, 17,
2235–2244.
Tsakiris, M., Longo, M. R., & Haggard, P. (2010). Having a body versus moving your body:
Neural signatures of agency and body-ownership. Neuropsychologia, 48, 2740–2749.
Tsakiris, M., Prabhu, G., & Haggard, P. (2006). Having a body versus moving your body:
How agency structures body-ownership. Consciousness and Cognition, 15, 423–432.
Wegner, D. M. (2003). The mind’s best trick: How we experience conscious will. Trends
in Cognitive Sciences, 7, 65–69.
Wittgenstein, L. (1953). Philosophical investigations. Oxford: Blackwell.
7

Ambiguity in the Sense of Agency

SHAUN GALLAGHER

In a variety of recent studies the concept of the sense of agency has been shown
to be phenomenologically complex, involving different levels of experience, from
the basic aspects of sensory-motor processing (e.g., Farrer et al. 2003; Tsakiris and
Haggard 2005; Tsakiris, Bosbach, and Gallagher 2007) to the higher levels of inten-
tion formation and retrospective judgment (e.g., Pacherie 2006, 2007; Stephens
and Graham 2000; Synofzik, Vosgerau, and Newen 2008; Gallagher 2007, 2010).
After summarizing this complexity, I will argue, first, that the way that these various
contributory elements manifest themselves in the actual phenomenology of agency
remains ambiguous, and that this ambiguity is in fact part of the phenomenology.
That is, although there surely is some degree of ambiguity in the analysis of this
concept, perhaps because many of the theoretical and empirical studies cut across
disciplinary lines, there is also a genuine ambiguity in the very experience of agency.
Second, most studies of the sense of agency fail to take into consideration that it
involves more than simply something that happens in the head (mind or brain), and
specifically that it has a social dimension.

COMPLEXITIES
Normally when I engage in action, I have a sense of agency for that action. How is
that sense or experience of agency generated?1 It turns out that there are a number
of things that can contribute to this experience. Some, but not all, of these things do
contribute to the experience of agency in all cases. I’ll start with the most basic—
those aspects that seem to be always involved—and then move to those that are
only sometimes involved.

Motor Control Processes


If we think of the sense of self-agency (SA) as the experience that I am the one
who is causing or generating the movement, then we can distinguish SA from the
sense of ownership (SO) for movement, which is the sense that I am the one who
is undergoing the movement—that it is my body moving, whether the movement
is voluntary or involuntary (Gallagher 2000a, 2000b). In the case of involuntary
movement, SA is missing, but I still have SO. If I’m pushed, I still have the sense that
I am the one moving, even if I did not cause the movement. These experiences are
prereflective, which means that they neither are equivalent to nor depend on the
subject taking an introspective reflective attitude. Nor do they require that the sub-
ject engages in an explicit perceptual monitoring of bodily movements. Just as I do
not attend to the details of my own bodily movements as I am engaged in action, my
sense of agency is not normally something that I attend to or something of which I
am explicitly aware. As such, SA is phenomenologically recessive.
If we are thinking of action as physical, embodied action that involves self-gen-
erated movement, then motor control processes are necessarily involved. The most
basic of these are efferent brain processes that are involved in issuing a motor com-
mand. Let’s think again about involuntary movement. In the case of involuntary
movement there is a sense of ownership for the movement but no sense of self-
agency. Awareness of my involuntary movement comes from reafferent sensory
feedback (visual and proprioceptive/kinesthetic information that tells me that I’m
moving). There are no initial motor commands (no efferent signals) that I issue
to generate the movement. It seems possible that in both involuntary and volun-
tary movement SO is generated by sensory feedback, and that in the case of vol-
untary movement a basic, prereflective SA is generated by efferent signals. Tsakiris
and Haggard (2005; also see Tsakiris 2005) review empirical evidence to support
this division of labor. They suggest that efferent processes underlying SA modu-
late sensory feedback resulting from movement. Sensory suppression experiments
(Tsakiris and Haggard 2003) suggest that SA arises at an early efferent stage in the
initiation of action and that awareness of the initiation of my own action depends on
central signals, which precede actual bodily movement. Experiments with subjects
who lack proprioception but still experience a sense of effort reinforce this conclu-
sion (Lafargue, Paillard, Lamarre, and Sirigu 2003; see Marcel 2003). As Tsakiris
and Haggard (2005) put it:

The sense of agency involves a strong efferent component, because actions are
centrally generated. The sense of ownership involves a strong afferent compo-
nent, because the content of body awareness originates mostly by the plurality
of multisensory peripheral signals. We do not normally experience the efferent
and afferent components separately. Instead, we have a general awareness of
our body that involves both components. (387)

This prereflective SA does not arise simply when I initiate an action; as I continue
to control my action, continuing efferent signals, and the kind of afferent feedback
that I get from my movement, contribute to an ongoing SA.2 To the extent that I am
aware of my action, however, I tend to be aware of what I am doing rather than the
details of how I am doing it, for example, what muscles I am using. Even my reces-
sive awareness of my action is struck at the most pragmatic level of description (“I’m
getting a drink”) rather than at a level of motor control mechanisms. That is, the
phenomenal experience of my action already involves an intentional aspect. What I
am trying to accomplish in the way of basic movements (e.g., moving out of the way,
walking to open the door, reaching for a drink) informs my body-schematic pro-
cesses, which are intentional (and reflect what Merleau-Ponty calls a motor inten-
tionality) just because they are constrained by what I am trying to do.

Intentional Aspects in SA
Several brain imaging experiments have shown that the intentional aspects of what
I am trying to do and what I actually accomplish in the world enter into our sense of
agency. These experiments help us to distinguish between the purely motor control
contributories (the sense that I am moving my body) and the most immediate and
perceptually based intentional aspects (the sense that I am having an effect on my
immediate environment) of action (Chaminade and Decety 2002; Farrer and Frith
2002). These experiments, however, already introduce a certain theoretical ambigu-
ity into the study of SA, since they fail to clearly distinguish between motor control
aspects and intentional aspects.
For example, in Farrer and Frith’s (2002) fMRI experiment, designed to find the
neural correlates of SA, subjects are asked to manipulate a joystick to drive a colored
circle moving on a screen to specific locations on the screen. In some instances the
subject causes this movement, and in others the experimenter or computer does.
The subject has to discriminate self-agency and other-agency. Farrer and Frith cite
the distinction between SA and SO (from Gallagher 2000a) but associate SA with
the intentional aspect of action, that is, whether I am having some kind of effect with
respect to the goal or intentional task (or what happens on the computer screen).
Accordingly, their claim is that SO (“my hand is moving the joystick”) remains con-
stant while SA (“I’m manipulating the circle”) changes. When subjects feel that they
are not controlling the events on the screen, there is activation in the right inferior
parietal cortex and supposedly no SA for the intentional aspect of the action. When
the subject does have SA for what happens on the screen, the anterior insula is acti-
vated bilaterally.
Although Farrer and Frith clearly think of SA as something tied to the intentional
aspect of action and not to mere bodily movement or motor control, when it comes
to explaining why the anterior insula should be involved in generating SA, they frame
the explanation in terms of motor control and bodily movement:

Why should the parietal lobe have a special role in attributing actions to others
while the anterior insula is concerned with attributing actions to the self? The
sense of agency (i.e., being aware of causing an action) occurs in the context of
a body moving in time and space . . . [and] critically depends upon the experi-
ence of such a body. There is evidence that . . . the anterior insula, in interaction
with limbic structures, is also involved in the representation of body schema . . . .
One aspect of the experience of agency that we feel when we move our bod-
ies through space is the close correspondence between many different sen-
sory signals. In particular there will be a correspondence between three kinds
of signal: somatosensory signals directly consequent upon our movements,
visual and auditory signals that may result indirectly from our movements, and
last, the corollary discharge [efferent signal] associated with motor commands
that generated the movements. A close correspondence between all these sig-
nals helps to give us a sense of agency. (Farrer and Frith 2002, 601–602)

In a separate study Farrer et al. (2003) have the same goal of discovering the neural
correlates of SA. In this experiment subjects provide a report on their experience;
however, all questions about agency were focused on bodily movement rather than
the intentional aspect. In fact, subjects were not given an intentional task to carry out
other than making random movements using a joystick, and the focus of their atten-
tion was directed toward a virtual (computer image) hand that either did or did not
represent their own hand movements, although at varying degrees of rotation rela-
tive to the true position of the subject’s hand. That is, they moved their own hand
but saw a virtual hand projected on screen at veridical or nonveridical angles to their
own hand; the virtual hand was either under their control or not. Subjects were
asked about their experience of agency for control of the virtual hand movements.
The less the subject felt in control, the higher the level of activation in the right infe-
rior parietal cortex, consistent with Farrer and Frith (2002). The more the subject
felt in control, the higher the level of activation in the right posterior insula. This
result is in contrast with the previous study, where SA was associated with activation
of the right anterior insula. Referencing this difference, Farrer et al. (2003) state:
“We have no explanation as to why the localization of the activated areas differ[s] in
these studies, except that we know that these two regions are densely and recipro-
cally connected” (331). One clear explanation, however, is that the shift of focus
from the intentional aspect (accomplishing a computer screen task in Farrer and
Frith 2002) to simple control of bodily movement (in Farrer et al. 2003) changes
the aspect of SA that is being studied. It would be helpful in these experiments to
clearly distinguish between the intentional aspect and the motor (efferent) aspect
of agency, and to say that there are at least these two contributories to SA.

Intention Formation
Over and above the sensory-motor processes that involve motor control and the
perceptual processes that allow us to monitor the intentional aspects of our actions,
there are higher-order cognitive components involving intention formation that con-
tribute to SA. Pacherie (2007) and others like Bratman (1987) and Searle (1983)
distinguish between future or distal intentions and present intentions. Future or
F-intentions relate to prior deliberation processes that allow us to formulate our
relatively long-term goals. For example, I may decide to purchase a car tomorrow
(or next week, or next month, or at some undetermined time when there is a good
rebate available), and then at the appropriate time go out and engage in that action.
Not all actions involve prior intention formation. For example, I may decide right
now to get a drink from the kitchen and find myself already moving in that direc-
tion. In that case I have not formed an F-intention, although my action is certainly
intentional. In that case, I may have a present or P-intention (or what Searle calls an
“intention-in-action”). My intention to get a drink from the kitchen may involve an
actual decision to get up and to move in the direction of the kitchen—and in doing
so I may be monitoring what I am doing in an explicitly conscious way. It may be a
rather complex action. At my university office the kitchen is located down the hall,
and it is locked in the evening. If I want to get a drink, I have to walk up the hall,
retrieve the key for the kitchen from a common room, and then proceed back down
to the kitchen, unlock the door, retrieve the drink, relock the door, return the key,
and return to my office. Although I may be thinking of other things as I do this, I am
also monitoring a set of steps that are not automatic.
In other cases I may be so immersed in my work that I don’t even notice that I’m
reaching for the glass of water on the table next to me. Here my intentional action
may be closer to habitual, and there is no P- or F-intention involved. In such cases, I
would still have a minimal SA, connected with what Pacherie (2007) calls a motor
or M-intention, and consisting of the prereflective sense generated in motor con-
trol processes and a rather recessive intentional aspect (which I may only notice if I
knock over the glass or spill the drink).
It is likely that when there is an F- and/or P-intention involved, such intentions
generate a stronger SA. Certainly, if I form an F-intention to buy a new car tomor-
row, and tomorrow I go to the car dealership and purchase a car, I will feel more in
charge of my life than if, without prior intention I simply find myself lured into a
car dealership, purchasing a car without prior planning. In the latter case, even if I
do not deny that I am the agent of my action, I might feel a bit out of control. So it
seems clear that part of the phenomenology of agency may be tied, in some cases,
to the formation of a prior intention. It’s important here to distinguish between the
cognitive level of intention formation—which may involve making judgments and
decisions based on beliefs, desires, or evaluations—and a first-order level of experi-
ence where we find SA. SA is not itself a judgment, although I may judge that I am
the agent of a certain action based on my sense of agency for it. But what is clear is
that intention formation may generate a stronger SA than would exist without the
formation of F- or P-intentions.

Retrospective Attribution
The effect of the formation of a prior intention is clearly prospective. But there are
post-action processes that can have a retrospective effect on the sense of agency.
Graham and Stephens (1994; Stephens and Graham 2000) provide an account of
introspective alienation in schizophrenic symptoms of delusions of control and
thought insertion in terms of two kinds of self-attribution.

• Attributions of subjectivity: the subject reflectively realizes and is able to report
that he is moving. For example, he can say, “This is my body that is moving.”
• Attributions of agency: the subject reflectively realizes and is able to report
that he is the cause or author of his movement. For example, he can say, “I am
causing this action.”

According to Graham and Stephens, the sense of agency originates at this
higher-order level of attribution. They propose an explanation of SA in terms of
“our proclivity for constructing self-referential narratives” that allow us to explain
our behavior retrospectively: “Such explanations amount to a sort of theory of the
person’s agency or intentional psychology” (1994, 101; Stephens and Graham
2000, 161). If we take thinking itself to be a kind of action on our part, then our
sense of agency for that thinking action derives from a reflective attitude toward it:
“Whether I take myself to be the agent of a mental episode depends upon whether
I take the occurrence of this episode to be explicable in terms of my underlying
intentional states” (Graham and Stephens 1994, 93).
On this view our sense of agency for a particular action depends on whether we
can reflectively explain our action in terms of our beliefs, desires, and intentions.
Accordingly, if a subject does or thinks something for which she has no intentions,
and her action fails to accord with her beliefs and desires—mental states that would
normally explain or rationalize the action—then the action or thought would not
appear as something she intentionally does or thinks. Whether I count something
as my action thus

depends upon whether I take myself to have beliefs and desires of the sort that
would rationalize its occurrence in me. If my theory of myself ascribes to me the
relevant intentional states, I unproblematically regard this episode as my action.
If not, then I must either revise my picture of my intentional states or refuse to
acknowledge the episode as my doing. (Graham and Stephens 1994, 102)

On this approach, I have a sense of agency, and specifically for my actions because
I have a properly ordered set of second-order retrospective interpretations (see
Graham and Stephens 1994, 102; Stephens and Graham 2000, 162ff.).
Pacherie indicates that F-intentions are subject to normative pressures for consis-
tency and coherence relative to the agent’s beliefs and other intentions. This would
also seem to be the case with Graham and Stephens’s retrospective attributions. But
in either case, the fact that I may fail to justify my actions or think that my actions
fail to fit with my theory or narrative about myself retrospectively does not neces-
sarily remove my sense of agency for the action, although it may diminish it. That is,
it seems wrong to think, as Graham and Stephens suggest, that retrospective attribu-
tion actually constitutes my sense of agency, but one should acknowledge that it can
have an effect on SA, either strengthening it or weakening it.
Within the realm of the normal, we can have two extremes. In one case I may gen-
erally feel that I am in control of my life because I usually follow through and act on
my intentions. I think and deliberate about an action, and form an F-intention to do
it. When the time comes, I remember my F-intention, and I see that it is the appro-
priate time and situation to begin acting to fulfill that intention. My P-intentions
coincide with the successful guidance of the action; my motor control is good, and
all the intentional factors line up. Subsequently, as I reflect on my action, it seems to
me to be a good fit with how I think of myself, and I can fully attribute responsibility
for that action to myself. It seems that in this case I would feel a very strong sense
of agency for the action, all contributing aspects—prospective intention formation,
contemporary control factors, and retrospective attribution—giving me a coherent
experience of that action (see Figure 7.1). In another case, however, I may have a
minimal SA—no F- or P-intention and no retrospective attribution or evaluation.
My SA for the action may just be my thin experience of having motor control over
something that I just did.

[Figure 7.1 Complexities in SA. The figure arrays the contributories to SA from prospective to retrospective: reflective deliberation, F-intention, and P-intention (prospective); prereflective perceptual monitoring of the intentional aspect; subpersonal, nonconscious motor control and sensory integration issuing in action; and reflective attribution or evaluation (retrospective).]

AMBIGUITIES
Pacherie suggests that mechanisms analogous to motor control mechanisms can
explain the formation of F- and P-intentions:

The contents represented at the level of F-intentions as well as the format in
which these contents are represented and the computational processes that oper-
ate on them are obviously rather different from the contents, representational
formats and computational processes operating at the level of M-intentions.
Yet, the general idea that internal models divide into inverse models which
compute the means towards a given goal and forward models which compute
the consequences of implementing these means retains its validity at the level
of F-intentions. . . . Similarly, it is highly plausible that action-specification at the
level of P-intentions makes use of internal models. (2007, 4)

That our deliberation about future actions involves thinking about the means and
ends of our actions seems uncontroversial. Pacherie’s proposal does raise one ques-
tion, however. If we regard thinking, such as the deliberative process that may be
involved in intention formation, itself as a kind of action, then do we also have a sense
of agency for the thinking or deliberation involved in the formation of F-intentions?
It seems right to suggest that if I engage in a reflectively conscious process of delib-
erating about my future actions and make some decisions on this basis, I would have
a sense of agency for (and from) this deliberation.3 You could interrupt me during
this process and ask what I am doing, and I could say: “I’m sitting here deliberat-
ing about buying a car.” The sense of agency that I feel for my ongoing deliberation
process may be based on my sense of control over it; my response to your question
is a retrospective attribution that may confirm this sense of agency. It’s also pos-
sible that my SA for my deliberation derives in part from a previous deliberation
process (I may have formed the F-intention yesterday to do my deliberations, i.e.,
to form my F-intentions about car buying, today). It is clearly the case, however,
that not all forming of F-intentions requires a prior intention to do so, otherwise
we would have an infinite regress. We would have to deliberate about deliberating
about deliberating, and so on. Furthermore, it is possible to have P-intentions for
the action of forming F-intentions, where P-intentions in this case may be a form of
metacognition where we are conscious of our cognitive strategies as we form our
F-intentions. Certainly, however, it is not always the case that we engage in this kind
of metacognition as we formulate our F-intentions. It seems, then, that we can have
a minimal first-order sense of agency for our deliberations without prior delibera-
tion or occurrent metacognitive monitoring.
On the one hand, the sense of agency for a particular action (X) is different
from the sense of agency for the intention formation to do X. They are obviously
not equivalent, since there are two different actions involved, X, and the act of
deliberation about X. On the other hand, it seems likely that SA for my delibera-
tion may contribute to my reflective sense (and my retrospective attribution) that
I am the agent of my own actions. Pacherie refers to this as the long-term sense
of agency: “a sense of oneself as an agent apart from any particular action, i.e. a
sense of one’s capacity for action over time, and a form of self-narrative where
one’s past actions and projected future actions are given a general coherence and
unified through a set of overarching goals, motivations, projects and general lines
of conduct” (2007, 6).
As such it may enter into the occurrent sense of agency for any particular action.
Furthermore, if I lacked SA for my deliberation process, it might feel more like an
intuition or unbidden thought, or indeed, if I were schizophrenic, it might feel like
an inserted thought. In any case, it might feel less than integrated with what Graham
and Stephens call the “theory or story of [the subject’s] own underlying intentional
states,” something that itself contributes to SA for the action. So it seems that SA
for the deliberation process itself may contribute to SA for the action X in two indi-
rect ways. First, by contributing to my long-term sense of agency, and second, by
contributing to the effect of any retrospective attribution I may engage in. Still, as I
indicated, there need not be (and, under threat of infinite regress, there cannot be)
a deliberation process for every action that I engage in.
Similarly for P-intentions. If action monitoring, at the level of P-intentions, is
itself a kind of action (if, e.g., it involves making judgments about certain environ-
mental factors), there may be a sense of agency for that action monitoring. The
processes that make up a P-intention are much closer to the intended action itself
and may not feel like an additional or separate action. I can imagine a very explicit
kind of P-intention in the form of a conscious monitoring of what I am doing. For
example, I may be putting together a piece of furniture by following a set of instruc-
tions. In that case I could have a sense of agency for following the instructions and
closely monitoring my actions in terms of means-ends. Certainly doing it that way
would feel very different from doing it without following the set of instructions.
But the SA for following the instructions would really go hand in glove with SA
for the action of assembling the furniture. How we distinguish such things would
really depend on how we define the action.
In the process of assembling the furniture, I may start by reading instruction
number 1; I then turn to the pieces of wood in front of me and join two of them
together. I can distinguish the act of reading from the act of joining and define SA
for each of them. In that case, however, one can ask whether SA for the act of reading
doesn’t contribute to SA for the act of joining. I might, however, think of the read-
ing and the joining as one larger action of assembling the furniture, and SA might
be defined broadly to incorporate all aspects of that assembling. It might also be the
case that when I put together a second piece of furniture, I don’t consult the instruc-
tions at all, in which case SA is more concentrated in the joining. In most practiced
actions a P-intention is really unnecessary because motor control processes and
perceptual monitoring of the intentional aspect can do the job, that is, can keep my
action on track. I might simply make up my mind (an F-intention) to do this task,
and I go and immediately start to do the task without further monitoring in terms
of means-ends. All of this suggests that how we experience agency is relative to the
way we define specific actions, and how practiced those actions are.
This means that there is some serious ambiguity not simply in the way we define
the sense of agency but in the sense—the experience—of agency itself. This phe-
nomenological ambiguity—the very ambiguity of our experience of agency—
should be included in our considerations about the sense of agency. Clear-cut and
unambiguous definitions may create a neat conceptual map, but the landscape
itself may not be so neat. It is not always the case, as Pacherie sometimes suggests,
that P-intentions serve to implement action plans inherited from F-intentions,
since there are not always F-intentions. It is not always the case that “the final stage
in action specification involves the transformation of the perceptual-actional con-
tents of P-intentions into sensorimotor representations (M-intentions) through a
precise specification of the spatial and temporal characteristics of the constituent
elements of the selected motor program” (Pacherie 2007, 3), since there are not
always P-intentions. Pacherie also suggests that a sense of action initiation and a
sense of control are “crucial” components in the sense of agency (2007, 17–18)
and that in both components the P-intention plays a large role. But the fact that
some actions for which we have SA take place without P-intentions puts this idea
in question.
The sense of action initiation, Pacherie suggests, is based on the binding of
P-intention and awareness of movement onset in the very small time frame of 80
to 200 milliseconds prior to actual movement onset, corresponding to the time of
the lateralized readiness potential, a signal that corresponds to selection of a spe-
cific motor program (Libet 1985; Haggard 2003). She associates the P-intention
with what Haggard distinguishes as an urge to move and reference forward to the
goal of the action. But these aspects of action experience can be purely prereflec-
tive, generated by motor-control processes, and form part of the M-intention (see
Desmurget et al. 2009 for relevant data). In this regard it is important to distinguish
P-intention from the prereflective perceptual monitoring of the intentional aspects
of the action that can occur without a formed P-intention, as in practiced action.
Whereas monitoring of the intentional aspects can contribute to SA whether or
not we have a conscious intention in terms of specific goals (Aarts, Custers, and
Wegner 2005), the P-intention does not seem crucial for SA.
Pacherie further suggests that the sense of control has three dimensions cor-
responding to F-intentions, P-intentions, and M-intentions. Again, however, the
sense of control may be reflectively conscious for F- and P-intentions, but, as gener-
ated in motor-control mechanisms, it may remain prereflectively conscious as long
as the action is going well, for example, as long as I don’t stumble over or knock into
something. A conscious judgment or conscious sense of control associated with
the P-intention may in fact be absent until that point when something starts to go
wrong at the motor-control level, and it may be motivated by what I experience in
the prereflective monitoring of the intentional aspect of action.
What seem legitimate conceptual distinctions on the theoretical level—
“awareness of a goal, awareness of an intention to act, awareness of initiation of
action, awareness of movements, sense of activity, sense of mental effort, sense of
physical effort, sense of control, experience of authorship, experience of intention-
ality, experience of purposiveness, experience of freedom, and experience of mental
causation” (Pacherie 2007, 6)—may not show up as such in the actual first-order
phenomenology. They may be the product of theoretical reflection on the first-order
phenomenology. As I engage in action, for example, I may not experience a differ-
ence between my sense of effort and my sense of control, although I can certainly
make that distinction in my reflective (prospective or retrospective) consideration
of my action. That distinction may show up clearly at the level of my retrospective
attribution but may be entirely lost in my immersed SA. My awareness of what I
am doing and that I am doing it is usually struck at the most pragmatic level of
description (“I’m getting a drink”) rather than at a level that distinguishes between
the action and my agency, or within the action between the goal and the means, or
within agency between intentional causation, initiation, and control—distinctions
that Pacherie suggests can be found in the phenomenology.
Phenomenologically, however, there is no such thing as a “naked intention”—the
awareness of an action without an awareness of who the agent is (Jeannerod and
Pacherie 2004)—or “agent-neutral” action experience (Pacherie 2007, 16). The
awareness that I am the agent of an action is implicit in the prereflective awareness
of acting, which does not contain an awareness of causation separate from aware-
ness of control. Pacherie is thus absolutely right to note that a conceptual analysis
cannot “preempt the question whether these various aspects are dissociable or not,
for instance whether we can be aware of what we are doing independently of an
awareness of how we’re doing it or whether we can be aware of what we are doing
without at the same time experiencing this action as ours” (2007, 7). What can
decide the issue, however, is agreement on where to draw the lines between phe-
nomenological analysis (i.e., of what we actually experience), neuroscientific analysis
(which may find a much finer grain of articulations at the neuronal level than show
up in phenomenology), and conceptual analysis (which may introduce distinctions
that are in neither the phenomenology nor the neurology but may have a produc-
tive role to play in constructing cognitive models or, in regard to the individual,
explaining psychological motivations, etc.).

PUSHING THIS ANALYSIS INTO THE WORLD


The sense of agency is both complex and ambiguous. It has multiple contributories,
some of which are reflectively conscious, some of which are prereflectively con-
scious, and some of which are nonconscious. Consistent with phenomenological
theories of embodiment, in everyday engaged action reafferent or sensory-feedback
signals are attenuated, implying a recessive consciousness of the body in action (see,
e.g., Gallagher 2005; Tsakiris and Haggard 2005). We do not attend to the details
of our bodily movements in most actions. We do not stare at our own hands as
we decide to use them; we do not look at our feet as we walk; we do not attend to
our arm movements as we engage the joystick. Most efferent, motor-control and
body-schematic processes are nonconscious and automatic. Just such processes
nonetheless contribute to a conscious sense of agency by generating a prereflective
awareness of our actions. In most normal actions the sense of agency runs along
with and is experientially indistinguishable from a basic sense of ownership; efferent
and reafferent signals are likely integrated in the insula. SA is part of our basic feel-
ing of embodiment without which our actions would feel very different. In addition,
we also experience, prereflectively, a form of intentional feedback, which is not affer-
ent feedback about our bodily movements but a perceptual sense that my action is
having an effect in the world. This effect is not something that we reflectively dwell
on, or even retain in memory. A good example of this is our usual perceptual aware-
ness while driving a car.
The sense of agency for some actions may amount to nothing more than this. For
other actions, however, the sense of agency is not reducible to just these embod-
ied and prereflective processes. In addition, in many cases we may be reflectively
conscious of and concerned about what we are doing. For such actions the sense
of agency will be tied to a more reflective sense of intention, involving attention
directed toward the project or task that we are engaged in, or toward the means and/
or end that we aim for.
Conceptually we can identify at least five different contributories to the sense of
agency that may be connected with a particular action:

• Formation of F-intentions, often involving the prospective reflective
deliberation or planning that precedes action.
• Formation of P-intentions, that is, the conscious monitoring of action in
terms of specific means-ends relations.
• Basic efferent motor-control processes that generate a first-order experience
linked to bodily movement in and toward an environment.
• Prereflective perceptual monitoring of the effect of my action in the world.
• The retrospective attribution of agency that follows action.

We could add to this the long-term sense of one’s capacity for action over time,
which Pacherie identifies as related to self-narrative “where one’s past actions and
projected future actions are given a general coherence and unified through a set of
overarching goals, motivations, projects and general lines of conduct” (2007, 6).
Although conceptually we may distinguish between different levels (first-order,
higher-order) and aspects, and neuroscientifically we may be able to identify
different brain processes responsible for these different contributories, in action,
and in our everyday phenomenology we tend to experience agency in a more holis-
tic, qualitative, and ambiguous way that may be open to a description in terms of
degree.
The conceptual articulation of the different aspects of the sense of agency sug-
gests that the loss or disruption of SA in different pathologies may be varied. In
schizophrenic delusions of control the motor-control aspects may be disrupted. In
other cases the attribution of self-agency may be disrupted by problems with ret-
rospective higher-order cognition or the prospective formation of F-intentions. A
good example of this is the case of narcotic addiction, as discussed by Frankfurt
(1988). If a drug addict invests himself in resisting drugs, he may feel that some-
thing other than himself is compelling him to drug use. If he withdraws from taking
the drug, when he starts using again he may not conceive of himself as the agent.

It is in virtue of this identification and withdrawal, accomplished through the
formation of second-order volition, that the unwilling addict may meaning-
fully make the analytically puzzling statements that the force moving him to
take the drug is a force other than his own, and that it is not of his own free will
but rather against his will that this force moves him to take it. (Frankfurt 1988,
18; see Grünbaum 2009, for discussion)

The sense of agency may be present or absent, diminished or increased depend-
ing on processes or disruptions of processes at different levels. Thus, the loss of the
sense of agency in various pathologies—including schizophrenia, anarchic hand
syndrome, obsessive-compulsive behavior, narcotic addiction, and so forth—may
in fact involve different sorts of loss and very different experiences.
Everything that we have said so far, however, if rich in details, is still narrow in the
scope of what should be included in such an analysis. Although what we have said so
far acknowledges a role for the body and the environment in action—many of the
prereflective aspects being generated in motor control and the intentional aspect of
what we are doing—almost all the processes described remain “in the head,” insofar
as they are either mental processes (deliberation, intention formation, judgment,
evaluation, perceptual monitoring) or brain processes (efferent commands, integra-
tion of afferent signals, premotor processes, and motor control). It almost seems as
if all the action, all the important processes concerning intention and action, take
place in the narrow confines of the mind-brain, even though we know that action
takes place in the world, and most often in social interactions.
One simple way to ask the question is: How do other people and social forces
affect the sense of agency? On the very basic prereflective level, the presence of oth-
ers has an effect on what my possibilities for action are, and the way that I perceive
the world in action contexts.
Jean-Paul Sartre points in this direction, in a very dramatic way. In his example he
is sitting alone in a park. Suddenly, someone else enters the park.

Suddenly an object has appeared which has stolen the world from me.
Everything [remains] in place; everything still exists for me; but everything is
traversed by an invisible flight and fixed in the direction of a new object. The
appearance of the Other in the world corresponds therefore to a fixed sliding
of the whole universe, to a decentralization of the world which undermines
the centralization which I am simultaneously effecting. (1969, 255)

This overly dramatic philosophical description, however, is supported by some
interesting science. Consider what is termed the “Social Simon Effect.” The original
Simon Effect is found in a traditional stimulus-response task. Participants respond
to different colors, pressing a button to their left with their left hand for blue and
a button to their right with their right hand for red. They are asked to ignore the
location of the color (which may be displayed either in their right or left visual
field). An incongruence (mismatch) of right versus left between the color loca-
tion and hand used to respond results in increased reaction times (Simon 1969).
When a subject is asked to respond to just one color with one hand, as you might
expect, there is no conflict and no effect on reaction time. The surprising thing is
that when the subject has exactly the same task (pushing one button for one color)
but is seated next to another person who responds to a different color—each person
responding to one color, as if each were one of the fingers in the original
experiment—reaction times increase for the incongruent trials (Takahama
et al. 2005). This is the social Simon Effect. Similar results are found in trials using a
go-nogo task where reaction times slowed when another person sitting next to the
subject also engaged in the task, but not when that person was simply present and
not engaged. Thus, “the same go-nogo task is performed differently depending on
whether one acts alone or alongside another agent performing a complementary
action” (Sebanz et al. 2003, 15; see Sebanz et al. 2006).
These kinds of things happen on the nonconscious level and likely have an effect
on one’s prereflective sense of agency. But they may become much more explicitly
self-conscious. Consider instances where you are quite capable of and perhaps even
proficient at doing action A, for example, successfully throwing a basketball through
the hoop. Your performance may be affected simply by the fact of having an audi-
ence of very tall basketball superstars. You might in fact feel a degree of inadequacy
in such a circumstance, simply because certain people are present.
More generally, the prospective and retrospective dimensions of intention for-
mation and action interpretation, which affect SA, are often shaped by others, and
by the situations in which we encounter others. Deciding to buy a certain kind of car
(or any other commodity) may be influenced by what your friends consider to be
an appropriate choice. In contrast to internalist views—for example, where simply
having a belief about A encompasses the motivation to A (e.g., Nagel 1970)—and
in contrast to many analyses of agency in philosophy of mind and action theory,
deliberations, intentions, and motivations to act are not simply mental states (prop-
ositional attitudes), or causal brain states—they are often co-constituted with oth-
ers. Phenomena such as peer pressure, social referencing, which may be implicit or
explicit, or our habitual behavior when in the presence of others—these phenom-
ena may detract from or increase one’s feeling of agency.
In this regard, there are extreme cases, like compulsive or addictive situations,
hysteria, or conversion disorder. In addictive behavior, for example, there is a loss
of the sense of agency for one’s actions—but this is not just the result of chemically
induced dependency. Compulsive drug-related behaviors correlate neither with the
degree of pleasure reported by users nor with reductions in withdrawal symptoms
as measured in placebo studies and the subjective reports of users. Robinson and
Berridge (1993, 2000) propose an “incentive-sensitization” model: pathological
addiction correlates highly with the salience of socially situated drug-related behav-
iors and stimuli. For example, specific situations (including the agent’s perception
of his social world) are altered and become increasingly hedonically significant to
the agent. Brain regions mediating incentive sensitization are inscribed within the
same areas that process action specification, motor control, and social cognition—
regions of the brain thought to code for intentional deliberation, social navigation,
and action (Allen 2009). This reinforces the idea that situational salience, includ-
ing perceptual salience of the social situation, contributes to intention formation
and the sense of agency—sometimes enhancing but also (as in extreme addictive
behavior) sometimes subverting SA. Intentions can be dynamically shaped in rela-
tion to how others are behaving, and by what is deemed acceptable behavior within
specific subcultures.
In the case of hysteria or conversion disorder, there is also a loss of the sense of
agency over bodily action. But, as Spence (2009, 276) states: “All hysterical phe-
nomena arise within social milieus.” The presence or absence of specific others
(sometimes the medical personnel) has an effect on the symptom, so that there is
symptomatic inconsistency from one social setting to another. Spence points to the
particular social milieu of Charcot’s practice in Paris, Freud’s practice in Vienna, and
the First World War battlefront—social arrangements that seemed to encourage the
development of hysterical symptoms. As he indicates, “There is clearly a need for
further work in this area” (Spence 2009, 280).
Let me conclude with one further example. In 2009 my daughter Laura volun-
teered with the Peace Corps in South Africa, focusing her efforts on HIV education.
She recounts that her attempts to motivate residents in a small village outside of
Pretoria to help themselves by engaging in particular activities were met by a certain
sardonic attitude and even polite laughter. They explained that they were unable to
help themselves simply because, as everyone knew, they were lazy. That’s “the way
they were,” they explained, and they knew this because all their life they had been
told so by various educational and governmental institutions, especially under the
apartheid regime. In effect, because of the contingencies of certain long-standing
social arrangements, with prolonged effects, they had no long-term sense of agency,
and this robbed them of possibilities for action.
It certainly seems possible that an individual could convince himself of his lazi-
ness, without the effects of external forces playing such a causal role. But it is dif-
ficult to conceive of what would motivate such a normative judgment, or even
that there could be such a normative judgment outside of a social environment.
Could there be a form of self-observation that would lead to a self-ascription of lazi-
ness that would not involve a comparison with what others do or do not do, or with
certain expectations set by others? It seems quite possible that some people, or social
arrangements, more than others may make me feel less in charge of my life, or more
empowered; and it seems quite possible that I can allow (or cannot prevent) others,
or some social arrangements, to make me feel more or less empowered. There are
certain ways of raising children, and certain ways of treating others that lead them to
feeling empowered, with a more expansive sense of agency than one finds in other
cases where it goes the other way. None of these possible adumbrations in an indi-
vidual’s sense of agency—from the Peace Corps volunteer who, at least at the begin-
ning, feels empowered enough to risk the effort, to the victim of apartheid, who in
the end has very little sense of agency—happen in social isolation.
If, in thinking about action and agency, we need to look at the most relevant prag-
matic level, that level is not the level of mental or brain states. We shouldn’t be look-
ing exclusively inside the head. Rather, embodied action happens in a world that is
physical and social and that often reflects perceptual and affective valences, and the
effects of forces and affordances that are both physical and social. Notions of agency
and intention, as well as autonomy and responsibility, are best conceived in terms
that include social effects. Intentions often get co-constituted in interactions with
others—indeed, some kinds of intentions may not be reducible to processes that
are contained exclusively within one individual. In such cases, the sense of agency
is a matter of degree—it can be enhanced or reduced by physical, social, economic,
and cultural factors—sometimes working through our own narrative practices, but
also by loss of motor control or disruptions in prereflective action-consciousness.

NOTES
1. This and the following section summarize some of the material discussed in Gallagher
(2010).
2. It is important to distinguish SA, as related to motor control processes, from what
Fabio Paglieri (this volume) calls the experience of freedom, which, he argues, has no
positive prereflective phenomenology. Paglieri distinguishes the question of an expe-
rience of freedom from other aspects that may be involved in SA, e.g., the experience
of action control, and leaves the phenomenological status of such aspects an open
question. This is consistent with my own view about the distinction between issues
pertaining to motor control (as in the Libet experiments) and anything like an experi-
ence of freedom, which I understand not to be reducible to motor control (Gallagher
2006). Paglieri nonetheless expresses a skepticism about the sense of agency and sug-
gests that “it rests on an invalid inference from subpersonal hypotheses to phenom-
enological conclusions” (this volume, p. 147). In fact, however, the inference validly
goes in the other direction. It starts from the phenomenological distinction between
SA and SO, originally worked out in the context of the schizophrenic delusions of
control, and then asks what the neurological underpinnings of SA might be (see, e.g.,
Farrer and Frith 2002; Tsakiris and Haggard 2005).
3. This may be part of “what it’s like” or the phenomenal feel of such cognitive pro-
cesses. Of course there is an ongoing debate about whether higher-order cogni-
tive activities such as evaluating or judging come with a phenomenal or qualitative
feel to them. There are three possibilities here. (1) Cognitive states simply have
no phenomenal feel to them. But if such states have no qualitative feel to them, it
shouldn’t feel like anything to make a judgment or solve a math problem, and we
would have to say that we do not experience such things, since on standard defini-
tions phenomenal consciousness is the experience (e.g., Block 1995, 230). Do the
phenomenology when you do the math, and this doesn’t seem correct; but let’s
allow it as a possibility. (2) Cognitive states do have a phenomenal feel to them, but
different cognitive states have no distinguishable phenomenal feels to them so that
deciding to make a will and solving a math problem feel the same. (3) Different
cognitive states do have distinguishable phenomenal feels to them—deciding to
make a will does feel different from solving a math problem. On this view, which
is the one I would defend (see Gallagher and Zahavi 2008, 49ff.), in forming our
intentions we sometimes find it easy and sometimes difficult, sometimes with
much uncertainty or much effort, and accordingly one process of intention forma-
tion might feel different from the other. In either case (2) or (3) there would be
room for SA as an experiential component. E.g., part of what it feels like for me to
solve a math problem is that I was the one who actually solved the problem. But
even if there were no phenomenal feel to such cognitive processes, it may still be
the case that having gone through the process, the result itself, e.g., that I have a
plan, or that my mind is made up, may have a certain feel that contributes to a
stronger experience of agency for the action in question. Acting on a prior plan,
e.g., feels different from acting spontaneously.

REFERENCES
Aarts, H., Custers, R., and Wegner, D. M. 2005. On the inference of personal author-
ship: Enhancing experienced agency by priming effect information. Consciousness and
Cognition 14: 439–458.
Allen, M. 2009. The body in action: Intention, action-consciousness, and compulsion.
MA thesis. University of Hertfordshire.
Block, N. 1995. On a confusion about a function of consciousness. Behavioral and Brain
Sciences 18: 227–247.
Bratman, M. E. 1987. Intention, Plans, and Practical Reason. Cambridge, MA: Harvard
University Press.
Chaminade, T., and Decety, J. 2002. Leader or follower? Involvement of the inferior pari-
etal lobule in agency. Neuroreport 13 (15): 1975–1978.
Desmurget, M., Reilly, K. T., Richard, N., Szathmari, A., Mottolese, C., and Sirigu, A.
2009. Movement intention after parietal cortex stimulation in humans. Science 324:
811–813.
Farrer, C., Franck, N., Georgieff, N., Frith, C. D., Decety, J., and Jeannerod, M. 2003.
Modulating the experience of agency: A positron emission tomography study.
NeuroImage 18: 324–333.
Farrer, C., and Frith, C. D. 2002. Experiencing oneself vs. another person as being the
cause of an action: The neural correlates of the experience of agency. NeuroImage 15:
596–603.
Frankfurt, H. G. 1988. The Importance of What We Care About: Philosophical Essays.
Cambridge: Cambridge University Press.
Gallagher, S. 2000a. Philosophical conceptions of the self: Implications for cognitive sci-
ence. Trends in Cognitive Sciences 4: 14–21.
Gallagher, S. 2000b. Self-reference and schizophrenia: A cognitive model of immunity
to error through misidentification. In D. Zahavi (ed.), Exploring the Self: Philosophical
and Psychopathological Perspectives on Self-experience (203–239). Amsterdam and
Philadelphia: John Benjamins.
Gallagher, S. 2005. How the Body Shapes the Mind. Oxford: Oxford University Press.
Gallagher, S. 2006. Where’s the action? Epiphenomenalism and the problem of free will.
In W. Banks, S. Pockett, and S. Gallagher (eds.), Does Consciousness Cause Behavior? An
Investigation of the Nature of Volition (109–124). Cambridge, MA: MIT Press.
Gallagher, S. 2007. The natural philosophy of agency. Philosophy Compass 2: 347–357.
Gallagher, S. 2010. Complexities in the sense of agency. New Ideas in Psychology. (http://
dx.doi.org/10.1016/j.newideapsych.2010.03.003). Online publication April 2010.
Gallagher, S., and Zahavi, D. 2008. The Phenomenological Mind. London: Routledge.
Graham, G., and Stephens, G. L. 1994. Mind and mine. In G. Graham and G. L. Stephens
(eds.), Philosophical Psychopathology (91–109). Cambridge, MA: MIT Press.
Grünbaum, T. 2009. Action and agency. In S. Gallagher and D. Schmicking (eds.),
Handbook of Phenomenology and Cognitive Science (337–354). Dordrecht: Springer.
Haggard, P. 2003. Conscious awareness of intention and of action. In J. Roessler and N.
Eilan (eds.), Agency and Self-Awareness (111–127). Oxford: Oxford University Press.
Jeannerod, M., and Pacherie, E. 2004. Agency, simulation, and self-identification. Mind
and Language 19: 113–146.
Lafargue, G., Paillard, J., Lamarre, Y., and Sirigu, A. 2003. Production and perception of
grip force without proprioception: Is there a sense of effort in deafferented subjects?
European Journal of Neuroscience 17: 2741–2749.
Libet, B. 1985. Unconscious cerebral initiative and the role of conscious will in voluntary
action. Behavioral and Brain Sciences 8: 529–566.
Marcel, A. 2003. The sense of agency: Awareness and ownership of action. In J. Roessler
and N. Eilan (eds.), Agency and Self-Awareness (48–93). Oxford: Oxford University Press.
Nagel, T. 1970. The Possibility of Altruism. Oxford: Clarendon Press.
Pacherie, E. 2006. Towards a dynamic theory of intentions. In S. Pockett, W. P. Banks, and
S. Gallagher (eds.), Does Consciousness Cause Behavior? An Investigation of the Nature of
Volition (145–167). Cambridge, MA: MIT Press.
Pacherie, E. 2007. The sense of control and the sense of agency. Psyche 13 (1) (www.theassc.org/files/assc/2667.pdf).
Robinson, T., and Berridge, K. 1993. The neural basis of drug craving: An
incentive-sensitization theory of addiction. Brain Research Reviews 18: 247–291.
Robinson, T., and Berridge, K. 2000. The psychology and neurobiology of addiction: An
incentive-sensitization view. Addiction 95 (8s2): 91–117.
Sartre, J.-P. 1969. Being and Nothingness: An Essay on Phenomenological Ontology. Trans. H.
E. Barnes. London: Routledge.
Searle, J. 1983. Intentionality: An Essay in the Philosophy of Mind. Cambridge: Cambridge
University Press.
Sebanz, N., Bekkering, H., and Knoblich, G. 2006. Joint action: Bodies and minds moving
together. Trends in Cognitive Sciences 10: 70–76.
Sebanz, N., Knoblich, G., and Prinz, W. 2003. Representing others’ actions: Just like one’s
own? Cognition 88: B11–B21.
Simon, J. R. 1969. Reactions towards the source of stimulation. Journal of Experimental
Psychology 81: 174–176.
Spence, S. 2009. The Actor’s Brain. Oxford: Oxford University Press.
Stephens, G. L., and Graham, G. 2000. When Self-Consciousness Breaks: Alien Voices and
Inserted Thoughts. Cambridge, MA: MIT Press.
Synofzik, M., Vosgerau, G., and Newen, A. 2008. Beyond the comparator model: A multi-
factorial two-step account of agency. Consciousness and Cognition 17: 219–239.
Takahama, S., Kumada, T., and Saiki, J. 2005. Perception of other’s action influences per-
formance in Simon task [Abstract]. Journal of Vision 5: 396, 396a.
Tsakiris, M. 2005. On agency and body-ownership. Paper presented at Expérience
Subjective Pré-Réflexive & Action (ESPRA) Conference, CREA, Paris. December.
Tsakiris, M., Bosbach, S., and Gallagher, S. 2007. On agency and body-ownership:
Phenomenological and neuroscientific reflections. Consciousness and Cognition 16:
645–660.
Tsakiris, M., and Haggard, P. 2003. Awareness of somatic events associated with a volun-
tary action. Experimental Brain Research 149: 439–446.
Tsakiris, M., and Haggard, P. 2005. Experimenting with the acting self. Cognitive
Neuropsychology 22: 387–407.
8

There’s Nothing Like Being Free


Default Dispositions, Judgments of Freedom,
and the Phenomenology of Coercion

FABIO PAGLIERI

1. INTRODUCTION
The experience of acting freely, as opposed to being coerced into action, seems to
be a prime candidate to characterize the phenomenology of free will, and possibly
a necessary ingredient for our sense of agency. Similarly, the fact of acting freely is
taken to provide a necessary condition for being an agent endowed with free will. In
this essay I keep separate the former phenomenological issue from the latter onto-
logical question. In particular, I do not challenge the idea that the fact of freedom,
once properly defined, contributes to determining whether a certain system is an
agent, and whether it is endowed with free will. But I argue that there is no proof
that free actions are phenomenologically marked by some specific “freedom attri-
bute” (i.e., a positive phenomenal content exclusively associated with free actions),
and thus this nonexistent entity cannot be invoked to justify our judgments on free
will and agency, be they correct or mistaken. This is not to say that we do not consult
our phenomenology to assess whether a certain action we performed was free or
coerced: rather, I suggest that (1) we consult our phenomenology by subtraction,
that is, only to check that there is no evidence of our actions being coerced, thus
(2) the resulting judgment of freedom is not based on us finding some phenomeno-
logical proof of freedom, but rather on lack of any experience of coercion. More
drastically, I maintain that (3) there is no “extra” phenomenological ingredient in
the experience of freedom, since that experience is distinguished from coercion by
having something less, not something more.
A terminological clarification is in order: in this essay, I will sometimes use the
word “feeling” as a shorthand for “distinctive firsthand experience endowed with
a positive content,” with no reference to the fact that such experience is bodily
felt or not, and no discussion of the issue whether feelings necessarily have a
bodily component (for extended discussion, see Damasio 1999; Colombetti and
Thompson 2007). In my usage of the word “feeling” here, being hit on the head with
a cudgel and praising yourself for your cleverness both associate with feelings in the
required sense; in contrast, not being hit on the head with a cudgel and not praising
yourself for your cleverness are not characterized by any distinctive feeling. Subjects
can report on both types of experience, but they do so in different ways: in the first
case, they describe, more or less accurately, what they did experience in certain cir-
cumstances, whereas in the second case they rather report a lack of experience, an
absence of feelings—in short, what they did not perceive, and so they cannot find in
their own phenomenology.
This absence is conceived here as absolute: whenever I speak of absence of a feel-
ing of freedom in this essay, I mean that there is never any positive content in the
subject’s phenomenology that specifically associates with acting freely (whereas of
course there are plenty of positive contents associated with acting per se), so that
judging one’s actions to be free requires only lacking any experience that they are
coerced. This is very different from saying that the feeling of freedom is still pres-
ent, albeit not normally accessed due to habituation or distraction—like not feeling
your bottom while sitting on it, because the sensation, though present and acces-
sible on occasion, is continuous and you are concentrating on something else. In
contrast, utter absence of a feeling of freedom could indicate either lack of any sub-
personal process that stably associates with the freedom of one’s action, or the fact
that such process is invariably inaccessible to awareness: I am inclined to favor the
latter solution, but both options are fully compatible with the account developed in
this essay on how we judge an action to be free, so the distinction is immaterial to
present purposes.
Although deliberately pitched at the phenomenological level, my line of reason-
ing is not devoid of ontological implications. In particular, it conveys two negative
or limiting results:

1. We should not immediately infer the presence of an experience of A from the
behavioral capacity to distinguish A from not-A.
2. We should not be too carefree in drawing ontological conclusions from
phenomenological arguments. David Velleman noted that “the experience
of freedom serves, in some philosophical theories, as a datum from
which conceptual consequences are derived. The conceptual problem of
freedom thus becomes intertwined with the phenomenological problem”
(1989/2000, 32). This, I suggest, is risky business.

2. THE CASE AGAINST THE “FREEDOM ATTRIBUTE”


The fact that we can easily and reliably judge whether or not our actions are free is
proof that the experience of being free is remarkably different from the experience
of not being free. The question is: How is it different? I claim that the experience of
free action is just the experience of action in the absence of signs of coercion—so
that there is something less in our experience of freedom, not something more, as
opposed to coercion. A corollary of this view is that there is no specific “freedom
attribute” in our experience of acting freely, just absence of indications of coercion.
Thus it would be misleading to ask, “How does it feel to act freely?,” since the ques-
tion presupposes that there is some “freedom attribute” in the experience of free
action.
Among those who tried to spell out this attribute, libertarians are the most
prominent, with good reasons: as Nahmias and colleagues have discussed (2004),
defending a rich phenomenology of free action is instrumental in putting pressure on
the compatibilist view, and vice versa. So let us discuss briefly three candidates as
“freedom attributes” for libertarians: for each of them, I will provide reasons why
it either does not work as a phenomenological hallmark of freedom or is just lack
of coercion in disguise.
Experiencing the possibility of doing otherwise: it is contended that the experience
of free action is characterized by the feeling that one could do otherwise. This feel-
ing is construed as categorical by libertarians (“I could do otherwise, even if every-
thing else were equal, including my internal states”; see, e.g., Lehrer 1960; Searle
1984) and as conditional by compatibilists (“I could do otherwise, if I had differ-
ent motives/considered different aspects of the matter”; Grünbaum 1971), and
empirical evidence suggests that the latter description is closer to people’s intuitions
(Nahmias et al. 2004). However, the relevant point here is that there is no prima
facie reason to think that such counterfactual disposition is occurrently experienced
while acting. On the contrary, based on its description, be it categorical or condi-
tional, it seems natural to regard it as a posteriori reconstruction—in which case,
it is just an intuitive way of spelling out the absence of coercion, consistently with
the view defended here. The existing empirical evidence does not allow establish-
ing whether reporting on the possibility of doing otherwise indicates an experience
occurring during free action or a judgment based on a posteriori reconstruction,
since introspective reports elicited via interviews are compatible with both scenarios.
I argue that the burden of proof is on those who claim that we do experience occur-
rently the presence of alternative courses of action (in the required sense), and that,
until such burden is discharged, the most plausible interpretation of the existing
phenomenological reports is in terms of a posteriori reconstruction—in which case
acknowledging the possibility of having done otherwise is just another way of say-
ing that the subject did not experience the constraints of coercion.
Experiencing oneself to be the cause of one’s actions: libertarians insist that, when
acting freely, subjects experience themselves as the direct and sole cause of their acts
(O’Connor 1995; Horgan et al. 2003); compatibilists opine that what is perceived
as causing actions are the internal states of the subject, and that the whole notion
of self is inferred rather than experienced (Dennett 1984). Whatever version one
endorses, experiencing oneself (or one’s internal states) as causing one’s action is
not distinctive of free action alone but rather characterizes agency more generally,
even under coercion: for instance, there is no reason to assume that a person being
forced at gunpoint to drive a car is not experiencing herself to be the cause of the
driving, or that her internal states (e.g., fearing repercussions from the kidnapper
and wanting to avoid them) are causing it. Yet, the person is certainly not experi-
encing freedom of action in the required sense, even though she experiences being
the cause of her actions. One might object that the subject here experiences the
assailant and his threatening behavior as being the “true cause” of driving the car,
instead of herself or her own internal dispositions. But what does it mean to experi-
ence something else as the “true cause” of one’s behavior? It means exactly that the
subject is experiencing coercion from some external force, which is something dif-
ferent from, but not incompatible with, experiencing oneself to be the cause of one’s
actions: the fact that the victim experiences her fear as being caused by the assailant
does not imply that she does not experience her actions to be caused by her fear—
being kidnapped should not be confused with delusions of alien control!
Experiencing a given decision as particularly hard and effortful to achieve: libertarians
often consider typical of free choice those situations where more than one option
is strongly attractive for the subject (“close-call” decisions), consistently with their
view that experiencing the possibility of doing otherwise is central to freedom of
action; this view tends to lead to restrictivism on free will, claiming that free will is
exercised only rarely, since it applies only to choices where we feel torn between two
or more options (van Inwagen 1989). As far as phenomenology is concerned, the
problem with restrictivism is that it is far too restrictive to do justice to our judg-
ments of freedom. We consider as free plenty of actions where no strong dilemma
is involved, and in which the effort of making a choice is completely absent. This
does not rule out the possibility that restrictivism may be correct at the metaphysi-
cal level, although I personally agree with Gordon Pettit (2002) that it is not. But
it is certainly of no use for identifying a distinctive "freedom attribute," since it fails to
capture our judgments of freedom except in a very few extreme cases, which are far
from typical. In fact, both phenomenological reports (Nahmias et al. 2004)
and experimental results (Wenke et al. 2010) indicate that the experience of freedom
is strongest when decisions are smooth and unproblematic, whereas it is weakest or
even absent when choosing between options that are difficult to discriminate.
An alternative way of looking for some specific “freedom attribute,” indepen-
dently from the libertarianism versus compatibilism debate, is to reduce the experi-
ence of freedom to the combination of experiencing authorship and control of one’s
actions. The problem with this strategy is that it does not identify anything specific to
the experience of freedom, since in many cases (such as the kidnapping example
discussed earlier) it is perfectly possible to experience oneself as the author of one’s
actions and (to some degree) in control of them, and yet perceive the action as not
being free. Even if it is true that in some specific cases an experience of coercion may
be triggered by a disruption of one’s sense of authorship or control or both (thought
insertion and alien control are obvious examples), this is not the only possibility and
certainly not the most typical. As a case in point, nonpathological subjects under
coercion retain an experience of authorship and control, and yet they do not judge
their actions as being free—and rightly so. So it would seem that an intact sense of
authorship and control is a necessary condition for experiencing freedom, but not
a sufficient one. In contrast, lack of an experience of coercion is both sufficient and
necessary to judge an action to be free. The fact that an experience of coercion can
be originated either by dysfunctions in one’s sense of agency or by correctly perceiv-
ing external pressures over one’s behavior is relevant and will play a role in the rest
of this chapter, but it does not change the basic tenet of this approach: we judge an
action or decision to be free because we lack phenomenological evidence to the
contrary, not because we have some special proof of its freedom. Hence the search
for some “freedom attribute” is a hopeless cause.

3. FROM JUDGMENTS OF FREEDOM TO EXPERIENCES OF FREEDOM


If we look at how people assess the freedom of their own actions on different occa-
sions, three points appear well established on empirical grounds (see Nahmias
et al. 2004):

1. Issuing judgments of freedom is both fast and easy, and usually conveys a high
degree of conviction: when asked whether a certain action was free, subjects answer
rapidly and with no hesitation, apparently with no need of any sophisticated intel-
lectual reconstruction. Let us call this first feature easy reportability.

2. These judgments are prone to error, under specific circumstances: coerced
action can be experienced as freely endorsed by the subject, as the literature on
automaticity has demonstrated by using subliminal stimuli to prime responses that
were nonetheless regarded as voluntary by the subjects (Bargh and Chartrand
1999; Bargh et al. 2001); conversely, free action can be experienced as coerced, as
it happens in delusions of alien control and misattributions of agency (Frith 1992;
Frith et al. 2000; Spence 2001; Gallagher 2004; Maes and van Gool 2008). Let us
call this second feature vulnerability to error.

3. The phenomenology associated with these judgments appears to be "thin" and
very elusive: if probed on the exact nature of their experience of freedom, subjects
tend to answer with some causal story about how they arrived at the conclusion of having
acted freely ("I chose among many options what I wanted to do"), rather than
with any direct phenomenological report ("I felt I was acting freely"). They seem
to be answering the question "Why do you judge your action to be free?" rather than
"What is it like to feel your action to be free?" Let us call this third feature phenomenological
vagueness.

There is a common way of accounting for all these features: easy reportability is
taken to indicate that there is such a thing as a feeling of freedom associated with
actions that we perform freely, and it is because of the phenomenological vague-
ness of such a feeling that systematic mistakes can occur in our judgments, thus
explaining vulnerability to error. The first part of this strategy, inferring presence of
experience from ease of report, is attributed by Nahmias and colleagues to the
majority of philosophers:

Theories of free will are more plausible when they capture our intuitions and
experiences than when they explain them away. Thus, philosophers generally
want their theories of free will to aptly describe the experiences we have when
we make choices and feel free and responsible for our actions. If a theory mis-
describes our experiences, it may be explaining the wrong phenomenon, and
if it suggests that our experiences are illusory, it takes on the burden of explain-
ing this illusion with an error theory. (2004, 162, my emphasis)

My first comment is that I agree that a theory of free will fares better when it
relates to our intuitions and experiences, with the proviso that such a relationship
need not be one of identity, and the additional admonition that “intuitions” and
“experiences” should by no means be treated as synonyms. Although our intuitions
are bound to reveal something interesting about our experiences, I will argue that
no one-to-one correspondence needs to be assumed. My second comment is that
Nahmias and colleagues here seem to presuppose that there is such a thing as feeling
free (“the experiences we have when we make choices and feel free”), presumably
on the grounds of reports produced by subjects asked to consider whether or not
their actions were free. In what follows, I suggest that this inference is not necessar-
ily valid, since there is at least one alternative explanation that fits our intuitions on
free action much better. So, before considering “What is it that we feel while acting
freely?,” we should take a step back and first ask, “Do we feel anything special while
acting freely?”—more precisely, “Is there any specific positive phenomenal content
characteristic only of freedom of action?” The previous section gave reasons to doubt
that this question can be answered positively; in what follows I endeavor to outline
a phenomenology of free action that dispenses with any “freedom attribute.”
Let it be noted in passing that it is an open question whether some of the follow-
ing considerations could be applied also to other phenomenological features pre-
sumably involved in the experience of agency, like the experience of choice (Holton
2006), the experience of effort (Baumeister et al. 1998; Bayne and Levy 2006), and
the experience of action control (Pacherie 2007). I will get back to this issue at the
end of this essay, albeit in a very cursory manner and mainly with reference to expe-
riences of authorship and action control. Until then, I will provisionally confine
the analysis to the experience of freedom, using it as a test bed for a default theory
of how we use phenomenological evidence (or lack thereof) to draw judgments
about the nature of our actions. One reason that caution is needed in confining this
analysis to freedom is because other aspects of what we experience during an action
are quite clearly endowed with positive phenomenal content (so it would be a lost
cause to doubt its existence), and yet it is not absence or presence of that content
that guides judgments on the freedom of that action (so it would be a red herring
to take its existence as relevant for present purposes). Let us consider again action
control: bodily actions that are under the subject’s direct control have well-defined
phenomenal properties, which contribute to distinguishing them from actions under
the direct guidance of some external agency. Nonetheless, judging one’s actions to
be free is not equivalent to judging them to be self-produced without external assis-
tance or even guidance: not only can one act under coercion while retaining
full agentive control of bodily movements (e.g., a hostage performing illegal activi-
ties while being held at gunpoint), but also, more crucially, one can act freely in the
absence of direct control over bodily movements (e.g., having one’s arm bent and
manipulated by a chiropractor as part of a self-imposed therapy).
Interestingly, the existence of some “freedom attribute” is sometimes taken for
granted even by those who are investigating other features in the phenomenology
of agency. Richard Holton, while arguing for a phenomenology of agency based
on the experience of choice, proposes a critique of libertarianism that assumes the
existence of a positive experience of freedom. On the libertarian account, an act of
free will is an uncaused cause, but then, Holton observes, “this is not to describe
an experience; it is hard to think what an experience of that would feel like”; more-
over, “the libertarian thesis is itself a bit of speculative philosophy,” and thus cannot
explain the universality and ease of our intuitions about which actions are free
and which are not (2006, 1–2). Holton's argument is two-pronged: on the one hand,
he says, it is hard to figure out what it would be like to be an “uncaused cause,” since
experiencing the negation of an existential condition seems a rather obscure and
perhaps impossible bit of phenomenology; on the other hand, even if such an expe-
rience were possible to have and to describe, it would require a rather complex inter-
nal structure, and this violates the easy reportability characteristic of our everyday
intuitions on freedom of action. Notice that both parts of the argument rest on the
assumption that there is such a thing as a “freedom attribute,” in order to show that
the libertarian notion of “uncaused cause” cannot properly describe it.
Contra Holton, I want to argue that this assumption is not justified, and so his cri-
tique of libertarianism on phenomenological grounds misses the target. Most emphat-
ically, I also want to note that this does not imply that libertarianism should therefore
be endorsed and determinism rejected, either in metaphysics or in our folk phenom-
enology. Although I do not discuss the issue here, I am happy to go on record as a
compatibilist on free will. But I also believe that there is a grain of phenomenological
salt in the libertarian notion of “uncaused cause,” regardless of its many metaphysical
errors. It is upon that grain of salt that I will start building my alternative explanation
of our (fallible) ability to discriminate between free and coerced actions.

4. DEFAULT OPTIONS: A BRIDGE BETWEEN EXPERIENCE AND JUDGMENT

If one denies that there is such a thing as a specific “freedom attribute” exclusively
associated with free actions, how can one account for the three main features of our
judgments on freedom of action—that is, easy reportability, vulnerability to error,
and phenomenological vagueness? Here invoking subpersonal mechanisms would
do no good: when subjects assess certain actions as free and others as coerced,
they are certainly expressing awareness of the distinction, and not just a behavioral
capacity. So it would seem that there is something in their personal experience that
allows them to draw such a distinction, regardless of the exact nature of the subpersonal
mechanisms underlying that experience.
This is where the libertarian scarecrow of an uncaused cause may still play a prof-
itable role: instead of looking at it as indicating the presence of an experience of a negative
condition, as Holton does, we should perhaps conceive of it as suggesting the absence of
any experience of a positive condition. In other words, the experience of being uncaused
is perhaps best understood as absence of the experience of being caused by someone or
something else. This would entail that the phenomenologically salient occurrences
are those in which the action is determined outside the agent’s volitional control,
whereas all the instances of free action would be discerned by contrast, that is, by
consulting one’s own phenomenology and failing to find any experience of external
causes prompting the action.
A profitable way of looking at this is in terms of default options: the suggestion
is that, as far as our intuitions are concerned, the default option is to consider our
actions free, with no need of any further "proof" or "evidence" from our own expe-
rience. In contrast, only a phenomenologically salient experience of being caused
by someone or something else can lift the default and force us to judge our actions
as not being free. This is tantamount to suggesting that there is a presumption of
freedom built into our judgments of agency: any action is considered free until proven
coerced. Let us call this the default theory of freedom, and let it be noted that defaults
here work as a bridge between experience and judgment, one that does not entail
any one-to-one correspondence between evaluations (to consider oneself free) and
phenomenological contents (to have a specific feeling of freedom).
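Although nothing in the argument depends on it, the asymmetry at the heart of the default theory can be caricatured in a few lines of code. The sketch below is purely illustrative and entirely mine (the function name, the salience threshold, and the way experiences are represented are arbitrary assumptions, not claims about cognitive implementation); its only point is that judging an action free requires no positive evidence, whereas judging it coerced requires a salient defeater.

```python
# Purely illustrative caricature of the default theory. The data format,
# the salience threshold, and all names are arbitrary assumptions.

def judge_action_free(experiences, salience_threshold=0.5):
    """Presume the action free; lift the default only if phenomenology
    supplies a salient experience of coercion."""
    for experience in experiences:
        if (experience.get("kind") == "coercion"
                and experience.get("salience", 0.0) > salience_threshold):
            return False  # default lifted: the action is judged coerced
    return True  # presumption of freedom: no positive "proof" is needed


# A mundane action with no salient coercion experience is judged free by default;
# being forced at gunpoint supplies the defeating experience.
print(judge_action_free([{"kind": "bodily_feedback", "salience": 0.3}]))  # True
print(judge_action_free([{"kind": "coercion", "salience": 0.9}]))         # False
```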
Before discussing the explanatory value of this alternative proposal in the next
section, I want to show that here the libertarian notion of uncaused cause works as
a useful pivot, not as a seed of evil. In particular, it does not commit my approach
to libertarianism and antideterminism. The point is that lacking the experience of
our free actions being caused does not make them any less caused. What we con-
sult to explicitly assess our freedom of action is, inevitably, our phenomenology,
not the actual causal processes that determine our final action. So the claim is that,
although all our actions are indeed caused, this fact emerges in our phenomenology
only when certain conditions obtain, and these conditions are those that associate
with actions that we perceive as being coerced and not free. This, in turn, raises the
issue of what these conditions are, and to what types of action they apply.
Notice that the default theory still makes sense even if these questions are not
answered: it remains underspecified but still viable in principle. This is why here I will
only outline an answer, leaving it to future work to provide additional details on this
aspect of the theory. The two most intuitive and widespread notions of freedom con-
cern (1) having multiple choice options and not being restricted to a single path in our
behavior, for example, “In buying my car, I was free to choose among a variety of mod-
els,” and (2) being independent in our choice from any external domination or direct
influence from someone or something else, for example, “I decided freely to marry
my wife.”1 So let us consider what kind of experience is associated with these different
aspects of freedom and with the corresponding forms of coercion, keeping in mind
that in most instances of coerced action we lack both types of freedom: this is the case
when someone is force-fed some foul-tasting medicine, or when a neurophysiologist
makes a patient’s arm twitch by applying a transcranial magnetic stimulation (TMS)
coil to his scalp, both situations where the subject has no other option and is under
the (possibly well-meaning) domination of someone else. Nevertheless, it is useful to
analytically disentangle the experience of being free to choose among many options
(or not) from the experience of being free from external influence (or not).
The first aspect of freedom seems necessarily connected with the experience of
choice, and there are two ways of thinking about the experience of choice: one is by
referring to those situations in which we are torn between incompatible desires of
similar value (e.g., deciding whether or not to take on a new job, which is better paid
but less secure than the current one), so that the act of choosing is characterized
by a very distinctive phenomenology; the second option is to consider choice as
a standard feature of all free actions (e.g., choosing what to do next), and then try
to describe the much more elusive phenomenology associated with such mundane
decisions.
The first strategy is often endorsed by restrictivists on free will (Campbell 1951;
van Inwagen 1989; Vander Laan 2001), to claim that genuine instances of free
action are much rarer than is usually supposed. A way of arguing for this thesis
is by defining freedom of action in a very restrictive way, and then observing that the
required conditions rarely obtain—the so-called inventory strategy (Pettit 2002).
Standard examples of restrictive definition include considering free only those
actions where the choice is among conflicting positive values, or when our overall
policy of behavior is in sharp conflict with temporary desires (van Inwagen 1989).
Reasons to doubt the wisdom of this approach were given in section 2, so I will not
discuss it any further here.
A different and more ambitious strategy to connect freedom and choice is proposed
by Richard Holton, in his attempt to show that believing our actions to be freely cho-
sen does not imply endorsing libertarian positions. In his view, it “is right to insist that
we have an experience of freedom; and surely right to insist that we would need very
good grounds before rejecting it as illusory. So we need to ask what the experience
is an experience of. My contention in this paper is that it is primarily an experience
of choice” (2006, 2). Much of Holton’s argument hinges on the distinction between
deciding that, that is, assessing the best option on the grounds of one’s beliefs and
desires, and deciding to, that is, committing oneself to a given course of action. Choice
is supposed to play its role in the transition between these two different mental pro-
cesses. However, if deciding to do something simply followed from having decided
that such an option is the best viable one, this would assign to choice a very poor role,
just an echo of a judgment on what is best. On the other hand, understanding choice
as a way of subverting one’s best judgment on what to do would seem to reduce it to a
liability, something we should beware of (Holton 2006, 7).
Holton considers both positions unsatisfactory and suggests as a third alternative
“that in very many cases we choose what to do without ever having made a judge-
ment about what would be best—we decide to without deciding that” (2006, 9).
We do not choose in spite of our best judgment; we choose in the absence of it.
To argue that this does not reduce choice to mere random picking, Holton quotes
the extensive literature (for a review, see Wilson 2002) on how we often appear
capable of making the right choice without having any insight into why it is the right
choice. Moreover, these “hunches” seem to be associated with some emotional
response that subjects are aware of, without being able to assess its cause (Bechara
et al. 1997), while patients with prefrontal cortical damage that inhibits emotional
responses are insensitive to these unconscious cues (Bechara et al. 1994). From this,
Holton concludes that “not only does the emotional response influence behaviour
before judgment is made; it appears that without the emotional response, judgment
is powerless” (2006, 11).
What is the import of Holton's analysis for the present discussion? If he had man-
aged to prove that there is a distinct experience of choice, and that this constitutes
the core of an experience of choosing freely among multiple options, then this would
undermine the default theory I want to defend. But I do not think that Holton pro-
vides us with a description of the phenomenology of choice (while I agree with him
that the act of choice is independent from judging what is best), and therefore his
account does not offer evidence of any positive experience of freedom—nor was
it intended to, by the way. The only phenomenologically salient facts mentioned
by Holton are the emotional responses experienced by subjects when they were
unaware of the factors affecting their decision.2 But it is an empirical issue whether
such nonconscious decisions are truly typical of human choice. Moreover, even if
we grant that they are indeed frequent, they cannot be coextensive with our judg-
ments of freedom, since we consider ourselves free also when (1) we consciously
decide to perform an action on the grounds of what we consider best, and when
(2) we lack any “hunch” about the right option.
So, on my reconstruction, Holton’s analysis does not support the claim that we
experience some specific "freedom attribute" whenever we are free to choose among
multiple options. But is there any argument now to support the complementary
claim—that is, that there is such a thing as experiencing lack of this particular aspect
of freedom? In all the cases, which are the vast majority, where lack of options is
paired with subjection to external pressures, I believe the latter (i.e., being
under the domination of some external force) is much more salient in our phenom-
enology than the former (i.e., having no other option), and thus I will
discuss these situations and their phenomenology in a moment. But even when lack
of alternatives is suffered without any direct influence from external forces, this pro-
duces very distinctive experiences. Unfortunately, the world is full of free indigents,
that is, people allowed in principle to do whatever they like but lacking the means
to exploit this abundance of alternatives: it is trivial to observe that the limitations
imposed upon their choices are acutely felt by them, whereas it is far from obvious
what “feelings,” if any, may characterize the freedom of choice enjoyed by a rich
bourgeois. When previously impoverished people rescue themselves from misery,
they still retain vivid memories and a rich phenomenology of their past experience,
often explicitly mentioning lack of options as one of the aspects of what they felt at
that time. Nothing of the sort seems to accompany the availability of options, not
even in the experience of the most enthusiastic libertarian. More precisely, lack of
options associates with a specific phenomenology of coercion, inasmuch as poor
people are (and experience themselves to be) coerced by their own misery, even in
the absence of any direct domination from other agentive forces such as a tyrant
or the state. In contrast, the possession of so-called freedom of choice is not felt in
any obvious way, but rather assessed as a matter of intellectual judgment, and often
even taken as a matter of course, at least by those people who never experienced its
absence.
If we now turn to the other aspect of freedom, that is, freedom from external
influences in deciding what to do, it is again easy to see that lack of it correlates with
an extremely rich and vivid phenomenology. Whenever we experience ourselves
as being under the spell of some external force, either while making a decision or
while performing a physical action, this produces a lasting impression in our phe-
nomenology. I am unlikely to fail to notice or to forget afterward that you forced me
to drive the car by pointing a gun at my head, or that you made me press the button
by applying a TMS coil to my scalp, or that it was my cramping stomach that had
me running for the bathroom. In all these cases, there certainly is a most definite
experience of not being in control.3
However, proving that coercion has a vivid phenomenology is not the same as
showing that the experience of being free from external influences has no specific
phenomenal correlate, some special “freedom attribute.” It could still be the case
that we do have such an experience, albeit a very “thin” one. Indeed, this idea of
“thinness” is recurrent in the discussion on the phenomenology of agency in gen-
eral (see, e.g., Gallagher 2000; Pacherie 2007), and it is to me remarkable that it
failed to engender stronger skepticism about the very notion of having a positive expe-
rience of agency. As I will discuss later on, it is much more plausible to assume that
our judgments of freedom rely on the rich phenomenology of coercion, rather than
postulating any “thin,” and thereby elusive, experience of freedom—and perhaps
something similar could be said of our judgments of authorship, control, and even
agency, as I will suggest in section 7. Be that as it may, I first want to discuss whether
what we know about the phenomenology of agency in general conflicts or agrees
with the claim that there is no such thing as an experience of acting without direc-
tion from external influences.
Attempts to account for a thin phenomenology of agency, or “minimal selfhood,”
usually rely on subpersonal mechanisms for action control and monitoring: promi-
nent defenders of this view include Shaun Gallagher (2000) and Elisabeth Pacherie
(2007; but see Bayne and Pacherie 2007 for a partially different approach), with
some important differences between their accounts that I will not discuss here. The
key idea is that our capacity to discriminate between self-produced and externally
triggered actions is based at the subpersonal level on matching representations in
the comparator system involved in action control (Wolpert et al. 1995; Wolpert
1997). In this vein, Gallagher suggests the following on how sense of agency might
emerge: “This comparator process anticipates the sensory feedback from movement
and underpins an online sense of self-agency that complements the ecological sense
of self-ownership based on actual sensory feedback. If the forward model fails, or
efference copy is not properly generated, sensory feedback may still produce a sense
of ownership (“I am moving”) but the sense of agency will be compromised (“I am
not causing the movement”), even if the actual movement matches the intended
movement” (2000, 16). Similar hypotheses have been used to interpret experimen-
tal findings on the factors affecting sense of action control (Linser and Goschke
2007), to discuss dissociations between sense of agency and sense of ownership
(Spence et al. 1997; Sato and Yasuda 2005), and to explain a rich variety of anom-
alies in the experience of agency (Blakemore et al. 2002; Blakemore 2003; Frith
2005).
The fact that we have a very good candidate (and possibly more than one)4 for the
subpersonal mechanism responsible for our experience of self-generated action may
seem at odds with the suggestion that judgments of action freedom are based only
on lacking any experience of being forced. A critic might suggest that the default
theory is especially problematic when it comes to experiencing freedom from exter-
nal forces: since there seems to be a positive experience of being the agent of your
own bodily movements (namely, the required match between goal state and efferent
copy), then what is the need of postulating default options and lack of contrary
evidence?
My answer to this objection is that it rests on an invalid inference from subper-
sonal hypotheses to phenomenological conclusions: even granting that sense of
agency correlates with the matching of goal state and efferent copy, there is no rea-
son to conclude that this produces any significant positive correlate in the subject’s
phenomenology. It could just as easily be the case that it is only when there is a mismatch
in the comparator system that the subject experiences something special. Indeed,
this seems more plausible in terms of adaptation—but more on this in the next sec-
tions. For the time being, I just want to stress that what we know of the subpersonal
mechanisms responsible for action control and attribution of agency does not give
us any evidence for or against the existence of a positive experience of freedom.
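The gap between the subpersonal story and the phenomenological conclusion can be made vivid with a toy contrast. Again, this is a sketch of my own devising rather than a model drawn from the comparator literature, and the function names, numbers, and threshold are arbitrary assumptions: one and the same prediction-feedback comparison is compatible with a readout that announces every successful match and with a readout that stays silent unless something goes wrong.

```python
# Toy contrast, not a model from the comparator literature: one and the same
# mismatch computation supports either phenomenological reading.

def comparator_error(predicted_feedback, actual_feedback):
    """Subpersonal step: compare predicted and actual sensory feedback."""
    return abs(predicted_feedback - actual_feedback)

def positive_readout(error, threshold=0.1):
    """Reading assumed by 'freedom attribute' views: a salient signal on every match."""
    if error <= threshold:
        return "salient experience of self-generated action"
    return "salient experience of external causation"

def default_readout(error, threshold=0.1):
    """Reading suggested by the default theory: silence unless there is a mismatch."""
    return None if error <= threshold else "salient experience of external causation"

# For a well-matched movement the first readout posits a positive feeling,
# while the second posits nothing to report, which fits the "thin" phenomenology.
error = comparator_error(predicted_feedback=1.0, actual_feedback=1.02)
print(positive_readout(error))  # salient experience of self-generated action
print(default_readout(error))   # None
```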
To sum up, I argue that the phenomenology of lack of various aspects of freedom
(freedom to choose among many options and freedom as independence from exter-
nal forces) is remarkably well defined, whereas there is no evidence of any positive
phenomenal content, thin or otherwise, being associated with the presence of freedom.
This suggests that the default theory I am defending is our best bet, at least as far as
nonproblematic cases are concerned. “Nonproblematic” here means that our judg-
ments of freedom are correct in these cases: we think we are freely acting when this
is indeed the case, and regard as coerced those actions in which we are in fact being
forced to act by someone or something. However, our judgments of freedom are
prone to errors, and these mistakes, whether systematic or abnormal, also need to
be grounded in our phenomenology. The next section tries to show how the default
theory can accommodate the possibility of error.

5. TESTING THE DEFAULT THEORY: EXPLANATORY POWER


Our judgments of freedom are vulnerable to error in two main respects: sometimes
we consider our actions to be free when they are not, for example, when a certain
behavior, which we experience as spontaneous and unconstrained, is indeed trig-
gered and shaped via subliminal priming (Bargh and Chartrand 1999; Bargh et al.
2001); at other times, we may judge as being externally controlled some actions
that are in fact produced by our own volition, as it happens in delusions of alien
control and misattributions of agency (Frith 1992; Frith et al. 2000; Spence 2001;
Gallagher 2004; Maes and van Gool 2008). I argue that these kinds of errors are
easily explained by the default theory: not so if we assume there is such a thing as a
“freedom attribute,” some positive experience of acting freely.
Judging your actions to be free when they aren’t is explained by the default theory
as failing to experience the external cause that prompted the action—and, indeed, this
is precisely what happens in automaticity studies based on priming procedures. Via
subliminal priming, the action is induced entirely outside the subject's phe-
nomenal awareness of any external influence. Conversely, judging your actions to
be coerced when they aren’t is explained as presence of a (mistaken) experience of
external causation. This is highly consistent with the reports of subjects affected by
delusions of alien control. Notice that in these cases the phenomenology is not elu-
sive or “thin” at all: deluded subjects have very clear and vivid experiences to report
about their actions being externally controlled (see Maes and van Gool 2008 for
some excellent examples), in contrast with attempts by normal subjects to report
on their own (correct) experience of acting freely. On the default theory, the vivid-
ness of the delusional phenomenology is not surprising: it is due to the fact that
these subjects are having a positive experience of being coerced (possibly due to
neurological disorders: Frith et al. 2000; Spence 2001), even if this experience is
mistaken. According to the default theory, the same does not happen to normal
subjects acting freely.
Whereas the default theory makes perfect sense of dissociations in our judg-
ments of freedom, these systematic errors pose a hard challenge for any theory
that takes freedom to be associated with a positive phenomenology. If that were the
case, then one should explain: (1) how a subliminal prime can produce the posi-
tive (mistaken) experience of acting freely, as happens in automaticity studies; and
(2) why this positive experience of freedom fails to arise in delusions of alien con-
trol, and yet deluded subjects derive a richer phenomenology from its absence than
normal subjects do from its presence. Both types of dissociation are thus at odds
with the idea that there is such a thing as a feeling of freedom. Indeed, with refer-
ence to automaticity effects, one could venture a mild analogy5 with the relevance
of change-blindness effects for antirepresentationalist theories of vision (O’Regan
and Noë 2001): as change-blindness suggests that we do not need a rich representa-
tion of what we see in order to see it, so automaticity effects suggest that we do not
need a positive phenomenology of freedom in order to judge whether or not our
actions are free. If judgments of freedom were dependent upon a positive experi-
ence of freedom, we could not fail to notice its absence in all those cases where our
actions are induced and/or shaped via subliminal priming. But we do not notice any
absence in these cases, so it is questionable whether this alleged “sense of freedom”
truly exists.6
Notice also that automaticity effects apply to any normal subject and affect a vari-
ety of cognitive tasks—that is, they are universal in application and general in scope.
In contrast, delusions of alien control and misattributions of agency are rare and
usually associate with specific neurological pathologies; even though similar disso-
ciations can be induced in normal subjects, doing so requires hypnosis (Blakemore
et al. 2003; Haggard et al. 2004), which is a far more elaborate procedure than sub-
liminal priming. This suggests that preventing awareness of a real act of coercion
(e.g., via subliminal priming) is much easier than conjuring a false experience of
being coerced (e.g., via hypnosis). On the default theory, this makes perfect sense:
in the first case, one just has to manipulate attention in such a way as to ensure that
the subject does not become aware of an external force having a direct causal role in
the action; in the second case, one should create within the subject’s awareness an
experience of being coerced in the absence of any external constraint on the behav-
ior. The latter alteration is clearly greater than the former. But if we assume there is
such a thing as a positive experience of freedom, then we should expect things to be
the other way around: illusions of freedom via subliminal priming should be very
hard to achieve, whereas delusions of being controlled by external forces should be
relatively easy to induce. This is in stark contrast with empirical evidence on typical
errors in our judgments of freedom.
In this context, it is also interesting to briefly discuss Libet’s paradigm for tim-
ing the onset of volition in voluntary action (Libet et al. 1983), later on revived,
amended, and expanded by Haggard and colleagues in their work on intentional
binding (Haggard et al. 2002; Haggard 2005). If one looks at the experimen-
tal protocols most frequently used in these settings, the instructions given to the
experimental subjects carry some strong pragmatic implications, concerning both
the actions that the subjects are supposed to perform and the phenomenology they
should attend to and report on. As for the actions, it is perhaps debatable what
kind of intentionality is being measured here. Subjects are instructed to perform
a certain basic movement (with or without sensory consequences) “at their will,”
and to pay attention to the timing of either their intention to move (W judgments),
the onset of movement (M judgments), or its sensory consequence (S judgments).
These instructions carry the strong presumption that subjects are indeed expected
to perform the required movement, sooner or later within the duration of the exper-
iment. It is akin to soldiers being ordered to “fire at will”: not firing at all is clearly
not a socially acceptable option in this context. Similarly, subjects in these experi-
ments are free to decide when to do something, but they do not get to decide what to
do, or whether to do it or not. This may limit the validity of the data thus gathered.7
However, my concern here is with the phenomenological implications of these
instructions: investigating W judgments, Libet and colleagues asked people to
report when they first feel the “urge” to make the required movement; similarly,
Haggard and colleagues explained “intention” to experimental subjects as their first
awareness of being about to move. Although the latter formulation is clearly more
neutral, both sets of instructions strongly imply that subjects should have some-
thing positive to consult in their phenomenology (a feeling, an urge, or at least an
awareness) in order to report their intentions. In other words, these experimental
settings implicitly endorse the idea that there is such a thing as a positive feeling of
acting freely, that is, of one’s own volition.
If, on the contrary, the default theory is correct, then similar instructions should
appear quite outlandish to the experimental subjects, since they are basically asked
to report on a phenomenology that they lack, under the presumption that they have
it. This would nicely explain the fact that conscious intention is reported to occur
significantly later than the onset of the readiness potential systematically correlated
with voluntary movement (Libet 1985): the delay would depend on the fact that
people are asked to monitor the occurrence of something (some phenomenological
outburst of free volition) that typically does not occur, since our actions are consid-
ered by default as being freely willed, and not because we experience some “freedom
epiphany.” Faced with such a bizarre task, subjects have to reconstruct intentionality
in an indirect way, which is the only one available to their phenomenology—and
which amounts to turning a default assumption into an inference from ignorance.
They (1) become aware of being about to move when the movement has already
been initiated (and that is why the readiness potential starts before this aware-
ness, and needs to), (2) do not experience any external cause for their action, and
thus (3) assume by default that they are freely intending to act.
This is a reconstructive process, but notice that here the reconstruction occurs on
the phenomenology, not on the behavior. This explains why W judgments are not
sensitive to intentional binding effects, without ruling out reconstructive inference
as the source of those judgments. In contrast, for Haggard and Cole the lack of bind-
ing effects on W judgments suggests that “conscious intention is a genuine percept,
rather than a reconstructive inference” (2007, 218). The target of this observation
is the strong reconstructionist claim that there is no awareness of the premotor
processes prior to action itself (Dennett and Kinsbourne 1992), and I agree with
Haggard and Cole that this form of reconstructionism is mistaken, as far as aware-
ness of intentions is concerned. But this does not rule out the possibility that we
apply reconstructive inference on our phenomenology, before action occurs (see
points 1–3 above). Since intentional binding concerns anticipation of the effects of
action, the fact that it does not affect W judgments has no bearing on whether these
judgments involve reconstructive inference on our premotor phenomenology (or
lack thereof). The default theory suggests that, in these artificial experimental set-
tings, there is reconstructive inference going on at the phenomenological level prior
to action, and that this is responsible for the delay between the onset of readiness
potential and awareness of having the intention to act.
It is worth emphasizing that this does not entail that our experience of freedom
is illusory (as Wegner [2002] claimed for the experience of agency), since I agree
with Nahmias that “an experience is illusory if the way it represents things is not
the way things actually are. More precisely, a person’s experience of X is illusory if
the content of the experience includes that X has various features, but X does not
in fact have those features. So, to show that an experience of X is illusory requires
(1) showing that the content of the experience includes that X has certain features
and (2) showing that X does not in fact have those features” (2005, 775). The default
theory does not imply in any way that our experience of freedom is illusory in the
sense of us experiencing actions as free when they are not: it just emphasizes that
the way we experience them to be free is not by having any special phenomeno-
logical epiphany about their freedom, but by failing to experience them as being
coerced. This does not make our experiences of freedom any more (or any less)
illusory, with respect to their accuracy.
To sum up, the default theory offers a natural explanation of the kinds of error
to which our judgments of freedom are vulnerable, whereas the assumption that
freedom is characterized by a positive phenomenology is at odds with such dis-
sociations. Moreover, the default theory suggests an alternative interpretation
of the delayed awareness of intentions in Libet-style experiments, one that does
not involve any “illusion of conscious will”: instead, the delay is explained by the
reconstructive way in which subjects are forced to consult their phenomenology,
verifying absence of experiences of coercion to conclude that they are acting freely.
These results suggest that the default theory has greater explanatory power than the
doctrine of a positive experience of freedom, when it comes to understanding the
phenomenology of free agency.

6. TESTING THE DEFAULT THEORY: EVOLUTIONARY PLAUSIBILITY


If we now consider alternative descriptions of the phenomenology of freedom in
light of their evolutionary plausibility, the default theory appears superior in two
ways: first, it is evidently the most economical solution to the problem of discrimi-
nating between free and coerced actions; second, its competitor, that is, the idea
that there is some specific feeling of freedom (see section 2 for discussion), is so
spectacularly antieconomical as to be hardly justifiable in evolutionary terms. Both
points can be clarified by analogy.
Let us first assume that identifying instances of coercion is relevant to the
fitness of the agent. This seems plausible enough: acting under coercion implies
relinquishing control over one’s own conduct, and even though coercion can be
sometimes benevolent (e.g., a mother forcing her son to study), there is no guaran-
tee that it will be—on the contrary, most of the time we want to coerce other people
so that they will act for our benefit rather than their own. Hence the ability to recog-
nize coercion is certainly useful for the subjects, to make sure that they are optimiz-
ing their own fitness, and not the fitness of someone else. Detecting coercion is the
first step to prevent it, or to stop it before it jeopardizes the subject’s performance.
This is similar to the reason that we want to have an alarm system in our house,
one that signals unwanted intrusions to either us or the police (even though some
intrusions could be benevolent, e.g., Santa Claus coming down the chimney to bring
us Christmas presents). Again, detecting intrusion is the first step to prevent it, or to
stop it before any serious harm is done. Here it is worth noticing that all alarm sys-
tems in the world, in spite of their variety in terms of technologies and functioning,
have something in common: they are set to go off when someone is breaking and
entering the house, while remaining silent in all other instances—not the other way
around. And if we now imagine an inverse alarm, that is, an alarm that keeps ringing
when there are no malevolent intruders in the house, and silences itself only when
someone is breaking and entering, we immediately perceive how antieconomical
this would be.
The same applies to the phenomenology of freedom: if its evolutionary value is
to allow detection of coercion, then it makes sense that the relevant signal is set to
attract my attention only when someone or something is coercing me, rather than
being constantly on when all is well and I am acting of my own accord. Just as an
alarm ringing most of the time would be incredibly annoying, so would a positive
phenomenal content specifically associated with all our free actions. Acting freely
is both the typical occurrence and the nonproblematic one: we need not be alerted
to the fact that our actions are free, and would not want to be constantly reminded
of it—it would be like having an alarm constantly ringing in your head while every-
thing is just as it should be. We are interested in freedom of action only when we risk
losing it, and that is when we need a rich phenomenology to signal that something
is amiss.
Notice that both regular alarms and inverse alarms function equally well as sig-
naling devices, at least in principle: they both discriminate between intrusion and
its absence. But regular alarms outperform inverse alarms in two respects: they are
much more economical, and they are likely to be more effective, given some features
of our perceptual system. As far as economy is concerned, the problem with inverse
alarms is evident: since lack of intrusion is the norm, it consumes far more resources
to have the alarm on in this condition than the other way around, not to mention
the costs of constantly annoying the owners of the house and their neighbors. The
same basic rule applies to our phenomenology: to generate, maintain, and attend to a
vivid phenomenology of freedom certainly demands more cognitive resources than
limiting “phenomenological outbursts” to instances of coercion, insofar as freedom
of action is the norm and coercion the exception.
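A back-of-the-envelope calculation makes the economy point concrete. The base rate of coercion and the unit signaling cost below are arbitrary assumptions of mine, chosen only to make the asymmetry obvious:

```python
# Back-of-the-envelope illustration of the economy argument. The base rate of
# coercion (1%) and the unit signaling cost are arbitrary assumptions.

coercion_rate = 0.01        # assumed fraction of actions that are coerced
cost_per_signal = 1.0       # assumed cost of keeping a salient signal on for one action

regular_alarm_cost = coercion_rate * cost_per_signal        # signal only under coercion
inverse_alarm_cost = (1 - coercion_rate) * cost_per_signal  # signal whenever action is free

print(f"regular alarm: {regular_alarm_cost:.2f} cost units per action")  # 0.01
print(f"inverse alarm: {inverse_alarm_cost:.2f} cost units per action")  # 0.99
```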
Effectiveness is also an issue for inverse alarms, due to habituation effects: our
perceptual system attenuates the salience of stimuli that are constantly present in
the environment, and the same mechanism would work on the continuous sig-
nal of the inverse alarm. As a consequence, when the signal is silenced to notify
unwanted intrusion, the perceived discrepancy would be lower than the actual dif-
ference between signal and no signal, because in the meantime the alarm ringing
would have become a sort of background noise for the subject. Habituation, by contrast,
does not influence regular alarms, so the salience of the signal associated with intru-
sion would be greater in that case. Again, a similar consideration applies to our phe-
nomenology: it is more effective as a signal, that is, more easily noticed, to have an
anomalous experience when coercion occurs than to stop having a normal
experience when freedom is in jeopardy.
The point of the analogy should be clear by now: just as it makes much more
sense to have a regular alarm in your house than an inverse one, so it
is better for you to have a vivid phenomenology of coercion than any
alleged "freedom attribute."8 More precisely, the analogy suggests reasons why
natural selection would not favor the evolution of a vivid experience of freedom
to detect instances of coercion: being certainly less economical and potentially
less effective than its rival mechanism, that is, vivid experience of coercion, it
is unlikely that individuals endowed with such a trait would fare better than
competitors endowed with the rival mechanism. Moreover, we know that such
a “rival mechanism” is indeed operative within us: we do have a vivid phenom-
enology of coercion, whereas the experience of freedom, even if there were one,
would be “thin” and elusive. If so, what would be the evolutionary point of hav-
ing a positive experience of freedom at all, since the task of detecting coercion
is already efficiently performed otherwise? This argument, like any evolution-
ary argument, is not meant to be conclusive, but it adds presumptive weight to
the case for a default theory of freedom, especially when piled upon the other
evidence discussed so far.
Admittedly, the evolutionary argument rests on the assumption that the way we expe-
rience freedom functions to discriminate between coercion and free action.
Three reasons jointly justify this assumption. First, as discussed in sections 2
and 4, the most typical descriptions of the experience of freedom include the pos-
sibility of doing otherwise, the presence of multiple options, and the absence of
external influences shaping one’s conduct, and they all emphasize that free action
is markedly different from situations where the individual is either coerced by
someone or forced by circumstances to follow a rigid path. Second, other aspects
in the experience of free agency are either characteristic of agency in general, free
or otherwise (e.g., authorship and action control), or they refer only to subclasses
of free actions and are far from being typical of freedom in general (e.g., feeling
torn between various options that are equally attractive): as such, they are unlikely
to provide guidance on the specific function of the experience of freedom.
Third, it is perfectly reasonable to presume that detecting instances of coercion,
as opposed to freedom of action, is adaptive, and that it is from attending to our
phenomenology that we realize when we are being coerced or forced: given that,
what else should the experience of freedom serve to tell us, if not that we are free
from coercion?

7. CONCLUSIONS: TOWARDS A DEFAULT PHENOMENOLOGY OF AGENCY?

Several lines of argument converge in supporting a default theory of our judgments
of freedom, and in undermining the idea that there is such a thing as a “feeling of
freedom.” For one thing, it is a fact that experiences of coercion, both mundane
(being force-fed) and bizarre (being controlled by aliens), are extremely vivid and
phenomenologically rich, whereas phenomenological reports on the experience of
freedom tend to be vague, underspecified, and phrased as ex post reconstructions
of reasons for thinking to have been free, rather than firsthand recollections of any
direct experience of freedom. This provides prima facie evidence for the thesis that
the salient experience, upon which we assess whether or not we act freely, is coer-
cion and not freedom.
This hypothesis allows us to develop a phenomenological account9 that does
justice to the main features of our judgments of freedom: easy reportability, vul-
nerability to error, and phenomenological vagueness. So the default theory is
consistent with the broader agenda of experimental philosophy (Nichols 2004;
Nadelhoffer and Nahmias 2007), according to which our intuitions about folk
conceptions on philosophically relevant topics should also be tested with the
methods of social science, and not just using a priori analysis. In particular, the
default theory explains dissociations and errors in our judgments of freedom bet-
ter than theories based on the presence of a positive phenomenal content spe-
cifically associated with freedom. This is true not only with respect to our “folk
intuitions” on free action but also for the fine-grained cognitive mechanisms that
are held responsible for these intuitions and their occasional anomalies, both
mundane and pathological.
Finally, dispensing with the myth of a “freedom attribute” and embracing the
default theory makes better sense in evolutionary terms, being both more economi-
cal and more effective. Given that conscious discrimination between freedom and
coercion is a valuable asset for the agent’s fitness, vivid awareness of instances of
coercion is the best way to realize that function, and human subjects are certainly
endowed with such a capacity, since we do have a rich phenomenology of coercion.
Accordingly, it is hard to see what the adaptive value of any alleged "feeling
of freedom" would be, and thus why we should posit any "thin" phenomenology for it
rather than simply accepting the default theory.
These converging lines of evidence suggest that a default theory is our best bet
to describe how judgments of freedom are derived from our phenomenology, and
that we should instead reject the idea that free action specifically associates with any
positive experience of acting freely or “feeling of freedom.” It is now interesting to
briefly consider, by way of suggestion for future work, whether the same strategy
might apply to other agentive judgments and their underlying phenomenology:
authorship and the sense of being in control of one’s actions are prime candidates
for such exercise. Both experiences refer to situations that are most definitely “busi-
ness as usual” for well-functioning individuals (we are the author of our actions and
in control of them, most of the time, and we do experience ourselves as such, unless
something is amiss with our brain), and they are both part of the alleged “thin” phe-
nomenology of agency. So it is tempting to try the default theory also in these cases,
suggesting that judgments of authorship and control are on average reliable and easy
to establish because we take it as a matter of course that our actions are authored and
controlled by us, and we think otherwise only if something blatant in our experi-
ence (e.g., the weird experience of thought insertion or alien hand syndrome) indi-
cates that we are not authoring or controlling our actions. Even more broadly, one
could take on the sense of agency in general, and propose that, since we are typically
the agents of our actions, judgments of agency are delivered by default: we assume
that we are the agents of our actions, unless there is phenomenological evidence to the
contrary. This could be seen as consistent with Wolfgang Prinz’s (2006) insistence
that our capacity to discriminate between self and other need not be grounded
in some primitive sense of self: what we explicitly represent (and need to)
is the environment and not ourselves, so self-related information is just material for the
processing machinery that delivers an accurate representation of the environment.
As a result, it makes sense to maintain that we are only aware of ourselves acting
when something goes wrong, whereas normally our awareness is focused on the
external world.10
As I said, it is tempting to widen the scope of the default strategy to other fea-
tures of our sense of agency, and I believe there would be much to learn in doing so.
However, the intellectual firepower needed to carry out this larger project definitely
exceeds the ambitions of the present contribution. Nonetheless, I hope to have clar-
ified by example what would be needed to defend the default theory with respect
to other aspects of the experience of agency: (1) evidence that the phenomenol-
ogy specifically associated with such aspects is thin and recessive, and tends to be
reported in ways that sound suspiciously close to ex post reconstructions, if not out-
right fabrications; (2) evidence that the lack of such aspects, in contrast, associates
with a clear and rich phenomenology; (3) arguments to the effect that presence of
such aspects is typical and nonproblematic, whereas their absence is the exception
that needs to be promptly detected, for the individuals to perform adequately; and,
finally, (4) evidence that the known pathologies and distortions in those aspects of
agency are adequately accounted for by the default theory.
Building an argument against theories of the sense of agency (e.g., Frith et al.
2000; Gallagher 2000, 2004; Haggard et al. 2002; Bayne and Levy 2006; Bayne
and Pacherie 2007; Pacherie 2007) that satisfies these constraints is no mean feat,
although the results of the present discussion may give some reasons for hope. The
role of default mechanisms as a filter between judgment and experience has not yet
been fully appreciated, but their rationale is both sound and general: as mentioned
at the outset, we should not immediately infer the presence of a phenomenological cor-
relate of A from the behavioral capacity to distinguish A from not-A. This applies
to freedom, and perhaps beyond it.

ACKNOWLEDGMENTS
I am grateful to Cristiano Castelfranchi, Richard Holton, Dan Hutto, Andy Clark,
Markus Schlosser, Julian Kiverstein, Tillmann Vierkant, and two anonymous review-
ers for providing comments and criticisms on previous versions of this chapter.
This work, as part of the European Science Foundation EUROCORES Programme
“Consciousness in a Natural and Cultural Context” (CNCC), was developed within
the collaborative research project “Consciousness in Interaction: The Role of the
Natural and Social Environment in Shaping Consciousness” (CONTACT), and
was supported by funds from the CNR, Consiglio Nazionale delle Ricerche, and
the EC Sixth Framework Programme.

NOTES
1. This is somewhat reminiscent of Philip Pettit’s (2003) distinction between
option-freedom and agency-freedom, although he brings up the distinction with the aim
of discussing and possibly reconciling different theories of social freedom, while here
the emphasis is on how the subject’s phenomenology is used to justify his or her
judgments of freedom—a very different issue.
2. Holton would probably contend that awareness of making a choice also is a salient
phenomenological fact in these cases, but this is precisely what needs to be ascer-
tained in referring to such instances as a litmus test for a phenomenology of choice.
So the presence of that phenomenology cannot be taken for granted without begging
the question. My point is precisely that Holton’s analysis of appropriate decisions
based on hunches does not provide any support to the claim that there is such a thing
as experiencing choice, in the sense of having a positive phenomenal content concur-
rent with, and distinctive of, making free choices.
3. The fact that lack of freedom from domination comes in many shapes and degrees
does not alter its vivid phenomenology. It is true that the subject could choose death
by shooting over coercion in the first example, as well as preferring public embarrass-
ment to running for the bathroom in the third, whereas not pressing the button is not
even an option in the TMS coil scenario. Nevertheless, in all three cases some con-
straint interferes with the agent’s freedom, and it is this impairing of agentive control
that registers so deeply in our phenomenology.
4. For an alternative view, see Stephens and Graham (2000); Roser and Gazzaniga
(2004); and Carruthers (2007). For a critical review, see Bayne and Pacherie (2007).
5. I am thankful to Dan Hutto for drawing my attention to this analogy.
6. A stalwart critic of the default theory could object that this only proves that experi-
encing coercion is much more vivid than experiencing freedom, but it does not rule
out the possibility that the latter still exists, albeit only as a “thin” experience. Let us
call this the “Thin Argument”—pun intended. There are two reasons that the Thin
Argument does not work here: first, it violates Occam’s razor, inasmuch as it postu-
lates an entity, to wit, the thin experience of freedom, that does not add anything to
the explanatory power of the theory; second, it has no grip on other empirical facts
in favor of the default theory, e.g., automaticity effects being much easier to induce
than misattributions of agency—a fact consistent with the default theory, but very
hard to reconcile with the existence of a positive experience of freedom (see next
paragraph).
7. Chris Frith made a similar point on the social implications of these experimental
designs at the workshop “Subjectivity, Intersubjectivity, and Self-Representation,”
Borupgaard, Denmark, May 9–12, 2007.
8. A similar point could be made for sensory attenuation of self-produced stimuli
(Weiskrantz et al. 1971; Blakemore et al. 1999; Blakemore et al. 2000; Frith et
al. 2000; Blakemore 2003; Shergill et al. 2003; Frith 2005; Bays et al. 2006). It is
a well-established fact that sensory stimulation becomes less salient when it is
self-produced, and this is interpreted as a way of discriminating between what is caused
by our own movements and what is due to changes in the outside world. Significantly,
such a distinction is achieved by marking as salient those instances where stimuli are
not self-produced, rather than the other way around. This is a well-documented case
of natural selection favoring, in our phenomenology, a regular alarm over an inverse
one. Similarly, to discriminate between actions that are freely undertaken and those
that are coerced, it makes sense to emphasize the phenomenology of the latter, rather
than the former.
9. “Phenomenological account” here only means an account that assigns a key role to
phenomenology in arriving at judgments of freedom. Obviously, it does not imply
that such judgments are indicative of any phenomenal content associated with acting
freely: on the contrary, the default theory maintains that judgments of freedom are
justified by lack of experiences of coercion within the agent’s phenomenology.
10. I am thankful to Tillmann Vierkant for drawing my attention to this connection with
Prinz’s work.

REFERENCES
Bargh, J., Chartrand, T. (1999). “The unbearable automaticity of being.” American
Psychologist 54, 462–479.
Bargh, J., Gollwitzer, P., Lee-Chai, A., Barndollar, K., Troetschel, R. (2001). “The auto-
mated will: Nonconscious activation and pursuit of behavioral goals.” Journal of
Personality and Social Psychology 81, 1014–1027.
Baumeister, R., Bratslavsky, E., Muraven, M., Tice, D. (1998). “Ego-depletion: Is the active
self a limited resource?” Journal of Personality and Social Psychology 74, 1252–1265.
Bayne, T., Levy, N. (2006). “The feeling of doing: Deconstructing the phenomenology
of agency.” In N. Sebanz, W. Prinz (eds.), Disorders of volition, 49–68. Cambridge, MA:
MIT Press.
Bayne, T., Pacherie, E. (2007). “Narrators and comparators: The architecture of agentive
self-awareness.” Synthese 159, 475–491.
Bays, P., Flanagan, J., Wolpert, D. (2006). “Attenuation of self-generated tactile sensations
is predictive not postdictive.” PLoS Biology 4 (2), e28.
Bechara, A., Damasio, A., Damasio, H., Anderson, S. (1994). “Insensitivity to future con-
sequences following damage to human prefrontal cortex.” Cognition 50, 7–15.
Bechara, A., Damasio, H., Tranel, D., Damasio, A. (1997). “Deciding advantageously
before knowing the advantageous strategy.” Science 275, 1293–1295.
Blakemore, S.-J. (2003). “Deluding the motor system.” Consciousness and Cognition 12,
647–655.
Blakemore, S.-J., Frith, C., Wolpert, D. (1999). “Spatio-temporal prediction modulates the
perception of self-produced stimuli.” Journal of Cognitive Neuroscience 11, 551–559.
Blakemore, S.-J., Oakley, D., Frith, C. (2003). “Delusions of alien control in the normal
brain.” Neuropsychologia 41, 1058–1067.
Blakemore, S.-J., Wolpert, D., Frith, C. (2000). “Why can’t you tickle yourself?”
NeuroReport 11 (11), R11–R16.
Blakemore, S.-J., Wolpert, D., Frith, C. (2002). “Abnormalities in the awareness of action.”
Trends in Cognitive Science 6, 237–242.
Campbell, C. (1951). “Is freewill a pseudo-problem?” Mind 60, 441–465.
Carruthers, P. (2007). “The illusion of conscious will.” Synthese 159, 197–213.
Colombetti, G., Thompson, E. (2007). “The feeling body: Towards an enactive approach
to emotion.” In W. Overton, U. Müller, J. Newman (eds.), Developmental perspectives on
embodiment and consciousness, 45–68. Mahwah, NJ: Erlbaum.
Damasio, A. (1999). The feeling of what happens: Body and emotion in the making of con-
sciousness. New York: Harcourt Brace.
Dennett, D. (1984). Elbow room: The varieties of free will worth wanting. Cambridge, MA:
MIT Press.
Dennett, D., Kinsbourne, M. (1992). “Time and the observer.” Behavioral and Brain
Sciences 15, 183–247.
Frith, C. (1992). The cognitive neuropsychology of schizophrenia. Hove, UK: Erlbaum.
Frith, C. (2005). “The neural basis of hallucinations and delusions.” Comptes Rendus
Biologies 328, 169–175.
Frith, C., Blakemore, S.-J., Wolpert, D. (2000). “Abnormalities in the awareness and control
of action.” Philosophical Transactions of the Royal Society of London Series B—Biological
Sciences 355, 1771–1788.
Gallagher, S. (2000). “Philosophical conceptions of the self: Implications for cognitive
science.” Trends in Cognitive Science 4, 14–21.
Gallagher, S. (2004). “Agency, ownership, and alien control in schizophrenia.” In P. Bovet,
J. Parnas, D. Zahavi (eds.), Interdisciplinary perspectives on self-consciousness, 89–104.
Amsterdam: John Benjamins.
Grünbaum, A. (1971). “Free will and laws of human behavior.” American Philosophical
Quarterly 8, 299–317.
Haggard, P. (2005). “Conscious intention and motor cognition.” Trends in Cognitive
Science 9, 290–295.
Haggard, P., Cartledge, P., Dafydd, M., Oakley, D. (2004). “Anomalous control: When
‘free-will’ is not conscious.” Consciousness and Cognition 13, 646–654.
Haggard, P., Clark, S., Kalogeras, J. (2002). “Voluntary action and conscious awareness.”
Nature Neuroscience 5, 382–385.
Haggard, P., Cole, J. (2007). “Intention, attention and the temporal experience of action.”
Consciousness and Cognition 16, 211–220.
Holton, R. (2006). “The act of choice.” Philosophers’ Imprint 6, 1–15.
Horgan, T., Tienson, J., Graham, G. (2003). “The phenomenology of first-person agency.”
In S. Walter, H. Heckman (eds.), Physicalism and mental causation, 323–340. Exeter:
Imprint Academic.
Lehrer, K. (1960). “Can we know that we have free will by introspection?” Journal of
Philosophy 57, 145–157.
Libet, B. (1985). “Unconscious cerebral initiative and the role of conscious will in volun-
tary action.” Behavioral and Brain Sciences 8, 529–566.
Libet, B., Gleason, C., Wright, E., Pearl, D. (1983). “Time of conscious intention to act in
relation to onset of cerebral activity (readiness potential): The unconscious initiation
of a freely voluntary act.” Brain 106, 623–642.
Linser, K., Goschke, T. (2007). “Unconscious modulation of the conscious experience of
voluntary control.” Cognition 104, 459–475.
Maes, J., van Gool, A. (2008). “Misattribution of agency in schizophrenia: An explora-
tion of historical first-person accounts.” Phenomenology and the Cognitive Sciences 7,
191–202.
Nadelhoffer, T., Nahmias, E. (2007). “The past and the future of experimental philoso-
phy.” Philosophical Explorations 10, 123–149.
Nahmias, E. (2005). “Agency, authorship, and illusion.” Consciousness and Cognition 14,
771–785.
Nahmias, E., Morris, S., Nadelhoffer, T., Turner, J. (2004). “The phenomenology of free
will.” Journal of Consciousness Studies 11, 162–179.
Nichols, S. (2004). “Folk concepts and intuitions: From philosophy to cognitive science.”
Trends in Cognitive Science 8, 514–518.
O’Connor, T. (1995). “Agent causation.” In T. O’Connor (ed.), Agents, causes, and events,
173–200. New York: Oxford University Press.
O’Regan, J., Noë, A. (2001). “A sensorimotor account of vision and visual consciousness.”
Behavioral and Brain Sciences 24, 883–917.
Pacherie, E. (2007). “The sense of control and the sense of agency.” Psyche 13 (1), 1–30.
Pettit, G. (2002). “Are we rarely free? A response to restrictivism.” Philosophical Studies
107, 219–237.
Pettit, P. (2003). “Agency-freedom and option-freedom.” Journal of Theoretical Politics 15,
387–403.
Prinz, W. (2006). “Free will as a social institution.” In S. Pockett, W. Banks, S. Gallagher
(eds.), Does consciousness cause behavior?, 257–276. Cambridge, MA: MIT Press.
Roser, M., Gazzaniga, M. (2004). “Automatic brains—interpretive minds.” Current
Directions in Psychological Science 13, 56–59.
Sato, A., Yasuda, A. (2005). “Illusion of sense of self-agency: Discrepancy between the pre-
dicted and actual sensory consequences of actions modulates the sense of self-agency,
but not the sense of ownership.” Cognition 94, 241–255.
Searle, J. (1984). Minds, brains, and science. Cambridge, MA: Harvard University Press.
Shergill, S., Bays, P., Frith, C., Wolpert, D. (2003). “Two eyes for an eye: The neuroscience
of force escalation.” Science 301, 187.
Spence, S. (2001). “Alien control: From phenomenology to cognitive neurobiology.”
Philosophy, Psychiatry, and Psychology 8, 163–172.
Spence, S., Brooks, D., Hirsch, S., Liddle, P., Meehan, J., Grasby, P. (1997). “A PET study
of voluntary movement in schizophrenic patients experiencing passivity phenomena
(delusions of alien control).” Brain 120, 1997–2011.
Stephens, G., Graham, G. (2000). When self-consciousness breaks: Alien voices and inserted
thoughts. Cambridge, MA: MIT Press.
Vander Laan, D. (2001). “A regress argument for restrictive incompatibilism.” Philosophical
Studies 103, 201–215.
Van Inwagen, P. (1989). “When is the will free?” In J. Tomberlin (ed.), Philosophical per-
spectives 3: Philosophy of mind and action theory, 399–422. Atascadero, CA: Ridgeview.
Velleman, D. (1989/2000). “Epistemic freedom.” Pacific Philosophical Quarterly 70, 73–
97. Reprinted in The possibility of practical reason. New York: Oxford University Press,
2000.
Wegner, D. (2002). The illusion of conscious will. Cambridge, MA: MIT Press.
Weiskrantz, L., Elliott, J., Darlington, C. (1971). “Preliminary observations on tickling
oneself.” Nature 230, 589–599.
Wenke, D., Fleming, S., Haggard, P. (2010). “Subliminal priming of actions influences
sense of control over effects of action.” Cognition 115, 26–38.
Wilson, T. (2002). Strangers to ourselves: Discovering the adaptive unconscious. Cambridge,
MA: Harvard University Press.
Wolpert, D. (1997). “Computational approaches to motor control.” Trends in Cognitive
Science 1, 209–216.
Wolpert, D., Ghahramani, Z., Jordan, M. (1995). “An internal model for sensorimotor
integration.” Science 269, 1880–1882.
9

Agency as a Marker of Consciousness

TIM BAYNE

The primary aim, object and purpose of consciousness is control.


Lloyd Morgan

1. INTRODUCTION
One of the central problems in the study of consciousness concerns the ascription
of consciousness. We want to know whether certain kinds of creatures—such as
nonhuman animals, artificially created organisms, and even members of our own
species who have suffered severe brain damage—are conscious, and we want to
know what kinds of conscious states these creatures might be in if indeed they are
conscious. The identification of accurate markers of consciousness is essential if the
science of consciousness is to have any chance of success.
An attractive place to look for such markers is in the realm of agency. Consider
the infant who reaches for a toy, the lioness who tracks a gazelle running across
the savanna, or the climber who searches for a handhold in the cliff. In each case, it
is tempting to assume that the creature in question is conscious of the perceptual
features of their environment (the toy, the gazelle, the handhold) that guide their
behavior. More generally, we might say that the exercise of intentional, goal-directed
agency is a reliable guide to the presence of consciousness. To put it in a slogan, we
might be tempted to treat agency as a marker of consciousness (AMC).
Although it has its advocates, AMC has come in for sustained criticism, and a
significant number of theorists have argued that the inference from agency to con-
sciousness is illegitimate on the grounds that much of what we do is under the con-
trol of unconscious behavioral guidance systems. These theorists typically hold that
the only sound basis for the ascription of consciousness—at least when it comes
to human beings—is introspective report. Frith and colleagues (1999) claim that
“to discover what someone is conscious of we need them to give us some form of
report about their subjective experience” (107); Weiskrantz (1997) suggests that
“we need an off-line commentary to know whether or not a behavioural capacity
is accompanied by awareness” (84); and Naccache (2006) claims that “conscious-
ness is univocally probed in humans through the subject’s reports of his or her own
mental states” (1396).
The central goal of this chapter is to assess the case against AMC. My aim
is to provide a framework for thinking about the ways in which agency might
function as a marker of consciousness, and to argue that the case against AMC is
not nearly as powerful as it is often thought to be. The chapter divides into two
rough halves. The first half examines various ways in which agency might func-
tion as a marker of consciousness: section 2 focuses on the notion of a marker of
consciousness itself, while section 3 examines the relationship between agency
and consciousness. Section 4 forms a bridge between the two halves of the
chapter. Here, I contrast AMC with the claim that the only legitimate marker of
consciousness is introspective report. The second half of the chapter examines
two broad challenges to AMC: section 5 examines the challenge from cogni-
tive neuroscience, while section 6 addresses the challenge from social psychol-
ogy. I argue that although the data provided by these disciplines provide useful
constraints on the application of AMC, they do not undermine the general
approach itself.1

2. CONSCIOUSNESS AND ITS MARKERS


A number of quite distinct things might be meant by the claim that agency is a
marker of consciousness. Let us begin by considering the notion of consciousness.
The kind of consciousness in which I am interested is phenomenal conscious-
ness—the kind of consciousness that there is “something that it is like” to enjoy
(Nagel 1974). We can distinguish two aspects to phenomenal consciousness,
what we might call creature consciousness and state consciousness. Creature con-
sciousness is the property that a creature has when there is something it is like to
be that creature—something for the creature itself. Conscious states, by contrast,
are ways in which the creature’s consciousness is modulated. Such states come
in a wide variety of forms and are distinguished from each other by reference to
their phenomenal character. What it’s like to smell lavender is distinct from what
it’s like to taste coffee, and each of these conscious states is in turn distinct from
that associated with the experience of backache. We can think of conscious states as
determinates of creature consciousness, in roughly the way in which being green is
a determinate of being colored.
The distinction between creature consciousness and state consciousness
brings with it a distinction between two forms of “the” problem of other minds:
we need criteria for the ascription of consciousness as such to a creature (“crea-
ture consciousness”), and we need criteria for the ascription of particular types
of conscious states to a creature (“state consciousness”). Although these two
demands are distinct, they are likely to be intimately related. For one thing, the
ascription of state consciousness will automatically warrant the ascription of
creature consciousness, for—trivially—any creature that is in a conscious state
will itself be conscious. Further, although the converse entailment does not hold
(for markers of creature consciousness need not be markers of state conscious-
ness), in general it is likely that the ascription of creature consciousness will be
grounded in the ascription of state consciousness. For example, one’s evidence
that an infant is conscious will typically be grounded in evidence that the infant
is enjoying certain types of conscious states, such as pain or visual experiences.
With these points in mind, my focus in this chapter will be on the ascription of
conscious states.
Thus far I have referred to markers of state consciousness in the abstract, but it is
far from clear that we should be looking for a single marker—or even a single fam-
ily of markers—of conscious states of all kinds. In fact, it is likely that such markers
will need to be relativized in a number of ways. First, there is reason to think that
the markers of consciousness will need to be relativized to distinct conscious state
types, for it seems likely that the difference that consciousness makes to the func-
tional profile of a mental state depends on the kind(s) to which that mental state
belongs. To put the point somewhat crudely, we may need one family of markers for
(say) conscious perceptions, another for conscious thoughts, and a third for con-
scious emotions. Indeed, our taxonomy might need to be even more fine-grained
than this, for we might need to distinguish markers of (say) conscious vision from
markers of conscious olfaction.
Second, markers of consciousness may need to be relativized to particular types
of creatures. If consciousness is multiply realized, then the markers of conscious-
ness that apply to the members of one species may not apply to the members of
another. And even if there are species-invariant markers of consciousness, it may
be difficult to determine whether any putative marker of consciousness is in fact
species invariant or whether it applies only to the members of a certain species or
range of species.
Third, markers of consciousness may need to be relativized to what are variously
referred to as “background states,” “levels,” or “modes” of consciousness, such as
normal wakefulness, delirium, REM sleep, hypnosis, the minimally conscious state,
and so on. Unlike the fine-grained conscious states that can be individuated in
terms of their content (or phenomenal character), modes of consciousness modu-
late consciousness in broad, domain-general ways. Given the complex interactions
between a creature’s mode of consciousness and its various (fine-grained) states of
consciousness, it is highly likely that many of our markers for the latter will need to
be relativized to the former in various ways. In other words, what counts as a marker
of (say) visual experience may depend on whether we are considering creatures in a
state of normal wakefulness or (say) creatures in a state of REM sleep.
My focus will be on the question of whether agency might count as a marker of con-
sciousness in the context of human beings in the normal waking state. The further we
depart from such contexts, the less grip we have on what might be a reasonable marker
of consciousness (Block 2002). So, although the following discussion will touch on
the question of whether agency might be regarded as a marker of consciousness in
other species or in human beings who have suffered severe neurological damage, my
primary interest will be with cognitively unimpaired human beings in the normal
waking state.

3. INTENTIONAL AGENCY
Agency is no less multifaceted than consciousness. The folk-psychological
terms that we have for describing agency—“intentional agency,” “goal-directed
agency,” “voluntary agency,” “deliberative agency,” and so on—are imprecise in
various ways, and it is unclear which, if any of them, will find gainful employment
within the scientific study of agency (Prochazka et al. 2000). We might hope that
advances in the scientific understanding of agency will bring with them a more
refined taxonomy for understanding agency, but at present we are largely reliant
on these folk-psychological categories. From within this framework, the most
natural point of contact with consciousness is provided by intentional agency. The
driving intuition behind AMC is that we can use a creature’s intentional responses
to its environment as a guide to the contents of its consciousness. But what is an
intentional action?
Most fundamentally, intentional actions are actions that are carried out by agents
themselves and not some subpersonal or homuncular component of the agent. This
point can be illustrated by considering the control of eye movements. Some eye
movements are controlled by low-level stimulus-driven mechanisms that are located
within the superior colliculus; others are directed by high-level goal-based represen-
tations (Kirchner & Thorpe 2006; de’Sperati & Baud-Bovy 2008). Arguably, the
former are more properly ascribed to subpersonal mechanisms within the agent,
while the latter qualify as things that the agent does—as instances of “looking.” In
drawing a distinction between personal agency and subpersonal motor control, I
am not suggesting that we should think of “the agent” as some kind of homunculus,
directing the creature’s behavior from its director’s box within the Cartesian the-
ater. That would be a grave mistake. As agents we are not “prime movers” but crea-
tures that behave in both reflective (or “willed”) and reactive (or “stimulus-driven”)
modes depending on the dictates of the environment. The agent is not to be identi-
fied with the self of rational reflection or pure spontaneity but is to be found by look-
ing at how the organism copes with its environment. The exercise of willed agency
draws on neural circuits that are distinct from those implicated in stimulus-driven
agency ( Jahanshahi & Frith 1998; Lengfelder & Gollwitzer 2001; Mushiake et al.
1991; Obhi & Haggard 2004), but few of our actions are purely “self-generated” or
purely “stimulus-driven.” Instead, the vast majority of what we do involves a com-
plex interplay between our goals and the demands of our environment (Haggard
2008). Think of a typical conversation, where what one says is guided by both the
behavior of one’s interlocutor and one’s own communicative intentions.
A useful way in which to unpack the contrast between personal and subpersonal
control is in terms of cognitive integration. What it is for an action to be assigned
to the agent herself rather than one of her components is for it to be suitably inte-
grated into her cognitive economy. There is some temptation to think that behav-
ioral responses that are functionally isolated from the agent’s cognitive economy
should not be assigned to the agent, at least not without reservation. In contrast to
“hardwired” stimulus-response behaviors, intentional responses are marked by the
degree to which they are integrated with each other. This integration might be most
obvious in the context of deliberative, reflective, and willed agency, but it can also
be seen in so-called stimulus-driven behavior. Consider the behavioral responses
that are exercised in the context of absentmindedly making oneself a cup of coffee
while chatting to a colleague. Although the fine-grained motor control involved in
such a task might draw on unconscious perceptual representations (see later dis-
cussion), there is good reason to think that the identification of objects requires
consciousness.
The foregoing raises the question of what we should say about “primitive
agents”—creatures that lack the cognitive architecture required for (high degrees
of) cognitive integration and rational control. Primitive agents possess little in the
way of deliberative or reasons-responsive capacities, but they are nonetheless able
to draw on those cognitive and behavioral capacities that they do possess in a rela-
tively flexible manner. Are such creatures capable of performing intentional actions?
Our intuitions are likely to be pulled in two directions at once. On the one hand,
the fact that primitive agents can deploy those cognitive and behavioral responses
that they do possess in an integrated way might suggest that they are capable of act-
ing intentionally. At the same time, the fact that such creatures are unable to engage
in high-level behavioral control—control that involves deliberation and rational
reflection—might incline us to deny that they can perform intentional actions.
What should we do here?
I think we can steer a path between these two intuitions by allowing that primitive
agents can engage in intentional agency to the extent that they are able to draw on their
various behavioral capacities in a flexible and appropriate manner. Consider again
the lioness who tracks a gazelle as it moves across her perceptual field. Although
the lioness cannot deploy the contents of her visual representation in the service of
high-level agency (for she lacks the capacities required for high-level agency), she
does act intentionally insofar as she can deploy those behavioral capacities that she
does possess in a flexible, goal-directed manner.
Similar points apply to creatures who possess the mechanisms required for
high-level behavioral control but who cannot currently deploy those mechanisms
due to temporary impairment or environmental demands. Consider, for example,
the action of accelerating in response to a green traffic light in an automatic and
reflexive fashion. Although the person performing such an action might have the
capacity for high-level agentive control, processing bottlenecks may prevent them
from bringing that capacity to bear on this particular behavior. As such, we might
say that there is a sense in which this “prepotent” action is not fully intentional.
However, there is nonetheless a genuine sense in which it can (and indeed should)
be assigned to the agent rather than one of their subpersonal components, and
hence falls within the scope of AMC.
The foregoing means that the employment of AMC will be somewhat problem-
atic in the context of creatures that lack the capacity for flexible, reasons-responsive
agency. I have already mentioned nonhuman animals in this respect, but no less
important will be members of our own species in which such capacities have not
yet appeared or in which they are impaired. Particularly problematic will be the
evaluation of actions that occur in the context of global disorders of conscious-
ness, such as the minimally conscious state, in which the individual’s ability to
engage in high-level, reasons-responsive agency has been lost or at least severely
compromised. The loss of high-level agentive control could be taken as evidence
that consciousness too is lost in such disorders, but it could equally be accounted
for by supposing that the creature’s conscious states cannot govern its behavior in
ways that they would normally be able to because those capacities for high-level
behavioral control are themselves disrupted. What we can say is this: we have some
evidence of consciousness in such individuals to the extent that they are able to
engage in stimulus-driven—that is, routinized and automatic—agency, but this evi-
dence is significantly weaker than it would be were those individuals to manifest
flexible and reasons-responsive control.2 More generally, we might say that although
the presence of fully intentional agency in such cases constitutes good evidence of
consciousness, the absence of such agency may not constitute good evidence of the
absence of consciousness, for the mechanisms of cognitive integration required for
intentional agency may not be in place.
Let me bring this section to a close by addressing one oft-encountered objection
to AMC. The objection is that intentional agency cannot qualify as a marker of con-
sciousness because intentional agency presupposes consciousness. More fully, the
worry is that since an agent acts intentionally only if they are aware of what they are
doing, it follows that we cannot ascribe intentional agency to an agent without first
determining whether or not they are aware of what they are doing. But if we need to
do that—the objection continues—then we cannot employ intentional agency as a
marker of consciousness; rather, we must instead employ consciousness as a marker
of intentional agency.
I don’t find the objection compelling. For one thing, I am not convinced that
intentional agency does require that one be aware of what one is doing (or trying
to do) (see section 6). But the objection doesn’t go through even if we grant that
intentional agency requires that one be aware of one’s intentions, for it is entirely
possible that the ascription of intentional agency is, in certain contexts at least, prior
to the ascription of consciousness. As the studies discussed in the following section
illustrate, one can have good behavioral evidence for thinking that an individual
is acting intentionally without already having evidence that they are conscious. Of
course, that evidence will be defeasible—especially if intentional agency requires
intentional awareness on the part of the agent—but defeasible evidence is evidence
all the same. AMC asserts that intentional agency in a creature is a marker of con-
sciousness; it does not assert that it is a surefire guarantee of consciousness.

4. AN INTROSPECTIVE ALTERNATIVE?
As already noted, many consciousness scientists reject AMC in favor of the claim
that introspection is the (only) marker of consciousness. We might call this position
“IMC.” Comparing and contrasting AMC with IMC enables us to further illumi-
nate what it is that AMC commits us to, and also indicates that AMC is in line with
many of our intuitive responses.3
Let us begin by distinguishing two versions of IMC. According to the strong ver-
sion of IMC, introspective report is the only legitimate marker of consciousness in
any context. Where a creature is unable to produce introspective reports that it is in
a certain type of conscious state, then we have no reason to ascribe conscious states
of that kind to it; and where a creature is unable to produce introspective reports
of any kind, then we have no reason to think that it is conscious at all. (Indeed, we
might even have reason to think that it is not conscious.) A weaker version of IMC
takes introspective report to be the “gold standard” for the ascription of conscious-
ness when dealing with subjects who possess introspective capacities, but it allows
that nonintrospective measures might play a role in ascribing consciousness when
dealing with creatures in which such capacities are absent.
I will focus here on the strong version of IMC for the following two reasons.
First, I suspect that in general advocates of IMC incline toward the strong rather
than the weak version of the view. Second, it is not clear that the weak version of
IMC is stable. If there are viable nonintrospective markers of consciousness at all,
then it is difficult to see why they couldn’t be usefully applied to creatures with
introspective capacities. Of course, the advocate of the weak version of IMC could
allow that nonintrospective measures can be “applied” to creatures with introspec-
tive capacities but insist that whenever introspective and nonintrospective mea-
sures point in different directions the former trumps the latter. But it is difficult
to see why we should assume that introspective measures should always trump
nonintrospective measures if, as this response concedes, both introspective and
nonintrospective measures can be “applied” to the same creature. At any rate, I will
contrast AMC with the strong version of IMC and will leave the weak version of
the view to one side.
In what contexts will AMC and IMC generate different verdicts with respect to
the ascription of consciousness? Consider a series of experiments by Logothetis
and colleagues concerning binocular rivalry in rhesus monkeys (Logothetis &
Schall 1989; Logothetis et al. 2003; Sheinberg & Logothetis 1997). In this work,
Logothetis and colleagues trained rhesus monkeys to press bars in response to
particular images, such as horizontal and vertical gratings. Following training, the
monkeys were placed in a binocular rivalry paradigm in which a horizontal grating
was presented to one eye and a vertical grating was presented to the other eye, and
the monkeys were required to respond to the two stimuli by means of bar presses.
At the same time, Logothetis and colleagues recorded from the visual system of the
monkeys in order to determine where in the visual hierarchy the closest correla-
tions between their conscious percepts (as measured by their responses) and neural
activity might be found.
How should we interpret the monkeys’ responses? In a view that has been widely
endorsed in the literature, Logothetis and colleagues describe the monkeys as pro-
ducing introspective reports. This view seems to me to be rather problematic. In
fact, it seems highly implausible that the monkeys were producing reports of any
kind, let alone introspective reports. Arguably, a motor response counts as a report
only if it is made in the light of the belief that one’s audience will take it to manifest
the intention to bring about a certain belief in the mind of one’s audience, and it
seems doubtful that the monkeys’ button presses were guided by mental states of
this kind. In other words, commitment to IMC would be at odds with the assump-
tion that the monkeys were indeed conscious of the stimuli that were presented to
them, and would thus undermine the relevance of this work to questions concern-
ing the neural correlates of consciousness.
But we can interpret this research as bearing on the neural correlates of con-
sciousness—as it seems very natural to do—by endorsing AMC. Instead of con-
ceiving of the monkeys’ bar presses as reports, we should regard them as intentional
actions made in light of particular types of conscious percepts, for the monkeys
have learned that pressing a certain bar in response to (say) a horizontal grating will
produce a reward.
A second domain in which our intuitive ascription of consciousness appears to
favor AMC over IMC involves severely brain-damaged patients. In drawing the
boundary between the vegetative state and the minimally conscious state, physi-
cians lean heavily on appeals to the patient’s agentive capacities (Bernat 2006;
Jennett 2002; Giacino et al. 2002). A vegetative state diagnosis requires that the
patient show “no response to external stimuli of a kind that would suggest volition
or purpose (as opposed to reflexes)” (Royal College of Physicians 2003, §2.2).
Patients who do produce signs of volition are taken to have left the vegetative state
and entered the minimally conscious state. Volition has even been used to make
the case for consciousness in certain apparently vegetative state patients. One study
involved a 23-year-old woman who had been in a vegetative state for 5 months (Boly
et al. 2007; Owen et al. 2006). On some trials the patient was played a prerecorded
instruction to imagine playing tennis; on other trials she was instructed to imag-
ine visiting the rooms of her home. Astonishingly, the patient exhibited sustained,
domain-specific neural activity in the two conditions that was indistinguishable
from that seen in healthy volunteers.
It is widely—although not universally—supposed that this activity provided evi-
dence of consciousness in this patient. AMC is consistent with that response, for
it is reasonable to regard the neural activity as evidence of sustained, goal-directed
mental imagery. In the same way that limb movement in response to command is
generally taken as a manifestation of consciousness in minimally conscious state
patients, so too we might regard evidence of sustained mental imagery as evidence
of consciousness in vegetative state patients (Shea & Bayne 2010). By contrast,
advocates of IMC will deny that we have any evidence of consciousness in this
patient, as Naccache (2006) does. Indeed, the advocate of IMC may be committed
to denying that even so-called minimally conscious state patients are conscious, a
position that is at odds with current clinical opinion.
A third domain in which AMC appears to deliver a more intuitively plausible ver-
dict than IMC concerns the interpretation of the commissurotomy (or “split-brain”)
syndrome. The commissurotomy procedure involves severing some portion of the
corpus callosum in order to prevent epileptic seizures spreading from one hemi-
sphere to the other. Although split-brain patients are largely unimpaired in every-
day life, under carefully controlled laboratory conditions they can be led to exhibit
striking behavioral dissociations (Gazzaniga 2005; Zaidel et al. 2003). In a typical
split-brain experiment, distinct visual stimuli are presented to the patient in sepa-
rate halves of the visual field. For example, the word “key-ring” might be projected
such that “key” is restricted to the patient’s left visual field and “ring” is restricted
to the patient’s right visual field. The contralateral structure of the visual system
ensures that stimuli presented in the left visual field are processed only in the right
hemisphere and vice versa. The typical finding is that patients say that they see only
the word “ring,” yet with their left hand they will select a picture of a key and ignore
pictures of both a ring and a key-ring.
As a number of theorists have pointed out (e.g., Milner 1992), strict adherence
to IMC would require us to conclude that such patients are not conscious of the
word “key.”4 Although some theorists have found this view attractive (e.g., MacKay
1966), most regard the “minor” nonspeaking hemisphere as capable of supporting
consciousness (e.g., LeDoux et al. 1977; Marks 1981). In doing so, theorists seem to
have implicitly embraced some version of AMC. They take the right hemisphere of
split-brain patients to support consciousness on the grounds that it enables various
forms of intentional, goal-directed agency.
We have examined three domains in which the contrast between IMC and AMC
has an important bearing on the ascription of consciousness. Arguably, in each case
AMC does a better job of capturing our pretheoretical intuitions than IMC does.
Of course, the advocate of IMC need not be impressed by this result. Such a theorist
might grant that even if certain experimental paradigms employed in the science of
consciousness cannot be reconstructed according to the dictates of IMC, so much
the worse for those paradigms. Rather than bring our views concerning the markers
of consciousness into line with current experimental practice or pretheoretical intu-
ition, they might say, we should revise both practice and intuition in light of theory
(see, e.g., Papineau 2002).
Although it is certainly true that neither pretheoretical intuition nor experimen-
tal practice is sacrosanct, they do confer a certain de facto legitimacy on AMC.
The fact that IMC is at odds with them suggests that it is a somewhat revisionary
proposal, one that stands in need of independent motivation. How might IMC be
motivated?
Although a thorough evaluation of the case for IMC goes well beyond the scope
of this chapter, I do want to engage with one argument that has been given for it—
what we might call the argument from epistemic conservatism. This argument may
not be the strongest argument for IMC, but I suspect that it is one of the most
influential. The argument begins with the observation that there are two kinds of
error that one can make in studying consciousness: false negatives and false posi-
tives. False negatives occur when a marker misrepresents a conscious state or crea-
ture as unconscious, while false positives occur when a marker misrepresents an
unconscious state (or creature) as conscious. Now, let us say that an approach to the
ascription of consciousness is conservative if it places more importance on avoiding
false positives than on avoiding false negatives, and that it is liberal if it places more
importance on avoiding false negatives than false positives. The argument from
epistemic conservatism holds that because our approach toward the ascription of
consciousness ought to be maximally conservative, we should adopt IMC, for only
IMC is guaranteed to prevent false positives.
I don’t find the argument compelling. For one thing, it is by no means clear that a
conservative approach to the ascription of consciousness would lead inexorably to IMC.
The literature on inattentional blindness and change blindness—to take just two of
many examples that could be cited here—suggests that our introspective beliefs con-
cerning our current conscious states are often false: we ascribe to ourselves conscious
states that we are not in, and we overlook conscious states that we are in (Dennett
1991; Haybron 2007; Schwitzgebel 2008). More fundamentally, there is no reason
to assume that our choice between rival markers of consciousness ought to be con-
servative (let alone ultraconservative). We should indeed seek to avoid false positives
in studying consciousness, but we should also seek to avoid false negatives. As far as
I can see, there is no reason to regard one of these errors as more serious than the
other, at least so far as the scientific study of consciousness is concerned.5

5. THE CHALLENGE FROM COGNITIVE NEUROSCIENCE


I suspect that most of those who reject AMC do so not because they regard it as
pretheoretically implausible but because they take it to have been undermined by
findings in cognitive science. In particular, many theorists regard AMC as being at
odds with what we have learned from the study of blindsight and visual agnosia.
Blindsight is a condition caused by damage to primary visual cortex.6 Although
patients deny that they are aware of stimuli that are presented within their scotoma
(“blindfield”), they are nonetheless able to discriminate certain kinds of blindfield
stimuli under forced choice conditions. In some cases, the acuity of the patient’s
blindfield can even exceed that of the sighted portions of the visual field for certain
kinds of stimuli (Weiskrantz et al. 1974; Trevarthen et al. 2007).7 Visual form agno-
sia has been described as a “blindsight for orientation.” As with blindsight proper,
patients retain striking behavioral capacities despite the apparent loss of certain
aspects of conscious awareness. Much of our knowledge of such capacities comes
from the study of D.F., a woman who developed visual agnosia at the age of 35 due
to carbon monoxide poisoning (Goodale & Milner 2004; Milner & Goodale 2006).
Owing to ventral stream damage, D.F. suffered severe impairments in her ability to
see the shape and location of objects. However, the dorsal stream of her visual sys-
tem was left largely unaffected, and she retained the ability to execute fine-grained
online motor control in response to visual cues. In one particularly striking study,
D.F. was able to “post” a letter through a slot despite being unable to report the
slot’s orientation or say whether it matched that of another slot (Carey et al. 1996;
McIntosh et al. 2004).
These findings are widely taken to put pressure on AMC. Frith and colleagues
(1999) preface their rejection of AMC with the claim that “blindsight shows that
goal-directed behaviour is not a reliable indicator of consciousness” (107), and
Weiskrantz (1997) appeals to blindsight in defense of his claim that “we need an
off-line commentary to know whether or not a behavioural capacity is accompanied
by awareness” (84). In a somewhat similar vein, Milner and Goodale’s practice of
describing the contrast between ventral stream processing and dorsal stream pro-
cessing as a contrast between “vision-for-perception” and “vision-for-action” has
encouraged the view that visual experience is not in the business of guiding agency.
In the words of Koch and Crick (2001), it is commonly thought that visually guided
action is subserved only by “zombie systems.”
Although this body of research certainly ought to inform our conception of just
how agency might support ascriptions of consciousness, it would be premature to
conclude that these conditions undermine AMC. We can think of the objection
from cognitive neuroscience as consisting of four steps. First, the introspective
reports of the relevant patients are correct: the representations that patients have of
objects in their blindfield are unconscious. Second, these unconscious representa-
tions support various forms of intentional agency, such as pointing and guessing.
The third step of the argument puts these two claims together to argue that con-
sciousness of stimulus (or its relevant properties) is not required for intentional
agency that is directed toward that stimulus (i.e., intentional agency that crucially
involves the relevant properties). But—and this is the fourth step—if conscious-
ness is not required for intentional agency, then intentional agency cannot ground
ascriptions of consciousness. What should we make of this argument?
The first step appears to be warranted. Although some theorists have cast doubt
on the introspective reports of blindsight patients, suggesting that the damage they
have sustained might have impaired their introspective capacities rather than their
visual experience as such (see, e.g., Gertler 2001), this position is undermined by
evidence of “blindsight-like” phenomena (Kolb & Braun 1995; Lau & Passingham
2006) and unconscious dorsal stream visual control (Milner & Goodale 2008)
in normal individuals.8 Although we could suppose that normal subjects also lack
introspective access to these visual representations, it seems more parsimonious to
assume that the introspective capacities of normal subjects are intact, and that their
dorsal representations are—as their introspective reports indicate—unconscious.9
More problematic is the fourth step of the argument. There is no incoherence
in holding that although the intentional actions of these patients are guided by
unconscious representations, these contexts are exceptions to a general rule that
links agency to consciousness. AMC, so the thought goes, might lead us astray in
dealing with patients with blindsight and visual agnosia without being unreliable in
general. In order for X to qualify as a marker of Y, it need not be the case that every
instance of X is accompanied by an instance of Y. Just as smoke is a marker of fire
even though not all instances of smoke are accompanied by fire, intentional agency
might be a marker of consciousness even if it is possible for intentional agency to be
guided by unconscious representations.
Although this line of response is perfectly acceptable as far as it goes, it is not clear
that we need be as concessive to the objection as this response is. The reason for this
is that it is not at all clear that blindsight-supported actions are fully intentional. In
other words, it is not clear that the second step of the argument is warranted. In fact,
there are three respects in which the actions seen in these conditions fall short of
full-blooded intentionality.
First, blindsight-supported agency is often curiously “response-specific.” An
early study by Weiskrantz and colleagues (1974) found that D.B. was able to local-
ize blindfield targets much better when he was required to point to them than when
he was required to make an eye movement toward them. In another study, Zihl and
von Cramon (1980) required three blindsight patients to report when they saw a
light that had been flashed into their blindfield. On some trials the patients were
instructed to produce eye-blink reports, on other trials they were required to pro-
duce key-press reports, and on still other trials they were asked to produce verbal
reports (saying “yes”). Although the patients’ blinking and key-pressing responses
were significantly above chance (after practice), their verbal responses were not.
Another series of studies investigating the capacity of two blindsight patients to
perceive size and orientation found that their performance was above chance on
goal-directed actions (grasping and posting), but below chance when they were
required to either perceptually match or verbally report the target’s size (Perenin &
Rossetti 1996; Rossetti 1998). The fact that their responses to the stimuli were
restricted to particular behavioral modalities suggests that they were not fully inten-
tional. As we noted in section 3, the more flexible a behavioral response is, the more
reason there is to regard it as fully intentional.
A second respect in which blindfield-supported actions are less than fully inten-
tional is that they must typically be prompted. Blindfield content is not generally
available for spontaneous agentive control. Patients must initially be told that their
guesses are reliable before they will employ the contents of their blindfield repre-
sentations in agentive control. The fact that blindfield content is not spontaneously
employed by the subject suggests that it is not accessible to her as such—that is, at
the personal level.
In response, it might be pointed out that some blindsight subjects do use the con-
tents of their blindfield in the service of spontaneous agency. Consider Nicholas
Humphrey’s blindsight monkey—Helen:

Helen, several years after the removal of visual cortex, developed a virtu-
ally normal capacity for ambient spatial vision, such that she could move
around under visual guidance just like any other monkey. This was certainly
unprompted, and in that respect “super” blindsight. (Humphrey 1995: 257;
see also Humphrey 1974).

What should we make of Helen’s “superblindsight”? One possibility is that it involves
the spontaneous use of purely unconscious visual representations. Another possi-
bility is that Helen’s blindfield representations became conscious as she acquired the
ability to spontaneously deploy their contents in the service of behavioral control.
I am not sure that we are yet in a position to adjudicate between these two possibili-
ties. What we can say is that it is far from obvious that Helen’s behavior provides
us with an example of spontaneous behavioral guidance in the absence of visual
consciousness.
Third, and perhaps most important, whereas the contents of perceptual con-
sciousness can be used for both the selection of goals and the execution of inten-
tions, blindfield content appears capable of sustaining only the latter of these two
operations. The dorsal stream is not a homunculus—a “mini-me” that can both
select and initiate actions under its own steam. Milner and Goodale (2006: 232)
liken it to the robotic component of a tele-assistance system: the ventral stream
selects the goal object from the visual array, and the dorsal stream carries out the
computations required for the assigned action (see also Clark 2009).
In response, it might be argued that dorsal stream representations can drive goal
selection. D.F., for example, appears to engage with the world in a fluid and dynamic
way, using her visual experience in order to select target objects on which to act.
However, such behavior cannot necessarily be attributed to dorsal stream represen-
tations acting on their own, for D.F.’s agnosia is quite selective. D.F. is missing only
quite specific properties from her visual experience, and those elements that she
retains enable her to engage in intentional agency across a wide range of everyday
contexts (Milner 2008: 181). Restricting conscious access to these properties—as
is done in experimental contexts—reveals how impoverished D.F.’s blindsight-based
behavioral capacities really are. As Dretske (2006) puts it, unconscious sensory
information may be able to “control and tweak” behaviors that have already been
selected, but conscious information appears to be required for behavioral planning
and the selection of goals.10
Let me briefly summarize the claims of this section. The objection from cognitive
neuroscience takes the form of a putative counterexample to the claim that con-
sciousness is required for intentional agency. In response, I have made two central
points. First, AMC requires only a robust correlation between consciousness and
intentional agency, and hence it could be justified even if there are conditions in
which certain types of intentional agency are possible in the absence of conscious-
ness. But—and this is the second point—it is far from clear that cognitive neuro-
science does provide us with counterexamples to the claim that consciousness is
required for intentional agency, for purely blindsight-based actions may not qualify
as fully intentional.

6. THE CHALLENGE FROM SOCIAL PSYCHOLOGY


A second body of research that is widely taken to undermine AMC derives from
social psychology. Work in this field suggests that goals can be nonconsciously
acquired, and that once acquired they can function without the subject being
aware of their influence (e.g., Bargh 2005; Bargh & Chartrand 1999; Bargh &
Morsella 2009; Dijksterhuis & Bargh 2001). Let us begin by considering some
of the data that are emerging from this field, before turning to questions of
interpretation.
The following two “scrambled sentence” studies by Bargh and collaborators are
representative of this body of work. In the first study, half of the participants were
given sentences containing words that primed for stereotypes of old age (“wrinkle,”
“grey,” “wise”), while the other half were given sentences containing only age-neutral
words (Bargh et al. 1996). The participants left the unscrambling task believing that
they had completed the experiment. However, as they left, the time that it took for
them to walk from the experimental room to the end of the corridor was measured.
As Bargh and colleagues predicted, those who had been given sentences containing
old-age primes took significantly longer to reach the end of the corridor than did
control subjects, who had not been primed in this way.
In the second study, Bargh et al. (2001) assigned subjects to one of two groups:
a high-performance group and a neutral group. The members of the former group
were instructed to search for words that primed for performance, such as “win,”
“compete,” “strive,” “attain,” and “succeed,” while the members of the neutral group
were given words that carried no such connotations, such as “ranch,” “carpet,” “river,”
“shampoo,” and “robin.” Subjects who had been primed with high-performance
words did significantly better in a subsequent word-finding task than did controls.
Questioning within a debriefing session indicated that participants had not been
aware of the relationship between the priming task and the subsequent experimental
situation. In a follow-up experiment, subjects were told to cease working on the
word-finding task after two minutes and were then surreptitiously observed to
determine whether or not they did as instructed. Surprisingly, 57 percent of sub-
jects in the high-performance group continued to work on the task following the
instruction to stop as opposed to only 22 percent of those in the neutral group. In
yet another follow-up study, subjects were interrupted after one minute and were
then made to wait for five minutes before being given the option of continuing with
the word-finding task or instead participating in a (more enjoyable) cartoon-rating
task. Subjects who had been primed with high-performance words were much more
likely (66 percent) than controls (32 percent) to persevere with the word-finding
task.11
Bargh (2005) draws the following moral from these (and other) studies:

Conscious acts of will are not necessary determinants of social judgment and
behavior; neither are conscious processes necessary for the selection of com-
plex goals to pursue, or for the guidance of those goals to completion. Goals
and motivations can be triggered by the environment, without conscious
choice or intention, then operate with and run to completion entirely non-
consciously, guiding complex behaviour in interaction with a changing and
unpredictable environment, and producing outcomes identical to those that
occur when the person is aware of having that goal. (52)

Do these findings—as Bargh’s comments might be taken to imply—undermine AMC?
I don’t think so. For one thing, we need to be careful about just how we inter-
pret these studies. Although I see no a priori reason to rule out the possibility of
unconscious goal selection, I am not convinced that these studies provide evidence
of such a phenomenon. The primes given in these studies influenced the subjects’
behavior by modulating how they acted (e.g., walking more slowly, persevering with
a task), but they did not provide the subjects with a goal. Subjects in the first study
were not attempting to walk slowly, and high-performance and neutral subjects
in the second study differed only in how long they stuck with the task.
Subjects in both sets of studies were trying to complete the assigned word-finding
task—and presumably they were fully aware that that was what they were trying to
do. We can grant that subjects were unaware of what factors influenced their degree
of motivation in carrying out the task, and perhaps even that they were unaware of
their degree of motivation itself (although we have no direct evidence on this score,
since subjects weren’t asked about how motivated they were), but being unaware of
these aspects of one’s agency is quite a different thing from being unaware of what it
is that one is trying to do.
Having said that, we should grant that agents are often unaware of their inten-
tions, even when those intentions are “proximal”—that is, are currently guiding
behavior (Mele 2009). Suppose that one is crossing the road while talking with
a friend. One might be so engrossed by the conversation that one is unaware
of one’s intention to cross the road. Moreover, it is likely that many creatures
are capable of goal-directed agency without having the capacity for agentive
self-awareness. But AMC does not require that intentional agency be grounded
in an agent’s awareness of its intentions and goals. All it requires is that conscious
states of some kind or another are implicated in the exercise of intentional agency.
Even when agents act on the basis of goals of which they are unaware, they will
generally be aware of the perceptual features of their environment that govern the
selection and implementation of those goals. It is one thing to act on the basis of
an unconscious intention, but quite another to act on the basis of an unconscious
representation of one’s perceptual environment or body. In many cases, the most
likely form of consciousness to be implicated in intentional agency will be percep-
tual rather than agentive.
This point can be further illustrated by considering pathologies of agency, such as
the anarchic hand syndrome (Della Sala & Marchetti 2005; Marchetti & Della Sala
1998). Patients with this condition have a hand that engages in apparently inten-
tional behavior of its own accord. The patient will complain that she has no control
over the hand’s behavior and will describe it as having a “mind of its own.” Although
patients appear not to have any sense of agency with respect to “their” actions, these
actions are presumably triggered and guided by the patient’s conscious perceptual
experiences. So, although anarchic hand actions might provide an unreliable guide
to the presence of conscious intention, there is no reason to think that they provide
an unreliable guide to the presence of conscious perception.
There is a final point to note, a point that is relevant to the assessment of both
the case against AMC based on social psychology and that which is based on cogni-
tive neuroscience. Although there are question marks about the kinds of conscious
states that the subjects studied in these experiments might be in, there is no doubt
whatsoever that the subjects themselves are conscious. As such, it is unclear what
bearing such studies might have on the question of whether completely unconscious
creatures are capable of intentional agency. It is not implausible to suppose that the
kinds of behavioral capacities that unconscious mental states are able to drive in
conscious creatures differ in fundamental respects from those that they are able to
drive in unconscious creatures. Perhaps mentality is something like an iceberg, not
only in the sense that only a small portion of it is conscious but also in the sense that
the existence of its submerged (or unconscious) parts demands the existence of its
unsubmerged (or conscious) parts.
Are there any pathologies of consciousness in which intentional agency occurs
in the complete absence of creature consciousness? A number of authors have
argued that sleepwalking, automatisms, and epileptic absence seizures—to name
just three of the many conditions that might be mentioned here—provide exam-
ples of states in which completely unconscious individuals engage in intentional
actions, albeit ones that are highly automatized (see, e.g., Koch 2004; Lau &
Passingham 2007). The problem with such claims is that it is far from clear that
such individuals are completely unconscious. They might not be conscious of
what they are doing or of why they are doing what they are doing, but it is—it
seems to me—very much an open question whether they might nonetheless be
conscious of objects in their immediate perceptual environment (Bayne 2011).
Such individuals certainly act in ways that are under environmental control, and
as Lloyd Morgan once remarked, control is the “primary aim, object and purpose
of consciousness” (1894: 182).

7. CONCLUSION
Current orthodoxy within the science of consciousness holds that the only legiti-
mate basis for ascribing consciousness is introspective report, and the practice of
employing agency as a marker of consciousness is looked upon with some suspicion
by many theorists. In this chapter I have argued that such suspicion is unjustified.
In the first half of the chapter I clarified a number of ways in which agency might
be adopted as a marker of consciousness, and in the second half I examined and
responded to the claim that findings in cognitive neuroscience and social psychol-
ogy undermine the appeal of AMC. I argued that although these findings provide
the advocate of AMC with plenty of food for thought, neither domain demonstrates
that intentional agency is an unreliable guide to the presence of consciousness.
What I have not done in this chapter is provide a direct argument for AMC. My
primary concern has been to defend AMC against a variety of objections, and I have
left the motivation for AMC at a relatively intuitive and pretheoretical level. The
task of developing a positive argument for AMC is a significant one and not some-
thing that I can take on here. All I have attempted to do in this chapter is remove
some of the undergrowth that has come to obscure the claim that agency might
function as a marker of consciousness: a full-scale defense of that claim must wait
for another occasion.12

NOTES
1. My analysis builds on a number of recent discussions of the relationship between
agency and consciousness. I am particularly indebted to Clark (2001, 2009), Dretske
(2006), Flanagan (1992), and van Gulick (1994).
2. See chapter 6 of Bayne (2010) for discussion of the ascription of consciousness in
these conditions.
3. This section draws on chapter 5 of Bayne (2010).
4. Whether or not this interpretation of the split-brain data is at odds with the weak ver-
sion of IMC or merely the strong version of IMC depends on the delicate question
of whether we think of the split-brain patient as two cognitive subjects (only one of
whom possesses introspective capacities) or as a single subject (with introspective
capacities).
5. It is somewhat ironic that many of those who are most strident in their defense of
IMC are also among the most critical of the claim that introspection is reliable.
Although not strictly inconsistent with each other, these two views are not natural
bedfellows.
6. See Pöppel et al. (1973), Weiskrantz et al. (1974), Perenin & Jeannerod (1975), and
Weiskrantz (2009).
7. Note that there are different forms of blindsight, and not all blindsight patients dem-
onstrate the same range of blindsight-related potential for action. My discussion here
concerns what Danckert and Rossetti (2005) call “action-blindsight.”
8. For a sample of the vast array of studies in this vein, see Aglioti et al. (1995); Brenner
& Smeets (1997); Bridgeman et al. (1981); Castiello et al. (1991); Fourneret &
Jeannerod (1998); McIntosh et al. (2004); Milner & Goodale (2008); Slachevsky
et al. (2001); and Schenk & McIntosh (2010).
9. Although certain blindsight patients claim to have some form of conscious awareness
of stimuli in their blindfield, they tend to describe such experiences as qualitatively
distinct from visual experiences of stimuli (Magnussen & Mathiesen 1989; Morland
et al. 1999).
10. In part this may be because the dorsal stream lacks access to information about the
categories to which visually perceived objects belong. For example, D.F. will fail to
pick up a screwdriver from the appropriate end because she will not recognize it as a
screwdriver (Dijkerman et al. 2009). However, even when the dorsal stream is able
to represent the appropriate properties of objects, it seems unable to draw on that
content in order to initiate action.
11. For similar studies see Bargh et al. (1996); Bargh & Ferguson (2000); Bargh &
Gollwitzer (1994); and Carver et al. (1983).
12. For helpful comments on earlier versions of this chapter, I am indebted to Bart
Kamphorst, Julian Kiverstein, Hakwan Lau, Eddy Nahmias, and Tillmann Vierkant.

REFERENCES
Aglioti, S., De Souza, J. F. X., & Goodale, M. A. 1995. Size-contrast illusions deceive the
eye but not the hand. Current Biology, 5: 679–685.
Bargh, J. A. 2005. Bypassing the will: Toward demystifying the nonconscious control of
social behavior. In R. Hassin, J. Uleman, & J. Bargh (eds.), The New Unconscious, 37–58.
New York: Oxford University Press.
Bargh, J. A., & Chartrand, T. 1999. The unbearable automaticity of being. American
Psychologist, 54: 462–479.
Bargh, J. A., Chen, M., & Burrows, L. 1996. Automaticity of social behavior: Direct effects
of trait construct and stereotype activation on action. Journal of Personality and Social
Psychology, 71: 230–244.
Bargh, J., & Ferguson, M. 2000. Beyond behaviorism: On the automaticity of higher
mental processes. Psychological Bulletin, 126: 925–945.
Bargh, J., & Gollwitzer, P. M. 1994. Environmental control of goal-directed action:
Automatic and strategic contingencies between situations and behavior. Nebraska
Symposium on Motivation, 41: 71–124.
Bargh, J., Gollwitzer, P. M., Lee-Chai, A. Y., Barndollar, K., & Trötschel, R. 2001. The
automated will: Nonconscious activation and pursuit of behavioural goals. Journal of
Personality and Social Psychology, 81: 1014–1027.
Bargh, J., & Morsella, E. 2009. Unconscious behavioural guidance systems. In C. Agnew, D.
Carlston, W. Graziano, & J. Kelly (eds.), Then a Miracle Occurs: Focusing on Behaviour in
Social Psychological Theory and Research, 89–118. New York: Oxford University Press.
Bayne, T. 2010. The Unity of Consciousness. Oxford: Oxford University Press.
Bayne, T. 2011. The presence of consciousness in “absence” seizures. Behavioural Neurology,
24 (1): 47–53.
Bernat, J. L. 2006. Chronic disorders of consciousness. Lancet, 367: 1181–1192.
Block, N. 2002. The harder problem of consciousness. Journal of Philosophy, 99:
391–425.
Boly, M., Coleman, M. R., Davis, M. H., Hampshire, A., Bor, D., Moonen, G., Maquet, P.
A., Pickard, J. D., Laureys, S., & Owen, A. M. 2007. When thoughts become action: An
fMRI paradigm to study volitional brain activity in noncommunicative brain injured
patients. NeuroImage, 36: 979–992.
Brenner, E., & Smeets, J. B. J. 1997. Fast responses of the human hand to changes in target
position. Journal of Motion Behaviour, 29: 297–310.
Bridgeman, B., Kirsch, M., & Sperling, G. 1981. Segregation of cognitive and motor aspects
of visual function using induced motion. Perception and Psychophysics, 29: 336–342.
Carey, D. P., Harvey, M., & Milner, A. D. 1996. Visuomotor sensitivity for shape and orien-
tation in a patient with visual form agnosia. Neuropsychologia, 34: 329–338.
Carver, C. S., Ganellen, R. J., Froming, W. J., & Chambers, W. 1983. Modelling: An anal-
ysis in terms of category accessibility. Journal of Experimental Social Psychology, 19:
403–421.
Castiello, U., Paulignan, Y., & Jeannerod, M. 1991. Temporal dissociation of motor
responses and subjective awareness. Brain, 114: 2639–2655.
Clark, A. 2001. Visual experience and motor action: Are the bonds too tight? Philosophical
Review, 110: 495–519.
Clark, A. 2009. Perception, action and experience: Unraveling the golden braid.
Neuropsychologia, 47: 1460–1468.
Danckert, J., & Rossetti, Y. 2005. Blindsight in action: What does blindsight tell us about
the control of visually guided actions? Neuroscience and Biobehavioural Reviews, 29:
1035–1046.
Della Sala, S., & Marchetti, C. 2005. The anarchic hand syndrome. In H.-J. Freund, M.
Jeannerod, M. Hallett, & R. Leiguarda (eds.), Higher-Order Motor Disorders: From
Neuroanatomy and Neurobiology to Clinical Neurology, 293–301. New York: Oxford
University Press.
Dennett, D. 1991. Consciousness Explained. Boston: Little, Brown.
de’Sperati, C., & Baud-Bovy, G. 2008. Blind saccades: An asynchrony between seeing and
looking. Journal of Neuroscience, 28: 4317–4321.
Dijkerman, H. C., McIntosh, R. D., Schindler, I., Nijboer, T. C. W., & Milner, A. D. 2009.
Choosing between alternative wrist postures: Action planning needs perception.
Neuropsychologia, 47: 1476–1482.
Dijksterhuis, A., & Bargh, J. A. 2001. The perception-behaviour expressway: Automatic
effects of social perception on social behaviour. In M. P. Zanna (ed.), Advances in
Experimental Social Psychology, 33:1–40. San Diego: Academic Press.
Dretske, F. 2006. Perception without awareness. In T. S. Gendler & J. Hawthorne (eds.),
Perceptual Experience, 147–180. Oxford: Oxford University Press.
Flanagan, O. 1992. Consciousness Reconsidered. Cambridge, MA: MIT Press.
Fourneret, P., & Jeannerod, M. 1998. Limited conscious monitoring of motor perfor-
mance in normal subjects. Neuropsychologia, 36: 1133–1140.
Frith, C., Perry, R., & Lumer, E. 1999. The neural correlates of conscious experience: An
experimental framework. Trends in Cognitive Sciences, 3: 105–114.
Gazzaniga, M. S. 2005. Forty-five years of split-brain research and still going strong. Nature
Reviews Neuroscience, 6: 653–659.
Gertler, B. 2001. Introspecting phenomenal states. Philosophy and Phenomenological
Research, 63: 305–328.
Giacino, J. T., Ashwal, S., Childs, N., Cranford, R., Jennett, B., Katz, D. I., Kelly, J. P.,
Rosenberg, J. H., Whyte, J., Zafonte, R. D., & Zasler, N. D. 2002. The minimally con-
scious state: Definition and diagnostic criteria. Neurology, 58: 349–353.
Goodale, M., & Milner, A. D. 2004. Sight Unseen: An Exploration of Conscious and
Unconscious Vision. Oxford: Oxford University Press.
Haggard, P. 2008. Human volition: Towards a neuroscience of will. Nature Reviews
Neuroscience, 9: 934–946.
Haybron, D. 2007. Do we know how happy we are? On some limits of affective introspec-
tion and recall. Noûs, 41: 394–428.
Humphrey, N. K. 1974. Vision in a monkey without striate cortex: A case study. Perception,
3: 241–255.
Humphrey, N. K. 1995. Blocking out the distinction between sensation and perception:
Superblindsight and the case of Helen. Behavioral and Brain Sciences, 18: 257–258.
Jahanshahi, M., & Frith, C. 1998. Willed action and its impairments. Cognitive
Neuropsychology, 15: 483–533.
Jennett, B. 2002. The Vegetative State. Cambridge: Cambridge University Press.
Kirchner, H., & Thorpe, S. J. 2006. Ultra-rapid object detection with saccadic eye move-
ments: Visual processing speed revisited. Vision Research, 46: 1762–1776.
Koch, C. 2004. The Quest for Consciousness. Englewood, CO: Roberts.
Koch, C., & Crick, F. 2001. On the zombie within. Nature, 411: 893.
Kolb, F. C., & Braun, J. 1995. Blindsight in normal observers. Nature, 377: 336–338.
Lau, H. C., & Passingham, R. E. 2006. Relative blindsight in normal observers and the
neural correlate of visual consciousness. Proceedings of the National Academy of Sciences,
103: 18763–18768.
Lau, H. C., & Passingham, R. E. 2007. Unconscious activation of the cognitive control
system in the human prefrontal cortex. Journal of Neuroscience, 27: 5805–5811.
LeDoux, J. E., Wilson, D. H., & Gazzaniga, M. S. 1977. A divided mind: Observations
on the conscious properties of the separated hemispheres. Annals of Neurology, 2:
417–421.
Lengfelder, A., & Gollwitzer, P. M. 2001. Reflective and reflexive action control in patients
with frontal brain lesions. Neuropsychology, 15: 80–100.
Logothetis, N. K., Leopold, D. A., & Sheinberg, D. L. 2003. Neural mechanisms of per-
ceptual organization. In N. Osaka (ed.), Neural Basis of Consciousness: Advances in
Consciousness Research, 49:87–103. Amsterdam: John Benjamins.
Logothetis, N., & Schall, J. 1989. Neuronal correlates of subjective visual perception.
Science, 245: 761–763.
MacKay, D. M. 1966. Cerebral organization and the conscious control of action. In J. C.
Eccles (ed.), Brain and Conscious Experience, 422–445. Heidelberg: Springer-Verlag.
Magnussen, S., & Mathiesen, T. 1989. Detection of moving and stationary gratings in the
absence of striate cortex. Neuropsychologia, 27: 725–728.
Marchetti, C., & Della Sala, S. 1998. Disentangling the alien and anarchic hand. Cognitive
Neuropsychiatry, 3: 191–207.
Marks, C. 1981. Commissurotomy, Consciousness and Unity of Mind. Cambridge, MA: MIT
Press.
McIntosh, R. D., McClements, K. I., Schindler, I., Cassidy, T. P., Birchall, D., & Milner, A.
D. 2004. Avoidance of obstacles in the absence of visual awareness. Proceedings of the
Royal Society of London Series B Biological Sciences, 271: 15–20.
Mele, A. 2009. Effective Intentions: The Power of Conscious Will. Oxford: Oxford University
Press.
Milner, A. D. 1992. Disorders of perceptual awareness: A commentary. In A. D. Milner and
M. D. Rugg (eds.), The Neuropsychology of Consciousness, 139–158. London: Academic
Press.
Milner, A. D. 2008. Conscious and unconscious visual processing in the human brain.
In L. Weiskrantz and M. Davies (eds.), Frontiers of Consciousness, 169–214. Oxford:
Oxford University Press.
Milner, A. D., & Goodale, M. A. 2006. The Visual Brain in Action. 2nd ed. Oxford: Oxford
University Press.
Milner, A. D., & Goodale, M. A . 2008. Two visual systems reviewed. Neuropsychologia,
46: 774–785.
Morgan, C. L. 1894. An Introduction to Comparative Psychology. London: W. Scott.
Morland, A. B., Jones, S. R., Finlay, A. L., Deyzac, E., Le, S., & Kemp, S. 1999. Visual percep-
tion of motion, luminance and colour in a human hemianope. Brain, 122: 1183–1196.
Mushiake, H., Inase, M., & Tanji, J. 1991. Neuronal activity in the primate premotor,
supplementary, and precentral motor cortex during visually guided and internally
determined sequential movements. Journal of Neurophysiology, 66: 705–718.
Naccache, L. 2006. Is she conscious? Science, 313: 1395–1396.
Nagel, T. 1974. What is it like to be a bat? Philosophical Review, 83: 435–450.
Obhi, S., & Haggard, P. 2004. Internally generated and externally triggered actions are
physically distinct and independently controlled. Experimental Brain Research, 156:
518–523.
Owen, A. M., Coleman, M. R., Boly, M., Davis, M. H., Laureys, S., & Pickard, J. D. 2006.
Detecting awareness in the vegetative state. Science, 313: 1402.
Papineau, D. 2002. Thinking about Consciousness. Oxford: Oxford University Press.
Perenin, M.-T., & Jeannerod, M. 1975. Residual vision in cortically blind hemifields.
Neuropsychologia, 13: 1–7.
Perenin, M.-T., & Rossetti, Y. 1996. Grasping without form discrimination in a hemiano-
pic field. Neuroreport, 7: 793–797.
Pöppel, E., Held, R., & Frost, D. 1973. Residual visual function after brain wounds involv-
ing the central visual pathways in man. Nature, 243: 295–296.
Prochazka, A., Clarac, F., Loeb, G. E., Rothwell, J. C., & Wolpaw, J. R. 2000. What do reflex
and voluntary mean? Modern views on an ancient debate. Experimental Brain Research,
130: 417–432.
Rossetti, Y. 1998. Implicit short-lived motor representations of space in brain-damaged
and healthy subjects. Consciousness and Cognition, 7: 520–558.
Royal College of Physicians. 2003. The Vegetative State: Guidance on Diagnosis and
Management. London: Royal College of Physicians.
Schenk, T., & McIntosh, R. D. 2010. Do we have independent visual streams for percep-
tion and action? Cognitive Neuroscience, 1: 52–78.
Schwitzgebel, E. 2008. The unreliability of naïve introspection. Philosophical Review, 117:
245–273.
Shea, N., & Bayne, T. 2010. The vegetative state and the science of consciousness. British
Journal for the Philosophy of Science, 61: 459–484.
Sheinberg, D. L., & Logothetis, N. K. 1997. The role of temporal cortical areas in percep-
tual organization. Proceedings of the National Academy of Sciences, 94: 3408–3413.
Slachevsky, A., Pillon, B., Fourneret, P., Pradat-Diehl, P., Jeannerod, M., & Dubois, B.
2001. Preserved adjustment but impaired awareness in a sensory-motor conflict fol-
lowing prefrontal lesions. Journal of Cognitive Neuroscience, 13: 332–340.
Trevethan, C. T., Sahraie, A., & Weiskrantz, L. 2007. Can blindsight be superior to
“sighted-sight”? Cognition, 103: 491–501.
Van Gulick, R. 1994. Deficit studies and the function of phenomenal consciousness. In
G. Graham and G. L. Stephens (eds.), Philosophical Psychopathology, 25–49. Cambridge,
MA: MIT Press.
Weiskrantz, L. 1997. Consciousness Lost and Found. Oxford: Oxford University Press.
Weiskrantz, L. 2009. Blindsight. 2nd ed. Oxford: Oxford University Press.
Weiskrantz, L., Warrington, E. K., Sanders, M. D., & Marshall, J. 1974. Visual capacity in
the hemianopic field following a restricted occipital ablation. Brain, 97: 709–728.
Zaidel, E., Iacoboni, M., Zaidel, D. W., & Bogen, J. E. 2003. The callosal syndromes. In
K. H. Heilman and E. Valenstein (eds.), Clinical Neuropsychology, 347–403. Oxford:
Oxford University Press.
Zihl, J., & von Cramon, D. 1980. Registration of light stimuli in the cortically blind hemi-
field and its effect on localization. Behavioural Brain Research, 1: 287–298.
PART THREE

The Function of Conscious Control
Conflict Resolution, Emotion, and Mental Actions
10

Voluntary Action and the Three Forms of Binding in the Brain

EZEQUIEL MORSELLA, TARA C. DENNEHY, AND JOHN A. BARGH

Historically, consciousness1 has been linked to the highest of intellectual functions.
For example, investigators have proposed that the primary function of conscious-
ness pertains to language (Banks, 1995; Carlson, 1994; Macphail, 1998), “theory of
mind” (Stuss & Anderson, 2004), the formation of the self (Greenwald & Pratkanis,
1984), cognitive homeostasis (Damasio, 1999), the assessment and monitoring
of mental functions (Reisberg, 2001), semantic processing (Kouider & Dupoux,
2004), the meaningful interpretation of situations (Roser & Gazzaniga, 2004), and
simulations of behavior and perception (Hesslow, 2002). In this chapter, we address
the question regarding what consciousness is for by focusing on the primary, basic
role that consciousness contributes to action production. We approach this ques-
tion from a nontraditional perspective—by working backward from overt voluntary
action to the underlying central processes (Sperry, 1952). This approach reveals
that the primary function of consciousness (to instantiate a unique form of integra-
tion, or “binding,” for the purpose of adaptive behavior) is more basic-level than
what has been proposed and that “volition” and the skeletal muscle output system
are intimately related to this primary function of consciousness. It is important to
emphasize that our question pertains to what consciousness is for (e.g., with respect
to action); it is not about what consciousness is (neurally or physically) or about
the nature of the neural processes associated with it. (With respect to biological sys-
tems, how and why questions are fundamentally different from what for questions;
Lorenz, 1963; Simpson, 1949.)
Theories granting high-level, multifaceted functions to consciousness often fail
to consider the empirical question, Why is consciousness associated with only
some of the many kinds of processes/representations that science tells us must exist
within our nervous system? In the field, there is a consensus that it is associated
with only a subset of all brain regions and processes (Merker, 2007; see review in
Morsella, Krieger, & Bargh, 2009). To isolate the primary function of conscious-
ness and identify its role in voluntary action, one must first appreciate all that can be
accomplished unconsciously in the nervous system.

UNCONSCIOUS ACTION AND UNCONSCIOUS PROCESSING


Regarding unconscious action, there are several kinds of actions that can occur
while subjects are in what appears to be an unconscious state (Laureys, 2005;
see review in Morsella & Bargh, 2011). Actions such as automatic ocular pursuit
and some reflexes (e.g., pupillary reflex) can occur in certain forms of coma and
persistent vegetative states (Klein, 1984; Laureys, 2005; Pilon & Sullivan, 1996).
In addition, licking, chewing, swallowing, and other behaviors can occur uncon-
sciously once the incentive stimulus activates the appropriate receptors (Bindra,
1974; Kern et al. 2001). Research on the kinds of “automatisms” exhibited during
epileptic seizures, in which the patient appears to be unconscious or to not have any
conscious control, has revealed unconsciously mediated stereotypic actions such
as simple motor acts (Kutlu et al., 2005), spitting (Carmant et al., 1994), humming
(Bartolomei et al., 2002), and oroalimentary automatisms (Maestro et al., 2008).
Even written and spoken (nonsense) utterances (Blanken, Wallesch, & Papagno,
1990), sexual behaviors (Spencer et al., 1983), and rolling, pedaling, and jumping
(Kaido et al., 2006) can be found to occur in a reflexive manner during seizures.
There are cases in which, during seizures, patients sing recognizable songs (Doherty
et al., 2002) or express repetitive affectionate kissing automatisms (Mikati, Comair,
& Shamseddine, 2005). In narcolepsy (Zorick et al., 1979) and somnambulism
(Plazzi et al., 2005; Schenk & Mahowald, 1995), there, too, are complex, uncon-
scious behaviors (e.g., successfully negotiating objects).
Convergent evidence for the existence of unconscious action is found in neuro-
logical cases in which, following brain injury in which a general awareness is spared,
actions become decoupled from consciousness, as in blindsight (Weiskrantz, 1997),
in which patients report being blind but still exhibit visually guided behaviors.
Analogously, in blind smell (Sobel et al., 1999), people can learn to associate odorants
with certain environments (e.g., a particular room), even though the concentration
of odorants presented during learning was consciously imperceptible. Similarly, in
alien hand syndrome (Bryon & Jedynak, 1972), anarchic hand syndrome (Marchetti &
Della Sala, 1998), and utilization behavior syndrome (Lhermitte, 1983), brain damage
causes hands and arms to function autonomously. These actions include relatively
complex goal-directed behaviors (e.g., the manipulation of tools; Yamadori, 1997)
that are maladaptive and, in some cases, can be at odds with a patient’s reported
intentions (Marchetti & Della Sala, 1998). In addition, Goodale and Milner (2004)
report neurological cases in which there is a dissociation between action and con-
scious perception. Suffering from visual form agnosia, patient D.F. was incapable
of reporting the orientation of a tilted slot but could nonetheless negotiate the slot
accurately when inserting an object into it.
Theorists have concluded from these findings that there are two different corti-
cal visual pathways that are activated in the course of perception, a dorsal pathway
that supports actional responses (“what to do”) and a ventral pathway that supports
semantic knowledge regarding the object (“what it is”; see review in Westwood,
2009). Mounting evidence suggests that it is the dorsal (actional) system that oper-
ates outside of conscious awareness, while the operation of the ventral system is
normally associated with awareness (Decety & Grèzes, 1999; Jeannerod, 2003).
Findings regarding perception-action dissociations corroborate what motor
theorists have long known—that one is unconscious of the motor programs guid-
ing action (Rosenbaum, 2002). In addition to action slips and spoonerisms, highly
flexible and “online” adjustments are made unconsciously during an act such as
grasping a fruit. For several reasons (see treatments of this topic in Gray, 2004;
Grossberg, 1999; Rosenbaum, 2002), one is unconscious of these complicated
programs that calculate which muscles should be activated at a given time but is
often aware of the proprioceptive and perceptual consequences of these programs
(e.g., perceiving the hand grasping; Gray, 2004; Gottlieb & Mazzoni, 2004; Helen
and Haggard, 2005). In short, there is a plethora of findings showing that one is
unconscious of the adjustments that are made “online” as one reaches for an object
(Fecteau et al., 2001; Heath et al., 2008; Rossetti, 2001). Many experimental tricks
are based on the fact that one has little if any conscious access to motor programs. In
an experiment by Fourneret and Jeannerod (1998), participants were easily fooled
into thinking that their hand moved in one direction when it had actually moved in a
different direction (through false feedback on the computer display).
In conclusion, there is substantial evidence that complex actions can transpire
without conscious mediation. At first glance, these actions are not identifiably less
flexible, complex, controlling, deliberative, or action-like than their conscious coun-
terparts (Bargh & Morsella, 2008).
Regarding unconscious processing, “supraliminal” (consciously perceptible)
stimuli in our immediate environment can exert forms of unconscious “stimulus
control,” leading to unconscious action tendencies. Consistent with this standpoint,
findings suggest that incidental stimuli (e.g., hammers) can automatically prepare us
to physically interact with the world (Tucker & Ellis, 2004; see neuroimaging evi-
dence in Grèzes & Decety, 2002; Longcamp et al., 2005). For instance, perceiving a
cylinder unconsciously increases one’s tendency to perform a power grip (Tucker &
Ellis, 2004). In addition, it has been shown that, in choice response time tasks, the
mere presence of musical notation influences the responses of musicians but not of
nonmusicians (Levine, Morsella, & Bargh, 2007; Stewart et al. 2003). Consistent
with these findings, unconscious action tendencies are readily evident in classic
laboratory paradigms such as the Stroop task2 (Stroop, 1935) and the flanker task
(Eriksen & Schultz, 1979).
In studies involving supraliminal priming of complex social behavior, it has been
demonstrated that many of our complex behaviors occur automatically, determined
by causes far removed from our awareness. Behavioral dispositions can be influenced
by covert stimuli—when presented with supraliminal words associated with the ste-
reotype “old,” people walk slower (Bargh, Chen, & Burrows, 1996); when presented
with stimuli associated with the concept “library,” people make less noise (Aarts &
Dijksterhuis, 2003); and when primed with “hostility,” people become more aggressive
(Carver et al., 1983). These effects have been found not only with verbal stimuli that
are semantically related to the goal (as in many studies) but also with material objects.
For example, backpacks and briefcases prime cooperation and competitiveness,
respectively (Kay et al., 2004); candy bars prime tempting hedonic goals (Fishbach,
Friedman, & Kruglanski, 2003); dollar bills prime greed (Vohs, Mead, & Goode,
2006); scents such as cleaning fluids prime cleanliness goals (Holland, Hendriks, &
Aarts, 2005); sitting in a professor’s chair primes social behaviors associated with
power (Chen, Lee-Chai, & Bargh, 2001; Custers et al., 2008); control-related words
prime the reduction of prejudice (Araya et al., 2002); and the names of close relation-
ship partners (e.g., mother, friend) prime the goals that those partners have for the
individual as well as those goals the individual characteristically pursues when with
the significant other (Fitzsimons & Bargh, 2003; Shah, 2003). In addition, there is evi-
dence that one can unconsciously process task-irrelevant facial expressions (Preston
& Stansfield, 2008) and be automatically vigilant toward negative or harmful stimuli
(Öhman, Flykt, & Esteves, 2001; Okon-Singer, Tzelgov, & Henik, 2007) or toward
undesirable tendencies such as stereotyping (Glaser, 2007).
Similar “unconsciously mediated” responses have been expressed toward stimuli
that have been rendered imperceptible (“subliminal”) through techniques such as
backward masking, in which a stimulus (e.g., a word) is presented for a brief dura-
tion (e.g., 17 milliseconds) and is then followed by a pattern mask (e.g., #####).
Under such conditions, subjects report that they were unable to perceive the word.
It has been shown that subliminal stimuli can still influence motor responses, atten-
tion shifts, emotional responses, and semantic processes (Ansorge et al., 2007), at
least to a certain extent. For example, in a choice response time task, response times
for responses to subliminal (masked) stimuli are the same as those for responses to
supraliminal stimuli (Taylor & McCloskey, 1990). In addition, subjects can select the
correct motor response (one of two button presses) when confronted with sublimi-
nal stimuli, suggesting that “appropriate programs for two separate movements can
be simultaneously held ready for use, and that either one can be executed when trig-
gered by specific stimuli without subjective awareness” (Taylor & McCloskey, 1996,
62; see review in Hallett, 2007). Interestingly, it has been demonstrated that present-
ing subjects with “2 × 3” subliminally primes naming the number “6” (García-Orza
et al., 2009). Moreover, some forms of Pavlovian, evaluative, and operant conditioning
may occur unconsciously (Duckworth et al., 2002; Field, 2000; Olson & Fazio, 2001;
Olsson & Phelps, 2004; Pessiglione et al., 2007). According to Strahan, Spencer, and
Zanna (2002), certain action plans (e.g., eating popcorn) can be influenced by sub-
liminal stimuli only when those plans are already motivated (e.g., when one is hun-
gry). Subliminal stimuli can influence behavioral inclinations such as motivation and
emotional states (e.g., as indexed by the skin conductance response; Olsson & Phelps,
2004; Pessiglione et al., 2008). Together, these findings reveal that subliminal stimuli
can influence cognitive processing and behavior, at least to some extent.
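
To make the trial structure of the backward-masking procedure described above concrete,
the following minimal Python sketch lays out the event sequence of a single masked-priming
trial. Only the roughly 17-millisecond prime duration is taken from the text; the fixation
and mask durations, the function name, and the example stimuli (a “2 × 3” prime and a “6”
target, echoing García-Orza et al., 2009) are illustrative assumptions rather than
parameters reported in the studies cited.

# A minimal sketch of one backward-masked priming trial: a prime is flashed very
# briefly and immediately replaced by a pattern mask, after which a target calls
# for a speeded response. Durations other than the ~17 ms prime are assumptions.
from typing import List, Tuple

def masked_prime_trial(prime: str, target: str,
                       prime_ms: int = 17) -> List[Tuple[str, int]]:
    """Return the (display, duration in milliseconds) sequence for one trial."""
    return [
        ("+", 500),          # fixation cross
        (prime, prime_ms),   # brief prime; too short to be reported as seen
        ("#####", 100),      # pattern mask interrupts further processing of the prime
        (target, 2000),      # target display; the subject makes a choice response
    ]

if __name__ == "__main__":
    for display, duration in masked_prime_trial("2 x 3", "6"):
        print(f"{display:>6}  {duration:4d} ms")

Whether a prime presented this briefly is in fact unreportable depends on display hardware
and masking parameters; the sketch fixes only the order of events.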

THE UNIQUE CONTRIBUTION OF CONSCIOUS PROCESSING OR THE “PHENOMENAL STATE”
According to the integration consensus (Morsella, 2005), consciousness furnishes the
nervous system with a form of internal communication that integrates neural activi-
ties and information-processing structures that would otherwise be independent
(i.e., unintegrated). In virtue of conscious states, diverse kinds of information are
gathered in some sort of global workspace (see reviews in Baars, 2002; Merker,
2007; Morsella, 2005). However, for some time it was unclear which kinds of infor-
mation must be distributed and integrated in a conscious manner and which kinds
can be distributed and integrated unconsciously: not all kinds of information are
capable of being distributed globally (e.g., neural activity related to reflexes, veg-
etative functions, unconscious motor programs, and low-level perceptual analy-
ses), and many kinds can be disseminated and combined with other kinds without
conscious mediation, as in the many cases of intersensory processing. For example,
the McGurk effect (McGurk & MacDonald, 1976) involves interactions between
visual and auditory processes: an observer views a speaker mouthing “ga” while
presented with the sound “ba.” Surprisingly, the observer is unaware of any intersen-
sory interaction, perceiving only “da.” Similarly, the ventriloquism effect involves
unconscious interactions between vision and audition (Morsella, 2005). There are
countless cases of unconscious intersensory interactions (see list in Morsella, 2005,
Appendix A). These phenomena are consistent with the idea that consciousness is
unnecessary, at least in some cases, to integrate information from different modali-
ties. Hence, which kinds of integration require consciousness?
Supramodular Interaction Theory (SIT; Morsella, 2005) addresses this issue
by contrasting the task demands of consciously impenetrable processes (e.g., pupil-
lary reflex, peristalsis, intersensory conflicts, and “vegetative” actions) and con-
sciously penetrable processes (e.g., pain, urge to breathe when holding one’s breath).
Specifically, SIT contrasts interactions that are consciously impenetrable with con-
scious conflicts, a dramatic class of interactions (e.g., one system vetoing the action
tendencies of another system) between different information-processing systems.
For example, when one experiences the common event of holding one’s breath
underwater, withstanding pain, or suppressing elimination behaviors, one is simul-
taneously conscious of the inclinations to perform certain actions and of the incli-
nations to not do so. SIT builds on the integration consensus by proposing that
consciousness is required to integrate information, but only certain kinds of infor-
mation. Specifically, it is required to integrate information from specialized, high-
level (and often multimodal) systems that are unique in that they may conflict with
skeletal muscle plans, as described by the principle of Parallel Responses into Skeletal
Muscle (PRISM; Morsella, 2005). These supramodular systems are defined in terms
of their “concerns” (e.g., bodily needs) rather than in terms of their sensory afference
(e.g., visual, auditory). Operating in parallel, supramodular systems may have dif-
ferent operating principles, concerns, and phylogenetic histories (Morsella, 2005).
For example, an air-intake system has the skeletomotor tendencies of inhaling; a
tissue-damage system has those of pain withdrawal; an elimination system has those
of micturating and defecating; a food-intake system has those of licking, chewing,
and swallowing. These systems have been referred to as the incentive response systems
(Morsella, 2005). Each system can influence action directly and unconsciously
(as in the case of unintegrated action; Morsella & Bargh, 2011), but it is only through
consciousness that they can influence action collectively, leading to integrated action
(Morsella & Bargh, 2011). Integrated action occurs during a conscious conflict
(e.g., when carrying a scorching hot plate or holding one’s breath).

VOLITION IS MOST INTIMATELY RELATED TO ONE OF THREE FORMS OF BINDING IN THE BRAIN
Thus, in the nervous system there are three distinct kinds of integration or “binding.”
Perceptual binding (or afference binding) is the binding of perceptual processes and
representations. This occurs in intersensory binding, as in the McGurk effect, and
in intrasensory, feature binding (e.g., the binding of shape to color; Zeki & Bartels,
1999). Another form of binding, linking perceptual processing to action/motor pro-
cessing, is known as efference binding (Haggard et al., 2002). This kind of stimulus-
response binding is what allows one to learn to press a button when presented with
a cue in a laboratory paradigm. Research has shown that responding on the basis of
efference binding can occur unconsciously. Again, Taylor and McCloskey (1990)
demonstrated that, in a choice response time task, response times for responses to
subliminal (masked) stimuli were the same as those for responses to supraliminal
stimuli. In addition, in a series of studies involving subliminal stimuli, Taylor and
McCloskey (1996) demonstrated that subjects could select the correct motor
response (one of two button presses) when confronted with subliminal stimuli (see
review in Hallett, 2007). The third kind of binding, efference-efference binding, occurs
when two streams of efference binding are trying to influence skeletomotor action at
the same time. This occurs in the incongruent conditions of interference paradigms,
in which stimulus dimensions activate competing action plans. It also occurs when
one holds one’s breath, suppresses a prepotent response, or experiences another form
of conscious conflict. In the SIT framework (Figure 10.1), it is the instantiation of
conflicting efference-efference binding that requires consciousness. Consciousness
is the “cross-talk” medium that allows such actional processes to influence action col-
lectively. Absent consciousness, behavior can be influenced by only one of the effer-
ence streams, leading to unintegrated actions such as unconsciously inhaling while
underwater or reflexively removing one’s hand from a hot object.
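
The contrast between the second and third forms of binding can be put schematically. The
toy Python sketch below is not the authors’ model; it simply treats each response system
as a function that returns a skeletomotor inclination together with a strength, so that
unintegrated action is driven by a single efference stream while integrated action lets
competing streams constrain the outcome jointly (as when breath-holding wins out over the
urge to inhale). All names and numbers are illustrative assumptions.

# A toy illustration (an assumption-laden sketch, not SIT itself) of unintegrated
# versus integrated skeletomotor action. Each "response system" maps the current
# state to an (action, strength) inclination.

def air_intake_system(state):
    # Inclination to inhale, growing with the urge to breathe.
    return ("inhale", state["urge_to_breathe"])

def plan_system(state):
    # Inclination to keep holding one's breath while still underwater.
    return ("suppress inhaling", 0.9 if state["underwater"] else 0.0)

def unintegrated_action(state):
    """Efference binding only: one stream drives output, blind to the other."""
    action, _ = air_intake_system(state)
    return action

def integrated_action(state):
    """Efference-efference binding: competing inclinations shape output together."""
    inclinations = [air_intake_system(state), plan_system(state)]
    action, _ = max(inclinations, key=lambda pair: pair[1])
    return action

if __name__ == "__main__":
    state = {"underwater": True, "urge_to_breathe": 0.6}
    print(unintegrated_action(state))  # "inhale": reflexive and maladaptive here
    print(integrated_action(state))    # "suppress inhaling": the integrated outcome

On the chapter’s account it is the shared phenomenal field, rather than any explicit
arbiter like the max operation used here, that makes such joint constraint possible; the
comparison step is only a stand-in.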
Not requiring such cross-talk, unconscious perceptual processes (e.g., as in the
attentional blink; Raymond, Shapiro, & Arnell, 1992) involve smaller networks of
brain areas than phenomenal processes (Sergent & Dehaene, 2004), and automatic
behaviors (e.g., reflexive pharyngeal swallowing) are believed to involve substan-
tially fewer brain regions than their intentional counterparts (e.g., volitional swal-
lowing; Kern et al., 2001; Ortinski & Meador, 2004). These findings are consistent
with the tenets of both SIT and the more general integration consensus. Supporting
SIT’s notion that the suppression of a skeletomotor act requires conscious media-
tion, Brass and Haggard (2007) present fMRI evidence that there is greater activa-
tion in a certain area of the frontomedian cortex when planned actions are canceled
than when they are carried through.
According to SIT, one can breathe unconsciously, but consciousness is required
to suppress breathing. Similarly, one can unconsciously emit a pain-withdrawal
response, but one cannot override such a response for food or water concerns with-
out consciousness. Similar classes of conflict involve air-intake, food-intake, water-
intake, sleep onset, and the various elimination behaviors. Supramodular systems
(“supramodular” because they are “beyond” the basic Fodorian module such as a
feature detector) are inflexible in the sense that, without consciousness, they are
incapable of taking information generated by other systems into account. For exam-
ple, the tissue-damage system is “encapsulated” in the sense that it will protest (e.g.,
create subjective avoidance tendencies) the onset of potential tissue damage even
when the action engendering the damage is lifesaving. Regardless of the adaptive-
ness of one’s plan (e.g., running across hot desert sand to reach water), the strife that
is coupled with conflict cannot be turned off voluntarily (Morsella, 2005). Under
conditions of conflict, inclinations can be behaviorally suppressed but not mentally
suppressed (Bargh & Morsella, 2008). Although actional systems that are phyloge-
netically ancient may no longer influence behavior directly, they now influence the
nature of consciousness: inclinations continue to be experienced consciously, even
when they are not expressed behaviorally.

Figure 10.1 Fodorian modules operate within a few multimodal, supramodular response
systems, each defined by its concern. Afference binding within systems can be unconscious.
Although the response systems can influence action directly (illustrated by the arrows
on the right), only in virtue of conscious states can they interact and influence action
collectively, as when one holds one’s breath (the path illustrated on the left). The sense of
agency is most intimately associated with this efference-efference binding.

NO HOMUNCULUS IS REQUIRED FOR “VOLITIONAL” PROCESSING


Although phenomena such as alien hand syndrome (Bryon & Jedynak, 1972),
anarchic hand syndrome (Marchetti & Della Sala, 1998), and utilization behavior
syndrome (Lhermitte, 1983) have been explained as resulting from impaired super-
visory processes (e.g., Shallice et al., 1989), SIT proposes that they are symptoms of
a more basic condition—the lack of adequate cross-talk (i.e., interactions) among
response systems. Without one system checking another, unintegrated actions arise,
wherein one system guides behavior and is uninfluenced by the concerns of another
system. In this way, perhaps it is better to compare the phenomenal field not to a sur-
veillance system but to a senate, in which representatives from different provinces are
always in attendance, regardless of whether they should sit quietly or debate. In other
words, phenomenal states allow for the channels of communication across systems
to always be open (see discussion of chronic engagement in Morsella, 2005).
In phylogeny, the introduction of new structures (e.g., organs and tissues)
involves complex, often competitive interactions with extant ones. This is known as
the “struggle of parts” problem (cf. Mayr, 2001), and it may have been a formidable
challenge during the evolution of something as complex as the human nervous sys-
tem. Although such integration could conceivably occur without something like
phenomenal states (as in an automaton or in an elegant “blackboard” neural net-
work with all its modules nicely interconnected), such a solution was not selected
in our evolutionary history. Instead, and for reasons that only the happenstance
and tinkering process of evolution could explain (Gould, 1977; Simpson, 1949),
it is proposed that these physical processes were selected to solve this large-scale,
cross-talk problem. We will now discuss how the senses (or illusion) of volition and
agency arise from these conscious states.
The sense of agency and authorship processing (i.e., attributing actions to oneself;
Wegner, 2003) are based on several high-level processes, including the perception
of a lawful correspondence between action intentions and action outcomes (Wegner,
2003). Research has revealed that experimentally manipulating the nature of this
correspondence leads to systematic distortions in the sense of agency/authorship,
such that subjects can be fooled into believing that they caused actions that were
in fact caused by someone else (Wegner, 2002). Linser and Goschke (2007) dem-
onstrate that feelings of control are based on unconscious comparisons of actual
action-effect sequences to the anticipated sequence: “matches” result in feelings of
control, and mismatches result in the effect being attributed to an external source.
Hence, when intentions and outcomes mismatch, as in action slips and spooner-
isms, people are less likely to perceive actions as originating from the self (Wegner,
2002). Similar self-versus-other attributions are found in intrapsychic conflicts
(Livnat & Pippenger, 2006), as captured by the “monkey on one’s back” metaphor
that is often used to describe the tendencies associated with aspects of addiction.
Accordingly, in the classic Stroop task, participants perceive the activation of the
undesired word-reading plans as less associated with the self when the plans conflict
with intended action (e.g., in the incongruent condition) than when the same plans
lead to no such interference (e.g., in the congruent condition; Riddle & Morsella,
2009). In two interference paradigms, response interference was associated with
weakened perceptions of control and stronger perceptions of competition (Riddle &
Morsella, 2009). It is important to appreciate that, despite these introspective
judgments, and as revealed in recent action production research, there need be no
homunculus in charge of suppressing one action in order to express another action,
as concluded by Curtis and D’Esposito (2009): “No single area of the brain is spe-
cialized for inhibiting all unwanted actions” (72). For example, in the morning,
action plan A may conflict with action plan B; and, in the evening, plan C may con-
flict with D, with there never being the same third party (a homunculus) observing
each conflict. Ideomotor approaches (Greenwald, 1970; Hommel, 2009; Hommel
et al., 2001) have arrived at a similar conclusion: Lotze (1852) and James’s (1890)
“acts of express fiat” referred not to a homunculus reining in action but rather to
the actions of an incompatible idea (i.e., a competing action plan). From this stand-
point, instead of a homunculus, there exists a forum in which representations vie
for action control. In synthesis, it may not be that there is something akin to a self
or supervisor overlooking action conflicts, but that the sense of agency emerges as
a high-level cognition, a construction based on more basic processing, such as the
conflict between actional systems.
Regarding the topic of voluntary action, one should consider that, more than any
other effector system (e.g., smooth muscle), skeletal muscle is influenced by distinct (and
often opposing) systems/regions of the brain. Figuratively speaking, the skeletal muscle
system is a steering wheel that is controlled by many systems, each with its own agenda.
Thus, action selection suffers from the “degrees of freedom” problem (Rosenbaum, 2002),
in which there are countless ways in which to perform a given action. For instance, there
are many ways to grab a cup of coffee: one could grab it with the left hand or the right
hand, with a power grip or precision grip, or with three versus four fingers. This challenge
of multiple possibilities in action selection is met not by unconscious motor algorithms
(as in motor control; Rosenbaum, 2002) but by the ability of conscious states to con-
strain what the organism does by having the inclinations of multiple systems constrain
skeletomotor output: whether by the conscious percept of a doorway, an inclination
toward an incentive stimulus, or the urge to refrain from doing something impulsive in
public, consciousness minimizes the degrees of freedom problem.

CONCLUSION
By following Sperry’s (1952) recommendation and identifying the primary func-
tion of consciousness by taking the untraditional approach of working backward
from overt voluntary action to the central processes involved (instead of working
forward from perceptual processing toward central processes), one can appreciate
that what consciousness is for is more “nuts-and-boltsy” than what has been pro-
posed historically: at this stage of understanding, it seems that the primary function
of consciousness is to instantiate a unique form of binding in the nervous system.
This kind of integration (efference-efference binding) is intimately related to the
skeletal muscle system, the sense of agency, and volition.

ACKNOWLEDGMENT
This chapter is based in part on ideas first reported in Morsella (2005) and Morsella
and Bargh (2011).

NOTES
1. Often referred to as “subjective experience,” “qualia,” “sentience,” “phenomenal
states,” and “awareness,” basic consciousness has proven to be difficult to describe
and analyze but easy to identify, for it constitutes the totality of our experience.
Perhaps this basic form of consciousness has been best defined by Nagel (1974), who
claimed that an organism has basic consciousness if there is something it is like to be
that organism—something it is like, for example, to be human and experience pain,
love, breathlessness, or yellow afterimages. Similarly, Block (1995) claimed, “The
phenomenally conscious aspect of a state is what it is like to be in that state” (227).
2. In this task, participants name the colors in which stimulus words are written as
quickly and as accurately as possible. When the word and color are incongruous
(e.g., RED presented in blue), response interference leads to increased error rates,
response times, and reported urges to make a mistake (Stroop, 1935; Morsella et
al., 2009). When the color matches the word (e.g., RED presented in red), or is
presented on a neutral stimulus (e.g., a series of X’s as in “XXXX”), there is little or
no interference.

REFERENCES
Aarts, Henk, and Ap Dijksterhuis. “The silence of the library: Environment, situational
norm, and social behavior.” Journal of Personality and Social Psychology 84, no. 1 (2003):
18–28.
Ansorge, Ulrich, Odmar Neumann, Stefanie I. Becker, Holger Kälberer, and Holk Cruse.
“Sensorimotor supremacy: Investigating conscious and unconscious vision by masked
priming.” Advances in Cognitive Psychology 3, nos. 1–2 (2007): 257–274.
Araya, Tadesse, Nazar Akrami, Bo Ekehammar, and Lars-Erik Hedlund. “Reducing prej-
udice through priming of control-related words.” Experimental Psychology 49, no. 3
(2002): 222–227.
Baars, Bernard J. “The conscious access hypothesis: Origins and recent evidence.” Trends
in Cognitive Sciences 6, no. 1 (2002): 47–52.
Banks, William P. “Evidence for consciousness.” Consciousness and Cognition 4, no. 2
(1995): 270–272.
Bargh, John A., Mark Chen, and Lara Burrows. “Automaticity of social behavior: Direct
effects of trait construct and stereotype activation on action.” Journal of Personality and
Social Psychology 71, no. 2 (1996): 230–244.
Bargh, John A., and Ezequiel Morsella. “The unconscious mind.” Perspectives on
Psychological Science 3, no. 1 (2008): 73–79.
Bartolomei, Fabrice, Fabrice Wendling, Jean-Pierre Vignal, Patrick Chauvel, and Catherine
Liegois-Chauvel. “Neural networks underlying epileptic humming.” Epilepsia 43, no. 9
(2002): 1001–1012.
Bindra, Dalbir. “A motivational view of learning, performance, and behavior modifica-
tion.” Psychological Review 81, no. 3 (1974): 199–213.
Blanken, Gerhard, Claus-W Wallesch, and C. Papagno. “Dissociations of language func-
tions in aphasics with speech automatisms (recurring utterances).” Cortex 26, no. 1
(1990): 41–63.
Block, Ned. “On a confusion about a function of consciousness.” Behavioral and Brain
Sciences, 18, no. 2 (1995): 227–287.
Brass, Marcel, and Patrick Haggard. “To do or not to do: The neural signature of
self-control.” Journal of Neuroscience 27, no. 34 (2007): 9141–9145.
Bryon, S., and C. P. Jedynak. "Troubles du transfert interhemispherique: A propos de trois observations de tumeurs du corps calleux. Le signe de la main étrangère." Revue Neurologique 126 (1972): 257–266.
Carlson, Neil R. Physiology of behavior. Needham Heights, MA: Allyn and Bacon, 1994.
Carmant, Lionel, James J. Riviello, Elizabeth. A. Thiele, Uri Kramer, Sandra L. Helmers,
Mohamed Mikati, Joseph R. Madsen, Peter McL. Black, and Gregory L. Holmes.
“Compulsory spitting: An unusual manifestation of temporal lobe epilepsy.” Journal of
Epilepsy 7, no. 3 (1994): 167–170.
Carver, Charles S., Ronald J. Ganellen, William J. Froming, and William Chambers.
“Modeling: An analysis in terms of category accessibility.” Journal of Experimental Social
Psychology 19, no. 5 (1983): 403–421.
Chen, Serena, Annette Y. Lee-Chai, and John A. Bargh. “Relationship orientation as a
moderator of the effects of social power.” Journal of Personality and Social Psychology 80,
no. 2 (2001): 173–187.
Curtis, Clayton E., and Mark D’Esposito. “The inhibition of unwanted actions.” In The
Oxford handbook of human action, edited by Ezequiel Morsella, John A. Bargh, and
Peter M. Gollwitzer, 72–97. New York: Oxford University Press, 2009.
Custers, Ruud, Marjolein Maas, Miranda Wildenbeest, and Henk Aarts. “Nonconscious
goal pursuit and the surmounting of physical and social obstacles.” European Journal of
Social Psychology 38, no. 6 (2008): 1013–1022.
Damasio, Antonio R. The feeling of what happens: Body and emotion in the making of con-
sciousness. New York: Harcourt Brace, 1999.
Decety, Jean, and Julie Grèzes. “Neural mechanisms subserving the perception of human
actions.” Trends in Cognitive Sciences 3, no. 5 (1999): 172–178.
Doherty, M. J., A. J. Wilensky, M. D. Holmes, D. H. Lewis, J. Rae, and G. H. Cohn. “Singing
seizures.” Neurology 59 (2002): 1435–1438.
Duckworth, Kimberly L., John A. Bargh, Magda Garcia, and Shelly Chaiken. “The auto-
matic evaluation of novel stimuli.” Psychological Science 13, no. 6 (2002): 513–519.
Eriksen, C. W., and D. W. Schultz. “Information processing in visual search: A continuous flow
conception and experimental results.” Perception and Psychophysics 25 (1979): 249–263.
Fecteau, Jillian H., Romeo Chua, Ian Franks, and James T. Enns. “ Visual awareness and
the online modification of action.” Canadian Journal of Experimental Psychology 55, no.
2 (2001): 104–110.
Field, Andy P. “I like it but I’m not sure why: Can evaluative conditioning occur without
conscious awareness?” Consciousness and Cognition 9, no. 1 (2000): 13–36.
Fishbach, Ayelet, Ronald S. Friedman, and Arie W. Kruglanski. “Leading us not unto temp-
tation: Momentary allurements elicit overriding goal activation.” Journal of Personality
and Social Psychology 84, no. 2 (2003): 296–309.
Fitzsimons, Gráinne M., and John A. Bargh. “Thinking of you: Nonconscious pursuit of
interpersonal goals associated with relationship partners.” Journal of Personality and
Social Psychology 84, no. 1 (2003): 148–163.
Fourneret, Pierre, and Marc Jeannerod. “Limited conscious monitoring of motor perfor-
mance in normal subjects.” Neuropsychologia 36, no. 11 (1998): 1133–1140.
García-Orza, Javier, Jesus Damas-López, Antonio Matas, and José Miguel Rodríguez.
“‘2 × 3’ primes naming ‘6’: Evidence from masked priming.” Attention, Perception, and
Psychophysics 71, no. 3 (2009): 471–480.
Glaser, Jack. "Contrast effects in automatic affect, cognition, and behavior." In Assimilation and contrast in social psychology, edited by Diederik A. Stapel and Jerry Suls, 229–248. New York: Psychology Press, 2007.
Goodale, Melvyn A., and David Milner. Sight unseen: An exploration of conscious and uncon-
scious vision. New York: Oxford University Press, 2004.
Gottlieb, Jacqueline, and Pietro Mazzoni. “Neuroscience: Action, illusion, and percep-
tion.” Science 303, no. 5656 (2004): 317–318.
Gould, Stephen Jay. Ever since Darwin: Reflections in natural history. New York: Norton,
1977.
Gray, Jeffrey A. Consciousness: Creeping up on the hard problem. New York: Oxford
University Press, 2004.
Greenwald, Anthony G. “Sensory feedback mechanisms in performance control: With
special reference to the ideomotor mechanism.” Psychological Review 77, no. 2 (1970):
73–99.
Greenwald, Anthony G., and Anthony R. Pratkanis. “The self.” In Handbook of social cogni-
tion, edited by Robert S. Wyer and Thomas K. Srull, 129–178. Hillsdale, NJ: Erlbaum,
1984.
Grèzes, Julie, and Jean Decety. “Does visual perception of object afford action? Evidence
from a neuroimaging study.” Neuropsychologia 40, no. 2 (2002): 212–222.
Grossberg, Stephen. "The link between brain learning, attention, and consciousness."
Consciousness and Cognition 8 (1999): 1–44.
Haggard, Patrick, Gisa Aschersleben, Jörg Gehrke, and Wolfgang Prinz. “Action, binding,
and awareness.” In Common mechanisms in perception and action: Attention and perfor-
mance, edited by Wolfgang Prinz and Bernhard Hommel, 266–285. Oxford: Oxford
University Press, 2002.
Hallett, Mark. "Volitional control of movement: The physiology of free will." Clinical
Neurophysiology 117, no. 6 (2007): 1179–1192.
Heath, Matthew, Kristina A. Neely, Jason Yakimishyn, and Gordon Binstead. “ Visuomotor
memory is independent of conscious awareness of target features.” Experimental Brain
Research 188, no. 4 (2008): 517–527.
Hesslow, Germund. “Conscious thought as simulation of behavior and perception.” Trends
in Cognitive Sciences 6, no. 6 (2002): 242–247.
Holland, Rob W., Merel Hendriks, and Henk Aarts. “Smells like clean spirit: Nonconscious
effects of scent on cognition and behavior.” Psychological Science 16, no. 9 (2005):
689–693.
Hommel, Bernhard. “Action control according to TEC (theory of event coding).”
Psychological Research 73, no. 4 (2009): 512–526.
Hommel, Bernhard, Jochen Müsseler, Gisa Aschersleben, and Wolfgang Prinz. "The theory
of event coding: A framework for perception and action planning.” Behavioral and Brain
Sciences 24, no. 5 (2001): 849–937.
James, William. The principles of psychology. New York: Dover, 1890.
Jeannerod, Marc. “Simulation of action as a unifying concept for motor cognition.” In
Taking action: Cognitive neuroscience perspectives on intentional acts, edited by Scott H.
Johnson-Frey, 139–164. Cambridge, MA: MIT Press, 2003.
Johnson, Helen, and Patrick Haggard. “Motor awareness without perceptual awareness.”
Neuropsychologia 43, no. 2 (2005): 227–237.
Kaido, Takanobu, Taisuke Otsuki, Hideyuki Nakama, Yuu Kaneko, Yuichi Kubota, Kenji
Sugai, and Osamu Saito. “Complex behavioral automatism arising from insular cortex.”
Epilepsy and Behavior 8, no. 1 (2006): 315–319.
Kay, Aaron C., S. Christian Wheeler, John A. Bargh, and Lee Ross. “Material priming: The
influence of mundane physical objects on situational construal and competitive behav-
ioral choice.” Organizational Behavior and Human Decision Processes 95, no. 1 (2004):
83–96.
Kern, Mark K., Safwan Jaradeh, Ronald C. Arndorfer, and Reza Shaker. “Cerebral cortical
representation of reflexive and volitional swallowing in humans.” American Journal of
Physiology: Gastrointestinal and Liver Physiology 280, no. 3 (2001): G354–G360.
Klein, David B. The concept of consciousness: A survey. Lincoln: University of Nebraska Press, 1984.
Kouider, Sid, and Emmanuel Dupoux. “Partial awareness creates the illusion of subliminal
semantic priming.” Psychological Science 15, no. 2 (2004): 75–81.
Kutlu, Guinihal, Erhan Bilir, Atilla Erdem, Yasemin B. Gomceli, G. Semiha Kurt, and Ayse
Serdaroglu. “Hush sign: A new clinical sign in temporal lobe epilepsy.” Epilepsy and
Behavior 6, no. 3 (2005): 452–455.
Laureys, Steven. “The neural correlate of (un)awareness: Lessons from the vegetative
state.” Trends in Cognitive Sciences 12, no. 12 (2005): 556–559.
Levine, Lindsay R., Ezequiel Morsella, and John A. Bargh. “The perversity of inanimate
objects: Stimulus control by incidental musical notation." Social Cognition 25, no. 2
(2007): 265–280.
Lhermitte, F. “‘Utilization behavior’ and its relation to lesions of the frontal lobes.” Brain
106, no. 2 (1983): 237–255.
Linser, Katrin, and Thomas Goschke. “Unconscious modulation of the conscious experi-
ence of voluntary control.” Cognition 104, no. 3 (2007): 459–475.
Livnat, Adi, and Nicholas Pippenger. “An optimal brain can be composed of conflict-
ing agents.” Proceedings of the National Academy of Sciences, USA 103, no. 9 (2006):
3198–3202.
Longcamp, Marieke, Jean-Luc Anton, Muriel Roth, and Jean-Luc Velay. “Premotor activa-
tions in response to visually presented single letters depend on the hand used to write:
A study on left handers.” Neuropsychologia 43, no. 12 (2005): 1801–1809.
Lorenz, Konrad. On aggression. New York: Harcourt, Brace, and World, 1963.
Lotze, Rudolf Hermann. Medizinische Psychologie oder Physiologie der Seele. Leipzig: Weidmann'sche Buchhandlung, 1852.
Macphail, Euan M. The evolution of consciousness. New York: Oxford University Press,
1998.
Maestro, Iratxe, Mar Carreno, Antonio Donaire, Jordi Rumia, Gerardo Conesa, Nuria
Bargallo, Carlos Falcon, Xavier Setoain, Luis Pintor, and Teresa Boget. “Oroalimentary
automatisms induced by electrical stimulation of the fronto-opercular cortex in a
patient without automotor seizures.” Epilepsy and Behavior 13, no. 2 (2008): 410–412.
Marchetti, Clelia, and Sergio Della Sala. “Disentangling the alien and anarchic hand.”
Cognitive Neuropsychiatry 3, no. 3 (1998): 191–207.
Mayr, Ernst. What evolution is. London: Weidenfeld and Nicolson, 2001.
McGurk, Harry, and John MacDonald. “Hearing lips and seeing voices.” Nature 264
(1976): 746–748.
Merker, Bjorn. “Consciousness without a cerebral cortex: A challenge for neuroscience
and medicine.” Behavioral and Brain Sciences 30, no. 1 (2007): 63–134.
Mikati, Mohamed A., Youssef G. Comair, and Alhan N. Shamseddine. “Pattern-induced
partial seizures with repetitive affectionate kissing: An unusual manifestation of right
temporal lobe epilepsy.” Epilepsy and Behavior 6, no. 3 (2005): 447–451.
Morsella, Ezequiel. “The function of phenomenal states: Supramodular interaction the-
ory.” Psychological Review 112, no. 4 (2005): 1000–1021.
Morsella, Ezequiel, and John A. Bargh. “Unconscious action tendencies: Sources of
‘un-integrated’ action.” In Handbook of social neuroscience, edited by John T. Cacioppo
and John Decety, 335–347. New York: Oxford University Press, 2011.
Morsella, Ezequiel, Jeremy R. Gray, Stephen C. Krieger, and John A. Bargh. “The essence
of conscious conflict: Subjective effects of sustaining incompatible intentions.” Emotion
9, no. 5 (2009): 717–728.
Morsella, Ezequiel, Stephen C. Krieger, and John A. Bargh. “The function of conscious-
ness: Why skeletal muscles are ‘voluntary’ muscles.” In Oxford handbook of human
action, edited by Ezequiel Morsella, John A. Bargh, and Peter M. Gollwitzer, 625–634.
New York: Oxford University Press, 2009.
Nagel, Thomas. “What is it like to be a bat?” Philosophical Review 83, no. 4 (1974):
435–450.
Öhman, Arne, Anders Flykt, and Francisco Esteves. “Emotion drives attention: Detecting
the snake in the grass.” Journal of Experimental Psychology: General 130, no. 3 (2001):
466–478.
Okon-Singer, Hadas, Joseph Tzelgov, and Avishai Henik . “Distinguishing between auto-
maticity and attention in the processing of emotionally significant stimuli.” Emotion 7,
no. 1 (2007): 147–157.
Olson, Michael A., and Russell H. Fazio. “Implicit attitude formation through classical
conditioning.” Psychological Science 12, no. 5 (2001): 413–417.
Olsson, Andreas, and Elizabeth A. Phelps. “Learned fear of ‘unseen’ faces after Pavlovian,
observational, and instructed fear.” Psychological Science 15, no. 12 (2004): 822–828.
Ortinski, Pavel, and Kimford J. Meador. “Neuronal mechanisms of conscious awareness.”
Neurological Review 61, no. 7 (2004): 1017–1020.
Pessiglione, Matthias, Predrag Petrovic, Jean Daunizeau, Stefano Palminteri, Raymond J.
Dolan, and Chris D. Frith. “Subliminal instrumental conditioning demonstrated in the
human brain.” Neuron 59, no. 4 (2008): 561–567.
Pessiglione, Mathias, Liane Schmidt, Bogdan Draganski, Raffael Kalisch, Hakwan
Lau, Raymond J. Dolan, and Chris D. Frith. “How the brain translates money into
force: A neuroimaging study of subliminal motivation." Science 316, no. 5826 (2007):
904–906.
Pilon, Manon, and S. John Sullivan. “Motor profile of patients in minimally responsive and
persistent vegetative states.” Brain Injury 10, no. 6 (1996): 421–437.
Plazzi, Giuseppe, R. Vetrugno, F. Provini, and P. Montagna. “Sleepwalking and other
ambulatory behaviors during sleep.” Neurological Sciences 26 (2005): S193–S198.
Preston, Stephanie D., and R. Brent Stansfield. “I know how you feel: Task-irrelevant facial
expressions are spontaneously processed at a semantic level.” Cognitive, Affective, and
Behavioral Neuroscience 8, no. 1 (2008): 54–64.
Raymond, Jane E., Kimron L. Shapiro, and Karen M. Arnell. “Temporary suppression
of visual processing in an RSVP task: An attentional blink?” Journal of Experimental
Psychology: Human Perception and Performance 18, no. 3 (1992): 849–860.
Reisberg, Daniel. Cognition: Exploring the science of the mind. 2nd ed. New York: Norton,
2001.
Riddle, Travis A., and Ezequiel Morsella. “Is that me? Authorship processing as a function
of intra-psychic conflict.” Poster presented at the annual meeting of the Association for
Psychological Science, San Francisco, CA, May 2009.
Rosenbaum, David A. “Motor control.” In Stevens’ handbook of experimental psychology,
vol. 1, Sensation and perception, 3rd ed., edited by Hal Pashler and Steven Yantis, 315–
339. New York: Wiley, 2002.
Roser, Matthew, and Michael S. Gazzaniga. “Automatic brains—interpretive minds.”
Current Directions in Psychological Science 13, no. 2 (2004): 56–59.
Rossetti, Yves. “Implicit perception in action: Short-lived motor representation of space.”
In Finding consciousness in the brain: A neurocognitive approach, edited by Peter G.
Grossenbacher, 133–181. Amsterdam: John Benjamins, 2001.
Schenk, Carlos H., and Mark W. Mahowald. "A polysomnographically documented case of adult somnambulism with long-distance automobile driving and frequent nocturnal violence: Parasomnia with continuing danger and a noninsane automatism?" Sleep 18, no. 9 (1995): 765–772.
Sergent, Claire, and Stanislas Dehaene. “Is consciousness a gradual phenomenon?
Evidence for an all-or-none bifurcation during the attentional blink." Psychological
Science 15, no. 11 (2004): 720–728.
Shah, James Y. “The motivational looking glass: How significant others implicitly
affect goal appraisals.” Journal of Personality and Social Psychology 85, no. 3 (2003):
424–439.
Shallice, Tim, Paul W. Burgess, Frederick Shon, and Doreen M. Boxter. “The origins of
utilization behavior.” Brain 112, no. 6 (1989): 1587–1598.
Simpson, George G. The meaning of evolution. New Haven, CT: Yale University Press,
1949.
Sobel, Noam, Vivek Prabhakaran, Catherine A. Hartley, John E. Desmond, Gary H.
Glover, Edith V. Sullivan, John D. E. Gabrieli. “Blind smell: Brain activation induced by
an undetected air-borne chemical.” Brain 122, no. 2 (1999): 209–217.
Spencer, Susan S., Dennis D. Spencer, Peter D. Williamson, and Richard H. Mattson.
“Sexual automatisms in complex partial seizures.” Neurology 33 (1983): 527.
Sperry, Roger W. “Neurology and the mind-brain problem.” American Scientist 40, no. 2
(1952): 291–312.
Stewart, Lauren, Rik Henson, Knut Kampe, Vincent Walsh, Robert Turner, and Uta Frith.
“Brain changes after learning to read and play music.” NeuroImage 20, no. 1 (2003):
71–83.
Strahan, Erin J., Steven J. Spencer, and Mark P. Zanna. “Subliminal priming and persua-
sion: Striking while the iron is hot.” Journal of Experimental Social Psychology 38, no. 6
(2002): 556–568.
Stroop, John R . “Studies of interference in serial verbal reactions.” Journal of Experimental
Psychology 18, no. 6 (1935): 643–662.
Stuss, Donald T., and Vicki Anderson. “The frontal lobes and theory of mind:
Developmental concepts from adult focal lesion research.” Brain and Cognition 55, no.
1 (2004): 69–83.
Taylor, Janet L., and D. I. McCloskey. “Triggering of preprogrammed movements as reac-
tions to masked stimuli.” Journal of Neurophysiology 63, no. 3 (1990): 439–446.
Taylor, Janet L., and D. I. McCloskey. “Selection of motor responses on the basis of unper-
ceived stimuli.” Experimental Brain Research 110 (1996): 62–66.
Tucker, Mike, and Rob Ellis. “Action priming by briefly presented objects.” Acta Psychologica
116, no. 2 (2004): 185–203.
Vohs, Kathleen D., Nicole L. Mead, and Miranda R. Goode. “The psychological conse-
quences of money.” Science 314, no. 5802 (2006): 1154–1156.
Wegner, Daniel M. The illusion of conscious will. Cambridge, MA: MIT Press, 2002.
Wegner, Daniel M. “The mind’s best trick: How we experience conscious will.” Trends in
Cognitive Science 7, no. 2 (2003): 65–69.
Weiskrantz, L. Consciousness lost and found: A neuropsychological exploration. New York:
Oxford University Press, 1997.
Westwood, David A. “The visual control of object manipulation.” In Oxford handbook of
human action, edited by Ezequiel Morsella, John A. Bargh, and Peter M. Gollwitzer,
88–103. New York: Oxford University Press, 2009.
Yamadori, Atsushi. “Body awareness and its disorders.” In Cognition, computation, and
consciousness, edited by Masao Ito, Yasushi Miyashita, and Edmund T. Rolls, 169–176.
Washington, DC: American Psychological Association, 1997.
Zeki, S., and A. Bartels. “Toward a theory of visual consciousness.” Consciousness and
Cognition 8, no. 2 (1999): 225–259.
Zorick, F. J., P. J. Salis, T. Roth, and M. Kramer. “Narcolepsy and automatic behavior.”
Journal of Clinical Psychiatry 40, no. 4 (1979): 194–197.
11

Emotion Regulation and Free Will

NICO H. FRIJDA

Will, and free will in particular, is central to the human experience of action. People
want to act when it suits them. They have a strong sense that they take decisions and
that those decisions make a difference to their life and that of others. That sense is
ontogenetically early and important. I have a son who, at the age of just two years, reached for some coveted object. I, as a loving father, picked it up and gave
it to him. He fiercely threw it back, crying out, “David do!,” and started to reach
out again. He later delighted in visiting the record store, where he was free to roam among the abundance of obtainable options.
Psychology recognizes this “sense of self-efficacy” (Bandura, 1997) as a major fac-
tor in meeting adaptation problems. Emotion theory follows suit in positing a cogni-
tive variable labeled “appraised coping potential” as decisive for whether an event is
perceived as a threat or as a challenge (Lazarus, 1991). Feeling unable to influence
one’s fate supposedly leads to anomia, which was considered the cause of felt alien-
ation and increases in suicide in emerging industrial society (Durkheim, 1897).
In the Auschwitz concentration camp during World War II, the notion emerged
of “musulmen” (Levi, 1988, mentions it): inmates who fell into motionless apathy
after having lost all hope of being able to influence their fate. It shows that a belief
in self-efficacy can be a matter of life and death. The musulmen tended to die within
days, unless encouraged by the example or the help from other inmates who retained
some sense of coping potential, such as Milena Jesenska, the former girlfriend of
Franz Kafka (Buber-Neumann, 1989), or the Nacht und Nebel (Night and Fog) pris-
oners Pim Boellaards and Oscar Mohr in the Dachau camp (Withuis, 2008). Nor
do experiences of self-efficacy under hardship necessarily involve drama as in these
cases. A few years ago, a comic strip was published in which its author, Mohammed
Nadrani, recounts how he discovered when a prisoner in solitary confinement in
Morocco that he could regain his sense of identity by drawing a horse, and then his
name, with the help of a piece of charcoal fallen from his ceiling (Nadrani, 2005).
All this does not sit easily with the conviction in current cognitive psychology
that self-determination is an invalid concept, and the “self ” is not an agent (e.g., Prinz
2004, 2006). Moreover, all feeling and behavior is strictly determined by anteced-
ent causal events that leave no room for a mysterious force like “self-determination.”
Explanation of actions and feelings must be found at the subpersonal level of descrip-
tion at which no I or self exists: “I” and “self ” merely are socially transmitted concepts.
Freedom to choose—the very reality of a self that can exercise choice—is argued to
merely amount to a social institution, established for the purpose of assigning social
responsibility. Selves cannot really willfully affect their own fate (Prinz, 2004, 2006).
I will argue that this perspective is incorrect and fails to account for either behav-
ior or experience. Note that, if taken seriously, this perspective quickly leads to fatal-
ism and to becoming a musulman. It justifies feeling a victim where one is not, or
not necessarily. If the supposed mechanisms lead to such consequences, the sup-
positions must be mistaken.
It is true that intentional descriptions—“I want this,” “I do not like that”—do not
directly mirror underlying psychological processes. Underlying processes have to
be described at a subpersonal, functional, or psychological level (Dennett, 1978).
However, if our descriptions of subpersonal processes have the implications that one
feels a victim when one is not, these descriptions must be incomplete. The elemen-
tary desires to take things in hand, to make an imprint on the world, to affect one’s
fate, to practice self-determination are real at least sometimes; witness my David;
witness the other evidence just alluded to. I take the domain of emotion regulation
to examine this apparent contradiction.

EMOTION REGULATION
The term “emotion regulation” applies to emotional feeling or behavior that dif-
fers from the emotion or behavior an event by itself might otherwise have evoked.
Actual emotional behavior or feeling is often weaker or different. One is afraid but
keeps firm. One does desire one’s neighbor’s wife but suppresses the inclination,
and one may even succeed to get rid of the desire by seeking distraction or taking
to drink.
Emotion regulation generally results from emotional conflict. Two more or less
incompatible emotional inclinations operate at the same time or in close succession.
One has an emotional inclination or feeling. One feels fear and wants to run away;
one desires one’s neighbor’s wife and wants to be with her. But inclinations as well as
feelings may evoke unwanted consequences that in turn may evoke regulating inclina-
tions. Fear may make one feel a coward, and evoke contempt in others. Desiring one’s
neighbor’s wife may anger that wife and the neighbor, and evoke feelings of guilt. These
consequences are also emotional. They are also relevant to some emotional concerns.
Not only does one desire one’s neighbor’s wife. One also wants a clear conscience
and social harmony. One also wants self-respect and social appreciation. In fact, if the consequences of having or showing one's emotion were not themselves emotional (if they left one cold), no regulation would occur, except perhaps when regulation
aims at getting rid of the discomfort of a desire that cannot be fulfilled.
Regulation serves to deal with such conflicts. It can do so in several ways. One can
seek to prevent the unwanted consequences by suppressing action and inclination.
One can also prevent the to-be-regulated inclination from arising or developing.
Alternatively, one may find a compromise—an attenuated mode of action that will
achieve a little less satisfaction and no or a weaker aversive consequence.
Some regulation procedures proceed smoothly and automatically. Regulation
is often motivated by anxiety that operates by response inhibition (Gray &
McNaughton, 2000). Anger, for instance, often falls away when one’s target is big and
strong. Different forms of automatic regulation stem from culturally shaped modes
of appraising situations. These include culturally learned action programs that are
tailor-made for handling social conflict and satisfying conflicting concerns, such
as action programs for polite discourse, for discussing disagreements rather than
deploying hostility, and encouragement of cognitive strategies for mild appraisal
of other people’s hostile actions (Campos, Frankel, & Camras, 2004; Mesquita &
Albert, 2007).

EFFORTFUL REGULATION
But very often regulation does not occur automatically. One often has to expend
effort to suppress emotional expression. The reason is clear. Emotions are pas-
sions that instigate desires and actions; desires and actions tend to persist over
time, and they do so despite obstacles and interruptions. Emotions exert “con-
trol precedence” (Frijda, 2007). They also channel attention. The degree of con-
trol precedence in fact defines emotion intensity. How people rate the intensity
of their emotions correlates strongly with how they rate the features of control
precedence just mentioned (Sonnemans & Frijda, 1995). Regulation seeks to
influence these features: to diminish precedence, to attenuate action strength, to
slow down response, or to act differently from what passion instigates. Regulation
may require considerable effort that calls upon restricted resources of “willpower.”
Experiments have indeed shown that effortful decisions take energy (glucose con-
sumption, in this case; Baumeister, Muraven, & Tice, 2000; Baumeister, 2008). So
do controlled processing, mental effort, concentration, and attentional vigilance
generally (Mulder, 1980; Sanders, 1998).
The motivational role of expected response consequences differs importantly
from one instigation of emotion regulation to another. Julius Kuhl and Sander
Koole (2004) offer an important distinction between what they call self-control
and self-maintenance. Self-control consists of not performing some intended action,
doing something differently, or not doing it at all, as in most of the preceding
examples. The paradigm examples of self-control are improving social interaction,
resisting temptation, and improving instrumental action under emotional circum-
stances. In social interactions, one may lose consideration and make enemies. In
desire, one might offend consideration, prudence, moral rectitude, and decency,
and one may lose money. In panic, one may lose control over coherent escape; in
fear, one may drop one’s house key; trembling from anger may make one miss one’s
hit or shot.
Self-maintenance, by contrast, consists of undertaking and persisting in some
action despite anticipated unwanted response consequences. It is exemplified by
devoted passions, by costly interests and hobbies, by actions that do not comply
with social pressures, and by self-sacrificial behavior, ranging from jumping into
cold water to save a child, to undertaking an ideology-inspired suicide attack, and
to voicing social and political criticism in a hostile environment. It is illustrated by
strong passions in which the individual knowingly accepts and supports depriva-
tions and frustrations, as Paul Rée entertained toward Lou Salomé, who held him
at a distance, until he jumped to his death. It is paradoxically illustrated in the out-
comes of the Milgram (1974) experiments on obedience. 85 percent of subjects, in
the standard experiment, continued to obey the order to administer electric shocks
to presumed pupils to the strongest extent possible. But, as that figure implies,
15 percent did not. They refused to follow social pressure. Obvious instances of
self-maintenance are actions with self-sacrificial risks. The risks were obvious for
Hutu mothers who hid Tutsi children during the Rwanda genocide and who could
expect brutal murder if detected (Prunier, 1995). Such risks were also clear to rescu-
ers of persecuted Jews during World War II, as is evident in later interviews with
such rescuers collected by Reykovski (2001).
Self-maintenance is also illustrated by actions undertaken or maintained under
harsh circumstances called forth, in part, by those same actions. Examples include
people caught under political oppression who did not hide their views or did not
betray their comrades under torture, and others who chose suicide to prevent such
betrayal. For instance, there is the story, described in a cell of the Auschwitz concentration camp, of an Italian priest who was starved to death after having offered to take someone else's place for execution.
Functionally, these actions can be considered instances of emotion regulation.
Dying was preferred over having to continue living with awareness of having failed
when one could have lightened someone else's burden. Prejudicing one's own survival chances can also be manifest in expending energy to lighten someone else's burden, or simply in taking over a heavy task from another person, as did Milena Jesenska, Pim Boellaards, and Oscar Mohr, mentioned earlier.
How large a claim is made on energy resources by self-maintenance can be illus-
trated by the story of Marcia Merino, who became known under the name La Flaca
Alejandra. She was a member of a resistance group under the Pinochet dictatorship
in Chile, taken prisoner by the secret police and severely tortured and humiliated
for weeks or months. Exhausted, she was used by the secret police to identify
former comrades by involuntary signs of recognition, having lost any resource for
self-control, as well as every grain of self-respect (Castillo & Girard, 1993).
The upshot of all these examples is that such effortful regulation is voluntary. It is
freely chosen. It is intentional, geared to keeping up the intended direction of action and to restraining emotions that might undermine that direction. Given all the effort, all the risks and pains and other costs, one wonders how they can be worth it.

WHAT ONE CARES ABOUT: CONCERNS


Why go to all this trouble for self-control and self-maintenance? Is it worth it?
It is worth it because emotions arise from events that one cares about. And insofar
as emotion regulation results from situations involving some measure of emotional
conflict, it results from two distinct things we care about. First, there is a reason
for having an emotion and acting upon it. Second, there also is a reason for acting
differently from how the emotion disposes us to act, or for not acting at all. One
cares both ways, at the same time. When offended, one cares about restoring one’s
self-esteem. One is inclined to retort angrily. But one also cares about retaining
social harmony; one is inclined to let the offense go by or to find some other solu-
tion, such as responding calmly, “You made me very angry.”
Caring about an event or object implies the emergence of an emotion and feeling.
Caring also implies that the event or object touches a sensitivity of the individual
that causes the object or event to be appraised as relevant and the emotion or feeling to emerge. Relevance appraisal turns the object or event into an emo-
tionally meaningful one.
“Concern” is the term I use for the assumed dispositions. Concerns are defined as
dispositions to desire the occurrence or nonoccurrence of given kinds of situations
(Frijda, 1986, 335). Emotions are evoked when encountering an object or event
that is appraised as possibly or actually satisfying some concern or threatening such
satisfaction. Concerns can be operationally defined as emotional sensitivities to the particular classes of events that instantiate them. These emotional sensitivities serve as
reference points in seeking concern satisfaction or escape from concern frustration.
Concerns are awakened when one perceives an object or event that matches their sensitivity, or even just thinks about such an object or event. At other times they silently
sit in the mind-brain until a relevant event or thought appears (Frijda, 2007, chap. 7).
The notion of concerns includes major motives, goals, and needs, under which
terms they are usually discussed in the literature. They also include “sentiments”
(Shand, 1920): affective attitudes toward persons, objects and issues, active inter-
ests or hobbies, and attachments as well as hatreds. An individual’s concerns have
diverse and complex sources, in innate biological mechanisms, cultural transmis-
sions, direct social influences, individual propensities, and individual life histories,
which cannot be enlarged upon here. In my view, we still await a general analysis
of concerns (or of motivation, for that matter) that provides a unitary, coherent
account of both motivations that stem from social prescriptions and those with
direct biological backgrounds (Berridge, 2004).
People harbor a large number of concerns—indefinitely large, in fact, because
distinctions can be made at many different levels of generality. One can have a
concern for dealing with people generally, and for dealing with that one particular
individual. Concerns account for the fact that people care about many things that
they at that moment do not strive for, but that influence thought in planning and
judgment, and at any moment may arise as generators of emotions by the appear-
ance of some stimulus, along the lines of the so-called incentive motivation model
(Berridge, 2004; Gallistel, 1980; Mook, 1996; Toates, 1986).
Concerns are mental structures that one in principle is not conscious of—not
even when they are activated and are motivating actions and choices (Berridge &
Winkielman, 2003; Wilson, 2002). One does not need to be conscious of them, since
they operate directly by entailing an individual’s sensitivity to relevant objects and
events, and by thereby instigating affect and action upon meeting relevant objects or
events. Concerns are inferred from such sensitivities, by the individual him/herself
and by observers and theorists. A sexual concern is inferred from an individual’s
or a species’ frequent affective response to meeting, viewing, or smelling potential
sexual mates, or thoughts about one of those. Attachment to a particular individual
is inferred from frequent emotional responses to events involving his or her pres-
ence, proximity and well-being. Being a spider phobic is inferred from frequently
fearing spiders when one meets them, and from detecting more spiders than do
other people.
But such self-ascription of concerns results from inferential activity that does
not necessarily correspond to the concerns that actually do determine given
choices, emotions, and actions (Wilson, 2002). One may ascribe one’s economic
decisions to rational considerations whereas upon exhaustive analysis they may
rather be ascribed to greed, to desires one does not truly desire to abandon, or to
other “animal spirits” as Akerlof and Shiller (2009) named them, following Adam
Smith (1759).
To return to the question of why emotion regulation (both as self-control and
as self-maintenance) is worth the trouble: it is when and because one cares about
what is at stake in an event. It depends on the strength of the concerns at stake in
the event (Frijda, 2007). Indeed, reported felt emotion intensity has been shown to
correlate significantly with measures of concern strength that are or were relevant to
the reported events (Sonnemans & Frijda, 1995). It is very unpleasant when some-
one disagrees with you when your self-esteem is vulnerable and easily engaged.
Tournedos are wonderful if one is moderately hungry, but not when one just had
three of them.

CONCERN STRENGTH
Relative concern strength thus can be held to be decisive for preference between
courses of action in emotion conflicts. No cat crosses an electrified grid unless that
grid is on the path to its kittens. One reenters one’s burning house to save one’s
child, and not as readily to save one’s canary.
But how to assess concern strength? Concerns can differ in strength in vari-
ous ways, and I do not know whether these types of variation tend to go together.
Concerns can differ in the strength of the felt urge evoked by relevant objects or
events; the rapidity of urge onset; the speed, scope, and power of motor, verbal,
and cognitive action. Concerns also can differ in the willingness they may induce to
incur costs, take risks, or accept pain in seeking to satisfy the concern at stake. High
grid voltage that the cat accepts to get to her kittens signals strong maternal concern,
and paying dearly for a rare postage stamp signals eager philately. All this is probably independent of the more quantitative action parameters. Again, as a prob-
ably separate dimension, concerns can differ in scope. By “concern scope” I mean
the range of events that touch the concern’s sensitivity and give rise to some sort
of emotion. Scope may pertain to events evaluated as positive (they signal concern
promotion) and to events appraised as negative (they harm concern satisfaction).
Strong concerns in this sense are spider phobias. Spider phobics tend to get nervous
just upon seeing a spiderweb, or when coming to a place where spiders are common.
Sex during puberty gives another example: then, everything reminds the young per-
son of sex. In subcultures where maintaining self-esteem, social status, and social
identity is emphasized, notable shame is aroused by every failure or critical remark
(e.g., Abu-Lughod, 1986). Self-esteem is vulnerable when one is a fugitive in a soci-
ety where nobody knows who or what one is, or even might be (Apfelbaum, 2000,
2002). Lack of a social or economic role appears to provide a major motivation for
rendering life meaningful by becoming a terrorist (Kruglanski et al., 2008). At a
more modest level, orientation in physical and in social space represents a concern
with very large scope, because losing orientation disables each and every action and may thereby give rise to panic, as when in a strange building one suddenly does not know where its front, its back, and its exit are.
An important proviso needs to be made. Emotion intensity does not depend
on concern strength directly, but by way of an emotional appraisal of the concern
relevance of an event. Knowing that smoking can be lethal does not render one
inclined to abstain. Only appraising that smoking's effect on one's health is likely or imminent does. That does not happen readily on the basis of mere information about likelihoods. For most smokers, lung cancer may or may not occur, and
anyway for most people its occurrence is remote in time. The actual pleasure and
releases of tension while smoking, by contrast, are here and now as acute sensa-
tions. Actual lung cancer affects emotion only once it hurts and can no longer be put aside as mere coughing.
Preferences thus depend on two closely related regularities of emotional appraisal.
The first is the very much greater emotional impact of events present to the senses
than of facts just known about. The second is the time discounting of the emotional
impact of future events, discussed extensively by Ainslie (2001). The felt impact of
an event to come decreases hyperbolically with time until the estimated moment of its arrival. It falls away steeply when the time until the occurrence of the event is more than a brief period away, and responding is not urgent, as happens with all but imminent warnings or promises. Ainslie has shown the "breakdown of
the will” in resisting temptation when enticing stimuli are actually there, to be seen
and smelled, or when the termination of pain is within one’s power.
All this is central to handling emotional conflict, and notably in endeavors at
self-maintenance. Take keeping one’s secrets during torture. The pains and humili-
ations are there, and talking provides an immediate way out of them, or so the
pains and the torturers make one inclined to believe (Asadi, 2010). Betraying a
friend and what may happen to him or her, by contrast, is in the future and may
not happen at all.
Or take the situation that calls for hiding a Tutsi child, or being asked to hide a Jew.
Neglecting the call or rejecting the request has no immediate adverse consequences.
Compare the immediate problems posed by heeding the call or granting the request: the risks of being found out, the needs for secrecy, the problems of feeding, all of which will have played through the minds of possible and actual hosts upon requests to help Jews, mentioned earlier (Reykovski, 2001). It probably is for reasons like these
that many other people did turn down such requests.
But on the other hand, self-esteem may get involved when considering the
consequences of rejecting the option to help and shelter. If you betrayed a friend,
how can you later live with this? The life story of Marcia Merino showed that it
tainted her later life. The taint can be foreseen by someone not as tortured and
exhausted as Marcia Merino had been. A host of a Jew during World War II later
explained: “If I had declined the request, and I afterward heard that evil befell
him, I would have been unable to look myself in the eye.”

PREFERENCES
Emotion regulation is due to emotion conflict. An event has elicited several dif-
ferent action inclinations. The inclinations aim at actions that meet different con-
cerns. How to deal with this? One of the major options is to select among them.
One chooses the option of which one prefers the envisaged outcome: the one that
will yield the smallest loss or the largest gain in concern satisfaction. Preference
depends, in the first place, upon relative concern strength, in proportion to the
expected outcome magnitudes of action. One continues smoking until blots appear
on the X-ray pictures, and sometimes even after that.
Preferences follow the pleasure principle, taken broadly. One chooses what one
likes most or dislikes least. If only, however, things were that simple! The point is
that most likes and dislikes, pleasures and pains, are not simple givens (Frijda, 2009).
They are not even simple givens in taste experiences, since the label on the wine bottle
makes a difference to how much the wine is liked. Pleasures and pains are not simply
compared when embedded in a conflict of consequences for multiple concerns. They
readily pose a Sophie's Choice problem. To offer another example, the pleasures of
foreseeing an erotic encounter get qualified by the thought of marital strife, which in
turn gets qualified by the urge to terminate the encounters. The net outcome may in
fact favor termination; it in any case favors socially oriented self-regard.
Assessing preference includes the sensing and weighing of the expected impact
of action outcomes on feelings, and on multiple concerns. It involves mostly non-
articulate and perhaps even nonconscious information activated in the mind-brain.
It may include glimpsing feelings and future feelings, contingent on vague thoughts
of future events and one’s briefly felt inclinations and will to deal with them. The
entire process usually is not automatic, and yet it is not explicitly conscious. Only
the variable affective outcome is really conscious, one may suppose, even when it is
not focally reflected upon, until the end of the pondering, or signaling that end. The
process is intuitive, noting feelings rather than the perceptions that generated them, as
modes of situated “directed discontent” (Rietveld, 2008), prodding us toward situ-
ated contentment. This, at least, is the sketch that Rietveld (2008) and Dijksterhuis
(2004) plausibly present of the process of preference construction. The process may
proceed haltingly, with deliberations. That this occurs shows in hesitations, inner
struggles, uncertainty, a sense of making true sacrifices, considering and reconsidering the decision to be taken, deciding that the cost is or is not too heavy or in conflict with
other concerns. It may also proceed straightforwardly, as occurs when one instantly
knows that a central value is being touched, a central concern is being threatened,
a unique opportunity is met, and only verifications are opportune. In one story I
was told, from the Netherlands during World War II, a farmer’s wife visits the village
grocery store and is asked to come to the neat room behind. There, the village doctor
asks her to consider hiding a Jew. “I’ll have to ask my husband,” she replies. When
the husband comes home at six, and while at the evening meal, she tells him. “I’ll
have to think about it,” he answers, and after a minute or two: “I think we’ll have to
do it”; and they did. What happened during those minutes? Probably, some implica-
tions crossed his mind: the harm expected for the Jew; the indignity of Jews’ having
to hide; the risks for himself and his family; one's sense of obligation to
people in need; the worth connected to remaining decent and not shrinking from
responsibilities, but also the consequences of refusal: something like, “If I decline the
request, and evil befalls him afterward, I will be unable to look myself in the eye.”
Note that the reflection may not find a preference. Hesitation may settle into
remaining undecided. Being undecided may be resolved by refusing the request.
When a preference does emerge, its primary determinant presumably will be the
strength of the concerns at stake in that alternative, which makes its call to have to
be heeded.
What causes this preference? What is it that sometimes makes relevance to some
concern decisive? Why do such considerations as looking oneself in the eye, or the
foresight of the evil fate that awaits someone one does not know, or the indignity of
the situation weigh so heavily?
Some cues about the concerns at play come from the motives mentioned post
hoc, to explain one’s decisions made in conditions of what was earlier labeled
“self-maintenance.” The stated motives of people who did hide Jews largely fell under
the headings of empathy, personal norms, and values. Other cues for the concerns
operative in self-maintenance come from the experiences that underlie traumatic
stress syndromes. These concerns become apparent in the loss of sense and coherence in
the experienced events (Epstein, 1991; Janoff-Bulman, 1992; Rimé, 2005), or the
collapse of the implicit expectations that together with the world’s events form a
meaningful, reasonably predictable, and reasonably controllable world. In the
words of Rimé (2005), traumatic events represent a “breakdown of the symbolic
universe.” A world after such breakdown has revealed itself as a world one would not
want to live in, and could not live in without losing self-regard.
This implies that resisting such breakdown is motivated by a concern with a very
large scope. Resisting cruelty and oppression, rejecting humiliation and enslave-
ment can be viewed as worth the effort, whatever the price.
The motivation provided by such a concern belongs to what Frankfurt (1988)
called “second-order desires”: desires to desire one’s first-order desires. They are
desires that one identifies with and that one can desire “wholeheartedly.” It is not
clear that identifying with desires forms the most basic and satisfactory description
for singling out desires that one has wholeheartedly. Part of the problem is that iden-
tifying with one of one’s desires is not a very transparent notion, if only because the
relationship between the “self ” who identifies and the “self ” who has the desire is
unclear. The feeling or sense of identifying with a desire can perhaps be understood
in a more analytical fashion. Desires that one desires desiring, in Frankfurt’s analy-
sis, may in the first place be those desires that result from events having relevance
to concerns with a very large scope, in the sense used earlier. Their satisfaction or
dissatisfaction enables or disables a very large number of actions. Such concerns
may also involve sensitivities and underlie desires that are part of one’s conception
of oneself. They belong to the goals that may consciously orient one’s conduct of life
when called upon to make choices: this is who I am, and who I choose to be. One
identifies with it in part because one decides to do so, and commits oneself to that
decision, in true Sartrean fashion. In the second place, one may have second-order
desires that, when followed, do not result in frustrating other desires, in the way that excessive drinking frustrates the desires for health and self-respect (if one does have those latter desires). One cannot give in to a desire like that wholeheartedly, but one can
wholeheartedly will the desire to uphold the symbolic universe, or to abolish cru-
elty. One can wholeheartedly desire to find meaning in life, to be more than a speck
of dust in the world. As already mentioned, Kruglanski et al. (2008) concluded that
the motivations of terrorists primarily consisted in constituting a sense of life. That
motivation may suffice for action. I once read a letter a European resistance fighter
wrote to his family on the evening before his execution during World War II: "I have
not fallen for a political ideal. I die and fought for myself.”
By contrast, acting at variance with one’s conception of oneself, or in ways that
offend one’s values, may take away one’s sense of the meaning of life. To the indi-
vidual, a world without these values may represent a world one does not want to
live in. As I remarked earlier, betraying one’s conception of oneself and what one
strives for would produce a self-conception one would not want to live with. As
Myriam Ortega said, who had been incarcerated by Pinochet for 13 years: “To
betray in exchange for not dying is also dying” (Calzada, no date). Take a (perhaps
authentic) story from the Hungarian writer Imre Kertesz. A Holocaust captive lays
ill on his bunk, with his ration of bread on his belly, too weak to eat it yet. He looks
up and sees someone snatching it away. A little later that man comes back, replac-
ing the ration, grinning, and saying: “What the hell did you think?” Why did he do
that? Kertesz’s interpretation is similar to what Myriam Ortega said: there is more
than one way to die (Kertesz, 1997). It also is similar to what the Italian priest in
Auschwitz in the earlier example must have realized.
Upholding empathy, desiring a coherent symbolic universe, living according to
one’s conception of oneself—indeed, all three are concerns with a very large scope,
making them worth effort, pain, and sacrifice. Self-regulation, both as self-control
and as self-maintenance, can be effective. It sometimes works to promote social
harmony and internal harmony. True, its effectiveness is modest. In social interac-
tion, a sensitive person can still notice that her antagonist is angry, and she still may
take offense. The smoker may stop smoking, even if for only a week or two. But
some Tutsi children and Jews have been saved, some smokers do in fact stop before
the onset of lung cancer, some individuals gain insight into their world and their motives, and some drinkers and drug addicts do in fact come to realize that the profits from drinking and drug use are not really worth their costs (Lewis, 2011).

WHAT IT TAKES
What does all this take? The core processes in nonautomatic self-regulation are
fairly clear. They consist of assessing or assigning a preference among the emotional
options and selecting some action to implement that option.
What does that take? There are four tasks to be fulfilled:

• first, to explore the nature of each emotional option in order to determine
the nature of each of the conflicting action tendencies;
• second, to explore the concerns targeted or affected in the envisaged action
options, and whether promoting or harming them implies gain or loss;
• third, to explore possible action options under the various contingencies of
concern relevance, and whether any is feasible;
• fourth, to accept the preferred option and be ready for the consequences,
unless one sticks to remaining undecided.

It takes these four tasks to arrive at preference for an alternative in emotion con-
flict that optimally considers the relevance to the various concerns at stake. But the
set of tasks is not simple. The cognitive structure to be explored is intricate. Events
with their manifold aspects can be pertinent to several concerns, in particular when
one examines consequences at several steps removed. Killing an insect removes
an insect but also is a step in decreasing environmental diversity and undermining
respect for life. Each concern relevance can give rise to a large number of actions, each with its own consequences.
These various kinds of exploration are usually possible, but they are not always
undertaken. They can be undertaken to widely varying extent. Responses in
emotional conflict tend to be driven by situational conditions. One readily tones
down one’s anger so as not to hurt one’s target if the situation is a run-of-the-mill
social one. One moderates one’s erotic approach so as not to chase the poten-
tial target away. One may take an aversive event as inevitable; one may without
questioning do whatever one is told to do; one may view oneself as a powerless
victim of unpleasant behavior whereas there are many things one could have
done. Often, no preference assessment occurs even when what the situation
suggests—doing nothing, doing what one is told—is actually not the only avail-
able option. Blatant examples are easily given. Recall the 85 percent of Milgram's
subjects. Recall the members of the 101 Police Battalion from Hamburg—the
battalion that was among the first to engage in systematic killing of Jews in Poland
(as described by Browning, 1993, and by Goldhagen, 1996). Only one member
asked to be excused and was assigned a different job. Evidently, preference assess-
ment requires free and unrestrained exploration of the event’s meanings, and the
implications of one’s actions.
Let me briefly enlarge on the four tasks mentioned. First, exploring the nature
of the options in emotional conflict. What is the nature of each of the conflicting
action tendencies? The task includes taking time for a thought about context and, if
the thought comes, not brushing it away at once. Many of the 101 Battalion members
got used to the discomforts of their task rather rapidly (Browning, 1993). Many of
Milgram’s subjects did have qualms that they did not heed. Those that did heed
them included subjects who wondered what the qualms suggested they should do
instead, and some of them discussed these alternatives with the experimenters.
Second, preference assessment calls for exploring the concerns and sensitivities
at stake in an event and in the consequences of one’s eventual emotional reaction.
We explore our concerns by pondering about an event, reflecting upon it, as well
as by deliberating about action. In doing so we expand our awareness of the event’s
implications and consequences. Recall the guess I made earlier about what went through
the mind of the farmer mentioned before as he pondered the request to shelter a Jew. He presumably thought
about the risks involved, and about the values offended by the reasons for the request
as well as by the act of agreeing or not agreeing to it.
The point I wish to stress here is that sustained attention may engage other con-
cerns than those that were initially activated. It may become clear that an individual
is being threatened by a scandal, and that the scandal is an event in which basic
human values are not respected. Expansion of the domain of implicated concerns
may climb to increasingly more encompassing levels of meaning, and of implica-
tions for one’s image of oneself. It is what, presumably, happened in Kertesz’s story.
Having stolen someone’s ration represents a gain but can also fill one with uneasi-
ness, up to the point of sensing a loss of self-respect.
The extent and content of these reflections require a measure of receptiveness
or openness to them (Kruglanski, 2004). They require willingness and capacity to
explore and face their implications. The mentioned receptiveness, openness, and
willingness to explore appear to form requirements for accessing the event’s full
concern relevance.
This role of openness and willingness to explore consequences is not trivial, as
shown by the Milgram data, and those from the 101 Hamburg police battalion. On
occasion, it goes even further. The commanders of several World War II concentra-
tion camps (Stangl, the commander of the Treblinka camp; Höss, the commander
of Auschwitz-Birkenau) prided themselves after the war, in interviews or autobiographical
accounts, on having done their jobs as best they could, which included
perfecting the mass-extermination procedures (Levi, 1988). Prisoners of the Gulag
who hated the Soviet regime took pains to build the walls around their compounds
as straight and neat as they could (Adler, 2002). Furthermore, recall that limiting
use of available information is not restricted to hostile contexts, as is evident from the
illustrations given by Akerlof and Shiller (2009).
Concerns and attendant motivations, as argued before, need not be consciously
identified to operate. Events directly produce affect and action. Conscious identifica-
tion of why one wants what one prefers frequently consists of justifications and con-
structions produced after the fact (Wilson, 2002). This is different from examining
one’s emotional motives and concerns in deliberating over a decision. Deliberating is
geared to “Why do I want to do that?” rather than to “Why did I do that?” It serves to
establish an inventory of relevant concerns to enable estimating the relevance of each
one. One may feel the urge to save someone in need: Is it for the glory of having taken
a risk and the glow of goodness, or because of concern for the suffering of others? One
may show courage in fighting a cause: Is it to give meaning to one’s empty routine life,
or to obtain 75 virgins in the afterlife, or because an offense to God’s honor should be
paid back? One probably never can be certain about one’s motives, particularly when
climbing the motive hierarchy (Frijda, 2010a). But attending to relevant concerns and
motivating states promotes openness to the motivational options, and it may blunt
the sensitivity to social and situational primings, as these figure in the after-decision
justifications as spelled out by Wegner (2002), Wilson (2002), and others.
The third task is to search for alternative action options. The way to do so is pretty
straightforward. It is similar to the openness in exploring concerns. Alternative
action options are found by looking around in the world and in the world of imagi-
nation. One can recognize novel options when coming across them. This root of
freedom is prominent in the outcome of cognitive activities prior to decisions, as
is shown in the phenomenological analysis of unreflected action (Rietveld, 2008).
Freedom lies in the openness to novel affordances.
Search may hit upon the simplest possible alternative action when faced with
unacceptable requests: doing nothing, refusal to act as requested, or just saying no
when ordered. It was what the 15 percent of Milgram’s subjects did. It was what just
one man of the 101 Police Battalion thought of doing. His was a more effective action
than that of his commander, who, when giving his men the order to kill, burst out
crying (Browning, 1993). It was the kind of thing that La Flaca Alejandra was unable
to do or maintain, due to exhaustion and humiliation, and regretted for 40 years
afterward (Castillo & Girard, 1993).
Alternative options are not merely found. They can also be invented. The German
actor Langhoff was imprisoned in a concentration camp in the 1930s (Langhoff,
1935). During his imprisonment he was tortured by the SS. He shrieked in fear
and pain, then suddenly “a good thought came to him,” in an oddly clear and cool
fashion: that he, as an actor, could fake passing out. He did so, upon which he was
left alone by the cheerful torturers.
Awareness of freedom of choice is demonstrably vital for survival under hard-
ship or external compulsion. Such awareness of freedom to act and to influence
one’s fate operates even under modestly difficult conditions. When exposed to
electric shocks, being able to press a button to end the shock lowers felt discomfort
and the increase in stress hormones, even when the button is never actually used
(Frankenhaeuser, 1975).
Discovering alternative action options has effects beyond itself. The options can
become part of one’s knowledge and skill repertoire. It creates awareness that seek-
ing action options is possible, which shapes expectations that motivate later similar
searches. One can say “no”. One can act, fear and pain notwithstanding, for the sake
of more important issues. Choice of suicide as a regulation mode is another exten-
sion of the coping repertoire, once one has first hit upon the idea and considered its
implications without constraint. The initiative of the Italian priest in Auschwitz,
and the acceptance of the risks of hiding a Tutsi or Jewish child are both discoveries of
thought without constraints. Accepting death to end one’s own meaningless suffer-
ing is of course a similar discovery.
Self-control of course can employ similar means. Controlling temptation can be
achieved by avoiding meeting your neighbor’s wife. Controlling bulimia is helped by
hitting upon the idea of leaving the fridge empty, or of throwing away the cookies your
partner bought. Controlling jealousy can lead to the discovery that the severest pangs
usually subside after a minute or so, and this knowledge can cause one to refrain from
giving in to the urge to look into the partner's email, and to instead suffer for that minute
or two. Controlling fear is helped by visualizing the contempt of one’s team members
upon showing fear, and control of unsafe sex is helped by acquiring the habit of first
visualizing the next morning's anxiety (Richard, Van der Pligt, & De Vries, 1996).
The fourth requirement for forming a preference is to accept the preferred
option and prepare for the consequences, or else to remain undecided. Preference
assessment and retrieving meaningful action options actually constitute making
one’s choice and settling on a preferred option. “I think we’ll have to do it” was the
mentioned farmer’s conclusion after his two minutes’ reflection on the request. In
self-maintenance at least, something does happen in such choosing that goes beyond
merely selecting one of the action options. One also sets the preferred option as
the chosen goal and accepts it, risks and all, come hell or high water. The selected
option turns into a commitment to stick to one's decision when the heavy costs
emerge, and thus into an intention to face the harm when it comes. This sounds
suspiciously like a self really deciding. It probably works by a profound reappraisal:
effecting a cognitive change that gives the action consequences a different evalua-
tion, as challenges instead of threats. Risks to come will not be stimuli for flight or
restraint any longer but obstacles on one’s path to a goal that are to be overcome or
avoided. Reappraisals of this sort may have considerable effect. An interview-based
study suggested that communist prisoners in the 1933–1945 German concentra-
tion camps showed greater resistance to their sufferings. After the war they appeared
to show fewer PTSD symptoms than, for instance, Jewish survivors, probably
because they viewed their sufferings as the results of deliberate choices to act
for an honorable cause, rather than of merely having been victims (Withuis, 2005).
The same appears to apply to suicide terrorists.
Such reappraisal and acceptance of hardships not only characterize commit-
ment to political and religious ideologies. They also apply to devoted personal pas-
sion and love, as illustrated by John Bayley’s attachment to his wife, Iris Murdoch
(Bayley, 1998).
What all this takes, then, is emotion. Emotion engages capacities for preference
and for reflection or pondering: for perusing and considering options, probing pref-
erences, sensing emotional impacts.
What this takes is in fact rather a lot. Not for nothing did Dennett (2004) elabo-
rate the point that freedom evolved. Free choice or free will appears possible with
the help of three almost uniquely human faculties: conceiving of the merely pos-
sible, which includes imagination; detachment, which includes the ability for rela-
tively unprejudiced scanning of the world and of what imagination happens to offer;
and reflective thought, making one’s thoughts, perceptions, and emotions into
objects of inspection. As far as I can see, it is not easy as yet to find a satisfactory
account of the operation of these faculties in subpersonal terms. Imagination and
cognizing the merely possible, for instance, not only involve producing representa-
tions and simulations but also include awareness that direct links with motor action
are absent. Imagery subjectively differs from perception in that its content cannot
be inspected. Detachment, stepping back, adopting the position of an unconcerned
spectator appear to involve general loss of sense of agency, and even of being a vul-
nerable target—automatically under dissociation (as in Langhoff ’s case), or inten-
tionally in self-observation and the cinema. Reflective thought includes rendering
oneself an object of attention.

FREE WILL
The process of finding action options and final preferences consists of exerting free
choice or free will—liberum arbitrium. What is valued in what is being called "free will"
is freedom, to seek to act according to one’s preferences and one’s considerations.
It is to obtain and generate information, without internal or external constraint,
under acceptance of certain aversive consequences for others and oneself, as a
consequence of second-order desires. Whether instances of decision are to be considered
instantiations of free will depends on whether the decision indeed followed freedom
of consideration, and indeed includes acceptance of aversive consequences, which
can show only after the fact, and only for as long as resources for acceptance remain.
This applies to devoted passions as well as to the costly choices by heretics, wel-
fare workers in dangerous areas, resistance fighters, revolutionaries. Attention may
have been allowed to roam freely over external and internal information: over the
implications of each available course of action, such as losing one’s self-esteem if
betraying one’s partisans, or the persisting anguish while not doing so, or dying in
the event. Free will includes intending to act against the pressure of circumstances
and of aroused inclinations, and willing that decision; that is, being ready indeed to
act upon it.
Thoughtful reflection or attending to perceptions, images, feelings, and thoughts
is central in the issue of free will, for the simple reason that thought itself can be free.
Die Gedanken sind frei: thoughts are free. This is not merely a political statement. It is a factual one, or
can be. Thoughts can fly as free as birds, and flights of imagination indeed manifest
free will as much as the flight of birds does—or more so, since our thoughts can
travel further. This does not mean that these thoughts or the actions they inspire
are undetermined. They are determined by cognitive processes and by the information
that fed them. What more would one want from free will? Not
the possibility of groundless, arbitrary action (Dennett, 1984).
Notice that in considering action options the freedom is there, objectively as well
as subjectively. One not only feels free to let thought and attention roam: one is free,
or can obtain that freedom by relaxing and by stepping back from engagements.
The options are there, out there in the world and in the creativity of thought and the
reaches of former experience and what it might teach. The options are there for the
picking. There is no constraint in observing some and not considering others. There
is no limit to the options that might be considered. Under external compulsion to
act—including under that of not considering some options and only considering
others—one in fact can refuse to comply. One can say no in Milgram’s lab if one is
willing to pay the costs for doing so. One can withdraw in “I am not there” by mov-
ing into depersonalization, as so often described by rape victims, or under torture;
and by imagination. A poem by the Dutch poet Slauerhoff, freely translated from
the Chinese into Dutch (Slauerhoff, 1961, p. 512), and from the Dutch into English
by the present author, illustrates this:

The Sage
My house is filthy and my many children shrieking.
The pigs are rooting, grunting, in the yard.
But mountains, rising high to the blue heaven,
Draw my attention, soaring up from stink and dirt.

In a sense, freedom resides in the world: in its abundance of possibilities if the pos-
sible and the outcomes of imagination are included. This is connected to another
main point. Information search implies initiative. There is freedom also in going out
into the world of ideas and options. The determinants come from the outside meeting
the inside: there is self-determination.
“Freedom of choice” refers to freedom from external constraints to consider and
weigh available options, and to act in agreement with preference. “Free will” like-
wise refers to freedom to pursue available options, in perception and thought, and
in constructing preference, and in finding out what one’s preference is. Freedom is
the opposite of compulsion and constraint—not of aimless arbitrariness.
The notion of “free will” appears to offend the axiom of universal causal determi-
nation. However, such offense is in no way involved in the notion behind people’s
experience of free will, nor in their striving to exert it (Frijda, 2010a). True, the
notion of free will had religious backgrounds, in Augustine, Aquinas, and Duns
Scotus. It served to free God from the blame for human evil. However, this con-
cept of free will was not what provided the facts that produced the notion of will
and freedom of action. The facts were condensed in Aristotle’s analysis of voluntary
action, and in Aquinas's analysis of how to escape from sin: by not sinning. They
included the observations and experiences that not fleeing in fear from the battle-
field is an available option, and that one can in fact leave one’s neighbor’s wife alone,
even if it may be hard to do.
Of course, free choices have ample determinants. They require cognitive abilities
to explore, and to explore against inclinations not to consider certain determinants.
They require the determination of the preferences among options. They require the
ability to envisage remote consequences as against proximal perceptual influences;
they require the resources to retain detachment and clarity under stress. They, first of
all, require interest, spontaneity, and engagement, that is, live concerns and affective
sensitivity to what may be relevant to the concerns at stake in the choice situation.
It has been argued that the awareness “I could have acted differently,” after a
decision, is an illusion. Of course, it is not an illusion. All different ways to act that
appeared feasible were considered and compared while deliberating and before set-
tling on the decision. Saying of someone “He could not have acted differently” after
the fact of having decided is a play on the word “could,” since that word presupposes
that options existed before the decision cut them off. Deliberation, reflection, and
hesitation showed that the determinants of the final preference were all effective pro-
cesses: weightings and evaluations. Most important, in my view, is the fact that decid-
ing and committing oneself consists of the act of terminating the intention to explore
and weigh action options, and of the judgment that one has explored enough. One has
arrived at the satisficing criterion (Simon, 1956). Processing shifts from weighing
options to shaping the intention of following the path of the satisficing option. In
fact, there is no compelling reason to terminate exploring options. Some people do
not, and continue deliberating until too old to act anyway, or let the course of events
or Allah’s will decide, or continue ruminating on “If only I had . . . !”

SELF-DETERMINATION
The preceding analysis of free will is phrased in intentional language, in terms that
presuppose self-determination. But what if there is no self to be found that guides
choices, as a central commander (Prinz, 2004), or an I that is doing the doing? Look
at the brain, look at the mind: there is no one there (Metzinger, 2003).
An initial response is to insist there certainly is someone here. He or she stands
here, right before you in presenting this essay (or sits behind the computer while
writing this text). He or she is doing things, and can do other things, some of which
he or she is doing to the world—speaking to you, for instance—and some to him/
herself, such as scratching his/her head and self-correcting his/her English.
If one should ask, he—in the present example—would answer that he experi-
ences and views himself as a person, and that he operates as a person. He or she
reports experiencing and viewing him/herself as a person, and shows him/herself to be operating as
one. The things which that person is doing result from subpersonal processes plus
the information that triggers these processes and that these processes operate upon.
“Me,” “I” and “self ” designate the sum total of process dispositions, information rep-
resentations, and the procedures to organize and harmonize these processes, which
together, and in their confluence, function as the agent in action. One should stress
this “confluence.” Few if any mental activities engage only one subpersonal process,
or even only a few of them. Correspondingly, mental processes engage large infor-
mational and neural networks (Edelman & Tononi, 2000; Freeman, 1999; Lewis,
2005; Lewis & Todd, 2005; Lutz & Thompson, 2003). They thus engage large parts
of the total system. They thus engage large parts of the me, I, or self.
Among these processes are those that produce the behavioral, physiological, and
experiential outputs, in accordance with outcome preferences and, thus, with the
individual’s concerns. They include a representation of the body and its location in
space, which is referred to, by Damasio (2000) and Panksepp (1998), as the “core
self.” These processes may be located anatomically in what has been termed the
“subcortical-cortical midline system” that runs from the periaqueductal gray to the
medial prefrontal cortex (Panksepp & Northoff, 2009). It may include the fron-
tal pole, which distinguishes between actions that the subject has decided upon,
as contrasted to actions he performs on somebody else’s initiative (Forstmann
et al., 2008). It may extend to the “self ’s” affective states of the moment, and its
relationship to the world and with persons and objects around it. All this underlies
Metzinger’s (2003) self-model of subjectivity, functionally as well as (on occasion)
in conscious awareness.
Is there an “I”? Yes, there is an I, in the sense just indicated. Likewise, there exist
a “me” and a “self.” They all three refer to the same total interconnected network.
None of the three supervises the network. They are the entire network, with its informational
contents, and ongoing processes, together with its appendages consisting of eyes,
hands, bodily sensitivities, and actions by these eyes, hands, and bodily sensitivities.
They are the entire network, just as a “wave” is the entire collection of droplets that
by their interactions is capable of breaking dikes and destroying oil wells.
“Self-determination” in most of its usages just means “autodetermination”: the
operation of attractors within the network in determining preference or in agree-
ment of actions with goals (Lewis, 2005). Perhaps in many uses of this term, our
focus is on more or less stable ingredients of the neural network, such as concerns and
sentiments, in addition to representations of the body. When asking oneself, "Who am
I?” one tends to answer by thinking of one’s body and its location, or by thinking
of one’s major goals and concerns: the “myself ” that one can or cannot look in
the eye.
In connection with emotion regulation, self-determination has a more specific
and emphatic accent. As discussed, some actions are caused primarily by internal
processes, not directly dependent upon external events. The internal processes
have been enumerated: viewing counterfactuals and the merely possible; main-
taining aims in working memory despite distractions; reflection and deliberation;
stepping back from affordances, action urges, and social influences; detaching from
event pressures; noting primings by experimental or advertising manipulations, and
largely discounting them. Centrally, these internal processes involve using the free-
dom of thought that the world allows and that allows thought to explore and to
detect nonobvious concern relevance. They are possible in spite of habits, stimuli
with their affordances, and the presence of conflicting urgent urges. These internal
processes are processes within the total system of interacting concerns, preferences,
and dispositions that are remote from external stimulus events, and that on occa-
sion are designated, for short, as "the self" or "a self."
All of this amounts to saying that free decisions and free will are enabled by the
human system as a whole, and its neural structures that integrate multiple subper-
sonal processes and their outcomes.

FINAL REMARKS
My main conclusion should be obvious: free will exists, and is effective in influenc-
ing the actor’s action and his or her fate. It is equally obvious that free will has noth-
ing to do with decisions and actions not being fully determined. They are determined in
large part by processes internal to the deciding person.
None of this says anything about the scope of free will. Many life circumstances
leave few options. Moreover, people in general know little about their motiva-
tions. Awareness of what moves one is a construction. There exists no direct
reading-off of the causes of intentions or desires; one did not need Nisbett and
Wilson (1977) or Wegner (2002) to make this much clear. In any culture with
macho ethics a man whose wife left him tends to say, “Good riddance,” or “She
broke her commitments and promises; I am angry, not sad.” Any heroic deed may
be motivated by expected glory or enhancement of self-esteem, while constructed
as idealism or courage. The schooling and pay offered by fundamentalist move-
ments obscure the role of overriding religious value, even in the eyes of funda-
mentalists themselves (Stern, 2003).
There remains a major riddle. Free will, and decision after deliberation, or upon
intuitive preference, or after hesitation are largely products of conscious cognitive
action. How does consciousness impact on matter—the matter of the muscles and
that of the nerves? How, in other words, is the relationship between mind and mat-
ter to be seen? As a state of the neural matter, as Searle (2007) proposes? Certainly
not as the relationship between a phenomenon and an epiphenomenon. So far,
neuroscience has given no satisfactory account. On the other hand, conscious
thoughts, even when entirely dependent on concurrent neural processes, represent
novel input that stirs or shapes neural representations that have their further effects.
I react to what you say; why shouldn't I be able to react to what I say to myself or
“think to myself ”?
Then there is the very will-like notion of acceptance of the consequences of deci-
sions taken. It results (I think) in commitments, that is, in long-term intentions that
are concerns (and that have emotional impact when something relevant occurs,
including not following up on the commitment). Finally, I have stressed that aware-
ness that one can influence one’s fate provides or sustains the motivation to seek
such influence when it is needed, and to look for actions or opportunities
that may not be immediately obvious. All of this may save lives and offer escape
from despair.

ACKNOWLEDGEMENTS
This work, as part of the European Science Foundation EUROCORES Programme
CNCC, was supported by funds from NWO and the EC Sixth Framework
Programme under contract no. ERAS-CT-2003–980409.
I am much indebted to the discussions at the CNCC meetings, and to comments
on previous versions by Michael Frijda, Machiel Keestra, Julian Kiverstein, Batja
Mesquita, Erik Rietveld, and Till Vierkant.

REFERENCES
Abu-Lughod, L. (1986). Veiled sentiments. Berkeley: University of California Press.
Adler, N. (2002). The Gulag survivor: Beyond the Soviet system. London: Transaction.
Ainslie, G. (2001). Breakdown of will. Cambridge: Cambridge University Press.
Akerlof, G. A., & Shiller, R. J. (2009). Animal spirits: How human psychology drives the
economy, and why it matters for global capitalism. Princeton, NJ: Princeton University
Press.
Apfelbaum, E. (2000). And now what, after such tribulations? Memory and dislocation in
the era of uprooting. American Psychologist, 55, 1008–1013.
Apfelbaum, E. (2002). Uprooted communities, silenced cultures and the need for legacy.
In V. Walkerdine (Ed.), Challenging subjects: Critical psychology for a new millennium.
London: Palgrave.
Asadi, H. (2010). Letters to my torturer: Love, revolution, and imprisonment in Iran.
Oneworld.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman.
Baumeister, R. F., Muraven, M., & Tice, D. (2000). Ego depletion: A resource model of
volition, self-regulation, and controlled processing. Social Cognition, 18, 130–150.
Baumeister, R. F. (2008). Free will in scientific psychology. Perspectives on Psychological
Science, 3, 14–19.
Bayley, J. (1998). Iris: a memoir of Iris Murdoch. London: Duckworth.
Berridge, K. C. (2004). Motivation concepts in behavioral neuroscience. Physiology and
Behavior, 81, 179–209.
Berridge, K. C., & Winkielman, P. (2003). What is an unconscious emotion? (The case for
unconscious “liking”). Cognition and Emotion, 17, 181–211.
Browning, C. R. (1993). Ordinary men: Reserve Police Battalion 101 and the final solution in
Poland. New York: Harper Perennial.
Buber-Neumann, M. (1989). Milena. London: Collins Harvill.


Calzada, V. F. (no date). Flaca Alejandra.
Campos, J. J., Frankel, C.B., & Camras, L. (2004). On the nature of emotion regulation.
Child Development, 75, 377–394.
Castillo, C., & Girard, G. (1993). La Flaca Alejandra (film).
Damasio, A. (2000). The feeling of what happens: Body, emotion, and consciousness. London:
Random House.
Dennett, D. C. (1978). Brainstorms: Philosophical essays on mind and psychology.
Montgomery, VT: Bradford Books.
Dennett, D. C. (1984). Elbow room: The varieties of free will worth wanting. Oxford:
Clarendon Press.
Dennett, D. C. (2004). Freedom evolves. London: Penguin.
Dijksterhuis, A. (2004). Think different: The merits of unconscious thought in prefer-
ence development and decision making. Journal of Personality and Social Psychology, 87,
586–598.
Durkheim, E. (1897). Le suicide: Étude sociologique. Paris: Alcan.
Edelman, G. M., & Tononi, G. (2000). Consciousness: How matter becomes imagination.
London: Penguin.
Epstein, S. (1991). The self-concept, the traumatic neurosis, and the structure of personal-
ity. In D. Ozer, J. M. Healey Jr., & A. J. Stewart (Eds.), Perspectives on personality (Vol. 3,
A, 63–98). London: Kingsley.
Forstmann, B. U., Wolfensteller, U., Derrfuss, J., Neumann, J., Brass, M., Ridderinkhof, K.
R., & Von Cramon, Y. (2008). When the choice is ours: Context and agency modulate
the neural basis of decision-making. PLoS ONE, 3(4).
Frankenhaeuser, M. (1975). Experimental approaches to the study of catecholamines and
emotion. In L. Levi (Ed.), Emotions: Their parameters and measurement (209–234).
New York: Raven Press.
Frankfurt, H. G. (1988). The importance of what we care about. New York: Cambridge
University Press.
Freeman, W. J. (1999). How brains make up their minds. London: Weidenfeld and
Nicolson.
Frijda, N. H. (1986). The emotions. Cambridge: Cambridge University Press.
Frijda, N. H. (2007). The laws of emotion. Mahwah, NJ: Erlbaum.
Frijda, N. H. (2009). On the nature and function of pleasure. In K. Berridge &
M. Kringelbach (Eds.), Pleasures of the brain: The neural basis of sensory rewards (99–
112). Oxford: Blackwell.
Frijda, N. H. (2010a). Impulsive action, and motivation. Biological Psychology, 84,
570–579.
Frijda, N. H. (2010b). Not passion’s slave. Emotion Review, 2, 68–75.
Gallistel, C. R. (1980). The organization of action: A new synthesis. Hillsdale, NJ: Erlbaum.
Goldhagen, D. J. (1996). Hitler’s willing executioners. New York: Knopf.
Gray, J. A., & McNaughton, N. (2000). The neuropsychology of anxiety: An enquiry into the
functions of the septo-hippocampal system. Oxford: Oxford University Press.
Janoff-Bulman, R. (1992). Shattered assumptions: Towards a new psychology of trauma. New
York: Free Press.
Kertesz, I. (1997). Kaddish for a child not born. Evanston, IL: Northwestern University
Press.
Kruglanski, A. W. (2004). The psychology of closed mindedness. New York: Psychology
Press.
Kruglanski, A. W., Chen, X., Dechesne, M., Fishman, S., & Orehek, E. (2008). Fully com-
mitted: Suicide bombers’ motivation and the quest for personal significance. Political
Psychology, 30, 331–357.
Kuhl, J., & Koole, S. L. (2004). Workings of the will: A functional approach. In J. Greenberg,
S. L. Koole, & T. Pyszczynski (Eds.), Handbook of experimental existential psychology
(411–430). New York: Guilford Press.
Langhoff, W. (1935). Die Moorsoldaten (reprinted, 1958, Stuttgart: Verlag Neuer Weg).
Lazarus, R. S. (1991). Emotion and adaptation. New York: Oxford University Press.
Levi, P. (1988). The drowned and the saved. New York: Simon and Schuster.
Lewis, M. D. (2005). Bridging emotion theory and neurobiology through dynamic sys-
tem modeling. Behavioral and Brain Sciences, 28, 105–131.
Lewis, M. D. (2011). Memoirs of an addicted brain: A neuroscientist examines his former life
on drugs. Toronto: Doubleday Canada.
Lewis, M. D., & Todd, R. M. (2005). Getting emotional: A neural perspective on emo-
tion, intention, and consciousness. Journal of Consciousness Studies, 12, 210–235.
Lutz, A. S., & Thompson, E. (2003). Neurophenomenology: Integrating subjective experi-
ence and brain dynamics in the neuroscience of consciousness. Journal of Consciousness
Studies, 10, 31–52.
Mesquita, B., & Albert, D. (2007). The cultural regulation of emotions. In J. Gross (Ed.),
Handbook of emotion regulation (486–504). New York: Guilford Press.
Metzinger, T. (2003). Being no one: The self-model theory of subjectivity. Cambridge, MA:
MIT Press.
Milgram, S. (1974). Obedience to authority. New York: Harper and Row.
Mook, D. G. (1996). Motivation: The organization of action. 2nd ed. New York: Norton.
Mulder, G. (1980). The heart of mental effort. Ph.D. diss. University of Groningen.
Nadrani, M. (2005). Les sarcophages du complexe [Years of lead]. Casablanca: Éditions Al
Ayam.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on
mental processes. Psychological Review, 84, 231–259.
Panksepp, J. (1998). Affective neuroscience. Oxford: Oxford University Press.
Panksepp, J., & Northoff, G. (2009). The trans-species core SELF: The emergence of
active cultural and neuro-ecological agents through self-related processing within
subcortical-cortical midline networks. Consciousness and Cognition, 18, 193–215.
Prinz, W. (2004). Kritik des freien Willens: Bemerkungen über eine sociale Institution.
Psychologische Rundschau, 55, 198–206.
Prinz, W. (2006). Free will as a social institution. In S. Pockett, W. P. Banks, & S. Gallagher
(Eds.), Does consciousness cause behaviour? Cambridge, MA: MIT Press.
Prunier, G. (1995). The Rwanda crisis: History of a genocide. London: Hurst.
Reykovski, J. (2001). Justice motive and altruistic helping. In M. Ross & D. T. Miller
(Eds.), The justice motive in everyday life. New York: Cambridge University Press.
Richard, R., Van der Pligt, J., & De Vries, N. K. (1996). Anticipated affect and behavioral
choice. Basic and Applied Social Psychology, 18, 111–129.
Rietveld, E. (2008). Situated normativity: The normative aspect of embodied cognition in
unreflective action. Mind, 117, 973–1001.
Rimé, B. (2005). Le partage social des émotions [Social sharing of emotions]. Paris: Presses
Universitaires de France.
Sanders, A. F. (1998). Elements of human performance: Reaction processes and attention in
human skill. Mahwah, NJ: Erlbaum.
Searle, J. R. (2007). Freedom and neurobiology. New York: Columbia University Press.
Shand, A. F. (1920). The foundations of character: A study of the emotions and sentiments.
London: Macmillan.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological
Review, 63, 129–138.
Slauerhoff, J. (1961). Verzamelde gedichten [Collected poems] (Vol. 2, p. 512). The Hague:
Nijgh & Van Ditmar (present author's Dutch-English translation).
Sonnemans, J., & Frijda, N. H. (1995). The determinants of subjective emotional inten-
sity. Cognition and Emotion, 9, 483–507.
Stern, J. (2003). Terror in the name of God: Why religious militants kill. New York:
HarperCollins.
Toates, F. M. (1986). Motivational systems. Cambridge: Cambridge University Press.
Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: MIT Press.
Wilson, T. D. (2002). Strangers to ourselves. Cambridge, MA: Harvard University Press.
Withuis, J. (2005). Erkenning. Van oorlogstrauma naar klaagcultuur [Recognition: From
war trauma to the culture of complaining]. Amsterdam: Bezige Bij.
Withuis, J. (2008). Weest manlijk, zijt sterk: Pim Boellaards (1903–2001). Het leven van een
verzetsheld [Be manly, be strong: Pim Boellaards (1903–2001) The life of a resistance
hero]. Amsterdam: Bezige Bij.
12

Action Control by Implementation Intentions


The Role of Discrete Emotions

S A M J. M AG L I O , P E T E R M . G O L LW I T Z E R ,
AND GABRIELE OETTINGEN

INTRODUCTION
At the end of the 10-year Trojan War, its hero Odysseus was exhausted and desper-
ate to return home to Ithaca. The road home would prove to be as difficult as the war
itself, fraught with challenges and temptations. None of these better demonstrates
Odysseus’ effective action control than his encounter with the Sirens. Known for
their beautiful song—capable of tempting people into certain death—the Sirens
were located on the path between Odysseus' ship and his home. The ship was approach-
ing fast, and Odysseus devised a clever but simple plan: he ordered his crew to place
wax in their ears, rendering them incapable of hearing the Sirens’ song, and then to
tie him to the mast of the ship, from which he would be unable to escape regardless
of how strong the impending temptation might be. His ship neared the island of the
Sirens, and the alluring song proved to be even more tempting than Odysseus had
anticipated. He struggled to work free from the mast but remained securely in place.
Before long, they had successfully sailed beyond the Sirens and were one step closer
to attaining the goal of returning home safely.
In the modern era, this same principle of finding means by which to succeed in
goal pursuit has become a major theme of research within the domains of motiva-
tion and self-regulation (Gollwitzer and Moskowitz 1996; Oettingen and Gollwitzer
2001). This research has drawn an important distinction between the setting of
appropriate goals and the effective striving for goal attainment, and this chapter will
focus primarily upon the latter. To return to the example of Odysseus, he had already
chosen the goal of successfully returning home. In the service of this goal, he con-
sciously willed an explicit plan—having himself tied to the mast of his ship. From
there, however, he had in a sense surrendered his conscious intent to nonconscious
control: though his conscious will had changed (e.g., to succumb to the temptation
of the Sirens), the bounds of the rope remained, guiding his behavior without his
conscious intent. From our perspective, the rope provides a simple metaphor for the
form and function of planning that specifies when, where, and how to direct action
control in the service of long-term goals. This chapter will describe a specific (yet
broadly applicable) type of planning: the formation of implementation intentions,
or if-then plans that identify an anticipated goal-relevant situation (e.g., encounter-
ing a temptation) and link it to an appropriate goal-directed response (e.g., coping
with temptations). In so doing, we will first develop a definition of such plans and
elaborate upon their effects and effectiveness, especially as they operate outside
of conscious awareness. Subsequently, we turn our consideration to an emerging
topic within the domain of planning—the emotional precursors to the formation
of plans.

Goal Intentions and Implementation Intentions


In working toward set goals, Gollwitzer and colleagues have suggested that merely
wanting something is often not sufficient to enable goal attainment. For example,
what would have come of Odysseus if his mental preparation for the goal of return-
ing home had stopped there? This is what Gollwitzer (1993, 1999) has identified as
a goal intention, which takes the structure of “I intend to reach Z,” with Z relating
to a certain outcome or behavior to which the person has committed him- or her-
self. However, Odysseus went one step further, furnishing his goal intention with
a plan. To form an implementation intention (or if-then plan; Gollwitzer 1999),
the person must identify both an anticipated goal-relevant situational cue (i.e., the
if-component) and a proper goal-directed response (i.e., the then-component) and
link the two. Thus, implementation intentions follow the form “if situation X arises,
then I will perform the goal-directed response Y.”
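To make this structure concrete, here is a minimal illustrative sketch in Python (not part of the authors' account; all names and example cues are invented for illustration). It models an implementation intention as a pre-formed link from an anticipated situational cue to a response, so that the cue retrieves the response by simple lookup, whereas a mere goal intention leaves the agent to deliberate on the spot.

    # Illustrative sketch only: an if-then plan as a cue-to-response link.
    # A goal intention names only the desired outcome ("I intend to reach Z");
    # an implementation intention adds "if situation X arises, then I will do Y",
    # so encountering X retrieves Y without fresh deliberation.

    goal_intention = "return home safely"

    implementation_intentions = {
        # if-component -> then-component
        "the Sirens' song is heard": "remain bound to the mast",
        "crew asks to change course": "refuse and point to the plan",
    }

    def respond(situation: str) -> str:
        # Automatic route: a preselected cue directly triggers the planned response.
        if situation in implementation_intentions:
            return implementation_intentions[situation]
        # Default route: no plan covers this situation, so deliberation is needed.
        return f"deliberate about how '{goal_intention}' applies to '{situation}'"

    print(respond("the Sirens' song is heard"))   # -> remain bound to the mast
    print(respond("a storm approaches"))          # -> deliberate about ...

The only point of the sketch is that the costly step (deliberation) is moved to the moment of plan formation; at execution time the behavior is simply read off the pre-established link.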
The furnishing of a goal intention with an implementation intention affords
the person a better chance of ultimately attaining the desired goal. Gollwitzer and
Sheeran (2006) conducted a meta-analysis of 94 independent studies involving
more than 8,000 participants and reported an effect size of d = .65. This medium-to-
large effect size represents the additional facilitation of goal achievement by imple-
mentation intentions compared with goal intentions alone. It is important to note
that goal intentions alone have a facilitating effect on behavior enactment (Webb
and Sheeran 2006). As a result, the implementation intention effect, arising in addi-
tion to the goal intention effect, is not only robust but also quite substantial.

Implementation Intentions as Strategic Automaticity in Goal Pursuit


Given how well they work, we next explore why implementation intention effects
come about. A core component of the answer to this question is the translation of
a conscious act of will (the formation of the plan) to nonconscious or automatic
control of action (the execution of the plan). As we have described, the formation
of an implementation intention requires the selection of a critical future situation,
the corresponding behavioral response, and the link between the two (Gollwitzer
1999). In support of this coactivation, studies have indicated that implementation
intentions forge a strong association between the specified opportunity and the
specified response (Webb and Sheeran 2007). As a result, the initiation of the
goal-directed response specified in the if-then plan becomes automated. By auto-
mated, we mean that this behavior exhibits features of automaticity, including
immediacy, efficiency, and lack of conscious intent. Said differently, the person fac-
ing the critical situation does not have to actively decide how to behave (e.g., suc-
cumb to the temptation or not). Like Odysseus bound by ropes to the mast, such a person's
previous act of conscious and deliberate will in forming the plan has preempted
willing in the critical situation: the prescribed behavior is executed automatically. Such
automatic, predetermined behavior stands in stark contrast to people who have
formed mere goal intentions.
Empirical evidence is consistent with this conception of strategic automaticity.
If-then planners act quickly (Gollwitzer and Brandstätter 1997, Study 3), deal effec-
tively with cognitive demands (Brandstätter, Lengfelder, and Gollwitzer 2001), and
do not need to consciously intend to act at the critical moment (Sheeran, Webb,
and Gollwitzer 2005, Study 2). In addition to this behavioral readiness, research on
implementation intentions has also observed a perceptual readiness for the speci-
fied critical cues (e.g., Aarts, Dijksterhuis, and Midden 1999; Webb and Sheeran
2007). In sum, implementation intentions allow the person to readily see and seize
good opportunities to move toward their goals. Forming if-then plans thus auto-
mates goal striving (Gollwitzer and Schaal 1998) by strategically delegating the
control of goal-directed responses to preselected situational cues with the explicit
purpose of reaching one’s goals. The cool, rational agent engages an a priori strategy
to take conscious control away from the hot, vulnerable future self.

Using Implementation Intentions to Solve Action Control Problems


As we have suggested, implementation intentions facilitate goal striving by auto-
mating behavioral responses upon encountering situational cues. Within the realm
of goal implementation, there are a host of especially challenging problems that can
hinder progress toward goal attainment. Research over the past decade has exam-
ined the effects of implementation intentions in remedying such problems. Though
such effects are wide-reaching, we here focus on a handful of specific issues: starting
on a goal, shielding a goal, allocating resources, and application to special challenges
and populations.
Getting Started. Having set and committed to a goal, the first major hindrance
can be getting started on work toward achieving the goal; evidence suggests that
this problem can be solved effectively by forming implementation intentions. For
instance, Oettingen, Hönig, and Gollwitzer (2000, Study 3) observed that peo-
ple who furnished task goals (i.e., taking a concentration test) with implementa-
tion intentions were better able to perform the task on time (e.g., at 10 a.m. every
Wednesday over four straight weeks). Further, implementation intentions may be
particularly effective in fostering goal striving that is unpleasant to perform. For
instance, the goals to perform regular breast examinations (Orbell, Hodgkins, and
Sheeran 1997), resume functional activity after joint replacement surgery (Orbell
and Sheeran 2000), recycle (Holland, Aarts, and Langendam 2006), and engage
in physical exercise (Milne, Orbell, and Sheeran 2002) were all more readily acted
upon when people had furnished these goals with implementation intentions.
Implementation intentions also were found to help attainment of goal intentions
where it is easy to forget to act (e.g., regular intake of vitamin pills; Sheeran and
Orbell 1999).
Goal Shielding. Ongoing goals require that people keep striving for the goal over
an extended period of time, and implementation intentions can facilitate the shield-
ing of such goal striving from interferences that stem from inside or outside the
person (Gollwitzer and Schaal 1998). For instance, imagine a person who wants to
avoid being unfriendly to a friend who is known to make sudden outrageous requests
during casual conversations. To meet the goal of having an undisrupted casual con-
versation with her friend, the person may form one of the following implementation
intentions. She can focus on preventing the unwanted response of being unfriendly
by forming the implementation intention either to ignore the outrageous request or
to stay calm in the face of the request. Alternatively, she can focus on strengthening
the striving for the focal goal (i.e., bringing the casual conversation to a successful
ending) by planning it out in detail; for instance, she may form if-then plans that
cover how the casual conversation with the friend is to unfold from the beginning
to its successful ending (Bayer, Gollwitzer, and Achtziger 2010).
Allocating Resources. An additional problem in goal striving is the failure
to disengage from one goal in order to direct limited resources to other goals.
Implementation intentions have been found to facilitate such disengagement and
switching. Henderson, Gollwitzer, and Oettingen (2007) showed that implemen-
tation intentions can be used to curb the escalation of behavioral commitment
commonly observed when people experience failure with a chosen strategy of goal
striving. Furthermore, as implementation intentions subject behavior to the direct
control of situational cues, the self should not be involved when action is controlled
by implementation intentions. Therefore, the self should not become depleted
(Muraven and Baumeister 2000) when task performance is regulated by implemen-
tation intentions, and thus individuals using implementation intentions should not
show overextension effects in their limited cognitive resources. Within different
paradigms, participants who had used implementation intentions to regulate behav-
ior in a first task did not show reduced self-regulatory capacity (i.e., depletion) in a
subsequent task (e.g., Webb and Sheeran 2003). Thus, implementation intentions
successfully preserved self-regulatory resources as demonstrated by greater persis-
tence on subsequent difficult tasks (i.e., solving difficult anagrams).
Special Challenges and Populations. Recent research has shown that imple-
mentation intentions ameliorate action control problems even when goal striving is
limited by conditions that seem quite resistant to change by self-regulatory efforts
(summary by Gollwitzer and Oettingen 2011). For instance, it was observed that
implementation intentions facilitated achieving high scores on math and intelligence
tests (Bayer and Gollwitzer 2007), even though such performances are known to be
limited by a person’s respective capabilities. Implementation intentions have also
helped people succeed in sports competitions (Achtziger, Gollwitzer, and Sheeran
2008, Study 2) and negotiations over limited resources (Trötschel and Gollwitzer
2007), even though in such competitive situations a person’s goal striving is limited
by the opponents’ behavior. Moreover, implementation intentions were found to
help people’s goal striving even in cases where effective goal striving is threatened
by competing habitual responses; this seems to be true no matter whether these
automatic competing responses are behavioral (e.g., Cohen et al. 2008; Mendoza,
Gollwitzer, and Amodio 2010), cognitive (e.g., Gollwitzer and Schaal 1998; Stewart
and Payne 2008), or affective (e.g., Schweiger Gallo et al. 2009) in nature. These lat-
ter findings suggest that forming implementation intentions turns top-down action
control by goals into bottom-up control by the situational cues specified in the if-
component of an implementation intention (Gilbert et al. 2009), and they explain
why special samples that are known to suffer from ineffective effortful control of
their thoughts, feelings, and actions still benefit from forming implementation
intentions. Examples include heroin addicts during withdrawal and schizophrenic
patients (Brandstätter, Lengfelder, and Gollwitzer 2001, Studies 1 and 2), fron-
tal lobe patients (Lengfelder and Gollwitzer 2001), and children with ADHD
(Gawrilow and Gollwitzer 2008).

Summary
In this section, we have described how forming implementation intentions—speci-
fying the where, when, and how of performing a goal-directed response—facilitates
the control of goal-relevant action. In going beyond a mere goal intention, the per-
son who forms an implementation intention creates a crucial link between a critical
situational cue and a desired behavioral response. The result is that the prescribed
behavior is executed automatically (i.e., immediate, efficient, and without further
conscious intent), preventing the fallible person in the hazardous situation from
straying from the desired path. As Odysseus was bound to the mast of his ship by his
“plan,” so too do implementation intentions determine behavioral responding ahead
of time. The result, with respect to the overarching goal, is an enhanced likelihood
of successfully attaining that goal. This is accomplished by any of several applica-
tions of implementation intentions, including to issues of getting started, shielding
the goal from competing concerns, appropriately allocating one’s limited resources
toward the goal, and even overriding special challenges (e.g., habitual problems)
and the difficulties faced by special populations (e.g., children with ADHD). In
sum, the self-regulatory exercise of furnishing goal intentions with implementation
intentions provides a simple yet effective means of managing one’s goal striving in
the interest of achieving desired outcomes.

PRECURSORS TO PLANNING
As documented in the previous section, research spanning more than two decades
has offered a clear prescription for people committed to reaching a desired
goal: the formation of if-then plans to enhance goal striving. That is, the primary
empirical paradigm has people furnish goal intentions with if-then plans and then
observes the benefits they enjoy for goal striving. Despite identifying a host of factors
that contribute to the downstream consequences of forming these implementation
intentions, relatively little attention has been devoted to understanding the circum-
stances under which people may spontaneously generate them. In this section,
we offer an initial attempt to reverse this trend. We suggest that the experience of
certain specific (or discrete) emotions provides an insight into understanding why
and how people may engage in the act of planning on their own. To develop our
theoretical perspective, we first define what we mean by discrete emotion, relate
emotion to an established precursor to plan formation, and then use this connec-
tion to make predictions for behavior. As we will suggest, the relation between emo-
tion and planning provides a unique opportunity to investigate the interrelations
among motivation, emotion, cognition, and action. Ultimately, by capitalizing on
emotional experience, we suggest that these feeling states may play an important
role in the goal pursuit process.

The Trouble with Emotions


To understand what is meant by emotion, it must first be distinguished from mood
states. Whereas moods tend to arise from nonspecific sources and last for a rela-
tively long period of time, emotions are more intense but fleeting feeling states that
can be traced back to specific causes. For example, think of the difference between
spending an entire day in a bad mood versus being made briefly afraid by a back-
firing car. Furthermore, those short-lived emotions must be further subdivided by
valence into positive emotions and negative emotions. That is, receiving a gift and
receiving an insult are far from the same type of experience. For the purposes of
the present chapter, we will investigate only negatively valenced emotions and their
implications for planning. Nevertheless, mounting research speaks to the necessity
of parsing further still the realm of negative emotion into specific or discrete emo-
tions (e.g., Higgins 1997; Lerner and Keltner 2001; Lerner, Small, and Loewenstein
2004; Tiedens and Linton 2001). This is because discrete negative emotions vary
with respect to the types of situations that elicit them and the style of cognition or
appraisal that they activate (Lerner and Keltner 2000; Smith and Ellsworth 1985),
a point to which we will momentarily return.
But first, having established a definition of what we mean by negative discrete
emotions, we next ask why we would expect any benefit to come from them. After
all, a large body of literature speaks to the detrimental consequences of negative
emotion for thoughts and behaviors. To sample only a few, negative emotions can
increase impulsivity at the expense of long-term interests (Loewenstein 1996) and
compromise rational decision making (Damasio 1994; Shiv et al. 2005). Sadness
can enhance the accessibility of other sad thoughts and prompt depressive rumina-
tion (Bower 1981; Nolen-Hoeksema, Morrow, and Fredrickson 1993), and anger
can decrease risk estimates and increase risk-taking behaviors (Lerner and Keltner
2000, 2001). Given these effects, people commonly attempt to reduce their inten-
sity or duration through a process of emotion regulation (Frijda, this volume; Gross
1998, 2007).
Without denying the potentially detrimental consequences of negative emotions
(specifically, sadness and anger), we suggest that, in putting them to work in the
service of a goal, they may provide practical benefits as well. This possibility seems
important to explore given how intricately connected emotional experience is to the
process of implementing goals. To date, the main theme on the topic of emotion and
motivation has explored the role of emotion in setting goals. For example, individu-
als prioritize goals expected to yield positive emotion (Oettingen and Gollwitzer
2001; Custers and Aarts 2005), base their initiation of goal-directed action on these
emotions (Bagozzi, Baumgartner, and Pieters 1998), and consult their emotions as
indicators of progress toward a goal (Carver and Scheier 1990; Schwarz and Clore
1983). However, as our primary concern here is the planning and implementation
of goals, we address the relatively unexamined question of how emotion influences
striving for goals.

Emotions Reconsidered
To understand how different negative emotions can have different consequences—
good or bad—we first trace negative emotional experience back to its source. As we
mentioned earlier, discrete negative emotions (like sadness and anger) are concep-
tualized as discrete because they arise from fundamentally different types of sources
and activate different patterns of cognition and behavior. Let’s take two goal-relevant
examples, both related to buying a car. In the first scenario, imagine driving across
town to your favorite dealership with your heart set on buying the newest model of
your favorite make of car. You can practically feel the soft new leather seats and whiff
that new car smell. But, when you arrive, you learn that the make you were hoping for
has been discontinued. Driving back home, bemoaning your current car’s cracked
windshield and puny horsepower, it isn’t hard to intuit a feeling state of sadness. On
the other hand, your experience at the dealership could have been much different.
Instead, imagine being told by the shifty salesman in a plaid jacket that the price of the
new model has been increased as a result of the inclusion of necessities—rust-
proofing, customized floor mats—and that the price is nonnegotiable. Certain that
the only function of these necessities is to boost his commission, you storm out of
the dealership. You’re again driving home, again in the same dull car you were hop-
ing to replace, but the feeling state is now different—it is one of anger.
How might the patterns of thought in response to the events at the dealership
differ between the two situations? Further, how will you respond—in thought and
action—to being cut off in traffic on your drive back home depending on whether
you just experienced scenario one or two? In response to discrete negative emo-
tions, research has suggested that the patterns of thought prompted by an emotion
extend beyond the emotion elicitor to novel situations and judgments. Within this
tradition, no other pair of emotions has produced such discrepant results on judg-
ment tasks as sadness and anger. This carryover effect has been documented in the
divergent effects of sadness and anger on a host of cognitive assessments: causal
judgment (Keltner, Ellsworth, and Edwards 1993), stereotyping (Bodenhausen,
Sheppard, and Kramer 1994), and expectations and likelihood estimations
(DeSteno et al. 2004; DeSteno et al. 2000).
But why do we observe these carryover effects? And why do they differ for sadness
and anger? The appraisal tendency framework (Lerner and Keltner 2000, 2001) sug-
gests a specific mechanism by which the experience of incidental emotion impacts
subsequent, unrelated judgments. Prior research on appraisal theory suggested that
discrete emotions are characterized by different central themes—what it means, at
the core, to experience that emotion (Lazarus 1991; Smith and Ellsworth 1985).
In turn, the way a person thinks about the emotion elicitor (vis-à-vis these core
themes) can be conceptualized as a specified cognitive appraisal pattern (Ortony,
Clore, and Collins 1988; Smith and Ellsworth 1985; Smith and Lazarus 1993). The
appraisal tendency framework posits that this pattern of thinking becomes gener-
ally activated and, in turn, is translated and applied beyond the emotion elicitor.
Consequently, the salient theme underlying the experience of an emotion (and the
cognitive appraisal pattern associated with it) colors later judgments.
The central themes of sadness and anger are, respectively, the experience of an
irrevocable loss and the experience of an insult or injustice (Berkowitz and Harmon-
Jones 2004; Keltner, Ellsworth, and Edwards 1993; Lazarus 1991). A central com-
ponent underlying both is the sense of certainty, but in opposite directions: whereas
sadness is characterized by uncertainty of the emotion’s cause (attributed vaguely
to situational forces), anger is characterized by a strong sense of certainty and the
responsibility of a specific other person (Ortony, Clore, and Collins 1988). As such,
sadness prompts a desire for better understanding, which gives rise to cautious and
evenhanded information processing (Bless et al. 1996; Clore and Huntsinger 2009;
Tiedens and Linton 2001). Conversely, anger is associated with heuristic processing
and stronger feelings of optimism and control (Lerner and Keltner 2001; Tiedens
and Linton 2001). Thus, once a discrete negative emotion has been induced, its impact
is not confined to its source but carries over to new targets. Essentially,
the divergent patterns of judgment between people experiencing sadness and anger
arise from the application of different patterns of cognition to new situations. From
this perspective, it is understandable that, for example, anger may exaggerate risk
taking or impulsiveness. However, given the appropriate outlet, might these emo-
tions be successfully channeled toward beneficial action?

Action Phases and Mindsets


To answer this question, we examine the cognition-behavior link described in the
mindset model of action phases (Gollwitzer 1990, 2012). The model postulates
that goals are pursued via successive stages—or action phases—and that each phase
is defined by the distinct task to be performed during it. Additionally, a distinct cog-
nitive orientation—or mindset—corresponds to each phase and facilitates comple-
tion of the specified task. In the first, predecisional stage (the phase prior to the
selection of a goal), the salient task is to choose the best goal to pursue. Accordingly,
the person is predisposed to process desirability- and feasibility-related informa-
tion about the options from an impartial and objective perspective and takes on a
deliberative mindset. Subsequently, having chosen a goal, the person in the postde-
cisional stage now seeks opportunities to initiate action in working toward attain-
ment of the chosen goal. Importantly, this stage can be further subdivided into two
successive substages. The first is conceptualized as preactional, whereby people have
chosen a goal and begin planning how to work toward it without actually having
started to do so. Subsequently, when they begin active, behavioral goal striving,
they enter into the actional phase. Rather than objective assessment, cognition in
both postdecisional phases is oriented toward effective goal striving, constituting an
implemental mindset (for reviews, Gollwitzer and Bayer 1999; Gollwitzer, Fujita,
and Oettingen 2004).
Empirical evidence has provided support for this theory by probing the contents
and patterns of thought characteristic of deliberative and implemental mindsets. In
order to facilitate successful goal selection, the deliberative mindset is characterized
by both voluntary generation of and selective attention toward outcome (i.e., goal)
value—specifically, its desirability and feasibility. Conversely, the implemental mind-
set generates and attends to information regarding situational specifics (the when,
where, and how) for initiating goal-directed behavior (Gollwitzer, Heckhausen,
and Steller 1990; Puca and Schmalt 2001; Taylor and Gollwitzer 1995). A second
theme of this research has considered information-processing differences between
the two mindsets. Relative to the deliberative mindset, the implemental mindset is
more susceptible to a number of cognitive biases, including illusory control over the
situation (Gollwitzer and Kinney 1989), reduced perceived vulnerability to prob-
lems (Taylor and Gollwitzer 1995), stronger attitudes (Henderson, de Liver, and
Gollwitzer 2008), and decreased openness to information (Fujita, Gollwitzer, and
Oettingen 2007; Heckhausen and Gollwitzer 1987). Overall, the evidence speaks
to the evenhanded processing of outcome-relevant information in the deliberative
mindset and biased appraisal driving goal-directed action initiation in the imple-
mental mindset.
From both a theoretical and methodological perspective, it is important to note
a central mechanism by which mindsets operate. The act of either deliberating over
a choice or trying to enact a choice that has been made activates separable cog-
nitive procedures associated with those separate tasks, and it is via this activation
that mindset effects can generalize to new situations. The predominant paradigm in
this tradition asks participants to first either elaborate upon an unresolved personal
problem or plan the implementation of a chosen project (creating a deliberative
or implemental mindset, respectively). Subsequently, the participant performs an
ostensibly unrelated task to measure the effect of the induced mindset on general
cognitive style (e.g., perceived control over a random event; Gollwitzer and Kinney
1989). As such, deliberative and implemental mindsets serve as procedural primes,
making salient distinct frameworks by which to interpret, assess, and act upon new
information.

Similarities between Discrete Emotions and Mindset


Taken together, these two research traditions suggest that the careful cognitive objec-
tivity of sadness closely matches that of a deliberative mindset, whereas the enhanced
optimism and control (i.e., bias) of anger is consistent with an implemental mindset.
Additionally, the cognitive patterns characteristic of both emotional experience and
mindset are not limited in relevance only to their point of origin. Instead, both trigger
unique modes of thought (termed appraisal tendency and procedural priming, respec-
tively) that enable them to generalize to new targets. We draw upon this observa-
tion in formulating the emotion as mindset hypothesis: the experience of sadness will
prompt deliberative consideration of a goal, and the experience of anger will prompt
implemental consideration. If anger indeed engenders the same patterns of thought
(e.g., biases) as the implemental mindset, it should similarly orient people toward
identifying opportunities to enact goal-directed action (see Gollwitzer, Heckhausen,
and Steller 1990). As we have already discussed, linking critical situations to goal-
directed responses constitutes if-then planning, or formation of implementation
intentions. The deliberative mindset, conversely, is oriented toward outcomes (“Is
this goal worth pursuing?”) rather than behaviors (“When/Where/How can I work
toward attaining this goal?”). Beyond the formation of plans, an implemental (ver-
sus deliberative) mindset should additionally enhance the effectiveness with which
existing plans are enacted. As we have described, the implemental mindset is charac-
terized by a general goal-enhancing bias (e.g., enhanced self-confidence). One con-
sequence of such bias is that when an opportunity for planned behavior execution is
made available, it is immediately taken. On the other hand, a person in a deliberative
mindset might instead reconsider whether this behavior (or even this goal) is in fact
the best course of action to take, compromising plan implementation.

EMPIRICAL SUPPORT
With the three studies reported next, our aim was to test this emotion as mind-
set hypothesis across two goal- and planning-relevant domains. For the first two
studies, we drew upon an established measure to assess degree of implemental
thought: the formation of plans (Gollwitzer 1990). The first study induces con-
scious emotion and examines whether anger yields formation of more implemen-
tation intentions than sadness. In Study 2, we conceptually replicated the effects
of Study 1 but by utilizing a different (nonconscious) emotion manipulation
prior to a modified measure of plan formation. In our third study, we examined
how anger and sadness influence the execution of behavior as prescribed by pre-
existing plans.

Emotion Induction and Plan Formation


Our first study tested the basic notion that the experience of conscious anger and
sadness would differentially affect the planning of goals. Specifically, based on our
theoretical perspective, we hypothesized that people experiencing anger would
form more plans than those experiencing sadness. To test this prediction, partici-
pants were recruited to take part in a study ostensibly related to perspective taking.
Their first task was to name their most important academic goal, after which they
performed a perspective-taking task that served as our emotion manipulation (e.g.,
Hemenover and Zhang 2004; Smith and Lazarus 1993). In the anger condition,
the protagonist was evicted from an apartment by a landlord without cause; in the
sadness condition, the protagonist experienced the death of a pet; in the no emo-
tion condition, the protagonist compiled a grocery list and shopped for the items.
Next, all participants completed a basic manipulation check, rating their present
feelings with respect to four anger-related adjectives (angry, annoyed, frustrated,
and irritated), three sadness-related adjectives (sad, gloomy, and down), two
other negative emotions (fearful, nervous), and two positive emotions (happy,
content). Subsequently, participants recalled the academic goal they had named
earlier and then performed a sentence stem completion task with respect to that
goal, which served as a measure of plan formation (Oettingen, Pak, and Schnetter
2001). The task presented them with eight different incomplete sentence stems
and asked them first to review each of the stems and then select and complete the
four that best matched their thinking about their goal by filling in the correspond-
ing blank lines. Four of the phrases constituted implementation intentions (e.g.,
“Specifically, . . . ”), whereas the other four related to broader goal consideration
(e.g., “All in all, . . . ”).
The results from the manipulation check indicated that the perspective-taking
task successfully induced discrete sadness in the sadness condition, discrete anger
in the anger condition, and slightly positive affect in the neutral affect condition.
Based upon selection of sentence stems, each participant received a score on the
planning measure from 0 to 4, with higher scores indicating more implementa-
tion intentions formed. Consistent with our hypothesis, participants in the anger
condition formed more plans than those in the sadness condition, with plan for-
mation among those in the neutral condition falling between the two emotion
conditions. Thus, as predicted, the experience of anger prompted a greater ten-
dency toward implemental thought (i.e., plan formation) than sadness in preparing
goal-directed action.

Emotion Priming and Plan Formation


While our first study found evidence for differences in planning between con-
sciously felt emotional states, we conducted a second study on plan formation
to extend the breadth of our emotion as mindset hypothesis to include noncon-
scious emotion. Recent evidence suggests that behavioral findings from conscious
manipulations of emotion are replicable using nonconscious means by which to
prime them (Winkielman, Berridge, and Wilbarger 2005; Zemack-Rugar, Bettman,
and Fitzsimons 2007). This affords the opportunity to explore how the mere con-
cepts of specific emotions can activate cognitive procedures (i.e., serve as proce-
dural primes), as has been independently documented in the domain of mindsets
(Gollwitzer, Heckhausen, and Steller 1990).
Participants took part in a study ostensibly related to how people resume think-
ing about their goals after a distraction. Their first task was to name one specific
goal that was currently important to them. They then read a newspaper article that
served as our emotion manipulation. We primed discrete sadness and anger using
a method that draws upon appraisal theory (Lerner and Keltner 2000; Smith and
Ellsworth 1985), emphasizing the cognitive procedures that define the core mean-
ing of the emotion. That is, to nonconsciously prime discrete sadness and anger,
participants in both conditions read the same newspaper article about an earth-
quake that occurred in Peru (adapted from Wegener and Petty 1994) and then
were asked a series of different questions related to both the emotional aspects of
the article and their own reactions to it. In the anger priming condition, the ques-
tions related to injustices that had occurred in the context of the earthquake and
the culpability of specific individuals. In the sadness priming condition, the ques-
tions related to the tragic aspects of the earthquake and its unpredictability. Next,
all participants were asked to indicate the extent to which the article had made
them angry and sad.
Subsequently, participants were asked to recall the goal they had named earlier
and then perform a sentence stem completion task with respect to that goal. The task
presented them with four different incomplete sentence stems and asked them first
to review each of the stems and then select and complete the one that best matched
their thinking about their goal by filling in the corresponding blank lines. Two of the
stems were formatted such that they explicitly linked situations to behaviors (e.g.,
“If _____ happens, then I will do _____”), whereas the other two identified only
outcomes and the potential value they offered (e.g., “If _____ is achieved, it will
_____”). The former were meant to represent the implemental mindset, whereas
the latter reflected the deliberative mindset. Thus, all participants chose only one
type of structure to represent their conceptualization of the goal.
The results from the manipulation check indicated that our nonconscious emo-
tion induction was successful (i.e., no differences in conscious sadness and anger
were observed between emotion conditions). Based upon their selection of sen-
tence stems, participants were each categorized as utilizing either a deliberative or
an implemental structure (i.e., forming or not forming an implementation inten-
tion). Again, the results for this task supported our emotion as mindset hypoth-
esis: those in the anger-prime condition were more than three times more likely
than those in the sadness-prime condition to choose an implementation intention.
Importantly, these results suggest that conscious and nonconscious emotions have
similar consequences for the planning of goal-directed action. Because participants
in the two conditions read the same newspaper article and rated their conscious
emotions similarly, the observed difference in degree of implemental thinking must
be due solely to the leading questions that followed the article. Thus, our second
study suggests that activation of the construct of sadness or anger is sufficient to
prompt goal conceptualization in a manner consistent with the deliberative or
implemental mindset, respectively.
In sum, these first two studies provide support for the emotion as mindset
hypothesis in terms of anger (versus sadness) inducing more preactional implemen-
tal thought. Specifically, by forming more plans for how to act on their goals, people
made to feel angry showed more behavior characteristic of a postdecisional—but
preactional—implemental orientation. Consistent with past theorizing described
earlier, we consider the formation of such implementation intentions to reflect a
conscious act of will with implications for future behavior: when people later
encounter the critical cue specified by their plans, they will execute the associated
behavior immediately and without conscious reflection. However, in the stud-
ies presented thus far, this claim amounts to little more than idle speculation. We
believe anger initiates a general implemental mindset, applicable to both the preac-
tional and actional stages of the postdecisional action phase. Therefore, in the next
study, we tested the latter claim: whether conscious emotion (i.e., sadness or anger)
would influence the automatic, nonconscious execution of behavioral scripts pre-
scribed by planning.
Plan Execution
Having established in the first two studies that the experience of state anger (versus
sadness) makes a person more likely to form an implementation intention, we next turn
to the question of how emotion influences acting upon existing plans. An implemen-
tal (versus deliberative) mindset should enhance the effectiveness with which existing
plans are enacted. As we have described, the implemental mindset is characterized by a
general goal-enhancing bias (e.g., increased self-confidence). One consequence of such
bias is that when an opportunity for planned behavior execution is made available, it is
immediately taken (i.e., occurs nonconsciously). On the other hand, a person in a delib-
erative mindset might instead reconsider whether this behavior (or even this goal) is in
fact the best course of action to take. This interruption of conscious deliberation hin-
ders plan execution. Thus, as an implemental (versus deliberative) mindset facilitates
the efficient execution of planned behavior, and as the experience of anger operates like
an implemental mindset, anger (versus sadness) should therefore enhance the benefi-
cial effect of planning by better enabling efficient action initiation. Thus, in an exten-
sion of our emotion as mindset hypothesis, we predict that a conscious anger (versus
sadness) induction will expedite reaction times in responding to critical trials of a go/
no-go task as specified by predetermined planning. We tested this prediction using a go/
no-go task consistent with past research (Brandstätter, Lengfelder, and Gollwitzer 2001). Participants were
instructed to press the “x” key as quickly as possible when numbers—but not letters—
were presented. They were assigned to one of six conditions in a 3 (sadness, anger, or
neutral affect) × 2 (goal intention or implementation intention) factorial design.
As in the first study, the cover story described the study as an experiment on
perspective taking. First, ostensibly to help their performance during a later ses-
sion of the task, participants were provided with one of two sets of instructions to
facilitate their responding to numbers. This constituted the intention manipulation.
All participants first said to themselves, “I want to react to numbers as quickly as
possible.” Then, half of the participants were instructed to say the following phrase
to themselves three times: “I will particularly think of the number 3” (goal inten-
tion). The other half of the participants repeated this phrase three times: “And if the
number 3 appears, then I will press the ‘x’ key particularly fast” (implementation
intention). All participants then performed one of three perspective-taking tasks
(emotion manipulations) and then rated their feeling states, both in a manner iden-
tical to Study 1. Following the emotion manipulation, the main session of the go/
no-go task was presented, lasting seven minutes.
As in Study 1, the manipulation check indicated that our emotion induction pro-
cedure successfully elicited differences in experiencing discrete sadness or anger.
We then calculated for each participant the mean reaction times to both neutral
numbers and the critical number 3. In general, participants responded faster to the
critical number 3 relative to the neutral numbers and faster to all numbers in the
implementation intention condition relative to those in the goal intention condi-
tion. Additionally, these main effects were qualified by an interaction between the
two factors such that responses were fastest to the critical numbers by those in the
implementation intention condition. This finding provided a replication of previ-
ous basic research on implementation intention effects.
To turn to consideration of discrete emotion, we observed the strongest imple-
mentation intention effect (i.e., speeded reaction time to the critical number) in the
anger condition. That is, it was when the implementation intention was coupled
with the optimal frame of mind (i.e., anger) that participants performed best on
the reaction time task. On the other hand, participants experiencing anger but with
only a goal intention performed much worse. For the sadness and neutral condi-
tions, we observed a weaker implementation intention effect. As such, we interpret
these results as evidence that anger facilitates action control in a manner similar
to an implemental mindset: by increasing the effectiveness with which preexisting
plans are implemented.
In summary, then, across each of the studies presented here, we observed evi-
dence consistent with the idea of emotion as mindset. The data suggest that anger
and sadness—in a manner similar to implemental and deliberative mindsets,
respectively—have robust but opposite effects on both the conscious (i.e., goal
planning, Studies 1 and 2) and the nonconscious (i.e., plan execution, Study 3)
aspects of goal pursuit. That is, anger more successfully enabled the preactional task
of formulating plans as well as the actional task of readily executing those plans in
the interest of attaining a set goal. Interestingly, the results of Study 2 suggest that
the mere activation of the emotion concept—its nonconscious priming—sufficed
to evoke the corresponding mindset. Thus, we observe effects at both conscious and
nonconscious levels for both emotion manipulation and action control (although the question of noncon-
scious emotion effects on plan execution remains open for future research). While
here we have investigated the independent components of planning and acting, a
longitudinal design warrants consideration of how they interact. For example, is a
plan formed while feeling anger better executed under anger as well? Are there condi-
tions under which anger hinders rather than facilitates the conscious planning and
automatic execution of behavior? What about nonplanned behavior? These issues
hint at the broader theoretical relevance of emotion in action control, considered in
the next section.

IMPLICATIONS AND OUTLOOK


Taylor and Gollwitzer (1995) foreshadowed the notion that emotion could be
brought to bear on goal pursuit by contending that “intermittent bouts of sadness,
frustration, poor mood, loss experiences, or stress may . . . be a time when people
have an opportunity to reflect relatively realistically on their talents, aspirations,
decisions, and goals” (225). Across three studies, the present investigation pro-
vides evidence in (qualified) support of this notion. In keeping with our emotion as
mindset hypothesis, we observed discrete sadness to engender a more deliberative
mindset, whereas anger predisposed people toward an implemental mindset.

What Good Is Sadness?


Each of the studies reported here assessed performance on an implemental mea-
sure after an emotion manipulation and found stronger effects under anger than
under sadness. Given widespread evidence for a strong effect of implementation
intentions to benefit goal achievement (Brandstätter, Lengfelder, and Gollwitzer
2001; Gollwitzer 1999; Gollwitzer, Fujita, and Oettingen 2004; Gollwitzer and
Sheeran 2006; Sheeran, Webb, and Gollwitzer 2005), it may be tempting to adopt
the maxim “When in doubt, get angry.” However, one must guard against
assuming that anger is the only emotion of use to the goal pursuer. Rather, our data
can only imply that the experience of anger provides a boost (via plan formation
and execution) to ongoing goal striving, as the participants in our studies worked
toward goals that already had been set.
What, then, is to be made of sadness? Said differently, what is it that our partici-
pants in the sadness conditions were doing instead of forming and quickly execut-
ing plans for behavior? From the perspective of the mindset model of action phases,
the setting of goals is just as important as their implementation (Gollwitzer 1990).
Participants in our sadness conditions manifested more deliberative goal consid-
erations, as they completed sentence stems that indicated their thoughts were ori-
ented toward outcome value (Study 2) and the bigger picture of what they wanted
to achieve (Study 1). Thus, they were more willing to (re)consider the goal they had
chosen rather than how to implement it. Though less effective for focused
goal striving, sadness can facilitate effective goal setting.
But what constitutes effective goal setting? When determining which goal to
pursue, people may consult their expectations of success to inform their decision,
as expectations offer a quick and simple summary judgment of whether invested
effort is likely to pay off in the form of ultimate goal attainment. Therefore, by defi-
nition, high-expectancy goals are those that are judged as more likely to be attained
(Bandura 1997; Heckhausen 1991). In order to set high-expectancy goals, Oettingen
and colleagues have prescribed the self-regulatory exercise of mentally contrasting
a desired, high-expectancy future outcome with the obstacles of reality currently
precluding the realization of the future. This procedure activates expectations of
success and creates strong commitment to realize future outcomes for which expec-
tations are high (Oettingen 2000, 2012; Oettingen et al. 2009; Oettingen, Pak, and
Schnetter 2001; Oettingen and Stephens 2009). Importantly, recent research has
found that self-initiated usage of this strategy is more likely following the induction
of sadness than following a neutral affect manipulation (Kappes et al. 2011). In tan-
dem with this research, the results from our studies suggest not a value judgment on
which emotion is best for goal pursuit but instead that sadness and anger each have
an important, distinct purpose in goal pursuit.
To add to this point, our data suggest that sadness is less conducive than anger to
direct action initiation. While the deliberative mindset is characteristic of the preac-
tional phase (prior to goal striving), the mindset model of action phases posits that
it is also evident in the postactional phase, where people assess the success or failure
of their goal striving (Gollwitzer 1990; Gollwitzer and Bayer 1999). Perhaps, then,
sadness facilitates the termination of goal striving and the subsequent assessment
of whether the chosen course of action was beneficial. As such, sadness may enable
disengagement from goals that cannot be attained, allowing for the reallocation of
limited resources (e.g., time and energy) toward other goals that are more likely to
yield successful attainment (Janoff-Bulman and Brickman 1982). Taken together,
these new possibilities offer exciting directions for future research.
Discrete Emotion Theory


The present research finds support for the emotion as mindset hypothesis, situated
at the intersection between cognition and action. As such, it fills an important gap
in theorizing to date on the downstream consequences of emotional experience.
From one perspective, discrete emotion is posited to have a direct effect on action
by activating automatic or reflexive scripts for potential behaviors to be taken, or
action tendencies (Frijda 1986); the actual course of action that is ultimately taken
comes from this subset of potential behaviors. The types of actions that become
activated have direct relevance for the emotion-eliciting situation and follow an
especially brief time line. To take fear as an example, hearing a startling noise might
automatically activate the action tendency to duck or take cover, and this behav-
ior could be subsequently executed quickly and with little to no conscious intent.
The speed with which such behavioral responses become activated—and, in turn,
implemented—speaks to the functionality of emotion from an evolutionary per-
spective, facilitating effective and potentially vital actions. At the same time, auto-
mated execution of all behavioral inclinations would be problematic. After all, you
wouldn’t hit the car dealer in the aforementioned example of anger—despite your
inclination to do so. Therefore, to understand how emotions function in the present
day, we must also understand how they can influence behavior beyond mere activa-
tion of action tendencies.
From a very different perspective, emotion may instead be conceptualized as exert-
ing an indirect force on action by providing a system of information or feedback that
informs future behavior (Baumeister et al. 2007). This model posits that the experi-
ence of emotion compels people to reflect on what actions were responsible for giving
rise to the emotion in the first place. A result of such cognitive reflection in response to
emotional experience in turn informs deliberative considerations of potential future
behaviors. That is, if someone cheats on a test and gets caught, they come to feel regret
or remorse. The negativity of this experience underlies the desire to understand where
it came from and ensure that it does not occur again in the future. As a result, the
person will refrain from cheating in the future in order to avoid a similar future nega-
tive emotional outcome. With the studies presented in this chapter, we offer a con-
ceptualization of emotion that lies between these two reflexive-reflective ends of the
spectrum. That is, we suggest that emotion may additionally affect the link between
cognition and action, as anger and sadness prompt different mindsets that differen-
tially guide subsequent behavior. Finally, our studies implicate both the experience
of discrete emotion states (Studies 1 and 3) and the nonconscious priming of them
(Study 2) as sufficient to instill the corresponding mindset.
Another future consideration within this line of research is to utilize an emotion
elicitor that is directly related to the activated goal. Such a methodological tweak
would be helpful in understanding whether our observed consequences of emotion
for goal striving extend to a more ecologically valid context. Our current set of stud-
ies explores only goals that are unrelated to the emotion elicitor, which is impor-
tant for addressing the transferability of emotion as mindset to new targets. However,
a more naturalistic emotion manipulation could ask participants to name a current
goal and then recall an instance related to the goal in which they were made sad
or angry. Evidence for our emotion as mindset hypothesis from such a paradigm
would more directly inform how people respond to emotional triggers in the envi-
ronment associated with their goals.

Discrete Emotions and Motivation


That anger and sadness activate separable processing styles is not a novel suggestion.
A growing body of literature speaks to the distinction between anger and sadness as
they relate to motivational tendencies. Carver and colleagues (Carver 2004; Carver
and White 1994) have identified the association between sadness and the behavioral
inhibition system (BIS) and between anger and the behavioral activation system
(BAS). As the names imply, the latter energizes behavior and action initiation, and
the former dampens this inclination. Recent evidence has pointed to separable neural
underpinnings of this effect (Harmon-Jones 2003; Harmon-Jones and Allen 1998;
Peterson, Shackman, and Harmon-Jones 2008). Taken together, this work on
motivation has differentiated anger from other negative emotions—
including sadness—by its connection to approach motivation, heightening rather than
reducing the inclination to initiate action (for a review, see Carver and Harmon-Jones
2009). Though mindset theory does not conceptualize its successive stages as avoid-
ance and approach motivation per se, the implications for action control are clear.
People in a deliberative mindset by definition have not yet taken action, signifying
a behavioral disposition of avoidance or withdrawal with respect to action initiation.
Conversely, people in an implemental mindset by definition are in the process of ini-
tiating action, which corresponds directly to approach motivation. Drawing upon this
framework to understand the motivational (as well as the cognitive) consequences of
emotion may lead to novel predictions and future explorations into the relationship
between discrete emotions and the self-regulation of goal pursuit.
Finally, the utilization of different emotion manipulations and measures would
provide insight into the breadth and applicability of the emotion as mindset hypoth-
esis. For example, reversing the order of our experimental protocol could be used to assess the bidi-
rectional relationship between emotion and mindset. We have demonstrated that an
emotion manipulation prompts behavior consistent with certain mindsets. However,
it would also be possible to follow a protocol consistent with the majority of mindset
research (e.g., Fujita, Gollwitzer, and Oettingen 2007; Heckhausen and Gollwitzer
1987; Henderson, de Liver, and Gollwitzer 2008) in which either a deliberative or
an implemental mindset is induced and subsequently an attempt is made to induce
sadness or anger in a crossed design. Perhaps people would be more responsive to a
manipulation of sadness following a deliberative mindset induction and more so to
one of anger following an implemental mindset induction. Results such as these would
speak to the proposed activation of similar cognitive and motivational systems.

CONCLUSION
In sum, we have presented implementation intentions as action plans that automate
efficient, goal-directed responding. The breadth of their effects has been well docu-
mented and has prompted the need to understand contextual factors that might
influence how ready people are to generate and use implementation intentions. The
research described here identified discrete emotional experience—specifically, that
of anger—as one contextual factor that gives rise to the formation and effective exe-
cution of implementation intentions. More broadly, in explicating the emotion as
mindset hypothesis, we provide an integration of discrete emotion theory and the
self-regulation of goal striving. Parsing the realm of negative emotion, we proposed
sadness and anger as distinct emotional experiences, each defined by separa-
ble cognitive and motivational components, corresponding to the successive stages
of the mindset model of action phases: the deliberative and implemental mindsets,
respectively. The findings from three studies supported this hypothesis, as anger
elicited greater planning for goal-directed behavior and superior plan effectiveness
relative to state sadness. This effect should inform future research in continuing to
explore the role of emotional experience in action control.

REFERENCES
Aarts, H., A. Dijksterhuis, and C. Midden. 1999. To plan or not to plan? Goal achievement
or interrupting the performance of mundane behaviors. European Journal of Social
Psychology 29:971–979.
Achtziger, A., Peter M. Gollwitzer, and Paschal Sheeran. 2008. Implementation intentions
and shielding goal striving from unwanted thoughts and feelings. Personality and Social
Psychology Bulletin 34:381.
Bagozzi, Richard P., Hans Baumgartner, and Rik Pieters. 1998. Goal-directed emotions.
Cognition and Emotion 12:1–26.
Bandura, A. 1997. Self-efficacy: The exercise of control. New York: Freeman.
Baumeister, Roy F., Kathleen D. Vohs, C. Nathan DeWall, and Liqing Zhang. 2007. How
emotion shapes behavior: Feedback, anticipation, and reflection, rather than direct
causation. Personality and Social Psychology Review 11:167–203.
Bayer, U. C., and P. M. Gollwitzer. 2007. Boosting scholastic test scores by willpower: The
role of implementation intentions. Self and Identity 6:1–19.
Bayer, U. C., P. M. Gollwitzer, and A. Achtziger. 2010. Staying on track: Planned goal striv-
ing is protected from disruptive internal states. Journal of Experimental Social Psychology
46:505–514.
Berkowitz, L., and Eddie Harmon-Jones. 2004. Toward an understanding of the determi-
nants of anger. Emotion 4:107–130.
Bless, Herbert, Gerald L. Clore, Norbert Schwarz, Verena Golisano, Christina Rabe, and
Marcus Wölk. 1996. Mood and the use of scripts: Does a happy mood really lead to
mindlessness? Journal of Personality and Social Psychology 71:665–679.
Bodenhausen, Galen V., Lori A. Sheppard, and Geoffrey P. Kramer. 1994. Negative affect
and social judgment: The differential impact of anger and sadness. European Journal of
Social Psychology. Special Issue: Affect in Social Judgments and Cognition 24:45–62.
Bower, Gordon H. 1981. Mood and memory. American Psychologist 36:129–148.
Brandstätter, V., A. Lengfelder, and Peter M. Gollwitzer. 2001. Implementation intentions
and efficient action initiation. Journal of Personality and Social Psychology 81:946–960.
Carver, C. S. 2004. Negative affects deriving from the behavioral approach system. Emotion
4:3–22.
Carver, C. S., and Eddie Harmon-Jones. 2009. Anger is an approach-related affect: Evidence
and implications. Psychological Bulletin 135:183–204.
Carver, C. S., and M. F. Scheier. 1990. Origins and functions of positive and negative
affect: A control-process view. Psychological Review 97:19–35.
Carver, C. S., and Teri L. White. 1994. Behavioral inhibition, behavioral activation, and
affective responses to impending reward and punishment: The BIS/BAS Scales. Journal
of Personality and Social Psychology 67:319–333.
Clore, G. L., and J. R. Huntsinger. 2009. How the object of affect guides its impact. Emotion
Review 1:39–54.
Cohen, A. L., U. C. Bayer, A. Jaudas, and Peter M. Gollwitzer. 2008. Self-regulatory strat-
egy and executive control: Implementation intentions modulate task switching and
Simon task performance. Psychological Research 72:12–26.
Custers, Ruud, and Henk Aarts. 2005. Positive affect as implicit motivator: On the non-
conscious operation of behavioral goals. Journal of Personality and Social Psychology
89:129–142.
Damasio, A. R . 1994. Descartes’ error: Emotion, reason, and the human brain. New York:
Grosset/Putnam.
DeSteno, David, Richard E. Petty, Derek D. Rucker, Duane T. Wegener, and Julia
Braverman. 2004. Discrete emotions and persuasion: The role of emotion-induced
expectancies. Journal of Personality and Social Psychology 86:43–56.
DeSteno, David, Richard E. Petty, Duane T. Wegener, and Derek D. Rucker. 2000. Beyond
valence in the perception of likelihood: The role of emotion specificity. Journal of
Personality and Social Psychology 78:397–416.
Frijda, Nico H. 1986. The emotions. New York: Cambridge University Press.
Fujita, Kentaro, Peter M. Gollwitzer, and Gabriele Oettingen. 2007. Mindsets and
pre-conscious open-mindedness to incidental information. Journal of Experimental
Social Psychology 43:48–61.
Gawrilow, C., and Peter M. Gollwitzer. 2008. Implementation intentions facilitate response
inhibition in children with ADHD. Cognitive Therapy and Research 32:261–280.
Gilbert, S., Peter M. Gollwitzer, A. Cohen, P. Burgess, and Gabriele Oettingen. 2009.
Separable brain systems supporting cued versus self-initiated realization of delayed
intentions. Journal of Experimental Psychology: Learning, Memory, and Cognition
35:905–915.
Gollwitzer, Peter M. 1990. Action phases and mind-sets. In Handbook of motivation and
cognition: Foundations of social behavior, edited by E. T. Higgins and R. M. Sorrentino.
New York: Guilford Press.
Gollwitzer, Peter M. 1993. Goal achievement: The role of intentions. European Review of
Social Psychology 4:141–185.
Gollwitzer, Peter M. 1999. Implementation intentions: Strong effects of simple plans.
American Psychologist 54:493–503.
Gollwitzer, Peter M. 2012. Mindset theory of action phases. In Handbook of theories in
social psychology, edited by P. Van Lange, A. W. Kruglanski, and E. T. Higgins. London:
Sage.
Gollwitzer, Peter M., and Ute Bayer. 1999. Deliberative versus implemental mindsets in
the control of action. In Dual-process theories in social psychology, edited by S. Chaiken
and Y. Trope. New York: Guilford Press.
Gollwitzer, Peter M., and V. Brandstätter. 1997. Implementation intentions and effective
goal pursuit. Journal of Personality and Social Psychology 73:186–199.
Gollwitzer, Peter M., Kentaro Fujita, and Gabriele Oettingen. 2004. Planning and the
implementation of goals. In Handbook of self-regulation: Research, theory, and applica-
tions, edited by R. F. Baumeister and K. D. Vohs. New York: Guilford Press.
Gollwitzer, Peter M., Heinz Heckhausen, and Birgit Steller. 1990. Deliberative and imple-
mental mind-sets: Cognitive tuning toward congruous thoughts and information.
Journal of Personality and Social Psychology 59:1119–1127.
Gollwitzer, Peter M., and Ronald F. Kinney. 1989. Effects of deliberative and imple-
mental mind-sets on illusion of control. Journal of Personality and Social Psychology
56:531–542.
Gollwitzer, Peter M., and G. B. Moskowitz. 1996. Goal effects on action and cognition.
In Social psychology: Handbook of basic principles, edited by E. T. Higgins and A. T.
Kruglanski, 361–399. New York: Guilford Press.
Gollwitzer, Peter M., and Gabriele Oettingen. 2011. Planning promotes goal striving. In
Handbook of self-regulation: Research, theory, and applications, edited by K. D. Vohs and
R. F. Baumeister, 2nd ed., 162–185. New York: Guilford.
Gollwitzer, Peter M., and B. Schaal. 1998. Metacognition in action: The importance of
implementation intentions. Personality and Social Psychology Review 2:124–136.
Gollwitzer, Peter M., and Paschal Sheeran. 2006. Implementation intentions and goal
achievement: A meta-analysis of effects and processes. Advances in Experimental Social
Psychology 38:69–119.
Gross, J. J. 1998. The emerging field of emotion regulation: An integrative review. Review
of General Psychology 2:271–299.
Gross, J. J. 2007. Handbook of emotion regulation. New York: Guilford Press.
Harmon-Jones, Eddie. 2003. Anger and the behavioral approach system. Personality and
Individual Differences 35:995–1005.
Harmon-Jones, Eddie, and John J. B. Allen. 1998. Anger and frontal brain activity: EEG
asymmetry consistent with approach motivation despite negative affective valence.
Journal of Personality and Social Psychology 74:1310–1316.
Heckhausen, H. 1991. Motivation and action. Berlin: Springer.
Heckhausen, H., and Peter M. Gollwitzer. 1987. Thought contents and cognitive functioning
in motivational versus volitional states of mind. Motivation and Emotion 11:101–120.
Hemenover, Scott H., and Shen Zhang. 2004. Anger, personality, and optimistic stress
appraisals. Cognition and Emotion 18:363–382.
Henderson, Marlone D., Peter M. Gollwitzer, and Gabriele Oettingen. 2007.
Implementation intentions and disengagement from a failing course of action. Journal
of Behavioral Decision Making 20:81–102.
Henderson, Marlone D., Yael de Liver, and Peter M. Gollwitzer. 2008. The effects of an
implemental mind-set on attitude strength. Journal of Personality and Social Psychology
94:396–411.
Higgins, E. Tory. 1997. Beyond pleasure and pain. American Psychologist 52:1280–1300.
Holland, Rob W., H. Aarts, and D. Langendam. 2006. Breaking and creating habits on the
working floor: A field experiment on the power of implementation intentions. Journal
of Experimental Social Psychology 42:776–783.
Janoff-Bulman, R., and P. Brickman. 1982. Expectations and what people learn from fail-
ure. In Expectations and actions: Expectancy-value models in psychology, edited by N. T.
Feather, 207–237. Hillsdale, NJ: Erlbaum.
Kappes, H. B., Gabriele Oettingen, D. Mayer, and Sam J. Maglio. 2011. Sad mood pro-
motes self-initiated mental contrasting of future and reality. Emotion 11:1206–1222.
Keltner, Dacher, Phoebe C. Ellsworth, and Kari Edwards. 1993. Beyond simple pes-
simism: Effects of sadness and anger on social perception. Journal of Personality and
Social Psychology 64:740–752.
Lazarus, Richard S. 1991. Emotion and adaptation. New York: Oxford University Press.
Lengfelder, A., and Peter M. Gollwitzer. 2001. Reflective and reflexive action control in
patients with frontal brain lesions. Neuropsychology 15:80–100.
Lerner, Jennifer S., and Dacher Keltner. 2000. Beyond valence: Toward a model of
emotion-specific influences on judgement and choice. Cognition and Emotion. Special
Issue: Emotion, Cognition, and Decision Making 14:473–493.
Lerner, Jennifer S., and Dacher Keltner. 2001. Fear, anger, and risk. Journal of Personality
and Social Psychology 81:146–159.
Lerner, Jennifer S., Deborah A. Small, and George Loewenstein. 2004. Heart strings and
purse strings: Carryover effects of emotions on economic decisions. Psychological
Science 15:337–341.
Loewenstein, George. 1996. Out of control: Visceral influences on behavior. Organizational
Behavior and Human Decision Processes 65:272–292.
Mendoza, Saaid A., Peter M. Gollwitzer, and David M. Amodio. 2010. Reducing the
expression of implicit stereotypes: Reflexive control through implementation inten-
tions. Personality and Social Psychology Bulletin 36:512–523.
Milne, S., S. Orbell, and Paschal Sheeran. 2002. Combining motivational and volitional
interventions to promote exercise participation: Protection motivation theory and
implementation intentions. British Journal of Health Psychology 7:163–184.
Muraven, M., and R. F. Baumeister. 2000. Self-regulation and depletion of limited
resources: Does self-control resemble a muscle? Psychological Bulletin 126:247–259.
Nolen-Hoeksema, Susan, Jannay Morrow, and Barbara L. Fredrickson. 1993. Response
styles and the duration of episodes of depressed mood. Journal of Abnormal Psychology
102:20–28.
Oettingen, Gabriele. 2000. Expectancy effects on behavior depend on self-regulatory
thought. Social Cognition. Special Issue: Social Ignition: The Interplay of Motivation and
Social Cognition 18:101–129.
Oettingen, Gabriele. 2012. Future thought and behavior change. European Review of Social
Psychology 23:1–63.
Oettingen, Gabriele, and Peter M. Gollwitzer. 2001. Goal setting and goal striving. In
Intraindividual Processes. Vol. 1 of Blackwell Handbook in Social Psychology, edited by
A. Tesser and N. Schwarz. Oxford: Blackwell.
Oettingen, Gabriele, G. Hönig, and Peter M. Gollwitzer. 2000. Effective self-regulation of
goal attainment. International Journal of Education Research 33:705–732.
Oettingen, Gabriele, D. Mayer, A. T. Sevincer, E. J. Stephens, H. J. Pak, and M. Hagenah.
2009. Mental contrasting and goal commitment: The mediating role of energization.
Personality and Social Psychology Bulletin 35:608–622.
Oettingen, Gabriele, H. Pak, and K. Schnetter. 2001. Self-regulation of goal setting:
Turning free fantasies about the future into binding goals. Journal of Personality and
Social Psychology 80:736–753.
Oettingen, Gabriele, and E. J. Stephens. 2009. Fantasies and motivationally intelligent
goal setting. In The psychology of goals, edited by G. B. Moskowitz and H. Grant. New
York: Guilford Press.
Orbell, S., S. Hodgkins, and Paschal Sheeran. 1997. Implementation intentions and the
theory of planned behavior. Personality and Social Psychology Bulletin 23:945–954.
Orbell, S., and Paschal Sheeran. 2000. Motivational and volitional processes in action ini-
tiation: A field study of the role of implementation intentions. Journal of Applied Social
Psychology 30:780–797.
Ortony, Andrew, Gerald L. Clore, and Allan Collins. 1988. The cognitive structure of emo-
tions. New York: Cambridge University Press.
Peterson, Carly K., Alexander J. Shackman, and Eddie Harmon-Jones. 2008. The role of
asymmetrical frontal cortical activity in aggression. Psychophysiology 45:86–92.
Puca, Rosa Maria, and Heinz-Dieter Schmalt. 2001. The influence of the achievement
motive on spontaneous thoughts in pre- and postdecisional action phases. Personality
and Social Psychology Bulletin 27:302–308.
Schwarz, Norbert, and Gerald L. Clore. 1983. Mood, misattribution, and judgments of
well-being: Informative and directive functions of affective states. Journal of Personality
and Social Psychology 45:513–523.
Schweiger Gallo, I., A. Keil, K. C. McCulloch, B. Rockstroh, and Peter M. Gollwitzer.
2009. Strategic automation of emotion regulation. Journal of Personality and Social
Psychology 96:11–31.
Sheeran, Paschal, and S. Orbell. 1999. Implementation intentions and repeated behavior:
Augmenting the predictive validity of the theory of planned behavior. European Journal
of Social Psychology 29:349–369.
Sheeran, Paschal, T. L. Webb, and Peter M. Gollwitzer. 2005. The interplay between goal
intentions and implementation intentions. Personality and Social Psychology Bulletin
31:87–98.
Shiv, B., George Loewenstein, A. Bechara, H. Damasio, and A. R. Damasio. 2005. Investment
behavior and the negative side of emotion. Psychological Science 16:435–439.
Smith, Craig A., and Phoebe C. Ellsworth. 1985. Patterns of cognitive appraisal in emo-
tion. Journal of Personality and Social Psychology 48:813–838.
Smith, Craig A., and Richard S. Lazarus. 1993. Appraisal components, core relational
themes, and the emotions. Cognition and Emotion 7:233–269.
Stewart, B. D., and B. K. Payne. 2008. Bringing automatic stereotyping under control:
Implementation intentions as efficient means of thought control. Personality and Social
Psychology Bulletin 34:1332–1345.
Taylor, S. E., and Peter M. Gollwitzer. 1995. Effects of mindset on positive illusions. Journal
of Personality and Social Psychology 69:213–226.
Tiedens, Larissa Z., and Susan Linton. 2001. Judgment under emotional certainty and
uncertainty: The effects of specific emotions on information processing. Journal of
Personality and Social Psychology 81:973–988.
Trötschel, R., and Peter M. Gollwitzer. 2007. Implementation intentions and the will-
ful pursuit of prosocial goals in negotiations. Journal of Experimental Social Psychology
43:579–598.
Webb, T. L., and Paschal Sheeran. 2003. Can implementation intentions help to overcome
ego-depletion? Journal of Experimental Social Psychology 39:279–286.
Webb, T. L., and Paschal Sheeran. 2006. Does changing behavioral intentions engender
behavior change? A meta-analysis of the experimental evidence. Psychological Bulletin
132:249.
Webb, T. L., and Paschal Sheeran. 2007. How do implementation intentions promote goal
attainment? A test of component processes. Journal of Experimental Social Psychology
43:295–302.
Wegener, Duane T., and Richard E. Petty. 1994. Mood management across affective
states: The hedonic contingency hypothesis. Journal of Personality and Social Psychology
66:1034–1048.
Winkielman, Piotr, Kent C. Berridge, and Julia L. Wilbarger. 2005. Unconscious affec-
tive reactions to masked happy versus angry faces influence consumption behavior and
judgments of value. Personality and Social Psychology Bulletin 31:121–135.
Zemack-Rugar, Yael, James R. Bettman, and Gavan J. Fitzsimons. 2007. The effects of non-
consciously priming emotion concepts on behavior. Journal of Personality and Social
Psychology 93:927–939.
13

Mental Action and the Threat of Automaticity

WAYNE WU

Input mechanisms approximate the condition often ascribed to reflexes:
they are automatically triggered by the stimuli that they apply to. . . . It is
perhaps unnecessary to remark that it does not seem to be true for nonper-
ceptual cognitive processes . . . we have all the leeway in the world as to how
we shall represent the objects of thought.
(Jerry Fodor 1983, 54–55)
The movement of the natural causality of reason (practical reason in this
case) to its conclusion in choice or decision is lived (by some) as action
when it is really just reflex; distinctively rational reflex, to be sure, but not in
any case a matter of action.
(Galen Strawson 2003, 244)

1. INTRODUCTION
The starkly opposed assertions of Fodor and Strawson highlight one controversy
addressed in this chapter: When does something count as a mental action? This
disagreement, however, points to a deeper controversy, one intimated by the appeal
to reflex as a contrast to genuine action. Reflex, such as blinking to looming visual
stimuli or withdrawing one’s hand from a burning surface, is a paradigm form of
automatic behavior. As we shall see, automaticity is what makes decisions about
mental agency controversial, but it has also in recent years led to unsettling conclu-
sions regarding our conceptions of agency and agency itself.
Psychological research on automaticity reveals it to be a pervasive feature of
human behavior. John Bargh and Tanya Chartrand (1999) speak of the “unbearable
automaticity of being,” arguing that “most of a person’s everyday life is determined
not by their conscious intentions and deliberate choices but by mental processes
that are put into motion by features of the environment and that operate outside
of conscious awareness and guidance” (462). Relatedly, what is unbearable is that
we are also zombies (in a non-flesh-eating sense, thankfully). Christof Koch and
Francis Crick (2001) have written about “zombie agents” understood as “systems
[that] can deal with certain commonly encountered situations automatically” where
automaticity implies the absence of conscious control. But it is not just systems that are zombie agents; subjects could be as well. They ask: “Could mutation of a
single gene turn a conscious animal into a zombie?” Yet if our behavior is permeated
with automaticity, aren’t we all zombie agents?
In fact, automaticity appears to eliminate agency. Even if agency were unbearably
automatic or zombie-like, it would still be agency. Yet I shall show that a common
assumption regarding automaticity suggests that it is incompatible with agency. If
so, it is not that agency is unbearably automatic or zombie-like. It isn’t agency at all.
And if there is no agency, there is a fortiori no free, rational, moral, or conscious
agency. To illuminate these issues, automaticity and its correlate, control, must be
incorporated in a theory of agency. Specifically, this chapter shows how we can
simultaneously hold two seemingly inconsistent claims: that automaticity implies
the absence of control and that agency, as an agent’s exemplification of control,
involves, and often requires, much automaticity.
What, then, is automaticity? In section 2, I review empirical conceptions of
automaticity where psychologists came to reject a simple connection, namely, that
automaticity implies the absence of control. Philosophical reflection on mental
agency also suggests that we should reject the simple connection, and in sec-
tion 3, I develop an argument that adhering to it eliminates mental agency, and
indeed bodily agency as well. This is the threat of automaticity. In response, I
explicate the causal structure of mental agency in section 4 and then defend the
simple connection in section 5 in light of that structure. The final sections put
these results to work: section 6 defuses the threat from automaticity, section 7
responds to the striking philosophical disagreements about basic cases of mental
action, and section 8 reflects on inferences from automaticity to claims about
agency in cognitive science.

2. AUTOMATICITY AND CONTROL IN COGNITIVE SCIENCE: A VERY BRIEF AND SELECTIVE OVERVIEW
The psychological literature on automaticity and control is voluminous, so I make
no claims to completeness in what follows. Rather, I shall highlight specific features
of the discussion of Schneider and Shiffrin (1977) that in many respects set the
terms of the subsequent empirical debate. My goal is to highlight how psychologists
moved away from Schneider and Shiffrin’s simple connection between automaticity
and control.
Schneider and Shiffrin focused specifically on the notion of an automatic process
within a familiar picture of the cognitive system as a set of connected nodes (a neu-
ral network) where processing involves sequential activation of a subset of nodes.
Consequently:
An automatic process can be defined . . . as the activation of a sequence of nodes with the following properties: (a) The sequence of nodes (nearly) always becomes active in response to a particular input configuration, where the inputs may be externally or internally generated and include the general situational context. (b) The sequence is activated automatically without the necessity of active control or attention by the subject. (2)

The simple connection is that automaticity implies the absence of control (or atten-
tion) by the subject. Automaticity is then clarified by explaining control, and on
this, Schneider and Shiffrin note that “a controlled process is a temporary sequence
of nodes activated under control of, and through attention by, the subject” (2).
Control, as they conceive it, involves the deployment of attention. Thus, the initial
foray into defining automaticity and control relies on two links: the simple connec-
tion relating automaticity to the absence of control and the conception of control in
terms of the direction of attention.
Things rapidly became more complicated. John Bargh (1994) notes in a review
of the social psychology literature on automaticity that theories after Schneider and
Shiffrin (1977) gravitated to four factors as defining automaticity: automatic processes are “unintentional, occur outside of awareness, are uncontrollable, and are efficient in their use of attentional resources” (2). The problem, as Bargh (1994)
emphasizes, is that many paradigm cases of automaticity fail to exemplify all four
properties. Shiffrin (1988) later observed that “there do not seem to be any simple defin-
ing features of automatic and attentive processes that can be applied in complete
generality” (765), and he identified 10 typical contrastive features of automatic
and controlled processes. Similarly, Thomas Palmeri (2002), in an encyclopedia
entry on automaticity, echoes Bargh’s characterization, writing that “automaticity
refers to the way we perform some mental tasks quickly and effortlessly, with little
thought or conscious intention. Automatic processes are contrasted with delib-
erate, attention-demanding, conscious, controlled aspects of cognition” (290).
He then goes on to list 13 contrasting features between automatic and controlled
processes (table 1, 291). Schneider (2001), in his entry titled “Automaticity” for
the MIT Encyclopedia of the Cognitive Sciences, notes that “automatic processing shows
seven qualitatively and quantitatively different processing characteristics relative to
controlled process” (63). Current psychological conceptions of automaticity have
clearly gone far from the simple connection.
Why have psychologists given up the simple connection? Gordon Logan (1988)
has observed that, given the characterization of control as the (generally conscious) deployment of attention, the simple connection predicted that automatic phenomena would be free of the constraints imposed by attentional processing
such as capacity or load limitations.1 The problem, Logan emphasizes, is that empir-
ical work has shown that many putatively automatic phenomena often are subject
to attentional limitations or are otherwise influenced by how attention is directed.
Psychologists have continued to construe control as being, or at least involving, the deployment of attention, so the way to accommodate evidence of capacity limitations in
automatic processes is to sever the simple connection (Logan [1988] analyzes auto-
maticity in terms of memory via his instance theory).2
In what follows, I do the reverse: I retain the simple connection and give up
equating control with attention, although on my view, control in action implies the
deployment of attention. This will allow us to address the consequences of automaticity for agency noted earlier. The most serious of these is the threatened loss of agency in the face of automaticity. I turn now to this threat, which is most apparent in mental action.

3. MENTAL BALLISTICS
Given the armchair nature of their work, it is striking that philosophers have largely
neglected mental actions.3 Moreover, when philosophers speak of mental actions,
there is notable discord as to what counts as an instance. For example, you ask me,
“Who was the prime minister of Czechoslovakia when the Soviet Union invaded?”
and I try to recall. When I recall the answer, I then judge, “It was Dubček.”4 Here we
have a remembering and a judging, two mundane mental events. Are these actions?
Christopher Peacocke (2007) claims that they are. On the other hand, Alfred Mele
(2009, 19) asserts that remembering is never an action, and Strawson (2003) denies
action status to both. Who is right?
In contrast, similar questions regarding mundane bodily events, say the grabbing
of an object when one is asked for it, elicit broad agreement as to their status as
actions. I conjecture that the disagreement stems in part from the fact that men-
tal actions are typically automatic, and that there are differing intuitions about the
import of automaticity with respect to agency. Work in social psychology has cata-
loged the pervasiveness of automaticity even in goal-directed behavior where con-
trol is conceived as delegated to the environment, obviating any need for conscious
control on the part of the subject (Bargh and Ferguson 2000). Bernhard Hommel
(2000) has spoken of intention as a “prepared reflex,” the upshot being that inten-
tions allow the environment to take control of behavior without the need of further
intervention (control) by the subject.
I interpret Strawson’s (2003) arguments against mental action as exploiting the
ubiquity of automaticity. Strawson argues that most of what we take to be men-
tal actions involve mental ballistics, namely, things that happen automatically.
Strikingly, he includes deliberation and imagination. A natural reaction to this
constriction on the sphere of mental action is to take it as a reductio of Strawson’s
assumptions, but his arguments provide an opportunity to focus on the signifi-
cance of automaticity.
To bring out the threat, let us push Strawson’s argument to its logical conclusion.
Of imagination, Strawson observes:

When one has set oneself to imagine something one must obviously start from
some conceptual or linguistic specification of the content (spangled pink ele-
phant), and given that one’s imagining duly fits the specification one may say that it
is intentionally produced. But there isn’t intentional control in any further sense: the
rest is a matter of ballistics, mental ballistics. One entertains the verbal specifica-
tion and waits for the mechanism of imagination—the (involuntary) sponta-
neity of imagination—to deliver the image. (241, my emphasis)
Why should there be control in any further sense than the one Strawson speci-
fies, namely, that the process fits what one intended? I shall return to this, but
whatever this further element is supposed to be, Strawson thinks it is incompat-
ible with the automaticity of imagination. In imagination, we have only mental
ballistics, some automatically generated process for which we can only wait once
we have specified the relevant parameters, say the image of one’s mother’s face or
of a pink elephant. Thus,

When one sets oneself to imagine anything there comes a moment when what
one does is precisely to relinquish control. To think that the actual content-issu-
ing and content-entertaining that are the heart of imagining are themselves
a matter of action seems like thinking, when one has thrown a dart, that the
dart’s entering the dartboard is itself an action. (242, my emphasis)

Similarly for practical deliberation:

Very often [in the case of practical deliberation] there is no action at all: none
of the activation of relevant considerations is something one does intention-
ally. It simply happens, driven by the practical need to make a decision. The
play of pros and cons is automatic—and sometimes unstoppable. (243, my
emphasis)

So, deliberation and judgment are not actions, and other cases like memory recall
will be more grist for Strawson’s mill. It would seem that there is no space for mental
agency.
Still, Strawson notes that “there is of course such a thing as mental action” (231)
and finds space for it in stage setting:

The role of genuine action in thought is at best indirect. It is entirely prefatory, it is essentially—merely—catalytic. For what actually happens, when one wants to think about some issue or work something out? If the issue is a difficult one, then there may well be a distinct, and distinctive, phenomenon of setting one’s mind at the problem, and this phenomenon, I think, may well be a matter of action. It may involve rapidly and silently imaging key words or sentences to oneself, rehearsing inferential transitions, refreshing images of a scene, and these acts of priming, which may be regularly repeated once things are under way, are likely to be fully fledged actions. (231)

The problem is that he cannot avoid the threat posed by automaticity. What is it
that goes beyond mental ballistics in stage setting? Strawson speaks of “setting one’s
mind at the problem,” but what is this setting? “It may involve rapidly and silently
imaging key words or sentences to oneself, rehearsing inferential transitions, refresh-
ing images of a scene” (231). Yet surely imaging, rehearsing inferences, and refresh-
ing images are just instances of the kinds of cases his previous arguments ruled out
as actions, namely, imagination, deliberation, and recall. Thus, he cannot avoid the
threat even in the remaining space he allows for mental agency.
To be fair, Strawson points out other cases of mental action: shepherding or dra-
gooning back a wandering mind, “concertion” in thought, stopping, piling and retak-
ing up thoughts that come to us quickly, or an active receptive blanking of the mind
(231–232). All of these are catalytic, and mental action goes no further than this.
Yet he also says that these need not be actions, and then their occurrence would be
automatic. But if automaticity is found in these cases too, then is there any missing
agentive element to be found here that we failed to find in deliberation, imagination,
or recall? As we have seen, in each case Strawson considers, the uncovering of auto-
maticity in mental ballistics leads to a negative answer. The threat of automaticity is
ubiquitous, and accordingly, action seems to be eliminated from the mental sphere.
Strawson explicitly invokes the concept of automaticity at only one point, but
the idea is captured in his talk of reflex, ballistics, and mere happenings. There is in
all of these a contrast with the agent’s being in control. This is just an instance of the
simple connection that originally guided contemporary psychological conceptions
of automaticity: where x is automatic, then x is not a case of control. If we think of
action just as an agent’s exerting control, automaticity implies the loss of agency.
The threat extends to bodily agency. Consider moving one’s arm, say to reach for a
glass. The specific movement that the agent is said to make is individuated by certain
parameters, say the specific trajectory, speed, acceleration, grip force, sequence of
muscle contractions, angle of the relevant joints, and so forth. Yet in a similar sense,
the production of these features is itself merely automatic (and, indeed, the initial
movement of the arm in reaching is literally ballistic). Given that any token move-
ment is individuated by automatically generated features, one might argue that the
concrete movement itself is automatically generated. After all, what features are left
for the agent to control?
Even in bodily action, it seems that we are pushed back to talk of stage setting, but
there is no avoiding the threat there. After all, intentions set the stage for intentional
bodily movement, but Strawson’s argument that deliberation and its product, inten-
tion, are essentially mental ballistics shows us that agency evaporates here as well.
So, the threat from automaticity is total: mental and bodily actions disappear. This
conclusion is more extreme than Strawson allows, but it seems unavoidable given
three initially plausible claims: (1) the pervasiveness of automaticity in human
activity, (2) the conceptual connection between actions and an agent’s exertion
of control, and (3) the simple connection linking automaticity to the absence of
control. I take it that the loss of all action is a reductio of these claims. Since (1) is
empirically established and (2) a conceptual truth (on my view), the source of the
problem is the simple connection.
It is prima facie plausible that the simple connection must go, for automatic-
ity seems required in skilled action. In many cases where we practice an action,
we aim to make that exercise automatic yet at the same time do not aim to abol-
ish our agency. Thus, when I practice a difficult arpeggio in the left hand on the
piano, my aim is to master a particular passage so that I can play the entire piece
correctly. Getting the left hand to perform automatically is precisely what is needed
to play the piece as I intend. Indeed, with the left hand “automatized,” I focus on the
right hand’s part, one that requires effort and attention. Playing with both hands,
one automatically and the other with explicit effort, I play a passage in the piece
intentionally. I think this is right, but in what follows, I show that we can also hold
on to all three claims that generated our reductio. To see this, we must embed auto-
maticity within a theory of agency.

4. MENTAL ACTION AND ATTENTION


To start, let us warm up with a few mental exercises (please go along): (1) imagine
your mother’s face; (2) recall the capital of Britain; (3) add 2 + 2; (4) (in your head,
answer): Is it wrong to torture a kitten?
I am going to conjecture that for most of these actions, the required answer sim-
ply “popped into your head.” At first glance, what you did in each case was to under-
take a mental action—you did something in your head—and in each, the desired
product came to mind seemingly automatically. These are the data of automaticity
on which Strawson’s argument rests. In this section, I outline a causal account of
mental action, one in which intentions play a specific role in organizing and generat-
ing action. Central to this conception will be attention, and indeed, I argue, borrow-
ing from William James, that conscious mental action is cognitive attention.
Our primary target will be those cases where there is something it is like to per-
form a mental action, and thus, we are interested in conscious mental actions. I will
speak of such cases as bringing thought contents to awareness, whether this involves
recalling some content, imagining some content, expressing content in internal
speech, reasoning via transitions in contents, and so on, with the limiting case being that of keeping hold of a specific thought at the fore of one’s cognitive awareness. In all
actual cases, prior to bringing the content to awareness, that content (or at least its
constituents) was recorded in the subject’s memory, where memory is broadly con-
strued so as to involve any content retained on the basis of alterations to the mind in
light of past activity (including perception of the environment or synthesis of con-
tent already stored). The reason for holding conscious mental actions to be so based in
memory is that our thoughts are not created ex nihilo but depend on prior thoughts
or perceptions. Thought contents that we bring to awareness are either previously
encoded (e.g., in episodic recall) or synthesized from previously encoded memo-
ries (e.g., as when we imagine fantastical creatures like unicorns). This appeal to
a dependence on memory acknowledges that mental actions can also depend in
analogous ways on the deliverances of perception (as when we choose some perceived object to think about). This does not affect the following points, so for simplicity, I shall largely not mention perceptually based thoughts.
Once we recognize that the mental actions at issue, generically bringing thoughts
to awareness, are based in memory, we immediately confront a selection problem.
There is more than one item stored in memory, but conscious thought has to be
specific. We do not activate all memories, there being limits to our capacity as to
what we can entertain. These capacity limitations are commonly acknowledged, but
given these limitations, the mental action at issue requires selection of memory. This
leads to a Problem of Selection: Which memory should be brought to awareness?
For example, a mathematics graduate student intends to recall some mathemati-
cal proposition during an exam. She is motivated to recall a certain axiom of set the-
ory, but of course, she has in memory a huge number of mathematical propositions.
Nevertheless, the relevant proposition occurs to her, and she can then use it in
subsequent reasoning, which itself calls upon further dredging through memory.
Action requires solving this Problem of Selection, specifically reducing a set of many
possible inputs by selection of exactly what is relevant to the task at hand (similar
points arise in respect of perceptually based thoughts). For simplicity, we can speak
of this as reducing a set of many possible inputs to a single relevant input.
A similar point arises regarding output or behavior. Returning to our mathemati-
cian, she has, in fact, many choices open to her even after selecting a specific mem-
ory. There are many responses that she can give: she can deploy the remembered
axiom to construct her proof, she can think of how best to format a representation
of that axiom for the final version of a journal article, or she can imagine writing a
song where a statement of the axiom is used as part of a catchy jingle (and much
more besides). There is a set of possible things that she can do with the thought in
light of bringing it to awareness. In mental action, then, doing something is navi-
gating through a behavioral space defined by many inputs and many outputs and
their possible connections. Bringing a thought to awareness is selecting the relevant
memory so as to inform a specific type of conscious mental activity. Mental action
requires the selection of a path in the available behavioral space. In earlier work
(Wu, 2011a), I have called this Problem of Selection the Many-Many Problem. The
Many-Many Problem is, I have argued, a metaphysically necessary feature of inten-
tional bodily action, and it is exemplified in intentional mental action.5 Certainly, in
actual mental actions, we have the Many-Many Problem, and that weaker claim will
suffice for our purposes.
Solving the Many-Many Problem is a necessary condition on intentional agency.
But clearly not any “solution,” namely, a one-to-one mapping of input to output, will
be sufficient for intentional agency.6 In the context of an exam, our mathematician
wants to find a solution to the mathematical problem she is tackling. She is not,
at that moment, interested in writing songs about set theory or formatting a text
for publication. Should such outputs—writing a song, attempts to format a text—
be what results during the exam, she would see these behaviors as inadvertent and
involuntary. Moreover, should the selections at issue routinely pop into her head in
other contexts such as when she is discussing a poem (but not one about set the-
ory!), then this would be something odd, inconsistent with her goals at that time.7
What this points to is that solving the Many-Many Problem cannot be inconsistent
with one’s current intentions. The content of one’s current intentions sets the standard
by which one’s actions are successful or not, and the way to ensure consistency with
intention is to require that solutions to the Many-Many Problem are not independent
of intention. Dependence of selection on intention should then be understood as the
causal influence of intention on selection, and this is intuitive: our mathematician
recalls the relevant axiom in set theory precisely because she intends to solve a prob-
lem in set theory of a certain sort; and she constructs a proof precisely because the
problem at issue is to prove a certain theorem. More abstractly, the behavioral space
that identifies the agent’s action possibilities for a given time is constrained by inten-
tion such that a specific path, namely, the intended one, is prioritized.8
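The structure just described can be caricatured in a few lines of code. The sketch below is purely illustrative and is not part of Wu’s own apparatus: the behavioral space is modeled as the cross product of possible memory inputs and possible responses, and the intention is collapsed into a simple predicate that prioritizes one path; all of the names used are hypothetical.

# Toy illustration only: a "behavioral space" as the cross product of possible
# inputs (items retrievable from memory) and possible outputs (responses).
# The names below are invented for the example.
from itertools import product

memories = ["axiom_of_set_theory", "poem_about_sets", "catchy_jingle_idea"]
responses = ["construct_proof", "format_article", "write_song"]

# Every input-output pairing is one possible path through the space.
behavioral_space = list(product(memories, responses))

def select_path(space, intention):
    # Selection is constrained by intention: only intention-consistent
    # paths are prioritized; everything else is left unselected.
    return [path for path in space if intention(path)]

# Intending to prove a theorem in set theory prioritizes exactly one path.
intend_to_prove = lambda path: path == ("axiom_of_set_theory", "construct_proof")

print(select_path(behavioral_space, intend_to_prove))
# -> [('axiom_of_set_theory', 'construct_proof')]

The caricature makes vivid only one point: what intention contributes is the selection of a path among many possible input-output pairings, not the pairings themselves.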
In mental action, solving the Many-Many Problem by making appropriate selec-
tion imputes a certain form of activity to the agent. This activity, however, is not
an additional thing that the agent does so as to act in the intended way. Moreover,
selection is not necessarily conscious. Rather, these are just aspects of or part of the
mental action itself, just as a specific movement of the fingers or perhaps the specific
contraction of muscles in the hand is part of tying one’s shoelaces. These are not
additional things that one does so as to tie one’s shoelaces, nor does one need to be
conscious of those features of the action. The same point applies to any selection
relevant to solving the Many-Many Problem.9
Earlier I mentioned the central role of attention in agentive control, and we can
now see that the selection of a path in behavioral space that constitutes solving the
Many-Many Problem in mental action yields a form of cognitive attention. Consider
this oft-quoted passage from William James (1890):

Everyone knows what attention is. It is the taking possession by the mind, in
clear and vivid form, of one out of what seem several simultaneously possible
objects or trains of thought. Focalization, concentration, of consciousness are
of its essence. It implies withdrawal from some things in order to deal effec-
tively with others. (403)10

I have found this passage often invoked when philosophers discuss attention,
although they typically go on to discuss perceptual attention (I am guilty of this as
well). In this passage, however, James speaks not only of perceptual attention but
also of attention in thought, what we can call cognitive attention. Accordingly, it is
important to bear in mind a critical difference between attention in perception and
attention in thought: generally, the inputs that must be selected in perception are
simultaneously presented in perceptual awareness, but the inputs, namely, thought
contents, are not simultaneously given to awareness in thought.11 Thus, when you
perceive a cluttered scene, looking for your lost keys, vision simultaneously gives
you multiple objects, and visual attention is part of an active searching for that
object among many actually perceived objects. In contrast, when one is trying to
find the right thought (say the right axiom to construct a proof), one is not in fact
cognitively aware of multiple simultaneous thoughts (i.e., encoded mnemonic con-
tent). The thoughts from which we must select the relevant content are not actual
objects of awareness in the way that perceived objects are, but (to borrow James’s
phrasing) only possible objects of awareness. They are the items that we have in
memory, and thus, the Many-Many Problem in thought does not involve a con-
scious act of selecting the appropriate mnemonic items. James does in that passage
speak of how it seems that one is confronted with multiple possible thoughts, but
I read this as pointing to one’s sense of multiple behavioral possibilities (recall the
behavioral space).
The way I have put the point is that in bringing a thought to awareness, or, as James
says, taking possession of it by the mind, we have to select the relevant memory. This
is just to solve the input side of the Many-Many Problem in bringing the thought to
awareness. The solution to the Many-Many Problem, understood as path selection
leading to awareness of a thought content, fits with James’s description of cognitive
attention as the selection of a possible train of thought where focalization and con-
centration of consciousness, the awareness of a specific thought, are of its essence.
Given James’s characterization of attention, such conscious mental actions of bringing thoughts to cognitive awareness are Jamesian forms of attention. Plausibly, this
category of conscious cognitive awareness (broadened to include perceptually
dependent thoughts) exhausts the category of conscious mental action. So, all con-
scious mental actions are instances of Jamesian cognitive attention.12 With this con-
crete proposal for mental actions, I return to the simple connection.

5. THE SIMPLE CONNECTION


I shall define automaticity in action in terms of the absence of agentive control, and
I show that an action can be both automatic and controlled. How is this possible?
The answer is a relativization that seems not to have been made in the empirical lit-
erature, though it echoes G. E. M. Anscombe’s (1957) notion of action as intentional
under a description. The account I shall give, however, is not concerned with descrip-
tions of actions but rather with properties of actions. Nevertheless, it links a tradi-
tional concern in philosophy, namely, intentional action, with a traditional concern
in psychology, namely, automaticity. It is in this way that I propose to incorporate
the latter in the former and respond to the threats and difficulties that automaticity
brings.
We relativize automaticity and control in respect of what the subject intends to
do, namely, the types of actions, F, that she aims to bring about. We can then speak of
control and automaticity in respect of F, for any action type F that the subject can be
said to do.13 As a consequence, we can affirm that automaticity is the absence of con-
trol and that an action event can exhibit both automaticity and control. The crucial
element that bridges philosophy and psychology is that of solving the Many-Many
Problem. In the definitions, I will speak of a subject’s executing a solution to the
Many-Many Problem when the subject acts in a certain way, though typically in
discussion, I will speak simply of solving the Problem, with the executing implicit.
My account assumes a notion of top-down causal influence where this depends
on imposing a hierarchy in processing between cognitive and perceptual systems.14
There are issues regarding how to construct such hierarchies, but since the notion
is widely deployed in cognitive science, I will assume it. The specific assumption is
fairly minimal, namely, that intentions are closer to the top of any hierarchy than
the basic perceptual, motor, and memory processing that solves the Many-Many
Problem. The definitions are complicated, so let me emphasize the basic ideas they
express: control and automaticity of a process are relativized to features of that pro-
cess such as an action. Control for some feature F of an action arises from solving
the Many-Many Problem in respect of one’s intention to F, and automaticity with
respect to some F is the absence of control for that feature F, namely, an absence
of an intention to F. Control and automaticity are incompatible with respect to the
same feature, but a given action can have both controlled and automatic features, so
long as those features are not the same ones at the same time.
Here, then, are the definitions:

(AC) Agentive Control
For any subject S’s token behavior of type F at some time t:
S’s F-ing is agentively controlled in respect of F iff S’s F-ing is S’s execution of a solution to the appropriate Many-Many Problem given S’s intention to F.

When there is control, S’s F-ing is understood to be an action event, a concrete particular whose causal structure is as described in the right-hand side of the
biconditional, which expresses the account of intentional action given earlier. The
appropriate Many-Many Problem is one such that the solving of it suffices for that
event to be classified as an F. For example, where F is kicking a football, S’s kicking
of a football is agentively controlled in respect of F because S’s kicking of a football
is S’s solution to the Many-Many Problem given S’s intention to kick a football. In
short, we control what we intend.15
We can define (agentive) automaticity as follows:

(AA) Agentive Automaticity
For any subject S’s token behavior of type F at some time t:
S’s F-ing is agentively automatic in respect of F iff it is not the case that S’s F-ing is S’s execution of a solution to the Many-Many Problem given S’s intention to F.

This is just the simple connection: the automaticity of F in S’s F-ing is equivalent
to the absence of control in respect of F. On this account, most of the F’s S can be
said to do at a time will be automatic. In our example, S’s kicking with a force N, S’s
moving his foot with trajectory T, with acceleration A, at time t, and so on will be
automatic. Finally, we can also define a strong form of automaticity, what I shall call
passivity.

(P) Passivity
For any subject S and token behavior of type B at some time t:
S’s B-ing is passive if for all F under which S’s B-ing falls, S’s F-ing is agentively automatic.

Where all of a behavior’s features are automatic, the agent is passive and there is no intentional action or agentive control. Let us now put these notions to work.
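To see the relativization at work in miniature, here is a toy schematic of (AC), (AA), and (P). It is an illustration only, and it deliberately collapses “executing a solution to the appropriate Many-Many Problem given an intention to F” into mere membership of F in a set of intended action types, which the definitions themselves do not do; all names are hypothetical.

# Toy schematic of the relativized notions (AC), (AA), and (P). The check for
# "executing a solution to the Many-Many Problem given an intention to F" is
# collapsed here into set membership, purely for illustration.
from dataclasses import dataclass, field

@dataclass
class Behavior:
    features: set                                  # all types F the token behavior falls under
    intended: set = field(default_factory=set)     # the F's figuring in the agent's intention

def controlled(b, f):
    # (AC): F-ing is controlled in respect of F (here: F is among the intended types).
    return f in b.features and f in b.intended

def automatic(b, f):
    # (AA): F-ing is automatic in respect of F iff it is not controlled in respect of F.
    return f in b.features and not controlled(b, f)

def passive(b):
    # (P): the behavior is passive if every F it falls under is automatic.
    return all(automatic(b, f) for f in b.features)

# A kick that is controlled qua "kicking a football" but automatic qua the
# force and trajectory parameters that were never represented in the intention.
kick = Behavior(features={"kick_football", "force_N", "trajectory_T"},
                intended={"kick_football"})

assert controlled(kick, "kick_football")
assert automatic(kick, "trajectory_T")
assert not passive(kick)

On this toy picture the very same token behavior comes out controlled in respect of kicking a football and automatic in respect of its force and trajectory, which is just the combination the definitions are meant to allow.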

6. RESOLVING THE THREAT FROM AUTOMATICITY


Paradoxically, automaticity seems to imply the absence of agentive control and yet is
a critical part of skilled agency. We can now eliminate the air of paradox. On the one
hand, the definition of automaticity relies on the simple connection, and automaticity as passivity does imply the absence of agency. On the other hand, a process can
be both automatic and controlled with respect to different types of activities that
the agent exemplifies at any given time. That an event falls under an action type with
respect to which it is automatic does not imply that it is not an intentional action,
so long as it does exemplify control as per AC for some F. What is disallowed is that
an action is simultaneously automatic and controlled with respect to the same F at the same time.
Consider playing the piano again. In aiming for automaticity, we relinquish some
control but not all. We control the type of thing that we intend to do, say playing
a certain work. We thus act intentionally. In playing the piano, the automaticity
aimed for is that the specific notes played need not be represented in one’s inten-
tion. “Parameter specification” is automatic because no top-down modulation at the
level of intention is required to specify the specific notes played, the ordering of fin-
gering, and so forth. Certainly, in learning that passage, one can act with a changing
set of demonstrative intentions, say to play that note with these fingers, and this is
attentionally demanding. One has to attentively focus on relevant notes or relevant
keys. But once the piece is mastered, setting those parameters is automatic. Playing
the piece involves both automaticity and control, and the simple connection, prop-
erly understood, shows us how this can be true.
Actions as events are an agent’s exertion of control simpliciter, as we might say.
We secure the presence of action by ascertaining that the corresponding path in
behavioral space is a result of an intention-guided solution to the Many-Many
Problem. The focus thus far has been on successful intentional actions, and where
the Many-Many Problem is successfully solved, it follows that there is an action
event (agentive control simpliciter) and also agentive control in respect of the
intended action type as per AC. One form of action worth considering at this junc-
ture is unsuccessful action. While this is defective action, it is still action. Its status
as action is secured in light of an underlying, aborted solution to the Many-Many
Problem where what selections were made conform to the relevant intention. The
question is whether there is, in all such cases, something that the agent controls as
per AC? The earlier thought is that agentive control simpliciter (the existence of an
action event) implies that there is agentive control with respect to some F as per AC.
Yet there may not be any F subject to control in defective action.16
Let me raise two points as gestures to a fuller response. First, the contents of our
intentions are typically more complex than their expressions in language, say that
one is going to F. That is, the content typically includes the more specific ways one
will F, though never to the fine detail of parameter specification. Thus, one not only
intends to make tea but to make tea with this mug by pouring from this pot and
so forth. So, there are many opportunities for control, as per AC, even in aborted
processes as when one fails to make tea because the mug cracks from the heat. The
pouring from the pot was under the agent’s control because that F is in the scope of
the intention. Second, as Kevin Falvey (2000) has noted, there are correct descrip-
tions of action that appeal to the openness of the progressive. One can be said to
be F-ing while one is doing just that, before the action is completed. So, I am mak-
ing tea as I intend even if, when the mug breaks, the action is a failure. When such
descriptions of the process are appropriate and reflect what we intend, we can be
said to be exerting control in such cases.
AC allows for gradations of control. For example, over time, one’s playing of an
instrument might show increased automaticity over a defined set of features and
thus less control in respect of those features. Yet that suggests that acquisition of
skill means loss of control.17 My piano instructor imparted a useful insight that is
relevant here, namely, that one practices not to play a piece in a specific way, say to
imitate a performance of a Brahms intermezzo by Radu Lupu, but rather to be able
to play in whatever way one intends. The control found in skillful action lies in the fact that, by automatizing a host of basic features, the skilled agent opens up behavioral possi-
bilities that were not available before.
The putative threats mentioned in the introduction are threats to the will and
thus threats to any of its forms such as free will or conscious will. It is important to
see that the threat thus aims at agency itself, and that avoiding the threat requires
incorporating the notion of automaticity into the theory of agency. I have defused
the threat of automaticity by showing that while automaticity is defined as the
absence of control, its presence does not imply the absence of agency.

7. SETTLING DISAGREEMENTS
Psychologists gave up the simple connection and, consequently, the possibility of
dividing kinds of processes between the automatic and the controlled. Reinstating
the simple connection allows us to individuate processes along the automaticity/
control divide, namely, between actions and nonactions (passivity). Still, the meta-
physical division between actions and nonactions is not the primary target of psy-
chological research. Certainly, subjects in psychological experiments often perform
actions (tasks) where the experiments investigate certain features of those actions
that are characteristic of automaticity: insensitivity to cognitive load, parallel pro-
cessing, bypassing of awareness, and so forth. Of course, psychologists have also
connected such features to broader questions about agency, and then they enter a
traditional philosophical domain. I shall close with these issues in the last section,
but I first revisit the puzzling disagreement among philosophers as to what counts
as a mental action.
Let me begin with a disagreement with Strawson, namely, his verdict regarding
imagination and deliberation. On imagination, I noted that Strawson gave the cor-
rect answer: “Given that one’s imagining duly fits the specification [as intended]
one may say that it is intentionally produced.” While higher-ordered properties of
action, say freedom or rationality, might require meeting further conditions, it is a
mistake to look for action in any further sense than found in Strawson’s character-
ization of intentional control, one that comports with AC. One’s entertaining of
an image of a pink elephant given one’s intention to imagine that type of image is a
reflection of one’s agentive control. The intention explains why that specific path in
behavioral space is selected. There are, of course, a host of automatic features asso-
ciated with such imagination, say one’s imagining the elephant as standing on one
foot or facing left from one’s point of view. One may not have intended to imagine
those elements, yet that is what the mind automatically produced. By acknowledg-
ing these points, however, we do not reject that imagining as one intends is a matter
of one’s intentional control and thus is an action.
What of deliberation, where the “play of pros and cons is automatic”? The auto-
maticity of specific thought contents as one deliberates does not imply that the pro-
cess of deliberation is not an action, namely, the solving of the Many-Many Problem
in light of intending to determine whether p is true (theoretical deliberation) or
whether to bring it about that p (practical deliberation). One’s intention to do just that is a form of stage setting in Strawson’s sense that allows executing appropriate
solutions to the Many-Many Problem. Accordingly, as the agent’s intention plays a
causal role in solving the Many-Many Problem, we have an action that is properly
described as deliberating whether p or whether to bring p about. That the process
can be categorized in terms of various forms of automaticity is not incompatible
with its instantiating agency.
There are related phenomena where I might agree with Strawson, namely, regard-
ing the natural end points of successful theoretical and practical deliberation: judg-
ment (belief) and decision (intention), respectively. These are, however, difficult
matters, so for present purposes, I will assume that it is not possible to intend to
judge (believe) that p or to intend to decide (intend) to F. If that is so, then that one
judges that p or intends to F is by our account not something that the agent controls.
Nevertheless, we should not lose track of these states or events as marking the com-
pletion of a process that the agent does control, namely, deliberation. In focusing on
one specific part of the process, namely, its completion, it may seem that we do not
have action at that specific point. There is nothing wrong in that thought, so long
as we don’t lose the forest for the trees, namely, the end point as the culmination
of solving the Many-Many Problem. Judging and deciding are just the automatic
culmination of an extended action. Indeed, the automaticity of judgments and deci-
sions is perhaps what we should expect. Deliberators who aim to settle the truth or
determine what should be done subject themselves to norms of rationality. Yet in
properly subjecting oneself, there is nothing further to do once one recognizes the
force of reasons as supporting a judgment or decision. One does nothing more than
judge or decide. That is to say, one automatically draws the right conclusion.18
Finally, one can agree with both Fodor and Strawson (for the most part). With
Fodor, we can acknowledge that we have all the leeway in the world in thought
in that the extent of the Many-Many manifold that constitutes the Many-Many
Problem—the space of behavioral possibilities—is very large. In thought, it is as
large as what we can remember and can bring to awareness for various tasks at a
given time. Yet, with Strawson, we can acknowledge that there is also a way in which
action is just a reflex or, better, automatic, in that many of the features of what we do
are not subject to our control though some features must be.

8. AGENCY, AUTOMATICITY, AND CONTROL


I want to close with some reflections on the picture of agency that arises from the
previous points. Intentional actions, mental and bodily, are specific ways of solving
the Many-Many Problem, specifically solutions that are constrained in the right way
by one’s intentions. Where there are such processes, we have agentive control. The
crucial feature of this perspective is that the mark of agency is internal to an event.
Its status as an action does not reside in its antecedent causes but in its internal
structure, the intention-constrained selection of a path in behavioral space at a given
time.
There is a lingering worry given my earlier acknowledgment that intentions
might always be automatically generated. How can action count as agentive control
when its source is a matter of automaticity? It is a compelling thought that the agent
must make herself felt at precisely such points when control threatens to evaporate,
to reassert control so as to stave off its loss. The point of the current perspective is
that control just is the role of intention in structuring a solution to the Many-Many
Problem, full stop. The question of how the intention arises is an important one, but
not one about agency in the basic sense. Rather, it pertains to whether the resulting
action has higher-ordered properties such as whether it is free, rational, or moral.
I do not deal with these questions here but simply emphasize two different ques-
tions: one about the conditions for agency, the other about conditions for its dif-
ferent forms.19
Given the Many-Many Problem, the pervasive automaticity of agency is what we
should expect: we cannot intend and thereby control all the various ways of solving
the Many-Many Problem even once a specific path is selected in intention. We may
intend to grab an object or to recall a specific image, but the specific parameters
that must be filled in to instantiate the solution to the Problem are not things that
we explicitly intend to bring about, and thankfully so. It is not automaticity that is
unbearable. What would be unbearable would be its absence.
Psychologists have shown that much of our behavior is automatic, yet they have
also spun this result as threatening to agency, leading to a discomfiting picture of
human beings as zombies and automatons.20 One recent line of thought is the lack
of conscious control in action. As Bargh and Chartrand put it, our “conscious inten-
tions” are generally not in control of our behavior. Talk of conscious intention is
common enough in the empirical literature on agency, but I must confess to not
being certain what cognitive scientists are referring to. Do they mean an intention
made conscious, namely, an occurrent state where the content of the intention is at
the focus of my awareness? A thought about my intention, namely, a second-order
occurrent state where my first-order state of intention is at the focus of my aware-
ness? In normal action, I typically find no such thing. Rather, intentions are persist-
ing nonphenomenal mental states of subjects that coordinate and constrain one’s
meandering through behavioral space. As I type these words, I am executing my
intention to finish the essay by the deadline in a way consistent with my other inten-
tions that are also operative (e.g., that I need to get to a meeting at noon), but there
are no correlated conscious intentions or related metacognitive forms of awareness.
Thank goodness! That would be distracting. Of course, I can bring the content of
intention to consciousness on reflection, but that is a special case where I reevaluate
action. In general, when I act, my intentions are preparations to respond to the envi-
ronment in certain ways, or, as Strawson says, stage setting. Agentive control does
not require that the intentions be conscious in either of the senses noted earlier.21
That our actions often bypass this form of conscious control is no sign that we are
somehow not at the reins in our actions. We are at the reins to the extent that what
we do is a result of intentional control in the basic sense that our behavior is the
intention-guided solving of the Many-Many Problem. There is no denying that we
are often moved to act, and on the antecedents of action rest important questions
about the rationality, morality, and freedom of our actions. But all these are higher-
ordered properties of agency. Agency itself is an internal feature of certain processes,
our navigation through a complex world that throws at us Many-Many Problems.22
NOTES
1. It is worth pointing out that a canonical type of automatic process is attentional,
namely, attentional capture, and that a function of attentional capture is to disrupt
other activities.
2. In the case of mental action, I will emphasize the central role of memory in many
forms of cognitive attention, and this resonates with certain key aspects of Logan’s
theory. In contrast to his account, however, I emphasize the simple connection.
3. The essays in O’Brien and Soteriou (2009) are a corrective to this neglect.
4. The example is from Peacocke 1998.
5. For a detailed discussion of the account, see Wu 2011a. This chapter extends that
account to bodily action and the issue of automaticity versus control.
6. Many nonsentient structures can solve the Many-Many Problem. Think of a set
of points in a train track that directs rail traffic in different ways, depending on the
day. Might there, however, be nonintentional actions exemplified by solving the
Many-Many Problem independently of intention? Perhaps, but I suspect that we will
want to carve off many of the nonsentient cases. This issue requires more discussion
than can be given at this point.
7. Persistent automaticity of mental events might be the source of the positive symp-
toms associated with schizophrenia (see Wu 2011b). Here, the patient is passive in
the sense defined later. This emphasis on automaticity contrasts with the standard
explanation of positive symptoms, self-monitoring, which posits a defect in a control
mechanism.
8. How should we understand this causal influence? In Wu 2011a, I construe intentions
as persisting structural states of the subject in the sense that the settings of points in a
set of train tracks (see note 6) and the setting of weights in a neural network count as
structural states. Because of these states, certain processes are prioritized over others
within the system in question.
9. So, zombie action (in Koch and Crick’s sense) is compatible with the account given
here (see later definitions). Neither our intentions nor the specific selections that we
make need be conscious. Of course, typically, there is some conscious element in
our actions. A second issue concerns metaphysical zombies, creatures fully devoid
of phenomenal consciousness. One might wonder whether such creatures could be
agents at all. To the extent that we think such creatures are capable of genuine (non-
phenomenal) mental states such as intentions as well as genuine nonphenomenal
perceptual states, then if they solve the Many-Many Problem via such intentions,
they are thus intentionally acting. But again, these issues require more discussion
( Julian Kiverstein raised certain issues to which this note is an initial response).
10. See also Peacocke 1998, 70.
11. Of course, for perceptually based thoughts, the putative objects of thought can be
given simultaneously.
12. There are, of course, different things we can mean by “attention.” I am here emphasiz-
ing the insight in James’s description, what he takes to be part of what we all know
about attention. The general point is that action requires attentional selection given
the Many-Many Problem.
13. These F’s are also the basis of descriptions under which the action can be said to be
intentional. It is not clear that the definitions of automaticity or control of an event
in respect of F to be given later are equivalent to claims about the intentionality of
the event under the description “the F.” Whether an action is intentional under a
description is likely to be a function of features of the linguistic context whereas the conditions to be offered later are based on the causal structure of the event vis-à-vis
solving the Many-Many Problem.
14. Other notions of automaticity and control will be of use in cognitive science. I sug-
gest that in each case, theorists invoke the simple connection and then explicate con-
trol as a type of top-down modulation. For agentive control, the type of top-down
modulation involves solving the Many-Many Problem.
15. We can think of more specific accounts of control as species of the genus agentive
control that is characterized by AC. As Till Vierkant reminds me, Pamela Hieronymi
(2009) distinguishes between evaluative and managerial control. I think the distinc-
tion an interesting one, but it seems that what ties the two together as forms of con-
trol is that they imply that the Many-Many Problem is solved.
16. Here, I respond to queries raised by Julian Kiverstein.
17. I owe this observation to Till Vierkant and the way to respond to it to Jack Kurutz.
18. These preliminary thoughts reflect an exchange with Matt Nudds. I noted earlier
Mele’s claim that remembering that p is never an action. Mele claims that the only
action is bringing it about that one remembers that p. That view might be consistent
with my account. One cannot, it seems, intend to remember that p, since one has
thereby remembered that p, as p is in the content of the intention. Moreover, if bring-
ing it about that one remembers that p (automatically, then) is a way of describing
the process of solving the Many-Many Problem and not of just stage setting, then
Mele and I are in agreement. I would add, however, that one can intend to remember
this person’s name (demonstratively identifying someone) and that attempt can be
subject to control.
19. Andy Clark raised various issues to which this paragraph is meant as an initial reply.
20. Bargh’s work, in particular, on the automatic processing of stereotypes and their influ-
ence on behavior yields surprising and interesting results.
21. There are decisions and these may be conscious events (though they can be uncon-
scious as well). Perhaps this is what Bargh and Chartrand mean when they speak of
“deliberative choices.” Still, these do not accompany every action, though when they
do, they may contribute to action by setting one’s intentions. There are also questions
about unconscious vision, which is also behind talk of zombie agents. On this, see my (forthcoming), which is a response to Mole (2009). Briefly, while I do believe some
visual representations guiding action are unconscious, I do not think there is suf-
ficient empirical evidence for any stronger claim.
22. This work has been presented at multiple venues and I am grateful to all participants
at those venues for helpful feedback. Recently, I have greatly benefited from the com-
ments of the editors of this volume and the students and faculty of the Philosophy
Department at the University of Edinburgh. In addition to those named in the foot-
notes, special thanks to Nathalie Gold and Mark Sprevak for discussions.

REFERENCES
Anscombe, G. E. M. 1957. Intention. Oxford: Blackwell.
Bargh, J. A. 1994. The four horsemen of automaticity: Awareness, intention, efficiency, and control in social cognition. In Handbook of social cognition: Basic processes, ed. Robert
S. Wyer and Thomas K. Srull, 1–40. 2nd ed. Hillsdale, NJ: Erlbaum.
Bargh, J. A., and T. L. Chartrand. 1999. The unbearable automaticity of being. American
Psychologist 54 (7): 462–479.
Bargh, J. A., and M. J. Ferguson. 2000. Beyond behaviourism: On the automaticity of higher mental processes. Psychological Bulletin 126 (6): 925–945.
Falvey, Kevin. 2000. Knowledge in intention. Philosophical Studies 99: 21–44.
Fodor, Jerry A. 1983. The modularity of mind: An essay on faculty psychology. Cambridge,
MA: MIT Press.
Hieronymi, Pamela. 2009. Two kinds of agency. In Mental actions, ed. Lucy O’Brien and
Matthew Soteriou, 138–162. Oxford: Oxford University Press.
Hommel, Bernhard. 2000. The prepared reflex: Automaticity and control in
stimulus-response translation. In Control of cognitive processes: Attention and perfor-
mance XVIII, ed. Stephen Monsell and John Driver, 247–273. Cambridge, MA: MIT
Press.
James, William. 1890. The principles of psychology. Vol. 1. Boston: Holt.
Koch, C., and F. Crick. 2001. The zombie within. Nature 411 (6840): 893.
Logan, G. D. 1988. Toward an instance theory of automatization. Psychological Review 95
(4): 492–527.
Mele, Alfred. 2009. Mental action: A case study. In Mental actions, ed. Lucy O’Brien and
Matthew Soteriou, 17–37. Oxford: Oxford University Press.
Mole, Christopher. 2009. Illusions, demonstratives and the zombie action hypothesis.
Mind 118 (472): 995–1011.
O’Brien, Lucy, and Matthew Soteriou, eds. 2009. Mental actions. Oxford: Oxford
University Press.
Palmeri, Thomas. 2002. Automaticity. In Encyclopedia of cognitive science, ed. L. Nadel, 290–
301. London: Nature Publishing Group.
Peacocke, Christopher. 1998. Conscious attitudes, attention, and self-knowledge. In
Knowing our own minds, ed. Crispin Wright, Barry C. Smith, and Cynthia Macdonald,
63–98. Oxford: Oxford University Press.
Peacocke, Christopher. 2007. Mental action and self-awareness (I). In Contemporary
debates in philosophy of mind, ed. Brian McLaughlin and Jonathan Cohen, 358–376.
Oxford: Blackwell.
Schneider, Walter. 2001. Automaticity. In The MIT encyclopedia of the cognitive sciences, ed.
Robert A. Wilson and Frank C. Keil, 63–64. Cambridge, MA: MIT Press.
Schneider, W., and R. M. Shiffrin. 1977. Controlled and automatic human information
processing: I. Detection, search and attention. Psychological Review 84 (1): 1–66.
Shiffrin, Richard. 1988. Attention. In Stevens handbook of experimental psychology, ed. Stanley
Smith Stevens and Richard C. Atkinson, 739–779. New York: Wiley.
Strawson, Galen. 2003. Mental ballistics or the involuntariness of spontaneity. Proceedings
of the Aristotelian Society 103 (3): 227–257.
Wu, Wayne. 2011a. Confronting Many-Many Problems: Attention and agentive control.
Noûs 45 (1): 50–76.
Wu, Wayne. 2011b. Explaining schizophrenia: Auditory verbal hallucination and
self-monitoring. Mind and Language 27 (1): 86–107.
Wu, Wayne. Forthcoming. The case for zombie action. Mind.
14

Mental Acts as Natural Kinds 1

JOËLLE PROUST

INTRODUCTION. WHAT IS MEANT BY AN “ACT”?


In contemporary philosophy of action, “mental act” is generally used to refer to the
process of intentionally activating a mental disposition in order to acquire a desired
mental property. Traditionally, however, there has been a second way of applying
the phrase “mental act,” in an actualization rather than an agentive sense. In the
sense of being an actualization, act is contrasted with potentiality. In Aristotle’s
(2006) use of the term, potentiality refers to “a principle of change, movement,
or rest” in oneself or other entities (Metaphysics, Θ, 1049b7). An act constitutes
the actual expression of this potentiality. For example, a seeing is an act, while the
disposition to see is the potentiality associated with it (Metaphysics, Θ, 1049b21).
“Mental act,” in this sense, is close to “mental event,” in other words, “what happens
in a person’s mind.”2
An “agentive” act thus necessarily includes an actualization. An act in the dispo-
sitional sense becomes an act in the agentive sense if an instance of the underlying
event type can be brought about willingly, rather than being automatically triggered
under the combined influences of the mental apparatus and the environment. As
a consequence, mental events of a given type (such as imaginings, or remember-
ings) may qualify as mental acts on one occasion, and not on another. A thinker can
think about John, memorize a telephone number, mentally solve a math problem.
But these events are not necessarily mental “doings.” Some instances are voluntarily
brought about, in order to make a certain mental content available. Others are asso-
ciatively activated in the mind by contextual cues.
When one comes up with a nominal distinction such as this, the next question is
whether the distinction is real: does it connect to a distinction between two
natural kinds? Are there, in fact, mental acts in the agentive sense? Is there, further-
more, a reason to consider that supposedly “mental” acts are of a different nature
from ordinary bodily actions? Are they not normal ingredients of an action, rather
than being independent actions?
To lay the groundwork for the discussion, we need to start with a tentative char-
acterization of the general structure of action, on the basis of which mental acts can
be specified. A commonly held view is that both bodily and mental acts involve
some kind of intention, volition, or reason to act; the latter factor both causes and
guides the action to be executed.3 Along these lines, the general structure of an
action is something like:

(C1) Intending (willing, having reasons) to see goal G realized → (=causes)
trying to H in order to have G realized.

On the basis of this general characterization, one can identify a mental act as an
act H that is tried in order to bring about a specific property G—of a self-directed,
mental, or cognitive variety.4 The epistemic class of mental acts encompasses per-
ceptual attendings, directed memorizings, reasonings, imaginings, visualizings. A
mixed category, involving a combination of epistemic, prudential, or motivational
ingredients, includes acceptings, plannings, deliberatings, preference weightings,
and episodes of emotional control.
Three main arguments have been directed against the characterization of men-
tal acts described in (C1). First, it seems incoherent, even contradictory, to rep-
resent a mental act as trying to bring about a prespecified thought content: if the
content is prespecified, it already exists, so there is no need to try to produce it.
Second, the output of most of the mental operations listed above seems to crucially
involve events of passive acquisition, a fact that does not seem to be accommodated
by (C1). Trying to remember, for example, does not seem to be entirely a matter of
willing to remember: it seems to involve an essentially receptive sequence. Third,
it makes little sense, from a phenomenological viewpoint, to say that mental acts
result from intentions: one never intends to form a particular thought. We will dis-
cuss each of these objections and will examine whether and how (C1) should be
modified as a result.

I. WHAT IS AN EPISTEMIC GOAL?


Bernard Williams5 has emphasized that if beliefs depended, for their specific con-
tents, on believers’ arbitrary desires or intentions, then the truth-evaluative property
that makes them beliefs would be compromised. What holds for belief acquisition
also holds for controlled forms of epistemic functions, such as trying to remember
that P or trying to perceive that Q. Here a subject cannot want to remember or
perceive a given content because it is a condition of satisfaction of the correspond-
ing mental act that it responds to truth or validity, rather than to the thinker’s pref-
erences.6 Even mental acts of the mixed (epistemic-conative) variety involve
constraints that do not seem to depend merely on a thinker’s arbitrary goal: when
planning, for example, minimal objective constraints such as relevance and coherence
need to be respected in order for the associated episode to qualify as planning.
This kind of observation leads to the articulation of the following principle:

(P1) Mental actions generally have normative-constitutive properties that
preclude their contents from being prespecifiable at will.
In virtue of (P1), one cannot try to judge that P. One can try, however, to form one’s
opinion about whether P; or examine whether something can be taken as a premise
in reasoning, namely, accept that P. Or work backward from a given conclusion to
the premises that would justify it. But in all such cases, accepting that P is conditional
upon feeling justified in accepting that P. For example, one can feel justified in accept-
ing a claim “for the sake of argument” or because, as an attorney, one is supposed to
reason on the basis of a client’s claim. Various norms thus apply to a mental action,
constituting it as the mental action it is. In the case of accepting, coherence regulates
the relations between accepted premises and conclusions. Relevance applies to the
particular selection of premises accepted, given the overall demonstrative intention
of the agent. Exhaustivity applies to the selection of the relevant premises given one’s
epistemic goal.
These norms work as constraints on nonagentive epistemic attitudes as well as on
mental actions. Forming and revising a belief are operations that aim at truth, and
at coherence among credal contents. Thus normative requirements do not apply
only to mental actions. Mental actions, rather, inherit the normative requirements
that already apply to their epistemic attitudinal preconditions and outcomes. If a
thinker were intending to reach conclusions, build up plans, and so forth, irrespective
of norms such as relevance, coherence, or exhaustivity, her resulting mental activity
would not count as a mental action of reasoning or planning. It would be merely an
illusory attempt at planning, or reasoning.7 A mental agent cannot, therefore, try to
Φ without being sensitive to the norm(s) that constitute successful Φ-ings.
The upshot is that the earlier characterization (C1) should be rephrased, as in (C2),
in order to allow for the fact that the mental property that is aimed at should be
acquired in the “right way,” as a function of the kind of property it is.

(C2) Intending to see goal G realized → (=causes) trying to H in conformity
with a constitutive epistemic norm in order to have G realized as a consequence
of this normative requirement.

Characterization (C2) can be used to explain the specific difference of mental ver-
sus bodily acts in the following way. Just as bodily acts aim at changing the world by
using certain means-to-end relations (captured in instrumental beliefs and know-
how), mental acts have, as their goal, changing one’s mind by relying on two types
of norm: means-to-end instrumental norms (e.g., “concentrating helps remember-
ing”) and constitutive norms (“my memory attempt ought to bring about a correct
outcome”). The specific difference between a mental and a bodily act, then, is that
specifically epistemic, constitutive norms are only enforced in mental acts and atti-
tudes, and that an agent has to be sensitive to them to be able to perform epistemic
actions. This does not entail, however, that a thinker has to have normative concepts
such as truth or relevance. An agent only needs to practically adjust her mental per-
formance as a function of considerations of truth, exhaustivity, or relevance, and so
forth. There is a parallel in bodily action: an agent does not need to explicitly recog-
nize the role of gravity in her posture and bodily effort to adjust them appropriately,
when gravity changes, for example, under water. It is an important property of con-
stitutive norms that they don’t need to be consciously exercised to be recognized as
practical constraints on what can be done mentally. For example, an agent who tries
to remember a date, a fact, a name implicitly knows that success has to do with the
accuracy of the recalled material; an agent who tries to notice a defect in a crystal
glass implicitly knows that her attempt depends on the validity of her perceptual
judgment. In all such cases, the properties of informational extraction and trans-
fer constrain mental performance just as the properties of gravity constrain bodily
performance.

A characterization along these lines, however, seems to be blocked by two
objections.

A. There Are No Constitutive Norms


Dretske (2000) defends the claim that there is only one variety of rationality: instru-
mental rationality. Violations of truth, mistakes, fallacies are merely properties we
don’t like, such as “foul weather on the day of our picnic.” Epistemic constraints,
then, should not be seen as constituting a special category; they belong to instru-
mental, conditional norms involved in practical reasoning:

(P2) “One ought to adopt the means one believes necessary (in the circum-
stances) to do what one intends to do.”8

Given that what one intends to do varies with agents and circumstances, some people
may prefer to ignore a fallacy in their reasoning, or jump to a conclusion, just as
some prefer to picnic in the rain. There are many types of instrumental conditions
for attaining goals, and they each define a norm, in the weak sense of a reason for
adopting the means one adopts. From this perspective, epistemic norms are no more
constitutive for a mental act than beliefs in means-end conditions for realizing a
goal are constitutive for a bodily act. They are merely instrumental conditions dependent
on one’s intention to reach a given end. A closely related argument
in favor of the instrumental view of epistemic norms, proposed by Papineau (1999),
is that they compete with each other: one can desire that one’s beliefs be formed so
as to be true, or informative, or economical, and so forth. Truth is not an overarching
norm; norms apply in a context-relative way, according to the agent’s goal.
This kind of argument, however, has been criticized for conflating reasons to act
and normative requirements on acting. Adopting John Broome’s (1999) distinction,
one might say that Dretske’s proposition (P2), expressed earlier, correctly articulates a
relation of normative requirement between intending an end, and intending what
you believe to be a necessary means to this end. But this does not ipso facto provide
you with a reason to intend what you believe to be a necessary means to the end;
conversely, whatever reason you may have to take this particular means as necessary
to reach this end does not count as normatively required. Let us see why. A reason
is "an ought" pro tanto—"an ought so far as it goes." For example, if you intend to
open a wine bottle, granting that you believe that you need a corkscrew, you ought
to get one. Believing that you ought to get a corkscrew, however, cannot make it true
that you ought to do so. You ought to do so if there is no reason not to do it. The
conclusion of your practical reasoning, finally, can be detached: get a corkscrew!
In short, "a reason is slack, but absolute." In contrast, a normative requirement is
“strict, but relative.” Why is a normative requirement strict? Suppose that, accepting
A, you explore whether to accept B, which is in fact logically entailed by A. If you
accept A, you have no choice but to accept B. Entailment does not depend upon cir-
cumstances, or on ulterior reasons. But, in contrast to having a reason, which, being
absolute (although “slack”), is detachable, being normatively required to accept B
(although “strict”) is merely relative to accepting A: you are not ipso facto normatively
required to accept A in the first place.
The same holds for instrumental reasoning. The relation between intending an
end and intending the means you believe to lead to that end is a conceptual require-
ment for you to form the intention to act. But this does not entail that the specific
means intended are normatively required. Therefore, the strict normative require-
ment expressed in (P2) cannot form the agent’s end. For it is neither detachable
nor pro tanto.9 Broome’s distinction between reason and normative requirement
allows us to explain why one can understand the normative requirements involved
in someone else’s instrumental reasoning even when her instrumental beliefs seem
wrong, and/or her ends irrational.
Broome’s distinction also allows us to respond to Papineau’s argument based
on norm conflict. There are two main ways of understanding such a conflict. In
Papineau’s argument, normative competition derives from agents’ having conflict-
ing desires: any one of these desires, for truth, informativeness, or economy, can
appear to gain precedence over the others in a particular context. They do this
because the norms are conceived of as “slack” and “absolute.” In an alternative con-
strual, inspired by Broome, a conflict among the epistemic ends to be pursued in
a mental task does not affect the normative requirements applying to the resulting
mental act. Selecting a goal does not produce a normative shift because slackness
and absoluteness of potentially conflicting reasons are compatible with strictness
and relativity in normative requirements. Strictly speaking, then, there is no con-
flict among normative requirements (in the abstract, these requirements are all
coherent and subordinated to truth). An epistemic conflict only occurs when an
agent predicts that she will fail to have the necessary time and cognitive resources
to reach a solution to a problem that is, say, simultaneously accurate, justified,
exhaustive, and relevant. Various possible epistemic strategies are available; the
agent needs to decide which will best serve her current needs. By so doing, she
ipso facto adopts the (strict, relational) normative requirement(s) inherent in the
chosen strategy.
Competition between strategies, therefore, does not amount to normative con-
flict. An agent may be right or wrong in selecting, under time pressure, a strategy
of exhaustivity (this depends in part on the anticipated cost-benefit schedule). But
her reason for selecting a strategy has nothing to do with a normative requirement.
Normative requirements only step in once a specific strategy is chosen. Relative to that
strategy, the normative requirements constitutive of this strategy will apply. If the
agent aims to be exhaustive, she will aim to find all the true positives and accept
the risk of producing false positives; the normative requirement conditional on
this strategy is that she ought to include all the true answers in her responses. If,
however, the agent’s aim is to retrieve only correct answers, the normative require-
ment is that no incorrect answer should be included in her responses.
So there are two very different ways in which a mental agent can fail in an action:
she can select an aim that she has no good reason to select (aiming to be exhaustive
when she should have aimed at accuracy). Or she can fail to fulfill the normative
requirements that are inherent to the strategy she selected (aiming to be exhaustive
and leaving out half of the items in the target set). For example, if the agent tries to
remember an event, no representation of an event other than the intended one will
do. The agent, however, may mistake an imagining for a remembering. This
example can be described as the agent’s confusion of a norm of fluency with a norm
of accuracy. Another frequent example is that, having accepted A, an agent fails to
accept B, which is a logical consequence of A (maybe she is strongly motivated to
reject B).10 Here again, the agent was committed to a norm of coherence but failed
to apply it, while turning, possibly, to another norm, such as fluency or temporal
economy, unsuitable to the task at hand.
If indeed there are two different forms of failure, connected, respectively, with
goal selection, and with a goal-dependent normative requirement, we should recog-
nize that it is one thing to select the type of mental act that responds to the needs of
a given context, and another to fulfill the normative requirements associated with
this selected mental act. Selecting one act may be more or less rational, given a dis-
tal goal and a context. An agent may be wrong to believe that she needs to offer an
exhaustive, or a fine-grained, answer to a question (contexts such as conversation,
eyewitness testimony, academic discussion, and so on, prompt different mental
goals). Having selected a given goal, however, the agent now comes under the pur-
view of one or several constitutive norms, which define the satisfaction conditions
of the associated mental action. The fact that there can be conflicts among epistemic
strategies thus just means that an agent must select a particular mental act in a given
context, if she is in fact unable to carry out several of them at once. Each possi-
ble mental act is inherently responsive to one or several distinctive norms. Which
mental act is needed, however, must be decided on the joint basis of the contextual
needs and of one’s present dispositions.
If this conclusion is correct, it suggests, first, that mental acts are natural kinds,
which are only very roughly captured by commonsense categories such as “trying
to remember” or “trying to perceive.” An act of directed memory, or perceptual
attending, for example, should be distinguished from another if it aims at exhaustiv-
ity or at strict accuracy. Similarly, a type of reasoning could be aiming at coherence,
or at truth, depending on whether the premises are only being considered, that is,
assumed temporarily, or fully accepted. These are very different types of mental
acts, which, since they invoke different normative requirements, have different con-
ditions of success and also require different cognitive abilities from the agent.
Second, the conclusion also suggests that the conformity of present cognitive
dispositions with a given normative requirement should be assessed prior to men-
tally acting: a thinker needs to evaluate the likelihood that a mental action of this
type, in this context, will be successful. In other words, a predictive self-evaluation
needs to take place for a subject to appropriately select which mental act to per-
form. For example, a subject engaged in a learning process may need to appreciate
whether she will be able to remember either accurately, or exhaustively, a set of
items. Evaluation should also be made after the action is performed. At the end of
a controlled retrieval, the agent should assess whether her retrieval is correct, accu-
rate, and exhaustive. Differences in normative requirements for mental acts should
thus be reflected in various forms of metacognitive evaluations.

B. The “Unrefined Thinkers” Argument


The preceding discussion allows us to deal more quickly with a second objection,
offered by David Papineau (1999): epistemic norms cannot be constitutive because
they are frequently unobserved (ignored) by higher mammals and very young children,
or even routinely violated by normal believers. Research on reasoning, indeed,
provides a wealth of examples where people make major deductive mistakes in
evaluating syllogisms or in using or evaluating a conditional rule, or perform incorrectly
on even elementary problems of probabilistic reasoning.11 Let us deal first with the
violation argument. There is nothing threatening for the constitutive view in the
fact that agents can fail to abide by norms. Given that the normative requirements
of interest are constitutive of a given act, one can perfectly well accommodate vio-
lations as cases in which an agent thought she was trying to Φ (reason, remember,
perceive), but either applied an irrelevant norm for this particular trying (trying to
φ through ψ-ing, i.e., having the illusion of trying to φ and actually merely ψ-ing), or
failed to abide by the chosen norm. The first kind of failure may seem quite strange;
research on metamemory, however, offers many examples of illusions of this type.
Agents can actually try to conjure up a vivid image of a scene (based on third-per-
son narratives, or films), and believe that this mental action is a reliable indicator
for remembering one’s personal experience of the scene. We can understand why
nonsophisticated agents commit such mistakes: indeed the nonconceptual content
that allows them to identify an effort to remember (rather than to imagine) is the
vividness and precision, that is, the fluency with which the memory comes to mind;
but imagining may be fluent too, particularly in conditions where the content of
the imagination has been primed. Fluency can thus trick subjects into performing a
mental action different from the one they think they are engaged in.12
The other part of Papineau’s “unrefined thinkers” argument reasons that “since
young infants, and probably all animals, lack the notion of true belief, they will be
incapable of sensitivity to such norms.” Papineau considers that the concept of true
belief has to belong to the agent’s conceptual repertoire for her to have a reason to
pursue truth, and to be sensitive to the norm of truth when forming beliefs about
the world.
It is arguable, however, that all that is necessary for a subject to be sensitive to
truth and other epistemic norms is some awareness of the conditions of success for
acting in a complex world (social or physical). A toddler may want to get back all the
toys she has lent to another: she thus needs to try to remember them all, even before
she understands, in the abstract, concepts such as exhaustivity, truth, or memory.
An objector might insist that one can only become sensitive to certain epistemic
norms through explicit conceptual tutoring. Organisms can only apply normative
requirements if they are sensitive to them, either because evolution has provided
such sensitivity or because social learning has made them sensitive to new norms.
Given the internal relations between normative requirements, norm sensitivity, and
mental acts, the range of mental acts available to an agent is partly, although not
fully, constrained by the concepts she has acquired. In particular, when an agent
becomes able to refer to her own cognitive abilities and to their respective norma-
tive requirements, she ipso facto extends the repertoire of her “tryings” (i.e., of her
dispositions to act mentally).
This objection is perfectly correct. In response to Papineau’s “unrefined thinkers”
argument, we should only claim that some basic constitutive requirements, at least,
are implicitly represented in one’s sense of cognitive efficiency. Among these basic
requirements, fluency is a major epistemic norm that paves the way for the others.
A feeling of perceptual or mnemonic fluency, experienced while engaged in some
world-directed action (such as reclaiming one’s toys), allows a subject to assess the
validity of her perceptual judgments, or the exhaustivity of a recall episode.
This idea of basic normative requirements has recently received support from
comparative psychology. It has been shown, in various animal studies, that some
nonhuman primates, although not mind readers, are able to evaluate their mem-
ory or their ability to perceptually discriminate between categories of stimuli.
Macaques, for example, are able to choose to perform a task when and only when
they predict that they can remember a test stimulus; they have the same patterning
of psychophysical decision as humans.13 This suggests that macaques can perform
the mental action of trying to remember, or of trying to discriminate, just as humans
do; furthermore, they are able to choose the cognitive task that will optimize their
gains, based on their assessment of how well they perceive or remember (rather
than on stimulus-response associations, which are not made available to them).
The obvious question, then, is how animals can conduct rational self-evaluation
(i.e., use a form of “metacognition”) in the absence of conceptual self-knowledge. A
plausible answer, currently being explored by philosophers and cognitive scientists,
is that affective states have an essential role in providing the bases of norm sensitiv-
ity in animals, in children, and also in human adults.14 A feeling “tells” a subject,
in a practical, unarticulated, embodied way, how a given mental act is developing
with respect to its constitutive norm, without needing to be reflectively available to
the believer. Cues such as contraction of the corrugator muscle (correlating with a
sense of difficulty, experienced when frowning), or the absence of tension, seem to
be associated with a gradient in self-confidence about the outcome of the current
mental act. This feeling, however, also has a motivational force, making the prospect
of pursuing the action attractive or aversive to the agent.
Philosophically, however, the important question is not only how epistemic emo-
tions are implemented,15 not only how they influence decision,16 but also how they
can contribute to rational evaluation. How can epistemic feelings generate mental
contents that actually enable a subject to perform self-evaluation? One possibility is
that emotions provide access to facts about one’s own attitudes and commitments.17
If these facts are articulated in a propositional way, then emotions are subjected
to the agent’s self-interpretive activity as a mind reader. Another possibility, not
incompatible with the first, based on the comparative evidence reported earlier, is
that epistemic feelings express affordances in a nonconceptual way (Proust, 2009).
On this view, a "memory affordance" does not need to be represented in conceptual
terms: an animal able to control its memory can simply use embodied emotional
cues that correlate with reliability. Granting that mental agents range from unre-
fined to refined thinkers, these two theories may well both be true, and correspond
to different, complementary ways in which epistemic emotions can influence ratio-
nal human decision making, even in the same individual.
Let us take stock of the discussion so far. Epistemic norms are constitutive of
mental acts, rather than being purely instrumental in general goal-directed behav-
ior. Epistemic requirements determine classes of mental acts. Mental agents may be
sensitive to constitutive norms either on the basis of a conceptual understanding of
the dependence of the success of a mental act on its epistemic conditions, or on the
basis of felt emotions (also called “noetic feelings”) that track specific normative
requirements. Evaluation of one’s mental dispositions (before acting mentally) and
postevaluation of one’s mental achievement (once the action is performed) are two
steps where sensitivity to constitutive norms is used to select and monitor mental
performance.
We have so far been led to retain our earlier characterization (C2) of mental
actions:

(C2) Intending to see goal G realized → (=causes) trying to H in conformity
with a constitutive epistemic norm in order to have G realized as a consequence
of this normative requirement.

II. PASSIVITY AND MENTAL AGENCY


A second puzzle, however, is raised by (C2). As frequently pointed out, most mental
acts seem to include sequences that are receptive rather than active.18 Let us assume,
for example, that I need to recover the name of Julie’s husband. Is there anything I
can do to recover this name? Should I concentrate? Should I, rather, think about
something else? Should I look it up in a list? Either the content pops into the mind,
so to speak, or it does not. When the name pops into one’s mind, this is not a case
of an action, but rather an effect of associations that allow one to utter a name con-
nected to another name. When it does not, there is not much one can do to produce
the wanted name.
Even though one cannot truly be said, in the goal-directed sense, to intend to judge
whether P, to recall X, and so forth, in the same sense in which one intends to turn
on the light, there is a sense in which we deliberately put ourselves in a position
that should increase the probability of judging whether P or recalling X. These out-
comes would not occur if the relevant actions were not intentionally performed.19
Bringing it about that one remembers, or any other controlled psychological opera-
tion, therefore, qualifies as a mental action, insofar as it is produced deliberately.
But deliberateness alone will not do, and this is a new argument in favor of cau-
tion with respect to (C2). As McCann (1974) noticed, someone could believe that
she can deliberately control her heartbeat, and be falsely confirmed in her belief
when the excitement (produced by her false expectation that she can do it) actu-
ally speeds it up. To exclude this type of case, a constraint is needed on the kind of
trying responsible for a mental act (or a nonmental one). It must, in general, fulfill a
“voluntary control condition” (VCC):

(P3) VCC: Trying to A necessarily involves an actual capacity to exert
voluntary control over a bodily or mental change.

Having voluntary control over a change means that the agent knows how, and is
normally able, to produce a desired effect; in other words, the type of procedural
or instrumental activity that she is trying to set in motion must belong to her rep-
ertoire. Even though interfering conditions may block the desired outcome, the
agent has tried to act if and only if she has exerted voluntary control in an area in
which she has in fact the associated competence to act. An important consequence
of McCann’s suggestion is that the agent may not be in a position to know whether
an action belongs to her repertoire or not. All she knows is that she seems to be try-
ing to perform action A. Trying, however, is not a sure sign that a mental action is
indeed being performed.
It is compatible with VCC, however, that bodily or mental properties that seem
prima facie uncontrollable, such as sneezing, feeling angry, or remembering the
party, can be indirectly controlled by an agent, if she has found a way to cause herself
to sneeze, feel angry about S, or remember the party. She can then bring it about that
she feels angry about S, or that she remembers the party, and so on. Are these bona
fide cases of mental action? Here intuitions divide in an interesting way.
Some theorists of action20 consider that an intrinsic property of action is that, in
Al Mele’s (2009) terms:

(P4) The things that agents can, strictly speaking, try to do, include no nonac-
tions (INN).

An agentive episode, on this view, needs to include subsequences that are them-
selves actions. It must not essentially involve receptivity. Those who hold the INN
principle contrast cases such as trying to remember, where success hinges on a
receptive event (through which the goal is supposed to be brought about), with
directly controlled events, such as lighting up the room. For example, while agreeing
that a thinker’s intention is able to have a “catalytic” influence on her thought pro-
cesses, Galen Strawson rejects the view that she can try to entertain mental contents
intentionally.
Is Strawson’s claim justified? We saw in section I that “entertaining a thought con-
tent” does not qualify as an action, and cannot even constitute the aim of an action
(except in the particular case of accepting). But as Mele (2009) remarks, “It leaves
plenty of room for related intentional mental actions” (31). Take Mele’s task of find-
ing seven animals whose names start with "g" (Mele, 2009). There are several things
that the agent does in order to complete the task: exclude animal names not begin-
ning with “g,” make a mental note of each word beginning with “g” that has already
come to mind, keep her attention focused, and so on. Her retrieving “goat,” however,
does not qualify as a mental action, because “goat” came to her mind involuntarily,
that is, was a nonaction. In conclusion: bringing it about that one thinks of seven
animal names is intentional and can be tried, while forming the conscious thoughts
of seven individual animal names is not (Mele, 2009).
One can agree with Mele, while observing that bodily actions rarely fulfill the
INN condition. Most ordinary actions involve some passive relying on objects or
procedures: making a phone call, for example, presupposes that there exists a reli-
able mechanism that conveys my vocal message to a distant hearer. Asking some-
one, at the dinner table, “Is there any salt?” is an indirect speech act that relies on
the hearer’s activity for computing the relevant meaning of the utterance, a request
rather than a question. Gardening, or parenting, consists in actions that are meant
to make certain consequences more probable, rather than producing them outright.
There is a deeper reason, however, to insist that “trying to A mentally” does not
need to respect the INN principle. If section I is right, acting mentally has a two-
tiered structure. Let us reproduce the characterization discussed earlier:

(C2) Intending to see goal G realized → (=causes) trying to H in conformity
with a constitutive epistemic norm in order to have G realized as a consequence
of this normative requirement.

There are, as we saw, two kinds of motives that have to be present for a mental act to
succeed. A first motive is instrumental: a mental act is performed because of some
basic informational need, such as “remembering the name of the play.” A second
motive is normative: given the specific type of the mental action performed, a spe-
cific epistemic norm has to apply to the act. These two motives actually correspond
to different phases of a mental act. The first motivates the mental act itself through
its final goal. The second offers an evaluation of the feasibility of the act; if the pre-
diction does not reach an adequacy threshold, then the instrumental motive needs
to be revised. This second step, however, is of a “monitoring” variety. The thinker
asks herself a question, whose answer is brought about in the agent by her emo-
tions and prior beliefs, in the form of a feeling of knowing, or of intelligibility, or of
memorial fluency. Sometimes, bodily actions require an analogous form of moni-
toring: if an agent is unsure of her physical capacity to perform a given effort, for
example, she needs to form a judgment of her ability based on a simulation of the
action to be performed. Mental acts, however, being highly contextual, and tightly
associated with normative requirements, need to include a receptive component.
In summary, mental agency must adjudicate between two kinds of motives that
jointly regulate mental acts. The agent’s instrumental reason is to have a mental goal
realized (more or less important, given a context). This goal, however, is conditional
on her attention being correctly oriented, and on her existing cognitive dispositions for
producing the mental goal. Here, epistemic requirements become salient to the agent.
Feelings of cognitive feasibility are passively produced in the agent’s mind as a result of
her attention being channeled in a given epistemic direction. These feelings predict the
probability for a presently activated disposition to fulfill the constraints associated with
a given norm (accuracy, or simplicity, or coherence, etc.). Epistemic beliefs and theo-
ries can also help the agent monitor her ability to attain a desired cognitive outcome.
Thus, orienting one’s attention as a result of an instrumental reason (finding the
name of the play) creates a unique pressure on self-evaluation, which constitutes a
precondition and a postevaluative condition for the mental act. One can capture
this complex structure in the following theoretical definition of a mental act:

(C3) Being motivated to have goal G realized → (=causes) trying to bring
about H in order to see G realized by taking advantage of one’s cognitive
dispositions and norm-sensitivity for H reliably producing G.

This characterization stresses the functional association of normativity and receptivity.
Given the importance of normative requirements in mental actions, there has
to exist a capacity for observing, or for intuitively grasping, where norms lie in a
given case. Constitutive norm sensitivity is a receptive capacity without which no
mental action could be performed.

III. INTENTIONS AND MENTAL ACTS


Although (C3) no longer includes a causal role for intentions, it is necessary to dis-
cuss their possible causal role in mental actions, or else give an alternative explanation
of how a mental act is caused. As Ryle observed, if a thought presupposed a former
intention, namely, another thought, we would embark on an infinite regress.21 It does
not seem to be the case, however, that we normally intend to move from one thought
to the next. The process of thinking does not seem to be constrained, in general, by
prior intentions. In the commonsense view, targeted by the philosophy of action of
the last century,22 a personal-level prior intention causes an action on the basis of a
representation of a goal and of how to reach it. Emotional or impulsive actions were
shown to resist this explanation; this led to postulating a specific category of inten-
tions, called “intentions in action,” supposed to trigger the action at the very moment
they are formed.23 Neither kind of intention, however, fits the phenomenology of
mental actions, as has often been noticed.24 An ordinary thinker, in contrast with phi-
losophers, scientists, mathematicians, or politicians, normally does not form a prior
intention to make up her mind about a given issue. Mental acts are generally per-
formed while pursuing some other type of ordinary action, such as shopping, having
a conversation, asking for one’s toys to be given back, or packing for a trip.
A more appropriate proposal seems to be that a mental action results from the
sudden realization that one of the epistemic preconditions for a developing action
is not met. An example of epistemic precondition is the capacity to access one’s
knowledge of the various means that need to be involved in the overall action. For
example, conversation requires the ability to fluently retrieve various proper names
and episodic facts. Another is the ability to recognize spatial or temporal cues
(e.g., while navigating in a foreign city). When an agent is confronted with such an
epistemic mismatch between a desired and an existing mental state or property, she
is prompted into the agentive mode. What does this mean, exactly? She needs to
bring herself to acquire the mental state in question, or else to substantially revise
her plan of action. Note that the agent does not need to represent this situation in
any deeply reflective way. She only needs to concentrate on how to make her action
possible. This strategy, typically, starts with a self-addressed question: Can I find
this proper name? Will I be able to recognize my friend’s home?
Let us suppose, for example, that you go to the supermarket and suddenly real-
ize, once there, that you have forgotten your shopping list. You experience a spe-
cific unpleasant emotion, which, functionally, serves as an error signal: a crucial
epistemic precondition for your planned action is not fulfilled, because you seem
not to remember what was on the list. When such an error signal is produced, the
representation of the current action switches into a revision mode. Note that this
epistemic feeling differs from an intention: it does not have a habitual structure, as
an intention normally does, given the highly planned, hierarchical structure of most
of our instrumental actions. It is, rather, highly contextual, difficult to anticipate,
unintentional, and dependent upon the way things turn out to be in one’s interac-
tion with the environment. The error signal is associated with a judgment concern-
ing the fact that your shopping list is not available as expected. Now, what we need
to understand is when and why this judgment leads to selection of a mental act
rather than to a new bodily action.

A. Hypothesis A
A first hypothesis—hypothesis A—is that the error signal is an ordinary, garden-
variety action feedback. It is generated when some expectation concerning either
the current motor development of the action or its outcome in the world does not
match what is observed (according to a popular “comparator view” of action).25 But
there is no reason to say that such feedback has to trigger a mental act. What it
may trigger, rather, is a correction of the trajectory of one’s limbs, or a change in
the instrumental conditions used to realize the goal. If I realize that my arm does
not extend far enough to reach the glass I want, I have the option of adjusting my
posture or taking an extra step. When I realize that I don’t have my shopping list at
hand, I have the option of looking for it, reconstituting it, or shopping without a list.
In both situations, no mental act seems necessary.

B. Hypothesis B
Hypothesis B states that the error signal of interest is not of a postural, spatial, or
purely instrumental kind. The distinction between epistemic and instrumental rel-
evance discussed in section I is then saliently involved in the decision process. An
instrumental error signal carries the information that the existing means do not
predict success (“shopping will be difficult, or even impossible”). An epistemic
error signal carries, in addition, the information that epistemic norms are involved
in repairing the planning defect (“can my memory reliably replace my list?”). The
comparator that produces the epistemic error signal, on this hypothesis, has access
to the cognitive resources to be used in a given task. To make an optimal decision,
the agent needs to be sensitive to the norms involved, such as accuracy or exhaus-
tivity. Norm sensitivity is, indeed, implicit in the practical trilemma with which
the agent is confronted: either (1) she needs to interrupt her shopping, or (2) she
needs to reconstruct the relevant list of items from memory, or, finally, (3) she may
shop without a list, in the hope that roaming about will allow her to track down the
needed items. The trilemma is only available to an agent if mental acts are in her
repertoire, and if she can select an option on the basis of her contextual metacog-
nitive self-evaluations. Now consider the constraints that will play a role in deter-
mining how the trilemma should be solved. The list can be more or less accurately
reconstructed: the new list can include fewer items than the original list, and thus
violate a norm of exhaustivity (or quantity). It can include more items than the origi-
nal list, thus violating a norm of accuracy (or truth). As shown in section I, norma-
tive requirements depend upon the goal pursued, but they are strict, rather than pro
tanto. Self-probing her own memory is an initial phase that will orient the shopper
toward the normatively proper strategy.
A defender of the A-hypothesis usually blames the B-hypothesis for taking the
principle of Occam’s razor too lightly. Here is how the argument goes. Any simple
postural adjustment can, from a B-viewpoint, be turned into a mental act. When
realizing that a movement was inadequate, you engaged in a reflective episode;
you compared your prior (estimated) belief of the distance between your arm and
the glass with your present knowledge of the actual distance. A precondition of the
current action fails to be met. As a result, you actively revise your former belief, and,
as a consequence, you reflectively form the mental intention to perform a correc-
tive postural action. Surely, this picture is overintellectualist. Any animal can correct
its trajectory to reach a goal: no mental act, no comparison between belief states
is needed; a navigating animal merely compares perceptual contents; it aims at a
matching state and perseveres until it gets it.
The A-objector correctly emphasizes that the concept of a “mental property”
can describe any world property one cares to think about. A color, or a shape,
becomes mental once seen. A behavior becomes mental as soon as it is anticipated
or rehearsed. A more economical theory, the objector concludes, should explain
actions through first-order properties; what is of cognitive interest is the world, not
the mind turning to itself to see the world.
The B-defender, however, will respond that the A-objector ignores existing
psychological mechanisms that have the function of assessing one’s cognitive dis-
positions as such—they are not merely assessing the probability of the world turning
out, or not, to be favorable to one’s plans. Indeed, crucial evidence in favor of the
B-hypothesis consists in the contrast between animals that are able to perform meta-
cognitive self-evaluation to decide what to do, such as some nonhuman primates
and dolphins, and those unable to do so, such as pigeons and rats.26 Metacognitive
self-evaluation, however, is not in itself a mental action. It is the initial and the last
step of such an action, in a way that closely parallels the functional structure of bodily
actions. Neuroscientific evidence suggests that a bodily action starts with a covert
rehearsal of the movement to be performed.27 This rehearsal, although “covert,” is
not a mental action but, rather, a subpersonal operation that is a normal ingredient
of a bodily action. Its function is strictly instrumental: to compare predicted effi-
ciency with a stored norm. Similarly, a mental action starts with evaluating whether
a cognitive disposition can reliably be activated. Its function is, as argued in sec-
tion I, directly critical and indirectly instrumental. Its critical function is to evaluate
how reliable or dependable my own cognitive dispositions are relative to a given
normative requirement. Its instrumental function is to guide a decision to act in
this or that way to attain the goal. The parallel also applies to the ultimate step of an
action. Once an action is performed, it must be evaluated: Does the observed goal
match the expected goal? Again, there is an interesting difference in postevaluating
a bodily and a mental action. In a bodily action, sensory feedback normally tells
the agent whether there is a match or a mismatch. In a mental action, however, the
feedback is of a different kind. The subject needs to appreciate the normative status
of the output of the mental act: Is the name retrieved correct? Has the list been
exhaustively reproduced? Here, again, a subject is sensitive to the norms involved
in self-evaluation through a global impression, including feelings of fluency, coher-
ence, and so on, as well as situational cues and beliefs about his or her competence
with respect to the task involved.
The upshot is that, from the B-viewpoint, the existence of metacognition as a spe-
cifically evolved set of dispositions is a crucial argument in favor of the existence of
mental acts as a natural kind, distinct from motor or bodily acts. Let’s come back to
the error signal as a trigger for a mental action. In the shopper example, the error sig-
nal that makes a mental action necessary is the absence of an expected precondition
for an ordinary action: the shopping list being missing, the agent must rely on her
unaided memory. It is interesting to note that the list itself represented an attempt
to avoid having to rely on one’s uncertain memory to succeed in the shopping task.
The anticipated error to which the list responds is thus one of failing to act accord-
ing to one’s plan. Externalizing one’s metacognitive capacities is a standard way of
securing normative requirements as well as instrumental success in one’s actions.
The error signal often consists in a temporal lag affecting the onset of a sequence
of action. For example, in a conversation, a name fails to be quickly available. The
error signal makes this manifest to the agent. How, from that error signal, is a mental
act selected? In some favorable cases, an instrumental routine will save the trouble
of resorting to a specific mental act: “Just read the name tag of the person you are
speaking to.” When, however, no such routine is available, the speaker must either
cause herself to retrieve the missing name or else modify the sentence she plans to
utter. In order to decide whether to search her memory, she needs to consider both
the uncertainty of her retrieving the name she needs to utter and the cost-benefit
ratio, or utility, of the final decision. Dedicated noetic, or epistemic, feelings help
the agent evaluate her uncertainty. These feelings are functionally distinct from the
error signals that trigger mental acts. Nonetheless, the emotional experience of the
agent may develop seamlessly from error signal to noetic feeling.
In summary, our discussion of the shopper example suggests, first, that the error
signal that triggers a mental act has to do with information, and related epistemic
norms; and second, that the mental act is subordinated to another encompassing
action, which itself has a given utility, that is, a cost-benefit schedule. The two acts are
clearly distinct and related. A failure in the mental act can occur as a consequence
of overconfidence, or for some other reason: it will normally affect, all things being
equal, the outcome of the ordinary action. An obvious objection that was discussed
is one of hyperintellectualism: Are we not projecting onto our shopper an awareness
of the epistemic norms that she does not need to have? A perceiving animal clearly
does not need to know that it is exercising a norm of validity when it is acting on the
basis of its perception. We need to grant that norm sensitivity need not involve any
conceptual knowledge of what a norm is. Depending on context, an agent will be
sensitive to certain epistemic norms rather than others, just as, in the case of a child,
the issue may be about getting back all the toys, or merely the favored one. She may
also implicitly recognize that the demands of different norms are mutually incom-
patible in a given context. If one remembers that normative requirements apply to
attitudes as well as to mental actions, then the question of normative sensitivity is
already presupposed by the ability to revise one’s beliefs in a norm-sensitive way, an
ability that is largely shared with nonhumans.

CONCLUSION
A careful analysis of the role of normative requirements as opposed to instrumental
reasons has hopefully established that mental and bodily forms of action are two
distinct natural kinds. In contrast with bodily action, two kinds of motives have to
be present for a mental act to develop. A first motive is instrumental: a mental act
is performed because of some basic informational need, such as “remembering the
name of the play” as part of an encompassing action. A second motive is epistemic:
given the specific type of mental action performed, a specific epistemic norm must
apply to the act (e.g., accuracy). These two motives actually correspond to differ-
ent phases in a mental act. The first motivates the mental act instrumentally. This
instrumental motivation is often underwritten by a mere time lag, which works as
an error signal. The second offers an evaluation of the feasibility of the act, on the
basis of its constitutive epistemic requirement(s). Self-probing one’s disposition to
act and postevaluating the outcome of the act involve a distinctive sensitivity to the
epistemic norms that constitute the current mental action.
Conceived in this way, a characterization of mental acts eschews the three difficulties
mentioned at the outset. The possibility of prespecifying the outcome of an epistemic
mental act is blocked by the fact that such an act is constituted by strict normative
requirements. That mental acts include receptive features is shown to be a necessary
architectural constraint for mental agents to be sensitive to epistemic requirements,
through emotional feelings and normatively relevant attitudes. Finally, the phenom-
enology of intending is shown to be absent in most mental acts; the motivational struc-
ture of mental acts is, rather, associated with error signals and self-directed doubting.

NOTES
1. I thank Dick Carter for his critical comments on a former version and linguistic help.
I am grateful to Anne Coubray, Pierre Jacob, Anna Loussouarn, Conor McHugh,
Kirk Michaelian, and the members of the Action-Perception-Intentionality-Consciousness
(APIC) seminar for helpful feedback. This research was supported by the
DIVIDNORM ERC senior grant no. 269616.
2. Geach 1957, 1.
3. See Davidson 1980; Brand 1984; Mele 1997; Proust 2001; Peacocke 2007.
4. Words such as “willing” or “trying” are sometimes taken to refer to independent
mental acts. This does not necessarily involve a regress, for although tryings or will-
ings are caused, they don’t have to be caused in turn by antecedent tryings or willings.
See Locke [1689] 2006, vol. 2, §30, 250; Proust 2001; Peacocke 2007.
5. See Williams 1973, 136–151.
6. Obviously, one can try to remember a proper name under a description, such as
“John’s spouse.” But this does not allow one to say that the content of one’s memory
is prespecified in one’s intention to remember: one cannot decide to remember that
the name of John’s spouse is Mary.
7. The difference between a bad plan and an illusory attempt at planning is that, in the
first case, the subject is sensitive to the associated normative requirements, but fails
to abide by them, while, in the second, the subject fails to be sensitive to them.
8. Dretske 2000, 250. See Christine Korsgaard (1997) for a similar view, where norma-
tivity in instrumental reasoning is derived from the intention to bring about a given
end.
9. Broome 1999, 8. See also Broome 2001.
10. The case of an elderly man accepting that he has a low chance of cancer, to avoid being
upset, discussed in Papineau (1999, 24), is a case of acceptance aiming at emotional
control; there is no problem accommodating this case within a normative require-
ment framework: the norm in this case constitutes a mental act of emotional control;
it requires the acceptance to be coherent with the emotional outcome and relevant to it.
11. See Evans 1990.
12. See, in particular, Kelley and Jacoby 1989; Kelley and Lindsay 1993; Whittlesea
1993.
13. For a review, see Smith et al. 2003.
14. Koriat 2000; Proust 2007; Hookway 2003, 2008; De Sousa 2009.
15. It is an empirical project to identify the informational sources that are subpersonally
involved in generating embodied epistemic feelings (Koriat 2000). Temporal cues,
having to do with the onset and swiftness of processing, as well as the overall dynamic
pattern of the mental episode, must also contribute to forming a global impression,
which reliably correlates with an epistemically calibrated outcome.
16. See Damasio et al. 1996.
17. Elgin 2008.
18. See, e.g., Strawson 2003; Mele 2009; Dorsch 2009; Carruthers 2009.
19. See Mele 2009, 29, for a similar argument.
20. See Strawson 2003.
21. See Ryle 1949 for a general presentation of this argument, and Proust 2001 for a
response.
22. See Davidson 1980; Brand 1984.
23. Cf. Searle 1983.
24. Cf. Campbell 1999; Gallagher 2000.
25. See Wolpert et al. 2001. For an extension to mental action, see Feinberg 1978.
26. See Hampton 2001; Smith et al. 2003. For a skeptical analysis of this evidence, see
Carruthers 2008.
27. See Krams et al. 1998.

REFERENCES
Aristotle. 2006. Metaphysics Theta. Edited by S. Makin. Oxford: Clarendon Press.
Brand, M. 1984. Intending and acting. Cambridge, MA: MIT Press.
Broome, J. 1999. Normative requirements. Ratio, 12, 398–419.
Broome, J. 2001. Are intentions reasons? And how should we cope with incommensu-
rable values? In C. Morris and A. Ripstein (eds.), Practical rationality and preference:
Essays for David Gauthier, 98–120. Cambridge: Cambridge University Press.
Campbell, J. 1999. Schizophrenia, the space of reasons, and thinking as a motor process.
Monist, 82, 609–625.
Carruthers, P. 2008. Meta-cognition in animals: A skeptical look. Mind and Language, 23, 58–89.
Carruthers, P. 2009. Action-awareness and the active mind. Philosophical Papers, 38,
133–156.
Damasio, A. R., Everitt, B. J., and Bishop, D. 1996. The somatic marker hypothesis and the possible functions of the prefrontal cortex [and discussion]. Philosophical Transactions: Biological Sciences, 351 (1346), 1413–1420.
Davidson, D. 1980. Essays on actions and events. Oxford: Oxford University Press.
Dorsch, F. 2009. Judging and the scope of mental agency. In L. O'Brien and M. Soteriou (eds.), Mental actions, 38–71. Oxford: Oxford University Press.
Dretske, F. 2000. Norms, history and the constitution of the mental. In Perception, knowl-
edge and belief: Selected essays, 242–258. Cambridge: Cambridge University Press.
Elgin, C. Z. 2008. Emotion and understanding. In G. Brun, U. Doguoglu, and D. Kuentzle
(eds.), Epistemology and emotions, 33–50. Aldershot, Hampshire: Ashgate.
Evans, J. 1990. Bias in human reasoning: Causes and consequences. London: Psychology
Press.
Feinberg, I. 1978. Efference copy and corollary discharge: Implications for thinking and its disorders. Schizophrenia Bulletin, 4, 636–640.
Gallagher, S. 2000. Self-reference and schizophrenia. In D. Zahavi (ed.), Exploring the self,
203–239. Amsterdam: John Benjamins.
Geach, P. 1957. Mental acts: Their content and their objects. London: Routledge and Kegan
Paul.
Hampton, R. R. 2001. Rhesus monkeys know when they remember. Proceedings of the National Academy of Sciences U.S.A., 98, 5359–5362.
Hookway, C. 2003. Affective states and epistemic immediacy. Metaphilosophy, 34, 78–96.
Reprinted in M. Brady and D. Pritchard (eds.), Moral and epistemic virtues, 75–92.
Oxford: Blackwell, 2003.
Hookway, C. 2008. Epistemic immediacy, doubt and anxiety: On a role for affective states
in epistemic evaluation. In G. Brun, U. Doguoglu, and D. Kuentzle (eds.), Epistemology
and Emotions, 51–66. Aldershot, Hampshire: Ashgate.
Kelley, C. M., and Jacoby, L. L. 1998. Subjective reports and process dissociation: Fluency, knowing, and feeling. Acta Psychologica, 98, 127–140.
Kelley, C. M., and Lindsay, D. S. 1993. Remembering mistaken for knowing: Ease of
retrieval as a basis for confidence in answers to general knowledge questions. Journal of
Memory and Language, 32, 1–24.
Koriat, A. 2000. The feeling of knowing: Some metatheoretical implications for consciousness and control. Consciousness and Cognition, 9, 149–171.
Korsgaard, C. 1997. The normativity of instrumental reason. In G. Cullity and B. Gaut
(eds.), Ethics and practical reason, 215–254. Oxford: Clarendon Press.
Krams, M., Rushworth, M. F. S., Deiber, M.-P., Frackowiak, R. S. J. and Passingham, R. E.
1998. The preparation, execution and suppression of copied movements in the human
brain. Experimental Brain Research, 120, 386–398.
Locke, J. [1689] 2006. An essay concerning human understanding. 2 vols. London: Elibron
Classics.
McCann, H. 1974. Volition and basic action. Philosophical Review, 83, 451–473.
Mele, A. R. 1997. Agency and mental action. Philosophical Perspectives, 11, 231–249.
Mele, A. R. 2009. Mental action: A case study. In L. O'Brien and M. Soteriou (eds.), Mental actions, 17–37. Oxford: Oxford University Press.
Papineau, D. 1999. Normativity and judgment. Proceedings of the Aristotelian Society,
Supplementary Volumes, 73, 17–43.
Peacocke, C. 2007. Mental action and self-awareness (I). In J. Cohen and B. McLaughlin
(eds.), Contemporary debates in the philosophy of mind, 358–376. Oxford: Blackwell.
Proust, J. 2001. A plea for mental acts. Synthese, 129, 105–128.
Proust, J. 2007. Metacognition and metarepresentation: Is a self-directed theory of mind a precondition for metacognition? Synthese, 159, 271–295.
Proust, J. 2009. The representational basis of brute metacognition: A proposal. In R. Lurz
(ed.), Philosophy of animal minds: New essays on animal thought and consciousness, 165–
183. Cambridge: Cambridge University Press.
Ryle, G. 1949. The concept of mind. London: Hutchinson.
Searle, J. R. 1983. Intentionality: An essay in the philosophy of mind. Cambridge: Cambridge University Press.
Smith, J. D., Shields, W. E., and Washburn, D. A. 2003. The comparative psychology
of uncertainty monitoring and metacognition. Behavioral and Brain Sciences, 26,
317–373.
Sousa, R. (de). 2009. Epistemic feelings. Mind and Matter, 7, 139–161.
Strawson, G. 2003. Mental ballistics or the involuntariness of spontaneity. Proceedings of the Aristotelian Society, 103, 227–256.
Whittlesea, B. W. A. 1993. Illusions of familiarity. Journal of Experimental Psychology: Learning, Memory and Cognition, 19, 1235–1253.
Williams, B. 1973. Deciding to believe. In Problems of the self, 136–151. Cambridge: Cambridge University Press.
Wolpert, D. M., Ghahramani, Z., and Flanagan, J. R. 2001. Perspectives and problems in
motor learning. Trends in Cognitive Sciences, 5, 487–494.
PART FOUR

Decomposed Accounts of the Will
15

Managerial Control and Free Mental Agency

TILLMANN VIERKANT

In this chapter it is argued that insights from recent literature on mental agency
can help us to better understand what it is that makes us free agents. According to
Pamela Hieronymi (2009), who developed Richard Moran’s (2001) work on men-
tal agency, there are two quite different ways in which we can be mental agents—
either through “evaluative control” or through “managerial control.” According to
Hieronymi, managerial control works very much like other forms of intentional
action, whereas evaluative control is different and distinctive of mental agency. The
first section of this chapter will discuss why the distinction introduced by Hieronymi
is a good one, and will then go on to argue that Hieronymi nevertheless underes-
timates the importance of managerial control. This is because, as the chapter will
argue, managerial control is central to free mental agency.
The chapter argues that managerial control is crucial for the will not because it
enhances our understanding of our reasons, as one might easily assume, but because
it creates an opportunity for the individual to change their beliefs and desires at will
despite their own first-order rational evaluations. The discussion of the distinction
between evaluative and managerial/manipulative control in Hieronymi will help
us to see that there is no such thing as intentional rational evaluation, and what the
intentional control of the mental is really good for. The last section of the chapter
then tries to clarify what exactly is required for managerial control in order for it to fulfill its function for the will, and how this account compares to seemingly similar
moves made by Michael Bratman and Richard Holton.

MENTAL ACTIONS
Hieronymi (2009) has argued that hierarchical accounts of mental agency fail to
take into account what is at the heart of mental agency. This is because, according
to Hieronymi, there are two distinct forms of mental agency. One, which she refers
to as managerial/manipulative control,1 works very much like bodily agency. The
other form is referred to as evaluative control and lacks some of the most important
standard features of ordinary actions. Hieronymi nevertheless thinks that it is evaluative control that is the more fundamental form of mental agency.
To understand why she thinks this, let us have a short look at the two different
forms of control. As stated, acts of managerial control work like ordinary actions.
The agent forms an intention about an object in the world, and this intention is
involved in bringing it about that the object is manipulated in a way that expresses
the content of the intention. One crucial feature of action so understood is the
reflective distance between the agent and the object of the action. In managerial
control, we manipulate mental objects in exactly the same way as we would physical
objects in bodily actions. We form an intention about a mental state that we want to
manipulate, and our intention helps to bring it about that the relevant manipulation
takes place.
But even though this form of mental agency clearly does exist, many have doubted
that most of our mental activity works according to this model (see, e.g., Moran
2001; McGeer 2007; Hieronymi 2009; McHugh 2011). It seems that, for example,
the forming of a judgment or an intention does not normally work according to this
model.
It seems absurd to think that we always need an intention with the content “form
an intention" in order to form one. Equally strange is the notion that we should normally have to have intentions to form a judgment. The formation of judg-
ments and intentions simply happens as a result of our ongoing evaluation of the
world. We deliberate about what would be the best thing to do or what the facts
of the matter are. As a result of this activity, we come to conclusions that can take
the form of intentions and judgments, but our deliberations were not about mental
attitudes,2 they were about the world. We deliberated, for example, about whether
we should go to the football match, or whether or not our football team would win
the Champion’s League.
In fact, in many cases there seems to be something bordering on the incoherent
about the idea that we should acquire a belief or an intention not through content-directed deliberation, but by forming an intention that is directed at acquir-
ing an attitude.
This can be easily demonstrated, if one considers the following: if it were up to us
whether or not we believe a proposition, then it should be quite possible for us to
decide to acquire a false belief.
Famously, Moore points out that there is something very odd about this. Consider
the sentence, “I believe that p, but p is false.” This sentence sounds paradoxical
because whether or not a person believes something is not normally dependent on
whether the person forms the intention to have that belief but on the plausibility of
its content. Similarly for intentions, Kavka’s famous toxin puzzle seems to show that
acquiring a desirable intention simply for the intention’s sake seems impossible, if
the agent knows that she will have no reason to actually act on the intention. If, for
example, you are offered €10,000 for having the intention at midnight to drink a
mildly unpleasant toxin the next day, but you know that you will not actually have
to drink it in order to receive the money, then it becomes puzzling to understand
how you can acquire the intention to drink the toxin, given that you know that you
will have no motivation to drink it after midnight has passed. It seems, then, that
normally at least we do not acquire attitudes like beliefs or intentions in the same
way as we achieve our aims in bodily actions, because there does not seem to be the
same element of intentional control or the reflective distance typical of that kind of
control.
Obviously, however, there are nonstandard cases that complicate the picture.
Kavka carefully rules out hypnosis in order to get the puzzle going, and in the
belief case there is among many others the famous example of Pascal’s wager. Pascal
argued that it is rational to acquire the belief that God exists, even if there is little
evidence for that belief. Faced with Moore’s paradox, Pascal advises that one can
still acquire the belief by going to mass, praying rosaries, and the like. What Pascal
advises here in effect is basically a form of self-conditioning. So, it is quite possible
to acquire mental attitudes for their own sake. Nevertheless, when an agent does
acquire an attitude in a managerial way, she bypasses her own rationality, and her
mind becomes a simple psychological object to manipulate. This is exactly what we
should expect, if mental agency is modeled on bodily agency, but it seems clear that
this is not the ordinary way of acquiring beliefs or intentions.
Because of this, and because of the very strong intuition that deliberating is some-
thing that we do rather than something that merely happens to us, Hieronymi argues
that we should introduce a second form of mental agency that better describes the
deliberative process, even if it fails to exhibit many of the characteristics that we
ordinarily associate with agency. This is why she introduces evaluative control.
Hieronymi has a positive and a negative reason for insisting that evaluative control
really is a form of agency. The positive reason is that evaluative control is nothing
else than the agent’s rational machinery in action. The reflective distance that is so
important for bodily action simply does not seem adequate when talking about the
activity of the mind. Our deliberations are not something external to us, but express
our understanding of the world. When we want to find out whether we judge that
p we do not introspect to find our judgment there, but we look at the evidence for
or against p.
The negative reason Hieronymi gives is that there simply is no alternative adequate
account that would allow us to understand most of our judgings and intendings as
actions. This is because, even though she acknowledges that there is a second form
of mental agency—that is, the managerial control mentioned earlier—she does not
believe that this managerial control can explain most of our judgings or intendings,
nor does she believe that managerial control is in any case a completely indepen-
dent form of mental agency. It is easy to see why: managerial control, like ordinary
bodily action, requires an intention that can help to bring about the desired effect
in the world. In managerial control the relevant intention would have as its con-
tent an intention or judgment that the agent would like to acquire. The problem is
obviously that the intention that is controlling the managerial act is itself in need of
being formed. Even in the highly unlikely case where the formation of this intention
was also done in a managerial way, we have obviously entered a vicious circle. In the
end, there will have to be an intention that has been brought about without the use
of a previous intention to acquire the intention, and this intention will presumably
be acquired by an act of evaluative control. In effect, then, every instance of manage-
rial control will require at the very least one instance of evaluative control to get off
the ground.3 The agent has to form the intention evaluatively to bring it about that
she will acquire the relevant attitude. Pascal, for example, has to evaluatively acquire
the judgment that it would be best, all things considered, to have the belief in God.
Similarly, the agent in the toxin puzzle has to judge evaluatively that she should hyp-
notize herself in order to acquire the intention to drink the toxin.
Evaluative control, then, Hieronymi concludes, is the basic form of mental
agency. It is the way in which we ordinarily acquire attitudes like beliefs and inten-
tions, and we should not be worried about this because in contrast to managerial
control, in evaluative control we express ourselves as rational deliberating beings.

REFLECTIVE CONTROL
Evaluative control is indispensable, because on Hieronymi’s picture there is no
alternative account of mental agency that could fulfill the functions of evaluative
control and have at the same time the same features as ordinary bodily agency. This
claim, one might think, is wrong, though: there seems to be an alternative, which
she labels reflective control. It seems to be quite possible to intentionally reflect and
thereby change mental attitudes. An agent might, for example, think about placing a
bet on her team winning the Champions League. She might then form the intention
that she should reexamine her reasons for her belief. It seems that she can now easily
and fully intentionally think about the individual reasons she has for her belief. For
example, are the forwards really that strong? Are the opponents really vulnerable on
the left wing? She might, again fully intentionally, go through a list of things that are
important for success and of which she might not have thought so far (proneness
to injury, whether the referee likes the aggressive style of her team, etc.). Now the
great thing is that even though it seems that these are things that an agent clearly
can do intentionally, they also seem to exhibit the same characteristics as evalu-
ative control. If the agent were to find upon reflection that her team was not quite
as strong as she originally thought, then she would change her belief that the team
will win the Champion’s League. It does not seem that this has happened in a way
that bypasses rationality as in the managerial scenario, but that reflection makes the
agent more rational because she now does not purely deliberate about the content
of her belief, but explicitly about the question whether or not the attitude in ques-
tion is justified.
But even though reflective control does seem very tempting, there is an obvious
problem. How exactly does reflection bring about a change in attitude? Obviously,
we can intentionally reflect on our reasons for a specific belief, but whether or not
this reflection will bring about a change in attitude depends on our rational evalua-
tion of the reasons, and it is not up to us (in the intentional sense) how that evalu-
ation goes. Hieronymi therefore suspects that at the heart of reflective control we
will find an exercise in evaluative control, which is doing all the work and which
obviously does not have the intentional characteristics that seemed to make reflec-
tive control so attractive.
So is there a way, then, to make sense of reflective control without falling back on
evaluative control? The most promising account, according to Hieronymi, would
be a hierarchical account. According to such an account, when one reflects on one’s
reasons for a belief that p and finds that those reasons are not sufficient, one will
then form a second-order belief that the first-order belief is unjustified. Once one
has this belief, all that needs to be the case for the first-order belief to change is
that the first-order belief is sensitive to that second-order belief. This sounds like a
good account, but Hieronymi’s next move notes an obvious problem. The account
does not give us a story about how it is that the second-order belief will change the
first-order belief. Now, once we look closer at how this could happen, it becomes
clear that this will not happen in a way that resembles intentional control.
Let’s take stock of what we discussed so far. We followed Hieronymi’s convincing
defense of evaluative control as a specific form of mental agency that is importantly
different from ordinary bodily agency. We saw as well that there is a second form
of mental agency (managerial control), and that this form of mental agency can
be modeled successfully on ordinary intentional actions. However, as Hieronymi
pointed out, managerial control requires at the very least one instance of evaluative
control in order to get off the ground and is therefore not a completely indepen-
dent alternative to evaluative control. We then wondered whether reflective control,
understood as a higher-order account of mental agency, could not fulfill the same
function as evaluative control, while at the same time being a form of intentional
control. A closer look revealed that reflective control necessarily has at its heart acts
of evaluative control, and that these obviously cannot be modeled as intentional
control.

MANAGERIAL CONTROL AND SELF-CONTROL


The hierarchical account of free mental agency seemed attractive because it pro-
vided us with a way of combining deliberative mental agency with intentional
mental agency, but as Hieronymi convincingly argues, this combination does not
work. The deliberative part in higher-order accounts looks on all consistent mod-
els very much like evaluative control. If one is convinced by this line of reasoning,
higher-order accounts do lose their intuitive appeal as being able to combine ratio-
nal with intentional control. One obvious consequence one could draw from this
would be to abandon such higher-order accounts and to focus instead on develop-
ing a fuller picture of mental agency along the lines of evaluative control.
Alternatively, one could pursue the idea that perhaps higher-order accounts of
free mental agency are correct after all, but that this is not related to the alleged
features of reflective control. This is the direction this chapter will pursue. The
idea starts from the thought that perhaps higher-order beliefs and desires do not
make the agent more rational but rather allow her valuable room for maneuver with
regard to her own evaluative processes. Perhaps this additional wiggle room is so
important that it makes sense to argue that the ability to manipulate one’s mind is a
necessary condition for free mental agency.
That managerial control has its uses has already been discussed. Only by means
of managerial control can Pascal overcome his skepticism, and the toxin puzzle does
not seem very puzzling any longer as soon as we allow the agent to exercise manage-
rial control over herself. But admitting that managerial control can be useful is obvi-
ously by far not enough to justify the much stronger claim that managerial control
is a necessary condition for free mental agency. Two immediate objections might
be raised here.
First of all, these are rather contrived examples. It is obviously true that agents
can manipulate themselves in the way the examples suggest, but the overwhelm-
ing majority of our mental acts are not like that. We do not normally think about
how we can influence our attitudes, but we think about first-order content. We are
interested in what is true about our world and the best thing to do in the world
we live in. A focus on our own attitudes sounds as unlikely as it sounds strangely
narcissistic.
Second, even if one were convinced of the importance of managerial control, it
would still be the case that managerial control is always parasitic on at least one act
of evaluative control. Pascal goes to mass because he evaluated that this is the best
thing to do if he wants to acquire the desired belief, and the agent in the toxin puzzle
evaluatively forms the intention to hypnotize herself to get the cash.
Let us look at this second objection first. If it really were the case that the pro-
posed account suggested that managerial control was a completely independent
form of mental agency, then this would be a knock-down argument, but this is not
what the claim amounts to. Rather, the account accepts that the most basic form of
mental agency is evaluative control, but it wants to add that evaluative control on its
own is not enough for free mental agency.
Let us now move on to the first objection. In answering this objection, we will also encounter the central argument for the main claim of the chapter, that is, that
managerial control is necessary for free mental agency. In order to get the answer off
the ground, it will be helpful to have a closer look at an idea that Victoria McGeer
(2007) explores in her essay “The Moral Development of First-Person Authority,”
because her account is in many important ways similar to the one developed here. In
addition, some of the topics McGeer discusses will prepare us for the crucial discus-
sion of what it is that is important about managerial control.
McGeer, in contrast especially to Moran, does think that managerial control is
extremely important for the moral development of an imperfect rational creature.
She identifies two problems for such an agent. On the one hand, there is the prob-
lem of rampant rationalization.4 Agents might be able to rationalize their behavior
even though it is perfectly obvious to a neutral observer that their behavior is actu-
ally controlled by different motives from the ones that they ascribe to themselves.
Second, even if an agent is aware of the right thing to do most of the time, this
does not mean that there cannot be situations where their judgment is changed in
undesirable ways because of the strong affordances of the situation. McGeer dis-
cusses the example of a Middlemarch character who sincerely believes in a marriage
between two friends of his, but who has the problem that he is quite fond of the girl
himself. In order to stop himself from giving in to temptation, the character reveals
his feelings for the girl to his male friend. By confessing his feelings, the agent makes
it impossible for himself to pursue his desire for the girl. In other words, the agent
uses his knowledge about his potentially changing psychology in order to prevent
the feared changes.
Taking McGeer’s musings about her Middlemarch characters as a starting point,
we can now return to the worry that managerial control—even though clearly
useful—is simply not of enough relevance in our lives to justify the strong claim of
making it a necessary condition for free mental agency.
The crucial point here is that even though Pascal and the toxin puzzle describe
very unusual situations, McGeer’s story does not. In effect, McGeer describes a
case of self-control. The character binds himself to the mast in a way that is struc-
turally very similar to the archetype of all self-control stories. Like Odysseus, the
character knows that he cannot be sure that his rational evaluation of the situation
will remain constant if the wrong circumstances should arise. Self-control delivers
the wiggle room mentioned earlier because it allows the agent to keep believing p,
even under circumstances where the agent normally would tend to reevaluate and
believe not-p.
Now if it were the case that managerial control is necessary for us to be able to
exercise future-directed acts of self-control, then it seems very plausible to main-
tain that it is a necessary condition for free mental agency, because I take it to be
uncontroversial that the ability for future-directed self-control is at least a necessary
condition for free mental agency. Before we move on, one very important difference
between the account defended here and McGeer's should be pointed out. McGeer
argues that an ideally rational creature might not need these self-control techniques,
and this is one important point where the accounts differ. Ideal rationality would
not help Pascal or the agent in the toxin puzzle. Even for ideally rational agents, a
knowledge of their own psychology is important, because sometimes mental states
matter to us as states, rather than because of their content.
The most obvious problem with this account is that it seems simply false to say
that future-directed self-control necessarily requires managerial control. The next
two sections will flesh out this worry, first by discussing exactly what ability is
required for managerial control and why this ability might be important for future-
directed self-control, and second by looking at Michael Bratman’s and Richard
Holton’s work on self-control. Because both their accounts do not seem to require
managerial control, the rest of the chapter will then try to justify why it is neverthe-
less necessary.

WHAT EXACTLY DOES MANAGERIAL CONTROL INVOLVE?


In order to answer the question of whether managerial control is necessary for
future-directed self-control, some more has to be said about what managerial con-
trol is. In particular, it is important to point out one ability that I take to be neces-
sary for the intentional control of the mind. This requirement is not discussed in
Hieronymi, but as I will explain now, it seems like a necessary condition in order to
make sense of managerial control. Managerial control requires a theory that allows
one in general to understand that one’s mind is to a degree independent of the actual
state of the world and can be manipulated to represent states even if they do not
obtain. In other words, what is required is a basic understanding of the nature of
representation5 or the ability to metarepresent.6
This basic understanding allows the agent to do two very important things. One,
the agent can now understand that it is possible to acquire a false belief if that is use-
ful for the agent, and two, she can understand that it is possible that she will hold
a false belief in the future, even if she has a true belief about the same matter at the
moment.
Both abilities seem crucial for future-directed self-control. As long as an agent
cannot understand that a belief she currently holds might in the future be considered by her to be false, self-control seems pointless. Only an understand-
ing of the nature of misrepresentation allows an agent to be aware of the need for
self-regulation. A gambler who has a habit of betting large sums of money on her
team to win the Champion’s League might in a quiet moment know that the team
is not really good enough, but whenever she goes past the bookmakers, she might
be overcome by the irrational belief that this year things will be different. Now, as
long as she does not understand that beliefs can change from true to false, she will
not be able to understand that in the moment where she understands that her team
has no chance of winning, she might have to take precautions against acquiring the
false belief again. She will see no need to do anything as she knows at that moment
without any doubt that betting on the team would be the wrong thing to do and
has absolutely no intention of doing so. Only if she thinks not only about the evidence for the belief but also about the belief itself, as a mental attitude that can misrepresent the state of the world, will she realize that beliefs are vulnerable to misleading evidence
or situational effects. Only then can she understand that she has to be worried about
doing the wrong thing in the future, even though she knows what the right thing to
do now is and can take steps to avoid doing the wrong thing.

DOES SELF-CONTROL HAVE TO BE MANAGERIAL AND DOES IT NEED METAREPRESENTATION?
If future-directed self-control really is centrally dependent on managerial manipula-
tions of the mind, which in turn require metarepresentation, then it seems very plausible to argue that the ability to perform these manipulations is
necessary for free mental agency. But is this claim really correct?
Future-directed self-control has received a lot of attention in the philosophi-
cal literature. Probably the most influential player in this literature is Michael
Bratman. According to him, intentions are the mental states designed to facilitate
future-directed self-control (e.g., Bratman 1987).
For Bratman, intentions are all about making yourself more rational in the long
term. They do that by providing extra reasons to do something that seems not attrac-
tive anymore at the moment of temptation. If I form the intention to go to the cin-
ema tonight and on my way there I realize that there is as well an interesting football
match on, then my intention to go to the cinema provides me with an extra reason
to stick with my original plan. Basically, intentions change the evaluative situation
during temptation, and this in turn helps ensure that the right (diachronically con-
sistent) evaluations are made.
Obviously, Bratman’s account is extremely rich, and one cannot even begin to
do justice to it in a short paragraph, but for our purposes here only two things mat-
ter. First, it is not clear that one needs to understand the nature of mental states in
order to form an intention and to commit to it, and so Bratman’s account seems to
be in contrast to the claim defended here that metarepresentation is required for
self-control. Second, on Bratman's account, intentions do their job by influencing the process of rational evaluation. In fact, intentions seem, on this account, in
line with what we discussed in the Hieronymi section, very much part of evalua-
tive control and do not normally involve intentional action. Again, this seems to be
bad news for the claim defended here that future-directed self-control requires
managerial (i.e., intentional) control.

HOLTON ON SELF-CONTROL
One reason to doubt that Bratman’s account of intentions is all that there is to
self-control can be constructed from Richard Holton’s (2009) work on self-control.
According to Holton, forming a resolution7 in order to prepare against future temp-
tations is not about providing new reasons for action, as Bratman's account would
have it (these new reasons are on Bratman’s account nothing more than the inten-
tions themselves), but simply reduces the ability of the agent to take new reasons
into account. It makes the agent in effect less judgment sensitive. Judgment sensi-
tivity here means the ability to reevaluate one’s beliefs if the environment provides
reasons to do so.
Even more important, according to Holton, this making oneself less judgment
sensitive is something that the agent does, and it is clear that what Holton has in
mind here is intentional action rather than evaluative control.
This seems very much in the spirit of the account here. On Holton’s account, as
on the one defended here, following through on one’s resolutions requires the agent
to be able to break free of her natural evaluative tendencies by means of intentional
control of her mind.
Holton’s main argument for his account is the phenomenology of battling temp-
tation. If Bratman’s account were right, it ought to be the case that battling tempta-
tion feels like evaluating two desires and then naturally going with whichever turns
out to be stronger. In reality, though, fighting temptation really does seem to involve
a constant intentional trying to keep one’s mind from reevaluating. One has to be
quite revisionist to deny that the phenomenology of fighting temptation involves intentional tryings.
As the account defended here also insists on the importance of the intentional
control of the mind for self-control, is my account simply a version of Holton’s
view?
The answer to this question is: not at all. Even though, like Holton, this
account does emphasize the role of intentional action for self-control, there is one
decisive difference. On Holton’s account, trying to actively reduce judgment sensi-
tivity does not imply that we form resolutions in order to manipulate our minds.
One might, for example, form the resolution to stop smoking. It seems quite pos-
sible to form this resolution without ever thinking about one’s mental states. In fact,
this seems to be the norm. One will think about the associated health risks and then
vow to not smoke another cigarette ever again. This seems like a very fair point to
make, but does this not fatally undermine the claim defended here that self-control
requires manipulating mental states as states? This is a crucial worry and requires a
new section.

REFLECTIVE CONTROL REVISITED


In the last section, we claimed that Holton is a partial ally for the account developed
here because, like Holton, this account places a lot of emphasis on the difference
between judgment sensitivity and the will. Like Holton, it argues that self-control
is about making yourself immune to being too sensitive to circumstances that could
change your judgment. However, as already pointed out, there are important differ-
ences in the two conceptions as well. The account defended here insists on the abil-
ity to metarepresent in order to be able to self-control in the relevant way. Holton
not only disagrees with the idea that metarepresentation is crucial for self-control
but also fundamentally disagrees with the idea that self-control by means of
self-manipulation has anything to do with the will at all.
Holton is very impressed by the work done in psychology, especially by
Baumeister (2008), according to whom the will is like a mental muscle. Holton
thinks that willpower is something active and that the tying-to-the-mast strategies
that were discussed here, though clearly useful in self-control, are not exercises of
the special mental organ that is the will. This disagreement is especially interesting
because, again, it looks as if the exercise of the mental muscle might be a form of
mental agency that does not sit easily with the either-or distinction between evalu-
ative control and managerial control. The crucial idea behind Holton’s account is
that controlling impulses to act by means of deliberating about the right way to act
is an effortful activity. This is very much what Baumeister and others have found
in many empirical studies, and it is intuitively plausible. But does that not mean
that agents can reflectively control their behavior? It seems right that deliberation
does contain intentional tryings to remember, to attend to, and so on. Does that not
mean that reflective control of the mind exists, in contrast to what we claimed in line
with Hieronymi? Hieronymi’s answer to that kind of scenario was that it is still the
case that the actual judgings and intendings will be evaluative and not intentionally
controlled. Once she had established the need for evaluative control, she happily
admitted that the rest was down to managerial control.
But there is a problem lurking here because, as we have seen with Holton, it does
not seem true that these intentional doings are always directed explicitly at mental
attitudes. To be sure, there are cases where management is clearly attitude-directed.
The agent might tell herself: concentrate or think of the reasons you had for your
decision, or try to remember, and so on.
But as described in the Holton scenario, it does not seem to be the case that an
agent, when trying to convince herself, will always think about mental states in cases
that seem very similar. An agent who wants to stop smoking might not try explicitly
to evoke specific mental states. She might, for example, try to focus her attention on
reasons for not smoking, but she will not think of these as psychological states. She
might think: Why exactly did I want to stop, were there not health reasons?, and so
on. What is more, this kind of thing seems ubiquitous.8
This shows that there is a very important distinction to be made within what
Hieronymi calls managerial control. There are managerial actions that are
attitude-directed in the sense that the agent is treating the attitude as an object,
and there are managerial actions where the agent is bringing about a change in the
attitude by changing the conditions under which her evaluative processes are taking
place, but without an attitude-directed intention. In such cases the agent is obvi-
ously not interested in acquiring the specific attitude for the attitude’s sake, but does
know that certain intentional behaviors have desirable first-order effects.9 These
effects can be things, like getting the bigger reward, not smoking, and so on. The
agent does not have to know that these effects are obtained by means of the acquisi-
tion of a mental state.10
For Hieronymi’s purposes, this distinction might not be crucial because she is
mainly interested in showing that evaluative control is a specific form of mental
agency, and it is certainly true that the intentional part of this—unaware manage-
rial control—would not bring about any change in attitude without the evaluative
component.
But the distinction matters here. It matters because it is necessary in order to clar-
ify the claim made in this chapter. The form of managerial control we are interested
in has to be one where Hieronymi’s statement that we are intentionally manipulat-
ing mental attitudes like ordinary objects is literally true, because only once this is
the case will the agent be able to understand that attitudes can be false, can change
over time, and so on—and, as we argued earlier, these are necessary elements of
self-control in the sense that we are after. So, if self-control can be exercised by
means of unaware managerial control, then our claim that the intentional targeting
of attitudes is a necessary condition for self-control collapses.
In addition, once we have introduced this distinction, we also obtain an expla-
nation of where exactly the difference between Holton and Baumeister and the
position defended here lies. Holton and Baumeister argue that willing is intentional
and effortful, but the scenarios they describe are clearly not ones where subjects are
manipulating their attitudes. As mentioned earlier, bringing about a mental state that
will easily allow you to master the self-control task on Holton’s model is not about
the will at all, because as soon as the manipulation is successful, the characteristics
of effort and depletion will vanish. It seems clear that Holton and Baumeister, in the
terminology used here, think of the will mainly as a form of unaware managerial con-
trol. In these cases, subjects are trying intentionally to evaluate a first-order proposi-
tion in a specific way. Obviously, as Hieronymi told us, that is impossible; you can
intentionally focus on or repeat reasons for a specific action, but you cannot intention-
ally evaluate. Attempting to do it does, however, have an attitude-directed effect.
It can help to bring it about that the agent will evaluate the situation differently.11
It does this not by bringing new material to the evaluation but by changing the eval-
uator (e.g., by making it less interested in new evidence, as in Holton’s scenario of
self-control).12
However, the agent in this scenario is not aware of what it is that they are doing.
And that means that such tools are much less flexible and effective than the tools
used in aware managerial control. If that is the right way to understand such
acts of behavior control, then in one sense they are simply less sophisticated versions
of real self-control. They achieve their aims by changing an attitude, rather than by
providing new evidence for content evaluations. However, they are obviously not
intentionally directed at the attitude itself. If that is right, then it seems implausible to
exclude the more sophisticated versions from the will and to describe them as mere
tricks. On the other hand, however, there obviously is a major difference between
the two mental tools. Obviously, once you understand what it is that you are doing,
the level of control and flexibility is many times higher than before, and that is why
the claim is justified that this very general form of theoretical self-awareness is nec-
essary for free mental agency, while the ability to control behavior with the Holton
tool on its own is not good enough.
Finally, one common objection to seeing self-control by manipulation, in contrast to behavioral control by sheer effort, as part of the will has to be discussed here.
This objection states that effortful control is active, while in manipulation the agent
gives up control and becomes passive. As soon as the manipulation is successful, the
agent cannot go back. On closer examination, this is really quite a weak argument.
On the one hand, it is obviously not true that self-manipulations cannot be revers-
ible or conditional, and on the other, control by sheer effort obviously does make
the agent more passive, in the sense that she will be less judgment sensitive to good
reasons as well as to temptations. Both forms of control are about introducing a cer-
tain element of passivity—that is in fact the very point of them. How durable that
intentionally introduced passivity should be depends obviously on the situation,
but it is again true that understanding managerial control as just that will help to
optimize strategies. Once we see this, the argument is now turned on its head. Once
the agent understands what it is she is doing, it will be much easier
to calibrate the right mixture between flexibility and rigidity. Once again, it makes
sense to argue that only aware managerial control is good enough for the kind of
self-control that is intuitively a necessary condition for free mental agency.13

SUMMING UP, FURTHER SUPPORT FOR THIS POSITION, AND OUTLOOK
The chapter started off by accepting Hieronymi’s argument that at heart mental
agency is not something voluntary but a different form of agency. In line with her
account, this form of agency was labeled evaluative control. The chapter also agreed that the alternative of reflective control, which combines the features of evalu-
ative control and managerial control, does not work because reflective control can
be broken up into the two distinct elements of managerial and evaluative control.
The chapter disagreed with Hieronymi, however, in arguing that managerial control
is at the heart of free mental agency nevertheless. The argument for this was that
only managerial control allows the agent to become free from her own evaluations
of the world and to begin to construct the kind of psychology that she might think
desirable. It was claimed that this form of managerial control requires the ability to
metarepresent.
The last couple of sections then clarified why this account is different from
other accounts of self-control, especially Holton’s view. The crucial point here was
that Holton’s form of self-control is not really intentional control of the mind at
all, because either we understand it as intentional evaluation, which is nothing
other than the reflective control shown to be impossible earlier, or it is really just behavioral control that has unintended side effects in the mind of the agent. So,
if Holton’s idea is right—namely, that intentional control of the mind is crucial for
self-control—then it was argued that the only way to achieve this coherently is to
put forward the account defended here.
The chapter concentrated on presenting the main idea behind the account and
discussed some necessary clarifications and obvious objections, but there are many
more things that one could add in favor of the account. Here is a loose collection
of them.
If the account is right, it would give us an explanation for why free agency is some-
thing that we intuitively think only humans can do. As yet there seems to be no clear
evidence that any species other than humans is able to metarepresent—and
metarepresentation is a necessary condition for aware managerial control.
The account also has a story to tell about what the function of making people
responsible for their mental states might be. It is true, we do tell our criminals that
they should understand the error of their ways, but this has always been a big ask.
Philosophers and most ordinary people struggle to find a foolproof rational way
of arguing that doing or being good is also being rational. So why do we think that
criminals should be able to do it? However, what has a chance of succeeding is an
exercise in managerial attitude acquisition, which helps the potential reoffender to
overcome her reasoning, which had seemed previously to make the offense rational
for her. Aware managerial control is something that we can teach people to do.
Interestingly, it is a sociological fact that the genre of books that is supposed to
help people to exercise self-control is already one of the biggest sellers on the mar-
ket.14 Many people look down on the self-help genre, but many more swear by it.
This is not that surprising actually, because the advice given in these books maps
quite nicely onto the findings in serious cognitive science labs like Gollwitzer's, that
is, it works by helping people to exercise managerial control.
Finally, the account has some interesting consequences. It was argued that self-
blindness is not a problem for the account, as long as the agent understands the
nature of representation, but obviously, new knowledge in the sciences does allow
us to be far more effective in this form of mind and self-creation. This has already led to enormous changes in the way we manipulate our minds, for example, psychoactive pharmacology or cognitive behavioral therapy. In this respect, the account is in the
end about breaking down the boundary between forms of self-control that are sup-
posed to be internal to the agent, like the mental muscle phenomena that Holton
and Baumeister describe, and the use of external scaffolding that humans use to aid
their self-control. This chapter shows that both forms use the same mechanism and
that, if anything, the aware use of external scaffolding is a more sophisticated form
of the will than the simple straining of the supposed mental muscle.15

NOTES
1. Managerial and manipulative control differ only insofar as in managerial control the
agent influences the environment in such a way that a normal evaluative process
brings about the desired result, whereas in manipulative control the bringing about of
the judgment does not depend on a normal functioning of the evaluative machinery.
From here on, I will label both forms managerial.
2. They were not looking under the hood, in Moran’s apt phrase (Moran 2001).
3. Hieronymi argues that actually two acts of evaluative control are required. The sec-
ond act consists in the evaluation that the intentionally brought about circumstances
cause. This seems plausible enough for managerial control (see distinction in note 1), but there is an ambiguity here for manipulative control, where the bringing about
of the attitude does not seem to necessarily require evaluative control at this stage.
Imagine, e.g., that the belief is surgically implanted. It is not clear that in such a sce-
nario there has to be initially a second act of evaluation.
4. I owe this term to Andreas Paraskevaides.
5. I.e., the ability to pass the false belief task. See Perner 1993.
6. This is obviously most fitting for belief, but arguably it works for intention as well.
Intentions always contain a judgment about what is the best thing to do, and obvi-
ously this judgment can go wrong. Understanding this is crucial if one wants to
implant an intention for an intention’s sake, rather than for the sake of its content.
7. Holton’s term for an intention formed in order to ensure that one sticks to one’s plans
in the face of temptation.
8. Even so, the ability to intentionally guide deliberation is no mean feat. In fact,
there is good reason to think that this controlled deliberation is what gives humans
a form of thought regulation that other animals do not have. However, it is still true
that this form of controlled thinking does not require metarepresentation. I discuss
the role of intentionally controlled deliberation in detail in a forthcoming paper
(Vierkant 2012).
9. Unaware managerial self-control is itself a very broad term. In one sense, it includes
most intentional behaviors that there are, because most intentional behaviors have
consequences for the mental states of the agents. However, there are some forms of
unaware managerial control that are far more sophisticated and effective in control-
ling minds as a side effect than others. There is no room to elaborate on the various
forms of unaware managerial control here, but I do develop this point in Vierkant and Paraskevaides (2012).
10. There is no space here to expand on this distinction, but it would seem to be a worth-
while undertaking. There has been a very lively debate on which mental actions
can be performed intentionally (e.g., Strawson 2003; Pettit 2007). In most of these
debates, however, it is presumed that we know what it is that we are doing when we
manage our attitudes or bring it about that we have better conditions for our evalua-
tions. Obviously, this knowledge is theoretically available in humans, but it is far less
clear whether it plays a role in many of these managerial acts. In fact, it seems not that
unlikely that the intuition of reflective control is created exactly by the fact that very
many managerial acts are not understood as such by the agent.
11. This very short sketch of unaware managerial control only scratches the surface of a
huge fascinating field of cognitive science. There are probably many stages on the way
to making an animal aware of its own mentality.
12. There is an interesting link here to the discussion on metacognition in animals. See,
e.g., Smith et al. 2003.
13. How knowledge of our psychology could enable us to optimize self-control can be
seen in the chapter by Hall and Johansson, this volume.
14. For an interesting analysis of the self-help genre as the contemporary form of talking
about the will, see Maasen et al. 2008.
15. For a way of fleshing out this idea of how we could use external scaffolding to support
the will, see Hall and Johansson, this volume.

REFERENCES
Baumeister, R. (2008). Free will, consciousness and cultural animals. In J. Baer (ed.), Are
we free, 65–85. New York: Oxford University Press.
Bratman, M. (1987). Intention, plans, and practical reason. Cambridge, MA: Harvard
University Press.
Faude-Koivisto, T., Würz, D., and Gollwitzer, P. M. (2009). Implementation intentions:
The mental representations and cognitive procedures of IF-THEN planning. In W.
Klein and K. Markman (eds.), The handbook of imagination and mental simulation, 69–
86. New York: Guilford Press.
Hieronymi, P. (2009). Two kinds of mental agency. In L. O'Brien and M. Soteriou (eds.), Mental
actions, 138–162. Oxford: Oxford University Press.
Holton, R. (2009). Willing, wanting, waiting. New York: Oxford University Press.
Maasen, S., Sutter, B., and Duttweiler, S. (2008). Wille und Gesellschaft oder ist der Wille
ein soziales Phaenomen. In T. Vierkant (ed.), Willenshandlungen, 136–169. Frankfurt:
Suhrkamp.
McGeer, V. (2007). The moral development of first-person authority. European Journal of
Philosophy 16: 81–108.
McHugh, C. (2011). Judging as a non-voluntary action. Philosophical Studies 152:
245–269.
Moran, R. (2001). Authority and estrangement: An essay on self-knowledge. Princeton, NJ:
Princeton University Press.
Perner, J. (1993). Understanding the representational mind. Cambridge, MA: MIT Press.
Pettit, P. (2007). Neuroscience and agent control. In D. Ross (ed.), Distributed cognition
and the will, 77–91. Cambridge, MA: MIT Press.
Smith, J. D., Shields, W. E., and Washburn, D. (2003). The comparative psychology
of uncertainty monitoring and metacognition. Behavioral and Brain Sciences 26:
317–373.
Strawson, G. (2003). Mental ballistics or the involuntariness of spontaneity. Proceedings of
the Aristotelian Society 103: 227–257.
Vierkant, T. (2012). What metarepresentation is for. In J. Brandl, J. Perner, and J. Proust
(eds.), Foundations of metacognition. Oxford: Oxford University Press.
Vierkant, T., and Paraskevaides, A. (2012). How social is our understanding of minds? In
F. Paglieri and C. Castelfranchi (eds.), Consciousness in interaction, 105–124. Amsterdam:
John Benjamins.
Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: MIT Press.
16

Recomposing the Will


Distributed Motivation and Computer-Mediated Extrospection

LARS HALL, PETTER JOHANSSON, AND DAVID DE LÉON

At the beginning of the novel Do Androids Dream of Electric Sheep? by Philip K.
Dick, we find Rick Deckard and his wife, Iran, in bed arguing over how to dial their
daily mental states on their bedside Penfield mood organs. Deckard has wisely pro-
grammed the organ the night before to awake him in a state of general well-being
and industriousness. Now he is ready to dial for the businesslike professional atti-
tude that his electronic schedule says is needed of him today. Iran, on the other
hand, has awoken to her natural proclivities and just feels irritated about Deckard’s
attempts to persuade her into dialing for a more productive mood. In fact, for today
she has scheduled a full three-hour episode of self-accusatory depression. Deckard
is unable to comprehend why anyone would ever want to willfully schedule for an
episode of depression. Depression would only serve to increase the risk of her not
using the organ at a later stage to dial into a constructive and positive mood. Iran,
however, has reflected further on this dilemma and has programmed the Penfield
for an automatic resetting after three hours. She will face the rest of the day in a state
of “hope and awareness of the manifold possibilities open to her in the future.”
In this short episode of imaginative science fiction it is not difficult to find
examples of many of the most difficult conundrums of human motivation and
self-control. In no small part this is of course due to Philip K. Dick being a very
astute observer of the human condition, but doubtlessly it also reveals the pervasive
nature of these problems in everyday life. Not being equipped with near-magical
instruments of brain stimulation, people adopt all manner of strategies available
to handle the ever so complicated, and in many ways both unnatural and conflict-
ing, motivational demands of modern society. Like Deckard and Iran, how do we
manage to get ourselves into the “businesslike professional attitude” that is required
of us, if all we really want to do is stay in bed? Or, to up the ante, what effective,
long-term means do we have to follow through on venerable goals like dieting or
quitting smoking, or on general desires like becoming a more creative and lovable
person? One class of answers to these questions rings particularly empty; those
are the ones that in one way or another simply say, “just do it”—by acts of will, by
showing character, by sheer motivational force, and so forth. These answers are not
empty because it is difficult to find examples of people who suddenly and dramati-
cally alter their most ingrained habits, values, and manners, seemingly without any
other aid than a determined mind. It is, rather, that invoking something like “will”
or “character” to explain these rare feats of mental control does little more than label
them as successes. The interesting question is, rather, what we ordinary folks do
when we decide to set out to pursue some lofty goal—to start exercising on a regu-
lar basis, to finally write that film script, to become a less impulsive and irritable
person—if we cannot just look inside our minds, exercise our “will,” and simply
be done with it. The answer, we believe, is that people cope as best they can with a
heterogeneous collection of culturally evolved and personally discovered strategies,
skills, tools, tricks, and props. We write authoritative lists and schedules, we rely on
push and pull from social companions and family members, we rehearse and mull
and exhort ourselves with linguistic mantras or potent images of success, and we
even set up ceremonial pseudo-contracts (trying in vain to be our own effective
enforcing agencies). Often we put salient markers and tracks in the environment to
remind us of, and hopefully guide us onto, some chosen path, or create elaborate
scenes with manifest ambience designed to evoke the right mood or attitude (like
listening to sound tracks of old Rocky movies before jogging around the block). We
also frequently latch onto role models, seek out formal support groups, try to lock
ourselves into wider institutional arrangements (such as joining a very expensive
tennis club with all its affiliated activities), or even hire personal pep coaches. In
short, we prod, nudge, and twiddle with our fickle minds, and in general try to dis-
tribute our motivation onto stable social and artifactual structures in the world.
In this chapter we trace the self-control dilemma back to its roots in research on
agency and intentionality, and summarize the evidence we have accumulated in our
choice-blindness paradigm for a vision of the mind as radically opaque to the self. In
addition, we provide a range of suggestions for how modern sensor and computing
technology might be of use in scaffolding and augmenting our self-control abilities,
an avenue that, lamentably, has remained largely unexplored. To this end, we intro-
duce two core concepts that we hope may serve an important role in elucidating the
problem of self-control from a modern computing perspective. First, we introduce
the concept of computer-mediated extrospection, which builds and expands on the
familiar idea of self-observation or self-monitoring. Second, we present the idea of
distributed motivation, as a natural extension of previous discussions of precommit-
ment and self-binding in the self-control literature.

LETTING THE INTENTIONS OUT OF THE BOX


For someone who has a few minutes to spare for scrutinizing cognitive science–
oriented flow-chart models of goal-directed behavior in humans, it would not take
long to discover that in the uppermost region of the chart, a big box sits perched
overlooking the flow of action. If the model deals with language, it often goes by the
name of the conceptualizer (Levelt, Roelofs, & Meyer, 1999; Postma, 2000); if the
model deals with action selection in general, it is the box containing the prior inten-
tions (Brown & Pluck, 2000, but see also Koechlin & Summerfield, 2007). The rea-
son that such an all-powerful, all-important homunculus is left so tightly boxed up
in these models might simply be a reflection of our scant knowledge of how “central
cognition” works (e.g., Fodor, 2000), and that the box just serves as a placeholder
for better theories to come. Another more likely possibility is that the researchers
often think that intentions (for action) and meaning (for language) in some very
concrete sense are in the head, and that they constitute basic building blocks for
any serious theory of human behavior. The line of inference is that, just because the
tools of folk psychology (the beliefs, desires, intentions, decisions, etc.) are so use-
ful, there must be corresponding processes in the brain that closely resemble these
tools. In some sense this must of course be true, but the question remains whether
intentions are to be primarily regarded as emanating from deep within the brain, or
best thought of as interactive properties of the whole mind. The first option cor-
responds to what Fodor and Lepore (1993) call intentional realism, and it is within
this framework that one finds the license to leave the prior intentions (or the con-
ceptualizer) intact in its big, comfortable box, and in control of all the important
happenings in the system. The second option sees intentional states as patterns in
the behavior of the whole organism, emerging over time, and in interaction with the
environment (Dennett, 1987, 1991a). Within this perspective, the question of how
our intentional competence is realized in the brain is not settled by an appeal to the
familiar “shape” of folk-psychological explanations. As Dennett (1987) writes:

We would be unwise to model our serious, academic psychology too closely
on these putative illata [concrete entities] of folk theory. We postulate all these
apparent activities and mental processes in order to make sense of the behav-
ior we observe—in order, in fact, to make as much sense as possible of the
behavior, especially when the behavior we observe is our own. . . . each of us
is in most regards a sort of inveterate auto-psychologist, effortlessly inventing
intentional interpretations of our own actions in an inseparable mix of con-
fabulation, retrospective self-justification, and (on occasion, no doubt) good
theorizing. (91, emphasis in original)

Within this framework, every system that can be profitably treated as an intentional
system by the ascription of beliefs, desires, and so forth, also is an intentional system
in the fullest sense (see Westbury & Dennett, 2000; Dennett, 2009). But, impor-
tantly, a belief-desire prediction reveals very little about the underlying, internal
machinery responsible for the behavior. Instead, Dennett (1991b) sees beliefs and
desires as indirect “measurements” of a reality diffused in the behavioral disposi-
tions of the brain/body (if the introspective reports of ordinary people suggest oth-
erwise, we must separate the ideology of folk psychology from the folk-craft: what
we actually do, from what we say and think we do; see Dennett, 1991c).
However, when reading current work on introspection and intentionality, it is
hard to even find traces of the previously mentioned debate on the nature of propo-
sitional attitudes conducted by Dennett and other luminaries like Fodor and the
Churchlands in the 1980s and early 1990s (for a notable recent exception, see
Carruthers, 2009),1 and the comprehensive collections on folk psychology and phi-
losophy of mind from the period (e.g., Bogdan, 1991; Christensen & Turner, 1993)
now only seem to serve as a dire warning about the possible fate of ambitious vol-
umes trying to decompose the will!
What we have now is a situation where “modern” accounts of intentionality
instead are based either on concepts and evidence drawn from the field of motor con-
trol (e.g., emulator/comparator models; see Wolpert & Ghahramani, 2004; Grush,
2004) or are is built almost purely on introspective and phenomenological consider-
ations. This has resulted in a set of successful studies of simple manual actions, such
as pushing buttons or pulling joysticks (e.g., Haggard, Clark, & Kalogeras, 2002;
Moore, Wegner, & Haggard, 2009; Ebert & Wegner, 2010), but it remains unclear
whether this framework can generalize to more complex and long-term activities.
Similarly, from the fount of introspection some interesting conceptual frameworks
for intentionality have been forthcoming (e.g., Pacherie, 2008; Gallagher, 2007;
Pacherie & Haggard, 2010), but with the drawback of introducing a bewildering
array of “senses” and “experiences” that people are supposed to enjoy. For example,
without claiming an exhaustive search, Pacherie’s (2008) survey identifies the fol-
lowing concepts in need of an explanation: “awareness of a goal, awareness of an
intention to act, awareness of initiation of action, awareness of movements, sense of
activity, sense of mental effort, sense of physical effort, sense of control, experience
of authorship, experience of intentionality, experience of purposiveness, experience
of freedom, and experience of mental causation” (180).
While it is hard to make one-to-one mappings of these “senses” to the previous
discussion of intentional realism, the framework of Dennett entails a thorough
skepticism about the deliverances of introspection, and if we essentially come to
know our minds by applying the intentional stance toward ourselves (i.e., finding
out what we think and what we want by interpreting what we say and what we do),
then it is also natural to shift the focus of agency research away from speculative
senses and toward the wider external context of action. From our perspective as
experimentalists, it is a pity that the remarkable philosophical groundwork done
by Dennett has generated so few empirical explorations of intentionality (see Hall
& Johansson, 2003, for an overview). This is especially puzzling because the coun-
terintuitive nature of the intentions-as-patterns position has some rather obvious
experimental implications regarding the fallibility of introspection and possible
ways to investigate the nature of confabulation. As Carruthers (2009) puts it: “The
account . . . predicts that it should be possible to induce subjects to confabulate attri-
butions of mental states to themselves by manipulating perceptual and behavioral
cues in such a way as to provide misleading input to the self-interpretation process
(just as subjects can be misled in their interpretation of others)” (123).

CHOICES THAT MISBEHAVE


Recently, we introduced choice blindness as a new tool to explicitly test the predic-
tions implied by the intentional stance (Johansson et al., 2005). Choice blindness
is an experimental paradigm inspired by techniques from the domain of close-up
card magic, which permits us to surreptitiously manipulate the relationship between
choice and outcome that our participants experience. The participants in Johansson
et al. (2005) were asked to choose which of two pairwise presented female faces they
found most attractive. Immediately after, they were also asked to describe the reasons
for their choice. Unknown to the participants, on certain trials, a double-card ploy
was used to covertly exchange one face for the other. Thus, on these trials, the out-
come of the choice became the opposite of what they intended (see figure 16.1).
From a commonsense perspective it would seem that everyone immediately
would notice such a radical change in the outcome of a choice. But that is not the
case. The results showed that overall the participants detected less than 25 percent
of the manipulated trials, while nevertheless being prepared to offer introspectively
derived reasons for why they chose the way they did. An extensive debriefing pro-
cedure was used after the experiment to make sure that the participants who had
shown no signs of detection actually were unaware of the manipulation. When we
told the participants that we had in fact switched the pictures, they often showed
great surprise, even disbelief at times, which indicates that they were truly unaware
of the changes made during the experiment.2
When analyzing the reasons the participants gave, it was clear that they often con-
fabulated their answers, as when they referred to unique features of the previously
rejected face as being the reason for having made their choice (e.g., stating, “I liked
the earrings” when the option they actually preferred did not have any). Additional
analysis of the verbal reports in Johansson et al. (2005) as well as Johansson et al.
(2006) also showed that very few differences could be found between cases where
participants talked about a choice they actually made and those trials where the out-
come had been reversed. One interpretation of this is that the lack of differentiation
between the manipulated and nonmanipulated reports casts doubt on the origin of
the nonmanipulated reports as well; confabulation could be seen to be the norm,
and “truthful” reporting something that needs to be argued for.

Figure 16.1 A snapshot sequence of the choice procedure during a manipulation trial.
(A) Participants are shown two pictures of female faces and asked to choose which one
they find most attractive. Unknown to the participants, a second card depicting the
opposite face is concealed behind the visible alternatives. (B) Participants indicate their
choice by pointing at the face they prefer the most. (C) The experimenter flips down the
pictures and slides the hidden picture over to the participants, covering the previously
shown picture with the sleeve of his moving arm. (D) Participants pick up the picture and
are immediately asked to explain why they chose the way they did.

We have replicated the original study a number of times, with different sets of
faces (Johansson et al., 2006), for choices between abstract patterns (Johansson,
Hall, & Sikström, 2008), and when the pictures were presented onscreen in a
computer-based paradigm (Hall & Johansson, 2008). We have also extended the
choice-blindness paradigm to cover more naturalistic settings, and to attribute- and
monetary-based economic decisions. First, we wanted to know whether choice
blindness could be found for choices involving easily identifiable semantic attri-
butes. In this study participants made hypothetical choices between two consumer
goods based on lists of general positive and negative attributes (e.g., for laptops: low
price, short battery-life, etc.), and then we made extensive changes to these attri-
butes before the participants discussed their choice. Again, the great majority of
the trials remained undetected (Johansson et al., in preparation). In a similar vein,
we constructed a mock-up version of a well-known online shopping site and let the
participants decide which of three MP4 players they would rather buy. This time we
had changed the actual price and memory storage of the chosen item when the par-
ticipants reached the “checkout” stage, but despite being asked very specific questions
about why they preferred this item and not the other, very few of these changes were
detected (Johansson et al., in preparation). Second, we have also demonstrated the
effect of choice blindness for the taste of jam and the smell of tea in an ecologically
valid supermarket setting. In this study, even when participants decided between
such remarkably different tastes as spicy cinnamon-apple and bitter grapefruit, or
between the sweet smell of mango and the pungent Pernod, less than half of all
manipulation trials were detected (Hall et al., 2010). This result shows that the effect
is not just a lab-based phenomenon: people may display choice blindness for decisions
made in the real world as well.
Since the publication of Johansson et al. (2005), we have been repeatedly challenged
to demonstrate that choice blindness extends to domains such as moral reasoning,
where decisions are of greater importance, and where deliberation and introspection
are seen as crucial ingredients of the process (e.g., Moore & Haggard, 2006, comment-
ing on Johansson et al., 2006; see also the response by Hall et al., 2006). In order to meet
this challenge, we developed a magical paper survey (Hall, Johansson & Strandberg,
2012). In this study, the participants were given a two-page questionnaire attached to
a clipboard and were asked to rate to what extent they agreed with either a number
of formulations of fundamental moral principles (such as: “Even if an action might
harm the innocent, it is morally permissible to perform it,” or “What is morally per-
missible ought to vary between different societies and cultures”), or morally charged
statements taken from the currently most hotly debated topics in Swedish news (such
as: “The violence Israel used in the conflict with Hamas was morally reprehensible
because of the civilian casualties suffered by the Palestinians,” or “It is morally repre-
hensible to purchase sexual services even in democratic societies where prostitution is
legal and regulated by the government”). When the participants had answered all the
questions on the two-page form, they were asked to read a few of the statements aloud
and explain to the experimenter why they agreed or disagreed with them. However,
the statements on the first page of the questionnaire were written on a lightly glued
piece of paper, which got attached to the backside of the survey when the participants
flipped to the second page. Hidden under the removed paper slip was a set of slightly
altered statements. When the participants read the statements the second time to dis-
cuss their answers, the meaning was now reversed (e.g., “If an action might harm the
innocent, it is morally reprehensible to perform it,” or “The violence Israel used in the
conflict with Hamas was morally acceptable despite the civilian casualties suffered by
the Palestinians”). Because their rating was left unchanged, their opinion in relation
to the statement had now effectively been reversed. Despite concerning current and
well-known issues, the detection rate only reached 50 percent for the concrete state-
ments, and was even lower for the abstract moral principles.
We found an intuitively plausible correlation between level of agreement with
the statement and likelihood of detection (i.e., the stronger participants agreed
or disagreed, the more likely they were to also detect the manipulation), but even
manipulations that resulted in a full reversal of the scale sometimes remained unde-
tected. In addition, there was no correlation between detection of manipulation and
self-reported strength of general moral certainty.
But perhaps the most noteworthy finding here was that the participants who
did not detect the change also often constructed detailed and coherent arguments
clearly in favor of moral positions they had claimed that they did not agree with
just a few minutes earlier. Across all conditions, not counting the trials that were
detected, 65 percent of the remaining trials were categorized as strong confabula-
tion, with clear evidence that the participants now gave arguments in favor of the
previously rejected position.
We believe the choice-blindness experiments reviewed here are among the
strongest indicators around for an interpretative framework of self-knowledge for
intentional states, as well as a dramatic example of the nontransparent nature of the
human mind. In particular, we think the choice-blindness methodology represents
a significant improvement to the classic and notorious studies of self-knowledge
by Nisbett and Wilson (1977; see Johansson et al., 2006). While choice blindness
obviously puts no end to the philosophical debate on intentionality (because empir-
ical evidence almost never settles philosophical disputes of this magnitude; Rorty,
1993), there is one simple and powerful idea that springs from it. Carruthers (2009)
accurately predicted that it would be possible to “induce subjects to confabulate
attributions of mental states to themselves by manipulating perceptual and behav-
ioral cues in such a way as to provide misleading input to the self-interpretation pro-
cess” (123), but there is also a natural flip side to that prediction—if our systems for
intentional ascription can be fooled, then they can also be helped! If self-interpretation
is a fundamental component in our self-understanding, it should be possible to aug-
ment our inferential capacities by providing more and better information than we
normally have at hand.
To this end, in the second section of this chapter, we introduce computer-
mediated extrospection and distributed motivation as two novel concepts inspired
by the Dennettian view. For intentional realists, if there is anything in the world
that our private introspections tell us with certainty, it is what we believe, desire, and
intend (Goldman, 1993). From this perspective, it would seem that a scheme of cap-
turing and representing aspects of user context, for the supposed benefit of the users
themselves, would be of limited value. Such information would at best be redundant
and superfluous, and at worst a gross mischaracterization of the user’s true state of
mind. However, we contend, this is exactly what is needed to overcome the peren-
nial problem of self-control.

THE FUTURE OF SELF-CONTROL

Computer-Mediated Extrospection
In our view, one of the most important building blocks to gain reliable knowledge
about our own minds lies in realizing that it often is a mistake to confine judg-
ment of self-knowledge to a brief temporal snapshot, when the rationality of the
process instead might be found in the distribution of information traveling between
minds: in the asking, judging, revising, and clarifying of critical, communal dis-
course (Mansour, 2009). As Dennett (1993) says: “Above the biological level of
brute belief and simple intentional icons, human beings have constructed a level
that is composed of objects that are socially constructed, replicated, distributed,
traded, endorsed (“I’ll buy that!”), rejected, ignored, obsessed about, refined,
revised, attacked, advertised, discarded” (230). The point about critical communal
discourse as a basis for making better self-ascriptions also naturally extends to the
use of new tools and technologies to improve our self-understanding. Studies have
shown that if people are simply asked to introspect (about their feelings, about the
reasons for their attitudes, about the causes of their behavior, etc.), they often end
up with worse judgments than the ones they initially provided (Wilson & Dunn,
2004; Silvia & Gendolla, 2001; Dijksterhuis & Aarts, 2010). On the other hand,
when people are given an enhanced ability to observe their own behavior, they can
often make sizable and profitable revisions to their prior beliefs about themselves
(e.g., by way of video capture in social interaction and collaboration; see Albright
& Malloy, 1999). For example, Descriptive Experience Sampling (DES) is said to
be an introspective research technique. It works by using a portable beeper to cue
subjects at random times, “to pay immediate attention to their ongoing experience
at the moment they heard the beep. They then jot down in a notebook [or PDA]
the characteristics of that particular moment” (Hurlburt & Heavey, 2001, 400; for
other similar techniques, see Scollon, Kim-Prieto, & Diener, 2003; Christensen
et al., 2003). Later, an in-depth interview is conducted in which the experiences are
elaborated upon. What is interesting is that most participants when confronted with
the processed data from the sampling protocols are surprised by some aspects of
the results (e.g., Hurlburt & Heavey, 2001, describe a case of a man named Donald
who discovers in the protocols that he has frequent angry thoughts directed at his
children, something he was completely unaware of before). Similarly, by the use
of external DES-like probes in the study of task-unrelated thought (TuT, or sim-
ply “mind wandering”), it has repeatedly been shown that participants underesti-
mate how much their minds tend to wander—that is, that they are often unaware
of zoning out from the task at hand (Smallwood & Schooler, 2006; Smallwood,
McSpadden, & Schooler, 2008; Smallwood, Nind, & O'Connor, 2009; Christoff
et al., 2009), an effect that can be tied to practical consequences outside the lab, such
as educational or occupational goals (McVay, Kane, & Kwapil, 2009; Smallwood,
Fishman, & Schooler, 2007; but see Baars, 2010).
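To make the logic of such sampling concrete, here is a minimal sketch, in Python, of
how a DES-style prompt schedule might be generated. The number of prompts, the waking
window, and the function name are illustrative assumptions on our part, not parameters
taken from Hurlburt and Heavey (2001).

import datetime
import random

def schedule_beeps(n_beeps=6, start_hour=9, end_hour=21, seed=None):
    # Draw n_beeps prompt times at random within today's waking window.
    rng = random.Random(seed)
    today = datetime.date.today()
    window_start = datetime.datetime.combine(today, datetime.time(start_hour))
    window_seconds = (end_hour - start_hour) * 3600
    offsets = sorted(rng.uniform(0, window_seconds) for _ in range(n_beeps))
    return [window_start + datetime.timedelta(seconds=s) for s in offsets]

for beep in schedule_beeps(seed=1):
    print(beep.strftime("%H:%M"), "-> jot down what you are experiencing right now")

The unpredictability of the prompts is what does the work: because the subject cannot
anticipate them, the notes sample ongoing experience rather than a retrospectively
tidied version of it.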
Most important for us, even if the particular theories about introspection at play
here are contested (e.g., see the discussion in Hurlburt & Schwitzgebel, 2007, or
the exchange between Smallwood & Schooler, 2006, and McVay & Kane, 2010),
there is an undeniable power for self-discovery in the external tools that enable the
systematic gathering and processing of the data.3
But why stop with a single impoverished channel of verbal reports, when we can
use modern technology to sense and compile a fantastic array of data about our-
selves? The ubiquitous vision is one in which computers take an increasing part in
our everyday activities, in ways that mesh naturally with how people think, act, and
communicate (Bell & Dourish, 2007; Greenfield, 2006; Poslad, 2009). Work within
ubiquitous computing and context awareness has made us increasingly familiar with
computers that mediate our interactions with the world, but what about computers
that mediate our interactions with ourselves? In the same manner that computers can
be made more powerful by letting them gain information about the user, we also
believe users can be made smarter and more powerful by letting them gain addi-
tional knowledge about themselves.
In a pioneering effort, Gordon Bell in the MyLifeBits project (see Gemmell
et al., 2002; Gemmell, Bell, & Lueder, 2006; Bell & Gemmell, 2009) has collected
and digitized every conceivable aspect of his own life over the span of several years.
Similarly, but with an even denser assortment of wearable sensors, Clarkson (2002)
gathered around-the-clock measurements over several weeks. Apart from the obvi-
ous implications for remembrance, this allows a powerful form of personal data min-
ing that can reveal interesting, unintuitive, and predictive patterns in our everyday
behavior. An even more ambitious approach is that of Roberts (2004, 2010), who
gathered data about himself for two decades (concerning sleep, weight loss, cog-
nitive acuity, etc.) and subjected it to a quasi-experimental approach to overcome
obstacles and improve his lot. These are three examples from a rapidly growing
public trend in augmenting our inferences and attributions with extensive tracking
of self-data (e.g., see the portal at http://www.quantifiedself.com/, or the services
at http://your.flowingdata.com/ or http://daytum.com/, which are specifically
geared toward the quantification and data mining of information gathered about
the self). We believe this type of observation—what we call computer-mediated
extrospection (CME)—is a very promising domain to explore, and that it holds
great potential for improving our self-knowledge and extending our powers of
self-regulation and control.
Drawing upon existing research in ubiquitous computing (and from conceptual
neighbors like wearable computing, telemedicine, affective computing, and persua-
sive computing), it can be seen that capturing user context occupies center stage
in human-computer interaction (Dey, Abowd, & Salber, 2001). The typical and
most easily accessible context for CME is that of macrolevel activity markers, clas-
sified on a physical, intentional, and even interactive-social level (e.g., see Dalton &
O’Laighin, 2009; Bajcsy et al., 2009). But perhaps even more interesting from a
CME perspective are the more “intimate” measures that can be gathered from medi-
cal and/or psychophysiological monitoring. Recently, an explosion in the field of
wireless, wearable (or, in some cases, even off-body) sensing has enabled reliable
measuring of (among other things) electrocardiogram, blood pressure, body/skin
temperature, respiration, oxygen saturation, heart rate, heart sounds, perspiration,
dehydration, skin conductivity, blood glucose, electromyogram, and internal tissue
bleeding (for an overview, see Pantelopoulos & Bourbakis, 2010; Kwang, 2009;
Frantzidis et al., 2010). It is from these sensors, and in particular from wireless, dry
electroencephalogram (EEG; Gargiulo et al., 2008; Chi & Cauwenberghs, 2010),
that it is possible to build up the most critical CME variables, such as the detection
and continuous monitoring of arousal, vigilance, attention, mental workload, stress,
frustration, and so on (see Pan, Ren, & Lu, 2010; Ghassemi et al., 2009; Henelius
et al., 2009; Grundlehner et al., 2009).

Distributed Motivation
As we stated in the opening paragraphs, the problem of self-control is not just
a problem manifested in the behavior of certain “weak-willed” individuals, and
it is not only operative in such salient and life-threatening domains as crav-
ing and addiction, but also in the minute workings of everyday plans, choices,
and actions. Ameliorative action is as pertinent to the dreadful experience of
withdrawal from heroin as it is to innocuously hitting the snooze button on the
alarm clock and missing the first morning bus to school (Rachlin, 2000; Ainslie,
2001). Maglio, Gollwitzer, and Oettingen (chapter 12) present the evidence for
the effectiveness of (so-called) implementation intentions (IMPs), which has
shown that when people are prompted to elaborate a long list of very specific
contingency goals (of the form “when situation X arises, I will perform response
Y”), they are also significantly more likely to perform that action (Gollwitzer,
1999; Webb & Sheeran, 2008). This effect has been repeatedly demonstrated in
real-world environments, for example, in relation to rehabilitation training after
surgery, to keeping up an exercise program, to eating more healthy food, to breast
self-examination and screening for cervical cancer (see Gollwitzer & Sheeran,
2006, for a recent meta-analysis, but see also Sniehotta, 2009; Wood & Neal,
2007). But why does forming IMPs work? Is it not enough to have “normal”
intentions to act accordingly? Maglio, Gollwitzer, and Oettingen (this volume)
favor the explanation that IMPs “create instant habits” and “pass the control
of one’s behavior to the environment” (Gollwitzer, 1999), and they choose to
frame their discussion of IMPs around the well-known parable of Odysseus and
the Sirens. They write:

In the service of [Odysseus’] goal, he consciously willed an explicit plan—
having himself tied to the mast of his ship. From there, however, he had in a
sense surrendered his conscious intent to nonconscious control: though his
conscious will had changed (e.g., to succumb to the temptation of the Sirens),
the bounds of the rope remained, guiding his behavior without his conscious
intent. From our perspective, the rope provides a simple metaphor for the
form and function of planning that specifies when, where, and how to direct
action control in the service of long-term goals. (pp. 221–222, chapter 12).

Indeed, like Odysseus facing the Sirens we often know that we will find ourselves
in conditions where we are likely to do something detrimental to our long-term
goals, and like Odysseus tying himself to the mast we would often like to be able
to self-bind or precommit, and avoid or resist such temptations. As in the episode
from Do Androids Dream of Electric Sheep?, when Deckard chooses to have his
Penfield awake him in an industrious mood to avoid the lure of the warm bed, and
Iran programs an automatic resetting to block the self-perpetuating nature of the
induced depression, we would often like to be able to choose our course of action in
a calm moment of reflection rather than having to battle it out in the grip of power-
ful urges.
For all the practical potential of IMPs, we think it is a disservice to place them
next to the mighty Odysseus. The Greek king adventurer was truly and effectively
bound at the mast, but Gollwitzer himself admits that IMPs “need to be based on
strong goal intentions. As well, certain types of implementation intentions work bet-
ter than others, and people need to be committed to their implementation intentions”
(Gollwitzer, 1999, 501, our emphasis). One might reasonably wonder why we need
the extra “old-school” willpower that allows us to entertain “strong” goal intentions,
and be “committed” to our implementation intentions, when the whole idea of the
concept was to relieve us of the burden of consciously initiating action in the face of
temptations and distractions. In fact, looking at the literature, it is clear that IMPs
face a disturbing creep of “moderating” variables—they are less effective for more
impulsive participants (Churchill & Jessop, 2009), they only work for people with
high self-efficacy (Lippke et al., 2009), they are curtailed by preexisting “response
biases” (Miles & Proctor, 2008), “habit strength” (Webb, Sheeran, & Luszczynska,
2009), as well as the “stability” of the intentions (Godin et al., 2010) and the
strength of the “goal desires” (Prestwich, Perugini, & Hurling, 2008). In addition,
IMPs are generally only effective when they are provided by the experimenter, who
has an expert knowledge of the (often controlled) stimuli and contingencies the
participants will encounter (Sniehotta, 2009). In relation to this, the obvious ques-
tion is, why settle for implementation intentions as a metaphor for Odysseus and
the Sirens? Why not implement the actual strategy of external binding?
This is what we try to capture with our second concept, distributed motivation: the
general strategy of using stable features of both the social and the artifactual environ-
ment to scaffold the process of goal attainment. As such, distributed motivation is
a subclass of the well-established theory of distributed cognition (Hutchins, 1995;
Clark, 2008; Hollan, Hutchins & Kirsh, 2000). Distributed cognition deals with
computational processes distributed among agents, artifacts, and environments. It
is a set of tools and methodologies that allow the researcher to look beyond simple
“cognizant” agents and shift the unit of analysis to wider computational structures.
As previewed in our discussion of Maglio, Gollwitzer, and Oettingen (this volume),
one of the most central features of our notion of distributed motivation is the con-
cept of precommitment or self-binding. The tale of Odysseus and the Sirens is a stan-
dard illustration of this principle (Elster, 2000; for an in-depth treatment, see Sally,
2000a, 2000b). What we would like to argue here is that the image of the clever
Odysseus foiling the Sirens might serve as a promising template for the design of
modern remedies based on ubiquitous and context-aware technology. While peo-
ple generally strive to approximate the Odyssean ideal in their daily self-regulation
behavior, they seldom manage to create conditions of precommitment stable
enough to sustain them through complex and difficult problems. The majority of folk
strategies of self-control have been tried and tested in harsh conditions of cultural
evolution, or over the full life span of incessantly extrospecting individuals, and they
embody considerable pragmatic wisdom; but it is equally certain that they fail
miserably when looked at on a societal scale.
The problem with most folk strategies is of course that they do not have enough
binding power (sadly, the injunctions are often no stronger than the glue on the back
of the post-it notes they are written on). For example, an often-told anecdote in the
context of research on self-control is that of the young African American man who
made a “powerful” commitment to pay US$20 to the Ku Klux Klan every time he
smoked a cigarette. In contrast to many other cases, it is easy to understand the force
this commitment might have on his behavior, but the fact still remains that once he
has succumbed to the temptation, nothing really compels him to transfer money
to the KKK. But if no such crucial deterrent for future behavior can be established,
then why on earth should he adjust his behavior in relation to the commitment to
begin with? Without going into philosophical niceties, it is easy to see that there is
something deeply paradoxical about this kind of self-punishment. Indeed, if one
really could exert the type of mental control that effectively binds oneself to pay the
smoking fee to the KKK, then why not just simply bind oneself not to smoke in the
first place?
However, even something as lowly as a pigeon can act in a self-controlled man-
ner in a suitably arranged environment. Given a choice between pecking an illumi-
nated button to be administered one morsel of food after 10 seconds of delay, or
pecking another button to receive twice as much after 14 seconds of delay, pigeons
strongly prefer the second alternative (if the rewards were equally large, they would
of course go for the one with the shorter delay). Since the pigeons clearly value the
second alternative more, they should continue to do so up until the time of deliv-
ery. However, this is not always the case. With a simple manipulation of the reward
contingencies it is possible to induce “irrational” choice behavior. If the pigeons
are presented with the same choice pair, but given an opportunity to “reconsider”
after 10 seconds (i.e., the buttons are illuminated again to allow a peck to discrimi-
nate between one unit immediately, or two units after an additional 4 seconds), the
pigeons switch to the immediate and lesser reward (Rachlin, 2000). What is irratio-
nal about this? one may ask. Are pigeons not allowed to change their minds? Well,
of course they are, but the poor pigeons who live in a laboratory that has the “tempt-
ing” reconsideration-button installed will award themselves considerably less food
than their friends down the hall. In fact, in some sense, the pigeons seem to “realize”
this. If yet another choice-button is introduced in the experiment, this time giving
the pigeons a chance to eliminate the reconsideration-button (i.e., a peck on the
new button prevents the reconsideration option from being illuminated), they con-
sistently choose to do so (Rachlin, 2000). Thus, the pigeons show self-control by
precommitment to their earlier choice. What is so remarkable about this example
is that pigeons are manifestly not smart. Instead, it is clear that the intelligence of
the system lies as much in the technology of the setup as in the mechanisms of the
pigeon’s nervous system.
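The logic of this preference reversal, and of the value of removing the reconsideration
option, can be made explicit in a few lines of code. The sketch below assumes the
hyperbolic discounting account favored in this literature (Ainslie, 2001; Rachlin, 2000),
with a purely illustrative discount rate; it is a toy model of the contingencies, not a
claim about pigeon neurophysiology.

def value(amount, delay, k=1.0):
    # Hyperbolically discounted value: V = amount / (1 + k * delay),
    # with k an illustrative discount rate, not a fitted parameter.
    return amount / (1 + k * delay)

# Initial choice point: one morsel after 10 s versus two morsels after 14 s.
print(value(1, 10), value(2, 14))   # 0.09 < 0.13: the larger-later option wins

# "Reconsideration" 10 s later: the small reward is now immediate, and its
# discounted value overtakes the larger-later option, so preference reverses.
print(value(1, 0), value(2, 4))     # 1.0 > 0.4: the pigeon switches

# Precommitment (pecking the button that removes the reconsideration option)
# simply deletes the second choice point, so the initial, larger-later
# preference is the one that ends up controlling behavior.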
In the following sections we discuss how the conceptual tools we have proposed
(CME and distributed motivation) can be applied and tailored to the demands of
particular self-control problems. We start with comparatively less difficult problems
and move on to harder ones.

CME AND DISTRIBUTED MOTIVATION IN ACTION

Self-Monitoring
The starting point for many discussions of self-control is the observation that peo-
ple are often aware of their self-control problems but seldom optimally aware of
the way these problems are expressed in their behavior, or under what contingen-
cies or in which situations they are most prone to lapses in control (what is called
partial naïveté in behavioral economics). Most likely, this is due to a mix of biased
self-perception, cognitive limitations, and lack of inferential activity (Frederick,
Loewenstein, & O’Donoghue, 2002). Within this domain, we see two rough cat-
egories of CME tools that could serve to correct faulty self-perceptions.
First, CME can capture and represent information that we normally success-
fully access and monitor, but which we sometimes momentarily fail to survey. The
phenomenology of self-control lapses is often completely bereft of any feeling of
us having consciously weighed alternatives and finally chosen the more tempting
one. Instead, we often just find ourselves, post hoc, having completed an action
that we did not previously intend to do (Elster, 2000; Ainslie, 2001). Studies have
shown that while humans are quite capable of self-monitoring when given clear
directives and timely external prompts, performance quickly deteriorates under
natural conditions (Rachlin, 2000; Schooler, 2002; Smallwood & Schooler, 2006).
(Compare trying not to scratch an itch under stern scrutiny in the doctor's office
with trying not to scratch it later while watching TV.)
turn, greatly influences the nature of our self-control behavior. There is a big differ-
ence between smoking a cigarette that happens to be the 24th of the day and being
aware that one is about to light up the 24th cigarette for the day. The simple fact of
providing accurate monitoring of self-control-related context has been shown to
markedly reduce the incidence of self-control lapses (Rachlin, 2000; Fogg, 2003).
The problem is of course that it is almost as difficult to stay constantly vigilant and
attentive to such context as it is to control the behavior in the first place. This, we
surmise, is an area where the use of context-aware technology and CME would be
of great use (see Quinn et al. 2010, for a recent and powerful example of CME of
bad habits).
Second, instead of helping people to monitor what they are doing right now, CME
could be used to predict what they are just about to do. By using more intimate con-
textual measures like the psychophysiological state of the user, these micro-predic-
tions should be situated at the moment of activity, and come (minutes or seconds)
before the actual action is performed. For some types of self-control problems this
will be comparatively easy. For example, any goals having to do with strong emo-
tions (like trying to become a less aggressive person or trying to stifle unproductive
anger in marital disagreements) will be an ideal target for CME micro-prediction.
As Elster (2000) has pointed out, advice about emotion regulation most often fails
simply because it comes after the unwanted emotion has already been aroused and
taken full effect upon behavior. At an earlier stage such advice might have been
perfectly effective (i.e., here the proper assessment of the need for self-control is
as important as the control itself). Considerable research already exists on psy-
chophysiological markers that indicate the implicit buildup or expression of emo-
tional states not only for anger and aggression but also for more subtle conditions
like frustration, stress, and anxiety (e.g., Belle et al., 2010; Hosseini & Khalilzadeh,
2010). Promising efforts have also been made to identify similarly predictive pro-
files for less obviously emotional behavior like smoking and gambling (Parker &
Gilbert, 2008; Goudriaan et al., 2004). To increase the chances of finding predic-
tive regularities, CME technology would add an additional layer to these techniques
by allowing the measurements to be individually calibrated over time and multiple
contexts (Clarkson, 2002).
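As a minimal sketch of what such a micro-prediction might look like computationally,
the toy detector below flags a rising trend in a single psychophysiological channel
before it peaks. The choice of signal, the window size, the threshold, and the sample
values are all assumptions made for illustration; a working CME system would have to
calibrate them per individual and per context, as just noted.

def rising_trend(samples, window=5, slope_threshold=0.2):
    # Alert when the average per-sample increase over the last `window`
    # readings exceeds slope_threshold.
    if len(samples) < window:
        return False
    recent = samples[-window:]
    slope = (recent[-1] - recent[0]) / (window - 1)
    return slope > slope_threshold

# Invented skin-conductance readings (in microsiemens), one per minute.
skin_conductance = [2.0, 2.1, 2.0, 2.2, 2.6, 3.1, 3.7]
print("early warning:", rising_trend(skin_conductance))   # True: arousal is building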

Active Goal Representation
In the opening discussion we cataloged some of the many cultural strategies of
self-control that people employ in their daily lives and noticed how they often fail
because of the lack of crucial binding power. However, degree of binding is not
the only variable that determines success or failure of any particular attempt at
self-control. Sometimes the solution is actually easier than we might first think. At
the most basic level of analysis an often overlooked factor is the nature of the repre-
sentation of the goals we are striving for. An example from the clinical literature pro-
vides a good illustration of this. Patients who have suffered damage to the prefrontal
cortex (PFC) often face dramatic impairments in their ability to engage in behaviors
that depend on knowledge of a goal and the means to achieve it. They are too easily
distracted and are said to be "stimulus bound" (Miller, 2000; see also Manuck et al.,
2003). Despite this, rehabilitation studies have shown that performance on demanding
clinical tasks can be fully restored to the level of control subjects by the simple use
of a wireless, auditory pager system that alerts the patients
at random intervals to think about their goals and what they are currently doing
(Manly et al., 2002; Fish et al., 2007). In this example the pager does not function
as a specific memory prosthesis, like a day planner or a PDA; it is not telling the
patients what to do. It is a cheap, global signal that tells them to think about what it
was they really wanted to do. Similarly, for normal people, there is reason to believe
that many of our common failures to follow through on goals and plans simply stem
from an inability to continuously keep our goals active in the face of a bewildering
array of distracting (and, of course, often tempting) stimuli. Maintenance of behav-
ioral goals is a full-time job even for people with perfectly intact prefrontal struc-
tures (Miller & Cohen, 2001).
Thus, the first tier in any program for alleviating problems of self-control should
focus on maintaining important goals in an active state. Specific types of enhance-
ments to prospective memory exist in countless forms: from simple post-it notes,
to smartphone apps that allow users to associate items or actions to be remembered
with specific geographic locations (Massimi et al., 2010; see also the impressive
clinical results by Berry et al., 2009, where a wearable camera from the MyLifeBits
project was used to improve the memory recall of a severely amnesic patient). More
general systems, like the pager system described earlier, have been far less exten-
sively explored. This is unfortunate, because such systems could occupy an impor-
tant niche that traditional remembrance agents cannot fill. What CME systems like
the wireless pager promise to do is to act like a pacemaker for the mind, a steady signal
or beacon to orient our own thinking efforts. It would not require us to specify all
our actions in advance (and then give reminders to do those things), but instead
encourage us to think back and apply the knowledge of our prior goals to whatever
situation we happen to find ourselves in at the time of the alert (see Tobias, 2009,
for a similar perspective).
A further reason to explore such applications comes from basic learning theory.
Nelson and Bouton (2002; see also Bouton, 2004; Archbold, Bouton, & Nader,
2010) have found that an asymmetry exists between initial learning in any domain
and subsequent attempts at unlearning such behavior (e.g., eating or drinking habits
we would like to change). With few exceptions, initial learning is far less context-
dependent, while attempts at unlearning generally only work in the specific context
where the training took place (e.g., in a specific environment, or in a specific state
of mind, or even at a specific time; see Bouton, 2004). This means that the risk of
relapse is always great unless meticulous care is taken to control for contextual vari-
ables that could be of importance. Technically, this means that learning to break a
bad habit does not involve unlearning the old patterns, but rather that a new form
of learning has been established that (in certain contexts) inhibits the old learning.
However, Nelson and Bouton (2002) have also shown that this problem can be sub-
stantially alleviated by conditioning the retraining to a salient object that is acces-
sible in practically any context (i.e., the object in effect works as a portable context).
In the light of the previous discussion, a system like the wireless pager described by
Manly et al. (2002) could, with proper preparation, work both as a beacon that is
used to reengage attention to our goals and simultaneously as a signal to inhibit our
bad habits.

Goal Progression
As we mentioned in the earlier discussion of CME, there is a world of differ-
ence between lighting up a cigarette that happens to be the 24th of the day, and
knowingly and willingly smoking the 24th cigarette of the day. But while CME
technology could provide substantial help with monitoring of goals in relation to
clear-cut objectives like dieting or smoking (it is a relatively straightforward task
to implement context-aware devices that could count the amount of calories or
cigarettes consumed), it promises to provide an even greater impact in relation to
goals that are more abstract, nebulous, or distantly long-term. For example, imag-
ine someone who has decided to become a more amiable and caring person. How
would she go about fulfilling this goal, and how would she know when she has
fulfilled it? One solution that is realizable by means of context-aware technology
is to operationalize the goal in such a way as to be able to get discriminating feed-
back on the outcome of her behavior. This is a perfect job for context-aware CME
technology. What computers do best is to capture, record, store, and analyze data.
With the help of ubiquitous or wearable computing devices, conditions of “goal
attainment” could be specified and used as an objective comparison for the agent
involved. Criteria could be set in relation to any behavior, or activity, or reaction of
value that can be automatically captured (number of smiles received, time spend
in charity organization service, galvanic skin responses that indicate deception and
lying, reductions in stress cortisol levels, environmental contexts that suggest plea-
surable social interaction, number of scheduled appointments met in time, amount
of empathic thoughts captured in DES, etc.). But would this really capture all there
is to being an amiable person? No, obviously not, but that does not detract from
the fact that any change in behavior in the direction toward such a goal would be
for the better. The role of CME in such cases could be seen as a form of scaffolding
that gets people started in the direction toward some abstract or long-term goal.
When the behavioral change has gained some momentum, the scaffolding can be
dropped in order for more complex (and less measurable) behaviors to flourish.
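A minimal sketch of what such an operationalization could look like is given below.
The proxy measures, targets, and weights are invented for the sake of illustration; a
real CME system would have to negotiate them with the user and recalibrate them over
time.

# Each proxy maps to (value observed this week, target, weight); all numbers invented.
proxies = {
    "smiles_received_per_day":   (12, 10, 1.0),
    "appointments_met_on_time":  (4,  5,  0.5),
    "minutes_of_volunteer_work": (0,  30, 0.2),
}

def goal_feedback(proxies):
    # Fraction of the total weight whose targets were met: crude, but it turns
    # "become a more amiable person" into discriminating weekly feedback.
    total = sum(weight for _, _, weight in proxies.values())
    met = sum(weight for observed, target, weight in proxies.values() if observed >= target)
    return met / total

print("goal attainment this week: {:.0%}".format(goal_feedback(proxies)))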
Another similar, but subtly different role for computational technology in moni-
toring goal attainment and goal criteria is provided by Ainslie (2001). He discusses
the difficult problem of trying to establish self-controlled behavior by applying and
following principles. He argues that in the cultural sphere, and over the lifetime of
an individual, a natural evolution of principles takes place, such that (with very
few exceptions) principles come to evolve away from what we ideally would like
them to do, to instead focus on what is clear and simple and easy to uphold. That
is, people who insist on keeping their goals all “in the head” often end up with very
simple and impoverished goals (because how else could we remember them? See
Monterosso & Ainslie, 1999). Thus, an alcoholic who is lucky enough to recover
does not recover as a "social" drinker with a controlled (and presumably positive)
intake of alcohol, but as one who abstains from all forms of drinking (Ainslie, 2001;
see also discussion in Rachlin, 2000). Total abstinence as a principled approach
is much easier to uphold because it leaves no room for subjective interpretation
(a beer together with a steak is no real drink, another drink will not hurt me because
I have no more cash on me, etc.), and so it does not put the user on a slippery slope.
On the other hand, as Ainslie (2001, 2005) forcefully argues, what such principles
completely ignore is that this situation might often not be anywhere near what the
subject would really want their life to be like. Again, what CME can bring to this
situation is the promise of using computing technology to precisely measure con-
ditions of behavior and criteria for goal attainment, in order to effectively emulate
the function of principles but without having to settle for the few cases that are so
clear-cut that our ordinary senses can reliably tell them apart (i.e., we could imagine
that with finely tuned sensor and computing equipment, the “social” drinker could
live by a CME-augmented principle that said that she is allowed to drink only once
every other month, or only a certain amount each week, or only if she is at a party
of a certain size, etc.).
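By way of illustration only, the following is a minimal sketch of how such a CME-augmented principle might be encoded as a simple check against logged sensor events. The specific thresholds, and the assumption that drinks and party size can be sensed at all, are placeholders rather than claims about any existing system.

```python
from datetime import datetime, timedelta

# Hypothetical CME-augmented principle for a "social" drinker: at most two
# drinks per week, and only at gatherings of five or more people. The sensing
# layer (how drinks and party size are detected) is assumed, not specified.
MAX_DRINKS_PER_WEEK = 2
MIN_PARTY_SIZE = 5

def drink_allowed(drink_log, party_size, now=None):
    """Return True if having a drink now is consistent with the principle."""
    now = now or datetime.now()
    week_ago = now - timedelta(days=7)
    drinks_this_week = sum(1 for t in drink_log if t >= week_ago)
    return drinks_this_week < MAX_DRINKS_PER_WEEK and party_size >= MIN_PARTY_SIZE

# Example: one drink logged three days ago, currently at a party of eight.
log = [datetime.now() - timedelta(days=3)]
print(drink_allowed(log, party_size=8))   # True
print(drink_allowed(log, party_size=2))   # False: gathering too small
```

The point of the sketch is only that the principle no longer has to be clear-cut to unaided perception; the discriminations are handed off to the sensing and bookkeeping machinery.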

Micro-Precommitment
While active goal representation, accurate self-monitoring, and monitoring of goal
progression are important CME strategies, they are clearly less applicable in cases
of genuine reward conflict. In such cases, precommitment is the right strategy to
apply. On the other hand, reward conflicts come in many different flavors, and often
it is not the binding power as such that determines the value of any specific scheme
of precommitment. Apart from nonmetaphorical binding, what technology has to
offer the age-old strategy of precommitment is a much-lowered cost and a much-
increased range of operation. This is good news because some species of precom-
mitment need to be fast and easy to set up, and should come at a very low cost.
For example, we have remote controls for many electrical appliances that enable
us to turn them on and off at our convenience. But we have no remotes that allow
us to turn appliances off in a way that, within a set limit of time, we cannot turn
them on again (for TV and web surfing, we have things like parental or employer
control devices that can block certain channels or domains, but we have not nearly
enough effective equipment for self-binding).4 We can of course always climb under
the sofa, pull the plug and the antenna from the TV, and put them in a place we
cannot easily reach (to make TV viewing relatively inaccessible), but such ad hoc
maneuvers are generally too costly and cumbersome to perform in the long run. The
trick is to strike a balance between inaccessibility and flexibility. That is, for many
behaviors and situations we would like to be able to make quick, easy, but transient
precommitments that allow us to move beyond some momentary temptation but
then expire so as not to further limit our range of alternatives. We call this micro-
precommitment (MPC). MPC finds its primary use when the temptations we are
dealing with are not overwhelming but are still noticeable enough to make us fall.
As an example, imagine a cell phone–based location-aware system (using GPS or
any other modern positioning technique) where we can instantaneously “tag” different places we wish to be kept away from. The mechanism for tagging could be as
simple as having the phone in the same “cell” as the object to be tagged, or having a
place-map database in the phone that allows for distance-independent blocking. Let
us now say we have a minor shoe-shopping compulsion and walk around town on
an important errand. Walking down the street with this system, we could, with just
a brief moment of forethought, tag an upcoming tempting shoe store. The tagging
could have any number of consequences, like locking our wallet or credit card, or
even tuning the store alarm to go off if we enter the premises (!). The point of MPC
is not to set up consequences that represent maximally strong deterrents. Quite the
opposite: it is a technique suited for temporarily bringing us past small but nagging
distractions. Tomorrow, when we have no important errands anymore, we might
want to shop for shoes again and would not want to spend our time unwinding a too
forceful and elaborate precommitment scheme. In fact, since MPCs, in our view,
should be as easy and cheap as possible to instigate, they should also not be allowed to
have costly or long-term consequences.
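To give a rough sense of how lightweight such a mechanism could be, the sketch below models an MPC as a tagged location with a built-in expiry. The distance threshold, the expiry time, the example coordinates, and the blocking action are all assumptions made for the sake of the illustration.

```python
import math
import time

class MicroPrecommitment:
    """A transient 'tag' on a place: quick to set up, and it expires on its own."""

    def __init__(self, lat, lon, radius_m=50, duration_s=2 * 3600):
        self.lat, self.lon = lat, lon
        self.radius_m = radius_m
        self.expires_at = time.time() + duration_s   # MPCs should lapse quickly

    def active(self):
        return time.time() < self.expires_at

    def triggered_by(self, lat, lon):
        """True if the current position falls inside the tagged zone."""
        if not self.active():
            return False
        # Equirectangular approximation; adequate over city-block distances.
        dx = math.radians(lon - self.lon) * math.cos(math.radians(self.lat))
        dy = math.radians(lat - self.lat)
        return 6371000 * math.hypot(dx, dy) <= self.radius_m

def check_position(tags, lat, lon, block_action):
    """Run the (hypothetical) blocking action if any active tag is triggered."""
    for tag in tags:
        if tag.triggered_by(lat, lon):
            block_action()
            return True
    return False

# Example: tag the shoe store for two hours; lock a payment app if we get close.
shoe_store = MicroPrecommitment(lat=55.7047, lon=13.1910)
check_position([shoe_store], 55.7048, 13.1911, block_action=lambda: print("wallet locked"))
```

On this model the binding action could be anything from locking a payment app to issuing a gentle reminder; the point is that the tag is set in a moment of forethought and quietly lapses once the errand is over.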

Precommitment
If MPCs are swift and cheap and play with low stakes and short-term consequences,
regular precommitment holds no such limits. For precommitment the amount of
binding power and the cost of engagement are determined in relation to the magni-
tude of the problem and may be as strong as any agent desires. In contrast to MPC,
regular precommitment should not come easy. To make sure that the binding rep-
resents a “true” preference, a certain amount of inertia ought to be built into any
precommitment decision procedure (for a sensitive discussion of how to handle
this problem, see Elster, 2000). For example, some larger casinos give patrons prone
to too much gambling the option of having themselves banned from playing. Since
casinos are generally equipped with rigorous security and surveillance systems,
the ban can be very effectively enforced. However, one cannot just walk up to the
entrance cashier and ask to be banned. The decision must be made in dialogue and
with counsel from the casino management, because once you are banned the casino
will not be coaxed into letting you in again. As would be expected from a compulsive
gambler, you soon find yourself back at the gates trying to undo your former deci-
sion. It is at this point that the casino enforces the bind by bluntly disregarding your
pleas (and if the commitment was made in too light a manner, this would be an
unfortunate outcome).
Craving and addiction are extremely difficult topics to approach. Behavioral
abnormalities associated with addiction are exceptionally long-lived, and currently
no reliable remedies exist for the pathological changes in brain-reward systems that
are associated with prolonged substance abuse (Nestler, 2001; Everitt, Dickinson, &
Robbins, 2001; Robinson & Berridge, 2003). With reference to precommitment,
it is sometimes said that it is an ineffective strategy for handling things like addic-
tion, because in the addicted state we supposedly never find a clear preference plat-
form from which to initiate the precommitment (i.e., we do not know which of our
preferences are the “true” ones). Rachlin (2000) writes: “Instead of clearly defined
points of time where one strong preference gives way to its opposite we generally
experience a continuous opposition of forces and apparently random alternation
between making and breaking our resolutions” (54). This state of complex ambiva-
lence also makes it likely that a fierce arms race will be put in motion by the intro-
duction of any scheme of precommitment, where the addicted subject will waste
precious resources and energy trying to slip through the bind of the commitment.
The drug Antabuse illustrates these problems. If you take Antabuse and then have
a drink, you will experience severe pain. Thus, taking Antabuse is a form of pre-
commitment not to drink alcohol. However, alcoholics have been known to sub-
vert the effects of the drug by sipping the alcohol excruciatingly slowly, and some
even drink the alcohol despite the severe pain (Rachlin, 2000). Also, the outcome
of Antabuse treatment has been generally less than satisfying because many alcohol-
ics decide against taking the drug in the first place. In our view, this example should
be taken as a cautionary tale for any overly optimistic outlook on the prospects of
precommitment technology to handle really tough cases like addiction, but we do not believe it warrants a general doubt about the approach. As is evident from the fan-
tastically prosperous industry for the supply of services and products that purport
to alleviate problems of self-control (in practically any domain of life), people are
willing to take on substantial commitments, in terms of time, energy, and resources,
to change their current ways of life.
Take smoking as an example. What would a ubiquitous precommitment scheme
for helping smokers to quit look like? First, as a foundation, some means of detect-
ing the presence or absence of smoking-related context is needed. The context
could be built from observation of the actual smoking, from traces of smoking
(from smoking-related behavior patterns or from psychophysiological concomi-
tants of smoking), and many types of sensors could be used to generate the match.
For example, one sensor platform that might be used in the near future to provide
robust and efficient measurement is in-blood substance detection. In relation to dia-
betes treatment, Tamada, Lesho, and Tierney (2002) describe a host of emerging
transdermal (through the skin) techniques for measuring glucose levels in the blood.
While not yet perfected, such sensors can be worn continually and unobtrusively
by diabetics to efficiently monitor and manage their blood sugar levels (e.g., see
Gough et al., 2010). A similar system could easily be envisaged for nicotine. Yet, as
many current context-aware applications have shown, a combination of many cheap and overlapping environmental sensors (e.g., temperature, acceleration, light, and movement sensors) might provide context measurement as robust as that of a specialized subcutaneous device (Bulling, Roggen, & Troester, 2011). The great boon of
ubiquitous precommitment technology is that once the basic sensing of context is
in place, a multitude of distributed motivational strategies can be latched onto it,
and varieties of binding can be added or subtracted depending on the nature and
severity of the case. To take a dramatic example, for providing strong and relentless
binding, a wireless bracelet for nicotine monitoring could be hooked up directly to
the bank account of the participating subject and simply withdraw money in pro-
portion to the amount of smoking the subject does. But to prevent loss of money,
an anticipatory CME backup system that detects “lapse-critical” behavior could
be employed alongside the nicotine bracelet and make automatic support calls to
other participants in the program if the subject is in danger of taking a smoke. While
exceptionally strong single precommitment criteria can be put in place (e.g., you
lose all your money if you smoke one single cigarette), it is the possibility of mixing
and merging many less forceful strategies in one system that will provide the great-
est benefits. Most likely, venerable cultural strategies like situation avoidance (e.g.,
the shoe store “tagging” example), social facilitation, reward substitution, and so
forth, will experience a strong resurgence in the hands of ubiquitous technology for
distributed motivation.
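As a schematic sketch of how several such strategies might be layered on top of a single sensed context, consider the following; the event format, the fine amount, and the support-call mechanism are illustrative assumptions only, not a description of any deployed system.

```python
def on_smoking_context(event, account, supporters, notify):
    """Dispatch a bundle of distributed-motivation strategies for one sensed event.

    event      -- dict from a (hypothetical) context sensor, e.g.
                  {"kind": "lapse_risk"} or {"kind": "cigarette", "count": 1}
    account    -- object exposing a withdraw(amount) method (assumed interface)
    supporters -- contact details of other participants in the program
    notify     -- function used to place automatic support calls or messages
    """
    if event["kind"] == "lapse_risk":
        # Anticipatory CME backup: rally social support before any money is lost.
        for contact in supporters:
            notify(contact, "Possible lapse detected; please check in.")
    elif event["kind"] == "cigarette":
        # Graded financial binding rather than a single all-or-nothing penalty.
        account.withdraw(5 * event["count"])
        notify(supporters[0], "A lapse was logged and a small fine was applied.")

# Minimal demonstration with stand-in objects.
class DemoAccount:
    def withdraw(self, amount):
        print(f"Withdrew {amount} from savings")

on_smoking_context({"kind": "cigarette", "count": 1}, DemoAccount(),
                   ["+46700000000"], lambda c, msg: print(f"notify {c}: {msg}"))
```

The design choice worth noticing is that the sensing layer and the motivational layer are decoupled: once a smoking-related context can be detected, milder or stronger bindings can be added or removed without rebuilding the system.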

CONCLUSION
In this chapter we discussed how the problem of self-control can be approached from
a perspective on intentionality and introspection derived from the work of Dennett,
and the evidence from our own choice-blindness paradigm. We have provided a
range of suggestions for how sensor and computing technology might be of use in
scaffolding and augmenting our self-control abilities, and we have introduced the
concepts of computer-mediated extrospection and distributed motivation that we hope
may serve an important role in elucidating the problem of self-control from a mod-
ern computing perspective. Some researchers have expressed pessimism about the
ability of context-aware systems to make meaningful inferences about important
human social and emotional states, and believe that context-aware applications can
only supplant human initiative in the most carefully circumscribed situations (Bellotti
& Edwards, 2001). As evidenced by the current chapter, we think this pessimism
is greatly overstated. Precommitment technologies offer people the option of tem-
porary but forceful binding, aided by computer systems that will not be swayed or
cajoled, and it is through their very inflexibility that these systems have the potential to
support individual self-realization. As Dennett (2003) notes, in the domain of self-
control, effectively constraining our options actually gives us more freedom than we
otherwise would have had.

ACKNOWLEDGMENT
L.H. thanks the Swedish Research Council, and P.J. thanks the Bank of Sweden
Tercentenary Foundation for financial support.

NOTES
1. At times, tension ran so high in this debate that one might have thought it would
have been remembered for its rhetorical flair, if nothing else. As an example, Fodor
and Lepore (1993) scolded Dennett for his superficialism about the mental and professed that there really are no ideas other than commonsense “Granny-psychology” to take seriously, while Dennett (1994), in response, coined the name hysterical realism for
Fodor’s program and admitted that he regarded “the large and well-regarded litera-
ture on propositional attitudes . . . to be history’s most slowly unwinding unintended
reductio ad absurdum” (241, emphasis in original).
2. After being probed about what they thought of the experiment, and whether anything about the procedure had felt strange, the participants were also asked the hypothetical question of whether they thought they would have noticed if we had switched the pictures. No less than 84 percent of the participants who did not detect any of
the manipulations still answered that they would have noticed if they had been pre-
sented with mismatched outcomes in this way, thus displaying what might be called
“choice-blindness blindness”—the false metacognitive belief of being able to detect
changes to the outcome of one’s choices (see Levin et al., 2000, for a similar result in
relation to change blindness).
3. Incidentally, the DES paradigm also represents one additional strong line of evidence
against the concept of intentional realism. As Hurlburt (2009) writes: “As a result of
30 years of carefully questioning subjects about their momentary experiences, my
sense is that trained DES subjects who wear a beeper and inspect what is directly
before the footlights of consciousness at the moment of the beeps almost never
directly apprehend an attitude. Inadequately trained subjects, particularly on their
first sampling day, occasionally report that they are experiencing some attitude. But
when those reports are scrutinized in the usual DES way, querying carefully about
any perceptual aspects, those subjects retreat from the attitude-was-directly-observed
position, apparently coming to recognize that their attitude had been merely “back-
ground” or “context.” That seems entirely consonant with the view that these subjects
had initially inferred their own attitudes in the same way they infer the attitudes of
others” (150).
4. But see the OS X SelfControl application by Steve Lambert (http://visitsteve.com/
work/selfcontrol/), which allows the user to selectively and irrevocably (within a
time limit) shut down sections of the web, or the slightly less weighty, but ever so use-
ful Don’t Dial (http://www.dontdial.com/) app for the iPhone/Android platform,
which, before an intoxicating evening, allows the user to designate a range of sensitive phone contacts that will later be blocked from being called.

REFERENCES
Ainslie, G. (2001). Breakdown of will. New York: Cambridge University Press.
Ainslie, G. (2005). Précis of Breakdown of Will. Behavioral and Brain Sciences, 28(5),
635–673.
Albright, L., & Malloy, T. E. (1999). Self-observation of social behavior and metapercep-
tion. Journal of Personality and Social Psychology, 77(4), 726–743.
Archbold, G. E. B., Bouton, M. E., & Nader, K . (2010). Evidence for the persistence
of contextual fear memories following immediate extinction. European Journal of
Neuroscience, 31(7), 1303–1311.
Baars, B. J. (2010). Spontaneous repetitive thoughts can be adaptive: Postscript on “mind
wandering.” Psychological Bulletin, 136(2), 208–210.
Bajcsy, R., Giani, A., Tomlin, C., Borri, A., & Di Benedetto, M. (2009). Classification of
physical interactions between two subjects. In BSN 2009: Sixth International Workshop
on Wearable and Implantable Body Sensor Networks (pp. 187–192).
Bell, G., & Dourish, P. (2007). Yesterday’s tomorrows: Notes on ubiquitous computing’s
dominant vision. Personal and Ubiquitous Computing, 11(2), 133–143.
Bell, G., & Gemmell, J. (2009). Total recall: How the e-memory revolution will change every-
thing. New York: Dutton Adult.
Belle, A., Soo-Yeon Ji, Ansari, S., Hakimzadeh, R., Ward, K., & Najarian, K . (2010).
Frustration detection with electrocardiograph signal using wavelet transform. In 2010
International Conference on Biosciences (pp. 91–94).
Bellotti, V. M., & Edwards, W. K . (2001). Intelligibility and accountability: Human con-
siderations in context aware system. Human–Computer Interaction, 16, 193–212.
Berry, E., Hampshire, A., Rowe, J., Hodges, S., Kapur, N., Watson, P., & Browne, G. (2009).
The neural basis of effective memory therapy in a patient with limbic encephalitis.
Journal of Neurology, Neurosurgery, and Psychiatry, 80(11), 1202–1205.
Bogdan, R. J. (Ed.). (1991). Mind and common sense: Philosophical essays on commonsense
psychology. Cambridge: Cambridge University Press.
Bouton, M. E. (2004). Context and behavioral processes in extinction. Learning and
Memory, 11(5), 485–494.
Brown, R. G., & Pluck, G. (2000). Negative symptoms: The “pathology” of motivation
and goal-directed behaviour. Trends in Neurosciences, 23(9), 412–417.
Bulling , A., Roggen, D., & Troester, G. (2011). What’s in the eyes for context-awareness?
Pervasive Computing, IEEE, April–June, pp. 48–57.
Carruthers, P. (2009). How we know our own minds: The relationship between mind-
reading and metacognition. Behavioral and Brain Sciences, 32, 121–182.
Chi, Y. M., & Cauwenberghs, G. (2010). Wireless non-contact EEG/ECG electrodes
for body sensor networks. In 2010 International Conference on Body Sensor Networks
(pp. 297–301).
Christensen, S. M., & Turner, D. R. (1993). Folk psychology and the philosophy of mind.
Hillsdale, NJ: Erlbaum.
Christensen, T. C., Barrett, L. F., Bliss-Moreau, E., Lebo, K., & Kaschub, C. (2003). A prac-
tical guide to experience-sampling procedures. Journal of Happiness Studies, 4, 53–78.
Christoff, K., Gordon, A. M., Smallwood, J., Smith, R., & Schooler, J. W. (2009). Experience
sampling during fMRI reveals default network and executive system contributions to
mind wandering. Proceedings of the National Academy of Sciences of the United States of
America, 106(21), 8719–8724.
Churchill, S., & Jessop, D. (2009). Spontaneous implementation intentions and impulsiv-
ity: Can impulsivity moderate the effectiveness of planning strategies? British Journal of
Health Psychology, 13, 529–541.
Clark, A. (2008). Supersizing the mind. Oxford: Oxford University Press.
Clarkson, B. (2002). Life patterns: Structure from wearable sensors. PhD diss., MIT.
Dalton, A., & O’Laighin, G. (2009). Identifying activities of daily living using wire-
less kinematic sensors. In BSN 2009: Sixth International Workshop on Wearable and
Implantable Body Sensor Networks (pp. 87–91).
Dennett, D. C. (1987). The intentional stance. Cambridge, MA : MIT Press.
Dennett, D. C. (1991a). Consciousness explained. Boston: Little, Brown.
Dennett, D. C. (1991b). Real patterns. Journal of Philosophy, 89, 27–51.
Dennett, D. C. (1991c). Two contrasts: Folk craft versus folk science and belief versus
opinion. In J. Greenwood (Ed.), The future of folk psychology: Intentionality and cognitive
science (135–148). Cambridge: Cambridge University Press.
Dennett, D. C. (1993). The message is: There is no medium. Philosophy and
Phenomenological Research, 53, 889–931.
Dennett, D. C. (1994). Get real. Philosophical Topics, 22, 505–568.
Dennett, D. C. (2003). Freedom evolves. London: Allen Lane.
Dennett, D. C. (2009). Intentional systems theory. In B. McLaughlin, A. Beckermann, &
S. Walter (Eds.), Oxford handbook of the philosophy of mind (pp. 339–350). New York:
Oxford University Press.
Dey, A. K., Abowd, G. D., & Salber, D. (2001). A conceptual framework and a toolkit
for supporting the rapid prototyping of context-aware applications. Human–Computer
Interaction, 16, 167–176.
Dijksterhuis, A., & Aarts, H. (2010). Goals, attention, and (un)consciousness. Annual
Review of Psychology, 61, 467–490.
Ebert, J. P., & Wegner, D. M. (2010). Time warp: Authorship shapes the perceived timing
of actions and events. Consciousness and Cognition, 19(1), 481–489.
Elster, J. (2000). Ulysses unbound. Cambridge: Cambridge University Press.
Everitt, B. J., Dickinson, A., & Robbins, T. W. (2001). The neuropsychological basis of
addictive behavior. Brain Research Reviews, 36, 129–138.
Fish, J., Evans, J. J., Nimmo, M., Martin, E., Kersel, D., Bateman, A., & Wilson, B. A.
(2007). Rehabilitation of executive dysfunction following brain injury: “Content-free”
cueing improves everyday prospective memory performance. Neuropsychologia, 45(6),
1318–1330.
Fodor, J. A . (2000). The mind doesn’t work that way: The scope and limits of computational
psychology. Cambridge, MA : MIT Press.
Fodor, J., & Lepore, E. (1993). Is intentional ascription intrinsically normative? In Bo
Dahlbom (Ed.), Dennett and his critics (pp. 70–82). Oxford: Blackwell.
Fogg , B. J. (2003). Persuasive technology: Using computers to change what we think and do.
San Francisco: Morgan Kaufmann.
Frantzidis, C., Bratsas, C., Papadelis, C., Konstantinidis, E., Pappas, C., & Bamidis, P.
(2010). Toward emotion aware computing: An integrated approach using multichannel
neurophysiological recordings and affective visual stimuli. Transactions on Information
Technology in Biomedicine, IEEE, 14(3), 589–597.
Frederick, S., Loewenstein, G., & O’Donoghue, T. (2002). Time discounting and time
preference: A critical review. Journal of Economic Literature, 40(2), 351–401.
Gallagher, S. (2007). The natural philosophy of agency. Philosophy Compass, 2(2),
347–357.
Gargiulo, G., Bifulco, P., Calvo, R., Cesarelli, M., Jin, C., & van Schaik, A. (2008). A mobile
EEG system with dry electrodes. In Biomedical Circuits and Systems Conference. BioCAS
2008. IEEE (pp. 273–276).
Gemmell, J., Bell, G., & Lueder, R. (2006). MyLifeBits: A personal database for every-
thing. Communications of the ACM, 49(1), 88–95.
Gemmell, J., Bell, G., Lueder, R., Drucker, S., & Wong , C. (2002). MyLifeBits: Fulfilling the
Memex vision. In Proceedings of the Tenth ACM International Conference on Multimedia
(pp. 235–238).
Ghassemi, F., Moradi, M., Doust, M., & Abootalebi, V. (2009). Classification of sustained
attention level based on morphological features of EEG’s independent components. In
ICME International Conference on Complex Medical Engineering, 2009 (pp. 1–6).
Godin, G., Belanger-Gravel, A., Amireault, S., Gallani, M., Vohl, M., & Perusse, L. (2010).
Effect of implementation intentions to change behaviour: moderation by intention sta-
bility. Psychological Reports, 106(1), 147–159.
Goldman, A . (1993). The psychology of folk psychology. Behavioral and Brain Sciences,
16, 15–28.
Gollwitzer, P. M. (1999). Implementation intentions: Strong effects of simple plans.
American Psychologist, 54, 493–503.
Gollwitzer, P., & Sheeran, P. (2006). Implementation intentions and goal achievement: A
meta-analysis of effects and processes. Advances in Experimental Social Psychology, 38,
69–119.
Goudriaan, A. E., Oosterlaan, J., de Beurs, E., & Van den Brink, W. (2004). Pathological
gambling: A comprehensive review of biobehavioral findings. Neuroscience and
Biobehavioral Reviews, 28(2), 123–141.
Gough, D., Kumosa, L., Routh, T., Lin, J., & Lucisano, J. (2010). Function of an implanted
tissue glucose sensor for more than 1 year in animals. Science Translational Medicine,
2(42), 42–53.
Greenfield, A . (2006). Everyware: The dawning age of ubiquitous computing. Berkeley, CA:
New Riders.
Grundlehner, B., Brown, L., Penders, J., & Gyselinckx , B. (2009). The design and analysis
of a real-time, continuous arousal monitor. In BSN 2009: Sixth International Workshop
on Wearable and Implantable Body Sensor Networks (pp. 156–161).
Grush, R. (2004). The emulation theory of representation: Motor control, imagery, and
perception. Behavioral and Brain Sciences, 27, 377–442.
Haggard, P., Clark, S., & Kalogeras, J. (2002). Voluntary action and conscious awareness.
Nature Neuroscience, 5(4), 382–385.
Hall, L., & Johansson, P. (2003). Introspection and extrospection: Some notes on the contex-
tual nature of self-knowledge. Lund University Cognitive Studies, 107. Lund: LUCS.
Hall, L., & Johansson, P. (2008). Using choice blindness to study decision making and
introspection. In P. Gärdenfors & A. Wallin (Eds.), Cognition: A smorgasbord (pp. 267–
283). Lund: Nya Doxa.
Hall, L., Johansson, P., Sikström, S., Tärning, B. & Lind, A . (2006). How something
can be said about Telling More Than We Can Know: Reply to Moore and Haggard.
Consciousness and Cognition, 15, 697–699.
Hall, L., Johansson, P., Tärning, B., Sikström, S., & Deutgen, T. (2010). Magic at the market-
place: Choice blindness for the taste of jam and the smell of tea. Cognition, 117, 54–61.
Hall, L., Johansson, P., & Strandberg, T. (2012). Lifting the veil of morality: Choice blindness and attitude reversals on a self-transforming survey. PLoS ONE, 7(9), e45457. doi:10.1371/journal.pone.0045457
Henelius, A., Hirvonen, K., Holm, A., Korpela, J., & Muller, K . (2009). Mental work-
load classification using heart rate metrics. In Annual International Conference of the
Engineering in Medicine and Biology Society (pp. 1836–1839).
Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new founda-
tion for human-computer interaction research. ACM Transactions on Computer-Human
Interaction, 7(2), 174–196.
Hosseini, S., & Khalilzadeh, M. (2010). Emotional stress recognition system using EEG
and psychophysiological signals: Using new labelling process of EEG signals in emo-
tional stress state. In International Conference on Biomedical Engineering and Computer
Science (pp. 1–6).
Hurlburt, R. T., & Heavey, C. L. (2006). Exploring inner experience. Amsterdam: John
Benjamins.
Hurlburt, R. T., & Heavey, C. L. (2001). Telling what we know: Describing inner experi-
ence. Trends in Cognitive Science, 5(9), 400–403.
Hurlburt, R. T., & Schwitzgebel, E. (2007). Describing inner experience? Proponent meets
skeptic. Cambridge, MA : MIT Press.
Hutchins, E. (1995). Cognition in the wild. Cambridge, MA : MIT Press.
Johansson, P., Hall, L., Kusev, P., Aldrovandi, S., Yamaguchi, Y., & Watanabe, K. (in preparation). Choice blindness in multi-attribute decision making.
Johansson, P., Hall, L., & Sikström, S. (2008). From change blindness to choice blindness.
Psychologia, 51, 142–155.
Johansson, P., Hall, L., Sikström, S., & Olsson, A. (2005). Failure to detect mismatches
between intention and outcome in a simple decision task . Science, 310, 116–119.
Johansson, P., Hall, L., Sikström, S., Tärning, B., & Lind, A . (2006). How something
can be said about Telling More Than We Can Know. Consciousness and Cognition, 15,
673–692.
Koechlin, E., & Summerfield, C. (2007). An information theoretical approach to prefrontal executive function. Trends in Cognitive Science, 11(6), 229–235.
Kwang, P. (2009). Nonintrusive measurement of biological signals for ubiquitous healthcare. In Annual International Conference of the Engineering in Medicine and Biology
Society (pp. 6573–6575).
Levelt, W. J. M., Roelofts, A., & Meyer, A. S. (1999). A theory of lexical access in speech
production. Behavioral and Brain Sciences, 22(1), 1–76.
Levin, D. T., Momen, N., Drivdahl, S. B., & Simons, D. J. (2000). Change blindness
blindness: The metacognitive error of overestimating change-detection ability. Visual
Cognition, 7, 397–412.
Lippke, S., Wiedemann, A., Ziegelmann, J., Reuter, T., & Schwarzer, R . (2009). Self-efficacy
moderates the mediation of intentions into behavior via plans. American Journal of
Health Behavior, 33(5), 521–529.
Manly, T., Hawkins, K., Evans, J., Woldt, K., & Robertson, I. H. (2002). Rehabilitation of
executive function: Facilitation of effective goal management on complex tasks using
periodic auditory alerts. Neuropsychologia, 40(3), 271–281.
Mansour, O. (2009). Group intelligence: A distributed cognition perspective. In INCOS
’09: Proceedings of the International Conference on Intelligent Networking and Collaborative
Systems (pp. 247–250). Washington, DC: IEEE Computer Society.
Manuck, S. B., Flory, J. D., Muldoon, M. F., & Ferrell, R. E. (2003). A neurobiology of
intertemporal choice. In G. Loewenstein, D. Read, & R. Baumeister (Eds.), Time and
decision: Economic and psychological perspectives on intertemporal choice (139–172).
New York: Russell Sage Foundation.
Massimi, M., Truong, K., Dearman, D., & Hayes, G. (2010). Understanding recording
technologies in everyday life. Pervasive Computing, IEEE, 9(3), 64–71.
McVay, J. C., & Kane, M. J. (2010). Does mind wandering reflect executive function or
executive failure? Comment on Smallwood and Schooler (2006) and Watkins (2008).
Psychological Bulletin, 136(2), 188–197; discussion 198–207.
McVay, J. C., Kane, M. J., & Kwapil, T. R . (2009). Tracking the train of thought from
the laboratory into everyday life: An experience-sampling study of mind wandering
across controlled and ecological contexts. Psychonomic Bulletin and Review, 16(5),
857–863.
Miles, J. D., & Proctor, R. W. (2008). Improving performance through implementation
intentions: Are preexisting response biases replaced? Psychonomic Bulletin and Review,
15(6), 1105–1110
Miller, E. K . (2000). The prefrontal cortex and cognitive control. Nature Reviews
Neuroscience, 1, 59–65.
Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function.
Annual Review of Neuroscience, 24, 167–202.
Monterosso, J., & Ainslie, G. (1999). Beyond discounting: Possible experimental models
of impulse control. Psychopharmacology, 146, 339–347.
Moore, J., & Haggard, P. (2006). Commentary on “How something can be said about tell-
ing more than we can know: On choice blindness and introspection.” Consciousness and
Cognition, 15(4), 693–696.
Moore, J. W., Wegner, D. M., & Haggard, P. (2009). Modulating the sense of agency with
external cues. Consciousness and Cognition, 18(4), 1056–1064.
Nelson, J. B., & Bouton, M. E. (2002). Extinction, inhibition, and emotional intelligence.
In L. F. Barrett & P. Salovey (Eds.), The wisdom in feeling: Psychological processes in emo-
tional intelligence (pp. 60–85). New York: The Guilford Press.
Nestler, E. J. (2001). Molecular basis of long-term plasticity underlying addiction. Nature
Reviews Neuroscience, 2, 119–128.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on
mental processes. Psychological Review, 84, 231–259.
Pacherie, E. (2008). The phenomenology of action: A conceptual framework . Cognition,
107, 179–217.
Pacherie, E., & Haggard, P. (2010). What are intentions? In, L. Nadel &
W. Sinnott-Armstrong (Eds.), Benjamin Libet and agency (pp. 70–84). Oxford: Oxford
University Press.
Pan, J., Ren, Q., & Lu, H. (2010). Vigilance analysis based on fractal features of EEG sig-
nals. In International Symposium on Computer Communication Control and Automation
(3CA), 2010 (pp. 446–449).
Pantelopoulos, A., & Bourbakis, N. (2010). A survey on wearable sensor-based systems
for health monitoring and prognosis. Transactions on Systems, Man, and Cybernetics,
Part C: Applications and Reviews, 40(1), 1–12.
Parker, A. B., & Gilbert, D. G. (2008). Brain activity during anticipation of smoking-related
and emotionally positive pictures in smokers and nonsmokers: A new measure of cue
reactivity. Nicotine and Tobacco Research, 10(11), 1627.
Poslad, S. (2009). Ubiquitous computing: smart devices, smart environments and smart inter-
action. Chichester, UK: Wiley.
Postma, A . (2000). Detection of errors during speech production: A review of speech
monitoring models. Cognition, 77, 97–131.
Prestwich, A., Perugini, M., & Hurling , R. (2008). Goal desires moderate
intention-behaviour relations. British Journal of Social Psychology/British Psychological
Society, 47, 49–71.
Quinn, J. M., Pascoe, A., Wood, W., & Neal, D. T. (2010). Can’t control yourself? Monitor
those bad habits. Personality and Social Psychology Bulletin, 36(4), 499–511.
Rachlin, H. (2000). The science of self-control. Cambridge, MA : Harvard University Press.
Roberts, S. (2004). Self-experimentation as a source of new ideas: Ten examples about
sleep, mood, health, and weight. Behavioral and Brain Sciences, 27, 227–288.
Roberts, S. (2010). The unreasonable effectiveness of my self-experimentation. Medical
Hypotheses, 75(6), 482–489.
Robinson, T. E., & Berridge, K. C. (2003). Addiction. Annual Review of Psychology, 54,
25–53.
Rorty, R. (1993). Holism, intrinsicality, and the ambition of transcendence. In B. Dahlbom
(Ed.), Dennett and his critics: Demystifying mind (pp. 184–202). Cambridge, MA : Basil
Blackwell.
Sally, D. (2000a). Confronting the Sirens: Rational behavior in the face of changing pref-
erences. Journal of Institutional and Theoretical Economics, 156(4), 685–714.
Sally, D. (2000b). I, too, sail past: Odysseus and the logic of self-control. Kyklos, 53(2),
173–200.
Schooler, J. W. (2002). Re-representing consciousness: Dissociations between experience
and meta-consciousness. Trends in Cognitive Science, 6(8), 339–344.
Scollon, C. N., Kim-Prieto, C., & Diener, E. (2003). Experience sampling: Promises and
pitfalls, strengths and weaknesses. Journal of Happiness Studies, 4, 5–34.
Silvia, P., & Gendolla, G. (2001). On introspection and self-perception: Does self-focused
attention enable accurate self-knowledge? Review of General Psychology, 5(3), 241–269.
Smallwood, J., Fishman, D. J., & Schooler, J. W. (2007). Counting the cost of an absent
mind: Mind wandering as an underrecognized influence on educational performance.
Psychonomic Bulletin & Review, 14(2), 230–236.
Smallwood, J., McSpadden, M., & Schooler, J. W. (2008). When attention matters: The
curious incident of the wandering mind. Memory and Cognition, 36(6), 1144–1150.
Smallwood, J., Nind, L., & O’Connor, R. C. (2009). When is your head at? An exploration
of the factors associated with the temporal focus of the wandering mind. Consciousness
and Cognition, 18(1), 118–125.
Smallwood, J., & Schooler, J. W. (2006). The restless mind. Psychological Bulletin, 132(6),
946–958.
Sniehotta, F. F. (2009). Towards a theory of intentional behaviour change: Plans, plan-
ning, and self-regulation. British Journal of Health Psychology, 14, 261–273.
Tamada, J. A., Lesho, M., & Tierney, M. (2002). Keeping watch on glucose. IEEE Spectrum
Online, 39(4), 52–57.
Tobias, R . (2009). Changing behavior by memory aids: A social psychological model
of prospective memory and habit development tested with dynamic field data.
Psychological Review, 116(2), 408–438.
Webb, T. L., & Sheeran, P. (2008). Mechanisms of implementation intention effects:
The role of goal intentions, self-efficacy, and accessibility of plan components. British
Journal of Social Psychology/British Psychological Society, 47, 373–395.
Webb, T. L., Sheeran, P., & Luszczynska, A . (2009). Planning to break unwanted hab-
its: Habit strength moderates implementation intention effects on behaviour change.
British Journal of Social Psychology/British Psychological Society, 48, 507–523.
Westbury, C., & Dennett, D. (2000). Mining the past to construct the future: Memory and
belief as forms of knowledge. In D. L. Schacter & E. Scarry (Eds.), Memory, brain, and
belief (pp. 11–32). Cambridge, MA : Harvard University Press.
Wilson, T. D., & Dunn, E. W. (2004). Self-knowledge: Its limits, value, and potential for
improvement. Annual Review of Psychology, 55, 493–518.
Wolpert, D., & Ghahramani, Z. (2004). Computational motor control. In M. Gazzaniga
(Ed.), The cognitive neurosciences (3rd ed., pp. 485–494). Cambridge, MA: MIT Press.
Wood, W., & Neal, D. T. (2007). A new look at habits and the habit-goal interface.
Psychological Review, 114(4), 843–863.
17

Situationism and Moral Responsibility


Free Will in Fragments

MANUEL VARGAS

Many prominent accounts of free will and moral responsibility treat as central the
ability of agents to respond to reasons. Call such theories Reasons accounts. In what
follows, I consider the tenability of Reasons accounts in light of situationist social
psychology and, to a lesser extent, the automaticity literature. In the first half of this
chapter, I argue that Reasons accounts are genuinely threatened by contemporary
psychology. In the second half, I consider whether such threats can be met, and at
what cost. Ultimately, I argue that Reasons accounts can abandon some familiar
assumptions, and that doing so permits us to build a more empirically plausible
picture of our agency.

1. PRELIMINARIES: FREE WILL AND MORAL RESPONSIBILITY


“Free will” is a term of both ordinary and technical discourse. We might say that
Ava chose to major in mathematics “of her own free will” or that Ruth lacks free
will because of, say, brainwashing or mental illness. What is less clear is whether
ordinary sorts of usages of “free will” reflect a unified or single concept of free
will. Perhaps ordinary usage picks out distinct features of the world, masquerad-
ing under a single term. Referential fragmentation seems an especially appeal-
ing hypothesis when one considers the varied characterizations given to free
will among scientists, philosophers, and theologians. So, for example, scientists
have used the term “free will” to refer to, among other things, the feeling of con-
scious control (Wegner 2002); “undetermined choices of action” (Bargh 2008,
130); and the idea that we make choices “independent of anything remotely
resembling a physical process” (Montague 2008, 584–585). Philosophical uses
display variation, too. Among other things, free will has been characterized as
the ability to do otherwise; a kind of control required for moral responsibility;
decision making in accord with reason; and a capacity to act in the way we believe
when we deliberate about what to do. The univocality of “free will” is dubious
(Vargas 2011).
In what follows, I treat free will as the variety of control distinctively required
for agents to be morally responsible.1 It is a further matter, one I will not address
here, whether such control constitutes or is a part of any other powers that have
sometimes been discussed under the banner of “free will.” Among theories of free
will characterized along these lines, of special interest here are Reasons accounts.
These are accounts on which an agent’s capacity to appropriately respond to reasons
constitutes the agent’s having the form of control that (perhaps with some other
things)2 constitutes free will or is required for moral responsibility (Wolf 1990;
Wallace 1994; Fischer and Ravizza 1998; Arpaly 2003; Nelkin 2008). There are a
number of attractions to these accounts, although here I will only gesture at some
of them.3
First, Reasons theorists have been motivated by the idea that in calling one
another to account, in (especially) blaming one another, and in judging that some-
one is responsible, we are suggesting that the evaluated agent had a reason to do
otherwise. Having reams of alternative possibilities available, even on the most
metaphysically demanding conception of these things, is of little use or interest
in and of itself. It is a condition on the possibility of an alternative being relevant
that there be a reason in favor of it, and this is true both for the purposes of calling
someone to account and for an agent trying to decide what to do. In the absence
of the ability to discern or act on a discerned reason in favor of that possibility, it is
something of an error or confusion to blame the agent for failing to have acted on
that alternative (unless, perhaps, the agent knowingly undermined or destroyed his
or her reasons-responsive capacity).
Second, and relatedly, Reasons accounts appear to cohere with the bulk of
ordinary judgments about cases (e.g., why young children are treated differ-
ently than normal adults, why cognitive and affective defects seem to under-
mine responsibility, why manipulation that disrupts people’s rational abilities
seems troublesome). So, there is a “fit” with the data of ordinary practices and
judgments.
Finally, Reasons accounts provide us with a comparatively straightforward
account of our apparent uniqueness in having free will and being morally respon-
sible. To the extent to which we are responsive to a special class of considerations
(and to the extent to which it is worthwhile, valuable, or appropriate to be sensi-
tive to these considerations), this form of agency stands out against the fabric of
the universe; it constitutes a particularly notable form of agency worth cultivating.
Reasons accounts are thus appealing because of a package of explanatory and nor-
mative considerations.
However we characterize reasons, it would be enormously problematic if we sel-
dom acted for reasons, or if it turned out that there was a large disconnect between
conscious, reasons-involved deliberative powers and the causal mechanisms that
move us. Unfortunately, a body of research in social psychology and neuroscience
appears to suggest exactly these things (Doris 2002; Nelkin 2005; Woolfolk et al.
2006; Nahmias 2007).
2. SITUATIONISM
Consider the following classic social psychology experiments.
Phone Booth: In 1972 Isen and Levin performed an experiment on subjects
using a pay phone. As a subject emerged from the pay phone, confederates of the experimenters “inadvertently” spilled a manila envelope full of papers in front of the subject. The remarkable thing was the
difference a dime made. When subjects had just found a dime in the change return
of the telephone, helping behavior jumped to almost 89 percent of the time. When
subjects had not found a dime in the change return, helping behavior occurred only
4 percent of the time (Isen and Levin 1972).4
Samaritan: In 1973, Darley and Batson performed an experiment on semi-
nary students. In one case, the students were asked to prepare a talk on the Good
Samaritan parable. In the other case, students were asked to prepare a talk on poten-
tial occupations for seminary students. Subjects were then told to walk to another
building to deliver the talk. Along the route to the other building, a confederate of
the experimenters was slumped in a doorway in apparent need of medical attention.
The contents of the seminarians’ talks made little difference in whether they stopped
to help or not. What did make a sizable difference was how time-pressured the sub-
jects were. In some conditions, subjects were told they had considerable time to
make it to the next building, and in other conditions subjects were told they had
some or considerable need to hurry. The more hurried the subjects were, the less
frequently they helped (Darley and Batson 1973).
Obedience to Authority: In a series of widely duplicated experiments, Stanley
Milgram showed that on the mild insistence of an authority figure, a range of very
ordinary subjects were surprisingly willing to (apparently) shock others, even
to apparent death, for failing to correctly answer innocuous question prompts
(Milgram 1969).
The literature is filled with plenty of other fascinating cases. For example: psy-
chologists have found that members of a group are more likely to dismiss the evi-
dence of their senses if subjected to patently false claims by a majority of others in
the circumstance; the likelihood of helping behavior depends in large degree on
the numbers of other people present and the degree of familiarity of the subject
with the other subjects in the situation (Asch 1951; Latané and Rodin 1969); social
preferences are driven by subliminal smells (Li et al. 2007); one’s name can play a
startlingly large role in one’s important life choices; and so on (Pelham et al. 2002).
Collectively, such work is known as situationist social psychology, or situationism.
The general lesson of situationism is that we underestimate the influence of the
situation and we overestimate the influence of purportedly fixed features of the
agent. Crucially, the “situational inputs” typically operate without the awareness of
the agent. Seemingly inconsequential—and deliberatively irrelevant—features of
the context or situation predict and explain behavior, suggesting that our agency is
somewhat less than we presume. Indeed, when agents are asked about the relevance
of those apparently innocuous factors in the situation, the usual reply is outright
denial or dismissal of their relevance to the subject’s deliberation and decision. Thus,
contemporary psychological science threatens the plausibility of Reasons accounts
by showing that the basis of our actions is disconnected from our assessments of
what we have reason to do.5
While particular experiments may be subject to principled dispute, the general
lesson—that we frequently underestimate the causal role of apparently irrelevant
features of contexts on our behavior, both prospectively and retrospectively—has
considerable support (Doris 2002, 12–13).6 There are ongoing disputes about the
precise implications of situationism, and in particular, what these data show about
the causal role of personality and what implications this might have for philosophi-
cal theories of virtue (Doris 1998; Harman 1999; Kamtekar 2004; Merritt 2000;
Sabini et al. 2001). In the present context, however, these concerns can be brack-
eted. What follows does not obviously depend on the nature of personality or char-
acter traits, and so whatever the status of those debates, we have reason to worry
about situationism’s implications for Reasons views.
The situationist threat operates on two dimensions. On the one hand, it may
threaten our “pretheoretical” or folk view of free will. On the other hand, to the
extent to which situationism suggests we lack powers of agency that are appealed
to on philosophical accounts, it threatens our philosophical theories of free will.
In what follows I focus on the philosophical threat, and in particular, the threat to
Reasons accounts. Whatever we say about folk judgments of freedom and respon-
sibility, what is crucial here is what our best theory ought to say about free will, all
things considered.
None of this implies that situationism’s significance for ordinary beliefs is alto-
gether irrelevant for philosophical accounts. On the contrary: Reasons accounts
are partly motivated by their coherence with ordinary judgments. If ordinary judg-
ments turn out to be at odds with the scientific picture of our agency, this undercuts
some of the motivation for accepting a Reasons theory. A Reasons theorist might
be willing to sever the account’s appeal to ordinary judgments, but if so, then some-
thing needs to be said about the basis of such accounts. For conventional Reasons
theorists, however, situationism presents an unappealing dilemma: we can either
downgrade our confidence in Reasons theories (in light of the threat of situation-
ism), or we can disconnect our theories from ordinary judgments and downgrade
our confidence in our ordinary judgments of responsibility.
So, the dual threat situationism presents to common sense and philosophical
theorizing does not easily disentangle. Nevertheless, my focus is primarily on the
philosophical threat, whatever its larger implications for our ordinary thinking and
its consequent ramifications for theorizing.7

3. SITUATIONISM, IRRATIONALITY, AND BYPASSING


Situationism might impugn our rationality in at least two ways. It might show that
those psychological processes that bring about action are irrational. Alternatively, it could show that what rationality we have is too shallow to constitute freedom.
I explore these possibilities in turn.
Let’s start with a nonhuman case of impugned rationality.
The common digger wasp can act in some intriguingly complex ways. Consider
its behavior in providing food for its eggs. It will drag a stung cricket to the threshold
of its burrow, release the cricket, enter the burrow for a moment (presumably to look
things over), and then return to the threshold of the burrow to pull the cricket in.
Here is the surprising thing: if you move the cricket more than a few inches from the
threshold of the burrow when the wasp enters its burrow for the first time without
the cricket, the wasp will “reboot” the process, moving the cricket closer, dropping
it, checking out the burrow, and returning outside to get the cricket. Surprisingly,
the wasp will do this every time the cricket is moved. Again, and again, and again, and
again, if necessary. The wasp never does the obvious thing of pulling the cricket into
the burrow straightaway. Douglas Hofstadter calls this sphexishness, or the property
of being mechanical and stupid in the way suggested by the behavior of the digger
wasp (Sphex ichneumoneus) (Dennett 1984, 10–11).
Now consider the human case. Perhaps situationism shows that we are sphexish.
We think of ourselves as complicated, generally rational creatures that ordinarily act
in response to our best assessments of reasons. What situationism shows, perhaps,
is that we are not like that at all. Instead, our agency is revealed as blind instinct,
perhaps masked by high-level confabulation (i.e., the manufacturing of sincere but
ad hoc explanations of the sources of our action).8 If one thinks of instinct as para-
digmatically opposed to free will, then we have a compact explanation for why situ-
ationism threatens free will: it shows we are instinctual and not free.
Recall, however, that the present issue is not whether our naive, pretheorized
views of the self are threatened by situationism. What is at stake is whether a
Reasons theory of free will should be threatened by situationism. In this context, it
is much harder to make out why it should matter that some of our rational capaci-
ties reduce to the functioning of lower-level “instinctual” mental operations. One
way to put the point is that it is simply a mistake to assume that instinct is necessarily
opposed to rationality. To be sure, some behavior we label “instinctive” might be, in
some cases, irrational by nearly any measure. Still, there are cases where instinctive
behavior is ordinarily rational in a straightforwardly instrumental sense. In a wide
range of circumstances, our “instincts” (to breathe, or to jerk our hands away from
burning sensations, to socialize with other humans, and so on) are paradigms of
rational behavior. What we call “instinct” is (at least sometimes) Mother Nature’s
way of encoding a kind of bounded rationality into the basic mechanisms of the
creature. We view it as mere instinct only when we find the limits of that rationality,
or when it conflicts with our higher-order aims.
On this picture, the fact that we are sphexish (in the sense of having an instinctual
base for rational behaviors) does not threaten our freedom. Instinctual behaviors
can be irrational, especially when they operate under conditions different from those in which they were presumably acquired, or when they come into conflict with some privileged
element of the psychic economy.9 However, neither the fact of instinctual behav-
ior nor the possibility of a reduction of our complex behavior to more basic, less
globally rational elements shows that we cannot respond to reasons any more than
pockets of local irrationality in the wasp would show us anything about the wasp’s
rationality under normal conditions. Our rationality is, like the wasp’s, plausibly
limited. Such limitations, however, do not constitute global irrationality.
The line of response suggested here—construing instinctual, subagen-
tial mechanisms with bounded rationality as partly constitutive of our general
rationality—might suggest a different sort of worry. On this alternative worry, what situationism shows is not that there is no sense to be made of the idea that we might
be rational, but rather that what rationality we have is a function of processes that
bypass our agency. That is, under ordinary circumstances our behavior might be
rational or not, but what drives that behavior does not involve a contribution of
active, deliberating agency. To the extent to which a Reasons account requires that
the seat of reasons-responsive agency be a conscious, deliberative self, situationism
will threaten this picture of free will.
If situationism could show this much, this might indeed constitute a threat
that Reasons theorists should take seriously. However, there are good reasons to
doubt that situationism has shown this much. Even if our agency were oftentimes
bypassed, this would not obviously undermine all attributions of free will. At least in
ordinary practice, we do not hold that agents must always exercise their conscious,
active agency in order to be responsible for the outcome. Negligence, for example,
is typically treated as a case of responsibility where the failure to act need not be the
product of an intentional or conscious choice not to act.10 And, in a range of cases,
we seem perfectly willing to regard ourselves and others as responsible for actions
that arrive unexpectedly, but whose arrival we regard as telling us something about
where we stand. One’s enthusiasm (or lack thereof) in response to, for example, a
marriage proposal, a job offer, or an invitation to a weekend away can be unexpected.
Nevertheless, we can (and oftentimes do) regard those reactions as privileged and
the emotional bedrock on which the praiseworthiness and blameworthiness of sub-
sequent action are anchored (Arpaly 2003). So, the mere fact that our active agency
is sometimes bypassed does not obviously show we lack the freedom sufficient for
moral responsibility.
Reactions that bypass our conscious, deliberative selves could be a problem if,
for example, the bypassing mechanisms were themselves never or rarely respon-
sive to reason. However, this possibility suffers from the same basic problem that
plagued the deterministic gloss on situationism: the evidential base does not sup-
port so sweeping a generalization. Unless we receive compelling evidence for the
possibility—evidence that extends far beyond the idea that situations play a larger,
oftentimes puzzling, role in action than we ordinarily acknowledge—the mere fact
of our actions bypassing or overriding conscious deliberation is not, in itself, prob-
lematic for a Reasons theory.11
Still, there seems to be a lurking difficulty for the Reasons theorist. If we con-
cede that experimental data can show that there are conditions under which we are
irrational, we might wonder what the point is at which we become too irrational
for our ordinary practices of moralized praise and blame to retain their integrity.
Too much irrationality, too often, might mean that we cannot often enough assume that people satisfy the rationality conditions on free will for us to go on as we did before.
To resolve this matter, we require two things: (1) more detailed experimental
data than we currently have, and (2) an account of what sort of rational powers we
need for free will and moral responsibility. In their absence, it is difficult to evaluate
whether the frequency and degree of irrationality we ordinarily exhibit undermine
free will. I cannot provide the former, but in a bit I will attempt to provide a sketch
of the latter: it is, I think, enough to blunt some of the worry, even if it cannot eradi-
cate it altogether. First, though, I want to consider one further issue that might be
taken to supplement the basic worry generated by situationism.

4. AN AUTOMATICITY THREAT?
A suitably informed interlocutor might contend that even if situationism lacks the
resources to show that we lack free will, other work in psychology can do so. In
recent years a fertile research program has sprung up around detailing the scope of
fast, nonconscious determinants of action and preferences and their mechanisms of
operation. This work is usually thought of as describing the automaticity of human
action. Automatic processes, if sufficiently irrational and pervasive, would presum-
ably show that we are not often enough responsive to reasons.
As John Kihlstrom (2008) characterizes it, automatic processes have four
features:
1. Inevitable evocation: Automatic processes are inevitably engaged by the
appearance of specific environmental stimuli, regardless of the person’s conscious
intentions, deployment of attention, or mental set.
2. Incorrigible completion: Once evoked, they run to completion in a ballistic
fashion, regardless of the person’s attempt to control them.
3. Efficient execution: Automatic processes are effortless, in that they consume
no attentional resources.
4. Parallel processing: Automatic processes do not interfere with, and are not
subject to interference by, other ongoing processes—except when they compete
with these processes for input or output channels, as in the Stroop effect. (156)
Part of what makes automatic processes notable is not the mere fact of quick,
usually sub- or unconscious mental operations but the pervasiveness of automatic
processes in general. That is, proponents of the automaticity research program sug-
gest that automatic behaviors are not the exception but rather the rule in human
action production (Bargh and Ferguson 2000).
The situationist and automaticity research programs are complementary. Both
emphasize that we overestimate the degree to which we understand the sources of
our behavior, that conscious deliberative reflection is oftentimes affected (one might
even say “contaminated”) by forces largely invisible to us, and that these forces are
ones that we would regard as irrelevant to the rationality of the act if we were aware
of them.
The work on automaticity raises some interesting questions of its own, and it
merits a much more substantial reply than I will give to it here. Still, because con-
cerns about automaticity interlock with the situationist threat, it may be useful to
sketch what sorts of things the Reasons theorist might say in reply to automaticity
worries.
First, it is hardly clear how much of our mental life is automatic in the way defined
at the start of this section (Kihlstrom 2008). There are ongoing disputes about this
issue among psychologists, and the dust is not yet settled, especially with respect to
the matter of the ubiquity and diversity of automatic processes and the extent
to which ballistic processes are immune to deliberate modification. Notice, though,
that as long as the formation of conscious intentions has work to do in the constrain-
ing of courses of action and the assignments of weights to elements of deliberation,
and as long as those processes can respond to reasons, there seems to be room for a
picture of agency that characterizes the responsibility-relevant notion of freedom or
control in terms of rational agency.
A critic might object that this picture of intentional control is precisely what psy-
chologists (from the earlier work of Benjamin Libet to more recent work by Daniel
Wegner and John Bargh) have been denying. Despite the considerable attention this
work has generated, some of the most dramatic claims—for example, that conscious
intentions do no work in action production—have been subject to trenchant criti-
cism on both conceptual and empirical grounds (Mele 2009; Nahmias 2002, 2007;
Bayne 2006). So, the Reasons theorist will surely be quick to note that matters are
not obviously settled in the skeptic’s favor.
Moreover, in response to threats from automatic processes, many of the same
replies as were offered against situationism are available. First, that a process is auto-
matic does not mean it is necessarily irrational. What matters is whether we are appro-
priately responding to reasons, regardless of whether we are thinking of them as such.
So, the free will skeptic would need to show that our automatic processes are both
ubiquitous and overwhelmingly irrational in ordinary action. Second, that subagential
automatic processes can partly constitute our agency does not mean that those auto-
matic processes are necessarily not attributable to us. It may well make sense to regard
a good many automatic processes as attributable to us, depending on the details. Third,
automatic does not mean uncontrolled, especially if the agent embraces the presence
of those automatic processes, has knowingly inculcated them, or if they appropriately
contribute to a disposition, aim, or practice that is valuable to the agent.
This is all too quick, of course. Sorting out the details will require painstaking
work I will not attempt to pursue here. Still, considerations along the lines of those
I have sketched will surely be part of what a Reasons theorist will say in reply to
more detailed objections derived from research on automaticity. Here, as elsewhere,
the task of reading off philosophical ramifications from empirical work is a messy
business.
Presumably, some automatic processes will not be suitably responsive to
reasons—moral or otherwise—in a range of comparatively ordinary circumstances
of action. To that extent, what philosophers have to learn from psychology and
related sciences are the contours of ordinary dispositions to rationally respond to
pertinent considerations. The degree to which the empirical data raise philosophi-
cal problems, however, is unlikely to be settled in the philosopher’s armchair or the
scientist’s lab, for (as I argue in the next few sections) the issues are fundamentally
both empirical and philosophical.

5. GIVING UP SOME ASSUMPTIONS?


Situationism does not show that we are always irrational, or that situational forces
always bypass our agency. So, situationism poses no threat for those reasons. Still,
a critic might note, even if we are sometimes (perhaps even regularly) as rational
as we can realistically hope to be, our rational, moral natures are very fragile and
bounded. The critic might charge that this is the real situationist threat.
That seems right to me. In order to better address this criticism, however, I think
we must recast Reasons accounts, abandoning some suppositions that are usually
folded into such accounts. Let me explain.
Many accounts of free will are implicitly committed to something I shall call
atomism. Atomism is the view that free will is a nonrelational property of agents; it
is characterizable in isolation from broader social and physical contexts. An atom-
ist (in the present sense) holds that whether a given agent has free will and/or is
capable of being morally responsible can, at least in principle, be determined simply
by reading off the properties of just the agent. Atomistic theories provide character-
izations of free will or responsible agency that do not appeal to relational properties,
such as the normative relations of the agent to institutions or collectives.
Atomism is often coupled with a view that there is only one natural power or
arrangement of agential features that constitutes free will or the control condition
on moral responsibility. This is a monistic view of the ontology of free will. Monistic
views include those accounts that hold that free will is the conditional ability to
act on a counterfactual desire, should one want to. Identificationist accounts, which
hold that free will is had only when the agent identifies with a special psychological
element (a desire, a value, an intention, etc.), are also monistic. So are libertarian
accounts, on which one acts freely only when one acts in a specific nondetermin-
istic fashion. In contrast, nonmonistic (or pluralistic) accounts hold that there are
multiple agential structures or combinations of powers that constitute the control
or freedom required for moral responsibility.
If we assume that the freedom or control implicated in assessments of moral
responsibility is a single, unified capacity that relies on a particular cross-situationally
stable mechanism, then the sciences of the mind will be threatening to these
accounts. The situation-dependent nature of our capacities seems to be perhaps the
most compelling claim of situationist research. Consequently, the implicit picture
of our natural capacities invoked by going philosophical theories—an atomistic,
monistic picture—looks to be just plain false.
Psychological research suggests that what appears to us as a general capacity
of reasons responsiveness is really a cluster of more specific, ecologically limited
capacities indexed to particular circumstances. Consequently, what powers we have
are not had independently of situations. What capacity we have for responding to
reasons is not some single thing, some fixed structure or cross-situationally stable
faculty.
Importantly, degradation of our more particular capacities can be quite localized
and context-specific. Consider the literature on “stereotype threat” or “social identity
threat.” What Steele, Aronson, and their colleagues have found is that performance in
a wide range of mental and physical activities is subject to degradation in light of
subjects perceiving that there is some possibility of their being evaluated in terms of
a negative stereotype (Aronson et al. 1999; Steele et al. 2002). So, for example, when
there is a background assumption that women and blacks do less well than white
men at math, the performance of women and blacks on math exams—a task that
plausibly involves a species of rationality, if anything does—will drop when the exam
is presented as testing native ability. These startling results disappear when the threat
is removed, as when, for example, the exam is presented as testing cognitive pro-
cesses and not purportedly native ability. One can do the same thing to white males,
by priming them with information about their stereotypically poor performance on
math tests when compared with their Asian counterparts. When the threatening
comparison is made salient to subjects, performance drops. When the threatening
comparison is taken away, and the exam is explicitly presented as not susceptible to
such bias, scores rise for the populations ordinarily susceptible to the threat.
Remarkably, these results generalize to a variety of more and less cognitive
domains, including physical performance (Steele et al. 2002).12 Indeed, the more
general thesis, that the environment can degrade our cognitive capacities in
domain-specific ways, has considerable support (Doris and Murphy 2007). One
could resist the general lesson by arguing that (perhaps) there is a basic underlying
capacity that is stable, and perception (whether conscious or not) of stereotypes
affects the ease with which that capacity is exercised. Notice, though, that this
just pushes the problem with atomistic views back a level. Even if our basic capaci-
ties are stable across contexts, our abilities to exercise them vary by circumstance,
and this suggests that our situation-indexed capacities vary considerably.
Given that free will is a fundamentally practical capacity—it is tied to action,
which always occurs in a circumstance—the characterization of our freedom
independent of circumstance looks like a vain aspiration. What we need to know is
whether we have a capacity relevant for action (and, on the present interpretation,
responsible action)—this requires an account of free will that is sensitive to the role
of the situation. An atomistic account cannot hope to provide this, so we must build
our account with different assumptions.
There are various ways the conventional Reasons theorist might attempt to reha-
bilitate atomism and monism. In what follows, however, I explore what possibilities
are available to us if we retain an interest in a Reasons account but pursue it without
the assumptions of atomism and monism.

6. REASONS-RESPONSIVENESS RECONCEIVED
Situationism presses us to acknowledge that our reasons-sensitive capacities are
importantly dependent on the environment in which those capacities operate, and
that the cross-situational robustness of our reasons-responsive agency is a sham. At
its core, the idea is intuitive enough—the power of a seed to grow a tree is only a
power it has in some contexts and not others. The challenge is to remember that this
is true of persons, too, and that this generates the corresponding need to appreciate
the circumstances that structure our powers.13
In this section, my goal is to provide an account that is (1) consistent with a
broadly Reasons approach, (2) free of the supposition of atomism and monism
about the involved agential powers, and (3) compatible with a wide range of plausi-
ble theories of normative ethics. So, the account I offer is one where the characteris-
tic basic structure of responsible agency is to be understood as a variably constituted
capacity to recognize or detect moral considerations in the relevant circumstances,
and to appropriately govern one’s conduct in light of them.
It may help to start by contrasting the account with a more familiar approach.
Consider the traditional “conditional analysis” of classical compatibilism. On this
approach, to say that an agent has the capacity to do otherwise is to attribute a con-
ditional power (or, perhaps, a conditional analysis of a categorical power): were one
to decide to do otherwise, one would do otherwise.
The traditional conditional analysis was elegant and problematic in equal mea-
sure. In recent years there have been a number of intriguing attempts to resurrect
the general approach.14 Whatever the virtues of those accounts, the picture I am
offering is rather different. On the account I favor, the powers that constitute free
will are precisely those that are required for a sufficiently good arrangement of prais-
ing and blaming practices, one that has as its aim the attainment of our recognizing
and appropriately responding to moral considerations.15
Let us start with a relatively uncontroversial characterization of the terrain, given
the presumption that free will is the capacity distinctive in virtue of which agents
can be morally responsible. On this picture, we can say this:

For an agent S to be responsible for some act token A in context C requires
that S is a responsible agent and the action is morally praiseworthy or morally
blameworthy.

The present schema invokes several technical notions: the idea of a responsible
agent, and an account of what it is for an action to be morally praiseworthy and mor-
ally blameworthy. I will leave the latter two notions unanalyzed, focusing on the
implications of abandoning the standard atomistic and monistic model of respon-
sible agency and its capacities.
Here is how I think the Reasons theorist should characterize responsible agency,
and by extension, free will:

An agent S is a responsible agent with respect to considerations of type M in
circumstances C if S possesses a suite of basic agential capacities implicated in
effective self-directed agency (including, for example, beliefs, desires, inten-
tions, instrumental reasoning, and generally reliable beliefs about the world
and the consequences of action) and is also possessed of the relevant capacity
for (A) detection of suitable moral considerations M in C and (B) self-gover-
nance with respect to M in C. Conditions (A) and (B) are to be understood in
the following ways:

A. the capacity for detection of the relevant moral considerations obtains when:
i. S actually detects moral considerations of type M in C that are pertinent
to actions available to S or
ii. in those possible worlds where S is in a context relevantly similar to
C, and moral considerations of type M are present in those contexts,
in a suitable proportion of those worlds S successfully detects those
considerations.
B. the capacity for volitional control, or self-governance with respect to the
relevant moral considerations M in circumstances C obtains when either
i. S is, in light of awareness of M in C, motivated to accordingly pursue
courses of action for which M counts in favor, and to avoid courses of
action disfavored by M or
ii. when S is not so motivated, in a suitable proportion of those worlds
where S is in a context relevantly similar to C
a. S detects moral considerations of type M, and
b. in virtue of detecting M considerations, S acquires the motivation to
act accordingly, and
c. S successfully acts accordingly.

And, the notions of suitability and relevant similarity invoked in Aii and Bii
are given by the standards an ideal, fully informed, rational observer in the actual
world would select as at least co-optimal for the cultivation of our moral reasons-
responsive agency, holding fixed a range of general facts about our current custom-
ary psychologies, the cultural and social circumstances of our agency, our interest
in resisting counterfactuals we regard as deliberatively irrelevant, and given the exis-
tence of genuine moral considerations, and the need of agents to internalize norms
of action for moral considerations at a level of granularity that is useful in ordinary
deliberative and practical circumstances. Lastly, the ideal observer’s determination
is structured by the following ordering of preferences:

1. that agents recognize moral considerations and govern themselves
accordingly in ordinary contexts of action in the actual world
2. that agents have a wider rather than narrower range of contexts of action and
deliberation in which agents recognize and respond to moral considerations.

So, free will is a composite of conditions A and B. In turn, A and B are subject
to varied ways of being constituted in the natural world. It is a picture of free will
that can be had without a commitment to atomism and monism of the sort that the
contemporary sciences of the mind impugn.
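The shared modal structure of conditions A and B can be displayed in a compact schema. What follows is only a rough formalization sketch, not an addition to the account: let W_C be the set of worlds in which S is in a context relevantly similar to C, let θ be the "suitable proportion" fixed by the ideal observer's standards, and let φ stand for the relevant success condition (detection of the pertinent M-type considerations in the case of A; detection-cum-motivation-cum-action in the case of B). Each capacity then has the form

\[ \mathsf{Cap}_{\varphi}(S, C) \iff \varphi(S, C) \;\lor\; \frac{\bigl|\{\, w \in W_C : \varphi(S, w) \,\}\bigr|}{\bigl|\, W_C \,\bigr|} \;\ge\; \theta . \]

On this gloss, S is a responsible agent with respect to M in C just in case S has the basic agential capacities and both instances of the schema (detection and self-governance) hold. The schema smooths over the difference between Bi, which requires only suitable motivation in the actual case, and Bii, which adds detection and successful action in the relevantly similar worlds; and it leaves θ and the similarity relation to be supplied by the ideal observer's ordering of preferences, exactly as in the prose statement.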
Before exploring the virtues of this account, some clarification is in order. First,
the preceding characterizations make use of the language of possible worlds as a con-
vention. The involved locutions (e.g., “in those worlds”) are not meant to commit us to
a particular conception of possibility as referring to concrete particulars. Second, the
possibilities invoked in the preceding account are—by design—to be understood as
constituting the responsibility-relevant capacities of agents. These capacities will ordi-
narily be distinct from the “basic abilities,” or the intrinsic dispositions of agents.16
Instead, they are higher-order characterizations picked out because of their rele-
vance to the cultivation and refinement of those forms of agency that recognize and
respond accordingly to moral considerations of the sort we are likely to encounter in
the world. This is a picture on which the relevant metaphysics of our powers is deter-
mined not by the physical structures to which our agency may reduce but instead by
the roles that various collections of our powers play in our shared, normatively struc-
tured lives. Third, the account is neutral on the nature of moral considerations. Moral
considerations presumably depend on the nature of right and wrong action and facts
about the circumstances in which an agent is considering what to do.17
So, where does all of this get us? Characterizing the capacities that constitute free
will as somewhat loosely connected to our intrinsic dispositions allows us to accommodate
the picture of our agency recommended by the psychological sciences without
abandoning the conviction that our judgments and practices of moral responsibility
have genuine normative structure to them. We are laden with cognitive and sub-
cognitive mechanisms that (however ecologically bounded) sometimes can and
do operate rationally. There are surely times when our autonomic, nonconscious
responses to features of the world come to hijack our conscious plans. When this
occurs, sometimes it will mean we are not responding to reasons. Other times it will
mean that we are responding to reasons, just not the reasons our conscious, delib-
erative selves are aware of or are hoping will guide action. Still, this fact does not mean
that we are incapable of recognizing and responding to reasons, even moral reasons.
The facts concerning our unexercised capacities, at least as they pertain to assess-
ments of the responsibility-relevant notion of control, depend on counterfactuals
structured by normative considerations.18
A distinctive feature of the account is that it foregrounds a pluralist epistemol-
ogy of moral considerations. It recognizes that sensitivity to moral considerations
is not a unified phenomenon, relying on a single faculty or mechanism. Moral con-
siderations may be constituted by or generated from things as diverse as affective
states, propositional content, situational awareness, and so on. Consequently, the
corresponding epistemic mechanisms for apprehending these considerations will
presumably be diverse as well.19 Moreover, the present picture does not hold that
sensitivity to moral considerations must be conscious, or that the agent must recog-
nize a moral consideration qua moral consideration for it to count as such. An agent
may be moved by moral considerations without consciously recognizing those con-
siderations and without conceiving of them as moral considerations.20
Another notable feature of the account is that it makes free will variably had in
the same individual. That is, the range of moral considerations an agent recognizes
in some or another context or circumstance will vary. In some circumstances agents
will be capable of recognizing a wide range of moral considerations. In other circum-
stances those sensitivities may be narrower or even absent. When they are absent,
or when they dip beneath a minimal threshold, the agent ceases to be a responsible
agent, in that context. We need not suppose that if someone is a responsible agent
at a given time and context, he or she possesses that form of agency at all times
across all contexts. In some contexts I will be a responsible agent, and in others not.
Those might not be the same contexts in which you are a responsible agent.
When we have reason to believe that particular agents do not have the relevant
sensitivities or volitional capacities in place, we ought not hold that they are genu-
inely responsible, even if we think that in other circumstances the agent does count as
responsible. We may, however, respond in responsibility-characteristic ways with an
eye toward getting the agent to be genuinely responsible in that or related contexts.
Or, we may withhold such treatment altogether if we take such acculturation to be
pointless, not worth the effort, or impossible to achieve in the time available to us.21
However, we can understand a good deal about the normative structure of moral
responsibility if we think of it as modestly teleological, aiming at the development of
morally responsive self-control and the expansion of contexts in which it is effective.
This limited teleology is perhaps most visible in the way a good deal of child rear-
ing and other processes of cultural inculcation are bent to the task of expanding the
range of contexts in which we recognize and rightly respond to moral considerations
(Vargas 2010). By the time we become adults, praise and blame have comparatively
little effect on our internalizing norms, for we oftentimes have come to develop
habits of thought and action that deflect the force of moral blame directed at us.
Still, the propriety of our judgments turns on facts about whether we are capable
of recognizing and appropriately responding to the relevant moral considerations
in play.

7. REASSESSING THE SITUATIONIST THREAT


All of this is well and good, one might reply, but how does this help us address the
situationist threat? To see how, we can revisit the situationist experiments men-
tioned at the outset of the chapter.
In retrospect, Isen and Levin’s Phone Booth seems unproblematic. On the present
account, the fact that a situation might radically alter our disposition to respond to
reasons to help is neither puzzling nor especially troubling. As mentioned earlier,
the natural explanation here seems to be the effects of the dime on mood. There
is consensus among psychologists that mood affects helping behavior (Weyant
1978). In this particular case, there is nothing to suggest that the agent has been
robbed of the capacities that constitute free will. The basic capacities we have reason
to be worried about in ascribing responsibility appear to be intact, the influence of
the situation is benign (i.e., enhancing willingness to help others), and anyway, the
helping may well be supererogatory.22
Situations may influence mood, and mood may affect the likelihood of some
or another resultant action, but those influences (unless radically debilitating) do
not usually change the presence or absence of the capacities that constitute free
will. What the present account helps to explain is why the mere fact of a change
in the likelihood of some action (e.g., because of a change in salience of some fact
in the agent’s deliberations or the effect of mood)—or even a fundamental change
in capacity to do otherwise in the relevant sense—does not automatically entail
that the agent lacks free will, however counterintuitive that might initially seem to
us (Talbert 2009; Vargas and Nichols 2007). The higher-level capacities that are
required for moral responsibility are not necessarily disrupted by such changes.
There is some complexity to the way mood affects behavior, and it raises a poten-
tial difficulty for the present account. Positive moods generally increase helping
behavior, in contrast to neutral moods. However, the effect of negative moods on
helping behavior is varied. It is especially sensitive to the cost to the agent and the
apparent benefit generated by the helping in some interesting ways. In cases where
helping is of low cost to the agent but of plausibly high benefit (e.g., volunteering
time to fund-raise by sitting at a donation desk for the American Cancer Society),
negative moods actually increase helping behavior over neutral moods. However,
in cases where the benefits are low and the costs to the agent are high (going
door-to-door to raise funds for the Little League), negative moods tend to mildly
suppress helping behavior (Weyant 1978).23 In cases where both the benefits and
the costs are matched high or matched low, negative moods have no effect over
neutral states.
These data may suggest a problem for the present account. Perhaps what the
mood data show is that the agent is not being driven by reasons so much as by an
impulse to maintain equilibrium in moods. According to this line of objection, help-
ing is merely an instrumental means to eliminate the bad mood, albeit one that is
structured by the payoffs and challenges of doing so. If this is so, however, then it
appears that agents in bad moods are not helping for good reasons, or
even for moral reasons at all. Consequently, the present account seems to have made
no headway against the threat that experimental psychology presents to Reasons
accounts.24
I agree that the role of mood in agents is complex. Still, I think the challenge can
be met. As an initial move, we must be careful not to presume that affective states
and moral reasons are always divorced. Plausibly, moral considerations will be at
least partly constituted by an agent’s affective states. Moreover, an agent’s affective
states will play a complex role in the detection of what moral considerations there
are. So, what the mood data might show is not that agents in negative moods fail to
help for good reasons, or act on no moral reasons at all, but rather that being in negative
moods can make one aware of, or responsive to, particular kinds of moral reasons.25
Commiseration and sympathy are quite plausibly vehicles by which the structure of
morality becomes integrated with our psychology. And, as far as I can tell, nothing
in the mood literature rules out this possibility. Indeed, what we may yet have rea-
son to conclude is that the mechanisms of mood equilibrium are some of the main
mechanisms of sympathy and commiseration. To note their activity would thus not
undermine a Reasons picture so much as it would explain its mechanisms.26 If all
this is correct, the proposed account can usefully guide our thinking about Phone
Booth and related examples.
Now consider Samaritan. What this experiment appears to show is that increased
time pressure decreases helping behavior. Nonhelping behavior is, presumably, com-
patible with free will. An agent might decide to not help. Or, depending on how that
subject understands the situation, he or she might justifiably conclude that helping is
supererogatory. So, decreased helping behavior is not direct evidence for absence of
free will. Still, perhaps some agents in Samaritan suffered a loss of free will.
Here are two ways that might have happened. First, if what happened in Samaritan
is that time pressure radically reduced the ability of agents to recognize that some-
one else needs help (which is what at least some of the subjects reported), then this
sort of situational effect can indeed undermine free will precisely by degrading an
agent’s capacity to recognize moral considerations. So, perhaps some Samaritan sub-
jects were like this. A second way to lose free will in Samaritan-like circumstances
could be when time pressure sufficiently undermines the ability of the agent to act
on perceived pro-helping considerations.
A natural question here is how much loss of ability constitutes loss of capacity.
Here, we can appeal to the account given earlier, but it does not give us a bright line.
At best, it gives us some resources for what sorts of things to look for (e.g., what data
do we have about how much time pressure, if any, saps ordinary motivational effi-
cacy of recognized moral considerations?). Some of these issues are quasi-empirical
matters for which more research into the limits of human agency is required. Still, in
the ordinary cases of subjects in Samaritan, it seems that we can say this: if one did
not see the “injured” person, then one is not responsible. Matters are more compli-
cated if one did see the “injured” person and thought he or she needed help, and the
agent thought him- or herself capable of helping without undue risk (the Samaritan
study did not distinguish between agents who had and lacked these convictions). In
such circumstances, I am inclined to think that one could avoid blameworthiness
only if, roughly, time or another situational pressure were sufficiently great that most
persons in such circumstances would be incapable of bringing themselves to help. It
is, as far as I know, an open question whether there are any empirical data that speak
to this question, one that folds in the agent’s understanding of the situation. So, I
think, the Samaritan data do not give us a unified reason to infer a general lack of free
will in time-pressure cases; sometimes time pressure may reduce helping behavior
for any number of reasons, only some of which are free will disrupting.
Finally, let us reconsider the Milgram Obedience cases. Here, there is some reason
to doubt that subjects in the described situations retain the responsibility-relevant
capacities. At least in very general terms, Obedience-like cases are precisely ones in
which agents are in unusual environments and/or subject to unusual pressures.
Plausibly, they are situations that stress the ordinary capacities for responding to rea-
sons, or they invoke novel cognitive and emotional processes in agents. This shift in
situation, and the corresponding psychological mechanisms that are invoked, may
decrease the likelihood that an ordinary agent will have the responsibility-relevant
capacities. We check, roughly, by asking whether in a significant number of delibera-
tively relevant circumstances the evaluated agent would fail to appropriately respond
to the relevant moral considerations. Ceteris paribus, the higher our estimation of
the failure rate in a given agent, the more reason we have to doubt that the agent
possesses the capacity required for moral responsibility. Still, in Obedience-like situa-
tions, agents are not necessarily globally insensitive to moral considerations or even
insensitive only to the relevant moral considerations. Some may well be suitably sensitive
to some or all of the moral considerations in play. (Indeed, some subjects did resist
some of the available abhorrent courses of action with greater and lesser degrees of
success.) So, there is a threshold issue here, and in some cases it will be comparatively
unclear to us what an ideal observer would say about a case thus described.

8. WHAT ABOUT ACTIVE, SELF-AWARE AGENCY?


Before concluding, it may be useful to remark on what role, if any, the present pic-
ture leaves for agency of the active, self-aware variety. Suppose that we accept that
some of our reasons-detecting processes are conscious and others are not, and that
some move from conscious to unconscious (or in the opposite direction) through
sufficient attention or practice. The outputs of these varied mechanisms will some-
times converge, and other times conflict. The role that remains for active, conscious,
deliberative agency is crucial. Minimally, it functions as an arbiter in the regular
flow of these processes. We intervene when our conscious, active self judges it fit
to do so. Sometimes this executive self intervenes to resolve conflicts. Sometimes it
intervenes to derail a mix of other mechanisms that have converged on a conclusion
that, upon reflection, is consciously rejected. But conscious deliberation and the
corresponding exercise of active agency do not always involve themselves solely to
turn the present psychological tide. Sometimes we are forward-looking, setting up
plans or weighing up values that structure downstream operation.
Much of the time it is obvious what the agent should do, and what counts
as a satisfactory way of doing it. Among adults it may frequently be the case that
conscious deliberation only injects itself into the psychological tide when there is
a special reason to do so. Such economy of intervention is oftentimes a good thing.
Conscious deliberation is slow and demanding of neurochemical resources. Like
all mechanisms, it is capable of error. Even so, to the extent to which it effectively
resolves conflicts and sets in motion constraints on deliberation and action through
planning and related mechanisms of psychological disciplining, it has an important
role to play.
Situationism suggests that the empirical facts of our agency are at odds with our
self-conception, that context matters more than we tend to suppose. The picture of
free will I have offered is an attempt to be responsive to those facts. The resultant
account is therefore likely to also be at some remove from our naive self-conception.
For example, the tendency to think that our capacities for control are metaphysi-
cally robust, unified, and cross-situationally stable is not preserved by my account of
free will. Instead, free will involves capacities that are functions of agents, a context
of action, and normatively structured practices. It is simply a mistake to think of free
will as a kind of intrinsic power of agents.
Notice that this means free will is partly insulated from direct threats arising from
experimental research. No matter how much we learn about the physical constitu-
tion of agency, it is exceedingly difficult to see how science alone could ever be in a
position to settle whether some or another arrangement of our physical nature has
normative significance for the integrity of attributions of praise and blame.

9. FROM THREAT TO TOOL


Current work in the psychological sciences threatens conventional Reasons
accounts. I have argued that these threats can, to a large extent, either be shown
to be less serious than they may appear or be met by abandoning some standard
presumptions about what free agency requires (e.g., abandoning monism and atom-
ism). I now wish to turn the argument on its head. Rather than thinking of the psy-
chological sciences as presenting threats to free will, we do well to think of them as
providing us with resources for enhancing what freedom we have.
There are at least two ways in which the data might guide us in the quest for
enhanced control and the construction of circumstances conducive to our
considerations-mongering agency. First, experimental results may tell us something
about the conditions under which we are better and worse at perceiving moral con-
siderations. Second, experimental results can illuminate the conditions under which
we are better and worse at translating our commitments into action. Admittedly,
translating the discoveries of experimental work into practical guidelines for moral
improvement will always be a dodgy affair. Still, there are some simple reasons for
optimism about the utility of these data.
One way of being better at perceiving moral considerations is to simply avoid
things that degrade our perceptual capacities. For example, suppose Samaritan-like
data show that for most people time pressure significantly reduces the capacity to
perceive moral considerations. Such a discovery would have an important implica-
tion for our moral ecology: in those cases where we have an interest in not signifi-
cantly limiting helping behavior, it behooves us to limit time-pressuring forces.
A related way data might prove to be useful is simply by making us aware of how
situational effects influence us. There is some evidence that knowledge of situ-
ational effects can, at least sometimes, help reduce or eliminate their deleterious
effects (Pietromonaco and Nisbett 1992; Beaman et al. 1978). These are simple
but suggestive ways in which data and philosophical theory might work together to
limit our irrationality.
More ambitiously, there is intriguing evidence to the effect that situations do not
merely contain the power to degrade our baseline capabilities, but that they may
enhance our capacity to detect at least some varieties of considerations. For example,
when individuals perceive bias in their favor, it can actually enhance some cognitive
tasks, and not just because of motivational effects.27 Aronson et al., for instance,
reported that in one study, males “performed worse in conditions where the female
stereotype was nullified by experimental conditions. Specifically, males tended to
perform worse when told that the test was not expected to show gender differences,
suggesting that their performance may be boosted by the implicit stereotype” (42).
In other words, stereotypes about male math advantage seem to benefit males in con-
texts where the stereotype is operating, as opposed to neutral contexts that counter
stereotype bias. Thus far, there is disappointingly little experimental information on
this phenomenon, which we might call stereotype advantage. Nevertheless, it points
to another way in which the data, contrary to threatening us, might instead serve to
contribute to the powers of our agency.
This possibility obviously raises troubling issues. It is surely morally problem-
atic to exploit false stereotypes for cognitive advantage. Moreover, there are practi-
cal challenges to equitably exploiting stereotypes that are indexed to subsets of the
larger population in a widespread way. Nevertheless, the possibility of nonproblem-
atic cognitive enhancers to our baseline capacities is worth further consideration.
Experimental data might also affect how we go about translating principles into
action. As we have seen, one of the disturbing elements in the Milgram studies, and
more recently in the Abu Ghraib tortures, is the suggestion that ordinary people can
be led to behave abhorrently. The combination of the ordinariness of the perpetra-
tors with the uncommonness of the atrocity is precisely what is so striking about
these cases (Doris and Murphy 2007). One way of understanding such instances is
that situational effects do not necessarily undo one’s grasp of moral considerations
(although they may do this, too), but that at least sometimes, they weaken the con-
nection between conviction and action.
Issues here are complex. In the real world, distinguishing between perceptual and
motivational breakdowns may be difficult. Moreover, there are further complex issues
concerning weakness of will and the point at which we do not expect people to
satisfy the normative demands to which we ordinarily take them to be subject.28
For that matter, there is a lively discussion in psychology concerning the extent to
which our conscious attitudes explain our behaviors (McGuire 1985; Kraus 1995).
One thing that situationism strongly suggests, however, is that circumstances make
a difference for the ability of agents to control their behaviors in light of their prin-
ciples.29 To the extent that responsible agency requires that an agent’s attitudes con-
trol the agent’s behaviors, experimental data can again provide some guidance on
how we might better shape our environments to contribute to that control.
In sum, while situationist data might initially appear to threaten our freedom and
moral responsibility, what we have seen is, if not quite the opposite, at least some-
thing considerably less threatening. Given a situation-sensitive theory of respon-
sible agency and some attention to the data, we find that our agency is somewhat
different than we imagine. The situationist threat turns out to be only one aspect of a
more complex picture of the forces that enhance and degrade our agency. Whether
and where we build bulwarks against the bad and granaries for the good is up to us.
Here, a suitably cynical critic might retort that this is indeed something, but not
enough. After all, we are still faced with a not-altogether-inspiring upshot that since
we do not control our situations as much as we would like, we are still not responsible
agents as much as we might have hoped.
At this point, a concession is in order. I agree that we have less freedom than we
might have hoped for, but I must insist that we have more freedom than we might
have feared. Although we must acknowledge that our freedom-relevant capacities
are jeopardized outside of responsibility-supporting circumstances, we may still
console ourselves with the thought that we have a remarkable amount of control in
suitable environments.
Such thoughts do not end the matter. The present equilibrium point gives rise to
new issues worth mentioning, if only in closing. In particular, it is important to recog-
nize that societies, states, and cultures all structure our actual capacities. Being raised
in an antiracist context plays a role in enhancing sensitivity to moral considerations
tied to antiracist concerns. Similarly, being raised in a sexist, fascist, or classist culture
will ordinarily shape a person’s incapacities to respond to egalitarian concerns. Such
considerations may suggest that we need to ask whether societies or states have some
kind of moral, practical, or political obligation to endeavor to shape the circumstances
of actors in ways that insulate them against situational effects that degrade their (moral
or other) reasoning. We might go on to ask whether societies or states have commen-
surate obligations to foster contexts that enhance our rational and moral agency. If they
do, it suggests that free will is less a matter of science than it is of politics or morality.

ACKNOWLEDGMENTS
Thanks to Henrik Walter and Dana Nelkin (twice over) both for providing com-
ments on predecessor papers and for affording me the circumstances in which I
couldn’t put off writing about these things. Thanks also to Kristin Drake, Eddy
Nahmias, Joshua Knobe, David Velleman, Till Vierkant, and David Widerker for
helpful feedback on ideas in this chapter. I am also grateful to Ruben Berrios and
Christian Miller for their commentaries at the Selfhood, Normativity, and Control
conference in Nijmegen and the Pacific APA in 2007, respectively; thanks, too, to
audience members in both places.
NOTES
1. In the contemporary philosophical literature on free will, this seems to be the domi-
nant (but not exclusive) characterization of free will (Vargas 2011).
2. These “other” conditions might be relatively pedestrian things: for example, being
sometimes capable of consciousness, having beliefs and desires, being able to form
intentions, having generally reliable beliefs about the immediate effects of one’s
actions, and so on. Also, a Reasons account need not preclude other, more ambi-
tious demands on free will. One might also hold that free will requires the presence
of indeterminism, nonreductive causal powers, self-knowledge, and so on. However,
these further conditions are not of primary interest in what follows.
3. Elsewhere, I have attempted to say a bit about the attractions of a Reasons view in
contrast to “Identificationist” views (Vargas 2009). There are, however, other pos-
sibilities beyond these options.
4. It may be worth noting that the particulars of this experiment have not been easily
reproduced. Still, I include it because (1) there is an enormous body of literature
that supports the basic idea of “mood effects” dramatically altering behavior, and (2)
because this example is a familiar and useful illustration of the basic situationist idea.
Thanks to Christian Miller for drawing my attention to some of the troubles of the
Isen and Levin work.
5. In ordinary discourse, to say something “is a threat” is to say something ambiguous. It
either can mean the appearance of some risk, or it can indicate the actuality of risk or
jeopardy, where this latter thing is meant in some nonepistemic way. When my kids and
I pile out of the minivan and into our front yard, I strongly suspect our neighbors regard
us as a threat to the calm of the neighborhood. Nevertheless, there are some days on
which the threat is only apparent. Sometimes we are simply too tired to yell and make
the usual ruckus. As I will use the phrase the situationist threat, it is meant to be neutral
between the appearance and actuality of jeopardy. Some apparent threats will prove to
be only apparent, and others will be more and less actual to different degrees.
6. We must be careful, though, not to overclaim what the body of literature gets us.
Although it is somewhat improbable that one could do so, it is possible that one
could generate an alternative explanation that (1) is consistent with the data but that
(2) does not have the implication that we are subject to situational effects that we
misidentify or fail to recognize. This would be a genuine problem for the situationist
program.
7. For what it is worth, I suspect that one source of the perception of a situationist
threat is traceable to an overly simple description of the phenomena. Recall the data
in Phone Booth. We might be tempted to suppose that what it shows is that agents
are a site upon which the causal levers of the world operate, if we describe it as a
case where “the dime makes the subject help.” Such descriptions obscure something
important: the fact of the agent’s psychology and its mediating role between situation
and action. The more accurate description of Phone Booth seems to be this: the situ-
ation influences (say) the agent’s mood, which affects what the agent does. Once we
say this, however, we are appealing to the role of some psychological elements that
presumably (at least sometimes) constitute inputs to and/or elements of the agent’s
active, conscious self. If we focus on this fact, we do not so easily lose the sense of the
subject’s agency. A coarse-grained description of situationist effects may thus some-
times imply a form of fatalism that bypasses the agent and his or her psychology.
More on a related issue in section 4.
8. This idea plays a particularly prominent role in the work of Daniel Wegner and pro-
ponents of recent research on the automaticity of mental processes (e.g., John Bargh).
See section 4.
9. On some views, the privileged psychological element could include things such as
the agent’s conscious deliberative judgments or some maximally coherent arrange-
ment of the agent’s desires and beliefs.
10. Negligence is a particularly difficult aspect of moral responsibility to account for, so
perhaps this is not so telling. Matt King (2009) has recently argued against treating
negligence as a case of responsibility precisely because it lacks the structure of more
paradigmatic cases of responsibility.
11. The bypass threat might work in a different way. Perhaps the worry is not that our
conscious deliberative agency is sometimes trumped by our emotions. Perhaps the
picture is, instead, that our active, deliberative agency never plays a role in deciding
what we do. Perhaps situationist data suggest that our active, mental life is a kind of
sham, obscuring the operation of subagential processes beyond our awareness. I am
dubious, but for the moment I will bracket this concern, returning to it when I dis-
cuss the automaticity literature.
12. One remarkable result from that study: women shown TV commercials with women
in stereotypically unintelligent roles before an exam had poorer performance on
math tests (393).
13. Some social psychologists have contended that the degree to which populations
emphasize individual versus situation in explanation and prediction varies across
cultures (Nisbett 2003). Recently, the idea that circumstances structure decision
making in subtle and underappreciated ways has received popular attention
because of the visibility of the work of Thaler and
Sunstein (2009). The present challenge is to provide a characterization of what the
responsibility-relevant notion of control comes to given that our decisions are vul-
nerable to “nudges” of the sort they describe.
14. For a useful overview of the difficulties faced by the classical conditional analysis,
see Kane 1996. For a critical discussion of more recent approaches in this vein, see
Clarke 2009.
15. Much of the machinery I introduce to explicate this idea can, I think, be paired with
a different conception of the normative aim for moral responsibility; the specific
powers identified will presumably be somewhat different, but the basic approach
is amenable to different conceptions of the organizing normative structure to the
responsibility system. I leave it to others to show how that might go.
16. I borrow the term “basic abilities” from John Perry, although my usage is, I think, a bit
different (Perry 2010).
17. I favor the language of moral considerations (as opposed to moral reasons) only
because talk of reasons sometimes is taken to imply a commitment to something like
an autonomous faculty that properly operates independently of the effects of affect.
There is nothing in my account that is intended to exclude the possibility that affect
and emotion, in both the deliberating agent and in those with whom the agent inter-
acts, play constitutive roles in the ontology of moral considerations.
18. Notice that even if the skeptic is right that we are very often not suitably responsive
to moral considerations, the present account suggests that there may yet be some
reason for optimism, at least to the extent to which we can enhance our capacities and
expand the domains in which they are effective.
19. It would be surprising if the epistemic mechanisms were the same for recognizing
such diverse things as that someone is in emotional pain, that other persons are ends
in themselves, and that one should not delay in getting one’s paper to one’s commen-
tator. The cataloging of the varied epistemic mechanisms of moral considerations will
require empirical work informed by a more general theory of moral considerations,
but there is already good evidence to suggest that there are diverse neurocognitive
mechanisms involved in moral judgments (Nichols 2004; Moll et al. 2005).
20. Huck Finn may be like this, when he helps his friend Jim escape from slavery. For an
insightful discussion of this case, and the virtues of de re reasons responsiveness, see
(Arpaly 2003).
21. Cases of this latter sort can occur when one visits (or is visited by) a member of a
largely alien culture. In such cases, we (at least in the West, currently) tend toward
tolerance of behavior we would ordinarily regard as blameworthy precisely because
of the conviction that the other party operates out of an ignorance that precludes
apprehension of the suitable moral considerations. As George Fourlas pointed out to
me, in recently popular culture, this phenomenon has been exploited to substantial
and controversial comedic effect by comedian Sasha Baron Cohen.
22. The supererogatory nature of helping can be important if, for example, one is worried
about the nonhelping condition, and how infrequently people help strangers with
minor problems. Perhaps one more global issue here is simply how rare it is that we
act on moral considerations, whether because of failures of perception or motivation.
I return to this issue, at least in part, at the end.
23. For a helpful discussion of this literature, and its significance for philosophical work
on moral psychology, see (Miller 2009b; Miller 2009a).
24. Thanks to Christian Miller for putting this concern to me.
25. Note that none of this requires that the agent conceive of the reasons as moral rea-
sons. As noted earlier, the relevant notion of responding to moral reasons is, borrow-
ing Arpaly’s terminology, responsiveness de re—not de dicto.
26. What would undermine the Reasons approach? Here’s one possibility amenable to
empirical data: If mood data showed that people were driven to increased helping
behavior when they ought not (e.g., if the only way to help would be to do some-
thing really immoral), this would suggest that at least in those cases mood effects were
indeed disabling or bypassing the relevant moral considerations-sensitive capacities.
But in mood-mediated cases, such behavior is rarely ballistic in this way.
27. Claude Steele et al. (2002) doubt it is a motivation effect because “the effort
people expend while exercising stereotype threat on a standardized test has been
measured in several ways: how long people work on the test, how many problems
they attempt, how much effort they report putting in, and so on. But none of
these has yielded evidence, in the sample studied, that stereotype threat reduces
test effort” (397).
28. For my part, I do not think there is anything like a unified account to be told of the
justified norms governing what counts as weakness of will and culpable failure. I sus-
pect that these norms will vary by context and agent in complex ways, and in ways that
are sensitive to folk expectations about psychological resilience and requirements on
impulse management. In his characteristically insightful way, Gary Watson (2004)
may have anticipated something like this point in the context of addiction: “The
moral and legal significance of an individual’s volitional weaknesses depends not only
on judgments about individual responsibility and the limits of human endurance but
on judgments about the meaning and value of those vulnerabilities” (347).
29. For example, in one study, the subjects’ attitudes controlled their behavior more
when they were looking at themselves in a mirror (Carver 1975).

REFERENCES
Aronson, Joshua, Michael J. Lustina, Catherine Good, Kelli Keough, Claude M. Steele,
and Joseph Brown. 1999. When White Men Can’t Do Math: Necessary and Sufficient
Factors in Stereotype Threat. Journal of Experimental Social Psychology 35: 29–46.
Arpaly, Nomy. 2003. Unprincipled Virtue. New York: Oxford University Press.
Asch, Solomon. 1951. Effects of Group Pressures upon the Modification and Distortion of
Judgment. In Groups, Leadership, and Men, edited by Harold Guetzkow, 177–190.
Pittsburgh: Carnegie Press.
Bargh, John A. 2008. Free Will Is Un-natural. In Are We Free? Psychology and Free Will,
edited by John Baer, James C. Kaufman, and Roy F. Baumeister, 128–154. New York:
Oxford University Press.
Bargh, John A., and M. J. Ferguson. 2000. Beyond Behaviorism: On the Automaticity of
Higher Mental Processes. Psychological Bulletin 126 (6 Special Issue): 925–945.
Bayne, Tim. 2006. Phenomenology and the Feeling of Doing: Wegner on the Conscious Will. In
Does Consciousness Cause Behavior? An Investigation of the Nature of Volition, edited by S.
Pockett, W. P. Banks, and S. Gallagher, 169–186. Cambridge, MA: MIT Press.
Beaman, A. L., P. L. Barnes, and B. McQuirk. 1978. Increasing Helping Rates through
Information Dissemination: Teaching Pays. Personality and Social Psychology Bulletin
4: 406–411.
Carver, C. S. 1975. Physical Aggression as a Function of Objective Self-Awareness and
Attitudes towards Punishment. Journal of Experimental Social Psychology 11: 510–519.
Clarke, Randolph. 2009. Dispositions, Abilities to Act, and Free Will: The New
Dispositionalism. Mind 118 (470): 323–351.
Darley, John, and Daniel Batson. 1973. “From Jerusalem to Jericho”: A Study of Situational
and Dispositional Variables in Helping Behavior. Journal of Personality and Social
Psychology 27: 100–108.
Dennett, Daniel. 1984. Elbow Room. Cambridge, MA: MIT Press.
Doris, John. 1998. Persons, Situations, and Virtue Ethics. Nous 32: 504–530.
Doris, John. 2002. Lack of Character. New York: Cambridge University Press.
Doris, John, and Dominic Murphy. 2007. From My Lai to Abu Ghraib: The Moral
Psychology of Atrocity. Midwest Studies in Philosophy 31: 25–55.
Fischer, John Martin, and Mark Ravizza. 1998. Responsibility and Control: A Theory of
Moral Responsibility. New York: Cambridge University Press.
Harman, Gilbert. 1999. Moral Philosophy Meets Social Psychology: Virtue Ethics and the
Fundamental Attribution Error. Proceedings of the Aristotelian Society 99 (3): 315–331.
Isen, Alice, and Paula Levin. 1972. Effect of Feeling Good on Helping. Journal of Personality
and Social Psychology 21: 384–388.
Kamtekar, Rachana. 2004. Situationism and Virtue Ethics on the Content of Our
Character. Ethics 114 (3): 458–491.
Kane, Robert. 1996. The Significance of Free Will. Oxford: Oxford University Press.
Kihlstrom, John F. 2008. The Automaticity Juggernaut—Or, Are We Automatons after
All? In Are We Free? Psychology and Free Will, edited by John Baer, James C. Kaufman,
and Roy F. Baumeister, 155–180. New York: Oxford University Press.
King, Matt. 2009. The Problem with Negligence. Social Theory and Practice 35: 577–595.
Kraus, Stephen. 1995. Attitudes and the Prediction of Behavior: A Meta-analysis of the
Empirical Literature. Personality and Social Psychology Bulletin 21: 58–75.
Latané, Bibb, and Judith Rodin. 1969. A Lady in Distress: Inhibiting Effects of Friends
and Strangers on Bystander Intervention. Journal of Experimental Social Psychology 5:
189–202.
Li, Wen, Isabel Moallem, Ken A. Paller, and Jay A. Gottfried. 2007. Subliminal Smells Can
Guide Social Preferences. Psychological Science 18 (12): 1044–1049.
McGuire, W. J. 1985. Attitudes and Attitude Change. In The Handbook of Social Psychology,
edited by G. Lindzey and E. Aronson, 238–241. New York: Random House.
Mele, Alfred R. 2009. Effective Intentions: The Power of the Conscious Will. New York:
Oxford University Press.
Merritt, Maria. 2000. Virtue Ethics and Situationist Personality Psychology. Ethical Theory
and Moral Practice 3: 365–383.
Milgram, Stanley. 1969. Obedience to Authority. New York: Harper and Row.
Miller, Christian. 2009a. Empathy, Social Psychology, and Global Helping Traits.
Philosophical Studies 142: 247–275.
Miller, Christian. 2009b. Social Psychology, Mood, and Helping: Mixed Results for Virtue
Ethics. Journal of Ethics 13: 145–173.
Moll, Jorge, Roland Zahn, Ricardo de Oliveira-Souza, Frank Krueger, and Jordan Grafman.
2005. The Neural Basis of Human Moral Cognition. Nature Reviews Neuroscience 6:
799–809.
Montague, P. Read. 2008. Free Will. Current Biology 18 (14): R584–R585.
Nahmias, Eddy. 2002. When Consciousness Matters: A Critical Review of Daniel Wegner’s
The Illusion of Conscious Will. Philosophical Psychology 15 (4): 527–541.
Nahmias, Eddy. 2007. Autonomous Agency and Social Psychology. In Cartographies of the
Mind: Philosophy and Psychology in Intersection, edited by Massimo Marraffa, Mario De
Caro, and Francesco Ferretti, 169–185. Berlin: Springer.
Nelkin, Dana. 2005. Freedom, Responsibility, and the Challenge of Situationism. Midwest
Studies in Philosophy 29 (1): 181–206.
Nelkin, Dana. 2008. Responsibility and Rational Abilities: Defending an Asymmetrical
View. Pacific Philosophical Quarterly 89: 497–515.
Nichols, Shaun. 2004. Sentimental Rules: On the Natural Foundations of Moral Judgment.
Oxford: Oxford University Press.
Nisbett, Richard E. 2003. The Geography of Thought: How Asians and Westerners Think
Differently—and Why. New York: Free Press.
Pelham, Brett W., Matthew C. Mirenberg, and John T. Jones. 2002. Why Susie Sells
Seashells by the Seashore: Implicit Egotism and Major Life Decisions. Journal of
Personality and Social Psychology 82 (4): 469–487.
Perry, John. 2010. Wretched Subterfuge: A Defense of the Compatibilism of Freedom and
Natural Causation. Proceedings and Addresses of the American Philosophical Association
84 (2): 93–113.
Pietromonaco, P., and Richard Nisbett. 1992. Swimming Upstream against the
Fundamental Attribution Error: Subjects’ Weak Generalizations from the Darley and
Batson Study. Social Behavior and Personality 10: 1–4.
Sabini, John, Michael Siepmann, and Julia Stein. 2001. The Really Fundamental Attribution
Error in Social Psychological Research. Psychological Inquiry 12: 1–15.
Steele, Claude M., Steven J. Spencer, and Joshua Aronson. 2002. Contending with
Group Image: The Psychology of Stereotype and Social Identity Threat. Advances in
Experimental Social Psychology 34: 379–440.
Talbert, Matthew. 2009. Situationism, Normative Competence, and Responsibility for
Wartime Behavior. Journal of Value Inquiry 43 (3): 415–432.
Thaler, Richard H., and Cass R. Sunstein. 2009. Nudge: Improving Decisions about Health,
Wealth, and Happiness. New York: Penguin.
Vargas, Manuel. 2009. Reasons and Real Selves. Ideas y Valores: Revista colombiana de
filosofía 58 (141): 67–84.
Vargas, Manuel. 2010. Responsibility in a World of Causes. Philosophic Exchange 40:
56–78.
Vargas, Manuel. 2011. The Revisionist Turn: Reflection on the Recent History of Work
on Free Will. In New Waves in the Philosophy of Action, edited by Jesus Aguilar, Andrei
Buckareff, and Keith Frankish, 143–172. Palgrave Macmillan.
Vargas, Manuel, and Shaun Nichols. 2007. Psychopaths and Moral Knowledge. Philosophy,
Psychiatry, and Psychology 14 (2): 157–162.
Wallace, R. Jay. 1994. Responsibility and the Moral Sentiments. Cambridge, MA: Harvard
University Press.
Watson, Gary. 2004. Excusing Addiction. In Agency and Answerability, 318–350. New York:
Oxford University Press.
Wegner, Daniel M. 2002. The Illusion of Conscious Will. Cambridge, MA: MIT Press.
Weyant, J. 1978. Effects of Mood States, Costs, and Benefits on Helping. Journal of
Personality and Social Psychology 36: 1169–1176.
Wolf, Susan. 1990. Freedom within Reason. New York: Oxford University Press.
Woolfolk, Robert L., John Doris, and John Darley. 2006. Identification, Situational
Constraint, and Social Cognition: Studies in the Attribution of Moral Responsibility.
Cognition 100: 283–401.
Index

Abu Ghraib, 342 Awareness


Accuracy, Norm of, 267, 275 proprioceptive, 108, 110
Action time, 108, 109
awareness, 106, 114, 185
bodily, 20, 21, 88, 131, 141, 249, 251, Banks, W., 38, 48, 79f, 84, 183
262f Bargh, J., 8, 18, 19, 26, 140, 147, 172f, 244,
control, 111, 132, 141, 146, 147, 152, 246, 258, 260, 325, 331, 332, 345
191, 221ff, 253–4, 255 Baumeister, R., 5, 23, 91, 141, 201, 224,
endogenous, 34ff, 44, 49 292f
free, 3, 11, 43, 136ff Bayne, T., 15f, 26, 141, 146, 155, 332
inhibition, 34ff, 47, 75f, 237 Bell, G., 306
initiation, 3, 7, 34ff, 49f, 51, 126, 229, Berti, A., 107, 114
233, 235, 237 Binocular rivalry, 166
selection, 46, 191 Blakemore, S., 104, 106, 109, 146, 148,
stimulus driven, 34ff, 42ff, 49 156
unconscious, 174, 184 Blindsight, 169–71, 175, 184
Addiction, 129, 130–1, 190, 225, 307, Body schema, 120, 128
315, 346 Brass, M, 7, 33, 47, 50, 67, 70, 76ff, 188
incentive sensitisation model, Bratman, M., 50, 121, 283, 290f
131, 203 Brodman area 10, 66
Agnosia, visual, 169f Broome, J., 265
Ainslie, G., 205, 307, 310, 313 Bypassing, 91ff, 256, 328, 330, 346
Alquist, J., 91
Anarchic hand syndrome, 18, 37, 83, 105, Carruthers, P., 23, 301, 304
115, 129, 174, 184, 189 Choice, 6, 14, 24, 34f, 39f, 42f, 47, 60ff,
Anosognosia for hemiplegia (AHP), 63, 66, 70, 88ff, 104, 139ff, 200, 204,
106–8, 114 211, 212ff, 245, 260, 325
Anterior insula, 12, 120f blindness, 8, 17, 23, 299, 301ff
Attention, cognitive, 21, 250ff Clarkson, B., 306
Attentional Deficit Hyperactive Cognitive integration, 16, 26, 163ff, 186–8
Disorder (ADHD), 225 Commisurotomy (“split brain”)
Automaticity, 7–8, 17, 20–1, 140, syndrome, 167–8, 175
148, 155, 222–3, 244–6, 253–6, Comparator models, 106, 146f, 274, 301
331f, 345 Compatibilism, 2ff, 25, 138, 335
Automatisms, 25, 174, 184 conditional analysis of, 335
Computer Mediated Extrospection, 23–4, planning, 226ff


298ff, 306 priming, 231
Concern, 202–3 negative, 226, 237f
strength, 204–5 regulation, 18–19, 200ff, 226, 311
Confabulation, 25, 302, 329 Epihenomenalism, 5, 19, 41, 51, 217
Conscious control (volition), 3ff, 17–18, Epistemic conservatism, argument from,
23, 34, 47, 184, 222, 271, 340 168–9
Consciousness Epistemic feelings, 269, 274, 276, 278
agency as marker, 160ff Epistemic goal, 263
cross-talk medium, 188 Evaluative control, 283ff, 292ff
disorders of, 164–5, 167 Executive control, 46ff, 50
function of, 23, 183 Experience of choice (freedom), 14, 136ff,
reportability as marker, 160–1, 165 211
Crescioni, W., 91 default theory of, 143, 145, 146, 147,
Cross-modal matching, 12, 104, 111 150ff
Experience of coercion, 136, 139, 152
Decision making, 3f, 7, 34, 39ff, 60, 67, 70, Extrastriate body area, 42, 113
80, 226, 270, 326
hunches, 144–5, 155 Fatalism, 6, 87ff, 344
Decision Prediction Machine, 70 Fischer, J., 22, 44, 326
Decomposition, explanatory, 2 Fluency, 268, 272
Default network, 113 Fodor, J, 188, 189, 244, 257, 300, 317
Deliberation, 12, 38, 91, 95, 121, 124, 125, Folk intuitions, 60, 68, 138, 153
128, 164, 216, 233, 247ff, 260, 284f, Folk psychology, 4, 60, 68, 113, 115, 300f
292, 303, 330, 336, 341 Forward models, 48f, 104, 106, 107, 124,
Delusions of control, 105, 122, 129, 132, 146
147, 148, 259 Fotopolou, A., 12–13
Dennett, D.C., 4, 7, 9–10, 23, 26, 138, 150, Frankfurt, H., 17ff, 129, 207
168, 200 212f, 300ff, 329 Free will, 1ff, 17, 39f, 50f, 60, 68, 73ff, 136,
Descriptive Experience Sampling, 305f 140ff, 199ff 212–14, 325ff, 335, 337,
Determinism, 2–3, 5, 24, 25, 44, 46, 60, 339
87ff, 142 atomism about, 333
Dick, P., 298 illusion of, 2, 5, 8ff, 48f, 141, 148, 190,
Distributed Motivation, 305, 307ff 214
Dorsal premotor cortex (BA6), 113, 114 monism about, 333
Dorsolateral prefrontal Cortex, 36f, 40, pluralism about, 333
47, 111 Freeman, A., 9–10
Dretske, F., 172, 265ff Frijda, N., 18–19
Frith, C., 12, 35, 106ff, 120f, 140, 147,
Efference copy, 105, 109, 110, 114 119, 156, 160f, 169
128, 146, 274 Frontal lobe patients, 225
Efferent binding, 188 Frontal pole, 215
Efficacious (causally), 41, 51, 88ff Functional magnetic resonance imaging
Embodiment, 128, 132 (fMRI), 62, 64
Emotion, 18f, 201, 144, 199ff, 221ff, 263, limitations, 69
269, 273, 311, 345
action tendencies, 236 Gallagher, S., 11–12, 146
appraisal, 205, 228 Ginet, C., 11
experience, 236 Goal attainment, 221, 222, 223–4
Goal selection (setting), 171, 173, 222, motor, 38ff, 48f, 77, 106–7, 122, 128
229, 267 proximal, 38f, 49, 73, 77ff, 121, 125,
Gollwitzer, P., 23, 295, 307f 128, 173
Graham, G. & Stephens, G.L., 122ff Introspection, 165ff, 175, 300ff
Isham, E., 79ff
Haggard, P., 38, 47, 67, 76ff, 104, 108ff,
111f, 119, 126, 149, 150, 188 James, W., 191, 250, 252f, 259
Hall, L., 8 Johannson, P., 8
Hard determinism, 2, 24 Joint action, 105, 130
Haynes, J.D., 6f, 25, 39 Judgment sensitivity, 291f
Heavey, C, 305f
Hieronymi, P., 22, 260, 283ff, 292ff Kertesz, I., 208ff
Holton, R., 5ff, 14–15, 25, 26, 142, 144, Kihlstrom, J, 331
155, 283, 291ff Ku Klux Klan, 309
Homunculus (fallacy), 2, 9, 163, 171, 189, Kuhl, J. & Koole, S., 201
190, 300
Humphrey, N., 171 Lateral Frontopolar Cortex, 65
Hurlburt, R, 305f, 317 Lateral Intraparietal Area (LIP), 42
Hurley, S., 27 Lau, H., 37, 40, 41, 77ff, 84
Hyperintellectualism, 276 Lazy argument, 6, 89, 92f
Hysteria/Conversion disorder, 131 Lewis, D., 97, 98
Libertarianism, 2, 3, 139, 142, 143
Ideal rationality, 289 Libet, B., 6, 7, 24, 36ff, 47, 61ff, 73ff, 103,
Ideomotor theory, 191 149
Impulsivity, 226, 228 Longo, M, 111ff
Incompatibilism, 2, 7, 25, 60
Indeterminism, 3, 45, 52, 344 Managerial control, 283ff
Individualism, 4 unaware, 293ff
Inferior parietal Many-many problem, 21, 250ff
cortex, 48, 120, 121 Marcel, T., 115
lobe, 111, 48 McCann, H, 270ff
Insanity defense, 15 McGeer, V., 288ff
Instinct, 329 Mechanism, 46, 51
Instrumental reasoning, 266ff Medial dorsolateral prefrontal cortex, 40,
Intentional agency, 16f, 26, 38, 163ff, 251 47
Intentional binding effect, 109, 114, 149, Medial prefrontal cortex, 37, 41, 47, 65,
150 215
Intentional realism, 300, 305, 317 Mele, A., 7, 247, 260, 271ff
Intention-in-action, 121 Mental action, 20–2, 247, 264ff, 296
Intentions, 34, 38ff, 45, 49, 51, 75, 77ff automaticity, 248
103, 106, 108, 121ff, 221ff, 249ff, Mental agency, 19–22, 283ff
258, 263, 273, 290, 296 hierarchical accounts of, 286ff
awareness of, 48, 78, 79, 150 Mental muscle, 292ff
conscious, 6f, 38f, 51, 73, 77, 84, 258 Merino, M. (La Flaca Alejandra), 202,
distal (prior), 38, 50, 121, 122, 123, 205, 211
126, 128, 300 Merleau-Ponty, M., 120
experience of, 78f, 108, 120–1, 277 Metacognition, 22, 39, 125, 258, 268, 276
formation, 121–3, 125, 126, 128, 130–1 Metacognitive beliefs, 105, 275, 317
implementation, 23, 222–5, 307 Metarepresentation, 3, 290ff
Metzinger, T., 215 Papineau, D., 168, 265ff


Micro-precommitment, 314ff Pascal’s wager, 285ff
Middlemarch, 288ff Passingham, R., 77ff
Milgram, S., 202, 209, 210, 211, 213, 327, Passive movement, 13, 108ff
340, 342 Passivity (Receptivity), 19, 20, 21, 254,
Milner, D. & Goodale, M., 169, 171, 184 270ff, 294
Mindsets Periaqueductal grey, 215
deliberative, 228–34 Pettit, P., 4, 25, 155
implementational, 228–34 Phenomenology of agency, 10–11, 50,
Monitoring, 12, 183, 272 122, 127, 129, 132, 136ff
action, 107, 110, 119, 122ff Phenomenology of thought, 133, 301, 310
self, 23, 106, 259, 299, 307, 310ff Phone Booth, 327, 338ff, 344
Moods, 226, 234, 298, 299, 338ff Plan execution, 233–4
Moore’s paradox, 284ff Police battalion 101, Hamburg, 209
Moral principles, 303, 304 Possible worlds, 335ff
Moral responsibility, 14, 43, 325ff Post hoc inference, 10, 13, 25, 48, 49, 80,
Moran, R, 283, 288 105–6, 149, 207, 310
Morsella, E., 17–18 Posterior cingulate, 65, 66, 113
Motivation, 19, 36, 68, 125, 130, 186, 201, Posterior insula, 111, 121
208, 210, 221, 226, 227, 237–8, 263, Posterior parietal cortex, 40
269, 277, 298, 299, 305ff, 336, 339, Powerlessness counterfactuals, 88ff, 98
342, 346 Practical Reasoning, 93, 244, 265, 266
Motivational internalism, 130 Precommitment, 309, 310, 314ff
Multisensory integration, 124, 129 Predicition (motor control), 48, 50, 106,
Murray, D., 91 109
Predictability, 5–6, 25, 87ff
Nahmias, E., 91, 138, 140, 141, 160 Prediction (brainreading), 39, 60ff
Natural kinds, 262, 267, 277 Preference formation, 19, 206–7, 208,
Negligence, 330, 345 209, 211, 212, 214, 215, 263, 315,
Neuroimaging, 36, 37, 40, 60ff, 111, 112, 331, 336
113, 120, 185 Presupplementary motor area (preSMA),
Newsome, W, 42 36, 37, 40, 41, 47, 50, 67, 78, 113,
Nietzsche, F., 1, 26 114
Norms, 4, 9, 21–2, 25, 27, 131, 257, 263ff, Priming, 37, 206, 248, 334
336–8, 346 emotion, 241ff
constitutive, 265ff procedural, 229
epistemic, 264ff situational, 210
failure to abide by, 268 subliminal, 140, 147, 148, 186, 188,
instrumental, 265ff 189, 327
of fluency, 267 supraliminal, 7–8, 18, 172–3, 185–6,
188
Odysseus, 221–2, 225, 289, 307ff Prinz, W., 154, 199, 200, 215
Oedipus, 80ff PRISM (Parallel Responses into Skeletal
Open response selection, 7, 41 Muscle), 187
Ortega, M., 208 Proust, J., 21–2, 27

Pacherie, E., 121ff, 146, 301 Rachlin, H., 309ff


Paglieri, F., 11 Random dot motion task, 42, 44, 45
Ravizza, M, 22, 44 Selves, 214–6


Readiness Potential, 36, 38, 61, 62, 74, 83, Sense of agency, 13–14, 16, 26, 34, 48,
105, 149–50 103ff, 154, 174, 189, 190, 191, 212
lateralized, 38, 126 action initiation, 126
Reason accounts, 325ff control, 127
Reason Responsiveness, 44, 61, 337 feeling of, 105, 122
Reflective Control, 286ff, 292ff functional signature of, 107f
Report, 16, 26, 48, 61–2, 67, 74ff, 75, 78, judgment of, 105, 122
79, 81, 82, 84, 122, 137, 138, 139, long term, 125
140ff, 160–1, 165–7, 169, 170–1, minimal, 122, 123–4, 128, 146
175, 186, 204, 300, 303, 306 social modulation, 130–1
Resignation, 89ff Sense of effort, 105, 127, 345
Responsibility, 33, 39, 43, 46, 83, 228, Sense of (body) ownership, 13, 34, 108,
325ff 110, 111, 112, 128
Restrictivism, 139, 144 relation to sense of agency, 110–14,
Richardson, D., 97 118–20, 146
Roberts, S., 306 Sensorimotor control, 103
Rogers, R., 77ff Sensory attenuation effects, 109–10, 156
Roskies, A., 6, 25 Sentiment, 203, 215
Rostral cingulate zone (rCZ), 37 Shadlen, M, 42
Rostral supplementary motor area Situationism, 15, 325ff, 343
(rSMA), 37 Slippery Slope, 94f
Rubber Hand Illusion, 107, 108, 110, 111 Social norms, 4, 9
Social Simon Effect, 130
Samaritan, 327, 339f, 342 Somatosensory perception, 108, 120
Sartre, J-P., 129–30, 208 Spatial Body-Representation effects, 110
Schizophrenia, 129, 225, 259 Sphexishness, 329
Schneider, W. & Shriffrin, R.M., 245–6 Stereotypes, 172, 333f, 342
Scriven, M., 97 Stimulus driven behaviour, 42, 43, 163,
Self-attribution, 4, 122, 124, 190 164, 165
Self-consciousness, 110, 111, 113 Strawson, G., 20, 22, 244, 247–9, 256–7,
pre-reflective, 124, 127, 128 258, 271
Self-control (regulation), 4, 18, 23, 24, Superior colliculus, 163
202, 208, 211, 224, 225, 287ff, 299, Superior frontal gyrus, 113
305 Superior parietal lobe, 113
depletion, 224, 293 Supplementary motor area (SMA), 36,
Self-efficacy, 10, 199 61, 65
Self-governance, 334, 335 Supramodular Interaction Theory, 187ff
Self-ignorance, 8, 14, 17, 18, 19, 295
Self-interpretation/Self-narrative, 23, 123, Taylor, Richard, 90ff
125, 128, 204, 210, 216, 304 Temporal Attraction Effects (Intentional
Self-knowledge, 4, 23, 269, 304, 305, 306 binding), 109, 114, 149, 150
Self-maintenance, 18, 201, 202, 204, 207, Temporal discounting, 205
208, 212 Thought insertion, 122, 125, 139, 154, 259
Self-model, 215 Toxin puzzle, 284f
Self-referential processing network Transcranial Magnetic Stimulation
(Cortical-subcortical midline (TMS), 37, 47, 49, 78, 79, 84, 143,
system), 113, 215 146
Truth, aiming at, 263ff Wegner, D., 7, 8, 10, 13, 49, 78, 98, 103,
Tsakiris, M., 12–13, 14, 104, 108, 110, 105, 150, 189, 190, 210, 216, 332,
111, 119 345
Two visual systems hypothesis, 169, Williams, B., 263
171–2, 184–5 Willpower, 201, 308
Wittgenstein, L., 108
Vargas, M., 8, 13, 14–15, 22, 24, 26 Wolpert, D., 106
Velleman, D., 11, 14, 26, 137 Wu, W., 20–1, 22
Veto, 7, 24, 34, 47, 70, 73ff, 187
Vierkant, T., 22, 23 Zombie challenge, 5–9, 10, 11, 15, 16, 17,
Volition, 7, 23, 24, 25, 33ff, 80, 113, 147, 18, 23, 25, 26, 258
149, 167, 183, 188, 190, 191, 263, Zombie systems, 169, 174, 245, 258
337
Voluntary movement/action, 12, 23, 32,
34, 36, 37, 38, 40, 49, 50, 51, 61, 62,
66, 75, 103, 104, 105, 108, 109, 110,
111, 112, 113, 114, 119, 149, 183,
184, 191, 214
