PHILOSOPHY OF MIND
Series Editor
David J. Chalmers, Australian National University and
New York University
EDITED BY
ANDY CLARK, JULIAN KIVERSTEIN,
AND
TILLMANN VIERKANT
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide.
With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam
Oxford is a registered trademark of Oxford University Press in the UK and certain other
countries.
ISBN 978–0–19–974699–6
ISBN 978–0–19–987687–7
9 8 7 6 5 4 3 2 1
Printed in the United States of America
on acid-free paper
CONTENTS
Contributors vii
Index 351
CONTRIBUTORS
From 1986 to 2012, she conducted research in the domains of the history and philosophy
of logic and the philosophy of mind. For the past 10 years, she has concentrated on
metacognition, that is, epistemic self-evaluation. She has studied, in particular, its
evolutionary and conceptual relations with mindreading (ESF-EUROCORE pro-
ject, 2006–2009), and the variety of epistemic norms to which evaluators from dif-
ferent cultures are implicitly sensitive (ERC senior grant, 2011–2016).
Adina L. Roskies is an associate professor of philosophy at Dartmouth College. She
has PhDs in neuroscience and cognitive science and in philosophy. Previous posi-
tions include a postdoctoral fellowship in cognitive neuroimaging at Washington
University, and a job as senior editor of the journal Neuron. Dr. Roskies’s philo-
sophical research interests lie at the intersection of philosophy and neuroscience,
and include philosophy of mind, philosophy of science, and ethics. She was a
member of the McDonnell Project in Neurophilosophy and the MacArthur Law
and Neuroscience Project. She has published many articles in both philosophy and
the neurosciences, among which are several devoted to exploring and articulat-
ing issues in neuroethics. Recent awards include the William James Prize and the
Stanton Prize, awarded by the Society of Philosophy and Psychology; a Mellon
New Directions Fellowship to pursue her interest in neurolaw; and the Laurance S.
Rockefeller Visiting Faculty Fellowship from the Princeton University Center for
Human Values. She is coeditor of a forthcoming primer for judges and lawyers on
law and neuroscience.
Manos Tsakiris is reader in neuropsychology at the Department of Psychology,
Royal Holloway, University of London. His research focuses on the neurocogni-
tive mechanisms that shape the experience of embodiment and self-identity using
a wide range of research methods, from psychometrics and psychophysics to func-
tional neuroimaging.
Manuel Vargas is professor of philosophy at the University of San Francisco. He
is the author of Building Better Beings: A Theory of Moral Responsibility (2013); a
coauthor of Four Views on Free Will (2007); and coeditor of Rational and Social
Agency (in progress). Vargas has held fellowships from the National Endowment
for the Humanities, the Radcliffe Institute for Advanced Studies, and the Stanford
Humanities Center. He has also been a fellow at the McCoy Family Center for Ethics
at Stanford, and held visiting appointments at the University of California, Berkeley,
and at the California Institute of Technology. Vargas’s main philosophical interests
include moral psychology, philosophy of agency, philosophy of law, and the history
of Latin American philosophy.
Wayne Wu is associate professor in and associate director of the Center for the
Neural Basis of Cognition at Carnegie Mellon University. He graduated with a
PhD in philosophy from the University of California, Berkeley, working with John
Searle and R. Jay Wallace. His current research focuses on attention, agency, schizo-
phrenia, spatial perception, and how empirical work can better engage traditional
philosophical questions about the mind.
1
TILLMANN VIERKANT, JULIAN KIVERSTEIN,
AND ANDY CLARK
The belief in free will is firmly entrenched in our folk understanding of the mind
and among the most deep-rooted intuitions of Western culture. The intuition that
humans can decide autonomously is absolutely central to many of our social
institutions, from criminal responsibility to the markets, from democracies to marriage. Yet
despite the central place free will occupies in our commonsense understanding of
human behavior, the nature of this very special human capacity remains shrouded
in mystery. It is widely agreed that the concept of free will has its origins in Christian
thought (Arendt 1971, pt. 2, chap. 1) and yet from the earliest days philosophers,
theologians, and lawyers have disagreed about the nature of this capacity. Many
have doubted its existence. Some have gone further, charging the very idea of free
will with incoherence: Nietzsche famously declared the idea of free will “the best
self-contradiction that has been conceived” (Nietzsche 1966, 21). The philosophi-
cal controversies surrounding free will roll on to this day (see, e.g., Kane 2002; Baer
et al. 2008), but the folk, for the most part, continue to employ the concept as if its
meaning were wholly transparent, and its reality beyond question.
Into this happy state of affairs the cognitive sciences dropped a bomb. All of a
sudden it seemed as if the folk’s happy state of ignorance might have come to an
end, and the questions surrounding the existence of free will might now be settled
once and for all using the experimental tools of the new sciences of the mind. The
truth the scientists told us they had uncovered was not pretty: some claimed to
have decisively demonstrated that there is no free will, and any belief to the con-
trary was a relic of an outdated and immature folk understanding we must leave
behind once and for all (Libet 1985; Roth 1994; Wegner 2002; Prinz 2003; Lau
et al. 2004; Soon et al. 2008).1 The scientists told us that we are the puppets of
our unconscious brain processes, which “decide” what we will do quite some time
before we know anything about it. Some of the scientists’ statements were clearly
DECOMPOSING THE WILL
CATEGORY MISTAKE?
Before we turn to the real substance of the book, we must pause to consider some
objections to such an explanatory enterprise. Many philosophers have pointed
out that the sciences might be well placed to help us better understand volition,
but there are important limits on what science can tell us about autonomy (see,
e.g., Roskies, this volume). Science might be able to help us with the “will” part of
“free will,” but there are limits to what it can tell us about the “free” part. Science
cannot really help us with the big question of what free will consists in because
this is a metaphysical question about whether or not our actions are fully caus-
ally determined by nature, and the implications this has for our status as autono-
mous agents. This is a debate that has been mainly played out between parties that
believe free will exists. Libertarians (who believe that determinism and freedom are
incompatible and that we are free) have argued with compatibilists (who believe
that freedom and determinism are fully compatible so that our actions can be caus-
ally determined and also free under the right conditions). The position that many
neuroscientists seem to favor of hard determinism also exists in the philosophical
debate but is much less prominent.2 Hard determinists agree with the libertarians
that freedom of the will and determinism are incompatible, but they agree with the
compatibilists that determinism is probably true, and therefore they conclude that
we do not have free will.
It is easy to see why the participants in the dispute between libertarians and
compatibilists were not particularly moved by the findings from cognitive science.
At best, neuroscience can help with the question whether or not human behavior
and decision making really are causally determined. (Even here you might won-
der whether that really is possible, given that scientific experiments seem to simply
assume the truth of determinism.) It is difficult to see how the sciences can help
settle the conceptual question of the compatibility or otherwise of freedom of the
will with determinism. How could advances within neuroscience possibly help us
with this metaphysical question (Roskies, this volume)?3
In our volume we start from the assumption that the cognitive sciences will not
be able to help us directly with the discussion between libertarians and compati-
bilists. However, a simple dismissal of the scientific findings as irrelevant to the free
will debate would be, to say the least, premature. Even though these findings might
not have any bearing on the truth or falsity of compatibilism or libertarianism, it
does not follow that the scientific debate about volition is unrelated to the philo-
sophical one about free will. To see why this is the case, suppose for a moment that
compatibilism is correct, and freedom of the will is consistent with a belief in deter-
minism. Obviously this does not mean that everything that is determined would
also be free. Compatibilists are keen to work out which kind of determinants of our
actions make us free and which ones do not. What does it take to be an agent and for
an instance of behavior to count as a free action (see, e.g., Velleman 1992)? If com-
patibilism were true, human autonomy would consist in behaviors being produced
by the right kinds of causal mechanisms, but which ones are those? Is it necessary
that we are able to evaluate action options rationally, for example? Must we be able
to initiate actions in a context-independent (or stimulus-independent) manner? Is
a capacity for metarepresentation, the ability to think about thinking, necessary for
free will? Is conscious control necessary if we are to be treated as responsible for an
action (see Roskies, this volume)? These are all important philosophical questions
about necessary conditions for human autonomy. There are, however, a number of
other questions concerning the nature of autonomy that are undeniably empirical
questions. It is an empirical question, for instance, whether we have any of the abili-
ties our best philosophical theory tells us are necessary for autonomy. Supposing
science tells us we do have these abilities, it is also an open empirical question to
what extent we actually deploy these abilities in everyday decision making. Suppose
we say that conscious control is necessary for free will; the question of whether we
are free or not will then depend on whether we have the capacity for conscious con-
trol, and the extent to which we exercise this capacity in making everyday decisions.
Hence, the question of whether we are free or not turns out to crucially depend on
a better scientific understanding of the machinery of our minds.
Now, set aside our initial assumption about the truth of compatibilism, and
suppose instead that libertarianism is the right answer to the determinism question. It seems
as if not a lot changes. If you want to know the conditions for libertarian human
autonomy you look at the compatibilist ones and add the crucial indeterminist
extra. Libertarians will make use of exactly the same or at least very similar ingre-
dients (rationality, action initiation, self-knowledge) to give the necessary condi-
tions an agent needs to fulfill, before adding “true” freedom of choice that turns
merely self-controlled behavior into truly free action.4 If we do not have or rarely
exercise self-knowledge and rationality in generating our behavior, this would seem
to spell trouble for a libertarian account of free will just as much as for a compati-
bilist account.
Still one might worry that an account of free will need not be interested in
questions about mechanisms. Philip Pettit (2007) has, for instance, argued that
neuroscience is only threatening to a notion of free will that is built upon an indi-
vidualist “act of will” picture.5 Such a picture might attempt to identify specific cog-
nitive mechanisms within the individual agent that causally enable “acts of will.”
According to Pettit, this would be to underestimate the social dimension of the free
will discourse. Whether somebody is free or not crucially depends on their ability
to justify their actions according to the rules set by society. The idea of free will
works because we take ourselves to be free agents and have knowledge of the social
norms that tell us what the behavior of a free and responsible agent should look like.
As we all strive to fulfill the normative ideal of a free agent, our behavior begins to
resemble the ideal more and more. This assimilation to the ideal is not dependent
on there being a prior independent mechanism in the agents that would lead to such
behavior naturally without normative guidance.
This move, familiar to the philosopher of mind from the discussion of folk
psychology (Dennett 1987), seems particularly appealing in the case of free will
and autonomy. Free will is such a loaded concept, a concept on which so many
high-minded ideals are based, that it seems rather likely that it is an idealization, an
abstraction, not reducible to actual mechanisms. We have suggested, however,
that neither compatibilists nor libertarians can afford to ignore the science of
voluntary action. These sciences have implications for whether we have the
cognitive capacities required for free agency. However, if Pettit is right, our argument
rests on false assumptions. We are looking for freedom in the wrong place. Instead
of looking inside of free agents, we need to look to the social contexts in which the
actions of responsible agents take place.
We agree with Pettit about the importance of the social context for our folk con-
cept of free will. This does not, however, render questions about mechanisms irrele-
vant. Even if free agency is a matter of our social practices in which we come to think
of ourselves as answerable to others for our actions, this does nothing to undermine
the thought that there may be specific functions and mechanisms essential for such
a self-understanding to influence behavior. Pettit argues that thinking of ourselves
as accountable for our actions can shape the cognitive processes that bring about
our actions in ways that result in us acting as responsible agents. Thus free will is
important only as a self-attribution. We are happy to concede this point, but still
this invites the question of why such a self-attribution is important. Surely such a self-
attribution is important only if it exerts a causal influence on behavior. Whether this
is the case or not is a question that will be answered in part by looking to science. It
might well turn out that even though free will is conceptually quite coherent,
the machinery of the mind works in ways that are inconsistent with our being
free agents. The obvious example of such an empirical way to undermine free will
is the recent surge in findings that seem to show that consciousness plays a much
smaller role in causing behavior than previously thought. If conscious awareness
and control is necessary for free will, then these findings undermine free will.6 We
will label this worry the zombie challenge.
Even if the extent of the zombie challenge and the threat to our commonsense
understanding of responsible agency may have been slightly exaggerated, they nev-
ertheless invite an important question that has hitherto not been on the agenda in
philosophical discussions of agency. Can we come up with a testable function for
conscious behavioral control? Given how much of our everyday behavior can pro-
ceed without conscious guidance and control, what is the role of consciousness in
agency, if any?15
Freeman ascribes full responsibility even in cases where conscious control is wholly
absent. Many scientists argue that because we do not have conscious control, we
do not have full responsibility either. Dennett and Freeman argue that this cannot
be right, ascribing to us full responsibility because we have all the control that we
could reasonably want. They invite us to give up on the deep-rooted intuition that
free will and conscious control are so intertwined that you cannot have one
without the other.
We agree with Dennett that any scientifically viable concept of free will must be
severed from its connection with the Cartesian homunculus. Indeed, we take it to be
implicit in our title that if free will is real, then it is the result of interactions among
many mindless, robot-like mechanisms. However, we wonder whether the willing
agent might be decomposed without entirely relinquishing the commonsense intu-
ition that there is a connection between conscious control and free will. One place
we might look for a defense of this intuition is to the phenomenology of agency.
The phenomenology of agency is as complex as it is elusive and recessive, some-
thing that is pointedly brought out in our collection by the contribution from Shaun
Gallagher. However, at least part of the phenomenology of being an agent resides
in the experience of control. When Penfield directly stimulated his patient’s motor
cortex, causing the patient to raise an anesthetized arm, one of the reasons the
resulting movement did not feel like an action of the patient’s was that they had no
experience of controlling the action. If we sever the connection between conscious
control and free will in the way Dennett is recommending, we are saying an agent
can perform a free action without having a feeling that they are in control of the
action. The subject may judge that the action was one that was under their control
after they’ve acted, perhaps because the action was in accordance with reasons and
values they endorse. However, to the extent that the control is unconscious, the
agent will have no feeling of control over the action. Once we allow that the type of
control required for free agency does not require consciousness, we can agree with
Wegner that the experience of being in control is an illusion, the outcome of a post
hoc inference, as he has so powerfully argued on the basis of his many ingenious
experiments. A number of our contributors take issue with Wegner’s claim
that the phenomenology of agency is an illusion. In the next section we will con-
sider whether phenomenological considerations might therefore be marshaled to
deliver the response to the zombie challenge, allowing us to hold on to something
of the commonsense idea that free will requires conscious control.
of agency that relate to acting in the here and now. Actions that are the outcome of
a future-directed and/or present-directed intention are accompanied by a sense of
agency, but the sense of agency will derive in part from the prior planning. The
sense of agency can also take a retrospective form that derives from my ability to
explain my actions in terms of my beliefs, desires, and intentions. I feel like I am in
control of my actions because they fit with my beliefs and desires that rationalize
the action. Both the retrospective and the prospective ingredients are associated
with a reflective sense of agency. In the case of the prospective ingredients, the
actions concerned are the outcome of some kind of reflective deliberation, and the
sense of agency derives from this deliberative process. In the case of the retrospec-
tive ingredient, the sense of agency comes from the process of reflecting on an
action and one’s reasons for performing it. One can have a thin recessive experi-
ence of being the agent of an action without either of these ingredients being in
place. This is arguably the case with many of our skillful behaviors—the skilled
pianist does not have to reflectively deliberate on the finger movements he makes
in performing a piece of music. So long as his performance of the piece is going
smoothly, there is no need for him to reflect on what he is doing. His attention can
be completely taken up with what he is doing. However, it would be a mistake to
conclude that just because there is no reflective sense of agency, there is no sense
of agency whatsoever for skilled behaviors. It is not as though skilled behaviors are
performed unconsciously, as we find in cases of somnambulism or automatic writ-
ing. There seems to be a clear phenomenological difference between performing
an action while sleepwalking and performing a skilled behavior. In the latter case
the subject has some awareness of what he is trying to accomplish and of the effects
of his actions on the world.
Gallagher distinguishes an agent’s prereflective experience of what she is trying to
accomplish from the motor control processes that give her the sense that she is
moving her body, and that her actions are having certain effects on the world. This distinction
also provides the starting point for Tsakiris and Fotopolou in their chapter, more on
which shortly. One way that cognitive neuroscientists have set about studying the
sense of agency is to give subjects tasks where they are asked to judge whether they
caused a particular sensory event. Gallagher discusses an fMRI study by Farrer and
Frith (2002) in which subjects manipulate a joystick to move a colored circle on a
screen. Sometimes the subject causes the movement, sometimes the computer does,
and subjects must judge which of the movements they are seeing are effects of their
own actions. When subjects report causing a movement of a colored circle, Farrer
and Frith found bilateral activation of the anterior insula. Gallagher argues that
what is being measured in these studies is not the neural mechanisms that underpin
a prereflective sense of agency but neural mechanisms that are involved in motor
control. Echoing Gallagher’s worry, Tsakiris and Fotopolou argue that experiments
that ask subjects to judge whether they caused a given sensory event can tell us
very little about the experience of agency. At best they can tell us something about
the “cross-modal matching process” that integrates motor representations of one’s
voluntary actions with sensory representations of actions and their consequences.
They fail to advance our understanding of the experience of initiating and control-
ling an action, key constituents of the prereflective experience of agency.
Tsakiris and Fotopolou make some positive proposals about how to investigate
the prereflective experience of agency, which they characterize as “the feeling that
I voluntarily move my body.” They argue that a key requirement is a control
condition in which the movement parameters are kept constant: for example, subjects
are asked to press a button, but the movement is made passively or involuntarily
(also see Tsakiris et al. 2005). The question they suggest we must answer if we are
to scientifically investigate the experience of agency is in what way agency changes
the experience of the body. This is an experience that is present both when we pas-
sively move and when we actively move. Is the sense of agency simply an addition
to an “omnipresent” sense of body-ownership, or is the sense of agency a different
kind of experience to the experience of body-ownership? Tsakiris and Fotopolou
report neuroimaging experiments that support the latter view (Tsakiris et al. 2009).
They find no activations common to the active and passive movement conditions,
and distinct patterns of activation in the two, strongly supporting the view that the
experience of agency is a qualitatively distinct kind of experience from the
experience of body-ownership.
Does this prereflective sense of agency provide evidence for free will and the neu-
ral mechanisms that underpin voluntary action? One conclusion we do seem to be
warranted in drawing is that the sense of agency is not an illusion, contrary to what
Wegner (2002) has powerfully argued. Wegner and colleagues have amassed substantial
evidence that our experience of mental causation—the experience that our con-
scious proximal intentions are the causes of our actions—may be based on a post
hoc inference. Wegner argues that this post hoc inference is based on the satisfac-
tion of the following three conditions. First, we undergo a conscious thought about
an action at an appropriate interval of time before acting; second, we find that the
action we perform is consistent with the action we thought about performing; and
third, we establish there are no other rival causes of the action. The sense of agency
we arrive at via this route is a reflective sense of agency, and we have seen that this
does not exhaust our experience of agency. In addition, there is what we have been
calling, following Gallagher, a prereflective sense of agency, which is the outcome of
neural processes involved in motor control.
However, one might worry about whether the prereflective sense of agency
really adds up to the experience of freedom that is required for a robust response to
the zombie challenge. What matters for responsibility is precisely that our actions
accord with reasons we endorse (see Vargas, this volume). Perhaps this is a condi-
tion that cannot be satisfied unless we have a prereflective sense of agency for an
action. Thus we can allow that a prereflective sense of agency may well be necessary
for free will, but it looks too thin to give us a notion of conscious control sufficient
for free will.
free agency. The sense of agency was supposed to provide us with an answer to this
question, but it turns out that there is no single sense of agency; there are rather
“multiple senses of agency” (Gallagher 2012, this volume; see also Pacherie 2007).
Cognitive science shows how these different aspects to the experience of agency
can be operationalized. However, as Paglieri’s essay argues, one might doubt that
the sense of agency as studied by Tsakiris is related to the experience of free agency
that Ginet and Velleman discussed.
It might also be the case that some commonsense ideas relating to free agency
do not have anything to do with the phenomenology of agency while others do.
Richard Holton (2010),20 for example, argues that the folk notion of free will is
made up of at least three very distinct and probably incompatible ideas. First, there
is the mental capacity for free agency or the ability to act freely as described by
philosophy of mind. Holton argues that careful phenomenological reflection can
be a rich source of insight for learning about this mental capacity. Second, there is
the conception of free agency required for moral action and responsibility. Finally,
he suggests that both of these conceptions of freedom should be distinguished
from a third metaphysical notion of being able to do otherwise, which is so crucial
to libertarians. Holton finds within the phenomenology of free will two distinct
types of experience, which he labels the “experience of choice” and the “experience
of agency,” respectively. He argues that neither of these experiences tells us much
about our practices of assigning moral responsibility. Experience of choice is not
required for moral responsibility, since we have no hesitation in assigning respon-
sibility to agents for harms that have arisen from habitual or automatic actions they
do not experience choosing. Somewhat more controversially, Holton argues that
the experience of agency is not necessary for moral responsibility either. Holton
considers views that take the experience of agency to be connected with “the capac-
ity to choose rationally” (2010: 91); we will call these accounts “rationality-based
accounts.” Holton argues that our moral practices require us to hold a person mor-
ally culpable even when they lack a capacity for rationally assessing the reasons for
and against a particular course of action. A person might be quite ignorant of her
bad motives, for instance, and so lack the capacity to choose rationally, yet our moral
practices still allow us to hold the person responsible. Holton writes: “A person can
be spiteful, or selfish, or impatient without knowing, or being able to know that they
are, and such ignorance does not excuse the fault” (93). Suppose there is a connec-
tion between having an experience of agency and the capacity to choose rationally as
is claimed by rationality-based accounts. It seems we must conclude that our moral
practices allow us to hold agents responsible for acting even when the agent has no
experience of agency. Thus our moral practices of attributing responsibility do not
line up at all well with cases in which a person has an experience of acting freely.
Even if we agree with Holton about the disconnect between phenomenology
and our moral practices, still one might be reluctant to give up on the connection
between the capacity to choose rationally and ascriptions of moral responsibility.
In his contribution to this volume, Manuel Vargas shows how rationality-based
accounts can accommodate the limited access we have to our motives. He argues in
agreement with Holton that there need not be any incompatibility between ascrip-
tions of moral responsibility and self-blindness, but he denies that such a result is in
tension with the rationality-based account he develops in his essay (see also Vargas
2007). Vargas argues that the situationist tradition in social psychology forces
philosophers to accept that humans probably do not possess one stable capacity
to react to reasons, but this does not mean that humans cannot act rationally at
all. All it means is that there might be many heterogeneous abilities to respond to
reasons and that these might be far more context dependent than we might have
assumed. In the case of the person ignorant of her bad motives, this could mean that
there might be a good sense in which this person is still responsive to reasons in her
decision. On the other hand, if there is not, then perhaps our practices of ascribing
responsibility in such situations are simply mistaken. In either case, it would not be
necessary to completely sever the link between mental capacity and an understand-
ing of free will in terms of social practice.
Setting this debate to one side, even Holton admits there must be some link
between a person’s mental capacity to act freely and our moral practices of ascribing
responsibility. This is illustrated well by the insanity defense, which clearly demon-
strates that our practices of responsibility ascription are sensitive to the proper func-
tioning of a person’s cognitive machinery. Holton has argued persuasively that we
can learn a good deal about the nature of the psychological capacities that make us
free from our experience of being free agents. He has also argued that what we can
learn from this kind of phenomenological reflection does not necessarily advance
our understanding of what makes an agent responsible for an action. However,
Holton has not shown (nor does he claim to have shown) that the psychological
capacities that make us agents have no bearing on the question of what makes us
responsible agents. The most we can conclude from his arguments is that the mech-
anisms necessary for an agent to be a target of responsibility ascriptions turn out to
be quite distinct from the mechanisms that ground the experience of agency.
The zombie challenge presents a threat to our practices of ascribing responsibility
to an agent by purporting to show that the conscious self is an impotent bystander,
and so is not causally responsible for bringing about actions we praise or blame.
In what follows we will attempt to meet the challenge head-on by defending the
idea that there is a necessary connection between conscious control and respon-
sible agency. However, let us pause briefly to consider this strategy in the light of
Holton’s arguments. If Holton’s reasoning is correct, the mechanisms that support
our experiences of freedom are not sufficient for responsible agency. So in order
for any form of conscious control to secure responsible agency, either Holton has
to be wrong, or it has to be possible for an agent to be consciously in control of her
actions but not experience her own agency. Whether this really is possible would
seem to turn on how we understand the experience of agency, a question we have
briefly touched upon earlier. There we argued for a distinction between prereflec-
tive and reflective experience of agency. Once we have this distinction on the table,
we should fully expect that an agent could be in conscious control of an action but not
have a reflective experience of her own agency, even though she might well have a
prereflective experience.21
We find further support for this possibility in Tim Bayne’s arguments in his con-
tribution to this volume for treating intentional agency as the marker of conscious-
ness. A “marker” of consciousness is a criterion we use to decide whether a creature
16 DECOMPOSING THE WILL
you must be able to identify the cup of coffee and factor this into your behavioral
planning, but this suffices for perceptual consciousness of the coffee cup, says Bayne.
Bayne certainly succeeds in showing that wherever you have intentional agency, you
most likely also have perceptual consciousness. However, he does not succeed in
showing that the conscious self is responsible for bringing about intentional agency.
This worry is driven home by the care Bayne takes to sever his thesis that agency is
the marker of (perceptual) consciousness from the claim that when the agent acts
intentionally, he must be conscious of his intentions and motives. However, it is the
absence of conscious intentions and motives in causing our behavior that gener-
ates the worry about self-blindness, which in turn fuels the zombie challenge.23 If
intentional agency can be the marker of consciousness without this implying that the
agent is conscious of his intentions and motives, the zombie challenge would seem to
remain in place. We think Bayne is absolutely right to stress the connection between
consciousness and intentional agency,24 but the threat the zombie challenge presents
to responsible agency will persist so long as we lack an account of what it is that con-
sciousness does in generating our intentional behavior. Our strategy in the remain-
der of the volume is therefore to take up the question of the function of the conscious
will. If we can establish possible functions for the conscious will, this would go some
way to furthering our understanding of what makes us responsible agents.
processes that are vying for control of the musculoskeletal system. Consciousness
allows for what Morsella and Bargh describe as “cross-talk” to take place between
competing action-generating systems. Without the intervention of conscious-
ness, each action-generating system is unable to take information from other
action-generating systems into account, with the consequence that the agent may
act in ways that do not cohere with other plans they might have. Morsella and colleagues give as examples anarchic hand syndrome and utilization behavior (UB). In anarchic hand syndrome, a patient's hand performs well-executed, goal-directed movements that the patient claims are unintentional; patients will often complain that the hand behaves as if it had a mind of its own. Similarly, in UB, a patient will find himself responding to affordances that are irrelevant to his projects, interests, and goals at the time. Morsella et al. suggest that these disorders are the result of a failure of cross-talk among competing action-generating systems. According to Morsella and colleagues, in the absence of consciousness the system that is guiding the patient's behavior cannot speak with other action-generating systems, and so is not influenced
by the patient’s wider concerns. Elsewhere, Bargh (2005) has compared UB patients
to participants in his priming studies, arguing that in both cases we find a dissocia-
tion of the system that represents intentional movements and the system that gen-
erates motor representations used to guide behavior. Just as with UB patients, the
behavior of subjects in priming studies is generated unconsciously, which is to say
quite independently of interaction with other behavior-generating systems.25
Morsella et al. may have succeeded in finding a function for consciousness that
is consistent with research in social psychology demonstrating the ubiquity of our
self-blindness. However, it would take more work to establish that their account
is sufficient to rescue hierarchical accounts of free will from the zombie challenge.
Resolving high-level conflicts between action plans is certainly a part of what it
takes for a person to have the will she wants, but it is surely only a part of the story.
It is still not entirely clear, for instance, how a mechanism for ensuring coherence
between action plans could deliver the species of free will we are interested in when
we praise or blame an agent for an action.
Nico Frijda also argues that the function of the conscious will resides in the resolving of inner conflict, but the conflicts he is concerned with are emotional in nature. A
conflict of emotion arises when two or more incompatible emotional inclinations
operate at the same time. You are angry with your partner, but at the same time
you wish to keep the peace so you say nothing. Frijda argues that conflicts between
emotions are resolved through emotion regulation. Sometimes this regulation can
proceed automatically and unconsciously, but Frijda argues that often emotion regu-
lation is effortful and voluntary. Frijda makes a distinction between self-maintenance
and self-control in developing his account of emotion regulation.26 Self-control is
exercised when we reevaluate an action inclination, deciding not to perform an action
we had decided upon previously. Self-maintenance, by contrast, is exercised when an
agent undertakes and persists with an action plan despite its anticipated unpleasant
consequences. Emotion regulation takes one of these two forms. The upshot of exer-
cising either of these forms of control, Frijda argues, is that one acts on the basis of
concerns that one can identify with. Thus, when a person gives up smoking for the
sake of his long-term health, this is the goal he identifies with even though he may
also desire the short-term pleasure and relaxation derived from smoking a cigarette.
Frijda is aware of Frankfurt’s hierarchical account, and he speaks favorably of the idea
that an agent acts freely when he acts on desires he can identify with wholeheartedly.
Emotion regulation as Frijda characterizes it involves acts of reflective self-evaluation
in which one reflects on the concerns that are motivating one’s action tendencies and
carefully considers which of the options one prefers. Free will, he says, “refers to free-
dom to pursue available options, in perception and thought, and in constructing pref-
erence, or in finding out what one’s preference is” (this volume, p.214). He is aware
of arguments that purport to show that the conscious self is an epiphenomenon, and
free will an illusion. His response to these arguments is to point to the possibility
of emotion regulation, which he argues buys us all the self-determination we could
want. Yet Frijda ends his essay by recognizing the point we have been stressing in this section: that people are in general quite ignorant of their motivations. He agrees
with social psychologists that “awareness of what moves one is a construction,” and
that there can be “no direct reading-off of the causes of intentions and desires” (this
volume, p.216). Frijda seems to believe that there is no conflict between this kind of
self-ignorance and the reflective self-evaluation required for voluntary and effortful
emotional regulation. Doesn’t reflective self-evaluation of the kind Frijda argues is
required for emotional regulation require us to know our own motivations? Almost
certainly the resistance fighters and political revolutionaries that pepper Frijda’s essay
are examples of people that know what they want. However, the suspicion remains
that in the end self-deception and self-ignorance may undercut the less heroic and
more mundane person’s capacity for emotional regulation.
MENTAL AGENCY
One of the morals we can take from our discussion of the function of conscious control in the previous section is that conscious control has as much to do with self-regulation as it does with the regulation of action. We saw how Morsella and Bargh
argue that the function of conscious control is to allow for cross-talk between and
integration of different action-generating systems. In the absence of this cross-talk,
behavior is generated that does not reflect, and is indeed encapsulated from, the
projects that drive the agent. An important and direct consequence of integration
of action plans delivered by conscious control is the production of actions that fit
with the agent’s wider projects and concerns. Similarly, Frijda describes how the
concerns at play in emotion regulation are personal norms and values the violation
of which would result in “a loss of sense and coherence in the world,” and more dra-
matically still a “‘breakdown of the symbolic universe.’” Frankfurt talked about free
will as a capacity we exercise when the desires we act on mesh with our second-or-
der volitions—our second-order preferences as to which desires will move us to act.
When this kind of mesh obtains, the desires that cause the agent’s actions are not
alien to him, passively causing him to act in the way an external force might. Instead,
the desire is one that fits with the agent’s self-conception; it is one with which the
agent can self-identify.
There are of course many problems with Frankfurt’s account of free will, which
we do not intend to rehearse here.27 What we want to focus on from Frankfurt’s
seem required to conclude that there are no actions whatsoever, of either a mental
or a bodily kind.
What has gone wrong? Wu’s diagnosis involves a clever exploitation of Anscombe’s
insight that actions are intentional under a description. Wu’s twist on this is to argue
that one and the same action can have properties that are best explained by auto-
matic processes, and properties that are best explained by reference to an agent’s
intention. Wu suggests that the way in which we bring a thought to mind is either
through episodic recall or through a process of synthesizing previously encoded
thoughts. Of course both of these processes happen automatically. However, in epi-
sodic recall, there will be a wide variety of possible thoughts to select from. How
does our automatic process of recall select, from this wide variety of thoughts, the particular thought that is relevant to the task we are engaged in? Wu calls this "the
selection problem.” Once the thought has been successfully retrieved, we then run
into a further problem of what to do with the thought if we are to accomplish our
task. There are many responses we could make, and we have to select from among
these possible responses which is the appropriate response for accomplishing the
agent’s goal. Therefore, the agent faces two kinds of selection problems: the selec-
tion of the relevant input, and of the appropriate response to this input. Wu labels
this the “Many-Many Problem.” Wu argues that the agent enters the loop in arriv-
ing at a solution to the Many-Many Problem. The agent selects a particular behav-
ioral trajectory in solving a Many-Many Problem through what he calls “cognitive
attention.” Wu follows William James in conceiving of cognitive attention as the
“selection of a possible train of thought” (Wu, this volume, p.252). It is this pro-
cess of directing thinking along a particular path that is, according to Wu, active
and agentive. He offers an account of the role of cognitive attention in the solving
of a Many-Many Problem in terms of hierarchical processing. An intention of the
agent resides at the top of the hierarchy, exerting a top-down causal influence on
the directing of attention in the solving of a Many-Many Problem. Wu's response to the threat from automaticity is therefore to argue that actions, whether bodily or mental, are not all passive, because in many cases an action will be the execution of a solution to a Many-Many Problem arrived at through cognitive attention. Actions with this kind of causal history are not wholly passive but agentively controlled, and so count as genuine actions.
Wu argues that bodily and mental actions are agentive for the same reason, since
both types of action can count as solutions to Many-Many Problems arrived at via
the top-down causal influence of an agent’s intention. He therefore agrees with
Proust (this volume) in arguing that the agent is responsible for more than just
stage-setting or fostering the right conditions for mental action. Proust, however,
disagrees with Wu that bodily and mental actions are agentive for the same type
of reasons. According to Proust, mental actions differ from bodily actions in three
important ways: (1) they cannot have prespecified outcomes; (2) they contain a
passive element; and (3) they do not exhibit the phenomenology of intending.
Despite their disagreements, Proust and Wu agree that the automaticity challenge
for mental actions can be answered, and both deny that mental processes are as
ballistic as Strawson suggests, though for slightly different reasons. Wu appeals to
the role of cognitive attention in solving the Many-Many Problem to identify the
role of the agent in mental action. Proust agrees that the channeling of attention is
a crucial ingredient in mental agency, but her account emphasizes the role of meta-
cognitive monitoring in ensuring that the agent’s thinking conforms with the norms
of accuracy, simplicity, and coherence. For Proust, “a mental action results from the
sudden realization that one of the epistemic preconditions for a developing action is
not met.” You seem not to remember what was on the shopping list you left behind
at home, for instance. This epistemic feeling then leads you to try to remember what
to buy. Metacognitive self-evaluation allows for the sensitivity to epistemic norms
such as the norms of accuracy, simplicity, and coherence mentioned earlier. Proust
locates mental agency in part in this kind of norm responsiveness.29
Vierkant opts for a different route. He embraces the Strawsonian picture but
argues that the shepherding or stage-setting that Strawson allows for is what makes
human mental agency special. Vierkant argues that what makes human agency dif-
ferent from the agency of other creatures is humans’ ability to manipulate their own
mentality in an intentional way. In contrast to Wu and Proust, Vierkant does not
believe that most mental goings-on can be called intentional, but he argues that it is
a special ability of humans to be able to exercise any intentional control over their
mentality. Vierkant buys into a distinction introduced by Pamela Hieronymi (2009)
according to which there are two different forms of mental agency, only one of which is intentional.
On Hieronymi’s picture, the nonintentional (evaluative) form of mental agency is
fundamental, while the intentional form is characterized as only supplementary (i.e.,
in line with Strawson she believes that intentional mental agency can only be used
for stage-setting and shepherding). Vierkant agrees but insists that it is nevertheless
intentional (or, as Hieronymi also calls it, manipulative) mental agency that makes
human mental agency free. He argues that the ability to self-manipulate is behind
the Frankfurtian intuition of the importance of second-order volitions for the will.
Vierkant argues that this ability allows humans to become free from their first-order
rational evaluations and to instead intentionally internalize desired norms, despite
not being able to assent to their validity by rational means. In other words, it allows
them to be who they want to be. The role of conscious intentional control on this
model is to help us to efficiently implement our desires about who we want to be.
mind required of a responsible agent. They are the only creatures to have this ability
because only humans can intentionally manipulate their minds. Because this posi-
tion does not rely on self-knowledge for freedom of the will, it escapes the zombie
challenge that might seem threatening to traditional Frankfurtian approaches.
Peter Gollwitzer’s work on implementation intentions could be taken as provid-
ing empirical support for such a position. Gollwitzer’s work has a special place in
the empirical research on volition, because in addition to contributing to the social
psychology research that seems to support the zombie challenge, he has always also
been interested in investigating the function of consciousness in generating action.
In a series of fascinating papers (reviewed in the chapter by Maglio and colleagues)
Gollwitzer has shown that the conscious contribution to action execution might be
the formation of implementation intentions. Implementation intentions are inten-
tions that specify the circumstances under which behavior should be triggered in
order to reach a desired goal. The efficacy of implementation intentions conflicts
with the zombie challenge and the claim that consciousness is largely or completely
irrelevant for behavioral control. However, it also suggests a function for conscious
control that is somewhat counterintuitive. Traditionally, consciousness has been
associated with the rational evaluation of goals (see, e.g., Baumeister 2010 for a
recent version of that general idea), but Gollwitzer seems to indicate that the role of
consciousness is far more mundane, consisting mainly in an instrumental managing
function that ensures that the system implements its intentions in the most effective
way. In their contribution Gollwitzer and his collaborators describe some recent
experiments they have carried out concerned with the circumstances under which
people form implementation intentions. These studies were concerned in particular
with the role of emotion in motivating agents to plan.
Further support for the view of self-control as deriving from self-manipulation
comes from the contribution by Hall and Johansson. Hall, along with his colleagues at Lund University, was responsible for the choice-blindness experiments discussed earlier and reviewed in the first part of their chapter. There they argue that what these
experiments and others establish is that our self-knowledge is largely the outcome
of self-interpretation (see also Carruthers 2009). If consciousness provides us with
self-knowledge that in turn allows us to control our behavior, Hall and colleagues
argue that this is accomplished by consciousness only via self-interpretation. We do
not have any kind of privileged access or first-person authority over our attitudes.
We come to know about our own minds more or less in the same way as we know about the minds of others: through interpretation, or by adopting Dennett's intentional stance in relation to our own behavior. Hall and colleagues go on to discuss
how we can use technologies such as ubiquitous computing to augment our capac-
ity for self-control. They argue that these technologies can allow us to make better
self-ascriptions that enhance our self-understanding. They can also provide us with
accurate self-monitoring in the form of sensory feedback, for instance, that we can
then use to regulate impulsive behavior. Finally, these technologies can allow us to
step back from our actions in the heat of the moment and consider what it is we
really want to do. They compare their Computer Mediated Extrospection (CME)
systems to a “pacemaker for the mind, a steady signal or beacon to orient our own
thinking efforts” (this volume, p.312). Like Vierkant, then, they believe that the
crucial ingredient for willed action is an enhanced ability for self-control, and they
argue that we can engineer our environments in such a way as to enhance this capac-
ity for self-control.30
NOTES
1. Libet, in common with the other researchers we have cited, argues that our actions
are prepared for and initiated unconsciously, but unlike these other researchers he
does not deny the causal efficacy of conscious volition. He argues that we should
replace our concept of free will with a concept of “free won’t” that works “either by
permitting or triggering the final motor outcome of unconsciously initiated process
or by vetoing the progression of actual motor activation” (Libet 1985: 529).
2. Saul Smilansky (e.g., Smilansky 2002) is one of the more prominent contemporary
philosophers in favor of hard determinism.
3. It is an interesting fact about the sociology of science that even though most philosophers agree broadly on this, it still does not seem to stop the publication of more and more books
on the question. John Baer and colleagues (2008) have edited an excellent volume on
free will and psychology in which the question of the relationship between determin-
ism and free will features prominently.
4. For a very good account of the conditions of libertarian freedom, see Mele (1995).
5. Similar arguments are found in many other contemporary compatibilist positions,
e.g., Fischer and Ravizza (1998).
6. Pettit himself does not explicitly talk about consciousness being important. However,
at a crucial juncture his account is ambiguous. Pettit argues that what matters for agent
control are behavioral displays of conversability and orthonomy, but it is unclear in
his account whether these abilities really would be enough if there was no awareness
by the agent of the orthos she is governed by. He writes that even though it is unclear
whether agents have the ability to control their behavior at the point of awareness,
they can still take a stance toward it and make sure that they will avoid such behavior in the future (Pettit 2007: 86). Why would Pettit think it necessary for agent control, as he defines it, that awareness of reasons play any causal role in the shaping of present or future behavior, if orthonomy and conversability are sufficient for responsibility? Even if it were the case that our conscious reasons are nothing more
than made-up stories that we tell to justify our behavior post hoc, and even if it were
the case that these confabulations had very little influence on our future behavior, it
might still be true that the machinery is able to be governed by normative rules and
make us converse fluently. Whatever the reason, Pettit clearly does not seem to think
that this could be possible because he states that a conscious endorsing or disendors-
ing of our behavior ensures that our actions are “performed within the domain where
conversability and orthonomy rules” (86).
7. See, e.g., discussion of guidance control in Fischer and Ravizza (1998).
8. Obviously, most accounts will not require constant conscious control, but the zom-
bie challenge suggests that if there is any conscious control at all, it is far more frag-
ile than standardly assumed. The challenge is not only that we outsource the unimportant stuff to automatisms while keeping control of the important stuff; it suggests that even
our most cherished long-term decisions and value judgments might be the result of
unconscious automatisms.
9. In many conversations we had, compatibilism was described by scientists as a cheap
philosopher’s trick to simply redefine arbitrarily what free will is. In our collection
John-Dylan Haynes explicitly makes it clear at the beginning of his piece that he is an
incompatibilist.
10. It is important to emphasize that Holton’s use of predictability is unusual. On one
common understanding predictability is weaker than determinism. One might, e.g.,
think that God can predict our behavior without that entailing that our choices are
determined. On Holton’s use, in contrast, predictability is much stronger than deter-
minism for the reasons explained in the text.
11. Again, this is clearly what some scientists have in mind too. See Frijda (this volume)
on Prinz and fatalism.
12. But obviously, even if the zombie challenge turned out to be correct, this would not mean anything for the truth of predictability. This is because even though both doc-
trines entail the epiphenomenality of the conscious self, predictability is a much more
radical claim. The latter doctrine says that there is a possible world in which all control
mechanisms (whether they are conscious or zombie mechanisms) are powerless.
13. Adina Roskies shows that there are five ways in which volition is investigated in the
neuroscience literature today: (1) action initiation; (2) intention; (3) decision;
(4) inhibition and control; and (5) the phenomenology of agency.
14. For reviews of this literature, see the essays by Morsella et al.; Maglio et al.; and
Hall et al.
15. Many chapters in this volume deal with this question in one form or another (see
in particular the contributions by Bayne; Vargas; Vierkant; Maglio et al.; Frijda;
Morsella et al.).
16. Consider, in this light, Dennett’s discussion of a thought experiment from Mele
(1995) involving two characters, Ann and Beth. Ann’s actions are autonomous at
least sometimes, while Beth’s actions are the outcome of a psychology identical to
that of Ann except that her psychology is the product of brainwashing. Mele argues
that because Beth’s actions have not been caused in the right, they are not genuinely
free. Dennett argues persuasively that we should not regard Beth differently just
because she is ignorant of the springs of her action.
17. See as well the fascinating studies on three-month-old infants, who seem to enjoy being able to control a mobile much more than receiving the exact same effect passively (Watson & Ramey 1987).
18. According to Velleman, traditional, purely belief-desire-based philosophical accounts
do not deliver this and therefore face the objection that their models are purely
hydraulic, which flies in the face of the phenomenology of agency. On Velleman’s
own account, the experience of agency can be explained by giving an account of the
functional role of the agent, which in humans is reducible to the desire to act ratio-
nally. Humans have a very strong desire to make sense of their actions. Whenever that
desire plays a part in influencing a decision, the resulting behavior, according to Velleman,
will feel agentive.
19. For a related distinction, see Bayne and Pacherie (2007) and Synofzik et al. (2008).
Tsakiris and Fotopoulou (this volume) make a similar distinction between what they
call “feelings of agency” and “judgments of agency.”
20. Holton claims Nietzsche as the ancestor of this skepticism.
21. We are arguing, then, that the mechanisms that give us a prereflective sense of agency
are likely to be among those that make us responsible agents. We have already seen
earlier, however, that the prereflective sense of agency is too thin to account for
responsible agency. Thus we can still agree with Holton that reflection on the phe-
nomenology of agency probably will not help us to identify the mechanisms that
make us responsible agents.
22. See, e.g., Bayne’s discussion of how to distinguish patients in a minimally conscious
state from those in a vegetative state. Patients in a minimally conscious state may well
lack a capacity for introspective report while nevertheless being able to respond to
external stimuli in a way that suggests purpose or volition. Evidence for this comes
from the important studies by Owen et al. (2006).
23. We suggested earlier that Vargas’s rationality-based account may help us to respond
to zombie challenge–style arguments based on self-blindness, but this is not some-
thing we can pursue here. For more details, see his essay in this collection.
24. See Ward, Roberts, and Clark (2011) for an account of consciousness that also
stresses its connection with intentional agency, in particular our capacity to perform
epistemic actions like sifting, sorting, and tracking.
25. It is interesting to consider Bayne’s argument that intentional agency implies con-
sciousness in the light of Morsella and Bargh’s proposal. Bayne understands inten-
tional agency in terms of the cognitive integration of an action into an agent’s
cognitive economy. According to Morsella and Bargh, this cognitive integration
is what consciousness supplies by allowing for communication between distinct
action-generating systems.
26. Here he is following the work of Kuhl and Koole (2004).
REFERENCES
Arendt, H. (1971). The Life of the Mind. Orlando, FL: Harcourt.
Baer, J., Kaufman, J., & Baumeister, R. F. (Eds.). (2008). Are we free? Psychology and free will. New York: Oxford University Press.
Bargh, J. A. (2005). Bypassing the will: Toward demystifying the nonconscious control of social behavior. In R. Hassin, J. Uleman, & J. Bargh (Eds.), The new unconscious. New York: Oxford University Press.
Baumeister, R. F. (2010). Understanding free will and consciousness on the basis of current research findings in psychology. In R. F. Baumeister, A. Mele, & K. Vohs (Eds.), Free will and consciousness: How might they work? (pp. 24–43). Oxford: Oxford University Press.
Baumeister, R. F., Masicampo, E. J., & DeWall, C. N. (2009). Prosocial benefits of feeling
free: Disbelief in free will increases aggression and reduces helpfulness. Personality and
Social Psychology Bulletin, 35, 260–268.
Bayne, T. (2006). Phenomenology and the feeling of doing: Wegner on the conscious will. In S. Pockett, W. Banks, & S. Gallagher (Eds.), Does consciousness cause behavior? (pp. 169–186). Cambridge, MA: MIT Press.
Bayne, T., & Pacherie, E. (2007). Narrators and comparators: The architecture of agentive
self-awareness. Synthese, 159, 475–491.
Brass, M., & Haggard, P. (2007). To do or not to do: The neural signature of self-control. Journal of Neuroscience, 27, 9141–9145.
Brooks, D. (2007). The morality line. New York Times, Opinion Pages, April 19, 2007. http://www.nytimes.com/2007/04/19/opinion/19brooks.html?_r=1&ref=davidbrooks.
Carruthers, P. (2009). How we know our own minds: The relationship between mindreading and metacognition. Behavioral and Brain Sciences, 32, 121–138.
Lau, H., & Passingham, R. E. (2007). Unconscious activation of the cognitive control system in the human prefrontal cortex. Journal of Neuroscience, 27, 5805–5811.
Lau, H., Rogers, R., Ramnani, N., & Passingham, R. E. (2004). Willed action and the
attentional selection of action. Neuroimage, 21, 1407–1415.
Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in volun-
tary action. Behavioral and Brain Sciences, 8, 529–566.
Mele, A. (1995). Autonomous agents. Oxford: Oxford University Press.
Mele, A. (2009). Mental action: A case study. In L. O'Brien & M. Soteriou (Eds.), Mental actions. Oxford: Oxford University Press.
Mele, A. (2010). Effective intentions: The power of conscious will. Oxford: Oxford University
Press.
Moran, R. (2001). Authority and estrangement: An essay on self-knowledge. Princeton, NJ:
Princeton University Press.
Nahmias, E. (2005). Agency, authorship and illusion. Consciousness and Cognition, 14,
771–785.
Nietzsche, F. (1966). Beyond good and evil. Trans. W. Kaufmann. New York: Vintage.
Owen, A. M., Coleman, M. R., Boly, M., Davis, M. H., Laureys, S., & Pickard, J. D. (2006).
Detecting awareness in the vegetative state. Science, 313, 1402.
Pacherie, E. (2007). The sense of control and the sense of agency. Psyche, 13(1). http://
jeannicod.ccsd.cnrs.fr/docs/00/35/25/65/PDF/Pacherie_sense_of_control_
Psyche.pdf.
Pettit, P. (2007). Neuroscience and agent control. In D. Ross, D. Spurrett, H. Kincaid, &
G. Lynn Stephens (Eds.), Distributed cognition and the will (pp. 77–91). Cambridge,
MA: MIT Press.
Pockett, S., Banks, W., & Gallagher, S. (Eds.). (2006). Does consciousness cause behavior?
Cambridge, MA: MIT Press.
Prinz, W. (2003). Emerging selves: Representational foundations of subjectivity.
Consciousness and Cognition, 12, 515–528.
Ross, D., Spurrett, D., Kincaid, H., & Lynn Stephens, G. (Eds.). (2007). Distributed cognition
and the will. Cambridge, MA: MIT Press.
Roth, G. (1994). Das Gehirn und seine Wirklichkeit. Frankfurt: Suhrkamp.
Shirer, W. R., Ryali, S., Rykhlevskaia, E., Menon, V., & Greicius, M. D. (2012). Decoding
subject-driven cognitive states with whole-brain connectivity patterns. Cerebral Cortex,
22, 158–165.
Sinnott-Armstrong, W., & Nadel, L. (Eds.). (2011). Conscious will and responsibility. Oxford:
Oxford University Press.
Smilansky, S. (2002). Free will, fundamental dualism, and the centrality of illusion. In R. Kane
(Ed.), The Oxford handbook of free will. Oxford: Oxford University Press.
Soon, C. S., Brass, M., Heinze, H-J., & Haynes, J-D. (2008). Unconscious determinants of
free decisions in the human brain. Nature Neuroscience, 11, 543–545.
Strawson, G. (2003). Mental ballistics or the involuntariness of spontaneity. Proceedings of
the Aristotelian Society, 103, 227–256.
Synofzik, M., Vosgerau, G., & Newen, A. (2008). Beyond the comparator model: A multifactorial
two-step account of agency. Consciousness and Cognition, 17, 219–239.
Tsakiris, M., Carpenter, L., James, D., & Fotopoulou, A. (2009). Hands only illu-
sion: Multisensory integration elicits sense of ownership for body parts but not for
non-corporeal objects. Experimental Brain Research, 204, 343–352.
Tsakiris, M., Prabhu, G., & Haggard, P. (2005). Having a body versus moving your body:
How agency structures body-ownership. Consciousness and Cognition, 15, 423–432.
Tusche, A., Bode, S., & Haynes, J-D. (2010). Neural responses to unattended products
predict later consumer choices. Journal of Neuroscience, 30, 8024–8031.
Vargas, M. (2007). Revisionism. In J. M. Fischer et al., Four views on free will (Great Debates
in Philosophy). Oxford: Blackwell.
Velleman, D. (1992). What happens when someone acts? Mind, 101, 461–481.
Vierkant, T. (2008). Willenshandlungen. Frankfurt: Suhrkamp.
Vohs, K., & Schooler, J. (2008). The value of believing in free will: Encouraging a belief in
determinism increases cheating. Psychological Science, 19, 49–54.
Ward, D., Roberts, T., & Clark, A. (2011). Knowing what we can do: Actions, intentions,
and the construction of phenomenal experience. Synthese, 181, 375–394.
Watson, G. (1982). Free agency. In G. Watson (Ed.), Free will. Oxford: Oxford University Press.
Watson, J. S., & Ramey, C. T. (1987). Reactions to response-contingent stimulation in early
infancy. In J. Oates & S. Sheldon (Eds.), Cognitive development in infancy. Hillsdale,
NJ: Erlbaum.
Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: MIT Press.
Wegner, D. M., & Wheatley, T. P. (1999). Apparent mental causation: Sources of the expe-
rience of will. American Psychologist, 54, 480–492.
Wilson, T. (2002). Strangers to ourselves: Discovering the adaptive unconscious. Cambridge,
MA: Belknap Press.
Wolf, S. (1990). Freedom within reason. Oxford: Oxford University Press.
Wolfe, T. (1996/2001). Sorry, but your soul just died. Originally printed in Forbes
Magazine; reprinted in T. Wolfe, Hooking Up (pp. 89–113). New York: Farrar, Straus
and Giroux.
PART ONE
ADINA L. ROSKIES
Because we cannot rely upon definitions, I will presume that the folk conception
of the will is a legitimate shared starting point for discussion. Generally speaking,
volition is a construct used to refer to the ground for endogenous action, autonomy,
or choice. From here, intuitions vary. In some areas of research voluntary actions
are contrasted with actions that are reflexive or elicited by the environment. This
puts the focus of volition on the flexibility or initiating causes of action. In contrast,
in some strands of research the focus is not on bodily action at all but rather on the
mental process of decision making.
The heterogeneity of the preceding visions of the will explains in part why an
attempt to identify the neural basis of volition might be difficult, for the absence of
a clear concept of volition complicates the task of investigating it experimentally,
or even determining what counts as relevant data. Another difficulty arises from
the recognition that in order for neuroscientific research to bear upon the concep-
tion of volition, volition has to be operationalized in some way, so that it can be
approached experimentally. Despite these difficulties, it is now possible to make
some headway.
The intuitive, but less than clear, concept of the will is reflected in the many ways
in which volition has been operationalized. For example, if one takes voluntary
action to contrast with stimulus-generated action, examining the neural events that
distinguish self-initiated movements from similar movements that are responses to
external stimuli ought to provide insight into the proximal mechanisms underly-
ing endogenously generated action. However, if one conceives of volition instead
as primarily related to abstract plans for future action, the proximal mechanisms
that lead to simple movements may be of less interest than the longer-term plans or
intentions one has or forms. Some research on intentions attempts to address these
higher-level aspects of motor planning.
Philosophical discussions of volition often focus upon the ability to choose to
act in one way or another. Although historically this emphasis on choice may be a
vestige of an implicit dualism between the mentalism of choice and the physical-
ism of action, the processes underlying decision as a means for forming intentions
seem to be a central aspect of volition even if one rejects dualism, as have most con-
temporary philosophers and scientists. Yet a different approach to volition focuses
less on the prospective influence of the will on future action than on the occurrent
ability of the agent to inhibit or control action. Moreover, since some lines of evi-
dence from psychology and neuroscience suggest that actions are often initiated
unconsciously, a number of people have argued that if free will is to exist, it will
take the form of control or veto power over unconsciously initiated actions. Finally,
regardless of whether one believes that we can act or choose freely, we normally do
perceive certain actions as self-caused and others as not. There is a phenomenology
that accompanies voluntary action, involving both a sense of causal potency and
ownership. Neuroscience has begun to make significant headway in illuminating
the physiological basis of the phenomenology of agency.
Reflecting the distinct conceptions of will canvassed earlier, I organize my
discussion around the following five topics: (1) action initiation, (2) intention,
(3) decision, (4) inhibition and control, and (5) the phenomenology of agency.
Each of these maps readily to elements of the commonsensical conceptions of
The Neuroscience of Volition
Perhaps the finding that there are increases in brain activity with self-initiated action
supports the physicalist notion that volition is a manifestation of brain activity, but
since correlation cannot prove causation, even this conclusion is not mandated. We
can be sure that future work will better resolve the regions involved in self-initiated
and externally cued activity, and the circuits mediating such processing. While not
resolving any mysteries about the seat of volition, these results may provide clearer
targets for future experiments.
Volition as Intention
Intentions are representational states that bridge the gap between deliberation and
action. Arguably, intentions can be conscious or unconscious. Moreover, there may
be different types of intention, involved in different levels of planning for action.
If we assume that intentions are the proximal cause of all voluntary movement,
then studies of initiation of action and of intention may well concern the same phe-
nomena (we might call these proximal intentions or motor intentions, or as some call
them, volitions). However, we also commonly refer to intentions in a broader, more
abstract sense, as standing states that constitute conscious or purposeful plans for
future action, that exist prior to and independently of action execution. In moral and
legal contexts, when we ask whether a person acted intentionally, we often employ
this more general notion of intention.
In general, willed action involves the intention to act, and many presume that
freely willed actions must be caused by our conscious intentions. The efficacy of our
conscious intentions was challenged by the studies of Benjamin Libet, who exam-
ined the relative timing of awareness of the intention to move and the neural signals
reflecting the initiation of action. Libet reported that the time of onset of the RP
occurs approximately 350 milliseconds or more prior to the awareness of an urge
or intention to move (Libet, Wright, and Gleason 1982, 1983; Libet et al. 1983;
Libet 1985). Libet and others have viewed this discrepancy as evidence that actions
are not consciously initiated (Libet et al. 1983; Libet 1985; see Banks 2002). Many
have taken these results as a challenge to free will, on the supposition that conscious
intention must drive, and thus precede, initiation of action, for that action to be
freely willed. Although Libet’s basic neurophysiological findings about RP tim-
ing have withstood scrutiny (Trevena and Miller 2002; Haggard and Eimer 1999;
Matsuhashi and Hallett 2008), his interpretations have been widely criticized. For
example, Libet’s data do not enable us to determine whether the RP is always fol-
lowed by a movement, and thus whether it really reflects movement initiation, as
opposed to a general preparatory signal or a signal related to intention (Mele 2006;
Roskies 2011; Mele 2009). Haggard and Eimer use temporal correlation to explore
the possibility that the anticipatory brain processes identified by Libet and oth-
ers underlie the awareness of intention. Their results suggest that a different sig-
nal, the lateralized readiness potential (LRP), is a better candidate than the RP for
a brain process related to a specific motor intention (Haggard and Eimer 1999);
other data suggest that awareness of intention may precede the LRP (Trevena and
Miller 2002). These findings call into question Libet’s most revolutionary claim,
that awareness of intention occurs after the motor preparatory signals. If the RP is a
precursor to a proximal intention, rather than the neural correlate of such an inten-
tion, then the fact that it occurs prior to consciousness does not rule out conscious
intention as a necessary precursor to action, nor do his data speak to the relative
timing of action initiation and conscious intention. Moreover, if the RP regularly
occurs without movement occurring, then the RP cannot be thought of as the ini-
tial stages of a motor process. Others question the relevance of the paradigm used.
There is reason to think that Libet’s experimental design fails to accurately measure
the onset of conscious intention to move (Young 2006; Bittner 1996; Roskies 2011;
Lau, Rogers, and Passingham 2006, 2007), so that inferences about relative timing
may not be reliable. More philosophical objections to the paradigm are also com-
pelling. Are instructed self-generated finger movements an appropriate target task
for exploring questions of freedom? There are reasons to think not, both theoretical
and experimental. For instance, we are typically interested in freedom because we
are interested in responsibility, but an inconsequential act such as finger raising is
not the type of act relevant to notions of responsibility. In addition, the paradigm
may not measure the time of conscious intention but rather the time of the meta-
cognitive state of being aware that one has a particular conscious intention. It may
be that what we need to be conscious of when choosing are the options available
to us but not the motor intention itself (Mele 2009; Young 2006; Bittner 1996;
Roskies 2011; Lau, Rogers, and Passingham 2006, 2007). Thus, despite the fact that
Libet’s studies remain the most widely heralded neuroscientific challenge to free
will, a growing body of work questions their relevance. A great deal has been written
about Libet’s work, both sympathetic and critical. There is far too much to discuss
here (Mele 2009; Banks and Pockett 2007; Sinnott-Armstrong and Nadel 2011;
Banks 2002; Pacherie 2006). However, because of the experimental and interpre-
tive difficulties with his paradigms, in this author’s eyes, Libet’s studies do little to
undermine the general notion of human freedom.
Challenges to freedom from neuroscience are not restricted to Libet’s results. In
a recent event-related fMRI study probing the timing of motor intentions, Haynes
and colleagues used pattern classification techniques on data from regions of fron-
topolar and parietal cortex to predict a motor decision. Surprisingly, information
that aided prediction was available 7 to 10 seconds before the decision was con-
sciously made, although prediction success prior to the subject’s awareness was only
slightly better than chance (~60 percent; Soon et al. 2008). This study demonstrates
that prior brain states, presumably unconscious, are causally relevant to decision
making. This is unsurprising. Neural precursors to decision and action, and physi-
cal influences on behavior are to be expected from physically embodied cognitive
systems, and finding such signals does not threaten freedom. The authors’ sugges-
tion that this study poses a challenge to free will is therefore somewhat misleading.
Nonetheless, it is perhaps surprising that brain activity could predict arbitrary future
decisions that far in advance: the best explanation of the data is
that there are biases to decision of which we are not conscious. The weak predictive
success of these studies does not undermine our notion of volition or freedom, since
nothing about these studies shows that our choices are dictated by unconscious
intentions, nor do they show that conscious intentions are inefficacious. First, the
subjects moved after making a conscious decision to do so, so the study does not
THE ZOMBIE CHALLENGE
show that conscious intentions are not relevant to action. Second, only if our model
of free will held that (1) absolutely no factors other than the immediate exercise of
the conscious will could play into decision making, and (2) nothing about the prior
state of the brain could affect our decision making (the determination of the will)
would these results undermine our ideas about free will. But no plausible theory of
free will requires these things (indeed, if it did, then no rational decision could be
free). However, because this study shows that brain data provide some information
relevant to future decisions well prior to the act of conscious choice, it nonetheless
raises important challenges to ordinary views about the complete independence
and episodic or momentary nature of arbitrary choice.
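The force of this point can be made concrete with a toy simulation (my own sketch, not Soon et al.'s actual analysis; the bias strength of 0.3 and the simple sign-based decoder are arbitrary assumptions): a simulated "brain feature" that is only weakly correlated with an upcoming binary choice supports prediction only modestly above the 50 percent chance level, much like the roughly 60 percent accuracy reported.

```python
import random

random.seed(1)

def simulate_trial():
    """One trial: a binary choice plus a weakly informative neural feature."""
    choice = random.choice([-1, 1])              # -1 = left, +1 = right
    feature = 0.3 * choice + random.gauss(0, 1)  # weak bias buried in noise
    return feature, choice

# Decode each choice from the sign of the feature and measure accuracy.
trials = [simulate_trial() for _ in range(10000)]
correct = sum(1 for f, c in trials if (1 if f > 0 else -1) == c)
accuracy = correct / len(trials)
print(f"decoding accuracy: {accuracy:.2f}")  # modestly above the 0.50 chance level
```

Weak predictive success of this kind reflects a weak unconscious bias on the choice, not a choice dictated in advance.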
Some studies have attempted to identify areas in which specific intentions are
encoded. For example, Lau and colleagues (Lau, Rogers, Haggard, et al. 2004)
instructed subjects to press a button at will, while attending either to the timing
of their intention to move, or to the movement itself. Attention to intention led to
increased fMRI signal in pre-SMA, DLPFC, and intraparietal sulcus (IPS) relative
to attention to movement. A large body of imaging results indicates that attention
to specific aspects of a cognitive task increases blood flow to regions involved in
processing those aspects (Corbetta et al. 1990; O’Craven et al. 1997). If so, it is rea-
sonable to interpret the aforementioned results as an indication that motor inten-
tion is represented in the pre-SMA. These results are consistent both with evidence
discussed earlier that proximal intentions leading to self-initiated motor activity are
represented in the pre-SMA, and also with the view that conscious intentions are
represented there as well. (For a discussion of how these differ, as well as the dif-
ficulty of determining what is meant by conscious intention, see chapter 2 of Mele
2009.)
In addition to pre-SMA, Lau’s study highlighted frontal and parietal regions often
implicated in intentional action. Hesse and colleagues (Hesse et al. 2006) identify a
frontoparietal network involved in motor planning, including left supramarginal gyrus, IPS, and frontal
regions. The left anterior IPS has also been associated with goal representation, cru-
cial in motor planning (Hamilton and Grafton 2006). Lau’s results are consistent
with the view that posterior parietal regions represent motor intentions (Andersen
and Buneo 2003; Cui and Andersen 2007; Thoenissen, Zilles, and Toni 2002;
Quian Quiroga et al. 2006). Sirigu et al. (2004a) report that damage to parietal cor-
tex disrupts awareness of intention to act, although voluntary action is undisturbed.
The role of posterior parietal cortex in the experience of intention will be further
discussed in a later section.
Often we think of intentions as more abstract plans, not closely related to motor
activity. Little neuroscientific work has focused explicitly on abstract human inten-
tions, in part because it is so difficult to figure out how to measure them objectively.
Frontal cortex is generally thought to be the site of executive function. Many stud-
ies indicate that dorsal prefrontal cortex (DPFC) is active in tasks involving willed
action. Medial parts of DPFC may be involved in thinking about one’s own inten-
tions (den Ouden et al. 2005), whereas DLPFC may be involved in generating
cognitive as well as motor responses (Frith et al. 1991; Hyder et al. 1997; Jenkins
et al. 2000; Lau, Rogers, Haggard, et al. 2004). However, it is difficult to determine
whether the activity observed corresponds to selection, control, or attention to
action. Lau and colleagues (Lau, Rogers, Ramnani, et al. 2004) attempt to control
for working memory and attention in a task that had a free response condition and
an equally attention-demanding specified response condition in order to determine
what areas are involved in selection of action. DLPFC was not more active in the
free choice condition than in the externally specified selection condition, suggesting
that it has more to do with attention to selection than with choice. In contrast,
pre-SMA was more active in free choice than in other conditions. This provides
further evidence that pre-SMA is involved in free selection of action. Moreover,
attention to selection involves DLPFC. Since attention may be required for awareness of intention,
DLPFC activity may be important for conscious intention.
Thus far, the regions discussed are active during intentional tasks, but their
activity reveals little about neural coding of the content of intentions. Using pat-
tern analysis on fMRI data from regions of prefrontal and parietal cortex, Haynes
and colleagues (2007) were able to predict with up to 70 percent accuracy a subject’s
conscious but covert intention to add or subtract numbers. Information related to
specific cognitive intentions is thus present in these regions (including medial, lat-
eral, and frontopolar prefrontal regions) while the subject holds his intended action
in mind. Interestingly, the regions that are predictive appear to be distinct from the
ones generally implicated in representation of intention or endogenous actions,
raising the possibility that information related to the content of intention is rep-
resented differently depending on task. Although these studies do not yet provide
useful information about how intentional content is encoded, they do suggest that
the relevant information in this task is distributed in a coarse enough pattern that
differences are detectable with current technology.
To date, neuroscience has shown that mechanisms underlying endogenous ini-
tiation and selection of action have some features that deviate from commonsensi-
cal conceptions of volition, largely with regard to the relative timing of neural events
and awareness. Recent studies do seem to indicate decisions can be influenced by
factors of which a subject is not conscious. However, this does not warrant the con-
clusion that conscious intentions are inefficacious, that our choices are determined
or predetermined, or that consciousness is epiphenomenal. Although in certain
contexts neural mechanisms of selection and motor intention may be unconsciously
activated, once one takes into account the variety of levels at which intentions oper-
ate (Pacherie 2006; Roskies 2011; Mele 2009), none of the current data undermine
the basic notions of volition or free will. Reports of the death of human freedom
have been greatly exaggerated.
for which we are morally responsible, those actions typically are—or are based
on—decisions in response to reasons (Fischer and Ravizza 1998).
In addition, further studies have extended this perceptual decision paradigm in
novel ways, providing new insight into how the general decision-making paradigm
can incorporate richer, more nuanced and abstract considerations that bear on
human decision making. For example, the firing rate of neurons in LIP that are asso-
ciated with decisions in the visual motion task is also influenced by the expected
value of the outcome and its probability, and these play a role in the decision cal-
culus (Yang and Shadlen 2007; Platt and Glimcher 1999). Outcomes (decisions)
associated with higher reward are more heavily weighted, and the time course of the
rise to threshold occurs more rapidly to outcomes with higher payoff or those the
animal has come to expect to be more likely to occur. The firing of these neurons
seems to encode subjective utility, a variable that incorporates the many aspects
of decision making recognized by classical decision theory (Platt and Glimcher
1999; Glimcher 2001; Dorris and Glimcher 2004). Other studies show that simi-
lar computations occur when the number of decision options is increased beyond
two, suggesting that this sort of model can be generalized to decisions with multiple
outcomes (Churchland, Kiani, and Shadlen 2008). In light of these considerations,
this model system can be considered to be a basic framework for understanding the
central elements of human decision making of the most subtle and nuanced sort.
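The accumulate-to-threshold scheme described above can be sketched in a few lines (a toy illustration of the general idea, not the published LIP model; the drift values, threshold, and noise level are arbitrary assumptions): each option gets an accumulator that integrates noisy evidence, with drift scaled by expected payoff, and the first accumulator to reach threshold fixes the choice and its time of commitment.

```python
import random

random.seed(0)

def race(drift_a, drift_b, threshold=15.0, noise=1.0):
    """Two accumulators integrate noisy evidence; first to threshold wins."""
    a = b = 0.0
    t = 0
    while a < threshold and b < threshold:
        t += 1
        a += drift_a + random.gauss(0, noise)
        b += drift_b + random.gauss(0, noise)
    return ("A" if a >= b else "B"), t

# Option A carries the higher expected payoff (larger drift): it is chosen
# more often than the lower-value option B...
outcomes = [race(drift_a=0.25, drift_b=0.10)[0] for _ in range(1000)]
p_a = outcomes.count("A") / len(outcomes)

# ...and a single accumulator with a larger drift reaches threshold sooner,
# i.e., the rise to threshold is faster for higher-payoff outcomes.
def time_to_threshold(drift, threshold=15.0, noise=1.0):
    x, t = 0.0, 0
    while x < threshold:
        t += 1
        x += drift + random.gauss(0, noise)
    return t

t_high = sum(time_to_threshold(0.25) for _ in range(500)) / 500
t_low = sum(time_to_threshold(0.10) for _ in range(500)) / 500
print(f"P(choose higher-value option) = {p_a:.2f}")
print(f"mean commitment time: high value {t_high:.0f} steps, low value {t_low:.0f} steps")
```

Because the higher-payoff option carries a larger drift, it both wins more often and reaches threshold sooner, which is the qualitative pattern the single-unit recordings show.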
Supposing that the monkey model is an apt model for understanding human
decision making, can it tell us anything about the question of freedom of the will?
Although in most trials the nature of the stimulus itself specifies the correct choice,
in some trials the stimulus does not provide sufficient evidence for the decision,
either because there is enough noise in the stimulus that the monkey must guess the
answer, or because the stimulus itself does not provide any determinative informa-
tion. Monkeys are occasionally presented with random-dot motion displays that
have 0 percent coherent motion. Although there is a visual stimulus, the informa-
tion in the stimulus is ambiguous and unrelated to a “correct” or rewarded choice.
Even in response to identical movies of 0 percent motion, monkeys choose right-
ward and leftward directions seemingly randomly. The monkey’s choices thus can-
not be driven entirely by the external stimulus but must rather be driven by factors
internal to the monkey herself. Recording from LIP neurons during these trials is
instructive: although the activity levels of the populations representing the alterna-
tive choices are nearly evenly matched, slight correlations are found between small
fluctuations in activity in LIP in one direction or another, and the monkey’s ultimate
response (Shadlen and Newsome 2001). This suggests that the monkey’s responses
are indeed driven by competition between these neuronal populations.
Some might take the existence of the correlation between neural firing levels and
choice even in these 0 percent motion cases to be evidence for determinism, while
others could view the stimulus-independent fluctuations as evidence for the exis-
tence and efficacy of random noise in decision making. I think neither position is
warranted, for reasons specified elsewhere (Roskies 2006). One person’s noise is
another person’s signal, and without being able to record from all the neural inputs to
a system, one cannot determine whether such activity is truly due to stochastic vari-
ability of neuronal firing or is activity due to inputs from other parts of a dynamically
modality (Heekeren, Marrett, and Ungerleider 2008). Language may make possible
such representations in humans, or there may be nonlinguistically mediated ways
of encoding abstract intentions.
Despite some shortcomings as a model of human decision making, the mon-
key work on decision encourages us to think about volition mechanistically. Some
philosophers argue that it is not determinism, but the recognition that mechanism
underlies our decisions, that is the most potent challenge to freedom (Nahmias,
Coates, and Kvaran 2007). While there is some evidence to support this notion,
there is much we do not understand about the threat of mechanism, and the rela-
tion of mechanism to reductionism. If mechanism is inimical to freedom, it may
well be that our growing understanding of mechanisms underlying decision making
will undermine our conception of the will as free. Current work in philosophy sug-
gests that the threat of mechanism may arise from misunderstandings about what
mechanism entails (see, e.g., Nahmias and Murray 2010). Thus, it is more likely that
our views about freedom will adapt to embrace the insights this research provides
into the processes underlying our ability to choose among options when the correct
choice is not externally dictated.
Several regions in frontal cortex appear time and time again in studies on voli-
tion. DLPFC is activated in many tasks involving choice or decision making
(Cunnington et al. 2006; Lau, Rogers, Haggard, et al. 2004; Jahanshahi et al. 1995;
Kim and Shadlen 1999; Heekeren et al. 2006). DLPFC has been implicated in
abstract and concrete decisions, as it is activated in choices between actions and in
rule selection (Assad, Rainer, and Miller 1998; Rowe et al. 2008; Bunge et al. 2003;
Bunge 2004; Bunge et al. 2005; Donohue, Wendelken, and Bunge 2008). As noted
earlier, there are competing hypotheses about the role of DLPFC in tasks involving
choice and selection of action, including response selection, conscious deliberation,
and conflict resolution. Although some work suggests that DLPFC activity is reflec-
tive of attention to selection of action (and thus, presumably, conscious control;
Lau, Rogers, Ramnani, et al. 2004), other studies indicate that DLPFC activation is
not always associated with conscious pathways (Lau and Passingham 2007).
DLPFC has also been implicated in more abstract forms of control in humans. For
example, Knoch and Fehr’s (2007) rTMS studies indicate that the capacity to resist
temptation depends on right DLPFC.
Discerning the networks subserving voluntary inhibitory control of action
appears to be more straightforward. Libet, who argued on the basis of his experi-
mental evidence that conscious intention is not causally efficacious in producing
action, consoled himself with the view that the lag between the RP and action
could possibly allow for inhibition of unconsciously generated actions, thus pre-
serving the spirit of free will with “free won’t” (Libet et al. 1983). However, he
left this as pure conjecture. More recent studies have begun to shed light upon
the neural mechanisms of inhibition of intended actions. For example, Brass and
Haggard (2007) recently performed fMRI experiments in which they report
increased activity in frontomedial cortical areas in Libet-like tasks in which sub-
jects are required to intend to respond, and then to choose randomly whether
or not to inhibit that response. They conjecture that these frontomedial areas
are involved in voluntarily inhibiting self-generated action. Similar regions are
involved in decisions to inhibit prepotent responses (Kuhn, Haggard, and Brass
2009). Connectivity analyses suggest that medial frontal inhibition influences
pre-SMA in a top-down fashion (Kuhn, Haggard, and Brass 2009). Other evidence
suggests that inhibition occurs at lower levels in the motor hierarchy as well, for
example, in local cortical networks in primary motor areas (Coxon, Stinear, and
Byblow 2006).
While dorsal medial frontal regions appear to be involved directly in inhibitory
processes, the same regions that mediate voluntary decisions to act appear to be
involved in voluntary decisions to refrain from action. Evidence from both event-
related potential (ERP) and fMRI studies demonstrates that the neural signatures of
intentionally not acting, or deciding not to act after forming an intention to act, look
very much like those of decisions to act (Kuhn, Gevers, and Brass 2009; Kuhn and
Brass 2009b). For example, areas in anterior cingulate cortex and dorsal preSMA are
active in both freely chosen button presses and free decisions not to press a button.
The similar neural basis between decisions to act and to refrain from action lends
credence to the commonsensical notion that both actions and omissions are acts of
the will for which we can be held responsible.
Volition as a Feeling
The experience of willing is an aspect of a multifaceted volitional capacity. Some
think that the conscious will is an illusion, so all there is to explain is the experience
or belief that one wills or intends actions. There are at least two phenomenological
aspects of agency to consider: the awareness of an intention or urge to act that we
identify as occurring prior to action, and the post hoc feeling that an action taken
was one’s own.
With respect to the first, recent results reveal that the experience of voluntary
intention depends upon parietal cortex. Electrical stimulation in this area elicited
motor intentions, and stronger stimulation sometimes led to the erroneous belief
that movement had occurred (Desmurget et al. 1999). In contrast, stimulation of
premotor cortex led to movements without awareness of movement (Desmurget
et al. 2009). In addition, lesions in the inferior parietal lobe alter the awareness of
timing of motor intention. Instead of becoming aware of intentions prior to move-
ment, these lesion patients reported awareness only immediately prior to the time
of movement (Sirigu et al. 2004a). This was not due to an impairment in time per-
ception, as their ability to report movement timing accurately was not impaired.
Although this suggests that awareness of agency relies primarily on parietal rather
than premotor areas, Fried reported that stimulation in SMA also evoked desires to
move. These results may be reconciled, for intentions triggered by stimulation in
SMA, in contrast to those triggered by parietal stimulation, had the phenomenol-
ogy of compulsions more than of voluntary intentions (Fried et al. 1991). Thus, it is
possible that the experience of an impending but not necessarily self-willed action
or urge (like an oncoming sneeze) may be due to frontal areas, while the experience
of voluntarily moving, or being author of the willing, may involve parietal regions.
Considerable progress is also being made in identifying the neural signals
involved in production of the feeling of agency or ownership of action. The feeling
of agency seems to depend on both proprioceptive and perceptual feedback from
the effects of the action (Pacherie 2008; Kuhn and Brass 2009a; Moore et al. 2009;
Moore and Haggard 2008; Tsakiris et al. 2005). A number of studies indicate that
plans for action are often accompanied by efferent signals that allow the system to
form expectations for further sensory feedback that, if not violated, contribute to the
feeling of agency (Linser and Goschke 2007; Sirigu et al. 2004b). Grafton and col-
leagues found activation in right angular gyrus (inferior parietal cortex) in cases of
discrepancy between anticipated and actual movement outcome, and in awareness
of authorship (Farrer et al. 2008). Signals from parietal cortex when predictions of
a forward model match sensory or proprioceptive information may be important in
creating the sense of agency. Moreover, some aspects of awareness of agency seem
constructed retrospectively. A recent study shows that people’s judgments about
the time of formation of intention to move can be altered by time-shifting sensory
feedback, leading to the suggestion that awareness of intention is inferred at least in
part from responses, rather than directly perceived (Banks and Isham 2009). These
studies lend credence to criticisms that the Libet measurement paradigm may affect
the reported time of awareness of intention (Lau, Rogers, and Passingham 2006,
2007). In addition, perceived onset of action relative to effects is modulated by
The Neuroscience of Volition 49
whether the actor perceives the action as volitional (Engbert, Wohlschlager, and
Haggard 2008; Haggard 2008). TMS over SMA after action execution also affects
the reported time of awareness of intention (Lau, Rogers, and Passingham 2007),
further evidence that awareness of intention is in part reconstruction.
These results are consistent with a model in which parietal cortex generates motor
intentions and a predictive signal or forward model for behavior during voluntary
action. The motor plans are relayed to frontal regions for execution, and activation
of these regions may be crucial for aspects of awareness of intention and timing. At
the same time, parietal regions compare the internal predictions with sensory feed-
back, though some hypothesize that this comparison is performed in premotor cor-
tex (Desmurget and Sirigu 2009). Feedback signals alone are insufficient for a sense
of authorship (Tsakiris et al. 2005). When signals match, we may remain unaware
of our motor intentions (Sirigu et al. 2004a, 2004b), yet perceive the actions as
our own. We may only be made aware of our motor intentions when discrepancies
between the forward model and information from perception are detected. Thus,
both an efferent internal model and feedback from the environment are important
in the perception of agency and self-authorship (Moore et al. 2009).
Under normal circumstances, we experience our voluntary actions as voluntary.
Under abnormal circumstances, people may wrongly attribute, or fail to attribute,
agency to themselves (Wegner and Wheatley 1999; Wegner 2002). That feelings
of agency can mislead has led some to suggest that it is merely an illusion that the
will is causally efficacious (Wegner 2002; Hallett 2007). Some may take the neu-
roscience data to suggest that feelings of agency are post hoc inferences, and on a
certain view of inference, they are. However, inferences are often a good route to
knowledge, and although experience of agency is not always veridical, we should
not conclude that in general, feelings of agency do not reflect actual agency, that the
will is not causally efficacious, or that free will is nothing more than a feeling. The
mere fact that the experience of volition has neural underpinnings is also not a basis
for denying freedom of the will. Indeed, if the same neural mechanisms that under-
lie motor intention lead to movement and a predictive signal that is corroborated by
action, then under normal conditions feelings of agency should be good evidence
for the operation of the will. Understanding better the interactions between circuits
mediating the experience of agency and those involved in initiation of movement,
formation of intention, and so on, may explain how these various aspects of volition
are related and how they can be dissociated, with particular forms of brain damage
or given certain arrangements of external events.
Intentions as currently conceived are likely to underlie both kinds of action, but
they may be generated or activated differently (Pacherie 2008). It is likely that more
abstract or distal intentions are not closely tied to motor activity, but they are likely
to be formed by processes similar to those discussed in the section on decision
making (Bratman 1999, 2007). The particular paradigm discussed in that section
involves aspects of both exogenous and endogenous action initiation, since the deci-
sions are driven by perceptual information, but the corresponding motor response
is endogenously produced by the monkey. In all these studies there is some element
of executive control, usually involving conscious execution of the task at hand, and
occasionally involving inhibition of response, or at least the potential for such inhi-
bition or self-regulation. Although executive control is an imprecise, broad term, and it does not suffice for characterizing a voluntary act, it is likely that the possibility of such control is a necessary condition on voluntary action. It may be
that the importance of consciousness in volition will be found to be at the level of
executive function rather than at lower levels of processing. The phenomenologi-
cal aspects of volition may also describe ways in which consciousness manifests in
volition, but whether the phenomenology is necessary is doubtful, and it is clearly
not sufficient. Illusions of agency are often invoked as evidence that free will is only
an illusion. However, even though the phenomenology of willing is clearly sepa-
rable from agency, because the phenomenology of willing is normally a result of the
operation of the neural systems subserving agency, the feeling of willing tends to be
good evidence for willing.
Paralleling the conceptual connections are neurobiological connections between
these various aspects. While preSMA and motor regions are important for action
initiation, and possibly for proximal intentions, these areas form part of a larger
network with parietal regions that are involved in planning and prediction of future
action and with aspects of decision, as well as with frontal regions mediating vari-
ous aspects of executive control. The overlap between the neurobiological networks
identified by these different research foci is reassuring, given the ways in which the
conceptual aspects of volition converge and interact. Despite the gradual unfolding
of a coherent picture, however, it is not yet possible to identify the specific neurocir-
cuitry of volition or agency, nor is it clear that the goal of doing so is well conceived.
Indeed, neuroscience appears to reveal volition not to be a single spark or unitary
faculty but rather a collection of largely separable processes that together make pos-
sible flexible, intelligent action. Further elucidation of brain networks may provide
a better way of taxonomizing the elements of volition (Brass and Haggard 2008;
Pacherie 2006, 2008). For now, however, one of the most significant contributions
neuroscience has made has been in allowing us to formulate novel questions about
the nature of voluntary behavior, and in providing new ways of addressing them.
FINAL THOUGHTS
It is difficult, if not impossible, to disentangle our notion of volition from questions
about human freedom. The construct of volition largely exists in order to explain the
possibility, nature, or feeling of autonomous agency. On the whole, neuroscience
has not undermined our conception of volition, or of freedom. It has maintained
in large part notions of intention, choice, and the experience of agency. However,
although not posing a direct threat to freedom, neuroscience promises to alter some
of our views about volition. How radical an alteration remains to be seen.
First, neuroscience may affect views on volition and its relation to free will by
illuminating the mechanisms underlying these constructs. Merely demonstrating
mechanism seems to affect the layperson’s views on freedom (Nahmias, Coates,
and Kvaran 2007; Monterosso, Royzman, and Schwartz 2005). However, on the
assumption that dualism is false, mechanistic or causal explanation alone is insuf-
ficient for undermining freedom or for showing will to be an illusion. Thinking of
volition more mechanistically than we currently do may ultimately put pressure on
ordinary notions of what is required for freedom, and the changes may be salutary,
forcing the folk to abandon incoherent notions such as uncaused causes.
Do neuroscientific results show volition to have or lack characteristics that com-
port with our intuitive or theoretically informed notions of the requirements for
freedom of the will? So far, the greatest effect of neuroscience has been to challenge
traditional views of the relationship between consciousness and action. For exam-
ple, although neuroscience seems to ratify the role of intention in action, it does
alter our notions about the degree to which conscious processes alone are causally
effective in generating choice and initiating action. To the extent that the folk con-
ception of volition is wedded to the notion that action is caused and solely caused
by conscious intention, some results challenge this conception. Converging evi-
dence from neuroscience and psychology makes it clear that factors in addition to
consciousness influence our choices. Although the relevant literature on automatic
processes is not reviewed here, this is not unique to volition: more aspects of behav-
ior than previously imagined appear to be at least partly influenced by unconscious
processes. However, it would be a mistake to conclude from this that conscious pro-
cesses are not causally efficacious or that they are epiphenomenal. Moreover, merely
showing that we can be mistaken about our intentions or that there are unconscious
antecedents to conscious behavior does not warrant the conclusion that conscious
intentions do not often or usually play a role in voluntary action. At this time I do not believe the data strongly support the claim that action initiation precedes conscious intention. However, future work may yet affect our beliefs about the relative
timing of conscious processes and action initiation.
More likely, neuroscience may change the way we conceive of conscious inten-
tion. The studies described here suggest that in normal circumstances we do not
experience our intentions as urges or feelings, but rather are made aware of our
intentions when our actions and intentions fail to match. While some might take
this to indicate that normally our intentions are not conscious, we could rather
modify our views of conscious intention. For example, perhaps conscious inten-
tions are not intentions that we are occurrently conscious of, but rather, they are
intentions whose goals or aims we are conscious of, or that we consciously adopt or
endorse, and that play a particular role in action. While this is consonant with some
recent views about the role of consciousness in intention (Mele 2009), it perhaps
marks a revision to the commonsense conception of the nature of conscious inten-
tion operative in volition. It may be that future advances in understanding the neural
basis of consciousness will show even the most sophisticated views to be mistaken.
ACKNOWLEDGMENTS
This chapter was adapted from a paper in Annual Review of Neuroscience (Roskies
2010). The work was supported in part by an NEH collaborative research grant to
the Johns Hopkins Berman Institute of Bioethics, and by the MacArthur Project in
Law and Neuroscience. I would like to thank Nancy McConnell, Al Mele, Shaun
Nichols, Walter Sinnott-Armstrong, and Tillmann Vierkant for comments on an
earlier draft.
NOTE
1. Libertarian freedom refers to freedom that depends upon indeterministic events or
choices.
REFERENCES
Andersen, Richard A., and Christopher A. Buneo. 2002. Intentional maps in posterior
parietal cortex. Annual Review of Neuroscience 25 (1):189–220.
Andersen, R. A., and C. A. Buneo. 2003. Sensorimotor integration in posterior parietal
cortex. Advances in Neurology 93:159–177.
Aron, Adam R., Tim E. Behrens, Steve Smith, Michael J. Frank, and Russell A. Poldrack.
2007. Triangulating a cognitive control network using diffusion-weighted mag-
netic resonance imaging (MRI) and functional MRI. Journal of Neuroscience 27
(14):3743–3752.
Asaad, Wael F., Gregor Rainer, and Earl K. Miller. 1998. Neural activity in the primate
prefrontal cortex during associative learning. Neuron 21 (6):1399–1407.
Audi, Robert. 1993. Volition and agency. In Action, intention, and reason, edited by R. Audi.
Ithaca, NY: Cornell University Press.
Badre, David. 2008. Cognitive control, hierarchy, and the rostro-caudal organization of
the frontal lobes. Trends in Cognitive Sciences 12 (5):193–200.
Balaguer, Mark. 2004. A coherent, naturalistic, and plausible formulation of libertarian
free will. Nous 38 (3):379–406.
Banks, W. P., ed. 2002. Consciousness and cognition. Vol. 11. Academic Press.
Banks, W. P., and E. A. Isham. 2009. We infer rather than perceive the moment we decided
to act. Psychological Science 20 (1):17–21.
Banks, William P., and Susan Pockett. 2007. Benjamin Libet’s work on the neuroscience of
free will. In Blackwell companion to consciousness, edited by M. Velmans and S. Schneider
(pp. 657–670). Malden, MA: Blackwell.
Barch, Deanna M., Todd S. Braver, Fred W. Sabb, and Douglas C. Noll. 2000. Anterior cin-
gulate and the monitoring of response conflict: Evidence from an fMRI study of overt
verb generation. Journal of Cognitive Neuroscience 12 (2):298–309.
Bittner, T. 1996. Consciousness and the act of will. Philosophical Studies 81:331–341.
Bode, S., and J. D. Haynes. 2009. Decoding sequential stages of task preparation in the
human brain. Neuroimage 45 (2):606–613.
Brass, M., and P. Haggard. 2007. To do or not to do: The neural signature of self-control.
Journal of Neuroscience 27 (34):9141–9145.
Brass, Marcel, and Patrick Haggard. 2008. The what, when, whether model of intentional
action. Neuroscientist 14 (4):319–325.
Bratman, Michael E. 1999. Intention, plans, and practical reason. Stanford, CA: Center for
the Study of Language and Information.
Bratman, Michael. 2007. Structures of agency: Essays. New York: Oxford University Press.
Britten, K. H., M. N. Shadlen, W. T. Newsome, and J. A. Movshon. 1992. The analysis of
visual motion: A comparison of neuronal and psychophysical performance. Journal of
Neuroscience 12 (12):4745–4765.
Brown, J. W., D. P. Hanes, J. D. Schall, and V. Stuphorn. 2008. Relation of frontal eye
field activity to saccade initiation during a countermanding task. Experimental Brain
Research 190 (2):135–151.
Bunge, Silvia A. 2004. How we use rules to select actions: A review of evidence
from cognitive neuroscience. Cognitive, Affective, and Behavioral Neuroscience 4
(4):564–579.
Bunge, Silvia A., Itamar Kahn, Jonathan D. Wallis, Earl K. Miller, and Anthony D. Wagner.
2003. Neural circuits subserving the retrieval and maintenance of abstract rules. Journal
of Neurophysiology 90 (5):3419–3428.
Bunge, Silvia A., Jonathan D. Wallis, Amanda Parker, Marcel Brass, Eveline A. Crone, Eiji
Hoshi, and Katsuyuki Sakai. 2005. Neural circuitry underlying rule use in humans and
nonhuman primates. Journal of Neuroscience. 25 (45):10347–10350.
Celebrini, S., and W. T. Newsome. 1994. Neuronal and psychophysical sensitivity to
motion signals in extrastriate area MST of the macaque monkey. Journal of Neuroscience
14 (7):4109–4124.
Chiu, Yu-Chin, and Steven Yantis. 2009. A domain-independent source of cognitive con-
trol for task sets: Shifting spatial attention and switching categorization rules. Journal of
Neuroscience 29 (12):3930–3938.
Churchland, Anne K., Roozbeh Kiani, and Michael N. Shadlen. 2008. Decision-making
with multiple alternatives. Nature Neuroscience 11 (6):693–702.
Corbetta, M., F. M. Miezin, S. Dobmeyer, G. L. Shulman, and S. E. Petersen. 1990.
Attentional modulation of neural processing of shape, color, and velocity in humans.
Science 248:1556–1559.
Coxon, J. P., C. M. Stinear, and W. D. Byblow. 2006. Intracortical inhibition during voli-
tional inhibition of prepared action. Journal of Neurophysiology 95 (6):3371–3383.
Cui, He, and Richard A. Andersen. 2007. Posterior parietal cortex encodes autonomously
selected motor plans. Neuron 56 (3):552–559.
Cunnington, R., C. Windischberger, L. Deecke, and E. Moser. 2002. The preparation and
execution of self-initiated and externally-triggered movement: A study of event-related
fMRI. Neuroimage 15 (2):373–385.
Cunnington, R., C. Windischberger, S. Robinson, and E. Moser. 2006. The selection of
intended actions and the observation of others’ actions: A time-resolved fMRI study.
Neuroimage 29 (4):1294–1302.
Deecke, L., and H. H. Kornhuber. 1978. An electrical sign of participation of the mesial
“supplementary” motor cortex in human voluntary finger movement. Brain Research
159:473–476.
Deiber, Marie-Pierre, Manabu Honda, Vicente Ibanez, Norihiro Sadato, and Mark Hallett.
1999. Mesial motor areas in self-initiated versus externally triggered movements
examined with fMRI: Effect of movement type and rate. Journal of Neurophysiology 81
(6):3065–3077.
den Ouden, H. E., U. Frith, C. Frith, and S. J. Blakemore. 2005. Thinking about intentions.
Neuroimage 28 (4):787–796.
Desmurget, M., C. M. Epstein, R. S. Turner, C. Prablanc, G. E. Alexander, and S. T.
Grafton. 1999. Role of the posterior parietal cortex in updating movements to a visual
target. Nature Neuroscience 2:563–567.
Desmurget, Michel, Karen T. Reilly, Nathalie Richard, Alexandru Szathmari, Carmine
Mottolese, and Angela Sirigu. 2009. Movement intention after parietal cortex stimula-
tion in humans. Science 324 (5928):811–813.
Desmurget, Michel, and Angela Sirigu. 2009. A parietal-premotor network for movement
intention and motor awareness. Trends in Cognitive Sciences 13 (10):411–419.
Donohue, S. E., C. Wendelken, and Silvia A. Bunge. 2008. Neural correlates of prepara-
tion for action selection as a function of specific task demands. Journal of Cognitive
Neuroscience 20 (4):694–706.
Dorris, Michael C., and Paul W. Glimcher. 2004. Activity in posterior parietal cor-
tex is correlated with the relative subjective desirability of action. Neuron 44
(2):365–378.
Dosenbach, Nico U. F., Damien A. Fair, Alexander L. Cohen, Bradley L. Schlaggar, and
Steven E. Petersen. 2008. A dual-networks architecture of top-down control. Trends in
Cognitive Sciences 12 (3):99–105.
Dosenbach, Nico U. F., Damien A. Fair, Francis M. Miezin, Alexander L. Cohen, Kristin
K. Wenger, Ronny A. T. Dosenbach, Michael D. Fox, Abraham Z. Snyder, Justin L.
Vincent, Marcus E. Raichle, Bradley L. Schlaggar, and Steven E. Petersen. 2007.
Distinct brain networks for adaptive and stable task control in humans. Proceedings of
the National Academy of Sciences 104 (26):11073–11078.
Engbert, K., A. Wohlschlager, and P. Haggard. 2008. Who is causing what? The sense of
agency is relational and efferent-triggered. Cognition 107 (2):693–704.
Farrer, C., S. H. Frey, J. D. Van Horn, E. Tunik, D. Turk, S. Inati, and S. T. Grafton. 2008.
The angular gyrus computes action awareness representations. Cerebral Cortex 18
(2):254–261.
Farrer, C., and C. D. Frith. 2002. Experiencing oneself vs. another person as being the
cause of an action: The neural correlates of the experience of agency. Neuroimage
15:596–603.
Fischer, J., and M. Ravizza. 1998. Responsibility and control: A theory of moral responsibility.
Cambridge: Cambridge University Press.
Fleming, Stephen M., Rogier B. Mars, Thomas E. Gladwin, and Patrick Haggard. 2009.
When the brain changes its mind: Flexibility of action selection in instructed and free
choices. Cerebral Cortex 19 (10):2352–2360.
Fogassi, L., P. F. Ferrari, B. Gesierich, S. Rozzi, F. Chersi, and G. Rizzolatti. 2005.
Parietal lobe: From action organization to intention understanding. Science 308
(5722):662–667.
Franks, Kevin M., Charles F. Stevens, and Terrence J. Sejnowski. 2003. Independent
sources of quantal variability at single glutamatergic synapses. Journal of Neuroscience
23 (8):3186–3195.
Fried, I., A. Katz, G. McCarthy, K. J. Sass, P. Williamson, S. S. Spencer, and D. D. Spencer.
1991. Functional organization of human supplementary motor cortex studied by elec-
trical stimulation. Journal of Neuroscience 11 (11):3656–3666.
Frith, C. D., K. Friston, P. F. Liddle, and R. S. J. Frackowiak. 1991. Willed action and the
prefrontal cortex in man: A study with PET. Proceedings of the Royal Society of London
B 244:241–246.
Glimcher, Paul W. 2001. Making choices: The neurophysiology of visual-saccadic deci-
sion making. Trends in Neurosciences 24 (11):654–659.
Glimcher, Paul W. 2003. The neurobiology of visual-saccadic decision making. Annual
Review of Neuroscience 26 (1):133–179.
Glimcher, Paul W. 2005. Indeterminacy in brain and behavior. Annual Review of Psychology
56 (1):25–56.
Gold, Joshua I., and Michael N. Shadlen. 2000. Representation of a perceptual decision in
developing oculomotor commands. Nature 404 (6776):390–394.
Gold, Joshua I., and Michael N. Shadlen. 2007. The neural basis of decision making.
Annual Review of Neuroscience 30 (1):535–574.
Haggard, P. 2008. Human volition: Towards a neuroscience of will. Nature Reviews
Neuroscience 9 (12):934–946.
Haggard, Patrick, and Martin Eimer. 1999. On the relation between brain potentials and
the awareness of voluntary movements. Experimental Brain Research 126:128–133.
Hallett, Mark. 2007. Volitional control of movement: The physiology of free will. Clinical
Neurophysiology 118 (6):1179–1192.
Hamilton, A. F., and S. T. Grafton. 2006. Goal representation in human anterior intrapari-
etal sulcus. Journal of Neuroscience 26 (4):1133–1137.
Hanks, Timothy D., Jochen Ditterich, and Michael N. Shadlen. 2006. Microstimulation
of macaque area LIP affects decision-making in a motion discrimination task. Nature
Neuroscience 9 (5):682–689.
Haynes, J. D., K. Sakai, G. Rees, S. Gilbert, C. Frith, and R. E. Passingham. 2007. Reading
hidden intentions in the human brain. Current Biology 17 (4):323–328.
Heekeren, H. R., S. Marrett, D. A. Ruff, P. A. Bandettini, and L. G. Ungerleider. 2006.
Involvement of human left dorsolateral prefrontal cortex in perceptual decision mak-
ing is independent of response modality. Proceedings of the National Academy of Sciences
103 (26):10023–10028.
Heekeren, Hauke R., Sean Marrett, and Leslie G. Ungerleider. 2008. The neural sys-
tems that mediate human perceptual decision making. Nature Reviews Neuroscience 9
(6):467–479.
Hesse, M. D., C. M. Thiel, K. E. Stephan, and G. R. Fink. 2006. The left parietal cortex
and motor intention: An event-related functional magnetic resonance imaging study.
Neuroscience 140 (4):1209–1221.
Huk, Alexander C., and Michael N. Shadlen. 2005. Neural activity in macaque parietal
cortex reflects temporal integration of visual motion signals during perceptual decision
making. Journal of Neuroscience 25 (45):10420–10436.
Hyder, Fahmeed, Elizabeth A. Phelps, Christopher J. Wiggins, Kevin S. Labar, Andrew
M. Blamire, and Robert G. Shulman. 1997. “Willed action”: A functional MRI study
of the human prefrontal cortex during a sensorimotor task. Proceedings of the National
Academy of Sciences 94 (13):6989–6994.
Jahanshahi, Marjan, I. Harri Jenkins, Richard G. Brown, C. David Marsden, Richard
E. Passingham, and David J. Brooks. 1995. Self-initiated versus externally triggered
movements: I. An investigation using measurement of regional cerebral blood
flow with PET and movement-related potentials in normal and Parkinson’s disease
subjects. Brain 118 (4):913–933.
Jenkins, I. Harri, Marjan Jahanshahi, Markus Jueptner, Richard E. Passingham, and David
J. Brooks. 2000. Self-initiated versus externally triggered movements: II. The effect of
movement predictability on regional cerebral blood flow. Brain 123 (6):1216–1228.
Kennerley, Steve W., K. Sakai, and M. F. S. Rushworth. 2004. Organization of action
sequences and the role of the pre-SMA. Journal of Neurophysiology 91 (2):978–993.
Kerns, John G., Jonathan D. Cohen, Angus W. MacDonald III, Raymond Y. Cho, V.
Andrew Stenger, and Cameron S. Carter. 2004. Anterior cingulate conflict monitoring
and adjustments in control. Science 303 (5660):1023–1026.
Kiani, Roozbeh, and Michael N. Shadlen. 2009. Representation of confidence associated
with a decision by neurons in the parietal cortex. Science 324 (5928):759–764.
Kim, Jong-Nam, and Michael N. Shadlen. 1999. Neural correlates of a decision in the dor-
solateral prefrontal cortex of the macaque. Nature Neuroscience 2 (2):176–185.
Knoch, D., and E. Fehr. 2007. Resisting the power of temptations: The right prefrontal
cortex and self-control. Annals of the New York Academy of Sciences 1104:123–134.
Kuhn, S., and M. Brass. 2009a. Retrospective construction of the judgement of free choice.
Consciousness and Cognition 18 (1):12–21.
Kuhn, Simone, and Marcel Brass. 2009b. When doing nothing is an option: The neural
correlates of deciding whether to act or not. Neuroimage 46 (4):1187–1193.
Kuhn, S., W. Gevers, and M. Brass. 2009. The neural correlates of intending not to do
something. Journal of Neurophysiology 101 (4):1913–1920.
Kuhn, Simone, Patrick Haggard, and Marcel Brass. 2009. Intentional inhibition: How the
“veto-area” exerts control. Human Brain Mapping 30 (9):2834–2843.
Lau, Hakwan C., and Richard E. Passingham. 2007. Unconscious activation of the cog-
nitive control system in the human prefrontal cortex. Journal of Neuroscience 27
(21):5805–5811.
Lau, Hakwan C., Robert D. Rogers, and Richard E. Passingham. 2006. On measuring the
perceived onsets of spontaneous actions. Journal of Neuroscience 26 (27):7265–7271.
Lau, H. C., R. D. Rogers, P. Haggard, and R. E. Passingham. 2004. Attention to intention.
Science 303 (5661):1208–1210.
Lau, H. C., R. D. Rogers, and R. E. Passingham. 2007. Manipulating the experienced onset
of intention after action execution. Journal of Cognitive Neuroscience 19 (1):81–90.
Lau, H. C., R. D. Rogers, N. Ramnani, and R. E. Passingham. 2004. Willed action and
attention to the selection of action. Neuroimage 21 (4):1407–1415.
Leon, Matthew I., and Michael N. Shadlen. 1999. Effect of expected reward magnitude on
the response of neurons in the dorsolateral prefrontal cortex of the macaque. Neuron
24 (2):415–425.
Libet, Benjamin. 1985. Unconscious cerebral initiative and the role of conscious will in
voluntary action. Behavioral and Brain Sciences 8:529–566.
Libet, Benjamin, Curtis A. Gleason, Elwood W. Wright, and Dennis K. Pearl. 1983. Time of
conscious intention to act in relation to onset of cerebral activity (readiness-potential):
The unconscious initiation of a freely voluntary act. Brain 106 (3):623–642.
Libet, Benjamin, E. W. Wright Jr., and Curtis A. Gleason. 1982. Readiness-potentials preced-
ing unrestricted “spontaneous” vs. pre-planned voluntary acts. Electroencephalography
and Clinical Neurophysiology 54:322–335.
Libet, Benjamin, E. W. Wright Jr., and Curtis A. Gleason. 1983. Preparation or intention-to-
act, in relation to pre-event potentials recorded at the vertex. Electroencephalography
and Clinical Neurophysiology 56:367–372.
Linser, K., and T. Goschke. 2007. Unconscious modulation of the conscious experience of
voluntary control. Cognition 104 (3):459–475.
Mainen, Zachary F., and Terrence J. Sejnowski. 1995. Reliability of spike timing in neo-
cortical neurons. Science 268 (5216):1503–1506.
Matsuhashi, M., and M. Hallett. 2008. The timing of the conscious intention to move.
European Journal of Neuroscience 28 (11):2344–2351.
Matsumoto, Kenji, Wataru Suzuki, and Keiji Tanaka. 2003. Neuronal correlates of
goal-based motor selection in the prefrontal cortex. Science 301 (5630):229–232.
Mazurek, Mark E., Jamie D. Roitman, Jochen Ditterich, and Michael N. Shadlen. 2003.
A role for neural integrators in perceptual decision making. Cerebral Cortex 13
(11):1257–1269.
Mele, Alfred. 2005. Motivation and agency. New York: Oxford University Press.
Mele, Alfred. 2006. Free will and luck. Oxford: Oxford University Press.
Mele, Alfred. 2009. Effective intentions: The power of conscious will. New York: Oxford
University Press.
Monterosso, John, Edward B. Royzman, and Barry Schwartz. 2005. Explaining away
responsibility: Effects of scientific explanation on perceived culpability. Ethics and
Behavior 15 (2):139–158.
Moore, James, and Patrick Haggard. 2008. Awareness of action: Inference and prediction.
Consciousness and Cognition 17 (1):136–144.
Moore, James W., David Lagnado, Darvany C. Deal, and Patrick Haggard. 2009.
Feelings of control: Contingency determines experience of action. Cognition 110
(2):279–283.
Mueller, Veronika A., Marcel Brass, Florian Waszak, and Wolfgang Prinz. 2007. The role
of the preSMA and the rostral cingulate zone in internally selected actions. Neuroimage
37 (4):1354–1361.
Nahmias, Eddy, D. Justin Coates, and Trevor Kvaran. 2007. Free will, moral responsi-
bility, and mechanism: Experiments on folk intuitions. Midwest Studies in Philosophy
31:215–242.
Nahmias, Eddy, and Dylan Murray. 2010. Experimental philosophy on free will: An error
theory for incompatibilist intuitions. In New waves in philosophy of action, edited by J.
Aguilar, A. Buckareff, and K. Frankish. New York: Palgrave-Macmillan.
Newsome, William T., K. H. Britten, and J. A. Movshon. 1989. Neuronal correlates of a
perceptual decision. Nature 341:52–54.
O’Craven, Kathy M., Bruce R. Rosen, Ken K. Kwong, Anne M. Treisman, and Robert L.
Savoy. 1997. Voluntary attention modulates fMRI activity in human MT-MST. Neuron
18 (4):591–598.
O’Doherty, John, et al. 2001. Abstract reward and punishment representations in the
human orbitofrontal cortex. Nature Neuroscience 4 (1):95–102.
Pacherie, Elisabeth. 2006. Toward a dynamic theory of intentions. In Does consciousness
cause behavior?, edited by S. Pockett, W. P. Banks, and S. Gallagher. Cambridge, MA:
MIT Press.
Pacherie, Elisabeth. 2008. The phenomenology of action: A conceptual framework.
Cognition 107 (1):179–217.
Palmer, John, Alexander C. Huk, and Michael N. Shadlen. 2005. The effect of stimu-
lus strength on the speed and accuracy of a perceptual decision. Journal of Vision 5
(5):376–404.
Pesaran, Bijan, Matthew J. Nelson, and Richard A. Andersen. 2008. Free choice activates a
decision circuit between frontal and parietal cortex. Nature 453 (7193):406–409.
Platt, Michael L., and Paul W. Glimcher. 1999. Neural correlates of decision variables in
parietal cortex. Nature 400 (6741):233–238.
Sinnott-Armstrong, W., and Nadel, L., eds. 2011. Conscious will and responsibility. New
York: Oxford University Press.
Sirigu, Angela, Elena Daprati, Sophie Ciancia, Pascal Giraux, Norbert Nighoghossian,
Andres Posada, and Patrick Haggard. 2004a. Altered awareness of voluntary action
after damage to the parietal cortex. Nature Neuroscience 7 (1):80–84.
Sirigu, Angela, Elena Daprati, Sophie Ciancia, Pascal Giraux, Norbert Nighoghossian,
Andres Posada, and Patrick Haggard. 2004b. Mere expectation to move causes attenu-
ation of sensory signals. Nature Neuroscience 7 (1):80–84.
Soon, Chun Siong, Marcel Brass, Hans-Jochen Heinze, and John-Dylan Haynes. 2008.
Unconscious determinants of free decisions in the human brain. Nature Neuroscience
11 (5):543–545.
Sumner, Petroc, Parashkev Nachev, Peter Morris, Andrew M. Peters, Stephen R. Jackson,
Christopher Kennard, and Masud Husain. 2007. Human medial frontal cortex medi-
ates unconscious inhibition of voluntary action. Neuron 54 (5):697–711.
Thaler, D., Y. C. Chen, P. D. Nixon, C. E. Stern, and R. E. Passingham. 1995. The func-
tions of the medial premotor cortex. I. Simple learned movements. Experimental Brain
Research 102 (3):445–460.
Thoenissen, D., K. Zilles, and I. Toni. 2002. Differential involvement of parietal and pre-
central regions in movement preparation and motor intention. Journal of Neuroscience
22 (20):9024–9034.
Trevena, Judy Arnel, and Jeff Miller. 2002. Cortical movement preparation before and
after a conscious decision to move. Consciousness and Cognition 11 (2):162–190.
Tsakiris, Manos, Patrick Haggard, Nicolas Franck, Nelly Mainy, and Angela Sirigu. 2005.
A specific role for efferent information in self-recognition. Cognition 96:215–231.
Wegner, Daniel. 2002. The illusion of conscious will. Cambridge, MA: MIT Press.
Wegner, Daniel, and T. Wheatley. 1999. Apparent mental causation: Sources of the experi-
ence of will. American Psychologist 54:480–492.
Yang, Tianming, and Michael N. Shadlen. 2007. Probabilistic reasoning by neurons.
Nature 447 (7148):1075–1080.
Young, Gary. 2006. Preserving the role of conscious decision making in the initiation of
intentional action. Journal of Consciousness Studies 13:51–68.
Zhu, Jing. 2004a. Intention and volition. Canadian Journal of Philosophy 34 (2):175–193.
Zhu, Jing. 2004b. Understanding volition. Philosophical Psychology 17 (2):247–273.
3
Beyond Libet
Long-Term Prediction of Free Choices from Neuroimaging Signals
JOHN-DYLAN HAYNES
INTRODUCTION
It is a common folk-psychological intuition that we can freely choose between
different behavioral options. Even a simple, restricted movement task with only a
single degree of freedom can be sufficient to yield this intuition, say in an experi-
ment where a subject is asked to “move a finger at some point of their own choice.”
Although such a simple decision might not be perceived as being as important as,
say, a decision to study at one university or another, most subjects feel it is a useful
example of a specific type of freedom that is often experienced when making decisions:
they have the impression that the outcome of many decisions is not predetermined at
the time they are felt to be made, and that they are instead still “free” to choose one
way or the other.
This belief in the freedom of decisions is fundamental to our human self-concept.
It is so strong that it is generally maintained even though it contradicts several other
core beliefs. For example, freedom appears to be incompatible with the nature of our
universe. The deterministic, causally closed physical world seems to stand in the way
of “additional” and “unconstrained” influences on our behavior from mental facul-
ties that exist beyond the laws of physics. Interestingly, in most people’s (and even in
some philosophers’)1 minds, the incompatible beliefs in free will and in determin-
ism coexist happily without any apparent conflict. One reason most people don’t
perceive this as a conflict might be that our belief in freedom is so deeply embed-
ded in our everyday thoughts and behavior that the rather abstract belief in physical
determinism is simply not strong enough to compete. The picture changes, however,
with direct scientific demonstrations that our choices are determined by the brain.
People are immensely fascinated by scientific experiments that directly expose how
our seemingly free decisions are systematically related to prior brain activity.
[Figure 3.1: (a) the letter-stream task (one letter per 500 ms) with a free left/right button
response, intention onset, and judgment of the letter on screen; (b) pattern-based decoding
for each brain position using spherical voxel clusters (r = 3) in each fMRI image.]
Figure 3.1 (a) The revised Libet task. Subjects are given two response buttons, one
for the left and one for the right hand. In parallel there is a stream of letters on the screen
that changes every 500 milliseconds. They are asked to relax and to decide at some
spontaneous point of their own choice to press either the left or the right button. Once the
button is pressed, they are asked to report which letter was on the screen when they made
up their mind. (b) Pattern-based decoding and prediction of decisions ahead of time.
Using a searchlight technique (Kriegeskorte et al. 2006; Haynes et al. 2007; Soon
et al. 2008), we assessed for each brain region and each time point preceding the decision
whether it is possible to decode the choice ahead of time. Decoding is based on small local
spherical clusters of voxels that form three-dimensional spatial patterns. This allowed us
to systematically investigate which brain regions had predictive information at each time
point preceding the decision.
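As a rough illustration of the pattern-based searchlight idea described in the caption, the following sketch decodes a binary choice from small spherical voxel clusters. It is not the published pipeline: the data are synthetic, the grid size and sphere radius are arbitrary, and a simple nearest-class-mean classifier with leave-one-out cross-validation stands in for the classifiers actually used in these studies.

```python
import numpy as np

def searchlight_accuracy(data, labels, center, radius=1):
    """Leave-one-out decoding accuracy for the spherical voxel cluster
    centered at `center`. `data` has shape (n_trials, nx, ny, nz)."""
    n_trials, nx, ny, nz = data.shape
    cx, cy, cz = center
    labels = np.asarray(labels)
    # voxel coordinates inside the sphere of the given radius
    coords = [(x, y, z)
              for x in range(max(0, cx - radius), min(nx, cx + radius + 1))
              for y in range(max(0, cy - radius), min(ny, cy + radius + 1))
              for z in range(max(0, cz - radius), min(nz, cz + radius + 1))
              if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius ** 2]
    # one spatial activity pattern per trial
    patterns = np.array([[data[t][v] for v in coords] for t in range(n_trials)])
    correct = 0
    for t in range(n_trials):                       # leave one trial out
        train = np.ones(n_trials, dtype=bool)
        train[t] = False
        class_means = {lab: patterns[train & (labels == lab)].mean(axis=0)
                       for lab in np.unique(labels)}
        pred = min(class_means,
                   key=lambda lab: np.linalg.norm(patterns[t] - class_means[lab]))
        correct += int(pred == labels[t])
    return correct / n_trials

# Synthetic example: 20 trials on an 8x8x8 voxel grid, with a
# choice-specific pattern injected near voxel (2, 2, 2).
rng = np.random.default_rng(0)
labels = np.tile([0, 1], 10)
data = rng.normal(size=(20, 8, 8, 8))
data[labels == 1, 2, 2, 2] += 3.0                   # informative voxels
data[labels == 1, 2, 2, 3] += 3.0
acc_near = searchlight_accuracy(data, labels, (2, 2, 2), radius=1)
acc_far = searchlight_accuracy(data, labels, (6, 6, 6), radius=1)
```

Scanning `searchlight_accuracy` over every voxel position yields a map of where choice information is present; repeating the analysis at each time point before the decision gives the kind of time-resolved predictive map described above.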
used a second button press to indicate at which time they had made their decision.
This screen showed three letters plus a hash symbol (#) arranged randomly on the
four corners of an imaginary square centered on fixation. Each of these positions
corresponded to one of four buttons operated by the left and right index and middle
fingers. Subjects were to press the button corresponding to the letter that was visible
on the screen when they consciously made their decision. When the letter was not
among those presented on the screen, they were asked to press the button corre-
sponding to the hash symbol. Then, after a delay the letter stream started again, and
a new trial began. Note that due to the randomization of the position of letters in the
response mapping screen, the second response is uncorrelated with the first, freely
chosen response. Importantly, in order to facilitate spontaneous behavior, we did
not ask subjects to balance the left and right button selections. This would require
keeping track of the distribution of button selections in memory and would also
encourage preplanning of choices. Instead, we selected subjects who spontaneously
chose a balanced number of left and right button presses without prior instruction
based on a behavioral selection test before scanning.
[Figure 3.2: time courses of predictive accuracy (%) plotted against time (s, roughly −8 to
+12 around the decision) for SMA and for precuneus/posterior cingulate cortex.]
Figure 3.2 (Top) First we assessed which brain regions had information about a
subject’s decision after it had been made and the subject was currently pressing the button
corresponding to their choice. As expected, this yielded information in motor cortex and
supplementary motor cortex. (Bottom) Second, we assessed which brain regions had
predictive information about a subject’s decision even before the subject knew how they
were going to decide. This yielded regions of frontopolar cortex and precuneus/posterior
cingulate cortex, which had predictive information already seven seconds before the
decision was made.
regions encoded the outcome of the subject’s decision during the execution phase.
These were primary motor cortex and SMA. Thus, the sanity check demonstrates
the validity of the method. Please note that, as expected, the informative fMRI sig-
nals are delayed by several seconds relative to the decision due to the delay of the
hemodynamic response.
Next we addressed the key question of this study, whether any brain region encoded
the subject’s decision ahead of time. We found that, indeed, two brain regions predicted
prior to the conscious decision whether the subject was about to choose the left or
right response, even though the subject did not know yet which way they were about to
decide (figure 3.2, bottom). The first region was in frontopolar cortex (FPC), Brodmann
area 10 (BA 10). The predictive information in the fMRI signals from this brain region
was present already seven seconds prior to the subject’s decision. This period of seven
seconds is a conservative estimate that does not yet take into account the delay of the
fMRI response with respect to neural activity. Because this delay is several seconds, the
predictive neural information will have preceded the conscious decision by up to 10
seconds. There was a second predictive region located in parietal cortex (PC) stretch-
ing from the precuneus into posterior cingulate cortex. It is important to note that there
is no overall signal increase in the frontopolar and precuneus/posterior cingulate dur-
ing the preparation period. Rather, the predictive information is encoded in the spatial
pattern of fMRI responses, which is presumably why it has only rarely been noticed
before. Please note that due to the temporal delay of the hemodynamic response, the
small lead times in SMA/pre-SMA of up to several hundred milliseconds reported in
previous studies (Libet et al. 1983; Haggard & Eimer 1999) are below the temporal
resolution of our method. Hence, we cannot exclude that other regions contain predic-
tive information in the short period immediately preceding the intention.
The Role of BA 10
The finding of unconscious, predictive brain activity patterns in Brodmann area 10
(BA 10) is interesting because this area is not normally discussed in connection with
free choices. This is presumably due to the fact that conventional analyses will only
pick up regions with overall changes in activity but not regions where only the pat-
terning of the signal changes in a choice-specific fashion. However, it has been repeat-
edly demonstrated using other tasks that BA 10 plays an important role in encoding
and storage of intentions. It has long been known that lesions to BA 10 lead to a loss
of prospective memory, thus disrupting the ability to hold action plans in memory
for later execution (Burgess et al. 2001). In a previous study from our group, we have
shown that BA 10 also stores intentions across delay periods after they have reached
consciousness, especially if there is a delay between decision and execution (Haynes
et al. 2007). Although BA 10 has only rarely been implicated in preparation of volun-
tary actions, a direct comparison across different brain regions has revealed that the
earliest cortical region exhibiting preparatory signals before voluntary movements is
frontopolar cortex (Groll-Knapp et al. 1977). BA 10 is also cytoarchitectonically very
special. It has a very low cell density, but each cell forms a large number of synapses,
meaning that it is a highly associative brain region (Ramnani & Owen 2003). One
could speculate that this would allow for locally recurrent processing that could sup-
port the storage of action plans in working memory. Furthermore, BA 10 is believed
to be the area that has most disproportionately grown in size in humans compared
with nonhuman primates (Ramnani & Owen 2004).
Eimer 1999; Soon et al. 2008). On the one hand, a decision needs to be made as to
when to decide; on the other hand, a decision has to be made as to which button to
choose. Brass and Haggard (2008) have referred to this as “when” and “what” deci-
sions. So far we have decoded the “what” decisions, so next we also conducted a
further decoding analysis in which we assessed the degree to which the timing of the
decision (as opposed to its outcome) can be decoded. The time of conscious intention
could be significantly predicted from pre-SMA and SMA. The earliest decodable
information on timing was available five seconds before a decision. This might sug-
gest that the brain begins to prepare self-paced decisions through two independent
networks that only converge at later stages of processing. The classical Libet experi-
ments, which were primarily concerned with “when” decisions, found short-term
predictive information in the SMA. This is compatible with our prediction of the
timing from pre-SMA and SMA. In contrast, as our results show, a “what” decision
is prepared much earlier and by a much more extended network in the brain.
SANITY CHECKS
Our findings point toward long-leading brain activity that is predictive of the out-
come of a decision even before the decision reaches awareness. This is a striking
finding, and thus it is important to critically discuss several possible sources of artifacts
and alternative interpretations. In particular, we need to make sure that the reported
timing is correct and that the information does not reflect a carryover from previous
trials.
CAUSALITY?
An important point that needs to be discussed is to what degree our findings support
any causal relationship between brain activity and the conscious will. For the criterion
of temporal precedence there should be no doubt that our data finally demonstrate that
brain activity can predict a decision long before it enters awareness. A different point
is the criterion of constant connection. For a constant connection, one would require
that the decision can be predicted with 100 percent accuracy from prior brain activity.
Libet’s original experiments were based on averages, so no statistical assessment can
be made about the accuracy with which decisions can be predicted. Our prediction of
decisions from brain activity is statistically reliable but far from perfect. The predictive
accuracy of around 60 percent can be substantially improved if the decoding is
custom-tailored for each subject. However, even under optimal conditions this is far from 100
percent. This could have several reasons. One possibility is that the inaccuracy stems
from imperfections in our ability to measure neural signals. Due to the limitations of
fMRI in terms of spatial and temporal resolution, it is clear that the information we can
measure can only reflect a strongly impoverished version of the information available
from a direct measurement of the activity in populations of neurons in the predictive
areas. A further source of imperfection is that an optimal decoding approach needs a
large (ideally infinite) number of training samples to learn exactly what the predictive
patterns should be. In contrast, the slow sampling rate of fMRI imposes limitations on
the training information available. So, even if the populations of neurons in these areas
did in principle allow perfect prediction, our ability to extract this information
would be severely limited. However, these limitations cannot be used to argue that one
day, with better methods, the prediction will be perfect; this would constitute a mere
“promissory” prediction. Importantly, a different interpretation could be that the inac-
curacy simply reflects the fact that the early neural processes might in principle simply
not be fully, but only partially predictive of the outcome of the decision. In this view,
even full knowledge of the state of activity of populations of neurons in frontopolar cor-
tex and in the precuneus would not permit us to fully predict the decision. In that case
the signals have the form of a biasing signal that influences the decision to a degree, but
additional influences at later time points might still play a role in shaping the decision.
Until a perfect predictive accuracy has been reached in an experiment, both interpreta-
tions—incomplete prediction and incomplete determination—remain possible.
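As a back-of-the-envelope illustration of why an accuracy of around 60 percent can be statistically reliable yet far from perfect prediction, one can compute the exact binomial probability of reaching a given accuracy under chance-level (50 percent) guessing. The trial count below is a hypothetical number chosen for illustration, not a figure from the study.

```python
from math import comb

def p_at_least(correct, n, chance=0.5):
    """One-sided probability of getting at least `correct` of `n`
    binary choices right if the true accuracy were at chance level."""
    return sum(comb(n, k) * chance ** k * (1 - chance) ** (n - k)
               for k in range(correct, n + 1))

# 60 percent accuracy over a hypothetical 200 trials is very unlikely
# under chance guessing, yet still leaves 40 percent of choices
# unpredicted -- reliable above-chance decoding, far from determinism.
p = p_at_least(120, 200)   # well below 0.01
```

The same calculation shows why "statistically reliable" and "fully predictive" come apart: significance only requires the accuracy to be improbable under chance, not anywhere near 100 percent.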
FUTURE PERSPECTIVES
An important question for future research is whether the signals we observed are
indeed decision-related. This might sound strange given that they predict the choices.
However, this early information could hypothetically also be the consequence of sto-
chastic, fluctuating background activity in the decision network (Eccles 1985), simi-
lar to the known fluctuations of signals in early visual cortex (Arieli et al. 1996). In
this view the processes relevant for the decision would occur late, say, in the last sec-
ond before the decision. In the absence of any “reasons” for deciding for one or the
other option, the decision network might need to break the symmetry, for example,
by using stochastic background fluctuations in the network. If the fluctuations in the
network are, say, in one subspace, the decision could be pushed toward “left,” and
if the fluctuations are in a different subspace, the decision could be pushed toward
“right.” But how could fluctuations at the time of the conscious decision be reflected
already seven seconds before? One possibility is that the temporal autocorrelation of
the fMRI signal smears the ongoing fluctuations across time. However, the fMRI sig-
nal itself is presumably not causally involved in decision making; it is only an indirect
way of measuring the neural processes leading up to the decision. Thus the relevant
question is the temporal autocorrelation of neural signals, which seems incompatible
with a time scale of 7 to 10 seconds. Nonetheless, in future experiments we aim to
investigate even further how tightly the early information is linked to the decision.
One prediction of the slow background fluctuation model is that the outcome of the
decision would be predictable even in cases where a subject does not know that they
are going to have to make a decision or where a subject does not know what a deci-
sion is going to be about. This would point toward a predictive signal that does not
directly computationally contribute to decision making.
A further interesting point for future research is the comparison of self-paced with
rapid decisions that occur in response to sudden and unpredictable external events.
At first sight it seems implausible that rapid, responsive decisions could be predicted
ahead of time. How would we be able to drive a car on a busy road if it always took
us a minimum of seven seconds to make a decision? However, even unpredictable
decisions are likely to be determined by “cognitive sets” or “policies” that are likely to
have a much longer half-life in the brain than a mere seven seconds.
Finally, it would be interesting to investigate whether decisions can be predicted
in real time before a person knows how they are going to decide. Such a real-time
“decision prediction machine” (DP-machine) would allow us to turn certain thought
experiments (Marks 1985; Chiang 2005) into reality, for example, by testing whether
people can guess above chance which future choices are predicted by their current
brain signals even though a person might not have yet made up their mind. Such
forced-choice judgments would be helpful in revealing whether there is evidence
for subtle decision-related information that might enter a person’s awareness at an
earlier stage than would be apparent in the conventional Libet tasks (Marks 1985).
A different experiment could be to ask a person to press a button at a time point of
their own choice, with the one catch that they are not allowed to press it when a
lamp lights up (Chiang 2005). Using real-time decoding techniques, it might then
be possible to predict the impending decision to press the button and to control
the lamp to prevent the action. The phenomenal experience of performing such an
experiment would be interesting. For example, if the prediction is early enough, the
subject is not even aware that they are about to make up their mind and should have
the impression that the light is flickering on and off randomly. It would be possible to
use the DP-machine to inform the subject of their impending decision and get them
to “veto” their action and not press a button. Currently, such “veto” experiments rely
on trusting a person to make up their mind to press a button and then to rapidly
choose to terminate their movement (Brass & Haggard 2007). A DP-machine would
finally allow one to perform true “veto” experiments. If it were possible not only to
predict when a person is going to decide but also which specific option they are going to
take, one could ask them to change their mind and take the opposite option. It seems
plausible that a person should be able to change their mind across a period as long
as seven seconds. However, there is a catch: How can one change one’s mind if one
doesn’t even know what one has chosen in the first place? If it were one day realized,
such a DP-machine would be as useful a device in helping us recognize the determination
of our free decisions as an auto-cerebroscope (Feigl 1958) is in helping us understand
the relationship between our conscious thoughts and our brain activity.
ACKNOWLEDGMENTS
This work was funded by the Max Planck Society, the German Research Foundation,
and the Bernstein Computational Neuroscience Program of the German Federal
Ministry of Education and Research. The author would like to thank Ida Momennejad
for valuable comments on the manuscript.
This text is based on a previous review article: J. D. Haynes, “Decoding and
Predicting Intentions,” Ann N Y Acad Sci 1224, no. 1 (2011): 9–21. This work was
funded by the Bernstein Computational Neuroscience Program of the German
Federal Ministry of Education and Research (BMBF Grant 01GQ0411), the
Excellence Initiative of the German Federal Ministry of Education and Research
(DFG Grant GSC86/1–2009), and the Max Planck Society.
NOTE
1. The author is an incompatibilist.
REFERENCES
Arieli A, Sterkin A, Grinvald A, & Aertsen A (1996). Dynamics of ongoing activity:
Explanation of the large variability in evoked cortical responses. Science 273, 1868–1871.
Blankertz B, Dornhege G, Schäfer C, Krepki R, Kohlmorgen J, Müller KR, Kunzmann V,
Losch F, & Curio G (2003). Boosting bit rates and error detection for the classification
of fast-paced motor commands based on single-trial EEG analysis. IEEE Trans Neural
Syst Rehabil Eng 11, 127–131.
Brass M & Haggard P (2007). To do or not to do: The neural signature of self-control.
J Neurosci 27, 9141–9145.
Brass M & Haggard P (2008). The what, when, whether model of intentional action.
Neuroscientist 14, 319–325.
Breitmeyer BG (1985). Problems with the psychophysics of intention. Behav Brain Sci 8,
539–540.
Burgess PW, Quayle A, & Frith CD (2001). Brain regions involved in prospective mem-
ory as determined by positron emission tomography. Neuropsychologia 39, 545–555.
Chiang T (2005). What’s expected of us. Nature 436, 150.
Deiber MP, Passingham RE, Colebatch JG, Friston KJ, Nixon PD, & Frackowiak RS
(1991). Cortical areas and the selection of movement: A study with positron emission
tomography. Exp Brain Res 84, 393–402.
Eccles JC (1982). The initiation of voluntary movements by the supplementary motor
area. Arch Psychiatr Nervenkr 231, 423–441.
Eccles JC (1985). Mental summation: The timing of voluntary intentions by cortical
activity. Behav Brain Sci 8, 542–543.
Feigl H (1958). The “mental” and the “physical.” University of Minnesota Press.
Fourneret P & Jeannerod M (1998). Limited conscious monitoring of motor performance
in normal subjects. Neuropsychologia 36, 1133–1140.
Friston KJ, Holmes AP, Poline JB, Grasby PJ, Williams SC, Frackowiak RS, & Turner R.
(1995). Analysis of fMRI time-series revisited. Neuroimage 2, 45–53.
Groll-Knapp E, Ganglberger JA, & Haider M (1977). Voluntary movement-related slow
potentials in cortex and thalamus in man. Progr Clin Neurophysiol 1, 164–173.
Haggard P & Eimer M (1999). On the relation between brain potentials and the aware-
ness of voluntary movements. Exp Brain Res 126, 128–133.
Haynes JD & Rees G (2005). Predicting the orientation of invisible stimuli from activity
in human primary visual cortex. Nat Neurosci 8, 686–691.
Haynes JD & Rees G (2006). Decoding mental states from brain activity in humans. Nat
Rev Neurosci 7, 523–534.
Haynes JD, Sakai K, Rees G, Gilbert S, Frith C. & Passingham RE (2007). Reading hidden
intentions in the human brain. Curr Biol 17, 323–328.
Hume D (1777). An enquiry concerning human understanding. Reprinted Boston 1910 by
Collier & Son.
Trevena JA & Miller J (2002). Cortical movement preparation before and after a con-
scious decision to move. Conscious Cogn 11, 162–190.
Kamitani Y & Tong F (2005). Decoding the visual and subjective contents of the human
brain. Nat Neurosci 8, 679–685.
Kornhuber HH & Deecke L (1965). Hirnpotentialänderungen bei Willkürbewegungen
und passiven Bewegungen des Menschen: Bereitschaftspotential und reafferente
Potentiale. Pflügers Arch Ges Phys 284, 1–17.
Kriegeskorte N, Goebel R, & Bandettini P (2006). Information-based functional brain
mapping. Proc Natl Acad Sci USA 103, 3863–3868.
Latto R (1985). Consciousness as an experimental variable: Problems of definition, prac-
tice and interpretation. Behav Brain Sci 8, 545–546.
Libet B (1985). Unconscious cerebral initiative and the role of conscious will in voluntary
action. Behav Brain Sci 8, 529–566.
Libet B, Gleason CA, Wright EW, & Pearl DK (1983). Time of conscious intention to act
in relation to onset of cerebral activity (readiness-potential): The unconscious initia-
tion of a freely voluntary act. Brain 106, 623–642.
Marks LE (1985). Toward a psychophysics of intention. Behav Brain Sci 8, 547–548.
Merikle PM & Cheesman J (1985). Conscious and unconscious processes: Same or dif-
ferent? Behav Brain Sci 8, 547–548.
Moutoussis K & Zeki S (1997). Functional segregation and temporal hierarchy of the
visual perceptive systems. Proc Roy Soc London B 264, 1407–1414.
Nickerson RS (2002). The production and perception of randomness. Psych Rev 109,
330–357.
Ramnani N & Owen AM (2004). Anterior prefrontal cortex: Insights into function from
anatomy and neuroimaging. Nat Rev Neurosci 5, 184–194.
Ringo JL (1985). Timing volition: Questions of what and when about W. Behav Brain Sci
8, 550–551.
Soon CS, Brass M, Heinze HJ, & Haynes JD. (2008). Unconscious determinants of free
decisions in the human brain. Nat Neurosci 11, 543–545.
Tanaka K (1997). Mechanisms of visual object recognition: Monkey and human studies.
Curr Opin Neurobiol 7, 523–529.
Van de Grind W (2002). Physical, neural, and mental timing. Conscious Cogn 11, 241–264.
Wundt W (1904). Principles of physiological psychology. Vol. 2. New York: Macmillan.
4
ALFRED R. MELE
Benjamin Libet has argued for a pair of striking theses about free will. First, free
will never initiates actions. Second, free will may be involved in “vetoing” conscious
decisions, intentions, or urges to act (1985; 1999; 2004, 137–149).1 Elsewhere, I
have argued that Libet and others fail to provide adequate evidence for the first
thesis and even for the related thesis that conscious intentions to flex a wrist never
make a causal contribution to the production of a flexing action (Mele 2006, chap.
2; 2009, chaps. 3 and 4). My topic here is Libet’s thesis about vetoing. To veto a
conscious decision, intention, or urge is to decide not to act on it and to refrain,
accordingly, from acting on it. Libet associates veto power with some pretty fancy
metaphysics (see, e.g., Libet 1999). I set the metaphysical issues aside here and
concentrate on the empirical ones, focusing on recent neuroscientific research that
bears on vetoing.
1. GENERAL BACKGROUND
The conscious decisions, intentions, and urges that are candidates for being vetoed,
according to Libet, are limited to what I call proximal decisions, intentions, and
urges (Mele 1992)—that is, decisions, intentions, or urges to do things at once.
(There are also distal decisions, intentions, and urges: for example, Al’s decision to
host a party next week, Beth’s intention to fly to Calgary next month, and Cathy’s
urge to scold Don when she returns home from work.) Libet attempts to generate
evidence about when his subjects become conscious of proximal decisions, inten-
tions, or urges. His method is to instruct subjects to perform a flexing action when-
ever they wish while watching a rapidly revolving dot on a clock face and to report
later—after they flex—on where the dot was when they first became aware of their
decision, intention, or urge to flex (Libet 1985). (The dot makes a complete revolu-
tion in less than three seconds.) Libet found (1985, 532) that the average time of
reported initial awareness was 200 milliseconds (ms) before the time at which an
electromyogram (EMG) shows relevant muscular motion to begin (time 0).
How are these times related? Libet’s view is that average E-time is 550 ms before
time 0 (i.e., –550 ms) for subjects who are regularly encouraged to flex spontane-
ously and who report no “preplanning” of their movements, average C-time is –150
ms, and average B-time is –200 ms (1985, 532; 2004, 123–126).
Libet’s position on average E-time is based on his finding that, in subjects who
satisfy the conditions just mentioned, EEG readings—averaged over at least 40
flexings for each subject—show a shift in “readiness potentials” (RPs) beginning
at about –550 ms. The RP exhibited by these subjects is Libet’s “type II RP” (1985,
532). He contends that “the brain ‘decides’ to initiate or, at least, prepare to initiate
the act before there is any reportable subjective awareness that such a decision has
taken place” (1985, 536), and he apparently takes the unconscious decision to be
made when the shift in RPs begins. Libet arrives at his average C-time of –150 ms by
adding 50 ms to his average B-time (–200 ms) in an attempt to correct for what he
believes to be a 50-ms negative bias in subjects’ reports (see Libet 1985, 534–535;
2004, 128, for alleged evidence for the existence of the bias).
Whether subjects have time to veto conscious proximal decisions, intentions, or
urges, as Libet claims, obviously depends not only on their C-times but also on how
much time it takes to veto a conscious proximal decision, intention, or urge. For
example, if C-times are never any earlier than –150 ms, but vetoing a conscious prox-
imal decision, intention, or urge would require at least 200 ms, then such decisions,
intentions, and urges are never vetoed. Let V-time stand for the minimum time it
would take to veto a conscious proximal decision, intention, or urge. An informed,
plausible judgment about whether people ever veto such things would be supported
by good evidence both about people’s C-times and about V-time. I discuss some
(alleged) evidence about C-times and V-time in subsequent sections.
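The timing arithmetic behind this reasoning can be made explicit in a few lines. The sketch below only restates the numbers given above; the 200 ms and 100 ms V-times are the text's illustrative values, not measured quantities.

```python
# Libet's average timings, in milliseconds relative to muscle onset (time 0).
E_TIME = -550        # onset of the type II readiness potential
B_TIME = -200        # average reported time of first awareness
REPORT_BIAS = 50     # Libet's posited negative bias in subjects' reports

C_TIME = B_TIME + REPORT_BIAS        # corrected awareness time: -150 ms
UNCONSCIOUS_LEAD = C_TIME - E_TIME   # 400 ms between RP onset and awareness

def veto_possible(c_time_ms, v_time_ms):
    """A veto can succeed only if awareness at c_time_ms (negative =
    before muscle onset) leaves at least v_time_ms before time 0."""
    return c_time_ms + v_time_ms <= 0
```

On these numbers, a V-time of 100 ms would leave room for a veto, while a V-time of 200 ms would not, which is exactly the conditional stated above.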
2. LIBET ON VETOING
Libet offers two kinds of alleged evidence to support the idea that we have veto
power. One kind is generated by an experiment in which subjects are instructed to
prepare to flex their fingers at a prearranged clock time and “to veto the develop-
ing intention/preparation to act . . . about 100 to 200 ms before [that] time” (1985,
538). Subjects receive both instructions at the same time. Libet writes:
Does this study provide evidence about V-time or about the vetoing of, in
Libet’s words here, “intended motor action”? Keep in mind that the subjects were
instructed in advance not to flex their fingers but to prepare to flex them at the prear-
ranged time and to “veto” this. The subjects intentionally complied with the request.
They intended from the beginning not to flex their fingers at the appointed time.
So what is indicated by the segment of what Libet refers to as “the ‘veto’ RP” that
precedes the change of direction?3 Presumably, not the presence of an intention to
flex, for then, at some point in time, the subjects would have both an intention to
flex at the prearranged time and an intention not to flex at that time. And how can
a normal agent simultaneously intend to A at t and intend not to A at t? If you were
to intend now to pick up a tempting doughnut two seconds from now while also
intending now not to pick up the doughnut two seconds from now, what would you
do? Would you soon start reaching for it with one hand and quickly grab that hand
with your other hand to halt its progress toward the doughnut? This is far from nor-
mal behavior.4 In short, it is very plausible that Libet is mistaken in describing what
is vetoed as intended motor action.
In some talks I have given on Libet’s work, I tell the audience that I will count
from 1 to 5, and I ask them to prepare to snap their fingers when I say “5” but not to
snap them. (After I say “5” and hear no finger snapping, I jokingly praise my audi-
ence for being in control of their fingers.) Someone might suggest that these people
have conscious intentions not to flex when I get to 5 and unconscious intentions to
flex then and that the former intentions win out over the latter. But this suggestion is
simply a conjecture—an unparsimonious one—that is not backed by evidence.
Given that the subjects in Libet’s veto experiment did not intend (and did not
decide) to flex, the veto experiment provides no evidence about how long it takes
to veto a conscious proximal intention (or decision) to flex. Furthermore, we do not
know whether the subjects had conscious proximal urges to flex. So the veto study
tells us little about V-time.
I mentioned that Libet offered a second kind of alleged evidence for “veto control.”
Subjects encouraged to flex “spontaneously” (in nonveto experiments) “reported that
during some of the trials a recallable conscious urge to act appeared but was ‘aborted’
or somehow suppressed before any actual movement occurred; in such cases the
subject simply waited for another urge to appear, which, when consummated, con-
stituted the actual event whose RP was recorded” (Libet 1985, 538). Libet asserts
that subjects were “free not to act out any given urge or initial decision to act; and
each subject indeed reported frequent instances of such aborted intentions” (530).
Unfortunately, even if we accept the subjects’ reports, we do not know whether the
urges they vetoed were proximal ones, as opposed, for example, to urges to flex a
76 THE ZOMBIE CHALLENGE
second or so later, when the dot hits a certain point on the clock, or urges to flex
pretty soon. Here again Libet fails to provide weighty evidence about V-time.
Strategy 1. On each trial, consciously decide in advance to prepare to press the key when
the clock hand hits a certain point p, but leave it open whether, when the hand hits p, I will
consciously decide to press right then or consciously decide not to press on that trial. On
some trials, when the hand hits p, decide right then to press at once; and on some other trials
decide right then not to press. Pick different p points on different trials.5
Subjects who execute this strategy as planned do not actually veto conscious
proximal decisions to press. In fact, they do not veto any conscious decisions. Their
first conscious decision on each trial is to prepare to press a bit later, when the clock
hand hits point p. They do not veto this decision; they do prepare to press at that
time. Nor do they veto a subsequent conscious decision. If, when they think the
hand reaches p, they consciously decide to press, they press; and if, at that point,
they consciously decide not to press, they do not press. (Inattentive readers may
wonder why I think I know all this. I know it because, by hypothesis, the imagined
subjects execute strategy 1 as planned.)
A second strategy is more streamlined:
Strategy 2. On some trials, consciously decide to press the key and then execute that decision
at once; and on some trials, consciously decide not to press the key and do not press it.
Vetoing and Consciousness 77
Obviously, subjects who execute this strategy as planned do not veto any con-
scious decisions.
Here is a third strategy:
Strategy 3. On some trials, consciously decide to press the key a bit later and execute that
decision. On other trials, consciously decide to press the key a bit later but do not execute
that decision; instead veto (cancel, retract) the decision.
Any subjects who execute this strategy as planned do veto some conscious deci-
sions, but the decisions they veto are not proximal decisions. Instead, they are
decisions to press a bit later. A subject may define “a bit later” in terms of some
preselected point on the clock or leave the notion vague.
The final strategy to be considered is even more ambitious:
Strategy 4. On some trials, consciously decide to “press now” and execute that decision at
once. On other trials, consciously decide to “press now” but do not execute that decision;
instead immediately veto (cancel, retract) the decision.
If any subjects execute the fourth strategy as planned, they do veto some con-
scious proximal decisions. But, of course, we are faced with the question whether
this strategy is actually executable. Do subjects have enough time to prevent
themselves from executing a conscious proximal decision to press? In a real-world
scenario, an agent might proximally decide to do something and then detect
something that warrants retracting the decision. For example, a quarterback
might proximally decide to throw a pass to a certain receiver and then detect the
threat of an interception. Perhaps he has time to veto his decision in light of this
new information. The situation of the subjects in the experiment under consider-
ation is very different. They never detect anything that warrants retracting their
arbitrary decisions. If they were to retract their arbitrary decisions, they would
arbitrarily retract them. This is quite unlike the nonarbitrary imagined vetoing by
the quarterback.6
I asked whether Brass and Haggard’s subjects can prevent themselves from exe-
cuting conscious proximal decisions to press. The results of their experiment leave
this question unanswered. If we knew that some subjects were successfully using
strategy 4, we would have an answer. But what would knowing that require? Possibly,
if asked about their strategy during debriefing, some subjects would describe it as
I have described strategy 4. However, that alone would not give us the knowledge at
issue. People are often wrong about how they do things.
Lau and his coauthors motivate their work partly by a reference (2007, 81)
to the following comment by Daniel Wegner on Libet’s results: “The position of
conscious will in the time line suggests perhaps that the experience of will is a link
in a causal chain leading to action, but in fact it might not even be that. It might
just be a loose end—one of those things, like the action, that is caused by prior
brain and mental events” (2002, 55). Lau et al. observe that Wegner “does not
show that motor intentions are in fact not causing the actions” and that “if inten-
tions, in fact, arise after the actions, they could not, in principle, be causing the
actions” (81).
The main experiment (Experiment 1) reported by Lau et al. (2007) combines
Libet’s “clock paradigm” with the application of transcranial magnetic stimulation
(TMS) over the presupplementary motor area. The dot on their Libet clock revolves
at 2,560 ms per cycle. While watching the clock, subjects pressed a computer mouse
button “at a random time point of their own choice” (82). In the “intention con-
dition,” after a delay of a few seconds, subjects were required to move a cursor to
where they believed the dot was “when they first felt their intention to press the
button.” In the “movement condition,” they followed the same procedure to indi-
cate where they believed the dot was “when they actually pressed the button.” There
were a total of 240 trials per subject. TMS was applied in half of the trials. Half of
the applications occurred “immediately after action execution,” and half occurred at
a delay of 200 ms. There were 10 subjects.
Lau et al. discovered an effect that was not observed in a second experi-
ment (Experiment 2) involving the application of TMS either at 500 ms after
the button press or between 3,280 and 4,560 ms after it.7 “The effect observed
in Experiment 1 [was] the exaggeration of the difference of the judgments for
the onsets of intention and movement” (Lau et al. 2007, 87).8 The mean of the
time-of-felt-intention reports and the mean of the time-of-movement reports
shifted in opposite directions from the baselines provided by the mean reports
when TMS was not applied (see note 9 for details). The purpose of the second
experiment, in which this effect was not found, was “to test whether the effect
obtained in Experiment 1 was actually due to memory or responding, rather than
the experienced onset itself” (84).9
As Lau and his coauthors view matters, “The main question is about whether
the experience of intention is fully determined before action execution” (2007,
87). Their answer is no: “The data suggest that the perceived onset of intention
depends at least in part on neural activity that takes place after the execution of
action” (89).
I have discussed these experiments in some detail elsewhere (Mele 2008; 2009,
chap. 6). Here I will focus on just a pair of observations. The first is that the data pro-
vided by Lau et al. leave it open that the time of the onset of subjects’ consciousness
of proximal intentions (C-time) does not depend at all on “neural activity that takes
place after the execution of the action.” The second is a more general observation
about the bearing of B-times on C-times.
What, exactly, do Lau and his coauthors mean by the suggestion that “the per-
ceived onset of intention depends at least in part on neural activity that takes
place after the execution of action” (2007, 89)? For example, do they mean to
exclude the following hypothesis: the subjects are conscious of a proximal inten-
tion before they press the button even though they do not have a definite opinion
about when they perceived the onset of their intention—or when they first felt
the intention (see note 8)—until after they act? Apparently not, for they grant
that “it could be the case that some weaker form of experience of intention is
sufficiently determined by neural activity that takes place before the execution
of the action” (89). I do not know exactly what Lau et al. mean by “weaker form”
here, but two points need to be emphasized. First, there is a difference between
becoming conscious of an intention and having an opinion about when one per-
ceived the onset of one’s intention. Second, “neural activity that takes place after
the execution of action” may have an effect on one’s opinion about when one first
became conscious of one’s intention even if it has no effect on when one actually
became conscious of one’s intention. For example, neural activity produced by
TMS can have an effect on B-times without having an effect on C-times. Possibly,
subjects’ beliefs about when they first felt an intention to press the button are still
in the process of being shaped 200 ms after they press; this is compatible with
their having become conscious of the intention before 0 ms. (Incidentally, even
if the beliefs at issue are still in the process of being shaped at 200 ms, they may
be in place shortly thereafter, and the window for TMS to affect B-time may be
pretty small.) Experiments 1 and 2 cut little ice if what we want to know is when
the subjects first became conscious of proximal intentions (C-time)—as opposed
to when, after they act, they come to a definite opinion about when they first
became conscious of these intentions and as opposed, as well, to how the time the
subjects believe to be C-time when they make their reports (B-time) is influenced
by neural processes that take place after action.
C-time is not directly measured. Instead, subjects are asked to report, after
they act, what they believe C-time was. This is a report of B-time. It may be that
the beliefs that subjects report in response to the experimenter’s question about
C-time are always a product of their conscious experience of a proximal intention
(or decision or urge), their conscious perception of the clock around the time of
the experience just mentioned, and some subsequent events. They may always
estimate C-time after the fact based partly on various conscious experiences
rather than simply remembering it. And making the estimate is not a particularly
easy task, as I will explain after a brief discussion of another pair of experiments.
The results reported by Lau and his coauthors (2007) suggest that reports of
B-times are reports of estimates that are based at least partly on events that follow
action. In a recent article, William Banks and Eve Isham (2009) provide confirma-
tion for this suggestion. Subjects in a Libet-style experiment were asked to report,
shortly after pressing a response button, where the cursor was on a numbered Libet
clock “at the instant they made the decision to respond” (18). “The computer reg-
istered the switch closure and emitted a 200-ms beep . . . at 5, 20, 40, or 60 ms after
closure.” Obviously, subjects were not being asked to report on unconscious deci-
sions; conscious decisions are at issue.
Banks and Isham found that although the average time between the beep and
B-time did not differ significantly across beep delays, the following two average
times did differ significantly across delays: (1) the time between EMG onset and
B-time; and (2) the time between switch closure and B-time. The data display an
interesting pattern (see Banks and Isham 2009, 19): the beep affected B-time, and
the beep followed switch closure.
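A toy calculation shows why that pattern points to beep-anchored estimates (every number here is invented for illustration; only the qualitative pattern matters). If subjects fix B-time at a roughly constant interval before the beep, the beep-to-B-time interval stays flat across beep delays while the closure-to-B-time interval tracks the delay:

```python
# Hypothetical model: the reported decision time (B-time) sits a fixed
# interval before the beep. All times in ms relative to switch closure.

BEEP_DELAYS = [5, 20, 40, 60]    # the delays Banks and Isham used
ANCHOR_MS = 136                  # invented constant beep-to-B-time lag

beep_to_b = []
closure_to_b = []
for delay in BEEP_DELAYS:
    b_time = delay - ANCHOR_MS          # B-time anchored to the beep
    beep_to_b.append(b_time - delay)    # always -ANCHOR_MS
    closure_to_b.append(b_time)         # varies with the beep delay

print(beep_to_b)     # [-136, -136, -136, -136]: no difference across delays
print(closure_to_b)  # [-131, -116, -96, -76]: differs across delays
```

On this hypothetical model, the beep-to-B-time average would not differ across delays, while the EMG-to-B-time and closure-to-B-time averages would, which is the pattern Banks and Isham report.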
In a second experiment, Banks and Isham used a delayed video image “to create
deceptive information about the time of response [i.e., a button press]” (2009, 19).
A delay of 120 ms moved average B-time from –131 ms relative to switch closure
(when there was no delay) to –87 ms—a significant shift.
Both findings provide confirmation for the hypothesis that B-times are esti-
mates based at least partly on events that follow action. But Banks and Isham draw
a conclusion that goes well beyond this hypothesis. They take their findings “to
indicate that . . . the intuitive model of volition is overly simplistic—it assumes a
causal model by which an intention is consciously generated and is the immedi-
ate cause of an action”; and they add: “Our results imply that the intuitive model
has it backwards; generation of responses is largely unconscious, and we infer the
moment of decision from the perceived moment of action” (2009, 20). In fact,
however, their findings do not contradict the following hypothesis: subjects made
their conscious proximal decisions to press before they pressed, those decisions
were among the causes of their pressing actions, and what subjects believed about
when they made their conscious decisions was affected by events that happened
after they pressed.
As a first step toward seeing why this hypothesis is not contradicted by the find-
ings, attend to the following cogent line of reasoning: subjects’ beliefs about when
“they made the decision to respond” are affected by events that occur after switch
closure and therefore after they pressed the button (i.e., pressed it far enough to
close the switch); so those beliefs are not in place until after these subjects pressed
the button. Now, how does one get from this cogent reasoning to the conclusion
that these subjects’ conscious proximal decisions to press are not among the causes
of their pressing actions? If it could be inferred from Banks and Isham’s findings
that, just as subjects’ beliefs about when they made these conscious decisions are
not in place until after they pressed the button, their conscious decisions are not
made until after that time, we would have our answer. An event that occurs after an
action cannot be among its causes. But what would warrant the inference at issue?
If we assented to the premise that subjects’ beliefs about when they made their con-
scious proximal decisions were acquired when they made those conscious decisions,
we could validly make the inference. However, Banks and Isham’s findings provide
no basis for accepting this premise, and the fact that subjects’ beliefs about when
they consciously decided to press are affected by events that follow the pressing
actions leaves it wide open that their conscious proximal decisions precede these
actions (and can be vetoed).
Banks and Isham assert that “there is no way to measure” C-time “other than by
report” (2009, 20), and, obviously, the reports are of B-times. But it should not be
inferred from this methodological point that subjects made their conscious deci-
sions at the time at which their beliefs about when they made them were finally
acquired—that is, that the time at which these beliefs were first present was the time
at which the decisions were made. The reports may express beliefs that are acquired
after action even though what they are beliefs about are conscious decisions made
before action.
Some of the points I have been emphasizing are reinforced by attention to the fol-
lowing question: How accurate are subjects’ reports about when they first became
conscious of a proximal decision, intention, or urge likely to have been? Framed in
terms of C-time (the time of the onset of the subject’s consciousness of an item of
one of these kinds) and B-time (the time the subject believes to be C-time when
answering the experimenter’s question about C-time), the question about inten-
tions is this: How closely does B-time approximate C-time?
There is a lively literature on how accurate B-times are likely to be—that is,
on how likely it is that they closely approximate C-times (for a review, see van de
Grind 2002). This is not surprising. Reading the position of a rapidly revolving
dot at a given time is a difficult task, as Wim van de Grind observes (2002, 251).
The same is true of relating the position of the dot to such an event as the onset
of one’s consciousness of a proximal intention to click a button. Patrick Haggard
notes that “the large number of biases inherent in cross-modal synchronization
tasks means that the perceived time of a stimulus may differ dramatically from its
actual onset time. There is every reason to believe that purely internal events, such
as conscious intentions, are at least as subject to this bias as perceptions of external
events” (2006, 82).
One fact that has not received sufficient attention in the literature on accu-
racy is that individuals display great variability of B-times across trials. Patrick
Haggard and Martin Eimer (1999) provide some relevant data. For each of their
eight subjects, they locate the median B-time and then calculate the mean of the
premedian (i.e., “early”) B-times and the mean of the postmedian (i.e., “late”)
B-times. At the low end of variability by this measure, one subject had mean
early and late B-times of –231 ms and –80 ms, and another had means of –542
ms and –351 ms (132). At the high end, one subject’s figures were –940 ms and
–4 ms, and another’s were –984 ms and –253 ms. Bear in mind that these figures
are for means, not extremes. These results do not inspire confidence that B-time
closely approximates C-time. If there were good reason to believe that C-times
vary enormously across trials for the same subject, we might not find enormous
variability in a subject’s B-times worrisome in this connection. But there is good
reason to believe this only if there is good reason to believe that B-times closely
approximate C-times; and given the points made about cross-modal synchroniza-
tion tasks in general and the cross-modal task of subjects in Libet-style experi-
ments, there is not.
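The median-split measure Haggard and Eimer use can be sketched as follows (the B-time values below are synthetic stand-ins, not their data; the point is only the procedure of splitting at the median and averaging each half):

```python
# Sketch of the early/late B-time measure: split one subject's B-time
# reports at their median, then average the premedian ("early") and
# postmedian ("late") halves. Times are in ms; the data are invented.

import statistics

def early_late_means(b_times_ms):
    """Return (mean of premedian B-times, mean of postmedian B-times)."""
    med = statistics.median(b_times_ms)
    early = [t for t in b_times_ms if t < med]
    late = [t for t in b_times_ms if t > med]
    return statistics.mean(early), statistics.mean(late)

# A subject whose early and late means sit far apart shows high
# trial-to-trial variability, as in Haggard and Eimer's high-end cases.
b_times = [-900, -700, -500, -300, -100, -20]   # synthetic reports
print(early_late_means(b_times))
```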
Another factor that may make it difficult for subjects to provide B-times that
closely approximate C-times is their uncertainty about exactly what they are expe-
riencing. As Haggard observes, subjects’ reports about their intentions “are easily
One way to think of deciding to flex your right wrist now is as consciously
saying “now!” to yourself silently in order to command yourself to flex at once.
Consciously say “now!” silently to yourself whenever you feel like it and then
immediately flex. Look at the clock and try to determine as closely as possible
where the dot is when you say “now!” You’ll report that location to us after you
flex. (Mele 2009, 125)
5. CONCLUSION
My conclusions about the work discussed here are as follows:
1. Libet has not shown that his subjects have time to veto conscious proximal
decisions, intentions, or urges.
2. Brass and Haggard have not shown that their subjects actually veto conscious
proximal decisions.
3. Lau and his coauthors have not shown that people never become conscious
of proximal urges or intentions early enough to veto them. Nor have Banks
and Isham.
ACKNOWLEDGMENTS
A draft of this chapter was written during my tenure of a 2007–2008 NEH fellow-
ship, and parts of this chapter derive from Mele 2009. (Any views, findings, conclu-
sions, or recommendations expressed in this article do not necessarily reflect those
of the National Endowment for the Humanities.) For discussion or written com-
ments, I am grateful to Seth Shabo, Tyler Stillman, an audience at the University of
Edinburgh (June 2008), and the editors of this volume.
NOTES
1. For discussion of conceptual differences among decisions, intentions, and urges, see
Mele 2009, chap. 1.
2. For a detailed discussion of the experiment, see Libet, Wright, and Curtis 1983; or
Libet, Gleason, et al. 1983.
3. A potential source of confusion should be identified. According to a common use
of the expression “readiness potential” (RP), the RP is a measure of activity in the
motor cortex that precedes voluntary muscle motion, and, by definition, EEGs gen-
erated in situations in which there is no muscle burst do not count as RPs. Thus, given
that there is no muscle burst in the veto experiment, some scientists would not refer
to what Libet calls “the ‘veto’ RP” as an RP.
4. Sean Spence and Chris Frith suggest that people who display anarchic hand syn-
drome “have conscious ‘intentions to act’ [that] are thwarted by . . . ‘intentions’ to
which the patient does not experience conscious access” (1999, 24).
5. In a variant of this strategy, the clock hand’s getting very close to p replaces the hand’s
hitting p.
6. Brass and Haggard found some insula activation in inhibition trials, and they suggest
that it “represents the affective-somatic consequences of failing to implement a strong
intention” (2007, 9144). If this is right, subjects who display insula activation are not
using strategy 2. Possibly, subjects who decide to prepare to press when the clock
hand hits p and then refrain from pressing would also display insula activation, in
which case displaying such activation is compatible with using strategy 1. Strategies
3 and 4 both require the vetoing of decisions.
7. Lau et al. also report on two other experiments of theirs. They were designed to test
whether the effect observed in the first experiment was “actually due to the general
mechanism of cross-modal timing using the clock face” (85) and whether it was due
to “TMS noise added to the motor system” (86).
8. Actually, given that the subjects were asked to indicate where they believed the dot
was “when they first felt their intention to press the button” (82), what Lau et al. refer
to as judgments about the “onsets of intention” should be referred to as judgments
about onsets of the feeling (or consciousness) of intentions. Incidentally, whereas
Lau et al. seem to assume that all intentions are conscious intentions, I do not (see
Mele 2004; 2009, chap. 2). Some readers may be curious about how the time of a key
press is related to the time of the EMG activity that defines Libet’s time 0. Patrick
Haggard and Martin Eimer report that EMG onset typically precedes key presses by
30 to 50 ms (1999, 130).
9. In Experiment 1, TMS was applied at a delay of either (1) 0 ms or (2) 200 ms, and in
Experiment 2 it was applied at a delay of either (3) 500 ms or (4) between 3,280 and
4,560 ms. The group means for the TMS effects in the intention condition at these
delays were (1) –9 ms, (2) –16 ms, (3) 9 ms, and (4) 0 ms; and the group means
for the TMS effects in the movement condition at the same delays were (1) 14 ms,
(2) 9 ms, (3) 5 ms, and (4) 5 ms (Lau et al. 2007, 83–84). As I mentioned, the main
effect Lau et al. found in Experiment 1 was “the exaggeration of the difference of the
judgments for the onsets of intention and movement” (87). This is a stronger effect
than the effect of TMS on time-of-felt-intention reports alone. Accordingly, Lau et al.
focus on the “exaggeration of . . . difference” effect, even though the movement judg-
ments and the effects of TMS on them tell us little about E-time, C-time, or B-time.
10. Recall Banks and Isham’s assertion that “there is no way to measure” C-time “other
than by report” (2009, 20). EMG signals can be recorded from speech muscles in
silent speech (Cacioppo & Petty 1981; Jorgensen & Binsted 2005). Such recordings
may be made in variants of the Libet-style study I proposed. Subjects’ after-the-fact
reports provide some evidence about when it was that they consciously silently said
“now!”; and EMG recordings from, for example, the larynx in an experiment of the
kind at issue may provide another form of evidence about this. It would be interest-
ing to see how the results of the two different measures are related. A relatively simple
experiment would leave overt action out. Subjects would be instructed to watch a
Libet clock, to consciously and silently say “now!” to themselves whenever they feel
like it, and to be prepared to report after the silent speech act on where the dot was on
the clock when they said “now!” The times specified in the reports can be compared
to the times of the EMG activity from speech muscles.
11. Would subjects’ conscious, silent “now!”s actually express proximal decisions? Perhaps
not. To see why, consider an imaginary experiment in which subjects are instructed to
count—consciously and silently—from 1 to 3 and to flex just after they consciously
say “3” to themselves. Presumably, these instructions would be no less effective at
eliciting flexings than the “now!” instructions. In this experiment, the subjects are
REFERENCES
Banks, W., and E. Isham. 2009. “We Infer Rather Than Perceive the Moment We Decided
to Act.” Psychological Science 20: 17–21.
Brass, M., and P. Haggard. 2007. “To Do or Not to Do: The Neural Signature of
Self-Control.” Journal of Neuroscience 27: 9141–9145.
Cacioppo, J., and R. Petty. 1981. “Electromyographic Specificity during Covert Information
Processing.” Psychophysiology 18: 518–523.
Haggard, P. 2005. “Conscious Intention and Motor Cognition.” Trends in Cognitive Sciences
9: 290–295.
Haggard, P. 2006. “Conscious Intention and the Sense of Agency.” In N. Sebanz and W.
Prinz, eds., Disorders of Volition, 69–85. Cambridge, MA: MIT Press.
Haggard, P., and M. Eimer. 1999. “On the Relation between Brain Potentials and the
Awareness of Voluntary Movements.” Experimental Brain Research 126: 128–133.
Jorgensen, C., and K. Binsted. 2005. “Web Browser Control Using EMG Based Subvocal
Speech Recognition.” Proceedings of the 38th Hawaii International Conference on System
Sciences 38: 1–8.
Keller, I., and H. Heckhausen. 1990. “Readiness Potentials Preceding Spontaneous
Motor Acts: Voluntary vs. Involuntary Control.” Electroencephalography and Clinical
Neurophysiology 76: 351–361.
Lau, H., R. Rogers, and R. Passingham. 2007. “Manipulating the Experienced Onset of
Intention after Action Execution.” Journal of Cognitive Neuroscience 19: 81–90.
Libet, B. 1985. “Unconscious Cerebral Initiative and the Role of Conscious Will in
Voluntary Action.” Behavioral and Brain Sciences 8: 529–566.
Libet, B. 1999. “Do We Have Free Will?” Journal of Consciousness Studies 6: 47–57.
Libet, B. 2004. Mind Time. Cambridge, MA: Harvard University Press.
Libet, B., C. Gleason, E. Wright, and D. Pearl. 1983. “Time of Unconscious Intention
to Act in Relation to Onset of Cerebral Activity (Readiness-Potential).” Brain 106:
623–642.
Libet, B., E. Wright, and A. Curtis. 1983. “Preparation- or Intention-to-Act, in Relation
to Pre-event Potentials Recorded at the Vertex.” Electroencephalography and Clinical
Neurophysiology 56: 367–372.
Mele, A. 1992. Springs of Action: Understanding Intentional Behavior. New York: Oxford
University Press.
Mele, A. 2004. “The Illusion of Conscious Will and the Causation of Intentional Actions.”
Philosophical Topics 32: 193–213.
Mele, A. 2006. Free Will and Luck. New York: Oxford University Press.
Mele, A. 2008. “Proximal Intentions, Intention-Reports, and Vetoing.” Philosophical
Psychology 21: 1–14.
Mele, A. 2009. Effective Intentions: The Power of Conscious Will. New York: Oxford
University Press.
Spence, S., and C. Frith. 1999. “Towards a Functional Anatomy of Volition.” Journal of
Consciousness Studies 6: 11–29.
van de Grind, W. 2002. “Physical, Neural, and Mental Timing.” Consciousness and Cognition
11: 241–264.
Wegner, D. 2002. The Illusion of Conscious Will. Cambridge, MA: MIT Press.
5
From Determinism to Resignation; and How to Stop It
RICHARD HOLTON
would conspire to guarantee those outcomes. Fatalism, understood this way, thus
amounts to powerlessness to avoid a given outcome.4
We can put the point in terms of a counterfactual:
There are some outcomes such that whatever action Oedipus were to per-
form, they would come about
or more formally:
[∃x: outcome x] [∀y: action y] (If Oedipus were to perform y,
then x would come about)
This is a very specific fatalism: two specific outcomes are fated. There is no impli-
cation that Oedipus could not effectively choose to do other things: he was free to
choose where he went, what he said, what basic bodily actions he performed. But
we can imagine Oedipus’s choices being progressively constrained, so that more
and more outcomes become fated. How far could the process go? We could cer-
tainly imagine a case in which he retained control only of his basic bodily actions;
all the distal outcomes that were the further consequences of those actions would
be fated. That gives us:
[∀x: actual outcome x][∀y: possible action y] (If Oedipus were
to perform y, then x would come about)
We could go further and imagine a global action fatalism, where Oedipus’s choices
would have no impact on his actions, even the most basic. In other words, whatever
Oedipus chose, his actions would be the same:
[∀x: actual action x][∀y: possible choice y] (If Oedipus were
to make y, then Oedipus would perform x)5
Suppose, though, that one comes to realize that a certain outcome is fated. Does
it still make sense to strive to frustrate one’s fate? I shan’t go so far as to claim that
one is rationally bound to bow to one’s fate; certainly we might admire the resolve of
the person who did not, and might not judge them irrational. But once one realizes
that one will have no effect on the outcome, it is surely at least rationally permis-
sible to stop trying to control the outcome. More broadly, even if one doesn’t know
which outcome is fated, the knowledge that some outcome is fated seems to give one
rational permission to stop trying to control the outcome, since one knows that
such activity will be pointless. Knowledge of fatalism as we are understanding it
thus legitimates a fatalistic attitude in the popular sense: the view that since there
is nothing that one can do to affect the future, there is no sense in trying. Without
trying to define it rigorously, let us call such a position resignation.
people who have given it less thought? It is little wonder, then, that people tend to
move from a belief in determinism to a belief in fatalism and to an attitude of resig-
nation, for they may be conflating determinism with predictability.
The right response is to distinguish more clearly between determinism and pre-
dictability. In the last section I give further grounds for doing so. We should not
just reject predictability because it gives a plausible route to resignation. There is a
more direct argument for thinking that, in a world containing reflective beings like
ourselves, predictability must be false.
If it is your fate to recover from this illness, you will recover, regardless of
whether or not you call the doctor. Likewise, if it is your fate not to recover
from this illness, you will not recover, regardless of whether or not you call the
doctor. And one or the other is your fate. Therefore it is pointless to call the
doctor.8
Since it is patently not pointless to call the doctor, the critics concluded that deter-
minism must be false.
As an argument against determinism this is not terribly compelling, since it seems
simply to equate determinism with fatalism. The critics allege that in believing in
determinism the Stoics are committed to certain outcomes being fated; to embrac-
ing something like the powerlessness counterfactuals. But we need an argument for
such an attribution, and the critics failed to provide one. Others have tried to do
better. Recognizing the difference between the views, some more recent philoso-
phers have tried to give an argument that determinism nonetheless entails fatalism.
Richard Taylor writes:
A fatalist is best thought of, quite simply, as someone who thinks he cannot do
anything about the future. He thinks it is not up to him what will happen next year, tomorrow, or the very next moment. He thinks that even his own behav-
iour is not in the least within his power, any more than the motion of distant
heavenly bodies, the events of remote history, or the political developments
in faraway countries. He supposes, accordingly, that it is pointless for him to
deliberate about anything, for a man deliberates only about those future things
he believes to be within his power to do and forego.
The idea of resignation is here put in terms of deliberation, but presumably the same
holds of choosing and of striving.
This way of thinking is not limited to philosophers. The psychologists Roy
Baumeister, William Crescioni, and Jessica Alquist also characterize deterministic
beliefs in terms of a kind of resignation:
To the lay determinist, everything that happens is inevitable, and nothing else
was possible. Thinking about what might have been is thus pointless if not
downright absurd, because nothing else might have been (other than what
actually happened).
They go on to suggest that such a response is, at the very least, consistent with deter-
minism, and perhaps that it is entailed by it:
By “no counterfactuals” I take it that they mean that there are no true counterfactuals with false antecedents, and hence that counterfactual thinking is pointless.
There is reason to think that a similar move from determinism to resignation
exists in popular thought. A number of studies have demonstrated that when ordi-
nary people are brought to believe in determinism, they are less likely to behave
morally. How should we explain this? Elsewhere I have suggested that it is because
they fall prey to resignation: they move from the idea of determinism to the idea of
fatalism, and so become convinced that there is no point in struggling against their
baser urges.11 Work by Eddy Nahmias and Dylan Murray provides support for this.
They find that people tend to conflate determinism with what they call “bypassing,”
that is, with the view that what determines action bypasses anything that the con-
scious self can do. Fatalism is a form of bypassing.12
Fatalism, as we have understood it, is the claim that outcomes are not counterfactually dependent on agents’ actions or intentions. Returning to the example from the Lazy Argument, fatalism means that either

If you were to call the doctor, you would recover; and if you were not to call the doctor, you would recover

or

If you were to call the doctor, you would not recover; and if you were not to call the doctor, you would not recover

will be true. And if one of them is true, then it does indeed seem pointless to call the doctor. Calling the doctor would only be worthwhile if it made a difference to recovery; ideally if recovery were counterfactually dependent upon it, that is, if the following two counterfactuals were true:

If you were to call the doctor, you would recover

If you were not to call the doctor, you would not recover
But the truth of those counterfactuals is quite compatible with determinism. Indeed,
the central idea behind determinism is that outcomes are determined causally. And,
though the exact relation between counterfactuals and causation is controversial, it
seems that something like that pair of counterfactuals will be true whenever there
is causation.13
So determinism does not entail fatalism, nor does it thereby support resignation;
the Lazy Argument against the Stoics does not work. For parallel reasons, it seems
to me that Taylor’s argument, at least as I have understood it, does not work; it sim-
ply isn’t true that if a man embraces determinism he should conclude that for all
events “it is not up to him whether, or when or where this event will occur, that it is
not within his control.”14 And the same can be said of the argument that Baumeister
et al. credit to the lay determinist.15
Indeed, it might seem that the move from determinism to fatalism and so to res-
ignation is so transparently flawed that no thoughtful person would ever make it,
and that it is quite uncharitable to attribute it to ordinary people. This is a response
that I have often received when I have suggested that ordinary thinking tends to run
determinism and fatalism together. Putting aside the issue of whether this is a gross
mistake—what can look obvious once it is carefully spelled out in a philosophi-
cal argument need not be obvious before—I will argue that there is another route
to resignation, from a starting point that is often, even among philosophers, taken
to be effectively equivalent to determinism. Perhaps, rather than making a simple
error, ordinary thinking is on to something that many philosophers have missed.
Even when the predictions could be made but have not been, there is a slippery slope
argument that resignation still follows. Like most slippery slope arguments, it is not
completely conclusive, but it is strong enough to raise a serious worry.
Let us return to the Lazy Argument, and to whether you should call the doctor.
Let us fill in a few more details to ensure that the issues are clear. Suppose that you
are seriously ill. Suppose too that you live where medicine is private and that you
are poor and have no insurance, so that calling the doctor would cost a great deal of
what little you have. However, you have good grounds for thinking that doctors are
typically effective: you know, for instance, that those who call them typically do better than those who don’t. So you tend to believe of other people that the following counterfactuals hold:

If they were to call the doctor, they would recover

If they were not to call the doctor, they would not recover
Putting all this together, you think that while both calling and not calling are open
options for you (you could easily imagine yourself doing either), it is rational for
you to call.
But now suppose that you simply know you will recover. This is not conditional
knowledge: it is not merely that you know that you will recover if you call the doc-
tor, and you intend to call the doctor. You know that you will recover simpliciter.
Perhaps you got the knowledge by divine revelation, or from some trustworthy and
uncharacteristically straightforward oracle; perhaps you have some telescope that
can look into the future; perhaps you were told by a returning time traveler, who
could even be yourself; or perhaps you have some machine that, given the laws and
initial conditions, can compute how certain events will turn out.16 There are real
question marks over the possibility of any of these, but put such doubts aside for
now. Assume simply that you have robust knowledge that you will recover, robust
enough that it can provide a fixed point in your practical reasoning.
Is it rational to try to call the doctor? I suggest that it is not; in this sense, robust
predictability is like fatalism. For if you know that you will recover, then there appear
to be two possibilities for you to consider:

I call the doctor, and I recover

I do not call the doctor, and I recover
Of these you clearly prefer the latter, since calling the doctor is expensive. In each
case the second conjunct you know to be true. But since it appears to be up to you
which of the first conjuncts you make true, and hence which of the two conjunc-
tions is true, then it is rational to make true the one that you prefer. So it is rational
not to call the doctor.
Now suppose that you know, in a similarly robust fashion, that you will not recover. In this case the possibilities are:

I call the doctor, and I do not recover

I do not call the doctor, and I do not recover
Similar reasoning applies. You prefer the latter to the former, since at least you keep
your money to make your last days comfortable and to pass on to your loved ones;
so, given that you know you will not recover, it is rational not to call the doctor.
Note the difference between this case and the case in which determinism is true.
Determinism entails only that you know that you will either recover or not:
Know (p or not-p)
The interesting result is that if you either robustly know that you will recover, or you robustly know that you will not, that is, if
Know (p) or Know (not-p)
then it is rational not to call the doctor. And the second condition does not follow from the first.
Suppose, though, that you do not know what will happen, but you know that it is
predictable. This is where we encounter the slippery slope. Suppose first that while
you do not yet know whether or not you will recover, you know that your friend
knows whether or not you will; and, moreover, you know that very soon your friend
will tell you. Suppose, however, that you have to make your decision about whether
to call the doctor before you are told. Should you call? It seems to me that you might
reason as follows. If I could delay my decision until after I was told, I know that, whatever I was told, I would decide not to call the doctor. But since that would be a
rational thing to do once I was told, that is good enough grounds for deciding not
to call before I am told.17
Now for the next step along the slippery slope: it’s not my friend who knows, it is
rather that the prediction has been made by a machine that can predict the future,
and I will not have a chance to read the prediction before I have to decide whether
or not to call. The practical conclusion is as before.
Now the next step: the machine is still doing the calculations; it will not finish
until after I have to decide whether to call. Same practical conclusion.
The next step: the machine has all the evidence it needs, but it hasn’t yet started
doing the calculations. Same practical conclusion.
Next: the machine isn’t actually going to do the calculations, but it could. Same
practical conclusion.
Next: the machine hasn’t actually been built yet, but we know how to build it, and
if it were built it could do the calculations. Same practical conclusion.
Next: we don’t know how to build the machine, but we know that in prin-
ciple a machine could be built that could do the calculations. Same practical
conclusion.
There are many places along this slope where someone might want to call a halt.
Perhaps it should be halted right at the beginning: there is all the difference in the
world between knowing something yourself, and having someone else know it. One
might make a case for that, but it does seem rather odd. Moreover, I think that there
is a principled reason for following the slope all the way to the end. The conclusion
of the slope is to move the condition that is needed for resignation from

The outcome is known in advance

to the weaker

The outcome is knowable in advance
The reason that knowledge justified resignation was that it provided a fixed
point around which deliberation could move. But the same is true if an outcome
is merely knowable in advance. If it is knowable in advance, then one could
come to know it. And so there is a fixed point, even if one doesn’t know what
it is.
Thought of in this way, though, we see why the slippery slope doesn’t take us all
the way to determinism. For determinism doesn’t by itself provide us with the pos-
sibility of a fixed point that could be used in deliberation about what will happen.
For determinism by itself doesn’t guarantee that what will happen is knowable. One
way in which it could fail to be knowable, quite compatible with determinism, is
if the very process of coming to believe something would have an impact on what
would happen. And it is this feature that I will use in arguing against the possibility
of predictability. Before that, though, let us address the question of whether predict-
ability entails fatalism.
Return to the case in which you know that you will recover: there were two conjunctive possibilities, of which it is rational to choose the second. Now it might seem that your knowledge that you will recover, combined with your control over whether or not you call, gives you confidence in both of the following counterfactuals:

If I were to call the doctor, I would recover

If I were not to call the doctor, I would recover
But (assuming that there is nothing else that you can do to affect your recovery) that
is just to claim that your recovery is fated. So should we conclude that predictability
entails fatalism?
I think that we should not, though the doctor example does not show this clearly. So imagine another that does. Suppose that a coin is hidden under one of the two cups in front
of you. You have to choose the cup where it is. Suppose you come to know that you
will choose correctly. So there seem to be two outcomes:
I choose the left-hand cup and the coin is under the left-hand cup
I choose the right-hand cup and the coin is under the right-hand cup
Does your knowledge that you will choose correctly commit you to the truth of both of the following counterfactuals:
If I were to choose the left-hand cup, the coin would be under the left-hand
cup
If I were to choose the right-hand cup, the coin would be under the right-hand
cup?
Surely not. The position of the coin is fixed; my choice isn’t going to move it around.
Foreknowledge of the outcome does not commit one to the truth of the counterfac-
tuals. Likewise with the doctor. It could be that recovering or not recovering is fated,
so that nothing I could do would have any impact. But that is not a conclusion that
I can draw from the simple knowledge of what will happen.
DENYING PREDICTABILITY
As I have said, philosophers tend to move between determinism and predictability
without thinking much is at stake. I claim that, even if determinism is true, we have
excellent grounds for thinking that predictability is not. And this doesn’t result from
anything especially sophisticated about human beings and their essential unpredict-
ability. I think that predictability would be equally well refuted by the existence of
the kind of machine that many of my colleagues at MIT could construct in their
lunch hour. Let me explain.
THE CHALLENGE
Suppose that I offer you a challenge. Your job is to predict whether the lightbulb on
a certain machine will be on or off at a given time. Moreover, I will give you consid-
erable resources to make your prediction: you can have full information about the
workings of the machine, together with as much computing power as you wish. Lest
that seem to make things too easy, I point out that the makers of the machine had one
goal in mind: to render your prediction wrong.
Of course, a natural way for the makers to construct the machine would be to
equip it with a mechanism that could detect what you would predict, and then a
device to ensure that the prediction was false. Still, there are things that you might
do to avoid this. One technique would involve keeping your prediction a secret
from the machine; another would involve making it so late that the machine would
not have time to act on it. But unfortunately for you, I rule out both of these options.
Your prediction has to be made in full view of the machine. I give you a lightbulb
of your own. You must switch it on to indicate that you predict that the light on the
machine will be on at the set time; you must switch it off to indicate that you predict
the light on the machine will be off. And I require that the prediction be made a
full minute before the set time. So, for instance, if you predict that the light on the
machine will be on at noon, you have to have your light switched on at 11:59.
Now you seem to be in trouble. For, as we said at the outset, making a machine
capable of defeating you in the challenge is surely within the capabilities of many
of my colleagues, indeed many of the undergraduates, at MIT. All it would need is
a simple light sensor, and a circuit that gives control of its light to the output of the
sensor. If it senses that your light is on, it switches its light off, and vice versa.
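The machine’s logic really is that simple. Here is a minimal sketch (a toy model with hypothetical function names, not anything from the essay; the two lights are modelled as booleans):

```python
def machine_light(predicted_light_on: bool) -> bool:
    # The machine senses the predictor's light and shows the opposite.
    return not predicted_light_on

def prediction_correct(predicted_light_on: bool) -> bool:
    # The prediction is correct only if the machine's light matches it.
    return predicted_light_on == machine_light(predicted_light_on)

# Whichever prediction you display a minute in advance, you lose:
assert prediction_correct(True) is False
assert prediction_correct(False) is False
```

Since the machine reads the prediction before acting, the predictor has no winning move; the negation guarantees a mismatch on every trial.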
At this point you might think that I have rigged the challenge against you. So
let me make one final offer. I will give you not simply all the knowledge about the
workings of the machine but all the knowledge about the workings of the universe.
Moreover—don’t ask how I do this—I guarantee that the universe will be deterministic. And, as before, I give you as much computing power as you wish.
Should you now accept the challenge? I think not. For how can this new knowl-
edge help? At best it seems that it will enable you to predict how you will lose: you
will perhaps be able to calculate what move you will make, and how the machine
will respond to it. But should you try to use that knowledge by changing your pre-
diction to beat the machine, either you will fail to make the change, or it will turn
out that what you thought was knowledge was not knowledge after all. The machine
will not be concerned with your machinations. Whatever it is that you predict, it
will simply control its light to ensure that your prediction is false.
I say that the best you will be able to do, despite your knowledge and your com-
puting power, is to predict how you will lose. In fact I doubt that you will even be able to do that reliably. To see why, let me fill in some background.
My story might remind some readers of an article written by Michael Scriven, and
refuted by David Lewis and Jane Richardson.18 Scriven argued that human beings
are unpredictable because they have the capacity to act in a way that runs contrary
to any prediction, and it was this claim that the refutation targeted. But I am not making a
parallel claim about the machine that I have challenged you to predict. It would be
quite possible for someone else to predict the machine’s behavior. Indeed, it would
be quite possible for you to predict the machine’s behavior, provided that you didn’t
reveal the prediction to the machine. What I claim that you will not be able to do is
to make an accurate prediction that you reveal to the machine ahead of time.
However, like Scriven, I do want the conclusions to apply to human beings.
Obviously if they are countersuggestible they should be just as hard to predict as the
machine. But there is something more complicated about human beings, since they
are able to make predictions about themselves. This means that they can play the
role both of predictor and of frustrater. Suppose that Tiresias wants to predict what
he will do but also wants to ensure that he will not act in accord with his prediction.
Provided that he makes his prediction clearly to himself, and well enough in advance,
it will be his frustrating side, and not his predicting side, that will triumph.
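The predictor–frustrater structure can be made vivid with a toy sketch (hypothetical names, not anything from the essay; a self-addressed prediction strategy is represented as a zero-argument function):

```python
def frustrating_act(announced: bool) -> bool:
    # The frustrating side does the opposite of whatever was announced.
    return not announced

def self_prediction_correct(strategy) -> bool:
    # Tiresias announces a prediction of his own act, then acts.
    announced = strategy()                   # the predicting side commits in advance
    return announced == frustrating_act(announced)  # the frustrating side then wins

# No announcement strategy yields a correct self-prediction:
assert self_prediction_correct(lambda: True) is False
assert self_prediction_correct(lambda: False) is False
```

The point of the sketch is that the failure is structural, not a matter of computing power: because the announcement is an input to the act, any announced prediction is falsified.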
This is why I doubt that you will even be able to make an accurate prediction
about how your game with the machine will go. Here your motivation is not to frus-
trate yourself; it is to trick the machine. But that is good enough to render any calcu-
lation you make highly unreliable.
CONCLUSION
Determinism and predictability are often run together, but they should be kept well
apart. While determinism does not lead to resignation, predictability plausibly does.
Fortunately, while determinism may well be true, there is a good argument against
predictability, and so, even if it does entail resignation, this poses no threat to our
ordinary way of thinking about things. The tangle of doctrines surrounding issues of
determinism and freedom is dense. It should be no surprise if ordinary people get
caught in it; philosophers are still some way from disentangling them all.
NOTES
1. Thanks to the members of the graduate class at MIT where I developed some of
these ideas; to the audience at Edinburgh who discussed an early draft; and to Rae
Langton, Damien Rochford, Stephen Yablo, and the editors for useful conversations.
Special thanks to Caspar Hare for much input; someday we might write a better ver-
sion of this essay together.
2. For a review of the science, see Earman, 2004.
3. Lewis, 1983.
4. I don’t say that this is the only way to understand it; we might think that part of the
poignancy of Oedipus’s story is that he could have avoided his fate if only he hadn’t
tried to do so. To this extent then, my use of the term fatalism will be stipulative.
5. I have here defined powerlessness in terms of the powerlessness of choices, which I
find to be the most intuitive way of characterizing the idea. Alternatively, it could be
put in terms of the powerlessness of desires, or of judgments. This would not affect
the points made here.
6. Wegner, 2002, and the work of Benjamin Libet on which Wegner draws. See Mele,
2010, for a convincing rebuttal of this and other similar positions. For more nuanced
conclusions from similar considerations see Haynes, this volume.
7. Even when the distinction is noted, little importance is placed on it. The much-used collection Reason and Responsibility says that determinism and predictabil-
ity are roughly equivalent while noting the distinction in a footnote (Feinberg and
Schafer-Landau, 1999, 410).
8. Cicero, 1987, 339. For a detailed examination of the argument as the Stoics faced it,
see Bobzien, 1998.
9. Taylor, 1963, 55.
10. Baumeister et al., 2011, 3.
11. Holton, 2009, chap. 8.
12. Nahmias and Murray, 2011.
13. The most influential counterfactual account is due to David Lewis (1973). For sub-
sequent work, see Collins et al., 2004. A recent development understands the coun-
terfactuals in terms of interventions; see Pearl, 2000, and Woodward, 2003, for the
account, and the papers collected in Gopnik and Schulz, 2007, for evidence of its
psychological applicability.
14. Perhaps I have misinterpreted Taylor; perhaps he is really aiming to give something
like the argument that if the past is fixed, and the past entails the future, then the
future is fixed—something along the lines of the consequence argument. My argu-
ment here doesn’t touch on that position. But his talk of determinism being global
fatalism certainly strongly suggests the interpretation I have given.
15. I give a longer response in Holton, 2011.
16. Does this mean that you will deny that one of the previous counterfactuals applies to
you, i.e., that if you were not to call the doctor you would not recover? Well you might
deny it; but equally you might not know quite what to think. For further discussion
see below.
17. Debates on the bearing of God’s knowledge of future contingents on our freedom
might be expected to have some relevance here. For comprehensive discussion see
Zagzebski, 1991. In fact, though, I think that they are of little use, for a num-
ber of reasons: (1) most of the discussion is of the broader issue of the compatibil-
ity of God’s foreknowledge with freedom, and not on the rationality of resignation;
(2) God is normally taken to be both essentially knowledgeable and immutable;
(3) there is no discussion of what difference it would make if God communicated his knowledge to human actors.
18. Scriven, 1964; Lewis and Richardson, 1966. For a very useful review of various authors who argued along lines similar to Scriven’s, and some helpful suggestions of their own, see Rummens and Cuypers, 2010. I wish that I had seen their paper
before writing this one.
REFERENCES
Baumeister, Roy, A. William Crescioni, and Jessica Alquist, 2011, “Free Will as Advanced
Action Control for Human Social Life and Culture,” Neuroethics 4, 1–11.
Bobzien, Susanne, 1998, Determinism and Freedom in Stoic Philosophy (Oxford: Clarendon
Press).
Cicero, 1987, “On Fate 28,” in A. Long and D. Sedley (eds.), The Hellenistic Philosophers,
339 (Cambridge: Cambridge University Press).
Collins, John, Edward Hall, and Laurie Paul (eds.), 2004, Causation and Counterfactuals
(Cambridge, MA: MIT Press).
Earman, John, 2004, “Determinism: What We Have Learned, and What We Still Don’t
Know,” in J. Campbell, M. O’Rourke and D. Shier (eds.), Freedom and Determinism,
21–46 (Cambridge, MA: MIT Press).
Feinberg, Joel, and Russ Schafer-Landau, 1999, Reason and Responsibility, 10th ed. (Belmont, CA: Wadsworth).
Gopnik, Alison, and Laura Schulz (eds.), 2007, Causal Learning (New York: Oxford
University Press).
Holton, Richard, 2009, Willing, Wanting, Waiting (Oxford: Clarendon Press).
Holton, Richard, 2011, “Comments on ‘Free Will as Advanced Action Control for Human
Social Life and Culture’ by Roy F. Baumeister, A. William Crescioni and Jessica L.
Alquist,” Neuroethics 4, 13–16.
Lewis, David, 1973, “Causation,” Journal of Philosophy, 70, 556–67, reprinted with post-
script in his Philosophical Papers, vol. 2, 159–213 (New York: Oxford University Press,
1986).
Lewis, David, 1983, “New Work for a Theory of Universals,” Australasian Journal of
Philosophy 61, 343–377, reprinted in his Papers in Metaphysics and Epistemology, 8–55
(Cambridge: Cambridge University Press, 1999).
Lewis, David, and Jane Richardson, 1966, “Scriven on Human Unpredictability,”
Philosophical Studies, 17, 69–74.
Mele, Al, 2010, Effective Intentions (Oxford: Oxford University Press).
Nahmias, Eddy, and Dylan Murray, 2011, “Experimental Philosophy on Free Will: An
Error Theory for Incompatibilist Intuitions,” in Jesús Aguilar, Andrei Buckareff and
Keith Frankish (eds.), New Waves in Philosophy of Action, 189–216 (Basingstoke:
Palgrave Macmillan).
Pearl, Judea, 2000, Causality (Cambridge: Cambridge University Press).
Rummens, Stefan, and Stefaan Cuypers, 2010, “Determinism and the Paradox of
Predictability,” Erkenntnis 72, 233–249.
Scriven, Michael, 1964, “An Essential Unpredictability in Human Behavior,” in Benjamin
B. Wolman and Ernest Nagel (eds.), Scientific Psychology: Principles and Approaches,
411–425 (New York: Basic Books).
Taylor, Richard, 1963, Metaphysics (Englewood Cliffs, NJ: Prentice Hall).
Wegner, Daniel, 2002, The Illusion of Conscious Will (Cambridge, MA: Harvard University
Press).
Woodward, James, 2003, Making Things Happen (New York: Oxford University Press).
Zagzebski, Linda, 1991, The Dilemma of Freedom and Foreknowledge (New York: Oxford
University Press).
PART TWO
MANOS TSAKIRIS AND AIKATERINI FOTOPOULOU
In typical experimental paradigms, the cues that contribute to the sense of agency are selectively manipulated; for example, a voluntary key press is compared with a pas-
sive key press. This manipulation is important when we want to establish the rela-
tive contributions of different sensory and motor cues for agency. Second, to the
extent that all these movements are self-generated and voluntary, we can assume the
presence of the unique neural events that precede voluntary actions alone, such as
the readiness potential and the generation of an efference copy of the motor com-
mand. In addition, we can assume that participants experience (at least a minimal)
sense of effort as they produce them, and also that they are clearly aware of the fact
that they have moved on every trial, despite the fact that they may not be fully aware
of all the low-level movement parameters. These observations raise the question of what we are actually measuring when we ask participants about their sense of
agency. Clearly, we are not asking them whether they felt that they have moved vol-
untarily, because then the answer would always be affirmative, independently of the
discrepancy between their actual movement and the feedback. Thus, most experi-
mental paradigms do not actually investigate a crucial and possibly the most basic
experiential component of the sense of agency, namely, the feeling of agency, the
feeling that I voluntarily move my body.
On influential models of motor control, a specific cognitive function monitors the effects of actions (Frith, Blakemore & Wolpert,
2000). Explicit judgments of agency require successful completion of the monitor-
ing process: only when I see the laptop booting up, would I judge that I actually
turned it on. For mundane actions, this monitoring process is often unconscious:
indeed, the motor system includes specific mechanisms for predicting the sensory
consequences of our own actions (Frith, Blakemore & Wolpert, 2000).
An interesting neuropsychological condition reveals this interplay between an
advance intention-based prediction of the sensory feedback of action that may
underpin the feeling of agency, and a delayed postdictive attribution of sensory
feedback to the self that may underpin judgments of agency. Some patients with
hemiplegia deny that their affected limb is paralyzed (anosognosia for hemiplegia,
AHP). For example, the patient may assert that they performed an action using
their paralyzed limb, which in fact remains immobile. Judgments of agency in these
patients seem to be based only on the feeling that they prepare appropriate motor
commands for the action, and bypass the normal stage of monitoring whether
appropriate effects of limb movement actually occur (Berti et al., 2005). In AHP,
the feeling of intending an action becomes sufficient for a judgment of agency. This
suggests that the monitoring is a specific cognitive function that normally provides
the appropriate link between feelings of agency and explicit judgments of agency.
More recent accounts, capitalizing on recent computational models of motor
control (Frith et al., 2000), proposed that AHP results from specific impairments
in motor planning. Under normal circumstances, the formation of an intention to
move will be used by “forward models” to generate accurate predictions about the
impending sensory feedback. If an intended movement is not performed as planned,
a comparator will detect a mismatch between the predicted sensory feedback and
the absence of any actual sensory feedback. The error signal at the level of the com-
parator can be then used to inform motor awareness. Berti et al. (2007), following
Frith et al. (2000), hypothesized that patients with AHP form appropriate repre-
sentations of the desired and predicted positions of the limb, but they are not aware
of the discrepancy between their prediction and the actual position. On this view,
patients’ awareness is dominated by intention and does not take into account the
failure of sensory evidence to confirm the execution of the intended action. AHP
arises because action awareness is based on motor commands sent to the plegic
limb, and sensory evidence about lack of movement is not processed. Accordingly,
AHP may involve damage to the brain areas that underpin the monitoring of the
correspondence between motor outflow and sensory inflow (e.g., Brodmann pre-
motor areas 6 and 44 [BA6 and BA44]; Berti et al., 2005), or else contrary sensory
information is neglected (Frith et al., 2000). Consequently, the mismatch between
the predicted state (i.e., movement of the limb) and the actual state (i.e., no move-
ment) is not registered.
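The comparator story just outlined can be put schematically (an illustrative toy only, not any published model; the function names are ours, and numbers stand for sensed movement magnitude):

```python
def comparator(predicted: float, actual: float, threshold: float = 0.1) -> bool:
    # Signal a mismatch when predicted and actual sensory feedback diverge.
    return abs(predicted - actual) > threshold

def intact_awareness(predicted: float, actual: float) -> bool:
    # Normally, awareness of having moved requires an intention to move
    # (prediction of movement) AND no mismatch signal from the comparator.
    return predicted > 0.5 and not comparator(predicted, actual)

def ahp_awareness(predicted: float, actual: float) -> bool:
    # On the Berti/Frith-style account sketched above, the mismatch is not
    # registered: awareness follows the motor intention alone.
    return predicted > 0.5

# Intended movement of a paralyzed limb: prediction 1.0, sensed movement 0.0.
assert comparator(1.0, 0.0) is True          # mismatch is there to be detected
assert intact_awareness(1.0, 0.0) is False   # healthy subject: "I didn't move"
assert ahp_awareness(1.0, 0.0) is True       # AHP: "I moved"
```

The sketch only restates the account’s structure: the same inputs yield opposite awareness depending on whether the comparator’s error signal reaches awareness.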
The account of Berti and Frith can explain why patients with AHP fail to perceive
the failure to move. However, an equally important question relates to the nonve-
ridical awareness of action that they exhibit, that is, their subjective experience that
they have moved. An experimental demonstration that motor intention dominates
over sensory information about the actual effects of movement in AHP patients was
provided by Fotopoulou et al. (2008). Four hemiplegic patients with anosognosia
(AHP) and four without anosognosia (nonAHP) were provided with false visual
feedback of movement in their left paralyzed arm using a prosthetic rubber hand.
This allowed for realistic, three-dimensional visual feedback of movement, and
deceived patients into believing the rubber hand was their own. Crucially, in some
conditions, visual feedback that was incompatible with the patient’s intentions was
given. For instance, in a critical condition, patients were instructed to move their
left hand, but the prosthetic hand remained still. This condition essentially mirrored
the classic anosognosic scenario within an experimentally controlled procedure. In
this way the study was able to examine whether the ability to detect the presence
or absence of movement, based on visual evidence, varied according to whether
the patient had planned to move their limb or not. The key measure of interest was
the patient’s response to a movement detection question (i.e., “Did your left hand
move?”), which required a simple yes/no response. The results revealed a selective
effect of motor intention in patients with AHP; they were more likely than non-
AHP controls to ignore the visual feedback of a motionless hand and claim that they
moved it when they had the intention to do so (self-generated movement) than
when they expected an experimenter to move their own hand (externally generated
movement), or when there was no expectation of movement. In other words, patients
with AHP only believed that they had moved their hand when they had intended
to move it themselves, while they were not impaired in admitting that the hand did
not move when they had expected someone else to move it. By contrast, the perfor-
mance of nonAHP patients was not influenced by these manipulations of intention,
and they did not claim they moved their hand when the hand remained still.
These results confirm that AHP is influenced by motor planning, and in partic-
ular that motor “awareness” in AHP derives from the processing of motor inten-
tions. This finding is consistent with the proposals made by Frith et al. (2000; see
also Berti et al., 2007) that the illusory “awareness” of movement in anosognosic
patients is created on the basis of a comparison between the intended and predicted
positions of the limbs, and not on the basis of a mismatch between the predicted
and actual sensory feedback. According to this hypothesis, patients with AHP are
able to form appropriate representations of the desired and predicted positions of
the limb. However, conflicting information derived from sensory feedback that
would indicate a failure of movement is not normally available, because of brain
damage to regions that would register the actual state of the limbs, or else because
this contrary information is neglected. A recent lesion mapping study suggested
that premotor areas BA6 and BA44, which are implicated in action monitoring, are
the most frequently damaged areas in patients with AHP (Berti et al., 2005). This
finding may explain why these patients fail to register their inability to move, but
it does not address the functional mechanism that underpins their illusory aware-
ness of action per se. This study provides direct evidence for the hypothesis that
awareness of action is based on the stream of motor commands and not on sensory
inflow. While previous studies have suggested that conflicting sensory information
may not be capable of altering anosognosic beliefs (Berti et al., 2005), they did not
demonstrate that sensory feedback about the affected limb was ignored even when
it was demonstrably available. Accordingly, this study demonstrated for the first
time why anosognosic beliefs are formed in the first place: the altered awareness
108 THE SENSE OF AGENCY
methodological advantage of studying these domains is that one can directly com-
pare how the agentive nature of movement affects these three domains over and
above the mere presence of movement cues, that is, one can directly compare volun-
tary with passive movements. Consistent results have shown how the fact of agency
changes the experience of the body and the outside world, measured using depen-
dent variables such as temporal awareness and spatial representation of the body.
They thus provide indirect or implicit evidence about agency. Three fundamental
and robust features of agency emerge: a temporal attraction effect, a sensory attenu-
ation effect, and a change in the spatial representation of the body itself.
generated and external sensory events and can therefore ascribe agency. However,
since it suppresses perception of self-generated information, it cannot explain why
there is a positive experience of agency at all. Models based on attenuation treat
agency as absence of exteroceptive perceptual experience, not as a positive experi-
ence in itself. However, the phenomenon of sensory attenuation may be a reliable
functional signature of agency, which can be used as an implicit measure in experi-
mental studies.
cue to body-ownership. On this view, the sense of agency should involve the sense
of body-ownership, plus a possible additional experience of voluntary control. An
alternative view holds that sense of agency and sense of body-ownership are quali-
tatively different experiences, without any common component.
Previous accounts based on introspective evidence favor the additive model,
since they identified a common sense of body-ownership, plus an additional com-
ponent unique to action control (Longo & Haggard, 2009). Recent behavioral
and neuroimaging studies have also focused on the neurocognitive processes that
underpin body-ownership and agency (Fink et al., 1999; Farrer & Frith, 2002;
Farrer et al., 2003; Ehrsson, Spence & Passingham, 2004; Tsakiris et al., 2007), but
the exact neural bases of these two aspects of self-consciousness remain unclear.
For example, neuroimaging studies that investigated the sense of body-ownership
using the RHI (see Botvinick & Cohen, 1998) report activations in the bilateral
premotor cortex and the right posterior insula associated with the illusion of own-
ership of the rubber hand, and present only when visual and tactile stimulations
are synchronized (Ehrsson et al., 2004; Tsakiris et al., 2007). Studies investigating
the neural signatures of the sense of agency have used similar methods, such as the
systematic manipulation of visual feedback to alter the experience of one’s body in
action. Activity in the right posterior insula was correlated with the degree of match
between the performed and viewed movement, and thus with self-attribution
(Farrer et al., 2003). Conversely, activity in the right dorsolateral prefrontal cortex
(Fink et al. 1999; Leube et al., 2003), right inferior parietal lobe, and temporopa-
rietal junction (Farrer et al., 2003, 2008) was associated with degree of disparity
between performed and viewed movement, and thus with actions not attributed to
the self.
These studies were largely based on manipulating visual feedback to either match
or mismatch the participant’s manual action, similar to the behavioral experiments
on agency described earlier. However, such manipulations cannot separate the con-
tributions of efferent and afferent signals that are both inevitably present in manual
action. The imaging data from these studies may therefore confound the neural cor-
relates of agency and body-ownership. For example, with undistorted visual feed-
back of an action, there is a three-way match between efferent motor commands,
afferent proprioceptive signals, and vision. Thus, any effects seen in such condi-
tions could be due to congruence between (1) efferent and proprioceptive sig-
nals, (2) efferent signals and visual feedback, (3) proprioceptive signals and visual
feedback, or (4) some complex interaction of all three signals. Conversely, when
visual feedback is distorted (spatially or temporally), there is sensorimotor conflict
between efferent signals and vision, but also intersensory conflict between prop-
rioception and vision. As a result, any differences between match and mismatch
conditions could reflect sensorimotor comparisons (relating to sense of agency)
or proprioceptive-visual comparisons (relating to sense of body-ownership). As a
result, such experimental designs cannot distinguish between the additive and the
independence model of agency and body-ownership.
However, as suggested previously, the senses of agency and body-ownership
can be disentangled experimentally by comparing voluntary action with passive
movement, as shown earlier. Tsakiris, Longo, and Haggard (2010) implemented
additive models is that there should be no activations for body-ownership that are
not also present for agency. However, the analysis did not support this prediction as
the activated networks for agency and body-ownership were sharply distinct. Both
body-ownership and agency were associated with distinct and exclusive patterns of
activation, providing direct evidence that their neural substrates differ. In particular,
agency was specifically associated with activations in the presupplementary motor
area, the superior parietal lobe, the extrastriate body area, and the dorsal premotor cortex bilaterally (BA6). For purely sensory-driven body-ownership, by contrast, suprathreshold activations were observed in a network of midline cortical structures,
including the precuneus, the superior frontal gyrus, and the posterior cingulate.
Notably, these midline cortical activations recall recent suggestions of a dedicated
self-referential processing network (Northoff & Bermpohl, 2004; Northoff et al.,
2006) in the default mode network (Gusnard et al., 2001; Schneider et al., 2008).
Thus, neuroimaging data supported an independence model, while questionnaire
data supported an additive model. This somewhat surprising inconsistency may be
explained in at least two distinct ways. First, the questionnaire data may reflect a
limitation of the folk-psychological concepts used to describe our embodied experi-
ence during sensation and movement. Folk psychology suggests that agency is a very
strong cue for ownership, so that I experience ownership over more or less any event
or object that I control. However, the experience of ownership of action during
agency may represent a distinctive type of ownership that should not be necessarily
conflated with ownership of sensations or body parts.1 Second, the apparent dissocia-
tion between neural activity and introspective reports may suggest that there is not a
one-to-one mapping between brain activity and conscious experience. Qualitatively
similar subjective experiences of ownership appear to be generated by quite differ-
ent brain processes in the passive/synchronous and active/synchronous condition.
Models involving a single neural correlate of each specific conscious experience
have been highly successful in the study of individual sensory percepts, particularly
in vision (Haynes & Rees, 2006). However, the aspects of self-consciousness that we
call sense of body-ownership and sense of agency are not unique elemental percepts
or qualia in the same way. Rather, they may be a cluster of subjective experiences, feel-
ings, and attitudes (Synofzik, Vosgerau & Newen, 2008; Gallagher, this volume).
Suprathreshold activations unique to the experience of agency were observed in
the presupplementary motor area (pre-SMA), the superior parietal lobe, the extras-
triate body area, and the dorsal premotor cortex bilaterally (BA6). The pre-SMA is
strongly involved in the voluntary control of action (Goldberg, 1985). Neurosurgical
stimulation studies further suggest that it contributes to the experience of volition
itself: stimulation of pre-SMA can produce an “urge” to move, at stimulation lev-
els below threshold for evoking physical movement (Fried et al., 1991). Voluntary
action was present in both the active/synchronous and the active/asynchronous
conditions: these differed only in timing of visual feedback, and the resulting sense
of agency. However, the pre-SMA activation was greater in the active/synchronous
condition, where visual feedback confirms that the observed movement is tempo-
rally related to the voluntary motor command, suggesting that the pre-SMA plays
an important role not only in conscious intention (Lau et al., 2004) but also in the
sense of agency.
6. CONCLUDING REMARKS
This chapter has presented some of the ways in which experimental psychology and the cognitive neurosciences have experimented with the sense of agency. Several experimental approaches to the study of agency have emerged with recent advances in research methods. From the design of nonecological situations where there is ambiguity over the authorship of an action to the implementation of control conditions of passive movements that make little sense in our everyday waking life, the reviewed studies have tried to identify some key elements of what constitutes the
sense of agency in humans. The exact interplay between conscious intentions and
behavior and the balance between predictive and postdictive processes remain con-
troversial. However, the empirical investigation of the fact of agency, that is, the study
of situations where people unambiguously produce voluntary actions, suggests that
self-generated behavior changes the perception of one’s body and the external world
by integrating temporal and spatial representations of movements and their effects
on the world. One important implication of the experiments described in this chap-
ter is that the sense of agency seems to be closely linked to the appropriate process-
ing of efferent information within the motor system. For example, the experiments
on intentional binding and sensory attenuation suggest that efferent signals are suf-
ficient for eliciting these effects, and support the conceptualization of the sense of
agency as an efferent-driven predictive process. From a conceptual point of view,
the efference copy can be considered as a pragmatic index of the origin of move-
ment that operates at the interface between the psychological and the physiological
sides of our actions. The psychological content can be described as an intention-in-
action, and the physiological side relates to the descending motor command and
the sensory feedback. The reviewed agentic effects, which are specific to the cascade of cognitive-motor processes that underpins voluntary movements, point to the important functional role of agency for interacting biological organisms.
ACKNOWLEDGMENTS
Dr. Tsakiris and Dr. Fotopoulou were supported by the “European Platform for Life
Sciences, Mind Sciences, and the Humanities” grant by the Volkswagen Stiftung for
“Body-Project: Interdisciplinary Investigations on Bodily Experiences.” Dr. Tsakiris
NOTE
1. For example, Marcel distinguished between attributing an action to one’s self, and
attributing the intentional source of the action to one’s self. Patients with anarchic hand
have a clear sense that their involuntary movements are their own, but they strongly
deny intending them (Marcel, 2003). Since the patients often themselves report this
dissociation as surprising, folk psychology may not adequately capture the difference
between ownership of intentional action and ownership of bodily sensation.
REFERENCES
Berti, A., Bottini, G., Gandola, M., Pia, L., Smania, N., Stracciari, A., Castiglioni, I., Vallar,
G., & Paulesu, E. (2005). Shared cortical anatomy for motor awareness and motor con-
trol. Science, 309 (5733), 488–491.
Berti, A., Spinazzola, L., Pia, L., & Rabuffetti, M. (2007). Motor awareness and motor intention in anosognosia for hemiplegia. In Haggard, P., Rossetti, Y., & Kawato, M., eds., Sensorimotor foundations of higher cognition: Attention and performance XXII, 163–182. New York: Oxford University Press.
Blakemore, S. J., Wolpert, D. M., & Frith, C. D. (2002). Abnormalities in the awareness of
action. Trends in Cognitive Sciences, 6, 237–242.
Botvinick, M., & Cohen, J. (1998). Rubber hands “feel” touch that eyes see. Nature, 391
(6669), 756.
Daprati, E., Franck, N., Georgieff, N., Proust, J., Pacherie, E., Dalery, J., & Jeannerod,
M. (1997). Looking for the agent: An investigation into consciousness of action and
self-consciousness in schizophrenic patients. Cognition, 65, 71–86.
Ebert, J. P., & Wegner, D. M. (2010). Time warp: Authorship shapes the perceived timing
of actions and events. Consciousness and Cognition, 19, 481–489.
Ehrsson, H. H., Spence, C., & Passingham, R. E. (2004). That’s my hand! Activity in pre-
motor cortex reflects feeling of ownership of a limb. Science, 305, 875–877.
Farrer, C., Franck, N., Georgieff, N., Frith, C. D., Decety, J., & Jeannerod, M. (2003).
Modulating the experience of agency: A positron emission tomography study.
NeuroImage, 18, 324–333.
Farrer, C., Frey, S. H., Van Horn, J. D., Tunik, E., Turk, D., Inati, S., & Grafton, S. T. (2008).
The angular gyrus computes action awareness representations. Cerebral Cortex, 18,
254–261.
Farrer, C., & Frith, C. D. (2002). Experiencing oneself vs another person as being the
cause of an action: The neural correlates of the experience of agency. NeuroImage, 15,
596–603.
Fink, G. R., Marshall, J. C., Halligan, P. W., Frith, C. D., Driver, J., Frackowiak, R. S., &
Dolan, R. J. (1999). The neural consequences of conflict between intention and the
senses. Brain, 122, 497–512.
Fotopoulou, A., Tsakiris, M., Haggard, P., Vagopoulou, A., Rudd, A., & Kopelman, M.
(2008). The role of motor intention in motor awareness: An experimental study on
anosognosia for hemiplegia. Brain, 131, 3432–3442.
Fourneret, P., & Jeannerod, M. (1998). Limited conscious monitoring of motor perfor-
mance in normal subjects. Neuropsychologia, 36, 1133–1140.
Fried, I., Katz, A., McCarthy, G., Sass, K. J., Williamson, P., & Spencer, D. D. (1991). Functional organization of human supplementary motor cortex studied by electrical stimulation. Journal of Neuroscience, 11, 3656–3666.
Frith, C. D., Blakemore, S. J., & Wolpert, D. M. (2000). Abnormalities in the awareness and control of action. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 355 (1404), 1771–1788.
Goldberg, G. (1985). Supplementary motor area structure and function: Review and hypotheses. Behavioral and Brain Sciences, 8, 567–616.
Gusnard, D. A., Akbudak, E., Shulman, G. L., & Raichle, M. E. (2001). Medial prefrontal cortex and self-referential mental activity: Relation to a default mode of brain function. Proceedings of the National Academy of Sciences, USA, 98, 4259–4264.
Haggard, P. (2008). Human volition: Towards a neuroscience of will. Nature Reviews
Neuroscience, 9, 934–946.
Haggard, P., Clark, S., & Kalogeras, J. (2002). Voluntary action and conscious awareness.
Nature Neuroscience, 5, 382–385.
Haggard, P., & Tsakiris, M. (2009). The experience of agency: Feeling, judgment and
responsibility. Current Directions in Psychological Science, 18, 242–246.
Haynes, J., & Rees, G. (2006). Decoding mental states from brain activity in humans.
Nature Reviews Neuroscience, 7, 523–534.
Lau, H. C., Rogers, R. D., Haggard, P., & Passingham, R. E. (2004). Attention to intention. Science, 303, 1208–1210.
Leube, D. T., Knoblich, G., Erb, M., & Kircher, T. T. (2003). Observing one’s hand
become anarchic: An fMRI study of action identification. Consciousness and Cognition,
12, 597–608.
Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious inten-
tion to act in relation to onset of cerebral activity (readiness-potential): The uncon-
scious initiation of a freely voluntary act. Brain, 106 (pt. 3), 623–642.
Longo, M. R., & Haggard, P. (2009). Sense of agency primes manual motor responses.
Perception, 38, 69–78.
Marcel, A. J. (2003). The sense of agency: Awareness and ownership of actions and inten-
tions. In Roessler, J., & Eilan, N., eds., Agency and self-awareness, 48–93. Oxford: Oxford
University Press.
Metcalfe, J., & Greene M. J. (2007). Metacognition of agency. Journal of Experimental
Psychology General, 136, 184–199.
Moore, J. W., & Haggard, P. (2010). Intentional binding and higher order agency experi-
ence. Consciousness and Cognition, 19, 490–491.
Northoff, G., & Bermpohl, F. (2004). Cortical midline structures and the self. Trends in
Cognitive Sciences, 8, 102–107.
Northoff, G., Heinzel, A., de Greck, M., Bermpohl, F., Dobrowolny, H., & Panksepp, J.
(2006). Self-referential processing in our brain—a meta-analysis of imaging studies on
the self. NeuroImage, 31, 440–457.
Sarrazin, J. C., Cleeremans, A., & Haggard, P. (2008). How do we know what we are doing?
Time, intention, and awareness of action. Consciousness and Cognition, 17, 602–615.
Sato, A., & Yasuda, A. (2005). Illusion of sense of self-agency: Discrepancy between
the predicted and actual sensory consequences of actions modulates the sense of
self-agency, but not the sense of self-ownership. Cognition, 94, 241–255.
Schneider, F., Bermpohl, F., Heinzel, A., Rotte, M., Walter, M., Tempelmann, C., Wiebking,
C., Dobrowolny, H., Heinze, H. J., & Northoff, G. (2008). The resting brain and our
self: Self-relatedness modulates resting state neural activity in cortical midline struc-
tures. Neuroscience, 157, 120–131.
Synofzik, M., Vosgerau, G., & Newen, A. (2008). Beyond the comparator model: A multifactorial two-step account of agency. Consciousness and Cognition, 17, 219–239.
Tsakiris, M., & Haggard, P. (2003). Awareness of somatic events associated with a volun-
tary action. Experimental Brain Research, 149, 439–446.
Tsakiris, M., & Haggard, P. (2005). Experimenting with the acting self. Cognitive
Neuropsychology, 22, 387–407.
Tsakiris, M., Haggard, P., Franck, N., Mainy, N., & Sirigu, A. (2005). A specific role for
efferent information in self-recognition. Cognition, 96, 215–231.
Tsakiris, M., Hesse, M., Boy, C., Haggard, P., & Fink, G. R. (2007). Neural correlates of
body-ownership: A sensory network for bodily self-consciousness. Cerebral Cortex, 17,
2235–2244.
Tsakiris, M., Longo, M. R., & Haggard, P. (2010). Having a body versus moving your body:
Neural signatures of agency and body-ownership. Neuropsychologia, 48, 2740–2749.
Tsakiris, M., Prabhu, G., & Haggard, P. (2006). Having a body versus moving your body: How agency structures body-ownership. Consciousness and Cognition, 15, 423–432.
Wegner, D. M. (2003). The mind’s best trick: How we experience conscious will. Trends
in Cognitive Sciences, 7, 65–69.
Wittgenstein, L. (1953). Philosophical investigations. Blackwell.
7
AMBIGUITY IN THE SENSE OF AGENCY
SHAUN GALLAGHER
In a variety of recent studies the concept of the sense of agency has been shown
to be phenomenologically complex, involving different levels of experience, from
the basic aspects of sensory-motor processing (e.g., Farrer et al. 2003; Tsakiris and
Haggard 2005; Tsakiris, Bosbach, and Gallagher 2007) to the higher levels of inten-
tion formation and retrospective judgment (e.g., Pacherie 2006, 2007; Stephens
and Graham 2000; Synofzik, Vosgerau, and Newen 2008; Gallagher 2007, 2010).
After summarizing this complexity, I will argue, first, that the way that these various
contributory elements manifest themselves in the actual phenomenology of agency
remains ambiguous, and that this ambiguity is in fact part of the phenomenology.
That is, although there surely is some degree of ambiguity in the analysis of this
concept, perhaps because many of the theoretical and empirical studies cut across
disciplinary lines, there is also a genuine ambiguity in the very experience of agency.
Second, most studies of the sense of agency fail to take into consideration that it
involves more than simply something that happens in the head (mind or brain), and
specifically that it has a social dimension.
COMPLEXITIES
Normally when I engage in action, I have a sense of agency for that action. How is
that sense or experience of agency generated?1 It turns out that there are a number
of things that can contribute to this experience. Some, but not all, of these things do
contribute to the experience of agency in all cases. I’ll start with the most basic—
those aspects that seem to be always involved—and then move to those that are
only sometimes involved.
sense of ownership (SO) for movement, which is the sense that I am the one who
is undergoing the movement—that it is my body moving, whether the movement
is voluntary or involuntary (Gallagher 2000a, 2000b). In the case of involuntary
movement, SA is missing, but I still have SO. If I’m pushed, I still have the sense that
I am the one moving, even if I did not cause the movement. These experiences are
prereflective, which means that they are neither equivalent to nor dependent on the subject's taking an introspective, reflective attitude. Nor do they require that the subject engage in an explicit perceptual monitoring of bodily movements. Just as I do
not attend to the details of my own bodily movements as I am engaged in action, my
sense of agency is not normally something that I attend to or something of which I
am explicitly aware. As such, SA is phenomenologically recessive.
If we are thinking of action as physical, embodied action that involves self-gen-
erated movement, then motor control processes are necessarily involved. The most
basic of these are efferent brain processes that are involved in issuing a motor com-
mand. Let’s think again about involuntary movement. In the case of involuntary
movement there is a sense of ownership for the movement but no sense of self-
agency. Awareness of my involuntary movement comes from reafferent sensory
feedback (visual and proprioceptive/kinesthetic information that tells me that I’m
moving). There are no initial motor commands (no efferent signals) that I issue
to generate the movement. It seems possible that in both involuntary and volun-
tary movement SO is generated by sensory feedback, and that in the case of vol-
untary movement a basic, prereflective SA is generated by efferent signals. Tsakiris
and Haggard (2005; also see Tsakiris 2005) review empirical evidence to support
this division of labor. They suggest that efferent processes underlying SA modu-
late sensory feedback resulting from movement. Sensory suppression experiments
(Tsakiris and Haggard 2003) suggest that SA arises at an early efferent stage in the
initiation of action and that awareness of the initiation of my own action depends on
central signals, which precede actual bodily movement. Experiments with subjects
who lack proprioception but still experience a sense of effort reinforce this conclu-
sion (Lafargue, Paillard, Lamarre, and Sirigu 2003; see Marcel 2003). As Tsakiris
and Haggard (2005) put it:
The sense of agency involves a strong efferent component, because actions are
centrally generated. The sense of ownership involves a strong afferent compo-
nent, because the content of body awareness originates mostly by the plurality
of multisensory peripheral signals. We do not normally experience the efferent
and afferent components separately. Instead, we have a general awareness of
our body that involves both components. (387)
This prereflective SA does not arise simply when I initiate an action; as I continue
to control my action, continuing efferent signals, and the kind of afferent feedback
that I get from my movement, contribute to an ongoing SA.2 To the extent that I am
aware of my action, however, I tend to be aware of what I am doing rather than the
details of how I am doing it, for example, what muscles I am using. Even my reces-
sive awareness of my action is struck at the most pragmatic level of description (“I’m
getting a drink”) rather than at a level of motor control mechanisms. That is, the
Intentional Aspects in SA
Several brain imaging experiments have shown that the intentional aspects of what
I am trying to do and what I actually accomplish in the world enter into our sense of
agency. These experiments help us to distinguish between the purely motor control
contributories (the sense that I am moving my body) and the most immediate and
perceptually based intentional aspects (the sense that I am having an effect on my
immediate environment) of action (Chaminade and Decety 2002; Farrer and Frith
2002). These experiments, however, already introduce a certain theoretical ambigu-
ity into the study of SA, since they fail to clearly distinguish between motor control
aspects and intentional aspects.
For example, in Farrer and Frith’s (2002) fMRI experiment, designed to find the
neural correlates of SA, subjects are asked to manipulate a joystick to drive a colored
circle moving on a screen to specific locations on the screen. In some instances the
subject causes this movement, and in others the experimenter or computer does.
The subject has to discriminate self-agency and other-agency. Farrer and Frith cite
the distinction between SA and SO (from Gallagher 2000a) but associate SA with
the intentional aspect of action, that is, whether I am having some kind of effect with
respect to the goal or intentional task (or what happens on the computer screen).
Accordingly, their claim is that SO (“my hand is moving the joystick”) remains con-
stant while SA (“I’m manipulating the circle”) changes. When subjects feel that they
are not controlling the events on the screen, there is activation in the right inferior
parietal cortex and supposedly no SA for the intentional aspect of the action. When
the subject does have SA for what happens on the screen, the anterior insula is acti-
vated bilaterally.
Although Farrer and Frith clearly think of SA as something tied to the intentional
aspect of action and not to mere bodily movement or motor control, when it comes
to explaining why the anterior insula should be involved in generating SA, they frame
the explanation in terms of motor control and bodily movement:
Why should the parietal lobe have a special role in attributing actions to others
while the anterior insula is concerned with attributing actions to the self? The
sense of agency (i.e., being aware of causing an action) occurs in the context of
a body moving in time and space . . . [and] critically depends upon the experi-
ence of such a body. There is evidence that . . . the anterior insula, in interaction
with limbic structures, is also involved in the representation of body schema . . . .
One aspect of the experience of agency that we feel when we move our bod-
ies through space is the close correspondence between many different sen-
sory signals. In particular there will be a correspondence between three kinds
of signal: somatosensory signals directly consequent upon our movements,
Ambiguity in the Sense of Agency 121
visual and auditory signals that may result indirectly from our movements, and
last, the corollary discharge [efferent signal] associated with motor commands
that generated the movements. A close correspondence between all these sig-
nals helps to give us a sense of agency. (Farrer and Frith 2002, 601–602)
In a separate study Farrer et al. (2003) have the same goal of discovering the neural
correlates of SA. In this experiment subjects provide a report on their experience;
however, all questions about agency were focused on bodily movement rather than
the intentional aspect. In fact, subjects were not given an intentional task to carry out
other than making random movements using a joystick, and the focus of their atten-
tion was directed toward a virtual (computer image) hand that either did or did not
represent their own hand movements, although at varying degrees of rotation rela-
tive to the true position of the subject’s hand. That is, they moved their own hand
but saw a virtual hand projected on screen at veridical or nonveridical angles to their
own hand; the virtual hand was either under their control or not. Subjects were
asked about their experience of agency for control of the virtual hand movements.
The less the subject felt in control, the higher the level of activation in the right infe-
rior parietal cortex, consistent with Farrer and Frith (2002). The more the subject
felt in control, the higher the level of activation in the right posterior insula. This
result is in contrast with the previous study, where SA was associated with activation
of the right anterior insula. Referencing this difference, Farrer et al. (2003) state:
“We have no explanation as to why the localization of the activated areas differ[s] in
these studies, except that we know that these two regions are densely and recipro-
cally connected” (331). One clear explanation, however, is that the shift of focus
from the intentional aspect (accomplishing a computer screen task in Farrer and
Frith 2002) to simple control of bodily movement (in Farrer et al. 2003) changes
the aspect of SA that is being studied. It would be helpful in these experiments to
clearly distinguish between the intentional aspect and the motor (efferent) aspect
of agency, and to say that there are at least these two contributors to SA.
Intention Formation
Over and above the sensory-motor processes that involve motor control and the
perceptual processes that allow us to monitor the intentional aspects of our actions,
there are higher-order cognitive components involving intention formation that con-
tribute to SA. Pacherie (2007) and others like Bratman (1987) and Searle (1983)
distinguish between future or distal intentions and present intentions. Future or
F-intentions relate to prior deliberation processes that allow us to formulate our
relatively long-term goals. For example, I may decide to purchase a car tomorrow
(or next week, or next month, or at some undetermined time when there is a good
rebate available), and then at the appropriate time go out and engage in that action.
Not all actions involve prior intention formation. For example, I may decide right
now to get a drink from the kitchen and find myself already moving in that direc-
tion. In that case I have not formed an F-intention, although my action is certainly
intentional. In that case, I may have a present or P-intention (or what Searle calls an
“intention-in-action”). My intention to get a drink from the kitchen may involve an
actual decision to get up and to move in the direction of the kitchen—and in doing
so I may be monitoring what I am doing in an explicitly conscious way. It may be a
rather complex action. At my university office the kitchen is located down the hall,
and it is locked in the evening. If I want to get a drink, I have to walk up the hall,
retrieve the key for the kitchen from a common room, and then proceed back down
to the kitchen, unlock the door, retrieve the drink, relock the door, return the key,
and return to my office. Although I may be thinking of other things as I do this, I am
also monitoring a set of steps that are not automatic.
In other cases I may be so immersed in my work that I don’t even notice that I’m
reaching for the glass of water on the table next to me. Here my intentional action
may be closer to habitual, and there is no P- or F-intention involved. In such cases, I
would still have a minimal SA, connected with what Pacherie (2007) calls a motor
or M-intention, and consisting of the prereflective sense generated in motor con-
trol processes and a rather recessive intentional aspect (which I may only notice if I
knock over the glass or spill the drink).
It is likely that when there is an F- and/or P-intention involved, such intentions
generate a stronger SA. Certainly, if I form an F-intention to buy a new car tomor-
row, and tomorrow I go to the car dealership and purchase a car, I will feel more in
charge of my life than if, without prior intention, I simply find myself lured into a
car dealership, purchasing a car without prior planning. In the latter case, even if I
do not deny that I am the agent of my action, I might feel a bit out of control. So it
seems clear that part of the phenomenology of agency may be tied, in some cases,
to the formation of a prior intention. It’s important here to distinguish between the
cognitive level of intention formation—which may involve making judgments and
decisions based on beliefs, desires, or evaluations—and a first-order level of experi-
ence where we find SA. SA is not itself a judgment, although I may judge that I am
the agent of a certain action based on my sense of agency for it. But what is clear is
that intention formation may generate a stronger SA than would exist without the
formation of F- or P-intentions.
Retrospective Attribution
The effect of the formation of a prior intention is clearly prospective. But there are
post-action processes that can have a retrospective effect on the sense of agency.
Graham and Stephens (1994; Stephens and Graham 2000) provide an account of
introspective alienation in schizophrenic symptoms of delusions of control and
thought insertion in terms of two kinds of self-attribution.
depends upon whether I take myself to have beliefs and desires of the sort that
would rationalize its occurrence in me. If my theory of myself ascribes to me the
relevant intentional states, I unproblematically regard this episode as my action.
If not, then I must either revise my picture of my intentional states or refuse to
acknowledge the episode as my doing. (Graham and Stephens 1994, 102)
On this approach, I have a sense of agency specifically for my actions because
I have a properly ordered set of second-order retrospective interpretations (see
Graham and Stephens 1994, 102; Stephens and Graham 2000, 162ff.).
Pacherie indicates that F-intentions are subject to normative pressures for consis-
tency and coherence relative to the agent’s beliefs and other intentions. This would
also seem to be the case with Graham and Stephens’s retrospective attributions. But
in either case, the fact that I may fail to justify my actions or think that my actions
fail to fit with my theory or narrative about myself retrospectively does not neces-
sarily remove my sense of agency for the action, although it may diminish it. That is,
it seems wrong to think, as Graham and Stephens suggest, that retrospective attribu-
tion actually constitutes my sense of agency, but one should acknowledge that it can
have an effect on SA, either strengthening it or weakening it.
Within the realm of the normal, we can have two extremes. In one case I may gen-
erally feel that I am in control of my life because I usually follow through and act on
my intentions. I think and deliberate about an action, and form an F-intention to do
it. When the time comes, I remember my F-intention, and I see that it is the appro-
priate time and situation to begin acting to fulfill that intention. My P-intentions
coincide with the successful guidance of the action; my motor control is good, and
all the intentional factors line up. Subsequently, as I reflect on my action, it seems to
me to be a good fit with how I think of myself, and I can fully attribute responsibility
for that action to myself. It seems that in this case I would feel a very strong sense
of agency for the action, all contributing aspects—prospective intention formation,
contemporary control factors, and retrospective attribution—giving me a coherent
experience of that action (see Figure 7.1). In another case, however, I may have a
[Figure 7.1. Contributions to the sense of agency (SA): prospective reflective processes (deliberation, F-intention, P-intention); prereflective perceptual monitoring of the intentional aspect; and retrospective reflective attribution or evaluation.]
AMBIGUITIES
Pacherie suggests that mechanisms analogous to motor control mechanisms can
explain the formation of F- and P-intentions.
That our deliberation about future actions involves thinking about the means and
ends of our actions seems uncontroversial. Pacherie’s proposal does raise one ques-
tion, however. If we regard thinking, such as the deliberative process that may be
involved in intention formation, itself as a kind of action, then do we also have a sense
of agency for the thinking or deliberation involved in the formation of F-intentions?
It seems right to suggest that if I engage in a reflectively conscious process of delib-
erating about my future actions and make some decisions on this basis, I would have
a sense of agency for (and from) this deliberation.3 You could interrupt me during
this process and ask what I am doing, and I could say: “I’m sitting here deliberat-
ing about buying a car.” The sense of agency that I feel for my ongoing deliberation
process may be based on my sense of control over it; my response to your question
is a retrospective attribution that may confirm this sense of agency. It’s also pos-
sible that my SA for my deliberation derives in part from a previous deliberation
process (I may have formed the F-intention yesterday to do my deliberations, i.e.,
to form my F-intentions about car buying, today). It is clearly the case, however,
that not all forming of F-intentions requires a prior intention to do so, otherwise
we would have an infinite regress. We would have to deliberate about deliberating
about deliberating, and so on. Furthermore, it is possible to have P-intentions for
the action of forming F-intentions, where P-intentions in this case may be a form of
metacognition where we are conscious of our cognitive strategies as we form our
F-intentions. Certainly, however, it is not always the case that we engage in this kind
of metacognition as we formulate our F-intentions. It seems, then, that we can have
a minimal first-order sense of agency for our deliberations without prior delibera-
tion or occurrent metacognitive monitoring.
On the one hand, the sense of agency for a particular action (X) is different
from the sense of agency for the intention formation to do X. They are obviously
not equivalent, since there are two different actions involved, X, and the act of
deliberation about X. On the other hand, it seems likely that SA for my delibera-
tion may contribute to my reflective sense (and my retrospective attribution) that
I am the agent of my own actions. Pacherie refers to this as the long-term sense
of agency: “a sense of oneself as an agent apart from any particular action, i.e. a
sense of one’s capacity for action over time, and a form of self-narrative where
one’s past actions and projected future actions are given a general coherence and
unified through a set of overarching goals, motivations, projects and general lines
of conduct” (2007, 6).
As such it may enter into the occurrent sense of agency for any particular action.
Furthermore, if I lacked SA for my deliberation process, it might feel more like an
intuition or unbidden thought, or indeed, if I were schizophrenic, it might feel like
an inserted thought. In any case, it might feel less than integrated with what Graham
and Stephens call the “theory or story of [the subject’s] own underlying intentional
states,” something that itself contributes to SA for the action. So it seems that SA
for the deliberation process itself may contribute to SA for the action X in two indi-
rect ways. First, by contributing to my long-term sense of agency, and second, by
contributing to the effect of any retrospective attribution I may engage in. Still, as I
indicated, there need not be (and, under threat of infinite regress, there cannot be)
a deliberation process for every action that I engage in.
Similarly for P-intentions. If action monitoring, at the level of P-intentions, is
itself a kind of action (if, e.g., it involves making judgments about certain environ-
mental factors), there may be a sense of agency for that action monitoring. The
processes that make up a P-intention are much closer to the intended action itself
and may not feel like an additional or separate action. I can imagine a very explicit
kind of P-intention in the form of a conscious monitoring of what I am doing. For
example, I may be putting together a piece of furniture by following a set of instruc-
tions. In that case I could have a sense of agency for following the instructions and
closely monitoring my actions in terms of means-ends. Certainly doing it that way
would feel very different from doing it without following the set of instructions.
But the SA for following the instructions would really go hand in glove with SA
for the action of assembling the furniture. How we distinguish such things would
really depend on how we define the action.
In the process of assembling the furniture, I may start by reading instruction
number 1; I then turn to the pieces of wood in front of me and join two of them
together. I can distinguish the act of reading from the act of joining and define SA
for each of them. In that case, however, one can ask whether SA for the act of reading
doesn’t contribute to SA for the act of joining. I might, however, think of the read-
ing and the joining as one larger action of assembling the furniture, and SA might
be defined broadly to incorporate all aspects of that assembling. It might also be the
case that when I put together a second piece of furniture, I don’t consult the instruc-
tions at all, in which case SA is more concentrated in the joining. In most practiced
actions a P-intention is really unnecessary because motor control processes and
perceptual monitoring of the intentional aspect can do the job, that is, can keep my
action on track. I might simply make up my mind (an F-intention) to do this task,
and I go and immediately start to do the task without further monitoring in terms
of means-ends. All of this suggests that how we experience agency is relative to the
way we define specific actions, and how practiced those actions are.
This means that there is some serious ambiguity not simply in the way we define
the sense of agency but in the sense—the experience—of agency itself. This phe-
nomenological ambiguity—the very ambiguity of our experience of agency—
should be included in our considerations about the sense of agency. Clear-cut and
unambiguous definitions may create a neat conceptual map, but the landscape
itself may not be so neat. It is not always the case, as Pacherie sometimes suggests,
that P-intentions serve to implement action plans inherited from F-intentions,
since there are not always F-intentions. It is not always the case that “the final stage
in action specification involves the transformation of the perceptual-actional con-
tents of P-intentions into sensorimotor representations (M-intentions) through a
precise specification of the spatial and temporal characteristics of the constituent
elements of the selected motor program” (Pacherie 2007, 3), since there are not
always P-intentions. Pacherie also suggests that a sense of action initiation and a
sense of control are “crucial” components in the sense of agency (2007, 17–18)
and that in both components the P-intention plays a large role. But the fact that
some actions for which we have SA take place without P-intentions puts this idea
in question.
The sense of action initiation, Pacherie suggests, is based on the binding of
P-intention and awareness of movement onset in the very small time frame of 80
to 200 milliseconds prior to actual movement onset corresponding to the time of
the lateralized readiness potential, a signal that corresponds to selection of a spe-
cific motor program (Libet 1985; Haggard 2003). She associates the P-intention
with what Haggard distinguishes as an urge to move and reference forward to the
goal of the action. But these aspects of action experience can be purely prereflec-
tive, generated by motor-control processes, and form part of the M-intention (see
Desmurget et al. 2009 for relevant data). In this regard it is important to distinguish
P-intention from the prereflective perceptual monitoring of the intentional aspects
of the action that can occur without a formed P-intention, as in practiced action.
We could add to this the long-term sense of one’s capacity for action over time,
which Pacherie identifies as related to self-narrative “where one’s past actions and
projected future actions are given a general coherence and unified through a set of
overarching goals, motivations, projects and general lines of conduct” (2007, 6).
Although conceptually we may distinguish between different levels (first-order,
higher-order) and aspects, and neuroscientifically we may be able to identify
Suddenly an object has appeared which has stolen the world from me.
Everything [remains] in place; everything still exists for me; but everything is
traversed by an invisible flight and fixed in the direction of a new object. The
appearance of the Other in the world corresponds therefore to a fixed sliding
of the whole universe, to a decentralization of the world which undermines
the centralization which I am simultaneously effecting. (1969, 255)
of the sense of agency for one’s actions—but this is not just the result of chemically
induced dependency. Compulsive drug-related behaviors correlate neither with the
degree of pleasure reported by users nor with reductions in withdrawal symptoms
as measured in placebo studies and the subjective reports of users. Robinson and
Berridge (1993, 2000) propose an “incentive-sensitization” model: pathological
addiction correlates highly with the salience of socially situated drug-related behav-
iors and stimuli. For example, specific situations (including the agent’s perception
of his social world) are altered and become increasingly hedonically significant to
the agent. Brain regions mediating incentive sensitization are inscribed within the
same areas that process action specification, motor control, and social cognition—
regions of the brain thought to code for intentional deliberation, social navigation,
and action (Allen 2009). This reinforces the idea that situational salience, includ-
ing perceptual salience of the social situation, contributes to intention formation
and the sense of agency—sometimes enhancing but also (as in extreme addictive
behavior) sometimes subverting SA. Intentions can be dynamically shaped in rela-
tion to how others are behaving, and by what is deemed acceptable behavior within
specific subcultures.
In the case of hysteria or conversion disorder, there is also a loss of the sense of
agency over bodily action. But, as Spence (2009, 276) states: “All hysterical phe-
nomena arise within social milieus.” The presence or absence of specific others
(sometimes the medical personnel) has an effect on the symptom, so that there is
symptomatic inconsistency from one social setting to another. Spence points to the
particular social milieu of Charcot’s practice in Paris, Freud’s practice in Vienna, and
the First World War battlefront—social arrangements that seemed to encourage the
development of hysterical symptoms. As he indicates, “There is clearly a need for
further work in this area” (Spence 2009, 280).
Let me conclude with one further example. In 2009 my daughter Laura volun-
teered with the Peace Corps in South Africa, focusing her efforts on HIV education.
She recounts that her attempts to motivate residents in a small village outside of
Pretoria to help themselves by engaging in particular activities were met by a certain
sardonic attitude and even polite laughter. They explained that they were unable to
help themselves simply because, as everyone knew, they were lazy. That’s “the way
they were,” they explained, and they knew this because all their life they had been
told so by various educational and governmental institutions, especially under the
apartheid regime. In effect, because of the contingencies of certain long-standing
social arrangements, with prolonged effects, they had no long-term sense of agency,
and this robbed them of possibilities for action.
It certainly seems possible that an individual could convince himself of his lazi-
ness, without the effects of external forces playing such a causal role. But it is dif-
ficult to conceive of what would motivate such a normative judgment, or even
that there could be such a normative judgment outside of a social environment.
Could there be a form of self-observation that would lead to a self-ascription of lazi-
ness that would not involve a comparison with what others do or do not do, or with
certain expectations set by others? It seems quite possible that some people, or social
arrangements, more than others may make me feel less in charge of my life, or more
empowered; and it seems quite possible that I can allow (or cannot prevent) others,
or some social arrangements, to make me feel more or less empowered. There are
certain ways of raising children, and certain ways of treating others that lead them to
feeling empowered, with a more expansive sense of agency than one finds in other
cases where it goes the other way. None of these possible adumbrations in an
individual's sense of agency—from the Peace Corps volunteer who, at least at the
beginning, feels empowered enough to risk the effort, to the victim of apartheid, who in
the end has very little sense of agency—happen in social isolation.
If, in thinking about action and agency, we need to look at the most relevant prag-
matic level, that level is not the level of mental or brain states. We shouldn’t be look-
ing exclusively inside the head. Rather, embodied action happens in a world that is
physical and social and that often reflects perceptual and affective valences, and the
effects of forces and affordances that are both physical and social. Notions of agency
and intention, as well as autonomy and responsibility, are best conceived in terms
that include social effects. Intentions often get co-constituted in interactions with
others—indeed, some kinds of intentions may not be reducible to processes that
are contained exclusively within one individual. In such cases, the sense of agency
is a matter of degree—it can be enhanced or reduced by physical, social, economic,
and cultural factors—sometimes working through our own narrative practices, but
also by loss of motor control or disruptions in prereflective action-consciousness.
NOTES
1. This and the following section summarize some of the material discussed in Gallagher
(2010).
2. It is important to distinguish SA, as related to motor control processes, from what
Fabio Paglieri (this volume) calls the experience of freedom, which, he argues, has no
positive prereflective phenomenology. Paglieri distinguishes the question of an expe-
rience of freedom from other aspects that may be involved in SA, e.g., the experience
of action control, and leaves the phenomenological status of such aspects an open
question. This is consistent with my own view about the distinction between issues
pertaining to motor control (as in the Libet experiments) and anything like an experi-
ence of freedom, which I understand not to be reducible to motor control (Gallagher
2006). Paglieri nonetheless expresses a skepticism about the sense of agency and sug-
gests that “it rests on an invalid inference from subpersonal hypotheses to phenom-
enological conclusions” (this volume, p. 147). In fact, however, the inference validly
goes in the other direction. It starts from the phenomenological distinction between
SA and SO, originally worked out in the context of the schizophrenic delusions of
control, and then asks what the neurological underpinnings of SA might be (see, e.g.,
Farrer and Frith 2002; Tsakiris and Haggard 2005).
3. This may be part of “what it’s like” or the phenomenal feel of such cognitive pro-
cesses. Of course there is an ongoing debate about whether higher-order cogni-
tive activities such as evaluating or judging come with a phenomenal or qualitative
feel to them. There are three possibilities here. (1) Cognitive states simply have
no phenomenal feel to them. But if such states have no qualitative feel to them, it
shouldn’t feel like anything to make a judgment or solve a math problem, and we
would have to say that we do not experience such things, since on standard defini-
tions phenomenal consciousness is the experience (e.g., Block 1995, 230). Do the
phenomenology when you do the math, and this doesn’t seem correct; but let’s
allow it as a possibility. (2) Cognitive states do have a phenomenal feel to them, but
different cognitive states have no distinguishable phenomenal feels to them so that
deciding to make a will and solving a math problem feel the same. (3) Different
cognitive states do have distinguishable phenomenal feels to them—deciding to
make a will does feel different from solving a math problem. On this view, which
is the one I would defend (see Gallagher and Zahavi 2008, 49ff.), in forming our
intentions we sometimes find it easy and sometimes difficult, sometimes with
much uncertainty or much effort, and accordingly one process of intention forma-
tion might feel different from the other. In either case (2) or (3) there would be
room for SA as an experiential component. E.g., part of what it feels like for me to
solve a math problem is that I was the one who actually solved the problem. But
even if there were no phenomenal feel to such cognitive processes, it may still be
the case that having gone through the process, the result itself, e.g., that I have a
plan, or that my mind is made up, may have a certain feel that contributes to a
stronger experience of agency for the action in question. Acting on a prior plan,
e.g., feels different from acting spontaneously.
REFERENCES
Aarts, H., Custers, R., and Wegner, D. M. 2005. On the inference of personal authorship: Enhancing experienced agency by priming effect information. Consciousness and Cognition 14: 439–458.
Allen, M. 2009. The body in action: Intention, action-consciousness, and compulsion. MA thesis, University of Hertfordshire.
Block, N. 1995. On a confusion about a function of consciousness. Behavioral and Brain Sciences 18: 227–247.
Bratman, M. E. 1987. Intention, Plans, and Practical Reason. Cambridge, MA: Harvard University Press.
Chaminade, T., and Decety, J. 2002. Leader or follower? Involvement of the inferior parietal lobule in agency. NeuroReport 13 (15): 1975–1978.
Desmurget, M., Reilly, K. T., Richard, N., Szathmari, A., Mottolese, C., and Sirigu, A. 2009. Movement intention after parietal cortex stimulation in humans. Science 324: 811–813.
Farrer, C., Franck, N., Georgieff, N., Frith, C. D., Decety, J., and Jeannerod, M. 2003. Modulating the experience of agency: A positron emission tomography study. NeuroImage 18: 324–333.
Farrer, C., and Frith, C. D. 2002. Experiencing oneself vs. another person as being the cause of an action: The neural correlates of the experience of agency. NeuroImage 15: 596–603.
Frankfurt, H. G. 1988. The Importance of What We Care About: Philosophical Essays. Cambridge: Cambridge University Press.
Gallagher, S. 2000a. Philosophical conceptions of the self: Implications for cognitive science. Trends in Cognitive Sciences 4: 14–21.
Gallagher, S. 2000b. Self-reference and schizophrenia: A cognitive model of immunity to error through misidentification. In D. Zahavi (ed.), Exploring the Self: Philosophical and Psychopathological Perspectives on Self-experience (203–239). Amsterdam and Philadelphia: John Benjamins.
Gallagher, S. 2005. How the Body Shapes the Mind. Oxford: Oxford University Press.
Gallagher, S. 2006. Where's the action? Epiphenomenalism and the problem of free will. In W. Banks, S. Pockett, and S. Gallagher (eds.), Does Consciousness Cause Behavior? An Investigation of the Nature of Volition (109–124). Cambridge, MA: MIT Press.
Gallagher, S. 2007. The natural philosophy of agency. Philosophy Compass 2: 347–357.
Gallagher, S. 2010. Complexities in the sense of agency. New Ideas in Psychology (http://dx.doi.org/10.1016/j.newideapsych.2010.03.003). Online publication April 2010.
Gallagher, S., and Zahavi, D. 2008. The Phenomenological Mind. London: Routledge.
Graham, G., and Stephens, G. L. 1994. Mind and mine. In G. Graham and G. L. Stephens (eds.), Philosophical Psychopathology (91–109). Cambridge, MA: MIT Press.
Grünbaum, T. 2009. Action and agency. In S. Gallagher and D. Schmicking (eds.), Handbook of Phenomenology and Cognitive Science (337–354). Dordrecht: Springer.
Haggard, P. 2003. Conscious awareness of intention and of action. In J. Roessler and N. Eilan (eds.), Agency and Self-Awareness (111–127). Oxford: Oxford University Press.
Jeannerod, M., and Pacherie, E. 2004. Agency, simulation, and self-identification. Mind and Language 19: 113–146.
Lafargue, G., Paillard, J., Lamarre, Y., and Sirigu, A. 2003. Production and perception of grip force without proprioception: Is there a sense of effort in deafferented subjects? European Journal of Neuroscience 17: 2741–2749.
Libet, B. 1985. Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences 8: 529–566.
Marcel, A. 2003. The sense of agency: Awareness and ownership of action. In J. Roessler and N. Eilan (eds.), Agency and Self-Awareness (48–93). Oxford: Oxford University Press.
Nagel, T. 1970. The Possibility of Altruism. Oxford: Clarendon Press.
Pacherie, E. 2006. Towards a dynamic theory of intentions. In S. Pockett, W. P. Banks, and S. Gallagher (eds.), Does Consciousness Cause Behavior? An Investigation of the Nature of Volition (145–167). Cambridge, MA: MIT Press.
Pacherie, E. 2007. The sense of control and the sense of agency. Psyche 13 (1) (www.theassc.org/files/assc/2667.pdf).
Robinson, T., and Berridge, K. 1993. The neural basis of drug craving: An incentive-sensitization theory of addiction. Brain Research Reviews 18: 247–291.
Robinson, T., and Berridge, K. 2000. The psychology and neurobiology of addiction: An incentive-sensitization view. Addiction 95 (8s2): 91–117.
Sartre, J.-P. 1969. Being and Nothingness: An Essay on Phenomenological Ontology. Trans. H. E. Barnes. London: Routledge.
Searle, J. 1983. Intentionality: An Essay in the Philosophy of Mind. Cambridge: Cambridge University Press.
Sebanz, N., Bekkering, H., and Knoblich, G. 2006. Joint action: Bodies and minds moving together. Trends in Cognitive Sciences 10: 70–76.
Sebanz, N., Knoblich, G., and Prinz, W. 2003. Representing others' actions: Just like one's own? Cognition 88: B11–B21.
Simon, J. R. 1969. Reactions toward the source of stimulation. Journal of Experimental Psychology 81: 174–176.
Spence, S. 2009. The Actor's Brain. Oxford: Oxford University Press.
Stephens, G. L., and Graham, G. 2000. When Self-Consciousness Breaks: Alien Voices and Inserted Thoughts. Cambridge, MA: MIT Press.
Synofzik, M., Vosgerau, G., and Newen, A. 2008. Beyond the comparator model: A multifactorial two-step account of agency. Consciousness and Cognition 17: 219–239.
Takahama, S., Kumada, T., and Saiki, J. 2005. Perception of other's action influences performance in Simon task [Abstract]. Journal of Vision 5: 396, 396a.
THERE'S NOTHING LIKE BEING FREE
FABIO PAGLIERI
1. INTRODUCTION
The experience of acting freely, as opposed to being coerced into action, seems to
be a prime candidate to characterize the phenomenology of free will, and possibly
a necessary ingredient for our sense of agency. Similarly, the fact of acting freely is
taken to provide a necessary condition for being an agent endowed with free will. In
this essay I keep separate the former phenomenological issue from the latter onto-
logical question. In particular, I do not challenge the idea that the fact of freedom,
once properly defined, contributes to determine whether a certain system is an
agent, and whether it is endowed with free will. But I argue that there is no proof
that free actions are phenomenologically marked by some specific “freedom attri-
bute” (i.e., a positive phenomenal content exclusively associated with free actions),
and thus this nonexistent entity cannot be invoked to justify our judgments on free
will and agency, be they correct or mistaken. This is not to say that we do not consult
our phenomenology to assess whether a certain action we performed was free or
coerced: rather, I suggest that (1) we consult our phenomenology by subtraction,
that is, only to check that there is no evidence of our actions being coerced, thus
(2) the resulting judgment of freedom is not based on us finding some phenomeno-
logical proof of freedom, but rather on lack of any experience of coercion. More
drastically, I maintain that (3) there is no “extra” phenomenological ingredient in
the experience of freedom, since that experience is distinguished from coercion by
having something less, not something more.
A terminological clarification is in order: in this essay, I will sometimes use the
word “feeling” as a shorthand for “distinctive firsthand experience endowed with
a positive content,” with no reference to the fact that such experience is bodily
felt or not, and no discussion of the issue whether feelings necessarily have a
bodily component (for extended discussion, see Damasio 1999; Colombetti and
Thompson 2007). In my usage of the word “feeling” here, being hit on the head with
a cudgel and praising yourself for your cleverness both associate with feelings in the
required sense; in contrast, not being hit on the head with a cudgel and not praising
yourself for your cleverness are not characterized by any distinctive feeling. Subjects
can report on both types of experience, but they do so in different ways: in the first
case, they describe, more or less accurately, what they did experience in certain cir-
cumstances, whereas in the second case they rather report a lack of experience, an
absence of feelings—in short, what they did not perceive, and so they cannot find in
their own phenomenology.
This absence is conceived here as absolute: whenever I speak of absence of a feel-
ing of freedom in this essay, I mean that there is never any positive content in the
subject’s phenomenology that specifically associates with acting freely (whereas of
course there are plenty of positive contents associated with acting per se), so that
judging one’s actions to be free requires only lacking any experience that they are
coerced. This is very different from saying that the feeling of freedom is still pres-
ent, albeit not normally accessed due to habituation or distraction—like not feeling
your bottom while sitting on it, because the sensation, though present and acces-
sible on occasion, is continuous and you are concentrating on something else. In
contrast, utter absence of a feeling of freedom could indicate either lack of any sub-
personal process that stably associates with the freedom of one’s action, or the fact
that such a process is invariably inaccessible to awareness: I am inclined to favor the
latter solution, but both options are fully compatible with the account developed in
this essay on how we judge an action to be free, so the distinction is immaterial to
present purposes.
Although deliberately pitched at the phenomenological level, my line of reason-
ing is not devoid of ontological implications. In particular, it conveys two negative
or limiting results:
the cause of her actions. One might object that the subject here experiences the
assailant and his threatening behavior as being the “true cause” of driving the car,
instead of herself or her own internal dispositions. But what does it mean to experi-
ence something else as the “true cause” of one’s behavior? It means exactly that the
subject is experiencing coercion from some external force, which is something dif-
ferent from, but not incompatible with, experiencing oneself to be the cause of one’s
actions: the fact that the victim experiences her fear as being caused by the assailant
does not imply that she does not experience her actions to be caused by her fear—
being kidnapped should not be confused with delusions of alien control!
Experiencing a given decision as particularly hard and effortful to achieve: libertarians
often consider typical of free choice those situations where more than one option
is strongly attractive to the subject (“close-call” decisions), consistent with their
view that experiencing the possibility of doing otherwise is central to freedom of
action; this view tends to lead to restrictivism on free will, claiming that free will is
exerted only rarely, since it applies only to choices where we feel torn between two
or more options (van Inwagen 1989). As far as phenomenology is concerned, the
problem with restrictivism is that it is far too restrictive to do justice to our judg-
ments of freedom. We consider as free plenty of actions where no strong dilemma
is involved, and in which the effort of making a choice is completely absent. This
does not rule out the possibility that restrictivism may be correct at the metaphysi-
cal level, although I personally agree with Gordon Pettit (2002) that it is not. But
it is certainly useless as a means of identifying a distinctive “freedom attribute,” since it fails to
capture our judgments of freedom, except in very few, extreme cases, which are far
from being typical. In fact, both phenomenological reports (Nahmias et al. 2004)
and experimental results (Wenke et al. 2010) indicate that the experience of freedom
is strongest when decisions are smooth and unproblematic, whereas it is weakest or
even absent when choosing between options that are difficult to discriminate.
An alternative way of looking for some specific “freedom attribute,” indepen-
dently from the libertarianism versus compatibilism debate, is to reduce the experi-
ence of freedom to the combination of experiencing authorship and control of one’s
actions. The problem with this strategy is that it does not identify anything specific
to the experience of freedom, since in many cases (such as the kidnapping example
discussed earlier) it is perfectly possible to experience oneself as the author of one’s
actions and (to some degree) in control of them, and yet perceive the action as not
being free. Even if it is true that in some specific cases an experience of coercion may
be triggered by a disruption of one’s sense of authorship or control or both (thought
insertion and alien control are obvious examples), this is not the only possibility and
certainly not the most typical. As a case in point, nonpathological subjects under
coercion retain an experience of authorship and control, and yet they do not judge
their actions as being free—and rightly so. So it would seem that an intact sense of
authorship and control is a necessary condition for experiencing freedom, but not
a sufficient one. In contrast, lack of an experience of coercion is both sufficient and
necessary to judge an action to be free. The fact that an experience of coercion can
be originated either by dysfunctions in one’s sense of agency or by correctly perceiv-
ing external pressures over one’s behavior is relevant and will play a role in the rest
of this chapter, but it does not change the basic tenet of this approach: we judge an
action to be free for lack of any experience of coercion.
1. Issuing judgments of freedom is both fast and easy, and usually conveys a high
degree of conviction: when asked whether a certain action was free, subjects answer
rapidly and with no hesitation, apparently with no need of any sophisticated intel-
lectual reconstruction. Let us call this first feature easy reportability.
There is a common way of accounting for all these factors: easy reportability is
taken to indicate that there is such a thing as a feeling of freedom associated with
actions that we perform freely, and it is because of the phenomenological vague-
ness of such a feeling that systematic mistakes can occur in our judgments, thus
explaining vulnerability to error. The first part of this strategy, inferring presence of
experience from easiness of report, is attributed by Nahmias and colleagues to the
majority of philosophers:
Theories of free will are more plausible when they capture our intuitions and
experiences than when they explain them away. Thus, philosophers generally
want their theories of free will to aptly describe the experiences we have when
we make choices and feel free and responsible for our actions. If a theory mis-
describes our experiences, it may be explaining the wrong phenomenon, and
if it suggests that our experiences are illusory, it takes on the burden of explain-
ing this illusion with an error theory. (2004, 162, my emphasis)
My first comment is that I agree that a theory of free will fares better when it
relates to our intuitions and experiences, with the proviso that such relationship
need not be one of identity, and the additional admonition that “intuitions” and
“experiences” should by no means be treated as synonyms. Although our intuitions
are bound to reveal something interesting about our experiences, I will argue that
no one-to-one correspondence needs to be assumed. My second comment is that
Nahmias and colleagues here seem to presuppose that there is such a thing as feeling
free (“the experiences we have when we make choices and feel free”), presumably
on the grounds of reports produced by subjects asked to consider whether or not
their actions were free. In what follows, I suggest that this inference is not necessar-
ily valid, since there is at least one alternative explanation that fits our intuitions on
free action much better. So, before considering “What is it that we feel while acting
freely?,” we should take a step back and first ask, “Do we feel anything special while
acting freely?”—more precisely, “Is there any specific positive phenomenal content
characteristic only of freedom of action?” The previous section gave reasons to doubt
that this question can be answered positively; in what follows I endeavor to outline
a phenomenology of free action that dispenses with any “freedom attribute.”
Let it be noted in passing that it is an open question whether some of the follow-
ing considerations could be applied also to other phenomenological features pre-
sumably involved in the experience of agency, like the experience of choice (Holton
2006), the experience of effort (Baumeister et al. 1998; Bayne and Levy 2006), and
the experience of action control (Pacherie 2007). I will get back to this issue at the
end of this essay, albeit in a very cursory manner and mainly with reference to expe-
riences of authorship and action control. Until then, I will provisionally confine
the analysis to the experience of freedom, using it as a test bed for a default theory
of how we use phenomenological evidence (or lack thereof) to draw judgments
about the nature of our actions. One reason caution is needed in confining this
analysis to freedom is that other aspects of what we experience during an action
are quite clearly endowed with positive phenomenal content (so it would be a lost
cause to doubt its existence), and yet it is not absence or presence of that content
that guides judgments on the freedom of that action (so it would be a red herring
to take its existence as relevant for present purposes). Let us consider again action
control: bodily actions that are under the subject’s direct control have well-defined
phenomenal properties, which contribute to distinguish them from actions under
the direct guidance of some external agency. Nonetheless, judging one’s actions to
be free is not equivalent to judging them to be self-produced without external
assistance or even guidance: not only can one act under coercion while retaining
full agentive control of bodily movements (e.g., a hostage performing illegal activi-
ties while being held at gunpoint), but also, more crucially, one can act freely in the
absence of direct control over bodily movements (e.g., having one’s arm bent and
manipulated by a chiropractor as part of a self-imposed therapy).
Interestingly, the existence of some “freedom attribute” is sometimes taken for
granted even by those who are investigating other features in the phenomenology
consulting one’s own phenomenology and failing to find any experience of external
causes prompting the action.
A profitable way of looking at this is in terms of default options: the suggestion
is that, as far as our intuitions are concerned, the default option is to consider our
actions free, with no need of any further “proof ” or “evidence” from our own expe-
rience. In contrast, only a phenomenologically salient experience of being caused
by someone or something else can lift the default and force me to judge my action
as being not free. This is tantamount to suggesting that there is a presumption of
freedom built into our judgments of agency: any action is considered free until proven
coerced. Let us call this the default theory of freedom, and let it be noted that defaults
here work as a bridge between experience and judgment, one that does not entail
any one-to-one correspondence between evaluations (to consider oneself free) and
phenomenological contents (to have a specific feeling of freedom).
Before discussing the explanatory value of this alternative proposal in the next
section, I want to show that here the libertarian notion of uncaused cause works as
a useful pivot, not as a seed of evil. In particular, it does not commit my approach
to libertarianism and antideterminism. The point is that lacking the experience of
our free actions being caused does not make them any less caused. What we con-
sult to explicitly assess our freedom of action is, inevitably, our phenomenology,
not the actual causal processes that determine our final action. So the claim is that,
although all our actions are indeed caused, this fact emerges in our phenomenology
only when certain conditions obtain, and these conditions are those that associate
with actions that we perceive as being coerced and not free. This, in turn, raises the
issue of what these conditions are, and to what types of action they apply.
Notice that the default theory still makes sense even if these questions are not
answered: it remains underspecified but still viable in principle. This is why here I will
only outline an answer, leaving it to future work to provide additional details on this
aspect of the theory. The two most intuitive and widespread notions of freedom con-
cern (1) having multiple choice options and not being restricted to a single path in our
behavior, for example, “In buying my car, I was free to choose among a variety of mod-
els,” and (2) being independent in our choice from any external domination or direct
influence from someone or something else, for example, “I decided freely to marry
my wife.”1 So let us consider what kind of experience is associated with these different
aspects of freedom and with the corresponding forms of coercion, keeping in mind
that in most instances of coerced action we lack both types of freedom: this is the case
when someone is force-fed some foul-tasting medicine, or when a neurophysiologist
makes a patient’s arm twitch by applying a transcranial magnetic stimulation (TMS)
coil to his scalp, both situations where the subject has no other option and is under
the (possibly well-meaning) domination of someone else. Nevertheless, it is useful to
analytically disentangle the experience of being free to choose among many options
(or not) from the experience of being free from external influence (or not).
The first aspect of freedom seems necessarily connected with the experience of
choice, and there are two ways of thinking about the experience of choice: one is by
referring to those situations in which we are torn between incompatible desires of
similar value (e.g., deciding whether or not to take on a new job, which is better paid
but less secure than the current one), so that the act of choosing is characterized
undermine the default theory I want to defend. But I do not think that Holton pro-
vides us with a description of the phenomenology of choice (while I agree with him
that the act of choice is independent from judging what is best), and therefore his
account does not offer evidence of any positive experience of freedom—nor was
it intended to, by the way. The only phenomenologically salient facts mentioned
by Holton are the emotional responses experienced by subjects when they were
unaware of the factors affecting their decision.2 But it is an empirical issue whether
such nonconscious decisions are truly typical of human choice. Moreover, even if
we grant that they are indeed frequent, they cannot be coextensive with our judg-
ments of freedom, since we consider ourselves free also when (1) we consciously
decide to perform an action on the grounds of what we consider best, and when
(2) we lack any “hunch” about the right option.
So, on my reconstruction, Holton’s analysis does not support the claim that we
experience some specific “freedom attribute,” whenever we are free to choose among
multiple options. But is there any argument now to support the complementary
claim—that is, that there is such a thing as experiencing lack of this particular aspect
of freedom? In all those cases (the vast majority) where lack of options is
paired with subjection to external pressures, I believe the lack of the latter (i.e., being
under the domination of some external force) is much more salient in our phenom-
enology than the lack of the former (i.e., having no other option), and thus I will
discuss these situations and their phenomenology in a moment. But even when lack
of alternatives is suffered without any direct influence from external forces, this pro-
duces very distinctive experiences. Unfortunately, the world is full of free indigents,
that is, people allowed in principle to do whatever they like but lacking the means
to exploit this abundance of alternatives: it is trivial to observe that the limitations
imposed upon their choices are acutely felt by them, whereas it is far from obvious
what “feelings,” if any, may characterize the freedom of choice enjoyed by a rich
bourgeois. When previously impoverished people rescue themselves from misery,
they still retain vivid memories and a rich phenomenology of their past experience,
often explicitly mentioning lack of options as one of the aspects of what they felt at
that time. Nothing of the sort seems to accompany the availability of options, not
even in the experience of the most enthusiastic libertarian. More precisely, lack of
options associates with a specific phenomenology of coercion, inasmuch as poor
people are (and experience themselves to be) coerced by their own misery, even in
the absence of any direct domination from other agentive forces such as a tyrant
or the state. In contrast, the possession of so-called freedom of choice is not felt in
any obvious way, but rather assessed as a matter of intellectual judgment, and often
even taken as a matter of course, at least by those people who never experienced its
absence.
If we now turn to the other aspect of freedom, that is, freedom from external
influences in deciding what to do, it is again easy to see that lack of it correlates with
an extremely rich and vivid phenomenology. Whenever we experience ourselves
as being under the spell of some external force, either while making a decision or
while performing a physical action, this produces a lasting impression in our phe-
nomenology. I am unlikely to fail to notice or to forget afterward that you forced me
to drive the car by pointing a gun to my head, or that you made me press the button
by applying a TMS coil upon my scalp, or that it was my cramping stomach that had
me running for the bathroom. In all these cases, there certainly is a most definitive
experience of not being in control.3
However, proving that coercion has a vivid phenomenology is not the same as
showing that the experience of being free from external influences has no specific
phenomenal correlate, some special “freedom attribute.” It could still be the case
that we do have such an experience, albeit a very “thin” one. Indeed, this idea of
“thinness” is recurrent in the discussion on the phenomenology of agency in gen-
eral (see, e.g., Gallagher 2000; Pacherie 2007), and it is remarkable to me that it has
failed to engender stronger skepticism about the very notion of having a positive
experience of agency. As I will discuss later on, it is much more plausible to assume that
our judgments of freedom rely on the rich phenomenology of coercion, rather than
postulating any “thin,” and thereby elusive, experience of freedom—and perhaps
something similar could be said of our judgments of authorship, control, and even
agency, as I will suggest in section 7. Be that as it may, I first want to discuss whether
what we know about the phenomenology of agency in general is in conflict or in
agreement with the claim that there is no such thing as an experience of acting
without direction from external influences.
Attempts to account for a thin phenomenology of agency, or “minimal selfhood,”
usually rely on subpersonal mechanisms for action control and monitoring: promi-
nent defenders of this view include Shaun Gallagher (2000) and Elisabeth Pacherie
(2007; but see Bayne and Pacherie 2007 for a partially different approach), with
some important differences between their accounts that I will not discuss here. The
key idea is that our capacity to discriminate between self-produced and externally
triggered actions is based at the subpersonal level on matching representations in
the comparator system involved in action control (Wolpert et al. 1995; Wolpert
1997). In this vein, Gallagher suggests the following on how sense of agency might
emerge: “This comparator process anticipates the sensory feedback from movement
and underpins an online sense of self-agency that complements the ecological sense
of self-ownership based on actual sensory feedback. If the forward model fails, or
efference copy is not properly generated, sensory feedback may still produce a sense
of ownership (“I am moving”) but the sense of agency will be compromised (“I am
not causing the movement”), even if the actual movement matches the intended
movement” (2000, 16). Similar hypotheses have been used to interpret experimen-
tal findings on the factors affecting sense of action control (Linser and Goschke
2007), to discuss dissociations between sense of agency and sense of ownership
(Spence et al. 1997; Sato and Yasuda 2005), and to explain a rich variety of anom-
alies in the experience of agency (Blakemore et al. 2002; Blakemore 2003; Frith
2005).
The fact that we have a very good candidate (and possibly more than one)4 as the
subpersonal mechanism responsible for our experience of self-generated action may
seem at odds with the suggestion that judgments of action freedom are based only
on lacking any experience of being forced. A critic might suggest that the default
theory is especially problematic when it comes to experiencing freedom from exter-
nal forces: since there seems to be a positive experience of being the agent of your
own bodily movements (namely, the required match between goal state and efferent
copy), then what is the need of postulating default options and lack of contrary
evidence?
My answer to this objection is that it rests on an invalid inference from subper-
sonal hypotheses to phenomenological conclusions: even granting that sense of
agency correlates with the matching of goal state and efferent copy, there is no rea-
son to conclude that this produces any significant positive correlate in the subject’s
phenomenology. It could be as easily the case that it is only when there is a mismatch
in the comparator system that the subject experiences something special. Indeed,
this seems more plausible in terms of adaptation—but more on this in the next sec-
tions. For the time being, I just want to stress that what we know of the subpersonal
mechanisms responsible for action control and attribution of agency does not give
us any evidence for or against the existence of a positive experience of freedom.
To sum up, I argue that the phenomenology of lack of various aspects of freedom
(freedom to choose among many options and freedom as independence from exter-
nal forces) is remarkably well defined, whereas there is no evidence of any positive
phenomenal content, thin or otherwise, being associated with presence of freedom.
This suggests that the default theory I am defending is our best bet, at least as far as
nonproblematic cases are concerned. “Nonproblematic” here means that our judg-
ments of freedom are correct in these cases: we think we are freely acting when this
is indeed the case, and regard as coerced those actions in which we are in fact being
forced to act by someone or something. However, our judgments of freedom are
prone to errors, and these mistakes, whether systematic or abnormal, need also to
be grounded in our phenomenology. The next section tries to show how the default
theory can accommodate the possibility of error.
about their actions being externally controlled (see Maes and van Gool 2008 for
some excellent examples), in contrast with attempts from normal subjects to report
on their own (correct) experience of acting freely. On the default theory, the vivid-
ness of the delusional phenomenology is not surprising: it is due to the fact that
these subjects are having a positive experience of being coerced (possibly due to
neurological disorders: Frith et al. 2000; Spence 2001), even if this experience is
mistaken. According to the default theory, the same does not happen to normal
subjects acting freely.
Whereas the default theory makes perfect sense of dissociations in our judg-
ments of freedom, these systematic errors pose a hard challenge for any theory
that takes freedom to be associated with a positive phenomenology. If this were the
case, then one should explain: (1) how a subliminal prime can produce the posi-
tive (mistaken) experience of acting freely, as happens in automaticity studies; and
(2) why this positive experience of freedom fails to arise in delusions of alien con-
trol, and yet deluded subjects derive a richer phenomenology from its absence than
normal subjects do from its presence. Both types of dissociation are thus at odds
with the idea that there is such a thing as a feeling of freedom. Indeed, with refer-
ence to automaticity effects, one could venture a mild analogy5 with the relevance
of change-blindness effects for antirepresentationalist theories of vision (O’Regan
and Noë 2001): as change-blindness suggests that we do not need a rich representa-
tion of what we see in order to see it, so automaticity effects suggest that we do not
need a positive phenomenology of freedom in order to judge whether or not our
actions are free. If judgments of freedom were dependent upon a positive experi-
ence of freedom, we could not fail to notice its absence in all those cases where our
actions are induced and/or shaped via subliminal priming. But we do not notice any
absence in these cases, so it is questionable whether this alleged “sense of freedom”
is truly existent.6
Notice also that automaticity effects apply to any normal subject and affect a vari-
ety of cognitive tasks—that is, they are universal in application and general in scope.
In contrast, delusions of alien control and misattributions of agency are rare and
usually associate with specific neurological pathologies; even though similar disso-
ciations can be induced in normal subjects, doing so requires hypnosis (Blakemore
et al. 2003; Haggard et al. 2004), which is a far more elaborate procedure than sub-
liminal priming. This suggests that preventing awareness of a real act of coercion
(e.g., via subliminal priming) is much easier than conjuring a false experience of
being coerced (e.g., via hypnosis). On the default theory, this makes perfect sense:
in the first case, one just has to manipulate attention in such a way as to ensure that
the subject does not become aware of an external force having a direct causal role in
the action; in the second case, one should create within the subject’s awareness an
experience of being coerced in the absence of any external constraint on the behav-
ior. The latter alteration is clearly greater than the former. But if we assume there is
such a thing as a positive experience of freedom, then we should expect things to be
the other way around: illusions of freedom via subliminal priming should be very
hard to achieve, whereas delusions of being controlled by external forces should be
relatively easy to induce. This is in stark contrast with empirical evidence on typical
errors in our judgments of freedom.
In this context, it is also interesting to briefly discuss Libet’s paradigm for tim-
ing the onset of volition in voluntary action (Libet et al. 1983), later on revived,
amended, and expanded by Haggard and colleagues in their work on intentional
binding (Haggard et al. 2002; Haggard 2005). If one looks at the experimen-
tal protocols most frequently used in these settings, the instructions given to the
experimental subjects carry some strong pragmatic implication, concerning both
the actions that the subjects are supposed to perform and the phenomenology they
should attend to and report about. As for the actions, it is perhaps debatable what
kind of intentionality is being measured here. Subjects are instructed to perform
a certain basic movement (with or without sensory consequences) “at their will,”
and to pay attention to the timing of either their intention to move (W judgments),
the onset of movement (M judgments), or its sensory consequence (S judgments).
These instructions carry the strong presumption that subjects are indeed expected
to perform the required movement, sooner or later within the duration of the exper-
iment. It is akin to soldiers being ordered to “fire at will”: not firing at all is clearly
not a socially acceptable option in this context. Similarly, subjects in these experi-
ments are free to decide when to do something, but they do not get to decide what to
do, or whether to do it or not. This may limit the validity of the data thus gathered.7
However, my concern here is with the phenomenological implications of these
instructions: investigating W judgments, Libet and colleagues asked people to
report when they first felt the “urge” to make the required movement; similarly,
Haggard and colleagues explained “intention” to experimental subjects as their first
awareness of being about to move. Although the latter formulation is clearly more
neutral, both sets of instructions strongly imply that subjects should have some-
thing positive to consult in their phenomenology (a feeling, an urge, or at least an
awareness) in order to report their intentions. In other words, these experimental
settings implicitly endorse the idea that there is such a thing as a positive feeling of
acting freely, that is, of one’s own volition.
If, on the contrary, the default theory is correct, then similar instructions should
appear quite outlandish to the experimental subjects, since they are basically asked
to report on a phenomenology that they lack, under the presumption that they have
it. This would nicely explain the fact that conscious intention is reported to occur
significantly later than the onset of the readiness potential systematically correlated
with voluntary movement (Libet 1985): the delay would depend on the fact that
people are asked to monitor the occurrence of something (some phenomenological
outburst of free volition) that typically does not occur, since our actions are consid-
ered by default as being freely willed, and not because we experience some “freedom
epiphany.” Faced with such a bizarre task, subjects have to reconstruct intentionality
in an indirect way, which is the only one available to their phenomenology—and
which amounts to turning a default assumption into an inference from ignorance.
They (1) become aware of being about to move when the movement has already
been initiated (and that is why the readiness potential starts before this aware-
ness, and needs to), (2) they do not experience any external cause for their action,
thus (3) they assume by default that they are freely intending to act.
This is a reconstructive process, but notice that here the reconstruction occurs on
the phenomenology, not on the behavior. This explains why W judgments are not
150 THE SENSE OF AGENCY
ways: first, it is evidently the most economical solution to the problem of discrimi-
nating between free and coerced actions; second, its competitor, that is, the idea
that there is some specific feeling of freedom (see section 2 for discussion), is so
spectacularly antieconomical as to be hardly justifiable in evolutionary terms. Both
points can be clarified by analogy.
Let us first assume that individuating instances of coercion is relevant to the
fitness of the agent. This seems plausible enough: acting under coercion implies
relinquishing control over one’s own conduct, and even though coercion can be
sometimes benevolent (e.g., a mother forcing her son to study), there is no guaran-
tee that it will be—on the contrary, most of the time we want to coerce other people
so that they will act for our benefit rather than their own. Hence the ability to recog-
nize coercion is certainly useful for the subjects, to make sure that they are optimiz-
ing their own fitness, and not the fitness of someone else. Detecting coercion is the
first step to prevent it, or to stop it before it jeopardizes the subject’s performance.
This is similar to the reason that we want to have an alarm system in our house,
one that signals unwanted intrusions to either us or the police (even though some
intrusions could be benevolent, e.g., Santa Claus coming down the chimney to bring
us Christmas presents). Again, detecting intrusion is the first step to prevent it, or to
stop it before any serious harm is done. Here it is worth noticing that all alarm sys-
tems in the world, in spite of their variety in terms of technologies and functioning,
have something in common: they are set to go off when someone is breaking and
entering the house, while remaining silent in all other instances—not the other way
around. And if we now imagine an inverse alarm, that is, an alarm that keeps ringing
when there are no malevolent intruders in the house, and silences itself only when
someone is breaking and entering, we immediately perceive how antieconomical
this would be.
The same applies to the phenomenology of freedom: if its evolutionary value is
to allow detection of coercion, then it makes sense that the relevant signal is set to
attract my attention only when someone or something is coercing me, rather than
being constantly on when all is well and I am acting of my own accord. Just as an
alarm ringing most of the time would be incredibly annoying, so would a positive
phenomenal content specifically associated with all our free actions. Acting freely
is both the typical occurrence and the nonproblematic one: we need not be alerted
to the fact that our actions are free, and would not want to be constantly reminded
of it—it would be like having an alarm constantly ringing in your head while every-
thing is just as it should be. We are interested in freedom of action only when we risk
losing it, and that is when we need a rich phenomenology to signal that something
is amiss.
Notice that both regular alarms and inverse alarms function equally well as sig-
naling devices, at least in principle: they both discriminate between intrusion and
its absence. But regular alarms outperform inverse alarms in two respects: they are
much more economical, and they are likely to be more effective, given some features
of our perceptual system. As far as economy is concerned, the problem with inverse
alarms is evident: since lack of intrusion is the norm, it consumes far more resources
to have the alarm on in this condition than the other way around, not to mention
the costs of constantly annoying the owners of the house and their neighbors. The
same basic rule applies to our phenomenology: to generate, maintain, and attend to a
vivid phenomenology of freedom certainly demands more cognitive resources than
limiting “phenomenological outbursts” to instances of coercion, insofar as freedom
of action is the norm and coercion the exception.
Effectiveness is also an issue for inverse alarms, owing to habituation effects: our
perceptual system attenuates the salience of stimuli that are constantly present in
the environment, and the same mechanism would work on the continuous sig-
nal of the inverse alarm. As a consequence, when the signal is silenced to notify
unwanted intrusion, the perceived discrepancy would be lower than the actual dif-
ference between signal and no signal, because in the meantime the alarm ringing
would have become a sort of background noise for the subject. By contrast, habituation
does not influence regular alarms, so the salience of the signal associated with intru-
sion would be greater in that case. Again, a similar consideration applies to our phe-
nomenology: it is more effective as a signal, that is, more easily noticed, to have an
anomalous experience when coercion occurs than to cease having a normal
experience when freedom is in jeopardy.
The point of the analogy should be clear by now: just as it makes much more
sense to have a regular alarm in your house than an inverse one, so it
is better for you to have a vivid phenomenology of coercion rather than any
alleged “freedom attribute.”8 More precisely, the analogy suggests reasons that
natural selection would not favor the evolution of a vivid experience of freedom
to detect instances of coercion: since such a trait would certainly be less economical,
and potentially less effective, than its rival mechanism, that is, a vivid experience of
coercion, individuals endowed with it would be unlikely to fare better than
competitors endowed with the rival mechanism. Moreover, we know that such
a “rival mechanism” is indeed operative within us: we do have a vivid phenom-
enology of coercion, whereas the experience of freedom, even if there were one,
would be “thin” and elusive. If so, what would be the evolutionary point of hav-
ing a positive experience of freedom at all, since the task of detecting coercion
is already efficiently performed otherwise? This argument, like any evolution-
ary argument, is not meant to be conclusive, but it adds presumptive weight to
the case for a default theory of freedom, especially when taken together with the
other evidence discussed so far.
Admittedly, the evolutionary argument rests on the assumption that the way we expe-
rience freedom functions to discriminate between coercion and free action.
Three reasons jointly justify this assumption. First, as discussed in sections 2
and 4, the most typical descriptions of the experience of freedom include the pos-
sibility of doing otherwise, the presence of multiple options, and the absence of
external influences shaping one’s conduct, and they all emphasize that free action
is markedly different from situations where the individual is either coerced by
someone or forced by circumstances to follow a rigid path. Second, other aspects
in the experience of free agency are either characteristic of agency in general, free
or otherwise (e.g., authorship and action control), or they refer only to subclasses
of free actions and are far from being typical of freedom in general (e.g., feeling
torn between various options that are equally attractive): as such, they are unlikely
to provide guidance on what is the specific function of the experience of freedom.
There’s Nothing Like Being Free 153
ACKNOWLEDGMENTS
I am grateful to Cristiano Castelfranchi, Richard Holton, Dan Hutto, Andy Clark,
Markus Schlosser, Julian Kiverstein, Tillmann Vierkant, and two anonymous review-
ers for providing comments and criticisms on previous versions of this chapter.
This work, as part of the European Science Foundation EUROCORES Programme
“Consciousness in a Natural and Cultural Context” (CNCC), was developed within
the collaborative research project “Consciousness in Interaction: The Role of the
Natural and Social Environment in Shaping Consciousness” (CONTACT), and
was supported by funds from the CNR, Consiglio Nazionale delle Ricerche, and
the EC Sixth Framework Programme.
NOTES
1. This is somewhat reminiscent of Philip Pettit’s (2003) distinction between
option-freedom and agency-freedom, although he brings up the distinction with the aim
of discussing and possibly reconciling different theories of social freedom, while here
the emphasis is on how the subject’s phenomenology is used to justify his or her
judgments of freedom—a very different issue.
2. Holton would probably contend that awareness of making a choice also is a salient
phenomenological fact in these cases, but this is precisely what needs to be ascer-
tained in referring to such instances as a litmus test for a phenomenology of choice.
So the presence of that phenomenology cannot be taken for granted without begging
the question. My point is precisely that Holton’s analysis of appropriate decisions
based on hunches does not provide any support for the claim that there is such a thing
as experiencing choice, in the sense of having a positive phenomenal content concur-
rent with, and distinctive of, making free choices.
3. The fact that lack of freedom from domination comes in many shapes and degrees
does not alter its vivid phenomenology. It is true that the subject could choose death
by shooting over coercion in the first example, or prefer public embarrass-
ment to running for the bathroom in the third, whereas not pressing the button is not
even an option in the TMS coil scenario. Nevertheless, in all three cases some con-
straint interferes with the agent’s freedom, and it is this impairing of agentive control
that registers so deeply in our phenomenology.
4. For an alternative view, see Stephens and Graham (2000); Roser and Gazzaniga
(2004); and Carruthers (2007). For a critical review, see Bayne and Pacherie (2007).
5. I am thankful to Dan Hutto for drawing my attention to this analogy.
6. A stalwart critic of the default theory could object that this only proves that experi-
encing coercion is much more vivid than experiencing freedom, but it does not rule
out the possibility that the latter still exists, albeit only as a “thin” experience. Let us
call this the “Thin Argument”—pun intended. There are two reasons that the Thin
Argument does not work here: first, it violates Occam’s razor, inasmuch as it postu-
lates an entity, to wit, the thin experience of freedom, that does not add anything to
the explanatory power of the theory; second, it has no grip on other empirical facts
in favor of the default theory, e.g., automaticity effects being much easier to induce
than misattributions of agency—a fact consistent with the default theory, but very
hard to reconcile with the existence of a positive experience of freedom (see next
paragraph).
7. Chris Frith made a similar point on the social implications of these experimental
designs at the workshop “Subjectivity, Intersubjectivity, and Self-Representation,”
Borupgaard, Denmark, May 9–12, 2007.
8. A similar point could be made for sensory attenuation of self-produced stimuli
(Weiskrantz et al. 1971; Blakemore et al. 1999; Blakemore et al. 2000; Frith et
al. 2000; Blakemore 2003; Shergill et al. 2003; Frith 2005; Bays et al. 2006). It is
a well-established fact that sensory stimulation becomes less salient when it is
self-produced, and this is interpreted as a way of discriminating between what is caused
by our own movements and what is due to changes in the outside world. Significantly,
such a distinction is achieved by marking as salient those instances where stimuli are
not self-produced, rather than the other way around. This is a well-documented case
of natural selection favoring, in our phenomenology, a regular alarm over an inverse
one. Similarly, to discriminate between actions that are freely undertaken and those
that are coerced, it makes sense to emphasize the phenomenology of the latter, rather
than the former.
9. “Phenomenological account” here only means an account that assigns a key role to
phenomenology in arriving at judgments of freedom. Obviously, it does not imply
that such judgments are indicative of any phenomenal content associated with acting
freely: on the contrary, the default theory maintains that judgments of freedom are
justified by lack of experiences of coercion within the agent’s phenomenology.
10. I am thankful to Tillmann Vierkant for drawing my attention to this connection with
Prinz’s work.
REFERENCES
Bargh, J., Chartrand, T. (1999). “The unbearable automaticity of being.” American
Psychologist 54, 462–479.
Bargh, J., Gollwitzer, P., Lee-Chai, A., Barndollar, K., Troetschel, R. (2001). “The auto-
mated will: Nonconscious activation and pursuit of behavioral goals.” Journal of
Personality and Social Psychology 81, 1014–1027.
Baumeister, R., Bratslavsky, E., Muraven, M., Tice, D. (1998). “Ego-depletion: Is the active
self a limited resource?” Journal of Personality and Social Psychology 74, 1252–1265.
Bayne, T., Levy, N. (2006). “The feeling of doing: Deconstructing the phenomenology
of agency.” In N. Sebanz, W. Prinz (eds.), Disorders of volition, 49–68. Cambridge, MA:
MIT Press.
Bayne, T., Pacherie, E. (2007). “Narrators and comparators: The architecture of agentive
self-awareness.” Synthese 159, 475–491.
Bays, P., Flanagan, J., Wolpert, D. (2006). “Attenuation of self-generated tactile sensations
is predictive not postdictive.” PLoS Biology 4 (2), e28.
Bechara, A., Damasio, A., Damasio, H., Anderson, S. (1994). “Insensitivity to future con-
sequences following damage to human prefrontal cortex.” Cognition 50, 7–15.
Bechara, A., Damasio, H., Tranel, D., Damasio, A. (1997). “Deciding advantageously
before knowing the advantageous strategy.” Science 275, 1293–1295.
Blakemore, S.-J. (2003). “Deluding the motor system.” Consciousness and Cognition 12,
647–655.
Blakemore, S.-J., Frith, C., Wolpert, D. (1999). “Spatio-temporal prediction modulates the
perception of self-produced stimuli.” Journal of Cognitive Neuroscience 11, 551–559.
Blakemore, S.-J., Oakley, D., Frith, C. (2003). “Delusions of alien control in the normal
brain.” Neuropsychologia 41, 1058–1067.
Blakemore, S.-J., Wolpert, D., Frith, C. (2000). “Why can’t you tickle yourself?”
NeuroReport 11 (11), R11–R16.
Blakemore, S.-J., Wolpert, D., Frith, C. (2002). “Abnormalities in the awareness of action.”
Trends in Cognitive Science 6, 237–242.
Campbell, C. (1951). “Is freewill a pseudo-problem?” Mind 60, 441–465.
Carruthers, P. (2007). “The illusion of conscious will.” Synthese 159, 197–213.
Colombetti, G., Thompson, E. (2007). “The feeling body: Towards an enactive approach
to emotion.” In W. Overton, U. Müller, J. Newman (eds.), Developmental perspectives on
embodiment and consciousness, 45–68. Mahwah, NJ: Erlbaum.
Damasio, A. (1999). The feeling of what happens: Body and emotion in the making of con-
sciousness. New York: Harcourt Brace.
Dennett, D. (1984). Elbow room: The varieties of free will worth wanting. Cambridge, MA:
MIT Press.
Dennett, D., Kinsbourne, M. (1992). “Time and the observer.” Behavioral and Brain
Sciences 15, 183–247.
Frith, C. (1992). The cognitive neuropsychology of schizophrenia. Hove, UK: Erlbaum.
Frith, C. (2005). “The neural basis of hallucinations and delusions.” Comptes Rendus
Biologies 328, 169–175.
Frith, C., Blakemore, S.-J., Wolpert, D. (2000). “Abnormalities in the awareness and control
of action.” Philosophical Transactions of the Royal Society of London Series B—Biological
Sciences 355, 1771–1788.
Gallagher, S. (2000). “Philosophical conceptions of the self: Implications for cognitive
science.” Trends in Cognitive Science 4, 14–21.
Gallagher, S. (2004). “Agency, ownership, and alien control in schizophrenia.” In P. Bovet,
J. Parnas, D. Zahavi (eds.), Interdisciplinary perspectives on self-consciousness, 89–104.
Amsterdam: John Benjamins.
Grünbaum, A. (1971). “Free will and laws of human behavior.” American Philosophical
Quarterly 8, 299–317.
Haggard, P. (2005). “Conscious intention and motor cognition.” Trends in Cognitive
Science 9, 290–295.
Haggard, P., Cartledge, P., Dafydd, M., Oakley, D. (2004). “Anomalous control: When
‘free-will’ is not conscious.” Consciousness and Cognition 13, 646–654.
Haggard, P., Clark, S., Kalogeras, J. (2002). “Voluntary action and conscious awareness.”
Nature Neuroscience 5, 382–385.
Haggard, P., Cole, J. (2007). “Intention, attention and the temporal experience of action.”
Consciousness and Cognition 16, 211–220.
Holton, R. (2006). “The act of choice.” Philosophers’ Imprint 6, 1–15.
Horgan, T., Tienson, J., Graham, G. (2003). “The phenomenology of first-person agency.”
In S. Walter, H. Heckman (eds.), Physicalism and mental causation, 323–340. Exeter:
Imprint Academic.
Lehrer, K. (1960). “Can we know that we have free will by introspection?” Journal of
Philosophy 57, 145–157.
Libet, B. (1985). “Unconscious cerebral initiative and the role of conscious will in volun-
tary action.” Behavioral and Brain Sciences 8, 529–566.
Libet, B., Gleason, C., Wright, E., Pearl, D. (1983). “Time of conscious intention to act in
relation to onset of cerebral activity (readiness potential): The unconscious initiation
of a freely voluntary act.” Brain 106, 623–642.
Linser, K., Goschke, T. (2007). “Unconscious modulation of the conscious experience of
voluntary control.” Cognition 104, 459–475.
AGENCY AS A MARKER OF CONSCIOUSNESS

TIM BAYNE
1. INTRODUCTION
One of the central problems in the study of consciousness concerns the ascription
of consciousness. We want to know whether certain kinds of creatures—such as
nonhuman animals, artificially created organisms, and even members of our own
species who have suffered severe brain damage—are conscious, and we want to
know what kinds of conscious states these creatures might be in if indeed they are
conscious. The identification of accurate markers of consciousness is essential if the
science of consciousness is to have any chance of success.
An attractive place to look for such markers is in the realm of agency. Consider
the infant who reaches for a toy, the lioness who tracks a gazelle running across
the savanna, or the climber who searches for a handhold in the cliff. In each case, it
is tempting to assume that the creature in question is conscious of the perceptual
features of its environment (the toy, the gazelle, the handhold) that guide its
behavior. More generally, we might say that the exercise of intentional, goal-directed
agency is a reliable guide to the presence of consciousness. To put it in a slogan, we
might be tempted to treat agency as a marker of consciousness (AMC).
Although it has its advocates, AMC has come in for sustained criticism, and a
significant number of theorists have argued that the inference from agency to con-
sciousness is illegitimate on the grounds that much of what we do is under the con-
trol of unconscious behavioral guidance systems. These theorists typically hold that
the only sound basis for the ascription of consciousness—at least when it comes
to human beings—is introspective report. Frith and colleagues (1999) claim that
“to discover what someone is conscious of we need them to give us some form of
Agency as a Marker of Consciousness 161
report about their subjective experience” (107); Weiskrantz (1997) suggests that
“we need an off-line commentary to know whether or not a behavioural capacity
is accompanied by awareness” (84); and Naccache (2006) claims that “conscious-
ness is univocally probed in humans through the subject’s reports of his or her own
mental states” (1396).
The central goal of this chapter is to assess the case against AMC. My aim
is to provide a framework for thinking about the ways in which agency might
function as a marker of consciousness, and to argue that the case against AMC is
not nearly as powerful as it is often thought to be. The chapter divides into two
rough halves. The first half examines various ways in which agency might func-
tion as a marker of consciousness: section 2 focuses on the notion of a marker of
consciousness itself, while section 3 examines the relationship between agency
and consciousness. Section 4 forms a bridge between the two halves of the
chapter. Here, I contrast AMC with the claim that the only legitimate marker of
consciousness is introspective report. The second half of the chapter examines
two broad challenges to AMC: section 5 examines the challenge from cogni-
tive neuroscience, while section 6 addresses the challenge from social psychol-
ogy. I argue that although the data provided by these disciplines provide useful
constraints on the application of AMC, they do not undermine the general
approach itself.1
will itself be conscious. Further, although the converse entailment does not hold
(for markers of creature consciousness need not be markers of state conscious-
ness), in general it is likely that the ascription of creature consciousness will be
grounded in the ascription of state consciousness. For example, one’s evidence
that an infant is conscious will typically be grounded in evidence that the infant
is enjoying certain types of conscious states, such as pain or visual experiences.
With these points in mind, my focus in this chapter will be on the ascription of
conscious states.
Thus far I have referred to markers of state consciousness in the abstract, but it is
far from clear that we should be looking for a single marker—or even a single fam-
ily of markers—of conscious states of all kinds. In fact, it is likely that such markers
will need to be relativized in a number of ways. First, there is reason to think that
the markers of consciousness will need to be relativized to distinct conscious state
types, for it seems likely that the difference that consciousness makes to the func-
tional profile of a mental state depends on the kind(s) to which that mental state
belongs. To put the point somewhat crudely, we may need one family of markers for
(say) conscious perceptions, another for conscious thoughts, and a third for con-
scious emotions. Indeed, our taxonomy might need to be even more fine-grained
than this, for we might need to distinguish markers of (say) conscious vision from
markers of conscious olfaction.
Second, markers of consciousness may need to be relativized to particular types
of creatures. If consciousness is multiply realized, then the markers of conscious-
ness that apply to the members of one species may not apply to the members of
another. And even if there are species-invariant markers of consciousness, it may
be difficult to determine whether any putative marker of consciousness is in fact
species invariant or whether it applies only to the members of a certain species or
range of species.
Third, markers of consciousness may need to be relativized to what are variously
referred to as “background states,” “levels,” or “modes” of consciousness, such as
normal wakefulness, delirium, REM sleep, hypnosis, the minimally conscious state,
and so on. Unlike the fine-grained conscious states that can be individuated in
terms of their content (or phenomenal character), modes of consciousness modu-
late consciousness in broad, domain-general ways. Given the complex interactions
between a creature’s mode of consciousness and its various (fine-grained) states of
consciousness, it is highly likely that many of our markers for the latter will need to
be relativized to the former in various ways. In other words, what counts as a marker
of (say) visual experience may depend on whether we are considering creatures in a
state of normal wakefulness or (say) creatures in a state of REM sleep.
My focus will be on the question of whether agency might count as a marker of con-
sciousness in the context of human beings in the normal waking state. The further we
depart from such contexts, the less grip we have on what might be a reasonable marker
of consciousness (Block 2002). So, although the following discussion will touch on
the question of whether agency might be regarded as a marker of consciousness in
other species or in human beings who have suffered severe neurological damage, my
primary interest will be in cognitively unimpaired human beings in the normal
waking state.
3. INTENTIONAL AGENCY
Agency is no less multifaceted than consciousness. The folk-psychological
terms that we have for describing agency—“intentional agency,” “goal-directed
agency,” “voluntary agency,” “deliberative agency,” and so on—are imprecise in
various ways, and it is unclear which, if any of them, will find gainful employment
within the scientific study of agency (Prochazka et al. 2000). We might hope that
advances in the scientific understanding of agency will bring with them a more
refined taxonomy for understanding agency, but at present we are largely reliant
on these folk-psychological categories. From within this framework, the most
natural point of contact with consciousness is provided by intentional agency. The
driving intuition behind AMC is that we can use a creature’s intentional responses
to its environment as a guide to the contents of its consciousness. But what is an
intentional action?
Most fundamentally, intentional actions are actions that are carried out by agents
themselves and not by some subpersonal or homuncular component of the agent. This
point can be illustrated by considering the control of eye movements. Some eye
movements are controlled by low-level stimulus-driven mechanisms that are located
within the superior colliculus; others are directed by high-level goal-based represen-
tations (Kirchner & Thorpe 2006; de’Sperati & Baud-Bovy 2008). Arguably, the
former are more properly ascribed to subpersonal mechanisms within the agent,
while the latter qualify as things that the agent does—as instances of “looking.” In
drawing a distinction between personal agency and subpersonal motor control, I
am not suggesting that we should think of “the agent” as some kind of homunculus,
directing the creature’s behavior from its director’s box within the Cartesian the-
ater. That would be a grave mistake. As agents we are not “prime movers” but crea-
tures that behave in both reflective (or “willed”) and reactive (or “stimulus-driven”)
modes depending on the dictates of the environment. The agent is not to be identi-
fied with the self of rational reflection or pure spontaneity but is to be found by look-
ing at how the organism copes with its environment. The exercise of willed agency
draws on neural circuits that are distinct from those implicated in stimulus-driven
agency (Jahanshahi & Frith 1998; Lengfelder & Gollwitzer 2001; Mushiake et al.
1991; Obhi & Haggard 2004), but few of our actions are purely “self-generated” or
purely “stimulus-driven.” Instead, the vast majority of what we do involves a com-
plex interplay between our goals and the demands of our environment (Haggard
2008). Think of a typical conversation, where what one says is guided by both the
behavior of one’s interlocutor and one’s own communicative intentions.
A useful way in which to unpack the contrast between personal and subpersonal
control is in terms of cognitive integration. What it is for an action to be assigned
to the agent herself rather than one of her components is for it to be suitably inte-
grated into her cognitive economy. There is some temptation to think that behav-
ioral responses that are functionally isolated from the agent’s cognitive economy
should not be assigned to the agent, at least not without reservation. In contrast to
“hardwired” stimulus-response behaviors, intentional responses are marked by the
degree to which they are integrated with each other. This integration might be most
obvious in the context of deliberative, reflective, and willed agency, but it can also
4. AN INTROSPECTIVE ALTERNATIVE?
As already noted, many consciousness scientists reject AMC in favor of the claim
that introspection is the (only) marker of consciousness. We might call this position
“IMC.” Comparing and contrasting AMC with IMC enables us to further illumi-
nate what it is that AMC commits us to, and also indicates that AMC is in line with
many of our intuitive responses.3
Let us begin by distinguishing two versions of IMC. According to the strong ver-
sion of IMC, introspective report is the only legitimate marker of consciousness in
any context. Where a creature is unable to produce introspective reports that it is in
a certain type of conscious state, then we have no reason to ascribe conscious states
of that kind to it; and where a creature is unable to produce introspective reports
of any kind, then we have no reason to think that it is conscious at all. (Indeed, we
might even have reason to think that it is not conscious.) A weaker version of IMC
takes introspective report to be the “gold standard” for the ascription of conscious-
ness when dealing with subjects who possess introspective capacities, but it allows
that nonintrospective measures might play a role in ascribing consciousness when
dealing with creatures in which such capacities are absent.
I will focus here on the strong version of IMC for the following two reasons.
First, I suspect that in general advocates of IMC incline toward the strong rather
than the weak version of the view. Second, it is not clear that the weak version of
IMC is stable. If there are viable nonintrospective markers of consciousness at all,
then it is difficult to see why they couldn’t be usefully applied to creatures with
introspective capacities. Of course, the advocate of the weak version of IMC could
allow that nonintrospective measures can be “applied” to creatures with introspec-
tive capacities but insist that whenever introspective and nonintrospective mea-
sures point in different directions, the former trump the latter. But it is difficult
to see why we should assume that introspective measures should always trump
nonintrospective measures if, as this response concedes, both introspective and
nonintrospective measures can be “applied” to the same creature. At any rate, I will
contrast AMC with the strong version of IMC and will leave the weak version of
the view to one side.
In what contexts will AMC and IMC generate different verdicts with respect to
the ascription of consciousness? Consider a series of experiments by Logothetis
and colleagues concerning binocular rivalry in rhesus monkeys (Logothetis &
Schall 1989; Logothetis et al. 2003; Sheinberg & Logothetis 1997). In this work,
Logothetis and colleagues trained rhesus monkeys to press bars in response to
particular images, such as horizontal and vertical gratings. Following training, the
monkeys were placed in a binocular rivalry paradigm in which a horizontal grating
was presented to one eye and a vertical grating was presented to the other eye, and
the monkeys were required to respond to the two stimuli by means of bar presses.
At the same time, Logothetis and colleagues recorded from the visual system of the
monkeys in order to determine where in the visual hierarchy the closest correla-
tions between their conscious percepts (as measured by their responses) and neural
activity might be found.
How should we interpret the monkeys’ responses? In a view that has been widely
endorsed in the literature, Logothetis and colleagues describe the monkeys as pro-
ducing introspective reports. This view seems to me to be rather problematic. In
fact, it seems highly implausible that the monkeys were producing reports of any
kind, let alone introspective reports. Arguably, a motor response counts as a report
only if it is made in the light of the belief that one’s audience will take it to manifest
the intention to bring about a certain belief in the mind of one’s audience, and it
seems doubtful that the monkeys’ bar presses were guided by mental states of
this kind. In other words, commitment to IMC would be at odds with the assump-
tion that the monkeys were indeed conscious of the stimuli that were presented to
them, and would thus undermine the relevance of this work to questions concern-
ing the neural correlates of consciousness.
Agency as a Marker of Consciousness 167
But we can interpret this research as bearing on the neural correlates of con-
sciousness—as it seems very natural to do—by endorsing AMC. Instead of con-
ceiving of the monkeys’ bar presses as reports, we should regard them as intentional
actions made in light of particular types of conscious percepts, for the monkeys
have learned that pressing a certain bar in response to (say) a horizontal grating will
produce a reward.
A second domain in which our intuitive ascription of consciousness appears to
favor AMC over IMC involves severely brain-damaged patients. In drawing the
boundary between the vegetative state and the minimally conscious state, physi-
cians lean heavily on appeals to the patient’s agentive capacities (Bernat 2006;
Jennett 2002; Giacino et al. 2002). A vegetative state diagnosis requires that the
patient show “no response to external stimuli of a kind that would suggest volition
or purpose (as opposed to reflexes)” (Royal College of Physicians 2003, §2.2).
Patients who do produce signs of volition are taken to have left the vegetative state
and entered the minimally conscious state. Volition has even been used to make
the case for consciousness in certain apparently vegetative state patients. One study
involved a 23-year-old woman who had been in a vegetative state for 5 months (Boly
et al. 2007; Owen et al. 2006). On some trials the patient was played a prerecorded
instruction to imagine playing tennis; on other trials she was instructed to imag-
ine visiting the rooms of her home. Astonishingly, the patient exhibited sustained,
domain-specific neural activity in the two conditions that was indistinguishable
from that seen in healthy volunteers.
It is widely—although not universally—supposed that this activity provided evi-
dence of consciousness in this patient. AMC is consistent with that response, for
it is reasonable to regard the neural activity as evidence of sustained, goal-directed
mental imagery. In the same way that limb movement in response to command is
generally taken as a manifestation of consciousness in minimally conscious state
patients, so too we might regard evidence of sustained mental imagery as evidence
of consciousness in vegetative state patients (Shea & Bayne 2010). By contrast,
advocates of IMC will deny that we have any evidence of consciousness in this
patient, as Naccache (2006) does. Indeed, the advocate of IMC may be committed
to denying that even so-called minimally conscious state patients are conscious, a
position that is at odds with current clinical opinion.
A third domain in which AMC appears to deliver a more intuitively plausible ver-
dict than IMC concerns the interpretation of the commissurotomy (or “split-brain”)
syndrome. The commissurotomy procedure involves severing some portion of the
corpus callosum in order to prevent epileptic seizures spreading from one hemi-
sphere to the other. Although split-brain patients are largely unimpaired in every-
day life, under carefully controlled laboratory conditions they can be led to exhibit
striking behavioral dissociations (Gazzaniga 2005; Zaidel et al. 2003). In a typical
split-brain experiment, distinct visual stimuli are presented to the patient in sepa-
rate halves of the visual field. For example, the word “key-ring” might be projected
such that “key” is restricted to the patient’s left visual field and “ring” is restricted
to the patient’s right visual field. The contralateral structure of the visual system
ensures that stimuli presented in the left visual field are processed only in the right
hemisphere and vice versa. The typical finding is that patients say that they see only
the word “ring,” yet with their left hand they will select a picture of a key and ignore
pictures of both a ring and a key-ring.
As a number of theorists have pointed out (e.g., Milner 1992), strict adherence
to IMC would require us to conclude that such patients are not conscious of the
word “key.”4 Although some theorists have found this view attractive (e.g., MacKay
1966), most regard the “minor” nonspeaking hemisphere as capable of supporting
consciousness (e.g., LeDoux et al. 1977; Marks 1981). In doing so, theorists seem to
have implicitly embraced some version of AMC. They take the right hemisphere of
split-brain patients to support consciousness on the grounds that it enables various
forms of intentional, goal-directed agency.
We have examined three domains in which the contrast between IMC and AMC
has an important bearing on the ascription of consciousness. Arguably, in each case
AMC does a better job of capturing our pretheoretical intuitions than IMC does.
Of course, the advocate of IMC need not be impressed by this result. Such a theorist
might grant that even if certain experimental paradigms employed in the science of
consciousness cannot be reconstructed according to the dictates of IMC, so much
the worse for those paradigms. Rather than bring our views concerning the markers
of consciousness into line with current experimental practice or pretheoretical intu-
ition, they might say, we should revise both practice and intuition in light of theory
(see, e.g., Papineau 2002).
Although it is certainly true that neither pretheoretical intuition nor experimen-
tal practice is sacrosanct, they do confer a certain de facto legitimacy on AMC.
The fact that IMC is at odds with them suggests that it is a somewhat revisionary
proposal, one that stands in need of independent motivation. How might IMC be
motivated?
Although a thorough evaluation of the case for IMC goes well beyond the scope
of this chapter, I do want to engage with one argument that has been given for it—
what we might call the argument from epistemic conservatism. This argument may
not be the strongest argument for IMC, but I suspect that it is one of the most
influential. The argument begins with the observation that there are two kinds of
error that one can make in studying consciousness: false negatives and false posi-
tives. False negatives occur when a marker misrepresents a conscious state or crea-
ture as unconscious, while false positives occur when a marker misrepresents an
unconscious state (or creature) as conscious. Now, let us say that an approach to the
ascription of consciousness is conservative if it places more importance on avoiding
false positives than on avoiding false negatives, and that it is liberal if it places more
importance on avoiding false negatives than false positives. The argument from
epistemic conservatism holds that because our approach toward the ascription of
consciousness ought to be maximally conservative, we should adopt IMC, for only
IMC is guaranteed to prevent false positives.
I don’t find the argument compelling. For one thing, it is by no means clear that a
conservative approach to the ascription of consciousness would lead inexorably to IMC.
The literature on inattentional blindness and change blindness—to take just two of
many examples that could be cited here—suggests that our introspective beliefs con-
cerning our current conscious states are often false: we ascribe to ourselves conscious
states that we are not in, and we overlook conscious states that we are in (Dennett
reports of the relevant patients are correct: the representations that patients have of
objects in their blindfield are unconscious. Second, these unconscious representa-
tions support various forms of intentional agency, such as pointing and guessing.
The third step of the argument puts these two claims together to argue that con-
sciousness of a stimulus (or its relevant properties) is not required for intentional
agency that is directed toward that stimulus (i.e., intentional agency that crucially
involves the relevant properties). But—and this is the fourth step—if conscious-
ness is not required for intentional agency, then intentional agency cannot ground
ascriptions of consciousness. What should we make of this argument?
The first step appears to be warranted. Although some theorists have cast doubt
on the introspective reports of blindsight patients, suggesting that the damage they
have sustained might have impaired their introspective capacities rather than their
visual experience as such (see, e.g., Gertler 2001), this position is undermined by
evidence of “blindsight-like” phenomena (Kolb & Braun 1995; Lau & Passingham
2006) and unconscious dorsal stream visual control (Milner & Goodale 2008)
in normal individuals.8 Although we could suppose that normal subjects also lack
introspective access to these visual representations, it seems more parsimonious to
assume that the introspective capacities of normal subjects are intact, and that their
dorsal representations are—as their introspective reports indicate—unconscious.9
More problematic is the fourth step of the argument. There is no incoherence
in holding that although the intentional actions of these patients are guided by
unconscious representations, these contexts are exceptions to a general rule that
links agency to consciousness. AMC, so the thought goes, might lead us astray in
dealing with patients with blindsight and visual agnosia without being unreliable in
general. In order for X to qualify as a marker of Y, it need not be the case that every
instance of X is accompanied by an instance of Y. Just as smoke is a marker of fire
even though not all instances of smoke are accompanied by fire, intentional agency
might be a marker of consciousness even if it is possible for intentional agency to be
guided by unconscious representations.
Although this line of response is perfectly acceptable as far as it goes, it is not clear
that we need be as concessive to the objection as this response is. The reason for this
is that it is not at all clear that blindsight-supported actions are fully intentional. In
other words, it is not clear that the second step of the argument is warranted. In fact,
there are three respects in which the actions seen in these conditions fall short of
full-blooded intentionality.
First, blindsight-supported agency is often curiously “response-specific.” An
early study by Weiskrantz and colleagues (1974) found that D.B. was able to local-
ize blindfield targets much better when he was required to point to them than when
he was required to make an eye movement toward them. In another study, Zihl and
von Cramon (1980) required three blindsight patients to report when they saw a
light that had been flashed into their blindfield. On some trials the patients were
instructed to produce eye-blink reports, on other trials they were required to pro-
duce key-press reports, and on still other trials they were asked to produce verbal
reports (saying “yes”). Although the patients’ blinking and key-pressing responses
were significantly above chance (after practice), their verbal responses were not.
Another series of studies investigating the capacity of two blindsight patients to
perceive size and orientation found that their performance was above chance on
goal-directed actions (grasping and posting), but below chance when they were
required to either perceptually match or verbally report the target’s size (Perenin &
Rossetti 1996; Rossetti 1998). The fact that their responses to the stimuli were
restricted to particular behavioral modalities suggests that they were not fully inten-
tional. As we noted in section 3, the more flexible a behavioral response is, the more
reason there is to regard it as fully intentional.
A second respect in which blindfield-supported actions are less than fully inten-
tional is that they must typically be prompted. Blindfield content is not generally
available for spontaneous agentive control. Patients must initially be told that their
guesses are reliable before they will employ the contents of their blindfield repre-
sentations in agentive control. The fact that blindfield content is not spontaneously
employed by the subject suggests that it is not accessible to her as such—that is, at
the personal level.
In response, it might be pointed out that some blindsight subjects do use the con-
tents of their blindfield in the service of spontaneous agency. Consider Nicholas
Humphrey’s blindsight monkey—Helen:
Helen, several years after the removal of visual cortex, developed a virtu-
ally normal capacity for ambient spatial vision, such that she could move
around under visual guidance just like any other monkey. This was certainly
unprompted, and in that respect “super” blindsight. (Humphrey 1995: 257;
see also Humphrey 1974).
retains enable her to engage in intentional agency across a wide range of everyday
contexts (Milner 2008: 181). Restricting conscious access to these properties—as
is done in experimental contexts—reveals how impoverished D.F.’s blindsight-based
behavioral capacities really are. As Dretske (2006) puts it, unconscious sensory
information may be able to “control and tweak” behaviors that have already been
selected, but conscious information appears to be required for behavioral planning
and the selection of goals.10
Let me briefly summarize the claims of this section. The objection from cognitive
neuroscience takes the form of a putative counterexample to the claim that con-
sciousness is required for intentional agency. In response, I have made two central
points. First, AMC requires only a robust correlation between consciousness and
intentional agency, and hence it could be justified even if there are conditions in
which certain types of intentional agency are possible in the absence of conscious-
ness. But—and this is the second point—it is far from clear that cognitive neuro-
science does provide us with counterexamples to the claim that consciousness is
required for intentional agency, for purely blindsight-based actions may not qualify
as fully intentional.
Conscious acts of will are not necessary determinants of social judgment and
behavior; neither are conscious processes necessary for the selection of com-
plex goals to pursue, or for the guidance of those goals to completion. Goals
and motivations can be triggered by the environment, without conscious
choice or intention, then operate with and run to completion entirely non-
consciously, guiding complex behaviour in interaction with a changing and
unpredictable environment, and producing outcomes identical to those that
occur when the person is aware of having that goal. (52)
in an agent’s awareness of its intentions and goals. All it requires is that conscious
states of some kind or another are implicated in the exercise of intentional agency.
Even when agents act on the basis of goals of which they are unaware, they will
generally be aware of the perceptual features of their environment that govern the
selection and implementation of those goals. It is one thing to act on the basis of
an unconscious intention, but quite another to act on the basis of an unconscious
representation of one’s perceptual environment or body. In many cases, the most
likely form of consciousness to be implicated in intentional agency will be percep-
tual rather than agentive.
This point can be further illustrated by considering pathologies of agency, such as
the anarchic hand syndrome (Della Sala & Marchetti 2005; Marchetti & Della Sala
1998). Patients with this condition have a hand that engages in apparently inten-
tional behavior of its own accord. The patient will complain that she has no control
over the hand’s behavior and will describe it as having a “mind of its own.” Although
patients appear not to have any sense of agency with respect to “their” actions, these
actions are presumably triggered and guided by the patient’s conscious perceptual
experiences. So, although anarchic hand actions might provide an unreliable guide
to the presence of conscious intention, there is no reason to think that they provide
an unreliable guide to the presence of conscious perception.
There is a final point to note, a point that is relevant to the assessment of both
the case against AMC based on social psychology and that which is based on cogni-
tive neuroscience. Although there are question marks about the kinds of conscious
states that the subjects studied in these experiments might be in, there is no doubt
whatsoever that the subjects themselves are conscious. As such, it is unclear what
bearing such studies might have on the question of whether completely unconscious
creatures are capable of intentional agency. It is not implausible to suppose that the
kinds of behavioral capacities that unconscious mental states are able to drive in
conscious creatures differ in fundamental respects from those that they are able to
drive in unconscious creatures. Perhaps mentality is something like an iceberg, not
only in the sense that only a small portion of it is conscious but also in the sense that
the existence of its submerged (or unconscious) parts demands the existence of its
unsubmerged (or conscious) parts.
Are there any pathologies of consciousness in which intentional agency occurs
in the complete absence of creature consciousness? A number of authors have
argued that sleepwalking, automatisms, and epileptic absence seizures—to name
just three of the many conditions that might be mentioned here—provide exam-
ples of states in which completely unconscious individuals engage in intentional
actions, albeit ones that are highly automatized (see, e.g., Koch 2004; Lau &
Passingham 2007). The problem with such claims is that it is far from clear that
such individuals are completely unconscious. They might not be conscious of
what they are doing or of why they are doing what they are doing, but it is—it
seems to me—very much an open question whether they might nonetheless be
conscious of objects in their immediate perceptual environment (Bayne 2011).
Such individuals certainly act in ways that are under environmental control, and
as Lloyd Morgan once remarked, control is the “primary aim, object and purpose
of consciousness” (1894: 182).
7. CONCLUSION
Current orthodoxy within the science of consciousness holds that the only legiti-
mate basis for ascribing consciousness is introspective report, and the practice of
employing agency as a marker of consciousness is looked upon with some suspicion
by many theorists. In this chapter I have argued that such suspicion is unjustified.
In the first half of the chapter I clarified a number of ways in which agency might
be adopted as a marker of consciousness, and in the second half I examined and
responded to the claim that findings in cognitive neuroscience and social psychol-
ogy undermine the appeal of AMC. I argued that although these findings provide
the advocate of AMC with plenty of food for thought, neither domain demonstrates
that intentional agency is an unreliable guide to the presence of consciousness.
What I have not done in this chapter is provide a direct argument for AMC. My
primary concern has been to defend AMC against a variety of objections, and I have
left the motivation for AMC at a relatively intuitive and pretheoretical level. The
task of developing a positive argument for AMC is a significant one and not some-
thing that I can take on here. All I have attempted to do in this chapter is remove
some of the undergrowth that has come to obscure the claim that agency might
function as a marker of consciousness: a full-scale defense of that claim must wait
for another occasion.12
NOTES
1. My analysis builds on a number of recent discussions of the relationship between
agency and consciousness. I am particularly indebted to Clark (2001, 2009), Dretske
(2006), Flanagan (1992), and van Gulick (1994).
2. See chapter 6 of Bayne (2010) for discussion of the ascription of consciousness in
these conditions.
3. This section draws on chapter 5 of Bayne (2010).
4. Whether or not this interpretation of the split-brain data is at odds with the weak ver-
sion of IMC or merely the strong version of IMC depends on the delicate question
of whether we think of the split-brain patient as two cognitive subjects (only one of
whom possesses introspective capacities) or as a single subject (with introspective
capacities).
5. It is somewhat ironic that many of those who are most strident in their defense of
IMC are also among the most critical of the claim that introspection is reliable.
Although not strictly inconsistent with each other, these two views are not natural
bedfellows.
6. See Pöppel et al. (1973), Weiskrantz et al. (1974), Perenin & Jeannerod (1975), and
Weiskrantz (2009).
7. Note that there are different forms of blindsight, and not all blindsight patients dem-
onstrate the same range of blindsight-related potential for action. My discussion here
concerns what Danckert and Rossetti (2005) call “action-blindsight.”
8. For a sample of the vast array of studies in this vein, see Aglioti et al. (1995); Brenner
& Smeets (1997); Bridgeman et al. (1981); Castiello et al. (1991); Fourneret &
Jeannerod (1998); McIntosh et al. (2004); Milner & Goodale (2008); Slachevsky
et al. (2001); and Schenk & McIntosh (2010).
9. Although certain blindsight patients claim to have some form of conscious awareness
of stimuli in their blindfield, they tend to describe such experiences as qualitatively
distinct from visual experiences of stimuli (Magnussen & Mathiesen 1989; Morland
et al. 1999).
10. In part this may be because the dorsal stream lacks access to information about the
categories to which visually perceived objects belong. For example, D.F. will fail to
pick up a screwdriver from the appropriate end because she will not recognize it as a
screwdriver (Dijkerman et al. 2009). However, even when the dorsal stream is able
to represent the appropriate properties of objects, it seems unable to draw on that
content in order to initiate action.
11. For similar studies see Bargh et al. (1996); Bargh & Ferguson (2000); Bargh &
Gollwitzer (1994); and Carver et al. (1983).
12. For helpful comments on earlier versions of this chapter, I am indebted to Bart
Kamphorst, Julian Kiverstein, Hakwan Lau, Eddy Nahmias, and Tillmann Vierkant.
REFERENCES
Aglioti, S., De Souza, J. F. X., & Goodale, M. A. 1995. Size-contrast illusions deceive the
eye but not the hand. Current Biology, 5: 679–685.
Bargh, J. A. 2005. Bypassing the will: Toward demystifying the nonconscious control of
social behavior. In R. Hassin, J. Uleman, & J. Bargh (eds.), The New Unconscious, 37–58.
New York: Oxford University Press.
Bargh, J. A., & Chartrand, T. 1999. The unbearable automaticity of being. American
Psychologist, 54: 462–479.
Bargh, J. A., Chen, A., & Burrows, L. 1996. Automaticity of social behavior: Direct effects
of trait construct and stereotype activation on action. Journal of Personality and Social
Psychology, 71: 230–244.
Bargh, J., & Ferguson, M. 2000. Beyond behaviorism: On the automaticity of higher
mental processes. Psychological Bulletin, 126: 925–945.
Bargh, J., & Gollwitzer, P. M. 1994. Environmental control of goal-directed action:
Automatic and strategic contingencies between situations and behavior. Nebraska
Symposium on Motivation, 41: 71–124.
Bargh, J., Gollwitzer, P. M., Lee-Chai, A. Y., Barndollar, K., & Trötschel, R. 2001. The
automated will: Nonconscious activation and pursuit of behavioural goals. Journal of
Personality and Social Psychology, 81: 1014–1027.
Bargh, J., & Morsella, E. 2009. Unconscious behavioural guidance systems. In C. Agnew, D.
Carlston, W. Graziano, & J. Kelly (eds.), Then a Miracle Occurs: Focusing on Behaviour in
Social Psychological Theory and Research, 89–118. New York: Oxford University Press.
Bayne, T. 2010. The Unity of Consciousness. Oxford: Oxford University Press.
Bayne, T. 2011. The presence of consciousness in “absence” seizures. Behavioural Neurology,
24 (1): 47–53.
Bernat, J. L. 2006. Chronic disorders of consciousness. Lancet, 367: 1181–1192.
Block, N. 2002. The harder problem of consciousness. Journal of Philosophy, 99:
391–425.
Boly, M., Coleman, M. R., Davis, M. H., Hampshire, A., Bor, D., Moonen, G., Maquet, P.
A., Pickard, J. D., Laureys, S., & Owen, A. M. 2007. When thoughts become action: An
fMRI paradigm to study volitional brain activity in noncommunicative brain injured
patients. NeuroImage, 36: 979–992.
Brenner, E., & Smeets, J. B. J. 1997. Fast responses of the human hand to changes in target
position. Journal of Motion Behaviour, 29: 297–310.
Bridgeman, B., Kirsch, M., & Sperling, G. 1981. Segregation of cognitive and motor aspects
of visual function using induced motion. Perception and Psychophysics, 29: 336–342.
Carey, D. P., Harvey, M., & Milner, A. D. 1996. Visuomotor sensitivity for shape and orien-
tation in a patient with visual form agnosia. Neuropsychologia, 34: 329–338.
Carver, C. S., Ganellen, R. J., Froming, W. J., & Chambers, W. 1983. Modelling: An anal-
ysis in terms of category accessibility. Journal of Experimental Social Psychology, 19:
403–421.
Castiello, U., Paulignan, Y., & Jeannerod, M. 1991. Temporal dissociation of motor
responses and subjective awareness. Brain, 114: 2639–2655.
Clark, A. 2001. Visual experience and motor action: Are the bonds too tight? Philosophical
Review, 110: 495–519.
Clark, A. 2009. Perception, action and experience: Unraveling the golden braid.
Neuropsychologia, 47: 1460–1468.
Danckert, J., & Rossetti, Y. 2005. Blindsight in action: What does blindsight tell us about
the control of visually guided actions? Neuroscience and Biobehavioural Reviews, 29:
1035–1046.
Della Sala, S., & Marchetti, C. 2005. The anarchic hand syndrome. In H.-J. Freund, M.
Jeannerod, M. Hallett, & R. Leiguarda (eds.), Higher-Order Motor Disorders: From
Neuroanatomy and Neurobiology to Clinical Neurology, 293–301. New York: Oxford
University Press.
Dennett, D. 1991. Consciousness Explained. Boston: Little, Brown.
de’Sperati, C., & Baud-Bovy, G. 2008. Blind saccades: An asynchrony between seeing and
looking. Journal of Neuroscience, 28: 4317–4321.
Dijkerman, H. C., McIntosh, R. D., Schindler, I., Nijboer, T. C. W., & Milner, A. D. 2009.
Choosing between alternative wrist postures: Action planning needs perception.
Neuropsychologia, 47: 1476–1482.
Dijksterhuis, A., & Bargh, J. A . 2001. The perception-behaviour expressway: Automatic
effects of social perception on social behaviour. In M. P. Zanna (ed.), Advances in
Experimental Social Psychology, 33: 1–40. San Diego: Academic Press.
Dretske, F. 2006. Perception without awareness. In T. S. Gendler & J. Hawthorne (eds.),
Perceptual Experience, 147–180. Oxford: Oxford University Press.
Flanagan, O. 1992. Consciousness Reconsidered. Cambridge, MA: MIT Press.
Fourneret, P., & Jeannerod, M. 1998. Limited conscious monitoring of motor perfor-
mance in normal subjects. Neuropsychologia, 36: 1133–1140.
Frith, C., Perry, R., & Lumer, E. 1999. The neural correlates of conscious experience: An
experimental framework. Trends in Cognitive Sciences, 3: 105–114.
Gazzaniga, M. S. 2005. Forty-five years of split-brain research and still going strong. Nature
Reviews Neuroscience, 6: 653–659.
Gertler, B. 2001. Introspecting phenomenal states, Philosophy and Phenomenological
Research, 63: 305–328.
Giacino, J. T., Ashwal, S., Childs, N., Cranford, R., Jennett, B., Katz, D. I., Kelly, J. P.,
Rosenberg, J. H., Whyte, J., Zafonte, R. D., & Zasler, N. D. 2002. The minimally con-
scious state: Definition and diagnostic criteria. Neurology, 58: 349–353.
Goodale, M., & Milner, A. D. 2004. Sight Unseen: An Exploration of Conscious and
Unconscious Vision. Oxford: Oxford University Press.
Haggard, P. 2008. Human volition: Towards a neuroscience of will. Nature Reviews
Neuroscience, 9: 934–946.
Haybron, D. 2007. Do we know how happy we are? On some limits of affective introspec-
tion and recall. Noûs, 41: 394–428.
Humphrey, N. K. 1974. Vision in a monkey without striate cortex: A case study. Perception,
3: 241–255.
Humphrey, N. K. 1995. Blocking out the distinction between sensation and perception:
Superblindsight and the case of Helen. Behavioral and Brain Sciences, 18: 257–258.
Jahanshahi, M., & Frith, C. 1998. Willed action and its impairments. Cognitive
Neuropsychology, 15: 483–533.
Jennett, B. 2002. The Vegetative State. Cambridge: Cambridge University Press.
Kirchner, H., & Thorpe, S. J. 2006. Ultra-rapid object detection with saccadic eye move-
ments: Visual processing speed revisited. Vision Research, 46: 1762–1776.
Koch, C. 2004. The Quest for Consciousness. Englewood, CO: Roberts.
Koch, C., & Crick, F. 2001. On the zombie within. Nature, 411: 893.
Kolb, F. C., & Braun, J. 1995. Blindsight in normal observers. Nature, 377: 336–338.
Lau, H. C., & Passingham, R. E. 2006. Relative blindsight in normal observers and the
neural correlate of visual consciousness. Proceedings of the National Academy of Sciences,
103: 18763–18768.
Lau, H. C., & Passingham, R. E. 2007. Unconscious activation of the cognitive control
system in the human prefrontal cortex. Journal of Neuroscience, 27: 5805–5811.
LeDoux, J. E., Wilson, D. H., & Gazzaniga, M. S. 1977. A divided mind: Observations
on the conscious properties of the separated hemispheres. Annals of Neurology, 2:
417–421.
Lengfelder, A., & Gollwitzer, P. M. 2001. Reflective and reflexive action control in patients
with frontal brain lesions. Neuropsychology, 15: 80–100.
Logothetis, N. K., Leopold, D. A., & Sheinberg, D. L. 2003. Neural mechanisms of per-
ceptual organization. In N. Osaka (ed.), Neural Basis of Consciousness: Advances in
Consciousness Research, 49: 87–103. Amsterdam: John Benjamins.
Logothetis, N., & Schall, J. 1989. Neuronal correlates of subjective visual perception.
Science, 245: 761–763.
MacKay, D. M. 1966. Cerebral organization and the conscious control of action. In J. C.
Eccles (ed.), Brain and Conscious Experience, 422–445. Heidelberg: Springer-Verlag.
Magnussen, S., & Mathiesen, T. 1989. Detection of moving and stationary gratings in the
absence of striate cortex. Neuropsychologia, 27: 725–728.
Marchetti, C., & Della Sala, S. 1998. Disentangling the alien and anarchic hand. Cognitive
Neuropsychiatry, 3: 191–207.
Marks, C. 1981. Commissurotomy, Consciousness and Unity of Mind. Cambridge, MA: MIT
Press.
McIntosh, R. D., McClements, K. I., Schindler, I., Cassidy, T. P., Birchall, D., & Milner, A.
D. 2004. Avoidance of obstacles in the absence of visual awareness. Proceedings of the
Royal Society of London Series B Biological Sciences, 271: 15–20.
Mele, A. 2009. Effective Intentions: The Power of Conscious Will. Oxford: Oxford University
Press.
Milner, A. D. 1992. Disorders of perceptual awareness: A commentary. In A. D. Milner and
M. D. Rugg (eds.), The Neuropsychology of Consciousness, 139–158. London: Academic
Press.
Milner, A. D. 2008. Conscious and unconscious visual processing in the human brain.
In L. Weiskrantz and M. Davies (eds.), Frontiers of Consciousness, 169–214. Oxford:
Oxford University Press.
Agency as a Marker of Consciousness 179
Milner, A. D., & Goodale, M. A. 2006. The Visual Brain in Action. 2nd ed. Oxford: Oxford
University Press.
Milner, A. D., & Goodale, M. A. 2008. Two visual systems reviewed. Neuropsychologia,
46: 774–785.
Morgan, C. L. 1894. An Introduction to Comparative Psychology. London: W. Scott.
Morland, A. B., Jones, S. R., Finlay, A. L., Deyzac, E., Le, S., & Kemp, S. 1999. Visual percep-
tion of motion, luminance and colour in a human hemianope. Brain, 122: 1183–1196.
Mushiake, H., Inase, M., & Tanji, J. 1991. Neuronal activity in the primate premotor,
supplementary, and precentral motor cortex during visually guided and internally
determined sequential movements. Journal of Neurophysiology, 66: 705–718.
Naccache, L. 2006. Is she conscious? Science, 313: 1395–1396.
Nagel, T. 1974. What is it like to be a bat? Philosophical Review, 83: 435–450.
Obhi, S., & Haggard, P. 2004. Internally generated and externally triggered actions are
physically distinct and independently controlled. Experimental Brain Research, 156:
518–523.
Owen, A. M., Coleman, M. R., Boly, M., Davis, M. H., Laureys, S., & Pickard, J. D. 2006.
Detecting awareness in the vegetative state. Science, 313: 1402.
Papineau, D. 2002. Thinking about Consciousness. Oxford: Oxford University Press.
Perenin, M.-T., & Jeannerod, M. 1975. Residual vision in cortically blind hemifields.
Neuropsychologia, 13: 1–7.
Perenin, M.-T., & Rossetti, Y. 1996. Grasping without form discrimination in a hemiano-
pic field. Neuroreport, 7: 793–797.
Pöppel, E., Held, R., & Frost, D. 1973. Residual visual function after brain wounds involving the central visual pathways in man. Nature, 243: 295–296.
Prochazka, A., Clarac, F., Loeb, G. E., Rothwell, J. C., & Wolpaw, J. R. 2000. What do reflex
and voluntary mean? Modern views on an ancient debate. Experimental Brain Research,
130: 417–432.
Rossetti, Y. 1998. Implicit short-lived motor representations of space in brain-damaged
and healthy subjects. Consciousness and Cognition, 7: 520–558.
Royal College of Physicians. 2003. The Vegetative State: Guidance on Diagnosis and
Management. London: Royal College of Physicians.
Schenk, T., & McIntosh, R. D. 2010. Do we have independent visual streams for percep-
tion and action? Cognitive Neuroscience, 1: 52–78.
Schwitzgebel, E. 2008. The unreliability of naïve introspection. Philosophical Review, 117:
245–273.
Shea, N., & Bayne, T. 2010. The vegetative state and the science of consciousness. British
Journal for the Philosophy of Science, 61: 459–484.
Sheinberg, D. L., & Logothetis, N. K. 1997. The role of temporal cortical areas in perceptual organization. Proceedings of the National Academy of Sciences, 94: 3408–3413.
Slachevsky, A., Pillon, B., Fourneret, P., Pradat-Diehl, P., Jeannerod, M., & Dubois, B.
2001. Preserved adjustment but impaired awareness in a sensory-motor conflict fol-
lowing prefrontal lesions. Journal of Cognitive Neuroscience, 13: 332–340.
Trevethan, C. T., Sahraie, A., & Weiskrantz, L. 2007. Can blindsight be superior to
“sighted-sight”? Cognition, 103: 491–501.
Van Gulick, R. 1994. Deficit studies and the function of phenomenal consciousness. In G. Graham and G. L. Stephens (eds.), Philosophical Psychopathology, 25–49. Cambridge, MA: MIT Press.
Weiskrantz, L. 1997. Consciousness Lost and Found. Oxford: Oxford University Press.
Weiskrantz, L. 2009. Blindsight. 2nd ed. Oxford: Oxford University Press.
180 THE SENSE OF AGENCY
Weiskrantz, L., Warrington, E. K., Sanders, M. D., & Marshall, J. 1974. Visual capacity in the hemianopic field following a restricted occipital ablation. Brain, 97: 709–728.
Zaidel, E., Iacoboni, M., Zaidel, D. W., & Bogen, J. E. 2003. The callosal syndromes. In
K. H. Heilman and E. Valenstein (eds.), Clinical Neuropsychology, 347–403. Oxford:
Oxford University Press.
Zihl, J., & von Cramon, D. 1980. Registration of light stimuli in the cortically blind hemi-
field and its effect on localization. Behavioural Brain Research, 1: 287–298.
PART THREE
EZEQUIEL MORSELLA, TARA C. DENNEHY, AND JOHN A. BARGH
Morsella, Krieger, & Bargh, 2009). To isolate the primary function of conscious-
ness and identify its role in voluntary action, one must first appreciate all that can be
accomplished unconsciously in the nervous system.
semantic knowledge regarding the object (“what it is”; see review in Westwood,
2009). Mounting evidence suggests that it is the dorsal (actional) system that oper-
ates outside of conscious awareness, while the operation of the ventral system is
normally associated with awareness (Decety & Grèzes, 1999; Jeannerod, 2003).
Findings regarding perception-action dissociations corroborate what motor
theorists have long known—that one is unconscious of the motor programs guid-
ing action (Rosenbaum, 2002). In addition to action slips and spoonerisms, highly
flexible and “online” adjustments are made unconsciously during an act such as
grasping a fruit. For several reasons (see treatments of this topic in Gray, 2004;
Grossberg, 1999; Rosenbaum, 2002), one is unconscious of these complicated
programs that calculate which muscles should be activated at a given time but is
often aware of the proprioceptive and perceptual consequences of these programs
(e.g., perceiving the hand grasping; Gray, 2004; Gottlieb & Mazzoni, 2004; Johnson &
Haggard, 2005). In short, there is a plethora of findings showing that one is
unconscious of the adjustments that are made “online” as one reaches for an object
(Fecteau et al., 2001; Heath et al., 2008; Rossetti, 2001). Many experimental tricks
are based on the fact that one has little if any conscious access to motor programs. In
an experiment by Fourneret and Jeannerod (1998), participants were easily fooled into thinking that their hand had moved in one direction when it had actually moved in a different direction (through false feedback on the computer display).
In conclusion, there is substantial evidence that complex actions can transpire
without conscious mediation. At first glance, these actions are not identifiably less
flexible, complex, controlling, deliberative, or action-like than their conscious coun-
terparts (Bargh & Morsella, 2008).
Regarding unconscious processing, “supraliminal” (consciously perceptible)
stimuli in our immediate environment can exert forms of unconscious “stimulus
control,” leading to unconscious action tendencies. Consistent with this standpoint,
findings suggest that incidental stimuli (e.g., hammers) can automatically prepare us
to physically interact with the world (Tucker & Ellis, 2004; see neuroimaging evi-
dence in Grèzes & Decety, 2002; Longcamp et al., 2005). For instance, perceiving a
cylinder unconsciously increases one’s tendency to perform a power grip (Tucker &
Ellis, 2004). In addition, it has been shown that, in choice response time tasks, the
mere presence of musical notation influences the responses of musicians but not of
nonmusicians (Levine, Morsella, & Bargh, 2007; Stewart et al., 2003). Consistent
with these findings, unconscious action tendencies are readily evident in classic
laboratory paradigms such as the Stroop task2 (Stroop, 1935) and the flanker task
(Eriksen & Schultz, 1979).
In studies involving supraliminal priming of complex social behavior, it has been
demonstrated that many of our complex behaviors occur automatically, determined
by causes far removed from our awareness. Behavioral dispositions can be influenced
by covert stimuli—when presented with supraliminal words associated with the ste-
reotype “old,” people walk slower (Bargh, Chen, & Burrows, 1996); when presented
with stimuli associated with the concept “library,” people make less noise (Aarts &
Dijksterhuis, 2003); and when primed with “hostility,” people become more aggressive
(Carver et al., 1983). These effects have been found not only with verbal stimuli that
are semantically related to the goal (as in many studies) but also with material objects.
186 THE FUNCTION OF CONSCIOUS CONTROL
[Figure 10.1 schematic: three response systems, each composed of Fodorian modules. Unconscious afference binding occurs within each system. Efference binding that does not require phenomenal consciousness yields un-integrated skeletomotor action (e.g., reflexive inhaling or pain withdrawal, responding to a subliminal stimulus); efference-efference binding, which requires the phenomenal field, yields integrated skeletomotor action (e.g., suppressing inhaling or another pre-potent response).]
Figure 10.1 Fodorian modules operate within a few multimodal, supramodular response
systems, each defined by its concern. Afference binding within systems can be unconscious.
Although the response systems can influence action directly (illustrated by the arrows
on the right), only in virtue of conscious states can they interact and influence action
collectively, as when one holds one’s breath (the path illustrated on the left). The sense of
agency is most intimately associated with this efference-efference binding.
incapable of taking information generated by other systems into account. For exam-
ple, the tissue-damage system is “encapsulated” in the sense that it will protest (e.g.,
create subjective avoidance tendencies) the onset of potential tissue damage even
when the action engendering the damage is lifesaving. Regardless of the adaptive-
ness of one’s plan (e.g., running across hot desert sand to reach water), the strife that
is coupled with conflict cannot be turned off voluntarily (Morsella, 2005). Under
conditions of conflict, inclinations can be behaviorally suppressed but not mentally
suppressed (Bargh & Morsella, 2008). Although actional systems that are phyloge-
netically ancient may no longer influence behavior directly, they now influence the
nature of consciousness: inclinations continue to be experienced consciously, even
when they are not expressed behaviorally.
wherein one system guides behavior and is uninfluenced by the concerns of another
system. In this way, perhaps it is better to compare the phenomenal field not to a sur-
veillance system but to a senate, in which representatives from different provinces are
always in attendance, regardless of whether they sit quietly or debate. In other
words, phenomenal states allow for the channels of communication across systems
to always be open (see discussion of chronic engagement in Morsella, 2005).
In phylogeny, the introduction of new structures (e.g., organs and tissues)
involves complex, often competitive interactions with extant ones. This is known as
the “struggle of parts” problem (cf. Mayr, 2001), and it may have been a formidable
challenge during the evolution of something as complex as the human nervous sys-
tem. Although such integration could conceivably occur without something like
phenomenal states (as in an automaton or in an elegant “blackboard” neural net-
work with all its modules nicely interconnected), such a solution was not selected
in our evolutionary history. Instead, and for reasons that only the happenstance
and tinkering process of evolution could explain (Gould, 1977; Simpson, 1949),
it is proposed that these physical processes were selected to solve this large-scale,
cross-talk problem. We will now discuss how the senses (or illusion) of volition and
agency arise from these conscious states.
The sense of agency and authorship processing (i.e., attributing actions to oneself;
Wegner, 2003) are based on several high-level processes, including the perception
of a lawful correspondence between action intentions and action outcomes (Wegner,
2003). Research has revealed that experimentally manipulating the nature of this
correspondence leads to systematic distortions in the sense of agency/authorship,
such that subjects can be fooled into believing that they caused actions that were
in fact caused by someone else (Wegner, 2002). Linser and Goschke (2007) dem-
onstrate that feelings of control are based on unconscious comparisons of actual
action-effect sequences to the anticipated sequence: “matches” result in feelings of
control, and mismatches result in the effect being attributed to an external source.
Hence, when intentions and outcomes mismatch, as in action slips and spooner-
isms, people are less likely to perceive actions as originating from the self (Wegner,
2002). Similar self-versus-other attributions are found in intrapsychic conflicts
(Livnat & Pippenger, 2006), as captured by the “monkey on one’s back” metaphor
that is often used to describe the tendencies associated with aspects of addiction.
Accordingly, in the classic Stroop task, participants perceive the activation of the
undesired word-reading plans as less associated with the self when the plans conflict
with intended action (e.g., in the incongruent condition) than when the same plans
lead to no such interference (e.g., in the congruent condition; Riddle & Morsella,
2009). In two interference paradigms, response interference was associated with
weakened perceptions of control and stronger perceptions of competition (Riddle &
Morsella, 2009). It is important to appreciate that, despite these introspective
judgments, and as revealed in recent action production research, there need be no
homunculus in charge of suppressing one action in order to express another action,
as concluded by Curtis and D’Esposito (2009): “No single area of the brain is spe-
cialized for inhibiting all unwanted actions” (72). For example, in the morning,
action plan A may conflict with action plan B; and, in the evening, plan C may con-
flict with D, with there never being the same third party (a homunculus) observing
Voluntary Action and the Three Forms of Binding 191
CONCLUSION
By following Sperry’s (1952) recommendation and identifying the primary func-
tion of consciousness by taking the untraditional approach of working backward
from overt voluntary action to the central processes involved (instead of working
forward from perceptual processing toward central processes), one can appreciate
that what consciousness is for is more “nuts-and-boltsy” than what has been pro-
posed historically: at this stage of understanding, it seems that the primary function
of consciousness is to instantiate a unique form of binding in the nervous system.
This kind of integration (efference-efference binding) is intimately related to the
skeletal muscle system, the sense of agency, and volition.
ACKNOWLEDGMENT
This chapter is based in part on ideas first reported in Morsella (2005) and Morsella
and Bargh (2011).
NOTES
1. Often referred to as “subjective experience,” “qualia,” “sentience,” “phenomenal
states,” and “awareness,” basic consciousness has proven to be difficult to describe
and analyze but easy to identify, for it constitutes the totality of our experience.
Perhaps this basic form of consciousness has been best defined by Nagel (1974), who
claimed that an organism has basic consciousness if there is something it is like to be
that organism—something it is like, for example, to be human and experience pain,
love, breathlessness, or yellow afterimages. Similarly, Block (1995) claimed, “The
phenomenally conscious aspect of a state is what it is like to be in that state” (227).
2. In this task, participants name the colors in which stimulus words are written as
quickly and as accurately as possible. When the word and color are incongruous
(e.g., RED presented in blue), response interference leads to increased error rates,
response times, and reported urges to make a mistake (Stroop, 1935; Morsella et
al., 2009). When the color matches the word (e.g., RED presented in red), or is
presented on a neutral stimulus (e.g., a series of X’s as in “XXXX”), there is little or
no interference.
REFERENCES
Aarts, Henk, and Ap Dijksterhuis. “The silence of the library: Environment, situational
norm, and social behavior.” Journal of Personality and Social Psychology 84, no. 1 (2003):
18–28.
Ansorge, Ulrich, Odmar Neumann, Stefanie I. Becker, Holger Kälberer, and Holk Cruse.
“Sensorimotor supremacy: Investigating conscious and unconscious vision by masked
priming.” Advances in Cognitive Psychology 3, nos. 1–2 (2007): 257–274.
Araya, Tadesse, Nazar Akrami, Bo Ekehammar, and Lars-Erik Hedlund. “Reducing prej-
udice through priming of control-related words.” Experimental Psychology 49, no. 3
(2002): 222–227.
Baars, Bernard J. “The conscious access hypothesis: Origins and recent evidence.” Trends
in Cognitive Sciences 6, no. 1 (2002): 47–52.
Banks, William P. “Evidence for consciousness.” Consciousness and Cognition 4, no. 2
(1995): 270–272.
Bargh, John A., Mark Chen, and Lara Burrows. “Automaticity of social behavior: Direct
effects of trait construct and stereotype activation on action.” Journal of Personality and
Social Psychology 71, no. 2 (1996): 230–244.
Bargh, John A., and Ezequiel Morsella. “The unconscious mind.” Perspectives on
Psychological Science 3, no. 1 (2008): 73–79.
Bartolomei, Fabrice, Fabrice Wendling, Jean-Pierre Vignal, Patrick Chauvel, and Catherine
Liegois-Chauvel. “Neural networks underlying epileptic humming.” Epilepsia 43, no. 9
(2002): 1001–1012.
Bindra, Dalbir. “A motivational view of learning, performance, and behavior modifica-
tion.” Psychological Review 81, no. 3 (1974): 199–213.
Blanken, Gerhard, Claus-W Wallesch, and C. Papagno. “Dissociations of language func-
tions in aphasics with speech automatisms (recurring utterances).” Cortex 26, no. 1
(1990): 41–63.
Block, Ned. “On a confusion about a function of consciousness.” Behavioral and Brain
Sciences 18, no. 2 (1995): 227–287.
Brass, Marcel, and Patrick Haggard. “To do or not to do: The neural signature of
self-control.” Journal of Neuroscience 27, no. 34 (2007): 9141–9145.
Brion, S., and C.-P. Jedynak. “Troubles du transfert interhémisphérique: À propos de trois observations de tumeurs du corps calleux. Le signe de la main étrangère.” Revue Neurologique 126 (1972): 257–266.
Carlson, Neil R. Physiology of behavior. Needham Heights, MA: Allyn and Bacon, 1994.
Carmant, Lionel, James J. Riviello, Elizabeth. A. Thiele, Uri Kramer, Sandra L. Helmers,
Mohamed Mikati, Joseph R. Madsen, Peter McL. Black, and Gregory L. Holmes.
“Compulsory spitting: An unusual manifestation of temporal lobe epilepsy.” Journal of
Epilepsy 7, no. 3 (1994): 167–170.
Carver, Charles S., Ronald J. Ganellen, William J. Froming, and William Chambers.
“Modeling: An analysis in terms of category accessibility.” Journal of Experimental Social
Psychology 19, no. 5 (1983): 403–421.
Chen, Serena, Annette Y. Lee-Chai, and John A. Bargh. “Relationship orientation as a
moderator of the effects of social power.” Journal of Personality and Social Psychology 80,
no. 2 (2001): 173–187.
Curtis, Clayton E., and Mark D’Esposito. “The inhibition of unwanted actions.” In The
Oxford handbook of human action, edited by Ezequiel Morsella, John A. Bargh, and
Peter M. Gollwitzer, 72–97. New York: Oxford University Press, 2009.
Custers, Ruud, Marjolein Maas, Miranda Wildenbeest, and Henk Aarts. “Nonconscious
goal pursuit and the surmounting of physical and social obstacles.” European Journal of
Social Psychology 38, no. 6 (2008): 1013–1022.
Damasio, Antonio R. The feeling of what happens: Body and emotion in the making of consciousness. New York: Harcourt Brace, 1999.
Decety, Jean, and Julie Grèzes. “Neural mechanisms subserving the perception of human
actions.” Trends in Cognitive Sciences 3, no. 5 (1999): 172–178.
Doherty, M. J., A. J. Wilensky, M. D. Holmes, D. H. Lewis, J. Rae, and G. H. Cohn. “Singing
seizures.” Neurology 59 (2002): 1435–1438.
Duckworth, Kimberly L., John A. Bargh, Magda Garcia, and Shelly Chaiken. “The auto-
matic evaluation of novel stimuli.” Psychological Science 13, no. 6 (2002): 513–519.
Eriksen, C. W., and D. W. Schultz. “Information processing in visual search: A continuous flow
conception and experimental results.” Perception and Psychophysics 25 (1979): 249–263.
Fecteau, Jillian H., Romeo Chua, Ian Franks, and James T. Enns. “Visual awareness and
the online modification of action.” Canadian Journal of Experimental Psychology 55, no.
2 (2001): 104–110.
Field, Andy P. “I like it but I’m not sure why: Can evaluative conditioning occur without
conscious awareness?” Consciousness and Cognition 9, no. 1 (2000): 13–36.
Fishbach, Ayelet, Ronald S. Friedman, and Arie W. Kruglanski. “Leading us not unto temp-
tation: Momentary allurements elicit overriding goal activation.” Journal of Personality
and Social Psychology 84, no. 2 (2003): 296–309.
Fitzsimons, Gráinne M., and John A. Bargh. “Thinking of you: Nonconscious pursuit of
interpersonal goals associated with relationship partners.” Journal of Personality and
Social Psychology 84, no. 1 (2003): 148–163.
Fourneret, Pierre, and Marc Jeannerod. “Limited conscious monitoring of motor perfor-
mance in normal subjects.” Neuropsychologia 36, no. 11 (1998): 1133–1140.
García-Orza, Javier, Jesus Damas-López, Antonio Matas, and José Miguel Rodríguez.
“‘2 × 3’ primes naming ‘6’: Evidence from masked priming.” Attention, Perception, and
Psychophysics 71, no. 3 (2009): 471–480.
Glaser, Jack. “Contrast effects in automatic affect, cognition, and behavior.” In Assimilation and contrast in social psychology, edited by Diederik A. Stapel and Jerry Suls, 229–248. New York: Psychology Press, 2007.
Goodale, Melvyn A., and David Milner. Sight unseen: An exploration of conscious and uncon-
scious vision. New York: Oxford University Press, 2004.
Gottlieb, Jacqueline, and Pietro Mazzoni. “Neuroscience: Action, illusion, and percep-
tion.” Science 303, no. 5656 (2004): 317–318.
Gould, Stephen Jay. Ever since Darwin: Reflections in natural history. New York: Norton,
1977.
Gray, Jeffrey A. Consciousness: Creeping up on the hard problem. New York: Oxford
University Press, 2004.
Greenwald, Anthony G. “Sensory feedback mechanisms in performance control: With
special reference to the ideomotor mechanism.” Psychological Review 77, no. 2 (1970):
73–99.
Greenwald, Anthony G., and Anthony R. Pratkanis. “The self.” In Handbook of social cogni-
tion, edited by Robert S. Wyer and Thomas K. Srull, 129–178. Hillsdale, NJ: Erlbaum,
1984.
Grèzes, Julie, and Jean Decety. “Does visual perception of object afford action? Evidence
from a neuroimaging study.” Neuropsychologia 40, no. 2 (2002): 212–222.
Grossberg, Stephen. “The link between brain learning, attention, and consciousness.”
Consciousness and Cognition 8 (1999): 1–44.
Haggard, Patrick, Gisa Aschersleben, Jörg Gehrke, and Wolfgang Prinz. “Action, binding,
and awareness.” In Common mechanisms in perception and action: Attention and perfor-
mance, edited by Wolfgang Prinz and Bernhard Hommel, 266–285. Oxford: Oxford
University Press, 2002.
Hallett, Mark. “Volitional control of movement: The physiology of free will.” Clinical
Neurophysiology 117, no. 6 (2007): 1179–1192.
Heath, Matthew, Kristina A. Neely, Jason Yakimishyn, and Gordon Binsted. “Visuomotor memory is independent of conscious awareness of target features.” Experimental Brain Research 188, no. 4 (2008): 517–527.
Hesslow, Germund. “Conscious thought as simulation of behavior and perception.” Trends
in Cognitive Sciences 6, no. 6 (2002): 242–247.
Holland, Rob W., Merel Hendriks, and Henk Aarts. “Smells like clean spirit: Nonconscious
effects of scent on cognition and behavior.” Psychological Science 16, no. 9 (2005):
689–693.
Hommel, Bernhard. “Action control according to TEC (theory of event coding).”
Psychological Research 73, no. 4 (2009): 512–526.
Hommel, Bernhard, Jochen Müsseler, Gisa Aschersleben, and Wolfgang Prinz. “The theory
of event coding: A framework for perception and action planning.” Behavioral and Brain
Sciences 24, no. 5 (2001): 849–937.
James, William. The principles of psychology. New York: Dover, 1890.
Jeannerod, Marc. “Simulation of action as a unifying concept for motor cognition.” In
Taking action: Cognitive neuroscience perspectives on intentional acts, edited by Scott H.
Johnson-Frey, 139–164. Cambridge, MA: MIT Press, 2003.
Johnson, Helen, and Patrick Haggard. “Motor awareness without perceptual awareness.”
Neuropsychologia 43, no. 2 (2005): 227–237.
Kaido, Takanobu, Taisuke Otsuki, Hideyuki Nakama, Yuu Kaneko, Yuichi Kubota, Kenji
Sugai, and Osamu Saito. “Complex behavioral automatism arising from insular cortex.”
Epilepsy and Behavior 8, no. 1 (2006): 315–319.
Kay, Aaron C., S. Christian Wheeler, John A. Bargh, and Lee Ross. “Material priming: The
influence of mundane physical objects on situational construal and competitive behav-
ioral choice.” Organizational Behavior and Human Decision Processes 95, no. 1 (2004):
83–96.
Kern, Mark K., Safwan Jaradeh, Ronald C. Arndorfer, and Reza Shaker. “Cerebral cortical
representation of reflexive and volitional swallowing in humans.” American Journal of
Physiology: Gastrointestinal and Liver Physiology 280, no. 3 (2001): G354–G360.
Morsella, Ezequiel, Stephen C. Krieger, and John A. Bargh. “The function of conscious-
ness: Why skeletal muscles are ‘voluntary’ muscles.” In Oxford handbook of human
action, edited by Ezequiel Morsella, John A. Bargh, and Peter M. Gollwitzer, 625–634.
New York: Oxford University Press, 2009.
Nagel, Thomas. “What is it like to be a bat?” Philosophical Review 83, no. 4 (1974):
435–450.
Öhman, Arne, Anders Flykt, and Francisco Esteves. “Emotion drives attention: Detecting
the snake in the grass.” Journal of Experimental Psychology: General 130, no. 3 (2001):
466–478.
Okon-Singer, Hadas, Joseph Tzelgov, and Avishai Henik. “Distinguishing between automaticity and attention in the processing of emotionally significant stimuli.” Emotion 7, no. 1 (2007): 147–157.
Olson, Michael A., and Russell H. Fazio. “Implicit attitude formation through classical
conditioning.” Psychological Science 12, no. 5 (2001): 413–417.
Olsson, Andreas, and Elizabeth A. Phelps. “Learned fear of ‘unseen’ faces after Pavlovian,
observational, and instructed fear.” Psychological Science 15, no. 12 (2004): 822–828.
Ortinski, Pavel, and Kimford J. Meador. “Neuronal mechanisms of conscious awareness.”
Neurological Review 61, no. 7 (2004): 1017–1020.
Pessiglione, Mathias, Predrag Petrovic, Jean Daunizeau, Stefano Palminteri, Raymond J.
Dolan, and Chris D. Frith. “Subliminal instrumental conditioning demonstrated in the
human brain.” Neuron 59, no. 4 (2008): 561–567.
Pessiglione, Mathias, Liane Schmidt, Bogdan Draganski, Raffael Kalisch, Hakwan Lau, Raymond J. Dolan, and Chris D. Frith. “How the brain translates money into force: A neuroimaging study of subliminal motivation.” Science 316, no. 5826 (2007): 904–906.
Pilon, Manon, and S. John Sullivan. “Motor profile of patients in minimally responsive and
persistent vegetative states.” Brain Injury 10, no. 6 (1996): 421–437.
Plazzi, Giuseppe, R. Vetrugno, F. Provini, and P. Montagna. “Sleepwalking and other
ambulatory behaviors during sleep.” Neurological Sciences 26 (2005): S193–S198.
Preston, Stephanie D., and R. Brent Stansfield. “I know how you feel: Task-irrelevant facial
expressions are spontaneously processed at a semantic level.” Cognitive, Affective, and
Behavioral Neuroscience 8, no. 1 (2008): 54–64.
Raymond, Jane E., Kimron L. Shapiro, and Karen M. Arnell. “Temporary suppression
of visual processing in an RSVP task: An attentional blink?” Journal of Experimental
Psychology: Human Perception and Performance 18, no. 3 (1992): 849–860.
Reisberg, Daniel. Cognition: Exploring the science of the mind. 2nd ed. New York: Norton,
2001.
Riddle, Travis A., and Ezequiel Morsella. “Is that me? Authorship processing as a function
of intra-psychic conflict.” Poster presented at the annual meeting of the Association for
Psychological Science, San Francisco, CA, May 2009.
Rosenbaum, David A. “Motor control.” In Stevens’ handbook of experimental psychology,
vol. 1, Sensation and perception, 3rd ed., edited by Hal Pashler and Steven Yantis, 315–
339. New York: Wiley, 2002.
Roser, Matthew, and Michael S. Gazzaniga. “Automatic brains—interpretive minds.”
Current Directions in Psychological Science 13, no. 2 (2004): 56–59.
Rossetti, Yves. “Implicit perception in action: Short-lived motor representation of space.”
In Finding consciousness in the brain: A neurocognitive approach, edited by Peter G.
Grossenbacher, 133–181. Amsterdam: John Benjamins, 2001.
Yamadori, Atsushi. “Body awareness and its disorders.” In Cognition, computation, and
consciousness, edited by Masao Ito, Yasushi Miyashita, and Edmund T. Rolls, 169–176.
Washington, DC: American Psychological Association, 1997.
Zeki, S., and A. Bartels. “Toward a theory of visual consciousness.” Consciousness and
Cognition 8, no. 2 (1999): 225–259.
Zorick, F. J., P. J. Salis, T. Roth, and M. Kramer. “Narcolepsy and automatic behavior.”
Journal of Clinical Psychiatry 40, no. 4 (1979): 194–197.
11
NICO H. FRIJDA
Will, and free will in particular, is central to the human experience of action. People
want to act when it suits them. They have a strong sense that they take decisions and
that those decisions make a difference to their life and that of others. That sense is
ontogenetically early and important. I have a son who, at the age of just two, reached for some coveted object. I, as a loving father, picked it up and gave it to him. He fiercely threw it back, crying out, “David do!,” and started to reach out again. He later delighted in visiting the record store, where he was free to roam among the abundance of obtainable options.
Psychology recognizes this “sense of self-efficacy” (Bandura, 1997) as a major fac-
tor in meeting adaptation problems. Emotion theory follows suit in positing a cogni-
tive variable labeled “appraised coping potential” as decisive for whether an event is
perceived as a threat or as a challenge (Lazarus, 1991). Feeling unable to influence one’s fate supposedly leads to anomie, which was considered the cause of felt alienation and of increases in suicide in emerging industrial society (Durkheim, 1897).
In the Auschwitz concentration camp during World War II, the notion emerged
of “musulmen” (Levi, 1988, mentions it): inmates who fell into motionless apathy
after having lost all hope of being able to influence their fate. It shows that a belief
in self-efficacy can be a matter of life and death. The musulmen tended to die within
days, unless encouraged by the example or help of other inmates who retained
some sense of coping potential, such as Milena Jesenska, the former girlfriend of
Franz Kafka (Buber-Neumann, 1989), or the Nacht und Nebel (Night and Mist) pris-
oners Pim Boellaards and Oscar Mohr in the Dachau camp (Withuis, 2008). Nor
do experiences of self-efficacy under hardship necessarily involve drama as in these
cases. A few years ago, a comic strip was published in which its author, Mohammed Nadrani, recounts how, as a prisoner in solitary confinement in Morocco, he discovered that he could regain his sense of identity by drawing a horse, and then his name, with the help of a piece of charcoal that had fallen from his ceiling (Nadrani, 2005).
All this does not sit easily with the conviction in current cognitive psychology
that self-determination is an invalid concept, and the “self ” is not an agent (e.g., Prinz
2004, 2006). Moreover, all feeling and behavior is strictly determined by anteced-
ent causal events that leave no room for a mysterious force like “self-determination.”
Explanation of actions and feelings must be found at the subpersonal level of descrip-
tion, at which no I or self exists: “I” and “self” are merely socially transmitted concepts.
Freedom to choose—the very reality of a self that can exercise choice—is argued
merely to amount to a social institution, established for the purpose of assigning social
responsibility. Selves cannot really willfully affect their own fate (Prinz, 2004, 2006).
I will argue that this perspective is incorrect and fails to account for either behavior
or experience. Note that, if taken seriously, this perspective quickly leads to fatalism
and to becoming a musulman. It justifies feeling oneself a victim where one is not, or
not necessarily one. If the supposed mechanisms lead to such consequences, the
suppositions must be mistaken.
It is true that intentional descriptions—“I want this,” “I do not like that”—do not
directly mirror underlying psychological processes. Underlying processes have to
be described at a subpersonal, functional, or psychological level (Dennett, 1978).
However, if our descriptions of subpersonal processes have the implication that one
feels a victim when one is not, these descriptions must be incomplete. The elementary
desires to take things in hand, to make an imprint on the world, to affect one’s
fate, to practice self-determination are real at least sometimes: witness my David;
witness the other evidence just alluded to. I take up the domain of emotion regulation
to examine this apparent contradiction.
EMOTION REGULATION
The term “emotion regulation” applies to emotional feeling or behavior that dif-
fers from the emotion or behavior an event by itself might otherwise have evoked.
Actual emotional behavior or feeling is often weaker or different. One is afraid but
keeps firm. One does desire one’s neighbor’s wife but suppresses the inclination,
and one may even succeed in getting rid of the desire by seeking distraction or taking
to drink.
Emotion regulation generally results from emotional conflict. Two more or less
incompatible emotional inclinations operate at the same time or in close succession.
One has an emotional inclination or feeling. One feels fear and wants to run away;
one desires one’s neighbor’s wife and wants to be with her. But inclinations as well as
feelings may evoke unwanted consequences that in turn may evoke regulating inclina-
tions. Fear may make one feel a coward, and evoke contempt in others. Desiring one’s
neighbor’s wife may anger that wife and the neighbor, and evoke feelings of guilt. These
consequences are also emotional. They are also relevant to some emotional concerns.
Not only does one desire one’s neighbor’s wife. One also wants a clear conscience
and social harmony. One also wants self-respect and social appreciation. In fact, if the
consequences of having or showing one’s emotion were not themselves emotional—if
they left one cold—no regulation would occur, except perhaps when regulation
aims at getting rid of the discomfort of a desire that cannot be fulfilled.
Regulation serves to deal with such conflicts. It can do so in several ways. One can
seek to prevent the unwanted consequences by suppressing action and inclination.
One can also prevent the to-be-regulated inclination from arising or developing.
Emotion Regulation and Free Will 201
Alternatively, one may find a compromise—an attenuated mode of action that will
achieve somewhat less satisfaction but a weaker aversive consequence, or none at all.
Some regulation procedures proceed smoothly and automatically. Regulation
is often motivated by anxiety that operates by response inhibition (Gray &
McNaughton, 2000). Anger, for instance, often falls away when one’s target is big and
strong. Different forms of automatic regulation stem from culturally shaped modes
of appraising situations. These include culturally learned action programs that are
tailor-made for handling social conflict and satisfying conflicting concerns, such
as action programs for polite discourse, for discussing disagreements rather than
deploying hostility, and encouragement of cognitive strategies for mild appraisal
of other people’s hostile actions (Campos, Frankel, & Camras, 2004; Mesquita &
Albert, 2007).
EFFORTFUL REGULATION
But very often regulation does not occur automatically. One often has to expend
effort to suppress emotional expression. The reason is clear. Emotions are pas-
sions that instigate desires and actions; desires and actions tend to persist over
time, and they do so despite obstacles and interruptions. Emotions exert “con-
trol precedence” (Frijda, 2007). They also channel attention. The degree of con-
trol precedence in fact defines emotion intensity. How people rate the intensity
of their emotions correlates strongly with how they rate the features of control
precedence just mentioned (Sonnemans & Frijda, 1995). Regulation seeks to
influence these features: to diminish precedence, to attenuate action strength, to
slow down response, or to act differently from what passion instigates. Regulation
may require considerable effort that calls upon restricted resources of “willpower.”
Experiments have indeed shown that effortful decisions take energy (glucose con-
sumption, in this case; Baumeister, Muraven, & Tice, 2000; Baumeister, 2008). So
do controlled processing, mental effort, concentration, and attentional vigilance
generally (Mulder, 1980; Sanders, 1998).
The motivational role of expected response consequences differs importantly
from one instigation of emotion regulation to another. Julius Kuhl and Sander
Koole (2004) offer an important distinction between what they call self-control
and self-maintenance. Self-control consists of suppressing some intended action,
doing it differently, or not doing it at all, as in most of the preceding
examples. The paradigm examples of self-control are improving social interaction,
resisting temptation, and improving instrumental action under emotional circumstances.
In social interactions, one may lose others’ consideration and make enemies. In
desire, one might offend against consideration, prudence, moral rectitude, and decency,
and one may lose money. In panic, one may lose control over coherent escape; in
fear, one may drop one’s house key; trembling from anger may make one miss one’s
hit or shot.
Self-maintenance, by contrast, consists of undertaking and persisting in some
action despite anticipated unwanted response consequences. It is exemplified by
devoted passions, by costly interests and hobbies, by actions that do not comply
with social pressures, and by self-sacrificial behavior, ranging from jumping into
differently from how the emotion disposes us to act, or for not acting at all. One
cares both ways, at the same time. When offended, one cares about restoring one’s
self-esteem. One is inclined to retort angrily. But one also cares about retaining
social harmony; one is inclined to let the offense go by or to find some other solu-
tion, such as responding calmly, “You made me very angry.”
Caring about an event or object implies the emergence of an emotion and feeling.
Caring also implies that the event or object touches a sensitivity of the individual,
one that causes the object or event to be appraised as relevant and the emotion
or feeling to emerge. Relevance appraisal turns the object or event into an emo-
tionally meaningful one.
“Concern” is the term I use for the assumed dispositions. Concerns are defined as
dispositions to desire the occurrence or nonoccurrence of given kinds of situations
(Frijda, 1986, 335). Emotions are evoked when encountering an object or event
that is appraised as possibly or actually satisfying some concern or threatening such
satisfaction. Concerns can be operationally defined as emotional sensitivity for par-
ticular classes of events instantiating concerns. These emotional sensitivities serve as
reference points in seeking concern satisfaction or escape from concern frustration.
A concern is awakened upon perceiving an object or event that matches its sensitiv-
ity, or even upon just thinking about such an object or event. At other times concerns
silently sit in the mind-brain until a relevant event or thought appears (Frijda, 2007, chap. 7).
The notion of concerns includes major motives, goals, and needs, under which
terms they are usually discussed in the literature. They also include “sentiments”
(Shand, 1920): affective attitudes toward persons, objects and issues, active inter-
ests or hobbies, and attachments as well as hatreds. An individual’s concerns have
diverse and complex sources, in innate biological mechanisms, cultural transmis-
sions, direct social influences, individual propensities, and individual life histories,
which cannot be enlarged upon here. In my view, we still await a general analysis
of concerns (or of motivation, for that matter) that provides a unitary, coherent
account of both motivations that stem from social prescriptions and those with
direct biological backgrounds (Berridge, 2004).
People harbor a large number of concerns—indefinitely large, in fact, because
distinctions can be made at many different levels of generality. One can have a
concern for dealing with people generally, and for dealing with that one particular
individual. Concerns account for the fact that people care about many things that
they do not at that moment strive for, but that influence thought in planning and
judgment, and that at any moment may be aroused as generators of emotions by the
appearance of some stimulus, along the lines of the so-called incentive motivation
model (Berridge, 2004; Gallistel, 1980; Mook, 1996; Toates, 1986).
Concerns are mental structures of which one is, in principle, not conscious—not
even when they are activated and are motivating actions and choices (Berridge &
Winkielman, 2003; Wilson, 2002). One does not need to be conscious of them, since
they operate directly: they constitute an individual’s sensitivity to relevant objects and
events, and thereby instigate affect and action upon meeting such objects or
events. Concerns are inferred from such sensitivities, by the individual him- or herself
and by observers and theorists. A sexual concern is inferred from an individual’s
or a species’ frequent affective response to meeting, viewing, or smelling potential
CONCERN STRENGTH
Relative concern strength thus can be held to be decisive for preference between
courses of action in emotion conflicts. No cat crosses an electrified grid unless that
grid is on the path to its kittens. One reenters one’s burning house to save one’s
child, and not as readily to save one’s canary.
But how to assess concern strength? Concerns can differ in strength in vari-
ous ways, and I do not know whether these types of variation tend to go together.
Concerns can differ in the strength of the felt urge evoked by relevant objects or
events; the rapidity of urge onset; the speed, scope, and power of motor, verbal,
and cognitive action. Concerns also can differ in the willingness they may induce to
incur costs, take risks, or accept pain in seeking to satisfy the concern at stake. The high
grid voltage that the cat accepts to get to her kittens signals strong maternal concern,
and paying dearly for a rare postage stamp signals eager philately. All this is probably
independent of the more quantitative action parameters. As a further, probably
separate, dimension, concerns can differ in scope. By “concern scope” I mean
the range of events that touch the concern’s sensitivity and give rise to some sort
of emotion. Scope may pertain to events evaluated as positive (they signal concern
promotion) and to events appraised as negative (they harm concern satisfaction).
Spider phobias are strong concerns in this sense. Spider phobics tend to get nervous
just upon seeing a spiderweb, or when coming to a place where spiders are common.
Sex during puberty gives another example: then, everything reminds the young per-
son of sex. In subcultures where maintaining self-esteem, social status, and social
identity is emphasized, notable shame is aroused by every failure or critical remark
explained: “If I had declined the request, and I afterward heard that evil befell
him, I would have been unable to look myself in the eye.”
PREFERENCES
Emotion regulation is due to emotion conflict. An event has elicited several dif-
ferent action inclinations. The inclinations aim at actions that meet different con-
cerns. How to deal with this? One of the major options is to select among them.
One chooses the option of which one prefers the envisaged outcome: the one that
will yield the smallest loss or the largest gain in concern satisfaction. Preference
depends, in the first place, upon relative concern strength, in proportion to the
expected outcome magnitudes of action. One continues smoking until spots appear
on the X-ray pictures, and sometimes even after that.
Preferences follow the pleasure principle, taken broadly. One chooses what one
likes most or dislikes least. If only, however, things were that simple! The point is
that most likes and dislikes, pleasures and pains, are not simple givens (Frijda, 2009).
They are not even simple givens in taste experiences, since the label on the wine bottle
makes a difference to how much the wine is liked. Pleasures and pains are not simply
compared when embedded in a conflict of consequences for multiple concerns. They
readily pose a Sophie’s Choice problem. To offer another example, the pleasures of
foreseeing an erotic encounter get qualified by the thought of marital strife, which in
turn gets qualified by the urge to terminate the encounters. The net outcome may in
fact favor termination; it in any case favors socially oriented self-regard.
Assessing preference includes the sensing and weighing of the expected impact
of action outcomes on feelings, and on multiple concerns. It involves mostly non-
articulate and perhaps even nonconscious information activated in the mind-brain.
It may include glimpsing feelings and future feelings, contingent on vague thoughts
of future events and one’s briefly felt inclinations and will to deal with them. The
entire process usually is not automatic, and yet it is not explicitly conscious. Only
the varying affective outcome is really conscious, one may suppose, even when it is
not focally reflected upon until the end of the pondering, or when it signals that end. The
process is intuitive, registering feelings rather than the perceptions that generated them, as
modes of situated “directed discontent” (Rietveld, 2008), prodding us toward situ-
ated contentment. This, at least, is the sketch that Rietveld (2008) and Dijksterhuis
(2004) plausibly present of the process of preference construction. The process may
proceed haltingly, with deliberations. That this occurs shows in hesitations, inner
struggles, uncertainty, a sense of making true sacrifices, considering and reconsidering
the decision to be taken, and deciding that the cost is or is not too heavy or in conflict with
other concerns. It may also proceed straightforwardly, as occurs when one instantly
knows that a central value is being touched, a central concern is being threatened,
a unique opportunity is met, and only verifications are opportune. In one story I
was told, from the Netherlands during World War II, a farmer’s wife visits the village
grocery store and is asked to come to the neat room behind the shop. There, the village doctor
asks her to consider hiding a Jew. “I’ll have to ask my husband,” she replies. When
the husband comes home at six, and while at the evening meal, she tells him. “I’ll
have to think about it,” he answers, and after a minute or two: “I think we’ll have to
do it”; and they did. What happened during those minutes? Probably, some implica-
tions crossed his mind: the harm expected for the Jew; the indignity of Jews’ having
to hide; the risks for himself and his family; his sense of obligation to
people in need; the worth connected to remaining decent and not shrinking from
responsibilities; but also the consequences of refusal: something like, “If I decline the
request, and evil befalls him afterward, I will be unable to look myself in the eye.”
Note that the reflection may not find a preference. Hesitation may settle into
remaining undecided. Being undecided may be resolved by refusing the request.
When a preference does emerge, its primary determinant presumably will be the
strength of the concerns at stake in that alternative, which makes its call one that has
to be heeded.
What causes this preference? What is it that sometimes makes relevance to some
concern decisive? Why do such considerations as looking oneself in the eye, or the
foresight of the evil fate that awaits someone one does not know, or the indignity of
the situation weigh so heavily?
Some cues about the concerns at play come from the motives mentioned post
hoc, to explain one’s decisions made in conditions of what was earlier labeled
“self-maintenance.” The stated motives of people who did hide Jews largely fell under
the headings of empathy, personal norms, and values. Other cues for the concerns
operative in self-maintenance come from the experiences that underlie traumatic
stress syndromes. These concerns become apparent from the loss of sense and coherence in
the experienced events (Epstein, 1991; Janoff-Bulman, 1992; Rimé, 2005), or from the
collapse of the implicit expectations that, together with the world’s events, form a
meaningful, reasonably predictable, and reasonably controllable world. In the
words of Rimé (2005), traumatic events represent a “breakdown of the symbolic
universe.” A world after such breakdown has revealed itself as a world one would not
want to live in, and could not live in without losing self-regard.
This implies that resisting such breakdown is motivated by a concern with a very
large scope. Resisting cruelty and oppression, and rejecting humiliation and enslave-
ment, can be viewed as worth the effort, whatever the price.
The motivation provided by such a concern belongs to what Frankfurt (1988)
called “second-order desires”: desires to desire one’s first-order desires. They are
desires that one identifies with and that one can desire “wholeheartedly.” It is not
clear that identifying with desires forms the most basic and satisfactory description
for singling out desires that one has wholeheartedly. Part of the problem is that iden-
tifying with one of one’s desires is not a very transparent notion, if only because the
relationship between the “self ” who identifies and the “self ” who has the desire is
unclear. The feeling or sense of identifying with a desire can perhaps be understood
in a more analytical fashion. Desires that one desires desiring, in Frankfurt’s analy-
sis, may in the first place be those desires that result from events having relevance
to concerns with a very large scope, in the sense used earlier. Their satisfaction or
dissatisfaction enables or disables a very large number of actions. Such concerns
may also involve sensitivities and underlie desires that are part of one’s conception
of oneself. They belong to the goals that may consciously orient one’s conduct of life
when one is called upon to make choices: this is who I am, and who I choose to be. One
identifies with it in part because one decides to do so, and commits oneself to that
decision, in true Sartrean fashion. In the second place, one may have second-order
desires that, when followed, do not result in frustrating other desires, in the way excessive
drinking frustrates the desires for health and self-respect (if one does have those lat-
ter desires). One cannot give in to a desire like excessive drinking wholeheartedly, but one can
wholeheartedly will the desire to uphold the symbolic universe, or to abolish cru-
elty. One can wholeheartedly desire to find meaning in life, to be more than a speck
of dust in the world. As already mentioned, Kruglanski et al. (2008) concluded that
the motivations of terrorists primarily consisted in constituting a sense of life. That
motivation may suffice for action. I once read a letter that a European resistance fighter
wrote to his family on the evening before his execution during World War II: “I have
not fallen for a political ideal. I die and fought for myself.”
By contrast, acting at variance with one’s conception of oneself, or in ways that
offend one’s values, may take away one’s sense of the meaning of life. To the indi-
vidual, a world without these values may represent a world one does not want to
live in. As I remarked earlier, betraying one’s conception of oneself and what one
strives for would produce a self-conception one would not want to live with. As
Myriam Ortega, who had been incarcerated under Pinochet for 13 years, said: “To
betray in exchange for not dying is also dying” (Calzada, no date). Take a (perhaps
authentic) story from the Hungarian writer Imre Kertesz. A Holocaust captive lies
ill on his bunk, with his ration of bread on his belly, too weak to eat it yet. He looks
up and sees someone snatching it away. A little later that man comes back, replaces
the ration, and says, grinning: “What the hell did you think?” Why did he do
that? Kertesz’s interpretation is similar to what Myriam Ortega said: there is more
than one way to die (Kertesz, 1997). It also is similar to what the Italian priest in
Auschwitz in the earlier example must have realized.
Upholding empathy, desiring a coherent symbolic universe, living according to
one’s conception of oneself—indeed, all three are concerns with a very large scope,
making them worth effort, pain, and sacrifice. Self-regulation, both as self-control
and as self-maintenance, can be effective. It sometimes works to promote social
harmony and internal harmony. True, its effectiveness is modest. In social interac-
tion, a sensitive person can still notice that her antagonist is angry, and she still may
take offense. The smoker may stop smoking, even if for only a week or two. But
some Tutsi children and Jews have been saved, some smokers do in fact stop before
the onset of lung cancer, some individuals gain insight into their world and their
motives, and some drinkers and drug addicts do in fact come to realize that the profits
from drinking and drug use are not really worth their costs (Lewis, 2011).
WHAT IT TAKES
What does all this take? The core processes in nonautomatic self-regulation are
fairly clear. They consist of assessing or assigning a preference among the emotional
options and selecting some action to implement that option.
What does that take? There are four tasks to be fulfilled: exploring the nature of the
options in the emotional conflict; exploring the concerns and sensitivities at stake;
searching for alternative action options; and selecting the preferred option and
committing to it.
It takes these four tasks to arrive at preference for an alternative in emotion con-
flict that optimally considers the relevance to the various concerns at stake. But the
set of tasks is not simple. The cognitive structure to be explored is intricate. Events
with their manifold aspects can be pertinent to several concerns, in particular when
one examines consequences at several steps removed. Killing an insect removes
an insect, but it also is a step in decreasing environmental diversity and undermining
respect for life. Each concern relevance can give rise to a large number of actions,
each with its own different consequences.
These various kinds of exploration are usually possible, but they are not always
undertaken. They can be undertaken to widely varying extent. Responses in
emotional conflict tend to be driven by situational conditions. One readily tones
down one’s anger so as not to hurt one’s target if the situation is a run-of-the-mill
social one. One moderates one’s erotic approach so as not to chase the poten-
tial target away. One may take an aversive event as inevitable; one may without
questioning do whatever one is told to do; one may view oneself as a powerless
victim of unpleasant behavior when in fact there are many things one could have
done. Often, no preference assessment occurs even when what the situation
suggests—doing nothing, doing what one is told—is actually not the only avail-
able option. Blatant examples are easily given. Recall the 70 percent of Milgram’s
subjects. Recall the members of the 101 Police Battalion from Hamburg—the
battalion that was among the first to engage in systematic killing of Jews in Poland
(as described by Browning, 1993, and by Goldhagen, 1996). Only one member
asked to be excused and was assigned a different job. Evidently, preference assess-
ment requires free and unrestrained exploration of the event’s meanings, and the
implications of one’s actions.
Let me briefly enlarge on the four tasks mentioned. First, exploring the nature
of the options in emotional conflict. What is the nature of each of the conflicting
action tendencies? The task includes taking time for a thought about context, and if
the thought comes, not brushing it away at once. Many of the 101 Battalion members
got used to the discomforts of their task rather rapidly (Browning, 1993). Many of
Milgram’s subjects did have qualms that they did not heed. Those that did heed
them included subjects who wondered what the qualms suggested they should do
instead, and some of them discussed these alternatives with the experimenters.
Second, preference assessment calls for exploring the concerns and sensitivities
at stake in an event and in the consequences of one’s eventual emotional reaction.
We explore our concerns by pondering about an event, reflecting upon it, as well
as by deliberating about action. In doing so we expand our awareness of the event’s
implications and consequences. Recall the guess I made about the contents of the
mentioned farmer’s pondering the request to shelter a Jew. He presumably thought
about the risks involved, and about the values offended by the reasons for the request,
as well as by the actions of agreeing or not agreeing with it.
The point I wish to stress here is that sustained attention may engage other con-
cerns than those that were initially activated. It may become clear that an individual
is being threatened by a scandal, and that the scandal is an event in which basic
human values are not respected. Expansion of the domain of implicated concerns
may climb to increasingly encompassing levels of meaning, and of implica-
tions for one’s image of oneself. This is what, presumably, happened in Kertesz’s story.
Having stolen someone’s ration represents a gain, but it can also fill one with uneasi-
ness, up to the point of sensing a loss of self-respect.
The extent and content of these reflections require a measure of receptiveness
or openness to them (Kruglanski, 2004). They require willingness and capacity to
explore and face their implications. The mentioned receptiveness, openness, and
willingness to explore appear to form requirements for accessing the event’s full
concern relevance.
This role of openness and willingness to explore consequences is not trivial, as
shown by the Milgram data, and those from the 101 Hamburg police battalion. On
occasion, it goes even further. The commanders of several World War II concentra-
tion camps (Stangl, the commander of the Treblinka camp; Höss, the commander
of Auschwitz-Birkenau) prided themselves after the war, in interviews or autobio-
graphical accounts, on having done their jobs as best they could, which included
perfecting the mass-extermination procedures (Levi, 1988). Prisoners of the Gulag
who hated the Soviet regime took pains to build the walls around their compounds
as straight and neat as they could (Adler, 2002). Furthermore, recall that limiting the
use of available information is not restricted to hostile contexts, as is evident from the
illustrations given by Akerlof and Shiller (2009).
Concerns and attendant motivations, as argued before, need not be consciously
identified in order to operate. Events directly produce affect and action. Conscious identifica-
tion of why one wants what one prefers frequently consists of justifications and con-
structions produced after the fact (Wilson, 2002). This is different from examining
one’s emotional motives and concerns in deliberating over a decision. Deliberating is
geared to “Why do I want to do that?” rather than to “Why did I do that?” It serves to
establish an inventory of relevant concerns to enable estimating the relevance of each
one. One may feel the urge to save someone in need: Is it for the glory of having taken
a risk and the glow of goodness, or because of concern for the suffering of others? One
may show courage in fighting a cause: Is it to give meaning to one’s empty routine life,
or to obtain 75 virgins in the afterlife, or because an offense to God’s honor should be
paid back? One probably never can be certain about one’s motives, particularly when
climbing the motive hierarchy (Frijda, 2010a). But attending to relevant concerns and
motivating states promotes openness to the motivational options, and it may blunt
the sensitivity to social and situational primings, as these figure in the after-decision
justifications spelled out by Wegner (2002), Wilson (2002), and others.
The third task is to search for alternative action options. The way to do so is pretty
straightforward. It is similar to the openness in exploring concerns. Alternative
action options are found by looking around in the world and in the world of imagi-
nation. One can recognize novel options when coming across them. This root of
mentioned farmer’s conclusion after his two minutes’ reflection on the request. In
self-maintenance at least, something does happen in such choosing that goes beyond
merely selecting one of the action options. One also sets the preferred option as
the chosen goal and accepts it, risks and all, come hell or high water. The selected
option turns into a commitment to stick to one’s decision when the heavy costs
emerge, and thus into an intention to face the harm when it comes. This sounds
suspiciously like a self really deciding. It probably works by a profound reappraisal:
effecting a cognitive change that gives the action consequences a different evalua-
tion, as challenges instead of threats. Risks to come will not be stimuli for flight or
restraint any longer but obstacles on one’s path to a goal that are to be overcome or
avoided. Reappraisals of this sort may have considerable effect. An interview-based
study suggested that communist prisoners in the 1933–1945 German concentra-
tion camps showed higher resistance to the sufferings. After the war they appeared
to show fewer PTSD symptoms than, for instance, Jewish survivors, probably
because they viewed their sufferings as the results of deliberate choices to act
for an honorable cause, rather than as the result of merely having been victims (Withuis, 2005).
The same appears to apply to suicide terrorists.
Such reappraisal and acceptance of hardships not only characterize commit-
ment to political and religious ideologies. They also apply to devoted personal pas-
sion and love, as illustrated by John Bayley’s attachment to his wife, Iris Murdoch
(Bayley, 1998).
What all this takes, then, is emotion. Emotion engages capacities for preference
and for reflection or pondering: for perusing and considering options, probing pref-
erences, sensing emotional impacts.
What this takes is in fact rather a lot. Not for nothing did Dennett (2004) elabo-
rate the point that freedom evolved. Free choice or free will appears possible with
the help of three almost uniquely human faculties: conceiving of the merely pos-
sible, which includes imagination; detachment, which includes the ability for rela-
tively unprejudiced scanning of the world and of what imagination happens to offer;
and reflective thought, making one’s thoughts, perceptions, and emotions into
objects of inspection. As far as I can see, it is not easy as yet to find a satisfactory
account of the operation of these faculties in subpersonal terms. Imagination and
cognizing the merely possible, for instance, not only involve producing representa-
tions and simulations but also include awareness that direct links with motor action
are absent. Imagery subjectively differs from perception in that its content cannot
be inspected. Detachment—stepping back, adopting the position of an unconcerned
spectator—appears to involve a general loss of the sense of agency, and even of being a vulnerable target: automatically under dissociation (as in Langhoff's case), or intentionally in self-observation and the cinema. Reflective thought includes rendering
oneself an object of attention.
FREE WILL
The process of finding action options and final preferences consists of exerting free
choice or free will—liberum arbitrium. What is valued in what is being called “free will”
is the freedom to seek to act according to one’s preferences and one’s considerations.
Emotion Regulation and Free Will 213
The Sage
My house is filthy and my many children shrieking.
The pigs are rooting, grunting, in the yard.
But mountains, rising high to the blue heaven,
Draw my attention, soaring up from stink and dirt.
In a sense, freedom resides in the world: in its abundance of possibilities if the pos-
sible and the outcomes of imagination are included. This is connected to another
214 The Function of Conscious Control
main point. Information search implies initiative. There is freedom also in going out
in the world of ideas and options. The determinants come from the outside meeting
the inside: there is self-determination.
“Freedom of choice” refers to freedom from external constraints to consider and
weigh available options, and to act in agreement with preference. “Free will” like-
wise refers to freedom to pursue available options, in perception and thought, and
in constructing preference, and in finding out what one’s preference is. Freedom is
the opposite of compulsion and constraint—not of aimless arbitrariness.
The notion of “free will” appears to offend the axiom of universal causal determination. However, no such offense is involved in the notion behind people’s
experience of free will, or in their striving to exert it (Frijda, 2010a). True, the
notion of free will had religious backgrounds, in Augustine, Aquinas, and Duns
Scotus. It served to free God from blame for human evil. However, this religious concept of free will was not what provided the facts that produced the notion of will
and freedom of action. The facts were condensed in Aristotle’s analysis of voluntary
action, and in Aquinas’s analysis of how to escape from sin: by not sinning. They
included the observations and experiences that not fleeing in fear from the battle-
field is an available option, and that one can in fact leave one’s neighbor’s wife alone,
even if it may be hard to do.
Of course, free choices have ample determinants. They require cognitive abilities
to explore, and to explore against inclinations not to consider certain determinants.
They require the determination of the preferences among options. They require the
ability to envisage remote consequences as against proximal perceptual influences;
they require the resources to retain detachment and clarity under stress. They, first of
all, require interest, spontaneity, and engagement, that is, alive concerns and affective
sensitivity for what may be relevant to concerns at stake in the choice situation.
It has been argued that the awareness “I could have acted differently,” after a
decision, is an illusion. Of course, it is not an illusion. All different ways to act that
appeared feasible were considered and compared while deliberating and before set-
tling on the decision. Saying of someone “He could not have acted differently” after
the fact of having decided is a play on the word “could,” since that word presupposes
that options existed before the decision cut them off. Deliberation, reflection, and
hesitation showed that the determinants of the final preference were all effective processes: weightings and evaluations. Most important, in my view, is the fact that deciding and committing oneself consists in terminating the intention to explore
and weigh action options, and in the judgment that one has explored enough. One
has arrived at the satisficing criterion (Simon, 1956). Processing shifts from weighing
options to shaping the intention of following the path of the satisficing option. In
fact, there is no compelling reason to terminate exploring options. Some people do
not, and continue deliberating until too old to act anyway, or let the course of events
or Allah’s will decide, or continue ruminating on “If only I had . . . !”
SELF-DETERMINATION
The preceding analysis of free will is phrased in intentional language, in terms that
presuppose self-determination. But what if there is no self to be found that guides
choices, as a central commander (Prinz, 2004), or an I that is doing the doing? Look
at the brain, look at the mind: there is no one there (Metzinger, 2003).
An initial response is to insist there certainly is someone here. He or she stands
here, right before you in presenting this essay (or sits behind the computer while
writing this text). He or she is doing things, and can do other things, some of which
he or she is doing to the world—speaking to you, for instance—and some to him/
herself, such as scratching his/her head and self-correcting his/her English.
If one should ask, he—in the present example—would answer that he experiences and views himself as a person, and that he operates as a person. The things that person does result from subpersonal processes plus
the information that triggers these processes and that these processes operate upon.
“Me,” “I,” and “self” designate the sum total of process dispositions, information representations, and the procedures to organize and harmonize these processes, which
together, and in their confluence, function as the agent in action. One should stress
this “confluence.” Few if any mental activities engage only one subpersonal process,
or even only a few of them. Correspondingly, mental processes engage large infor-
mational and neural networks (Edelman & Tononi, 2000; Freeman, 1999; Lewis,
2005; Lewis & Todd, 2005; Lutz & Thompson, 2003). They thus engage large parts
of the total system. They thus engage large parts of the me, I, or self.
Among these processes are those that produce the behavioral, physiological, and
experiential outputs, in accordance with outcome preferences and, thus, with the
individual’s concerns. They include a representation of the body and its location in
space, which Damasio (2000) and Panksepp (1998) refer to as the “core
self.” These processes may be located anatomically in what has been termed the
“subcortical-cortical midline system,” which runs from the periaqueductal gray to the
medial prefrontal cortex (Panksepp & Northoff, 2009). It may include the frontal pole, which distinguishes between actions that the subject has decided upon,
as contrasted to actions he performs on somebody else’s initiative (Forstmann
et al., 2008). It may extend to the “self’s” affective states of the moment, and its
relationship to the world and with persons and objects around it. All this underlies
Metzinger’s (2003) self-model of subjectivity, functionally as well as (on occasion)
in conscious awareness.
Is there an “I”? Yes, there is an I, in the sense just indicated. Likewise, there exist
a “me” and a “self.” They all three refer to the same total interconnected network.
Neither supervises the network. They are the entire network, with its informational
contents, and ongoing processes, together with its appendages consisting of eyes,
hands, bodily sensitivities, and actions by these eyes, hands, and bodily sensitivities.
They are the entire network, just as a “wave” is the entire collection of droplets that
by their interactions is capable of breaking dikes and destroying oil wells.
“Self-determination” in most of its usages just means “autodetermination”: the
operation of attractors within the network in determining preference or in agreement of actions with goals (Lewis, 2005). Perhaps in many uses of this term, our
focus is on more or less stable ingredients of the neural network, such as concerns and sentiments, in addition to representations of the body. When asking oneself, “Who am
I?” one tends to answer by thinking of one’s body and its location, or by thinking
of one’s major goals and concerns: the “myself” that one can or cannot look in
the eye.
In connection with emotion regulation, self-determination has a more specific
and emphatic accent. As discussed, some actions are caused primarily by internal
processes, not directly dependent upon external events. The internal processes
have been enumerated: viewing counterfactuals and the merely possible; main-
taining aims in working memory despite distractions; reflection and deliberation;
stepping back from affordances, action urges, and social influences; detaching from
event pressures; noting primings by experimental or advertising manipulations, and
largely discounting them. Centrally, these internal processes involve using the freedom of thought that the world allows: freedom for thought to explore and to
detect nonobvious concern relevance. They are possible in spite of habits, stimuli
with their affordances, and the presence of conflicting urgent urges. These internal
processes are processes within the total system of interacting concerns, preferences,
and dispositions that are remote from external stimulus events, and that on occasion are designated, for short, as “the self” or “a self.”
All of this amounts to saying that free decisions and free will are enabled by the
human system as a whole, and its neural structures that integrate multiple subper-
sonal processes and their outcomes.
FINAL REMARKS
My main conclusion should be obvious: free will exists, and is effective in influencing the actor’s action and his or her fate. It is equally obvious that free will has nothing to do with decisions and actions not being fully determined. They are determined in
large part by processes internal to the deciding person.
None of this says anything about the scope of free will. Many life circumstances
leave few options. Moreover, people in general know little about their motiva-
tions. Awareness of what moves one is a construction. There exists no direct
reading-off of the causes of intentions or desires; one did not need Nisbett and
Wilson (1977) or Wegner (2002) to make this much clear. In any culture with
macho ethics a man whose wife left him tends to say, “Good riddance,” or “She
broke her commitments and promises; I am angry, not sad.” Any heroic deed may
be motivated by expected glory or enhancement of self-esteem, while constructed
as idealism or courage. The schooling and pay offered by fundamentalist move-
ments obscure the role of overriding religious value, even in the eyes of funda-
mentalists themselves (Stern, 2003).
There remains a major riddle. Free will, and decision after deliberation, or upon
intuitive preference, or after hesitation are largely products of conscious cognitive
action. How does consciousness impact on matter—the matter of the muscles and
that of the nerves? How, in other words, is the relationship between mind and mat-
ter to be seen? As a state of the neural matter, as Searle (2007) proposes? Certainly
not as the relationship between a phenomenon and an epiphenomenon. So far,
neuroscience has given no satisfactory account. On the other hand, conscious
thoughts, even when entirely dependent on concurrent neural processes, represent
novel input that stirs or shapes neural representations that have their further effects.
I react to what you say; why shouldn’t I be able to react to what I say to myself or
“think to myself”?
Then there is the very will-like notion of acceptance of the consequences of deci-
sions taken. It results (I think) in commitments, that is, in long-term intentions that
are concerns (and that have emotional impact when something relevant occurs,
including not following up on the commitment). Finally, I have stressed that awareness that one can influence one’s fate provides or sustains the motivation to seek
such influence when it is needed, and to look for actions or opportunities
that may not be immediately obvious. All of this may save lives and offer escape
from despair.
ACKNOWLEDGEMENTS
This work, as part of the European Science Foundation EUROCORES Programme
CNCC, was supported by funds from NWO and the EC Sixth Framework
Programme under contract no. ERAS-CT-2003–980409.
I am much indebted to the discussions at the CNCC meetings, and to comments
on previous versions by Michael Frijda, Machiel Keestra, Julian Kiverstein, Batja
Mesquita, Erik Rietveld, and Till Vierkant.
REFERENCES
Abu-Lughod, L. (1986). Veiled sentiments. Berkeley: University of California Press.
Adler, N. (2002). The Gulag survivor: Beyond the Soviet system. London: Transaction.
Ainslie, G. (2001). Breakdown of will. Cambridge: Cambridge University Press.
Akerlof, G. A., & Shiller, R. J. (2009). Animal spirits: How human psychology drives the
economy, and why it matters for global capitalism. Princeton, NJ: Princeton University
Press.
Apfelbaum, E. (2000). And now what, after such tribulations? Memory and dislocation in
the era of uprooting. American Psychologist, 55, 1008–1013.
Apfelbaum, E. (2002). Uprooted communities, silenced cultures and the need for legacy.
In V. Walkerdine (Ed.), Challenging subjects: Critical psychology for a new millennium.
London: Palgrave.
Asadi, H. (2010). Letters to my torturer: Love, revolution, and imprisonment in Iran.
Oneworld.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman.
Baumeister, R. F., Muraven, M., & Tice, D. (2000). Ego depletion: A resource model of
volition, self-regulation, and controlled processing. Social Cognition, 18, 130–150.
Baumeister, R. F. (2008). Free will in scientific psychology. Perspectives on Psychological
Science, 3, 14–19.
Bayley, J. (1998). Iris: a memoir of Iris Murdoch. London: Duckworth.
Berridge, K. C. (2004). Motivation concepts in behavioral neuroscience. Physiology and
Behavior, 81, 179–209.
Berridge, K. C., & Winkielman, P. (2003). What is an unconscious emotion? (The case for
unconscious “liking”). Cognition and Emotion, 17, 181–211.
Browning, C. R. (1993). Ordinary men: Reserve Police Battalion 101 and the Final Solution in Poland. New York: Harper Perennial.
Kruglanski, A. W., Chen, X., Dechesne, M., Fishman, S., & Orehek, E. (2008). Fully com-
mitted: Suicide bombers’ motivation and the quest for personal significance. Political
Psychology, 30, 331–357.
Kuhl, J., & Koole, S. L. (2004). Workings of the will: A functional approach. In J. Greenberg, S. L. Koole, & T. Pyszczynski (Eds.), Handbook of experimental existential psychology (pp. 411–430). New York: Guilford Press.
Langhoff, W. (1935). Die Moorsoldaten (reprinted, 1958, Stuttgart: Verlag Neuer Weg).
Lazarus, R. S. (1991). Emotion and adaptation. New York: Oxford University Press.
Levi, P. (1988). The drowned and the saved. New York: Simon and Schuster.
Lewis, M. D. (2005). Bridging emotion theory and neurobiology through dynamic sys-
tem modeling. Behavioral and Brain Sciences, 28, 105–131.
Lewis, M. D. (2011). Memoirs of an addicted brain: A neuroscientist examines his former life
on drugs. Toronto: Doubleday Canada.
Lewis, M. R., & Todd, R. M. (2005). Getting emotional: A neural perspective on emo-
tion, intention, and consciousness. Journal of Consciousness Studies, 12, 210–235.
Lutz, A. S., & Thompson, E. (2003). Neurophenomenology: Integrating subjective experi-
ence and brain dynamics in the neuroscience of consciousness. Journal of Consciousness
Studies, 10, 31–52.
Mesquita, B., & Albert, D. (2007). The cultural regulation of emotions. In J. Gross (Ed.), Handbook of emotion regulation (pp. 486–504). New York: Guilford Press.
Metzinger, T. (2003). Being no one: The self-model theory of subjectivity. Cambridge, MA: MIT Press.
Milgram, S. (1974). Obedience to authority. New York: Harper and Row.
Mook, D. G. (1996). Motivation: The organization of action. 2nd ed. New York: Norton.
Mulder, G. (1980). The heart of mental effort. Ph.D. diss. University of Groningen.
Nadrani, M. (2005). Les sarcophages du complexe [Years of lead]. Casablanca: Éditions Al Ayam.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on
mental processes. Psychological Review, 84, 231–259.
Panksepp, J. (1998). Affective neuroscience. Oxford: Oxford University Press.
Panksepp, J., & Northoff, G. (2009). The trans-species core SELF: The emergence of
active cultural and neuro-ecological agents through self-related processing within
subcortical-cortical midline networks. Consciousness and Cognition, 18, 193–215.
Prinz, W. (2004). Kritik des freien Willens: Bemerkungen über eine soziale Institution [Critique of free will: Remarks on a social institution]. Psychologische Rundschau, 55, 198–206.
Prinz, W. (2006). Free will as a social institution. In S. Pockett, W. P. Banks, & S. Gallagher (Eds.), Does consciousness cause behaviour? Cambridge, MA: MIT Press.
Prunier, G. (1995). The Rwanda crisis: History of a genocide. London: Hurst.
Reykovski, J. (2001). Justice motive and altruistic helping. In M. Ross & D. T. Miller
(Eds.), The justice motive in everyday life. New York: Cambridge University Press.
Richard, R., Van der Pligt, J., & De Vries, N. K. (1996). Anticipated affect and behavioral choice. Basic and Applied Social Psychology, 18, 111–129.
Rietveld, E. (2008). Situated normativity: The normative aspect of embodied cognition in
unreflective action. Mind, 117, 973–1001.
Rimé, B. (2005). Le partage social des émotions [Social sharing of emotions]. Paris: Presses
Universitaires de France.
Sanders, A. F. (1998). Elements of human performance: Reaction processes and attention in
human skill. Mahwah, NJ: Erlbaum.
Searle, J. R. (2007). Freedom and neurobiology. New York: Columbia University Press.
Shand, A. F. (1920). The foundations of character: A study of the emotions and sentiments.
London: Macmillan.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological
Review, 63, 129–138.
Slauerhoff, J. (1961). Verzamelde gedichten [Collected poems]. Vol. 2, p. 512. The Hague: Nijgh & Van Ditmar (present author's Dutch-English translation).
Sonnemans, J., & Frijda, N. H. (1995). The determinants of subjective emotional inten-
sity. Cognition and Emotion, 9, 483–507.
Stern, J. (2003). Terror in the name of God: Why religious militants kill. New York:
HarperCollins.
Toates, F. M. (1986). Motivational systems. Cambridge: Cambridge University Press.
Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: MIT Press.
Wilson, T. D. (2002). Strangers to ourselves. Cambridge, MA : Harvard University Press.
Withuis, J. (2005). Erkenning. Van oorlogstrauma naar klaagcultuur [Recognition: From
war trauma to the culture of complaining]. Amsterdam: Bezige Bij.
Withuis, J. (2008). Weest manlijk, zijt sterk: Pim Boellaards (1903–2001). Het leven van een verzetsheld [Be manly, be strong: Pim Boellaards (1903–2001). The life of a resistance hero]. Amsterdam: Bezige Bij.
12
Action Control by Implementation Intentions
SAM J. MAGLIO, PETER M. GOLLWITZER, AND GABRIELE OETTINGEN
INTRODUCTION
At the end of the 10-year Trojan War, its hero Odysseus was exhausted and desper-
ate to return home to Ithaca. The road home would prove to be as difficult as the war
itself, fraught with challenges and temptations. None of these better demonstrates
Odysseus’ effective action control than his encounter with the Sirens. Known for
their beautiful song—capable of tempting people into certain death—the Sirens
were located on the path between Odysseus’ ship and his home. The ship was approaching fast, and Odysseus devised a clever but simple plan: he ordered his crew to place
wax in their ears, rendering them incapable of hearing the Sirens’ song, and then to
tie him to the mast of the ship, from which he would be unable to escape regardless
of how strong the impending temptation might be. His ship neared the island of the
Sirens, and the alluring song proved to be even more tempting than Odysseus had
anticipated. He struggled to work free from the mast but remained securely in place.
Before long, they had successfully sailed beyond the Sirens and were one step closer
to attaining the goal of returning home safely.
In the modern era, this same principle of finding means by which to succeed in
goal pursuit has become a major theme of research within the domains of motiva-
tion and self-regulation (Gollwitzer and Moskowitz 1996; Oettingen and Gollwitzer
2001). This research has drawn an important distinction between the setting of
appropriate goals and the effective striving for goal attainment, and this chapter will
focus primarily upon the latter. To return to the example of Odysseus, he had already
chosen the goal of successfully returning home. In the service of this goal, he con-
sciously willed an explicit plan—having himself tied to the mast of his ship. From
there, however, he had in a sense surrendered his conscious intent to nonconscious
control: though his conscious will had changed (e.g., to succumb to the temptation
of the Sirens), the bounds of the rope remained, guiding his behavior without his
conscious intent. From our perspective, the rope provides a simple metaphor for the
form and function of planning that specifies when, where, and how to direct action
control in the service of long-term goals. This chapter will describe a specific (yet
broadly applicable) type of planning: the formation of implementation intentions,
or if-then plans that identify an anticipated goal-relevant situation (e.g., encounter-
ing a temptation) and link it to an appropriate goal-directed response (e.g., coping
with temptations). In so doing, we will first develop a definition of such plans and
elaborate upon their effects and effectiveness, especially as they operate outside
of conscious awareness. Subsequently, we turn our consideration to an emerging
topic within the domain of planning—the emotional precursors to the formation
of plans.
intentions forge a strong association between the specified opportunity and the
specified response (Webb and Sheeran 2007). As a result, the initiation of the
goal-directed response specified in the if-then plan becomes automated. By auto-
mated, we mean that this behavior exhibits features of automaticity, including
immediacy, efficiency, and lack of conscious intent. Said differently, the person facing the critical situation does not have to actively decide how to behave (e.g., succumb to the temptation or not). Like Odysseus, bound by ropes to the mast, such a person’s
previous act of conscious and deliberate will in forming the plan has precluded the need for
willing in the critical situation: the prescribed behavior is executed automatically. Such
automatic, predetermined behavior stands in stark contrast to the behavior of people who have
formed mere goal intentions.
Empirical evidence is consistent with this conception of strategic automaticity.
If-then planners act quickly (Gollwitzer and Brandstätter 1997, Study 3), deal effec-
tively with cognitive demands (Brandstätter, Lengfelder, and Gollwitzer 2001), and
do not need to consciously intend to act at the critical moment (Sheeran, Webb,
and Gollwitzer 2005, Study 2). In addition to this behavioral readiness, research on
implementation intentions has also observed a perceptual readiness for the speci-
fied critical cues (e.g., Aarts, Dijksterhuis, and Midden 1999; Webb and Sheeran
2007). In sum, implementation intentions allow the person to readily see and seize
good opportunities to move toward their goals. Forming if-then plans thus auto-
mates goal striving (Gollwitzer and Schaal 1998) by strategically delegating the
control of goal-directed responses to preselected situational cues with the explicit
purpose of reaching one’s goals. The cool, rational agent engages an a priori strategy
to take conscious control away from the hot, vulnerable future self.
and Sheeran 2000), recycle (Holland, Aarts, and Langendam 2006), and engage
in physical exercise (Milne, Orbell, and Sheeran 2002) were all more readily acted
upon when people had furnished these goals with implementation intentions.
Implementation intentions also were found to help attainment of goal intentions
where it is easy to forget to act (e.g., regular intake of vitamin pills; Sheeran and
Orbell 1999).
Goal Shielding. Ongoing goals require that people keep striving for the goal over
an extended period of time, and implementation intentions can facilitate the shield-
ing of such goal striving from interferences that stem from inside or outside the
person (Gollwitzer and Schaal 1998). For instance, imagine a person who wants to
avoid being unfriendly to a friend who is known to make sudden outrageous requests
during casual conversations. To meet the goal of having an undisrupted casual con-
versation with her friend, the person may form one of the following implementation
intentions. She can focus on preventing the unwanted response of being unfriendly
by forming the implementation intention either to ignore the unfriendly request or
to stay calm in the face of the request. Alternatively, she can focus on strengthening
the striving for the focal goal (i.e., bringing the casual conversation to a successful
ending) by planning it out in detail; for instance, she may form if-then plans that
cover how the casual conversation with the friend is to unfold from the beginning
to its successful ending (Bayer, Gollwitzer, and Achtziger 2010).
Allocating Resources. An additional problem in goal striving is the failure
to disengage from one goal in order to direct limited resources to other goals.
Implementation intentions have been found to facilitate such disengagement and
switching. Henderson, Gollwitzer, and Oettingen (2007) showed that implemen-
tation intentions can be used to curb the escalation of behavioral commitment
commonly observed when people experience failure with a chosen strategy of goal
striving. Furthermore, as implementation intentions subject behavior to the direct
control of situational cues, the self should not be involved when action is controlled
by implementation intentions. Therefore, the self should not become depleted
(Muraven and Baumeister 2000) when task performance is regulated by implemen-
tation intentions, and thus individuals using implementation intentions should not
show overextension effects in their limited cognitive resources. Within different
paradigms, participants who had used implementation intentions to regulate behavior in a first task did not show reduced self-regulatory capacity (i.e., depletion) in a
subsequent task (e.g., Webb and Sheeran 2003). Thus, implementation intentions
successfully preserved self-regulatory resources as demonstrated by greater persis-
tence on subsequent difficult tasks (i.e., solving difficult anagrams).
Special Challenges and Populations. Recent research has shown that imple-
mentation intentions ameliorate action control problems even when goal striving is
limited by conditions that seem quite resistant to change by self-regulatory efforts
(summary by Gollwitzer and Oettingen 2011). For instance, it was observed that
implementation intentions facilitated achieving high scores on math and intelligence
tests (Bayer and Gollwitzer 2007), even though such performances are known to be
limited by a person’s respective capabilities. Implementation intentions have also
helped people succeed in sports competitions (Achtziger, Gollwitzer, and Sheeran
2008, Study 2) and negotiations over limited resources (Trötschel and Gollwitzer
Action Control by Implementation Intentions 225
2007), even though in such competitive situations a person’s goal striving is limited
by the opponents’ behavior. Moreover, implementation intentions were found to
help people’s goal striving even in cases where effective goal striving is threatened
by competing habitual responses; this seems to be true no matter whether these
automatic competing responses are behavioral (e.g., Cohen et al. 2008; Mendoza,
Gollwitzer, and Amodio 2010), cognitive (e.g., Gollwitzer and Schaal 1998; Stewart
and Payne 2008), or affective (e.g., Schweiger Gallo et al. 2009) in nature. These lat-
ter findings suggest that forming implementation intentions turns top-down action
control by goals into bottom-up control by the situational cues specified in the if-
component of an implementation intention (Gilbert et al. 2009), and they explain
why special samples that are known to suffer from ineffective effortful control of
their thoughts, feelings, and actions still benefit from forming implementation
intentions. Examples include heroin addicts during withdrawal and schizophrenic
patients (Brandstätter, Lengfelder, and Gollwitzer 2001, Studies 1 and 2), fron-
tal lobe patients (Lengfelder and Gollwitzer 2001), and children with ADHD
(Gawrilow and Gollwitzer 2008).
Summary
In this section, we have described how forming implementation intentions—speci-
fying the where, when, and how of performing a goal-directed response—facilitates
the control of goal-relevant action. In going beyond a mere goal intention, the per-
son who forms an implementation intention creates a crucial link between a critical
situational cue and a desired behavioral response. The result is that the prescribed
behavior is executed automatically (i.e., immediate, efficient, and without further
conscious intent), preventing the fallible person in the hazardous situation from
straying from the desired path. As Odysseus was bound to the mast of his ship by his
“plan,” so too do implementation intentions determine behavioral responding ahead
of time. The result, with respect to the overarching goal, is an enhanced likelihood
of successfully attaining that goal. This is accomplished by any of several applica-
tions of implementation intentions, including to issues of getting started, shielding
the goal from competing concerns, appropriately allocating one’s limited resources
toward the goal, and even overriding special challenges (e.g., habitual problems)
and the difficulties faced by special populations (e.g., children with ADHD). In
sum, the self-regulatory exercise of furnishing goal intentions with implementation
intentions provides a simple yet effective means of managing one’s goal striving in
the interest of achieving desired outcomes.
PRECURSORS TO PLANNING
As documented in the previous section, research spanning more than two decades
has offered a clear prescription for people committed to reaching a desired
goal: the formation of if-then plans to enhance goal striving. That is, the primary
empirical paradigm has people furnish goal intentions with if-then plans and then
observes the benefits they enjoy for goal striving. Despite identifying a host of factors
that contribute to the downstream consequences of forming these implementation
intentions, relatively little attention has been devoted to understanding the circum-
stances under which people may spontaneously generate them. In this section,
we offer an initial attempt to reverse this trend. We suggest that the experience of
certain specific (or discrete) emotions provides an insight into understanding why
and how people may engage in the act of planning on their own. To develop our
theoretical perspective, we first define what we mean by discrete emotion, relate
emotion to an established precursor to plan formation, and then use this connec-
tion to make predictions for behavior. As we will suggest, the relation between emo-
tion and planning provides a unique opportunity to investigate the interrelations
among motivation, emotion, cognition, and action. Ultimately, by capitalizing on
emotional experience, we suggest that these feeling states may play an important
role in the goal pursuit process.
Emotions Reconsidered
To understand how different negative emotions can have different consequences—
good or bad—we first trace negative emotional experience back to its source. As we
mentioned earlier, discrete negative emotions (like sadness and anger) are concep-
tualized as discrete because they arise from fundamentally different types of sources
and activate different patterns of cognition and behavior. Let’s take two goal-relevant
examples, both related to buying a car. In the first scenario, imagine driving across
town to your favorite dealership with your heart set on buying the newest model of
your favorite make of car. You can practically feel the soft new leather seats and whiff
that new car smell. But, when you arrive, you learn that the make you were hoping for
has been discontinued. Driving back home, bemoaning your current car’s cracked
windshield and puny horsepower, it isn’t hard to intuit a feeling state of sadness. On
the other hand, your experience at the dealership could have been much different.
Instead, imagine being told by the shifty salesman in a plaid jacket that the price of the
new model has been increased as a result of the inclusion of necessities—rustproofing,
customized floor mats—and that the price is nonnegotiable. Certain that
the only function of these necessities is to boost his commission, you storm out of
the dealership. You’re again driving home, again in the same dull car you were hop-
ing to replace, but the feeling state is now different—it is one of anger.
How might the patterns of thought in response to the events at the dealership
differ between the two situations? Further, how will you respond—in thought and
action—to being cut off in traffic on your drive back home depending on whether
you just experienced scenario one or two? In response to discrete negative emo-
tions, research has suggested that the patterns of thought prompted by an emotion
extend beyond the emotion elicitor to novel situations and judgments. Within this
tradition, no other pair of emotions has produced such discrepant results on judg-
ment tasks as sadness and anger. This carryover effect has been documented in the
divergent effects of sadness and anger on a host of cognitive assessments: causal
judgment (Keltner, Ellsworth, and Edwards 1993), stereotyping (Bodenhausen,
Sheppard, and Kramer 1994), and expectations and likelihood estimations
(DeSteno et al. 2004; DeSteno et al. 2000).
But why do we observe these carryover effects? And why do they differ for sadness
and anger? The appraisal tendency framework (Lerner and Keltner 2000, 2001) sug-
gests a specific mechanism by which the experience of incidental emotion impacts
they enter into the actional phase. Rather than objective assessment, cognition in
both postdecisional phases is oriented toward effective goal striving, constituting an
implemental mindset (for reviews, see Gollwitzer and Bayer 1999; Gollwitzer, Fujita,
and Oettingen 2004).
Empirical evidence has provided support for this theory by probing the contents
and patterns of thought characteristic of deliberative and implemental mindsets. In
order to facilitate successful goal selection, the deliberative mindset is characterized
by both voluntary generation of and selective attention toward outcome (i.e., goal)
value—specifically, its desirability and feasibility. Conversely, the implemental mind-
set generates and attends to information regarding situational specifics (the when,
where, and how) for initiating goal-directed behavior (Gollwitzer, Heckhausen,
and Steller 1990; Puca and Schmalt 2001; Taylor and Gollwitzer 1995). A second
theme of this research has considered information-processing differences between
the two mindsets. Relative to the deliberative mindset, the implemental mindset is
more susceptible to a number of cognitive biases, including illusory control over the
situation (Gollwitzer and Kinney 1989), reduced perceived vulnerability to prob-
lems (Taylor and Gollwitzer 1995), stronger attitudes (Henderson, de Liver, and
Gollwitzer 2008), and decreased openness to information (Fujita, Gollwitzer, and
Oettingen 2007; Heckhausen and Gollwitzer 1987). Overall, the evidence speaks
to the evenhanded processing of outcome-relevant information in the deliberative
mindset and biased appraisal driving goal-directed action initiation in the imple-
mental mindset.
From both a theoretical and methodological perspective, it is important to note
a central mechanism by which mindsets operate. The act of either deliberating over
a choice or trying to enact a choice that has been made activates separable cog-
nitive procedures associated with those separate tasks, and it is via this activation
that mindset effects can generalize to new situations. The predominant paradigm in
this tradition asks participants to first either elaborate upon an unresolved personal
problem or plan the implementation of a chosen project (creating a deliberative
or implemental mindset, respectively). Subsequently, the participant performs the
ostensibly unrelated task to measure the effect of the induced mindset on general
cognitive style (e.g., perceived control over a random event; Gollwitzer and Kinney
1989). As such, deliberative and implemental mindsets serve as procedural primes,
making salient distinct frameworks by which to interpret, assess, and act upon new
information.
The experience of sadness, we propose, will prompt deliberative consideration of a goal,
and the experience of anger will prompt implemental consideration. If anger indeed
engenders the same patterns of thought
(e.g., biases) as the implemental mindset, it should similarly orient people toward
identifying opportunities to enact goal-directed action (see Gollwitzer, Heckhausen,
and Steller 1990). As we have already discussed, linking critical situations to goal-
directed responses constitutes if-then planning, or formation of implementation
intentions. The deliberative mindset, conversely, is oriented toward outcomes (“Is
this goal worth pursuing?”) rather than behaviors (“When/Where/How can I work
toward attaining this goal?”). Beyond the formation of plans, an implemental (ver-
sus deliberative) mindset should additionally enhance the effectiveness with which
existing plans are enacted. As we have described, the implemental mindset is charac-
terized by a general goal-enhancing bias (e.g., enhanced self-confidence). One con-
sequence of such bias is that when an opportunity for planned behavior execution is
made available, it is immediately taken. On the other hand, a person in a deliberative
mindset might instead reconsider whether this behavior (or even this goal) is in fact
the best course of action to take, compromising plan implementation.
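To make the contrast concrete, the structural difference between a goal intention and an if-then plan can be sketched as a pair of simple data types. This is an illustrative sketch only; the field names are our own, and the example cue and response are borrowed from the go/no-go paradigm described later in this chapter, not from actual study materials.

```python
from dataclasses import dataclass

@dataclass
class GoalIntention:
    """An outcome-oriented intention: 'I want to achieve X.'"""
    outcome: str

@dataclass
class ImplementationIntention:
    """An if-then plan linking a critical situational cue to a
    goal-directed response."""
    cue: str       # the critical situation (the "if" part)
    response: str  # the prescribed behavior (the "then" part)

# Illustrative plan in the canonical if-then format:
plan = ImplementationIntention(
    cue="the number 3 appears",
    response="press the 'x' key particularly fast",
)
print(f"If {plan.cue}, then I will {plan.response}.")
# → If the number 3 appears, then I will press the 'x' key particularly fast.
```

The point of the structure is that, unlike a bare goal intention, the plan specifies in advance both the triggering situation and the response, which is what allows execution to proceed without further deliberation.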
EMPIRICAL SUPPORT
With the three studies reported next, our aim was to test this emotion as mind-
set hypothesis across two goal- and planning-relevant domains. For the first two
studies, we drew upon an established measure to assess degree of implemental
thought: the formation of plans (Gollwitzer 1990). The first study induced conscious
emotion and examined whether anger yields the formation of more implementation
intentions than sadness. In Study 2, we conceptually replicated the effects of Study 1
using a different (nonconscious) emotion manipulation prior to a modified measure
of plan formation. In our third study, we examined
how anger and sadness influence the execution of behavior as prescribed by pre-
existing plans.
other negative emotions (fearful, nervous), and two positive emotions (happy,
content). Subsequently, participants recalled the academic goal they had named
earlier and then performed a sentence stem completion task with respect to that
goal, which served as a measure of plan formation (Oettingen, Pak, and Schnetter
2001). The task presented them with eight different incomplete sentence stems
and asked them first to review each of the stems and then select and complete the
four that best matched their thinking about their goal by filling in the correspond-
ing blank lines. Four of the phrases constituted implementation intentions (e.g.,
“Specifically, . . . ”), whereas the other four related to broader goal consideration
(e.g., “All in all, . . . ”).
The results from the manipulation check indicated that the perspective-taking
task successfully induced discrete sadness in the sadness condition, discrete anger
in the anger condition, and slightly positive affect in the neutral affect condition.
Based upon selection of sentence stems, each participant received a score on the
planning measure from 0 to 4, with higher scores indicating more implementa-
tion intentions formed. Consistent with our hypothesis, participants in the anger
condition formed more plans than those in the sadness condition, with plan for-
mation among those in the neutral condition falling between the two emotion
conditions. Thus, as predicted, the experience of anger prompted a greater ten-
dency toward implemental thought (i.e., plan formation) than sadness in preparing
goal-directed action.
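The scoring rule for this plan-formation measure can be sketched in a few lines of code. Only the 0-to-4 counting rule comes from the measure described above; the stems and the sample participant below are invented for illustration.

```python
def planning_score(selected_stems):
    """Score the plan-formation measure: a participant selects and
    completes four of eight stems; the score counts how many of the
    four are implemental (implementation-intention) stems, giving
    a value from 0 to 4."""
    if len(selected_stems) != 4:
        raise ValueError("participants complete exactly four stems")
    return sum(1 for kind, _ in selected_stems if kind == "implemental")

# An invented participant who chose three implemental stems and one
# broader goal-consideration stem:
chosen = [
    ("implemental", "Specifically, I will study each evening after dinner."),
    ("implemental", "If I feel like quitting, then I will recall my exam date."),
    ("implemental", "As soon as class ends, I will head to the library."),
    ("goal", "All in all, passing this course matters to my degree."),
]
print(planning_score(chosen))  # → 3
```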
the culpability of specific individuals. In the sadness priming condition, the ques-
tions related to the tragic aspects of the earthquake and its unpredictability. Next,
all participants were asked to indicate the extent to which the article had made
them angry and sad.
Subsequently, participants were asked to recall the goal they had named earlier
and then perform a sentence stem completion task with respect to that goal. The task
presented them with four different incomplete sentence stems and asked them first
to review each of the stems and then select and complete the one that best matched
their thinking about their goal by filling in the corresponding blank lines. Two of the
stems were formatted such that they explicitly linked situations to behaviors (e.g.,
“If _____ happens, then I will do _____”), whereas the other two identified only
outcomes and the potential value they offered (e.g., “If _____ is achieved, it will
_____”). The former were meant to represent the implemental mindset, whereas
the latter reflected the deliberative mindset. Thus, all participants chose only one
type of structure to represent their conceptualization of the goal.
The results from the manipulation check indicated that our nonconscious emo-
tion induction was successful (i.e., no differences in conscious sadness and anger
were observed between emotion conditions). Based upon their selection of sen-
tence stems, participants were each categorized as utilizing either a deliberative or
an implemental structure (i.e., forming or not forming an implementation inten-
tion). Again, the results for this task supported our emotion as mindset hypoth-
esis: those in the anger-prime condition were more than three times more likely
than those in the sadness-prime condition to choose an implementation intention.
Importantly, these results suggest that conscious and nonconscious emotions have
similar consequences for the planning of goal-directed action. Because participants
in the two conditions read the same newspaper article and rated their conscious
emotions similarly, the observed difference in degree of implemental thinking must
be due solely to the leading questions that followed the article. Thus, our second
study suggests that activation of the construct of sadness or anger is sufficient to
prompt goal conceptualization in a manner consistent with the deliberative or
implemental mindset, respectively.
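The "more than three times more likely" comparison amounts to a simple ratio of condition proportions. The cell counts below are invented to illustrate the arithmetic and are not the study's actual data.

```python
def likelihood_ratio(chose_implemental_a, n_a, chose_implemental_b, n_b):
    """Ratio of the proportions of participants choosing the
    implemental (if-then) stem structure in two priming conditions."""
    return (chose_implemental_a / n_a) / (chose_implemental_b / n_b)

# Invented example: 26 of 40 anger-primed vs. 8 of 40 sadness-primed
# participants choose the if-then structure.
ratio = likelihood_ratio(26, 40, 8, 40)
print(round(ratio, 2))  # → 3.25, i.e., more than three times as likely
```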
In sum, these first two studies provide support for the emotion as mindset
hypothesis in terms of anger (versus sadness) inducing more preactional implemen-
tal thought. Specifically, by forming more plans for how to act on their goals, people
made to feel angry showed more behavior characteristic of a postdecisional—but
preactional—implemental orientation. Consistent with past theorizing described
earlier, we consider the formation of such implementation intentions to reflect a
conscious act of will with implications for future behavior: when people later
encounter the critical cue specified by their plans, they will execute the associated
behavior immediately and without conscious reflection. However, in the stud-
ies presented thus far, this claim amounts to little more than idle speculation. We
believe anger initiates a general implemental mindset, applicable to both the preac-
tional and actional stages of the postdecisional action phase. Therefore, in the next
study, we tested the latter claim: whether conscious emotion (i.e., sadness or anger)
would influence the automatic, nonconscious execution of behavioral scripts pre-
scribed by planning.
Action Control by Implementation Intentions 233
Plan Execution
Having established in the first two studies that the experience of state anger (versus
sadness) makes a person more likely to form an implementation intention, we next turn
to the question of how emotion influences acting upon existing plans. An implemen-
tal (versus deliberative) mindset should enhance the effectiveness with which existing
plans are enacted. As we have described, the implemental mindset is characterized by a
general goal-enhancing bias (e.g., increased self-confidence). One consequence of such
bias is that when an opportunity for planned behavior execution is made available, it is
immediately taken (i.e., occurs nonconsciously). On the other hand, a person in a delib-
erative mindset might instead reconsider whether this behavior (or even this goal) is in
fact the best course of action to take. This interruption of conscious deliberation hin-
ders plan execution. Thus, as an implemental (versus deliberative) mindset facilitates
the efficient execution of planned behavior, and as the experience of anger operates like
an implemental mindset, anger (versus sadness) should therefore enhance the beneficial
effect of planning by better enabling efficient action initiation. In an extension
of our emotion as mindset hypothesis, we predict that a conscious anger (versus
sadness) induction will expedite reaction times in responding to critical trials of a go/
no-go task as specified by predetermined planning. We tested this prediction using a go/
no-go task consistent with past research (Brandstätter et al. 2001). Participants were
instructed to press the “x” key as quickly as possible when numbers—but not letters—
were presented. They were assigned to one of six conditions in a 3 (sadness, anger, or
neutral affect) × 2 (goal intention or implementation intention) factorial design.
As in the first study, the cover story described the study as an experiment on
perspective taking. First, ostensibly to help their performance during a later ses-
sion of the task, participants were provided with one of two sets of instructions to
facilitate their responding to numbers. This constituted the intention manipulation.
All participants first said to themselves, “I want to react to numbers as quickly as
possible.” Then, half of the participants were instructed to say the following phrase
to themselves three times: “I will particularly think of the number 3” (goal inten-
tion). The other half of the participants repeated this phrase three times: “And if the
number 3 appears, then I will press the ‘x’ key particularly fast” (implementation
intention). All participants then performed one of three perspective-taking tasks
(emotion manipulations) and then rated their feeling states, both in a manner iden-
tical to Study 1. Following the emotion manipulation, the main session of the go/
no-go task was presented, lasting seven minutes.
As in Study 1, the manipulation check indicated that our emotion induction pro-
cedure successfully elicited differences in experiencing discrete sadness or anger.
We then calculated for each participant the mean reaction times to both neutral
numbers and the critical number 3. In general, participants responded faster to the
critical number 3 relative to the neutral numbers and faster to all numbers in the
implementation intention condition relative to those in the goal intention condi-
tion. Additionally, these main effects were qualified by an interaction between the
two factors such that responses were fastest to the critical numbers by those in the
implementation intention condition. This finding provided a replication of previ-
ous basic research on implementation intention effects.
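The intention-by-trial-type pattern just described can be sketched numerically. The reaction times below are invented to reproduce the qualitative result (fastest responses to the critical number 3 under an implementation intention); they are not the study's data, and the sketch omits the emotion factor to isolate the replicated interaction.

```python
# Invented mean reaction times (ms), collapsed across emotion conditions:
mean_rt = {
    ("goal intention", "neutral"): 520,
    ("goal intention", "critical 3"): 505,
    ("implementation intention", "neutral"): 500,
    ("implementation intention", "critical 3"): 455,
}

# The critical-number advantage (neutral RT minus critical RT) per condition;
# the interaction shows up as a larger advantage under an implementation
# intention than under a mere goal intention.
advantage = {
    cond: mean_rt[(cond, "neutral")] - mean_rt[(cond, "critical 3")]
    for cond in ("goal intention", "implementation intention")
}
print(advantage)  # → {'goal intention': 15, 'implementation intention': 45}
```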
or angry. Evidence for our emotion as mindset hypothesis from such a paradigm
would more directly inform how people respond to emotional triggers in the envi-
ronment associated with their goals.
CONCLUSION
In sum, we have presented implementation intentions as action plans that automate
efficient, goal-directed responding. The breadth of their effects has been well docu-
mented and has prompted the need to understand contextual factors that might
influence how ready people are to generate and use implementation intentions. The
research described here identified discrete emotional experience—specifically, that
of anger—as one contextual factor that gives rise to the formation and effective exe-
cution of implementation intentions. More broadly, in explicating the emotion as
mindset hypothesis, we provide an integration of discrete emotion theory and the
self-regulation of goal striving. Parsing the realm of negative emotion, we proposed
sadness and anger as distinct emotional experiences, each defined by separable
cognitive and motivational components, corresponding to the successive stages
of the mindset model of action phases: the deliberative and implemental mindsets,
respectively. The findings from three studies supported this hypothesis, as anger
elicited greater planning for goal-directed behavior and superior plan effectiveness
relative to state sadness. These findings should inform future research as it continues
to explore the role of emotional experience in action control.
REFERENCES
Aarts, H., A. Dijksterhuis, and C. Midden. 1999. To plan or not to plan? Goal achievement
or interrupting the performance of mundane behaviors. European Journal of Social
Psychology 29:971–979.
Achtziger, A., Peter M. Gollwitzer, and Paschal Sheeran. 2008. Implementation intentions
and shielding goal striving from unwanted thoughts and feelings. Personality and Social
Psychology Bulletin 34:381.
Bagozzi, Richard P., Hans Baumgartner, and Rik Pieters. 1998. Goal-directed emotions.
Cognition and Emotion 12:1–26.
Bandura, A. 1997. Self-efficacy: The exercise of control. New York: Freeman.
Baumeister, Roy F., Kathleen D. Vohs, C. Nathan DeWall, and Liqing Zhang. 2007. How
emotion shapes behavior: Feedback, anticipation, and reflection, rather than direct
causation. Personality and Social Psychology Review 11:167–203.
Bayer, U. C., and P. M. Gollwitzer. 2007. Boosting scholastic test scores by willpower: The
role of implementation intentions. Self and Identity 6:1–19.
Bayer, U. C., P. M. Gollwitzer, and A. Achtziger. 2010. Staying on track: Planned goal striv-
ing is protected from disruptive internal states. Journal of Experimental Social Psychology
46:505–514.
Berkowitz, L., and Eddie Harmon-Jones. 2004. Toward an understanding of the determi-
nants of anger. Emotion 4:107–130.
Bless, Herbert, Gerald L. Clore, Norbert Schwarz, Verena Golisano, Christina Rabe, and
Marcus Wölk. 1996. Mood and the use of scripts: Does a happy mood really lead to
mindlessness? Journal of Personality and Social Psychology 71:665–679.
Bodenhausen, Galen V., Lori A. Sheppard, and Geoffrey P. Kramer. 1994. Negative affect
and social judgment: The differential impact of anger and sadness. European Journal of
Social Psychology. Special Issue: Affect in Social Judgments and Cognition 24:45–62.
Bower, Gordon H. 1981. Mood and memory. American Psychologist 36:129–148.
Brandstätter, V., A. Lengfelder, and Peter M. Gollwitzer. 2001. Implementation intentions
and efficient action initiation. Journal of Personality and Social Psychology 81:946–960.
Carver, C. S. 2004. Negative affects deriving from the behavioral approach system. Emotion
4:3–22.
Carver, C. S., and Eddie Harmon-Jones. 2009. Anger is an approach-related affect: Evidence
and implications. Psychological Bulletin 135:183–204.
Carver, C. S., and M. F. Scheier. 1990. Origins and functions of positive and negative
affect: A control-process view. Psychological Review 97:19–35.
Carver, C. S., and Teri L. White. 1994. Behavioral inhibition, behavioral activation, and
affective responses to impending reward and punishment: The BIS/BAS Scales. Journal
of Personality and Social Psychology 67:319–333.
Clore, G. L., and J. R. Huntsinger. 2009. How the object of affect guides its impact. Emotion
Review 1:39–54.
Cohen, A. L., U. C. Bayer, A. Jaudas, and Peter M. Gollwitzer. 2008. Self-regulatory strat-
egy and executive control: Implementation intentions modulate task switching and
Simon task performance. Psychological Research 72:12–26.
Custers, Ruud, and Henk Aarts. 2005. Positive affect as implicit motivator: On the non-
conscious operation of behavioral goals. Journal of Personality and Social Psychology
89:129–142.
Damasio, A. R. 1994. Descartes’ error: Emotion, reason, and the human brain. New York:
Grosset/Putnam.
DeSteno, David, Richard E. Petty, Derek D. Rucker, Duane T. Wegener, and Julia
Braverman. 2004. Discrete emotions and persuasion: The role of emotion-induced
expectancies. Journal of Personality and Social Psychology 86:43–56.
DeSteno, David, Richard E. Petty, Duane T. Wegener, and Derek D. Rucker. 2000. Beyond
valence in the perception of likelihood: The role of emotion specificity. Journal of
Personality and Social Psychology 78:397–416.
Frijda, Nico H. 1986. The emotions. New York: Cambridge University Press.
Fujita, Kentaro, Peter M. Gollwitzer, and Gabriele Oettingen. 2007. Mindsets and
pre-conscious open-mindedness to incidental information. Journal of Experimental
Social Psychology 43:48–61.
Gawrilow, C., and Peter M. Gollwitzer. 2008. Implementation intentions facilitate response
inhibition in children with ADHD. Cognitive Therapy and Research 32:261–280.
Gilbert, S., Peter M. Gollwitzer, A. Cohen, P. Burgess, and Gabriele Oettingen. 2009.
Separable brain systems supporting cued versus self-initiated realization of delayed
intentions. Journal of Experimental Psychology: Learning, Memory, and Cognition
35:905–915.
Gollwitzer, Peter M. 1990. Action phases and mind-sets. In Handbook of motivation and
cognition: Foundations of social behavior, edited by E. T. Higgins and R. M. Sorrentino.
New York: Guilford Press.
Gollwitzer, Peter M. 1993. Goal achievement: The role of intentions. European Review of
Social Psychology 4:141–185.
Gollwitzer, Peter M. 1999. Implementation intentions: Strong effects of simple plans.
American Psychologist 54:493–503.
Gollwitzer, Peter M. 2012. Mindset theory of action phases. In Handbook of theories in
social psychology, edited by P. Van Lange, A. W. Kruglanski, and E. T. Higgins. London:
Sage.
Gollwitzer, Peter M., and Ute Bayer. 1999. Deliberative versus implemental mindsets in
the control of action. In Dual-process theories in social psychology, edited by S. Chaiken
and Y. Trope. New York: Guilford Press.
Gollwitzer, Peter M., and V. Brandstätter. 1997. Implementation intentions and effective
goal pursuit. Journal of Personality and Social Psychology 73:186–199.
Gollwitzer, Peter M., Kentaro Fujita, and Gabriele Oettingen. 2004. Planning and the
implementation of goals. In Handbook of self-regulation: Research, theory, and applica-
tions, edited by R. F. Baumeister and K. D. Vohs. New York: Guilford Press.
Gollwitzer, Peter M., Heinz Heckhausen, and Birgit Steller. 1990. Deliberative and imple-
mental mind-sets: Cognitive tuning toward congruous thoughts and information.
Journal of Personality and Social Psychology 59:1119–1127.
Gollwitzer, Peter M., and Ronald F. Kinney. 1989. Effects of deliberative and imple-
mental mind-sets on illusion of control. Journal of Personality and Social Psychology
56:531–542.
Gollwitzer, Peter M., and G. B. Moskowitz. 1996. Goal effects on action and cognition.
In Social psychology: Handbook of basic principles, edited by E. T. Higgins and A. W.
Kruglanski, 361–399. New York: Guilford Press.
Gollwitzer, Peter M., and Gabriele Oettingen. 2011. Planning promotes goal striving. In
Handbook of self-regulation: Research, theory, and applications, edited by K. D. Vohs and
R. F. Baumeister, 2nd ed., 162–185. New York: Guilford.
Gollwitzer, Peter M., and B. Schaal. 1998. Metacognition in action: The importance of
implementation intentions. Personality and Social Psychology Review 2:124–136.
Gollwitzer, Peter M., and Paschal Sheeran. 2006. Implementation intentions and goal
achievement: A meta-analysis of effects and processes. Advances in Experimental Social
Psychology 38:69–119.
Gross, J. J. 1998. The emerging field of emotion regulation: An integrative review. Review
of General Psychology 2:271–299.
Gross, J. J. 2007. Handbook of emotion regulation. New York: Guilford Press.
Harmon-Jones, Eddie. 2003. Anger and the behavioral approach system. Personality and
Individual Differences 35:995–1005.
Harmon-Jones, Eddie, and John J. B. Allen. 1998. Anger and frontal brain activity: EEG
asymmetry consistent with approach motivation despite negative affective valence.
Journal of Personality and Social Psychology 74:1310–1316.
Heckhausen, H. 1991. Motivation and action. Berlin: Springer.
Heckhausen, H., and Peter M. Gollwitzer. 1987. Thought contents and cognitive functioning
in motivational versus volitional states of mind. Motivation and Emotion 11:101–120.
Hemenover, Scott H., and Shen Zhang. 2004. Anger, personality, and optimistic stress
appraisals. Cognition and Emotion 18:363–382.
Henderson, Marlone D., Peter M. Gollwitzer, and Gabriele Oettingen. 2007.
Implementation intentions and disengagement from a failing course of action. Journal
of Behavioral Decision Making 20:81–102.
Henderson, Marlone D., Yael de Liver, and Peter M. Gollwitzer. 2008. The effects of an
implemental mind-set on attitude strength. Journal of Personality and Social Psychology
94:396–411.
Higgins, E. Tory. 1997. Beyond pleasure and pain. American Psychologist 52:1280–1300.
Holland, Rob W., H. Aarts, and D. Langendam. 2006. Breaking and creating habits on the
working floor: A field experiment on the power of implementation intentions. Journal
of Experimental Social Psychology 42:776–783.
Janoff-Bulman, R., and P. Brickman. 1982. Expectations and what people learn from fail-
ure. In Expectations and actions: Expectancy-value models in psychology, edited by N. T.
Feather, 207–237. Hillsdale, NJ: Erlbaum.
Kappes, H. B., Gabriele Oettingen, D. Mayer, and Sam J. Maglio. 2011. Sad mood pro-
motes self-initiated mental contrasting of future and reality. Emotion 11:1206–1222.
Keltner, Dacher, Phoebe C. Ellsworth, and Kari Edwards. 1993. Beyond simple pes-
simism: Effects of sadness and anger on social perception. Journal of Personality and
Social Psychology 64:740–752.
Lazarus, Richard S. 1991. Emotion and adaptation. New York: Oxford University Press.
Lengfelder, A., and Peter M. Gollwitzer. 2001. Reflective and reflexive action control in
patients with frontal brain lesions. Neuropsychology 15:80–100.
Lerner, Jennifer S., and Dacher Keltner. 2000. Beyond valence: Toward a model of
emotion-specific influences on judgement and choice. Cognition and Emotion. Special
Issue: Emotion, Cognition, and Decision Making 14:473–493.
Lerner, Jennifer S., and Dacher Keltner. 2001. Fear, anger, and risk. Journal of Personality
and Social Psychology 81:146–159.
Lerner, Jennifer S., Deborah A. Small, and George Loewenstein. 2004. Heart strings and
purse strings: Carryover effects of emotions on economic decisions. Psychological
Science 15:337–341.
Loewenstein, George. 1996. Out of control: Visceral influences on behavior. Organizational
Behavior and Human Decision Processes 65:272–292.
Mendoza, Saaid A., Peter M. Gollwitzer, and David M. Amodio. 2010. Reducing the
expression of implicit stereotypes: Reflexive control through implementation inten-
tions. Personality and Social Psychology Bulletin 36:512–523.
Milne, S., S. Orbell, and Paschal Sheeran. 2002. Combining motivational and volitional
interventions to promote exercise participation: Protection motivation theory and
implementation intentions. British Journal of Health Psychology 7:163–184.
Muraven, M., and R. F. Baumeister. 2000. Self-regulation and depletion of limited
resources: Does self-control resemble a muscle? Psychological Bulletin 126:247–259.
Nolen-Hoeksema, Susan, Jannay Morrow, and Barbara L. Fredrickson. 1993. Response
styles and the duration of episodes of depressed mood. Journal of Abnormal Psychology
102:20–28.
Oettingen, Gabriele. 2000. Expectancy effects on behavior depend on self-regulatory
thought. Social Cognition. Special Issue: Social Ignition: The Interplay of Motivation and
Social Cognition 18:101–129.
Oettingen, Gabriele. 2012. Future thought and behavior change. European Review of Social
Psychology 23:1–63.
Oettingen, Gabriele, and Peter M. Gollwitzer. 2001. Goal setting and goal striving. In
Intraindividual Processes. Vol. 1 of Blackwell Handbook of Social Psychology, edited by
A. Tesser and N. Schwarz. Oxford: Blackwell.
Oettingen, Gabriele, G. Hönig, and Peter M. Gollwitzer. 2000. Effective self-regulation of
goal attainment. International Journal of Educational Research 33:705–732.
Oettingen, Gabriele, D. Mayer, A. T. Sevincer, E. J. Stephens, H. J. Pak, and M. Hagenah.
2009. Mental contrasting and goal commitment: The mediating role of energization.
Personality and Social Psychology Bulletin 35:608–622.
Oettingen, Gabriele, H. Pak, and K. Schnetter. 2001. Self-regulation of goal setting:
Turning free fantasies about the future into binding goals. Journal of Personality and
Social Psychology 80:736–753.
Oettingen, Gabriele, and E. J. Stephens. 2009. Fantasies and motivationally intelligent
goal setting. In The psychology of goals, edited by G. B. Moskowitz and H. Grant. New
York: Guilford Press.
Orbell, S., S. Hodgkins, and Paschal Sheeran. 1997. Implementation intentions and the
theory of planned behavior. Personality and Social Psychology Bulletin 23:945–954.
Orbell, S., and Paschal Sheeran. 2000. Motivational and volitional processes in action
initiation: A field study of the role of implementation intentions. Journal of Applied Social
Psychology 30:780–797.
Ortony, Andrew, Gerald L. Clore, and Allan Collins. 1988. The cognitive structure of emo-
tions. New York: Cambridge University Press.
Peterson, Carly K., Alexander J. Shackman, and Eddie Harmon-Jones. 2008. The role of
asymmetrical frontal cortical activity in aggression. Psychophysiology 45:86–92.
Puca, Rosa Maria, and Heinz-Dieter Schmalt. 2001. The influence of the achievement
motive on spontaneous thoughts in pre- and postdecisional action phases. Personality
and Social Psychology Bulletin 27:302–308.
Schwarz, Norbert, and Gerald L. Clore. 1983. Mood, misattribution, and judgments of
well-being: Informative and directive functions of affective states. Journal of Personality
and Social Psychology 45:513–523.
Schweiger Gallo, I., A. Keil, K. C. McCulloch, B. Rockstroh, and Peter M. Gollwitzer.
2009. Strategic automation of emotion regulation. Journal of Personality and Social
Psychology 96:11–31.
Sheeran, Paschal, and S. Orbell. 1999. Implementation intentions and repeated behavior:
Augmenting the predictive validity of the theory of planned behavior. European Journal
of Social Psychology 29:349–369.
Sheeran, Paschal, T. L. Webb, and Peter M. Gollwitzer. 2005. The interplay between goal
intentions and implementation intentions. Personality and Social Psychology Bulletin
31:87–98.
Shiv, B., George Loewenstein, A. Bechara, H. Damasio, and A. R. Damasio. 2005. Investment
behavior and the negative side of emotion. Psychological Science 16:435–439.
Smith, Craig A., and Phoebe C. Ellsworth. 1985. Patterns of cognitive appraisal in emo-
tion. Journal of Personality and Social Psychology 48:813–838.
Smith, Craig A., and Richard S. Lazarus. 1993. Appraisal components, core relational
themes, and the emotions. Cognition and Emotion 7:233–269.
Stewart, B. D., and B. K. Payne. 2008. Bringing automatic stereotyping under control:
Implementation intentions as efficient means of thought control. Personality and Social
Psychology Bulletin 34:1332–1345.
Taylor, S. E., and Peter M. Gollwitzer. 1995. Effects of mindset on positive illusions. Journal
of Personality and Social Psychology 69:213–226.
Tiedens, Larissa Z., and Susan Linton. 2001. Judgment under emotional certainty and
uncertainty: The effects of specific emotions on information processing. Journal of
Personality and Social Psychology 81:973–988.
Trötschel, R., and Peter M. Gollwitzer. 2007. Implementation intentions and the will-
ful pursuit of prosocial goals in negotiations. Journal of Experimental Social Psychology
43:579–598.
Webb, T. L., and Paschal Sheeran. 2003. Can implementation intentions help to overcome
ego-depletion? Journal of Experimental Social Psychology 39:279–286.
Webb, T. L., and Paschal Sheeran. 2006. Does changing behavioral intentions engender
behavior change? A meta-analysis of the experimental evidence. Psychological Bulletin
132:249.
Webb, T. L., and Paschal Sheeran. 2007. How do implementation intentions promote goal
attainment? A test of component processes. Journal of Experimental Social Psychology
43:295–302.
Wegener, Duane T., and Richard E. Petty. 1994. Mood management across affective
states: The hedonic contingency hypothesis. Journal of Personality and Social Psychology
66:1034–1048.
Action Control by Implementation Intentions 243
Winkielman, Piotr, Kent C. Berridge, and Julia L. Wilbarger. 2005. Unconscious affec-
tive reactions to masked happy versus angry faces influence consumption behavior and
judgments of value. Personality and Social Psychology Bulletin 31:121–135.
Zemack-Rugar, Yael, James R. Bettman, and Gavan J. Fitzsimons. 2007. The effects of non-
consciously priming emotion concepts on behavior. Journal of Personality and Social
Psychology 93:927–939.
13
Mental Action and the Threat of Automaticity
WAY N E W U
1. INTRODUCTION
The starkly opposed assertions of Fodor and Strawson highlight one controversy
addressed in this chapter: When does something count as a mental action? This
disagreement, however, points to a deeper controversy, one intimated by the appeal
to reflex as a contrast to genuine action. Reflex, such as blinking to looming visual
stimuli or withdrawing one’s hand from a burning surface, is a paradigm form of
automatic behavior. As we shall see, automaticity is what makes decisions about
mental agency controversial, but it has also in recent years led to unsettling conclu-
sions regarding our conceptions of agency and agency itself.
Psychological research on automaticity reveals it to be a pervasive feature of
human behavior. John Bargh and Tanya Chartrand (1999) speak of the “unbearable
automaticity of being,” arguing that “most of a person’s everyday life is determined
not by their conscious intentions and deliberate choices but by mental processes
that are put into motion by features of the environment and that operate outside
of conscious awareness and guidance” (462). Relatedly, what is unbearable is that
we are also zombies (in a non-flesh-eating sense, thankfully). Christof Koch and
Francis Crick (2001) have written about “zombie agents” understood as “systems
[that] can deal with certain commonly encountered situations automatically” where
automaticity implies the absence of conscious control. It is not just systems that
are zombie agents; subjects could be as well. They ask: “Could mutation of a
single gene turn a conscious animal into a zombie?” Yet if our behavior is permeated
with automaticity, aren’t we all zombie agents?
In fact, the worry runs deeper: automaticity appears to eliminate agency. Even if agency were unbearably
automatic or zombie-like, it would still be agency. Yet I shall show that a common
assumption regarding automaticity suggests that automaticity is incompatible with agency. If
so, it is not that agency is unbearably automatic or zombie-like. It isn’t agency at all.
And if there is no agency, there is a fortiori no free, rational, moral, or conscious
agency. To illuminate these issues, automaticity and its correlate, control, must be
incorporated in a theory of agency. Specifically, this chapter shows how we can
simultaneously hold two seemingly inconsistent claims: that automaticity implies
the absence of control and that agency, as an agent’s exemplification of control,
involves, and often requires, much automaticity.
What, then, is automaticity? In section 2, I review empirical conceptions of
automaticity where psychologists came to reject a simple connection, namely, that
automaticity implies the absence of control. Philosophical reflection on mental
agency also suggests that we should reject the simple connection, and in sec-
tion 3, I develop an argument that adhering to it eliminates mental agency, and
indeed bodily agency as well. This is the threat of automaticity. In response, I
explicate the causal structure of mental agency in section 4 and then defend the
simple connection in section 5 in light of that structure. The final sections put
these results to work: section 6 defuses the threat from automaticity, section 7
responds to the striking philosophical disagreements about basic cases of mental
action, and section 8 reflects on inferences from automaticity to claims about
agency in cognitive science.
2. EMPIRICAL CONCEPTIONS OF AUTOMATICITY
The simple connection is that automaticity implies the absence of control (or atten-
tion) by the subject. Automaticity is then clarified by explaining control, and on
this, Schneider and Shiffrin (1977) note that “a controlled process is a temporary sequence
of nodes activated under control of, and through attention by, the subject” (2).
Control, as they conceive it, involves the deployment of attention. Thus, the initial
foray into defining automaticity and control relies on two links: the simple connec-
tion relating automaticity to the absence of control and the conception of control in
terms of the direction of attention.
Things rapidly became more complicated. John Bargh (1994) notes in a review
of the social psychology literature on automaticity that theories after Schneider and
Shiffrin (1977) gravitated to four factors as defining automaticity: automatic pro-
cesses are “unintentional, occur outside of awareness, are uncontrollable, and are
efficient in their use of attentional resources” (2). The problem, as Bargh (1994)
emphasizes, is that many paradigm cases of automaticity fail to exemplify all four
properties. Shiffrin later observed that “there do not seem to be any simple defin-
ing features of automatic and attentive processes that can be applied in complete
generality” (765), and he identified 10 typical contrastive features of automatic
and controlled processes. Similarly, Thomas Palmeri (2002), in an encyclopedia
entry on automaticity, echoes Bargh’s characterization, writing that “automaticity
refers to the way we perform some mental tasks quickly and effortlessly, with little
thought or conscious intention. Automatic processes are contrasted with delib-
erate, attention-demanding, conscious, controlled aspects of cognition” (290).
He then goes on to list 13 contrasting features between automatic and controlled
processes (table 1, 291). Schneider (2001), in his entry titled “Automaticity” for
the MIT Encyclopedia of the Cognitive Sciences, notes that “automatic processing shows
seven qualitatively and quantitatively different processing characteristics relative to
controlled process” (63). Current psychological conceptions of automaticity have
clearly gone far from the simple connection.
Why have psychologists given up the simple connection? Gordon Logan (1988)
has observed that, given the characterization of control as the (generally conscious)
deployment of attention, the simple connection predicted that automatic
phenomena would be freed of the constraints imposed by attentional processing
such as capacity or load limitations.1 The problem, Logan emphasizes, is that empir-
ical work has shown that many putatively automatic phenomena often are subject
to attentional limitations or are otherwise influenced by how attention is directed.
Psychologists have continued to construe control to be or at least to involve deploy-
ment of attention, so the way to accommodate evidence of capacity limitations in
automatic processes is to sever the simple connection (Logan [1988] analyzes auto-
maticity in terms of memory via his instance theory).2
In what follows, I do the reverse: I retain the simple connection and give up
equating control with attention, although on my view, control in action implies the
deployment of attention. This will allow us to address the consequences of auto-
maticity on agency noted earlier. The most serious of these is the threatened loss of
agency in the face of automaticity. I turn to this threat now, which is most apparent
in mental action.
3. MENTAL BALLISTICS
Given the armchair nature of their work, it is striking that philosophers have largely
neglected mental actions.3 Moreover, when philosophers speak of mental actions,
there is striking discord as to what counts as an instance. For example, you ask me,
“Who was the prime minister of Czechoslovakia when the Soviet Union invaded?”
and I try to recall. When I recall the answer, I then judge, “It was Dubček.”4 Here we
have a remembering and a judging, two mundane mental events. Are these actions?
Christopher Peacocke (2007) claims that they are. On the other hand, Alfred Mele
(2009, 19) asserts that remembering is never an action, and Strawson (2003) denies
action status to both. Who is right?
In contrast, similar questions regarding mundane bodily events, say the grabbing
of an object when one is asked for it, elicit broad agreement as to their status as
actions. I conjecture that the disagreement stems in part from the fact that men-
tal actions are typically automatic, and that there are differing intuitions about the
import of automaticity with respect to agency. Work in social psychology has cata-
loged the pervasiveness of automaticity even in goal-directed behavior where con-
trol is conceived as delegated to the environment, obviating any need for conscious
control on the part of the subject (Bargh and Ferguson 2000). Bernhard Hommel
(2000) has spoken of intention as a “prepared reflex,” the upshot being that inten-
tions allow the environment to take control of behavior without the need of further
intervention (control) by the subject.
I interpret Strawson’s (2003) arguments against mental action as exploiting the
ubiquity of automaticity. Strawson argues that most of what we take to be men-
tal actions involve mental ballistics, namely, things that happen automatically.
Strikingly, he includes deliberation and imagination. A natural reaction to this
constriction on the sphere of mental action is to take it as a reductio of Strawson’s
assumptions, but his arguments provide an opportunity to focus on the signifi-
cance of automaticity.
To bring out the threat, let us push Strawson’s argument to its logical conclusion.
Of imagination, Strawson observes:
When one has set oneself to imagine something one must obviously start from
some conceptual or linguistic specification of the content (spangled pink ele-
phant), and given that one’s imagining duly fits the specification one may say that it
is intentionally produced. But there isn’t intentional control in any further sense: the
rest is a matter of ballistics, mental ballistics. One entertains the verbal specifica-
tion and waits for the mechanism of imagination—the (involuntary) sponta-
neity of imagination—to deliver the image. (241, my emphasis)
Why should there be control in any further sense than the one Strawson speci-
fies, namely, that the process fits what one intended? I shall return to this, but
whatever this further element is supposed to be, Strawson thinks it is incompat-
ible with the automaticity of imagination. In imagination, we have only mental
ballistics, some automatically generated process for which we can only wait once
we have specified the relevant parameters, say the image of one’s mother’s face or
of a pink elephant. Thus,
When one sets oneself to imagine anything there comes a moment when what
one does is precisely to relinquish control. To think that the actual content-issu-
ing and content-entertaining that are the heart of imagining are themselves
a matter of action seems like thinking, when one has thrown a dart, that the
dart’s entering the dartboard is itself an action. (242, my emphasis)
Strawson makes a parallel claim about practical deliberation:
Very often [in the case of practical deliberation] there is no action at all: none
of the activation of relevant considerations is something one does intention-
ally. It simply happens, driven by the practical need to make a decision. The
play of pros and cons is automatic—and sometimes unstoppable. (243, my
emphasis)
So, deliberation and judgment are not actions, and other cases like memory recall
will be more grist for Strawson’s mill. It would seem that there is no space for mental
agency.
Still, Strawson notes that “there is of course such a thing as mental action” (231)
and finds space for it in stage setting.
The problem is that he cannot avoid the threat posed by automaticity. What is it
that goes beyond mental ballistics in stage setting? Strawson speaks of “setting one’s
mind at the problem,” but what is this setting? “It may involve rapidly and silently
imaging key words or sentences to oneself, rehearsing inferential transitions, refresh-
ing images of a scene” (231). Yet surely imaging, rehearsing inferences, and refresh-
ing images are just instances of the kinds of cases his previous arguments ruled out
as actions, namely, imagination, deliberation, and recall. Thus, he cannot avoid the
threat even in the remaining space he allows for mental agency.
To be fair, Strawson points out other cases of mental action: shepherding or dra-
gooning back a wandering mind, “concertion” in thought, stopping, piling and retak-
ing up thoughts that come to us quickly, or an active receptive blanking of the mind
(231–232). All of these are catalytic, and mental action goes no further than this.
Yet he also says that these need not be actions, and then their occurrence would be
automatic. But if automaticity is found in these cases too, then is there any missing
agentive element to be found here that we failed to find in deliberation, imagination,
or recall? As we have seen, in each case Strawson considers, the uncovering of auto-
maticity in mental ballistics leads to a negative answer. The threat of automaticity is
ubiquitous, and accordingly, action seems to be eliminated from the mental sphere.
Strawson explicitly invokes the concept of automaticity at only one point, but
the idea is captured in his talk of reflex, ballistics, and mere happenings. There is in
all of these a contrast with the agent’s being in control. This is just an instance of the
simple connection that originally guided contemporary psychological conceptions
of automaticity: if x is automatic, then x is not a case of control. If we think of
action just as an agent’s exerting control, automaticity implies the loss of agency.
The threat extends to bodily agency. Consider moving one’s arm, say to reach for a
glass. The specific movement that the agent is said to make is individuated by certain
parameters, say the specific trajectory, speed, acceleration, grip force, sequence of
muscle contractions, angle of the relevant joints, and so forth. Yet in a similar sense,
the production of these features is itself merely automatic (and, indeed, the initial
movement of the arm in reaching is literally ballistic). Given that any token move-
ment is individuated by automatically generated features, one might argue that the
concrete movement itself is automatically generated. After all, what features are left
for the agent to control?
Even in bodily action, it seems that we are pushed back to talk of stage setting, but
there is no avoiding the threat there. After all, intentions set the stage for intentional
bodily movement, but Strawson’s argument that deliberation and its product, inten-
tion, are essentially mental ballistics shows us that agency evaporates here as well.
So, the threat from automaticity is total: mental and bodily actions disappear. This
conclusion is more extreme than Strawson allows, but it seems unavoidable given
three initially plausible claims: (1) the pervasiveness of automaticity in human
activity, (2) the conceptual connection between actions and an agent’s exertion
of control, and (3) the simple connection linking automaticity to the absence of
control. I take it that the loss of all action is a reductio of these claims. Since (1) is
empirically established and (2) a conceptual truth (on my view), the source of the
problem is the simple connection.
It is prima facie plausible that the simple connection must go, for automatic-
ity seems required in skilled action. In many cases where we practice an action,
we aim to make that exercise automatic yet at the same time do not aim to abol-
ish our agency. Thus, when I practice a difficult arpeggio in the left hand on the
piano, my aim is to master a particular passage so that I can play the entire piece
correctly. Getting the left hand to perform automatically is precisely what is needed
to play the piece as I intend. Indeed, with the left hand “automatized,” I focus on the
right hand’s part, one that requires effort and attention. Playing with both hands,
one automatically and the other with explicit effort, I play a passage in the piece
intentionally. I think this is right, but in what follows, I show that we can also hold
on to all three claims that generated our reductio. To see this, we must embed auto-
maticity within a theory of agency.
Nevertheless, the relevant proposition occurs to her, and she can then use it in
subsequent reasoning, which itself calls upon further dredging through memory.
Action requires solving this Problem of Selection, specifically reducing a set of many
possible inputs by selection of exactly what is relevant to the task at hand (similar
points arise in respect of perceptually based thoughts). For simplicity, we can speak
of this as reducing a set of many possible inputs to a single relevant input.
A similar point arises regarding output or behavior. Returning to our mathemati-
cian, she has, in fact, many choices open to her even after selecting a specific mem-
ory. There are many responses that she can give: she can deploy the remembered
axiom to construct her proof, she can think of how best to format a representation
of that axiom for the final version of a journal article, or she can imagine writing a
song where a statement of the axiom is used as part of a catchy jingle (and much
more besides). There is a set of possible things that she can do with the thought in
light of bringing it to awareness. In mental action, then, doing something is navi-
gating through a behavioral space defined by many inputs and many outputs and
their possible connections. Bringing a thought to awareness is selecting the relevant
memory so as to inform a specific type of conscious mental activity. Mental action
requires the selection of a path in the available behavioral space. In earlier work
(Wu, 2011a), I have called this Problem of Selection the Many-Many Problem. The
Many-Many Problem is, I have argued, a metaphysically necessary feature of inten-
tional bodily action, and it is exemplified in intentional mental action.5 Certainly, in
actual mental actions, we have the Many-Many Problem, and that weaker claim will
suffice for our purposes.
Solving the Many-Many Problem is a necessary condition on intentional agency.
But clearly not any “solution,” namely, a one-to-one mapping of input to output, will
be sufficient for intentional agency.6 In the context of an exam, our mathematician
wants to find a solution to the mathematical problem she is tackling. She is not,
at that moment, interested in writing songs about set theory or formatting a text
for publication. Should such outputs—writing a song, formatting a text—
be what results during the exam, she would see these behaviors as inadvertent and
involuntary. Moreover, should the selections at issue routinely pop into her head in
other contexts such as when she is discussing a poem (but not one about set the-
ory!), then this would be something odd, inconsistent with her goals at that time.7
What this points to is that solving the Many-Many Problem cannot be inconsistent
with one’s current intentions. The content of one’s current intentions sets the standard
by which one’s actions are successful or not, and the way to ensure consistency with
intention is to require that solutions to the Many-Many Problem are not independent
of intention. Dependence of selection on intention should then be understood as the
causal influence of intention on selection, and this is intuitive: our mathematician
recalls the relevant axiom in set theory precisely because she intends to solve a prob-
lem in set theory of a certain sort; and she constructs a proof precisely because the
problem at issue is to prove a certain theorem. More abstractly, the behavioral space
that identifies the agent’s action possibilities for a given time is constrained by inten-
tion such that a specific path, namely, the intended one, is prioritized.8
In mental action, solving the Many-Many Problem by making appropriate selec-
tion imputes a certain form of activity to the agent. This activity, however, is not
an additional thing that the agent does so as to act in the intended way. Moreover,
selection is not necessarily conscious. Rather, these are just aspects of or part of the
mental action itself, just as a specific movement of the fingers or perhaps the specific
contraction of muscles in the hand is part of tying one’s shoelaces. These are not
additional things that one does so as to tie one’s shoelaces, nor does one need to be
conscious of those features of the action. The same point applies to any selection
relevant to solving the Many-Many Problem.9
Earlier I mentioned the central role of attention in agentive control, and we can
now see that the selection of a path in behavioral space that constitutes solving the
Many-Many Problem in mental action yields a form of cognitive attention. Consider
this oft-quoted passage from William James (1890):
Everyone knows what attention is. It is the taking possession by the mind, in
clear and vivid form, of one out of what seem several simultaneously possible
objects or trains of thought. Focalization, concentration, of consciousness are
of its essence. It implies withdrawal from some things in order to deal effec-
tively with others. (403)10
I have found this passage often invoked when philosophers discuss attention,
although they typically go on to discuss perceptual attention (I am guilty of this as
well). In this passage, however, James speaks not only of perceptual attention but
also of attention in thought, what we can call cognitive attention. Accordingly, it is
important to bear in mind a critical difference between attention in perception and
attention in thought: generally, the inputs that must be selected in perception are
simultaneously presented in perceptual awareness, but the inputs, namely, thought
contents, are not simultaneously given to awareness in thought.11 Thus, when you
perceive a cluttered scene, looking for your lost keys, vision simultaneously gives
you multiple objects, and visual attention is part of an active searching for that
object among many actually perceived objects. In contrast, when one is trying to
find the right thought (say the right axiom to construct a proof), one is not in fact
cognitively aware of multiple simultaneous thoughts (i.e., encoded mnemonic con-
tent). The thoughts from which we must select the relevant content are not actual
objects of awareness in the way that perceived objects are, but (to borrow James’s
phrasing) only possible objects of awareness. They are the items that we have in
memory, and thus, the Many-Many Problem in thought does not involve a con-
scious act of selecting the appropriate mnemonic items. James does in that passage
speak of how it seems that one is confronted with multiple possible thoughts, but
I read this as pointing to one’s sense of multiple behavioral possibilities (recall the
behavioral space).
The way I have put the point is that in bringing a thought to awareness, or, as James
says, taking possession of it by the mind, we have to select the relevant memory. This
is just to solve the input side of the Many-Many Problem in bringing the thought to
awareness. The solution to the Many-Many Problem, understood as path selection
leading to awareness of a thought content, fits with James’s description of cognitive
attention as the selection of a possible train of thought where focalization and con-
centration of consciousness, the awareness of a specific thought, are of its essence.
(AC) Agentive Control
S’s F-ing is agentively controlled in respect of F iff S’s F-ing is S’s execution of a
solution to the appropriate Many-Many Problem given S’s intention to F.
S’s F-ing is agentively automatic in respect of F iff it is not the case that S’s F-ing
is S’s execution of a solution to the Many-Many Problem given S’s intention
to F.
This is just the simple connection: the automaticity of F in S’s F-ing is equivalent
to the absence of control in respect of F. On this account, most of the F’s S can be
said to do at a time will be automatic. In our example, S’s kicking with a force N, S’s
moving his foot with trajectory T, with acceleration A, at time t, and so on will be
automatic. Finally, we can also define a strong form of automaticity, what I shall call
passivity.
(P) Passivity
For any subject S and token behavior of type B at some time t:
S’s B-ing is passive if for all F under which S’s B-ing falls, S’s F-ing is agentively
automatic.
Where all of a behavior’s features are agentively automatic, the agent is passive and there
is no intentional action or agentive control. Let us now put these notions to work.
relevant here, namely, that one practices not to play a piece in a specific way, say to
imitate a performance of a Brahms intermezzo by Radu Lupu, but rather to be able
to play in whatever way one intends. The control found in skillful action is that by
automatizing a host of basic features, the skilled agent opens up behavioral possi-
bilities that were not available before.
The putative threats mentioned in the introduction are threats to the will and
thus threats to any of its forms such as free will or conscious will. It is important to
see that the threat thus aims at agency itself, and that avoiding the threat requires
incorporating the notion of automaticity into the theory of agency. I have defused
the threat of automaticity by showing that while automaticity is defined as the
absence of control, its presence does not imply the absence of agency.
7. SETTLING DISAGREEMENTS
Psychologists gave up the simple connection and, consequently, the possibility of
dividing kinds of processes between the automatic and the controlled. Reinstating
the simple connection allows us to individuate processes along the automaticity/
control divide, namely, between actions and nonactions (passivity). Still, the meta-
physical division between actions and nonactions is not the primary target of psy-
chological research. Certainly, subjects in psychological experiments often perform
actions (tasks) where the experiments investigate certain features of those actions
that are characteristic of automaticity: insensitivity to cognitive load, parallel pro-
cessing, bypassing of awareness, and so forth. Of course, psychologists have also
connected such features to broader questions about agency, and then they enter a
traditional philosophical domain. I shall close with these issues in the last section,
but I first revisit the puzzling disagreement among philosophers as to what counts
as a mental action.
Let me begin with a disagreement with Strawson, namely, his verdict regarding
imagination and deliberation. On imagination, I noted that Strawson gave the cor-
rect answer: “Given that one’s imagining duly fits the specification [as intended]
one may say that it is intentionally produced.” While higher-ordered properties of
action, say freedom or rationality, might require meeting further conditions, it is a
mistake to look for action in any further sense than found in Strawson’s character-
ization of intentional control, one that comports with AC. One’s entertaining of
an image of a pink elephant given one’s intention to imagine that type of image is a
reflection of one’s agentive control. The intention explains why that specific path in
behavioral space is selected. There are, of course, a host of automatic features asso-
ciated with such imagination, say one’s imagining the elephant as standing on one
foot or facing left from one’s point of view. One may not have intended to imagine
those elements, yet that is what the mind automatically produced. By acknowledg-
ing these points, however, we do not reject that imagining as one intends is a matter
of one’s intentional control and thus is an action.
What of deliberation, where the “play of pros and cons is automatic”? The auto-
maticity of specific thought contents as one deliberates does not imply that the pro-
cess of deliberation is not an action, namely, the solving of the Many-Many Problem
in light of intending to determine whether p is true (theoretical deliberation) or
when its source is a matter of automaticity? It is a compelling thought that the agent
must make herself felt at precisely such points when control threatens to evaporate,
to reassert control so as to stave off its loss. The point of the current perspective is
that control just is the role of intention in structuring a solution to the Many-Many
Problem, full stop. The question of how the intention arises is an important one, but
not one about agency in the basic sense. Rather, it pertains to whether the resulting
action has higher-ordered properties such as whether it is free, rational, or moral.
I do not deal with these questions here but simply emphasize two different ques-
tions: one about the conditions for agency, the other about conditions for its dif-
ferent forms.19
Given the Many-Many Problem, the pervasive automaticity of agency is what we
should expect: we cannot intend and thereby control all the various ways of solving
the Many-Many Problem even once a specific path is selected in intention. We may
intend to grab an object or to recall a specific image, but the specific parameters
that must be filled in to instantiate the solution to the Problem are not things that
we explicitly intend to bring about, and thankfully so. It is not automaticity that is
unbearable. What would be unbearable would be its absence.
Psychologists have shown that much of our behavior is automatic, yet they have
also spun this result as threatening to agency, leading to a discomfiting picture of
human beings as zombies and automatons.20 One recent line of thought concerns the lack
of conscious control in action. As Bargh and Chartrand put it, our “conscious inten-
tions” are generally not in control of our behavior. Talk of conscious intention is
common enough in the empirical literature on agency, but I must confess to not
being certain what cognitive scientists are referring to. Do they mean an intention
made conscious, namely, an occurrent state where the content of the intention is at
the focus of my awareness? A thought about my intention, namely, a second-order
occurrent state where my first-order state of intention is at the focus of my aware-
ness? In normal action, I typically find no such thing. Rather, intentions are persist-
ing nonphenomenal mental states of subjects that coordinate and constrain one’s
meandering through behavioral space. As I type these words, I am executing my
intention to finish the essay by the deadline in a way consistent with my other inten-
tions that are also operative (e.g., that I need to get to a meeting at noon), but there
are no correlated conscious intentions or related metacognitive forms of awareness.
Thank goodness! That would be distracting. Of course, I can bring the content of
intention to consciousness on reflection, but that is a special case where I reevaluate
action. In general, when I act, my intentions are preparations to respond to the envi-
ronment in certain ways, or, as Strawson says, stage setting. Agentive control does
not require that the intentions be conscious in either of the senses noted earlier.21
That our actions often bypass this form of conscious control is no sign that we are
somehow not at the reins in our actions. We are at the reins to the extent that what
we do is a result of intentional control in the basic sense that our behavior is the
intention-guided solving of the Many-Many Problem. There is no denying that we
are often moved to act, and on the antecedents of action rest important questions
about the rationality, morality, and freedom of our actions. But all these are higher-
ordered properties of agency. Agency itself is an internal feature of certain processes,
our navigation through a complex world that throws at us Many-Many Problems.22
NOTES
1. It is worth pointing out that a canonical type of automatic process is attentional,
namely, attentional capture, and that a function of attentional capture is to disrupt
other activities.
2. In the case of mental action, I will emphasize the central role of memory in many
forms of cognitive attention, and this resonates with certain key aspects of Logan’s
theory. In contrast to his account, however, I emphasize the simple connection.
3. The essays in O’Brien and Soteriou (2009) are a corrective to this neglect.
4. The example is from Peacocke 1998.
5. For a detailed discussion of the account, see Wu 2011a. This chapter extends that
account to bodily action and the issue of automaticity versus control.
6. Many nonsentient structures can solve the Many-Many Problem. Think of a set
of points in a train track that directs rail traffic in different ways, depending on the
day. Might there, however, be nonintentional actions exemplified by solving the
Many-Many Problem independently of intention? Perhaps, but I suspect that we will
want to carve off many of the nonsentient cases. This issue requires more discussion
than can be given at this point.
7. Persistent automaticity of mental events might be the source of the positive symp-
toms associated with schizophrenia (see Wu 2011b). Here, the patient is passive in
the sense defined later. This emphasis on automaticity contrasts with the standard
explanation of positive symptoms, self-monitoring, which posits a defect in a control
mechanism.
8. How should we understand this causal influence? In Wu 2011a, I construe intentions
as persisting structural states of the subject in the sense that the settings of points in a
set of train tracks (see note 6) and the setting of weights in a neural network count as
structural states. Because of these states, certain processes are prioritized over others
within the system in question.
9. So, zombie action (in Koch and Crick’s sense) is compatible with the account given
here (see later definitions). Neither our intentions nor the specific selections that we
make need be conscious. Of course, typically, there is some conscious element in
our actions. A second issue concerns metaphysical zombies, creatures fully devoid
of phenomenal consciousness. One might wonder whether such creatures could be
agents at all. To the extent that we think such creatures are capable of genuine (non-
phenomenal) mental states such as intentions as well as genuine nonphenomenal
perceptual states, then if they solve the Many-Many Problem via such intentions,
they are thus intentionally acting. But again, these issues require more discussion
(Julian Kiverstein raised certain issues to which this note is an initial response).
10. See also Peacocke 1998, 70.
11. Of course, for perceptually based thoughts, the putative objects of thought can be
given simultaneously.
12. There are, of course, different things we can mean by “attention.” I am here emphasiz-
ing the insight in James’s description, what he takes to be part of what we all know
about attention. The general point is that action requires attentional selection given
the Many-Many Problem.
13. These F’s are also the basis of descriptions under which the action can be said to be
intentional. It is not clear that the definitions of automaticity or control of an event
in respect of F to be given later are equivalent to claims about the intentionality of
the event under the description “the F.” Whether an action is intentional under a
260 THE FUNCTION OF CONSCIOUS CONTROL
REFERENCES
Anscombe, G. E. M. 1957. Intention. Oxford: Blackwell.
Bargh, J. A. 1994. The four horsemen of automaticity: Awareness, intention, efficiency,
and control in social cognition. In Handbook of social cognition: Basic processes, 2nd ed.,
ed. Robert S. Wyer and Thomas K. Srull, 1–40. Hillsdale, NJ: Erlbaum.
Bargh, J. A., and T. L. Chartrand. 1999. The unbearable automaticity of being. American
Psychologist 54 (7): 462–479.
JOËLLE PROUST
To lay the groundwork for the discussion, we need to start with a tentative char-
acterization of the general structure of action, on the basis of which mental acts can
be specified. A commonly held view is that both bodily and mental acts involve
some kind of intention, volition, or reason to act; the latter factor both causes and
guides the action to be executed.3 Along these lines, the general structure of an
action is something like:
On the basis of this general characterization, one can identify a mental act as an
act H that is tried in order to bring about a specific property G—of a self-directed,
mental, or cognitive variety.4 The epistemic class of mental acts encompasses per-
ceptual attendings, directed memorizings, reasonings, imaginings, and visualizings. A
mixed category, involving a combination of epistemic, prudential, or motivational
ingredients, includes acceptings, plannings, deliberatings, preference weightings,
and episodes of emotional control.
Three main arguments have been directed against the characterization of men-
tal acts described in (C1). First, it seems incoherent, even contradictory, to rep-
resent a mental act as trying to bring about a prespecified thought content: if the
content is prespecified, it already exists, so there is no need to try to produce it.
Second, the output of most of the mental operations listed above seems to crucially
involve events of passive acquisition, a fact that does not seem to be accommodated
by (C1). Trying to remember, for example, does not seem to be entirely a matter of
willing to remember: it seems to involve an essentially receptive sequence. Third,
it makes little sense, from a phenomenological viewpoint, to say that mental acts
result from intentions: one never intends to form a particular thought. We will dis-
cuss each of these objections and will examine whether and how (C1) should be
modified as a result.
In virtue of (P1), one cannot try to judge that P. One can try, however, to form one’s
opinion about whether P; or examine whether something can be taken as a premise
in reasoning, namely, accept that P. Or work backward from a given conclusion to
the premises that would justify it. But in all such cases, accepting that P is conditional
upon feeling justified in accepting that P. For example, one can feel justified in accept-
ing a claim “for the sake of argument” or because, as an attorney, one is supposed to
reason on the basis of a client’s claim. Various norms thus apply to a mental action,
constituting it as the mental action it is. In the case of accepting, coherence regulates
the relations between accepted premises and conclusions. Relevance applies to the
particular selection of premises accepted, given the overall demonstrative intention
of the agent. Exhaustivity applies to the selection of the relevant premises given one’s
epistemic goal.
These norms work as constraints on nonagentive epistemic attitudes as well as on
mental actions. Forming and revising a belief are operations that aim at truth, and
at coherence among credal contents. Thus normative requirements do not apply
only to mental actions. Mental actions, rather, inherit the normative requirements
that already apply to their epistemic attitudinal preconditions and outcomes. If a
thinker was intending to reach conclusions, build up plans, and so forth, irrespective
of norms such as relevance, coherence, or exhaustivity, her resulting mental activity
would not count as a mental action of reasoning or planning. It would be merely an
illusory attempt at planning, or reasoning.7 A mental agent cannot, therefore, try to
Φ without being sensitive to the norm(s) that constitute successful Φ-ings.
The upshot is that characterization (C1) earlier should be rephrased, as in (C2),
in order to allow for the fact that the mental property that is aimed at should be
acquired in the “right way,” as a function of the kind of property it is.
Characterization (C2) can be used to explain the specific difference of mental ver-
sus bodily acts in the following way. Just as bodily acts aim at changing the world by
using certain means-to-end relations (captured in instrumental beliefs and know-
how), mental acts have, as their goal, changing one’s mind by relying on two types
of norm: means-to-end instrumental norms (e.g., “concentrating helps remember-
ing”) and constitutive norms (“my memory attempt ought to bring about a correct
outcome”). The specific difference between a mental and a bodily act, then, is that
specifically epistemic, constitutive norms are only enforced in mental acts and atti-
tudes, and that an agent has to be sensitive to them to be able to perform epistemic
actions. This does not entail, however, that a thinker has to have normative concepts
such as truth or relevance. An agent only needs to practically adjust her mental per-
formance as a function of considerations of truth, exhaustivity, or relevance, and so
forth. There is a parallel in bodily action: an agent does not need to explicitly recog-
nize the role of gravity in her posture and bodily effort to adjust them appropriately,
when gravity changes, for example, under water. It is an important property of con-
stitutive norms that they don’t need to be consciously exercised to be recognized as
Mental Acts as Natural Kinds 265
practical constraints on what can be done mentally. For example, an agent who tries
to remember a date, a fact, a name implicitly knows that success has to do with the
accuracy of the recalled material; an agent who tries to notice a defect in a crystal
glass implicitly knows that her attempt depends on the validity of her perceptual
judgment. In all such cases, the properties of informational extraction and trans-
fer constrain mental performance just as the properties of gravity constrain bodily
performance.
(P2) “One ought to adopt the means one believes necessary (in the circum-
stances) to do what one intends to do.”8
Given that what one intends to do varies with agents and circumstances, some people
may prefer to ignore a fallacy in their reasoning, or jump to a conclusion, just as
some prefer to picnic in the rain. There are many types of instrumental conditions
for attaining goals, and they each define a norm, in the weak sense of a reason for
adopting the means one adopts. From this perspective, epistemic norms are no more
constitutive for a mental act than beliefs in means-end conditions for realizing a
goal are constitutive for a bodily act. They are merely instrumental conditions under
the dependence of one’s intention to reach a given end. A closely related argument
in favor of the instrumental view of epistemic norms, proposed by Papineau (1999),
is that they compete with each other: one can desire that one’s beliefs be formed so
as to be true, or informative, or economical, and so forth. Truth is not an overarching
norm; norms apply in a context-relative way, according to the agent’s goal.
This kind of argument, however, has been criticized for conflating reasons to act
and normative requirements on acting. Adopting John Broome’s (1999) distinction,
one might say that Dretske’s proposition (P2) expressed earlier correctly articulates a
relation of normative requirement between intending an end, and intending what
you believe to be a necessary means to this end. But this does not ipso facto provide
you with a reason to intend what you believe to be a necessary means to the end;
conversely, whatever reason you may have to take this particular means as necessary
to reach this end does not count as normatively required. Let us see why. A reason
is “an ought” pro tanto—“an ought so far as it goes.” For example, if you intend to
open a wine bottle, granting that you believe that you need a corkscrew, you ought
to get one. Believing that you ought to get a corkscrew, however, cannot make it true
that you ought to do so. You ought to do so if there is no reason not to do it. The
however, the agent’s aim is to retrieve only correct answers, the normative require-
ment is that no incorrect answer should be included in her responses.
So there are two very different ways in which a mental agent can fail in an action:
she can select an aim that she has no good reason to select (aiming to be exhaustive
when she should have aimed at accuracy). Or she can fail to fulfill the normative
requirements that are inherent to the strategy she selected (aiming to be exhaustive
and leaving out half of the items in the target set). For example, if the agent tries to
remember an event, no representation of an event other than the intended one will
do. The agent, however, may mistake an imagining for a remembering. This
example can be described as the agent’s confusion of a norm of fluency with a norm
of accuracy. Another frequent example is that, having accepted A, an agent fails to
accept B, which is a logical consequence of A (maybe she is strongly motivated to
reject B).10 Here again, the agent was committed to a norm of coherence but failed
to apply it, while turning, possibly, to another norm, such as fluency or temporal
economy, unsuitable to the task at hand.
If indeed there are two different forms of failure, connected, respectively, with
goal selection, and with a goal-dependent normative requirement, we should recog-
nize that it is one thing to select the type of mental act that responds to the needs of
a given context, and another to fulfill the normative requirements associated with
this selected mental act. Selecting one act may be more or less rational, given a dis-
tal goal and a context. An agent may be wrong to believe that she needs to offer an
exhaustive, or a fine-grained, answer to a question (contexts such as conversation,
eyewitness testimony, academic discussion, and so on, prompt different mental
goals). Having selected a given goal, however, the agent now comes under the pur-
view of one or several constitutive norms, which define the satisfaction conditions
of the associated mental action. The fact that there can be conflicts among epistemic
strategies thus just means that an agent must select a particular mental act in a given
context, if she is in fact unable to carry out several of them at once. Each possi-
ble mental act is inherently responsive to one or several distinctive norms. Which
mental act is needed, however, must be decided on the joint basis of the contextual
needs and of one’s present dispositions.
If this conclusion is correct, it suggests, first, that mental acts are natural kinds,
which are only very roughly captured by commonsense categories such as “trying
to remember” or “trying to perceive.” An act of directed memory, or perceptual
attending, for example, should be distinguished from another if it aims at exhaustiv-
ity or at strict accuracy. Similarly, a type of reasoning could be aiming at coherence,
or at truth, depending on whether the premises are only being considered, that is,
assumed temporarily, or fully accepted. These are very different types of mental
acts, which, since they invoke different normative requirements, have different con-
ditions of success and also require different cognitive abilities from the agent.
Second, the conclusion also suggests that the conformity of present cognitive
dispositions with a given normative requirement should be assessed prior to men-
tally acting: a thinker needs to evaluate the likelihood that a mental action of this
type, in this context, will be successful. In other words, a predictive self-evaluation
needs to take place for a subject to appropriately select which mental act to per-
form. For example, a subject engaged in a learning process may need to appreciate
such sensitivity or because social learning has made them sensitive to new norms.
Given the internal relations between normative requirements, norm sensitivity, and
mental acts, the range of mental acts available to an agent is partly, although not
fully, constrained by the concepts she has acquired. In particular, when an agent
becomes able to refer to her own cognitive abilities and to their respective norma-
tive requirements, she ipso facto extends the repertoire of her “tryings” (i.e., of her
dispositions to act mentally).
This objection is perfectly correct. In response to Papineau’s “unrefined thinkers”
argument, we should only claim that some basic constitutive requirements, at least,
are implicitly represented in one’s sense of cognitive efficiency. Among these basic
requirements, fluency is a major epistemic norm that paves the way for the others.
A feeling of perceptual or mnemonic fluency, experienced while engaged in some
world-directed action (such as reclaiming one’s toys), allows a subject to assess the
validity of her perceptual judgments, or the exhaustivity of a recall episode.
This idea of basic normative requirements has recently received support from
comparative psychology. It has been shown, in various animal studies, that some
nonhuman primates, although not mind readers, are able to evaluate their mem-
ory or their ability to perceptually discriminate between categories of stimuli.
Macaques, for example, are able to choose to perform a task when and only when
they predict that they can remember a test stimulus; they have the same patterning
of psychophysical decision as humans.13 This suggests that macaques can perform
the mental action of trying to remember, or of trying to discriminate, just as humans
do; furthermore, they are able to choose the cognitive task that will optimize their
gains, based on their assessment of how well they perceive or remember (rather
than on stimulus-response associations, which are not made available to them).
The obvious question, then, is how animals can conduct rational self-evaluation
(i.e., use a form of “metacognition”) in the absence of conceptual self-knowledge. A
plausible answer, currently being explored by philosophers and cognitive scientists,
is that affective states have an essential role in providing the bases of norm sensitiv-
ity in animals, in children, and also in human adults.14 A feeling “tells” a subject,
in a practical, unarticulated, embodied way, how a given mental act is developing
with respect to its constitutive norm, without needing to be reflectively available to
the believer. Cues such as contraction of the corrugator muscle (correlating with a
sense of difficulty, experienced when frowning), or the absence of tension, seem to
be associated with a gradient in self-confidence about the outcome of the current
mental act. This feeling, however, also has a motivational force, making the prospect
of pursuing the action attractive or aversive to the agent.
Philosophically, however, the important question is not only how epistemic emo-
tions are implemented,15 not only how they influence decision,16 but also how they
can contribute to rational evaluation. How can epistemic feelings generate mental
contents that actually enable a subject to perform self-evaluation? One possibility is
that emotions provide access to facts about one’s own attitudes and commitments.17
If these facts are articulated in a propositional way, then emotions are subjected
to the agent’s self-interpretive activity as a mind reader. Another possibility, not
incompatible with the first, based on the comparative evidence reported earlier, is
that epistemic feelings express affordances in a nonconceptual way (Proust, 2009).
trying responsible for a mental act (or a nonmental one). It must, in general, fulfill a
“voluntary control condition” (VCC):
Having voluntary control over a change means that the agent knows how, and is
normally able, to produce a desired effect; in other terms, the type of procedural
or instrumental activity that she is trying to set in motion must belong to her rep-
ertoire. Even though interfering conditions may block the desired outcome, the
agent has tried to act, if and only if she has exerted voluntary control in an area in
which she has in fact the associated competence to act. An important consequence
of McCann’s suggestion is that the agent may not be in a position to know whether
an action belongs to her repertoire or not. All she knows is that she seems to be try-
ing to perform action A. Trying, however, is not a sure sign that a mental action is
indeed being performed.
It is compatible with VCC, however, that bodily or mental properties that seem
prima facie uncontrollable, such as sneezing, feeling angry, or remembering the
party, can be indirectly controlled by an agent, if she has found a way to cause herself
to sneeze, feel angry about S, or remember the party. She can then bring it about that
she feels angry about S, or that she remembers the party, and so on. Are these bona
fide cases of mental action? Here intuitions divide in an interesting way.
Some theorists of action20 consider that an intrinsic property of action is that, in
Al Mele’s (2009) terms:
(P4) The things that agents can, strictly speaking, try to do, include no nonac-
tions (INN).
An agentive episode, on this view, needs to include subsequences that are them-
selves actions. It must not essentially involve receptivity. Those who hold the INN
principle contrast cases such as trying to remember, where success hinges on a
receptive event (through which the goal is supposed to be brought about), with
directly controlled events, such as lighting up the room. For example, while agreeing
that a thinker’s intention is able to have a “catalytic” influence on her thought pro-
cesses, Galen Strawson rejects the view that she can try to entertain mental contents
intentionally.
Is Strawson’s claim justified? We saw in section I that “entertaining a thought con-
tent” does not qualify as an action, and cannot even constitute the aim of an action
(except in the particular case of accepting). But as Mele (2009) remarks, “It leaves
plenty of room for related intentional mental actions” (31). Take Mele’s task of find-
ing seven animals whose name starts with “g” (Mele, 2009). There are several things
that the agent does in order to complete the task: exclude animal names not begin-
ning with “g,” make a mental note of each word beginning with “g” that has already
come to mind, keep her attention focused, and so on. Her retrieving “goat,” however,
does not qualify as a mental action, because “goat” came to her mind involuntarily,
that is, was a nonaction. In conclusion: bringing it about that one thinks of seven
animal names is intentional and can be tried, while forming the conscious thoughts
of seven individual animal names is not (Mele, 2009).
One can agree with Mele, while observing that bodily actions rarely fulfill the
INN condition. Most ordinary actions involve some passive relying on objects or
procedures: making a phone call, for example, presupposes that there exists a reli-
able mechanism that conveys my vocal message to a distant hearer. Asking some-
one, at the dinner table, “Is there any salt?” is an indirect speech act that relies on
the hearer’s activity for computing the relevant meaning of the utterance, a request
rather than a question. Gardening, or parenting, consists in actions that are meant
to make certain consequences more probable, rather than producing them outright.
There is a deeper reason, however, to insist that “trying to A mentally” does not
need to respect the INN principle. If section I is right, acting mentally has a two-
tiered structure. Let us reproduce the characterization discussed earlier:
There are, as we saw, two kinds of motives that have to be present for a mental act to
succeed. A first motive is instrumental: a mental act is performed because of some
basic informational need, such as “remembering the name of the play.” A second
motive is normative: given the specific type of the mental action performed, a spe-
cific epistemic norm has to apply to the act. These two motives actually correspond
to different phases of a mental act. The first motivates the mental act itself through
its final goal. The second offers an evaluation of the feasibility of the act; if the pre-
diction does not reach an adequacy threshold, then the instrumental motive needs
to be revised. This second step, however, is of a “monitoring” variety. The thinker
asks herself a question, whose answer is brought about in the agent by her emo-
tions and prior beliefs, in the form of a feeling of knowing, or of intelligibility, or of
memorial fluency. Sometimes, bodily actions require an analogous form of moni-
toring: if an agent is unsure of her physical capacity to perform a given effort, for
example, she needs to form a judgment of her ability based on a simulation of the
action to be performed. Mental acts, however, being highly contextual, and tightly
associated with normative requirements, need to include a receptive component.
In summary, mental agency must adjudicate between two kinds of motives that
jointly regulate mental acts. The agent’s instrumental reason is to have a mental goal
realized (more or less important, given a context). This goal, however, is conditional
on her attention being correctly oriented, and on her existing cognitive dispositions for
producing the mental goal. Here, epistemic requirements become salient to the agent.
Feelings of cognitive feasibility are passively produced in the agent’s mind as a result of
her attention being channeled in a given epistemic direction. These feelings predict the
probability for a presently activated disposition to fulfill the constraints associated with
a given norm (accuracy, or simplicity, or coherence, etc.). Epistemic beliefs and theo-
ries can also help the agent monitor her ability to attain a desired cognitive outcome.
Thus, orienting one’s attention as a result of an instrumental reason (finding the
name of the play) creates a unique pressure on self-evaluation, which constitutes a
precondition and a postevaluative condition for the mental act. One can capture
this complex structure in the following theoretical definition of a mental act:
Let us suppose, for example, that you go to the supermarket and suddenly real-
ize, once there, that you have forgotten your shopping list. You experience a spe-
cific unpleasant emotion, which, functionally, serves as an error signal: a crucial
epistemic precondition for your planned action is not fulfilled, because you seem
not to remember what was on the list. When such an error signal is produced, the
representation of the current action switches into a revision mode. Note that this
epistemic feeling differs from an intention: it does not have a habitual structure, as
an intention normally does, given the highly planned, hierarchical structure of most
of our instrumental actions. It is, rather, highly contextual, difficult to anticipate,
unintentional, and dependent upon the way things turn out to be in one’s interac-
tion with the environment. The error signal is associated with a judgment concern-
ing the fact that your shopping list is not available as expected. Now, what we need
to understand is when and why this judgment leads to selection of a mental act
rather than to a new bodily action.
A. Hypothesis A
A first hypothesis—hypothesis A—is that the error signal is an ordinary, garden-
variety action feedback. It is generated when some expectation concerning either
the current motor development of the action or its outcome in the world does not
match what is observed (according to a popular “comparator view” of action).25 But
there is no reason to say that such feedback has to trigger a mental act. What it
may trigger, rather, is a correction of the trajectory of one’s limbs, or a change in
the instrumental conditions used to realize the goal. If I realize that my arm does
not extend far enough to reach the glass I want, I have the option of adjusting my
posture or taking an extra step. When I realize that I don’t have my shopping list at
hand, I have the option of looking for it, reconstituting it, or shopping without a list.
In both situations, no mental act seems necessary.
B. Hypothesis B
Hypothesis B states that the error signal of interest is not of a postural, spatial, or
purely instrumental kind. The distinction between epistemic and instrumental rel-
evance discussed in section I is then saliently involved in the decision process. An
instrumental error signal carries the information that the existing means do not
predict success (“shopping will be difficult, or even impossible”). An epistemic
error signal carries, in addition, the information that epistemic norms are involved
in repairing the planning defect (“can my memory reliably replace my list?”). The
comparator that produces the epistemic error signal, on this hypothesis, has access
to the cognitive resources to be used in a given task. To make an optimal decision,
the agent needs to be sensitive to the norms involved, such as accuracy or exhaus-
tivity. Norm sensitivity is, indeed, implicit in the practical trilemma with which
the agent is confronted: either (1) she needs to interrupt her shopping, or, (2) she
needs to reconstruct the relevant list of items from memory, or, finally, (3) she may
shop without a list, in the hope that roaming about will allow her to track down the
needed items. The trilemma is only available to an agent if mental acts are in her
Mental Acts as Natural Kinds 275
repertoire, and if she can select an option on the basis of her contextual metacog-
nitive self-evaluations. Now consider the constraints that will play a role in deter-
mining how the trilemma should be solved. The list can be more or less accurately
reconstructed: the new list can include fewer items than the original list, and thus
violate a norm of exhaustivity (or quantity). It can include more items than the origi-
nal list, thus violating a norm of accuracy (or truth). As shown in section I, norma-
tive requirements depend upon the goal pursued, but they are strict, rather than pro
tanto. Self-probing her own memory is an initial phase that will orient the shopper
toward the normatively proper strategy.
A defender of the A-hypothesis usually blames the B-hypothesis for taking the
principle of Occam’s razor too lightly. Here is how the argument goes. Any simple
postural adjustment can, from a B-viewpoint, be turned into a mental act. When
realizing that a movement was inadequate, you engaged in a reflective episode;
you compared your prior (estimated) belief of the distance between your arm and
the glass with your present knowledge of the actual distance. A precondition of the
current action fails to be met. As a result, you actively revise your former belief, and,
as a consequence, you reflectively form the mental intention to perform a correc-
tive postural action. Surely, this picture is overintellectualist. Any animal can correct
its trajectory to reach a goal: no mental act, no comparison between belief states
is needed; a navigating animal merely compares perceptual contents; it aims at a
matching state and perseveres until it gets it.
The A-objector correctly emphasizes that the concept of a “mental property”
can describe any world property one cares to think about. A color, or a shape,
becomes mental once seen. A behavior becomes mental as soon as it is anticipated
or rehearsed. A more economical theory, the objector concludes, should explain
actions through first-order properties; what is of cognitive interest is the world, not
the mind turning to itself to see the world.
The B-defender, however, will respond that the A-objector ignores existing
psychological mechanisms that have the function of assessing one’s cognitive dispositions as such—they are not merely assessing the probability of the world turning out, or not, to be favorable to one’s plans. Indeed, crucial evidence in favor of the
B-hypothesis consists in the contrast between animals that are able to perform meta-
cognitive self-evaluation to decide what to do, such as some nonhuman primates
and dolphins, and those unable to do so, such as pigeons and rats.26 Metacognitive
self-evaluation, however, is not in itself a mental action. It is the initial and the last
step of such an action, in a way that closely parallels the functional structure of bodily
actions. Neuroscientific evidence suggests that a bodily action starts with a covert
rehearsal of the movement to be performed.27 This rehearsal, although “covert,” is
not a mental action but, rather, a subpersonal operation that is a normal ingredient
of a bodily action. Its function is strictly instrumental: to compare predicted effi-
ciency with a stored norm. Similarly, a mental action starts with evaluating whether
a cognitive disposition can reliably be activated. Its function is, as argued in sec-
tion I, directly critical and indirectly instrumental. Its critical function is to evaluate
how reliable or dependable my own cognitive dispositions are relative to a given
normative requirement. Its instrumental function is to guide a decision to act in
this or that way to attain the goal. The parallel also applies to the ultimate step of an
276 THE FUNCTION OF CONSCIOUS CONTROL
action. Once an action is performed, it must be evaluated: Does the observed goal
match the expected goal? Again, there is an interesting difference in postevaluating
a bodily and a mental action. In a bodily action, sensory feedback normally tells
the agent whether there is a match or a mismatch. In a mental action, however, the
feedback is of a different kind. The subject needs to appreciate the normative status
of the output of the mental act: Is the name retrieved correct? Has the list been
exhaustively reproduced? Here, again, a subject is sensitive to the norms involved
in self-evaluation through a global impression, including feelings of fluency, coher-
ence, and so on, as well as situational cues and beliefs about his or her competence
with respect to the task involved.
The upshot is that, from the B-viewpoint, the existence of metacognition as a spe-
cifically evolved set of dispositions is a crucial argument in favor of the existence of
mental acts as a natural kind, distinct from motor or bodily acts. Let’s come back to
the error signal as a trigger for a mental action. In the shopper example, the error sig-
nal that makes a mental action necessary is the absence of an expected precondition
for an ordinary action: the shopping list being missing, the agent must rely on her
unaided memory. It is interesting to note that the list itself represented an attempt
to avoid having to rely on one’s uncertain memory to succeed in the shopping task.
The anticipated error to which the list responds is thus one of failing to act accord-
ing to one’s plan. Externalizing one’s metacognitive capacities is a standard way of
securing normative requirements as well as instrumental success in one’s actions.
The error signal often consists in a temporal lag affecting the onset of a sequence
of action. For example, in a conversation, a name fails to be quickly available. The
error signal makes this manifest to the agent. How, from that error signal, is a mental
act selected? In some favorable cases, an instrumental routine will save the trouble
of resorting to a specific mental act: “Just read the name tag of the person you are
speaking to.” When, however, no such routine is available, the speaker must either
cause herself to retrieve the missing name or else modify the sentence she plans to
utter. In order to decide whether to search her memory, she needs to consider both
the uncertainty of her retrieving the name she needs to utter and the cost-benefit
ratio, or utility, of the final decision. Dedicated noetic, or epistemic, feelings help
the agent evaluate her uncertainty. These feelings are functionally distinct from the
error signals that trigger mental acts. Nonetheless, the emotional experience of the
agent may develop seamlessly from error signal to noetic feeling.
In summary, our discussion of the shopper example suggests, first, that the error
signal that triggers a mental act has to do with information, and related epistemic
norms; and second, that the mental act is subordinated to another encompassing action, which itself has a given utility, that is, a cost-benefit schedule. The two acts are
clearly distinct and related. A failure in the mental act can occur as a consequence
of overconfidence, or for some other reason: it will normally affect, all things being
equal, the outcome of the ordinary action. An obvious objection that was discussed
is one of hyperintellectualism: Are we not projecting onto our shopper an awareness
of the epistemic norms that she does not need to have? A perceiving animal clearly
does not need to know that it is exercising a norm of validity when it is acting on the
basis of its perception. We need to grant that norm sensitivity need not involve any
conceptual knowledge of what a norm is. Depending on context, an agent will be
sensitive to certain epistemic norms rather than others, just as, in the case of a child,
the issue may be about getting back all the toys, or merely the favored one. She may
also implicitly recognize that the demands of different norms are mutually incom-
patible in a given context. If one remembers that normative requirements apply to
attitudes as well as to mental actions, then the question of normative sensitivity is
already presupposed by the ability to revise one’s beliefs in a norm-sensitive way, an
ability that is largely shared with nonhumans.
CONCLUSION
A careful analysis of the role of normative requirements as opposed to instrumental
reasons has hopefully established that mental and bodily forms of action are two
distinct natural kinds. In contrast with bodily action, two kinds of motives have to
be present for a mental act to develop. A first motive is instrumental: a mental act
is performed because of some basic informational need, such as “remembering the
name of the play” as part of an encompassing action. A second motive is epistemic:
given the specific type of mental action performed, a specific epistemic norm must
apply to the act (e.g., accuracy). These two motives actually correspond to differ-
ent phases in a mental act. The first motivates the mental act instrumentally. This
instrumental motivation is often underwritten by a mere time lag, which works as
an error signal. The second offers an evaluation of the feasibility of the act, on the
basis of its constitutive epistemic requirement(s). Self-probing one’s disposition to
act and postevaluating the outcome of the act involve a distinctive sensitivity to the
epistemic norms that constitute the current mental action.
Conceived in this way, a characterization of mental acts eschews the three difficulties
mentioned at the outset. The possibility of prespecifying the outcome of an epistemic
mental act is blocked by the fact that such an act is constituted by strict normative
requirements. That mental acts include receptive features is shown to be a necessary
architectural constraint for mental agents to be sensitive to epistemic requirements,
through emotional feelings and normatively relevant attitudes. Finally, the phenom-
enology of intending is shown to be absent in most mental acts; the motivational struc-
ture of mental acts is, rather, associated with error signals and self-directed doubting.
NOTES
1. I thank Dick Carter for his critical comments on an earlier version and for his linguistic help.
I am grateful to Anne Coubray, Pierre Jacob, Anna Loussouarn, Conor McHugh,
Kirk Michaelian, and the members of the Action-Perception-Intentionality-Consciousness (APIC) seminar for helpful feedback. This research was supported by the
DIVIDNORM ERC senior grant no. 269616.
2. Geach 1957, 1.
3. See Davidson 1980; Brand 1984; Mele 1997; Proust 2001; Peacocke 2007.
4. Words such as “willing” or “trying” are sometimes taken to refer to independent
mental acts. This does not necessarily involve a regress, for although tryings or will-
ings are caused, they don’t have to be caused in turn by antecedent tryings or willings.
See Locke [1689] 2006, vol. 2, §30, 250; Proust 2001; Peacocke 2007.
5. See Williams 1973, 136–151.
6. Obviously, one can try to remember a proper name under a description, such as
“John’s spouse.” But this does not allow one to say that the content of one’s memory
is prespecified in one’s intention to remember: one cannot decide to remember that
the name of John’s spouse is Mary.
7. The difference between a bad plan and an illusory attempt at planning is that, in the
first case, the subject is sensitive to the associated normative requirements, but fails
to abide by them, while, in the second, the subject fails to be sensitive to them.
8. Dretske 2000, 250. See Christine Korsgaard (1997) for a similar view, where norma-
tivity in instrumental reasoning is derived from the intention to bring about a given
end.
9. Broome 1999, 8. See also Broome 2001.
10. The case of an elderly man accepting that he has a low chance of cancer, to avoid being
upset, discussed in Papineau (1999, 24), is a case of acceptance aiming at emotional
control; there is no problem accommodating this case within a normative require-
ment framework: the norm in this case constitutes a mental act of emotional control;
it requires the acceptance to be coherent with the emotional outcome and relevant to it.
11. See Evans 1990.
12. See, in particular, Kelley and Jacoby 1989; Kelley and Lindsay 1993; Whittlesea
1993.
13. For a review, see Smith et al. 2003.
14. Koriat 2000; Proust 2007; Hookway 2003, 2008; De Sousa 2009.
15. It is an empirical project to identify the informational sources that are subpersonally
involved in generating embodied epistemic feelings (Koriat 2000). Temporal cues,
having to do with the onset and swiftness of processing, as well as the overall dynamic
pattern of the mental episode, must also contribute to forming a global impression,
which reliably correlates with an epistemically calibrated outcome.
16. See Damasio et al. 1996.
17. Elgin 2008.
18. See, e.g., Strawson 2003; Mele 2009; Dorsch 2009; Carruthers 2009.
19. See Mele 2009, 29, for a similar argument.
20. See Strawson 2003.
21. See Ryle 1949 for a general presentation of this argument, and Proust 2001 for a
response.
22. See Davidson 1980; Brand 1984.
23. Cf. Searle 1983.
24. Cf. Campbell 1999; Gallagher 2000.
25. See Wolpert et al. 2001. For an extension to mental action, see Feinberg 1978.
26. See Hampton 2001; Smith et al. 2003. For a skeptical analysis of this evidence, see
Carruthers 2008.
27. See Krams et al. 1998.
REFERENCES
Aristotle. 2006. Metaphysics Theta. Edited by S. Makin. Oxford: Clarendon Press.
Brand, M. 1984. Intending and acting. Cambridge, MA: MIT Press.
Broome, J. 1999. Normative requirements. Ratio, 12, 398–419.
Broome, J. 2001. Are intentions reasons? And how should we cope with incommensu-
rable values? In C. Morris and A. Ripstein (eds.), Practical rationality and preference:
Essays for David Gauthier, 98–120. Cambridge: Cambridge University Press.
Campbell, J. 1999. Schizophrenia, the space of reasons, and thinking as a motor process.
Monist, 82, 609–625.
Carruthers, P. 2008. Meta-cognition in animals: A skeptical look. Mind and Language, 23,
58–89.
Carruthers, P. 2009. Action-awareness and the active mind. Philosophical Papers, 38,
133–156.
Damasio, A. R., Everitt, B. J., and Bishop, D. 1996. The somatic marker hypothesis and the
possible functions of the prefrontal cortex [and discussion]. Philosophical Transactions;
Biological Sciences, 351 (1346), 1413–1420.
Davidson, D. 1980. Essays on actions and events. Oxford: Oxford University Press.
Dorsch, F. 2009. Judging and the scope of mental agency. In L. O’Brien and M. Soteriou (eds.), Mental actions, 38–71. Oxford: Oxford University Press.
Dretske, F. 2000. Norms, history and the constitution of the mental. In Perception, knowl-
edge and belief: Selected essays, 242–258. Cambridge: Cambridge University Press.
Elgin, C. Z. 2008. Emotion and understanding. In G. Brun, U. Doguoglu, and D. Kuentzle
(eds.), Epistemology and emotions, 33–50. Aldershot, Hampshire: Ashgate.
Evans, J. 1990. Bias in human reasoning: Causes and consequences. London: Psychology
Press.
Feinberg, I. 1978. Efference copy and corollary discharge: Implications for thinking and
its disorders. Schizophrenia Bulletin, 4, 636–640.
Gallagher, S. 2000. Self-reference and schizophrenia. In D. Zahavi (ed.), Exploring the self,
203–239. Amsterdam: John Benjamins.
Geach, P. 1957. Mental acts: Their content and their objects. London: Routledge and Kegan
Paul.
Hampton, R. R. 2001. Rhesus monkeys know when they remember. Proceedings of the
National Academy of Sciences U.S.A., 98, 5359–5362.
Hookway, C. 2003. Affective states and epistemic immediacy. Metaphilosophy, 34, 78–96.
Reprinted in M. Brady and D. Pritchard (eds.), Moral and epistemic virtues, 75–92.
Oxford: Blackwell, 2003.
Hookway, C. 2008. Epistemic immediacy, doubt and anxiety: On a role for affective states
in epistemic evaluation. In G. Brun, U. Doguoglu, and D. Kuentzle (eds.), Epistemology
and Emotions, 51–66. Aldershot, Hampshire: Ashgate.
Kelley, C. M., and Jacoby, L. L. 1998. Subjective reports and process dissociation: Fluency, knowing, and feeling. Acta Psychologica, 98, 127–140.
Kelley, C. M., and Lindsay, D. S. 1993. Remembering mistaken for knowing: Ease of
retrieval as a basis for confidence in answers to general knowledge questions. Journal of
Memory and Language, 32, 1–24.
Koriat, A. 2000. The feeling of knowing: Some metatheoretical implications for consciousness and control. Consciousness and Cognition, 9, 149–171.
Korsgaard, C. 1997. The normativity of instrumental reason. In G. Cullity and B. Gaut
(eds.), Ethics and practical reason, 215–254. Oxford: Clarendon Press.
Krams, M., Rushworth, M. F. S., Deiber, M.-P., Frackowiak, R. S. J. and Passingham, R. E.
1998. The preparation, execution and suppression of copied movements in the human
brain. Experimental Brain Research, 120, 386–398.
Locke, J. [1689] 2006. An essay concerning human understanding. 2 vols. London: Elibron
Classics.
McCann, H. 1974. Volition and basic action. Philosophical Review, 83, 451–473.
Mele, A. R. 1997. Agency and mental action. Philosophical Perspectives, 11, 231–249.
Mele, A. R. 2009. Mental action: A case study. In L. O’Brien and M. Soteriou (eds.),
Mental actions and agency, 17–37. Oxford: Oxford University Press.
Papineau, D. 1999. Normativity and judgment. Proceedings of the Aristotelian Society,
Supplementary Volumes, 73, 17–43.
Peacocke, C. 2007. Mental action and self-awareness (I). In J. Cohen and B. McLaughlin
(eds.), Contemporary debates in the philosophy of mind, 358–376. Oxford: Blackwell.
Proust, J. 2001. A plea for mental acts. Synthese, 129, 105–128.
Proust, J. 2007. Metacognition and metarepresentation: Is a self-directed theory of mind
a precondition for metacognition? Synthese, 2, 271–295.
Proust, J. 2009. The representational basis of brute metacognition: A proposal. In R. Lurz
(ed.), Philosophy of animal minds: New essays on animal thought and consciousness, 165–
183. Cambridge: Cambridge University Press.
Ryle, G. 1949. The concept of mind. London: Hutchinson.
Searle, J. R. 1983. Intentionality: An essay in the philosophy of mind. Cambridge: Cambridge
University Press.
Smith, J. D., Shields, W. E., and Washburn, D. A. 2003. The comparative psychology
of uncertainty monitoring and metacognition. Behavioral and Brain Sciences, 26,
317–373.
Sousa, R. de. 2009. Epistemic feelings. Mind and Matter, 7, 139–161.
Strawson, G. 2003. Mental ballistics or the involuntariness of spontaneity. Proceedings of
the Aristotelian Society, 77, 227–256.
Whittlesea, B. W. A. 1993. Illusions of familiarity. Journal of Experimental Psychology:
Learning, Memory and Cognition, 19, 1235–1253.
Williams, B. 1973. Deciding to believe. In Problems of the self, 136–147. Cambridge:
Cambridge University Press.
Wolpert, D. M., Ghahramani, Z., and Flanagan, J. R. 2001. Perspectives and problems in
motor learning. Trends in Cognitive Sciences, 5, 487–494.
PART FOUR
Decomposed Accounts
of the Will
15
Managerial Control and Free Mental Agency
TILLMANN VIERKANT
In this chapter it is argued that insights from recent literature on mental agency
can help us to better understand what it is that makes us free agents. According to
Pamela Hieronymi (2009), who developed Richard Moran’s (2001) work on men-
tal agency, there are two quite different ways in which we can be mental agents—
either through “evaluative control” or through “managerial control.” According to
Hieronymi, managerial control works very much like other forms of intentional
action, whereas evaluative control is different and distinctive of mental agency. The
first section of this chapter will discuss why the distinction introduced by Hieronymi
is a good one, and will then go on to argue that Hieronymi nevertheless underes-
timates the importance of managerial control. This is because, as the chapter will
argue, managerial control is central to free mental agency.
The chapter argues that managerial control is crucial for the will not because it
enhances our understanding of our reasons, as one might easily assume, but because
it creates an opportunity for the individual to change their beliefs and desires at will
despite their own first-order rational evaluations. The discussion of the distinction
between evaluative and managerial/manipulative control in Hieronymi will help
us to see that there is no such thing as intentional rational evaluation, and what the
intentional control of the mental is really good for. The last section of the chapter
then tries to clarify what exactly is required for managerial control, in order for it to
fulfill its function for the will and how this account compares to seemingly similar
moves made by Michael Bratman and Richard Holton.
MENTAL ACTIONS
Hieronymi (2009) has argued that hierarchical accounts of mental agency fail to
take into account what is at the heart of mental agency. This is because, according
to Hieronymi, there are two distinct forms of mental agency. One, which she refers
to as managerial/manipulative control,1 works very much like bodily agency. The
other form is referred to as evaluative control and lacks some of the most important
normally at least we do not acquire attitudes like beliefs or intentions in the same
way as we achieve our aims in bodily actions, because there does not seem to be the
same element of intentional control or the reflective distance typical of that kind of
control.
Obviously, however, there are nonstandard cases that complicate the picture.
Kavka carefully rules out hypnosis in order to get the puzzle going, and in the
belief case there is, among many others, the famous example of Pascal’s wager. Pascal
argued that it is rational to acquire the belief that God exists, even if there is little
evidence for that belief. Faced with Moore’s paradox, Pascal advises that one can
still acquire the belief by going to mass, praying rosaries, and the like. What Pascal
advises here is, in effect, a form of self-conditioning. So, it is quite possible
to acquire mental attitudes for their own sake. Nevertheless, when an agent does
acquire an attitude in a managerial way, she bypasses her own rationality, and her
mind becomes a simple psychological object to manipulate. This is exactly what we
should expect, if mental agency is modeled on bodily agency, but it seems clear that
this is not the ordinary way of acquiring beliefs or intentions.
Because of this, and because of the very strong intuition that deliberating is some-
thing that we do rather than something that merely happens to us, Hieronymi argues
that we should introduce a second form of mental agency that better describes the
deliberative process, even if it fails to exhibit many of the characteristics that we
ordinarily associate with agency. This is why she introduces evaluative control.
Hieronymi has a positive and a negative reason for insisting that evaluative control
really is a form of agency. The positive reason is that evaluative control is nothing
other than the agent’s rational machinery in action. The reflective distance that is so
important for bodily action simply does not seem adequate when talking about the
activity of the mind. Our deliberations are not something external to us, but express
our understanding of the world. When we want to find out whether we judge that
p we do not introspect to find our judgment there, but we look at the evidence for
or against p.
The negative reason Hieronymi gives is that there simply is no alternative adequate
account that would allow us to understand most of our judgings and intendings as
actions. This is because, even though she acknowledges that there is a second form
of mental agency—that is, the managerial control mentioned earlier—she does not
believe that this managerial control can explain most of our judgings or intendings,
nor does she believe that managerial control is in any case a completely indepen-
dent form of mental agency. It is easy to see why: managerial control, like ordinary
bodily action, requires an intention that can help to bring about the desired effect
in the world. In managerial control the relevant intention would have as its con-
tent an intention or judgment that the agent would like to acquire. The problem is
obviously that the intention that is controlling the managerial act is itself in need of
being formed. Even in the highly unlikely case where the formation of this intention
was also done in a managerial way, we have obviously entered a vicious circle. In the
end, there will have to be an intention that has been brought about without the use
of a previous intention to acquire the intention, and this intention will presumably
be acquired by an act of evaluative control. In effect, then, every instance of manage-
rial control will require at the very least one instance of evaluative control to get off
the ground.3 The agent has to form the intention evaluatively to bring it about that
she will acquire the relevant attitude. Pascal, for example, has to evaluatively acquire
the judgment that it would be best, all things considered, to have the belief in God.
Similarly, the agent in the toxin puzzle has to judge evaluatively that she should hyp-
notize herself in order to acquire the intention to drink the toxin.
Evaluative control, then, Hieronymi concludes, is the basic form of mental
agency. It is the way in which we ordinarily acquire attitudes like beliefs and inten-
tions, and we should not be worried about this because in contrast to managerial
control, in evaluative control we express ourselves as rational deliberating beings.
REFLECTIVE CONTROL
Evaluative control is indispensable, because on Hieronymi’s picture there is no
alternative account of mental agency that could fulfill the functions of evaluative
control and have at the same time the same features as ordinary bodily agency. This
claim, one might think, is wrong, though: there seems to be an alternative, which
she labels reflective control. It seems to be quite possible to intentionally reflect and
thereby change mental attitudes. An agent might, for example, think about placing a
bet on her team winning the Champions League. She might then form the intention
that she should reexamine her reasons for her belief. It seems that she can now easily
and fully intentionally think about the individual reasons she has for her belief. For
example, are the forwards really that strong? Are the opponents really vulnerable on
the left wing? She might, again fully intentionally, go through a list of things that are
important for success and of which she might not have thought so far (proneness
to injury, whether the referee likes the aggressive style of her team, etc.). Now the
great thing is that even though it seems that these are things that an agent clearly
can do intentionally, they seem as well to exhibit the same characteristics as evalu-
ative control. If the agent were to find upon reflection that her team was not quite
as strong as she originally thought, then she would change her belief that the team
will win the Champions League. It does not seem that this has happened in a way
that bypasses rationality as in the managerial scenario, but that reflection makes the
agent more rational because she now does not purely deliberate about the content
of her belief, but explicitly about the question whether or not the attitude in ques-
tion is justified.
But even though reflective control does seem very tempting, there is an obvious
problem. How exactly does reflection bring about a change in attitude? Obviously,
we can intentionally reflect on our reasons for a specific belief, but whether or not
this reflection will bring about a change in attitude depends on our rational evalua-
tion of the reasons, and it is not up to us (in the intentional sense) how that evalu-
ation goes. Hieronymi therefore suspects that at the heart of reflective control we
will find an exercise in evaluative control, which is doing all the work and which
obviously does not have the intentional characteristics that seemed to make reflec-
tive control so attractive.
So is there a way, then, to make sense of reflective control without falling back on
evaluative control? The most promising account, according to Hieronymi, would
be a hierarchical account. According to such an account, when one reflects on one’s
Managerial Control and Free Mental Agency 287
reasons for a belief that p and finds that those reasons are not sufficient, one will
then form a second-order belief that the first-order belief is unjustified. Once one
has this belief, all that needs to be the case for the first-order belief to change is
that the first-order belief is sensitive to that second-order belief. This sounds like a
good account, but Hieronymi’s next move notes an obvious problem. The account
does not give us a story about how it is that the second-order belief will change the
first-order belief. Now, once we look closer at how this could happen, it becomes
clear that this will not happen in a way that resembles intentional control.
Let’s take stock of what we have discussed so far. We followed Hieronymi’s convincing
defense of evaluative control as a specific form of mental agency that is importantly
different from ordinary bodily agency. We saw as well that there is a second form
of mental agency (managerial control), and that this form of mental agency can
be modeled successfully on ordinary intentional actions. However, as Hieronymi
pointed out, managerial control requires at the very least one instance of evaluative
control in order to get off the ground and is therefore not a completely indepen-
dent alternative to evaluative control. We then wondered whether reflective control,
understood as a higher-order account of mental agency, could not fulfill the same
function as evaluative control, while at the same time being a form of intentional
control. A closer look revealed that reflective control necessarily has at its heart acts
of evaluative control, and that these obviously cannot be modeled as intentional
control.
is a necessary condition for free mental agency. Two immediate objections might
be raised here.
First of all, these are rather contrived examples. It is obviously true that agents
can manipulate themselves in the way the examples suggest, but the overwhelm-
ing majority of our mental acts are not like that. We do not normally think about
how we can influence our attitudes, but we think about first-order content. We are
interested in what is true about our world and the best thing to do in the world
we live in. A focus on our own attitudes sounds as unlikely as it sounds strangely
narcissistic.
Second, even if one were convinced of the importance of managerial control, it
would still be the case that managerial control is always parasitic on at least one act
of evaluative control. Pascal goes to mass because he evaluated that this is the best
thing to do if he wants to acquire the desired belief, and the agent in the toxin puzzle
evaluatively forms the intention to hypnotize herself to get the cash.
Let us look at this second objection first. If it really were the case that the pro-
posed account suggested that managerial control was a completely independent
form of mental agency, then this would be a knock-down argument, but this is not
what the claim amounts to. Rather, the account accepts that the most basic form of
mental agency is evaluative control, but it wants to add that evaluative control on its
own is not enough for free mental agency.
Let us now move on to the first objection. In answering it, we will also encounter the central argument for the main claim of the chapter, that is, that
managerial control is necessary for free mental agency. In order to get the answer off
the ground, it will be helpful to have a closer look at an idea that Victoria McGeer
(2007) explores in her essay “The Moral Development of First-Person Authority,”
because her account is in many important ways similar to the one developed here. In
addition, some of the topics McGeer discusses will prepare us for the crucial discus-
sion of what it is that is important about managerial control.
McGeer, in contrast especially to Moran, does think that managerial control is
extremely important for the moral development of an imperfect rational creature.
She identifies two problems for such an agent. First, there is the problem of rampant rationalization.4 Agents might be able to rationalize their behavior
even though it is perfectly obvious to a neutral observer that their behavior is actu-
ally controlled by different motives from the ones that they ascribe to themselves.
Second, even if an agent is aware of the right thing to do most of the time, this
does not mean that there cannot be situations where their judgment is changed in
undesirable ways because of the strong affordances of the situation. McGeer dis-
cusses the example of a Middlemarch character who sincerely believes in a marriage
between two friends of his, but who has the problem that he is quite fond of the girl
himself. In order to stop himself from giving in to temptation, the character reveals
his feelings for the girl to his male friend. By confessing his feelings, the agent makes
it impossible for himself to pursue his desire for the girl. In other words, the agent
uses his knowledge about his potentially changing psychology in order to prevent
the feared changes.
Taking McGeer’s musings about her Middlemarch characters as a starting point,
we can now return to the worry that managerial control—even though clearly
Managerial Control and Free Mental Agency 289
useful—is simply not of enough relevance in our lives to justify the strong claim of
making it a necessary condition for free mental agency.
The crucial point here is that even though Pascal and the toxin puzzle describe
very unusual situations, McGeer’s story does not. In effect, McGeer describes a
case of self-control. The character binds himself to the mast in a way that is struc-
turally very similar to the archetype of all self-control stories. Like Odysseus, the
character knows that he cannot be sure that his rational evaluation of the situation
will remain constant if the wrong circumstances should arise. Self-control delivers
the wiggle room mentioned earlier because it allows the agent to keep believing p,
even under circumstances where the agent normally would tend to reevaluate and
believe not-p.
Now if it were the case that managerial control is necessary for us to be able to
exercise future-directed acts of self-control, then it seems very plausible to main-
tain that it is a necessary condition for free mental agency, because I take it to be
uncontroversial that the ability for future-directed self-control is at least a necessary
condition for free mental agency. Before we move on, one very important difference
between the account defended here and McGeer’s should be pointed out. McGeer
argues that an ideally rational creature might not need these self-control techniques,
and this is one important point where the accounts differ. Ideal rationality would
not help Pascal or the agent in the toxin puzzle. Even for ideally rational agents, a
knowledge of their own psychology is important, because sometimes mental states
matter to us as states, rather than because of their content.
The most obvious problem with this account is that it seems simply false to say
that future-directed self-control necessarily requires managerial control. The next
two sections will flesh out this worry, first by discussing exactly what ability is
required for managerial control and why this ability might be important for future-
directed self-control, and second by looking at Michael Bratman’s and Richard
Holton’s work on self-control. Because both their accounts do not seem to require
managerial control, the rest of the chapter will then try to justify why it is neverthe-
less necessary.
a false belief in the future, even if she has a true belief about the same matter at the
moment.
Both abilities seem crucial for future-directed self-control. As long as agents cannot understand that a belief they currently hold might nevertheless come to be considered false by them in the future, self-control seems pointless. Only an understanding of the nature of misrepresentation allows an agent to be aware of the need for
self-regulation. A gambler who has a habit of betting large sums of money on her
team to win the Champions League might in a quiet moment know that the team
is not really good enough, but whenever she goes past the bookmakers, she might
be overcome by the irrational belief that this year things will be different. Now, as
long as she does not understand that beliefs can change from true to false, she will
not be able to see that, even at the moment when she knows that her team has no chance of winning, she might have to take precautions against acquiring the
false belief again. She will see no need to do anything as she knows at that moment
without any doubt that betting on the team would be the wrong thing to do and
has absolutely no intention of doing so. Only if she thinks not only about the evidence for the belief but also about the belief itself, as a mental attitude that can misrepresent the state of the world, will she realize that beliefs are vulnerable to misleading evidence or situational effects. Only then can she understand that she has to be worried about
doing the wrong thing in the future, even though she knows what the right thing to
do now is and can take steps to avoid doing the wrong thing.
HOLTON ON SELF-CONTROL
One reason to doubt that Bratman’s account of intentions is all that there is to
self-control can be constructed from Richard Holton’s (2009) work on self-control.
According to Holton, forming a resolution7 in order to prepare against future temptations is not about providing new reasons for action, as Bratman’s account would
have it (these new reasons are on Bratman’s account nothing more than the inten-
tions themselves), but simply reduces the ability of the agent to take new reasons
into account. It makes the agent in effect less judgment sensitive. Judgment sensi-
tivity here means the ability to reevaluate one’s beliefs if the environment provides
reasons to do so.
Even more important, according to Holton, this making oneself less judgment
sensitive is something that the agent does, and it is clear that what Holton has in
mind here is intentional action rather than evaluative control.
This seems very much in the spirit of the account here. On Holton’s account, as
on the one defended here, following through on one’s resolutions requires the agent
to be able to break free of her natural evaluative tendencies by means of intentional
control of her mind.
Holton’s main argument for his account is the phenomenology of battling temp-
tation. If Bratman’s account were right, it ought to be the case that battling tempta-
tion feels like evaluating two desires and then naturally going with whichever turns
out to be stronger. In reality, though, fighting temptation really does seem to involve
a constant intentional trying to keep one’s mind from reevaluating. One has to be
quite revisionist to deny that the phenomenology of fighting temptation involves intentional tryings.
As the account defended here also insists on the importance of the intentional
control of the mind for self-control, is my account simply a version of Holton’s
view?
The answer to this question is no, because even though, like Holton, this
account does emphasize the role of intentional action for self-control, there is one
decisive difference. On Holton’s account, trying to actively reduce judgment sensi-
tivity does not imply that we form resolutions in order to manipulate our minds.
One might, for example, form the resolution to stop smoking. It seems quite pos-
sible to form this resolution without ever thinking about one’s mental states. In fact,
this seems to be the norm. One will think about the associated health risks and then
vow to not smoke another cigarette ever again. This seems like a very fair point to
make, but does this not fatally undermine the claim defended here that self-control
requires manipulating mental states as states? This is a crucial worry and requires a
new section.
292 DECOMPOSED ACCOUNTS OF THE WILL
attitude by changing the conditions under which her evaluative processes are taking
place, but without an attitude-directed intention. In such cases the agent is obvi-
ously not interested in acquiring the specific attitude for the attitude’s sake, but does
know that certain intentional behaviors have desirable first-order effects.9 These
effects can be things, like getting the bigger reward, not smoking, and so on. The
agent does not have to know that these effects are obtained by means of the acquisi-
tion of a mental state.10
For Hieronymi’s purposes, this distinction might not be crucial because she is
mainly interested in showing that evaluative control is a specific form of mental
agency, and it is certainly true that the intentional part of this—unaware manage-
rial control—would not bring about any change in attitude without the evaluative
component.
But the distinction matters here. It matters because it is necessary in order to clar-
ify the claim made in this chapter. The form of managerial control we are interested
in has to be one where Hieronymi’s statement that we are intentionally manipulat-
ing mental attitudes like ordinary objects is literally true, because only once this is
the case will the agent be able to understand that attitudes can be false, can change
over time, and so on—and, as we argued earlier, these are necessary elements of
self-control in the sense that we are after. So, if self-control can be exercised by
means of unaware managerial control, then our claim that the intentional targeting
of attitudes is a necessary condition for self-control collapses.
In addition, once we have introduced this distinction, we also obtain an explanation of where exactly the difference between Holton and Baumeister and the
position defended here lies. Holton and Baumeister argue that willing is intentional
and effortful, but the scenarios they describe are clearly not ones where subjects are
manipulating their attitudes. As mentioned earlier, bringing about a mental state that
will easily allow you to master the self-control task on Holton’s model is not about
the will at all, because as soon as the manipulation is successful, the characteristics
of effort and depletion will vanish. It seems clear that Holton and Baumeister, in the
terminology used here, think of the will mainly as a form of unaware managerial con-
trol. In these cases, subjects are trying intentionally to evaluate a first-order proposi-
tion in a specific way. Obviously, as Hieronymi told us, that is impossible; you can
intentionally focus on or rehearse reasons for a specific action, but you cannot intentionally evaluate. Attempting to do so does, however, have an attitude-directed effect.
It can help to bring it about that the agent will evaluate the situation differently.11
It does this not by bringing new material to the evaluation but by changing the eval-
uator (e.g., by making it less interested in new evidence, as in Holton’s scenario of
self-control).12
However, the agent in this scenario is not aware of what it is that they are doing.
And that means that such tools are much less flexible and effective than the tools used in aware managerial control. If that is the right way to understand such
acts of behavior control, then in one sense they are simply less sophisticated versions
of real self-control. They achieve their aims by changing an attitude, rather than by
providing new evidence for content evaluations. However, they are obviously not
intentionally directed at the attitude itself. If that is right, then it seems implausible to
exclude the more sophisticated versions from the will and to describe them as mere
tricks. On the other hand, however, there obviously is a major difference between
the two mental tools. Obviously, once you understand what it is that you are doing,
the level of control and flexibility is many times higher than before, and that is why
the claim is justified that this very general form of theoretical self-awareness is nec-
essary for free mental agency, while the ability to control behavior with the Holton
tool on its own is not good enough.
Finally, one common objection to counting self-control by manipulation, as opposed to behavioral control by sheer effort, as part of the will has to be discussed here.
This objection states that effortful control is active, while in manipulation the agent
gives up control and becomes passive. As soon as the manipulation is successful, the
agent cannot go back. On closer examination, this is really quite a weak argument.
On the one hand, it is obviously not true that self-manipulations cannot be revers-
ible or conditional, and on the other, control by sheer effort obviously does make
the agent more passive, in the sense that she will be less judgment sensitive to good
reasons as well as to temptations. Both forms of control are about introducing a cer-
tain element of passivity—that is in fact the very point of them. How durable that
intentionally introduced passivity should be depends obviously on the situation,
but it is again true that understanding managerial control as just that will help to
optimize strategies. Once we see this, the argument is turned on its head: once the agent understands what it is she is doing, it becomes much easier to calibrate the right mixture of flexibility and rigidity. Once again, it makes
sense to argue that only aware managerial control is good enough for the kind of
self-control that is intuitively a necessary condition for free mental agency.13
self-control—then it was argued that the only way to achieve this coherently is to
put forward the account defended here.
The chapter concentrated on presenting the main idea behind the account and
discussed some necessary clarifications and obvious objections, but there are many
more things that one could add in favor of the account. Here is a loose collection
of them.
If the account is right, it would give us an explanation for why free agency is something that we intuitively think only humans are capable of. As yet there seems to be no clear evidence that any species other than humans is able to metarepresent—and metarepresentation is a necessary condition for aware managerial control.
The account also has a story to tell about what the function of making people
responsible for their mental states might be. It is true that we tell criminals that they should understand the error of their ways, but this has always been a big ask.
Philosophers and most ordinary people struggle to find a foolproof rational way
of arguing that doing or being good is also being rational. So why do we think that
criminals should be able to do it? However, what has a chance of succeeding is an
exercise in managerial attitude acquisition, which helps the potential reoffender overcome the reasoning that previously seemed to make the offense rational for her. Aware managerial control is something that we can teach people to do.
Interestingly, it is a sociological fact that the genre of books that is supposed to
help people to exercise self-control is already one of the biggest sellers on the mar-
ket.14 Many people look down on the self-help genre, but many more swear by it.
This is actually not that surprising, because the advice given in these books maps quite nicely onto the findings of serious cognitive science labs like Gollwitzer’s; that is, it works by helping people to exercise managerial control.
Finally, the account has some interesting consequences. It was argued that self-
blindness is not a problem for the account, as long as the agent understands the
nature of representation, but obviously, new knowledge in the sciences does allow
us to be far more effective in this form of mind and self-creation. This has already led to enormous changes in the way we manipulate our minds, for example, through psychoactive drugs or cognitive behavioral therapy. In this respect, the account is in the
end about breaking down the boundary between forms of self-control that are sup-
posed to be internal to the agent, like the mental muscle phenomena that Holton
and Baumeister describe, and the external scaffolding that humans use to aid their self-control. This chapter shows that both forms use the same mechanism and
that, if anything, the aware use of external scaffolding is a more sophisticated form
of the will than the simple straining of the supposed mental muscle.15
NOTES
1. Managerial and manipulative control differ only insofar as in managerial control the
agent influences the environment in such a way that a normal evaluative process
brings about the desired result, whereas in manipulative control the bringing about of
the judgment does not depend on a normal functioning of the evaluative machinery.
From here on, I will label both forms managerial.
2. They were not looking under the hood, in Moran’s apt phrase (Moran 2001).
3. Hieronymi argues that actually two acts of evaluative control are required. The sec-
ond act consists in the evaluation that the intentionally brought about circumstances
cause. This seems plausible enough for managerial control (see distinction in note
2), but there is an ambiguity here for manipulative control, where the bringing about
of the attitude does not seem to necessarily require evaluative control at this stage.
Imagine, e.g., that the belief is surgically implanted. It is not clear that in such a sce-
nario there has to be initially a second act of evaluation.
4. I owe this term to Andreas Paraskevaides.
5. I.e., the ability to pass the false belief task. See Perner 1993.
6. This is obviously most fitting for belief, but arguably it works for intention as well.
Intentions always contain a judgment about what is the best thing to do, and obvi-
ously this judgment can go wrong. Understanding this is crucial if one wants to
implant an intention for an intention’s sake, rather than for the sake of its content.
7. Holton’s term for an intention formed in order to ensure that one sticks to one’s plans
in the face of temptation.
8. Even though the ability to intentionally guide deliberation is no mean feat. In fact,
there is good reason to think that this controlled deliberation is what gives humans
a form of thought regulation that other animals do not have. However, it is still true
that this form of controlled thinking does not require metarepresentation. I discuss
the role of intentionally controlled deliberation in detail in a forthcoming paper
(Vierkant 2012).
9. Unaware managerial self-control is itself a very broad term. In one sense, it includes
most intentional behaviors that there are, because most intentional behaviors have
consequences for the mental states of the agents. However, there are some forms of
unaware managerial control that are far more sophisticated and effective in control-
ling minds as a side effect than others. There is no room to elaborate on the various
forms of unaware managerial control here, but I do develop this point in (Vierkant &
Paraskevaides 2012).
10. There is no space here to expand on this distinction, but it would seem to be a worth-
while undertaking. There has been a very lively debate on which mental actions
can be performed intentionally (e.g., Strawson 2003; Pettit 2007). In most of these
debates, however, it is presumed that we know what it is that we are doing when we
manage our attitudes or bring it about that we have better conditions for our evalua-
tions. Obviously, this knowledge is theoretically available in humans, but it is far less
clear whether it plays a role in many of these managerial acts. In fact, it seems not that
unlikely that the intuition of reflective control is created exactly by the fact that very
many managerial acts are not understood as such by the agent.
11. This very short sketch of unaware managerial control only scratches the surface of a
huge fascinating field of cognitive science. There are probably many stages on the way
to making an animal aware of its own mentality.
12. There is an interesting link here to the discussion on metacognition in animals. See,
e.g., Smith et al. 2003.
13. How knowledge of our psychology could enable us to optimize self-control can be
seen in the chapter by Hall and Johansson, this volume.
14. For an interesting analysis of the self-help genre as the contemporary form of talking
about the will, see Maasen et al. 2008.
15. For a way of fleshing out this idea of how we could use external scaffolding to support
the will, see Hall and Johansson, this volume.
REFERENCES
Baumeister, R. (2008). Free will, consciousness and cultural animals. In J. Baer (ed.), Are
we free, 65–85. New York: Oxford University Press.
Bratman, M. (1987). Intention, plans, and practical reason. Cambridge, MA: Harvard
University Press.
Faude-Koivisto, T., Würz, D., and Gollwitzer, P. M. (2009). Implementation intentions:
The mental representations and cognitive procedures of IF-THEN planning. In W.
Klein and K. Markman (eds.), The handbook of imagination and mental simulation, 69–
86. New York: Guilford Press.
Hieronymi, P. (2009). Two kinds of mental agency. In L. O’Brien and M. Soteriou (eds.), Mental actions, 138–162. Oxford: Oxford University Press.
Holton, R. (2009). Willing, wanting, waiting. New York: Oxford University Press.
Maasen, S., Sutter, B., and Duttweiler, S. (2008). Wille und Gesellschaft oder ist der Wille
ein soziales Phaenomen. In T. Vierkant (ed.), Willenshandlungen, 136–169. Frankfurt:
Suhrkamp.
McGeer, V. (2007). The moral development of first-person authority. European Journal of
Philosophy 16: 81–108.
McHugh, C. (2011). Judging as a non-voluntary action. Philosophical Studies 152:
245–269.
Moran, R. (2001). Authority and estrangement: An essay on self-knowledge. Princeton, NJ:
Princeton University Press.
Perner, J. (1993). Understanding the representational mind. Cambridge, MA: MIT Press.
Pettit, P. (2007). Neuroscience and agent control. In D. Ross (ed.), Distributed cognition and the will, 77–91. Cambridge, MA: MIT Press.
Smith, J. D., Shields, W. E., and Washburn, D. (2003). The comparative psychology
of uncertainty monitoring and metacognition. Behavioral and Brain Sciences 26:
317–373.
Strawson, G. (2003). Mental ballistics or the involuntariness of spontaneity. Proceedings of
the Aristotelian Society 103: 227–257.
Vierkant, T. (2012). What metarepresentation is for. In J. Brandl, J. Perner, and J. Proust
(eds.), Foundations of metacognition. Oxford: Oxford University Press.
Vierkant, T., and Paraskevaides, A. (2012). How social is our understanding of minds? In
F. Paglieri and C. Castelfranchi (eds.), Consciousness in interaction, 105–124. Amsterdam:
John Benjamins.
Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: MIT Press.
16
LARS HALL, PETTER JOHANSSON, AND DAVID DE LÉON
quitting smoking, or on general desires like becoming a more creative and lovable
person? One class of answers to these questions rings particularly empty: those
are the ones that in one way or another simply say, “just do it”—by acts of will, by
showing character, by sheer motivational force, and so forth. These answers are not
empty because it is difficult to find examples of people who suddenly and dramati-
cally alter their most ingrained habits, values, and manners, seemingly without any
other aid than a determined mind. It is, rather, that invoking something like “will”
or “character” to explain these rare feats of mental control does little more than label
them as successes. The interesting question is, rather, what we ordinary folks do
when we decide to set out to pursue some lofty goal—to start exercising on a regu-
lar basis, to finally write that film script, to become a less impulsive and irritable
person—if we cannot just look inside our minds, exercise our “will,” and simply
be done with it. The answer, we believe, is that people cope as best they can with a
heterogeneous collection of culturally evolved and personally discovered strategies,
skills, tools, tricks, and props. We write authoritative lists and schedules, we rely on
push and pull from social companions and family members, we rehearse and mull
and exhort ourselves with linguistic mantras or potent images of success, and we
even set up ceremonial pseudo-contracts (trying in vain to be our own effective
enforcing agencies). Often we put salient markers and tracks in the environment to
remind us of, and hopefully guide us onto, some chosen path, or create elaborate
scenes with manifest ambience designed to evoke the right mood or attitude (like
listening to sound tracks of old Rocky movies before jogging around the block). We
also frequently latch onto role models, seek out formal support groups, try to lock
ourselves into wider institutional arrangements (such as joining a very expensive
tennis club with all its affiliated activities), or even hire personal pep coaches. In
short, we prod, nudge, and twiddle with our fickle minds, and in general try to dis-
tribute our motivation onto stable social and artifactual structures in the world.
In this chapter we trace the self-control dilemma back to its roots in research on
agency and intentionality, and summarize the evidence we have accumulated in our
choice-blindness paradigm for a vision of the mind as radically opaque to the self. In
addition, we provide a range of suggestions for how modern sensor and computing
technology might be of use in scaffolding and augmenting our self-control abilities,
an avenue that, lamentably, has remained largely unexplored. To this end, we intro-
duce two core concepts that we hope may serve an important role in elucidating the
problem of self-control from a modern computing perspective. First, we introduce
the concept of computer-mediated extrospection, which builds and expands on the
familiar idea of self-observation or self-monitoring. Second, we present the idea of
distributed motivation, as a natural extension of previous discussions of precommit-
ment and self-binding in the self-control literature.
name of the conceptualizer (Levelt, Roelofs, & Meyer, 1999; Postma, 2000); if the
model deals with action selection in general, it is the box containing the prior inten-
tions (Brown & Pluck, 2000, but see also Koechlin & Summerfield, 2007). The rea-
son that such an all-powerful, all-important homunculus is left so tightly boxed up
in these models might simply be a reflection of our scant knowledge of how “central
cognition” works (e.g., Fodor, 2000), and that the box just serves as a placeholder
for better theories to come. Another more likely possibility is that the researchers
often think that intentions (for action) and meaning (for language) in some very
concrete sense are in the head, and that they constitute basic building blocks for
any serious theory of human behavior. The line of inference is that, just because the
tools of folk psychology (the beliefs, desires, intentions, decisions, etc.) are so use-
ful, there must be corresponding processes in the brain that closely resemble these
tools. In some sense this must of course be true, but the question remains whether
intentions are to be primarily regarded as emanating from deep within the brain, or
best thought of as interactive properties of the whole mind. The first option cor-
responds to what Fodor and Lepore (1993) call intentional realism, and it is within
this framework that one finds the license to leave the prior intentions (or the con-
ceptualizer) intact in its big, comfortable box, and in control of all the important
happenings in the system. The second option sees intentional states as patterns in
the behavior of the whole organism, emerging over time, and in interaction with the
environment (Dennett, 1987, 1991a). Within this perspective, the question of how
our intentional competence is realized in the brain is not settled by an appeal to the
familiar “shape” of folk-psychological explanations. As Dennett (1987) writes:
Within this framework, every system that can be profitably treated as an intentional
system by the ascription of beliefs, desires, and so forth, also is an intentional system
in the fullest sense (see Westbury & Dennett, 2000; Dennett, 2009). But, impor-
tantly, a belief-desire prediction reveals very little about the underlying, internal
machinery responsible for the behavior. Instead, Dennett (1991b) sees beliefs and
desires as indirect “measurements” of a reality diffused in the behavioral disposi-
tions of the brain/body (if the introspective reports of ordinary people suggest oth-
erwise, we must separate the ideology of folk psychology from the folk-craft: what
we actually do, from what we say and think we do; see Dennett, 1991c).
However, when reading current work on introspection and intentionality, it is
hard to even find traces of the previously mentioned debate on the nature of propo-
sitional attitudes conducted by Dennett and other luminaries like Fodor and the
Recomposing the Will 301
Churchlands in the 1980s and early 1990s (for a notable recent exception, see
Carruthers, 2009),1 and the comprehensive collections on folk psychology and phi-
losophy of mind from the period (e.g., Bogdan, 1991; Christensen & Turner, 1993)
now only seem to serve as a dire warning about the possible fate of ambitious vol-
umes trying to decompose the will!
What we have now is a situation where “modern” accounts of intentionality instead are based either on concepts and evidence drawn from the field of motor control (e.g., emulator/comparator models; see Wolpert & Ghahramani, 2004; Grush, 2004) or are built almost purely on introspective and phenomenological considerations. This has resulted in a set of successful studies of simple manual actions, such
as pushing buttons or pulling joysticks (e.g., Haggard, Clark, & Kalogeras, 2002;
Moore, Wegner, & Haggard, 2009; Ebert & Wegner, 2010), but it remains unclear
whether this framework can generalize to more complex and long-term activities.
Similarly, from the fount of introspection some interesting conceptual frameworks
for intentionality have been forthcoming (e.g., Pacherie, 2008; Gallagher, 2007;
Pacherie & Haggard, 2010), but with the drawback of introducing a bewildering
array of “senses” and “experiences” that people are supposed to enjoy. For example,
without claiming an exhaustive search, Pacherie’s (2008) survey identifies the fol-
lowing concepts in need of an explanation: “awareness of a goal, awareness of an
intention to act, awareness of initiation of action, awareness of movements, sense of
activity, sense of mental effort, sense of physical effort, sense of control, experience
of authorship, experience of intentionality, experience of purposiveness, experience
of freedom, and experience of mental causation” (180).
While it is hard to make one-to-one mappings of these “senses” to the previous
discussion of intentional realism, the framework of Dennett entails a thorough
skepticism about the deliverances of introspection, and if we essentially come to
know our minds by applying the intentional stance toward ourselves (i.e., finding
out what we think and what we want by interpreting what we say and what we do),
then it is also natural to shift the focus of agency research away from speculative
senses and toward the wider external context of action. From our perspective as
experimentalists, it is a pity that the remarkable philosophical groundwork done
by Dennett has generated so few empirical explorations of intentionality (see Hall
& Johansson, 2003, for an overview). This is especially puzzling because the coun-
terintuitive nature of the intentions-as-patterns position has some rather obvious
experimental implications regarding the fallibility of introspection and possible
ways to investigate the nature of confabulation. As Carruthers (2009) puts it: “The
account . . . predicts that it should be possible to induce subjects to confabulate attri-
butions of mental states to themselves by manipulating perceptual and behavioral
cues in such a way as to provide misleading input to the self-interpretation process
(just as subjects can be misled in their interpretation of others)” (123).
Figure 16.1 A snapshot sequence of the choice procedure during a manipulation trial.
(A) Participants are shown two pictures of female faces and asked to choose which one
they find most attractive. Unknown to the participants, a second card depicting the
opposite face is concealed behind the visible alternatives. (B) Participants indicate their
choice by pointing at the face they prefer the most. (C) The experimenter flips down the
pictures and slides the hidden picture over to the participants, covering the previously
shown picture with the sleeve of his moving arm. (D) Participants pick up the picture and
are immediately asked to explain why they chose the way they did.
the earrings” when the option they actually preferred did not have any). Additional
analysis of the verbal reports in Johansson et al. (2005) as well as Johansson et al.
(2006) also showed that very few differences could be found between cases where
participants talked about a choice they actually made and those trials where the out-
come had been reversed. One interpretation of this is that the lack of differentiation
between the manipulated and nonmanipulated reports casts doubt on the origin of
the nonmanipulated reports as well; confabulation could be seen to be the norm,
and “truthful” reporting something that needs to be argued for.
We have replicated the original study a number of times, with different sets of
faces ( Johansson et al., 2006), for choices between abstract patterns ( Johansson,
Hall, & Sikström, 2008), and when the pictures were presented onscreen in a
computer-based paradigm (Hall & Johansson, 2008). We have also extended the
choice-blindness paradigm to cover more naturalistic settings, and to attribute- and
monetary-based economic decisions. First, we wanted to know whether choice
blindness could be found for choices involving easily identifiable semantic attri-
butes. In this study participants made hypothetical choices between two consumer
goods based on lists of general positive and negative attributes (e.g., for laptops: low
price, short battery-life, etc.), and then we made extensive changes to these attri-
butes before the participants discussed their choice. Again, the great majority of
the trials remained undetected ( Johansson et al., in preparation). In a similar vein,
we constructed a mock-up version of a well-known online shopping site and let the
participants decide which of three MP4 players they would rather buy. This time we
had changed the actual price and memory storage of the chosen item when the par-
ticipants reached the “checkout” stage, but despite being asked very specific questions
about why they preferred this item and not the other, very few of these changes were
detected ( Johansson et al., in preparation). Second, we have also demonstrated the
effect of choice blindness for the taste of jam and the smell of tea in an ecologically
valid supermarket setting. In this study, even when participants decided between
such remarkably different tastes as spicy cinnamon-apple and bitter grapefruit, or
between the sweet smell of mango and the pungent Pernod, less than half of all
manipulation trials were detected (Hall et al., 2010). This result shows that the effect is
not just a lab-based phenomenon: people may display choice blindness for deci-
sions made in the real world as well.
Since the publication of Johansson et al. (2005), we have been repeatedly challenged
to demonstrate that choice blindness extends to domains such as moral reasoning,
where decisions are of greater importance, and where deliberation and introspection
are seen as crucial ingredients of the process (e.g., Moore & Haggard, 2006, comment-
ing on Johansson et al., 2006; see also the response by Hall et al., 2006). In order to meet
this challenge, we developed a magical paper survey (Hall, Johansson & Strandberg,
2012). In this study, the participants were given a two-page questionnaire attached to
a clipboard and were asked to rate to what extent they agreed with either a number
of formulations of fundamental moral principles (such as: “Even if an action might
harm the innocent, it is morally permissible to perform it,” or “What is morally per-
missible ought to vary between different societies and cultures”), or morally charged
statements taken from the currently most hotly debated topics in Swedish news (such
as: “The violence Israel used in the conflict with Hamas was morally reprehensible
304 DECOMPOSED ACCOUNTS OF THE WILL
because of the civilian casualties suffered by the Palestinians,” or “It is morally repre-
hensible to purchase sexual services even in democratic societies where prostitution is
legal and regulated by the government”). When the participants had answered all the
questions on the two-page form, they were asked to read a few of the statements aloud
and explain to the experimenter why they agreed or disagreed with them. However,
the statements on the first page of the questionnaire were written on a lightly glued
piece of paper, which got attached to the backside of the survey when the participants
flipped to the second page. Hidden under the removed paper slip was a set of slightly
altered statements. When the participants read the statements the second time to dis-
cuss their answers, the meaning was now reversed (e.g., “If an action might harm the
innocent, it is morally reprehensible to perform it,” or “The violence Israel used in the
conflict with Hamas was morally acceptable despite the civilian casualties suffered by
the Palestinians”). Because their rating was left unchanged, their opinion in relation
to the statement had now effectively been reversed. Despite concerning current and
well-known issues, the detection rate only reached 50 percent for the concrete state-
ments, and even less for the abstract moral principles.
We found an intuitively plausible correlation between level of agreement with
the statement and likelihood of detection (i.e., the stronger participants agreed
or disagreed, the more likely they were to also detect the manipulation), but even
manipulations that resulted in a full reversal of the scale sometimes remained unde-
tected. In addition, there was no correlation between detection of manipulation and
self-reported strength of general moral certainty.
But perhaps the most noteworthy finding here was that the participants who
did not detect the change also often constructed detailed and coherent arguments
clearly in favor of moral positions they had claimed that they did not agree with
just a few minutes earlier. Across all conditions, not counting the trials that were
detected, 65 percent of the remaining trials were categorized as strong confabula-
tion, with clear evidence that the participants now gave arguments in favor of the
previously rejected position.
We believe the choice-blindness experiments reviewed here are among the
strongest indicators around for an interpretative framework of self-knowledge for
intentional states, as well as a dramatic example of the nontransparent nature of the
human mind. In particular, we think the choice-blindness methodology represents
a significant improvement to the classic and notorious studies of self-knowledge
by Nisbett and Wilson (1977; see Johansson et al., 2006). While choice blindness
obviously puts no end to the philosophical debate on intentionality (because empir-
ical evidence almost never settles philosophical disputes of this magnitude; Rorty,
1993), there is one simple and powerful idea that springs from it. Carruthers (2009)
accurately predicted that it would be possible to “induce subjects to confabulate
attributions of mental states to themselves by manipulating perceptual and behav-
ioral cues in such a way as to provide misleading input to the self-interpretation pro-
cess” (123), but there is also a natural flip side to that prediction—if our systems for
intentional ascription can be fooled, then they can also be helped! If self-interpretation
is a fundamental component in our self-understanding, it should be possible to aug-
ment our inferential capacities by providing more and better information than we
normally have at hand.
Computer-Mediated Extrospection
In our view, one of the most important building blocks to gain reliable knowledge
about our own minds lies in realizing that it often is a mistake to confine judg-
ment of self-knowledge to a brief temporal snapshot, when the rationality of the
process instead might be found in the distribution of information traveling between
minds: in the asking, judging, revising, and clarifying of critical, communal dis-
course (Mansour, 2009). As Dennett (1993) says: “Above the biological level of
brute belief and simple intentional icons, human beings have constructed a level
that is composed of objects that are socially constructed, replicated, distributed,
traded, endorsed (“I’ll buy that!”), rejected, ignored, obsessed about, refined,
revised, attacked, advertised, discarded” (230). The point about critical communal
discourse as a basis for making better self-ascriptions also naturally extends to the
use of new tools and technologies to improve our self-understanding. Studies have
shown that if people are simply asked to introspect (about their feelings, about the
reasons for their attitudes, about the causes of their behavior, etc.), they often end
up with worse judgments than the ones they initially provided (Wilson & Dunn,
2004; Silvia & Gendolla, 2001; Dijksterhuis & Aarts, 2010). On the other hand,
when people are given an enhanced ability to observe their own behavior, they can
often make sizable and profitable revisions to their prior beliefs about themselves
(e.g., by way of video capture in social interaction and collaboration; see Albright
& Malloy, 1999). For example, Descriptive Experience Sampling (DES) is said to
be an introspective research technique. It works by using a portable beeper to cue
subjects at random times, “to pay immediate attention to their ongoing experience
at the moment they heard the beep. They then jot down in a notebook [or PDA]
the characteristics of that particular moment” (Hurlburt & Heavey, 2001, 400; for
other similar techniques, see Scollon, Kim-Prieto, & Diener, 2003; Christensen
et al., 2003). Later, an in-depth interview is conducted in which the experiences are
elaborated upon. What is interesting is that most participants when confronted with
the processed data from the sampling protocols are surprised by some aspects of
the results (e.g., Hurlburt & Heavey, 2001, describe a case of a man named Donald
who discovers in the protocols that he has frequent angry thoughts directed at his
children, something he was completely unaware of before). Similarly, by the use
in human-computer interaction (Dey, Abowd, & Salber, 2001). The typical and
most easily accessible context for CME is that of macrolevel activity markers, clas-
sified on a physical, intentional, and even interactive-social level (e.g., see Dalton &
O’Laighin, 2009; Bajcsy et al., 2009). But perhaps even more interesting from a
CME perspective are the more “intimate” measures that can be gathered from medi-
cal and/or psychophysiological monitoring. Recently, an explosion in the field of
wireless, wearable (or, in some cases, even off-body) sensing has enabled reliable
measuring of (among other things) electrocardiogram, blood pressure, body/skin
temperature, respiration, oxygen saturation, heart rate, heart sounds, perspiration,
dehydration, skin conductivity, blood glucose, electromyogram, and internal tissue
bleeding (for an overview, see Pantelopoulos & Bourbakis, 2010; Kwang, 2009;
Frantzidis et al., 2010). It is from these sensors, and in particular from wireless, dry
electroencephalogram (EEG; Gargiulo et al., 2008; Chi & Cauwenberghs, 2010),
that it is possible to build up the most critical CME variables, such as the detection
and continuous monitoring of arousal, vigilance, attention, mental workload, stress,
frustration, and so on (see Pan, Ren, & Lu, 2010; Ghassemi et al., 2009; Henelius
et al., 2009; Grundlehner et al., 2009).
Distributed Motivation
As we stated in the opening paragraphs, the problem of self-control is not just
a problem manifested in the behavior of certain “weak-willed” individuals, and
it is not only operative in such salient and life-threatening domains as crav-
ing and addiction, but also in the minute workings of everyday plans, choices,
and actions. Ameliorative action is as pertinent to the dreadful experience of
withdrawal from heroin as it is to innocuously hitting the snooze button on the
alarm clock and missing the first morning bus to school (Rachlin, 2000; Ainslie,
2001). Maglio, Gollwitzer, and Oettingen (chapter 12) present the evidence for
the effectiveness of (so-called) implementation intentions (IMPs), which has
shown that when people are prompted to elaborate a long list of very specific
contingency goals (of the form “when situation X arises, I will perform response
Y”), they are also significantly more likely to perform that action (Gollwitzer,
1999; Webb & Sheeran, 2008). This effect has been repeatedly demonstrated in
real-world environments, for example, in relation to rehabilitation training after
surgery, to keeping up an exercise program, to eating more healthy food, to breast
self-examination and screening for cervical cancer (see Gollwitzer & Sheeran,
2006, for a recent meta-analysis, but see also Sniehotta, 2009; Wood & Neal,
2007). But why does forming IMPs work? Is it not enough to have “normal”
intentions to act accordingly? Maglio, Gollwitzer, and Oettingen (this volume)
favor the explanation that IMPs “create instant habits” and “pass the control
of one’s behavior to the environment” (Gollwitzer, 1999), and they choose to
frame their discussion of IMPs around the well-known parable of Odysseus and
the Sirens. They write:
Indeed, like Odysseus facing the Sirens we often know that we will find ourselves
in conditions where we are likely to do something detrimental to our long-term
goals, and like Odysseus tying himself to the mast we would often like to be able
to self-bind or precommit, and avoid or resist such temptations. As in the episode
from Do Androids Dream of Electric Sheep?, when Deckard chooses to have his
Penfield awake him in an industrious mood to avoid the lure of the warm bed, and
Iran programs an automatic resetting to block the self-perpetuating nature of the
induced depression, we would often like to be able to choose our course of action in
a calm moment of reflection rather than having to battle it out in the grip of power-
ful urges.
For all the practical potential of IMPs, we think it is a disservice to place them
next to the mighty Odysseus. The Greek king adventurer was truly and effectively
bound at the mast, but Gollwitzer himself admits that IMPs “need to be based on
strong goal intentions. As well, certain types of implementation intentions work bet-
ter than others, and people need to be committed to their implementation intentions”
(Gollwitzer, 1999, 501, our emphasis). One might reasonably wonder why we need
the extra “old-school” willpower that allows us to entertain “strong” goal intentions,
and be “committed” to our implementation intentions, when the whole idea of the
concept was to relieve us of the burden to consciously initiate action in the face of
temptations and distractions. In fact, looking at the literature, it is clear that IMPs
face a disturbing creep of “moderating” variables—they are less effective for more
impulsive participants (Churchill & Jessop, 2009), they only work for people with
high self-efficacy (Lippke et al., 2009), they are curtailed by preexisting “response
biases” (Miles & Proctor, 2008), “habit strength” (Webb, Sheeran, & Luszczynska,
2009), as well as the “stability” of the intentions (Godin et al., 2010) and the
strength of the “goal desires” (Prestwich, Perugini, & Hurling, 2008). In addition,
IMPs are generally only effective when they are provided by the experimenter, who
has an expert knowledge of the (often controlled) stimuli and contingencies the
participants will encounter (Sniehotta, 2009). In relation to this, the obvious ques-
tion is, why settle for implementation intentions as a metaphor for Odysseus and
the Sirens? Why not implement the actual strategy of external binding?
This is what we try to capture with our second concept, distributed motivation: the
general strategy of using stable features of both the social and the artifactual environ-
ment to scaffold the process of goal attainment. As such, distributed motivation is
a subclass of the well-established theory of distributed cognition (Hutchins, 1995;
Clark, 2008; Hollan, Hutchins & Kirsh, 2000). Distributed cognition deals with
computational processes distributed among agents, artifacts, and environments. It
is a set of tools and methodologies that allow the researcher to look beyond simple
“cognizant” agents and shift the unit of analysis to wider computational structures.
this. If yet another choice-button is introduced in the experiment, this time giving
the pigeons a chance to eliminate the reconsideration-button (i.e., a peck on the
new button prevents the reconsideration option from being illuminated), they con-
sistently choose to do so (Rachlin, 2000). Thus, the pigeons show self-control by
precommitment to their earlier choice. What is so remarkable about this example
is that pigeons are manifestly not smart. Instead, it is clear that the intelligence of
the system lies as much in the technology of the setup as in the mechanisms of the
pigeon’s nervous system.
In the following sections we discuss how the conceptual tools we have proposed
(CME and distributed motivation) can be applied and tailored to the demands of
particular self-control problems. We start with comparatively less difficult problems
and move on to harder ones.
Self-Monitoring
The starting point for many discussions of self-control is the observation that peo-
ple are often aware of their self-control problems but seldom optimally aware of
the way these problems are expressed in their behavior, or under what contingen-
cies or in which situations they are most prone to lapses in control (what is called
partial naïveté in behavioral economics). Most likely, this is due to a mix of biased
self-perception, cognitive limitations, and lack of inferential activity (Frederick,
Loewenstein, & O’Donoghue, 2002). Within this domain, we see two rough cat-
egories of CME tools that could serve to correct faulty self-perceptions.
First, CME can capture and represent information that we normally success-
fully access and monitor, but which we sometimes momentarily fail to survey. The
phenomenology of self-control lapses is often completely bereft of any feeling of
us having consciously weighed alternatives and finally chosen the more tempting
one. Instead, we often just find ourselves, post hoc, having completed an action
that we did not previously intend to do (Elster, 2000; Ainslie, 2001). Studies have
shown that while humans are quite capable of self-monitoring when given clear
directives and timely external prompts, performance quickly deteriorates under
natural conditions (Rachlin, 2000; Schooler, 2002; Smallwood & Schooler, 2006).
(Compare not trying to scratch an itch under stern scrutiny in the doctor’s office,
and not scratching it later while watching TV.) The degree of self-monitoring, in
turn, greatly influences the nature of our self-control behavior. There is a big differ-
ence between smoking a cigarette that happens to be the 24th of the day and being
aware that one is about to light up the 24th cigarette for the day. The simple fact of
providing accurate monitoring of self-control-related context has been shown to
markedly reduce the incidence of self-control lapses (Rachlin, 2000; Fogg, 2003).
The problem is of course that it is almost as difficult to stay constantly vigilant and
attentive to such context as it is to control the behavior in the first place. This, we
surmise, is an area where the use of context-aware technology and CME would be
of great use (see Quinn et al. 2010, for a recent and powerful example of CME of
bad habits).
Second, instead of helping people to monitor what they are doing right now, CME
could be used to predict what they are just about to do. By using more intimate con-
textual measures like the psychophysiological state of the user, these micro-predic-
tions should be situated at the moment of activity, and come (minutes or seconds)
before the actual action is performed. For some types of self-control problems this
will be comparatively easy. For example, any goals having to do with strong emo-
tions (like trying to become a less aggressive person or trying to stifle unproductive
anger in marital disagreements) will be an ideal target for CME micro-prediction.
As Elster (2000) has pointed out, advice about emotion regulation most often fails
simply because it comes after the unwanted emotion has already been aroused and
taken full effect upon behavior. At an earlier stage such advice might have been
perfectly effective (i.e., here the proper assessment of the need for self-control is
as important as the control itself). Considerable research already exists on psy-
chophysiological markers that indicate the implicit buildup or expression of emo-
tional states not only for anger and aggression but also for more subtle conditions
like frustration, stress, and anxiety (e.g., Belle et al., 2010; Hosseini & Khalilzadeh,
2010). Promising efforts have also been made to identify similarly predictive pro-
files for less obviously emotional behavior like smoking and gambling (Parker &
Gilbert, 2008; Goudriaan et al., 2004). To increase the chances of finding predic-
tive regularities, CME technology would add an additional layer to these techniques
by allowing the measurements to be individually calibrated over time and multiple
contexts (Clarkson, 2002).
Goal Progression
As we mentioned in the earlier discussion of CME, there is a world of differ-
ence between lighting up a cigarette that happens to be the 24th of the day, and
knowingly and willingly smoking the 24th cigarette of the day. But while CME
technology could provide substantial help with monitoring of goals in relation to
clear-cut objectives like dieting or smoking (it is a relatively straightforward task
that with finely tuned sensor and computing equipment, the “social” drinker could
live by a CME-augmented principle that said that she is allowed to drink only once
every other month, or only a certain amount each week, or only if she is at a party
of a certain size, etc.).
Micro-Precommitment
While active goal representation, accurate self-monitoring, and monitoring of goal
progression are important CME strategies, they are clearly less applicable in cases
of genuine reward conflict. In such cases, precommitment is the right strategy to
apply. On the other hand, reward conflicts come in many different flavors, and often
it is not the binding power as such that determines the value of any specific scheme
of precommitment. Apart from nonmetaphorical binding, what technology has to
offer the age-old strategy of precommitment is a much-lowered cost and a much-
increased range of operation. This is good news because some species of precom-
mitment need to be fast and easy to set up, and should come at a very low cost.
For example, we have remote controls for many electrical appliances that enable
us to turn them on and off at our convenience. But we have no remotes that allow
us to turn appliances off in a way that, within a set limit of time, we cannot turn
them on again (for TV and web surfing, we have things like parental or employer
control devices that can block certain channels or domains, but we have not nearly
enough effective equipment for self-binding).4 We can of course always climb under
the sofa, pull the plug and the antenna from the TV, and put them in a place we
cannot easily reach (to make TV viewing relatively inaccessible), but such ad hoc
maneuvers are generally too costly and cumbersome to perform in the long run. The
trick is to strike a balance between inaccessibility and flexibility. That is, for many
behaviors and situations we would like to be able to make quick, easy, but transient
precommitments that allow us to move beyond some momentary temptation but
then expire so as not to further limit our range of alternatives. We call this micro-
precommitment (MPC). MPC finds its primary use when the temptations we are
dealing with are not overwhelming but still noticeable enough to make us fall.
As an example, imagine a cell phone–based location-aware system (using GPS or
any other modern positioning technique) where we can instantaneously “tag” dif-
ferent places from which we wish to be kept. The mechanism for tagging could be as
simple as having the phone in the same “cell” as the object to be tagged, or having a
place-map database in the phone that allows for distance-independent blocking. Let
us now say we have a minor shoe-shopping compulsion and walk around town on
an important errand. Walking down the street with this system, we could, with just
a brief moment of forethought, tag an upcoming tempting shoe store. The tagging
could have any number of consequences, like locking our wallet or credit card, or
even tuning the store alarm to go off if we enter the premises (!). The point of MPC
is not to set up consequences that represent maximally strong deterrents. Quite the
opposite: it is a technique suited for temporarily bringing us past small but nagging
distractions. Tomorrow, when we have no important errands anymore, we might
want to shop for shoes again and would not want to spend our time unwinding a too
forceful and elaborate precommitment scheme. In fact, since MPCs, in our view,
should be as easy and cheap as possible to instigate, they should also not be allowed to
have costly or long-term consequences.
Precommitment
If MPCs are swift and cheap and play with low stakes and short-term consequences,
regular precommitment holds no such limits. For precommitment the amount of
binding power and the cost of engagement are determined in relation to the magni-
tude of the problem and may be as strong as any agent desires. In contrast to MPC,
regular precommitment should not come easy. To make sure that the binding rep-
resents a “true” preference, a certain amount of inertia ought to be built into any
precommitment decision procedure (for a sensitive discussion of how to handle
this problem, see Elster, 2000). For example, some larger casinos give patrons prone
to too much gambling the option of having themselves banned from playing. Since
casinos are generally equipped with rigorous security and surveillance systems,
the ban can be very effectively enforced. However, one cannot just walk up to the
entrance cashier and ask to be banned. The decision must be made in dialogue and
with counsel from the casino management, because once you are banned, the casino
will not be coaxed into letting you in again. As would be expected from a compulsive
gambler, you soon find yourself back at the gates trying to undo your former deci-
sion. It is at this point that the casino enforces the bind by bluntly disregarding your
pleas (and if the commitment was made in too light a manner, this would be an
unfortunate outcome).
Craving and addiction are extremely difficult topics to approach. Behavioral
abnormalities associated with addiction are exceptionally long-lived, and currently
no reliable remedies exist for the pathological changes in brain-reward systems that
are associated with prolonged substance abuse (Nestler, 2001; Everitt, Dickinson, &
Robbins, 2001; Robinson & Berridge, 2003). With reference to precommitment,
it is sometimes said that it is an ineffective strategy for handling things like addic-
tion, because in the addicted state we supposedly never find a clear preference plat-
form from which to initiate the precommitment (i.e., we do not know which of our
preferences are the “true” ones). Rachlin (2000) writes: “Instead of clearly defined
points of time where one strong preference gives way to its opposite we generally
experience a continuous opposition of forces and apparently random alternation
between making and breaking our resolutions” (54). This state of complex ambiva-
lence also makes it likely that a fierce arms race will be put in motion by the intro-
duction of any scheme of precommitment, where the addicted subject will waste
precious resources and energy trying to slip through the bind of the commitment.
The drug Antabuse illustrates these problems. If you take Antabuse and then have
a drink, you will experience severe pain. Thus, taking Antabuse is a form of pre-
commitment not to drink alcohol. However, alcoholics have been known to sub-
vert the effects of the drug by sipping the alcohol excruciatingly slowly, and some
even drink the alcohol despite the severe pain (Rachlin, 2000). Also, the outcome
of Antabuse treatment has been generally less than satisfying because many alcohol-
ics decide against taking the drug in the first place. In our view, this example should
be taken as a cautionary tale for any overly optimistic outlook on the prospects of
CONCLUSION
In this chapter we discussed how the problem of self-control can be approached from
a perspective on intentionality and introspection derived from the work of Dennett,
and the evidence from our own choice-blindness paradigm. We have provided a
range of suggestions for how sensor and computing technology might be of use in
scaffolding and augmenting our self-control abilities, and we have introduced the
concepts of computer-mediated extrospection and distributed motivation that we hope
may serve an important role in elucidating the problem of self-control from a mod-
ern computing perspective. Some researchers have expressed pessimism about the
ability of context-aware systems to make meaningful inferences about important
human social and emotional states, and believe that context-aware applications can
only supplant human initiative in the most carefully circumscribed situations (Bellotti
& Edwards, 2001). As evidenced by the current chapter, we think this pessimism
is greatly overstated. Precommitment technologies offer people the option of tem-
porary but forceful binding, aided by computer systems that will not be swayed or
cajoled, and it is through their very inflexibility that these systems have the potential to
support individual self-realization. As Dennett (2003) notes, in the domain of self-
control, effectively constraining our options actually gives us more freedom than we
otherwise would have had.
ACKNOWLEDGMENT
L.H. thanks the Swedish Research Council, and P.J. thanks the Bank of Sweden
Tercentenary Foundation for financial support.
NOTES
1. At times, tension ran so high in this debate that one might have thought it would
have been remembered for its rhetorical flair, if nothing else. As an example, Fodor
and Lepore (1993) scolded Dennett for his superficialism about the mental and professed
that there really are no ideas other than commonsense “Granny-psychology” to take
seriously, while Dennett (1994), in response, coined the name hysterical realism for
Fodor’s program and admitted that he regarded “the large and well-regarded litera-
ture on propositional attitudes . . . to be history’s most slowly unwinding unintended
reductio ad absurdum” (241, emphasis in original).
2. After being probed about what they thought of the experiment, and whether anything
about the procedure had felt strange, the participants were also asked the hypothetical
question of whether they would have noticed if we had switched
the pictures. No less than 84 percent of the participants who did not detect any of
the manipulations still answered that they would have noticed if they had been pre-
sented with mismatched outcomes in this way, thus displaying what might be called
“choice-blindness blindness”—the false metacognitive belief of being able to detect
changes to the outcome of one’s choices (see Levin et al., 2000, for a similar result in
relation to change blindness).
3. Incidentally, the DES paradigm also represents one additional strong line of evidence
against the concept of intentional realism. As Hurlburt (2009) writes: “As a result of
30 years of carefully questioning subjects about their momentary experiences, my
sense is that trained DES subjects who wear a beeper and inspect what is directly
before the footlights of consciousness at the moment of the beeps almost never
directly apprehend an attitude. Inadequately trained subjects, particularly on their
first sampling day, occasionally report that they are experiencing some attitude. But
when those reports are scrutinized in the usual DES way, querying carefully about
any perceptual aspects, those subjects retreat from the attitude-was-directly-observed
position, apparently coming to recognize that their attitude had been merely “back-
ground” or “context.” That seems entirely consonant with the view that these subjects
had initially inferred their own attitudes in the same way they infer the attitudes of
others” (150).
4. But see the OSX self-control application by Steve Lambert (http://visitsteve.com/
work/selfcontrol/), which allows the user to selectively and irrevocably (within a
time limit) shut down sections of the web, or the slightly less weighty, but ever so use-
ful Don’t Dial (http://www.dontdial.com/) app for the iPhone/Android platform,
which allows the user, before an intoxicating evening, to designate a range of sensitive
phone contacts that they will later be blocked from calling.
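The applications in note 4 rely on the same mechanism the conclusion describes: temporary but forceful binding by a system that will not be swayed or cajoled. A minimal sketch of that binding logic, in Python with hypothetical names (this is not the implementation of either application), might look as follows:

```python
import time


class Precommitment:
    """Sketch of a time-limited, irrevocable block (hypothetical API)."""

    def __init__(self):
        self._expiry = {}  # target -> Unix time at which the block lapses

    def arm(self, target, duration_seconds):
        """Block `target` for `duration_seconds`; re-arming can only extend it."""
        new_expiry = time.time() + duration_seconds
        # The system cannot be cajoled: an existing block is never shortened.
        self._expiry[target] = max(self._expiry.get(target, 0.0), new_expiry)

    def is_blocked(self, target, now=None):
        """True while the block on `target` is still in force."""
        now = time.time() if now is None else now
        return self._expiry.get(target, 0.0) > now

    def lift(self, target):
        """Lifting an active block fails by design; only expiry releases it."""
        if self.is_blocked(target):
            raise PermissionError("precommitment still in force until expiry")
        self._expiry.pop(target, None)
```

Once armed with, say, `arm("example.com", 3600)`, the object reports the target as blocked for the next hour, and any call to `lift` before then raises an error; it is precisely this inflexibility that does the motivational work.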
REFERENCES
Ainslie, G. (2001). Breakdown of will. New York: Cambridge University Press.
Ainslie, G. (2005). Précis of Breakdown of Will. Behavioral and Brain Sciences, 28(5),
635–673.
Albright, L., & Malloy, T. E. (1999). Self-observation of social behavior and metapercep-
tion. Journal of Personality and Social Psychology, 77(4), 726–743.
Archbold, G. E. B., Bouton, M. E., & Nader, K. (2010). Evidence for the persistence
of contextual fear memories following immediate extinction. European Journal of
Neuroscience, 31(7), 1303–1311.
Baars, B. J. (2010). Spontaneous repetitive thoughts can be adaptive: Postscript on “mind
wandering.” Psychological Bulletin, 136(2), 208–210.
Bajcsy, R., Giani, A., Tomlin, C., Borri, A., & Di Benedetto, M. (2009). Classification of
physical interactions between two subjects. In BSN 2009: Sixth International Workshop
on Wearable and Implantable Body Sensor Networks (pp. 187–192).
Bell, G., & Dourish, P. (2007). Yesterday’s tomorrows: Notes on ubiquitous computing’s
dominant vision. Personal and Ubiquitous Computing, 11(2), 133–143.
Bell, G., & Gemmell, J. (2009). Total recall: How the e-memory revolution will change every-
thing. New York: Dutton Adult.
Belle, A., Ji, S.-Y., Ansari, S., Hakimzadeh, R., Ward, K., & Najarian, K. (2010).
Frustration detection with electrocardiograph signal using wavelet transform. In 2010
International Conference on Biosciences (pp. 91–94).
Bellotti, V. M., & Edwards, W. K. (2001). Intelligibility and accountability: Human
considerations in context-aware systems. Human–Computer Interaction, 16, 193–212.
Berry, E., Hampshire, A., Rowe, J., Hodges, S., Kapur, N., Watson, P., & Browne, G. (2009).
The neural basis of effective memory therapy in a patient with limbic encephalitis.
Journal of Neurology, Neurosurgery, and Psychiatry, 80(11), 1202–1205.
Bogdan, R. J. (Ed.). (1991). Mind and common sense: Philosophical essays on commonsense
psychology. Cambridge: Cambridge University Press.
Bouton, M. E. (2004). Context and behavioral processes in extinction. Learning and
Memory, 11(5), 485–494.
Brown, R. G., & Pluck, G. (2000). Negative symptoms: The “pathology” of motivation
and goal-directed behaviour. Trends in Neurosciences, 23(9), 412–417.
Bulling, A., Roggen, D., & Troester, G. (2011). What’s in the eyes for context-awareness?
Pervasive Computing, IEEE, April–June, pp. 48–57.
Carruthers, P. (2009). How we know our own minds: The relationship between mind-
reading and metacognition. Behavioral and Brain Sciences, 32, 121–182.
Chi, Y. M., & Cauwenberghs, G. (2010). Wireless non-contact EEG/ECG electrodes
for body sensor networks. In 2010 International Conference on Body Sensor Networks
(pp. 297–301).
Christensen, S. M., & Turner, D. R. (1993). Folk psychology and the philosophy of mind.
Hillsdale, NJ: Erlbaum.
Christensen, T. C., Barrett, L. F., Bliss-Moreau, E., Lebo, K., & Kaschub, C. (2003). A
practical guide to experience-sampling procedures. Journal of Happiness Studies, 4, 53–78.
Christoff, K., Gordon, A. M., Smallwood, J., Smith, R., & Schooler, J. W. (2009). Experience
sampling during fMRI reveals default network and executive system contributions to
mind wandering. Proceedings of the National Academy of Sciences of the United States of
America, 106(21), 8719–8724.
Churchill, S., & Jessop, D. (2009). Spontaneous implementation intentions and impulsiv-
ity: Can impulsivity moderate the effectiveness of planning strategies? British Journal of
Health Psychology, 13, 529–541.
Clark, A. (2008). Supersizing the mind. Oxford: Oxford University Press.
Clarkson, B. (2002). Life patterns: Structure from wearable sensors. PhD diss., MIT.
Dalton, A., & O’Laighin, G. (2009). Identifying activities of daily living using wire-
less kinematic sensors. In BSN 2009: Sixth International Workshop on Wearable and
Implantable Body Sensor Networks (pp. 87–91).
Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press.
Dennett, D. C. (1991a). Consciousness explained. Boston: Little, Brown.
Dennett, D. C. (1991b). Real patterns. Journal of Philosophy, 88, 27–51.
Dennett, D. C. (1991c). Two contrasts: Folk craft versus folk science and belief versus
opinion. In J. Greenwood (Ed.), The future of folk psychology: Intentionality and cognitive
science (pp. 135–148). Cambridge: Cambridge University Press.
Dennett, D. C. (1993). The message is: There is no medium. Philosophy and
Phenomenological Research, 53, 889–931.
Dennett, D. C. (1994). Get real. Philosophical Topics, 22, 505–568.
Dennett, D. C. (2003). Freedom evolves. London: Allen Lane.
Dennett, D. C. (2009). Intentional systems theory. In B. McLaughlin, A. Beckermann, &
S. Walter (Eds.), Oxford handbook of the philosophy of mind (pp. 339–350). New York:
Oxford University Press.
Dey, A. K., Abowd, G. D., & Salber, D. (2001). A conceptual framework and a toolkit
for supporting the rapid prototyping of context-aware applications. Human–Computer
Interaction, 16, 167–176.
Dijksterhuis, A., & Aarts, H. (2010). Goals, attention, and (un)consciousness. Annual
Review of Psychology, 61, 467–490.
Ebert, J. P., & Wegner, D. M. (2010). Time warp: Authorship shapes the perceived timing
of actions and events. Consciousness and Cognition, 19(1), 481–489.
Elster, J. (2000). Ulysses unbound. Cambridge: Cambridge University Press.
Everitt, B. J., Dickinson, A., & Robbins, T. W. (2001). The neuropsychological basis of
addictive behavior. Brain Research Reviews, 36, 129–138.
Fish, J., Evans, J. J., Nimmo, M., Martin, E., Kersel, D., Bateman, A., & Wilson, B. A.
(2007). Rehabilitation of executive dysfunction following brain injury: “Content-free”
cueing improves everyday prospective memory performance. Neuropsychologia, 45(6),
1318–1330.
Fodor, J. A . (2000). The mind doesn’t work that way: The scope and limits of computational
psychology. Cambridge, MA: MIT Press.
Fodor, J., & Lepore, E. (1993). Is intentional ascription intrinsically normative? In B.
Dahlbom (Ed.), Dennett and his critics (pp. 70–82). Oxford: Blackwell.
Fogg, B. J. (2003). Persuasive technology: Using computers to change what we think and do.
San Francisco: Morgan Kaufmann.
Frantzidis, C., Bratsas, C., Papadelis, C., Konstantinidis, E., Pappas, C., & Bamidis, P.
(2010). Toward emotion aware computing: An integrated approach using multichannel
neurophysiological recordings and affective visual stimuli. Transactions on Information
Technology in Biomedicine, IEEE, 14(3), 589–597.
Frederick, S., Loewenstein, G., & O’Donoghue, T. (2002). Time discounting and time
preference: A critical review. Journal of Economic Literature, 40(2), 351–401.
Gallagher, S. (2007). The natural philosophy of agency. Philosophy Compass, 2(2),
347–357.
Gargiulo, G., Bifulco, P., Calvo, R., Cesarelli, M., Jin, C., & van Schaik, A. (2008). A mobile
EEG system with dry electrodes. In Biomedical Circuits and Systems Conference. BioCAS
2008. IEEE (pp. 273–276).
Gemmell, J., Bell, G., & Lueder, R. (2006). MyLifeBits: A personal database for every-
thing. Communications of the ACM, 49(1), 88–95.
Gemmell, J., Bell, G., Lueder, R., Drucker, S., & Wong , C. (2002). MyLifeBits: Fulfilling the
Memex vision. In Proceedings of the Tenth ACM International Conference on Multimedia
(pp. 235–238).
Ghassemi, F., Moradi, M., Doust, M., & Abootalebi, V. (2009). Classification of sustained
attention level based on morphological features of EEG’s independent components. In
ICME International Conference on Complex Medical Engineering, 2009 (pp. 1–6).
Godin, G., Belanger-Gravel, A., Amireault, S., Gallani, M., Vohl, M., & Perusse, L. (2010).
Effect of implementation intentions to change behaviour: Moderation by intention
stability. Psychological Reports, 106(1), 147–159.
Goldman, A . (1993). The psychology of folk psychology. Behavioral and Brain Sciences,
16, 15–28.
Gollwitzer, P. M. (1999). Implementation intentions: Strong effects of simple plans.
American Psychologist, 54, 493–503.
Gollwitzer, P., & Sheeran, P. (2006). Implementation intentions and goal achievement: A
meta-analysis of effects and processes. Advances in Experimental Social Psychology, 38,
69–119.
Goudriaan, A. E., Oosterlaan, J., de Beurs, E., & Van den Brink, W. (2004). Pathological
gambling: A comprehensive review of biobehavioral findings. Neuroscience and
Biobehavioral Reviews, 28(2), 123–141.
Gough, D., Kumosa, L., Routh, T., Lin, J., & Lucisano, J. (2010). Function of an implanted
tissue glucose sensor for more than 1 year in animals. Science Translational Medicine,
2(42), 42ra53.
Greenfield, A . (2006). Everyware: The dawning age of ubiquitous computing. Berkeley, CA:
New Riders.
Grundlehner, B., Brown, L., Penders, J., & Gyselinckx, B. (2009). The design and analysis
of a real-time, continuous arousal monitor. In BSN 2009: Sixth International Workshop
on Wearable and Implantable Body Sensor Networks (pp. 156–161).
Grush, R. (2004). The emulation theory of representation: Motor control, imagery, and
perception. Behavioral and Brain Sciences, 27, 377–442.
Haggard, P., Clark, S., & Kalogeras, J. (2002). Voluntary action and conscious awareness.
Nature Neuroscience, 5(4), 382–385.
Hall, L., & Johansson, P. (2003). Introspection and extrospection: Some notes on the contex-
tual nature of self-knowledge. Lund University Cognitive Studies, 107. Lund: LUCS.
Hall, L., & Johansson, P. (2008). Using choice blindness to study decision making and
introspection. In P. Gärdenfors & A. Wallin (Eds.), Cognition: A smorgasbord (pp. 267–
283). Lund: Nya Doxa.
Hall, L., Johansson, P., Sikström, S., Tärning, B., & Lind, A. (2006). How something
can be said about Telling More Than We Can Know: Reply to Moore and Haggard.
Consciousness and Cognition, 15, 697–699.
Hall, L., Johansson, P., Tärning, B., Sikström, S., & Deutgen, T. (2010). Magic at the market-
place: Choice blindness for the taste of jam and the smell of tea. Cognition, 117, 54–61.
Hall, L., Johansson, P., & Strandberg, T. (2012). Lifting the veil of morality: Choice
blindness and attitude reversals on a self-transforming survey. PLoS ONE, 7(9),
e45457. doi:10.1371/journal.pone.0045457
Henelius, A., Hirvonen, K., Holm, A., Korpela, J., & Muller, K. (2009). Mental workload
classification using heart rate metrics. In Annual International Conference of the
Engineering in Medicine and Biology Society (pp. 1836–1839).
Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new founda-
tion for human-computer interaction research. ACM Transactions on Computer-Human
Interaction, 7(2), 174–196.
Hosseini, S., & Khalilzadeh, M. (2010). Emotional stress recognition system using EEG
and psychophysiological signals: Using new labelling process of EEG signals in emo-
tional stress state. In International Conference on Biomedical Engineering and Computer
Science (pp. 1–6).
Hurlburt, R. T., & Heavey, C. L. (2006). Exploring inner experience. Amsterdam: John
Benjamins.
Hurlburt, R. T., & Heavey, C. L. (2001). Telling what we know: Describing inner experi-
ence. Trends in Cognitive Science, 5(9), 400–403.
Hurlburt, R. T., & Schwitzgebel, E. (2007). Describing inner experience? Proponent meets
skeptic. Cambridge, MA: MIT Press.
Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.
Johansson, P., Hall, L., Kusev, P., Aldrovandi, S., Yamaguchi, Y., & Watanabe, K. (in
preparation). Choice blindness in multi-attribute decision making.
Johansson, P., Hall, L., & Sikström, S. (2008). From change blindness to choice blindness.
Psychologia, 51, 142–155.
Johansson, P., Hall, L., Sikström, S., & Olsson, A. (2005). Failure to detect mismatches
between intention and outcome in a simple decision task. Science, 310, 116–119.
Johansson, P., Hall, L., Sikström, S., Tärning, B., & Lind, A . (2006). How something
can be said about Telling More Than We Can Know. Consciousness and Cognition, 15,
673–692.
Koechlin, E., & Summerfield, C. (2007). An information theoretical approach to prefrontal
executive function. Trends in Cognitive Science, 11(6), 229–235.
Kwang, P. (2009). Nonintrusive measurement of biological signals for ubiquitous
healthcare. In Annual International Conference of the IEEE Engineering in Medicine and Biology
Society (pp. 6573–6575).
Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech
production. Behavioral and Brain Sciences, 22(1), 1–76.
Levin, D. T., Momen, N., Drivdahl, S. B., & Simons, D. J. (2000). Change blindness
blindness: The metacognitive error of overestimating change-detection ability. Visual
Cognition, 7, 397–412.
Lippke, S., Wiedemann, A., Ziegelmann, J., Reuter, T., & Schwarzer, R . (2009). Self-efficacy
moderates the mediation of intentions into behavior via plans. American Journal of
Health Behavior, 33(5), 521–529.
Manly, T., Hawkins, K., Evans, J., Woldt, K., & Robertson, I. H. (2002). Rehabilitation of
executive function: Facilitation of effective goal management on complex tasks using
periodic auditory alerts. Neuropsychologia, 40(3), 271–281.
Mansour, O. (2009). Group intelligence: A distributed cognition perspective. In INCOS
’09: Proceedings of the International Conference on Intelligent Networking and Collaborative
Systems (pp. 247–250). Washington, DC: IEEE Computer Society.
Manuck, S. B., Flory, J. D., Muldoon, M. F., & Ferrell, R. E. (2003). A neurobiology of
intertemporal choice. In G. Loewenstein, D. Read, & R. Baumeister (Eds.), Time and
decision: Economic and psychological perspectives on intertemporal choice (pp. 139–172).
New York: Russell Sage Foundation.
Massimi, M., Truong, K., Dearman, D., & Hayes, G. (2010). Understanding recording
technologies in everyday life. Pervasive Computing, IEEE, 9(3), 64–71.
McVay, J. C., & Kane, M. J. (2010). Does mind wandering reflect executive function or
executive failure? Comment on Smallwood and Schooler (2006) and Watkins (2008).
Psychological Bulletin, 136(2), 188–197; discussion 198–207.
McVay, J. C., Kane, M. J., & Kwapil, T. R . (2009). Tracking the train of thought from
the laboratory into everyday life: An experience-sampling study of mind wandering
across controlled and ecological contexts. Psychonomic Bulletin and Review, 16(5),
857–863.
Miles, J. D., & Proctor, R. W. (2008). Improving performance through implementation
intentions: Are preexisting response biases replaced? Psychonomic Bulletin and Review,
15(6), 1105–1110.
Miller, E. K . (2000). The prefrontal cortex and cognitive control. Nature Reviews
Neuroscience, 1, 59–65.
Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function.
Annual Review of Neuroscience, 24, 167–202.
Monterosso, J., & Ainslie, G. (1999). Beyond discounting: Possible experimental models
of impulse control. Psychopharmacology, 146, 339–347.
Moore, J., & Haggard, P. (2006). Commentary on “How something can be said about tell-
ing more than we can know: On choice blindness and introspection.” Consciousness and
Cognition, 15(4), 693–696.
Moore, J. W., Wegner, D. M., & Haggard, P. (2009). Modulating the sense of agency with
external cues. Consciousness and Cognition, 18(4), 1056–1064.
Nelson, J. B., & Bouton, M. E. (2002). Extinction, inhibition, and emotional intelligence.
In L. F. Barrett & P. Salovey (Eds.), The wisdom in feeling: Psychological processes in emo-
tional intelligence (pp. 60–85). New York: The Guilford Press.
Nestler, E. J. (2001). Molecular basis of long-term plasticity underlying addiction. Nature
Reviews Neuroscience, 2, 119–128.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on
mental processes. Psychological Review, 84, 231–259.
Pacherie, E. (2008). The phenomenology of action: A conceptual framework . Cognition,
107, 179–217.
Pacherie, E., & Haggard, P. (2010). What are intentions? In L. Nadel &
W. Sinnott-Armstrong (Eds.), Benjamin Libet and agency (pp. 70–84). Oxford: Oxford
University Press.
Pan, J., Ren, Q., & Lu, H. (2010). Vigilance analysis based on fractal features of EEG sig-
nals. In International Symposium on Computer Communication Control and Automation
(3CA), 2010 (pp. 446–449).
Pantelopoulos, A., & Bourbakis, N. (2010). A survey on wearable sensor-based systems
for health monitoring and prognosis. Transactions on Systems, Man, and Cybernetics,
Part C: Applications and Reviews, 40(1), 1–12.
Parker, A. B., & Gilbert, D. G. (2008). Brain activity during anticipation of smoking-related
and emotionally positive pictures in smokers and nonsmokers: A new measure of cue
reactivity. Nicotine and Tobacco Research, 10(11), 1627.
Poslad, S. (2009). Ubiquitous computing: Smart devices, smart environments and smart
interaction. Chichester, UK: Wiley.
Postma, A . (2000). Detection of errors during speech production: A review of speech
monitoring models. Cognition, 77, 97–131.
Prestwich, A., Perugini, M., & Hurling , R. (2008). Goal desires moderate
intention-behaviour relations. British Journal of Social Psychology/British Psychological
Society, 47, 49–71.
Quinn, J. M., Pascoe, A., Wood, W., & Neal, D. T. (2010). Can’t control yourself? Monitor
those bad habits. Personality and Social Psychology Bulletin, 36(4), 499–511.
Rachlin, H. (2000). The science of self-control. Cambridge, MA : Harvard University Press.
Roberts, S. (2004). Self-experimentation as a source of new ideas: Ten examples about
sleep, mood, health, and weight. Behavioral and Brain Sciences, 27, 227–288.
Roberts, S. (2010). The unreasonable effectiveness of my self-experimentation. Medical
Hypotheses, 75(6), 482–489.
Robinson, T. E., & Berridge, K. C. (2003). Addiction. Annual Review of Psychology, 54,
25–53.
Rorty, R. (1993). Holism, intrinsicality, and the ambition of transcendence. In B. Dahlbom
(Ed.), Dennett and his critics: Demystifying mind (pp. 184–202). Cambridge, MA: Basil
Blackwell.
Sally, D. (2000a). Confronting the Sirens: Rational behavior in the face of changing pref-
erences. Journal of Institutional and Theoretical Economics, 156(4), 685–714.
Sally, D. (2000b). I, too, sail past: Odysseus and the logic of self-control. Kyklos, 53(2),
173–200.
Schooler, J. W. (2002). Re-representing consciousness: Dissociations between experience
and meta-consciousness. Trends in Cognitive Science, 6(8), 339–344.
Scollon, C. N., Kim-Prieto, C., & Diener, E. (2003). Experience sampling: Promises and
pitfalls, strengths and weaknesses. Journal of Happiness Studies, 4, 5–34.
Silvia, P., & Gendolla, G. (2001). On introspection and self-perception: Does self-focused
attention enable accurate self-knowledge? Review of General Psychology, 5(3), 241–269.
Smallwood, J., Fishman, D. J., & Schooler, J. W. (2007). Counting the cost of an absent
mind: Mind wandering as an underrecognized influence on educational performance.
Psychonomic Bulletin & Review, 14(2), 230–236.
Smallwood, J., McSpadden, M., & Schooler, J. W. (2008). When attention matters: The
curious incident of the wandering mind. Memory and Cognition, 36(6), 1144–1150.
Smallwood, J., Nind, L., & O’Connor, R. C. (2009). When is your head at? An exploration
of the factors associated with the temporal focus of the wandering mind. Consciousness
and Cognition, 18(1), 118–125.
Smallwood, J., & Schooler, J. W. (2006). The restless mind. Psychological Bulletin, 132(6),
946–958.
Sniehotta, F. F. (2009). Towards a theory of intentional behaviour change: Plans, plan-
ning, and self-regulation. British Journal of Health Psychology, 14, 261–273.
Tamada, J. A., Lesho, M., & Tierney, M. (2002). Keeping watch on glucose. IEEE Spectrum
Online, 39(4), 52–57.
Tobias, R . (2009). Changing behavior by memory aids: A social psychological model
of prospective memory and habit development tested with dynamic field data.
Psychological Review, 116(2), 408–438.
Webb, T. L., & Sheeran, P. (2008). Mechanisms of implementation intention effects:
The role of goal intentions, self-efficacy, and accessibility of plan components. British
Journal of Social Psychology/British Psychological Society, 47, 373–395.
Webb, T. L., Sheeran, P., & Luszczynska, A . (2009). Planning to break unwanted hab-
its: Habit strength moderates implementation intention effects on behaviour change.
British Journal of Social Psychology/British Psychological Society, 48, 507–523.
Westbury, C., & Dennett, D. (2000). Mining the past to construct the future: Memory and
belief as forms of knowledge. In D. L. Schacter & E. Scarry (Eds.), Memory, brain, and
belief (pp. 11–32). Cambridge, MA: Harvard University Press.
Wilson, T. D., & Dunn, E. W. (2004). Self-knowledge: Its limits, value, and potential for
improvement. Annual Review of Psychology, 55, 493–518.
Wolpert, D., & Ghahramani, Z. (2004). Computational motor control. In M. Gazzaniga
(Ed.), The cognitive neurosciences (3rd ed., pp. 485–494). Cambridge, MA: MIT Press.
Wood, W., & Neal, D. T. (2007). A new look at habits and the habit-goal interface.
Psychological Review, 114(4), 843–863.
17
Situationism and Moral Responsibility
MANUEL VARGAS
Many prominent accounts of free will and moral responsibility treat as central the
ability of agents to respond to reasons. Call such theories Reasons accounts. In what
follows, I consider the tenability of Reasons accounts in light of situationist social
psychology and, to a lesser extent, the automaticity literature. In the first half of this
chapter, I argue that Reasons accounts are genuinely threatened by contemporary
psychology. In the second half, I consider whether such threats can be met, and at
what cost. Ultimately, I argue that Reasons accounts can abandon some familiar
assumptions, and that doing so permits us to build a more empirically plausible
picture of our agency.
decision making in accord with reason; and a capacity to act in the way we believe
when we deliberate about what to do. The univocality of “free will” is dubious
(Vargas 2011).
In what follows, I treat free will as the variety of control distinctively required
for agents to be morally responsible.1 It is a further matter, one I will not address
here, whether such control constitutes or is a part of any other powers that have
sometimes been discussed under the banner of “free will.” Among theories of free
will characterized along these lines, of special interest here are Reasons accounts.
These are accounts on which an agent’s capacity to appropriately respond to reasons
constitutes the agent’s having the form of control that (perhaps with some other
things)2 constitutes free will or is required for moral responsibility (Wolf 1990;
Wallace 1994; Fischer and Ravizza 1998; Arpaly 2003; Nelkin 2008). There are a
number of attractions to these accounts, although here I will only gesture at some
of them.3
First, Reasons theorists have been motivated by the idea that in calling one
another to account, in (especially) blaming one another, and in judging that some-
one is responsible, we are suggesting that the evaluated agent had a reason to do
otherwise. Having reams of alternative possibilities available, even on the most
metaphysically demanding conception of these things, is of little use or interest
in and of itself. It is a condition on the possibility of an alternative being relevant
that there be a reason in favor of it, and this is true both for the purposes of calling
someone to account and for an agent trying to decide what to do. In the absence
of the ability to discern or act on a discerned reason in favor of that possibility, it is
something of an error or confusion to blame the agent for failing to have acted on
that alternative (unless, perhaps, the agent knowingly undermined or destroyed his
or her reasons-responsive capacity).
Second, and relatedly, Reasons accounts appear to cohere with the bulk of
ordinary judgments about cases (e.g., why young children are treated differ-
ently than normal adults, why cognitive and affective defects seem to under-
mine responsibility, why manipulation that disrupts people’s rational abilities
seems troublesome). So, there is a “fit” with the data of ordinary practices and
judgments.
Finally, Reasons accounts provide us with a comparatively straightforward
account of our apparent uniqueness in having free will and being morally respon-
sible. To the extent to which we are responsive to a special class of considerations
(and to the extent to which it is worthwhile, valuable, or appropriate to be sensi-
tive to these considerations), this form of agency stands out against the fabric of
the universe; it constitutes a particularly notable form of agency worth cultivating.
Reasons accounts are thus appealing because of a package of explanatory and nor-
mative considerations.
However we characterize reasons, it would be enormously problematic if we sel-
dom acted for reasons, or if it turned out that there was a large disconnect between
conscious, reasons-involved deliberative powers and the causal mechanisms that
move us. Unfortunately, a body of research in social psychology and neuroscience
appears to suggest exactly these things (Doris 2002; Nelkin 2005; Woolfolk et al.
2006; Nahmias 2007).
Situationism and Moral Responsibility 327
2. SITUATIONISM
Consider the following classic social psychology experiments.
Phone Booth: In 1972 Isen and Levin performed an experiment on subjects
using a pay phone. When a subject emerged from the pay phone, confederates of
the experimenters “inadvertently” spilled a manila envelope full of papers in front
of the subject as the subject left the phone booth. The remarkable thing was the
difference a dime made. When subjects had just found a dime in the change return
of the telephone, helping behavior jumped to almost 89 percent of the time. When
subjects had not found a dime in the change return, helping behavior occurred only
4 percent of the time (Isen and Levin 1972).4
Samaritan: In 1973, Darley and Batson performed an experiment on semi-
nary students. In one case, the students were asked to prepare a talk on the Good
Samaritan parable. In the other case, students were asked to prepare a talk on poten-
tial occupations for seminary students. Subjects were then told to walk to another
building to deliver the talk. Along the route to the other building, a confederate of
the experimenters was slumped in a doorway in apparent need of medical attention.
The contents of the seminarians’ talks made little difference in whether they stopped
to help or not. What did make a sizable difference was how time-pressured the sub-
jects were. In some conditions, subjects were told they had considerable time to
make it to the next building, and in other conditions subjects were told they had
some or considerable need to hurry. The more hurried the subjects were, the less
frequently they helped (Darley and Batson 1973).
Obedience to Authority: In a series of widely replicated experiments, Stanley
Milgram showed that on the mild insistence of an authority figure, a range of very
ordinary subjects were surprisingly willing to (apparently) shock others, even
to apparent death, for failing to correctly answer innocuous question prompts
(Milgram 1969).
The literature is filled with plenty of other fascinating cases. For example: psy-
chologists have found that members of a group are more likely to dismiss the evi-
dence of their senses if subjected to patently false claims by a majority of others in
the circumstance; the likelihood of helping behavior depends in large degree on
the numbers of other people present and the degree of familiarity of the subject
with the other subjects in the situation (Asch 1951; Latané and Rodin 1969); social
preferences are driven by subliminal smells (Li et al. 2007); one’s name can play a
startlingly large role in one’s important life choices; and so on (Pelham et al. 2002).
Collectively, such work is known as situationist social psychology, or situationism.
The general lesson of situationism is that we underestimate the influence of the
situation and we overestimate the influence of purportedly fixed features of the
agent. Crucially, the “situational inputs” typically operate without the awareness of
the agent. Seemingly inconsequential—and deliberatively irrelevant—features of
the context or situation predict and explain behavior, suggesting that our agency is
somewhat less than we presume. Indeed, when agents are asked about the relevance
of those apparently innocuous factors in the situation, the usual reply is outright
denial or dismissal of any role they played in the subject’s deliberation and decision. Thus,
contemporary psychological science threatens the plausibility of Reasons accounts
328 DECOMPOSED ACCOUNTS OF THE WILL
by showing that the basis of our actions is disconnected from our assessments of
what we have reason to do.5
While particular experiments may be subject to principled dispute, the general
lesson—that we frequently underestimate the causal role of apparently irrelevant
features of contexts on our behavior, both prospectively and retrospectively—has
considerable support (Doris 2002, 12–13).6 There are ongoing disputes about the
precise implications of situationism, and in particular, what these data show about
the causal role of personality and what implications this might have for philosophi-
cal theories of virtue (Doris 1998; Harman 1999; Kamtekar 2004; Merritt 2000;
Sabini et al. 2001). In the present context, however, these concerns can be brack-
eted. What follows does not obviously depend on the nature of personality or char-
acter traits, and so whatever the status of those debates, we have reason to worry
about situationism’s implications for Reasons views.
The situationist threat operates on two dimensions. On the one hand, it may
threaten our “pretheoretical” or folk view of free will. On the other hand, to the
extent to which situationism suggests we lack powers of agency that are appealed
to on philosophical accounts, it threatens our philosophical theories of free will.
In what follows I focus on the philosophical threat, and in particular, the threat to
Reasons accounts. Whatever we say about folk judgments of freedom and respon-
sibility, what is crucial here is what our best theory ought to say about free will, all
things considered.
None of this implies that situationism’s significance for ordinary beliefs is alto-
gether irrelevant for philosophical accounts. On the contrary: Reasons accounts
are partly motivated by their coherence with ordinary judgments. If ordinary judg-
ments turn out to be at odds with the scientific picture of our agency, this undercuts
some of the motivation for accepting a Reasons theory. A Reasons theorist might
be willing to sever the account’s appeal to ordinary judgments, but if so, then some-
thing needs to be said about the basis of such accounts. For conventional Reasons
theorists, however, situationism presents an unappealing dilemma: we can either
downgrade our confidence in Reasons theories (in light of the threat of situation-
ism), or we can disconnect our theories from ordinary judgments and downgrade
our confidence in our ordinary judgments of responsibility.
So, the dual threat situationism presents to common sense and philosophical
theorizing is not easily disentangled. Nevertheless, my focus is primarily on the
philosophical threat, whatever its larger implications for our ordinary thinking and
its consequent ramifications for theorizing.7
of its burrow, release the cricket, enter the burrow for a moment (presumably to look
things over), and then return to the threshold of the burrow to pull the cricket in.
Here is the surprising thing: if you move the cricket more than a few inches from the
threshold of the burrow when the wasp enters its burrow for the first time without
the cricket, the wasp will “reboot” the process, moving the cricket closer, dropping
it, checking out the burrow, and returning outside to get the cricket. Remarkably,
the wasp will do this every time the cricket is moved. Again, and again, and again, and
again, if necessary. The wasp never does the obvious thing of pulling the cricket into
the burrow straightaway. Douglas Hofstadter calls this sphexishness, or the property
of being mechanical and stupid in the way suggested by the behavior of the digger
wasp (Sphex ichneumoneus) (Dennett 1984, 10–11).
Now consider the human case. Perhaps situationism shows that we are sphexish.
We think of ourselves as complicated, generally rational creatures that ordinarily act
in response to our best assessments of reasons. What situationism shows, perhaps,
is that we are not like that at all. Instead, our agency is revealed as blind instinct,
perhaps masked by high-level confabulation (i.e., the manufacturing of sincere but
ad hoc explanations of the sources of our action).8 If one thinks of instinct as para-
digmatically opposed to free will, then we have a compact explanation for why situ-
ationism threatens free will: it shows we are instinctual and not free.
Recall, however, that the present issue is not whether our naive, pretheorized
views of the self are threatened by situationism. What is at stake is whether a
Reasons theory of free will should be threatened by situationism. In this context, it
is much harder to make out why it should matter that some of our rational capaci-
ties reduce to the functioning of lower-level “instinctual” mental operations. One
way to put the point is that it is simply a mistake to assume that instinct is necessarily
opposed to rationality. To be sure, some behavior we label “instinctive” might be, in
some cases, irrational by nearly any measure. Still, there are cases where instinctive
behavior is ordinarily rational in a straightforwardly instrumental sense. In a wide
range of circumstances, our “instincts” (to breathe, or to jerk our hands away from
burning sensations, to socialize with other humans, and so on) are paradigms of
rational behavior. What we call “instinct” is (at least sometimes) Mother Nature’s
way of encoding a kind of bounded rationality into the basic mechanisms of the
creature. We view it as mere instinct only when we find the limits of that rationality,
or when it conflicts with our higher-order aims.
On this picture, the fact that we are sphexish (in the sense of having an instinctual
base for rational behaviors) does not threaten our freedom. Instinctual behaviors
can be irrational, especially when they operate under conditions different from those
in which they were presumably acquired, or when they come into conflict with some privileged
element of the psychic economy.9 However, neither the fact of instinctual behav-
ior nor the possibility of a reduction of our complex behavior to more basic, less
globally rational elements shows that we cannot respond to reasons any more than
pockets of local irrationality in the wasp would show us anything about the wasp’s
rationality under normal conditions. Our rationality is, like the wasp’s, plausibly
limited. Such limitations, however, do not constitute global irrationality.
The line of response suggested here—construing instinctual, subagen-
tial mechanisms with bounded rationality as partly constitutive of our general
of the latter: it is, I think, enough to blunt some of the worry, even if it cannot eradi-
cate it altogether. First, though, I want to consider one further issue that might be
taken to supplement the basic worry generated by situationism.
4. AN AUTOMATICITY THREAT?
A suitably informed interlocutor might contend that even if situationism lacks the
resources to show that we lack free will, other work in psychology can do so. In
recent years a fertile research program has sprung up around detailing the scope of
fast, nonconscious determinants of action and preferences and their mechanisms of
operation. This work is usually thought of as describing the automaticity of human
action. Automatic processes, if sufficiently irrational and pervasive, would presum-
ably show that we are not often enough responsive to reasons.
As John Kihlstrom (2008) characterizes it, automatic processes have four
features:
1. Inevitable evocation: Automatic processes are inevitably engaged by the
appearance of specific environmental stimuli, regardless of the person’s conscious
intentions, deployment of attention, or mental set.
2. Incorrigible completion: Once evoked, they run to completion in a ballistic
fashion, regardless of the person’s attempt to control them.
3. Efficient execution: Automatic processes are effortless, in that they consume
no attentional resources.
4. Parallel processing: Automatic processes do not interfere with, and are not
subject to interference by, other ongoing processes—except when they compete
with these processes for input or output channels, as in the Stroop effect. (156)
Part of what makes automatic processes notable is not the mere fact of quick,
usually sub- or unconscious mental operations but the pervasiveness of automatic
processes in general. That is, proponents of the automaticity research program sug-
gest that automatic behaviors are not the exception but rather the rule in human
action production (Bargh and Ferguson 2000).
The situationist and automaticity research programs are complementary. Both
emphasize that we overestimate the degree to which we understand the sources of
our behavior, that conscious deliberative reflection is oftentimes affected (one might
even say “contaminated”) by forces largely invisible to us, and that these forces are
ones that we would regard as irrelevant to the rationality of the act if we were aware
of them.
The work on automaticity raises some interesting questions of its own, and it
merits a much more substantial reply than I will give to it here. Still, because con-
cerns about automaticity interlock with the situationist threat, it may be useful to
sketch what sorts of things the Reasons theorist might say in reply to automaticity
worries.
First, it is hardly clear how much of our mental life is automatic in the way defined
at the start of this section (Kihlstrom 2008). There are ongoing disputes about this
issue among psychologists, and the dust is not yet settled, especially with respect to
the matter of the ubiquity and diversity of automatic processes and the extent
as we can realistically hope to be, our rational, moral natures are very fragile and
bounded. The critic might charge that this is the real situationist threat.
That seems right to me. In order to better address this criticism, however, I think
we must recast Reasons accounts, abandoning some suppositions that are usually
folded into such accounts. Let me explain.
Many accounts of free will are implicitly committed to something I shall call
atomism. Atomism is the view that free will is a nonrelational property of agents; it
is characterizable in isolation from broader social and physical contexts. An atom-
ist (in the present sense) holds that whether a given agent has free will and/or is
capable of being morally responsible can, at least in principle, be determined simply
by reading off the properties of just the agent. Atomistic theories provide character-
izations of free will or responsible agency that do not appeal to relational properties,
such as the normative relations of the agent to institutions or collectives.
Atomism is often coupled with a view that there is only one natural power or
arrangement of agential features that constitutes free will or the control condition
on moral responsibility. This is a monistic view of the ontology of free will. Monistic
views include those accounts that hold that free will is the conditional ability to
act on a counterfactual desire, should one want to. Identificationist accounts, which
hold that free will is had only when the agent identifies with a special psychological
element (a desire, a value, an intention, etc.), are also monistic. So are libertarian
accounts, on which one acts freely only when one acts in a specific nondetermin-
istic fashion. In contrast, nonmonistic (or pluralistic) accounts hold that there are
multiple agential structures or combinations of powers that constitute the control
or freedom required for moral responsibility.
If we assume that the freedom or control implicated in assessments of moral
responsibility is a single, unified capacity that relies on a particular cross-situationally
stable mechanism, then the sciences of the mind will be threatening to these
accounts. The situation-dependent nature of our capacities is perhaps the most
compelling claim of situationist research. Consequently, the implicit picture
of our natural capacities invoked by going philosophical theories—an atomistic,
monistic picture—looks to be just plain false.
Psychological research suggests that what appears to us as a general capacity
of reasons responsiveness is really a cluster of more specific, ecologically limited
capacities indexed to particular circumstances. Consequently, what powers we have
are not had independently of situations. What capacity we have for responding to
reasons is not some single thing, some fixed structure or cross-situationally stable
faculty.
Importantly, degradation of our more particular capacities can be quite localized
and context-specific. Consider the literature on “stereotype threat” or “social identity
threat.” What Steele, Aronson, and their colleagues have found is that performance in
a wide range of mental and physical activities is subject to degradation in light of
subjects perceiving that there is some possibility of their being evaluated in terms of
a negative stereotype (Aronson et al. 1999; Steele et al. 2002). So, for example, when
there is a background assumption that women and blacks do less well than white
men at math, the performance of women and blacks on math exams—a task that
plausibly involves a species of rationality, if anything does—will drop when the exam
is presented as testing native ability. These startling results disappear when the threat
is removed, as when, for example, the exam is presented as testing cognitive pro-
cesses and not purportedly native ability. One can do the same thing to white males,
by priming them with information about their stereotypically poor performance on
math tests when compared with their Asian counterparts. When the threatening
comparison is made salient to subjects, performance drops. When the threatening
comparison is taken away, and the exam is explicitly presented as not susceptible to
such bias, scores rise for the populations ordinarily susceptible to the threat.
Remarkably, these results generalize to a variety of more and less cognitive
domains, including physical performance (Steele et al. 2002).12 Indeed, the more
general thesis, that the environment can degrade our cognitive capacities in
domain-specific ways, has considerable support (Doris and Murphy 2007). One
could resist the general lesson by arguing that (perhaps) there is a basic underlying
capacity that is stable, and perception (whether conscious or not) of stereotypes
affects the ease with which those capacities are exercised. Notice, though, that this
just pushes the problem with atomistic views back a level. Even if our basic capaci-
ties are stable across contexts, our abilities to exercise them vary by circumstance,
and this suggests that our situation-indexed capacities vary considerably.
Given that free will is a fundamentally practical capacity—it is tied to action,
which always occurs in a circumstance—the characterization of our freedom
independent of circumstance looks like a vain aspiration. What we need to know is
whether we have a capacity relevant for action (and, on the present interpretation,
responsible action)—this requires an account of free will that is sensitive to the role
of the situation. An atomistic account cannot hope to provide this, so we must build
our account with different assumptions.
There are various ways the conventional Reasons theorist might attempt to reha-
bilitate atomism and monism. In what follows, however, I explore what possibilities
are available to us if we retain an interest in a Reasons account but pursue it without
the assumptions of atomism and monism.
6. REASONS-RESPONSIVENESS RECONCEIVED
Situationism presses us to acknowledge that our reasons-sensitive capacities are
importantly dependent on the environment in which those capacities operate, and
that the cross-situational robustness of our reasons-responsive agency is a sham. At
its core, the idea is intuitive enough—the power of a seed to grow a tree is only a
power it has in some contexts and not others. The challenge is to remember that this
is true of persons, too, and that this generates the corresponding need to appreciate
the circumstances that structure our powers.13
In this section, my goal is to provide an account that is (1) consistent with a
broadly Reasons approach, (2) free of the supposition of atomism and monism
about the involved agential powers, and (3) compatible with a wide range of plausi-
ble theories of normative ethics. So, the account I offer is one where the characteris-
tic basic structure of responsible agency is to be understood as a variably constituted
capacity to recognize or detect moral considerations in the relevant circumstances,
and to appropriately govern one’s conduct in light of them.
Situationism and Moral Responsibility 335
The present schema invokes several technical notions: the idea of a responsible
agent, and an account of what it is for an action to be morally praiseworthy and mor-
ally blameworthy. I will leave the latter two notions unanalyzed, focusing on the
implications of abandoning the standard atomistic and monistic model of respon-
sible agency and its capacities.
Here is how I think the Reasons theorist should characterize responsible agency,
and by extension, free will:
A. the capacity for detection of the relevant moral considerations obtains when:
i. S actually detects moral considerations of type M in C that are pertinent
to actions available to S or
ii. in those possible worlds where S is in a context relevantly similar to
C, and moral considerations of type M are present in those contexts,
in a suitable proportion of those worlds S successfully detects those
considerations.
B. the capacity for volitional control, or self-governance with respect to the
relevant moral considerations M in circumstances C obtains when either
And, the notions of suitability and relevant similarity invoked in Aii and Bii
are given by the standards an ideal, fully informed, rational observer in the actual
world would select as at least co-optimal for the cultivation of our moral reasons-
responsive agency, holding fixed a range of general facts about our current custom-
ary psychologies, the cultural and social circumstances of our agency, our interest
in resisting counterfactuals we regard as deliberatively irrelevant, and given the exis-
tence of genuine moral considerations, and the need of agents to internalize norms
of action for moral considerations at a level of granularity that is useful in ordinary
deliberative and practical circumstances. Lastly, the ideal observer’s determination
is structured by the following ordering of preferences:
So, free will is a composite of conditions A and B. In turn, A and B are subject
to varied ways of being constituted in the natural world. It is a picture of free will
that can be had without a commitment to atomism and monism of the sort that the
contemporary sciences of the mind impugn.
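Condition A, at least, admits a compact schematic restatement in possible-worlds
notation (a sketch only: the symbols Det, W, and the threshold θ are my shorthand,
not the chapter's):

```latex
\[
\mathrm{Cap}_{\mathrm{det}}(S, M, C) \iff
\mathrm{Det}(S, M, C) \;\lor\;
\frac{\bigl|\{\, w \in W_{\sim C} : \mathrm{Det}_w(S, M) \,\}\bigr|}{\bigl|W_{\sim C}\bigr|}
\geq \theta
\]
```

Here $W_{\sim C}$ is the set of possible worlds in which $S$ is in a context relevantly
similar to $C$ and moral considerations of type $M$ are present, and $\theta$ marks the
"suitable proportion" of those worlds in which $S$ successfully detects them.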
Before exploring the virtues of this account, some clarification is in order. First,
the preceding characterizations make use of the language of possible worlds as a con-
vention. The involved locutions (e.g., “in those worlds”) are not meant to commit us to
a particular conception of possibility as referring to concrete particulars. Second, the
possibilities invoked in the preceding account are—by design—to be understood as
constituting the responsibility-relevant capacities of agents. These capacities will ordi-
narily be distinct from the “basic abilities,” or the intrinsic dispositions of agents.16
Instead, they are higher-order characterizations picked out because of their rele-
vance to the cultivation and refinement of those forms of agency that recognize and
respond accordingly to moral considerations of the sort we are likely to encounter in
the world. This is a picture on which the relevant metaphysics of our powers is deter-
mined not by the physical structures to which our agency may reduce but instead by
the roles that various collections of our powers play in our shared, normatively struc-
tured lives. Third, the account is neutral on the nature of moral considerations. Moral
considerations presumably depend on the nature of right and wrong action and facts
about the circumstances in which an agent is considering what to do.17
So, where does all of this get us? Characterizing the capacities that constitute free
will as somewhat loosely connected to our intrinsic dispositions allows us to
accommodate the picture of our agency recommended by the psychological sciences
without abandoning the conviction that our judgments and practices of moral responsibility
have genuine normative structure to them. We are laden with cognitive and sub-
cognitive mechanisms that (however ecologically bounded) sometimes can and
do operate rationally. There are surely times when our autonomic, nonconscious
responses to features of the world come to hijack our conscious plans. When this
occurs, sometimes it will mean we are not responding to reasons. Other times it will
mean that we are responding to reasons, just not the reasons our conscious, delib-
erative selves are aware of or hope will guide action. Still, this fact does not mean
that we are incapable of recognizing and responding to reasons, even moral reasons.
The facts concerning our unexercised capacities, at least as they pertain to assess-
ments of the responsibility-relevant notion of control, depend on counterfactuals
structured by normative considerations.18
A distinctive feature of the account is that it foregrounds a pluralist epistemol-
ogy of moral considerations. It recognizes that sensitivity to moral considerations
is not a unified phenomenon, relying on a single faculty or mechanism. Moral con-
siderations may be constituted by or generated from things as diverse as affective
states, propositional content, situational awareness, and so on. Consequently, the
corresponding epistemic mechanisms for apprehending these considerations will
presumably be diverse as well.19 Moreover, the present picture does not hold that
sensitivity to moral considerations must be conscious, or that the agent must recog-
nize a moral consideration qua moral consideration for it to count as such. An agent
may be moved by moral considerations without consciously recognizing those con-
siderations and without conceiving of them as moral considerations.20
Another notable feature of the account is that it makes free will variably had in
the same individual. That is, the range of moral considerations an agent recognizes
in some or another context or circumstance will vary. In some circumstances agents
will be capable of recognizing a wide range of moral considerations. In other circum-
stances those sensitivities may be narrower or even absent. When they are absent,
or when they dip beneath a minimal threshold, the agent ceases to be a responsible
agent, in that context. We need not suppose that if someone is a responsible agent
at a given time and context, that he or she possesses that form of agency at all times
across all contexts. In some contexts I will be a responsible agent, and in others not.
Those might not be the same contexts in which you are a responsible agent.
When we have reason to believe that particular agents do not have the relevant
sensitivities or volitional capacities in place, we ought not hold that they are genu-
inely responsible, even if we think that in other circumstances the agent does count as
responsible. We may, however, respond in responsibility-characteristic ways with an
eye toward getting the agent to be genuinely responsible in that or related contexts.
Or, we may withhold such treatment altogether if we take such acculturation to be
pointless, not worth the effort, or impossible to achieve in the time available to us.21
However, we can understand a good deal about the normative structure of moral
responsibility if we think of it as modestly teleological, aiming at the development of
morally responsive self-control and the expansion of contexts in which it is effective.
This limited teleology is perhaps most visible in the way a good deal of child rear-
ing and other processes of cultural inculcation are bent to the task of expanding the
range of contexts in which we recognize and rightly respond to moral considerations
(Vargas 2010). By the time we become adults, praise and blame have comparatively
little effect on our internalizing norms, for we oftentimes have come to develop
habits of thought and action that deflect the force of moral blame directed at us.
Still, the propriety of our judgments turns on facts about whether we are capable
of recognizing and appropriately responding to the relevant moral considerations
in play.
the costs are matched high or matched low, negative moods have no effect over
neutral states.
These data may suggest a problem for the present account. Perhaps what the
mood data show is that the agent is not being driven by reasons so much as an
impulse to maintain equilibrium in moods. According to this line of objection, help-
ing is merely an instrumental means to eliminate the bad mood, albeit one that is
structured by the payoffs and challenges of doing so. If this is so, however, then it
appears that agents in bad moods do not seem to be helping for good reasons, or
even moral reasons at all. Consequently, the present account seems to have made
no headway against the threat that experimental psychology presents to Reasons
accounts.24
I agree that the role of mood in agents is complex. Still, I think the challenge can
be met. As an initial move, we must be careful not to presume that affective states
and moral reasons are always divorced. Plausibly, moral considerations will be at
least partly constituted by an agent’s affective states. Moreover, an agent’s affective
states will play a complex role in the detection of what moral considerations there
are. So, what the mood data might show is not that agents in negative moods do not
help for good reasons, or for any moral reasons at all, but rather that being in negative
moods can make one aware of, or responsive to, particular kinds of moral reasons.25
Commiseration and sympathy are quite plausibly vehicles by which the structure of
morality becomes integrated with our psychology. And, as far as I can tell, nothing
in the mood literature rules out this possibility. Indeed, what we may yet have rea-
son to conclude is that the mechanisms of mood equilibrium are some of the main
mechanisms of sympathy and commiseration. To note their activity would thus not
undermine a Reasons picture so much as it would explain its mechanisms.26 If all
this is correct, the proposed account can usefully guide our thinking about Phone
Booth and related examples.
Now consider Samaritan. What this experiment appears to show is that increased
time pressure decreases helping behavior. Nonhelping behavior is, presumably, com-
patible with free will. An agent might decide to not help. Or, depending on how that
subject understands the situation, he or she might justifiably conclude that helping is
supererogatory. So, decreased helping behavior is not direct evidence for absence of
free will. Still, perhaps some agents in Samaritan suffered a loss of free will.
Here are two ways that might have happened. First, if what happened in Samaritan
is that time pressure radically reduced the ability of agents to recognize that some-
one else needs help (which is what at least some of the subjects reported), then this
sort of situational effect can indeed undermine free will precisely by degrading an
agent’s capacity to recognize moral considerations. So, perhaps some Samaritan sub-
jects were like this. A second way to lose free will in Samaritan-like circumstances
could be when time pressure sufficiently undermines the ability of the agent to act
on perceived pro-helping considerations.
A natural question here is how much loss of ability constitutes loss of capacity.
Here, we can appeal to the account given earlier, but it does not give us a bright line.
At best, it gives us some resources for what sorts of things to look for (e.g., what data
do we have about how much time pressure, if any, saps ordinary motivational effi-
cacy of recognized moral considerations?). Some of these issues are quasi-empirical
matters for which more research into the limits of human agency is required. Still, in
the ordinary cases of subjects in Samaritan, it seems that we can say this: if one did
not see the “injured” person, then one is not responsible. Matters are more compli-
cated if one did see the “injured” person and thought he or she needed help, and the
agent thought him- or herself capable of helping without undue risk (the Samaritan
study did not distinguish between agents who had and lacked these convictions). In
such circumstances, I am inclined to think that one could avoid blameworthiness
only if, roughly, time or another situational pressure were sufficiently great that most
persons in such circumstances would be incapable of bringing themselves to help. It
is, as far as I know, an open question whether there are any empirical data that speak
to this question, one that folds in the agent’s understanding of the situation. So, I
think, the Samaritan data do not give us a unified reason to infer a general lack of free
will in time-pressure cases; sometimes time pressure may reduce helping behavior
for any number of reasons, only some of which are free will disrupting.
Finally, let us reconsider the Milgram Obedience cases. Here, there is some reason
to doubt that subjects in the described situations retain the responsibility-relevant
capacities. At least in very general terms, Obedience-like cases are precisely ones in
which agents are in unusual environments and/or subject to unusual pressures.
Plausibly, they are situations that stress the ordinary capacities for responding to rea-
sons, or they invoke novel cognitive and emotional processes in agents. This shift in
situation, and the corresponding psychological mechanisms that are invoked, may
decrease the likelihood that an ordinary agent will have the responsibility-relevant
capacities. We check, roughly, by asking whether in a significant number of delibera-
tively relevant circumstances the evaluated agent would fail to appropriately respond
to the relevant moral considerations. Ceteris paribus, the higher our estimation of
the failure rate in a given agent, the more reason we have to doubt that the agent
possesses the capacity required for moral responsibility. Still, in Obedience-like situa-
tions, agents are not necessarily globally insensitive to moral considerations or even
insensitive to only relevant moral considerations. Some may well be suitably sensitive
to some or all of the moral considerations in play. (Indeed, some subjects did resist
some of the available abhorrent courses of action with greater or lesser degrees of
success.) So, there is a threshold issue here, and in some cases it will be comparatively
unclear to us what an ideal observer would say about a case thus described.
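The rough check just sketched, estimating the failure rate across deliberatively
relevant circumstances and doubting the capacity as that rate climbs, can be put as
a toy calculation. This is a minimal sketch in Python; the function names and the
0.5 threshold are illustrative assumptions, not anything the chapter proposes:

```python
# Toy illustration of the rough capacity check described above.
# The function names and the 0.5 threshold are assumptions for
# illustration, not anything proposed in the chapter.

def capacity_estimate(responds_appropriately, circumstances):
    """Fraction of deliberatively relevant circumstances in which the
    agent appropriately responds to the moral considerations in play."""
    hits = sum(1 for c in circumstances if responds_appropriately(c))
    return hits / len(circumstances)

def has_responsibility_capacity(responds_appropriately, circumstances,
                                threshold=0.5):
    """Ceteris paribus: the higher the estimated failure rate, the more
    reason to doubt that the agent has the relevant capacity."""
    return capacity_estimate(responds_appropriately, circumstances) >= threshold

# Example: an agent who responds appropriately in 7 of 10 relevantly
# similar circumstances clears a 0.5 threshold; one who responds in
# only 2 of 10 does not.
scenarios = list(range(10))
mostly_responsive = lambda c: c < 7
rarely_responsive = lambda c: c < 2
```

As the text notes, the threshold itself is a quasi-empirical matter on which
an ideal observer, not a fixed number, would have to adjudicate.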
that, upon reflection, is consciously rejected. But conscious deliberation and the
corresponding exercise of active agency do not always serve solely to
turn the present psychological tide. Sometimes we are forward-looking, setting up
plans or weighing up values that structure downstream operation.
Much of the time it is obvious what the agent should do, and what way counts
as a satisfactory way of doing it. Among adults it may frequently be the case that
conscious deliberation only injects itself into the psychological tide when there is
a special reason to do so. Such economy of intervention is oftentimes a good thing.
Conscious deliberation is slow and demanding of neurochemical resources. Like
all mechanisms, it is capable of error. Even so, to the extent to which it effectively
resolves conflicts and sets in motion constraints on deliberation and action through
planning and related mechanisms of psychological disciplining, it has an important
role to play.
Situationism suggests that the empirical facts of our agency are at odds with our
self-conception, that context matters more than we tend to suppose. The picture of
free will I have offered is an attempt to be responsive to those facts. The resultant
account is therefore likely to also be at some remove from our naive self-conception.
For example, the tendency to think that our capacities for control are metaphysi-
cally robust, unified, and cross-situationally stable is not preserved by my account of
free will. Instead, free will involves capacities that are functions of agents, a context
of action, and normatively structured practices. It is simply a mistake to think of free
will as a kind of intrinsic power of agents.
Notice that this means free will is partly insulated from direct threats arising from
experimental research. No matter how much we learn about the physical constitu-
tion of agency, it is exceedingly difficult to see how science alone could ever be in a
position to settle whether some or another arrangement of our physical nature has
normative significance for the integrity of attributions of praise and blame.
which our conscious attitudes explain our behaviors (McGuire 1985; Kraus 1995).
One thing that situationism strongly suggests, however, is that circumstances make
a difference for the ability of agents to control their behaviors in light of their prin-
ciples.29 To the extent that responsible agency requires that an agent’s attitudes con-
trol the agent’s behaviors, experimental data can again provide some guidance on
how we might better shape our environments to contribute to that control.
In sum, while situationist data might initially appear to threaten our freedom and
moral responsibility, what we have seen is, if not quite the opposite, at least some-
thing considerably less threatening. Given a situation-sensitive theory of respon-
sible agency and some attention to the data, we find that our agency is somewhat
different than we imagine. The situationist threat turns out to be only one aspect of a
more complex picture of the forces that enhance and degrade our agency. Whether
and where we build bulwarks against the bad and granaries for the good is up to us.
Here, a suitably cynical critic might retort that this is indeed something, but not
enough. After all, we are still faced with a not-altogether-inspiring upshot that since
we do not control our situations as much as we like, we are still not responsible
agents as much as we might have hoped.
At this point, a concession is in order. I agree that we have less freedom than we
might have hoped for, but I must insist that we have more freedom than we might
have feared. Although we must acknowledge that our freedom-relevant capacities
are jeopardized outside of responsibility-supporting circumstances, we may still
console ourselves with the thought that we have a remarkable amount of control in
suitable environments.
Such thoughts do not end the matter. The present equilibrium point gives rise to
new issues worth mentioning, if only in closing. In particular, it is important to recog-
nize that societies, states, and cultures all structure our actual capacities. Being raised
in an antiracist context plays a role in enhancing sensitivity to moral considerations
tied to antiracist concerns. Similarly, being raised in a sexist, fascist, or classist culture
will ordinarily shape a person’s incapacities to respond to egalitarian concerns. Such
considerations may suggest that we need to ask whether societies or states have some
kind of moral, practical, or political obligation to endeavor to shape the circumstances
of actors in ways that insulate them against situational effects that degrade their (moral
or other) reasoning. We might go on to ask whether societies or states have commen-
surate obligations to foster contexts that enhance our rational and moral agency. If they
do, it suggests that free will is less a matter of science than it is of politics or morality.
ACKNOWLEDGMENTS
Thanks to Henrik Walter and Dana Nelkin (twice over) both for providing com-
ments on predecessor papers and for affording me the circumstances in which I
couldn’t put off writing about these things. Thanks also to Kristin Drake, Eddy
Nahmias, Joshua Knobe, David Velleman, Till Vierkant, and David Widerker for
helpful feedback on ideas in this chapter. I am also grateful to Ruben Berrios and
Christian Miller for their commentaries at the Selfhood, Normativity, and Control
conference in Nijmegen and the Pacific APA in 2007, respectively; thanks, too, to
audience members in both places.
344 DECOMPOSED ACCOUNTS OF THE WILL
NOTES
1. In the contemporary philosophical literature on free will, this seems to be the domi-
nant (but not exclusive) characterization of free will (Vargas 2011).
2. These “other” conditions might be relatively pedestrian things: for example, being
sometimes capable of consciousness, having beliefs and desires, being able to form
intentions, having generally reliable beliefs about the immediate effects of one’s
actions, and so on. Also, a Reasons account need not preclude other, more ambi-
tious demands on free will. One might also hold that free will requires the presence
of indeterminism, nonreductive causal powers, self-knowledge, and so on. However,
these further conditions are not of primary interest in what follows.
3. Elsewhere, I have attempted to say a bit about the attractions of a Reasons view in
contrast to “Identificationist” views (Vargas 2009). There are, however, other pos-
sibilities beyond these options.
4. It may be worth noting that the particulars of this experiment have not been easily
reproduced. Still, I include it because (1) there is an enormous body of literature
that supports the basic idea of “mood effects” dramatically altering behavior, and (2)
because this example is a familiar and useful illustration of the basic situationist idea.
Thanks to Christian Miller for drawing my attention to some of the troubles of the
Isen and Levin work.
5. In ordinary discourse, to say something “is a threat” is to say something ambiguous. It
can mean either the appearance of some risk, or the actuality of risk or
jeopardy, where this latter thing is meant in some nonepistemic way. When my kids and
I pile out of the minivan and into our front yard, I strongly suspect our neighbors regard
us as a threat to the calm of the neighborhood. Nevertheless, there are some days on
which the threat is only apparent. Sometimes we are simply too tired to yell and make
the usual ruckus. As I will use the phrase “the situationist threat,” it is meant to be neutral
between the appearance and actuality of jeopardy. Some apparent threats will prove to
be only apparent, and others will be more or less actual to different degrees.
6. We must be careful, though, not to overclaim what the body of literature gets us.
Although it is somewhat improbable that one could do so, it is possible that one
could generate an alternative explanation that (1) is consistent with the data but that
(2) does not have the implication that we are subject to situational effects that we
misidentify or fail to recognize. This would be a genuine problem for the situationist
program.
7. For what it is worth, I suspect that one source of the perception of a situationist
threat is traceable to an overly simple description of the phenomena. Recall the data
in Phone Booth. We might be tempted to suppose that what it shows is that agents
are a site upon which the causal levers of the world operate, if we describe it as a
case where “the dime makes the subject help.” Such descriptions obscure something
important: the fact of the agent’s psychology and its mediating role between situation
and action. The more accurate description of Phone Booth seems to be this: the situ-
ation influences (say) the agent’s mood, which affects what the agent does. Once we
say this, however, we are appealing to the role of some psychological elements that
presumably (at least sometimes) constitute inputs to and/or elements of the agent’s
active, conscious self. If we focus on this fact, we do not so easily lose the sense of the
subject’s agency. A coarse-grained description of situationist effects may thus some-
times imply a form of fatalism that bypasses the agent and his or her psychology.
More on a related issue in section 4.
Situationism and Moral Responsibility 345
8. This idea plays a particularly prominent role in the work of Daniel Wegner and pro-
ponents of recent research on the automaticity of mental processes (e.g., John Bargh).
See section 4.
9. On some views, the privileged psychological element could include things such as
the agent’s conscious deliberative judgments or some maximally coherent arrange-
ment of the agent’s desires and beliefs.
10. Negligence is an aspect of moral responsibility that is particularly difficult to account for, so
perhaps this is not so telling. Matt King (2009) has recently argued against treating
negligence as a case of responsibility precisely because it lacks the structure of more
paradigmatic cases of responsibility.
11. The bypass threat might work in a different way. Perhaps the worry is not that our
conscious deliberative agency is sometimes trumped by our emotions. Perhaps the
picture is, instead, that our active, deliberative agency never plays a role in deciding
what we do. Perhaps situationist data suggest that our active, mental life is a kind of
sham, obscuring the operation of subagential processes beyond our awareness. I am
dubious, but for the moment I will bracket this concern, returning to it when I dis-
cuss the automaticity literature.
12. One remarkable result from that study: women shown TV commercials with women
in stereotypically unintelligent roles before an exam had poorer performance on
math tests (393).
13. Some social psychologists have contended that the degree to which populations
emphasize individual versus situation in explanation and prediction varies across
cultures (Nisbett 2003). Recently, the idea that circumstances structure decision
making in subtle and underappreciated ways has recently received popular atten-
tion because of the visibility of because of the visibility of the work of Thaler and
Sunstein (2009). The present challenge is to provide a characterization of what the
responsibility-relevant notion of control comes to given that our decisions are vul-
nerable to “nudges” of the sort they describe.
14. For a useful overview of the difficulties faced by the classical conditional analysis,
see Kane 1996. For a critical discussion of more recent approaches in this vein, see
Clarke 2009.
15. Much of the machinery I introduce to explicate this idea can, I think, be paired with
a different conception of the normative aim for moral responsibility; the specific
powers identified will presumably be somewhat different, but the basic approach
is amenable to different conceptions of the organizing normative structure to the
responsibility system. I leave it to others to show how that might go.
16. I borrow the term “basic abilities” from John Perry, although my usage is, I think, a bit
different (Perry 2010).
17. I favor the language of moral considerations (as opposed to moral reasons) only
because talk of reasons sometimes is taken to imply a commitment to something like
an autonomous faculty that properly operates independently of the effects of affect.
There is nothing in my account that is intended to exclude the possibility that affect
and emotion, in both the deliberating agent and in those with whom the agent inter-
acts, play constitutive roles in the ontology of moral considerations.
18. Notice that even if the skeptic is right that we are very often not suitably responsive
to moral considerations, the present account suggests that there may yet be some
reason for optimism, at least to the extent to which we can enhance our capacities and
expand the domains in which they are effective.
19. It would be surprising if the epistemic mechanisms were the same for recognizing
such diverse things as that someone is in emotional pain, that other persons are ends
in themselves, and that one should not delay in getting one’s paper to one’s commen-
tator. The cataloging of the varied epistemic mechanisms of moral considerations will
require empirical work informed by a more general theory of moral considerations,
but there is already good evidence to suggest that there are diverse neurocognitive
mechanisms involved in moral judgments (Nichols 2004; Moll et al. 2005).
20. Huck Finn may be like this, when he helps his friend Jim escape from slavery. For an
insightful discussion of this case, and the virtues of de re reasons responsiveness, see
Arpaly (2003).
21. Cases of this latter sort can occur when one visits (or is visited by) a member of a
largely alien culture. In such cases, we (at least in the West, currently) tend toward
tolerance of behavior we would ordinarily regard as blameworthy precisely because
of the conviction that the other party operates out of an ignorance that precludes
apprehension of the suitable moral considerations. As George Fourlas pointed out to
me, in recent popular culture, this phenomenon has been exploited to substantial
and controversial comedic effect by the comedian Sacha Baron Cohen.
22. The supererogatory nature of helping can be important if, for example, one is worried
about the not helping condition, and how infrequently people help strangers with
minor problems. Perhaps a more global issue here is simply how rarely we
act on moral considerations, whether because of failures of perception or motivation.
I return to this issue, at least in part, at the end.
23. For a helpful discussion of this literature, and its significance for philosophical work
on moral psychology, see Miller (2009a, 2009b).
24. Thanks to Christian Miller for putting this concern to me.
25. Note that none of this requires that the agent conceive of the reasons as moral rea-
sons. As noted earlier, the relevant notion of responding to moral reasons is, borrow-
ing Arpaly’s terminology, responsiveness de re—not de dicto.
26. What would undermine the Reasons approach? Here’s one possibility amenable to
empirical data: If mood data showed that people were driven to increased helping
behavior when they ought not (e.g., if the only way to help would be to do some-
thing really immoral), this would suggest that at least in those cases mood effects were
indeed disabling or bypassing the relevant moral-considerations-sensitive capacities.
But in mood-mediated cases, such behavior is rarely ballistic in this way.
27. Claude Steele et al. (2002) doubt it is a motivation effect because “the effort
people expend while experiencing stereotype threat on a standardized test has been
measured in several ways: how long people work on the test, how many problems
they attempt, how much effort they report putting in, and so on. But none of
these has yielded evidence, in the sample studied, that stereotype threat reduces
test effort” (397).
28. For my part, I do not think there is anything like a unified account to be told of the
justified norms governing what counts as weakness of will and culpable failure. I sus-
pect that these norms will vary by context and agent in complex ways, and in ways that
are sensitive to folk expectations about psychological resilience and requirements on
impulse management. In his characteristically insightful way, Gary Watson (2004)
may have anticipated something like this point in the context of addiction: “The
moral and legal significance of an individual’s volitional weaknesses depends not only
on judgments about individual responsibility and the limits of human endurance but
on judgments about the meaning and value of those vulnerabilities” (347).
29. For example, in one study, the subjects’ attitudes controlled their behavior more
when they were looking at themselves in a mirror (Carver 1975).
REFERENCES
Aronson, Joshua, Michael J. Lustina, Catherine Good, Kelli Keough, Claude M. Steele,
and Joseph Brown. 1999. When White Men Can’t Do Math: Necessary and Sufficient
Factors in Stereotype Threat. Journal of Experimental Social Psychology 35: 29–46.
Arpaly, Nomy. 2003. Unprincipled Virtue. New York: Oxford University Press.
Asch, Solomon. 1951. Effects of Group Pressures upon the Modification and Distortion of
Judgment. In Groups, Leadership, and Men, edited by Harold Guetzkow, 177–190.
Pittsburgh: Carnegie Press.
Bargh, John A. 2008. Free Will Is Un-natural. In Are We Free? Psychology and Free Will,
edited by John Baer, James C. Kaufman, and Roy F. Baumeister, 128–154. New York:
Oxford University Press.
Bargh, John A., and M. J. Ferguson. 2000. Beyond Behaviorism: On the Automaticity of
Higher Mental Processes. Psychological Bulletin 126 (6 Special Issue): 925–945.
Bayne, Tim. 2006. Phenomenology and the Feeling of Doing: Wegner on the Conscious Will. In
Does Consciousness Cause Behavior? An Investigation of the Nature of Volition, edited by S.
Pockett, W. P. Banks, and S. Gallagher, 169–186. Cambridge, MA: MIT Press.
Beaman, A. L., P. L. Barnes, and B. McQuirk. 1978. Increasing Helping Rates through
Information Dissemination: Teaching Pays. Personality and Social Psychology Bulletin
4: 406–411.
Carver, C. S. 1975. Physical Aggression as a Function of Objective Self-Awareness and
Attitudes towards Punishment. Journal of Experimental Social Psychology 11: 510–519.
Clarke, Randolph. 2009. Dispositions, Abilities to Act, and Free Will: The New
Dispositionalism. Mind 118 (470): 323–351.
Darley, John, and Daniel Batson. 1973. “From Jerusalem to Jericho”: A Study of Situational
and Dispositional Variables in Helping Behavior. Journal of Personality and Social
Psychology 27: 100–108.
Dennett, Daniel. 1984. Elbow Room. Cambridge, MA: MIT Press.
Doris, John. 1998. Persons, Situations, and Virtue Ethics. Nous 32: 504–530.
Doris, John. 2002. Lack of Character. New York: Cambridge University Press.
Doris, John, and Dominic Murphy. 2007. From My Lai to Abu Ghraib: The Moral
Psychology of Atrocity. Midwest Studies in Philosophy 31: 25–55.
Fischer, John Martin, and Mark Ravizza. 1998. Responsibility and Control: A Theory of
Moral Responsibility. New York: Cambridge University Press.
Harman, Gilbert. 1999. Moral Philosophy Meets Social Psychology: Virtue Ethics and the
Fundamental Attribution Error. Proceedings of the Aristotelian Society 99 (3): 315–331.
Isen, Alice, and Paula Levin. 1972. Effect of Feeling Good on Helping. Journal of Personality
and Social Psychology 21: 384–388.
Kamtekar, Rachana. 2004. Situationism and Virtue Ethics on the Content of Our
Character. Ethics 114 (3): 458–491.
Kane, Robert. 1996. The Significance of Free Will. Oxford: Oxford University Press.
Kihlstrom, John F. 2008. The Automaticity Juggernaut—Or, Are We Automatons after
All? In Are We Free? Psychology and Free Will, edited by John Baer, James C. Kaufman,
and Roy F. Baumeister, 155–180. New York: Oxford University Press.
King, Matt. 2009. The Problem with Negligence. Social Theory and Practice 35: 577–595.
Kraus, Stephen. 1995. Attitudes and the Prediction of Behavior: A Meta-analysis of the
Empirical Literature. Personality and Social Psychology Bulletin 21: 58–75.
Latané, Bibb, and Judith Rodin. 1969. A Lady in Distress: Inhibiting Effects of Friends
and Strangers on Bystander Intervention. Journal of Experimental Social Psychology 5:
189–202.
Li, Wen, Isabel Moallem, Ken A. Paller, and Jay A. Gottfried. 2007. Subliminal Smells Can
Guide Social Preferences. Psychological Science 18 (12): 1044–1049.
McGuire, W. J. 1985. Attitudes and Attitude Change. In The Handbook of Social Psychology,
edited by G. Lindzey and E. Aronson, 238–241. New York: Random House.
Mele, Alfred R. 2009. Effective Intentions: The Power of the Conscious Will. New York:
Oxford University Press.
Merritt, Maria. 2000. Virtue Ethics and Situationist Personality Psychology. Ethical Theory
and Moral Practice 3: 365–383.
Milgram, Stanley. 1969. Obedience to Authority. New York: Harper and Row.
Miller, Christian. 2009a. Empathy, Social Psychology, and Global Helping Traits.
Philosophical Studies 142: 247–275.
Miller, Christian. 2009b. Social Psychology, Mood, and Helping: Mixed Results for Virtue
Ethics. Journal of Ethics 13: 145–173.
Moll, Jorge, Roland Zahn, R. de Oliveira-Souza, Frank Krueger, and Jordan Grafman.
2005. The Neural Basis of Human Moral Cognition. Nature Reviews Neuroscience 6:
799–809.
Montague, P. Read. 2008. Free Will. Current Biology 18 (14): R584–R585.
Nahmias, Eddy. 2002. When Consciousness Matters: A Critical Review of Daniel Wegner’s
The Illusion of Conscious Will. Philosophical Psychology 15 (4): 527–541.
Nahmias, Eddy. 2007. Autonomous Agency and Social Psychology. In Cartographies of the
Mind: Philosophy and Psychology in Intersection, edited by Massimo Marraffa, Mario De
Caro, and Francesco Ferretti, 169–185. Berlin: Springer.
Nelkin, Dana. 2005. Freedom, Responsibility, and the Challenge of Situationism. Midwest
Studies in Philosophy 29 (1): 181–206.
Nelkin, Dana. 2008. Responsibility and Rational Abilities: Defending an Asymmetrical
View. Pacific Philosophical Quarterly 89: 497–515.
Nichols, Shaun. 2004. Sentimental Rules: On the Natural Foundations of Moral Judgment.
Oxford: Oxford University Press.
Nisbett, Richard E. 2003. The Geography of Thought: How Asians and Westerners Think
Differently—and Why. New York: Free Press.
Pelham, Brett W., Matthew C. Mirenberg, and John T. Jones. 2002. Why Susie Sells
Seashells by the Seashore: Implicit Egotism and Major Life Decisions. Journal of
Personality and Social Psychology 82 (4): 469–487.
Perry, John. 2010. Wretched Subterfuge: A Defense of the Compatibilism of Freedom and
Natural Causation. Proceedings and Addresses of the American Philosophical Association
84 (2): 93–113.
Pietromonaco, P., and Richard Nisbett. 1992. Swimming Upstream against the
Fundamental Attribution Error: Subjects’ Weak Generalizations from the Darley and
Batson Study. Social Behavior and Personality 10: 1–4.
Sabini, John, Michael Siepmann, and Julia Stein. 2001. The Really Fundamental Attribution
Error in Social Psychological Research. Psychological Inquiry 12: 1–15.
Steele, Claude M., Steven J. Spencer, and Joshua Aronson. 2002. Contending with
Group Image: The Psychology of Stereotype and Social Identity Threat. Advances in
Experimental Social Psychology 34: 379–440.
Goal selection (setting), 171, 173, 222, 229, 267
Gollwitzer, P., 23, 295, 307f
Graham, G. & Stephens, G.L., 122ff
Haggard, P., 38, 47, 67, 76ff, 104, 108ff, 111f, 119, 126, 149, 150, 188
Hall, L., 8
Hard determinism, 2, 24
Haynes, J.D., 6f, 25, 39
Heavey, C., 305f
Hieronymi, P., 22, 260, 283ff, 292ff
Holton, R., 5ff, 14–15, 25, 26, 142, 144, 155, 283, 291ff
Homunculus (fallacy), 2, 9, 163, 171, 189, 190, 300
Humphrey, N., 171
Hurlburt, R., 305f, 317
Hurley, S., 27
Hyperintellectualism, 276
Hysteria/Conversion disorder, 131
Ideal rationality, 289
Ideomotor theory, 191
Impulsivity, 226, 228
Incompatibilism, 2, 7, 25, 60
Indeterminism, 3, 45, 52, 344
Individualism, 4
Inferior parietal
    cortex, 48, 120, 121
    lobe, 48, 111
Insanity defense, 15
Instinct, 329
Instrumental reasoning, 266ff
Intentional agency, 16f, 26, 38, 163ff, 251
Intentional binding effect, 109, 114, 149, 150
Intentional realism, 300, 305, 317
Intention-in-action, 121
Intentions, 34, 38ff, 45, 49, 51, 75, 77ff, 103, 106, 108, 121ff, 221ff, 249ff, 258, 263, 273, 290, 296
    awareness of, 48, 78, 79, 150
    conscious, 6f, 38f, 51, 73, 77, 84, 258
    distal (prior), 38, 50, 121, 122, 123, 126, 128, 300
    experience of, 78f, 108, 120–1, 277
    formation, 121–3, 125, 126, 128, 130–1
    implementation, 23, 222–5, 307
    motor, 38ff, 48f, 77, 106–7, 122, 128
    proximal, 38f, 49, 73, 77ff, 121, 125, 128, 173
Introspection, 165ff, 175, 300ff
Isham, E., 79ff
James, W., 191, 250, 252f, 259
Johansson, P., 8
Joint action, 105, 130
Judgment sensitivity, 291f
Kertesz, I., 208ff
Kihlstrom, J., 331
Ku Klux Klan, 309
Kuhl, J. & Koole, S., 201
Lateral Frontopolar Cortex, 65
Lateral Intraparietal Area (LIP), 42
Lau, H., 37, 40, 41, 77ff, 84
Lazy argument, 6, 89, 92f
Lewis, D., 97, 98
Libertarianism, 2, 3, 139, 142, 143
Libet, B., 6, 7, 24, 36ff, 47, 61ff, 73ff, 103, 149
Longo, M., 111ff
Managerial control, 283ff
    unaware, 293ff
Many-many problem, 21, 250ff
Marcel, T., 115
McCann, H., 270ff
McGeer, V., 288ff
Mechanism, 46, 51
Medial dorsolateral prefrontal cortex, 40, 47
Medial prefrontal cortex, 37, 41, 47, 65, 215
Mele, A., 7, 247, 260, 271ff
Mental action, 20–2, 247, 264ff, 296
    automaticity, 248
Mental agency, 19–22, 283ff
    hierarchical accounts of, 286ff
Mental muscle, 292ff
Merino, M. (La Flaca Alejandra), 202, 205, 211
Merleau-Ponty, M., 120
Metacognition, 22, 39, 125, 258, 268, 276
Metacognitive beliefs, 105, 275, 317
Metarepresentation, 3, 290ff
354 Index
Truth, aiming at, 263ff
Tsakiris, M., 12–13, 14, 104, 108, 110, 111, 119
Two visual systems hypothesis, 169, 171–2, 184–5
Vargas, M., 8, 13, 14–15, 22, 24, 26
Velleman, D., 11, 14, 26, 137
Veto, 7, 24, 34, 47, 70, 73ff, 187
Vierkant, T., 22, 23
Volition, 7, 23, 24, 25, 33ff, 80, 113, 147, 149, 167, 183, 188, 190, 191, 263, 337
Voluntary movement/action, 12, 23, 32, 34, 36, 37, 38, 40, 49, 50, 51, 61, 62, 66, 75, 103, 104, 105, 108, 109, 110, 111, 112, 113, 114, 119, 149, 183, 184, 191, 214
Wegner, D., 7, 8, 10, 13, 49, 78, 98, 103, 105, 150, 189, 190, 210, 216, 332, 345
Williams, B., 263
Willpower, 201, 308
Wittgenstein, L., 108
Wolpert, D., 106
Wu, W., 20–1, 22
Zombie challenge, 5–9, 10, 11, 15, 16, 17, 18, 23, 25, 26, 258
Zombie systems, 169, 174, 245, 258