
Book Review: Surfing Uncertainty: Prediction, Action and the Embodied Mind.

Oxford: Oxford University Press, 2016.

Andy Clark, recognized for his work on the embodied and extended mind, is a leading reference when it comes to presenting cognitive science in a natural, comprehensible and yet philosophical fashion. He also has the ability to unite diverse empirical and theoretical proposals under a common cause, narrating a story of brain, mind, body and world functioning that is general enough to reveal a common essence hidden beneath such a plurality of ideas, yet rigorous enough to serve as a position to be held in debates and applications. Surfing Uncertainty is his most recent challenge, this time to transform a computational neuroscientific model, expressed in rich and dense mathematics and in computer simulations, into a coherent natural-language story of how the brain functions and how it interacts with body and environment. This story is supported on virtually every page by accessible reports of evidence ranging from the morphology and physiology of neurons to behavioral and cognitive psychology, brain imaging, computer simulations, and embodied and developmental robotics.
Clark seems to have three major goals in mind for this book. The first is to develop and defend a predictive processing approach to brain functioning. The second is to demonstrate how such an approach can account, under the same processing principles, for a whole span of mental functions such as perception, action, attention, imagination, language and higher reasoning. The third is to argue that such a model is compatible with, or even enriches, previous work (including, but not limited to, his own) on the embodied, enactive, situated and extended mind. This book review will discuss whether, and if so how, Clark succeeds in each of these three central goals.
Computational neuroscience is a very recent field which has advanced greatly in the twenty-first century. Playing a role analogous to that of theoretical physics within physics, computational neuroscience has developed mathematically dense models of brain function which, unlike connectionism, aim to replicate properties of real neurons and nervous systems. Despite these promising goals, computational neuroscience models have only now started to impact other areas of cognitive science and related technological, health and humanities fields. With its elegant language, philosophical approach, and rich content, Surfing Uncertainty is a milestone step towards making a computational neuroscience account accessible and usable across the interdisciplinary fields working to understand the human mind.
Clark's first goal is to propose and defend the predictive processing account, which unifies aspects of different models (such as predictive coding, emulation theory and active inference) around a very similar story. That story suggests the brain should be thought of as being in a restless, active cycle of predicting what will perturb it in the proximal and distal future. Instead of being understood as reading input from the world, on the predictive brain story the brain uses statistics to anticipate inputs before they even arrive. These predictions are based on expectations (or a statistical generative model) which foresee the most likely outcomes of stimuli and events and bet that thus and so will obtain. By keeping and polishing these expectations, the brain can stay ahead of the game when it comes to dealing with the world.
These models suggest the brain is formed by a hierarchy of processing (comprising higher and lower levels) in which multiple layers of neurons are organized into a network with two major streams of information flow. The top-down flow is understood as conveying multiple predictions: each higher layer attempts to predict the activity of the one beneath it. The bottom-up flow conveys error corrections on previously made predictions to each higher layer. If predictions of a given event are on track, then the lower sensory stimulation is attenuated; it need not even be considered. If, on the other hand, predictions mislead, sensory stimulation flags the difference between what was predicted and what was sensed, and the system tries to close that gap. This process of prediction error minimization is, Clark claims, the brain's major goal.
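To make this division of labour concrete, here is a deliberately minimal sketch in Python with NumPy (my own illustrative toy, not a formulation taken from the book) of a single layer that issues a top-down prediction, receives back only the bottom-up prediction error, and nudges its estimate so as to shrink that error over time:

    import numpy as np

    # Toy single-layer predictive coding loop (illustrative only).
    # A "higher level" holds an estimate mu of a hidden cause; the "lower level"
    # receives a noisy sensory sample and passes back only the prediction error.

    rng = np.random.default_rng(0)

    true_cause = 3.0        # the hidden state of the world
    mu = 0.0                # the higher level's current estimate (its "expectation")
    learning_rate = 0.1     # how strongly each error nudges the estimate

    for t in range(200):
        sensory_input = true_cause + rng.normal(0.0, 0.5)  # noisy bottom-up signal
        prediction = mu                                     # top-down prediction
        error = sensory_input - prediction                  # only this flows upward
        mu += learning_rate * error                         # update to shrink error

    print(round(mu, 2))  # settles near 3.0: prediction error has been minimized

Stacked into many layers tracking many interacting causes, this kind of error-driven settling is the process the book narrates in prose.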
Clark notes a very strong and interesting shift that the predictive processing approach suggests. It proposes that the forward flow consists not of all the detected features being passed onwards to higher levels, but only of the error needed to correct and update the models. Rather than conveying all information from the environment, the forward flow provides a natural funnel which guarantees processing economy by focusing on newsworthy information in the form of error correction.
Predictions cascade downwards at each level and error corrections propagate upwards, showing exactly what is faulty and needs to be corrected in future models. Thus, lower levels carry the news flashes, since they detect the most recent errors to propagate upwards, while higher levels receive error corrections coming from various other strands of the network. That is, the higher layers have their models corrected from various sources, and the lower levels carry tokens of the newest corrections to be made. This is why, at any given time, there is not one generative model but various co-evolving models, and also why there is a bidirectional flow of information.
Prediction error is also related to an important concept in this approach, that of surprisal. Predictions are based on models which are a form of subpersonal expectation. When these expectations are not met, prediction error flags the mismatch as surprisal. In order to predict, the brain is always attempting to match higher expectations to the next information reported from the bottom. Surprisal occurs, therefore, when there is a mismatch between expectation and the information conveyed by error signaling. The goal of the system at every second is to minimize surprisal. To reach this goal it must constantly update its models so as to accommodate novelty. Well-tuned predictions enable the system to keep surprisal at the lowest level possible.
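For orientation, the standard information-theoretic gloss, which the book mostly leaves in prose, is that the surprisal of a sensory outcome o under a generative model m is

    S(o) = -\log p(o \mid m)

so improbable inputs are literally more surprising, and keeping average surprisal low is exactly what well-tuned predictions buy the system.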
The first part of the book covers the basic mechanisms of the proposed predictive processing account. It is a very comprehensive review of these predictive models and is likely the best review yet available for those who are not familiar with this work; that is, it has real introductory value. However, one might be left wondering how this processing can come about. For instance, a central concept in the story Clark narrates is that of precision weighting. Precision weighting is a mechanism that performs a sort of meta-analysis on the hierarchical economy, with the power to strengthen what seems reliable at a given moment and to attenuate what is misleading. By measuring uncertainty it can boost the value given either to error signaling or to predictions. This is a major bet of the model, because it explains: the mechanism underlying attention; how the top-down and bottom-up information flows are to be controlled; how the interaction of multiple brain areas, including multi-modal ones, is to be governed; and even whether, at a given moment, we will act or merely imagine. Further, various pathological conditions, such as autism, schizophrenia and bodily illusions, are explained as disturbances in precision. Now, all of these applications are convincing, well argued and plausible. However, since the book alone gives us little access to the details of how precision works (it is probably better expressed mathematically), a philosophical mind might worry that it is playing the role of a homunculus. How does precision know so much and decide so much? At first this seems to be an impasse, and I think any philosophical mind reading the book on its own would be justified in this worry. Nonetheless, the solution seems to be to dig into deeper material on precision-weighting mechanisms; seen positively, this is an invitation to further investigation in other works. And Clark, possibly with this in mind, presents us more with references than with claims. Thus, the first major goal of the book, giving the reader a general understanding of predictive processing, is accomplished.
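As a rough illustration of why precision need not be a homunculus, here is a toy sketch of my own in Python with NumPy (not drawn from the book) in which precision is nothing more than inverse variance; unreliable signals then automatically count for less in the combined estimate, with nothing 'deciding' anything:

    import numpy as np

    # Toy precision weighting: two sensory channels report the same hidden cause,
    # one reliable and one noisy. Precision is inverse variance, so the combined
    # estimate automatically leans on the reliable channel.

    rng = np.random.default_rng(1)
    reliable = rng.normal(5.0, 0.1, size=200)    # tight, trustworthy channel
    unreliable = rng.normal(5.0, 2.0, size=200)  # noisy, misleading channel

    precision_a = 1.0 / reliable.var()           # high precision (low variance)
    precision_b = 1.0 / unreliable.var()         # low precision (high variance)

    estimate = (precision_a * reliable.mean() + precision_b * unreliable.mean()) \
               / (precision_a + precision_b)

    print(round(estimate, 2))  # close to 5.0, dominated by the reliable channel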
The second major goal of the book is to provide a unifying account of various mental functions. Perception is the standard case for predictive processing models, especially vision. These models were first developed as predictive coding of visual stimuli, showing how vision could be achieved not by accumulating and organizing discriminative features of the world but by eliminating predictable features and taking into account only selected errors. These models downplayed the passive image of perception (such as that proposed by David Marr) by proposing, and explaining how, expectations shape visual processing. This shift in vision allowed similar applications to the other senses. Clark uses a very nice example for audition, that of sine-wave speech comprehension: how we can understand language rendered as pure tone whistles. In this case it becomes very clear how expectation must be playing a huge role in perception. When first hearing sine-wave audio we cannot even tell that it is language; however, with training or induction our expectations are shaped, and at a certain point we cannot help but hear it as normal language. So the predictive processing account of perception is one in which expectations facilitate the interpretation of real-time sensory perturbation by means of statistical 'inferences'.
Attention in Surfing Uncertainty is replaced by precision weighting. This is a complicated move which had already drawn much criticism in response to Clark's 2013 predictive processing paper. In Surfing Uncertainty, Clark chooses not so much to engage that debate directly as to present a more complete account with further examples. Attention is deployed when some events are to be highlighted as more relevant than others, when tasks that need solving are focused on and others are ignored. If precision can determine the trustworthiness of sensory information, it seems natural that highly weighted prediction error will account for attention. There are thus two funnels in the bottom-up flow: first, not all sensory information but only prediction error continues to flow up the hierarchy; second, of these errors, only the most highly weighted receive the more specific treatment that, in folk psychology, we call attention.
In contrast to attention, imagination is explained as the attenuation of error and an increase in the weighting of predictions and higher-level generative models. If each higher layer can predict the response profile of the level below, that suggests higher levels are also capable of producing a virtual version of the usual sensory data. Thus, by blocking out the bottom-up flow, the brain traps itself inside its own generative models, enabling the world of imagination.
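One way to picture this, extending my earlier toy sketch (again an illustration of the idea, not the book's own machinery): if the gain on bottom-up error is set to zero, the same update rule free-runs on its own predictions, which is the sense in which the system becomes trapped inside its generative models:

    # Imagination as attenuated sensory gain: with the gain on bottom-up error
    # set to zero, the update rule hears only its own predictions and the world
    # no longer corrects it.

    sensory_gain = 0.0   # precision on sensory prediction error dialed to zero
    mu = 2.0             # whatever the generative model currently expects

    for t in range(50):
        sensory_input = 7.0                            # the world says otherwise...
        error = sensory_gain * (sensory_input - mu)    # ...but the error is muted
        mu += 0.1 * error                              # so no correction arrives

    print(mu)  # still 2.0: the system replays its own expectations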
Action in predictive processing is no less controversial than attention, and it takes a great deal of study and understanding of predictive processing models to get a secure picture of how it works. Drawing mostly on Friston's active inference proposal, Clark argues that even motor commands can be reduced to the prediction and error regime. Motor areas, like perceptual areas, are always busy expecting and predicting sensory stimuli, but in this case proprioceptive stimuli. Thus, action is treated as an outcome of the perception of proprioceptive states. In action, the brain is attempting to minimize proprioceptive prediction error. A prediction concerning our future trajectory and position is generated, and the highly weighted error relative to that prediction makes the action come about. Now, it is not that this triggers a motor command; motor commands are replaced: these highly weighted errors relative to future proprioceptive states directly cause action. There is a tricky point in this proposal, because both prediction (the goal) and error (the cause of action) matter, and they need to be attenuated or amplified at different levels of the hierarchy for action to come about. If errors relative to our future trajectories were attenuated, we would only imagine our action. However, errors relative to our current proprioceptive states must be attenuated for action to come about, or else we would stand still, merely registering where we already are. Action comes about with highly weighted error relative to predictions of our future states and attenuated error relative to predictions of our current position. This treatment of action shows just how dynamical and complex the predictive processing account can become. Simply distinguishing between weighting errors and weighting predictions is not enough; the level of the hierarchy at which the weighting occurs, and which prediction it is relative to, also determines function.
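A minimal sketch of the core idea (again my own toy, not Friston's mathematics): the system predicts the proprioceptive state it intends to occupy, and because that error is highly weighted while error about the current posture is attenuated, the 'motor command' is nothing but whatever movement cancels the remaining gap:

    # Toy active inference: instead of issuing a classical motor command, the
    # system predicts where the limb "should" be and moves so as to cancel the
    # highly weighted proprioceptive prediction error.

    arm_position = 0.0          # current proprioceptive state
    predicted_position = 10.0   # top-down prediction of the intended posture
    error_gain = 0.3            # high weighting on this proprioceptive error

    for step in range(30):
        error = predicted_position - arm_position   # proprioceptive prediction error
        arm_position += error_gain * error          # moving is what reduces the error

    print(round(arm_position, 2))  # about 10.0: the prediction has been fulfilled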
As Clark discusses these various mental functions under the predictive processing model, he presents the reader with references to countless empirical studies supporting his views. Nonetheless, he is humble in suggesting that all of these applications to other areas of cognition need further study and confirmation. I think this model is very advanced and very trustworthy. It might not apply to cognition as a whole, or it might be false, but only in the way any other serious cognitive scientific proposal might. However, the same cannot be said for the account of language and higher reasoning. It did not seem to me that these functions follow as naturally from the predictive processing schema as the others reviewed above. Nonetheless, it seems Andy Clark himself would admit that language and reasoning are taken to be more of a challenge for predictive processing to overcome than an area in which its success has been even minimally demonstrated. I think he does manage to unify various functions convincingly under the same predictive processing strategy. Still, I fear that if you have only a few cards to play in explanation, a problem may arise later in discriminating between the unified phenomena. For instance, how are we to individuate, with only these cards (predictions, precisions, errors), similar phenomena such as dreaming, day-dreaming, thinking aloud, being lost in thought, and so on? But if the goal was to explain how distinct phenomena such as attention and movement can be explained similarly, then it was met.
Finally, Andy Clark also had the more properly philosophical goal of showing how predictive processing can go along with, or even enrich, embodied, situated and extended accounts. Even before Surfing Uncertainty was written, philosophers such as Jesse Prinz and Tony Chemero had worried that Clark, by adopting predictive processing, had moved to a view different from the one he had spent the last twenty years defending. The worries are understandable, and I too had them before reading, but I do think Clark pulled it off without adding ad hoc commitments. Here is why.
One of the issues at hand is the use of representations. Embodied and situated cognition held as one of its central tenets that cognitive science had lost itself in the use of cognitive representations, and that the world itself could serve as its own best model. When Clark then claims that for every aspect of cognition the brain keeps statistical models of reality, this at first seems like a huge departure from situated approaches. But it is actually not so. First, these representations are nothing like symbolic stand-ins: they are not mirrors of reality, and there is no inner token for each outer stimulus. In Surfing Uncertainty, these statistical models keep information only about organism-relevant stimuli and events, generating predictions that enable the organism to select affordances. The word 'model' might also sound misleading here. A model airplane is a replica of a real airplane. A statistical model, however, bears a sort of morphism relation to its content; it does not replicate that content. Further, Clark argues, these statistical models do not target an organism-neutral world, nor even all the aspects that could be relevant to the organism. Second, unlike classical models, these representations are not stored in blocks and do not cause the overload that results in computational explosion; rather, Clark argues, these models have been studied mathematically, found to be extremely feasible, and implemented cheaply in computer simulations. And third, Andy Clark uses his keen wit to notice that there is a sense in which the world can be its own best model even if models are guiding perception, with no contradiction involved. The reason is that these models are not replacements for the world; instead they enable the agent to make the best use of what is available in the world. If you think about it, the world is actually not its own best model in a literal sense, because (unless you have very specific sensors, as insects do) most of what the world offers is irrelevant information for a given agent; just think of a loud, noisy city. There is a sense in which the world is bombarding us with bad information and noise, and that is why we need to surf such uncertainty. The sense in which 'the world is its own best model' is true is in fact kept by Clark: our prediction mechanisms are to be corrected at every millisecond by errors from the environment, so the environment truly is what shapes us, but we need to let the right information shape us, not just any irrelevant information from the environment. Generative models are precisely what permit us to be tuned to the relevant environment, a fabulous twist.
Another issue is that of the implied metaphysics. If our systems only get information (error) relative to predictions, does that imply indirect perception? Clark's answer is yes and no, or 'non-indirect perception'. The worry of critics of indirect perception is that we might be locked away from the true world itself. The point, once again, is both that we need these mechanisms to engage the relevant world and that the world itself, free from the agent's intentional perception, is senseless; we would not want to be seeing that. When we go to the stadium, predictive processing is what enables us to see a soccer game rather than physical objects colliding. So Clark argues that perception cannot be direct, since it is mediated by expectations, but no further worry need be pursued about 'losing the world', much to the contrary.
Finally, there is the embodied coupling of perception and action. This is achieved in predictive processing because actions are a consequence of proprioceptive perception, and also because action is a means of reducing prediction error by directing what sort of stimuli will perturb the sensory system. To solve a jigsaw puzzle, therefore, we need to actively engage the pieces with our hands, rotating, moving and organizing them, and in every such attempt action frames the sort of stimuli perception will receive, choosing which 'shots' of the world are taken. This interplay between action, body and world is what solves a tough jigsaw puzzle; one cannot succeed just by staring at it and thinking.
Surfing Uncertainty shows how embodied proposals about the mind can assume diverse shapes. It might not represent many theorists in the embodied camp; however, if embodied proposals about the mind are to be of any relevance to cognitive science, then some such formulation is needed. Embodied proposals need to move beyond purely phenomenological, ecological and even systemic formulations toward cognitive science proper. Theorists need to propose cognitive accounts (or at least some relation to one), and Andy Clark is one of the few who have truly embraced that challenge; this book is his most recent (and most ambitious) effort to do so.
What Clark seems to be teaching the cognitive science community with this impressive twist is that the embodied and situated movement is not so much about determining which precise mechanisms cognition uses as about suggesting what cognition is for and what it does: engaging and communicating at every millisecond in synchrony with a structured, relevant world. That the movement can take diverse shapes is not the same as saying it has nothing to offer; the proof is that applying embodied and situated cognition in this book completely remodels the received view of predictive coding models as ones that block agents from the environment by installing a rich internal world model as a substitute.
Obviously there is much in the book that this review has not been able to cover, such as its treatment of omissions and empty spaces, enactivism, the extended mind, scaffolding, cognitive penetration, illusions, pathologies, the dark room problem, free energy, and the scientific evidence for these claims. The good part of having so much to say is that you open many doors; at the same time, you cannot go deep into everything, because you will lack space. At times the book does feel as if it is lacking space. Once again, this is remedied with references that stimulate further reading.
In conclusion, yes, Clark was very successful in all three of his major goals; any reader must agree that the main fault of this book is that it forces you to read more, study more, and continue on this endless journey of research.
Samuel Bellini-Leite
Philosophy Department
Universidade Federal de Minas Gerais (UFMG)
Belo Horizonte, Brazil
