
Behavioural Brain Research 261 (2014) 328–335


Research report

Observed manipulation of novel tools leads to mu rhythm suppression over sensory-motor cortices

Norma Naima Rüther a,b,∗, Elliot Clayton Brown b,c, Anne Klepp d, Christian Bellebaum e
a Institute of Cognitive Neuroscience, Department of Neuropsychology, Faculty of Psychology, Ruhr-University Bochum, Universitaetsstrasse 150, 44801 Bochum, Germany
b International Graduate School of Neuroscience, Ruhr-University Bochum, Universitaetsstrasse 150, 44801 Bochum, Germany
c Maryland Psychiatric Research Centre, University of Maryland School of Medicine, 55 Wade Avenue, Baltimore, MD 21228, USA
d Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University Düsseldorf, Universitaetsstrasse 1, 40225 Düsseldorf, Germany
e Institute of Experimental Psychology, Heinrich Heine University Düsseldorf, Universitaetsstrasse 1, 40225 Düsseldorf, Germany

Highlights

• Observational learning of tool manipulation leads to mu rhythm modulation.
• Similar modulation is seen for visual exploration of tools.
• Mu rhythm modulation takes place over sensory-motor cortices.
• The effect is seen within 200 ms after tool picture presentation.

Article info

Article history:
Received 5 November 2013
Received in revised form 19 December 2013
Accepted 23 December 2013
Available online 3 January 2014

Keywords:
Tools
Mu rhythm
Observation
Action
Affordance
Training

Abstract

Tool stimuli can be analyzed based on their affordance, that is, their visual structure hinting at possible interaction points. Additionally, familiar tools can initiate the retrieval of stored object–action associations, providing the basis for meaningful object use. The mu rhythm within the electroencephalographic alpha band is associated with sensory-motor processing and has been shown to be modulated by the sight of familiar tool stimuli, suggesting motor cortex activation based on either affordance processing or access to stored conceptual object–action associations. The current study aimed to investigate the impact of such associations, acquired by observation of manipulation, in a training study controlling for inherent object affordances and pre-existing individual differences in object-related experience. Participants observed the manipulation of one set of novel tool objects and visually explored a second set of novel tools for which only functional information was provided. In contrast to non-trained objects, observed objects modulated the mu rhythm over left sensory-motor cortex within 200 ms of picture presentation after training. Additionally, both observed and visually explored objects modulated the mu rhythm over right sensory-motor cortex in the same time window to some extent, with the effect being stronger for the latter. This result suggests that motor cortex activation during visual processing of tools can result from observation of tool manipulation. However, mu rhythm modulation, albeit with a different and less clearly left-lateralized pattern, is also seen when the tools were only made visually familiar and when information was restricted to the tools' function.

© 2014 The Authors. Published by Elsevier B.V. Open access under CC BY-NC-SA license.

∗ Corresponding author at: Institute of Cognitive Neuroscience, Department of Neuropsychology, Ruhr-University Bochum, Universitaetsstrasse 150, D-44801 Bochum, Germany. Tel.: +49 234 32 23174; fax: +49 234 32 14622.
E-mail addresses: naima.ruether@rub.de, norma.ruether@rub.de (N.N. Rüther).

1. Introduction

Tools represent a special class of object, as they are defined by the way one interacts with them and the purpose they serve. Hence, when seeing manipulable tools, information about potential action interaction points, or affordances [1], is extracted. Furthermore, familiar tools can initiate the retrieval of stored information about associations between the object, an object-related action and an action goal, which is based on previous object interaction experience [2]. Several functional neuroimaging studies have found activations of a frontoparietal network at the sight of familiar tools, most likely representing the stored motor action representation associated with the tool (for review, see [3–6]). Furthermore, behavioral studies have shown that the mere sight of tools can prime, or potentiate, motor responses to object parts affording an action through motor facilitation, therefore supporting the idea of an automatic extraction of affordances when seeing manipulable objects. Together, these findings suggest that action information can be an integral part of object representations.
One potential mechanism for this is described in the Two Action Systems Model [2], which proposes that the action system is separable into two functionally and anatomically distinguishable units, a Structural System and a Functional System. The Structural System is devoted to an online analysis of visual object structure for rapid interaction and is proposed not to be exclusively cognitive. During repeated and skilled interaction with objects, a Functional System extracts the characteristics of an action that remain constant across object-directed interactions, hence providing the basis for the formation of conceptual object representations. During manual object interaction, the extracted functional knowledge and object-associated actions thus become integral parts of conceptual representations, which do, however, comprise different types of knowledge about an object in different modalities. A key component in the evolution of human tool use is the capacity to learn through the observation of others performing an action [7]. This suggests that the formation of mental representations of tools is not only bound to direct interactive experience with the object, but can also result from indirect experience through the observation of others' interactions with the object. It has been shown that certain neurons in area F5 of the monkey cortex, called mirror neurons, discharge during the observation of others performing object-related actions [8–10], a property that has been proposed to represent a direct matching mechanism by which the observed motor action is mapped onto the observer's internal motor representations [11,12].
Electroencephalography (EEG) studies investigating sensory-motor processing in humans have analyzed the mu rhythm, which refers to oscillatory brain activity in the alpha and beta band (8–12 Hz and 20 Hz) at electrodes placed over the sensory-motor cortices. Event-related desynchronization (ERD) of the oscillatory neuronal firing that underlies the mu rhythm can, for example, be found during movement execution (e.g., [13–15]), but also during the observation of movements (e.g., [16,17]) and movement imagery (e.g., [18]). These properties of the mu rhythm show that the motor system can also be engaged in the absence of active action execution, and some researchers argue that this activity partly represents an index of the mirror neuron system [19]. Furthermore, findings on the suppression of the mu rhythm during object-related as opposed to meaningless actions have been interpreted as activation of goal-directed motor programs related to the mirror neuron system [16]. Interestingly, the simple viewing of tool stimuli, even without the demand to interact, can also lead to mu rhythm suppression between 140 ms and 300 ms after stimulus presentation [20], possibly reflecting automatic access to object-associated actions. Further support for this comes from early event-related potentials (ERPs) during the processing of tools, for which sources were described in the left postcentral gyrus and bilateral premotor cortex. However, so far it is not clear whether the modulation of the mu rhythm at the sight of tool stimuli relates to simple visual affordance processing or to experience-dependent object–action associations that can be acquired during observation of object manipulation. As mu suppression has been found both for action execution and for the sight of known objects [13–15,20], it seems likely that active object experience plays a role in the activation of motor cortex during visual object processing. While mu suppression was also seen during the observation of meaningful actions [16,17,21], it is not clear what effect the observation of object interactions has on the formation of object–action associations. The mirror neuron hypothesis [8–10] and related simulationist theories [22] posit that direct motor experience may not be necessary to form such associations; rather, observation of object-related motor actions may be sufficient. However, this topic is still hotly debated.
A major problem with using familiar tools in experiments addressing experience effects is that interindividual differences in previous knowledge cannot be controlled for. Recently, several training studies have used novel objects to overcome this problem (e.g., [23–26]). In these studies, subjects systematically received different types of object-related experience. Even though the object-related knowledge gained cannot be compared to existing conceptual representations, training studies can lead to important insights, providing a model for the mechanisms involved in the emergence of new object representations. One such study found that early activation over sensory-motor and occipito-parietal brain regions during object processing only occurred in the condition in which object-related movements had been previously trained through pantomimed actions, as compared to training with non-functional actions that were not relevant to the object being seen [25]. The authors concluded that object processing can be altered with respect to the specific sensory-motor interactions with objects during knowledge acquisition.
In the current study, novel, tool-like objects ([23]; similar to the ones used by Weisberg et al. [24]) were used to investigate whether observing the manipulation of objects had a differential impact on neural correlates of object–action associations, as reflected in the sensory-motor mu rhythm, when compared to mere learning about object function without seeing the object being used. Each participant received three sessions of object-related training on three different days. For one set of objects, participants observed an experimenter manipulating the invented, unfamiliar tools during training (observation of manipulation training objects, OBS). For another set of invented and unfamiliar tool objects, participants were informed about the tools' function but only visually explored the objects (visual training objects, VIS). Finally, a third object set served as an untrained control set (not trained objects, NO). Before and after training, participants performed a visual matching task during which photographs of all tools were shown and brain activity was measured with EEG.
We expected a post-training modulation of ERD in the sensory-motor mu rhythm depending on the specific training experience with the objects. More specifically, we predicted that the processing of objects that had been observed being manipulated (OBS) would lead to larger mu rhythm suppression compared to objects that had not been seen being manipulated (VIS and NO).
2. Materials and methods
2.1. Participants
Twenty-one right-handed students (11 women; mean age 25.33 years, SD = 3.71, range = 20–34) with normal or corrected-to-normal vision participated in the study. No participant had a history of neurological or psychiatric disease. All were informed about the testing procedure and gave written informed consent. Participants received either course credit or financial reimbursement. The study was approved by the Ethics Committee of the Medical Faculty at Ruhr University Bochum, Germany.
2.2. Overall procedure
Using a children's construction toy (K'nex™), similar to the studies in [23,24], 36 novel objects were constructed. Participants completed a visual picture matching task with the objects before and after training, during which brain activity was recorded with EEG. Training took place in three sessions on three different days with some of the invented, unfamiliar objects. The average time span between the first and the last training session was 8.1 days (SD = 3.7), whereas the mean interval between the last training session and the post-training EEG assessment was 3.2 days (SD = 2.5). The mean interval between the first training session and the second EEG session was 10.7 days (SD = 4.4).
2.3. Novel object stimuli
Each novel object served one of six possible functions related to a specific manipulation (separation, pushing, tagging, crushing, transporting and moving other small objects, e.g., paper rolls or table tennis balls). Every object had at least one moveable part or a handle that could be used for the specific type of manipulation related to the object's function.
A separate group of subjects (N = 33) evaluated each object in terms of singularity, visual complexity and possible real-object-associated affordances (how much the object resembles a known object and which affordances it raises) with a structured questionnaire. Singularity was assessed by asking how much an object stands out compared to the other objects (on a 7-point scale), which other object was most similar to a given object and how strong the similarity was (again on a 7-point scale). Participants also rated the visual complexity of the objects ("How complex is the object visually?") on a scale from 1 (low visual complexity) to 7 (high visual complexity). Additionally, participants rated how much the object resembled a real object and, if so, how strong this association was. Finally, participants were asked to write down whether they could think of a specific function the depicted object might have. Based on these ratings, the 36 objects were divided into three object sets of 12 objects each, matched with respect to the assessed parameters.
2.4. Training sessions
The training sessions lasted 80 min each and served to induce object representations in the participants. Only two of the three object sets were used for training, whereas the third object set served as a control condition (NO) and was only included in the visual matching task. Training for the two trained object sets differed in order to induce qualitatively different types of object representations. One set of objects (VIS) was part of a visual training, while the second object set (OBS) was part of an observation of manipulation training. Hence, after three training sessions, each participant had completed three visual trainings with 12 objects and three observation of manipulation trainings with 12 different objects. Each object appeared once in each of the three training sessions. Assignment of the object sets to the training conditions was counterbalanced and randomized across participants.
2.4.1. Observation of manipulation training
In the observation of manipulation training, the manipulation of each object was demonstrated by an experimenter; no participant executed the manipulation him-/herself. Each object was placed on a table in front of the participants while the experimenter named the specific function of the object. The experimenter then explained the discrete steps of manipulation with a standardized verbal description while simultaneously demonstrating each step, thereby illustrating the intended function of the object. Subsequently, the object was manipulated by the experimenter for 90 s while the participants were instructed to watch carefully. To ensure that all participants were attending to the manipulation, they were instructed to silently count how often each object was manipulated within the 90 s.

Fig. 1. Example trials of the visual matching task.
2.4.2. Visual training
For the objects with visual training, each object was placed on the table in front of the participants, one at a time, and the specific function of the object was named, but without any indication of how the object could be manipulated. Participants then had 90 s to visually explore and simultaneously describe the object. As in the observation of manipulation training, participants were not allowed to touch the objects. In cases where participants were not able to fill the required 90 s with verbal description, the experimenter prompted a more detailed description of the object by asking specific questions about it (e.g., "How many of these yellow parts does the object have?").
2.5. The matching task
Before and after training, participants performed an identical visual matching task (similar to [23,24]) with pictures of the invented tool-like objects, to investigate training-induced changes in object processing (see Fig. 1). The matching task was presented on a computer screen. Each trial started with an exclamation mark of varying duration (600–1600 ms), followed by a fixation cross presented for 500 ms. A photograph of one of the objects was then shown for 1000 ms, followed by another fixation cross of varying duration between 500 and 700 ms. Afterwards, a second photograph appeared for 1000 ms, showing an object from a different perspective. Participants were then required to respond within 1500 ms (left or right button press) to indicate whether the two photographs showed the same object or not.
The task consisted of 3 blocks of 72 trials each (216 trials in total). On each trial, two photographs were shown, amounting to 432 picture presentations in total (12 per object). Half of the trials showed the same object on both photographs. For non-matching photographs, both objects were from the same training group. To account for possible repetition effects, each object was photographed from four different perspectives (see also [23]). Hence, each individual picture was shown three times, with the pictures being randomly assigned to either the first or second presentation within a trial. The Presentation software was used to control stimulus timing and response recording (version 13.0, Neurobehavioral Systems, Albany, CA, USA).
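
To make the trial structure concrete, the sketch below generates a 216-trial list consistent with the constraints described above (half match trials, mismatch pairs drawn from the same training group, four perspectives per object). Object identifiers and perspective labels are invented, and the exact counterbalancing used in the study (each individual picture shown exactly three times) is not enforced here.

```python
import random

# Illustrative stimulus structure: 36 objects in three training groups,
# each photographed from four perspectives (all identifiers are hypothetical).
groups = {"OBS": [f"obs_{i:02d}" for i in range(12)],
          "VIS": [f"vis_{i:02d}" for i in range(12)],
          "NO":  [f"no_{i:02d}" for i in range(12)]}
perspectives = ["p1", "p2", "p3", "p4"]

def build_trials(groups, n_match=3, n_mismatch=3, seed=0):
    """Build match and mismatch trials; mismatching objects always come
    from the same training group, as in the task described above."""
    rng = random.Random(seed)
    trials = []
    for group, members in groups.items():
        for obj in members:
            for _ in range(n_match):
                first, second = rng.sample(perspectives, 2)
                trials.append((group, (obj, first), (obj, second), "match"))
            for _ in range(n_mismatch):
                other = rng.choice([m for m in members if m != obj])
                trials.append((group, (obj, rng.choice(perspectives)),
                               (other, rng.choice(perspectives)), "mismatch"))
    rng.shuffle(trials)
    return trials

trials = build_trials(groups)
print(len(trials))   # 216 trials, i.e. 3 blocks of 72
print(trials[0])     # e.g. ('VIS', ('vis_03', 'p2'), ('vis_07', 'p1'), 'mismatch')
```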
After the post-training EEG measurement, participants received a paper–pencil questionnaire including colored pictures depicting each of the 36 objects from one perspective. They were asked to indicate whether the depicted object was part of the observation of manipulation or visual exploration training condition (OBS, VIS) or whether it was not part of the training (NO). Participants indicated this by marking the respective training condition (observation of manipulation, visual exploration, or not part of the training) listed next to the object pictures. This procedure was used to ensure that subjects were able to distinguish between the objects and remembered their occurrence during training (see [24]).
2.6. EEG recording
EEG assessment during performance of the visual matching task took place in the psychophysiology laboratory of the Institute of Cognitive Neuroscience at Ruhr University Bochum, Germany. EEG was recorded at 30 scalp sites, with a sampling rate of 500 Hz, using a Brain Amp Professional Powerpack amplifier and the Brain Vision Recorder software (Brain Products, Munich, Germany). Using an elastic nylon electrode cap, AgCl electrodes were arranged according to the International 10–20 System (F7, F3, Fz, F4, F8; FT7, FC3, FCz, FC4, FT8; T7, C3, Cz, C4, T8; TP7, CP3, CPz, CP4, TP8; P7, P3, Pz, P4, P8; PO7, PO3, POz, PO4, PO8). Mastoid electrodes served as references and a ground electrode was attached to recording site FPz. All impedances were kept below 10 kΩ. Brain Vision Analyzer 2 software (Brain Products, Munich, Germany) and MATLAB (Mathworks, Natick, MA, USA) were used for off-line EEG data analysis.
2.7. Analysis of ERD/ERS
Electrodes of interest for the ERD/ERS analysis were at sites C3 (left hemisphere) and C4 (right hemisphere), since ERD at these sites is related to activation of primary sensory-motor cortex [15,27]. A Butterworth zero-phase filter (low cutoff 0.5305 Hz, time constant 0.300 s, 12 dB/oct; high cutoff 40 Hz, 12 dB/oct) was applied to the raw EEG data. Then, an Independent Component Analysis (ICA) was performed for each participant to remove eye blink artifacts from the raw data [28]. An ICA converts the EEG data into a matrix containing spatially fixed and temporally independent components, where the number of EEG channels matches the number of components. Eye-blink-related distortions are characterized by a symmetric, frontally pronounced positivity. The component showing these characteristics was removed from the data set by performing an ICA back-transform. Subsequent visual inspection of the data ensured that ocular artifacts were largely removed by this approach. By careful visual inspection, trials including muscular, movement-related or other artifacts at the electrodes of interest (C3, C4) were then removed from the data set. Data were then segmented into different conditions according to the object sets (i.e., OBS, VIS and NO). Only the first object picture per trial was analyzed, because we were mainly interested in modulations due to the different training conditions, while processing of the second photograph might have been confounded by task demands, e.g., the decision process, and by whether the two presented pictures represented matches or non-matches. A 1000-ms epoch before stimulus onset, during which the exclamation mark and fixation cross were presented, was taken as the reference interval for the ERD/ERS analysis. The main epoch used for analysis of the first picture was 1000 ms in duration and thus covered exactly the presentation time of the picture. The alpha frequency bands of 8–10 Hz and 10–12 Hz were analyzed, which both reflect sensory-motor processing in frontoparietal cortices [29], but can be differentiated by their somatotopic specificity for foot and hand movements. The lower mu rhythm does not show differences in modulation for finger versus foot movements over the motor hand or foot areas, hence representing a more widespread, non-specific activation. In contrast, the upper mu rhythm is characterized by a larger modulation during hand movements than during foot movements over the motor hand areas [14].
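
For illustration, the preprocessing and segmentation steps described above can be approximated with MNE-Python as sketched below. The original analysis was performed in Brain Vision Analyzer 2 and MATLAB, so the file name, the use of a frontal channel as an EOG proxy, the event codes and the rejection threshold are assumptions, not the authors' exact settings.

```python
import mne

# Load a BrainVision recording (file name is hypothetical).
raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)

# Band-pass filter roughly matching the reported 0.5305-40 Hz zero-phase filter.
raw.filter(l_freq=0.5305, h_freq=40.0)

# ICA-based removal of the eye-blink component (extended Infomax, cf. [28]);
# a frontal channel is used as an EOG proxy because no EOG channel is described.
ica = mne.preprocessing.ICA(n_components=20, method="infomax",
                            fit_params=dict(extended=True), random_state=42)
ica.fit(raw)
eog_inds, _ = ica.find_bads_eog(raw, ch_name="Fz")
ica.exclude = eog_inds
raw = ica.apply(raw)

# Segment -1.0 s to 1.0 s around onset of the first object picture of each trial;
# the pre-stimulus second serves as the ERD/ERS reference interval.
# The event codes for OBS, VIS and NO trials are assumed here.
events, _ = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id={"OBS": 1, "VIS": 2, "NO": 3},
                    tmin=-1.0, tmax=1.0, baseline=None,
                    picks=["C3", "C4"], preload=True,
                    reject=dict(eeg=150e-6))   # crude artifact rejection
```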


To compute ERD/ERS, the data were bandpass filtered and the amplitude samples were squared. The power samples were then averaged across all trials. ERD was calculated as a power decrease and ERS as a power increase relative to the reference interval [30]. The formula for the ERD/ERS calculation was [(power in the frequency band − power in the reference interval)/power in the reference interval × 100] [15]. After computation of ERD/ERS, the traces were smoothed using a moving average with a time window of 126 ms. Topographic maps were created using spherical spline interpolation in Brain Vision Analyzer 2 software (Brain Products, Munich, Germany).
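
The ERD/ERS computation described above can be summarized in a short function. The following NumPy/SciPy sketch illustrates the band-pass filtering, squaring, trial averaging, referencing and smoothing steps on simulated single-channel data; the filter implementation differs from the Butterworth zero-phase filter used in Brain Vision Analyzer, and all values in the example are simulated.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def erd_ers(epochs, sfreq, band, ref_window, smooth_ms=126):
    """Compute an ERD/ERS trace (%) relative to a pre-stimulus reference interval.

    epochs     : array (n_trials, n_samples) of single-channel EEG segments
    sfreq      : sampling rate in Hz (500 Hz in the study)
    band       : (low, high) frequency band in Hz, e.g. (8, 10)
    ref_window : (start, stop) sample indices of the reference interval
    """
    # Band-pass filter each trial and square the amplitude samples to obtain power.
    sos = butter(4, np.asarray(band) / (sfreq / 2), btype="bandpass", output="sos")
    power = sosfiltfilt(sos, epochs, axis=-1) ** 2

    # Average power across trials and express it relative to the reference interval:
    # negative values indicate ERD (power decrease), positive values ERS.
    mean_power = power.mean(axis=0)
    ref_power = mean_power[ref_window[0]:ref_window[1]].mean()
    trace = (mean_power - ref_power) / ref_power * 100.0

    # Smooth with a moving average (~126-ms window, as reported above).
    win = max(1, int(round(smooth_ms / 1000 * sfreq)))
    return np.convolve(trace, np.ones(win) / win, mode="same")

# Example on simulated data: 40 trials of 2 s at 500 Hz; the first second
# (before picture onset) serves as the reference interval.
rng = np.random.default_rng(0)
simulated = rng.standard_normal((40, 1000))
print(erd_ers(simulated, sfreq=500, band=(8, 10), ref_window=(0, 500)).shape)  # (1000,)
```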
2.8. Statistical analysis
Statistical analyses were computed with IBM SPSS Statistics 20.0.0. Mean reaction times and accuracy in the visual matching task were analyzed with a 2 × 3 repeated-measures analysis of variance (ANOVA) comprising the factors TIME (pre, post) and STIMULUS TYPE (OBS, VIS, NO). Reaction times were averaged across match and mismatch trials and across correct and wrong responses. As for the reaction time data, accuracy was analyzed across match and mismatch trials.
The ERD/ERS analysis was conducted separately for epochs of 200 ms within the 1000 ms following stimulus presentation, yielding five 200-ms epochs per trial (see Fig. 3). A repeated-measures ANOVA comprising the factors TIME (pre, post), STIMULUS TYPE (OBS, VIS, NO) and ELECTRODE (C3, C4) was then conducted for each 200-ms epoch within the specific frequency bands of 8–10 Hz and 10–12 Hz. Results for all main effects and interactions including at least one of the factors TIME or STIMULUS TYPE are reported. Of particular interest were those interaction effects including both factors TIME and STIMULUS TYPE. Paired t-tests were used to resolve interactions. Greenhouse–Geisser-corrected results and degrees of freedom are reported in case of significant violations of sphericity, as measured by Mauchly's test [31].
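
The statistical model can be illustrated as follows. The study used IBM SPSS 20; the Python sketch below sets up the same 2 × 3 × 2 repeated-measures design on simulated ERD values (column names and values are invented). Note that statsmodels' AnovaRM does not apply the Greenhouse–Geisser correction, which in the study was applied when Mauchly's test indicated a sphericity violation.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy.stats import ttest_rel

# Simulated long-format data: one mean ERD value per subject, time point,
# stimulus type and electrode for a single 200-ms epoch.
rng = np.random.default_rng(1)
rows = []
for subject in range(1, 22):                      # 21 participants
    for time in ("pre", "post"):
        for stimulus in ("OBS", "VIS", "NO"):
            for electrode in ("C3", "C4"):
                rows.append({"subject": subject, "time": time,
                             "stimulus": stimulus, "electrode": electrode,
                             "erd": rng.normal(-10, 20)})
df = pd.DataFrame(rows)

# Omnibus three-way repeated-measures ANOVA (TIME x STIMULUS TYPE x ELECTRODE).
print(AnovaRM(df, depvar="erd", subject="subject",
              within=["time", "stimulus", "electrode"]).fit())

# Example follow-up: paired t-test OBS vs. NO at C3 after training.
post_c3 = df.query("time == 'post' and electrode == 'C3'")
obs = post_c3.query("stimulus == 'OBS'").sort_values("subject")["erd"].to_numpy()
no = post_c3.query("stimulus == 'NO'").sort_values("subject")["erd"].to_numpy()
print(ttest_rel(obs, no))
```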
3. Results
3.1. Behavioral data
3.1.1. Performance in the matching task
Mean accuracy and mean reaction times (in ms) in the matching task before and after training are shown in Table 1. An ANOVA with the factors STIMULUS TYPE (OBS, VIS, NO) and TIME (pre, post) for the reaction time data revealed a main effect of TIME (F(1,20) = 15.052; p = .001), showing that participants responded faster in the visual matching task after object training. No further significant effects for the reaction time data were found (all p > .360).
The corresponding ANOVA for the accuracy data yielded a significant main effect of TIME (F(1,20) = 16.615; p = .001), showing that participants were overall more accurate in the post-training compared to the pre-training session. A significant main effect of STIMULUS TYPE (F(2,40) = 4.733; p = .014) indicated differences in accuracy between object types, and post hoc t-tests revealed that participants performed generally less accurately for NO in comparison to VIS (t(20) = 3.34; p = .003) and in comparison to OBS (t(20) = 2.41; p = .026). No significant difference in accuracy between OBS and VIS was found (p = .324). Furthermore, no significant interaction of the two factors was found (p = .420).

Table 1
Mean RT (ms) and accuracy (%) in the visual matching task (standard deviations in parentheses).

                      NO             VIS            OBS
RTs (ms)
  Before learning     518 (117)      518 (104)      521 (116)
  After learning      480 (135)      467 (125)      472 (128)
Accuracy (%)
  Before learning     92.53 (4.27)   93.52 (4.10)   93.52 (4.42)
  After learning      94.91 (3.18)   97.42 (2.25)   96.76 (3.12)
3.1.2. Assignments of objects to the training condition
After the second EEG session was completed, participants correctly assigned on average 10.95 OBS (SD = 1.16), 10.81 VIS (SD = 2.25) and 11.14 NO objects (SD = 1.01) to their training condition (maximum score 12 per object set). No difference between object sets with respect to assignment performance was found (p = .701).
3.2. ERD/ERS analysis
3.2.1. Analysis of the lower mu frequency band of 8–10 Hz
Fig. 2 displays the grand average ERD/ERS traces for the pre- and post-training sessions at electrodes C3 (A) and C4 (B) during performance of the visual matching task for OBS, VIS and NO.
In the frequency band of 8–10 Hz, none of the main effects or two-way interactions reached significance in any of the time windows (all p > .05). However, a significant three-way interaction was found in the time window of 0–200 ms (F(2,40) = 3.460; p = .041; all p > .05 for the other time windows). To resolve this interaction, one-factorial ANOVAs with the factor STIMULUS TYPE were conducted, separately for electrodes C3 and C4 and for the pre- and post-training acquisitions. This resolution revealed no pre-training differences between stimulus types at the C3 and C4 electrodes (both p > .400). At electrode C3, a significant linear trend for differences between stimulus types emerged in the post-training data (F(1,20) = 4.735; p = .042). Maximal ERD was seen for OBS (mean = 19.83% ERD/ERS; SD = 20.95), followed by VIS (mean = 13.48% ERD/ERS; SD = 20.57) and NO (mean = 9.45% ERD/ERS; SD = 21.37). When single conditions were compared directly by means of post hoc t-tests, a significant difference was found only between OBS and NO (t(20) = 2.18; p = .042), but not between OBS and VIS or between VIS and NO (both p > .189).
A general post-training effect of STIMULUS TYPE was also found at C4 (F(2,40) = 3.350; p = .045), with a different overall pattern. Here, there was no clear relationship between the type of training and the ERD. Instead, ERD for VIS and OBS was, at least descriptively, comparable. When compared statistically, ERD for VIS (mean = 17.50% ERD/ERS; SD = 19.29) was significantly larger compared to NO (mean = 7.29% ERD/ERS; SD = 18.80; t(20) = 2.76; p = .012). For OBS (mean = 13.95% ERD/ERS; SD = 17.76), ERD was numerically larger than for NO, but this difference was not significant (p = .116). Likewise, VIS and OBS did not differ significantly from each other (p = .413). No significant effects of interest were found in the remaining time windows (200–400 ms, 400–600 ms, 600–800 ms and 800–1000 ms) at 8–10 Hz (all p > .326). Topographic maps of ERD activity in the first 200 ms before and after training are displayed in Fig. 3, showing that OBS led to a large ERD pronounced at scalp regions over the left-central, primary sensory-motor cortex.
To exclude the possibility that there was a link between the lower accuracy for NO in the post-training EEG session and the ERD effect for the different object types, a correlation analysis between accuracy values and post-training ERD was performed. The accuracy data were not normally distributed (as defined by the Kolmogorov–Smirnov test for normality; all p < .05). The post hoc bivariate Spearman analysis revealed no significant correlations between ERD at electrode C3 and post-training accuracy in the matching task (all p > .339). Additionally, there was no significant correlation between the magnitude of ERD and the time span in which the three training sessions took place (time interval between the first and last training session), between ERD and the time interval between the first day of training and the post-training EEG assessment, or between ERD and the time span between the last training session and the post-training EEG assessment (all p > .105).
3.2.2. Analysis of the upper mu frequency band of 10–12 Hz
No significant effects of interest were found in any time window in the frequency band of 10–12 Hz (all p > .05).
4. Discussion
The present study aimed at elucidating the impact of observational learning of manipulation on the processing of pictures of invented, previously unfamiliar tools. In three training sessions, participants observed one set of the invented tools being manipulated (OBS) and visually explored a second set for which they were only informed about the tools' functions but not about the concrete manipulation (VIS). A third set served as an untrained control (NO) and was not part of the training. In sum, our results demonstrate that the different types of training had a differential modulatory effect on sensory-motor cortex activation when seeing the tools after training. More specifically, at scalp regions over left sensory-motor cortex this modulation was characterized by a linear trend of mu ERD depending on the type of object experience (NO, VIS and OBS, respectively), whereby seeing non-trained objects induced the least ERD and objects whose manipulation had been observed induced the greatest activity. Over right sensory-motor cortex, significant mu rhythm modulation by training was also found, with significantly larger ERD for VIS in comparison to NO.
It is now generally accepted that conceptual representations involve different types of knowledge related to specific modalities [32]. For tools, associations with manipulation and function play a particularly important role. The role of experience in the emergence of new representations is a matter of ongoing debate. The present study addressed this issue by systematically varying the type of object-related experience. Although it is not possible to induce highly elaborated object representations in just three training sessions, the design with previously unfamiliar, manipulable objects [23,24] allowed insights into an important mechanism involved in the emergence of tool representations, that is, the learning of object–action associations. The current study is, to our knowledge, the first to provide evidence for an effect of the type of object-related experience on object processing by motor-related brain structures, as reflected by the mu rhythm. The finding of mu rhythm modulation for tools whose manipulation had been observed is consistent with studies on the processing of familiar tools, which also showed early mu rhythm suppression [20]. For the OBS objects of the present study, observing the concrete steps of manipulation could have led to the formation of action representations and motor plans that were associated with the objects. There is evidence that the mu rhythm reflects general motor-related activity: during action imitation and observation, mu rhythm modulation correlates with BOLD responses in a rather broad cortical network consisting of the inferior parietal lobe, premotor and frontal cortex, as well as medial frontal cortex, thalamus, temporal lobe and cerebellum [19].
Fig. 2. Grand average training-specific ERD traces measured at scalp regions over left sensory-motor cortex (A) and right sensory-motor cortex (B), before and after training. OBS = observation of manipulation training objects, VIS = visual training objects, NO = not trained objects.

A possible mechanism underlying the effect of manipulation observation on the subsequent processing of tools might relate to the human mirror neuron system (hMNS). It has been shown that certain neurons in area F5 of the monkey cortex, called mirror neurons, discharge during the observation of others performing object-related actions [8–10]. This property has been proposed to represent a direct matching mechanism by which the observed motor action is mapped onto the observer's internal motor representations, consequently activating them [11,12]. During the observation of manipulation training in the present study, mirror neurons might have been activated, finally leading to the activation of the motor system elicited by the mere sight of the tool objects. Even though some studies suggest that the mu rhythm is related to activation of the hMNS ([16]; for review, see [29]), the link of the present findings to the hMNS remains speculative. A further possible explanation of the mu modulation could relate to an involvement of the canonical neuron system, which was shown in the monkey cortex to respond to graspable objects per se and during self-executed object-directed grasping [33]. A partial involvement of the canonical neuron system seems likely, but no direct conclusions can be drawn from the current design. Future studies should investigate in more detail where the mu rhythm modulation evoked by tool stimuli originates.

Fig. 3. Topographic maps of grand average training-specific ERD. Displayed is the activity within 0–200 ms before training (A) and after training (B) for OBS, VIS and NO. The rectangle marks the analyzed electrodes at scalp regions over the left and right sensory-motor cortices. OBS = observation of manipulation training objects, VIS = visual training objects, NO = not trained objects.
Concerning the timing, the early effect within 200 ms is in line with the effect reported by Proverbio [20], who found mu rhythm desynchronization for familiar tools within 140–175 ms. Furthermore, the left lateralization of the specific effect for OBS is in line with previous studies investigating conceptual representations induced via active manipulation experience [23,24], with studies investigating the processing of familiar tools [4,20,34,35] and with some studies on the observation of object-directed grasping (e.g., [36]). For example, a recent ERP study showed a similar left-hemispheric asymmetry for the visual processing of tools: seeing unimanual and bimanual tools activated the left premotor cortex, as indicated by source analysis [35]. There is also evidence that representations of motor action planning are largely left-lateralized (for review, see [37,38]). Additionally, the left lateralization of the presumably generated action representations and motor plans is in accordance with the Functional System of the Two Action Systems approach [2], which is proposed to calculate and store conceptual representations comprising features of actions, creating associative action–object links. Therefore, the effect for OBS found at scalp regions over left sensory-motor cortex in the present study supports the interpretation of an activation of action representations and motor plans at the sight of objects for which manipulation was learned by observation.
For VIS, information about the function of the objects was provided verbally during training, possibly leading to associations between the object and functional, goal-related semantic information. The activation of the motor cortex in response to VIS after training corroborates recent findings of Cross et al. [39], who showed that both observation of knot tying and linguistic training can induce object representations. However, they found activation for linguistically induced representations of how to tie knots in the parietal cortex and not, as suggested by the pattern found in the current study, in more anterior regions. The activation observed in the present study might be triggered by the formation of predictive models of motor plans generated in response to the semantic information. Studies have shown that the mu frequency can be modulated by reading sentences relating to hand actions [40] and that manual action training can improve linguistic understanding of semantically related sentences [41], demonstrating a close relation between the semantic and the motor system and between different types of access to the semantic system. The predictive action plans induced by VIS appear, however, to differ from the motor plans elicited by OBS, as indicated by the different topographies of the OBS and VIS training effects.
The effect for OBS and VIS measured at scalp regions over sensory-motor cortices was restricted to the 8–10 Hz frequency band. Contrary to our finding, the effect found in the Proverbio [20] study was seen only in the 10–12 Hz frequency band. It is conceivable that the processing of familiar and previously unfamiliar tools recruits, at least to some extent, different mechanisms. A possible explanation is provided by a study of Pfurtscheller et al. [14], who argue for a dissociation between the lower (8–10 Hz) and the upper mu rhythm (10–12 Hz). Both frequency bands show the typical desynchronization before movement initiation, but the lower mu rhythm shows no somatotopic specificity, i.e., it is desynchronized by either finger or foot movements, whereas the topography of the upper frequency band differs for finger versus foot movements and is also modulated by movement imagination [18]. The authors suggest that the somatotopically unspecific effect within the lower mu rhythm reflects an unspecific activation of the motor cortex independent of the actual execution of specific movements, maybe representing a presetting mechanism of motor neurons that are part of the activated representation areas. Hence, the effect observed by Proverbio [20] could rely on the activation of specific motor plans evoked by the sight of the familiar tool stimuli, whereas the objects of the present study elicited a more unspecific activation of sensory-motor cortex.
In sum, the present study showed that the perception of unfamiliar tools whose manipulation had previously been observed, and which were hence associated with concrete actions, led to mu suppression at scalp regions over left sensory-motor cortex within the first 200 ms after stimulus presentation. This effect suggests sensory-motor cortex activation as a result of indirect manipulation experience with an object, independent of object affordances, possibly related to stored representations of object–action associations. Additionally, the perception of objects that were only visually explored, and for which information about the specific function was provided verbally, also led to mu rhythm suppression. For those objects, semantic information might have led to the formation of predictive motor plans related to the provided general tool-associated function.

Acknowledgment
The authors thank the German Research Foundation for supporting this work (Deutsche Forschungsgemeinschaft, DFG; SFB 874/TP
B6 to C.B.).

References
[1] Gibson JJ. The theory of affordances. In: Shaw R, Bransford J, editors. Perceiving, acting and knowing: toward an ecological psychology. Hillsdale, NJ: Erlbaum; 1977. p. 67–82.
[2] Buxbaum LJ, Kalénine S. Action knowledge, visuomotor activation, and embodiment in the two action systems. Ann N Y Acad Sci 2010;1191:201–18.
[3] Lewis JW. Cortical networks related to human use of tools. Neuroscientist 2006;12(3):211–31.
[4] Chao LL, Martin A. Representation of manipulable man-made objects in the dorsal stream. Neuroimage 2000;12(4):478–84.
[5] Creem-Regehr SH, Lee JN. Neural representations of graspable objects: are tools special? Brain Res Cogn Brain Res 2005;22(3):457–69.
[6] Martin A, Wiggs CL, Ungerleider LG, Haxby JV. Neural correlates of category-specific knowledge. Nature 1996;379(6566):649–52.
[7] van Schaik CP, Deaner RO, Merrill MY. The conditions for tool use in primates: implications for the evolution of material culture. J Hum Evol 1999;36:719–41.
[8] di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G. Understanding motor events: a neurophysiological study. Exp Brain Res 1992;91(1):176–80.
[9] Gallese V, Fadiga L, Fogassi L, Rizzolatti G. Action recognition in the premotor cortex. Brain 1996;119:593–609.
[10] Rizzolatti G, Fadiga L, Gallese V, Fogassi L. Premotor cortex and the recognition of motor actions. Cogn Brain Res 1996;3(2):131–41.
[11] Iacoboni M, Woods RP, Brass M, Bekkering H, Mazziotta JC, Rizzolatti G. Cortical mechanisms of human imitation. Science 1999;286(5449):2526–8.
[12] Rizzolatti G, Fadiga L, Fogassi L, Gallese V. Resonance behaviors and mirror neurons. Arch Ital Biol 1999;137(2–3):85–100.
[13] Chatrian GE, Petersen MC, Lazarte JA. The blocking of the rolandic wicket rhythm and some central changes related to movement. Electroencephalogr Clin Neurophysiol 1958;10(4):771–2.
[14] Pfurtscheller G, Neuper C, Krausz G. Functional dissociation of lower and upper frequency mu rhythms in relation to voluntary limb movement. Clin Neurophysiol 2000;111(10):1873–9.
[15] Babiloni C, Carducci F, Cincotti F, Rossini PM, Neuper C, Pfurtscheller G, et al. Human movement-related potentials vs desynchronization of EEG alpha rhythm: a high-resolution EEG study. Neuroimage 1999;10(6):658–65.
[16] Muthukumaraswamy SD, Johnson BW, McNair NA. Mu rhythm modulation during observation of an object-directed grasp. Brain Res Cogn Brain Res 2004;19(2):195–201.
[17] Perry A, Bentin S. Mirror activity in the human brain while observing hand movements: a comparison between EEG desynchronization in the mu-range and previous fMRI results. Brain Res 2009;1282:126–32.
[18] Neuper C, Scherer R, Wriessnegger S, Pfurtscheller G. Motor imagery and action observation: modulation of sensorimotor brain rhythms during mental control of a brain–computer interface. Clin Neurophysiol 2009;120(2):239–47.
[19] Braadbaart L, Williams JHG, Waiter GD. Do mirror neuron areas mediate mu rhythm suppression during imitation and action observation? Int J Psychophysiol 2013;89(1):99–105.
[20] Proverbio AM. Tool perception suppresses 10–12 Hz mu rhythm of EEG over the somatosensory area. Biol Psychol 2012;91(1):1–7.
[21] Muthukumaraswamy SD, Johnson BW. Primary motor cortex activation during action observation revealed by wavelet analysis of the EEG. Clin Neurophysiol 2004;115(8):1760–6.
[22] Gallese V, Goldman A. Mirror neurons and the simulation theory of mind-reading. Trends Cogn Sci 1998;2(12):493–501.
[23] Bellebaum C, Tettamanti M, Marchetta E, Della Rosa P, Rizzo G, Daum I, et al. Neural representations of unfamiliar objects are modulated by sensorimotor experience. Cortex 2013;49(4):1110–25.
[24] Weisberg J, van Turennout M, Martin A. A neural system for learning about object function. Cereb Cortex 2007;17(3):513–21.
[25] Kiefer M, Sim EJ, Liebich S, Hauk O, Tanaka J. Experience-dependent plasticity of conceptual representations in human sensory-motor areas. J Cogn Neurosci 2007;19(3):525–42.
[26] Creem-Regehr SH, Dilda V, Vicchrilli AE, Federer F, Lee JN. The influence of complex action knowledge on representations of novel graspable objects: evidence from functional magnetic resonance imaging. J Int Neuropsychol Soc 2007;13(6):1009–20.
[27] Kumar S, Riddoch MJ, Humphreys G. Mu rhythm desynchronization reveals motoric influences of hand action on object recognition. Front Hum Neurosci 2013;7:66.
[28] Lee TW, Girolami M, Sejnowski TJ. Independent component analysis using an extended Infomax algorithm for mixed subgaussian and supergaussian sources. Neural Comput 1999;11(2):417–41.
[29] Pineda JA. The functional significance of mu rhythms: translating "seeing" and "hearing" into "doing". Brain Res Brain Res Rev 2005;50(1):57–68.
[30] Pfurtscheller G, Aranibar A. Evaluation of event-related desynchronization (ERD) preceding and following voluntary self-paced movement. Electroencephalogr Clin Neurophysiol 1979;46(2):138–46.
[31] Greenhouse SW, Geisser S. On methods in the analysis of profile data. Psychometrika 1959;24(2):95–112.
[32] Kiefer M, Pulvermüller F. Conceptual representations in mind and brain: theoretical developments, current evidence and future directions. Cortex 2012;48(7):805–25.
[33] Murata A, Fadiga L, Fogassi L, Gallese V, Raos V, Rizzolatti G. Object representation in the ventral premotor cortex (area F5) of the monkey. J Neurophysiol 1997;78(4):2226–30.
[34] Proverbio AM, Adorni R, D'Aniello GE. 250 ms to code for action affordance during observation of manipulable objects. Neuropsychologia 2011;49(9):2711–7.
[35] Proverbio AM, Azzari R, Adorni R. Is there a left hemispheric asymmetry for tool affordance processing? Neuropsychologia 2013;51(13):2690–701.
[36] Grafton ST, Arbib MA, Fadiga L, Rizzolatti G. Localization of grasp representations in humans by positron emission tomography. 2. Observation compared with imagination. Exp Brain Res 1996;112(1):103–11.
[37] Haaland KY, Harrington DL. Hemispheric asymmetry of movement. Curr Opin Neurobiol 1996;6(6):796–800.
[38] Majdandzic J, Grol MJ, van Schie HT, Verhagen L, Toni I, Bekkering H. The role of immediate and final goals in action planning: an fMRI study. Neuroimage 2007;37(2):589–98.
[39] Cross ES, Cohen NR, Hamilton AFD, Ramsey R, Wolford G, Grafton ST. Physical experience leads to enhanced object perception in parietal cortex: insights from knot tying. Neuropsychologia 2012;50(14):3207–17.
[40] Alemanno F, Houdayer E, Cursi M, Velikova S, Tettamanti M, Comi G, et al. Action-related semantic content and negation polarity modulate motor areas during sentence reading: an event-related desynchronization study. Brain Res 2012;1484:39–49.
[41] Locatelli M, Gatti R, Tettamanti M. Training of manual actions improves language understanding of semantically related action sentences. Front Psychol 2012;3:547.
