Proceedings
Book of Abstracts
CD-ROM Proceedings
Edited by E. Cambouropoulos, C. Tsougras, P. Mavromatis, K. Pastiadis
School of Music Studies, Aristotle University of Thessaloniki
Thessaloniki, Greece, 23-28 July 2012
Proceedings of the ICMPC-ESCOM 2012 Joint Conference:
12th Biennial International Conference for Music Perception and Cognition
8th Triennial Conference of the European Society for the Cognitive Sciences of Music
Edited by: Emilios Cambouropoulos, Costas Tsougras, Panayotis Mavromatis, Konstantinos Pastiadis
ISBN: 978-960-99845-1-5
Copyright 2012 by E. Cambouropoulos, C. Tsougras, P. Mavromatis, K. Pastiadis
12th ICMPC - 8th ESCOM Joint Conference, Aristotle University of Thessaloniki, 23-28 July 2012
Dear delegates,

On behalf of the European Society for the Cognitive Sciences of Music, I would like to extend a warm welcome to all of you. I am very happy to see such an impressive number of delegates from all over the world. I know that some of you have had a very long journey, but I am sure you will not regret the effort. I have no doubt that this will be an inspiring and fruitful conference.

As you might suspect, the road to this conference was not always smooth. In 2009, when we decided Greece would be the next venue for the joint ESCOM/ICMPC conference, not even the Delphi oracle would have been able to predict the current economic crisis in Europe. Of course, we did briefly consider moving the conference to another country, but due to the generally tense economic situation in most European countries, this was not a realistic option. Eventually, the unexpected difficulties led to a very productive and personally enriching inner-European cooperation between ESCOM, DGM, and the ICMPC organizers. First of all, I want to thank the local team, Emilios Cambouropoulos, Costas Tsougras, and SYMVOLI, for persistently pursuing their vision of an international conference in this impressive setting. Secondly, I would like to express my sincere gratitude to the executive council of the German Society for Music Psychology (DGM), in particular to its president Andreas Lehmann and its treasurer Michael Oehler, for their cooperation with ESCOM and ICMPC in settling financial matters.

I hope that all of the delegates will leave the ESCOM-ICMPC 2012 conference and Thessaloniki fresh and brimming with new ideas, new friends, good experiences, life-enhancing impressions and optimism regarding the scientific and scholarly potential of the cognitive sciences of music.

Reinhard Kopiez, Professor of Music Psychology, Hanover University of Music, Drama and Media, Germany
ESCOM President
Dear delegates,

We would like to welcome all participants here in Thessaloniki for the joint meeting of the 12th International Conference on Music Perception and Cognition (ICMPC) and the 8th Triennial Conference of the European Society for the Cognitive Sciences of Music (ESCOM). The conference is organized by the School of Music Studies at the Aristotle University of Thessaloniki, and the European Society for the Cognitive Sciences of Music. This year's joint conference is the fourth joint international meeting of ICMPC and ESCOM, following the meetings in Liège, Belgium (1994), Keele, England (2000), and Bologna, Italy (2006).

Three years ago, at the urging of Irène Deliège, we decided to go ahead and make a petition for holding this international event in Thessaloniki. At that time, we could not imagine the financial turmoil this country would enter just a short time down the line. We are grateful to ESCOM, and above all to Reinhard Kopiez and Irène Deliège, for their steady support and encouragement throughout this long preparatory period. Many thanks are due to Andreas Lehmann and Michael Oehler (German Society for Music Psychology - DGM) for assisting us in securing a credible financial environment for the conference. We would also like to express our gratitude to the members of the international ICMPC-ESCOM 2012 Conference Advisory Board for trusting us, despite the negative international publicity surrounding the country.

The conference brings together leading researchers from different areas of music cognition and perception. A large number of papers, from a broad range of disciplines - such as psychology, psychophysics, philosophy, neuroscience, artificial intelligence, psychoacoustics, linguistics, music theory, anthropology, cognitive science, education - report empirical and theoretical research that contributes to a better understanding of how music is perceived, represented and generated. Out of 570 submissions, 154 papers were selected for spoken presentation and 258 for poster presentation. Additionally, five keynote addresses will be presented in plenary sessions by five internationally distinguished colleagues. The two SEMPRE-ICMPC12 Young Researcher Award winners for this year will also present their work in plenary sessions on Wednesday and Friday morning.
This year we have attempted to give poster presentations a more prominent position in the conference programme. Posters are organised thematically into speed poster sessions, where authors have the opportunity to present briefly the core points of their work orally to participants; these speed sessions will be followed by more relaxed presentations and discussions in front of the posters in the friendly environment of the main venue hall. The speed poster presentations are held mostly in the morning, giving time for discussion later in the day. We hope that this compound mode of presentation (oral plus poster presentation) will contribute to better communication between poster presenters and conference participants. We are open to further suggestions and ideas, as well as feedback on how well this whole process works.

We have also tried to provide an interesting and diverse social programme. Apart from the welcome reception and banquet, a variety of half-day excursions are offered on Thursday afternoon, plus other activities in the city such as walking tours. We would like to draw your attention to the special concert on Wednesday evening that features contemporary works by Greek composers performed by leading local performers. The concert will include works from the beginning of the 20th century to the present; also, a traditional female vocal ensemble will participate in the concert, complementing contemporary works inspired by Greek folk music. On the last day of the conference, Saturday afternoon, a special post-conference two-hour session, co-chaired by John Sloboda and Mayumi Adachi, will look at the wider social and political context of our research and practice. This event will focus on the current global economic situation, as it is currently being felt most strongly in Greece, and its impact on scholarship and intellectual exchange. All are welcome for a lively and thought-provoking discussion.

We hope that the richness of research topics, the high quality of presentations, the smooth flow of the programme, the friendly and comfortable environment of Porto Palace, the relaxed coffee and lunch breaks, along with the conference excursions, musical concerts and other social events, will make this conference a most rewarding experience. We hope that everyone will leave with fresh ideas and motivation for future research, and new collaborations that will give rise to inspiring new ideas and lasting friendships.

Closing this opening comment, we would like to thank all our co-organisers in the organising committee, our colleagues in the Music Department and our collaborators at Symvoli for their support. We want to thank especially Panos Mavromatis, Kostas Pastiadis and Andreas Katsiavalos for their invaluable practical help in the various stages of this organisation. Finally, a warm thanks to all of you for coming to Thessaloniki and for your support and solidarity in the midst of this difficult period for our country. We are confident that this conference will be a most rewarding and memorable experience for all.

Emilios Cambouropoulos and Costas Tsougras, ICMPC-ESCOM 2012 co-chairs
ICMPC12-ESCOM8 Organizing Committee
Chair: Emilios Cambouropoulos, School of Music Studies, Aristotle University of Thessaloniki, Greece
Co-Chair: Costas Tsougras, School of Music Studies, Aristotle University of Thessaloniki, Greece
Reviewing Co-ordinator: Panayotis Mavromatis, New York University, USA
Technical Co-ordinator: Konstantinos Pastiadis, School of Music Studies, Aristotle University of Thessaloniki, Greece
Mayumi Adachi, Hokkaido University, Japan
Anna Rita Addessi, University of Bologna, Italy
Steven Demorest, University of Washington, USA
Andrea Halpern, Bucknell University, USA
Reinhard Kopiez, University of Hannover, Germany
Jukka Louhivuori, University of Jyväskylä, Finland
Yoshitaka Nakajima, Kyushu University, Japan
Jaan Ross, Estonian Academy of Music and Theatre & University of Tartu, Estonia
Programme Committee
Eckart Altenmüller, Hanover University of Music, Drama and Media, Germany
Nicola Dibben, University of Sheffield, U.K.
Robert O. Gjerdingen, Northwestern University, U.S.
Carol L. Krumhansl, Cornell University, U.S.
Stephen McAdams, McGill University, Canada
Richard Parncutt, Karl-Franzens-Universität Graz, Austria
Catherine (Kate) Stevens, University of Western Sydney, Australia
Petri Toiviainen, University of Jyväskylä, Finland
Mayumi Adachi, Hokkaido University, Japan
Anna Rita Addessi, University of Bologna, Italy
Rita Aiello, New York University, United States
Eckart Altenmüller, University of Music, Drama and Media, Hannover, Germany
Rytis Ambrazevičius, Kaunas University of Technology, Lithuania
Christina Anagnostopoulou, University of Athens, Greece
Richard Ashley, Northwestern University, United States
Roberto Bresin, KTH Royal Institute of Technology, Sweden
Warren Brodsky, Ben-Gurion University of the Negev, Israel
Annabel Cohen, University of Prince Edward Island, Canada
Eugenia Costa-Giomi, University of Texas, Austin, United States
Sarah Creel, University of California, San Diego, United States
Ian Cross, University of Cambridge, United Kingdom
Lola Cuddy, Queen's University, Canada
Lori Custodero, Columbia University, United States
Irène Deliège, ESCOM, Belgium
Steven M. Demorest, University of Washington, United States
Nicola Dibben, University of Sheffield, United Kingdom
Walter Jay Dowling, University of Texas, Dallas, United States
Tuomas Eerola, University of Jyväskylä, Finland
Zohar Eitan, Tel Aviv University, Israel
Dorottya Fabian, University of New South Wales, Australia
Morwaread Farbood, New York University, United States
Robert Gjerdingen, Northwestern University, United States
Rolf Inge Godøy, University of Oslo, Norway
Werner Goebl, University of Music and Performing Arts, Vienna, Austria
Andrea Halpern, Bucknell University, United States
Stephen Handel, University of Tennessee, United States
Erin Hannon, University of Nevada, Las Vegas, United States
Yuzuru Hiraga, University of Tsukuba, Japan
Henkjan Honing, University of Amsterdam, Netherlands
Erkki Huovinen, University of Minnesota, School of Music, United States
Roger Kendall, University of California, Los Angeles, United States
Reinhard Kopiez, Hanover University of Music, Drama and Media, Germany
Stefan Koelsch, Freie Universität Berlin, Germany
Nina Kraus, Northwestern University, United States
Alexandra Lamont, Keele University, United Kingdom
Eleni Lapidaki, Aristotle University of Thessaloniki, Greece
Edward Large, Florida Atlantic University, United States
Andreas Lehmann, Hochschule für Musik, Würzburg, Germany
Marc Leman, University of Ghent, Belgium
Scott Lipscomb, University of Minnesota, United States
Steven Livingstone, Ryerson University, Canada
Jukka Louhivuori, University of Jyväskylä, Finland
Psyche Loui, Beth Israel Deaconess Medical Center and Harvard Medical School, United States
SEMPRE AWARDS
The Society for Education, Music and Psychology Research (SEMPRE) <http://www.sempre.org.uk/> kindly offers a number of awards to researchers attending this year's ICMPC conference.

SEMPRE & ICMPC12 Young Researcher Award
The SEMPRE & ICMPC12 Young Researcher Award (YRA) is awarded to young researchers who submit a high-quality research paper and demonstrate the potential to be a leading researcher in the field of Music Perception and Cognition. This year's Young Researcher Award selection committee, consisting of Graham Welch (chair of SEMPRE), Reinhard Kopiez (president of ESCOM), and Kate Stevens (member of the ICMPC-ESCOM12 Scientific Advisory Board), carefully examined all shortlisted applications and decided that this year's YRA prize would be shared by the following two researchers:
Birgitta Burger: Emotions move us: Basic emotions in music influence people's movement to music
Chia-Jung Tsay: The Impact of Visual Cues on the Judgment and Perceptions of Music Performance
The selection process consisted of the following steps. Initially, eleven submissions were shortlisted based on the review ratings of the submitted abstracts. Then, the authors of these eleven abstracts submitted full papers, which were additionally reviewed by at least two reviewers from the Scientific Advisory Board. Finally, the YRA selection committee carefully examined these eleven submissions in terms of their overall quality and originality (taking into account the additional reviews) and, in terms of meeting all the criteria described on the conference webpage, delivered their final decision. Apart from receiving a cash prize ($1000 each), the two YRA winners will present their work in special plenary sessions on Wednesday and Friday morning. The YRA selection committee, SEMPRE, the conference organising committee and all participants would like to congratulate the two winners whole-heartedly on their success.

The Attendance Bursaries are awarded by SEMPRE to financially assist ICMPC participants on the basis of merit and need. This year, a total of 10,000 US dollars (from $100 to $750) has been awarded to the following participants: Amos David Boasson, Blanka Bogunović, Daniel Cameron, Elisa Carrus, Song Hui Chon, Emily B.J. Coffey, Cara Featherstone, Georgia-Aristi Floridou, Benjamin Gold, Andrew Goldman, Meghan Goodchild, Shantala Hegde, Sibylle C. Herholz, Christos Ioannou, Jenny Judge, Sarah Knight, Amanda Krause, Carlotta Lega, Samuel A. Mehr, Alisun Pawley, Crystal Peebles, Rachna Raman, Sundeep Teki, Michael Wammes, Dustin Wang, Michael W. Weiss
Presentation Guidelines

Spoken Papers
Spoken papers are allotted 20 minutes, plus 8 minutes for questions and a 2-minute break for changing rooms. You must stop talking when your time is up. The timetable will be strictly adhered to, so that people can easily change rooms and plan meetings during breaks. All papers are presented in English.
All PowerPoint presentations must be brought to the Central Technical Helpdesk in the main foyer at least three hours prior to the scheduled opening time of the session. At the helpdesk, authors will be able to preview their presentation. The computers in the presentation halls are laptops with Microsoft Windows 7 or XP SP3 installed. Presentations should be prepared for MS Office PowerPoint or in Acrobat PDF format. The PowerPoint presentation (ppt or pptx file) and all audio/visual files must be in the same folder (without sub-folders), named after the presenter's surname. If it is absolutely necessary - e.g. if you want to use a program that runs only on your computer - bring your own laptop and check well in advance that your equipment and ours work together in harmony. Users of Apple Macintosh computers should provide any necessary adapters for video (VGA) output to the in-situ audiovisual equipment.
Meet your chair and technical assistant 10-15 minutes before the start of your session. If you have a handout, give it to an assistant along with any instructions on what to do. If something goes wrong with the equipment during your talk, ask the technician to fix it. Meanwhile, continue your talk, even if you have to improvise without slides. Your 20-minute period will not be extended on account of a technical problem.
Poster Presentations
Hanging up and presenting posters. Authors are responsible for setting up and removing their posters. If your poster is presented at a Speed Poster Session on Tuesday, then you should hang it up on Monday afternoon before 5:30pm, and the poster will remain up until Tuesday evening. If your poster is presented on Wednesday or Friday, then it should be hung up on the morning of that same day before 9am and removed the following day. A timetable of papers on each poster panel will indicate which posters should be hung on that particular panel. Posters will be organised thematically, so look for your poster panel in the appropriate thematic region. We will provide the means for you to hang your poster. At least one author of a poster must be available to present it during the special poster presentation sessions and also during coffee breaks and lunch breaks on the two days that the poster will be hung.
Speed poster presentations. Apart from the poster, a 5-minute slot is allocated for the spoken presentation of each poster. The goal of this brief presentation is not to present the full paper, but rather to give a glimpse into the participants' research that will attract delegates to a more detailed presentation and discussion around the actual poster. Authors should not try to fit as much as possible into the five minutes, but preferably give a few interesting/exciting points that will urge delegates to discuss the issues raised further during the poster presentation sessions and the lunch/coffee breaks. The same requirements for spoken talks apply to the speed poster presentations (read carefully the guidelines above), with the following exception: each speed poster presentation is allotted exactly 5 minutes, without extra time for discussion - presenters should ensure that their presentation is less than 5 minutes long, to allow half a minute or so for the preparation of the next presentation. The timetable will be strictly adhered to. We suggest that PowerPoint presentations consist of no more than 4-5 slides. All PowerPoint presentations must be brought to the Central Technical Helpdesk in the main foyer at least three hours prior to the scheduled opening time of the session. Use of individual laptops is not allowed in speed poster sessions.
CONFERENCE PROGRAM OVERVIEW (Monday 23 July - Saturday 28 July)
[Timetable grid, condensed: daily registration; welcome and keynote 1 on Monday; keynotes 2-5 and the two Young Researcher Award plenary talks across the remaining mornings; paper sessions 1-45 and symposia 1-5; speed poster sessions 1-44, each followed by poster presentations; coffee and lunch breaks; ESCOM General Assembly; ICMPC Business Meeting; tours & excursions on Thursday afternoon; welcome reception, concert, and banquet; Special Post-Conference Session on Saturday.]
Monday 23 July

Irène Deliège: The cue-abstraction model: its premises, its evolution, its prospects
the impact of heads of thematic elements is more pronounced in abstracted cued elements: so-called priming procedures can shed light for a better understanding of the mechanisms involved. A third axis concerns the definition of notions underlying the psychological mechanisms involved in music perception. Cue, musical idea, variation, imprint, theme, motif, pertinence, salience, accent, similarity, difference, and so on, are all terms borrowed from the common vocabulary and used intuitively by musicians and musicologists in their work on music analysis, theory, history, philosophy and aesthetics of music. Would it be possible to go beyond this intuitive use? Do we have tools to make progress towards more relevant definitions that can satisfy scientists' quest for more precision?
Tuesday 24 July
Speed Poster Session 1: Grand Pietra Hall, 11:00-11:40
Musical expectation & tension

Changing expectations: does retrospection influence our perceptions of melodic fit?
Freya Bailes, Roger T. Dean
MARCS Auditory Labs, University of Western Sydney
Statistical models can predict listeners' melodic expectations, and probable musical events are more readily processed than less probable events. However, there has been little consideration of how such expectations might change through time, as remembering becomes necessary. Huron's ITPRA theory proposes successive stages forming musical expectation, the last of which, appraisal, might shift a listener's representations and expectations. The temporal trajectory of expectations, and the role of remembering and appraisal, are not well understood. The aim of this experiment was to identify conditions in which expectation and retrospective appraisal contribute to melodic processing. It was hypothesized that melodic expectations based on the most recently heard musical sequence would initially influence ratings in a probe tone task, with a shift to a retrospective analysis of the whole sequence through time.
Four male and 12 female non-musicians studying undergraduate psychology participated for course credit. An adaptation of Krumhansl's probe tone method was used, in which an isochronous melody was presented, consisting of a sequence of five chords in one key followed by a sequence of three monophonic notes forming an arpeggio in another key a semitone away. Following this, a probe tone was presented immediately, 1.8s, 6s, or 19.2s later. Participants, hearing the stimuli over headphones, rapidly rated the goodness of fit of the probe to the preceding context, using a 7-point scale. The tonal relationship of the probe to both parts of the melodic sequence was manipulated.
Probe tone ratings changed significantly with time. Response variability decreased as the time to probe presentation increased, yet ratings at every time point were significantly different from the scale mid-point of 4, arguing against increasingly noisy data, or a memory loss, even 19.2s after presentation of the melodic sequence. Suggestive evidence for a role of appraisal was the development, with delay time, of statistical correlation between distributions of perceived fit and predictions based on literature data on tonal pitch preference, or on the IDyOM model of statistical probability. So, with no further musical input, listeners can continue to transform recent musical information and so change their expectations beyond simply forgetting.
Closure and Expectation: Listener Segmentation of Mozart Minuets
Crystal A. Peebles
School of Music, Northern Arizona University, United States

This study investigates the theoretical claim that the perception of closure stems from the ability to predict the completion of a schematic unit, resulting in a transient increase in prediction error for the subsequent event. In this study, participants were asked to predict the moment of completion of mid-level formal units while listening to three complete minuet movements by Mozart (K. 156, K. 168, and K. 173). Following this prediction task, participants then rated the degree of finality of ending gestures from these same movements. Generally, endings punctuated by strong cadential arrival were best predicted and received higher ratings, suggesting that learned harmonic and melodic ending gestures contribute to the segmentation of musical experience. These results were accentuated for participants with formal musical training, further supporting this conclusion.
Musical tension as a response to musical form
Expectations in Culturally Unfamiliar Music: Influences of Perceptual Filter and Timbral Characteristics
Catherine Stevens,* Barbara Tillmann,#* Peter Dunbar-Hall, Julien Tardieu, Catherine Best*
*MARCS Institute, University of Western Sydney, Australia; #Lyon Neuroscience Research Center, CNRS-UMR 5292, INSERM U1028, Université de Lyon, France; Conservatorium of Music, The University of Sydney, Australia; Université de Toulouse UTM, France
With exposure to a musical environment, listeners become sensitive to the regularities of that environment. These acquired perceptual filters likely come into play when novel scales and tunings are encountered. i) What occurs with unfamiliar timbre and tuning? ii) Are novice listeners sensitive to both in- and out-of-scale changes? iii) Does unfamiliar timbre make a difference to judgments of completeness? iv) When changes are made, is perceived coherence affected, and how much change disrupts judged cohesion of unfamiliar music?
An experiment investigated the effect of unfamiliar timbre and tuning on judgments of melody completeness and cohesion using Balinese gamelan. It was hypothesized that, when making judgments of musical completeness, novice listeners are sensitive to in- and out-of-scale changes, and that this is moderated by an unfamiliar timbre such as sister or beating tones. Thirty listeners with minimal experience of gamelan rated the coherence and completeness of gamelan melodies. For the out-of-scale endings, the gong tone was replaced by a tone outside the scale of the melody; for in-scale endings, the gong tone was replaced by a tone belonging to the scale of the melody.
For completion ratings, the out-of-scale endings were judged less complete than the original gong and in-scale endings. For the novel sister melodies, in-scale endings were judged as less complete than the original gong endings. For coherence, melodies using the original scale tones were judged as more coherent than melodies containing partial or total replacements. The results provide evidence of perceptual filters influencing judgments of novel tunings.
ERP Responses to Cross-cultural Melodic Expectancy Violations
possible confounds between these two musical systems. We discuss the implications of these findings for theories on cultural versus universal factors in music cognition.
A pilot investigation on electrical brain responses related to melodic uncertainty and expectation
Neural and behavioural correlates of musical expectation in congenital amusia
Diana Omigie, Marcus Pearce, Lauren Stewart
notes in the implicit task, indicating that they found these notes more expected. Further, ERP analysis revealed that while an early negative response, which was highly sensitive to note probability, was more salient in controls than amusics, both groups showed a delayed P2 to low relative to high probability notes, suggestive of the increased processing time required for these events. The current results, showing spared, albeit incomplete, processing of melodic structure, add to previous evidence of implicit pitch processing in amusic individuals. The finding of an attenuated early negative response in amusia is in line with studies showing a close relationship between the amplitude of such a response and explicit awareness of musical deviants. Finally, the current study provides support for the notion that early pre-attentive mechanisms play an important role in generating conscious awareness of improbable events in the auditory environment.
Vaitsa Giannouli
Department of Psychology, Aristotle University of Thessaloniki, Greece
The aim of this paper is to investigate the perception of optic and tonal acoustic symmetry. Twenty-eight volunteers (14 musicians and 14 non-musicians) aged 18-67 participated in the study. The participants were examined individually, and the tests were administered in varying order across participants. Half of the participants were informed at the beginning of the examination about the possible kinds of symmetry. Also, half of the participants were presented, before the acoustic stimuli, with a similar kind of symmetry in the optic stimuli. The examination materials were: the mirror reversal letter task from PALPA, the paper folding task from ETS, the spatial ability test from ETS, Benton's judgment of line orientation test, digit span (forward and backward), and a newly constructed test that includes a series of symmetrical and asymmetrical, big and small, optic and acoustic stimuli. In addition to the registration of participants' response time (RT) and the correctness of their responses, measurements were also taken, with the use of Likert scales, of the metacognitive feeling of difficulty and the metacognitive feeling of confidence, along with measurements of the aesthetic judgments for each and every one of the optic and acoustic stimuli.
The majority of the participants (young - middle-aged, women - men, individuals with music education and without music education) did not show statistically significant differences in their scores on the visuospatial tests and the memory tests, while at the same time they had a homogeneously high performance (with almost zero deviation) for all the optic symmetrical and asymmetrical stimuli. For all the acoustic stimuli, a statistically significant difference was found for the participants with music education, not only for the cognitive processing of symmetry, but also for the metacognitive. The proposed (on the basis of the literature) preference (correctness of responses and reaction time) for optic stimuli that are mirror-symmetrical around a vertical axis was not confirmed, and neither was there any confirmation of a preference for repetitive acoustic stimuli. What was found were more positive aesthetic judgments for the symmetrical formations versus the asymmetrical ones for both senses. Finally, no cross-modal priming interaction was found, nor any influence of prior explanation of the kinds of symmetry. These preliminary data provide support for the independence of the underlying mechanisms of optic and acoustic perception of symmetry, with the second one probably being a non-automatic and possibly learned process.
12th ICMPC - 8th ESCOM Joint Conference, Aristotle University of Thessaloniki, 23-28 July 2012
Teuro Yamasaki
Osaka Shoin Women's University

Many studies have investigated the interaction between musical materials and visual materials in multimedia works, and some have suggested that there is an asymmetry in the direction of the interaction: the effect of music on the impression of visual materials is larger than the effect of visuals on the impression of musical materials. This might indicate that musical impressions and visual impressions are formed through different emotional processes. In these studies, however, the intensity of impression of the two materials was not controlled. The asymmetry might therefore be caused not by the modality of the materials but by the intensity of the impressions they make. This study investigates whether the asymmetry is found even when the intensity of the materials is controlled. In a preliminary experiment, fifteen music excerpts and fifteen paintings are evaluated on their valence and arousal, and five music excerpts and five paintings are chosen as stimuli for the main experiment. These stimuli are musical excerpts or paintings with positive valence and high arousal (+/+), positive valence and low arousal (+/-), negative valence and high arousal (-/+), negative valence and low arousal (-/-), or neutral valence and medium arousal (0/0). In addition, musical excerpts and paintings with the same descriptor (for example, a musical excerpt with +/+ and a painting with +/+) are chosen so as to have the same degree of valence and arousal. In the main experiment, musical excerpts and paintings are combined and presented, and participants are asked to evaluate the musical impression or the visual impression of the combined stimuli. By comparing the results of the main experiment with those of the preliminary experiment, the effect of musical excerpts on paintings and the effect of paintings on musical excerpts are analyzed respectively. These results will be discussed, first confirming whether an asymmetry between the size of the musical effect and the visual effect exists and, if such an asymmetry exists, exploring its cause.
Congruency between music and motion pictures in the context of video games: Effects of emotional features in music

Shinya Kanamori, Ryo Yoneda, Masashi Yamada
Graduate School of Engineering, Kanazawa Institute of Technology, Japan

In the present study, two experiments are conducted. The first experiment, using one hundred pieces of game music, reveals that the impression of game music is spanned by pleasantness and excitation axes. The second experiment shows that the congruency of a moving picture and a musical tune does not decrease, and the whole impression does not change significantly, even if a tune is replaced by another tune with a similar impression. These results suggest that an archive in which various tunes are plotted on the impression plane spanned by the pleasantness and excitation axes is useful for communication within a group of game creators and engineers when designating a piece of music for a scene in a video game.
Complex Aural and Visual Stimuli: Discerning Meaning in Musical Experiences

Dale Misenhelter
University of Arkansas, USA

This meta-analysis explores findings from preference and response studies. Several of the studies utilized traditional major musical works, including the Bach Passacaglia, Beethoven's Seventh Symphony, and Stravinsky's Rite of Spring, as well as selected contemporary popular compositions. Variables considered in the studies included the experience level of participants (often characterized as musicians and non-musicians), musical elements (tension and release, textural and dynamic considerations, consonance and dissonance, etc.), and visual elements as changes in affect (dramatic and temporal events, dance, direction, speed of travel, tension and repose, artistic considerations, etc.). A primary research question concerns focus of attention: the ability of listeners to distinguish between perceived musical elements or other stimuli while concurrently attending and responding, a process loosely termed "multi-tasking." While there is considerable research on listeners' ability to discriminate and/or prioritize among elements in audio-only environments, research on discerning among multiple elements in audio-visual stimuli appears to be comparatively minimal. Within aural models, it would seem that less experienced listeners attend to individual components or concepts of a musical selection, while experienced listeners are able to process more complex information. With an aural-visual model, data suggest negative responses to negative visual stimuli (despite their consistency with the musical content), which raises issues of unclear definitions regarding what constitutes aesthetic response, as well as the possibility of participants simply responding to a demand characteristic, i.e., as they may have assumed was expected.
Interaction of Audiovisual Cues in the Perception of Audio Trajectories

Cross-modal Effects of Musical Tempo Variation and on Musical Tempo in Audiovisual Media

Friedemann Lenz
Department of Musicology and Music Education, University of Bremen, Germany
Music is an acoustical phenomenon that is part of a complex multisensory setting. One line of research that focuses on this issue is the research on background music and on music in different kinds of audiovisual media. Research on audiovisual interaction shows that visual spatial motion can induce percepts of auditory movement and that visual illusions can be induced by sound. Studies on background music indicate that musical tempo can be a factor in cross-modal interactions. In the present study, three different effects of musical tempo variation in audiovisual media will be discussed. First, it is assumed and tested that musical tempo variation can influence the perception of the velocity of the visual objects in an audiovisual medium, and vice versa. The second assumption refers to the thesis that the perception of time in movies depends partially on the variation of musical tempo. The third question deals with the influence of musical tempo on the emotions felt by recipients while watching an audiovisual medium. Several computer-aided tests with audiovisual stimuli were conducted. The stimuli consisted of videos of a conveyor belt with moving boxes and a musical soundtrack with a simple melody. Several pretests on the three hypotheses were conducted. There are hints that musical tempo can change the perception of visual velocity, but not vice versa.
When Music Drives Vision: Influences of Film Music on Viewers' Eye Movements

Karin Auer,* Oliver Vitouch,* Sabrina Koreimann,* Gerald Pesjak,# Gerhard Leitner,# Martin Hitz#
*Dept. of Psychology, University of Klagenfurt, Austria
#Interactive Systems Group, University of Klagenfurt, Austria

Various studies have shown the co-determining strength that film music has on viewers' perception. We here try to show that the cognitive processes of watching a film, observed through viewers' scanpaths and eye-movement parameters such as number and duration of fixations, are different when the accompanying film music is changed. If this holds, film music does not just add to a holistic impression; rather, the visual input itself is actually different depending on features of the soundtrack. Two film clips, 10 seconds each, were presented with three different musical conditions (horror music, documentary music, no music) in a between-subjects design. Clip 2 additionally contained a cue mark (a red X in the bottom left corner, shown for 1 s). Participants' scanpaths were recorded using an ASL H6 head-mounted eye-tracking system based on corneal reflection of infrared light. The resulting scanpaths of N = 30 participants showed distinct patterns dependent on the music condition. Specific trajectory categories were found for both film clips (five for clip 1, nine for clip 2). Systematic differences (p < .05) could be shown in most of these categories and variables. The additional cue mark was consciously perceived significantly more often in both music conditions than in the silent condition. Our results suggest that the slogan "What you see is what you hear" can be true on a very fundamental, first-layer level: visual input varies with different scores, so that, strictly speaking, viewers no longer see the same film.
Emotional Impact of Musical/Visual Synchrony Variation in Film

Andrew Rogers
University of Huddersfield, United Kingdom

The emotional impact of synchronous musical and visual prominences within the cinematic experience awaits thorough empirical evaluation. Film composition is defined here as a genre of stereotypes, whose methodologies are not feasibly subject to significant redevelopment. As a consequence, the research focuses on improving components of the audience-recognisable functions of film music. Subjects graded cinematic clips with musical elements that varied in their synchronous interaction with visual prominences. A positive response to more frequent synchronisation between music and film was concluded. Perceptual expectancy, attention and multisensory integration are central to the analysis of the findings.
Speed Poster Session 3: Dock Six Hall, 11:00-11:40
Composition & improvisation
An information-theoretic model of musical creativity

Geraint A. Wiggins
Centre for Digital Music, Queen Mary, University of London

I propose a hypothetical computational model of spontaneous musical creativity; that is, not deliberate musical problem solving (e.g. rearranging a score for a smaller orchestra), but the production of original musical ideas without reasoning. The theory is informed by evolutionary thinking, both in terms of the development of its mechanisms and of the social evolution of music. Hitherto, no computational model of musical creativity has made a distinction between spontaneous creativity and the deliberate application of explicit design principles. Further, there was no computational model of musical creativity that subsisted in an explicit, coherent relationship with models of other mental processing. This hypothetical model suggests a mechanism that may underlie general implicit reasoning, including the production of language. That mechanism arises from simple statistical principles, which have been shown to apply in perceptual models of music and which may therefore reasonably be supposed to be available in the mind/brain; it consists in the moderation of input to the Global Workspace via the interaction of information-theoretic quantities. The proposed high-level model, instantiated with appropriate sub-component models of learning and production, explains the origins of musical creativity and their connection with speech/language, narrative, and other time-based creative forms. It also supplies a model of the mediation of information as it becomes available to consciousness. It may therefore have implications outside music cognition, for general ideation.
Algorithmic Composition of Popular Music
Vocal improvisations of Estonian children
The Ideational Flow: Evaluating a New Method for Jazz Improvisation Analysis
Improvisation in Jazz: Stream-of-Ideas Analysis of Jazz Piano Improvisations

Martin Schütz
Institute of Musicology, University of Hamburg, Germany

The stream-of-ideas analysis embodies a new way to analyze jazz improvisations. The core of the stream-of-ideas analysis, which was developed within an empirical research project, is to translate an improvisation, on a mid-level, into a sequence of melodic phrases/patterns (= ideas). On the basis of methods of qualitative content research and grounded theory, an expandable and differentiable dynamic system of categories was created to represent every kind of melodic phrase that occurred within the 30 examined improvisations. The underlying improvisations were the result of an experiment with five jazz pianists, who were asked to improvise in several sessions on the same collection of different jazz tunes. Afterwards each improvisation was categorized according to the stream-of-ideas analysis and represented as a sequence of the ideas used. After analyzing the 30 improvisations, the system of categories consisted of nine main categories (= basis-ideas), which covered every appearing melodic phrase. The nine basis-ideas are defined with regard to either aspects of melodic contour or intra-musical aspects (variation of the theme, creating motifs, etc.). Furthermore, the stream-of-ideas analysis makes it possible to compare improvisations objectively between different musicians or tunes by using statistical methods (e.g. frequency distributions). It could be shown that each of the five participating pianists used a quite similar combination of preferred basis-ideas (an individual vocabulary) to create their different improvisations (takes) on the same underlying tune. In addition, a connection between the different tunes and the amount of certain ideas was recognized.
Speed Poster Session 4: Timber I Hall, 11:00-11:40
Emotion & communication
Music can change our lives. As true as this notion may seem, we have little sure knowledge about what it actually means. Strong emotional experiences or peak experiences with music have proven to be of high significance for the people who have them. The authors investigated the long-term effects of such experiences on people's way of life, using narrative interviews and a grounded theory approach to develop a process model that describes the nature of intense musical experiences (IMEs) and their long-term effects. The most important results are that (1) IMEs are characterized by altered states of consciousness, which leads to the experience of harmony and self-realization; (2) IMEs leave people with a strong motivation to attain the same harmony in their daily lives; (3) people develop several resources during an IME, which they can use afterward to adhere to their plans; (4) IMEs cause long-term changes to occur in people's personal values, their perception of the meaning of life, social relationships, engagement and activities, and consciousness and development. The authors discuss the results as they relate to spirituality and altered states of consciousness, and draw 10 conclusions from the process model that form a starting point for quantitative research on the phenomenon. Results suggest that music can indeed change our lives, by making them a bit more fulfilling, spiritual, and harmonious.
Joshua Albrecht
School of Music, Ohio State University, USA

A new method of collecting self-report assessments of the perceived affective content of short musical passages is described in Albrecht & Huron (2010). That study used a procedure termed the progressive exposure method, in which a large passage is divided into discrete five-second excerpts. These excerpts are then presented in random order, and participants evaluate the perceived affective content of these short passages. In that study, 110 participants used the progressive exposure method to analyze the second movement of Beethoven's Pathétique sonata. The results provide a mosaic portrait of eleven affective dimensions across the movement. In the present study, a model of perceived affective content is built by measuring sixteen different musical features of each excerpt and using these measurements as predictors of participant ratings. This model is used to predict participant evaluations of the same eleven affective dimensions for fifteen excerpts from different Beethoven piano sonatas. To anticipate the results, the predictions for each of the fifteen excerpts along each of the eleven affective dimensions are significantly correlated with participant ratings.
Coding Emotions with Sounds
Music is widely used in everyday life, and has been shown to affect a wide range of behaviors, from basic decision tasks to driving performance. Another aspect of everyday life is spatial attention, which is used in most tasks, whether simple or complex. Pseudoneglect is a phenomenon whereby neurologically normal individuals demonstrate a reliable bias towards the left visual hemifield. Theories of spatial attention suggest that because the right hemisphere of the brain is more involved in visuo-spatial processing, it has greater activation, which leads to the biasing of the left visual hemifield. It is also theorized that there is hemispheric asymmetry in the brain for different emotional valences, such that the left hemisphere is more activated during happy emotions and the right hemisphere more activated by sad emotions. Music can also be highly emotional, which was utilized here for the purpose of evoking emotions in the participants of this study. The current study sought to determine whether manipulating emotional valence through music would increase, reverse, or ameliorate pseudoneglect in neurologically normal individuals. One hundred fourteen participants performed a rating task using a visual analog scale on works of art, in silence or while listening to music with a sad or happy valence. The musical stimuli were selections from various orchestral works by Haydn, Albinoni, Fauré, Bruch, Mendelssohn, and Prokofiev. The valence of the music was confirmed using independent raters. Participants rated both portrait art that contained a human face and abstract/scene art that did not contain a human subject. Additionally, the anchors of the rating scale were reversed half-way through, to determine whether the pseudoneglect effect occurred regardless. The results replicated earlier work on pseudoneglect in line bisection tasks when the ratings were performed in silence, but demonstrated a reversal of the effect when happy music was present. No significant effect was found when sad music was present, though the trend followed the same direction as in the happy condition. The results are framed within theory regarding hemispheric specialization of emotions and spatial attention in the brain, and with regard to how the findings might be of interest to researchers using Likert-type scales for testing purposes.
The Effect of Repeated Listening on Pleasure and Boredom Response to a Cadenza
Speed Poster Session 5: Timber II Hall, 11:00-11:40
Attention & memory
Effect of a reference vs. working memory task on verbal retrospective estimation of elapsed duration during music listening

Michelle Phillips
Centre for Music and Science, University of Cambridge, UK

Psychological time may be warped and shaped by musical engagement and variation, including factors such as the music's volume, tempo, and modality. Two studies will be presented here, exploring both reference and working memory. Participants listened to a 37-second extract of a bespoke piano composition (100 bpm), and retrospectively gave a verbal estimate of the elapsed duration of the listening period. In study 1 (N = 50, 12 male, average age: 30.0), the average estimate for participants who listened only (no task) was 52.00 seconds. Participants in condition 2 (reference memory task), who were instructed to write a list of jungle animals whilst listening, gave a not significantly different average estimate of 55.88 seconds. However, in study 2 (N = 28, 12 male, average age: 25.5), the average estimate of 63.36 seconds for participants who listened only (no task) was significantly longer (p < 0.02) than in the working memory task group (instructed to rehearse a list of jungle animals whilst listening), which yielded an average estimate of 38.57 seconds. These findings suggest that retrospective estimates of elapsed duration during music listening are not significantly shortened when a reference memory task is included, but are significantly reduced when working memory is occupied during the listening period. Diverting attention from the listening had a greater impact when attention was focused on rehearsal in working memory than on retrieval from reference memory. This study provides evidence that differing processes may underlie these systems, and that one diverts attention from music to a greater extent than the other.
Working Memory and Cognitive Control in Aging: Results of Three Musical Interventions

Jennifer A. Bugos
School of Music, University of South Florida, United States

One common barrier to successful aging is decreased performance in cognitive abilities such as executive function and working memory tasks, due to age-related cognitive decline (Salthouse, 1994; Mejía et al., 1998; Wecker et al., 2005). A key challenge is to identify cognitive interventions that may mitigate or reduce potential age-related cognitive decline. This research examines the effects of different types of musical training, namely gross motor training (group percussion ensemble, GPE) and fine motor training (group piano instruction, GPI), compared to non-motor musical training (music listening instruction, MLI), on working memory and cognitive control in older adults (ages 60-86). One hundred ninety non-musicians, ages 60-86, were recruited and matched by age, education, and intelligence to two training interventions. Two programs were administered concurrently in each of three 16-week sessions: (GPI and MLI), (GPE and MLI), and (GPE and GPI). A series of standardized cognitive assessments was administered pre and post training. Results of a repeated measures ANOVA show significantly reduced perseveration errors on the ACT for the GPE group compared to GPI and MLI, F(2,121) = 3.6, p < .05. The GPI group exhibited a similar pattern of reduced perseveration errors. Results of a repeated measures ANOVA on the Musical Stroop Task indicate significantly reduced errors by the MLI group compared to GPI and GPE, F(2,109) = 3.1, p < .05. Musical training may benefit general cognitive abilities. Data suggest that instrumental training enhances working memory performance, while music listening instruction may contribute to cognitive control.
Interfering Effects of Musical and Non-Musical Stimuli in a Short-term Memory Task

Musical Accents and Memory for Words
Mood-Based Processing of Unfamiliar Tunes Increases Recognition Accuracy in Remember Responses
Effects of Manipulating Attention during Listening on Undergraduate Music Majors' Error Detection in Homophonic and Polyphonic Excerpts: A Pilot Study

Amanda L. Schlegel
School of Music, University of Southern Mississippi, United States

The purpose of this pilot study was to investigate the potential effects of wholistic versus selective listening strategies on music majors' detection of pitch and rhythm errors in three-voice homophonic and polyphonic excerpts. During the familiarization phase, upper-level undergraduate instrumental music major participants (N = 14) first heard a correct full performance (all voices at once), followed by each individual voice, with one final opportunity to listen to the full excerpt again. Participants then heard a flawed performance containing pitch and rhythm errors, with the task of detecting the errors. Participants in the wholistic listening group were instructed to attend/listen to all voices while listening, while selective group participants were instructed to attend/listen to individual voices while listening. Participants heard the flawed performance twice. Results indicated no significant main effects due to texture, error type (pitch or rhythm), error location (top, middle, or bottom voice), or treatment group. A significant three-way interaction among texture, error type, and error location illustrates the influence of musical context in the detection of pitch and rhythm errors. Though the small sample size (N = 14) and the lack of significance as a result of the treatment illustrate the need for additional, adjusted investigations, efforts to illuminate texture's influence on listening/attending are of value to all musicians.
Attention and Music

Vaitsa Giannouli
Department of Psychology, Aristotle University of Thessaloniki, Greece

Many studies have found that cognitive test performance can be influenced by background music. The aim of the present study is to investigate whether background music can influence attention. Twenty-four neurologically and acoustically healthy volunteers (12 non-musicians and 12 musicians, 15 men and 14 women, mean age = 26.20, SD = 5.64) participated in the study. All of the participants had a university education (minimum 16 years). The examination materials were the Ruff 2 & 7 Selective Attention Test (2 & 7 Test), the Symbol Digit Modalities Test (SDMT), Digit Span Forward, and the Trail Making Test Part A (TMT). Metacognitive feelings (feeling of difficulty, FOD, and feeling of confidence, FOC) were also measured after the completion of each test with the use of Likert scales. Volunteers participated in all three conditions of the experiment and were grouped according to the acoustic background that they experienced during the neuropsychological examination (Mozart's Allegro con spirito from the Sonata for two pianos K.448, a favorite music excerpt, or no exposure to any acoustic stimuli during the examination). Results indicated a statistically significant difference in favor of the favorite music condition, and statistically more positive metacognitive judgments (less difficulty, more confidence) for this condition. Listening to Mozart's music did not enhance performance on attention tasks. No influence of music education and no gender differences were found. The finding of better attention performance could be interpreted as the result of a general positive influence that listening to preferred music has on general cognitive abilities.
Learning and memorisation amongst advanced piano students: a questionnaire survey
While songs (defined as music with lyrics) have been studied extensively in music theory, little empirical research addresses how music and lyrics together influence the interpretation of a song's narrative. Previous experiments on song focus on how lyrics and music elicit emotion, yet do not address the song's narrative. Cook (1998) proposed three models of multimedia, including contest (or mismatch), when two simultaneous media contradict each other. Previous research (e.g. McNeill, 2005) indicates that mismatched verbal and nonverbal communication implies meta-communication, or other instances of non-literal language (deception, irony, sarcasm, joking, and so on). In like manner, when music and lyrics mismatch, a listener might interpret the music-lyrics mismatch as a kind of meta-communication. We propose the following hypotheses: (1) in song, music does not simply elicit emotion but also plays a part in a listener's narrative interpretation; a listener uses both. (2) If music and lyrics mismatch, listeners will reconcile the contradictory sources to create a coherent story. (3) When the music and lyrics conflict in a song sung by a character, a listener may infer that the character in the song is being ironic, lying, sarcastic, or humorous. Participants listened to song clips from Broadway musicals and provided responses to a variety of questions: free response, Likert scale ratings, forced choice, and adjective listing. The study used a 2x2 between-subjects design in which the factors are the affect of the music and the affect of the lyrics: 1) Positive Music/Positive Lyrics, 2) Positive Music/Negative Lyrics, 3) Negative Music/Negative Lyrics, 4) Negative Music/Positive Lyrics. This research provides further insight into how a composer is able to successfully communicate a meaning or message to a listener through song. Commercially, advertising companies may find the results informative, since knowing how different sources of media are understood by the public would help them best reach their target audience. These results would also be of interest to non-music researchers who study how people reconcile conflicting simultaneous sources of information.
Studying the Intervenience of Lyrics' Prosody in Songs' Melodies

Jose Fornari
Comparing Models of Melodic Contour in Music and Speech
trial were produced using the same contour model, but only one was derived from the melody or sentence heard. Models facilitating the highest proportion of correct matches were considered to summarise the pitch information in a cognitively optimal way. Matching was above chance level for all models, with increased visual detail generally leading to better performance. A linear regression model with musical training, stimulus type, their interaction, and contour model as predictors accounted for 44% of the variance in accuracy scores (p < .001).
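The structure of such a regression (main effects plus a training-by-stimulus interaction, with variance explained summarised as R²) can be sketched as follows. The data, variable names, and coefficient values below are invented for illustration; the abstract reports only the predictor set and the overall fit, not the underlying data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Illustrative predictors (not the study's data): years of musical
# training, stimulus type (0 = speech, 1 = melody), and a contour
# model coded as a single dummy for brevity.
training = rng.uniform(0, 10, n)
stimulus = rng.integers(0, 2, n).astype(float)
contour_model = rng.integers(0, 2, n).astype(float)

# Simulate accuracy so that training helps only for melodies
# (i.e., a training-by-stimulus interaction), plus noise.
accuracy = (0.5 + 0.10 * stimulus + 0.02 * training * stimulus
            + 0.05 * contour_model + rng.normal(0, 0.08, n))

# Design matrix: intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), training, stimulus,
                     training * stimulus, contour_model])
beta, *_ = np.linalg.lstsq(X, accuracy, rcond=None)

# R^2: proportion of variance in accuracy explained by the model.
pred = X @ beta
ss_res = np.sum((accuracy - pred) ** 2)
ss_tot = np.sum((accuracy - accuracy.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 2))
```

A positive coefficient on the interaction column (`beta[3]`) is what would correspond to the reported pattern of training improving accuracy for melodies only.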
Accuracy was significantly higher for melodies than for speech, and increased with musical training for melodies only. This novel cross-modal paradigm revealed that listeners can successfully match images derived from music-theoretical models of contour not only to melodies but also to spoken sentences. Our results support the important role of contour in perception and memory in both music and speech, but suggest limits to the extent to which musical training can bring about changes in the mental representation of pitch patterns.
The effect of melodic expectation on language processing at different levels of task difficulty and working memory load

Towards a Musical Gesture in the Perspective of Music as a Dynamical System
movement and change over time. We present a locus of convergence among studies with different views on music as a dynamical system, after which we propose a musical gesture based on the same dynamical principles which, in the domain of linguistics, led to a phonological unit called the articulatory gesture. The singing voice is presented as a plausible musical gesture, as it produces tones and durations combined in order to provide the musical information. This information can be understood as specific tones in a given scale system and rhythmic structure, and is part of the musical unit proposed here. The articulatory movements of the singing voice produced by the larynx characterize this unit as a unit of action. Thus we suggest a larynx modeling for music production, in an initial attempt to view the singing voice as a basic realization of music, organized and coordinated as a musical gesture.
Perceiving Differences in Linguistic and Non-Linguistic Pitch: A Pilot Study With German Congenital Amusics
Analyzing Modulation in Scales (Ragams) in South Indian Classical (Carnatic) Music: A Behavioral Study
fewer errors than students. This could be attributed to enhanced representation for systems of pitches and modalities.
Embodiment of Metrical Structure: Motor Patterns Associated with Taiwanese Music
Literarily Dependent Chinese Music: A Cross-Culture Research of Chinese and Western Musical Score Based on Automatically Interpretation
Modeling the implicit learning of metrical and non-metrical rhythms
The information dynamics of music (IDyOM; Pearce & Wiggins, 2006) model, originally applied to melodic expectation, indicates learning via entropy (reflecting uncertainty) and information content (reflecting unexpectedness). Schultz, Stevens, Keller, and Tillmann found implicit learning (IL) of metrical and non-metrical rhythms using the serial reaction-time (SRT) task. In the SRT task, learning is characterized by RT decreases over blocks containing a repeating rhythm, RT increases when novel rhythms are introduced, and RT recovery when the original rhythm is reintroduced. Metrical rhythms contained events that occurred on the beat and downbeat. Non-metrical rhythms contained events that deviated from the beat and downbeat. In the metrical condition, larger RT increases occurred for the introduction of novel weakly metrical rhythms compared to novel strongly metrical rhythms. No differences were evident between the introductions of novel non-metrical rhythms. We used the IDyOM model to test the hypothesis that IL of metrical and non-metrical rhythms is related to developing expectations (i.e., RT data) based on the probabilistic structure of temporal sequences. We hypothesized that previous exposure to the corpus results in larger positive correlations for metrical rhythms than non-metrical rhythms. Correlational analyses between RT data and the IDyOM model were performed. The IDyOM model correlated with RT. Entropy demonstrated moderate positive correlations for the LTM+ and BOTH+ models. Information content demonstrated moderate to strong positive correlations for the LTM, BOTH, LTM+, and BOTH+ models. As hypothesized, models exposed to the corpus demonstrated larger correlations for metrical rhythms compared to non-metrical rhythms. Results suggest that the IDyOM model is sensitive to probabilistic aspects of temporal learning and to previous exposure to metrical rhythms. The probabilistic structure of temporal sequences predicts the development of temporal expectations as reflected in RT. Results indicate that the usefulness of the IDyOM model extends beyond predicting melodic expectancies to predicting the development of temporal expectancies.
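The correlational analyses described here relate per-event RTs to per-event model outputs such as information content. A minimal sketch of that step, with entirely hypothetical RT and information-content values (the real study used IDyOM outputs over rhythm sequences):

```python
import math

def pearson_r(x, y):
    """Plain Pearson product-moment correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-event values: mean RTs (ms) and model information content (bits).
rts = [310, 295, 340, 360, 300, 355, 320, 345]
ic = [2.1, 1.8, 3.0, 3.4, 1.9, 3.2, 2.4, 3.1]
print(round(pearson_r(rts, ic), 2))
```

A positive r here would mirror the abstract's finding that more unexpected events (higher information content) attract slower responses.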
Asymmetric beat/tactus: Investigating the performance of beat-tracking systems on traditional asymmetric rhythms

Thanos Fouloulis,* Emilios Cambouropoulos,* Aggelos Pikrakis#
Meet ADAM: a model for investigating the effects of adaptation and anticipatory mechanisms on sensorimotor synchronization

adaptation with an anticipation process inspired by the notion of internal models. ADAM is created in Simulink, a MATLAB-based simulation environment. ADAM can be implemented in a real-time setup, creating a virtual synchronization partner. ADAM produces an auditory pacing signal, and can parametrically adjust the timing of this signal based on information about the human participant's timing (via MIDI). The setup enables us not only to run simulations but also to conduct experiments during which participants directly interact with the model. In doing so, we investigate the effect of the different processes and their interactions on SMS in order to gain knowledge about how SMS-based tasks might be exploited in motor rehabilitation for different patient groups.
Electrophysiological Substrates of Auditory Temporal Assimilation Between Two Neighboring Time Intervals
thoroughly explored. We conducted two experiments to investigate affective responses to tonal modulation by using semantic differential scales related to valence, synesthesia, potency, and tension. Experiment 1 examined affective responses to modulation to all 12 major and minor keys using 48 brief harmonic progressions. The results indicated that affective response depends on degree of modulation and on the use of the major and minor modes. Experiment 2 examined responses to modulations to the subdominant, the dominant, and the descending major third using a set of 24 controlled harmonic progressions and a balanced set of 24 excerpts from piano compositions belonging to the First Viennese School and the Romantics; all stimuli were in the major mode to maintain the ecological validity of modulation to the dominant. In addition, Experiment 2 investigated the affective influence of melodic direction in soprano and bass melodic lines. The results agreed with the theoretical model of pitch proximity based on the circle of fifths and demonstrated the influence of melodic direction and musical style on emotional response to reorientation in tonal space. Examining the affective influence of motion along different tonal distances can help deepen our understanding of aesthetic emotion.
Voice Multiplicity Influences the Perception of Musical Emotions

Multisensory Perception of Six Basic Emotions in Music
between musical sound and facial and bodily movements in perceiving emotion from music performance. Results showed that performance in the Audio (A), Visual (V), and Audio-Visual (AV) conditions depended on the combination of instruments and emotions: angry expression by cellists and sad expression by violinists were perceived better in the V condition, while disgust expression by pianists was perceived better in the AV condition. While previous studies have shown that visual information from facial expression facilitates emotion perception from emotional prosody in speech, musicians' facial and bodily movements did not necessarily enhance emotion perception from musical sound. This pattern suggests that multisensory perception of emotion from music performance may differ from that of audiovisual speech.
New perspective of peak emotional response to music: The psychophysiology of tears

Musical Emotions: Perceived Emotion and Felt Emotion in Relation to Musical Structures
and rated the intensities of perceived and felt emotions using a two-dimensional evaluation: arousal (active/passive) and valence (pleasant/unpleasant). The results showed that the perceived emotion did not always coincide with the felt emotion. Notably, participants who had substantial musical experience rated the felt emotion as less unpleasant or more pleasant than the perceived emotion in response to minor-key, dissonant, and high-note-density music. This finding may lead to a better understanding of why people sometimes like or enjoy sad music.
Emotional features of musical pieces for a series of survival-horror games
of vehicular performance, and therefore future research should explore the effects of music on driving performance. Developing and testing functional music backgrounds towards increased driver safety is an important contribution of Music Science in the war against traffic accidents and fatalities.
Conceptualizing the subjective experience of listening to music in everyday life

Ruth Herbert
Music Dept., Open University, UK
of
everyday
listening
often
frame
the
way
individuals
experience
music
primarily
in
terms
of
emotion
and
mood.
Yet
emotions
-
at
least
as
represented
by
categorical,
dimensional
and
domain-specific
models
of
emotion
-
do
not
account
for
the
entirety
of
subjective
experience.
The
term
'musical
affect'
may
equally
relate
to
aesthetic,
spiritual,
and
'flow'
experiences,
in
addition
to
a
range
of
altered
states
of
consciousness
(Juslin
&
Sloboda,
2010),
including
the
construct
of
trance.
Alternative
ways
of
conceptualizing
and
mapping
experience
suggest
new
understandings
of
the
subjective,
frequently
multimodal,
experience
of
music
in
daily
life.
This
poster
explores
categorizations
of
aspects
of
conscious
experience,
such
as
checklists
of
basic
dimensions
of
characteristics
of
transformations
of
consciousness
(e.g.
Pekala's
Phenomenology
of
Consciousness
Inventory
(PCI),
or
Gabrielsson
and
Lindstrm
Wik's
descriptive
system
for
strong
experiences
with
music
(SEM-DSM),
together
with
the
potential
impact
of
specific
kinds
of
consciousness
upon
experience
(e.g.
the
notion
of
present
centred
(core
or
primary),
and
autobiographical
(extended/higher
order)
forms
of
consciousness
(Damasio,
1999,
Edelman,
1989).Three
recent
empirical
studies
(Herbert,
2011)
which
used
unstructured
diaries
and
semi-structured
interviews
to
explore
the
psychological
processes
of
everyday
involving
experiences
with
music
in
a
range
of
'real-world'
UK
scenarios
are
referenced.
Free
phenomenological
report
is
highlighted
as
a
valuable,
if
partial
means
of
charting
subjective
experience.
Importantly,
it
constitutes
a
method
that
provides
insight
into
the
totality
of
experience,
so
enabling
researchers
to
move
beyond
the
confines
of
emotion.
The impact of structure discovery on adults' preferences for music and dance
and subjective liking for both music and dance. If our research yields the predicted results, then we will have initial confirmation that structure discovery impacts adults' subjective liking of both music and dance.
Values, Functions of Music, and Musical Preferences

Eugenia Costa-Giomi
Center for Music Learning, University of Texas-Austin, USA
Although timbre plays different roles in the organization of musical and linguistic information, research has consistently shown its salience as a perceptual feature in both music and language. Infants recognize phonemes and words despite variations in talkers' voices early in life, but have difficulty recognizing short melodies played by different instruments until they are 13 months old. It seems that during the first year of life, timbral variability interferes with the categorization of melodies but not words. Because the categorization of words and melodies is critical for the understanding of language and Western music respectively, it is surprising that the former seems to develop earlier than the latter. But studies on infant categorization of linguistic stimuli have been based on the recognition of single words or phonemes lasting less than a second, whereas those on infant categorization of music stimuli have used sequences of tones lasting almost 6 seconds. We conducted a series of experiments to directly compare the formation of categories in music and language under timbral variability using melodies and phrases of the same length, speed, and rhythmic features, and found that 11-month-olds categorized the language but not the music stimuli. The findings suggest that the categorization of certain structural elements emerges earlier in language than in music, and indicate a predisposition for the formation of timbral categories in auditory stimuli in general, even in cases in which such categories are not structurally important.
Music, Language, and Domain-specificity: Effects of Specific Experience on Melodic Pattern-Learning
tension and loudness of the music by comparing tension ratings to predictions of a loudness model. Despite a general tendency towards flatter tension profiles, tension ratings for versions without dynamics, as well as versions without agogics, correlated highly with ratings for the original versions for both pieces. Correlations between tension ratings of the original versions and ratings of harmony and melody versions, as well as predictions of the loudness model, differed between pieces. Our findings indicate that discarding expressive features generally preserves the overall tension-resolution patterns of the music. The relative contribution of single features like loudness, harmony and melody to musical tension appears to depend on idiosyncrasies of the individual piece.
The semantics of musical tension

Jens Hjortkjær
Department of Arts and Cultural Studies, University of Copenhagen, Denmark
The association between music and tension is a strong and long-standing one, and yet the psychological basis of this phenomenon remains poorly understood. Formal accounts of musical grammar argue that patterns of tension and release are central to the structural organization of music, at least within the tonal idiom, but it is not clear why structural relations should be experienced in terms of tension in the first place. Here, I will discuss a semantic view, suggesting that musical tension relies on cognitive embodied force schemata, as initially discussed by Leonard Talmy within cognitive semantics. In music, tension rating studies tend to relate musical tension to continuous measures of perceived or felt arousal, but here I will discuss how it may also relate to the ways in which listeners understand musical events as discrete states with opposing force tendencies. In a behavioral tension rating study, listeners rated tension continuously in musical stimuli with rapid amplitude contrasts that could represent one of two force-dynamic schemas: events either releasing or causing a force tendency. One group of participants was primed verbally beforehand by presenting an analog of the release-type schema in the experimental instructions. It was found that primed subjects rated tension with a distinctly opposite pattern relative to the unprimed group. The results support the view that musical tension relates to the ways in which listeners understand dynamic relations between musical events rather than being a simple continuous measure of arousal.
The Coupling of Gesture and Sound: Vocalizing to Match Flicks, Punches, Floats and Glides of Conducting Gestures
The vocabulary of words and phrases used by jazz singers to describe jazz voice sound is the subject of this research. In contrast to the ideal classical voice sound, which is linked to the need to project over loud accompaniments (e.g. formant tuning), the ideal jazz voice sound takes advantage of microphones, enabling greater expressive variation. Implicit concepts of ideal voice sounds influence teaching in conservatories and music academies but have been the subject of little empirical investigation. We are interviewing 20 Austrian jazz singers. All are or used to be students of jazz singing. In open interviews, each participant brings 10 examples of jazz singing and describes each singer's voice sound. The qualitative data are represented in an XML database. XSLT stylesheets are used to create tag clouds, where the size of a word reflects its number of occurrences. The vocabulary splits into a small core of commonly used terms such as deep, spoken and diverse (25 descriptors used by more than 60% of the participants) and a large periphery of intuitive associations reflecting the individuality of the perception, the description and the jazz voice sound itself (260 descriptors are used by less than 10% of the participants). We explored the ideal jazz voice sound without asking for it directly. Participants additionally showed remarkable motivation to listen to different sounds to cultivate their individuality as jazz singers, raising questions about the tension between uniformity and individuality in jazz pedagogy.
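The tag-cloud sizing described here (word size proportional to occurrence count) is easy to sketch. The study stores descriptors in an XML database and renders the clouds via XSLT; the snippet below only illustrates the frequency-to-size idea, and the descriptor list and size range are invented:

```python
from collections import Counter

# Hypothetical descriptors pooled across interviews.
descriptors = ["deep", "spoken", "deep", "warm", "deep", "spoken", "airy"]

counts = Counter(descriptors)
max_n = max(counts.values())

# Font size grows with occurrence count, as in a tag cloud (10pt to 30pt here).
sizes = {word: 10 + 20 * n // max_n for word, n in counts.items()}
for word in sorted(sizes, key=lambda w: -counts[w]):
    print(f"{word}: {sizes[word]}pt")
```

Core terms used by many participants would render large, while the long tail of idiosyncratic descriptors stays near the minimum size, matching the core/periphery split the abstract reports.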
Inaccurate singing as a dynamic phenomenon: Interval matching a live vocal model improves accuracy levels of inaccurate singers
Mari Tervaniemi
Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Finland
Department of Psychology, University of Jyväskylä, Finland
Centre of Interdisciplinary Music Research, Department of Music, University of Jyväskylä, Finland
In the neurosciences of music, musicians have traditionally been treated as a unified group. However, their musical preferences obviously differentiate them, for instance in terms of the major instrument they play, the music genre they are mostly engaged with, and their practicing style. Here our intention was to reveal the neurocognitive functions underlying the diversity of the expertise profiles of musicians. To this end, groups of adult musicians (jazz, rock, classical, folk) and a group of non-musicians participated in brain recordings (event-related potentials in the mismatch negativity (MMN) paradigm, which probes the brain's automatic reaction to any change in the sound environment). The auditory stimulation consisted of a short melody which includes mistakes in pitch, rhythm, timbre, key, and melody. During stimulation, the participants were instructed to watch a silent video. Our interest was in comparing the MMN response evoked by the mistakes to the genre the musicians are most actively involved in. We found that all melodic mistakes elicited an MMN response in all adult groups of participants. The strength of the MMN and a subsequent P3a response reflected the importance of various sound features in the music genre they specialized in: pitch (classical musicians), rhythm (classical and jazz musicians), key (classical and jazz musicians), and melody (jazz and rock musicians). In conclusion, MMN and P3a brain responses are sensitively modulated by the genre musicians are actively engaged with. This implies that not only musical expertise as such but the type of musical expertise can further modulate auditory neurocognition.
Absolute Pitch and Synesthesia: Two Sides of the Same Coin?

Shared and Distinct Neural Substrates of Music Listening
Speed Poster Session 11: Grand Pietra Hall, 15:30-16:00
Language perspectives

Perceiving meaningful discourse structure in music and language

Jiaxi Liu
Faculty of Music, Cambridge University, United Kingdom
Despite the common belief that music lacks truth-conditional meaning, recent evidence of similar neural processing of the syntactic and semantic aspects of music and language suggests that they have much in common (Steinbeis and Koelsch 2007). However, this similarity seems to break down at different structural levels. Music studies have proposed that listeners attend to local but not global structure (Tillmann and Bigand 2004; Deliège et al. 1997); linguistic data have yet to distinguish the level of meaningful structure perception. Thus, this study aims to make parallel findings for both domains, additionally comparing musicians to nonmusicians. Original musical and textual compositions were analysed for tree structure by the Generative Theory of Tonal Music (Lerdahl and Jackendoff 1983) and Rhetorical Structure Theory (Carlson et al. 2001), respectively. The branches at each tree depth were cut and randomized as audio-visual music clips and visual text slides in iMovie projects. Collegiate native English speakers (50 musicians and 50 nonmusicians) were asked to recreate what they considered the original work in a puzzle task. The resulting ordered strings were analysed using edit distance, revealing that successful recreation was overall independent of subject and stimulus type. Musicians performed better than nonmusicians for music only at intermediate tree depths (p=0.03). Cluster analyses suggested that musicians attended to structural (global) cues in their recreation process while nonmusicians relied on surface (local) cues. These novel findings provide empirical support for differing affinities for differing compositional features in music and language as perceived by musicians versus nonmusicians.
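The edit-distance scoring of the puzzle task can be sketched with a standard Levenshtein computation over ordered segment labels. The segment labels below are hypothetical; the study's actual stimuli were music clips and text slides:

```python
def edit_distance(a, b):
    """Levenshtein distance between two ordered sequences of segment labels,
    computed row by row with a single rolling array."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution (0 if match)
        prev = cur
    return prev[-1]

original = ["A", "B", "C", "D", "E"]    # the composition's true segment order
recreated = ["A", "C", "B", "D", "E"]   # a participant's reconstruction
print(edit_distance(original, recreated))  # → 2
```

A smaller distance means a reconstruction closer to the original ordering, so per-participant distances can be compared across subject groups and stimulus types as the abstract describes.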
Domain-generality of pitch processing: the perception of melodic contours and pitch accent timing in speech
Expertise vs. inter-individual differences: New evidence on the perception of syntax and rhythm in language and music
Research on the phonological loop and music processing remains inconclusive. Some researchers claim that the Baddeley and Hitch working memory model requires another module for music processing, while others suggest that music is processed in a similar way to verbal sounds in the phonological loop. The present study tested musical and verbal memory in musicians and non-musicians using an irrelevant-sound-style working memory paradigm. It was hypothesized that musicians (MUS; at least seven years of musical training) would perform more accurately than non-musicians (NONMUS) on musical but not verbal memory. Verbal memory for both groups was expected to be disrupted by verbal irrelevant sound only. In the music domain, a music expertise x interference type interaction was predicted: MUS were expected to experience no impairment under verbal irrelevant sound, whereas NONMUS would be impaired by both verbal and musical sounds. A standard forced-choice recognition (S/D) task was used to assess memory performance under conditions of verbal, musical and static irrelevant sound, across two experiments. On each trial the irrelevant sound was played in a retention interval between the to-be-remembered standard and comparison stimuli. Thirty-one musically proficient and 31 musically non-proficient Belmont University students participated across two experiments with similar interference structures. Results of two-way balanced ANOVAs yielded significant differences between musical participants and non-musical participants, as well as significant differences between interference types for musical stimuli, implying a potential revision of the phonological loop model to include a temporary storage subcomponent devoted to music processing.
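The two-way balanced ANOVA used here (expertise x interference type) can be computed by hand from cell data. A minimal sketch, with entirely hypothetical recognition scores; the 2x3 layout and per-cell n are assumptions for illustration, not the study's data:

```python
def two_way_anova(data):
    """Balanced two-way ANOVA. data[(a, b)] = equal-length list of scores per cell.
    Returns F ratios for factor A, factor B, and the A x B interaction."""
    a_lv = sorted({a for a, _ in data})
    b_lv = sorted({b for _, b in data})
    n = len(next(iter(data.values())))          # per-cell sample size (balanced)
    N = n * len(a_lv) * len(b_lv)
    grand = sum(x for c in data.values() for x in c) / N

    def mean(xs):
        xs = list(xs)
        return sum(xs) / len(xs)

    ma = {a: mean(x for (aa, b), c in data.items() if aa == a for x in c) for a in a_lv}
    mb = {b: mean(x for (a, bb), c in data.items() if bb == b for x in c) for b in b_lv}
    mc = {k: mean(c) for k, c in data.items()}

    # Sums of squares for main effects, interaction, and within-cell error.
    ss_a = n * len(b_lv) * sum((ma[a] - grand) ** 2 for a in a_lv)
    ss_b = n * len(a_lv) * sum((mb[b] - grand) ** 2 for b in b_lv)
    ss_ab = n * sum((mc[(a, b)] - ma[a] - mb[b] + grand) ** 2
                    for a in a_lv for b in b_lv)
    ss_err = sum((x - mc[k]) ** 2 for k, c in data.items() for x in c)

    df_a, df_b = len(a_lv) - 1, len(b_lv) - 1
    df_ab, df_err = df_a * df_b, N - len(a_lv) * len(b_lv)
    ms_err = ss_err / df_err
    return {"F_A": ss_a / df_a / ms_err,
            "F_B": ss_b / df_b / ms_err,
            "F_AB": ss_ab / df_ab / ms_err}

# Hypothetical scores per (group, irrelevant-sound type) cell, 2 per cell.
data = {
    ("MUS", "verbal"): [8, 9], ("MUS", "musical"): [6, 7], ("MUS", "static"): [9, 9],
    ("NON", "verbal"): [6, 5], ("NON", "musical"): [4, 4], ("NON", "static"): [7, 8],
}
print(two_way_anova(data))  # F_A = 49.0, F_B = 27.25, F_AB = 1.75
```

In a real analysis the F ratios would be compared against the F distribution with the corresponding degrees of freedom to get p-values.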
Speed Poster Session 12: Crystal Hall, 15:30-16:00
Melodic similarity

Implicit and explicit judgements on the melodic similarity of cases of plagiarism and the role of computational models

Anna Wolf,* Daniel Müllensiefen#
*Hanover Music Lab, Hanover University of Music, Drama and Media, Germany
#Department of Psychology, Goldsmiths College, University of London, United Kingdom
Towards Modelling Variation in Music as Foundation for Similarity
This paper investigates the concept of variation in music from the perspective of music similarity. Music similarity is a central concept in Music Information Retrieval (MIR); however, no comprehensive approach to music similarity yet exists. As a consequence, MIR faces the challenge of how to relate musical features to the experience of similarity by listeners. Musicologists and studies in music cognition have argued that variation in music leads to the experience of similarity. In this paper we review the concept of variation from three different research strands: MIR, musicology, and cognitive science. We show that all of these disciplines have contributed insights to the study of variation that are important for modelling variation as a foundation for similarity. We introduce research steps that need to be taken to model variation as a base for music similarity estimation within a computational approach.
Melodic Similarity: A Re-examination of the MIREX2005 Data

Alan Marsden
Lancaster Institute for the Contemporary Arts, Lancaster University, UK
Despite a considerable body of research, there is no clarity about the basic properties of melodic similarity, such as whether or not it constitutes a metric space, or whether it is a more complex phenomenon. An experiment conducted by Typke et al., used as a basis for the
A Melodic Similarity Measure Based on Human Similarity Judgments
stability and melodic contour. In a follow-up experiment, we show that our empirically derived measure of melodic similarity yielded superior performance to the Mongeau and Sankoff similarity algorithm. We intend to extend this measure to the comparison of melodies with multiple note changes.
percussion training (i.e., Control group) imitated on a percussion pad a short 6-note isochronous metrical pattern (Strong-weak-weak-Strong-weak-weak) at the rhythm provided by a metronome under four conditions: 1) with an isochronous metronome, 2) with an isochronous metronome but making a break in between repetitions, 3) with a non-isochronous, still predictable, metronome, and 4) with a non-isochronous and non-predictable metronome. Data were analyzed with Functional Data Analysis techniques (Ramsay & Silverman, 2002). The results showed that manipulating metronome isochrony affected IF's movement kinematics more than Controls'. For IF, stimulus isochrony (in conditions (1) and (2)) led to higher maximum amplitude of the top of the stick, an effect particularly visible in the vicinity of the strong beats. In addition, Functional ANOVAs allowed us to uncover the portions of the trajectories where differences between conditions are statistically significant. These analyses showed that for most of the strokes produced in condition (2), movement amplitude, velocity and acceleration were all higher than in conditions (3) and (4). These findings are in keeping with the effect of stimulus isochrony on performance timing previously observed in IF. We suggest that synchronizing with a non-isochronous sequence may have deleterious effects (visible both in timing and movement kinematics) in individuals with exceptional sensorimotor coupling skills.
In each of three experiments, 120 walks were made on a 2 km long circuit through various environments. In the first two experiments, 60 students walked twice, once without and once with music, or with different tempo ranges of music. The walkers had an mp3 player with good headphones and a small camera fixed to their belt. Markers were drawn in the environment. In the first experiment only 1 out of 60 walkers synchronised spontaneously to the music. In the second experiment music was offered with a tempo closer to the walking tempo of each subject: 3 music tracks were prepared differing by 8% in tempo. Now 5 out of 35 walkers synchronised. The third experiment was not aimed at synchronisation. Music was collected from the students: either music motivating for movement, or nice music that did not urge one to move. These pieces were rated with the Brunel Music Rating Inventory-2. Half of the 120 students received the motivating music and half the non-motivating music. The motivating music resulted in faster walks: 1.67 m/s vs 1.47 m/s. In order to stimulate the movements of walkers, the music need not be synchronised to the beat. This is in line with our earlier experiments in which walkers were explicitly asked to synchronise: some walkers did not synchronise but still walked faster to fast music.
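Deciding whether a walker counts as synchronised requires some tolerance criterion comparing step cadence to the music's tempo. A minimal illustrative check, reusing the 8% spacing between tracks in Experiment 2 as the tolerance; this criterion is an assumption for illustration, not the authors' actual analysis:

```python
def tempo_matched(step_cadence_bpm, music_tempo_bpm, tolerance=0.08):
    """True if step cadence lies within +/- tolerance (relative) of the music
    tempo; 0.08 mirrors the 8% tempo spacing between the prepared tracks."""
    return abs(step_cadence_bpm - music_tempo_bpm) / music_tempo_bpm <= tolerance

print(tempo_matched(118, 120))  # → True  (within ~1.7% of the tempo)
print(tempo_matched(100, 120))  # → False (almost 17% slower)
```

True synchronisation would additionally require stable phase alignment of footfalls to beats, not just matched rates, which is why camera and marker data were collected.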
Birgitta Burger, Marc R. Thompson, Geoff Luck, Suvi Saarikallio, Petri Toiviainen
Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music, University of Jyväskylä, Finland
Listening to music makes us move in various ways. Several factors can affect the characteristics of these movements, including individual factors, musical features, or the perceived emotional content of music. Music is based on regular and repetitive temporal patterns that give rise to a percept of pulse. From these basic metrical structures more complex temporal structures emerge, such as rhythm. It has been suggested that certain rhythmic features can induce movement in humans. Rhythmic structures vary in their degree of complexity and regularity, and one could expect that this variation influences movement patterns; for instance, when moving to rhythmically more complex music, the movements may also be more irregular. To investigate this relationship, sixty participants were presented with 30 musical stimuli representing different genres of popular music. All stimuli were 30 seconds long, non-vocal, and differed in their rhythmic complexity. Optical motion capture was used to record participants' movements. Two movement features were extracted from the data: Spatial Regularity and Temporal Regularity. Additionally, 12 beat-related musical features were extracted from the music stimuli. A subsequent correlational analysis revealed that beat-related musical features influenced the regularity of music-induced movement. In particular, a clear pulse and high percussiveness resulted in small spatial variation of participants' movements, whereas an unclear pulse and low percussiveness led to greater spatial variation of their movements. Additionally, temporal regularity was positively correlated with flux in the low frequencies (e.g., kick drum, bass guitar) and pulse clarity, suggesting that strong rhythmic components and a clear pulse encourage temporal regularity.
Helen M. Prior
Music Department, King's College London, UK
The notion of shaping music in performance is pervasive in musical practice and is used in relation to several different ideas, from musical structure to musical expression, and in relation to specific musical features such as phrasing and dynamics. Its versatile and multi-faceted nature prompted an interview study, which investigated musicians' use of the concept of musical shaping in a practical context. Semi-structured interviews were conducted with five professional violinists and five professional harpsichordists. These interviews incorporated musical tasks that involved participants playing a short excerpt of music provided by the researcher, as well as their own examples, to demonstrate their normal playing, playing while thinking about musical shaping, and sometimes, playing without musical shaping. These musical demonstrations were then discussed with participants to elicit descriptions of their shaping intentions. This poster will illustrate the multiple ways in which the interview data were examined, and explore the technical and methodological implications of these approaches. First, an Interpretative Phenomenological Analysis of the musicians' interview data revealed a wide range of themes. Secondly, Sonic Visualiser was used to analyse their musical demonstrations, which allowed the examination of the relationships between the musicians' shaping intentions, their actions, and the resulting sound. Thirdly, the data were explored in relation to participants' use of metaphors, which were expressed verbally, gesturally, and through musical demonstrations. The exploratory nature of the research area has exposed the value of the adoption of multiple approaches, as the relationships between musical shaping and other research areas have become apparent.
12th ICMPC - 8th ESCOM Joint Conference, Aristotle University of Thessaloniki, 23-28 July 2012
Accuracy
of
reaching
a
target
key
by
trained
pianists
Chie
Ohsawa,*
Takeshi
Hirano,*
Satoshi
Obata,
*
Taro
Ito,#
Hiroshi
Kinoshita*
*Graduate
School
of
Medicine,
Osaka
University,
Japan
#School
of
Health
and
Sports
Sciences,
Mukogawa
Women's
University,
Japan
One
fundamental
element
of
successful
piano
playing
is
moving
the
fingertip
to
hit
a
key
for
aimed
tone
production.
We
hypothesized
that
pianists
with
years
of
training
would
possess
relatively
accurate
spatial
memory
of
a
keyboard,
and
thus
be able
to
target
any
key
position
without
viewing
a
keyboard.
This
hypothesis
was
tested
in
10
highly
trained
pianists,
who sat on a chair facing a table on which either a flat paper copy of only the C4 key or a real-scale copy of a whole piano keyboard was present.
The
participant
moved
their
left
or
right
index
finger
to
the
target
key
(A1,
F2,
or
E3
for
the
left
hand,
A4,
G5
or
E6
for
the
right
hand)
after
touching
the
reference
key.
Kinematics
of
the
fingertip
were
recorded
by a 3D motion capture system
sampling
at
60
Hz.
Data
were
collected
10
times
for
each
key.
Constant,
absolute,
and
variable
errors
of
the
finger
center
relative
to
the
center
of
the
target
key
were
computed.
Contrary
to
our
hypothesis,
errors
in
the
no-keyboard
condition
were
considerable.
The
mean
constant
errors
for
A1,
F2,
E3,
A4,
G5,
and
E6
were
63.5,
58.6,
27.4,
6.2,
12.9,
and
29.1
mm,
respectively.
Corresponding
values
for
the
keyboard
condition were all less than 2 mm.
The
right-left
hand
difference
in
errors
suggests
the
presence
of
a
laterality
bias
in
spatial
memory.
The
larger
positive
constant
errors
for
more
remote
keys
suggest that the spatial memory may be built on an expanded representation of the keyboard.
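The three error measures named above (constant, absolute, and variable error) are standard motor-control statistics; a minimal sketch, with all trial values hypothetical rather than taken from the study:

```python
from statistics import mean, pstdev

def reach_errors(deviations_mm):
    """Standard motor-control error measures for repeated aiming trials.

    deviations_mm: signed distances (mm) of the fingertip centre from
    the target-key centre, one value per trial (hypothetical data).
    """
    constant = mean(deviations_mm)                      # signed bias
    absolute = mean(abs(d) for d in deviations_mm)      # mean magnitude
    variable = pstdev(deviations_mm)                    # trial-to-trial spread
    return constant, absolute, variable

# Example: 10 hypothetical no-keyboard trials aiming at a remote key
trials = [55.0, 70.2, 61.3, 58.8, 66.1, 72.5, 60.0, 64.4, 59.7, 67.0]
ce, ae, ve = reach_errors(trials)
```

A positive constant error with small variable error would indicate a systematic overshoot, as reported for the remote keys.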
Evaluation
parameters
for
proficiency
estimation
of
piano
based
on
tendency
of
moderate
performance
Asami
Nonogaki,1
Norio
Emura,2
Masanobu
Miura,3
Seiko
Akinaga,4
Masuzo
Yanagida5
1Graduate
School
of
Science
and
Technology,
Ryukoku
University,
Japan;
2College
of
Informatics
and
Human
Communication,
Kanazawa
Institute
of
Technology,
Japan;
3Faculty
of
Science
and
Technology,
Ryukoku
University,
Japan;
4Department
of
Education,
Shukugawa
Gakuin
College,
Japan;
5Faculty
of
Science
and
Engineering,
Doshisha
University,
Japan
This paper describes the automatic estimation of piano-performance proficiency for a Czerny etude.
Our
previous
study
proposed
a
method
of
proficiency
estimation
for
a
scale
performance
within
one
octave
by
the
MIDI-piano,
in
which
a
set
of
parameters
were
obtained
and
then
applied
to
the
automatic
estimation.
However, it is not sufficient to simply apply them to other musical excerpts, since piano performance usually involves several complex aspects, such as artistic expression.
Here we introduce another set of parameters for automatic estimation on a different musical task, a Czerny etude.
Even though the content of the task appears simple because of its equal intervals, players might produce deviations in loudness, tempo, and/or onset timing.
We
then
newly
introduce
several
parameters
concerning
tempo,
duration,
velocity,
onset
time,
normalized
tempo,
normalized
duration,
normalized
velocity,
normalized
onset,
slope
tempo,
slope
duration,
slope
velocity,
and
slope
onset,
where the normalized parameters are computed relative to the average over all performances, termed here the moderate performance.
By applying Principal Component Analysis to all the obtained parameters, we then obtained their principal components.
A simple classification method (k-nearest neighbours, k-NN) is employed to calculate their proficiency scores.
Results show that the correlation coefficients of the proposed method are 0.798, 0.849, 0.793, and 0.516 for task A at 75 and 150 bpm and for task B at 75 and 150 bpm, respectively, demonstrating the effectiveness of the proposed method.
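The scoring step of the pipeline described above (performance parameters projected onto principal components, then scored by k-NN) can be sketched as follows; the feature values, proficiency labels, and helper name are hypothetical illustrations, not the authors' implementation:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def knn_proficiency(query, rated, k=3):
    """Score an unrated performance as the mean proficiency of its
    k nearest neighbours in principal-component space.

    query: feature vector of the new performance
    rated: list of (feature_vector, proficiency_score) pairs
    (all values here are hypothetical illustrations)
    """
    nearest = sorted(rated, key=lambda fs: dist(query, fs[0]))[:k]
    return sum(score for _, score in nearest) / k

# Hypothetical performances projected onto two principal components,
# each with a teacher-assigned proficiency score
rated = [((0.1, 0.2), 8.5), ((0.9, 1.1), 4.0),
         ((0.2, 0.1), 9.0), ((1.0, 0.8), 3.5)]
score = knn_proficiency((0.15, 0.15), rated, k=2)
```

A new performance close to highly rated ones in component space thus receives a high proficiency score.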
The
Sung
Performance
Battery
(SPB)
Musical
emotion
and
facial
expression:
mode
of
interaction
as
measured
by
an
ERP
Keiko
Kamiyama*,
Dilshat
Abla#,
Koichi
Iwanaga,
and
Kazuo
Okanoya*
* Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo,
Japan;
#Noninvasive
BMI
Unit,
BSI-TOYOTA
Collaboration
Center,
RIKEN
Brain
Science
Institute,
Japan;
Department
of
Design,
Graduate
School
of
Engineering,
Chiba
University,
Japan;
Japan
Science
Technology
Agency,
ERATO,
Okanoya
Emotional
Information
Project,
Japan
Music has long been believed to express emotion through various elements in the music itself, while it has been increasingly reported that musical expression interacts with extra-musical factors.
In
order
to
reveal
how
these
two
emotional
processes
are
processed
in
the
brain,
we
recorded
the
electroencephalogram
(EEG)
of amateur musicians
and
non-musicians.
We
presented
several
pairs
of
musical
excerpts
and
images
of
facial
expressions,
each
of
which
represented
happy
or
sad
expressions.
Half
of
the
pairs
were
semantically
congruent
(congruent
condition),
where
the
emotional
meaning
of
facial
expression
and
music
were
the
same,
and
the
remaining
pairs
were
semantically
incongruent
(incongruent
condition).
During
the
EEG
recording,
participants
listened
to
the
musical
excerpt
for
500 ms,
immediately
after
the
presentation
of
the
facial
image
for
500
ms.
We
found
that
music
stimuli
elicited
a
larger
negative
component
in
the
250-450
ms
range
(N400)
under
the
incongruent
condition
than
under
the
congruent
condition,
notably
in
musicians.
Also,
in
musicians
the
N400
effect
appeared
regardless
of
the
emotional
type
of
music,
while
in
non-musicians
the
effect
was
observed
only
when
the
happy
music
excerpts
were
presented
as
target
stimuli.
These
results
indicated
that
the
sadness
of
music
was
not
automatically
extracted
in
non-musicians,
although
they
could
judge
the
congruency
of
stimulus
pairs
in
the
behavioral
test.
Also
it
was
suggested
that
facial
emotional
cognition
had
some
common
processes
with
musical
emotional
cognition
and
that
the
emotional
meanings
of
music
were
integrated
with
other
semantic
inputs
such
as
facial
expressions.
Experiential
effects
of
musical
pleasure
on
dopaminergic
learning
Melodies
without
Words:
Validity
of
Happy/Sad
Musical
Excerpts
for
Use
in
ERP
Studies
22.6
years)
using
E-Prime.
Subjects
were
asked
to
rate
each
excerpt
in
a
scale
of
1
to
7,
1
being
sad,
4
being
neutral
and
7,
happy.
All
of
the
subjects
were
non-musicians.
The
answers
were
analyzed
considering
the
mean
score
of
each
excerpt.
The
30
excerpts
with
means
close
to
neutral
(3,
4
or
5)
were
discarded.
The
remaining
50
stimuli
were
analyzed
in terms of their musical features.
After
the
analysis,
we
concluded
that
subjects
tended
to
guide
their
evaluation
by
tempo
(e.g.,
happy
excerpts
composed
in
not
such
a
fast
tempo
were
discarded),
tessitura
and
direction
of
melody
(e.g.,
happy
excerpts
with
a
downward
melody
were
discarded),
and
duration
of
the
notes
(e.g.,
excerpts
with
staccato
were
the
highest
rated).
It's
possible
that,
given
the
fact
that
the
subjects
were
non-musicians, they didn't
rely
on
mode
as
much
as
musicians
would.
Reinhard
Kopiez
Hanover
University
of
Music,
Drama,
and
Media,
Hanover
Music
Lab,
Germany
In
the
natural
sciences
the
replication
of
important
findings
plays
a
central
role
in
the
creation
of
verified
knowledge.
However,
in
the
discipline
of
psychology
there
is
only
one
attempt
at
a
systematic
reproduction
of
published
studies
(see
the
website
of
the
Reproducibility
project,
http://openscienceframework.org/project/shvrbV8uSkHewsfD4/
wiki
and
the
Project
Progress
and
Results
Spreadsheet).
In
music
psychology,
this
self-evident
tradition
of
replication
studies
plays
only
a
minor
role.
I
will
argue
that
replication
studies
have
two
important
functions:
(a)
as
a
best
practice
mechanism
of
academic
self-control
which
is
necessary
to
prevent
the
publication
of
premature
results;
(b)
as
a
reliable
way
for
the
production
and
integration
of
verified
knowledge
which
is
important
for
the
advancement
of
every
scientific
discipline.
Comparisons
of
selected
replications
with
original
studies
will
demonstrate
that
the
design
of
replications
is
a
creative
research
strategy.
Replication
studies
discussed
will
come
from
topics
such
as
music
cognition,
open-earedness,
or
neuroscience
of
music.
In
a
last
step
I
will
show
the
high
power
of
meta-analysis
in
the
production
of
verified
knowledge.
This
important
method
for
the
uncovering
of
reliable
effects
by
means
of
data
aggregation
from
single
studies
should
be
extended
in
the
field
of
empirical
music
research.
One
consequence
of
the
replication
approach
will
be
the
future
need
for
an
online
repository
of
already
conducted
replication
studies.
This
idea
will
be
discussed
in
the
symposium.
Aspects
of
handedness
in
Deutsch's
octave
illusion
-
a
replication
study
An
extended
replication
study
of
the
octave
illusion
(Deutsch
1974,
1983)
is
presented.
Since
the
first
description
of
the
octave
illusion
in
1974
several
studies
showed
that
the
perception
of
the
two-tone
pattern
depends
on
subjects'
handedness.
Most
of
the
right-handed
subjects
reported hearing
the
high
tone
of
the
octave
at
the
right
ear.
Left-handed subjects either perceived the high tone at the left ear or tended to perceive more complex tone patterns (39%).
In
all
related
studies
the
handedness
categorization
was
done
by
means
of
a
questionnaire,
e.g.
the
handedness
inventory
of
Varney
and
Benton
(1975).
Several
current
studies
(e.g.
Kopiez,
Galley,
Lehmann
2010)
however
show
that
objectively
non-right-handed
persons
cannot
be
identified
by
handedness
inventories.
In
concordance
with
Annett's
"right
shift
theory"
(2002), performance measures such as speed tapping
seem
to
be
a
much
more
reliable
handedness
predictor.
It
is
supposed
that
more
distinct
perception
patterns
for
the
right-
and
non-right-handed
subjects
can
be
obtained
when
performance
measures
are
used
for
handedness
classification.
Especially
the
group
size
of
right-handers
in
the
original
study
that
perceive
complex
tone
patterns
(17%)
is
likely
to
be
much
smaller.
In
the
replication
study
Varney
and
Benton's
handedness
inventory
as
well
as
a
speed
tapping
task
were
used
to
classify
left-
and
right-handed
subjects.
All
131
subjects
(M=28.88,
SD=10.21)
were
naive
concerning
the
octave
illusion.
The
subjects'
perception
of
the
original
two-tone
pattern
was
measured
in
a
forced-choice
task
according
to
the
categories
used
by
Deutsch
(octave,
single,
complex).
The
results
of
Deutsch's
study
could
be
replicated
when
using
the
same
handedness
inventory.
The
performance
measurement
task
however
led
to
a
significantly
clearer
distinction
between
the
left-
and
right-handed
subjects
(w=.42,
p=.0001
in
contrast
to
w=.20,
p=.19
in
the
replication
and
w=.28,
p<.05
in
the
original
study)
and
more
structured
perception
patterns
could
be
observed
within
the
left-handed
group.
The
group
size
of
the
right-handed
subjects
that
perceive
complex
patterns
is
significantly
smaller
(w=.36,
p=.0001)
when
using
performance
measures
(5%)
instead
of
the
questionnaire
(replication:
15%,
original
study:
17%).
All
in
all
the
results
of
Deutsch
could
be
replicated.
Misclassification
of
handedness
could
be
reduced
and
the
observed
perception
patterns
were
more
distinct,
when
speed
tapping
was
used
for
measuring
handedness.
Therefore
performance
measurements
might
be
a
useful
method
in
future
studies
that
deal
with
aspects
of
the
octave
illusion
and
handedness.
Kathrin
Bettina
Schlemmer1,
Timo
Fischinger2,
Klaus
Frieler3,
Daniel
Müllensiefen4,
Kai
Stefan
Lothwesen5,
Kelly
Jakubowski6
1Katholische
Universität Eichstätt-Ingolstadt,
Germany,
2Universität
Kassel,
Germany,
3Universität
Hamburg,
Germany,
4,6Goldsmiths,
University
of
London,
UK,
5Hochschule
für
Musik
und
Darstellende
Kunst
Frankfurt
am
Main,
Germany
When
analysing
human
long-term
memory
for
musical
pitch,
relational
memory
is
commonly
distinguished
from
absolute
memory.
The
ability
of
most
musicians
and
non-musicians
to
recognize
tunes
even
when
presented
in
a
different
key
suggests
the
existence
of
relational
music
memory.
However,
findings
by
Levitin
(1994)
point
towards
the
additional
existence
of
absolute
music
memory.
In
his
sample,
the
majority of non-absolute-pitch possessors
could
produce
pitch
at
an
absolute
level
when
the
task
was
to
recall
a
very
familiar
pop
song
recording.
Up
to
now,
no
replication
of
this
study
has
been
published.
The
aim
of
this
paper
is
to
present
the
results
of
a
replication
project
across
six
different
European
labs.
All
labs
used
the
same
methodology,
carefully
replicating
the
experimental
conditions
of
Levitin's
study.
In
each
lab,
between
40
and
60
participants
(primarily
university
students
with
different
majors,
musicians
and
non-musicians)
were
tested.
Participants
recalled
a
pop
song
that
they
had
listened
to
very
often,
and
produced
a
phrase
of
this
song.
The
produced
songs
were
recorded,
analysed
regarding
pitch,
and
compared
with
the
published
original
version.
Preliminary
results
suggest
that
participants
show
a
tendency
to
sing
in
the
original
key,
but
a
little
flat.
The
distribution
of
the
data
is
significantly non-uniform, but more spread out than Levitin's
data.
The
distributions
differ
significantly
between
the
three
labs
analysed
so
far.
Our
replication
study
basically supports
the
hypothesis
that
there
is
a
strong
absolute
component
for
pitch
memory
of
very
well-known
tunes.
However, a decline effect could be observed in the results, as well as other effects to be discussed.
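Comparing a produced pitch with the original recording is typically done on a logarithmic (semitone) scale; a minimal sketch, with the frequencies below hypothetical rather than taken from the study:

```python
from math import log2

def semitone_offset(sung_hz, original_hz):
    """Signed deviation of a sung pitch from the original, in
    semitones (negative = flat); 12 semitones per octave on a
    logarithmic frequency scale."""
    return 12 * log2(sung_hz / original_hz)

# Hypothetical example: original note A4 = 440 Hz, sung slightly flat
offset = semitone_offset(430.0, 440.0)   # about -0.4 semitones
```

A cluster of offsets near zero but slightly negative would match the reported tendency to sing in the original key, but a little flat.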
Estimating
historical
changes
in
consonance
by
counting
prepared
and
unprepared
dissonances
in
musical
scores
Major
and
Minor:
An
Empirical
Study
of
the
Transition
between
Classicism
and
Romanticism
An
exploratory
study
of
young
children's
technology-enabled
improvisations
From
Eco
to
the
Mirror
Neurons:
Founding
a
Systematic
Perspective
of
the
Reflexive
Interaction
Paradigm
hypothesis
that
the
reflexive
interaction
enhances
teaching/learning
processes
and
musical
creativity
in
children.
Despite
its
increasing
importance
in
compositions
in
the
nineteenth
and
twentieth
centuries,
timbre
has
not
been
theorized
in
research
to
the
same
extent
as
other
musical
parameters.
Typically,
orchestration
manuals
provide
prescriptions
and
prohibitions
of
instrumental
combinations
and
short
excerpts
to
be
emulated.
Empirical
studies
suggest
that
emotional
responses
may
be
induced
by
changes
in
orchestration,
such
as
a
sudden
shift
in
texture
and
the
alternation
of
the
orchestra
and
a
soloist.
Some
orchestration
treatises
allude
to
these
expressive
gestures,
but
a
conceptual
framework
is
still
lacking.
Our
first
aim
is
to
model
one
aspect
of
the
dynamics
of
the
listening
experience
by
investigating
the
musical
features
in
orchestral
music
that
elicit
emotional
responses.
Additionally,
we
aim
to
contribute
to
the
development
of
a
theory
of
orchestration
gestures
through
music-theoretical
analyses
and
principles
from
timbre
perception.
Musical
excerpts
were
chosen
to
fit
within
four
categories
defined
by
the
researchers
based
on
instrumentation
changes:
gradual
or
sudden
addition,
or
gradual
or
sudden
reduction
of
instruments.
Forty-five
participants
(22
musicians
and
23
nonmusicians)
listened
to
the
excerpts
and
continuously
moved
a
slider
to
indicate
the
intensity
of
their
emotional
responses.
They
also
completed
questionnaires
outlining
their
specific
subjective
experiences
(chills,
tears,
and
other
reactions)
after
each
excerpt.
Musical
features
of
the
acoustic
signal
were
coded
as
time
series
and
used
as
predictors
of
the
behavioural
ratings
in
a
linear
regression
model
using
the
ordinary
least
squares
approach
(Schubert
2004).
The
texture
parameter
was
expanded
to
include
the
contributions
of
each
instrument
family.
The
results
suggest
that
there
are
significant
differences
between
the
participants'
continuous
response
profiles
for
the
four
gesture
categories.
Musicians
and
nonmusicians
exhibit
similar
emotional
intensity
curves
for
the
gradual
gestures
(additive
and
reductive);
however,
musicians
tend
to
anticipate
the
sudden
changes,
whereas
non-musicians
are
more
delayed
in
their
responses.
For
both
gradual
and
sudden
reductive
excerpts,
participants
demonstrate
a
sustained
lingering
effect
of
high
emotional
intensity
despite
the
reduction
of
instrumental
forces,
loudness,
and
other
parameters.
Through
discussion
of
new
visualizations
created
from
musical
feature
overlays
and
the
results
of
the
regression
study,
we
will
highlight
relationships
between
perceptual
and
musical/acoustical
dimensions,
quantify
elements
of
the
temporality
of
these
experiences,
and
relate
these
to
the
retrospective
judgments.
To
our
knowledge,
this
is
the
first
study
that
specifically
investigates
the
role
of
timbral
changes
on
listeners'
emotional
responses
in
interaction
with
other
musical
parameters.
intensity
of
felt
emotions
induced
by
music.
The
possible
contribution
of
empathy
was
investigated
by
analysing
the
results
of
two
separate
experiments.
In
Experiment
1,
131
participants
listened
to
16
film
music
excerpts
and
evaluated
the
intensity
of
their
emotional
responses.
In
Experiment
2,
60
participants
were
randomly
assigned
to
either
a
neutral
music
group
or
a
sad
music
group.
The
induced
emotions
were
assessed
using
two
indirect
measures
of
emotional
states:
a
word
recall
task,
and
a
facial
expression
judgment
task.
In
Experiment
1,
trait
empathy
correlated
with
the
self-rated
intensity
of
emotions
experienced
in
response
to
tender
and
sad
excerpts.
In
Experiment
2,
trait
empathy
was
reliably
associated
with
induced
sadness
as
measured
by
the
facial
expression
judgment
task in
the
sad
music
group.
The
results
suggest
that
trait
empathy
may
indeed
enhance
the
induction
of
emotion
through
music
at
least
in
the
case
of
certain
emotions.
The
self-report
and
indirect
measures
indicated
that
highly
empathic
people
may
be
more
susceptible
to
music-induced
sadness
and
tenderness,
possibly
reflecting
their
tendency
to
feel
compassion
and
concern
for
others.
Music
Preferences
in
the
Early
Years:
Infants'
Emotional
Responses
to
Various
Auditory
Stimulations
differences
in
the
operation
of
temporal
error
correction
mechanisms,
such
as
phase
correction,
that
enable
internal
timekeepers
in
co-performers
to
remain
entrained
despite
tempo
fluctuations.
The
current
study
investigated
the
relationship
between
phase
correction
and
interpersonal
sensorimotor
synchronization.
Phase
correction
was
assessed
in
40
participants
by
estimating
the
proportion
of
asynchronies
that
each
individual
corrected
for
when
synchronizing
finger
taps
(on
a
percussion
pad)
with
adaptively
timed
auditory
sequences.
Participants
were
subsequently
paired
to
form
10
high
correcting
dyads
and
10
low
correcting
dyads.
Each
dyad
performed
a
synchronization-continuation
task
that
required
both
individuals
to
tap
together
in
time
with
a
2
Hz
auditory
metronome
(for
20
sec)
and
then
to
continue
tapping
together
when
the
metronome
ceased
(for
20
sec).
Each
individual's
taps
produced
a
distinctive
percussion
sound.
The
variability
of
interpersonal
asynchronies
was
greater
for
low
than
high
correcting
dyads
only
when
the
metronome
paced
the
interaction.
The
lag-1
autocorrelation
of
interpersonal
asynchronies
was
likewise
only
relatively
high
in
low
correcting
dyads
during
paced
tapping.
Low
correcting
dyads
may
be
able
to
stabilize
their
performance
during
self-paced
continuation
tapping
by
increasing
the
gain
of
phase
correction
or
by
engaging
in
period
correction
(i.e.,
tempo
adjustment).
These
findings
imply
compensatory
mutual
adaptive
timing
strategies
that
are
most
likely
effortful
and
may
have
costs
in
attentionally
demanding
contexts
such
as
musical
ensemble
performance.
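The lag-1 autocorrelation of interpersonal asynchronies used in this analysis can be computed as follows (the asynchrony values are hypothetical):

```python
from statistics import mean

def lag1_autocorr(series):
    """Lag-1 autocorrelation: correlation of a series with itself
    shifted by one event (here, successive tap asynchronies)."""
    m = mean(series)
    dev = [x - m for x in series]
    num = sum(a * b for a, b in zip(dev, dev[1:]))
    den = sum(d * d for d in dev)
    return num / den

# Hypothetical asynchronies (ms) between two tappers' tap onsets;
# the alternating sign pattern yields a negative lag-1 value
asyn = [12.0, -5.0, 8.0, -2.0, 10.0, -6.0, 7.0]
r1 = lag1_autocorr(asyn)
```

A strongly negative lag-1 value indicates overcorrection from tap to tap, whereas values near zero suggest well-calibrated phase correction.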
Knowing
too
much
or
too
little:
The
effects
of
familiarity
of
a
co-performer's
part
on
interpersonal
coordination
in
piano
duos
Wednesday
25
July
Chia-Jung
Tsay
Harvard
University,
Cambridge,
United
States
There
exists
a
wide
consensus
that
sound
is
central
to
judgment
about
music
performance.
Although
people
often
make
evaluations
on
the
basis
of
visual
cues,
these
are
often
discounted
as
peripheral
to
the
meaning
of
music.
Yet,
people
can
lack
insight
into
their
own
capacities
and
preferences,
or may be unwilling
to
report
their
beliefs.
This
suggests
that
there
may
be
gaps
between
what
we
say
we
use
to
evaluate
performance,
and
what
we
actually
use.
People
may
be
unlikely
to
recognize
or
admit
that
visual
displays
can
affect
their
judgment
about
music
performance,
a
domain
that
is
defined
by
sound.
Six
sets
of
experiments
demonstrated
that
visual
information
is
what
people
actually
rely
on
when
making
rapid
judgments
about
performance.
These
findings
were
extended
in
experiments
elaborating
on
1)
the
generalizability
and
persistence
of
effects
throughout
domains
and
levels
of
analyses,
and
2)
potential
mechanisms
such
as
attention
to
specific
types
of
visual
cues.
Additional
experiments
further
examine
the
underlying
visual
and
affective
contributions
to
judgments
of
performance,
the
role
of
expertise
in
such
decision
making,
and
the
implications
for
organizational
performance
and
policy.
Jason
Yust
School
of
Music,
Boston
University,
USA
The
lack
of
attention
given
to
Schenkerian
theory
by
empirical
research
in
music
is
striking
when
compared
to
its
status
in
music
theory
as
a
standard
account
of
tonality.
In
this
paper
I
advocate
a
different
way
of
thinking
of
Schenkerian
theory
that
can
lead
to
empirically
testable
claims,
and
report
on
an
experiment
that
shows
how
hypotheses
derived
from
Schenker's
theories
explain
features
of
listeners'
perception
of
key
relationships.
To
be
relevant
to
empirical
research,
Schenker's
theory
must
be
treated
as
a
collection
of
interrelated
but
independent
theoretical
claims
rather
than
a
comprehensive
analytical
method.
These
discrete
theoretical
claims
can
then
lead
to
hypotheses
that
we
can
test
through
empirical
methods.
This
makes
it
possible
for
Schenkerian
theory to improve
our
scientific
understanding
of
how
listeners
understand
tonal
music.
At
the
same
time,
it
opens
the
possibility
of
challenging
the
usefulness
of
certain
aspects
of
the
theory.
This
paper
exemplifies
the
empirical
project
with
an
experiment
on
the
perception
of
key
distance.
The
results
show
that
two
features
of
Schenkerian
theory
predict
how
listeners
rate
stimuli
in
terms
of
key
distance.
The
first
is
the
Schenkerian
principle
of
composing
out
a
harmony,
and
the
second
is
the
theory
of
voice-leading
prolongations.
In
a
regression
analysis,
both
of
these
principles
significantly
improve
upon
a
model
of
distance
ratings
based
on
change
of
scalar
collection
alone.
How
Fast
Can
Music
and
Speech
Be
Perceived?
Key
Identification
in
Time-
Compressed
Music
with
Periodic
Insertions
of
Silence
Morwaread
M.
Farbood,*
Oded
Ghitza,#
Jess
Rowland,
Gary
Marcus,
David
Poeppel
*
Dept.
of
Music
and
Performing
Arts
Professions,
New
York
University,
USA;
#
Dept.
of
Biomedical
Engineering,
Boston
University,
USA;
Dept.
of
Psychology,
New
York
University,
USA;
Dept.
of
Art
Practice,
University
of
California,
Berkeley,
USA;
Center
for
Neural
Science,
New
York
University,
USA
This
study
examines
the
timescales
at
which
the
brain
processes
structural
information
in
music
and
compares
them
to
timescales
implicated
in
previous
work
on
speech.
Using
an
experimental
paradigm
similar
to
the
one
employed
by
Ghitza
and
Greenberg
(2009)
for
speech,
listeners
were
asked
to
judge
the
key
of
short
melodic
sequences
that
were
presented
at
a
very
fast
tempo
with
varying
packaging
rates,
defined
by
the
durations
of
silence
gaps
inserted
periodically
in
the
audio.
This
resulted
in
a
U-shaped
key
identification
error
rate
curve,
similar
in
shape
to
the
one
implicated
for
speech
by
Ghitza
and
Greenberg.
However,
the
range
of
preferred
packaging
rates
was
lower
for
music
(packaging
rate
of
1.5-5
Hz)
than
for
speech
(6-17
Hz).
We
hypothesize
that
music
and
speech
processing
rely
on
comparable
oscillatory
mechanisms
that
are
calibrated
in
different
ways
based
on
the
specific
temporal
structure
of
their
input.
The
Role
of
Phrase
Location
in
Key
Identification
by
Pitch
Class
Distribution
Harmony
Perception
by
Periodicity
and
Granularity
Detection
Frieder
Stolzenburg
Automation
and
Computer
Sciences
Department,
Harz
University
of
Applied
Sciences,
Germany
Music
perception
and
composition
seem
to
be
influenced
not
only
by
convention
or
culture,
but
also
by
the
psychophysics
of
tone
perception.
Early
models
express
musical
intervals
by
simple
fractions.
This
helps
to
understand
that
human
subjects
rate
harmonies,
e.g.
major
and
minor
triads,
differently
with
respect
to
their
sonority.
Newer
explanations,
based
upon
the
notion
of
consonance
or
dissonance,
correlate
better
to
empirical
results
on
harmony
perception,
but
still
do
not
explain
the
perceived
sonority
of
common
triads
well.
By
applying
results
from
neuroscience
and
psychophysics
on
periodicity
detection
in
the
brain
consistently,
we
obtain
a
more
precise
theory
of
musical
harmony
perception:
The
perceived
sonority
of
a
chord
decreases
with
the
ratio
of
the
period
length
of
the
chord
(its
virtual
pitch)
relative
to
the
period
length
of
its
lowest
tone
component, called harmonicity.
In
addition,
the
number
of
extrema
in
one
period
of
its
lowest
tone
component, called granularity,
appears
to
be
relevant.
The
combination
of
both
values
in
one
measure,
counting
the
maximal
number
of
times
that
the
whole
periodic
structure
can
be
decomposed
in
time
intervals
of
equal
length,
gives
us
a
powerful
approach
to
the
analysis
of
musical
harmony
perception.
The
analysis
presented
here
demonstrates
that
it
does
not
matter
much
whether
tones
are
presented
consecutively
as
in
scales
or
simultaneously
as
in
chords
or
chord
progressions.
The
presented
approach
yields
meaningful
results
for
dyads
and
common
triads
and
classical
diatonic
scales,
showing
highest
correlation
with
empirical
results
(r
>
0.9).
Gary
Yim
Music
Theory,
The
Ohio
State
University,
USA
It
is
proposed
that
two
different
harmonic
systems
govern
popular
music
chord
sequences:
affordant
harmony
and
functional
harmony.
Affordant
chord
transitions
favor
chords
and
chord
transitions
that
minimize
technical
difficulty
when
performed
on
the
guitar,
while
functional
chord
transitions
favor
those
based
on
traditional
harmonic
functions.
A
corpus
analysis
compares
these
systems
by
encoding
each
song
in
two
ways.
Songs
are
encoded
with
their
absolute
chord
names
(such
as
Cm),
characterizing
the
chord's
physical
position
on
the
guitar; this operationalizes
the
affordant
harmonic
system.
They
are
also
encoded
with
Roman
numerals,
characterizing
the
chord's
harmonic
function; this operationalizes
the
functional
harmonic
system.
The
total
entropy
(a
measure
of
unexpectedness)
within
the
corpus
for
each
encoding
is
calculated.
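The entropy comparison can be sketched with Shannon entropy over chord-transition counts; the counts below are hypothetical illustrations, not corpus data:

```python
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of a distribution given raw counts;
    lower entropy corresponds to less unexpectedness in the corpus."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c)

# Hypothetical transition counts for one corpus under the two encodings
absolute_counts = [40, 30, 20, 10]   # e.g. C->G, G->C, C->Am, ...
roman_counts    = [60, 25, 10, 5]    # e.g. I->V, V->I, I->vi, ...

h_absolute = entropy(absolute_counts)
h_roman = entropy(roman_counts)
```

The more concentrated distribution yields lower entropy; in the study it was the Roman numeral encoding that came out lower.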
Arguably,
the
encoding
with
the
lower
entropy
value
(that
is,
less
unexpectedness)
corresponds
with
the
harmonic
system
that
more
greatly
influences
the
chord
transitions.
It
was
hypothesized
that
affordant
factors
play
a
greater
role
than
functional
factors,
and
therefore
a
lower
entropy
value
for
the
letter-name
encoding
was
expected.
Instead,
a
lower
entropy
value
for
the
Roman
numeral
encoding
was
found.
Thus,
the
results
are
not
consistent
with
the
original
hypothesis.
However,
post-hoc
analyses
yielded
significant
results,
consistent
with
the
claim
that
affordant
factors
(that
is,
the
physical
movements
involved
in
playing
a
guitar)
do
play
some
role
in
popular
music
chord
sequences.
Nevertheless,
the
role
of
functional
harmony
cannot
be
downplayed.
Harmonic
Expectation
in
Twelve-Bar
Blues
Progressions
Bryn
Hughes
Ithaca
College,
USA
Harmonic
expectation
has
been
shown
to
reflect
syntactical
rules
for
chord-to-chord
connections
in
both
short
and
long
musical
contexts.
These
expectations
may
derive
from
the
activation
of
specific
musical
schemata,
providing
listeners
with
the
necessary
context
for
identifying
syntactical
errors.
Few
empirical
studies
have
addressed
the
connection
between
chord-to-chord
syntax
and
larger
schemata,
such
as
phrases
or
form.
The
twelve-bar
blues,
with
its
three
unique
phrases,
offers
an
opportunity
to
investigate
this
relationship.
This
research
investigates
whether
listeners
expect
chord
successions
presented
in
the
context
of
the
twelve-bar
blues
idiom
to
adhere
to
common-practice
syntax.
Additionally,
it
addresses
the
degree
to
which
harmony
affects
the
activation
of
phrase
schemata.
Participants
listened
to
16-second
synthesized
excerpts
representing
a
phrase
from
the
standard
twelve-bar
blues.
Each
phrase
included
a
single
variable
chord.
For
each
trial,
participants
provided
a
goodness
rating
on
a
six-point
scale
and
indicated
whether
they
thought
the
excerpt
came
from
the
beginning
(Phrase
1),
middle
(Phrase
2),
or
end
(Phrase
3)
of
a
twelve-bar
blues.
Ratings
were
interpreted
as
levels
of
expectancy
in
accordance
with
the
concept
of
misattribution.
Listeners
preferred
harmonic
successions
in
which
the
relationship
between
chord
roots
reflected
common
practice;
however,
two
instances
of
root
motion
idiosyncratic
to
blues
also
received
high
ratings.
The
variable
chord
significantly
affected
phrase
labelling.
The
magnitude
of
this
effect
was
dependent
upon
the
variable
chord's
location
within
the
phrase
and
the
surrounding
chords.
Successions
for
which
a
consensus
phrase
label
emerged
received
significantly
higher
ratings
than
those
that
did
not
receive
a
clear-cut
phrase
label.
In
some
cases,
ratings
and
phrase
labels
combined
to
reveal
that
specific
chord
successions
can
invoke
different
expectations
depending
on
the
presently
active
phrase
schema.
Harmonic
expectation
in
blues
includes
a
wider
range
of
acceptable
root
motion.
Phrase
schemata
are
defined
both
by
their
harmonic
content
and
by
the
order
in
which
that
content
is
presented.
Single
chords
can
affect
the
strength
of
an
active
schema
and
can
suppress
the
activation
of
other
viable
schemata.
Listeners
have
stronger
expectations
for
phrases
that
can
be
clearly
identified
as
part
of
the
larger
musical
context.
A
Directional
Interval
Class
Representation
of
Chord
Transitions
Emilios
Cambouropoulos
School
of
Music
Studies,
Aristotle
University
of
Thessaloniki,
Greece
Chords
are
commonly
represented,
at
a
low
level,
as
absolute
pitches
(or
pitch
classes)
or,
at
a
higher
level,
as
chord
types
within
a
given
tonal/harmonic
context
(e.g.
Roman
numeral
analysis).
The
former
is
too
elementary,
whereas the latter requires
sophisticated
harmonic
analysis.
Is
it
possible
to
represent
chord
transitions
at
an
intermediate
level
that
is
transposition-invariant
and
idiom-independent
(analogous
to
pitch
intervals
that
represent
transitions
between
notes)?
In
this
paper,
a
novel
chord
transition
representation
is
proposed.
A
harmonic
transition
between
two
chords
can
be
represented
by
a
Directed
Interval
Class
(DIC)
vector.
The
proposed
12-dimensional
vector
encodes
the
number
of
occurrences
of
all
directional
interval
classes
(from
0
to
6
including
+/-
for
direction)
between
all
the
pairs
of
notes
of
two
successive
chords.
Apart
from
octave
equivalence
and
interval
inversion
equivalence,
this
representation
preserves
directionality
of
intervals
(up
or
down).
Interesting properties of this representation include the following: it is easy to compute, independent of root finding and of key finding, incorporates voice-leading qualities, preserves chord-transition asymmetry (e.g. different vectors for I-V and V-I), is transposition-invariant, is independent of chord type, and is applicable to tonal, post-tonal and atonal music; moreover, in most instances, the chords can be uniquely derived from a vector.
DIC
vectors
can
be
organised
in
different
categories
depending
on
their
content,
and
distance
between
vectors
can
be
used
to
calculate
harmonic
similarity
between
different
music
passages.
Some
preliminary
examples
are
presented.
This
proposal
provides
a
simple
and
potentially
powerful
representation
of
elementary
harmonic
relations
that
may
have
interesting
applications
in
the
domain
of
harmonic
representation
and
processing.
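As a rough illustration of the DIC idea (a sketch under our own assumptions, not code from the paper): the function below counts directed interval classes between all note pairs of two chords given as MIDI note numbers. The bin ordering (0, +1..+6, -1..-5) and the folding of the tritone into a single +6 bin are our assumptions; the abstract only specifies twelve directional classes from 0 to 6 with +/- for direction.

```python
def dic_vector(chord_a, chord_b):
    """Count directed interval classes between all note pairs of two
    successive chords (given as MIDI note numbers)."""
    counts = {}
    for a in chord_a:
        for b in chord_b:
            iv = (b - a) % 12              # pitch-class interval 0..11
            if iv == 0:
                key = 0                    # unison/octave
            elif iv <= 6:
                key = iv                   # ascending classes +1..+6
            else:
                key = iv - 12              # descending classes -1..-5
            counts[key] = counts.get(key, 0) + 1
    order = [0, 1, 2, 3, 4, 5, 6, -1, -2, -3, -4, -5]   # 12 bins
    return [counts.get(k, 0) for k in order]

def dic_distance(v1, v2):
    # Euclidean distance between DIC vectors, one possible (crude)
    # harmonic similarity measure between chord transitions
    return sum((a - b) ** 2 for a, b in zip(v1, v2)) ** 0.5
```

By construction the vector depends only on pitch differences, so it is transposition-invariant, and reversing the order of the two chords generally yields a different vector, preserving transition asymmetry.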
Matthew
Woolhouse
School
of
the
Arts,
Faculty
of
Humanities,
McMaster
University,
Canada
A
formal
grouping
model
is
used
to
capture
the
experience
of
tonal
attraction
within
chromatic
music,
i.e.
its
dynamic
ebb
and
flow.
The
model
predicts
the
level
of
tonal
attraction
between
temporally
adjacent
chords.
The
functional
ambiguity
of
nineteenth-century
chromatic
harmony
can
be
problematic:
chromatic
chords,
unlike
diatonic
chords,
often
have
ill-defined
roots,
and
thus
their
proper
functions
are
difficult
to
establish.
An
important
feature
of
the
model,
however,
is
that
the
key
or
tonal
context
of
the
music
does
not
need
to
be
specified.
The
model
is
based
on
the
idea
of
interval
cycle
proximity
(ICP),
a
grouping
mechanism
hypothesized
to
contribute
to
the
perception
of
tonal
attraction.
This
paper
illustrates
the
model
with
an
analysis
of
the
opening
of
Wagner's
Tristan
und
Isolde,
and
shows
that
the
model
can
predict
the
opening
sequence
of
Tristan
in
terms
of
tonal
attraction
without
the
chords
needing
to
be
functionally
specified.
Important
Experiences
and
Interactions
in
the
Occupational
Identity
Development
of
Music
Educators
Joshua
A.
Russell
The
Hartt
School,
The
University
of
Hartford,
USA
The
purposes
of
this
paper
were
to
describe
the
reported
professional
identity
of
in-service
music
educators
through
the
lens
of
symbolic
interactionism
and
to
identify
activities
and
interactions
that
music
educators
can
seek
out
in
order
to
inform
their
own
professional
identity.
Three
hundred
secondary
music
educators
from
the southwestern
United
States
responded
to
the
Music
Educator
Career
Questionnaire,
which
was
developed
from
previous
research.
Participants
responded
to
a
series
of
ipsative
items
designed
to
elicit
information
regarding
their
occupational
identity
as
well
as
the
perceived
importance
of
different
activities
or
interactions.
Music educators saw themselves, and believed others saw them, as educators, ensemble leaders, creative businesspersons, and entertainers.
However,
their
musical
identities
separated
into
both
an
external
music
identity,
in
which
others
saw
them
as
a performer, artist, or scholar,
and
an
internal
identity,
in
which
they
saw
themselves
differently
in
the
same
roles.
The
impact
of
different
activities
and
interactions
on
the
various
identified
occupational
identities
will
be
discussed
as
a
means
to
assist
music
educators
in selecting
their
own
most
appropriate
occupational
identity
and
engage
in
activities
and
with
individuals
in
order
to
develop
their
chosen
identity.
As
teachers
move
from
preservice
to
in-service,
their
identities
may
transform
from
an
integrated
musician
identity
and
segregated
educator
identity
to
an
integrated
educator
identity
and
segregated
musician
identity
unless
they
intentionally
seek
out
interactions
and
activities
to
develop
a
continuously
integrated
occupational
identity.
Implications
are
discussed.
Cognitive
and
emotional
aspects
of
pupils'
attitudes
towards
piano
teachers
and
piano
lessons
Malgorzata
Chmurzynska
Department
of
Music
Psychology,
Chopin
University
of
Music
Professional
primary
music
schools
in
Poland
aim
at
creating
well-educated
and
competent
future
performing
musicians
as
well
as
their
audience
(comprising
primarily
those
who
will
not
pursue
further
stages
of
musical
education).
However,
the
majority
of
pupils
who
complete
their
music
education
stop playing
instruments
and
lose
interest
in
classical music.
According
to
experts,
the
reason
for
this
is
their
having
been
discouraged
by
their
music
teachers
and
the
way
they
were
taught.
The
aim
of
the
study
was
to
examine
pupils'
attitudes
towards
their
piano
teachers
and
piano
lessons.
The
emotional
and
cognitive
components
of
the
attitudes
have
been
taken
into
account.
The
respondents
(40
pupils
from
the
primary
music
schools)
were
asked
to
complete
the
Pupils'
Questionnaire,
designed
to
test
the
cognitive
aspect
of
their
attitudes
(what
they
think
of
their
teachers
and
piano
lessons)
as
well
as
the
emotional
aspect
(what
they
feel
during
the
piano
lessons).
In
the
cognitive
aspect
the
results
revealed
a
general
positive
attitude
of
the
pupils
towards
their
piano
teachers,
more
positive
than
towards
the
piano
playing
itself.
However,
almost
20%
of
the
subjects
preferred
to
learn
with
a
different
teacher,
and
over
40%
did
not
feel
increased
motivation
to
practice
after
the
lessons.
Almost
25%
reported
they
did
not
fulfill
their
aspiration
concerning
piano
playing.
In
the
emotional
aspect
the
results
revealed
a
significant
percentage
of
subjects
manifesting
a quite
high
level
of
anxiety
during
the
lessons.
Certainly,
this
is
neither
a
source
of
inspiration
for
the
students,
nor
does
it
build
up
their
high
self-esteem.
The pupils denied negative emotions much more frequently than they admitted positive ones.
On
the
basis
of
the
comparison
of
both
aspects
of
the
attitudes
one
can
conclude
that
pupils'
image
of
their
teachers
(the
cognitive
aspect)
is
more
positive
than
their
feelings
during
the
lessons
(the
emotional
aspect).
The
analysis
of
the
pupils'
attitudes
revealed
many
negative
emotions
and
lack
of
strong
positive
experiences
connected
to
classical
music,
the
latter
undoubtedly
necessary
for
shaping
the
intrinsic
motivation.
It
was
hypothesized
that
this
fact
may
be
a
source
of
a
decrease
in
interest
in
this
kind
of
music.
Experienced
Emotions
through
the
Orff-Schulwerk
Approach
in
Music
Education
-
A
Case
Study
Based
on
Flow
Theory
Dibben,
2010,
Krumhansl,
2002;
Sloboda,
1999,
2005;
Sloboda
&
Juslin,
2001;
Juslin
&
Sloboda,
2010),
data
enabled
us
to
put
in
evidence
several
correlations
regarding
the
Orff-
Schulwerk
approach
and
the
students'
lived
emotions
during
Music
Education
classes.
AFIMA
enabled
us
to
establish
that
through
an
Orff-Schulwerk
approach
children
lived
many
positive
emotions,
which
proved to be
significant
in
the
way
they
acquire
musical
knowledge.
Benefits
of
a
classroom-based
instrumental
training
program
on
working
memory
of
primary
school
children:
A
longitudinal
study
Ingo
Roden,*
Dietmar
Grube,*
Stephan
Bongard,#
Gunter
Kreutz*
*
Institute
for
Music,
School
of
Linguistics
and
Cultural
Studies,
Carl
von
Ossietzky
University
Oldenburg,
Germany;
#Department
of
Psychology,
Goethe-University
Frankfurt,
Germany
Cognitive
Strategies
in
Sight-singing
Influence
of
Music
Education
on
Expressive
Singing
of
Preschool
Children
Johanella
Tafuri
Conservatoire
of
Music,
Bologna,
Italy
Singing
is
one
of
the
most
widespread
musical
activities
in
nursery
schools.
Teachers
are
accustomed
to
accompanying
different
moments
of
the
day
with
songs
and
children
enjoy
having
fun
with
music.
When
do
children
start
to
sing
autonomously?
How
do
they
sing? Several
studies
have
explored
the
many
ways
used
by
children
to
sing
songs
they
know
and
to
play
with
them.
The
results
showed
different
kinds
of
repetition,
change
of
words
and
also
changes
in
the
expression
through
little
variations
in
speed,
loudness
and
other
musical
characteristics.
The
studies
that
explore
the
relationships
between
music
and
emotions
with
the
particular
aim
of
understanding
the
underlying
processes
of
an
expressive
performance,
pointed
out
that,
in
order
to
produce
it,
performers
need
to
manage
physical
sound
properties.
More
recently,
Tafuri
(2011)
analysed
a
corpus
of
songs
performed,
between
the
age
of
2
and
3,
by
the
children
of
the
inCanto
Project.
This
is
a
group
of
children
who
received
a
special
music
education
that
began
during
their
prenatal
life
(Tafuri
2009).
The
analysis
revealed
that
already
at
this
age
it
is
possible
to
observe
a
certain
ability
of
children
to
sing
in
an
expressive
way.
This
implies
a
certain
ability
in
managing
some
musical
structures,
in
particular
loudness
and
timing.
The
aims
of
the
present
research
are
firstly
to
verify
the
appearance
and
development
of
the
ability
to
sing
in
an
expressive
way
in
children
of
2-5
years
who
attend
day
nursery
schools
where
teachers
regularly
sing
a
certain
number
of
songs
almost
daily;
secondly,
to
compare
these
results
with
those
shown
by
the
children
of
the
inCanto
Project
who
have
received
an
early
music
education.
A
corpus
of
songs
performed
by
the
children
of
several
different
schools,
and
recorded
by
the
teachers,
is
analysed
with
the
software
Sonic
Visualizer,
with
particular
attention
paid
to
the
children's
use
of
agogics,
dynamics,
and
other
sound
qualities.
The
results
highlight
the
process
of
managing
physical
sound
properties
in
order
to
produce
an
expressive
performance.
Particular
problems
are
solved:
e.g.
that
of
distinguishing
expressive
from
other motivations,
or
musical
from
verbal
intentions
in
the
analysis
of
sound
properties.
These
results,
when
compared
with
those
obtained
by
children
who
received
an
early
music
education,
give
interesting
indications
of
the
role
of
an
early
musical
experience.
Multisensory
learning
and
the
resulting
neuronal
plastic
changes
have
recently
become
a
topic
of
renewed
interest
in
human
cognitive
neuroscience.
Playing
an
instrument
from
musical
notation
is
an
ideal
situation
to
study
multisensory
learning,
as
it
allows one to investigate
the
integration
of
visual,
auditory
and
sensorimotor
information
processing.
The
present
study
aimed
at
answering
whether
multisensory
learning
alters
unisensory
structures,
interconnections
of
those
structures
or
specific
multisensory
areas
in
the
human
brain.
In
a
short-term
piano
training
procedure,
musically
naive
subjects
were
trained
to
play
tone
sequences
from
visually
presented
patterns
in
a
music
notation-like
system
[Auditory-Visual-Somatosensory
group
(AVS)],
while
a
control
group
received
audio-visual
training only, which involved
viewing
the
patterns
and
attentively
listening
to
the
recordings
of
the
AVS
training
sessions
[Auditory-Visual
group
(AV)].
Training-related
changes
in
the
corresponding
cortical
networks
were
assessed
by
pre-
and
post-training
magnetoencephalographic
(MEG)
recordings
of
an
auditory,
a
visual
and
an
integrated
audio-visual
mismatch
negativity
(MMN).
The
two
groups
(AVS
and
AV)
were
differently
affected
by
the
training
in
the
integrated
audio-visual
MMN
condition.
Specifically,
the
AVS
group
showed
a
training-related
increase
in
audio-visual
processing
in
the
right
superior
temporal
gyrus
while
the
AV
group
did
not
reveal
a
training
effect.
The
unisensory
MMN
measurements
were
not
affected
by
training.
The
results
suggest
that
multisensory
training
alters
the
function
of
specific
multisensory
structures,
and
not
the
unisensory
ones
along
with
their
interconnections,
and
thus
provide
experimental
data
as
a response
to
an
important
question
presented
by
cognitive
models
of
multisensory
training.
EEG-based
discrimination
of
music
appraisal
judgments
using
ZAM
time-frequency
distribution
Effects
of
Short-Term
Experience
on
Music-Related
ERAN
This
study
investigates
how
short-term
experience
modulates
the
strength
of
the
early-right
anterior
negativity
(ERAN)
response
to
implied
harmonic-syntax
violations.
The
ERAN
is
a
negative-going
event-related
potential
(ERP)
that
peaks
between
150ms
and
250ms
after
stimulus
onset,
has
an anterior
scalp
distribution,
right-hemispheric
weighting,
and
relies
on
schematic
representations
of
musical
regularities.
Previous
studies
have
shown
that
the
ERAN
can
be
modified
by
short-term
musical
experience.
However,
these
studies
rely
on
complex
harmonic
stimuli
and
experimental
paradigms
where
music
is
presented
simultaneously
with
visual
images
and
written
text.
In
an
effort
to
better
understand
how
habituation
may
affect
the
ERAN
in
musical
contexts,
we
asked
subjects
to
directly
attend
to
simple
melodies
that
are
either
syntactically
well-formed,
conforming
to
common-practice
tonality (M1),
or
end
with
an
out-of-key
pitch
(M2).
Even
with
simplified
stimuli,
our
results
reliably
replicate
earlier
findings
based
on
more
complex
stimuli
composed
of
literal
harmonies.
Both
musicians
and
non-musicians
listened
to
M1
and
M2
numerous
times
and
neural
responses
were
recorded
using
magnetoencephalography
(MEG).
Whereas
previous
studies
on
short-term
habituation
of
the
ERAN
only
look
at
changes
in
the
violation
condition,
we
comparatively
analyze
how
responses
to
both
M1
and
M2
change
over
time
and
how
the
relative
relationship
between
M1
and
M2
fluctuates.
This
effectively
controls
for
fatigue
and
allows
us
to
clearly
show
how
the
ERAN
changes
both
independent
of
and
in
conjunction
with
normal
responses.
Daniel
Cameron,*
Job
Lindsen,#
Marcus
Pearce,+
Geraint
Wiggins,+
Keith
Potter,^
Joydeep
Bhattacharya#
*Brain
and
Mind
Institute,
University
of
Western
Ontario,
Canada;
#Dept.
of
Psychology,
Goldsmiths,
University
of
London,
UK;
^Dept.
of
Music,
Goldsmiths,
University
of
London,
UK;
+Centre
for
Digital
Music,
Queen
Mary,
University
of
London,
UK
Humans
tend
to
synchronize
movements,
attention,
and
temporal
expectations
with
the
metric
beat
of
auditory
sequences,
such
as
musical
rhythms.
Electroencephalographic
(EEG)
research
has
shown
that
the
metric
structure
of
rhythms
can
modulate
brain
activity
in
the
gamma
and
beta
frequency
bands
as
well
as
at
specific
frequencies
related
to
the
endogenously
generated
metric
beat
of
rhythms.
We
investigate
the
amplitude
and
inter-trial
phase
coherence
(ITC)
of
EEG
measured
from
20
musicians
while
listening
to
a
piece
of
rhythmic
music
that
contains
metrically
ambiguous
and
unambiguous
rhythms,
Steve
Reich's
Clapping
Music.
ITC
is
the
consistency
of
frequency-specific
phase
over
repetitions
of
individual
rhythms
and
thus
reflects
the
degree
to
which
activity
is
locked
to
stimulus
rhythms.
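The ITC definition above can be sketched in a few lines: at a given frequency, ITC is the magnitude of the mean unit phase vector across trials. The following NumPy illustration is our minimal sketch, not the authors' analysis pipeline; a single-frequency complex projection stands in for a full time-frequency decomposition.

```python
import numpy as np

def inter_trial_coherence(trials, fs, freq):
    """ITC at one frequency: magnitude of the mean unit phase vector
    across trials. `trials` has shape (n_trials, n_samples)."""
    n = trials.shape[1]
    t = np.arange(n) / fs
    # project each trial onto a complex sinusoid at the target frequency
    coeffs = trials @ np.exp(-2j * np.pi * freq * t)
    phases = coeffs / np.abs(coeffs)   # keep phase, discard amplitude
    return np.abs(phases.mean())       # 1 = perfect phase locking, ~0 = none
```

Trials that are perfectly phase-locked to the stimulus yield an ITC near 1, while trials with random phase at that frequency yield values near zero.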
For
ambiguous
rhythms,
amplitude
and
ITC
are
greater
at
the
frequencies
specific
to
the
metric
beat
of
rhythms
(1.33
Hz
and
1.77
Hz).
Source
analysis
suggests
that
differences
at
metre-specific
frequencies
may
originate
in
left
ventral
premotor
area
and
right
inferior
frontal
gyrus,
areas
that
have
been
linked
to
anticipatory
processing
of
temporal
sequences.
Effects
are
also
found
in
alpha
(8-12
Hz)
and
gamma
(24-60
Hz)
bands
and
these
are
consistent
with
past
EEG
research
showing
modulation
of
gamma
power
by
the
metric
structure
of
auditory
rhythms
and
modulation
of
alpha
activity
due
to
temporal
anticipation.
Our
study
extends
evidence
of
the
electrophysiological
processes
related
to
rhythm
and
metre
by
using
complex,
ecologically
valid
music,
and
showing
differences
in
amplitude
and
ITC
at
metre-specific
frequencies
in
motor
areas
of
the
brain.
Neuroscientific
Measure
of
Consonance
Adrian
Foltyn
Department
of
Composition,
Conducting
and
Theory
of
Music,
F.
Chopin
University
of
Music,
Poland
The
article
proposes a new, simplified model
of
neural
discrimination
of
sensory
consonance
/
dissonance
at
higher
stages
of
the auditory
pathway.
The
model
concerns
primarily
complex
harmonic
sounds
and
is
based
on
periodicity
/
pitch
and
its
representation
in
neural
discharges.
The
hypothesis
relies
on
a process involving measuring the concentration of neural excitation in the inferior colliculus in time windows equal to the period of the sum of the incoming signals.
The
measure
can
accommodate
pitch
deviations
via
a
further
mechanism
based
on
harmonic
entropy
and
can
be
applied
to
any
interval,
including
microtones
and
octave
enhancements.
For
simple
ratios
an
algebraic
calculation
method
is
available,
accounting
for
several
interval
relations that
abstract
mathematical
consonance
measures
tended
to
struggle
with.
To
examine
plausibility
of
the
model,
a
psychoacoustic
experiment
was
carried
out,
using
paired
comparison
of
intervals.
One
of
the
resulting
dimensions
can
be
clearly
identified
as
a consonance-dissonance axis.
The
proposed
modelled
consonance
values, together with those of 4 other well-known models,
have
been
related
to
experimental
results.
Logarithmic
transformation
of
the
postulated
consonance
measure
displays
the
highest
correlation
with
the
consonance
dimension
obtained
in
the
experiment
out
of
all
examined
models
(R2
0.8).
The higher degree of correlation, compared with roughness-based models, suggests the plausibility of a certain pitch-related mechanism underlying basic consonance perception.
Effects
of
musical
training
and
standard
probabilities
on
encoding
of
complex
tone
patterns
Anja
Kuchenbuch*,
Evangelos
Paraskevopoulos*,
Sibylle
C.
Herholz#,
Christo
Pantev*
Neural
Correlates
of
Musical
Timbre
Perception
in
Williams
Syndrome
piano
tones.
Event-related
potential
(ERP)
analyses
revealed
robust
P300
responses
to
the
target
piano
tones
in
the
WS
and
TD
groups.
Individuals
with
WS
also
demonstrated
differences
in
P300
amplitude
between
the
non-target
cello
and
trumpet
timbres.
In
the
WS
group
only,
there
was
early
and
sustained
increased
induced
alpha-band
(8-12
Hz)
activity
to
the
cello
vs.
trumpet
timbre.
Thus,
results
indicate
greater
attentional
and
sensory
processing
of
instrumental
timbres
in
WS
compared
with
TD
individuals.
Implications
will
be
discussed
for
auditory
sensitivities
and
musicality
in
WS.
Larrouy-Maestri,
P.
1,
Lévêque,
Y.2,
Giovanni,
A.2,
Schön,
D.3,
&
Morsomme,
D.1
1Logopédie
de
la
Voix,
Cognitive
Psychology,
University
of
Liège,
Belgium
2Laboratoire
Parole
et
Langage,
CNRS
and
Aix-Marseille
University,
France
3Institut
de
Neurosciences
Cognitives
de
la
Méditerranée,
CNRS
and
Aix-Marseille
University,
France
Vocal
accuracy
of
a
sung
performance
can
be
evaluated
by
two
methods:
acoustic
analyses
and
subjective
judgments.
For
a
decade,
acoustic
analyses
have
been
presented
as
a
more
reliable
solution
to
evaluate
vocal
accuracy,
avoiding
the
limitation
of
experts' perceptual system
and
their
variability.
This
paper
presents
for
the
first
time
a
direct
comparison
of
these
methods.
166
occasional
singers
were
asked
to
sing
the
popular
song
Happy
Birthday.
Acoustic
analyses
were
performed
to
quantify
the
pitch
interval
deviation,
the
number
of
contour
errors
and
the
number
of
tonality
modulations
for
each
recording.
Additionally,
eighteen
experts
in
singing
voice
or
music
rated
the
global
pitch
accuracy
of
these
performances.
The
results
showed
a
high
inter-rater
concordance
among
the
judges.
In
addition,
a
high
correlation
occurred
between
acoustic
measurements
and
subjective
rating.
The judges'
rating
was
influenced
by
both
tonality
modulations
and
interval
deviations.
The
total
model
of
acoustic
analyses
explained
81%
of
the
variance
of
the
judges'
scores.
This
study
highlights
the
congruency
between
objective
and
subjective
measurements
of
vocal
accuracy
when
the
assessment
is
done
by
music
or
singing
voice
experts.
Our
results
confirm
the
relevance
of
the
pitch
interval
deviation
criterion
in
vocal
accuracy
assessment.
Furthermore,
the
number
of
tonality
modulations
is
a
salient
criterion
in
perceptive
rating
and
should
be
taken
into
account
in
studies
using
acoustic
analyses.
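The pitch interval deviation criterion can be illustrated with a toy computation (our own sketch, not the paper's implementation): successive intervals of the sung melody are measured in cents and compared with the corresponding intervals of the target melody.

```python
import math

def interval_cents(f1, f2):
    # size of the interval between two frequencies, in cents
    return 1200 * math.log2(f2 / f1)

def mean_interval_deviation(sung, target):
    """Mean absolute deviation (in cents) between successive sung
    intervals and the corresponding target intervals."""
    devs = [abs(interval_cents(sung[i], sung[i + 1])
                - interval_cents(target[i], target[i + 1]))
            for i in range(len(sung) - 1)]
    return sum(devs) / len(devs)
```

Because only interval sizes are compared, this criterion is insensitive to singing the whole melody in a different key, which is why tonality modulations must be counted separately.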
Pitch
Evaluations
in
Traditional
Solo
Singing:
Comparison
of
Methods
was
applied.
NoteView
was
chosen
since
it
is
considered
one
of
the
best
programs
for
this
purpose.
Evaluations
of
individual
pitches
by
the
three
subjects
(1st
method)
differed
by
6.5
cents
(here
and
hereafter
averaged
values
are
presented).
However,
for
the
degrees
of
musical
scale,
the
difference
dropped
to
1.6-3.4
cents,
depending
on
the
range
of
sound
durations
(IOIs)
considered.
In
comparison,
the
other
two
methods
gave
considerably
inferior
results
(deviations
from
the
semi-manual
evaluations
of
the
musical
scale):
6.0-10.0
cents
for
histograms
(2nd
method)
and
3.9-7.9
cents
for
NoteView
(3rd
method).
The
semi-manual
method
of
pitch
evaluation,
though
time-consuming,
is
still
more
acceptable
than
the
two
automated
methods
considered, unless a precision of 4.0-9.0 cents or worse is sufficient.
The
reasons
(need
for
subjective
decisions,
e.g.,
on
target
pitch,
etc.)
are
discussed.
Musicians'
Perception
of
Melodic
Intonation
in
Performances
with
and
without
Vibrato
The
timbre
of
the
voice
as
perceived
by
the
singer
him-/herself
Allan
Vurma
Estonian
Academy
of
Music
and
Theatre,
Estonia
This
research
is
aimed
at
specifying
with
the
help
of
perception
tests
how
the
vocalist
perceives
the
timbre
of
his/her
own
voice
during
singing.
Fifteen professional singers sang
simple
vocal
exercises
at
different
pitch
ranges.
They
were
asked
to
fix
in
their
memory
the
timbre
of
their
voice
as
it
was
perceived
while
singing.
These
sung
excerpts
were
recorded,
and
as
a
next
step,
seven
timbral
modifications
were
created
from
each
recording.
The
modifications
corresponded
to
different
hypotheses
about
the
difference
in
the
voice's
timbre
in
the
vocalist's
own
perception
compared
to
the
timbre
of
that
voice
in
the
perception
of
other
persons
at
some
distance.
Then
the
modifications
were
played
to
the
participant
whose
voice
was
used
for
the
modifications
and
he/she
had
to
estimate
the
similarity
of
those
stimuli
to
the
perception
of
his/her
own
voice
that
had
been
encountered
during
singing.
Participants
rated
as
most
similar
those
stimuli
that
were
modified
by
the
filter
whose
frequency
characteristic
resembled
the
shape
of
a
trapezoid
and
whose creation took into account
(1)
the
transfer
function
of
the
diffracting
air
conduction
component
from
the
mouth
of
the
singer
to
his
ear
channel,
(2)
the
transfer
function
of
the
bone
conduction
component,
and
(3)
the
influence
of
the
stapedius
reflex
on
the
sensitivity
of
his/her
hearing
system. The
frequency
characteristics
of
cochlear
microphonics
as
measured
in
cats
were
used
as
an available approximation of the impact of the stapedius
reflex
on
human
hearing.
Brain
rhythm
changes
during
singing
voice
perception
Effect
of
Augmented
Auditory
Feedback
on
Pitch
Production
Accuracy
in
Singing
for
both
poor
and
good
pitch
singers
and
to
compare
the
effect
between
two
types
of
tasks.
Data
collection
is
still
in
progress;
however,
available
data
show
that
the
effect
of
augmented
feedback
is
positive
for
the
moderately
poor
pitch
singers
but
not
the
severely
poor
ones
in
the
pitch-matching
task,
but
its
influence
on
the
performance
in
the
song-singing
task
is
negative.
Vocal
tract
dimensional
characteristics
of
professional
male
singers
with
different
singing
voice
types
Vocal
Fold
Vibratory
Differences
in
Different
Registers
of
Professional
Male
Singers
with
Different
Singing
Voice
Types
Music
use
patterns
and
coping
strategies
as
predictors
of
student
anxiety
levels
regulation
and
active/strategic
self-regulation).
Finally,
when
coping
strategies
and
age
were
controlled,
music
coping
was
still
a
significant
predictor
of
anxiety
levels
in
this
sample.
However,
the
prediction
was
positive,
indicating
that
students
experiencing
higher
anxiety
levels
also
used
music
more
to
cope
than
did
students
with
lower
anxiety
levels.
These
findings
suggest
that
students
who
are
unable
to
manage
their
anxiety
with
general
coping
strategies
may
find
some
outlet
via
music
listening.
Schizotypal
Influences
on
Musical
Imagery
Experience
Music
aids
gait
rehabilitation
in
Parkinson's
disease
Charles-Etienne
Benoit,
Nicolas
Farrugia,
Sonja
Kotz,
Simone
Dalla
Bella
form
of
training.
Here
we
summarize
clinical
and
brain
imaging
evidence
on
the
effects
of
auditory
cueing
on
gait
in
patients
with
PD.
Moreover,
we
propose
that
cueing
effects
are
likely
mediated
by
the
activation
of
a
general-purpose
neuronal
network
involved
in
the
synchronization
of
motor
movement
to
temporally
regular
external
stimuli
(i.e.,
auditory-motor
coupling).
This
neural
mechanism,
unaffected
in
PD,
should
facilitate
movement
execution.
Cerebellar
projections
stimulate
motor
areas
facilitating
gait
initiation
and
continuation
when
inducing
externally
generated
movement.
Extensive
stimulation
via
auditory
cueing
is
likely
to
foster
brain
plasticity, particularly
at
the
level
of
the
brain
circuitry
underpinning
sensorimotor
coupling
(increasing
connectivity
in
areas
devoted
to
sensorimotor
integration),
thus
supporting
improvements
positively
affecting
gait
kinematics
in
PD.
In
addition,
as
mechanisms
underlying
auditory-motor
coupling
are
likely
to
be
domain
general,
the
effects
of
auditory
cueing
may
extend
to
other
functions,
such
as
regulation
of
fine
motor
movements
or
speech.
Discrimination
of
slow
rhythms
mimics
beat
perception
impairments
observed
in
Parkinson's
disease
Devin
McAuley,
Benjamin
Syzek,
Karli
Nave,
Benjamin
Mastay,
&
Jonathan
Walters
Department
of
Psychology,
Michigan
State
University,
USA
Research
has
demonstrated
that
rhythm
discrimination
shows
a
beat-based
advantage
(BBA)
whereby
simple
rhythms
with
a
beat
are
better
discriminated
than
complex
rhythms
without
a
beat.
Recently,
Grahn
&
Brett
(2009)
showed
that
individuals
with
Parkinson's
Disease
(PD)
do
not
show
a
BBA.
The
present study investigated
rhythm
discrimination
using
simple
and
complex
rhythms
that
were
presented
at
either
the
original
tempo
investigated
by
Grahn
&
Brett
(2009)
or
at
a
slower
tempo.
We
expected
to
replicate
the
BBA
for
the
original
tempo
and
to
reduce
or
possibly
eliminate
the
BBA
at
the
slower
tempo.
Two
experiments
were
conducted.
On
each
trial,
participants
heard
two
successive
presentations
of
a
standard
rhythm
followed
by
a
third
presentation
of
the
same
rhythm
or
a
slightly
changed
rhythm.
Participants
judged
whether
the
third
rhythm
was
the
same
or
different
from
the
standard.
In
both
experiments,
participants
showed
a
reliable
BBA.
The
magnitude
of
the
BBA,
however,
was
larger
for
rhythms
marked
by
empty
intervals
(Experiment
1)
than
by
filled
intervals
(Experiment
2).
Slowing
down
the
rhythms
reduced
discrimination
performance.
This
reduction
was
greater
for
simple
rhythms
than
for
complex
rhythms,
thereby
eliminating
the
BBA.
Notably,
the
pattern
of
performance
for
the
slowed
rhythms
was
strikingly
similar
to
the
pattern
previously
observed
for
individuals
with
PD.
Random
delay
boosts
musical
fine
motor
recovery
after
stroke
and
index
finger
tapping
speed
and
regularity.
Surprisingly,
patients
in
the
delay
group
improved
strikingly
in
the
nine-hole-pegboard
test,
whereas
patients
in
the
normal
group
did
not.
In
finger
tapping
rate
and
regularity
both
groups
showed
similar
marked
improvements.
The
normal
group
showed
reduced
depression
whereas
the
delay
group
did
not.
We
conclude
that,
contrary
to
expectations,
music
therapy
on
a
randomly
delayed
keyboard
can
significantly
boost
motor
recovery
after
stroke.
We
hypothesise
that
the
patients
in
the
delayed
feedback
group
implicitly
learn
to
be
independent
of
the
auditory
feedback
and
therefore
outperform
those
in
the
normal
condition.
Proposal
for
Treatment
of
Focal
Dystonia
in
a
Guitar
Player:
A
Case
Study
The
Reflexion
of
Psychiatric
Semiology
on
Musical
Improvisation:
A
case
study
of
a
patient
diagnosed
with
Obsessive
Compulsive
Disorder
the
patient,
access
to
her
medical
file,
recording
of
musical
sessions
in
order
to
analyse
the
musical
improvisations
and
video
recording
to
observe
the
patient's
related
behaviour.
We
compare
findings
from
the
music
analysis
of
the
improvisations,
the
corresponding
behaviour,
and
the
clinical
data
we
obtained
and
analysed,
using
an
analytical
music
therapy
reflection.
Our
results
show
that
aspects
of
the
patient's
pathology
can
be
associated
with
musical
attributes
and
structures
found
in
the
improvisations.
In
particular,
the
patient's
logorrhea
observed
in
the
interviews
is
translated
into
non-stop
playing,
impulsivity
becomes
intensive
playing,
the
fast
tempo
reflects
anxiety,
repeated
musical
clusters
reflect
fixation
on
ideas,
and
other
musical
features
are
related
to
aspects
of
the
patient's
mood.
The
musical
building
blocks
(here
features)
as
perceived
while
listening
are
often
assumed
to
be
the
notes
and
the
well-known
abstractions
such
as
grouping,
meter
and
harmony.
However,
is
that
really
what
we
hear
when
we
briefly
listen
to
a
new
song
on
the
radio?
We
can
then
perceive
e.g.
the
genre
and
emotional
expression
just
from
the
first
few
seconds.
From
an
ecological
viewpoint,
one
can
argue
that
features
like
distance,
direction,
speed, and energy
are
important
(see
other
abstract).
From
emotion
research,
a
number
of
qualitative
features
relating
to
general
music
theory
aspects
have
been
identified.
These
are
e.g.
rhythmic
and
harmonic
complexity
measured
on
a
gradual
scale
ranging
from
simple
to
complex.
From
a
computational
viewpoint,
a
large
number
of
features
ranging
from
low-level
spectral
properties
to
high-level
aspects
have
been
used
within
research
in
music
information
retrieval.
The
aim
of
the
current
study
is
to
look
at
music
perception
from
a
number
of
different
viewpoints,
identify
a
subset
of
relevant
features,
evaluate
these
features
in
listening
tests,
and
predict
them
from
available
computational
audio
features.
A
small
set
of
nine
features
was
selected.
They
were
Speed,
Rhythmic
clarity,
Rhythmic
complexity,
Articulation,
Dynamics,
Modality,
Overall
pitch,
Harmonic
complexity,
and
Brightness.
All
the
features
were
rated
on
Likert
scales
in
two
listening
experiments.
In
experiment
one
(N=20)
the
music
examples
consisted
of
100
polyphonic
ringtones
generated
from
MIDI
files.
In
this
experiment
they
also
rated
Energy
and
Valence.
In
experiment
two
(N=21)
the
music
examples
were
110
film
clips
previously
used
in
an
emotion
study
(Eerola
and
Vuoskoski,
2010),
thus,
with
available
data
regarding
emotional
ratings.
In
addition,
all
the
perceptual
features
were
modeled
with
audio
features
extracted
by
existing
software
such
as
the
MIRToolbox.
The
agreement
among
the
listeners
varied
depending
on
the
feature
as
expected.
While
Speed
had
a
large
agreement,
Harmonic
complexity
showed
a
rather
modest
agreement
indicating
a
more
difficult
task.
The
feature
inter-correlations
were
in
general
modest
indicating
an
independent
rating
of
all
the
features.
The
emotion
ratings
could
be
well
predicted
by
the
rated
features
using
linear
regression.
In
the
first
experiment
the
energy
rating
was
predicted
with
an
adj.
R2
=
0.93
and
the
valence
rating
with
an
adj.
R2
=
0.87.
Many
of
the
features
could
be
predicted
from
audio
features
rather
well
with
adj
R2
up
to
approx.
0.80.
The
results
were
surprisingly
consistent
and
indicate
that
rated
perceptual
features
can
indeed
be
used
as
an
alternative
to
traditional
features
in
music
information
retrieval
tasks
such
as
the
prediction
of
emotional
expression.
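The regression step reported above can be sketched as follows. This is only a minimal outline of the kind of model the abstract describes (ordinary least squares with an adjusted R2 criterion), not the authors' actual analysis; the data, feature count, and coefficients below are invented for illustration.

```python
import numpy as np

# Hypothetical sketch: predicting an emotion rating (e.g. valence) from
# nine rated perceptual features via ordinary least squares.
rng = np.random.default_rng(0)
n = 100                                     # listening examples
X = rng.normal(size=(n, 9))                 # 9 rated perceptual features (synthetic)
true_w = rng.normal(size=9)
y = X @ true_w + 0.1 * rng.normal(size=n)   # synthetic "valence" ratings

# Fit linear regression with an intercept column.
A = np.column_stack([np.ones(n), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ w

# Adjusted R^2, the statistic reported in the abstract.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
p = X.shape[1]
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
```

With near-noiseless synthetic data the adjusted R2 is close to 1; with real Likert ratings the values reported above (0.87-0.93) would be obtained instead.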
94
12th ICMPC - 8th ESCOM Joint Conference, Aristotle University of Thessaloniki, 23-28 July 2012
WED
Stability and Variation in Cadence Formulas in Oral and Semi-Oral Chant Traditions - a Computational Approach
Dániel Péter Biró,1 Peter Van Kranenburg,2 Steven Ness,3 George Tzanetakis,3 Anja Volk4
1University of Victoria, School of Music; 2Meertens Institute, Amsterdam; 3University of Victoria, Department of Information and Computing Sciences; 4Utrecht University
This paper deals with current computational research into melodic stability and variation in cadences as they occur in oral and semi-oral traditions. A main aspect of recent computational investigations has been to explore the ways in which melodic contour defines melodic identities (Ness et al., 2010; Van Kranenburg et al., 2011). Creating a new framework for melodic transcription, we have quantized and compared cadences found in recorded examples of Torah trope, strophic melodies from the Dutch folk song collection Onder de groene linde, and Quran recitation. Working within this new transcription framework, we have developed computational methods to analyze similarity and variation in melodic formulas in cadences as they occur in recorded examples of the aforementioned oral and semi-oral traditions. Investigating stability and variation using histogram-based scales, melodic contours, and melodic outlines derived from recorded examples, we interpret our findings with regard to structural processes of oral transmission in these chant types. Through this research we hope to achieve a better sense of the relationship between melodic gesture and melodic formulae within these chant practices, and possibly a new understanding of the relationship between improvisation and notation-based chant in and amongst these divergent oral and semi-oral chant traditions.
Modeling Response Times in Tonal Priming Experiments
Tom Collins,* Barbara Tillmann,# Charles Delbé,# Frederick S. Barrett,* Petr Janata*
*Janata Lab, Center for Mind and Brain, University of California, Davis, USA; #Université de Lyon, and Centre National de la Recherche Scientifique, France
In tonal priming experiments, participants make speeded judgments about target events in short excerpts of music, such as indicating whether a final target tone or chord is mistuned. By manipulating the tonal function of target events, it is possible to investigate how easily targets are processed and integrated into the tonal context. We investigate the psychological relevance of attributes of processed audio signals by relating those attributes to response times for over three hundred tonal priming stimuli gathered from seven reported experiments. To address whether adding a long-term, cognitive representation of tonal hierarchy improves the ability to model response times, Leman's sensory periodicity pitch (PP) model is compared with a cognitive model (projection of PP output to a tonal space (TS) representing learned knowledge about tonal hierarchies), which incorporates pitch probability distributions and key distance relationships. Results revealed that variables calculated from the TS model contributed more to explaining variation in response times than variables from PP, suggesting that a cognitive model of tonal hierarchy leads to an improvement over a purely sensory model. According to stepwise selection, however, a combination of sensory and cognitive attributes accounts for response times better than either variable category in isolation. Despite the relative success of the TS representation, not all response time trends were simulated adequately. The addition of attributes based on transition probabilities may lead to further improvements.
The influence of temporal regularities on the implicit learning of pitch structures
The effect of musical expertise on the representation of space
Franziska Olbertz
University of Osnabrück, Germany
Psychological research shows increasing interest in early social experiences among siblings; however, very little is known about the effects of sibling relations on musical development. The aims of the study are therefore to describe precisely the typical sibling influences in the field of music and to discover interacting environmental variables. Sixty-three music students completed an open-ended questionnaire about their memories of musical influences by siblings during childhood and adolescence. 394 statements were classified into 30 content categories generated by qualitative content analysis. Categories were assigned to four higher categories of relation context. Basic quantitative analyses suggest that musical sibling influences depend on period of life (childhood or adolescence), age difference, and sex of respondents and siblings (p<.04). Sibling influences in the field of music are multifaceted. Whereas some respondents, for instance, started to play an instrument in order to become part of a music-making sibling group, others preferred their music style to differ from a sibling's.
The purpose of the study was to determine the effect of singing skills instruction on kindergarten children's singing accuracy. Prior to instruction, all students (age 5-6 yrs) were recorded in a singing accuracy assessment that included pitch matching and song-singing tasks. Families of participating students completed a background questionnaire regarding student music participation, music in the home, and the expressed importance of music in home life. The treatment group (n=41) was drawn from three different classes receiving 20 minutes per day of group music instruction with particular attention to the development of the singing voice in terms of tone, register and accuracy. The control group (n=38) came from three different classes that received no singing instruction in school. Following six months of instruction, post-test measurements were administered using the same form as in the pre-test. Pre-test results indicate no significant differences between the experimental and control classes and no difference in scores between boys and girls. For the three pitch matching tasks, students scored significantly higher on the interval tasks, followed by the pattern tasks and then the single-pitch tasks. On the post-test, all groups showed significant improvement on the pitch matching tasks but no improvement on the song-singing task. The experimental group showed greater improvement, but the difference was not significant. There was a moderate but significant correlation (r=0.41) between total pitch matching scores and song-singing scores. Results will be discussed in terms of the role of instruction and approaches to measurement in singing accuracy research.
The function of music for young children is multi-faceted. It has been linked to communication and self-regulation in clinical studies of musical parenting involving infants. Once children become mobile and verbal, research tends to focus on musical skill exhibited in environments structured by adults for children, such as the classroom, home, or playground. Perceiving children's musical culture as different from that of adults, we seek to understand children's spontaneous music-making in everyday life as exhibited in public spaces, specifically in the subway system in New York City. The current study is based on similar research (Custodero, 2006), which found a pervasiveness of movement; invented vocal material, most often in a solitary context; and a complex array of adult-child interactions. Specific aims were to document, interpret, and analyze a) children's musical behaviors, broadly interpreted as singing, moving themselves rhythmically or expressively, or similarly moving objects as instruments; b) environmental, circumstantial, and personal characteristics that may influence these behaviors; and c) possible developmental functions of musical behaviors in public spaces. Data were collected on 3 trains that run the length of Manhattan, on 3 specific Sundays over a period of 1 month. A team of 12 people travelled in pairs, 2 pairs in 2 different cars on each line, for one round trip per day. Each team member filled out the Spontaneous Music Observational Protocol for each musical episode observed, and reported conditions in the train car at each stop before which no music making was observed. Duration, gender and estimated age of child, social context, sonic and social environmental triggers, musical material, type/s of behavior, possible developmental function, and a more detailed description were recorded. Interpretation was completed within 24 hours of documentation. Starting with paired descriptions and interpretations of the same events, all team members reviewed all episodes to ensure consensus. Specific focus on the categorization of musical behaviors and their functions for the child included comparison with findings of the pilot study concerning the role of movement, of singing as accompaniment, and differences between episodes with social and solitary engagement. The study of children's music making in an everyday context provides implications for resourcing educative environments, and raises further questions about the relationship of listening to children and pedagogical practice.
Para-language songs as alternative musical stimuli for devices and playthings to enhance caregiver interaction with babies and toddlers
Precursors of Dancing and Singing to Music in Three- to Four-Month-Old Infants
Shinya Fujii,1,2,3 Hama Watanabe,2 Hiroki Oohashi,2 Masaya Hirashima,2 Daichi Nozaki,2 Gentaro Taga2
1Department of Neurology, Beth Israel Deaconess Medical Center and Harvard Medical School, USA; 2Graduate School of Education, The University of Tokyo, Japan; 3Research Fellow of Japan Society for the Promotion of Science, Japan
Dancing and singing involve auditory-motor coordination and have been essential to human culture since ancient times, yet their developmental manifestation has not been fully explored. We aimed to examine whether three- to four-month-old infants are able to synchronize movements of their limbs to a musical beat and/or produce altered vocalizations in response to music. In the silent condition there was no auditory stimulus, whereas in the music condition one of two pop songs was played: "Everybody" by Backstreet Boys and/or "Go Trippy" by WANICO feat. Jake Smith. Limb movements and vocalizations of the infants in the supine position were recorded by a 3D motion capture system and the microphone of a digital video camera. First, we found a striking increase in the amount of limb movements and their significant phase synchronization to the musical beat in one individual. As a group, however, there was no significant increase in the amount of limb movements during the music compared to the silent condition. Second, we found a clear increase in the formant variability of vocalizations during the music compared to the silent condition in the group. The results suggest that our brains are already primed with our bodies to interact with music at this age via limb movements and vocalizations.
There is a large body of evidence relating to the ways that people synchronise with sounds, and perform error correction in order to do this. However, anti-phase movement is less well investigated than in-phase. While it has previously been suggested that error correction while moving in anti-phase may have similar mechanisms to moving in-phase, and may simply be a case of shifting the response by a regular period, there is some evidence suggesting there could be more substantial differences in the way that people engage in anti-phase movement. In particular, it is known that anti-phase synchronisation tends to become difficult, and break down, at a different stimulus inter-onset interval (IOI) from in-phase synchronisation. The current study uses an anisochronous stimulus sequence to look at people's capacity to error correct when performing anti-phase synchronisation with a set of sounds. Participants were instructed to tap between the tones but to try to maintain regularity. Although these potentially contradictory instructions did not advise participants to perform any error correction on the basis of deviation in the stimuli, results initially suggest that participants did perform error correction, tapping with shortened intervals following a shorter stimulus interval, and lengthened intervals following a longer stimulus interval. However, using cross-sectional time series analysis it was possible to look at tapping data over a number of participants to demonstrate that the relationship between stimulus and response was not such a simple one, and that the error correction response would be better explained by participants trying to maintain a regular asynchrony with the stimulus. Modelling confirmed that this strategy could explain the data better than error correction performed in a manner more similar to that of in-phase tapping. The idea that anti-phase synchronisation is performed by attempting to maintain a regular asynchrony of half the stimulus IOI is in keeping with findings that anti-phase synchronisation becomes difficult at around double the stimulus IOI at which in-phase synchronisation becomes difficult, and suggests that anti-phase movement might not share the same error correction mechanisms as in-phase movement. This may have more general implications for the way we understand temporal cognition, and contributes towards debates regarding clock and oscillator models of timing.
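The asynchrony-maintenance account described above can be illustrated with a minimal simulation. This sketch is an assumption-laden toy model (not the authors' analysis): it shows that a tapper who simply aims at the midpoint between successive tones of an anisochronous sequence, i.e. maintains an asynchrony of half the stimulus IOI, already produces inter-tap intervals that co-vary with the stimulus intervals, mimicking apparent interval-based error correction.

```python
import numpy as np

# Anisochronous metronome: IOIs jittered around 600 ms (values invented).
rng = np.random.default_rng(7)
iois = 600 + rng.normal(0, 40, size=300)          # stimulus IOIs (ms)
onsets = np.concatenate([[0.0], np.cumsum(iois)])

# Midpoint-aiming strategy: each tap lands halfway through the upcoming IOI,
# i.e. the asynchrony to the preceding tone is always half that IOI.
taps = onsets[:-1] + iois / 2
iti = np.diff(taps)                               # inter-tap intervals

# Each inter-tap interval equals the mean of the two surrounding stimulus
# IOIs, so a longer stimulus interval is followed by a longer tap interval
# even though no interval-copying error correction was modelled.
assert np.allclose(iti, (iois[:-1] + iois[1:]) / 2)
corr = np.corrcoef(iti, iois[1:])[0, 1]
```

Here `corr` comes out clearly positive, reproducing the shortened-after-short, lengthened-after-long pattern the abstract attributes to asynchrony tracking rather than in-phase-style error correction.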
The Subjective Difficulty of Tapping to a Slow Beat
The current study investigates the slower limit of rhythm perception and participants' subjective difficulty when tapping to a slow beat. Thirty participants were asked to tap to metronome beats ranging in tempo from 600 ms to 3000 ms between beats. After each tapping trial the participants rated the difficulty of keeping the beat on a seven-point scale ranging from "very easy" to "very difficult". The participants generally used the whole rating scale and, as expected, there was a strong significant correlation between the inter-onset interval (IOI) of the beats and rated difficulty (r=.89). The steepest increases in rated difficulty were between IOIs of 1200 to 1800 ms (M=1.6) and 1800 to 2400 ms (M=1.2), and these were significantly larger than the increases between IOIs of 600 to 1200 ms (M=.5) and 2400 to 3000 ms (M=0.9). This is in line with earlier reports on where tapping starts to feel difficult and supports the hypothesis that there is a qualitative difference between tapping to fast (IOI < 1200 ms) and slow (IOI > 2400 ms) tempi. A mixed model analysis showed that tempo, tapping error and percentage of reactive responses all affected the participants' ratings of difficulty. Of these, tempo was by far the most influential factor; still, participants were to some degree sensitive to their own tapping errors, which then influenced their subsequent difficulty rating.
Eriko Aiba,* Koji Kazai,* Toshie Matsui,# Minoru Tsuzaki,+ Noriko Nagata*
*Dept. of Human System Interaction, Kwansei Gakuin University, Japan; #Dept. of Otorhinolaryngology - Head and Neck Surgery, Nara Medical University, Japan; +Faculty of Music, Kyoto City University of Arts, Japan
Synchrony judgment is one of the most important abilities for musicians, because just a few milliseconds of onset asynchrony can result in a significant difference in musical expression. However, even if all of the components physically begin exactly simultaneously, their temporal relation might not be preserved at the cochlear level. The purpose of this study is to investigate whether the cochlear delay significantly affects synchrony judgment accuracy and whether there are any differences in its effects depending on musical experience. A psychoacoustical experiment was performed to measure the synchrony judgment accuracy of professional musicians and non-musicians. Two types of chirps and a pulse were used as experimental stimuli to control the amount of cochlear delay. The compensated-delay chirp instantaneously increased its frequency to cancel out the cochlear delay. The enhanced-delay chirp had the reversed temporal relation of the compensated-delay chirp. In addition, a pulse without delay was used. The experimental task was to detect the synchronous pair in a 2I2AFC procedure. As a result, synchrony judgment accuracy was significantly higher for professional musicians than for non-musicians. For professional musicians, there were significant differences among all three types of sounds. However, for non-musicians, there was no significant difference between compensated and enhanced chirps. This result suggests that the auditory system of professional musicians is more sensitive than that of non-musicians to changes in the temporal relation of frequency components, such as cochlear delay.
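The stimulus design can be illustrated with a rising-frequency chirp of the general kind described. The sketch below is a hypothetical illustration with invented parameters: it uses a simple linear sweep in which low frequencies lead, whereas chirps that actually compensate cochlear delay follow a nonlinear frequency trajectory derived from a cochlear model.

```python
import numpy as np

# Toy rising chirp: low frequencies are presented first so that, after the
# cochlea's frequency-dependent delay, components reach their characteristic
# places closer together in time. Parameters are illustrative only.
fs = 44100                       # sample rate (Hz)
dur = 0.01                       # 10 ms stimulus
t = np.linspace(0, dur, int(fs * dur), endpoint=False)
f0, f1 = 100.0, 10000.0          # sweep from low to high frequency
k = (f1 - f0) / dur              # linear sweep rate (Hz/s)
phase = 2 * np.pi * (f0 * t + 0.5 * k * t ** 2)  # instantaneous freq f0 + k*t
chirp = np.sin(phase)
```

Reversing the time axis of such a signal gives the opposite temporal relation, analogous to the enhanced-delay chirp in the abstract.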
2D-emotional space. The results show that several motions are dependent on the 2D emotional space and that emotional performance has several features of motion not related to the musical sound. We found that professional percussionists represent emotion in the motion of their performance in a way that depends on the 2D space and is independent of its acoustic signal.
Embouchure-related muscular activity and accompanying skin movement for the production of tone on the French horn
Takeshi Hirano,* Satoshi Obata,* Chie Ohsawa,* Kazutoshi Kudo,# Tatsuyuki Ohtsuki,# Hiroshi Kinoshita*
*Graduate School of Medicine, Osaka University, Japan; #Graduate School of Arts and Sciences, The University of Tokyo, Japan
The present study investigated dynamics- and pitch-related activity of five selected facial muscles (levator labii superioris, zygomaticus major, depressor anguli oris, depressor labii inferioris, and risorius (RIS)) using surface electromyography (EMG), and the accompanying skin movement using a 3D motion capture system. Ten advanced French horn players produced 6-s-long tones at 3 levels of dynamics (pp, mf, and ff) and 5 levels of pitch (Bb1, F3, F4, Bb4, and F5). For each muscle, mean EMG and kinematics (marker-to-marker distance) were computed for the pre-attack phase of 375 ms prior to the tone onset, and for the sustained phase of 750 ms starting from 3 s after the tone onset. EMG data were normalized by the data obtained from production of the sustained F5 (near-maximum high pitch) tone at ff dynamics. Multivariate analysis of variance on all EMG data revealed that activity was greater at stronger dynamics and at a higher pitch. The dynamics × pitch interaction effect was non-significant. Pitch and dynamics did not influence the facial skin kinematics except for a shortening of the markers placed on RIS. No phase effect was observed for either the EMG or the kinematic data. The findings suggest that proper pre-setting, as well as continuously maintaining the level of isometric contraction in the embouchure muscles, is an essential mechanism for the control of lip and oral cavity wall tension, by which production of accurate pitch and dynamics is accomplished.
Effect of short-term piano practice on fine control of finger movements
Ayumi Nakamura,* Tatsushi Goda,* Hiroyoshi Miwa,* Noriko Nagata,* Shinichi Furuya#
*School of Science and Technology, Kwansei Gakuin University, Japan; #Institute for Music Physiology and Musicians' Medicine, Hannover University of Music, Drama, and Media, Germany
independence of finger movements, each finger performed the fastest tapping task, which required repetitive keystrokes by one finger as fast as possible while keeping the remaining digits depressing the adjacent keys. Results showed that each of the index, middle, ring, and little fingers showed significant improvement in maximum movement rate following the practice, indicating enhanced independent control of the movements of individual fingers. To further assess whether visual feedback regarding the temporal accuracy of keystrokes during the practice affects the training effect on hand motor functions, we asked another six non-musicians to perform the same task with information on the variability of the inter-keystroke interval provided visually. Training-dependent improvement of hand motor functions turned out not to be facilitated even with accuracy feedback. Piano practice with a particular tone sequence at a certain tempo had significant impacts on accuracy, speed, and independent control of finger movements. The transfer effect to both the untrained hand and an untrained tone sequence implies the presence of shared motor primitives in piano playing.
Expert-novice difference in string clamping force in violin playing
Hiroshi Kinoshita,1 Satoshi Obata,1 Takeshi Hirano,1 Chie Ohsawa,1 Taro Ito2
1Biomechanics & Motor Control Lab, Graduate School of Medicine, Osaka University, Osaka, Japan; 2Department of Health and Sports Science, Mukogawa Women's University, Hyogo, Japan
The difference in the nature of the force for clamping the strings between expert (N = 8) and novice (N = 8) violin players was investigated using a violin fitted with a 3D force transducer, together with the produced sound. The players performed repetitive open A- and D-tone (force measurement) production using the ring finger at tempi of 1, 2, 4, and 8 Hz at mezzo-forte. At the 2- and 8-Hz tempi, the same task was performed by the other fingers. At 1 and 2 Hz, the force profiles were characterized by an initial attack force, followed by a leveled force during the finger contact period. The peak attack force for the experts exceeded 5 N, which was significantly larger than about 3.N for the novices. At 4 and 8 Hz, only an attack force was observed, with a lower peak than at the slower tempi and no group difference, but the attack-to-attack variability of force was significantly larger for the novices than for the experts. Both the experts and novices had a lower attack force for the ring and little fingers than for the other two fingers, but the finger difference was much smaller for the experts. The findings suggest that expert violinists use a strategy of trading off the physiological cost of string clamping force against the production of high-quality sound. High consistency of the attack force action is also an important
Expert-novice difference in string clamping force when performing violin vibrato
and their intra-subject variability were computed for each trial. It was found that the novices had significantly smaller average pressing force and amplitude of the shaking force than the experts. The intra-subject variability of shaking-force amplitude and peak-to-peak time was significantly larger for the novices. These findings were similarly common across all four fingers. It was concluded that the mechanism of string clamping force during the vibrato for the novices was different from that of the experts. The findings suggest that the parallel and synergistic production of sufficient pressing and shaking forces is one element of successful vibrato.
The role of auditory and tactile modalities in violin quality evaluation
extent to which they experienced the emotion after listening to the song, watching the video and reading the lyrics. High scores indicated negative emotions. Rap lyrics elicited the most negative response, followed by the Rock lyrics; the Pop genre had the lowest scores. The sample also reacted negatively to the Rap video. Overall, their responses to the different songs were about the same, but responses to the video content and lyrics were markedly different, with the most negative responses to Rap. Since young girls tend to use music to manage their emotions, these findings are a cause for concern. Further research needs to be done linking types of music and ways of coping.
Specialist adolescent musicians' role models: Whom do they admire and why?
Antonia Ivaldi
Department of Psychology, Aberystwyth University, Wales, UK
Previous research into typical adolescents' musical role models has shown that young people are more likely to identify a celebrity figure as their role model because of their image and perceived fame than because of their perceived musical ability. This study builds on this previous work by looking at the role models of young talented musicians, with the aim of exploring whom they admire as a musician and the reasons why. It is anticipated that the adolescents will identify more elite performers and teachers (i.e., non-celebrities) as their role models. 107 young musicians, aged 13-19, took part in a questionnaire study, drawn from two specialist musical environments: junior conservatoire students (n = 59) and county-level students (n = 48, drawn from two local music services). The adolescents were asked questions about whom they admired as a musician (e.g., someone famous, a teacher) and the reasons why (e.g., they are talented, they work hard). Adolescents also rated how much they wanted to become like their role model (aspirations), and how much they thought they could become like their role model (attainability). Results showed that both famous and non-famous figures were identified, with more elite performers and teachers being chosen than in previous research, thus indicating specialist knowledge and a level of exposure to relevant musical figures. Factor analysis generated three factors (image, higher achievement, dedication) for the reasons for admiring the role models. The implications of the adolescents identifying more relevant figures for their attainability and aspiration beliefs are discussed.
Typicality and its influence on adolescents' musical appreciation
to categorize stimuli and predict musical judgments of adolescents with the claim of optimal distinctiveness. As a main result, we present the typicality of a musician's image standardized in terms of an iconographic scale.
1Nagano College of Nursing, Japan; 2Nagoya University Hospital, Japan; 3Seirei Mikatahara
Hips don't lie: Multi-dimensional ratings of opposite-sex dancers' perceived attractiveness
Geoff Luck, Suvi Saarikallio, Marc Thompson, Birgitta Burger, Petri Toiviainen
Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music, University of Jyväskylä, Finland
Previous work has shown that a number of factors can affect the perceived attractiveness of opposite-sex dancers. For women watching men, body symmetry, perceived strength, vigor, skillfulness, and agility of movement, as well as greater variability and amplitude of movement of the neck and trunk, are positively related to perceived attractiveness. For men watching women, body symmetry is also important, and femininity/masculinity of movement likely also plays a role for both sexes. Our aim here was to directly compare characteristics of attractive opposite-sex dancers under the same conditions. Sixty-two heterosexual adult participants (mean age = 24.68 years, 34 females) were presented with 48 short (30 s) audio-visual point-light animations of adults dancing to music. Stimuli comprised eight females and eight males, each dancing to three songs representative of the Techno, Pop, and Latin genres. For each stimulus, participants rated the perceived femininity/masculinity (as appropriate), sensuality, sexiness, mood, and interestingness of the dancer. Seven kinematic and kinetic features (downforce, hip wiggle, shoulder vs. hip angle, hip-knee phase, shoulder-hip ratio, hip-body ratio, and body symmetry) were computationally extracted from the stimuli. Results indicated that, for men watching women, hip-knee phase angle was positively related to ratings of perceived interestingness and mood, and hip-body ratio was positively related to ratings of perceived sensuality. For women watching men, downforce was positively related to ratings of perceived sensuality. Our results partially support previous work, and highlight some similarities and differences between male and female perceptions of the attractiveness of opposite-sex dancers.
How was it for you? Obtaining artist-directed feedback from audiences at live musical events
John Sloboda, Melissa Dobson
Guildhall School of Music & Drama, UK
Musicians generally have rather limited means of obtaining direct and detailed feedback from their live audiences; this is often limited to applause and the feel of the room. Although many research studies collect more detailed evaluative responses from music listeners, this is often done without reference to the specific concerns or interests of the musicians involved. It is rare for the musicians themselves to be directly involved in the formulation of the research questions or the review of the data obtained. This research project aims to develop and pilot a means for audiences to provide responses to questions which are of direct interest and importance to the musicians involved in live performance events. Specifically, we wish to evaluate whether such processes enhance (a) audience engagement, and (b) the professional and artistic development of the musicians involved. The research team has worked with several artistic teams in a process which involves (a) discovering artistically relevant questions which can be validly posed to audience members, (b) collaboratively devising appropriate means of collecting this data (e.g. questionnaire, post-performance discussion), (c) jointly reviewing the outcomes of the event and the audience data, and (d) obtaining reflective feedback from those involved regarding the value of being involved in the exercise. We will illustrate the process with specific data from one or more live musical events which have taken place between July 2011 and May 2012. This includes the world premiere of a composition whose inspiration was a traditional day of celebration in the composer's home town, characterised by distinctive rituals involving folk music and dance. The composer was interested to know whether audience knowledge of the programmatic background to the composition (provided by a programme note) was a significant factor in audience appreciation of the work. In this case, unexpected emergent features of the research experience yielded unanticipated benefits, with the composer perceiving heightened audience attention to the piece being researched, and experiencing consequent affirmation. Involvement of musicians in the design and implementation of research on audience response is a significant means of enhancing mutual understanding between musicians and audiences and of making research more directly relevant to practitioner concerns. Issues for discussion include appropriate means of ensuring sufficient research rigour without distorting the artistic process.
Everyday Listening Experiences

Utilizing the Experience Sampling Method, this investigation aimed to update our understanding of everyday listening in situ. Self-reports regarding where, when, and how music was experienced, as well as ratings concerning affect before and after exposure to music and the perceived effects of what was heard, were gathered over one week. Responding to two text messages sent at random times between 8:00 and 23:00 daily, 370 participants completed online responses concerning their experience with any music heard within a two-hour period prior to receiving each text message. Results from the 177 participants who completed at least 12 of 14 entries demonstrated that music was heard on 46.31% of occasions overall. While music was heard throughout the day, and more often in private than public spaces, detailed analyses revealed significant patterns based on time, location, device, selection method, mood, ratings of choice and attention, and the perceived effects of what was heard. Most importantly, the results suggest that it is the level of control that a person has over the auditory situation which interacts most strongly with the other variables to influence how he or she will hear the music and how it is perceived. In contrast to North, Hargreaves, and Hargreaves's (2004) proposition that the value of music has decreased in light of technological advancement, the current findings imply that, with the greater control technology affords, its value has instead increased, when we consider individuals as actively consuming (and thereby using) music rather than simply as passive listeners.
Effects of Structural and Personal Variables on Children's Development of Music Preference
The Pairwise Variability Index as a Tool in Musical Rhythm Analysis

Godfried T. Toussaint
Faculty of Science, New York University Abu Dhabi, United Arab Emirates
The normalized pairwise variability index (nPVI) is a measure of the average variation (contrast) of durations obtained from successive pairs of events. It was originally conceived for measuring the rhythmic differences between languages on the basis of vowel length. More recently, it has also been employed successfully to compare rhythm in speech and music. London and Jones (2011) have suggested that the nPVI measure could become a useful general tool for musical rhythm analysis. One goal of this study is to determine how well the nPVI models various dimensions of musical rhythmic complexity, ranging from human performance and perceptual complexities to musical notions of syncopation, and mathematical measures of syncopation and rhythm complexity. A second goal is to determine whether the nPVI measure is capable of discriminating between short, symbolic, musical rhythms across meters, genres, and cultures. It is shown that the nPVI measure suffers from severe shortcomings in the context of short symbolic rhythmic patterns such as African timelines. Nevertheless, comparisons with previous experimental results reveal that for some data the nPVI measure correlates mildly, but significantly, with performance complexity. It is also able to discriminate between certain distinctive families of rhythms. However, no significant differences were found between binary and ternary musical rhythms, mirroring the findings of Patel and Daniele (2003) for language.
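The nPVI discussed above has a standard closed form: for durations d_1..d_m, nPVI = 100/(m-1) times the sum over successive pairs of |d_k - d_(k+1)| / ((d_k + d_(k+1))/2). A minimal Python sketch (the duration sequences are illustrative, not drawn from the study's data):

```python
def npvi(durations):
    """Normalized Pairwise Variability Index of a duration sequence.

    nPVI = 100/(m-1) * sum_k |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)
    """
    if len(durations) < 2:
        raise ValueError("need at least two durations")
    pairs = zip(durations, durations[1:])
    total = sum(abs(a - b) / ((a + b) / 2) for a, b in pairs)
    return 100 * total / (len(durations) - 1)

# A perfectly even rhythm has no pairwise contrast at all.
print(npvi([1, 1, 1, 1]))  # 0.0
# Strict long-short alternation (e.g. dotted rhythms) scores high.
print(npvi([3, 1, 3, 1]))  # 100.0
```

Because the index only looks at adjacent pairs, very different rhythms can share an nPVI value, which is one source of the shortcomings the abstract reports for short symbolic patterns.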
"The
types
of
ViPES":
A
typology
of
musicians
stage
entrance
behavior
111
first visible action for the audience can be regarded as the starting point of musical persuasion. Our aims are two-fold: first, we will reveal a typology of performers' persuasive stage entrance behavior. Second, we would like to reveal the fundamental components underlying the audience's construction of performer evaluations. We will present a first sketch of a typology of musicians' stage entrance behavior. Furthermore, we will offer a latent-structured framework of the audience's attitude mechanism. Based on our performer typology, we will obtain a deeper understanding of the audience's reactions and attitudes towards varieties of stage performances.
This work has three main goals: first, to study the perception of melodic similarity in flamenco singing with both experts and novices; second, to contrast judgments for synthetic and recorded melodies; third, to evaluate musicological distances against human similarity judgments (Mora et al. 2010). We selected the melodic exposition from 12 recordings of the most representative singers in a particular style, martinete. Twenty-seven musicians (including three flamenco experts) were asked to listen to the melodies and sort them into categories based on perceived similarity. In one session, they sorted synthetic melodies derived from the recordings; in the other session, they sorted the recorded melodies. They described their strategies in an open questionnaire after each session. We observed significant differences between the criteria used by non-expert musicians (pitch range, melodic contour, note duration, rests, vibrato and ornamentations) and those used by flamenco experts (prototypical structure of the style, ornamentations and reductions). We also observed significant correlations between judgements from non-expert musicians and flamenco experts, between judgements for synthetic and recorded melodies, and between musicological distances and human judgements. We also observed that the agreement amongst non-expert musicians was significantly lower than amongst flamenco experts. This study corroborates that humans have different strategies for comparing synthetic and real melodies, although their judgements are correlated. Our findings suggest that computational models should incorporate features other than energy and pitch when comparing two flamenco performances. Furthermore, judgments from flamenco experts also differed from those of novice listeners due to their implicit knowledge. Finally, novice listeners, even with strong musical training, did not substantially agree on their ratings of these unfamiliar melodies.
in a systematic way. This study shows the qualitative and quantitative impact that time-scale has on the evaluation of a simple tonal induction model, when the concurrent probe-tone method is used to capture continuous ratings of perceived relative stability of pitch-classes. The music stimulus is sliding-windowed at many time-scales, ranging from fractions of a second to the whole musical piece. Each frame is analysed to obtain a pitch-class profile and, for each temporal scale, the time series is compared with the empirical annotations. Two commonly used frame-to-frame metrics are tested: (a) correlation between the 12-D vectors from ratings and model; (b) correlation between the 24 key activation strengths, obtained by correlating the 12-D vectors with the Krumhansl and Kessler key profiles. We discuss the metric artifacts introduced by the second representation, and we show that the best-performing time-scale, minimizing the root mean square of the frame-to-frame distances along time, is far longer than short-term memory conventions. We propose a temporal multi-scale analysis method as an interactive tool for exploring the effect of time-scale and different multidimensional representations in tonal cognition modeling.
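Metric (b), the correlation of a 12-D pitch-class profile with the 24 transposed Krumhansl and Kessler key profiles, can be sketched as follows. The template values are the published K&K probe-tone ratings; the input profile is a made-up example, not data from this study:

```python
import math

# Krumhansl & Kessler (1982) tonal hierarchy profiles (C major / C minor).
KK_MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
KK_MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def key_strengths(pc_profile):
    """Correlate a 12-D pitch-class profile with all 24 transposed
    K&K profiles. Returns 24 key-activation strengths:
    12 major keys (tonics C..B), then 12 minor keys."""
    strengths = []
    for template in (KK_MAJOR, KK_MINOR):
        for tonic in range(12):
            # Rotate so that rotated[p] == template[(p - tonic) % 12].
            rotated = template[-tonic:] + template[:-tonic]
            strengths.append(pearson(pc_profile, rotated))
    return strengths

# A profile dominated by the C major triad (pcs 0, 4, 7) should
# activate C major (index 0) most strongly.
profile = [5, 0, 1, 0, 4, 1, 0, 4, 0, 1, 0, 1]
s = key_strengths(profile)
print(max(range(24), key=lambda k: s[k]))  # 0 -> C major
```

Sliding this computation over windowed pitch-class profiles, at each of the time-scales the abstract describes, yields the 24-D key-strength time series that is compared frame-to-frame with the listeners' ratings.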
Personality of Musicians: Age, Gender, and Instrumental Group Differences

Blanka Bogunović
Faculty of Music, University of Arts, Serbia
The idiosyncratic complexity of cognitive abilities, motivation and personality structure gives a personal mark to the processes of perception, cognition and emotional arousal which take place during different musical activities, such as listening, performing, creating and learning music. The intention of this study was to gain new knowledge by using a newer theoretical approach and an instrument for personality assessment. Namely, to investigate
Małgorzata Chmurzyńska
Department of Music Psychology, Chopin University of Music
The researchers indicate that personality is a significant factor determining achievement, both of students during their music education and of professional musicians in their musical careers. The role of personality is considered more significant in the later stages of music education, when the level of musical ability no longer differentiates between the students who have received musical instruction. The personality traits particularly characteristic of musicians include a tendency to introversion (which makes them practice too much in isolation), emotional instability, sensitivity, perseverance, and openness (Kemp, 1996; Manturzewska, 1974). Among music students who receive higher marks at school, a higher level of self-efficacy (McPherson & McCormick, 2006) and a lower level of neuroticism (Manturzewska, 1974) have been identified. However, we are still seeking an answer to the question: which personality traits are conducive to a high level of musical performance? The aim of the present study was to examine the personality differences between high achievers and average achievers among pianists. The variables of gender and nationality were taken into account. The subjects were participants of the 16th International Fryderyk Chopin Piano Competition in Warsaw as well as other piano competitions (high achievers) and ordinary piano students (average achievers). A control group of non-musicians was used for comparison, including the normalization samples of the employed tests. The respondents completed the NEO Five-Factor Inventory (Costa & McCrae, 1992) and the General Self-Efficacy Scale (Schwarzer, 1998). Moreover, the Formal Characteristics of Behavior-Temperament Inventory (Zawadzki & Strelau, 1998) was used to measure the temperamental traits specified by the Regulative Theory of Temperament (Strelau, 1996), which include briskness, perseverance, sensory sensitivity, emotional reactivity, endurance, and activity. The results are in the process of being analyzed. So far, the analyses of the NEO-FFI and GSES results have shown that the most distinctive aspects of the pianists' personalities are a high level of Openness, Conscientiousness (especially among females) and a very high level of self-efficacy in comparison to the control group. The study has revealed differences between the pianists and non-musicians; so far, hardly any differences have been found between the high achievers and average achievers among the pianists. Possibly the analysis of the temperamental traits will bring new facts about associations between personality and high-level musical performance.
Attitudes towards music piracy: The impact of positive anti-piracy messages and contribution of personality

Steven C. Brown
Psychology and Allied Health Sciences, Glasgow Caledonian University, Scotland
Conventional anti-piracy strategies have been largely ineffective, with pirates adapting successfully to legal and technological changes. The present research aims to address the two principal areas of research (predictive factors and deterrents) in a novel way, with personality being considered as a potential predictive factor and positive anti-piracy messages proposed as a potentially effective deterrent. 261 participants (45.6% male) with a mean age of 26.3 years completed an online questionnaire, outlining their music consumption preferences and completing the 60-item version of the HEXACO-PI-R (Lee & Ashton, 2004) before being allocated to one of four conditions: legal sales of music encourage future live performances; legal sales of music allow fans greater access to exclusive content; legal sales of music will incorporate charitable donations; and a control. Participants' attitudes towards music piracy were then measured using an original construct (AMP-12). Condition had no effect on piracy attitudes, whereas personality was a significant predictor: participants scoring higher on the AMP-12 scored lower on honesty-humility and conscientiousness and higher on openness. Openness emerged as a key individual difference, with participants scoring higher on this trait demonstrating a greater likelihood to favour vinyl, re-mastered versions of albums and listening to live recordings. Crucially, preference for digital music was a significant predictor of pro-piracy attitudes. Several demographic differences were also observed, which point towards a gender-segmented approach in appeasing individuals engaging in music piracy as well as accommodating the increasing trend towards digital music. Implications for future anti-piracy strategies are discussed.
languages. The results indicate that background music can improve memory during second language learning tasks and also bring higher enjoyment, which could help build focus and promote future learning.
Does Native Language Influence the Mother's Interpretation of an Infant's Musical and Linguistic Babblings?

Teachers' Opinions of Integrated Musical and Language Learning Activities
Karen M. Ludke
Institute for Music in Human and Social Development, Edinburgh College of Art, University of Edinburgh, United Kingdom
There is increasing interest in the potential of music to support language learning and memory (Wallace, 1994; Schön et al., 2008). Listening, perceiving, imitating, and creating are basic skills in both language and music. The Comenius Lifelong Learning Project "European Music Portfolio: A Creative Way into Languages" (EMP-L) aims to support children's learning in music and languages through a flexible, integrated approach. This study explored Scottish music teachers' opinions of the music and language activities developed by the international EMP-L team. Special consideration was given to the Scottish Curriculum for Excellence (CfE), wherein music learning falls into the expressive arts curriculum area and modern language learning into the languages area. This qualitative study was conducted with 6 trainee primary music teachers and 2 experienced teachers who were trained to use the EMP-L activities to support musical and language learning outcomes. Pre- and post-teaching questionnaires and focus groups asked teachers to comment on the applicability of the EMP-L's core activities to learning and progression. Pre- and post-implementation survey data were analyzed together with teachers' comments during the focus group sessions. Overall, teachers' opinions of the EMP-L materials were positive, and the lessons led to successful CfE experiences and outcomes. However, some concerns were raised, particularly regarding progression and whether generalist primary teachers could use the activities without support from music and/or language specialists. The teachers' opinions of the EMP-L activities have the potential to improve the materials and to inform holistic, integrated music education initiatives in Europe and elsewhere.
Introducing ECOLE: a language-music bridging paradigm to study the role of Expectancy and COntext in social LEarning
any more entrainment. A clear between-groups difference was found: compared with the cold-end group, subjects in the fade-out group continued pulsation about 3 s longer (t(52) = 2.87, p = .007, Cohen's d = 0.90). We call this effect the Pulse Continuity Illusion (PCI, say "Picky").
The influence of imposed meter on temporal order acuity in rhythmic sequences

Pitch and time salience in metrical grouping

Jon Prince
School of Psychology, Murdoch University, Australia
I report two experiments on the contribution of pitch and temporal cues to metrical grouping. Recent work on this question has revealed a dominance of pitch. Extending this work, a dimensional salience hypothesis predicts that the presence of tonality would influence the relative importance of pitch and time. Experiment 1 establishes baseline values of accents in pitch (pitch leaps) and time (duration accents) that result in equally strong percepts of metrical grouping. Pitch and temporal accents are recombined in Experiment 2 to see which dimension contributes more strongly to metrical grouping (and how). Both experiments test values in tonal and atonal contexts. Both dimensions had strong influences on perceived metric grouping, but pitch was clearly the more dominant. Furthermore, the relative strength of the two dimensions varied based on the tonality of the sequences. Pitch contributed more strongly in the tonal contexts than the atonal, whereas time was stronger in the atonal contexts than the tonal. These findings are inconsistent with an interpretation that stimulus structure enhances the ability to extract, encode, and use information about an object. Instead, they imply that structure in one dimension can highlight that dimension at the expense of another (i.e., induce dimensional salience).
How is the Production of Rhythmic Timing Variations Influenced by the Use of Mensural Symbols and Spatial Positioning in Musical Notation?
Bach's themes of the 24 preludes from the Well-Tempered Clavier. The study strives to find regularities in the synesthetic experience, i.e. in the connection between sounds and colors in professional musicians with absolute pitch.
The Role of Pitch and Timbre in the Synaesthetic Experience

Konstantina Orlandatou
Institute of Musicology, University of Hamburg, Germany
Synaesthesia is an involuntary process which occurs when a stimulus not only stimulates the appropriate sense but also stimulates another modality at the same time. In order to examine whether pitch and timbre influence the synaesthetic visual experience induced by sound, an experiment with sound-colour synaesthetes (N=22) was conducted. It was found that (a) high-pitched sounds lead to a presence of hue, (b) low-pitched sounds to an absence of hue, (c) single frequencies cause a uni-colour sensation, and (d) multiple high-pitched frequencies induce a multi-colour sensation. The variation of chromatic colour present in the sensation depends on the timbre of the sound. These findings suggest that the synaesthetic mechanism (in the case of sound-colour synaesthesia) maps sound to visual sensations depending on the mechanisms underlying temporal and spectral auditory processing.
Musical Synesthesia: the role of absolute pitch in different types of pitch-tone synesthesia
Getting the shapes right at the expense of creativity? How musicians' and non-musicians' visualizations of sound differ
New music for the Bionic Ear: An assessment of the enjoyment of six new works composed for cochlear implant recipients

Hamish Innes-Brown,* Agnes Au,#* Catherine Stevens, Emery Schubert, Jeremy Marozeau*
* The Bionics Institute, Melbourne, Australia; # Department of Audiology and Speech Pathology, The University of Melbourne, Australia; MARCS Institute, University of Western Sydney, Australia; School of English, Media and Performing Arts, University of New South Wales, Australia
The enjoyment of music is still difficult for many cochlear implant (CI) users. This study aimed to assess cognitive, engagement, and technical responses to new music composed specifically for CI users. From 407 concertgoers who completed a questionnaire, responses from groups of normally-hearing listeners (NH, n = 44) and CI users (n = 44), matched in age and musical ability, were compared to determine whether specially-commissioned works would elicit similar responses from both groups. No significant group differences were found on measures of interest, enjoyment and musicality, whereas ratings of understanding and of instrument localization and recognition were significantly lower from CI users. Overall, ratings of the music were typically higher for percussion pieces. The concert successfully elicited similar responses from both groups in terms of interest, enjoyment and musicality, although technical aspects, such as understanding, localisation, and instrument identification, continue to be problematic for CI users.
PerMagnus Lindborg
Nanyang Technological University (Sgp) / KTH Royal Institute of Technology (Swe)
We present a pilot questionnaire study investigating visitors' experience of an interactive and immersive sound installation, The Canopy (Lindborg, Koh & Yong 2011), exhibited at ICMC in Huddersfield. The artwork consists of a 4.5 m windsurfing mast suspended by strings, set up in a black-box space and illuminated in a dramatic fashion. The visitor can manipulate the pole with several degrees of control: two for floor position, two for pole direction, and one each for twist, grip height and squeeze. A real-time program in MaxMSP (Cycling '74) maps control data to sound synthesis and 3D diffusion over 8 loudspeakers. The concept of the installation was to sail in a sonic storm of elementary particles. 35 people responded to the questionnaire immediately after having visited the installation. The questions aimed to gauge various qualities of the interactive experience: the amount of time spent, the relative importance of visual, sculptural and sonic elements, the amount of fun, and the perceived quality of gestural control over spatial and timbral sound features. For the dependent variable fun amount, 6 graded sentences were given as response options. Visitors also completed forms for the Ten-Item Personality Index (TIPI; Gosling 2003) to estimate OCEA scores, and for Ollen's Musical Sophistication Index (OMSI; Ollen 2005), and gave free-form feedback. The aim of the questionnaire was to investigate whether people with different musical sophistication and personality traits would value different aspects of the experience in systematic ways. On the OMSI, 24 respondents scored high (p>0.75) and 7 low (p<0.45). Thus divided, they were treated as two groups in the analysis. ANOVA revealed that the groups had similar OCEA scores, except for Agreeableness, where the high-OMSI group had a marginally higher mean. A stepwise regression of fun on all the other variables and on OMSI group interaction with OCEA revealed that people who felt they could act on the spatial control had more fun, and this was particularly the case for less musically sophisticated people who were more extrovert or less agreeable. With time spent as the dependent variable, a similar procedure indicated that people (particularly the more conscientious) who felt they could act on the spatial control stayed significantly longer in the installation. While these results would indicate that spatial control is paramount, most free-form feedback focussed on timbral control. We are currently investigating whether the correlations are moderated by personality traits, and further results will be presented at the conference.
Richard Glover
Department of Music, University of Huddersfield, UK
This study will discuss a cognitive approach to the experience of experimental music created entirely from sustained tones, in which there is an absence of the typical perceptual cues for creating sectional boundaries, thereby directing the listener's focus towards surface phenomena within the aural environment. Source material for the study comprises recent compositions by the American composers Phill Niblock and Alvin Lucier, as well as by the author. The approaches to harmonic transformation in these pieces are outlined, alongside a detailed description of the activity within the surface layer of the sound, comprehensively surveying the myriad acoustic and psychoacoustic phenomena prevalent. The presentation draws upon gestalt grouping mechanisms to describe how this surface activity is interpreted by the cognitive process. The notion of resulting articulations within sections is explored, and consequently what this means in terms of stability and instability in experience for the listener, including considerations of temporality. The manner in which this process feeds into the compositional procedure for these composers is also explored, looking specifically at the pitch structures employed, how composed indeterminacy in sustained-tone composition affects the cognition process, and why these composers have a tendency towards writing for acoustic instruments rather than electronic sources. This study provides further strategies for how we might analyse sustained-tone music, directing discussion towards the sounding experience and cognitive comprehension of the listener rather than solely from the score. This understanding can open up further avenues of research for composers, performers and interdisciplinary theorists.
Just Riff Off: What determines the subjectively perceived quality of hit riffs?

Andrew Goldman
Centre for Music and Science, University of Cambridge, United Kingdom
Cognitive models of improvisation align with pedagogical methods in suggesting improvisers' need for both procedural and declarative knowledge. However, behavioral experiments do not directly address this division, due to the difficulty of operationalizing improvisation. The present study seeks to experimentally demonstrate the different types of knowledge involved in producing musical improvisations and to contribute an experimental paradigm. Ten jazz pianists improvised on a MIDI keyboard over backing tracks. They produced one-handed monophonic improvisations under a 2x2x2 fully factorial design. The conditions contrasted levels of motor familiarity by varying which hand (right vs. left) played which musical function (melody vs. bass line) in which key (Bb vs. B). MIDI files were analyzed using MATLAB to determine the entropy, the proportion of diatonic pitch classes, the nPVI of a quantized version of the data, and the nPVI of a version left unquantized. Separate ANOVAs compared these values across conditions. Significant main effects were found between keys and hands. In the key of B, pianists produced improvisations with lower entropy and with more diatonic pitches than in Bb. The right hand had lower quantized nPVI values than the left hand. Several significant interactions were also found. This research reframes the distinction between theoretically proposed types of musical knowledge used in improvisation. In unfamiliar motor contexts, pianists improvised with less pitch-class variability and more diatonic pitch classes, implying that in the absence of procedural knowledge, improvisers rely more on explicit knowledge of tonality. This suggests new ways to consider modes of improvising.
Distributed creativity in Tongue of the Invisible

Cognition and Segmentation in Collective Free Improvisation: An Exploratory Study
with free improvisers in December 2011, in order to understand the cognition of musicians placed in a CFI context, in particular the role played by their representations of the improvisation under different types of sequences in explaining both their behaviors and the success or failure of coordination.
David Huron
School of Music, Ohio State University, USA
A number of musically pertinent lessons are drawn from research on animal behavior (ethology). The ethological distinction between signals and cues is used to highlight the difference between felt and expressed emotion. Several ethologically-inspired studies are described, principally studies related to music and sadness. An ethologically-inspired model is proposed (the Acoustic Ethological Model). The question of how music induces emotion in a listener is addressed, and it is proposed that signaling represents a previously unidentified mechanism for inducing affect. An integrated theory of sadness/grief is offered, in which sadness is characterized as a personal/covert affect and grief as a social/overt affect. Sadness and grief tend to co-occur because they provide complementary strategies for addressing difficult circumstances.
Emotion perception of dyads and triads in congenital amusia
Rare pitch-classes are larger and stronger: implicit absolute pitch, exposure effects, and qualia of harmonic intervals

Richard Ashley
Program in Music Theory and Cognition, Northwestern University, USA
This study investigates how music may influence viewers' responses to political advertisements, looking specifically at the time course of affective responses. It builds on prior research dealing with affective and perceptual responses to brief stimuli. The primary hypothesis is that a listener's very early response to a commercial's music serves as an affective prime for processing the remainder of the commercial. This project involves both a corpus analysis and an experiment. The corpus used is the database of political advertisements maintained by the Washington Post; this study restricted itself to television and radio commercials from the year 2008, during the general US Presidential campaigns of Barack Obama and John McCain. The experiment collects affective valence and intensity responses to excerpts from the ads' beginnings in three conditions: audio only, video only, and audio + video. Excerpts are of variable length (33 msec to 4200 msec) and also include the entire commercial (most of which are 30 seconds in length). In the results to date, it appears that music provides the fastest path to an emotional response on the part of a viewer. Music is typically employed from the very beginnings of advertisements; affective responses to audio excerpts of 100-250 msec are frequently stronger than those found in the corresponding visual excerpts, depending on the ads' contents. Although judgments of the full commercials are more intense and more stable than judgments of the brief excerpts, the affective priming seen in responses to the music is borne out by the commercial as a whole.
Do Opposites Attract? Personality and Seduction on the Dance Floor

Geoff Luck, Suvi Saarikallio, Marc Thompson, Birgitta Burger, Petri Toiviainen
Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music, University of Jyväskylä, Finland
Some
authors
propose
that
we
are
more
attracted
to
opposite-sex
individuals
with
personalities
similar
to
our
own.
Others
propose
that
we
prefer
individuals
with
different
personalities.
We
investigated
this
issue
by
examining
personality
and
attraction
on
the
dance
floor.
Specifically,
we
investigated
how
the
personality
of
both
observers
and
dancers
affected
the
formers
attractiveness
ratings
of
the
latter.
Sixty-two
heterosexual
adult
participants
(mean
age
=
24.68
years,
34
females)
watched
48
short
(30
s)
audio-visual
point-light
animations
of
adults
dancing
to
music.
Stimuli
comprised
eight
females
and
eight
males,
each
dancing
to
three
songs
representing
Techno,
Pop,
and
Latin
genres.
For
each
stimulus,
participants
rated
the
perceived
skill
of
the
dancer,
and
the
likelihood
with
which
they
would
go
on
a
date
with
them.
Both
dancers' and observers' personalities were assessed
using
the
44-item
version
of
the
Big
Five
Inventory.
Correlational
analyses
revealed
that
women
rated
men
high
in
Openness
to
experience
as
better
dancers,
while
men
low
in
Openness
gave
higher
ratings
of
female
dancers.
Women
preferred
more
Conscientious
men,
but
men
preferred
less
Conscientious
women.
Women
preferred
less
Extraverted
men,
while
men
preferred
more
Extraverted
women,
especially
if
they
were
more
Extraverted
themselves.
Both
women
and
men
preferred
less
Agreeable
opposite-sex
dancers.
Finally,
both
women
and
men
preferred
more
Neurotic
opposite-sex
dancers.
This
study
offers
some
fascinating
insights
into
the
ways
in
which
personality
shapes
interpersonal
attraction
on
the
dance
floor,
and
partially
supports
the
idea
that
opposites
sometimes
do
attract.
Doubtful
effects
of
background
music
in
television
news
magazines
Cognitive
Science
is
widely
regarded
as
the
best
effort
at
studying
the
mind
that
has
been
made
to
date,
paving
the
way
for
a
truly
rigorous
account
of
cognition,
using
the
methods
and
epistemic
commitments
of
natural
science.
However,
a
large
number
of
authors
have
expressed
a
worry
that
Cognitive
Science
fails
to
account
for
phenomenological
data
and
is
therefore
not
a
full
theory
of
cognition.
As
Joseph
Levine
(1983)
put
it,
Cognitive
Science
is
suffering
from
an
explanatory
gap.
In
other
words,
regardless
of
what
paradigm
is
employed
to
explain
and
predict
behavioural
data,
Cognitive
Science
fails
to
account
fully
for
how
the
mental
is
subjectively
experienced.
This
issue
has
been
debated
primarily
in
the
philosophy
of
mind
literature.
However,
insofar
as
it
concerns
Cognitive
Science,
I
will
argue
that
music
cognition
researchers
should
pay
attention
to
this
debate.
I
will
outline
the
methodological
and
epistemological
concerns
highlighted
by
the
explanatory
gap
argument,
as
well
as
indicating
some
concrete
ways
in
which
music
cognition
researchers
may
attempt
to
move
beyond
the
explanatory
gap
(Gallagher
and
Brøsted Sørensen
2006).
I
will
address
the
issue
of
meaning
in
light
of
the
naturalistic
approaches
of
Cognitive
Science,
arguing
that
attention
to
the
explanatory
gap
literature
allows
us
to
frame
the
issue
of
how
musical
meaning
may
survive
in
a
naturalized
picture
of
music
cognition.
I
will
discuss
the
project
of
naturalizing
phenomenology
(Petitot
1999;
Zahavi
2010),
arguing
for
its
in-principle
possibility
as
well
as
the
promise
it
holds
for
a
more
truly
phenomenological
and
holistic
approach
to
music
cognition.
Most
of
the
literature
on
the
interface
between
philosophy
of
mind
and
Cognitive
Science
to
date
has
focused
on
research
into
visuo-motor
perception;
comparatively
little
attention
has
been
paid
to
auditory
or
musical
perception.
I
will
address
the
issue
of
the
visuocentrism
of
philosophy
of
mind,
arguing
that
greater
attention
to
musical
cognition,
as
well
as
greater
contact
between
philosophy
of
mind
and
Cognitive
Science,
is
important
for
a
more
complete
understanding
of
perception
in
general.
A
Nonrepresentationalist
Argument
for
Music
Patrick
Hinds
Music
Dept.,
University
of
Surrey,
United
Kingdom
Music
is
a
universally
accessible
phenomenon
that
resists
understanding.
These
conditions
have
prompted
a
considerable
discourse
on
music's
transcendental
properties,
tied
up
with
the
notion
of
an
exclusively
musical
meaning.
Following
a
literature
review,
I
reject
this
notion,
favouring
a
leaner
theory
that
takes
music's
lack
of
objective
meaning
just
as
a
lack
of
objective
meaning.
I
argue
that
music
is
a
self-directed
practice,
contingent
on
a
perceiver's
prerogative
to
block
the
perceived
objective
significance
of
an
object
and
engage
with
it
for
the
sake
of
engaging
itself.
This
subversion
of
meaning
is,
I
suggest,
a
mechanism
in
virtue
of
which
we
may
have
consciousness
of
sound
tout
court:
when
the
world
is
separated
from
the
aspect
of
self
that
is
affording
the
means
of
perception
and
the
latter
is
taken
as
a
subject
of
experience.
Such
an
argument
can
make
intelligible
the
concept
of
intrinsically
cognitive
operations:
those
that
do
not
refer
outwardly.
Emerging
research
in
music
psychology
gives
empirical
grounding
to
this
concept,
accounting
for
music
experience
with
psychological
structures
that
are
nonrepresentational
and
thus
lack
extrinsic
content.
The
upshot
is
that
music
can
exemplify
nonrepresentational
experience,
where
a
representation
is
an
individuated
(mental)
object
with
semantic
properties.
There
may
be
no
specifiable
object
true
to
the
experience
because
music
is
partly
constituted
by
that
which
is
intrinsically
cognitive.
This
framework
could
thus
be
wielded
in
a
discussion
of
qualia,
potentially
elucidating
the
intuition
that
some
qualities
of
experience
are
irreducibly
mental
in
nature.
Topical
Interpretations
of
Production
Music
Christoph
Louven
Institut
für Musikwissenschaft und Musikpädagogik, Universität Osnabrück,
Germany
The
assumption
that
younger
children
are
more
open-eared
than
older
children,
i.e.
that
they
are
more
open
towards
unconventional
styles
of
music
than
older
children,
has
been
the
subject
of
several
studies
in
the
last
10
years.
Most
of
these
studies
are
based
on
a
design
that
derives
open-earedness
just
from
preference
ratings
of
music
examples
with
different
styles.
This
leads
to
an
intermixture
of
the
concepts
of
preference
and
openness
that
we
assume
to
be
a
serious
problem.
Therefore,
we
created
a
new
approach
with
a
computer-based
design
that
combines
preference
ratings
with
measuring
voluntary
listening
durations
and
derived
a
numerical
index
of
open-earedness.
Results
with
primary
school
children
showed
that
although
preferences
for
different
musical
styles
changed
considerably
during
primary
school,
the
index
of
open-earedness
did
not.
Since
all
previous
studies
on
open-earedness
only
dealt
with
primary
school
children,
it
has
not
yet
been
established
what
happens
to
open-earedness
in
older
populations.
Therefore,
this
paper
will
present
the
results
of
two
follow-up
studies
with
Gymnasium
(high
school)
pupils
and
university
students,
partly
with
special
music
education
(pupils
of
a
Gymnasium
with
a
special
music
profile
or
university
music
students).
This
allows
for
the
observation
of
both
the
development
of
open-earedness
after
primary
school
and
the
influence
of
special
musical
training
on
this
process.
Music
lessons,
emotion
comprehension,
and
IQ
Introducing
a
new
test
battery
and
self-report
inventory
for
measuring
musical
sophistication:
The
Goldsmiths
Musical
Sophistication
Index
music,
and
perception
and
production
abilities.
Furthermore,
these
self-reported
multidimensional
profiles
of
musical
sophistication
are
related
to
performance
on
the
four
perception
and
production
tasks.
The
Gold-MSI,
as
a
new
tool
to
the
research
community,
measures
the
level
of
musical
sophistication
in
the
non-specialist
population
on
several
distinct
dimensions.
The
questionnaire
and
the
ability
tests
have
been
psychometrically
optimized
and
come
with
data
norms
from
a
western
sample
of
more
than
120,000
individuals.
The
Gold-MSI
is
fully
documented
and
free
to
use
for
research
purposes.
Thursday
26
July
Symposium
2:
Grand
Pietra
Hall,
09:00-11:00
Involuntary
Musical
Imagery:
Exploring
earworms
Lassi
A.
Liikkanen
Helsinki
Institute
for
Information
Technology,
Aalto
University,
Finland
Department
of
Communications,
Stanford
University,
CA,
USA
This
paper
addresses
the
state
of
the art
in
the
studies
of
involuntary
musical
imagery
(INMI),
an
emerging
topic
in
psychology.
We
define
INMI
as
a
private,
conscious
experience
of
reliving
a
musical
memory
without
a
deliberate
attempt.
We
review
the
empirical
literature
and
draw
guidelines
for
future
research
on
the
matter.
As an example
of
a
new
research
direction,
we
provide
a
study
of
how
INMI
relates
to
social
interactions
in
everyday
life
based
on
a
corpus
of
over
one
thousand
open-ended
survey
questions.
The
data
shows
that
INMI
can
evoke
overt
behavior
and
have
social
consequences.
Some
people
found
it
difficult
to
distinguish
their
overt
spontaneous
musical
behavior
from
covert
experiences.
In
response
to
an
INMI-inspired
music
act,
many
had
experienced
socially
awkward
situations
or
were
consciously
trying
to
avoid
public
musical
expression.
At
the
other
end,
some
people
choose
expression
and
intentionally
try
to
pass
on
the
earworm,
even
if
they
expected
reproach
for
doing
so.
These
results
suggest
that
INMI
is
an
instance
of
involuntary
music,
sometimes
associated
with
overt
behaviors
and
social
consequences.
The
next
steps
in
the
research
on
INMI
should
be
targeted
to
understanding
the
psychology
underlying
this
phenomenon
more
deeply
and
socially.
Instead
of
characterizing
the
phenomenology
on
different
levels,
we
should
seek
the
causal
mechanisms
related
to
INMI,
possibly at the neural level, and differentiate the different components of INMI from each other and from related psychological and psychopathological phenomena.
Earworms
from
Three
Angles:
Situational
Antecedents,
Personality
Predisposition
and
a
Musical
Formula
Arousal,
Valence
and
the
Involuntary
Musical
Image
Freya
Bailes
MARCS
Institute,
University
of
Western
Sydney
The
study
of
the
emotional
qualities
of
imagined
music
is
in
its
infancy.
This
paper
reports
results
from
a
follow-up
of
Bailes
(2006,
2007),
with
the
aim
of
exploring
the
relationship
between
involuntary
musical
imagery
(INMI)
and
emotion.
Forty-seven
respondents,
aged
18
to
53
years,
were
contacted
by
SMS
for
a
total
of
42
times
over
a
period
of
7
days.
At
each
contact
they
were
required
to
fill
in
a
form
describing
their
mood,
location
and
activity,
as
well
as
details
of
any
current
musical
experience,
imagined
or
heard.
A
multiple
logistic
regression
analysis
was
performed
with
current
musical
state
at
the
time
of
contact
as
the
dependent
variable
(hearing
music,
imagining
music,
both
hearing
and
imagining
music,
neither
hearing
nor
imagining
music)
and
ratings
of
mood
as
predictor
variables.
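The analysis described above can be sketched as follows. This is a hedged illustration, not the authors' code: a multinomial logistic regression with the four-category musical state as the outcome and mood ratings as predictors, fitted to synthetic data, with hypothetical variable names (arousal, valence).

```python
# Illustrative sketch only: multinomial logistic regression with a
# four-category "current musical state" outcome and mood ratings as
# predictors. All data are synthetic; variable names are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
arousal = rng.uniform(-2, 2, n)   # hypothetical alert-drowsy rating
valence = rng.uniform(-2, 2, n)   # hypothetical happy-sad rating
X = np.column_stack([arousal, valence])

# Generate outcomes so that higher arousal favours "imagining"
states = np.array(["neither", "hearing", "imagining", "both"])
logits = np.column_stack([
    np.zeros(n),                    # baseline: neither
    0.3 * valence,                  # hearing
    1.2 * arousal,                  # imagining
    0.5 * arousal + 0.3 * valence,  # both
])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(states, p=p) for p in probs])

# The lbfgs solver handles the multinomial case; one coefficient row
# per outcome category, one column per mood predictor.
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.coef_.shape)  # (4, 2)
```

The fitted per-category coefficients then play the role of the mood-rating effects the abstract tests for significance.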
Preliminary
evidence
of
a
link
between
arousal
and
the
propensity
to
experience
INMI
was
found,
showing
that
self-ratings
as
'drowsy' or 'neither alert nor drowsy'
at
the
time
of
contact
were
negatively
associated
with
imagining
music.
In
other
words,
participants
who
did
not
feel
that
they
were
alert
were
unlikely
to
be
imagining
music.
Ratings
for
the
mood
pair
Happy/Sad,
which
best
exemplifies
valence,
were
not
significant
predictors
of
INMI.
Qualitative
analyses
of
responses
to
an
open
question
about
possible
reasons
for
imagining
music
are
expected
to
reveal
information
about
the
emotional
characteristics
of
the
music,
context,
and
respondent.
When
an
everyday phenomenon
becomes
clinical:
The
case
of
long-term
earworms
Drawing
on
research
which
has
investigated
music
tempo's
effect
on
behaviour
in
a
number
of
domains,
we
consider
tempo
as
a
factor
which
can
influence
gambling
behaviour.
We
examine
research
which
has
investigated
music
tempo's
influence
on
gambling
behaviour
and
consider
whether
arousal
is
a
psychological
mechanism
responsible
for
tempo's
influence
on
gambling
behaviour.
This
abstract
provides
the
background
to
a
study
we
have
carried
out
investigating
the
influence
of
music
tempo
on
virtual
roulette
behaviour
which
tests
whether
subjective
and/or
physiological
arousal
are
responsible
for
music
tempo's
effects
on
gambling
behaviour.
The
findings
of
our
study
will
be
discussed
in
our
conference
presentation.
To conclude, we consider what establishing arousal as the mechanism behind music tempo's influence on gambling behaviour would imply for gamblers, gambling operators, and current gambling practice.
The
influence
of
age
and
music
on
ergogenic
outcomes,
energy
and
affect
in
gym-based
exercise
sessions
Rachel
Hallett,
Alexandra
Lamont
School
of
Psychological
Research,
Keele
University,
UK
Music
is
frequently
used
to
accompany
group
and
individual
exercise
to
help
increase
motivation
and
enjoyment.
It
has
been
suggested
that
to
be
motivating,
exercise
music
should
reflect
the
age
of
exercisers,
but
there
is
little
empirical
support
for
this
in
gym
contexts.
This
study
explores
the
area
using
mixed
methods,
with
a
qualitative
study
used
to
inform
the
design
of
a
field-based
within-participant
quasi-experiment.
Sixteen
participants
were
interviewed
about
exercise
preferences,
motivations
and
media
use
during
exercise
and
the
data
explored
using
thematic
analysis.
Results
indicated
that
contemporary
music
was
widely
liked
by
a
'worker'
group
of
exercisers
into
their
late
fifties,
while
a
smaller
'socialiser'
group,
typically
retired,
were
ambivalent
towards
music.
Twenty-four
participants
undertook
a
treadmill
protocol
with
measurements
of
distance
covered,
self-perceived
affect
and
energy
and
liking
for
each
of
the
three
music
conditions:
contemporary
pop
(80-100 bpm),
contemporary
dance
(~130 bpm)
and
1960s/1970s
pop
(~130 bpm).
Data
was
analyzed
by
participant
age
with
an
over-45
and
under-45
group.
Although
1960s/1970s
music
led
to
slightly
superior
outcomes
for
the
older
group,
it
was
disliked
by
the
younger
group
and
produced
inferior
outcomes
to
the
other
styles;
there
was
a
significant
interaction
between
age
and
music
preference.
The
1960s/1970s
music
offers
only
a
modest
benefit
for
older
exercisers
and
appears
to
alienate
younger
exercisers.
Dance
music,
however,
appeals
to
a
broad
age
range
and
is
recommended
for
gym
use,
although
it
may
be
advisable
to
reduce
volume
when
attendance
by
retired
members
is
high.
In-car
music
listening
requires
drivers
to
process
sounds
and
words,
and
most
sing/tap
along.
While
it
may be
difficult
to
assess
music
as
a
risk-factor
for
distraction,
previous
studies
have
reported:
momentary
peak
levels
in
loud music
disrupt
vestibulo-ocular
control;
loud
music
causes
a
decrease
in
response
time;
arousing
music
impairs
driving
performance;
and
quick-paced
music
increases
cruising
speed
and
traffic
violations.
It
is
indeed
worrying
that
drivers
underestimate
the
effects
of
music,
or
perceive
decreased
vehicular
performance
due
to
in-car
listening.
In
the
current
study
we
produced
an
alternative
music
background
proposed
to
maintain
aural
stimuli
at
moderate
levels
of
cognitive
awareness
in
an
effort
to
decrease
music-generated
distraction.
After
a
group
of
everyday
listeners
confirmed
the
background
as
suitable
for
driving
in
a
car,
we
implemented
two
studies:
22
drivers
each
drove
4 trips
while
listening
to
driver-preferred
music
brought
from
home
(2 trips)
or
to
the
alternative
background
(2 trips);
31
drivers
each
drove
10 trips
while
listening
the
alternative
background.
In
Study 1
we
found
criterion-related
validity,
and
the
alternative
background
occupied
less
attention.
In
Study 2
we
found
habituation
effects,
as
well
as
increased
feelings
of
driver
safety
and
ever-increasing
levels
of
positive
mood.
Music
designed
for
driver
safety
is
an
important
contribution
in
the
war
against
traffic
accidents
and
human
fatality.
One
day,
such
applications
might
become
a
standard
form
of
mediated
intervention
especially
among
young
drivers
who
often
choose
music
that
is
highly
energetic
and
aggressive,
consisting
of
a
fast-tempo
accentuated
beat,
played
at
strong
intensity
levels
of
elevated
volumes.
1Department of Music, School of Linguistics and Cultural Studies, Carl von Ossietzky University
Assessing
young
children's
musical
enculturation:
A
novel
method
for
testing
sensitivity
to
key
membership,
harmony,
and
musical
metre
It
was
shown
that
specific
music
perception
abilities
are
related
to
reading
and
phonological
awareness,
an
important
precursor
of
literacy.
Anvari
and
colleagues
(2002)
demonstrated
that
only
part
of
the
association
between
music
perception
and
reading
was
explained
by
phonological
awareness.
Therefore,
the
relationship
between
other
precursors
of
literacy
and
musical
abilities
needs
further
investigation.
In
addition,
previous
studies
have
not
investigated
the
relation
between
music
production
abilities
and
precursors
of
literacy.
Thus,
the
aim
of
our
study
was
twofold.
Firstly,
we
investigated
the
relation
between
four
precursors
of
literacy
and
musical
abilities.
Secondly,
we
included
not
only
music
perception
abilities
but
also
music
production
abilities
in
our
analyses.
We
tested
55
(28
girls)
preschoolers.
We
assessed
precursors
of
literacy
with
a
well-established
test
battery
which
comprises
four
subtests
measuring
phonological
awareness,
one
subtest
on
working
memory,
one
on
selective
attention,
and
one
on
rapid
automatized
naming.
Musical
abilities
were
tested
with
a
music
screening
by
Jungbluth
and
Hafen
(2005)
that
contained
comparisons
of
melody,
pitch,
rhythm,
metre,
and
tone
length
as
well
as
the
reproduction
of
a
given
rhythm,
metre,
and
song.
As control variables, intelligence and socioeconomic status (measured by parents' education) were assessed.
Partial
correlations
that
controlled
for
gender,
intelligence,
and
SES
revealed
a
significant
positive
association
between
the
aggregated
score
of
phonological
awareness
and
music
perception
and
production
abilities.
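A partial correlation of the kind reported above is conventionally computed by residualising both scores on the covariates and correlating the residuals. The sketch below shows the idea on synthetic data with assumed variable names; it is not the study's dataset or code.

```python
# Illustrative sketch: partial correlation between two scores while
# controlling for covariates, via residualisation. Data are synthetic.
import numpy as np

def partial_corr(x, y, covars):
    """Correlation of x and y after removing linear effects of covars."""
    Z = np.column_stack([np.ones(len(x)), covars])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(1)
n = 55  # same sample size as the study; values are synthetic
iq = rng.normal(100, 15, n)
ses = rng.normal(0, 1, n)
covars = np.column_stack([iq, ses])
phon = 0.05 * iq + rng.normal(0, 1, n)                 # phonological awareness
music = 0.05 * iq + 0.5 * phon + rng.normal(0, 1, n)   # music perception score
print(round(partial_corr(phon, music, covars), 2))
```

With gender as a third covariate, the same residualisation step yields the controlled association the abstract reports.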
Furthermore,
significant
positive
associations
were
revealed
between
working
memory
and
the
overall
scores
of
music
perception
and
production.
We
conclude
that
phonological
awareness
and
working
memory,
which
are
both
precursors
of
literacy,
are
associated
with
musical
abilities.
Furthermore,
we
demonstrated
that
both
music
perception
and
music
production
abilities
are
related
to
phonological
awareness
and
working
memory.
Interaction
between
melodic
expectation
and
syntactical/semantic
processes
on
evoked
and
oscillatory
neural
responses
BAASTA:
Battery
for
the
Assessment
of
Auditory
Sensorimotor
and
Timing
Abilities
Nicolas
Farrugia,
Charles-Etienne
Benoit,
Eleanor
Harding,
Sonja
A.
Kotz,
Simone
Dalla
Bella
Department
of
Cognitive
Psychology,
WSFiZ
in
Warsaw,
Poland
Max
Planck
Institute
for
Human
Cognitive
and
Brain
Sciences,
Leipzig,
Germany
EUROMOV,
M2H
Laboratory,
Université
de
Montpellier
I,
France
In
this
paper
we
describe
the
Battery
for
the
Assessment
of
Auditory
Sensorimotor
and
Timing
Abilities
(BAASTA),
a
new
tool
developed
for
systematically assessing
rhythm
perception
and
auditory-motor
coupling.
BAASTA
includes
perceptual
tasks
and
Sensorimotor
Synchronization
(SMS)
tasks.
In
the
perceptual
tasks,
auditory
thresholds
in
a
duration
discrimination
task
and
anisochrony
detection
tasks
(i.e.,
with
an
isochronous
sequence
and
with
music)
are
measured
via
the
Maximum
Likelihood
Procedure
(MLP).
In
addition,
a
customized
version
of
the
Beat
Alignment
Task
(BAT)
is
performed
to
assess
participants'
ability
to
perform
beat
extraction
with
musical
stimuli.
Tapping
tasks
are
used
to
assess
participants'
SMS
abilities,
including
hand
tapping
along
with
isochronous
sequences
and
music,
and
tapping
to
sequences
presenting
a
tempo
change.
The
battery
is
validated
in
young
expert
musicians
and
age-matched
non-musicians,
as
well
as
in
aged
participants.
In
addition,
the
results
from
3
cases
of
patients
with
Parkinson's
Disease
are
presented.
BAASTA
is
sensitive
to
differences
linked
to
musical
training; moreover,
the
battery
can
serve
to
characterize
differences
among
individuals
(e.g.,
patients
with
neurodegenerative
disorders)
in
terms
of
sensorimotor
and
rhythm
perception
abilities.
Measuring
tongue
and
finger
coordination
in
saxophone
performance
Alex
Hofmann,*
Werner
Goebl,*
Michael
Weilguni,#
Alexander
Mayer,*
Walter
Smetana#
*Institute
of
Music
Acoustics,
University
of
Music
and
Performing
Arts
Vienna,
Austria
#Institute
of
Sensor
and
Actuator
Systems,
Vienna
University
of
Technology,
Austria
When
playing
wind
instruments
the
fingers
of
the
two
hands
have
to
be
coordinated
together
with
the
tongue.
In
this
study,
we
aim
to
investigate
the
interaction
between
finger
and
tongue
movements
in
portato
playing.
Saxophone
students
played
on
a
sensor-equipped
alto
saxophone.
Force
sensors
attached
to
3
saxophone
keys
measured
finger
forces
of
the
left
hand;
a
strain
gauge
glued
onto
a
synthetic
saxophone
reed
measured
the
reed
bending.
Participants
performed
a
24-tone
melody
in
three
tempo
conditions
timed
by
a
metronome
in
a
synchronization-continuation
paradigm.
Distinct
landmarks
were
identified
in
the
sensor
data:
A
tongue-reed
contact
(TRC)
occurred
when
the
reed
vibration
was
stopped
by
the
tongue,
a
tongue-reed
release
(TRR)
at
the
beginning
of
the next
tone,
and
in
the
finger
force
data
a
key-bottom
contact
(KB)
at
the
end
of
the
key
motion.
The
tongue-reed
contact
duration
(from
TRC
to
TRR)
was
34.5
ms
on
average
(SD
=
5.84)
independently
of
tempo
condition.
Timing
accuracy
and
precision
were
determined
from
consecutive
TRRs.
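The abstract does not give the formulas, but a common convention for onset-based timing measures, which the sketch below assumes, is to take inter-onset intervals (IOIs) between consecutive events such as TRRs, score accuracy as the mean deviation of the IOIs from the metronome interval, and precision as the IOI standard deviation.

```python
# Hedged sketch (assumed convention, not necessarily the authors' method):
# timing accuracy and precision from consecutive onset times.
import numpy as np

def timing_stats(onsets_ms, target_ioi_ms):
    iois = np.diff(onsets_ms)
    accuracy = float(np.mean(iois - target_ioi_ms))  # signed mean deviation
    precision = float(np.std(iois, ddof=1))          # variability of IOIs
    return accuracy, precision

# Toy data: a nominal 500 ms IOI with small jitter
onsets = np.array([0, 498, 1003, 1501, 2004, 2499], dtype=float)
acc, prec = timing_stats(onsets, 500.0)
print(round(acc, 2), round(prec, 2))  # prints: -0.2 4.09
```

Accuracy near zero with a small standard deviation would thus indicate both accurate and precise tonguing.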
We
contrasted
tones
that
required
only
tongue
impulses
for
onset
timing
to
those
that
also required
finger
movements.
Timing
accuracy
was
better
for
combined
tongue-finger
actions
than
for
tongued
timing
only.
This
suggests
that
finger
movements
support
timing
accuracy
in
saxophone
playing.
Timing
and
synchronization
of
professional
musicians:
A
comparison
between
orchestral
brass
and
string
players
Dirk
Moelants
IPEM-Dept.
of
Musicology,
Ghent
University,
Belgium
This
paper
investigates
if
and
how
musicians
can
convey
syncopation
without
the
presence
of
a
fixed
metric
framework.
In
a
first
experiment
20
professional
musicians
played
a
series
of
simple
melodies
in
both
a
metrically
regular
version
and
a
syncopated
version.
These
were
analyzed
using
a
series
of
audio
parameters.
This
analysis
shows
a
series
of
methods
used
by
musicians
to
convey
syncopation,
using
timing,
dynamics
as
well
as
articulation.
A
selection
of
the
melodies
was
then
presented
to
16
subjects
in
a
second
experiment,
both
audio-only
and
with
video,
asking
them
to
identify
them
as
syncopated
or
regular.
The
results
of
this
experiment
show
that,
although
some
expressive
cues
seem
to
help
the
recognition
of
syncopation,
it
remains
hard
to
communicate
this
unnatural
rhythmic
structure
without
a
metric
framework.
Analysis
of
the
videos
shows
that
when
musicians
do
provide
such
a
framework
using
their
body,
it
influences
the
results
positively.
Vanessa
Hawes
Department
of
Music
and
Performing
Arts,
Canterbury
Christ
Church
University,
UK
This
paper
aims
to
link
qualitative,
empirical
approaches
from
performance
analysis
with
analytical
and
musicological
issues.
An
ecological
approach
to
perception
frames
an
exploration
of
experiential
(performative)
and
structural
(analytical)
affordances.
A
singer's
developing
relationship
with
songs
IV
and
V
from
Schoenberg's
song
cycle,
Das
Buch
der
hängenden Gärten,
Op. 15
(1908-9)
is
recorded
in
two
ways:
videoing
rehearsals
from
first
contact
with
score
to
performance;
and
reflective
comments
about
the
songs
and
her
learning
process
through
interview
and
marked
scores.
As
an
atonal
work,
the
cycle
provides
a
subject
for
the
study
of
the
singer's
experience
independent
of
tonality
as
an
overwhelming
structural
affordance.
Detailed
analytical
studies
of
the
song
cycle
provide
a
rich
source-set
from
which
to
draw
in
discussing
structural
affordances.
Songs
IV
and
V
were
chosen
because
they
occur
at
a
moment
of
dramatic
importance,
as
the
narrator
realizes
the
extent
of
the
love
that
drives
the
cycle
(Song
IV)
and
surrenders
to
it
(Song
V).
Forte's
1992
article
about
the
Opus
15
cycle
provides
the
analytical
focus,
an
article
that
identifies
linear
motivic
tetrachords
in
the
cycle,
revealing
them
in
the
fore-,
middle-
and
background
of
the
songs'
structure.
Analysis
of
the
videoed
rehearsals
provides
an
alternate
analytic
reading
of
the
songs
based
on
performative
affordances,
and
the
analysis
of
interview
data
furnishes
us
with
another.
These
two
alternate
readings
adjust
and
enhance
Forte's
analysis,
a
direction
of
analytic/interpretive
influence
from
expression
to
structure,
and
the
result
is
related
back
to
issues
about
the
songs'
meaning.
Predicting
expressive
timing
and
perceived
tension
in
performances
of
an
unmeasured
prelude
using
the
IDyOM
model
Bruno
Gingras*#,
Meghan
Goodchild#,
Roger
Dean,
Marcus
Pearce+,
Geraint
Wiggins+,
Stephen
McAdams#
*
Department
of
Cognitive
Biology,
University
of
Vienna,
Vienna,
Austria
#
CIRMMT,
Schulich
School
of
Music,
McGill
University,
Canada
MARCS
Auditory
Laboratories,
University
of
Western
Sydney,
Australia
+School
of
Electronic
Engineering
and
Computer
Science,
Queen
Mary,
University
of
London,
UK
Studies
comparing
the
influences
of
different
performances
of
a
piece
on
the
listeners'
aesthetic
responses
are
constrained
by
the
fact
that,
in
most
pieces,
the
metrical
and
formal
structure
provided
by
the
score
limits
the
performer's
interpretative
freedom.
As
a
semi-improvisatory
genre
which
does
not
specify
a
rigid
metrical
structure,
the
unmeasured
prelude
provides
an
ideal
repertoire
for
investigating
the
links
between
musical
structure,
expressive
strategies
in
performance,
and
listeners'
responses.
Twelve
professional
harpsichordists
recorded
two
interpretations
of
the
Prélude non mesuré
No.
7
by
Louis
Couperin
on
a
harpsichord
equipped
with
a
MIDI
console.
The
MIDI
data
was
analyzed
using
a
score-performance
matching
algorithm.
Subsequently,
20
nonmusicians,
20
musicians,
and
10
harpsichordists
listened
to
these
performances
and
rated
the
perceived
tension
in
a
continuous
manner
using
a
slider.
Melodic
expectation
was
assessed
using
a
probabilistic
model
(IDyOM)
whose
expectations
have
been
shown
to
match
closely
those
of
human
listeners
in
previous
research.
Time
series
analysis
techniques
were
used
to
investigate
predictive
relationships
between
melodic
expectations
and
the
performance
and
perceptual
parameters.
Results
show
that,
in
a
semi-improvisatory
genre
such
as
the
unmeasured
prelude,
predictability
of
expectation
based
on
melodic
structure
has
a
measurable
influence
on
local
tempo
variations.
Effects
of
Melodic
Structure
and
Meter
on
the
Sight-reading
Performances
of
Beginners
and
Advanced
Pianists
Competencies
and
model-based
items
in
music
theory
and
aural
training
in
preparation
for
entrance
exams
When
students
are
learning
and
when
they
are
performing
in
instrumental
lesson
interactions:
A
conversational
analysis
approach
Antonia
Ivaldi
Department
of
Psychology,
Aberystwyth
University,
Wales,
UK
Within
the
growth
of
qualitative
research
in
music
psychology
there
has
been
an
attempt
to
explore
the
interactions
that
take
place
between
teachers
and
students
in
music
lessons.
This
research,
however,
has
yet
to
look
at
the
turn
by
turn
talk
that
takes
place
in
pedagogical
discourse,
in
addition
to
exploring
how
playing,
singing
and
demonstrating
are
woven
into
the
sequence
of
the
interaction.
The
study's
aim
is
to
examine
how
students
indicate
to
the
teacher
when
they
are
learning
and
when
they
are
performing
within
the
lesson,
and
how
this
is
received,
taken
up,
and
orientated
to
by
the
teacher
as
a
performance
or
as
part
of
a
more
complex
pedagogical
process.
17
video
recordings
were
made
of
UK
conservatoire
music
lessons
which
lasted
between
50
minutes
and
two
hours.
Relevant
extracts
were
then
selected
and
transcribed
further
using
Jefferson
system
conventions.
Employing
conversation
analysis
(CA)
techniques
such
as
turn-taking,
repair,
overlap,
pauses, etc.,
the
analysis
will
explore
how
the
teacher
orients
to
the
student's
playing
and
talk
as
being
either
performance
ready,
or
one
that
indicates
that
learning
is
still
taking
place.
CA
offers
a
unique
opportunity
for
teachers
and
students
to
demonstrate
more
fully
how
the
interaction
within
music
lessons
presents
a
complex
interplay
between
talk
and
the
playing
and
demonstration
of
instruments,
which
in
turn
results
in
the
student
and
teacher
continually
moving
between
learning
and
performance
within
the
lesson.
The
implications
for
instrumental
teachers
and
their
students
will
be
discussed.
Identity
Dimensions
and
Age
as
Predictors
of
Adult
Music
Preferences
Richard
Leadbeater
Lancaster
Institute
for
the
Contemporary
Arts,
Lancaster
University,
England
Recent
empirical
research
in
music
psychology
has
established
that
personality
trait
profiling
may
provide
a
reliable
prediction
of
music
preferences.
However,
research
on
music
preferences
has
largely
focused
on
the
adolescent
age
group.
Whether
adults
similarly
use
music
as
a
tool
to
construct
and
reconstruct
identities
following
lifespan
experiences
is
largely
understudied.
This
paper
presents
the
results
of
an
on-line
survey
which
was
carried
out
at
Lancaster
University
to
expand
recent
empirical
research
on
music
preferences.
The
aim
of
the
study
was
to
explore
the
relationship
between
personality
traits,
age,
estimated
IQ
and
identity
dimensions
as
predictors
of
music
preferences.
A
large
sample
(n=768),
ages
ranging
from
17-66
(mean = 23.9, SD = 8.95)
completed
the
survey.
Music
preference
ratings
were
assessed
using
STOMP-R.
The
BFI
and
the
EIPQ
were
used
for
personality
trait
and
identity
status
measurement
respectively.
Results
largely
supported
recent
research
with one notable exception:
there
was
almost
zero
correlation
between
Openness
and
the
Upbeat
and
Conventional
Dimension,
as
opposed
to
a
significant
negative
correlation.
Standard
multiple
regression
analysis
revealed
highly
significant
effects
of
the
Exploration
identity
dimension,
Age
and
Openness
to
predict
a
preference
for
Rhythmic
and
Complex
music.
Interestingly,
adjusted
R2
scores
would
suggest
that
these
variables
only
account
for
less
than
20%
of
variance
in
music
preferences.
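For reference, adjusted R² relates to raw R², sample size n, and number of predictors p as 1 - (1 - R²)(n - 1)/(n - p - 1). The snippet below illustrates this with the study's n = 768 and an assumed p = 3 (the abstract does not state the predictor count).

```python
# Illustration of the adjusted R^2 statistic the abstract refers to:
# it penalises raw R^2 for the number of predictors p given sample size n.
def adjusted_r2(r2, n, p):
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# With n = 768 (the study's sample) and an assumed p = 3, the penalty
# is tiny, so adjusted and raw R^2 nearly coincide:
print(round(adjusted_r2(0.20, 768, 3), 3))  # prints: 0.197
```

At this sample size the adjustment barely changes R², so the "less than 20% of variance" figure reflects genuinely modest predictive power rather than a small-sample correction.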
Consequently,
further
research
on
music
preferences
may
adopt
a
more
socially
constructive
methodology
to
identify
how
music
preference
selection
reflects
the
evolving
salient
identities.
Why not knitting? Amateur music-making across the lifespan

Alexandra Lamont

Centre for Psychological Research, Keele University, United Kingdom

Musical identity lies at the core of understanding people's motivations and patterns of engagement with music. Much research has explored this in relation to professional musicians and music teachers, but less attention has been given to amateurs. A growing body of work shows that involvement in musical activities, particularly in later life, has powerful effects on health and wellbeing. However, less is known about how involvement can be supported over long timeframes spanning many years. This study explores retrospective memories of music making and aims to uncover the features that prevent or support amateurs in developing and sustaining (and sometimes resuscitating) a musical identity. Data were gathered from online surveys (530 participants) and follow-up interviews with adult amateur musicians. Participants ranged in age from 21 to 83 and took part in a very diverse range of musical activities. Despite being actively involved in music, they did not all have a strong musical identity. Different patterns of motivation can be discerned, including the traditional pattern of a highly motivated child leading to continuous involvement in music, but also adults with far more patchy musical careers. While all participants had a guiding musical passion or a core musical identity, this sometimes takes time to find full expression, depending on circumstances and the pressures of everyday life. General life crises and transitions (such as having a family, relocation or retirement) can create barriers to involvement but also opportunities to re-engage. Involvement in music also provides a way of managing life transitions and crises.
Ruth Herbert

Music Dept., Open University, UK

Few studies of everyday musical engagement have focused on the subjective 'feel' (phenomenology) of unfolding, lived experience. Additionally, the musical experiences of children and young adolescents are currently under-represented in the literature. This paper constitutes an in-progress report on the preliminary stage of a mixed-method three-year empirical enquiry, designed to explore psychological characteristics of the subjective experience of young people hearing music in everyday, 'real-world' scenarios in the UK. The aims were to identify varied modes of listening, to pinpoint whether these are age-related, and to explore the extent to which young people use music as a form of escape (dissociation) from self, activity, or situation. 25 participants (aged 10-18) were interviewed and subsequently kept diaries of their music-listening experiences for two weeks. Data were subjected to Interpretative Phenomenological Analysis (IPA). Key themes identified include the use of music to lend a sense of momentum, energy and excitement to mundane scenarios, to dissociate or 'zone out' from aspects of self and/or situation, to feel relaxed, to feel 'connected', to articulate moods and emotions, to aid daydreams/imaginative fantasies, and to provide a framework through which to explore emotions vicariously, using music as a template for modelling future emotional experience. Subjective experience was frequently characterised by a fusion of modalities.
A self-regulatory perspective on choosing sad music to enhance mood

Everyday music listening: The importance of individual and situational factors for musical emotions and stress reduction

Marie Helsing

Department of Psychology, University of Gothenburg, Sweden

Music listening primarily evokes positive emotions in listeners. Research has shown that positive emotions may be fundamental for improving both psychological and physical aspects of well-being. Besides the music itself, it is essential to consider individual and situational factors when studying emotional experiences of music. The main aim of the three papers (Study I, II and III) in the doctoral thesis was to explore the effects of everyday music listening on emotions, stress and health. The Day Reconstruction Method was used in Studies I and II. In Study III, an experimental group who listened to their self-chosen music on mp3-players for 30 minutes when arriving home from work every day for two weeks was compared to a control group who relaxed without music, and with a baseline week during which the experimental group relaxed without music. Results from Studies I and II showed that music was related to more positive emotions, lower stress levels and higher health scores. Liking of the music affected the level of stress. Results from Study III showed that the experimental group showed an increase in positive emotions and a decrease in perceived stress and cortisol levels over time. The results of this thesis indicate that everyday music listening is an easy and effective way of improving well-being and health through its ability to evoke positive emotions and thereby reduce stress. But not just any music will do, since responses to music are influenced by individual and situational factors.
Age differences in music-related emotion regulation
Paper Session 27: Interpreting & predicting listener responses (Timber II Hall, 11:30-13:30)
From Vivaldi to Beatles and back: predicting brain responses to music in real time

Vinoo Alluri1, Petri Toiviainen1, Torben Lund2, Mikkel Wallentin2, Peter Vuust2,3, Elvira Brattico4

1Department of Music, Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland; 2Aarhus University Hospital, Aarhus University, Denmark; 3Royal Academy of Music, Aarhus/Aalborg, Denmark; 4Cognitive Brain Research Unit, Department of Psychology, University of Helsinki, Finland

We aimed at predicting brain activity in relation to acoustic features extracted from musical pieces belonging to various genres, including pieces with lyrics, via regression modeling. We assessed the robustness of the resulting models across stimuli via cross-validation. Participants were measured with functional magnetic resonance imaging (fMRI) while they listened to two sets of musical pieces, one comprising instrumental music representing compositions from various genres and the other a medley of pop songs with lyrics. Acoustic features were extracted from both stimulus sets. Principal component regression models were trained separately for each stimulus set by using the fMRI time-series as dependent, and the acoustic feature time-series as independent, variables. Then, we performed cross-validations of the models. To assess the generalizability of the models we further extended the cross-validation procedure by using data obtained in a previous experiment that used a modern tango by Piazzolla as the stimulus. Despite differences between musical pieces with respect to genre and lyrics, the results indicate that auditory and associative areas are indeed recruited for the processing of musical features independently of the content of the music. The right-hemispheric dominance suggests that the presence of lyrics might confound the processing of musical features in the left hemisphere. Models based on purely instrumental music revealed that, in addition to bilateral auditory areas, right-hemispheric somatomotor areas were recruited for musical feature processing. In sum, our novel approach reveals neural correlates of music feature processing during naturalistic listening across a large variety of musical contexts.
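The principal component regression at the heart of this design can be sketched as follows. This is a minimal numpy illustration with invented dimensions and a synthetic "voxel", not the authors' pipeline; a real analysis would refit the mean, components and weights inside each cross-validation fold.

```python
import numpy as np

def pcr_fit(features, bold, k=3):
    """Principal component regression: regress one response (e.g. a voxel's
    BOLD time series) on the first k principal components of the stimulus
    feature time series."""
    mean = features.mean(axis=0)
    Xc = features - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
    scores = Xc @ Vt[:k].T                             # component score series
    A = np.column_stack([np.ones(len(bold)), scores])
    beta, *_ = np.linalg.lstsq(A, bold, rcond=None)
    return beta, Vt[:k], mean

def pcr_predict(features, beta, components, mean):
    scores = (features - mean) @ components.T
    A = np.column_stack([np.ones(len(features)), scores])
    return A @ beta

# Invented toy data: 100 time points, 6 "acoustic features", one synthetic voxel.
rng = np.random.default_rng(1)
feats = rng.normal(size=(100, 6))
voxel = feats[:, 0] - 0.5 * feats[:, 1] + 0.1 * rng.normal(size=100)
beta, comps, mu = pcr_fit(feats, voxel, k=4)
pred = pcr_predict(feats, beta, comps, mu)
r = np.corrcoef(pred, voxel)[0, 1]
print(round(r, 2))
```

Projecting onto a few components before regressing is what keeps the model stable when the feature time series are strongly correlated, which acoustic descriptors typically are.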
I can read your mind: Inverse inference in musical neuroinformatics

Petri Toiviainen1, Vinoo Alluri1, Elvira Brattico1,2, Andreas H. Nielsen3,4, Anders Dohn3,5, Mikkel Wallentin3,6, & Peter Vuust3,5

1Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland; 2Cognitive Brain Research Unit, Department of Psychology, University of Helsinki, Finland; 3Center of Functionally Integrative Neuroscience, Aarhus University Hospital, Nørrebrogade, 8000 Aarhus C, Denmark; 4Department of Anthropology, Archaeology and Linguistics, Aarhus University, Denmark; 5Royal Academy of Music, Aarhus/Aalborg, Denmark; 6Center for Semiotics, Aarhus University, Denmark

In neuroinformatics, inverse inference refers to the prediction of the stimulus from observed neural activation. A potential benefit of this approach is straightforward model evaluation, because performance is easier to characterize. We attempted to predict musical feature time series from brain activity and subsequently to recognize which segments of music participants were listening to. Moreover, we investigated which model parameters yield optimal prediction performance. Participants (N = 15) were measured with functional magnetic resonance imaging (fMRI) while they were listening to two sets of musical pieces. Acoustic features were computationally extracted from the stimuli. The fMRI data were subjected to dimensionality reduction via voxel selection and spatial subspace projection. For each stimulus set separately, the fMRI projections were subjected to multiple regression against the musical features. Following this, temporal segments were selected from the fMRI data, and a classifier comparing predicted and actual musical features was used to associate each fMRI data segment with one of the respective musical segments. To avoid overfitting, cross-validation was utilized. Different voxel selection criteria and subspace projection dimensionalities were used. Best performance was obtained by including about 10-15% of the voxels with the highest correlation between participants, and by projecting the fMRI data to fewer than 10 dimensions. Overall, timbral and rhythmic features were more accurately predicted than tonal ones. The excerpt being listened to could be predicted from brain activation well above chance level. Optimal model parameters suggest that a large proportion of the brain is involved in musical feature processing.
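The final classification step, matching each fMRI segment to a musical segment by comparing predicted and actual features, can be sketched as a nearest-neighbour correlation classifier. Everything below, including the segment shapes and noise level, is an invented toy, not the study's data.

```python
import numpy as np

def identify_segment(predicted, candidates):
    """Return the index of the candidate musical segment whose actual feature
    time series correlates best with the features predicted from brain
    activity (a simple nearest-neighbour classifier), plus all scores."""
    def corr(a, b):
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float((a * b).mean())
    scores = [corr(predicted.ravel(), c.ravel()) for c in candidates]
    return int(np.argmax(scores)), scores

# Invented toy data: 5 musical segments, each 30 time points x 2 features.
rng = np.random.default_rng(2)
segments = [rng.normal(size=(30, 2)) for _ in range(5)]
true_idx = 3
# A "predicted" feature series: the true segment plus prediction noise.
predicted = segments[true_idx] + 0.5 * rng.normal(size=(30, 2))
guess, scores = identify_segment(predicted, segments)
print(guess)
```

With even moderately noisy predictions the correct segment wins by a wide margin, which is why the authors can report identification well above chance.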
Implicit Brain Responses During Fulfillment of Melodic Expectations

Job P. Lindsen*, Marcus T. Pearce#, Marisa Doyne*, Geraint Wiggins#, Joydeep Bhattacharya*

*Department of Psychology, Goldsmiths, University of London, UK; #Centre for Digital Music, Queen Mary, University of London, UK

Listening to music entails forming expectations about how the music unfolds in time, and the confirmation and violation of these expectations contribute to the experience of emotion and the aesthetic effects of music. Our previous study on melodic expectations found that unexpected melodic pitches elicited a frontal ERP negativity. However, the role of attention was not explicitly manipulated in that study. In the current experiment we manipulated the degree to which participants could attend to the music. One group of participants just listened to the melodies, a second group had to additionally detect an oddball timbre, and a third group memorized a nine-digit sequence while listening. We used our statistical learning model to select from each melody a high- and a low-probability note for the EEG analyses. Replicating previous results, we found an early (~120 ms) frontal ERP negativity for unexpected notes. Initial analyses showed that this early ERP effect was unaffected by our attention manipulations. In contrast, analysis of the time-frequency representation indicated an interaction of expectedness and attentional load in theta band (5-7 Hz) amplitude during a later time-window (~300 ms). The expectedness of a melodic event thus seems to be extracted relatively quickly and automatically, irrespective of the attentional load, suggesting that early melodic processing is largely pre-attentive or implicit. Later stages of processing seem to be affected by attentional load, which might reflect differences in the updating of the internal model used to generate melodic expectations.
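The note-selection idea, ranking each candidate continuation by its probability under a statistical learning model trained on a corpus, can be illustrated with a deliberately minimal first-order model. The authors' actual model is more sophisticated, and the toy corpus and pitch numbers below are invented.

```python
from collections import Counter, defaultdict

def train_bigram(melodies):
    """First-order model of P(next_pitch | current_pitch) from a corpus."""
    counts = defaultdict(Counter)
    for mel in melodies:
        for a, b in zip(mel, mel[1:]):
            counts[a][b] += 1
    return counts

def note_probability(counts, prev, note, alphabet):
    """Add-one smoothed conditional probability of `note` after `prev`."""
    c = counts[prev]
    return (c[note] + 1) / (sum(c.values()) + len(alphabet))

# Invented toy corpus of MIDI pitch sequences.
corpus = [[60, 62, 64, 62, 60], [60, 62, 64, 65, 64], [64, 62, 60, 62, 64]]
alphabet = sorted({p for m in corpus for p in m})
model = train_bigram(corpus)

# Rank possible continuations after pitch 62: high- vs low-probability notes.
probs = {n: note_probability(model, 62, n, alphabet) for n in alphabet}
most = max(probs, key=probs.get)
least = min(probs, key=probs.get)
print(most, least)
```

Selecting, per melody, the continuation the model rates most and least probable is exactly how a high-probability and a low-probability note can be paired for an ERP contrast.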
"...and I Feel Good!" Ratings, fMRI-recordings and motion-capture measurements of body-movements and pleasure in response to groove

Maria A.G. Witek,* Eric F. Clarke,* Mikkel Wallentin,# Mads Hans,# Morten L. Kringelbach,^ Peter Vuust#

*Music Faculty, Oxford University, United Kingdom; ^Dept. of Psychiatry, Oxford University, United Kingdom; #CFIN, Aarhus University, Denmark

What is it about music that makes us want to move? And why does it feel so good? Few contexts of musical enjoyment make the pleasurable effect of music more obvious than a dance club. A growing body of research demonstrates that music activates brain areas involved in the regulation of biological rewards, such as food and sex. However, the role of body-movement in pleasurable responses to groove-based music, such as funk, hip-hop and electronic dance music, has been ignored. This paper reports results from a study in which the relationship between body-movement, pleasure and groove was investigated. In an online rating survey, an inverted U-shaped relationship was found between the degree of syncopation in funk drum-breaks and ratings of wanting to move and of experienced pleasure. This inverted U-curve was reflected in fMRI-recorded patterns of activity in the auditory cortex of 26 participants. Furthermore, there was a negative linear relationship between degree of syncopation and activation in the basal ganglia. After scanning, participants were asked to move freely to the drum breaks in a motion-capture lab. Early explorations of these data suggest similar trends with regard to degree of syncopation and kinetic force of movements. This triangulation of results provides unique insights into the rewarding and movement-eliciting properties of music. As few can resist the urge to tap their feet, bop their heads or get up and dance when they listen to groove-based music, such insights are a timely addition to theories of music-induced pleasure.
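One conventional way to test for an inverted-U relationship like the one reported above is to fit a quadratic and check for a negative second-order coefficient with an interior peak. The syncopation scale and ratings below are invented for illustration, not the study's data.

```python
import numpy as np

def inverted_u_fit(syncopation, rating):
    """Fit rating = b0 + b1*x + b2*x^2; a negative quadratic coefficient
    with an interior vertex indicates an inverted-U relationship."""
    coeffs = np.polyfit(syncopation, rating, deg=2)  # returns [b2, b1, b0]
    b2, b1, _ = coeffs
    peak = -b1 / (2 * b2) if b2 != 0 else None       # vertex of the parabola
    return coeffs, peak

# Invented data: wanting-to-move ratings peaking at medium syncopation.
x = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.0, 3.1, 4.2, 4.8, 4.1, 3.0, 1.9])
coeffs, peak = inverted_u_fit(x, y)
print(coeffs[0] < 0, round(peak, 2))
```

The peak location is the "optimal" degree of syncopation implied by the fit; in the study's framing, too little syncopation is predictable and too much is incoherent, with groove strongest in between.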
Friday 27 July
music. Indeed, the reflection and embodiment of musical emotions through movement is a prevalent assumption within the embodied music cognition framework. This study investigates how music-induced, quasi-spontaneous movement is influenced by the emotional content of music. We recorded the movements of 60 participants (without professional dance background) to popular music using an optical motion capture system, and computationally extracted features from the movement data. Additionally, the emotional content (happiness, anger, sadness, and tenderness) of the stimuli was assessed in a perceptual experiment. A subsequent correlational analysis revealed that different movement features and combinations thereof were characteristic of each emotion, suggesting that body movements reflect perceived emotional qualities of music. Happy music was characterized by body rotation and complex movement, whereas angry music was found to be related to non-fluid movement without rotation. Sad music was embodied by simple movements, and tender music by fluid movements of low acceleration and a forward-bent torso. The results of this study show similarities to movements of professional musicians and dancers and to emotion-specific non-verbal behavior in general, and can be linked to notions of embodied music cognition.
Derivation of Pitch Constructs from the Principles of Tone Perception

Zvonimir Nagy

Mary Pappert School of Music, Duquesne University, Pittsburgh, United States

Recent cross-cultural studies in psychoacoustics, cognitive music theory, and the neuroscience of music suggest a direct bearing of the spectral content found in tones of musical instruments and the human voice on the origin and formation of musical scales. From an interdisciplinary point of view, the paper surveys important concepts that have contributed to the perception and understanding of the basic building blocks of musical harmony: intervals and scales. The theoretical model for pitch constructs derived from the perceptual attributes of musical tones (the patterns of tone intervals extracted from the harmonic series) builds on the hypothesis that fundamental assumptions about musical intervals and scales reflect physiological and psychological properties of the auditory and cognitive nervous systems. The model is based on the intrinsic hierarchy of vertical intervals and their relationships within the harmonic series. As a result, musical scales based on the perceptual and cognitive affinity of musical intervals are derived, their rapport with Western music theory is suggested, and the model's potential for use in music composition is implied. This leads to a vertical aspect of musical harmony through the bonding of intervallic quality with the very structure embedded within the spectra of the tones that produce it. The model's application in the construction of tone systems puts forward a rich discourse between music acoustics, perception, and cognition on one end, and music theory, aesthetics, and music composition on the other.
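The raw material such a model draws on, the interval ratios between adjacent partials of the harmonic series, can be computed directly. This sketch simply lists those ratios and their sizes in cents; it illustrates the well-known acoustic facts the abstract builds on, not the author's model itself.

```python
import math
from fractions import Fraction

def harmonic_intervals(n_partials=8):
    """Frequency ratios between successive partials of the harmonic series:
    2/1 (octave), 3/2 (fifth), 4/3 (fourth), 5/4 (major third), ..."""
    return [Fraction(k + 1, k) for k in range(1, n_partials)]

def cents(ratio):
    """Size of an interval ratio in cents (1200 * log2 of the ratio)."""
    return 1200 * math.log2(ratio)

for r in harmonic_intervals(6):
    print(r, round(cents(r), 1))
```

The successive ratios shrink as one ascends the series, which is the intrinsic interval hierarchy the model appeals to: the earlier (larger) ratios are the perceptually most consonant building blocks for scales.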
Musical phrase extraction from performed blues solos

Bruce Pennycook,1 Carlos Guedes2

The Music Phrase Segmenter software is an adaptation of Lerdahl & Jackendoff's Grouping Preference Rules based on earlier work by Pennycook and Stammen. The primary objective of MPS is to automatically extract, analyze and classify phrases from live performance, audio and/or MIDI files, and scores, to serve as input to a generative system. Since it has been shown that statistical methods combined with boundary-detection segmentation methods can outperform a single GPR in ground-truth tests, our intent was to extend the GPR approach by adding 1) style-dependent weightings and 2) secondary rules which are dynamically invoked to improve results on ambiguous interval displacements. The target application for this system is an interactive generative blues player suitable for mobile applications, which is part of an umbrella research project focusing on real-time interactive generative music production tools. To satisfy the requirements of this application, the MPS software is designed to provide continuous phrase-by-phrase output in real-time, such that an input source (playing a keyboard or saxophone, for example) could produce useful data with minimal latency. In addition to the segment information (pitch, duration, amplitude), the MPS system produces for each detected phrase the following analyses: estimated bpm for the current phrase and estimated bpm from the beginning of the analysis to the current phrase (using a new beat-tracking Max/MSP external object developed for the overall research project), estimated root, estimated tonality, estimated chord-scale, pitch and interval class collections (raw and weighted), plus a phrase contour value. The contours are determined using a new Max/MSP external implementation of a dynamic time-warp method to classify each phrase according to nine templates derived from Huron. The contour matching process also occurs on a phrase-by-phrase basis in real-time. These data sets are then passed to a classification system that allows a user to cluster collections according to any of the analytical criteria. The paper demonstrates a) the results of the segmenter processes compared to ground-truth data, b) the real-time operation of the analytical and contour procedures, c) the clustering classification system, and d) how the data are ultimately employed in the generative system.
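Contour classification against templates via dynamic time warping, as described above, can be sketched as follows. This is a generic DTW in plain Python/numpy, not the authors' Max/MSP external, and the template shapes are loose stand-ins for three of Huron's nine archetypes.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two contour sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_contour(phrase, templates):
    """Return the name of the template contour nearest to the phrase."""
    return min(templates, key=lambda name: dtw_distance(phrase, templates[name]))

# Coarse normalized stand-ins for three contour archetypes.
templates = {
    "ascending":  [0.0, 0.33, 0.66, 1.0],
    "descending": [1.0, 0.66, 0.33, 0.0],
    "convex":     [0.0, 1.0, 1.0, 0.0],
}
phrase = [0.1, 0.3, 0.5, 0.8, 0.95]   # a rising phrase contour
print(classify_contour(phrase, templates))
```

Because DTW warps the time axis, phrases of different lengths can be compared against fixed-length templates, which is what makes phrase-by-phrase classification feasible in real time.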
An Interactive Computational System for the Exploration of Music Voice/Stream Segregation Processes

Andreas Katsiavalos, Emilios Cambouropoulos

School of Music Studies, Aristotle University of Thessaloniki, Greece

In recent years a number of computational models have been proposed that attempt to separate polyphonic music into perceptually pertinent musical voices or, more generally, musical streams, based on a number of auditory streaming principles (Bregman). The exact way such perceptual principles interact with each other in diverse musical textures has not yet been explored systematically. In this study, a computational system is developed that accepts as input a musical surface represented as a symbolic note file, and outputs a piano-roll-like representation depicting potential voices/streams. The user can change a set of variables that affect the relative prominence of each streaming principle, thus giving rise to potentially different voice/stream structures. For a certain setting of the model's parameters, the algorithm is tested against a small but diverse set of musical excerpts (consisting of contrasting cases of voicing/streaming) for which voices or streams have been manually annotated by a music expert (this set acts as ground truth). Preliminary qualitative results are encouraging, as the streaming output is close to the ground-truth dataset. However, it is acknowledged that it is difficult to find one stable set of parameters that works equally well in all cases. The proposed model enables the study of voice/stream separation processes per se, and, at the same time, is a useful tool for the development of more sophisticated computational applications.
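To make the idea of tunable streaming principles concrete, here is a deliberately minimal greedy separator driven by a single pitch-proximity principle. It is not the authors' system; the max_leap threshold merely stands in for the adjustable principle weights they describe, and temporal overlap between notes is ignored.

```python
def separate_streams(notes, max_leap=7):
    """Greedy voice separation by pitch proximity: each (onset, pitch) note,
    taken in onset order, joins the stream whose last pitch is nearest,
    provided the leap does not exceed max_leap semitones; otherwise it
    starts a new stream. Smaller max_leap values split the texture into
    more streams, mimicking a stronger pitch-proximity principle."""
    streams = []
    for onset, pitch in sorted(notes):
        best = None
        for s in streams:
            leap = abs(s[-1][1] - pitch)
            if leap <= max_leap and (best is None or leap < abs(best[-1][1] - pitch)):
                best = s
        if best is None:
            streams.append([(onset, pitch)])
        else:
            best.append((onset, pitch))
    return streams

# Interleaved high and low lines: a classic streaming pattern.
notes = [(0, 72), (1, 48), (2, 74), (3, 50), (4, 76), (5, 52)]
streams = separate_streams(notes, max_leap=7)
print([[p for _, p in s] for s in streams])
```

Raising max_leap far enough collapses the texture into a single stream, which illustrates why, as the abstract notes, no one parameter setting works equally well across contrasting musical textures.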
Understanding Ornamentation in Atonal Music

Michael Buchler

College of Music, Florida State University, U.S.A.

In 1987, Joseph Straus convincingly argued that prolongational claims were unsupportable in post-tonal music. He also, intentionally or not, set the stage for a slippery-slope argument whereby any small morsel of prolongationally conceived structure (passing tones, neighbor tones, suspensions, and the like) would seem just as problematic as longer-range harmonic or melodic enlargements. Prolongational structures are hierarchical, after all. This paper argues that large-scale prolongations are inherently different from small-scale ones in atonal (and possibly also tonal) music. It also suggests that we learn to trust our analytical instincts and perceptions with atonal music as much as we do with tonal music, and that we not require every interpretive impulse to be grounded by strong methodological constraints.
Perceiving and categorizing atonal music: the role of redundancy and performance

Maurizio Giorgio,1 Michel Imberty,2 Marta Olivetti-Belardinelli3

3ECoNA - Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial

In order to verify whether the performer's interpretation has a role in the perceived segmentation of atonal music, we performed three experiments according to the ecological approach developed by Irène Deliège (1990). We hypothesize that musical structure affects grouping more than performance does and, moreover, that the main mechanism involved in the representation of musical structure is related to the detection of similarity and difference between phrases, that is, of their redundancy. For each experiment, 30 subjects were invited to listen attentively to two different performances of an atonal piece, to understand its plan, and to mark off the sections of the work by pressing a computer key. The order of presentation of the two performances was balanced. In the first experiment we used two versions of Berio's Sequenza VI, performed respectively by Desjardins (1998) and Knox (2006). These variants differ in duration (12:13 vs. 13:14) and show differences in dynamic aspects (i.e., velocity, intensity), accent distribution and gap durations. The aim of this work was to isolate and analyze the role of variations in dynamic components, accent distribution, duration and the instrumentalist's point of view in the representation of the musical surface, as perceived by the listeners. In the second experiment we focused on the role of performance duration by using two versions of Berio's Sequenza III, recorded by the same singer, that differ in duration. In order to better investigate the performer's interpretation of the score, in the third experiment we asked two musicians to record a performance of Berio's Sequenza VIII from a score in which we had previously erased the dynamic instructions provided by the composer. Moreover, neither of the two instrumentalists knew Berio's composition before our request. We then used the obtained tracks as stimuli in the same paradigm as the previous experiments. The results show a good number of coinciding segmentations between the two versions in the first, the second and the third experiment alike, confirming our hypothesis and suggesting a main role of texture in perceiving and representing the plan of the pieces. The results of the three experiments are discussed in relation to the role of same/different detection.
What's That Coming Over The Hill? The Role Of Music On Response Latency For Emotional Words

Paul Atkinson

Psychology, Goldsmiths University, England

Music and words both have the potential to generate emotional states that may impact on concurrent task performance, but the extent of this interaction is rarely explored. A classic example of the effects of emotional words is seen in responses to the emotional Stroop test (Stroop, 1935), whereby the presence of emotional words inhibits response times in a standard color-naming task. Graham, Robinson and Mulhall (2009) combined the Stroop task with music and found an effect. The aim of this study was to explore whether music could affect performance on an emotional Stroop task: specifically, it was hypothesized that fearful music would inhibit responses on the reading task while happy music would decrease inhibition. Both conditions were measured against a silent control. The music samples for the present study were taken from a study by Eerola and Vuoskoski (2010). 60 undergraduates (33 females and 24 males) took part in the study. The experiment involved participants responding to a colour-naming Stroop task on a computer screen that contained both threat and neutral words, either in silence or while listening to music that was rated as happy or fearful. The dependent variable was the time taken for the participant to respond to the color of the word presented. The findings of the study supported the experimental hypotheses: fearful music significantly inhibited response times, while response times in the happy music condition were significantly facilitated. In the silence condition, no significant difference was found between word types.
Tonality and Affective Experience: What the Probe Tone Method Reveals
Lower than average spectral centroid and the subjective ability of a musical instrument to express sadness

Genre-related Dynamics of Affects in Music

Romantic changes: Exploring historical differences in the use of articulation rate in major and minor keys

Matthew Poon, Michael Schutz

McMaster Institute for Music and the Mind, McMaster University, Canada

Music and speech are known to communicate emotion using acoustic cues such as timing and pitch. Previously we explored the use of these cues within a corpus of 24-prelude sets, quantifying them in each of the 12 major (nominally happy) and 12 minor (nominally sad) pieces. We found that the major-key pieces were both higher in pitch and faster in articulation rate than their minor-key counterparts (Poon & Schutz, 2011). However, we also found differences in the way Bach and Chopin used the cues, consistent with previous work suggesting that Romantic-era practices in the use of articulation rate broke with those of previous eras (Post & Huron, 2009). To further explore this change, we expanded our survey to include seven additional 24-prelude sets written by Classical and Romantic composers. For the Classical-era sets, major-key pieces were on average 25% faster than their minor-key counterparts. However, for the Romantic-era sets, major-key pieces were in fact 7.5% slower than their minor-key counterparts. Our analysis of pitch-height differences is still in progress, but through a rigorous methodology we document clear differences in acoustic cues between the Classical and Romantic eras, complementing and extending the work of Post and Huron.
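The corpus comparison above boils down to a simple computation: an articulation-rate proxy per piece (notes per second) and the average percent difference between major- and minor-key pieces. The per-piece rates below are invented for illustration; they merely echo the reported Classical-era pattern and are not the study's data.

```python
def articulation_rate(note_onsets, duration_sec):
    """Notes per second over a piece: a simple articulation-rate proxy."""
    return len(note_onsets) / duration_sec

def mean_percent_difference(major_rates, minor_rates):
    """Average percent by which major-key rates exceed minor-key rates."""
    maj = sum(major_rates) / len(major_rates)
    mino = sum(minor_rates) / len(minor_rates)
    return 100.0 * (maj - mino) / mino

# Hypothetical per-piece rates (notes/sec) from a 24-prelude set.
major = [5.0, 6.2, 4.8, 5.5]
minor = [4.0, 4.6, 4.2, 4.4]
print(round(mean_percent_difference(major, minor), 1))
```

A negative value of this statistic would correspond to the Romantic-era reversal the abstract reports, where major-key pieces were on average slower than minor-key ones.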
Acoustic variables in the communication of composer emotional intent

Experienced emotional intensity when learning an atonal piece of music: A case study

Arantza Almoguera1, Mari Jose Eguilaz1, Jose Antonio Ordoñana2, Ana Laucirica1

Different studies point out that music is one of the most effective inducers of intense emotional experiences. Nevertheless, almost all the studies found focus on the listener's emotions, while studies focused on the performer are scarce. Due to its characteristics, it is more difficult for atonal music to generate positive emotions, both in audiences and among performers and students. In fact, several authors consider that atonal music is emotionally incomprehensible, and that is the reason why atonal music is not very widespread in music education centers. The goal of our study is to investigate the emotional intensity experienced by five flute students when learning an atonal piece for solo flute. Results point out that the deeper knowledge of the music reached in the learning process and the successive listenings to the piece entail more familiarity and a better understanding of the music played; therefore, students are able to find emotionally intense passages, as happens with tonal music. Consequently, we do not agree with all those theories that suggest that atonal music is unexpressive and emotionally incomprehensible, and we confirm that cognition has a positive influence on the emotion felt when playing atonal music. This work is part of the National Research Project I+D 2008-2011, code EDU-2008-03401, "Audition, cognition and emotion in the atonal music performance by high level music students", funded by the Ministry of Science and Innovation of Spain.
Nancy Rogers

College of Music, Florida State University, United States

This paper aims to bridge the gulf between music cognition and mainstream music theory by describing ways to augment typical approaches to basic musical organization (form and phrase structure) in a traditional music theory class. Discussing principles of musical expectation, event segmentation, schema theory, and statistical learning is compatible with common pedagogical approaches to form. I also describe classroom activities and assignments that engage research in expectation and schema theory.
Interactive Computer Simulation for Kinesthetic Learning to Perceive Unconventional Emergent Form-bearing Qualities in Music by Crawford Seeger, Carter, Ligeti, and Others
In the training of future piano teachers (as well as of other instrumental teachers) provided by the academies of music, the strongest emphasis is put on their preparation in terms of specific musical competences, such as a high level of piano performance and the ability to build up pupils' solid métier, to shape pupils' playing apparatus, and to develop their musical and technical skills. The teachers' training also involves psychological and educational knowledge and skills, which, however, are usually not taken very seriously, either by the music students themselves or by the music academies. The study aims at establishing whether there is a relationship between piano teachers' sense of competence (musical, educational, and psychological) and their pupils' attitudes towards their piano teachers and piano lessons. The subjects were pupils from professional primary music schools (N=40) and their piano teachers (N=15). The pupils were administered the Pupils' Questionnaire, designed to test their attitudes towards their piano teachers and the piano lessons. The teachers completed the Piano Teacher Self-Efficacy Questionnaire, designed to measure their sense of competence. The data were compared for correspondence. The comparison revealed that the higher the teachers' sense of psychological competence, the more positive their pupils' attitudes both towards the teacher and towards the piano lessons, the less often the pupils experienced negative feelings during the lessons, the lower their level of anxiety, and the higher their sense of self-fulfillment. It was also revealed that the higher the teachers' musical competences, the less often their pupils experienced joy and self-realization, and the more often they experienced anxiety. The results indicate clearly that neither the teachers' good piano playing, painstakingly achieved during their musical studies, nor their careful training in the remaining areas ensures a good relationship between teacher and pupil. These factors, therefore, cannot be a predictor of the effectiveness of teaching, i.e., they do not result in developing pupils' musical interest and motivation for piano playing. These findings once again point to the great significance of teachers' psychological competences and their role in shaping pupils' positive attitudes towards piano playing and towards music in general.
The Effect of Music Teaching Method on Music Reading Skills and Music Participation: An Online Study
Music training, personality, and IQ

How do individuals who study and practice music for years on end differ from other individuals? We know that musically trained individuals tend to perform better on tests of cognitive abilities, including measures of listening, memory, verbal abilities, visuospatial abilities, nonverbal abilities, and IQ. Such advantages extend to school classrooms, where musically trained children and adolescents tend to get better grades than their untrained counterparts in all school subjects except physical education (i.e., sports). One particularly provocative finding is that duration of music training is associated with average grades in school even when IQ is held constant. In other words, musically trained individuals are better students than one would predict based on their IQ, which implicates a contribution of individual-difference variables other than IQ. One possibility is that studying music is associated with individual differences in personality. The aim of this research is therefore to examine whether personality variables can help to explain individual differences in duration of music training. The sample included a large number of undergraduates who varied widely in terms of their music background. They were tested individually on measures of IQ (Wechsler Abbreviated Scale of Intelligence) and personality (Big Five Inventory). They also provided detailed demographic-background information. Music background was defined as the number of years of playing music regularly, which was highly correlated with years of music lessons but more strongly associated with the predictor variables. Playing music regularly was correlated positively with Performance (nonverbal) IQ and Openness-to-Experience, but negatively with
Music-Games: Supporting New Opportunities for Music Education
Gianna Cassidy, Anna Paisley

This paper presents Phase 1 of the 24-month EPSRC project, Music-Games: Supporting New Opportunities for Music Education. While learners are increasingly engaged with digital music participation outside the classroom, evidence indicates learners are increasingly disengaged with formal music education. The challenge for music educators is to capitalise on the evident motivation for informal music-making with digital technology as a tool to create authentic and inclusive opportunities to inspire and engage learners with music in educational contexts. Previous research highlights the power of music participation to enrich cognitive, social and emotional wellbeing, while a growing body of work highlights the educational potential of digital games to scaffold and enrich personalised learning across the curriculum. This body of work addresses the neglected music-game synergy, investigating the potential of music games to support and enrich music education by identifying processes, opportunities and potential outcomes of participation. Phase 1 aimed to elucidate Educator, Learner and Industry attitudes towards, uses of and requirements for music-games; the musical opportunities and experiences music-games support; processes of participation in and outside the classroom; and constraints of use within existing practice in line with defined curriculum goals. Study 1 presents a comprehensive questionnaire investigation (n=2000) of Educators', Learners', and Games Industry's uses and functions of music-games, and barriers to classroom employment. Study 2 presents a mixed-method investigation of learner sessions (n=70) with RockBand, recording performance (e.g., score, music choice, usability) and self-report measures (e.g., Profile of Mood States and Flow), together with a thematic analysis of post-session reflective interviews. Study 3 presents a thematic analysis of educator- and industry-co-created scenarios of use for RockBand in the classroom in line with defined curriculum goals. Findings suggest music-games can engage and inspire us with music, potentially supporting and enriching key areas of music education, and the social, emotional and cognitive wellbeing of the learner in the classroom and in the learner's wider musical world. Analysis was guided by the elements of the new opportunities in music curriculum, Hargreaves et al.'s (2003) models of opportunities in music education, and potential outcomes of music education. Findings are discussed through recommendations for effective and efficient employment of music technologies for Educators, and innovative and user-centred design of future music technologies for Industry.
Attitudes Towards Game-Based Music Technologies in Education: A Survey Investigation
goals. Yet, despite the widespread usage and relative accessibility of music-based digital games, coupled with the abundance of research supporting the cognitive, emotional and social benefits of musical participation, there remains a dearth of empirical research into the inclusion of such technologies within the realm of music education. In view of this, and as part of an ongoing EPSRC-funded project designed to evaluate the educational potential of music-based digital games, a large-scale survey investigation was conducted as a means of ascertaining current uses of, requirements for and attitudes towards music-based video games across three groups of relevant stakeholders: educators, learners and games industry experts. An initial pilot study was conducted to assess the reliability and validity of this scale across 250 participants. Following analysis, the questionnaire was refined before being administered across the three groups of relevant stakeholders (n = 2000+). Results from a nested sub-sample of 300 cases from the overall participant pool are presented here, with a specific focus on learners' responses to the final version of the survey. These initial findings are then discussed in light of the overarching aims of the project, and with regard to the effective and successful integration of music-based games within music education.
Effects of Observed Music-Gesture Synchronicity on Gaze and Memory
Extracting Action Symbols From Continuous Motion Data
Kristian Nymoen,1 Arjun Chandra,1 Mariusz Kozak,2 Rolf Inge Godøy,3 Jim Tørresen,1 Arve Voldsund3
1Dept. of Informatics, University of Oslo, Norway; 2Dept. of Music, University of Chicago, IL, USA; 3Dept. of Musicology, University of Oslo, Norway
Human motion can be seen as a continuous phenomenon which can be measured as a series of positions of body limbs over time. However, motion is cognitively processed as discrete and holistic units, or chunks, ordered by goal-points with trajectories leading between these goal-points. We believe this is also the case for music-related motion. With the purpose of utilising such chunks for the control of musical parameters in mobile interactive systems, we see substantial challenges in developing a robust automated system for identifying motion chunks and extracting segments from the continuous data stream. This poster compares several automated segmentation techniques for motion data, applied to recordings of people moving to music. An experiment was carried out in which 44 participants were given the task of moving their body to short musical excerpts. The motion was recorded by infrared motion capture, with markers on the right wrist, elbow and shoulder and on the C7 vertebra. In order to make the segmentation techniques easily transferable to mobile devices, the automated segmentation was based only on the data from the right wrist marker. A human observer watching 3D point-light displays of the motion recordings of the whole arm (wrist, elbow, shoulder, neck) demarcated chunks by looking for perceptually salient moments in the recordings. The chunks demarcated by the human were used as a baseline for evaluating the precision and recall rates of the automated segmentation techniques.
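The kind of evaluation described here, where automated boundaries are scored against human-demarcated chunks, can be sketched as follows. This is a minimal illustration, not the authors' actual scoring code; the tolerance window and the example boundary times are assumptions.

```python
def boundary_precision_recall(detected, reference, tolerance=0.25):
    """Score detected chunk-boundary times (seconds) against a human
    baseline: a detection is a hit if it lies within `tolerance`
    seconds of a not-yet-matched reference boundary."""
    matched = set()
    hits = 0
    for d in sorted(detected):
        for i, r in enumerate(sorted(reference)):
            if i not in matched and abs(d - r) <= tolerance:
                matched.add(i)
                hits += 1
                break
    precision = hits / len(detected) if detected else 0.0
    recall = hits / len(reference) if reference else 0.0
    return precision, recall

# Hypothetical boundary times from an automated segmenter vs. a human:
auto = [1.02, 2.50, 4.10, 6.00]
human = [1.00, 2.40, 3.00, 4.00, 6.05]
p, r = boundary_precision_recall(auto, human)  # p = 1.0, r = 0.8
```

Greedy one-to-one matching keeps a single reference boundary from being credited to several detections, which would otherwise inflate precision.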
Embodied musical gestures as a game controller
Charlie Williams
University of Cambridge, UK
With the increasing prevalence of portable electronic devices and the concomitant pervasiveness of casual gaming, interest in the potential musical effects of this growth has been growing. Michiel Kamp (2010) in particular surveys the gaming field looking for ludic music, ultimately calling for it more as a future goal than as an aspect of currently available games. I present a digital game-based model for music-making and musicianship-learning, grounded in embodied spontaneity and sociality rather than the extant music-theoretical, ear-training, or rote practice models. A series of four mobile-device app games in development is described, in which live musical gestures (singing or clapping) serve as the control mechanism. For example, in one game a group of pitch classes is represented by a row of gates, which close when a pitch is sung and then open slowly over time. In that game mechanic, the goal is to break bricks by bouncing the ball off the closed gates; to do so a user must accurately self-represent the pitch internally, and then perform the pitch required, all within a timeframe bounded by the specifics of the game's physics simulation. Other games focus variously on controlling the high-low/loud-soft distinction rather than producing specific pitch classes, and on rhythmic pattern-clapping. The rhythm-based games do not require a fixed tempo but rather include a mechanism for mutual tempo entrainment between player and device. Gameplay and demographic data are gathered in both laboratory and in vivo settings, and a preliminary analysis of this data will be presented at the conference. A hypothesis that musicality is at least partially constructed through increasingly sophisticated manipulation of a vocabulary of potential gestures will be evaluated in light of these findings.
Physical movements of musicians and conductors alike play an important role in music perception. This study was designed to identify whether there was a predictable mathematical relationship between hand gestures performed by an expert conductor and the vocal responses of a general adult sample with and without musical background. Our empirical work has found that adults systematically vary their utterance of the syllable /dah/ in a way that matches the motion characteristics of the hand gestures being observed, but the physical nature of this relationship remained unclear. The movements of the conductor were captured using a high-resolution motion capture system while she performed four different hand gestures, namely flicks, punches, floats and glides, at constant tempo. Kinematic features such as position and velocity were extracted from the motion data using a computational data quantification method. Similarly, an average RMS amplitude profile was computed from the repeated utterances of /dah/ given each gesture across all participants. The kinematic features were then compared to their amplitude counterparts in the audio tracks. A correlation analysis showed very strong relations between the velocity profiles of the movements and their accompanying sound-energy profiles. Deeper analysis showed that initial velocity in the motion data predicted the RMS amplitude of the auditory counterparts, i.e. faster initial speed elicited louder responses. The observed structural similarity between the movement and sound data might be due to a direct mapping of the visual representation of the observed action onto one's own motor representation, which is reflected in its resultant auditory effects.
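A comparison of this kind, correlating a gesture's velocity profile with a vocal RMS amplitude profile, can be sketched in a few lines. This is a simplified illustration under assumed frame sizes and sampling rates, not the authors' actual pipeline.

```python
import numpy as np

def speed_profile(position, fps):
    """First-difference estimate of speed from a 1-D position trace
    sampled at `fps` frames per second."""
    return np.abs(np.diff(position)) * fps

def rms_envelope(audio, frame_len):
    """Frame-wise RMS amplitude of an audio signal."""
    n = len(audio) // frame_len
    frames = audio[:n * frame_len].reshape(n, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def correlate_profiles(kinematic, acoustic):
    """Pearson correlation after linearly resampling the acoustic
    profile to the length of the kinematic one."""
    x = np.linspace(0.0, 1.0, len(kinematic))
    y = np.interp(x, np.linspace(0.0, 1.0, len(acoustic)), acoustic)
    return np.corrcoef(kinematic, y)[0, 1]
```

Resampling is needed because motion capture and audio run at different rates, so the two profiles rarely have the same number of samples.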
Intelligent dance moves: rhythmically complex and attractive dance movements are perceived to reflect higher intelligence
Suvi Saarikallio, Geoff Luck, Birgitta Burger, Marc R. Thompson, Petri Toiviainen
Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music, University of Jyväskylä, Finland

Dance movement has been shown to reflect individual characteristics, such as the personality of the dancer, and certain types of movements are generally perceived as more attractive than others. We investigated whether particular dance movements would be perceived as illustrative of a dancer's intelligence. As intelligence generally refers to the ability to adapt to complexly changing conditions, we studied movement features indicating complexity, and because people generally co-associate different positive characteristics, we studied features typically perceived as attractive. The role of the observer's mood and music preference was also studied. Sixty-two adults (28 males, mean age 24.68) were presented with 48 short (30 s) audiovisual point-light animations of other adults dancing to music representing different genres of dance music (pop, latin, techno). The participants were instructed to rate the perceived intelligence of the dancer in each excerpt. In addition, they rated their mood and activity levels before, and their preference for the music after, the experiment. Movement features expressive of complexity and attractiveness were computationally extracted from the stimuli. Men gave significantly higher intelligence ratings to female dancers with wider hips, greater hip-knee phase ratio, and greater movement complexity as indicated by metrical irregularity. However, female observers' ratings were not influenced by the movement characteristics. Moreover, while music preference did not influence the ratings, current positive mood and higher energy level biased male observers to give higher intelligence ratings to female dancers. The study shows that rhythmically complex and generally attractive movement appears to be perceived as indicative of intelligence, particularly by men rating female dancers. Overall, the study provides preliminary evidence that certain music-related movements are perceived as expressive of inferred personal characteristics such as intelligence.
The Impact of Induced Emotions on Free Movement
Edith Van Dyck,* Pieter-Jan Maes,* Jonathan Hargreaves,# Micheline Lesaffre,* Marc Leman*
*Department of Arts, Music and Theater Sciences, Ghent University, Belgium
#Department of Music, Trinity Laban Conservatoire of Music and Dance, UK

The goal of this study was to examine the effect of two basic emotions, happiness and sadness, on free movement. A total of 32 adult participants took part in the study. Following an emotion induction procedure intended to induce emotional states of happiness or sadness by means of music and guided imagery, participants moved to an emotionally neutral piece of music that was composed for the experiment. Full-body movement was captured using motion capture. In order to explore whether differences in corporeal articulations existed between the two conditions, several movement cues were examined. The criteria for the selection of these cues were based on Effort-Shape analysis. Results revealed that in the happy condition, participants showed faster and more accelerated body movement. Moreover, movements proved to be more expanded and more impulsive in the happy condition. These findings provide evidence of the effect of emotion induction on body movement.
This article locates Helmholtz's groundbreaking research on timbre, and a few of its historical implications, in terms of musical and mathematical coordinates. By pinpointing selected timbre-related examples it describes how musical aesthetic ideals, mathematical theories and acoustics research systematically interdepend. After repositioning Helmholtz's work with respect to Fourier's theorem, two musical perspectives are considered: Schoenberg's vision of Klangfarbenmelodie and Xenakis's quest for sonic granularity. It is moreover suggested that the 1960 ANSI definition be regarded as a late echo of Helmholtz's reign. The evolution of the multidimensional-scaling-based timbre space model is briefly outlined before observing a plurality of mathematical approaches which seems to mark current research activities in acoustics.
Ecological factors in timbre perception
Jens Hjortkjær
Department of Arts and Cultural Studies, University of Copenhagen, Denmark

Recent meta-analyses of timbre perception studies have suggested that physical aspects of the instrument sources are picked up in timbre perception. In particular, continuous representations of perceived timbre similarities (timbre spaces) appear to reflect categorical information about the material composition of the instruments and about the actions involved in playing them. To examine this experimentally, twenty listeners were asked to rate the similarity of impact sounds representing categorically different actions and materials. In a weighted multidimensional scaling analysis of the similarity ratings we found two latent dimensions relating to materials and actions, respectively. In an acoustic analysis of the sound stimuli, we found the material-related dimension to correlate with the centroid of the long-term spectrum, while the action-related dimension was related to the temporal centroid of the amplitude envelope. The spectral centroid is also a well-known and robust descriptor across musical timbre studies, suggesting that the distribution of frequencies is perceptually salient because it carries information about the material of the sound source. More generally, the results suggest that listeners attend implicitly to particular aspects of the continuous sound stimulation that carry higher-order information about the sounding source.
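The two descriptors named in this abstract can be computed directly from a signal. The sketch below is a simplified illustration of the standard definitions, not the study's exact feature extraction:

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Magnitude-weighted mean frequency (Hz) of the long-term spectrum."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float((freqs * mag).sum() / mag.sum())

def temporal_centroid(signal, sr):
    """Amplitude-weighted mean time (s) of the envelope: impulsive
    (struck) sounds yield early centroids, sustained sounds later ones."""
    env = np.abs(signal)
    times = np.arange(len(signal)) / sr
    return float((times * env).sum() / env.sum())
```

For a steady 1 s, 440 Hz sine sampled at 8 kHz, `spectral_centroid` returns roughly 440 Hz and `temporal_centroid` roughly 0.5 s, the midpoint of the sound.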
Establishing a spectral theory for perceptual timbre blending based on spectral-envelope characteristics
Comparative study of saxophone multiphonic tones: a possible perceptual categorization
Martín Proscia, Pablo Riera, Manuel C. Eguia
Laboratorio de Acústica y Percepción Sonora, Universidad Nacional de Quilmes, Argentina

A number of studies have been devoted to the production of multiphonics in woodwinds, focusing on the possibilities and difficulties of intonation, fingering, pitch of components, and production of trills. However, most of them disregard the timbral and dynamic qualities of these tones, or are aimed at the detailed analysis of a few multiphonic examples. Recent research has also served to unveil the physical principles that give rise to these complex tones, including the interaction with the vocal tract of the performer. In comparison, the psychophysics of multiphonic perception has received much less attention, and a complete picture of how these multiple sonorities are eventually grouped into perceptual classes is still missing. This work presents a comparative study of a comprehensive collection of multiphonics of the saxophone, from which a possible categorization into perceptual classes is derived. In order to do this, a threefold analysis is performed: musical, psychoacoustical and spectral. Based on previous research from the musical perspective, an organization of the perceptual space for the multiphonics into four main classes was proposed. As a first step, a total of 120 multiphonic tones of the alto saxophone, spanning a wide spectrum of possible sonorities, were analyzed using Schaeffer's concept of the sound object. From this analysis, a representative subset of 15 multiphonic tones was selected, including samples for each of the four proposed groups. These representative tones were used in a psychoacoustical experiment (pair comparison test) in order to obtain judgements of similarity between them. The results were analyzed using multidimensional scaling. Finally, by means of a spectral analysis of the tones, possible cues used by the listeners to evaluate similarity were obtained. As a main result, multidimensional scaling shows a perceptual organization that closely resembles the classification proposed from the musical point of view, clustering the four main classes in a two-dimensional space. From the spectral analysis, a possible correspondence of the two meaningful dimensions with the number of components and the pitch of the lower component was analyzed. A perceptual categorization of multiphonics is of utmost importance in musical composition. This work advances a possible organization of these tones for the alto saxophone that could eventually be extended to other woodwind instruments.
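The multidimensional scaling step used here, embedding items in a low-dimensional space from pairwise similarity judgements, can be illustrated with classical (Torgerson) MDS. This is a generic sketch of the technique, not the specific algorithm or data of the study:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Embed items in `dims` dimensions from a symmetric matrix D of
    pairwise dissimilarities (classical / Torgerson scaling)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]    # keep the largest eigenvalues
    scale = np.sqrt(np.maximum(vals[order], 0.0))
    return vecs[:, order] * scale
```

When the dissimilarities are exact Euclidean distances, the recovered configuration reproduces them up to rotation and reflection; with perceptual ratings, the embedding is only an approximation whose axes must then be interpreted (here, number of components and pitch of the lower component).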
Comparison of Factors Extracted from Power Fluctuations in Critical-Band-Filtered Homophonic Choral Music

Analysis of Musical Timbre Semantics through Metric and Non-Metric Data Reduction Techniques
A physical modelling approach to estimate clarinet control parameters
Speed Poster Session 36: Grand Pietra Hall, 11:40-12:10
Social perspectives
Dancing with death: music festivals, healthy and unhealthy behaviour
Alexandra Lamont
Centre for Psychological Research, Keele University, United Kingdom

Popular music festivals are growing in popularity, and certain types of festival have become associated with different unhealthy behaviours such as alcohol and drug abuse. While research has highlighted the considerable wellbeing that festivals can provide, little is known about the unhealthier elements of music festivals. This project explores the choices festival-goers make around healthy and unhealthy behaviour, and attitudes towards risk and pleasure in relation to music. The research uses ethnographic methods at a three-day residential (camping) electronic dance music festival, with observational data, an online survey of 76 festival-goers completed after the event, and follow-up telephone interviews. Across all ages, many participants reported an unhealthy set of behaviours (combining legal and illegal drugs) as their route towards wellbeing, in a setting which provides an alternative reality ("the giant bubble of happyness [sic]") alongside a supportive social situation which minimizes the perceived risks of such unhealthy behaviour. Emerging themes included escape from reality, the importance of social connections, and a sense of control over the use of illegal drugs. Memories of the event are somewhat hazy for many participants, and other behaviour is less planned (e.g. attention is rarely paid to set lists or to attempts to hear particular DJs or artists). The results show that many festival-goers prioritise a direct route to pleasure through hedonism. The illusion of safety of the festival context leads to more risky behaviour than is typical in festival-goers' everyday life, and this altered perception of risk poses concerns in terms of health and wellbeing.
Deriving Musical Preference Profiles from Liked and Disliked Artists
You get what you pay for: pitch and tempo alterations in user-posted YouTube videos
Joseph Plazak
School of Music, Illinois Wesleyan University, USA

Despite the widespread availability of free streaming music hosted by YouTube.com, many YouTube videos contain music that has been altered from the original recording in some way, including alterations of pitch, tempo, or timbre. The factors and motivations guiding these alterations remain unknown. The aims of this study were to determine the prevalence of pitch and tempo alterations in user-posted YouTube videos, and also to determine the direction and magnitude of these alterations. In an initial study, 75% of 100 collected YouTube recordings contained a nominal alteration of pitch and/or tempo (+/- 1 Hz; +/- 3 bpm). Thirty-four of these recordings contained a pitch alteration equal to or larger than a half step (m2). Further analysis of the data revealed that pitch levels of the sample set were equally likely to be higher or lower, but decreasing the tempo of a recording was more prevalent than increasing the tempo. Additional studies may investigate whether specific characteristics of the music influence the direction and magnitude of YouTube users' alterations. Such characteristics may include the type/style of music, the gender of the vocalist in the music being altered, the release date of the recording, etc.
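For reference, a pitch alteration between two recordings can be expressed in equal-tempered semitones from the ratio of their tuning references (a half step, m2, corresponds to a frequency ratio of 2^(1/12) ≈ 1.059). The function and the frequencies below are an illustrative sketch, not the study's measurement procedure:

```python
import math

def semitone_offset(f_original, f_posted):
    """Signed pitch offset in equal-tempered semitones between two
    reference frequencies (e.g., A4 of the original vs. the posted video)."""
    return 12.0 * math.log2(f_posted / f_original)

# A posted video whose A4 lands at ~466.16 Hz instead of 440 Hz has
# been shifted up by one half step (m2):
offset = semitone_offset(440.0, 466.16)  # ~ 1.0
```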
The attribution of agency to sound can affect social engagement
Surveying attitudes towards singing and their impact on engagement with this musical activity

Singing is the most natural of all musical activities and one that is readily accessible to most individuals. It can be practised alone or in a group, in different cultural settings, on different occasions, and for the most diverse purposes (entertainment, grieving, religious rituals, alliance rituals). A recent yet growing body of literature highlights the potential benefits of singing for well-being and health. This evidence shows singing to be an activity with several psychological, physical and social components that can interact, contribute to feelings of well-being, and have an impact on the immune system. However, Bailey and Davidson (2002, 2005) highlight an elitist view of music-making that is predominant in the Western world. According to those authors, this musical elitism present in westernized societies not only views musical ability as being limited to a talented minority, it also restricts the majority of the population to being procurers rather than producers of music. If this musical elitism is present in our society, then it is possible that it influences our engagement with singing activities. If this is indeed the case, then it is possible that a majority of individuals in the Western world are missing out on an activity that can potentially benefit their well-being and even health. This study aimed to explore how our attitudes towards singing influence our engagement with this musical activity. Specifically, we hoped to see how people's opinions of their own voices, their own singing, singing in general and the general singing voice influenced their likelihood of singing in public or in private, in formal or informal settings, and in a group or on their own. We suggest that the majority of our respondents share an elitist attitude towards singing. We expected this attitude to impact negatively on their engagement with singing, and this impact to be more pronounced when respondents were asked about public, formal and solo singing. A survey was developed and made available online. Data were collected until the spring of 2012 and suggested that a majority of our respondents share an elitist attitude towards singing. For those who believe they are not part of the singing elite, singing is something they do in private or informal settings. Approaches to research on and promotion of singing for well-being may have to start taking these attitudes into account.
Work Attitudes, Role Stress and Health among Professional Singers and Call Center Employees
Maria Sandgren
Department of Culture and Communication, Södertörn University, Sweden

In the literature on artists and health problems, there is a lack of studies taking work conditions and their impact on well-being and health into account. The specific work conditions for artists can be summarized under the concept of boundaryless work, where the individual faces short-term employment and increased demands for flexibility and personal responsibility. Research on, for example, short-term employment and health shows inconsistent results. Professional classical singers might constitute a very select group of individuals who have been very successful in coping with complex work circumstances. Yet singers do not appear indifferent to work load, not even in a familiar situation such as a singing lesson with their regular vocal coach. They are also at increased risk of developing voice disorders. The aim of the study was to compare professional singers in the classical genre with another group of professional voice users, call centre employees, on variables such as work conditions, job satisfaction, health and vocal behaviour. Professional classical singers (n=61; women n=33, men n=28) and call centre employees filled in a questionnaire covering validated variables: qualitative and quantitative work load, perceived performance, job satisfaction, work involvement, job autonomy, mental and physical health, and vocal behaviour. Results indicated that qualitative work load and perceived performance showed significant positive associations with impaired mental and physical health among singers. Vocal behaviour showed significant positive associations with job-induced tension, perceived external demands and quantitative work load. Job satisfaction showed significant positive associations with work involvement, job autonomy and perceived performance. Effects of work load were manifested both in vocal behaviour and in mental health. Singers seemed to be positively influenced, and not distressed, by the achievement-oriented nature of their work, in that job satisfaction was associated with a strong commitment and their personal contribution of high artistry.
Research on the emotional responses and brain activations evoked by music has been a topic of great academic and public interest. A recent brain-imaging study by Salimpoor and colleagues suggests the involvement of mechanisms for 'wanting' and 'liking' when subjects listened to intensely pleasurable music. Their paper elaborates the functions of the reward system during music listening. Inspired by their paper, the present study aims to explore listening behavior around authentic cadences by combining music analysis with listeners' physiological measures. We hypothesize that cognition of the dominant chord and the following tonic chord may engage mechanisms for 'wanting' and 'liking', respectively. The associated experiences of peak emotion may be detected by measuring skin conductance. Participants' skin conductance was measured during music listening. In Experiment 1, we used long music stimuli, including complete Taiwanese popular songs (3-5 min) and excerpts of German art songs (50-100 sec). In Experiment 2, we used 48 short music stimuli (<30 sec). A moving window of 2 sec was used to detect significant increases of skin conductance within this window, i.e., skin conductance responses.
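The moving-window detection described above can be sketched as follows; the function name and the rise threshold are hypothetical stand-ins, since the abstract does not specify the authors' criterion for a "significant increase".

```python
import numpy as np

def detect_scrs(sc, fs, win_sec=2.0, min_rise=0.05):
    """Flag skin conductance responses (SCRs): window start indices at
    which skin conductance rises by at least `min_rise` (a hypothetical
    threshold, in microsiemens) across a `win_sec`-second window.
    sc: 1-D array of skin conductance samples; fs: sampling rate (Hz)."""
    win = int(win_sec * fs)
    onsets = []
    for start in range(0, len(sc) - win):
        rise = sc[start + win] - sc[start]  # net increase across the window
        if rise >= min_rise:
            onsets.append(start)
    return onsets
```

A real analysis would additionally merge overlapping windows into single response events; this sketch only marks every window containing a sufficient rise.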
In Experiment 1, we observed that some authentic cadences tended to induce listeners' skin conductance responses. Cadences combined with changes in tempo/loudness or with the recurrence of a theme tended to evoke large skin conductance responses. In Experiment 2, among 12 musical events that evoked significant skin conductance responses, only one event could be related to an authentic cadence. An isolated musical cadence may be unable to evoke listeners' experience of peak emotion. Regarding ecological validity, longer music excerpts are more appropriate for investigating listeners' emotional responses to cadences. If an authentic cadence is combined with changes in tempo/loudness or the recurrence of a theme, listeners have a higher probability of experiencing the intense emotions of 'wanting' and 'liking'. We suggest that skin conductance measures and brain-imaging techniques may be important tools for future research on the 'art' of elaborating musical cadences.
How can we compare different listeners' experiences of the same music? For decades, experimenters have collected continuous ratings of tension and emotion to capture the moment-by-moment experiences of music listeners. Over that time, Pearson correlations have routinely been applied to evaluate the similarity between response A and response B, between the time-series averages of responses, and between responses and continuous descriptors of the stimulating music. Some researchers have criticized the misapplication and misinterpretation of this class of statistics, but alternatives have not gained wide acceptance. This paper looks critically at the applicability of correlations to continuous responses to music, the assumptions required to estimate their significance, and what is left of the responses when these assumptions are satisfied. It also explores an alternative measure of cohesiveness between responses to the same music, and discusses how that measure can be employed as a measure of reliability and similarity with empirical estimates of significance.
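For reference, the routine analysis the paper critiques amounts to a few lines of code; `response_similarity` is a hypothetical helper, and the comment flags the assumption the paper questions.

```python
import numpy as np

def response_similarity(resp_a, resp_b):
    """Pearson correlation between two listeners' continuous ratings,
    sampled on the same time grid -- the routinely applied similarity
    measure examined critically in this paper.
    NOTE: the usual significance test assumes independent samples,
    which strongly autocorrelated continuous responses violate."""
    return np.corrcoef(resp_a, resp_b)[0, 1]
```

Applied to two perfectly linearly related rating series, the function returns 1.0 regardless of offset or scale, which is exactly why correlation discards level and range information about the responses.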
A number of studies have suggested that the two-dimensional valence-arousal model cannot account for all the variance in music-elicited affective experiences. The goal of this study is the further elaboration of the underlying dimensions of affective experiences of music. Specifically, the aim of the first study was to empirically collect a set of attributes that represents the subjective, evaluative experience of music. Participants were asked to produce attributes that could describe their subjective experience of 64 presented musical excerpts, selected to cover a wide spectrum of music genres, themes and instruments. The aim of the second study was to establish the underlying structure of the affective experience of music through a factor-analytic study. Participants assessed 72 musical excerpts on an instrument consisting of 43 bipolar seven-point scales. Principal component analysis showed that the underlying structure of the affective experience of music consists of three basic dimensions, interpreted as affective valence, arousal and cognitive evaluation. Congruence analysis indicated the robustness of the three obtained dimensions across different music stimuli and participants.
How music can brighten our world: emotions induced by music affect brightness perception
Promoting Social Engagement for Young Children with Autism: a Music Therapy Approach
Potheini Vaiouli
Indiana University, USA
Joint attention is a foundational non-verbal social-communication milestone that fails to develop naturally in children with autism. This study used improvisational music therapy with three young children identified with autism in a kindergarten classroom. The three participants received individual, weekly music therapy sessions at their school. The study employs a mixed-method design that uses improvisational music therapy to enable joint attention, verbal or non-verbal communication, and social interaction for the three participants. In addition, a complementary qualitative analysis explored the teachers' and parents' perspectives and the variables that may have influenced the intervention outcomes.
Music Therapy enhances perceptive and cognitive development in people with disabilities: a quantitative study
Dora Psaltopoulou, Maria Micheli
School of Music Studies, Aristotle University of Thessaloniki, Greece; General Hospital Thessaloniki, Agios Paulos, Greece
A statistical study, designed to assess the effectiveness of Music Therapy for children and adults with disabilities in Greece, shows that Music Therapy enhances perceptive and cognitive development. The main assumptions concerned the types of populations and the characteristics of their pathologies, as well as the role played by the combination of different therapy modalities, so as to show the effectiveness of Music Therapy in Greece. The key objective was to assess the effectiveness of music therapy through the personal evaluations made by the parents of the subjects. The subjects' characteristics and parental environments were documented as representative of the populations who participate in the practice of music therapy in Greece. Quantitative research was conducted on 149 subjects with disabilities. Questionnaires, answered by the subjects' parents, were used as research instruments. The data were processed with the statistical package SPSS v.12, with hypothesis validity set at α = 0.05 and twofold cross-checking. Music Therapy is effective regardless of the pathology of the subjects or the co-practice of other therapies such as Occupational Therapy, Speech Therapy and Psychotherapy. The subjects participating in Music Therapy sessions in Greece, children and young adults with disabilities, showed improvement in listening ability, psychosocial function, intellectual ability and emotional growth.
Finding the right tone for the right words? Music therapy, EEG and fronto-temporal processing in depressed clients
Ludger Hofmann-Engl
Department of Music, Coulsdon College
Following the ideas of Kurt Blaukopf, who pointed out that thinking in symmetries was not confined to Baroque composition but could also be found elsewhere, such as in landscaping, this paper introduces the concept of cognitive categories as found within different music-aesthetic approaches. Additionally, it claims that isomorphic cognitive categories can be found in other areas of human activity such as philosophy, mathematics and politics. In order to demonstrate the validity of this approach, the concept of cognitive categories has been applied to different time periods of Western civilization, commencing with the Middle Ages and leading up to the avant-garde. Here, for instance, the paper makes the claim that the cognitive category of force and counter-force is instrumental for the Classical period and can be found within sonata form, Newton's laws of motion, and the concept of thesis, antithesis and synthesis in the works of Hegel. The paper does not claim to be comprehensive but to open up an area of research which has so far received little attention.
Music listening from an ecological perspective
Anders Friberg
KTH Royal Institute of Technology, Sweden
It is evident that we normally analyze sounds in our environment in terms of their source properties rather than the quality of the sound itself. This is natural in everyday listening, considering that the human perceptual system always tries to understand and categorize sensory input. From the sound we can estimate physical properties of the objects, such as size and material. This ecological approach can also be extended to human communication. From a person's voice we can estimate identity, distance, effort, and emotion. From footstep sounds we can estimate gender and other properties. This type of source perception is thus evident for environmental and human sounds, but is the same mechanism also active in music listening? It seems plausible if we consider music as human-to-human communication. Also, as pointed out by Clarke (2005), it is hard to make any distinction between everyday listening and music listening. Thus, we may assume that both kinds of listening involve the same perceptual processing. We will present a broad spectrum of perceptual features related to source properties that can be motivated from an ecological/survival point of view and discuss their potential relevance in music listening. A variety of different aspects are potentially important during music listening. Many of them are self-evident and empirically validated, while some others still lack empirical evidence. Basic object properties not related to human communication include: source separation, obviously active in music listening; source localization, an important aspect in music reproduction; size/material, related to musical instruments and timbre; classification/identification, related to objects, humans or instruments; and deviation from expectation, considered a major mechanism for creating meaning in music. There are also several relevant human properties. Human movement is related to music on a number of different levels, as evidenced by current research. Energy relates to the physical effort used to produce the sound. Other human aspects include intention, emotion, skill, and authenticity/sincerity. By analyzing music listening from an ecological perspective we can provide an alternative viewpoint that offers an explanation and motivation of musical meaning for many different musical aspects, ranging from instrument sounds and melody to motion and emotion.
On musical intentionality: Motor knowledge and the development of musical expertise
Andrea Schiavio
Department of Music, The University of Sheffield, UK
According to previous literature, skilled musicians develop a cross-modal expertise, using different modalities and categories to understand a musical object. My hypothesis is that this ability is based on the sensory-motor integration provided by the Mirror Mechanism, implicitly assuming the existence of a musical repertoire of acts that musicians develop throughout their lives. In this behavioral experiment, participants (musicians and non-musicians) are asked to familiarize themselves with four piano melodies under different conditions (playing the melodies on the piano, seeing someone playing them, and imagining them through a silent-tapping task). Afterwards, the subjects will be asked to recognize these melodies among a series of other, similar auditory stimuli. I predict that non-musicians will rely primarily on motor-based experience, recognizing more efficiently the pieces they have actually played (hence constituting a musical vocabulary of acts), while musicians will not show a great mismatch, despite the diverse modalities used to familiarize themselves with the musical excerpts. So this study has two aims: (i) to consolidate the hypothesis that skilled musicians have a cross-modal intentional relationship with a musical object, independently of the modalities used to intend it, and (ii) to show that this kind of intentionality is motor in its roots.
Transported to Narrative Worlds: The Effects of a Narrative Mode of Listening on Music Perception
Thijs Vroegh
Media and Culture Studies, University of Utrecht, the Netherlands
The tendency to ascribe agency to musical features and to interpret a series of musical events as a type of story represents, besides musical emotions, a vital part of our capacity for music understanding and our ability to find music meaningful. Indeed, a "narrative mode of thought" may be significant in music listening. However, although the domain of music psychology is involved with many conceptualizations of music experience, such as music absorption, imaginative involvement, deep listening, or strong experiences, scholars have so far refrained from thinking of listening to music as a narrative experience, or from drawing on the extensive literature concerning the reception of narrative in other domains (e.g., literature, film). It may therefore be useful to investigate these musical responses in precisely those terms; that is, as actually being a narrative experience equivalent to that of readers feeling transported into the fictional world created by a book. Music imbued with narrative meaning (e.g., personality-driven associations and autobiographical memories) that leads to the experience of transportation shares important aspects with the pleasurable engagement with an immersive story in a book or film. It features transformations in consciousness that demonstrate changes in attentional focus, arousal, altered experience of time, thought processes and mental imagery. This suggests that the engagement with stories and a narrative mode of thought triggered by music might share a number of deeper psychological mechanisms.
This study investigates systematic relationships between the perception of flavour and sound with regard to underlying inter-modal attributes and recognisability. The research was inspired by the question of whether it is possible to express a flavour acoustically, which might be of practical interest, e.g., for audio branding applications. One preliminary and two main experiments were conducted, in which participants tasted or imagined two flavours (orange and vanilla) and had to perform several association and matching tasks. For the second main experiment, short audio logos and sound moods were specially designed to yield different citrus-like sounds. A wide range of significant differences between the two flavour conditions was found, from which musical parameters could be extracted that are suitable for representing the flavours of orange and vanilla. Furthermore, a few significant differences between imagined and tasted stimuli showed up as well, hinting at an interference of visual associations. In the second experiment, subjects were reliably able to identify the principal flavour attributes from the sound stimuli alone and to distinguish different degrees of citrus-like sound.
Two studies examined the eye-movement effects of unexpected melodic events during music reading. Simple melodic variants of a familiar tune were performed in a temporally controlled setting. In a pilot study with five university students, unexpected alterations of the familiar melody were found to increase the number of incoming saccades to the altered bar and to the bar immediately before the alteration. The main experiment with 34 music students, which incorporated several improvements to the experimental design, again showed an increase in the number of incoming saccades to the bar before the alteration, but no effects in the altered bar itself. In addition, the bar following the alteration showed a decrease in relative fixation time and incoming saccades. These results are discussed with a view to future studies of eye movements in music reading, emphasizing the need for more systematic research on truly prima vista performance and, in general, on temporally controlled music reading.
Satoshi Kawase
Graduate School of Human Sciences, Osaka University, Japan
This study investigated the roles of gazing behaviour (specifically eye contact) during music performances by focusing on coordination among performers. Experiment 1 was conducted under four different visual-contact conditions: invisible, only the body visible, only the head visible, and face-to-face. Experiment 2 was conducted under three different visual-contact conditions: invisible, only the movable head visible, and only the fixed head visible; the fixed-head condition was implemented by using a chin rest. The results of Experiment 1 showed that the timing lag between performers did not vary significantly among the three conditions in which visual cues were available. In both experiments, performers looked toward each other just before changes of tempo, during which the two performers needed to coordinate timing. Under these three conditions, when performers looked toward each other at points of coordination, synchronization accuracy improved significantly. The results of Experiment 2 showed that the timing lag was significantly shorter under the fixed-head condition than under the invisible condition, and significantly longer under the fixed-head condition than under the movable-head condition. Regardless of whether or not the head was fixed, the timing lag decreased when performers made eye contact just before the beginning of the sound. On the basis of the two experiments, we conclude that mutual gaze is important for reducing timing lag during a performance, and that performers may utilize movements (of the body or head) as visual cues for coordination, since they can coordinate only loosely through eye contact alone (without movement).
The Embodied Effect of Facial Expressions on Pianists' Performance Interpretation
Cross-Cultural Emotional and Psychophysiological Responses to Music: Comparing Western Listeners to Congolese Pygmies
we characterize the decline in popularity of one harmonic schema: the so-called rhythm changes.
Optimising a short test of musical style grouping
Extremely short musical clips can cue correct genre schemas and also knowledge of particular artists and recordings, most probably through timbral cues. The extent to which individuals acquire and are able to use such timbre-based knowledge may vary with the breadth and degree of their engagement with the many different styles of music available to modern listeners. We aimed to create and optimise a short and implicit musical clip-sorting task which would be an ecologically valid test of the musical perception skills necessary for discriminating between musical styles in a general Western population. We were also interested in comparing the performance of self-recruiting online participants and laboratory-tested participants. 26 laboratory and 91 online participants grouped sets of 16 short musical clips into four equal-sized bins. They were told to group by similarity; 'genre' was not mentioned explicitly. Four representative stimulus songs were chosen from each of Jazz, Rock, Pop and Hiphop. Two vocal-free regions were extracted from each song, and 400 ms and 800 ms clips were created from each. Each participant sorted two sets of stimuli, the second set always having a different clip duration and region from the first. Population parameter estimates from test-wise scores did not differ significantly between online and offline participants (variance: p=.1; mean: p=.57). Low item-wise scores (M=1.14, SD=.95, out of 3) suggest high task difficulty, with longer clips being significantly easier to pair (p<.001). Complete-linkage agglomerative hierarchical cluster analyses of pairwise clip distances from the sampled solutions showed a suitable four-cluster solution by genre for 800 ms clips, but 400 ms Pop clips showed a high confusion rate with the other genres.
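The clustering step described above can be sketched with SciPy; the distance matrix here is toy data standing in for the participants' pairwise clip distances, with clips from the same (hypothetical) genre made artificially close.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy pairwise clip distances: e.g., how often two clips were NOT binned
# together across participants' sorting solutions (values are made up).
rng = np.random.default_rng(0)
n_clips = 16
dist = rng.uniform(0.6, 1.0, size=(n_clips, n_clips))
# Make each block of 4 clips (one toy genre) close together.
for g in range(4):
    blk = slice(4 * g, 4 * g + 4)
    dist[blk, blk] = rng.uniform(0.0, 0.2, size=(4, 4))
dist = np.triu(dist, 1) + np.triu(dist, 1).T  # symmetric, zero diagonal

# Complete-linkage agglomerative clustering, cut into 4 clusters.
Z = linkage(squareform(dist), method='complete')
labels = fcluster(Z, t=4, criterion='maxclust')
```

With this block structure the four-cluster cut recovers the genre groups; on real sorting data the interesting finding is precisely where it fails, as with the 400 ms Pop clips.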
Piloting with derived shorter sets favours a 3-item by 3-genre 400 ms set with Pop excluded, which is easier to solve than the original 4x4 problem but also harder than an optimised small 800 ms set (which was also piloted and found to be too easy). An ecologically valid and compelling test of musical style grouping is presented, deliverable over the internet via standard web browsers. Planned future research will ascertain which cognitive abilities are being tested and how the measured ability relates to self-reported musical sophistication as measured by the Goldsmiths Musical Sophistication Index, which the test was designed to accompany.
Rhythm is the patterned onset of sounds with regard to timing, accent, and grouping. Meter is the sense of strong and weak beats that can be abstracted from a rhythm. According to dynamic attending theory (DAT; Jones & Boltz, 1989), expectancies for the timing of onsets are easier to form for metrical rhythms than for non-metrical rhythms. Differences between implicit learning (IL) of metrical and non-metrical rhythms have not been explored using a serial recall task, where IL is characterized by decreases in temporal error over blocks containing a repeating rhythm and increases in temporal error when novel rhythms are introduced. Two experiments investigated IL of metrical and non-metrical rhythms in the presence and absence of an ordinal pattern, using a serial recall paradigm. Based on DAT, it was hypothesized that (i) metrical rhythms are learned more readily than non-metrical rhythms, and (ii) introducing novel rhythms with a weaker metrical framework in test blocks results in larger increases in timing error than novel rhythms with the same metrical strength. In the serial recall task, an ordinal pattern (auditory spatial locations) was presented with rhythmic timing. Participants were instructed to reproduce the pattern after each presentation. They were not informed of the rhythm. Experiment 1 (N=64) examined IL of rhythms in the presence of a correlated ordinal pattern. Experiment 2 (N=72) examined IL of rhythms when the ordinal sequence was randomized on each trial. In the metrical conditions, participants were trained on a strongly metrical (SM) rhythm and received novel SM and weakly metrical (WM) rhythms in test blocks. In Experiment 1, metrical rhythms elicited significantly larger decreases in timing error than non-metrical rhythms in the presence of an ordinal pattern. In Experiment 2, decreases in timing error did not differ significantly between metrical and non-metrical rhythms in the absence of an ordinal pattern. In both experiments, the introduction of a novel WM rhythm resulted in significantly larger increases in timing error than the introduction of a novel SM rhythm. Metrical and non-metrical rhythms were implicitly learned. Metrical patterns were learned more readily than non-metrical rhythms only in the presence of an ordinal pattern. This suggests that meter aids rhythm learning differently depending on the predictability of the ordinal sequence. In line with DAT, meter was abstracted in the metrical conditions in both the presence and absence of an ordinal pattern.
A Unified Model for the Neural Bases of Auditory Time Perception
A rating experiment was carried out to understand the relationship between blending and timbre saliency, the attention-capturing quality of timbre. Stimuli were generated from 15 Western orchestral instrument sounds from the Vienna Symphonic Library, equalized in pitch, loudness and effective duration. Listeners were presented with a composite of two simultaneous, unison instrumental sounds and were asked to rate the degree of blending on a continuous scale between "very blended" and "not blended". Data from 60 participants showed no effect of gender, musicianship or age on blending judgments. Mild negative correlations were observed between the average degree of blending and the sum (r = −0.34, df = 103, p < 0.01), minimum (r = −0.26, df = 103, p < 0.01) and maximum (r = −0.30, df = 103, p < 0.01) of the saliency values of the two individual timbres. These results suggest that a highly salient sound will not blend well. In addition, it is the individual sounds' saliency levels and the saliency sum of the sound pair that determine the overall degree of perceived blending, rather than the saliency difference. The best acoustic correlate for describing the average blending is the minimum attack time of the two individual timbres, explaining 57% of the variance. This agrees with Tardieu & McAdams' (2011) observation that a sound with a longer attack tends to blend better. Previous findings by Sandell (1995) and Tardieu & McAdams (2011) that sounds with lower spectral centroids are likely to blend better were also confirmed.
A study of confusions in identifying concurrently sounding wind instruments
Despina Klonari, Konstantinos Pastiadis, Georgios Papadelis, Georgios Papanikolaou
Aristotle University of Thessaloniki, Greece
This paper investigates the confused identification of physical wind instrument tones playing in pairs at various interval relationships. Our work moves the study of timbre from solo musical tones towards a more realistic framework of complex timbres produced by combinations of instruments, considering musically meaningful factors of importance such as the pitch intervals and the timbral constituents of the examined pairs. Additionally, an important cognitive factor, namely the subjects' response time in an identification task, is examined to validate hypotheses about possible relations between subjects' confidence and efficiency. 42 musically experienced listeners were asked to name the individual instruments within each pair (58 pairs in total), drawn from all possible combinations of Flute, Oboe, Bb Clarinet and Bb Trumpet, playing at each and any of four musical pitches (A4, C#5, A5, C#6, forming the pitch intervals of unison, major third, octave and major tenth), in a randomized design with five repetitions of each pair's presentation. The procedure was conducted and administered within an elaborate computerized desktop system which, allowing for the recording of each step of the subjects' responses, facilitated the registration of the respective response times. Percentages of correct, semi-correct and false identifications populate the instruments' confusion matrices. Various statistically significant tendencies appear with respect to the position of instruments within each pair and the pitch interval. Unison identities show the smallest erroneous identification scores. Correlations of confusion scores with mean response times highlight possible manifestations of subjects' response confidence levels. This work is a systematic attempt to explore several issues in the identification of concurrently sounding musical instruments, and it highlights the diversity and complexity of the interplay between their acoustics and the respective perceptual transformations. Even within a musically more limited and coherent subset, namely the wind instruments, the observed systematic variations of confusion between instruments require further extensive investigation of perceptual and cognitive phenomena, such as spectro-temporal masking/prominence effects, listeners' bias, etc. Interpretation of the results might prove useful especially in the fields of orchestration and music synthesis, wherein tonal and timbral combinations of musical instruments are extensively considered.
Effects of background sound on the volume and fundamental frequency of a singing voice
increased, regardless of the type of sound. Meanwhile, the F0 precision of the singing voice was not affected by the intensity of the background sound. However, F0 precision deteriorated more under the multi-talker noise condition than under the a cappella and other conditions. The variation in singing volume in accordance with the intensity of the background sound was similar to that for speech production in noise (i.e., the Lombard effect). That is, the subjects subconsciously tried to keep the auditory feedback constant against the background sound even in singing tasks, and consequently achieved high F0 precision over all tested intensities of background sound. It is also indicated that the intensity of background sound does not directly affect F0 precision, while the existence of sufficient auditory feedback or an external reference is important for maintaining F0 precision.
Many Ways of Hearing: Clustering Continuous Responses to Music
Finn Upham
Music and Audio Research Lab, Department of Music and Performing Arts Professions, Steinhardt School of Culture, Education, and Human Development, New York University, USA
Is there more than one way to experience or perceive a piece of music? Anecdotal evidence suggests that many are possible, and cognitive theories hypothesise variety, yet analyses of music rarely attempt to describe multiple cognitive or affective sequences of experience. Continuous responses collected from different listeners to the same music often show great variability in their temporal sequence, whether ratings of emotional arousal or measures of skin conductance. Either these differences are the result of random noise interfering with a common experience (as assumed implicitly in any analysis of the average response time series), or they reflect distinct interpretations of the stimulating music and correspondingly distinct experiences. The aim of this study is to evaluate whether continuous responses show evidence of distinct but repeatable temporal patterns of perception or experience of the same musical stimuli. Comparing the cohesiveness of and distinction between clusters within continuous behavioural response collections from multiple experiments, and comparing these to several artificially constructed collections of unrelated responses, this poster presents criteria for defining differences between responses and robust response patterns.
Correlations Between Acoustic Features, Personality Traits and Perception of Soundscapes
PerMagnus Lindborg
Nanyang Technological University, Singapore; KTH Royal Institute of Technology, Stockholm
The
present
study
reports
results
from
an
experiment
that
is
part
of
Soundscape
Emotion
Responses
(SSER)
study.
We
investigated
the
interaction
between
psychological
and
acoustic
features
in
the
perception
of
soundscapes.
Participant
features
were
estimated
with
the
Ten-
Item
Personality
Index
(Gosling
et
al.
2003)
and
the
Profile
of
Mood
State
for
Adults
(Terry
et
al.
1999,
2005),
and
acoustic
features
with
computational
tools
such
as
MIRtoolbox
(Lartillot
2011).
We
made
ambisonic
recordings
of
Singaporean
everyday
sonic
environments
and
selected
12
excerpts
of
90
seconds
duration
each,
in
4
categories:
city
parks,
rural
parks,
eateries
and
shops/markets.
43
participants
rated
soundscapes
according
to
the
Swedish
Soundscape
Quality
Protocol
(Axelsson
et
al.
2011), which
uses
8
dimensions
related
to
quality
perception.
Participants
also
grouped
blobs
representing
the
stimuli
according
to
a
spatial
metaphor
and
associated
a
colour
to
each.
A
principal
component
analysis
determined
a
set
of
acoustic
features
that
span
a
2-dimensional
plane
related
to
latent
higher-level
features
that
are
relevant
to
soundscape
perception.
We
tentatively
named
these
dimensions
Mass
and
Variability
Focus;
the
first
depends
on
loudness
and
spectral
shape,
the
second
on
amplitude
variability
across
temporal
domains.
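The dimensionality reduction step described above can be illustrated with a minimal sketch; the toy feature vectors, function name, and power-iteration approach are assumptions for illustration, not the analysis pipeline actually used in the study:

```python
import math
import random

def first_principal_component(data, iters=200, seed=0):
    """Estimate the first principal component of a set of feature
    vectors (rows) by power iteration on the covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    # Sample covariance matrix (d x d).
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1)
            for j in range(d)] for i in range(d)]
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in range(d)]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Toy "acoustic features": variance concentrated along (1, 1, 0),
# so the first component recovers that direction (up to sign).
pc1 = first_principal_component([[1.0, 1.0, 0.0], [2.0, 2.0, 0.0],
                                 [3.0, 3.0, 0.0], [4.0, 4.0, 0.0]])
```

Repeating the extraction for a second, orthogonal component would yield a 2-dimensional plane of the kind the abstract names Mass and Variability Focus.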
A
series
of
repeated-measures ANOVAs showed that there are
patterns
of
significant
correlations
between
perception
ratings
and
the
derived
acoustic
features
in
interaction
with
personality
measures.
Several
of
the
interactions
were
linked
to
the
personality
trait
Openness,
and
to
aural-visual
orientation.
Implications
for
future
research
are
discussed.
Influence
of
the
listening
context
on
the
perceived
realism
of
binaural
recordings
trained
and
naive
subjects.
Results
show
that
there exist
differences
between
the
two
groups
of
participants
and
that
the
semantic
relevance
of
a
sound
plays
a
central
role.
The
Effect
of
Singing
on
Lexical
Memory
memory
for
these
words.
The
first
group
was
asked
to
sing
each
word
to
a
2,
3,
or
4-note
melody
(corresponding
with
the
number
of
syllables
in
the
word),
while
the
second
group
simply
spoke
the
words.
This
was
immediately
followed
by
a
recognition
task,
in
which
the
subjects
were
asked
how
confident
they
were
that
they
had
previously
been
presented
with
the
word.
Our
results
are
currently
being
analyzed,
but
we
have
hypothesized
that
subjects
in
the
singing
condition
will
have
a
markedly
improved
performance
in
the
recognition
task
compared
to
those
in
the
spoken
condition.
The
Impact
of
Trace
Decay,
Interference,
and
Confusion
in
a
Tonal
Memory
Span
Task
Sven
Blankenberger,
Katrin
Bittrich
Department
of
Psychology,
Martin-Luther-University
Halle-Wittenberg,
Germany
The
aim
of
the
present
study
was
to
propose
and
test
a
mathematical
model
concerning
the
impact
of
different
mechanisms
of
forgetting
in
short
term
memory
for
tonal
and
verbal
stimuli.
N=10
participants
completed
a
modified
memory
span
task.
In
each
trial
they
were
presented
16
letters
or
tones
which
they
had
to
recall
(sing
or
speak)
in
correct
serial
order.
In
half
of
the
trials
the
recall
started
immediately
after
the
last
item.
In
the
remaining
trials
the
recall
was
delayed.
Quality
of
response
was
registered.
Letters
were
considered
as
correct
if
recalled
at
the
correct
serial
position.
For
the
tonal
reproduction
a
tolerance
criterion
was
applied:
Tones
were
considered
as
correct
response
if
recalled
at
the
correct
position
and
if
the
sung
frequency
was
within
the
range
of
plus/minus
a
quarter
tone
of
the
given
frequency.
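The tolerance criterion can be expressed as a deviation-in-cents check, as in this minimal sketch (the function name and example frequencies are illustrative, not taken from the study):

```python
import math

def within_quarter_tone(sung_hz: float, target_hz: float) -> bool:
    """Return True if the sung frequency lies within plus/minus a
    quarter tone (50 cents) of the target frequency."""
    cents = 1200.0 * math.log2(sung_hz / target_hz)
    return abs(cents) <= 50.0

# 450 Hz is about 39 cents above A4 (440 Hz): inside the tolerance.
print(within_quarter_tone(450.0, 440.0))
# 466.16 Hz (A#4) is a full semitone away: outside the tolerance.
print(within_quarter_tone(466.16, 440.0))
```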
As
expected
participants
were
better
in
the
verbal
compared
to
the
tonal
memory
span
task.
Differences
between
both
conditions
concerning
proportion
of
correct
recall
as
a
function
of
list
length
and
serial
position
were
observed.
The
proposed
model
fitted
the
data
reasonably
well.
The
parameter
estimation
revealed
a
stronger
impact
of
forgetting
mechanisms
in
the
tonal
compared
to
the
verbal
condition.
Furthermore,
item
confusion
only
appeared
in
the
verbal
condition.
These
results
suggest
that
different
mechanisms
of
forgetting
apply
to
tonal
and
verbal
stimuli
in
short
term
memory.
Contracting
Earworms:
The
Roles
of
Personality
and
Musicality
the
deliberate
induction
of
earworms
under
laboratory
conditions
does
not,
and
b)
the
mental
process
of
recalling
song
lyrics
can
be
as
efficient
in
triggering
earworms
as
listening
to
music,
suggesting
that
earworm
induction
may
be
linked
with
basic
memory
mechanisms.
Involuntary
musical
imagery
and
musical
structure
do
we
get
earworms
only
for
certain
tunes?
A
great
deal
of
research
has
been
devoted
to
rhythm
perception
and
production
in
ordinary
musicians.
Much
less
is
known
about
connections
between
rhythm
perception
and
production
in
the
general
population.
Recent
data
(Phillips-Silver
et
al.,
2011)
suggest
that
some
individuals
(so-called
rhythm
deaf)
may
exhibit
impaired
rhythm
perception
and
inaccurate
sensorimotor
synchronization
(SMS)
while
showing
spared
pitch
processing.
In
this
study
we
examined
in greater depth
rhythm
perception
and
SMS
in
non-musicians.
In
a
first
screening
experiment,
96
non-musicians
synchronized
with
musical
and
non-musical
stimuli
in
a
hand-tapping
task.
Synchronization
accuracy
and
precision
were
analyzed
with
Circular
Statistics.
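Circular measures of this kind are commonly computed by mapping each tap asynchrony onto a phase angle and averaging the resulting unit vectors; the sketch below illustrates the idea (the function and parameters are hypothetical, not the authors' analysis code):

```python
import cmath
import math

def circular_stats(asynchronies_ms, period_ms):
    """Map tap asynchronies onto phases on the unit circle and return
    (mean_angle_deg, R): the mean angle reflects synchronization
    accuracy, the resultant vector length R (0..1) reflects precision."""
    vectors = [cmath.exp(1j * 2 * math.pi * (a / period_ms))
               for a in asynchronies_ms]
    mean_vector = sum(vectors) / len(vectors)
    return math.degrees(cmath.phase(mean_vector)), abs(mean_vector)

# Taps clustering slightly ahead of the beat: R close to 1 (precise),
# small negative mean angle (anticipatory).
angle, r = circular_stats([-20, -15, -25, -10, -18], period_ms=500)
```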
The
results
allowed us to select
16
participants
who revealed
difficulties
in
the
SMS
task
(Poor
Synchronizers).
In
a
second
experiment,
10
of
the
Poor
Synchronizers
and
23
Controls
(i.e.,
participants
chosen
randomly
among
the
other
participants
without
impaired
synchronization
tested
in
the
screening
experiment)
underwent
various
SMS
tasks
(e.g.,
with
different
pacing
stimuli
and
using
different
tempos),
and
rhythm
perception
tasks
(i.e.,
anisochrony
detection
and
the
rhythm
task
of
the
Montreal
Battery
of
Evaluation
of
Amusia,
MBEA,
Peretz
et
al.,
2003).
The
analyses
confirmed
that
8
participants
were
poor
synchronizers.
In
particular,
some
of
them
exhibited
normal
rhythm
perception.
This
finding
points
to
a
possible
mismatch
between
perception
and
action
in
the
rhythm
domain,
similar
to
what was previously
observed
in
the
pitch
domain
(Dalla
Bella
et
al.,
2007,
2009;
Loui
et
al.,
2008).
FRI
Young
children's
musical
enculturation:
Developing
a
test
of
young
children's
metre
processing
skills
The
genesis
of
absolute pitch (predisposition versus acquisition through learning) is still the subject
of
numerous
scientific
investigations.
The
aim
of
the
present
study
was
to
examine
the
impact
of
simple
pair-association-mechanisms
for
the
acquisition
of
absolute
pitch.
At
intervals
of
two
weeks
all
participants
(N=20
non-musicians)
completed
tone
identification
tests
(pre-,
post-,
and
follow-up
test).
Pitches
ranged
from
A3
to
G#4.
The
proportion
of
correct
responses
as
well
as
the
differences
in
semi-tones
were
observed.
Participants
of
the
experimental
group
(n=10)
underwent
a
ten-day
adaptive
training
between
the
first
and
the
second
test
in
which
they
learned
to
associate
pitches
with
the
corresponding
name.
The
training
started
with
two
pitches
only.
After
reaching
a
predefined
success
criterion
a
further
tone
was
added.
This
procedure
entails
that
within
the
ten-day
training
period
each
participant
reached
an
individual
number
of
pitches
which
they
could
identify.
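The adaptive schedule can be sketched as follows; the success criterion, block size, and simulated learner accuracy are illustrative assumptions rather than parameters reported in the study:

```python
import random

PITCH_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E",
               "F", "F#", "G", "G#"]  # one octave, e.g. A3..G#4

def run_adaptive_training(days=10, trials_per_day=100,
                          criterion=0.8, p_correct=0.9, seed=1):
    """Start with two pitches; whenever a day's block meets the
    success criterion, add one more pitch to the active set.
    Returns the number of pitches the learner ends up practising."""
    rng = random.Random(seed)
    active = 2
    for _ in range(days):
        correct = sum(rng.random() < p_correct
                      for _ in range(trials_per_day))
        if (correct / trials_per_day >= criterion
                and active < len(PITCH_NAMES)):
            active += 1
    return active

final_set_size = run_adaptive_training()
```

Because each participant passes the criterion at their own pace, the final set size varies across individuals, which is exactly the property the abstract describes.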
Participants
of
the
experimental
group
learned
to
successfully
identify
seven
to
nine
pitches
within
ten
days
of
training.
Relative
frequency
of
correct
responses
as
well
as
the
difference
in
semi-tones
in
the
tone
identification
task
revealed
a
positive
effect
of
training
in
the
experimental
group
compared
to
the
control
group.
The
results
of
the
training
study
suggest
that
simple
pair-association
mechanisms
are
one
aspect
in
the
development
of
absolute
pitch.
Within
only
two
weeks
of
training
a
group
of
non-musicians
was
able
to
successfully
identify
seven
to
nine
pitches
within
one
octave.
Possible
causes
for
the
failure
of
previous
learning
studies
are
discussed.
A
unique
pattern
of
ratio
effect
in
musicians
who
are
absolute
pitch
possessors
Lilach
Akiva-Kabiri1,
Tali
Leibovich2,
Gal
Azaria1,
Avishai
Henik1
1
Department
of
Psychology,
and
the
Zlotowski
Center
for
Neuroscience
2
Department
of
Cognitive
Sciences,
Ben-Gurion
University
of
the
Negev,
Beer-Sheva,
Israel
3
Ben-Gurion
University
of
the
Negev,
Beer-Sheva,
Israel
According
to
the
ratio
effect,
when
the
difference
between
two
magnitudes
is
large,
the
comparison
between
them
is
faster.
The
distance
(or
the
ratio)
effect
holds
for
a
large
variety
of
cardinal
scales
(numbers,
quantities,
physical
sizes,
etc.).
In ordinal scales, such as the alphabet, this effect is more elusive. The effect complies with Weber's law and has been found for many modalities, such as numbers, brightness, and musical tones.
Absolute
pitch
(AP)
is
a
rare
ability
to
identify
musical
pitches
without
an
external
reference
tone.
It
has
been
suggested
that
AP
possessors
are
able
to
label
pitch
automatically.
In
contrast,
most
people
use
the
relations
between
pitches
(relative
pitch)
in
order
to
process
musical
information.
In
the
current
study
two
groups
of
musicians
(those
with
AP
and
controls
without
AP)
were
asked
to
compare
pairs
of
musical
tones
that
varied
in
their
ratio.
Results
yielded
a
significant
ratio
effect
for the non-AP (nAP) group,
as
expected
according
to
the
literature;
namely,
RTs
were
longer
for
large
ratios
than
for
small
ratios.
Interestingly,
AP
possessors
showed
no
ratio
effect;
namely,
RTs
for
small
and
large
ratios
were
similar.
To
the
best
of
our
knowledge
this
is
the
first
study
that
demonstrates
the
lack
of
the
effect
in
a
particular
group
of
people.
Results are interpreted as suggesting that pitches
can
be
represented
on
ordinal
or
cardinal
scales,
contingent
on
AP
ability.
The
effect
of
intensity
on
relative
pitch
Previous
works
on
computational
approaches
for
the
description
of
pitch
phenomena
have
employed
various
methodologies,
deterministic
and
probabilistic,
which
are
based
on
psychophysiological
auditory
stimuli
modeling,
representations
and
transformations
(e.g.
spatial,
temporal,
spatiotemporal),
both
at
peripheral
and
more
central
stages
of
the
auditory
chain.
Then,
a
confirmatory
phase,
utilizing
data
from
behavioral
(or
even
imaging)
studies,
usually follows
to
assess
the
validity
of
the
computational
methods.
Human
auditory
perception
relies
on
interconnected
neuronal
networks,
which
have
been
shown
to
demonstrate
multi-directional
activity
and
dynamical,
adaptive,
and
self-organizing
properties,
together
with
strong
tonotopical
organization
along
the
auditory
pathway
up
to
the
primary
auditory
cortex.
This
paper
focuses
on
the
exploration
of
properties
and
effectiveness
of
a
certain type of computational approach, namely self-organizing networks,
for
the
description
of
frequency
and
pitch
related
phenomena.
A
self-organizing
connectionist
model
is
presented
and
tested.
We
explore
the
ability
of
Kohonen
type
neural
networks
(Self-
Organizing
Feature
Maps,
SOFMs
or
SOMs)
to
organize
based
on
frequency
information
conveyed
by
sound
signals.
Various
types
of
artificially
generated
sound
signals
(ordered
along
a
frequency/pitch
axis)
are
employed
in
our
simulations,
including
single
tones,
harmonic
series,
missing
fundamental
series,
band
limited
noises,
and
harmonics
with
formants.
Simple
Fourier
representations
and
their
physiologically
plausible
frequency-to-pitch
mappings
(e.g.
tonotopy
in
the
cochlea)
are
used
as
network
inputs.
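A minimal pure-Python version of such a Kohonen map, trained on toy single-tone "spectra" (one active frequency bin per input), might look like this; the network size and learning parameters are illustrative, not those used in the study:

```python
import math
import random

def train_som(inputs, n_units=8, epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D Kohonen map (SOM): for each input, find the
    best-matching unit (BMU) and pull it and its neighbours toward
    the input, with learning rate and neighbourhood shrinking over time."""
    rng = random.Random(seed)
    dim = len(inputs[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = max(0.5, sigma0 * (1 - epoch / epochs))
        for x in inputs:
            bmu = bmu_of(weights, x)
            for u in range(n_units):
                h = math.exp(-((u - bmu) ** 2) / (2 * sigma ** 2))
                weights[u] = [w + lr * h * (xi - w)
                              for w, xi in zip(weights[u], x)]
    return weights

def bmu_of(weights, x):
    """Index of the unit whose weight vector is closest to x."""
    return min(range(len(weights)),
               key=lambda u: sum((w - xi) ** 2
                                 for w, xi in zip(weights[u], x)))

# Toy single tones, ordered along a frequency axis: one active bin each.
tones = [[1.0 if b == k else 0.0 for b in range(8)] for k in range(8)]
som = train_som(tones)
mapping = [bmu_of(som, t) for t in tones]  # BMU index for each tone
```

After training, inspecting `mapping` shows how the map has spread the frequency-ordered inputs across its units, i.e. the tonotopic-like organization the abstract investigates.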
The
network's
efficiency
is
investigated,
according
to
various
structural
parameters
of
the
network
and
the
organizing
procedure,
together
with
aspects
of
the
obtained
tonotopical
organization.
Our
results,
using
different
types
of
input
spectra
and
various
SOM
implementations,
demonstrate
a
clear
ability
for
self-organizing
according
to
(fundamental)
frequency
or
pitch.
However,
when
certain
test
configurations
were
used,
the
networks
showed a marked inability
to
organize,
revealing
limitations
in
the
resolving
ability
of
the
network
related
to
the
required
number
(density)
of
neurons
compared
to
the
dataset
size.
Further difficulties
were
also
observed,
relating
to
the
type
of
signals
for
which
an
organized
network
can
identify
pitch.
The
results
of
this
work
indicate
that,
under
some
provisions,
such
a
model
could
be
effective
in
frequency
and
pitch
indication,
within
certain
limitations
upon
training
parameters
and
types
of
signals
employed.
Further
work
will
compare
the
efficiency
of
the
proposed
representation
with
classical
computational
approaches
upon
various
aspects
of
pitch
perception,
together
with
examination
of
feasibility
and
possible
advantages
of
employing
SOMs
in
the
description
of
pitch
perception
in
various
types
of
auditory
dysfunction.
Detecting
degrees
of
density
in
aggregates:
when
can
we
hear
a
cluster?
are
used,
12%
with
five
elements
and
about
5%
with
six
or
more.
Subjects
show
a
clear
preference
for
certain
clusters
and
some
configurations
seem
to
increase
the
difficulty
of identifying
the
components
correctly
or
lead
to
the
perception
of
a
more
complex
aggregate
than
what
they
actually
heard.
These
elements
provide
us
with
interesting
insights
on
how
trained
subjects
perceive
complex
aggregates
of
pitches.
Studying
the
act
of
musical
composition
in
real-time
Dave
Collins
University
Centre,
Doncaster
College,
UK
The
primary
aim
of
this ongoing research has
been
to
track
cognitions
of
composers
in
real-time
in
naturalistic
settings.
The
emphasis is on gaining
an
understanding
of
the
process
of
the
structuring
and
re-structuring
of
musical
events
in
an
unfolding
composition
with
a
conjoined
appraisal
and
development
of
appropriate
methodological
techniques.
Participants
have
been
purposively
selected
to
have
significant
experience
in
using
computer-based
compositional
tools,
and
asked
to
compose
freely
without
external
constraints
(length
of
composition,
number
of
parts,
duration
of
compositional
period).
Data
collection
integrates
computer
tools
(MIDI save-as files)
with
verbal
protocol,
interview
Stefano
Gervasoni's
Cognition
Through
the
Compositional
Process
of
Gramigna.
Methodology,
Results
Samples,
Issues
Negotiation
in
a
jazz
ensemble:
Sound
and
speech
in
the
making
of
a
commercial
record
their
jointly
produced
music
and
the
actual
musical
product.
Based
on
analyses
of
transcribed
conversations
between
musicians,
as
well
as
detailed
acoustic
analysis
of
the
protocol tracks
obtained
from
their
performances,
we
show
that
musical
projects
are
shaped
through
both
musical
interaction
and
conversational
exchange.
Analysing
the
design
process
of
an
interactive
music
installation
in
the
urban
space:
constraints
as
resources
and
resources
as
constraints
Rolf
Inge
Godøy*,
Alexander
Refsum
Jensenius*,
Arve
Voldsund*,
Kyrre
Glette#,
Mats
Høvin#,
Kristian
Nymoen#,
Ståle
Skogstad#,
Jim
Tørresen#
*Department
of
Musicology,
University
of
Oslo,
Norway,
#Department
of
Informatics,
University
of
Oslo,
Norway
Our
research
on
music-related
actions
is
based
on
the
conviction
that
sensations
of
both
sound
and
body
motion
are
inseparable
in
the
production
and
perception
of
music.
The
expression
"music-related
actions"
is
here
used
to
refer
to
chunks
of
combined
sound
and
body
motion,
typically
in
the
duration
range
of
approximately
0.5
to
5
seconds.
We
believe
that
chunk-level
music-related
actions
are
highly
significant
for
the
experience
of
music,
and
we
are
presently
working
on
establishing
a
database
of
music-related
actions
in
order
to
facilitate
access
to,
and
research
on,
our
fast
growing
collection
of
motion
capture
data
and
related
material.
In
this
work,
we
are
confronted
with
a
number
of
perceptual,
conceptual
and
technological
issues
regarding
classification
of
music-related
actions,
issues
that
will
be
presented
and
discussed
in
this
paper.
Movement
expertise
influences
gender
recognition
in
point-light
displays
of
musical
gestures
were
more
often
judged
to
be
male.
We
conclude
that
judgement
accuracy
depended
both on the conductors' level of expertise and on the observers' concepts,
suggesting
that
perceivable
differences
between
men
and
women
diminished
for
highly
trained
movements
of
experienced
individuals.
Does
Higher
Music
Tend
to
Move
Faster?
Evidence
For
A
Pitch-Speed
Relationship
Musical
Agreement
via
Social
Dynamics
Can
Self-Organize
a
Closed
Community
of
Music:
A
Computational
Model
Paper
Session
35:
Timber
I
Hall,
17:00-18:30
Group
singing
Why
do
people
sing
in
a
choir?
Social,
emotional
and
well-being
effects
of
choir
singing
Jukka
Louhivuori
University
of
Jyväskylä,
Finland
Singing
appears
to
be
a
common
and
widely
practiced
musical
activity
across
cultures.
According
to
previous
studies
people
sing
in
a
choir
mainly for social
and
emotional
reasons.
In
addition,
several
studies
have
suggested
connections
between
choir
singing,
wellbeing
and
health.
Most
of
the
studies
have
been
done
in
Western
cultural
context.
Thus,
it
is
not
known
for
sure
if
cultural
background
has
an
effect
on
choristers'
motivation.
The
aim
of
the
study
is
to
gain a better understanding of how cultural background affects choir singers' reasons
to
sing
in
a
choir.
A
survey
was
conducted among choristers with different cultural backgrounds
(European,
African;
N=684).
In
addition
to
the
questionnaire,
information
was
acquired
by
interviewing
individual
choristers
(N=48).
The
choirs
represented
the most common choir types, such as children's,
youth,
mixed,
male,
female
and
senior
choirs.
The
data
consists
of
typical
age
groups
for
choir
singers
(16-91
years;
average
age
=
47
years).
The
results
show that
the
main
reasons
for
choristers
to
sing
in
a
choir
are
related
to
emotional
experiences,
relaxation,
social networks, group support,
and
well-being
effects.
The
findings
are
in
line
with
previous
studies,
but
for
the
choristers
with
European
cultural
background
social
aspects
were
more
important
compared
to
African
singers
who
emphasized
musical
and
emotional
aspects
in
choir
singing.
The
findings
suggest
that
cultural
background
has
a
clear
effect
on
which
aspects
choristers
consider most important
in
choir
singing.
Tight
and
close
social
networks
typical of
many
African
societies
may
explain
the
difference
between
European
and
African
choir
singers.
Interviews
support
this
interpretation.
Typically,
European
choristers
spoke
about
the
benefits
of
choir
singing
in
building
social
networks,
while
African
choir
singers
pointed
out
that
they
have
enough
social
connections;
choirs
are
not
needed
for
making
friends,
but
to
support
musical
development
and
emotional
needs.
Both
groups
emphasized
the
relaxation
and
wellbeing
aspects
of
choir
singing.
An
empirical
field
study
on
sing-along
behaviour
in
the
North
of
England
indicate
that
non-musical
factors
can
account
for
40%
of
the
variability
in
sing-along
behaviour,
whilst
musical
factors
are
able
to
explain
about
another
25%
of
the
variance.
The
prediction
model
demonstrates
that
it
is
features
of
vocal
performance
rather
than
structural
features
of
the
tunes
that
make
audiences
sing
along.
Results
are
interpreted
in
terms
of
theoretical
notions
of
tribal
or
indigenous
societies.
This
study
makes
a
significant
contribution
to
the
largely
unexplored
territory
of
sing-along
behaviour.
Effects
of
Group
Singing
on
Psychological
States
and
Cortisol
Group
singing
has
several
psychological,
physical,
and
social
components
that
can
interact
and
contribute
to
feelings
of
well-being.
Due
to
the
relative
infancy
of
this
field
of
research,
understanding of
what
these
beneficial
and
positive
effects
of
group
singing
are
and
how
they
interact
is
still
limited.
In
order
to
investigate
how
group
singing
may
benefit
our
well-being
and
health,
previous
research
has
looked
at
effects
of
singing
on
psychological
states
and
cortisol,
a
hormone
related
to
well-being.
One
major
limitation
of
previous
research to date
is
a
lack
of
experimental
designs,
participant
randomization
and
an
active
control.
However,
without
such
research
we
are,
in
fact,
unable
to
determine
the
effects
of
group
singing
on
our
well-being
and
health.
This
study
aims
to
overcome
the
limitations
of
previous
research
and
experimentally
assess
effects
of
group
singing
on
cortisol
and
psychological
variables.
In
this
way,
we
hope
to
better
understand
short-term
effects
of
group
singing
on
the
psychological
states
and
cortisol
of
a
group
of
people
who had never sung
together
before.
At
the
same
time,
we
hope
it
will
allow
us
to
start
answering
the
question
of
whether
the
effects
reported
in
the
literature
are
indeed
due
to
group
singing
or
if
they
can
be
equally
brought
into
place
by
other,
non-musical
group
activities.
Twenty-one
participants
(11
females)
were
recruited
from
the
general
population
and
no
previous
experience
with
singing
was
required.
Eighteen
participants
(9
females)
completed
two
conditions:
singing
and
a
non-musical
group
activity.
Given
the
repeated
measures
design,
participants
were
randomly
allocated
to
one
of
two
groups.
Group
A
sang
on
day
1
and
did
the
non-musical
activity
on
day
2,
and
group
B
did
the
non-musical
activity
on
day
1
and
the
singing
on
day
2.
Participants
donated
saliva
samples
and
completed
the
positive
and
negative
affect
schedule
before
and
after
each
activity.
A
flow
state
scale
and
a
connectedness
scale
were
also
completed
after
each
activity,
and
a
general
well-being
questionnaire
was
completed
at
baseline
on
day
1.
Data
analysis
points
to
similar
effects
of
both
group
activities
on
levels
of
flow,
connectedness
and
positive
affect, which indicates
that
both
activities
had
similar
levels
of
engagement,
challenge
and
social
interaction.
Paper
Session
36:
Timber
II
Hall,
17:00-18:30
Beat
&
time
perception
Henkjan
Honing,*
Hugo
Merchant,#
Gábor Háden,*
Luis
Prado,#
and
Ramón
Bartolo#
*Cognitive
Science
Center
Amsterdam,
Institute
for
Logic,
Language
and
Computation,
University
of
Amsterdam,
The
Netherlands
#Department
of
Cognitive
Neuroscience,
Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Mexico
We
measured
auditory
event-related
potentials
(ERPs)
in
a
rhesus
monkey
(Macaca
mulatta),
probing
a
well-documented
component
in
humans,
the
mismatch
negativity
(MMN).
We
show
for
the
first
time
in
a
rhesus
monkey
that,
in
response
to
infrequent
deviants
that
were
presented
in
a
continuous
sound
stream,
a
comparable
ERP
component
can
be
detected
with
negative
deflections
in
early
latencies.
This
result
is
in
line
with
an
earlier
study
with
a
single
chimpanzee
(Pan
troglodytes)
that
showed
a
similar
MMN-like
response
using
the
same
two-tone
odd-ball
paradigm.
Consequently,
using
more
complex
stimuli,
we
tested
whether
a
rhesus
monkey
can
not
only
detect
gaps
(omissions
at
random
positions
in
the
sound
stream)
but
also
the
beat
(omissions
at
the
first
position
of
a
musical
unit,
i.e.
the
downbeat).
In
contrast
to
what
has
been
shown
in
human
adults
and
newborns
(using
identical
stimuli
and
experimental
paradigm),
preliminary
analyses
suggest
that
the
monkey
is
not
able
to
detect
the
beat
in
music.
These
findings
are
in
support
of
the
hypothesis
that
beat
induction
(the
cognitive
mechanism
that
supports
the
detection
of
a
regular
pulse
from
a
varying
rhythm)
is
species-specific.
Electrophysiological
correlates
of
subjective
equality
and
inequality
between
neighboring
time
intervals
takes
place
in
the
brain
in
a
very
brief
period
after
the
presentation
of
the
temporal
pattern,
enabling
rhythm
processing
in
real
time.
(Supported
by
JSPS)
Comparisons
between
chunking
and
beat
perception
in
auditory
short-term
memory
Jessica
A.
Grahn
Brain
and
Mind
Institute
&
Department
of
Psychology,
University
of
Western
Ontario,
Canada
Auditory
working
memory
is
often
conceived
of
as
a
unitary
capacity:
different
sounds
are
processed
with
similar
neural
mechanisms.
In
verbal
working
memory
(e.g.,
digit
span
tasks),
temporal
grouping
or
chunking
of
auditory
information
occurs
spontaneously
and
benefits
working
memory.
The
current
fMRI
study
examines
whether
beat
perception
may
simply
be
a
case
of
chunking,
by
measuring
brain
responses
to
chunked
and
unchunked
verbal
sequences
and
comparing
them
to
beat-based
and
nonbeat-based
rhythmic
sequences.
Participants
performed
same/different
judgements
on
pairs
of
auditory
sequences.
Rhythm
sequences
were
constructed
from
a
single
letter,
repeated
with
rhythmic
timing
(e.g.,
the
letter
B
repeated
6
times,
with
variable
SOAs
corresponding
to
a
beat-based
rhythmic
sequence).
Non-beat
sequences
had
irregularly
timed
SOAs.
Verbal
sequences
were
composed
of
strings
of
different
letters
(e.g.,
P
M
J
O
E
I
K
C).
Chunked
verbal
sequences
had
temporal
grouping
of
letters
into
2-
or
4-letter
chunks;
unchunked
sequences
had
no
regular
temporal
grouping.
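The stimulus construction can be sketched as follows; the SOA values are illustrative placeholders, not the timings used in the experiment:

```python
import random

def chunked_soas(n_letters=8, chunk_size=4, within=300, between=900):
    """SOAs (ms) that group letters into regular temporal chunks:
    short gaps within a chunk, a longer gap between chunks."""
    return [between if (i + 1) % chunk_size == 0 else within
            for i in range(n_letters - 1)]

def unchunked_soas(n_letters=8, lo=200, hi=1000, seed=0):
    """Irregularly timed SOAs with no regular temporal grouping."""
    rng = random.Random(seed)
    return [rng.randint(lo, hi) for _ in range(n_letters - 1)]

# 8 letters in 4-letter chunks: a long gap after the fourth letter.
print(chunked_soas())  # [300, 300, 300, 900, 300, 300, 300]
```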
Overall,
activation
to
rhythm
and
verbal
working
memory
stimuli
overlapped,
apart
from
in
the
basal
ganglia.
The
basal
ganglia
showed
a
greater
response
to
beat
than
non-beat
rhythms,
but
showed
no
difference
between
chunked
and
unchunked
verbal
sequences.
Thus,
beat
perception
is
not
simply
a
case
of
chunking,
suggesting
a
dissociation
between
beat
processing
and
grouping
or
chunking
mechanisms
that
warrants
further
exploration.
Saturday
28
July
Emotional
influences
on
attention
to
auditory
streams
Quantitative
Estimation
of
Effects
of
Musical
Parameters
on
Emotional
Features
Towards
a
brief
domain-specific
self-report
scale
for
the
rapid
assessment
of
musically
induced
emotions
Symposium
5,
Crystal
Hall,
09:00-11:00
Classification
as
a
tool
in
probing
neural
mechanisms
of
music
perception,
cognition,
and
performance
Conveners: Rebecca Schaefer, Shinichi Furuya,
Discussant:
Petri
Toiviainen
Music
can
be
considered
acoustic
information
with
complex
temporal
and
spatial
features.
Research
into
perception
and
cognition
of
multifaceted
elements
of
music
tries
to
decode
the
information
from
neural
signals
elicited
by
listening
to
music.
Music
performance,
on
the
other
hand,
entails
the
encoding
of
musical
information
to
neural
commands
issued
to
the
muscles.
To
understand
the
neural
processes
underlying
music
perception,
cognition,
and
performance,
therefore,
researchers
face
issues
of
extracting
meaningful
information
from
extremely
large
datasets
with
regard
to
neural,
physiological,
and
biomechanical
signals.
This
is
nontrivial
for
music
researchers
in
light
of
recent
technological
advances
regarding
data
measurement.
Classification
using
machine-learning
techniques
is
a
powerful
tool
in
uncovering
the
unseen
patterns
in
these
large
datasets.
In
this
way,
not
only
are
the
means
compared,
but
a
data-driven
method
is
used
to
uncover
the
sources
of
informative
variance
in
the
signals.
Moreover,
classification
techniques
allow
for
quantitative
evaluation
of
individual
differences
in
music
perception
and
performance.
In
this
symposium,
examples
are
presented
of
uncovering
neural
representations
of
musical
information
such
as
rhythm
and
harmony
through
applying
single-trial
EEG
classification
techniques
such
as
linear
discriminant
classification,
and
multivariate
data
reduction
methods
such
as
Principal
Component
Analysis
(PCA)
to
electrophysiological
signals
derived
from
individuals
who
listened
to
musical
stimuli.
Additionally,
these
methods
are
useful
to
behavioral
scientists,
allowing
them
to
characterize
fundamental
patterns
of
movements
of
the
motor
system
with
a
large
number
of
joints
and
muscles
during
musical
performance
by
means
of
PCA
and
cluster
analysis
such
as
K-means
and
expectation
maximization
(EM)
algorithm.
Classification
can
also
be
performed
on
spectro-temporal
features
derived
from
audio
waveforms
to
investigate
the
features
that
may
be
most
informative
in
perception
for
auditory
processing
by
the
brain.
This
symposium,
comprising
participants
from
six
different
research
groups,
has
two
aims.
The
first
is
to
present,
through
empirical
research,
examples
of
how
classification
methods
can
be
applied
to
various
experimental
setups
and
different
types
of
measurement.
The
second
aim
is
to
provide
fundamental
knowledge
of
the
methods
of
classification
techniques.
The
hope
is
that
conference
delegates
will
gain
a
greater
understanding
of
classification
and
how
its
methodology
can
be
applied
to
their
own
research.
and
content
retrieval,
mixing
and
signal
processing.
A
multidimensional
feature
vector
is
calculated
from
statistical
and
perceptual
processing
of
low-level
signal
analysis
in
the
spectral
and
temporal
domains.
Machine
learning
techniques
such
as
support
vector
machines
are
applied
to
produce
classification
labels
given
a
selected
taxonomy.
The
system
is
evaluated
on
large
annotated
ground
truth
datasets
(n
>
30000)
and
demonstrates
success
rates
(F-measures)
greater
than
70%
correct
retrieval,
depending
on
the
task.
Issues
arising
from
labeling
and
balancing
training
sets
are
discussed.
The
performance
of
classification
of
audio
using
machine
learning
methods
demonstrates
the
relative
contribution
of
bottom-up
signal-derived
features
and
data
oriented
classification
processes
to
human
cognition.
Such
demonstrations
then
sharpen
the
question
as
to
the
contribution
of
top-down,
expectation
based
processes
in
human
auditory
cognition.
An
Exploration
of
Tonal
Expectation
Using
Single-Trial
EEG
Classification
Exploring
the
mechanisms
of
subjective
accenting
through
multivariate
decoding
Rutger
Vlek,*
Rebecca
Schaefer,#
Jason
Farquhar,*
Peter
Desain*
the
effects
of
different
mental
strategies
on
subjective
accenting
more
closely,
contrasting
imagined
accents
cued
by
a
loudness
accent
versus
a
timbral
accent.
In
addition
to
being
successful
in
decoding
subjective
accents
from
single-trial
EEG
up
to
67%
correctly,
the
first
study
uncovered
evidence
for
shared
mechanisms
in
rhythm
processing,
showing
similarity
between
responses
to
perceived
and
subjective
accents
with a maximum classification rate of 66%.
Adding
to
this,
the
second
study
sheds
light
on
how
different
strategies
modulate
the
responses
to
subjective
accents,
with
preliminary
results
showing
a
significant
increase
in
the
decoding
performance
of
subjective
loudness
accents
versus
subjective
timbral
accents,
indicating
that
the
robustness
of
the
brain
signature
may
depend
on
imagery
strategy
or
cueing
parameters.
The
main
contribution
of
this
work
is
to
provide
an
insight
into
the
cerebral
mechanisms
of
subjective
accenting,
showing
that
not
only
is
the
brain
response
detectable
in
a
single
trial
of
data,
but
it
can
also
be
predicted
from
the
EEG
signatures
of
perceived
accenting.
Additionally,
it
is
shown
that
imagery
strategy
has
a
considerable
effect,
which
has
consequences
for
further
research
in
this
area.
The
use
of
subject-specific
classification
methods
also
yields
data
on
interpersonal
differences,
and
the
range
of
responses
that
are
measured,
which
makes
it
a
tool
particularly
well
suited
to
look
at
the
cognitive
mechanism
of
imagery.
The
results
may
inform
a
rhythm-based
Brain-Computer
Interface
paradigm,
allowing
rhythm
to
be
used
to
drive
a
device
from
the
brain
signal
alone.
Evidence for implicit tracking of pitch probabilities during musical listening

An emerging theory about the origins of musical expectations emphasises the role of a mechanism commonly termed statistical learning. This theory has led to the development of a computational model which encodes past experience of pitch sequences and then predicts the conditional probability of future events occurring given the current musical context. Results from a previous behavioural study showed a close relationship between the predictions of the model and listeners' expectedness ratings. The current study extends this work to determine whether the model can also account for expectations made on the basis of implicit knowledge, with the main aim of developing a tool able to provide a sensitive measure of listeners' musical expectations as they unfold in real time. Our aim is to develop a tool that allows the assessment of dynamic musical expectations while circumventing confounding factors related to decision making and musical competence.

Methods: Target notes that had either a high or a low probability according to the computational model of melodic expectation were selected, and participants carried out speeded judgements to indicate which of two instruments had played the target note. Notes for which a judgement was required were indicated to the participants using a visual cue, which avoided the need to interrupt the flow of the melody while allowing the measurement of expectations at multiple points in a piece of music.

Results: As predicted, analysis of reaction times showed that participants responded faster to high-probability than to low-probability notes when they were rendered in the same timbre as the preceding context. The present study provides support for the view that musical expectations are formed on the basis of musical knowledge acquired over a lifetime of incidental exposure. In addition, it validates an implicit priming paradigm that takes full account of the dynamic nature of musical expectancy during everyday music listening, and which is suitable for individuals of varying levels of musical expertise.
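The statistical-learning model described above predicts the conditional probability of the next pitch given the preceding context. A minimal bigram sketch of that idea (hypothetical, and far simpler than the authors' actual model) might look like this:

```python
from collections import Counter, defaultdict

class BigramPitchModel:
    """Toy statistical-learning model: estimates P(next_pitch | previous_pitch)."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, melodies):
        # Count pitch-to-pitch transitions across a corpus of melodies
        for melody in melodies:
            for prev, nxt in zip(melody, melody[1:]):
                self.counts[prev][nxt] += 1

    def prob(self, prev, nxt):
        total = sum(self.counts[prev].values())
        return self.counts[prev][nxt] / total if total else 0.0

# Train on a tiny hypothetical corpus of pitch sequences (MIDI note numbers)
model = BigramPitchModel()
model.train([[60, 62, 64, 62, 60], [60, 62, 64, 65, 64]])

# High- vs. low-probability continuation of the context "62"
print(model.prob(62, 64))  # 64 followed 62 in two of three observed transitions
print(model.prob(62, 61))  # never observed in the corpus
```

Selecting "high-probability" and "low-probability" target notes, as in the Methods above, amounts to picking continuations for which such a model assigns high or low conditional probability given the melodic context.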
Structural Conditions of Predictability in Post-Tonal Music: The Compound Melodic Structures of Nikos Skalkottas's Octet

Petros Vouvaris
Department of Music Science and Art, University of Macedonia, Greece

The investigation of compound melodic structures has been an implicit feature of most analytical approaches that adopt a prolongational perspective on the hierarchical structure of tonal music. When it comes to theorizing the compound structure of melodies with no apparent tonal orientation, the problematics of prolongation associated with post-tonal music discourage the espousal of the aforementioned approaches without adapting their methodological paradigm to the requisites of this specific musical idiom. This thesis concurs with the fundamental premise of the present paper as it relates to the opening thematic melodies of the three movements of Nikos Skalkottas's Octet (1931). Their analysis aims at proposing an interpretation of their compound structure, based on an investigation of the salient features that account for their respective associative middleground. The perceptual relevance of these features is factored into the analysis by assimilating the conclusions of empirical research on auditory stream segregation in relation to the implied polyphony of monophonic tonal music. The analysis evinces the resemblance of the associative middleground of Skalkottas's compound melodies to prolongational structures commonly associated with tonal melodic lines. These findings prompt the assessment of the compound character of the Octet's thematic melodies as one of the work's structural attributes that induce and/or undermine expectations related to schematic, dynamic, veridical, and conscious predictability.
Musical Expectation and Paths in Tonal Pitch Space: Integration of Concepts/Models and an Application to the Analysis of Chopin's Prelude in A minor

Costas Tsougras
School of Music Studies, Aristotle University of Thessaloniki, Greece

Musical Expectation Theory (Huron 2006) describes how a set of psychological mechanisms functions in the cognition of music. The theory identifies fundamental aesthetic possibilities afforded by expectation, and shows how musical devices (such as meter, cadence, tonality) exploit psychological opportunities. Tonal Pitch Space Theory (Lerdahl 2001) is an expansion of the Generative Theory of Tonal Music (Lerdahl & Jackendoff 1983) and proposes a model that provides explicit stability conditions and preference rules for the construction of GTTM's time-span and prolongational reductions. This paper aims at the
Xavier Hascher
GREAM Laboratory of Excellence, Université de Strasbourg, France

This paper aims at applying a general model of modal monody, constructed deductively from a theory of the generation of musical systems and scales, to the analysis of pieces of a given repertoire, namely the traditional Arabo-Andalusian music of Tunisia, or mālūf ("customary"). The latter is therefore considered from a music-theoretical perspective rather than an ethnomusicological one (be it of the etic type), even though a certain permeability between the two approaches is, of course, assumed. After describing the model and summarizing the principles that underlie its constitution, a brief recapitulation of previous analyses is given. Then a new piece is presented, a shghul (a well-wrought song, a form related in style to the nūba) in the characteristic aṣbaʿayn mode. The purpose here is twofold: firstly, to attempt a reductive analysis of the piece based on the theoretical assumptions exposed previously; and, secondly, to derive from this a deeper grammatical understanding of the musical language involved, so as to allow at least a partial reconstruction, or recreation, of the piece, or of some similar one. What is sought is a finite vocabulary of structural gestures and a syntax that regulates their articulation, which can be compatible with a more customary kind of analysis in terms of modes (ṭubūʿ) and genres (udq), or with the breaking down of form into sections, yet without being bound by the limitations inherent in such approaches. Finally, reference is made to the point of view of the receiver and to potential cognitive implications.
Incidental Learning of Modal Features of North Indian Music

Pictorial Notations of Pitch, Duration and Tempo: A Musical Approach to the Cultural Relativity of Shape

Socio-Cultural Factors Associated with Expertise in Indian Classical Music: An Interview-Based Study
performance strategies may provide a significant insight into the effects of supportive musical gestures on a vocal performance. Respiration values did seem to be affected as a result of musical collaboration. When examining the effects of previous interaction and rehearsal on performance strategies, correlations were higher for the collaborative conditions. In addition, correlations were also higher for previously rehearsed pieces than for pieces rehearsed together for the first time.
Deadpan and immobile performance intentions share movement features but not expressive parameters

The Intentions of Piano Touch

types of sound. A case study was examined in which a professional pianist performed two pieces of different styles with two different sound intentions. Shoulder, arm, and hand motion was recorded via a video camera with a side view of the pianist. Results show that touch is heavily based on musical context, with movement and tension within the shoulder-arm-wrist system changing according to musical intention. With the basis of touch rooted in conscious musical expression, this study provides a starting point from which to explore the connection between the performer's conscious choice and the resulting physical gesture.
Functions and Uses of Auditory and Visual Feedback: Exploring the Possible Effects of a Hearing Impairment on Music Performance

the High-Urgent hypothesis predicted shortened ITIs in response to rising pitches; b) based on approach/withdrawal theories of perception and on ethological research showing that lower pitches are interpreted as more threatening, the Flexor/Extensor hypothesis predicted shorter ITIs in response to falling pitches, due to stronger activation of the flexing muscles while tapping; c) based on previous research on temporal judgement, the third hypothesis predicted the same effect in both melodic directions, correlated with the magnitude of pitch change. Elicited ITIs were related to the stimuli's melodic direction. Following the first pitch change, the shortest elicited ITIs were to pitch rises in double steps, showing a main effect of melodic direction. Taps to rising lines maintained increased negative asynchrony through six taps after the first pitch change. However, peaks and valleys in mid-sequence position both yielded delays. The High-Urgent hypothesis gained the most support, but does not account, for example, for the delays on both peaks and valleys in mid-sequence.
The relationship between the human body, motor tasks, mood and musicality: How do you feel the beat?

Rhythmic Regularity Revisited: Is Beat Induction Indeed Pre-attentive?

that beat induction is independent of attention, while attention can indirectly modulate the perception of a beat by influencing the top-down processes involved in beat perception.
The Effect of Tonal Context on Short-Term Memory for Pitch

hypothesis. Fifty people were asked to identify whether the second excerpt (target line) of a pair of excerpts taken from a song came before or after the first excerpt (probe line) in the normal course of the song. Seven pairs of excerpts (three pairs falling before the target line, and four pairs occurring after the target line) were presented for each of 8 popular and 2 new songs. It was predicted that RTs for identifying target lines occurring after the probe line would be shorter than for those coming before the probe line. Results supported this hypothesis. The familiarity of a song did not affect this result. A companion experiment that compared performance on this task for musicians and non-musicians replicated these results, but indicated no effect of musical expertise. These results support the hypothesis that memory for songs is biased in a forward direction.
Long-term musical training changes the neural correlates of musical imagery and perception: a cross-sectional MRI study

Emily Coffey, Sibylle Herholz, Robert Zatorre
Montreal Neurological Institute, McGill University; International Laboratory for Brain, Music and Sound Research (BRAMS); Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT)

Long-term musical training has been linked to many of the perceptual, cognitive, and neurological differences found between musicians and non-musicians. It is not yet known how training affects auditory imagery, that is, the ability to imagine sound. Previous studies have shown that secondary auditory and premotor areas are recruited for auditory imagery, as well as association areas in the frontal and parietal lobes, but differences due to experience have not been identified. Our aim is to investigate the effects of long-term training by comparing the functional and structural neural correlates of musical imagery in musicians and non-musicians. Twenty-nine young adults, including fifteen with extensive musical experience and fourteen with minimal musical experience, listened to and imagined familiar melodies during functional magnetic resonance imaging. The task comprised four conditions: listen to familiar tunes, imagine them cued by the first tones of the song, listen to random tones, or rest in silence. We tested the accuracy of mental imagery by asking participants to judge whether a note presented either after the imagery period or at the end of the listening period was a correct continuation of the melody. In addition to the functional data, we acquired anatomical data using diffusion tensor imaging, magnetization transfer, and T1-weighted imaging. As expected, musicians demonstrated more accurate imagery performance (85%) than non-musicians (68%). Both groups showed activation during imagery in a previously identified network encompassing secondary auditory cortex, the pre-motor area, dorsolateral prefrontal cortex, the intraparietal sulcus, and the cerebellum. However, the musicians showed stronger activation in the supplementary motor area. Grey matter organization, white matter integrity, and cortical thickness will be analyzed. While both musicians and non-musicians are able to imagine familiar tunes, musicians are better at it. This performance difference may be related to stronger recruitment of the supplementary motor area, which is involved in auditory imagery, planning motor actions, and bimanual control. Analysis of the anatomical data will clarify the relationship between these behavioural and functional differences and the underlying brain structure. These results support the idea that long-term musical training affects higher-order sound representation and processing. Furthermore, the results of this cross-sectional study complement those of short-term training studies, in which practice cannot be extensive but can be experimentally controlled.
Common Components in Perception and Imagery of Music: an EEG study

SAT Paper Session 44, Timber I Hall, 11:30-13:00
Phenomenology & meaning
Markos Tsetsos
Department of Music Studies, University of Athens, Greece

Some recent psychological and philosophical approaches to musical meaning, especially those on embodied music cognition, try to establish a bodily mediated relationship between sound structures and mind. Nevertheless, the structural synarthrosis of sensuality (sound), corporeality (movement) and understanding (meaning), as long as it is attempted in strictly empirical terms, loses much of its philosophical cogency. In his writings on music, Helmuth Plessner, a pioneer of modern philosophical anthropology, provides an a priori, transcendental underpinning of the aforementioned synarthrosis, thus ensuring its necessity. Plessner proceeds to a systematic account of the phenomenal qualities specific to sound, such as producibility (Produzierbarkeit), remoteness-proximity (Fern-Nähe), voluminosity (Voluminosität) and phenomenal spatiality (tonal position), impulsivity (Impulsivität), temporal dynamism, and the ability to be displayed in intrinsically justified horizontal and vertical structures. These qualities render sound and sonic movement structurally conformant to man's phenomenal corporeality. Musical meaning, albeit semantically open, is thus understood immediately in terms of human conduct (Verhalten). All these matters are discussed in the first section of the paper. The second section presents a critical account of some older and more recent studies on embodied musical cognition in reference to Plessner's theory. This critical account aims at a theoretical reconsideration of some basic issues concerning this highly important trend of research.
Vers une musicologie anti-phénoménologique

Ilias Giannopoulos

This paper will investigate some aspects of the relation of the musical work to time, and its perception as the temporal artwork par excellence. The idea of a qualitatively experienced time as opposed to objective time, the notion of temporal extension as it appears in the work of Husserl (and Bergson), and the subjective ability of reflective perception of an extended temporal object which exposes its material over a time interval (Husserl) gave rise, in the field of music aesthetics, to phenomenological approaches to the temporality of the complete musical work, with the conviction that it too constitutes an extensive, temporally homogeneous object. However, in his extended lectures On the Phenomenology of the Consciousness of Internal Time (1893-1917), Husserl demonstrates his phenomenological analysis of the perception of temporal objects on the basis of small units, like melodies or even single tones. The author will try to scrutinize the appropriateness of phenomenological approaches to the temporality of the musical work and juxtapose them with Adorno's notion of "intensive time", based on selected texts, mainly his Musikalische Schriften, where he unfolds a dialectical understanding of musical time. Phenomenological temporal analysis and Adorno's time dialectics have, namely, opposite directions: the one aims to extend an ideally identical (since small and homogeneous) content in temporal succession, and the other aims to comprise a diversity of content in the moment (on the basis of Hegelian logical principles). The aim of this paper is to demonstrate misleading schematisms arising from holistic phenomenological approaches to the temporality of the musical work, which in addition presuppose the assumption of supra-temporal categories that are questionable for the ontology of the musical work. On the other hand, Adorno's idealistic attempt to comprise the manifold, successively given and temporally extended content in the objective and aesthetic now proves to be a supreme temporal hermeneutics, since it can be supported (without any kind of violence) by concrete musical phenomena.
Panos Vlagopoulos
Dept. of Music Studies, Ionian University, Greece

A common critique voiced against Nelson Goodman's symbolic theory of art concerns his strict adherence to an extensional semantics and, with it, the failure to account for the artist's intentions. In fact, Joseph Margolis even doubts the sustainability of the autographic/allographic distinction, claiming that since stylistic features are "profoundly intentionalized, historicized, incapable of being captured by any strict extensionalized notation, then it may well be that all so-called allographic arts are ineluctably autographic". This, however, would amount to practically collapsing the distinction between score and performance, which in turn is, if anything, a strongly engaged aesthetic view about musical works. I would like to suggest that, in trying to understand the peculiarities of avant-garde music works of the '50s and '60s (graphic-score music-works and prose music), one can find it very useful to use Goodman's autographic/allographic distinction without necessarily subscribing to Goodman's extensionalism. Against suggestions to the contrary, the two elements (either the pictorial and the musical, in graphic-score music-works; or the discursive and the musical, in prose music) should be addressed together as two irreducible aspects of graphic-score or prose music-works. These types of music works rely on a sui generis combination of autographic cum allographic elements. On the other hand, rehearsal represents an essential stage of these music works, next to the preparation of the score, on one end, and performance, on the other. I will try to illustrate this using samples from the work of Earle Brown, La Monte Young, and Anestis Logothetis.
was significantly higher for the music therapy plus standard care group than for the standard care only group (odds ratio 2.96, 95% CI 1.01 to 9.02). Individual music therapy combined with standard care is effective for depression among working-age people. The results of this study, along with previous research, indicate that music therapy, with its specific qualities, is a valuable enhancement to established treatment practices.
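An odds ratio with a Wald 95% confidence interval, of the kind reported above, is computed from a 2x2 outcome table. The counts below are purely hypothetical, chosen only to illustrate the arithmetic, and do not reproduce the study's data:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
       a = treated responders,  b = treated non-responders,
       c = control responders,  d = control non-responders."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of the log odds ratio
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 15/33 responders with music therapy, 9/46 with standard care
or_, lo, hi = odds_ratio_ci(15, 18, 9, 37)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

Note that an interval whose lower bound only just exceeds 1 (as in the reported CI of 1.01 to 9.02) indicates an effect that is statistically significant but estimated with considerable uncertainty.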
Active Music Therapy and Williams Syndrome: a Possible Method for Visual-Motor and Praxis Rehabilitation?

Jörg Fachner
Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland
Discussing the effects of drugs on music and consciousness is a difficult enterprise: on the one hand, drugs have specific effects on physiology; but on the other, the phenomena experienced and reported in drug-induced altered states of consciousness (dASC) cannot simply be reduced to the perceptual consequences of those physiological effects. This paper discusses the psychedelic effects of drugs (mainly cannabis) on the perception and performance of music, and in particular how such drugs influence time perception in the process of performance. Drugs bind to endogenous receptors of certain neurotransmitters and therefore emphasize, amplify or weaken certain brain functions that, even in extreme form, are also possible without drugs. Baudelaire already noted that nothing supernatural happens under the influence of drugs, but that reality simply becomes more vivid and receives more attention. Drugs have the capacity to reframe perspectives on
A two-hour post-conference session looking at the wider social and political context of our research and practice, in the tradition begun at the ICMPC in Evanston and continued in Bologna. A likely focus will be the current global economic situation, as it is currently being felt most strongly in Greece, and its impact on scholarship and intellectual exchange. This is not part of the academic programme of the conference, but all registered conference participants and their non-participant accompanying persons are encouraged to attend and take part in the discussion. The session will be conducted in English.
AUTHOR INDEX

Abeer, 164
Abla, 62
Adachi, 116, 143, 192, 230
Addessi, 26, 68
Aglieri, 229
Aguiar, 46, 93
Aiba, 102
Akinaga, 61
Akiva-Kabiri, 120, 199
Albrecht, 28, 66
Alexakis, 26, 68
Allpress, 177, 208
Alluri, 151
Almoguera, 162
Altenmüller, 92, 134, 215
Ambrazevičius, 86
Anagnostopoulou, 26, 68, 93
Antovic, 119
Aoki, 107
Armin, 90
Ashley, 34, 126
Athanasopoulos, 219
Atherton, 109
Atkinson, 159
Au, 121
Aucouturier, 38
Auer, 22
Aufegger, 78
Ayari, 24
Azaria, 199
Bååth, 101
Bagic, 83
Bailes, 15, 100, 133, 176
Baldwin, 29, 46
Barrett, 95
Barrow, 29
Bartlett, 32
Bartolo, 209
Bas de Haas, 55
Beck, 49, 170
Ben-Haim, 126
Benoit, 91, 139
Berger, 100, 214
Berkowska, 57, 61
Bertolino, 62
Best, 17
Beveridge, 211
Chan, 89
Chandra, 169
Chang, 60
Chatziioannou, 174
Chiofalo, 229
Chmurzynska, 79, 114, 164
Chon, 190
Chuen, 187
Cirelli, 198
Clarke, 124, 152, 185
Clift, 208
Coffey, 225
Cohrdes, 106
Collins, 95, 201
Corrigall, 137, 165
Costa-Giomi, 47
Coutinho, 212
Creighton, 109
Crook, 211
Cucchi, 97
Cunha, 79
Custodero, 98
Dakovanou, 93
Dalla Bella, 57, 61, 91, 139, 196
Davidson, 16
Davidson-Kelly, 34
Dean, 15, 100, 143, 176
Deconinck, 204
Degé, 137
Delbé, 95
Deliège, 12
Demorest, 17, 98, 186
Demoucron, 220
Desain, 214, 226
Dibben, 134
Dilley, 53
Diminakis, 158
Ding, 39
Dittmar, 164
Dobson, 108
Doffman, 124
Dohn, 151
Donin, 202
Dowling, 32, 38, 42
Doyne, 152
Dunbar-Hall, 17
Dyck, 171
Dykens, 85
Edwards, 149
Eerola, 69, 161, 175
Egermann, 187
Eguia, 172
Eguilaz, 162
Einarson, 137, 197, 198
Eitan, 50, 126, 160, 185
Elowsson, 23
Emura, 61, 211
Erdemir, 49, 170
Erkkilä, 181, 228
Evans, 202
Exter, 37
Fabiani, 94
Fachner, 181, 228, 229
Fairhurst, 70
Falk, 116
Farbood, 74, 205, 224
Farquhar, 214, 226
Farrugia, 57, 91, 139
Fazio, 62
Fernando, 187
Féron, 202
Ferrari, 26
Ferrer, 175
Feth, 118
Finkel, 196
Fischer, 47
Fischinger, 66, 141
Floridou, 195
Foltyn, 84
Fornari, 35
Forth, 40
Fouloulis, 41
Foxcroft, 28
Frank, 58
Frank, 63
Friberg, 23, 94, 182
Frieler, 25, 66, 183
Fritz, 105, 174
Fujii, 99
Fulford, 222
Furukawa, 44
Furuya, 103, 215
Gao, 90
Garnier, 124
Geringer, 87
Ghitza, 74
Giannopoulos, 227
Giannouli, 19, 33
Giesriegl, 50
Gifford, 16
Gill, 203
Gingras, 96, 130, 143, 188
Ginsborg, 222
Giordano, 174
Giorgio, 158
Giovanni, 86
Glette, 204
Glover, 122
Goda, 103
Godøy, 169, 204
Goebl, 140, 141
Gold, 62, 63, 121
Goldbart, 222
Goldman, 123
Gollmann, 48
Gómez, 112
Goodchild, 69, 143
Gordon, 85
Goto, 42
Govindsamy, 105
Graepel, 156
Grahn, 210
Granot, 51, 145, 222
Gratier, 202
Griffiths, 189
Grollmisch, 164
Grube, 80
Gualda, 186
Guastavino, 112, 174
Guedes, 157
Háden, 197, 209
Hadjidimitriou, 83
Hadjileontiadis, 83
Hadley, 119, 168
Hallett, 135
Halpern, 33
Hamann, 37
Hambrick, 53
Handy, 60
Hannon, 48
Hans, 152
Hansen, 216
Harding, 54, 139
Hargreaves, 171
Hascher, 218
Hasselhorn, 80, 164
Hawes, 142
Hedblad, 94
Hegde, 38, 220
Heller, 198
Helsing, 149
Hemming, 134
Henik, 120, 199
Herbert, 46, 148
Herholz, 82, 85, 225
Himberg, 101, 203
Hinds, 128
Hirano, 60, 103, 104
Hirashima, 99
Hirt, 183
Hitz, 22
Hjortkjær, 49, 171
Hofmann, 141
Hofmann-Engl, 181
Honing, 197, 209, 223
Horn, 67, 194
Høvin, 204
Hughes, 76
Huovinen, 129, 184
Huron, 37, 66, 67, 125, 161, 205
Imberty, 158
Innes-Brown, 121
Ioannou, 82
Israel-Kolatt, 51
Ito, 60, 104
Ivaldi, 106, 146
Iwanaga, 44, 62
Jakubowski, 66
Janata, 95
Janković, 179
Jensenius, 204
Judge, 128
Kaczmarek, 144, 180
Kagomiya, 191
Kaila, 129
Kaiser, 67
Kamiyama, 62
Kanamori, 20, 45
Kaneshiro, 100, 214
Kang, 115
Katahira, 167
Katsiavalos, 157
Kawakami, 44, 102
Kawase, 184
Kazai, 102
Kecht, 65
Keller, 41, 70, 71, 188
Key, 85
Kidera, 191
Kieslich, 180
Kim, 100
Kinoshita, 60, 103, 104
Kitamura, 109
Kizner, 135
Klonari, 190
Knox, 162, 211
Kochman, 220
Koelsch, 48
Kohn, 50
Koniari, 138
Kopiez, 64, 106, 111, 117, 127, 145
Koreimann, 22, 113
Korsakova-Kreyn, 42
Kotta, 16
Kotz, 54, 91, 117, 139
Kouzaki, 191
Kozak, 169
Kranenburg, 95
Krause, 108
Krause-Burmester, 37
Kreutz, 80, 136, 155
Kringelbach, 152
Kuchenbuch, 82, 85
Kudo, 103
Kuhn, 92
Küssner, 121
Lamont, 27, 135, 147, 175
Lapidaki, 160
Larrouy-Maestri, 86
Lartillot, 24
Laucirica, 162
Launay, 100, 176
Leadbeater, 147
Leboeuf, 213
Lee, 223
Leech-Wilkinson, 121
Lega, 97
Legg, 208
Legout, 203
Lehmann, 80, 164
Lehne, 48
Leibovich, 199
Leitner, 22
Leman, 171, 220
Lembke, 172
Lense, 85
Lenz, 21
Lesaffre, 171
Lévêque, 86, 88
Li, 39
Liao, 89
Liebermann, 141
Liikkanen, 132
Lim, 124
Lindborg, 122, 193
Lindsen, 18, 84, 152, 179
Liu, 53
Lock, 16
Lorrain, 138
Lothwesen, 25, 66
Louhivuori, 207
Loui, 52
Louven, 129
Luck, 30, 58, 107, 127, 144, 154, 170, 221
Ludke, 116
Lund, 151
MacDonald, 211
MacLachlan, 224
MacLeod, 87
MacRitchie, 185, 221
Madison, 101
Madsen, 87
Maes, 171
Maestre, 206
Mailman, 163
Mallikarjuna, 72
Mankarious, 130
Manning, 57
Marchini, 206
Marcus, 74
Marentakis, 21
Margulis, 160
Marin, 125
Marozeau, 121
Marsden, 55
Martorell, 112
Mastay, 92
Matsui, 102
Matsumoto, 45, 107
Mauro, 193
Mavromatis, 224
Mayer, 141
Mazzeschi, 229
McAdams, 21, 69, 105, 143, 172, 187, 190
McAuley, 53, 92
Mendoza, 46
Merchant, 209
Micheli, 180
Misenhelter, 20
Mitchell, 111
Mito, 102
Mitsudo, 42, 209
Paisley, 166
Palmer, 140
Panebianco-Warrens, 28
Pantev, 82, 85, 225
Papadelis, 173, 190, 200
Papanikolaou, 190, 200
Papiotis, 206
Paraskevopoulos, 82, 85
Parncutt, 50, 67, 185
Pastiadis, 173, 190, 200
Patel, 105
Paul, 43, 118
Pawley, 207
Pearce, 18, 36, 84, 110, 139, 143, 152, 216
Pecenka, 70
Peebles, 15
Pennycook, 157
Penttinen, 184
Perreau-Guimaraes, 214
Pesjak, 22
Peter, 199
Petrovic, 119
Peynircioğlu, 33
Pfeifer, 37
Phillips, 31, 119
Pikrakis, 41
Piper, 183
Platz, 111, 117, 127, 145
Plazak, 161, 176
Poeppel, 74
Poon, 161
Pope, 213
Potter, 110
Prado, 209
Prem, 50
Prince, 118
Prior, 59, 121
Proscia, 172
Psaltopoulou, 180
Quarto, 62
Rahal, 146
Raju, 24
Raman, 38
Ramanujam, 38, 220
Randall, 83, 150
Raposo de Medeiros, 36
Reiss, 173
Remijn, 42
Repp, 70
Reuter, 65, 171
Siedenburg, 171
Sinico, 186
Skogstad, 204
Sloboda, 108, 230
Slor, 45
Smetana, 141
Smith, 213
Smukalla, 27
Sobe, 123
Sowinski, 57, 196
Speelman, 165, 224
Spiro, 101
Stevanovic, 202
Stevens, 17, 121, 188, 199
Stewart, 18, 96, 125, 130, 188, 216, 223
Stigler, 50
Stoklasa, 141
Stolzenburg, 75
Strauß, 109
Sudre, 83
Sulkin, 99
Sun, 72
Suppes, 214
Suzuki, 191
Syzek, 92
Tabei, 43
Tafuri, 81
Taga, 99
Takeichi, 42, 191, 209
Takiuchi, 143
Tamar, 90
Tamir-Ostrover, 185
Tanaka, 43
Tardieu, 17
Taurisano, 62
Teki, 189
Tekman, 47
Temperley, 154
Tervaniemi, 51, 150
Thompson, 32, 54, 58, 107, 125, 127, 154, 170, 199, 203, 221
Tidhar, 168
Tillmann, 17, 73, 95, 96, 188
Timmers, 30, 211
Ting, 32
Tjoa, 213
Tobimatsu, 42, 209
Toiviainen, 58, 107, 112, 127, 144, 151, 154, 170
Tørresen, 169, 204
Toussaint, 110
Trainor, 137, 197, 198
Trehub, 194
Triantafyllaki, 26, 68, 93
Trkulja, 179
Trochidis, 140
Troge, 138
Tsai, 39, 178
Tsay, 73
Tsetsos, 227
Tsougras, 138, 158, 217
Tsuzaki, 102
Tzanetakis, 95
Ueda, 173, 191
Uhlig, 71
Upham, 178, 192
Vaes, 200
Vaiouli, 180
Van den Tol, 149
van der Steen, 41
van Handel, 75
van Kranenburg, 55, 56
van Noorden, 58
van Vugt, 92
van Walstijn, 174
van Zijl, 144
Vanden Bosch, 48
Vattulainen, 150
Vecchi, 97
Vempala, 56
Verga, 117
Vitale, 193
Vitouch, 22, 78, 113, 123
Vlagopoulos, 228
Vlek, 214
Voldsund, 169, 204
Volk, 55, 56, 95
Vouvaris, 217
Vroegh, 183
Vujović, 81
Vuoskoski, 30, 69
Vurma, 87
Vuust, 151, 152
ISBN: 978-960-99845-1-5