
12th

International Conference for


Music Perception and Cognition

8th Conference of the European Society
for the Cognitive Sciences of Music




ICMPC - ESCOM 2012


Joint Conference

Proceedings

Book of Abstracts
CD-ROM Proceedings



Edited by
E. Cambouropoulos, C. Tsougras,
P. Mavromatis, K. Pastiadis

School of Music Studies
Aristotle University of Thessaloniki

Thessaloniki/Greece, 23-28 July 2012








Proceedings of the ICMPC-ESCOM 2012 Joint Conference:
12th Biennial International Conference for Music Perception and Cognition
8th Triennial Conference of the European Society for the Cognitive Sciences of Music

Edited by:
Emilios Cambouropoulos, Costas Tsougras, Panayotis Mavromatis, Konstantinos Pastiadis

Book of Abstracts: Costas Tsougras


CD-ROM Proceedings: Konstantinos Pastiadis
Cover design: Emilios Cambouropoulos
Printed by COPYCITY, Thessaloniki, www.copycity.gr

Published by the School of Music Studies, Aristotle University of Thessaloniki


http://www.mus.auth.gr

12th ICMPC - 8th ESCOM Joint Conference webpage: http://icmpc-escom2012.web.auth.gr


ICMPC webpage: http://www.icmpc.org
ESCOM webpage: http://www.escom.org

The Proceedings are also available online at the conference's website:


http://icmpc-escom2012.web.auth.gr

ISBN: 978-960-99845-1-5
Copyright 2012 by E. Cambouropoulos, C. Tsougras, P. Mavromatis, K. Pastiadis

Welcoming Address by the ESCOM President


Dear delegates,

On behalf of the European Society for the Cognitive Sciences of Music, I would like to extend a
warm welcome to all of you. I am very happy to see such an impressive number of delegates
from all over the world. I know that some of you have had a very long journey, but I am sure you
will not regret the effort. I have no doubts that this will be an inspiring and fruitful conference.
As you might suspect, the road to this conference was not always smooth. In 2009, when we
decided Greece would be the next venue for the joint ESCOM/ICMPC conference, not even the
Delphi oracle would have been able to predict the current economic crisis in Europe. Of course,
we did briefly consider moving the conference to another country, but given the generally tense
economic situation in most European countries, this was not a realistic option. Eventually, the
unexpected difficulties led to a very productive and personally enriching intra-European
cooperation between ESCOM, DGM, and the ICMPC organizers.
First of all, I want to thank the local team, Emilios Cambouropoulos, Costas Tsougras, and
SYMVOLI, for persistently pursuing their vision of an international conference in this impressive
setting. Secondly, I would like to express my sincere gratitude to the executive council of the
German Society for Music Psychology (DGM), in particular to its president Andreas Lehmann and
its treasurer Michael Oehler for their cooperation with ESCOM and ICMPC in settling financial
matters.
I hope that all of the delegates will leave the ESCOM-ICMPC 2012 conference and Thessaloniki
fresh and brimming with new ideas, new friends, good experiences, life-enhancing impressions
and optimism regarding the scientific and scholarly potential of the cognitive sciences of music.

Reinhard Kopiez,

Professor of Music Psychology, Hanover University of Music, Drama and Media, Germany
ESCOM President


Welcoming Address by the Conference co-Chairs


Dear delegates,

We would like to welcome all participants here in Thessaloniki for the joint meeting of the 12th
International Conference on Music Perception and Cognition (ICMPC) and the 8th Triennial
Conference of the European Society for the Cognitive Sciences of Music (ESCOM). The conference
is organized by the School of Music Studies at the Aristotle University of Thessaloniki, and the
European Society for the Cognitive Sciences of Music. This year's joint conference is the fourth
joint international meeting of ICMPC and ESCOM, following the meetings in Liège, Belgium (1994),
Keele, England (2000), and Bologna, Italy (2006).
Three years ago, at the urging of Irène Deliège, we decided to go ahead and put forward a proposal
for holding this international event in Thessaloniki. At that time, we could not imagine the financial
turmoil this country would enter just a short time down the line. We are grateful to ESCOM, and
above all to Reinhard Kopiez and Irène Deliège, for their steady support and encouragement
throughout this long preparatory period. Many thanks are due to Andreas Lehmann and Michael
Oehler (German Society for Music Psychology - DGM) for assisting us in securing a credible
financial environment for the conference. We would also like to express our gratitude to the
members of the international ICMPC-ESCOM 2012 Conference Advisory Board for trusting us,
despite the negative international publicity surrounding the country.
The conference brings together leading researchers from different areas of music cognition and
perception. A large number of papers, from a broad range of disciplines - such as psychology,
psychophysics, philosophy, neuroscience, artificial intelligence, psychoacoustics, linguistics, music
theory, anthropology, cognitive science, education - report empirical and theoretical research
that contributes to a better understanding of how music is perceived, represented and
generated. Out of 570 submissions, 154 papers were selected for spoken presentation and 258
for poster presentation. Additionally, five keynote addresses will be presented in plenary sessions
by five internationally distinguished colleagues. The two SEMPRE-ICMPC12 Young Researcher
Award winners for this year will also present their work in plenary sessions on Wednesday and
Friday morning.






This year we have attempted to give poster presentations a more prominent position in the
conference programme. Posters are organised thematically into speed poster sessions, in which
authors have the opportunity to briefly present the core points of their work orally to
participants; these speed sessions are followed by more relaxed presentations and discussions
in front of the posters in the friendly environment of the main venue hall. The speed poster
presentations are held mostly in the morning, leaving time for discussion later in the day. We
are hoping that this compound mode of presentation (oral plus poster presentation) will
contribute to better communication between poster presenters and conference participants.
We are open to further suggestions and ideas, as well as feedback on how well this whole
process works.
We also tried to provide an interesting and diverse social programme. Apart from the welcome
reception and banquet, a variety of half-day excursions are offered on Thursday afternoon, plus
other activities in the city such as walking tours. We would like to draw your attention to the
special concert on Wednesday evening that features contemporary works by Greek composers
performed by leading local performers. The concert will include works from the beginning of the
20th century to the present; a traditional female vocal ensemble will also take part,
complementing contemporary works inspired by Greek folk music.
On the last day of the conference, Saturday afternoon, a special post-conference two-hour
session, co-chaired by John Sloboda and Mayumi Adachi, will be looking at the wider social and
political context of our research and practice. This event will focus on the current global
economic situation as it is currently being felt most strongly in Greece, and its impact on
scholarship and intellectual exchange. All are welcome for a lively and thought-provoking
discussion.
We hope that the richness of research topics, the high quality of presentations, the smooth flow
of the programme, the friendly and comfortable environment of Porto Palace, the relaxed coffee
and lunch breaks, along with the conference excursions, musical concerts and other social
events, will make this conference a most rewarding experience. We hope that everyone will
leave with fresh ideas and motivation for future research, and new collaborations that will give
rise to inspiring new ideas and lasting friendships.
Closing this opening comment, we would like to thank all our co-organisers in the organising
committee, our colleagues in the Music Department and our collaborators at Symvoli for their
support. We want to thank especially Panos Mavromatis, Kostas Pastiadis and Andreas
Katsiavalos for their invaluable practical help at various stages of this organisation. Finally, a
warm thanks to all of you for coming to Thessaloniki and for your support and solidarity in the
midst of this difficult period for our country.
We are confident that this conference will be a most rewarding and memorable experience for
all.

Emilios Cambouropoulos and Costas Tsougras,


ICMPC-ESCOM 2012 co-chairs



ICMPC12-ESCOM8 Organizing Committee


Chair: Emilios Cambouropoulos, School of Music Studies, Aristotle University of Thessaloniki,
Greece
Co-Chair: Costas Tsougras, School of Music Studies, Aristotle University of Thessaloniki, Greece
Reviewing Co-ordinator: Panayotis Mavromatis, New York University, USA
Technical Co-ordinator: Konstantinos Pastiadis, School of Music Studies, Aristotle University of
Thessaloniki, Greece

Georgios Papadelis, School of Music Studies, Aristotle University of Thessaloniki


Danae Stefanou, School of Music Studies, Aristotle University of Thessaloniki
Christina Anagnostopoulou, Department of Music Studies, University of Athens
Eleni Lapidaki, School of Music Studies, Aristotle University of Thessaloniki

Technical assistant, webmaster: Andreas Katsiavalos


Conference Administration: SYMVOLI Conference and Cultural Management, www.symvoli.gr
Conference Venue: Porto Palace Hotel and Conference Center, www.portopalace.gr

ICMPC-ESCOM 2012 Conference Advisory Board


Mayumi Adachi, Hokkaido University, Japan
Anna Rita Addessi, University of Bologna, Italy
Steven Demorest, University of Washington, USA
Andrea Halpern, Bucknell University, USA
Reinhard Kopiez, University of Hannover, Germany
Jukka Louhivuori, University of Jyväskylä, Finland
Yoshitaka Nakajima, Kyushu University, Japan
Jaan Ross, Estonian Academy of Music and Theatre & University of Tartu, Estonia

Programme Committee


Eckart Altenmüller, Hanover University of Music, Drama and Media, Germany
Nicola Dibben, University of Sheffield, U.K.
Robert O. Gjerdingen, Northwestern University, U.S.
Carol L. Krumhansl, Cornell University, U.S.
Stephen McAdams, McGill University, Canada
Richard Parncutt, Karl-Franzens-Universität Graz, Austria
Catherine (Kate) Stevens, University of Western Sydney, Australia
Petri Toiviainen, University of Jyväskylä, Finland


Scientific Advisory Board


Mayumi Adachi, Hokkaido University, Japan
Anna Rita Addessi, University of Bologna, Italy
Rita Aiello, New York University, United States
Eckart Altenmüller, University of Music, Drama and Media, Hannover, Germany
Rytis Ambrazevičius, Kaunas University of Technology, Lithuania
Christina Anagnostopoulou, University of Athens, Greece
Richard Ashley, Northwestern University, United States
Roberto Bresin, KTH Royal Institute of Technology, Sweden
Warren Brodsky, Ben-Gurion University of the Negev, Israel
Annabel Cohen, University of Prince Edward Island, Canada
Eugenia Costa-Giomi, University of Texas, Austin, United States
Sarah Creel, University of California, San Diego, United States
Ian Cross, University of Cambridge, United Kingdom
Lola Cuddy, Queen's University, Canada
Lori Custodero, Columbia University, United States
Irène Deliège, ESCOM, Belgium
Steven M. Demorest, University of Washington, United States
Nicola Dibben, University of Sheffield, United Kingdom
Walter Jay Dowling, University of Texas, Dallas, United States
Tuomas Eerola, University of Jyväskylä, Finland
Zohar Eitan, Tel Aviv University, Israel
Dorottya Fabian, University of New South Wales, Australia
Morwaread Farbood, New York University, United States
Robert Gjerdingen, Northwestern University, United States
Rolf Inge Godøy, University of Oslo, Norway
Werner Goebl, University of Music and Performing Arts, Vienna, Austria
Andrea Halpern, Bucknell University, United States
Stephen Handel, University of Tennessee, United States
Erin Hannon, University of Nevada, Las Vegas, United States
Yuzuru Hiraga, University of Tsukuba, Japan
Henkjan Honing, University of Amsterdam, Netherlands
Erkki Huovinen, University of Minnesota, School of Music, United States
Roger Kendall, University of California, Los Angeles, United States
Reinhard Kopiez, Hanover University of Music, Drama and Media, Germany
Stefan Koelsch, Freie Universität Berlin, Germany
Nina Kraus, Northwestern University, United States
Alexandra Lamont, Keele University, United Kingdom
Eleni Lapidaki, Aristotle University of Thessaloniki, Greece
Edward Large, Florida Atlantic University, United States
Andreas Lehmann, Hochschule für Musik, Würzburg, Germany
Marc Leman, University of Ghent, Belgium
Scott Lipscomb, University of Minnesota, United States
Steven Livingstone, Ryerson University, Canada
Jukka Louhivuori, University of Jyväskylä, Finland
Psyche Loui, Beth Israel Deaconess Medical Center and Harvard Medical School, United States



Geoff Luck, University of Jyväskylä, Finland


Raymond MacDonald, Glasgow Caledonian University, United Kingdom
Elizabeth Margulis, University of Arkansas, United States
Elizabeth Marvin, Eastman School of Music, University of Rochester, United States
Stephen McAdams, McGill University, Canada
Josh McDermott, New York University, United States
David Meredith, Aalborg University, Denmark
Yoshitaka Nakajima, Kyushu University, Japan
Takayuki Nakata, Future University, Hakodate, Japan
Marta Olivetti Belardinelli, Sapienza University of Rome, Italy
Georgios Papadelis, Aristotle University of Thessaloniki, Greece
Richard Parncutt, University of Graz, Austria
Bruce Pennycook, University of Texas, Austin, United States
Peter Pfordresher, University at Buffalo, State University of New York, United States
Ian Quinn, Yale University, United States
James Renwick, University of Sydney, Australia
Bruno Repp, Haskins Laboratories, United States
Martina Rieger, UMIT - University for Health Sciences, Medical Informatics and Technology, Austria
Jaan Ross, Estonian Academy of Music and Theatre, Estonia
Frank Russo, Ryerson University, Canada
E. Glenn Schellenberg, University of Toronto, Canada
Emery Schubert, University of New South Wales, Australia
Uwe Seifert, University of Cologne, Germany
John Sloboda, Guildhall School of Music & Drama, United Kingdom
Kate Stevens, University of Western Sydney, Australia
David Temperley, Eastman School of Music, University of Rochester, United States
William Forde Thompson, Macquarie University, Australia
Barbara Tillmann, Lyon Neuroscience Research Center, France
Petri Toiviainen, University of Jyväskylä, Finland
Laurel Trainor, McMaster University/McMaster Institute for Music and the Mind, Canada
Minoru Tsuzaki, Kyoto City University of Arts, Japan
Maris Valk-Falk, Estonian Academy of Music and Theatre, Estonia
Oliver Vitouch, University of Klagenfurt, Austria
Geraint Wiggins, Queen Mary, University of London, United Kingdom
Suk Won Yi, Seoul National University, Republic of Korea


SEMPRE AWARDS


The Society for Education, Music and Psychology Research
(SEMPRE) <http://www.sempre.org.uk/> kindly offers a
number of awards to researchers attending this year's ICMPC
conference.



SEMPRE & ICMPC12 Young Researcher Award


The SEMPRE & ICMPC12 Young Researcher Award (YRA) is given to young researchers who
submit a high-quality research paper and demonstrate the potential to become leading researchers
in the field of Music Perception and Cognition.
This year's Young Researcher Award selection committee, consisting of Graham Welch (chair of
SEMPRE), Reinhard Kopiez (president of ESCOM), and Kate Stevens (member of the ICMPC-
ESCOM12 Scientific Advisory Board), carefully examined all shortlisted applications and decided
that this year's YRA prize would be shared by the following two researchers:




Birgitta Burger: Emotions move us: Basic emotions in music influence people's movement to
music
Chia-Jung Tsay: The Impact of Visual Cues on the Judgment and Perceptions of Music
Performance

The selection process consisted of the following steps: initially, eleven submissions were
shortlisted based on the review ratings of the submitted abstracts. The authors of these
eleven abstracts then submitted full papers, which were additionally reviewed by at least two
reviewers from the Scientific Advisory Board. Finally, the YRA selection committee carefully
examined these eleven submissions in terms of their overall quality and originality (taking into
account the additional reviews) and of their meeting all the criteria described on the
conference webpage, and delivered its final decision.
In addition to receiving a monetary prize ($1000 each), the two YRA winners will present their work
in special plenary sessions on Wednesday and Friday mornings. The YRA selection committee,
SEMPRE, the conference organising committee and all participants congratulate the two winners
whole-heartedly on their success.

SEMPRE Attendance Bursaries


The Attendance Bursaries are awarded by SEMPRE to financially assist ICMPC participants on the
basis of merit and need. This year, a total of US$10,000 (in individual awards from $100 to $750)
has been awarded to the following participants: Amos David Boasson, Blanka Bogunović, Daniel
Cameron, Elisa Carrus, Song Hui Chon, Emily B.J. Coffey, Cara Featherstone, Georgia-Aristi
Floridou, Benjamin Gold, Andrew Goldman, Meghan Goodchild, Shantala Hegde, Sibylle C.
Herholz, Christos Ioannou, Jenny Judge, Sarah Knight, Amanda Krause, Carlotta Lega, Samuel A.
Mehr, Alisun Pawley, Crystal Peebles, Rachna Raman, Sundeep Teki, Michael Wammes, Dustin
Wang, Michael W. Weiss


Presentation Guidelines


Spoken Papers
Spoken papers are allotted 20 minutes, plus 8 minutes for questions and a 2-minute break for
changing rooms. You must stop talking when your time is up. The timetable will be strictly
adhered to so that people can easily change rooms and plan meetings during breaks. All papers
are presented in English.
All PowerPoint presentations must be brought to the Central Technical Helpdesk in the main
foyer at least three hours prior to the scheduled opening time of the session. At the helpdesk,
the authors can preview their presentation. The computers in the presentation
halls are laptops with Microsoft Windows 7 or XP SP3 installed. Presentations should be prepared
for MS Office PowerPoint or in Acrobat PDF format. The PowerPoint presentation (ppt or pptx
file) and all audio/visual files must be in the same folder (without sub-folders), named after the
presenter's surname. If it is absolutely necessary, e.g. if you want to use a program that runs only
on your own computer, bring your own laptop and check well in advance that your equipment
and ours work together. Participants using Apple Macintosh computers should provide any
adapters necessary for video (VGA) output to the in-situ audiovisual equipment.
Meet your chair and technical assistant 10-15 minutes before the start of your session. If you
have a handout, give it to an assistant along with any instructions on what to do.
If something goes wrong with the equipment during your talk, ask the technician to fix it.
Meanwhile, continue your talk, even if you have to improvise without slides. Your 20-minute
period will not be extended on account of a technical problem.

Poster Presentations
Hanging up and presenting posters. Authors are responsible for setting up and removing their
posters. If your poster is presented at a Speed Poster Session on Tuesday, then you should hang
it up on Monday afternoon before 5:30pm, and the poster will remain until Tuesday evening. If your
poster is presented on Wednesday or Friday, then it should be hung up on the morning of that
same day before 9am and removed the following day. A timetable of papers on each poster
panel will indicate which posters should be hung up on that particular panel. Posters will be
organised thematically, so look for your poster panel in the appropriate thematic region. We will
provide the means for you to hang your poster. At least one author of a poster must be available
to present it during the special poster presentation sessions and, also, during coffee breaks and
lunch breaks on the two days that the poster is hung.
Speed poster presentations. Apart from the poster itself, a 5-minute slot is allocated for the spoken
presentation of each poster. The goal of this brief presentation is not to present the full paper,
but rather to give a glimpse into the participants' research that will attract delegates for a more
detailed presentation and discussion around the actual poster. Authors should not try to fit as
much as possible into the five minutes, but should preferably give a few interesting or exciting
points that will urge delegates to discuss the issues raised further during the poster presentation
sessions and the lunch/coffee breaks. The same requirements as for spoken talks apply to the
speed poster presentations (read the guidelines above carefully), with the following exception:
each speed poster presentation is allotted exactly 5 minutes, without extra time for discussion;
presenters should ensure that their presentation lasts less than 5 minutes to allow half a minute or
so for the preparation of the next presentation. The timetable will be strictly adhered to. We
suggest that PowerPoint presentations consist of no more than 4-5 slides. All PowerPoint
presentations must be brought to the Central Technical Helpdesk in the main foyer at least three
hours prior to the scheduled opening time of the session. Use of individual laptops is not allowed
in speed poster sessions.


CONFERENCE PROGRAM - OVERVIEW

[The printed programme overview is a six-day timetable grid (Monday 23 July to Saturday 28 July, 9:00-22:00) that cannot be reproduced as running text. It schedules: registration; the welcome and keynotes 1-5; the two Young Researcher Award plenary sessions (Wednesday and Friday mornings); speed poster sessions 1-44 with accompanying poster presentation slots; symposia 1-5 and paper sessions 1-45; the ESCOM General Assembly and the ICMPC Business Meeting; coffee and lunch breaks; tours and excursions (Thursday afternoon); the Special Post-Conference Session (Saturday); and the welcome reception, concert, and banquet.]


Monday 23 July

Keynote 1: Grand Pietra Hall, 18:30-19:30

Irène Deliège: The cue-abstraction model: its premises, its evolution, its
prospects

Irène Deliège obtained her qualifications at the Royal Conservatory
of Brussels. After a twenty-year career as a music teacher, she
retrained in psychology at the University of Brussels and obtained
her PhD in 1991 from the University of Liège. A founding member of
the European Society for the Cognitive Sciences of Music (ESCOM),
she served from its inception in 1991 until recently as its Permanent
Secretary and as Editor of its journal, Musicae Scientiae, which she
launched in 1997. Her main research interests include the
organisation of a mental representation of the musical work, cue
abstraction and imprint formation, and categorisation and similarity
perception during listening. She is the author of several articles and has co-edited several books
dedicated to music cognition and perception, among which La Musique et les Sciences
Cognitives (Mardaga, 1986), Naissance et Développement du Sens Musical (Presses
Universitaires de France, 1995), Musical Beginnings (Oxford University Press, 1996), Perception
and Cognition of Music (Psychology Press, 1997), Musique contemporaine: Perspectives
théoriques et philosophiques (Mardaga, 2001), Musical Creativity (Psychology Press, 2006),
Musique et évolution (Mardaga, 2010), Music and the Mind: Essays in Honour of John Sloboda
(Oxford University Press, 2011), and Contemporary Music: Theoretical and Philosophical
Perspectives (Ashgate, 2011).

Born of a reflection on Lerdahl and Jackendoff's grouping preference rules (see GTTM,
1983), the cue-abstraction model is proposed. This model is anchored in the formulation of
the general perceptual principle of sameness and difference. The description and discussion
of the cue-abstraction model will revolve around three main axes.
A first axis of reflection concerns the psychological constants on which our perceptual
activities are based, whatever the perceptual field addressed. The theoretical premises of the
cue-abstraction model in the perception of a musical piece are based on arguments put
forward in general psychology as well as in psycholinguistics. Similarly, the hypothesis of
imprint formation as a result of the repetition of abstracted figures found its theoretical
foundations in the work on categorisation processes by Rosch's team and in research
on prototype effects in visual and linguistic material by Posner, Keele, Bransford and
Franks.
A second axis considers the influence of culture, education, music tuition and social
environment on the perception of a musical piece. All my investigations from 1985 to date
have been conducted by comparing the performance of musicians and non-musicians. Some
findings have established that:
- the cue-abstraction process is relatively tuition-independent;
- tuition intervenes, however, in the formation of imprints and in categorisation processes, in
which case the role of memory is more effective; the influence of implicit learning and memory
requires further investigation;
- the impact of the heads of thematic elements is more pronounced in abstracted cued elements:
so-called priming procedures can shed light here for a better understanding of the mechanisms
involved.
A third axis concerns the definition of notions underlying the psychological mechanisms
involved in music perception. Cue, musical idea, variation, imprint, theme, motif, pertinence,
salience, accent, similarity, difference, and so on, are all terms borrowed from the common
vocabulary and used intuitively by musicians and musicologists in their work on music
analysis, theory, history, philosophy and aesthetics of music. Would it be possible to go
beyond this intuitive use? Do we have tools to make progress towards more relevant
definitions that can satisfy scientists' quest for more precision?

Keynote 2: Grand Pietra Hall, 19:30-20:30

John Rink: The (F)utility of Performance Analysis


John Rink studied at Princeton University, King's College London, the


Guildhall School of Music & Drama, and the University of Cambridge.
His work as Professor of Musical Performance Studies at Cambridge,
as Fellow at St John's College, and as Director of the AHRC Research
Centre for Musical Performance as Creative Practice (CMPCP) draws
upon his broad musical and musicological experience. He specialises
in performance studies, theory and analysis, and nineteenth-century
studies. He has published six books with Cambridge University Press,
including The Practice of Performance (1995), Musical Performance
(2002), and Annotated Catalogue of Chopin's First Editions (with
Christophe Grabowski; 2010). In addition to directing CMPCP, John Rink is one of four Series
Editors of The Complete Chopin: A New Critical Edition, and he directs two other research
projects: Chopin's First Editions Online (funded by the Arts and Humanities Research Council)
and the Online Chopin Variorum Edition (funded by the Andrew W. Mellon Foundation).

Considerable scepticism has been expressed in recent scholarship about the mapping from
structure to performance that was once considered ideal in the musicological literature.
Clearly the interpretive practice of performers of Western art music involves a good deal
more than translating notated symbols, theoretical constructs and analytical findings into
sound, just as listening is not simply a matter of the 'structural hearing' valorized by certain
authors. That does not mean that musical structure as conventionally understood is
irrelevant to performers or listeners, only that the relationship is more complex and less
exclusive than some have assumed. One problem has to do with a reductivist tendency to
regard musical structure as a single, seemingly static entity rather than as a range of
potential, inferred relationships between the various parameters active within a work. Not
only is it more accurate to refer to music's structures, but the origin and dynamic nature of
those structures must also be acknowledged. In that respect performers have a seminal role
to play, creating rather than just responding to musical structure in each performance. This
goes well beyond the surface-level expressive microstructure upon which much of the
literature has focused to date.
This paper will survey a range of different analytical approaches to musical performance,
including those developed by CHARM (www.charm.kcl.ac.uk) and CMPCP
(www.cmpcp.ac.uk). It will be argued that no single analysis can ever be exhaustive and that
analytical truth is both partial and contingent.


Tuesday 24 July

Keynote 3: Grand Pietra Hall, 09:30-10:30

Gerhard Widmer: Computational Music Perception: On the Importance


of Music Cognition Research for Building Musically Competent Systems

Gerhard Widmer is full professor and head of the Department of


Computational Perception at the Johannes Kepler University Linz,
and head of the Intelligent Music Processing and Machine Learning
Group at the Austrian Research Institute for Artificial Intelligence
(OFAI), Vienna. He holds degrees in computer science from the
University of Technology Vienna and the University of
Wisconsin/Madison, USA. His research interests are in computational
models of musical skills (notably: expressive music performance), and
in the application of AI and machine learning methods to real-world
musical problems. He has been awarded several research prizes,
including Austria's highest scientific award, the "Wittgenstein Prize" (2009).
In 2006, he was elected a Fellow of the European Coordinating Committee for Artificial
Intelligence (ECCAI), for his contributions to European AI Research.

Driven by a strong demand from the digital music world, engineering-oriented fields like
Music Information Retrieval (MIR) and Sound and Music Computing (SMC) have made great
technical progress in the past decade. Today, computer systems are being developed that
successfully perform complex tasks such as music detection, classification, recognition, and
tracking, some of these with substantial commercial impact. An analysis of the underlying
methods shows that these systems generally solve such tasks in ways that seem very
different from how humans approach them, which one might take to imply that we do not
need music cognition research to build musically competent systems.
In this presentation, we will take a closer look at some of these systems and will discover
that they are successful because, in effect, the problems they solve are rather easy, in certain
respects. We will then focus on a more demanding musical task and a corresponding
research field that (I claim) has not made as much progress in the past decade as one might
have hoped: computational modelling of expressive music performance. By looking at recent
work on models of expressive timing, we will identify some central questions related to
music perception that are still (again: my claim) fundamentally unsolved, and whose solution
would greatly help in the development of truly 'musical' systems.


Speed Poster Session 1: Grand Pietra Hall, 11:00-11:40
Musical expectation tension
Changing expectations: does retrospection influence our perceptions of
melodic fit?
Freya Bailes, Roger T. Dean
MARCS Auditory Labs, University of Western Sydney

Statistical models can predict listeners melodic expectations and probable musical events are
more readily processed than less probable events. However, there has been little consideration of
how such expectations might change through time, as remembering becomes necessary. Hurons
ITPRA theory proposes successive stages forming musical expectation, the last of which,
appraisal, might shift a listeners representations and expectations. The temporal trajectory of
expectations and the role of remembering and appraisal, are not well understood. The aim of this
experiment was to identify conditions in which expectation and retrospective appraisal
contribute in melodic processing. It was hypothesized that melodic expectations based on the
most recently heard musical sequence would initially influence ratings in a probe tone task, with
a shift to a retrospective analysis of the whole sequence through time. Four male and 12 female
non-musicians studying undergraduate psychology participated for course credit. An adaptation
of Krumhansls probe tone method was used, in which an isochronous melody was presented,
consisting of a sequence of five chords in one key followed by a sequence of three monophonic
notes forming an arpeggio in another key a semitone away. Following this, a probe tone was
presented immediately, 1.8s, 6s, or 19.2s later. Participants hearing the stimuli over headphones
rapidly rated the goodness of fit of the probe to the preceding context, using a 7-point scale. The
tonal relationship of the probe to both parts of the melodic sequence was manipulated. Probe
tone ratings changed significantly with time. Response variability decreased as the time to probe
presentation increased, yet ratings at every time point were significantly different from the scale
mid-point of 4, arguing against increasingly noisy data, or a memory loss, even 19.2s after
presentation of the melodic sequence. Suggestive evidence for a role of appraisal was the
development with delay time of statistical correlation between distributions of perceived fit and
predictions based on literature data on tonal pitch preference, or on the IDyoM model of
statistical probability. So, with no further musical input, listeners can continue to transform
recent musical information and so change their expectations beyond simply forgetting.
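
[Editorial illustration, not part of the study: goodness-of-fit profiles from a probe-tone task can be compared with published tonal-hierarchy predictions via a simple correlation. The rating values below are hypothetical; the profile values are from Krumhansl & Kessler (1982).]

```python
# Sketch: correlating mean probe-tone ratings with the C-major key
# profile of Krumhansl & Kessler (1982). All ratings are hypothetical.
import numpy as np

KK_MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                     2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

def profile_fit(mean_ratings):
    """Pearson r between 12 mean ratings (one per chromatic probe)
    and the key-profile predictions."""
    return float(np.corrcoef(mean_ratings, KK_MAJOR)[0, 1])

# Hypothetical mean ratings (1-7 scale) at one probe delay:
ratings = np.array([6.1, 2.5, 3.9, 2.8, 4.6, 4.2,
                    2.9, 5.5, 2.6, 3.8, 2.4, 3.1])
print(f"fit to C-major profile: r = {profile_fit(ratings):.2f}")
```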


Closure and Expectation: Listener Segmentation of Mozart Minuets

Crystal A. Peebles
School of Music, Northern Arizona University, United States

This study investigates the theoretical claim that the perception of closure stems from the
ability to predict the completion of a schematic unit, resulting in a transient increase in
prediction error for the subsequent event. In this study, participants were asked to predict
the moment of completion of mid-level formal units while listening to three complete minuet
movements by Mozart (K. 156, K. 168, and K. 173). Following this prediction task,
participants then rated the degree of finality of ending gestures from these same movements.
Generally, endings punctuated by strong cadential arrival were best predicted and received
higher ratings, suggesting that learned harmonic and melodic ending gestures contribute to
the segmentation of musical experience. These results were accentuated for participants
with formal musical training, further supporting this conclusion.


Tracking levels of closure in melodies

Andrew R. Brown,* Toby Gifford,* Robert Davidson#


*Queensland Conservatorium, Griffith University, Australia
#Dept. Music, University of Queensland, Australia

We computationally implemented the conditions of closure posited in Narmour's Implication-
Realisation (I-R) theory, and evaluated how well these formally defined notions of melodic
closure align with points of structural closure (phrase ends and score ends) in the Essen
folksong corpus. We found that three of the conditions, those relating to durational, metric and
tonal resolution, were positively correlated with points of structural closure, and that a
combined closure measure calculated from a weighted combination of these individual
measures had a strong relationship with structural closure. We suggest this provides
evidence supporting the I-R theory's claim that points of positive congruence in these
measures can give rise to a sense of repose or completion, or 'closure' in the sense of Gestalt
psychology. We provide further detail regarding the strength and independence of the
individual conditions in this regard. We conclude that these computationally tractable
measures may be of benefit in automated segmentation tasks.
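
[A minimal sketch of the kind of weighted closure measure described above; this is our illustration, not the authors' code, and the condition definitions and weights are assumptions.]

```python
# Sketch: combining per-note closure conditions (durational, metric,
# tonal) into a single weighted closure score, in the spirit of the
# I-R conditions evaluated above. Definitions and weights are
# illustrative assumptions, not the authors' implementation.
from dataclasses import dataclass

@dataclass
class Note:
    duration: float       # duration in beats
    beat_strength: float  # 1.0 = downbeat, lower = weaker position
    scale_degree: int     # 0 = tonic

def durational_closure(prev: Note, cur: Note) -> float:
    # A note longer than its predecessor suggests repose; cap at 1.0.
    return min(cur.duration / max(prev.duration, 1e-9), 2.0) / 2.0

def metric_closure(cur: Note) -> float:
    return cur.beat_strength  # stronger metric position, more closure

def tonal_closure(cur: Note) -> float:
    return 1.0 if cur.scale_degree == 0 else 0.0  # resolution to tonic

WEIGHTS = (0.4, 0.3, 0.3)  # hypothetical; could be fit on a corpus

def closure_score(prev: Note, cur: Note) -> float:
    conditions = (durational_closure(prev, cur),
                  metric_closure(cur),
                  tonal_closure(cur))
    return sum(w * c for w, c in zip(WEIGHTS, conditions))

# Example: a short weak-beat note followed by a long downbeat tonic
# scores high on combined closure.
print(closure_score(Note(1.0, 0.25, 4), Note(2.0, 1.0, 0)))
```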


Musical tension as a response to musical form

Gerhard Lock,* Kerri Kotta #


* Estonian Academy of Music and Theatre, Department of Musicology
Tallinn University, Institute of Fine Arts, Department of Music, Tallinn/Estonia
# Estonian Academy of Music and Theatre, Department of Musicology

Musical tension is a complex phenomenon, and its comprehensive description should
generally draw on a variety of different approaches. In this study, our goal is to describe
musical tension as a listener's response to formal patterns by combining perception tests
with musical analysis. To the authors of this article, musical form is essentially a hierarchical
phenomenon. The main idea behind this study is that the perception of musical tension can
be seen as dependent on the hierarchical aspects of form. We hypothesize that the
intensity of the perceived musical tension is proportional to the structural (or hierarchical)
significance of the corresponding musical event. To ease comparison of the tension
curves obtained from listening tests with score-based structural analysis, we present
three new methods: 1) analysis of salient features of music, based on discriminating
the relative importance of different types of compound musical events (i.e. impulse and
culmination, see Lock 2010) through score-based and cognitive analysis; 2) analysis of
musical energy, in which form is treated as a succession of short areas where the energy of music
(a relative degree of the activity of its carriers: rhythm, dynamics, texture, timbre, and
register) can be described in simple terms, i.e. increase, decrease, and sustain (see Kotta
2011); 3) reduction and averaging of tension curves, a method that allows taking apart
different levels of the curves obtained from listening tests with continuous data capture (via
slider controllers). In further research, we will find optimal mappings between the outputs
of the three analytical methods presented here and compare them with a traditional
formal analysis of works of post-tonal music.
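
[A rough editorial illustration of method 3 only, not the authors' procedure: continuous slider data from several listeners can be resampled to a common time base, averaged, and smoothed to expose coarser levels of the tension curve. All data below are synthetic.]

```python
# Sketch: reduction and averaging of continuous tension curves.
# Each listener's slider trace is resampled to a shared time grid,
# the traces are averaged, and a moving average exposes a coarser
# (reduced) level of the curve. Data here are synthetic.
import numpy as np

def average_curves(times, curves, grid):
    """times/curves: per-listener sample times (s) and tension values;
    grid: common time base (s). Returns the mean resampled curve."""
    resampled = [np.interp(grid, t, c) for t, c in zip(times, curves)]
    return np.mean(resampled, axis=0)

def reduce_curve(curve, window=21):
    """Moving-average smoothing as a simple level reduction."""
    kernel = np.ones(window) / window
    return np.convolve(curve, kernel, mode="same")

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 60.0, 601)            # 60 s piece, 10 Hz grid
times = [np.sort(rng.uniform(0, 60, 200)) for _ in range(5)]
curves = [rng.random(200) for _ in range(5)]  # synthetic slider values
mean_curve = average_curves(times, curves, grid)
coarse_curve = reduce_curve(mean_curve)
```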

Expectations in Culturally Unfamiliar Music: Influences of Perceptual Filter
and Timbral Characteristics

Catherine Stevens,* Barbara Tillmann,#* Peter Dunbar-Hall, Julien Tardieu, Catherine Best*
*MARCS Institute, University of Western Sydney, Australia; #Lyon Neuroscience Research Center,
CNRS-UMR 5292, INSERM U1028, Université de Lyon, France; Conservatorium of Music, The
University of Sydney, Australia; Université de Toulouse UTM, France

With exposure to a musical environment, listeners become sensitive to the regularities of
that environment. These acquired perceptual filters likely come into play when novel scales
and tunings are encountered. i) What occurs with unfamiliar timbre and tuning? ii) Are
novice listeners sensitive to both in- and out-of-scale changes? iii) Does unfamiliar timbre
make a difference to judgments of completeness? iv) When changes are made, is perceived
coherence affected and how much change disrupts judged cohesion of unfamiliar music? An
experiment investigated the effect of unfamiliar timbre and tuning on judgments of melody
completeness and cohesion using Balinese gamelan. It was hypothesized that, when making
judgments of musical completeness, novice listeners are sensitive to in- and out-of-scale
changes and this is moderated by an unfamiliar timbre such as sister or beating tones.
Thirty listeners with minimal experience with gamelan rated coherence and completeness of
gamelan melodies. For the out-of-scale endings, the gong tone was replaced by a tone outside
the scale of the melody; for in-scale endings, the gong tone was replaced by a tone belonging
to the scale of the melody. For completion ratings, the out-of-scale endings were judged less
complete than the original gong and in-scale endings. For the novel 'sister' melodies, in-scale
endings were judged as less complete than the original gong endings. For coherence,
melodies using the original scale tones were judged as more coherent than melodies
containing partial or total replacements. The results provide evidence of perceptual filters
influencing judgments of novel tunings.


ERP Responses to Cross-cultural Melodic Expectancy Violations

Steven M. Demorest,* Lee Osterhout#


*Laboratory for Music Cognition, Culture & Learning, School of Music, University of Washington,
USA
#Cognitive Neuroscience of Language Lab, Department of Psychology, University of
Washington, USA

The purpose of this study was to use ERP to test cultural awareness of out-of-scale notes in
Western and North Indian music. We measured late positive ERP responses to out of scale
notes in both listening conditions as well as a rating of the congruousness of the melody. US-
born participants listened to synthesized presentations of 30 excerpts each of European folk
songs and North Indian ragas. All melodies were heard in their original form and in deviation
form. There was a significant main effect for culture and condition with deviation melodies
rated as less congruous than the original versions, and Indian music less congruous than
Western. A significant condition by culture interaction indicated that listeners were less
sensitive to deviations in the culturally unfamiliar melody context. There was a significant
and widely distributed P600 response to out-of-scale notes in the Western condition and a
much smaller but still significant P600 effect in the Indian condition. Congruousness ratings
suggest that listeners are less sensitive to melodic expectancy violations in the music of
unfamiliar cultures compared to their own culture. ERP data were more mixed, with subjects
exhibiting a late positive component in response to deviations in both cultural conditions,
but less robust in the unfamiliar culture. The results provide support for the idea that
listeners can internalize tonal structures in culturally unfamiliar music, but there are
possible confounds between these two musical systems. We discuss the implications of these
findings for theories on cultural versus universal factors in music cognition.


A pilot investigation on electrical brain responses related to melodic
uncertainty and expectation

Job P. Lindsen*, Marcus T. Pearce#, Geraint Wiggins#, Joydeep Bhattacharya*


*Department of Psychology, Goldsmiths, University of London, UK
#Centre for Digital Music, Queen Mary, University of London, UK

Forming an expectation of how music unfolds in time is inherent to listening to music.
However, not all melodic contexts allow for the generation of strong expectations about how
those melodies will continue, i.e. melodic contexts differ in the uncertainty they create about
the melodic continuation. In music there are roughly three possibilities: A melody sets up a
strong expectation that is confirmed by the expected note, or a strong expectation that is
violated by an unexpected note, or no strong expectation in which case the following note is
likely to be unexpected. The aim was to identify distinct brain responses reflecting
uncertainty of melodic continuation, and unexpectedness of musical notes. We used our
statistical learning model to estimate, note by note, the uncertainty of the expectation and the
unexpectedness of the note itself. EEG data were recorded while participants (musicians, n=20)
listened to monophonic and isochronous, but ecologically valid, melodies. Unexpectedness of
notes was negatively associated with frontal EEG amplitude around 120 ms after note
onset, followed by a positive frontocentral relationship between 200 and 300 ms. Uncertainty
was also associated with an early negative relationship with frontal EEG amplitude, followed
by a recurrent posterior negative relationship ~470 and ~580 ms after note onset. These
findings provide the first evidence of neural responses associated with the generation of melodic
expectations, and altogether support our claim that statistical learning produces
information-theoretic descriptions of music that are associated with distinct patterns of
neural activity.
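
[For concreteness, an editorial sketch of the two information-theoretic quantities involved: unexpectedness as the surprisal of the note that occurred, and uncertainty as the entropy of the predictive distribution. The distribution below is hypothetical; the study derives these values from its statistical learning model.]

```python
# Sketch: surprisal (unexpectedness) and entropy (uncertainty) from
# a predictive distribution over the next note. The distribution is
# hypothetical; the study computes it with a statistical learning
# model of melodic expectation.
import math

def surprisal(p: float) -> float:
    """Unexpectedness of an event with probability p, in bits."""
    return -math.log2(p)

def entropy(dist: dict) -> float:
    """Uncertainty of the predictive distribution, in bits."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical note-continuation probabilities after some context:
dist = {"C4": 0.50, "D4": 0.20, "E4": 0.15, "G4": 0.10, "B3": 0.05}
print(f"uncertainty H = {entropy(dist):.2f} bits")            # ~1.92
print(f"surprisal of E4 = {surprisal(dist['E4']):.2f} bits")  # ~2.74
```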


Neural and behavioural correlates of musical expectation in congenital amusia
Diana Omigie, Marcus Pearce, Lauren Stewart

Goldsmiths, University of London, UK



Music listening involves using previously internalized regularities to process incoming musical
structures. Congenital amusia, a disorder believed to affect 4% of the population, is typically
associated with insensitivity to unexpected musical events. However, recent evidence suggests
that despite showing striking impairment on tasks of musical perception requiring explicit
judgement, these individuals may possess intact implicit knowledge of musical regularities. The
present study uses two analogous paradigms to measure the formation of melodic expectations at
an implicit and explicit level respectively. We test the hypothesis that those with amusia are able
to demonstrate intact melodic expectations when probed implicitly, but are impaired when
explicit judgements are required. Further, we use EEG to compare the neural correlates of
melodic expectation in amusics versus controls. A computational model of melodic expectation
was used to identify probe notes varying in expectedness in real melodies. In an implicit task,
amusic and control participants made speeded, forced-choice discriminations concerning the
timbre of a cued target note in the context of a melody while in an explicit task, they used a 1-7
rating scale to indicate the degree to which the pitch of the cued target note was expected or
unexpected. In an EEG study, electrophysiological recordings were taken while participants
listened to the same melodies, with the task of detecting occasional timbral deviants introduced
to keep participants' attention levels constant. As predicted, amusic participants were
significantly worse than controls at explicitly differentiating between high and low probability
notes. However both groups showed faster responses to high probability than low probability
notes in the implicit task, indicating that they found these notes more expected. Further, ERP
analysis revealed that while an early negative response, which was highly sensitive to note
probability, was more salient in controls than in amusics, both groups showed a delayed P2 to low
relative to high probability notes, suggestive of the increased processing time required for these
events. The current results, showing spared, albeit incomplete, processing of melodic structure,
add to previous evidence of implicit pitch processing in amusic individuals. The finding of an
attenuated early negative response in amusia is in line with studies showing a close relationship
between the amplitude of such a response and explicit awareness of musical deviants. Finally, the
current study supports the notion that early pre-attentive mechanisms play an
important role in generating conscious awareness of improbable events in the auditory
environment.

Speed Poster Session 2: Crystal Hall, 11:00-11:40


Audio & audio-visual perspectives

Optic and Acoustic Symmetry Perception

Vaitsa Giannouli
Department of Psychology, Aristotle University of Thessaloniki, Greece

The aim of this paper is to investigate the perception of optic and tonal acoustic symmetry.
Twenty-eight volunteers (14 musicians and 14 non-musicians) aged 18-67 participated in the
study. The participants were examined individually, and the tests were administered in varying
order across participants. Half of the participants were informed at the beginning of the
examination about the possible kinds of symmetry. Also, half of the participants were presented,
before the acoustic stimuli, with a similar kind of symmetry in the optic stimuli. The examination
materials were: the mirror-reversal letter task from PALPA, the paper folding task from ETS, the
spatial ability test from ETS, Benton's Judgment of Line Orientation test, digit span (forward and
backward), and a newly constructed test that includes a series of symmetrical and asymmetrical,
big and small, optic and acoustic stimuli. Besides the registration of participants' response time
(RT) and the correctness of their responses, measurements were also taken, using Likert
scales, of the metacognitive feeling of difficulty, the metacognitive feeling of confidence, and
the aesthetic judgments for each and every one of the optic and acoustic stimuli.
The majority of the participants (young - middle-aged, women - men, individuals with music
education and without music education) did not show statistically significant differences in their
scores in the visuospatial tests and the memory tests, while at the same time they had a
homogeneously high performance (with almost zero deviation) for all the optic symmetrical and
asymmetrical stimuli. For all the acoustic stimuli, a statistically significant difference was found
for the participants with music education, not only for the cognitive processing of symmetry, but
also for the metacognitive processing. The preference proposed on the basis of the literature (in
terms of response correctness and reaction time) for optic stimuli mirror-symmetrical around a
vertical axis was not confirmed, nor was there any confirmation of a preference for repetitive
acoustic stimuli. What was found were more positive aesthetic judgments for the symmetrical
formations versus the asymmetrical ones for both senses. Finally, no cross-modal priming
interaction was found, nor any influence of a prior explanation of the kinds of symmetry. These
preliminary data support the independence of the underlying mechanisms of optic and
acoustic symmetry perception, with the latter probably being a non-automatic and
possibly learned process.


Asymmetry of audio-visual interaction in multimedia works

Teruo Yamasaki
Osaka Shoin Women's University

Many studies have investigated the interaction between musical and visual materials in
multimedia works, and some have suggested an asymmetry in the direction of the interaction:
the effect of music on the impression of visual materials was larger than the effect of visuals
on the impression of musical materials. This might indicate that musical and visual impressions
are formed through different emotional processes. In these studies, however, the intensity of
the impressions of the two kinds of material was not controlled. Therefore, the asymmetry might
be caused not by the modality of the materials but by the intensity of their impressions. This
study investigates whether the asymmetry is found even under conditions where the intensity
of the materials is controlled. In a preliminary experiment, fifteen music excerpts and fifteen
paintings are evaluated on valence and arousal, and five music excerpts and five paintings are
chosen as stimuli for the main experiment: musical excerpts or paintings with positive valence
and high arousal (+/+), positive valence and low arousal (+/-), negative valence and high
arousal (-/+), negative valence and low arousal (-/-), or neutral valence and medium arousal
(0/0). In addition, musical excerpts and paintings with the same descriptor (for example, a
musical excerpt with +/+ and a painting with +/+) are treated as having the same degree of
valence and arousal. In the main experiment, musical excerpts and paintings are combined and
presented, and participants are asked to evaluate the musical or visual impression of the
combined stimuli. By comparing the results of the main experiment with those of the preliminary
experiment, the effect of the musical excerpts on the paintings and the effect of the paintings
on the musical excerpts are analyzed respectively. These results will be discussed, confirming
whether the asymmetry between the sizes of the musical and visual effects exists and, if so,
exploring its cause.


Congruency between music and motion pictures in the context of video games:
Effects of emotional features in music
Shinya Kanamori, Ryo Yoneda, Masashi Yamada
Graduate School of Engineering, Kanazawa Institute of Technology, Japan

In the present study, two experiments are conducted. The first experiment, using one hundred
pieces of game music, reveals that the impression space of game music is spanned by
'pleasantness' and 'excitation' axes. The second experiment shows that the congruency of a
moving picture and a musical tune does not decrease, and the whole impression does not
change significantly, even if a tune is replaced by another tune with a similar impression.
These results suggest that an archive in which various tunes are plotted on the impression
plane spanned by the pleasantness and excitation axes is useful for communication among
game creators and engineers when designating a piece of music for a scene in a video game.


Complex Aural and Visual Stimuli: Discerning Meaning in Musical Experiences

Dale Misenhelter
University of Arkansas, USA

This meta-analysis explores findings from preference and response studies. Several of the
studies utilized traditional major musical works, including the Bach Passacaglia,
Beethoven's Seventh Symphony, and Stravinsky's Rite of Spring, as well as select contemporary
popular compositions. Variables considered in the studies included the experience level of
participants (often characterized as musicians and non-musicians), musical elements
(tension and release, textural and dynamic considerations, consonance and dissonance, etc.),
and visual elements as changes in affect (dramatic and temporal events, dance, direction,
speed of travel, tension and repose, artistic considerations, etc.). A primary research
question concerns focus of attention: the ability of listeners to distinguish between
perceived musical elements or other stimuli while concurrently attending and responding, a
process loosely termed "multi-tasking." While there is considerable research on listeners'
ability to discriminate and/or prioritize among elements in audio-only environments,
research on discerning among multiple elements in audio-visual stimuli seems
comparatively minimal. Within aural models, it would seem that less experienced listeners
attend to individual components or concepts of a musical selection, while experienced
listeners are able to process more complex information. With an aural-visual model, data
suggest negative responses to negative visual stimuli (despite their consistency with the
musical content), which raises issues of unclear definitions regarding what constitutes
aesthetic response, as well as the possibility of participants simply responding to a demand
characteristic, i.e., responding as they assumed was expected.


Interaction of Audiovisual Cues in the Perception of Audio Trajectories

Georgios Marentakis,* Stephen McAdams#


* IEM, Universität für Musik und Darstellende Kunst Graz, Austria
# CIRMMT, Department of Music, McGill University, Quebec, Canada

We present a study that investigates how the presence of visual cues affects the perception
of musical spatial sound trajectories and the way listeners perceive a musical performance.
A first experiment found that congruent visual feedback from the hand movements of a
performer controlling the location of sound in space assists listeners in identifying spatial
sound trajectory shapes. Building on this result, we ask whether this was due to the
integration of the visual cues with the auditory ones, or whether participants simply
attended to the visual cues and ignored the auditory ones. Participants watched a video of
the performance gestures while listening to the spatial sound trajectories, and identification
performance was measured in conditions that manipulated presentation modality, the
sensory focus of attention, the attentional process (selective or divided), and the congruency
of audiovisual cues. Although we found that congruent visual stimulation improves
identification performance even when listeners attend selectively to the auditory stimulus,
we also found a tendency to focus on vision under divided attention conditions, which
explains the results of the first experiment, in which the sensory focus of attention was
not controlled. In such cases, auditory movement information is overwritten. It is therefore
important that listeners maintain an auditory focus of attention when gestural control of
spatialization is employed on stage, as a vision-oriented strategy will bias auditory
movement perception in cases of incongruent stimulation and limit the resources available
for the interpretation of the musical material.


Cross-modal Effects of Musical Tempo Variation and on Musical Tempo in
Audiovisual Media
Friedemann Lenz
Department of Musicology and Music Education, University of Bremen, Germany

Music is an acoustical phenomenon that forms part of a complex multisensory setting. One
line of research focusing on this issue concerns background music and music in different
kinds of audiovisual media. Research on audiovisual interaction shows that visual spatial
motion can induce percepts of auditory movement and that visual illusions
can be induced by sound. Studies on background music indicate that musical tempo can
be a factor in cross-modal interactions. In the present study, three different effects of
musical tempo variation in audiovisual media are discussed. First, it is assumed and tested
that musical tempo variation can influence the perception of the velocity of visual objects in
an audiovisual medium, and vice versa. The second assumption is that the perception of time
in movies depends partially on the variation of musical tempo. The third question deals with
the influence of musical tempo on the emotions felt by recipients while watching an
audiovisual medium. Several computer-aided tests with audiovisual stimuli were conducted.
The stimuli consisted of videos of a conveyor belt with moving boxes and a musical
soundtrack with a simple melody. Several pretests on the three hypotheses were conducted.
There are hints that musical tempo can change the perception of visual velocity, but not
vice versa.


When Music Drives Vision: Influences of Film Music on Viewers' Eye Movements

Karin Auer,* Oliver Vitouch,* Sabrina Koreimann,* Gerald Pesjak,# Gerhard Leitner,# Martin
Hitz#
*Dept. of Psychology, University of Klagenfurt, Austria
#Interactive Systems Group, University of Klagenfurt, Austria

Various studies have shown the co-determining strength that film music has on viewers'
perception. We try to show here that the cognitive processes of watching a film, observed
through viewers' scanpaths and eye-movement parameters such as the number and duration
of fixations, differ when the accompanying film music is changed. If this holds, film music
does not just add to a holistic impression; rather, the visual input itself is actually different
depending on features of the soundtrack. Two film clips, 10 seconds each, were presented
with three different music conditions (horror music, documentary music, no music) in a
between-subjects design. Clip 2 additionally contained a cue mark (a red X in the bottom left
corner, shown for 1 s). Participants' scanpaths were recorded using an ASL H6 head-mounted
eye-tracking system based on corneal reflection of infrared light. The resulting scanpaths of
N = 30 participants showed distinct patterns depending on the music condition. Specific
trajectory categories were found for both film clips (five for clip 1, nine for clip 2). Systematic
differences (p < .05) could be shown in most of these categories and variables. The additional
cue mark was consciously perceived significantly more often in both music conditions than
in the silent condition. Our results suggest that the slogan "What you see is what you hear"
can be true on a very fundamental, first-layer level: visual input varies with different scores,
so that viewers are, strictly speaking, no longer seeing the same film.


Emotional Impact of Musical/Visual Synchrony Variation in Film
Andrew Rogers
University of Huddersfield, United Kingdom

The emotional impact of synchronous musical and visual prominences within the cinematic
experience awaits thorough empirical evaluation. Film composition is defined here as a
genre of stereotypes, whose methodologies are not feasibly subject to significant
redevelopment. As a consequence, the research focuses on improving components of the
audience-recognisable functions of film music. Subjects graded cinematic clips whose musical
elements varied in their synchronous interaction with visual prominences. A positive
response to more frequent synchronisation between music and film was found. Perceptual
expectancy, attention, and multisensory integration are central to the analysis of the
findings.
Speed Poster Session 3: Dock Six Hall, 11:00-11:40
Composition & improvisation
An information-theoretic model of musical creativity

Geraint A. Wiggins
Centre for Digital Music, Queen Mary, University of London

I propose a hypothetical computational model of spontaneous musical creativity; that is, not
deliberate musical problem solving (e.g., rearranging a score for a smaller orchestra), but the
production of original musical ideas without reasoning. The theory is informed by
evolutionary thinking, both in terms of the development of its mechanisms and in terms of
the social evolution of music. Hitherto, no computational model of musical creativity has
made a distinction between spontaneous creativity and the deliberate application of explicit
design principles; nor has there been a computational model of musical creativity that
subsists in an explicit, coherent relationship with models of other mental processing. The
present hypothetical model suggests a mechanism which may underlie general implicit
reasoning, including the production of language. That mechanism arises from simple
statistical principles, which have been shown to apply in perceptual models of music and
may therefore reasonably be supposed to be available in the mind/brain; it consists in the
moderation of input to the Global Workspace via the interaction of information-theoretic
quantities. The proposed high-level model, instantiated with appropriate sub-component
models of learning and production, explains the origins of musical creativity and their
connection with speech/language, narrative, and other time-based creative forms. It also
supplies a model of the mediation of information as it becomes available to consciousness.
It may therefore have implications outside music cognition, for general ideation.
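
As a concrete illustration of the kind of mechanism proposed here, consider the following
minimal Python sketch (emphatically not Wiggins's own model): each note's information
content is estimated from bigram statistics, and only events whose surprisal exceeds a
threshold are admitted to a hypothetical workspace. The training melody, smoothing scheme,
and threshold value are all invented for illustration.

import math
from collections import Counter, defaultdict

def bigram_counts(melody):
    # Count bigram continuations in a training melody (MIDI pitch numbers).
    counts = defaultdict(Counter)
    for a, b in zip(melody, melody[1:]):
        counts[a][b] += 1
    return counts

def information_content(counts, vocab, context, note):
    # Shannon information content -log2 P(note | context), add-one smoothed.
    total = sum(counts[context].values())
    p = (counts[context][note] + 1) / (total + len(vocab))
    return -math.log2(p)

def workspace_gate(counts, vocab, melody, threshold=2.5):
    # Admit only events whose surprisal exceeds the (invented) threshold.
    admitted = []
    for a, b in zip(melody, melody[1:]):
        ic = information_content(counts, vocab, a, b)
        if ic > threshold:
            admitted.append((b, round(ic, 2)))
    return admitted

training = [60, 62, 64, 65, 64, 62, 60, 62, 64, 65, 67, 65, 64, 62, 60]
model, vocab = bigram_counts(training), set(training)
print(workspace_gate(model, vocab, [60, 62, 64, 66, 65, 64, 60]))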


Algorithmic Composition of Popular Music

Anders Elowsson, Anders Friberg


Speech, Music and Hearing, KTH Royal Institute of Technology, Sweden

Human composers have used formal rules for centuries to compose music, and an
algorithmic composer that composes without the aid of human intervention can be seen as
an extension of this technique. An algorithmic composer of popular music (a computer
program) has been created with the aim of better understanding how the composition
process can be formalized and, at the same time, of better understanding popular music in
general. With the aid of statistical findings, a theoretical framework for the relevant methods
is presented. The concept of a Global Joint Accent Structure is introduced as a way of
understanding how melody and rhythm interact to help the listener form expectations about
future events. The methods of the program are presented with references to supporting
statistical findings. The algorithmic composer creates a rhythmic foundation (drums), a
chord progression, a phrase structure, and finally the melody. The main focus has been the
composition of the melody, whose generation is based on ten different musical aspects,
which are described. The resulting output was evaluated in a formal listening test in which
14 computer compositions were compared with 21 human compositions. Results indicate a
slightly lower score for the computer compositions, but the differences were not statistically
significant.
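
The staged pipeline described above (rhythm, chords, phrases, then melody) can be
caricatured in a few lines of Python. The chord set and rules below are invented for
illustration and are not taken from Elowsson and Friberg's program, which relies on
statistically grounded methods.

import random

CHORDS = {"C": [60, 64, 67], "Am": [57, 60, 64], "F": [53, 57, 60], "G": [55, 59, 62]}

def compose_progression(n_bars=4):
    # Choose one chord per bar; here uniformly at random for simplicity.
    return [random.choice(list(CHORDS)) for _ in range(n_bars)]

def compose_melody(progression, notes_per_bar=4):
    # Build the melody mostly from chord tones, with occasional neighbours.
    melody = []
    for chord in progression:
        for _ in range(notes_per_bar):
            pitch = random.choice(CHORDS[chord])
            if random.random() < 0.25:  # occasional non-chord neighbour tone
                pitch += random.choice([-1, 1])
            melody.append(pitch)
    return melody

progression = compose_progression()
print(progression, compose_melody(progression))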


Comprehensive and Complex Modeling of Structural Understanding, Studied on an Experimental Improvisation

Olivier Lartillot,* Mondher Ayari#


*Finnish Centre of Excellence in Interdisciplinary Music Research, Finland
#IRCAM-CNRS / University of Strasbourg, France

Music perception and cognition are governed by complex interdependencies between bottom-up
and top-down processes at various cognitive levels, which have not yet been fully understood
and described. Cognitive and computational descriptions of particular facets of music listening
remain insufficient if they are not integrated into a comprehensive model. In the long term, we
aim to propose a comprehensive and complex cognitive model of the emergence of structures in
music listening and to test its potential by running a computational implementation on
elaborate music. The study presented in this paper is part of a broader project whose general
aim is to collect an experimentally controlled jazz improvisation with a view to studying jazz
listeners' understanding of that piece. An eminent jazz guitarist, Teemu Viinikainen, was invited
to play an original improvisation while following a few general heuristics that we defined
beforehand, concerning the use of pauses, repetitions, accentuations, and various ways of
evolving the modal discourse. During a subsequent interview, while listening progressively to the
recording, the musician gave a detailed a posteriori analysis, talking and playing examples on
his guitar, which was recorded as well. A systematic analysis was performed exhaustively on the
piece, starting from a manual transcription, followed by motivic, harmonic, rhythmic, and
structural analyses. Our previous complex cognitive model of the structural analysis of music
has been extended further and implemented in the Matlab programming environment. This
extended model starts from the audio recordings and performs transcription and higher-level
analyses together, with bottom-up and top-down interactions between low-level and high-level
processes. The study challenges the traditional dichotomy between transcription and structural
analysis and suggests instead a multi-layer structuring of events at various scales (notes,
gestures, motifs, chords, phrases, etc.), in which higher-level structures contextually guide the
progressive discovery of lower-level elements. The model will be further validated and enriched
through a comparison with the musician's analysis and with jazz listeners' annotations of the
piece collected experimentally.


Vocal improvisations of Estonian children

Marju Raju, Jaan Ross


Department of Musicology, Estonian Academy of Music and Theatre, Estonia

Even a child's passive encounter with Western tonal music is capable of building certain
expectations as to the set of tonal and temporal composition rules that define which musical
patterns are acceptable for the idiom. This presentation is aimed at studying the different
strategies children use to approach the task of vocal improvisation. For the data collection, the
Test Battery from the Advancing Interdisciplinary Research in Singing (AIRS) project was
applied to Estonian children (N = 26, 17 girls and 9 boys, aged 4 to 12). In this presentation,
results of two component tasks of the Test Battery (to finish a melody and to compose a song
after a picture) are presented. For analysis, successful cases from both components were
combined into one dataset with a total of 32 vocal improvisations, which were then grouped into
four types according to two main features: (1) how well they fit the Western tonal musical
canon and (2) whether the implied composition rules were applied explicitly or implicitly. The
distribution of improvisational songs among these four types seemed to be influenced more by a
child's previous encounter with music than by her/his age. In both tasks, the majority of
children seem to be strongly influenced by the Western musical canon, as their improvisations
sound "classical", as we expect children's songs to sound. In addition to analyzing the vocal
material, the process of performance must also be considered, as children use different
strategies to reach the goal.

The Ideational Flow: Evaluating a New Method for Jazz Improvisation Analysis

Klaus Frieler,* Kai Lothwesen#, Martin Schütz*


*Institute of Musicology, University of Hamburg, Germany
#University of Music and Performing Arts, Germany

In two recent studies (Lothwesen & Frieler, 2011; Schütz, 2011), a new approach to the
analysis of jazz improvisation was proposed based on the concept of ideational flow. Jazz
piano solos were segmented into gapless sequences of musical ideas, thus settling on a mid-
level of analysis, as opposed to more traditional approaches in which jazz improvisations are
either analysed manually with classical methods or statistically on a single-note level (see
Pfleiderer & Frieler, 2010, for an overview). Our approach is inspired by Grounded Theory
(Glaser & Strauss, 1967) and by methods of qualitative content analysis (Mayring, 2000). It
supposes a seamless chain of underlying musical ideas which are shaped into a musical
surface during improvisation. Indeed, several musical ideas could be identified, which turned
out to be quite diverse categories, ranging from thematic/motivic variations and various
kinds of melodic runs to purely rhythmical parts and even emptiness. In this study, we aim
at further validation of the method by cross-evaluating a set of selected analyses of jazz
piano improvisations drawn from the previous studies, thereby objectifying the method
with the overall goal of standardisation.


Improvisation in Jazz: Stream of Ideas-Analysis of Jazz Piano-Improvisations

Martin Schütz
Institute of Musicology, University of Hamburg, Germany

The "stream of ideas" analysis embodies a new way to analyze jazz improvisations. The core
of the method, which was developed within an empirical research project, is to translate an
improvisation, at a mid-level, into a sequence of melodic phrases/patterns (= ideas). On the
basis of methods of qualitative content research and grounded theory, an expandable and
differentiable dynamic system of categories was created to represent every kind of melodic
phrase that occurred within the 30 examined improvisations. The underlying improvisations
were the result of an experiment with five jazz pianists, who were asked to improvise in
several sessions on the same collection of different jazz tunes. Afterwards, each
improvisation was categorized according to the stream of ideas analysis and presented as a
sequence of the ideas used. After analyzing the 30 improvisations, the system of categories
consisted of nine main categories (= basis ideas), which covered every melodic phrase that
appeared. The nine basis ideas are defined with regard either to aspects of melodic contour
or to intra-musical aspects (variation of the theme, creation of motifs, etc.). Furthermore,
the stream of ideas analysis makes it possible to compare improvisations objectively between
different musicians or tunes by using statistical methods (e.g., frequency distributions). It
could be shown that each of the five participating pianists used a quite similar combination
of preferred basis ideas (an individual vocabulary) to create his different improvisations
(takes) on the same underlying tune. In addition, a connection between the different tunes
and the frequency of certain ideas was recognized.


Observing and Measuring the Flow Emotional State in Children Interacting with the MIROR Platform

Anna Rita Addessi,1 Laura Ferrari,2 Felice Carugati3


1,2 Dept. of Music and Performing Arts, University of Bologna, Italy
3 Dept. of Psychology, University of Bologna, Italy

This paper introduces a study aiming to measure the Flow state (Csikszentmihalyi, 1996) of
children playing with the MIROR-Improvisation prototype, an Interactive Reflexive Musical
System (IRMS) implemented in the framework of the EU-ICT project MIROR (Musical
Interaction Relying On Reflexion). IRMS have been described as "Flow machines", thanks to
their ability to imitate the style of the human playing a keyboard (Pachet, 2006). A Flow
grid was created with the Observer software (Noldus). The basic idea of this grid is that the
observer does not register the Flow state itself, but rather the variables and the intensity of
each variable. The presence of the Flow state is instead measured by means of an automatic
process in the Observer based on several constraints: following Csikszentmihalyi, when the
level of all variables is high, the presence of Flow is indicated. Twenty-four children (4 and 8
years old) carried out 3 sessions playing a keyboard on 3 consecutive days. In every session,
all children played the keyboard with and without MIROR-Impro, alone and with a friend.
One group of children played the system with the set-up "Same" and another group with the
set-up "Very different" (with the "Same" set-up, the system's reply is more similar to the
child's input). The videos collected were analysed with the Flow grid. The results show that
the Flow state is higher when the children play with MIROR-Impro, with the "Same" set-up,
and among the 8-year-old children. The difference between sessions is not significant. These
results support the hypothesis that IRMS and reflexive interaction can generate an
experience of well-being and creativity. The Flow grid worked effectively, and it was possible
to identify some aspects of the system to be improved. Some limitations are discussed with a
view to further adjustments of the grid.

A Computational Method for the Analysis of Musical Improvisations by Young Children and Psychiatric Patients with No Musical Background

Christina Anagnostopoulou, Antonis Alexakis, Angeliki Triantafyllaki


Department of Music Studies, University of Athens, Greece

Improvisation is a common form of musical practice, and yet it remains the least studied or
understood from a music-analysis point of view. When populations with no musical
background engage in musical improvisation (such as young children or patients in therapy
settings), the analysis of the musical aspects becomes more challenging: the possible lack of
commonly learned musical schemata and related technical skills requires the introduction of
methods of analysis which can deal with these peculiarities. In this paper we propose a
computational method for analysing such types of improvisations and apply it to a small
number of case studies. The analytical method is a type of semiotic analysis, in which
repetition, variation, and transformation are brought forward. Musical parameters have to be
defined, and a computational tool is built to reveal interesting patterns that repeat within the
various musical parameters. The method is applied to the improvisations of six eight-year-old
children and two psychiatric patients with psychotic syndromes. For their improvisations they
use the machine-learning-based system MIROR-IMPRO, developed within the FP7 European
project MIROR, which can respond interactively by using and rephrasing the user's own
material. The results point towards the usefulness of more abstract types of representations
and bring forward several general common features across these types of improvisations,
which can be related to gestures.
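
The pattern-finding step that such a tool performs can be sketched generically in Python
(this is not the authors' actual system): collect every short n-gram that repeats within one
musical parameter, here an invented sequence of melodic intervals in semitones.

from collections import Counter

def repeated_patterns(sequence, min_len=2, max_len=4):
    # Return every n-gram (as a tuple) that occurs at least twice.
    found = Counter()
    for n in range(min_len, max_len + 1):
        for i in range(len(sequence) - n + 1):
            found[tuple(sequence[i:i + n])] += 1
    return {pattern: count for pattern, count in found.items() if count >= 2}

intervals = [2, 2, -1, 2, 2, -1, 5, -5, 2, 2, -1]  # hypothetical improvisation
print(repeated_patterns(intervals))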

Speed Poster Session 4: Timber I Hall, 11:00-11:40
Emotion & communication

How intense experiences with music influence people's way of life


Thomas Schäfer, Mario Smukalla, Sarah-Ann Oelker
Department of Psychology, Chemnitz University of Technology, Germany

Music can change our lives. As true as this notion may seem, we have little sure knowledge
about what it actually means. Strong emotional experiences or peak experiences with music
have proven to be of high significance for the people who have them. The authors
investigated the long-term effects of such experiences on people's way of life, using narrative
interviews and a grounded-theory approach to develop a process model that describes the
nature of intense musical experiences (IMEs) and their long-term effects. The most
important results are that (1) IMEs are characterized by altered states of consciousness,
which lead to the experience of harmony and self-realization; (2) IMEs leave people with a
strong motivation to attain the same harmony in their daily lives; (3) people develop several
resources during an IME, which they can use afterward to adhere to their plans; and (4) IMEs
cause long-term changes in people's personal values, their perception of the meaning of life,
social relationships, engagement and activities, and consciousness and development. The
authors discuss the results as they relate to spirituality and altered states of consciousness,
and draw 10 conclusions from the process model that form a starting point for quantitative
research on the phenomenon. The results suggest that music can indeed change our lives, by
making them a bit more fulfilling, spiritual, and harmonious.

Anxiety, flow and motivation: students' strong and intense experiences of performing music
Alexandra Lamont
Centre for Psychological Research, Keele University, United Kingdom

Many music students and professionals experience a number of health-related problems
connected to performing, but performing music also has the potential to engender high
levels of wellbeing. Memories of early performing experiences may be important in
determining continued involvement in music. However, in a volunteer sample of Swedish
adults, Gabrielsson (2011) found that participants chose music listening episodes more
frequently than music performing ones. This presentation explores recollections of
experiences of performing music and interprets them in relation to theories of happiness and
wellbeing. Twenty-seven university students (median age 20) gave free written reports of
their strongest, most intense experiences related to music performing. Accounts were
content-analysed using Gabrielsson's Strong Experiences of Music Descriptive System.
Results were also analysed thematically for the three components of happiness (hedonism,
engagement, and meaning) using an idiographic approach. Most memories were of
performances to an audience, with the majority reflecting positive experiences of familiar
music not chosen by the participants. Accounts tended to emphasise either flow and meaning
achieved through personal identity, or pleasure and meaning achieved through group
identity, and did not always explicitly mention a hedonic state. Four profiles emphasising
different combinations of pleasure, engagement, and meaning are identified. The importance
of the eudaimonic route to happiness and wellbeing is encouraging in showing that valuable
and rewarding experiences have the potential to sustain long-term motivation to engage
with practical music-making. Music performance seems to be a qualitatively different
experience from music listening in that it can embody both negative and positive emotions.


A Model of Perceived Musical Affect Accurately Predicts Self-Report Ratings

Joshua Albrecht
School of Music, Ohio State University, USA

A new method of collecting self-report assessments of the perceived affective content of
short musical passages is described in Albrecht & Huron (2010). That study used a procedure
termed the progressive exposure method, in which a large passage is divided into discrete
five-second excerpts. These excerpts are then presented in random order, and participants
evaluate the perceived affective content of each short passage. In that study, 110
participants used the progressive exposure method to analyze the second movement of
Beethoven's Pathétique sonata. The results provide a mosaic portrait of eleven affective
dimensions across the movement. In the present study, a model of perceived affective
content is built by measuring sixteen different musical features of each excerpt and using
these measurements as predictors of participant ratings. The model is used to predict
participant evaluations of the same eleven affective dimensions for fifteen excerpts from
different Beethoven piano sonatas. To anticipate the results, the predictions for each of the
fifteen excerpts along each of the eleven affective dimensions are significantly correlated
with participant ratings.
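
The modelling strategy can be sketched in a few lines of Python: feature measurements for
each excerpt serve as predictors of mean ratings via ordinary least squares. The numbers and
the two features below are invented placeholders; the actual study used sixteen features and
eleven affective dimensions.

import numpy as np

# rows = excerpts; columns = hypothetical features (e.g., tempo, mean pitch)
features = np.array([[72, 60.1], [96, 64.3], [54, 57.8], [120, 66.0], [88, 62.5]])
ratings = np.array([2.1, 3.4, 1.6, 4.2, 3.0])  # mean ratings on one affect scale

X = np.column_stack([np.ones(len(features)), features])  # add an intercept term
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)

new_excerpt = np.array([1.0, 100.0, 63.0])  # intercept, tempo, mean pitch
print("fitted ratings:", np.round(X @ coef, 2))
print("prediction for a new excerpt:", round(float(new_excerpt @ coef), 2))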

Exploring the role of the performer's emotional engagement with music during a solo performance

Catherine Foxcroft,* Clorinda Panebianco-Warrens#


*Department of Music and Musicology, Rhodes University, South Africa
#Department of Music, Pretoria University, South Africa

Research shows that performers' emotional engagement with the music they are performing
may play a crucial role in the preparation of an expressive performance. Yet optimal
performance requires a relaxed concentration that is incompatible with experiencing
certain emotions. To what extent, then, do performers engage emotionally with the music
during an emotionally expressive performance? This research aimed to explore the extent to
which pianists emotionally engage with the music they are performing during a solo recital.
The IPA research method focused on the performers' perspectives on their experienced
emotional engagement while performing. Ten concert pianists (5 students and 5
professionals) were individually interviewed directly after a solo recital lasting
approximately 60 minutes. The interviews posed questions relating to the pianists'
experience of their specific performances. The data were collected at the 2010 National
UNISA piano competition (student pianists) and from recitals performed in South African
concert halls in 2011/12 (professional pianists). Preliminary results suggest that pianists
experience varying degrees of both musical and non-musical emotions during their
performances. The pianists agreed that engagement with musical emotions may enhance the
performance's expression. However, uncontrolled musical and non-musical emotions impede
the ability to listen critically to their performances, leading to technical, musical, or memory
errors. Such errors prevent the performer from achieving the ideal mental state necessary
for an expressive performance. Preliminary conclusions suggest that while controlled
emotional engagement is a desirable aspect of some performances, uncontrolled emotional
engagement disrupts the focused concentration performers require for spontaneous,
creative, and expressive performances.

Coding Emotions with Sounds

Nadja Schinkel-Bielefeld, Frederik Nagel


Fraunhofer Institute for Integrated Circuits IIS, Germany
International Audio Laboratories Erlangen, Germany

Emotions play a fundamental role in human communication. Music and films in particular
are capable of eliciting emotions which unfold and vary over time. However, in order to
communicate emotions with sounds, (a) subjects should consistently and reliably associate
a sound with a certain emotion, independent of what happened before, and (b) sounds
should be perceived similarly by different subjects. We presented subjects with a series of
sounds from the International Affective Digitized Sounds database which changed every 5
seconds. Listeners rated the elicited valence and arousal using the real-time measurement
software EMuJoy. After an initial training, they rated the same sound sequence twice on the
first day and once on the following day. We also played the sounds of the sequence in reverse
order to investigate context dependence and possible series effects. We found high intra-
rater correlations of 0.79 (IQR: 0.13) for valence and 0.77 (IQR: 0.10) for arousal. We found
no significant effect of the order in which the sounds were presented. Inter-rater
correlations were still about 0.60 (IQR: 0.23) for valence and 0.52 (IQR: 0.27) for arousal.
No series effects were found. Elicited emotions were generally more consistent for
extreme values of valence and arousal. Thus at least these sounds could be used to reliably
communicate emotions. However, there may be other stimuli which require less
interpretation and are thus more suitable for fast and reliable communication of emotions.

The Effect of Musical Valence on Pseudoneglect in a Likert-type Rating Task

Jane H. Barrow,*1 Lindsay Wenger,* Janet E. Bourne,# Carryl L. Baldwin*


*Department of Psychology, George Mason University, USA
#Bienen School of Music, Northwestern University, USA

Music is widely used in everyday life, and has been shown to affect a wide range of behaviors
from basic decision tasks to driving performance. Another aspect of everyday life is spatial
attention, which is used in most tasks regardless of whether it is simple or complex.
Pseudoneglect is a phenomenon where neurologically normal individuals demonstrate a reliable
bias towards the left visual hemifield. Theories of spatial attention suggest that because the right
hemisphere of the brain is more involved in visuo-spatial processing, it has greater activation
which leads to the biasing of the left visual hemifield. It is also theorized that there is hemispheric
asymmetry in the brain for different emotional valences, such that the left hemisphere is more
activated during happy emotions and the right hemisphere more activated by sad emotions.
Music can also be highly emotional, which was utilized for the purpose of evoking emotions in the
participants of this study. The current study sought to determine if manipulating emotional
valence through music would increase, reverse, or ameliorate pseudoneglect in neurologically
normal individuals. One hundred fourteen participants performed a rating task using a visual
analog scale on works of art in silence or while listening to music with a sad or happy valence. The
musical stimuli were selections from various orchestral works by Haydn, Albinoni, Fauré, Bruch,
Mendelssohn, and Prokofiev. The valence of the music was confirmed using independent raters.
Participants rated both portrait art that contained a human face and abstract/scene art that did
not contain a human subject. Additionally, the anchors of the rating scale were reversed half-way
through to determine if the pseudoneglect effect occurred regardless. The results demonstrated a
replication of earlier work on pseudoneglect in line bisection tasks when the ratings were
performed in silence, but demonstrated a reversal of the effect when happy music was present.
No significant effect was found when sad music was present, though the trend followed the same
direction as the happy condition. The results are framed within theory regarding hemispheric
specialization of emotions and spatial attention in the brain, and how the findings might be of
interest to researchers using Likert-type scales for testing purposes.


Emotion perception in music is mediated by socio-emotional competence

Suvi Saarikallio, Jonna Vuoskoski, Geoff Luck


Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music,
University of Jyväskylä, Finland

This study investigated how adolescents' general socio-emotional competence, in terms of
empathy and problem behavior, relates to (a) biases in emotion perception, (b) the ability to
recognize emotion in music, and (c) biases in emotions felt in response to music. Sixty-one
14-15-year-old adolescents (26% males) filled in self-report scales for empathy (IRI) and
adolescent conduct problems (SDQ). For measuring emotion perception, they rated 50 music
excerpts on 8 emotions (happiness, sadness, anger, fear, tenderness, hope, longing, and
potency); for measuring emotion recognition, they were asked to identify emotions in 15
music excerpts representing five emotions (happiness, sadness, anger, fear, tenderness). In
addition, they rated their personally felt emotions for the excerpts. Empathy was related to
increased, and problem behavior to decreased, perception of emotion in music. Empathy was
also related to higher, and problem behavior to lower, recognition rates of emotion
(tenderness) in music. Furthermore, the results showed that the affect-related
sub-components of socio-emotional competence correlated with perception biases, while the
cognition-related aspects correlated with emotion recognition. As regards felt emotion,
problem behavior correlated with lower ratings of felt emotion in music. The results show
that general socio-emotional competence is indeed related to adolescents' perception of
emotions in music, and they broaden our knowledge of musical behavior as a part of
adolescents' socio-emotional development.


The Effect of Repeated Listening on Pleasure and Boredom Response to a
Cadenza

Yuko Morimoto, Renee Timmers


Music Dept., The University of Sheffield, UK

This study investigates how familiarity with a piece of music influences a listener's aesthetic
response in terms of pleasure and boredom. Repeated listening to a piece of music increases
the listener's familiarity with it, and often also their appreciation of it. However,
appreciation begins to decrease beyond a certain number of listens, a trend that can be
represented by an inverted-U curve. We devised a listening experiment to test the effects of
repeated listening, contextual listening, different performances, and musical structure on
listeners' pleasure and boredom responses. Forty-eight participants were divided into six
groups (two performance patterns and three listening patterns) and were asked to listen to
an extract and cadenza from the first movement of Mozart's Piano Concerto No. 20 in D
minor. They responded by pressing buttons on a computer keyboard whenever they felt
pleasure and/or boredom. After each listening they also rated the pleasantness,
interestingness, boringness, annoyingness, and likeability of the musical stimulus on a
7-point intensity scale. The button-pressing data revealed that participants generally felt
more pleasure than boredom, with a negative correlation between pleasure and boredom
responses. Responses were influenced both by the musical structure and by the manner in
which the cadenza was performed. Pleasantness ratings from those who listened to the
cadenza, the exposition twice, and the cadenza again displayed an increase followed by a
decrease, in conformity with the inverted-U curve. Boredom ratings, conversely, displayed a
decrease followed by an increase. Contextual listening was found to have no effect on
participants' responses.

Speed Poster Session 5: Timber II Hall, 11:00-11:40
Attention & memory
Effect of a reference vs. working memory task on verbal retrospective
estimation of elapsed duration during music listening

Michelle Phillips
Centre for Music and Science, University of Cambridge, UK

Psychological time may be warped and shaped by musical engagement and variation,
including factors such as the music's volume, tempo, and modality. Two studies are
presented here, exploring both reference and working memory. Participants listened to a
37-second extract of a bespoke piano composition (100 bpm) and retrospectively gave verbal
estimates of the elapsed duration of the listening period. In study 1 (N = 50, 12 male, average
age 30.0), the average estimate for participants who listened only (no task) was 52.00
seconds. Participants in condition 2 (reference memory task), who were instructed to write a
list of jungle animals whilst listening, gave a not significantly different average estimate of
55.88 seconds. However, in study 2 (N = 28, 12 male, average age 25.5), the average estimate
for participants who listened only (no task), 63.36 seconds, was significantly longer
(p < 0.02) than that of the working memory task group (instructed to rehearse a list of
jungle animals whilst listening), which yielded an average estimate of 38.57 seconds. These
findings suggest that retrospective estimates of elapsed duration during music listening are
not significantly shortened when a reference memory task is included, but are significantly
reduced when working memory is occupied during the listening period. Diverting attention
from the listening had a greater impact when attention was focused on rehearsal in working
memory than on retrieval from reference memory. This study provides evidence that
differing processes may underlie these systems, and that one diverts attention from music to
a greater extent than the other.


Working Memory and Cognitive Control in Aging: Results of Three Musical
Interventions

Jennifer A. Bugos
School of Music, University of South Florida, United States

One common barrier to successful aging is decreased performance in cognitive abilities such
as executive function and working memory due to age-related cognitive decline (Salthouse,
1994; Mejía et al., 1998; Wecker et al., 2005). A key challenge is to identify cognitive
interventions that may mitigate or reduce potential age-related cognitive decline. This
research examines the effects of different types of musical training, namely gross motor
training (group percussion ensemble, GPE) and fine motor training (group piano instruction,
GPI), compared to non-motor musical training (music listening instruction, MLI), on working
memory and cognitive control in older adults (ages 60-86). One hundred ninety
non-musicians, ages 60-86, were recruited and matched by age, education, and intelligence
to two training interventions. Two programs were administered concurrently in each of
three 16-week sessions: (GPI and MLI), (GPE and MLI), and (GPE and GPI). A series of
standardized cognitive assessments was administered pre- and post-training. Results of a
Repeated Measures ANOVA show significantly reduced perseveration errors on the ACT for
the GPE group compared to GPI and MLI, F(2,121) = 3.6, p < .05. The GPI group exhibited a
similar pattern of reduced perseveration errors. Results of a Repeated Measures ANOVA on
the Musical Stroop Task indicate significantly reduced errors for the MLI group compared to
GPI and GPE, F(2,109) = 3.1, p < .05. Musical training may benefit general cognitive abilities. Data
suggest that instrumental training enhances working memory performance while music
listening instruction may contribute to cognitive control.


Interfering Effects of Musical and Non-Musical Stimuli in a Short-term Memory
Task

Jack D. Birchfield, James C. Bartlett, W. Jay Dowling


Behavioral and Brain Sciences, University of Texas at Dallas, USA

In a previous study, we found that performance in a short-term verbal memory task was
reduced by the presentation of familiar instrumental songs during the retention interval. One
possible interpretation is that the musical nature of these songs (e.g., their tonality and
coherent rhythmic patterns) is a source of interference. An alternative view is that retention
is disrupted by auditory sequences with elements that vary over time. To test the musicality
hypothesis against the changing-state hypothesis, participants were asked to retain spoken
9-digit sequences while hearing white noise (the control condition) or one of four types of
auditory distractor: familiar instrumental music, instrumental versions of familiar vocal
songs (IFVS), random diatonic note sequences between C3 and C5, or random chromatic
sequences between C3 and C5. Recall of the digits was significantly lower after hearing the
familiar instrumental distractors than after either the diatonic or chromatic distractors.
Recall performance in the IFVS condition was not reliably different from any of the other
conditions, but was numerically lower than for the equally familiar instrumental music and
numerically higher than for the diatonic and chromatic distractors. The average number of
notes per sequence was greater for the instrumental songs than for the IFVS, while the
diatonic and chromatic distractors were isochronal (equal onset and duration with no
rhythmic information). Thus, we conclude that the greater interference associated with
instrumental music may result from the greater rhythmic complexity of the instrumental
selections rather than from familiarity or other musical qualities.


Musical Accents and Memory for Words

Thomas Ting, William Forde Thompson


Department of Psychology, Macquarie University, Australia

In this study, we examined the effect of background music on reading, focusing on memory
for words that are read concurrently with musical accents. Can musical accents enhance
memory for words in the same way that visual accents (underscoring, highlighting) draw
attention to words and hence increase memory for them? Forty undergraduate psychology
students were presented with sentences one word at a time on a computer screen. Each word
was accompanied by a piano tone such that the sequence of tones outlined a brief melody
with one note musically accented. Melodic accents had increased intensity, duration, and
pitch height. There were three music conditions: in the first two, musical accents were either
congruent (aligned) or incongruent (not aligned) with a target word; in the third condition,
there was no accompanying music. The target words were either visually emphasized
(bolded) during the exposure phase or not. Contrary to predictions, recall was better when a
musical accent was incongruent with a target word than when the accent was congruent or
when there was no music at all. There was no significant effect of bolding target words in the
exposure phase. The results suggest that background music enhances the coding of words
during reading, but only for words that do not coincide with strong musical accents. A
cost-benefit trade-off model is suggested, in which prominent musical accents may compete
for attention, eliminating the potential benefits of positive changes in arousal and the
mood-priming effects of accents during an implicit memory task.

Mood-Based Processing of Unfamiliar Tunes Increases Recognition Accuracy
in Remember Responses

Esra Mungan,* Zehra F. Peynircioğlu#, Andrea R. Halpern


*Psychology, Boğaziçi University, Turkey; #Psychology, American University, USA; Psychology,
Bucknell University, USA

We investigated the effects of orienting task (OT) on "remember" (R) and "know" (K)
responses in melody recognition of unfamiliar tunes. In Experiment 1, nonmusicians made
mood judgments and continued melodies (= conceptual OTs), or counted the number of long
notes and traced pitch contours (= perceptual OTs) of unfamiliar tunes. As expected from
earlier research with familiar tunes (Mungan, Peynircioğlu, & Halpern, 2011), conceptual
processing was more effective than perceptual processing for R-type recognition accuracy,
which once again was due mostly to the mood-based processing task. In Experiment 2, we
investigated whether a distinctive versus relational processing difference underlies this OT
effect. Nonmusicians judged a set of familiar tunes in terms of how distinctive they were
(distinctive-conceptual), which music category they belonged to (relational-conceptual), or
their loudness (neutral-perceptual). Findings revealed only that conceptual processing was
more effective than perceptual processing for R-type recognition sensitivity. We discuss
possible reasons why the distinctiveness factor was not effective, even though it has been
shown to be with many types of verbal and nonverbal material.


Effects of Manipulating Attention during Listening on Undergraduate Music Majors' Error Detection in Homophonic and Polyphonic Excerpts: A Pilot Study

Amanda L. Schlegel
School of Music, University of Southern Mississippi, United States

The purpose of this pilot study was to investigate the potential effects of wholistic versus
selective listening strategies on music majors' detection of pitch and rhythm errors in
three-voice homophonic and polyphonic excerpts. During the familiarization phase,
upper-level undergraduate instrumental music majors (N = 14) first heard a correct full
performance (all voices at once), followed by each individual voice, with one final
opportunity to listen to the full excerpt again. Participants then heard a flawed performance
containing pitch and rhythm errors, their task being to detect the errors. Participants in the
wholistic listening group were instructed to attend to all voices while listening, while
participants in the selective group were instructed to attend to individual voices.
Participants heard the flawed performance twice. Results indicated no significant main
effects of texture, error type (pitch or rhythm), error location (top, middle, or bottom voice),
or treatment group. A significant three-way interaction among texture, error type, and error
location illustrates the influence of musical context on the detection of pitch and rhythm
errors. Though the small sample size (N = 14) and the lack of a significant treatment effect
illustrate the need for additional and adjusted investigations, efforts to illuminate texture's
influence on listening/attending are of value to all musicians.


Attention and Music

Vaitsa Giannouli
Department of Psychology, Aristotle University of Thessaloniki, Greece

Many studies have found that cognitive test performance can be influenced by background
music. The aim of the present study is to investigate whether background music can
influence attention. Twenty-four neurologically and acoustically healthy volunteers (12
non-musicians and 12 musicians, 15 men and 14 women, mean age = 26.20, SD = 5.64) participated
in the study. All of the participants had a university education (minimum 16 years). The
examination materials were the Ruff 2 & 7 Selective Attention Test (2 & 7 Test), the Symbol
Digit Modalities Test (SDMT), Digit Span Forward, and the Trail Making Test Part A (TMT).
Metacognitive feelings (feeling of difficulty, FOD, and feeling of confidence, FOC) were also
measured after the completion of each test with the use of Likert scales. Volunteers
participated in all three conditions of the experiment and were grouped according to the
acoustic background they experienced during the neuropsychological examination (Mozart's
Allegro con spirito from the Sonata for Two Pianos K.448, a favorite music excerpt, or no
exposure to any acoustic stimuli during the examination). Results indicated a statistically
significant difference in favor of the favorite-music condition and statistically more positive
metacognitive judgments (less difficulty, more confidence) for this condition. Listening to
Mozart's music did not enhance performance on the attention tasks. No influence of music
education was found, and no gender differences were found. The finding of better attention
performance could be interpreted as the result of a general positive effect that listening to
preferred music has on general cognitive abilities.


Learning and memorisation amongst advanced piano students: a
questionnaire survey

Kirsteen Davidson-Kelly, Nikki Moran, Katie Overy


IMHSD, Reid School of Music, ECA, University of Edinburgh, UK

Professional musicians are often advised to use mental rehearsal techniques, including
musical imagery, but to date there is little evidence regarding the extent to which these
techniques are actually used, or indeed their relative efficacy. We conducted an online
questionnaire with piano students at six UK conservatoires, designed to examine their
conceptions and experiences of the process of learning and memorisation, and to identify
which strategies were most commonly recommended and implemented. Results from 37
respondents showed that statements about conceptions of learning and memorisation did
not always fit with self-reports of actual practice, and that although widely recommended,
mental techniques were less likely to be implemented than physical rehearsal techniques.
These findings suggest that while students may know about certain approaches and
strategies they may not know how to implement them. Future research should investigate
the relative efficacy of specific mental learning techniques involving deliberate uses of
musical imagery and examine ways of teaching these techniques effectively.

Speed Poster Session 6: Grand Pietra Hall, 11:40-12:10


Music, words, language

What We've Got Here is [No?] Failure to Communicate: How Listeners Reconcile Music and Lyrics Mismatch in Interpretation
Janet Bourne, Richard Ashley
Northwestern University

While songs (defined as music with lyrics) have been studied extensively in music theory, little
empirical research addresses how music and lyrics together influence the interpretation of a
song's narrative. Previous experiments on song focus on how lyrics and music elicit emotion,
yet do not address the song's narrative. Cook (1998) proposed three models of multimedia,
including "contest" (or "mismatch"), in which two simultaneous media contradict each other.
Previous research (e.g., McNeill, 2005) indicates that mismatched verbal and nonverbal
communication implies meta-communication, or other instances of non-literal language
(deception, irony,
sarcasm, joking, and so on). In like manner, when music and lyrics mismatch, a listener might
interpret the mismatch as a kind of meta-communication. We propose the following
hypotheses: (1) in song, music does not simply elicit emotion but also plays a part in a
listener's narrative interpretation; a listener uses both. (2) If music and lyrics mismatch,
listeners will reconcile the contradictory sources to create a coherent story. (3) When the
music and lyrics conflict in a song sung by a character, a listener may infer that the character
in the song is being ironic, lying, sarcastic, or joking. Participants listened to song clips from
Broadway musicals and provided responses to a variety of questions: free response,
Likert-scale ratings, forced choice, and adjective listing. The study used a 2x2
between-subjects design in which the factors were the affect of the music and the affect of
the lyrics: (1) Positive Music/Positive Lyrics, (2) Positive Music/Negative Lyrics, (3) Negative
Music/Negative Lyrics, (4) Negative Music/Positive Lyrics. This research provides further
insight into how a composer is able to successfully communicate a meaning or message to a
listener through song. Commercially, advertising companies may find the results informative,
as they would show how best to reach a target audience by knowing how different sources of
media are understood by the public. These results should also be of interest to other
researchers who study how people reconcile conflicting simultaneous sources of information.


Studying the Intervenience of Lyrics' Prosody in Songs' Melodies
José Fornari

NICS, University of Campinas (UNICAMP), Brazil



Songs are made of two intrinsically connected parts: poetry (in the form of the song's lyrics)
and music. The proper fit between these parts seems to be made by acoustic features that
encompass the relationship between them, representing two fields of sonic communication:
musical and verbal. While lyrics convey semantic meaning, music enhances their emotional
intention, filling informational gaps and enhancing a signification that would otherwise leave
the poetic meaning of the lyrics incomplete or even misleading. This work presents
introductory research on the influence of lyrics on their accompanying melodies. The
experiment presented here analyzes three famous popular songs. Computational predictions,
given as time series of eight acoustic descriptors, were retrieved from pairs of audio files:
one solely with the speech of the lyrics, and another solely with the corresponding melody.
In order to avoid data tainting from human emotional interpretation, the audio files with the
speech were generated by a text-to-speech voice synthesizer; for the same reason, the
melodies were generated from MIDI files. These pairs were analyzed by computational
models of higher-level acoustic descriptors that output time series representing the
development of a particular acoustic aspect over time. The correlation of each acoustic
feature for each pair of audio files is presented in the form of the correlation coefficient R.
The experimental results are presented, explained, and discussed, in order to introduce a
study of the acoustic features that best describe the intervenience of lyrics'
prosody in song melodies.
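
The core computation can be sketched in Python: two time series of the same acoustic
descriptor, one from the synthesized speech and one from the MIDI melody, are reduced to a
single Pearson correlation coefficient R. The descriptor values below are invented
placeholders, not data from the study.

import numpy as np

def pearson_r(x, y):
    # Pearson correlation between two equal-length descriptor time series.
    x, y = np.asarray(x, float), np.asarray(y, float)
    x, y = x - x.mean(), y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

speech_descriptor = [0.2, 0.5, 0.4, 0.7, 0.6, 0.9]  # e.g., loudness of the lyrics
melody_descriptor = [0.1, 0.4, 0.5, 0.6, 0.7, 0.8]  # same descriptor, melody
print("R =", round(pearson_r(speech_descriptor, melody_descriptor), 3))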


Comparing Models of Melodic Contour in Music and Speech

Alex Billig, Daniel Müllensiefen


Department of Psychology, Goldsmiths, University of London, United Kingdom

Contour is an important perceptual and mnemonic feature of both music and speech. Four
formal models of contour, differing in the degree to which they compress melodic
information, were compared empirically to assess how closely they correspond to the mental
processes involved in the perception and memory of pitch sequences. Participants listened to
a series of short monophonic melodies and low-pass-filtered English sentences. They were
asked to identify which of four images best represented the auditory stimulus. All images in a
trial were produced using the same contour model, but only one was derived from the
melody or sentence heard. Models facilitating the highest proportion of correct matches
were considered to summarise the pitch information in a cognitively optimal way. Matching
was at above chance level for all models, with increased visual detail generally leading to
better performance. A linear regression model with musical training, stimulus type, their
interaction and contour model as predictors accounted for 44% of variance in accuracy
scores (p < .001). Accuracy was significantly higher for melodies than for speech, and
increased with musical training for melodies only. This novel cross-modal paradigm revealed
that listeners can successfully match images derived from music theoretical models of
contour not only to melodies but also to spoken sentences. Our results support the important
role of contour in perception and memory in both music and speech, but suggest limits to the
extent that musical training can bring about changes to the mental representation of pitch
patterns.
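
The four contour models are not specified in the abstract; as a hedged illustration of how such models can differ in compression, the sketch below contrasts a maximally compressed coding (the Parsons up/down/repeat code) with a denser resampled pitch curve. Both representations and the example melody are illustrative assumptions, not the models tested in the study.

```python
import numpy as np

def parsons_code(pitches):
    """Maximally compressed contour: the direction of each interval."""
    return "".join("u" if b > a else "d" if b < a else "r"
                   for a, b in zip(pitches, pitches[1:]))

def resampled_contour(pitches, points=20):
    """Less compressed contour: the pitch curve resampled to a fixed
    number of points and normalised to zero mean."""
    x = np.linspace(0, len(pitches) - 1, points)
    curve = np.interp(x, np.arange(len(pitches)), pitches)
    return curve - curve.mean()

melody = [60, 62, 64, 62, 67, 65, 64, 60]       # MIDI note numbers
print(parsons_code(melody))                     # -> "uududdd"
print(np.round(resampled_contour(melody), 2))
```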


The effect of melodic expectation on language processing at different levels of
task difficulty and working memory load

Elisa Carrus,* Marcus T. Pearce,# Joydeep Bhattacharya*


*Department of Psychology, Goldsmiths, University of London, UK; #Center for Digital Music,
School of Electronic Engineering & Computer Science, Queen Mary, University of London, UK

Behavioural studies have shown that language expectancy effects are reduced when
language is presented with unexpected compared to expected musical chords (e.g., Hoch et al.,
2011). This study aimed at investigating the behavioural impact of melodic expectation on
the processing of language. A computational model (Pearce, 2005) was used to create melodies,
allowing us to distinguish between high-probability (expected) and low-probability
(unexpected) notes. We used a cross-modal paradigm in three behavioural studies where
sentences and melodies were presented in synchrony and both consisted of five elements. In the
first experiment, the task consisted in an acceptability judgment, whereas in the second
experiment the task involved detecting the type of language condition presented. The third
experiment included a working memory component which involved keeping digits in
memory while participants performed the language task. When participants were asked to judge the
acceptability of sentences, melodically unexpected notes facilitated processing of unexpected
but not expected sentences. Participants were faster in responding to incorrect sentences
when these were paired with unexpected rather than expected notes. When participants
were asked to detect the type of language violation, the language expectancy effect (faster
processing for correct than for incorrect sentences) was reduced when sentences were
presented on unexpected notes, compared to expected notes. Finally, when working memory
load increased, the language expectancy effect was suppressed. It could be speculated that a
congruency effect is generating the facilitation effect, and that the presence of increased
cognitive load enhances processing of distracting (music) stimuli, thus preventing a
behavioural interaction.


Towards a Musical Gesture in the Perspective of Music as a Dynamical System

Beatriz Raposo de Medeiros


Department of Linguistics, University of São Paulo, Brazil

Assuming a perspective of music as a dynamical system in the domain of cognition implies
adopting the notion that the cognitive structures (nervous system, body and environment)
are integrated. In other words, in each behavior that involves acting and knowing (e.g., a
football player kicking a corner ball), cognitive structures act as an entire system. The
dynamical view provides the necessary tools and the language required to deal with time,
movement and change over time. We present a locus of convergence among studies with
different views on music as a dynamical system, whereafter we propose a musical gesture
based on the same dynamical principles which in the domain of Linguistics led to a
phonological unit called articulatory gesture. The singing voice is presented as a plausible
musical gesture as it produces tones and durations combined in order to provide the musical
information. This information can be understood as specific tones in a given scale system
and rhythmic structure and is part of the musical unit proposed here. The articulatory
movements of the singing voice produced by the larynx characterize this unit as a unit of
action. Thus we suggest a larynx modeling for music production in an initial attempt to view
the singing voice as a basic realization of music, organized and coordinated as a musical
gesture.


Perceiving Differences in Linguistic and Non-Linguistic Pitch: A Pilot Study
With German Congenital Amusics

Silke Hamann,* Mats Exter,# Jasmin Pfeifer,# Marion Krause-Burmester#


*Amsterdam Centre for Language and Communication, University of Amsterdam, The
Netherlands
#Institute for Language and Information, University of Düsseldorf, Germany

This study investigates the perception of pitch differences by seven German congenital
amusics in speech and two types of non-speech material (sinusoidal waves and pulse trains).
Congenital amusia is defined by a deficit in musical pitch perception, and recent studies
indicate that at least a subgroup of congenital amusics also show deficits in linguistic pitch
perception. While previous studies employed pitch differences that occur in naturally spoken
pairs of statement vs. echo question to test the influence of amusia on linguistic pitch
perception, the present study parametrically varied the pitch differences in steps of one
semitone (from one to seven semitones). We further tested the influence of the direction of
the pitch change, the length of the stimuli and the continuity of the pitch curve. Our results
show that amusics have difficulties detecting pitch changes both in non-linguistic stimuli and
in speech. Furthermore, we found that amusics and controls performed better when the
stimuli were discontinuous and the pitch was raised (instead of lowered). With respect to
non-speech material, all participants performed better for pulse trains. The length of the
stimuli did not influence the performance of the participants.

Speed Poster Session 7: Crystal Hall, 11:40-12:10


Ethnomusicology & cross-cultural studies

Prosodic Stress, Interval Size and Phrase Position: A Cross-Cultural Contrast

Daniel Shanahan, David Huron


Ohio State University

Two studies were carried out in order to test the existence of late-phrase compression in
music, whereby interval size tends to decline toward the end of a phrase. A sample of
phrases from notated Germanic folksongs shows the predicted decline in interval size.
However, a sample of phrases from Chinese folksongs shows a reverse relationship. In short,
late-phrase interval compression is not evident cross-culturally.
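
As a sketch of the corpus statistic at issue (not the authors' code; the two toy phrases are hypothetical), late-phrase compression predicts smaller mean interval sizes in the second half of a phrase than in the first:

```python
import numpy as np

def half_phrase_interval_means(phrase):
    """Mean absolute interval size in the first vs. second half of a phrase."""
    intervals = np.abs(np.diff(phrase))
    mid = len(intervals) // 2
    return intervals[:mid].mean(), intervals[mid:].mean()

phrases = [[60, 64, 67, 72, 71, 69, 67, 65],    # hypothetical melodies
           [62, 67, 71, 74, 72, 71, 69, 67]]    # (MIDI pitches)
early, late = zip(*(half_phrase_interval_means(p) for p in phrases))
print("mean interval, first half :", np.mean(early))
print("mean interval, second half:", np.mean(late))
# Late-phrase compression predicts the second value to be the smaller one.
```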

Variations in emotional experience during phases of elaboration of North Indian Raga performance

Shantala Hegde,* Jean-Julien Aucouturier,# Bhargavi Ramanujam*, Emmanuel Bigand#


*Cognitive Psychology Unit, Center for Cognition and Human Excellence, Department of Clinical
Psychology, National Institute of Mental Health And Neuro Sciences (NIMHANS), Bangalore,
India; #LEAD-CNRS, Université de Bourgogne, Pôle AAFE, Dijon cedex, France

In Indian classical music (ICM), ragas are the basis for melodic improvisation. Ragas are
closely associated with specific emotional themes, termed rasas. Artists improvise and
elaborate on a raga over successive phases, varying the melodic
elaboration, tempo and rhythm to evoke the rasa of the raga. There has been little study so
far on how the emotional experience varies across the different phases of raga elaboration.
This study examined the variation in emotional experience associated with specific ragas
during the different phases of raga presentation in the North-Indian-Classical-Music
tradition (NICM), and correlated it with acoustic parameters. Fifty musically-untrained Indian
participants listened to one-minute long excerpts from ten ragas. All excerpts were from
Bansuri (bamboo flute) performances by an accomplished musician. For each raga, three
excerpts from different phases of elaboration, viz., Alaap (P1), Jor-Jhala (P2) and Bandish-
Madhyalaya (P3), were included. Participants were asked to choose the predominant
emotion experienced from a set of eight categories. Here we report only on differences
observed when comparing P1 and P2 of the ragas. A PCA of the complete dataset of the 30
excerpts was carried out, and rhythmic properties of each excerpt were extracted using the
MIR Toolbox's algorithms. Valence and arousal variations within a raga typically exceed
variations between different ragas. The transition from P1 to P2 was associated with a
significant increase in pulse clarity. Indian performers can thus strongly vary the
expressivity associated with a specific raga through their performances, within specific
constraints that depend on the raga.
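
The abstract mentions a PCA over the 30-excerpt dataset and rhythmic descriptors from the MIR Toolbox (a MATLAB toolbox); as a hedged Python stand-in on synthetic data, the sketch below shows only the PCA step, via SVD of the mean-centred feature matrix.

```python
import numpy as np

def pca(features, n_components=2):
    """Principal components via SVD of the mean-centred feature matrix."""
    x = features - features.mean(axis=0)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]   # excerpt coordinates
    explained = (s ** 2) / (s ** 2).sum()
    return scores, explained[:n_components]

rng = np.random.default_rng(0)
features = rng.normal(size=(30, 8))   # synthetic: 30 excerpts x 8 features
scores, var = pca(features)
print("variance explained by PC1, PC2:", np.round(var, 2))
```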


Analyzing Modulation in Scales (Rāgams) in South Indian Classical (Carnātic)
Music: A Behavioral Study

Rachna Raman, W. Jay Dowling


Dept. of Behavioral & Brain Sciences, The University of Texas at Dallas, USA

The study was aimed at (1) identifying cues that help listeners perceive tonality changes, (2)
investigating if cues learnt from one culture help toward understanding music across
cultures, and (3) understanding if musical training is advantageous for cross-cultural
perception. Carntic music has two kinds of tonality shifts: the popular rgamlik (shifts of
rgam, retaining tonal center; e.g., C to C minor), and the controversial grahabdham (shifts
of rgam and tonal center; e.g., C to A minor). Stimuli were 45 rgamlik and 46
grahabdham shifts in songs. South Indian and American teachers and students divided by
age (older or younger than 60 yr) served in either the rgamlik or grahabdham condition.
Participants indicated the point at which a modulation occurred, measured in terms of
accuracy and latency. Indians were more accurate and faster in rgamlik whereas
westerners performed better with grahabdham. Cues could explain performance
differences between nationalities: Indians performed better in rgamlik presumably
because of their familiarity with it; westerners performed better with grahabdham because
they were probably able to apply cues to a type of modulation culturally familiar to them.
Indians and westerners had similar hit rates in grahabdham. Increased caution toward the
less familiar grahabdham for Indians could explain their slower response time compared to
rgamlik. Musical training was advantageous to teachers overall: they had more hits and
fewer errors than students. This could be attributed to enhanced representation for systems
of pitches and modalities.


Embodiment of Metrical Structure: Motor Patterns Associated with Taiwanese
Music

Li-Ching Wang,* Chen-Gia Tsai#


*Centre for Music and Science, University of Cambridge, UK
#Graduate Institute of Musicology, National Taiwan University, Taiwan

Sensory feedback, whether auditory, visual, tactile, proprioceptive or vestibular, enables
music performers to perceive metrical structures of music better due to the multiple sources
of information. Cognitively, humans tend to synchronize their body movements with beats
they are listening to. Ontogenetically, the ability to feel music through body movements
develops at an early age. Physiologically, different mechanisms behind the feedback caused
by body movements may result in different types of embodied expression of meter.
Embodiment of metrical hierarchy can also be observed in the variety of beat-counting
processes from different musical cultures, such as the art of conducting in Western classical
music. In some Taiwanese music genres, musicians count beats with specific motor patterns.
The present study used an accelerometer to examine the beat-counting movements in
diverse music traditions: Taiwanese aboriginal music, nanguan music, and beiguan music, in
comparison with the conducting movement in Western classical music. We hypothesize that
different types of feedback induced by beat-counting movements reflect the hierarchy of beats in a
measure. Our results suggest that tactile feedback stands higher in the hierarchy than
proprioception, for which the zero-acceleration timing indicates the beat in some music
traditions. If no tactile feedback occurs, hand movement with downward velocity is on a
higher hierarchical level than movement with upward velocity.
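
As a hedged sketch of the zero-acceleration analysis implied above (synthetic signal; not the study's processing pipeline), candidate beat times can be located as zero-crossings of an accelerometer trace:

```python
import numpy as np

def zero_crossing_times(acc, sr):
    """Times (s) where acceleration crosses zero, by linear interpolation."""
    idx = np.where(np.signbit(acc[:-1]) != np.signbit(acc[1:]))[0]
    frac = acc[idx] / (acc[idx] - acc[idx + 1])   # position between samples
    return (idx + frac) / sr

sr = 100                                          # 100 Hz accelerometer
t = np.arange(0, 2, 1 / sr)
acc = np.sin(2 * np.pi * 2 * t)                   # synthetic 2 Hz movement
print(np.round(zero_crossing_times(acc, sr), 2))  # crossings every 0.25 s
```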


Literarily Dependent Chinese Music: A Cross-Cultural Study of Chinese and
Western Musical Scores Based on Automatic Interpretation

Rongfeng Li,* Yelei Ding*, Wenxin Li*, Minghui Bi #


* Key Laboratory of Machine Perception (Ministry of Education), Peking University
# School of Arts, Peking University

The evolution of Western and Chinese musical scores is quite different. First, the Chinese
musical score depends greatly on literature whereas, on the common view, Western music is
comparatively independent of literature. Specifically, in the Chinese musical score, the melody
evolves from the tones of Chinese poetry. The other difference lies in the rhythmic rules. Compared
to the strictly regulated Western notation, gongchepu uses a flexible rhythmic rule, which only
denotes ban (downbeat) and yan (upbeat), while the duration of each note is improvised by
musicians. However, to perform the music correctly, the improvisation, whose conventions
are passed on only by oral tradition, follows fixed patterns. In this paper, we propose an
automatic interpretation model that recognizes those patterns based on a Hidden
Markov Model. Our automatic interpretation method achieves 90.392%
precision and 83.2% OOV precision on a database of published manual interpretations of
gongchepu. The results show that the rising and falling tones and the position of the lyrics are
the key features that affect the rhythmic improvisation of Chinese music, which also supports
the claim that the Chinese musical score is literarily dependent. Moreover, automatic
interpretation can have a great impact on preserving ancient Chinese traditional culture, as
the number of experts able to read gongchepu is decreasing and the way of singing
traditional Chinese poetry will likely fade in the coming generations.
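
The paper's HMM is not reproduced here; the toy Viterbi decoder below, with invented states and probabilities, only illustrates the general idea of recovering hidden duration patterns from a gongchepu sequence of ban/yan markings.

```python
import numpy as np

states = ["long", "short"]            # hypothetical duration patterns
obs_symbols = {"ban": 0, "yan": 1}
start = np.log([0.6, 0.4])            # all probabilities are invented
trans = np.log([[0.7, 0.3],           # long  -> long/short
                [0.4, 0.6]])          # short -> long/short
emit = np.log([[0.8, 0.2],            # long  emits ban/yan
               [0.3, 0.7]])           # short emits ban/yan

def viterbi(observations):
    """Most likely hidden state sequence for a ban/yan observation list."""
    o = [obs_symbols[x] for x in observations]
    v = start + emit[:, o[0]]
    back = []
    for t in o[1:]:
        step = v[:, None] + trans            # [from, to] log-probabilities
        back.append(step.argmax(axis=0))     # best predecessor per state
        v = step.max(axis=0) + emit[:, t]
    path = [int(v.argmax())]
    for ptr in reversed(back):               # backtrack through pointers
        path.append(int(ptr[path[-1]]))
    return [states[i] for i in reversed(path)]

print(viterbi(["ban", "yan", "yan", "ban"]))
```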


Speed Poster Session 8: Dock Six Hall, 11:40-12:10


Temporality & rhythm I
Conceptual spaces of metre and rhythm

Jamie Forth,* Geraint Wiggins#


*Department of Computing, Goldsmiths, University of London, UK
#School of Electronic Engineering and Computer Science, Queen Mary, University of London, UK

We introduce a formalisation of metrical-rhythmic concepts within Gärdenfors' theory of
conceptual space. The conceptual spaces framework is a cognitive theory of representation
in which concepts are represented geometrically within perceptually grounded and variably
weighted quality dimensions. Distance corresponds to conceptual dissimilarity. Informed by
London's psychological theory of metre as a process of entrainment, two conceptual space
models are developed, each designed to encapsulate salient aspects of the experience of
metrically organised rhythmic structure. As a basis for defining each conceptual space, we
first develop a symbolic formalisation of London's theory in terms of metrical trees, taking
into account isochronous and non-isochronous structures. The first conceptual space
represents metrical concepts as hierarchical structures of periodic components. The second
extends this representation to include the internal sequential structure of periodic cycles.
The geometry is defined in terms of the symbolic formulation, and the mappings between the
levels of representation associate metrical tree structures with points in geometrical space.
Expressively varied metres are naturally represented in the space as regions surrounding
prototypical metrical points. The developed models are evaluated within a genre
classification task involving stratified 10x10-fold cross-validation over a labelled dataset of
rhythmically distinctive musical genres using k-nearest-neighbour classification. The models
achieve classification accuracies of 77% and 80% respectively, with respect to a tempo-only
base-line of 48%.
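
A minimal sketch of the evaluation idea, not the authors' implementation: k-nearest-neighbour classification of rhythms represented as points in a conceptual space with weighted quality dimensions (the two dimensions, the weights and the data are invented).

```python
import numpy as np
from collections import Counter

def knn_predict(train_x, train_y, x, k=3, weights=None):
    """Majority label among the k nearest points (weighted Euclidean)."""
    w = np.ones(train_x.shape[1]) if weights is None else np.asarray(weights)
    d = np.sqrt((((train_x - x) ** 2) * w).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Two hypothetical quality dimensions per rhythm: tempo and pulse salience.
train_x = np.array([[120, 0.9], [124, 0.8], [90, 0.3], [95, 0.4]])
train_y = ["dance", "dance", "ballad", "ballad"]
print(knn_predict(train_x, train_y, np.array([118, 0.85])))   # -> "dance"
```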


Modeling the implicit learning of metrical and non-metrical rhythms

Benjamin G. Schultz1,2, Geraint A. Wiggins3, & Marcus Pearce3


1MARCS Institute, University of Western Sydney
2Lyon Neuroscience Research Center, Team Auditory Cognition and Psychoacoustics, CNRS,
UMR 5292, INSERM U1028, Université Lyon 1
3 Centre for Digital Music, Queen Mary, University of London

The information dynamics of music (IDyOM; Pearce & Wiggins, 2006) model, originally applied to
melodic expectation, indicates learning via entropy (reflecting uncertainty) and information
content (reflecting unexpectedness). Schultz, Stevens, Keller, and Tillmann found implicit learning
(IL) of metrical and non-metrical rhythms using the serial reaction-time task (SRT). In the SRT,
learning is characterized by RT decreases over blocks containing a repeating rhythm, RT
increases when novel rhythms are introduced, and RT recovery when the original rhythm is
reintroduced. Metrical rhythms contained events that occurred on the beat and downbeat. Non-
metrical rhythms contained events that deviated from the beat and downbeat. In the metrical
condition, larger RT increases occurred for the introduction of novel weakly metrical rhythms
compared to novel strongly metrical rhythms. No differences were evident between the
introductions of novel non-metrical rhythms. We used the IDyOM model to test the hypothesis
that IL of metrical and non-metrical rhythms is related to developing expectations (i.e. RT data)
based on the probabilistic structure of temporal sequences. We hypothesized that previous
exposure to the corpus results in larger positive correlations for metrical rhythms than non-
metrical rhythms. Correlational analyses between RT data and the IDyOM model were performed.
The IDyOM model correlated with RT. Entropy demonstrated moderate positive correlations for
the LTM+ and BOTH+ models. Information content demonstrated moderate to strong positive
correlations for the LTM, BOTH, LTM+, and BOTH+ models. As hypothesized, models exposed to
the corpus demonstrated larger correlations for metrical rhythms compared to non-metrical
rhythms. Results suggest that the IDyOM model is sensitive to probabilistic aspects of temporal
learning, and previous exposure to metrical rhythms. The probabilistic structure of temporal
sequences predicts the development of temporal expectations as reflected in RT. Results indicate
that the usefulness of the IDyOM model extends beyond predicting melodic expectancies to
predicting the development of temporal expectancies.
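
IDyOM itself is a sophisticated variable-order model; as a hedged stand-in, the sketch below derives information content (unexpectedness) and entropy (uncertainty) from simple add-one-smoothed bigram counts over invented duration categories.

```python
import math
from collections import defaultdict

def bigram_model(corpus):
    """Count bigram transitions over a corpus of symbol sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in corpus:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def predict(counts, context, alphabet):
    """Add-one-smoothed predictive distribution given one context symbol."""
    total = sum(counts[context].values()) + len(alphabet)
    return {s: (counts[context][s] + 1) / total for s in alphabet}

corpus = [["short", "short", "long"], ["short", "long", "long"]]  # invented
alphabet = ["short", "long"]
dist = predict(bigram_model(corpus), "short", alphabet)
ic = -math.log2(dist["long"])                             # unexpectedness
entropy = -sum(p * math.log2(p) for p in dist.values())   # uncertainty
print(f"IC = {ic:.2f} bits, entropy = {entropy:.2f} bits")
```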


Asymmetric beat/tactus: Investigating the performance of beat-tracking
systems on traditional asymmetric rhythms
Thanos Fouloulis,* Emilios Cambouropoulos,* Aggelos Pikrakis#

* School of Music Studies, Aristotle University of Thessaloniki, Greece


#Department of Computer Science, University of Piraeus, Greece

Theories of western metrical structure commonly hypothesize an isochronous beat level
(tactus) upon which the concept of metre is built. This assumption is challenged by this
study. It is proposed that time at the tactus level may be measured by isochronous or
asymmetric temporal scales depending on the musical data (just like asymmetric pitch
scales are adequate for organising tonal pitch space). This study examines the performance
of beat tracking systems on music that features asymmetric rhythms (e.g. 5/8, 7/8) and
proposes potential improvement of theoretical and practical aspects relating to beat
perception that can allow the construction of more general idiom-independent beat trackers.
The tactus of asymmetric/complex musical rhythms is non-isochronous; for instance, a 7/8
song is often counted/tapped/danced at the 3+2+2 level (not at a lower or higher level). Two
state-of-the-art beat-tracking systems (Dixon, 2007; Davies & Plumbley, 2007) and a
beat/tempo induction system (Pikrakis et al., 2004) are tested on a number of traditional
Greek (dance) songs that feature asymmetric rhythms. The beat output of the algorithms is
measured against the corresponding beat structures indicated by expert musicians (we also
use knowledge regarding corresponding dance movements), and the algorithms are
compared to each other. As expected, the beat-trackers cannot cope well with asymmetric
rhythms. The metre/tempo induction system performs better in processing asymmetric
rhythms; it does not always find the correct beat level but this level exists implicitly in the
model (in between sub- and super-beat levels).
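
As a hedged sketch of how beat output can be scored against expert annotations (not necessarily the measure used in this study), the snippet below computes an F-measure with a +/-70 ms tolerance window; the hypothetical 7/8 example shows an isochronous tracker scoring poorly against a 3+2+2 beat.

```python
import numpy as np

def beat_f_measure(estimated, annotated, tol=0.07):
    """F-measure of estimated beat times, +/-tol seconds per annotation."""
    est, ann = sorted(estimated), sorted(annotated)
    unmatched, hits = list(ann), 0
    for e in est:
        match = next((a for a in unmatched if abs(a - e) <= tol), None)
        if match is not None:
            hits += 1
            unmatched.remove(match)        # each annotation matched once
    if not est or not ann or not hits:
        return 0.0
    precision, recall = hits / len(est), hits / len(ann)
    return 2 * precision * recall / (precision + recall)

# Hypothetical 7/8 (3+2+2) beats at quaver = 0.2 s: 0.6/0.4/0.4 s periods.
annotated = np.cumsum([0.0, 0.6, 0.4, 0.4, 0.6, 0.4, 0.4])
estimated = np.arange(0.0, 2.8, 0.4)       # an isochronous tracker's output
print(round(beat_f_measure(estimated, annotated), 2))   # -> 0.43
```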


Meet ADAM: a model for investigating the effects of adaptation and
anticipatory mechanisms on sensorimotor synchronization

Marieke van der Steen,* Peter E. Keller *#


*Music Cognition and Action Group, Max Planck Institute for Human Cognitive and Brain
Sciences, Germany; #MARCS Institute, University of Western Sydney, Australia

The temporal coordination of self-generated motor rhythms with perceived external
rhythms is an important component of musical activities. Such sensorimotor synchronization
(SMS) involves temporal adaptation and anticipation. Adaptation mechanisms enable
humans to modify the timing of their actions online when synchronizing with external event
sequences. Reactive temporal error correction processes influence the timing of upcoming
movements and therefore facilitate the maintenance of synchrony. Anticipatory processes
concern predictions about the unfolding external event sequence with which the action is to
be coordinated. These mechanisms facilitate efficient and precise motor control and are
related to online action simulation and internal models. We introduce ADAM, an
ADaptation and Anticipation Model, to investigate the role of adaptation and anticipatory
mechanisms, and their interactions, in SMS. ADAM combines an established formal model of
adaptation with an anticipation process inspired by the notion of internal models. ADAM is
created in Simulink, a MATLAB-based simulation environment. ADAM can be implemented in
a real-time setup, creating a virtual synchronization partner. ADAM produces an auditory
pacing signal, and can parametrically adjust the timing of this signal based on information
about the human participant's timing (via MIDI). The setup enables us not only to run
simulations but also to conduct experiments during which participants directly interact with
the model. In doing so, we investigate the effect of the different processes and their
interactions on SMS in order to gain knowledge about how SMS-based tasks might be
exploited in motor rehabilitation for different patient groups.
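
ADAM itself is built in Simulink; purely as a sketch of the adaptation half described above, the widely used linear phase-correction model below corrects a fraction alpha of the current asynchrony on each subsequent tap (all parameter values are illustrative assumptions).

```python
import numpy as np

def simulate_sync(n_taps=20, alpha=0.5, motor_sd=0.01, seed=1):
    """Asynchrony series under linear phase correction with motor noise."""
    rng = np.random.default_rng(seed)
    asynchronies = [0.05]                 # start 50 ms out of phase
    for _ in range(n_taps - 1):
        a = asynchronies[-1]
        # next asynchrony: the uncorrected fraction plus timing noise
        asynchronies.append((1 - alpha) * a + rng.normal(0, motor_sd))
    return np.array(asynchronies)

a = simulate_sync()
print("first five asynchronies (s):", np.round(a[:5], 3))
# With 0 < alpha < 2 the asynchrony converges toward zero, so synchrony
# is maintained despite motor noise.
```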


Electrophysiological Substrates of Auditory Temporal Assimilation Between
Two Neighboring Time Intervals

Takako Mitsudo*, Yoshitaka Nakajima, Gerard B. Remijn, Hiroshige Takeichi, Yoshinobu


Goto, Shozo Tobimatsu#
*Faculty of Information Science and Electrical Engineering, Kyushu University, Japan; Faculty
of Design, Kyushu University, Japan; RIKEN Nishina Center, Saitama, Japan; Faculty of
Rehabilitation, International University of Health and Welfare, Japan; #Faculty of Medical
Sciences, Kyushu University, Japan

Brain activities related to temporal assimilation, a perceptual phenomenon in which two
neighboring time intervals are perceived as equal even when their physical difference is
substantially larger than the difference limen, were observed. The neighboring time intervals
(T1 and T2 in this order) were marked by three successive 1000-Hz pure-tone bursts of 20
ms. Event-related potentials (ERPs) were recorded from 19 scalp locations while the
participants listened to the temporal patterns. Thirteen participants just listened to the
patterns in the first session, and judged the equality/inequality of the neighboring intervals
in the next session. The participant made his/her judgments on perceived
equality/inequality by pressing one of two buttons. First, T1 was varied from 80 to 320 ms in
steps of 40 ms, and T2 was fixed at 200 ms. About one year later, the same participants took
part in another experiment in which the procedures remained the same except that the
temporal patterns were reversed in time. Behavioral data showed typical temporal
assimilation; equality appeared in an asymmetrical categorical range T1-T2 = -80 to 50 ms.
Electrophysiological data showed a contingent negative variation (CNV) during T2 in the
frontal area, which might reflect the process of memorizing the length of T1. A slow negative
component (SNCt) after the presentation of T1 and T2 appeared in the right-frontal area, and
continued up to about 400 ms after the end of T2; this component was larger when
perceptual inequality took place. (Supported by JSPS)

Speed Poster Session 9: Timber I Hall, 11:40-12:10


Emotional responses & affective experiences I

Emotion in Music: Affective Responses to Motion in Tonal Space

Marina Korsakova-Kreyn, * Walter Jay Dowling #


* School of Music and the Arts, NJ, USA
# The University of Texas at Dallas, USA

Tonal modulation is the reorientation of a scale on a different tonal center in the same
musical composition. Modulation is one of the main structural and expressive aspects of
music in the European musical tradition. Although it is known a priori that different degrees
of modulation produce characteristic emotional effects, these effects have not yet been
thoroughly explored. We conducted two experiments to investigate affective responses to
tonal modulation by using semantic differential scales related to valence, synesthesia,
potency, and tension. Experiment 1 examined affective responses to modulation to all 12
major and minor keys using 48 brief harmonic progressions. The results indicated that
affective response depends on degree of modulation and on the use of the major and minor
modes. Experiment 2 examined responses to modulations to the subdominant, the dominant,
and the descending major third using a set of 24 controlled harmonic progressions and a
balanced set of 24 excerpts from piano compositions belonging to the First Viennese School
and the Romantics; all stimuli were in the major mode to maintain the ecological validity of
modulation to the dominant. In addition, Experiment 2 investigated the affective influence of
melodic direction in soprano and bass melodic lines. The results agreed with the theoretical
model of pitch proximity based on the circle of fifths and demonstrated the influence of
melodic direction and musical style on emotional response to reorientation in tonal space.
Examining the affective influence of motion along different tonal distances can help deepen
our understanding of aesthetic emotion.
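
A hedged sketch of the key-distance notion invoked above (the metric and key spellings are illustrative, not the authors' materials): modulation distance counted as steps along the circle of fifths.

```python
# Major keys ordered by ascending fifths.
FIFTHS = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]

def fifths_distance(key_a, key_b):
    """Shortest path between two keys on the circle of fifths (0..6)."""
    d = abs(FIFTHS.index(key_a) - FIFTHS.index(key_b)) % 12
    return min(d, 12 - d)

print(fifths_distance("C", "G"))    # dominant: 1 step
print(fifths_distance("C", "F"))    # subdominant: 1 step
print(fifths_distance("C", "G#"))   # descending major third (C to Ab): 4 steps
```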


Voice Multiplicity Influences the Perception of Musical Emotions

Yuri Broze* & Brandon Paul#


*School of Music, Ohio State University, USA
#Department of Speech and Hearing Science, Ohio State University, USA

A polyphonic musical texture can be described in terms of its voice multiplicity: the number
of simultaneous musical voices present. We conjectured that listeners might make use of
voice multiplicity information when inferring the expression of musical emotions. In
particular, we hypothesized that ratings of musical loneliness would be highest for
monophonic music, and decrease as more voices are added to the texture. Moreover, voice
multiplicity should only influence emotion perception to the extent that it can be accurately
perceived. In an experimental study, listeners were asked to rate brief (5s) musical excerpts
for expression of happiness, sadness, loneliness, and pride. We controlled for style, motivic
content, timbre, and loudness by excerpting harpsichord recordings of fugue expositions
from Bach's Well-Tempered Clavier. Higher loneliness and sadness ratings were associated
with fewer musical voices; loneliness showed a stronger effect than sadness. The effect of
voice multiplicity was consistent with the pattern predicted by limitations in stream
segregation. Unexpectedly, listeners were much more likely to make strong emotion ratings
for monophonic textures than for any other multiplicity level, and multiplicity effects seemed
to be greater for loneliness and pride ratings than for sadness and happiness ratings.
Preliminary results from a second study using an expanded between-groups design are
consistent with the idea that positively-valenced emotions are more easily perceived when
more musical voices are present, whereas negatively-valenced emotions are perceived more
strongly when fewer voices are present.


Multisensory Perception of Six Basic Emotions in Music

Ken-ichi Tabei,* Akihiro Tanaka#


*Department of Dementia Prevention and Therapeutics, Graduate School of Medicine, Mie
University, Japan; #Department of Psychology, Tokyo Woman's Christian University, Japan

The interaction between auditory and visual information is known to influence emotion
judgments by using audiovisual speech stimuli (i.e., face-voice combinations). In contrast,
little is known about how emotion perception changes when the musician's facial and bodily
movements can be seen as well as heard. In the present study, we applied a paradigm often
used in face-voice emotion perception to music performance to examine the interaction
between musical sound and facial and bodily movements in perceiving emotion from music
performance. Results showed that the performances in the Audio (A), Visual (V), and Audio-
Visual (AV) conditions were dependent on the combination of instruments and emotions:
angry expressions by the cellists and sad expressions by the violinist were perceived better in
the V condition, while disgust expressions by the pianist were perceived better in the AV
condition. While previous studies have shown that visual information from facial expressions
facilitates the perception of emotion from emotional prosody in speech, musicians' facial and
bodily movements did not necessarily enhance the perception of emotion from musical sound.
This pattern suggests that multisensory perception of emotion from music performance may
be different from that from audiovisual speech.


New perspective of peak emotional response to music: The psychophysiology
of tears

Kazuma Mori,*# Makoto Iwanaga*


*Graduate School of Integrated Arts and Sciences, Hiroshima University, Japan
# Research Fellow of the Japan Society for Promotion of Science

Music sometimes induces peak emotion. Previous studies examined musical chills (feelings of
goose bumps and shivers down the spine) as a peak emotional response to music. Our
previous study, however, revealed that musical tears (feelings of weeping and a lump in the
throat) seem to be another peak emotional response to music. The present study
examined the psychophysiological states induced by musical tears. Thirty-four students
listened to self-selected tear music and other-selected neutral music. During music listening,
the participants pressed a mouse button when they felt a sense of tears. They also moved the
mouse right and left to provide continuous real-time recordings of subjective emotional valence
(pleasure-displeasure). Simultaneously, autonomic nervous activity such as heart rate,
respiratory rate and skin conductance response was recorded. We compared the time series of
subjective emotion and physiological responses accompanying the sense of tears between
listening to self-selected tear music and listening to other-selected neutral music. The
results showed that the participants exhibited a monotonic increase in subjective pleasure in
the fifteen seconds before and after tears onset. They also exhibited respiratory rate decreases
that rapidly subsided after tears onset. The decreasing respiratory rate indicates that, after tears
onset, the participants experienced activation of the parasympathetic nervous system. These
results show that musical tears slowly induce a peak pleasurable and physiologically
calming state. Previous studies, on the other hand, confirmed that musical chills induce a fast
peak pleasurable and physiologically arousing state. We conclude that musical tears give rise
to a different peak pleasurable state from musical chills.


Musical Emotions: Perceived Emotion and Felt Emotion in Relation to Musical
Structures

Ai Kawakami,1,2 Kiyoshi Furukawa,1 Kazuo Okanoya2,3,4


1 Graduate School of Fine Arts, Tokyo University of the Arts, JAPAN
2 Emotional Information Joint Research Laboratory, RIKEN BSI, JAPAN
3 JST, ERATO, OKANOYA Emotional Information Project, JAPAN
4 Graduate School of Arts and Sciences, The University of Tokyo, JAPAN

Musical emotions are an integration of two kinds of emotions: perceived emotion and felt
emotion. In this study, we hypothesized that perceived emotion would not necessarily
correspond to felt emotion, particularly in response to low consonant music such as music in
a minor key. In addition, we investigated the effect of musical experience on the two
kinds of emotions. In total, 24 participants listened to 21 newly composed musical stimuli
and rated the intensities of perceived and felt emotions using a two-dimensional evaluation:
arousal (active/passive) and valence (pleasant/unpleasant). The results showed that the
perceived emotion did not always coincide with the felt emotion. Notably, participants who
had substantial musical experience rated the felt emotion as less unpleasant or more
pleasant than the perceived emotion in response to minor-key, dissonant and high note
density music. This finding may lead to a better understanding of why people sometimes like
or enjoy sad music.


Emotional features of musical pieces for a series of survival-horror games

Ryo Yoneda, Kohta Matsumoto, Shinya Kanamori, Masashi Yamada


Graduate School of Engineering, Kanazawa Institute of Technology

In recent years, the hardware and software of video games have developed substantially. This
has led to a rapid increase in the cost and time required to create high-quality content for a
video game. Therefore, once a game title sells successfully, producers tend to make that title
into a series, because the content can easily recover the cost of development. However, it is
rare for the original creators of a series to stay with it all the way through its life span,
because game creators tend to switch companies frequently. In the present study, emotional
features of musical pieces composed for Capcom's survival-horror title Resident Evil, in which
seven titles were released in the last 16 years, were rated using 24 semantic differential scales.
The results showed that the emotional features of the musical pieces were structured along
"pleasantness" and "excitation" axes. On this two-dimensional emotional plane, musical
pieces were plotted for each title. The distributions of the musical pieces were
consistent across five titles. This implies that the musicians and sound engineers retained the
original emotional features of the musical pieces through at least five of the titles.

Speed Poster Session 10: Timber II Hall, 11:40-12:10


Musical experience & preference

Background Music As A Risk Factor For Distraction Among Young Drivers: An IVDR Study

Warren Brodsky,* Zack Slor#


*Music Science Lab, Department of the Arts, Ben-Gurion University of the Negev, Beer-Sheva,
Israel; #Israel Center For Emotional Fitness, Zahala, Tel Aviv, Israel

Statistical data on road safety indicates that drivers between ages 16-24 account for a high
level of accidents and fatalities; in Israel 25% of severe accidents and 5% of fatalities occur during
the first two years of driving, and young novice drivers are 10-times more likely to be in an
accident during their first 500 miles. Ironically, the most common violations for this group
are speeding (37%) and lane weaving (20%), both of which correlate with in-cabin music
behavior (Brodsky, 2002). Young drivers regularly listen to fast-tempo highly energetic
aggressive music played at elevated volumes. This State of Israel National Road Safety
Authority study investigates music as a risk factor among young novice drivers. The study
employed two Learner's Vehicles fitted with in-vehicle data recorders (IVDR). Eighty-five
young novice drivers drove six trips: twice with preferred music brought from home, twice
with In-car alternative music (Brodsky & Kizner, 2012), and twice with no-music. For each
trip, 27 events were logged: a range of vehicle variables that were mechanical, behavioral, or
predetermined HMI interactions. The findings indicate that both frequency and severity of
driving violations were higher for trips with driver-preferred music than for trips with either
no music or in-car alternative music. We recognize that in-car listening will forever be part
of vehicular performance, and therefore future research should explore the effects of music
on driving performance. Developing and testing functional music backgrounds towards
increased driver safety is an important contribution of Music Science in the war against
traffic accidents and fatalities.


Conceptualizing the subjective experience of listening to music in everyday
life

Ruth Herbert
Music Dept., Open University, UK

Empirical studies of everyday listening often frame the way individuals experience music
primarily in terms of emotion and mood. Yet emotions - at least as represented by
categorical, dimensional and domain-specific models of emotion - do not account for the
entirety of subjective experience. The term 'musical affect' may equally relate to aesthetic,
spiritual, and 'flow' experiences, in addition to a range of altered states of consciousness
(Juslin & Sloboda, 2010), including the construct of trance. Alternative ways of
conceptualizing and mapping experience suggest new understandings of the subjective,
frequently multimodal, experience of music in daily life. This poster explores categorizations
of aspects of conscious experience, such as checklists of basic dimensions of characteristics
of transformations of consciousness (e.g. Pekala's Phenomenology of Consciousness
Inventory (PCI) or Gabrielsson and Lindström Wik's descriptive system for strong
experiences with music (SEM-DSM)), together with the potential impact of specific kinds of
consciousness upon experience (e.g. the notion of present-centred (core or primary) and
autobiographical (extended/higher-order) forms of consciousness (Damasio, 1999; Edelman,
1989)). Three recent empirical studies (Herbert, 2011) which used unstructured diaries and
semi-structured interviews to explore the psychological processes of everyday involvement
with music in a range of 'real-world' UK scenarios are referenced. Free
phenomenological report is highlighted as a valuable, if partial means of charting subjective
experience. Importantly, it constitutes a method that provides insight into the totality of
experience, so enabling researchers to move beyond the confines of emotion.


The impact of structure discovery on adults' preferences for music and dance

Jennifer K. Mendoza, Naomi R. Aguiar, Dare Baldwin


Department of Psychology, University of Oregon, USA

In our society, music features prominently from Beethoven to Lady Gaga concerts, and from
singing on Broadway to singing in the shower. Why is music such a pervasive part of our
world? Why do we derive such pleasure from our musical experiences? Our research
investigates these questions, exploring how adults' musical processing affects musical
preferences. Specifically, we seek to determine whether adults' structure discovery impacts
their subjective liking of music. Similarities in structural organization make music and
dynamic action domains ripe for comparison. Given the intimate connection between dance
and music, our research also examines whether structure discovery relates to subjective
liking in the field of dance. We created music and dance stimuli with matching structure.
Each undergraduate participant either views the dance stimuli or listens to the music stimuli
at her own pace using the dwell-time methodology (Hard, Recchia, and Tversky, 2011). If
adults dwell longer at points where one phrase ends and the next begins in the stimuli, we
can infer that they discovered the structure in both domains. Participants will rate their
subjective liking of the dance or the music. We predict that adults who discover the structure
will report higher ratings of subjective liking. Our research also explores the effects of
stimulus complexity and domain expertise on the relationship between structure discovery
and subjective liking for both music and dance. If our research yields the predicted results,
then we will have initial confirmation that structure discovery impacts adults' subjective
liking of both music and dance.


Values, Functions of Music, and Musical Preferences

Hasan Gürkan Tekman,* Diana Boer,# Ronald Fischer*


*Psychology Department, Yaşar University, Turkey
#School of Humanities and Social Sciences, Jacobs University Bremen, Germany
*School of Psychology, Victoria University of Wellington, New Zealand

One function of music that is recognized cross-culturally is helping shape identity and values.
Moreover, values may determine which functions of music people use and which musical
styles are suited to serve different functions. This study had three main aims. First, we
examined the structure of musical style preferences of a Turkish sample. Second, we
examined the relations between value orientations, functions of music and musical
preferences. Third, we searched for mediating effects of functions of music that explain the
link between values and musical preferences. Two hundred and forty-six students of Uludağ
University in Bursa, Turkey filled in a questionnaire in which they were asked about the
importance of 10 functions of music listening, their preferences for 16 musical styles and
their endorsement of self-enhancement, self-transcendence, openness to change, and
conservation values. Musical preferences could be summarized by five underlying
dimensions that mainly conformed to those obtained in other countries and in earlier
research in Turkey. While self-enhancement values were associated with preference for
contemporary styles, self-transcendence values were associated with preferences for
sophisticated styles. Sophisticated and intense styles were associated positively with
openness-to-change and negatively with conservation. Endorsement of openness-to-change
values was associated with the "intrapersonal and affective" and "socio-cultural and contemplative"
functions of music, whereas endorsement of conservation values was negatively associated
with these functions. Shaping values, expressing cultural identity, and dancing functions of
music had significant mediating roles in the relation between values and musical
preferences.
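
As a hedged illustration of the mediation logic described above (synthetic data; not the authors' analysis), the indirect effect of a value orientation on preference via a function of music can be estimated as the product of two regression coefficients:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares; returns [intercept, slopes...]."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

rng = np.random.default_rng(2)
values = rng.normal(size=200)                    # e.g. openness-to-change
function = 0.5 * values + rng.normal(size=200)   # e.g. a dancing function
pref = 0.4 * function + 0.1 * values + rng.normal(size=200)

a = ols(values, function)[1]                     # value -> function path
b = ols(np.column_stack([values, function]), pref)[2]   # function -> pref
print("indirect (mediated) effect a*b =", round(a * b, 2))
```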

Paper Session 1: Grand Pietra Hall, 14:30-15:30


Music & language development

Categorization in music and language: Timbral variability interferes with infant categorization of melodies

Eugenia Costa-Giomi
Center for Music Learning, University of Texas-Austin, USA

Although timbre plays different roles in the organization of musical and linguistic
information, research has consistently shown its salience as a perceptual feature in both
music and language. Infants recognize phonemes and words despite variations in talkers'
voices early in life, but have difficulty recognizing short melodies played by different
instruments until they are 13 months old. It seems that during the first year of life, timbral
variability interferes with the categorization of melodies but not words. Because the
categorization of words and melodies is critical for the understanding of language and
western music respectively, it is surprising that the former seems to develop earlier than the
latter. But studies on infant categorization of linguistic stimuli have been based on the
recognition of single words or phonemes lasting less than a second, whereas those on infant
categorization of music stimuli have used sequences of tones lasting almost 6 seconds. We
conducted a series of experiments to directly compare the formation of categories in music
and language under timbral variability using melodies and phrases of the same length, speed,
and rhythmic features, and found that 11-month-olds categorized the language but not the
music stimuli. The findings suggest that the categorization of certain structural elements
emerges earlier in language than in music and indicate a predisposition for the formation of
timbral categories in auditory stimuli in general, even in cases in which such categories are
not structurally important.


Music, Language, and Domain-specificity: Effects of Specific Experience on
Melodic Pattern-Learning

Erin Hannon, Christina Vanden Bosch der Nederlanden


Psychology Dept., University of Nevada, Las Vegas, USA

Despite their surface similarities, music and language conform to distinct, domain-specific
rules and regularities. Experienced listeners presumably possess music-specific expectations
about which acoustic features will be most relevant in a musical context, but relatively little
is known about how and when this knowledge emerges over the course of development.
Given that melodic structure is of central importance in music but of secondary importance
in language, we report a set of experiments exploring the extent to which listeners with
different life-long listening experiences attend to or ignore melodic information in the
context of language or music. In all experiments we present listeners with a sequence of sung
syllable triplets whose syllables and/or pitches conform to an ABA or ABB pattern. We use
subsequent similarity ratings of novel sequences to determine which rule-like pattern
listeners inferred during the exposure phase. Some test items violate the established syllable rule
whereas others violate only the melodic rule. We compare performance on this task among
English-speaking non-musicians and musicians and among native speakers of a tonal
language (Chinese, Thai). We find a strong bias among non-musicians to give high similarity
ratings to test stimuli that conform to the syllable pattern, regardless of the melodic pattern.
This bias is attenuated or reversed (i.e. the melodic pattern is favored) for listeners with
music training or experience speaking a tonal language. Implications for the development of
music-specific knowledge and capacities will be discussed.

Paper Session 2: Crystal Hall, 14:30-15:30


Musical tension

The influence of structural features on perceived musical tension

Moritz Lehne, Martin Rohrmeier, Donald Gollmann, Stefan Koelsch


Cluster "Languages of Emotion", Freie Universität Berlin, Germany

In Western tonal music, a dynamic flow of tension and resolution is usually perceived. This
musical tension is related to various structural features of the music (e.g., dynamics, agogics,
melodic contour or harmony), however, the relative contribution of different features to the
experience of musical tension remains unclear. To explore how different features contribute
to the tension experience of the listener, we acquired continuous ratings of musical tension
for original and modified versions of two classical piano pieces. Modifications included
versions without dynamics, without agogics and versions in which harmony, melody and
outer voices were played in isolation. The influence of these features on subjectively
experienced tension was investigated by comparing average tension ratings of the different
versions using correlation analysis. In addition, we investigated the relation of perceived
tension and loudness of the music by comparing tension ratings to predictions of a loudness
model. Despite a general tendency towards flatter tension profiles, tension ratings for
versions without dynamics as well as versions without agogics correlated highly with ratings
for the original versions for both pieces. Correlations between tension ratings of the original
versions and ratings of harmony and melody versions as well as predictions of the loudness
model differed between pieces. Our findings indicate that discarding expressive features
generally preserves the overall tension-resolution patterns of the music. The relative
contribution of single features like loudness, harmony and melody to musical tension
appears to depend on idiosyncrasies of the individual piece.


The semantics of musical tension

Jens Hjortkjær
Department of Arts and Cultural Studies, University of Copenhagen, Denmark

The association between music and tension is a strong and long-standing one and yet the
psychological basis of this phenomenon remains poorly understood. Formal accounts of
musical grammar argue that patterns of tension and release are central to the structural
organization of music, at least within the tonal idiom, but it is not clear why structural
relations should be experienced in terms of tension in the first place. Here, I will discuss a
semantic view, suggesting that musical tension relies on cognitive embodied force schemata,
as initially discussed by Leonard Talmy within cognitive semantics. In music, tension ratings
studies tend to relate musical tension to continuous measures of perceived or felt arousal,
but here I will discuss how it may also relate to the ways in which listeners understand
musical events as discrete states with opposing force tendencies. In a behavioral tension
rating study, listeners rated tension continuously in musical stimuli with rapid amplitude
contrasts that could represent one of two force dynamic schemas: events either releasing or
causing a force tendency. One group of participants were primed verbally beforehand by
presenting an analog of the release-type schema in the experimental instructions. It was
found that primed subjects rated tension with a distinctly opposite pattern relative to the
unprimed group. The results support the view that musical tension relates to the ways in
which listeners understand dynamic relations between musical events rather than being a
simple continuous measure of arousal.

Paper Session 3: Dock Six Hall, 14:30-15:30


Motion & Gesture I

The Coupling of Gesture and Sound: Vocalizing to Match Flicks, Punches, Floats
and Glides of Conducting Gestures

Aysu Erdemir,1 Emelyne Bingham,2 Sara Beck,1 John Rieser1


1Psychology and Human Development in Peabody College, Vanderbilt University, USA
2Blair School of Music, Vanderbilt University, USA

The study was designed to explore whether there was a systematic relationship between
various hand gestures performed by an expert conductor, and accompanying vocal sounds
produced by adults with or without any kind of musical background. We explored whether
people automatically and systematically vary their utterances in a way to match the
movement characteristics of certain gestures. For this reason, we picked gestures that are
not contained in conducting manuals, but nevertheless seem familiar/natural in an everyday
life context. Participants watched videos of a conductor performing four different hand
gestures called flicks, punches, floats and glides, which varied in terms of their use of space
(direct/indirect), weight (strong/light) and time (sudden/sustained). Participants were
asked to produce the syllable /dah/ repeatedly in a way that feels natural to the four
gestures they observed visually. Audio-recordings of the vocal responses were scored by
three independent judges, whose task was to judge which type of gesture gave rise to each of
the vocal productions. Results showed that categorization accuracies were 94%, 96%, 80%
and 82% for flicks, punches, floats and glides respectively. Additional psychoacoustic
analysis on the sound data revealed significant associations of the motion characteristics of
the gestures such as their use of space, weight & time to overall pitch, loudness & duration
levels of the utterances, respectively. The data collected imply a definable cross-modal
relationship between gesture and sound, where the visual effects from the kinematics of
movement patterns are automatically translated into predictable auditory responses.

Seeing Sound Moving: Congruence of Pitch and Loudness with Human Movement and Visual Shape

Dafna Kohn,1 Zohar Eitan2


1Levinsky College of Education, Israel, 2School of Music, Tel Aviv University, Israel

We investigate listeners' evaluations of correspondence between pitch or loudness contours
and human motion (Exp1) or visual shape (Exp2). In Exp1 32 adult nonmusicians watched
16 audiovisual stimuli (a videotaped dancer), which systematically combined bidirectional
changes in pitch or loudness with bidirectional vertical or horizontal (opening and closing)
human motion. Participants ranked how well the music and movement in each audiovisual
stimulus matched. Significant correspondences were found between loudness change and
both vertical and horizontal motion, while pitch changes corresponded with vertical motion
only. Perceived correspondences were significantly stronger for loudness, as compared to
pitch, and for vertical, as compared to horizontal movement. Congruence effects were also
significantly higher for convex (inverted-U) as compared to concave (U-shaped) change
contours, both musical (e.g., pitch rise-fall as compared to fall-rise) and motional (e.g.,
opening-closing vs. closing-opening). In Exp2 the same participants were presented with the
same music stimuli and with 4 static visual shapes, and selected the shape that best matched
each stimulus. Most participants chose the correct shape for each musical stimulus. Results
indicate that adult non-musicians strongly associate particular bodily movements and visual
shapes with particular changes in musical parameters. Importantly, correspondences were
affected not only by the local directions of motion (e.g., rise, fall), but by overall contours (in
both music and motion), such that mappings involving convex contours were stronger than
mappings involving concave contours. This suggests that cross-modal mappings may be
affected by higher-level patterning, and specifically that convex (inverted-U) patterns may
facilitate such mappings.

Paper Session 4: Timber I Hall, 14:30-15:30


Voice & performance

The Ideal Jazz Voice Sound: A Qualitative Interview Study

Ella Prem,1 Richard Parncutt, 2 Annette Giesriegl,3 Hubert Johannes Stigler4


1, 2 Centre for Systematic Musicology, University of Graz, Austria, 3 Department of Jazz,
University of Music and Dramatic Arts Graz, Austria, 4 Centre for Information Modelling,
University of Graz, Austria

The vocabulary of words and phrases used by jazz singers to describe jazz voice sound is the
subject of this research. In contrast to the ideal classical voice sound, which is linked to the
need to project over loud accompaniments (e.g. formant tuning), the ideal jazz voice sound
takes advantage of microphones enabling greater expressive variation. Implicit concepts of
ideal voice sounds influence teaching in conservatories and music academies but have been
the subject of little empirical investigation. We are interviewing 20 Austrian jazz singers. All
are or used to be students of jazz singing. In open interviews, each participant brings 10
examples of jazz singing and describes that singer's voice sound. The qualitative data are
represented in an XML database. XSLT stylesheets are used to create tag clouds, where the
size of a word reflects its number of occurrences. The vocabulary splits into a small core of
commonly used terms such as deep, spoken and diverse (25 descriptors used by more than
60% of the participants) and a large periphery of intuitive associations reflecting the
individuality of the perception, the description and the jazz voice sound itself (260 descriptors
are used by less than 10% of the participants). We explored the ideal jazz voice sound
without asking for it directly. Participants additionally showed remarkable motivation to
listen to different sounds to cultivate their individuality as jazz singers, raising questions
about the tension between uniformity and individuality in jazz pedagogy.
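
As an aside on the tag-cloud technique mentioned above, a minimal Python sketch of scaling word sizes by occurrence counts follows; the descriptor lists and point-size range are invented for illustration, and the study's actual XML/XSLT pipeline is not reproduced here.

```python
from collections import Counter

# Hypothetical descriptor lists collected from interviews; the real study
# stores such data in an XML database and renders tag clouds via XSLT.
interviews = [
    ["deep", "spoken", "warm"],
    ["deep", "breathy", "spoken"],
    ["deep", "diverse", "raspy"],
]

counts = Counter(word for descriptors in interviews for word in descriptors)

# Scale font sizes linearly between a minimum and maximum point size, so that
# the size of a word reflects its number of occurrences.
MIN_PT, MAX_PT = 10, 40
most = max(counts.values())
for word, n in counts.most_common():
    size = MIN_PT + (MAX_PT - MIN_PT) * (n - 1) / max(most - 1, 1)
    print(f"{word}: {n} occurrence(s) -> {size:.0f}pt")
```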


Inaccurate singing as a dynamic phenomenon: Interval matching a live vocal
model improves accuracy levels of inaccurate singers

Rona Israel-Kolatt, Roni Granot


The Hebrew University, Israel

One of the most powerful and enjoyable gifts given to man is the ability to communicate with
others in song. But for some the gift remains unwrapped. One aspect of such non-singing,
which has received much attention in recent years, is "out of tune" (OOT) singing. Previous
studies have found that accuracy of singing or level of OOT is not a static factor. Recent
research suggests that the degree of acoustical/physical match of the stimuli source (in
terms of vocal range and timbre), to those of a participant, has a significant influence on
accuracy levels. This in turn suggests some involvement of a mirror system which could be
enhanced when the target tones are produced by a live visible human source. In the current
experiment we asked a group of participants, who varied in their ability to sing accurately, to
vocally match target intervals produced in five different manners: a live voice of a
professional soprano, two versions of her recorded voice, one defined as optimal vocal
production and the other defined as poor, "forced" vocal production, a piano played "live" in
front of the participants, and a recorded piano. Preliminary findings suggest a significant
improvement in accuracy when participants matched intervals produced vocally in
comparison to intervals produced by a piano. Furthermore, the improvement was
significantly heightened in the live voice condition.

Paper Session 5: Timber II Hall, 14:30-15:30


Neurocognitive approaches

Neurocognitive profile of musicians

Mari Tervaniemi
Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Finland
Department of Psychology, University of Jyväskylä, Finland
Centre of Interdisciplinary Music Research, Department of Music, University of Jyväskylä,
Finland

In the neurosciences of music, musicians have traditionally been treated as a unified group.
However, their musical profiles obviously differ, for instance in terms of the major
instrument they play and the music genre they are mostly engaged with, as well as
their practicing style. Here our intention was to reveal the neurocognitive functions
underlying the diversity of the expertise profiles of musicians. To this end, groups of adult
musicians (jazz, rock, classical, folk) and a group of non-musicians participated in brain
recordings (event-related potentials in the mismatch negativity (MMN) paradigm, which
probes the brain's automatic reaction to any change in the sound environment). The auditory
stimulation consisted of a short melody which included mistakes in pitch, rhythm, timbre,
key, and melody. During stimulation, the participants were instructed to watch a silent video.
Our interest was in relating the MMN response evoked by the mistakes to the genre the
musicians are most actively involved in. We found that all melodic mistakes elicited MMN
response in all adult groups of participants. The strength of MMN and a subsequent P3a
response reflects the importance of various sound features in the music genre they
specialized in: pitch (classical musicians), rhythm (classical and jazz musicians), key
(classical and jazz musicians), and melody (jazz and rock musicians). In conclusion, MMN
and P3a brain responses are sensitively modulated by the genre the musicians are actively
engaged with. This implies that not only musical expertise as such but also the type of musical
expertise can further modulate auditory neurocognition.


Absolute Pitch and Synesthesia: Two Sides of the Same Coin? Shared and
Distinct Neural Substrates of Music Listening

Psyche Loui, Anna Zamm, Gottfried Schlaug


Department of Neurology, Beth Israel Deaconess Medical Center and Harvard Medical School,
USA

People with Absolute Pitch can categorize musical pitches without a reference, whereas
people with tone-color synesthesia can see colors when hearing music. Both of these special
populations perceive music in an above-normal manner. In this study we asked whether AP
possessors and tone-color synesthetes might recruit specialized neural mechanisms during
music listening. Furthermore, we tested the degree to which neural substrates recruited for
music listening may be shared between these special populations. AP possessors, tone-color
synesthetes, and matched controls rated the perceived arousal levels of musical excerpts in a
sparse-sampled fMRI study. Both APs and synesthetes showed enhanced superior temporal
gyrus (STG, secondary auditory cortex) activation relative to controls during music listening,
with left-lateralized enhancement in the APs and right-lateralized enhancement in the
synesthetes. When listening to highly arousing excerpts, AP possessors showed additional
activation in the left STG whereas synesthetes showed enhanced activity in the bilateral
lingual gyrus and inferior temporal gyrus (late visual areas). Results support both shared
and distinct neural enhancements in AP and synesthesia: common enhancements in early
cortical mechanisms of perceptual analysis, followed by relative specialization in later
association and categorization processes that support the unique behaviors of these special
populations during music listening.

Speed Poster Session 11: Grand Pietra Hall, 15:30-16:00
Language perspectives
Perceiving meaningful discourse structure in music and language

Jiaxi Liu
Faculty of Music, Cambridge University, United Kingdom

Despite common belief that music lacks truth-conditional meaning, recent evidence of
similar neural processing of the syntactic and semantic aspects of music and language
suggests that they have much in common (Steinbeis and Koelsch 2007). However, this
similarity seems to break down at different structural levels. Music studies have proposed
that listeners attend to local but not global structure (Tillmann and Bigand 2004, Deliège et al.
1997); linguistic data have yet to distinguish the level of meaningful structure perception.
Thus, this study aims to make parallel findings for both domains, additionally comparing
musicians to nonmusicians. Original musical and textual compositions were analysed for tree
structure by the Generative Theory of Tonal Music (Lerdahl and Jackendoff 1983) and the
Rhetorical Structure Theory (Carlson et al. 2001), respectively. The branches at each tree
depth were cut and randomized as audio-visual music clips and visual text slides in iMovie
projects. Collegiate native English speakers (50 musicians and 50 nonmusicians) were
asked to recreate what they considered the original work in a puzzle task. The resulting
ordered strings were analysed using edit distance, revealing that successful recreation was
overall independent of subject and stimulus type. Musicians performed better than
nonmusicians for music only at intermediate tree depths (p=0.03). Cluster analyses
suggested that musicians attended to structural (global) cues in their recreation process
while nonmusicians relied on surface (local) cues. These novel findings provide empirical
support for differing affinities for differing compositional features in music and language as
perceived by musicians versus nonmusicians.
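
The edit-distance analysis of the recreated orderings can be sketched minimally in Python; the segment labels below are hypothetical, and the abstract does not specify which variant of edit distance was used, so the standard Levenshtein form is assumed.

```python
def edit_distance(a, b):
    """Levenshtein distance: the minimum number of insertions, deletions and
    substitutions needed to turn sequence a into sequence b."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical puzzle-task data: the original segment order of a composition
# versus a participant's recreated order.
original = ["A", "B", "C", "D", "E"]
recreated = ["A", "C", "B", "D", "E"]
print(edit_distance(original, recreated))  # -> 2
```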


Domain-generality of pitch processing: the perception of melodic contours and
pitch accent timing in speech

Tuuli H. Morrill,*# J. Devin McAuley,* Laura C. Dilley#*, David Z. Hambrick*


*Dept. of Psychology, Michigan State University, USA
#Dept. of Communicative Sciences and Disorders, Michigan State University, USA

It is unclear to what extent individuals with pitch processing deficits in music also show
speech processing deficits. In speech, pitch and timing information (i.e., prosody) frequently
convey meaning; listeners must perceive the timing of pitch changes (e.g., a peak on the
second syllable of digést (verb) vs. dígest (noun), on the first syllable). We investigate the
relationship between MBEA performance and pitch peak timing perception in speech,
controlling for individual differences in cognitive ability. Participants (n = 179) completed a
Cognitive Ability Battery, the Montreal Battery of Evaluation of Amusia (MBEA), and a
prosody test. Participants learned versions of a nonsense word with a pitch peak on the first
or second syllable (versions A and B), then completed an AXB discrimination task including
versions (X) with pitch peaks at intermediate temporal positions. Structural equation
modeling involved two steps: (1) Establishing a measurement model: predictor constructs
included latent variables representing fluid intelligence and working memory capacity
(Gf/WMC), crystallized intelligence (Gc), and music perception (MBEA) and (2) Tests for
effects of Gf, Gc, and MBEA on a latent variable representing prosody test performance
(Prosody); only MBEA was a significant predictor of Prosody (β = .55). MBEA accounted for
35.7% of variance in Prosody; Gf and Gc added < 1%. Results indicate music perception is
highly predictive of speech prosody perception; effects do not appear to be mediated by
cognitive abilities. This suggests pitch peak timing perception may be controlled by a
domain-general processing mechanism.


Expertise vs. inter-individual differences: New evidence on the perception of
syntax and rhythm in language and music

Eleanor Harding, Daniela Sammler, Sonja Kotz


Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

Language and music perception overlap in the realms of syntax (Koelsch, Gunter, Wittfoth, &
Sammler, 2005) and rhythm (Vuust, Roepstorff, Wallentin, Mouridsen, & Ostergaard, 2006;
Schmidt-Kassow & Kotz, 2008). Considering that native-speaker language proficiency is
subject to inter-individual variability (Pakulak and Neville, 2010) and that musical aptitude
is not strictly limited to musical experts (Bigand & Poulin-Charronat, 2006; Koelsch, Gunter,
& Friederici, 2000), this ongoing study collects individual working memory and rhythm
performance data among musicians and non-musicians and correlates natural aptitude with
language- and music-syntax perception as a function of rhythm. In discrete sessions,
participants were asked to detect syntactic differences in sentences and melodies, making an
uninformed choice as to whether paired items were 'same' or 'different.' The sentence and
melody pairs to be discriminated were either spoken/played in a regular or irregular rhythm.
Comparing musicians to non-musicians, musicians showed globally better performance in
melody discrimination; however, working memory capacity and rhythm aptitude
correlated with task performance across all participants. Results indicate that variance in the
data may be linked to individual 'affinity' for regular-rhythm entrainment, irrespective of
musical expertise.

Music and the Phonological Loop

Lindsey M. Thompson1, Marjorie J. Yankeelov2

1Music, Mind and Brain, Goldsmiths, University of London, United Kingdom


2Dept. of Music, Belmont University, United States

Research on the phonological loop and music processing remains inconclusive. Some
researchers claim that the Baddeley and Hitch Working Memory model requires another
module for music processing while others suggest that music is processed in a similar way to
verbal sounds in the phonological loop. The present study tested musical and verbal memory
in musicians and non-musicians using an irrelevant sound-style working memory paradigm.
It was hypothesized that musicians (MUS; at least seven years of musical training) would
perform more accurately than non-musicians (NONMUS) on musical but not verbal memory.
Verbal memory for both groups was expected to be disrupted by verbal irrelevant sound
only. In the music domain, a music expertise x interference type interaction was predicted:
MUS were expected to experience no impairment under verbal irrelevant sound whereas
NONMUS would be impaired by verbal and musical sounds. A standard forced choice
recognition (S/D) task was used to assess memory performance under conditions of verbal,
musical and static irrelevant sound, across two experiments. On each trial the irrelevant
sound was played in a retention interval between the to-be-remembered standard and
comparison stimuli. Thirty-one musically proficient and 31 musically non-proficient Belmont
University students participated across two experiments with similar interference
structures. Results of two-way balanced ANOVAs yielded significant differences between
musical participants and non-musical participants, as well as significant differences between
interference types for musical stimuli, implying a potential revision of the phonological loop
model to include a temporary storage subcomponent devoted to music processing.

Speed Poster Session 12: Crystal Hall, 15:30-16:00
Melodic similarity
Implicit and explicit judgements on the melodic similarity of cases of
plagiarism and the role of computational models
Anna Wolf,* Daniel Müllensiefen#
*Hanover Music Lab, Hanover University of Music, Drama and Media, Germany
#Department of Psychology, Goldsmiths College, University of London, United Kingdom

Computational similarity measures have proven to be invaluable in the classification,
retrieval and comparison of melodies (e.g. Eerola & Bregman, 2007). A commercially very
relevant application is their use in cases of musical plagiarism (Müllensiefen & Pendzich,
2009; Cason & Müllensiefen, 2012). However, apart from a few notable exceptions (e.g.
Müllensiefen & Frieler, 2004) there is surprisingly little psychological evidence to validate
the cognitive adequacy of the proposed algorithms. In an implicit memory paradigm
participants (n = 36) were exposed to 20 melodies while performing cover tasks. In a subsequent
test phase participants listened to 30 melodies (15 similar to melodies from initial phase, 10
neutral, 5 identical) to identify which ones they had heard before. For this task we used
melodies from court cases from the US and the Commonwealth. Participants' judgments
agreed fairly well with the courts' decisions (AUC of .70). Many of the computational
measures of similarity correlate highly with the participants' data, such as a Tversky (1977)
feature-based measure (r=.59) and a duration-weighted Edit Distance (r=.51). The court
decisions are best classified by an Earth Mover's Distance measure (AUC of .84; Typke,
Wiering & Veltkamp, 2007) and the Tversky measure (AUC of .69). Participants are able to
distinguish, to a good degree, between melodies classified as plagiarism and those rejected.
However, it has to be noted that, aside from melodic similarity, factors such as knowledge of
either song, lyrics or the title can also significantly influence the courts' decisions.
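
A minimal sketch of a Tversky (1977) feature-based similarity of the kind cited above follows; the interval n-gram feature sets and the weights alpha = beta = 0.5 (which reduce the index to the Dice coefficient) are illustrative assumptions, not the study's actual parameterization.

```python
def tversky(a, b, alpha=0.5, beta=0.5):
    """Tversky (1977) set-theoretic similarity between two feature sets."""
    a, b = set(a), set(b)
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

# Hypothetical melodic feature sets (e.g. n-grams of pitch intervals in
# semitones); the study derives such features from the court-case melodies.
plaintiff = {"+2+2-4", "+2-1-2", "0+4-2"}
defendant = {"+2+2-4", "+2-1-2", "-3+1+2"}
print(round(tversky(plaintiff, defendant), 2))  # -> 0.67
```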


Towards Modelling Variation in Music as Foundation for Similarity

Anja Volk,# W. Bas de Haas, # Peter van Kranenburg*


#ICS, Utrecht University, Netherlands; *Meertens Institute, Amsterdam, Netherlands

This paper investigates the concept of variation in music from the perspective of music
similarity. Music similarity is a central concept in Music Information Retrieval (MIR),
however there exists no comprehensive approach to music similarity yet. As a consequence,
MIR faces the challenge of how to relate musical features to the experience of similarity by
listeners. Musicologists and studies in music cognition have argued that variation in music
leads to the experience of similarity. In this paper we review the concept of variation from
three different research strands: MIR, Musicology, and Cognitive Science. We show that all of
these disciplines have contributed insights to the study of variation that are important for
modelling variation as a foundation for similarity. We introduce research steps that need to
be taken to model variation as a base for music similarity estimation within a computational
approach.


Melodic Similarity: A Re-examination of the MIREX2005 Data
Alan Marsden
Lancaster Institute for the Contemporary Arts, Lancaster University, UK

Despite a considerable body of research, there is no clarity about the basic properties of
melodic similarity, such as whether or not it constitutes a metric space, or whether it is a
more complex phenomenon. An experiment conducted by Typke et al., used as a basis for the
MIREX2005 melodic-similarity modelling contest, represents a particularly rich source of
data. In the experiment, for each of eleven queries (melodies taken from RISM A/II), about
25 experts ranked some of about 50 candidates for similarity with the query. A Monte Carlo
approach has been taken in re-examining this data, simulating data in the same form on the
basis of simple assumptions about the nature of melodic similarity. Statistical properties of
the actual data were compared with the same properties for 10,000 sets of simulated data,
allowing estimation of the significance of differences found. In terms of overall measures
such as the ranking profile for each candidate, quite good simulations (i.e., sets of simulated
data in which the original falls within the second and third quartiles in the measured
property) arose from stochastic ranking based only on the mean and variance of the actual
ranking for each candidate and on the likelihood of the candidate being selected for ranking.
However, the simulations did show evidence, in a substantial minority of cases, of a tendency
for some candidates to be ranked higher or lower depending on the presence of another
candidate, and of the influence of similarity between candidates.
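
The Monte Carlo logic described above (comparing a statistic of the real data against the same statistic computed on many simulated data sets) can be sketched as follows; the null model, the statistic, and the observed value are all placeholder assumptions, not the study's actual ranking model.

```python
import random

def simulate_statistic(rng):
    """One simulated data set under a toy null model: the mean of 25
    standard-normal draws, standing in for a statistic of stochastic
    rankings generated from per-candidate means and variances."""
    return sum(rng.gauss(0.0, 1.0) for _ in range(25)) / 25

rng = random.Random(1)
observed = 0.45                 # hypothetical statistic from the real data
sims = sorted(simulate_statistic(rng) for _ in range(10_000))

# Where the observed value falls within the simulated distribution estimates
# the significance of any difference; an original value lying between the
# second and third quartiles counts as a "quite good" simulation in the
# abstract's terms.
rank = sum(s < observed for s in sims)
print(f"observed statistic falls at percentile {100 * rank / len(sims):.1f}")
```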

On Identifying Folk Song Melodies Employing Recurring Motifs

Peter van Kranenburg,* Anja Volk,# Frans Wiering#


*Meertens Institute, Amsterdam, Netherlands; #ICS, Utrecht University, Netherlands

The recurrence of characteristic motifs plays an important role in the identification of a folk
song melody as a member of a tune family. Based on a unique data set with expert annotations
of motif occurrences in a collection of Dutch folk song melodies, we define 15 abstract motif
classes. Taking a computational approach, we evaluate to what extent these 15 motif classes
contribute to automatic identification of folk songs. We define various similarity measures
for melodies represented as sequences of motif occurrences. In a retrieval experiment,
alignment measures appear the most successful. The results are additionally improved by
taking into account the phrase position of motif occurrences. These insights motivate future
research to improve automatic motif detection and retrieval performance, and to determine
similarity between melodies on the basis of motifs.
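
The alignment measures reported as most successful can be illustrated with a minimal global-alignment sketch over motif sequences; the scoring scheme and the motif-class sequences are invented, and the study's actual parameters (including the phrase-position refinement) are not reproduced.

```python
def align_score(a, b, match=1.0, mismatch=-1.0, gap=-1.0):
    """Global (Needleman-Wunsch-style) alignment score between two melodies
    represented as sequences of motif-class labels."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i, x in enumerate(a, start=1):
        curr = [i * gap]
        for j, y in enumerate(b, start=1):
            sub = match if x == y else mismatch
            curr.append(max(prev[j - 1] + sub,   # (mis)match
                            prev[j] + gap,       # gap in b
                            curr[j - 1] + gap))  # gap in a
        prev = curr
    return prev[-1]

# Hypothetical motif-class sequences for two folk song melodies, using
# labels M1..M15 for the 15 abstract motif classes defined in the study.
song_a = ["M1", "M4", "M4", "M7", "M2"]
song_b = ["M1", "M4", "M7", "M2", "M9"]
print(align_score(song_a, song_b))  # -> 2.0
```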


A Melodic Similarity Measure Based on Human Similarity Judgments

Naresh N. Vempala, Frank A. Russo


Department of Psychology, Ryerson University, Canada

Music software applications often require similarity-finding methods. One instance involves
performing content-based searches, where music similar to what is heard by the listener is
retrieved from a database using audio or symbolic input. Another instance involves music
generation tools where compositional suggestions are provided by the application based on
user-provided musical choices (e.g. genre, rhythm and so on) or samples. The application
would then generate new samples of music with varying degrees of musical similarity.
Although several similarity algorithms such as edit distance methods and hidden Markov
models already exist, they are not fully informed by human judgments. Furthermore, only a
few studies have compared human similarity judgments with algorithmic judgments. In this
study, we describe a measure for determining the similarity between two melodies with a
one-note change, derived empirically from participant judgments via multiple linear
regression. Eight standard melodies of equal duration (eight notes) were systematically varied
with respect to pitch distance, pitch direction, tonal stability, rhythmic salience, and melodic
contour. Twelve comparison melodies with one-note changes were created for each
standard. These comparison melodies were presented to participants in transposed and non-
transposed conditions. For the non-transposed condition, predictors of similarity were pitch
distance, direction and melodic contour. For the transposed condition, predictors were tonal
stability and melodic contour. In a follow-up experiment, we show that our empirically
derived measure of melodic similarity yielded superior performance to the Mongeau and
Sankoff similarity algorithm. We intend to extend this measure to comparison melodies with
multiple note changes.
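
How such a measure can be derived from ratings by multiple linear regression is sketched below; the predictor coding, the rating values, and the helper name predicted_similarity are fabricated for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical training data: each row codes the one-note change between a
# standard and a comparison melody (pitch distance in semitones, direction,
# contour change), and y holds mean participant similarity ratings.
X = np.array([
    [1, 0, 0], [2, 0, 0], [2, 1, 0], [3, 1, 1], [4, 0, 1], [5, 1, 1],
], dtype=float)
y = np.array([6.5, 6.0, 5.8, 4.9, 4.5, 3.7])

# Fit similarity = b0 + b1*distance + b2*direction + b3*contour_change
# by ordinary least squares.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_similarity(distance, direction, contour_change):
    return float(coef @ [1.0, distance, direction, contour_change])

print(round(predicted_similarity(3, 0, 1), 2))
```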

Speed Poster Session 13: Dock Six Hall, 15:30-16:00


Motion & timing

Using Body Movement to Enhance Timekeeping

Fiona Manning, Michael Schutz


McMaster Institute for Music and the Mind, McMaster University, Canada

We previously demonstrated that tapping along while listening to a tone sequence can offer
objective improvements in a listener's ability to detect deviations in that sequence's timing.
Previously, participants were asked to judge whether the final probe tone after a short
silence was consistent with the previous rhythm. Each trial contained three segments: (1)
the tempo-establishment segment (i.e., isochronous beats to establish tempo); (2) the
timekeeping segment (i.e., one measure of silence); and (3) the probe segment (i.e., the beat on
which the probe tone sounded). Our results indicated that when the probe tone occurred
later than expected, participants performed significantly better when moving compared to
listening only. In a follow up study, this effect was eliminated when participants moved for
all except the timekeeping segment (2) during the movement condition, demonstrating the
importance of moving during this segment. The present experiment was needed to assess
whether our previous results were due to (a) movement itself, or (b) participants simply
calculating the difference in timing between the probe tone and the produced tap. In this
experiment the movement condition contained tapping in segments 1 (tempo-
establishment) and 2 (timekeeping), but not 3 (probe). Participants performed significantly
better on the task when moving than when listening without moving. However, here the
effect of movement was less marked than the effect in the first experiment, when
participants tapped during all three segments. This experiment builds on our previous work
by confirming that moving to the beat actually improves timekeeping abilities in this
paradigm.

Effect of stimulus isochrony on movement kinematics in a child drummer prodigy

Jakub Sowinski, Nicolas Farrugia, Magdalena Berkowska, Simone Dalla Bella


Dept. of Psychology, WSFiZ in Warsaw, Poland

Most people, musicians and non-musicians alike (Sowiński & Dalla Bella, in preparation), can
easily synchronize their movement to a temporally predictable stimulus (i.e., via sensorimotor
coupling), such as a metronome or musical beat. The effects of sensorimotor coupling on
movement timing (e.g., as shown with the finger tapping paradigm) are well-known. In contrast,
little is known about the effects of sensorimotor coupling on movement kinematics during music
performance. Here this problem is examined in the case of IF, a 7-year-old child drummer
prodigy. IF revealed outstandingly precocious musical abilities as early as age 3 and is
exceptionally accurate and precise in synchronizing to auditory stimuli (Dalla Bella et al., in
preparation; Sowiński et al., 2009). In addition, IF's timing during performance is particularly
affected when producing a rhythmic pattern along with a non-isochronous metronome
(Sowiński et al., 2011). In this study we examined whether this effect extends to movement
kinematics, using motion capture. IF and children from music schools with 1-to-2.5 years of
percussion training (i.e., Control group) imitated on a percussion pad a short 6-note
isochronous metrical pattern (Strong-weak-weak-Strong-weak-weak) at the rate provided by
a metronome under four conditions: 1) with an isochronous metronome, 2) with an isochronous
metronome but making a break in between repetitions, 3) with a non-isochronous, still
predictable, metronome, and 4) with a non-isochronous and non-predictable metronome. Data
were analyzed with Functional Data Analyses techniques (Ramsay & Silverman, 2002). The
results showed that manipulating the metronome isochrony affected IF's movement kinematics
more than the Controls'. For IF, stimulus isochrony (in conditions (1) and (2)) led to higher
maximum amplitude of the top of the stick, an effect particularly visible in the vicinity of the strong
beats. In addition, Functional ANOVAs uncovered the portions of the trajectories where
differences between conditions are statistically significant. These analyses showed that for most
of the strokes produced in condition (2), movement amplitude, velocity and acceleration were all
higher than in conditions (3) and (4). These findings are in keeping with the effect of stimulus
isochrony on performance timing previously observed in IF. We suggest that synchronizing with a
non-isochronous sequence may have deleterious effects (visible both in timing and movement
kinematics) in individuals with exceptional sensorimotor coupling skills.

The influence of Spontaneous Synchronisation and Motivational Music on Walking Speed
Leon van Noorden,* Marek Frank #
* UNESCOG, Université Libre de Bruxelles, Belgium; *IPEM, Ghent University, Belgium
# University of Hradec Králové, Czech Republic

In each of three experiments 120 walks were made on a 2 km long circuit through various
environments. In the first two experiments 60 students walked twice, once without and once
with music or with different tempo ranges of music. The walkers had an mp3 player with
good headphones and a small camera fixed to their belt. Markers were drawn in the
environment. In the first experiment only 1 out of 60 walkers synchronised spontaneously to the
music. In the second experiment music was offered with a tempo closer to the walking tempo
of each subject. Three music tracks differing by 8% in tempo were prepared. Now 5 out of 35
walkers synchronised. The third experiment was not aimed at synchronisation. Music was
collected from the students: either music motivating for movement, or pleasant music that
did not urge them to move. These pieces were rated with the Brunel Music Rating Inventory-2. Half of the
120 students received the motivating music and half the non-motivating music. The
motivating music resulted in faster walks: 1.67 m/s vs 1.47 m/s. For music to stimulate the
movements of walkers, the movements need not be synchronised to the beat. This is in line with our
earlier experiments in which walkers were explicitly asked to synchronise. Some walkers did
not synchronise but still walked faster to fast music.

Music Moves Us: Beat-Related Musical Features Influence Regularity of Music-Induced Movement

Birgitta Burger, Marc R. Thompson, Geoff Luck, Suvi Saarikallio, Petri Toiviainen
Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music,
University of Jyväskylä, Finland

Listening to music makes us move in various ways. Several factors can affect the
characteristics of these movements, including individual factors, musical features, or
perceived emotional content of music. Music is based on regular and repetitive temporal
patterns that give rise to a percept of pulse. From these basic metrical structures more
complex temporal structures emerge, such as rhythm. It has been suggested that certain
rhythmic features can induce movement in humans. Rhythmic structures vary in their
degree of complexity and regularity, and one could expect that this variation influences
movement patterns: for instance, when moving to rhythmically more complex music, the
movements may also be more irregular. To investigate this relationship, sixty participants
were presented with 30 musical stimuli representing different genres of popular music. All
stimuli were 30 seconds long, non-vocal, and differed in their rhythmic complexity. Optical
motion capture was used to record participants' movements. Two movement features were
extracted from the data: Spatial Regularity and Temporal Regularity. Additionally, 12 beat-
related musical features were extracted from the music stimuli. A subsequent correlational
analysis revealed that beat-related musical features influenced the regularity of music-
induced movement. In particular, a clear pulse and high percussiveness resulted in small
spatial variation of participants' movements, whereas an unclear pulse and low
percussiveness led to greater spatial variation of their movements. Additionally, temporal
regularity was positively correlated to flux in the low frequencies (e.g., kick drum, bass
guitar) and pulse clarity, suggesting that strong rhythmic components and a clear pulse
encourage temporal regularity.

Speed Poster Session 14: Timber I Hall, 15:30-16:00


Performance studies I

Methods for exploring interview data in a study of musical shaping

Helen M. Prior
Music Department, King's College London, UK

The notion of shaping music in performance is pervasive in musical practice and is used in
relation to several different ideas, from musical structure to musical expression; and in
relation to specific musical features such as phrasing and dynamics. Its versatile and multi-
faceted nature prompted an interview study, which investigated musicians' use of the
concept of musical shaping in a practical context. Semi-structured interviews were
conducted with five professional violinists and five professional harpsichordists. These
interviews incorporated musical tasks that involved participants playing a short excerpt of
music provided by the researcher, as well as their own examples, to demonstrate their
normal playing, playing while thinking about musical shaping, and sometimes, playing
without musical shaping. These musical demonstrations were then discussed with
participants to elicit descriptions of their shaping intentions. This poster will illustrate the
multiple ways in which the interview data were examined, and explore the technical and
methodological implications of these approaches. First, an Interpretative Phenomenological
Analysis of the musicians' interview data revealed a wide range of themes. Secondly, Sonic
Visualiser was used to analyse their musical demonstrations, which allowed the examination
of the relationships between the musicians' shaping intentions, their actions, and the
resulting sound. Thirdly, the data were explored in relation to participants' use of metaphors,
which were expressed verbally, gesturally, and through musical demonstrations. The
exploratory nature of the research area has exposed the value of the adoption of multiple
approaches as the relationships between musical shaping and other research areas have
become apparent.

The effects of music playing on cognitive task performance

Sabrina M. Chang,* Todd C. Handy#


*Interdisciplinary Studies Graduate Program, University of British Columbia, Canada
#Department of Psychology, University of British Columbia, Canada

Many music cognition studies have demonstrated the cognitive benefits of both long- and
short-term musical training. Whereas most of these studies deal with the short-term
benefits for the music listener or the longer term benefits for the novice or accomplished
musician, our study examines the short-term effects of music playing for the advanced
performer. For our pretest-posttest design, we recruited 46 advanced classically/score-
based trained pianists. The participants completed a creative exercise (alternative uses task)
or detail-oriented exercise (proofreading task); they then performed a piano piece for ten
minutes. The performances were followed by completion of a second cognitive task
(whichever task they were not given in the pretest condition). No significant pretest-
posttest differences in creativity were reported. However, we found that participants
performed significantly worse in the posttest detail-oriented task. Our results suggest that
performance in a proofreading task involving the visual detection of errors may be hindered
immediately following a short period of music playing when the piece is already familiar to
the performer. One of the reasons may be that once a piece is learned to a certain degree, the
performance is no longer entirely score-based. At this stage, score reading involves
recognition and not the full cognitive process of reading something unfamiliar: there is no
longer a need to continuously check the musical page for errors. Hence, the participants in
this study were not primed for visual accuracy. It is also possible that the neural
underpinnings for error monitoring are minimally activated during higher-level motor
performance.


Accuracy of reaching a target key by trained pianists

Chie Ohsawa,* Takeshi Hirano,* Satoshi Obata, * Taro Ito,# Hiroshi Kinoshita*
*Graduate School of Medicine, Osaka University, Japan
#School of Health and Sports Sciences, Mukogawa Women's University, Japan

One fundamental element of successful piano playing is moving the fingertip to hit a key for
aimed tone production. We hypothesized that pianists with years of training would possess
relatively accurate spatial memory of a keyboard, and would thus be able to target any key
position without viewing a keyboard. This hypothesis was tested in 10 highly trained pianists,
each of whom sat on a chair facing a table on which either only a flat paper copy of the C4 key,
or a real-scale copy of a whole piano keyboard, was present. The participant moved their left
or right index finger to the target key (A1, F2, or E3 for the left hand; A4, G5 or E6 for the right hand)
after touching the reference key. Kinematics of the fingertip were recorded by a 3D motion-
capture system sampling at 60 Hz. Data were collected 10 times for each key. Constant,
absolute, and variable errors of the finger center relative to the center of the target key were
computed. Contrary to our hypothesis, errors in the no-keyboard condition were
considerable. The mean constant errors for A1, F2, E3, A4, G5, and E6 were 63.5, 58.6,
27.4, 6.2, 12.9, and 29.1 mm, respectively. Corresponding values for the keyboard condition
were all less than 2 mm. The right-left hand difference in errors suggests the presence of a
laterality bias in spatial memory. The larger positive constant errors for more remote keys
indicate that the spatial memory could be constructed from an expanded keyboard representation.

Evaluation parameters for proficiency estimation of piano based on tendency
of moderate performance

Asami Nonogaki,1 Norio Emura,2 Masanobu Miura,3 Seiko Akinaga,4 Masuzo Yanagida5
1Graduate School of Science and Technology, Ryukoku University, Japan; 2College of Informatics
and Human Communication, Kanazawa Institute of Technology, Japan; 3Faculty of Science and
Technology, Ryukoku University, Japan; 4Department of Education, Shukugawa Gakuin College,
Japan; 5Faculty of Science and Engineering, Doshisha University, Japan

This paper describes automatic estimation of piano performance proficiency for a Czerny
étude. Our previous study proposed a method of proficiency estimation for a one-octave scale
performance on a MIDI piano, in which a set of parameters was obtained and then applied to
the automatic estimation. However, it is not sufficient simply to apply those parameters to
other musical excerpts, since piano performance usually has several more complex aspects,
such as artistic expression. Here we introduce another set of parameters for automatic
estimation on a different musical task, a Czerny étude. Even though the task appears simple
because of its equal note intervals, players may deviate from equal timing in loudness, tempo,
and/or onset. We therefore introduce several parameters concerning tempo, duration,
velocity, onset time, normalized tempo, normalized duration, normalized velocity, normalized
onset, slope tempo, slope duration, slope velocity, and slope onset, where the normalized
parameters are computed relative to the average of all performances, termed here the
moderate performance. Principal Component Analysis was applied to all obtained parameters
to derive principal components, and a simple classification method (k-NN) was employed to
calculate the proficiency score. Results show that the correlation coefficients of the proposed
method are 0.798, 0.849, 0.793 and 0.516 for task A at 75 and 150 bpm and task B at 75 and
150 bpm, respectively, demonstrating the effectiveness of the proposed method.
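
The described pipeline (performance parameters, then Principal Component Analysis, then k-NN scoring) can be sketched as follows, assuming scikit-learn is available; the feature values, proficiency scores, and the numbers of components and neighbours are placeholder assumptions, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-performance features (tempo, duration, velocity, onset
# deviations plus their normalized and slope variants); the study extracts
# such parameters from MIDI-piano recordings of the Czerny task.
X = rng.normal(size=(40, 12))
scores = rng.uniform(1, 10, size=40)  # hypothetical proficiency ratings

# Principal Component Analysis followed by k-NN estimation of the
# proficiency score, mirroring the pipeline described in the abstract.
model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      KNeighborsRegressor(n_neighbors=3))
model.fit(X, scores)
print(model.predict(X[:2]))
```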


The Sung Performance Battery (SPB)

Magdalena Berkowska, Simone Dalla Bella


Dept. of Psychology, WSFiZ in Warsaw, Poland

Singing is as natural as speaking for humans. In spite of the general belief that individuals
without vocal training are inept at singing, there is increasing evidence that the layman can
carry a tune. This is observed when occasional singers are asked to sing a well-known
melody from memory and when they are asked to imitate single pitches, intervals and short
novel melodies. Different tasks are typically used in various experiments, making the
comparison of the results across studies arduous. So far there is no standard set of tasks
used to assess singing proficiency in the general population. To fill this gap we propose here
a new tool for assessing singing proficiency (the Sung Performance Battery, SPB). The SPB
starts from the assessment of participants' vocal range followed by five tasks: 1) single-pitch
matching, 2) interval-matching, 3) novel-melody matching, 4) singing from memory of
familiar melodies (with lyrics and on a syllable), and 5) singing from memory of familiar
melodies (again, with lyrics and on a syllable) at a slow tempo, as indicated by a metronome.
Data analysis is realized with acoustical methods providing objective measures of pitch
accuracy and precision (i.e., in terms of absolute and relative pitch) as well as of time
accuracy. To illustrate the SPB we report the results obtained with a group of 50 occasional
singers. The results indicate that the battery is useful for characterizing proficient singing
and for detecting cases of inaccurate and/or imprecise singing.
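
Pitch accuracy and precision of sung productions are commonly quantified in cents relative to a target pitch; a minimal sketch under that assumption follows (the abstract does not specify the battery's exact acoustic measures, and the F0 values below are invented).

```python
import math

def cents(f, f_ref):
    """Deviation of frequency f from reference f_ref, in cents."""
    return 1200.0 * math.log2(f / f_ref)

# Hypothetical sung fundamental frequencies (Hz) for repeated attempts at
# the target A4 = 440 Hz.
sung = [436.0, 442.5, 438.8, 444.1]
deviations = [cents(f, 440.0) for f in sung]

# Accuracy: mean signed deviation from the target (0 = on pitch).
# Precision: spread of the deviations across productions (sample SD).
accuracy = sum(deviations) / len(deviations)
precision = (sum((d - accuracy) ** 2 for d in deviations)
             / (len(deviations) - 1)) ** 0.5
print(f"accuracy {accuracy:+.1f} cents, precision (SD) {precision:.1f} cents")
```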

Speed Poster Session 15: Timber II Hall, 15:30-16:00


Neuroscience & emotion
Effect of sound-induced affective states on brain activity during implicit
processing of emotional faces

T.Quarto1,2,3, G.Blasi3, L.Fazio3, P.Taurisano3, B.Bogert1,2, B.Gold1,2, A.Bertolino3, E.Brattico1,2


1 Cognitive Brain Research Unit, Institute of Behavioral Science, University of Helsinki, Finland
2 Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland
3 Dipartimento di Neuroscienze ed Organi di Senso, Università degli Studi di Bari Aldo Moro

Social interaction involves perception and interpretation of facial expressions. Our ability to
recognize the emotions contained in facial expressions is influenced by our current affective
state. In a behavioural study we demonstrated that music impacts temporary affective state,
and that this modified affective state in turn alters the implicit processing of facial emotions.
To date, no study has revealed the neural substrates of these cross-modal effects of music
on visual emotions and affective state. We here investigate how affective state induced by
noise or music stimulation modulates the brain responses at a precognitive, automatic stage
of emotional face processing. 20 healthy subjects underwent functional magnetic resonance
imaging (fMRI) at 3 Tesla while performing an implicit emotion-processing task. In this task,
subjects were asked to identify the gender of angry and happy facial expressions while
listening to a relaxing music sequence or else while listening to amplitude-modulated noise.
Random-effect models on fMRI data (all p<0.001) revealed a main effect of sound stimulation
in bilateral prefrontal cortex (BA47) and a main effect of facial expression in left
supplementary motor area and left fusiform gyrus. An interaction between sound
stimulation and facial expression was present in right insula. Inspection of brain signal
demonstrated that subjects had greater activity in the right insula during processing of
happy faces with music background compared with the other experimental conditions. Our
results indicate that music and noise can alter current affective states, which, in turn,
modulate brain activity during implicit processing of facial emotions.


Musical emotion and facial expression: mode of interaction as measured by an
ERP
Keiko Kamiyama*, Dilshat Abla#, Koichi Iwanaga, and Kazuo Okanoya*

* Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo,

Japan; #Noninvasive BMI Unit, BSI-TOYOTA Collaboration Center, RIKEN Brain Science
Institute, Japan; Department of Design, Graduate School of Engineering, Chiba University,
Japan; Japan Science Technology Agency, ERATO, Okanoya Emotional Information Project,
Japan

Music has long been believed to express emotion through various elements of the music itself,
but it has been increasingly reported that musical expression interacts with extra-musical
factors. In order to reveal how these two emotional processes unfold in the brain, we
recorded the electroencephalogram (EEG) of amateur musicians and non-musicians. We
presented several pairs of musical excerpts and images of facial expressions, each of which
represented happy or sad expressions. Half of the pairs were semantically congruent
(congruent condition), where the emotional meaning of facial expression and music were the
same, and the remaining pairs were semantically incongruent (incongruent condition).
During the EEG recording, participants listened to the musical excerpt for 500 ms,
immediately after the presentation of the facial image for 500 ms. We found that music
stimuli elicited a larger negative component in the 250-450 ms range (N400) under the
incongruent condition than under the congruent condition, notably in musicians. Also, in
musicians the N400 effect appeared regardless of the emotional type of music, while in non-
musicians the effect was observed only when the happy music excerpts were presented as
target stimuli. These results indicated that the sadness of music was not automatically
extracted in non-musicians, although they could judge the congruency of stimulus pairs in the
behavioral test. It was also suggested that facial emotional cognition shares common
processes with musical emotional cognition and that the emotional meanings of music were
integrated with other semantic inputs such as facial expressions.


Experiential effects of musical pleasure on dopaminergic learning

Benjamin Gold,a,b Michael Frank,c Elvira Brattico,a,b


aCognitive Brain Research Unit, Institute of Behavioural Studies, University of Helsinki, Finland
bFinnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä,
Finland; cDepartment of Cognitive, Linguistic, and Psychological Sciences, Brown Institute for
Brain Science, Brown University, U.S.A.

Neuroimaging has linked music listening with dopaminergic areas implicated in emotion and
reward. Subjects with more striatal dopamine transmission generally learn better from
rewards, while those with less usually learn better from punishments. In this study we
explored the implications of musical pleasure through its ability to enhance dopamine
release by measuring its effect on reward-based learning in a dopamine-dependent
probabilistic selection learning task. Forty-five subjects (twenty-two musicians) selected
pleasurable and neutral music from an experimenter-created database, and were then
pseudo-randomly divided into four groups (balanced for musical experience) according to
which music they would hear during the Training and Test phases. In Training, participants
chose between stimuli of different reward probabilities and received feedback; the Test
consisted of recombined stimuli without feedback. All participants exceeded a learning
criterion, but non-musicians performed better when listening to pleasurable music whereas
musicians performed better when listening to neutral music. Going into the Test, participants
across groups and musical backgrounds had learned the task to similar levels. In the Test,
musicians switching from neutral music to pleasurable music performed better than other
subjects, while non-musicians in the same group responded the slowest. Overall, musical
pleasure had a greater effect on Training, enhancing dopaminergic learning in non-musicians
but distracting musicians, perhaps due to non-optimal striatal dopamine transmission. These
effects were complicated when participants switched musical conditions; pleasurable music
during Training distracted musicians but helped non-musicians, and at Test it benefited
musicians not affected by it in Training while non-musicians were less able to successfully
switch musical conditions.


Melodies without Words: Validity of Happy/Sad Musical Excerpts for Use in
ERP Studies

Viviane Cristina da Rocha, Paulo Srgio Boggio


Social and Cognitive Neuroscience Laboratory, Mackenzie University, Brazil

The aim of this study was to validate newly composed excerpts so that they could be used in a
subsequent ERP study. We also wished to better understand which characteristics, given
only a melody, subjects would rely on to judge whether it was a happy or sad piece of music.
A professional musician composed 80 melodies, 40 intentionally representative of sadness
and 40 representative of happiness. Some parameters were used to construct the excerpts,
such as tempo, mode, duration of notes, and tessitura. They were recorded by a professional
female singer. The stimuli were randomly presented to 19 subjects (10 female; mean age
22.6 years) using E-Prime. Subjects were asked to rate each excerpt on a scale of 1 to 7, 1
being sad, 4 being neutral and 7, happy. All of the subjects were non-musicians. The answers
were analyzed considering the mean score of each excerpt. The 30 excerpts with means close
to neutral (3, 4 or 5) were discarded. The remaining 50 stimuli were analyzed as to their
musical features. After the analysis, we concluded that subjects tended to guide their
evaluation by tempo (e.g., happy excerpts composed in a not-so-fast tempo were
discarded), tessitura and direction of melody (e.g., happy excerpts with a downward melody
were discarded), and duration of the notes (e.g., excerpts with staccato were the highest
rated). It's possible that, given that the subjects were non-musicians, they didn't rely
on mode as much as musicians would.

Symposium 1: Grand Pietra Hall, 17:00-18:30


Replication and truth in music psychology

Convener: Timo Fischinger, Discussants: Henkjan Honing, Diana Deutsch



Over recent years, the reliability and validity of findings in (general) psychology have been
seriously questioned. Often used arguments are, among others, the (now) well-known
publication bias, the ritual of statistical significance testing, the so-called 'decline effect', and,
last but not least, the lack of replication studies. Especially the last point is a serious issue in
music psychology, because most studies never get replicated, probably due to the rather
small size of the field. Consequently, meta-analyses are also scarce. This raises the serious
question of which findings in music psychology are really trustworthy and resilient, beyond
the merely trivial ones. In our view, there is a strong need to think about and discuss these
issues. Therefore, this symposium is intended as an initial contribution to a methodological
discussion about future needs in empirical music research. In the first presentation on "The
role of replication studies and meta-analyses in the search of verified knowledge", Reinhard
Kopiez will talk about the important functions of replication studies in general, referring to a
selected number of replication studies to illuminate the potential power of replications.
Michael Oehler et al. will then present their replication study on "Aspects of handedness in
Deutsch's octave illusion - a replication study". This paper gives new insights into the study
of the octave illusion, as well as showing how replications can be innovative in using
supplementary experimental paradigms. The third presentation on "Absolute memory for
music: Comparative replication studies of the Levitin Effect in six European laboratories"
will be about a larger replication project across six different labs in Germany and the UK.
Here, a widely cited but never replicated study in music psychology was repeated.

The role of replication studies and meta-analyses in the search of verified knowledge

Reinhard Kopiez
Hanover University of Music, Drama, and Media, Hanover Music Lab, Germany

In the natural sciences the replication of important findings plays a central role in the
creation of verified knowledge. However, in the discipline of psychology there is only one
attempt at a systematic reproduction of published studies (see the website of the
Reproducibility project, http://openscienceframework.org/project/shvrbV8uSkHewsfD4/wiki,
and the Project Progress and Results Spreadsheet). In music psychology, this self-
evident tradition of replication studies plays only a minor role. I will argue that replication
studies have two important functions: (a) as a best practice mechanism of academic self-
control which is necessary to prevent the publication of premature results; (b) as a reliable
way for the production and integration of verified knowledge which is important for the
advancement of every scientific discipline. Comparisons of selected replications with original
studies will demonstrate that the design of replications is a creative research strategy.
Replication studies discussed will come from topics such as music cognition, open-
earedness, or neuroscience of music. In a last step I will show the high power of meta-
analysis in the production of verified knowledge. This important method for the uncovering
of reliable effects by means of data aggregation from single studies should be extended in the
field of empirical music research. One consequence of the replication approach will be the
future need for an online repository of already conducted replication studies. This idea will
be discussed in the symposium.


Aspects of handedness in Deutsch's octave illusion - a replication study

Michael Oehler, Christoph Reuter, Harald Shandara, Michael Kecht


Macromedia University for Media and Communication, University of Vienna, Musicological
Institute, University of Vienna, Cognitive Sciences

An extended replication study of the octave illusion (Deutsch 1974, 1983) is presented. Since
the first description of the octave illusion in 1974 several studies showed that the perception
of the two-tone pattern depends on subjects' handedness. Most of the right-handed subjects
reported to hear the high tone of the octave at the right ear. Left-handed subjects either
perceive the high tone on the left ear or tend to perceive more complex tone patterns (39%).
In all related studies the handedness categorization was done by means of a questionnaire,
e.g. the handedness inventory of Varney and Benton (1975). Several current studies (e.g.
Kopiez, Galley, Lehmann 2010) however show that objective non-right-handed persons
cannot be identified by handedness inventories. In concordance with Annett's "right shift
theory" (2002) performance measurements as speed tapping seem to be a much more
reliable handedness predictor. It is supposed that more distinct perception patterns for the
right- and non-right-handed subjects can be obtained, when performance measures are used
for handedness classification. Especially the group size of right-handers in the original study
that perceive complex tone patterns (17%) is likely to be much smaller. In the replication
study Varney and Benton's handedness inventory as well as a speed tapping task were used
to classify left- and right-handed subjects. All 131 subjects (M=28.88, SD=10.21) were naive
concerning the octave illusion. The subjects' perception of the original two-tone pattern was
measured in a forced-choice task according to the categories used by Deutsch (octave, single,
complex). The results of Deutsch's study could be replicated when using the same
handedness inventory. The performance measurement task however led to a significantly
clearer distinction between the left- and right-handed subjects (w=.42, p=.0001 in contrast
to w=.20, p=.19 in the replication and w=.28, p<.05 in the original study) and more
structured perception patterns could be observed within the left-handed group. The group
size of the right-handed subjects that perceive complex patterns is significantly smaller
(w=.36, p=.0001) when using performance measures (5%) instead of the questionnaire
(replication: 15%, original study: 17%). All in all the results of Deutsch could be replicated.
Misclassification of handedness could be reduced and the observed perception patterns were
more distinct, when speed tapping was used for measuring handedness. Therefore
performance measurements might be a useful method in future studies that deal with
aspects of the octave illusion and handedness.

Absolute memory for music: Comparative replication studies of the Levitin effect in six European laboratories

Kathrin Bettina Schlemmer1, Timo Fischinger2, Klaus Frieler3, Daniel Müllensiefen4, Kai
Stefan Lothwesen5, Kelly Jakubowski6
1Katholische Universität Eichstätt-Ingolstadt, Germany, 2Universität Kassel, Germany,
3Universität Hamburg, Germany, 4,6Goldsmiths, University of London, UK, 5Hochschule für Musik
und Darstellende Kunst Frankfurt am Main, Germany

When analysing human long term memory for musical pitch, relational memory is commonly
distinguished from absolute memory. The ability of most musicians and non-musicians to
recognize tunes even when presented in a different key suggests the existence of relational
music memory. However, findings by Levitin (1994) point towards the additional existence
of absolute music memory. In his sample, the m ajority of non absolute pitch possessors could
produce pitch at an absolute level when the task was to recall a very familiar pop song
recording. Up to now, no replication of this study has been published. The aim of this paper is
to present the results of a replication project across six different European labs. All labs used
the same methodology, carefully replicating the experimental conditions of Levitins study.
In each lab, between 40 and 60 participants (primarily university students with different
majors, musicians and non-musicians) were tested. Participants recalled a pop song that they
had listened to very often, and produced a phrase of this song. The produced songs were
recorded, analysed regarding pitch, and compared with the published original version.
Preliminary results suggest that participants tend to sing in the original key, but slightly flat. The distribution of the data is significantly non-uniform, but more spread out than Levitin's data. The distributions differ significantly between the three labs analysed so far. Our replication study basically supports the hypothesis that there is a strong absolute component in pitch memory for very well-known tunes. However, a decline effect of results
could be observed as well as other effects to be discussed.
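
The central measurement in such a replication is how far each produced pitch deviates, in semitones, from the key of the original recording. A sketch of one plausible scoring scheme (the labs' exact analysis pipeline is not specified in the abstract); octave errors are folded out, as is usual in absolute-memory scoring:

```python
# Sketch: semitone deviation of a sung pitch from the original recording,
# ignoring octave errors. The frequency values are hypothetical.
import math

def hz_to_midi(f_hz: float) -> float:
    """Convert a frequency to a fractional MIDI note number."""
    return 69.0 + 12.0 * math.log2(f_hz / 440.0)

def deviation_semitones(sung_hz: float, original_hz: float) -> float:
    """Signed deviation in semitones, folded into [-6, +6) to ignore octaves."""
    d = hz_to_midi(sung_hz) - hz_to_midi(original_hz)
    return (d + 6.0) % 12.0 - 6.0

# a production roughly 40 cents flat of the original tonic
print(round(deviation_semitones(sung_hz=215.0, original_hz=220.0), 2))  # -0.4
```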

Paper Session 6: Crystal Hall, 17:00-18:30


Analysing historical styles

On the emergence of the major-minor system: Cluster analysis suggests the late 16th century collapse of the Dorian and Aeolian modes

Joshua Albrecht, David Huron


School of Music, Ohio State University, USA

Stable scale-degree distributions have been observed for an idealized version of the major
and minor scales. However, these scales developed out of an earlier system of modes. This
paper describes a corpus study conducted on works spanning the period in which the major
and minor modes were established as the dominant modes. The study involves 455 musical
works by 259 composers sampled across the years 1400 to 1750. Beginning with the period
1700-1750, a series of statistical studies are carried out on the distribution of scale tones,
progressively moving backward in time. The method utilizes a modified version of the
Krumhansl-Schmuckler method of key determination generalized to handle an arbitrary
number of modal classifications. The results from cluster analyses on this data are consistent
with the view that the modern minor mode emerged from the amalgamation of earlier
Dorian and Aeolian modes, with the collapse being completed around the late sixteenth
century.
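
The Krumhansl-Schmuckler method correlates a piece's pitch-class distribution with key profiles; generalizing it to an arbitrary number of modal classifications amounts to correlating against one profile per mode and transposition. A sketch with toy binary profiles (the study's actual profiles are not given in the abstract):

```python
# Sketch: Krumhansl-Schmuckler-style classification generalized to an
# arbitrary set of modes. Profiles and counts are illustrative only.
import numpy as np

def classify_mode(pc_counts, profiles):
    """Return (mode, transposition, r) whose rotated profile correlates
    best with the observed 12-bin pitch-class distribution."""
    best = (None, None, -np.inf)
    x = np.asarray(pc_counts, dtype=float)
    for name, profile in profiles.items():
        for t in range(12):  # try all 12 transpositions
            r = np.corrcoef(x, np.roll(profile, t))[0, 1]
            if r > best[2]:
                best = (name, t, r)
    return best

profiles = {  # toy scale-membership profiles, one per mode
    "aeolian": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "dorian":  [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0],
}
counts = [12, 0, 9, 10, 1, 8, 0, 11, 7, 1, 6, 0]  # hypothetical piece
print(classify_mode(counts, profiles))
```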

Estimating historical changes in consonance by counting prepared and
unprepared dissonances in musical scores

Richard Parncutt,1 Fabio Kaiser2 and Craig Sapp3


1,2 Centre for Systematic Musicology, University of Graz, Austria
3 CCARH, Stanford University, USA

As musical styles changed in Western history, so did concepts of consonance and dissonance
(C/D; Parncutt & Hair, 2011; Tenney, 1988). Sonorities considered dissonant gradually
became more consonant, consistent with the idea that familiarity is a psychological
component of C/D (cf. Cazden, 1945), other components being smoothness (Helmholtz,
1963) and harmonicity (Stumpf, 1883; Terhardt, 1976). In Western music (theory),
dissonances require preparation and resolution. We investigate historical changes in C/D by
comparing the prevalence of prepared and unprepared dissonances in polyphonic sacred
music by searching for vertical pc-sets with the Humdrum Toolkit (Huron, 2002). For "onset" counts, onsets of all tones (and no others) were simultaneous (unprepared dissonances); for "sonor" counts, one or more tones were sounded early or held (prepared dissonances). In Pérotin's Viderunt omnes and Sederunt (13th century), sonor > onset for most intervals and especially triads, suggesting dissonance, but for the perfect fifth/fourth, onset ≈ sonor. For dyads and major/minor triads in Machaut's Messe de nostre Dame (14th), onset ≈ sonor, suggesting a historical increase in perceived consonance. In works by Lassus and Palestrina (16th), onset > sonor for third/sixth dyads and major/minor triads, suggesting a further increase in consonance; but sonor > onset for fourth/fifth dyads, consistent with Huron's (1991) finding that J. S. Bach encouraged smoothness but avoided fusion so voices would remain individually audible.
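
The "onset"/"sonor" distinction reduces to whether every tone of a vertical sonority is attacked simultaneously. A toy sketch of the tallying logic, using a simplified data structure rather than actual Humdrum **kern spines:

```python
# Sketch: tallying "onset" vs "sonor" counts for vertical sonorities.
# Each sonority is (label, onsets): label is an interval/chord class and
# onsets holds one boolean per tone, True if that tone is attacked with
# the sonority, False if it is held over from before. Data is hypothetical.
from collections import Counter

def tally(sonorities):
    counts = Counter()
    for label, onsets in sonorities:
        if all(onsets):                     # all tones attacked together
            counts[(label, "onset")] += 1   # unprepared if dissonant
        else:                               # at least one tone held/tied
            counts[(label, "sonor")] += 1   # prepared if dissonant
    return counts

example = [("P4", [True, True]), ("P4", [True, False]),
           ("m2", [True, False]), ("m2", [False, True])]
print(tally(example))  # m2 occurs only prepared; P4 occurs both ways
```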


Major and Minor: An Empirical Study of the Transition between Classicism and
Romanticism

Katelyn Horn, David Huron


Music, The Ohio State University, USA

An empirical study is reported tracing the changing use of the major and minor modes
between the so-called Classical and Romantic periods. Specifically, cluster analysis was
carried out on a random sample of Western art music works spanning the period 1750-1900.
The analysis examined modality, dynamics, tempo, and articulation. The resulting clusters
are consistent with several affective or expressive categories, deemed joyful, regal,
tender/lyrical, light/effervescent, serious, passionate, sneaky, and sad/relaxed. Changes across
time are consistent with common musical intuitions regarding the shift from Classical to
Romantic musical languages.
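
A hedged sketch of this kind of cluster analysis, with invented feature codings (the study's coding scheme and cluster count are not given in the abstract):

```python
# Sketch: clustering works by modality, dynamics, tempo and articulation.
# Feature values below are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: mode (0=minor, 1=major), mean dynamic level (1-8),
# tempo (BPM), articulation (0=legato ... 1=staccato)
X = np.array([[1, 6.0, 140, 0.8],
              [1, 5.5, 132, 0.7],
              [0, 2.5,  60, 0.1],
              [0, 3.0,  66, 0.2],
              [1, 3.5,  72, 0.2]])
Xz = StandardScaler().fit_transform(X)  # put features on a common scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xz)
print(labels)  # e.g. a "joyful" vs. "tender/lyrical" split
```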

Paper Session 7: Dock Six Hall, 17:00-18:30


Technology-enhanced learning & improvisation

Young children's improvisations on a keyboard: How might reflexive technologies support the processes of learning to improvise?
Susan Young, Victoria Rowe
Graduate School of Education, University of Exeter, UK

In this presentation we will propose that young children draw on a number of generative
sources or modes when improvising spontaneously on an electronic keyboard. These
sources are driven by, for example, expressive bodily gestures, by an interest in the
morphology of the keyboard, a motivation to imitate known and meaningful musical experiences or an interest in making interactive play with a play-partner (whether human or
technological). The international, EU-funded MIROR project is exploring the potential of
reflexive technologies to support children's learning processes in music. The contribution of the Exeter University team to the project has been to carry out some studies with 4- and 8-year-olds in educational settings and to analyse the children's musical play to attempt to understand how they use and engage with the MIROR software's capacity to "reply". Whilst
most of the children interacted with the system at a basic level of turn-taking, some
responded at what appeared to be a higher level, listening intently to the responses and
including some elements from them in a more extended musical conversation. The analysis
raised many further questions about children's musical processing skills and how interactive
technology might support these. The study also raises wider, more fundamental questions
concerned with the directions for ICT in educational practice with young children and these
too will be shared in this presentation.


An exploratory study of young children's technology-enabled improvisations

Angeliki Triantafyllaki, Christina Anagnostopoulou, Antonis Alexakis


Dept. of Music Studies, National and Kapodistrian University of Athens, Greece

Improvisation is now recognised as a central component of musical creativity. Although a
relatively young area of study, its educational value has been discussed both musically and
socially; young children's musical improvisations, more specifically, have been explored
through a variety of methods and from diverse paradigmatic viewpoints: cognitive,
developmental, educational, sociological and others. The aim of this ongoing exploratory
study is to enrich our understanding of the variety of ways young children experience
musical improvisation, as this is enabled through the MIROR platform, an innovative
adaptive system for children's music improvisation and composition, based on the reflexive
interaction paradigm. In this paper we draw on data from an exploratory study conducted in
November 2011 with eight-year-old children, which aimed to explore the ways children engage with the MIROR Improvisation prototype. Three types of data are brought together for the analysis: thematic analysis of children's talk, descriptive analysis of children's turn-taking behaviour, and computational music analysis. The research findings indicate connections between particular children's (a) turn-taking behaviour and their embodied (gestural) understandings of how they played with the machine and (b) type of musical output and the density of their turn-taking behaviour, which seem to indicate that the MIROR
technology may in some children encourage particular ways of engagement, both musically
and kinesthetically. Pedagogical issues arising from the integration of such technology-
enabled improvisation in the primary school classroom are discussed.


From Eco to the Mirror Neurons: Founding a Systematic Perspective of the
Reflexive Interaction Paradigm

Anna Rita Addessi


Dept. of Music and Performing Arts., University of Bologna, Italy

The MIROR Project (EC project, FP7-ICT) deals with the development of an innovative
adaptive system for children's music improvisation, composition and body performance,
based on the reflexive interaction paradigm. This paradigm is based on the idea of letting
users manipulate virtual copies of themselves, through specifically designed machine-
learning software referred to as interactive reflexive musical systems (IRMS). In this paper,
the theoretical framework of the reflexive interaction paradigm is discussed from a
systematic musicological perspective. Implications are introduced, aiming to support the
68

12th ICMPC - 8th ESCOM Joint Conference, Aristotle University of Thessaloniki, 23-28 July 2012

TUE
hypothesis that the reflexive interaction enhances teaching/learning processes and musical
creativity in children.

Paper Session 8: Timber I Hall, 17:00-18:30


Measuring emotional response

The Role of Orchestral Gestures in Continuous Ratings of Emotional Intensity


Meghan Goodchild, Jonathan Wild, Stephen McAdams
Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT)
Schulich School of Music, McGill University, Canada

Despite its increasing importance in compositions in the nineteenth and twentieth centuries,
timbre has not been theorized in research to the same extent as other musical parameters.
Typically, orchestration manuals provide prescriptions and prohibitions of instrumental
combinations and short excerpts to be emulated. Empirical studies suggest that emotional
responses may be induced by changes in orchestration, such as a sudden shift in texture and the
alternation of the orchestra and a soloist. Some orchestration treatises allude to these expressive
gestures, but a conceptual framework is still lacking. Our first aim is to model one aspect of the
dynamics of the listening experience by investigating the musical features in orchestral music that
elicit emotional responses. Additionally, we aim to contribute to the development of a theory of
orchestration gestures through music-theoretical analyses and principles from timbre perception.
Musical excerpts were chosen to fit within four categories defined by the researchers based on
instrumentation changes: gradual or sudden addition, or gradual or sudden reduction of
instruments. Forty-five participants (22 musicians and 23 nonmusicians) listened to the excerpts
and continuously moved a slider to indicate the intensity of their emotional responses. They also
completed questionnaires outlining their specific subjective experiences (chills, tears, and other
reactions) after each excerpt. Musical features of the acoustic signal were coded as time series
and used as predictors of the behavioural ratings in a linear regression model using the ordinary
least squares approach (Schubert 2004). The texture parameter was expanded to include the
contributions of each instrument family. The results suggest that there are significant differences
between the participants' continuous response profiles for the four gesture categories. Musicians
and nonmusicians exhibit similar emotional intensity curves for the gradual gestures (additive
and reductive); however, musicians tend to anticipate the sudden changes, whereas non-
musicians are more delayed in their responses. For both gradual and sudden reductive excerpts,
participants demonstrate a sustained lingering effect of high emotional intensity despite the
reduction of instrumental forces, loudness, and other parameters. Through discussion of new
visualizations created from musical feature overlays and the results of the regression study, we
will highlight relationships between perceptual and musical/acoustical dimensions, quantify
elements of the temporality of these experiences, and relate these to the retrospective judgments.
To our knowledge, this is the first study that specifically investigates the role of timbral changes
on listeners' emotional responses in interaction with other musical parameters.
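
The regression described follows Schubert's (2004) ordinary-least-squares approach, with feature time series predicting the continuous rating. A minimal sketch on synthetic data (serial correlation in the residuals, which that approach must also address, is ignored here as a simplification):

```python
# Sketch: predicting a continuous emotional-intensity rating from musical
# features coded as time series, via ordinary least squares. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 300                                  # e.g. 300 rating frames
loudness = rng.normal(size=n).cumsum()   # synthetic feature curves
texture = rng.normal(size=n).cumsum()    # e.g. active instrument families
rating = 0.6 * loudness + 0.3 * texture + rng.normal(scale=2.0, size=n)

X = np.column_stack([np.ones(n), loudness, texture])  # add an intercept
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)     # OLS estimate
print("intercept, b_loudness, b_texture:", np.round(beta, 2))
```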

Empathy contributes to the intensity of music-induced emotions

Jonna K. Vuoskoski, Tuomas Eerola


Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä,
Finland

Emotional contagion has been suggested as one of the mechanisms through which music can
induce emotions in listeners (Juslin & Västfjäll, 2008). Although links have been established
between trait empathy and emotional contagion in general (e.g., Doherty, 1997), it remains
to be investigated whether trait empathy also contributes to emotion contagion through
music. The aim of the study was to investigate whether trait empathy contributes to the
intensity of felt emotions induced by music. The possible contribution of empathy was
investigated by analysing the results of two separate experiments. In Experiment 1, 131
participants listened to 16 film music excerpts and evaluated the intensity of their emotional
responses. In experiment 2, 60 participants were randomly assigned to either a neutral
music group or a sad music group. The induced emotions were assessed using two indirect
measures of emotional states; a word recall task, and a facial expression judgment task. In
Experiment 1, trait empathy correlated with the self-rated intensity of emotions experienced
in response to tender and sad excerpts. In Experiment 2, trait empathy was reliably
associated with induced sadness as measured by the facial expression judgment task - in
the sad music group. The results suggest that trait empathy may indeed enhance the
induction of emotion through music at least in the case of certain emotions. The self-report
and indirect measures indicated that highly empathic people may be more susceptible to
music-induced sadness and tenderness, possibly reflecting their tendency to feel compassion
and concern for others.


Music Preferences in the Early Years: Infants' Emotional Responses to Various
Auditory Stimulations

Dennis Ping-Cheng Wang


Faculty of Education, University of Macau, Macau, China

The study investigates whether infants can differentiate various types of music and respond differently in terms of emotional and physical behaviours. It finds that the infants showed different emotional and bodily responses to various auditory stimulations, such as thriller, suspense, and pleasant music. In this research, 20 four- to twelve-month-old infants participated. The experiment lasted six months, and physical and psychological checks were given twice during this period. After cross-comparing the two rounds of physical and psychological checks, the researcher found that around 68% of the infants expressed similar reactions to thriller music, including increased heart rate and blood pressure, prolonged regular drinking habits, and visible disturbance. Moreover, about 80% of the infants expressed visible contrasts in emotional and facial expression, such as frowning, appearing disturbed, and crying, when they heard thriller as opposed to pleasant music. In contrast, the infants tended to behave calmly when they heard pleasant and comic music, showing, for example, stable heart rates, longer eye contact with their parents, and falling asleep. Similar results were observed in the tests throughout the whole experimental period.

Paper Session 9: Timber II Hall, 17:00-18:30


Coordination & synchronization

Relations Between Temporal Error Correction Processes and the Quality of Interpersonal Coordination

Peter E. Keller,1,2 Nadine Pecenka,1 Merle Fairhurst,1 Bruno H. Repp3


1Music Cognition & Action Group, Max Planck Institute for Human Cognitive & Brain Sciences,
Leipzig, Germany
2MARCS Institute, University of Western Sydney, Australia
3Haskins Laboratories, New Haven, Connecticut

Interpersonal coordination in joint rhythmic activities, such as ensemble music making, can
be temporally precise yet variable between individuals. This may be due to individual
differences in the operation of temporal error correction mechanisms, such as phase
correction, that enable internal timekeepers in co-performers to remain entrained despite
tempo fluctuations. The current study investigated the relationship between phase
correction and interpersonal sensorimotor synchronization. Phase correction was assessed
in 40 participants by estimating the proportion of asynchronies that each individual
corrected for when synchronizing finger taps (on a percussion pad) with adaptively timed
auditory sequences. Participants were subsequently paired to form 10 "high correcting" dyads and 10 "low correcting" dyads. Each dyad performed a synchronization-continuation
task that required both individuals to tap together in time with a 2 Hz auditory metronome
(for 20 sec) and then to continue tapping together when the metronome ceased (for 20 sec).
Each individual's taps produced a distinctive percussion sound. The variability of interpersonal asynchronies was greater for low than for high correcting dyads only when the metronome paced the interaction. The lag-1 autocorrelation of interpersonal asynchronies was likewise relatively high only in low correcting dyads during paced tapping. Low
correcting dyads may be able to stabilize their performance during self-paced continuation
tapping by increasing the gain of phase correction or by engaging in period correction (i.e.,
tempo adjustment). These findings imply compensatory mutual adaptive timing strategies
that are most likely effortful and may have costs in attentionally demanding contexts such as
musical ensemble performance.
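
The two coordination indices reported, the variability and the lag-1 autocorrelation of interpersonal asynchronies, can be computed directly from a series of tap-time differences. A sketch with a synthetic series:

```python
# Sketch: coordination measures from a series of interpersonal
# asynchronies (differences between paired tap times, in ms). Synthetic data.
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

asynchronies = np.array([12, 15, 9, 14, 18, 11, 13, 16, 10, 14], dtype=float)
print("SD of asynchronies:", round(asynchronies.std(ddof=1), 2))
print("lag-1 autocorrelation:", round(lag1_autocorr(asynchronies), 3))
```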


Knowing too much or too little: The effects of familiarity of a co-performer's part on interpersonal coordination in piano duos

Marie Uhlig,1 Tim Schroeder,1 Peter Keller1,2


1Research group Music Cognition and Action, Max-Planck Institute for Human Cognitive and
Brain Sciences, Leipzig, Germany
2MARCS Auditory Laboratories, University of Western Sydney, Sydney, Australia

Performing ensemble musicians may be more or less familiar with each other's parts. Such familiarity may affect the ability to predict, and therefore to synchronize with, co-performers' actions. Specifically, the operation of internal models that guide processes related to action simulation and anticipatory musical imagery may be affected by knowledge of (1) the musical structure of a co-performer's part (e.g., in terms of its rhythm and phrase structure) and/or (2) the co-performer's idiosyncratic playing style (e.g., expressive micro-timing variations). To test the effects of familiarity, each pianist played two duets with two different partners. In one duet both parts were known to both players, while in the other piece only one's own part was known. The pieces were played and recorded six times without joint rehearsal or visual contact in order to analyze the effects of increasing familiarity. Interpersonal coordination was quantified by measuring asynchronies between pianists' keystroke timing and the correlation of their body sway movements. The findings suggest that familiarity with a co-performer's part, but not their playing style, may engender predictions about micro-timing variations that are based instead upon one's own playing style, leading to a mismatch between predictions and actual events at short timescales. Predictions at longer timescales - that is, those related to musical measures and phrases, and reflected in body sway movements - are, however, facilitated by familiarity with the structure of a co-performer's part. Results point to a dissociation between interpersonal coordination at the level of keystrokes and that of body sway.


Effect of Visual Cues on the Synchronization of Rhythmic Patterns

Sisi Sun, Trishul Mallikarjuna, Gil Weinberg


Center for Music Technology, Georgia Institute of Technology, Atlanta, GA, U.S.A.

We conducted a rhythmic pattern learning and synchronization experiment. During the experiment, each of 20 subjects learned 7 patterns at different levels of difficulty from a drummer robot. They played all the patterns twice under 2 different visual conditions: being able to see, and not being able to see, the robot's movement. 10 of the subjects could see the robot the first time they played the 7 patterns and then played the patterns a second time without seeing the robot. The other 10 played under the opposite order of visual conditions. We applied a Dynamic Time Warping algorithm to the onset time values to find the best matches between the subjects' and robot's hits. We then used a 4-way Analysis of Variance with the factors: existence of visual cues, order of visual conditions,
subjects, and onset times, to analyze their influence on the time difference between matching
onsets. The average of onset time differences was treated as a measure of synchronization.
The data showed that, for more difficult patterns, the average onset time difference had higher variance when there were no visual cues than when there were visual cues, while for easier patterns the difference in variance was not significant. Thus we infer that visual cues can influence synchronization in a task that requires learning more difficult rhythmic patterns. We also observed that subjects showed a tendency to learn new patterns faster with visual cues, though more experimentation is needed to establish the statistical significance of this effect. Moreover, subjects tended to lag behind when playing with visual cues during the learning period, but played better after learning.
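
A sketch of the matching step: a textbook Dynamic Time Warping alignment over onset times, followed by averaging the matched onset differences. The onset values and the absolute-difference cost function are assumptions; the study's exact settings are not given:

```python
# Sketch: DTW alignment of a subject's onset times to the robot's,
# then the mean matched onset difference as a synchronization measure.
import numpy as np

def dtw_path(a, b):
    """Classic dynamic-programming DTW on 1-D sequences; returns the path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m        # trace back the cheapest alignment
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

subject = [0.02, 0.51, 1.05, 1.49]   # hypothetical onset times (s)
robot = [0.00, 0.50, 1.00, 1.50]
diffs = [subject[i] - robot[j] for i, j in dtw_path(subject, robot)]
print("mean onset difference:", round(float(np.mean(diffs)), 3), "s")
```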


Wednesday 25 July

Keynote 4: Grand Pietra Hall, 9:00-10:00

Barbara Tillmann: Music perception and memory in nonmusicians and amusics: To be (or not to be) musical?

After a PhD in cognitive psychology (1999, Dijon) and postdoctoral research in cognitive neuroscience (Dartmouth College), Barbara
Tillmann started a CNRS research position in Lyon in 2001. Her
research is in the domain of auditory cognition and uses behavioural,
neurophysiological and computational methods. More specifically,
she is investigating how the brain acquires knowledge about complex
sound structures, such as music and language, and how this
knowledge shapes perception. Since 2007, she has been the leader of the team
"Auditory Cognition and Psychoacoustics", which has now integrated
the Lyon Neuroscience Research Center. The team's research aims to
understand cognitive and neural mechanisms that underlie how humans perceive, learn,
memorize and use complex sound structures (e.g., to expect and anticipate future events).

Numerous studies have provided evidence that nonmusicians have acquired sophisticated
knowledge about the musical system of their culture, even though part of it remains on an
implicit level. This musical knowledge allows nonmusicians to process musical structures and develop expectations for future incoming tones or chords, and it influences memory. The tonal
enculturation process is one example of the cognitive capacity of implicit learning, that is the
capacity to acquire knowledge about complex structures and regularities by mere exposure
and without intention to learn. In contrast to nonmusicians' musical expertise stands the phenomenon of congenital amusia, which has attracted increasing research interest as it provides further insights into the cognitive and neural correlates of music and speech processing.
Individuals with congenital amusia are impaired in music perception and production,
without auditory, cognitive or social deficits. A first hypothesis focused on a pitch
discrimination deficit, which would affect music perception in particular. Further data have
shown that short-term memory for pitch can be impaired in congenital amusia even without
impaired pitch discrimination. Recent research using indirect investigation methods reveals
some musical knowledge at an implicit level in congenital amusia, thus providing further
evidence for the power of implicit cognition.

Young Researcher Award 1: Grand Pietra Hall, 10:00-10:30

The Impact of Visual Cues on the Judgment and Perceptions of Music Performance

Chia-Jung Tsay
Harvard University, Cambridge, United States

There exists a wide consensus that sound is central to judgment about music performance.
Although people often make evaluations on the basis of visual cues, these are often
discounted as peripheral to the meaning of music. Yet, people can lack insight into their own
capacities and preferences, or are unwilling to report their beliefs. This suggests that there
may be gaps between what we say we use to evaluate performance, and what we actually
use. People may be unlikely to recognize or admit that visual displays can affect their
judgment about music performance, a domain that is defined by sound. Six sets of
experiments demonstrated that visual information is what people actually rely on when
making rapid judgments about performance. These findings were extended in experiments
elaborating on 1) the generalizability and persistence of effects throughout domains and
levels of analyses, and 2) potential mechanisms such as attention to specific types of visual
cues. Additional experiments further examine the underlying visual and affective
contributions to judgments of performance, the role of expertise in such decision making,
and the implications for organizational performance and policy.

Speed Poster Session 16: Grand Pietra Hall, 11:00-11:40


Tonality - Harmony

Testing Schenkerian theory: An experiment on the perception of key distances

Jason Yust
School of Music, Boston University, USA

The lack of attention given to Schenkerian theory by empirical research in music is striking
when compared to its status in music theory as a standard account of tonality. In this paper I
advocate a different way of thinking of Schenkerian theory that can lead to empirically
testable claims, and report on an experiment that shows how hypotheses derived from
Schenker's theories explain features of listeners' perception of key relationships. To be relevant to empirical research, Schenker's theory must be treated as a collection of
interrelated but independent theoretical claims rather than a comprehensive analytical
method. These discrete theoretical claims can then lead to hypotheses that we can test
through empirical methods. This makes it possible for Schenkerian theory to improve our
scientific understanding of how listeners understand tonal music. At the same time, it opens
the possibility of challenging the usefulness of certain aspects of the theory. This paper
exemplifies the empirical project with an experiment on the perception of key distance. The
results show that two features of Schenkerian theory predict how listeners rate stimuli in
terms of key distance. The first is the Schenkerian principle of "composing out" a harmony, and the second is the theory of "voice-leading prolongations". In a regression analysis, both
of these principles significantly improve upon a model of distance ratings based on change of
scalar collection alone.


How Fast Can Music and Speech Be Perceived? Key Identification in Time-
Compressed Music with Periodic Insertions of Silence

Morwaread M. Farbood,* Oded Ghitza,# Jess Rowland, Gary Marcus, David Poeppel
* Dept. of Music and Performing Arts Professions, New York University, USA; # Dept. of
Biomedical Engineering, Boston University, USA; Dept. of Psychology, New York University,
USA; Dept. of Art Practice, University of California, Berkeley, USA; Center for Neural Science,
New York University, USA

This study examines the timescales at which the brain processes structural information in
music and compares them to timescales implicated in previous work on speech. Using an
experimental paradigm similar to the one employed by Ghitza and Greenberg (2009) for
speech, listeners were asked to judge the key of short melodic sequences that were
presented at a very fast tempo with varying packaging rates, defined by the durations of
silence gaps inserted periodically in the audio. This resulted in a U-shaped key identification
error rate curve, similar in shape to the one implicated for speech by Ghitza and Greenberg.
However, the range of preferred packaging rates was lower for music (packaging rate of 1.5-
5 Hz) than for speech (6-17 Hz). We hypothesize that music and speech processing rely
on comparable oscillatory mechanisms that are calibrated in different ways based on the
specific temporal structure of their input.
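
The stimulus manipulation, chopping the audio into short chunks and inserting a silence gap after each one so that chunk-plus-gap packets repeat at a fixed packaging rate, can be sketched as follows (parameter values are illustrative, not the study's):

```python
# Sketch: periodic insertion of silence gaps into a signal.
# Packaging rate = 1000 / (chunk_ms + gap_ms) Hz.
import numpy as np

def package(signal, sr, chunk_ms, gap_ms):
    """Split `signal` into chunk_ms pieces, inserting gap_ms of silence
    after each piece."""
    chunk = int(sr * chunk_ms / 1000)
    gap = np.zeros(int(sr * gap_ms / 1000), dtype=signal.dtype)
    pieces = []
    for start in range(0, len(signal), chunk):
        pieces.append(signal[start:start + chunk])
        pieces.append(gap)
    return np.concatenate(pieces)

sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)
out = package(tone, sr, chunk_ms=80, gap_ms=120)  # 5 Hz packaging rate
print(len(tone) / sr, "s ->", round(len(out) / sr, 2), "s")
```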


The Role of Phrase Location in Key Identification by Pitch Class Distribution

Leigh van Handel, Michael Callahan


College of Music, Michigan State University, USA

This study extends prior research by investigating how pitch distribution differs at
beginnings, middles, and ends of phrases, and by determining whether these differences
impact key-finding. In the corpus of Haydn and Mozart string quartets used in Temperley
and Marvin (2008), many phrases modulate to either the dominant or the relative major; this
results in an overrepresentation of raised scale degree 4, as the leading tone to the dominant,
and of lowered scale degree 7, as the dominant of III. The overrepresentation of these two
scale degrees in the overall distribution may have contributed to the difficulties that
Temperley and Marvin's subjects had with key finding. This study corrects the problem of
overrepresentation by limiting the corpus to non-modulating phrases. A behavioral study
indicates that subjects have better success with the distributional view of key finding with
this modified distribution of pitches. In addition, melodies were constructed using
independent pitch distributions for the beginnings, middles, and ends of phrases.
Preliminary results show that subjects improve at identifying the key of a melody when the
pitch distributions within its beginning, middle, and end follow those of the three sections of
the original phrases.


Harmony Perception by Periodicity and Granularity Detection

Frieder Stolzenburg
Automation and Computer Sciences Department, Harz University of Applied Sciences, Germany

Music perception and composition seem to be influenced not only by convention or culture,
but also by the psychophysics of tone perception. Early models express musical intervals by
simple fractions. This helps to understand that human subjects rate harmonies, e.g. major
and minor triads, differently with respect to their sonority. Newer explanations, based upon
the notion of consonance or dissonance, correlate better to empirical results on harmony
perception, but still do not explain the perceived sonority of common triads well. By applying
results from neuroscience and psychophysics on periodicity detection in the brain
consistently, we obtain a more precise theory of musical harmony perception: the perceived sonority of a chord decreases with the ratio of the period length of the chord (its virtual pitch) relative to the period length of its lowest tone component, called "harmonicity". In addition, the number of extrema in one period of its lowest tone component, called "granularity", appears to be relevant. The combination of both values in one measure, counting the maximal number of times that the whole periodic structure can be decomposed into time intervals of equal length, gives us a powerful approach to the analysis of musical harmony perception. The analysis presented here demonstrates that it does not matter
or chord progressions. The presented approach yields meaningful results for dyads and
common triads and classical diatonic scales, showing highest correlation with empirical
results (r > 0.9).
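
For chords given as just-intonation frequency ratios, the harmonicity value described (the chord's period length relative to that of its lowest tone) reduces to a least common multiple of rational periods. A sketch (granularity is not sketched; multi-argument gcd/lcm need Python 3.9+):

```python
# Sketch: relative periodicity of a chord = period of the whole chord in
# units of the lowest tone's period, for just-intonation frequency ratios.
from fractions import Fraction
from math import gcd, lcm

def relative_periodicity(ratios):
    """ratios: tone frequencies relative to the lowest tone, e.g. a major
    triad (4:5:6) is [1, 5/4, 3/2]. Tone i has period 1/ratio_i; the common
    period is the LCM of these rational periods."""
    periods = [1 / Fraction(r) for r in ratios]
    return Fraction(lcm(*(p.numerator for p in periods)),
                    gcd(*(p.denominator for p in periods)))

print(relative_periodicity([1, Fraction(5, 4), Fraction(3, 2)]))  # major: 4
print(relative_periodicity([1, Fraction(6, 5), Fraction(3, 2)]))  # minor: 10
```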


Affordant Harmony in Popular Music: Do Physical Attributes of the Guitar Influence Chord Sequences?

Gary Yim
Music Theory, The Ohio State University, USA

It is proposed that two different harmonic systems govern popular music chord sequences:
affordant harmony and functional harmony. Affordant chord transitions favor chords and
chord transitions that minimize technical difficulty when performed on the guitar, while
functional chord transitions favor those based on traditional harmonic functions. A corpus
analysis compares these systems by encoding each song in two ways. Songs are encoded with their absolute chord names (such as Cm), characterizing the chord's physical position on the guitar; this operationalizes the affordant harmonic system. They are also encoded with Roman numerals, characterizing the chord's harmonic function; this operationalizes the functional harmonic system. The total entropy (a measure of unexpectedness) within
the corpus for each encoding is calculated. Arguably, the encoding with the lower entropy
value (that is, less unexpectedness) corresponds with the harmonic system that more
greatly influences the chord transitions. It was hypothesized that affordant factors play a
greater role than functional factors, and therefore a lower entropy value for the letter-name
encoding was expected. Instead, a lower entropy value for the Roman numeral encoding was
found. Thus, the results are not consistent with the original hypothesis. However, post-hoc
analyses yielded significant results, consistent with the claim that affordant factors (that is,
the physical movements involved in playing a guitar) do play some role in popular music
chord sequences. Nevertheless, the role of functional harmony cannot be downplayed.
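
The entropy comparison can be sketched as Shannon entropy over the distribution of adjacent-chord pairs under each encoding; the toy corpus below is illustrative, not the study's:

```python
# Sketch: entropy of chord-to-chord transitions under two encodings of the
# same songs. Lower entropy = more predictable transitions. Toy corpus.
import math
from collections import Counter

def transition_entropy(sequences):
    """Shannon entropy (bits) of the adjacent-chord-pair distribution."""
    pairs = Counter()
    for seq in sequences:
        pairs.update(zip(seq, seq[1:]))
    total = sum(pairs.values())
    return -sum(c / total * math.log2(c / total) for c in pairs.values())

absolute = [["G", "C", "D", "G"], ["E", "A", "B", "E"]]  # letter names
roman = [["I", "IV", "V", "I"], ["I", "IV", "V", "I"]]   # Roman numerals
print(round(transition_entropy(absolute), 2))  # 2.58: six distinct pairs
print(round(transition_entropy(roman), 2))     # 1.58: three distinct pairs
```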


Harmonic Expectation in Twelve-Bar Blues Progressions
Bryn Hughes
Ithaca College, USA


Harmonic expectation has been shown to reflect syntactical rules for chord-to-chord connections in
both short and long musical contexts. These expectations may derive from the activation of specific
musical schemata, providing listeners with the necessary context for identifying syntactical errors. Few
empirical studies have addressed the connection between chord-to-chord syntax and larger schemata,
such as phrases or form. The twelve-bar blues, with its three unique phrases, offers an opportunity to
investigate this relationship. This research investigates whether listeners expect chord successions
presented in the context of the twelve-bar blues idiom to adhere to common-practice syntax.
Additionally, it addresses the degree to which harmony affects the activation of phrase schemata.
Participants listened to 16-second synthesized excerpts representing a phrase from the standard
twelve-bar blues. Each phrase included a single variable chord. For each trial, participants provided a
goodness rating on a six-point scale and indicated whether they thought the excerpt came from the
beginning (Phrase 1), middle (Phrase 2), or end (Phrase 3) of a twelve-bar blues. Ratings were
interpreted as levels of expectancy in accordance with the concept of misattribution. Listeners
preferred harmonic successions in which the relationship between chord roots reflected common
practice; however, two instances of root motion idiosyncratic to blues also received high ratings. The
variable chord significantly affected phrase labelling. The magnitude of this effect was dependent upon
the variable chord's location within the phrase and the surrounding chords. Successions for which a
consensus phrase label emerged received significantly higher ratings than those that did not receive a
clear-cut phrase label. In some cases, ratings and phrase labels combined to reveal that specific chord
successions can invoke different expectations depending on the presently active phrase schema.
Harmonic expectation in blues includes a wider range of acceptable root motion. Phrase schemata are
defined both by their harmonic content and by the order in which that content is presented. Single
chords can affect the strength of an active schema and can suppress the activation of other viable
schemata. Listeners have stronger expectations for phrases that can be clearly identified as part of the
larger musical context.

A Directional Interval Class Representation of Chord Transitions
Emilios Cambouropoulos
School of Music Studies, Aristotle University of Thessaloniki, Greece

Chords are commonly represented, at a low level, as absolute pitches (or pitch classes) or, at a higher level, as chord types within a given tonal/harmonic context (e.g. Roman numeral analysis). The former is too elementary, whereas the latter requires sophisticated harmonic analysis. Is it possible to represent chord transitions at an intermediate level that is
transposition-invariant and idiom-independent (analogous to pitch intervals that represent
transitions between notes)? In this paper, a novel chord transition representation is
proposed. A harmonic transition between two chords can be represented by a Directed
Interval Class (DIC) vector. The proposed 12-dimensional vector encodes the number of occurrences of all directional interval classes (from 0 to 6, including +/- for direction) between
all the pairs of notes of two successive chords. Apart from octave equivalence and interval
inversion equivalence, this representation preserves directionality of intervals (up or down).
Interesting properties of this representation include: easy to compute, independent of root
finding, independent of key finding, incorporates voice leading qualities, preserves chord
transition asymmetry (e.g. different vector for I-V and V-I), transposition invariant,
independent of chord type, applicable to tonal/post-tonal/atonal music, and, in most
instances, chords can be uniquely derived from a vector. DIC vectors can be organised in
different categories depending on their content, and distance between vectors can be used to
calculate harmonic similarity between different music passages. Some preliminary examples
are presented. This proposal provides a simple and potentially powerful representation of
elementary harmonic relations that may have interesting applications in the domain of
harmonic representation and processing.
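
A sketch of computing a DIC vector from two chords given as MIDI pitch numbers. The bin layout used here (index 0 for ic 0, 1-5 for ascending classes, 6 for the undirected tritone, 7-11 for descending classes) is an assumption for illustration; the paper's exact ordering may differ:

```python
# Sketch: a Directed Interval Class (DIC) vector for one chord transition.
# Assumed bin layout: 0 -> ic 0; 1..5 -> ascending ic 1..5; 6 -> tritone
# (direction undefined); 7..11 -> descending ic 1..5.
def dic_vector(chord_a, chord_b):
    """chord_a, chord_b: MIDI pitches of two successive chords."""
    vec = [0] * 12
    for a in chord_a:
        for b in chord_b:
            d = (b - a) % 12             # directed interval mod the octave
            if d == 0:
                vec[0] += 1              # unison / octave
            elif d <= 5:
                vec[d] += 1              # ascending interval class d
            elif d == 6:
                vec[6] += 1              # tritone
            else:
                vec[6 + (12 - d)] += 1   # descending interval class 12-d
    return vec

# V -> I versus I -> V in C major (G-B-D, C-E-G): note the asymmetry
print(dic_vector([55, 59, 62], [48, 52, 55]))
print(dic_vector([48, 52, 55], [55, 59, 62]))
```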

Wagner in the Round: Using Interval Cycles to Model Chromatic Harmony

Matthew Woolhouse
School of the Arts, Faculty of Humanities, McMaster University, Canada

A formal grouping model is used to model the experience of tonal attraction within
chromatic music, i.e. its dynamic ebb and flow. The model predicts the level of tonal
attraction between temporally adjacent chords. The functional ambiguity of nineteenth-
century chromatic harmony can be problematic: chromatic chords, unlike diatonic harmony,
often have ill-defined roots, and thus their proper functions are difficult to establish. An
important feature of the model, however, is that the key or tonal context of the music does
not need to be specified. The model is based on the idea of interval cycle proximity (ICP), a
grouping mechanism hypothesized to contribute to the perception of tonal attraction. This
paper illustrates the model with an analysis of the opening of Wagner's Tristan und Isolde,
and shows that the model can predict the opening sequence of Tristan in terms of tonal
attraction without the chords needing to be functionally specified.


Speed Poster Session 17: Crystal Hall, 11:00-11:40


Musical Development & Education I
Tales of Talent: Rapid Learning of Acoustic Instrument Recognition

Lisa Aufegger, Oliver Vitouch


Dept. of Psychology, University of Klagenfurt, Austria

Even in the 21st century, the role of innate talents in music remains a matter of fundamental debate. Within the framework of the rapid learning paradigm, the aim of this study was to find out whether it is possible to simply and quickly teach non-musicians musical skills in the perceptual realm, specifically the recognition of instrument timbres. Within a week, 34 subjects had three feedback-driven computer-based training sessions in which they were asked to discriminate between 10 brass and woodwind instruments. In a pre- and a post-test, subjects had to recognize the main instrument in an orchestral piece. Results showed that non-musicians did not fully reach expert level (benchmarked by brass or woodwind instrument students) after this short period, but performed on a par with semi-experts (piano students). Our findings demonstrate that acoustic instrument recognition is well-
trainable for (almost) everybody using the simplest of means, and does not seem to depend
on rare individual abilities.


Important Experiences and Interactions in the Occupational Identity
Development of Music Educators

Joshua A. Russell
The Hartt School, The University of Hartford, USA

The purposes of this paper were to describe the reported professional identity of in-service
music educators through the lens of symbolic interactionism and to identify activities and
interactions that music educators can seek out in order to inform their own professional
identity. Three hundred secondary music educators from southwestern United States
responded to the Music Educator Career Questionnaire, which was developed from previous
research. Participants responded to a series of ipsative items designed to elicit information
regarding their occupational identity as well as the perceived importance of different
activities or interactions. Music educators saw themselves, and believed others saw them, as an educator, ensemble leader, creative businessperson, and entertainer. However, their musical identities separated into an external music identity, in which others saw them as a performer, artist, or scholar, and an internal identity, in which they saw themselves differently in the same roles. The impact of different activities and interactions on the various identified occupational identities will be discussed as a means to assist music educators in self-selecting their own most appropriate occupational identity and engaging in
activities and with individuals in order to develop their chosen identity. As teachers move
from preservice to in-service, their identities may transform from an integrated musician
identity and segregated educator identity to an integrated educator identity and segregated
musician identity unless they intentionally seek out interactions and activities to develop a
continuously integrated occupational identity. Implications are discussed.

Cognitive and emotional aspects of pupils' attitudes towards piano teachers
and piano lessons

Malgorzata Chmurzynska
Department of Music Psychology, Chopin University of Music

Professional primary music schools in Poland aim at creating well-educated and competent
future performing musicians as well as their audience (comprising primarily those who will
not pursue further stages of musical education). However, the majority of pupils who complete their music education stop playing their instruments and lose interest in classical music. According to experts, the reason for this is that they have been discouraged by their music teachers and the way they were taught. The aim of the study was to examine pupils' attitudes towards their piano teachers and piano lessons. The emotional and cognitive components of these attitudes have been taken into account. The respondents (40 pupils from primary music schools) were asked to complete the Pupils' Questionnaire,
designed to test the cognitive aspect of their attitudes (what they think of their teachers and
piano lessons) as well as the emotional aspect (what they feel during the piano lessons). In
the cognitive aspect, the results revealed a generally positive attitude of the pupils towards their piano teachers, more positive than towards piano playing itself. However, almost
20% of the subjects preferred to learn with a different teacher, and over 40% did not feel
increased motivation to practice after the lessons. Almost 25% reported that they did not fulfill their aspirations concerning piano playing. In the emotional aspect, the results revealed a significant percentage of subjects manifesting a quite high level of anxiety during the lessons. Certainly, this is neither a source of inspiration for the students, nor does it build up their self-esteem. The pupils denied negative emotions much more frequently than they admitted positive ones. On the basis of a comparison of both aspects of the attitudes, one can conclude that the pupils' image of their teachers (the cognitive aspect) is more positive than their feelings during the lessons (the emotional aspect). The analysis of the pupils' attitudes revealed many negative emotions and a lack of strong positive experiences
connected to classical music, the latter undoubtedly necessary for shaping the intrinsic
motivation. It was hypothesized that this fact may be a source of a decrease in interest in this
kind of music.


Experienced Emotions through the Orff-Schulwerk Approach in Music
Education - A Case Study Based on Flow Theory

João C.R. Cunha, Sara Carvalho


INET - MD, University of Aveiro, Portugal

Orff-Schulwerk is one of the most holistic and creative approaches in Music Education, and
during Music classes, teachers are expected to regularly combine a wide range of sources,
including speech, music, creativity, movement and dance. In this paper we propose to
identify different experienced emotions boosted by Orff-Schulwerk activities in a Music
Education context. Students (N=50), aged between 10 and 12 years old, were audio and
video recorded, while attending their weekly Music Education class during one academic
year (9 months). In addition, at the end of each class, each student was asked to answer a
questionnaire, in order to understand their perspective on their lived emotions. All classes
were structured according to three main categories: General, Music and Movement and
Music Laboratory. The empirical process was based on Csikszentmihályi's Flow Theory
(1975, 1990, 1997, 2002), and the consequent adaptation of the FIMA (Flow Indicators in
Musical Activity) and AFIMA (Adapted Flow Indicators in Musical Activity), both developed by
Custodero (1998, 1999, 2002a, 2003, 2005). After analyzing the collected data using AFIMA
conclusions were drawn. As emotions and cognition are closely linked in music (Cook & Dibben, 2010; Krumhansl, 2002; Sloboda, 1999, 2005; Sloboda & Juslin, 2001; Juslin & Sloboda, 2010), the data enabled us to highlight several correlations between the Orff-Schulwerk approach and the students' lived emotions during Music Education classes. AFIMA enabled us to establish that through an Orff-Schulwerk approach children experienced many positive emotions, which proved to be significant in the way they acquired musical knowledge.


Benefits of a classroom-based instrumental training program on working
memory of primary school children: A longitudinal study
Ingo Roden,* Dietmar Grube,* Stephan Bongard,# Gunter Kreutz*
* Institute for Music, School of Linguistics and Cultural Studies, Carl von Ossietzky University
Oldenburg, Germany; #Department of Psychology, Goethe-University Frankfurt, Germany

Instrumental music tuition may have beneficial influences on cognitive processing. We examined this assumption with regard to working memory in primary school children (N =
50; 7-8 years of age) within a longitudinal study design. Half of the children participated in a
special music program with weekly sessions of instrumental tuition, while the other half
received extended natural science training. Each child completed a computerized test battery
three times over a period of 18 months. The battery includes seven subtests, which
address the central executive, the phonological loop and the visuospatial sketchpad
components of Baddeley's working memory model. Socio-economic background and IQ were
assessed for each participant and used as covariates in subsequent analyses of variance
(ANOVAs). Significant Group by Time interactions were found for phonological loop and
central executive subtests indicating a superior developmental course in children with music
training compared to the control group. These results confirm and specify previous findings
concerning music tuition and cognitive performance. It is suggested that children receiving
music training benefit specifically in those aspects of cognitive functioning that are strongly
related to auditory information processing.

Assessing children's voices using Hornbach and Taggart's (2005) rubric

Andreas C. Lehmann, Johannes Hasselhorn


Hochschule für Musik Würzburg, Germany

Assessment of voice quality and performance is notoriously difficult, and even professional
singers may not always agree on the quality of a voice or performance. Although there is a
mild consensus about what constitutes a good professional voice, untrained voices pose a
serious challenge to raters and it is unclear what specific aspects of performance influence
overall (summative) impressions. In our study, three expert judges rated recorded performances of 55 eleven-year-old children on 19 five-point rating scales regarding specific aspects (e.g., articulation, matching of given starting notes, rhythm), and they also gave comprehensive, summative evaluations using a five-point assessment rubric developed by Hornbach and Taggart (2005; H&T rubric). Here we show that there is a highly reliable scale (Cronbach's alpha = .94) of eight individual attributes (Piano starting tone: match - no match; Type of performance: speechlike - singing; Melody execution: secure - insecure; Attitude: secure - insecure; Voice-ear coordination: fitting - not fitting; Tessitura: small - age appropriate; Text integration: fluent - stumbling; Interpretation). In a regression analysis, two variables, namely Melody execution and Piano starting tone, entered the equation, explaining a total of 92 percent (adjusted) of the variance on the H&T rubric. Thus, the H&T
rubric appears to be an effective assessment instrument when used by experts, because it
effectively aggregates more specific musico-acoustical aspects of children's vocal performance.
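
Cronbach's alpha, used above to report the reliability of the eight-attribute scale, is computed from the item variances and the variance of the summed scale. A sketch with synthetic ratings:

```python
# Sketch: Cronbach's alpha for a rating scale. Ratings are synthetic:
# 55 children rated on 8 items driven by one latent singing ability.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = rated children, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=55)              # latent singing ability
ratings = ability[:, None] + rng.normal(scale=0.5, size=(55, 8))
print(round(cronbach_alpha(ratings), 2))   # high alpha expected
```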

Cognitive Strategies in Sight-singing

Ida Vujović,* Blanka Bogunović#


* Royal Conservatoire, The Hague, The Netherlands
# Faculty of Music, University of Arts, Belgrade, Serbia

This paper presents a part of a wider study that is based on interdisciplinary research of
sight-singing (music education and psychology). We aimed: 1. to determine the kinds and
levels of strategies that music students use in the cognitive processes involved during sight-
singing; 2. to explore strategies of problem solving when difficulties appear; 3. to investigate
the self-evaluation perspectives of students; and 4. to relate students' learning experience to
the strategies used. The sample consisted of 89 music students from higher music education
in The Hague and Belgrade. They filled in the questionnaire based on self-reports, covering
general data about their music education background, different issues of sight-singing, such
as planning, problem solving, monitoring and evaluation of outcomes, and three melodic
examples written in different musical styles. Strategies used during sight-singing could be
roughly sorted into three groups that differ according to the key accent given: cognitive,
intuitive and no-strategy. The music cognitive strategies involved cover three levels of
musical organization and representation: a) relying on smaller chunks of the musical piece,
referring to existing knowledge and learning experience b) leaning on a slightly bigger
picture of familiar patterns; and c) mental representation of melodic/rhythmic/harmonic
structures. When faced with a problem, half of the students employ analytic approaches.
Comparisons between sub-samples showed, e.g., that future performing musicians more often use "tone-to-tone" thinking and "bottom-up" strategies in approaching musical structure, while music theory students have better insight into the whole and use "top-down" strategies. The research results offer a basis for evaluating learning outcomes and
improving teaching practices.


Influence of Music Education on Expressive Singing of Preschool Children
Johanella Tafuri
Conservatoire of Music, Bologna, Italy

Singing is one of the most diffused musical activities in nursery schools. Teachers are accustomed
to accompanying different moments of the day with songs and children enjoy having fun with
music. When do children start to sing autonomously? How do they sing? Several studies have
explored the many ways used by children to sing songs they know and to play with them. The
results showed different kinds of repetition, change of words and also changes in the expression
through little variations in speed, loudness and other musical characteristics. The studies that
explore the relationships between music and emotions with the particular aim of understanding
the underlying processes of an expressive performance, pointed out that, in order to produce it,
performers need to manage physical sound properties. More recently, Tafuri (2011) analysed a
corpus of songs performed, between the ages of 2 and 3, by the children of the inCanto Project.
This is a group of children who received a special music education that began during their
prenatal life (Tafuri 2009). The analysis revealed that already at this age it is possible to observe a
certain ability of children to sing in an expressive way. This implies a certain ability in managing
some musical structures, in particular loudness and timing. The aims of the present research are
firstly to verify the appearance and development of the ability to sing in an expressive way in
children of 2-5 years who attend day nursery schools where teachers sing a certain number of songs almost daily; secondly, to compare these results with those shown by the children of the inCanto Project who have received an early music education. A corpus of songs performed by the children of several different schools, and recorded by the teachers, is analysed with the software Sonic Visualiser, with particular attention paid to the children's use of agogics,
dynamics, and other sound qualities. The results highlight the process of managing physical
sound properties in order to produce an expressive performance. Particular problems are solved:
e.g., distinguishing expressive from other motivations, or musical from verbal
intentions in the analysis of sound properties. These results, when compared with those obtained
by children who received an early music education, give interesting indications of the role of
early musical experience.

Speed Poster Session 18: Dock Six Hall, 11:00-11:40


Neuroscience studies

Neural Oscillatory Responses to Binaural Beats: Differences Between Musicians and Non-musicians
Christos Ioannou,* Joydeep Bhattacharya #


* Institute of Music Physiology and Musicians' Medicine, Hanover University of Music, Drama
and Media, Germany; # Department of Psychology, Goldsmiths, University of London, United
Kingdom
In the present study, multivariate Electroencephalography (EEG) signals were recorded from
thirty-two adult human participants while they listened to binaural beats (BBs) varying
systematically in frequency from 1 to 48 Hz. Participants were classified as musicians or
non-musicians, with sixteen in each group. Our results revealed that BB stimulation
modulated the strength of large-scale neuronal oscillations, and steady state responses
(SSRs) were larger in musicians than in non-musicians for BB stimulations in the gamma
frequency band with a more frontal distribution. Musicians also showed higher spectral
power in the delta and the gamma frequency bands at all BB stimulation frequencies.
However, musicians showed less alpha band power for BB stimulations in the gamma band.
Our results suggest that BBs at different frequencies (ranging from very low frequency delta
to high-frequency gamma) elicit SSRs recorded from the scalp. Musicians exhibited higher
cortical excitation than non-musicians during BB stimulation in the gamma
band, reflected by lower alpha and higher gamma band EEG power. The current
study provides the first neurophysiological account of cortical responses to a range of BB
stimulation frequencies and suggests that musical training could modulate such responses.
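
As an illustration of the stimulus class discussed above, the following minimal sketch (not the authors' code; carrier frequency, beat frequency, duration and sampling rate are illustrative assumptions) generates a binaural-beat stimulus by presenting slightly different frequencies to the two ears:

```python
# A minimal sketch of binaural-beat stimulus generation: one carrier tone in
# the left ear, a tone offset by the desired beat frequency in the right ear.
import numpy as np

def binaural_beat(carrier_hz=250.0, beat_hz=10.0, dur_s=5.0, sr=44100):
    """Return a stereo (n_samples, 2) array whose channels differ by beat_hz."""
    t = np.arange(int(dur_s * sr)) / sr
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)

stimulus = binaural_beat(beat_hz=40.0)  # e.g., a gamma-band (40 Hz) beat
```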

MEG evidence for music training induced effects on multisensory plasticity


Evangelos Paraskevopoulos*, Anja Kuchenbuch*, Sibylle C. Herholz#, Christo Pantev*

*Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany


# Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada


Multisensory learning and the resulting neuronal plastic changes have recently become a
topic of renewed interest in human cognitive neuroscience. Playing an instrument from
musical notation is an ideal situation to study multisensory learning, as it allows
investigating the integration of visual, auditory and sensorimotor information processing.
The present study aimed at answering whether multisensory learning alters unisensory
structures, interconnections of those structures or specific multisensory areas in the human
brain. In a short-term piano training procedure musically naive subjects were trained to play
tone sequences from visually presented patterns in a music notation-like system [Auditory-
Visual-Somatosensory group (AVS)], while a control group received audio-visual training
only that involved viewing the patterns and attentively listening to the recordings of the AVS
training sessions [Auditory-Visual group (AV)]. Training-related changes in the
corresponding cortical networks were assessed by pre- and post-training
magnetoencephalographic (MEG) recordings of an auditory, a visual and an integrated
audio-visual mismatch negativity (MMN). The two groups (AVS and AV) were differently
affected by the training in the integrated audio-visual MMN condition. Specifically, the AVS
group showed a training-related increase in audio-visual processing in the right superior
temporal gyrus while the AV group did not reveal a training effect. The unisensory MMN
measurements were not affected by training. The results suggest that multisensory training
alters the function of specific multisensory structures, and not the unisensory ones along
with their interconnections, and thus provide experimental evidence in response to an important
question posed by cognitive models of multisensory training.


EEG-based discrimination of music appraisal judgments using ZAM time-
frequency distribution

Stelios Hadjidimitriou, Leontios Hadjileontiadis


Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki,
Greece

This work focuses on the binary classification of listeners' EEG responses that relate to music
liking or disliking judgments, by employing time-frequency-based feature extraction. Nine
participants were engaged in an experiment during which they listened to several musical
excerpts, while their EEG activity was recorded. Participants were prompted to rate their
liking for each excerpt after listening to it. Subsequent feature extraction from the acquired
EEG signals was based on the Zhao-Atlas-Marks (ZAM) time-frequency distribution. For all
EEG frequency bands (1–49 Hz), different types of feature vectors (FVs) were produced, in
order to take into consideration asymmetric brain activations that are linked to emotional
responses. The classification procedure was performed using support vector machines
(SVM) and k-nearest neighbors (k-NN). Highest classification accuracies (CAs) were achieved
using FVs from all channels from the beta (74.56 ± 1.02%) and gamma (71.96 ± 0.87%)
bands and k-NN. The fusion of FVs for the beta and gamma bands yielded the best CA, i.e.,
76.52 ± 1.37%. FVs derived from channel pairs that relate to hemispheric asymmetry only,
led to lower CAs. Lower classification performance, achieved using the asymmetry-based
features, might imply that the discrimination of music appraisal judgments may not depend
solely on the valence of emotions induced by music. On the contrary, bilateral activity in beta
and gamma bands led to a more efficient discrimination. This evidence may suggest that
music appraisal has to be interpreted with respect to additional aspects of affective
experiences, like emotional arousal that reflects the degree of excitation.
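
A rough sketch of such a band-power classification pipeline is shown below. This is not the authors' code: the ZAM distribution itself is not implemented and is replaced by an ordinary spectrogram as a stand-in time-frequency representation, with SVM and k-NN classifiers as named in the abstract; data, shapes, band edges and trial counts are illustrative assumptions.

```python
# Simulated EEG trials -> band-power features -> SVM and k-NN classification.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def band_power_features(eeg, sr=256, band=(13.0, 30.0)):
    """eeg: (n_trials, n_channels, n_samples) -> (n_trials, n_channels)."""
    feats = []
    for trial in eeg:
        f, _, S = spectrogram(trial, fs=sr, axis=-1)
        mask = (f >= band[0]) & (f <= band[1])
        feats.append(S[:, mask, :].mean(axis=(1, 2)))  # mean band power
    return np.array(feats)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((40, 14, 512))   # 40 toy trials, 14 channels
y = rng.integers(0, 2, size=40)            # liking / disliking labels
X = band_power_features(eeg)               # beta-band feature vectors
for clf in (SVC(), KNeighborsClassifier(n_neighbors=5)):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```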


Effects of Short-Term Experience on Music-Related ERAN

Richard Randall,1 Gustavo Sudre,2 Yang Xu,3 Anto Bagic4


1 School of Music and Center for the Neural Basis for Cognition, Carnegie Mellon University, USA
2 Center for the Neural Basis for Cognition, Carnegie Mellon University, USA
3 Machine Learning Department, Carnegie Mellon University, USA
4 Brain Mapping Center, University of Pittsburgh Medical Center, USA

This study investigates how short-term experience modulates the strength of the early-right
anterior negativity (ERAN) response to implied harmonic-syntax violations. The ERAN is a
negative-going event-related potential (ERP) that peaks between 150ms and 250ms after
stimulus onset, has anterior scalp distribution, right-hemispheric weighting, and relies on
schematic representations of musical regularities. Previous studies have shown that the
ERAN can be modified by short-term musical experience. However, these studies rely on
complex harmonic stimuli and experimental paradigms where music is presented
simultaneously with visual images and written text. In an effort to better understand how
habituation may affect the ERAN in musical contexts, we asked subjects to directly attend to
simple melodies that either are syntactically well-formed, conforming to common-practice
tonality (M1), or end with an out-of-key pitch (M2). Even with simplified stimuli, our results
reliably replicate earlier findings based on more complex stimuli composed of literal
harmonies. Both musicians and non-musicians listened to M1 and M2 numerous times and
neural responses were recorded using magnetoencephalography (MEG). Whereas previous
studies on short-term habituation of the ERAN only look at changes in the violation
condition, we comparatively analyze how responses to both M1 and M2 change over time
and how the relative relationship between M1 and M2 fluctuates. This effectively controls for
fatigue and allows us to clearly show how the ERAN changes both independently of and in
conjunction with normal responses.

Entrainment of Premotor Cortex Activity by Ambiguity in Musical Metre

Daniel Cameron,* Job Lindsen,# Marcus Pearce,+ Geraint Wiggins,+ Keith Potter,^ Joydeep
Bhattacharya#
*Brain and Mind Institute, University of Western Ontario, Canada; #Dept. of Psychology,
Goldsmiths, University of London, UK; ^Dept. of Music, Goldsmiths, University of London, UK;
+Centre for Digital Music, Queen Mary, University of London, UK

Humans tend to synchronize movements, attention, and temporal expectations with the
metric beat of auditory sequences, such as musical rhythms. Electroencephalographic (EEG)
research has shown that the metric structure of rhythms can modulate brain activity in the
gamma and beta frequency bands as well as at specific frequencies related to the
endogenously generated metric beat of rhythms. We investigate the amplitude and inter-trial
phase coherence (ITC) of EEG measured from 20 musicians while listening to a piece of
rhythmic music that contains metrically ambiguous and unambiguous rhythms, Steve Reich's
Clapping Music. ITC is the consistency of frequency-specific phase over repetitions of
individual rhythms and thus reflects the degree to which activity is locked to stimulus
rhythms. For ambiguous rhythms, amplitude and ITC are greater at the frequencies specific
to the metric beat of rhythms (1.33 Hz and 1.77 Hz). Source analysis suggests that
differences at metre-specific frequencies may originate in the left ventral premotor area and
right inferior frontal gyrus, areas that have been linked to anticipatory processing of
temporal sequences. Effects are also found in alpha (8-12 Hz) and gamma (24-60 Hz) bands
and these are consistent with past EEG research showing modulation of gamma power by
the metric structure of auditory rhythms and modulation of alpha activity due to temporal
anticipation. Our study extends evidence of the electrophysiological processes related to
rhythm and metre by using complex, ecologically valid music, and showing differences in
amplitude and ITC at metre-specific frequencies in motor areas of the brain.
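
Inter-trial phase coherence as described here can be computed as the length of the mean unit phase vector across trials. A minimal sketch follows, with simulated trials and the 1.33 Hz metre frequency as an example target; data shapes and the sampling rate are illustrative assumptions:

```python
# ITC at a target frequency: take each trial's phase from the FFT bin
# nearest the target, then measure the consistency of those phases.
import numpy as np

def itc(trials, sr, target_hz):
    """trials: (n_trials, n_samples). Returns ITC in [0, 1]."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)
    k = np.argmin(np.abs(freqs - target_hz))
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(1)
trials = rng.standard_normal((60, 3 * 512))  # 60 repetitions, 3 s at 512 Hz
print(itc(trials, sr=512, target_hz=1.33))   # near 0 for random phases
```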


Neuroscientific Measure of Consonance

Adrian Foltyn
Department of Composition, Conducting and Theory of Music, F. Chopin University of Music,
Poland

The article proposes a new simplified model of the neural discrimination of
sensory consonance/dissonance at higher stages of the auditory pathway. The model primarily
concerns complex harmonic sounds and is based on periodicity/pitch and its
representation in neural discharges. The hypothesis relies on a process of measuring the
concentration of neural excitation in the inferior colliculus within time windows equal to the
period of the sum of the incoming signals. The measure can accommodate pitch deviations via a
further mechanism based on harmonic entropy and can be applied to any interval, including
microtones and octave enhancements. For simple ratios an algebraic calculation method is
available, accounting for several interval relations that abstract mathematical consonance
measures have tended to struggle with. To examine the plausibility of the model, a psychoacoustic
experiment was carried out using paired comparison of intervals. One of the resulting
dimensions can be clearly identified as a consonance–dissonance axis. The proposed
modelled consonance values, together with four other well-known models, have been related to
the experimental results. A logarithmic transformation of the postulated consonance measure
displays the highest correlation with the consonance dimension obtained in the experiment
out of all examined models (R2 ≈ 0.8). The higher correlation compared with roughness-based
models suggests the plausibility of a pitch-related mechanism underlying basic
consonance perception.
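
The periodicity idea can be illustrated with a toy calculation. This is a hedged sketch, not the author's algebraic method: for two tones in a just ratio p:q, the summed waveform repeats every q cycles of the lower tone, so simpler ratios give shorter common periods, which can serve as a crude consonance index.

```python
# Common repetition period of two tones in a just ratio p:q.
from math import gcd

def common_period(p, q, f_lower=220.0):
    """Period (s) after which the summed waveform of the two tones repeats."""
    g = gcd(p, q)
    p, q = p // g, q // g
    return q / f_lower  # q cycles of the lower tone per repetition

for name, (p, q) in {"octave": (2, 1), "fifth": (3, 2),
                     "major third": (5, 4), "tritone (just approx.)": (45, 32)}.items():
    print(f"{name:22s} period = {common_period(p, q) * 1000:.1f} ms")
```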


Effects of musical training and standard probabilities on encoding of complex
tone patterns
Anja Kuchenbuch*, Evangelos Paraskevopoulos*, Sibylle C. Herholz#, Christo Pantev*

*Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany.

#Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada



The human auditory cortex automatically encodes acoustical input from the environment
and differentiates regular sound patterns from noise in order to identify possibly important,
irregular events. The Mismatch negativity (MMN) response is a marker for the detection of
sounds that are unexpected based on the encoded regularities. It has been shown to be
elicited by violations of simple acoustical features but also by violations of more complex
regularities like tone patterns. By means of magnetoencephalography (MEG) we investigated
the responsiveness of the MMNm in a noisy environment by varying the standard probability
(70%, 50% and 35%) of a pattern oddball paradigm. In addition, we studied the effects of
long-term music training on the encoding of the patterns by comparing the responses of non-
musicians and musicians. An MMNm could still be observed in the noisy condition (35%
standards) in response to violations of the predominant tone pattern for both groups. The
amplitude of the MMNm in the right hemisphere was influenced by the standard probability, and
this effect was mediated by long-term musical training. The results indicate that pattern
violation detection is reduced but still present within a noisy environment; while the left
hemisphere is more stable, the standard probability has a strong impact on the auditory
processing of the right hemisphere. Furthermore, non-musicians benefit more from a good
signal-to-noise ratio, while musicians' auditory processing is dominated by their trained left
hemisphere.
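
A minimal sketch of how a pattern-oddball sequence with a given standard probability might be generated follows; labels, sequence length and seeding are illustrative assumptions, not the authors' stimulus lists.

```python
# Generate standard/deviant labels with a chosen standard probability.
import numpy as np

def oddball_sequence(n_trials=400, p_standard=0.35, seed=0):
    """Return a list of 'standard'/'deviant' labels with the given probability."""
    rng = np.random.default_rng(seed)
    return ["standard" if rng.random() < p_standard else "deviant"
            for _ in range(n_trials)]

for p in (0.70, 0.50, 0.35):  # the three standard probabilities tested
    seq = oddball_sequence(p_standard=p)
    print(p, seq.count("standard") / len(seq))
```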


Neural Correlates of Musical Timbre Perception in Williams Syndrome

Miriam D. Lense,*# Reyna L. Gordon,* Alexandra P.F. Key,* Elisabeth M. Dykens*#


*Vanderbilt Kennedy Center, Vanderbilt University, USA
#Psychology and Human Development, Vanderbilt University, USA

Williams syndrome (WS) is a rare, neurodevelopmental genetic disorder. Many individuals
with WS exhibit auditory aversions and attractions and are extremely emotionally affected
by and interested in music. Given their auditory sensitivities, including an apparent ability to
discriminate amongst particular classes of sounds (e.g., vacuum cleaners), it has been
hypothesized that individuals with WS may show superior timbre discrimination abilities.
However, in contrast to this anecdotal evidence, recent research reveals that individuals with
WS predominantly process the fundamental frequency in complex tones rather than the
spectral information, which is important for distinguishing amongst different timbres. The
present study aimed to clarify timbre perception abilities in WS. Participants included 18
adults with WS and 15 typically developing (TD) controls. Participants performed a timbre
detection task while EEG was recorded. Participants heard sequences of 500-ms
instrumental tones (trumpet: 42% of stimuli; cello: 42%; piano: 16%). The onset and decay
of the tones were replaced with a 10-ms envelope. Participants were asked to respond to the
piano tones. Event-related potential (ERP) analyses revealed robust P300 responses to the
target piano tones in the WS and TD groups. Individuals with WS also demonstrated
differences in P300 amplitude between the non-target cello and trumpet timbres. In the WS
group only, there was early and sustained increased induced alpha-band (8-12 Hz) activity to
the cello vs. trumpet timbre. Thus, results indicate greater attentional and sensory
processing of instrumental timbres in WS compared with TD individuals. Implications will be
discussed for auditory sensitivities and musicality in WS.

Speed Poster Session 19: Timber I Hall, 11:00-11:40


Singing & Voice

A comparison between subjective and objective methods for evaluating the vocal accuracy of a popular song

Larrouy-Maestri, P.1, Lévêque, Y.2, Giovanni, A.2, Schön, D.3, & Morsomme, D.1
1Logopédie de la Voix, Cognitive Psychology, University of Liège, Belgium
2Laboratoire Parole et Langage, CNRS and Aix-Marseille University, France
3Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Aix-Marseille University,
France

Vocal accuracy of a sung performance can be evaluated by two methods: acoustic analyses
and subjective judgments. For the past decade, acoustic analyses have been presented as a more
reliable solution for evaluating vocal accuracy, avoiding the limitations of experts' perceptual
systems and their variability. This paper presents for the first time a direct comparison of
these methods. 166 occasional singers were asked to sing the popular song "Happy
Birthday". Acoustic analyses were performed to quantify the pitch interval deviation, the
number of contour errors and the number of tonality modulations for each recording.
Additionally, eighteen experts in singing voice or music rated the global pitch accuracy of
these performances. The results showed high inter-rater concordance among the judges. In
addition, a high correlation occurred between the acoustic measurements and the subjective
ratings. The judges' ratings were influenced by both tonality modulations and interval deviations. The total
model of acoustic analyses explained 81% of the variance of the judges' scores. This study
highlights the congruency between objective and subjective measurements of vocal accuracy
when the assessment is done by music or singing voice experts. Our results confirm the
relevance of the pitch interval deviation criterion in vocal accuracy assessment.
Furthermore, the number of tonality modulations is a salient criterion in perceptive rating
and should be taken into account in studies using acoustic analyses.
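
A hedged sketch of the pitch interval deviation criterion named above follows: the mean absolute difference, in cents, between sung and notated melodic intervals. The note values are illustrative, not study data.

```python
# Mean absolute deviation (cents) of sung intervals from the notated score.
import numpy as np

def cents(f2, f1):
    return 1200 * np.log2(f2 / f1)

def interval_deviation(sung_hz, score_semitones):
    """Mean |sung interval - notated interval| in cents."""
    sung = [cents(b, a) for a, b in zip(sung_hz[:-1], sung_hz[1:])]
    notated = [100 * s for s in np.diff(score_semitones)]
    return float(np.mean(np.abs(np.array(sung) - np.array(notated))))

sung_hz = [262.0, 296.0, 329.0, 348.0]  # a slightly sharp performance
score = [0, 2, 4, 5]                    # C D E F, in semitones
print(f"{interval_deviation(sung_hz, score):.1f} cents")
```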


Pitch Evaluations in Traditional Solo Singing: Comparison of Methods

Rytis Ambrazevičius, Robertas Budrys


Faculty of the Humanities, Kaunas University of Technology, Lithuania

Problems of pitch evaluation from pitch tracks obtained by computer-aided acoustical
analysis are considered; the case of monophonic vocal performance is examined. The
importance of the limited just-noticeable difference (JND) for the desired precision of the evaluation is noted.
Three methods of pitch evaluations were applied. First, pitches of one Lithuanian traditional
vocal solo performance (six melostrophes) were independently evaluated manually from
Praat-aided logf0 tracks by three subjects. From these data on individual pitches, evaluations
of musical scales averaged across the entire performance were also derived. Second, the
evaluations of musical scales were repeated based on logf0 histograms compiled from Praat
readings. Third, software NoteView for automated pitch extraction and integral evaluation
was applied. NoteView was chosen since it is considered one of the best programs for this
purpose. Evaluations of individual pitches by the three subjects (1st method) differed by 6.5
cents (here and hereafter averaged values are presented). However, for the degrees of the
musical scale, the difference dropped to 1.6–3.4 cents, depending on the range of sound
durations (IOIs) considered. In comparison, the other two methods gave considerably
inferior results (deviations from the semi-manual evaluations of the musical scale): 6.0–10.0
cents for histograms (2nd method) and 3.9–7.9 cents for NoteView (3rd method). The semi-
manual method of pitch evaluation, though time-consuming, is still more acceptable than the
two automated methods considered, unless a precision of 4.0–9.0 cents or worse is sufficient.
The reasons (need for subjective decisions, e.g., on target pitch, etc.) are discussed.
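
The histogram method (the 2nd method above) can be sketched as follows, assuming toy f0 data in place of Praat readings; bin width and peak picking are illustrative choices, not the study's settings.

```python
# Estimate scale degrees from a histogram of frame-wise f0 values in cents.
import numpy as np
from scipy.signal import find_peaks

def scale_from_f0(f0_hz, ref_hz=220.0, bin_cents=5):
    c = 1200 * np.log2(np.asarray(f0_hz) / ref_hz)
    bins = np.arange(c.min() - bin_cents, c.max() + 2 * bin_cents, bin_cents)
    hist, edges = np.histogram(c, bins=bins)
    peaks, _ = find_peaks(hist, prominence=hist.max() * 0.2)
    return (edges[peaks] + edges[peaks + 1]) / 2  # peak centres, in cents

rng = np.random.default_rng(2)
degrees = np.array([0, 200, 350, 500])  # a toy 4-degree scale, in cents
f0 = 220.0 * 2 ** ((rng.choice(degrees, 2000) + rng.normal(0, 8, 2000)) / 1200)
print(scale_from_f0(f0))  # peak centres should lie near the toy degrees
```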


Musicians' Perception of Melodic Intonation in Performances with and
without Vibrato

John M. Geringer,* Rebecca B. MacLeod,# Clifford K. Madsen,* Jessica Napoles ^


*College of Music, Florida State University, USA
#School of Music, University of North Carolina at Greensboro, USA
^School of Music, University of Utah, USA

We compared discrimination of mistuned intervals in unaccompanied melodies performed
by trumpet, violin, and voice, and examined whether there were differences between the
three timbres in melodies performed with and without vibrato. Participants were 144
university music students. Digital recordings of a professional violinist, vocalist, and trumpet
player performing the first four measures of "Twinkle, Twinkle Little Star" were edited to
provide the designated intonation conditions. Listeners heard 18 examples: the three
unaccompanied solo performers in two vibrato conditions (with and without vibrato), and
three intonation conditions (melodic intervals were in-tune, sharp 25 cents, or flat 25 cents
relative to equal temperament). In examples with mistuned intervals, scale degrees 2, 5, or 6
were altered. Listeners rated intonation accuracy on a 7-point scale. All three stimuli were
perceived as more out-of-tune when there was no vibrato compared to vibrato. Across non-
vibrato stimuli, violin was judged as more out-of-tune than voice and trumpet across all
three tuning conditions. Melodies performed with vibrato were judged differently: Violin
was judged as least in-tune for intervals mistuned in the flat direction, trumpet was heard as
least in-tune for intervals mistuned sharp, and voice was judged least in-tune when intervals
were actually in-tune (relative to equal temperament). This study provides support for the
idea that vibrato helps mask intonation inaccuracies. Differences in perception between
timbres may be influenced by performance tendencies of the instruments and characteristics
of the vibrato itself such as modulation width, rate, and type.


The timbre of the voice as perceived by the singer him-/herself

Allan Vurma
Estonian Academy of Music and Theatre, Estonia

This research aims to specify, with the help of perception tests, how the vocalist
perceives the timbre of his/her own voice during singing. Fifteen professional singers
sang simple vocal exercises at different pitch ranges. They were asked to fix in
their memory the timbre of their voice as perceived while singing. These sung excerpts
were recorded, and as a next step, seven timbral modifications were created from each
recording. The modifications corresponded to different hypotheses about the difference in
the voice's timbre in the vocalist's own perception compared to the timbre of that voice in
the perception of other persons at some distance. Then the modifications were played to the
participant whose voice was used for the modifications and he/she had to estimate the
similarity of those stimuli to the perception of his/her own voice as encountered
during singing. Participants rated as most similar those stimuli modified by a
filter whose frequency characteristic resembled the shape of a trapezoid and whose design
took into account (1) the transfer function of the diffracting air-conduction
component from the mouth of the singer to his ear canal, (2) the transfer function of the
bone-conduction component, and (3) the influence of the stapedius reflex on the sensitivity
of his/her hearing system. The frequency characteristics of cochlear microphonics as
measured in cats were used as the available approximation of the impact of the stapedius
reflex on human hearing.


Brain rhythm changes during singing voice perception

Yohana Lévêque,* Daniele Schön#


*Laboratoire Parole et Langage, CNRS & Aix-Marseille University, France
#Institut de Neurosciences des Systèmes, CNRS & Aix-Marseille University, France

A set of studies in humans has brought neuroimaging evidence of motor activations during
speech listening, suggesting that humans may have an audio-visual mirror system matching
articulatory sounds and motor representations. The goal of this study was to find out
whether such a motor activity may be induced by the perception of a natural singing voice, in
contrast with a computer-generated melody, and to determine the behavioral consequences
of this possible motor resonance. Twenty participants were asked to listen to and vocally
reproduce synthetic and sung melodies. We recorded both EEG (electroencephalography)
and vocal productions. An acoustical analysis enabled us to get the mean vocal pitch accuracy
of each participant. Then, we analyzed the evolution of beta-motor (20 Hz) and mu (10 Hz)
brain rhythms during vocal production and perception periods, two rhythms that are
typically suppressed during motor activity. Our results showed that mu and beta were
suppressed during singing, but also during perception of sung melodies, indicating an early
sensorimotor activity during listening to voice. No such sensorimotor activity was found for
computer-generated melodies. This motor activity during sung melody perception, a
hallmark of the mirror system, could reflect a mental simulation of the heard singing action,
priming the motor areas for subsequent repetition. Finally, we found that motor resonance
was inversely proportional to participants' vocal accuracy. This result suggests that poor
singers rely more strongly on biomechanical representations linked to voice production than
good singers when encoding the target-melody.


Effect of Augmented Auditory Feedback on Pitch Production Accuracy in
Singing

Dustin Wang, Nan Yan, Manwa L. Ng


Division of Speech and Hearing Sciences, the University of Hong Kong, Hong Kong

The effect of augmented (accompanying) auditory feedback on pitch production accuracy
during singing is controversial. Yet, the lack of control of vocal range as well as the different
criteria of grouping participants into poor and normal pitch singers might have contributed
to the contradictory findings reported in the literature. In the present study, 7 poor pitch
singers as well as 11 controls who had no formal training of singing were recruited to
perform in both a single-note pitch-matching task and a song-singing task. All participants
were native speakers of a tonal language. Absolute and relative pitch accuracy were compared
between speaker groups for the two tasks. Acoustic analysis was carried out using PRAAT
and the stimuli were generated using a music notation software (MUSESCORE) to better
control the tempo of presenting the stimuli and the accompaniment. The objective of the
current study is to investigate the effect of augmented auditory feedback on pitch accuracy
for both poor and good pitch singers and to compare the effect between two types of tasks.
Data collection is still in progress; however, available data show that, in the pitch-matching
task, the effect of augmented feedback is positive for moderately poor pitch singers but not
for severely poor ones, while its influence on performance in the song-singing task is
negative.


Vocal tract dimensional characteristics of professional male singers with
different singing voice types

Nan Yan,* Manwa L. Ng *, Edith K. Chan *, Chengxia Liao#


*Speech Science Laboratory, Division of Speech and Hearing Sciences, University of Hong Kong,
China; #Vocality Department, Xinghai Conservatory of Music, China

The present study examined the possible relationship between classification of professional
singing voices and their vocal tract parameters, including vocal tract length and volume. A
total of 19 tenor and 10 baritone professional singers participated in the study. Acoustic
reflection technology (ART) was used to measure vocal tract length and volume from all
participants and six vocal tract dimensions (oral length, pharyngeal length, total vocal tract
length, oral volume, pharyngeal volume, and total vocal tract volume) were measured. The
results show that no significant difference was found in any vocal tract dimension between
tenors and baritones. Our results failed to demonstrate any vocal tract measure that was
specific to a particular classification. This appears to suggest that, in addition to vocal tract
length, other factors may also affect singer types and the characteristic voice timbre of a
professional singer.


Vocal Fold Vibratory Differences in Different Registers of Professional Male
Singers with Different Singing Voice Types

Nan Yan,* Manwa L. Ng *, Edith K. Chan *, Dongning Wang *, Chengxia Liao#


*Speech Science Laboratory, Division of Speech and Hearing Sciences, the University of Hong
Kong, China; #Vocality Department, Xinghai Conservatory of Music, China

Vocal register is an important concept for singing voices and has been related to vocal fold
vibratory characteristics. This study examined the relationship between different singing
voice types and the associated vocal fold vibratory characteristics. A total of 19 tenor and 10
baritone professional singers participated in the study. A total of 84 vowel sounds sung in
chest, head and falsetto registers at a constant loudness and most comfortable pitch level
were analyzed by using electroglottography (EGG). The open quotient (Oq) and fundamental
frequency (F0) parameters were extracted and the gradient Oq/log(F0) was determined.
Results showed that tenors had significantly higher Oq/log(F0) gradient than baritones in
chest and head registers, while no significant difference was found in falsetto register
between the baritones and tenors. Moreover, gradient Oq/log(F0) was significantly greater
in falsetto register when compared with chest and head registers produced by baritone
singers. The present results provide insights to the application of vocal fold vibratory
characteristics in voice classification for male singers.
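
A minimal sketch of the reported gradient follows, assuming toy measurements in place of the study's EGG data: a least-squares slope of Oq regressed on log10(F0).

```python
# Slope of the open quotient (Oq) regressed on log10(F0).
import numpy as np

def oq_logf0_gradient(f0_hz, oq):
    slope, _ = np.polyfit(np.log10(f0_hz), oq, 1)
    return slope

f0 = np.array([110.0, 147.0, 196.0, 262.0, 330.0])  # sung vowel F0s (toy)
oq = np.array([0.48, 0.52, 0.55, 0.60, 0.66])       # toy open quotients
print(f"Oq/log(F0) gradient: {oq_logf0_gradient(f0, oq):.2f}")
```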


Speed Poster Session 20: Timber II Hall, 11:00-11:40


Health & well-being

Sonic Feedback to Movement: Learned Auditory-Proprioceptive Sensory Integration

Regev Tamar,*#^ Duff Armin#, Jorda Sergi^


*ELSC - Admond and Lily Safra Center for Brain Sciences, and ICNC - Interdisciplinary Center for
Neural Computation, The Hebrew University of Jerusalem, Israel; #SPECS - Synthetic
Perceptive Emotive and Cognitive Systems; ^MTG - Music Technology Group, Universitat
Pompeu Fabra, Barcelona, Spain

Multisensory integration recently gained attention in a variety of disciplines, from cognitive
psychology to neuroscience. We present an experimental study of auditory-proprioceptive
sensory coupling by sonic feedback to movement, using advanced interface technology for
the experimental design and measurement. Our objective is to investigate sound-body
perceptual interaction and suggest possible application for physical therapy. Sound is
synthesized in real-time according to movement parameters captured by a wireless sensor
attached to the arm. Specifically, the angle of arm elevation is dynamically translated to
auditory pitch, forming a new perception-action cycle. Our general hypothesis is that after a
short learning period, subjects develop auditory proprioception, such that auditory
information affects proprioceptive performance. We operationalize our hypothesis using a
motor reaching task, in which subjects lift their arm towards a target point. Continuous
sonification of arm elevation angle is presented, or not (control condition), during movement
trajectory. First, we show that after a short learning period with a fixed angle-to-pitch
mapping, sonic feedback improves accuracy in the motor task, compared to no-feedback.
Second, we distort the learned mapping without informing participants. Mean hand positions
are significantly affected by the mapping manipulation, while most subjects do not report
awareness of it. In conclusion, we show that sonic feedback of auditory pitch can be
integrated efficiently into body perception. Distorting the learned movement-to-sound
mapping results in a complex auditory-somatic competition. We propose that such
distortions could be applied to amplify the range of movement in motor neuro-rehabilitation.
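
A hedged sketch of an angle-to-pitch mapping of the kind described follows; the ranges and the distortion factor are illustrative assumptions, not the study's calibration. Arm elevation is mapped onto log-frequency so equal angular steps give equal musical intervals, and a distortion exponent models the unannounced perturbation phase.

```python
# Map arm elevation (degrees) to pitch (Hz) on a log-frequency scale.
import numpy as np

def angle_to_pitch(angle_deg, lo_hz=220.0, hi_hz=880.0, max_deg=90.0,
                   distortion=1.0):
    """distortion != 1 skews the learned mapping, as in the perturbation."""
    x = np.clip(angle_deg / max_deg, 0.0, 1.0) ** distortion
    return lo_hz * (hi_hz / lo_hz) ** x

for a in (0, 30, 60, 90):
    print(a, round(float(angle_to_pitch(a)), 1),
          round(float(angle_to_pitch(a, distortion=1.3)), 1))
```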


Music use patterns and coping strategies as predictors of student anxiety
levels

Zhiwen Gao, Nikki Rickard


Monash University, Australia

University students are large consumers of music products, and are also under high anxiety
levels due to a range of stressors (e.g. examination and assignments). Music listening is often
claimed to be a useful method of emotion and mood regulation. The aim of this study was to
explore the relationships between music listening habits, music-related coping strategies
and anxiety levels in university level students. The potential moderators of emotion
regulation capacity and self-efficacy were also explored, and general coping capacity was
taken into account. An online survey obtained information from 193 participants (49 males
and 144 females; mean age=21.25, SD=5.65). This sample was found to be quite anxious,
with half the sample reporting severe anxiety levels. The majority (94.3%) indicated that
they like listening to music when stressed or anxious, with most listening to it via a portable
device (78.2%) and in the background (54.4%). A brief period of music listening (less than
30 minutes) was sufficient for the majority of the sample (74.1%) to feel less stressed. The most
commonly used coping strategies involving music were for emotion/cognitive self-
regulation and active/strategic self-regulation. Finally, when coping strategies and age
were controlled, music coping was still a significant predictor of anxiety levels in this sample.
However, the prediction was positive, indicating that students experiencing higher anxiety
levels also used music more to cope than did students with lower anxiety levels. These
findings suggest that students who are unable to manage their anxiety with general coping
strategies may find some outlet via music listening.


Schizotypal Influences on Musical Imagery Experience

Michael Wammes, Daniel Müllensiefen, Victoria Williamson


Goldsmiths, University of London, UK

There are currently few research studies that explore the nature of musical imagery in the minds
of individuals of different and unique mental health populations. While there have been
interview-based studies into the nature of musical imagery in non-clinical populations, little is
known about how the quality of the musical imagery varies across individuals within clinical
populations. The goal of this research is to better understand how individuals suffering from
schizotypal illnesses and other forms of psychosis experience musical imagery, and to compare
their musical imagery to the experience of auditory hallucinations. This study utilizes both
interviews and quantitative measures in order to test hypotheses that these two phenomena are
experientially similar for this population. In the first study, participants were asked to complete a
questionnaire to assess the extent to which they experience musical imagery, as well as some
qualities of that imagery (The Musical Imagery Questionnaire; MIQ), and the brief version of the
Schizotypal Personality Questionnaire (SPQ-B). A revised version of the MIQ containing new
items designed to assess musical hallucinations and unconscious phenomena was used. In the
second study, semi-structured interviews were conducted with eight of the participants to
conceptualise the phenomenology of the experiences from a personal perspective. Results
showed partial support for the hypothesis. In the first experiment, correlations revealed that
individuals who scored higher on the SPQ-B also tended to find their musical imagery more
persistent and distracting, more worrisome, and more frequent. They were also more likely to
score highly on the hallucination items and on the extent to which they perceived their musical
imagery to be out of their conscious control. Participants who scored high on the SPQ also
reported that their musical imagery was less pleasant, consistent with their experiences of
auditory hallucinations. Qualitative data gathered from the interviews supported these findings.
Data from both experiments partially support the hypothesis that individuals suffering from
hallucinations and psychosis experience musical imagery in a similar way to the positive
symptoms of their illness (namely auditory hallucinations), and are often incapable of
distinguishing between the two.


Music aids gait rehabilitation in Parkinson's disease
Charles-Etienne Benoit, Nicolas Farrugia, Sonja Kotz, Simone Dalla Bella

Department of Cognitive Psychology, University of Finance and Management, Warsaw, Poland

The presentation of temporally regular auditory stimuli as a cue to facilitate movement


execution is a widespread tool in the gait rehabilitation of Parkinson's Disease (PD). This
disorder is characterized by the malfunctioning of basal ganglia cortical brain circuitry,
leading to a failure to automatically maintain an appropriate amplitude and timing of
sequential movements. Synchronizing steps with a temporally predictable stimulus (i.e., a
metronome presented alone or embedded in a musical stimulus) has been shown to improve gait
kinematics in this patient population (with increased walking speed and reduced variability).
The effects of auditory cueing are highly beneficial for the patients' mobility thereby
enhancing their quality of life. Surprisingly, in spite of a great deal of clinical evidence on the
benefits of auditory cueing, little is known about changes in brain plasticity underlying this
form of training. Here we summarize clinical and brain imaging evidence on the effects of
auditory cueing on gait in patients with PD. Moreover, we propose that cueing effects are
likely mediated by the activation of a general-purpose neuronal network involved in the
synchronization of motor movement to temporally regular external stimuli (i.e., auditory-
motor coupling). This neural mechanism, unaffected in PD, should facilitate movement
execution. Cerebellar projections stimulate motor areas, facilitating gait initiation and
continuation when movement is externally induced. Extensive stimulation via
auditory cueing is likely to foster brain plasticity, particularly at the level of the brain
circuitry underpinning sensorimotor coupling (increasing connectivity in areas devoted to
sensorimotor integration), thus supporting improvements positively affecting gait
kinematics in PD. In addition, as mechanisms underlying auditory-motor coupling are likely
to be domain general, the effects of auditory cueing may extend to other functions, such as
regulation of fine motor movements or speech.


Discrimination of slow rhythms mimics beat perception impairments
observed in Parkinson's disease
Devin McAuley, Benjamin Syzek, Karli Nave, Benjamin Mastay, & Jonathan Walters
Department of Psychology, Michigan State University, USA

Research has demonstrated that rhythm discrimination shows a beat-based advantage
(BBA) whereby simple rhythms with a beat are better discriminated than complex rhythms
without a beat. Recently, Grahn & Brett (2009) showed that individuals with Parkinson
Disease (PD) do not show a BBA. The present investigated rhythm discrimination using
simple and complex rhythms that were presented at either the original tempo investigated
by Grahn & Brett (2009) or at a slower tempo. We expected to replicate the BBA for the
original tempo and to reduce or possibly eliminate the BBA at the slower tempo. Two
experiments were conducted. On each trial, participants heard two successive presentations
of a standard rhythm followed by a third presentation of the same rhythm or a slightly
changed rhythm. Participants judged whether the third rhythm was the same or different
than the standard. In both experiments, participants showed a reliable BBA. The magnitude
of the BBA, however, was larger for rhythms marked by empty intervals (Experiment 1) than
by filled intervals (Experiment 2). Slowing down the rhythms reduced discrimination
performance. This reduction was greater for simple rhythms than for complex rhythms,
thereby eliminating the BBA. Notably, the pattern of performance for the slowed rhythms
was strikingly similar to the pattern previously observed for individuals with PD.
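
One common way to quantify a beat-based advantage in a same/different task is as a difference in d' between simple and complex rhythms. A minimal sketch with illustrative counts (not the study's data) is shown below.

```python
# Signal-detection d' for same/different judgments, per rhythm type.
from scipy.stats import norm

def d_prime(hits, misses, fas, crs):
    """d' with a simple correction to avoid 0 or 1 hit/false-alarm rates."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (fas + 0.5) / (fas + crs + 1)
    return norm.ppf(h) - norm.ppf(f)

simple = d_prime(hits=42, misses=8, fas=10, crs=40)
complex_ = d_prime(hits=33, misses=17, fas=14, crs=36)
print(f"BBA (d' simple - d' complex) = {simple - complex_:.2f}")
```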


Random delay boosts musical fine motor recovery after stroke

van Vugt F. T.*, Kuhn W.*, Rollnik J. D.#, Altenmüller E.*


*Institute of Music Physiology and Musicians' Medicine, University of Music, Drama and Media,
Hannover, Germany; #BDH-Klinik, Hessisch Oldendorf, Germany

Motor impairments are among the most common and most disabling results of stroke
worldwide. Previous studies have revealed that learning to play the piano helps to improve
motor function of these patients. It has been hypothesised that the effectiveness of this
therapy relies on the fact that the patient's brain receives a time-locked auditory feedback (a
musical tone) with each movement (keystroke). To test this hypothesis, 15 patients in early
stroke rehabilitation with no previous musical background learned to play simple finger
exercises and familiar children's songs on the piano. The participants were assigned to one of
two groups: in the normal group, the keyboard emitted a tone immediately at keystroke, in
the delay group, the tone was delayed by a random time interval between 100 and 600ms. To
assess recovery, we performed standard clinical tests such as the nine-hole-pegboard test
and index finger tapping speed and regularity. Surprisingly, patients in the delay group
improved strikingly in the nine-hole-pegboard test, whereas patients in the normal group did
not. In finger tapping rate and regularity both groups showed similar marked improvements.
The normal group showed reduced depression whereas the delay group did not. We
conclude that, contrary to expectations, music therapy on a randomly delayed keyboard can
significantly boost motor recovery after stroke. We hypothesise that the patients in the
delayed feedback group implicitly learn to be independent of the auditory feedback and
therefore outperform those in the normal condition.


Proposal for Treatment of Focal Dystonia in a Guitar Player: A Case Study

Rita de Cássia dos Reis Moura,* Graziela Bortz,# Patrícia Aguiar*


*Department of Neurology, Federal University of São Paulo (Unifesp), Brazil
#Music Department, State University of São Paulo (Unesp), Brazil

Focal dystonia in musicians is classified as a task-specific movement disorder. It presents
itself as a loss of voluntary motor control in extensively trained movements while musicians
play the instrument. When such a disorder occurs in the professional life of a musician, it
frequently leads to a definitive interruption of his or her career after several frustrated
attempts to recover. This paper presents a follow-up of an individualized treatment and the
evolution of focal dystonia in a diagnosed guitarist after three and six months of treatment.
Instrumental practice conditions were registered before, during and after sessions of
treatment. During the first phase, three techniques were applied: a) desensitization: rest,
relaxation, and consciousness of muscular tension; b) sensory retraining: specific, repetitive,
goal-oriented sensory activities; c) acupuncture: relaxation and balance of muscular tension.
In the second phase, retraining was prioritized through: a) motor reprogramming/motor
control; b) ergonomic adaptations: modifications of movements and instrument; c) use of
orthoses: splints and gloves for restricting unwanted movements. In the last phase, easy
technical methods were used to exercise arpeggios, scales, and, later, chords of
two or three notes. The follow-up over the last six months shows a decrease in trembling,
improved muscular relaxation, and the acquisition of good postural awareness during
guitar practice. A better perception of muscular tension was observed. It was possible to
verify direct emotional interferences impairing instrumental practice. The treatment
proposed here, built on multiple strategies, yielded positive and varied results after six
months of treatment.


The Reflexion of Psychiatric Semiology on Musical Improvisation: A case study
of a patient diagnosed with Obsessive Compulsive Disorder

Xanthoula Dakovanou,* Christina Anagnostopoulou,# Angeliki Triantafyllaki#


*École Doctorale de Recherches en Psychanalyse, University Paris VII, France
#Department of Music Studies, University of Athens, Greece

Several studies associate musical features with specific aspects of a patient's emotional
states. Less work has been carried out, however, on the association between musical discourse
and structure and the patient's psychiatric signs and symptoms. This study aims to investigate
the potential reflection of psychiatric semiology and symptomatology of a patient diagnosed
with Obsessive Compulsive Disorder (OCD) onto her musical improvisation. We describe the
case study of a 41-year old female patient diagnosed with OCD and also presenting other
related psychotic symptoms. The patient had three interactive music sessions with the
MIROR-Impro prototype system, a machine-learning-based system which interacts with the
user's improvisations, responding by using and rephrasing his/her own musical material
and thus creating a musical dialogue. Data collection involved two clinical interviews with
the patient, access to her medical file, recording of musical sessions in order to analyse the
musical improvisations and video recording to observe the patient's related behaviour. We
compare findings from the music analysis of the improvisations, the corresponding
behaviour, and the clinical data we obtained and analysed, using an analytical music therapy
reflection. Our results show that aspects of the patient's pathology can be associated with
musical attributes and structures found in the improvisations. In particular, the patient's
logorrhea observed in the interviews is translated into non-stop playing, impulsivity
becomes intensive playing, the fast tempo reflects anxiety, repeated musical clusters reflect
fixation on ideas, and other musical features are related to aspects of the patient's mood.

Speed Poster Session 21: Grand Pietra Hall, 11:40-12:10


Cognitive modeling & representation

Evaluation of perceptual music features


Anders Friberg, Anton Hedblad, Marco Fabiani
KTH Royal Institute of Technology, Sweden

The musical building blocks (here, features) as perceived while listening are often assumed to be
the notes and the well-known abstractions such as grouping, meter and harmony. However, is
that really what we hear when we briefly listen to a new song on the radio? We can then perceive,
e.g., the genre and emotional expression just from the first few seconds. From an ecological
viewpoint one can argue that features like distance, direction, speed and energy are important (see
other abstract). From emotion research, a number of qualitative features relating to general music
theory aspects have been identified, e.g., rhythmic and harmonic complexity measured on
a gradual scale ranging from simple to complex. From a computational viewpoint, a large number
of features ranging from low-level spectral properties to high-level aspects have been used within
research in music information retrieval. The aim of the current study is to look at music
perception from a number of different viewpoints, identify a subset of relevant features, evaluate
these features in listening tests, and predict them from available computational audio features. A
small set of nine features was selected. They were Speed, Rhythmic clarity, Rhythmic complexity,
Articulation, Dynamics, Modality, Overall pitch, Harmonic complexity, and Brightness. All the
features were rated on Likert scales in two listening experiments. In experiment one (N=20) the
music examples consisted of 100 polyphonic ringtones generated from MIDI files. In this
experiment they also rated Energy and Valence. In experiment two (N=21) the music examples
were 110 film clips previously used in an emotion study (Eerola and Vuoskoski, 2010), thus, with
available data regarding emotional ratings. In addition, all the perceptual features were modeled
with audio features extracted by existing software such as the MIRToolbox. The agreement among
the listeners varied depending on the feature as expected. While Speed had a large agreement,
Harmonic complexity showed a rather modest agreement indicating a more difficult task. The
feature inter-correlations were in general modest indicating an independent rating of all the
features. The emotion ratings could be well predicted by the rated features using linear
regression. In the first experiment the energy rating was predicted with an adj. R2 = 0.93 and the
valence rating with an adj. R2 = 0.87. Many of the features could be predicted from audio features
rather well with adj. R2 up to approx. 0.80. The results were surprisingly consistent and indicate
that rated perceptual features can indeed be used as an alternative to traditional features in music
information retrieval tasks such as the prediction of emotional expression.
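
A minimal sketch of the prediction step follows, with simulated ratings in place of the study's data: rated perceptual features entered into a linear regression, with adjusted R2 computed as reported. The feature names are taken from the abstract; the weights are arbitrary.

```python
# Predict a rated emotion dimension from rated perceptual features.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n, names = 100, ["speed", "rhythmic_clarity", "modality", "dynamics"]
X = rng.standard_normal((n, len(names)))                 # toy feature ratings
energy = X @ np.array([0.8, 0.3, 0.1, 0.5]) + rng.normal(0, 0.4, n)

model = LinearRegression().fit(X, energy)
r2 = model.score(X, energy)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - X.shape[1] - 1)   # adjusted R2
print(f"R2 = {r2:.2f}, adjusted R2 = {adj_r2:.2f}")
```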

Stability and Variation in Cadence Formulas in Oral and Semi-Oral Chant
Traditions - a Computational Approach

Dániel Péter Biró1, Peter Van Kranenburg2, Steven Ness3, George Tzanetakis3, Anja Volk4
1University of Victoria, School of Music, 2Meertens Institute, Amsterdam, 3University of Victoria,
Department of Information and Computing Sciences, 4Utrecht University

This paper deals with current computational research into melodic stability and variation in
cadences as they occur in oral and semi-oral traditions. A main aspect of recent
computational investigations has been to explore the ways in which melodic contour defines
melodic identities (Ness et al., 2010; Van Kranenburg et al., 2011). Creating a new framework
for melodic transcription, we have quantized and compared cadences found in recorded
examples of Torah trope, strophic melodies from the Dutch folk song collection Onder de
groene linde and Quran recitation. Working within this new transcription framework, we
have developed computational methods to analyze similarity and variation in melodic
formulas in cadences as they occur in recorded examples of the aforementioned oral and
semi-oral traditions. Investigating stability and variation using histogram-based scales,
melodic contours, and melodic outlines derived from recorded examples, we interpret our
findings with regard to structural processes of oral transmission in these chant types.
Through this research we hope to achieve a better sense of the relationship between melodic
gesture and melodic formulae within these chant practices and possibly a new
understanding of the relationship between improvisation and notation-based chant in and
amongst these divergent oral and semi-oral chant traditions.


Modeling Response Times in Tonal Priming Experiments

Tom Collins,* Barbara Tillmann,# Charles Delbé,# Frederick S. Barrett,* Petr Janata*
*Janata Lab, Center for Mind and Brain, University of California, Davis, USA
#Université de Lyon, and Centre National de la Recherche Scientifique, France

In tonal priming experiments, participants make speeded judgments about target events in
short excerpts of music, such as indicating whether a final target tone or chord is mistuned.
By manipulating the tonal function of target events, it is possible to investigate how easily
targets are processed and integrated into the tonal context. We investigate the psychological
relevance of attributes of processed audio signals, by relating those attributes to response
times for over three hundred tonal priming stimuli, gathered from seven reported
experiments. To address whether adding a long-term, cognitive, representation of tonal
hierarchy improves the ability to model response times, Leman's sensory periodicity pitch
(PP) model is compared with a cognitive model (projection of PP output to a tonal space
(TS) representing learned knowledge about tonal hierarchies), which incorporates pitch
probability distributions and key distance relationships. Results revealed that variables
calculated from the TS model contributed more to explaining variation in response times
than variables from PP, suggesting that a cognitive model of tonal hierarchy leads to an
improvement over a purely sensory model. According to stepwise selection, however, a
combination of sensory and cognitive attributes accounts better for response times than
either variable category in isolation. Despite the relative success of the TS representation,
not all response time trends were simulated adequately. The addition of attributes based on
transition probabilities may lead to further improvements.


Optimising short tests of beat perception and melodic memory

Jason Musil*, Bruno Gingras#, Lauren Stewart*, Daniel Müllensiefen*


*Department of Psychology, Goldsmiths, University of London, United Kingdom
#Department of Cognitive Biology, University of Vienna, Austria

Traditional tests of musical ability or achievement tend to assess performance-related aptitude


and aural skills, often related to achievements and objectives defined by Western art music
teaching/training curricula. Their use may cause underestimation of individual differences in
musical cognition enhanced by musical engagement other than formal musical training. We aimed
to create and optimise two short tests of fundamental musical skills to assess individual
differences in non-specialist populations. We adapted Iversen and Patel's (2008) measure of beat
perception (BAT), which is assumed to have little bias towards any musical style. The second task
is a test of memory for unfamiliar melodies, which is only partially affected by formal musical
training and can therefore measure both skill level arising from musical training and musical
memory not affected by formal musical training. 162 participants identified whether 18 fifteen-
second musical clips (representing rock, jazz or pop/orchestral styles) were in time with overlaid
beep tracks or slightly off. Beeps deviated either by phase or tempo and extracts had duple or
triple meters. For the melodic memory task, participants listened to melody pairs, judging
whether or not the second, transposed, version was melodically identical to the first. Variants
differed by changes in interval structure, contour, and/or tonal variations. Test data were
modelled using an Item Response Theory approach to identify item subsets with desired
psychometric properties. BAT performance was high (proportion correct M=0.91, SD=0.11).
Difficulty increased with triple meter and phase shifts, with a significant interaction (all p<.001).
Response data were fitted to a one-parameter Rasch model relating item difficulty to person
ability, and an optimal subset of items was identified. Melodic memory performance was also high
(proportion correct M=0.71, SD=0.45), with differences significantly easier to detect when
violating tonality (p<.001) and showing no main effect of contour (p=.115). Performance was best
for contour plus tonality violations, and worst for contour without tonality violation (p<.001).
Rasch modelling again identified an optimal stimulus subset.
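
A hedged sketch of a one-parameter (Rasch) fit by joint maximum likelihood on simulated binary responses follows; the person and item counts are illustrative, and real IRT analyses would use dedicated software rather than this toy optimisation.

```python
# Rasch model: P(correct) = sigmoid(ability - difficulty), fit by joint ML.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_persons, n_items = 50, 18
theta_true = rng.normal(0, 1, n_persons)        # person abilities
b_true = rng.normal(0, 1, n_items)              # item difficulties
p = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
resp = (rng.random((n_persons, n_items)) < p).astype(float)

def nll(params):
    """Negative log-likelihood of the Rasch model."""
    theta, b = params[:n_persons], params[n_persons:]
    logits = theta[:, None] - b[None, :]
    return np.sum(np.logaddexp(0.0, logits) - resp * logits)

fit = minimize(nll, np.zeros(n_persons + n_items), method="L-BFGS-B")
b_hat = fit.x[n_persons:]
print("difficulty recovery r =", round(np.corrcoef(b_hat, b_true)[0, 1], 2))
```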


The influence of temporal regularities on the implicit learning of pitch
structures

Tatiana Selchenkova,*,# Mari Riess Jones *, Barbara Tillmann*,#


*CNRS, UMR5292; INSERM, U1028; Lyon Neuroscience Research Center, Auditory Cognition and
Psychoacoustics Team, Lyon, France; #University Lyon 1, Villeurbanne, France

Implicit learning (IL) is the acquisition of complex information without intention to learn.
The Dynamic Attending Theory proposed by Jones postulates internal oscillations that
synchronize with external regularities, helping to guide attention to events and to develop
expectations about future events. Our first study investigated how temporal expectations
influence the development of perceptual expectations in tone sequences created by an
artificial pitch grammar. In this behavioral study, two groups of participants were
respectively exposed to an artificial pitch grammar presented with either a regular or
irregular rhythm. Results showed that the artificial grammar was learned entirely when
presented regularly, but only partially when presented irregularly. These findings suggest
that regular rhythms help listeners develop perceptual expectations about future tones,
thereby facilitating their learning of an artificial pitch grammar. A second study, which
combines behavioral and electrophysiological methods, is currently in progress; it aims
to ascertain which type of temporal presentation, strongly metrical or isochronous, leads to
better IL of tone structures.

The effect of musical expertise on the representation of space

Silvia Cucchi*, Carlotta Lega*#, Zaira Cattaneo#, Tomaso Vecchi*


* Cognition Psychology Neuroscience Lab, University of Pavia, Italy
#Department of Psychology, University of Milano-Bicocca, Italy
Il Musicatorio, Torino, Italy
Brain Connectivity Center, IRCCS Mondino, Pavia, Italy

Spatial abilities play an important role in the way we comprehend and process musical
stimuli. It is thus not surprising that musical expertise affects the way musicians represent
peripersonal space, as for instance suggested by the existence of a SPARC effect (Spatial Pitch
Association Response Codes; also referred to as SMARC, Spatial Musical Association Response
Codes). Interestingly, previous studies demonstrated that musicians show more accurate
performance in visual bisection tasks, and even show a small but consistent rightward bias
(whereas non-musicians usually show a leftward bias, reflecting the so-called
"pseudoneglect"). Our study aims to investigate whether differences in the way space is
represented in musicians extend also to non-visual modalities. To this end, we
compared a group of musicians and non-musicians in a haptic bisection task, with rods to be
bisected presented either horizontally or radially. Results indicate that musicians
indeed show a different directional bias compared to non-musicians in both the horizontal and
radial plane. Moreover, there is evidence that bisection performance can be affected by the
simultaneous presentation of cues that activate a spatial representation (for instance,
numbers of different magnitude). Accordingly, in our study we also investigated whether
pitch perception influences the representation of space. We found that musicians' (but not
non-musicians') bisection performance is significantly affected by simultaneously listening
to notes. Overall, our findings suggest that musical tones are spatially represented in
musicians, and that musical spatial representation can interfere with a spatial perception
task.








Speed Poster Session 22: Crystal Hall, 11:40-12:10


Musical development & education II

Sibling influences on musical development

Franziska Olbertz
University of Osnabrück, Germany

Psychological research shows increasing interest in early social experiences among siblings;
however, very little is known about the effects of sibling relations on musical development. Thus
the aims of the study are to precisely describe typical sibling influences in the field of music
and to discover interacting environmental variables. 63 music students completed an open-
ended questionnaire about their memories of musical influences by siblings during
childhood and adolescence. 394 statements were classified into 30 content categories
generated by qualitative content analysis. Categories were assigned to four higher categories
of relation context. Basic quantitative analyses suggest that musical sibling influences
depend on period of life (childhood or adolescence), age difference and sex of respondents
and siblings (p<.04). Sibling influences in the field of music are multifaceted. Whereas some
respondents, for instance, started to play an instrument in order to become part of a music
making sibling group, others preferred their music style to differ from a sibling.


The Impact of Focused Instruction on Kindergarteners' Singing Accuracy


Bryan E. Nichols, Steven M. Demorest
Music Education, University of Washington, USA

The purpose of the study was to determine the effect of singing skills instruction on kindergarten
children's singing accuracy. Prior to instruction, all students (age 5-6 yrs) were recorded in a
singing accuracy assessment that included pitch matching and song-singing tasks. Families of
participating students completed a background questionnaire regarding student music
participation, music in the home, and the expressed importance of music in home life. The
treatment group (n = 41) was drawn from three different classes receiving 20 minutes per day of
group music instruction with particular attention to the development of the singing voice in terms
of tone, register and accuracy. The control group (n = 38) came from three different classes that
received no singing instruction in school. Following six months of instruction, post-test
measurements were administered using the same form as in the pre-test. Pretest results indicate
no significant differences between the experimental and control classes and no difference in scores
between boys and girls. For the three pitch matching tasks, students scored significantly higher
on the interval tasks followed by pattern tasks followed by the single-pitch tasks. For the posttest,
all groups showed significant improvement on the pitch matching tasks but no improvement on
the song-singing task. The experimental group showed greater improvement, but the difference
was not significant. There was a moderate but significant correlation (r=0.41) between total pitch
matching scores and song-singing scores. Results will be discussed in terms of the role of
instruction and approaches to measurement in singing accuracy research.

Children's Spontaneous Behaviors as Strategies for Meaningful Engagement

Lori Custodero, Claudia Cali


Teachers College Columbia University

The function of music for young children is multi-faceted. It has been linked to communication
and self-regulation in clinical studies of musical parenting involving infants. Once children
become mobile and verbal, research tends to focus on musical skill exhibited in environments
structured by adults for children, such as the classroom, home, or playground. Perceiving
children's musical culture as different from that of adults, we seek to understand children's
spontaneous music-making in everyday life as exhibited in public spaces, specifically in the
subway system in New York City. The current study is based on similar research (Custodero,
2006) which found a pervasiveness of movement; invented vocal material, most often in a solitary
context; and a complex array of adult-child interactions. Specific aims were to document,
interpret, and analyze a) children's musical behaviors: broadly interpreted as singing, moving
themselves rhythmically or expressively, or similarly moving objects as instruments; b)
environmental, circumstantial, and personal characteristics that may influence these behaviors;
and c) possible developmental functions of musical behaviors in public spaces. Data were
collected on 3 trains that run the length of Manhattan, on 3 specific Sundays over a period of 1
month. A team of 12 people travelled in pairs, 2 pairs in 2 different cars on each line, for one round
trip per day. Each team member filled out the Spontaneous Music Observational Protocol for each
musical episode observed, and reported conditions in the train car at each stop before which no
music making was observed. Duration, gender and estimated age of child, social context, sonic and
social environmental triggers, musical material, type/s of behavior, possible developmental
function, and more detailed description have been recorded. Interpretation was completed within
24 hours of documentation. Starting with paired descriptions and interpretations of the same
events, all team members reviewed all episodes to ensure consensus. Specific focus on the
categorization of musical behaviors and their functions for the child included comparison with
findings of the pilot study concerning the role of movement, of singing as accompaniment, and
differences between episodes with social and solitary engagement. The study of children's
music-making in an
everyday context provides implications for resourcing educative environments, and brings about
further questions about the relationship of listening to children and pedagogical practice.


Para-language songs as alternative musical stimuli for devices and playthings
to enhance caregiver interaction with babies and toddlers

Idit Sulkin, Warren Brodsky


Music Science Lab, Department of the Arts, Ben-Gurion University of the Negev, Israel

The study explored the concurrent validity of Para-language songs versus other commercially
available musical stimuli employed by parents of babies and toddlers. Although musical
communications and interactions are important to child development, modern-day
technology and the popularity of concepts such as the "Mozart Effect" have caused social
modifications of musical engagement for parents and children, meaning in many cases
music-based electronic devices are used to replace human musical interactions. In this study
we developed alternative musical stimuli based on pre-language sounds for live caregiver
interactions, as well as for devices and playthings that can engage babies and toddlers more
appropriately. Para-language songs are patterned on two factors: the use of syllables and
consonants deemed as the initial utterances of children's first verbal expressions; and the
natural universal character of children's songs. Three studies were conducted. In Study 1,
parents of babies/toddlers in waiting rooms of Child Centers completed a Parents' Preference
Questionnaire (PPQ) after listening to different genres of musical stimuli (classical themes,
popular folk tunes, and Para-language songs); in Study 2, parents underwent the same
procedure as Study 1 but within their own home setting; in Study 3, mothers completed the PPQ
subsequent to participation in a group encounter that encouraged interactive caregiver-baby
movement sequences as accompaniment to background music. The Para-language songs
received scores similar to or higher than the more commercially available stimuli popular
among parents, media, and products. Hence, it can be concluded that parents are open to
engaging with devices and playthings that employ alternative musical genres.


Precursors of Dancing and Singing to Music in Three- to Four-Month-Old
Infants

Shinya Fujii,1, 2, 3 Hama Watanabe,2 Hiroki Oohashi,2 Masaya Hirashima,2 Daichi Nozaki, 2
Gentaro Taga2
1Department of Neurology, Beth Israel Deaconess Medical Center and Harvard Medical School,
USA; 2Graduate School of Education, The University of Tokyo, Japan; 3Research Fellow of
Japan Society for the Promotion of Science, Japan

Dancing and singing involve auditory-motor coordination and have been essential to our
human culture since ancient times, yet their developmental manifestation has not been fully
explored. We aimed to examine whether three- to four-month-old infants are able to
synchronize movements of their limbs to musical beat and/or produce altered vocalizations
in response to music. In the silent condition, there was no auditory stimulus, whereas in the
music condition, one of two pop songs was played: "Everybody" by Backstreet Boys and/or
"Go Trippy" by WANICO feat. Jake Smith. Limb movements and vocalizations of the infants in
the supine position were recorded by a 3D motion capture system and the microphone of a
digital video camera. First, we found a striking increase in the amount of limb movements
and their significant phase synchronization to the musical beat in one individual. As a group,
however, there was no significant increase in the amount of limb movements during the
music compared to the silent condition. Second, we found a clear increase in the formant
variability of vocalizations during the music compared to the silent condition in the group.
The results suggest that our brains are already primed with our bodies to interact with
music at these months of age via limb movements and vocalizations.


Speed Poster Session 23: Dock Six Hall, 11:40-12:10


Rhythm & synchronization
Tap-It: An iOS App for Sensori-Motor Synchronization Experiments

Hyung-Suk Kim, Blair Kaneshiro, Jonathan Berger


Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, U.S.A.

This paper describes Tap-It, an iOS application for sensori-motor synchronization (SMS)
experiments. Tap-It plays an audio file while simultaneously collecting time-locked tapped
responses to the audio. The main features of Tap-It compared to desktop-based SMS
apparatuses are mobility, high-precision timing, a touchscreen interface, and online
distribution. Tap-It records both the time stamp of each tap from the touchscreen and the
sound of the tapping, recorded from the microphone of the device. We provide an
overview of the use of the application, from setting up an experiment to collecting and
analyzing the output data. We analyze the latencies of both types of output data and assess
the errors of each. We also discuss implications of the application for mobile devices. The
application is available free of charge through the Apple App Store, and the source code is
also readily available.
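As a generic illustration of the analysis such tap data supports (not code from the Tap-It source, which is available from the authors), collected tap timestamps can be matched to the nearest stimulus onsets and summarized as asynchronies:

    import numpy as np

    def asynchronies(taps, onsets):
        # Signed tap-minus-beat asynchrony, matching each tap to its nearest beat (s).
        taps, onsets = np.asarray(taps), np.asarray(onsets)
        nearest = np.abs(taps[:, None] - onsets[None, :]).argmin(axis=1)
        return taps - onsets[nearest]

    # Hypothetical data: beats every 500 ms, slightly anticipatory taps.
    rng = np.random.default_rng(0)
    onsets = np.arange(0.0, 10.0, 0.5)
    taps = onsets + rng.normal(-0.03, 0.02, onsets.size)
    asyn = asynchronies(taps, onsets)
    print(f"mean asynchrony {asyn.mean() * 1000:.1f} ms, SD {asyn.std(ddof=1) * 1000:.1f} ms")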

Anti-phase synchronisation: Does error correction really occur?


Jacques Launay, Roger T. Dean, Freya Bailes
MARCS Institute, University of Western Sydney, Australia

There is a large body of evidence relating to the ways that people synchronise with sounds, and
perform error correction in order to do this. However, anti-phase movement is less well
investigated than in-phase. While it has previously been suggested that error correction while
moving in anti-phase may have similar mechanisms to moving in-phase, and may simply be a case
of shifting the response by a regular period, there is some evidence that suggests there could be
more substantial differences in the way that people engage in antiphase movement. In particular,
it is known that antiphase synchronisation tends to become difficult, and break down, at a
different stimulus interonset interval (IOI) from in-phase synchronisation. The current study uses
an anisochronic stimulus sequence to look at people's capacity to error-correct when performing
anti-phase synchronisation with a set of sounds. Participants were instructed to "tap between the
tones but try to maintain regularity". Although these potentially contradictory instructions did
not advise participants to perform any error correction on the basis of deviation in the stimuli,
results initially suggest that participants did perform error correction, tapping with shortened
intervals following a shorter stimulus interval, and lengthened intervals following a longer
stimulus interval. However, using cross-sectional time series analysis it was possible to look at
tapping data over a number of participants to demonstrate that the relationship between
stimulus and response was not such a simple one, and that the error correction response would
be better explained by participants trying to maintain a regular asynchrony with the stimulus.
Modelling confirmed that this strategy could better explain the data than error correction
performed in a manner more similar to that of in-phase tapping. The idea that antiphase
synchronisation is performed by attempting to maintain a regular asynchrony of half the stimulus
IOI is in keeping with findings that antiphase synchronisation becomes difficult at around double
the stimulus IOI that becomes difficult for in-phase synchronisation, and suggests that anti-phase
movement might not share the same error correction mechanisms as in-phase movement. This
may have more general implications for the way we understand temporal cognition, and
contributes towards debates regarding clock and oscillator models of timing.
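The two accounts contrasted above can be sketched as simple generative rules; this is a hedged toy simulation, not the paper's actual time-series models, and all parameter values are illustrative: an in-phase-style process that error-corrects toward the midpoints between tones, versus a process that simply maintains a fixed lag of half the mean IOI behind each tone.

    import numpy as np

    rng = np.random.default_rng(1)
    mean_ioi = 0.6
    onsets = np.cumsum(rng.normal(mean_ioi, 0.04, 60))   # anisochronous tone onsets (s)

    def correct_to_midpoints(onsets, alpha=0.5, noise=0.02):
        # In-phase-style linear phase correction, but toward midpoints between tones.
        mids = (onsets[:-1] + onsets[1:]) / 2
        taps = [mids[0]]
        for k in range(1, len(mids)):
            asyn = taps[-1] - mids[k - 1]        # error relative to the last midpoint
            taps.append(taps[-1] + (mids[k] - mids[k - 1]) - alpha * asyn
                        + rng.normal(0.0, noise))
        return np.array(taps)

    def maintain_half_ioi(onsets, noise=0.02):
        # Keep a regular asynchrony: tap half the mean IOI after each tone.
        lag = np.diff(onsets).mean() / 2
        return onsets[:-1] + lag + rng.normal(0.0, noise, len(onsets) - 1)

    # Both rules produce shorter inter-tap intervals after shorter stimulus intervals,
    # so distinguishing them requires modelling the tap-onset lag itself.
    for taps in (correct_to_midpoints(onsets), maintain_half_ioi(onsets)):
        print(np.corrcoef(np.diff(taps)[1:], np.diff(onsets)[1:-1])[0, 1])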

The Subjective Difficulty of Tapping to a Slow Beat

Rasmus Bååth,* Guy Madison#


*Lund University Cognitive Science, Lund University, Sweden
#Department of Psychology, Umeå University, Sweden

The current study investigates the slower limit of rhythm perception and participants'
subjective difficulty when tapping to a slow beat. Thirty participants were asked to tap to
metronome beats ranging in tempo from 600 ms to 3000 ms between each beat. After each
tapping trial the participants rated the difficulty of keeping the beat on a seven point scale
ranging from "very easy" to "very difficult". The participants generally used the whole rating
scale and as expected there was a strong significant correlation between the inter onset
interval (IOI) of the beats and rated difficulty (r=.89). The steepest increases in rated
difficulty were between IOIs 1200 to 1800 ms (M=1.6) and 1800 to 2400 ms (M=1.2), and
these were significantly larger than the increases between IOIs 600 to 1200 ms (M=.5) and
2400 to 3000 ms (M=0.9). This is in line with earlier reports on where tapping starts to feel
difficult and supports the hypothesis that there is a qualitative difference between tapping to
fast (IOI < 1200 ms) and slow (IOI > 2400 ms) tempi. A mixed model analysis showed that
tempo, tapping error and percentage of reactive responses all affected the participants' ratings
of difficulty. Of these, tempo was by far the most influential factor; still, participants were, to
some degree, sensitive to their own tapping errors, which then influenced their subsequent
difficulty rating.

Musicians and Non-musicians Adapting to Tempo Differences in Cooperative Tapping Tasks

Neta Spiro,* Tommi Himberg#


*Nordoff Robbins Music Therapy Centre, London, UK; #Finnish Centre of Excellence in
Interdisciplinary Music Research, Department of Music, University of Jyväskylä, Finland

A number of factors, including musical training, affect our entrainment to the musical pulse
and to each other. Personality traits seem to correlate with some musical behaviours but it is
not known whether this extends to entrainment. We investigate these effects in tapping tasks
where people entrain or resist entrainment, and observe the patterns of interaction, and
investigate whether these patterns or the tendency to entrain depend on musical training or
personality traits of the participants. 74 musicians and non-musicians were finger-tapping in
pairs under 3 conditions: solo, duet in the same tempo, and duet in different tempi.
Participants completed questionnaires about their musical experience, the Big Five
Inventory and the Interpersonal Reactivity Index. In duet tasks, entraining with the partner
was often a yes-no question: the pair either locked in sync or stayed apart. Participants did
not entrain in all same tempo trials, but often did so even in trials with maximum tempo
difference (33 BPM). In general, participants kept their own tempo better in the solo trials
than in the duet trials. Musicians were found to be more self-consistent than non-musicians.
No clear effects of personality were found, even though in the second half of the study
participants were paired together based on their personality scores. There was a
considerable variability in performance across participants and even for the same pair
across different conditions. This novel method of studying interpersonal interaction revealed
a variety of strategies to cope with the "chaos". It is hoped that further analyses of these
strategies and their links with psychological background factors will shed more light on
social and communicative aspects of music performance.


Difference in synchrony judgment accuracy of two pulses depending on musical experiences and its relation to the cochlear delays

Eriko Aiba,* Koji Kazai,* Toshie Matsui,# Minoru Tsuzaki,+ Noriko Nagata*
*Dept. of Human System Interaction, Kwansei Gakuin University, Japan;
#Dept. of Otorhinolaryngology - Head and neck surgery, Nara Medical University, Japan;
+Faculty of Music, Kyoto City University of Arts, Japan

Synchrony judgment is one of the most important abilities for musicians because just a few
milliseconds of onset asynchrony can result in a significant difference in musical expression.
However, even if all of the components physically begin exactly simultaneously, their
temporal relation might not be preserved at the cochlear level. The purpose of this study is
to investigate whether the cochlear delay significantly affects the synchrony judgment
accuracy and whether there are any differences in its effects depending on musical
experiences. A psychoacoustical experiment was performed to measure the synchrony
judgment accuracy for professional musicians and non-musicians. Two types of chirps and a
pulse were used as experimental stimuli to control the amount of cochlear delay. The
compensated delay chirp instantaneously increased its frequency to cancel out the cochlear
delay. The enhanced delay chirp had the reversed temporal relation of the compensated
delay chirp. In addition, a pulse without delay was used. The experimental task was to detect
a synchronous pair in a 2I2AFC procedure. As a result, synchrony judgment accuracy was
significantly higher for professional musicians than for non-musicians. For
professional musicians, there were significant differences among all three types of sounds.
However, for non-musicians, there was no significant difference between compensated
chirps and enhanced chirps. This result suggests that the auditory system of professional
musicians is more sensitive to changes in the temporal relations of frequency components,
such as cochlear delay, than that of non-musicians.

Speed Poster Session 24: Timber I Hall, 11:40-12:10


Instruments & Motion

A Motion Analysis Method for emotional performance on the snare drums

Masanobu Miura,* Yuki Mito#, and Hiroshi Kawakami#


* Dept. of Media Informatics, Ryukoku University, Japan;
# Dept. of Music, Nihon University, Japan

This study proposes a method for averaging several motions in order to analyze and
synthesize motions of musical performance. The averaged motion is expected to be useful
for obtaining the features of specific motions by visual observation alone. The targeted
motion here is snare drum performance with emotion. The method is named the "Motion-
Averaging-Method" (MAM). Motion data are recorded by a motion capture system for
performances by trained percussionists expressing each of five basic emotions or non-
emotion. Recorded motion data have some deviations due to variability in the position
and/or angle of each player during recording. Thus, the proposed method adjusts the
position and angle of the player in each recorded motion. The adjusted motion data are
expanded or contracted based on the impact times of the drumstick obtained from the
acoustic waveform of the recorded performance, and an averaged motion is then obtained
across the several adjusted motions. Quantitative features of the averaged motion are
extracted from stroke motions and from ratios of arm-motion parameters across emotions,
and features of motion are collected and compared across emotions. A subjective experiment was conducted to evaluate the
appropriateness of obtained features. Results showed the existence of motion related to a
2D emotional space. The results show that several motions are dependent on the 2D
emotional space and that emotional performance has several features of motion not related
to the musical sound. We found that professional percussionists represent emotion in the
motion of the performance in a manner dependent on the 2D space and independent of its acoustic signal.
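A sketch of the alignment-and-averaging step such a method requires, assuming each trial is a uniformly sampled 1-D marker coordinate with one detected drumstick impact time; this is an illustrative reconstruction, not the authors' implementation, and the full method would use one anchor per impact and all marker dimensions.

    import numpy as np

    def normalize_trial(x, impact_time, fs, n_samples=200):
        # Piecewise-linear time warp: trial start -> 0, impact -> 0.5, trial end -> 1,
        # then resample onto a fixed grid so trials can be averaged point by point.
        t = np.arange(len(x)) / fs
        warped = np.interp(t, [t[0], impact_time, t[-1]], [0.0, 0.5, 1.0])
        grid = np.linspace(0.0, 1.0, n_samples)
        return np.interp(grid, warped, x)

    def average_motion(trials, impact_times, fs):
        # trials: list of 1-D position arrays; impact_times: impact (s) per trial.
        return np.mean([normalize_trial(x, ti, fs)
                        for x, ti in zip(trials, impact_times)], axis=0)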


Embouchure-related muscular activity and accompanying skin movement for
the production of tone on the French horn

Takeshi Hirano,* Satoshi Obata,* Chie Ohsawa,* Kazutoshi Kudo,# Tatsuyuki Ohtsuki,# Hiroshi
Kinoshita*
*Graduate School of Medicine, Osaka University, Japan
#Graduate School of Arts and Sciences, The University of Tokyo, Japan

The present study investigated dynamics- and pitch-related activity of five selected facial
muscles (levator labii superioris, zygomaticus major, depressor anguli oris, depressor labii
inferioris, and risorius (RIS)) using surface electromyogram (EMG), and accompanying skin
movement using 3D motion capture system. Ten advanced French horn players produced 6-
sec long tones at 3 levels of dynamics (pp, mf, and ff) at 5 levels of pitch (Bb1, F3, F4, Bb4, and
F5). For each muscle, mean EMG and kinematics (marker-to-marker distance) were
computed for the pre-attack phase of 375 ms prior to the tone onset, and for the sustained
phase of 750 ms starting from 3 s after the tone onset. EMG data were normalized by the
data obtained from production of the sustained F5 (near maximum high pitch) tone at ff
dynamics. Multivariate analysis of variance on all EMG data revealed that activity was
greater at stronger dynamics and at higher pitches. The dynamics × pitch interaction effect was
non-significant. Pitch and dynamics did not influence the facial skin kinematics except for a
shortening of the marker-to-marker distance on RIS. No phase effect was observed for either EMG or
kinematic data. The findings suggest that proper pre-setting as well as continuously
maintaining the level of isometric contraction in the embouchure muscles is an essential
mechanism for the control of lip and oral cavity wall tension, by which production of
accurate pitch and dynamics is accomplished.


Effect of short-term piano practice on fine control of finger movements

Ayumi Nakamura*, Tatsushi Goda*, Hiroyoshi Miwa*, Noriko Nagata*, Shinichi Furuya#
*School of Science and Technology, Kwansei Gakuin University, Japan; #Institute for Music
Physiology and Musicians' Medicine, Hannover University of Music, Drama, and Media,
Germany

A number of cross-sectional studies that compared pianists and non-musicians have
demonstrated that extensive piano training elicits structural and functional changes in motor-
related brain regions, which enables fine control of finger movements. However, the causal
relationship between piano practice and hand motor function has been poorly understood. The
present longitudinal study aimed to assess the effect of daily piano practice in terms of speed,
accuracy, and independence of finger movements. Six adult participants with no history of piano
playing were asked to play a short tone sequence consisting of twelve strokes with the left hand
synchronized with a metronome (inter-keystroke interval = 500 ms) for fifty trials per day over
four successive days. MIDI information on each keypress was obtained from an electric piano.
Before and after the practice, pretest and posttest were carried out to assess several fundamental
hand motor functions. Following the practice, the participants exhibited a significant decrease in
temporal variability of keystrokes, indicating improvement of movement consistency. When they
were asked to play as fast and accurately as possible, the maximum rate of keystrokes also
increased after the practice, indicating enhancement of finger movement speed. Concerning the
untrained right hand, both accuracy and speed also improved following the left-hand practice,
which suggests a transfer effect of uni-manual practice on the contra-lateral hand. To evaluate
independence of finger movements, each finger performed a fastest-tapping task, which
required repetitive keystrokes by one finger as fast as possible while the remaining digits
kept the adjacent keys depressed. Results showed that each of the index, middle, ring, and little
fingers showed significant improvement in maximum movement rate following the practice,
indicating enhanced independent control of movement at the individual fingers. To further assess whether visual
feedback regarding temporal accuracy of keystrokes during the practice affects the training effect
on the hand motor functions, we asked another six non-musicians to perform the same task with
information on the variability of the inter-keystroke interval provided visually. Training-
dependent improvement of hand motor functions turned out not to be facilitated even with
accuracy feedback. Piano practice with a particular tone sequence at a certain tempo had
significant impacts on accuracy, speed, and independent control of finger movements. The
transfer effect on both the untrained hand and an untrained tone sequence implies the presence
of shared motor primitives in piano playing.
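The keystroke-timing measures described above reduce to simple statistics over MIDI note-on times; a minimal sketch with hypothetical onset values:

    import numpy as np

    note_on = np.array([0.000, 0.512, 0.996, 1.503, 2.011, 2.498])  # note-on times (s)
    iki = np.diff(note_on)                 # inter-keystroke intervals
    cv = iki.std(ddof=1) / iki.mean()      # temporal variability of keystrokes
    peak_rate = 1.0 / iki.min()            # fastest instantaneous stroke rate (Hz)
    print(f"CV = {cv:.3f}, peak rate = {peak_rate:.2f} strokes/s")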


Expert-novice difference in string clamping force in violin playing

Hiroshi Kinoshita,1 Satoshi Obata1, Takeshi Hirano1, Chie Ohsawa1, Taro Ito2
1 Biomechanics & Motor control lab, Graduate School of Medicine, Osaka University, Osaka,
Japan;
2 Department of Health and Sports Science, Mukogawa Women's University, Hyogo, Japan

Differences in the nature of the force for clamping the strings between expert (N = 8) and novice
(N = 8) violin players were investigated using a violin installed with a 3D force transducer,
together with the produced sound. These players performed repetitive open A- and D-tone (force
measurement) production using the ring finger at tempi of 1, 2, 4, and 8 Hz at mezzo-forte. At
2- and 8-Hz tempi, the same task was performed by the other fingers. At 1 and 2 Hz, the
profiles were characterized by an initial attack force, followed by a leveled force during the
finger contact period. The peak attack force for the experts exceeded 5 N, which was
significantly larger than the approximately 3 N for the novices. At 4 and 8 Hz, only an attack
force was observed, with a lower peak than at the slower tempi and no group difference, but
attack-to-attack variability of force was significantly larger for the novices than for the experts. Both the
experts and novices had a lower attack force by the ring and little fingers than the other two
fingers, but the finger difference was much less for the experts. The findings suggest that
expert violinists use a strategy of trade-off between physiological cost of string clamping
force and production of high-quality sound. High consistency of attack force action is also an
important element of expert performance.


Expert-novice difference in string clamping force when performing violin
vibrato

Satoshi Obata, Takeshi Hirano, Chie Ohsawa, and Hiroshi Kinoshita


Biomechanics & Motor control lab, Graduate School of Medicine, Osaka University, Osaka, Japan

The violin vibrato is considered a complex playing technique for novice players. Information
on the left-finger force during vibrato of novices, as compared with that of experts, may help
in unveiling hidden biomechanical problems of their technique. The aim of this study was to
investigate the novice-expert difference in the nature of shaking and pressing forces during
sustained vibrato tone production. The subjects were 10 novice and 10 expert players. A
violin installed with a 3D force transducer was used for the measurement of fingerboard
reaction force in three dimensions while repetitively performing successive A (open) and D (force
measurement) vibrato tones. The target rate of the vibrato
was 4.5 Hz, and the target level of loudness was between 75 and 77 dB (mf). The index,
middle, ring, and little fingers were used to test the finger effect on generated force. The
average, amplitude of oscillation, and peak-to-peak time of the shaking and pressing forces,
and their intra-subject variability were computed for each trial. It was found that the novices
had significantly smaller average pressing force and amplitude of the shaking force than the
experts. The intra-subject variability of shaking-force amplitude and peak-to-peak time was
significantly larger for the novices. These patterns were common to all four fingers. It
was concluded that the mechanism of string-clamping force during vibrato for the novices
was different from that of the experts. The findings suggest that the parallel and synergistic production
of sufficient pressing and shaking forces is one element of successful vibrato.
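A sketch of how the reported oscillation measures could be extracted from a sampled pressing-force signal; scipy.signal.find_peaks is used for peak picking, the minimum peak distance is an illustrative choice, and the crude peak-trough pairing is adequate only for a near-sinusoidal signal.

    import numpy as np
    from scipy.signal import find_peaks

    def vibrato_force_features(force, fs):
        # Mean force, mean peak-to-trough amplitude, and peak-to-peak times (s).
        peaks, _ = find_peaks(force, distance=int(0.1 * fs))     # oscillation maxima
        troughs, _ = find_peaks(-force, distance=int(0.1 * fs))  # oscillation minima
        n = min(len(peaks), len(troughs))
        amplitude = np.mean(force[peaks[:n]] - force[troughs[:n]])
        return force.mean(), amplitude, np.diff(peaks) / fs

    # Hypothetical 4.5 Hz oscillation riding on a 2 N pressing force.
    fs = 1000.0
    t = np.arange(0.0, 3.0, 1.0 / fs)
    force = 2.0 + 0.5 * np.sin(2 * np.pi * 4.5 * t)
    mean_force, amp, p2p_times = vibrato_force_features(force, fs)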


The role of auditory and tactile modalities in violin quality evaluation

Indiana Wollman,*# Claudia Fritz,* Stephen McAdams #


*Lutherie-Acoustique-Musique, Institut Jean le Rond d'Alembert, UMR 7190, Université Pierre et
Marie Curie-CNRS, France; #CIRMMT, Schulich School of Music, McGill University, Canada

The long-term goal of this study is to investigate the differences that can be perceived in the
"feel" of violins across a range of instruments. Indeed, many violinists consider that not only
the sound but also the feel is really important, and it is not clear what is responsible for
the latter. This study explores the role of auditory and tactile modalities involved in violin
playing and aims to construct a hierarchy of evaluation criteria that are perceptually relevant
to violinists. Twenty professional violinists participated in a perceptual experiment
employing a blind violin evaluation task under different conditions. Participants were asked
to evaluate a set of violins either: i) by holding the instruments, without producing sound; ii)
under normal playing conditions, iii) with auditory masking or iv) with vibrotactile masking.
Under each playing condition, the violinists evaluated the violins according to criteria related
to violin playing and sound characteristics and rated and ranked the overall quality of the
violins. Results confirm that violin preference is highly individual. Intra-subject analyses
reveal a consistent trend in violin rankings over the three playing conditions though more
similarities are observed between the ratings under the normal playing and tactile masking
conditions than for the auditory masking conditions. The lack of auditory feedback thus has
greater impact on violinists' perceptual evaluation. However, ratings based only on the
tactile modality preserve overall rating trends - the most and least preferred violins are in
particular weakly dependent on sensory masking - suggesting the existence of "tactile-only"
cues.

Speed Poster Session 25: Timber II Hall, 11:40-12:10


Musical experience and communication

Songs, words and music videos: Adolescent girls' responses

Nicara Govindsamy, Cynthia J. Patel


Discipline of Psychology, University of KwaZulu-Natal, South Africa

Music plays a significant role in teenagers' lives: they use music to regulate their emotions
and girls have more emotional responses compared to boys. Exposure to music is generally
in audio or music video form. Over the years song lyrics have become more explicit in
reference to drugs, sex and violence. Fifty eight teenage girls emotional responses to three
genres of music (RnB/Rap, Rock, Pop) in different formats: audio, music video and lyrics
were measured. The Rap song had sexual connotations and objectified women, the Rock was
about determination and inspiration while Pop was about falling in love. A semantic
differential scale comprising bipolar adjectives (describing a range of emotions) was used to
measure emotional response. Fifteen (15) word pairs were selected for the final scale.
Respondents were required to choose from a continuum (between each word pair) the
12th ICMPC - 8th ESCOM Joint Conference, Aristotle University of Thessaloniki, 23-28 July 2012

105

extent to which they experienced the emotion after listening to the song, watching the video
and reading the lyrics. High scores indicated negative emotions. Rap lyrics elicited the most
negative response followed by the Rock lyrics. The Pop genre had the lowest scores. The
sample also reacted negatively to the Rap video. Overall their responses to the different
songs were about the same, but responses to the video content and lyrics were markedly
different with most negative responses to Rap. Since young girls tend to use music to manage
their emotions, these findings are a cause for concern. Further research needs to be done
linking types of music and ways of coping.


Specialist adolescent musicians' role models: Whom do they admire and why?

Antonia Ivaldi
Department of Psychology, Aberystwyth University, Wales, UK

Previous research into typical adolescents' musical role models has shown that young people
are more likely to identify a celebrity figure as their role model due to their image and
perceived fame, than because of their perceived musical ability. This study builds on this
previous work by looking at the role models of young talented musicians with the aim of
exploring who they admire as a musician and the reasons why. It is anticipated that the
adolescents will identify more elite performers and teachers (i.e., non-celebrities) as their
role models. 107 young musicians, aged 13-19, took part in a questionnaire study, and were
drawn from two specialist musical environments: Junior conservatoire students (n = 59) and
county level students (n = 48, drawn from two local music services). The adolescents were
asked questions about who they admired as a musician (e.g., someone famous, a teacher) and
the reasons why (e.g., they are talented, they work hard). Adolescents also rated how much they
wanted to become like their role model (aspirations), and how much they thought they could
become like their role model (attainability). Results showed that both famous and non-
famous figures were identified, with more elite performers and teachers being chosen
compared to previous research, thus indicating a specialist knowledge and level of exposure
to relevant musical figures. Factor analysis generated three loadings (image, higher
achievement, dedication) for the reasons for admiring the role models. The implications for
the adolescents identifying more relevant figures for their attainability and aspiration beliefs
are discussed.


Typicality and its influence on adolescents' musical appreciation

Caroline Cohrdes, Reinhard Kopiez


Hanover University of Music, Drama and Media, Germany

Adolescents evaluate music with regard to their social identity (North & Hargreaves, 1999).
An effective strategy to achieve social identity is the individual's identification with
subgroups (Hornsey & Jetten, 2004). Unconventional musical substyles provide adolescents
the opportunity to reach a level of optimal distinctiveness (Abrams, 2009). A musician's
personality and lifestyle are communicated by images (Borgstedt, 2008), and unconventional
images further adolescents' positive musical judgements (Cohrdes, Lehmann & Kopiez,
2012). Hence, both components become important when indicating a specific value of
typicality. This study aims to determine indicators defining typicality on a continuous scale
with conventionality and unconventionality as bipolar endings. First, items from the
perspective of adolescents were collected. Subsequently, N = 232 adolescents (M = 15.51, SD
= 1.132) rated different stimuli in an online survey. To assess essential items clarifying the
two dimensions of typicality (music and image), we used methods of Classical Test Theory
(CTT) and Item Response Theory (IRT). 12 selective items concerning the typicality of music
and 6 concerning the musician's image were detected. By means of these scales it is possible
to categorize stimuli and predict musical judgments of adolescents with the claim of optimal
distinctiveness. As a main result, we present the typicality of a musician's image,
standardized in terms of an iconographic scale.

Positive Psychological and Interpersonal Effects of Karaoke


Junko Matsumoto,1 Shiori Aoki,2 Manami Watanabe3

1Nagano College of Nursing, Japan; 2Nagoya University Hospital, Japan; 3Seirei Mikatahara

General Hospital, Japan

This report presents the findings of investigations of college students' participation in
karaoke, their subjective moods induced by singing karaoke, and the positive effects
associated with participating in karaoke, but not actively singing. In Study 1, 186 college
students completed a questionnaire about their participation in karaoke. Most respondents
indicated that they go to karaoke with several friends occasionally for amusement or as a
pastime and feel comfortably tired afterwards. These findings suggest that singing karaoke has
positive psychological effects on mood. In Study 2, 185 college students completed a
questionnaire. Respondents were asked to answer the questions about their usual
participation in karaoke and their participation in karaoke when they did not actively sing.
When they participated in karaoke without actively singing, the aim was primarily to be
sociable with not only their friends, but also acquaintances or superiors. With regard to their
mood following karaoke, respondents reported feeling more depressed, anxious, and tired
and less refreshed when not actively singing as compared to when they actively sing. These
results suggest that when college students participate in karaoke without actively singing,
they experience negative psychological effects. However, there seem to be positive
interpersonal effects of maintaining social relations with others when not actively singing.
Consequently there would be beneficial effects from both active and passive participation in
karaoke.


Hips don't lie: Multi-dimensional ratings of opposite-sex dancers' perceived
attractiveness

Geoff Luck, Suvi Saarikallio, Marc Thompson, Birgitta Burger, Petri Toiviainen
Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music,
University of Jyväskylä, Finland

Previous work has shown that a number of factors can affect perceived attractiveness of
opposite-sex dancers. For women watching men, body symmetry, perceived strength, vigor,
skillfulness, and agility of movement, as well as greater variability and amplitude of the neck
and trunk, are positively related to perceived attractiveness. For men watching women, body
symmetry is also important, and femininity/masculinity of movement likely also plays a role
for both sexes. Our aim here was to directly compare characteristics of attractive opposite-
sex dancers under the same conditions. Sixty-two heterosexual adult participants (mean age
= 24.68 years, 34 females) were presented with 48 short (30 s) audio-visual point-light
animations of adults dancing to music. Stimuli were comprised of eight females and eight
males, each dancing to three songs representative of Techno, Pop, and Latin genres. For each
stimulus, participants rated perceived femininity/masculinity as appropriate, sensuality,
sexiness, mood, and interestingness of the dancer. Seven kinematic and kinetic features
(downforce, hip wiggle, shoulder vs. hip angle, hip-knee phase, shoulder-hip ratio, hip-body
ratio, and body symmetry) were computationally extracted from the stimuli. Results
indicated that, for men watching women, hip-knee phase angle was positively related to
ratings of perceived interestingness and mood, and hip-body ratio was positively related to
ratings of perceived sensuality. For women watching men, downforce was positively related
to ratings of perceived sensuality. Our results partially support previous work, and highlight
some similarities and differences between male and female perceptions of attractiveness of
opposite-sex dancers.

Paper Session 10: Grand Pietra Hall, 14:30-15:30


Listener perspectives

How was it for you? Obtaining artist-directed feedback from audiences at live
musical events
John Sloboda, Melissa Dobson
Guildhall School of Music & Drama, UK

Musicians generally have rather limited means of obtaining direct and detailed feedback from
their live audiences. This is often limited to applause and the feel of the room. Although many
research studies collect more detailed evaluative responses from music listeners, this is often
done without reference to the specific concerns or interests of the musicians involved. It is rare
for the musicians themselves to be directly involved in the formulation of the research questions,
or the review of the data obtained. This research project aims to develop and pilot a means for
audiences to provide responses to questions which are of direct interest and importance to the
musicians involved in live performance events. Specifically we wish to evaluate whether such
processes enhance (a) audience engagement, and (b) professional and artistic development of the
musicians involved. The research team has worked with several artistic teams in a process which
involves (a) discovering artistically relevant questions which can be validly posed to audience
members, (b) collaboratively devising appropriate means of collecting this data (e.g.
questionnaire, post-performance discussion), (c) jointly reviewing the outcomes of the event, and
the audience data, (d) obtaining reflective feedback from those involved regarding the value of
being involved in the exercise. We will illustrate the process with specific data from one or more
live musical events which have taken place between July 2011 and May 2012. This includes the
world premiere of a composition whose inspiration was a traditional day of celebration in the
composer's home town, characterised by distinctive rituals involving folk-music and dance. The
composer was interested to know if audience knowledge of the programmatic background to the
composition (provided by a programme note) was a significant factor in audience appreciation of
the work. In this case, unexpected emergent features of the research experience yielded
unanticipated benefits, with the composer perceiving heightened audience attention to the piece
being researched, and experiencing consequent affirmation. Involvement of musicians in the
design and implementation of research on audience response is a significant means of enhancing
mutual understanding between musicians and audiences and of making research more directly
relevant to practitioner concerns. Issues for discussion include the appropriate means of ensuring
sufficient research rigour without distorting the artistic process.


Everyday Listening Experiences

Amanda E. Krause,1 Adrian C. North2


1Applied Psychology, Heriot-Watt University, United Kingdom
2School of Psychology and Speech Pathology, Curtin University, Australia

Utilizing the Experience Sampling Method, this investigation aimed to update our
understanding of everyday listening in situ. Self-reports regarding where, when, and how
music was experienced, as well as ratings concerning affect before and after exposure to
music and the perceived effects of what was heard were gathered over one week.
Responding to two text messages sent at random times between 8:00 and 23:00 daily, 370
participants completed online responses concerning their experience with any music heard
within a two-hour period prior to receiving each text message. Results from the 177
participants who completed at least 12 of 14 entries demonstrated that music was heard on
46.31% of occasions overall. While heard throughout the day and more often in private than
public spaces, detailed analyses revealed significant patterns based on time, location, device,
selection method, mood, ratings of choice and attention, and the perceived effects of what
was heard. Most importantly, the results suggest that it is the level of control that a person
has over the auditory situation which greatly interacts with the other variables to influence
how he or she will hear the music as well as how it is perceived. In contrast to North,
Hargreaves, and Hargreaves' (2004) proposition that the value of music has decreased in
light of technological advancement, the current findings imply that with the greater control
technology affords, the value has instead increased, when we consider individuals as actively
consuming (thereby using) music rather than simply as passive listeners.

Paper Session 11: Crystal Hall, 14:30-15:30


Communication & musical preference in childhood

Playsongs and lullabies: features of emotional communication and developing mother-infant attachment

Alison Liew Creighton,1 Michael Atherton,2 Christine Kitamura2


1College of Arts/MARCS institute, University of Western Sydney, Australia
2University of Western Sydney, Australia

This paper presents findings from my current research which examines the features of
mother-infant singing as emotional communication. It explores (1) the mother's subjective
experience of the live use of playsongs and lullabies, (2) how the subjective experience
relates to attachment-specific mental constructs, (3) the quality of interaction during the live
use of playsongs and lullabies and (4) the musical and behavioral features of optimal
emotional communication.


Effects of Structural and Personal Variables on Children's Development of
Music Preference

Michael Schurig, Veronika Busch, and Julika Strau


Department of Musicology and Music Education, University of Bremen, Germany

Hargreaves' (1982) hypothesis of an age-related decline in children's preference for
unfamiliar music genres ("open-earedness") forms the theoretical background of our
longitudinal study with four points of measurement between grade one and four. Primary
school children answered a sound questionnaire with 8 music examples on a 5-point iconic
preference scale. Structural and personal data was collected using standardized
questionnaires, and complementary interviews were conducted. We operationalized open-
earedness as a latent construct with classic and ethnic/avant-garde music preference
(Louven, 2011) as distinguishable factors through exploratory factor analyses. The aim is to
identify predictor variables (e.g. gender, personality, music experience, migration
background, and socio-economic status) using structural equation modelling. This way we
tried to assess a measurement model to be used for further investigation of our longitudinal
data. So far, analyses of variance support the expected open-earedness for preference ratings
at t1 (n1=617), but gender differences already emerge. Analyses of t2 (n2=1142) disclose the
beginning of the decline in open-earedness, with t3 (n3=1132) further supporting the trend.


Paper Session 12: Dock Six Hall, 14:30-15:30


Rhythm analysis & perception

Perception of Rhythmic Similarity in Reich's Clapping Music: Factors and Models

Daniel Cameron,1 Keith Potter,2 Geraint Wiggins,3 Marcus Pearce3


1Brain and Mind Institute, University of Western Ontario, Canada
2Dept. of Music, Goldsmiths, University of London, UK
3Centre for Digital Music, Queen Mary, University of London, UK

Rhythm processing is a critical component of music perception and cognition. Investigating
the influences on the perception of similarity is a useful way to explore the processing
underlying perceptual phenomena. In this study, we investigate the perception
of rhythmic similarity using rhythmic figures from Steve Reich's Clapping Music, in two
experiments. Musicians and non-musicians rated the similarity of rhythm pairs when
rhythms were heard either in context within the composition or in isolation, in two
performance versions (MIDI or performance recording), and in different orders of
presentation. These factors (musical training, expressive performance, musical context, and
order of presentation) represent influences on the rhythmic information used in music
cognition. Furthermore, computational models representing theoretically distinct
perspectives on rhythmic information processing are compared in their predictions of
perceived rhythmic similarity. Differences in perceived similarity reflect differences in
information processing. Similarity ratings were analyzed for the effects and interactions of
factors. Results suggest that musical training provides an advantage in processing rhythmic
information, that both expressive performance and Clapping Music's compositional process
of rhythmic transformation provide additional information used by listeners to distinguish
rhythms, and that the perceived similarity of rhythms depends on presentation order. These
results are interpreted from, and consistent with, a general perspective of information
theoretic processing. The predictions of all models correlate with participants' ratings,
shedding further light on the cognitive mechanisms involved in processing and comparing
rhythms.


The Pairwise Variability Index as a Tool in Musical Rhythm Analysis

Godfried T. Toussaint
Faculty of Science, New York University Abu Dhabi, United Arab Emirates

The normalized pairwise variability index (nPVI) is a measure of the average variation
(contrast) of durations that are obtained from successive pairs of events. It was originally
conceived for measuring the rhythmic differences between languages on the basis of vowel
length. More recently, it has also been employed successfully to compare rhythm in speech
and music. London, J. & Jones, K. (2011) have suggested that the nPVI measure could become
a useful general tool for musical rhythm analysis. One goal of this study is to determine how
well the nPVI models various dimensions of musical rhythmic complexity, ranging from
human performance and perceptual complexities to musical notions of syncopation, and
mathematical measures of syncopation and rhythm complexity. A second goal is to
determine whether the nPVI measure is capable of discriminating between short, symbolic,
musical rhythms across meters, genres, and cultures. It is shown that the nPVI measure
suffers from severe shortcomings in the context of short symbolic rhythmic patterns such as
African timelines. Nevertheless, comparisons with previous experimental results reveal that
for some data the nPVI measure correlates mildly, but significantly, with performance
complexity. It is also able to discriminate between certain distinctive families of rhythms.
However, no significant differences were found between binary and ternary musical
rhythms, mirroring the findings by Patel, A. D. & Daniele, J. R. (2003) for language.
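For reference, the nPVI of a duration sequence d_1, ..., d_m is commonly defined as nPVI = 100/(m-1) * sum over k of |d_k - d_{k+1}| / ((d_k + d_{k+1})/2), so an isochronous sequence scores 0 and the index approaches 200 as successive durations become maximally contrasted. A direct implementation:

    def npvi(durations):
        # Normalized pairwise variability index of a sequence of durations.
        d = list(durations)
        if len(d) < 2:
            raise ValueError("nPVI needs at least two durations")
        contrast = sum(abs(a - b) / ((a + b) / 2.0) for a, b in zip(d[:-1], d[1:]))
        return 100.0 * contrast / (len(d) - 1)

    print(npvi([1, 1, 1, 1]))   # 0.0 for an isochronous rhythm
    print(npvi([1, 2, 1, 2]))   # about 66.7 for alternating long-short durations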

Paper Session 13: Timber I Hall, 14:30-15:30


Visual cues in performance

Audiovisual integration in music performer recognition: Do you need to see me to hear me?

Helen Mitchell,1 Raymond MacDonald2


1Sydney Conservatorium of Music, University of Sydney, Australia
2Department of Psychology, Glasgow Caledonian University, UK

Listeners take for granted not only their capacity to distinguish between musical
instruments, but also their ability to discriminate between performers playing the same
instrument by their sound alone. Sound perception is usually considered a purely auditory
process but in speaker recognition, auditory and visual information are integrated, as each
modality presents the same information, but in a different way. Listeners combine these
cross-modal perceptions to recognise the person speaking and can reliably match talking
faces to speaking voices. This phenomenon has profound implications for music performer
recognition, if multimodal information is combined for listeners to perceive and identify an
individual performer. Saxophonists (n=5) performed three jazz standards for an audio and
video recording and we explored the integration of cross-modal sensory experiences (audio
and visual) in saxophonist identification. Participants either watched a silent video clip of a
saxophonist playing and matched it to an audio clip of the same performer, or heard an audio
clip of a saxophonist and matched it to a silent video clip. Listener/viewers reliably identified
their target saxophonists, and were able to use the information about a performer in one
modality and match it to the same performer in another modality. Participants were more
likely to recognise performers by ear after they had watched their performance. These
results will be discussed with reference to musical identities and sound recognition and will
provide insights into the way auditory experts, such as musicians, identify individual
musicians' sound.


"The types of ViPES": A typology of musicians stage entrance behavior

Friedrich Platz, Reinhard Kopiez


Hanover University of Music, Drama and Media, Germany

Music performance can best be described as an audio-visual communicative setting. This
setting is based on the mutual exchange of music-related meaningful information between
performer and audience. From the perspective of the musical communication approach, there is
a congruency between musically structure-related features and non-verbal forms of visual
communication. Consequently, bodily movements have often been reduced to a supportive
function in musical communication processes. In contrast, in our meta-analysis of ratings of
audio-visual music presentations we suggest that the audience's appreciation is strongly
influenced by visual components, which can be independent of the musical structure. As a
consequence, we emphasize the approach of persuasion instead of communication. The
theoretical framework comes from dual-process theories, in which different kinds of
information processing depend on the audience's attitude. Therefore, visual components in
music performance could be better described as underlying functions of musical persuasion
affecting the audience's attitude. From this perspective, the performer's stage entrance as the
first visible action for the audience can be regarded as the starting point of musical
persuasion. Our aims are two-fold: First, we will reveal a typology of performers' persuasive
stage entrance behavior. Second, we would like to reveal the fundamental components
underlying the audience's construction of performer evaluations. We will present a first
sketch of a typology of musicians' stage entrance behavior. Furthermore, we will offer a
latent-structured framework of the audience's attitude mechanism. Based on our performer
typology, we will obtain a deeper understanding of the audience's reactions and attitudes
towards varieties of stage performances.

Paper Session 14: Timber II Hall, 14:30-15:30


Tonal Cognition

Analyzing Melodic Similarity Judgements in Flamenco a Cappella Singing

Emilia Gómez,1 Catherine Guastavino,2 Francisco Gómez,3 Jordi Bonada1

1Music Technology Group, Universitat Pompeu Fabra, Spain


2School of Information Studies, McGill University, Canada

3Applied Mathematics Department, School of Computer Science, Polytechnic University of


Madrid, Spain

This work has three main goals: first, to study the perception of melodic similarity in
flamenco singing with both experts and novices; second, to contrast judgments for synthetic
and recorded melodies; third, to evaluate musicological distances against human similarity
judgments (Mora et al. 2010). We selected the melodic exposition from 12 recordings of the
most representative singers in a particular style, martinete. Twenty-seven musicians
(including three flamenco experts) were asked to listen to the melodies and sort them into
categories based on perceived similarity. In one session, they sorted synthetic melodies
derived from the recordings; in the other session, they sorted recorded melodies. They
described their strategies in an open questionnaire after each session. We observed
significant differences between the criteria used by non-expert musicians (pitch range,
melodic contour, note duration, rests, vibrato and ornamentations) and the ones used by
flamenco experts (prototypical structure of the style, ornamentations and reductions). We
also observed significant correlations between judgements from non-expert musicians and
flamenco experts, between judgements for synthetic and recorded melodies, and between
musicological distances and human judgements. We also observed that the agreement
amongst non-expert musicians was significantly lower than amongst flamenco experts. This
study corroborates that humans have different strategies for comparing synthetic and real
melodies, although their judgements are correlated. Our findings suggest that computational
models should incorporate features other than energy and pitch when comparing two
flamenco performances. Furthermore, judgments from flamenco experts also differed from
those of novice listeners due to the experts' implicit knowledge. Finally, novice listeners, even
with strong musical training, did not substantially agree on their ratings of these unfamiliar melodies.

Temporal multi-scale considerations in the modeling of tonal cognition from


continuous rating experiments

Agustín Martorell,1 Petri Toiviainen,2 Emilia Gómez1


1Music Technology Group, Universitat Pompeu Fabra, Spain
2Department of Music, University of Jyväskylä, Finland

Modeling tonal induction dynamics from naturalistic music stimuli usually involves slide-
windowing the stimuli in analysis frames or leaky memory processing. In both cases, the
appropriate selection of the time-scale or decay constant is critical, although rarely discussed
in a systematic way. This study shows the qualitative and quantitative impact that the time-scale
has on the evaluation of a simple tonal induction model, when the concurrent probe-tone
method is used to capture continuous ratings of the perceived relative stability of pitch-classes.
The music stimulus is slide-windowed at many time-scales, ranging from fractions of a second
to the whole musical piece. Each frame is analysed to obtain a pitch-class profile and, for
each temporal scale, the time series is compared with the empirical annotations. Two
commonly used frame-to-frame metrics are tested: a) Correlation between the 12-D vectors
from ratings and model. b) Correlation between the 24 key activation strengths, obtained by
correlation of the 12-D vectors with Krumhansl and Kessler's key profiles. We discuss the
metric artifacts introduced by the second representation, and we show that the best
performing time-scale, minimizing the root mean-square of the frame-to-frame distances
along time, is far longer than short-time memory conventions. We propose a temporal multi-
scale analysis method as an interactive tool for exploring the effect of time-scale and
different multidimensional representations in tonal cognition modeling.
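
To make the second metric concrete, the sketch below (Python with NumPy; the function name is a hypothetical convenience, not taken from the paper) computes the 24 key activation strengths by correlating a 12-D pitch-class profile with rotations of the Krumhansl and Kessler probe-tone profiles:

    import numpy as np

    # Krumhansl & Kessler (1982) probe-tone profiles for C major and C minor
    KK_MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                         2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
    KK_MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                         2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

    def key_strengths(pc_profile):
        # Correlate a 12-D pitch-class profile with all 24 key profiles.
        # Returns a length-24 vector: C..B major, then C..B minor.
        pc = np.asarray(pc_profile, dtype=float)
        strengths = []
        for template in (KK_MAJOR, KK_MINOR):
            for tonic in range(12):
                rotated = np.roll(template, tonic)  # key whose tonic is pitch class 'tonic'
                strengths.append(np.corrcoef(pc, rotated)[0, 1])
        return np.array(strengths)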

Speed Poster Session 26: Grand Pietra Hall, 15:30-16:00


Identity & personality

Individual differences in inattentional deafness with music: An exploratory


study

Sabrina Koreimann, Oliver Vitouch


Dept. of Psychology, University of Klagenfurt, Austria
In contrast to inattentional blindness, there is little research on inattentional deafness (ID)
phenomena, especially in the musical realm. By definition, ID in music describes the inability
to consciously perceive an unexpected musical stimulus, due to the subject's attending to a
certain facet of the piece. We here try to reveal candidate factors for explaining individual
differences in ID with music. To examine the possible roles of field dependence (visual and
acoustic), concentration performance, and conscientiousness in ID, participants initially
listened to the first 1′50″ of Strauss' Thus Spake Zarathustra. Subjects had the task of
counting the number of tympani beats. An accompanying e-guitar interlude (20″) served as
the unexpected stimulus. After listening, the participants were asked, in a sequential
procedure of questions, whether they had noticed the e-guitar. Visual field dependence was
assessed with the Embedded Figures Test (EFT), concentration performance with an established
concentration test (d2), and conscientiousness with the NEO-FFI. A pilot measure of acoustic
field dependence was developed using the first 1′ of the C major fugue from Bach's Well-
Tempered Clavier. The participants' task was to identify each onset of the fugue's theme by
mouse-click. While results show no interaction between ID performance and acoustic field
dependence, a significant interaction with visual field dependence was demonstrated.
Participants who missed the e-guitar tended to score higher on concentration (p = .104) and
conscientiousness (p = .052) than subjects who perceived the unexpected stimulus.


Personality of Musicians: Age, Gender, and Instrumental Group Differences

Blanka Bogunović
Faculty of Music, University of Arts, Serbia

The idiosyncratic complexity of cognitive abilities, motivation and personality structure
gives a personal mark to the processes of perception, cognition and emotional arousal
which take place during different musical activities, such as listening, performing, creating
and learning music. The intention of this study was to gain new knowledge by using a newer
theoretical approach and an instrument for personality assessment. Namely, to investigate
personality structure of musicians and to confirm specific personality profiles concerning
age, gender and musical direction within the framework of the Big Five personality model
(NEO PI-R Inventory). The sample consisted of 366 musicians of different age groups:
secondary music school pupils, Faculty of Music students and professionals. Findings
(one-way ANOVA) pointed out interesting differences across age groups that have to do with
the developmental and/or professional phase, as well as with experience in dealing with music.
Namely, the adolescent group had significantly higher scores on Neuroticism and Extraversion,
students on Openness, and adult musicians on Agreeableness and Conscientiousness. At the
level of facets, age-group attributes were confirmed: e.g., students scored higher on Fantasy,
Aesthetics, Feelings and Modesty, and professional musicians on Values and Dutifulness. It
could be concluded that an interrelated effect of developmental phase on the one hand, and
of long-term educational and professional engagement in musical activities on the other,
exists and is reflected in the personality profiles of musicians. This means that a specific way
of life and experiences influence the forming of structural layers of musicians' individuality,
and that this certainly leaves an imprint on certain patterns of music perception and
cognition.

Personality Conditions of Pianists' Achievements

Malgorzata Chmurzynska
Department of Music Psychology, Chopin University of Music

Researchers indicate that personality is a significant factor determining the achievements
both of students during the music education process and of professional musicians in their
musical careers. The role of personality is considered more significant in the later stages of
music education, when the level of musical ability no longer differentiates between the
students receiving musical instruction. The personality traits particularly characteristic of
musicians include the tendency to introversion (which makes them practice too much in
isolation), emotional instability, sensitivity, perseverance, and openness (Kemp, 1996;
Manturzewska, 1974). A higher level of self-efficacy (McPherson & McCormick, 2006) and a
lower level of neuroticism (Manturzewska, 1974) have been identified among music students
who receive higher marks at school. However, we are still seeking an answer to the question: which of the
personality traits are conducive to a high level of musical performance? The aim of the present
study was to examine the personality differences between the high achievers and average
achievers among the pianists. The variables of gender and nationality were taken into account.
The subjects were participants of the 16th International Fryderyk Chopin Piano Competition in
Warsaw as well as other piano competitions (high achievers) and ordinary piano students
(average achievers). A control group of non-musicians was used for comparison, including
the normalization samples of the employed tests. The respondents completed the NEO Five-
Factor Inventory (Costa & McCrae, 1992) and the General Self-Efficacy Scale (Schwarzer, 1998).
Moreover, the Formal Characteristics of Behavior-Temperament Inventory (Zawadzki &
Strelau, 1998) was used to measure the temperamental traits specified by the Regulative Theory
of Temperament (Strelau, 1996) which include briskness, perseverance, sensory sensitivity,
emotional reactivity, endurance, and activity. The results are in the process of being analyzed. So
far, the analyses of the NEO-FFI and GSES results have shown that the most distinctive aspects
of pianists' personalities are a high level of Openness, Conscientiousness (especially among
females) and a very high level of self-efficacy in comparison to the control group. The study
has revealed differences between the pianists and non-musicians. So far, hardly any
differences have been found between the high achievers and average achievers among the
pianists. Possibly the analysis of the temperamental traits will bring new facts about
associations between personality and high-level musical performance.

Attitudes towards music piracy: The impact of positive anti-piracy messages
and contribution of personality
Steven C. Brown
Psychology and Allied Health Sciences, Glasgow Caledonian University, Scotland

Conventional anti-piracy strategies have been largely ineffective, with pirates adapting
successfully to legal and technological changes. The present research aims to address the
two principal areas of research (predictive factors and deterrents) in a novel way, with
personality considered as a potential predictive factor and positive anti-piracy messages
proposed as a potentially effective deterrent. 261 participants (45.6% male) with a mean age
of 26.3 completed an online questionnaire, outlining their music consumption preferences
and completing the 60-item version of the HEXACO-PI-R (Lee & Ashton, 2004) before being
allocated to one of four conditions: "legal sales of music encourage future live
performances", "legal sales of music allow fans greater access to exclusive content", "legal
sales of music will incorporate charitable donations", and a control. Participants' attitudes
towards music piracy were then measured using an original construct (AMP-12). Condition
had no effect on piracy attitudes, whereas personality was a significant predictor, with participants
scoring higher on the AMP-12 scoring lower on honesty-humility and conscientiousness and
higher on openness. Openness emerged as a key individual difference, with participants
scoring higher on this trait demonstrating a greater likelihood to favour vinyl, re-mastered
versions of albums and listening to live recordings. Crucially, preference for digital music
was a significant predictor of pro-piracy attitudes. Several demographic differences were
also observed which point towards a gender-segmented approach in appeasing individuals
engaging in music piracy as well as accommodating the increasing trend for digital music.
Implications for future anti-piracy strategies are discussed.

Speed Poster Session 27: Crystal Hall, 15:30-16:00


Music, language & learning

The Effect of Background Music on Second Language Learning

Hi Jee Kang,* Victoria J. Williamson*


Department of Psychology, Goldsmiths, University of London, UK

The present study aimed to determine the effect of background music on second language
learning. Two experiments were prepared to investigate the role of background music on
short-term and long-term memory for new language materials. Experiment 1 focused on
short-term memory: participants with no previous knowledge of Arabic listened to a set of
numbers in Arabic (1-10), with or without background music, followed by two recognition
phases separated by a 5-minute delay. The results showed that the Music group performed
better on both test phases when compared with the No Music group. Age showed a negative
relationship with the results. In Experiment 2, monolingual English speakers chose to learn
either Arabic (atonal language) or Mandarin Chinese (tonal language) as part of an
ecologically valid two week language learning trial that utilized commercially available
language learning CDs. Participants were randomly assigned to either a background Music
group or a No Music group. The post learning test session comprised understanding and
speaking tests in the new language, as well as tests of working memory, general intelligence,
and musical sophistication. Participants who learned Chinese with Music performed
significantly better on both understanding and speaking tests compared to the Chinese No
Music group. No significant difference was found between the two Arabic groups. Overall, the
presence of music positively correlated with enjoyment and achievement levels in both
languages. The results indicate that background music can improve memory during second
language learning tasks and also bring higher enjoyment, which could help build focus and
promote future learning.


Does Native Language Influence the Mother's Interpretation of an Infant's
Musical and Linguistic Babblings?

Mayumi Adachi,* Simone Falk#


*Dept. of Psychology, Hokkaido University, Japan; #Ludwig-Maximilians-Universität München,
Germany

Adachi and Ando (2010) demonstrated that Japanese mothers can interpret a Japanese
toddler's linguistically ambiguous vocalizations as either talking or singing, depending on the
context sampled. The present study explored whether the same response patterns were
intact among mothers who were unfamiliar with Japanese toddlers' vocalizations. Nineteen
German mothers listened to the same 50 vocalizations used with Japanese mothers in the
earlier study, evaluating whether each vocalization sounded like talking or singing. Results
indicated that German mothers interpreted the Japanese toddler's vocalizations taken from
infant-directed speech contexts more as talking than as singing, and those
taken from infant-directed song contexts more as singing than as talking. As a group, German
mothers used seven vocal cues in interpreting the vocalizations. Focusing on individual
mothers' use of vocal cues, however, only one cue among the seven identified as a group,
the number of syllables per second, was used consistently by more than three mothers: a
smaller number of syllables per second (i.e., longer syllables) guided German mothers'
interpretation toward singing, as found in Japanese mothers. The number of vocal cues used
consistently by three or more mothers was greater in the Japanese (7 cues) than in the
German (2 cues) sample. Perhaps the unfamiliarity of the toddler's native language interfered
with German mothers' consistent use of vocal cues. Nonetheless, the equivalent number of
vocalizations interpreted as talking or as singing by German and Japanese mothers may
imply something unique in the mothers' interpretation of the toddler's vocalizations beyond
native language.


Teachers' Opinions of Integrated Musical and Language Learning Activities

Karen M. Ludke
Institute for Music in Human and Social Development, Edinburgh College of Art, University of
Edinburgh, United Kingdom

There is increasing interest in the potential of music to support language learning and
memory (Wallace, 1994; Schön et al., 2008). Listening, perceiving, imitating, and creating are
basic skills in both language and music. The Comenius Lifelong Learning Project European
Music Portfolio: A Creative Way into Languages (EMP-L) aims to support children's learning
in music and languages through a flexible, integrated approach. This study explored Scottish
music teachers' opinions of the music and language activities developed by the international
EMP-L team. Special consideration was given to the Scottish Curriculum for Excellence (CfE),
wherein music learning falls into the expressive arts curriculum area and modern language
learning into the languages area. This qualitative study was conducted with 6 trainee
primary music teachers and 2 experienced teachers who were trained to use the EMP-L
activities to support musical and language learning outcomes. Pre- and post-teaching
questionnaires and focus groups asked teachers to comment on the applicability of the EMP-
L's core activities to learning and progression. Pre- and post-implementation survey data
were analyzed together with teachers' comments during the focus group sessions. Overall,
teachers' opinions of the EMP-L materials were positive and the lessons led to successful CfE
experiences and outcomes. However, some concerns were raised, particularly regarding
progression and whether generalist primary teachers could use the activities without
support from music and/or language specialists. The teachers' opinions of the EMP-L
activities have the potential to improve the materials and to inform holistic, integrated music
education initiatives in Europe and elsewhere.


Introducing ECOLE: a language-music bridging paradigm to study the role of
Expectancy and COntext in social LEarning

Laura Verga, Sonja A. Kotz


Dept. Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Germany

Does music enhance memory and learning of verbal material? The evidence in support of
this claim is inconsistent. Results from patients with AD or MS demonstrate a beneficial
effect of music on memory; however, studies with healthy participants fail to replicate this
effect. Yet, many studies in both populations did not consider two shared features of music
and language. First, the building up of a context creates strong expectancies with respect to
what is coming next. Second, both music and language are in essence social activities.
However, there is a paucity of research on the impact of social interaction on learning and
music. We propose a novel paradigm to study the effect of music on verbal learning. Our
approach relies on the two properties shared by music and language: social interaction and
expectancies derived from contextual information. Our paradigm consists of a game-like set-
up mimicking a natural learning situation. Two people (a "teacher" and a "student")
cooperate in finding the matching final object of a sentence context built upon the
combination of melodies and pictures. Each picture aligns to a musical unit, building up a
context and parallel expectations towards a picture representing an object and its name in a
language unknown to the players. Matching of expectancies could attentionally bind
resources enhancing predictions towards the object. Results of this paradigm should have
major implications for 1) our understanding of the impact of music on verbal learning, and 2)
applications in language learning and relearning in clinical populations.

Speed Poster Session 28: Dock Six Hall, 15:30-16:00


Temporality & rhythm II

Fade-out in popular music and the Pulse Continuity Illusion

Reinhard Kopiez, Friedrich Platz, Anna Wolf


Hanover University of Music, Drama, and Media, Hanover Music Lab, Germany

In popular music, fading, a gradual increase or decrease in the level of an audio signal, is a
commonly used technique for the beginning or ending of a recording. The primary reason for
this type of ending was the limited recording time of 3 min. for a 45 rpm record. The
psychological effect of the fade-out remains speculative. The hitherto intuitive hypotheses on
the psychological effect of fade-out, such as "indefinite closure" (Huron, 2006) or "the song
goes on forever" (Whynot, 2011), will be tested by experimental means. We predict
prolonged tap-along behaviour in the fade-out condition (directional hypothesis: tap-along
duration for the fade-out ending > tap-along duration for the cold ending). We used two
versions of a recently produced but unpublished pop song: version one exhibited an
arranged end ("cold end") and version two a fade-out end. A two-group, between-subjects
design (N = 54, music undergraduates) was used in a lab setting. The Sentograph (Mark IV)
developed by Manfred Clynes served as an interface for the measurement of the dependent
variable, musical entrainment. Subjects received the instruction to "feel the groove of the
music and continue until you do not feel
any more entrainment." A clear between-groups difference was found: compared with the
cold-end group, subjects in the fade-out group continued pulsation about 3 s longer (t(52) =
2.87, p = .007, Cohen's d = 0.90). We call this effect the Pulse Continuity Illusion (PCI, say
"Picky").


The influence of imposed meter on temporal order acuity in rhythmic
sequences

Brandon Paul,* Per B. Sederberg,# Lawrence L. Feth*


*Department of Speech and Hearing Science, Ohio State University, USA
#Department of Psychology, Ohio State University, USA

Imagined meter is an imposed mental hierarchy of phenomenally strong and weak beats that
listeners use to organize ambiguous sequences of sounds and generate temporal
expectations. Here, we examine the possibility that improved auditory perception occurs at
moments when events are most strongly anticipated (i.e., strong beats), and also examine the
effect of long-term experience using a sample of musicians and non-musicians. While
grouping sounds in binary and ternary meter, listeners heard equally-spaced sequences of
click pulses and were asked to identify metric positions on which deviant clicks occurred.
The electroencephalogram was recorded from all participants. Preliminary behavioral
results from six subjects indicate that non-musicians outperformed musicians during this
task. Binary meter was found to yield a better performance overall, consistent with previous
findings that ternary meter is more difficult to impose on ambiguous rhythmic sequences.
Finally, beat-based differences arose only in comparing weak beats of one metric condition
to all other beats; although significant differences between strong and weak beats were not
found overall, the current results, consistent with our prediction of enhanced perception on
strong beats, warrant further investigation. Preliminary analysis of EEG recordings suggests
that endogenously maintained meter gives rise to beat-based differences in the amplitude of
ERP waveforms, but these differences vary considerably between individuals in both groups.
Findings from the study have implications for understanding the precise neural mechanisms
behind perceiving and organizing large structures found in speech and music, as well as for
extending our knowledge of the cognitive structuring of auditory perception.


Pitch and time salience in metrical grouping

Jon Prince
School of Psychology, Murdoch University, Australia

I report two experiments on the contribution of pitch and temporal cues to metrical
grouping. Recent work on this question has revealed a dominance of pitch. Extending this
work, a dimensional salience hypothesis predicts that the presence of tonality would
influence the relative importance of pitch and time. Experiment 1 establishes baseline values
of accents in pitch (pitch leaps) and time (duration accent) that result in equally strong
percepts of metrical grouping. Pitch and temporal accents are recombined in Experiment 2
to see which dimension contributes more strongly to metrical grouping (and how). Both
experiments test values in tonal and atonal contexts. Both dimensions had strong influences
on perceived metric grouping, but pitch was clearly the more dominant. Furthermore, the
relative strength of the two dimensions varied based on the tonality of the sequences. Pitch
contributed more strongly in the tonal contexts than in the atonal, whereas time was stronger
in the atonal contexts than in the tonal. These findings are inconsistent with an interpretation
that stimulus structure enhances the ability to extract, encode, and use information about an
object. Instead, they imply that structure in one dimension can highlight that dimension at
the expense of another (i.e., induce dimensional salience).

How is the Production of Rhythmic Timing Variations Influenced by the Use of
Mensural Symbols and Spatial Positioning in Musical Notation?

Lauren Hadley,* Michelle Phillips#


*Department of Psychology, Goldsmiths College, University of London, England
#Faculty of Music, University of Cambridge, England

The vast majority of Western classical music performance employs the musical score as a
means of communicating composer intention to performers. Within this score, the two most
common methods of notational representation of rhythm include use of mensural symbols
(e.g. crotchets, quavers), and use of spatial layout (proportional spaces after symbols). This
study examined the effect of notational layout and style on the performer's realisation of
notational tempo and rhythm. Participants performed one rhythm in 4 different
transcriptions using a MIDI drumpad, with order counterbalanced and distracter tasks
separating each trial. Three transcriptions employed mensural notation with different spacings
(wide, narrow, or equidistant), and one transcription employed block notation relying purely
on space to indicate duration (similar to a piano-roll and common in avant-garde notations).
Notational style (mensural symbols compared to block notation) was found to significantly
affect both tempo choice and performance accuracy. Block notation was performed at a
slower spontaneous tempo and less accurately than the mensural notations, with timings of
different note lengths converging towards the mean. Furthermore, comparison of mensural
transcriptions indicated that although spatial information was not enough to elicit rhythmic
performance alone, it has a significant impact on performance of the mensural score. Eleven
of fifty-one notes were played significantly differently between the three mensural notations,
differing only on spatial layout. These findings suggest that rhythmic timing variations
depend directly on the way in which notation is laid out on the page, and have significant
implications for editors and composers alike.

Speed Poster Session 29: Timber I Hall, 15:30-16:00


Visualization of sound

Interplay of Tone and Color: Absolute Pitch and Synesthesia

Milena Petrović,* Mihailo Antović#


*Solfeggio and Music Education Dept., Faculty of Music, University of Arts, Belgrade, Serbia
#English Dept., Faculty of Philosophy, Niš, Serbia

Absolute pitch is the ability to recognize and properly name a given musical pitch (Levitin,
1994). It is more prevalent among speakers of tonal languages, in which meaning may
depend on pitch (Deutsch, 2009). The emergence of absolute pitch depends on cultural
experience and genetic heredity (Deutsch, 2006), and on exposure to early music education
and the tempered system (Braun, 2002), while today's rare occurrence of this phenomenon
might also be a consequence of transposition (Abraham, 1901; Watt, 1917). Musicians with
absolute pitch may have certain disadvantages compared with musicians with relative pitch:
incessant naming of tones prevents them from fully enjoying music (Miyazaki, 1992).
Absolute pitch may also be integrated with other senses in synesthesia (Peacock, 1984). The
sample comprised 28 professional musicians with absolute pitch, aged 15 to 47, of both
sexes. It was found that the most common synesthetic experience among professional
musicians with absolute pitch is the association of sound and color, the so-called
chromesthesia or "color hearing" (Sacks, 2007). The paper examines whether it occurs while
listening to: 1) an isolated tone played randomly in different registers, 2) major and minor
chords along the circle of fifths, in root position on the piano and in the same octave, and 3)
the themes of Bach's 24 preludes from the Well-Tempered Clavier. The study strives to find
any regularities in the synesthetic experience, i.e. in the connection between sounds and
colors in professional musicians with absolute pitch.


The Role of Pitch and Timbre in the Synaesthetic Experience

Konstantina Orlandatou
Institute of Musicology, University of Hamburg, Germany

Synaesthesia is a condition, an involuntary process that occurs when a stimulus not only
stimulates the appropriate sense but also another modality at the same time. In
order to examine whether pitch and timbre influence the synaesthetic visual experience, induced by
sound, an experiment with sound-colour synaesthetes (N=22) was conducted. It was found
that a) high-pitched sounds lead to the presence of hue, b) low-pitched sounds to an
absence of hue, c) single frequencies cause a uni-colour sensation, and d) multiple high-
pitched frequencies induce a multi-colour sensation. The variation of chromatic colour, which is
present in the sensation, depends on the timbre of the sound. These findings suggest that the
synaesthetic mechanism (in case of sound-colour synaesthesia) maps sound to visual
sensations depending on the mechanisms underlying temporal and spectral auditory
processing.


Musical Synesthesia: the role of absolute pitch in different types of pitch tone
synesthesia

Lilach Akiva-Kabiri, Avishai Henik


Department of Psychology, and the Zlotowski Center for Neuroscience
Ben-Gurion University of the Negev, Beer-Sheva, Israel

Synesthesia is a condition in which individuals experience two commonly independent
perceptions as joined together. In tone-color synesthesia (TCS), pitch chroma (e.g., Sol) elicits
a color perception. In tone-space (TSS) synesthesia, musical tones are organized explicitly in
a defined spatial array. These types of synesthesia are often associated with absolute pitch
(AP). We tested the importance of AP in TCS and TSS. AP and non-AP TCS synesthetes were
presented with visual and auditory Stroop-like tasks. Participants were asked to name a colored patch
on a screen and ignore a musical tone. When the musical tone was auditory, AP possessors
presented a congruency effect, whereas when the tone was presented visually, both groups
presented a congruency effect. These results suggest that in TCS, additional color perception
is impossible to suppress. Moreover, color association could be elicited both by auditory
tones or musical notes, depending upon AP ability. In the second part of this work, we used a
cue detection task and asked TSS synesthetes without AP and non-synesthetes to detect a visual cue
while ignoring a simultaneous irrelevant auditory tone. Only the synesthetes presented a
significant validity effect. Hence, they were unable to suppress the orienting of attention to
the spatial form of the auditory tone. The present results demonstrate the automaticity of synesthetic
associations. Furthermore, data suggest that AP modulates the effects of TCS but not of TSS.
Results are interpreted considering the underlying characteristics of color perception,
which is essentially categorical in nature, compared with the more ordinal nature of space.

Getting the shapes right at the expense of creativity? How musicians' and
non-musicians' visualizations of sound differ

Mats B. Küssner,* Helen M. Prior,* Nicolas E. Gold,# Daniel Leech-Wilkinson*


*Department of Music, King's College London, United Kingdom
#Department of Computer Science, University College London, United Kingdom

The study of visualizations of sound and music spans areas such as cross-modal perception,
the development of musical understanding, and the influence of musical training on music
cognition. This study aimed to reveal commonalities and differences between musicians and
non-musicians in the representational strategies they adopted to visualize sound and music,
as well as the accuracy with which they adhered to their self-reported strategies. To that end,
forty-one musicians and thirty-two non-musicians were asked to represent visually, by
means of an electronic graphics tablet, eighteen sequences of pure tones varying in pitch,
loudness and tempo, as well as two short musical excerpts. Analytic tools consisted of a
mixture of qualitative and quantitative methods, the latter involving correlations between
drawing and sound characteristics. Results showed that the majority of musicians and non-
musicians used height on the tablet to represent pitch (higher on tablet referring to higher
pitches), and thickness of the line to represent loudness (thicker lines for louder sounds).
Non-musicians showed both a greater diversity of representational strategies and a tendency
to neglect pitch information if unchanged over time. Musicians were overall more accurate
than non-musicians in representing pitch and loudness but less imaginative. This was the
first study comparing musicians' and non-musicians' visualizations of pure tones in a free
drawing paradigm. It was shown that real-time drawings are a rich source of data, enabling
valuable insights into cognitive as well as sensory-motor processes of sound and music.

Speed Poster Session 30: Timber II Hall, 15:30-16:00


Experiencing new music

New music for the Bionic Ear: An assessment of the enjoyment of six new
works composed for cochlear implant recipients

Hamish Innes-Brown,* Agnes Au,#* Catherine Stevens, Emery Schubert, Jeremy Marozeau*
* The Bionics Institute, Melbourne, Australia; # Department of Audiology and Speech
Pathology, The University of Melbourne, Australia; MARCS Institute, University of Western
Sydney, Australia; School of English, Media and Performing Arts, University of New South
Wales, Australia

The enjoyment of music is still difficult for many cochlear implant users. This study aimed to
assess cognitive, engagement, and technical responses to new music composed specifically
for cochlear implant (CI) users. From 407 concertgoers who completed a questionnaire, responses from groups
of normally-hearing listeners (NH, n = 44) and CI users (n = 44), matched in age and musical
ability, were compared to determine whether specially-commissioned works would elicit
similar responses from both groups. No significant group differences were found on
measures of interest, enjoyment and musicality, whereas ratings of understanding and
instrument localization and recognition were significantly lower from CI users. Overall,
ratings of the music were typically higher for percussion pieces. The concert successfully
elicited similar responses from both groups in terms of interest, enjoyment and musicality,
although technical aspects, such as understanding, localisation, and instrument identification
continue to be problematic for CI users.


How fun is this? A pilot questionnaire study to investigate visitors' experience


of an interactive sound installation

PerMagnus Lindborg
Nanyang Technological University (Sgp) / KTH Royal Institute of Technology (Swe)

We present a pilot questionnaire study to investigate visitors' experience of an interactive
and immersive sound installation, The Canopy (Lindborg, Koh & Yong 2011), exhibited at
ICMC in Huddersfield. The artwork consists of a 4.5m windsurfing mast suspended by
strings, set up in a black-box space and illuminated in a dramatic fashion. The visitor can
manipulate the pole with several degrees of control: 2 for floor position, 2 for pole direction,
and one each for twist, grip height and squeeze. A real-time program in Max/MSP (Cycling '74)
maps control data to sound synthesis and 3D diffusion over 8 loudspeakers. The concept of
the installation was to "sail in a sonic storm of elementary particles". 35 people responded to
the questionnaire immediately after having visited the installation. The questions aimed to
gauge various qualities of the interactive experience: the amount of time spent, the relative
importance of visual, sculptural and sonic elements, the amount of fun, and the perceived
quality of gestural control over spatial and timbral sound features. For the dependent
variable "fun amount", 6 graded sentences were given as response options. Visitors also
completed forms for the Ten-Item Personality Inventory (TIPI; Gosling 2003) to estimate OCEA
scores, and for Ollen's Musical Sophistication Index (OMSI; Ollen 2005), and gave free-form
feedback. The aim of the questionnaire was to investigate if people with different musical
sophistication and personality traits would value different aspects of the experience in
systematic ways. On the OMSI, 24 respondents scored high (p>0.75) and 7 low (p<0.45).
Thus divided, they were treated as two groups in the analysis. ANOVA revealed that the
groups had similar OCEA scores, except for Agreeableness where the high-OMSI group had
a marginally higher mean. A stepwise regression of fun on all the other variables and on
OMSI group interaction with OCEA revealed that people who felt they could act on the
spatial control had more fun, and this was in particular the case for less musically
sophisticated people who were more extrovert or less agreeable. With time spent as
dependent variable, a similar procedure indicated that people (particularly the more
conscientious) who felt they could act on the spatial control stayed significantly longer in the
installation. While these results would indicate that spatial control is of primary importance,
most free-form feedback focussed on timbral control. We are currently investigating whether
correlations are moderated by personality traits, and further results will be presented at the
conference.

The experience of sustained tone music

Richard Glover
Department of Music, University of Huddersfield, UK

This study will discuss a cognitive approach to the experience of experimental music created
entirely from sustained tones, in which there is an absence of typical perceptual cues for
creating sectional boundaries, thereby directing the listener's focus towards surface
phenomena within the aural environment. Source material for the study comprises recent
compositions by American composers Phill Niblock and Alvin Lucier, as well as the author.
The approaches to harmonic transformation in these pieces are outlined, alongside a
detailed description of the activity within the surface layer of the sound, comprehensively
surveying the myriad acoustic and psychoacoustic phenomena prevalent. The presentation
draws upon gestalt grouping mechanisms to describe how this surface activity is interpreted
by the cognitive process. The notion of resulting articulations within sections is explored,
and consequently what this means in terms of stability and instability in experience for the
listener, including considerations of temporality. The manner in which this process feeds
into the compositional procedure for these composers is also explored, looking specifically at
pitch structures employed, how composed indeterminacy in sustained tone composition
affects the cognition process and why these composers have a tendency towards writing for
acoustic instruments rather than electronic sources. This study provides further strategies
for how we might analyse sustained tone music, directing discussion towards the sounding
experience and cognitive comprehension of the listener rather than solely from the score.
This understanding can open up further avenues of research for composers, performers and
interdisciplinary theorists.


"Just Riff Off": What determines the subjectively perceived quality of "hit"
riffs?

Barbara Sobe, Oliver Vitouch


Dept. of Psychology, University of Klagenfurt, Austria

A riff is "a short, repeated, memorable musical phrase, often pitched low on the guitar, which
focuses much of the energy and excitement of a rock song" (Rooksby, 2002). Burns (1987)
describes guitar riffs as common contexts for melodic hooks, being essential for catching
the listener's attention. This study attempts to provide some empirical and analytical
building blocks for answering a more narrowly defined sub-question of the hitherto
unresolved "hit science" question: What makes an intersubjectively great guitar riff?
Remotely similar to Sloboda's (1991) classification of climactic moments in classical music,
we aim to distill a repertoire of structural elements that successful riffs share. In order to
base our findings on new and unfamiliar music material, we chose a production &
evaluation approach. Ten e-guitarists from unsigned bands were asked to invent new riffs in
individual sessions. The resulting 55 riffs were assessed by 80 non-expert raters and 14
professional guitar players in terms of subjective liking. In a combination of inductive and
deductive approaches, common features of those riffs that scored highest and lowest were
explored and analyzed, and predictions from the "killer riff" handbook literature were tested
against the data. Findings show revealing differences between the evaluations of experts and
non-experts. Within each rater group, well-evaluated riffs do indeed share common
structural elements, partly corresponding with advice from the handbook literature. In the
overlapping subset of riffs pleasing both groups, particular musical effects such as
syncopation, timing, and other rhythm effects play a prominent role.

Paper Session 15: Grand Pietra Hall, 17:00-18:30


Group creativity & improvisation

What Does One Know When One Knows How to Improvise?

Andrew Goldman
Centre for Music and Science, University of Cambridge, United Kingdom

Cognitive models of improvisation align with pedagogical methods in suggesting
improvisers' need for both procedural and declarative knowledge. However, behavioral
experiments do not directly address this division due to the difficulty of operationalizing
improvisation. The present study seeks to experimentally demonstrate different types of
knowledge involved in producing musical improvisations and to contribute an experimental
paradigm. Ten jazz pianists improvised on a MIDI keyboard over backing tracks. They
produced one-handed monophonic improvisations under a 2x2x2 fully factorial design. The
conditions contrasted levels of motor familiarity by varying which hand (right vs. left) played
which musical function (melody vs. bass line) in which key (Bb vs. B). MIDI files were
analyzed using MATLAB to determine the entropy, the proportion of diatonic pitch classes,
the nPVI of a quantized version of the data, and the nPVI of a version left unquantized.
Separate ANOVAs compared these values across conditions. Significant main effects were
found between keys and hands. In the key of B, pianists produced improvisations with lower
entropy and with more diatonic pitches than in Bb. The right hand had lower quantized nPVI
values than the left hand. Several significant interactions were also found. This research
reframes the distinction between theoretically proposed types of musical knowledge used in
improvisation. In unfamiliar motor contexts, pianists improvised with less pitch class
variability and more diatonic pitch classes, implying that in the absence of procedural
knowledge, improvisers rely more on explicit knowledge of tonality. This suggests new ways
to consider modes of improvising.
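
The study's descriptors were computed in MATLAB; purely as an illustration, the sketch below (Python; the function names and the example line are hypothetical) shows how two of them, pitch-class entropy and the proportion of diatonic pitch classes for a given major key, can be derived from MIDI note numbers:

    import math
    from collections import Counter

    def pitch_class_entropy(midi_pitches):
        # Shannon entropy (bits) of the pitch-class distribution
        counts = Counter(p % 12 for p in midi_pitches)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def diatonic_proportion(midi_pitches, tonic_pc):
        # Proportion of notes whose pitch class lies in the major scale on tonic_pc
        scale = {(tonic_pc + step) % 12 for step in (0, 2, 4, 5, 7, 9, 11)}
        return sum(p % 12 in scale for p in midi_pitches) / len(midi_pitches)

    line = [70, 72, 74, 75, 77, 74, 72, 70]  # a short line in Bb (tonic_pc = 10)
    print(pitch_class_entropy(line), diatonic_proportion(line, 10))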


Distributed creativity in Tongue of the Invisible

Eric Clarke1, Mark Doffman1, Liza Lim2


1Faculty of Music, University of Oxford, UK
2School of Music, Humanities and Media, University of Huddersfield, UK

Theoretical and empirical accounts of musical creativity have been dominated by
individualistic and de-contextualised accounts of rather abstracted creative processes. More
recently there has been increasing recognition of and interest in the distributed and situated
nature of musical creativity, particularly in the interface between composition,
improvisation and performance. This paper reports on the creation, rehearsal and
performance of a 60-minute work (Tongue of the Invisible, by Liza Lim) that incorporates a
variety of composed and more improvised elements. The aim of the project is to investigate
and understand aspects of ownership (both in an affective sense, and in terms of creative
property), creative control, and social and psychological components in distributed musical
creativity. A large body of qualitative data has been gathered, including discussions with the
composer (Lim), extensive audio and video recordings of the rehearsal processes that led to
the first performances, and recorded interviews with many of the performers. Using
ethnographic methods as well as direct input from the composer herself, this paper will
present analyses of the distributed creative dynamics exemplified in a number of targeted
moments in the work. These analyses expose the complex network of forces that
characterize the creative dynamics of the piece and its genesis, involving institutional, social
psychological, semiotic, cognitive and embodied components. Taken together they afford a
rich and complex picture of collaborative creativity in the interface between composition-
improvisation-performance, contributing to the significant re-theorising of creativity that is
going on from many disciplinary perspectives.


Cognition and Segmentation in Collective Free Improvisation: An Exploratory
Study

Clément Canonne,1 Nicolas B. Garnier2


1Centre Georges Chevrier, UMR 5605, Université de Bourgogne, France
2Laboratoire de Physique de l'ENS de Lyon, CNRS UMR 5672, Université de Lyon, France

Collective Free Improvisation (CFI) is a very challenging form of improvisation. In CFI,
improvisers do not use any pre-existing structure (like the "standard" in straight-ahead jazz),
but nevertheless try to produce coherent music together. This can be seen as a coordination
problem: musicians' production must converge to collective sequences, defined as time
frames during which each improviser achieves relative stability in his musical output while
judging the overall result satisfactory. In this paper, we report on an exploratory study made
with free improvisers in December 2011, in order to understand the cognition of musicians
placed in a CFI context, in particular the role played by their representations of the
improvisation as different types of sequences in explaining both their behaviors
and the success or failure of coordination.

Paper Session 16: Crystal Hall, 17:00-18:30


Emotion perception

Understanding Music-Related Emotion: Lessons from Ethology

David Huron
School of Music, Ohio State University, USA

A number of musically-pertinent lessons are drawn from research on animal behavior
(ethology). The ethological distinction between signals and cues is used to highlight the
difference between felt and expressed emotion. Several ethologically-inspired studies are
described, principally studies related to music and sadness. An ethologically-inspired model
is proposed (the Acoustic Ethological Model). The question of how music induces emotion in
a listener is addressed, and it is proposed that signaling represents a previously unidentified
mechanism for inducing affect. An integrated theory of sadness/grief is offered, where
sadness is characterized as a personal/covert affect, and grief is characterized as a
social/overt affect. Sadness and grief tend to co-occur because they provide complementary
strategies for addressing difficult circumstances.


Emotion perception of dyads and triads in congenital amusia

Manuela M. Marin,1 William F. Thompson,2 Lauren Stewart3


1Department of Basic Psychological Research and Research Methods, University of Vienna,
Austria; 2Department of Psychology, Macquarie University, Australia; 3Department of
Psychology, Goldsmiths, University of London, United Kingdom

Congenital amusia is a neurodevelopmental disorder characterized by deficits in pitch
processing. Emotional responses to music have rarely been studied in this clinical group. We
asked whether amusics differ from controls in pleasantness judgements of isolated dyads
and in happiness/sadness judgements of isolated major/minor chords. We also probed
whether the spectrum of sounds in a dyad or triad (sine-tone vs. complex-tone) affects
emotional sensitivity to consonance/dissonance and mode. Thirteen amusics and 13 controls
were matched on a range of variables. Dyads or triads were sine-tones or complex sounds
(piano timbre), 1.5 s in length, and equated for loudness. Dyads comprised intervals from one
to 12 semitones. Major and minor triads were played in root position. Participants rated the
pleasantness of dyads and the happiness/sadness of triads on a 7-point scale. The profile of
pleasantness ratings for sine-tone dyads was less differentiated in amusics. Compared to
controls, amusics also assigned lower pleasantness ratings to consonant sine-tone and
complex-tone dyads. Amusics did not differ from controls for ratings of dissonant sine-tone
dyads, but assigned marginally significantly higher pleasantness ratings for dissonant
complex-tone dyads. Happiness/sadness judgements by controls differed for major and
minor triads, but amusics only differentiated between major and minor complex-tone
chords. Major sine-tone and complex triads were rated as less happy by amusics compared
to controls, but minor triads were rated similarly in both groups. Amusics differ from
controls in their perception of the pleasantness of dyads and in the perception of
happiness/sadness for major/minor triads. The implications of these data for models of
congenital amusia are discussed.


Rare pitch-classes are "larger" and "stronger": implicit absolute pitch, exposure
effects, and qualia of harmonic intervals

Zohar Eitan,1 Moshe Shay Ben-Haim,2 Eran Chajut3


1School of Music, Tel Aviv University, Israel; 2School of Psychology, Tel Aviv University, Israel
3Department of Psychology and Education, The Open University, Israel

It is widely accepted that stimuli's frequency of occurrence affects perceptual processes. In
the Western tonal repertory, some pitch classes are much more frequent than others. Given
recent studies showing that long-term memory for pitch chroma is widespread, we
hypothesized that common and rare pitches would generate different expressive
experiences in listeners. We examined this hypothesis with regard to emotional and cross-
modal meanings of harmonic intervals, which were comprised of common or rarer pitch
combinations. 96 non-musicians rated two harmonic intervals (sampled guitar sounds),
each presented in 6 pitch transpositions, on 10 bi-polar expression scales (e.g., Weak-Strong,
Happy-Sad). Ratings were significantly associated with interval type (3rd or 4th), pitch
height, and occurrence frequency. In accordance with previous studies, participants rated
higher pitch intervals as happier, harder, brighter, smaller, sweeter, weaker, and more
relaxed than lower ones (p<0.005). Most importantly, participants rated rare pitch
combinations in both intervals as larger and stronger than their adjacent common
counterparts (p<0.05, FDR corrected). Results suggest that rates of exposure to absolute
pitches in music affect the way pitch combinations are experienced. Specifically, frequency of
occurrence affected potency scales (Osgood et al., 1957), associated with power and
magnitude, as rarer intervals were rated higher in potency (stronger, larger). This novel
exposure effect suggests that implicit absolute pitch abilities are not only widespread among
non-musicians, but partake significantly in the perception of the expressive qualities of
musical sound.
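
Editorial note on the statistics: the "FDR corrected" criterion above is most commonly
implemented with the Benjamini-Hochberg procedure. The Python sketch below assumes that
procedure (or an equivalent) was used; the p-values are placeholders, not the study's data.

import numpy as np

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg: return a boolean mask of significant p-values."""
    p = np.asarray(pvals)
    order = np.argsort(p)                      # indices sorting p ascending
    m = len(p)
    # BH criterion: the k-th smallest p-value must be <= (k/m) * alpha
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True                     # everything up to the cutoff
    return mask

print(fdr_bh([0.001, 0.012, 0.03, 0.04, 0.2]))  # -> [True True True True False]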

Paper Session 17: Dock Six Hall, 17:00-18:30


Popular music & music in the media

Music in political commercials: A study of its use as affective priming

Richard Ashley
Program in Music Theory and Cognition, Northwestern University, USA

This study investigates how music may influence viewers' responses to political
advertisements, looking specifically at the timecourse of affective responses. It builds on
prior research dealing with affective and perceptual responses to brief stimuli. The primary
hypothesis is that a listener's very early response to a commercial's music serves as an
affective prime for processing the remainder of the commercial. This project involves both a
corpus analysis and an experiment. The corpus used is the database of political
advertisements maintained by the Washington Post; this study restricted itself to television
and radio commercials from the year 2008, during the general US Presidential campaigns of
Barack Obama and John McCain. The experiment collects affective valence and intensity
responses to excerpts from the ads' beginnings in three conditions: audio only, video only,
and audio + video. Excerpts are of variable length (33 msec. to 4200 msec.) and also
include the entire commercial (most of which are 30 seconds in length). In results to date, it
appears that music provides the fastest path to an emotional response on the part of a
viewer. Music is typically employed from the very beginnings of advertisements; affective
responses to audio excerpts of 100-250 msec. are frequently stronger than those found in
the corresponding visual excerpts, depending on the ads' contents. Although judgments of
the full commercials are more intense and more stable than judgments of the brief excerpts,
the affective priming seen in responses to the music is borne out by the commercial as a
whole.


Do Opposites Attract? Personality and Seduction on the Dance Floor

Geoff Luck, Suvi Saarikallio, Marc Thompson, Birgitta Burger, Petri Toiviainen
Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music,
University of Jyväskylä, Finland

Some authors propose that we are more attracted to opposite-sex individuals with
personalities similar to our own. Others propose that we prefer individuals with different
personalities. We investigated this issue by examining personality and attraction on the
dance floor. Specifically, we investigated how the personality of both observers and dancers
affected the former's attractiveness ratings of the latter. Sixty-two heterosexual adult
participants (mean age = 24.68 years, 34 females) watched 48 short (30 s) audio-visual
point-light animations of adults dancing to music. Stimuli were comprised of eight females
and eight males, each dancing to three songs representing Techno, Pop, and Latin genres. For
each stimulus, participants rated the perceived skill of the dancer, and the likelihood with
which they would go on a date with them. Both dancers and observers personality were
assessed using the 44-item version of the Big Five Inventory. Correlational analyses revealed
that women rated men high in Openness to experience as better dancers, while men low in
Openness gave higher ratings of female dancers. Women preferred more Conscientious men,
but men preferred less Conscientious women. Women preferred less Extraverted men, while
men preferred more Extraverted women, especially if they were more Extraverted
themselves. Both women and men preferred less Agreeable opposite-sex dancers. Finally,
both women and men preferred more Neurotic opposite-sex dancers. This study offers some
fascinating insights into the ways in which personality shapes interpersonal attraction on the
dance floor, and partially supports the idea that opposites sometimes do attract.


Doubtful effects of background music in television news magazines

Reinhard Kopiez, Friedrich Platz, Anna Wolf


Hanover University of Music, Drama, and Media, Hanover Music Lab, Germany

Experimental data on the effects of background music on cognition, affect or attitude are rare
and ambiguous. Additionally, the music selection in these studies seems to be arbitrary. We
used objectively selected background music, and the Elaboration Likelihood Model (ELM)
was used to predict negative effects of music on the central route of processing (recall) but
positive effects on the peripheral route (liking). A television report on toxic substances in
energy-saving lamps (ESL) served as the basic stimulus in 5 versions: (a) no music, and (b) 4
additional versions with high/low valence/arousal background music. A five-group between-
subjects design (n = 100 per group, age range: 18-60 years, random selection of consumers)
was used, and stimuli were rated in an online study. As the dependent variable, pre-post
questionnaires on attitudes toward ESL were given. Additionally, subjects filled in a recall
test with 10 items (5 correct, 5 false) each for auditorily and visually presented information.
The ANOVA showed no differences between conditions in recognition of items from the film
or in liking. A pre-post shift of attitude toward a critical evaluation of ESL could be observed,
regardless of condition. No significant influence of background music on recognition could
be observed. Our study could not confirm the widespread assumption of a general positive
or negative effect of background music on attitude or recognition.


Paper Session 18: Timber I Hall, 17:00-18:30


Phenomenology & hermeneutics

Mind the gap: Towards a phenomenological cognitive science of music


Jenny Judge
Centre for Music and Science, University of Cambridge, UK

Cognitive Science is widely regarded as the best effort at studying the mind that has been
made to date, paving the way for a truly rigorous account of cognition, using the methods
and epistemic commitments of natural science. However, a large number of authors have
expressed a worry that Cognitive Science fails to account for phenomenological data and is
therefore not a full theory of cognition. As Joseph Levine (1983) put it, Cognitive
Science is suffering from an "explanatory gap". In other words, regardless of what paradigm is
employed to explain and predict behavioural data, Cognitive Science fails to account fully for
how the mental is subjectively experienced. This issue has been debated primarily in the
philosophy of mind literature. However, insofar as it concerns Cognitive Science, I will argue
that music cognition researchers should pay attention to this debate. I will outline the
methodological and epistemological concerns highlighted by the explanatory gap argument,
as well as indicating some concrete ways in which music cognition researchers may attempt
to move beyond the explanatory gap (Gallagher and Brøsted Sørensen 2006). I will address
the issue of meaning in light of the naturalistic approaches of Cognitive Science, arguing that
attention to the explanatory gap literature allows us to frame the issue of how musical
meaning may survive in a naturalized picture of music cognition. I will discuss the project of
naturalizing phenomenology (Petitot 1999; Zahavi 2010), arguing for its in-principle
possibility as well as the promise it holds for a more truly phenomenological and holistic
approach to music cognition. Most of the literature on the interface between philosophy of
mind and Cognitive Science to date has focused on research into visuo-motor perception;
comparatively little attention has been paid to auditory or musical perception. I will address
the issue of the visuocentrism of philosophy of mind, arguing that greater attention to
musical cognition, as well as greater contact between philosophy of mind and Cognitive
Science, is important for a more complete understanding of perception in general.


A Nonrepresentationalist Argument for Music

Patrick Hinds
Music Dept., University of Surrey, United Kingdom

Music is a universally accessible phenomenon that resists understanding. These conditions
have prompted a considerable discourse on music's transcendental properties, tied up with
the notion of an exclusively musical meaning. Following a literature review, I reject this
notion, favouring a leaner theory that takes music's lack of objective meaning just as a lack of
objective meaning. I argue that music is a self-directed practice, contingent on a perceiver's
prerogative to block the perceived objective significance of an object and engage with it for
the sake of the engagement itself. This subversion of meaning is, I suggest, a mechanism in
virtue of which we may have consciousness of sound tout court: when the world is separated
from the aspect of self that is affording the means of perception and the latter is taken as a
subject of experience. Such an argument can make intelligible the concept of intrinsically
cognitive operations: those that do not refer outwardly. Emerging research in music
psychology gives
empirical grounding to this concept, accounting for music experience with psychological
structures that are nonrepresentational and thus lack extrinsic content. The upshot is that
music can exemplify nonrepresentational experience, where a representation is an
individuated (mental) object with semantic properties. There may be no specifiable object
true to the experience because music is partly constituted by that which is intrinsically
cognitive. This framework could thus be wielded in a discussion of qualia, potentially
elucidating the intuition that some qualities of experience are irreducibly mental in nature.


Topical Interpretations of Production Music

Erkki Huovinen,1 Anna-Kaisa Kaila2


1School of Music, University of Minnesota, USA
2University of Turku, Finland

The present empirical study sought to chart the kinds of mood, environment, and agency
associated with commercially targeted production music. An experiment with production
music representing a motivational category involved questions about mood variables, free
associative imagery, and questions about selected semantic properties of the associative
images. The results suggested that producers demonstrate considerable success in
engineering mood characters generally recognizable for listeners. Moreover, it was found
that the associative imagery elicited by production music may show even more concrete
commonalities between listeners in the kinds of agency and environments imagined.
Associationally cohesive clusters of musical excerpts were then interpreted with reference to
musical topos theory. Based on a hierarchical clustering of the results, tentative topical labels
'Idyll' and 'Dynamism', with respective associational qualities, were identified, along with a
subdivision of the latter into two sub-topoi, 'Brilliance' and 'Nightlife'. Notably, the topical
clustering did not simply reproduce distinctions between musical genres, suggesting that
similar semantic associations may be mapped onto different musical genres even within one
and the same musical culture. Overall, the study confirms the ability of commercial music to
function as an agent of rich meaning formation independently of the multimedia contexts it
is typically conjoined with.

Paper Session 19: Timber II Hall, 17:00-18:30


Learning and Skills assessment I

The "Open-Earedness" After Primary School: Results of a New Approach Based


on Voluntary Listening Durations

Christoph Louven
Institut für Musikwissenschaft und Musikpädagogik, Universität Osnabrück, Germany

The assumption that younger children are more "open-eared" than older children, i.e. that
they are more open towards unconventional styles of music, has been the subject of several
studies in the last 10 years. Most of these studies are based on a design that derives open-
earedness solely from preference ratings of music examples in different styles. This leads to
an intermixture of the concepts of preference and openness that we
assume to be a serious problem. Therefore, we created a new approach with a computer-
based design that combines preference ratings with measuring voluntary listening durations
and derived a numerical index of open-earedness. Results with primary school children
showed that although preferences for different musical styles changed considerably during
primary school the index of open-earedness did not. Since all previous studies on open-
earedness only dealt with primary school children it has not yet been established what
happens to open-earedness in older populations. Therefore, this paper will present the
results of two follow-up studies with Gymnasium (high school) pupils and university
students, partly with special music education (pupils of a Gymnasium with a special music
profile or university music students). This allows for the observation of both the
development of open-earedness after primary school and the influence of special musical
training on this process.


Music lessons, emotion comprehension, and IQ

E. Glenn Schellenberg, Monika Mankarious


University of Toronto, Canada

Music training in childhood is known to be associated positively with many aspects of
cognitive abilities. For example, enhanced performance for musically trained compared to
untrained participants is evident on tests of listening, memory, verbal abilities, visuospatial
abilities, nonverbal abilities, and IQ. Music training is also predictive of better grades in
school. It is unclear, however, whether positive associations with music training extend to
measures of social or emotional functioning. In fact, the available literature provides little
evidence of such associations. The goal was to examine whether musically trained and
untrained children differ in emotion understanding, and if so, whether any difference
between groups could be explained as a by-product of higher IQs among the trained children.
We recruited 60 7- and 8-year-olds. The 30 musically trained children had at least one year
of private music lessons (primarily individual lessons) taken outside of school. The 30
untrained children had no music training taken outside of school. All children completed
standardized tests of emotion comprehension and IQ. Both tests are valid, reliable, designed
for children, and widely used (i.e., translated into many different languages). As in previous
research, music training was predictive of higher IQs even when demographic variables were
held constant. Musically trained children also performed better than untrained children on
the test of emotion comprehension. The difference in emotion comprehension between the
two groups of children disappeared, however, when IQ was held constant. Nonmusical
associations with music training appear to be limited to tests of cognitive abilities and their
correlates. The quasi-experimental design of the present study precludes inferences of
causation, but the findings are consistent with the idea that high-IQ children are more likely
than other children to take music lessons and to perform well on many tests, including tests
of emotion comprehension. More reliable positive associations between music training and
social or emotional functioning may emerge among children who take music lessons in social
contexts, such as choirs or bands.


Introducing a new test battery and self-report inventory for measuring
musical sophistication: The Goldsmiths Musical Sophistication Index

Daniel Müllensiefen,1 Bruno Gingras,2 Jason Musil,1 Lauren Stewart1


1Department of Psychology, Goldsmiths, University of London, United Kingdom
2Department of Cognitive Biology, University of Vienna, Austria

This talk presents the Goldsmiths Musical Sophistication Index (Gold-MSI) as a research tool
to capture different levels of musical sophistication in the non-specialist population that may
develop through sustained and in-depth engagement with music in various forms, such as
listening, playing, or processing music in other cognitive or emotional ways. A self-report
questionnaire as well as an initial set of four different tests of music perception and
production abilities have been designed based on established findings from music cognition
research: a) sorting very short music clips by timbral similarity, b) perceiving and c)
producing a beat to a musical excerpt and d) detecting schematic and veridical changes in a
melodic memory task. A version of the Gold-MSI has been implemented online by the BBC
and has generated datasets from more than 140,000 participants. Analysis of the data from
the self-report inventory generated a statistical model with a clear multi-dimensional
structure for musical sophistication delineating e.g. musical training, emotional usage of
music, and perception and production abilities. Furthermore, these self-reported
multidimensional profiles of musical sophistication are related to performance on the four
perception and production tasks. The Gold-MSI, as a new tool for the research community,
measures the level of musical sophistication in the non-specialist population on several
distinct dimensions. The question inventory and the ability tests have been psychometrically
optimized and come with data norms from a Western sample of more than 120,000
individuals. The Gold-MSI is fully documented and free to use for research purposes.
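
A hedged sketch of the kind of dimensional analysis described, using exploratory factor
analysis on simulated item responses; the actual Gold-MSI dimensions come from the authors'
own psychometric modelling, not from this toy example.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people, n_items = 1000, 12
latent = rng.normal(size=(n_people, 3))            # 3 simulated latent traits
loadings = rng.normal(scale=0.8, size=(3, n_items))
X = latent @ loadings + rng.normal(scale=0.5, size=(n_people, n_items))

fa = FactorAnalysis(n_components=3, random_state=0).fit(X)
print(np.round(fa.components_, 2))                 # item loadings per factor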


Thursday 26 July
Symposium 2: Grand Pietra Hall, 09:00-11:00
Involuntary Musical Imagery: Exploring earworms

Convener: Victoria Williamson, Discussant: Andrea Halpern



This symposium brings together interdisciplinary perspectives from institutions across three
continents to discuss the phenomenon known as Involuntary Musical Imagery (INMI) or
earworms. INMI describes the experience whereby a tune comes into the mind and repeats
without conscious control. INMI is a ubiquitous occurrence with over 90% of people
reporting it at least once a week (Liikkanen, 2011), yet it is one that has traditionally
received minimal attention from empirical research. In the last five years however, it has
emerged as a rapidly growing, multidisciplinary area of research (Williamson et al. 2011),
the nature of which calls for a robust definition of the topic and scholarly debate on future
paths for investigation. This symposium is the world's first gathering of INMI scholars aimed
at establishing INMI as a legitimate topic for study in cognitive musicology, experimental
psychology and neuroscience. We aim to create an agenda for INMI studies and open up the
discussion by probing several research questions identified thus far. We take in multiple
perspectives, including musicologists studying the structural characteristics of earworm
tunes and psychologists studying the personal factors and situational antecedents that
contribute to an INMI experience and its phenomenology.
The symposium will tackle a number of important questions related to INMI including:
(1) Is INMI a functional part of everyday cognition? If we assume that music has an
evolutionary justification, what purpose would the recurrence of involuntary music serve?
(2) Does the emotional rating or psychophysiological arousal associated with music facilitate
its incidental learning and later occurrence as INMI?
(3) Can musical structures within INMI experiences be systematically described and
compared, leading to a formula for particularly catchy tunes?
(4) What methods are optimal for studying INMI in the lab?

New Directions for Understanding Involuntary Musical Imagery

Lassi A. Liikkanen
Helsinki Institute for Information Technology, Aalto University, Finland
Department of Communications, Stanford University, CA, USA

This paper addresses the state of the art in the study of involuntary musical imagery (INMI),
an emerging topic in psychology. We define INMI as a private, conscious experience of
reliving a musical memory without a deliberate attempt. We review the empirical literature
and draw guidelines for future research on the matter. As an example of a new research
direction, we provide a study of how INMI relates to social interactions in everyday life,
based on a corpus of over one thousand open-ended survey responses. The data show that
INMI can evoke overt behavior and have social consequences. Some people found it difficult
to distinguish their overt spontaneous musical behavior from covert experiences. In
response to an INMI-inspired musical act, many had experienced socially awkward situations
or were consciously trying to avoid public musical expression. At the other end, some people
chose expression and intentionally tried to pass on the earworm, even when they expected
reproach for doing so. These results suggest that INMI is an instance of involuntary music,
sometimes associated with overt behaviors and social consequences. The next steps in the
research on INMI should
be targeted at understanding the psychology underlying this phenomenon more deeply and
socially. Instead of characterizing the phenomenology on different levels, we should seek the
causal mechanisms related to INMI, possibly at the neural level, and differentiate the
components of INMI from each other and from related psychological and psychopathological
phenomena.


Earworms from Three Angles: Situational Antecedents, Personality
Predisposition and a Musical Formula

Victoria J. Williamson, Daniel Müllensiefen


Department of Psychology, Goldsmiths, University of London, London, UK

Involuntary, spontaneous cognitions are common, everyday experiences that occur against a
backdrop of deliberate goal-directed mentation (Christoff, Ream & Gabrieli, 2004). One such
phenomenon may hold special promise for empirical investigation of this often elusive
experience. Involuntary musical imagery (INMI) or earworms are vivid, identifiable, and
affect 91.7% of the population at least once a week (Liikkanen, 2012). Utilizing an online
survey instrument (http://earwormery.com/), we collected several thousand reports of
earworm episodes, in collaboration with the BBC. Study 1 employed a qualitative grounded
theory analysis to explore themes relating to the situational antecedents of INMI experiences
(Williamson et al., 2012). The analysis revealed four main trigger themes for INMI
experiences and categorized the role of different music media. Study 2 used structural
equation modeling (SEM) to relate individual differences in INMI characteristics and isolated
an influence of obsessive-compulsive traits. Study 3 comprised a computational analysis of
the musical structure of several hundred earworm tunes and compared them to matched
control tunes. A statistical classification model was employed to predict whether a tune
could be classified as an earworm based on its melodic features. The use of INMI as a model
of spontaneous cognition has generated findings regarding the phenomenological experience
as well as the role of different behavioural and cognitive contributing factors. This body of
work demonstrates the feasibility of studying spontaneous cognitions through musical
imagery, which has the potential to enhance our understanding of the intricate relationships
between cognitive control, involuntary memory, and the environment.


Arousal, Valence and the Involuntary Musical Image

Freya Bailes
MARCS Institute, University of Western Sydney

The study of the emotional qualities of imagined music is in its infancy. This paper reports
results from a follow-up of Bailes (2006, 2007), with the aim of exploring the relationship
between involuntary musical imagery (INMI) and emotion. Forty-seven respondents, aged
18 to 53 years, were contacted by SMS a total of 42 times over a period of 7 days. At each
contact they were required to fill in a form describing their mood, location and activity, as
well as details of any current musical experience, imagined or heard. A multiple logistic
regression analysis was performed with current musical state at the time of contact as the
dependent variable (hearing music, imagining music, both hearing and imagining music,
neither hearing nor imagining music) and ratings of mood as predictor variables.
Preliminary evidence of a link between arousal and the propensity to experience INMI was
found, showing that self-ratings as 'drowsy' or 'neither alert nor drowsy' at the time of
contact were negatively associated with imagining music. In other words, participants who
did not feel that they were alert were unlikely to be imagining music. Ratings for the mood
pair Happy/Sad, which best exemplifies valence, were not significant predictors of INMI.
Qualitative analyses of responses to an open question about possible reasons for imagining
music are expected to reveal information about the emotional characteristics of the music,
context, and respondent.
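
A sketch of the regression logic reported above, simplified to a binary outcome (imagining
music vs. not) with simulated mood ratings; variable names are illustrative, not the study's.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 47 * 42                          # respondents x contacts, as reported
arousal = rng.uniform(-2, 2, n)      # drowsy ... alert self-rating (toy scale)
valence = rng.uniform(-2, 2, n)      # sad ... happy self-rating (toy scale)
# Simulated effect: alertness raises the probability of imagining music
p_imagine = 1 / (1 + np.exp(-0.8 * arousal))
imagining = (rng.random(n) < p_imagine).astype(int)

X = np.column_stack([arousal, valence])
model = LogisticRegression().fit(X, imagining)
print(model.coef_)                   # arousal coefficient should dominate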


When an everyday-phenomenon becomes clinical: The case of long-term
earworms

Jan Hemming,1 Eckart Altenmüller2


1Music Institute, University of Kassel, Germany, 2Institute for Music Physiology and Musicians'
Medicine, University for Music, Drama and Media Hannover, Germany

Both authors, with backgrounds in musicology and neurology respectively, have in the past
been individually contacted by a number of subjects suffering from long-term 'earworms'. A
closer look at the subjects in question revealed partly clinical conditions (e.g. tinnitus,
hearing loss, depression, hallucinations). Systematic case studies were set up to investigate
the phenomena in detail. Current research on involuntary musical imagery has shown that
music lovers and musicians actually have more 'earworms' than people who do not bother
much about music. As such, the frequency and intensity of 'earworms' might be an indication
of a general affinity to music, which is confirmed by all of the subjects described in the case
studies, and the frequent report of depression adds to the picture of generally increased
sensitivity in life. Also, sensory deprivation through hearing loss seems to cause autonomous
activity of musical networks in the brain. Existing definitions of hallucinations (subjects
believe in the existence of a sound source outside of themselves) as opposed to 'earworms'
or involuntary musical imagery (subjects are aware there is no external sound source, as it is
felt to be located inside the head) still need to be properly applied or clarified. With regard to
tinnitus, it seems its sometimes very clear physical causes (dental and cervical spine
disorders) have been overlooked in favor of neuroscientific approaches. With regard to long-
term 'earworms', the application of anti-depressants seems promising, since these have the
potential of eliminating memory traces. Their combination with psychotherapeutic
treatment can result in significant relief for the affected subject.

Paper Session 20: Crystal Hall, 09:00-11:00


Applications & everyday contexts

The influence of music on gambling: The role of arousal


Stephanie Bramley1, Nicola Dibben2 and Richard Rowe3

1 & 2Department of Music, The University of Sheffield, United Kingdom

3Department of Psychology, The University of Sheffield, United Kingdom


Drawing on research which has investigated music tempo's effect on behaviour in a number
of domains, we consider tempo as a factor which can influence gambling behaviour. We
examine research which has investigated music tempo's influence on gambling behaviour
and consider whether arousal is a psychological mechanism responsible for tempo's
influence on gambling behaviour. This abstract provides the background to a study we have
carried out investigating the influence of music tempo on virtual roulette behaviour, which
tests whether subjective and/or physiological arousal are responsible for music tempo's
effects on gambling behaviour. The findings of our study will be discussed in our conference
presentation. To conclude, we consider the implications, for gamblers, gambling operators
and current gambling practice, of identifying arousal as responsible for music tempo's
influence on gambling behaviour.

The influence of age and music on ergogenic outcomes, energy and affect in
gym-based exercise sessions
Rachel Hallett, Alexandra Lamont
School of Psychological Research, Keele University, UK

Music is frequently used to accompany group and individual exercise to help increase
motivation and enjoyment. It has been suggested that to be motivating, exercise music
should reflect the age of exercisers, but there is little empirical support for this in gym
contexts. This study explores the area using mixed methods, with a qualitative study used to
inform the design of a field-based within-participant quasi-experiment. Sixteen participants
were interviewed about exercise preferences, motivations and media use during exercise
and the data explored using thematic analysis. Results indicated that contemporary music
was widely liked by a 'worker' group of exercisers into their late fifties, while a smaller
'socialiser' group, typically retired, were ambivalent towards music. Twenty-four
participants undertook a treadmill protocol with measurements of distance covered, self-
perceived affect and energy, and liking for each of the three music conditions: contemporary
pop (80-100 bpm), contemporary dance (~130 bpm) and 1960s/1970s pop (~130 bpm). Data
was analyzed by participant age with an over-45 and under-45 group. Although
1960s/1970s music led to slightly superior outcomes for the older group, it was disliked by
the younger group and produced inferior outcomes to the other styles; there was a
significant interaction between age and music preference. The 1960s/1970s music offers
only a modest benefit for older exercisers and appears to alienate younger exercisers. Dance
music, however, appeals to a broad age range and is recommended for gym use, although it
may be advisable to reduce volume when attendance by retired members is high.

A Viable Alternative Music Background As Mediated Intervention For
Increased Driver Safety
Warren Brodsky,1 Micha Kizner2
1Music Science Lab, Department of the Arts, Ben-Gurion University of the Negev, Israel
2Music Education Division, Ministry of Education, State of Israel

In-car music listening requires drivers to process sounds and words, and most sing/tap
along. While it may be difficult to assess music as a risk factor for distraction, previous
studies have reported that momentary peak levels in loud music disrupt vestibulo-ocular
control; that loud music causes a decrease in response time; that arousing music impairs
driving performance; and that quick-paced music increases cruising speed and traffic
violations. It is indeed worrying that drivers underestimate the effects of music, or perceive
decreased vehicular performance due to in-car listening. In the current study we produced
an alternative music background proposed to maintain aural stimuli at moderate levels of
cognitive awareness in an effort to decrease music-generated distraction. After a group of
everyday listeners confirmed the background as suitable for driving in a car, we
implemented two studies: 22 drivers each drove 4 trips while listening to driver-preferred
music brought from home (2 trips) or to the alternative background (2 trips); 31 drivers
each drove 10 trips while listening to the alternative background. In Study 1 we found
criterion-related validity, and the alternative background preoccupied less attention. In
Study 2 we found habituation effects, as well as increased feelings of driver safety and ever-
increasing levels of positive mood. Music designed for driver safety is an important
contribution in the war against traffic accidents and human fatality. One day, such
applications might become a standard form of mediated intervention, especially among
young drivers, who often choose music that is highly energetic and aggressive, consisting of
a fast-tempo accentuated beat played at strong intensity levels of elevated volumes.


Evaluating Crowdsourcing through Amazon Mechanical Turk as a Technique for
Conducting Music Perception Experiments

Jieun Oh, Ge Wang


Center for Computer Research in Music and Acoustics, Department of Music, Stanford
University, USA

Online crowdsourcing marketplaces, such as the Amazon Mechanical Turk, provide an
environment for cost-effective crowdsourcing on a massive scale, leveraging human
intelligence, expertise, and judgment. While the Mechanical Turk is typically used by
businesses to clean data, categorize items, and moderate content, the scientific community,
too, has begun experimenting with it to conduct academic research. In this paper, we
evaluate crowdsourcing as a technique for conducting music perception experiments by first
describing how principles of experimental design can be implemented on the Mechanical
Turk. Then, we discuss the pros and cons of online crowdsourcing with respect to subject
demography, answer quality, recruitment cost, and ethical concerns. Finally, we address
audio-specific factors relevant to researchers in the field of music perception and cognition.
The goal of this review is to offer practical guidelines for designing experiments that best
leverage the benefits and overcome the challenges of employing crowdsourcing as a research
methodology.

Paper Session 21: Dock Six Hall, 09:00-11:00


Learning and skills assessment II

Effects of a classroom-based music program on verbal memory of primary
school children within a longitudinal design
Ingo Roden,1 Stephan Bongard,2 Gunter Kreutz1

1Department of Music, School of Linguistics and Cultural Studies, Carl von Ossietzky University

Oldenburg Germany, 2Department of Psychology, Goethe University Frankfurt, Germany



Previous research showed beneficial influences of music training on verbal memory. We
examined this assumption using a longitudinal study design. The hypothesis that musical
tuition may improve verbal memory was tested in a total of 73 primary school children.
Children either participated in a class-room based music program with weekly sessions of
instrumental tuition (N=25, 14 female, 11 male, mean age 7.32 years) or received an
extended natural science training (N=25, 11 female, 14 male, mean age 7.68 years) at school.
A third group of children received no additional training (N=23, 11 female, 12 male, mean
age 8.22 years). Each child completed a verbal memory test three times over a period of
18 months. Socio-economic background and basic cognitive functions were assessed for each
participant and used as covariates in subsequent analyses of variance (ANOVAs). Significant
Group by Time interactions were found in the measures of verbal learning and verbal
immediate and delayed recall. Children in the music group showed greater improvements in those
measures than children in the control groups. These findings are consistent with previous
research and suggest that children receiving music training may benefit from improvements
in verbal memory.

Assessing young childrens musical enculturation: A novel method for testing
sensitivity to key membership, harmony, and musical metre

Kathleen M. Einarson, Kathleen A. Corrigall, Laurel J. Trainor


Department of Psychology, Neuroscience & Behaviour, McMaster University, Canada

We have developed a novel, video-based paradigm to test Western children's perception of
1) Western tonality (key membership and harmony), and 2) beat alignment in music with
simple or complex metric structure. In the tonal structure task, 4- and 5-year-olds watched
two videos, each of a puppet playing a melody or chord sequence, and gave a prize to the
puppet that played the better song. One puppet played a standard sequence that ended
according to rules of Western harmony, and the other played a deviant version that was
either entirely atonal, or that ended out-of-key or on an unexpected harmony within the key.
For the beat alignment sensitivity test, 5-year-olds judged which of two puppets was a better
drummer, when one was in synchrony with the beat of a musical excerpt and one was either
out of phase or out of tempo with the beat. In the tonal structure task, 5-year-olds selected
the standard version significantly more often than chance for both melodies and chords
when the deviant violated key structure, but not when it violated the expected harmony. 4-
year-olds performed at chance in all conditions. In the metrical task, 5-year-olds selected the
synchronous drumming significantly more often for excerpts with simple metre than
excerpts with complex metre, and their performance was at chance levels for complex metre
excerpts in both the phase error and tempo error conditions. This paradigm shows great
promise for testing other aspects of musical development in young children.
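
The chance-level comparisons reported here are typically evaluated with a binomial test; a
sketch under that assumption, with invented counts:

from scipy.stats import binomtest

n_trials = 24                    # hypothetical trials in one condition
n_standard = 18                  # hypothetical choices of the standard version
res = binomtest(n_standard, n_trials, p=0.5, alternative="greater")
print(f"chose standard {n_standard}/{n_trials}, p = {res.pvalue:.4f}")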

Investigating the associations between musical abilities and precursors of
literacy in preschool children
Franziska Degé, Gudrun Schwarzer
Department of Developmental Psychology, Justus-Liebig-University Giessen, Germany

It was shown that specific music perception abilities are related to reading and phonological
awareness, an important precursor of literacy. Anvari and colleagues (2002) demonstrated
that only part of the association between music perception and reading was explained by
phonological awareness. Therefore, the relationship between other precursors of literacy
and musical abilities needs further investigation. In addition, previous studies have not
investigated the relation between music production abilities and precursors of literacy. Thus,
the aim of our study was twofold. Firstly, we investigated the relation between four
precursors of literacy and musical abilities. Secondly, we included not only music perception
abilities but also music production abilities in our analyses. We tested 55 (28 girls)
preschoolers. We assessed precursors of literacy with a well-established test battery which
comprises four subtests measuring phonological awareness, one subtest on working
memory, one on selective attention, and one on rapid automatized naming. Musical abilities
were tested with a music screening by Jungbluth and Hafen (2005) that contained
comparisons of melody, pitch, rhythm, metre, and tone length as well as the reproduction of
a given rhythm, metre, and song. As control variables, intelligence and socioeconomic status
(measured by parents' education) were assessed. Partial correlations that controlled for
gender, intelligence, and SES revealed a significant positive association between the
aggregated score of phonological awareness and music perception and production abilities.
Furthermore, significant positive associations were revealed between working memory and
the overall scores of music perception and production. We conclude that phonological
awareness and working memory, which are both precursors of literacy, are associated with
musical abilities. Furthermore, we demonstrated that both music perception and music
production abilities are related to phonological awareness and working memory.
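
The partial-correlation step described (controlling for gender, intelligence, and SES) can be
sketched by residualizing both variables on the covariates; the data below are simulated and
all effect sizes are arbitrary.

import numpy as np
from scipy.stats import pearsonr

def partial_corr(x, y, covars):
    """Correlate the residuals of x and y after regressing out covariates."""
    Z = np.column_stack([np.ones(len(x)), covars])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return pearsonr(rx, ry)

rng = np.random.default_rng(3)
n = 55                                            # sample size, as reported
iq = rng.normal(100, 15, n)
music = 0.3 * iq + rng.normal(0, 10, n)           # toy music-ability score
phono = 0.3 * iq + 0.4 * music + rng.normal(0, 10, n)  # toy phonological score
covars = np.column_stack([rng.integers(0, 2, n),  # gender (toy coding)
                          iq, rng.normal(0, 1, n)])    # IQ, SES (toy)
print(partial_corr(music, phono, covars))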


The cognition of Grouping Structure in real-time listening of music: A GTTM-based
empirical study on 6- and 8-year-old children

Dimitra Koniari,1 Costas Tsougras2


1Department of Music Science and Art, University of Macedonia, Greece
2School of Music Studies, Aristotle University of Thessaloniki, Greece

The aims of the present study are: a) to investigate how children of average ages 6 and 8
segment a musical piece during real-time listening, b) to compare children's indicated
segment boundaries with boundaries obtained by the segmentation of the piece by adults
(musicians and nonmusicians), and c) to compare the adults' and children's segmentation
profiles to the structural boundaries predicted in a previous study by a full analysis of the
piece according to the principles of GTTM. 70 children of average ages 6 and 8 (n = 35 per
grade) participated in the empirical study, as well as 50 adults (25 musicians and 25
nonmusicians). The performed boundaries were placed into two categories, depending on
whether or not they were predicted by the analysis of the piece using the Generative Theory
of Tonal Music (GTTM). Participants indicated a maximum of 38 segment boundaries: 16
corresponded to the boundaries predicted by the GTTM analysis of the piece, and 22 did not.
The deviations in the range of values obtained from the 38 segment boundaries are also
justified by the theory's principle of hierarchy, by the GS (grouping structure) and TSR
(time-span reduction) preference rules, and by the idiomatic features of the selected piece.
The results suggest that even by the age of 6, children can perceive the grouping structure of
a piece in accordance with the general laws expressed by the GTTM, and that by the age of 8
children are almost fully experienced listeners of their musical culture, in accordance with
the GTTM's principles.
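
A sketch of the boundary-categorization step, assuming listener-indicated boundaries are
matched to analytically predicted ones within a tolerance window; the times and the 1 s
window are illustrative, not the study's parameters.

def match_boundaries(indicated, predicted, tol=1.0):
    """Split indicated boundary times (s) into predicted vs. unpredicted."""
    hits = [t for t in indicated
            if any(abs(t - p) <= tol for p in predicted)]
    misses = [t for t in indicated if t not in hits]
    return hits, misses

predicted = [4.0, 12.5, 20.0, 31.0]        # e.g., GTTM-derived boundaries (toy)
indicated = [4.3, 9.8, 19.6, 25.1, 30.7]   # e.g., one group's responses (toy)
hits, misses = match_boundaries(indicated, predicted)
print(f"{len(hits)} predicted, {len(misses)} not predicted")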

Paper Session 22: Timber I Hall, 09:00-11:00


Neuroscience Perspectives

Abductive Reasoning, Brain Mechanisms and Creative Cognition: Three Perspectives
Leading to the Assumption that Perception of Music Starts from the Insight of Listeners

Sebastian Schmidt, Thomas A. Troge, Denis Lorrain


Institute for Musicology and Music Informatics, University of Music Karlsruhe, Germany

A theory of listening to music is proposed. It is suggested that, for listeners, the process of
prediction is the starting point for experiencing music. Indications for this proposal are
defined and discussed within perspectives of cognitive science, philosophy and experimental
psychology, leading to a more structured thesis that the perception of music starts from the
inside, through both a pre-wired and an experience-based extrapolation into the future (we
call this 'a-priori listening'). In a second step, we propose that a general a-priori listening is
involved in processes of creative cognition, or, that is to say, that creative cognition is the
necessary component of a-priori listening. Finally, based on the precondition that music
should not be thoroughly expected, we outline a perspective of listening to music as a set of
creative processes which constantly interact.

Interaction between melodic expectation and syntactical/semantic processes
on evoked and oscillatory neural responses

Elisa Carrus,1 Marcus Pearce,2 Joydeep Bhattacharya1


1Department of Psychology, Goldsmiths, University of London, UK
2Center for Digital Music, School of Electronic Engineering & Computer Science, Queen Mary,
University of London, UK

Electrophysiological studies have shown support for a neural overlap during structural
processing of music and language (Patel, 1998; Koelsch et al, 2005; Carrus et al, 2011).
Although previous studies have used harmonic stimuli, studying the effect of melodic
expectation is fundamental for an understanding of the extent to which music and language
share neural resources. This study aimed at investigating the neural interaction between
these two domains by using stimuli constructed with a computational model (Pearce, 2005).
Melodies ended with either a high-probability (expected) or a low-probability (unexpected)
note (Pearce, 2005). Sentences ended with one of the following types of words: a correct
word, a semantically incongruent word, a syntactically incorrect word, or a word with a
combined syntactic-semantic violation. Music and language were presented in sync and
both consisted of five elements. Participants responded to the acceptability of sentences
while the EEG was recorded. The analysis of event-related potentials and oscillations showed
a neural interaction between music and language processing. This was reflected in a
decrease of the LAN (Left Anterior Negativity) when syntactically incorrect sentences were
presented with a low-probability note and in a decrease of low-frequency (1-7 Hz)
oscillatory power soon after the simultaneous presence of violations in music and language
but only for single syntactic and single semantic violations. This study provides the first
evidence to show neural interactions between melodic processing and language processing.
The results are interpreted in the context of the framework of shared neural resources
between music and language advanced by Patel (2003).
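
One ingredient of the reported oscillatory analysis, low-frequency (1-7 Hz) power for a
single channel, can be sketched with Welch's method; the signal, sampling rate, and epoch
length below are assumptions, not the paper's pipeline.

import numpy as np
from scipy.signal import welch

fs = 250                                     # assumed sampling rate (Hz)
rng = np.random.default_rng(6)
t = np.arange(0, 2.0, 1 / fs)                # one simulated 2-s epoch
eeg = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs)   # 1-s windows -> 1 Hz resolution
band = (freqs >= 1) & (freqs <= 7)
print(f"mean 1-7 Hz power: {psd[band].mean():.3f} (a.u.)")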


BAASTA: Battery for the Assessment of Auditory Sensorimotor and Timing
Abilities

Nicolas Farrugia, Charles-Etienne Benoit, Eleanor Harding, Sonja A. Kotz, Simone Dalla Bella
Department of Cognitive Psychology, WSFiZ in Warsaw, Poland
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
EUROMOV, M2H Laboratory, Université de Montpellier I, France

In this paper we describe the Battery for the Assessment of Auditory Sensorimotor and
Timing Abilities (BAASTA), a new tool developed for systematically assessing rhythm
perception and auditory-motor coupling. BAASTA includes perceptual tasks and
Sensorimotor Synchronization (SMS) tasks. In the perceptual tasks, auditory thresholds in a
duration discrimination task and in anisochrony detection tasks (i.e., with an isochronous
sequence and with music) are measured via the Maximum Likelihood Procedure (MLP). In
addition, a customized version of the Beat Alignment Task (BAT) is performed to assess
participants' ability to perform beat extraction with musical stimuli. Tapping tasks are used
to assess participants' SMS abilities, including hand tapping along with isochronous
sequences and music, and tapping to sequences presenting a tempo change. The battery is
validated in young expert musicians and age-matched non-musicians, as well as in aged
participants. In addition, the results from 3 cases of patients with Parkinson's disease are
presented. BAASTA is sensitive to differences linked to musical training; moreover, the
battery can serve to characterize differences among individuals (e.g., patients with
neurodegenerative disorders) in terms of sensorimotor and rhythm perception abilities.


EEG-based emotion perception during music listening

Konstantinos Trochidis,1 Emmanuel Bigand2


1Department of Music Research, McGill University, Canada
2Department of Cognitive Psychology, University of Burgundy, France

In the present study correlations between electroencephalographic (EEG) activity and
emotional responses during music listening were investigated. Carefully selected musical
excerpts of classical music tested in previous studies were employed as stimuli. During the
experiments EEG activity was recorded in different regions without a-priori defining regions
of interest. The analysis of the data was performed in both alpha and theta bands. Consistent
with existing findings, the results in alpha band confirm the hemispheric specialization
hypothesis for emotional valence. Positively valenced emotions (happy and serene) elicited
greater relative left EEG activity, whereas negatively valenced emotions (angry and sad)
elicited greater relative right EEG activity. The results show interesting findings related to
the affective dimension (arousal and valence) by electrodes in different brain regions that
might be useful in extracting effective features for emotion recognition applications.
Moreover, theta asymmetries observed between pleasant and unpleasant musical excerpts
support the hypothesis that theta power may have a more important role in emotion
processing than previously believed and should be more carefully considered in future
studies.
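
Hemispheric valence effects of this kind are commonly quantified with a frontal alpha-
asymmetry index, ln(right) minus ln(left) alpha power; a sketch under that assumption,
with placeholder channel values:

import numpy as np

alpha = {"F3": 4.2, "F4": 3.1}   # hypothetical alpha power (uV^2), left/right
# Alpha power is inversely related to activation, so a HIGHER index (more
# right-side alpha) is read as relatively greater LEFT-hemisphere activity.
asym = np.log(alpha["F4"]) - np.log(alpha["F3"])
print(f"frontal alpha asymmetry: {asym:.3f}")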

Paper Session 23: Timber II Hall, 09:00-11:00


Motion and coordination in performance

Examining finger-wrist joint-angle structure in piano playing with motion-capture
technology

Werner Goebl,* Caroline Palmer#


*Institute of Music Acoustics, University of Music and Performing Arts Vienna, Austria
#Department of Psychology, McGill University, Canada

Piano technique is acquired over decades of practice and piano educators disagree about the
nature of a good technique and the way to achieve it. Particularly when performing very
fast passages, movement efficiency seems to be an important factor. This study investigates
the movement structure of highly skilled pianists performing simple passages faster and
faster until they reach their individual limits. A 3D motion-capture system tracked small
reflective markers placed on all finger joints, the hand and the forearm of twelve highly
skilled pianists performing a simple isochronous melody at different tempi. The pianists
started with a medium fast tempo (7 tones per second, TPS, timed by a metronome in a
synchronization-continuation paradigm) that was increased after each trial until the pianists
decided to stop. They performed on a digital piano that recorded onset timing for subsequent
analysis. Joint angle trajectories were computed from the three-dimensional marker position
for all adjacent finger phalanges (DIP, PIP), and the hand (MCP) and the forearm (wrist angle
and wrist rotation). We compare timing measures (CV and timing error of IOI patterns) with
an efficiency measure of finger and wrist kinematics to identify motion features that are
typical for successful fast performers. The rounded finger shape was stable and showed
slight extension in fast pianists, but showed large variability in slow pianists. This study
delivers detailed insights into the joint angle structure of skilled pianists performing at fast
tempi, focusing on the individual differences between performers, and proposes kinematic
markers of successful performers.
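
The joint-angle computation described, the angle at a joint marker between its two adjacent
segment vectors, reduces to a dot product; the marker coordinates below are illustrative.

import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at marker b formed by segments b->a and b->c."""
    u, v = np.asarray(a) - b, np.asarray(c) - b
    cos_theta = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

mcp = np.array([0.0, 0.0, 0.0])      # knuckle marker (toy coordinates)
pip = np.array([0.0, 3.5, -0.5])     # middle-joint marker
dip = np.array([0.0, 6.0, -2.0])     # distal-joint marker
print(f"PIP angle: {joint_angle(mcp, pip, dip):.1f} deg")  # ~157 deg here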

Measuring tongue and finger coordination in saxophone performance

Alex Hofmann,* Werner Goebl,* Michael Weilguni,# Alexander Mayer,* Walter Smetana#
*Institute of Music Acoustics, University of Music and Performing Arts Vienna, Austria
#Institute of Sensor and Actuator Systems, Vienna University of Technology, Austria

When playing wind instruments, the fingers of the two hands have to be coordinated together
with the tongue. In this study, we aim to investigate the interaction between finger and
tongue movements in portato playing. Saxophone students played on a sensor-equipped alto
saxophone. Force sensors attached to 3 saxophone keys measured finger forces of the left
hand; a strain gauge glued onto a synthetic saxophone reed measured the reed bending.
Participants performed a 24-tone melody in three tempo conditions timed by a metronome
in a synchronization-continuation paradigm. Distinct landmarks were identified in the
sensor data: A tongue-reed contact (TRC) occurred when the reed vibration was stopped by
the tongue, a tongue-reed release (TRR) at the beginning of next tone, and in the finger force
data a key-bottom contact (KB) at the end of the key motion. The tongue-reed contact
duration (from TRC to TRR) was 34.5 ms on average (SD = 5.84), independent of tempo
condition. Timing accuracy and precision was determined from consecutive TRRs. We
contrasted tones that required only tongue impulses for onset timing to those that required
also finger movements. Timing accuracy was better for combined tongue-finger actions than
for tongued timing only. This suggests that finger movements support timing accuracy in
saxophone playing.
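
A hedged sketch of how TRC/TRR landmarks might be extracted from a reed-bending signal,
assuming an envelope-threshold approach; the sampling rate, threshold, and envelope method
are assumptions, not the authors' algorithm.

import numpy as np

fs = 11025                                  # assumed sensor sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
reed = np.sin(2 * np.pi * 220 * t)          # toy reed-bending oscillation
reed[int(0.40 * fs):int(0.44 * fs)] = 0.0   # simulated 40 ms tongued gap

win = int(0.005 * fs)                       # 5 ms moving-RMS envelope
rms = np.sqrt(np.convolve(reed ** 2, np.ones(win) / win, mode="same"))
vibrating = rms > 0.1                       # assumed vibration threshold
edges = np.diff(vibrating.astype(int))
trc = t[np.where(edges == -1)[0]]           # vibration stops: tongue contact
trr = t[np.where(edges == 1)[0]]            # vibration resumes: tongue release
print("TRC (s):", trc, "TRR (s):", trr)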


Timing and synchronization of professional musicians: A comparison between
orchestral brass and string players

Jan Stoklasa, Christoph Liebermann & Timo Fischinger


Institute of Music, University of Kassel, Germany

Musicians have to coordinate complex rhythmic movements when playing their musical
instruments. They need years of deliberate practice to learn how to adjust their timing
behavior as well as possible to the acoustic characteristics of their own instrument and to
their spatial position in the orchestra. Since most research on sensorimotor
synchronization behavior has mainly focused on the analysis of finger tapping tasks, we
conducted an experiment using a novel experimental paradigm to investigate the timing
skills of professional musicians by playing their own musical instruments. The aim was to
examine whether orchestral brass and string players show differences in synchronization
performance under varying conditions. 21 musicians from a professional orchestra in
Germany participated in the study. In the first experiment, subjects
had to synchronize by playing their own instrument (violin, viola, trumpet, trombone) with a
simple metronome sequence (in each case the stimulus sound was the same as the
instrument sound) in varying trials with different interstimulus-onset intervals. In a second
experiment, subjects had to perform the classical finger tapping synchronization task to
metronome sequences on a drum pad (same IOIs as in the first experiment). The results
show considerable differences in synchronization performance: subjects showed a very low
synchronization error in the first experiment, when synchronizing by playing their own
instrument (-2.06 ms; SD = 10.92), compared to the second experiment with the classical
tapping task (-12.60 ms; SD = 8.38).
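
The two synchronization measures reported, signed mean asynchrony and its standard
deviation, reduce to a few lines; the onsets below are simulated, not the study's data.

import numpy as np

ioi = 0.600                                 # metronome inter-onset interval (s)
clicks = np.arange(20) * ioi                # 20 metronome onsets
rng = np.random.default_rng(7)
onsets = clicks - 0.002 + rng.normal(0.0, 0.011, 20)   # toy played onsets

asyn_ms = (onsets - clicks) * 1000          # negative = ahead of the click
print(f"mean = {asyn_ms.mean():.2f} ms, SD = {asyn_ms.std(ddof=1):.2f} ms")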


Conveying Syncopation in Music Performance

Dirk Moelants
IPEM-Dept. of Musicology, Ghent University, Belgium

This paper investigates if and how musicians can convey syncopation without the presence
of a fixed metric framework. In a first experiment 20 professional musicians played a series
of simple melodies in both a metrically regular version and a syncopated version. These were
analyzed using a series of audio parameters. This analysis shows a series of methods used by
musicians to convey syncopation, using timing, dynamics as well as articulation. A selection
of the melodies was then presented to 16 subjects in a second experiment, both audio-only
and with video, asking them to identify each melody as syncopated or regular. The results of this
experiment show that, although some expressive cues seem to help the recognition of
syncopation, it remains hard to communicate this unnatural rhythmic structure without a
metric framework. Analysis of the videos shows that when musicians do provide such a
framework using their body, it influences the results positively.

Paper Session 24: Grand Pietra Hall, 11:30-13:30


Performer perspectives

An ecological approach to score-familiarity: representing a performer's
developing relationship with her score

Vanessa Hawes
Department of Music and Performing Arts, Canterbury Christ Church University, UK

This paper aims to link qualitative, empirical approaches from performance analysis with
analytical and musicological issues. An ecological approach to perception frames an
exploration of experiential (performative) and structural (analytical) affordances. A singer's
developing relationship with songs IV and V from Schoenberg's song cycle Das Buch der
hängenden Gärten, Op. 15 (1908-9), is recorded in two ways: videoing rehearsals from first
contact with score to performance; and reflective comments about the songs and her
learning process through interview and marked scores. As an atonal work, the cycle
provides a subject for the study of the singer's experience independent of tonality as an
overwhelming structural affordance. Detailed analytical studies of the song cycle provide a
rich source-set from which to draw in discussing structural affordances. Songs IV and V
were chosen because they occur at a moment of dramatic importance, as the narrator
realizes the extent of the love that drives the cycle (Song IV) and surrenders to it (Song V).
Forte's 1992 article about the Opus 15 cycle provides the analytical focus, an article that
identifies linear motivic tetrachords in the cycle, revealing them in the fore-, middle- and
background of the songs' structure. Analysis of the videoed rehearsals provides an alternate
analytic reading of the songs based on performative affordances, and the analysis of
interview data furnishes us with another. These two alternate readings adjust and enhance
Fortes analysis, a direction of analytic/interpretive influence from expression to structure,
and the result is related back to issues about the songs meaning.

Predicting expressive timing and perceived tension in performances of an
unmeasured prelude using the IDyOM model

Bruno Gingras*#, Meghan Goodchild#, Roger Dean, Marcus Pearce+, Geraint Wiggins+,
Stephen McAdams#
* Department of Cognitive Biology, University of Vienna, Vienna, Austria
# CIRMMT, Schulich School of Music, McGill University, Canada
MARCS Auditory Laboratories, University of Western Sydney, Australia
+School of Electronic Engineering and Computer Science, Queen Mary, University of London, UK

Studies comparing the influences of different performances of a piece on the listener's
aesthetic responses are constrained by the fact that, in most pieces, the metrical and formal
structure provided by the score limits the performer's interpretative freedom. As a semi-
improvisatory genre which does not specify a rigid metrical structure, the unmeasured
prelude provides an ideal repertoire for investigating the links between musical structure,
expressive strategies in performance, and listeners' responses. Twelve professional
harpsichordists recorded two interpretations of the Prélude non mesuré No. 7 by Louis
Couperin on a harpsichord equipped with a MIDI console. The MIDI data was analyzed using
a score-performance matching algorithm. Subsequently, 20 nonmusicians, 20 musicians, and
10 harpsichordists listened to these performances and rated the perceived tension in a
continuous manner using a slider. Melodic expectation was assessed using a probabilistic
model (IDyOM) whose expectations have been shown to match closely those of human
listeners in previous research. Time series analysis techniques were used to investigate
predictive relationships between melodic expectations and the performance and perceptual
parameters. Results show that, in a semi-improvisatory genre such as the unmeasured
prelude, predictability of expectation based on melodic structure has a measurable influence
on local tempo variations.
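
The time series techniques are not specified further in the abstract; purely as an illustration of the general approach, predictive relationships between an expectation series and a performance or tension series could be screened with lagged correlations (all names hypothetical; a real analysis would align sampling rates and typically prewhiten the series first):

    import numpy as np

    def lagged_correlations(expectation, response, max_lag):
        """Correlation at each lag; at positive lags the response series is
        compared with earlier values of the expectation series."""
        e = (np.asarray(expectation, float) - np.mean(expectation)) / np.std(expectation)
        r = (np.asarray(response, float) - np.mean(response)) / np.std(response)
        corr = {}
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = e[:len(e) - lag], r[lag:]
            else:
                a, b = e[-lag:], r[:lag]
            corr[lag] = float(np.mean(a * b))
        return corr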


Effects of Melodic Structure and Meter on the Sight-reading Performances of
Beginners and Advanced Pianists

Mayumi Adachi,* Kazuma Takiuchi,* Haruka Shoda*,#


*Dept. of Psychology, Hokkaido University, Japan
#The Japan Society for the Promotion of Science, Japan

We explored how the melodic structure (which can determine the fingering) and the meter
would affect visual encoding (i.e., fixation measured by an eye-tracking device), visuo-motor
coordination (i.e., eye-hand span), and execution (i.e., mistakes, stuttering) in beginners'
sight-reading performances in comparison to advanced pianists'. Eighteen
students (9 beginners and 9 advanced pianists) sight-read simple melodic scores,
consisting of the step-wise, the skip-wise, or the combined structure, written in 3/4, 4/4, or
5/4. Results indicated that the melodic structure affected the beginners' encoding and
execution. The combined structure had the beginners spend more time in saccade (rather
than in fixation) and stutter more often than the step-wise or the skip-wise structure. The
meter, on the other hand, affected the advanced pianists' visuo-motor coordination and
execution. The complex meter (i.e., 5/4) resulted in a shorter eye-hand span for the advanced
pianists than the simple meters (i.e., 3/4, 4/4), in line with Chang (1993), and in more rhythm
errors than in 4/4. The beginners' sight-reading was less efficient than the advanced pianists'
in visual encoding, in visuo-motor coordination, and in execution. Nonetheless, the beginners
could read 0.52 notes ahead of what was being played regardless of the meter or the melodic
structure of the score.


The Sound of Emotion: The Effect of Performers' Emotions on Auditory Performance Characteristics

Anemone G. W. van Zijl, Petri Toiviainen, Geoff Luck


Department of Music, University of Jyväskylä, Finland

Do performers who feel sad sound different compared to those who express sadness?
Despite an extensive literature on the perception of musical emotions, little is known about
the role of performers' experienced emotions in the construction of an emotionally
expressive performance. Here, we investigate the effect of performers' experienced emotions
on the auditory characteristics of their performances. Seventy-two audio recordings were
made of four amateur and four professional violinists playing the same melodic phrase in
response to three different instructions. Participants were first asked to focus on the
technical aspects of their playing. Second, to give an expressive performance. Third, to focus
on their experienced emotions, prior to which they were subjected to a sadness-inducing
mood induction task. Performers were interviewed about their thoughts and feelings after
each playing condition. Statistical and computational analyses of audio features revealed
differences between the performance conditions. The Expressive performances revealed the
highest values for playing tempo, dynamics, and articulatory features such as the attack
slope. The Emotional performances, in contrast, revealed the lowest values for all of these
features. In addition, clear differences were found between the performances of the amateur
and professional performers. The present study provides concrete evidence that performers
who feel sad do sound different compared to those who express sadness.

Paper Session 25: Crystal Hall, 11:30-13:30


Music in the classroom

Differences in Mental Strategies and Practice Behavior of Musically Average and Highly Gifted Adolescents in Germany
Stella Kaczmarek
Faculty of Music, University of Paderborn, Germany

The amount of research on instrumental practice, and demand for this topic, has increased
greatly in the last decade. More than half of all research concerns professional musicians,
and there is relatively little research carried out with children or adolescents. The aim of
this paper is to present a recent study on musically gifted adolescents in Germany. Research
participants were young students who participated in a special study program at music
conservatories in Germany (Hannover, Cologne and Detmold). Participants of the control
group were average music students from a local music school in Paderborn. Two
questionnaires were used in which the young musicians were asked to reflect on their
practice behavior, practice strategies, and strategies of mental rehearsal. Analysis suggests
that highly gifted adolescents, in comparison to average music students, have greater
knowledge regarding the use of appropriate planning and evaluation strategies. We found
significant differences in the use of mental strategies between the two groups on only one
scale, which suggests that experts do not always surpass average music students in mental
rehearsal.

Competencies and model-based items in music theory and aural training in
preparation for entrance exams

Anna Wolf, Friedrich Platz, Reinhard Kopiez


Hanover Music Lab, Hanover University of Music, Drama and Media, Germany

The study of music theory is part of any musicology and music education degree in Germany.
To enter such a study programme, every prospective student needs to pass an entrance exam
in aural training and music theory. Although these tests decide on the professional future of
young people, they lack a theoretical, model-based validation. A chord labelling task from an
entrance exam (n = 124) was analyzed; it consists of 15 chords in each of the two versions of
the task. The items of the chord labelling task represent too narrow a range of difficulty
(from -1.2 to +1.3 logits), and five items even needed to be removed due to differential item
functioning (Wolf, Platz & Kopiez, 2012). Subsequently, a questionnaire with music theory
items will be prepared by music theory experts and will consist of approximately twenty
items. These items will be completed by students preparing for an entrance exam. The
upcoming analysis using item response theory will provide data about each of the
items, resulting in its removal, revision or retention. In the latter case, item
characteristics such as its difficulty allow for a classification of the item into the competency
model. Following these steps we will produce a competency model for music theory and aural
training. As this model will be based on empirical data from students training in music theory
and aural training, we can integrate both disciplines into music pedagogy and instrumental
training and enable the understanding of music as a generalizable process.

The influence of the visual representation of the notation system on the experience of time among young music players

Tirtsa Yovel, Roni Y. Granot


Hebrew University of Jerusalem, Israel

Music notation embodies the metaphor of music as motion in time and space (Johnson &
Larson, 2003). Notes can be viewed as analogous to objects along the route defined by the
musical staff. As such, principles of motion may be used in the translation from the visual
information of the notation (length and density) into realized time, creating possible biases
related to our experience of motion in space. In the current study we measured the playing
tempo of 61 children (aged 6.9-14.4) who performed and verbally responded to a set of
musical examples presenting various manipulations on the length of the staff and the density
of the written notes. In order to determine their developmental stage the children were also
tested for weight conservation and time perception (Piaget, 1969). Results indicate a clear
influence of the manipulated variables on playing tempo when manipulations were applied
to the entire staff, but not when limited to a single measure. In general, short and/or dense
visual information led to faster tempi. This was obtained despite an explicit understanding of
the irrelevance of these variables to the temporal interpretation of the notation, and could
not be explained by participants' developmental stage, or ability to maintain a steady beat.
Moreover, even priming with a metronome did not abolish the effect. We discuss
implications for our understanding of the metaphor of time-space and motion in music, and
implications for music pedagogy.


When students are learning and when they are performing in instrumental
lesson interactions: A conversational analysis approach

Antonia Ivaldi
Department of Psychology, Aberystwyth University, Wales, UK

Within the growth of qualitative research in music psychology there has been an attempt to
explore the interactions that take place between teachers and students in music lessons. This
research, however, has yet to look at the turn by turn talk that takes place in pedagogical
discourse, in addition to exploring how playing, singing and demonstrating are woven into
the sequence of the interaction. The study's aim is to examine how students indicate to the
teacher when they are learning and when they are performing within the lesson, and how
this is received, taken up, and orientated to by the teacher as a performance or as part of a
more complex pedagogical process. 17 video recordings were made of UK conservatoire
music lessons which lasted between 50 minutes and two hours. Relevant extracts were then
selected and transcribed further using Jefferson system conventions. Employing
conversation analysis (CA) techniques such as turn-taking, repair, overlap, pauses, etc., the
analysis will explore how the teacher orients to the student's playing and talk as being either
performance ready, or one that indicates that learning is still taking place. CA offers a unique
opportunity for teachers and students to demonstrate more fully how the interaction within
music lessons presents a complex interplay between talk and the playing and demonstration
of instruments, which in turn results in the student and teacher continually moving between
learning and performance within the lesson. The implications for instrumental teachers and
their students will be discussed.

Paper Session 26: Dock Six Hall, 11:30-13:30


Music - Identity - Community

Music and Identity: The Effect of Background Music on Israeli Palestinians' Salience of Ethnic Identity

Naomi Ziv,* Ahlam Rahal #


*Psychology Dept., College of Management Academic Studies, Israel
#Education Dept., Max Stern Academic College, Israel

The development of identity is an important aspect of adolescence. Music plays an important
part in constructing identity at this age. Israeli Palestinians constitute an ethnic minority,
whose sense of identity may be split between their civic identity, as Israeli citizens, and
ethnic identity, as Palestinians. The aim of the present study was to examine the effect of
background music on the salience of ethnic identity in Israeli Palestinian adolescents. 90
boys and 152 girls participated in the study. Participants were randomly assigned to four
groups. Three groups heard national songs, Arab love songs, or English rock songs, and were
asked to write associations to them; the fourth group heard no music. All participants
completed an ethnic identity questionnaire.
Results showed higher scores on ethnic identity with all types of music compared to no
music. A significant effect of music type was found for affect associated to music type. Gender
differences were found in the effect of music on ethnic identity, and in the relationship
between associations and type of music.

Identity Dimensions and Age as Predictors of Adult Music Preferences

Richard Leadbeater
Lancaster Institute for the Contemporary Arts, Lancaster University, England

Recent empirical research in music psychology has established that personality trait
profiling may provide a reliable prediction of music preferences. However, research on
music preferences has largely focused on the adolescent age group. Whether adults similarly
use music as a tool to construct and reconstruct identities following lifespan experiences is
largely understudied. This paper presents the results of an on-line survey which was carried
out at Lancaster University to expand recent empirical research on music preferences. The
aim of the study was to explore the relationship between personality traits, age, estimated IQ
and identity dimensions as predictors of music preferences. A large sample (n = 768), with
ages ranging from 17 to 66 (M = 23.9; SD = 8.95), completed the survey. Music preference
ratings were assessed using STOMP-R. The BFI and the EIPQ were used for personality trait
and identity status measurement respectively. Results largely supported recent research,
with one notable exception: there was almost zero correlation between Openness and the
Upbeat and Conventional dimension, as opposed to a significant negative correlation.
Standard multiple regression analysis revealed highly significant effects of the Exploration
identity dimension, Age and Openness in predicting a preference for Rhythmic and Complex
music. Interestingly, adjusted R² values suggest that these variables account for less than
20% of the variance in music preferences. Consequently, further research on music
preferences may adopt a more socially constructive methodology to identify how music
preference selection reflects evolving salient identities.


Why not knitting? Amateur music-making across the lifespan

Alexandra Lamont
Centre for Psychological Research, Keele University, United Kingdom

Musical identity lies at the core of understanding people's motivations and patterns of
engagement with music. Much research has explored this in relation to professional
musicians and music teachers, but less attention has been given to amateurs. A growing
body of work shows that involvement in musical activities, particularly in later life, has
powerful effects on health and wellbeing. However, less is known about how involvement
can be supported over long timeframes spanning many years. This study explores
retrospective memories of music making and aims to uncover the features that prevent or
support amateurs in developing and sustaining (and sometimes resuscitating) a musical
identity. Data was gathered from online surveys (530 participants) and follow-up interviews
with adult amateur musicians. Participants ranged in age from 21 to 83 and took part in a
very diverse range of musical activities. Despite being actively involved in music, they did not
all have a strong musical identity. Different patterns of motivation can be discerned,
including the traditional pattern of a highly motivated child leading to continuous
involvement in music, but also adults with far more patchy musical careers. While all
participants had a guiding musical passion or a core musical identity, this sometimes takes
time to find full expression, depending on circumstances and pressures of everyday life.
General life crises and transitions (such as having a family, relocation or retirement) can
create barriers to involvement but also opportunities to re-engage. Involvement in music
also provides a way of managing life transitions and crises.


Young People's Use and Subjective Experience of Music Outside School

Ruth Herbert
Music Dept., Open University, UK

Few studies of everyday musical engagement have focused on the subjective 'feel'
(phenomenology) of unfolding, lived experience. Additionally, the musical experiences of
children and young adolescents are currently under-represented in the literature. This
paper constitutes an in-progress report of the preliminary stage of a mixed method three
year empirical enquiry, designed to explore psychological characteristics of the subjective
experience of young people hearing music in everyday, 'real world' scenarios in the UK.
aims were to identify varied modes of listening, to pinpoint whether these are age-related, to
explore the extent to which young people use music as a form of escape (dissociation) from
self, activity, or situation. 25 participants (aged 10-18) were interviewed and subsequently
kept diaries of their music-listening experiences for two weeks. Data was subjected to
Interpretative Phenomenological Analysis (IPA). Key themes identified include the use of
music to lend a sense of momentum, energy and excitement to mundane scenarios, to
dissociate or 'zone out' from aspects of self and/or situation, to feel relaxed, to feel
'connected', to articulate moods and emotions, to aid daydreams/imaginative fantasies and
to provide a framework through which to explore emotions vicariously, using music as a
template for modelling future emotional experience. Subjective experience was frequently
characterised by a fusion of modalities.

Symposium 3: Timber I Hall, 11:30-13:30


Emotion regulation through music: understanding the mechanisms,
individual differences, and situational influences

Convener: Suvi Saarikallio, Discussant: Daniel Västfjäll



Emotion regulation is one of the very reasons why people engage with music in everyday life,
and research on the topic has been growing rapidly. Recent studies have identified music-
related affect-regulatory strategies and emotion induction mechanisms, and have proposed
connections to personality, emotionality, and musical engagement. However, we still know
little about the details of the underlying psychological and physiological mechanisms,
individual differences, and contextual influences on this regulatory behaviour. This
symposium brings together an international group of researchers approaching the topic of
music and emotion regulation from five complementary perspectives: TanChyuan Chin
provides a detailed look at the physiological mechanisms underlying music-related emotion
regulation, and presents a study about the EEG parameters connected to emotion regulation
through music. Annemieke Van den Tol focuses on the psychological mechanisms and
processes that guide mood enhancement after listening to sad music when feeling sad. Marie
Helsing brings in the topic of individual differences of music-related emotion regulation in
the context of everyday life, and presents studies that investigated the effects of music on
mood improvement and stress reduction in everyday life episodes. William Randall further
elaborates the topic of contextual influences on music-related emotion regulation by
presenting a study conducted through real time sampling methodology using current
portable technology. Suvi Saarikallio discusses the perspective of individual differences over
the course of lifespan and presents a study that demonstrates age-related differences in
music-related emotion regulation across adolescence and adulthood. In conclusion of the
symposium, Daniel Västfjäll brings the varying viewpoints together as discussant.

A self-regulatory perspective on choosing sad music to enhance mood

Annemieke J. M. Van den Tol, Jane Edwards


Irish World Academy of Music and Dance, University of Limerick, Ireland

Many people choose to listen to self-identified sad music when they experience negative life
circumstances. Music listening in such circumstances can serve a variety of important self-
regulatory goals (Saarikallio and Erkkilä, 2007; Van den Tol and Edwards, 2011). Listening
to sad music can help people to cope with a problem in the long term through offering
opportunities for reflection, learning, and reinterpreting the situation. In addition, after
listening to sad music, adults report that they feel better in a range of ways (Van den Tol and
Edwards, 2011). The aim of the current research is to gain more insight into the psychological
processes that guide mood enhancement after listening to sad music when feeling sad. To
investigate this aim, a correlational study was designed based on our previous
insights into sad music listening (Van den Tol and Edwards, 2011). A total of 220 participants
volunteered to rate statements in relation to their sad music listening experiences when
feeling sad. Several distinct strategies are identified that people employ for selecting specific
sad music, such as the selection of sad music based on subjective high aesthetic value, or the
selection of music based on momentary identification/connection with the affective sound of
the music or lyrics of the song. These strategies are guided by several distinct self-regulatory
goals that self-identified sad music can serve during listening. In an explanatory model we
will give an overview of how different factors play a role in self-regulation and of how these
can result in mood enhancement and affective change. These novel findings provide core
insights into the dynamics and value of sad music in relation to coping with negative
psychological circumstances and mood enhancement.


Everyday music listening: The importance of individual and situational factors
for musical emotions and stress reduction

Marie Helsing
Department of Psychology, University of Gothenburg, Sweden

Music listening primarily evokes positive emotions in listeners. Research has shown that
positive emotions may be fundamental for improving both psychological and physical
aspects of well-being. Besides the music itself, it is essential to consider individual and
situational factors when studying emotional experiences of music. The main aim of the
three papers (Studies I, II and III) in the doctoral thesis was to explore the effects of everyday
music listening on emotions, stress and health. The Day Reconstruction Method was used in
Studies I and II. In Study III, an experimental group, who listened to their self-chosen music on
mp3 players for 30 minutes when arriving home from work every day for two weeks,
was compared to a control group, who relaxed without music, and to a baseline week in which
the experimental group also relaxed without music. Results from Studies I and II showed that
music was related to more positive emotions, lower stress levels and higher health scores.
Liking of the music affected the level of stress. Results from Study III showed that the
experimental group exhibited an increase in positive emotions and a decrease in perceived stress and cortisol
levels over time. The results from this thesis indicate that everyday music listening is an easy
and effective way of improving well-being and health by its ability to evoke positive
emotions and thereby reduce stress. But not just any music will do since the responses to
music are influenced by individual and situational factors.


Emotion Regulation Through Personal Music Listening: The MuPsych App

William M. Randall, Nikki S. Rickard


School of Psychology & Psychiatry, Monash University, Melbourne, Australia

An extensive body of research supports music listening as a commonly used self-regulation
strategy, including the series of studies by Saarikallio on adolescent music mood regulation.
However, empirical evidence supporting emotion regulation through music use has been
limited. The current study aimed to provide empirical and ecologically valid data on the
frequency of specific music regulation strategies, and how successful they are in regulating
emotion. A second aim of the current study was to determine if regulation through music use
occurs in accordance with the Process Model of Emotion Regulation. To achieve these aims, a
new event-sampling methodology was developed: a mobile-device application named
MuPsych. Participants are asked to download MuPsych to their own portable device, and use
it as their personal music player for a two-week data collection period. The app employs
Experience Sampling Methodology to collect real-time subjective data on music and social
context variables, regulatory strategies, and the emotional impact of music. In addition,
MuPsych collects data through psychometric questionnaires on listener variables such as
personality, well-being and musical experience. Preliminary results suggest that the
frequency and efficacy of specific music regulation strategies are influenced by music,
listener and social context variables. The app will remain available for participants to
download for a period of 18 months, allowing for automatic and continuous collection of
data. Results to be presented will reveal how young people use music in their everyday lives
to self-regulate emotions, and the conditions under which this is successful. This study will
also determine how emotion regulation through music use relates to established models of
emotion regulation.


Age differences in music-related emotion regulation

Suvi Saarikallio,* Tuuli Vattulainen,# Mari Tervaniemi#


*Department of Music, University of Jyväskylä, Finland
#Department of Psychology, University of Jyväskylä, Finland

Music is used for regulating emotions across the lifespan, but age-related comparisons of this
behavior have not been conducted. We studied how people at different ages use music for
emotion regulation, and particularly focused on differences in the regulatory strategies and
related music preferences. Survey data was collected from volunteering passers-by during a
literature, food, and science exhibition event. Participants (N=123, age range 13-71, 30
males) were divided into four age groups: 1) teenagers: 13-18-year-olds, 2) young adults:
19-35-year-olds, 3) adults 36-50-year-olds, and 4) old adults: 51-year-olds and older.
Participants rated their use of seven music-related mood-regulatory strategies
(entertainment, strong sensation, diversion, mental work, discharge, revival, and solace) and
their liking of musical genres (classical, jazz, pop, Finnish traditional dance music, rock,
heavy, rap, soul). Two regulatory strategies differed significantly between the groups:
Discharge, the release of negative emotion, was used more by teenagers than by adults and old
adults. Mental work, the contemplation of emotional experiences, was used more by young
adults and old adults than by teenagers and adults. Furthermore, age differences were
observed regarding how music preferences related to the regulatory strategies. For instance,
the use of music for entertainment was related to preference for rap in teenagers, but to
preference for Finnish traditional dance music in young and old adults. The use of music for
strong sensations was related to preference for classical and heavy in young adults but
preference for jazz in old adults. The results broaden our understanding of the age-related
development and individual differences in music-related emotional self-regulation.

Paper Session 27: Timber II Hall, 11:30-13:30
Interpreting & predicting listener responses

From Vivaldi to Beatles and back: predicting brain responses to music in real
time

Vinoo Alluri1, Petri Toiviainen1, Torben Lund2, Mikkel Wallentin2, Peter Vuust2,3, Elvira
Brattico4
1Department of Music, Finnish Centre of Excellence in Interdisciplinary Music Research,
University of Jyväskylä, Finland, 2Aarhus University Hospital, Aarhus University, Denmark,
3Royal Academy of Music, Aarhus/Aalborg, Denmark, 4Cognitive Brain Research Unit,
Department of Psychology, University of Helsinki, Finland

Via regression modeling, we aimed to predict brain activity in relation to acoustic features
extracted from musical pieces belonging to various genres, with and without lyrics. We
assessed the robustness of the resulting models across stimuli via cross-validation. Participants
were measured with functional magnetic resonance imaging (fMRI) while they listened to
two sets of musical pieces, one comprising instrumental music representing compositions
from various genres and the other a medley of pop songs with lyrics. Acoustic features were
extracted from both stimulus sets. Principal component regression models were trained
separately for each stimulus set by using the fMRI time-series as dependent, and acoustic
feature time-series as independent variables. Then, we performed cross-validations of the
models. To assess the generalizability of the models we further extended the cross-validation
procedure by using the data obtained in a previous experiment that used a modern tango by
Piazzolla as the stimulus. Despite differences between musical pieces with respect to genre
and lyrics, results indicate that auditory and associative areas indeed are recruited for the
processing of musical features independently of the content of the music. The right-
hemispheric dominance suggests that the presence of lyrics might confound the processing
of musical features in the left hemisphere. Models based on purely instrumental music
revealed that in addition to bilateral auditory areas, right-hemispheric somatomotor areas
were recruited for musical feature processing. In sum, our novel approach reveals neural
correlates of music feature processing during naturalistic listening across a large variety of
musical contexts.
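
As a minimal sketch of the principal component regression step described above, assuming scikit-learn as a stand-in for the authors' actual tools and omitting fMRI preprocessing (HRF convolution, detrending, etc.):

    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    def fit_pcr(feature_timeseries, voxel_timeseries, n_components=5):
        """feature_timeseries: (n_timepoints, n_features) acoustic features;
        voxel_timeseries: (n_timepoints,) fMRI signal of one voxel."""
        model = make_pipeline(PCA(n_components=n_components), LinearRegression())
        model.fit(feature_timeseries, voxel_timeseries)
        return model

    # cross-validation across stimulus sets: fit on set A, then compare
    # model.predict(features_of_set_B) with the measured series for set B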


I can read your mind: Inverse inference in musical neuroinformatics

Petri Toiviainen1, Vinoo Alluri1, Elvira Brattico1,2, Andreas H. Nielsen3,4, Anders Dohn3,5,
Mikkel Wallentin3,6, & Peter Vuust3,5
1Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä,
Finland, 2Cognitive Brain Research Unit, Department of Psychology, University of Helsinki,
Finland, 3Center of Functionally Integrative Neuroscience, Aarhus University Hospital,
Nørrebrogade, 8000 Aarhus C, Denmark, 4Department of Anthropology, Archaeology and
Linguistics, Aarhus University, Denmark, 5Royal Academy of Music, Aarhus/Aalborg, Denmark,
6Center for Semiotics, Aarhus University, Denmark

In neuroinformatics, inverse inference refers to prediction of stimulus from observed neural
activation. A potential benefit of this approach is a straightforward model evaluation because
of easier performance characterization. We attempted to predict musical feature time series
from brain activity and subsequently to recognize which segments of music participants
were listening to. Moreover, we investigated model parameters that yield optimal prediction
performance. Participants (N = 15) were measured with functional magnetic resonance
imaging (fMRI) while they were listening to two sets of musical pieces. Acoustic features
were computationally extracted from the stimuli. The fMRI data were subjected to
dimensionality reduction via voxel selection and spatial subspace projection. For each
stimulus set separately, the fMRI projections were subjected to multiple regression against
the musical features. Following this, temporal segments were selected from the fMRI data,
and a classifier comparing predicted and actual musical features was used to associate each
fMRI data segment with one of the respective musical segments. To avoid overfitting, cross-
validation was utilized. Different voxel selection criteria and subspace projection
dimensionalities were used. Best performance was obtained by including about 10-15% of
the voxels with highest correlation between participants, and by projecting the fMRI data to
less than 10 dimensions. Overall, timbral and rhythmic features were more accurately
predicted than tonal ones. The excerpt being listened to could be predicted from brain
activation well above chance level. Optimal model parameters suggest that a large
proportion of the brain is involved in musical feature processing.
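
The classifier itself is not specified in the abstract; the following hypothetical sketch shows the general logic of matching an fMRI-predicted feature time series to one of several candidate musical segments by mean feature-wise correlation:

    import numpy as np

    def identify_segment(predicted, candidates):
        """predicted: (time, n_features) features predicted from an fMRI segment;
        candidates: list of same-shaped actual feature arrays, one per segment.
        Returns the index of the best-matching musical segment."""
        def match(a, b):
            a = (a - a.mean(axis=0)) / a.std(axis=0)
            b = (b - b.mean(axis=0)) / b.std(axis=0)
            return float(np.mean(a * b))  # mean feature-wise correlation
        return int(np.argmax([match(predicted, c) for c in candidates]))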


Implicit Brain Responses During Fulfillment of Melodic Expectations

Job P. Lindsen*, Marcus T. Pearce#, Marisa Doyne*, Geraint Wiggins#, Joydeep Bhattacharya*
*Department of Psychology, Goldsmiths, University of London, UK
#Centre for Digital Music, Queen Mary, University of London, UK

Listening to music entails forming expectations about how the music unfolds in time, and the
confirmation and violation of these expectations contribute to the experience of emotion and
aesthetic effects of music. Our previous study on melodic expectations found that unexpected
melodic pitches elicited a frontal ERP negativity. However, the role of attention was not
explicitly manipulated in the previous study. In the current experiment we manipulated the
degree to which participants could attend to the music. One group of participants just
listened to the melodies, a second group had to additionally detect an oddball timbre, and a
third group memorized a nine-digit sequence while listening. We used our statistical
learning model to select from each melody a high and low probability note for the EEG
analyses. Replicating previous results we found an early (~120 ms) frontal ERP negativity
for unexpected notes. Initial analyses showed that this early ERP effect was unaffected by
our attention manipulations. In contrast, analysis of the time-frequency representation
indicated an interaction of expectedness and attentional load in theta band (5-7 Hz)
amplitude during a later time-window (~300 ms). The expectedness of a melodic event
seems to be extracted relatively quickly and automatically, irrespective of the
attentional load, suggesting that early melodic processing is largely pre-attentive or implicit.
Later stages of processing seem to be affected by attentional load, which might reflect
differences in updating of the internal model used to generate melodic expectations.
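
The abstract does not name the time-frequency method; one standard way to obtain a theta-band (5-7 Hz) amplitude envelope, shown here purely as an illustrative assumption, is band-pass filtering followed by the Hilbert transform:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def theta_envelope(eeg_channel, fs, lo=5.0, hi=7.0, order=4):
        """Instantaneous 5-7 Hz amplitude of one EEG channel sampled at fs Hz."""
        b, a = butter(order, [lo / (fs / 2.0), hi / (fs / 2.0)], btype="band")
        return np.abs(hilbert(filtfilt(b, a, eeg_channel)))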


"...and I Fe
e
l Good!" Ratings, fMRI-recordings and motion-capture
measurements of body-movements and pleasure in response to groove

Maria A.G. Witek,* Eric F. Clarke,* Mikkel Wallentin,# Mads Hans,# Morten L. Kringelbach,^
Peter Vuust#
*Music Faculty, Oxford University, United Kingdom
^Dept. of Psychiatry, Oxford University, United Kingdom
#CFIN, Aarhus University, Denmark

What is it about music that makes us want to move? And why does it feel so good? Few
contexts of musical enjoyment make the pleasurable effect of music more obvious than in a
dance club. A growing body of research demonstrates that music activates brain areas
involved in the regulation of biological rewards, such as food and sex. However, the role of
body-movement in pleasurable responses to groove-based music, such as funk, hip-hop and
electronic dance music, has been ignored. This paper reports results from a study in which
the relationship between body-movement, pleasure and groove was investigated. In an
online rating survey, an inverted U-shaped relationship was found between degree of
syncopation in funk drum-breaks and ratings of wanting to move and experience of pleasure.
This inverted U-curve was reflected in fMRI-recorded patterns of activity in the auditory
cortex of 26 participants. Furthermore, there was a negative linear relationship between
degree of syncopation and activation in the basal ganglia. After scanning, participants were
asked to move freely to the drum breaks in a motion-capture lab. Early explorations of the
data suggest similar trends with regards to degree of syncopation and kinetic force of
movements. This triangulation of results provides unique insights into the rewarding and
movement-eliciting properties of music. As few can resist the urge to tap their feet, bop their
heads or get up and dance when they listen to groove-based music, such insights are a timely
addition to theories of music-induced pleasure.


Friday 27 July

Keynote 5: Grand Pietra Hall, 09:00-10:00

David Temperley: Mode and emotion: Experimental, computational, and corpus perspectives

David Temperley is Associate Professor of music theory at Eastman
School of Music, University of Rochester, USA. He received his PhD
from Columbia University (studying with Fred Lerdahl), did a post-
doctoral fellowship at Ohio State University (working with David
Huron), and has been at Eastman since 2000. Temperley's primary
research area has been computational modeling of music cognition;
he has explored issues such as meter perception, key perception,
harmonic analysis, and stream segregation. His first book, The
Cognition of Basic Musical Structures (MIT, 2001) won the Society for
Music Theory's Emerging Scholar Award; his second book, Music and
Probability (MIT, 2007) explores computational music cognition from a probabilistic
perspective. Other research has focused on harmony in rock, rhythm in traditional African
music, and hypermeter in common-practice music. Temperley has also worked on a variety of
linguistic issues, including parsing, syntactic choice, and linguistic rhythm.

My starting point is a recent experiment in which participants heard melodies in different
diatonic modes (Lydian, Ionian, Mixolydian, Dorian, Aeolian, and Phrygian) and judged their
happiness. The experiment reveals a strong and robust pattern: Modes become "happier" as
scale-degrees are raised (i.e. as sharps are added), with the exception of Lydian, which is
higher in pitch than Ionian (major) but less happy. I consider various explanations for this
pattern. The simplest explanation appeals to familiarity: major mode is the happiest because
it is the most familiar. Several considerations argue against this explanation, including new
corpus evidence from popular music. However, I argue that familiarity may explain the low
happiness of modes at the extremes, namely Phrygian and Lydian. (Here I connect with
recent computational work on key-finding.) Regarding the gradual increase in happiness of
modes from Aeolian through Ionian, I consider two explanations: one posits an association
between happiness and pitch height; the other involves a spatial cognitive model of scale-
degrees, the "line of fifths." I put forth several arguments in favor of the latter explanation.

Young Researcher Award 2, Grand Pietra Hall, 10:00-10:30

Emotions Move Us: Basic Emotions in Music Influence People's Movement to Music
Birgitta Burger, Suvi Saarikallio, Geoff Luck, Marc R. Thompson, Petri Toiviainen
Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music,
University of Jyväskylä, Finland

Listening to music makes us move in various ways. Several factors can affect the
characteristics of these movements, including individual factors and musical features.
Additionally, music-induced movement may be shaped by the emotional content of the
music. Indeed, the reflection and embodiment of musical emotions through movement is a
prevalent assumption within the embodied music cognition framework. This study
investigates how music-induced, quasi-spontaneous movement is influenced by the
emotional content of music. We recorded the movements of 60 participants (without
professional dance background) to popular music using an optical motion capture system,
and computationally extracted features from the movement data. Additionally, the emotional
content (happiness, anger, sadness, and tenderness) of the stimuli was assessed in a
perceptual experiment. A subsequent correlational analysis revealed that different
movement features and combinations thereof were characteristic of each emotion,
suggesting that body movements reflect perceived emotional qualities of music. Happy music
was characterized by body rotation and complex movement, whereas angry music was found
to be related to non-fluid movement without rotation. Sad music was embodied by simple
movements and tender music by fluid movements of low acceleration and a forward bent
torso. The results of this study show similarities to movements of professional musicians and
dancers, to emotion-specific non-verbal behavior in general, and can be linked to notions of
embodied music cognition.

Speed Poster Session 31: Grand Pietra Hall, 11:00-11:40


Cognitive modeling & musical structure

Long-term representations in melody cognition: Influences of musical expertise and tempo

Niklas Büdenbender, Gunter Kreutz


Department of Music, Carl von Ossietzky University Oldenburg, Germany

We often only need a few tones from the beginning of a melody to anticipate its continuation.
The less known a melody is, however, the more tones are required to decide upon its
familiarity. Dalla Bella et al. (2003) investigated this idea in an experiment where
participants with different musical backgrounds were asked to judge melody beginnings
regarding their point of identification as familiar or unfamiliar. The results reveal expected
influences of musical expertise but also show similarities in the cognitive representation of
melodic material, regardless of musical expertise. In our experiment we replicated and
extended this paradigm by focusing on musical tempo as another potential influence on the
recognition process. Participants were assigned to either a musicians group or a non-
musicians group, according to their grade of musical expertise, and were asked to judge
acoustically presented melody beginnings regarding the point of their identification as
familiar or unfamiliar. Results support the findings of Dalla Bella et al., with identification
points for unfamiliar melodies approximately three tones later than for familiar melodies,
and with musicians identifying melodies approximately one tone earlier than
non-musicians. Deviations from the
original tempo show a trend towards a delayed identification for familiar melodies,
regardless of the direction of the deviation, and a significant correlation between the
increase of tempo and the number of tones required for the identification of unfamiliar
melodies.


Why Elephants Are Less Surprised: On Context-free Contexts, Trees without Branches and Probabilistic Models of Long-distance Dependencies

Martin Rohrmeier,* Thore Graepel#


*Cluster Languages of Emotion, Freie Universität Berlin, Germany
#Microsoft Research, Cambridge, United Kingdom

Since Schenker's (1935) and Lerdahl & Jackendoff's (1983) theories, tree-shaped, nonlocal
dependency structures have been proposed for tonal music. Empirical evidence for the
perception or acquisition of nonlocal dependencies, however, is still debated. Regarding
harmony, accounts based on local transition tables (Piston, 1978; Tymoczko, 2003) or
recursive, generative context-free structures (e.g. Steedman, 1984, 1996; Lerdahl, 2001;
Rohrmeier, 2011) have been proposed. This work explores whether long contexts have an
effect on the prediction of realistic chord sequences. We use simple probabilistic Hidden Markov
and n-gram models to motivate harmonic long-distance dependencies and their statistical
learning, using a corpus of jazz chord progressions. For each chord of each test sequence,
the prediction accuracy based on any contiguous shorter context up to only one chord was
compared to the prediction accuracy for that chord given the full context of the entire piece
so far. Results by HMMs in contrast to n-gram models indicate that long-distance
dependencies up to large ranges (10 or more chords into the past) have a statistically
measurable impact on the prediction accuracy of most, but not all chords in the test pieces.
The results suggest that features of hierarchical, nonlocal harmonic structure are found in
the data and can be detected by HMMs. This finding provides an empirical way to reveal
traces of syntactic dependency structures consistent with theoretical accounts and to show
that aspects of such dependencies can be acquired by mere statistical learning.
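
As a toy illustration of the context-length comparison (not the authors' HMM implementation), chord-prediction tables can be trained for every context length up to a maximum, and prediction accuracy can then be measured while the allowed context grows; all names are hypothetical and a non-empty training corpus is assumed:

    from collections import Counter, defaultdict

    def train_contexts(progressions, max_context):
        """Count next-chord frequencies after every context up to max_context."""
        table = defaultdict(Counter)
        for seq in progressions:
            for i, chord in enumerate(seq):
                for k in range(min(i, max_context) + 1):
                    table[tuple(seq[i - k:i])][chord] += 1
        return table

    def predict_next(table, context):
        """Back off to shorter contexts until a known one is found."""
        context = tuple(context)
        while context not in table:
            context = context[1:]  # the empty context is always in the table
        return table[context].most_common(1)[0][0]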


Derivation of Pitch Constructs from the Principles of Tone Perception

Zvonimir Nagy
Mary Pappert School of Music, Duquesne University, Pittsburgh, United States

Recent cross-cultural studies in psychoacoustics, cognitive music theory, and neuroscience of
music suggest a direct correlation between the spectral content found in tones of musical
instruments and the human voice on the origin and formation of musical scales. From an
interdisciplinary point of view, the paper surveys important concepts that have contributed
to the perception and understanding of the basic building blocks of musical harmony:
intervals and scales. The theoretical model for pitch constructs derived from the perceptual
attributes of musical tones (the patterns of tone intervals extracted from the harmonic
series) builds on the hypothesis that fundamental assumptions of musical intervals and
scales indicate physiological and psychological properties of the auditory and cognitive
nervous systems. The model is based on the intrinsic hierarchy of vertical intervals and their
relationships within the harmonic series. As a result, musical scales based on the perceptual
and cognitive affinity of musical intervals are derived, their rapport with Western music
theory suggested, and the model's potential for use in music composition implied. This leads
to a vertical aspect of musical harmony by bonding of the intervallic quality and its very
structure embedded within the spectra of tones that produce it. The model's application in
the construction of tone systems puts forward a rich discourse between music acoustics,
perception, and cognition on one end, and music theory, aesthetics, and music composition
on the other.

Musical phrase extraction from performed blues solos
Bruce Pennycook,1 Carlos Guedes2

1The University of Texas at Austin, USA

2Faculty of Engineering, University of Porto, Portugal

The Music Phrase Segmenter software is an adaptation of Lerdahl & Jackendoff's Grouping Preference
Rules based on earlier work by Pennycook and Stammen. The primary objective of MPS is to
automatically extract, analyze and classify phrases from live performance, audio and/or midi files and
scores to serve as input to a generative system. It has been shown that statistical combined with
boundary-detection segmentation methods can outperform a single GPR in ground-truth tests, our
intent was to extend the GPR approach by adding 1) style dependent weightings and 2) secondary rules
which are dynamically invoked to improve results on ambiguous interval displacements. The target
application for this system is an interactive generative blues player suitable for mobile applications
which is part of an umbrella research project focusing on real-time interactive generative music
production tools. To satisfy the requirements for this application, the MPS software is designed to
provide continuous phrase-by-phrase output in real-time such that an input source (playing a
keyboard or saxophone, for example) could produce useful data with minimal latency. In addition to
the segment information (pitch, duration, amplitude), the MPS system produces for each detected
phrase the following analyses: estimated bpm for the current phrase and estimated bpm from the
beginning of the analysis to the current phrase (using a new beat-tracking Max/MSP external object developed
for the overall research project), estimated root, estimated tonality, estimated chord-scale, pitch and
interval class collections (raw and weighted) plus a phrase contour value. The contours are determined
using a new Max/MSP external implementation of a dynamic time-warp method to classify each phrase
according to nine templates derived from Huron. The contour matching process also occurs on a
phrase-by-phrase basis in real-time. These data sets are then passed to a classification system that allows a
user to cluster collections according to any of the analytical criteria. The paper demonstrates a) the
results of the segmenter processes compared to ground-truth data b) the real-time operation of the
analytical and contour procedures c) the clustering classification system and d) how the data is
ultimately employed in the generative system.
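
As a toy example of the kind of boundary cue that GPR-based segmenters weight and combine, consider a rest/gap rule: a phrase boundary is suggested where an inter-onset interval is clearly longer than its neighbours. The threshold below is invented; the actual MPS rules and weightings are richer than this:

    def gap_boundaries(onset_times, ratio=1.8):
        """Indices of notes before which a phrase boundary is suggested
        (hypothetical threshold on the local inter-onset intervals)."""
        iois = [b - a for a, b in zip(onset_times, onset_times[1:])]
        boundaries = []
        for i in range(1, len(iois) - 1):
            if iois[i] > ratio * iois[i - 1] and iois[i] > ratio * iois[i + 1]:
                boundaries.append(i + 1)  # boundary falls before note i + 1
        return boundaries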


An Interactive Computational System for the Exploration of Music
Voice/Stream Segregation Processes
Andreas Katsiavalos, Emilios Cambouropoulos
School of Music Studies, Aristotle University of Thessaloniki, Greece

In recent years a number of computational models have been proposed that attempt to
separate polyphonic music into perceptually pertinent musical voices or, more generally,
musical streams, based on a number of auditory streaming principles (Bregman). The exact
way such perceptual principles interact with each other in diverse musical textures has not
yet been explored systematically. In this study, a computational system is developed that
accepts as input a musical surface represented as a symbolic note file, and outputs a piano-
roll-like representation depicting potential voices/streams. The user can change a set of
variables that affect the relative prominence of each streaming principle, thus giving rise to
potentially different voice/stream structures. For a certain setting of the model's parameters,
the algorithm is tested against a small but diverse set of musical excerpts (consisting of
contrasting cases of voicing/streaming) for which voices or streams have been manually
annotated by a music expert (this set acts as ground truth). Preliminary qualitative results
are encouraging as streaming output is close to the ground truth dataset. However, it is
acknowledged that it is difficult to find one stable set of parameters that works equally well
in all cases. The proposed model enables the study of voice/stream separation processes per
se, and, at the same time, is a useful tool for the development of more sophisticated
computational applications.
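
The scoring details are not given in the abstract; as a minimal sketch of the general idea, a new note might be assigned to the stream that best satisfies a weighted combination of two of the named principles, with the weights playing the role of the user-adjustable variables (a full system would also decide when to open a new stream):

    def assign_note(note, streams, w_pitch=1.0, w_time=1.0):
        """note: (onset_s, midi_pitch); streams: list of lists of such notes.
        Appends the note to the lowest-cost stream, or starts the first one."""
        def cost(stream):
            last_onset, last_pitch = stream[-1]
            return (w_pitch * abs(note[1] - last_pitch)   # pitch proximity
                    + w_time * (note[0] - last_onset))    # temporal continuity
        if not streams:
            streams.append([note])
        else:
            min(streams, key=cost).append(note)
        return streams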


Timbral & Textural Evolution as Determinant Factors of Auditory Streaming Segregation in Christian Lauba's Stan

Nicolaos Diminakis, Costas Tsougras


School of Music Studies, Aristotle University of Thessaloniki, Greece

Formal musical analysis does not typically involve the listener's cognition of the
macro/micro structural levels of a composition. Auditory scene analysis provides a
fundamental understanding of the way a listener perceives combined sounds and organizes
them as separate elements of the musical texture. The aim of this paper is to show how a
number of cognitive factors (auditory streaming principles) can provide an insight into the
macro/microstructure of Christian Lauba's Stan for baritone saxophone and pre-recorded
synthesizer. Stan, Lauba's 11th saxophone concert-study, is a Study in virtuosity without
rubato for well-tempered and well-quantized instruments and an homage to Stan Getz, the
renown jazz musician. In this piece, timbral and textural parameters, as well as their
overlapping and interaction during the evolution of the composition, attain importance and
constitute the main generators of auditory streams. The present study reveals the auditory
streaming processes -based on the principles of Temporal Continuity, Minimum Masking,
Tonal Fusion, Pitch Proximity, Pitch Co-modulation, Onset Synchrony, Limited Density and
Timbral Differentiation- that project the division of the piece into three parts (A-B-C) and
explains the unfolding of the composition's musical texture and the relation of the piece's
structure to its title. Pc set analysis is also applied in order to enlighten important processes
at the microstructural level. The study shows how two distinct methodologies can
complement each other for the benefit of music analysis. The acknowledgment of both
cognitive and theoretical results expands our understanding of musical structure and
broadens our knowledge about the listener's experience.


Understanding Ornamentation in Atonal Music

Michael Buchler
College of Music, Florida State University, U.S.A.

In 1987, Joseph Straus convincingly argued that prolongational claims were unsupportable
in post-tonal music. He also, intentionally or not, set the stage for a slippery slope argument
whereby any small morsel of prolongationally conceived structure (passing tones, neighbor
tones, suspensions, and the like) would seem just as problematic as longer-range harmonic
or melodic enlargements. Prolongational structures are hierarchical, after all. This paper
argues that large-scale prolongations are inherently different from small-scale ones in atonal
(and possibly also tonal) music. It also suggests that we learn to trust our analytical instincts
and perceptions with atonal music as much as we do with tonal music and that we not
require every interpretive impulse to be grounded in strong methodological constraints.


Perceiving and categorizing atonal music: the role of redundancy and
performance
Maurizio Giorgio,1 Michel Imberty,2 Marta Olivetti-Belardinelli3

1"Sapienza" University of Rome, Universit de Paris-Ouest-Nanterre La Dfense, Italy


2Universit de Paris-Ouest-Nanterre La Dfense, France;

3ECoNA - Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial

Systems, Sapienza University of Rome, Italy

In order to verify whether the performer's interpretation has a role in the perceived segmentation of
atonal music, we performed three experiments according to the ecological approach developed by
Irène Deliège (1990). We hypothesize that musical structure affects grouping more than
performance and, moreover, that the main mechanism involved in the representation of musical
structure is related to the detection of similarity and difference between phrases, that is, of their
redundancy. For each experiment 30 subjects were invited to attentively listen to two different
performances of an atonal piece, to understand its plan and to mark off the sections of the work
pressing a computer key. The order of presentation of the two performances was balanced. In a
first experiment we used two versions of Berios Sequenza VI performed respectively by
Desjardins (1998) and Knox (2006). These variants are different in duration (12.13min. vs
13.14min.) and show differences in dynamics aspects (i.e.: velocity, intensity), accents
distribution and gaps duration. The aim of this work was to isolate and analyze the role of
variations in dynamic components, accents distribution, duration and the instrumentalists point
of view in the representation of the musical surface, as perceived by the listeners. In the second
experiment we focused on the role of performances duration by using two versions of Berios
Sequenza III, recorded by the same singer, that differ exactly in duration. In order to better
investigate the performers interpretation of the score, in the third experiment we asked to two
musicians to record a performance of Berios Sequenza VIII by means of a score in which we had
previously erased the dynamic instructions provided by the composer. Moreover, none of the two
instrumentalists knew the Berios composition before our request. Then we used the obtained
tracks as stimuli in the same paradigm of previous experiments. The results show a good number
of coinciding segmentations in the two versions either for the first, the second and the last
experiment, confirming our hypothesis and suggesting a main role of the texture in perceiving
and representing the plan of the pieces. The results of the three experiments are discussed in
relation to the role of same/different detection.

Speed Poster Session 32: Crystal Hall, 11:00-11:40


Emotion & affect

What's That Coming Over The Hill? The Role of Music on Response Latency
for Emotional Words

Paul Atkinson
Psychology, Goldsmiths, University of London, England

Music and words both have the potential to generate emotional states that may impact on
concurrent task performance, but the extent of this interaction is rarely explored. A classic
example of the effects of emotional words is seen in responses to the emotional Stroop test
(Stroop, 1935), whereby the presence of emotional words inhibits response times on a
standard color-naming task. Graham, Robinson and Mulhall (2009) combined the Stroop task
with music and found an effect. The aim of this study was to explore whether music could
affect performance on an emotional Stroop task: specifically, it was hypothesized that fearful
music would inhibit responses on the reading task while happy music would decrease
inhibition. Both conditions were measured against a silent control. The music samples for
the present study were taken from a study by Eerola and Vuoskoski (2010). Sixty
undergraduates (33 females and 24 males) took part in the study. The experiment involved
participants responding to a color-naming Stroop task on a computer screen that contained
both threat and neutral words, either in silence or while listening to music that was rated as
happy or fearful. The dependent variable was the time taken for the participant to respond
to the color of the word presented. The findings of the study supported the experimental
hypotheses: fearful music significantly inhibited response times, while response times in the
happy music condition were significantly facilitated. In the silence condition, no significant
difference in performance was found between word types.


Diabolus in musica: towards an understanding of the emotional perception of
musical dissonance

Kyriaki Zacharopoulou, Eleni Lapidaki


School of Music Studies, Aristotle University of Thessaloniki, Greece

Musical dissonance is considered to be a decisive factor in the emotional evaluation of a
musical piece. However, research on the developmental perception of this musical
phenomenon is scarce, and the few existing studies are usually low in ecological validity
(extensive use of written/verbal self-reports of the emotional experience, artificially
constructed musical stimuli, or isolated musical events). The purpose of this study was
twofold. The first goal was to propose a web-based, multimedia-enriched method, which provides a more
natural research setting, assigning a task that people generally encounter in their everyday
life, namely the pairing of music with images and videos. The second goal of the study was to
assess the emotional connotations of musical dissonance in two different age groups. The
study involved 29 pre-adolescents and 17 adults. The participants watched a set of images
and videos combined with a consonant and a dissonant variation of three musical pieces. The
images and videos were selected so that they would evoke extreme low or high levels of the
emotional dimensions of valence and arousal. We confirmed the participants' tendency to
choose the dissonant musical versions when they judged a visual stimulus as more arousing,
and the consonant versions when they judged a visual stimulus as more positive or pleasant.
The pre-adolescents generally agreed with the adults in evaluating the different musical
pieces, which implies that the emotional responses to musical dissonance of children at the
age of pre-adolescence have already begun to strongly resemble those of adults.


Tonality and Affective Experience: What the Probe Tone Method Reveals

Elizabeth Hellmuth Margulis,* Zohar Eitan#


*Department of Music, University of Arkansas, United States
#School of Music, Tel Aviv University, Israel

Music theorists have long maintained that the tonal hierarchy is an important foundation for
the affective experience of Western music. Tonal relationships are believed to engender
expectancy, tension and surprise, and thus to contribute in diverse ways to musical expression
and meaning. This set of studies aims to use the well-established probe-tone technique
(Krumhansl, 1990) to explore the relationship between perceptions of tonal hierarchy and
aspects of musical expression. Specifically, we examine how listeners' goodness-of-fit ratings
of tonal scale degrees correlate with their ratings of expressive qualities conveyed by these
scale degrees. In the experiments reported here, listeners with and without formal musical
training performed two tasks in counterbalanced order: the original probe-tone task (based
on Krumhansl & Kessler, 1982), and a replica of this task such that participants rated not
how well the probe tone fit with the tonal context, but rather how tense they found it
(Experiment 1) or how much they liked it (Experiment 2). Results provide basic information
about the impact of tonality on affective experience. By making simple modifications to a
well-established methodology in music perception, we hope to gain preliminary information
about the relationship between tonality and multidimensional components of affective
experience, as well as about the relationship between these dimensions themselves.
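
In the probe-tone paradigm referenced here, each of the 12 chromatic probe tones is rated against a key-defining context, yielding a 12-element profile per task that can then be correlated across tasks. The sketch below is purely illustrative and not the authors' analysis code: the goodness-of-fit values are the widely cited Krumhansl & Kessler (1982) C major profile, while the tension ratings are invented toy values.

import numpy as np

# Krumhansl & Kessler (1982) C major goodness-of-fit profile (C, C#, ..., B).
fit = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
# Invented toy tension ratings, roughly inverse to fit, plus noise.
tension = 7.0 - fit + np.random.default_rng(3).normal(0, 0.3, 12)

r = np.corrcoef(fit, tension)[0, 1]
print("fit vs. tension correlation:", r)  # strongly negative for these toy data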

Lower than average spectral centroid and the subjective ability of a musical
instrument to express sadness

Joseph Plazak,* David Huron#


*School of Music, Illinois Wesleyan University, USA; #School of Music, Ohio State University, USA

One of the known cues for a sad "tone of voice" in instrumental music is a relatively darker
timbre. Previous research has determined that spectral centroid is a reliable indicator of
the perceived brightness/darkness of a musical tone. This study sought to determine which
tones, on various orchestral instruments, have a "lower than average" spectral centroid, and
thus, which tones might be better suited for expressing musical sadness. Further, this study
also sought to compare the average spectral centroid for a given instrument to the subjective
capacity of that instrument to express musical sadness. Huron and Anderson collected this
latter data in an unpublished study. A weak correlation (r = -.09) was found between an
instrument's average spectral centroid and the subjective capacity of that instrument to
express musical sadness. These results are limited, but are consistent with the hypothesis
that darker timbres, defined as tones with lower than average spectral centroid values, are
correlated with an instrument's subjective capacity to express musical sadness.
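
The spectral centroid invoked here is the amplitude-weighted mean frequency of a tone's spectrum. As a minimal illustrative sketch (not the authors' code; the signals and sampling rate are invented), it can be computed as follows:

import numpy as np

def spectral_centroid(x, sr):
    """Amplitude-weighted mean frequency (Hz) of the magnitude spectrum."""
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

# Toy comparison: a "dark" tone (weak upper partial) vs. a "bright" one.
sr = 44100
t = np.arange(sr) / sr
dark = np.sin(2 * np.pi * 220 * t) + 0.1 * np.sin(2 * np.pi * 1760 * t)
bright = np.sin(2 * np.pi * 220 * t) + 0.9 * np.sin(2 * np.pi * 1760 * t)
print(spectral_centroid(dark, sr), spectral_centroid(bright, sr))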


Genre-related Dynamics of Affects in Music

Pasi Saari, Tuomas Eerola


Music Department, University of Jyväskylä, Finland

Past research on the perception of affects in music has primarily been based on rather limited
music materials, both in terms of the music genres covered and the number of examples used.
Yet we are aware of large differences in the functions, typical listener profiles and affective
connotations of music across genres. The present study considers the contribution of music
genre to the perception of affects in music and seeks to uncover systematic patterns of affects
and their musical correlates across a variety of genres. Moreover, the aim of the study is to
assess the congruence between affects inferred from social media tags and participant ratings
of affect characteristics. Song-level tags related to genre and mood were retrieved for over a
million songs from the Last.fm social music catalogue. Based on Latent Semantic Analysis of
the tags, a set of 600 tracks, balanced in terms of 6 popular music genres and 9 affects, was
chosen for a listening experiment in which 29 participants rated the excerpts. Correlations
between the listener ratings and the corresponding inferred semantic representations ranged
from low (happy r=.42) to high (peaceful r=.69). Irrespective of genre, correlations between
mean ratings of each affect showed strong (e.g. energetic/relaxed r=-.95), but also
unexpectedly weak (e.g. happiness/sadness r=-.46) relationships. Within genres, however, a
complex pattern of relationships emerges, showing a strongly negative correlation between
happiness and sadness within folk and pop, but a weak correlation within electronic and
metal, due to the non-relevance of certain affects or a shift in the relationships among affects
within the genre.
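
As a rough illustration of the tag-based step described above (hypothetical toy tags, not the actual Last.fm data or the study's code), a track-by-tag matrix can be reduced to a latent semantic space in a few lines:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Invented toy tag strings standing in for song-level Last.fm tags.
track_tags = [
    "happy pop upbeat dance",
    "sad folk acoustic melancholy",
    "relaxed peaceful ambient electronic",
    "aggressive metal dark energetic",
]
X = TfidfVectorizer().fit_transform(track_tags)  # tracks x tags matrix
lsa = TruncatedSVD(n_components=2).fit(X)        # Latent Semantic Analysis
print(lsa.transform(X))                          # track coordinates in semantic space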


Romantic changes: Exploring historical differences in the use of articulation
rate in major and minor keys
Matthew Poon, Michael Schutz
McMaster Institute for Music and the Mind, McMaster University, Canada

Music and speech are known to communicate emotion using acoustic cues such as timing
and pitch. Previously we explored the use of these cues within a corpus of 24-prelude sets,
quantifying these cues in each of the 12 major (nominally happy) and 12 minor (nominally
sad) pieces. We found that the major-key pieces were both higher in pitch and faster in
articulation rate than their minor-key counterparts (Poon & Schutz, 2011). However, we also
found differences in the way Bach and Chopin used the cues, differences consistent with
previous work suggesting that Romantic-era practices for the use of articulation rate
broke with those of previous eras (Post & Huron, 2009). To further explore this change, we
expanded our survey to include seven additional 24-prelude sets written by Classical and
Romantic composers. For the Classical-era sets, major-key pieces were on average 25%
faster than their minor-key counterparts. However, for the Romantic-era sets, major-key
pieces were in fact 7.5% slower than their minor-key counterparts. Our analysis of pitch
height differences is still in progress, but through a rigorous methodology we document
clear differences in acoustic cues between the Classical and Romantic eras, complementing
and extending work by Post and Huron.


Acoustic variables in the communication of composer emotional intent

Don Knox, Gianna Cassidy


School of Engineering and the Built Environment, Glasgow Caledonian University, UK

Music emotion recognition algorithms automatically classify analysed music in terms of the
emotion it expresses. Typically these approaches utilise acoustical features extracted from
the digital music waveform. Research in this area concentrates on the perception of
expressed emotion from the user perspective, and has received some criticism in that it is
limited in terms of unpicking the many facets of emotional communication between the
composer and the listener. Acoustical analysis and classification processes can be expanded
to include aspects of the musical communication model, with the potential to shed light on
how the composer conveys emotion, and how this is reflected in the acoustical
characteristics of the music. The communication of music emotion is examined from the
point of view of the composer's actions, which have a direct bearing on acoustical properties
of the music being created. A pilot study was carried out in which a composer was tasked
with composing music for a video game. The composer kept a diary of his thoughts and
descriptions of his intentions as he composed music for the game. The music was analysed
and a large number of structural features extracted which were analysed in relation to the
qualitative descriptions provided by the composer. The results shed light on the links
between the actions and intentions of the composer and the resulting acoustical
characteristics of their music.


Experienced emotional intensity when learning an atonal piece of music. A
case study
Arantza Almoguera1, Mari Jose Eguilaz1, Jose Antonio Ordoñana2, Ana Laucirica1

1Universidad Pública de Navarra, España


2Universidad del País Vasco, España

Different studies point out that music is one of the most effective inducers of intense
emotional experiences. Nevertheless, almost all the available studies focus on the listener's
emotion, while studies focused on the performer are scarce. Due to its characteristics, it is
more difficult for atonal music to generate positive emotions, both in audiences and among
performers and students. In fact, several authors consider that atonal music is emotionally
incomprehensible, which is why atonal music is not very widespread in music education
centers. The goal of our study is to investigate the emotional intensity experienced by five
flute students when learning an atonal piece for solo flute. Results point out that the deeper
knowledge of the music reached in the learning process, and the successive listenings to the
piece, entail more familiarity and a better understanding of the music played; therefore,
students are able to find emotionally intense passages, as happens with tonal music.
Consequently, we do not agree with all those
theories that suggest that atonal music is unexpressive and emotionally incomprehensible,
and we confirm that cognition has a positive influence on the emotion felt when playing
atonal music.
This work is part of the National Research Project I+D 2008-2011, code EDU-2008-03401,
"Audition, cognition and emotion in the atonal music performance by high-level music
students", funded by the Ministry of Science and Innovation of Spain.

Speed Poster Session 33: Dock Six Hall, 11:00-11:40


Learning & education

Engaging Musical Expectation Research in Pedagogy of Musical Form and
Phrase Structure

Nancy Rogers
College of Music, Florida State University, United States

This paper aims to bridge the gulf between music cognition and mainstream music theory by
describing ways to augment typical approaches to basic musical organization (form and
phrase structure) in a traditional music theory class. Discussing principles of musical
expectation, event segmentation, schema theory, and statistical learning is compatible with
common pedagogical approaches to form. I also describe classroom activities and
assignments that engage research in expectation and schema theory.


Interactive Computer Simulation for Kinesthetic Learning to Perceive
Unconventional Emergent Form-bearing Qualities in Music by Crawford
Seeger, Carter, Ligeti, and Others

Joshua Banks Mailman


Dept. of Music, Columbia University, USA; Steinhardt School, New York University, USA

Embracing the notion that metaphors influence reasoning about music, this study explores a
computational-phenomenological approach to perception of musical form driven by a
dynamic metaphor. Specifically, rather than static metaphors (structure, architecture, design,
boundary, section), dynamic ones are emphasized (flow, process, growth, progression) as
more appropriate for modeling musical form in some circumstances. Such models are called
dynamic form. A pedagogical program for enhancing the perception of dynamic form is
pursued by exploiting embodied cognition through custom-built simulation
technology. Adopting an interdisciplinary approach, the presentation shows some
computational models of qualities that convey such dynamic form in unconventional
repertoire. Since such models are quantitative, it is plausible that, with appropriate
technology, listeners who do not spontaneously attend to these could learn to do so, and then
subsequently demonstrate perception and cognition of such form-bearing flux. Through
simulation algorithms, the paper offers Max/MSP patches and iPhone apps that enable real-
time user manipulation of the intensity of such qualities, by moving sliders with a mouse or
finger or by tilting the angle of an iPhone. Such hands-on control is intended to
kinesthetically cultivate sharper perception, cognition, attention, and interest of listeners
confronting unconventional music. The presentation also offers computer animations of
some theorized unconventional emergent qualities, which indeed constitute vessels of
musical form.


Automatic Singing Assessment of Pupil Performance

Christian Dittmar, Jakob Abeßer, Sascha Grollmisch,* Andreas Lehmann, Johannes


Hasselhorn#
*Semantic Music Technologies, Fraunhofer IDMT, Germany
#Hochschule für Musik, Würzburg, Germany

Assessing practical musical skills in educational settings is difficult and has usually been done
using human raters. Therefore, projects measuring competencies, such as the American NAEP
(National Assessment of Educational Progress, 2008) or the German KOMUS (Jordan et al., in
press), rely on responding items rather than performing or creating items to measure what
students know and can do in the field of music. This contribution is part of an attempt to measure
practical singing skills among German secondary school students. The study contributes to the
measurement of competencies in music by developing a methodology and proprietary software
solution for administering performing items, and a (semi-)automatic scoring procedure for
evaluating different singing tasks. Voice recordings were made of 56 individual students (age 11)
singing the German national anthem after being given a starting pitch and rhythm. Experts rated
the recordings using a five-point scoring rubric pioneered by Hornbach and Taggart (2008). The
experts' averaged ratings served as ground-truth data that were then modeled with automatic
analysis tools from Music Information Retrieval research. To this end, the singing voice recordings
were subjected to an automatic melody transcription algorithm, which outputs the discrete note
sequence in MIDI notation and fundamental frequencies in Hz. A set of 3 performance assessment
features was derived from these data: (1) the optimum Euclidean distance between the target
melody's pitch-class histogram and that of the transcribed melody; (2) the variability of the sung
fundamental frequency over the course of a note; (3) the change in fundamental frequency over the
length of a note. The correlation between the Hornbach & Taggart rubric and our features
provided an indication of their effectiveness in capturing children's vocal performance. In our
ongoing analyses, the combination of all features was used to train a regression model, optimized
with respect to the ground truth. The current regression method yields a significant correlation
of around 0.4. Our experiments show that automatic modeling of human expert ratings is
possible. More sophisticated features are still needed and are currently under development.

Competences of piano teachers and the attitudes of their pupils


Małgorzata Chmurzyńska
Department of Music Psychology, Chopin University of Music, Poland

In the training of future piano teachers (as well as of other instrumental teachers) provided by the
academies of music, the strongest emphasis is put on their preparation in terms of specific musical
competences, such as a high level of piano performance, the ability to build up pupils' solid métier,
to shape pupils' playing apparatus, and to develop their musical and technical skills. The teachers'
training also involves psychological and educational knowledge and skills, which, however, are
usually not taken too seriously, either by the music students themselves or by the music
academies. The study aims at establishing whether there exists a relationship between piano
teachers' sense of competence (musical, educational, and psychological) and their pupils' attitudes
towards their piano teachers and piano lessons. The subjects were pupils from professional
primary music schools (N=40) and their piano teachers (N=15). The pupils were administered the
Pupils' Questionnaire, designed to test their attitudes towards their piano teachers and the piano
lessons. The teachers completed the Piano Teacher Self-Efficacy Questionnaire, designed to
measure their sense of competence. The data were compared for correspondence. The
comparison revealed that the higher the teachers' sense of psychological competence, the more
positive their pupils' attitudes both towards the teacher him/herself and the piano lessons, the
less often the pupils experience negative feelings during the lessons, the lower their level of
anxiety, and the higher their sense of self-fulfillment. It has also been revealed that the higher the
teachers' musical competences, the less often their pupils experience joy and self-realization, and
the more often they experience anxiety. The results indicate clearly that neither the teacher's good piano
playing, painstakingly achieved during their musical studies, nor his/her careful training in the
remaining areas ensures a good relationship between teacher and pupil. These factors, therefore,
cannot be a predictor of the effectiveness of teaching; i.e., they do not result in developing pupils'
musical interest and motivation for piano playing. These findings once again point to the great
significance of teachers' psychological competences and their role in shaping pupils' positive
attitudes towards piano playing and towards music in general.


The Effect of Music Teaching Method on Music Reading Skills and Music
Participation: An Online Study

Ronniet Orlando, Craig Speelman


School of Psychology and Social Science, Edith Cowan University, Australia

Music reading skills are acknowledged as essential for musicians when learning new pieces,
accompanying, or playing with others in ensembles. Approaches to teaching beginners may
be divided into "rote", with new pieces learnt by ear and/or finger positions, and "note", where
students learn to read from conventional music notation from the earliest lessons. This study
set out to examine relationships between first methods of learning musical instruments and
outcome measures of subsequent music reading skills, participation in music ensembles, and
ability to play music by ear. A self-administered online questionnaire collected data
regarding the musical background of volunteer adult participants, and included a two-part
music reading task. This comprised 24 audio-visual matching tasks using sets of four
2-bar melodies requiring either matching the scored melody to one of four recorded
melodies, or matching a recorded melody to one of four scored melodies. Over a period of 52
days, 155 responses to the questionnaire were recorded, of which 118 (76%) were analyzed
using a series of one-way analyses of variance. Results supported the hypothesis that the
first method of instruction affected subsequent music reading ability, with note methods
resulting in higher reading abilities than rote. Furthermore, a significant relationship
emerged between music reading ability and ensemble participation, and a significant effect
was found for playing by ear on music reading ability.


Music training, personality, and IQ

E. Glenn Schellenberg, Kathleen A. Corrigall


University of Toronto, Canada

How do individuals who study and practice music for years on end differ from other individuals?
We know that musically trained individuals tend to perform better on tests of cognitive abilities,
including measures of listening, memory, verbal abilities, visuospatial abilities, nonverbal
abilities, and IQ. Such advantages extend to school classrooms, where musically trained children
and adolescents tend to get better grades than their untrained counterparts in all school subjects
except for physical education (i.e., sports). One particularly provocative finding is that duration of
music training is associated with average grades in school even when IQ is held constant. In other
words, musically trained individuals are better students than one would predict based on their IQ,
which implicates a contribution of individual-difference variables other than IQ. One possibility is
that studying music is associated with individual differences in personality. The aim of this
research is therefore to examine whether personality variables can help to explain individual
differences in duration of music training. The sample included a large number of undergraduates
who varied widely in terms of their music background. They were tested individually on measures
of IQ (Wechsler Abbreviated Scale of Intelligence) and personality (Big Five Inventory). They also
provided detailed demographic-background information. Music background was defined as the
number of years of playing music regularly, which was highly correlated with years of music
lessons but more strongly associated with the predictor variables. Playing music regularly was
correlated positively with Performance (nonverbal) IQ and Openness-to-Experience, but negatively with
Conscientiousness. These associations remained evident when socio-economic status (i.e.,
parents' education) was held constant. Even more compelling was the finding that duration of
playing music could be predicted by a combination of these predictor variables using multiple
regression, with each variable (i.e., IQ, Openness-to-Experience, and Conscientiousness) making a
significant unique contribution to the model's predictive power. In fact, the regression model
accounted for approximately 40% of the variance in years of playing music regularly. Duration of
playing music regularly can thus be predicted by a combination of IQ and personality variables.
Individuals who study and play music for years on end tend to score well on tests of intellectual
ability. They also tend to be open to new ideas and experiences, but they score relatively low on a
dimension of personality that subsumes qualities such as orderliness, responsibility,
attentiveness, and thinking before acting.
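
The regression analysis described above has a simple structure that can be sketched as follows. The data here are randomly generated toy values whose signs merely mimic the reported associations; this is not the study's sample or code.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
iq = rng.normal(100, 15, n)
openness = rng.normal(0, 1, n)
conscientiousness = rng.normal(0, 1, n)
# Toy outcome: +IQ, +Openness, -Conscientiousness, plus noise.
years_playing = (0.05 * iq + 1.5 * openness - 0.8 * conscientiousness
                 + rng.normal(0, 2, n))

X = np.column_stack([iq, openness, conscientiousness])
model = LinearRegression().fit(X, years_playing)
print("coefficients:", model.coef_)
print("R^2 (variance accounted for):", model.score(X, years_playing))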


Music-Games: Supporting New Opportunities for Music Education
Gianna Cassidy, Anna Paisley

Glasgow Caledonian University, UK


This paper presents Phase 1 of the 24-month EPSRC project, Music-Games: Supporting New
Opportunities for Music Education. While learners are increasingly engaged with digital music
participation outside the classroom, evidence indicates learners are increasingly disengaged from
formal music education. The challenge for music educators is to capitalise on the evident
motivation for informal music-making with digital technology, as a tool to create authentic and
inclusive opportunities to inspire and engage learners with music in educational contexts.
Previous research highlights the power of music participation to enrich cognitive, social and
emotional wellbeing, while a growing body of work highlights the educational potential of digital
games to scaffold and enrich personalised learning across the curriculum. This body of work
addresses the neglected music-game synergy, investigating the potential of music games to
support and enrich music education by identifying processes, opportunities and potential
outcomes of participation. Phase 1 aimed to elucidate Educator, Learner and Industry attitudes
towards, uses of and requirements for music-games; the musical opportunities and experiences
music-games support; processes of participation in and outside the classroom; and constraints of
use within existing practice in line with defined curriculum goals. Study 1 presents a
comprehensive questionnaire investigation (n=2000) of Educator, Learner, and Games Industry
uses and functions of music-games, and barriers to classroom employment. Study 2 presents a
mixed-method investigation of learner sessions (n=70) with RockBand, recording performance
(e.g., score, music choice, usability) and self-report measures (e.g., Profile of Mood States and
Flow), and a thematic analysis of post-session reflective interviews. Study 3 presents a thematic
analysis of educator and industry co-created scenarios of use for RockBand in the classroom in
line with defined curriculum goals. Analysis was guided by the elements of the new opportunities
in the music curriculum and by Hargreaves et al.'s (2003) models of opportunities in, and potential
outcomes of, music education. Findings suggest music-games can engage and inspire us with
music, potentially supporting and enriching key areas of music education and the social, emotional
and cognitive wellbeing of the learner in the classroom and in the wider musical world. Findings
are discussed through recommendations for effective and efficient employment of music
technologies for Educators, and innovative and user-centred design of future music technologies
for Industry.


Attitudes Towards Game-Based Music Technologies in Education: A Survey
Investigation

Anna M.J.M. Paisley, Gianna Cassidy


Department of Computer, Communication & Interactive Systems/Psychology & Allied Health
Sciences; Glasgow Caledonian University, Scotland (UK)

A growing body of literature has recently emerged extolling the virtues of incorporating
digital-based games within formal education settings and in line with defined curriculum
goals. Yet, despite the widespread usage and relative accessibility of music-based digital
games, coupled with the abundance of research that exists to support the cognitive,
emotional and social benefits of musical participation, there remains a dearth of empirical
research into the inclusion of such technologies within the realm of music education. In view
of this and, as part of an ongoing EPSRC-funded project designed to evaluate the educational
potential of music-based digital games, a large-scale survey investigation was conducted as
a means of ascertaining current uses of, requirements for, and attitudes towards
music-based video games across three groups of relevant stakeholders: educators,
learners and game industry experts. An initial pilot study was conducted to assess the
reliability and validity of this scale across 250 participants. Following analysis, the
questionnaire was refined before being administered across the 3 groups of relevant
stakeholders (n = 2000+). Results from a nested sub-sample of 300 cases from the overall
participant pool are presented here, with a specific focus on learners' responses to the final
version of the survey. These initial findings are discussed in light of the overarching aims of
the project, and with regard to the effective and successful integration of music-based games
within music education.

Speed Poster Session 34: Timber I Hall, 11:00-11:40


Motion & gesture

Interpersonal influence of nonverbal body-movement interaction in an
ensemble situation
Kenji Katahira
Graduate School of Science and Technology, Kwansei Gakuin University, Japan

Enhancing interpersonal relationships would be an important function of musical
communication. Music may serve this function by affording participants the opportunity to
interact nonverbally. The nature of the nonverbal channels contributing to the development
of interpersonal relationships, often observed in everyday life, may be one of the factors
underpinning the relationship-enhancing function of music. The present study aimed to
investigate whether nonverbal communication influenced the development of dyadic
rapport, through a simple ensemble task. We focused on body movement as a typical
nonverbal channel. Ensemble coordination, body movement, and self-rated rapport during
the ensemble task were measured, and the relationships among them were analyzed by
means of structural equation modeling (SEM). Eight unacquainted pairs of participants
played isochronous patterns together on electronic drums, synchronizing them as well as
possible under a real-time point-light display environment. The following three
measurements were carried out: a) ensemble coordination, b) explicitness and synchrony of
body movements in dyads, and c) participants' interaction ratings, measured with a modified
version of the rapport scale developed by Bernieri et al. (1996). SEM results revealed that the
degree of communication through body movement in dyads contributed to ensemble
coordination, but ensemble coordination had no significant effect on rapport ratings. Most
remarkable of all the results, communication through body movement showed a positive
direct effect on the interaction ratings. The results of this study empirically demonstrate
that nonverbal communication in a musical ensemble situation may have an interpersonal
function similar to its function in everyday life.


The Effect of Conductor Expressivity on Choral Ensemble Evaluation

Steven J. Morrison, Jeremiah D. Selvey


School of Music, University of Washington, USA

Visual information can contribute significantly to the opinion one forms, the meaning one
ascribes and the interpretation one derives from musical information. An ongoing series of
studies has examined whether a conductor's use of gesture in a manner considered either
"expressive" or "inexpressive" affects listeners' evaluations of an ensemble performance.
Prior results have indicated that, among university music students, instrumental
performances led by conductors deemed to be expressive were evaluated more positively
than those led by inexpressive conductors, even when the performances were actually
identical. The purpose of the present study was (1) to determine whether a similar response
pattern would be observed (a) among younger and less-experienced music students (b)
using choral performance stimuli, and (2) to compare responses against evaluations of
performances presented in an audio-only condition. Students (N = 429) enrolled in
secondary level music classes rated the expressivity of two pairs of two identical choral
performance excerpts (four excerpts in all) using a 10-point Likert-type scale. One group (n =
274) watched a video performance of the four excerpts featuring conductors who
demonstrated either high-expressivity (HE) or low-expressivity (LE) conducting techniques.
There was a significant effect of conducting condition on both the conductor and choral
performance evaluations. When compared with the evaluations of a second group of
participants (n = 155) who heard the same excerpts presented in an audio-only format, LE
performance ratings were significantly lower; there was no difference between HE and
audio-only ratings.


Effects of Observed Music-Gesture Synchronicity on Gaze and Memory

Lauren Hadley,* Dan Tidhar,# Matthew Woolhouse


*Department of Psychology, Goldsmiths College, University of London, England; #Faculty of
Music, University of Cambridge, England; School of the Arts, McMaster University, Canada

Following a previously undertaken dance experiment, which found that music-gesture
synchronicity (as in dance) enhanced social memory (Woolhouse & Tidhar, 2010), this study
examined the factors which could be seen to underlie this effect. Both gaze time and gaze
quality were considered. The experiment involved two videos of a dancer presented beside
each other, accompanied by an audio track in time with only one of the two visuals. The
visual stimuli each involved the same dancer, clothed in two similar outfits of different
colours. As participants viewed the stimulus their eye-movements were recorded using a
webcam. Subsequently, the subjects memory of the dancers clothing was tested by them
colouring-in two schematic diagrams of the dancer, one for each of her outfits. Two
hypotheses were tested in this experiment: (1) that gaze would be directed more towards
the video in which the dancer and audio were matched (synchronised dance video or SDV),
and (2) that memory of clothing would be better for the synchronised dance video than for
the desynchronised dance video (or DDV), i.e. the video in which the dancer and audio were
mismatched. The results indicated a tendency for participants to focus for longer on the SDV
than the DDV, but did not show a correlation between music-dance synchronicity and
memory of clothing. Post hoc analysis suggested that instead, size or area of clothing item
correlated to its memorability. These findings are discussed in relation to various
hypothesised modes of entrainment.

Extracting Action Symbols From Continuous Motion Data

Kristian Nymoen,1 Arjun Chandra,1 Mariusz Kozak,2 Rolf Inge Godøy,3 Jim Tørresen,1 Arve
Voldsund3
1Dept. of Informatics, University of Oslo, Norway; 2Dept. of Music, University of Chicago, IL,
USA; 3Dept. of Musicology, University of Oslo, Norway

Human motion can be seen as a continuous phenomenon which can be measured as a series
of positions of body limbs over time. However, motion is cognitively processed as discrete
and holistic units, or chunks, ordered by goal-points with trajectories leading between these
goal-points. We believe this is also the case for music-related motion. With the purpose of
utilising such chunks for the control of musical parameters in mobile interactive systems, we
see substantial challenges in developing a robust automated system for identification of
motion chunks and extracting segments from the continuous data stream. This poster
compares several automated segmentation techniques for motion data, applied to recordings
of people moving to music. An experiment has been carried out, where 44 participants were
given the task of moving their body to short musical excerpts. The motion was recorded by
infrared motion capture, with markers on the right wrist, elbow, shoulder and the C7
vertebra. In order to make the segmentation techniques easily transferable to mobile devices,
the automated segmentation was based only on the data from the right wrist marker.
A human observing 3D point light displays of the motion recordings of the whole arm (wrist,
elbow, shoulder, neck) demarcated chunks by looking at perceptually salient moments in the
recordings. The chunks demarcated by the human were used as a baseline for evaluating the
precision and recall rates of the automated segmentation techniques.
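
One plausible instance of such a technique (a hypothetical sketch, not necessarily among those the poster compares) segments the wrist stream at salient dips in movement speed and is then scored against the human baseline with precision and recall:

import numpy as np
from scipy.signal import argrelmin

def segment_boundaries(positions, fs):
    """positions: (n, 3) wrist trajectory; returns boundary times in seconds."""
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fs
    minima = argrelmin(speed, order=int(0.25 * fs))[0]  # salient speed dips
    return minima / fs

def precision_recall(detected, reference, tol=0.2):
    hits = sum(any(abs(d - r) <= tol for r in reference) for d in detected)
    precision = hits / len(detected) if len(detected) else 0.0
    recall = (sum(any(abs(d - r) <= tol for d in detected) for r in reference)
              / len(reference))
    return precision, recall

# Toy oscillatory wrist motion with speed minima near 0.5 s, 1.5 s, 2.5 s.
fs = 100
t = np.arange(300) / fs
wrist = np.stack([np.sin(np.pi * t), np.zeros_like(t), np.zeros_like(t)], axis=1)
print(precision_recall(segment_boundaries(wrist, fs), [0.5, 1.5, 2.5]))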


Embodied musical gestures as a game controller
Charlie Williams
University of Cambridge, UK

With the increasing prevalence of portable electronic devices and the concomitant
pervasiveness of casual gaming, interest in the potential musical effects of this growth has
increased. Michiel Kamp (2010) in particular surveys the gaming field looking for ludic
music, ultimately calling for it more as a future goal than as an aspect of currently available
games. I present a digital game-based model for music-making and musicianship-learning,
grounded in embodied spontaneity and sociality rather than the extant music-theoretical,
ear-training, or rote practice models. A series of four mobile-device app games in
development is described, in which live musical gestures (singing or clapping) serve as the
control mechanism. For example, in one game a group of pitch classes is represented by a
row of gates, which close when a pitch is sung and then open slowly over time. In that game
mechanic, the goal is to break bricks by bouncing the ball off the closed gates; to do so a
user must accurately self-represent the pitch internally, and then perform the pitch required,
all within a timeframe bounded by the specifics of the game's physics simulation. Other
games focus variously on controlling the high-low/loud-soft distinction rather than
producing specific pitch classes, and on rhythmic pattern-clapping. The rhythm-based games
do not require a fixed tempo but rather include a mechanism for mutual tempo entrainment
between player and device. Gameplay and demographic data are gathered in both laboratory
and in vivo settings, and a preliminary analysis of this data will be presented at the
conference. A hypothesis that musicality is at least partially constructed through increasingly
sophisticated manipulation of a vocabulary of potential gestures will be evaluated in light of
these findings.
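
The gate mechanic outlined above can be sketched abstractly as follows. This is an invented illustration of the idea, not the app's actual code; a pitch-detection front end is assumed to supply the sung frequency in Hz.

import math

class GateRow:
    def __init__(self, pitch_classes, reopen_per_sec=0.25):
        self.openness = {pc: 1.0 for pc in pitch_classes}  # 1.0 = fully open
        self.reopen_per_sec = reopen_per_sec

    @staticmethod
    def freq_to_pitch_class(freq_hz):
        midi = 69 + 12 * math.log2(freq_hz / 440.0)
        return int(round(midi)) % 12

    def on_sung_frequency(self, freq_hz):
        pc = self.freq_to_pitch_class(freq_hz)
        if pc in self.openness:
            self.openness[pc] = 0.0  # the matching gate snaps shut

    def update(self, dt):
        for pc in self.openness:     # all gates drift open again over time
            self.openness[pc] = min(1.0, self.openness[pc]
                                    + self.reopen_per_sec * dt)

row = GateRow([0, 4, 7])      # toy gates for the pitch classes C, E, G
row.on_sung_frequency(261.6)  # sing roughly C4: gate 0 closes
row.update(2.0)               # two seconds later it is half open again
print(row.openness)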


The Coupling of Gesture and Sound: The Kinematics of Cross-Modal Matching
for Hand Conducting Gestures and Accompanying Vocal Sounds
Aysu Erdemir,1 Erdem Erdemir,2 Emelyne Bingham,3 Sara Beck,1 John Rieser1
1Psychology and Human Development in Peabody College, Vanderbilt University, USA
2Electrical Engineering and Computer Science, Vanderbilt University, USA
3Blair School of Music, Vanderbilt University, USA

Physical movements of musicians and conductors alike play an important role in music
perception. This study was designed to identify whether there was a predictable
mathematical relationship between hand gestures performed by an expert conductor and
vocal responses of a general adult sample with and without musical background. Our
empirical work has found that adults systematically vary their utterance of the syllable
/dah/ in a way that matches the motion characteristics of the hand gestures being observed,
but the physical nature of this relationship remained unclear. The movements of the
conductor were captured using a high-resolution motion capture system while she
performed four different hand gestures, namely flicks, punches, floats and glides, at constant
tempo. The kinematic features such as position and velocity were extracted from the motion
data using a computational data quantification method. Similarly, an average RMS amplitude
profile was computed from the repeated utterances of /dah/ given each gesture across all
participants. The kinematic features were, then, compared to their amplitude counterparts in
the audio tracks. A correlation analysis showed very strong relations among the velocity
profiles of the movements and their accompanying sound-energy profiles. Deeper analysis
showed that initial velocity in the motion data reliably predicted the RMS amplitude of the
auditory counterparts, i.e. faster initial speed elicited louder responses. The observed
structural similarity between the movement and sound data might be due to a direct
mapping of the visual representation of the observed action onto one's own motor
representation, which is reflected in its resultant auditory effects.
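
The core comparison, correlating a gesture's velocity profile with the RMS amplitude profile of the accompanying vocalization, can be sketched as follows. Both signals are synthetic toy data assumed to be reduced to a common frame rate; this is not the authors' pipeline.

import numpy as np

def rms_envelope(audio, frame_len):
    n = len(audio) // frame_len
    frames = audio[:n * frame_len].reshape(n, frame_len)
    return np.sqrt(np.mean(frames ** 2, axis=1))

fs_audio, fps_mocap = 44100, 100
t = np.linspace(0, 1, fs_audio)
loudness = np.exp(-5 * t)                        # decaying "dah" loudness
audio = loudness * np.random.default_rng(4).standard_normal(fs_audio)
velocity = np.exp(-5 * np.linspace(0, 1, fps_mocap))  # matching hand velocity

rms = rms_envelope(audio, fs_audio // fps_mocap)  # one value per mocap frame
print("velocity-RMS correlation:",
      np.corrcoef(velocity, rms[:fps_mocap])[0, 1])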


Intelligent dance moves: rhythmically complex and attractive dance
movements are perceived to reflect higher intelligence

Suvi Saarikallio, Geoff Luck, Birgitta Burger, Marc R. Thompson, Petri Toiviainen
Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music,
University of Jyväskylä, Finland

Dance movement has been shown to reflect individual characteristics, such as the personality
of the dancer, and certain types of movements are generally perceived as more attractive
than others. We investigated whether particular dance movements would be perceived as
illustrative of a dancer's intelligence. As intelligence generally refers to the ability to adapt to
complex, changing conditions, we studied movement features indicating complexity, and
because people generally co-associate different positive characteristics, we studied features
typically perceived as attractive. The role of the observer's mood and music preference was
also studied. Sixty-two adults (28 males, mean age 24.68) were presented with 48 short
(30 s) audiovisual point-light animations of other adults dancing to music representing
different genres of dance music (pop, latin, techno). The participants were instructed to rate
the perceived intelligence of the dancer in each excerpt. In addition, they rated their mood
and activity levels before, and their preference for the music after, the experiment. Movement
features expressive of complexity and attractiveness were computationally extracted from
the stimuli. Men gave significantly higher intelligence ratings for female dancers with wider
hips, greater hip-knee phase ratio, and greater movement complexity as indicated by metrical
irregularity. However, female observers' ratings were not influenced by the movement
characteristics. Moreover, while music preference did not influence the ratings, current
positive mood and higher energy level biased male observers to give higher intelligence
ratings for female dancers. The study shows that rhythmically complex and generally
attractive movement appears to be perceived as indicative of intelligence, particularly by men
rating female dancers. Overall, the study provides preliminary evidence that certain music-
related movements are perceived as expressive of inferred personal characteristics such
as intelligence.


The Impact of Induced Emotions on Free Movement

Edith Van Dyck,* Pieter-Jan Maes,* Jonathan Hargreaves,# Micheline Lesaffre,* Marc Leman*
*Department of Arts, Music and Theater Sciences, Ghent University, Belgium
#Department of Music, Trinity Laban Conservatoire of Music and Dance, UK

The goal of this study was to examine the effect of two basic emotions, happiness and
sadness, on free movement. A total of 32 adult participants took part in the study. Following
an emotion induction procedure intended to induce emotional states of happiness or sadness
by means of music and guided imagery, participants moved to an emotionally neutral piece
of music that was composed for the experiment. Full-body movement was captured using
motion capture. In order to explore whether differences in corporeal articulations between
the two conditions existed, several movement cues were examined. The criteria for selecting
these cues were based on Effort-Shape analysis. Results revealed that in the happy condition,
participants showed faster and more accelerated body movement. Moreover, movements
proved to be more expanded and more impulsive in the happy condition. These findings
provide evidence of the effect of emotion induction as related to body movement.

Speed Poster Session 35: Timber II Hall, 11:00-11:40


Acoustics & timbre perception

Beyond Helmholtz: 150 Years of Timbral Paradigms


Kai Siedenburg,* Christoph Reuter#
* Austrian Research Institute for Artificial Intelligence, Austria
# Musicological Institute of the University of Vienna, Austria

This article locates Helmholtz's groundbreaking research on timbre, and a few of its historical
implications, in terms of musical and mathematical coordinates. Through selected
timbre-related examples it describes how music-aesthetic ideals, mathematical
theories and acoustics research systematically interdepend. After repositioning Helmholtz's
work with respect to Fourier's theorem, two musical perspectives are considered:
Schoenberg's vision of Klangfarbenmelodie and Xenakis's quest for sonic granularity. It is
moreover suggested that the 1960 ANSI definition be regarded as a late echo of Helmholtz's
reign. The evolution of the multidimensional-scaling-based timbre space model is briefly
outlined, before observing a plurality of mathematical approaches that seems to mark current
research activities in acoustics.


Ecological factors in timbre perception

Jens Hjortkjær
Department of Arts and Cultural Studies, University of Copenhagen, Denmark

Recent meta-analyses of timbre perception studies have suggested that physical aspects of
the instrument sources are picked up in timbre perception. In particular, continuous
representations of perceived timbre similarities (timbre spaces) appear to reflect categorical
information about the material composition of the instruments and about the actions
involved in playing them. To examine this experimentally, twenty listeners were asked to
rate the similarity of impact sounds representing categorically different actions and
materials. In a weighted multidimensional scaling analysis of the similarity ratings we found
2 latent dimensions relating to materials and actions, respectively. In an acoustic analysis
of the sound stimuli, we found the material-related dimension to correlate with the centroid
of the long-term spectrum, while the action-related dimension was related to the temporal
centroid of the amplitude envelope. The spectral centroid is also a well-known and robust
descriptor across musical timbre studies, suggesting that the distribution of frequencies is
perceptually salient because it carries information about the material of the sound source.
More generally, the results suggest that listeners attend implicitly to particular aspects of the
continuous sound stimulation that carry higher-order information about the sounding
source.
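
Both descriptors named here can be stated precisely. In the minimal sketch below (toy impact-like signals, not the study's stimuli), the spectral centroid is the amplitude-weighted mean frequency and the temporal centroid is the energy-weighted mean time of the amplitude envelope:

import numpy as np

def spectral_centroid(x, sr):
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

def temporal_centroid(x, sr):
    env = np.abs(x)
    t = np.arange(len(x)) / sr
    return np.sum(t * env) / np.sum(env)

sr = 44100
t = np.arange(sr) / sr
struck = np.exp(-20 * t) * np.sin(2 * np.pi * 800 * t)  # fast decay: "hit"
bowed = np.exp(-2 * t) * np.sin(2 * np.pi * 800 * t)    # slow decay: "sustained"
print(temporal_centroid(struck, sr), temporal_centroid(bowed, sr))
print(spectral_centroid(struck, sr))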


Establishing a spectral theory for perceptual timbre blending based on
spectral-envelope characteristics

Sven-Amin Lembke, Stephen McAdams


CIRMMT, Schulich School of Music, McGill University, Canada

A perceptual theory for timbre blending is established by correlating acoustical and
perceptual factors between orchestral wind instruments, based on an acoustical description
employing pitch-invariant spectral envelopes. Prominent spectral maxima (formants)
derived from the spectral envelopes serve as the acoustical factors under investigation.
Relevant perceptual correlates were determined through a behavioral experiment, which
investigated perceptual performance across different instruments, pitches, intervals and
stimulus contexts. The experimental task involved ratings of the relative degree of
perceptual blend for a total of 5 sound dyads. The dyads comprised concurrent presentations
of a constant recorded wind instrument sound paired with variable synthesized sounds, with
each dyad employing a different parametric manipulation of synthesized spectral-envelope
maxima. Relative frequency location and magnitude differences between formants can be
shown to bear a pitch-invariant perceptual relevance to timbre blend for several
instruments, with these findings contributing to a perceptual theory of orchestration and
furthermore offering a possibility to predict perceptual blend based on acoustical spectral-
envelope descriptions.


Comparative study of saxophone multiphonic tones. A possible perceptual
categorization
Martín Proscia, Pablo Riera, Manuel C. Eguía
Laboratorio de Acústica y Percepción Sonora, Universidad Nacional de Quilmes, Argentina

A number of studies have been devoted to the production of multiphonics in woodwinds, focusing
on the possibilities and difficulties of intonation, fingering, pitch of components, and production
of trills. However, most of them disregard the timbral and dynamic qualities of these tones, or are
aimed at the detailed analysis of a few multiphonic examples. Recent research has also served to
unveil the physical principles that give rise to these complex tones, including the interaction with
the vocal tract of the performer. In comparison, the psychophysics of multiphonic perception
has received much less attention, and a complete picture of how these multiple sonorities are
eventually grouped into perceptual classes is still missing. This work presents a comparative
study of a comprehensive collection of saxophone multiphonics, from which a possible
categorization into perceptual classes is derived. In order to do this, a threefold analysis is
performed: musical, psychoacoustical and spectral. Based on previous research from the musical
perspective, an organization of the perceptual space for the multiphonics into four main classes
was proposed. As a first step, a total of 120 multiphonic tones of the alto saxophone, spanning a
wide spectrum of possible sonorities, were analyzed using Schaeffer's concept of the sound object.
From this analysis, a representative subset of 15 multiphonic tones was selected, including
samples for each of the four groups proposed. These representative tones were used in a
psychoacoustical experiment (pair-comparison test) in order to obtain judgements of similarity
between them. The results were analyzed using multidimensional scaling. Finally, by
means of a spectral analysis of the tones, possible cues used by the listeners to evaluate
similarity were obtained. As a main result, multidimensional scaling shows a perceptual
organization that closely resembles the classification proposed from the musical point of view,
clustering the four main classes in a two-dimensional space. From the spectral analysis, a possible
correspondence of the two meaningful dimensions with the number of components and the pitch
of the lower component was analyzed. A perceptual categorization of multiphonics is of
utmost importance in musical composition. This work advances a possible organization of
these tones for the alto saxophone that could eventually be extended to other woodwind
instruments.
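
The scaling step, from pairwise similarity judgements to a low-dimensional perceptual map, can be sketched as follows; the similarity matrix is invented toy data for 4 tones rather than the study's 15-tone ratings.

import numpy as np
from sklearn.manifold import MDS

similarity = np.array([          # 1 = judged identical, 0 = maximally different
    [1.0, 0.8, 0.2, 0.1],
    [0.8, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.7],
    [0.1, 0.2, 0.7, 1.0],
])
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(1.0 - similarity)
print(coords)  # tones 0/1 and 2/3 should fall into two clusters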


Comparison of Factors Extracted from Power Fluctuations in Critical-Band-
Filtered Homophonic Choral Music

Kazuo Ueda, Yoshitaka Nakajima


Department of Human Science and Center for Applied Perceptual Research, Kyushu University,
Japan

A consistent pattern of three factors, which led to four common frequency bands with
boundaries of about 540, 1720, and 3280 Hz, had been obtained from factor analyses of
power fluctuations of critical-band-filtered spoken sentences in a variety of
languages/dialects. The aim of the present investigation was to clarify whether the same
factors and frequency bands could be found in homophonic choral music sung with texts in
English, Japanese, or nonsense syllables, or with mono-vowel vocalization. Recordings of
choral music were analyzed. Three factors and four frequency bands similar to those
obtained from spoken sentences appeared in the analyses of music with ordinary texts in
English and Japanese. However, no distinct structure was observed in the analysis of a tune,
which was sung with no text but a mimicked buzz of bumblebees, and another tune, which
was vocalized with a single vowel. Thus, it was suggested that the patterns of the first three
factors could appear if there was a certain amount of syllable variety in choral music, and
that basically the same frequency channels were utilized for conveying speech information
both in spoken sentences and in choral music.
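
The pipeline sketched below is an illustrative reconstruction under stated assumptions, not the authors' code: a signal is split into coarse bands at the reported boundaries, each band's short-term power series is computed, and the band-by-time power matrix is factor-analyzed.

import numpy as np
from scipy.signal import butter, sosfilt
from sklearn.decomposition import FactorAnalysis

sr = 16000
signal = np.random.default_rng(5).standard_normal(sr * 10)  # toy 10 s signal
edges = [100, 540, 1720, 3280, 6400]  # coarse bands at the reported boundaries

def band_power_series(x, lo, hi, sr, frame=400):
    sos = butter(4, [lo, hi], btype="band", fs=sr, output="sos")
    y = sosfilt(sos, x)
    n = len(y) // frame
    return (y[:n * frame].reshape(n, frame) ** 2).mean(axis=1)

powers = np.column_stack([band_power_series(signal, lo, hi, sr)
                          for lo, hi in zip(edges, edges[1:])])
fa = FactorAnalysis(n_components=3).fit(np.log(powers + 1e-12))
print(fa.components_)  # factor loadings per frequency band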


Analysis of Musical Timbre Semantics through Metric and Non-Metric Data
Reduction Techniques

Asterios Zacharakis, Konstantinos Pastiadis, Joshua D. Reiss, George Papadelis


Queen Mary University of London, Centre for Digital Music, London, U.K.
School of Music Studies, Aristotle University of Thessaloniki, Greece

This study investigated the underlying structure of musical timbre semantic description.
Forty-one musically trained subjects participated in a verbal attribute magnitude estimation
listening test. The objective of the test was to rate the perceptual attributes of 23 musical
tones using a predefined vocabulary of 30 English adjectives. The perceptual variables (i.e.
adjectives) were then analyzed through Cluster and Factor Analysis techniques in order to
achieve data reduction and to identify the salient semantic dimensions of timbre. The
commonly employed metric approach was accompanied by a non-metric counterpart in
order to relax the assumption of linear relationships between variables and to account for
the presence of monotonic nonlinearities. This rank transformation into an ordinal scale has
offered a more compact representation of the data and thus confirmed the existence of
nonlinearities. Three salient, relatively independent perceptual dimensions were identified
for both approaches, which can be categorized under the general conceptual labels:
luminance, texture and mass.
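
The metric/non-metric contrast can be illustrated roughly as follows; the ratings are random placeholders, and a rank transformation stands in for the non-metric treatment described above.

```python
# A minimal sketch of the metric vs. non-metric comparison: factor-analysing
# raw ratings versus rank-transformed ratings (the latter relaxes linearity
# to monotonicity). Placeholder data, not the study's 41 x 23 x 30 ratings.
import numpy as np
from scipy.stats import rankdata
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
ratings = rng.random((23, 30))  # 23 tones rated on 30 adjective scales

metric_fa = FactorAnalysis(n_components=3).fit(ratings)

# Non-metric counterpart: replace each adjective's ratings by their ranks.
ranked = np.apply_along_axis(rankdata, 0, ratings)
nonmetric_fa = FactorAnalysis(n_components=3).fit(ranked)

print(metric_fa.components_.shape, nonmetric_fa.components_.shape)
```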


A physical modelling approach to estimate clarinet control parameters

Vasileios Chatziioannou,* Maarten van Walstijn#


*Institute of Musical Acoustics, University of Music and Performing Arts Vienna, Austria
#School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast,
UK

Using a physical model of a musical instrument, a set of physically meaningful parameters
can be translated into audio. By varying several of the model parameters it is possible to
establish how this affects the timbre and perception of the resulting sound. Working in the
opposite direction, physics-based analysis aims to estimate the values of the physical model
parameters from the oscillations of the instrument. Such an approach offers a method for
estimating parameters that are difficult, if not impossible, to measure directly under real
playing conditions. The (inverse) physical model formalises the causal relationship between
the sound and the parameters, which facilitates investigating how the physical parameters
that configure and drive the original sound generation process relate and map to the
perception of that sound. Of particular interest is the possibility of feature extraction from a
recorded sound on this basis. The presented physical model of a clarinet consists of a non-
linear lumped model of the reed-mouthpiece-lip system coupled to a linear approximation of
a cylindrical bore. Starting from the pressure and flow signals in the mouthpiece, a two-step
optimisation method is developed that estimates physical parameters of the lumped model
(blowing pressure, initial reed opening, effective stiffness and further reed properties). The
presented physical analysis approach reveals a possible methodology for extracting useful
information about the actions of the player, and how the control of the instrument is
achieved by modulating several of the model parameters.
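
A sketch of the general inverse-modelling idea, under strong simplifications: `simulate_reed` below is a hypothetical toy surrogate, not the authors' reed-mouthpiece model, and a single least-squares fit stands in for their two-step optimisation.

```python
# Hedged illustration of physics-based parameter estimation: fit a few
# lumped-model parameters by minimising the mismatch between observed and
# simulated signals. All quantities here are placeholders.
import numpy as np
from scipy.optimize import least_squares

def simulate_reed(params, t):
    # Toy surrogate model: a damped oscillation whose amplitude, decay and
    # offset loosely play the roles of blowing pressure, reed stiffness and
    # initial reed opening.
    pressure, stiffness, opening = params
    return pressure * np.exp(-stiffness * t) * np.cos(2 * np.pi * 150 * t) + opening

t = np.linspace(0, 0.05, 2000)
observed = simulate_reed([1.2, 30.0, 0.4], t)  # placeholder "measurement"

def residuals(params):
    return simulate_reed(params, t) - observed

fit = least_squares(residuals, x0=[1.0, 20.0, 0.0])
print(fit.x)  # recovered parameter estimates
```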

Investigating consistency in verbal descriptions of violin preference by
experienced players

Charalampos Saitis,1 Claudia Fritz,2 Catherine Guastavino,3 Bruno L. Giordano,4 Gary P.
Scavone1
1Schulich School of Music, CIRMMT, McGill University, Montreal, Canada
2Lutheries-Acoustique-Musique, Université Pierre et Marie Curie, UMR CNRS 7190, Paris, France
3School of Information Sciences, CIRMMT, McGill University, Montreal, Canada
4Institute of Neuroscience and Psychology, University of Glasgow, Scotland, UK

This paper reports content analyses of spontaneous verbal descriptions collected in a
perceptual experiment investigating intra-individual consistency and inter-individual
agreement in preference judgments by experienced violinists. In the experiment (in two
identical sessions 37 days apart) 20 musicians played 8 violins of different make and age
and were asked to rank them in order of preference (from least to most preferred), and
provide rationale for their choices through a specially designed questionnaire. The responses
were classified in semantic categories emerging from the free-format data and all
occurrences in each category were counted. Results for self-consistency and inter-individual
agreement in the preference criteria are in close agreement with previous observations
concerning the preference rankings of the participants: violinists are quite self-consistent
but there is a considerable lack of agreement between individuals. However, further analyses
yielded no obvious relationship between verbal and nonverbal consistency within and across
violin players.

Speed Poster Session 36: Grand Pietra Hall, 11:40-12:10
Social perspectives
Dancing with death: music festivals, healthy and unhealthy behaviour

Alexandra Lamont
Centre for Psychological Research, Keele University, United Kingdom

Popular music festivals are growing in popularity, and certain types of festival have become
associated with different unhealthy behaviours such as alcohol and drug abuse. While
research has highlighted the considerable wellbeing that festivals can provide, little is known
about the unhealthier elements of music festivals. This project explores the choices festival-
goers make around healthy and unhealthy behaviour, and attitudes towards risk and
pleasure in relation to music. The research uses ethnographic methods at a three-day
residential (camping) electronic dance music festival, with observational data, an online
survey of 76 festival-goers completed after the event, and follow-up telephone interviews.
Across all ages, many participants reported an unhealthy set of behaviours (combining legal
and illegal drugs) as their route towards wellbeing, in a setting which provides an alternative
reality ("the giant bubble of happyness" [sic]) alongside a supportive social situation which
minimizes the perceived risks of such unhealthy behaviour. Emerging themes included
escape from reality, the importance of social connections, and a sense of control over use of
illegal drugs. Memories of the event are somewhat hazy for many participants, and other
behaviour is less planned (e.g. rarely is attention paid to set lists or attempts to hear
particular DJs or artists). The results show that many festival-goers prioritise a direct route
to pleasure through hedonism. The illusion of safety of the festival context leads to more
risky behaviour than is typical in festival-goers' everyday life, and this altered perception of
risk poses concerns in terms of health and wellbeing.


Deriving Musical Preference Profiles from Liked and Disliked Artists

Rafael Ferrer, Tuomas Eerola


Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä,
Finland

Music preferences are typically determined by asking participants to rate the degree of liking
for music genres. These genre-based measures have certain pitfalls, since specific pieces of
music in a genre might be liked more than the genre itself, and finding consensus in a
definition of a genre is often a daunting task. We developed a tool that captures music
preferences in an intuitive fashion and creates music preference profiles that are highly
comparable across participants. The tool requires from the participant to give names of three
liked and disliked artists. From these, the tool constructs a profile resembling those
traditionally obtained with genre-based measures. In addition, the tool can also produce
other items than genres, such as adjectives, affect constructs or music preference factors
underlying the given artist names as the output. The underlying algorithm uses online
resources (EchoNest and Last.fm) to provide definitions on the items targeted by the
researcher. The effectiveness of the tool was evaluated with two surveys (N=346 and N=861)
in which genre-based preferences and liked and disliked artists were obtained. The
comparison between the two measures demonstrates highly similar results in over 70% of
the cases. The remaining cases typically showed mismatches between artists and genres. The
results underline how genres may not always reflect the actual choice of liked artists,
because they represent a problematic notion for a music preferences measure. The tool is
presented as an alternative to common music preference instruments that assume a
homogeneous musical knowledge in their sampled population.
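
The profile-building idea might be sketched as follows; `get_artist_tags` is a hypothetical stand-in for lookups against online resources such as Last.fm or EchoNest, and no real API is called.

```python
# Hedged sketch of an artist-based preference profile: liked artists' tags
# count up, disliked artists' tags count down. Placeholder data only.
from collections import Counter

def get_artist_tags(artist):
    # Placeholder lookup table instead of a web request.
    fake_db = {
        "Miles Davis": ["jazz", "cool jazz", "instrumental"],
        "Metallica": ["metal", "thrash metal", "rock"],
    }
    return fake_db.get(artist, [])

def preference_profile(liked, disliked):
    profile = Counter()
    for artist in liked:
        profile.update(get_artist_tags(artist))    # liked tags count up
    for artist in disliked:
        profile.subtract(get_artist_tags(artist))  # disliked tags count down
    return profile

print(preference_profile(["Miles Davis"], ["Metallica"]))
```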


You get what you pay for: pitch and tempo alterations in user-posted YouTube
videos

Joseph Plazak
School of Music, Illinois Wesleyan University, USA
Despite the widespread availability of free streaming music hosted by YouTube.com, many
YouTube videos contain music that has been altered from the original recording in some
way, including alterations of pitch, tempo, or timbre. The factors and motivations guiding
these alterations remain unknown. The aims of this study were to determine the prevalence
of pitch and tempo alterations in user-posted YouTube videos, and also to determine the
direction and magnitude of these pitch and tempo alterations. In an initial study, 75% of 100
collected YouTube recordings contained a nominal alteration of pitch and/or tempo (+/-
1Hz; +/- 3bpm). Thirty-four of these recordings contained a pitch alteration equal to or
larger than a half step (m2). Further analysis of the data revealed that pitch levels of the
sample set were equally likely to be higher or lower, but decreasing the tempo of a recording
was more prevalent than increasing the tempo. Additional studies may consider
investigating if specific characteristics of the music are influencing the direction and
magnitude of YouTube users' alterations. Such characteristics may include: the type/style of
music, the vocalist's gender in the music being altered, the release date of the recording, etc.


The attribution of agency to sound can affect social engagement

Jacques Launay, Roger T. Dean, Freya Bailes


MARCS Institute, University of Western Sydney, Australia

The purpose of music, or the reasons behind its spread and development amongst human
cultures, is a contentious topic. One explanation put forward, that music can enhance the
social relationships of people who engage with it communally, has a potential flaw that has
become striking in the last century: people enjoy engaging with music alone, and perhaps the
majority of the time people spend listening to music is in isolation. Does this mean social
cohesion arguments about music are untenable?
The set of experiments presented aims to test whether sound attributed with agency is able to
engage people in a more social way than sounds that are not attributed with agency. Two
experiments instructed participants to synchronise with sounds in the absence of
interpersonal contact, and demonstrated that when sounds are attributed with agency they
can affect subsequent social behaviour, similarly to synchronisation with observed
movement of another person. Experiment 1 showed that participants place greater trust in a
partner when they report better synchronisation with that partner, even in the absence of
interpersonal contact. Experiment 2 demonstrated that synchronisation with sounds that are
attributed to another person could affect ratings of likeability of that person. We conclude
that people engage differently with sounds that are attributed with agency, compared with
those that are not. As sounds with agency appear to have a greater capacity for affecting
subsequent social interaction, musical sounds, by virtue of being sounds with agency, may
also have some social quality, even when listened to alone.

Surveying attitudes towards singing and their impact on engagement with this
musical activity

Rita Bento Allpress,* Jesse Allpress#


*Sidney De Haan Research Centre for Arts and Health, Canterbury Christ Church University,
England; #School of Psychology, University of Sussex, England

Singing is the most natural of all musical activities and one that is readily accessible to most
individuals. It can be done alone or in a group, in different cultural settings, on different
occasions, and for the most diverse purposes (entertainment, grieving, religious rituals, alliance
rituals). A recent yet growing body of literature highlights the potential benefits of singing for
well-being and health. This evidence shows singing as an activity with several psychological,
physical and social components that can interact and contribute to feelings of well-being and
impact on the immune system. However, Bailey and Davidson (2002, 2005) highlight an elitist
view of music-making that is predominant in the Western world. According to those authors, this
musical elitism present in westernized societies not only views musical ability as being
limited to a talented minority, it also restricts the majority of the population to being procurers
rather than producers of music. If this musical elitism is present in our society, then it is possible
that it influences our engagement with singing activities. If this is indeed the case, then it is
possible that a majority of individuals in the western world are missing out on an activity that can
potentially benefit their well-being and even health. This study aimed to explore how our
attitudes towards singing influence our engagement with this musical activity. Specifically, we
hoped to see how people's opinions on their own voices, their own singing, singing in general and
the general singing voice influenced their likelihood of singing in public or private, in formal or
informal settings and in a group or on their own. We hypothesized that the majority of our respondents
share an elitist attitude towards singing. We expected this attitude to impact negatively on their
engagement with singing and this impact to be more pronounced when asked about public,
formal and solo singing. A survey was developed and made available online. Data were collected
until the spring of 2012 and suggested that a majority of our respondents share an elitist attitude
towards singing. For those who believe they are not part of the singing elite, singing is something
they do in private or informal settings. Approaches to research and promotion of singing for well-
being may have to start taking these attitudes into account.


Work Attitudes, Role Stress and Health among Professional Singers and Call
Center Employees
Maria Sandgren
Department of Culture and Communication, Södertörn University, Sweden

In the literature on artists and health problems, there is a lack of studies taking work conditions
and their impact on well-being and health into account. The specific work conditions for artists
can be summarized under the concept of boundaryless work, where the individual is facing short
term employment, increased demands on flexibility and personal responsibility. Research on, for
example, short-term employment and health shows inconsistent results. Professional classical
singers might constitute a very selected group of individuals who have been very successful in
coping with complex work circumstances. Yet, singers do not appear indifferent to work load, not
even in a familiar situation such as a singing lesson with their regular vocal coach. They are also at
increased risk of developing voice disorders. The aim of the study was to compare professional
singers in the classical genre with another group of professional voice users, call centre
employees, on variables such as work conditions, job satisfaction, health and vocal behaviour.
Professional classical singers (n=61, women n=33, men n=28) and call centre employees filled in a
questionnaire covering validated variables: qualitative and quantitative work load, perceived
performance, job satisfaction, work involvement, job autonomy, mental health and physical health
and vocal behaviour. Results indicated that qualitative work load and perceived performance
showed significant positive associations with impaired mental and physical health among singers.
Vocal behavior showed significant positive associations with job induced tension, perceived
external demands and quantitative work load. Job satisfaction showed significant positive
associations with work involvement, job autonomy and perceived performance. Effects of work
load were manifested both in vocal behaviour and mental health. Singers seemed to be positively
influenced, and not distressed, by the achievement-oriented nature of their work in that job
satisfaction was associated with a strong commitment and their personal contribution of high
artistry.

Speed Poster Session 37: Crystal Hall, 11:40-12:10


Emotional responses & affective experiences II

From Wanting to Liking: Listeners' Emotional Responses to Musical
Cadences as Revealed by Skin Conductance Responses
Chen-Gia Tsai
Graduate Institute of Musicology, National Taiwan University, Taiwan

Research on the emotional responses and brain activations evoked by music has been a topic of
great academic and public interest. A recent brain-imaging study by Salimpoor and colleagues
suggests the involvement of mechanisms for 'wanting' and 'liking' when subjects listened to
intensely pleasurable music. Their paper elaborates the functions of the reward system during
music listening. Inspired by their paper, the present study aims to explore listeners' responses to
authentic cadences by combining music analysis and physiological measures.
We hypothesize that cognition of the dominant chord and the following tonic chord may engage
mechanisms for 'wanting' and 'liking', respectively. The associated experiences of peak emotion
may be detected by measuring skin conductance. Participants' skin conductance was measured
during music listening. In Experiment 1, we used long music stimuli, including complete
Taiwanese popular songs (3-5 min) and excerpts of German art songs (50-100 sec). In
Experiment 2, we used 48 short music stimuli (<30 sec). A moving window of 2 sec was used to
detect significant increases of skin conductance within this window, i.e., skin conductance
responses. In Experiment 1, we observed that some authentic cadences tend to induce listeners'
skin conductance responses. Cadences combined with changes in tempo/loudness or the
recurrence of a theme tend to evoke large skin conductance responses. In Experiment 2, among
12 musical events that evoked significant skin conductance responses, only one event may be
related to an authentic cadence. An isolated musical cadence may be unable to evoke listeners'
experience of peak emotion. Regarding ecological validity, longer music excerpts are more
appropriate for investigating listeners' emotional responses to cadences. If an authentic cadence
combines with changes in tempo/loudness or the recurrence of a theme, listeners would have a
higher probability of experiencing intense emotions of 'wanting' and 'liking'. We suggest that skin
conductance measures and brain-imaging techniques may be important tools for future research
on the 'art' of elaborating musical cadences.
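
The windowed detection described above might look roughly like this; the trace, sampling rate and threshold are placeholders, not the study's recordings or criteria.

```python
# Illustrative sketch of moving-window SCR detection: flag windows in which
# skin conductance rises by more than a threshold within 2 seconds.
import numpy as np

fs = 32  # samples per second (assumed)
rng = np.random.default_rng(3)
sc = np.cumsum(rng.normal(0, 0.005, fs * 60))  # toy skin-conductance trace

win = 2 * fs          # 2-second window
threshold = 0.05      # minimum rise counted as an SCR (assumed value)

onsets = []
for i in range(len(sc) - win):
    rise = sc[i + win] - sc[i]
    if rise > threshold:
        onsets.append(i / fs)  # time in seconds where a response begins

print(f"{len(onsets)} candidate SCR window onsets")
```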

Limits on the Application of Statistical Correlations to Continuous Response
Data
Finn Upham
Music and Audio Research Lab, Department of Music and Performing Arts Professions,
Steinhardt School of Culture, Education, and Human Development, New York University, USA

How can we compare different listeners' experiences of the same music? For decades,
experimenters have collected continuous ratings of tension and emotion to capture the
moment-by-moment experiences of music listeners. Over that time, Pearson correlations
have routinely been applied to evaluate the similarity between response A and response B,
between the time series averages of responses, and between responses and continuous
178 12th ICMPC - 8th ESCOM Joint Conference, Aristotle University of Thessaloniki, 23-28 July 2012

FRI
descriptors of the stimulating music. Some researchers have criticized the misapplication
and misinterpretation of this class of statistics, but alternatives have not gained wide
acceptance. This paper looks critically at the applicability of correlations to continuous
responses to music, the assumptions required to estimate their significance, and what is left
of the responses when these assumptions are satisfied. This paper also explores an
alternative measure of cohesiveness between responses to the same music, and discusses
how it can be employed as a measure of reliability and similarity with empirical estimates of
significance.
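
For concreteness, a minimal sketch of the conventional practice being critiqued, with placeholder rating series; the caveat in the final comment reflects the paper's central point about autocorrelated data.

```python
# Pearson correlation between two continuous response time series. The naive
# p-value assumes independent samples, which smooth, strongly autocorrelated
# continuous ratings violate.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
t = np.linspace(0, 60, 120)            # 60 s of ratings at 2 Hz (placeholder)
response_a = np.sin(t / 5) + rng.normal(0, 0.3, t.size)
response_b = np.sin(t / 5) + rng.normal(0, 0.3, t.size)

r, p = pearsonr(response_a, response_b)
print(f"r = {r:.2f}, nominal p = {p:.3g}")
# The nominal p is untrustworthy here: autocorrelated series have far fewer
# effective degrees of freedom than t.size - 2.
```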

Towards a Three-Dimensional Model of Affective Experience of Music

Marija Trkulja, Dragan Janković


Department of Psychology, Faculty of Philosophy, University of Belgrade, Serbia

A number of studies have suggested that the two-dimensional valence-arousal model is not able to
account for all the variance in music-elicited affective experiences. The goal of this study is
further elaboration of the underlying dimensions of affective experiences of music.
Specifically, the aim of the first study was to empirically collect a set of attributes that
represents subjective, evaluative experience of music. Participants were asked to produce
attributes that could describe their subjective experience of the 64 presented musical excerpts,
selected to cover a wide spectrum of music genres, themes and instruments. The aim of the
second study was to establish the underlying structure of affective experience of music
through factor analytic study. Participants assessed 72 musical excerpts on the instrument
that consisted of 43 bipolar seven-point scales. The principal component analysis showed
that the underlying structure of affective experience of music consisted of three basic
dimensions, interpreted as affective valence, arousal and cognitive evaluation. Congruence
analysis indicated robustness of three obtained dimensions across different music stimuli
and participants.
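
A rough sketch of the second study's analysis type, with random placeholder ratings in place of the 72 excerpts rated on 43 bipolar scales.

```python
# Principal component analysis of excerpt-by-scale ratings, retaining three
# components as in the reported three-dimensional solution. Placeholder data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
ratings = rng.integers(1, 8, size=(72, 43)).astype(float)  # seven-point scales

pca = PCA(n_components=3)
scores = pca.fit_transform(ratings)
print(pca.explained_variance_ratio_)  # variance carried by the 3 dimensions
```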

How music can brighten our world: emotions induced by music affect
brightness perception

Job P. Lindsen, Joydeep Bhattacharya


Department of Psychology, Goldsmiths, University of London, UK

Can musical primes influence low level processing of visual target stimuli, which is classically
conceptualized as bottom-up perceptual processing immune from influences of top-down
processing? In three experiments, musical primes were used that were pre-rated as either
high or low along the dimensions of arousal and valence. In Experiments 1 and 2, a grey
square was presented before each prime and after its evaluation, and participants were
asked to judge whether the second square was brighter or darker than the first. Participants
were told that the changes in brightness were small but detectable, while in actuality a
square with identical brightness was presented twice. Exp. 2 was similar to Exp. 1 but
without active affective evaluations of the primes in order to investigate the automaticity in
musical affective evaluations. Exp. 3 was designed to control for potential memory effects;
only one grey square was presented on each trial after each musical excerpt, and participants
rated its absolute brightness on a grey scale. Exp. 1 showed that perception was biased in a
brighter direction following positively (vs. negatively) valenced music, and Exp. 2 showed
that this bias is automatic. A similar effect was observed in Exp. 1 for high arousal as
compared to low arousal musical primes. Exp. 3 showed that such biases were not caused by
memory effects, and absolute judgment of brightness was mostly modulated by happy
musical primes. These results suggest that general affective disposition of musical stimuli
can systematically induce perceptual biases across modalities.


Speed Poster Session 38: Dock Six Hall, 11:40-12:10


Music Therapy
Psychosomatic patients' satisfaction with music therapy treatment

Stella Kaczmarek,* Norbert Kieslich#


*Faculty of Music, University of Paderborn, Germany
#Dept of Psychosomatic, Klinik Rosenberg, Bad Driburg, Germany

In the last few years, patient satisfaction has gained more and more importance, both in
health-policy and economic terms, in scientific clinical investigation, and in music therapy
treatment. In the treatment of psychosomatic patients it is important to separate pure
patient satisfaction with the treatment from the attitude towards the music therapy. With
the aim of separating these two aspects, we developed a questionnaire with questions
about general satisfaction with the music therapy, the attitude towards the music therapy
before the treatment in comparison to the attitude after the end of the treatment, as well
as individual benefits from the music therapy and some personal characteristics. 100 adult
psychosomatic patients were surveyed in the psychosomatic clinic in Bad Driburg
(Germany). Our results confirmed the hypothesis that patients' satisfaction with music
therapy is connected with their attitude towards the treatment and previous musical activity.


Promoting Social Engagement for Young Children with Autism: a Music
Therapy Approach
Potheini Vaiouli
Indiana University, USA


Joint attention is a foundational non-verbal social-communication milestone that
fails to develop naturally in children with autism. This study used improvisational
music therapy for three young children identified with autism in a kindergarten
classroom. The three participants receive individual, weekly music therapy sessions
at their school. The study employs a mixed method design that uses improvisational
music therapy to enable joint attention, verbal or non-verbal communication, and
social interaction for the three participants. Also, a complementary qualitative
analysis explored the teachers' and parents' perspectives and variables that may
have influenced the intervention outcomes.

Music Therapy enhances perceptive and cognitive development in people with
disabilities: A quantitative study
Dora Psaltopoulou, Maria Micheli
School of Music Studies, Aristotle University of Thessaloniki, Greece
General Hospital Thessaloniki, Agios Paulos, Greece

A statistical study, designed to assess the effectiveness of Music Therapy for children and
adults with disabilities in Greece, shows that Music Therapy enhances perceptive and
cognitive development. The main assumptions were related to the types of populations
and the characteristics of their pathologies, as well as the role played by the
combination of different therapy modalities, so as to show the effectiveness of Music
Therapy in Greece. The key objective was to assess the effectiveness of music therapy
through the personal evaluations made by the parents of the subjects. The subjects'
characteristics and parental environments were documented as populations who participate
in the practice of music therapy in Greece. Quantitative research was conducted on 149
subjects with disabilities. Questionnaires were used as research instruments, which were
answered by the subjects' parents. The data were processed with the statistical package
SPSS v.12 with hypothesis validity set at a=0.05 and twofold crosschecking. Music Therapy is
effective regardless of the pathology of the subjects or the co-practice of other therapies such as
Occupation Therapy, Speech Therapy and Psychotherapy. The subjects participating in Music
Therapy sessions in Greece, children and young adults with disabilities, showed
improvement in listening ability, in the psychosocial function, in the intellectual ability and
the emotional growth.


Finding the right tone for the right words? Music therapy, EEG and fronto-temporal
processing in depressed clients

Jörg Fachner, Jaakko Erkkilä


Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä,
Finland

Fronto-temporal areas process shared elements of speech and music. Improvisational
psychodynamic music therapy (MT) utilizes verbal and musical reflection on emotions and
images arising from clinical improvisation. Music listening shifts frontal alpha
asymmetries (FAA) in depression and increases frontal midline theta (FMT). The purpose of
this study is to test whether or not MT has an impact on anterior resting state alpha and
theta oscillations of depressed clients with comorbid anxiety. In a two-armed randomized
controlled trial (RCT) with 79 clients, we compared standard care (SC) versus MT added to
SC at intake and after 3 months. Correlations between anterior EEG, Montgomery-Åsberg
Depression Rating Scale (MADRS) and the Hospital Anxiety and Depression Scale Anxiety
Subscale (HADS-A), power spectral analysis (topography, means, asymmetry) and normative
EEG database comparisons were explored. After 3 months of MT, lasting changes in resting
EEG were observed, i.e., significant absolute power increases at left fronto-temporal alpha,
but most distinct for theta (also at left fronto-central and right temporoparietal leads). MT
differed from SC at F7-F8 (z-scored FAA, p<.03) and T3-T4 (theta, p<.005) asymmetry scores,
pointing towards decreased relative left-sided brain activity after MT; pre/post increased
FMT and decreased HADS-A scores (r = .42, p < .05) indicate reduced anxiety after MT.
Verbal reflection and improvising on emotions in MT may induce neural reorganization in
fronto-temporal areas. Alpha and theta changes in fronto-temporal and temporoparietal
areas indicate MT action and treatment effects on cortical activity in depression, suggesting
an impact of MT on anxiety reduction.
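
One common way to compute a frontal alpha asymmetry score of the kind discussed here is sketched below; the signals are simulated noise, and the leads, band limits and sign convention are assumptions rather than the study's exact pipeline.

```python
# Frontal alpha asymmetry (FAA) sketch: log alpha power at a right frontal
# lead minus log alpha power at the homologous left lead. Simulated signals
# replace real EEG recordings.
import numpy as np
from scipy.signal import welch

fs = 256
rng = np.random.default_rng(6)
left_f7 = rng.standard_normal(fs * 30)   # placeholder 30 s EEG, lead F7
right_f8 = rng.standard_normal(fs * 30)  # placeholder 30 s EEG, lead F8

def alpha_power(x):
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    band = (f >= 8) & (f <= 13)          # alpha band, 8-13 Hz
    return pxx[band].mean()

faa = np.log(alpha_power(right_f8)) - np.log(alpha_power(left_f7))
print(f"FAA (F8 - F7): {faa:.3f}")  # since alpha is inversely related to
                                    # activation, positive values suggest
                                    # relatively greater left-sided activity
```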

Speed Poster Session 39: Timber I Hall, 11:40-12:10


Listening & Meaning

Towards a Cognitive Music Aesthetics

Ludger Hofmann-Engl
Department of Music, Coulsdon College

Following the ideas of Kurt Blaukopf, who pointed out that thinking in symmetries was not
confined to Baroque composition but could also be found elsewhere, such as in landscaping, this
paper introduces the concept of cognitive categories as found within different music
aesthetical approaches. Additionally, it claims that isomorphic cognitive categories can be
found in other areas of human activity such as philosophy, mathematics and politics. In order
to demonstrate the validity of this approach the concept of cognitive categories has been
applied to different time periods of Western civilization, commencing with the Middle
Ages and leading up to the avant-garde. Here, for instance, the paper makes the claim that the
cognitive category of force and counter-force is instrumental for the classical period and can
be found within the Sonata Form, Newton's Laws of Motion as well as within the concept of
thesis, anti-thesis and synthesis in the works of Hegel. The paper does not claim to be
comprehensive but to open up an area for research which has received little attention so far.


Music listening from an ecological perspective
Anders Friberg
KTH Royal Institute of Technology, Sweden

It is evident that we normally analyze sounds in our environment regarding the source properties
rather than the quality of the sound itself. This is natural in everyday listening considering that
the human perceptual system always tries to understand and categorize sensory input. From
the sound, we can estimate physical properties of objects, such as size and material. This
ecological approach can also be extended to human communication. From a person's voice we can
estimate identity, distance, effort, and emotion. From footstep sounds we can estimate gender and
other properties. This type of source perception is thus evident for environmental and human
sounds but is the same mechanism also active in music listening? It seems plausible if we consider
music as human-to-human communication. Also, as pointed out by Clarke (2005), it is hard to
make any distinction between everyday listening and music listening. Thus, we may assume that
both kinds of listening involve the same perceptual processing. We will present a broad spectrum
of perceptual features related to source properties that can be motivated from an
ecological/survival point-of-view and discuss their potential relevance in music listening. A
variety of different aspects are potentially important during music listening. Many of them are
self-evident and empirically validated, while some others still lack empirical evidence. Basic
object properties not related to human communication include: source separation - obviously
active in music listening; source localization - an important aspect in music reproduction;
size/material - related to musical instruments and timbre; classification/identification - related
to objects, humans or instruments; and deviation from expectation - considered a major mechanism
for creating meaning in music. There are several human properties that are relevant. Human
movement is related to music on a number of different levels, as evidenced by current research.
Energy relates to the physical effort used to produce the sound. Other human aspects include
intention, emotion, skill, and authenticity/sincerity. By analyzing music listening from an
ecological perspective, we can provide an alternative viewpoint that offers an explanation and
motivation of the musical meaning of many different musical aspects, ranging from instrument
sounds and melody to motion and emotion.


On musical intentionality: Motor knowledge and the development of musical
expertise

Andrea Schiavio
Department of Music, The University of Sheffield, UK

According to previous literature, skilled musicians develop a cross-modal expertise, using
different modalities and categories to understand a musical object. My hypothesis is that this
ability is based on the sensorimotor integration provided by the Mirror Mechanism,
implicitly assuming the existence of a musical repertoire of acts that musicians develop
throughout their life. In this behavioral experiment, participants (musicians and non-
musicians) are asked to familiarize themselves with four piano melodies under different conditions
(playing the melodies on the piano, seeing someone playing, and imagining them through a
silent-tapping task). Afterwards, the subjects will be asked to recognize these melodies
among a series of other similar auditory stimuli. I predict that non-musicians will rely primarily
on motor-based experience, recognizing more efficiently the pieces they have actually played
(hence constituting a musical vocabulary of acts), while musicians will not show a great
mismatch, despite the diverse modalities used to familiarize themselves with the musical excerpts. So,
this study has two aims: (i) to consolidate the hypothesis that skilled musicians have a cross-
modal intentional relationship with a musical object, independently from the modalities used
to intend it and (ii) to show that this kind of intentionality is motor in its roots.


Transported to Narrative Worlds: The Effects of a Narrative Mode of Listening
on Music Perception

Thijs Vroegh
Media and Culture Studies, University of Utrecht, the Netherlands

The tendency to ascribe agency to musical features and to interpret a series of musical
events as a type of story represents, besides musical emotions, a vital part of our capacity for
music understanding and our ability to find music meaningful. Indeed, a "narrative mode of
thought" may be significant in music listening. However, although the domain of music
psychology is involved with many conceptualizations of music experience such as music
absorption, imaginative involvement, deep listening, or strong experiences, scholars have so far
refrained from thinking of listening to music as a narrative experience, or from drawing on
the extensive literature concerning the reception of narrative in other domains (e.g.,
literature, film). It may therefore be useful to investigate these musical responses in
precisely those terms; that is, of actually being a narrative experience equivalent to those of
readers feeling transported in the fictional world created by the book. Music imbued with
narrative meaning (e.g., personality-driven associations and autobiographical memories)
that leads to the experience of transportation shares important aspects with the pleasurable
engagement with an immersive story in a book or film. It features transformations in
consciousness that demonstrate changes in attentional focus, arousal, altered experience of
time, thought processes and mental imagery. This suggests that the engagement with stories
and a narrative mode of thought triggered by music might share a number of deeper
psychological mechanisms.

What is the Sound of Citrus? Research on the Correspondences between the


Perception of Sound and Flavour
Kai Bronner*, Klaus Frieler, Herbert Bruhn#, Rainer Hirt*, Dag Piper
*audity, Germany; #University of Flensburg, Germany; University of Hamburg, Germany;
Mars, Germany

This study investigates systematic relationships between the perception of flavour and
sound with regard to underlying inter-modal attributes and recognisability. The research
was inspired by the question of whether it is possible to express a flavour acoustically, which might be
of practical interest, e.g., for audio branding applications. One preliminary and two main
experiments were conducted, in which participants tasted or imagined two flavours
(orange and vanilla), and had to perform several association and matching tasks. For the
second main experiment, short audio logos and sound moods were specially designed to
yield different citrus-like sounds. A wide range of significant differences between the two
flavour conditions were found, from which musical parameters could be extracted that are
suitable to represent the flavours of orange and vanilla. Furthermore, a few significant
differences between imagined and tasted stimuli showed up as well, hinting at an
interference of visual associations. In the second experiment, subjects were reliably able to
identify the principal flavour attributes from sound stimuli alone and to distinguish different
degrees of citrus-sounds.


Speed Poster Session 40: Timber II Hall, 11:40-12:10


Performance studies II
Unexpected Melodic Events during Music Reading: Exploring the Eye-
Movement Approach

Marjaana Penttinen,* Erkki Huovinen,# Anna-Kaisa Ylitalo


*Department of Teacher Education & Centre for Learning Research, University of Turku,
Finland
#School of Music, University of Minnesota, USA
Department of Mathematics and Statistics, University of Jyväskylä, Finland

Two studies examined the eye-movement effects of unexpected melodic events during music
reading. Simple melodic variants of a familiar tune were performed in a temporally
controlled setting. In a pilot study with five university students, unexpected alterations of the
familiar melody were found to increase the number of incoming saccades to the altered bar
and the bar immediately before the alteration. The main experiment with 34 music students,
incorporating several improvements to the experimental design, again showed an increase in
the number of incoming saccades to the bar before the alteration, but no effects in the altered
bar itself. In addition, the bar following the alteration showed a decrease in relative fixation
time and incoming saccades. These results are discussed with a view to future studies of eye
movements in music reading, emphasizing the need for more systematic research on truly
prima vista performance and, in general, temporally controlled music reading.

Mutual Gaze Facilitates Synchronization during Piano Duo Performances

Satoshi Kawase
Graduate School of Human Sciences, Osaka University, Japan

This study investigated the roles of gazing behaviour (specifically eye contact) during music
performances by focusing on coordination among performers. Experiment 1 was conducted
under four different visual-contact conditions: invisible, only the body visible, only the head
visible, and face-to-face. Experiment 2 was conducted under three different visual-contact
conditions: invisible, only the movable-head visible, and only the fixed-head visible; the fixed-head
condition was implemented using a chin rest. The results of experiment 1 showed that the
timing lag between performers did not vary significantly among the three conditions in
which visual cues were available. In both experiments, performers looked toward each other
just before changes of tempo, during which the two performers need to coordinate timing.
Under these three conditions, when performers looked toward each other at points of
coordination, it significantly improved synchronization accuracy. The results of experiment 2
showed that the timing lag was significantly shorter under the fixed-head condition than the
invisible condition, and significantly longer under the fixed-head condition than the
movable-head condition. Regardless of whether or not the head was fixed, the timing lag
decreased when performers made eye contact just before the beginning of the sound. On the
basis of two experiments, we conclude that mutual gaze is important for reducing timing lag
during a performance and that performers may utilize movements (body or head) as visual
cues for coordination since they can coordinate only loosely through eye contact alone
(without movement).

The Embodied Effect of Facial Expressions on Pianists' Performance
Interpretation

Hila Tamir-Ostrover,* Zohar Eitan,** Eric F. Clarke***


*Department of Music, Graduate School of Art and Science, New York University, USA
**Buchmann-Mehta School of Music, Tel-Aviv University, Israel
***Faculty of Music, University of Oxford, UK

Facial expression has been shown to affect emotional and cognitive processes, such that
smiling facilitates positively valenced emotion and related cognition. Here we examine
whether performers' interpretation is influenced by their facial expressions in a similar way.
16 professional pianists played two newly composed musical miniatures, each in a Major and
Minor version. The pieces were conventionally notated, but lacked tempo, dynamics and
articulation markings; performers were instructed to make use of these expressive
dimensions as they wished. Each piece was performed in 3 conditions. In two embodied
conditions, participants were asked to hold a wooden stick in their mouth in ways that either
facilitated or inhibited smile-like expression. In the control condition, participants played
with nothing in their mouth. Performances were audio recorded and analysed, focusing on
quantifiable parameters associated with valence or intensity in music, such as tempo (mean,
SD), note duration (articulation), and intensity (mean, SD). Both participants and 15
independent referees rated performances on evaluative and expressive scales. Results will
be reported at the conference. This is the first empirical examination of the effects of facial
expression on musical performance, examining the hypothesis that the bodily and emotional
aspects of performance influence each other bi-directionally. Furthermore, the study
investigates whether the embodied effect is transitive (i.e., conveyed from performer to
listener), thus examining whether embodied aspects of music-making are shared by different
musical activities such as listening and performance.

Recorded interpretations of Chopin Preludes: Performers' choice of score
events for emphasis and emotional communication
Erica Bisesi,* Jennifer MacRitchie#, Richard Parncutt*
*Center for Systematic Musicology, University of Graz, Austria
#Conservatorio della Svizzera Italiana, Lugano, Switzerland

What structural features characterize individual performers' styles? To what extent do
eminent pianists agree on segmentation and rendering of musical phrases? How much do
they agree on the selection of score events (accents) for local emphasis, and on how to emphasize
them? How do these choices influence the emotional responses of listeners? How do musical
expertise and cognitive style of listening influence listeners' responses? Our hypothesis is
that the location of the particular points emphasized by performers by means of expressive
deviations in timing and dynamics can provide clues as to a performer's interpretation
and communication of emotions. By asking 24 expert musicians to listen to 16 eminent
interpretations of two Chopin Preludes op. 28 (no. 7 and no. 11), and provide information
about perceived segmentation and emphasis on local events, as well as on the main emotions
associated with these pieces, we extract similarities in the segmentation and emphasis on local
events (phrase climaxes and accents), and discuss striking differences across the
performances. We group performances by cluster analysis and consider each cluster as an
interpretative style. We also correlate interpretative styles with intended emotion. Finally,
we discuss results in the light of participants' musical expertise and cognitive style of
listening. This work is supported by the Stand-Alone Project P 24336-G21 (Expression,
Emotion and Imagery in Music Performance), sponsored by the Austrian Fonds zur
Förderung der wissenschaftlichen Forschung (FWF).


Coping Strategies for Music Performance Anxiety: a Study on Flute Players

Andre Sinico,* Fernando Gualda,*# Leonardo Winter*


*Music Department, Federal University of Rio Grande do Sul, Brazil
#Sonic Arts Research Centre, Queen's University Belfast, Northern Ireland

This research focuses on identifying differences in trait and state anxiety levels in flute
players. The participants of this survey were members of the Brazilian Flute Association
(ABRAF). In total, 142 flute players answered an online questionnaire. Eight of the twenty
questions are reported in this paper. The participants reported on gender, age, years of flute
practice, proficiency level (professional, student, and amateur), and their most anxiety-
inducing situation (masterclass, recital, and competition). According to the literature, some
musical factors can lead to a decrease in music performance anxiety. Some musical factors that
can be considered as coping strategies are familiarity with repertoire, sight-reading skills,
deliberate practice, musical expression, and memorization. Results suggest that male flute
players exhibited a higher incidence of music performance anxiety (MPA), that professional flute
players may cope better with MPA, and that the most stressful performance situation did not
correlate with MPA in those 142 flute players.

Paper Session 28: Grand Pietra Hall, 14:30-15:30


Cross-cultural studies

The Effect of Context on Cross-Cultural Music Memory Performance

Steven M. Demorest,* Steven J. Morrison,* Vu Q. Nguyen,# Erin Bodnar*


*Laboratory for Music Cognition Culture and Learning, School of Music, University of
Washington, USA
#School of Music, Washington University, USA

Previous research has shown that both expert and novice listeners demonstrate an
enculturation effect where they have more difficulty processing and remembering music
that is culturally unfamiliar. The purpose of this study was to explore the effect of contextual
variables such as texture, timbre, tuning, rhythm and complexity on listeners' ability to process
and retain culturally unfamiliar music. We also sought to determine whether there was a direct
relationship between preference for a piece of music and listeners' memory of it. US-born
participants were randomly assigned to one of two conditions, contextualized (recordings
from both cultures) or decontextualized (single-line melodies transcribed from the
originals). Removing the stimuli from their cultural texture, timbre and tuning had no impact
on cross-cultural memory performance when compared to the original examples. Listeners
preferred Western examples in general to Turkish examples, but when we correlated
preference responses with memory performance on each individual piece across the two
cultures there was no significant association. This experiment demonstrates that removing
surface aspects of the music like timbre, instrumentation and tuning does not alter the effect
of enculturation suggesting that cultural differences are more structural. Poorer memory
performance cannot be explained by a decrease in preference for out-of-culture music.
These results have implications for a theory of cross-cultural music cognition that centers on
statistical properties of expectancy formation for pitch and rhythm patterns. A second
experiment is currently underway to explore whether the removal of rhythmic variability
might affect cross-cultural memory performance.

Cross-Cultural Emotional and Psychophysiological Responses to Music:
Comparing Western Listeners to Congolese Pygmies

Hauke Egermann,*# Nathalie Fernando,+ Lorraine Chuen,** Stephen McAdams *


*CIRMMT, Schulich School of Music, McGill University, Montral, Qubec, Canada
#Audio Communication Group, Berlin Institute of Technology, Berlin, Germany
+Laboratoire de Musicologie Compare et Anthropologie de la Musique, Facult de Musique,
Universit de Montral, Montral, Qubec, Canada
**Department of Psychology, McGill University, Montral, Qubec, Canada

Previous research has indicated that emotion recognition in Western and Indian music might
be based on universal features. However, whether a similar cross-cultural comparison can
reveal universal emotion induction remains unexplored. The study compared subjective and
psychophysiological emotional responses to music from two different cultures within two
different cultures. Two similar experiments were conducted, the first in the Congolese
rainforest with an isolated population of Mbenzele Pygmies without any exposure to
Western music and culture; the second with a group of Western music listeners, with no
experience with Congolese music. 40 Pygmies (age in yrs.: M=35, SD=14, 22 males) and 39
Western listeners (age in yrs.: M=22, SD=6, 22 males) listened in pairs to 19 music
excerpts of 29 to 99 seconds in duration in random order (8 from the Pygmy population and
11 Western instrumental excerpts). For both groups, emotional responses were continuously
measured on the dimensions of subjective feeling (using a two-dimensional rating interface
that measures arousal and valence), as well as psychophysiological response (GSR, HR,
Respiration Rate, facial EMG). Results suggest that the dimension of valence might be
mediated by cultural learning, whereas changes in arousal might involve a more basic,
universal response to implicit characteristics of music (with universal reactions in GSR and
HR measurements).

Paper Session 29: Crystal Hall, 14:30-15:30


Music style & schemata

A Diachronic Analysis of Harmonic Schemata in Jazz

Daniel Shanahan, Yuri Broze


School of Music, Ohio State University, USA

Jazz harmony relies heavily on a set of well-defined harmonic patterns that evolved
gradually throughout the 20th century. While certain tonally-oriented progressions such as
the ii-V-I appear to be nearly ubiquitous across time-periods, the jazz tradition also
includes a notable departure from tonal harmony: the rise of modal jazz in the late 1950s.
We aimed to systematically investigate the history of jazz composition by describing the
evolution of chordal syntax, as well as the sort of organizational frameworks that might be
described as harmonic schemata. In this study, we empirically describe the most common
chords and chord motions of the jazz canon, and trace their evolution over time.
Additionally, we describe an attempt to account for one particularly well-known
compositional schema: the so-called "rhythm changes". In so doing, we make use of a
recently compiled database of harmonic progressions for more than 1,160 jazz standards,
encoded into the Humdrum kern format (Huron, 1995). The present study provides details
of corpus validation, and presents an initial descriptive characterization of the data set.
Furthermore, we present evidence consistent with the hypothesis that chord sequences
using tonal harmonic syntax became progressively less common from 1925 to 1970. Finally,
we characterize the decline in popularity of one harmonic schema: the so-called "rhythm
changes".
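
A minimal sketch of this kind of diachronic corpus count, with two toy "standards" standing in for the 1,160-tune database.

```python
# Count chord-to-chord transitions (bigrams) per decade from
# (year, progression) pairs. Placeholder corpus, illustrative chord labels.
from collections import Counter

corpus = [
    (1935, ["ii", "V", "I", "vi", "ii", "V", "I"]),
    (1959, ["Dm7", "Ebm7", "Dm7", "Ebm7"]),  # modal-flavoured oscillation
]

by_decade = {}
for year, chords in corpus:
    decade = year // 10 * 10
    bigrams = zip(chords[:-1], chords[1:])
    by_decade.setdefault(decade, Counter()).update(bigrams)

for decade, counts in sorted(by_decade.items()):
    print(decade, counts.most_common(3))
```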


Optimising a short test of musical style grouping

Jason Musil*, Bruno Gingras#, Lauren Stewart*, Daniel Mllensiefen*


*Department of Psychology, Goldsmiths, University of London, United Kingdom
#Department of Cognitive Biology, University of Vienna, Austria

Extremely short musical clips can cue correct genre schemas and also knowledge of particular
artists and recordings, most probably through timbral cues. The extent to which individuals
acquire and are able to use such timbre-based knowledge may vary with their breadth and degree
of engagement with the many different styles of music available to modern listeners. We aimed to
create and optimise a short and implicit musical clip sorting task, which would be an ecologically
valid test of musical perception skills necessary for discriminating between musical styles in a
general Western population. We were also interested in comparing the performance of self-
recruiting online and laboratory-tested participants. 26 laboratory and 91 online participants
grouped sets of 16 short musical clips into four equal sized bins. They were told to group by
similarity and 'genre' was not mentioned explicitly. Four representative stimulus songs were
chosen from each of Jazz, Rock, Pop and Hiphop. Two vocal-free regions were extracted from each
song and 400ms and 800ms clips created from each. Each participant sorted two sets of stimuli,
the second set always having a different clip duration and region from the first. Population
parameter estimates from test-wise scores did not differ significantly between online and offline
participants (variance: p=.1; mean: p=.57). Low item-wise scores (M=1.14, SD=.95, out of 3)
suggest high task difficulty, with longer clips being significantly easier to pair (p<.001). Complete-
linkage agglomerative hierarchical cluster analyses of pairwise clip distances from the
sampled solutions showed a suitable 4-cluster solution by genre for 800ms clips, but 400ms Pop
clips showed a high confusion rate with the other genres. Piloting with derived shorter sets
favours a 3 item by 3 genre 400ms set with Pop excluded, which is easier to solve than the
original 4x4 problem but also harder than an optimised small 800ms set (which was also piloted
and found to be too easy). An ecologically valid and compelling test of musical style grouping is
presented, deliverable over the internet via standard web-browsers. Planned future research will
ascertain which cognitive abilities are being tested and how the measured ability relates to self-
reported musical sophistication as measured by the Goldsmiths Musical Sophistication Index,
which the test was designed to accompany.
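
The clustering step might be sketched as follows, with random placeholder distances in place of the empirical pairwise clip distances.

```python
# Complete-linkage hierarchical clustering on pairwise clip distances, cut
# into four clusters as in the reported genre solution. Placeholder data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

n_clips = 16
rng = np.random.default_rng(7)
d = rng.random((n_clips, n_clips))
dist = (d + d.T) / 2
np.fill_diagonal(dist, 0.0)

# Condensed distance vector -> complete-linkage dendrogram -> 4 clusters.
z = linkage(squareform(dist, checks=False), method="complete")
labels = fcluster(z, t=4, criterion="maxclust")
print(labels)  # cluster assignment per clip; ideally one cluster per genre
```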

Paper Session 30: Dock Six Hall, 14:30-15:30


Rhythm & time perception

The implicit learning of metrical and non-metrical rhythms in a serial recall
task

Benjamin G. Schultz1, 2, Catherine J. Stevens1, Peter E. Keller1,3, & Barbara Tillmann1,2


1MARCS Institute, University of Western Sydney
2Lyon Neuroscience Research Center, Team Auditory Cognition and Psychoacoustics, CNRS,
UMR 5292, INSERM U1028, Université Lyon 1
3Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig

Rhythm is the patterning of sound onsets with regard to timing, accent, and grouping. Meter is the
sense of strong and weak beats that can be abstracted from a rhythm. According to dynamic
attending theory (DAT; Jones & Boltz, 1989), expectancies for the timing of onsets are easier to
form for metrical rhythms than non-metrical rhythms. Differences between implicit learning (IL)
of metrical and non-metrical rhythms have not been explored using a serial recall task, where IL
is characterized by decreases in temporal error over blocks containing a repeating rhythm and
increases in temporal error when novel rhythms are introduced. Two experiments investigated IL
of metrical and non-metrical rhythms in the presence and absence of an ordinal pattern using a
serial recall paradigm. Based on DAT, it was hypothesized that (i) metrical rhythms are learned
more readily than non-metrical rhythms, and (ii) introducing novel rhythms with a weaker
metrical framework in test blocks results in larger timing error increases than novel rhythms
with the same metrical strength. In the serial recall task, an ordinal pattern (auditory spatial
locations) was presented with rhythmic timing. Participants were instructed to reproduce the
pattern after each presentation. They were not informed of the rhythm. Experiment 1 (N=64)
examined IL of rhythms in the presence of a correlated ordinal pattern. Experiment 2 (N=72)
examined IL of rhythms when the ordinal sequence was randomized each trial. In the metrical
conditions, participants were trained on a strongly metrical (SM) rhythm, and received novel SM
and weakly metrical (WM) rhythms in test blocks. In Experiment 1, metrical rhythms elicited
significantly larger decreases in timing error than non-metrical rhythms in the presence of an
ordinal pattern. In Experiment 2, decreases in timing error were not significantly different
between metrical and non-metrical rhythms in the absence of an ordinal pattern. In both
experiments, the introduction of a novel WM rhythm resulted in significantly larger increases in
timing error than the introduction of a novel SM rhythm. Metrical and non-metrical rhythms were
implicitly learned. Metrical patterns were only learned more readily than non-metrical rhythms
in the presence of an ordinal pattern. This suggests that meter aids rhythm learning differently
depending on the predictability of the ordinal sequence. In line with DAT, meter was abstracted in
metrical conditions in the presence and absence of an ordinal pattern.


A Unified Model for the Neural Bases of Auditory Time Perception

Sundeep Teki,* Timothy D. Griffiths#


*Wellcome Trust Centre for Neuroimaging, University College London, UK
#Auditory Group, Institute of Neuroscience, Newcastle University, UK

Perception of time is essential for normal functioning of sensory and motor processes such
as the perception of speech and music and the execution of skilled motor movement.
Perceptual and motor timing of intervals between sequences of sounds holds special
importance for music. Accumulating evidence suggests that perception of time is mediated
by a distributed neural system consisting of distinct motor structures such as the cerebellum,
inferior olive, basal ganglia, supplementary motor area as well as prefrontal cortical areas. In
this theoretical paper, we review and assess how distinct components of the timing network
mediate different aspects of perceptual timing. Recent work from our group suggests that
different subsystems of the timing network are recruited depending on the temporal context
of the intervals to be timed. Using functional magnetic resonance imaging, we established
brain bases for absolute, duration-based timing of irregular intervals and relative, beat-
based timing of regular intervals in the olivocerebellar and the striato-thalamo-cortical
circuits, respectively. We assess neurophysiological and neuroanatomical data suggesting
that the timing functions of these circuits may, however, not be entirely independent and
propose a unified model of time perception based on coordinated activity in the core striatal
and olivocerebellar networks that are interconnected with each other and the cerebral
cortex through multiple synaptic pathways. Timing in this unified model is proposed to
involve serial beat-based striatal activation followed by absolute olivocerebellar timing
mechanisms, with a central role for the striatum as the brain's internal timekeeper.


Paper Session 31: Timber I Hall, 14:30-15:30


Timbre

Exploring Instrument Blending as a Function of Timbre Saliency


Song Hui Chon,* Stephen McAdams*
*CIRMMT, Schulich School of Music, McGill University, Canada

A rating experiment was carried out to understand the relationship between blending and
timbre saliency, the attention-capturing quality of timbre. Stimuli were generated from 15
Western orchestral instrument sounds from the Vienna Symphonic Library, equalized in
pitch, loudness and effective duration. Listeners were presented with a composite of two
simultaneous, unison instrumental sounds and were asked to rate the degree of blending on
a continuous scale between "very blended" and "not blended". Data from 60 participants
showed no effect of gender, musicianship or age in blending judgments. Mild negative
correlations were observed between the average degree of blending and the sum (r = -0.34, df = 103, p < 0.01), minimum (r = -0.26, df = 103, p < 0.01) and maximum (r = -0.30, df = 103, p < 0.01) of the saliency values of the two individual timbres. These results suggest that a highly salient sound will not blend well. In addition, it is the individual sound's saliency level
and the saliency sum of the sound pair that determine the overall degree of perceived
blending, rather than the saliency difference. The best acoustic correlate to describe the
average blending is the minimum attack time of the two individual timbres, explaining 57%
of the variance. This agrees with Tardieu & McAdams' (2011) observation that a sound with
a longer attack tends to blend better. Previous findings that sounds with lower spectral
centroids are likely to blend better by Sandell (1995) and Tardieu & McAdams (2011) were
also confirmed.
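
As a hedged sketch of the reported correlation analysis, the snippet below correlates mean blending ratings per instrument pair with the sum, minimum and maximum of the two timbres' saliency values. The arrays are simulated stand-ins; the real study had 105 instrument pairs (15 instruments taken two at a time, hence df = 103).

```python
# Illustrative only: blending vs. saliency correlations on simulated data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_pairs = 105
saliency_a = rng.uniform(0, 1, n_pairs)   # saliency of first timbre in pair
saliency_b = rng.uniform(0, 1, n_pairs)   # saliency of second timbre in pair
blend = rng.uniform(0, 1, n_pairs)        # mean blending rating per pair

for name, stat in [("sum", saliency_a + saliency_b),
                   ("min", np.minimum(saliency_a, saliency_b)),
                   ("max", np.maximum(saliency_a, saliency_b))]:
    r, p = pearsonr(blend, stat)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```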


A study of confusions in identifying concurrently sounding wind instruments
Despina Klonari, Konstantinos Pastiadis, Georgios Papadelis, Georgios Papanikolaou
Aristotle University of Thessaloniki, Greece

This paper investigates the confused identification of physical wind instrument tones that play in
pairs and at various interval relationships. Our work moves the study of timbre for solo musical
tones towards a more realistic framework of complex timbres produced by combinations of
instruments, considering musically meaningful factors of importance such as the pitch intervals
and the timbral constituents of the examined pairs. Additionally, an important cognitive factor,
namely the subject's response time in an identification task, is examined to validate hypotheses
about possible relations between subjects' confidence and efficiency. 42 musically experienced
listeners were asked to name the individual instruments within each pair, in total 58 pairs, from
within all possible combinations of Flute, Oboe, Bb Clarinet and Bb Trumpet, playing at each and
any of four musical pitches (A4, C#5, A5, C#6, forming the pitch intervals of unison, major third,
octave and major tenth), in a randomized design with five repetitions for each pair's presentation.
The procedure was conducted and administered within an elaborate computerized desktop
system, which, by recording each step of the subject's response, facilitated the
registration of the respective response times. Percentages of correct, semi-correct and false
identifications populate the instruments' confusion matrices. Various statistically significant
tendencies appear with respect to the position of instruments within each pair and pitch interval.
Unison identities show the smallest erroneous identification scores. Correlations of confusion
scores with mean response times highlight possible manifestations of subjects response
confidence levels. This work is a systematic attempt to explore several issues in identification of
concurrently sounding musical instruments and highlights the diversity and complexity of the
interplay between their acoustics and the respective perceptual transformations. Even within a
musically more limited and coherent subset, namely the wind instruments, observed systematic
variations of confusion between instruments require further extensive investigation of
perceptual and cognitive phenomena, such as spectro-temporal masking/prominence effects,
listeners' bias, etc. Interpretation of results might prove useful especially in the fields of
orchestration or music synthesis, wherein tonal and timbral combinations of musical instruments
are extensively considered.
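
The correct/semi-correct/false scoring implied by the abstract can be sketched as follows; the trial data are invented examples, not the study's stimuli.

```python
# Hedged sketch of the scoring logic: naming both instruments of a pair is
# "correct", naming exactly one is "semi-correct", naming neither is "false".
from collections import Counter

trials = [  # (presented pair, response pair) -- hypothetical examples
    ({"Flute", "Oboe"},     {"Flute", "Oboe"}),
    ({"Flute", "Clarinet"}, {"Flute", "Trumpet"}),
    ({"Oboe", "Trumpet"},   {"Flute", "Clarinet"}),
]

def score(presented, response):
    hits = len(presented & response)          # instruments named correctly
    return {2: "correct", 1: "semi-correct", 0: "false"}[hits]

counts = Counter(score(p, r) for p, r in trials)
total = sum(counts.values())
for category in ("correct", "semi-correct", "false"):
    print(f"{category}: {100 * counts[category] / total:.0f}%")
```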

Paper Session 32: Timber II Hall, 14:30-15:30


Singing voice - speech

Multivariate analyses of speech signals in singing and non-singing voices

Yoshitaka Nakajima*, Hiroshige Takeichi#, Saki Kidera, and Kazuo Ueda*


*Department of Human Science and Center for Applied Perceptual Research, Kyushu University,
Japan
#RIKEN Nishina Center, Japan, School of Design, Kyushu University, Japan

In previous studies, we had analyzed spoken sentences in eight languages/dialects [e.g., Ueda
et al. (2010, Fechner Day 2010, Padua)]; we calculated power fluctuations extracted by
critical-band filters. Three factors related to four frequency bands appeared constantly.
These factors seemed important to convey linguistic information. We were interested in
whether similar factors would appear in singing voices and whether there would be any
systematic difference between singing and non-singing voices. Two male and two female
amateur singers sang two simple tunes in Japanese, and sang also variations of these tunes in
which tone duration (as notated) or pitch was fixed. They also read the lyrics aloud at three
different tempi. These speech signals were analyzed utilizing a critical-band-filter bank
covering a frequency range of 50-6400 Hz. Factor analyses were performed on the power
fluctuations obtained from these critical-band filters. The correlation-coefficient matrices,
calculated as a first step of the analyses, were also compared directly with each other. The
same three factors as in our previous research appeared in all speech-generating conditions;
power comodulations between critical bands took place in similar ways. One of the factors
corresponded to a frequency range of several critical bands around 1000 Hz, which is
supposed to be important for the perception of pitch and rhythm. The Euclidean distances
between the correlation-coefficient matrices presented a clear distinction between reading
aloud, singing with a fixed pitch, and singing with the original pitch pattern, indicating
acoustic difference between singing and non-singing voices. (Supported by JSPS)
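
A rough sketch, under simplifying assumptions, of the kind of pipeline described above: band-limited power fluctuations are extracted, their between-band correlation matrix is computed, and two such matrices are compared by Euclidean distance. The four band edges and the noise input are crude stand-ins for a true critical-band (Bark) filter bank applied to speech.

```python
# Not the authors' pipeline: band power envelopes and correlation matrices.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def band_power_fluctuations(x, fs, bands):
    envs = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        y = sosfilt(sos, x)
        envs.append(np.abs(hilbert(y)) ** 2)   # instantaneous power
    return np.array(envs)                      # shape: (n_bands, n_samples)

fs = 16000
x = np.random.default_rng(2).standard_normal(fs * 2)  # stand-in for speech
bands = [(100, 300), (300, 900), (900, 2700), (2700, 6400)]  # assumed edges

R1 = np.corrcoef(band_power_fluctuations(x, fs, bands))        # condition 1
R2 = np.corrcoef(band_power_fluctuations(x[::-1], fs, bands))  # condition 2
print(np.linalg.norm(R1 - R2))  # Euclidean (Frobenius) distance of matrices
```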


Effects of background sound on the volume and fundamental frequency of a
singing voice

Mario Suzuki,*# Takayuki Kagomiya,# Motoki Kouzaki,* Seiji Nakagawa #


*Dept. of Human Coexistence, Graduate School of Human and Environmental Studies, Kyoto
University, Japan, #Health Research Institute, National Institute of Advanced Industrial Science
and Technology (AIST), Japan

Singers often perform with musical accompaniment or the voices of other singers. These
background sounds can mask a singer's own voice, whereas they can be a reference for the
fundamental frequency (F0). We investigated the effect of the level of the chorus and musical
accompaniment on the volume and F0 of the singing voice under the condition that singers
can change their singing volume freely. Five normal subjects were requested to sing a song a
cappella or with background sound of a piano accompaniment, choir singing, or multi-talker
noise. The intensity of the background sound was varied from 40 to 80 dB(A). The results
show that the volume of the singing voice increased as the intensity of background sound
increased, regardless of the type of sound. Meanwhile, F0 precision of the singing voice was
not affected by the intensity of background sound. However, F0 precision deteriorated more
under the multi-talker noise condition than under the a cappella and other conditions. The variation in
singing volume in accordance with the intensity of background sound was similar to that for
speech production in noise (i.e., the Lombard effect). That is, the subjects tried to keep the
auditory feedback constant subconsciously against the background sound even in singing
tasks, and consequently obtained high F0 precision over all tested intensities of background
sound. The results also indicate that the intensity of background sound does not directly affect F0
precision, whereas the existence of sufficient auditory feedback or an external reference is
important for maintaining F0 precision.

Speed Poster Session 41: Grand Pietra Hall, 15:30-16:00


Listening context listening experience

Effects of the Listening Context on the Audience's Perceptions of Artistry, Expressiveness, and Affective Qualities in the Piano Performance

Haruka Shoda*, # and Mayumi Adachi*


*Dept. of Psychology, Hokkaido University, Japan
#The Japan Society for the Promotion of Science, Japan

According to previous studies, visual information enhances the audience's perception of
the performer's expressivity, but no such effects are evident in their affective impressions of
late Romantic pieces. Moreover, our previous study suggests that the pianist's affective
interpretations can be communicated successfully to the audience only through the sound.
The purpose of the present study was to investigate whether the performers visual
information plays similar roles during a live concert. We arranged 13 separate concerts in
which each of 13 professional pianists performed the same set of six pieces (2-4 minutes;
three slow and three fast, each from Bach, Schumann, and Debussy) in front of different
groups of the audience consisting of 11-23 university students (N = 211). Ten weeks later,
the same audience listened to the live recording (i.e., only the sound) of the same pianists'
performances in the same auditorium. In both contexts, the audience evaluated each
performance in terms of artistry, expressiveness, and affective qualities (measured by 11
adjectives) on a 9-point Likert scale, which each pianist also rated after his or her concert. The
results revealed that the performances were perceived more artistically and expressively in
the concert than in the recorded context regardless of the piece. A three-mode positioning
analysis also showed that the audience could perceive the pianists' affective interpretations
more successfully in the concert than in the recorded context. These results suggest that
sharing the common time and place enhances the communication of information from the
performer to the audience.


Many Ways of Hearing: Clustering Continuous Responses to Music

Finn Upham
Music and Audio Research Lab, Department of Music and Performing Arts Professions,
Steinhardt School of Culture, Education, and Human Development, New York University, USA

Is there more than one way to experience or perceive a piece of music? Anecdotal evidence
suggests that many are possible and cognitive theories hypothesise variety, and yet analyses
of music rarely attempt to describe multiple cognitive or affective sequences of experience.
Continuous responses collected from different listeners to the same music often show great
variability in their temporal sequence, whether ratings of emotional arousal or measures of
skin conductance. Either these differences are the result of random noise interfering with the
common experience (as assumed implicitly in any analysis of the average response time
series), or they reflect distinct interpretations of the stimulating music and corresponding
experiences. The aim of this study is to evaluate whether continuous responses show
evidence of distinct but repeatable temporal patterns of perception or experience to the
same musical stimuli. Comparing the cohesiveness and distinctness of clusters within continuous behavioural response collections from multiple experiments to those of several artificially constructed collections of unrelated responses, this poster presents criteria for defining differences between responses and robust response patterns.
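
One plausible way to test whether clusters of responses are more cohesive than chance (a sketch, not the poster's actual method) is to compare within-cluster distances against surrogate collections built by circularly shifting each response, which destroys shared timing while preserving each response's own dynamics.

```python
# Schematic sketch: cluster cohesion vs. shuffled-surrogate collections.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
responses = rng.standard_normal((24, 300))   # 24 listeners x 300 time points

def mean_within_cluster_distance(data, k=2):
    d = pdist(data, metric="correlation")     # 1 - Pearson r between pairs
    labels = fcluster(linkage(d, "average"), k, criterion="maxclust")
    dists = []
    for c in np.unique(labels):
        members = data[labels == c]
        if len(members) > 1:
            dists.append(pdist(members, metric="correlation").mean())
    return np.mean(dists)

observed = mean_within_cluster_distance(responses)
surrogates = [mean_within_cluster_distance(
    np.array([np.roll(r, rng.integers(r.size)) for r in responses]))
    for _ in range(200)]
print(observed, np.percentile(surrogates, 5))  # smaller = more cohesive
```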


Correlations Between Acoustic Features, Personality Traits and Perception of
Soundscapes
PerMagnus Lindborg
Nanyang Technological University, Singapore; KTH Royal Institute of Technology, Stockholm

The present study reports results from an experiment that is part of the Soundscape Emotion
Responses (SSER) study. We investigated the interaction between psychological and acoustic
features in the perception of soundscapes. Participant features were estimated with the Ten-
Item Personality Inventory (Gosling et al. 2003) and the Profile of Mood State for Adults (Terry et
al. 1999, 2005), and acoustic features with computational tools such as MIRtoolbox (Lartillot
2011). We made ambisonic recordings of Singaporean everyday sonic environments and
selected 12 excerpts of 90 seconds duration each, in 4 categories: city parks, rural parks,
eateries and shops/markets. 43 participants rated soundscapes according to the Swedish
Soundscape Quality Protocol (Axelsson et al. 2011) which uses 8 dimensions related to
quality perception. Participants also grouped blobs representing the stimuli according to a
spatial metaphor and associated a colour to each. A principal component analysis
determined a set of acoustic features that span a 2-dimensional plane related to latent
higher-level features that are relevant to soundscape perception. We tentatively named these
dimensions Mass and Variability Focus; the first depends on loudness and spectral shape, the
second on amplitude variability across temporal domains. A series of repeated-measures
ANOVAs showed that there are patterns of significant correlations between perception
ratings and the derived acoustic features in interaction with personality measures. Several of
the interactions were linked to the personality trait Openness, and to aural-visual
orientation. Implications for future research are discussed.
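
The PCA step can be sketched as follows; feature names and values are invented placeholders rather than MIRtoolbox output.

```python
# Simplified sketch: projecting per-excerpt acoustic features onto two
# principal components, roughly analogous to the abstract's 2-D plane.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# 12 excerpts x 5 features, e.g. loudness, spectral shape, and amplitude
# variability at different time scales (hypothetical values throughout)
X = rng.standard_normal((12, 5))

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_)  # variance captured per component
print(scores[:3])                     # excerpt coordinates in the plane
```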


Influence of the listening context on the perceived realism of binaural
recordings

Davide Andrea Mauro,* Francesco Vitale#


*LIM - Laboratorio di Informatica Musicale, Dipartimento di Informatica e comunicazione
(DICo), Universit degli Studi di Milano, Milan, Italy; #AGON acustica informatica musica,
Milan, Italy

Binaural recordings and audio are becoming an interesting resource for composers, live
performances and augmented reality. This paper focuses on the acceptance and the
perceived quality by the audience of such spatial recordings. We present the results of a
preliminary study of psychoacoustic perception in which N=26 listeners had to report on the realism and the quality of different pairs of sounds taken from two different rooms with distinctive reverberation. Sounds are recorded with a self-made dummy head. The stimuli are grouped into classes with respect to some characteristics highlighted as potentially important for the task. The listening condition is fixed with headphones. Participants are divided into musically
trained and naive subjects. Results show that there exist differences between the two
groups of participants and that the semantic relevance of a sound plays a central role.

Speed Poster Session 42: Crystal Hall, 15:30-16:00


Memory & Earworms

Effect of timbre change on memory for vocal and instrumental melodies

Michael Weiss, E. Glenn Schellenberg, Sandra Trehub


University of Toronto Mississauga, Canada

Recently we found that adults remembered vocal melodies better than instrumental melodies
(piano, banjo, and marimba timbres), which did not differ from one another. Previous research suggests that
timbre changes between exposure and test impair memory for melodies, but none of the studies
in question included melodies presented in vocal timbre. Aims: (1) To examine whether changes
in timbre between initial exposure and test impair memory for melodies regardless of the timbre
at exposure; (2) to explore the possibility of differential reduction in memory across timbres, and
(3) to determine whether memory for vocal melodies is enhanced both in the presence and
absence of a timbre shift at test. Method: To ensure that changes in timbre were not confounded
by performance differences between melodies, instrumental versions were triggered by MIDI
data generated from the vocal melodies (i.e., preserving timing and relative amplitude).
Participants heard 16 unfamiliar Irish folk melodies presented in four timbres: voice, piano,
banjo, and marimba. In a subsequent memory test, participants heard the 16 old melodies, half of
which changed timbre, intermixed with 8 new melodies (i.e., foils). Participants were instructed
to attend to the melody, and to rate how confident they were that they had heard it previously,
regardless of instrument, during the exposure phase. Results: As in previous research,
recognition scores were highest for old melodies presented in the same timbre as in the exposure
phase, lowest for new melodies, and intermediate for old melodies presented in a timbre that
changed from exposure to test. The finding of greatest interest was that vocal melodies were
remembered better than instrumental melodies whether the melodies were presented at test in a
different timbre or in the original timbre. There was no evidence of differential reduction in
memory for melodies that were timbre-shifted between exposure and test (no interaction).
Conclusions: Vocal melodies are recognized better than instrumental melodies but not simply
because fine-grained acoustic details are retrieved more readily at test. Rather, vocal timbre
enhances encoding of the melody, an advantage that persists even in the context of subsequent
timbre change. The advantage for vocal melodies may stem from the adaptive significance of the
human voice.


The Effect of Singing on Lexical Memory

Katelyn Horn, Daniel Shanahan


Ohio State University

Previous research has demonstrated that both music and musical ability might facilitate
verbal memory (Crowder, Serafine, & Repp, 1986, and Chan, Ho, & Cheung, 1998, 2003,
respectively). Most studies have focused on how the passive act of listening affects one's
recall ability, rather than on the physical act of song production. It is, however, a common
pedagogical tool to encourage the student to sing as a memory aid. Singing is generally more
difficult than speaking a text, though, so one might expect the added difficulty to inhibit
lexical memory. Nonetheless, common examples such as the alphabet song seem to indicate
that singing really does aid in the memory process. In this study, we aim to test if the act of
singing increases lexical memory more than the act of speaking. For this experiment, we
asked two groups of subjects to recite a randomized list of 102 words, and tested their
memory for these words. The first group was asked to sing each word to a 2, 3, or 4-note
melody (corresponding with the number of syllables in the word), while the second group
simply spoke the words. This was immediately followed by a recognition task, in which the
subjects were asked how confident they were that they had previously been presented with
the word. Our results are currently being analyzed, but we have hypothesized that subjects
in the singing condition will have a markedly improved performance in the recognition task
compared to those in the spoken condition.


The Impact of Trace Decay, Interference, and Confusion in a Tonal Memory
Span Task
Sven Blankenberger, Katrin Bittrich
Department of Psychology, Martin-Luther-University Halle-Wittenberg, Germany

The aim of the present study was to propose and test a mathematical model concerning the
impact of different mechanisms of forgetting in short-term memory for tonal and verbal
stimuli. N=10 participants completed a modified memory span task. In each trial they were
presented with 1-6 letters or tones, which they had to recall (sing or speak) in correct serial
order. In half of the trials the recall started immediately after the last item. In the remaining
trials the recall was delayed. Quality of response was registered. Letters were considered as
correct if recalled at the correct serial position. For the tonal reproduction a tolerance
criterion was applied: Tones were considered as correct response if recalled at the correct
position and if the sung frequency was within the range of plus/minus a quarter tone of the
given frequency. As expected participants were better in the verbal compared to the tonal
memory span task. Differences between both conditions concerning proportion of correct
recall as a function of list length and serial position were observed. The proposed model
fitted the data reasonably well. The parameter estimation revealed a stronger impact of
forgetting mechanisms in the tonal compared to the verbal condition. Furthermore, item
confusion only appeared in the verbal condition. These results suggest that different
mechanisms of forgetting apply to tonal and verbal stimuli in short-term memory.
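
The quarter-tone tolerance criterion translates directly into a frequency-ratio computation in cents; a small illustrative check (example frequencies are invented):

```python
# A sung frequency counts as correct if within +/- 50 cents (a quarter
# tone) of the target frequency.
import math

def within_quarter_tone(sung_hz, target_hz):
    cents = 1200 * math.log2(sung_hz / target_hz)
    return abs(cents) <= 50

print(within_quarter_tone(446.0, 440.0))  # True: ~23 cents sharp
print(within_quarter_tone(466.2, 440.0))  # False: ~100 cents (a semitone)
```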


Contracting Earworms: The Roles of Personality and Musicality

Georgia A. Floridou, Victoria J. Williamson, Daniel Müllensiefen


Department of Psychology, Goldsmiths College, London, UK

The term earworm (also known as Involuntary Musical Imagery or INMI) describes the
experience of a short melody getting stuck in the mind and being heard repeatedly outside of
conscious control. Previous studies have examined the relationship between the occurrence
of INMI and individual differences, however important questions still remain; the role of
personality in particular remains largely unexplored. The studies presented here explored a)
the impact of individual characteristics, related to personality and musicality, on INMI
experiences (Study 1) and b) different methods of triggering INMI in the lab (Study 2). In
study 1, 332 participants completed the BFI (Big Five Inventory) and Gold-MSI (Musical
Sophistication Index) questionnaires online and provided information about their INMI
experiences (pleasantness, controllability, length, interference, worrying and expunging
strategies). Evaluation of the responses indicated that only Neuroticism correlated with
earworm characteristics. Earworm frequency correlated with all Gold-MSI subscales
(Importance of Music, Perception and Production, Emotions, Body and Creativity) except
Musical Training. Two earworm induction procedures tested in Study 2, based on a musical
stimulus and on recall of lyrics, were equally successful, regardless of personality traits. The
findings of these studies indicate that a) the characteristics of spontaneously occurring earworms
(INMI) show a dependence on certain individual personality traits (neuroticism), whereas
the deliberate induction of earworms under laboratory conditions does not, and b) the
mental process of recalling song lyrics can be as efficient in triggering earworms as listening
to music, suggesting that earworm induction may be linked with basic memory mechanisms.


Involuntary musical imagery and musical structure: do we get earworms only for certain tunes?

Sebastian Finkel*, Daniel Müllensiefen#


*Institute of Medical Psychology and Behavioural Neurobiology, University of Tuebingen,
Germany
#Department of Psychology, Goldsmiths College, University of London, UK

Involuntary Musical Imagery (INMI) or earworms describes the prevalent phenomenon
whereby tunes get stuck in one's head. INMI appears spontaneously and repeatedly, triggered
by a variety of mental or environmental stimuli. To our knowledge, this is the first study
using computational analysis to investigate structural aspects of INMI tunes. Our aim is to
develop a statistical model that can distinguish between INMI and non-INMI songs on the
basis of unique musical features. Our present modelling results have a prediction accuracy of
61%. We are currently improving the model by using a larger corpus of songs as well as
employing more powerful classification techniques from the machine-learning field (e.g.
random forests). The present approach promises new insights into the cognition of music in
everyday life using quantitative methods. We hope to address the role of memory and
emotions on INMI in the future.
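
A hypothetical sketch of the classification step named above: a random forest trained on computed melodic features to separate INMI from non-INMI songs. The feature matrix and labels below are simulated placeholders, not the study's corpus.

```python
# Illustration only: random-forest classification of INMI vs. non-INMI.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 10))     # 200 songs x 10 melodic features
y = rng.integers(0, 2, 200)            # 1 = reported as INMI, 0 = not

clf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f}")
```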

Speed Poster Session 43: Dock Six Hall, 15:30-16:00


Rhythm & time perception

Rhythm deafness in absence of perceptual disorders

Jakub Sowinski, Simone Dalla Bella


Dept. of Psychology, WSFiZ, Warsaw, Poland; EuroMov, Movement to Health (M2H) Laboratory,
University of Montpellier-1, Montpellier, France; BRAMS, Montreal, Canada

A great deal of research has been devoted to rhythm perception and production in ordinary
musicians. Much less is known about connections between rhythm perception and
production in the general population. Recent data (Phillips-Silver et al., 2011) suggest that
some individuals (so-called rhythm deaf) may exhibit impaired rhythm perception and
inaccurate sensorimotor synchronization (SMS) while showing spared pitch processing. In
this study we examined more in depth rhythm perception and SMS in non-musicians. In a
first screening experiment, 96 non-musicians synchronized with musical and non-musical
stimuli in a hand-tapping task. Synchronization accuracy and precision were analyzed with
Circular Statistics. The results allowed us to select 16 participants revealing difficulties in the
SMS task (Poor Synchronizers). In a second experiment, 10 of the Poor Synchronizers and 23
Controls (i.e., participants chosen randomly among the other participants without impaired
synchronization tested in the screening experiment) underwent various SMS tasks (e.g., with
different pacing stimuli and using different tempos) and rhythm perception tasks (i.e.,
anisochrony detection and the rhythm task of the Montreal Battery of Evaluation of Amusia,
MBEA, Peretz et al., 2003). The analyses confirmed that 8 participants were poor
synchronizers. In particular, some of them exhibited normal rhythm perception. This finding
points to a possible mismatch between perception and action in the rhythm domain, similar
to what was previously observed in the pitch domain (Dalla Bella et al., 2007, 2009; Loui et al.,
2008).
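
The circular measures used for screening can be sketched as follows (an illustration, not the study's code): each tap asynchrony is mapped onto the beat cycle, and the mean resultant vector yields precision (its length R) and accuracy (its angle).

```python
# Minimal circular statistics for sensorimotor synchronization data.
import numpy as np

def circular_stats(asynchronies_ms, period_ms):
    # map tap-minus-beat times onto angles around the beat cycle
    angles = 2 * np.pi * (np.asarray(asynchronies_ms) % period_ms) / period_ms
    vector = np.mean(np.exp(1j * angles))    # mean resultant vector
    return np.abs(vector), np.angle(vector)  # (precision R, accuracy angle)

taps = [-12.0, 5.0, -20.0, 8.0, -3.0]        # hypothetical asynchronies (ms)
R, theta = circular_stats(taps, period_ms=600)
print(f"R = {R:.2f}, mean angle = {np.degrees(theta):.1f} deg")
```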

Young children's musical enculturation: Developing a test of young children's metre processing skills

Kathleen M. Einarson,* Laurel J. Trainor*


*Department of Psychology, Neuroscience & Behaviour, McMaster University, Canada

Research indicates that adults can perceptually extract the beat from rhythmic sequences,
and that adults' ability to perceive and produce rhythmic sequences is affected by experience
with the particular hierarchical metrical structure of their culture's music. Evidence of
specialization can be seen by 12 months of age but little is known about the developmental
trajectory of this enculturation process throughout childhood. We examine musical
development in five- and six-year-old Western children, asking (1) whether they show a
perceptual bias for common Western metres, and (2) whether perception and production
abilities are correlated. On each trial of the perception task, participants are presented with a
rhythmic sequence in either a four-beat, five-beat, or six-beat metre. The sequence is then
repeated, with small alterations on half of the trials, and children indicate whether the
sequence was copied exactly right. The production tasks consist of recording and analyzing
the children's ability to tap back simple rhythms similar to those used in the perception task.
Additionally, we measure vocabulary, pre-reading skills, and working memory in order to
examine correlations between these abilities and rhythmic perception. Results show that
alterations were detected equally well in the simple four- and six-beat metres and in the
complex five-beat metres by both the five-year-olds and the six-year-olds. Sequence
length exerted a much stronger effect on performance than metric complexity, suggesting
that this task is not a sensitive measure of metric enculturation. Analyses in progress will
determine whether sequence length is also the main factor affecting production task
performance.

Newborn infants are sensitive to sound timing

Gábor P. Háden*, Henkjan Honing*, István Winkler#


*Cognitive Science Center Amsterdam, Institute for Logic, Language and Computation,
University of Amsterdam, The Netherlands
#Department of Experimental Psychology, Institute of Cognitive Neuroscience and Psychology,
Research Centre for Natural Sciences, Hungarian Academy of Sciences, Hungary
Institute of Psychology, University of Szeged, Hungary

Detecting changes in temporal intervals is important for perceiving music and speech.
Shorter time intervals (ca. 10-100 ms) are relevant to the study of expressive timing in music
and to prosody and phonology in language. Detection of short intervals is reflected by the
mismatch negativity event-related potential (ERP). We used ERPs to test whether newborns
detect instantaneous tempo changes as well as the onsets and offsets of sound trains at fast
presentation rates. ERPs were recorded from healthy newborn infants during sleep. 50 ms
long tones randomly selected from the C major scale were presented in short trains of 8-24
(random) identical tones followed by a silent gap. The first half of the trains was presented at a slow rate (mean Inter-Onset-Interval of 200 ms). The second half was presented at a fast rate (mean IOI of 100 ms). ERPs elicited at the start of each train, by the change of rate, and by the tone expected at the beginning of the silent gap were contrasted with
mid-train controls. Analysis showed significant differential responses to the change of
presentation rate as well as to the start of the train compared to their respective controls, and there
is some indication that a tone was expected at the beginning of the silent gap. We conclude
that the mechanisms for detecting auditory events based on timing are already functional at
birth making this information available to the infant brain and thus providing an important
prerequisite of entering dialogues as well as for music cognition.


Bouncing babies to the beat: Music and helping behaviour in infancy


Laura K. Cirelli, Kathleen M. Einarson, Laurel J. Trainor
Psychology, Neuroscience & Behaviour, McMaster University, Canada

A prerequisite for musical behaviour is the ability to entrain movement to an external auditory beat. Interpersonal auditory-motor entrainment has effects on the social behaviour
of both adults and 4-year-old children. For example, individuals who walk, sing, or tap
together are found to be subsequently more helpful, compliant or cooperative in later
interactions with one another. However, the developmental trajectory of this social
facilitation effect is still unclear. The current study investigated whether such effects could
be measured in 14-month-old infants. Experimenter 1 bounced infants to either predictable
or unpredictable versions of a melody. At the same time, Experimenter 2 faced the infant and
bounced either synchronously or asynchronously with the infant. Following the bouncing
phase, Experimenter 2 performed a few short tasks during which the child was given the
opportunity to help Experimenter 2 by handing accidentally dropped objects back to her.
Results comparing the two extreme groups demonstrate that the infants in the synchronous-
predictable beats condition were significantly more likely to help Experimenter 2 than
infants in the asynchronous-unpredictable beats condition, t(20.5) = 3.02, p < .01 (61% vs. 25% helping likelihood). These results suggest that social facilitation following interpersonal
auditory-motor entrainment might be experienced by 14-month-olds. The two control
groups are currently being tested to confirm this interpretation.

Speed Poster Session 44: Timber I Hall, 15:30-16:00


Absolute pitch & tone perception

Absolute Pitch: Simple Pair-Association?

Katrin Bittrich, Juliane Katrin Heller, Sven Blankenberger


Department of Psychology, Martin-Luther-University Halle-Wittenberg, Germany

The genesis of absolute pitch (predisposition versus acquisition through learning) is still the subject of numerous scientific investigations. The aim of the present study was to examine the impact of simple pair-association mechanisms on the acquisition of absolute pitch. At intervals of two weeks, all participants (N=20 non-musicians) completed a tone identification test (pre-, post-, and follow-up tests). Pitches ranged from A3 to G#4. The proportion of
correct responses as well as the differences in semi-tones were observed. Participants of the
experimental group (n=10) underwent a ten-day adaptive training between the first and the
second test in which they learned to associate pitches with the corresponding name. The
training started with two pitches only. After reaching a predefined success criterion a further
tone was added. This procedure entails that within the ten-day training period each
participant reached an individual number of pitches which they could identify. Participants
of the experimental group learned to successfully identify seven to nine pitches within ten
days of training. Relative frequency of correct responses as well as the difference in semi-
tones in the tone identification task revealed a positive effect of training in the experimental
group compared to the control group. The results of the training study suggest that simple
pair-association mechanisms are one aspect in the development of absolute pitch. Within
only two weeks of training a group of non-musicians was able to successfully identify seven
to nine pitches within one octave. Possible causes for the failure of previous learning studies are
discussed.
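
The adaptive schedule described above, sketched in code; the success criterion (90% over a 20-trial window) and all other parameters are illustrative assumptions, as the abstract does not specify them.

```python
# Schematic sketch of the adaptive pair-association training: start with
# two pitch-name pairs and add a pitch whenever the criterion is met.
import random

PITCHES = ["A3", "A#3", "B3", "C4", "C#4", "D4", "D#4", "E4",
           "F4", "F#4", "G4", "G#4"]

def run_training(answer, n_trials=1000, criterion=0.9, window=20):
    active, history = 2, []
    for _ in range(n_trials):
        target = random.choice(PITCHES[:active])
        history.append(answer(target) == target)   # was the label correct?
        recent = history[-window:]
        if len(recent) == window and sum(recent) / window >= criterion:
            active = min(active + 1, len(PITCHES))  # add the next pitch
            history.clear()                         # restart the window
    return active   # number of pitches reached within the session

# A perfect learner reaches the full set; real learners plateau earlier.
print(run_training(answer=lambda pitch: pitch))
```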

A unique pattern of ratio effect in musicians who are absolute pitch possessors
Lilach Akiva-Kabiri1, Tali Leibovich2, Gal Azaria1, Avishai Henik1
1 Department of Psychology and the Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
2 Department of Cognitive Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel

According to the ratio effect, when the difference between two magnitudes is large, the comparison between them is faster. The distance (or ratio) effect complies with Weber's law and holds for a large variety of cardinal scales and modalities (numbers, quantities, physical sizes, brightness, musical tones, etc.); in ordinal scales, such as the alphabet, the effect is more elusive. Absolute pitch (AP) is a rare ability to identify
musical pitches without an external reference tone. It has been suggested that AP possessors
are able to label pitch automatically. In contrast, most people use the relations between
pitches (relative pitch) in order to process musical information. In the current study two
groups of musicians (those with AP and controls without AP) were asked to compare pairs of
musical tones that varied in their ratio. Results yielded a significant ratio effect for the non-AP (nAP) group, as expected according to the literature; namely, RTs were longer for large ratios than
for small ratios. Interestingly, AP possessors showed no ratio effect; namely, RTs for small
and large ratios were similar. To the best of our knowledge this is the first study that
demonstrates the lack of the effect in a particular group of people. Results are interpreted as
suggesting that pitch tones can be represented on ordinal or cardinal scales, contingent on
AP ability.


The effect of intensity on relative pitch

William Forde Thompson,* Varghese Peter,+ Kirk Olsen,# Catherine J. Stevens#


*Department of Psychology, Macquarie University, Australia; +Department of Linguistics,
Macquarie University, Australia; #MARCS, University of Western Sydney, Australia

Music performers frequently introduce systematic changes in intensity as music unfolds. We
tested the hypothesis that changes in the intensity of tones affect the perceived size of
melodic intervals. In Experiment 1, 39 musically untrained participants rated the size of the
interval spanned by two pitches within individual gliding tones. Tones were presented at
high-intensity, low-intensity, looming intensity (up-ramp), and fading intensity (down-ramp)
and glided between two pitches spanning 6 or 7 semitones (a tritone or a perfect fifth
interval). The pitch shift occurred in either ascending or descending directions. Experiment 2
repeated the conditions of Experiment 1 but the shifts in pitch and intensity occurred across
two discrete tones (i.e., a melodic interval). Ratings of interval size were dependent on
whether the interval was high or low in intensity, whether it increased or decreased in
intensity across the two pitches, and whether the interval was ascending or descending in
pitch. A control experiment replicated the effect of intensity using pitch intervals of 6 or 10
semitones in size (N = 30). The perception of interval size did not adhere to a strict
logarithmic function as implied by musical labels. As observed in previous investigations,
identical intervals were perceived as substantially different in size depending on other
attributes of the sound source. The implications for research on interval size and auditory
looming are discussed.
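
For reference, the logarithmic mapping implied by musical interval labels, against which the perceived sizes reported above deviate (a worked example, with frequencies chosen for illustration):

```python
# Interval size in semitones depends only on the frequency ratio.
import math

def semitones(f1_hz, f2_hz):
    return 12 * math.log2(f2_hz / f1_hz)

print(round(semitones(440.0, 622.3), 2))  # ~6.0 semitones: a tritone
print(round(semitones(440.0, 659.3), 2))  # ~7.0 semitones: a perfect fifth
```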


Frequency and Pitch Representation Using Self-Organized Maps

Christos Zarras, Konstantinos Pastiadis, George Papanikolaou, George Papadelis


Department of Electrical & Computer Engineering, Aristotle University of Thessaloniki, Greece

Previous works on computational approaches for the description of pitch phenomena have
employed various methodologies, deterministic and probabilistic, which are based on
psychophysiological auditory stimuli modeling, representations and transformations (e.g. spatial,
temporal, spatiotemporal), both at peripheral and more central stages of the auditory chain. Then,
a confirmatory phase, utilizing data from behavioral (or even imaging) studies, is usually followed
to assess the validity of the computational methods. The human auditory perception relies on
interconnected neuronal networks, which have been shown to demonstrate multi-directional
activity and dynamical, adaptive, and self-organizing properties, together with strong tonotopical
organization along the auditory pathway up to the primary auditory cortex. This paper focuses on
the exploration of properties and effectiveness of a certain type of computational approaches,
namely self-organized networks, for the description of frequency and pitch related phenomena. A
Self-Organized connectionist model is presented and tested. We explore the ability of Kohonen-type neural networks (Self-Organizing Feature Maps, SOFMs or SOMs) to organize based on
frequency information conveyed by sound signals. Various types of artificially generated sound
signals (ordered along a frequency/pitch axis) are employed in our simulations, including single
tones, harmonic series, missing fundamental series, band limited noises, and harmonics with
formants. Simple Fourier representations and their physiologically plausible frequency-to-pitch
mappings (e.g. tonotopy in the cochlea) are used as network inputs. The network's efficiency is
investigated, according to various structural parameters of the network and the organizing
procedure, together with aspects of the obtained tonotopical organization. Our results, using
different types of input spectra and various SOM implementations, demonstrate a clear ability for
self-organizing according to (fundamental) frequency or pitch. However, when certain test
configurations were used, the networks showed observable inability to organize, revealing
limitations in the resolving ability of the network related to the required number (density) of
neurons compared to the dataset size. Some more difficulties were also observed, relating to the
type of signals for which an organized network can identify pitch. The results of this work
indicate that, under some provisions, such a model could be effective in frequency and pitch
indication, within certain limitations upon training parameters and types of signals employed.
Further work will compare the efficiency of the proposed representation with classical
computational approaches upon various aspects of pitch perception, together with examination of
feasibility and possible advantages of employing SOMs in the description of pitch perception in
various types of auditory dysfunction.
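
A toy NumPy self-organizing map (not the authors' model) illustrating the basic mechanism: a 1-D Kohonen map trained on magnitude spectra of pure tones typically develops an ordered, tonotopy-like layout. All parameters are illustrative.

```python
# Minimal 1-D Kohonen SOM trained on pure-tone spectra.
import numpy as np

rng = np.random.default_rng(6)
fs, n_fft, n_units = 8000, 256, 20
freqs = rng.uniform(200, 2000, 500)              # training tone frequencies
t = np.arange(n_fft) / fs
spectra = np.abs(np.fft.rfft(np.sin(2 * np.pi * freqs[:, None] * t), axis=1))
spectra /= spectra.max(axis=1, keepdims=True)

W = rng.uniform(0, 1, (n_units, spectra.shape[1]))   # unit weight vectors
for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)                      # decaying learning rate
    sigma = max(3.0 * (1 - epoch / 30), 0.5)         # shrinking neighbourhood
    for x in spectra[rng.permutation(len(spectra))]:
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))        # best match
        h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)               # pull units toward x

# After training, each unit's preferred frequency should vary smoothly
# along the map, i.e. a tonotopy-like ordering.
pref = np.fft.rfftfreq(n_fft, 1 / fs)[np.argmax(W, axis=1)]
print(np.round(pref))
```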


Detecting degrees of density in aggregates: when can we hear a cluster?

Luk Vaes,* Dirk Moelants #


*ORCiM, Orpheus Institute, Belgium; #IPEM-Dept. of Musicology, UGent, Belgium

In contemporary music, clusters have become a common part of the musical language. Yet,
our understanding of how clusters are perceived does not match their popularity in compositional practice. The few existing cluster theories contradict each other as well as the cluster's history in scores; empirical data on the cluster's aural perception are almost non-existent. Considering a cluster to be a psycho-physiological phenomenon whose individual constituents lose perceptibility in favor of its contour, an
experiment was set up to study the aural perception of aggregates with varying degrees of
density within a fixed contour. The primary interest was vested in detecting quantity
(number of tones) and quality (identity of tones). 30 professional musicians listened to all
63 possibilities to fill a fixed interval c-g with 1 to 6 different pitches, and indicated the
number of perceived tones (between 3 and 8) and which pitches they heard. Whereas 66%
of the three-component chords are identified correctly, this drops to 28% when four pitches
are used, 12% with five elements and about 5% with six or more. Subjects show a clear
preference for certain clusters, and some configurations seem to increase the difficulty of
identifying the components correctly or lead to the perception of a more complex aggregate
than what they actually heard. These elements provide us with interesting insights on how
trained subjects perceive complex aggregates of pitches.
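
The 63 stimuli follow from simple combinatorics, assuming the fillings are drawn from the six chromatic pitches strictly between c and g (with c and g always sounding, giving 3 to 8 tones in total); a quick check:

```python
# All non-empty subsets of the six inner pitches: 2**6 - 1 = 63 aggregates.
from collections import Counter
from itertools import combinations

inner = ["c#", "d", "d#", "e", "f", "f#"]  # pitches strictly between c and g
fillings = [c for k in range(1, 7) for c in combinations(inner, k)]
print(len(fillings))                          # 63 possible aggregates
# total chord size = 2 boundary tones (c, g) + number of inner pitches
print(Counter(len(f) + 2 for f in fillings))  # sizes 3..8: 6,15,20,15,6,1
```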

Symposium 4: Grand Pietra Hall, 17:00-19:00


Cognition in Musical Composition: Methodologies, Results, Challenges

Convener: Nicolas Donin, Discussants: Irène Deliège, John Sloboda



Research on composers' creative cognition is scarce, and has been mainly divided into two
independent trends in research: 1) sketch studies as a means to decipher the composer's
intentions, planning and decision-making processes during the creative process of some
work; 2) empirical research on creativity in an educational context, with children or students
performing well-defined compositional tasks in an experimental setting. Recent research
suggests a third approach is worth exploring: gathering sketch studies-like data about
contemporary creative processes and using them to look at creative acts (Deliège & Wiggins
2006) with the help of the artist. For example: tracking the creative cognition of a
professional composer over the course of his creative process, with the support of all the
traces left by his activity, whether through real-time data retrieval or through self-report
data obtained shortly after the work's completion. This approach poses great methodological
and epistemological challenges. Yet the risk is worth taking: such analysis of the activity of
composition uncovers various aspects of music cognition (and of human creativity in
general) that might not necessarily coincide with our view of music cognition from the
perceptive and performative sides. In recent years, disparate attempts have been made to
implement in-depth research into the cognition of individual composers at work, notably by
McAdams, Collins, Donin & Theureau, which leads into a discussion of the forgotten pioneering
work of Bahle (1935), Mion, Nattiez & Thomas (1982) and others. Drawing upon a recent
project called MuTeC where case studies of past and current creative processes are
interconnected, Donin (2012) pleads for a cross-fertilization between empirical and
historical approaches to creative cognition. The symposium will present samples of the
various methodologies and results emerging from the recent work of researchers active in
this new subfield and delineate current challenges to the development of further research on
compositional cognition. Constituent papers present various practices and situations
(mainstream composition, jazz, new music, sound installation) and distinct disciplinary
backgrounds (music psychology, psycho-ergonomics, musicology), all converging toward a
common, innovative goal.


Studying the act of musical composition in real-time

Dave Collins
University Centre, Doncaster College, UK

The primary aim of the research undertaken and ongoing has been to track the cognitions of
composers in real time in naturalistic settings. The emphasis is to gain an understanding of
the process of the structuring and re-structuring of musical events in an unfolding
composition with a conjoined appraisal and development of appropriate methodological
techniques. Participants have been purposively selected to have significant experience in
using computer-based compositional tools, and asked to compose freely without external
constraints (length of composition, number of parts, duration of compositional period). Data
collection integrates computer tools (MIDI save-as files) with verbal protocols, interview
sessions and video observation techniques. Results: a) Methodological procedures: the acquisition of meaningful cognition data indicates success in methodologies, with rich data informing hypothetical models of compositional problem-solving strategies. b) Hypothetical
structures of compositional problem-solving strategies indicate: i) moments of insightful
behaviours (gestalt moments) within both general and specific compositional process, ii)
holistic levels of an entire composition as non-linear, recursive problem-solving and
problem-generation, iii) a cycle of chunking processes which contribute to the holistic level
of compositional activity, iv) a micro level of individual processes which take place within
and through each of the chunking processes. Immediately retrospective verbal reporting
with digital data collection techniques can provide substantially rich data to postulate a time-
based hypothetical model of compositional cognition.


Stefano Gervasoni's Cognition Through the Compositional Process of Gramigna. Methodology, Results Samples, Issues

Nicolas Donin, François-Xavier Féron


Analyse des pratiques musicales Research Group, STMS Lab (IRCAM-CNRS-UPMC), France

In 2009, internationally renowned composer Stefano Gervasoni authorized researchers to
delineate the genesis of his then most recent piece, Gramigna, a cycle of miniatures that was
soon to be developed into an expanded version. The creative process of the existing version
of Gramigna was documented via drafts and sketches. With regards to the creative process of
newer miniatures added to Gramigna over the course of 2010, data collection during
composition was favored over retrospective monitoring. Then the composer's cognition
along his course of action was recollected through four situation simulation interviews in
which the composer was asked to re-enact and comment on as many compositional
procedures as possible, based on every trace of his activity gathered by the researchers.
These two-hour-long interviews were videotaped and transcribed. These data are highly suited
to questioning various aspects of compositional cognition. Sample results are introduced,
concerning: generation and use of rules, filling in the score in the course of writing, and decisions
about ending or restarting a process.


Negotiation in a jazz ensemble: Sound and speech in the making of a
commercial record

Maya Gratier, Rebecca Evans, Ksenija Stevanovic


Psychology Department, Université Paris Ouest Nanterre La Défense, France

Empirical research on the performance of improvised music is only just beginning to provide
a richer account of how music is made in real-life contexts through the collaborative and
coordinated actions of participants. However, most studies to date focus either on the
musical outcome of improvised performance or on the social and cultural practices involved
in making music. Few studies have attempted to connect communicative processes, verbal,
nonverbal and musical, with the audible musical product they bring about. Traditional
cognitive psychology cannot explain the speed and efficiency with which musicians co-
produce and dynamically manage rhythmic, melodic and harmonic expression. The present
study situates cognition in embodied verbal and musical interchange. The recording studio is
an interesting context in which to study improvisational music making because so much is at
stake in the act of permanently fixing sound that is performed with a degree of
indeterminacy, fostering at times a tension between individual and collective expression. The
principal aim of this study was to reveal the verbal and musical processes involved in
selecting a take for inclusion on a commercial album of a professional jazz ensemble. A
second aim of the study was to analyze the relation between musicians' representations of
their jointly produced music and the actual musical product. Based on analyses of
transcribed conversations between musicians, as well as detailed acoustic analysis of the
Pro Tools tracks obtained from their performances, we show that musical projects are
shaped through both musical interaction and conversational exchange.


Analysing the design process of an interactive music installation in the urban space: constraints as resources and resources as constraints

Pascal Salembier, Marie-Christine Legout


ICD-TechCICO, Université de Technologie de Troyes, France

This study is part of a project that aims at documenting several examples of 20th and 21st
century professional composers' practices in order to contribute to the understanding of music creative processes. This 2-year study, conducted in collaboration with the composer Jean-Luc Hervé, examined the design process of an electro-acoustic music installation (a
sound garden) located in a public park in central Paris. The installation is a collaboration
between the composer and a landscape architects' agency. Various types of data
were collected, such as: traces of the composer's activity (notes, sketches, sound samples, and e-mails with other project participants); verbal reports and comments based on the composer's sketchbooks; and notes from the direct observation of an electro-acoustic work session. Interviews with the composer were videotaped and transcribed (15 sessions,
totalling more than 25 hours). The aim of this paper is to briefly present some preliminary
results of the study concerning: the instrumental role played by the administrative, political,
musical and technical constraints that the composer faced throughout the project;
composition as a model-based activity versus a dynamically situated activity; the
distribution of control between the composer and the computer system; and the cognitive
scrutability of the music generator.

Paper Session 33: Crystal Hall, 17:00-18:30


Motion & Gesture II

Body Rhythmic Entrainment and Pragmatics in Musical and Linguistic Improvisation Tasks

Satinder Gill,* Marc R. Thompson,# Tommi Himberg#


*Centre for Music and Science, Faculty of Music, University of Cambridge, UK
#Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music,
University of Jyväskylä, Finland

This interdisciplinary study combines researchers and methods from linguistic communication, music, and movement. We consider conversation as performance, and
improvisation in music as akin to this performance. Improvisation, musical or linguistic,
involves rules/conventions, but the interactive performance will often unfold in
unpredictable ways, involving heightened moments of rhythmic and empathic connection
(salient rhythmic moments, SRM), and require synchrony. We aimed to combine qualitative
observational analysis and quantitative movement analysis to identify SRM, describe
kinematics related to them, and compare periodicity and entrainment of body movements
across different conditions (participants facing each other vs. not facing; music making vs.
story telling). Eight pairs of participants performed musical and linguistic improvisations (2 min)
while audio, video, and movement recordings were made. Video analysis identified SRMs,
and kinematic and statistical analyses of motion capture data were undertaken, including principal components analysis (PCA) of a number of movement features and cross-recurrence
quantification analysis (CRQA) to investigate interpersonal entrainment. Preliminary findings include that SRMs in the linguistic trial correlate with moments of less distance
between the bodies, indicating increased contact, while SRMs in the music trial correlate
with less distance, but only when the interaction was mutually cooperative. There were no
major differences in the periodicity of movements between the linguistic and musical trials,
suggesting the two systems share rhythmic properties at the relational level of
communication. Observational analysis combined with kinematic and entrainment analyses
form a complementary set of methods for analyzing embodied interaction. Music and
language as communicative performance appear very likely to share properties of body
rhythmic interpersonal synchrony.

Classifying Music-Related Actions

Rolf Inge Godøy*, Alexander Refsum Jensenius*, Arve Voldsund*, Kyrre Glette#, Mats Høvin#,
Kristian Nymoen#, Ståle Skogstad#, Jim Tørresen#
*Department of Musicology, University of Oslo, Norway,
#Department of Informatics, University of Oslo, Norway

Our research on music-related actions is based on the conviction that sensations of both
sound and body motion are inseparable in the production and perception of music. The
expression "music-related actions" is here used to refer to chunks of combined sound and
body motion, typically in the duration range of approximately 0.5 to 5 seconds. We believe
that chunk-level music-related actions are highly significant for the experience of music, and
we are presently working on establishing a database of music-related actions in order to
facilitate access to, and research on, our fast growing collection of motion capture data and
related material. In this work, we are confronted with a number of perceptual, conceptual
and technological issues regarding classification of music-related actions, issues that will be
presented and discussed in this paper.


Movement expertise influences gender recognition in point-light displays of
musical gestures

Clemens Wöllner,* Frederik J.A. Deconinck#


*Institute of Musicology and Music Education, University of Bremen, Germany,
#Institute for Biomedical Research into Human Movement and Health, Manchester
Metropolitan University, UK

We investigated (a) whether observers perceive the gender of orchestral conductors in
point-light displays across multimodal conditions and (b) whether there are quantifiable
motion differences between male and female conductors. We hypothesised that in explicitly
trained conducting gestures, gender differences are less pronounced as compared to walking
motion. Gestures of male and female orchestral conductors were recorded with a motion
capture system while they conducted two excerpts from a Mendelssohn string symphony to
musicians. Point-light displays were created according to the following conditions: static
image (no movement), gait, visual-only and audiovisual conducting. In addition, auditory-
only versions of the same music were produced. Musically trained observers distinguished
best between male and female conductors in gait and static images, for which differences in
body morphology and/or motion parameters were found in accordance with previous
research. For conducting movements, no significant motion differences were recorded.
Accuracy of gender recognition was influenced by conductors' expertise: While observers
perceived the gender of less experienced conductors above chance level for visual-only and
audiovisual point-light displays of conducting, displays of experienced conductors permitted
correct recognition for gait and static images only, but not for the three conducting
conditions. Results point to a response bias in judgments such that experienced conductors
were more often judged to be male. We conclude that judgement accuracy depended both on
conductors' level of expertise and on observers' concepts, suggesting that perceivable
differences between men and women diminished for highly trained movements of
experienced individuals.

Paper Session 34: Dock Six Hall, 17:00-19:00


Structure, Performance, Interaction

Perceptual Evaluation of Automatically Extracted Musical Motives

Oriol Nieto, Morwaread M. Farbood


Dept. of Music and Performing Arts Professions, New York University, USA

Motives are the shortest melodic ideas or patterns that recur in a musical piece. This paper
presents an algorithm that automatically extracts motives from score-based representations
of music. The method combines perceptual grouping principles with data mining techniques.
The algorithm is evaluated by
comparing its output to the results of an experiment where participants were asked to label
representative motives in six musical excerpts. The perceptual judgments were found to
align well with the motives automatically extracted by the algorithm, and the experimental data were further used to tune the threshold values for similarity and strength of grouping
boundaries.
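
As a rough illustration of the data-mining component of such a method - an editorial sketch, not the authors' algorithm, with all names in it hypothetical - recurring, transposition-invariant patterns can be counted as pitch-interval n-grams:

    from collections import Counter

    def recurring_motives(pitches, min_len=3, max_len=6, min_count=2):
        # Work on pitch intervals rather than absolute pitches so that
        # transposed repetitions of a pattern count as the same motive.
        intervals = tuple(b - a for a, b in zip(pitches, pitches[1:]))
        counts = Counter()
        for n in range(min_len, max_len + 1):
            for i in range(len(intervals) - n + 1):
                counts[intervals[i:i + n]] += 1
        return [(m, c) for m, c in counts.most_common() if c >= min_count]

    # A toy melody (MIDI note numbers) containing a repeated, transposed figure.
    melody = [60, 62, 64, 60, 67, 69, 71, 67, 60, 62, 64, 60]
    print(recurring_motives(melody)[:3])

A real system, such as the one described above, would additionally weight candidate patterns by perceptual grouping strength rather than by raw counts alone.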


Does Higher Music Tend to Move Faster? Evidence For A Pitch-Speed
Relationship

Yuri Broze & David Huron


School of Music, Ohio State University, USA

We tested whether higher-pitched music is associated with faster melodic speeds in Western
music. Three empirical studies produced results consistent with the hypothesized pitch-
speed relationship. This pitch-speed correspondence was evident when analyzing musical
parts and instruments, but not when considering isolated notes. We sketch five possible
origins for the observed effect: acoustic, kinematic, music theoretical, sensory/perceptual,
and psychological. Study 1 tested the idea that high-pitched notes will tend to be faster than
low-pitched notes, regardless of musical part or instrument. Using an electronic database of
174 scores of Western music, we calculated correlations between pitch height and note
duration. Results were mixed, and dependent on genre. Study 2 tested whether higher-
pitched musical parts tend to be faster than lower-pitched ones. Using an independent
sample of 238 Western scores, we tallied the number of pitched events per musical part to
index melodic speed. Statistically significant effects were observed in every subsample
studied when considering the music part-by-part. Study 3 directly measured melodic speed
in notes per second using 192 live recordings of solo instrumental performances. A strong
correlation (r = 0.754, p < .001) was found between observed median melodic speed and instrumental midrange.
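
The part-level measure of Study 2 can be illustrated with a minimal sketch (hypothetical numbers; SciPy assumed): each part contributes its mean pitch and a notes-per-part tally as a proxy for melodic speed, and the two series are correlated.

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical (mean MIDI pitch, pitched-event count) pairs for the
    # parts of one score; the study tallied events across 238 scores.
    parts = np.array([(76, 410), (69, 350), (55, 210), (43, 150)])
    r, p = pearsonr(parts[:, 0], parts[:, 1])
    print(f"pitch-speed correlation: r = {r:.3f}, p = {p:.3f}")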


Computational Analysis of Solo Versus Ensemble Performance in String Quartets: Intonation and Dynamics

Panagiotis Papiotis,* Marco Marchini,* Esteban Maestre#*


*Music Technology Group, Universitat Pompeu Fabra, Spain
#Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, U.S.A.

Musical ensembles, such as a string quartet, are a clear case of music performance where a
joint interpretation of the score as well as joint action during the performance is required by
the musicians. Of the several explicit and implicit ways through which the musicians
cooperate, we focus on the acoustic result of the performance - in this case, in terms of dynamics and intonation - and attempt to detect evidence of interdependence among the
musicians by performing a computational analysis. We have recorded a set of string quartet
exercises whose challenge lies in achieving ensemble cohesion rather than performing one's individual task correctly, which serve as a ground truth dataset; these
exercises were recorded by a professional string quartet in two experimental conditions:
solo, where each musician performs their part alone without having access to the full quartet
score, and ensemble, where the musicians perform the exercise together following a short
rehearsal period. Through an automatic analysis and post-processing of audio and motion
capture data, we extract a set of low-level features, on which we apply several numerical
methods of interdependence (such as Pearson correlation, Mutual Information, Granger
causality, and nonlinear coupling) in order to measure the interdependence - or lack thereof -
among the musicians during the performance. Results show that, although dependent on the
underlying musical score, this methodology can be used in order to automatically analyze the
performance of a musical ensemble.
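
Two of the interdependence measures named above can be sketched as follows for a pair of extracted feature streams (synthetic data; NumPy and scikit-learn assumed; the authors' actual feature extraction and estimators may differ):

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def interdependence(x, y, bins=16):
        # Pearson correlation plus a simple binned estimate of mutual
        # information between two feature streams.
        r = np.corrcoef(x, y)[0, 1]
        xd = np.digitize(x, np.histogram_bin_edges(x, bins))
        yd = np.digitize(y, np.histogram_bin_edges(y, bins))
        return r, mutual_info_score(xd, yd)

    rng = np.random.default_rng(0)
    # Hypothetical frame-wise loudness curves for two players who partly
    # follow a shared dynamic shape, as in the ensemble condition.
    shared = np.sin(np.linspace(0, 8 * np.pi, 1000))
    p1 = shared + 0.3 * rng.standard_normal(1000)
    p2 = shared + 0.3 * rng.standard_normal(1000)
    print(interdependence(p1, p2))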


Musical Agreement via Social Dynamics Can Self-Organize a Closed Community
of Music: A Computational Model

İsmet Adnan Öztürel,* Cem Bozşahin#


Cognitive Science Department, Middle East Technical University, Ankara, Turkey

This study aims to model social dynamics of an idealized closed musical society to
investigate whether a musical agreement in terms of shared musical expectations can be
attained without external intervention or centralized control. Our model implements a multi-
agent simulation, where identical agents, which have their own private two-dimensional
transition matrix that defines their expectations on all possible bi-gram note transitions, are
involved in round-based pairwise interactions. In each interaction, two agents are
randomly chosen from the population, one as the performer and the other as the listener.
Performers compose a fixed-length melodic line by recursively appending their most expected note continuations, using sounds from a finite inventory. Listeners
assess this melody to determine the success of the interaction by evaluating how familiar
they are with the bi-gram transitions that they hear. Depending on this success, the interacting
parties perform updates on their transition matrices. All agents start with a flat transition
matrix, and the simulation ends when they converge on a state of agreement. We have found
that 30 out of 144 possible bi-grams, 74 out of 1728 possible tri-grams, and 7 out of 20736
four-grams emerged as agreements, although only bi-grams are communicated. The findings
signify that melodic building blocks for the modeled society are self-organizing, given the
limited bi-gram expectations of individuals, and that convergence trends are dependent on
simulation parameters.
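
A minimal sketch of the kind of round-based interaction described - an editorial toy, with an arbitrary inventory size, update rule and success criterion standing in for the paper's actual parameters - might look like this:

    import numpy as np

    N_NOTES, LENGTH, ROUNDS, LR = 12, 8, 5000, 0.1
    rng = np.random.default_rng(6)

    # Each agent holds a private matrix of bi-gram expectations, near-flat
    # at the start, with rows kept as probability distributions.
    agents = []
    for _ in range(10):
        T = 1 + 0.01 * rng.random((N_NOTES, N_NOTES))
        agents.append(T / T.sum(axis=1, keepdims=True))

    def compose(T):
        # Performer: start anywhere, then append the most expected continuation.
        melody = [int(rng.integers(N_NOTES))]
        for _ in range(LENGTH - 1):
            melody.append(int(np.argmax(T[melody[-1]])))
        return melody

    for _ in range(ROUNDS):
        i, j = rng.choice(len(agents), size=2, replace=False)
        performer, listener = agents[i], agents[j]
        melody = compose(performer)
        bigrams = list(zip(melody, melody[1:]))
        # Listener judges success by the familiarity of the heard bi-grams.
        success = np.mean([listener[a, b] for a, b in bigrams]) > 1 / N_NOTES
        for a, b in bigrams:            # reinforce or decay on both sides
            for T in (performer, listener):
                T[a, b] += LR if success else -LR * T[a, b]
                T[a] /= T[a].sum()      # keep each row a distribution

    # Bi-grams on which the population has converged:
    print(np.argwhere(np.mean(agents, axis=0) > 0.5))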

Paper Session 35: Timber I Hall, 17:00-18:30
Group singing

Why do people sing in a choir? Social, emotional and well-being effects of choir
singing

Jukka Louhivuori
University of Jyväskylä, Finland

Singing appears to be a common and widely practiced musical activity across cultures.
According to previous studies, people sing in a choir mainly for social and emotional
reasons. In addition, several studies have suggested connections between choir singing,
wellbeing and health. Most of these studies have been done in a Western cultural context. Thus, it is not known for sure whether cultural background has an effect on choristers' motivation. The aim of the study is to gain a better understanding of how cultural background affects choir singers' reasons to sing in a choir. A survey was conducted among choristers with different cultural
backgrounds (European, African; N=684). In addition to the questionnaire, information was
acquired by interviewing individual choristers (N=48). The choirs represented most
common choir types, such as children's, youth, mixed, male, female and senior choirs. The data
consists of typical age groups for choir singers (16-91 years; average age = 47 years). The
results show that the main reasons for choristers to sing in a choir are related to emotional experiences, relaxation, social networks, group support and well-being effects. The findings
are in line with previous studies, but for the choristers with European cultural background
social aspects were more important compared to African singers who emphasized musical
and emotional aspects in choir singing. The findings suggest that cultural background has a
clear effect on which aspects choristers consider most important in choir singing.
Tight and close social networks typical for many African societies may explain the difference
between European and African choir singers. Interviews support this interpretation.
Typically European choristers spoke about the benefits of choir singing in building social
networks, while African choir singers pointed out that they have enough social connections;
choirs are needed not for making friends, but to support musical development and emotional
needs. Both groups emphasized the relaxation and wellbeing aspects of choir singing.


An empirical field study on sing-along behaviour in the North of England

Alisun Pawley,* Daniel Müllensiefen#


*Kendal, United Kingdom
# Department of Psychology, Goldsmiths, University of London, United Kingdom

Singing along to a tune in a leisure environment, such as on the dance floor of a nightclub, is
one frequent form of spontaneous and informal music-making. This paper reports the
empirical findings and theoretical implications of a field study of sing-along behaviour
carried out at music entertainment venues across northern England, addressing how singing
along is affected by context, as well as what musical qualities make a song 'singalongable'.
Thirty nights of field research were conducted in five different entertainment venues. Both
quantitative and qualitative data were collected, including how many people sang along to
each of the 1168 songs played during research. Nine contextual factors as well as 32 musical
features of the songs were considered as different categories of explanatory variables.
Regression trees and a random forest analysis were employed to model the empirical data
statistically. A resulting quantitative model predicts the proportion of people singing along
with a particular song (dependent variable) given information about the audience, song
popularity, context, and song-specific musical features as explanatory variables. Results
indicate that non-musical factors can account for 40% of the variability in sing-along
behaviour, whilst musical factors are able to explain about another 25% of the variance. The
prediction model demonstrates that it is features of vocal performance rather than structural
features of the tunes that make audiences sing along. Results are interpreted in terms of
theoretical notions of tribal or indigenous societies. This study makes a significant
contribution to the largely unexplored territory of sing-along behaviour.
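
The forest-based modelling step might look like the following scikit-learn sketch on synthetic stand-in data (the study's real predictors - nine contextual and 32 musical features - differ):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 1168  # number of songs observed during the fieldwork
    # Synthetic stand-ins for the predictors (the real ones include crowd
    # and venue context as well as musical and vocal-performance features).
    X = rng.random((n, 8))
    y = 0.4 * X[:, 1] + 0.25 * X[:, 4] + 0.1 * rng.random(n)  # proportion singing

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print("held-out R^2:", round(model.score(X_te, y_te), 2))
    print("feature importances:", model.feature_importances_.round(2))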


Effects of Group Singing on Psychological States and Cortisol

Rita Bento Allpress,* Stephen Clift,* Lucy Legg#


*Sidney De Haan Research Centre for Arts and Health, Canterbury Christ Church University,
England; #London, England

Group singing has several psychological, physical, and social components that can interact
and contribute to feelings of well-being. Due to the relative infancy of this field of research,
understanding of what the beneficial and positive effects of group singing are and how
they interact is still limited. In order to investigate how group singing may benefit our well-
being and health, previous research has looked at effects of singing on psychological states
and cortisol, a hormone related to well-being. One major limitation of previous research to
date is a lack of experimental designs, participant randomization and an active control.
However, without such research we are, in fact, unable to determine the effects of group
singing on our well-being and health. This study aims to overcome the limitations of
previous research and experimentally assess effects of group singing on cortisol and
psychological variables. In this way, we hope to better understand short-term effects of
group singing on the psychological states and cortisol of a group of people that had never
sung together before. At the same time, we hope it will allow us to start answering the
question of whether the effects reported in the literature are indeed due to group singing or
whether they can equally be brought about by other, non-musical group activities. Twenty-one
participants (11 females) were recruited from the general population and no previous
experience with singing was required. Eighteen participants (9 females) completed two
conditions: singing and a non-musical group activity. To counterbalance order in the repeated-measures design, participants were randomly allocated to one of two groups. Group A sang on day 1 and did
the non-musical activity on day 2, and group B did the non-musical activity on day 1 and the
singing on day 2. Participants donated saliva samples and completed the positive and
negative affect schedule before and after each activity. A flow state scale and a
connectedness scale were also completed after each activity, and a general well-being
questionnaire was completed at baseline on day 1. Data analysis points to similar effects of
both group activities on levels of flow, connectedness and positive affect, indicating that
both activities had similar levels of engagement, challenge and social interaction.

Paper Session 36: Timber II Hall, 17:00-18:30
Beat & time perception

Probing Beat Induction in Rhesus Monkeys: Is Beat Induction Species-Specific?

Henkjan Honing,* Hugo Merchant,# Gábor Háden,* Luis Prado,# and Ramón Bartolo#
*Cognitive Science Center Amsterdam, Institute for Logic, Language and Computation,
University of Amsterdam, The Netherlands
#Department of Cognitive Neuroscience, Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Mexico

We measured auditory event-related potentials (ERPs) in a rhesus monkey (Macaca
mulatta), probing a well-documented component in humans, the mismatch negativity
(MMN). We show for the first time in a rhesus monkey that, in response to infrequent
deviants that were presented in a continuous sound stream, a comparable ERP component
can be detected with negative deflections in early latencies. This result is in line with an
earlier study with a single chimpanzee (Pan troglodytes) that showed a similar MMN-like
response using the same two-tone odd-ball paradigm. Consequently, using more complex
stimuli, we tested whether a rhesus monkey can detect not only gaps (omissions at random
positions in the sound stream) but also the beat (omissions at the first position of a musical
unit, i.e. the downbeat). In contrast to what has been shown in human adults and newborns
(using identical stimuli and experimental paradigm), preliminary analyses suggest that the
monkey is not able to detect the beat in music. These findings are in support of the
hypothesis that beat induction (the cognitive mechanism that supports the detection of a
regular pulse from a varying rhythm) is species-specific.


Electrophysiological correlates of subjective equality and inequality between
neighboring time intervals

Hiroshige Takeichi,* Takako Mitsudo,# Yoshitaka Nakajima, and Shozo Tobimatsu


*RIKEN Nishina Center, Japan; #Faculty of Information Science and Electrical Engineering, Kyushu University, Japan; Faculty of Design, Kyushu University, Japan; Faculty of Medical Sciences, Kyushu University, Japan

Rhythm is an important aspect of music. However, perceived rhythm does not always
correspond to the physical temporal patterns in a simple manner. When two neighboring
time intervals are marked by three successive tone bursts, human listeners are able to judge
whether the intervals are equal or unequal. The equality appears as a perceptual category
when the intervals are around 200 ms or below. However, the perception displays some
ambiguity around a categorical boundary. We aimed to examine whether different judgments of the same pattern could be related to particular brain activities observed in scalp-recorded event-related potentials. The event-related potentials were recorded while
participants listened to the temporal patterns around categorical boundaries and made
judgments about the subjective equality. Selective average waveforms were calculated for
each response for each participant, and converted to z-scores for each recording site.
Bhattacharyya distances between the different responses were calculated, and correlations
were then calculated between the rate of 'unequal' judgments and the integral of the
Bhattacharyya distance over the 100-ms interval after the onset of the third tone burst. A
significant correlation was found between them, suggesting that one of the most important
brain activities for the temporal judgment appears immediately after the last temporal
marker for a very short period of about 100 ms. An elementary process in rhythm perception
takes place in the brain in a very brief period after the presentation of the temporal pattern,
enabling rhythm processing in real time. (Supported by JSPS)
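
Since the abstract does not spell out the estimator, the following sketch uses the standard closed form of the Bhattacharyya distance for two univariate Gaussians, applied pointwise to hypothetical z-scored ERP ensembles (all shapes and the sampling rate are invented for illustration):

    import numpy as np

    def bhattacharyya_gauss(m1, v1, m2, v2):
        # Standard closed form for two univariate Gaussians.
        return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2))
                + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

    rng = np.random.default_rng(2)
    # Hypothetical z-scored single-trial ERPs (trials x time points) for the
    # two response categories, at one recording site, 1 kHz sampling assumed.
    equal = rng.standard_normal((40, 300))
    unequal = rng.standard_normal((40, 300)) + 0.5

    # Pointwise distance between the two response distributions, then its
    # integral over the 100 ms following the onset of the third tone burst.
    d = bhattacharyya_gauss(equal.mean(0), equal.var(0),
                            unequal.mean(0), unequal.var(0))
    print("integrated distance:", d[:100].sum())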


Comparisons between chunking and beat perception in auditory short-term
memory

Jessica A. Grahn
Brain and Mind Institute & Department of Psychology, University of Western Ontario, Canada

Auditory working memory is often conceived of as a unitary capacity: different sounds are
processed with similar neural mechanisms. In verbal working memory (e.g., digit span
tasks), temporal grouping or chunking of auditory information occurs spontaneously and
benefits working memory. The current fMRI study examines whether beat perception may
simply be a case of chunking, by measuring brain responses to chunked and unchunked
verbal sequences and comparing them to beat-based and nonbeat-based rhythmic
sequences. Participants performed same/different judgements on pairs of auditory
sequences. Rhythm sequences were constructed from a single letter, repeated with rhythmic
timing (e.g., the letter B repeated 6 times, with variable SOAs corresponding to a beat-based
rhythmic sequence). Non-beat sequences had irregularly timed SOAs. Verbal sequences were
composed of strings of different letters (e.g., P M J O E I K C). Chunked verbal sequences had
temporal grouping of letters into 2- or 4-letter chunks; unchunked sequences had no regular
temporal grouping. Overall, activation to rhythm and verbal working memory stimuli
overlapped, except in the basal ganglia. The basal ganglia showed a greater response to
beat than non-beat rhythms, but showed no difference between chunked and unchunked
verbal sequences. Thus, beat perception is not simply a case of chunking, suggesting a
dissociation between beat processing and grouping or chunking mechanisms that warrants
further exploration.


Saturday 28 July

Paper Session 37: Grand Pietra Hall, 09:00-11:00


Emotion recognition & response

Emotion Recognition in Western Popular Music: The Role of Melodic Structure

Scott Beveridge,* Don Knox#, Raymond MacDonald#


*Fraunhofer Institute for Digital Media Technology IDMT, Ilmenau, Germany
#Glasgow Caledonian University, Glasgow, Scotland

Music Emotion Recognition (MER) involves modelling the relationship between musical
features and expressed emotion. Previous work in this field concentrates on the extraction
of spectrally derived acoustical and psychoacoustical features. However, this method has
reached a glass ceiling with respect to the accuracy with which MER algorithms can identify
music emotion. This paper adopts a wider view of emotional expression in music by
considering the musical communication process. Higher level structural elements of music,
specifically the role of melodic structure, are incorporated into the feature extraction process. A study is introduced in which participants use a two-dimensional time-continuous
measurement methodology to rate the emotion expressed by musical pieces. These musical
stimuli are then analyzed using feature extraction algorithms. A statistical analysis of these
measures is then performed with the aim of identifying correlations between melodic
structural features and expressed emotion.


Emotional influences on attention to auditory streams

Renee Timmers,* Harriet L. Crook#, Yuko Morimoto*


*Department of Music, University of Sheffield, United Kingdom
#Department of Audiovestibular Medicine, Royal Hallamshire Hospital, United Kingdom

Perception and experience of emotions are important elements of the appreciation and
understanding of music. In fact, they may not only be a response to music, but may also play
a directing role in our perception of music. The results of three experiments present
corroborating evidence that this is indeed the case: Presentations of affective pictures
influence the way participants attend to and group auditory sequences. The experiments
used sequences consisting of alternating high and low notes. Participants indicated their
perception of the sequences by judging to what extent they attended to the high or low
sequence or to both lines (one stream). Happy pictures increased the tendency of
participants to focus on the higher line, while sad pictures increased the tendency to focus on
the lower pitches. Sad pictures also increased the tendency to segregate the lines and focus
on slower melodic movement.


Quantitative Estimation of Effects of Musical Parameters on Emotional
Features

Masashi Yamada, Ryo Yoneda, Norio Emura


Department of Media Informatics, Kanazawa Institute of Technology, Japan

It has been shown that musical emotion can be illustrated by a two-dimensional model, which is
spanned by valence and arousal axes, and experimental studies have revealed correlations between the emotional features and musical parameters. However, the quantitative correlations between the effects of different parameters on the emotional features have not yet been clarified.
The two-dimensional plane of musical emotion can also be illustrated by orthogonal axes of cheerfulness and tension, obtained by rotating the valence and arousal axes by 45 degrees. In the present study,
effects of several musical parameters on cheerfulness and tension were estimated quantitatively. Three listening experiments were conducted, using simple
musical scales performed by pure tones as stimuli. In the first and second experiments, Scheffé's
paired comparison method was applied. In the first experiment, scales were provided as stimuli,
varying tempo, performing register and tonality systematically, and listeners compared and rated the cheerfulness of each stimulus. Using the results of the experiment, a quantitative scale CM (Cheerfulness of Music) was determined and the effects of the parameters of tempo, register and
tonality on the cheerfulness were estimated on the CM measure. In the second experiment,
ascending major scales were provided as stimuli varying tempo, sound level and articulation, and
listeners rated the tension of the scales. A quantitative measure TM (Tension of Music) was
determined. In the last experiment, 15 stimuli were selected from the stimuli used in the first and
second experiment, and listeners rated the similarity between every pair of stimuli. Multidimensional scaling of the similarity matrix yielded a three-dimensional solution. Moreover,
multiple-regression analyses, using the values on the three dimensions as independent variables and the CM and TM values as dependent variables, showed that the first and second dimensions align closely with the CM and TM measures, respectively. Then, one PU (Perceptual Unit) was
determined as the perceptual difference corresponding to one unit of CM on the cheerfulness scale, and the TM measure was translated into PU units. The stimuli were plotted on the cheerfulness-tension plane, and
the plots successfully revealed the effects of tempo, register, tonality, sound level and articulation
both on the cheerfulness and tension, quantitatively.


Towards a brief domain-specific self-report scale for the rapid assessment of
musically induced emotions

Eduardo Coutinho, Klaus R. Scherer


Swiss Center for Affective Sciences, University of Geneva, Switzerland

The Geneva Emotional Music Scale (GEMS) is the first domain-specific model of emotion
specifically developed to measure musically evoked subjective feelings of emotion
(particularly in live performances). The scale consists of a list of 45 emotion terms pertaining
to nine emotion factors. In this paper, we address two potential limitations of this
instrument. First, since the GEMS comprises a high number of elements to be rated, it
becomes impractical for fieldwork studies where a rapid assessment is often necessary.
Second, it is questionable to what extent the GEMS may be consistently used to discern the emotions experienced while listening to music genres differing significantly from those that led to its development, especially given an overrepresentation of classical
music performances. Regarding the former limitation, and based on the analysis of subjective
judgments of pair-wise dissimilarity between the feelings described by each GEMS emotion
term (N=20), we created a short version of the GEMS consisting of nine rating items. Each
item is defined by a fuzzy set of three emotion terms. In this way, the imprecision of
assigning a single verbal label to describe each item is minimized while maintaining the verbal
richness of the original terms. Regarding the latter aspect, we found that three new
dimensions of emotional meaning concerning contemporary music are necessary to
consistently describe emotional experiences evoked by this genre: knowledge-related feelings, enthusiasm and boredom. Future work includes an investigation of the semantic space
of emotion labels, and the development of genre specific scales.

Symposium 5, Crystal Hall, 09:00-11:00
Classification as a tool in probing neural mechanisms of music perception,
cognition, and performance
Conveners: Rebecca Schaefer, Shinichi Furuya; Discussant: Petri Toiviainen

Music can be considered acoustic information with complex temporal and spatial features.
Research into perception and cognition of multifaceted elements of music tries to decode the
information from neural signals elicited by listening to music. Music performance, on the
other hand, entails the encoding of musical information to neural commands issued to the
muscles. To understand the neural processes underlying music perception, cognition, and
performance, therefore, researchers face issues of extracting meaningful information from
extremely large datasets with regard to neural, physiological, and biomechanical signals.
This is nontrivial for music researchers in light of recent technological advances regarding
data measurement. Classification using machine-learning techniques is a powerful tool in
uncovering the unseen patterns in these large datasets. In this way, not only are the means
compared, but a data-driven method is used to uncover the sources of informative variance
in the signals. Moreover, classification techniques allow for quantitative evaluation of
individual differences in music perception and performance. In this symposium, examples
are presented of uncovering neural representations of musical information such as rhythm
and harmony through applying single-trial EEG classification techniques such as linear
discriminant classification, and multivariate data reduction methods such as Principal
Component Analysis (PCA) to electrophysiological signals derived from individuals who
listened to musical stimuli. Additionally, these methods are useful to behavioral scientists,
allowing them to characterize fundamental patterns of movements of the motor system with
a large number of joints and muscles during musical performance by means of PCA and
cluster analysis, such as K-means and the expectation-maximization (EM) algorithm.
Classification can also be performed on spectro-temporal features derived from audio
waveforms to investigate the features that may be most informative in perception for
auditory processing by the brain. This symposium, comprising participants from six different
research groups, has two aims. The first is to present, through empirical research, examples
of how classification methods can be applied to various experimental setups and different
types of measurement. The second aim is to provide fundamental knowledge of the methods
of classification techniques. The hope is that conference delegates will gain a greater
understanding of classification and how its methodology can be applied to their own
research.

Automated Classification of Music Genre, Sound Objects, and Speech by Machine Learning

Leigh M. Smith,* Stephen T. Pope#, Jay Leboeuf,* Steve Tjoa*


*iZotope Inc., USA, #HeavenEverywhere.com, USA

A software system, MediaMined, is described for the efficient analysis and classification of
auditory signals. This system has been applied to the tasks of musical instrument
identification, classifying musical genre, distinguishing between music and speech, and
detection of the gender of human speakers. For each of these tasks, the same algorithm is
applied, consisting of low-level signal analysis, statistical processing and perceptual
modeling for feature extraction, and then supervised learning of sound classes. Given a
ground truth dataset of audio examples, textual descriptive classification labels are then
produced. Such labels are suitable for use in automating content interpretation (auditioning)
and content retrieval, mixing and signal processing. A multidimensional feature vector is
calculated from statistical and perceptual processing of low level signal analysis in the
spectral and temporal domains. Machine learning techniques such as support vector
machines are applied to produce classification labels given a selected taxonomy. The system
is evaluated on large annotated ground truth datasets (n > 30000) and demonstrates success
rates (F-measures) greater than 70%, depending on the task. Issues arising
from labeling and balancing training sets are discussed. The performance of classification of
audio using machine learning methods demonstrates the relative contribution of bottom-up
signal derived features and data oriented classification processes to human cognition. Such
demonstrations then sharpen the question as to the contribution of top-down, expectation
based processes in human auditory cognition.
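
The supervised-learning stage can be sketched with scikit-learn on synthetic feature vectors (the real system's features come from its own signal analysis and perceptual modelling; only the SVM-plus-cross-validation pattern is illustrated here):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    # Stand-in feature vectors: rows are audio clips, columns are pooled
    # spectral/temporal descriptors; labels are e.g. music (0) vs. speech (1).
    X = np.vstack([rng.normal(0, 1, (200, 24)), rng.normal(0.7, 1, (200, 24))])
    y = np.array([0] * 200 + [1] * 200)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print("cross-validated F-measure:", scores.mean().round(2))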


An Exploration of Tonal Expectation Using Single-Trial EEG Classification

Blair Kaneshiro,*# Jonathan Berger,* Marcos Perreau-Guimaraes,# Patrick Suppes#


*Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, USA
#Center for the Study of Language and Information, Stanford University, Stanford, CA, USA

We use a machine-learning approach to extend existing averaging-based ERP research on
brain representations of tonal expectation, particularly for cadential events. We introduce
pertinent vocabulary and methodology, and then demonstrate the use of machine learning in
a classification task on single trials of EEG in a tonal expectation paradigm. EEG was
recorded while participants listened to two-measure chord progressions that established
expectation for resolution to the tonic. Cadential events included the tonic; repeated
dominant; bII; and silence. Progressions were presented in three keys. Classifications were
performed on single trials of EEG responses to the cadential events, with the goal of correctly
identifying the label of the stimulus that produced the EEG response. Classification of the
EEG responses by harmonic function of the cadential endings across keys produced classifier
accuracies significantly above chance level. Our results suggest that the harmonic function of
the stimulus can be correctly labeled in single trials of the EEG response. We show that
single-trial EEG classification can additionally be used to identify task-relevant temporal and
spatial components of the brain response. Using only the top performing time ranges or
electrodes of the brain response produced classification rates approaching and even
exceeding the accuracy obtained from using all time points and electrodes combined.
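
The abstract does not name the classifier used, so the following is a generic sketch of single-trial classification with shrinkage LDA on flattened epochs (all shapes hypothetical):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    # Hypothetical single trials: trials x electrodes x time points, with four
    # stimulus classes (e.g., tonic, repeated dominant, bII, silence).
    n_trials, n_elec, n_time = 240, 16, 50
    X = rng.standard_normal((n_trials, n_elec, n_time))
    y = rng.integers(0, 4, n_trials)
    X[y == 0, :, 30:] += 0.4  # toy class-dependent deflection

    # Flatten each trial to one feature vector; shrinkage LDA copes with
    # many features relative to the number of trials.
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    acc = cross_val_score(clf, X.reshape(n_trials, -1), y, cv=5).mean()
    print("mean accuracy:", round(float(acc), 2), "(chance = 0.25)")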


Exploring the mechanisms of subjective accenting through multivariate
decoding
Rutger Vlek,* Rebecca Schaefer,# Jason Farquhar,* Peter Desain*

* Radboud University Nijmegen, Netherlands
# University of Edinburgh, UK

Subjective accenting is a cognitive process in which identical auditory pulses at an
isochronous rate turn into the percept of an accenting pattern or rhythm. Subjective
accenting can occur spontaneously, for instance when perceiving the sound of a clock
(making tick-tick-tick-tick sound like tick-tock-tick-tock), but can also be voluntarily
controlled. In two EEG studies the neuronal mechanisms underlying our capability to
generate subjective accents have been investigated. The first study was set up to investigate
whether responses to subjectively accented beats could be decoded on a single-trial level
from 64-channel EEG signal. When this was shown to be possible, the same multivariate
single-trial approach was used to investigate the relationship between the imagined and
perceived accents, by predicting responses to (imagined) subjectively accented beats, from
responses to (perceived) physically accented beats. A second study was set up to investigate
214 12th ICMPC - 8th ESCOM Joint Conference, Aristotle University of Thessaloniki, 23-28 July 2012

SAT
the effects of different mental strategies on subjective accenting more closely, contrasting
imagined accents cued by a loudness accent versus a timbral accent. In addition to being
successful in decoding subjective accents from single-trial EEG up to 67% correctly, the first
study uncovered evidence for shared mechanisms in rhythm processing, showing similarity
between responses to perceived and subjective accents, with a maximum classification rate of 66%. Adding to this, the second study sheds light on how different strategies
modulate the responses to subjective accents, with preliminary results showing a significant
increase in the decoding performance of subjective loudness accents versus subjective
timbral accents, indicating that the robustness of the brain signature may depend on imagery
strategy or cueing parameters. The main contribution of this work is to provide an insight
into the cerebral mechanisms of subjective accenting, showing that not only is the brain
response detectable in a single trial of data, but it can also be predicted from the EEG
signatures of perceived accenting. Additionally, it is shown that imagery strategy has a
considerable effect, which has consequences for further research in this area. The use of
subject-specific classification methods also yields data on interpersonal differences, and the
range of responses that are measured, which makes it a tool particularly well suited to look
at the cognitive mechanism of imagery. The results may inform a rhythm-based Brain-
Computer Interface paradigm, allowing rhythm to be used to drive a device from the brain
signal alone.

Classification of movement repertoire within and across pianists

Shinichi Furuya, Eckart Altenmüller


Institute for Music Physiology and Musicians' Medicine, Hannover University of Music, Drama,
and Media, Germany

The large number of joints and muscles in the human body enables a rich variety of movement production across pieces and players during musical performance. Addressing similarities and differences across these movement repertoires provides insights for uncovering the motor
control mechanisms and biomechanical principles underlying virtuosic, artistic, and injury-
preventive performance. Multivariate analysis is one key to probing this issue, allowing the discovery of fundamental movement patterns hidden in large datasets.
The present talk aims to introduce some approaches using multivariate analysis and classification techniques for motion data recorded during piano playing, particularly focusing on three key issues. The first topic is describing the covariation of motion across joints. This issue
has been addressed by researchers who wish to elucidate the neural mechanisms governing complex motor behaviors in terms of dimensionality reduction of the redundant motor system. We will introduce principal component analysis (PCA) as a means of addressing changes in joint covariation at the hand through piano practice. The second topic is to
classify hand movement patterns across various tone sequences. Here, a combination of PCA
and cluster analysis enabled the segregation of a large number of hand coordination patterns into two to three classes. The third issue is individual differences in movement strategy across players when changing acoustic variables. We investigated this by combining multiple regression and cluster
analyses, which categorized pianists into a few groups according to similarity of the
movement patterns. In general, these techniques will be applicable to understanding both the
consistency and variety of bodily movements in musical performance.
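
A compact sketch of the PCA-plus-clustering pipeline described above (synthetic stand-in data; the actual studies use motion-capture recordings of hand and arm joints):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    # Hypothetical motion data: one row per played tone sequence, formed by
    # concatenating joint-angle trajectories (e.g., 20 joints x 50 samples).
    X = rng.standard_normal((60, 1000))
    X[:30, :200] += 1.0  # toy structure: two underlying coordination patterns

    # PCA reduces the redundant joint space to a few movement components,
    # and clustering then groups sequences by similarity of those components.
    components = PCA(n_components=5).fit_transform(X)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(components)
    print(labels)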


Paper Session 38, Dock Six Hall, 09:00-11:00


Musical Expectation and Predictability

Shannon entropy predicts perceptual uncertainty in the generation of melodic pitch expectations

Niels Chr. Hansen,*# Marcus T. Pearce*#


*School of Electronic Engineering and Computer Science, Queen Mary, University of London,
United Kingdom, #Department of Computing, Goldsmiths College, University of London, United
Kingdom

Following the proposal that schematic expectations arise from automatically internalised
probabilities in sensory input, we tested Shannon entropy as a model of predictive
uncertainty in auditory cognition.
Twenty-four melodic contexts were selected from two repertoires differing in rhythmic and tonal
complexity (i.e. complex Schubert songs and simple isochronous hymns). The contexts were
assigned to low- and high-entropy categories according to predictions of an unsupervised,
variable-order Markov model. Musicians and non-musicians listened to the stimuli and
provided explicit judgements of perceived uncertainty (explicit uncertainty) and an implicit
measure computed as the entropy of expectedness ratings obtained using a classical probe-
tone paradigm (implicit uncertainty). High-entropy contexts produced significantly greater
implicit uncertainty for both complexity levels and greater explicit uncertainty for hymns.
Averaged across participants, implicit uncertainty correlated with entropy. Musicians
experienced lower implicit uncertainty for both complexity levels and lower explicit
uncertainty for hymns. Entropy-by-expertise and complexity-by-entropy interactions were
found for implicit uncertainty. Moreover, Schubert songs produced higher explicit
uncertainty, and an expertise-by-complexity interaction was present. Unexpectedness
increased with information content; this effect was strongest in musicians and increased
with musical training. Additionally, a hypothesised entropy-by-expertise interaction was
found for these ratings. In conclusion, consistent with predictive coding theory, domain-
relevant training leads to an increasingly accurate cognitive model of probabilistic structure.
Furthermore, the efficacy of entropy as a model of predictive uncertainty is enhanced by: (a)
simplicity in sensory input, (b) domain-relevant training, and (c) implicitness of uncertainty
assessment. We argue that these factors facilitate the generation of more accurate perceptual
expectations.
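
For reference, Shannon entropy over a next-note probability distribution - the quantity used here as the model of predictive uncertainty - can be computed as follows (toy distributions):

    import numpy as np

    def shannon_entropy(probs):
        # H(X) = -sum p(x) log2 p(x), in bits; zero-probability terms drop out.
        p = np.asarray(probs, dtype=float)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    # A sharply peaked continuation distribution (low predictive uncertainty)
    # versus a flat one (high uncertainty) over, say, 12 candidate pitches.
    peaked = [0.8] + [0.2 / 11] * 11
    flat = [1 / 12] * 12
    print(shannon_entropy(peaked))  # ~1.41 bits
    print(shannon_entropy(flat))    # ~3.58 bits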


Evidence for implicit tracking of pitch probabilities during musical listening

Diana Omigie, Marcus Pearce, Lauren Stewart


Goldsmiths, University of London, UK

An emerging theory about the origins of musical expectations emphasises the role of a
mechanism commonly termed statistical learning. This theory has led to the development of
a computational model, which encodes past experience of pitch sequences and then predicts
the conditional probability of future events occurring given the current musical context.
Results from a previous behavioural study showed a close relationship between the
predictions of the model and listeners' expectedness ratings. The current study extends this
work to determine whether the model can also account for expectations made on the basis of
implicit knowledge, with the main aim of developing a tool able to provide a sensitive measure of listeners' dynamic musical expectations as they unfold in real time, while circumventing confounding factors related to decision making and musical competence. Methods: Target
notes that had either a high or low probability according to the computational model of
melodic expectation were selected and participants carried out speeded judgments to
indicate which of two instruments had played the target note. Notes for which a judgement
was required were indicated to the participants using a visual cue that avoided the need to
interrupt the flow of the melody while allowing the measurement of expectations at multiple
points in a piece of music. Results: As predicted, analysis of reaction times showed that
participants responded faster to high probability compared with low probability notes when
they were rendered in the same timbre as the preceding context. The present study provides
support for the view that musical expectations are formed on the basis of musical knowledge
acquired over a lifetime of incidental exposure. In addition, it validates an implicit priming
paradigm that takes full account of the dynamic nature of musical expectancy during
everyday music listening, and which is suitable for individuals of varying levels of musical
expertise.


Structural Conditions of Predictability in Post-Tonal Music: The Compound
Melodic Structures of Nikos Skalkottas's Octet

Petros Vouvaris
Department of Music Science and Art, University of Macedonia, Greece

The investigation of compound melodic structures has been an implicit feature of most
analytical approaches that adopt a prolongational perspective with respect to the
hierarchical structure of tonal music. When it comes to theorizing the compound structure of
melodies with no apparent tonal orientation, the problematics of prolongation associated
with post-tonal music discourage the espousal of the aforementioned approaches without
adapting their methodological paradigm to the requisites of this specific musical idiom. This
thesis concurs with the fundamental premise of the present paper as it relates to the opening thematic melodies of the three movements of Nikos Skalkottas's Octet (1931). Their analysis
aims at proposing an interpretation of their compound structure, based on an investigation
of the salient features that account for their respective associative middleground. The
perceptual relevance of these features is factored in the analysis by assimilating the
conclusions of empirical research on auditory stream segregation in relation to the implied
polyphony of monophonic tonal music. The analysis evinces the resemblance of the
associative middleground of Skalkottas's compound melodies to prolongational structures
commonly associated with tonal melodic lines. These findings prompt the assessment of the
compound character of the Octet's thematic melodies as one of the work's structural
attributes that induce and/or undermine expectations related to schematic, dynamic,
veridical, and conscious predictability.


Musical Expectation and paths in Tonal Pitch Space - Integration of
concepts/models and an application to the analysis of Chopin's Prelude in A
minor

Costas Tsougras
School of Music Studies, Aristotle University of Thessaloniki, Greece

Musical Expectation Theory (Huron 2006) describes how a set of psychological mechanisms
functions in the cognition of music. The theory identifies fundamental aesthetic possibilities
afforded by expectation, and shows how musical devices (such as meter, cadence, tonality)
exploit psychological opportunities. Tonal Pitch Space Theory (Lerdahl 2001) is an
expansion of the Generative Theory of Tonal Music (Lerdahl & Jackendoff 1983) and
proposes a model that provides explicit stability conditions and preference rules for the
construction of GTTM's time-span and prolongational reductions. This paper aims at the
integration of the perceptually/psychologically based Musical Expectation Theory with the mathematically/cognitively based Tonal Pitch Space Theory with the purpose of making the
principles of Melodic and Harmonic Expectation more explicit through the geometrical
representation and mathematical calculation of melodic tension and harmonic/regional
distance. The paper explores the correlation between key aspects of Expectation Theory
[ITPRA psychological responses (imagination, tension, prediction, reaction, appraisal), the
experienced listener's innate or learned expectations (such as pitch proximity, most frequent
past event, surprise), and emotional qualia (such as uncertainty, stability, closure)] and key
components of the Tonal Space model (melodic attraction, chordal distance, prolongational
tension and relaxation, normative structure) and attempts a parallelism between the concept
of expectation and the concept of hierarchical paths in Pitch Space. The integration is applied
on the analysis of Chopin's "enigmatic" Prelude in A minor (op. 28, nr. 2), proposing a
cognitive explanation of the prelude's musical effect that embraces or contradicts existing
analyses of the work. The proposed fusion of theories could induce a cognitively based music
analysis attitude that strives towards deciphering musical function rather than describing
musical form. Moreover, the proposed approach could be the incentive for empirical
research and experimentation.

Paper Session 39, Timber I Hall, 09:00-11:00


Perspectives on world musics

In Search of a Generative and Analytical Model for the Traditional Music of North Africa

Xavier Hascher
GREAM Laboratory of Excellence, Université de Strasbourg, France

This paper aims at applying a general model of modal monody, constructed deductively from
a theory of the generation of musical systems and scales, to the analysis of pieces of a given
repertoire, namely the traditional Arabo-Andalusian music of Tunisia, or mālūf ('customary').
The latter is therefore considered from a music-theoretical perspective rather than an
ethnomusicological one (be it of the etic type), even though a certain permeability between
the two approaches is, of course, assumed. After describing the model and summarizing the
principles that underlie its constitution, a brief recapitulation of previous analyses is given.
Then a new piece is presented, a shghul ('well-wrought song', a form related in style to the nūba) in the characteristic aṣbaʿayn mode. The purpose here is twofold: firstly, to attempt a
reductive analysis of the piece based on the theoretical assumptions exposed previously;
and, secondly, to derive from this a deeper grammatical understanding of the musical
language involved so as to allow at least a partial reconstruction, or recreation of the piece,
or of some similar one. What is sought is a finite vocabulary of structural gestures and a
syntax that regulates their articulation, which can be compatible with a more customary kind
of analysis in terms of modes (ṭubūʿ) and genres (udq), or the breaking down of form into
sections, yet without being bound by the limitations inherent to such approaches. Finally, a
reference is made to the point of view of the receiver and to potential cognitive implications.

218 12th ICMPC - 8th ESCOM Joint Conference, Aristotle University of Thessaloniki, 23-28 July 2012

SAT
Incidental Learning of Modal Features of North Indian Music

Martin Rohrmeier,* Richard Widdess#


*Cluster "Languages of Emotion", Freie Universität Berlin, Germany
#Department of Music, School of Oriental and African Studies, University of London, United
Kingdom

Musical knowledge is largely implicit; it is acquired without awareness of its complex rules,
through interaction with a large number of samples during musical enculturation. Whereas
several studies explored implicit learning of features of Western music, very little work has
been done with respect to non-Western music, and synthetic rather than ecologically valid
stimuli have been predominantly used. The present study investigated implicit learning of
modal melodic features in traditional North Indian music in a realistic and ecologically valid
way. It employed a cross-grammar design, using melodic materials from two rāgas that use the same scale, Toḍī and Multānī. Participants were trained on the ālāp section of either rāga and tested on novel excerpts from joḍ sections of both rāgas featuring 5 distinct melodic
features and using binary familiarity and 6-point confidence judgments. Three of the five
features were melodically distinctive of either rāga, whereas two were distinctive only in features other than mere pitch sequence (for instance, emphasis). Findings indicated that
Western participants in both groups incidentally learned to recognise some, but not all, of
the five features above chance level, and that the melodically distinctive features were better
recognised than the non-distinctive ones. Confidence ratings suggest that participants' performance was consistently correlated with confidence, indicating that they became aware
of whether they were right in their responses, i.e. they possessed explicit judgment
knowledge. Altogether participants began to incidentally acquire familiarity with a musical
style from beyond their cultural background during only a very short exposure.


Pictorial Notations of Pitch, Duration and Tempo: A Musical Approach to the
Cultural Relativity of Shape

George Athanasopoulos, Nikki Moran


Music Department, University of Edinburgh, United Kingdom

In a previous cross-cultural study we demonstrated that literacy makes a difference in the
way that performers regard textual representation of music. We carried out fieldwork
involving performers from distinct cultural backgrounds (Japanese musicians familiar /
unfamiliar with western standard notation (W.S.N.); members of the Bena tribe, a non-
literate rural community in Papua New Guinea; and classical-trained musicians based in the
United Kingdom - the pilot group). Performer responses to original auditory stimuli were
examined in order to explore distinctions between cultural and musical factors in the visual
organization of musical sounds. Three major styles of symbolic representation emerged:
linear-notational (x-y axial representation, time on x axis, variable parameter on y axis);
linear-pictorial (axial time indication, variable parameter represented pictorially); and
abstract-pictorial (no axial representation). In this follow-on study, we analysed resulting
pictorial representations in order to explore whether participants showed any notable
preferences that could be based on their cultural background. The pilot group had minimal
response in pictorial representations, opting for linear-notational models. Japanese
participants from both groups provided comparable pictorial responses amongst themselves,
favouring a horizontal time frame. Non-literate Benas - the only group who produced a
majority of pictorial and abstract-pictorial responses - provided significantly different
responses from other groups, in that their method of application did not follow the axial
representational model of time versus variable parameter. Although resemblances among
participant responses opting for linear-notational models of representation could suggest
underlying universality in music representation (particularly among literate participants),
the variety in pictorial and abstract-pictorial responses suggests that the association
between music and shape (where it takes place) is affected by cultural norms.


Socio-Cultural Factors Associated with Expertise in Indian Classical Music: An
Interview Based Study

Shantala Hegde, Bhargavi Ramanujam, Nagashree Santosh


Cognitive Psychology Unit, Center for Cognition and Human Excellence, Department of Clinical
Psychology, National Institute of Mental Health and Neuro Sciences (NIMHANS), Bangalore,
India

This exploratory study examined the socio-cultural factors associated with expertise in
Indian Classical Music (ICM), as there have hitherto been no systematic studies. Twenty
accomplished professional musicians with 'A' or 'A-top' grade from All India Radio (AIR)
were interviewed. Content analysis of the interviews was carried out to elucidate the factors
that facilitated and contributed to their musical pursuits and achievements. Factors
examined were broadly classified as family background, musical training, opportunities in
academic school, personal abilities and skills, and any other factors. All musicians had
precocious musical abilities. The active role played by parents, opportunities to learn music,
positive relations with music teachers, opportunities to attend music programs, to perform,
and to showcase one's talent at school, and regularity in music lessons and practice sessions
were considered important factors. Persistence, determination to succeed and a fine
balance of all the above factors were considered crucial in nurturing expertise and in
facilitating the level of attainment the musicians in the present study had reached. Active
music listening was reported as an extremely important factor, as it helped in improving
one's creative ideas in the improvisation and elaboration of ragas and talas. Ragas (roughly
analogous to modes) and talas (rhythmic cycles) form the edifice of ICM, which is basically
an oral tradition. Musicianship is reflected in the creative ways in which a musician develops
a raga and tala. This study adds to our understanding of the factors contributing to the
development of musical expertise from an Indian perspective.

Paper Session 40, Timber II Hall, 09:00-11:00


Communicating intention in performance

Embodied Communication Strategies in Accompanied Vocal Performance

Katty Kochman, Matthias Demoucron, Dirk Moelants, Marc Leman


Institute for Psychoacoustics and Electronic Music (IPEM), Gent University, Belgium

In this paper, the effects of nonverbal communication involving respiration during a
collaborative vocal performance are studied. Respiration in this context functions as an
anticipatory signal that allows for perceptual matching and effective decision making
between two performers: a singer and an accompanist. The experimental design uses
noninvasive respiration sensors during individual music rehearsal and then collaborative
music practice. The purpose of the research project is to analyze the effects of nonverbal
communication that occur between singers and accompanists during a vocal performance.
Efficient nonverbal cooperation between singers and accompanists is an important factor in
the improvement of vocal performance and vocal technique. The analysis of the specific skill sets
involved is an important area of this research study. The data collected in terms of
performance strategies may provide a significant insight into the effects of supportive
musical gestures on a vocal performance. Respiration values did appear to be affected by
musical collaboration. When examining the effects of previous interaction and
rehearsal on performance strategies, correlations were higher for the collaborative
conditions. In addition, correlations were higher for previously rehearsed pieces than for
pieces being rehearsed together for the first time.


Deadpan and immobile performance intentions share movement features but
not expressive parameters

Marc R. Thompson,* Marcelo M. Wanderley,# & Geoff Luck*


*Finnish Centre of Excellence in Multidisciplinary Music Research, University of Jyväskylä,
Finland; #Input Devices and Music Interaction Lab, McGill University, Canada

Investigations on expressive body movement in music performances have often employed
the standard paradigm, whereby musicians are asked to perform under conditions of varied
emotional and/or expressive intentions. By contrast, other studies have investigated the
effect of performing without extraneous physical movements by including an immobile
condition. It has generally been observed that expressively deadpan performances result in
smaller movements and an overall reduction of dynamics and expressive timing. Similar
results have been found in studies where musicians were tasked with performing using the
immobile condition. Interestingly, immobile and deadpan performance conditions have until
very recently not been included in the same experiment. The aim of this study is to examine
the effect of performing in deadpan and immobile playing conditions on movement
characteristics and expressive parameters. Pianists and clarinettists (total number = 14)
performing various musical excerpts were asked to play using four separate conditions
(deadpan, normal, exaggerated and immobile) and the performances were recorded and
motion-captured. To gauge the differences between each condition, we investigated timing,
dynamics and amount of physical movement. The results present evidence that for both
piano and clarinet performances, the deadpan and immobile conditions are related
according to the amount of physical movement used, but not in terms of other expressive
parameters (dynamics and timing). Hence, musicians were able to suppress extraneous
movements such as swaying and gesturing while maintaining an expressive timing profile
similar to when performing in a normal fashion. The presentation will further highlight these
relationships with statistical findings.
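
One common way to operationalize the 'amount of physical movement' in such motion-capture work is the cumulative path length of a marker trajectory. A minimal Python sketch on a simulated trajectory (the 120 Hz capture rate and the head-marker choice are assumptions, not details from the study):

    import numpy as np

    fs = 120                                    # assumed capture rate in Hz
    t = np.arange(0, 10, 1 / fs)
    # simulated head-marker trajectory in metres (x, y, z)
    head = np.column_stack([np.sin(0.5 * t), np.zeros_like(t), 0.02 * np.cos(t)])

    step = np.diff(head, axis=0)                # frame-to-frame displacement vectors
    path_length = np.linalg.norm(step, axis=1).sum()
    print(f"cumulative path length: {path_length:.2f} m over {t[-1]:.1f} s")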


The Intentions of Piano Touch

Jennifer MacRitchie, Massimo Zicari


Divisione Ricerca e Sviluppo, Scuola Universitaria di Musica - SUPSI, Conservatorio della
Svizzera Italiana, Switzerland

For pianists, touch is a corporeal tool that can be used not only to physically produce notes
on the piano, but to mediate their expressive intentions for the performed music. This paper
directs attention towards the cognitive decisions that result in these performed gestures,
generating different types of touch for the pianist. An open-ended questionnaire concerning
piano touch technique was sent to piano tutors from European conservatoires. Written or
verbal responses were collected; for the latter, the questions formed a semi-structured
interview. The results indicate that touch originates in the pianist's musical intention, an
intuitive response to the timbre of sound or specific mood they are trying to project, often
manifested through the use of imagery or metaphor. In connecting intention to physical
gesture, along with parameters such as weight and point of contact on the finger, the main
concern for pianists is control of tension within the limbs, which helps to create different
types of sound. A case study was examined in which a professional pianist performed two pieces
of different styles with two different sound intentions. Shoulder, arm and hand motion was
recorded via a video camera with a side view of the pianist. Results show that touch is heavily
based on musical context with movement and tension within the shoulder-arm-wrist system
changing based on musical intention. With the basis of touch rooted in conscious musical
expression, this study provides a starting point from which to explore the connection between
the conscious choice of the performer and the resulting physical gesture.


Functions and Uses of Auditory and Visual Feedback: Exploring the Possible
Effects of a Hearing Impairment on Music Performance

Robert Fulford,* Jane Ginsborg,* Juliet Goldbart#


* Centre for Music Performance Research, Royal Northern College of Music, Manchester, UK
#Research Institute for Health and Social Change, Manchester Metropolitan University,
Manchester, UK

Musicians with hearing impairments develop complex strategies for interactive performance
relying on dynamic, or sometimes reduced, auditory attending and increased visual
attending in music-making situations. Research suggests that there may be a relationship
between auditory feedback and the use of visual cues by musicians with hearing
impairments. To improve understanding of these processes, the present study explored the
use of auditory and visual cues by examining the movement and looking behaviours of
performing musicians. Four violinists with normal hearing were observed playing together
as two duos in four experimental conditions involving the attenuation of auditory and visual
information in which participants wore earplugs and/or faced away from their partner.
Dependent measures were the duration and frequency of physical movements and looking
behaviour as coded in Noldus Observer XT9. Analysis showed that auditory attenuation of
the level used in this study had no effect on the violinists' movement or looking behaviour.
The ability to see a co-performer did not affect movement behaviour but, where there was
the possibility of eye contact, the amount of both movement and looking behaviour
increased. Idiosyncratic, inter-player differences were far larger than intra-player
differences resulting from the manipulation of experimental conditions, highlighting the
uniqueness of individual playing styles. The results confirm that physical movement in music
serves many purposes: it is used expressively by the player but can be consciously modified
for the benefit of the co-performer.

Paper Session 41, Grand Pietra Hall, 11:30-13:00


Rhythm & beat

Melodic Direction's Effect on Tapping

Amos David Boasson, Roni Granot


Dept. of Musicology, Hebrew University of Jerusalem, Israel

Behavioral response to pitch (pure tone) change was probed, using the tapping methodology.
Musicians and non-musicians were asked to tap steadily to isochronous (2 Hz) beep
sequences featuring pitch events: rise, fall, peak, valley, step-size change, and pitch re-
stabilization. Peaks and valleys were presented in either early, middle or late ordinal
position within sequences. Two non-western melodic step-sizes were used (144 and 288
cents). Inter-Tap Intervals (ITIs) were checked for correlations to melodic direction and
step-size. Three contradictory predictions regarding response to melodic direction and step-
size were proposed: a) based on musicians' tendency to rush on ascending melodic lines,
the 'High-Urgent' hypothesis predicted shortened ITIs in response to rising pitches; b) based
on approach/withdrawal theories of perception and on ethological research showing lower
pitches interpreted as more threatening, the 'Flexor/Extensor' hypothesis predicted shorter
ITIs in response to falling pitches, due to stronger activation of the flexing muscles while
tapping; c) based on previous research on temporal judgment, a third hypothesis predicted
a single effect in both melodic directions, correlated with the magnitude of pitch change. Elicited
ITIs were related to the stimuli's melodic direction. Following the first pitch-change, the shortest
elicited ITIs were to pitch-rise in double-steps, showing a main effect of melodic direction.
Taps to rising lines maintained increased negative asynchrony through six taps after the first
pitch-change. However, peaks and valleys in mid-sequence position both yielded delays. The
'High-Urgent' hypothesis gained the most support, but does not account, for example, for the
delays at both peaks and valleys in mid-sequence.
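
A Python sketch of the basic ITI computation behind such results, on fabricated tap times and hypothetical event labels (not the study's data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    tap_times = np.cumsum(rng.normal(0.5, 0.02, 31))   # seconds; taps near a 2 Hz beat
    itis = np.diff(tap_times) * 1000                   # inter-tap intervals in ms
    # hypothetical label for the pitch event preceding each ITI
    direction = rng.choice(['rise', 'fall'], size=itis.size)

    t_stat, p = stats.ttest_ind(itis[direction == 'rise'], itis[direction == 'fall'])
    print(f"mean ITI after rise = {itis[direction == 'rise'].mean():.1f} ms, "
          f"after fall = {itis[direction == 'fall'].mean():.1f} ms, p = {p:.3f}")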


The relationship between the human body, motor tasks, mood and musicality:
How do you feel the beat?

Dawn Rose, Daniel Müllensiefen, Lauren Stewart & Christopher Lee


Department of Psychology, Goldsmiths, University of London, United Kingdom

Embodied rhythm encompasses the notion that perceptual preferences are constrained by
physical factors, may be goal-orientated and guided by cultural/environmental influences
(Leman, 2008). A study by Todd, Cousins & Lee (2007) yielded evidence suggesting that
body size is a possible determining physical factor in beat perception, i.e. the larger the body,
the longer the preferred beat period (PBP). We report here a follow-up experiment
investigating the relationship between body size, performance on motor tasks, and PBP, and
possible mediating effects of musicality and mood state. Forty subjects completed a mixed
design experiment, incorporating anthropometric measurements, motor tasks (walking and
tapping, estimating preferred step period and spontaneous inter-tap interval respectively),
psychometric tests of mood, and a measure of musicality, alongside the perceptual paradigm
estimating PBP used by Todd et al. (2007). Using a variety of methods of statistical analysis,
we found some evidence of a positive relationship between (some) anthropometric variables
and both preferred step period and PBP, as predicted, as well as suggestive evidence of
effects of musicality and mood variables.
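
For illustration only, a Python sketch of the kind of anthropometric-PBP correlation tested here, on simulated data in which a weak positive relationship is assumed:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    height_cm = rng.normal(172, 9, 40)                      # one anthropometric variable
    pbp_ms = 500 + 2.0 * (height_cm - 172) + rng.normal(0, 40, 40)  # assumed weak link

    r, p = stats.pearsonr(height_cm, pbp_ms)
    print(f"height vs. preferred beat period: r = {r:.2f}, p = {p:.3f}")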


Rhythmic Regularity Revisited: Is Beat Induction Indeed Pre-attentive?

Fleur Bouwer, Henkjan Honing


Cognitive Science Center Amsterdam, University of Amsterdam, The Netherlands
Institute for Logic, Language and Computation, University of Amsterdam, The Netherlands

When listening to musical rhythm, regularity in time is often perceived in the form of a beat
or pulse. External rhythmic events can give rise to the perception of a beat, through a process
known as beat induction. In addition, internal processes, like long-term memory, working
memory and automatic grouping can influence how we perceive a beat. Beat perception thus
is an interplay between bottom-up and top-down processes. Beat perception is thought to be
a very basic process. However, whether or not beat perception depends on attention is
subject to debate. Some studies have shown that beat perception is a pre-attentive process,
while others provide support for the view that attention is a prerequisite for beat perception.
In this paper, we review the current literature on beat perception and attention. We propose
a framework for future work in this area, differentiating between bottom-up and top-down
processes involved in beat perception. We introduce two hypotheses about the relation
between beat perception and attention. The first hypothesis entails that without attention
there can be no beat induction and thus no beat perception. The second hypothesis states
that beat induction is independent of attention, while attention can indirectly modulate the
perception of a beat by influencing the top-down processes involved in beat perception.

Paper Session 42, Crystal Hall, 11:30-13:00


Pitch, tonality & memory

Memory of a Prior Key after Modulation

Morwaread Mary Farbood


Dept. of Music and Performing Arts Professions, New York University, USA

This study examines how the percept of a tonal center is retained in working memory,
and in particular, how long the memory of a previous tonal region continues to affect the
perception of harmony following a key change. An experiment was designed to
systematically explore responses to key changes from an established key to a new key, and
then from this new key back to the original key. The duration of the new key section was
parametrically varied as well as the type of harmonic progression in the new key. Subjects
were asked to indicate how they felt harmonic tension was changing while listening to the
progressions. The magnitude and direction of the tension slopes following the modulations
indicate a gradual decay in the memory of the previous key, tapering off completely between
13.5s and 21s. Furthermore, harmonic context (stability and predictability of chord
progressions) plays an important role in how long a previous key is retained in memory.
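
A hedged Python sketch, on simulated continuous tension ratings rather than the experimental data, of the slope estimate on which such decay figures rest:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    t = np.linspace(0, 20, 400)                  # seconds after the return modulation
    tension = 5.0 - 0.08 * t + rng.normal(0, 0.1, t.size)   # simulated tension curve

    slope, intercept, r, p, se = stats.linregress(t, tension)
    print(f"tension slope after return: {slope:.3f} units/s (p = {p:.3g})")

A slope no longer differing from zero for longer new-key durations would be one signature of the prior key having decayed from memory.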


The Effect of Tonal Context on Short-Term Memory for Pitch

Panayotis Mavromatis, Morwaread M. Farbood


Dept. of Music and Performing Arts Professions, New York University, USA

This paper presents an experimental investigation into how the tonal interpretation of a
pitch affects its retention in short-term memory. The hypothesis that a clear tonal context
facilitates the retention of pitches over longer time-spans as compared to tonally ambiguous
or atonal contexts has been examined in previous work. We present two experiments that
aim to partly replicate previous findings while controlling for additional parameters. The
main experimental task involves comparing a probe tone to a target that is separated by
interference tones. We experimentally manipulated the degree of tonality of the interference
tones and the scale degrees of the target and probe, while fixing factors such as the time
interval between target and probe, and the overall pitch register. Our results indicate that
subjects may actually be responding to the tonal fitness of the probe, as described by
Krumhansl and Kessler (1982), and are not necessarily basing their responses on an
accurate pitch recall of the target.
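
A minimal Python sketch, assuming hypothetical per-scale-degree response rates, of how responses can be correlated with the Krumhansl and Kessler (1982) C major profile to probe such a 'tonal fitness' account:

    import numpy as np
    from scipy import stats

    # Krumhansl & Kessler (1982) C major key profile, pitch classes C..B
    kk_major = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                         2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

    rng = np.random.default_rng(4)
    # hypothetical rate of accepting the probe at each pitch class
    accept_rate = 0.10 + 0.05 * (kk_major - kk_major.min()) + rng.normal(0, 0.02, 12)

    r, p = stats.pearsonr(kk_major, accept_rate)
    print(f"fit with the tonal hierarchy: r = {r:.2f}, p = {p:.3f}")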

Memory for Sequence Order in Songs

Craig P. Speelman, Susan Sibma, Simon MacLachlan


School of Psychology and Social Science, Edith Cowan University, Australia

Previous research on memory for music has typically measured RT and accuracy in tests of
recall and recognition of songs. Little research, however, has focused on the ability of people
to switch their attention between various parts of a song to answer questions about those
parts. One hypothesis is that, because music unfolds in time, one's ability to consider
different parts of a song might be influenced by where in the song someone begins their
consideration, and also in which direction they are then asked to switch their attention, with
the overriding bias being in a forwards direction. The current study tested this forward bias
hypothesis. Fifty people were asked to identify whether the second excerpt (target line) of a
pair of excerpts taken from a song came before or after the first excerpt (probe line) in the
normal course of the song. Seven pairs of excerpts, three with the probe falling before the
target line and four with the probe occurring after the target line, were presented for each of 8 popular and 2 new
songs. It was predicted that RTs for identifying the target lines occurring after the probe
line would be shorter than those coming before the probe line. Results supported this
hypothesis. The familiarity of a song did not affect this result. A companion experiment that
compared performance on this task for musicians and non-musicians replicated these
results, but indicated no effect of musical expertise. These results support the hypothesis
that memory for songs is biased in a forward direction.
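
The forward-bias comparison reduces to a paired test on response times; a Python sketch with fabricated per-participant mean RTs (not the study's data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    rt_after = rng.normal(1800, 250, 50)              # ms; target after the probe
    rt_before = rt_after + rng.normal(220, 150, 50)   # assumed slower backward scanning

    t_stat, p = stats.ttest_rel(rt_after, rt_before)
    print(f"forward advantage = {(rt_before - rt_after).mean():.0f} ms, p = {p:.3g}")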

Paper Session 43, Dock Six Hall, 11:30-13:00


Brain imaging & perception

Short-term piano training changes the neural correlates of musical imagery and perception - a longitudinal fMRI study

Sibylle C. Herholz*, Emily B.J. Coffey*, Christo Pantev#, Robert J. Zatorre*


*Montreal Neurological Institute, McGill University; International Laboratory for Brain, Music
and Sound Research (BRAMS); Centre for Interdisciplinary Research in Music Media and
Technology (CIRMMT), Canada
#Institute for Biomagnetism and Biosignalanalysis, University of Münster, Germany

Short-term instrumental training has the potential to alter auditory cognition, but effects on
mental imagery of music are yet unknown. In the present study we investigated the effects of
six weeks of piano training on the behavioral and neuronal correlates of perception and
mental imagery of music, in a longitudinal functional magnetic resonance imaging study in healthy
young adults. Learning to play familiar simple melodies resulted in increased activity both
during listening and imagining of the trained compared to untrained melodies in left dorsal
prefrontal cortex and bilateral intraparietal sulcus, a network believed to be important for
motor learning and auditory-motor integration. For imagery, we additionally found training-
related increases in bilateral cerebellar areas involved in mental imagery of music. The
results indicate that the cortical networks for mental imagery and perception of auditory
information not only overlap, but are also similarly malleable by short-term musical training.


Long-term musical training changes the neural correlates of musical imagery
and perception - a cross-sectional MRI study
Emily Coffey, Sibylle Herholz, Robert Zatorre
Montreal Neurological Institute, McGill University; International Laboratory for Brain, Music
and Sound Research (BRAMS); Centre for Interdisciplinary Research in Music Media and
Technology (CIRMMT)

Long-term musical training has been linked to many of the perceptual, cognitive, and neurological
differences found between musicians and non-musicians. It is not yet known how training affects
auditory imagery; that is, the ability to imagine sound. Previous studies have shown that
secondary auditory and premotor areas are recruited for auditory imagery, as well as association
areas in frontal and parietal lobes, but differences due to experience have not been identified. Our
aim is to investigate the effects of long-term training by comparing the functional and structural
neural correlates of musical imagery of musicians and non-musicians. Twenty-nine young adults
including fifteen with extensive musical experience and fourteen with minimal musical
experience listened to and imagined familiar melodies during functional magnetic resonance imaging. The
task comprised four conditions: listen to familiar tunes, imagine them cued by the first tones of
the song, listen to random tones, or rest in silence. We tested the accuracy of mental imagery by
asking participants to judge if a note presented either after the imagery period or at the end of the
listening period was a correct continuation of the melody. In addition to the functional data, we
acquired anatomical data using diffusion tensor imaging, magnetization transfer, and T1-
weighted imaging. As expected, musicians demonstrated more accurate imagery performance
(85%) as compared with non-musicians (68%). Both groups showed activation during imagery in
a previously identified network encompassing secondary auditory cortex, pre-motor area,
dorsolateral prefrontal cortex, intraparietal sulcus, and cerebellum. However, the musicians
showed stronger activation in the supplementary motor area. Grey matter organization, white
matter integrity, and cortical thickness will be analyzed. While both musicians and non-musicians
are able to imagine familiar tunes, musicians are better at it. This performance difference may be
related to stronger recruitment of the supplementary motor area, which is involved in auditory
imagery, planning motor actions, and bimanual control. Analysis of the anatomical data will
clarify the relationship between these behavioural and functional differences and the underlying
brain structure. These results support the idea that long-term musical training affects higher
order sound representation and processing. Furthermore, the results of this cross-sectional study
complement those of short-term training studies in which practice cannot be extensive, but can
be experimentally controlled.
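
A Python sketch, with assumed per-participant accuracy scores rather than the reported data, of one reasonable way to test the musician vs. non-musician difference behind the 85% vs. 68% figures:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    musicians = np.clip(rng.normal(0.85, 0.08, 15), 0, 1)      # imagery accuracy
    nonmusicians = np.clip(rng.normal(0.68, 0.10, 14), 0, 1)

    t_stat, p = stats.ttest_ind(musicians, nonmusicians, equal_var=False)  # Welch's t
    print(f"t = {t_stat:.2f}, p = {p:.3g}")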


Common Components in Perception and Imagery of Music: an EEG study

Rebecca S. Schaefer,* Jason Farquhar,# Peter Desain#


*Institute for Music in Human and Social Development, Reid School of Music, University of
Edinburgh, UK; #Donders Institute for Brain, Cognition and Behavior, Centre for Cognition,
Radboud University, The Netherlands

The current work investigates the brain activation shared between perception and imagery
of music as measured with electroencephalography (EEG). Meta-analyses of four separate
EEG experiments are reported, focusing on perception and imagination of musical sound
with differing levels of stimulus complexity. Imagination and perception of simple accented
metronome trains, as manifested in the 'clock illusion', as well as monophonic melodies are
discussed, complemented by more complex rhythmic patterns as well as ecologically natural
music stimuli. By decomposing the data with Principal Component Analysis (PCA), similar
component distributions are found between experiments that explain most of the variance.
All datasets show a fronto-central and a central component as the largest sources of
variance, fitting with projections seen for the network of areas contributing to the N1/P2
complex. We expand on these results using PARAFAC tensor decomposition (which allows the
task to be added to the decomposition, but does not make assumptions of independence or
orthogonality) and calculate the relative strengths of these components for each task. The
components were shown to be further decomposable into parts that load primarily on to the
perception or imagery task, or both, adding more detail to the PCA results. The frontal and
central components especially are shown to have multiple parts, and these subcomponents
are differentially active during perception and imagination. A number of possible
interpretations of these results are discussed, taking into account the pitch and metrical
information in the different stimulus materials, as well as the different measurement
conditions.
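
A minimal Python sketch of the spatial-PCA step described above, run on simulated EEG (channels x samples); the shapes and preprocessing are assumptions, and a PARAFAC analogue (e.g., via the tensorly package) would add the task as a further mode of the decomposition:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(7)
    n_channels, n_samples = 64, 5000
    eeg = rng.normal(size=(n_channels, n_samples))   # stand-in for filtered EEG
    # inject one spatially coherent oscillatory source
    eeg += np.outer(rng.normal(size=n_channels), np.sin(np.linspace(0, 200, n_samples)))

    pca = PCA(n_components=5)
    scores = pca.fit_transform(eeg.T)     # component time courses, samples x components
    loadings = pca.components_            # spatial maps ('distributions'), components x channels
    print("variance explained:", np.round(pca.explained_variance_ratio_, 3))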

Paper Session 44, Timber I Hall, 11:30-13:00
Phenomenology & meaning

The Specificity of Musical Meaning in Helmuth Plessner's Philosophical Anthropology of the Senses

Markos Tsetsos
Department of Music Studies, University of Athens, Greece

Some recent psychological and philosophical approaches to musical meaning, especially
those on embodied music cognition, try to establish a bodily mediated relationship between
sound structures and mind. Nevertheless, the structural synarthrosis of sensuality (sound),
corporeality (movement) and understanding (meaning), as long as it is attempted in strictly
empirical terms, loses much of its philosophical cogency. In his writings on music, Helmuth
Plessner, a pioneer of modern philosophical anthropology, provides an a priori,
transcendental underpinning of the aforementioned synarthrosis, ensuring thus its
necessity. Plessner proceeds to a systematic account of the phenomenal qualities specific to
sound, such as producibility (Produzierbarkeit), remoteness-proximity (Fern-Nähe),
voluminosity (Voluminosität) and phenomenal spatiality (tonal position), impulsivity
(Impulsivität), temporal dynamism, and the ability to be displayed in intrinsically justified
horizontal and vertical structures. These qualities render sound and sonic movements
structurally conformable to man's phenomenal corporeality. Musical meaning, albeit semantically open, is
thus understood immediately in terms of human conduct (Verhalten). All these matters are
discussed in the first section of the paper. The second section presents a critical account of
some older and recent studies on embodied musical cognition in reference to Plessner's
theory. This critical account aims at a theoretical reconsideration of some basic issues
concerning this highly important trend of research.


Vers une musicologie anti-phénoménologique [Towards an anti-phenomenological musicology]

Ilias Giannopoulos

This paper will investigate some aspects of the relation of the musical work to time, and its
perception as a temporal artwork par excellence. The idea of a qualitatively experienced time as
opposed to objective time, the notion of temporal extension as it appears in the work of
Husserl (and Bergson), and the subjective ability of reflective perception of an extended
temporal object which exposes its material over a time interval (Husserl) gave rise, in the
field of music aesthetics, to phenomenological approaches to the temporality of the complete
musical work, with the conviction that it too constitutes an extensive temporal and
homogeneous object. However, in his extended lectures On the Phenomenology of the
Consciousness of Internal Time (1893-1917), Husserl demonstrates his phenomenological
analysis of the perception of temporal objects on the basis of small units, like melodies or
even single tones. The author will try to scrutinize the appropriateness of phenomenological
approaches to the temporality of the musical work and juxtapose them with Adorno's notion of
"intensive time", based on selected texts, mainly from his Musikalische Schriften, where he
unfolds a dialectical understanding of musical time. Phenomenological temporal analysis and
Adorno's time dialectics run in opposite directions: the one aims to extend an ideally
identical (since small and homogeneous) content in temporal succession, and the other aims
to comprise a diversity of content in the moment (on the basis of Hegelian logical principles).
The aim of this paper is to demonstrate misleading schematisms arising from holistic
phenomenological approaches to the temporality of the musical work, which in addition
presuppose supra-temporal categories that are questionable for the ontology of the musical
work. On the other hand, Adorno's idealistic attempt to comprise the manifold, successively
given and temporally extended content in the objective and aesthetic 'now' proves to be a
superior temporal hermeneutics, since it can be supported (without any kind of violence) by
concrete musical phenomena.

Is It All Autographic? Samples from the Musical Avant-Garde of the 60s

Panos Vlagopoulos
Dept. of Music Studies, Ionian University, Greece

A common critique voiced against Nelson Goodman's symbolic theory of art relates to his
strict adherence to an extensional semantics and, with it, the failure to account for the artist's
intentions. In fact, Joseph Margolis even doubts the sustainability of the autographic /
allographic distinction by claiming that since stylistic features are "profoundly
intentionalized, historicized, incapable of being captured by any strict extensionalized
notation, then it may well be that all so-called allographic arts are ineluctably autographic".
This, however, would amount to practically collapsing the distinction between score and
performance, which in turn is, if anything, a strongly engaged aesthetic view about musical
works. I would like to suggest that, in trying to understand the peculiarities of Avant-garde
music works of the 50s and 60s (graphic-score music-works and prose music), one can find it
very useful to use Goodman's autographic / allographic distinction, without necessarily
subscribing to Goodman's extensionalism. Against suggestions to the contrary, the two
elements (either the pictorial and the musical, in graphic-score music-works; or the
discursive and the musical, in prose music) should be addressed together as two irreducible
aspects of graphic-score or prose music-works. These types of music works rely on a sui
generis combination of autographic cum allographic elements. On the other hand, rehearsal
represents an essential stage of these music works, next to the preparation of the score, on
one end, and performance, on the other. I will try to illustrate this by using samples from the
work of Earle Brown, La Monte Young, and Anestis Logothetis.

Paper Session 45, Timber II Hall, 11:30-13:00


Music psychology & music therapy

A randomized controlled trial on improvisational psychodynamic music therapy in depression treatment

Jaakko Erkkilä, Jörg Fachner


Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä,
Finland

Music therapy has previously been found to be effective in the treatment of depression but
the studies have been methodologically insufficient and lacking in clarity about the clinical
model employed. The aim of this study was to determine the efficacy of music therapy added
to standard care compared with standard care only in the treatment of depression among
working-age people. Participants (n = 79) with an ICD-10 diagnosis of depression were
randomised to receive individual music therapy plus standard care (20 bi-weekly sessions)
or standard care only, and followed up at baseline, at 3 months (after intervention) and at 6
months. Clinical measures included depression, anxiety, general functioning, quality of life
and alexithymia. Participants receiving music therapy plus standard care showed greater
improvement than those receiving standard care only in depression symptoms (mean
difference 4.65, 95% CI 0.59 to 8.70), anxiety symptoms (1.82, 95% CI 0.09 to 3.55) and
general functioning (-4.58, 95% CI -8.93 to -0.24) at 3-month follow-up. The response rate
was significantly higher for the music therapy plus standard care group than for the standard
care only group (odds ratio 2.96, 95% CI 1.01 to 9.02). Individual music therapy combined
with standard care is thus effective among working-age people with depression.
The results of this study along with the previous research indicate that music therapy with
its specific qualities is a valuable enhancement to established treatment practices.
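
A Python sketch of how such an odds ratio and its 95% confidence interval (Woolf method) are computed, using hypothetical cell counts chosen only to illustrate the arithmetic; they are not the trial's:

    import numpy as np

    resp_mt, n_mt = 18, 33    # responders / total, music therapy + standard care (assumed)
    resp_sc, n_sc = 9, 33     # responders / total, standard care only (assumed)

    a, b = resp_mt, n_mt - resp_mt
    c, d = resp_sc, n_sc - resp_sc
    odds_ratio = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)          # SE of log(OR), Woolf method
    lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log)
    print(f"OR = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")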


Active Music Therapy and Williams Syndrome: a Possible Method for Visual-Motor and Praxis Rehabilitation?

A. Chiofalo,* A. Bordin#, A. Mazzeschi+, R. Aglieri


*Ce.s.m.m.e, Music and Medicine Studies Center, Pavia, Italy, #Conservatory, Pavia, Italy,
+Institute of Education, University of London, United Kingdom, Civic Institute of Music, Pavia,
Italy

Notwithstanding variation from person to person, research into Williams Syndrome
identifies difficulty in the following areas of psychomotor control: co-ordinating movements,
spatial orientation, physical ability and, in particular, visual-motor integration. These
difficulties are magnified by physical traits, mainly low muscle tone and contraction of the
joints, which present a further cause of reduced coordination. Music and sound act as specific
stimuli to obtain emotive and movement responses, activating various sensory areas. We
explored the efficacy of active music therapy (MT) on motor functions in patients with WS.
We investigate the use of active music therapy, in particular the use of rhythmic components,
to stimulate functional hand-eye co-ordination and visual-motor integration in patients with
WS. The study involved 10 subjects with WS, aged between 3 and 20. The patients were
involved in weekly sessions of music therapy. The sessions consisted of exercises using
rhythm and movement, vocal exercises and musical improvisation over a rhythmic base.
Patients did not require any musical training. A music therapist who played an active part in
the proceedings conducted each session. Visual-motor integration and praxis
were tested (VMI, Visuo-Motor Integration Test, adapted; TGM) before and after the program
and every two months during it. The patients showed significant improvements in
visual-motor ability and in praxis skills in the direct aftermath of the program. Less
significant, but nevertheless important, results were observed long-term. Music therapy is
thus shown to be effective in improving praxis skills and visual-motor integration in
subjects with Williams Syndrome. We propose using music therapy as an integrated
part of rehabilitation.

"Reframing time and space Drugs and musical consciousness"

Jörg Fachner
Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä,
Finland

Discussing the effects of drugs on music and consciousness is a difficult enterprise: on the
one hand, drugs have specific effects on physiology; but on the other, the phenomena
experienced and reported in drug-induced altered states of consciousness (dASC) cannot
simply be reduced to the perceptual consequences of those physiological effects. This paper
discusses the psychedelic effects of drugs (mainly cannabis) on the perception and
performance of music, and in particular how such drugs influence time perception in the
process of performance. Drugs bind to endogenous receptors of certain
neurotransmitters and therefore emphasize, amplify or weaken certain brain functions that,
even in extreme form, are also possible without drugs. Baudelaire already noted that
nothing supernatural happens under the influence of drugs, but that reality simply becomes
more vivid and receives more attention. Drugs have the capacity to reframe perspectives on
musical materials through an altered temporality and a temporarily more intense
stimulation and evocation of physiological functions. These changes take place in the context
of personal musical preferences, in a habituated set and setting that significantly influence
the listener's focus of attention on the musical time-space. If the information revealed in the
time course of some music becomes meaningful for the listener or performer, the brain has
various strategies available to it to zoom into particular parts of the music in order to
process musical elements more distinctly and in a more focused manner, in a hypofrontal
state of enhanced sensory perception.

Post-Conference Social Session: Grand Pietra Hall, 14:30-16:30


Global crises and their implications for research

Co-ordinated and co-chaired by John Sloboda and Mayumi Adachi


A two-hour post-conference session looking at the wider social and political context of our
research and practice, in the tradition begun at the ICMPC in Evanston and continued in
Bologna. A likely focus will be the current global economic situation as it is currently being
felt most strongly in Greece, and its impact on scholarship and intellectual exchange. This is
not part of the academic programme of the conference, but all registered conference
participants and their non-participant accompanying persons are encouraged to attend and
take part in the discussion. The session will be conducted in English.


AUTHOR INDEX

Abeßer, 164
Abla, 62
Adachi, 116, 143, 192, 230
Addessi, 26, 68
Aglieri, 229
Aguiar, 46, 93
Aiba, 102
Akinaga, 61
Akiva-Kabiri, 120, 199
Albrecht, 28, 66
Alexakis, 26, 68
Allpress, 177, 208
Alluri, 151
Almoguera, 162
Altenmüller, 92, 134, 215
Ambrazevičius, 86
Anagnostopoulou, 26, 68, 93
Antovic, 119
Aoki, 107
Armin, 90
Ashley, 34, 126
Athanasopoulos, 219
Atherton, 109
Atkinson, 159
Au, 121
Aucouturier, 38
Auer, 22
Aufegger, 78
Ayari, 24
Azaria, 199

Bååth, 101
Bagic, 83
Bailes, 15, 100, 133, 176
Baldwin, 29, 46
Barrett, 95
Barrow, 29
Bartlett, 32
Bartolo, 209
Bas de Haas, 55
Beck, 49, 170
Ben-Haim, 126
Benoit, 91, 139
Berger, 100, 214
Berkowska, 57, 61
Bertolino, 62
Best, 17
Beveridge, 211

Bhattacharya, 18, 36, 82, 84, 139, 152, 179
Bi, 39
Bigand, 38, 140
Billig, 35
Bingham, 49, 170
Birchfield, 32
Biró, 95
Bisesi, 185
Bittrich, 195, 198
Blankenberger, 195, 198
Blasi, 62
Boasson, 222
Bodnar, 186
Boer, 47
Bogert, 62
Boggio, 63
Bogunović, 81, 113
Bonada, 112
Bongard, 80, 136
Bordin, 229
Bortz, 93
Bourne, 29, 34
Bouwer, 223
Bozşahin, 206
Bramley, 134
Brattico, 62, 63, 151
Brodsky, 45, 99, 135
Bronner, 183
Brown, 16, 115
Broze, 43, 187, 205
Bruhn, 183
Buchler, 158
Büdenbender, 155
Budrys, 86
Bugos, 31
Burger, 58, 107, 127, 154, 170
Busch, 109

Cali, 98
Callahan, 75
Cambouropoulos, 41, 77, 157
Cameron, 84, 110
Canonne, 124
Carrus, 36, 139
Carugati, 26
Carvalho, 79
Cassidy, 162, 166
Cattaneo, 97
Chajut, 126


Chan, 89
Chandra, 169
Chang, 60
Chatziioannou, 174
Chiofalo, 229
Chmurzynska, 79, 114, 164
Chon, 190
Chuen, 187
Cirelli, 198
Clarke, 124, 152, 185
Clift, 208
Coffey, 225
Cohrdes, 106
Collins, 95, 201
Corrigall, 137, 165
Costa-Giomi, 47
Coutinho, 212
Creighton, 109
Crook, 211
Cucchi, 97
Cunha, 79
Custodero, 98

Dakovanou, 93
Dalla Bella, 57, 61, 91, 139, 196
Davidson, 16
Davidson-Kelly, 34
Dean, 15, 100, 143, 176
Deconinck, 204
Degé, 137
Delbé, 95
Deliège, 12
Demorest, 17, 98, 186
Demoucron, 220
Desain, 214, 226
Dibben, 134
Dilley, 53
Diminakis, 158
Ding, 39
Dittmar, 164
Dobson, 108
Doffman, 124
Dohn, 151
Donin, 202
Dowling, 32, 38, 42
Doyne, 152
Dunbar-Hall, 17
Dyck, 171
Dykens, 85

Edwards, 149
Eerola, 69, 161, 175

Egermann, 187
Eguia, 172
Eguilaz, 162
Einarson, 137, 197, 198
Eitan, 50, 126, 160, 185
Elowsson, 23
Emura, 61, 211
Erdemir, 49, 170
Erkkilä, 181, 228
Evans, 202
Exter, 37

Fabiani, 94
Fachner, 181, 228, 229
Fairhurst, 70
Falk, 116
Farbood, 74, 205, 224
Farquhar, 214, 226
Farrugia, 57, 91, 139
Fazio, 62
Fernando, 187
Féron, 202
Ferrari, 26
Ferrer, 175
Feth, 118
Finkel, 196
Fischer, 47
Fischinger, 66, 141
Floridou, 195
Foltyn, 84
Fornari, 35
Forth, 40
Fouloulis, 41
Foxcroft, 28
Frank, 58
Frank, 63
Friberg, 23, 94, 182
Frieler, 25, 66, 183
Fritz, 105, 174
Fujii, 99
Fulford, 222
Furukawa, 44
Furuya, 103, 215

Gao, 90
Garnier, 124
Geringer, 87
Ghitza, 74
Giannopoulos, 227
Giannouli, 19, 33
Giesriegl, 50
Gifford, 16



Gill, 203
Gingras, 96, 130, 143, 188
Ginsborg, 222
Giordano, 174
Giorgio, 158
Giovanni, 86
Glette, 204
Glover, 122
Goda, 103
Godøy, 169, 204
Goebl, 140, 141
Gold, 62, 63, 121
Goldbart, 222
Goldman, 123
Gollmann, 48
Gómez, 112
Goodchild, 69, 143
Gordon, 85
Goto, 42
Govindsamy, 105
Graepel, 156
Grahn, 210
Granot, 51, 145, 222
Gratier, 202
Griffiths, 189
Grollmisch, 164
Grube, 80
Gualda, 186
Guastavino, 112, 174
Guedes, 157

Háden, 197, 209
Hadjidimitriou, 83
Hadjileontiadis, 83
Hadley, 119, 168
Hallett, 135
Halpern, 33
Hamann, 37
Hambrick, 53
Handy, 60
Hannon, 48
Hans, 152
Hansen, 216
Harding, 54, 139
Hargreaves, 171
Hascher, 218
Hasselhorn, 80, 164
Hawes, 142
Hedblad, 94
Hegde, 38, 220
Heller, 198
Helsing, 149

Hemming, 134
Henik, 120, 199
Herbert, 46, 148
Herholz, 82, 85, 225
Himberg, 101, 203
Hinds, 128
Hirano, 60, 103, 104
Hirashima, 99
Hirt, 183
Hitz, 22
Hjortkjær, 49, 171
Hofmann, 141
Hofmann-Engl, 181
Honing, 197, 209, 223
Horn, 67, 194
Høvin, 204
Hughes, 76
Huovinen, 129, 184
Huron, 37, 66, 67, 125, 161, 205

Imberty, 158
Innes-Brown, 121
Ioannou, 82
Israel-Kolatt, 51
Ito, 60, 104
Ivaldi, 106, 146
Iwanaga, 44, 62

Jakubowski, 66
Janata, 95
Janković, 179
Jensenius, 204
Judge, 128

Kaczmarek, 144, 180
Kagomiya, 191
Kaila, 129
Kaiser, 67
Kamiyama, 62
Kanamori, 20, 45
Kaneshiro, 100, 214
Kang, 115
Katahira, 167
Katsiavalos, 157
Kawakami, 44, 102
Kawase, 184
Kazai, 102
Kecht, 65
Keller, 41, 70, 71, 188
Key, 85
Kidera, 191
Kieslich, 180


Kim, 100
Kinoshita, 60, 103, 104
Kitamura, 109
Kizner, 135
Klonari, 190
Knox, 162, 211
Kochman, 220
Koelsch, 48
Kohn, 50
Koniari, 138
Kopiez, 64, 106, 111, 117, 127, 145
Koreimann, 22, 113
Korsakova-Kreyn, 42
Kotta, 16
Kotz, 54, 91, 117, 139
Kouzaki, 191
Kozak, 169
Kranenburg, 95
Krause, 108
Krause-Burmester, 37
Kreutz, 80, 136, 155
Kringelbach, 152
Kuchenbuch, 82, 85
Kudo, 103
Kuhn, 92
Küssner, 121

Lamont, 27, 135, 147, 175
Lapidaki, 160
Larrouy-Maestri, 86
Lartillot, 24
Laucirica, 162
Launay, 100, 176
Leadbeater, 147
Leboeuf, 213
Lee, 223
Leech-Wilkinson, 121
Lega, 97
Legg, 208
Legout, 203
Lehmann, 80, 164
Lehne, 48
Leibovich, 199
Leitner, 22
Leman, 171, 220
Lembke, 172
Lense, 85
Lenz, 21
Lesaffre, 171
Lévêque, 86, 88
Li, 39
Liao, 89

Liebermann, 141
Liikkanen, 132
Lim, 124
Lindborg, 122, 193
Lindsen, 18, 84, 152, 179
Liu, 53
Lock, 16
Lorrain, 138
Lothwesen, 25, 66
Louhivuori, 207
Loui, 52
Louven, 129
Luck, 30, 58, 107, 127, 144, 154, 170, 221
Ludke, 116
Lund, 151

MacDonald, 211
MacLachlan, 224
MacLeod, 87
MacRitchie, 185, 221
Madison, 101
Madsen, 87
Maes, 171
Maestre, 206
Mailman, 163
Mallikarjuna, 72
Mankarious, 130
Manning, 57
Marchini, 206
Marcus, 74
Marentakis, 21
Margulis, 160
Marin, 125
Marozeau, 121
Marsden, 55
Martorell, 112
Mastay, 92
Matsui, 102
Matsumoto, 45, 107
Mauro, 193
Mavromatis, 224
Mayer, 141
Mazzeschi, 229
McAdams, 21, 69, 105, 143, 172, 187, 190
McAuley, 53, 92
Mendoza, 46
Merchant, 209
Micheli, 180
Misenhelter, 20
Mitchell, 111
Mito, 102
Mitsudo, 42, 209


Miura, 61, 102


Miwa, 103
Moelants, 142, 200, 220
Moran, 34, 219
Mori, 44
Morimoto, 30, 211
Morrill, 53
Morrison, 168, 186
Morsomme, 86
Moura, 93
Müllensiefen, 35, 55, 66, 91, 96, 130, 133, 188, 195, 196, 207, 223
Mungan, 33
Musil, 96, 130, 188

Nagata, 102, 103
Nagel, 29
Nagy, 156
Nakagawa, 191
Nakajima, 42, 173, 191, 209
Nakamura, 103
Napoles, 87
Nave, 92
Ness, 95
Ng, 88, 89
Nguyen, 186
Nichols, 98
Nielsen, 151
Nieto, 205
Nonogaki, 61
North, 108
Nozaki, 99
Nymoen, 169, 204

Obata, 60, 103, 104
Oehler, 65
Oelker, 27
Oh, 136
Ohsawa, 60, 103, 104
Ohtsuki, 103
Okanoya, 44, 62
Olbertz, 97
Olivetti-Belardinelli, 158
Olsen, 199
Omigie, 18, 216
Oohashi, 99
Ordoñana, 162
Orlandatou, 120
Orlando, 165
Osterhout, 17
Overy, 34
Öztürel, 206


Paisley, 166
Palmer, 140
Panebianco-Warrens, 28
Pantev, 82, 85, 225
Papadelis, 173, 190, 200
Papanikolaou, 190, 200
Papiotis, 206
Paraskevopoulos, 82, 85
Parncutt, 50, 67, 185
Pastiadis, 173, 190, 200
Patel, 105
Paul, 43, 118
Pawley, 207
Pearce, 18, 36, 84, 110, 139, 143, 152, 216
Pecenka, 70
Peebles, 15
Pennycook, 157
Penttinen, 184
Perreau-Guimaraes, 214
Pesjak, 22
Peter, 199
Petrovic, 119
Peynircioğlu, 33
Pfeifer, 37
Phillips, 31, 119
Pikrakis, 41
Piper, 183
Platz, 111, 117, 127, 145
Plazak, 161, 176
Poeppel, 74
Poon, 161
Pope, 213
Potter, 110
Prado, 209
Prem, 50
Prince, 118
Prior, 59, 121
Proscia, 172
Psaltopoulou, 180

Quarto, 62

Rahal, 146
Raju, 24
Raman, 38
Ramanujam, 38, 220
Randall, 83, 150
Raposo de Medeiros, 36
Reiss, 173
Remijn, 42
Repp, 70
Reuter, 65, 171


Rickard, 90, 150


Riera, 172
Rieser, 49, 170
Riess Jones, 96
Rink, 13
Rocha, 63
Roden, 80, 136
Rogers, 22, 163
Rohrmeier, 48, 156, 219
Rollnik, 92
Rose, 223
Ross, 24
Rowe, 67, 134
Rowland, 74
Russell, 78
Russo, 56

Saari, 161
Saarikallio, 30, 58, 107, 127, 150, 154, 170
Saitis, 174
Salembier, 203
Sammler, 54
Sandgren, 177
Santosh, 220
Sapp, 67
Scavone, 174
Schaefer, 214, 226
Schäfer, 27
Schellenberg, 130, 165, 194
Scherer, 212
Schiavio, 182
Schinkel-Bielefeld, 29
Schlaug, 52
Schlegel, 33
Schlemmer, 66
Schmidt, 138
Schön, 86, 88
Schroeder, 71
Schubert, 121
Schultz, 40, 188
Schurig, 109
Schutz, 57, 161
Schütz, 25
Schwarzer, 137
Sederberg, 118
Selchenkova, 96
Selvey, 168
Sergi, 90
Shanahan, 37, 187, 194
Shandara, 65
Shoda, 143, 192
Sibma, 224

Siedenburg, 171
Sinico, 186
Skogstad, 204
Sloboda, 108, 230
Slor, 45
Smetana, 141
Smith, 213
Smukalla, 27
Sobe, 123
Sowinski, 57, 196
Speelman, 165, 224
Spiro, 101
Stevanovic, 202
Stevens, 17, 121, 188, 199
Stewart, 18, 96, 125, 130, 188, 216, 223
Stigler, 50
Stoklasa, 141
Stolzenburg, 75
Strauß, 109
Sudre, 83
Sulkin, 99
Sun, 72
Suppes, 214
Suzuki, 191
Syzek, 92

Tabei, 43
Tafuri, 81
Taga, 99
Takeichi, 42, 191, 209
Takiuchi, 143
Tamar, 90
Tamir-Ostrover, 185
Tanaka, 43
Tardieu, 17
Taurisano, 62
Teki, 189
Tekman, 47
Temperley, 154
Tervaniemi, 51, 150
Thompson, 32, 54, 58, 107, 125, 127, 154, 170, 199, 203, 221
Tidhar, 168
Tillmann, 17, 73, 95, 96, 188
Timmers, 30, 211
Ting, 32
Tjoa, 213
Tobimatsu, 42, 209
Toiviainen, 58, 107, 112, 127, 144, 151, 154, 170
Tørresen, 169, 204
Toussaint, 110



Trainor, 137, 197, 198
Trehub, 194
Triantafyllaki, 26, 68, 93
Trkulja, 179
Trochidis, 140
Troge, 138
Tsai, 39, 178
Tsay, 73
Tsetsos, 227
Tsougras, 138, 158, 217
Tsuzaki, 102
Tzanetakis, 95

Ueda, 173, 191
Uhlig, 71
Upham, 178, 192

Vaes, 200
Vaiouli, 180
Van den Tol, 149
van der Steen, 41
van Handel, 75
van Kranenburg, 55, 56
van Noorden, 58
van Vugt, 92
van Walstijn, 174
van Zijl, 144
Vanden Bosch, 48
Vattulainen, 150
Vecchi, 97
Vempala, 56
Verga, 117
Vitale, 193
Vitouch, 22, 78, 113, 123
Vlagopoulos, 228
Vlek, 214
Voldsund, 169, 204
Volk, 55, 56, 95
Vouvaris, 217
Vroegh, 183
Vujović, 81
Vuoskoski, 30, 69
Vurma, 87
Vuust, 151, 152

Wallentin, 151, 152


Walters, 92
Wammes, 91
Wanderley, 221
Wang, 39, 70, 88, 89, 136
Watanabe, 99, 107
Weilguni, 141
Weinberg, 72
Weiss, 194
Wenger, 29
Widdess, 219
Widmer, 14
Wiering, 56
Wiggins, 18, 23, 40, 84, 110, 143, 152
Wild, 69
Williams, 169
Williamson, 91, 115, 133, 195
Winkler, 197
Winter, 186
Witek, 152
Wolf, 55, 117, 127, 145
Wollman, 105
Wöllner, 204
Woolhouse, 77, 168

Yamada, 20, 45, 211
Yamasaki, 20
Yan, 88, 89
Yanagida, 61
Yankeelov, 54
Yim, 76
Ylitalo, 184
Yoneda, 20, 45, 211
Young, 67
Yovel, 145
Yust, 74

Zacharakis, 173
Zacharopoulou, 160
Zamm, 52
Zarras, 200
Zatorre, 225
Zicari, 221
Ziv, 146







ISBN: 978-960-99845-1-5
