
Extended instruments

in contemporary music
How do extended instruments influence my compositional process?

Juan Luis Montoro Santos (3085031) - Master in Composition

Research Paper

Research Supervisors: Yannis Kyriakides and Samuel Vriezen


Abstract

Extended instruments are classical instruments with added sensor technology for
controlling real-time sound transformation in a direct way. The electronic processes
(algorithms) are kept simple: they work only as an extension of the instrument, so the
focus stays on the performer, who has total control of the electronic processes. This
allows the performer to obtain extended nuances in the playability and sound of their
instrument.

Interactive instruments have certain characteristics that can definitely influence the
compositional process and can help the composer solve some aesthetic concerns which
might arise when dealing with interactive computer music.

This paper aims to clarify the concept of extended instruments, going deeper into their
classification as part of interactive computer systems. Drawing on my experience, I present
the possible aesthetic consequences that they might have for my compositional process,
going from the initial idea of a piece to its final and future performances.

1. Introduction
2. Context
2.1 Interactive computer music systems
2.2 Categories and terminology
2.3 Extended instruments
3. Compositional aesthetics related to extended instruments
3.1 Too much magic
3.2 Cause and effect
3.3 Performative aspect (visuals)
4. Compositional process
4.1 How the building and composition process affect each other
4.2 Musical structure influenced by the choice of presenting the instrument
4.3 Importance of the performer - Collaborative process
4.4 Workflow
5. My own compositions with extended instruments
5.1 Reflejos de agua, sal, y hierro
5.2 Miles away study
6. Conclusions
7. Bibliography
8. Appendix
8.1 Interactive computer music systems


1. Introduction
Extended instruments belong to a broad classification of interactive computer systems, a
very wide term which involves a number of different approaches to performer-computer
interaction.

All the terminology used in this classification is quite recent, which presents difficulties
when looking for exact words for classifying or narrowing down the definition of extended
instruments. In order to resolve any possible confusion and properly contextualise my
topic, I will present a classification of terms and categories in chapter two.

After establishing the context needed to delve specifically into the idea of extended
instrument, I will focus on my compositional aesthetics, which are influenced by my own
choice of working with this type of interaction with electronics.

I will also address the aesthetic ideas that led me to choose these instruments as an
option to work with interactive systems, as well as historic examples, and my own
experience. Five years ago, I started to incorporate electronics in my compositions, and
from there onwards, I have been looking for different ways to blend the use of electronics
with my previous experience working with acoustic instruments. Lately, my work has been
focused on interactive computer music, looking for ways to bring that interaction into my
compositions and live performances. Extended instruments have been the result of that
search.

Finally, I will explain the practical aspect of my compositional process when dealing with
extended instruments. I will talk about two pieces composed with different approaches to
extended instruments, referring to my experience as a composer and sound technician.

2. Context

2.1 Interactive computer music systems

When describing the interactive system between a performer and a computer, the main
characteristic aspect is the need for an input from the performer, and a response (output)
generated by the computer from that specific input.

Joel Chadabe was one of the first composers to deal with these terms and to theorise
about this interactivity, starting in 1967. He used the term “interactive composing” to
describe real-time communication in a live performance, underlining the importance
of the fact that the communication/interactivity happens between a human and a
machine.1

Some decades later, in 1993, Robert Rowe proposed a specific definition for “interactive
computer music systems”2:

“Those whose behaviour changes in response to musical input. Such
responsiveness allows these systems to participate in live
performances, of both notated and improvised music.” (Robert Rowe,
Cambridge, 1993)

The performer needs an interface to send an input, just like the computer needs an
algorithm to create the response. Depending on the characteristics and types of
interfaces and algorithms used in an interactive system, different terminologies have been
used to differentiate and categorise them.
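The input–algorithm–response loop described above can be illustrated with a minimal sketch. This is not code from any of the systems discussed in this paper; the normalised sensor value and the placeholder "process" (a simple signal inversion blended with the dry sound) are hypothetical:

```python
# Minimal sketch of an interactive loop: performer input (a normalised
# sensor value) drives an algorithm that produces the computer's response.
# The mapping and the "process" are illustrative assumptions only.

def response(sensor_value, dry_signal):
    """Map a normalised sensor input (0.0-1.0) to a simple transformation:
    a wet/dry mix between the acoustic signal and a processed copy."""
    wet = [s * -1.0 for s in dry_signal]       # placeholder "process": inversion
    mix = sensor_value                         # performer input = mix amount
    return [d * (1 - mix) + w * mix for d, w in zip(dry_signal, wet)]

signal = [0.5, -0.25, 1.0]
print(response(0.0, signal))   # sensor at rest: the unprocessed signal
print(response(1.0, signal))   # sensor fully engaged: the fully processed signal
```

The point of the sketch is the division of roles: the interface supplies only `sensor_value`, while the algorithm decides what that input means for the sound.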

1Joel Chadabe, “Interactive Composing: An Overview”, The Music Machine. Selected Readings from Computer Music
Journal, Curtis Roads (ed.), Cambridge, Mass., MIT, 1989.

2 Robert Rowe, “Incrementally Improving Interactive Music Systems”, Contemporary Music Review, 1996, Vol. 13, Part 2, pp. 47-62.


2.2 Categories and terminology
Firstly, it is important to clarify that interactive computer music can be found in a very
wide range of different contexts. M.M. Wanderley and N. Orio (IRCAM)3 worked out a list
of possible categories of Human Computer Interaction (HCI) in music:

- Musical-instrument manipulation: focused on the instrument as the interface for
the computer-performer communication.

- Score-level control: development of a system for automatic accompaniment or
synchronisation.

- Digital navigation: the interaction is made via a digital platform or software.

- Multimedia installations: works involving different media that create an
interaction in the space with the visitors/audience.

- Sound processing control: the interaction is focused on the processes applied to
an acoustic or analog sound source.

- Dance/music interactions: establishing an interaction between dance
movements and generated or processed music.

- Computer games: generating music for the engine and interactivity of computer
games.

Analysing this list of possibilities, extended instruments would be part of the “Musical-
instrument manipulation” category, but would also relate to the “Sound processing
control” one, which deals with the invention of new instruments and controllers, or
additions to already existing instruments. But even if we focus our attention only on the
musical-instrument manipulation category, we can go into a more detailed
classification depending on the characteristics of the instrumentalist-computer relation,
and the characteristics of the controllers used for the input:

3 M.M. Wanderley and N. Orio, “Evaluation of Input Devices for Musical Expression: Borrowing Tools from HCI”, Computer Music Journal, 2002, Vol. 26, No. 3, pp. 62-76.

According to Robert Rowe's “Classification of Interactive Systems”4, we can classify the
instrument-computer relation depending on:

• Score-driven or performance-driven:

- Score-driven interactions are related to the synchronisation of electronics
and performer at score level, using predetermined event collections, or
stored music fragments, to match music at the input (depending on time,
tempo, or beats).

- Performance-driven interactions do not anticipate the realisation of a
particular score. The computer does not have a stored representation of
what to expect as an input, so the processes are controlled by the
performer's choices.

• Response methods:

- Transformative: these take existing musical material and apply
transformations to it to produce variants.

- Generative: these responses are based on sets of rules (algorithms) to
produce completely new musical output from stored fundamental material.

- Sequenced: prerecorded music fragments played in response to an input.

• Player or instrument paradigm:

- Player paradigm: related to the computer as an “artificial intelligence”,
creating a second player that makes autonomous decisions.

- Instrument paradigm: here, the concern is to create a modification or
extension of the instrument to control or trigger the electronic processes.

4 Robert Rowe, “Incrementally Improving Interactive Music Systems”, Contemporary Music Review, 1996, Vol. 13, Part 2, pp. 47-62.


Both Rowe and IRCAM proposed a list to define the characteristics of the controllers, and
the two lists have similar terms, although each lacks something the other has. Here I
propose my attempt to combine both of them for an easier definition of the controllers:

• Learnability: a virtue of an interface that allows users to become familiar with
it and make good use of all its features and capabilities.

• Explorability: admits or allows the exploration of new possibilities. The better
you know the interface, or the more you practise with it, the more possibilities you
get from the instrument.

• Feature controllability: deals with the possibility of forcing the system into a
particular state by applying a control input. If a state is uncontrollable, then no
input will be able to reach that state.

• Timing controllability: the amount of delay before the electronics/computer
responds to the performer's input.

• Historical foundations: using an existing technique of the instrument (idiomatic
language). This can range from using an existing technique, to modifying one, to
creating a new one (for example, new interfaces are mostly new creations, as they
are not usually based on historical instruments).

• Psychological nature of control: the perceived naturalness of the link between
action and sound response.

Then, depending on the presented characteristics of the performer-computer relation and
the nature of the controller, within the instrument-manipulation category we can find
the following categories (where we already find the term extended instruments):

• Live-coding

• New interfaces (NIME)

• Digital and hybrid lutherie

• Intelligent/interactive instruments - Composed improvisation (Joel Chadabe)

• Hyperinstruments (Tod Machover)

• Expanded instruments (Pauline Oliveros)

• Augmented instruments (IRCAM, Stanford University, and STEIM)

- Sensory augmented instruments

- Actuated instruments

• Extended instruments (Jeff Pressing)

• “Smart instruments” (KTH Royal Institute of Technology, MIND Music Labs / Queen
Mary, London)

• Alternatives to physical control (example: “Music for Solo Performer” by Alvin Lucier)

In the appendix there is additional information about each of them.

Choosing one or another category of interactive computer music systems already carries
some aesthetic choices that will affect the compositional process. The one which is closest
to my aesthetic ideas is the extended instrument, which I will present next.

2.3 Extended Instruments

An extended instrument is one that adds new physical controls to an existing (traditional)
control interface. Note that if control mechanisms and resultant effects are changed
radically enough, we essentially have a new instrument, rather than an extended one.5

This definition can obviously be applied to many innovations in the history of acoustic
instruments (lutherie nouvelle), but the term extended instrument is more related to
extension through electronics.

5 Jeff Pressing, “Cybernetic Issues in Interactive Performance”, Computer Music Journal, 1990, Vol.14, pp. 12-25.
Extended instruments could be mistaken for sensory augmented instruments, but it is
important to underline what Jeff Pressing says about limiting the control mechanism and
the output complexity in order to stay close to the classical instrument that is being
extended. When too many new gestures, or too many computer processes and algorithms,
are added, the performer starts to lose control of the output in favour of the complexity of
the computer response, so the instrument can no longer be understood as extended, but
rather as a sensory augmented instrument.

Relating extended instruments to the terms used by Robert Rowe, they are instruments
based on a performance-driven interactive system, in which the response method is
mainly transformative, and which is completely focused on the instrument paradigm, the
computer being an extension of the instrument's possibilities.

As controllers, extended instruments have a high degree of learnability, explorability,
and feature and timing controllability. These aspects allow performers to have direct
control of the sound of their instrument, and to explore the nuances of sound that the
instrument can offer as their knowledge of it improves.

Apart from their use of electronics, the characteristic that makes extended instruments
differ from most classical instruments is the lack of historical foundations. When
designing the way the sensors are added to the instruments, it is essential for me to look
for the best option for integrating the movements needed to control the sensor
information with idiomatic gestures of the instrument that is being extended. Although
the usual goal with an extended instrument is an integration of the extension with the
idiomatic language of the instrument, this is not completely possible from the first
moment the performer tries it, because a new element of control is added, and this new
element requires some time to be learned and assimilated by the performer. That is
similar to any new technique that performers want to incorporate into their repertoire,
but in this case the learning process is based not on exploring new possibilities of the
instrument, but on exploring a new element added to it, which changes the usual relation
with it.

3. Compositional aesthetics related to extended instruments

Lately, working with live electronic processes in my compositions has become an
important field of investigation for me. I think that the idea of extending the
possibilities of classical instruments with electronics opens new approaches to
composition, and enhances the triangular relation of performer, composer, and
instrument.

In the same way that choosing a specific instrumentation is already a compositional
decision that influences the whole compositional process, choosing to compose for an
extended instrument is already part of the decision-making process. Extended
instruments are only one option among all the interactive computer music systems
explained before, and as they have their own characteristics, the decision to work with
one of them is already a main part of my compositional process.

In this chapter I will approach the three main thoughts/aesthetics that led me to work with
extended instruments as the interactive computer music system that best fits my
compositional thinking: reducing the amount of “too much magic”, enhancing the cause
and effect in the performer's control of the sound, and generating coherent visuals.

The first one is related to the amount of artificial expressivity (the computer as an
expressive agent) used in relation to the performer's actions, also defined as the balance
between the complexity of the output and the input. Secondly, I will talk about cause and
effect, dealing with different levels of action-response and of controllability of the
electronics by the performer. Finally, I will present my point of view on how extended
instruments could enhance the visual aspect of the performance of the music.

3.1 Too much magic


My concern about this issue started when attending concerts of algorithmic computer
music. These algorithms, or artificial intelligences, can end up being presented in pieces
as fixed media, or as part of live performances with interactive computer music.

Focusing our attention on pieces with live electronics, I have had many experiences in
concerts where the electronic processes lacked any relation to the performer on stage.
These were situations where the gestures of the performer were not easily linked with the
resultant sounds, not only in a one-to-one relation, but also in an overall feeling of
different levels of complexity between performer and sound.

The main worry for me was not the visual feedback I was getting from the stage, but the
fact that the situation made me aware of how the abilities of the performer were being
underestimated. The sound did not depend so much on the actions of the performer, who
was only acting as a necessary input for the real music maker: the computer and the
programmed algorithms.

In interactive computer music we can speak of two agents: the computer and the
performer; and in the situations described above, the balance was tipped in favour of the
computer, against the traditional relevance of the performer to the music.

The computer overtaking the performer as the main agent of music making is a concern
that has already been raised by several authors:

“One of the greatest issues in the era of digital sound concerned what
consequences the digital episteme of numbers and instructions,
algorithmic representations, and software abstractions had for the
embodied act of musical performance.“ (Chris Salter, 2010)

“The fading of the active and live act of sensorimotor perception,
replaced by the articulation of machines that analyse gesture and re-
render it in mathematical abstraction, far removed from the materialised
act of generating sound in the physical world.” (Peter Sellars, 2010)

With pieces where the code is the most important aspect of the creation of the sounds, it
is easy to understand the point of view of those composers who are frustrated by the
sterility and anti-physical nature of that kind of music, and their efforts to give the
responsibility back to the performer. That focus on physicality in the performance is
related to producing an organic sound, not only to visual goals.

Bringing back the performer's gestures as a relevant input to generate or control the
sounds is a reaction against the danger of overly artificial sounds that rely only on
algorithmic equations. Furthermore, starting the sound production from an acoustic or
analog sound provides rich sounds that can then be enhanced with electronic processes
without losing their organic nature.

We could compare this situation with the design of artificial landscapes for films. One way
to go would be to design the landscape with a computer from scratch and build it
digitally, which is usual in video games, as they have to create a whole artificial world to
explore. That option is not common in films, as it is a complex and long process and the
result is too artificial. In films, they start with footage from real landscapes and, working
with overlays, transform the real landscape into the imaginary one (but keeping
underneath the organic part of the first, real landscape). This is why these landscapes are
perceived as more realistic than the ones used in video games. Even films based on totally
imaginary worlds use that technique:

According to Cameron himself, the filming of AVATAR is actually composed of 40 percent
live elements and locations.6

I have that kind of experience in my own work, where I try to conserve the ‘realism’, by
which I mean the richness and sense of realness of the sonic material, using an acoustic
sound source and adding a layer of human control to the electronic processes I use,
always trying to avoid the computer overtaking the human as the performing subject.

In a conversation with Hugo Morales, he underlined the risk of trusting the latest complex
computer sound processes to enhance music pieces. As he explains it, technology moves
at an amazingly fast pace, so whatever we find interesting or complex about computer
processes nowadays could seem quite simple and flat in five years' time. By keeping the
electronics limited to simple processes, the quality of the music does not lie in the
complexity of the computer algorithms, but in the creativity of the composer in bringing
out the best from those simple electronic processes. In that case, the abilities of
composition and of ordering sound in time become more important than the computer
processes, enhancing the composer's skills over the technician's skills, and deepening the
new possibilities that electronics offer instead of continuously trying to expand them.

6 The Official AVATAR Community, “Where Did the Filming of AVATAR Take Place?”, accessed April 11, 2018,
http://avatarblog.typepad.com/avatar-blog/2010/04/all-about-the-filming-location-of-avatar-and-how-the-filming-of-avatar-took-place.html

3.2 Cause and effect


Nowadays, some pieces require skills from classical performers that are not really related
to their instrument's idiomatic language. These pieces take the performers away from
their strengths and push them into situations that could simply be handled by a sound
technician without so many years of practising the instrument.

In traditional classical instruments, when players want to change the sound they are
producing while playing, they can immediately generate a physical response that pushes
the sound towards the one they are looking for. The sound of classical instruments is
totally related to the physical input of the players, although a long period of study is
needed to learn how to control it and get the most out of it.

In his article “Cybernetic Issues in Interactive Performance Systems”, Jeff Pressing
presents the following diagram to explain the traditional performer-instrument relation:

Figure 1. The traditional performer-instrument relation (Jeff Pressing)

In nearly all instruments, the transfer of information from human to instrument depends
upon its dynamic encoding in human movement. The parts of the instrument that are
directly controlled or manipulated by parts of the body, and to which information is
directly transferred, are called the control interface.

The parts that actually produce the sounds are called the effector mechanism. Intervening
between the control interface and the effector mechanism there is often a processor of
some kind that converts information from control format to effector format.

Enhancing this traditional mechanism in new extended instruments, the electronics
become another parameter of the instrument, one the performer is also in total control of.
In that way, we get full commitment from the performer while playing the piece, in order
to get the best control of the nuances of the technology they are dealing with.

In a conversation with Frank Baldé, he talked about how he and Michel Waisvisz started to
think about “The Hands” (a new musical interface they developed in 1984, making use of
sensor data converted into MIDI). He explained that their goal was not the visuals,
although he thinks these were important for bringing the performances to new audiences.
Their real goal was transmitting the freshness of live decision-making on stage, and the
energy and organicity of the performer reflected in the sounds.
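As an illustration of the kind of conversion The Hands performed, here is a hypothetical sketch of scaling a raw sensor reading into a 7-bit MIDI control-change message. The 10-bit input range, controller number, and channel are assumptions for the example, not STEIM's actual implementation:

```python
def sensor_to_midi_cc(raw, raw_min=0, raw_max=1023, cc_number=1, channel=0):
    """Scale a raw sensor reading (here assumed to come from a 10-bit ADC)
    to a 7-bit MIDI control-change message: (status, controller, value)."""
    raw = max(raw_min, min(raw_max, raw))      # clamp out-of-range readings
    value = round((raw - raw_min) * 127 / (raw_max - raw_min))
    status = 0xB0 | channel                    # control-change status byte
    return (status, cc_number, value)

print(sensor_to_midi_cc(0))      # -> (176, 1, 0)
print(sensor_to_midi_cc(1023))   # -> (176, 1, 127)
```

The quantisation from a continuous sensor down to 128 MIDI steps is itself a design decision: it limits how finely the performer's movement can be reflected in the sound.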

Michel Waisvisz (1949-2008) and Frank Baldé (1956- ) focused on the live performance
of electronic music. They developed many instruments with which electronic sounds
could be produced through physical touch and manipulation. Waisvisz was concerned
about the disappointing feeling of new instruments that lack a high degree of
controllability by the performer:

“The recent introduction of the technology of commercial digital
synthesisers, with its single-editing input device, has in some ways decreased their
overall realtime controllability relative to the one-knob/one-function editing of older
analog machines.” (Michel Waisvisz, 1985)

In fact, both Baldé and Waisvisz, together with other Dutch composers and performers,
started the STEIM institute, focused on boosting physicality in the performance of
electronic music.

STEIM was not the only new institute of that time; IRCAM also appeared, contributing
new advances in the field of interactive computer music systems, but IRCAM was more
focused on live sound analysis as a way to control the electronics, and not so much on
physical control.

As well as institutions, some recent composers have also underlined the need to bring
the energy of the performer back on stage and into the music:

“And this energy, this will-to-perform and to bring across this intent was to me, the
original starting point to work with those gestures and try to come up with
technology or patches or with a concept for a piece that would allow that same
energy. The beginning was not so much about theatre or performance-art, but
more about the raw energy and expression of these musicians.”7 (Alexander
Schubert, interview with Zubin Kanga)


Before moving on to the visual aspect, we could say that although the visual aspect of the
movements and of the control of the electronics might be used to enhance the live
performance, the goal of the one-to-one relation between performer and electronics is the
resultant sound-world.

3.3 Performative aspect (visuals)


Although the focus on the physical control of electronics in performance is related to the
production of an organic sound, visual aspects derive from that physical control.

The electronic processes are controlled live by physical effort and require total
commitment from the performer to the whole sound-world. By giving most of the
responsibility for the live performance back to the performer, we get their commitment
during the whole piece. That enhances the live performance: we see a real effort and
commitment on stage, which is necessary for the good execution of the piece. These
situations are different from other performances where movements and commitment do
not influence the sound or the musical result, as they only happen for visual purposes.

7 Zubin Kanga and Alexander Schubert, “Flaws in the Body and How We Work with Them: An Interview with Composer
Alexander Schubert”, Contemporary Music Review, 2016, Vol. 35, pp. 535-553.

For example, let's think of a performance in which all the performer does is dramatically
press a button which makes a sine tone start or stop. On the one hand, the expressive
movements of the performer are not related to the sound production if the only thing that
matters for the sound is pushing the button, no matter how you do it. On the other hand,
if the button requires a really specific way of pressing it, among a number of possible
ones, that could create slightly different sounds. Then the commitment and accuracy of
the performer are real and necessary.

Furthermore, when using sensors, the composer asks the performer for new movements,
so it is the composer's job to think about the visual impact those movements will have in
the performance, and about their organisation in time, in a similar way to a
choreographer. Nowadays, it is increasingly usual for the performative aspect to also be
the responsibility of the composer, understanding the composer's role as that of the
auteur of the final performance, as underlined by Jennifer Walshe in her New Discipline
manifesto.8

In a conversation with Alexander Schubert, he explained to me that when he started to
work with sensors he was coming from the improvisation field, so he tried to bring the
physicality of free improvisation into the work with electronics through sensors. He
mapped the data of the sensors in such a way that really specific gestures were needed to
activate them, so the performer had to follow a choreography full of expressive
movements to make them work. He said that the borders between composing physical
movements and composing sounds got blurred when deciding how the sensors worked in
the pieces.
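A mapping of this kind, where only a deliberate gesture activates a process, can be sketched as a simple threshold on successive accelerometer readings. This is a hypothetical illustration of the principle, not Schubert's actual patch; the readings and threshold value are invented:

```python
# Sketch of gesture-triggered mapping: a process fires only when the change
# between consecutive (normalised) accelerometer readings exceeds a threshold,
# so small incidental movements do nothing and the performer must make a
# deliberate, expressive gesture.

def detect_triggers(readings, threshold=0.5):
    """Return the indices at which the jump between consecutive readings
    is large enough to count as a triggering gesture."""
    return [i for i in range(1, len(readings))
            if abs(readings[i] - readings[i - 1]) > threshold]

calm   = [0.10, 0.12, 0.11, 0.13]    # idle hand: no gesture detected
fierce = [0.10, 0.90, 0.15, 0.95]    # sharp, deliberate gestures
print(detect_triggers(calm))         # -> []
print(detect_triggers(fierce))       # -> [1, 2, 3]
```

Raising the threshold makes the required choreography more pronounced, which is exactly where composing movement and composing sound begin to blur.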

8Jennifer Walshe, “The New Discipline: a compositional manifesto”, Roscommon, January 2016, commissioned by
Borealis.
4. Compositional Process

As previously mentioned, choosing extended instruments as a way of using interactive
computer music systems carries aesthetic choices that will affect the compositional
process. The ones that became most relevant to my work as a composer will be presented
in this section, and I will talk about them from my own experience.

At the end of this section, I will present my workflow when composing for extended
instruments, based on the ideas, terms, and steps I present next.

4.1 How the building and composition process affect each other

When thinking about extended instruments, the building of the instrument is part of the
composition process, as is the construction of the composition itself. They are both parts
of the process, but they influence it in different ways.

An important part of the process is ensuring that the instrument functions properly. This
includes a wide range of actions: the physical building (attachment of sensors to the
instrument), the programming (how to interpret the sensor data), and deciding which
sound processes will be controlled by the performer.
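One small, hypothetical example of the programming step: raw sensor streams are noisy, so a common first layer of interpretation is a smoothing filter before the data is mapped to any sound process. This exponential moving average is a sketch under that assumption, not code from my own instruments:

```python
# Exponential moving average as a first interpretation layer for sensor data.
# alpha near 1 follows the raw data closely; alpha near 0 smooths heavily
# (and responds more slowly to the performer's movement).

def smooth(readings, alpha=0.25):
    """Return a smoothed copy of a list of sensor readings."""
    out = []
    level = readings[0]
    for r in readings:
        level = alpha * r + (1 - alpha) * level
        out.append(round(level, 3))
    return out

noisy = [0.0, 1.0, 0.0, 1.0, 0.0]
print(smooth(noisy))   # the jitter is flattened toward the mean
```

The choice of `alpha` is itself a compositional decision: it trades timing controllability (responsiveness) against stability of the controlled parameter.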

It is essential to have the performer present during this process, as they are needed to
frequently test the instrument during rehearsals, in order to integrate the extension of the
instrument into the performer's idiomatic language and explore new possibilities on the
instrument.

I refer to these aspects of the composition as “physical distortion”. The term derives from
the high amount of physical manipulation and physical testing needed to work on the
instrument. During the construction of the extensions to the instrument, I am open to
new material (sound, movement, gestures, visuals) emerging which is different from what
I had initially imagined. As so much testing is involved between the composer, the
performer, and the instrument, there is a high amount of trial and error, which in turn
can generate new ideas to experiment with that were not planned at the beginning of the
process.

On the other hand, there is a more abstract process, which focuses on the organisation of
sound material and movements within a timeline. This decision-making is more related to
the composition of a musical structure, which generates the musical discourse, either
relying on pure musical ideas or influenced by extramusical ones. I call this part of the
composition process “purification”, as it tries to choose, organise, and clear up the new
material. The material is organised in relation to the initial compositional idea, to avoid
creating a piece with a lack of cohesion.

During the whole compositional process, I believe that the composer’s work is to find a
balance between the “physical distortion”, and the “purification”.

That means, for my own work, being open to the new ideas or musical material that can
arise from the rehearsals with the instrument, but with a focus on using them to reinforce
a musical structure or form, not only as a catalogue of possibilities.

4.2 Musical structure influenced by the choice of presenting the instrument

One of the common elements of my pieces that deal with extended instruments (as well
as with actuated or augmented instruments) is to gradually unveil the possibilities of the
instrument as the work progresses. This is done because of the need to create a reference
for the audience, making them aware of the material being produced by the new
instrument. In turn, this presents a framework which provokes the curiosity of the
audience about what is happening on stage, working as a bait to bring the musical
material to their attention.

However, we cannot assume that this method of approaching the structure is only aimed
at the audience. Presenting musical material and possibilities before developing them has
been a common way of starting compositions in traditional music. This method is also
applied here, to the material and possibilities of the instrument itself: presenting the
extended instrument before developing complex material with it benefits a good musical
discourse.

As Hugo Morales explained to me during a conversation, presenting the possibilities of the instrument little by little also has practical advantages. It can be useful when starting to rehearse the piece with performers who are not yet familiar with the extended instrument: they learn to read and understand the notation and the functionality of the instrument as they move forward through the piece. Once all the material has been presented and assimilated by the performer, it becomes easier for them to read passages where the techniques are combined and further developed.

4.3 Importance of the performer - Collaborative process

As previously mentioned, the performer has an important influence on the process of building the instrument, and therefore on the piece.

One of the main contributions of the performer to the piece is that they bring physicality to the sound processes. By controlling the parameters of the processes with their movements, the sound world of the piece moves away from an artificial expressiveness towards an organic behaviour. The performer's control of the processes could never be as precise as a digital automation of them. However, from my point of view, the imperfections created by human behaviour are precisely where the enrichment of the sounds occurs. This enrichment comes from adding a layer of non-artificial realism to the sounds, which brings them closer to analogue or traditional instruments and moves them away from the sound world of computer-generated sounds.

Furthermore, if the instrument is built to be comfortable for the performer and to allow some freedom of movement while playing, this helps create nuances in the sounds during the performance. That freedom allows the performer to make small adjustments to the sound depending on the situations they encounter in the live performance, and leaves some space for their own expression in every performance, making each one unique.

The compositional process has some degree of collaboration, as the presence of the performer at many steps of the composition and at the first performance creates a special link between the final piece and the performer who premiered it.

The development of the instrument is largely influenced by the feedback the performer gives to the composer they collaborate with. After the first performance, the instrument can be played by other performers, but since it was designed with and for a specific performer, it will never be as personal.

Some of the musicians working with these kinds of instruments give so much importance to the performer in the compositional process that they do not believe in a separation of the roles of composer, performer and technician.

In a conversation with Riccardo Marogna, he talked about how the role of the composer is gradually blurring with the roles of performer and technician. Riccardo is an Italian musician, improviser and composer currently based in The Hague (NL). His research focuses on developing an improvisational language in an electro-acoustic scenario, where electronic manipulations and acoustic sounds merge seamlessly in the continuum of the sonic gesture. He is active in many projects, playing a range of reed instruments (alto and bass clarinet, clarinet, tenor saxophone) combined with live electronics. He thinks extended instruments are pushing these roles to converge in a single musician who performs their own music and builds their personal instruments or extensions.

My opinion is that the separation of roles, together with the possibility of collaboration between composer and performer, can benefit the music, as it brings together the composer's experience in generating ideas and organising sounds in time with the performer's expertise in playing and understanding the instrument that will be extended.

Furthermore, the composer can take a different perspective on the performance of the piece and find a more objective view: listening and watching from the audience, without being involved in the performance, benefits the global vision of the piece.

This vision presents a composer who is responsible for the idea and for the decision-making process, but who also works practically during the building process, and sometimes even takes the role of the technician during the final performance. However, the composer still needs a performer to test the results at several steps of the process.

4.4 Working flow
Here I present a summary of the essential steps that are common to all of the pieces I
have been working on with extended instruments:

Figure 3. Flow chart of my own composition process

5. My own compositions

In this chapter, I will discuss two of my compositions as examples of different situations I encountered when working with extended instruments (with slightly different approaches).

5.1 Reflejos de agua, sal, y hierro (piano and extended voice)


Piece for extended voice (proximity sensor and accelerometer) and piano.


The idea

This piece is based on the word “Málaga”. By analysing the different sounds which compose that word, I came up with most of the melodic, harmonic, rhythmic and structural material.

The sound-world of the piece was inspired by this poem by Federico García Lorca:

Don’t ask me any questions. I’ve seen how things
that seek their way find their void instead.
There are spaces that ache in the uninhabited air
and in my eyes, completely dressed creatures -
no one naked there!

Poet in New York, Federico García Lorca

The voice is treated as the main character of the piece, and the piano as the
reflection of its discourse.

With that idea in mind, I worked on the electronics as an extension of the reflection of the voice in the piano. The electronic processes used were a convolution of the voice through all the piano strings, and a frequency modulation of the voice (the carrier was the voice, and the modulator was one of the fundamental pitches of the melodic development-structure). The electronics were played through a speaker placed inside the piano.
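
The two processes can be sketched in code. The following is a minimal Python/NumPy illustration, not the actual patch used in the piece: the modulation is realised here as ring modulation (multiplying the voice, as carrier, by a sine at the modulator pitch), and the piano-string convolution uses a synthetic impulse response standing in for a recording of the real strings. The 220 Hz modulator frequency, buffer lengths, and decay shape are all hypothetical values.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def modulate(voice, mod_freq):
    """Multiply the voice (carrier) by a sine at the modulator pitch."""
    t = np.arange(len(voice)) / SR
    return voice * np.sin(2 * np.pi * mod_freq * t)

def string_resonance(signal, impulse_response):
    """Convolve the signal with an impulse response of the piano strings."""
    wet = np.convolve(signal, impulse_response)
    return wet / np.max(np.abs(wet))  # normalise to avoid clipping

# Stand-ins: noise for the voice, a decaying noise burst for the strings'
# impulse response (in practice a real IR would be recorded from the piano).
voice = np.random.randn(SR // 4)                        # 0.25 s of "voice"
ir = np.exp(-np.linspace(0, 8, SR // 8)) * np.random.randn(SR // 8)

modulated = modulate(voice, mod_freq=220.0)   # hypothetical fundamental (A3)
wet = string_resonance(modulated, ir)         # the voice "reflected" in the strings
```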

The building & compositional process

This piece was almost entirely composed before the first rehearsal with both musicians (pianist and singer). In order to save the performers some time, I recorded some of the voice passages with the singer and recreated the piano part with virtual instruments, to test how they could be affected by the electronics. That allowed me to move on with the composition process without needing the performers to be present to test every change I made in the electronics (software) until I reached the result that I felt best fit the piece.

That option saved time but, when the first rehearsal came and I tested how the sensors worked with the movements of the singer, there were some aspects of the electronics that I had to modify to adapt them to more natural movements on stage.

While testing the singer's movements to control the convolution reverb parameters with the distance from the proximity sensor, I noticed two unexpected results: the proximity sensor was extremely sensitive to distance variation, and we were getting feedback from the singer's headmic.
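
One common remedy for an over-sensitive sensor of this kind is to low-pass filter the raw readings before mapping them to a parameter, and to clamp the mapping to a useful range. The sketch below is an illustration of that general technique, not the exact fix used in the piece; the 5-80 cm distance range and the smoothing coefficient are hypothetical.

```python
def smooth(readings, alpha=0.1):
    """One-pole low-pass filter: smaller alpha means heavier smoothing."""
    out, y = [], readings[0]
    for x in readings:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

def distance_to_wet(d_cm, d_min=5.0, d_max=80.0):
    """Map a proximity reading (cm) to a 0..1 reverb wet level, clamped."""
    norm = (d_cm - d_min) / (d_max - d_min)
    return min(1.0, max(0.0, norm))

# Jittery readings around 40 cm settle to a narrow band of wet levels
# instead of jumping with every small movement of the singer.
raw = [40, 55, 25, 42, 39, 41, 60, 38]
wet_levels = [distance_to_wet(d) for d in smooth(raw)]
```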

I worked with the singer on new movements and key positions on stage that would both suit the development of the electronics and be more comfortable for her, also taking into account that she should project her sound away from the piano to avoid feedback. These changes ended up becoming really important for the visual aspect of the piece.

Once the first problems with the electronics were solved, I had the feeling that I was missing some overall development of the electronics, so I decided to add an extra layer of control that was not operated by the performer, but could be automated or controlled by a sound technician. This control was related to the volume of the electronics (in relation to the acoustic sounds). It was one of the final decisions of the composition process, made during the last rehearsals before finishing the piece, when I realised that I was drifting too far from my initial idea, and that a simple overall crescendo of the electronics through the piece would give it more coherence and structural strength.

Recalling the flowchart of my compositional process with extended instruments, for this piece the steps of “purification” were essential. I was constantly trying to adapt the changes I had to make to the electronics and to the singer's movements (“physical distortion”) in the rehearsals to the initial idea. This does not mean that the rehearsals, the fixing of the instrument and the necessary changes had no impact on the piece; rather, in this case, I did not allow these changes to take precedence over the initial idea, which was closer to purification (based on harmonies and ideas of musical structure) than to the instrument and sensor technology itself (the physical distortion).

Conclusions

In this piece, the focus is on the initial idea and structure, and the extension of the voice was conceived as a way to enrich the sound world of the piece.

The result is a composition where the electronics contribute to the identity and sound world of a musical structure, supporting its directionality and enhancing the nuances of the singer's vocal interpretation and movements.

5.2 Miles away study (extended table percussion)

Piece for extended table percussion (quadraphonic amplification, solar panels, and bike
lights)

The idea

This piece began with the idea of building an extended instrument combining percussion and solar panels.

During a workshop with Hugo Morales, I found out that solar panels produce pulse waves when a blinking light is directed at them. The sound produced depends on how fast the light blinks: it can be a frequency our eyes can detect, such as 5 times per second, or a frequency our eyes cannot detect, such as 200 times per second, in which case we see it as a stable light. The same effect occurs in how we perceive the sound: sounds that we detect as rhythmical are below around 20 pulses per second, and sounds we perceive as stable tones (pitches) are above 20 pulses per second.
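
The pulse wave produced by a blinking light on a solar panel can be approximated as a square wave at the blink rate. A small NumPy sketch (the sample rate and the 50% duty cycle are my assumptions, not measurements of the actual panels) makes the rhythm/pitch threshold concrete:

```python
import numpy as np

SR = 44100  # audio sample rate in Hz

def blink_pulse(rate_hz, duration_s=1.0, duty=0.5):
    """Square wave approximating a solar panel lit by a light that
    blinks `rate_hz` times per second."""
    t = np.arange(int(SR * duration_s)) / SR
    phase = (t * rate_hz) % 1.0
    return np.where(phase < duty, 1.0, -1.0)

rhythm = blink_pulse(5.0)    # 5 blinks/s: heard as rhythm, seen as flicker
tone = blink_pulse(200.0)    # 200 blinks/s: heard as a pitch, seen as steady light
```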

Working with different bike-light patterns, I thought of giving a percussionist the possibility of working with combinations of the lights' rhythmic patterns and pitches as a layer to interact with.

Finally, the idea took shape as an extended table percussion: a table amplified from its four corners (with contact mics) to obtain quadraphonic spatialisation, gloves with solar panels attached to them, and four lights (two bike lights and two IR sensors) distributed across the corners of the table:

Figure 3. Parts of the extended table
Figure 4. Extended table

The building & compositional process

In this case, the first idea was more related to the idiosyncrasy of the instrument, so I recorded the different possibilities of the bike lights and IR sensors, and I started to think about ways to order the material and develop a framework for a score with moments open to improvisation (giving the performer space to interact naturally with the lights, as they sometimes produce unexpected outcomes).

Before starting the rehearsals, I planned an open score based on a first idea of the structure of the piece, and I built the first prototype of the extended table percussion (with the solar-panel gloves). At the first rehearsal, the performer (Gorka Catediano) and I started to test the possibilities of the instrument, and we decided to change some aspects of it to make it more idiomatic for the movements of the percussionist.

At the beginning, the lights pointed from the table towards the ceiling. We had to change this because the movement of turning the hand over to bring the solar panels to face the lights took too much time, and it was not a comfortable movement to repeat several times at a fast tempo.

So we went back to fixing the instrument and decided to have the lights point down at the table from above. That way, the performer could also perform crescendos and decrescendos with the light sounds by controlling the distance between the solar panel and the lights.

Rehearsals were based on short improvisations with the notated material, which created a framework for discovering new possibilities of the instrument, both the amplified table and the lights.

For example, while testing the table again with the new position of the lights, we realised that dragging the hand from one light to another produced a really interesting scratchy, continuous sound. This sound later became one of the main materials for unifying the sounds of the table with the sounds of the lights, when we introduced it into the form of the piece.

At the end of the workshop week, we had to present the result to an audience, but at that point it was still just an emerging idea.

The improvised aspect of the rehearsals created really interesting and fresh moments, and I did not want to lose that: I thought it should be part of the piece. I worked on a score that would allow other performers to play the piece, but that would leave some information and some moments open to the performer's interpretation.

When I came back to work on the piece after the first presentation, I modified the construction of the table to accentuate the separation between its four parts and thereby enhance the quadraphonic amplification:

Figure 5. Building process for the four-part separation of the extended table

I also decided to create a rhythmic structure to work as the framework into which the different materials of the extended table are included. For that structure, I used theory from the Karnatic rhythmic tradition to develop a rhythmic pattern that introduces, step by step, the different possibilities of the extended table.
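
As an illustration of one basic element of that tradition (the actual structure of the piece uses more than this): in Karnatic practice, each beat of a cycle can be subdivided into an equal number of pulses (a gati, commonly 3, 4, 5 or 7), and moving between gatis is one simple way to generate an evolving, progressively denser rhythmic framework. A hypothetical sketch:

```python
from fractions import Fraction

def gati_onsets(beats, gati):
    """Onset times (in beats) for one cycle of `beats` beats, each
    subdivided into `gati` equal pulses."""
    return [Fraction(b) + Fraction(k, gati)
            for b in range(beats) for k in range(gati)]

# A 4-beat cycle moving through gatis 3, 4 and 5: each pass through the
# cycle gets denser, introducing new material step by step.
plan = [gati_onsets(4, g) for g in (3, 4, 5)]
```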

During the new period of rehearsals, with the latest modifications to the instrument and the musical structure in place, I began the rehearsal loop again with a new performer (Guillermo Martín Viana), creating a new performer-composer-instrument feedback loop. It was interesting how the piece, which started with Gorka, now also incorporated adjustments to Guillermo's way of playing, ending up as a piece with a little of both.

Recalling the flowchart of my compositional process with extended instruments, for this piece the steps of “physical distortion” were very present. Everything was about the instrument at the beginning, so most of the work of balancing so much physicality and intuitive decision-making consisted of recapping the material and ordering it into a musical structure or framework.

Conclusions
In this piece, the initial idea was focused on the sound world of the new instrument rather than on a predetermined structure.

Much of the material was discovered while testing the instrument. When we found new material in rehearsal, we notated it and kept it in mind to include it in the final piece.

The approach of improvising with material to discover new possibilities of the instrument worked well, but it also made it difficult to ground all the information in a written piece, as I was used to giving very simple instructions so that the performer would simply follow the same material and structure every time. We did not notate every aspect of the piece, although we knew perfectly how it worked at every moment.

The result is a composition where the extended instrument is the central element, but presented within a clear musical structure with its own identity, which helps the performance be perceived not as a free improvisation but as a composition for a specific type of extended instrument, reinforcing its strengths.

6. Conclusions

One of the most important things I discovered while researching interactive computer music and extended instruments was how useful it was for me to clarify the context and the previous work done in the field. Knowing about earlier attempts at musical interaction with computers allowed me to better understand the advantages and disadvantages of each approach, and to focus my attention on the field that was closest to my aesthetics: extended instruments.

Extended instruments created a framework for the evolution of a new compositional aesthetic for me. Including the building process of the instrument in my compositional process forced me to rethink my workflow and how to adapt it to get the most out of the extended-instrument idea.

Three aspects that I reinforce in my compositional process when working with extended
instruments are the overall structure, the visuals, and the collaboration with the performer:

- The instrument is new to both performer and audience. Understanding this is important when thinking about the musical structure. Thanks to this research, I realised that creating a reference point for the sound world at the beginning of the piece, and revealing the possibilities of the instrument as the piece moves on, are two characteristics common to my pieces. Recognising these two common aspects makes it possible for me to consciously work with or against them in future compositions.

- Sensors need movement to be activated, and I choose to treat control of that movement as an opportunity to organise it in time and space, enhancing the visual aspect of the performance.

- The close collaboration with the performers has become crucial to my work, as they have become the intermediate link between me and the instrument. This collaboration benefits the pieces, as the music becomes more idiomatic in relation to the possibilities of the new instrument.

When analysing my own compositions with extended instruments, I became aware of how important the initial idea, or first step, is. It makes a difference whether the initial idea is based on a musical structure or on the new sound world created by an instrument. The compositional process is similar in both cases, but the weight of each step varies: either you sacrifice some possibilities of the instrument for the sake of a musical structure, or you sacrifice the initial idea of the structure to introduce new material from the possibilities of the instrument.

After this series of analyses and experiments, I can only say that there is still much more to be done with this subject. However, the work done and the conclusions reached in this paper have certainly laid the foundations for what I hope will be an inexhaustible source of inspiration and material.

For future works, I want to explore new possibilities with extended instruments in two different directions: controlling the instrument through physiological data of the performer (which I started to work on in my piece “Quantified performer”, using heartbeat and galvanic skin sensors), or going against a comfortable performer-instrument relation to create a point of friction between them on stage.

7. Bibliography

Chadabe, J. (1989) “Interactive Composing: An Overview”, in Curtis Roads (ed.), The Music Machine: Selected Readings from Computer Music Journal, Cambridge, Mass., MIT Press.

Hunt, J. (1991) “Interactive performance systems”, Contemporary Music Review, Vol. 6, pp. 131-138.

Kanga, Z., Schubert, A. (2016) “Flaws in the Body and How We Work with Them: An Interview with Composer Alexander Schubert”, Contemporary Music Review, Vol. 35, pp. 535-553.

Kanga, Z. (2016) “Gesture-Technology Interactions in Contemporary Music”, Contemporary Music Review, Vol. 35, pp. 375-378.

Machover, T., Chung, J. (1989) “Hyperinstruments: Musically intelligent and interactive performance and creativity systems”, in Proceedings of the International Computer Music Conference.

Ostertag, B. (2002) “Human Bodies, Computer Music”, Leonardo Music Journal, Vol. 12, pp. 11-14.

Overholt, D. (2011) “The Overtone Fiddle: an actuated acoustic instrument”, in Proceedings of the International Conference on New Interfaces for Musical Expression.

Overholt, D., Berdahl, E., Hamilton, E. (2011) “Advancements in actuated musical instruments”, Organised Sound, Vol. 16, no. 2, pp. 154-165.

Pressing, J. (1990) “Cybernetic Issues in Interactive Performance”, Computer Music Journal, Vol. 14, pp. 12-25.

Rowe, R. (1996) “Incrementally Improving Interactive Music Systems”, Contemporary Music Review, Vol. 13, Part 2, pp. 47-62.

Salter, C., Sellars, P. (2010) Entangled: Technology and the Transformation of Performance. The MIT Press. Kindle Edition.

Tanaka, A. (2000) “Musical performance practice on sensor-based instruments”, Trends in Gestural Control of Music, Vol. 13, pp. 389-405.

Turchet, L., McPherson, A., Fischione, C. (2016) “Smart Instruments: Towards an Ecosystem of Interoperable Devices Connecting Performers and Audiences”, in Proceedings of the International Computer Music Conference.

Walshe, J. (2016) “The New Discipline: a compositional manifesto”, Roscommon, January 2016, commissioned by Borealis.

Wanderley, M.M., Orio, N. (2002) “Evaluation of input devices for musical expression: Borrowing tools from HCI”, Computer Music Journal, Vol. 26, no. 3, pp. 62-76.

8. Appendix
8.1 Systems of Interactive Computer Music - instruments/interfaces

Live-coding
Main characteristics: live coders expose and rewire the innards of software while it generates improvised music and/or visuals; all code manipulation is projected for your pleasure.
Examples: played with computers/laptops.
References: https://toplap.org

New Musical Interfaces
Main characteristics: new hardware for controlling parameters in musical software.
Examples: The Hands (Michel Waisvisz).
References: STEIM; NIME (“International Conference on New Interfaces for Musical Expression”).

Digital and hybrid lutherie
Main characteristics: entirely self-contained instruments with their own idiomatic language.
Examples: electric guitar, theremin, Overtone Fiddle.
References: Léon Theremin; Stanford University.

Intelligent/interactive instruments
Main characteristics: a programmed system creating an “Artificial Intelligence” that responds live to the inputs.
Examples: Giacomo Lepri - InMuSIC.
References: ICMA (“International Computer Music Association”); STEIM.

Hyperinstruments
Main characteristics: using technology to give extra power and finesse to virtuosic performers; learnability; more complex output than input.
Examples: hypercello.
References: Tod Machover, 1986 - MIT Media Laboratory, Massachusetts, USA (https://www.media.mit.edu).

Expanded Instruments
Main characteristics: timbre affected by the space/location - use of different delays or reverbs; what is expanded is the time - time machines.
References: Pauline Oliveros (http://deeplistening.org/site/content/expandedmusicalinstruments).

Sensory Augmented Instruments
Main characteristics: the addition of sensors to familiar instruments to control electronic processes; sensors are used as extra controls separate from the main playing techniques, or using idiomatic gestures.
Examples: the augmented violin project.
References: IRCAM.

Extended instruments
Main characteristics: natural historical evolution of instruments; an interface for a classical instrument.
Examples: extended table percussion (part of my research).
References: Jeff Pressing.

Actuated instruments
Main characteristics: sound synthesis fed back into the acoustic structure of the instrument, applied directly to an instrument's vibrating elements.
Examples: LU (Hugo Morales) for electromechanically driven marimba and distortion.
References: Hugo Morales Murguia.

Smart Instruments
Examples: smart guitar.
References: Queen Mary University, London.

Alternative to physical control
Main characteristics: using information from the human body not produced by voluntary physical movement.
Examples: “Music for solo performer”.
References: Alvin Lucier.