satile to allow its usage in education and artistic realizations. The standalone GUI application complements this environment in that it not only provides a gentle introduction to the usage of swarm-based simulations for users who lack programming skills, but also offers a good starting point for artistic realizations, as it allows users to quickly sketch and experiment with customized swarms. Only the realization of rather exotic swarm simulations required an implementation in C++. But as these new behaviors become part of the simulation library, the limitations of a purely OSC-based approach gradually decrease. The software and documentation that can be accessed via the project's website [1] keep up with these improvements.

Our teaching experience proved that the standalone GUI application is very helpful in conveying a practical understanding of the principles and capabilities of swarm simulations, as it enables a hands-on approach in which students can immediately experience the effects of changing parameters. Throughout the course, most of the students kept working solely with the standalone GUI application and did not consider modifying the simulations more thoroughly at a lower level of abstraction. They rather focused on the design of their audiovisual Max/MSP patches and would only return to experimentation with the swarm simulation itself when their envisioned result could not be achieved by modifications to those patches alone. It remains to be seen whether prolonged use of the simulation tools will lead the students to integrate swarm simulation into their works on a more fundamental level.

As for the authors themselves, the swarm simulation tools have proven to be extremely inspiring and useful for the realization of both musical and artistic works. The tools' flexibility has allowed us to transfer a wide variety of artistic and musical ideas into swarm-based approaches. Furthermore, their OSC-based real-time configuration and control capabilities have allowed us to creatively exploit the swarms' high level of responsiveness both in the creation process and in final performance and exhibition situations. In their current state, the simulation tools extensively support the manual design and refinement of swarm simulations and their communication with musical and visual processes. However, they do not provide functionality for automated forms of configuration and modification via mechanisms of adaptation such as evolution or learning. Also, the tools fall somewhat short in the creation of simulations that gradually change over extended periods of time and thereby provide the opportunity to experiment with emergent macro-scale structures in musical compositions and visual designs. Accordingly, we plan to include adaptive mechanisms in future versions of the simulation tools.

Figure 4. Interaction with the Flowspace installation. Exhibition Milieux Sonores, Gray Area Foundation for the Arts, San Francisco, 2010.

6. REFERENCES

[1] http://swarms.cc.

[2] D. Bisig and M. Neukom, "Swarm based computer music - towards a repertory of strategies," in Proceedings of the Generative Art Conference, Milano, Italy, 2008.

[3] D. Bisig, M. Neukom, and J. Flury, "Interactive swarm orchestra - a generic programming environment for swarm based computer music," in Proceedings of the International Computer Music Conference, Belfast, Ireland, 2008.

[4] D. Bisig, J. Schacher, and M. Neukom, "Flowspace - a hybrid ecosystem," in Proceedings of the New Interfaces for Musical Expression Conference, Oslo, Norway, 2011.

[5] D. Bisig and T. Unemi, "Cycles - blending natural and artificial properties in a generative artwork," in Proceedings of the Generative Art Conference, Milano, Italy, 2010.

[6] D. Bisig and T. Unemi, "Swarms on stage - swarm simulations for dance performance," in Proceedings of the Generative Art Conference, Milano, Italy, 2009.

[7] T. Blackwell and P. Bentley, "Improvised music with swarms," in Proceedings of the 2002 Congress on Evolutionary Computation, 2002.

[8] J. E. Boyd, G. Hushlak, and C. J. Jacob, "Swarmart: interactive art from swarm intelligence," in Proceedings of the 12th Annual ACM International Conference on Multimedia, 2004.

[9] J. Schacher, D. Bisig, and M. Neukom, "Composing with swarm algorithms - creating interactive audiovisual pieces using flocking behavior," in Proceedings of the International Computer Music Conference, Huddersfield, England, 2011.

[10] D. Shiffman, "Swarm," SIGGRAPH Emerging Technologies Exhibition, 2004.

[11] V. Ramos, "Self-organizing the abstract: Canvas as a swarm habitat for collective memory, perception and cooperative distributed creativity," CoRR, 2004.

SOURCENODE: A NETWORK SOURCED APPROACH TO NETWORK MUSIC PERFORMANCE (NMP)

Robin Renwick
Sonic Arts Research Centre (SARC)
Queen's University Belfast

ABSTRACT

This paper outlines a Network Sourced Approach (NSA) to Network Music Performance (NMP). The NSA is governed through the software application SourceNode, which can be seen as a working example of an NMP enabler. The core focus of the NSA is that a master node sources audio content from designated slave nodes, which are synchronised together by the master node. The SourceNode project is a specific type of NMP: a nodal-based performance environment that has been network synchronised through time signature, start/stop, loop point and tempo control. The NMP is assembled in a star formation. This paper addresses the characteristics of a star topology within a synchronised environment and describes a standalone software application implemented for this type of nodal-based performance. The study outlines the implications, advantages and issues faced when implementing such a nodal-based framework, and also offers a formal example of this NMP structure in practice, through the SourceNode project.
1. INTRODUCTION

"it is possible to stop seeing music as singular, as a street between point a and point b, and to start seeing music as multiple, as landscape, as atmosphere, as an n-dimensional field of opportunities" [23].

Network Music Performance is a musical practice that has been enabled through, included with, or integrated by computer networking technologies, protocols, systems and topologies that have been incorporated into the design, implementation or architecture of either the musical system or the sonic output.

A Network Sourced Approach is a distinct form of Network Music Performance; an approach in which the core focus is the sense of sourcing, or gathering, sonic content from a delineated network. The sourcing behaviour, and its implications for NMP, may be seen as an extension of previous NMP approaches, as outlined by Alain Renaud in the Frequencyliator project [15] and Georg Hajdu in the Quintet.net project [10]. The central characteristic of such practice is that nodes within a network perform individual sonic content. The architecture establishes a framework that ensures this individual audio content is no longer mutually exclusive. The nodes are governed with respect to tempo, time signature, loop point and start/stop control. Indeed, they may now be seen as "elements of a greater whole" [10]. The NSA offers a topology that is extremely flexible, allowing for expansion or contraction of the framework without needing to change the network configuration. A master node can implement certain rules governing the audio content that it receives, but the number of streams, or nodes, from which it may source content is unequivocally open.

The SourceNode project is a specific type of NMP, enabled through the intricate use of current technologies derived simultaneously from the fields of music and computer networking. The project seeks to reformulate past conceptualisations of NMP models and architectures into a framework that is distinct, unique and evolving in both context and nature.
2. AN INTRODUCTION TO NETWORK MUSIC PERFORMANCE (NMP)

Network Music Performance has always stood at the intersection of technology and art, and will continue to do so. Unique and interesting art forms have emerged from these systems; art forms that evolve as technology improves, continuing to ask questions of our modern understanding of the musical art form. An outlook which envisions intermedia or hybrid spaces being utilised by musicians standing at the cross section of networking, computer music and internet technologies is far from imaginary [19][25]. Similarly, collaborative spaces which harness the powers of synergy will always have networking technologies, architectures and frameworks at their centre [17][20][22][24].

Darin Barney offers a very simple, yet detailed, explanation of what a network is, and of what is needed within a structure for it to be termed a networked organisation: networks are comprised of three main elements: nodes, ties and flows. A


node is a distinct point connected to at least one other point, though it often simultaneously acts as a point of connection between two or more other points. A tie connects one node to another. Flows are what pass between nodes along ties [1]. A diagram of a mesh network is shown below (Fig.1), highlighting nodes that are connected together along ties. The diagram displays a complex network system, perhaps one that is more complex than most NMP systems, but it is apt to highlight how a network consists, fundamentally, of otherwise disparate nodes that are connected together.

Fig.1. Network Diagram [1]

3. THE SOURCENODE PROJECT

This section of the paper outlines the exact framework of the SourceNode application, its motives, and the software used in the implementation. The section also discusses issues faced, such as MIDI-Clock Drift, as well as latency issues arising from the JackTrip audio streaming software. A comparison is made between the MIDI-Clock Drift apparent when using the SourceNode application and measurements of MIDI-Clock Drift when using the alternative Apple Network MIDI Utility, a software application built into the modern Apple operating system (OS X).

3.1. Architecture

Manuel Castells believes that society, organised into network structures, may leverage its degree of influence through increased collaboration, information sharing and organisation [5]. In a musical context, the idea of networks becomes increasingly interesting. A musical system may be created with networked motives, seeking to leverage the type, reach, experience and influence of the musical practice. In this context the relevance of dramaturgy should be noted, with the SourceNode project outlining a sense of "Projected Dramaturgy", where the performance is "designed primarily for one node within which the remote contributions are projected" [14].

The SourceNode project seeks to connect two nodes, designated slave nodes, with a third node, which acts as a master. The software applications attempt to manage the flow of audio between them, with the goal of enabling a synchronous, structured and musically coherent communication. Dante Tanzi outlines the important role that flows in networks have on musical expression and musical context. He addresses the idea of co-authorship; NMPs allow a certain level of community-based production, centred in and around a network [15]. There is a move away from singular creativity towards more global, plural and community-based methods, where flows between nodes are the basis for a new understanding of musical collaboration: "on-line technologies are gradually modifying the relationships between the authors, music and audiences" [21].


The SourceNode project is a specific type of NMP, based on a nodal framework, structured in a way that is congruous with a star-shaped network topology. A star-shaped network is one that is centralised around a hub [15][24]. The nodes in the network are connected to the central node, or hub. If the central node were to cease to exist, the network would no longer survive; the central node is sometimes called the master node. This implementation of a centralised hub, or master node, draws welcome parallels with the pioneering NMP work of The Hub [8].

Fig.2. A Star Shaped Network [15]

In this instance, the star-shaped network will consist of three nodes: a master node and two slave nodes. The master node will share certain structural musical information with the slave nodes, such as tempo (MIDI-Clock), time signature information, loop points and start/stop information. This information, or data, will be created at the master node and then shared, through the network, with each of the slave nodes. This shared information will ensure that the three nodes are synced together, allowing a musical performance inherently structured with, and through, the network to be created. The slave nodes perform their individual musical parts, working within the structures defined by the master node. The audio the slave nodes create will be transferred across the network to the master node. The streams of audio that the master node receives will be merged at this point. The master node will be in charge of performing the piece of music, implementing an interpretation, reaction and performance stage. In effect, this gives the NMP specific two-stage performance and modulation levels.

The first level is performed at the point of the slave nodes, where mutually independent offerings are created and then sent over the network to the master node. The second performance stage takes place at the master node. At the master node, the streams from each of the slave nodes are no longer mutually exclusive. The streams of audio are reinterpreted and combined into a coherent and musical whole.

It is important at this point to recognise a key facet of the architecture of this project: the star network, in this case, is based in a Local Area Network (LAN) system. LANs are quite common within network structures that are based in a star formation [15].

Fig.3. SourceNode Architecture

3.2. Software Implementation

The SourceNode project, fundamentally, consists of four core software enablers. The first is that each node runs a Digital Audio Workstation (DAW). This software allows each node to perform its musical piece as well as to interpret the networked structural data, such as tempo, time signature and loop points. In this instance the DAW being used is Ableton LIVE (www.ableton.com).

Fig.4. Ableton Software Interface

The second stage of software implementation is the SourceNode application. The SourceNode application was designed using Cycling '74's MAX/MSP programming software (www.cycling74.com). The MAX/MSP patch controls a number of key enablers within the project. Firstly, the relevant musical information created at the master node, such as tempo, MIDI-Clock, time signature information and loop points, has to be interpreted and then transferred across the network. MAX/MSP allows a gathering of this information at one point. This data is then packed and transferred across the network. In this instance, the User Datagram Protocol (UDP) is used. This ensures that the data is transferred as quickly as possible to the slave nodes, which is crucially important with respect to information such as MIDI-Clock.

Fig.5. SourceNode Software Interfaces

The third stage of software implementation is the JackOSX (www.jackosx.com) internal audio router. This piece of software allows for the internal transfer of audio within a computer. At the slave nodes, JackOSX is used to route the audio internally from Ableton LIVE to the JackTrip software. At the master node, JackOSX allows the user to route the separate slave node streams that are received by the JackTrip software to distinct, user-specified audio channels within Ableton LIVE. This feature is a crucial enabler for the successful performance of the SourceNode project, as the master node can treat each slave node's audio stream independently, allowing different performance processes to be integrated into each.

The fourth software stage is the JackTrip audio streaming software (www.code.google.com/p/jacktrip/), run in conjunction with the aforementioned JackOSX software.
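In outline, the packing and UDP transfer of the structural data described above can be sketched as follows. This is a minimal sketch only, assuming a JSON encoding; the field names and port numbers are illustrative and are not those of the actual SourceNode Max/MSP patch, which defines its own message format.

```python
import json
import socket

def pack_sync(tempo_bpm, time_sig, loop_points, running):
    """Encode the master node's structural data as a UDP-ready datagram.

    Field names are hypothetical stand-ins for the SourceNode message format.
    """
    message = {
        "tempo": tempo_bpm,    # beats per minute; MIDI-Clock rate derives from this
        "time_sig": time_sig,  # e.g. [4, 4]
        "loop": loop_points,   # [start_beat, end_beat]
        "running": running,    # start/stop flag
    }
    return json.dumps(message).encode("utf-8")

def unpack_sync(datagram):
    """Decode a datagram back into structural data at a slave node."""
    return json.loads(datagram.decode("utf-8"))

def broadcast_sync(datagram, slave_addresses):
    """Send the same datagram to every slave node.

    UDP is connectionless and avoids TCP retransmission delays, which is why
    it suits time-critical data such as MIDI-Clock.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for addr in slave_addresses:
            sock.sendto(datagram, addr)
    finally:
        sock.close()

if __name__ == "__main__":
    data = pack_sync(120.0, [4, 4], [1, 16], True)
    print(unpack_sync(data)["tempo"])  # 120.0
```

The trade-off implied by the paper's choice of UDP is visible here: delivery is fast but unguaranteed, which is exactly why the non-arrival of MIDI-Clock messages is treated as a drift error later in the paper.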


The JackTrip software is in charge of transferring audio from the slave nodes to the master. The choice to use the JackTrip software was a relatively simple one. As the SourceNode project seeks to implement a type of Realtime Interactive Approach (RIA), it is imperative that the audio is transferred over the network as fast as possible, and at least at CD-quality audio standard [4]. JackTrip allows for such a transfer.
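The CD-quality requirement translates into a concrete bandwidth figure. A quick back-of-the-envelope calculation (assuming uncompressed 16-bit stereo PCM at 44.1 kHz and ignoring packet overhead, which JackTrip adds per datagram) shows why a LAN comfortably carries several such streams:

```python
SAMPLE_RATE = 44_100   # Hz, CD standard
BIT_DEPTH = 16         # bits per sample, CD standard
CHANNELS = 2           # stereo

# Raw payload bit rate for one uncompressed CD-quality stream.
bits_per_second = SAMPLE_RATE * BIT_DEPTH * CHANNELS
mbps = bits_per_second / 1_000_000

print(f"{mbps:.3f} Mbit/s per stream")  # 1.411 Mbit/s per stream
```

Two slave streams therefore require under 3 Mbit/s of payload, a small fraction of even a 100 Mbit/s LAN, so uncompressed transfer is the sensible choice when low latency matters more than bandwidth.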
3.3. Network Sourced Approach

The SourceNode project is not inherently designed to replicate a traditional performance space, but rather to utilise the network as a performance enabler. For further delineation of this, we may turn to Alain Renaud and his definition of the Performance in Network (PIN) approach. Renaud describes the implications of such an architecture: the performance becomes "network mediated" [15]. The SourceNode project seeks to use the network in a certain way, both as a mediator and as an enabler. The slave nodes may be seen as being in the network, whereas the master node is said to be using the network, mainly as its source for musical material. The audio that the master sources is structured in a certain way, allowing a network sync to be created.
With this in mind, a better description of this type of framework is a Network Sourced Approach (NSA). An NSA ensures that the topology is configured in a manner which can be musically understood as a conductor, permitting multiple contributions to be synchronised. The network is used as a space in which information, or data, can be shared and utilised at two distinct levels. At one level, data is shared by the master node with the slave nodes, which then interpret this data to create what may be seen as a traditional musical structure. At the second level, information, this time audio, is shared over the network from the slave nodes to the master node. The master node uses this information as a musical source from which it creates a coherent musical whole. The slave nodes act as musical objects, the master node as both conductor and performer. Most importantly, the network acts as the enabler.
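The two levels just described can be summarised in a minimal in-memory sketch of the star topology: structural data flows master to slaves, audio flows slaves to master. The class and method names are illustrative only and are not part of the SourceNode software.

```python
class SlaveNode:
    """Performs its individual part within the structure defined by the master."""
    def __init__(self, name):
        self.name = name
        self.structure = None  # tempo, time signature, etc., set by the master

    def receive_structure(self, structure):
        # Level one: structural data arrives from the master.
        self.structure = structure

    def render_audio(self):
        # Stand-in for a real audio buffer produced at this node.
        return f"{self.name}-audio@{self.structure['tempo']}bpm"

class MasterNode:
    """Conductor and performer: shares structure, then sources audio."""
    def __init__(self, slaves):
        self.slaves = slaves  # the star is open: any number of slaves may join

    def share_structure(self, structure):
        for slave in self.slaves:
            slave.receive_structure(structure)

    def source_audio(self):
        # Level two: audio flows from the slaves back to the master,
        # where it is combined into a coherent whole.
        return [slave.render_audio() for slave in self.slaves]

master = MasterNode([SlaveNode("slave1"), SlaveNode("slave2")])
master.share_structure({"tempo": 120, "time_sig": (4, 4)})
print(master.source_audio())
```

Note that adding a third slave changes only the list passed to the master, not the topology itself, which is the expansion-without-reconfiguration property claimed for the NSA.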
3.4. Technological Implications - MIDI Drift
The key driver of the SourceNode project is the ability to transfer MIDI-Clock from the master node to the slave nodes as quickly and as accurately as possible. The successful transfer of accurate MIDI-Clock will define whether or not the SourceNode project is possible and, more importantly, whether the musical offering derived from the master node is musically coherent. An unsuccessful transfer of MIDI-Clock data will ensure that the two sources of audio that the master node receives become asynchronous, affecting the musical integrity of the NMP as a whole.

For the purposes of this paper, there exist two distinct ways to transfer MIDI information and data over a network. The first is the utilisation of the Apple Network MIDI Utility, built into Mac OS X. The second uses the purpose-built MAX/MSP software application, SourceNode. To outline a comparison, tests have been implemented to highlight the merits of each, and this is where the concept of MIDI Drift is introduced. The term MIDI Drift is defined as: the rate at which errors occur in the MIDI-Clock network transfer, with respect to the late, early or asynchronous timing of MIDI-Clock messages, including the extremely disruptive non-arrival of MIDI-Clock messages. By comparing the drift rates, this paper can go some way to outlining the success or failure of the SourceNode application.

3.4.1. MIDI Drift Measurement

To enable a comparison between the inbuilt Apple Network MIDI Utility and the SourceNode software application, a series of tests were run in order to gather measurements of MIDI Drift for both pieces of software.¹ It was found that the Apple Network MIDI Utility provided a measurably more accurate and stable connection than the SourceNode architecture. However, it must also be stated that the inaccuracy of the SourceNode software was not at destabilising or disruptive levels.

¹ The master node created a MIDI-Clock source which controlled a metronome within the master node DAW (Ableton). The MIDI-Clock messages were transferred across the network to the slave nodes, where they generated a metronome signal within the respective DAWs (Ableton). Both metronome signals were sent as audio information by cable to a third, recording DAW (Pro Tools). The audio signals were compared. Standard settings were used on all applications, with buffer sizes as follows: Ableton: 512; MAX/MSP: 512; Pro Tools: 512. The MIDI Drift figure represents the compared time differences that exist at each slave node with respect to the synchronisation process. These time differences can be seen as the fluctuations caused by the network and protocols in the transferral of the synchronisation MIDI-Clock messages from the master node to each individual slave node.

Firstly, it will be beneficial to discuss the Apple Network MIDI Utility. It was found that the average drift over a series of three tests, each of five minutes' duration, was 0.51 milliseconds (ms). An example of the drift between the nodes is found in the illustration below, Fig.6. The two tracks shown (green and purple) represent the MIDI-Clock at the two slave nodes. The drift measurements are taken to be the fluctuations between the two slave MIDI-Clocks, generated with respect to the MIDI-Clock received from the master node. The best case scenario of the three tests offered an average of 0.36 milliseconds (ms), which is certainly low enough to be below the levels of human detection, highlighting the accuracy and validity of syncing through this method.

Fig.6. MIDI Drift with Apple Network Utility

It is now beneficial to look at the MIDI Drift measurements while using the SourceNode software application. The expected figures should be higher, due to the inherent and cumulative nature of using MAX/MSP software coupled with UDP and IP networking tools to ensure a network transfer of information. The key question is whether the drift is high enough to cause issues with the successful implementation of the NMP.

Fig. 7. MIDI Drift with SourceNode Application

Again, three tests, each of five minutes' duration, were completed. The average drift figure over the three tests was 8.51 milliseconds (ms). This may, at first glance, seem very high, especially when compared to the average while using the Apple Network MIDI Utility. It may be wise to put this drift figure into some sort of context, especially a musical one. Alexander Carot outlines the latency figures which are apparent when a piano player plays his instrument: the time elapsed between pressing a key and the corresponding note onset is about 100 ms for quiet notes and around 30 ms for staccato, forte notes [3]. Piano players have the ability to perform with these types of latencies apparent in their own musical system, so there is no reason to assume that time differentials of around 8.51 ms cannot be dealt with by performers in the SourceNode project, even if the instruments that they are using are radically different from a traditional piano. Indeed, the best test case of drift while using the SourceNode system was shown to be 3.62 milliseconds.

A direct, visual comparison can now be made between the best case, as tested with the Apple Network MIDI Utility application, and the best case as tested with the SourceNode application.

Fig. 8. Apple Network MIDI Utility - MIDI Drift

Fig. 9. SourceNode Software - MIDI Drift
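The drift statistic described above can in principle be reproduced from the recorded metronome onsets: pair up corresponding clock onsets from the two slave recordings and average their absolute time differences. The sketch below shows the comparison; the onset values are illustrative only, not the measured data.

```python
def midi_drift_ms(onsets_a, onsets_b):
    """Mean absolute time difference between corresponding clock onsets
    recorded at two slave nodes, in milliseconds."""
    if len(onsets_a) != len(onsets_b):
        raise ValueError("onset lists must pair up one-to-one")
    diffs = [abs(a - b) for a, b in zip(onsets_a, onsets_b)]
    return sum(diffs) / len(diffs)

# Illustrative onsets (ms): slave two lags slave one slightly on each beat.
slave_one = [0.0, 500.0, 1000.0, 1500.0]
slave_two = [0.4, 500.6, 1000.3, 1500.7]
print(round(midi_drift_ms(slave_one, slave_two), 2))  # 0.5
```

A dropped MIDI-Clock message would break the one-to-one pairing, which is why the definition of MIDI Drift singles out non-arrival as the most disruptive error.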

3.5. Technological Implications - Audio Streaming Latency

The second key driver with regards to the successful implementation of the SourceNode project is the streaming of the audio from the slave nodes to the master node. The Local Area Network (LAN) ensured that, theoretically, network latency was kept to a minimum, as delays due to the information having to physically travel long distances were not a consideration.
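Even on a LAN, latency is bounded below by the audio buffer sizes used in the setup (512 samples in each application, per the measurement footnote). A short calculation, assuming a 44.1 kHz sample rate (the session rate is not stated in the paper), gives the delay contributed by each buffering stage:

```python
SAMPLE_RATE = 44_100  # Hz, assumed; the session sample rate is not stated
BUFFER_SIZE = 512     # samples, per the test configuration

# Each buffering stage (DAW, routing, streaming) adds at least one buffer of delay.
ms_per_buffer = BUFFER_SIZE / SAMPLE_RATE * 1000
print(f"{ms_per_buffer:.2f} ms per 512-sample buffer")  # 11.61 ms per 512-sample buffer
```

With several such stages chained between slave DAW, JackOSX, JackTrip and the master DAW, buffering alone can easily dominate the near-zero propagation delay of the LAN.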
3.5.1. Audio Streaming Latency Measurement

The audio streaming within the SourceNode project was implemented through the use of two software enablers. The first software enabler was the JackOSX internal audio routing software system. This software allowed audio to be routed internally at both the master and slave nodes. At the slave nodes, audio had to be routed from the Ableton LIVE software into JackTrip. At the master node, the audio had to be routed from the JackTrip software into Ableton LIVE, including a distinct separation of the audio streams, so that each slave channel could be routed into separate

_305

The JackTrip software is in charge of transferring audio from the slave nodes to the master. The choice to use the JackTrip software was a relatively simple one. As the SourceNode project seeks to implement a type of Realtime Interactive Approach (RIA), it is imperative that the audio is transferred over the network as fast as possible and at least at CD-quality audio standard [4]. JackTrip allows for such a transfer.
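To put the CD-quality requirement in perspective, the uncompressed data rate of a PCM stream follows directly from sample rate and bit depth. The sketch below is illustrative only: the helper name is ours, and the 44.1 kHz/16-bit figures are the standard CD-audio parameters, not measurements from the SourceNode network.

```python
def pcm_bitrate_kbps(sample_rate: int, bit_depth: int, channels: int = 1) -> float:
    """Uncompressed PCM data rate in kilobits per second."""
    return sample_rate * bit_depth * channels / 1000.0

# CD quality: 44.1 kHz, 16-bit. One mono channel is ~705.6 kbps,
# so each slave node's stream needs roughly 0.7 Mbps upstream
# before any JackTrip packet overhead is counted.
print(pcm_bitrate_kbps(44100, 16))
```

On a LAN this bandwidth is trivially available, which is part of why Network Latency, rather than throughput, is the binding constraint discussed later.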
3.3. Network Sourced Approach
The SourceNode project is not inherently designed to replicate a traditional performance space, but rather to utilise the network as a performance enabler. For further delineation of this, we may turn to Alain Renaud and his definition of the Performance in Network (PIN) approach. Renaud describes the implications of such an architecture: the performance becomes network mediated [15]. The SourceNode project seeks to use the network both as a mediator and as an enabler. The slave nodes may be seen as being in the network, whereas the master node is said to be using the network, mainly as its source for musical material. The audio that the master sources is structured in such a way that a network sync can be created.
With this in mind, a better description of this type of framework is a Network Sourced Approach (NSA). An NSA ensures that the topology is configured in a manner which can be musically understood as a conductor, permitting multiple contributions to be synchronised. The network is used as a space in which information, or data, can be shared and utilised at two distinct levels. At one level, data is shared by the master node with the slave nodes, who then interpret this data to create what may be seen as a traditional musical structure. At the second level, information, this time audio, is shared over the network from the slave nodes to the master node. The master node uses this information as a musical source from which it creates a coherent musical whole. The slave nodes act as musical objects, the master node as both conductor and performer. Most importantly, the network acts as the enabler.
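The star topology just described can be sketched as a master broadcasting MIDI-Clock ticks to each slave over UDP. The sketch below is a simplified illustration, not the paper's implementation: the actual SourceNode application is built in MAX/MSP, and the slave addresses and port here are hypothetical. The 0xF8 byte is the standard MIDI real-time Timing Clock message, sent 24 times per quarter note.

```python
import socket
import time

MIDI_CLOCK = bytes([0xF8])   # MIDI real-time Timing Clock message
PPQN = 24                    # MIDI-Clock resolution: 24 pulses per quarter note

# Hypothetical slave addresses on the LAN.
SLAVES = [("192.168.0.11", 9000), ("192.168.0.12", 9000)]

def tick_interval_s(bpm: float) -> float:
    """Seconds between successive MIDI-Clock ticks at a given tempo."""
    return 60.0 / (bpm * PPQN)

def broadcast_clock(bpm: float, n_ticks: int) -> None:
    """Master node: send n_ticks clock messages to every slave."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = tick_interval_s(bpm)
    for _ in range(n_ticks):
        for addr in SLAVES:
            sock.sendto(MIDI_CLOCK, addr)   # UDP: fast, but unacknowledged
        time.sleep(interval)
    sock.close()

# At 120 BPM a tick is due every ~20.8 ms.
print(round(tick_interval_s(120) * 1000, 1))
```

Because UDP delivery is unacknowledged, late, early or lost ticks are exactly the errors that the MIDI Drift measure in the next section is designed to quantify.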
3.4. Technological Implications - MIDI Drift
The key driver of the SourceNode project is the ability to transfer MIDI-Clock from the master node to the slave nodes as quickly and as accurately as possible. The successful transfer of accurate MIDI-Clock will define whether or not the SourceNode project is possible and, more importantly, whether the musical offering derived from the master node is musically coherent. An unsuccessful transfer of MIDI-Clock data will cause the two sources of audio that the master node receives to become asynchronous, affecting the musical integrity of the NMP as a whole.
For the purposes of this paper, there exist two distinct ways to transfer MIDI information and data over a network. The first is the use of the Apple Network MIDI Utility, built into Mac OS X. The second is the purpose-built MAX/MSP software application, SourceNode. To compare the two, tests were implemented to highlight the merits of each, and this is where the concept of MIDI Drift is introduced. The term MIDI Drift is defined as the rate at which errors occur in the MIDI-Clock network transfer, with respect to the late, early or asynchronous timing of MIDI-Clock messages, including the extremely disruptive non-arrival of MIDI-Clock messages. By comparing the drift rates, this paper can go some way towards establishing the success or failure of the SourceNode application.
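One way to operationalise this definition is to compare each received MIDI-Clock timestamp against the ideal tick grid implied by the tempo. A minimal sketch, using hypothetical timestamp lists rather than the paper's measured data:

```python
def midi_drift(received_ms, interval_ms):
    """Per-tick drift: received timestamp minus the ideal tick time.
    A tick that never arrives is simply absent from received_ms."""
    return [t - i * interval_ms for i, t in enumerate(received_ms)]

def average_abs_drift(received_ms, interval_ms):
    """Mean absolute deviation from the ideal grid, in milliseconds."""
    drifts = midi_drift(received_ms, interval_ms)
    return sum(abs(d) for d in drifts) / len(drifts)

# Hypothetical arrivals around an ideal 20 ms grid (0, 20, 40, 60 ms):
arrivals = [0.0, 21.0, 39.5, 61.5]
print(round(average_abs_drift(arrivals, 20.0), 2))
```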


Fig. 6. MIDI Drift with Apple Network Utility

It would now be beneficial to look at the MIDI Drift measurements made while using the SourceNode software application. The expected figures should be higher, due to the inherent and cumulative nature of using MAX/MSP software coupled with UDP and IP networking tools to effect a network transfer of information. The key question is whether the drift is high enough to cause issues with the successful implementation of the NMP.

Fig. 8. Apple Network MIDI Utility - MIDI Drift

3.4.1. MIDI Drift Measurement
To enable a comparison between the inbuilt Apple Network MIDI Utility and the SourceNode software application, a series of tests was run to gather MIDI Drift measurements for both pieces of software.1 It was found that the Apple Network MIDI Utility provided a measurably more accurate and stable connection than the SourceNode architecture. However, it must also be stated that the inaccuracy of the SourceNode software was not at destabilising or disruptive levels.
Firstly, it will be beneficial to discuss the Apple Network MIDI Utility. It was found that the average drift over a series of three tests, each of five minutes' duration, was 0.51 milliseconds (ms). An example of the drift between the nodes is shown in Fig. 6. The two tracks shown (green and purple) represent the MIDI-Clock at the two slave nodes. The drift measurements are taken to be the fluctuations between the two slave MIDI-Clocks, generated with respect to the MIDI-Clock received from the

1 The master node created a MIDI-Clock source which controlled a metronome within the master node DAW (Ableton). The MIDI-Clock messages were transferred across the network to the slave nodes, where they generated a metronome signal within the respective DAWs (Ableton). Both metronome signals were sent as audio by cable to a third, recording DAW (PRO TOOLS), and the audio signals were compared. Standard settings were used on all applications, with buffer sizes as follows: Ableton: 512; MAX/MSP: 512; Pro Tools: 512. The MIDI Drift figure represents the compared time differences that exist at each slave node with respect to the synchronisation process. These time differences can be seen as the fluctuations caused by the network and protocols in the transferral of the synchronisation MIDI-Clock messages from the master node to each individual slave node.
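The comparison described in this footnote amounts to taking pairwise differences between the metronome click onsets recorded from the two slave nodes. A sketch with hypothetical onset lists (the values are invented for illustration, not the test data):

```python
def slave_drift_ms(onsets_a, onsets_b):
    """Per-beat timing difference between two slave metronome recordings.

    onsets_a, onsets_b: click onset times (ms) for the same beats,
    as captured by the recording DAW."""
    return [abs(a - b) for a, b in zip(onsets_a, onsets_b)]

def average_drift_ms(onsets_a, onsets_b):
    """Average absolute drift between the two slave clocks."""
    diffs = slave_drift_ms(onsets_a, onsets_b)
    return sum(diffs) / len(diffs)

# Hypothetical onsets for four beats at the two slave nodes:
slave1 = [0.0, 500.2, 1000.1, 1500.4]
slave2 = [0.3, 500.9, 1000.8, 1499.9]
print(round(average_drift_ms(slave1, slave2), 2))
```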


master node. The best-case scenario of the three tests offered an average of 0.36 milliseconds (ms), which is certainly low enough to be below the levels of human detection, highlighting the accuracy and validity of syncing through this method.

Fig. 7. MIDI Drift with SourceNode Application

Again, three tests, each of five minutes' duration, were completed. The average drift figure over the three tests was 8.51 milliseconds (ms). This may, at first glance, seem very high, especially when compared with the average while using the Apple Network MIDI Utility. It is worth putting this drift figure into context, especially a musical one. Alexander Carot outlines the latency figures apparent when a piano player plays his instrument: the time elapsed between pressing a key and the corresponding note onset is about 100 ms for quiet notes and around 30 ms for staccato, forte notes [3]. Piano players are able to perform with these latencies present in their own musical system, so there is no reason to assume that time differentials of around 8.51 ms cannot be dealt with by performers in the SourceNode project, even if the instruments they are using are radically different from a traditional piano. Indeed, the best test case of drift while using the SourceNode system was shown to be 3.62 milliseconds.
A direct, visual comparison can now be made between the best case as tested with the Apple Network MIDI Utility application, and the best case as tested with the SourceNode application.
Fig. 9. SourceNode Software - MIDI Drift

3.5. Technological Implications - Audio Streaming Latency
The second key driver with regard to the successful implementation of the SourceNode project is the streaming of audio from the slave nodes to the master node. The Local Area Network (LAN) ensured that Network Latency was, theoretically, kept to a minimum, as delays caused by information having to physically travel long distances were not a factor.
3.5.1. Audio Streaming Latency Measurement
The audio streaming within the SourceNode project was implemented through the use of two software enablers. The first software enabler was the JackOSX internal audio routing system, which allowed audio to be routed internally at both the master and slave nodes. At the slave nodes, audio had to be routed from the Ableton LIVE software into JackTrip. At the master node, the audio had to be routed from the JackTrip software into Ableton LIVE, with a distinct separation of the audio streams, so that each slave channel could be routed into separate


audio channels within Ableton LIVE. This ensured the integration of a distinct two-stage modulation process: the first modulation stage at the slave nodes, the second at the master node. The second key software enabler is the JackTrip application, which enables audio to be streamed from each of the slave nodes to the master node.
The tests found that the transfer of audio from slave node 1 to the master node experienced an average latency figure of 8.03 milliseconds (ms). The average latency figure for slave node 2 was 9.46 milliseconds (ms).2
It must be recognised that the latency figures apparent in the streaming of the audio are due to the System Latency at each slave node. The System Latency is the time taken for the internal software, in this case Ableton LIVE, JackOSX and JackTrip, to process the functions that each of them is designated. Envisaging a SourceNode project within a Wide Area Network (WAN), where the streaming configuration covers a much larger geographical distance, the overall latency figures would probably be much greater. This is not to say that the System Latency figures that have been measured are unimportant, as it has been shown that the overall latency in any NMP system is the Network Latency added to the individual System Latency [2][3][4][5][7][9][11][12][13][15][16][22]. In the case of the SourceNode project the Network Latency is at minimal levels, due to the Local Area Network (LAN) infrastructure, so it is safe to assume that the latency figures, as measured, relate to the System Latency at each node.
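The additive relation between Network Latency and buffer-driven System Latency can be sketched as follows. This is a simplification under stated assumptions: the 44.1 kHz sample rate is inferred from the CD-quality requirement (the paper does not state it), and each 512-sample buffer is treated as exactly one full buffering stage.

```python
def buffer_latency_ms(frames: int, sample_rate: int = 44100) -> float:
    """Latency contributed by one audio buffer, in milliseconds."""
    return 1000.0 * frames / sample_rate

def system_latency_ms(buffer_sizes, sample_rate: int = 44100) -> float:
    """Sum of per-stage buffering latencies along the signal chain."""
    return sum(buffer_latency_ms(b, sample_rate) for b in buffer_sizes)

def total_latency_ms(network_ms: float, buffer_sizes, sample_rate: int = 44100) -> float:
    """Overall NMP latency = Network Latency + System Latency."""
    return network_ms + system_latency_ms(buffer_sizes, sample_rate)

# One 512-sample buffer at 44.1 kHz adds roughly 11.6 ms.
print(round(buffer_latency_ms(512), 1))
```

The measured 8-9.5 ms figures sit below a single 512-sample buffer at 44.1 kHz, which suggests the chain does not incur a full buffer of delay at every stage; the sketch only illustrates the additive model, not an exact account of the measurements.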

Fig. 10. Slave Node 1 Latency

Fig. 11. Slave Node 2 Latency


4. CONCLUSION

This final section of the paper discusses the outcomes of the research, focusing mainly on the aesthetic and technological results. There is also a short discussion of possible future research with regard to both the SourceNode project architecture and the SourceNode software.
4.1. Aesthetic Summary
Aside from the motivation to create an architecture and enabling software technology to permit a Network Sourced Approach, there was a strong musical interest behind the SourceNode project. The project consisted of three participants, one positioned at each of the two slave nodes and one at the master node. One slave node was designated the role of creating percussive elements and the other given the role of melody. The master node decided the tempo throughout the performance, the time signature, whether any loop constraints were to be imposed, and the start and end times of the piece. The master node DAW generated the MIDI-Clock messages that were shared through the network. Finally, the master node received the generated audio content from the slave nodes and added the important interpretation, modulation, reaction and performance stage.
The musical outcome was interesting on a number of levels.
Firstly, the concept of DAWs performing within a
synchronised network structure was new to all participants.
Although the concept was alien to the performers, the sync
created by the network ensured that the architecture felt
somewhat natural. Secondly, the network sync enabled a

2 The latency measurement was the time it took for audio content created at a slave node to reach the master node over the network. The slave node created audio within a DAW (Ableton); for the latency measurements, a metronome click was used. The audio at the slave node was connected directly to a measurement computer running PRO TOOLS at the point where the audio exited the JackOSX software, and the audio was also transferred using JackTrip across the network from this point. Default buffer sizes were used on all software (Ableton: 512; MAX/MSP: 512; JackOSX: 512; JackTrip: 512; Pro Tools: 512). The time difference between these two measurements was taken to be the latency of the system framework.


degree of freedom within the sonic content of the performance. The imposed time signature, tempo control and MIDI-Clock synchronisation meant that the musical output at the master node remained coherent, without a disconnect of musical ideas. Lastly, the added interpretation, performance and modulation stage at the master node ensured that the musical output of the performance remained fluid, exciting and, above all, original. Most interestingly, the participants at the slave nodes felt that the modulation at the master node affected and influenced the musical ideas at the slave nodes, creating a musically communicative feedback loop similar to those seen in other NMP performances [10][14].
4.2. Summary of Network Sourced Approach
"a composer can only claim to have created fields of possibility whose content will be managed by others" [20].

The SourceNode project architecture has created an original and unique performance space: a space in which audio content is sourced, in synchronous fashion, from a network. The idea of the performer, positioned at the master node, in a multi-faceted orientation of designer, enabler, controller and performer is key to any understanding of the architecture's core implications.
The master node has democratised the creative musical process through the implementation of a network-synchronised musical structure. The systems and technologies utilised may have created an example of Liquid Modernity [27]. Bauman sees modern society as having evolved to include both decentralisation and centralisation in a fluid and flexible manner, believing that the decentralisation of certain functions within society is only possible through the centralisation of others. The SourceNode project may be seen in this light: there exists a distinct decentralisation of creativity through the centralised control of certain musical structures. The democratisation of both the content management and the creativity in the project is principal to an understanding of the project's worth [21][25][26].
4.3. Technological Summary
The SourceNode software application facilitates the synchronisation of remotely located Digital Audio Workstations over a network. The sync created allows a distinct level of influence over disparate sequencers, enabling a star-shaped network formation to emerge. The SourceNode application achieves its desired goal, which is to create a standalone software application that enables a master node to share certain structural musical information over a network with slave nodes.
The SourceNode application has been compared with the Apple Network MIDI Utility, and the measured performance differences with respect to MIDI Drift have been noted. There has also been an investigation into the JackTrip audio streaming software, with measurements made of the latency apparent within the SourceNode project architecture.
The paper has shown that a unique NMP offering is attainable through the implementation of the SourceNode software application. This architecture combines networking systems and protocols, computer music technology and NMP audio streaming software to create an NMP framework that incorporates synchronous performance systems and distinct two-stage modulation and performance levels.
4.4. Future Work
The SourceNode project has outlined a working, stable and unique NMP by offering a standalone application that may assist in the building of an architecture for Network Music Performances of a specific type. That is not to say it is complete. The SourceNode project has shown that it is possible to connect computers within a Local Area Network (LAN) and ensure that the software on these computers, namely the Digital Audio Workstations, remains synchronised. This sync is achieved through the implementation of UDP networking tools and MIDI-Clock synchronisation systems and software.
It must, however, be acknowledged that the software application is far from perfect. The implementation of an inbuilt MIDI-Clock accuracy alignment function would help the SourceNode application become a standard bearer for synchronous, remote collaboration through the use of DAWs. With the computer evolving into the most powerful source of music creation, demand for such a system, one that is accurate, stable and above all user-friendly, may well increase; musicians who use computers as their primary instrument may seek out unique and novel ways of creating music within collaborative networked spaces.
The SourceNode project has shown that a nodal-based Network Sourced Approach (NSA), encompassing two-stage modulation and performance processes in a collaborative musical architecture, is achievable. As demand for NMP systems increases, moving beyond the scope of a select few computer music and network technology specialists, we may see the number, form and context of



performances implementing a Network Sourced Approach (NSA) proliferate. These performances may cross geographical borders, musical genre categorisations, cultural boundaries and, perhaps even more interestingly, musical and social subdivisions.
ACKNOWLEDGEMENTS
The research undertaken for this paper was completed at CIT, Cork School of Music, within a thesis submitted by the author in partial fulfilment of the requirements for an MSc in Music and Technology. This conference paper was prepared by the author while pursuing a PhD in Sonic Arts at SARC, Queen's University Belfast.
5. REFERENCES
[1] Barney, D. (2004) - The Network Society - Polity Press, 2004.
[2] Caceres, J.-P. and Chafe, C. (2009) - JackTrip: Under the hood of an engine for network audio, in Proceedings of the International Computer Music Conference, Montreal, 2009.
[3] Carot, A. (2009) - Musical Telepresence - A Comprehensive Analysis Towards New Cognitive and Technical Approaches. (PhD Dissertation, Institute of Telematics, Lubeck, Germany, 2009)
[4] Carot, A. and Werner, C. (2007) - Network music performance: problems, approaches and perspectives. In Proceedings of the Music in the Global Village Conference, Budapest, Hungary, September 2007.
[5] Castells, M. (2000) - Materials for an Exploratory Theory of the Network Society - British Journal of Sociology, Vol. 51, No. 1 (January/March 2000), pp. 5-24, 2000.
[6] Chafe, C., Wilson, S., Leistikow, R., Chisholm, D. and Scavone, G. (2000) - A Simplified Approach to High Quality Music and Sound over IP, in Proceedings of the Digital Audio Effects (DAFX) Conference (2000), pp. 159-164.
[7] Chew, E., Zimmermann, R., Sawchuk, A., Papadopoulos, C., Kyriakakis, C., Francois, A. R. J., Kim, G. and Volk, A. (2004) - Musical interaction at a distance: Distributed immersive performance. In 4th Open Workshop of MUSICNETWORK, 2004.
[8] Gresham-Lancaster, S. (1998) - The Aesthetics and History of the Hub: The Effects of Changing Technology on Network Computer Music, Leonardo Music Journal 8 (1998), pp. 39-44.
[9] Gu, X., Dick, M., Kurtisi, Z., Noyer, U. and Wolf, L. (2005) - Network-centric music performance: Practice and experiments. IEEE Communications, 43:86-93, 2005.
[10] Hajdu, G. (2003) - Quintet.net - A Quintet on the Internet, Proceedings of the International Computer Music Conference, Singapore, 2003.
[11] Kleimola, J. (2006) - Latency Issues in Distributed Musical Performance, Telecommunication Software and Multimedia Laboratory Seminar, Helsinki, Finland, 2006.
[12] Lazzaro, J. and Wawrzynek, J. (2001) - A Case for Network Musical Performance, The 11th International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV 2001), New York, USA.
[13] Oliveros, P. (2009) - From Telephone to High Speed Internet: A Brief History of My Tele-Musical Performances, Leonardo Music Journal Online Supplement to LMJ 19, 2009.
[14] Rebelo, P., Schroeder, F. and Renaud, A. B. (2008) - Network dramaturgy: Being on the node. Paper at the International Computer Music Conference, 2008.
[15] Renaud, A. (2009) - The Network as a Performance Space. (PhD Dissertation, School of Music and Sonic Arts, Queen's University Belfast, 2009)
[16] Renaud, A. B., Carot, A. and Rebelo, P. (2007) - Networked music performance: State of the art, in Proceedings of the AES 30th International Conference, Saariselka, Finland, 2007.
[17] Tanaka, A. (2001) - Musical implications of media and network infrastructures. Hypertextes Hypermédias, Hermes Science Publications, Paris, 2001, pp. 241-250.
[18] Tanaka, A. (2003) - Seeking interaction, changing space. In Proceedings of the 6th International Art + Communication Festival, Riga, Latvia, 2003.
[19] Tanzi, D. (2003) - Musical Experience and On-line Communication. Crossings: eJournal of Art and Technology, University of Dublin, Trinity College, December 2003.
[20] Tanzi, D. (2005) - Musical objects and digital domains. Proceedings of the EMS-05 Conference, Montreal, Quebec, October 19-22, 2005.
[21] Tanzi, D. (2005) - Musical Thought Networked, Laboratorio di Informatica Musicale, Dipartimento di Informatica e Comunicazione, Universita degli Studi di Milano, 2005.
[22] Weinberg, G. (2002) - The Aesthetics, History and Future Challenges of Interconnected Music Networks - MIT Media Laboratory, ICMC 2002, Goteborg, Sweden, pp. 349-356, 2002.
[23] Novak, M. (1997) - Trans Terra Form: Liquid Architectures and the Loss of Inscription. 1997.
[24] Vitale, C. (2010) - Networkologies - A Manifesto - Section I - Speculations Online Journal 1, pp. 153-184, 2010.
[25] Kim-Boyle, D. (2008) - Network Musics - Play, Engagement and the Democratization of Performance. In Proceedings of the New Interfaces for Musical Expression Conference, Genova, Italy, June 4-8, 2008.
[26] Bannier, S. (2009) - The Musical Network 2.0 & 3.0, Studies on Media Information & Telecommunication, Interdisciplinary Institute for Broadband Technology, Brussels, Belgium, 2009.
[27] Lee, R. (2005) - Bauman, Liquid Modernity and Dilemmas of Development, in Thesis Eleven, 83:1, pp. 61-77, 2005.

OSCTHULHU: APPLYING VIDEO GAME STATE-BASED SYNCHRONIZATION TO NETWORK COMPUTER MUSIC

Curtis McKinney
Bournemouth University
Creative Technology Research Group
cmckinney@bournemouth.ac.uk

Chad McKinney
University of Sussex
Department of Informatics
C.Mckinney@sussex.ac.uk

ABSTRACT
In this paper we present a new control-data synchronization system for real-time network music performance named OSCthulhu. This system is inspired by the networking mechanics found in multiplayer video games, which represent data as a state that may be synchronized across several clients using a hub-based server. This paper demonstrates how previous musical networking systems predicated upon UDP transmission are unreliable on the open internet. Although UDP is preferable to TCP for transmitting musical gestures, we will show that it is not sufficient for transmitting control data reliably across consumer-grade networks.
This paper also shows that state-synchronization techniques developed for multiplayer video games are aptly suited to network music environments. To illustrate this, a test was conducted that establishes the difference in divergence between two nodes using OscGroups, a popular networking application, versus two nodes using OSCthulhu over a three-minute time span. The test results conclude that OSCthulhu is 31% less divergent than OscGroups, with an average of 2% divergence. This paper concludes with a review of future work to be conducted.
1. INTRODUCTION
Computer network music has benefited from three decades of development, including the experiments of the San Francisco Bay Area network band pioneers, the introduction of the OSC protocol [22], and research into streaming and latency issues. Building an infrastructure suitable for network performance in the face of highly distributed participants and online security roadblocks remains a challenging task, and one which this paper confronts. Our solution, OSCthulhu, is a client-server architecture which has proven robust in concert performance, and which, as open-source software, may be of benefit to other researchers and performers. The system has been researched and developed by the authors, with real-world testing conducted by their network music band Glitch Lich [9].
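The hub-based, state-synchronized model that OSCthulhu borrows from multiplayer games can be sketched roughly as follows. This is a minimal in-process illustration only: the StateServer/StateClient names and method calls are our own stand-ins, not OSCthulhu's actual API or network transport.

```python
# Sketch of hub-based state synchronization (illustrative; names are hypothetical).
# The server holds the single authoritative copy of shared state; clients never
# talk to each other directly, only through the hub.

class StateServer:
    """Authoritative state holder that rebroadcasts every change."""
    def __init__(self):
        self.state = {}
        self.clients = []

    def connect(self, client):
        self.clients.append(client)
        client.state = dict(self.state)  # new clients receive the full state on join

    def set_value(self, key, value):
        self.state[key] = value
        for client in self.clients:      # rebroadcast so all clients converge
            client.state[key] = value

class StateClient:
    def __init__(self, server):
        self.state = {}
        server.connect(self)

    def send(self, server, key, value):
        server.set_value(key, value)     # mutations always route through the hub

server = StateServer()
a, b = StateClient(server), StateClient(server)
a.send(server, "freq", 440)
assert b.state["freq"] == 440            # every client sees the hub's state
```

Because clients can only diverge until the next rebroadcast, a lost packet corrupts at most one update rather than the shared state itself, which is the property the divergence test in this paper measures.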
As noted in Indigenous to the Net, the world's first network computer band, the League of Automatic Music Composers, began as an extension of the home-brew circuit tinkering that was characteristic of the Bay Area in the mid-1970s. Their computers, MOS Technology KIM-1 models, were modest, with only 1 kilobyte of memory, and could only be programmed in assembly language. The League of Automatic Music Composers created interactive programs by directly soldering connections between computers and writing programs which would listen and transmit data on these lines. The network was fragile and error-prone. It was also difficult to set up, as all the connections had to be re-soldered each time the band rehearsed [3].
In what can be seen as a natural evolution of the technology, the spiritual successor to the League of Automatic Music Composers, the Hub, utilized a server-based system. This system provided a standardized interface for connections between members with varying computer models, as well as shared memory for the ensemble. Throughout this time many other approaches to networking were being developed. The efforts mentioned so far focused on real-time interactions between the performers' computers, but in the 1990s several new methods explored non-real-time connections. Systems such as ResRocket Surfer and Faust Music Online (FMOL) allowed users to collaborate on writing music by providing an online repository [12] [11].
Much research has investigated the issues that latency presents to instrumentalists when streaming audio, as well as strategies for coping with this latency in performance [1]. Solutions often favor research-grade connections between sites, which provide lower latency but are not widely available to the public [20] [14]. Several alternative approaches have been taken to address network latency, such as making the latency a multiple of a preset tempo, as in NINJAM [4]. The eJAMMING software instead uses local delay offsets to equalize the latency of audio output among all users, attempting to bring the users closer to synchronicity [7].
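The NINJAM-style approach above can be illustrated with a short calculation: rather than minimizing delay, incoming audio is padded to the next whole musical interval, so remote material always arrives musically "in time", one interval late. The function name and parameters below are our own illustration, not NINJAM's actual configuration.

```python
# Sketch of tempo-quantized latency (illustrative; not NINJAM's real code).
import math

def quantized_delay(latency_s, bpm, beats_per_interval=4):
    """Round a measured latency up to the next multiple of the interval length."""
    interval_s = 60.0 / bpm * beats_per_interval  # e.g. one 4-beat bar
    return math.ceil(latency_s / interval_s) * interval_s

# A 150 ms network latency at 120 BPM is padded to one full 2-second bar.
print(quantized_delay(0.15, 120))  # → 2.0
```

The trade-off is explicit: latency grows to a full bar, but it becomes deterministic and musically meaningful instead of an unpredictable jitter.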
Recent developments in music programming languages such as SuperCollider and ChucK have led to new and exciting systems focused on code sharing in live coding ensembles. Systems such as Co-Audicle and Republic create networks in which performers share code that is altered, executed, and re-entered into the pool [19] [18]. Live coding is a fundamentally computer-based performance style and for that reason lends itself well to networking. The information being transferred is small, yet

