
CHI 2000 • 1-6 APRIL 2000 • Interactive Posters

Simple Interfaces to Complex Sound in Improvised Music


John Bowers
Centre for User-Oriented IT-Design (CID)
Royal Institute of Technology (KTH)
Stockholm, SE 100 44, Sweden
+46 8 7906698, bowers@nada.kth.se
School of Music, University of East Anglia
Norwich, NR4 7TJ, UK

Sten-Olof Hellström
Centre for User-Oriented IT-Design (CID)
Royal Institute of Technology (KTH)
Stockholm, SE 100 44, Sweden
+46 8 7906698, soh@nada.kth.se
Department of Music, City University
London, EC1V 0HB, UK

ABSTRACT
We describe some interaction design principles and two interactive algorithms for the transformation of user-input from simple low degree of freedom (DOF) devices to support the synthesis of sound in music improvisation. We offer 'algorithmically mediated interaction' as an alternative to direct manipulation (DM) to describe auditory interfaces of this sort. A short performance complements this paper.

Keywords
Music, sound synthesis, auditory interfaces, improvisation

© Copyright on this material is held by the author(s).

INTRODUCTION
In this paper, we offer some techniques which map user-input to control data for real-time interaction with complex synthesised sound in the live performance of improvised electroacoustic music. Our interest in this application area exists, not only because we are musicians, but because we believe it to be a 'tough case' for interaction technology and design principles. Current synthesis methods (e.g. physical modeling, see [3]) implement complex sound models with many parameters being potentially controllable in real-time. How is this to be made manageable for performers? In non-improvised settings, a 'score' may constrain the possibility-space, but improvisation raises our question in full effect.

In music which uses sampled sound or imitative synthesis of existing instruments, established performance practice and instrument design can often enable a small set of control parameters to be identified. For example, many instruments are built with pitch control ready-to-hand so that musics which make pitch variation a main structural means can be easily played. In contrast, electroacoustic music often uses forms based on 'spectro-morphological' variation. That is, the dynamic change of the entire acoustic spectrum of sounds provides musical interest. This requires means for organising 'music in the continuum' rather than 'music on the pitch-lattice' [4]. Frequency parameters in a sound model will then be just one set among many with interactive potential. Improvised electroacoustic music, in these respects, has more the character of exploring a multi-dimensional 'soundscape' than varying series of pitches.

While it is now possible to synthesise very rich sounds in real-time with affordable technology, rather less work has been done on formulating novel interaction design principles which would enable new instruments or performance practices to be systematically developed. There are design challenges both for peripheral devices and for uncovering principles for the transformation of data from such devices into an interactionally useful form. The case of improvised electroacoustic music performance is an important application area because it requires techniques which are real-time, 'in the continuum', and enable navigation within multidimensional parameter spaces in ways which are still aesthetically satisfying and suited for public performance. We believe these requirements, if properly attended to, would enable creative input to research in HCI on auditory interfaces [2], while enhancing interaction design perspectives in computer music [3].

INTERACTION DESIGN PRINCIPLES
Our performance interfaces have been built guided by a set of HCI design concepts which, reciprocally, the interfaces are intended to demonstrate.

Algorithmically mediated interaction. In [1] we claim that alternatives to the DM paradigm are often required for interaction with complex higher-dimensional systems in artistic settings. In particular, we argue that close attention should be given to the algorithms which enable captured input (e.g. sensor data) to be used by specific applications. In our work, we separate out a layer of 'algorithmic mediation' so that different peripheral devices, transformation algorithms, and sound models can be freely exchanged.

Expressive latitude. We prefer to work with input devices with a small number of DOFs, typically 2D touchpads or small sensor assemblies. We also tend to use devices which are triggered by contact and are not continually coupled to the body. This enables performers to add emphasis to those gestures they make which are actually transduced. Space is left free for expressive body movements which are not sensed and have no technically-mediated musical outcome. The cost of allowing such 'expressive latitude' is that fewer input data streams are available. We try to compensate for this by careful design of the algorithmic layer.

Dynamic adaptive interfaces. Our algorithms often make for interfaces whose relation to sound dynamically changes. Input may be rescaled in ways which change over time and in response to ongoing user activity. Thus, the interface




may 'map' one region in the possibility-space of the sound model at one moment, and a different region later on. In a sense, then, we use low DOF devices to cut a series of sections through higher-dimensional spaces over time.

Anisotropic interaction spaces. Direct use or linear rescaling of input data to control parameters creates what can be called an isotropic interaction space. For example, a touchpad which linearly maps pitch to the X dimension and loudness to Y would create a space where movement in a given direction would tend to have the same consequence, e.g., within limits, moving up would always make things louder. In contrast, we use non-linear and discontinuous mappings to create anisotropic interaction spaces. The significance of movement then becomes context-dependent, and locales can emerge with different interactive characters.

TWO DEMONSTRATIONS: GEOSONOS AND SO2
A sensor pad provides X-Y touch location data at 7-bit resolution to the Geosonos algorithm. Several synthesis parameters are jointly controlled by each of the X and Y dimensions, but two sets of transformations occur to make the device surface dynamic and anisotropic. First, a histogram counting visits to each of the 128 coordinates is kept for each dimension. These counts are adjusted by a power function which either exaggerates the occurrence of commonly visited coordinates or diminishes or inverts it. A cumulative frequency graph is made of the adjusted counts and used to transform input coordinates. In effect, this rescales the interaction space to stretch or contract it around sounds which have been commonly explored so far. The degree of distortion (and hence the rate at which the space changes character) is given by the exponent of the power function. In this way, we intend an interface which not merely supports exploring a soundscape but incites it.

Rescaled coordinate values (call them X' and Y') are then further transformed to give output control values for sound synthesis. While several synthesis parameters are jointly derived from X' (or Y'), a different function is used to determine each of them. For example, p1 may be a linear function of X', p2 a sinusoidal function, p3 a quartic, and p4 a discontinuous ramp. This means that, over the range of the X dimension, the superimposed synthesis parameters will vary in their correlation (sometimes increasing together, sometimes in contrary motion). The overall effect is to create a textured interaction surface with regions of variable sonic character. The superimposition of several parameters onto a dynamic 2D surface, with different functions applied to each, is intended to suggest a geological metaphor. Under the surface, layers slide over and interact with each other: hence Geosonos.

SO2 also uses 2D input data but in a different way. A 'generator' is associated with each controllable parameter in the sound model. Generators output streams of values and are specified by setting such details as update rate, range and step size. Random walks and chaotic iterative functions have been used most commonly. A vector of values can be defined specifying the dynamical behaviour of all the generators. The user pre-selects four such vectors as being of interest. These are deemed to be 'virtually' located on the sensor pad such that touching the pad yields interpolated behaviour as a function of the proximity of the touch to the vectors' locations. This, too, is intended to enrich a featureless 2D surface with a phenomenological sense of a varied stock of sounds present to the touch. Depending on how the interpolation is done and how the generators are defined, changes in behaviour can be smooth or abrupt, again yielding regions with varying character and stability.

CURRENT EXPERIENCE AND FUTURE WORK
We have performed with Geosonos and SO2 in a number of improvisations with success. More formal reports of experience and evaluations of our approach lie in the future. Our techniques seem to be effective in allowing us to find usable sounds within complex models and to make musically interesting spectro-morphological transformations between them through touching and stroking simple sensor pads. With both algorithms, immediately repeated gestures in the same place give similar results but with enough variety to suggest ways of developing the music. Neither approach allows for fine control of sound, but that has not been our intention (though combinations with interfaces more attuned to fine manipulation are certainly feasible).

There are limits to our approach as currently demonstrated. In Geosonos, several control parameters are superimposed per dimension. While each will be differently mapped, there are limits on how much can be usefully co-varied in this way (about 4 parameters per DOF with our sound models). SO2 scales better, as the limit on the number of generators is due to available processing power rather than inherent features of the design concept. However, SO2 requires preparatory work in defining interesting vectors. In both cases, our techniques would benefit from being tunable in performance so that, e.g., Geosonos could find good selections of parameters to map to the pad and candidate vectors for SO2 could be defined at run-time.

From time to time, musicians have experimented with non-traditional means for sound control as well as extensions to familiar instruments [3]. The specificity of our approach is to use low DOF input devices but to algorithmically 'magnify' them to support interaction with complex sound. Here, we focused on two examples of such an algorithmic layer. We believe this makes for instruments which are equally expressive for performance purposes and potentially more accessible to non-virtuosi than higher DOF devices. While we do not support the fine control that DM is often celebrated for, in our application area, the improvisation of synthesised music, precision in this sense is not necessarily what you want. We prefer to support 'usability at the edge of control'. This paper and its accompanying performance demonstrate the current state of our explorations.

ACKNOWLEDGMENTS
We thank the European Communities' eRENA project, the British Academy's ARiADA project, and Clavia, Sweden.

REFERENCES
1. Bowers, J. et al. The Lightwork Performance. Proc. of CHI '98 (supplement), ACM Press.
2. Kramer, G. (ed.) Auditory Display. MIT Press, 1994.
3. Roads, C. The Computer Music Tutorial. MIT Press, 1996.
4. Wishart, T. On Sonic Art. Harvester, 1997.
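The adaptive rescaling described for Geosonos above (a per-dimension visit histogram, adjusted by a power function and read off as a cumulative-frequency map) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the class and method names are ours, and details such as the initial flat histogram are guesses.

```python
# Sketch of Geosonos-style adaptive rescaling of one 7-bit input axis.
# A visit histogram over the 128 coordinates is raised to a power and
# turned into a cumulative-frequency map that stretches the axis
# around frequently explored coordinates (exponent > 1), flattens the
# effect (exponent near 0), or inverts it (exponent < 0).

class AdaptiveAxis:
    def __init__(self, exponent=2.0, bins=128):
        self.exponent = exponent      # degree of distortion, hence rate of change
        self.counts = [1.0] * bins    # start flat so the initial map is near-identity

    def rescale(self, coord):
        """Map a raw coordinate (0..127) through the adjusted cumulative map."""
        self.counts[coord] += 1.0
        adjusted = [c ** self.exponent for c in self.counts]
        total = sum(adjusted)
        # cumulative frequency up to and including this coordinate,
        # scaled back to the 0..127 output range
        cdf = sum(adjusted[: coord + 1]) / total
        return int(round(cdf * 127))
```

With a neutral exponent of 1.0 and a flat history the map is close to the identity; as one region of the pad accumulates visits under a larger exponent, that region claims a growing share of the output range, so the space "stretches" around commonly explored sounds.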

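The second Geosonos transformation, deriving several superimposed synthesis parameters from one rescaled coordinate via different functions, can be sketched like this. The four function shapes (linear, sinusoidal, quartic, discontinuous ramp) follow the text; the specific constants and the normalisation to [0, 1] are our assumptions.

```python
import math

def control_values(x):
    """Derive four superimposed control parameters from one normalised
    coordinate x in [0, 1], each via a different function."""
    p1 = x                                              # linear
    p2 = 0.5 * (1.0 + math.sin(2.0 * math.pi * 3.0 * x))  # sinusoidal
    p3 = x ** 4                                         # quartic
    p4 = (x * 4.0) % 1.0                                # discontinuous ramp
    return p1, p2, p3, p4
```

Because the four functions have different slopes (and signs of slope) at any given x, a single gesture along the X dimension moves the parameters sometimes in parallel and sometimes in contrary motion, which is one way the textured, anisotropic surface described above can arise.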

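The SO2 idea of per-parameter generators blended by the touch's proximity to four 'virtually' placed vectors can be sketched as below. This is only one reading of the description: the bounded random walk stands in for whichever generators are used, and inverse-distance weighting is our assumed interpolation scheme (the paper notes that different interpolation choices give smoother or more abrupt changes).

```python
import random

class RandomWalkGenerator:
    """A bounded random walk: one illustrative generator type."""
    def __init__(self, step=0.05, lo=0.0, hi=1.0):
        self.step, self.lo, self.hi = step, lo, hi
        self.value = (lo + hi) / 2.0

    def tick(self):
        self.value += random.uniform(-self.step, self.step)
        self.value = min(self.hi, max(self.lo, self.value))
        return self.value

def interpolate_settings(touch, vectors):
    """Blend generator-setting vectors by inverse distance to the touch.

    touch   -- (x, y) pad location in [0, 1] x [0, 1]
    vectors -- list of ((x, y), [settings, one per generator])
    """
    weights = []
    for (vx, vy), _ in vectors:
        d = ((touch[0] - vx) ** 2 + (touch[1] - vy) ** 2) ** 0.5
        weights.append(1.0 / (d + 1e-6))   # avoid division by zero on a direct hit
    total = sum(weights)
    blended = [0.0] * len(vectors[0][1])
    for w, (_, settings) in zip(weights, vectors):
        for i, s in enumerate(settings):
            blended[i] += (w / total) * s
    return blended
```

Touching at a vector's location yields nearly that vector's settings; touching between vectors yields a weighted mixture, so the featureless pad acquires distinct locales with graded transitions between them.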