
University of Manchester

Algorithmic Composition

Author: Sarah King
Supervisor: Dr. Andrea Schalk

A third year project report submitted for the degree of


BSc.(Hons) Computer Science and Mathematics
in the School of Computer Science.

April 24, 2015


Abstract
Algorithmic Composition

Author: Sarah King


Supervisor: Dr. Andrea Schalk

This report considers the link between Computer Science, Mathematics,
and classical music. It looks at programming a computer to algorithmically
generate music, and attempts to determine whether music produced by a
computer can mimic a piece composed by a human.
The aim of this project is to research good practice from previous attempts
at algorithmic composition, such as the ‘Illiac Suite’ by Hiller and
Isaacson, and Cope’s ‘Experiments in Musical Intelligence’ (as discussed in
Chapter 2). The project then shifts focus onto building on these practices
and developing algorithms to generate compositions based on a particular
composer’s style. These algorithms take various aspects of musical theory
into consideration and use tactics from probability theory in order to model
a string of music.
The compositions generated by these algorithms are compared against
human compositions. There is evidence to suggest that the algorithms used
in this project create the illusion of a human–generated piece of music,
although respondents were not completely fooled.
An attempt is made to suggest retrospective improvements to the project,
along with ideas for future developers to consider.

Acknowledgements

Firstly, I would like to thank my supervisor Dr. Andrea Schalk for being a
constant support throughout this project and my time at university.

Secondly, to Ben Allott for putting up with my midnight ramblings and
always being there to pick up the pieces when things went wrong.

Thirdly, to Rebecca Doran for sticking with me through thick and thin.
Your friendship is worth the world to me, and I hope to never lose that.

Fourthly, to all my friends and family who have acted as coding ducks
during this past year: I could not have got to this stage without you all.
Thank you for keeping me sane.

Finally, to Richard Hartnell and Merle Calderbank — thank you for
introducing me to the wonderful worlds of mathematics and music. You are
both a great inspiration and without your enthusiasm, the idea for this
project could never have been born.


Contents

Abstract 1

Acknowledgements 2

1 Introduction 5
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Aims and Objectives . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Structure of the Report . . . . . . . . . . . . . . . . . . . . . 6

2 Background Research 7
2.1 Previous Attempts . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.1 Canonical Composition (c. 16th Century) . . . . . . . 7
2.1.2 Dice Music (c. 1780) . . . . . . . . . . . . . . . . . . . 7
2.1.3 Twelve–tone Music (c. 1910) . . . . . . . . . . . . . . 8
2.1.4 The Illiac Suite (c. 1955) . . . . . . . . . . . . . . . . 8
2.1.5 Musicomp (c. 1960) . . . . . . . . . . . . . . . . . . . 8
2.1.6 Formalised Music (c. 1960) . . . . . . . . . . . . . . . 9
2.1.7 Experiments in Musical Intelligence (c. 1980) . . . . . . 9
2.1.8 Genetic Programming (c. 1995) . . . . . . . . . . . . . 9
2.2 Analysis and Lessons Learned . . . . . . . . . . . . . . . . . . 10

3 Design 12
3.1 Musical Theory . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.1.1 Note Selection . . . . . . . . . . . . . . . . . . . . . . 12
3.1.2 Note Duration . . . . . . . . . . . . . . . . . . . . . . 12
3.1.3 Cadences . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2 Markov Modelling . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2.2 The Mathematics Behind The Models . . . . . . . . . 15
3.2.3 Representing Music . . . . . . . . . . . . . . . . . . . 16
3.2.4 Selecting a Note . . . . . . . . . . . . . . . . . . . . . 17
3.3 Critic Function . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3.1 Note Repetition . . . . . . . . . . . . . . . . . . . . . 18
3.3.2 Cadences . . . . . . . . . . . . . . . . . . . . . . . . . 19


4 Implementation 20
4.1 Software Design . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.2 Handling Music — The JFugue API . . . . . . . . . . . . . . 21
4.2.1 JFugue MusicString . . . . . . . . . . . . . . . . . . . 21
4.3 Improving Algorithm Output . . . . . . . . . . . . . . . . . . 22
4.3.1 Random Chance (A1) . . . . . . . . . . . . . . . . . . 22
4.3.2 Basic Markov Modelling (A2) . . . . . . . . . . . . . . 22
4.3.3 Markov & Critic Function (A3) . . . . . . . . . . . . . 23
4.3.4 Markov, Critic & Variable Note Lengths (A4) . . . . . 23
4.3.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.4 Parser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

5 Testing & Statistical Analysis 27


5.1 Random Chance . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.2 Basic Markov Modelling . . . . . . . . . . . . . . . . . . . . . 28
5.3 Markov & Critic Function . . . . . . . . . . . . . . . . . . . . 29
5.4 All Improvements . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.5 Study of Musical Ability . . . . . . . . . . . . . . . . . . . . . 31

6 Furthering The Project 33


6.1 Retrospective Look at Development . . . . . . . . . . . . . . 33
6.1.1 Features To Include . . . . . . . . . . . . . . . . . . . 33
6.2 Suggestions for Future Projects . . . . . . . . . . . . . . . . . 35
6.3 Self Reflection . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

Bibliography 36

A Music Terminology 40

B JFugue Details 42

C Raw Data for Chapter 5 43


C.1 Identifying Compositions . . . . . . . . . . . . . . . . . . . . 43
C.2 Plaintext Responses . . . . . . . . . . . . . . . . . . . . . . . 44
C.2.1 Random Chance . . . . . . . . . . . . . . . . . . . . . 44
C.2.2 Basic Markov Modelling . . . . . . . . . . . . . . . . . 45
C.2.3 Markov & Critic Function . . . . . . . . . . . . . . . . 45
C.2.4 Markov, Critic & Variable Note Lengths . . . . . . . . 46
C.3 Musicality of Respondents . . . . . . . . . . . . . . . . . . . . 48
C.4 Responses by Musical Ability . . . . . . . . . . . . . . . . . . 48


Chapter 1

Introduction

The dictionary definition of music is ‘a pattern of sounds made by musical
instruments, voices, or computers, or a combination of these, intended to
give pleasure to people listening to it’ [Dic15]. As musical tastes have
developed over time, so have the combinations of instruments and voices that are
considered ‘good music’. Human composers regularly push the boundaries
of music, so it stands to reason that a computer could do the same thing.
By teaching a computer the rules of musical theory — which are essentially
mathematics — music can be generated. The interesting question is whether the
computer has created music that is passable as a human composition. This
report shows how a computer can be programmed to generate music, and
eventually fool people into believing that a computer–generated composition
was indeed composed by a human. Throughout this report, a number
of musical terms are used. Please refer to Appendix A for a glossary of all
of these terms.

1.1 Motivation
Mathematics is used in almost every academic discipline, so it follows that
mathematics is heavily involved in the creation of music. The Ancient Greek
mathematician Pythagoras is credited with the creation of the musical scale,
having noticed that two strings whose lengths are in the ratio 3 : 2 produce
notes that are a perfect fifth apart. Different ratios then produce different
note intervals [Fra01].
Obviously, mathematics and mathematical theory are also heavily
involved within computer science. Without mathematics giving us the ability
to express ideas to a computer, new technologies and the field of computer
science itself would not be as widespread.
There are plenty of examples from history of people composing music
using a computer. The earliest example of this is the ILLIAC computer at
The University of Illinois [Edw11]. The Illiac Suite for String Quartet was
completed in 1956 and makes use of Markov chains in its random-walk pitch
generation algorithms [Edw11]. These are considered further in Chapter 2.
From these previous attempts, it is clear that music can be composed
algorithmically. Knowing this, the question becomes whether the computer
can ‘trick’ a human with a composition. This forms the second part of the
report, where attempts to improve a composition previously generated by the
computer are discussed, with focus on whether these improvements aid the
illusion of a human composition.

1.2 Aims and Objectives


The aims of the project evolved from research into the history of algorithmic
composition (discussed further in Chapter 2), and specifically focus on the
development and improvements of stochastic algorithms. The aims are:

• Produce a stochastic algorithm (an algorithm with random chance) that
is an approximation of human-created music.

• Produce a function to alter a previous composition based on musical
theory (discussed further in Chapter 3).

• Improve the random probabilities used in this algorithm in real time.

• Input pieces from human composers and have the ability to load the
random probabilities with probabilities relating to a certain composer.

• Allow the user to change tempo, instruments and style of music cre-
ated.

1.3 Structure of the Report


Chapter 2 looks at the previous attempts to create algorithmically gener-
ated music, and reflects on lessons learned from these attempts. Chapter 3
considers what makes ‘good music’ and discusses some of the musical theory
choices that were made throughout the project. Chapter 4 delves into the
minutiae of the development, and looks at the most influential parts in creating
a human-sounding composition.
In Chapter 5, the statistics gathered from asking various people to listen
to and attempt to distinguish between human–composed and computer–
generated music are discussed. Chapter 6 looks at how the project could be
developed with more time, and takes a retrospective look at how time was
spent throughout the development phase of this project.


Chapter 2

Background Research

Algorithmic composition is the use of a well–defined algorithm when
composing music [Jac96]. There is a long history of composing algorithmically
in both the pre– and post–digital computer age [Edw11], and although
algorithmic composition became more popular with the rise of the computer,
algorithmic thinking is certainly a lot older [Ess07].

2.1 Previous Attempts


There have been numerous attempts at algorithmically generating music —
either by computer, or by hand. The main attempts have been listed and
analysed here.

2.1.1 Canonical Composition (c. 16th Century)


From the 16th century, the word ‘canon’ was used to describe music that was
generated by ‘any kind of imitative musical counterpoint’ [Bri93]. Canons
are generated from one melodic line, and follow a generative rule that con-
trols the number of voices, various entry points and the tempo of successive
voices (amongst other aspects) [Bri93]. The rule used to generate the other
melodic lines was usually given verbally, and in later compositions, was
denoted by markings in the score.
A modern day canon can be seen in the nursery rhyme ‘Three Blind
Mice’. This can be sung in a ‘round’, where the same melody is sung by a
number of voices, with each voice starting at a different point. Formally,
this is called a ‘perpetual canon’ as the voices can begin again when they
reach the end of their melodic phrase.

2.1.2 Dice Music (c. 1780)


Mozart’s ‘Musikalisches Würfelspiel’ was designed to ‘compose, without the
least knowledge of music, so many waltzes or ländler as one pleases, by
throwing a certain number with two dice’ [Moz87]. ‘Musikalisches Würfelspiel’
consists of numerous two–bar fragments of music, and contains instructions
for creating music using these fragments [Nie09]. Other composers
of the time were also experimenting with this concept, such as Kirnberger
with ‘The Ever–Ready Minuet and Polonaise Composer’ [Nie09].

2.1.3 Twelve–tone Music (c. 1910)


The Austrian–born composer Arnold Schoenberg is considered the inventor
of twelve–tone music. This method of composing music was thought to
provide a basis for the structure of a piece, and uses all 12 tones of the
scale (as shown in Figure 2.1) equally. A different ordering of these 12
tones was used for each composition, and this ordering became the idea
that can be found throughout the whole composition [EB14]. There are
12! = 479,001,600 (almost 500 million) unique orderings of a 12–tone row,
giving the composer a large scope for creating the structure of a piece of
music.

Figure 2.1: The Twelve Notes Used in a Scale.

2.1.4 The Illiac Suite (c. 1955)


‘The Illiac Suite’ — composed by the ILLIAC computer — was the first
musical composition for traditional instruments created through computer–
assisted composition [Nun11]. The suite was an experiment to test various
composition algorithms, with the four movements of the suite being the
results of these experiments. Hiller & Isaacson experimented with generation
of music, canonical music, music with dynamics, and generative grammars
[HI59].
The generative algorithms used in these experiments were the first re-
ported examples of using computer algorithms for generating music. They
laid the foundation of all algorithmically generated music in the future, but
showed no real break from the algorithms used before [Bag98]. In this way,
the Illiac Suite is sometimes considered a continuation of tradition.

2.1.5 Musicomp (c. 1960)


Musicomp was a piece of software developed by Lejaren Hiller as a way of
furthering his work on the Illiac Suite [Nun13]. The first musical work com-
posed with Musicomp was entitled ‘Computer Cantata’, and was an example


of the various compositional procedures that could be used. Musicomp was
written as a library of subroutines, giving composers the flexibility to inject
some of their own ideas into the piece, rather than relying solely on the
computer (as in the Illiac experiments) [Mau99]. These subroutines include
a ‘Musikalisches Würfelspiel’–inspired selection procedure, amongst others
[Ari05].

2.1.6 Formalised Music (c. 1960)


Formalised music refers to software created by Iannis Xenakis to produce
data for stochastic algorithms. Using the computer’s ability to calculate
at high speeds, Xenakis focussed on various probability theories to improve
compositions [Xen63]. The program would work out a ‘score’ from a list
of notes and probabilistic weights, and then make a decision based on a
random number generator [Alp95].
This method of composition combines stochastic algorithms and the
rule–based systems similar to those used in Illiac. However, in the com-
positions produced using formalised music, ‘the computer has not actually
produced the resultant sound; it has only aided the composer by virtue of
its high-speed computations’ [Cop84].

2.1.7 Experiments in Musical Intelligence (c. 1980)


David Cope’s ‘Experiments in Musical Intelligence’ (EMI) software mimics
composers and creates works that are similar sounding [Coc01]. It was
initially developed by Cope as a way of overcoming his ‘composer’s block’,
and would be used to track the current ‘mood’ of a piece, whilst being able
to generate the next note/measure/phrase [Cop81].
The EMI software contains a large database of music, descriptions and
styles. Using this database, the EMI software deconstructs and recombines
musical phrases, giving the illusion of mimicking a particular composer. The
deconstructions and recombinations require human intervention, however,
in order to ensure that the recombinations are not ‘utter gibberish’ [Cop81].

2.1.8 Genetic Programming (c. 1995)


Genetic algorithms ‘evolve’ in ways that resemble natural selection [Hol15].
Initially, the ‘population’ of music is filled with randomly selected (or
human–composed) pieces. Then, iterations of the algorithm are performed
over this data in order to produce something that sounds more musical. This
process is shown in Figure 2.2. Genetic algorithms are fairly new, and as
such have only been used to create harmonisations and accompaniments to
existing pieces of music [Dos03].
Genetic algorithms are able to generate their own musical materials as
well as form their own grammars. Composers must program their own ‘critic’


function which ‘listens’ to the music generated by the algorithm and decides
if it is a suitable representation of music [Mau99].

Figure 2.2: A simplified version of a genetic program run.

2.2 Analysis and Lessons Learned


Canonical Composition (2.1.1) is a perfect example of the use of a deter-
ministic algorithm when algorithmically generating music, i.e. using an
algorithm with no random choice. If one considers singing ‘in the round’, it
is known beforehand how the music will sound. Thus, this music can be con-
sidered very prescriptive, which is the reason that deterministic algorithms
were discounted as a viable way of generating ‘good music’.
With the development of stochastic algorithms came the creation of Dice
Music (2.1.2) and Twelve–tone Music (2.1.3). The element of random choice
here would enable an algorithm to force choices that a human may (or may
not) make, and would therefore create more interesting pieces of music than
a deterministic algorithm. However, creating a piece of music completely
randomly does not necessarily mean it is a ‘good’ piece of music. Random
choice would be better suited to building up a piece of music from small
phrases of ‘good music’ that have been developed using another algorithm.
Moving onto more modern attempts, the Illiac Suite and Musicomp both
show great advances in the algorithms used for creating music. The use of
Markov models in Musicomp became prevalent, and formed the basis for
preparing an algorithm for generating music. However, both Illiac and
Musicomp were considered by their developers as ‘excused from aesthetic
scrutiny …as the studies were designed to test the efficiency and ease of use
of the algorithm’ [Ari05]. Hence, it is unclear how ‘The Illiac Suite’ and


‘Computer Cantata’ were received by audiences at the time.


Finally, Formalised Music (2.1.6), ‘Experiments in Musical Intelligence’
(2.1.7) and Genetic Algorithms (2.1.8) all have similar positives and nega-
tives. They all show a break from the traditional algorithms used by Illiac
and Musicomp. However, these solutions require greater computational time
and power than I had at my disposal, so I have not been able to fully pursue
them. The Critic function used in genetic programming models, however,
seemed feasible to complete in the given time, so this idea will be taken
forward into the development process.
In summary, the elements from this research that are implemented in
the project are:

• Markov models for note and duration selection.

• Critic function to analyse a composition created by the computer. This
judges the compositions based on various aspects of musical theory — as
discussed in Chapter 3.

• A database of previous note combinations used to provide increasingly
accurate data for the Markov model solutions. This makes these solutions
more accurate, as the probabilities from each piece can be stored and added
to existing probabilities.

• The ability to parse a piece of music in real–time to improve the Markov
probabilities.


Chapter 3

Design

There has been a lot of debate about what makes ‘good music’ good. This
chapter looks at some of the bigger concepts in this debate, and then discusses
how these concepts were combined into a single Critic Function used to judge
a piece of music generated by a computer.

3.1 Musical Theory


There are various different elements that make up a piece of music, from the
basic notes that are used in the piece to the more complex chord progressions
used to create a texture (or timbre). A lot of the information here was found
in [BBC14].

3.1.1 Note Selection


A note used in a piece of music is selected from the key signature. The key
signature ensures that the notes selected are all from the same scale, and
hence sound ‘nice’ in comparison with each other.
In a piece of music, there is normally one main key signature that is used,
with occasional modulations into another key. These modulations can create
tension or change the mood of the piece as required. It is also important
to select notes that are commonly used after each other, as evidence (in the
form of other pieces of music) suggests that these note combinations work
well together.

3.1.2 Note Duration


A note’s duration is the length of time that a note is played for. There are
5 main durations that are used in music — semibreves, minims, crotchets,
quavers and semi–quavers [The15]. A semibreve is a note with the longest
duration — typically 4 beats long. A minim is a note with half the duration
of a semibreve. Next, there is the crotchet, which has half the duration of


a minim. Quavers and semi-quavers are the quickest moving notes, with a
quaver having half the duration of a crotchet and a semi–quaver having half
the duration of a quaver. This hierarchy is shown in Figure 3.1.

Figure 3.1: The Hierarchy of Notes. [The15]

3.1.3 Cadences
A cadence is a sequence of notes or chords that generally signifies the end of
a musical piece or phrase. There are 4 types of cadence that are commonly
used in music. Chord progressions are generally written as Roman numerals,
where major chords are upper case numerals, and minor chords are lower
case numerals. Finally, diminished chords have a small circle to signify that
they are diminished. The application of Roman numerals to the C major
scale is depicted in Figure 3.2.

Figure 3.2: Roman Numerals Applied to a C Major Scale [The15]


Finished Cadences
A perfect cadence is a chord progression from V to I. This creates the feeling
that the music has come to a definitive end, and as such it is usually used at
the end of a piece of music.
A plagal cadence is a chord progression from IV to I. This also creates
the feeling that the music has come to a definitive end, and can also be found
at the end of a piece of music. The plagal cadence was traditionally used in
the plainchant songs that emerged around 100 A.D. [Est15], and is commonly
sung to the ‘A–men’ at the end of hymns.

Unfinished Cadences
An imperfect cadence is a chord progression from I to V. Unlike the perfect
or plagal cadences, an imperfect cadence does not sound finished. It is
used at the end of movements (as the music is carrying on into another
movement) or in the middle of a piece at the end of a particular section.
Imperfect cadences sound as though they want to carry on to complete the
music properly.
An interrupted cadence is a chord progression from vi to vii°. An
interrupted cadence does not provide a satisfactory end to a piece of music,
and is used in the same way as an imperfect cadence.

3.2 Markov Modelling


The algorithm used to create music relies heavily on the use of Markov
models. Markov chains were used in previous attempts — such as Musicomp
and Formalised Music — outlined in Chapter 2 as they provide ‘an effective
mechanism for creating and using stochastic matrices in musically satisfying
ways’ [SB15]. The information that follows was primarily taken from
Victor Powell’s excellent interactive tutorial [Pow14].

3.2.1 Overview
Markov chains, named after Andrey Markov, are mathematical systems that
move between ‘states’ which represent a situation or some values. Alongside
state names, there is also a set of probabilities that represent the chance
of moving from one state to the next. Markov models take into consid-
eration the events that occurred immediately before (and the probabilities
of these events happening), implying that the outcome could be changed
dramatically depending upon the events that precede a particular event.
In a two-state system, there are 4 possible transitions that the model
must take into consideration: A → A, A → B, B → A, and B → B (as
states can always transition to themselves). In this simple system, depicted


Figure 3.3: A Simple Two State System


in Figure 3.3, the probability of transitioning from one state to any other
is 0.5, as at each state there are two places it can transition to (with even
weighting). Expanding this model, if a state has N equally weighted links,
there is a 1/N chance of taking any particular transition.
Of course, it can be that a certain path is more favourable than another,
in which case the transition probabilities are weighted accordingly. The
skewing of transition probabilities helps to model real–life situations
accurately.
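For the simple two–state system, these transition probabilities can be
collected into a transition matrix, where the entry in row $i$ and column $j$
is the probability of moving from state $i$ to state $j$ (a worked
illustration, using the even weighting of Figure 3.3):

$$P = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix}$$

Skewing the transitions, say making state A tend to stay at A, simply
changes a row, for example to $(0.9, 0.1)$; each row must still sum to 1.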

3.2.2 The Mathematics Behind The Models


The theory of Markov models can now be generalised into mathematics,
using probability theory to model the effect created by a Markov chain
diagram. First, consider the chain rule of conditional probability. Suppose
there are indexed sets $A_1, \ldots, A_n$. Using the definition of joint
probability [Tri12], the value of a joint distribution of these indexed sets is:

$$P(A_n, \ldots, A_1) = P(A_n \mid A_{n-1}, \ldots, A_1) \cdot P(A_{n-1}, \ldots, A_1) \qquad (3.1)$$

Iterating this process over the remaining terms gives:

$$P\left(\bigcap_{k=1}^{n} A_k\right) = \prod_{k=1}^{n} P\left(A_k \,\middle|\, \bigcap_{j=1}^{k-1} A_j\right) \qquad (3.2)$$

To explain this fully, suppose there are three variables:

$$P(A_3, A_2, A_1) = P(A_3 \mid A_2, A_1) \cdot P(A_2 \mid A_1) \cdot P(A_1) \qquad (3.3)$$

where $P(A \mid B)$ is the probability of event A happening given that event
B has already happened [Wik15]. As an example, suppose there are two
buckets. Bucket 1 has 4 white balls and 5 black balls, and bucket 2 has
1 white ball and 7 black balls. Let A be the event that the first bucket
is selected: $P(A) = 0.5$. Let B be the event that a black ball is picked
randomly. The chance of picking a black ball, given that the first bucket
was selected, is:

$$P(B \mid A) = \frac{5}{9}$$


So,

$$P(A, B) = P(B \mid A) \cdot P(A) = \frac{5}{9} \times \frac{1}{2} = \frac{5}{18}$$
However, Markov models differ in the sense that they only consider the
event immediately prior in the calculation, and then multiply these
probabilities together. Applying this to Equation 3.2 gives (adapted from
[Lee10]):

$$P\left(\bigcap_{k=1}^{n} A_k\right) = \prod_{k=1}^{n} P(A_k \mid A_{k-1}) \qquad (3.4)$$

and applying this to Equation 3.3 gives:

$$P(A_3, A_2, A_1) = P(A_3 \mid A_2) \cdot P(A_2 \mid A_1) \cdot P(A_1) \qquad (3.5)$$

3.2.3 Representing Music


Music can be separated, very simplistically, into notes and duration, mak-
ing the application of Markov models to a piece of music very easy. For
the remainder of this section, the traditional nursery rhyme ‘Twinkle Twin-
kle Little Star’ will be used; the musical notes for which are displayed in
Figure 3.4.
The piece of music can be represented in matrix form. Equation 3.6 is
the result of parsing this piece of music into a matrix $A$, incrementing the
entry $A_{ij}$ each time the note represented by row $i$ was followed by the
note represented by column $j$.

$$
\begin{array}{c|ccccccc}
 & C & D & E & F & G & A & B \\ \hline
C & 2 & 0 & 0 & 0 & 3 & 0 & 0 \\
D & 3 & 2 & 0 & 0 & 1 & 0 & 0 \\
E & 0 & 4 & 4 & 0 & 0 & 0 & 0 \\
F & 0 & 0 & 4 & 4 & 0 & 0 & 0 \\
G & 0 & 0 & 0 & 4 & 4 & 2 & 0 \\
A & 0 & 0 & 0 & 0 & 2 & 2 & 0 \\
B & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array} \qquad (3.6)
$$

To calculate the probabilities of one note following another, the matrix is
made row–stochastic (each row sums to 1). The stochastic matrix is shown
in Equation 3.7.


$$
\begin{array}{c|ccccccc}
 & C & D & E & F & G & A & B \\ \hline
C & 0.4 & 0 & 0 & 0 & 0.6 & 0 & 0 \\
D & 0.5 & 0.\dot{3} & 0 & 0 & 0.1\dot{6} & 0 & 0 \\
E & 0 & 0.5 & 0.5 & 0 & 0 & 0 & 0 \\
F & 0 & 0 & 0.5 & 0.5 & 0 & 0 & 0 \\
G & 0 & 0 & 0 & 0.4 & 0.4 & 0.2 & 0 \\
A & 0 & 0 & 0 & 0 & 0.5 & 0.5 & 0 \\
B & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array} \qquad (3.7)
$$

Figure 3.4: Twinkle Twinkle Little Star
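The counting and normalising just described can be sketched in a few lines
of Java. This is a minimal illustration rather than the project’s actual
code: the note–to–index mapping and the single–string input format are
assumptions made for the example.

import java.util.Arrays;

public class TransitionCounter {

    // The seven scale letters, mapped to matrix indices 0..6.
    private static final String NOTES = "CDEFGAB";

    // Count note-to-note transitions, then normalise each row so that
    // it sums to 1, giving the row-stochastic matrix of Equation 3.7.
    static double[][] buildStochasticMatrix(String melody) {
        int n = NOTES.length();
        double[][] m = new double[n][n];
        for (int i = 0; i + 1 < melody.length(); i++) {
            m[NOTES.indexOf(melody.charAt(i))][NOTES.indexOf(melody.charAt(i + 1))]++;
        }
        for (double[] row : m) {
            double sum = Arrays.stream(row).sum();
            for (int j = 0; sum > 0 && j < row.length; j++) {
                row[j] /= sum;
            }
        }
        return m;
    }

    public static void main(String[] args) {
        // 'Twinkle Twinkle Little Star' written as plain scale letters.
        String melody = "CCGGAAG" + "FFEEDDC" + "GGFFEED"
                      + "GGFFEED" + "CCGGAAG" + "FFEEDDC";
        for (double[] row : buildStochasticMatrix(melody)) {
            System.out.println(Arrays.toString(row));
        }
    }
}

Running this reproduces the counts of Equation 3.6 and the probabilities of
Equation 3.7.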

3.2.4 Selecting a Note


In order to choose the next note in a sequence, a random number is
generated in the range 0 to 1. Then, considering
the row of the stochastic matrix that corresponds to the current note, the
algorithm steps along the row and sums the probabilities of notes it could
move to as it reaches them. If the sum of these probabilities becomes greater
than the random probability, the algorithm moves to the state which had
the probability that tipped the sum over.
Initially, a random starting position is selected for the beginning of the
piece and then Markov modelling is applied on every note after that. The
full pseudocode can be seen in Algorithm 1. Note that ‘allNotes’ described
in the algorithm is the stochastic matrix of probabilities.

3.3 Critic Function


The Critic function ‘judges’ a composition that is generated by the computer
and changes various aspects to suit the musical theory rules outlined in
Section 3.1. This function focusses on minimising the number of repeated
notes and applying a cadence at the end of the piece, although there is scope
to increase the number of features the Critic function checks for.


Algorithm 1 Pseudocode for Markov Models [SB15]


Require: number of notes to produce – called sizeOfPiece
Require: row–stochastic matrix of note probabilities – called allNotes
seedNote ← random note (initially)
for i ← 0 to sizeOfPiece do
    targetProb ← random number between 0 and 1
    currentSum ← 0.0
    for outputNote ← 0 to allNotes.length do
        currentSum += allNotes[seedNote][outputNote]
        if targetProb ≤ currentSum then
            break
        end if
    end for
    seedNote ← outputNote
end for
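In Java, the roulette–wheel selection at the heart of Algorithm 1 might look
like the sketch below. The class and method names are illustrative, not the
project’s actual identifiers; the matrix layout matches Equation 3.7.

import java.util.Random;

public class MarkovSelector {

    private static final Random RNG = new Random();

    // Walk along the current note's row, accumulating probabilities
    // until the running sum passes a random target in [0, 1).
    static int nextNote(double[][] allNotes, int seedNote) {
        double targetProb = RNG.nextDouble();
        double currentSum = 0.0;
        for (int outputNote = 0; outputNote < allNotes[seedNote].length; outputNote++) {
            currentSum += allNotes[seedNote][outputNote];
            if (targetProb <= currentSum) {
                return outputNote;
            }
        }
        // Fall back to the seed if the row is all zeros (a note that
        // never occurred in the parsed pieces).
        return seedNote;
    }

    static int[] compose(double[][] allNotes, int sizeOfPiece) {
        int[] piece = new int[sizeOfPiece];
        int seedNote = RNG.nextInt(allNotes.length); // random starting note
        for (int i = 0; i < sizeOfPiece; i++) {
            seedNote = nextNote(allNotes, seedNote);
            piece[i] = seedNote;
        }
        return piece;
    }
}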

3.3.1 Note Repetition


To achieve minimal repetition of notes, the Critic function analyses the notes
used in the composition and then decides if a note has been repeated too
many times, in particular checking if a note is repeated more than three
times consecutively. The pseudocode for this can be found in Algorithm 2.
This function should not need to change for the majority of compositions
that are created. As new pieces are created, and existing pieces are analysed,
the probabilistic weights assigned to the notes should become more realistic.
This would in turn reduce the number of notes that are repeated numerous
times, as such repetition is a feature not normally found in a traditional
piece of music.

Algorithm 2 Pseudocode for Finding Repeated Notes


Require: non–empty byte array of notes, called notes
for i ← 1 to notes.length − 2 do
    currentNote ← notes[i]
    previousNote ← notes[i - 1]
    nextNote ← notes[i + 1]
    if currentNote = previousNote and currentNote = nextNote then
        Return the position i.
    end if
end for
Return that there are no repeats.
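The same check might be written in Java as below; the report does not give
the actual method, so the signature is an assumption for the sketch.

public class RepetitionCheck {

    // Return the index of the middle note of the first run of three
    // identical consecutive notes, or -1 if no such run exists.
    static int findRepeatedNote(byte[] notes) {
        for (int i = 1; i < notes.length - 1; i++) {
            if (notes[i] == notes[i - 1] && notes[i] == notes[i + 1]) {
                return i;
            }
        }
        return -1; // no note is repeated three times consecutively
    }
}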


3.3.2 Cadences
The application of a cadence is something that requires a little more thought.
As is discussed in subsection 3.1.3, there are a number of cadences that can
be applied at the end of a composition generated by a computer.
The cadence to be selected will be chosen using a random–number gen-
erator, with the emphasis on selecting a finished cadence. If an unfinished
cadence is selected, the Critic function will add on more notes (using Markov
modelling) and apply a new cadence at the end of this extended piece. This
continues until a finished cadence is added to the end of a piece. This process
guarantees that the composition will always end on a satisfying note.
Another thing to consider is the chord progression that is used when
applying a cadence. Initially, the composition is in a fixed key, allowing us
to select the first, fourth, fifth, sixth, and seventh of a scale as required.
In order to select the correct octave to apply to the cadence, the octave of
the last note used in the composition is calculated. Finding the octave can
be achieved simply, by taking the floor of the MIDI value of the note divided
by 12 (the number of semitones in an octave).
Once the octave is calculated, the cadence is created by simply working
with the midi values and adding (or subtracting) intervals as required. The
pseudocode for this algorithm can be found in Algorithm 3.

Algorithm 3 Pseudocode for Applying a Cadence


Require: non–empty byte array of notes, called notes
rand ← random number in range 0 − 1
lastMidi ← notes[notes.length - 3]
octave ← octave of lastMidi
if rand < 0.4 then
    Apply perfect cadence:
    notes[length - 2] = fifth and notes[length - 1] = root
else if rand ≥ 0.4 and rand < 0.8 then
    Apply plagal cadence:
    notes[length - 2] = fourth and notes[length - 1] = root
else if rand ≥ 0.8 and rand < 0.9 then
    Apply imperfect cadence:
    notes[length - 2] = first and notes[length - 1] = fifth
else
    Apply interrupted cadence:
    notes[length - 2] = sixth and notes[length - 1] = seventh
end if
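As a concrete illustration, the octave calculation and the perfect–cadence
branch could be rendered in Java as follows. The interval offsets (seven
semitones from the root up to the fifth) and all names here are assumptions
for the sketch, not the project’s actual code.

public class CadenceHelper {

    // Octave of a MIDI note: the floor of its value divided by 12.
    // Integer division already floors non-negative values.
    static int octaveOf(byte midiNote) {
        return midiNote / 12;
    }

    // Overwrite the final two notes with a perfect cadence (V to I),
    // in the octave of the last freely generated note.
    static void applyPerfectCadence(byte[] notes, int keyRootPitchClass) {
        int octave = octaveOf(notes[notes.length - 3]);
        byte root = (byte) (12 * octave + keyRootPitchClass); // the tonic
        notes[notes.length - 2] = (byte) (root + 7);          // the fifth
        notes[notes.length - 1] = root;                       // the root
    }
}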


Chapter 4

Implementation

This chapter is concerned with the interesting or problematic aspects of the
implementation of the Critic, Parser and Markov functions, and also touches
briefly on the software design principles used throughout development.

4.1 Software Design


The software design principles discussed in this chapter were mainly taken
from John Sargeant’s third year course entitled ‘Software Design using Pat-
terns’ [Sar15].
The project was split into a number of iterations which lasted 3 – 4 weeks
each — in keeping with the Agile method of software development. In each
iteration, the algorithms that generated music were developed, improved,
and even completely thrown away in some cases. This resulted in more
aspects of musical theory being implemented into the algorithms.
The hardest iteration was the third, as this saw the implementation of
the Critic function, which is discussed further in subsection 4.3.3. The Critic
function was difficult to implement because it required a complete rethink
of how the music was being stored, as well as changing large parts of the
logic in the program — such as how notes were being played by the JFugue
player (discussed in section 4.2). The third iteration forced all of the code
to be refactored into the classes that are present within the code now. Using
the design principles of low coupling and high cohesion, classes were divided
into smaller, specific entities, some of which acted as ‘helpers’ to the other
— more important — classes.
Finally, I used the GRASP principle of ‘Information Expert’ to encapsulate
all code for a particular function in one class, making the program more
cohesive. This principle could also be used in reverse to decide where some
functionality should go.


4.2 Handling Music — The JFugue API


The JFugue API is an open source library that enables the composition of
music in the Java programming language without having to worry about
the MIDI conversions, etc. [Wik14]. As the use of JFugue makes dealing with
the compositions a lot easier, the development of the algorithms became
the focus of the project. A number of features from the JFugue API that
certainly eased the development process are discussed below. The following
discussion is mainly taken from ‘The Complete Guide to JFugue’ [Koe08].

4.2.1 JFugue MusicString


The JFugue MusicString is a specially formatted String object that consists
of music instructions. The MusicString can consist of notes, durations, and
can also control the tempo and instrumentation of a piece.

Notes
In order to specify a note, it is enough to specify the note name: ‘C’, ‘D’,
‘E’, ‘F’, ‘G’, ‘A’, ‘B’, or ‘R’ (to specify silence). After this specification, it
is simple to sharpen (by appending a ‘#’) or flatten (by appending a ‘b’) a
note. Appending a number in the range of 0 – 10 after the complete note
name selects the octave that the note will sound from. The notes available
in JFugue are shown in Appendix B, Figure B.1.

Duration
A duration or length of a note can be appended to the note in the Mu-
sicString after the octave marking. There are 8 different lengths of notes
that can be applied, which are shown in Appendix B, Figure B.2. As more
markings are added to the MusicString, it becomes less readable by humans.
But the format is very easy to build up using String objects in Java. The
strict pattern that is followed in order to create a detailed MusicString is
easy to implement, allowing complicated strings to be created easily.

Instruments & Tempo


Music played by JFugue makes use of MIDI in order to render the Music-
String into a playable form. There are 128 different instruments that are
standard across MIDI devices, although the sound quality varies between
the instruments. To select an instrument, an optional argument is placed at
the beginning of the MusicString ‘Ix’, where x is an integer between 0 and
127.
The tempo of a piece of music can drastically change the way a piece
of music sounds. Generally, a faster piece of music is more stimulating and


creates a more heightened physiological response than a slower piece — even
if the same notes are played [vWv11]. To add a tempo, an optional argument
is prepended to the MusicString, of the form ‘Ty’, where y is an integer.
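Putting these elements together, a complete MusicString can be assembled
and played in a few lines. The snippet follows the JFugue 4 style API
described in [Koe08]; package names may differ between JFugue versions, so
treat this as a sketch.

import org.jfugue.Pattern;
import org.jfugue.Player;

public class MusicStringDemo {
    public static void main(String[] args) {
        // T120: tempo of 120 beats per minute; I0: instrument 0 (piano).
        // C5q: the note C in octave 5, played as a crotchet (quarter note);
        // h and w are a minim (half) and semibreve (whole); R is a rest.
        Pattern pattern = new Pattern("T120 I0 C5q E5q G5q C6q R G5q E5h C5w");
        new Player().play(pattern);
    }
}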

Limitations
There are, however, limitations created by using JFugue. Additional mark-
ings that a musician would typically expect in a piece of music, such as
markings showing accents placed on a note, are not yet supported by JFugue.
This lowers the realism of a composition that can be produced by the algo-
rithms.

4.3 Improving Algorithm Output


In order to investigate how a ‘good’ piece of music can be created, a number
of different algorithms are used. This section briefly discusses each improve-
ment of the complete algorithm. Each composition is judged on the following
qualities, with justification for each found in the specified sections.

• Tonality — is the piece constrained to one key? (section 3.1.1)

• Variety in pitch — but within a playable bound (section 3.1.1)

• Variety in duration of notes (section 3.1.2)

• Ending of piece feels final and conclusive (section 3.1.3)

• Notes are not repeated multiple times (section 3.3.1)

4.3.1 Random Chance (A1)


This algorithm takes a very simplistic view on how notes are selected. A
random integer between 0 and 127 is generated, converted into a MIDI note
and played. Figure 4.1 shows a typical output from this algorithm (the two
lines of music shown are played concurrently, on two staves, as this is how
notes in these octaves would normally be displayed). The output of the
random algorithm is compared against the five qualities of ‘good’ music (as
shown in Table 4.1), and does not achieve any of these qualities due to the
completely random nature of the algorithm.

4.3.2 Basic Markov Modelling (A2)


The Markov algorithm (as explained in Section 3.2) chooses a random
starting point for the composition, and then uses each previous note as a
seed to


Figure 4.1: Typical Output of Random Algorithm

generate the next most likely note. Figure 4.2 shows a typical output from
this algorithm (shown on one stave, as there is no need to go onto the lower
bass stave).
Some of the five qualities for a ‘good’ piece of music have been achieved.
The music now has some degree of tonality, as all the probabilities in the
Markov matrices are realistic, and become increasingly realistic as more
pieces are analysed. Secondly, the Markov models enable the algorithm to
select notes that commonly follow a particular note, achieving the second
aim.

Figure 4.2: Typical Output of Markov Algorithm

4.3.3 Markov & Critic Function (A3)


Applying the Critic function, which minimises the number of repeated notes
in a piece and enforces a cadence ending, drastically improved the quality
of the piece of music. Whilst the difference in pieces cannot truly be seen
by looking at the sheet music (in Figure 4.3), the difference can be heard
when the two pieces are played consecutively.
Now, considering the five qualities that should be aimed for in a com-
position, this algorithm has now ensured there is always a cadence ending,
and that notes are not repeated more than three times consecutively.

4.3.4 Markov, Critic & Variable Note Lengths (A4)


As a final improvement to the algorithm, Markov models were applied to
the note lengths. This is done by maintaining a separate matrix for the


Figure 4.3: Typical Output of Markov Algorithm & Critic Function

probabilities of particular note lengths following another. This models the


effect of having running quavers or semiquavers. This is a common com-
positional feature, as it creates fast moving sections of music that help the
piece of music flow.
The Critic function was adapted to ensure that the application of a
cadence to the end of a piece also tied in with the application of a crotchet
followed by a minim (a one–beat note followed by a two–beat note). This
further adds to the conclusive feel of the end of a composition. A typical
output of the final algorithm is shown in Figure 4.4.
This algorithm achieves all the aims of good music outlined at the be-
ginning of section 4.3.

Figure 4.4: Markov Applied to Notes & Lengths, & Critic Function

4.3.5 Summary
Table 4.1 summarises how each algorithm met the aims of ‘good music’.

Table 4.1: Algorithms Compared Against the Aims of ‘Good’ Music


Aim A1 A2 A3 A4
Tonality × ✓ ✓ ✓
Pitch × ✓ ✓ ✓
Note Lengths × × × ✓
Cadence Ending × × ✓ ✓
Note Repetition × × ✓ ✓

4.4 Parser
An important aspect of the project is the Parser, as it allows the Markov
models to be updated in real time, and adds extra functionality to the overall


piece of software. When the program starts running, the user is presented
with a choice of generating a piece of music or adding information to the
database. The user inputs a piece of music in the format of a MusicString,
and in order to support the addition of Markov–modelled note lengths, the
notes and durations are separated. Temporary Markov matrices are created,
and then combined with the existing matrices.
As an example, consider the matrices used in eqs. (3.6) and (3.7), and
then combine these existing matrices with the temporary matrices created
when parsing ‘Frère Jacques’, the notes for which are:
{G4 A4 B4 G4 G4 A4 B4 G4 B4 C5 D5 B4 C5 D5 D5 E5 D5 C5 B4 G4
D5 E5 D5 C5 B4 G4 G4 D4 G4 G4 D4 G4}
Formulating a matrix based on the newly parsed piece gives:

$$
\begin{array}{c|cccccccccc}
 & C4 & D4 & E4 & F4 & G4 & A4 & B4 & C5 & D5 & E5 \\ \hline
C4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
D4 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 \\
E4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
F4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
G4 & 0 & 2 & 0 & 0 & 3 & 2 & 1 & 0 & 1 & 0 \\
A4 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 \\
B4 & 0 & 0 & 0 & 0 & 4 & 0 & 0 & 2 & 0 & 0 \\
C5 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 2 & 0 \\
D5 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 1 & 2 \\
E5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0
\end{array} \qquad (4.1)
$$
The Parser function performs matrix addition on the new and existing
matrices of integers (eqs. (3.6) and (4.1), with the octave–less notes of
eq. (3.6) treated as octave 4) and then recalculates the new probabilities by
summing the rows of this combined matrix and dividing each element by its
row sum. This yields:

$$
\begin{array}{c|cccccccccc}
 & C4 & D4 & E4 & F4 & G4 & A4 & B4 & C5 & D5 & E5 \\ \hline
C4 & 0.4 & 0 & 0 & 0 & 0.6 & 0 & 0 & 0 & 0 & 0 \\
D4 & 0.375 & 0.25 & 0 & 0 & 0.375 & 0 & 0 & 0 & 0 & 0 \\
E4 & 0 & 0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
F4 & 0 & 0 & 0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 \\
G4 & 0 & 0.105 & 0 & 0.211 & 0.368 & 0.211 & 0.053 & 0 & 0.053 & 0 \\
A4 & 0 & 0 & 0 & 0 & 0.\dot{3} & 0.\dot{3} & 0.\dot{3} & 0 & 0 & 0 \\
B4 & 0 & 0 & 0 & 0 & 0.\dot{6} & 0 & 0 & 0.\dot{3} & 0 & 0 \\
C5 & 0 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0 & 0.5 & 0 \\
D5 & 0 & 0 & 0 & 0 & 0 & 0 & 0.1\dot{6} & 0.\dot{3} & 0.1\dot{6} & 0.\dot{3} \\
E5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0
\end{array} \qquad (4.2)
$$
Thus, the probabilities become more accurate for the notes A4 . . . D5 as
these are the most common notes in the parsed pieces. The new notes are


included in the combined matrix; although their probabilities cannot be
expected to be as accurate, they are still considered in the calculation. The
same process is used to improve the probabilities of the note lengths.
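The merge step itself is simple matrix arithmetic. The sketch below assumes
that the raw counts are kept alongside the probabilities, which is one way to
make the re–normalisation possible; the names are illustrative.

public class MatrixMerger {

    // Add the transition counts parsed from a new piece to the existing
    // counts, then recompute the row-stochastic probability matrix.
    static double[][] merge(int[][] existingCounts, int[][] newCounts) {
        int n = existingCounts.length;
        double[][] probabilities = new double[n][n];
        for (int i = 0; i < n; i++) {
            int rowSum = 0;
            for (int j = 0; j < n; j++) {
                existingCounts[i][j] += newCounts[i][j]; // matrix addition
                rowSum += existingCounts[i][j];
            }
            for (int j = 0; rowSum > 0 && j < n; j++) {
                probabilities[i][j] = (double) existingCounts[i][j] / rowSum;
            }
        }
        return probabilities;
    }
}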


Chapter 5

Testing & Statistical Analysis

This chapter focuses on testing the output of the algorithm, rather than
detailing software tests.
An online survey was created, which linked to a number of computer
and human generated pieces in an open Dropbox folder. The instructions in
the survey told the respondent which pieces to play for each question, and
invited them to select which they thought was computer generated (with an
option of ‘Can’t Decide’ for the indecisive). Each question had a human and
computer generated piece, so there were no ‘trick’ questions. In total, there
were 54 respondents of various musical ability. Raw data can be found in
Appendix C.

5.1 Random Chance


The random output from the program was compared to John Adams’s
‘Phrygian Gates’. This helped to ‘level the playing field’ slightly, as both
pieces were atonal and were unlikely to end on a conclusive note. The table
of results for this question can be found in Table C.1, and the plaintext comments
can be found in full in Appendix C.2.1. Figure 5.1 shows the distribution of
responses.
Piece 1 was highlighted as ‘computer generated’ by 62.96% of the respon-
dents, which was accurate. Looking at the plaintext responses, this seems
to be because Piece 1 was deemed ‘too random’ and lacking ‘…a humanly
perceived notion of harmony’. ‘The intervals appeared more haphazard in
Piece 1’.
However, some respondents said ‘Piece 1 was much more pleasing to
listen to’, but the general consensus showed that Piece 1 was obviously a
computer generated piece of music. There were a few comments pertaining
to the respondents not being ‘entirely sure’ which of the pieces was computer
generated, as ‘both seemed haphazard’.


Figure 5.1: Distribution of Responses (Piece 1 & 2)

5.2 Basic Markov Modelling


The basic Markov output was compared to a more structured minimalist
piece; a stripped down version of ‘New York Counterpoint’ by Steve Reich.
The table of results for this question can be found in Table C.2, and the
plaintext comments can be found in full in Appendix C.2.2. Figure 5.2 shows
the distribution of responses.
Piece 3 was highlighted as computer generated by 48.15% of the respon-
dents, narrowly beating Piece 4 (with 44.44% of the vote). This was the
incorrect choice, as Piece 4 was the computer generated piece; but as the
margin was so narrow, there is no significant statistical evidence suggesting
the algorithm truly confused respondents.
Respondents thought that ‘Piece 3 feels more fluid than Piece 4, more so
that a human would have composed it’ and that they were ‘sure that Piece
3 is generated by a computer because there are sequences of notes which
sound unnatural’. Piece 4 was considered ‘more complex’, with some
suggesting it had been used as inspiration for creating Piece 3.
However, there were comments suggesting that ‘Piece 4 seemed too
chaotic’, and that it didn’t ‘have much of a flow’. This suggests that whilst
the respondents were (marginally) tricked by the pieces in this question, it
may have been down to sheer luck in the choice of the human generated
piece. Based on the comments, the need for the Critic function as a means
of improving the piece becomes apparent.


Figure 5.2: Distribution of Responses (Piece 3 & 4)

5.3 Markov & Critic Function


The output obtained from applying the Markov algorithm and the Critic
function was compared to an excerpt from Vaughan Williams’s ‘English Folk
Song Suite’. The output from the combined algorithm is a lot more so-
phisticated, and hence it should stand up against a truly classical piece of
music. The table of results for this question can be found in Table C.3, and
the plaintext comments can be found in full in Appendix C.2.3. Figure 5.3
shows the distribution of responses.
This was the most surprising response. Piece 6 was highlighted by
37.04% of the respondents, with one–third of all respondents being unable to
decide between the two pieces. ‘Both were pleasant to listen to’, and ‘both
sounded human’ to the respondents. Piece 6, however, was the Vaughan
Williams excerpt. One respondent stated that ‘Piece 5 finished nicely, and
had a nice continual tempo. It seemed to fit a lot of music I had heard in
the past, making it more believable’.
The two pieces were not expected to cause as much consternation as they
did. The fact that one–third of the respondents were confused between the
two pieces, and more than half of the remaining respondents highlighted the
incorrect piece as computer generated is surely a victory for this algorithm.
It was said by a few respondents that ‘Piece 6 sounded a bit synthetic’ and
‘…more robotic’, but the general consensus of the comments was that there
was little or no difference between these pieces.


Figure 5.3: Distribution of Responses (Piece 5 & 6)

5.4 All Improvements


Prior to the creation of the final piece, thirty bars of fifteen distinct Mozart
pieces were parsed in an attempt to get a good idea of his style. The output
from this algorithm was compared to Mozart’s ‘Divertimento in Bb major’,
no part of which was included in the Markov matrices. The table of results
for this question can be found in Table C.4, and the plaintext comments
can be found in full in Appendix C.2.4. Figure 5.4 shows the distribution of
responses.
As this was theoretically the best combination of algorithms, this should
have been the piece to confuse the respondents. But, 62.96% of the respon-
dents correctly identified Piece 8 as the computer generated piece of music,
saying that ‘Piece 8 did not seem to have the melody that Piece 7 had’.
The addition of note lengths seems to have been the downfall here. Con-
sidering some of the plaintext comments, it was observed that ‘The awkward
timing of Piece 8 and the fluid-ness of Piece 7 made me sure Piece 8 was gen-
erated by a computer’. Note lengths applied using Markov modelling seems
to have been a slightly primitive solution. As is discussed in Section 6.1.1,
there are more accurate ways that note lengths could have been applied.


Figure 5.4: Distribution of Responses (Piece 7 & 8)

5.5 Study of Musical Ability


There was a good mix of musical ability in those who responded to the
survey. Only 27.78% of respondents had never played/sung before, with all
other respondents having some degree of musical background. There were,
however, no professional musicians which responded to the survey — even
though some were specifically targeted. Figure 5.5 shows the full distribution
of musical ability amongst the respondents.
Before creating the survey, a positive correlation between musical ability
and the ability to identify a computer generated piece of music was expected.
However, the opposite was observed: the greater the musical exposure, the
worse respondents seemed to do.
Respondents who used to play/sing had the greatest ability to correctly
identify a piece of computer generated music. There were 21 respondents in
this subgroup, and on average they identified 2.52 pieces correctly, with a
standard deviation of 0.86. The small standard error of the mean (0.18 =
0.86/√21) suggests reasonable confidence that the sample mean reflects the
true population mean. Five of these respondents were able to identify all
four computer generated pieces. Depending upon the level of ability they
reached, the respondents in this category could have had a large exposure
to music before they stopped playing, and hence would have an idea of what
makes a piece of music ‘good’.
Coming ‘next’ in ability to identify computer generated music were the


respondents who had never played/sung before. There were a total of 14
respondents in this subgroup. An average of 2.43 pieces were identified
correctly, with a standard deviation of 1.16. However, the standard deviation
and standard error (0.31) were the largest of all the subgroups, suggesting
that there was more variance within their responses. There were two re-
spondents who could correctly identify the four computer generated pieces.
The variance in these responses may be influenced by external factors, such
as music preference. This was not considered by the survey, but respondents
with a preference for electronic music may have been more able to identify
the computer generated music.
Bringing up the rear are the respondents who currently play/sing. There
were 17 respondents in this category. On average, they were able to identify
2.18 pieces correctly, with a standard deviation of 0.55. This was the smallest
standard deviation, and also yielded the smallest standard error, suggesting
that the sample mean is an accurate representation.
There were no respondents in this category that could correctly identify all
four computer generated pieces. The poor accuracy of the most musical
respondents could be explained by ‘over–exposure’. As musicians, they have
to play a wide variety of pieces from obscure to well–known. Thus, their
exposure to unconventional pieces of music would be greater, and could
therefore be skewing their results.

Figure 5.5: Distribution of Musical Ability


Chapter 6

Furthering The Project

This chapter retrospectively considers the development process and
comments on the features of the project that could have been included or
improved.

6.1 Retrospective Look at Development


There are a number of features that would further improve the quality of
the music that the computer can produce. These were not included because
either JFugue does not yet support the particular feature, or there was no
time to add it.

6.1.1 Features To Include


Bars & Phrases
A piece of music is naturally split into bars. These are divisions of the
music that contain a number of notes, with durations that total the number
of beats in a bar (as specified by the time signature).
A phrase is a group of bars that contain a particular motif (or theme)
that is carried through the piece of music. Phrases are generally repeated
throughout a piece, although they may be modulated by a certain interval,
or played by a different instrument.
The addition of bars and phrases would enable the piece of music to have
some repetition — a point that was highlighted by a number of respondents
to the survey. This feature was researched after the development of the
generation algorithms, towards the end of the project, and so there was not
enough time to fully implement it. With hindsight, the algorithms should
have been developed with this in mind from the beginning.


Time Signature
The time signature of a piece of music defines the amount and type of
notes that each bar contains. The time signature is usually expressed as a
fraction, with the numerator showing the number of beats in a bar, and the
denominator showing the division of a semibreve [The15].
Time signatures, again, give an idea of how a piece of music will sound
before it is played. A 3/4 time signature, i.e. 3 crotchets in a bar, suggests
that the music will have a traditional waltz feel to it. A time signature gives
an indication to the player of how to play the piece, and so can heavily
influence the music that is generated.
This would give the composer a little more flexibility in the piece that
would be generated. However, it would be difficult to differentiate between
pieces of music with different time signatures without the implementation
of bars and phrases, as there would be no audible division from one beat (or
bar) to the next.

Rests
A rest represents a period of silence in a bar [The15], and each type of rest
has the same duration as a corresponding type of note. Rests can occur within
a bar or — as is more common in longer pieces of music — rests can last
multiple bars.
Rests in music can dramatically alter the timbre of the piece, as they
enforce silence on certain instruments whilst others continue playing. This
can make a section of music seem delicate or heavy depending on the instru-
ments left playing.
JFugue does support the inclusion of rests: an ‘R’ is simply added to the
MusicString in the same fashion as a note. This was originally implemented
in the Markov models along with the normal notes, but upon testing it was
noted that the rests seemed too forced and obviously placed.
As a result, this feature was removed. However, if a method could be devised
to cleverly add rests into the music, it would certainly make the piece more
realistic. With the implementation of phrases, there could be a slight rest
at the end of a phrase before repetition or modulation.
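A minimal example of JFugue's rest notation, assuming JFugue 4.x (where
the Player class lives directly in the org.jfugue package):

    import org.jfugue.Player;

    public class RestDemo {
        public static void main(String[] args) {
            Player player = new Player();
            // 'Rq' is a crotchet rest and 'Rh' a minim rest; rests take
            // the same duration suffixes (w, h, q, i, s) as ordinary notes.
            player.play("C5q E5q Rq G5q Rh C6w");
        }
    }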

Key Change/Modulation
All of the elements of music listed above are the building blocks of creating
a piece of music. However, writing in only a single key sounds boring and
predictable. Humans expect to hear the same tune repeatedly when listening
to music, so when something does change within the piece, it grabs the
attention of the listener [Dew14].
This change can be achieved by changing the key during the piece, changing
the chord progressions used underneath the main melody, or changing the
melodies used for the different sections of the piece. Composers often
take a main theme and then vary the theme throughout different sections
of the piece in order to keep the listener interested.
There is no way within JFugue to modulate a section of music (or Pat-
tern, in JFugue syntax). This would be an excellent addition to the API
and would easily create the illusion that the key has been changed.
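As a hedged sketch of what such an addition might look like, the effect of
modulation can be approximated outside the API by shifting MIDI note
numbers before the MusicString is assembled. The Modulator class and its
transpose method are invented here; they are not part of JFugue:

    // Invented helper class -- not part of JFugue. Shifting every note
    // of a section up 7 semitones approximates a modulation up a perfect
    // fifth (e.g. C major to G major).
    final class Modulator {
        static int[] transpose(int[] midiNotes, int semitones) {
            int[] shifted = new int[midiNotes.length];
            for (int i = 0; i < midiNotes.length; i++) {
                // Keep results inside the valid MIDI range 0..127.
                shifted[i] = Math.max(0, Math.min(127, midiNotes[i] + semitones));
            }
            return shifted;
        }
    }

A fuller version would also need to re-spell note names correctly in the
new key, which is where most of the real difficulty lies.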

Melody & Harmony Lines


Usually, a whole piece is not played by a single instrument. Rather, there
is a ‘lead’ instrument that plays the main melody line, and other instruments
that play an accompanying tune, called a harmony [Cla93]. If melody and
harmony lines are used, it is important that they are ‘highly integrated’
with each other [Wil06].
It is very easy to create simple harmonies; there are a number of tech-
niques a composer can use in order to create simple and effective harmonies.
More complicated harmonies can be created by employing canonical forms
(as discussed in Chapter 2) and other compositional techniques that shall
not be considered here.
Regrettably, there was not enough time to implement this feature, even
though it would have been very simple: the melody line could have been
transposed by a particular interval. By implementing various harmony lines,
an investigation could have been made into block–chord harmonies against
a more complicated harmony line. It would have been interesting to see
which of these options would have created a better illusion.
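A minimal sketch of this transposition idea using JFugue voices, again
assuming JFugue 4.x; the harmony line here is the melody moved down to
diatonic thirds in C major:

    import org.jfugue.Pattern;
    import org.jfugue.Player;

    public class HarmonyDemo {
        public static void main(String[] args) {
            // Voice 0 carries the melody; voice 1 plays the same line a
            // diatonic third lower, giving a simple block harmony.
            Pattern melody  = new Pattern("V0 C5q D5q E5q G5h");
            Pattern harmony = new Pattern("V1 A4q B4q C5q E5h");

            Pattern song = new Pattern();
            song.add(melody);
            song.add(harmony);
            new Player().play(song);
        }
    }

Since voices in JFugue are separate MIDI tracks that start together, the
two patterns sound simultaneously rather than one after the other.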

6.2 Suggestions for Future Projects


Taking all of these points into consideration, it is clear that many of them
rely on the implementation of bars and phrases. So, if this project were
taken up in the future, I would give the following recommendations:

• Build a data structure to contain music in the form of bars and phrases.
This would initially be difficult, but would enable more complex algo-
rithms to be created.

• Develop a new method of selecting note durations. With the implementation
of bars and phrases, each bar can be constrained to the time signature,
thus giving the piece a better flow (a minimal sketch follows this list).

• Consider the addition of a harmony line, using the implementation of
bars and phrases to help. Within a bar, particular beats can be isolated
and block chords can be created at each important beat.
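As a minimal sketch of the duration-selection suggestion above, using the
invented semibreve-unit convention from the sketches in Section 6.1:
durations are drawn at random but re-drawn whenever the candidate would
overflow the bar, so every bar sums exactly to its capacity.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Hypothetical sketch -- not part of the project's code base.
    class BarFiller {
        // Candidate durations in semibreves: minim, crotchet, quaver.
        private static final double[] DURATIONS = {0.5, 0.25, 0.125};
        private final Random random = new Random();

        // Fill one bar so the durations sum exactly to its capacity.
        // Terminates provided the capacity is a multiple of the smallest
        // duration (true for common signatures such as 3/4 or 4/4).
        List<Double> fillBar(double barCapacity) {
            List<Double> bar = new ArrayList<>();
            double remaining = barCapacity;
            while (remaining > 1e-9) {
                double d;
                do {
                    d = DURATIONS[random.nextInt(DURATIONS.length)];
                } while (d > remaining + 1e-9); // re-draw until the note fits
                bar.add(d);
                remaining -= d;
            }
            return bar;
        }
    }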


6.3 Self Reflection


Looking back, the main problem with development was the order in which
tasks were undertaken. I spent the project focussing on the generation of
notes, and on improving algorithms to generate notes more effectively. This
was important, but should have been a lower priority than establishing the
global structure of the music. By establishing the global structure, I would
have been able to create the idea of bars and phrasing within a piece of
music, as well as improving the way that note durations are selected.
If I could start the project over again, there are a number of things
that I would do differently. I would spend my time more wisely in the first
stages of development, focussing on research and the development of
algorithms. I would then have had more time to implement a global structure
of phrasing. I regret not implementing a harmony line, and should have
spent some time on this; managing my time more wisely early on would have
freed up time for it.
Finally, the biggest thing I have learnt throughout this project is how
Markov models can be implemented. This was an entirely new field to me,
and I found that they were a great help in modelling music. With more
time, I would have liked to implement other ways of generating notes, which
would have led to comparisons between different algorithms, rather than
just different musical aspects.

6.4 Conclusion
In conclusion, there have been some small victories for computer generated
music. In the survey, 87% of all respondents were unable to correctly
identify all four pieces of computer generated music, with the average
respondent identifying 2.39 pieces correctly. However, a number of
respondents commented that the generated pieces were ‘obviously computer
generated’. Whilst some advancements have been made, algorithmic composition
is still not as effective as human composition. Humans naturally look for
progression and emotions in music, and as yet, even a supercomputer is
unable to meet these requirements [Wil13].

Bibliography

[Alp95] Adam Alpern. Techniques for Algorithmic Composition. Summative
report of Final Project. Hampshire College, 1995.
[Ari05] Christopher Ariza. An Open Design for Computer-Aided Algorithmic
Music Composition: athenaCL. 2005, p. 44. isbn: 978–1581122923.
[Bag98] Denis L. Baggi. The Role of Computer Technology in Music and
Musicology. http://www.lim.di.unimi.it/events/ctama/baggi.htm.
[Online; accessed 29-March-2015]. 1998.
[BBC14] BBC. Harmony and Tonality. http://www.bbc.co.uk/schools/
gcsebitesize/music/elements_of_music/harmony_and_tonality2.shtml.
[Online; accessed 30-March-2015]. 2014.
[Bri93] Frederick J. Bridge. Double Counterpoint and Canon. 1893, p. 76.
isbn: 978–1167103896.
[CE15] Oxford Pocket Dictionary of Current English. Oxford Dictionaries.
http://www.oxforddictionaries.com/. [Online; accessed 30-March-2015].
2015.
[Cla93] Carlos Alberto Manrique Clavijo. Basic Musical Concepts — Beat,
Rhythm, Melody and Harmony.
https://www.didjshop.com/BasicMusicalHarmony.html. [Online; accessed
30-March-2015]. 1993.
[Coc01] Dale Cockrell. The New Grove Dictionary of Music and Musicians.
http://www.oxfordmusiconline.com/subscriber/article/grove/music/
L2232381?q=david+cope&search=quick&pos=1&_start=1#firsthit. [Online;
accessed 29-March-2015]. 2001.
[Cop81] David Cope. Experiments in Musical Intelligence.
http://artsites.ucsc.edu/faculty/cope/experiments.htm. [Online; accessed
29-March-2015]. 1981.
[Cop84] David Cope. New Directions in Music. 1984, p. 259. isbn:
978–1577661085.


[Dew14] Colton Dewberry. What Makes Good Music Good?
https://medium.com/@ColtonDewberry/what-makes-good-music-good-f27e9e4b6e9c.
[Online; accessed 30-March-2015]. 2014.
[Dic15] Cambridge Dictionaries. English Definition of “music”.
http://dictionary.cambridge.org/dictionary/british/music. [Online;
accessed 27-March-2015]. 2015.
[Dos03] Martin Dostál. Generating rhythm accompaniment using genetic
algorithms. http://dostal.inf.upol.cz/evm.html. [Online; accessed
29-March-2015]. 2003.
[EB14] The Editors of Encyclopædia Britannica. 12-tone Music.
http://www.britannica.com/EBchecked/topic/610945/12-tone-music.
[Online; accessed 29-March-2015]. 2014.
[Edw11] Michael Edwards. “Algorithmic Composition: Computational
Thinking in Music”. In: Communications of the ACM 54 (2011), pp. 58–67.
doi: 10.1145/1965724.1965742.
[Ess07] Karlheinz Essl. “Algorithmic Composition”. In: The Cambridge
Companion to Electronic Music. 2007, pp. 107–125. isbn: 978–0521688659.
[Est15] Espie Estrella. What is Plainchant?
http://musiced.about.com/od/faqs/f/plainchant.htm. [Online; accessed
30-March-2015]. 2015.
[Fra01] Peter A. Frazer. “The Development of Musical Tuning Systems”.
p. 9. 2001.
[HI59] Lejaren A. Hiller and Leonard M. Isaacson. Experimental Music:
Composition with an Electronic Computer. 1959, p. 177. isbn:
978–0313221583.
[Hol15] John H. Holland. Genetic Algorithms.
http://www2.econ.iastate.edu/tesfatsi/holland.gaintro.htm. [Online;
accessed 29-March-2015]. 2015.
[Jac96] Bruce L. Jacob. “Algorithmic Composition as a Model of
Creativity”. In: Organised Sound 1 (1996), pp. 157–165.
[Koe08] David Koelle. The Complete Guide to JFugue: Programming Music
in Java. 2008, pp. 21–63.
[Lee10] Christopher Lee. C260A-09 Hidden Markov Model Intro.
https://vimeo.com/7175217. [Online; accessed 31-March-2015]. 2010.
[Mau99] John A. Maurer. A Brief History of Algorithmic Composition.
https://ccrma.stanford.edu/~blackrse/algorithm.html. [Online; accessed
29-March-2015]. 1999.
[Moz87] Wolfgang Amadeus Mozart. Musikalisches Würfelspiel, K.516f.
1787.


[Nie09] Gerhard Nierhaus. Algorithmic Composition: Paradigms of
Automated Music Generation. 2009, 36, 38n7. isbn: 978–3211755396.
[Nun11] Alex Di Nunzio. Illiac Suite.
http://www.musicainformatica.org/topics/illiac-suite.php. [Online;
accessed 29-March-2015]. 2011.
[Nun13] Alex Di Nunzio. Musicomp.
http://www.musicainformatica.org/topics/musicomp.php. [Online; accessed
29-March-2015]. 2013.
[Pow14] Victor Powell. Markov Chains: A Visual Explanation.
http://setosa.io/blog/2014/07/26/markov-chains/. [Online; accessed
31-March-2015]. 2014.
[Sar15] John Sargeant. Lecture 3: GRASP principles. 2015.
[SB15] Andrew Sorensen and Andrew Brown. Music Composition in Java.
http://explodingart.com/jmusic/jmtutorial/Markov1.html. [Online;
accessed 30-March-2015]. 2015.
[The15] Music Theory. Note Duration. http://www.musictheory.net/lessons.
[Online; accessed 30-March-2015]. 2015.
[Tri12] Craig Trim. The Chain Rule of Probability.
https://www.ibm.com/developerworks/community/blogs/nlp/entry/
the_chain_rule_of_probability?lang=en. [Online; accessed
31-March-2015]. 2012.
[vWv11] Marjolein van der Zwaag, Joyce Westerink, and Egon van den
Broek. “Emotional and psychophysiological responses to tempo, mode, and
percussiveness”. In: Musicae Scientiae 15 (2011), pp. 250–269. doi:
10.1177/1029864911403364.
[Wik14] Wikipedia. JFugue. http://en.wikipedia.org/wiki/JFugue. [Online;
accessed 31-March-2015]. 2014.
[Wik15] Wikipedia. Chain Rule (Probability).
http://en.wikipedia.org/wiki/Chain_rule_%28probability%29. [Online;
accessed 31-March-2015]. 2015.
[Wil06] Mickie Willis. What Makes “Good Music” Good?
http://www.unconservatory.org/articles/goodmusic.html. [Online;
accessed 30-March-2015]. 2006.
[Wil13] Alistair Wilkins. This classical music was created by a
supercomputer in less than a second. http://io9.com/5973551/
this-classical-music-was-created-by-a-supercomputer-in-less-than-a-second.
[Online; accessed 02-April-2015]. 2013.
[Xen63] Iannis Xenakis. Formalized Music. 1963, p. 77. isbn:
978–1581122923.


Appendix A

Music Terminology

This appendix serves as a glossary for all musical terms that are used
throughout the report. All definitions taken from the Oxford Pocket Dic-
tionary of Current English [CE15].

• Bar: Any of the short sections or measures, typically of equal time value,
into which a piece of music is divided.

• Cadence: A sequence of notes or chords comprising the close of a musical
phrase.

• Chord: A group of (typically three or more) notes sounded together as
a basis of harmony.

• Crotchet: A note with a quarter of the duration of a semibreve.

• Diminished: A chord is diminished if it is built using the root (bottom
note of the chord), the minor third (three half steps above the root), and
the diminished fifth (six half steps above the root).

• Harmony: The use of simultaneous notes or chords that accompany the
main melody.

• Key: The scale upon which a composition is based.

• Key Signature: See Key.

• Measure: See Bar.

• Melody/Melodic: A sequence of single notes that is musically satisfying;
a tune.

• Minim: A note with half the duration of a semibreve.

• Modulation: The act of changing from one key to another.


• Movement: A self-contained part of a musical composition or musical
form.

• Note: A sign or character used to represent a tone, its position and form
indicating the pitch and duration of the tone.

• Octave: The interval between one musical pitch and another with half
or double its frequency.

• Perfect Fifth: A pair of pitches with a frequency ratio of 3 : 2 (or very
nearly so).

• Phrase: A group of consecutive melodic notes, both in composition and
performance. Phrasing gives the performer an idea of the shape and flow
of the music.

• Quaver: A note with half the duration of a crotchet. Two quavers make
up the same length as a crotchet.

• Rest: An interval of silence of a specified duration.

• Round: A minimum of three voices sing the same melody, but each voice
starts at a different time.

• Scale: Any set of musical notes ordered by fundamental frequency or
pitch.

• Semibreve: A note with the longest duration, typically 4 beats long.

• Semi–quaver: A note with half the duration of a quaver. Two semi–
quavers make up the same length as a quaver.

• Timbre: The character or quality of a musical sound or voice as distinct
from its pitch and intensity.

• Time Signature: An indication of rhythm, generally expressed as a
fraction with the denominator defining the beat as a division of a
semibreve and the numerator giving the number of beats in each bar.

• Transposed: The process of moving a collection of notes up or down in
pitch by a certain interval.


Appendix B

JFugue Details

Figure B.1: All Notes Available in JFugue.

Figure B.2: All Note Lengths Available in JFugue.


Appendix C

Raw Data for Chapter 5

C.1 Identifying Compositions


I have annotated the pieces with ‘(C)’ to represent the computer generated
music, and ‘(H)’ to represent the human composition.

Table C.1: Pieces 1 and 2 (All Respondents)


Piece of Music Chosen (%) Chosen (Raw Number)
Piece 1 (C) 62.96 34
Piece 2 (H) 18.52 10
Can’t Decide 18.52 10

Table C.2: Pieces 3 and 4 (All Respondents)


Piece of Music Chosen (%) Chosen (Raw Number)
Piece 3 (H) 48.15 26
Piece 4 (C) 44.44 24
Can’t Decide 7.41 4

Table C.3: Pieces 5 and 6 (All Respondents)


Piece of Music Chosen (%) Chosen (Raw Number)
Piece 5 (C) 29.63 16
Piece 6 (H) 37.04 20
Can’t Decide 33.33 18


Table C.4: Pieces 7 and 8 (All Respondents)


Piece of Music Chosen (%) Chosen (Raw Number)
Piece 7 (H) 22.22 12
Piece 8 (C) 62.96 34
Can’t Decide 14.81 8

C.2 Plaintext Responses


C.2.1 Random Chance
Piece 1 was computer generated, and Piece 2 was human generated.
– “Piece 1 seemed too random to be done by a computer program.”
– “I think both are generated by computer, as both seem haphazard.”
– “First piece seemed very discordant. The second piece had more of a
continual flow, each note seemed to ‘remember’ the previous one.”
– “Piece 2 had more rhythm.”
– “Piece 2 was regular and had a clearer structure.”
– “Piece 1 sounded ploddy and random.”
– “I’m not entirely sure, but I believe Piece 1 was generated by a com-
puter, as it had more ‘jumps’ in the scale. Piece 2 went up and down
the scale and felt more cursive.”
– “Piece 2 has a few more note sequences which sound intentional be-
cause they contain something of a pattern.”
– “Piece 1 sounded like a more randomly generated algorithm.”
– “To me, it sounded as if piece 2 had less variation in pitch between
consecutive notes. This was most notable at the end, where several of
the same notes followed each other. Whether piece 1 was computer
generated or not, it was much more pleasing to listen to!”
– “Piece 1 lacks a humanly perceived notion of harmony, while piece 2
feels bounded by rules of what it can/cannot use.”
– “Both sound a bit unpleasant. Of the two, Piece 2 sounds more or-
ganised. Overall, neither sounds as music, they both sound like a first
attempt that could be made by either a human or a computer.”
– “Piece 2 seems to have more of a regular rhythm/style.”
– “The intervals appeared more haphazard in Piece 1.”


C.2.2 Basic Markov Modelling


Piece 3 was human generated, and Piece 4 was computer generated.
– “The start of Piece 4 seems like a computer program playing around
and seeing how it affects the listener. Piece 3 seemed purposefully
ominous or creepy.”

– “Piece 3 feels more fluid like than Piece 4, more so that a human would
have composed it. The start of Piece 4 sounds quite ‘complex’ (?) but
it doesn’t have as much of a flow.”

– “I’m going to go with Piece 4 being the one made by the computer.
Again, it had less of a flow to it, and the tempo seemed to be a bit
off.”

– “Piece 4 was more complex and didn’t sound as ‘one note per sample’.
So I think Piece 3 was composed by a computer.”

– “Piece 4 was way too chaotic, so composed by a computer.”

– “They both sound plausible, the first one sounds cruel to play, but still
good.”

– “The random timing in Piece 4 made me think it was generated by a
computer.”

– “Piece 3 sounds like it is copying parts of Piece 4.”

– “I’m sure that Piece 3 is generated by a computer because there are
sequences of notes which sound unnatural. I’m not entirely sure about
Piece 4.”

– “Piece 3 sounds better, there are no awkward moments that a human
may introduce to suggest a certain feeling. It sounds as more experienced,
an experience that could easily be given fast to a computer.”

– “Piece 3 has way too random notes compared to 4th one, so I think
that Piece 3 was composed by a computer.”

– “The start of Piece 4 is quite chaotic.”

C.2.3 Markov & Critic Function


Piece 5 was computer generated, and Piece 6 was human generated.
– “These seemed very similar, chord progressions just done in reverse.”

– “Piece 6 feels more repeated as if a computer was randomly selecting
specific pieces to play.”


– “Piece 5 finished nicely, and had a nice continual tempo. It seemed to
fit a lot of music I had heard in the past, making it more believable.
I actually cannot tell — both seemed to match what I would consider
a basic tune.”

– “Piece 6 sounded more simple.”

– “Piece 5 sounded better.”

– “Piece 6 was once again quite repetitive.”

– “Both were pleasant to listen to.”

– “Sorry, they both sound human to me.”

– “Piece 6 seemed to have more repetition in it consistent with an actual
‘rhythm’ (probably not the correct word), so I think that one was
composed by a human.”

– “Piece 5 ended much more clearly on a cadence.”

– “This is one I find hard to decide on. I’m inclined to go for Piece
6 being the one generated by the computer as the ending of Piece 5
ended with the note you’d expect the piece to end and it seemed in
place but I might be totally wrong.”

– “Piece 6 sounds a bit synthetic”

– “Piece 6 seems more robotic in its repetitiveness and lack of fluidity.”

– “Both pieces sounded reasonably similar, wasn’t any clear distinguishing
features in either. Both had a steady rhythm”

– “Both pieces flowed reasonably, cannot discern enough of a difference
to tell”

C.2.4 Markov, Critic & Variable Note Lengths


Piece 7 was human generated, and Piece 8 was computer generated.

– “Piece 7 felt as though it had a rhythm and melody and was consistent
throughout (notes weren’t as random). Piece 8 feels generated because
some of the notes didn’t seem as though they fitted well into the piece.”

– “Piece 7 was a very nice little melody. Piece 8 didn’t seem to have
the melody which Piece 7 had, and I would have been surprised if a
computer was able to generate Piece 7.”

– “Piece 7 seemed more natural.”


– “Piece 8 contained some choices of note that are used in composition
but are relatively rare, hence unlikely that a computer would make
those choices.”

– “Hard to differentiate, guessing Piece 8 was made by the computer but
piece 7 was also pretty.”

– “The awkward timing of Piece 8 and the fluid-ness of Piece 7 made
me sure Piece 8 was generated by a computer.”

– “Piece 7 appears to have some very deliberate phrasing, which was
absent from all the other pieces including piece 8.”

– “Piece 7 is more fluid, seems to give a question and a response along
its sound, while Piece 8 sounds forced.”

– “Piece 7 actually sounds good”

– “The trills in Piece 8 sound like a human would have composed them
rather than being generated”


C.3 Musicality of Respondents

Table C.5: Musical Ability of Respondents


Categories % of Respondents No. of Respondents
Never played/sung 27.78 15
Used to play/sing 40.74 22
Play/sing at amateur level 31.48 17
Play/sing professionally 0.00 0

C.4 Responses by Musical Ability

Table C.6: Pieces 1 and 2 (By Ability)


Piece 1 Piece 2 Can’t Decide
Never Played/Sung 12 0 2
Used to Play/Sing 12 5 5
Still Play/Sing (Amateur) 12 2 2
Play/Sing Professionally 0 0 0

Table C.7: Pieces 3 and 4 (By Ability)


Piece 3 Piece 4 Can’t Decide
Never Played/Sung 7 7 0
Used to Play/Sing 7 12 2
Still Play/Sing (Amateur) 7 7 2
Play/Sing Professionally 0 0 0

Table C.8: Pieces 5 and 6 (By Ability)


Piece 5 Piece 6 Can’t Decide
Never Played/Sung 5 2 7
Used to Play/Sing 7 7 7
Still Play/Sing (Amateur) 5 7 5
Play/Sing Professionally 0 0 0


Table C.9: Pieces 7 and 8 (By Ability)


Piece 7 Piece 8 Can’t Decide
Never Played/Sung 5 7 2
Used to Play/Sing 2 17 2
Still Play/Sing (Amateur) 5 10 2
Play/Sing Professionally 0 0 0

Table C.10: Number of Pieces Identified Correctly (By Ability)


                     Never        Used to     Still Play/Sing  All
                     Played/Sung  Play/Sing   (Amateur)        Responses
Mean                 2.43         2.52        2.18             2.39
Standard Deviation   1.16         0.86        0.55             1.00
Standard Error       0.31         0.18        0.13             0.14
Mode                 3            2           2                2
Median               3            2           2                2
