
Creative Industries Journal



To cite this article: Eric Drott (2020): Copyright, compensation, and commons in the music AI
industry, Creative Industries Journal, DOI: 10.1080/17510694.2020.1839702

To link to this article: https://doi.org/10.1080/17510694.2020.1839702

Published online: 29 Oct 2020.


Copyright, compensation, and commons in the music AI industry
Eric Drott
Butler School of Music, University of Texas at Austin, Austin, TX, USA

ABSTRACT
Since 2015 a number of startups have emerged seeking to commercialise music AI. Two types of firm stand out. One markets services directly to consumers, in the form of adaptive music that responds to contextual and/or activity-related cues; another group markets AI-generated music to cultural producers, in the form of algorithmically-generated, royalty-free production music. Initiatives like these have generated debate among legal scholars about notions of copyright and authorship. But until recently discussion has focused on who (or what) should be awarded rights over the products of so-called ‘expressive AI’: its programmers? Its users? Or the AI itself? Largely overlooked in such debates is the status of another repertoire: not the music put out by an AI, but that which is put into it, the music that constitutes the training set necessary for machine learners to learn. Given the massive datasets mobilised to train machine learners, existing copyright regimes prove inadequate in the face of the questions of distributive justice that such commercial systems raise. Specifically, commercial practices premised on the extraction of value from a special kind of common-pool resource – the shared knowledge of a given music community – demand remedies grounded not in the methodological individualism of copyright law, but commons-based responses instead. As such, the article sketches a couple of alternative models (levy-based trust funds, ownership funds) that could provide a more equitable institutional and economic framework for sustaining the musical commons.

ARTICLE HISTORY
Received 26 March 2020
Accepted 15 October 2020

KEYWORDS
Music; artificial intelligence; machine learning; copyright; commons

Since 2015 there has been a marked growth in the number of startups and technol-
ogy companies seeking to commercialise music produced using artificial intelligence.
The rapid development of this particular corner of the music tech sector is of a piece
with the broader ‘AI fever’ that has spread across the capitalist world in recent years
(Dyer-Witheford, Kjøsen, and Steinhoff 2019). Representatives of this trend are firms
like Amper, Aiva, Endel, Weav, Mubert, Jukedeck, and Boomy, to cite but a few.
Contemporaneous with this profusion of startups have been significant investments in
music AI by larger corporations and platforms. Notable examples include Google, whose Magenta project has released a series of plugins for Ableton, capable of automatically generating melodic and rhythmic patterns (Magenta 2019); Facebook, whose AI research team has developed (and patented) a number of music-related algorithms, including one that transposes tracks from one musical genre to another (Greene 2018); and Spotify, which made news by hiring renowned music information retrieval and AI specialist François Pachet away from Sony Labs to head its recently established Creator Technology Research Lab in 2017 (Titlow 2017).

CONTACT Eric Drott drott@utexas.edu University of Texas at Austin, University Station E3100, Austin, TX 78712-0435, USA.
© 2020 Informa UK Limited, trading as Taylor & Francis Group
Why this sudden surge in commercial music AI? At first glance, the answer to this
question seems self-evident: advances in neural networks, deep learning, adversarial
networks, and other such technologies would appear to be decisive, making commer-
cial ventures in generative and adaptive music feasible in a way they previously were
not. Also important is the commercialization of distributed computer networks via
platforms like Amazon Web Services, Google Cloud, or Microsoft Azure, which means
that smaller firms can now lease the kind of processing power and data storage
capacities that in the past were the exclusive property of large corporations, academia,
or governmental agencies. Yet technological, infrastructural, and commercial develop-
ments such as these do not suffice to explain the growth in the music AI sector of
late. Two other factors are equally important, if not more so.
The first is the increase in both the quality and quantity of training data available
for machine learning. This is partly the result of the massive and ongoing digitization
of human knowledge over the past few decades, partly the result of the steady
encroachment of technologies of digital surveillance and data capture into every cor-
ner of people’s everyday lives (Zuboff 2019). As with other areas of artificial intelli-
gence, improvements in the performance of music AI have generally been less a
function of the development of more sophisticated and specialised algorithms, an
approach identified with symbolic AI (also known as ‘good old fashioned AI’ or
GOFAI), the dominant strain of AI research from the 1960s through the 1980s (Boden
2014). Rather, such improvements have resulted more from the shift towards machine
learning (ML) over the past 25 years, as well as from the massive datasets upon which
ML systems can now be trained. In an interview published shortly after his hiring by
Spotify, Pachet underlines this point, observing how ‘twenty years ago computer sci-
ence was driven by the creation of algorithms, to which you used to feed data for it
to [be] processed in a specific way’. By contrast, the design of sophisticated algorithms
has receded in importance as ML has grown: ‘Nowadays it is the opposite: you start
with a massive amount of data, for which you create a very general algorithm that
looks for patterns’ (Pachet 2018). Machine learning researcher Pedro Domingos puts
the same basic idea somewhat differently, when he observes that a major advantage
machine learning has over designing programs by hand concerns the cost and labor
savings it affords. ‘Programming, like all engineering, is a lot of work’, he observes. By
contrast, machine learning is ‘more like farming, which lets nature do most of the
work’. Instead of human programmers performing the arduous task of constructing
programs, machine learners ‘combine knowledge with data to grow programs’
(Domingos 2012, 80). Which explains why massive quantities of data drive improve-
ments in the performance of AI systems: the more data one can feed machine learn-
ers, the more and the better the programs one can grow.

Another key factor driving the growth of commercial AI has been the considerable
financial investment that has been staked on such ventures. This is of a piece with the
massive injection of capital into AI-related research and development that has taken
place since 2010, itself tied to broader political-economic tendencies: a growing mass
of surplus capital in search of profitable sites of investment; the long-term decline in
productivity in capitalist economies worldwide; and, concomitantly, the search for
technical or other fixes that might reverse these trends and relaunch a new wave of
capital accumulation (Brenner 2006; Srnicek 2017). For startups in this field, such
financing has come from the usual places: venture capitalists, angel investors, tech
incubators, and so forth. Endel, for instance, has received support from Amazon’s
Alexa Fund (Heater 2018), which provides backing for ‘voice technology innovation’
(Amazon 2019). In the case of Amper, its investors include Advancit Capital
(Crunchbase 2019), a fund co-founded by a member of the board of CBS/Viacom
(Advancit 2015). Similarly Mubert has received support from Funcubator (Kalashnikov
2019), the venture capital arm of FunCorp, whose self-proclaimed mission is to culti-
vate ‘startups that help people spend their free time with better emotions’ (FunCorp
2019). Investors like these point to the considerable interest this new market has
aroused among major players in the tech, media, and entertainment industries. So too
do the partnerships certain startups have forged with incumbent firms in the music
and music technology sectors. Examples include the distribution deal that Endel
reached with Warner Music in early 2019 (Kaye 2019), or the partnership that Amper
negotiated with Chinese music platform Tencent the same year (Dredge 2019a).
This influx of capital into AI in general and music AI specifically represents a wager,
one staked on its future profitability. Whether or not this wager pays off, it is already
having an impact, by reshaping perceptions of what music AI is or should be. Notably,
however, this is not the first attempt to develop commercial systems that analyzed
musical datasets to automatically generate music; one such effort, the ‘Music
Composing Machine’ developed at RCA in the 1950s is examined in greater detail
below, by way of comparison. For the moment, the key difference to be underlined is
that such early initiatives were marginal, with music AI more a focus of academic
research and artistic experimentation than commercial exploitation. By contrast, the
recent growth in this sector suggests that the relation between commercial and non-
commercial uses of AI for music creation is undergoing a reversal. This raises a num-
ber of questions, including questions of distributive justice. For if commercial music AI
does succeed in generating profits for its investors, on what basis will such profits be
derived? At whose expense? To address these questions, this article critically examines
recent legal debates about how authorship, ownership, and copyright are being
reshaped by so-called ‘expressive AI’. Until recently, discussion has focused on who or
what should be awarded rights to works created using machine learning. Their pro-
grammers? Their users? Or the machines themselves? But this focus on who should be
given title to the music that systems put out overshadows an equally important reper-
toire: that which is put into them, the datasets by which machine learners learn. The
importance of such data to machine learning and the supraindividual level at which it
acts call into question copyright’s suitability for addressing the potential economic
harms presented by commercial uses of such technologies. Hence the argument
advanced in the second half of this article: if machine learning is being used by certain
firms to extract value from a special kind of common-pool resource – the shared
knowledge of a given music community – then what is needed is an alternative to the
possessive individualism of copyright law, an alternative that may be found in com-
mons-based practices instead.

‘The music is not a final product’


The nascent music AI industry comprises a wide range of businesses, producing an
equally wide range of goods and services. These include tools for music curation
(Morris 2015; Goldschmitt and Seaver 2019); automated mastering (Sterne and
Razlogova 2019); playlist sequencing (Harwell 2020); A&R and talent scouting (Malt
2018); among other things besides. Within this heterogeneous domain, two types of
businesses stand out. The first consists of companies that harness machine learning to
create what is variously referred to as production or library music. As a rule, such
music is not intended for direct consumption by end users, but is marketed instead to
other cultural producers, typically for use in mixed media products like games, adver-
tisements, or online web content. Startups operating within this business-to-business
space include Jukedeck, Amper, and Aiva.
By contrast, a second type of business targets individual end consumers, with AI
being enlisted for the task of personalizing music. Either existing tracks are remixed
on the fly, in response to changes in one’s environment or actions, or new music is
generated in real time, likewise adjusted in response to a variety of signals (individual
preferences, mood, context, and so forth). An example of this second category is
Endel, which, according to its website, creates ‘personalised, sound-based adaptive
environments that help people focus or relax’. Another is Weav Run, whose app
adjusts tracks according to the cadence of one’s stride whilst walking or running, with
not just the tempo of a track changing in real time, but also its texture, timbre, and
arrangement (Weav Music 2019). A third example is AI Music, whose founder describes
its applications as a means of ‘shape-shifting’ music so that it can adjust to different
listening situations (Dredge 2017b). The same track heard at the beginning of the day,
when one is just waking up, may be rendered with a mellow, acoustic arrangement,
while the same track heard later in the day, when one is at the gym, may be trans-
muted into an uptempo EDM remix.
The clear contrast between these two types of businesses should not lead one to
overlook the many points of continuity between them. Despite differences in intended
market and use, both assign music an essentially accompanimental role. In one, music
provides a soundtrack to other media; in the other, it provides a soundtrack to every-
day activities. In a way, both types of business can be described as generating produc-
tion music, as both cast music as the input for a process of ‘productive consumption’.
For this reason, the description that Terry Pavone, a composer of production music,
has provided for this particular kind of cultural good applies equally well to personal-
ized, adaptive music: ‘The music is not a final product. It’s a piece of a final product
that is intentionally designed to be incomplete’ (Pavone, cited in Lanza 2004, 63). In
production music proper, the end-product is some multimedia object. In adaptive
music, the end-product is the listening subject, understood as the object of the
ongoing work of self-production and social reproduction (Drott 2019).
Another commonality is a heavy reliance on data to train their systems. While the
proprietary nature of the algorithms used by music AI startups makes any hard and
fast claims difficult, it is likely that many if not most of those cited above employ
some form of machine learning. One index of this is their stylistic flexibility. The cap-
acity to generate music in a given style, the ability to ‘transpose’ a track from one
genre to another, is almost certainly not due to some in-built property of the code.
Unlike GOFAI systems, machine learners require only different rounds of supervised training to approximate different genres or styles, using discrete datasets made up of pieces drawn from one musical genre or another.
Vital in this regard is the ready availability of music files that can be scraped from
the web to use in training ML systems. A massive number of recordings is publicly
accessible via audio and video sharing websites (e.g. SoundCloud, YouTube). Yet the
aggressive legal actions the record industry has pursued to deter so-called music
‘piracy’ since the rise of online file sharing in the late 1990s make recourse to such
material appear legally risky. For this reason, the numerous MIDI libraries that are eas-
ily found on the internet would seem a far less fraught resource for the purposes of
training music AI. Short for ‘Musical Instrument Digital Interface’, MIDI is a nonpropriet-
ary technical standard developed by a consortium of music instrument manufacturers,
in an effort to facilitate the interoperability of digital instruments (synthesisers, drum
machines, computers, notation software, etc.). Commercially launched in 1983, the
MIDI protocol stores and communicates information about different musical events,
including pitch, velocity, vibrato, duration, and so forth (Theberge 1997; Diduck 2018).
Among other things, MIDI makes it possible to represent entire compositions in the
form of digital data, which has in turn given rise to a number of websites (midiworld.com, bitmidi.com, freemidi.org, mididb.com, etc.), where visitors can download free of
charge MIDI scores of music both old and new, both under copyright and in the pub-
lic domain. Given the distinction drawn in most copyright regimes between an under-
lying composition versus specific recordings thereof, and given the weaker position of
the music publishing industry relative to the music recording industry at present (itself
a reflection of the ascendancy of recordings over notated sheet music within much
contemporary popular music), the risk of legal liability associated with the use of MIDI
files to train ML systems would seem considerably lower than that associated with the
use of sound recordings – despite the fact that in many instances both composition
and recording may very well fall under copyright protection. For this reason, MIDI
libraries would appear to be an attractive source of training data: not only are they
less likely to attract legal action from rights holders, but they also offer ML operators a
vast trove of music on which their AIs can be trained, in a variety of different styles.
Free-scores.com, for instance, claims to host approximately 25,000 MIDI files on its site,
in such genres as klezmer, tango, and the blues, while bitmidi.com boasts roughly
113,000 MIDI files, from an equally diverse range of genres and styles.
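What makes such MIDI libraries so tractable as training data is that the protocol reduces each musical event to a handful of small numbers. The following is a minimal illustrative sketch of the MIDI 1.0 note-on message layout (a status byte of 0x90 plus the channel, then a 7-bit note number and a 7-bit velocity); the function names are my own, and only the byte layout comes from the standard:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Encode a MIDI 1.0 note-on message: status byte 0x90 | channel,
    then a 7-bit note number (middle C = 60) and a 7-bit velocity."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

def decode(msg: bytes) -> tuple[int, int, int]:
    """Recover (channel, note, velocity) from a note-on message."""
    status, note, velocity = msg
    assert status & 0xF0 == 0x90, "not a note-on message"
    return (status & 0x0F, note, velocity)

msg = note_on(channel=0, note=60, velocity=100)  # middle C, moderately loud
print(msg.hex())    # → "903c64"
print(decode(msg))  # → (0, 60, 100)
```

Three bytes per note event, regardless of genre or era: it is this uniformity that lets an entire score be treated as a sequence of symbols ready for statistical modelling.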
The massive amounts of music data and music-as-data that AI developers now have at their disposal are largely responsible for making contemporary music AI systems viable, both commercially and aesthetically. Here a historical comparison is illuminating, since the idea of analyzing the statistical distribution of events within a corpus to produce new works in the same style has been around for
as long as information theory itself. Particularly influential in this connection was
Claude Shannon’s appeal to Markov processes in his landmark text A Mathematical
Theory of Communication (Shannon 1948). For the most part, Shannon applied Markov’s
ideas to the task of estimating the probabilities for different sequences of letters. Doing
so would allow one to determine the most efficient method of encoding and transmit-
ting messages – which, as Jonathan Sterne has pointed out, was of the utmost import-
ance to Shannon’s employer, AT&T, as a way of reducing the need to invest in
expensive infrastructure (Sterne 2012). Notably, however, he also used the same
approach to generate ersatz sequences from scratch, as proof of concept. Shannon’s
example inspired a number of similar efforts in this direction, many in domains other
than speech. A case in point is the Music Composing Machine, a device designed by
Harry Olson and Herbert Belar at RCA in the early 1950s, as an adjunct to their better-known project of constructing one of the first electronic synthesisers (Belar and
Olson 1961). Space does not permit a full description of the device, but worth highlight-
ing is Olson’s and Belar’s direct citation of Shannon’s work in information theory, and
their application of Markov processes to music composition. Following Shannon’s
example, Olson and Belar used n-grams of varying lengths to map out the probability
that a particular musical event would follow another. For their corpus, Olson and Belar
used songs by the 19th-century American composer Stephen Foster. Lacking technolo-
gies that could analyze Foster’s music for them, Olson and Belar had to do this them-
selves, a laborious process that restricted the size of their corpus. Of Foster’s more than
200 songs, only eleven formed Olson’s and Belar’s sample, a limitation that hampered
the quality of the melodies thus generated. (Compare this to the music AI startup AIVA,
which boasts of having a database of 30,000 plus MIDI files for training its AI.) Given
these limitations, Olson and Belar were careful not to make exaggerated claims on
behalf of their device. In one internal memo they noted that while the device
wasn’t guaranteed to produce melodies evincing a high level of aesthetic achievement,
it could produce material ‘which should at least be pleasing in the order of background
music’ – thus adumbrating the present-day turn to computer-generated produc-
tion music.
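The Markov-chain procedure that Shannon demonstrated and that Olson and Belar adapted to melody can be sketched in a few lines of Python. This is an illustrative toy, not a reconstruction of the RCA machine: the corpus, the pitch names, and the choice of a bigram (order-one) model are all invented for the example.

```python
import random
from collections import defaultdict

# Toy corpus: melodies represented as sequences of pitch names.
corpus = [
    ["C", "D", "E", "C", "G", "E", "D", "C"],
    ["E", "D", "C", "D", "E", "E", "E", "D"],
    ["C", "E", "G", "E", "D", "C", "D", "C"],
]

# Tabulate bigram transitions: how often each pitch follows another.
# Repeated successors stay in the list, so random.choice samples them
# in proportion to their observed frequency.
transitions = defaultdict(list)
for melody in corpus:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start, length, rng=random):
    """Random-walk a new melody through the bigram transition table."""
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

print(generate("C", 8))
```

Every melody the walk produces is novel in its overall shape yet built entirely from note-to-note successions found in the corpus, which is why the size and variety of the corpus so directly constrain the quality of the output, for Olson and Belar's eleven Foster songs no less than for today's datasets of tens of thousands of MIDI files.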
A third point of continuity between music AI startups concerns their business
model. As a rule, these firms do not sell products to clients, but services. A case in
point is Mubert, a company that bridges the consumer and business-to-business mar-
kets. For brands, content producers, and/or brick-and-mortar businesses, Mubert offers
a range of subscription plans. For a flat monthly fee, one can generate as much
bespoke music as one needs or desires ‘for free’ – which is to say, without having to
pay any additional royalties or licensing fees. For individual consumers, by contrast,
the service is nominally free; all that is necessary is to download the app and select
the style of music or activity it is supposed to accompany. Of course, the fact that one
doesn’t pay with money doesn’t mean one isn’t paying in some other way, using
some other currency. As with so many other digital services, payment is still being
made: it is simply that it is being made in the form of personal data rather than hard
cash. This is made abundantly clear in Mubert’s official privacy policy. For the most
part, the policy’s language is designed to reassure users that their information will in most cases not be shared with third parties, at least not without their
express consent. There are exceptions, however. One of these is the eventual acquisi-
tion of Mubert by some other company, in which case (the policy notes) ‘your infor-
mation will [ … ] be part of the assets transferred’ (Mubert 2018). The fungibility of
data and capital is here made explicit (Sadowski 2019).
One framework for understanding this model is provided by the platform, arguably
the dominant organizational form of digital capitalism (Srnicek 2017; Langley and
Leyshon 2017). Standard definitions characterise platforms as infrastructures connect-
ing two or more user groups. As the hub mediating the transactions of these parties,
platform operators are in a position not only to gather data on the exchanges that
take place in the virtual spaces they govern, but also to charge users a toll to access
these spaces. This arrangement recalls Carlo Vercellone’s account of the ‘becoming
rent of capital’ under neoliberalism, as the production and sale of commodities is sub-
ordinated to the extraction of monopoly rent, typically in exchange for access to
resources made scarce through acts of enclosure.1 The recorded music industry is one
place where this transition from ownership to access is in evidence, as manifest in
streaming music platforms (Morris 2015; Arditi 2018). Another, less visible place where
the same transition can be seen is in traditional production music companies, which
have also adopted a platform model. But in contrast to these and other, more familiar
digital platforms (like Facebook or Amazon), the platformization of commercial music
AI doesn’t involve one group of users being connected to another, but instead a
group of users being connected to an AI system. In all other respects, however, the parallelism holds. What is being paid for, in brief, is not the music as such, but access to the
machine responsible for generating it.

Debating copyright for creative AI


Replacing a one-time exchange with a durable relation of dependency has obvious
economic benefits for platform operators. But the platform model has an additional
benefit for music AI startups in particular. Namely, it allows them to sidestep a number
of unresolved questions concerning the ownership of computer-generated works. In
both US and European law the copyright status of such creations remains murky. As a
result, it is not clear that companies like Amper, Jukedeck, and the like have a title to
the music their AI systems generate, even though their title to the AI systems respon-
sible for generating this music is unimpeachable. Such uncertainties have sparked
much debate in legal circles over whether works generated primarily by means of AI
should be copyrighted and, if so, who or what should be granted these rights.
Driving these debates is the fact that the products of expressive AI satisfy some of the legal
requirements for copyright assignment while failing to satisfy others. On the one hand,
music produced with machine learners has little trouble crossing the originality thresh-
old, a mainstay of most copyright regimes. Even Olson and Belar’s Music Composing
Machine could generate melodies that differed enough from their models that they
could have passed this test, had they ever been put to it. Even if vestigial figures and
melodic formulae characteristic of Foster’s songs could be located in the tunes the
machine turned out, their recombination in different configurations would have pushed
it over the low bar set for creativity in US jurisprudence. Nor would it have made a dif-
ference that the resulting music was clearly derivative of Foster’s: significantly, style isn’t
copyrightable under US copyright law, only the specific expressions this style assumes
(a point to which I will return). With neural networks, evolutionary computing, and other
techniques better able to extrapolate from inputs to produce unpredictable outputs, the
likelihood of satisfying the originality criterion is greater still.
On the other hand, other legal requirements present greater difficulties to AI-pro-
duced works. Perhaps the greatest hurdle is the default assumption that creators of
copyrightable works be humans. In the United States, this requirement isn’t formally
enshrined in law, though the agency responsible for administering copyright refuses to
confer this status on works produced by nonhuman actors. Likewise, in UK law
machines aren’t eligible for copyright assignment, with rights defaulting to their creators
or owners. What is more, there’s little appetite within legal circles for reforming statutes
to grant machines rights on the works they produce. One reason for this is because
copyright law rests on certain assumptions about human behavior and the kinds of
incentives that influence it, assumptions that don’t apply to machines, even allegedly
intelligent ones. As legal scholar Shlomit Yanisky-Ravid notes, one of the principal
underpinnings of copyright in the United States is the economic incentive it gives crea-
tors. Granting authors a temporary monopoly over their creations is regarded as an
important spur to creation, one that ideally harmonises individual and general interest:
artists are rewarded for their investments of time, effort, and resources, while the harms
the public suffers as a result of the limits placed on their ability to access, consume, or
otherwise use copyrighted works is offset by the benefits it receives from the increased
number of works produced. Yet AIs, unlike humans, are insensible to such rewards,
whether monetary or symbolic. Insofar as ‘machines need no incentive to work’, writes
Yanisky-Ravid, copyrighting their work provides ‘no benefit but does hamper the public’s ability [to] enjoy [this] work’ (2017, 702). By radically reconfiguring the calculus of benefits and costs, the assignment of copyright to AI systems would, in Yanisky-Ravid’s
estimation, ‘pose an existential threat to the entire copyright regime’ (703).
Given the overall lack of appetite for designating machines the (legal) authors of
the works they generate, debates about AI authorship tend to revolve around who is
the most suitable among the human actors eligible for copyright assignment. One
candidate is a machine’s programmers. Another is its proprietors. A third is the end
user at whose behest a work is produced. A variety of legal doctrines have been mobi-
lised in support of each of these candidates. Some have appealed to utilitarian theory
to buoy the claims of programmers and/or owners of AI systems, arguing that grant-
ing them rights to AI-generated works will encourage the continued growth of the AI
sector (Hristov 2017). Others have used the same theory to argue that authorship
should devolve instead to the end users of creative AI, in order to incentivise the cre-
ation not of AI systems but of new works of art (Brown 2018). Still others have looked
to the ‘joint authorship’ exception in US copyright law to adjudicate the claims of vari-
ous parties, arguing that the most equitable outcome is one that distributes author-
ship among human and nonhuman actors (Grubow 2018). Yet another group of
scholars have made recourse to another exception in US copyright law: specifically,
the ‘Works Made for Hire’ (WMFH) exception, according to which artists hired by some
individual or business may agree to transfer legal authorship of the works they create
to their employer, as part of the terms of their employment (Bridy 2012). And should
none of the many proposed solutions to the question of AI authorship end up
enshrined in law, there is still another possibility: that for lack of an identifiable human
author, AI-generated works will default to the public domain (Gervais 2019). Indeed,
given the state of copyright law in the United States as well as the European Union,
this is what should happen in principle; to date, however, the principle has yet to be
put to the test. Yet it is precisely the desire to forestall an eventual finding that AI-produced art belongs to the public domain that seems to explain the urgency with which the legal community has sought to identify some candidate, any candidate, who can be designated as the author of such works.
The merits or shortcomings of individual interventions in these debates are less important to the present argument than what they collectively reveal about the unsettled legal status of AI-generated works. Two points are worth highlighting. First, the legal uncertainty that hangs
over such creations clarifies why many music AI firms have adopted the platform model. That
these companies have title to the algorithms they developed isn’t in dispute; what is in dispute,
however, is whether the works their systems produce belong to them, some other party, or
nobody at all. Hence the logic behind ceding any claim to such works, while jealously guarding
access to the system responsible for producing them. As is the case with streaming platforms,
music is strategically decommodified to help commodify something else, in this instance music
AI as a service. Second, the number of actors nominated to fill the authorial role allegedly
vacated by creative AI suggests why the authorship of computer-generated works proves so
contentious for legal scholars. That so many people have a hand in the programming, training,
and operation of expressive AI means that there are an equally large number of stakeholders
who have some claim upon the cultural goods they put out. This is especially true of AI that
relies on some version of machine learning. Yanisky-Ravid cites the contributions made to such
systems by a host of actors, in addition to their programmers; these include ‘data suppliers,
trainers, feedback suppliers, holders of the AI system, system operators, employers or investors,
the public, and the government’ (2017, 692). Even this extensive list isn’t exhaustive. Two EU
policy analysts, Jean-Marc Deltorn and Franck Macrez, allude to the contribution made by a
pair of other actors: the curator(s) of the training sets used to ‘teach’ an AI how to produce the
kind of music or cultural expression one wishes it to produce; and the authors of the material
used as training data, the creators of the music, images, texts, and so forth that function as the
model to be emulated and/or generalised from (2018, 17–19). Viewed from this angle, the trou-
ble creative AI presents to contemporary copyright regimes doesn’t stem from a lack of human
involvement, as machines become creative agents in their own right. Rather, the trouble stems
from an excess of human involvement, from the complexity of the network of human and non-
human actors in which creative AI is entangled.

From copyright to commons


Taking a more expansive view of the actors involved in what’s marketed as AI-gener-
ated music has the benefit of shifting attention away from what comes out of
machine learners, and redirecting it towards what goes into them. Indeed, much of
the efficacy – and hence much of the value – of machine learners depends on the
datasets on which they are trained. Recall Pachet’s remarks cited earlier: elsewhere in
the same interview he describes ‘a huge shift in what is most valuable in the technol-
ogy industry. It used to be algorithms, but now it is data’ (Pachet 2018). Recall as well
Domingos’s metaphor of ‘growing’ programs instead of writing them line by line, com-
mand by command. Within this metaphor, training data is positioned as the soil and
water that nourishes programs and that encourages their growth. But if training data
contribute much to the success and value of expressive AI, how are we to measure
this contribution? And what might those responsible for creating this material be due
as a result?
Answers to these questions within contemporary copyright regimes will depend on
whether the material being used for training is under copyright and, if so, whether its
use is somehow exempt. Crucially, developers of commercial systems, unlike academic
researchers, aren’t obliged to reveal the sources of their training data. And for the
most part they don’t volunteer this information. In public pronouncements, most firms
take care to mention only repertoire already in the public domain. In a TED talk, Pierre
Barreau, CEO of AIVA, boasted that his company’s neural networks were trained on
‘30,000 scores of history’s greatest’, including ‘the likes of Mozart and Beethoven’
(Barreau 2018). Likewise, the one dataset that Jukedeck has publicly cited is a corpus
of English folksongs (Jukedeck 2017). Yet evidence suggests that both firms rely on
more than just public domain works. Both AIVA and Jukedeck offer their clients a choice of pop genres, some of which – like popsynth and drum 'n' bass – emerged too recently for works in them to have fallen into the public domain under the copyright term currently in force in the EU, UK, and US. That being the case, it is very unlikely that either firm relied exclusively on public domain works to train its system to generate stylistically appropriate songs within these and other contemporary music genres.
It seems likely, then, that at least some of the training data used by AI startups falls
under copyright. And this suggests that still another party might have some claim on
works produced by commercial music AI: the producers of the material used to train
the machine learners in question. This point has not been lost on artists and advocacy
groups. Artist Rights Watch, for one, has called for musicians to invoke the marketing
restriction clause in recording and publishing contracts to refuse their music’s use ‘for
AI purposes of any kind’ (Castle 2017). To justify their hard line, Artist Rights Watch
notes that computer generated works ‘can be sold or licensed at a very low price’,
undercutting the market for human-composed production music (Castle 2017). A related tack is to maintain that any piece generated using music under copyright be considered a 'derivative work', and hence an infringement. While the compensation rights holders would receive for such derivative works would no doubt be infinitesimally small, it would at least be something.
To forestall any potential legal claims they might face, music AI firms have a num-
ber of defensive measures they may exercise. One is to ensure they have rights to the
works they use to train their systems: such is the case with Endel, whose system is
trained on a large number of musical ‘stems’ that an in-house composer in their
employ creates for hire. Barring that, companies can assert some sort of exemption.
This is what Google has done in the United States, in connection with its massive
digitization of copyrighted texts for Google Books. Its argument, upheld in courts in
the US legal system, is that its use of expressive works under copyright is itself nonex-
pressive, and thus sufficiently transformative to fall under the fair use exception of
American copyright law (Sag 2019). Another way of skirting liability is to get individual
artists to unwittingly sign over the rights to their music to the platforms they use to
distribute their music. Buried in Spotify for Artists’ terms of service, for instance, is a
clause granting the company ‘a non-exclusive, transferable, sub-licensable, royalty-free,
irrevocable, fully paid, worldwide license to [ … ] create derivative works from [ … ]
any of your User Content’ (Spotify 2017). While this provision on its face applies only
to nonmusical content that artists might upload to the service (photos, bios, etc.), it
notably remained in place, unaltered, during the period in 2018-19 when Spotify
allowed musicians to bypass labels and aggregators and upload their music directly to
the platform. Lacking the leverage and legal resources that major labels possess, inde-
pendent artists who availed themselves of this service may have inadvertently pro-
vided Spotify with the means to transform both their music and themselves into
artistic surplus.
Beyond such affirmative defenses, it is not clear whether artists and rights holders
who invoke marketing restrictions or claim that an AI-generated work is derivative
would find much traction under existing copyright regimes. To begin with, to show a
piece is derivative requires demonstrating ‘substantial similarity’, a key test for copy-
right infringement. But the musical output of machine learners is, in the words of
Deltorn and Macrez, more ‘a recomposition than a mere copy or [ … ] audio collage’,
which means that the ‘final work will not – in most instances – be found ‘substantially
similar’ to any of the materials used to train the system’ (2018, 19). This in turn ren-
ders a finding of market harm unlikely – another key test of infringement. Still another
difficulty is practical, having to do with the challenge of identifying all the relevant
copyright holders whose creations have been used to train an AI, determining the
contribution their works made to its training, and apportioning royalties accordingly.
Such difficulties would appear to rule out, either in principle or in practice, any
claim that authors of training data might have on works generated by a machine
learner trained on their music. But the challenges confronting such claims lie more than anything else in the inadequacies of existing copyright law, inadequacies that
have become all the more glaring in the face of the epistemic and ontological shifts
heralded by machine learning. These shortcomings follow from copyright’s commit-
ment to the individual as the basic unit of reference, be it the individual author, the
individual owner, or the individual work. And yet much of machine learning’s power
derives from its capacity to generalise, to find functions that model the aggregate pat-
terns that emerge across an aggregation of items in a dataset (Mackenzie 2017, 82).
Copyright law’s grounding in a tradition of possessive individualism makes it ill-
equipped to deal with systems that model the behavior of collective phenomena –
including the collective musical knowledge embodied in a corpus. Hence, in order to
address the questions of political economy presented by the growing commercializa-
tion of music AI, we need a framework grounded not in individual property, but in
forms of common property. Fortunately, the practices and discourses that have devel-
oped around commons and common pool resources provide such a framework. From
this standpoint, what firms like AIVA, Jukedeck, and the like are engaged in is a form
of free riding, appropriating common musical knowledge – like the shared conven-
tions governing a genre – to produce a technical resource over which they have exclu-
sive title. But commercial AI applications do more than enclose a resource and charge
a toll for its use; in addition, they put such commons to work. Here we might recall
Domingos’s metaphor of machine learning as being akin to farming. This metaphor is
revealing, though not necessarily in the way that Domingos imagines. For just as farm-
ing relies on the unpaid energy and work of so-called ‘ecosystem services’, treating
these as a ‘free gift of nature’, so too does commercial creative AI rely on the unpaid
work of musical ecosystems, treating the latter as a free gift of human nature. If,
according to Domingos, less work is involved in ‘growing’ programs compared to writ-
ing them, this is because it is somebody else who is doing the work.
How, then, does approaching commercial music AI from the standpoint of com-
mons rather than copyright reframe the questions of distributive justice it poses? First,
consider the ‘substantial similarity’ requirement used to determine whether or not a
work is derivative, and, if so, whether the author of the original is due some compen-
sation and/or credit. At issue is how similarity is being defined. Under current copy-
right law (in the United States, at least), this is a matter of showing a demonstrable
resemblance between an individual creative work and another. Ruled out is stylistic
similarity, similarity at a general rather than specific level. Also ruled out is similarity
between one group of works and another. Yet what the systems used by firms like
AIVA and Amper glean from a given corpus are not specific elements; rather, what
their systems assimilate is the common musical knowledge that emerges across the
totality of items in a corpus. Such common knowledge is, by definition, only ever partially manifest in any individual work. Hence the chances that a particular stretch of music will be shared between a single input and a single output are slim. For this reason, note-to-note resemblances are less significant than corpus-to-corpus ones.
What is being appropriated is not the work of the individual but of the collective that
is embodied in the data set. Worth recalling in this connection is that most commer-
cial music AI startups operate in the field of production music, which is to say in gen-
erating music characterised by its lack of characteristic features. This is music defined
by its genericity, which is precisely what enables it to accompany other media and
activities so effectively. If we are to look for evidence of ‘substantial similarity’ within
such a repertoire, we should look at the same level at which it was produced and at
which it works: namely, at the generic level.
Now consider a second test of copyright infringement: that of market harm. Again,
within current copyright regimes this test applies only at the level of individual works.
A prominent case in point is Robin Thicke and Pharrell Williams’ 2013 song ‘Blurred
Lines’. Following a lengthy lawsuit, in 2015 a jury found the two musicians guilty of
having infringed upon Marvin Gaye’s 1977 hit ‘Got to Give It Up’. The $7.3 million that
the jury awarded to Gaye’s family (later reduced to $5 million) provided compensation
for the fact that one song (‘Blurred Lines’) had impinged upon the market for another
(‘Got to Give It Up’) (Legaspi 2018). By contrast, the potential market harms of com-
mercial AI are not located at an individual level, but at a population level. As legal
scholar Benjamin Sobel observes, ‘expressive machine learning [ … ] threaten[s] to
usurp the position of authors themselves, rather than supplanting individual works’
(Sobel 2017, 75). Harm, in other words, is inflicted on the market as a whole, rather than on the specific individual whose sales might be depressed by infringement.
Whence Sobel’s conclusion that companies like Jukedeck threaten to ‘deprive authors of
markets they currently exploit’ (79). The same basic dynamic holds if we translate from
the language of markets to that of commons. For what the appropriation and exploit-
ation of common musical knowledge permits is a withdrawal of some part of the means
by which a given musical commons reproduces itself. Value that might have flowed
back into and helped to replenish a particular field of musical practice is diverted into
the circuit of capital, which it helps to reproduce instead (DeAngelis 2017).
Consider now a final problem faced by existing copyright regimes, that of allocating
payments to the authors of training data. Here, too, the difficulty may result from
focusing on the wrong level. If it is difficult to isolate the contribution made by any
single input, this is because no input contributes in isolation. Olson and Belar’s Music
Composing Machine provides a straightforward illustration of this fact. Among the
charts they used to program the machine is one listing the frequency of all the tri-
grams in their corpus. While certain trigrams are more probable and others less so, it’s
not the case that an improbable sequence (like E4-D4-C#5) somehow counts for less,
or that the single song where it appears contributes less than others. The song’s con-
tribution isn’t the pattern, but its impact on the overall distribution of probabilities.
The same principle holds for machine learning techniques, despite the distance sepa-
rating them from the Markov processes employed by Olson and Belar. What ties them
together is a reliance on what Adrian Mackenzie refers to as ‘probabilization’, as
‘formalisms derived from statistics’ have increasingly served to ‘anchor [the] basic
operations of machine learning in probability’ (2017, 107). Under these conditions, try-
ing to identify what a single item contributes would be to miss the point. The same is
true of trying to determine on this basis what an individual author is due. Far more
productive would be trying to determine what the musical community whose works
form a corpus might be owed instead.
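A toy sketch can illustrate the point (the melodies are invented, and this is a minimal stand-in for Olson and Belar's frequency charts, not a reconstruction of them). Each song's contribution is its effect on the pooled distribution, so withdrawing one song shifts the probabilities even of trigrams it never contained:

```python
from collections import Counter

def trigram_distribution(corpus):
    """Aggregate trigram counts across all songs into one probability
    distribution: no song contributes in isolation."""
    counts = Counter(t for song in corpus for t in zip(song, song[1:], song[2:]))
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}

# Toy corpus of invented melodies; the third contains the rare E4-D4-C#5.
corpus = [
    ["C4", "D4", "E4", "D4", "C4"],
    ["C4", "D4", "E4", "F4", "E4"],
    ["E4", "D4", "C#5", "D4", "E4"],
]

common = ("C4", "D4", "E4")
with_third = trigram_distribution(corpus)[common]         # 2 of 9 trigrams
without_third = trigram_distribution(corpus[:2])[common]  # 2 of 6 trigrams

# Removing the third song raises the probability of a trigram it never
# contained: its contribution is distributional, not local.
print(with_third, without_third)
```

The same holds, at far greater scale, for the probabilized formalisms of machine learning: what each item 'contributes' is inseparable from the aggregate.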
If compensation isn’t to be directed to individual authors in the form of minuscule
royalty payments, but to the larger collectivity of which they form a part, how might
such commons-level compensation be effected? History provides one model for how to respond, in the form of the Music Performance Trust Fund established following the U.S.
recording bans of 1942–44 and 1948, which pitted the American Federation of
Musicians, the principal musicians’ union in the United States, against radio and record
companies (Kraft 1996; Anderson 2004). At the heart of the conflict was another, ear-
lier moment of technologically-induced displacement of musical labor. In this instance,
it was the development of sound film in the late 1920s, remote radio broadcasts in
the 1930s, and other technologies of mass reproduction and broadcast that dimin-
ished employment opportunities for working performers in the years leading up to
World War II. (Notably, the two recording bans provided some of the impetus for
RCA's development of technologies like the Synthesiser and the Music Composing Machine: one of the main benefits touted by the engineers who designed the RCA Synthesiser was the potential savings in musical labor it promised to provide). To offset the job losses suffered by musicians as a result of sound reproduction's
commercial applications, the AFM proposed a levy on all recordings and radio tran-
scriptions. The proceeds would be directed to the Trust Fund, which then distributed
the monies raised to pay for free concerts across North America. Not only did this pro-
vide underemployed musicians living outside major urban areas with paid work,
redressing geographic disparities in cultural participation, but it also diminished some-
what the winner-take-all tendencies that technologies of mass reproduction exacer-
bate. The Performance Trust Fund, in short, sought to reclaim some of the value that
technologies of mechanical reproduction had diverted to the owners of capital, chan-
neling it back into the musical commons from which it derived.
Yet a response of the sort proposed by the Performance Trust Fund has serious lim-
itations. Crucially, the MPTF presupposes – and therefore requires – the persistence of
the very processes of value capture and extraction whose effects it seeks to amelior-
ate. It presumes, in other words, that the commercial use of labor-saving technologies
like sound reproduction (or, more recently, AI) will remain unchecked; that their ten-
dency to depress compensation or displace musical labor will likewise continue apace;
and that the only viable response is to redistribute after the fact the value that dispro-
portionately flows to the owners of these technologies. It addresses the symptom but
not the disease. For this reason, it might be better to look elsewhere, to policy
responses that are less beholden to a logic of ex post facto redistribution. One possi-
bility would be the creation of some kind of ownership fund, either targeting individ-
ual firms or the music technology sector more broadly. In line with other worker
ownership funds proposed over the years (Guinan 2019; Gowan 2019), shares might
be issued to a body representing those musicians whose creative output is exploited
not just by music AI companies but by other music tech firms as well. The main appeal
of such funds is that they redistribute not just wealth, but economic power, including
the power to determine how and where to invest resources. In most worker owner-
ship funds proposed to date, those to whom power is redistributed are those respon-
sible for creating value in the first place: a company’s or sector’s employees. In the
case of a musician ownership fund, this power would be redistributed to musicians
who, by and large, do not work directly for the companies in question, but whose
music does. If, according to Domingos’s farming metaphor, it is the unpaid work of
the musical commons that is responsible for training ML-based systems, and for
imparting to them whatever value they possess, then economic resources and eco-
nomic power should be transferred to those best suited to exercise stewardship over
these commons. Who better than those musicians who not only draw from but give
back to the musical commons, and in so doing help to sustain them?

Conclusion
By way of conclusion, let us return to why questions about copyright, compensation,
and the challenge presented by music AI should matter. After all, it may be wondered
whether the startups and firms discussed in this article are all that significant, given
the relatively minor niche they occupy within the broader musical economy. To date,
the main product that companies like Aiva, Endel, and Amper have produced would
seem to have been marketing hype, more so than music. In addition, the use of
machine learning by certain of these firms hardly represents the most innovative application of such technologies. Skepticism may also be extended to the aesthetic
value of the music their AI systems generate. For one thing, much of it is still of dubi-
ous artistic quality. For another, much of it consists of production music, what many
would consider ‘functional’ rather than properly ‘artistic’ music. Should we be con-
cerned if machine-generated music were to encroach on this particular subsector of
the music industry? A common line of argument advanced by AI’s champions in the
media would answer this question in the negative. According to this line of argument,
delegating to machines mundane and menial forms of creative work – like turning
financial reports into news stories (Martin 2019) – might free up creative energies that
could be more fruitfully directed elsewhere. Extended to production music, conceding
this domain to machines would presumably liberate musicians to pursue more aes-
thetically rewarding activities, instead of churning out music that is meant to be pas-
sively heard rather than actively listened to.
At least three points can be made in response to these questions. First, even if the
startups discussed in this article represent only a small corner of the broader field of
music machine learning, and even if there is a good deal of exaggeration at work in
the discourse of such firms, the flow of capital into startups like Jukedeck, Amper, and
Endel will no doubt amplify their influence over public expectations of what music AI
is or should be going forward. Making this possibility all the more likely is the fact
that in the world of tech journalism at least, attention tends to flow in the same direc-
tion as capital. As a consequence, what may seem like empty marketing hype at pre-
sent may end up shaping the agenda for future work in this domain, encouraging
certain pursuits while suppressing others.
Second, whether the music generated by AI systems is persuasive at a musical level
is less critical than whether it proves persuasive elsewhere, as one of the resources
that capital can deploy in its unending effort to drive down the costs of creative as
well as other kinds of labor. Indeed, the mere existence of such services may be
enough to exert downward pressure on the market rate that creators of production
music can demand for their work. Bearing this in mind, it is revealing that so many of
the investors in music AI startups come from the world of media (e.g. Advancit), given
their interest in holding down the expenses associated with media production by
whatever means necessary.
Third, commercial applications using machine learning to generate cheap music
should be a cause for concern, even if the only kind of music presently at risk is the
historically stigmatised genre of production music. Whether this risk is realised, and
whether its impact is restricted to the field of production music, remains to be seen.
The rise of mood and context-based playlists on streaming platforms, however, sug-
gests this impact might not be so limited, as more of the music featured on such play-
lists comes to resemble what Paul Allen Anderson (2015) has referred to as ‘neo-
muzak’.2 But even were one to concede the highly problematic value judgment that
distinguishes functional music from other, autotelic forms of musicking, placing it on a
lower rung of aesthetic achievement, there may be subsidiary benefits that derive
from this field of activity. For one thing, engaging in such ‘hackwork’ often provides
an important training ground where artists can hone skills. For another, such work
often does pay the bills, a point not to be underestimated given that under capitalism
subsistence depends on the ability to win an income. Liberation from mundane,
menial tasks in these circumstances is tantamount to liberation from the ability to
make a living – and, more to the point, the ability to make a living as a musician. The
question, then, is what can be done to ensure that the development of machine learn-
ing applications for music creation will advance not just the cause of music or music
technology, but the equally if not more important goal of creating a just and equit-
able musical economy.

Notes
1. Technical objections may be raised regarding Vercellone’s formulation, in that it conflates
the transfer of value that typifies rent with the creation of value that characterizes capitalist
commodity production. But as a formulation for representing the increasing importance of
rent-seeking behavior in neoliberal capitalism, Vercellone’s phrase is a useful shorthand.
2. Revealing in this regard were the responses elicited in 2017, when news circulated that
Spotify had hired a Swedish production music company to pseudonymously create tracks
for its mood- and context-based playlists. In addition to charges that Spotify was using this
tactic to reduce the overall royalty pool it was obliged to pay out to rights holders, a
number of media commentators saw in this incident an augur of the way platforms might
employ cheap AI-generated music to replace more costly human-created tracks in the near
future (Dredge 2017a).

Disclosure statement
No potential conflict of interest was reported by the authors.

References
Advancit. 2015. “About.” Accessed 30 November 2019. http://www.advancitcapital.com
Anderson, Paul Allen. 2015. “Neo-Muzak and the Business of Mood.” Critical Inquiry 41: 811–840.
Anderson, Tim. 2004. “Buried under the Fecundity of Their Own Creations’: Reconsidering the
Recording Bans of the American Federation of Musicians, 1942-1944 and 1948.” American
Music 22 (2): 231–269.
Amazon. 2019. “The Alexa Fund: $200 million investment to fuel voice technology innovation.”
Accessed 30 November 2019. https://developer.amazon.com/alexa-fund
Arditi, David. 2018. “Digital Subscriptions: The Unending Consumption of Music in the Digital
Era.” Popular Music and Society 41 (3): 302–318.
Barreau, Pierre. 2018. “How AI could compose a personalized soundtrack to your life.” Accessed
23 December 2019. https://www.ted.com/talks/pierre_barreau_how_ai_could_compose_a_per-
sonalized_soundtrack_to_your_life/transcript?language=en
Belar, Herbert, and Harry Olson. 1961. “Aid to Music Composition Using a Random Probability
System.” The Journal of the Acoustical Society of America 33 (6): 1163–1170.
Boden, Margaret. 2014. “Gofai.” In The Cambridge Handbook of Artificial Intelligence, edited by
Keith Frankish and William Ramseys, 89–107. Cambridge: Cambridge University Press.
Brenner, Robert. 2006. The Economics of Global Turbulence. London: Verso.
Bridy, Anne-Marie. 2012. “Coding Creativity: Copyright and the Artificially Intelligent Author.”
Stanford Technology Law Review 5: 1–28.
Brown, Nina. 2018. “Artificial Authors: A Case for Copyright.” Columbia Science and Technology
Law Review 20: 1–41.
Castle, Chris. 2017. “The 21st Century Marketing Restriction: No Licensing for AI.” Artist Rights
Watch. Accessed 6 December 2019. https://artistrightswatch.com/2017/08/08/the-21st-century-
marketing-restriction-no-licensing-for-ai/
Crunchbase. 2019. “Amper Music.” Accessed 30 November 2019. https://www.crunchbase.com/
organization/amper-music
DeAngelis, Massimo. 2017. Omnia Sunt Communia: On the Commons and the Transformation to
Postcapitalism. London: Zed.
Deltorn, Jean-Marc, and Franck Macrez. 2018. Authorship in the Age of Machine Learning and
Artificial Intelligence. Strasbourg: Center for International Intellectual Property Studies.
Diduck, Ryan. 2018. Mad Skills: MIDI and Music Technology in the Twentieth Century. London:
Repeater.
Domingos, Pedro. 2012. “A Few Useful Things to Know about Machine Learning.”
Communications of the ACM 55 (10): 78–87.
Dredge, Stuart. 2017a. “AI and Music: Will We Be Slaves to the Algorithm?” The Guardian, August
6. Accessed 30 June 2018. https://www.theguardian.com/technology/2017/aug/06/artificial-
intelligence-and-will-we-be-slaves-to-the-algorithm
Dredge, Stuart. 2017b. “AI Music Reveals Its Plans for ‘Shape-Changing’ Songs.” Music Ally.
Accessed 6 December 2019. https://musically.com/2017/08/08/ai-music-shape-changing-songs/
Dredge, Stuart. 2019a. “Tencent Music Strikes Deal with AI-Music Startup Amper Music.” Music
Ally. Accessed 30 November 2019. https://musically.com/2019/01/24/tencent-music-strikes-
deal-with-ai-music-startup-amper-music/
Drott, Eric. 2019. “Music in the Work of Social Reproduction.” Cultural Politics 15 (2): 160–181.
Dyer-Witheford, Nick, Atle Mikkola Kjøsen, and James Steinhof. 2019. Inhuman Power: Artificial
Intelligence and the Future of Capitalism. London: Pluto Press.
FunCorp. 2019. “FunCubator.” Accessed 30 November 2019. https://fun.co/rp/
Gervais, Daniel. 2019. “The Machine as Author.” Iowa Law Review 105: 2053–2106.
Goldschmitt, K. E., and Nick Seaver. 2019. "Shaping the Stream: Techniques and Troubles of Algorithmic Recommendation." In Cambridge Companion to Music and Digital Culture, edited by Nicholas Cook, Monique Ingalls, and David Trippett, 63–81. Cambridge: Cambridge University Press.
Gowan, Peter. 2019. Right to Own: A Policy Framework to Catalyze Worker Ownership Transitions.
The Next System Project. Accessed 23 March 2020. https://thenextsystem.org/rto
Greene, Tristan. 2018. “Facebook Made an AI that Convincingly Turns One Style of Music into
Another.” The New Web. Accessed 21 December 2019. https://thenextweb.com/artificial-intelli-
gence/2018/05/22/facebook-made-an-ai-that-convincingly-turns-one-style-of-music-into-another/
Grubow, Jared. 2018. “OK Computer: The Devolution of Human Creativity and Granting
Copyrights to Artificially Intelligent Joint Authors.” Cardozo Law Review 40: 387–423.
Guinan, Joe. 2019. “Socialising Capital: Looking Back on the Meidner Plan.” International Journal
of Public Policy 15 (1/2): 38–58.
Harwell, Drew. 2020. “iHeartRadio Laid Off Hundreds of DJs. Executives Blame AI. DJs Blame the
Executives.” Washington Post, January 31.
Heater, Brian. 2018. “Amazon’s Alexa Fund Invests in Three Voice Startups.” Techcrunch.
Accessed 30 November 2019. https://techcrunch.com/2018/09/27/amazons-alexa-fund-invests-
in-three-voice-startups/
Hristov, Kalin. 2017. “Artificial Intelligence and the Copyright Dilemma.” IDEA 57: 431–454.
Jukedeck. 2017. "Releasing a cleaned version of the Nottingham dataset." Accessed 23 December 2019. https://research.jukedeck.com/releasing-a-cleaned-version-of-the-nottingham-dataset-928cdd18ec68
Kalashnikov, Mikhail. 2019. “FunCubator—What We’re Looking for.” Medium. Accessed 30 November
2019. https://medium.com/funcubator/funcubator-what-were-looking-for-83bfa6c20aed
Kaye, Ben. 2019. “The end is nigh: An algorithm just signed with a major record label.”
Consequence of Sound. Accessed 30 November 2019. https://consequenceofsound.net/2019/
03/endel-alogrithm-major-label-deal/
Kraft, James. 1996. Stage to Studio: Musicians and the Sound Revolution, 1880-1950. Baltimore,
MD: Johns Hopkins University Press.
Langley, Paul, and Andrew Leyshon. 2017. “Platform Capitalism: The Intermediation and
Capitalisation of Digital Economic Circulation.” Finance and Society 3 (1): 11–21.
Lanza, Joseph. 2004. Elevator Music: A Surreal History of Muzak, Easy-Listening, and Other
Moodsong. Ann Arbor, MI: University of Michigan Press.
Legaspi, Althea. 2018. "'Blurred Lines' Copyright Suit Against Robin Thicke, Pharrell Ends in $5M Judgment." Rolling Stone. Accessed 6 December 2019. https://www.rollingstone.com/music/music-news/robin-thicke-pharrell-williams-blurred-lines-copyright-suit-final-5-million-dollar-judgment-768508/
Mackenzie, Adrian. 2017. Machine Learners: Archaeology of a Data Practice. Cambridge, MA: MIT.
Magenta. 2019. Make Music and Art Using Machine Learning. Accessed 21 December 2019.
https://magenta.tensorflow.org/
Malt, Andy. 2018. "Warner Music acquires A&R AI tool Sodatone." Complete Music Update. Accessed 23 March 2020. https://completemusicupdate.com/article/warner-music-acquires-ai-ar-tool-sodatone/
Martin, Nicole. 2019. “Did a Robot Write This? How AI Is Impacting Journalism.” Forbes. Accessed
23 December 2019. https://www.forbes.com/sites/nicolemartin1/2019/02/08/did-a-robot-write-
this-how-ai-is-impacting-journalism/#1b02942c7795
Morris, Jeremy Wade. 2015. Selling Digital Music, Formatting Culture. Berkeley, CA: University of
California Press.
Mubert. 2018. “Privacy Policy.” Accessed 22 December 2019. https://static.mubert.com/law/privacy-policy.pdf
Pachet, François. 2018. “Computer science in music: Interview with François Pachet, director of
the Spotify Creator Technology Research Lab.” Digital Single Market. Accessed 30 November
2019. https://ec.europa.eu/digital-single-market/en/news/computer-science-music-interview-
francois-pachet-director-spotify-creator-technology-research
Sadowski, Jathan. 2019. “When Data Is Capital: Datafication, Accumulation, and Extraction.” Big
Data & Society 6 (1): 1–12.
Sag, Matthew. 2019. “The New Legal Landscape for Text Mining and Machine Learning.” Journal
of the Copyright Society of the USA 66: 290–367.
Shannon, Claude. 1948. “A Mathematical Theory of Communication.” The Bell System Technical
Journal 27 (3): 379–423.
Sobel, Benjamin. 2017. “Artificial Intelligence’s Fair Use Crisis.” Columbia Journal of Law & the
Arts 41: 45–97.
Spotify. 2017. “Spotify for Artists Terms and Conditions.” Accessed 30 November 2019. https://www.spotify.com/us/legal/spotify-for-artists-terms-and-conditions/
Srnicek, Nick. 2017. Platform Capitalism. Cambridge: Polity.
Sterne, Jonathan. 2012. MP3: The Meaning of a Format. Durham, NC: Duke University Press.
Sterne, Jonathan, and Elena Razlogova. 2019. “Machine Learning in Context, or Learning from
LANDR: Artificial Intelligence and the Platformization of Music Mastering.” Social Media &
Society 5 (2): 1–18.
Théberge, Paul. 1997. Any Sound You Can Imagine: Making Music/Consuming Technology.
Middletown, CT: Wesleyan University Press.
Titlow, John Paul. 2017. “Why Did Spotify Hire This Expert in Music-Making AI?” Fast Company.
Accessed 21 December 2019. https://www.fastcompany.com/40439000/why-did-spotify-hire-this-expert-in-music-making-ai
Weav Music. 2019. “Frequently Asked Questions.” Accessed 6 December 2019. https://run.weav.io/faq
Yanisky-Ravid, Shlomit. 2017. “Generating Rembrandt: Artificial Intelligence, Copyright, and
Accountability in the 3A Era—The Human-Like Authors Are Already Here—A New Model.”
Michigan State Law Review 2017: 659–726.
Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism. New York: PublicAffairs.