
Music and Language

(1992)
by
Chris Dobrian
Table of Contents

The Subjectivity of Experience


Aspects of the Music-Language Relationship
Music As Language
The Vocal Origins of Music
Talking About Music
Music and Semiotics
Information Theory
Generative Grammar
Music Theory
Specialized Music Languages
Western Music Notation
MIDI Data Protocol
Teaching by Example

Notes
The Subjectivity of Experience

Every input to our senses is a stimulus, available for us to interpret as information[1], and from which
we can derive further information. Our physical sensory receptors--our ears, eyes, etc.--can well be
thought of as information "transducers" which convert external stimuli--changes in air pressure, light,
etc.--into nerve impulses recognized by the brain. Scientists and philosophers have advanced many conceptual models of what the brain does with these nerve impulses to derive knowledge and meaning.[2] (Cf. Dobrian, Chris, "Music and Artificial Intelligence", regarding models of music cognition.)
Regardless of the mechanism by which our brain accomplishes it, it is clear that we generate (interpret,
deduce, recall, or create) information ourselves, stimulated by external information.

For example, when we hear a lion's roar, our ear drum simply receives continuous changes in air
pressure. The cochlea, so we are taught, responds to the frequencies and amplitudes of those changes
and conveys those responses to the brain. Our brain, by means largely unknown to us (past experience,
instinct, deduction, instruction in roar analysis?), evaluates those time-varying frequencies and
amplitudes as a lion's roar. Our brain then derives further information about the actual source of the
sound and its meaning. A person in one time or place might interpret the sound to mean "My life is in
danger. I must run away from the sound source immediately as fast and as far as I can." A person in
another time or place might look around calmly for the electronic recording device that produced the
simulation of a lion's roar. A person who had never learned to associate that sound with any particular
source--e.g., a person who had never heard a similar sound before--might attempt to compare it with
other known sounds, or might even remain unconcerned as to what produced the sound.

When we hear a strange sound--thinking "What was that?"...we try to identify it....Occasionally we pay
attention to the sound itself. Then it is more than a cue, and we are listening in another mode, music
mode, regardless of the source of the sound.[3]
The point of the above example is that the sound phenomenon which is external to our body--the
fluctuation of air pressure--is considered an objective informational message, and everything that
happens once it is converted by our "transducer" is subjective, based on our brain's understanding of
the transducer's output, our own life experience, and our own favored ways of deriving knowledge. We
may quite easily say, "That sound symbolizes a lion," but would we so easily say, "That sound symbolizes
a tape recorder"? Are we talking about the sound or about our own personal referents derived from the
sound?
The subjective nature of perception was considered a truism by scientist/philosopher Gregory Bateson:

There is no objective experience. All experience is subjective....Our brains make the images that we
think we "perceive"....When somebody steps on my toe, what I experience is not his stepping on my toe,
but my image of his stepping on my toe reconstructed from neural reports reaching my brain somewhat
after his foot has landed on mine. Experience of the exterior is always mediated by particular sense
organs and neural pathways. To that extent, objects are my creation, and my experience of them is
subjective, not objective.

It is, however, not a trivial assertion to note that very few persons, at least in occidental culture, doubt
the objectivity of such sense data as pain or their visual images of the external world. Our civilization is
deeply based on this illusion.[4]
This raises some fundamental questions regarding perception and knowledge, or more precisely,
regarding information and meaning. If information has a certain significance to me, how do I determine
whether that significance is personal to me or whether it is actually "contained" in the external
information (and thus available to others who receive the same information)? After all, people all look
different, are shaped differently, act differently, talk differently; is there any reason to believe that we
all hear the same? How do we communicate those aspects of our knowledge which are personal?
The way that we probe for an answer to these questions is to rely upon a system of symbols and
meanings that we feel relatively confident are shared by others. Our most developed and established
system of communicative symbols is, of course, language.

Understanding language is...a matter of habits acquired in oneself and rightly assumed in others.[5]
Because we use language so much, and have done so for so much of our lives, and have done so as a
species for so long, we often take words for granted as having objective, agreed-upon meanings. Of
course, this trust is belied by everyday misunderstandings, and is actually as much of an illusion as the
illusion of objective experience commented upon above (see also my summary of Benjamin Hrushovski's
analysis of "The Structure of Semiotic Objects"), but it is true that our spoken language is our most fully
shared basis for communication.
Aspects of the Music-Language Relationship

Music and language are related in so many ways that it is necessary to categorize some of those
relationships. I will then address each category in turn.

First, there is the seemingly never-ending debate of whether music is itself a language. The belief that
music possesses, in some measure, characteristics of language leads people to attempt to apply
linguistic theories to the understanding of music. These include semiotic analyses, information theory,
theories of generative grammar, and other diverse beliefs or specially invented theories of what is being
expressed and how. This category could thus be called "music as language".

A second category is "talking about music". Regardless of whether music actually is a language, our
experience of music is evidently so subjective as to cause people not to be satisfied that their perception
of it is shared by others. This has led to the practice of attempting to "translate" music into words, to "describe" musical phenomena in words, or to "explain" the causes of musical phenomena. The sheer
quantity of language expended about music is enormous, and includes writings and lectures on music
history, music "appreciation", music "theory", music criticism, description of musical phenomena (from
both scientific and experiential points of view), and systems and methods for creating music. These
approaches may include the linguistic theories of the first category, as well as virtually any other aspect
of the culture in which the music occurs: literary references; anecdotes about the lives and thoughts of
composers, performers, and performances; analogies with science and mathematics; scientific
explanations of perception based on psychology and acoustics; poetry or prose "inspired" by hearing
music; even ideas of computer programs for simulations or models of music perception and generation.

A third category is composed of a large number of "specialized music languages". These are invented
descriptive or explanatory (mostly written) languages, specially designed for the discussion of music, as
distinguished from everyday spoken language. The best known and probably most widely acknowledged
specialized music language is Western music (five-line staff) notation. Myriad others can be found in the
U.S. alone, ranging from guitar tablature to computer-readable protocols (e.g., MIDI file format).
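To illustrate how a computer-readable protocol such as MIDI represents musical "symbols", here is a minimal sketch (in Python; the helper name `note_on` is my own invention for this illustration, not part of the MIDI specification) of the three-byte "note on" channel message that MIDI defines:

```python
# A MIDI "note on" channel message is three bytes:
#   status byte: 0x90 | channel   (channel 0-15)
#   data byte 1: note number 0-127 (middle C = 60)
#   data byte 2: velocity 0-127   (how forcefully the note is struck)
def note_on(channel, note, velocity):
    """Encode a MIDI note-on message as raw bytes (illustrative helper)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

msg = note_on(channel=0, note=60, velocity=100)  # middle C on channel 1
```

Such a protocol is a "language" only in the narrow sense discussed in this chapter: a fixed vocabulary of byte-level symbols combined according to strict rules, with no claim to encode musical meaning itself. (The Standard MIDI File format wraps such messages with timing and meta-information.)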

Not only is the role of language in the learning and teaching of music important, but the study of the
role of language is important, as well. "What we talk about when we talk about music" is a matter that is
too often taken for granted and too little investigated. I will delve into each of these three categories in
an effort to define better the music-language relationship, and I will give attention to the importance of
that relationship to music teaching and learning.

Music As Language

The oft-quoted poetical statement that "Music is the universal language of mankind"[6] is indicative of
the communicative quality of music, and at the same time is indicative of the elusive and ambiguous
nature of whatever it is that music communicates. Is music a universal language? Is it a language at all?

Because music is a stimulus to our sense of hearing, it is clear that music can, and inevitably does,
convey information. What is the nature of that information? What does it express? These questions
have long been--and continue to be--the source of considerable debate. The following quotes
encapsulate a few views.

I consider that music is, by its very nature, powerless to express anything at all, whether a feeling, an
attitude of mind, a psychological mood, a phenomenon of nature, etc....If, as is nearly always the case,
music appears to express something, this is only an illusion, and not a reality....
The phenomenon of music is given to us with the sole purpose of establishing an order of things,
including particularly the coordination between man and time. To be put into practice, its indispensable
and single requirement is construction....It is precisely this construction, this achieved order, which
produces in us a unique emotion having nothing in common with our ordinary sensations and our
responses to the impressions of daily life.[7]
Do we not, in truth, ask the impossible of music when we expect it to express feelings, to translate
dramatic situations, even to imitate nature?[8]
***
Our audiences have come to identify nineteenth-century musical romanticism as analogous to the art of
music itself. Because romanticism was, and still remains, so powerful an expression, they tend to forget
that great music was written for hundreds of years before the romantics flourished.[9]
Music expresses, at different moments, serenity or exuberance, regret or triumph, fury or delight. It
expresses each of these moods, and many others, in a numberless variety of subtle shadings and
differences. It may even express a state of meaning for which there exists no adequate word in any
language. In that case, musicians often like to say that it has only a purely musical meaning. They
sometimes go farther and say that all music has only a purely musical meaning. What they really mean is
that no appropriate word can be found to express the music's meaning and that, even if it could, they do
not feel the need of finding it.[10]
My own belief is that all music has an expressive power, some more and some less, but that all music
has a certain meaning behind the notes and that that meaning behind the notes constitutes, after all,
what the piece is saying, what the piece is about. This whole problem can be stated quite simply by
asking, "Is there a meaning to music?" My answer to that would be, "Yes." And "Can you state in so
many words what the meaning is?" My answer to that would be, "No." Therein lies the difficulty.[11]
***
Within the orbit of tonality, composers have always been bound by certain expressive laws of the
medium, laws which are analogous to those of language....Music is, in fact, 'extra-musical' in the sense
that poetry is 'extra-verbal', since notes, like words, have emotional connotations....Music functions as a
language of the emotions.[12]
***
One given piece of music may cause remarkably different reactions in different listeners. As an
illustration of this statement, I like to mention the second movement of Beethoven's Seventh Symphony,
which I have found leads some people into a pseudo feeling of profound melancholy, while another group takes it for a kind of scurrilous scherzo, and a third for a subdued kind of pastorale. Each group is justified in judging as it does.[13]
***
Hindemith is undoubtedly right in his observation that people react in different emotional ways to a
given piece of music, but his statement that each reaction is equally justifiable fails to take a simple
psychological point into account. Could it not be that some listeners are incapable of understanding the
feeling of the music properly? The answer, of course, is yes....Such people, whom one knows to exist,
are just plainly unmusical.[14]
Stravinsky's resolute assertion that music is "powerless to express anything at all" is almost certainly
symptomatic of an overreaction against the poetic excesses of romantically inclined music
commentators, or of an effort to distinguish himself from the Viennese "expressionists". Nevertheless,
his professed belief was that, if music is "about" anything, it is about music.

Just as cubism is a poetic statement about objects and forms, and about the nature of vision and the
way we perceive and know forms, and about the nature of art and the artistic transformation of objects
and forms, so Stravinsky's music is a poetic statement about musical objects and aural forms.[15]
At the opposite extreme from Stravinsky is the British musicologist Deryck Cooke, who maintains that
music is a language for expressing emotional states, and that furthermore it is (at least in the case of
tonal music) a strictly codified language in which each scale degree signifies a certain emotion and
permits only a single specific reading. Cooke's argument on behalf of his theory in The Language of
Music cites a great many supporting examples, but suffers from a) a tendency to extrapolate inherent
emotional content of scale degrees based on obviously style-based and culture-based clichés, and b)
attribution of emotional qualities which are so ambiguous as to be indisputable but meaningless
because they make opposite assertions simultaneously. For example, he says that "To rise in pitch in
minor...may be an excited aggressive affirmation of, and/or protest against, a painful feeling"[16]. Well,
if it can be an "affirmation of" and/or a "protest against", then he has covered all the possibilities and
thus has told us nothing.

Aaron Copland's intermediary view that music may express both musical and extra-musical meaning
strongly suggests a communicative (informative) power in music, but his belief that music "may even
express a state of meaning for which there exists no adequate word in any language" indicates that he
feels music does not possess anything like the explicit significative nature of language. In fact, his
reference to the existence of "purely musical meaning" allies him much more closely with Stravinsky
than with Cooke.

Implications for Music Education

The American professor Bennett Reimer, in his book A Philosophy of Music Education, labels this type of
Stravinsky-Cooke polarity as Absolute Formalism versus Referentialism.

The experience of art, for the Formalist, is primarily an intellectual one; it is the recognition and
appreciation of form for its own sake. This recognition and appreciation, while intellectual in character,
is called by Formalists an "emotion"--usually the "aesthetic emotion". But this so-called "emotion" is a
unique one--it has no counterpart in other emotional experiences.

The Referentialist disagrees. The function of the art work is to remind you of, or tell you about, or help
you understand, or make you experience, something which is extra-artistic, that is, something which is
outside the created thing and the artistic qualities which make it a created thing. In music, the sounds
should serve as a reminder of, or a clue to, or a sign of something extramusical; something separate
from the sounds and what the sounds are doing.[17]
Reimer points out that the viewpoint adopted by the educator has important socio-political implications,
with potential for both benefit and abuse.
Referentialist assumptions are in operation in much that is done in the teaching of art....the attempt to
add a story or picture to music, either verbally or visually; the search for the right emotion-words with
which to characterize music...

What are the values of art according to the Referentialist point of view?...Art works serve many
purposes, all of them extra-artistic. If one can share these values one becomes a better citizen, a better
worker, a better human being...Of course, there is the danger that harmful works of art will have
harmful effects...That is why societies which operate under a Referentialist aesthetic must exercise a
high degree of control over the artistic diet of their citizens....

Music educators will have no difficulty recognizing the Referentialist basis for many of the value claims
made for music education....It imparts moral uplift,...it provides a healthy outlet for repressed
emotions...it is assumed to be, in short a most effective way to make people better--nonmusically.

The practice of isolating the formal elements of art works and studying them for their own sake is the
[Formalist] counterpart of separating out the referential elements....That the major value of music education is intellectual; that the study of "the fundamentals" is, in and of itself, a beneficial thing;...that
music or art in general transports one from the real world into the ethereal world of the aesthetic; all
these are assumptions compatible with Formalism....

When considering each of these two viewpoints separately it is difficult...to give full assent to
either....To translate the experience of art into nonartistic terms, whether conceptual or emotional, is to
violate the meaningfulness of artistic experience....At the same time it is not possible to regard art, as
with the Formalist, as an intellectual exercise. Surely art is intimately connected to life rather than
totally distinct from it....So while each view contains some truth, each also contains major falsehoods
which prevent their use as a basis for a philosophy.[18]
Like Copland, Reimer leans in the direction of Formalism, but does not ignore the role of extra-musical
considerations in music education. He calls this viewpoint Absolute Expressionism.

Absolute Expressionism [like Formalism] insists that meaning and value are internal; they are functions
of the artistic qualities themselves and how they are organized. But the artistic/cultural influences
surrounding a work of art may indeed be strongly involved in the experience the work gives...the story in
program music, the crucifixion scene in a painting, the political conflicts in a play, and so on, are indeed
powerfully influential in what the internal artistic experience can be. However, references are always
transformed and transcended by the internal artistic form....That is why it is possible and quite common
for works with trivial referents to be profound...Cézanne's paintings of fruit on rumpled table cloths,
Beethoven's Pastorale Symphony, and so on. That is why it is also possible and quite common for works
with important referents to be trivial or even demeaning as art--dime-store pictures of Jesus painted in
day-glo colors on black velvet, love as depicted in "popular novels", and so on.[19]
Reimer points out that both the Referentialist and Formalist views make music education difficult to
justify administratively because the first view holds that music does not deal with unique issues and the
second view disconnects music from the rest of life. He seeks to combine the best aspects of both, thus
mitigating their narrowness as individual viewpoints.

Absolute Expressionists [and] Absolute Formalists...both insist you must go inside to the created
qualities that make the work an art work. That is the "Absolute" part of both their names. But the
Expressionists include nonartistic influences and references as one part of the interior...The Absolute
Expressionist view [is] that the arts offer meaningful, cognitive experiences unavailable in any other way,
and that such experiences are necessary for all people if their essential humanness is to be realized.[20]
Music As Language (cont.)

If there is disagreement as to what music expresses, there is at least general agreement that music is
intended to and does--through its form, its content, or both--produce in us emotions, be they strictly
musical or extra-musical. So, clearly it gives us stimulus and information, but that is hardly evidence of
its being a language. Before proceeding further, let us establish a working definition of language.

Language is a set (vocabulary) of symbols (signifiers, to use the terminology of semiotics), each of which
refers to (indicates, signifies) one or more concrete things or abstract concepts. These symbols are
combined according to a more or less strict grammar of rules. The combination of the symbolic units in a
specific grammatical structure produces new, further significance.[21] This is the way in which verbal
languages work, as well as such specialized written languages as those of mathematics and computer
programming.
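The working definition above, a vocabulary of signifiers combined by grammar rules into structures that produce further significance, can be sketched as a toy generative system (the vocabulary, rules, and function below are illustrative inventions for this sketch, not an established linguistic formalism):

```python
import random

# A toy "language": a small vocabulary of symbols, each with a referent,
# combined according to simple grammar rules into a larger significant unit.
vocabulary = {
    "the": "(definite article)",
    "lion": "a large predator",
    "roars": "emits a loud cry",
}

# Rewrite rules: a sentence (S) is a noun phrase (NP) plus a verb phrase (VP).
grammar = {
    "S": [["NP", "VP"]],
    "NP": [["the", "lion"]],
    "VP": [["roars"]],
}

def generate(symbol):
    """Expand a grammar symbol into a sequence of vocabulary items."""
    if symbol in vocabulary:                 # terminal symbol: a word
        return [symbol]
    expansion = random.choice(grammar[symbol])
    return [word for part in expansion for word in generate(part)]

sentence = " ".join(generate("S"))           # "the lion roars"
```

The point of the sketch is the definition's final clause: neither "lion" nor "roars" alone asserts anything, but their combination under the rule S -> NP VP does. Whether musical materials admit such a vocabulary and grammar is precisely the question at issue.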

Does music conform to this definition of language? First, let's narrow the question considerably. Does
European-American classical music conform to this definition? Despite attempts throughout history to
answer in the affirmative--from Plato's Republic to the musica reservata of the sixteenth century to the
doctrine of affections of the eighteenth century to Cooke's Language of Music--all theoretical
formulations of a "language of music" either have proved applicable only to a particular period and style
or have not been at all widely accepted as a significant system.

The fact that a language system can potentially be embodied in any circumscribed, discernible set of
sounds (or objects of any kind for that matter) is not trivial, however. There exists purely functional
communicative music, and there exist in the music of a great many cultures and periods certain widely
accepted sonic symbols. Such symbols are a subset of musical sounds or phrases that are recognized as
known musical objects and which we usually term clichés. A knowledge of those symbols, and indeed of musical clichés in general, is essential to musical understanding because they have significance, either
musical or extra-musical.

In fact, though, a type of music made up entirely of sonic symbols is extremely rare. Symbols and other
clichés are almost always merely a subset of the acceptable sounds of a musical culture or style, and
that culture or style is in turn merely a subset of music. So, while music may contain discernible symbols,
and usually does employ some type of grammar, the two are rarely related in any way, and symbols are
almost invariably only a small subset of any piece of music.

The conclusion we reach, then, is that a given style of music often includes linguistic elements of symbols and grammar, but is not itself a language. It is even more untenable to say that music (independent of style) is a language, much less a "universal" language of agreed-upon symbols, grammar, and meaning.
Music is not a "universal language" any more than the sum total of all vocal sounds can be said to be a
universal spoken language. Whatever linguistic elements a music may possess are extremely dependent
on explicit and implicit cultural associations, all of which are in turn dependent on society and the
individual. Even though media and tele-communications are increasing the awareness of music of other
cultures, most individuals are still no closer to knowing all music than they are to knowing all languages.

We must also bear in mind that symbolic representation is not the only means of expression. Music can,
by its very form (that is, the abstractions we derive from its form), express abstract or visual concepts, or
it may present a visceral, immediate appeal to our senses (our unconscious response). These are not
modes of expression that depend upon language, yet few would deny their existence in music.

Implications for Music Education

Music is not itself a language and therefore is not susceptible to precisely the same methods of analysis
and teaching as verbal language; so it is almost certainly futile to attempt to model music entirely as a
language. Nevertheless, music does in most cases include the linguistic elements of symbols and
grammar, and it is therefore very likely that linguistic methods of analysis and education can at least
provide some insights into analogous processes of musical understanding.

In the area of music analysis (a term which includes analysis lite, "music appreciation") the linguistic
theories of semiotics, generative grammar, and information theory are particularly appropriate in this regard.
In the area of musical skill development, theories of language learning and acquisition present parallels
in music education. The actual application of these theories will be discussed in more detail later in this
chapter, in the category of "talking about music" (p. 16 ff.).

The Vocal Origins of Music

It is most widely held that music originated as a vocal practice: either as an extension of non-linguistic
vocal utterances (expressions of joy, pain, etc.), as extensions or intensifications of the intonations of
spoken language, or as the production of vocal sounds purely for their sonic quality.

[An anonymous Sanskrit manuscript] explains the formation of musical sounds on the basis of the
Maheshvara Sutra-s, an esoteric arrangement of musical sounds, which Nandikeshvara also accepts as
the philosophical basis of the Sanskrit language and, in fact, all language.[22]
***
Various hypotheses about the origins of music have connected it with, or derived it from, the
intonations of excited speech, nonverbal voice signals, or significant inflections of the voice in tonal
languages. A widely accepted notion maintains that both natural language and music, alone of all the
arts, involve sound unfolding in time, and that both have the human voice as their common source.
Finally, music, like natural language, developed a system of writing--musical notation.[23]
***
Certain forms of expression intrinsically link these two phenomena: sound and word. Is this to go so far
as to say that the evolution of language corresponds to a similar evolution of music? To me it does not
seem possible to assert that this problem is posed in terms of a simple parallelism....Certainly
onomatopoeias and pulverized words can express what constructed language cannot propose to touch
upon....For example, ritual chants in a large number of liturgies use a dead language that keeps the meaning of the chanted text remote....The Greek theater and the Japanese noh also furnish us with examples of "sacred" language in which archaism singularly or utterly limits comprehensibility. In popular songs, at the other
extreme, who has not been surprised to hear successions of onomatopoeias and ordinary words
deprived of their purpose? All that is admitted is the necessity and pleasure of rhythm.[24]
Of course we can know of the origins of music only through descriptive accounts and theoretical
treatises. We do not have examples of ancient music available for audition. We have no real way of
tracing the music of today back to its primal origins, and therefore, as Boulez rightly points out, no basis
for believing in a parallel development between music and language. As he goes on to show, however,
examples of music in which the vocal sounds have to some extent become dissociated from their
symbolic meaning give us a new perspective on the musical organization of phonemes. We can study
not only traditional text "settings" in vocal music, but also any inherent logic which may exist in the
relations between vocal sounds, pitch, timbre, and rhythm.

Professor Yuasa, in lectures and private conversations, has repeatedly emphasized the value he sees in a
composer taking an "archeological approach" to music--to seeking knowledge of the "genesis" of music
as a way to deeper understanding of our own music today. However, Stravinsky, in a stern, almost
curmudgeonly tone, remarks on what he personally has found to be the benefits and limitations of the
"archeological approach".

Archeology...does not supply us with certitudes, but rather vague hypotheses. And in the shade of these
hypotheses some artists are content to dream, considering them less as scientific facts than as sources of inspiration....Such a tendency in itself calls for neither praise nor censure. Let us merely note that
these imaginary voyages supply us with nothing exact and do not make us better acquainted with
music....My own experience has long convinced me that any historical fact, recent or distant, may well
be used as a stimulus to set the creative faculty in motion, but never as an aid for clearing up
difficulties.[25]
Perhaps the two composers do not hold such different views after all. It is doubtful that Yuasa expects
to establish any "scientific facts" regarding the genesis of music. It is precisely as a "source of
inspiration"--a "stimulus to set the creative faculty in motion"--that he seems to view this (spiritual
rather than musicological) archeological search. His belief is that the important origins of music do not
require a distant "voyage" for discovery, but rather that they are personally available to us, as part of
our collective unconsciousness, and that we have only to cast our eyes and ears in the right way to
realize them. His orchestral work Eye on Genesis II is his most eloquent musical essay on this topic.

Implications for Music Education

If, as is generally believed, instrumental melodies developed as accompaniment to and extension of vocal melodies, vocalization is a vital tool for achieving a better understanding of instrumental music.
This is the basis of the teaching method of Zoltán Kodály, which in turn forms the basis of virtually all music education in Hungary. Kodály himself makes a very strong case for the predominance of vocal
training in music education.

Musical illiteracy impedes musical culture, and is the cause of sparse attendance at serious concerts and
opera productions....The way from musical illiteracy to musical culture leads through reading and writing
music....Active participation in music is by far the best way to get to know music; the gramophone and
radio are no more than accessories....The best approach to musical genius is through the instrument
most accessible to everyone: the human voice. This way is open not only to the privileged but to the
great masses....Music belongs to everybody....With a good musician the ear should always lead the way
for those mobile fingers![26]
One of my most vivid memories from my own music education is of a remark made by my first college
professor of musicianship, Gil Miranda, himself a believer in many of Kodály's precepts. He said simply,
"To find the correct interpretation of a melodic phrase, you only have to sing it. The sung melody is the
basis for all phrasing. To make the audience hear something in the music you perform, you only need to
ensure that you can truly hear it yourself." I have found this statement to be true in virtually all cases
where I have applied it--a rare thing for any statement made about music. Even for music which was not
conceived by the composer as having a cantabile character, the simple facts of breath control, changes
in physical exertion, and habits of vocal phrasing lead to a natural interpretation of melody which results in, if not automatically the best interpretation, at least a sizable step in the right direction. This indicates
the very tight relationship that still exists between instrumental music and the influence of vocal
behavior on musical expression.

Kodály's insistence on early, intensive, and careful vocal training as a requisite for all musical
understanding is virtually indisputable, and his observation of its egalitarian availability (and essential
indispensability) to all people flies in the face of elitist beliefs that some people are musically gifted
while others simply are not.

What is maddening in America is most people have been separated from their culture. They have been
told there's a special privileged class of artists who have a special insight. A normal person doesn't have
this insight. That is a monstrous lie, and it is hideous because it is taught to us early on. We are taught
we're not artists. Every single day we're reminded. The special students are isolated in a class and told,
"You're special, you go on. The rest of you, please become middle-class and boring."[27]
Another music educator who has had great success and who downplays the role of "talent" and "gift" in
musical development is Shinichi Suzuki. Like Kodály, he stresses development of the ear before
instrumental practice, he stresses musical training beginning at an early age, and he champions musical
training for all people. Furthermore, Suzuki emphasizes language acquisition as a model for acquisition
of musical skill. He points out that we all speak our native language competently--some better than
others, to be sure, but all with competency--because we are exposed to it, repeat it, and practice it
constantly, beginning at a very early age. His premise is that other abilities, notably musical skill, can be
similarly acquired.

Education begins from the day of birth. We must recognize the amazing power of the infant who
absorbs everything in his surroundings and adds to his knowledge. If attention is not given to early
infancy, how can the child's original power be developed? We learn from Nature that a plant which is
damaged or stunted during the sapling stage does not have a promising future. Yet at present, we know
very little about proper training for the early infancy of human beings. Therefore, we must learn more
about the conditions in which early human growth takes place....

All children in the world show their splendid capacities by speaking and understanding their mother
language, thus displaying the original power of the human mind. Is it not probable that this mother
language method holds the key to human development?...

Cultural sensitivity is not inherited, but is developed after birth....It is wrong to assume that special
talent for learning music, literature, or any other field, is primarily inherited.

This is not to say that everyone can reach the same level of achievement. However, each individual can
certainly achieve the equivalent of his language proficiency in other fields. We must investigate methods
through which all children can develop their various talents. In a way this may be more important than
the investigation of atomic power.[28]
Suzuki and Kodály differ on the order and importance of vocal training and notation reading in
education, but it is clear that both are guided by a recognition of the important correlations between
speech skills and musical skills.

Talking About Music

If all meanings could be adequately expressed by words, the arts of painting and music would not exist.
There are values and meanings that can be expressed only by immediately visible and audible qualities,
and to ask what they mean in the sense of something that can be put into words is to deny their
distinctive existence.[29]
Talking about music is like dancing about architecture.[30]
People talk about music in an effort to discover to what extent their experience of it and the significance
they attribute to it are personal, and to what extent its significance is actually "contained" in the sonic
information (and thus available to others who receive the same information). Interest in this question
comes from a desire to communicate knowledge, to share knowledge (possess common knowledge),
and perhaps even from a desire to impose knowledge on others. We choose the medium of verbal
language because a) we are uncertain of the extent to which music is a shared system of communication,
and b) what we actually wish to communicate, share, or impose is not music but ideas about music--ideas evoked by music.
As the quotes by Dewey and Anderson affirm, we will never successfully communicate music in any
medium other than music. However, not all ideas expressed by music are solely musical ideas. Musical
symbols or clichés may refer to extra-musical ideas, or music may, by its very formal construction,
express abstract or visual concepts. Such ideas can be and are much discussed. Music does not exist in a
vacuum; it exists in and reflects a society and a culture, and thus can refer to a whole world of ideas.
Indeed there is hardly any idea or concept which has not at some time been, or could not potentially be,
related to music.

How then to get at this matter of "what we talk about when we talk about music"? I will discuss and
criticize some existing ways of analyzing and teaching music, and will propose some new directions. I will
start by treating methods of analysis which are particularly related to music as language: in a sense,
talking about talking about music.

Music and Semiotics

Semiology is the study of symbols. As we have noted, musical discourse makes use of symbols but is by
no means restricted to their use. It would therefore appear that semiology is of limited applicability to
music. If, however, we employ ideas of semiology but shift our focus from the symbol (the signifier) to
that which is signified (for the study of significance is a natural part of semiology) we may find useful
ideas for the analysis of music. Still, we must always be aware of the limitations of its usefulness, as we
will see.

Music lies within [the reach of semiotics] because it is a kind of communication which has both
organization and significance. Moreover, music may seem the most appropriate and gratifying object for
these new approaches, because it is the purest system of abstract relationships presented in concrete
form, and the most immediate expression of meaning.

Yet at this point one has to be cautious...That music may be described in semiotic terms does not
necessarily mean that the terminology and theory of semiotics will help us to understand music
better.[31]
Can music be considered a "semiotic object"? According to Benjamin Hrushovski:

Semiotic objects may be intended for sign-functions or not. In the second case, they become "semiotic"
if they are interpreted as such by "understanders". For example, while walking in a city we read the
forms, sizes, and density of buildings to signify "office buildings", "middle-class homes", "slums", etc.,
even if such messages were never intended by the producers of those objects.[32]
Hrushovski describes a semiotic object as having three dimensions: Speech and Position, Meaning and
Reference, and Organized Text. The dimension of Speech and Position points to the importance of the
source of any expression, and the relative positions of all those involved in its propagation or perception.
Who is the author, speaker, character, addressee, reader, etc.? What is the position (level) of each?
Often speakers are nested in a hierarchy. For example "an author presents a narrator who quotes a
character (who may quote another character in turn), and a distortion may occur at any stage."[33] In
musical terms this might be analogous to a score, authored by a composer, directed by a conductor who
cues the performance of instrumentalists. Where is the music in this chain? At one point, at all points, or
somewhere external to all of it (e.g., in the listener's perception and parsing of the resultant sound)?
What is our opinion or expectation of composer, conductor, and performers and how does that
influence our conviction in the meaning of the text? "Regulating principles, such as point of view, irony,
and generic mode, derive from the speaker or the maker of the text. They explain in what sense to take
the sense of the words."[34]

Speech and Position is an aspect of music analysis which is often ignored or grossly oversimplified in
academic discussions of music. Those who would ignore the question entirely claim that a piece of music
is an object which can be considered "on its own merits", regardless of who composed it or performed it.
Of course this is just another example of the effects of the illusion of objectivity. The source of a piece of
music determines its social context and its artistic history, determines whether we hear that piece of
music at all. Consider, for example the vast socio-economic institution that is the cult of Beethoven. Is
Beethoven a great composer? How do you know? How many pieces did he write and how many of them
do you know? What does it mean to "know" a piece by Beethoven? How can someone who has been
dead for 150 years speak to you? Is Beethoven's Fifth Symphony a greater piece of music than Happy
Birthday? How do you know? Who wrote Happy Birthday? Why are there no "great" woman composers?
Why are there no black American presidents? How many of these questions have you ever heard asked
in a music class?

When questions of Speech and Position are treated at all, they are usually boiled down to standardized
platitudes regarding the composer as creator possessed of greater or lesser inspiration, performer as
interpreter of greater or lesser technical ability, etc. A more penetrating search would address a
comparison of text and subtext, the role of a perception of spontaneity in performance, and a whole
array of performance conventions.

A good performance is a fortunate combination and fusion of the composer's ideas and the player's.
Guitarist Pepe Romero discussed the importance of this fusion with me in a personal conversation.

As a player, when you take a piece of music you have to feel and become in tune with that composer,
with his mind and with his soul, and unite it to your own mind, to your own soul, to your own heart.
Then you can recreate the music so it has a freshness, and it sounds when the player plays it like he is
composing it also....Together [the composer and the player] make one and they merge together; you
cannot tell where one begins and the other ends. I know that when I play, and the music is really flowing,
I cannot tell the difference between the composer and myself.

Of course any music performance (like any presentation of ideas, such as a speech) depends largely on
the personal persuasiveness and the charisma of the performer. The performer must convince the
audience that she/he believes in that music, has made it personal. In the case of a musician who is
presenting someone else's ideas (a player of composed music, as opposed to an improviser), the player's
task takes on similarities to that of an actor.

It is one thing to use your own words and thoughts, and quite another to adopt those of someone else,
which are permanently fixed, cast as it were in bronze, in strong clear shapes. They are unalterable. At
first they have to be reborn, made into something vitally necessary, your own, easy, desired--words you
would not change, drawn from your own self.[35]
Stanislavski outlines the process a performer might use to achieve this goal.

The right, you might say classic, course of creativeness operates from the text to the mind; from the
mind to the proposed circumstances: from the proposed circumstances to the subtext; from the subtext
to the feeling (emotions); from emotions to the objective, desire (will) and from the desire to
action....[36]
From the point of view of the performer as interpreter of a text, it becomes clear how the dynamic
between the composer and the player affects the success with which the music is performed. Composed
music which the performer understands and finds sympathetic is more easily adopted as one's own;
music that is abstruse or alien requires greater "acting" skill of the performer.
The practice of performing music that was composed by someone else, or acting with text written by
someone else, seems very different from the practice of improvising with one's own musical ideas (or
even playing one's own composed or memorized music). There are musicians who play only other
people's music, and there are actors who have no desire to write or to direct others. They feel--whether
due to a lack of self-confidence, a conviction that they are not themselves "creative", or some other
reason--that they require a given text in order to perform. The distinction is often made between creative artists and
interpretive artists, as though these represented completely different personality types. I believe,
however, that this distinction is more one of degree and focus than of exclusive categories. (For more on
the creative role of the performer of notated music, see the discussion of notation, p. 41 ff.)

And what changes occur in our view of Speech and Position, and the relationship between composer
and performer, when one or more of those positions is occupied by a computer? How do our aesthetic
perceptions and expectations change with the knowledge (or suspicion) that music has been composed
or performed by a machine? Doesn't this have a profound effect on the question of "expression" in
music?

The dimension of Meaning and Reference encompasses not only the "sense" of the text, but also
how its true significance is affected by specific "frames of reference". "Observation of the referent
(within a specific frame of reference) independently of the words and their senses, influences the
decision on the meaning to be assigned to this sign."[37] The meaning of a word, or connections
between sentences, can change entirely based on what we know about their implicit frame(s) of
reference. Indeed, intentional confusion of frames of reference is a staple of comic theatrical dialogue
(Shakespeare, Wilde, the Marx Brothers, etc.). Similarly, the meaning of individual musical events, or of
connections between events, may be lost upon us if we are unaware (perhaps through lack of musical,
philosophical, or general cultural erudition) of an implicit frame of reference.

Meaning and Reference can be treated superficially by searching a piece of music for symbols which
draw on an extra-musical frame of reference and for clichés which refer to specific musical styles or
moods. This is frequently a pursuit of music appreciation courses, music criticism, program notes, and
the liner notes of recordings. More contemporary questions, however, are addressed by direct
confrontation of how meaning is affected by frames of reference. How does our musical perception
depend upon performance context, musical context, quotation (use of other music as a frame of
reference), the definition and role of noise, etc.?

Composer/philosopher John Cage brought these matters into clear focus in the latter half of this century,
seemingly almost single-handedly. His use of indeterminacy in notation and as a compositional method
brought the importance of the will of the composer into question. His piece 4'33" questions the
concepts of noise (unwanted sound) and silence, as well as the "framing" of a musical performance in
time and space. Because Cage's music so openly challenges basic assumptions, a great deal of it gives
rise to confusion or outrage among listeners who are unfamiliar with its philosophical bases.

The dialectic of Meaning and Reference is also closely related to Roger Schank's and Robert Abelson's
ideas of "scripts"--bodies of assumed, shared knowledge based on past experience.[38] Music sounds
continuous or discontinuous, dissonant or consonant, stylistically "correct" or not, out of place or not,
because we have built up elaborate scripts of "appropriate" musical behavior, based on prior experience.
To what extent can we reasonably expect to share or agree upon perceptions of music, given that each
of us has a different "cosmology" of past experience?

Semiology makes a distinction between symbols which are iconic and those which are not. A symbol
which is iconic resembles, exemplifies, or shares some property with the thing that it represents. An
example of an iconic symbol in instrumental music (well known to composers of scores for war movies)
is a snare drum playing a steady march rhythm. Not only does the snare rhythm symbolize military
activity, it actually is (or once was) one of the sounds of military activity. For the most part, however,
instrumental musical symbols, like linguistic symbols, are more commonly not iconic. The sounds in
music, like words...

...are abstract conventional signs. They have nothing in common with what they denote, and this gives
natural language [and music] the freedom of reflecting the world without being tied to it. In this
detachment, language gains an enormous discoursive power but loses whatever presentational capacity
it might originally have had. Words fail to present the difference between blue and green to the blind or
to the daltonic, and, as everyone knows, all the attempts to "translate" music into words invariably
appear awkward, crude, and inadequate. For there is neither transition nor similarity between the two
modeling systems...[39]
The lack of iconicity as a primary feature of both music and verbal language, as well as the obvious
dissimilarity of their grammatical systems, helps us to understand the difficulties of talking about music.
Of course, the use of concrete sounds in music, which rose to popularity in the 1940's and 50's and
which is seeing a new rise due to the availability of the digital sampling synthesizer, creates an entirely
new musical language, in which iconic symbols coexist with traditional musical sounds. Music theorists
have mostly avoided the analysis of such music, but perhaps semiotics will facilitate that analysis if and
when it is undertaken.

The dimension of Organized Text encompasses the significative power of the formal structure of the text.
In both literature and music the structure is mostly one of segmentation: volumes, chapters, paragraphs,
etc. in literature--movements, sections, periods, etc. in music. This segmentation is usually dependent
on the development of thematic material. Aspects of the content--e.g., events in a protagonist's life in a
novel, introduction and variation of motifs in music--drive the sense of formal divisions. Variety of
materials accounts to some degree for the varieties of formal structure, and the "patterns and
dimensions of the Organized Text...participate in the meaning."[40]

Because of its abstract nature (and perhaps because of an unwillingness to grapple with the first two
dimensions), the dimension of Organized Text (i.e., of form) has received considerable attention from
music theorists. Writings on musical form, however, are unfortunately mostly of the "tour guide" variety,
pointing to landmarks and formal curiosities in a piece of music. "Here's the B theme, and over there
you'll note the recapitulation on the horizon." Rarely is an effort made to explain the significance of
formal structure.

Certain composers are generally acknowledged to be builders of bold and impressive "architectural"
formal structures. Edgard Varèse and Iannis Xenakis come immediately to mind. Indeed, Xenakis is
famous as an architect as well as a composer, and Varèse attested to a deep interest in the structures of
nature.

I was not influenced by composers so much as by natural objects and physical phenomena. As a child, I
was tremendously impressed by the qualities and character of the granite I found in Burgundy...and I
used to watch the old stone cutters, marveling at the precision with which they worked. They didn't use
cement, and every stone had to fit and balance with every other. So I was always in touch with things of
stone and with this kind of pure structural architecture...All of this became an integral part of my
thinking at a very early stage.[41]
The "mobile" (or "open") forms popular in the 1950's and 60's allowed the performer to find his or her
own significance in the dimension of Organized Text. The semiotician Umberto Eco saw this trend as an
important extension of the established concept of formal "openness".

The definition of the "open work," despite its relevance in formulating a fresh dialectics between the
work of art and its performer, still requires to be separated from other conventional applications of this
term. Aesthetic theorists, for example, often have recourse to the notions of "completeness" and
"openness" in connection with a given work of art. These two expressions refer to a standard situation
of which we are all aware in our reception of a work of art: we see it as the end product of an author's
effort to arrange a sequence of communicative effects in such a way that each individual addressee can
refashion the original composition devised by the author.[42]
The technology of computer-controlled random access memory devices for music recording provides the
potential for still another type of open form in which the listener can truly restructure the form of a
piece of music.
I suggest that while a strict application of the tenets of linguistic semiotics to music would probably be
futile, the semiotic viewpoint proposed by Hrushovski, of seeking significance in different levels of
source, reference, and structure provides us with many new angles for approaching important and
largely under-emphasized aspects of music perception. In applying semiotic theory to music, it is
important to benefit from the methods of semiology without expecting that music will submit to
formulation as a system of signs.

Semiotics as a descriptive analytical method must be further refined and adjusted if it is to become a
useful and productive approach to the peculiarly complex system of music...for it seems somewhat
improbable that a concept formed on the basis of linguistics should have an immediate explanatory
power outside of its original boundaries....

If music is to be considered a sign system, then it is a very strange one: an icon which has nothing in
common with the object it presents; an abstract language which does not allow for a prior definition of
its alphabet and vocabulary, and operates with an indefinite, virtually infinite number of unique
elements...These discrepancies can be reconciled if music is approached in terms of semiotics, but
without its preconceptions.[43]
Information Theory

The application of information theory to music is treated at length in the response to the question by
Professor Lewis (p. 92 ff.), so I will restrict myself here to its indirect effects upon music theory and
analysis.

It is well known that our expectations play a vital role in our reaction to musical events. Our
expectations are based on knowledge of one or more prevalent styles of music, as well as on more local
considerations of context based on the immediate past. Information theory explores systems of
accumulating information as a sort of evidence, usually to attribute likelihoods to different
interpretations of that information and/or to make probabilistic predictions or decisions about the
future. In music, theorists often attempt to correlate ideas of probability (either intuitive or
mathematical) with the fulfillment or disappointment of expectations.
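The kind of probabilistic prediction described here can be sketched in a few lines. The melody and the first-order (Markov-style) tally below are invented purely for illustration; the text does not prescribe any particular model.

```python
from collections import Counter, defaultdict

# A toy melody as scale degrees (invented for illustration; not from the text).
melody = [1, 2, 3, 1, 2, 3, 4, 3, 2, 1, 2, 3, 4, 5, 4, 3, 2, 1]

# Tally first-order transitions: which degree tends to follow which.
transitions = defaultdict(Counter)
for a, b in zip(melody, melody[1:]):
    transitions[a][b] += 1

def expectation(degree):
    """Probability distribution over the next scale degree, given the current one."""
    counts = transitions[degree]
    total = sum(counts.values())
    return {nxt: n / total for nxt, n in counts.items()}

# After degree 3 this corpus makes degrees 2 and 4 equally likely (0.4 each),
# and a leap back to 1 less likely (0.2): a crude model of "expectation".
```

A fulfilled expectation would correspond to hearing a high-probability continuation; a "surprise" (high information, in the technical sense) to a low-probability one.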

Theorist Leonard Meyer suggests that expectations based on probabilistic evaluations of the local past,
as well as on Gestalt principles of perception, are "the nature of human mental processes", but that they
will generally be superseded by expectations based on learned musical style.

Paradoxical though it may seem, the expectations based upon learning are, in a sense, prior to the
natural modes of thought. For we perceive and think in terms of a specific musical language just as we
think in terms of a specific vocabulary and grammar; and the possibilities presented to us by a particular
musical vocabulary and grammar condition the operation of our mental processes and hence of the
expectations which are entertained on the basis of those processes.[44]
Still, ideas of probability appear to play a role in our very concept of musical style. As a teenager, I
taught myself the rules of eighteenth-century harmonic style using a book borrowed from the local public
library. The book, The Contrapuntal Harmonic Technique of the Eighteenth Century by Allen Irvine
McHose, supports its every assertion about harmony and voice leading with statistics of probability
compiled by painstaking analysis of all of the chorales of J.S. Bach. Whereas most textbooks might say,
"The root of a triad should usually be doubled," this book says,

A frequency study of Bach's doubling, in major and minor triads which are in root position, is as follows:

              Root   Third   Fifth   Exceptional
MAJOR TRIAD   88%    8%      3%      1%
MINOR TRIAD   84%    13%     2%      1%

The above study reveals that there is very little difference in Bach's method of doubling. When major or
minor triads are in root position, the root is the best one to double, the third next, and the fifth
last....[45]
Every single rule of chord usage and voice leading is statistically supported in a similar fashion. While
one may certainly question whether such a statistical analysis can usefully be extrapolated into a
generative rule set (see also the discussion of Markov processes in Dobrian, Chris, "Music and Artificial
Intelligence"), such methodological rigor is certainly unusual in music analysis.
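Read generatively, McHose's frequency table becomes a weighted sampling rule. The sketch below does exactly that with the percentages quoted above; the function name and data layout are my own, and whether such sampling captures anything of Bach's practice is precisely the question of extrapolation just raised.

```python
import random

# Doubling frequencies for root-position triads, as quoted from McHose above.
DOUBLING = {
    "major": {"root": 88, "third": 8, "fifth": 3, "exceptional": 1},
    "minor": {"root": 84, "third": 13, "fifth": 2, "exceptional": 1},
}

def choose_doubling(quality, rng=random):
    """Sample which chord member to double, weighted by the observed frequencies."""
    members = list(DOUBLING[quality])
    weights = [DOUBLING[quality][m] for m in members]
    return rng.choices(members, weights=weights, k=1)[0]
```

Over many trials the root is chosen for a major triad about 88% of the time; whether that constitutes a rule of the style, rather than a statistical residue of it, is the open question.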
Walter Piston's textbook, Harmony, has a similar (though not statistically supported) table of usual root
progressions. Meyer attests to the importance of probability in style when he says that such tables are
"actually nothing more than a statement of the system of probability which we know as tonal
harmony".[46]

I disagree that tonal harmony is merely a system of probability. As Meyer points out himself,
considerations of musical style play a much greater role in our expectations. It is also a leap to say that
recognition of probability leads directly to expectation. Furthermore, expectations are not the only basis
of our reaction to music. In a general sense, though, we do evaluate everything we perceive based on
our past experience, and information theory suggests a methodology for the study of the role of
expectation in our response to music.

Generative Grammar

Grammar is the set of rules which describe how words of a language may be combined. The rules of
grammar, once accepted (consciously or unconsciously), play an important role in the comprehensibility
and meaning of words. One pursuit of linguists is to analyze grammar, formulate its rules, and try to
discover the rules which govern grammars. Once the rules of a grammar have been formulated, they can
be used to generate phrases in the language.

A full description of a grammar should define rules which permit the generation of all sentences in a
language, without permitting the generation of sentences which are not characteristic of the language.
All but the simplest grammars will permit an infinite number of combinations of basic elements (words)
into meaningful groups (sentences).
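This generative property can be illustrated with a toy rewriting system. The miniature grammar below is invented for the example (its "words" are arbitrary); the point is only that a finite rule set containing one recursive rule already generates an unbounded language.

```python
import random

# A miniature, invented context-free grammar: each nonterminal maps to a list
# of alternative expansions. NP -> "the" N PP is indirectly recursive (PP
# contains another NP), so the rule set generates infinitely many sentences.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "PP"]],
    "PP": [["with", "NP"]],
    "VP": [["V", "NP"]],
    "N":  [["theme"], ["cadence"], ["motif"]],
    "V":  [["follows"], ["transforms"]],
}

def generate(symbol="S", rng=random):
    """Expand a symbol into a list of terminal words by random rule choice."""
    if symbol not in GRAMMAR:          # a terminal: emit the word itself
        return [symbol]
    expansion = rng.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part, rng)]

print(" ".join(generate()))  # a random well-formed "sentence" of the toy language
```

Every string the procedure emits is grammatical by construction, which is exactly the sense in which a grammar, once formulated, becomes generative.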

Some attempts have been made to apply linguistic grammar theory directly to music analysis. For
example, two theorists have tried to demonstrate a parallel between the deep, shallow, and surface
grammatical structures of spoken language, and the harmonic, modal, and melodic domains in jazz
improvisation.[47] Although this succeeds as an interesting (though incomplete) observation, it leaves
several questions unanswered and would have to be applied to considerably more examples to test its
validity as any sort of general structural principle of jazz improvisation.

Most music theory does not attempt to draw parallels between the behavior of language and that of
music, but rather tries to formulate the strictly musical rules by which basic elements are combined into
meaningful larger units. Observations of common traits of a style or common tendencies of sound
progressions do not, however, automatically constitute a grammar. And even once one feels confident
in having established a working set of rules with which to describe a style of music, it is always
questionable whether grammars from the domains of musical analysis and composition are
interchangeable.

Music Theory

Although the term music theory is commonly used, it is actually extremely rare that anyone attempts to
discover anything like a theory of music. Analyses of musical structure are almost invariably confined to
music of a single style, culture, or sub-culture. Often the claim of universality is implied (more out of a
sense of superiority, ignorance, or apathy toward those styles not considered than out of any real belief
in the universality of one's findings), but in fact theories of musical grammar (like any theory that views
music as a language) can only apply to some music, almost never to all music.

Music theory can be roughly divided into two categories: theories of how music is perceived and
theories of how music is composed. Although assumption of a connection between the two is entirely
reasonable, there is little inherent reason to believe that direct correlation always exists. Often theorists,
teachers, and students confuse these two categories, with distressing results. Watching the students in a
music theory class, as well as remembering my own days in such a class, I observe that most students
incline toward one of the two categories. Some, listening to the professor lecture on music theory
(diatonic tetrachords, harmonic minor scales, etc.), seem to be thinking, "Yeah! This is the kind of stuff
I'm interested in learning: how music really works, the rules that govern its construction." Others, I
imagine, are thinking "What in the world do I need to know all this for? What can this possibly have to
do with my appreciation of music on an emotional level?"

They are both right and they are both wrong. Aaron Copland hypothesizes that we listen on "three
separate planes", which he terms the "sensuous", the "expressive", and the "sheerly musical".[48] Of
course, he acknowledges that we don't really break up our listening so mechanically, but we actually
listen on all three planes simultaneously and "correlate them...instinctively".[49]

Without disagreeing with his three planes, I would present my own companion view. I believe we listen
and react to music somewhere along a continuum between the purely "visceral" and the purely
"intellectual". Sometimes we react to music viscerally (on a "sensuous" or "expressive" plane) as, for
example, with a particularly loud and violent passage or a particularly driving rhythm, and sometimes we
receive a more intellectual ("sheerly musical") pleasure, such as when we perceive well composed
counterpoint or interaction between abstract ideas. Too much time spent at either end of the
continuum often leads to boredom, because we seem to need both kinds of awareness of
the music.

Copland makes some reference to this when he observes that "many people who consider themselves
qualified music lovers abuse [the sensuous] plane in listening" and that "It is very important for all of us
to become more alive to music on its sheerly musical plane."[50] Indeed, many people think of music as
something of a warm bath which is supposed to soothe them and not provoke them in any way, either
viscerally or intellectually. Listeners who take this attitude, or who overuse the "sensuous plane" are in
effect cutting themselves off from at least half of the music's appeal; they're receiving only half the
experience that they could receive. A knowledge of music's construction can help one to address the
intellectual, "sheerly musical" aspect of the music, thus appreciating it in its totality more fully.

Furthermore, the ability to deal with the intellectual or "technical" aspects of a piece of music is
absolutely essential for composers and performers. If a maker of music doesn't understand the music
intellectually, she/he is incapable of conveying that portion of the music to listeners and so is only
presenting half an experience to begin with.

On the other hand, attempts to understand music in a purely intellectual or factual way are equally
futile. As Mephistopheles said to the Student in Goethe's Faust: "All theory...is gray. The golden tree of
life is green." Anyone who thinks that music theory explains how music works is overly optimistic. It can
really only give us ways of thinking about and terms of discussing music. Music theory tries to make
observations about music and formulate generalizations, which are then often presented to students as
rules. We must always remember that any theoretical idea about music is only a tiny part of the story--a
fragment of an illusion of understanding which, one hopes, helps us to appreciate music and work with
it.

What do we mean, in general, when we talk of a theory? We usually associate theories with scientific
pursuits or methods. In science one makes observations of natural phenomena and tries to formulate a
rational statement that is true and encompassing for all related phenomena. Scientific theorists dislike
having things be inexplicable or irrational; they try to conceive a rational explanation which holds true
for all the phenomena in question. This rational explanation can then be used as a tool for
understanding other examples of similar phenomena. Eventually, phenomena are observed which belie
the theory or demand a revision of the theory, and a new theory is devised, even though each new
theory can only be a provisional explanation of that which we do not fully understand. Nature doesn't
act according to laws; "laws" of nature are just our way of trying to get a theoretical grip on what nature
does. In other words, nature doesn't follow laws, the laws are derived from nature.

And so it is with music. Music is not some kind of blind servant of the "laws" made up by theoreticians.
Theoreticians observe music and try to formulate explanations and point out consistencies. In almost
every case one can find exceptions to the rule or theory in question. Often one will find evidence in
actual music that directly contradicts the traditional "wisdom" of a music theory book.

The main problem with music theory is that it isn't a theory at all. Theories about musical phenomena
are different from scientific theories in two very important ways. The first is that music theory is much
more tolerant of exceptions. When an exception to a scientific theory is discovered, the theory must be
revised to include or otherwise account for the exception. Music theorists much more readily say, "Well,
okay, there are exceptions to my theory, but in general..." This should underscore for us the fact that
theories of music are guidelines for understanding, not rules which explain (much less govern) all
behavior. This points out the second important difference. Scientists theorize about natural phenomena,
while music theorists consider musical phenomena which are the result of human decision-making. This
means, first of all, that humans can just as easily make the willful decision to defy or ignore the theories.
Secondly, this leads to a very common misconception about which came first, the music or the theory.
Some theorists even imply that music is composed in accordance with these theoretical ideas. They
treat their descriptive "rules" as if they were obligatory generative grammars. Good composers have
never made their decisions based on what a theorist told them was correct. Nor would we ever presume
that nature behaves the way it does because Isaac Newton or anyone else said it should.

Composers do have a knowledge of what theorists have surmised about the structure of music. That's
part of the education and skill of any composer, and a composer does in some sense use all accumulated
knowledge about theory and musical style to guide compositional decisions. Without such knowledge,
the composer would be swimming aimlessly in the great sea of "anything is possible." Composers
impose restrictions on themselves in order to give coherence to their ideas. But it is the set of
restrictions that composers choose which are later formulated by theorists, not vice versa. Schubert
never worried about whether he was using melodic minor or harmonic minor (although he was certainly
aware that he was raising the seventh degree of the A-minor scale when he wrote a G#). He did what he
did because of traditions in his musical culture and because the sound of it worked for him; only later
did a theorist call the collection of notes he used "melodic minor" or "harmonic minor". Beethoven
never said to himself, "I've heard tell of this thing called the 'harmonic minor scale'. Think I'll try to
include a couple of 'em in my next piece." For us, the term "harmonic minor scale" is just a handy way of
verbally conveying to each other that we're talking about a set of notes with a certain configuration--a
minor scale with the seventh degree raised.
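
The configuration in question is easy to state in pitch-class terms. As an illustrative sketch (the semitone patterns are the standard ones; the function and the sharp-only spelling are simplifications of my own):

```python
# Interval patterns, in semitones above the tonic, for two forms of the minor scale.
NATURAL_MINOR = [0, 2, 3, 5, 7, 8, 10]
HARMONIC_MINOR = [0, 2, 3, 5, 7, 8, 11]  # seventh degree raised one semitone

NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def spell_scale(tonic_index, pattern):
    """Spell out note names for a scale built on the given tonic (an index into NOTE_NAMES)."""
    return [NOTE_NAMES[(tonic_index + step) % 12] for step in pattern]

print(spell_scale(0, NATURAL_MINOR))   # A natural minor, ending on G
print(spell_scale(0, HARMONIC_MINOR))  # A harmonic minor, ending on G#
```

The point of the sketch is that the term names nothing more than this small difference of configuration; it says nothing about why a composer wrote a G#.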

Many fallacies based on the potential chicken-egg confusion between theory and practice--such as this
idea that Beethoven "used" the harmonic minor scale--are standard fare in the teaching of music theory.
These fallacies result from a progression of a) reductions of information, b) non-theories (i.e., theories
which generously permit exceptions), and c) leaps from one closed system to another. We can trace the
growth of such a fallacy as a four-step process. 1) A theorist tries to encapsulate and formulate
observations about musical phenomena or, much more frequently, observations about notation of
musical phenomena. As shown earlier (and described in the section on notation, p. 40 ff.) this results in
a translation, and almost invariably a reduction, of information. 2) This descriptive formula is then
symbolized by a single term (such as "harmonic minor"), which points to the more complex description
(which in turn describes the notation which in turn describes the musical phenomenon). Clearly this is
another reduction of information: we now have a symbol which is open to confusion because of many
possible frames of reference (just as a pointer to the computer address of a data structure contains
none of that structure's data, and is useless without access to the structure itself). 3) This term is then
codified as a theory or rule. However, unless the original description was absolutely complete, the rule
will have exceptions, and is therefore really a guideline rather than a rule. 4) Finally, a leap is made by
transporting this "rule" from the analytical domain in which it was developed into the compositional
domain, which may or may not (in most cases clearly does not) operate in obedience to the
concerns of the analyst. This progression of confusion--descriptive formulation, to descriptive term, to
descriptive "rule", to generative "rule"--results in textbook terminology which is of dubious utility to a
listener or performer, and of even more dubious utility to a composer.

This example shows the pitfalls of transferring too blithely from the descriptive to the generative.
Another example will show the danger of transferring in the other direction.

Pierre Boulez is a very interesting thinker, especially when dealing with general ideas on music and
culture such as form, criticism, taste, etc. However, when he discusses technical aspects of music
(especially his own), he is so overly concerned with the concept of the musical "language", of which
pitch is the fundamental structurable element (ah, perfect-pitchers, what are ya gonna do with 'em?)
that he is often guilty of the same sort of note-counting that he rails against in his nontechnical essays.
(In fact, the main body of his book Penser la musique aujourd'hui is a perfect example of this fascination
taken to its rather absurd extreme.)

What follows is an example from a talk of his I attended at the Centre Georges Pompidou in Paris. This
example was presented during a technical-but-for-the-general-public seminar on Language and
Perception.

On page 2 of Boulez's composition Éclat, the following gesture occurs. [musical example omitted]

Boulez points out the obvious, that this is basically a chromatic descent of three tritones, followed by a
B-flat.

To "explain" the presence of the B-flat he points out that, with the proper octave transpositions and
reordering of the first six notes, the B-flat can be shown to be the axis of symmetry upon which the first
six notes converge.
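
The symmetry claim itself is a simple pitch-class fact, which can be checked mechanically. Since Boulez's actual pitches are not reproduced here, the set below is an illustrative stand-in of my own, not a quotation from Éclat:

```python
def is_symmetric_about(pcs, axis):
    """True if inverting every pitch class around `axis` maps the set onto itself."""
    pcs = {p % 12 for p in pcs}
    return pcs == {(2 * axis - p) % 12 for p in pcs}

# Stand-in set (not Boulez's pitches): three tritone dyads that invert onto
# one another around pitch class 10 (B-flat).
example = [9, 3, 11, 5, 7, 1]
print(is_symmetric_about(example, 10))  # -> True
```

Verifying such symmetry on paper (or in code) is trivial; whether a listener performs anything like this computation while hearing the gesture is the question at issue.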

If all this is not terribly interesting, it is at least indisputable. He then makes a rather dubious claim: that
this organization is perceived when one hears the gesture. To me this implies that upon hearing the
B-flat an (extremely) astute listener retrospectively mentally performs the proper octave transpositions
and reordering of the first six notes to give logical meaning to the seventh. Perhaps Pierre Boulez and
Milton Babbitt and three other people in the world would perform this sort of mental acrobatics, but
even that is a little hard to believe. But let's assume for a moment that one does, either consciously or
unconsciously, recognize the "logic" of this organization of pitches. And let's even grant (since Boulez is,
after all, an acknowledged descendant of the Second Viennese School) that this type of symmetrical

structure is in some way more rewarding, when we arrive on that seventh note, than any other,
asymmetrical one. (If the last note had been E-natural, would I have mentally performed the orderings
and transpositions necessary to make E-natural a satisfying point of symmetry? If the last note had been
B-natural, would I have heard it as a "wrong note", and thought it was Prokofiev?) One has to choose
pitches, after all, and the ones he chose here are as good as any and better than most. So what's the
problem?

Firstly, Boulez and composers who share his views have generated, and continue to generate, the study
of musical language almost exclusively from a technical point of view (materials) rather than a
syntactical one (choices). Of course it's much easier to say what notes you used than it is to say how (not
even to mention why) you used them. To say, "I did this for these artistic reasons" requires stating
aesthetic values and putting them up to scrutiny. It's much safer just to stick to demonstrable facts. As a
result the general public and music students everywhere continue to hear composers and theorists give
talks exclusively on materials, without choices being discussed. Young composers develop the idea
(although one can hardly blame Boulez for the faults of lesser composers who attempt to imitate him)
that if they use symmetrical pitch structures (or whatever) they will write music. After all, they have only
been instructed on the materials, not the use of them. Yet, the example Boulez used here has the
potential for demonstrating some interesting and clear aspects of his (marvelous) use of the language
and its perception: the way that this selection and ordering of pitches carefully and elegantly avoids any
traditional tonal connotation; the way that disjunct, nondirectional contour, along with the speed and
the "l.v." marking, bring about the perception of arpeggio rather than melody; the way that the octave
transpositions add vital ambiguity to an otherwise simple chromatic descent of tritones; even, if one
really insists, the way that the accelerando and the dynamic crescendo heighten the sense of arrival on
the point of symmetry. One might even suggest discussing the sound of the example, given that we are
talking about language and perception. Maybe it simply sounds good. During this lecture, delivered in
one of the world's foremost centers of music and technology, Boulez never played a recording of the
example, nor even played it on a piano (the original instrument in the score).

Secondly, if the best-known and most respected composer in France gives this sort of lecture to the
general public (especially without playing what's being discussed), tells them that they're perceiving
symmetrical pitch aggregates and retrospectively reordering and transposing them in their heads (they
know they're not doing that, and can never hope to), and that this is how modern music is written and is
to be perceived, then not only is he indubitably giving them a monstrous inferiority complex and
alienating them from the very music upon which he proposes to enlighten them, but he is providing
information which is essentially useless as anything other than a curiosity.

A postscript to this example: As I was interested to see the context in which the passage in question
occurs, I proceeded to the library of the Pompidou Center to peruse the score.[51] (Example 1.) The
piano is doubled simultaneously at the superior perfect fifth by the harp (with a few octave
transpositions upward, presumably to facilitate the fingering or to create a more balanced over-all
sonority), at the superior major sixth by the vibraphone, and at the superior minor sixteenth by the
celesta. The actual sound of the score is thus [musical example omitted]. Given that the gesture is
played quickly by four different players, each with a slightly different concept
of accelerando, all acting on a short down-up cue from the conductor, I'm sure that even Boulez would
not hesitate to admit that what is perceived is a rapidly arpeggiated version of [musical example
omitted]. So much for convergence on a point of symmetry.


Implications for Music Education

The preceding examples suffer from two problems. They both are incomplete formulations within the
domain they treat, and the formulations are inappropriately transferred to another domain. "Harmonic
minor" is an inadequate description of the pitch set found in almost any piece of music, so it cannot
possibly work as a generative theory. Boulez's demonstration of a pitch set's "convergence on a point of
symmetry" is an insufficient explanation of his complex compositional technique, and so can never be an
accurate explanation of our perceptions.

In discussing music we must be content with partial explanations, and not attempt to draw overly
broad--much less universal--conclusions from them. We must be content to recognize and learn from
the limitations of our theories. After all, if we ever did succeed at adequately explaining the composition
and perception of music, it would become a static body of knowledge and would cease to be of interest.
In order to experience or discover anything new, we would be forced to violate the theories we had
established. As Professor Harkins wittily remarked in a personal conversation, "One gets the impression
that for some people the goal of thinking is to think so much that they eventually reach a point where
they no longer have to think."

Lest I give the impression of anti-intellectualism, I should point out that I am simply advocating an
intellectualism that permits and admits to uncertainty (which will always be present whether we choose
to permit it or not), enjoys and uses ambiguity for greater understanding (even if that understanding
takes a still more ambiguous form), and recognizes the dangers (or at least the limitations) of absolute
definitions.

Although it is more work for the teacher, theoretical analysis of music must steer sharply away from
textbooks--which are too far removed from the musical phenomena--and must derive real experiential
knowledge from music itself (aided by, but not replaced by, notation and other representation when
appropriate). This does not require that teachers and students re-invent the wheel. The teacher can help
the student avoid redundant information, help guide the student toward enlightening listening, and
point the student to established theoretical ideas once the student has sufficient experience with the
music itself to be an adequately critical reader of the theories.

Specialized Music Languages

So-called natural language is by no means the only language used for the description of music. Many
cultures use symbolic written notation to describe either the sound or its means of production
(tablature). Recently various special systems have also been developed to describe music to a computer.
I will consider only two specialized music languages here--Western classical notation and the Musical
Instrument Digital Interface (MIDI) protocol--with the aim of evaluating their effectiveness as
descriptors of musical sound. I will begin with a few general remarks about the utility of notation.

There are many reasons why one writes down a musical idea. Probably the earliest and most important
reason for notating music is to transmit it elsewhere in time and space. Before the invention of sound
recording the only way to recreate a musical experience in another time and place was to memorize it or
to notate it, then to play it again imitating as nearly as possible the original experience. Music was
repeated by memory well before notation was used, and human memory is probably a more thorough
(though more volatile) "storage medium" for saving and recreating musical experience. When music is of
a certain complexity or length, written notation shows some advantages over memorization because
paper is a more stable storage medium than the brain. In the short term, paper does not suffer from
memory loss, and serves as an aid in the predominantly oral transmission of musical culture. In the long
term, people and even whole societies may die, and notation serves as a partial safeguard against that
cultural loss. Notation is by no means free from a potential loss of information, however, since sonic
memory must be translated onto paper and back into sound; notation is invariably an incomplete
representation of sonic information.

Thus, notation of music can be viewed in some cases as simply an aid to memory or a preferred storage
medium for recreating music in another time and place. This raises the question, "Why recreate a musical
experience?" What does it even mean to "recreate a musical experience", given that any experience
(especially musical experience) is dependent upon its context in time? Perhaps the transplantation of a
musical experience in time is a way of achieving continuity in time.

One reason why humans seek to recreate musical experience is because a culture's music is central to its
image of itself. The improvised music of Gambian griots is a bearer of their nation's 500-year history.
Human memory is thereby harnessed in the service of cultural memory.[52]

Western Music Notation

In considering a specific system of notation, such as our standard five-line staff notation, it is important
to consider its biases and to question, "What aspects of sonic information are being adequately
described by this notation, and what information is being lost?" When the notation is retranslated into
sound by a performer, we may then ask, "What is actually being recreated--which aspects of the original
experience are retained and which are lost--when one plays from notation?"

The simplest approach to these questions is first to discuss what information is contained in most
notated Western music. Pitches and rhythms are notated in the greatest detail, and are generally
considered to be the primary bearers of musical information. Standard notation, however, generally
ignores pitch inflections such as portamento and indicates modifications of tempo only in the most
general of terms (rallentando, etc.). Musical parameters such as dynamics and timbre are notated even
less specifically or not at all. So all subtleties of portamento, rubato, dynamics, and timbre--the elements
considered most important to the expressive performance of music--are left unnotated, assumed known
by any performer of the score through an acquaintance with the conventions of the musical society.

From this fact comes the idea that a score must be interpreted by the reader; the non-notated elements
are provided according to societal conventions and the taste of the reader. It should be noted, though,
that all aspects of music are provided in this way--according to societal conventions and the taste of the
musician(s) involved--it's just that some aspects are notated in advance and others are not (they are
either memorized or improvised in real time). So even in the performance of a composed, notated piece,
many aspects of the musical performance are not notated and may not have been composed before the
performance. The point is that "interpretation" is really improvisation or memorized composition or
both. Insofar as a performer relies on memorized composition (i.e., prepared decisions, made during
rehearsal) to supply non-notated aspects of the music, the act is the same as that of a notator: musical
ideas are fixed in some storage medium for repetition at another time.

The notation in many of the works of composer Brian Ferneyhough, such as Cassandra's Dream Song for
solo flute, stems from his desire to notate, himself, those aspects which would otherwise be consciously
or unconsciously decided upon and memorized--or improvised--by the performer. It is debatable
whether this increase in notation frees the performer by reducing the number of decisions to be made,
or whether it overburdens the performer with responsibilities for conscious actions, some of which
would otherwise be made unconsciously based on physical technique and societal conventions. Since
the player of notated music composes or improvises all aspects of the music not explicated by the
notation, the extent of the player's contribution is determined only by the completeness of the notation
and the player's own conscientious attention to unwritten detail.

Ferneyhough's notation is representative of one trend, in the last fifty years of Western composed music,
to notate more and more aspects of the music in greater and greater detail. An opposing trend has been
to notate in less detail, writing out structures for guided improvisation, mobile forms, suggestive
graphisms, or some combination of these. In both trends, the composer is considered free to specify as
much or as little as desired, in effect determining how little or how much will be left up to the performer.
In most cases, however, the player is not considered to have the same freedom of determination,
especially in the traditionally strictly notated areas of pitch and rhythm. The reason for this difference is
simply the societal view that the composer is the one with something to say--the creator--and the player
is merely the vessel--the interpreter. This view has not always existed in classical music, however. It is
well documented that performers of concertos during the classical and romantic periods improvised or
composed their own cadenzas, and that composers such as Bach, Mozart, and Beethoven were also
formidable improvisers.

It has long been acknowledged that many "great" players play a lot of "wrong" notes. Clearly pitch and
rhythm are not the only bearers of musical expression or ideas. A player often may give an admirable
performance while disregarding much of the notation--still conveying some if not all of the essential
musical ideas of the composer, or conveying a full musical experience.

MIDI Data Protocol

The MIDI software specification is a protocol for transmitting performance instructions to computer-controlled musical instruments. In most cases the instrument is a synthesizer, but there also exist MIDI-controllable acoustic instruments such as pianos, as well as musical accessories--equalizers,
reverberators, etc.--and even non-musical devices such as stage lights. Theoretically, any device that can
be controlled by computer could implement the MIDI protocol.

The MIDI specification was devised in the early 1980s by music instrument manufacturers, primarily as a
way of achieving cheap and ready standardization of data transmission between equipment made by
different companies. Its evolution took place over a matter of a few years, as compared to the evolution
of standard notation which took place over several centuries. It does make considerable use of the
model of standard notation, however, and includes such basic conventional notions as notes, dynamics,
etc., thus inheriting both benefits and limitations of the standard notation system.
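
A sketch shows how literally MIDI inherits the note model of standard notation: a note is triggered by a three-byte Note On message carrying only a pitch number and a velocity (the byte layout follows the published MIDI 1.0 specification; the helper function is my own):

```python
def note_on(channel, pitch, velocity):
    """Build a 3-byte MIDI Note On message.
    Status byte is 0x90 plus the channel (0-15); pitch and velocity
    are each a single 7-bit data byte (0-127)."""
    assert 0 <= channel <= 15 and 0 <= pitch <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, pitch, velocity])

# Middle C (note number 60) at a moderately loud velocity on channel 0:
print(note_on(0, 60, 100).hex())  # -> "903c64"
```

Everything else about the resulting sound--timbre, envelope, tuning--is left to the receiving instrument, just as standard notation leaves it to the performer.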

F. Richard Moore, in his paper entitled The Dysfunctions of MIDI,[53] outlines what he sees as the
shortcomings of MIDI from a technical (computer science) standpoint. He contends that the extremely
wide adoption of MIDI as a standard runs the danger of producing contentment with a limited system
and causing stagnation in the progress of computer music development. It is true that the
manufacturers probably did not foresee the speed with which the computer music community would
push the use of MIDI to its technical limits. However, it is also clear, as evidenced by the enormous
popularity of MIDI, that this language was sufficient to fulfill many musicians' need for a standardized
and easily manageable way of implementing computer control of instruments. While the danger of
stagnation due to use of a limited language may be real, it is a basic problem with any notational system--including standard music notation--and is not unique to MIDI. When a
system of notation (or any language system) proves inadequate for new usages that arise, it is extended
(either by decree from a governing body or by developments in its vernacular usage) or abandoned.

Perhaps of greater concern to musicians are the musical assumptions and biases implied by the protocol.

When Western notation is used for post-audition (transcriptive) purposes, the lack of fidelity to the
original (mostly due to the rigidity of its quantization of pitch and duration) becomes obvious. Musical
data protocols based upon the Western notation system (MIDI) expose rather too brutally the poverty
of its transcriptive possibilities. Indeed, the area of pitch inflection itself ("basic" pitch plus/minus offset)
only becomes meaningful when viewed from the (disad)vantage point of this quantization.[54]
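
The rigidity of that quantization is visible in the conventional equal-tempered mapping from MIDI note number to frequency (A4 = note number 69 = 440 Hz); any pitch falling between adjacent note numbers must be expressed as a "basic" note plus a separate bend offset:

```python
def note_to_freq(note, a4=440.0):
    """Equal-tempered frequency in Hz for a MIDI note number (A4 = 69)."""
    return a4 * 2.0 ** ((note - 69) / 12.0)

print(note_to_freq(69))             # -> 440.0
print(round(note_to_freq(60), 2))   # middle C -> 261.63
```

The note numbers form a grid of twelve equal semitones per octave; the continuous pitch space of, say, a singer's portamento has no direct representation on that grid.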
As discussed above, the limited descriptive powers of standard notation are substantially supplemented
by the vast knowledge base and skills of the interpreter. This body of knowledge and skills includes
primarily stylistic conventions, but also includes the ability of the interpreter to interact with the sound
and the environment. When the "interpreter" of this standard notation (or another notation based upon
it, as in the case of MIDI) is a machine, such vitally necessary knowledge and responsiveness are
suddenly made conspicuous by their absence. That whole range of information must be supplied by the
software programmed into a mass-produced commercial synthesizer.

Designers of electronic instruments, sound synthesists, and music programmers attempt to fill this gap
in various ways. They may add complexity to music-producing algorithms in an effort to make the effect
more interesting. Unfortunately, even complexity can be simplistic, by being either too predictable, so
unpredictable as to be unengaging, or simply by being, for whatever reason, uninteresting as musical
discourse. It is known that one of the main reasons that the sound of an acoustic instrument like the
flute is so attractive is that the sound contains a vast and complex variety of subtle noises and
variations which go virtually (but not totally!) undetected by our ears, in addition to the more obvious
noises and variations that we do detect. Synthesists have tried to add noise, jitter, vibrato, pitch
inflection, etc. to synthesized sounds in an effort to simulate the complexity of acoustic sounds.
Unfortunately, for the most part, the precise nature of the variations which make acoustic sounds rich
and attractive remains undiscovered. These "microscopic" but all-important aspects of acoustic sound
are so complex, or complex in such an unusual way, as to be virtually undefinable in precise terms.

Similarly, the relationship between a virtuoso musician and his or her instrument is replete with an
overwhelming amount and degree of nuance, built up over years of intense listening and practicing. So
much of the nuance of the instrumentalist-instrument relationship is developed without being
linguistically defined, by listening, imitating, feeling. Furthermore, it seems to be largely stored not in
some intellectually explicable way, but in laboriously developed muscular reflexes, almost as if the brain
is entirely bypassed. In playing a single brief note, a violinist combines bow angle, bow speed, bow
pressure, bow placement, bow attack, finger placement, finger movement, finger pressure, in addition
to whatever totally involuntary muscular movements may be caused by nervousness, coffee
consumption, humidity, unknown electrical discharges in the brain, etc.--and all of these factors are
changing from millisecond to millisecond (more correctly, probably much, much faster than that),
modified by the brain in interactive response to the sound being produced, the sound others are
producing, the acoustics of the room, etc. It is no wonder then that a virtuoso performer of acoustic
instruments is dismayed by the lack of response of a synthesizer--the sound of which is already vastly
inferior to his or her ear--when she/he must manipulate with the foot--one of the less sensitive bodily
extensions--a pedal which has only 128 possible gradations, and the resulting effect is (for example) a
simplistic strictly regular vibrato of a low pass filter.
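
The 128 gradations follow from MIDI's 7-bit data bytes: a continuous pedal position must be quantized to one of 128 values before it can be sent as a Control Change message. A minimal sketch (the byte layout follows the MIDI 1.0 specification, in which controller 4 is the foot controller; the mapping function is my own):

```python
def control_change(channel, controller, value):
    """Build a 3-byte MIDI Control Change message; `value` is 7-bit (0-127)."""
    assert 0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127
    return bytes([0xB0 | channel, controller, value])

def pedal_to_cc(position):
    """Quantize a continuous pedal position in [0.0, 1.0] to 128 steps."""
    return round(position * 127)

print(control_change(0, 4, pedal_to_cc(1.0)).hex())
```

Between any two adjacent steps of that scale, the performer's motion simply disappears--one measure of the gap between instrumental nuance and what the protocol can carry.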

Efforts by programmers to add complexity to the instrumentalist-instrument interaction in the case of


synthesizers have suffered a fate similar to that of synthesists' efforts to add complexity to sound
materials. The complexities are so numerous, undefined, interconnected, and depend on so many
variable, constantly changing factors, that an attempt to reproduce them is invariably simplistic. When a
large amount of complexity is introduced, it may exceed the ability of the instrumentalist to control it
(since such control was previously largely subconscious), it may be complexity which does not actually
add to the performer's expressive or musical control, and it may simply be complexity which is not
sonically or musically engaging because of aesthetic taste.

Given all of these daunting problems, one must at least credit the originators of the MIDI specification
with providing the capability to express both discrete and continuous musical events, and for providing
the special system exclusive message, an escape hatch by which the specification can be extended by
individual instrument manufacturers. Although the language was designed with very specific musical
meanings in mind--symbols for triggering the beginnings and endings of notes of specific pitch and
loudness, inflecting the basic pitch by "bending" it, controlling over-all instrument volume continuously,
etc.--the actual interpretation of the information is up to the receiving instrument. MIDI is frequently
blamed for its inability to convey vital musical information, but the fault really lies more with the
assumptions of MIDI--how the information is to be interpreted or processed--than with the information
format itself (and how it could potentially be processed). It is important to remember that the symbols
of any language or notation are not the meaning; they merely point to meaning.
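
The pitch-bend message illustrates both points at once: the protocol provides a higher-resolution 14-bit value, but its musical meaning--how far the pitch actually bends--is left entirely to the receiving instrument. A sketch of the message itself (byte layout per the MIDI 1.0 specification; the helper is my own):

```python
def pitch_bend(channel, bend):
    """Build a 3-byte MIDI Pitch Bend message.
    `bend` is a 14-bit value (0-16383); 8192 means no bend.
    The 14 bits are split into two 7-bit data bytes, least significant first."""
    assert 0 <= channel <= 15 and 0 <= bend <= 16383
    return bytes([0xE0 | channel, bend & 0x7F, (bend >> 7) & 0x7F])

# Centered wheel (no inflection) on channel 0:
print(pitch_bend(0, 8192).hex())  # -> "e00040"
```

The same three bytes might bend a synthesizer's pitch by a semitone, a whole octave, or nothing at all, depending on how the receiver is configured: the symbol points to a meaning it does not contain.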

Implications for Music Education

Western music notation, for all its imperfections, does have the advantage of being a bona fide language.
The meanings of the symbols are quite well established and agreed-upon; although meaning is often
ambiguous, it is no more so than the meanings of words. The rules which govern the arrangement of the
symbols are likewise standard and thorough. It seems reasonable, therefore, to assume that they can be
successfully discussed and that notation can be a valuable tool for expressing ideas. Of course, it is
important when discussing music notation, and the structures found therein, to remember that "the
map is not the territory, and the name is not the thing named".[55] The ideas expressed in notation may
or may not be perceptible in their sonic manifestation.

The view of music notation as a language suggests that it might be taught similarly to a language. It is no
one's native language, to be sure, but ideas of second language acquisition might be applied to its
teaching. The following guidelines, designed for language teaching, are equally applicable to the
teaching of notation and notation-related skills (dictation, sightreading, etc.).

Some experts in second language teaching draw an important distinction between acquisition and
learning. Acquisition is essentially a subconscious process: one is not aware that one is acquiring a
language, only that one is using it effectively (i.e. obtaining the desired reactions); one is not aware of
using particular rules (or any rules at all, perhaps), but one has a feel for correctness and recognizes
when an error--a violation of the unstated rules--has been committed. Learning is knowing about
language--having explicit formal knowledge of the language and its rules. Grammar-based teaching--
teaching of rules through error correction and explanation--appears not to facilitate acquisition,
especially in children.

We acquire (not learn) language by understanding input that is a little beyond our current level of
(acquired) competence....Speaking fluency is thus not "taught" directly; rather, speaking ability
"emerges" after the acquirer has built up competence through comprehending input....In order for
acquirers to progress to the next stages in the acquisition of the target language, they need to
understand input language that includes a structure that is part of the next stage....

How can we understand language that contains structures that we have not yet acquired? The answer is
through context and extra-linguistic information....by adding visual aids, by using extra linguistic
context....we use meaning to help us acquire language.[56]

There is considerable evidence of a "natural order"[57] of acquisition of grammatical morphemes: some
grammatical structures and usages tend to be acquired relatively early, others tend to be acquired
relatively late. These tendencies are (for the most part) common to both first and second language
acquisition, to both children and adults, and to people with different first language influences. It would
be instructive to consider the possibility of a "natural order" of acquisition of musical skills. Which skills
and concepts in music reading and performance are inherently easier to acquire?

If we have some idea of the "natural order" of acquisition of structures, we can state the acquirer's
stage of development as point i in the natural order, and the next stage in the order as i+1. The
introduction of a new structure (i+1) is best achieved when the speaker "casts a net" of new structures
surrounding point i on the continuum. Provided that the speaker is understood, the speech will include
structure i+1, be comprehensible, and be more interesting than speech which tries to pinpoint i+1.
Furthermore, in a classroom situation each student will have a slightly different point i and point i+1.
Providing a "net" of input assures that everyone's individual i+1 will be introduced. In other words, the
progression introduced by the speaker should be "roughly tuned" to the desired order. This has
advantages over attempts to "fine tune" the acquirer's progression: since we are always taking a guess
as to where the student's current level is, we may miscalculate i+1 if we tune the input too finely;
roughly tuned input recycles and reviews, whereas finely tuned input progresses linearly without review;
roughly tuned input will be good for more than one acquirer at a time and "will nearly always be more
interesting than an exercise that focuses just on one grammatical point."[59] A too rigidly
"programmed" method of progressive steps risks being guilty of this excessively fine tuning, either
boring or losing a large number of students.
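The advantage of the "net" over pinpoint targeting can be illustrated with a toy model. The numeric levels, the net width, and the class in this sketch are hypothetical, chosen only to show why roughly tuned input survives a misestimate of point i while finely tuned input does not:

```python
# Hypothetical sketch: compare finely tuned input, which targets one
# exact level, with roughly tuned input, which casts a "net" of levels
# surrounding an estimated point i.

def finely_tuned(estimated_i):
    """Only the single structure presumed to be next (i + 1)."""
    return {estimated_i + 1}

def roughly_tuned(estimated_i, net_width=2):
    """A band of structures around the estimate, including review."""
    return set(range(estimated_i - net_width, estimated_i + net_width + 2))

# A class whose true levels differ, and a teacher whose estimate is off.
true_levels = [3, 4, 5, 6]
estimate = 4

fine = finely_tuned(estimate)
rough = roughly_tuned(estimate)

# How many students find their own i + 1 in the input?
served_fine = sum((i + 1) in fine for i in true_levels)
served_rough = sum((i + 1) in rough for i in true_levels)
print(served_fine, served_rough)
```

With the estimate off by one for most students, the pinpoint lesson reaches only one student's i+1, while the net reaches all four, at the cost of recycling some already-acquired material, which is exactly the review the rough approach counts as a benefit.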

For acquisition to take place, the acquirer must have a low "affective filter"[60], i.e., must be open to
input. Factors that contribute to this include a positive attitude toward the "speaker", a low-anxiety
environment, and some degree of self-confidence. Thus, "the students are not forced to [perform]
before they are ready [and] errors which do not interfere with communication are not corrected".[58]
An active interest in the subject matter is also essential to a low affective filter; the material being used
must be of relevance and interest to the students.

Conscious learning does seem to be useful when used as a monitor (editor): utterances are initiated by
the acquired system and may then be adjusted or modified by rule-based learning (either before or after
actually speaking or writing the idea). Monitoring is used in situations where a rule is learned but not yet
acquired and is easy enough to implement quickly, or in situations where the rule has been acquired.

We see the natural order for grammatical morphemes when we test students in situations that appear
to be relatively "Monitor-free", where they are focused on communication and not form. When we give
adult students pencil and paper grammar tests, we see "unnatural orders", a difficulty order that is
unlike the child second language acquisition order....

A very important point about the Monitor hypothesis is that it does not say that acquisition is
unavailable for self-correction. We often self-correct, or edit, using acquisition, in both first and second
languages. What the Monitor hypothesis claims is that conscious learning has only this function, that it is
not used to initiate production in a second language.[61]

This indicates that the utility of conscious learning is largely restricted to the monitor function.

If an effort is to be made to teach music skills such as singing, sightreading, and dictation by acquisition,
the teacher must provide constant examples, presented in an appropriate "natural" order of difficulty,
and create an environment conducive to learning by example. Rules of music theory or conscious
strategies, as applied to music performance, should be taught after acquisition has gotten underway,
and should be employed only for monitoring purposes. Writing music is highly recommended as a way
of applying both acquired and learned structures. Even if good writing skills are not the objective, this
can be a good way of developing monitoring and applications of rules.

Teaching by Example

After all this discussion of music and language, and the problems inherent in attempting to correlate
the two, one is tempted to conclude that the only way to improve one's knowledge of music is through
music. Education in all the surrounding trappings of musical culture--literature, history, etc.--and all the
ideas related to or derived from music--abstract form, psychoacoustics, etc.--is certainly valuable. But
for purely musical insight, nothing succeeds like music.

I therefore posit a teaching method, applicable both to musical skills and music appreciation, in which
music is taught by example. This is hardly a new idea. In many cultures music is very successfully learned
entirely through experience and imitation. Those cultures are not in any way stagnant as a result of a
paucity of discussion. Original thinkers do their original work without being told how to do it. (If they
were told how to do it, it wouldn't be original, nor would they.) Yet this obviously successful means of
music education is eschewed in the university, presumably on the grounds that it is anti-intellectual or
insufficiently liberal. Such fallacies are rooted in the beliefs that only skills--not ideas--can be taught by
example and that ideas can only be taught by verbal instruction.

The goal of the method is for the student to acquire personally meaningful knowledge about music
using music itself. Spoken/written language is to be used as little as possible (if not less). The most
fundamental and most common activity is making comparisons between sounds, musical excerpts,
performances of music, and occasionally visual images (without linguistic comment). Sounds may
include instrumental sounds and combinations thereof, concrete sounds, i.e., anything. Musical excerpts
may include sections, phrases, individual moments, individual parts (a single line of a polyphonic texture,
a single instrument in a group, etc.) of "existing" music or newly composed/played music. Different
performances or renditions of "the same" music may be compared. Visual images may be used to
reinforce sonic perception (e.g., computer display of played music, intellectually and emotionally
evocative representational and non-representational images, etc.). The idea is to isolate musical ideas,
rather than ideas developed in other domains.

When the topic is musical "materials" (the sound itself), visual images may be related abstract forms, or
direct representational depictions of some aspect of the sound or some theory about the sound (e.g.,
related arts, graphical scores, etc.). When the topic is the cultural environment of music (for, after all,
music does not exist in a vacuum), representational images depicting performance setting, surrounding
social conditions, etc. may be more appropriate (e.g., videos, iconography, etc.).

In short, the intention of the method is to teach only by providing opportunities for acquisition.
Conscious learning is certainly not discouraged, but it takes place outside of the classroom.

Teaching entirely by example is certainly not without pitfalls. Without any verbal explanation, how does
the student know what is being exemplified? If that's not sufficiently clear, the student may a) have no
idea what she/he is supposed to derive from the example, b) draw the wrong conclusion from the
example, or c) draw an unintended but valuable conclusion. Obviously, a and b are probably not so
desirable, whereas c could be desirable, especially if the aim is for the student to create her/his own
path to her/his own essential information. How does one ensure that c is achieved, though, and not a or
b?

Multiple examples can be useful for exemplifying without explicating. Care must be exercised that the
common traits of the examples are only (or most obviously) those that are at issue (otherwise, see a, b,
and c, above). Rigorous comparison of multiple examples can help eliminate b (or ensure that a b is really
a c).
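This selection criterion can be stated in code. In the following sketch (the traits and examples are hypothetical), each example is modeled as a set of traits; what a careful student can safely infer is the intersection of those sets, so the examples must be chosen so that the intersection contains only the trait at issue:

```python
# Hypothetical sketch: each example is a set of traits; what a careful
# student can infer from multiple examples is their intersection.

def inferable_traits(examples):
    """Traits shared by every example in the list."""
    return set.intersection(*examples)

# Target concept: "syncopation". Poorly chosen examples share an
# irrelevant trait (all in minor keys), inviting the wrong conclusion (b).
poor = [{"syncopation", "minor key"},
        {"syncopation", "minor key"}]

# Better: the examples vary in every respect except the trait at issue.
better = [{"syncopation", "minor key", "fast tempo"},
          {"syncopation", "major key", "slow tempo"}]

print(inferable_traits(poor))    # ambiguous: two candidate traits remain
print(inferable_traits(better))  # only the intended trait remains
```

The model also shows why adding examples can only narrow, never widen, the set of wrong conclusions available to the student: each new example removes any shared trait it fails to exhibit.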

Counter examples are another tool for exemplifying without explicating. The fact that an example is
intended as a counter example should be indicated to avoid unnecessary confusion. (If it's not obvious,
though, it may not be a good enough counter example.)

The question seems to come down to: Is the intent to guide the student to a specific conclusion, or is it
to present evidence and allow the student to reach an independent conclusion? If the former, then
explication is more direct and (by definition) less ambiguous than example. If the latter, then explication
is virtually impossible (for it is insufficiently ambiguous), and example is (by definition) the "evidence".

In fact, the real goal is to teach ways of drawing conclusions independently, from the evidence, rather
than presenting foregone conclusions. So one must first teach bases of decision-making and evaluation.
Can these, too, be taught by example? One of the founders of UCSD, Roger Revelle, recounted:

[It was the conclusion of the founders of UCSD that] the object of a college education was not to acquire
a body of knowledge, but to learn how to learn. We felt that in the rapidly changing world of our times,
few of the so-called facts one learns in college were likely to be useful, or even true, ten years later. But
what would be permanently useful were the language and the logic of a science or a humanistic field,
the demonstration that it is possible to learn something new that is not already known--in other words,
to make discoveries--and that it is an exciting and satisfying thing to do so.[62]

Notes

information. See Dobrian, Chris. "Music and Artificial Intelligence", 1992.


meaning. Knowledge that is considered the end goal, the ultimate significance, of a perceptive and
evaluative process. (c.f. significance.)
Erickson, Robert. Sound Structure in Music. University of California Press: Berkeley, 1975. p. 1.
Bateson, Gregory. Mind and Nature: A Necessary Unity. New York: E.P. Dutton, 1979. (New York:
Bantam Trade Edition, Bantam Doubleday Dell Publishing Group, 1988.) p. 31.
Russell, Bertrand. Selected Papers. New York: Random House, 1955. p. 358.
Longfellow, Henry Wadsworth. Outre-Mer: A Pilgrimage Beyond the Sea, 1835.
Stravinsky, Igor. Chronicle of My Life. London: Victor Gollancz, 1936. pp. 91-92.
Stravinsky, Igor. Poetics of Music. New York: Alfred A. Knopf, 1947. p. 79. (Lectures delivered in 1939-1940.)
Copland, Aaron. What to Listen for in Music. New York: McGraw-Hill, 1957. pp. 73-74.
Ibid. pp. 13-14.
Ibid. p. 12.
Cooke, Deryck. The Language of Music. New York: Oxford University Press, 1959. pp. 15, 33 & 32.
Hindemith, Paul. A Composer's World, Horizons and Limitations. Gloucester, MA.: Peter Smith, 1969. p.
40. (Charles Eliot Norton lectures, 1949-1950, copyright 1952.)
Cooke. op. cit. pp. 21-22.
Salzman, Eric. Twentieth Century Music: An Introduction. Englewood Cliffs, New Jersey: Prentice-Hall,
Inc., 1974. p. 48.
Cooke. op. cit. p. 106.
Reimer, Bennett. A Philosophy of Music Education. Englewood Cliffs, New Jersey: Prentice-Hall, Inc.,
Second Edition, 1989. pp. 23 & 17.
Ibid. pp. 23-26.
Ibid. pp. 27-28.
Ibid. pp. 28-29.
significance. The concrete object or abstract concept which is signified (referred to) by a symbol or
grammatical construction of symbols. (c.f. meaning.)

Daniélou, Alain. The Rāga-s of Northern Indian Music. London: Barrie & Rockliff the Cresset, 1968. p. 8.
Orlov, Henry. "Toward a Semiotics of Music". The Sign in Music and Literature. Wendy Steiner, ed.
Austin: University of Texas Press, 1981. p. 131.
Boulez, Pierre. "Sound and Word". Notes of an Apprenticeship. Herbert Weinstock, trans. New York:
Alfred A. Knopf, 1968. pp. 52-56. (Relevés d'apprenti. Paris: Editions du Seuil, 1966.)
Stravinsky, Igor. Poetics of Music. pp. 26-27.
Kodály, Zoltán. Visszatekintés. [In Retrospect.] Ferenc Bónis, ed. Budapest: Zeneműkiadó, 1964. Vol. I.
Sellars, Peter in Moyers, Bill. A World of Ideas II; Public Opinions from Private Citizens. New York:
Doubleday, 1990. p. 24.
Suzuki, Shinichi. Remarks at the "National Festival", Tokyo, Japan, 1958.
Dewey, John. Art as Experience. New York: Capricorn Books, 1958. p. 74.
Anderson, Laurie. Aphorism displayed on a wall in the film Home of the Brave, 1985.
Orlov. op. cit. p. 131.
Hrushovski, Benjamin. "The Structure of Semiotic Objects: A Three-Dimensional Model". The Sign in
Music and Literature. Wendy Steiner, ed. Austin: University of Texas Press, 1981. p. 12.
Ibid. p. 18.
Ibid. p. 19.
Stanislavski, Constantin. Creating a Role. Elizabeth Reynolds Hapgood, translator. New York: Routledge,
1961. p. 262.
Ibid. p. 270.
Hrushovski. op. cit. p. 19.
Schank, Roger and Abelson, Robert. Scripts, Plans, Goals, and Understanding. Hillsdale, New Jersey:
Lawrence Erlbaum Associates, 1977.
Orlov. op. cit. p. 133.
Hrushovski. op. cit. p. 23.
Varèse, Edgard in Schuller, Gunther. "Conversation with Varèse". Perspectives of New Music, 3:2, 1965.
pp. 32-37.
Eco, Umberto. The Open Work. Translated by Anna Cancogni. Cambridge, Massachusetts: Harvard
University Press, 1989. p. 3.

Orlov. op. cit. pp. 132 & 136.


Meyer, Leonard B. Emotion and Meaning in Music. Chicago: The University of Chicago Press, 1956. pp.
43-44.
McHose, Allen Irvine. The Contrapuntal Harmonic Technique of the Eighteenth Century. Englewood
Cliffs, New Jersey: Prentice-Hall, Inc., 1947. p. 19.
Meyer. op. cit. p. 54.
Perlman, Alan M. and Greenblatt, Daniel. "Miles Davis Meets Noam Chomsky: Some Observations on
Jazz Improvisation and Language Structure". The Sign in Music and Literature. Wendy Steiner, ed. Austin:
University of Texas Press, 1981. pp. 169-183.
Copland. op. cit. p. 9.
Ibid. p. 18.
Ibid. pp. 10 & 17.
Boulez, Pierre. Éclat. London: Universal Edition, 1965. p. 3.
Lewis, George. Personal correspondence, 1991.
Moore, F. Richard. "The Dysfunctions of MIDI". Proceedings of the International Computer Music
Conference. San Francisco: Computer Music Association, 1987. pp. 256-263.
Lewis. op. cit.
Bateson. op. cit. p. 30.
Krashen, Stephen D. and Terrell, Tracy D. The Natural Approach. Hayward, CA: The Alemany Press, 1983.
p. 32.
Ibid. p. 28.
Ibid. p. 35.
Ibid. p. 38.
Ibid. p. 20.
Ibid. p. 31.
Revelle, Roger. Paper to the Revelle College Renaissance Revisited Convocation. La Jolla, California: date
unknown. (As printed in the Los Angeles Times, July 21, 1991.)
