Alan Kirby
2009
www.continuumbooks.com
All rights reserved. No part of this book may be reproduced, stored in a retrieval
system, or transmitted, in any form or by any means, electronic, mechanical,
photocopying, recording, or otherwise, without the written permission of the
publishers.
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
3. A Prehistory of Digimodernism . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Industrial Pornography
Ceefax
Whose Line is It Anyway?
House
B. S. Johnson’s The Unfortunates
Pantomime
Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Works Cited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Introduction
Since its first appearance in the second half of the 1990s under the impetus
of new technologies, digimodernism has decisively displaced postmodern-
ism to establish itself as the twenty-first century’s new cultural paradigm.
It owes its emergence and preeminence to the computerization of text,
which yields a new form of textuality characterized in its purest instances
by onwardness, haphazardness, evanescence, and anonymous, social and
multiple authorship. These in turn become the hallmarks of a group of
texts in new and established modes that also manifest the digimodernist
traits of infantilism, earnestness, endlessness, and apparent reality. Digi-
modernist texts are found across contemporary culture, ranging from
“reality TV” to Hollywood fantasy blockbusters, from Web 2.0 platforms
to the most sophisticated videogames, and from certain kinds of radio
show to crossover fiction. In its pure form the digimodernist text permits
the reader or viewer to intervene textually, physically to make text, to add
visible content or tangibly shape narrative development. Hence “digimod-
ernism,” properly understood as a contraction of “digital modernism,” is a
pun: it’s where digital technology meets textuality and text is (re)formulated
by the fingers and thumbs (the digits) clicking and keying and pressing in
the positive act of partial or obscurely collective textual elaboration.
Of all the definitions of postmodernism, the form of digimodernism
recalls the one given by Fredric Jameson. It too is “a dominant cultural
logic or hegemonic norm”; not a blanket description of all contemporary
cultural production but “the force field in which very different kinds of
cultural impulses . . . [including] ‘residual’ and ‘emergent’ forms of cultural
production . . . must make their way.”2 Like Jameson, I feel that if “we do
not achieve some general sense of a cultural dominant, then we fall back
into a view of present history as sheer heterogeneity, random difference . . .
[The aim is] to project some conception of a new systematic cultural
norm.”3 Twenty years later, however, the horizon has changed; the domi-
nant cultural force field and systematic norm is different: what was post-
modernist is now digimodernist.
The relationships between digimodernism and postmodernism are
various. First, digimodernism is the successor to postmodernism: emerg-
ing in the mid-late 1990s, it gradually eclipsed it as the dominant cultural,
technological, social, and political expression of our times. Second, in its
early years a burgeoning digimodernism coexisted with a weakened,
retreating postmodernism; it’s the era of the hybrid or borderline text (The
Blair Witch Project, The Office, the Harry Potter novels). Third, it can be
argued that many of the flaws of early digimodernism derive from its con-
tamination by the worst features of a decomposing postmodernism; one of
the tasks of a new digimodernist criticism will therefore be to cleanse its
subject of its toxic inheritance. Fourth, digimodernism is a reaction against
postmodernism: certain of its traits (earnestness, the apparently real)
resemble a repudiation of typical postmodern characteristics. Fifth, histor-
ically adjacent and expressed in part through the same cultural forms,
digimodernism appears socially and politically as the logical effect of post-
modernism, suggesting a modulated continuity more than a rupture. These
versions of the relationship between the two are not incompatible but
reflect their highly complex, multiple identities.
On the whole I don’t believe there is such a thing as “digimodernity.”
This book is not going to argue that we have entered into a totally new
phase of history. My sense is that, whatever its current relevance in other
fields, postmodernism’s insistence on locating an absolute break in all
human experience between the disappeared past and the stranded present
has lost all plausibility. The last third of the twentieth century was marked
by a discourse of endings, of the “post-” prefix and the “no longer” structure,
an aftershock of 1960s’ radicalism and a sort of intellectual millenarianism
that seems to have had its day. Like Habermas, I feel that, ever more
crisis-ridden, modernity continued throughout this period as an
“unfinished project.” Although the imponderable evils of the 1930s and 40s
could only trigger a breakdown of faith in inherited cultural and historical
worldviews such as the Enlightenment, the nature and scale of this reac-
tion were overstated by some writers. In so far as it exists, “digimodernity”
is, then, another stage within modernity, a shift from one phase of its his-
tory into another.
Certain other kinds of discourse are also not to be found here. I won’t be
looking at how digitization actually works technically; and I won’t do more
than touch on the industrial consequences, the (re)organization of TV
channels, film studios, Web start-ups, and so on, which it’s occasioned. I’m
a cultural critic, and my interest here is in the new cultural climate thrown
up by digitization. My focus is textual: what are these new movies, new TV
programs, these videogames, and Web 2.0 applications like to read, watch,
and use? What do they signify, and how? Digimodernism, as well as a break
in textuality, brings a new textual form, content, and value, new kinds of
cultural meaning, structure, and use, and they will be the object of this
book.
Equally, while digimodernism has far-reaching philosophical implica-
tions with regard to such matters as selfhood, truth, meaning, representation,
and time, they are not directly explored here. It’s true that these arguments
first saw the light of day in an article I wrote for Philosophy Now in 2006,
but the cultural landscape was even then my primary interest.4 In that arti-
cle I called what I now label digimodernism “pseudo-modernism,” a name
that on reflection seemed to overemphasize the importance of certain
concomitant social shifts (discussed here in Chapter 7). The notion of
pseudomodernity is finally a dimension of one aspect of digimodernism.
The article was written largely in the spirit of intellectual provocation;
uploaded to the Web, it drew a response that eventually persuaded me the
subject deserved more detailed and scrupulous attention. I’ve tried to
address here a hybrid audience, and for an important reason: on one side,
it seemed hardly worth discussing such a near-universal issue without
trying to reach out to the general reader; on the other, it seemed equally
pointless to analyze such a complex, multifaceted, and shifting phenome-
non without a level of scholarly precision. Whatever the result may be, this
approach is justified, even necessitated, by the status and nature of the
theme. Finally, considerations of space precluded extensive discussion of
postmodernism, and the text therefore assumes that we all know well
enough what it is/was. Anyone wishing for a fuller account is advised to
read one of the many introductions available such as Simon Malpas’s
The Postmodern (2005), Steven Connor’s Postmodernist Culture (1997, 2nd
edition), or Hans Bertens’s The Idea of the Postmodern (1995).
I begin by assessing the case for the decline and fall since the mid-late
1990s of postmodernism, in part as a way of outlining the context within
The Arguable Death of Postmodernism
[T]wenty years ago, the concept “postmodern” was a breath of fresh air, it suggested
something new, a major change of direction. It now seems vaguely old-fashioned.
Gilles Lipovetsky, 2005.1
How might you know that postmodernism was dead? To call anything
“obsolete,” “finished,” or “over” is clearly to fuse an allegation of previous
existence with one of contemporary absence, and let us leave to one side for
now any objections to the first proposition. Assuming that postmodernism
was once alive, what would it mean to say it was dead? This is partly a
request for evidence, but there is a more fundamental problem here, to do
with the fixing of criteria for the claim, which doesn’t apply to calling a
sentient being deceased or an event concluded. We don’t really know what
the criteria for such a claim are. Yet cultural or historical periods do end:
nobody seriously believes that terms such as the Stone Age or the Dark
Ages, the Renaissance or Romanticism are appropriate or useful in the def-
inition of social or artistic trends at the start of the twenty-first century.
Despite this, it can still be felt that some of the traits of expired eras linger
on, possibly in subsumed or mutated form, and this can be asserted com-
pellingly; it can also be asserted, though, and in slightly different ways, for
sentient beings and events. As a result, it can be argued with absolute assur-
ance that a day will come when postmodernism is over as an appropriate
or useful category to define the contemporary, even if some of its traits were
to survive. It will only be a question of working out when this happened.
Discussing the possible eclipse of postmodernism with some students, I saw
one or two of them bridle as though irritated by or contemptuous of the
idea. But this can only be historical ignorance or fear of the unknown: of
course, one day it must be gone. And the problem of knowing when that
day is, is also the problem of deciding on the criteria for such a death.
successor has been announced (as by the present author). In each case I
shall ask whether this constitutes evidence of the death of postmodernism
and, if so, how much. Alert readers may notice that the cases are, as the
chapter wears on, increasingly self-conscious about the fatal implications
for postmodernism of their agenda; how this relates to the persuasiveness
of that agenda is another matter entirely, though.
Pixar’s release of Toy Story in 1995 was a digimodernist landmark: the first
entirely computer-animated feature film. Technically it looked stunningly new,
and was immediately acclaimed as much a turning point in the history of
cinema as when The Jazz Singer had introduced sound. Yet, as the first
postmodern children’s movie, Toy Story’s interest lies more in its content
than in its quickly superseded technological innovations. Its postmodern-
ism derives partly from its hybridity, its fusing of children’s and grown-ups’
fictive modes: it blends traditions in children’s cinema (animation, the
child’s perspective, magic [toys that come alive], themes of loss and resto-
ration) with jokes for adults about Picasso, allusions to horror movies like
Freaks or The Exorcist, and a speed of dialogue and cutting not dissimilar
to that of MTV. It generates an apparent “cleverness” which is more like
street-smartness; it’s sharp and knowing, but in a largely negative and
uninformed manner (seeing through bogusness), which had a lasting
influence on the next decade’s cartoons; it led, for instance, to the faultfind-
ing reaction of the cubs to a story they are told in Blue Sky’s Ice Age: The
Meltdown (2006)—a destructive rather than an enabling “cleverness”
because it has been stripped of actual knowledge. Ironic, knowing, skepti-
cal, aware of and ambivalent about narrative conventions and codes, the
tone and mood of Toy Story are pervasively postmodern.
Hitherto, the heroes and heroines of animation had tended to be legend-
ary or mythopoeic characters drawn from traditional fairy-tale or adven-
ture sources. Those of Toy Story, a children’s fiction about children’s fictions,
however, are merchandising, action figures bought by parents in the wake
of visits to the cinema or purchases of videos; each one therefore com-
memorates, and brings to the film, his batch of preceding texts. Woody is a
cowboy toy who imports into the film the world of the heroic Western; he
is a hero, a commanding and resourceful leader, and calls up a raft of cul-
tural and cinematic memories and references. The first part of the film
focuses on his apparent supersession in his owner’s affections by the
spaceman toy Buzz Lightyear, which, on one level, suggests the cultural shifts in
the United States from the kind of “pioneer” heroes erected by the 1950s
(Roy Rogers, etc.) to those of the 1970s (Neil Armstrong, Buzz Aldrin,
etc.). This story strand places the narrative closer to the experience of par-
ents (probably born about 1954–64) than to that of their children. Buzz
Lightyear, as his name suggests, is part-Aldrin, part-Luke Skywalker, and
invests the movie with his own raft of cultural and cinematic memories
and references around the heroic sci-fi/fantasy movie, especially Star Wars.
Many of the toys occupy such worlds, their interaction thereby becoming a
chaotic and comic jostling and intermingling of different textual sources.
While Woody is a simple citation, some of the other characters are paro-
dies of their original: the angst-ridden dinosaur looks back to the monsters
of Jurassic Park whom he ought to resemble but doesn’t, while simultane-
ously evoking memories of the cowardly lion of The Wizard of Oz. The film,
then, is largely composed of a quantity of cultural quotations crossing,
bumping, overlapping, and mingling with each other to highly postmod-
ern effect.
Yet its postmodernism goes even beyond that. Focusing on two small
boys, one loving, the other monstrous, it is striking for its absence of fathers;
the boys have sisters and mothers but there are no adult males, problema-
tizing the issue of who the boys are supposed to grow up into. “Adult” mas-
culinity—the fully developed male—is of course represented to the boys
through Woody and Buzz, and is therefore suffused with heroic assump-
tions; yet the thrust of the film, in accordance with postmodern theory,
demonstrates that such heroic masculinity is not so much “natural” as con-
structed by society, specifically as manufactured by corporate marketing
departments. There are powerful echoes here of Blade Runner. Buzz has to
learn the depressing news that, instead of the free, authentic individual he
believes himself to be (has been programmed to think he is), he is a com-
modity, advertised on television, sold in industrial quantities in shops,
identical to thousands of others. Toy Story 2 (1999) even contains a self-
referential gag, voiced as the characters tour a toy shop passing endless
replicas of themselves on sale for money, about the insufficient numbers of
Buzz Lightyear action figures stocked by retailers in 1995 in overly pessi-
mistic anticipation of the first Toy Story. This strand is powerfully anties-
sentialist; it reduces the self to something fabricated and sold by global
corporations (the toys discuss which companies made them—Mattel, Play-
skool—like children telling each other their parents’ names). This, then, is
what the boys will grow up into: elements of international capitalism,
employees and consumers, all identical to each other, saturated in advertis-
ing and shopping. Selfhood is participation in the marketplace as worker,
Whatever else it is, Shrek is a fairy tale: it recounts the journey of a brave
hero and his quadruped companion to a castle to save a beautiful princess
from a deadly dragon’s clutches, culminating in his marrying her and living
happily ever after. It is also a deconstruction of these conventions, as the
ripping of the page foreshadows: the “brave hero” is a stinking, ugly ogre,
who journeys for his own ends and not for love or honor; the “quadruped”
is a garrulous, cowardly donkey, irksome to the “hero”; the “beautiful prin-
cess” is turned by true love’s kiss (following a curse) into a fat, green, and
unattractive version of herself; the “deadly dragon” falls in love with and
marries the donkey; sequels would show that the married life of the “hero”
and “heroine” was anything but happy “ever after.” Early on a crowd of fic-
tional characters from an array of different narratives take very postmod-
ern refuge in the ogre’s swamp: there are the Three Little Pigs, Snow White
and the Seven Dwarfs, the Pied Piper of Hamelin (plus rats), the Three
Blind Mice, Pinocchio, and so on. Though referred to as “fairy-tale crea-
tures,” the net is actually cast slightly wider than that to encompass figures
from nursery rhymes and more recent children’s stories (e.g., Tinkerbell).
More precisely, they suggest the principal cast of Disney’s collective back
catalog, uprooted from their fictional settings (the notion here is that a
cruel ruler has evicted them, but clearly only symbolically), deracinated
and set floating away from their narrative homes.
However, and unlike Chicken Run, the postmodernism of Shrek does
not primarily lie in an ironic manipulation of a tissue of quotations. It lies
in its style, its registers and tones, which form a carefully orchestrated,
complex, brilliant, and hilarious clash of hybridities and anachronisms,
where all is deracinated and far from home. Heterogeneous and depthless,
knowing and self-conscious, allusive and affectless, Shrek interweaves and
plays off each other the medieval courtly past and the hip hyperreal present
like a French Lieutenant’s Woman for the twenty-first-century third-grader.
So it is a fairy tale, a romantic depiction of an exciting quest and the tri-
umph of true love with many of the ethnonarrative elements analyzed by
Propp’s Morphology of the Folktale. And it is also a cynical deconstruction
of all that. So it is packed with bits of contemporary pop culture, TV pro-
grams, cult movies, pop songs, consumerist cool; but the pop songs that
punctuate the action are of the kind that get nine-year-olds throwing them-
selves about on the floor of the school disco, fizzy, bright, and peppy, with
lyrics that refer to nothing at all (no “Venus in Furs” or “Holidays in the
Sun” here)—it’s children’s pop culture to match children’s fairy tales. And
the two fuse (Smash Mouth’s cover of the Monkees’ “I thought love was only
true in fairy tales” plays at the wedding party) or are made to (the Three Little Pigs break-dance).
Disney is blown up. Or, to put it another way, Disney’s techniques and
content are belatedly brought forward and renewed into a postmodern age,
twenty years after postmodernism’s heyday. The hybridity and clashes that
Shrek adroitly manages are all reconciled within a prevailing and hugely
enjoyable postmodern aesthetic. Indeed, it suddenly made DreamWorks’
own traditionally animated films look terribly old-fashioned: The Prince of
Egypt (1998), The Road to El Dorado (2000), Joseph: King of Dreams (2000),
Spirit: Stallion of the Cimarron (2002), and Sinbad: Legend of the Seven
Seas (2003) were all drawn under a guiding aesthetic of heroic myth and
“inspiring” legend (down to the pompous colons) which the ferocious
and scintillating postmodernism of Shrek made obsolete overnight. In 2004
DreamWorks announced they would make no more 2D animation, dedi-
cating themselves solely to computer-generated fare and killing off a form
of cartoon narrative along with a tradition of cartoon-making; in 2006
Disney bought Pixar.
Yet these four movies were not the first appearance of postmodernism
in children’s stories. Shrek drew on Jon Scieszka and Lane Smith’s picture
book The Stinky Cheese Man and Other Fairly Stupid Tales, first published
almost a decade earlier and described as “a postmodern collection of fairy
tales for a postmodern time” by the literary critics Deborah Cogan Thacker
and Jean Webb,4 and as “Tristram Shandy for the primary school reader”
by the authors of a popular introduction to postmodernism.5 While this
claims too much for a series of amusing exercises in comic bathos, Cogan
Thacker and Webb are on more solid ground drawing parallels between
Philip Pullman’s metafictional children’s Gothic pastiche Clockwork: Or All
Wound Up, published in 1996, and Italo Calvino’s postmodern classic If on
a Winter’s Night a Traveler. Beyond Shrek and The Stinky Cheese Man lies
Angela Carter’s The Bloody Chamber and its postmodernization of fairy
tales for adults, a staple of university reading lists. However, what is more
interesting is not the (finally inevitable) appearance of postmodernism in
children’s literature, but rather the cultural and historical significance of
the arrival of children’s literature in postmodernism: the fact that what had
once denoted shifts in architectural theory now referred most vibrantly
to the entertainment of prepubescents. This surely suggested a new and
critical stage in the development of postmodernism, which by the turn of
the millennium had come to underpin a billion-dollar industry beloved by
preschoolers.
And soon another, even more damaging point in the history of postmod-
ernism was reached. If it had undermined postmodernism to be reduced
to a child’s plaything, it was even more humiliating to become that infant’s
discarded toy, grown out of and left behind. When postmodernism turned
into yesterday’s style in the eyes of children, it surely entered the absolute
past tense of contemporary culture. This change, which happened unevenly
over the next five or six years, was signaled by a succession of disastrous
films by DreamWorks: Shrek 2 (2004), Shark Tale (2004), Madagascar
(2005), and Flushed Away (2006, in partnership with Aardman). Mostly
pilloried by critics, they were increasingly unsuccessful at the box office
too. Essentially each took the ingredients of the first Shrek and, rather
than bake them into a postmodern cake, flung them pell-mell at the screen.
The films dissolve into a helter-skelter of scattershot allusions, parodies,
pastiches, fizzy pop songs, knowing irony, breakneck incidents, and adult-
oriented but unsummarizably dull story lines. Whereas Shrek had anchored
itself in a postmodern fairy tale it deconstructed as it went, the plots of
these movies—trying to get on with your in-laws, trying to save London’s
sewer rats, trying to protect yourself and your town from angry fish—are
a diffuse mess, a nothing, at which a disintegrated and nonaccumulating
tumult of stuff gets hurled. Overwrought and scarcely at all funny, they’re
unstructured and hyped-up fragments that, as well as breaking no new
ground as content or style, in fact transform the distinctive Shrek aesthetic
into a tiresome, convoluted and recycled postmodern blizzard. Paradoxi-
cally, they can be seen as even more postmodern than their illustrious
predecessor: it isn’t Disney’s characters that have been evicted from their
narrative home here, but Shrek’s postmodern assembly of elements. They
showcase the postmodernization of the postmodern, and it’s no fun at all.
The more critically and commercially successful films made by Pixar in
this period choose instead to marginalize and downplay the postmodern.
Finding Nemo (2003) can be read as a transtextual recasting of the tradi-
tional Disney narrative about a child separated from his or her parents,
reworked so that attention falls on the parent’s search rather than on the
subsequent informal fostering arrangements that emerge, where the
deeper subject becomes a radical interrogation of the social meaning of
masculinity. You can read it this way if you want, but the film certainly
doesn’t insist on your doing so. An alert postmodern viewer of the same
studio’s The Incredibles (2004) might note the pastiche of Superman and
Batman and the self-consciously ironic references to the narrative codes
of superhero comics (the deadliness of capes and of “monologuing”). But
these are episodic or early gestures only, contributing nothing to the
overall shape of the narrative or its controlling aesthetic. In Cars (2006),
the world of motor sports is depicted through a hyperreal media haze: we
see and listen to TV commentators on the races, see the rolling news TV
(1) Shooting must be done on location. Props and sets must not be
brought in . . .
(2) The sound must never be produced apart from the images or
vice versa . . .
(3) The camera must be handheld. Any movement or immobility
attainable in the hand is permitted . . .
(4) The film must be in color. Special lighting is not acceptable . . .
(5) Optical work and filters are forbidden.
(6) The film must not contain superficial action. (Murders, weapons,
etc. must not occur.)
(7) Temporal and geographical alienation are forbidden. (That is to
say that the film takes place here and now.)
(8) Genre movies are not acceptable.
(9) The film format must be Academy 35 mm.
(10) The director must not be credited.
The vow concludes: “My supreme goal is to force the truth out of my char-
acters and settings.”7 The project met with considerable initial success: its
first official film, Vinterberg’s Festen or The Celebration (or Dogme #1;
1998) won the Jury Prize at the Cannes Film Festival and both the New
York and the Los Angeles Film Critics’ awards for Best Foreign Film, while
Von Trier’s Idioterne or The Idiots (Dogme #2; 1998) also received much
critical acclaim. The project spread beyond Denmark, with American and
French directors making Dogme films at the turn of the millennium. Its
official Web site currently lists 254 such films, almost all virtually unseen.8
These statements never mention postmodernism and indeed, had they
done so, it would only have weakened their argument. In 1995 postmod-
ernism in cinema was marginal, restricted to a handful of cult films, though
it grew much more widespread in the second half of the 1990s as we have
seen. The project required a more pervasive and dominant enemy than the
aesthetics lying behind a Blade Runner or a Pulp Fiction. This was supplied
by the notion of artifice in cinema, and the anti-postmodernist implica-
tions of Dogme 95 are therefore unvoiced, contained in its wholehearted
embrace of values that postmodernism believed it had set in quotation
marks forever. Above all, Dogme 95 threw its arms around a supposedly
unproblematic and transcendent concept of Truth. With Truth came Reality,
also apparently uncomplicated and universal. Its films were to be created in
real places, with real props, using real sound, occurring in real time, con-
sisting of real events (uncorrupted by genre conventions), and avoiding
any sort of directorial or studio tampering with the footage either during
recording or in postproduction. In practice, the overall effect was to give
these fictions a documentary feel; the Vow of Chastity seemed (fraudu-
lently) to defictionalize invented narratives.
It is not important here whether the films stayed absolutely faithful to
the vow or not, or how sincere the signatories were (some accused them
of a public relations coup). What is striking to my mind about Dogme 95 is
how archaic it appears as a cultural event. On one hand, the suggestion that
the True and the Real can be accessed and evoked by a simple act of will is
disingenuous in the extreme, as if von Trier and Vinterberg had failed to
notice developments over the previous century or so in any and every
artistic and intellectual field (Jean-Luc Godard meets Rip Van Winkle). On
the other hand, the act of writing a cultural manifesto seems like an absurd
throwback; it suggests a pastiche of stories about cosmopolitan young
men congregating in early twentieth-century Parisian cafés to draft their
violently worded statements about what must be done with contemporary
art by a fetishized avant-garde. Von Trier and Vinterberg seem to yearn to
be Marinetti, or André Breton; they seem to long for it to be 1909, the year
“The Manifesto of Futurism” was published in a Paris newspaper, or 1924,
the year of the first Manifesto of Surrealism, again. Their rejection of post-
modernist strategies is revealed as a dreamy and impossible nostalgia for
modernism instead. Alternatively, they would like it to be the 1950s again:
point 9, enforcing use of the Academy 35mm film format, returns them
to a picture ratio made obsolete forty years earlier, while the opening words
of the manifesto sarcastically invoke the title of Truffaut’s nouvelle vague
manifesto Une certaine tendance du cinéma français, published in Cahiers
du Cinéma in 1954. The choice of Paris as the city in which to launch the
project was not accidental; and although von Trier and Vinterberg mock
the nouvelle vague, they ostentatiously overlook the fact that by 1995 France
had actually grown peripheral to the world of experimental filmmaking.
the literary door to a cinematic technique; Blincoe and Thorne may as well
ban novelists from using dolly shots (it suggests too that something other
than prose is today’s “dominant form of expression”). The literary version
of the flashback in a Rebecca or a Beloved is such a subtle and sophisticated
tool that it is hard to see how narrative could be excised of it (or why),
while as a cinematic device it served Vertigo and Citizen Kane well enough.
Destructive too of actually existing great literature is the interdiction of the
historical novel (a rewording of Dogme’s point 7), the authorial aside and
the dual-time fiction, depriving us forever of War and Peace, Bleak House,
and Ulysses.
To replace them, the anthology provides fifteen tales, ranging from the
quite good through the not very good to the poor, written by a batch of near-
unknown and young-ish British authors together with Geoff Dyer, Alex
Garland, and Toby Litt. The stories are brisk, lightweight, and sometimes
reminiscent of people the manifesto surely wants to dislodge (Vonnegut,
Ballard). Unfortunately, they’re not successful or distinctive or unified
enough to add up to more than the quite modest sum of their parts. This
kind of undertaking, like the nouvelle vague, does require some degree of
aesthetic achievement and some appreciably shared traits, and the collec-
tion, to be blunt, offers neither. The whole affair seems like nothing better
than a PR stunt: it’s almost impossible today to publish such an anthology
in Britain and the scaffolding erected around the venture looks in retro-
spect like a gimmick designed to provide a publishing opportunity for a
bunch of ambitious friends. As their Wikipedia entry sourly but accurately
notes: “New Puritanism has not been espoused by any well-known writers
since the book’s publication, and the contributors have not collaborated
since.”11
Nevertheless, the New Puritans are not without their small historical
significance, enough to get them a brief mention in subsequent academic
guides to contemporary British fiction and their book translated into sev-
eral foreign languages. This significance derives from their semiexplicit
and would-be epochal repudiation of literary postmodernism. From the
start the venture declares itself, with paleontological awkwardness, “[a]
chance to blow the dinosaurs out of the water,” and the reptiles it seems to
have in mind are above all the postmodernists: “While I admire the formal
experiments of writers like B. S. Johnson, Italo Calvino or Georges Perec,
the stories in this collection prove that the most subtle and innovative form
available to the prose writers is always going to be a plot-line.”12 Martin
Amis’s Time’s Arrow and Salman Rushdie’s Midnight’s Children, central
works of British postmodernism, are condemned for their lack of “insight”
24 DIGIMODERNISM
of its award, showing up “dressed as clowns, on the premise that the Tate
had been turned into a circus”; in June 2001 they disrupted the opening
of a conceptualist work of art in Trafalgar Square; in July 2002, again
dressed as clowns, they carried a coffin marked “The death of conceptual
art” through London’s streets; in spring 2003 one of their number cut and
removed the string wrapped by a conceptual artist around Rodin’s The
Kiss in Tate Britain.25
This destructiveness and tendency to shift away from art toward art
criticism are typical of the Stuckists; the home page of their Web site
currently calls for signatures to a petition demanding Serota’s sacking. The
main thrust of their existence actually isn’t the rejection of postmodern-
ism, but of Brit Art, especially those pieces made by Damien Hirst and
Tracey Emin or collected by Charles Saatchi. The attack on postmodern-
ism is, I suspect, an attempt to broaden their appeal beyond such a paro-
chial dispute; through their Web site they soon attracted international
attention, and by 2004 claimed ninety franchised Stuckist groups in twenty-
two countries (they don’t give membership figures). In a concise overview
titled “Stuckism in 20 Seconds” Thomson notes that “Stuckists are pro-
contemporary figurative painting with ideas and anti-conceptual art,
mainly because of its lack of concepts.”26 This is disingenuous, for it falsely
suggests they would embrace a new improved conceptualism, but the com-
mitment to contemporary figurative painting has been unswerving (there
have been regular, usually small exhibitions, tepidly received on the whole).
Thomson has also described as “futile” the “diversification of ‘artists’ into
other media, such as video, installation and performance.”27
It is easy, and mostly justified, to dismiss the Stuckists as too negligible
for attention. They are set up for ad hominem attacks: almost all of the
original twelve members were failed thirtysomething artists from the
corner of southeast England where suburban meets provincial, who can be
seen as hoping to build a career out of a noisy rejection of the dominant
artistic fashion; Billy Childish (who would leave amicably in 2001) was an
ex-lover of Emin’s—indeed, Emin had inadvertently named the group by
describing Childish’s work as “stuck” many years earlier—and there does
seem a disproportionate amount of personal animus in their attacks on
Hirst, Saatchi, Serota, and Emin herself. Moreover, the Stuckists’ addiction
to pompously worded, combative, and sometimes absurdly long manifestos
locks them into Dogme’s time warp whereby they still think it’s 1921 in
their Paris café (or Maidstone pub). The name “Remodernism” makes this
impossible and bankrupt nostalgia painfully clear—it’s that unlikely thing,
a name even worse than “New Puritans.” Moreover again, some of their
The Arguable Death of Postmodernism 27
about the current state of theory; 163 replied, yielding results more com-
plex than Bauerlein’s article had suggested. True, 44 percent of respondents
did agree that theory was “a declining influence” in British universities
(40 percent felt its status was unchanged or were uncertain, 16 percent
thought it was still gaining ground).29 Although a significant result, this
was hardly overwhelming, and on the whole, as the THES put it, “[t]he
picture is patchy.”30 As well as the differences apparent from one institution
to another, a strong majority (79 percent) thought theory was “likely to
continue contributing new ideas,” while 78 percent felt it had “made a posi-
tive contribution to the humanities,”31 disputing the tenor of Bauerlein’s
article, which, as its subtitle puts it, “rejoices” in the death it announces.32
Indeed, the article is a traditional antitheory rant, which wildly concludes
that theory has destroyed higher learning, and which, shorn of the word
“dead,” could have been published twenty years earlier.
The supposed death of theory has been one of the defining debates of
the early digimodernist era. Among the most prominent of relevant texts
have been Post-Theory: Reconstructing Film Studies (1996), edited by David
Bordwell and Noël Carroll; Beyond Poststructuralism: The Speculations of
Theory and the Experience of Reading (1996), edited by Wendell V. Harris;
Post-Theory: New Directions in Criticism (1999), edited by Martin McQuillan,
Graeme Macdonald, Robin Purves, and Stephen Thomson; Reading after
Theory (2002) by Valentine Cunningham; After Theory (2003) by Terry
Eagleton; Life after Theory (2003), a collection of conversations with Jacques
Derrida, Frank Kermode, Christopher Norris, and Toril Moi edited by
Michael Payne and John Schad; and Post-Theory, Culture, Criticism (2004),
edited by Ivan Callus and Stefan Herbrechter. As post-theory has firmed up
into a scholarly question in its own right, these texts have been
critically assessed by Slavoj Žižek in his The Fright of Real Tears: Krzysztof
Kieślowski between Theory and Post-Theory (2001) and by Colin Davis in
his After Poststructuralism (2004), as well as in forums such as the “Theory
after ‘Theory’” conference held at the University of York in October 2006.
More recently, Jonathan Culler’s The Literary in Theory (2007) summarized
the state of play: “Theory is dead, we are told. In recent years newspapers
and magazines seem to have delighted in announcing the death of theory,
and academic publications have joined the chorus.”33 In this climate, the
appearance in 2005 of Theory’s Empire, an anthology of assaults on theory
edited by Daphne Patai and Will H. Corral (and including Bauerlein), was
greeted by the Wall Street Journal as a “sign that things may be changing”
in the world of “American humanistic scholarship,” though actual scholars
were less convinced.34 There is clearly plenty here to interest a journalist
looking for a trend, and the titles of the books seem pretty unambiguously
to state what the trend is; and the status of some of the participants (though
not all) is very high; or so it all might seem.
However, as with the THES poll, things are not as simple as they might
appear. We shall look at one of these interventions, Eagleton’s, in more
detail shortly, but seen as a group they do manifest certain tendencies.
First, as already noted, some of them are merely antitheory arguments in
disguise, made by people who have long denounced the supposedly dam-
aging effects of theory (like Bordwell and Carroll) and consequently bring
nothing new to the table beyond the allegation of death. The publication of
Theory’s Empire is a significant contribution to cultural debates, but many
of its pieces are years old. Some of these writers have never shown signs of
great enthusiasm for theory, like Cunningham, whose monumental British
Writers of the Thirties, though published as late as 1988, is a theory-free
zone. There is no rupture visible in such texts; it is all continuity, not change.
Second, many of the titles of these books reflect the opportunism of their
publishers. Their actual texts interpret “after” in the sense of “now that we
have read theory,” not in the sense of “now that theory is dead and buried”;
they evoke a reader who has absorbed theory rather than a theory that has
gone stale. Others take the question to mean this: since the initial wave of
post-1960s theory is no longer crashing down on us, it is time to take stock
and consider what we wish to retain from its most turbulent days and what
to jettison; perhaps now we can reorient theory so as to relaunch it in an
improved form. Such texts reduce their “after” to “following the end of one
phase of theory and before the start of the next”; their writers call for tweaks
here and there to theory, not interment. Third, there is general agreement
both that theory was, on the whole, a good thing (which enriched cultural
studies), and that a return to pretheoretical days is impossible. Despite the
occasional Bauerlein, they hold that while theory’s status has irrevocably
altered, it is not mortally wounded; behind their exciting and seismic-
shifting titles, they are more nuanced than a quick reader might suppose.
Fourth, there is a prevailing uncertainty about what theory might look like
in the future. It does not help that few if any of these writers have previ-
ously contributed anything new to theory, or have the philosophical train-
ing that might enable them to do so. Fifth, the sense of an ending that
these books all recognize but qualify is very unevenly spread across the
faculties. There are demographic differences, and not necessarily the ones
you might imagine. Twentysomethings who have just discovered theory
for the first time still frequently thrill to it as acutely as their forefathers did
in the 1970s. The weariness comes from their forty- and fiftysomething
elders, who will privately admit that keeping up with theory (e.g., by
reading Žižek) is a tedious chore, who give conference papers denuded
of references to Parisian thinkers, and whose published writings increas-
ingly deploy a bricolage of theoretical concepts to analyze texts rather than
subjecting texts to the laser beam of a theory.
A consensus seems to have gathered around statements like these, made
by respondents to the THES poll: “The high watermark of theory has
passed”; “[n]o one would want to go back to the pre-theory times, and it is
important that students and scholars know the debates. But I am glad we
are much less doctrinaire about theory generally speaking, allowing it to
ask questions rather than setting the terms by which we read”; “[theory]
perhaps loses some of its power to shock or to pose a real challenge when
it becomes merely another tool for handling texts.”35 Such comments, and
also the arguments of the books mentioned above, suggest that theory
changes its identity when it ceases to appear radically and excitingly new.
It can be argued that one of the defining characteristics of postmodernism
was a rhetoric of disruption, of overthrow and resistance. But with almost
all its most dazzling figures now dead, and the age when ground-breaking
ideas arrived thick and fast now two or three decades behind us, the heroic
age of theory is decisively over. It always seemed a glamorous, outlaw pur-
suit; but that image is no longer available, except temporarily perhaps for
the young. Nevertheless, this does not equate to the end of theory; and the
problem remains that we still don’t know what the future of theory might
be. Cunningham calls for “tact” in reading, and writers such as Rónán
McDonald, whose The Death of the Critic (2007) also assesses the state of
post-theory, urge a return to the concept of aesthetic value. Indeed, John J.
Joughin and Simon Malpas have tried to found a “new aestheticism” which
draws strength from “a conjuncture that is often termed ‘post-theoretical,’”
though without noticeable success.36 Often such calls sound like nothing
more than the banal and unsatisfactory wish that people say valid things
about the books they read, or that they value, for reasons as yet undiscov-
ered, the texts they should value. Above all, the “crisis” which theory finds
itself in is integral to and inflected by the wider cultural changes involved
in the shift from postmodernism to digimodernism, a context that nobody
entering this discussion has so far considered.
Much of this can be illuminated by looking in some detail at one partic-
ular example of the post-theory debate. Terry Eagleton taught at various
Oxford colleges from 1969 to 2001, being appointed Thomas Warton
Professor of English Literature in 1991. In 1983 he wrote Literary Theory,
The Arguable Death of Postmodernism 31
The world has changed and theory must change with it:

the West will no doubt be forced more and more to reflect on the
foundations of its own civilization . . .

[It] may need to come up with some persuasive-sounding legitimations
of its form of life, at exactly the point when laid-back cultural
thinkers are assuring it that such legitimations are neither possible
nor necessary . . .

The inescapable conclusion is that cultural theory must start thinking
ambitiously once again . . . so that it can seek to make sense of the
grand narratives in which it is now embroiled. (72–73)

An argument made like this is fascinating and important in itself. Like
Theory’s Empire, it contains no philosophical leap forward beyond the vital
and significant truth of its own published existence.
Eagleton concludes that:
We can never be “after theory,” in the sense that there can be no reflec-
tive human life without it. We can simply run out of particular styles
of thinking, as our situation changes. With the launch of a new global
narrative of capitalism, along with the so-called war on terror, it may
well be that the style of thinking known as postmodernism is now
approaching an end. (221)
This climactic claim is, I believe, new in Eagleton but, though arresting,
it’s undermined by his own eternal antipathy to postmodernism. It’s also
impoverished by a refusal to relate these issues to the cultural mood
outside the academy. How, he might have wondered, is postmodernism
getting on in the big wide world? This final failure is symptomatic of the
whole post-theory debate. At best, all the “end of theory” seems to mean is
that here is one era that is dying and another (so far) unable to be born. The
picture is murky and indecipherable: essentially, our inability to see who
the new king or queen of thought might be leaves us unsure whether the
old one is really dead. There is no decisive proof of the death of
postmodernism here, only a tumult of circumstantial evidence.
This doesn’t mean that no one has claimed the throne as the new king or
queen of thought in the wake of postmodernism’s supposed “death.” Yet
such a claim is problematic in a way that arguing for the extinction of
cultural postmodernism isn’t. In 1992, observing that “the postmodern
phenomenon has gradually infiltrated every vacant pocket of our lives and
lifestyles,” Gilbert Adair warned:
blows on its subject, leaving it battered and bleeding; it can never finish its
subject off, which survives, apparently indestructible, until another day;
and it is invariably backward-looking in its intellectual wish list. Probably
the best is Alex Callinicos’s Against Postmodernism (1989), which denies
that “postmodern art” is distinguishable from modernist, finds holes in
postmodern and post-structuralist thought which in any case he sees as
modernist in spirit, and rejects the idea of a recent historical “rupture.” He
advocates Marxism instead (in the year the Berlin Wall fell). Christopher
Norris’s What’s Wrong with Postmodernism (1990) extols Derrida but
violently rejects Lyotard and Baudrillard; he wants Britain to embrace early
1980s Labour Party socialism (it never would). Eagleton’s The Illusions of
Postmodernism (1996) assaults a trashier version of the enemy and, coming
too late to urge Marxism, would settle for general left-wing activism
instead. Raymond Tallis’s Not Saussure (1988, 1995) excoriates in hysterical
style all philosophy deriving from the influential Swiss as incoherent and
unfounded; he wants to roll back to a kind of realism. Most (in)famous
is Alan Sokal and Jean Bricmont’s Intellectual Impostures (1997), which
rightly exposes the abuse of scientific rhetoric in postmodern and post-
structuralist writing, but bizarrely supposes that these stylistic failings
discredit the entire project; calling (quite reasonably) for a valorization of
scientific positivism, they go through a quantity of texts labeling anything
they can’t understand “meaningless” in an example of nineteenth-century
scientistic imperialism we can only thank postmodernism for having
deconstructed. This subgenre is so well established that it’s no surprise
to find the volume on postmodernism in the OUP Very Short Introductions series
uniquely repudiating its subject (the author prefers liberal realism). All
such texts tend to want to send postmodernism away in favor of one of its
predecessors; hence their failure.
If anything of philosophical postmodernism or post-structuralism is
likely to survive undigested, to resist absorption, it’s the work of Jacques
Derrida. However, the jury must surely still be out on Derrida’s oeuvre:
how much of it will survive, which parts, and with what persuasiveness are
as yet unknown. Having published vastly during a lifetime that ended as
recently as 2004, Derrida will gradually find his place somewhere in the
philosophical tradition (and not among the untrained). It can nevertheless
be said with virtually complete confidence that the future will not see him
either as a terrifying and despicable nihilist bent on destroying reason and
truth, or as a godlike superstar who successfully reinvented the history of
human thought. Both of these wild and bogus simplicities, popular in the
1980s and 90s, will appear ever more embarrassing as time wears on.
Unambiguous statements such as this one will have to be accounted for:
What about Ian McEwan’s The Child in Time (1987)? Or One Hundred
Years of Solitude, The Tempest, The Seventh Seal, Oedipus Rex, The Divine
Comedy, The Magus? One example of performatist literature that he gives,
Olga Tokarczuk’s “The Wardrobe,” recalls nothing so much as Charlotte
Perkins Gilman’s “The Yellow Wallpaper” (1892), perhaps rewritten by
McEwan. There are times when Eshelman simply seems to have a taste for
mild irrationalism or naivety in art. When he speaks of “this odd prefer-
ence for positive metaphysical illusions, for narrative authoritativeness and
for forced identification with central characters,”50 the “oddity” is sparked
only by the assumed but actually spurious former cultural-monopoly of
theoretical postmodernism (his writing is full of references to the “usual”
or “standard” postmodern position or strategy)—I’m sure he doesn’t find
the Odyssey artistically weird. Eshelman identifies narratives that move in
the space between programmatic antirealism and bourgeois realism, but
many artists always have and plenty did even during postmodernism’s hey-
day. Deconstruction is indeed not very useful with such texts, but then it’s
a mistake to see Derrida primarily as a cultural critic. You suspect that
Eshelman is tired of the über-skepticism of post-structuralist thought and
eager for the traditional pleasures of art, and you can’t blame him for that.
But finally he’s a symptom: he picks up on the superannuation of postmod-
ernism but doesn’t suggest its successor.
If Eshelman positions himself as the heir to Jameson, Gilles Lipovetsky
would supplant Lyotard. Lipovetsky’s Les Temps hypermodernes, first pub-
lished in 2004, proposes “hypermodernity” as the successor to Lyotard’s
concrete and flesh, when they are lived and experienced (not always
happily) right across society. Consequently, hypermodernity is not intel-
lectually new: it’s the maximization of modernity, the era from which all
premodern structuring principles (family, church, class, etc.) have in prac-
tice been stripped: “[t]he era of hyperconsumption and hypermodernity
has sealed the decline of the great traditional structures of meaning, and
their recuperation by the logic of fashion and consumption” (14).
There is much to recommend this analysis, which notably breaks with
three postmodernist or post-structuralist traits: it’s neither millenarian
(there’s no pulsating rhetoric of “ends” or “post-s”) nor a continuation of
May ’68 by other means (it’s not countercultural; it applauds hyperindivid-
ualism for saving us from the bloodshed of ideological fanaticism) nor
does it flirt with nihilism (Lipovetsky holds that our society believes
unshakably in human rights, in love, it foregrounds others’ well-being,
etc.). The argument, though sketchy, feels qualitatively different from those
of a previous generation of French intellectuals. However, this portrait of
modernity as a bunch of ideas finally reified by hypermodernity is incom-
plete. What happened to universalism? What about the reign of reason?
In fact, Lipovetsky interprets modernity as the sociopolitical dream of
the French Revolution, the hope of liberty, equality, and fraternity and les
droits de l’homme; he distances himself from philosophical Enlightenment,
refurbishing Lyotard with his claim that hypermodernity is characterized
by the “dissolution of the unquestioned bases of knowledge” (67). In prac-
tice, he may see his work as an updated and more completely sociologized
version of Lyotard’s, which “defined the postmodern as a crisis in founda-
tions and the decline in the great systems of legitimation. That was of
course correct, but not absolutely so” (77).
A great part of Les Temps hypermodernes accords though with concep-
tions of a possible digimodernist society, as my Chapter 7 will suggest; its
account of consumerism is particularly compelling. In 2007 Lipovetsky
extended these arguments to the cultural domain with L’Ecran Global:
Culture-médias et cinéma à l’âge hypermoderne, as yet untranslated into
English.52 His fetishization of cinema is traditionally French, and his run-
ning together of all contemporary forms of “screen” into one bundle ruled
by film is both simplistic and conservative. The book adds little to the
meaning of hypermodernity, which again comes across as a mostly super-
fluous category whose content is insufficient and unsatisfactory, and which
obscures his insights into consumerist society.
In a doubtless unwitting echo of Lipovetsky, Paul Crowther argued
in 2003 that “we are now living in what—in cultural terms—would be far
(highly unlikely, to say the least) or that the zeitgeist would somehow have
made them better (a rather strange notion). Furthermore, it can’t be ruled
out that next month a stunning postmodern film, book, or TV program
may appear. My feeling is that this too is improbable, as none has emerged
for several years now, but I can’t “prove” it. The argument of paradigmatic
failure due to superannuation can be put forth but not conclusively dem-
onstrated; again, the decline of The Simpsons after 2000 suggests a certain
cultural climate, but no more than that. The same goes for the turning away
from postmodernism of artists hitherto associated with it, such as Julian
Barnes or Damon Albarn. Though significant, it proves nothing: artists
with long careers evolve, and without their earlier modes necessarily hav-
ing “died” for the whole human race.
The most that can be said is that lately there hasn’t been much cultural
postmodernism around and what there is, isn’t that good. But perhaps
this is just a fallow period, and postmodernism will regain its strength and
vitality soon (perhaps; but there’s no evidence for this; and all artistic
movements end one day). Or maybe this is still a thriving postmodern
moment but some people—especially the middle-aged hankering wistfully
for the texts of their youth—refuse to see it. It is true that a generation gap
has opened up between the professors teaching postmodernism modules
and their students. An undergraduate taking such a module in 2010 is
likely to have been born in 1989 or after, and likelier still to be given no
primary text to read published in her or his lifetime. This is Mom and Dad’s
culture. Some professors will nevertheless present it as the latest thing in
cutting-edge aesthetics, although it all belongs to the same era as Betamax
video recorders, shoulder pads, and voodoo economics (and that is at best;
teaching The French Lieutenant’s Woman recently I found myself having to
explain as many of the “contemporary” references as of the Victorian ones
to students for whom this novel represented, indeed, their grandparents’
culture). Postmodern texts try to get to grips with the Cold War and televi-
sion; today’s students take for granted Islamism and the Internet.
And yet: it can be argued that this is the fault of old-fart professors who
have lost touch with the latest developments in postmodernism (rather
than saying that postmodernism has no latest developments). It’s true that
in books such as Brian McHale’s Postmodernist Fiction (1987) and Ian
Gregson’s Postmodern Literature (2004) the same period is discussed: while
the authors differ in their choice of interesting texts, either from personal
taste or shifting critical perspectives, the passing of almost two decades
between them is not reflected in a change in the alleged “time” of postmod-
ernism.62 McHale considers the latest fictive thing whereas Gregson—who
***
Author of A Poetics of Postmodernism (1988) and The Politics of Postmod-
ernism (1989), which became standard texts in their field, Linda Hutcheon
appended an epilogue to the second edition of the latter book when it
appeared in 2002 called “The Postmodern . . . In Retrospect.” She noted
that when writing the first edition “the postmodern was in the process of
defining itself before my very eyes.”63 However, “[it] may well be a twentieth-
century phenomenon, that is, a thing of the past. Now fully institutional-
ized, it has its canonized texts, its anthologies, primers and readers, its
dictionaries and its histories.”64 Jameson had dated the coming of post-
modernism to the 1950s’ institutionalization of modernism; now the circle
was complete. Hutcheon concluded, “Let’s just say: it’s over.”65 In 2007,
a special issue of the academic journal Twentieth-Century Literature titled
“After Postmodernism” appeared with an introduction evoking “the wake
of postmodernism’s waning influence. By now, as Jeremy Green notes,
declarations of postmodernism’s demise have become a critical common-
place.”66 He’s right: you find them everywhere. The death of postmodernism:
it’s so old hat. And yet it’s still all just assertion: it can be and is and has been
declared; but is that it?
The Digimodernist Text 51
In either case the physical properties of the text remained solidified and
inviolate: no matter how inventively you interpreted Gravity’s Rainbow you
didn’t materially bring it into existence, and in this Pynchon’s postmodern
exemplum exactly resembled Pride and Prejudice.
The digimodernist text in its pure form is made up to a varying degree
by the reader or viewer or textual consumer. This figure becomes authorial
in this sense: s/he makes text where none existed before. It isn’t that his/her
reading is of a kind to suggest meanings; there is no metaphor here. In an
act distinct from their act of reading or viewing, such a reader or viewer
gives to the world textual content or shapes the development and progress
of a text in visible form. This content is tangible; the act is physical. Hence,
the name “digital modernism” in which the former term conceals a pun:
the centrality of digital technology; and the centrality of the digits, of the
fingers and thumbs that key and press and click in the business of material
textual elaboration.
Fairly pure examples of digimodernist texts would include: on TV,
Big Brother, Pop Idol, 100 Greatest Britons, Test the Nation, Strictly Come
Dancing, and Quiz Call; the film Timecode; Web 2.0 forms like Wikipedia,
blogs, chat rooms, and social networking sites; videogames such as Mass
Effect, Grand Theft Auto IV, BioShock, Final Fantasy XII, and Metal Gear
Solid 4; SMS messages; “6-0-6” and certain other kinds of radio phone-in;
or the Beatles’ album Everest (see “Music,” Chapter 6). Digimodernism is
not limited to such texts or even to such a textuality; rather, it is more easily
expressed as the rupture, driven by technological innovation, which
permits such a form. They are not by virtue of their novelty “great” texts;
indeed, the quality of the digimodernist text is moot. What interests us is
not their ostensible content but the distinctiveness of their functioning:
it is there that the irreducible difference of the digimodernist becomes
most palpable.
The digimodernist text displays a certain body of traits that it bequeaths
to digimodernism as a whole. These will recur throughout the rest of the
analysis. Such characteristics relate to the digimodernist textuality almost
as a machine: considered as a system by which meaning is made, not as
meaning. Postmodernist features denote either a textual content or a set of
techniques, employed by an antecedent author, embedded in a materially
fixed and enduring text, and traced or enjoyed by a willful reader/viewer.
The traits of digimodernist textuality exist on a deeper level: they describe
how the textual machine operates, how it is delimited and by whom,
its extension in time and in space, and its ontological determinants. The
surface level of what digimodernist texts “mean” and how they mean it
will be discussed later in the book. We can sketch the following dominant
features:
Onwardness. The digimodernist text exists now, in its coming into being,
as something growing and incomplete. The traditional text appears to
almost everyone in its entirety, ended, materially made. The digimodernist
text, by contrast, is up for grabs: it is rolling, and the reader is plunged in
among it as something that is ongoing. For the reader of the traditional text
its time is after its fabrication; the time of the digimodernist text seems to
have a start but no end.
Haphazardness. In consequence, the future development of the text is
undecided. What it will consist of further down the line is as yet unknown.
This feels like freedom; it may also feel like futility. It can be seen as power;
but, lacking responsibility, this is probably illusory. If onwardness describes
the digimodernist text in time, haphazardness locates in it the permanent
possibility that it might go off in multiple directions: the infinite parallel
potential of its future textual contents.
Evanescence. The digimodernist text does not endure. It is technically
very hard to capture and archive; it has no interest as a reproducible item.
You might happily watch all the broadcast hours of Fawlty Towers; no one
would want to see the whole of a Big Brother run again (retransmission has
never been proposed), and in any event the impossibility of restaging the
public votes renders the exact original show unreplicable.
Reformulation and intermediation of textual roles. Already evident, and
explored at greater length in this chapter, is the digimodernist text’s radical
redefinition of textual functional titles: reader, author, viewer, producer,
director, listener, presenter, writer. Intermediate forms become necessary
in which an individual who is primarily one of these acts to a degree like another.
These shifts are multiple and not to be exaggerated: the reader who becomes
authorial in a digimodernist text does not stand in relation to the latter as
Flaubert did to Madame Bovary. These terms are then given new, hybrid-
ized meanings; and this development is not concluded.
Anonymous, multiple and social authorship. Of these reformulations
what happens to authorship in the digimodernist text especially deserves
attention. It becomes multiple, almost innumerable, and is scattered across
obscure social pseudocommunities. If not actually anonymous it tends to a
form of pseudonymity which amounts to a renunciation of the practice of
naming (e.g., calling yourself “veryniceguy” on a message board or in a
chat room). This breaks with the traditional text’s conception of authorship
in terms tantamount to commercial “branding,” as a lonely and definite
quantity; yet it does not achieve communality either.
The Digimodernist Text 53
The fluid-bounded text. The physical limits of the traditional text are
easily establishable: my copy of The Good Soldier has 294 pages, Citizen
Kane is 119 minutes long. Materially a traditional text—even in the form of
a journalist’s report, a school essay, a home movie—has clear limits; though
scholars may discover new parts of a whole by restoring cut or lost material
their doing so only reinforces the sense that the text’s physical proportions
are tangibly and correctly determinable (and ideally frozen). Embodying
onwardness, haphazardness, and evanescence, the digimodernist text so
lacks this quality that traditionalists may not recognize it as a text at all.
Such a text may be endless or swamp any act of reception/consumption.
And yet texts they are: they are systematic bodies of recorded meaning,
which represent acts in time and space and produce coherently intelligible
patterns of signification.
Electronic-digitality. In its pure form, the digimodernist text relies on
its technological status: it’s the textuality that derives from digitization; it’s
produced by fingers and thumbs and computerization. This is not to be
insisted on excessively; however, this is why digimodernism dates back
only to the second half of the 1990s. Digimodernism is not primarily a
visual culture and it destroys the society of the spectacle: it is a manually
oriented culture, although the actions of the hand are here interdependent with
a flow of optical information, unified under the auspices of the
electronic.
Much more could be added here, but there is space for only two further
clarifications. First, an ancestor of the digimodernist text is Espen J.
Aarseth’s notion of “ergodic literature” in which, he argued as long ago as
1997, there is “a work of physical construction that the various concepts
of ‘reading’ do not account for . . . In ergodic literature, nontrivial effort is
required to allow the reader to traverse the text.”2 The description of page-
turning, eye movement, and mental processing as “trivial” is misleading,
while the implication of textual delimitedness contained in “traversal” has
been outdated by technical-textual innovations. However, his account dif-
fers from mine most notably in its lack of a wider context. For I see the pure
digimodernist text solely as the easily recognizable tip of a cultural iceberg,
and not necessarily its most interesting element. These characteristics can
be found diffusely across a range of texts that I would call digimodernist
whose consumer cannot make them up; though digimodernism produces
a new form of textuality it is not reduced to that, and many of its instances
are not evanescent, haphazard, and so on. But the discussion had to start
somewhere. Digimodernism can be globally expressed in seven words (the
effects on cultural forms of digitization) and historically situated in eight
Reader Response
It could be felt (the point has been put to me) that everything I’ve said here
about the digimodernist text is already contained in post-1960s’ theories
of the text and of reading, that there is nothing new here. A similar critical
discourse might appear to have been around for a while. Discussing the
ending of the film Performance, Colin MacCabe argues, for instance, that
“the final eerie minutes of the film are entirely our invention.”4 For MacCabe,
the film’s “whole emphasis” favors “a performance in which the spectator is
a key actor.”5 However, this is too loose for its own good: except as rhetori-
cal excess, as a sort of flourish, there is no way that someone sitting in a
chair gazing silently at a screen is an “actor,” key or not, coterminous with
those s/he is watching; and while the ending of Performance does leave
much to the intelligence, imagination, and wit of its audience, to call it
“entirely our invention” is an exaggeration. The most MacCabe can mean is
that we feel alone as we grope to explain it; it’s so ambiguous, so slippery,
that our interpretations feel strangely exposed, deprived of any textual
underpinning. In reality, the final few minutes were entirely invented at
the end of the 1960s by a group of actors and technicians employed by
although we rarely notice it, we are all the time engaged in construct-
ing hypotheses about the meaning of the text. The reader makes
implicit connections, fills in gaps, draws inferences and tests out
hunches . . . The text itself is really no more than a series of “cues” to
the reader, invitations to construct a piece of language into meaning.
In the terminology of reception theory, the reader “concretizes” the
literary work, which is in itself no more than a chain of organized
black marks on a page. Without this continuous active participation
on the reader’s part, there would be no literary work at all.8
captures the nuance: “We are guided by the text and at the same time we
bring the text into realization as meaning at every point.”10
In truth, theory can only conceptualize the reader/viewer as the pro-
ducer of a text by transforming its sense of a text into a system of meanings.
This enables it to construct the reader/viewer as the producer of textual
meanings and hence, to all apparent intents and purposes, as the producer
of text. But, as any filmmaker or novelist knows, a text is primarily a selected
quantity and sequence of visual or linguistic materials, and to make text is
to create those materials. In turn, the materials generate a play of mean-
ings, which the reader/viewer will eventually come in among, finding and
inventing his or her own; but this is secondary. In fact, such theories of
reading silently presuppose a text that is already created; to conceive of a
text as a set of meanings implies approaching it when already constituted
and seeing what has already been made. The point of view of the critic or
student or reader is melded here with the functioning of the text. This is
not exactly an error: it is how texts appear to such people (Iser’s work was
rooted in phenomenology), and for almost its entire existence a text will
consist of a fixed or almost-fixed set of already-created materials. The
source of theory’s assimilation is that it cannot conceive of a meaningful
form of the text which is not already materially constituted; nor does it see
why it should.
However, Barthes’ short essay “From Work to Text,” a central piece of
post-structuralist literary theory originally published in 1971, highlights
another aspect of the question. He attempts here to define Text (capitalized
throughout) as a post-structuralist form of writing that stands in contrast
to the traditional literary “work.” Isolating seven differences between the
two, Barthes describes Text as: not “contained in a hierarchy”; “structured
but decentered”; “plural, [depending] not on the ambiguity of its contents
but on what might be called the stereographic plurality of its weave of signi-
fiers”; “woven entirely with citations, references, echoes, cultural languages
. . . which cut across it through and through in a vast stereophony”; shorn
of “the inscription of the Father”; and “bound to jouissance.”11 These are all
classically post-structuralist; the digimodernist may not be inclined to
write like this (may find it a historical mode of thinking) but would not feel
the need to jettison it. Picking up an earlier point that “the Text is experi-
enced only in an activity of production,”12 Barthes also argues that:
The Text . . . decants the work (the work permitting) from its con-
sumption and gathers it up as play, activity, production, practice.
This means that the Text requires that one try to abolish (or at the
From a digimodernist point of view, this sounds like the straining labor
pains that promise to end in the birth of the digimodernist text. Seen from
a vantage point almost forty years on, Barthes appears to be signaling the
arrival of something yet to be materially possible but which he has theoret-
ically described and greeted (postmodernism as the unwitting mother of
digimodernism). It is as if he is clearing an intellectual and artistic space
for a textuality he cannot yet see, but which he is thereby helping to bring
into existence. To be sure, whether he would have welcomed any of the
actual examples of digimodernism we have so far is a moot point; however,
J. Hillis Miller, a doyen of American deconstruction, described Wikipedia
as “admirable” in an essay on Derrida that adopted its practice of disam-
biguation (so who knows).14 While Barthes’ essay ends with the proto-
digimodernist declaration that “[t]he theory of the Text can coincide only
with a practice of writing,” this is subsumed by his recognition that his
remarks “do not constitute the articulations of a Theory of the Text.”15 The
essay is to be read as prophetic and not descriptive, as a call for a theory still
to be written. It is clear that the coming of digimodernism removes, in one
wrench, all the cultural privileges which throughout postmodernism
accrued to theorists as the hieratic investigators and interpreters of the
mystery of the text. The textuality of digimodernism downplays the critic’s
naturally belated relationship to text in favor of growth and action in
the present. Theorists may yet find ways to get their privileges back; indeed,
during the last decade of his life Barthes himself can increasingly be seen
as working through these issues on a theoretical level.
Other readers have raised objections that parallel the one I’ve discussed
here. For instance, I’ve been told that Baudrillard’s take on Disneyland in
his 1981 essay “The Precession of Simulacra” already contains everything
I’ve called digimodernist; but while a theme park is a text concretized by
physical action (you must travel around it), it isn’t materially invented by
that action—it was wholly constituted before any visitor arrived (it’s a post-
modern textuality, like most loci of mass tourism). Again, Baudrillard’s
comments in the same essay about a fly-on-the-wall TV documentary
shown in 1973 don’t short-circuit a theory of digimodernism; I don’t have
to reach back ten years for my TV examples, or ten hours, come to that.
In Chapter 3 I’ll consider the ways in which our era is characterized by
the move to the cultural center of what had previously been a disreputable,
buried, or just exceptional textuality. But the digimodernist text is, because
of technological innovation, really new, something genuinely never before
seen, and indirect evidence for this comes in the next section.
One sign of the novelty of the digimodernist text is that none of the tradi-
tional words describing the relations of individuals with texts is appropriate
to it. The inherited terminology of textual creation and reception (author,
reader, text, listener, viewer, etc.) is awkward here, inadequate, misleading in
this newly restructured universe. So new is it that even words recently devel-
oped to step into the breach (interactive, nonlinear, etc.) are unsatisfactory.
Of course, in time this new kind of text will evolve its own seemingly inevi-
table lexicon, or perhaps existing words will take on new and enriched
senses to bear the semantic load. Aiming to contribute nothing directly to
this linguistic growth, I am going instead here to assess the wreckage of the
current lexical state, thereby, I hope, helping to clear enough ground to open
up the conceptual landscape a bit more to view. Like all dictionaries, what
follows should really be read in any order: the reader is invited to jump non-
sequentially around the entries, which inevitably overlap.
cost of the death of the Author” and called for the latter’s “destruction”
and “removal” from the field of textual criticism.16 Coupled with Michel
Foucault’s subsequent weak conception of the “author-function,” this
stance became orthodoxy among post-structuralist critics.17 Written self-
consciously “in the age of Alain Robbe-Grillet and Roland Barthes,” John
Fowles’s postmodern novel The French Lieutenant’s Woman critiques
and dismantles the myth of the Author-God, finally revealed as an “unpleas-
ant . . . distinctly mean and dubious” figure.18 Postmodernist culture returns
repeatedly to this debilitated or tarnished image of the author. Martin
Amis’s authors are obnoxious and louche: a priggish nerd with “sadistic impulses”
in Money, a murderer and murderee in London Fields, and twin preten-
tious morons in The Information: “Like all writers, Richard wanted to
live in some hut on some crag somewhere, every couple of years folding a
page into a bottle and dropping it limply into the spume. Like all writers,
Richard wanted, and expected, the reverence due, say, to the Warrior Christ
an hour before Armageddon.”19 As a symptom of this degeneration, almost
all of the major fictions by one of the greatest of all postmodern authors,
Philip K. Dick, are only, and read like, first drafts: messy, clunky, wildly
uneven, desperate for polishing. Redeemed by their content, these texts’
achievement implicitly junks the Romantic conception of the author as a
transcendent donor of eternal beauty in favor of the haphazardly brilliant
hack.
Digimodernism, however, silently restores the authorial, and revalorizes
it. To do this, it abolishes the assumed singularity of authorship in a redefi-
nition that moves decisively away from both traditional post-Enlightenment
conceptions and their repudiation. Authorship is always plural here, per-
haps innumerable, although it should normally be possible, if anyone
wanted to, to count up how many there are. The digimodernist authorial is
multiple, but not communal or collective as it may have been in premod-
ern cultures; instead, it is rigorously hierarchical. We would need to talk, in
specific cases, of layers of authorship running across the digimodernist
text, and distributions of functions: from an originative level that sets
parameters, invents terms, places markers, and proffers structural content,
to later, lower levels that produce the text they are also consuming by deter-
mining and inventing narrative and textual content where none existed
before. The differing forms of this authorship relate to this text at differing
times and places and with varying degrees of decisiveness; yet all bring
the text into being, all are kinds of author. Though a group or social or
plural activity, the potential “community” of digimodernist authorship
(widely announced) is in practice vitiated by the anonymity of the function
here. We don’t even get Foucault’s author as social sign: the digimodernist
author is mostly unknown or meaningless or encrypted. Who writes
Wikipedia? Who votes on Big Brother? Who exactly makes a videogame?
Extended across unknown distances, and scattered among numerous
zones and layers of fluctuating determinacy, digimodernist authorship
seems ubiquitous, dynamic, ferocious, acute, and simultaneously nowhere,
secret, undisclosed, irrelevant. Today, authorship is the site of a swarming,
restless creativity and energy; the figure of the disreputably lonely or
mocked or dethroned author of postmodernism and post-structuralism is
obsolete.
The spread of the personal computer in the 1980s brought with it a new
associated vocabulary, some of which, like “interfacing” or going “online,”
has been absorbed permanently into the language. If the emergence of the
digimodernist text has had a comparable effect you might point to the dis-
course of “interactivity” as an example. Videogames, reality TV, YouTube,
and the rest of Web 2.0 are all supposed to offer an “interactive” textual
experience by virtue of the fact that the individual is given and may carry
out manual or digital actions while engaging with them. I talk about the
difficulties of the passive/active binary elsewhere, so will restrict myself
here to the term’s prefix, one that has, indeed, spread across the whole digi-
tal sphere.
The notion of “interaction” seems inevitable and exciting partly because
it evokes the relationship (or interplay or interface) of text and individual
as a dialectical, back-and-forth exchange. This very reciprocity can be seen,
to an extent, as the kernel of digimodernism; the new prevalence of the
“interactive” nexus and of the prefix in general is a sign of the emergence of
a new textual paradigm. Older terms like “reader” or “writer,” “listener” or
“broadcaster” don’t convey that doubled give-and-take, its contraflow; they
focus on one individual’s role within an inert textual theater. The word
“interactive” then is as textually new as the digimodernism with which it is
identical because it reflects the new textual dimension that has suddenly
opened up: not only do you “consume” this text, but the text acts or plays
back at you in response, and you consequently act or play more, and it
returns to you again in reaction. This textual experience resembles a see-
sawing duality, or a meshing and turning of cogs. Moving beyond the
isolation of earlier words, “interactivity” places the individual within a dia-
chronic rapport, a growing, developing relationship based on one side’s
pleasure alone.
I like “inter” both because it captures the historical rupture with the
textual past in its new ubiquity, and because it highlights the structuration
of digimodernism, its flow of exchanges in time. It’s highly misleading,
though, as well, because it suggests an equality in these exchanges. In truth,
just as the authors of the digimodernist text vary in their levels of input or
decisiveness, so the individual is never the equal of the text with which s/he
is engaging. The individual can, for instance, abandon the text but not vice
versa; conversely, the text is set up, inflected, regulated, limited and—to a
large extent—simply invented well before s/he gets near it. Engaging with
a digimodernist text, s/he is allowed to be active only in very constrained
and predetermined ways. In short, the creativity of this individual arrives
rather late in this textual universe.
A better understanding of digimodernist authorship would clarify the
nature of interactivity too, which often seems reduced to a sort of “manual-
ity,” a hand-based responsiveness within a textuality whose form and con-
tent were long ago set. Your “digital” interventions occur here when, where,
and how they are permitted to. But I won’t let go of the glimpse of the new
textual machinery that is conveyed by and contained within “inter.”
Two versions of listening are familiar to us: the first, when we know we are
expected to respond (in a private conversation, in a seminar, meeting, etc.);
the second, when we know we will not respond (listening to music or
a politician addressing a rally, etc.). The social conventions governing
this distinction are fairly rigorously applied: they make heckling, the act
of responding when not supposed to, inherently rebellious, for instance.
Listening has then a double relationship with speech or other human
sound creation, like music: it can only be done, obviously, when there is
something to listen to; and it differs qualitatively according to whether the
listener knows s/he is expected to respond. In one case, we can probably
assume that s/he listens more closely, does nothing else at the same time; in
the other s/he may start and stop listening at will, talk over the discourse,
and so on. Varying contexts produce varying intensities of listening, though
it remains always a conscious, directed act (distinct from the inadvertency
or passivity of hearing). The corollary of this is that the grammar of what
we listen to also embeds these social conventions. When we are expected
to respond, the discourse offered will tend to the second person (“you”),
either explicitly (e.g., questions, orders) or implicitly (e.g., a story that pro-
vokes the response “something similar happened to me”). When not
expected to respond we will probably listen to first-person plural modes
(“we,” the implicit pronoun of the stand-up comic) or third person (“s/he,”
“they”), although politicians and others will sometimes employ rhetori-
cally the second person to create an actually bogus sense of intimacy (“Ask
not what your country . . .”).
Radio, traditionally, offers sound to which we know we will not respond:
third person, easily capable of being talked over or ignored or sung along
to or switched off in mid-flow. DJs, like politicians, try to create warmth by
generating the illusion that they are speaking to you (this is the whole art
of the DJ) but without using literally a second-person discourse—their
mode is also the comic’s implicit “we.” Digimodernist radio, in which
“listeners” contribute their texts, e-mails, and phone voices to the content
of the show, gives us a different kind of listening, pitched halfway between
the two familiar versions. We are neither expected to respond nor unable to,
but suspended between as someone who could respond, who might respond.
We could, as easily as anybody else, send in a text or e-mail or call up the
phone-in line and speak. And perhaps we do: some people will become
regular callers to such programs or repeat contributors of written material,
and their voices and writing take on in time the assured, measured delivery
of the seasoned professional. In so doing, they achieve the conversational
parity of the responding listener. It’s noticeable that such programs permit
their external contributors to make only very brief and concise points. This
is usually explained by “we’ve got a lot of callers” but in some instances,
especially on sports phone-ins like those following an England soccer
match, many of the callers make roughly the same point—they’re not cur-
tailed to allow space for a vast wealth of varying opinions. E-mails and
texts are short too even though they tend to be better expressed and less
predictable than the improvised speech of the presenter. This could again
be due to the psychological effect being sought: the more people who
contribute, the more it could be you contributing, both in terms of the
show’s mood and identity, and as a brute numerical fact.
Similarly, the discourse thrown up by digimodernist radio lies curiously
stranded between the modes typical of the two traditional versions of
listening. It consists, on one level, of the first-and-second person of ordi-
nary conversation: I think this, why do you, and so on. Yet it cannot in fact
be about either of them, partly because the external contributor, in digi-
modernist fashion, is virtually anonymous—to be “Dave from Manchester”
is to teeter on the brink of being anyone at all. So the content of the show
becomes an intimate exchange about public matters, which is why it resem-
bles stereotypical male conversation, like bar or pub talk (and the majority
of contributors are always men). Accounts of personal experience are
tolerated here, but only to clarify a general point. Unlike bar talk, this
discourse has no chance of becoming oriented on private matters since,
though intimately formulated, it belongs to a broadcast public discussion.
The effect, finally, is that the exchanges feel neither really intimate (a faked
I-you-I) nor generally interesting (they make no new intellectual discover-
ies but just stir around the quasi-knowledge and received wisdom of the
presenter and their callers). It’s an attractive model of spoken discourse
because, synthesizing the traits of both common forms, it promises an
unusual richness and potency. But it actually provides neither desired outcome of listening: neither personalization and intimacy nor clarification
and action. Listening to digimodernist radio does tend to be listening, but
never the sorts we used to know.
derive logically from the last, but a more complex, developed sequence
becomes increasingly hard to discern. This is a complex field, where termi-
nological precision is so far somewhat elusive, but stopping the habit of
mindlessly boasting of nonlinearity would help.
One of the most misleading claims the digimodernist text and its prosely-
tizers can make is that it provides an active textual experience: that the
individual playing a videogame or texting or typing Web 2.0 content is
active in a way that someone engaged in reading Ulysses or watching
Citizen Kane isn’t. This is self-evidently something in its favor; no one
wants to be “passive.” It’s typical of digimodernism that its enthusiasts
make vigorous and inaccurate propaganda on its behalf; the vocabulary
of “surfing” the Internet common in the 1990s, where a marine imagery of
euphoria, risk, and subtlety was employed to promote an often snail-paced,
banal, and fruitless activity, seems mercifully behind us. But the hype
differentiating the new technologies’ supposedly terrific activeness from
the old forms’ dull passivity is still extant, and very misleading it is too.
It’s true that the purer kinds of digimodernist text require a positive
physical act or the possibility of one, and the traditional text doesn’t. Yet
this can’t in itself justify use of the passive/active binary: you can’t suppose
that an astrophysicist sitting in an armchair mentally wrestling with string
theory is “more passive” than somebody doing the dishes just because the
latter’s hands are moving. Mere thought can be powerful, individual, and
far-reaching, while physical action can become automatic, blank, almost
inhuman; in terms of workplace organization, a college professor will be
more active (i.e., self-directing) than a factory worker. The presence of
a physical “act” seems in turn to suggest the word “active” and then its
pejorative antonym “passive,” but this is an increasingly tenuous chain of
reasoning. It’s one of those cases beloved of Wittgenstein where people are
hexed by language. Yet the mistake is symptomatic: how do you describe
experientially the difference between the traditional and the digimodernist
text? It’s a tricky question, but one that at least assumes that there are such
differences, which here is the beginning of wisdom.
A friend of mine (though he’s hardly unique) thinks that Web 2.0 offers the
biggest revolution in publishing since the Gutenberg Bible. Anyone can
now publish anything; it’s democratic, open, nonelitist, a breaking down of
the oppressive doors of the publishing cabal which for centuries repressed
thought and decided what we could read; it’s a seizing of the controls of the
publishing world by the people for the people. If this were true, it would
indeed be as exciting as my friend thinks. Sociologically, publishing has
always defined itself as the sacralizing of speech: whereas speech dies the
instant it is spoken, and carries only to the geographical extent reached by
the volume of the voice, the publishing of text enables utterances to endure
for centuries, even millennia (though increasingly unstably), and to be
transported to the furthest point on our planet, even beyond. Temporally
and spatially published text is, at least potentially, speech equipped with
wondrous powers, furnished with immense resources. It isn’t surprising
that such text has accrued a similarly wondrous and immense social pres-
tige (even if, in practice, the great majority of it is soon destroyed). We all
talk, but few of us talk to everyone forever. Publishing a book is the edu-
cated adult’s version of scoring the touchdown that wins the Super Bowl.
It’s this glamour, this prestige that my friend assumes Web 2.0 lets everyone
in on, and that he’s gotten so excited about.
Leaving to one side for now the issue of whether everyone can or ever
will access Web 2.0, let us imagine a world in which they do. The Web is
indeed responsible for a stupendous increase in the volume of published
material and in the number of published writers. Though held in electronic
form rather than on paper, this text fulfills the definition of publication: it
is recorded, in principle, for everyone forever. This is the first new idea of
publishing. However, and more problematically, this innovation comes at
the expense of a second: the loss of the social prestige associated with the
publishing of text. It isn’t only that so much UGC is mindless, thuggish,
and illiterate, though it is. More awkwardly, nothing remains prestigious
when everybody can have it; the process is self-defeating. In such circum-
stances the notion of a sacralizing of speech becomes obsolete.
To argue that the newly opened world of publishing is a newly devalued
world seems patrician, antidemocratic, even (so help us God) “elitist.”
Furthermore, it’s not strictly valid. Through, for instance, the placing
of academic journals online, the Internet has also increased the quantity of
easily accessible, highly intelligent, and well-informed written matter, and
it sits cheek-by-jowl with the vile and ignorant stuff on search engine results
pages. What will probably occur in the future will be a shift in our idea of
publishing toward greater stratification and hierarchy, internally divided
into higher and lower forms. The quantity of publication will continue to
rise to unimaginable heights, but unendowed now with social prestige.
How long it will take for the sacred aura of published text to go is anybody’s
guess, but the likelihood is that there will be nothing “nonelitist” about it;
differentiation will simply re-form elsewhere according to other criteria.
This may be a meritocratic hierarchy, whereby text is judged for what it
says rather than what it is, but I wouldn’t want to bank on it.
having light beamed into your eyes. The glow of the screen pushes reading
toward the rushed, the decentered, the irritable; while the eye is automati-
cally drawn to the light it emits (explaining the quantitative surge), the
mind is increasingly too distracted to engage with, remember, or even
enjoy very much what it is given to scrutinize.
a child how to write feels like consigning him or her to an almost bestial
state. And yet there is no reason today to imagine that we are not heading
toward such a world. Already the e-mail and SMS have largely superseded
the phone call, which itself saw off the letter; we have passed from writing
through speaking to typing, and while the newer form can coexist with its
downgraded forerunner, something must logically at some stage become
obsolete. Negotiating that may be a key challenge of our century. For now,
we early digimodernists are stranded: we can write but have less and less
need to, and we type but have never been trained to. It’s a part of the char-
acteristic helplessness of our age.
Industrial Pornography
It’s the early 1950s, or 1850s. You are walking alone through a wood on a
mild spring or summer day. From a distance you espy a couple. They are
having sex. What do you actually see? Or try this. It’s the 1900s, or 1940s.
One afternoon, alone, you glance from your window. Across the way the
drapes are almost wholly drawn, but there’s a gap, and from the angle you’re
looking along the gap leads in to a mirror on a wall, and as chance would
have it the mirror reflects slantwise a couple on a bed having sex. What
actually do you see? And in both cases, suppose the couple is averagely
self-conscious, neither furtive nor exhibitionist. And that you feel nothing:
not curiosity, or shame, or disgust, or excitement. Your eyes are a camera.
What do they record?
On one level, the answer is self-evident: you see a couple having sex, of
course. More precisely, you probably see a conglomerate of limbs, a mass of
hair, a jerking male behind, a quantity of physical urgency or tension. On
another level, the question is paradoxical, because the total situation here
of viewer and viewed (people being watched having sex) structurally repli-
cates the ostensible reception and content of industrial pornography; but
the glimpsed actions probably wouldn’t resemble those of porn at all. Why
wouldn’t they?
Industrial pornography is a product, it would seem, of the 1970s; its
origins lie in the heartlands of postmodernism. Yet its textual and repre-
sentative peculiarities make it both emblematic of postmodernism and a
precursor of digimodernism; indeed, it has shifted into the new era much
more smoothly than have cinema or television. We don’t need to waste too
much time on what differentiates industrial pornography from other porn,
from “erotica” or “art,” and so on; these are essentially legal battles. Three
points are unarguable: that there exist texts whose principal or sole aim is
to stimulate sexual excitement in their consumer; that some of these texts
manifest a standardization of content and a scale of distribution that can
be called industrial; and that, as a generic label, “industrial pornography”
is in places as smudged in its definition as, say, “the war movie” or “the
landscape picture.” The label suggests the vast scale of pornographic pro-
duction and consumption over the past thirty years or so, along with the
(relative) openness of its distribution and acquisition. It therefore excludes
material aimed at niches, some of which, involving children or animals, is
more accurately classed as recordings of torture; “pornographic” perfor-
mance is, by definition, exchanged for money.
Above all, industrialization manifests itself here as a standardization
of product. It is always the same poses, the same acts, its performers made
to converge on a single visual type. Everything nonsexual is rigorously
cut out; the “actors” or models are identified solely with their sexual attrac-
tiveness or potency. The predilection of early hard-core movies like Deep
Throat for a detachable plot arc was wiped out by industrialization, which
made it hard to differentiate any one title from the next. Buy a random
industrial porn magazine and you will see a seemingly endless array of
similar-looking individuals in the same positions; rent or buy a random
industrial porn film and similar-looking people will work through the
same acts methodically, systematically, with a soul-crushing repetitivity.
In both cases, models and scenes are separated off by extraneous material
(articles, “acting”) placed there to distinguish them from each other, and
famously ignored.
From a postmodern perspective, industrial pornography is hyperreal,
the supposed reproduction of something “real” which eliminates its
“original.” The reason for the dates given in the first paragraph of this sec-
tion is that industrial pornography has transformed the sexual practices of
many individuals in societies where it is prevalent, recasting them in its
image. Increasingly, “real” sex tries to imitate the simulacrum of industrial
pornography. Moreover, its staging is often either explicitly or implicitly
self-referential in a recognizably postmodern way: it has a strong sense
of its own status as a representation of sex by paid performers; magazines
discuss their models’ lives as professional models, film actresses gaze at the
camera, and so on. The third postmodern element in this material is its
frequent reliance on pastiche or parody (especially of Hollywood), as a
source of ironic winks (clins d’œil) which also help to achieve a minimum degree
of product differentiation.
From a digimodernist point of view, what characterizes industrial por-
nography is this: it insists loudly, ceaselessly, crucially on its “reality,” on its
being “real,” genuinely happening, unsimulated, while nevertheless deliv-
ering a content that bears little resemblance to the “real thing,” and what
distorts it is its integration of its usage, of the behavior of its user. Take a
soft-core magazine photo spread of a model. As the eyes move sequentially
across the images, she appears to gradually disrobe, turning this way and
that, finally placing herself naked on all fours or on her back with her legs
apart. Very few of the poses derive from the “natural” behavior of women
eager to attract a man; and yet these images will excite many men. The
spread as a whole creates, for the regarding male, the illusion of an entire
sexual encounter: the most explicit images set the woman, in relation to
the camera, in positions she would only adopt seconds before being pene-
trated. Consequently, for the regarding male, the photographed woman
appears to be moving ever closer to intercourse with him. And yet—here is
the digimodernist point—within the logic of the photos she doesn’t actu-
ally get closer to sex with anyone at all, there’s no one else there anyway,
there’s only an increasingly unclothed and eroticized woman. And nothing
in the images explains why her appearance and conduct are changing that
way. The images then are only intelligible, both in their content and their
sequencing, by inserting into them the sexual habits of their male con-
sumer. Otherwise, they look almost bizarre.
This process is found in hard-core movies in even more dramatic
form. Here, sexual positions are adopted solely that someone can watch
the performers who are adopting them, and clearly see their genitalia.
Couples copulate with their bodies scarcely touching, or contort their limbs
agonizingly, or favor improbable geometries, solely in order that penetra-
tion be made visible. Male ejaculation occurs outside of the woman’s body
purely in order that a viewer can watch it happen (nothing in the text
explains such a pleasureless act). The mechanics of hard-core industrial
pornography suggest an unreal corruption, a slippage from sex as it is done
and enjoyed to sex done so that someone else can enjoy seeing it, and this
corruption generally has the unspoken effect of diminishing the partici-
pants’ pleasure. Such positions, the ejaculation shot, and the rest are staples
of industrial pornography not because they yield unrealistically fantastic
sex but because they permit unrealistically visible sex. While deformation
of “known reality” for creative purposes is all but universal in the arts, its
function is doubly peculiar here: first, since the unique selling point of hard
core is its documentary sexual factuality, the distortions simultaneously
betray the genre’s raison d’être and furnish its necessary cast-iron proof,
making them both structurally crucial and self-destructive; and second,
every one of the changes here stems specifically from the systematic and
crude sexual demands of the watching consumer, not from the artfulness
of the creator.
This is equally apparent in the narrative logic of industrial hard-core
porn movies, which integrates their consumption, constructing itself out
of the circumstances of their viewing. If viewing here is the chancy recep-
tion of sexual images, then the circumstances of the encounters seem cor-
respondingly impromptu, the sudden couplings of virtual strangers (the
pizza delivery boy or the visiting plumber and the housewife) both in their
narrative context and in their presentation to the watching gaze. If viewing
is voyeurism with the consent of the seen, then encounters tend to exhibi-
tionism, sex breaking out on yachts or hilltops, in gardens, by pools, such
that the viewer’s “discovery” of naked copulating bodies is mirrored by the
performers’ “display,” both to the viewer and narratologically, of their
nudity and their copulation. If viewing means “happening” on other peo-
ple having sex, then performers do it to fellow cast members too, acciden-
tally entering rooms to find sex in progress, and joining in or watching.
Indeed, the proportion of encounters watched from within the scene as
well as from outside is striking.
Its alloyed digimodernism marks off the hard-core industrial porn film
from any other movie genre, even those, like comedy or horror, which also
aim to stimulate a visceral or physical response. In turn, no genre excites as
powerful a reaction in its viewer, an impact that derives less from its osten-
sible content than from its digimodernist construction. While experienced
perhaps most acutely by fans, hard-core porn tends also to have a fairly
overwhelming or engulfing effect on those who find it disgusting or
tawdry. That engulfing, that outflanking of the viewer is recognizably digi-
modernist and shared to a great extent by videogames and reality TV; each
short-circuits, in a way that elicits inappropriate notions of “addiction,”
a deliberate, controlled response. We will come back to this issue later.
Its digimodernism also means that industrial pornography should be
primarily seen as something that is “used” rather than “read” or “watched,”
employed as an ingredient of a solitary or shared sexual act outside of
which it makes no sense or appears ludicrous. However, it’s undeniable
that, for many reasons, the viewer whose feelings, actions, sightlines, and
rhythms are so efficiently uploaded into and visually integrated by indus-
trial pornography tends to be male. There is little universality about the
use of porn. Women, research suggests, initially find hard-core films as
arousing as men do but lose interest much more quickly, and this may be
because the movies are textually invested, in their content and sequencing,
with the sexual practices, habits, and responses of an expected male viewer.
It is women whose pleasure is most visibly articulated (men’s is self-
contained) or whose fellatio is in all senses spectacular; it’s the woman’s
body that is waxed and inflated to become something it had never previously
needed to be: exciting to stare at during sex. However, textual conventions
(regular, monotonous) must be separated here from their possible reception
(perhaps wayward, unexpected): the fact that industrial pornography reinvents
lesbianism solely as an object of male regard does not, for instance, prevent
some straight women from finding it exciting. This discussion is
about textuality, not consumption.
The digimodernism of industrial pornography is doubly partial: it coex-
ists with its postmodernism (an interesting contribution to debates about
their relationship); and the viewer (textually male) does not determine or
contribute to the content or sequencing of the material by any conscious
act. His sexuality, abstracted from him and inserted in heightened form
into what he is regarding, “writes” what he sees through the intermediary
of someone else’s hand—the director’s—which guides his metaphorical
pen. Sitting in a ferment before these images he doubtless does not
know or care why they are the way they are, nor why he is responding so
intensely. Entranced, his digimodernist autism overpowers his individual-
ity just as, functionally, industrial pornography relies on anonymity: the
obvious pseudonyms of the performers, and equally of the consumers
whose experiences contributed to Laurence O’Toole’s book Pornocopia.
On his acknowledgments page O’Toole thanks, increasingly ridiculously,
Ceefax
I enter my living room, turn on the TV, pick up the remote control, and
retreat to the couch. I choose BBC1, then press a button on the remote
control: the picture is replaced by words on a black screen; I key in 316 on
the control number pad. I have entered the world of Ceefax. In seconds the
current football scores have appeared on the screen (it is 4:10 on a Saturday
afternoon in England). Seeing that Manchester United are winning 1-0,
I go to the kitchen to make a coffee, leaving the TV as it is; when I return at
4:30 I discover that United’s opponents have scored two goals in rapid suc-
cession. I take the remote again and key 150: the latest news flash appears.
Then 102: I get the latest news headlines, five or so words per story, and
choose 110 for an extended (about ninety words) rendition of the story
that interests me most. Then (it is a May afternoon) I key 342 to see the lat-
est score in a cricket match being played; then 320 to see the latest score in
my local team’s football match; then 501 to check out the latest entertain-
ment news; then 526 to see brief (about 130 words) reviews of the latest
films. My partner mentions going for a picnic tomorrow, so 401 gives me
the weather forecast and 426 the predicted pollen count. Back on 316,
I discover that United have banged in two more goals to win their match,
and on 324 confirm—since all the games are now finished—that they top
their league. Feeling indolent, I key 606 to see what is currently showing on
the terrestrial channels; uninspired, I try 643 and 644 to see what’s on the
radio right now; then back to 342 for an updated cricket score. Then I turn
the TV off and play with my son instead.
about midnight and 7 a.m. GMT, I am then way ahead of my print newspaper,
which, at 8 a.m., will provide me (having been put to bed before midnight)
with the score and its meaning from the previous day’s play (cricket matches
can stretch over several days). Even before I buy it, my paper is badly out of
date. Furthermore, as cricket matches can fluctuate considerably, I might
flick disconsolately through my 8 a.m. paper knowing England have been
comprehensively trounced and eyeing a report about how brilliantly they
have played. True, I could get a more up-to-date picture from other sources
(radio, live TV action); but the point is that Ceefax is an electronic version
of print news, and can be systematically compared only with that.
I still remember the shock of first using Ceefax in the 1980s: used to
staring passively at pictures on a TV screen, I felt strange keying three-
digit codes to get endless things to read.4 Ceefax’s arrival in British homes
undermined the established passivity of the TV viewer in a shift equivalent
to the more-or-less concurrent spread of the VCR, which allowed the
viewer to accelerate or rewind the pictures on his or her screen, re-view
and pause them, controlling physically what the screen displayed. Yet the
VCR, a new machine connected up to a TV, in effect merely subjugated
television to an outside technology now responsible for its content. Ceefax,
though, came from within the set; accessed via the BBC’s channels, it was
a BBC product (and other broadcasters had their own versions). Also
significant was Ceefax’s nonsequentiality: as my description shows, users
accessed specific pages (targeted use) or, at best, consecutive pages within
specific sections, and otherwise leapt around the system at will. There was
no real reason either why sections should be arranged according to any
particular numbers: ITV’s Teletext placed sport at 400, local news at 330,
and current TV listings at 120, creating a confusion resolved by users tend-
ing to prefer one broadcaster’s system. If print newspapers differ in this,
it is not only because they tend to arrange their sections pretty much in a
standard order. Readers of print papers will go through them consecutively,
page by page, pausing as interest flickers but glimpsing (at least) all of them,
even the most uncongenial. They will rarely jump into a print paper at a
specific page, or vault dramatically around it; they may pick it up in order
to read, say, the op-eds, but will leaf through looking for them and not
consult page numbers (which may vary from day to day anyway), whereas
people like me will go directly and securely to Ceefax page 316. Unlike a
print paper, then, Ceefax yields no sense at all of a total news product to
be consulted or handled from one end to the other; even Ceefax addicts
only ever see perhaps 20 percent of its pages, and this is integral to its infor-
mational identity.
Who uses Ceefax? Anecdotal evidence suggests men5 (it is ideally suited
to sports news), while the letters page suggests users living in isolated rural
areas who hold pompously expressed, reactionary opinions. It might be
surmised here that Ceefax is yesterday’s technology, suited only to those
too old or too dull to adapt to newer, better forms. It’s true that Ceefax’s
self-containment is constricting compared with the Internet’s varied riches;
equally, Ceefax offers no more scope for user-generated content than print
newspapers did a century ago: its content is top-down. Much of Ceefax does
duplicate the BBC’s news Web site, and it may be that the form is doomed
to obsolescence. Yet Ceefax still has certain advantages. It is cheap—virtu-
ally free in itself, and a TV set costs much less than a computer—and sim-
pler and quicker to access and use, largely because of its very narrowness.
Both these points give Ceefax a potential reach which the Internet cannot
currently match, reinforced by the fact that, unlike the Web, a high number
of simultaneous Ceefax users does not slow the system down.
Above all, to a great extent Ceefax is the perfect proto-digimodernist
textual form: an evanescent, nonsequential textuality constantly being
made and remade anew, never settling, never receiving definitive shape.
Ceefax contains no records of the past: it’s an encyclopedia of right now
(it will give you status reports on flights due to land in the United Kingdom
in the next few minutes) with no other temporal dimensions. Using it
becomes hypnotic, addictive, trance-inducing, evacuating all sense of time;
its anonymity (only a tiny fraction of its pages give their authors’ names,
and there’s no space even for them to tell us anything about themselves)
means that all textual moments on Ceefax resemble all others. Ceefax has
no textual memory, no history; it’s an impersonal, amnesiac textuality; it is
designed to be used, and used right now, not remembered or discussed.
Indeed, analysis of it tends to focus on its engineering, overlooking its
content.6 Though integral to British life it is scarcely ever mentioned by
Britons, in the same way that people rarely talk about their washing machine
(except when it breaks down, and Ceefax doesn’t). In its country of origin
it is both omnipresent and ignored, a cultural marginalization inherent in
its very form: it is, above all, a technological textuality, remarkable for its
efficiency, rather than a content-centered textuality, interesting for what it
says. Proto-digimodernist to the end, it is soon to be largely phased out.
Whose Line is It Anyway?

A TV show host invites his four guests to come down on to the stage and
form a line facing the studio audience. They are going to make up a rap.
House
The Smiths’ song “Panic,” released in 1986 as house music spread across
Britain, urged listeners to “burn down the disco” and “hang the DJ” because
“the music they constantly play/It says nothing to me about my life.” Locked
inside the expressive-meaningful assumptions of white-boy rock music,
The Smiths could only look at house and see an inability to evoke everyday
experience, a failure of signification. As for postmodernism (from which
rock was essentially excluded), it misconstrued house by overemphasizing
its use of sampling, a 1980s technological innovation by means of which
elements of previous songs, like their bass lines or drum beats, could be
excised and redeployed in completely new settings. Consequently, a hit
record like M/A/R/R/S’ “Pump up the Volume” (1987) sounded fresh and
new (and wonderful) while being self-evidently made up of fragments of
thought, feeling, and behavior not utterly subjugated to it. Its use created,
quite simply, a sense of euphoric weightlessness and nonattachment, of
freedom and exultance, and so the excision of what lay beyond was
integral to it. Such lyrics, and indeed the process of sampling, worked to
achieve an impersonality, an experiential liberation from the confines of
the self in a state of collective transcendence, of ecstasy-induced oneness
with innumerable others. Hence the necessarily communal nature of its
reception or experiencing; hearing it at home yielded a weak echo of
its genuine potency, tangible only when played very loudly to large groups
of people in clubs or at raves.
In such a context, house evolved several distinctively proto-digimodernist
features. First, the songs blurred into one another, their beginnings and
endings unclear, while varying versions of themselves multiplied dizzy-
ingly; house broke the organic song text into a proliferating, haphazard
textuality. Second, house tended to eradicate authorship. This music felt
anonymous and autonomous, and its makers, who were half-unknown
even at the height of their success, rarely achieved the status of “artists,”
people whose work you might follow over time (only DJs achieved that).
Third, house was ephemeral. Intense, vital, and wonderfully exciting in
their time, pieces became almost instantly outmoded. In general, house,
which foregrounded its own use or appropriation, which privileged the
moment, circumstances, and impact of its overwhelming reception, was
swiftly used up, exhausted, discarded. Histories of the phenomenon, like
Sean Bidder’s Pump up the Volume, describe a succession of human experi-
ences, not the development of a genre.9 Fourth, house was as international
as the Internet would later be, both under the automatic aegis of a version
of the English language. Songs came from Belgium, Chicago, England,
Spain, or Italy, but most importantly sounded as though they came from
anywhere and nowhere. (The title of “Ride on Time” (1989), by the three
Italians behind Black Box, is an EFL student’s mishearing of the American
pronunciation of “right on time.”) Once again, all external temporal and
spatial specificity and content were ruthlessly cut away; the songs existed,
in a forerunner of cyberspace, in a kind of autonomous musicspace, a float-
ing realm of sound and feeling, exalted, narcissistic, sleek, and euphoric.
House was the textuality of the suspension of the self and the other.
B. S. Johnson’s The Unfortunates

copy they hold in the public library service’s headquarters. When you
collect it you are given a small rectangular box with author and title names
on the front beside the words “a novel,” and a kind of purple blotchiness
spreading across it (actually a photograph of cancer cells). Open the box
and you find on the left the warning: “Fiction reserve. This book is to be
returned to Headquarters and the fiction reserve. It is not to be added to
stock at any branch.” What is this impossible novel? In what does its impos-
sibility lie?
Within the box is a wrapper holding sections of stitched papers. A note
reads:
You remove the sections from the wrapper; you find and read “First,” three
pages long. It seems to be the jumbled interior monologue of a football
reporter sent one Saturday to cover a match and arriving in a city he
realizes he knows, triggering memories of a Tony and his “disintegration,”
of a June (Tony’s wife?) and of a Wendy (the reporter’s ex-girlfriend?).11 He
immediately admits: “The mind circles, at random, does not remember,
from one moment to another, other things interpose themselves,” and he
contrasts Tony’s “efficient, tidy” mind with his own, “random, the circuit-
breakers falling at hazard, tripped equally by association and non-associa-
tion, repetition.”12 Thus forewarned, you take the next section in the pile,
or another, but not the one titled “Last” (though unnamed, the other
twenty-five have a distinctive abstract pattern at their head, perhaps to aid
the printers).
At this point I can’t, of course, tell you what I read (I would if you were
here). To record and therefore privilege in print a certain pathway through
the novel is, clearly, to betray and disfigure its very meaning; you must go
through it along a certain pathway, and all are equal. Anyway, maybe I’ve
read it ten times by now, and along ten different paths: which would I write
about? Sections succeed each other, perhaps a page long, maybe eight, and
the style is as the reporter suggested, a circuitous wandering around mem-
ories and perceptions with sentences that wind and turn in and out and
back on themselves. The memories go over his past with Tony, with Wendy,
and you read on along the labyrinth’s route you have chosen yourself to
follow, looking for answers to the questions that emerge about these people
and what happened to them. You realize soon that the randomness of the
sequence of reading within a limit (the wrapper’s fixed contents) mirrors
the obsessive, trapped winding of the reporter’s thoughts and memories
deprived of chronological objectivity. The method of producing the book
enforces a demonstration of and a disquisition on the processes of remem-
bering, which are revealed as associative, chaotic, emotional, and nonse-
quential. Tony is dead; Wendy has been supplanted; they were all students
about ten years ago; they went to pubs and restaurants and visited each
other. Bit by bit, as in any novel, you piece together information about
them; uncannily, the order in which you do this here doubtless hasn’t been
and never will be experienced by anybody else. In the present of the report-
er’s thoughts he moves about the city, and random sequencing means
you read bits from after the match he saw before bits “set” earlier; and
so the jumbling of his memories produced by the text’s structuration is
reproduced in the chaoticization of your reading about them. The voice
sometimes echoes the thought-processes of Leopold Bloom, another
newspaperman-outsider in a provincial city (keen on horses, not soccer),
and sometimes suggests the traumatized recollections, circular and quest-
ing, of Graham Swift’s narrator in Waterland; like the former, he digresses,
tiptoes past clichés, notes everything around him, and like the latter his
present is lonely and his memories intolerably painful. The novel is as trick-
ily experimental as Joyce and as turbulently male as Conrad, as readably
contemporary as The Information and as heartbreakingly “literary” as Flau-
bert’s Parrot. Pitched midway between modernism and postmodernism,
The Unfortunates is a forerunner of a third textual axis.
It’s a novel about friendship, loss, love, guilt, ageing, masculinity,
memory, about place; it’s a story that packs an emotional punch; it’s not
some sterile game. Past and present interweave via a sardonic pun (free
association/association football) and ironic counterpoint (soccer involves
the homosocial bonding of young men too). The intellectual effect of the
structuration is to dramatize the movements of memory and perception,
past and present brain activities, when semiunmoored by objective chro-
nology. But the emotional effect comes up on you too: it makes the report-
er’s grief feel labyrinthine, the way grief feels to all of us; it exposes mourning
and loss as an unchartable psychological prison from whose confines
you feel you will never emerge. And, paradoxically, you forget, as you read
on, finishing a section, choosing its successor (perhaps striving for ran-
domness and digging deep in the pile, perhaps just taking the next one,
perhaps selecting the shortest one left or the longest for your own reasons),
the “strangeness” of the structuring principle and the author’s refusal of
sequentiality. True, the story has its opacities, its launches in medias res, its
allusions clarified only later, its jumps back and forth in time; but these
seem only confirmations of its literary time and place, not so different
from, say, Greene or Durrell. In practice, the structuring principle is sub-
sumed into the experience of reading: it continues to be felt as the intellec-
tual and emotional effects that I’ve tried to describe, but very soon, on the
second reading session, it no longer feels odd.
It isn’t, of course, completely nonsequential or reader-generated: though
s/he chooses the order of sections at random, the author has invested each
of them with its own unshakably sequential prose. But Johnson under-
mines sequentiality on every level he can. The reader’s sectional nonse-
quentiality is reproduced by a tendency to free associate or drift randomly
within each section, from paragraph to paragraph or from sentence to
sentence (some separated by hiatuses), and even within sentences, which
return on themselves or break up and regroup and restart. Your random
reading is finally just your contribution to an overall literary project fore-
grounding irrational consecutiveness, multilinear form, and internal non-
sequentiality; it’s integrated, part of a whole.
By the reading’s second half (counted by page quantity; the novel itself
has no such thing, of course), the parallel shifts, from finding your way
haphazardly through the labyrinth of somebody else’s mental processes to
the pressure of inevitability you feel, sensing the gaps in the chronology
and lifting the next section, which resembles the latter stages of completing
a jigsaw puzzle. Except that you don’t choose the next “piece” because
you know it fits; you pick it up and it slots itself in as you read it. But
whichever image is employed, the proto-digimodernism of The Unfortu-
nates is, I think, clear: whereas a traditional novel offers a set of words in
a particular order, a materially fixed text, Johnson proposes a set of words
to be placed in one of the 1.551121 × 10²⁵ possible orders which the reader
must select him or herself. In other words, the sequencing of the novel,
traditionally the author’s sole responsibility, here becomes largely the con-
sequence of a physical act necessarily carried out by the “reader.”
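That figure is simply 25 factorial: with “First” and “Last” fixed, the twenty-five unnamed sections admit 25! orderings. A quick arithmetical check (my sketch, of course, not part of the novel’s apparatus):

```python
import math

# The Unfortunates: 27 sections, of which "First" and "Last" are fixed,
# leaving 25 sections whose reading order the reader chooses freely.
orders = math.factorial(25)
print(orders)            # 15511210043330985984000000
print(f"{orders:.6e}")   # 1.551121e+25, matching the figure above
```
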
Although Johnson can be seen, like Godard, as a late modernist, the
impossibility of The Unfortunates does not lie, as it did for The Rainbow
(also partly set in Nottingham), in its content; it was felt not by censors
whose suppression could be later reversed but by professionals in the
textual field, and it is still palpable today. The publishers Secker & Warburg,
with whom Johnson was contracted, were initially unenthusiastic about
the project (“it was going to be hellishly expensive to put into practice”)
and, on delivery of the manuscript, “completely nonplussed by [its] daunt-
ing practicalities.”13 Librarians also disliked it, finding that borrowers
would appropriate individual sections; to prevent this, some libraries
bound the sections together, destroying the book’s purpose. Scholars too
have struggled with its refusal of a materially set text, its extreme multilin-
earity (not nonlinearity); scholarship has always regarded indeterminate
textual sequencing simply as a problem needing to be solved, as the history
of the disputes over the “correct” order of the chapters making up Kafka’s
The Trial illustrates.14
As for the book’s contemporary reviewers, they suffocated it, politely
acknowledging its innovations and mildly damning its content.15 Few
national literary establishments can have been as hidebound, reactionary,
and philistine as Britain’s in the 1960s and 70s, as Johnson himself fre-
quently and stridently charged. Yet it can be countered that every official
culture is dominated by a narrow, middle-aged conservatism, and that
what marked Britain’s out in those dark days was that its young and left-
liberal wings had abandoned any belief in homegrown artistic innovation
or excellence, preferring instead to worship what came from France and
America. A London art student in a 1963 novel by John Fowles talks about
feeling “there’s so little hope in England that you have to turn to Paris, or
somewhere abroad,” while a young bohemian in a 1975 novel by Martin
Amis calls the idea of reading an English novel “outré . . . like going to
bed in pyjamas.”16 Johnson’s natural demographic looked the other way.
Widespread indifference to his work, among other crises, culminated in
Johnson’s suicide in 1973 at the age of forty. His biographer Jonathan
Coe records that, only a few months before the end, he was “devastated to
learn . . . that without consulting him, Secker had pulped all the remaining
unsold copies of The Unfortunates, the novel that was, as a physical object,
by far the most dear to him . . . Gone. All destroyed.”17 Impossible.
Its structuring principle was not entirely new: Coe establishes that
Johnson was aware of the appearance in New York in 1963 of the English
translation of Marc Saporta’s Composition No. 1, a novel made up solely of
single pages contained in a box.18 I chose not to analyze this because I could
not find a copy; it was never published in the United Kingdom. The proto-
digimodernist text, inescapably a creature of the margins, runs the risk, as
with happenings, of disappearing forever off the cultural radar: if Johnson’s
novel went almost unread for thirty years (and, despite reissue in 1999,
remains obscure today), then Saporta’s forerunner, at least in my experi-
ence, has vanished into the textual night. These two went furthest, it seems
In its own way, this book consists of many books, but two books
above all. The reader is invited to choose between these two
possibilities:
The first can be read in a normal fashion and it ends with chapter
56, at the close of which there are three garish little stars which stand
for the words The End. Consequently, the reader may ignore what
follows with a clean conscience.
The second can be read by beginning with chapter 73 and then
following the sequence indicated at the end of each chapter. In case of
confusion or forgetfulness, one need only consult the following list:
73–1–2–116–3–84 (etc.)20
The novel contains 155 chapters in all, and for its longer version chapters
57 to 155 have been shuffled and then scattered among numbers 1 to 56
(55 is not reused). This gives the reader very little actual scope for textual
determination, but conversely Hopscotch’s universe is more fictively auton-
omous than Johnson’s aestheticized memoir. Note too the insertion into
each of these novels of a sort of user’s manual, made necessary by their
authors’ attempts to break with traditional textual form. The Unfortunates
especially pulls the processes governing the publication, dissemination,
Pantomime
changed behind her/him and discuss the story, or, Davies suggests, hand
out newspapers and award a prize for the best hat made from them, or,
again, invite people up to take part in a game that makes them look
slightly and amusingly ridiculous. Then there are the standard pantomime
commentaries voiced during the action: perhaps encouraged by boards
held up by members of the crew, the audience will boo or hiss or denounce
archaically the villain (“Shame!” “Scoundrel!”), and give the hero(ine)
aahs of sympathy and cheers when appropriate. More traditional still are
the exchanges whereby an actor and the audience contradict each other
over events happening on stage: “Oh no he isn’t!” “Oh yes he is!”, the actor
hammily inciting the audience’s ever more committed, chorused retort.
Moreover:
Prompted by such lines as “If that big spider (or the gorilla or the
ghost) arrives, you will tell me, won’t you?” the children will shout
out as the spider arrives, unseen by the protagonist in this scene. The
spider can hide in various places, changing swiftly from one place to
another as the hero desperately enquires: “Where? Over here? No,
he’s not . . . Where is he, you say? Over here? No, he’s not . . . You’re
kidding me, aren’t you? He’s not here at all, is he?” and so on, until the
creature finally stands right behind him, moving in unison, to remain
hidden every time the hero spins around until the youngsters (and a
good many adults) are yelling hysterically: “He’s behind you. Look
out behind you!”23
During the action audience members can also be invited up on stage and
involved in set-pieces (e.g., by judging an Ugly Sisters beauty contest, or
assisting a magician or wizard). Conversely, items can be thrown into the
audience, like soft candies or supposed water from a bucket that turns
out to be confetti, and cast members can pass through the audience from
the back, interacting and mingling with them (e.g., the dame seeking a
partner for the ball). There is almost no end to the variety of theatrical
“business” by which Davies can imagine the audience being drawn into the
production. She concludes: “One way or another, the audience should feel
that they are part of the pantomime. It is not something just for the players
to create in isolation . . . Give the audience a chance to contribute . . .
Throughout, they should feel that they are welcome to join in—to sing,
shout, cheer and comment—and that they are a vital element, even part of
the story at times.”24
However, this is all more complex than it may at first look: precise
definition of what is going on here, and how, problematizes words like
“participation” and “contribution,” “joining in” and “being part of.” For a
start, many of the moments when the audience is interacting with the
cast clearly lie outside the story, extrinsic to the narrative, occurring when
the production has paused (at scene changes) or is yet to start. They irrupt
into the entertainment’s margins when it takes a breather without quite
admitting to it. This separation from the genuine action is signaled spa-
tially (being set outside a lowered stage curtain) and textually: “Many of
the audience’s lines are implicit in the script, though not actually written
down.”25 To a degree they are excluded by their incommensurable multi-
plicity: “The question ‘Which way did they go?’ will create a furor of
instructions and pointing fingers.”26 But they can be highly predictable too:
“A character creeping up on another will set off a chorus of ‘He’s behind
you!’”27 Crucially, the lines delivered by the audience here, while making
them contributors to the drama (understood as the totality of the words
voiced and gestures made in a performance), do not constitute them as
a character. The audience are shouting or gesticulating as themselves,
undisguised, truly not fictively, and yet—uniquely—under license from the
production to behave temporarily as though able to enter the piece and
interact with fictional characters (by encouraging, condemning, helpfully
offering information, etc.). Almost all audience participation in panto-
mime requires the veiled suspension of elements of the production, and
appears beyond its space, time, narrative, script, and dramatis personae.
This does not mean that the audience is deceived, but neither is it genu-
inely involved. What distinguishes pantomime is not that the audience
contributes to it, but that it creates moments and spaces where it suspends
itself, and modes by which the wall separating reality and fiction is broken
down (e.g., the dame reads birthday greetings to children in the hall).
If the audience contributes anything, it is clearly not in an authorial
guise, since they say and do what they are told, and invent little or nothing
(hecklers notwithstanding). Deprived of a role, they are not, unlike full
members of the cast, directed and prompted by the production crew
(the Director or the Prompt), but by other actors; the audience are reduced
then to temporary subactors in the interstices of the production. They are
given the lowest of all speaking parts: they reply to greetings, give mono-
syllabic answers, offer obvious and moralistic commentary. Worse still,
their treatment by the cast results in their “acting” becoming necessarily
shoddy, constrained either to frenzied exaggeration (the demented screams
of “he’s behind you!” due to a character’s amazing obtuseness) or to the
blankly wooden (when brought up on stage, due to the passivity of the
function they are made to fulfill). The audience’s technical ineptitude as
an actor, a line reader or stage presence is deliberately engineered by the
***
Beyond these stand a long line of texts and fragments of texts: John
Krizanc’s play Tamara (1981), where ten actors play out simultaneous
scenes in the various rooms of a large house—the audience mingle among
them, have to choose which room to go to, cannot experience the whole
play, get spoken to by the actors, have to decide whether to follow one actor
throughout or move among them, and so doing make up for themselves
their sense of the text; the improvised play Tony ‘n’ Tina’s Wedding where
the audience is treated as guests at the actors’ nuptials; Laurence Sterne’s
invitation to the reader to create a page of Tristram Shandy depicting the
beauty of widow Wadman, “as like your mistress as you can—as unlike
your wife as your conscience will let you—’tis all one to me”;28 and myriad
TV programs featuring the public as game-show contestant, entrant in
competitions, phone-in caller, vox pop interviewee, and more; and doubt-
less you can think of others—though they will be marginal effects and mere
curiosities, eccentric offers no one ever accepted, or corralled and minimal
roles within a regulated textual environment. The margins, the margins.
So what have we here (it’s easy to imagine a hostile observer thinking,
ending this chapter)? A subform of photography and film, which degrades
and dehumanizes all who have contact with it, populated, it would seem,
by male sleazebags and slimeballs and women with mental health issues;
an obsolete source of shards of news; comedy for smug Generation X slack-
ers, which trades on its misfires; mindless and thoughtless beat repetition;
the failed pseudonovel of an egomaniac suicide; the lowest form of theater
known to man.
My feelings here are divided. Industrial pornography, like an unseen
planet, has exerted a powerful influence on the concurrent development
of mainstream cinema worldwide; Ceefax was extraordinary in its day,
and its eclipse doesn’t obliterate its historical importance; improvisation,
brilliantly funny at best, resembles all comedy in not hitting every single
mark; The Unfortunates belongs to our era, in which Johnson has been the
subject of a surge of interest; and pantomime keeps British theater, perhaps
the world’s most dynamic and technically excellent, financially afloat.
And yet all are marginal. Some were pushed to the side by their
obscenity (ob-scene may mean literally “off the stage”),29 some by their
revolutionary technique or cultural demands, some by their suitability for
children—various forms of marginality. There is nevertheless, and this will
become ever more of an issue as this book goes on, a problematic of quality
about the digimodernist text. How good is it? How good can it be? After
postmodernism interrogated the assumptions implicit in the notion of the
“great work of art,” digimodernism struggles with its possibility (whether it
is capable of greatness). Were these texts marginalized because they were
proto-digimodernist? Or is (proto-) digimodernism a form of artistic
mediocrity destined inexorably for cultural denigration? In short, there are
three alternatives: (1) its lack of prestige is socially determined (because of
inherited prejudices about what art should be, for which it is a “scandal”),
and so reversible; (2) its lack of prestige is aesthetically determined (because
texts that function in this way cannot achieve greatness), and so irrevers-
ible; (3) its lack of prestige is historico-textually determined (because until
recently texts have scarcely been made like this, except as eccentricity or
curiosity, but now they are, and will be increasingly), and so all bets are off.
The hostile observer might prefer the second view. The case of the long
disparagement of jazz improvisation—the form of proto-digimodernism
whose absence from this chapter I regret most—suggests the first. To me,
the third is the most interesting.
4
Digimodernism and Web 2.0
Polyphonic plenitude, the searching out and affirmation of the plurality of different voices,
became the leading and defining principle of postmodernism’s cultural politics. Just as Goethe
is said to have died with the Enlightenment slogan “Mehr Licht!” (“More Light!”) on his lips, so
at one point one might have imagined postmodernism going ungently into its goodnight
uttering the defiant cry, “More Voices!”
Steven Connor, 20041
In an important sense, of course, Web 2.0 doesn’t exist. (The term belongs
in the antilexicon.) Much of the technology underpinning it has been in
place since the Web’s inception, and some of its most emblematic examples
are almost as old; Tim Berners-Lee is surely right to argue that its common
meaning “was what the Web was supposed to be all along.”2 Well known
since a conference in 2004, and despite suffering from hype—The Economist,
mindful of the dotcom mania, has referred sardonically to “Bubble 2.0”—
the accepted sense of the term is nevertheless a convenient textual category:
it denotes the written and visual productivity and the collaboration of
Internet users in a context of reciprocity and interaction, encompassing,
for instance, “wikis, blogs, social-networking, open-source, open-content,
file-sharing [and] peer-production.”3 Moving beyond read-only information-
source Web sites, the textuality of Web 2.0 sites notably favors (in the
jargon) “user participation” and “dynamic content.” Moreover, “Web 2.0
also includes a social element where users generate and distribute content,
often with freedom to share and re-use.”4 The forms of Web 2.0 are the
most globally important cultural development of the twenty-first century
so far, and they lie at the heart of digimodernism as we currently know it.
Two examples of this genre are David Jennings’s Net, Blogs and Rock ’n’
Roll (2007) and Don Tapscott and Anthony D. Williams’ Wikinomics: How
Mass Collaboration Changes Everything (2006, 2008). Jennings explores the
nature of Web 2.0 and suggests how it may evolve. When he evokes the rise
to prominence of Sandi Thom’s music and the movie Snakes on a Plane
through Internet viral marketing, he is interested in how a product appeared
on the market and was received or appropriated by its consumers; he isn’t
concerned with how good those texts are, or what they might mean; he has
no conception of them as texts, only as objects of publicity and consump-
tion.5 Tapscott and Williams advance the view that the particular organiza-
tion of Wikipedia is the way that companies in future would be best advised
to operate: this is Web 2.0 as the model of microeconomic success. They
assume that Wikipedia is a success because it is used (read and written)
by so many people: it’s a consumerist system of values, whereby the widely
bought product is automatically to be emulated. By a sleight of hand they
then see this commodity as the prototype also of the future manufacturer
of commodities.
I don’t want to reject these kinds of writing entirely, though I suspect
that the latter claim too much too quickly and won’t stand the test of time;
they are overly marked by the spirit of advertising integral to business.
It’s telling indeed that Web 2.0 lends itself immediately and most naturally
to a discourse of practical and physical use. But, while highlighting this
point, I can’t see that these are the only ways you can talk about Web 2.0.
It can also be read textually. Many of these platforms have a hard-copy
precursor: the diary (blogs), the newspaper letters page (message boards),
the script for a play (chat rooms),6 the encyclopedia (Wikipedia). At a sec-
ond degree, YouTube resembles a festival of short films or documentaries.
Social networking sites, slightly more problematically, adapt an earlier
electronic platform, the personal Web page, rather than a pre-Web form
of text, but this is not finally prohibitive of textual analysis. And if Web 2.0
can, on the whole, be assimilated to forms universally considered texts,
then they are texts themselves (of a sort) and can be studied—as I’m going
to here, in a way—textually.
This poses, again, its own difficulty. What can textual analysis tell us
that is not already obvious to all? It isn’t just these platforms’ fame; it’s
their accessibility; above all, it’s their ease of use, once more, by which so
many people have gotten to know them intimately, from the inside out. The
critic is a professional reader; Web 2.0 throws up the writer/reader, a new
kind of textual knowledge and familiarity. A bigger problem still derives
from the necessary incompleteness of the Web 2.0 text. The cultural critic
typically watches entire films, gazes at completed paintings, reads finished
books, and consequently treats them in their totality. Web 2.0 texts, how-
ever, never come to a conclusion. They may stop, or be deleted, or fall out
of favor (and off search engine results pages into oblivion), but they are
not rounded off, not shaped into a sense either of organic coherence or
of deliberate open-endedness. Items within them, like blog entries, may
have this internal structure, but they fit into an overarching onwardness.
Textual analysis of Web 2.0 must therefore follow the text in time: it must
go with it as it develops, seemingly endlessly, over a lapse of weeks, months,
or years. This distinguishes such analysis from that of any pre- or extra-
digimodernist text: it critiques now what will soon be different. Scholars
do frequently shift their attention from a finished text to its manuscripts
or preliminary sketches, but the interest of these stems precisely from their
final incorporation within a supremely complete textual end-product. On
Web 2.0, though, each version of the text in time is the equal of every other;
similarly, each gives an initial impression of finishedness, dispelled at
varying speeds.
Equally trickily, while the forms to be studied have been chosen for me
(they’re sociocultural powerhouses), the practices of digimodernist analy-
sis that they demand don’t exist yet. In response to this and the other issues,
I’m going to look at these forms as examples of such practice and such
analysis. Each will be read in terms of a theme running through digimod-
ernism as a whole. This will also have the beneficial effect of tying my com-
ments into the next two chapters: finally, I see Web 2.0 as no more than
a subform, albeit the most important, of a wider cultural shift, a context
generally missing so far from discourse about it.
I should make clear from the outset that I come neither to bury nor
praise Web 2.0. Culturally it’s evident that much of what is expressed
through it is ignorant, talentless, banal, egomaniacal, tasteless, or hateful;
textually, though, I can’t but feel that the avenues it opens up for expression
are wildly exciting, unformed, up for grabs, whatever we choose to make
them. This disparity is central to the spirit of the times: ours is an era more
interested in cultural hardware, the means by which communication occurs
(iPods, file-sharing, downloads, cell phones) than the content of cultural
software (films, music, etc.); it’s the exact opposite of high postmodernism.
Given the speed and unpredictability of hardware innovation, this bias is
understandable. It won’t last forever, though; and if there is a Web 3.0 then
this technologist supremacy will have to yield ground to the textual.
Also in need of reformulation will be Web 2.0’s pseudopolitics. These
platforms do not with any ease produce the “antielitist” and “democratic”
impulses vaunted by some of their supporters. Democracy presupposes
education (this is why children are disenfranchised), but Web 2.0 offers its
privileges equally to the unschooled, the fanatical, and the superstitious;
in fact, it’s closer to populism, that gray area between democracy and
fascism. Its new gatekeepers—the ubiquitous “moderators,” Wikipedia’s
“administrators”—are as powerful as any other, but less transparent and
accountable than many; organizationally, Web 2.0 is essentially neo-elitist,
which is, indeed, part of its very interest.
The chat room, though perhaps less popular or less fashionable today than
several years ago (it’s been sidelined by newer Web 2.0 applications), is
a distinctive digimodernist form. Go on to one that’s in full spate and you
see a scrolling page with phrases, remarks, questions, rejoinders, greetings
and partings, complaints and consolations, invitations and exclamations,
all rolling torrentially by. Leave it for fifteen minutes and a daunting jungle
of text will spring up; what grew before you logged on is imponderable.
Visually, this never-ending, forever-turning stream of communication may
resemble the flowing of a minor sea, but its tide never goes out: (discreetly)
compatible with many people’s working habits and extending over territo-
ries and therefore time zones, the sun never sets on the chat room and the
moon cannot reverse its inexorable onwardness. It’s an endless communi-
cative narrative, into which you shyly emerge.
This endlessness may manifest itself by a feeling of futility, a sense that
people are throwing down comments merely in order to fight off their own
boredom or loneliness, and that the “conversation” will never get anywhere
or produce anything. Chat rooms provide unstoppable movement, but not
progression; a discourse with such a stupefyingly high level of evanescence
(even participants will struggle to recall their previous interventions) will
never be able to develop consecutively toward any sophisticated communi-
cative conclusion. Of all the Internet’s digimodernist forms, the chat room
seems the most open: you register, log on, and write your material,
contributing to a discursive forum. It is, of course, moderated and patrolled for
unacceptable behavior, but if such is your objective you can hive off with
a like-minded fellow textual contributor to a private cyberspace of your
own: the broad, open chat room is thereby narrowed to a small, closed chat
room, but its structure remains intact. The discourse of the chat room
is whatever you make it: unlike with blogs or message boards there is no
privileged intervenant but an apparent equality permits, potentially, an
extraordinary expressive freedom. (And power, as your greeting is answered
Web site. The article lists books sorted into eleven categories ranging from
“Classics” and “Poetry” to “Sci-Fi” and “Lives.” Printed out three months
later the original article runs to eleven pages, each book receiving a cursory
summary, for example: “Flaubert’s finely crafted novel tells the story of
Emma, a bored provincial wife who comforts herself with shopping and
affairs. It doesn’t end well.”7 The comments on the message board beneath
the article run in their turn over 52 pages, or 4 times the extent of their
prompt; there are perhaps 500 separate posts. Quantitatively message
boards swamp their original. Of these 500 or so, about 475 were posted
within 10 days of uploading, the final 25 were spread over 2 months, and
the last was dated 3 weeks before I printed. A message board functions in
time like this: an initial tidal wave followed by a gradual slowing down and
then a sudden drying up; its textual onwardness is contained within this
cycle, and directed obscurely by an anonymous or pseudonymous modera-
tor who also applies rules about what cannot be said. Despite this, the tone
of almost all the posts is the same: they are dominated by criticism, carp-
ing, condemnation, contradiction, complaint, and what the moderator
evidently felt were acceptable kinds of abuse: either nonspecific, or aimed
at groups other than minorities.
Some of the interest in looking at what people actually say on message
boards is to counter the relentless propaganda promoted by Web lovers,
according to which they might be a “forum” for “communication” among
“communities” on a “global” scale. All of these qualities are present here
technologically and functionally; however, in terms of textual content they
are overwhelmed by their polar opposites, by parochialism, provincialism,
isolation, bigotry, rage, prejudice, simple-mindedness, and anonymity.
What message boards do is, toxically, distribute these human failings to
everyone across the planet in no time at all. This is the picture that emerges
from reading them all: one individual locked in a tiny room sitting at
a computer screen typing out their irritation, projecting their bile into the
atmosphere; and fifteen miles away a stranger doing the same; and five
hundred miles away another, and so on, around the world. All of these
streams of rancor and loathing then coalesce in the sky into a thin cloud of
black and shallow dislike, and fall gently but dishearteningly to earth. None
of the projectors is aware of any other: they spew in a void, and the con-
tents of their irked guts are displayed potentially to everybody forever.
I’d argue that this tends to be the pattern of Internet forums in general, but
the one I’ve chosen to highlight is a particularly vivid example.
The cause here of this venom is the list: almost every post refers to it (not
to the other posts). Although its title may suggest it’s setting itself up as an
encyclopedia for the human species, the key is found in the subcategory
“Books that changed your world.” You, the implied reader, were influenced
by The Hitchhiker’s Guide to the Galaxy, Zen and the Art of Motorcycle
Maintenance, The Beauty Myth, Delia Smith’s How to Cook, A Year in
Provence, Eats Shoots and Leaves, and Schott’s Original Miscellany. Self-
evidently this is a list compiled with a close eye on its market, on the people
who will pay to read it, the newspaper’s known habitual purchasers.
Market research will have guided the writers to select books aimed at
Britons, usually middle-aged and older, certainly middle-class and “higher,”
with right-wing and traditionalist views: elsewhere in the list come Swal-
lows and Amazons, Churchill’s A History of the English-Speaking Peoples,
and the Diaries of the extreme right-wing British politician Alan Clark.
It also contains a large number of books that such a demographic will
certainly already have read, like Jane Eyre and Rebecca: it’s in the business
of comforting its readers more than of dislocating them.
None of the posts bears in mind the identity or probable goals of the
article’s authors. Many of them respond as though it had been penned by
some transcendent but deeply stupid entity, others as though it were effec-
tively the work of the entire British nation. They do not consider the ori-
gins of its biases, nor do they place it in its media context as essentially a
worthless, paper-filling exercise by staff writers lacking the funds necessary
to send somebody out to find some actual news. It’s absurdly limited in its
range, but then it is aimed at, in planetary terms, a tiny and limited group
of people; it’s not a missive from God to the human race; it’s a set of cozy
recommendations for a group of people with fairly well-known tastes,
which at worst will confirm them in their literary habits and at best will
nudge them toward a good book they don’t yet know.
The response of the posters, however, tends to be that the list is “simply
ridiculous, woefully inadequate,” “twaddle and hype,” “incredibly weak and
. . . pathetic,” “so obviously predictable and prejudiced,” “appallingly orga-
nized,” and “a load of crock.” Almost all of the posts foreground the titles of
books whose omission the posters find scandalous. A recurring feature is
Italian posters abusing what they see as British arrogance and extolling
missing Italian glories:
It’s astonishing that the largest part of the literature in the list comes
from places that, when China and Mediterranean cultures invented
literature, were still in the stoneage. It’s a petty provincial list
Sono italiana e trovo piuttosto irritante che quasi tutti i libri da voi
citati appartengano alla letteratura inglese . . . insomma, manzoni?
leopardi? verga? [I am Italian and I find it rather irritating that almost
all the books you mention belong to English literature . . . what about
Manzoni? Leopardi? Verga?]
Hey! there’s life over the earth beyond UK!!!! is not possible to
describe this library list. Is always the same thing. You people are the
best and only you right?? Puaj
Nobody makes a case for their book or author. There are, as ever on mes-
sage boards, a few contributions from trolls designed to annoy posters and
railroad their discussion by, for instance, suggesting Playboy magazine;
equally, there are the usual pontificating and faintly mad speeches packed
with long words and complex sentences and devoid of rational points, like
the one posted by Fred Marshall on April 11, 2008. And post after post goes
by without anyone acknowledging another.
The overriding impression is that almost every post, whether it responds to
the original article or to other posts, is driven above all by the urge to
disagree flatly, to contradict reflexively and bad-temperedly. Uploading
the article just seems to have acted as a kind of lightning rod for interna-
tional contempt, egocentricity, and ignorance. Reading through page after
page of it is dispiriting indeed, because it reveals a systemic failure of com-
munication: in theory, contributing to an Internet forum leads you into a
place of worldwide and instantaneous concert, of debate, where thought is
shared and interrogated among equals; in practice it resembles the irked
and near-simultaneous squawking of an infinite number of very lonely
geese. Recommending a book should be an act of generosity; these posts
sound petulant, hardly able to contain their fury. There is no progression
here, no development, no recognition of the rest of the world: “Where is
Cervantes’ ‘Don Quixote’?” “I must say that Miguel de Cervate’s ‘Don
Quijote de la Mancha’ must be in the list,” “No Don Quixote?” It sounds
like the barking of a petty and frustrated megalomaniac.
At some stage, exhausted by wading through an unending stream
of “[w]hat about ‘The Jungle’ by Upton Sinclair?” and “[n]o Dune?” and
“[t]his list stinks of intolerance and racism,” you forget completely what
originally triggered all of this: a meaningless and pointless itemization of
very good and not very good books for genteel fifty-year-olds hankering
for the return of the death penalty. You are instead lost in the void of
the message board. The emotional tone and the nature of this debate are,
I contend, much the same everywhere on message boards: the internation-
alized insularity, the rage, the preaching, the illiteracy, the abuse, the
simple-minded plethora of non sequiturs, the flow of vapid contradiction,
the impossibility of intellectual progress or even of engagement. Some
boards, like those on the London Guardian newspaper’s Comment is Free
site, are of a much higher caliber than this, but usually no more fruitful or
enlightening; and though some posts are worse than others the good ones
are never enough to drive out the bad.
The one thing you never get on message boards is people saying that
they stimulate communication and community on a global scale (they say
it in books or on TV). Instead you get this, posted by “open minded french
guy” on June 8, 2008: “I am fed up with this Americano-english proudness
which think themselves as the center of the world.” Ten minutes later, he
returned to add: “I am fed up with this egocentric of this so proudness of
American-English position; the simple idea of thinking ‘a perfect library’
you should first watch a perfect globe.” The fact that he felt his “improved”
insight warranted republication tells you everything.
Blogs (Onwardness)
Of all the forms of Web 2.0, blogs might be the easiest to explain to some
cultivated time traveler from the late eighteenth century who was already
familiar with both diaries and ships’ logs. The former he would know as
a journal intime or intimate daily record, kept by young ladies or great
men; the latter he would understand as a regular, systematic public account
of external activity and events. He would probably then see blogs, fairly
accurately, as a conflation of the two. Books about blogging rightly empha-
size their diversity of type, purpose, readership, and content, but our time
traveler might note that in his era already diaries varied in similar ways,
encompassing little narratives about the quiddities of the daily routine,
small essays of personal opinion, and insights into the hidden operations of
power. What then is new about the blog? Its hyperlinks, obviously, by which
one blog can link to a thousand other sites, but what is textually new?
For Jonathan Yang, the author of a guide to the subject, “[a] blog, or
weblog, is a special kind of website. The main page of each blog consist [sic]
of entries, or posts, arranged in a reverse chronological order—that is, with
the most recent post at the top.”8 The primacy given by blogs to the latest
entry marks a first break with their textual inheritance. The entries in a
diary or log progress from left to right through a book, such that internal
time flows in the same direction as reading about it; but as the eye descends
the screen of the blog it goes back in textual time. Why is this? The reason
lies in the digimodernist onwardness of the blog: it’s a text under develop-
ment, one currently being constructed, being built up, a text emerging,
growing. So is a diary or log, but they can also be read—and are read, by
those who enjoy the diaries of public figures from Alan Clark to Samuel
Pepys, of writers like Woolf or Kafka, or of someone like Anne Frank—as a
finished, enclosed totality. The diary that is not being added to is complete;
the blog in that state is textually dead. Another guide recommends that:
“Posting at least one entry each weekday is a good benchmark for attracting
and holding a readership.”9 Still another warns: “There’s no such thing as
Wikipedia (Competence)
Producers of text for chat rooms soon evolved a new kind of typed English,
one favoring phonetic substitutes for real words (“how r u”) and acronyms
(“lol”), and discarding punctuation (“im ok”). This script was adopted and
extended by early senders of text messages, whose cell phones could only
hold a very limited number of characters. But since a chat room contribu-
tion could be, within reason, as long as you liked, and there was no physical
discomfort linked to typing or obvious advantage to speed, this simplified
script had no ostensible purpose. Subconsciously, I suspect, the aim was
to construct chat room text in a specific way: as uninflected by issues of
linguistic competence or incompetence. By reducing the number of spelled
words and by eradicating punctuation, there was less and less a contributor
could get linguistically wrong; by forcing all contributions into a simplified
and clearly artificial and new mould, chat room text rendered all semantic
and syntactical rules redundant—it outflanked them, made the issue obso-
lete. For its detractors, this script was the latest stage in the spread of socially
valorized illiteracy; for its zealots, it liberated text, finally, from its old
elitism, its legalism and dogma, its tendency to exclude and oppress.
On one hand, the emergence of this new script is another sign of
the novelty of digimodernist textuality. In all the changes to text since the
Enlightenment it had not been felt necessary to reinvent the English lan-
guage. On the other hand, the desire for a form of text stripped or freed of
questions of linguistic competence—where nobody is punished for “errors,”
nobody rewarded for “correctness”—had a broadly postmodern origin. If
you ceaselessly call for Steven Connor’s “more voices,” as postmodernism
does, you eventually run up against the literary shortcomings of a stubborn
part of the population. Evidence of this overarching context came with the
appearance of Wikipedia, not a text emptied of linguistic (in)competence
but an encyclopedia stripped or freed of issues of objective intellectual
(in)competence. Until this point any contributor to an encyclopedia had
been compelled to offer some proof of objective qualifications: s/he would
have to have passed certain exams and gained certain diplomas, to have
published relevant texts of a certain importance or been appointed to cer-
tain posts. The right to contribute had then to be earned through demon-
strable achievement, and subsequently to be conferred by others also
applying objective criteria (this is true whatever contingent corruption
may have infected the process). Wikipedia simply swept all of this away.
By definition, its criteria for contributors were that they have access to the
Internet and apparent information on the subject in question; to write for
Wikipedia you had to be able to write for Wikipedia, and the only person
capable of assessing this ability, in principle, was yourself. Nobody would
be disbarred from contributing on objective grounds. The encyclopedia
had, overnight, been wrenched away from the specialists, from the profes-
sors, and given to their students to write.
Some humanities professors have had the gall to attack Wikipedia: after
a lifetime spent teaching that objectivity doesn’t exist, that “knowledge”
and “truth” are mere social constructs, fictions, they actually had the nerve
to describe this particular construct as illegitimate. On the contrary, it was
easy for its enthusiasts to depict Wikipedia as the glorious fulfillment
of Michel Foucault’s final fantasy: the release of knowledge from its incar-
ceration in power structures, its liberation from systems of dominance,
oppression, exclusion. Condemnation by the professors only confirmed
the veracity of Foucault’s critique and, by extension, the emancipatory
justice of the Wikipedian project. Wikipedia is, in short, a digimodernist
form powered by postmodernist engines; it’s the clearest instance of the
submerged presence of postmodernism within contemporary culture.
For this reason, among others, Wikipedia’s natural home is the English-
speaking world, where post-structuralism found its most uncritical and
energetic audience: its article on itself states that a comfortable majority of
its “cumulative traffic” (55 percent) is in English.13 Within this geography,
there is something stereotypically “American” about Wikipedia’s integra-
tion of a sort of naivety or credulity. There is certainly something ill-advised
about the method by which it accrues what it presents as “truth”: if you
wanted to know the capital of Swaziland or Hooke’s Law you wouldn’t
stop someone on the street, ask them, and implicitly believe their answer;
you wouldn’t even approach a group of students in a bar and subsequently
swear to the factuality of whatever it was they happened to tell you. The
appropriate word here may, though, be not so much credulity as idealism:
the belief that the mass of people will somehow conspire just by communi-
cating with each other to throw up truth is akin to the invisible hand theory
of economics (by which everyone mysteriously and inadvertently produces
prosperity) and the Marxist theory of history (by which the majority of the
population inevitably somehow create a free and just society). The parents
and grandparents of Wikipedia’s writers (or “editors”) possibly marched
against nuclear weapons or protested the Vietnam War; Wikipedia is one
of the most striking expressions of political radicalism and idealism in our
time, though it is also typical of our consumerist age that its domain isn’t
truly a political one. In fact, the grand illusion of believers in Wikipedia is
that they are doing politics when they ought to be doing knowledge.
This is not vandalism: the writer15 seems sincerely to picture him or herself
contributing pertinent and enlightening information about Pynchon’s
novel (it’s not like introducing typos into the article on dyslexia). To my
mind, both paragraphs strive desperately to connect a novel the contri-
butor has read to another one s/he knows: though there’s an element of
detective fiction about The Crying of Lot 49, almost infinite are the stories
that begin with a mysterious event, while three words referring to a novel
known at the time to almost every American adult and written by someone
Pynchon probably didn’t meet do not warrant a third of a page of commen-
tary. Had I been the author’s professor, I would not have corrected this;
I would simply have graded it, and badly, of course, because it isn’t wrong
so much as not good. It cries out for more education, wider reading, and a
better understanding of how literary criticism works. How do I know this
(or think I do)? Is it because I have objectively demonstrated competence
in twentieth-century literature in English (qualifications, etc.)? Not exactly,
since in order to recognize the poor quality of this critique you need nei-
ther a diploma nor a specialty; you just need competence. But competence
is an objective quality: it doesn’t emanate spontaneously from people; it has
to be socially acquired somehow, and capable of display. No doubt whoever
wrote these paragraphs thinks them competent; I don’t know how, within
Wikipedia’s mechanisms and ethos, you would show him or her that they
aren’t. That ethos holds that they will eventually be improved, mystically
raised up to a higher level of quality. But who decides what that high qual-
ity consists of, if it doesn’t consist of this? And how do you decide who
decides?
Moreover, the problem of competence is not restricted to one of medi-
ocrity. Wikipedia’s article on Henry James, for instance, of which an extract
follows, is as good as you could reasonably expect any encyclopedia entry
to be. And yet, how do you know it’s good? Who can say?
While this is superior to the Pynchon in every respect, the issue isn’t, as
I hope I’ve made clear, “how good” Wikipedia is, but what you can do with
any entry when the objective category of intellectual competence has been
abandoned. By the time you come to read this page, the online original
may have been swept away and replaced by something of inferior quality,
perhaps by the person who thinks Jane Austen an “influence” on Martin
Amis.17 The onwardness of Wikipedia is disguised: you call up an article
and it “looks” finished, though clicking on “history” may lead to five or five
hundred previous versions, saved forever, and evidence that the article is in
constant imperceptible evolution. Go back a month later and it may be
twice as long. Without this onwardness, Wikipedia could not exist: it’s the
textual expression of the open-source wiki software platform. Yet, though
integral to a diary (or blog) or conversation (or chat room), onwardness
moves much more slowly in the realm of knowledge: our understanding
of James is constantly shifting, but not so visibly as to require his encyclo-
pedia entry to be updated every week. A print encyclopedia wouldn’t need
to revise an entry like this one for ten years, but this negates the meaning
and purpose of open-source software. The entry on postmodernism
currently states: “This article or section is in need of attention from an
expert on the subject” (and it really is),18 but there is no motivation for one
to respond: becoming a specialist is a long, arduous, and costly process,
producing a high-quality summary of such a difficult subject is a time-
consuming and tiring act, and contributions to Wikipedia are unpaid,
anonymous, and capable of being wiped out in seconds. Writing for
Encyclopedia Britannica has (I imagine) only the first of those drawbacks.
In fact, professional recognition and respect for your intellectual product
are the wages that society pays for the hard and endless task of becoming
competent, of becoming an expert. Wikipedia wants the latter without
offering the former; in short, it wants to steal your competence.
Proselytizers for Wikipedia trumpet evidence of the accuracy of certain
articles to show that the project is reliable.19 This is an abuse of language: a
broken clock is accurate twice a day, but you wouldn’t “rely” on it to tell you
the time. (Accuracy refers to truth, reliability to its expectation; Wikipedia
often provides the one but can’t furnish the other.) And yet an encyclo-
pedia that can’t be relied on is by definition a failure. Instead, I use, and
recommend, Wikipedia as a pre-encyclopedia, a new kind of text and a god-
send in itself: one that satisfies idle curiosity by providing answers I won’t
have to stake my stick on, and one that eases me into a piece of research
by indicating things that I will verify later elsewhere. The watchword is
unrepeatability: never to quote what you read on Wikipedia as knowledge
without substantiation from a third party. In this context, and with this
proviso, Wikipedia’s digital mode, its hyperlinks, speed, and immensity of
scale, richly compensates for its ineluctable unreliability. Stripped of its
superannuated postmodernist trappings, Wikipedia can finally be seen,
and appreciated, for what it really is.
YouTube (Haphazardness)
Some may feel that I have wrongly evaluated, erring on the side of overgen-
erosity as much as underappreciation, perhaps three of the forms discussed
so far in this chapter. On one hand, blogs have been denounced by Tom
Wolfe as “narcissistic shrieks and baseless ‘information,’”20 and by Janet
Street-Porter as “the musings of the socially inept.”21 On the other hand,
a message board created by the London Guardian for one of its pieces
recently ran a post arguing that “this is yet another article where the major-
ity of the posters appear to take a more nuanced view than the writer.”22
Facebook (Electronic)
The secret of Facebook, and I imagine those social networking sites (Bebo,
MySpace) with which I’m not familiar, is its close mimicry of friendship.
Opening an account is like meeting somebody socially for the first time,
finding you get on well and chatting away, though with you doing all the
actual talking. You tell them (you tell Facebook) your name, age, where
you work and live, where you went to college and school, you allude to
your political and religious views; opening up, you discuss, or rather mono-
logue, about your favorite movies and music, TV programs and books.
that it’s been embraced mostly by those for whom everything is new, the
young. As a result, Web 2.0 is inflected by the proclivities and hallmarks
of youth: a mode of social networking that fetishizes the kind (tight peer
friendships) favored by the young; an encyclopedia written by students and
the semiqualified; a database of videos loved by the young or made by and
starring them. Web 2.0 is, like rock music in the 1960s and 70s, driven by
youth’s energy, and just as prey to hype and idealism.
The near-invisibility of the electronic and textual status of Facebook is
linked to this. Web 2.0 seems textually underanalyzed and socially over-
celebrated or overdenigrated because it comes, for now, incandescent with
its own novelty. But there is more going on here than that. As a modifica-
tion of an existing digital mode, the Web page, not of a predigital form like
the diary or encyclopedia, Facebook suggests that the drift of information
technology is now toward the phenomenological elimination of the sense
of the electronic interface, of the text. Increasingly, perhaps, people will feel
that the gulf separating their “real” and their “textual” lives has disappeared;
the thoughts, moods, and impulses of our everyday existence will translate
so immediately into the electronic, textual digimodernist realm that we
will no longer be conscious of transference. It won’t be a question then
of oscillating between offline and online, but of hovering permanently
between those extremes. This conceivable development, which Facebook
foreshadows, would culminate in the emergence of a new kind of human,
one constituted in large part not by the “other” forms of being beloved of
science fiction (robots, etc.), but by digimodernist textuality itself. In this
dispensation, you are the text; the text is superseded.
5
Digimodernist Aesthetics
A metaphor for this chapter (in the unlikely event one is wanted) might be
a bridge linking those before and after it by a discussion of four digimod-
ernist textual themes common to Web 2.0 and to older forms. The ancient
distinction between creative and critical writing, hopelessly assaulted by
postmodernists and post-structuralists from Barthes to Baudrillard, is
indifferently obliterated by digimodernism: computer technology restruc-
tures the “text” however it positions itself in relation to the “world.” Conse-
quently this chapter can join, say, movies with blogs about them in a shared
historical tendency.
Seeking to summarize a few of the more salient changes associated with
digimodernism, this chapter could easily have been as long as the book it
appears in. The discussion may as a result sometimes seem abbreviated
but, like a real bridge, its full meaning only emerges in the territories it lies
between. It’s a Janus-faced chapter in a second sense, too. On one hand, the
evocations of the death of popular culture, the eclipse of the fictive “real,”
and the superannuation of irony paint a portrait of the disintegrating
embers of postmodernism. However, another story, describing the emer-
gence of a new aesthetics, can also be glimpsed: in the prevalence of the
traits of the children’s story, the hegemony of the apparently real, the spread
On the way to work one morning you pass a cinema screening the latest
Hollywood blockbuster and turn in for a coffee at a bar where you hear a
recent number one song playing and see friends discussing last night’s
prime-time network comedy show. How do you feel about these texts?
Once upon a time, under the influence perhaps of Theodor Adorno’s
critique of the “culture industry,” you might have frowned in contempt:
these are mere factory-made products churned out in standardized form
with the economic intention of appropriating the wages of the exploited
masses and the political aim of ensuring the continued obedience of the
masses to the capitalist status quo. The film, song, and program are artisti-
cally worthless, you would have felt, pseudoart designed to induce a state
of docile passivity in their consumer. As in ancient Rome, bread and cir-
cuses are purveyed to the oppressed people, mindless amusements and
distractions designed to entrench their subservience. These texts do not
give consumers “what they want”; instead, the downtrodden are manipu-
lated into imagining they desire what it is politically expedient to give
them.2
At another time, you might have reacted much more positively: with an
ironic half-smile, perhaps. Instead of the “culture industry,” you may have
thought in terms of “popular culture.” You might have been influenced by
Jameson’s work on Hollywood in Signatures of the Visible, or by the famous
critiques by Umberto Eco of Casablanca and by Baudrillard of Disneyland,
or by the work of Bowie, Matt Groening, or the Coen brothers; you might
have had a taste for films like Diva or the music of Philip Glass, which fuse
“high” and “popular” cultural traits. Such texts can be sites of resistance to
and subversion of hegemonic forces. In any event, they are central to a cul-
ture defined as a media-saturated hyperreality, where electronic represen-
tations precede experience and determine perception. Neither negligible in
themselves nor simply a means to a political end, they are the beating heart
of our contemporary text-drenched reality-system.3
These two responses correlate roughly to a modernist and a postmod-
ernist view; digimodernism might throw up a third reaction, one directed
by the actual content of today’s popular film, TV, and music. This would be
sympathetic to Adorno’s charges of worthlessness, standardization, and
The list is not exhaustive, but it highlights the continuities between the
recognizable literary category of the children’s story and the dominant
form of contemporary American popular cinema. This argument may
seem old hat: since 1977 and the influential success of Star Wars, voices
have frequently been raised denouncing George Lucas and also Steven
Spielberg for “infantilizing” a Hollywood that subsequently turned its back
on complex and troubling social critique in favor of flashy simplicities for
the kiddies. But over the past ten years this development has taken on a
new character, one that renders such an indictment out of date. It’s clear,
already, that the traits I listed have become the default setting in terms of
content of all American popular cinema; they have spread right across the
board; perhaps eighty of the ninety highest-grossing movies referred to
above are mired in them. There’s a recurring tendency, for instance, to
fantasy, or to innocently juvenile sources of humor, or to pseudomythical
mumbo-jumbo; there’s a parallel erasure of adult experiences or actors
aged over thirty-five, and a marginalization of genres (war, musicals,
drama) that adults like—tellingly, the “woman’s picture” has given way to
the “chick flick.” In place of adaptations of Broadway plays or contempo-
rary literary novels, we get films made from comic books (Blade, etc.) or
As for the new redefinition of popular music as songs for children, here
is a list of artists, almost all of them purveyors of the kind of anodyne,
industrialized pap Adorno would have recognized: Backstreet Boys,
B*witched, Blue, Boyzone, Busted, Girls Aloud, Hear’Say, McFly, Kylie
Minogue, N-Sync, New Kids on the Block, S Club 7, Britney Spears, Spice
Girls, Steps, Take That, Westlife (and so on, and so on). The relationship of
songs for children to rock and pop has long been an awkward one. Prior to
Bob Dylan’s embrace in 1965 of electric music, pop was uncomplicatedly a
form for people too young to vote, and disparaged by almost everyone else.
In the Beatles’ movie A Hard Day’s Night, filmed in the spring of 1964, they
are shown playing exclusively to audiences aged sixteen or under; at one
point Ringo, wandering by a canal, strikes up a friendship of equals with an
eight-year-old. This reflected the contemporary cultural understanding of
pop, even if the Fabs’ songs at this time reveal an intriguing tension between
a monosyllabic and sexless childishness in their lyrics, and a sense in their
music of a grace and creative potential held in check. This applied also
to Phil Spector’s early “symphonies for the kids,” which married almost
Wagnerian musical ambitions with the lyrical experiences of (junior) high-
school students. Dylan was to inject politics, social criticism, drugs, poetry,
and late modernism into popular song, while the Rolling Stones brought
sex; suddenly pop’s demographic was caught up in a rush to artistic and
personal “maturity.” When in 1967 Scott McKenzie invoked “a whole gen-
eration/With a new explanation,” it had a clear upper age limit but, pro-
vided the explanation was accepted, not a lower one; “young people” could
then be seen as opposed en bloc to the squares, warmongers, and reaction-
aries in an idealistic and lifestyle-driven unity.
In the early 1970s a bifurcation occurred: younger listeners embraced
the Cassidy Family, the Osmonds, Slade, or the Bay City Rollers, dismissed
as inferior pop junk by serious-minded “art rock” aficionados who extolled
the contrasting merits of Led Zeppelin, Genesis, Pink Floyd, and the like.
Between the two floated David Bowie, a figure several years ahead of his
time, whose true importance became apparent when, in the aftermath of
punk’s quest to revivify popular music as a mode of youthful self-expression,
a search began in Britain for a form of pop that would be both genuinely
widely appreciated and socially and politically radical. At various times
it briefly seemed that Adam and the Ants, Scritti Politti, Aztec Camera,
Culture Club, or Frankie Goes to Hollywood might play such a role; it was
the era of Ian Penman’s “War on Pop” article for the NME and of Simon
Frith’s book Art into Pop, both of which aimed to describe and to bring
about the kind of pop that was supposedly needed: one that was fun, imme-
diate, sexy, cool, but also intelligent, literate, politicized, and socially pro-
gressive. In its marriage of the market and disruption, this music would
take its place within the postmodern cultural-dominant. Pop’s moribun-
dity after Live Aid put such hopes on hold, though they flickered into life
again when the Stone Roses and Happy Mondays appeared on Top of the
Pops in 1990, and were resuscitated anew by the cultural pretensions, fame,
and competing aesthetics of Suede, Blur, and Oasis. If their death can be
given a date it might be January 20, 1997, when Blur released “Beetlebum,”
a single that declared their abandonment of mass market pop as a vehicle
for artistic expression. This notion had long separated British music
from American; if the United States could produce such artists it couldn’t
make them popular (though Madonna came closest), and its mainstream
remained dominated from the 1970s on by an ideology of authenticity and
musical tradition incommensurate with the throwaway, commercialized
smartness and ironic experimentation of post-Bowie pop.
In short, children’s song runs throughout the history of rock/pop as the
brutally marginalized antibody of the “real thing” or as the potential for a
postmodern reconciliation of art and commerce. Whether despised or
expropriated it remained indestructible, though, and as Britpop faded it
reemerged in a musical landscape now cleared of postmodern theory. The
Spice Girls’ first album Spice (1996) is as good an example as any of the
type. Its opening track “Wannabe” begins with a desperate clamoring for
attention followed by a bathetic failure to say anything of note that will
be familiar to anyone who has spent time with a five-year-old; its message,
that a prospective lover must fall in with the female’s friends, reflects a
prepubescent valuation of same-sex friendship (which “never ends”). The
track “Mama” is cloyingly infantile, while “2 Become 1” presents a vision of
sexual love so sublimated it can pass as an account of intense emotional
closeness as much as of carnal mingling. When the Spice Girls extolled
“girl power” it was not generally understood that the first word of the
slogan was to be taken in its primary sense, a confusion it shared with the
expression “boy band.”
In 2000 the coveted Christmas number one spot in Britain was fought
over by “Can We Fix It?”, the theme tune to a TV cartoon for preschoolers,
and Westlife’s “What Makes a Man,” an antiseptic “love” song for ten-year-
olds. If the former is better, relatively speaking, it’s partly because the latter
is sunk in denial of its infantile status: the singers emote like Sinatra on a
record bought solely from the salaries of people who would never choose
to play it. This form of denial is endemic to the genre: one of the Spice Girls
allegedly complained that it was tough during gigs, as they threw them-
selves about the stage doing their exhausting dance routines, to look out at
rows of sleeping children. Evidently she did not draw from this an accurate
assessment of their work: children’s song constantly seeks the kudos of
“real” pop’s past, of the Rat Pack, the Beatles, and Bowie. Consequently
there always seems to be a singer trying to pull off the professional matura-
tion the Beatles negotiated in 1965–66 from children’s entertainers to fully
formed stars: one moment a Britney Spears is in her school uniform and
dancing past her teacher, the next she is desperately trying to invent sex
in music.
In the 2000s, truly popular songs, ones that please (or even interest) a
reasonably wide cross-section of the public, have become rare. The default
setting of what calls itself pop is dedicated instead to selling a peculiarly
idealized version of young adult sexuality to girls not yet wearing a bra.5
It reflects the pedophilic nature of contemporary consumer culture, which
perpetually desires—in fashion, movies, TV, adverts, the Internet, songs—
to sexualize children. This is the version of popular music to which this
decade’s TV talent shows are in thrall, such as The X Factor and Pop Idol and
American Idol. The qualification for judges such as Simon Cowell, Simon
Fuller, and Louis Walsh is to have been successful with past children’s
song acts (respectively, Five, the Spice Girls, and Boyzone). Unconsciously
recuperating the 1970s’ contempt of rock aficionados for “manufactured
groups,” these shows, in their pure digimodernism, enable their audience
to manufacture its own stars. Given the choice, it’s children’s entertainers
they prefer to fabricate.
On TV, the multiplication since the 1980s of satellite, cable, and digital
channels has denationalized the medium: rather than make programs
ostensibly aimed at all age groups, classes, and tastes, channels have, on one
hand, chosen to provide only one form of content (music videos, films,
documentaries, etc.) or, on the other, to target only one kind of viewer.
The latter has led to a profusion of children’s and “youth” channels, from
CBeebies and Nickelodeon for the smaller ones to (in Britain) BBC3,
Sky1, ITV2, Channel 4, E4, and Virgin 1. The essential point here is that
there is little or no equivalent targeting of any other age group: the few chan-
nels with a remit for the “older” viewer fulfill it by rerunning ancient shows
and movies, so that contemporary program-making is understood over-
This is placed under the rubric of “popular culture.” While Adorno would
bridle at the latter term, the former seems especially unjustified to me: in
reality they’re just a subsection of that sliver of electronic textuality beloved
of sixteen-year-olds.
Why has this happened? First, it’s important to be clear about the dispa-
rate forms of this shift. Popular film has embraced children’s stories reshot
for young adults; popular music has become the semisublimated packag-
ing of adult sexuality for young children; popular TV has increasingly
chased after the 13–18 age group either through content about that demo-
graphic or by dragging material whose focus lies elsewhere. Videogames
and Web 2.0 show a similar bias, though they are too recent to have under-
gone a historical transformation. Some of the reasons, alluded to already,
are specific to the evolution of each medium. You can’t ignore more general
social changes either, from the greater readiness of parents to select family
outings with their smallest children in mind to the heightened indepen-
dence of young teenagers. More broadly still, it can be argued that society
has been infantilized, particularly through a consumerism that fetishizes
spending and sees work as an irrelevant burden: the sports team or the
shopping mania of some men and women, both consumerist, can be linked
characteristics are not set in stone, nor is the distinction between child
and adult texts a black and white one. Usborne Books publish versions for
six-year-olds of Jason and the Golden Fleece, the Arthurian sagas, Robinson
Crusoe, and Gulliver’s Travels, all written before the category of “children’s
fiction” was invented and all considered great “adult” literature. It is, I think,
to texts such as these that Rowling and Pullman look.
Rowling’s first novels in the series are postmodernist pastiches of
previously existing children’s narratives, scraps stitched together under
her new generic hybrid: the semirealist English boarding school fiction
(Tom Brown’s Schooldays, Enid Blyton’s Malory Towers and St. Clare’s, etc.)
crossed with the fantasy tradition (wizards, dragons, unicorns, magic potions, etc.).
The characters spring from two sources: Anthony Buckeridge’s 1950s–60s’
Jennings novels about an adventurous prep schoolboy and his deferential
best friend; and Blyton’s Famous Five books, where roaming kids investi-
gate shady dealings—Hermione is a composite of George (independent,
skilful, robust) and Anne (diligent, feminine, anxious) for a postfeminist
age. It is, indeed, always the late 1950s/early 1960s here: the children travel
by steam train and Ford Anglia, sit silent and fearful in classroom rows,
and receive letters from home. But it is also the past of all children’s stories:
when, at the end of Harry Potter and the Sorcerer’s Stone (1997), the gigantic dog is lulled to sleep with music so that the eponymous treasure can be reached, Rowling winks at Jack at the top of his beanstalk, who plays the harp to knock out the giant and steal his goose that lays golden eggs. Works
of late postmodernism like Chicken Run or Shrek, the Sorcerer’s Stone and
the Chamber of Secrets read like parties held by a clever and witty hostess
where previous texts can frolic, Billy Bunter with Jason and the Argonauts
(who also do the creature/music/treasure trick) and Swallows and Amazons
with Ali Baba (who also unlocked doors with Latinate gibberish).
The later, longer novels attempt, unsuccessfully I think, a move away
from pastiche and irony to a self-sustaining mythological world that can
be seen as tentatively digimodernist. The backward-looking familiarity
of the shorter, earlier works, which made them amusing and exciting but
also immediately nostalgic, ebbs away; the stories also become engulfed in
their other digimodernist innovation, the seven-book series (see the final
section of this chapter), again, I feel, ultimately unsatisfactorily. Having
been so much of their time, it’ll be interesting to see how they survive (I’m
not an optimist).
Pullman’s His Dark Materials, however, breaks with the postmodernist
burden. While there is no doubt that Pullman anchors his trilogy in the
traits of children’s stories I sketched above, he refurbishes them by justify-
ing their use through adult science: the many-worlds interpretation of
at that. The photos are fictions, or, rather, they are fictive fictions, invented
fragments of what would be, if they existed, inventions. The plates of the
real shift; “[t]here are so many levels of artifice” here as Sherman herself
says, and what is finally represented is the act itself of representing a woman,
or a woman’s historicized act of self-presentation, in an ontological hall
of mirrors redeemed by Sherman’s wit, her subtlety, and exhilarating
feminism.12
As a result, to believe in a reality “out there” becomes a form of paranoia,
the unwarranted ascription of meanings to a universe that cannot bear
their load. Oliver Stone’s film about the Kennedy assassination JFK (1991)
mixes historical footage with fictional material shot thirty years later to
propose a welter of conspiracy theories explaining what “really” happened
in November 1963. If the textual real is a mishmash of manufactured film
sources, all equal, the functioning of the “real world” is inevitably going to
wind up seeming overdetermined and paranoid. Pynchon’s The Crying of
Lot 49 (1965) follows Oedipa Maas’s quest, similar in some respects to that
of Stone’s Jim Garrison, to uncover the “truth” about what appear to be
secret activities cascading through American life. She finally arrives at four
possible conclusions: that there really is a conspiracy out there, or that she
is hallucinating one, or that a plot has been mounted against her involving
forgery, actors, and constant surveillance, or that she is imagining such a
plot.13 Pynchon doesn’t resolve these multiple and incompatible versions of
the “real.” Other postmodernist novels and films, like The Magus, Money,
The Truman Show, and The Matrix, would also dramatize fabricated reali-
ties involving professional actors and round-the-clock surveillance, and
yielding similar interpretive options.
The aesthetic of the apparently real seems to present no such predi-
cament. It proffers what seems to be real . . . and that is all there is to it.
The apparently real comes without self-consciousness, without irony or
self-interrogation, and without signaling itself to the reader or viewer.
Consequently, for anyone used to the refinement of postmodernism, the
apparently real may seem intolerably “stupid”: since the ontology of such
texts seems to “go without saying,” more astute minds may think they cry
out for demystification, for a critique deconstructing their assumptions.
In fact, the apparently real is impervious to such responses. While it’s
true that a minimal acquaintance with textual practice will show up how
the material of the apparently real has been edited, manipulated, shaped
by unseen hands, somehow as an aesthetic it has already subsumed such an
awareness. Indeed, though paradoxically and problematically, it seems to
believe it has surmounted Sherman’s and Pynchon’s concerns, perhaps
actual assaults on people for the later amusement of viewers, are also fond
of this. The apparently real can in such cases become no better than a
guarantee of suffering.
More rewardingly, I can think of at least three masterpieces of the
apparently real. One of them, Daniel Myrick and Eduardo Sánchez’s film
The Blair Witch Project (1999), appeared so early—only weeks after The
Matrix—it was probably conceived by its makers as postmodernist horror
in the style of Scream: explicitly cine-literate and self-reflexive, it fore-
grounds its own (ostensible) making like a filmic Beaubourg and, with
interpretation of its main events radically undecidable, privileges instead
its acts of representation, its shooting. Shifts between color and black and
white constantly remind us that what we are seeing is a created text. Yet its
sense of the real is Janus-faced: made for an initial outlay of $22,000,
its marketing was orchestrated for free on the Internet by means of planted
speculation that the events it shows “really happened,” while an alleged
“documentary” on the events (also by Myrick and Sánchez) was screened
on the Sci-Fi channel. The film itself begins with the caption: “In October
of 1994, three student filmmakers disappeared in the woods near
Burkittsville, Maryland while shooting a documentary. A year later their
footage was found”—and supposedly pieced together by the directors—
so the film passes itself off throughout as real. In consequence it offers no
explanation for what happens to the students, though lots of suggestions,
and it stops rather than ending; when I first saw it just after release, its
famously devastating final shot was followed by darkness, silence, and the
lights of the theater coming up . . . that was where the tape had run out.
As with amateur YouTube clips, docusoaps, and reality TV, the apparent
reality of the footage is conveyed by its awkwardness in comparison to
Hollywood technique: blurred images, wonky framing, self-consciously
wooden “acting” early on (things get more raw in the woods), natural light-
ing, choppy editing, periods of total darkness, handheld camera shake, dis-
torted angles, underwritten “character,” inarticulate “dialogue,” and so on.
In happier times the students film a couple of staged scenes for their documentary, which become a benchmark of professionalized “fakery” against which their amateur “truth” seems even truer. Apparent reality is so textually embedded in The Blair Witch Project that it survives on to the DVD, where
a deleted scene is labeled “newly discovered footage.” Yet the film isn’t
a hoax that you can “see through.” Instead, its narrative concerns the
apparent emergence into reality of what had previously been considered
“legends” and “stories”; it depicts the gradual passage of what the students
are investigating from the status of “tale” to bizarre and enigmatic truth.
As a result the film’s dominant motif is the ambiguous appearance of
“reality” itself. Hence the suspended ontology of the final shot, explicable
but impossible, intelligible but imponderable. The film therefore holds on
extratextually (in its marketing, packaging, etc.) to an apparent reality its
own textuality has generated.
This in turn derives from the circumstances of the film’s shooting.
Heather, Josh, and Michael (really their names) really did get lost hiking
in some woods, and were harassed and scared at night (by Myrick and
Sánchez); they improvised the dialogue as though in reality, genuinely
carried the equipment and shot nearly all the footage (later really edited
by the directors); they were given less and less food during the eight days
they were out there to incite genuine discord among them. The effect, in
short, was to underpin the film’s textual apparent reality with the shoot’s
near-reality.
As for Ricky Gervais and Stephen Merchant’s TV series The Office
(2001–03), Ben Walters rightly traces its aesthetic to two forms of televisual
storytelling increasingly in vogue since the 1990s: naturalism (The Royle
Family, the Alan Partridge vehicles) and vérité (ER, The Larry Sanders
Show). Counterparts of the docusoap and reality TV, both bore witness to
the growing importance of the narratological “real” in TV fiction without
directly addressing the issue to any significant extent. The Office owes much
to the aesthetic of the docusoap; it looks like a TV program about everyday
life in a dreary workplace, intimately shot, and its final episodes draw on
the idea that the earlier ones have now been aired, such that new characters
recognize David Brent as “that awful boss” from the BBC2 show.
What distinguishes The Office from any of its influences, however, is its
use of a technique by which characters’ eyes frequently move toward the
filming lens, but not “as an echoing exercise in postmodern referentiality.”16
Instead, these eye movements, which can be voluntary or involuntary, open
or furtive, and in their duration range from almost-imperceptible flickers
through glances to actual looks, constitute the camera as an implicit, silent
character. In short, they characterize the camera; or rather, as we never see
or hear the show’s (fictional) makers, they characterize and fictionalize
you, the viewer. Tim looks to you appealingly, as an ally in his war of intel-
ligence and sensitivity against Gareth’s stupidity and boorishness; Brent
looks to you deludedly, as an “admiring” audience for his supposed toler-
ance and comedic brilliance; myriad characters look toward you embar-
rassedly, in shared solidarity or even guilt, as the implicated witness of the
cringe-making mess that they themselves are unwillingly part of. Each of
these glances attributes to you a character, a personality, a certain level
of sophistication and social awareness, a certain set of post-PC values, or
rock ‘n’ roll. The character can be read as a satire on the fictions, imperson-
ations, and cultural mystification that have long underpinned the recep-
tion of American youth culture; Ali G was to Staines what Mick Jagger
had been to Dartford (both inevitably wound up in the United States).
In his openly filmed debates with middle-aged representatives of official
institutions or the bien pensant liberal orthodoxy, he would appear as a
fictive invention, they as themselves (a polarity integral to Baron Cohen’s
humor, though alien to postmodernist theories of self). In these discus-
sions he would push as far toward the margins of ignorance, stupidity, sex-
ism, homophobia, and criminality as he could get away with; misidentifying
his persona as apparent reality, his guests, though ever more affronted, let
him do so.
Borat, however, gave Baron Cohen a more dangerous and relevant
target for his satirical venom: the United States itself or, more precisely, that
side of the United States that had repeatedly led it into military action in
the Middle East (these really are “cultural learnings of America”). Borat the
character was also born on British television but found his true purpose
the other side of the Atlantic. In the film he is presented from the outset
as the fictive embodiment of the most insultingly regressive stereotypes
about the Middle East, in order to draw from the Americans he meets the
expression of those actual prejudices of theirs that underscored the war
in Iraq. “Kazakhstan” here is no more than a lightning rod, deliberately
chosen as an almost-unknown (in the eyes of his targets) but vaguely
Middle Eastern piece of land (as one of the film’s writers noted, real Kazakhs
look nothing like Borat). Officials from Kazakhstan reacted with fury to
the film, castigating it as lies and abuse; in doing so they were responding
to one level of its apparent reality without recognizing the subtlety with
which Baron Cohen deployed it. I don’t think for a moment that Baron
Cohen had any interest in “genuine” Kazakhstan: it’s a fictive construct
that the movie depicts, a spurious racist prejudice designed to elicit the real
racist prejudices of genuine Americans. It is then a bold and politically
radical piece of work, a devastating assault on actual American ignorant
primitivism that conceals its anger and brilliance behind an entirely bogus
presentation of invented Kazakh ignorant primitivism. A polemical study
of one aspect of contemporary Western orientalism, Borat unmasks
through its fictions the true system of values—the anti-Semitism, the
assumed cultural superiority, the bloodlust, the fear of the other, the paro-
chialism, the naivety, the certainty, the unthinking patriotism, above all
perhaps, and most disturbingly, the blind and empty desire to “help”—
which makes possible, even inevitable, American attempts to control, colo-
nize, and “save” countries like Iraq. The film gives us, then, a fictitious self
In the wake of 9/11, some voices in America called for what would come to
be known as the “new sincerity,” defined by Wikipedia as: “the name of
several loosely related cultural or philosophical movements following
postmodernism . . . It is generally agreed that the principal impetus towards
the creation of these movements was the September 11th attacks, and the
ensuing national outpouring of emotion, both of which seemed to run
against the generally ironic grain of postmodernism.”23 There was a politi-
cal subtext to this, understandable after such a trauma, in that sincerity has
traditionally been identified as a typically American trait; to have more of
it is to reinforce Americanness. On the Côte d’Azur in Lawrence Kasdan’s
film French Kiss (1995), Meg Ryan’s character exclaims that, while the local
women may be mistresses of guile and ambiguity, “I cannot do it, OK?
Happy—smile. Sad—frown. Use the corresponding face for the corre-
sponding emotion.”24 This distinction between American naturalness and
straightforwardness, and European sophistication and game-playing, is at
least as old as Henry James. Sincerity is here rooted in notions of New
World innocence and childlike uncontamination as much as it underpins
the curious British belief that Americans don’t get irony and the French
conviction, expressed by Baudrillard, among others, that Americans are
typically naïve.
However, “new sincerity,” at least in such terms (and to the degree
that you trust Wikipedia), has been made redundant by an international
digimodernist earnestness that wipes out postmodernism’s irony and pre-
dates the attacks on the World Trade Center. While sincerity is a value, a
conscious moral choice reassuringly (in troubled times) under the control
and will of a speaker, digimodernist earnestness, like postmodernist irony,
has deep roots in contemporary culture. It can therefore seem a compulsive
mode, involuntarily swamping its speaker. Digimodernist earnestness, as
far as a cultural mode can be, is necessary, that is, a sociohistorical expres-
sion, not a personal preference. It cannot be called for or promoted as
it’s already here, and right at the heart of our culture. The following extract,
for instance, comes from a 1999 movie that made almost a billion dollars
worldwide; it’s spoken in a toneless voice, unmodulated and flat but exud-
ing gravitas:
Anakin or Harry with the “dark side.” It’s visible too in the shift from
the postmodern camp, irony, and depthlessness of the 1960s’ TV shows
Batman and Spider-Man to their more recent cinematic versions. The
Spider-Man trilogy starring Tobey Maguire is especially rich in earnestness:
its first installment (2002) ends with the hero musing, “This is my gift.
My curse,”26 and almost all of the second (2004) is taken up by the angst,
hand-wringing, and sulky self-communing of the three solemn leads. It’s
shallow and narcissistic, and so tediously transitional, but what it really
isn’t, is ironic.
It mustn’t be concluded from this that earnestness is merely humorless-
ness. It’s true in general that earnestness, especially when so labeled, will
have an unattractive image: it suggests a very unsexy and exaggerated
pseudograndeur that frankly needs to chill and lighten up; “irony” had
sounded knowledgeable (or “knowing”), cool, hip, undeceived, in control
and skating pleasurably over the surface of things. In cinema earnestness
does derive frequently from the attempt to shoot children’s material for
young adults. But, more interestingly, it also stems from the shift of cinema
toward mythological subjects or toward ancient-historical or apocalyptic
scenarios. This, as I explore in the next chapter, is partly due to what CGI
can give cinema, the reality-systems beyond our naked-eye universe that it
dramatizes convincingly. But it is equally linked to an evolution in narra-
tive after postmodernism, away from the realist/antirealist impasse toward
a mythopoeic form more reminiscent of medieval storytelling. This is a
fascinating and as yet embryonic shift, and the overblown or heavy absur-
dities of earnestness in films like X-Men, The Chronicles of Narnia, or The
Golden Compass, where the fates of civilizations are at stake but never felt
to be, are a very early—and wholly inadequate (but then all babies start
with faltering, falling steps)—symptom of it.
Earnestness in contemporary pop derives from a parallel disjuncture
between adult material and childish consumer. In reality TV and the docu-
soap I find a different cause: the absence of critique, of critical intelligence.
This is paradoxical, since these are top-heavy forms with a crushing weight
of authorial directedness: a voice-over tells you how to interpret what
you’re seeing, an “expert” is on hand to tell you what it all means—it’s
infantilized. But the experts are frequently pseudoauthorities (semiquali-
fied members of academically marginal disciplines), or lecturers from
the “soft sciences” sweetening and dumbing down their insights from the
social to the superficial. In Wife Swap, for instance, the evident differences
in status or values would seem to provoke an understanding based on
Of all the pages and arguments making up this book, those in this section
are the ones I feel most uncertain about. It’s a risk I’m willing to take because
the issue, however much I may misunderstand it, fascinates me. But
although a book like this unavoidably posits its author as a fount, if not of
wisdom, then of belief, here I grope in the dark, the points are indistinct to
me, and this may even be a nighttime of my own making. Perhaps I might
say: this is an argument that could be put forth by somebody unknown,
which I have imagined and am quoting with all due detachment.
its components: these are detachable, can be recombined (as by Joyce and
Kubrick), and vary in importance. In this way it seemed that Tolkien’s
equivalent to Odysseus’s scar might be Tom Bombadil, a figure met by
the four hobbits in The Fellowship of the Ring in a lengthy section entirely
omitted by Peter Jackson’s film version. To have included him would have
been enriching but was not necessary; it was an episode rather than one of
the subplots regularly cut from literary adaptations, although endlessness
is not reducible to the episodic. Such a narrative is stitched together out of
repeatedly appended bits and pieces: it’s limited really by the fatigue of the
reader/listener, and it’s telling that Tolkien himself felt his 1,500-page story
“is too short.”31 You could indeed just keep adding more. The beginning and the end are largely set in stone, but how much of the middle you’d want, and which episodes, is really down to the skillfulness of the storyteller and the tastes of his/her readers or listeners.
The ostensible content of this form is today found particularly, of course,
in narrative-heavy videogames such as World of Warcraft or The Elder
Scrolls, which draw heavily on post-Tolkien imagery, and where the player
him/herself plays the role of the storyteller, reshaping the given fictive
materials in a distinctive (hopefully skillful) way. These thoughts may seem
to gather up the threads of this chapter: the shift in status of the Homeric/
Tolkienesque mode is consonant with the move to the cultural center
ground of the traits of the children’s story; this mode is earnest, not ironic;
and its creation of an autonomous reality-system frequently relies on
pseudoscientific discourses, notably historical, geographical, and anthro-
pological/zoological. Alison McMahan has identified “a new umbrella
categorization system” of American film narrative blending myth, fairy
tale, drama, and what she calls the pataphysical film, claiming that “[t]his
system . . . applies to every film coming out of Hollywood today.”32 Both
the name and the nature of McMahan’s “pataphysical film” strike me as
problematic. However, the fusion of myth (yielding endlessness), fairy tale
(children’s story), and drama, both in American films like the Matrix
trilogy and internationally with Ang Lee’s Crouching Tiger, Hidden Dragon
or Zhang Yimou’s House of Flying Daggers, represents cinema’s response
to the retreat of postmodernism. Realism is superannuated, postmodern
antirealism is bankrupt; here lies a solution, a way out of the impasse. It’s
a better option than the “wistful return[s] to realism” suggested by various
literary critics as the aftermath of postmodernism, such as “dirty realism,”
“deep realism,” “spectacle realism,” “fiduciary realism,” and “hysterical
realism.”33 Such terms are likely to be fully intelligible only to other critics;
in the not negligible world where narrative is embraced solely for pleasure,
more radical developments are underway. The Tolkienesque in my after-
Auerbach sense is prevalent in videogames, in Hollywood (as content), and
in popular fiction (Germaine Greer may by now be having nightmares
about posterity’s take on Terry Pratchett’s Discworld). Yet endlessness as
a digimodernist textual-narrative characteristic does not mean only the
spread of neo- or pseudomedieval storytelling modes and content.
Indeed, the thrust of this—still hypothetical—argument runs in the
opposite direction: it is our new taste for endlessness in fiction that has
created a demand for the Homeric/Tolkienesque. Another layer of possible
argument here: at the time of the invention of cinema, contemporary nar-
rative was almost exclusively structured in one of two ways (essentially
the same in singular and plural quantities): as a once-and-for-all unique
account of events and characters (Jude the Obscure); or as the
format serial, in which many of the same characters would recur from story
to story doing pretty much the same things in altered circumstances, never
ageing, scarcely developing, and barely if at all remembering or showing
awareness of their own past adventures (the Sherlock Holmes stories).
Cinema inherited these possibilities, giving us Citizen Kane and, in the
format serial, the Thin Man or Charlie Chan movies, among others. TV
inherited them from cinema: in the 1960s or 70s, for instance, TV fiction
favored either the one-off film like Cathy Come Home or the format serial
like Fawlty Towers and Starsky and Hutch. (Mini-series, like Roots, were
extended one-off films.) And yet TV carried within itself from its inception
the germ of endlessness, also found on the radio: the soap. Mocked and
marginalized, the endless soap was placed socioculturally relative to the
finite TV narrative as Tolkien had been to “literary” fiction.
Digimodernist narrative, it can be asserted, favors the endless. This sug-
gests that, in this hypothetical argument, endlessness is the fictional form
of onwardness. By “endlessness” here I don’t mean, of course, that the story
literally goes on forever: each narrative has in practice a finite number of
words, scenes, or episodes. Instead, I am using it as the highly simplified
catchall for a variety of similar and overlapping narrative forms, all of
which open the storytelling up internally and estrange it from its supposed
destiny. Instances are listed below in no particular order:
In Britain, most people born since about 1980 experience narrative pri-
marily as endless in these senses. Whatever its (debatable) aesthetic merits,
this storytelling mode has become dominant for the generation that grew
up into digimodernism.
When I first saw what was then called Star Wars in 1977, I assumed it
was a one-off: after all, it ended with the total annihilation of the enemy
(though I was vaguely aware Darth Vader had escaped). I can’t remember
when I heard that there would be a “sequel,” but I do distinctly recall read-
ing around then that the movie I’d enjoyed would be the first in a series of
nine. This soon proved unfounded: there would be only three . . . Putting
to one side the issue of how and when Lucas conceptualized his project, the
point here is rather the project’s very elasticity: it could be and has been
expanded endlessly. In the process, the story is not “completed,” not even today,
perhaps not in my lifetime: it’s extended, broadened, renewed, in principle
forever. The prequel trilogy had to be fiddled with to get it to mesh with the
originals (McGregor had to imitate Guinness’s voice, Portman to be coiffed
like her “daughter”) but, more subtly, the originals changed shape too
under its retrospective influence. Their titles were reworked into chapter
headings (no independent story would be as feebly named as A New Hope);
Palpatine’s absence in episode four suddenly seemed a gap in the narrative.
The six films cohered only by reimagining the last three episodes as the
continuing story of Anakin, which they clearly weren’t, causing relative
disaffection toward the prequel trilogy among many adult fans of the origi-
nals. Endlessness means not only the scope for repeated addenda, but the
resultant reshufflings and reorderings of the “whole,” while each
bit is discretely detachable and of varying quality—the story can be reorga-
nized, rethought, reedited. And beyond the films come the books, the
videogames . . . This narrative form is so reminiscent of myth or ethnonar-
rative it’s necessary to stress an obvious difference: the mode of the Star
Wars or the Matrix “universes” is not oral; it’s electronic-digital. Moreover,
the multiple and social authorship of a Beowulf runs up against the copy-
right and franchising of today’s texts. If (broadly speaking) the narratologi-
cally “ancient” or “medieval” is reinscribed in our culture, its mode of
diffusion is lost: authorship and textual sociality function differently in our
time, and all passes via digitization.
Star Wars is really one twelve-hour film (at least). Endless narrative,
as its name suggests, is liable to be very long, and it’s indicative of contem-
porary taste that recent movie versions of the Titanic disaster or of King
Kong last twice the duration of their 1950s’ or 1930s’ forerunners. Such
extendedness in turn suggests endlessness as its narrative structuring prin-
ciple, and makes its implied reader/viewer the fan-geek, who has the time
and inclination to learn the infinite details of this fictive universe. As con-
tinuing narratives The Matrix is seven hours long, Lord of the Rings ten,
while Pirates of the Caribbean, a sixteen-minute theme park ride, lasts
461 minutes as a story (with more to come). It achieves this expansion
by mechanically opening and closing its story (escape-capture-escape-
capture) and nonchalantly producing new tasks for the protagonists to
accomplish and new mythic items to do battle with. Immensely long nar-
ratives should historically come as a surprise: it was once assumed that
increasing demands on free time would inevitably make stories shorter
and shorter (“Ken Russell, when asked why he had shifted over into MTV,
prophesied that in the twenty-first century no fiction film would last longer
than fifteen minutes”).34 Compressed and tightened since the passing of
the Victorian age, by the 1930s most British novels, literary and popular,
tended to come in under 300 pages. The aptly named Big Read, however,
a 2002 BBC TV poll of the citizens’ favorite fictions, seemed almost to set
400 pages as a minimum: Tolkien and Pullman featured among the wordy
classics of Austen, the Brontës, Dickens, and Hardy, many of the shorter
books dating back to the now-anomalous mid-twentieth century. I am of
course concentrating here on popular taste, exemplified by the tens of
millions of copies sold and lovingly devoured of Pratchett’s 36-novel Disc-
world series and, even more notably, the 3,000 or so pages of the Harry
Potter sequence.
any one episode don’t connect to any other. Growing to independence and
leaving behind earlier cultural models, TV, a constant, rolling medium like
radio, has increasingly sidelined the format serial in favor of continuing
narrative. Beginning with Hill Street Blues (1981–87), modern flagship fic-
tions such as ER (1994–2009), The West Wing (1999–2006), The Sopranos
(1999–2007), Sex and the City, and Lost (2004– ) have been structured by
an opening/closing episodic form within an ongoing framework. Such
stories require some memory (spawning the fan-geek) and, while highly
plotted locally, are not oriented toward any “final” goal. Soaps are distin-
guished from them by their content or production values, not their tempo-
rality. Sex and the City and Friends may have stopped by pairing off their
principal female with her long-term man, but they didn’t “conclude” that
way; under endlessness the last bit has no special weight, just as nobody
cares that the Canterbury Tales are actually unfinished. This shift in the
focus of interest from the overall arc to the minute-by-minute detail may
help explain the fantastic popularity since 1995 of Jane Austen (ceaseless
adaptations, reworkings, biopics, etc.), whose total narrative structures are
generic, predictable, and banal (girl meets boy) but whose every page is
intricate, subtle, and fascinating.
breast milk. All three interwoven stories are resolved within the episode,
respectively: Monica and Rachel are reconciled, Joey sees off the competi-
tor, and Ross tastes the milk. But although each story is introduced, devel-
oped, and completed inside twenty-two minutes, understanding its full
significance is impossible without reference to much that has happened
before then: the back story of Ross’s long unrequited love for Rachel,
recently discovered by the latter who is now in ironically unrequited love
with him; the back story of Joey’s faltering acting career, which necessitates
a day job; the back story of Ross’s divorce from the now-lesbian Carol,
which constructs his relationship with her as inevitable sexual humiliation.
Indeed, the episode contains implicit content from almost all of the previ-
ous twenty-five episodes. For a new viewer, this isn’t the confusion that
comes from unfamiliarity with character and relationship; indeed, know-
ing that Monica and Ross are siblings or that Ross is a professor doesn’t
take you very far. It's a lack that can only be made good by watching the
show from its start. Consequently, seeing any one Friends episode enriches
your understanding of all those you’ve seen before, regardless of the order
you come to them in, while you can also follow any episode in isolation
from the 235 others.
For a long time the scope of Friends lay within the lyrics of its jangly
theme tune: the disappointments of early adulthood, a “joke” job, no
money, an abortive love life, and consolation for this from friends. All three
plot strands illustrate these themes. Focusing on failed progression, on
stunted developments, the show avoided any threatening changes: charac-
ters got jobs but not promotions requiring relocation abroad; they got
married but were immediately divorced. Five or six seasons in, and as
the characters moved into their thirties, the writers began to relax these
constraints in the interests of verisimilitude but still found ways of reintro-
ducing the past, by, for instance, bringing back ex-partners from several
seasons earlier to add complexity and spice to wedding preparations.
Throughout, then, the present remains populated with the past, and it also
flows forward into the future. Ten years after first viewing “The One with
the Breast Milk,” it’s easy to think of the nourished baby growing up into a
child who will play practical jokes on Rachel, or to ponder the fact that
Rachel will one day work in the department store (called here “her house
of worship”) Monica shops in, or to recall the interminable saga of Ross
and Rachel’s on-off relationship, the recurring motif of Ross’s sexual humil-
iation, and the absurd ignominies of so many of Joey’s acting jobs (the
cologne standoff, to underline what he should be doing, is a pastiche of a
Western). So the episode is (1) complete in itself, (2) dependent on a flow
of information from past episodes, and (3) locked in to much that will
ensue for as long as the series will run, but—crucially—as repetition, not as
an elaboration forward and leaving behind; or, rather, as variations within
a field of action to be traversed in all directions but never abandoned.
As a result of this triple temporality, you could (1) watch only this episode
and enjoy it for what you think it is, (2) insist on seeing all twenty-five
episodes before it and enjoy it as the growth outward from their previous
content, like reading chapter twenty-six of a new novel, or (3) watch every
one of the other 235 episodes without ever realizing you’d missed this one
(unlike a novel). It’s a multiple, complex interweaving of time schemes
suited both to fans and to occasional viewers, by which episodes can be
seen in any order but gain from being watched sequentially (they none-
theless appear to start in medias res—there’s no immediate continuity).
The story opens and closes, opens and closes, on many levels and at many
varying speeds.
6
Digimodernist Culture
[L]iterature, Richard said, describes a descent. First, gods. Then demigods. Then epic became
tragedy: failed kings, failed heroes. Then the gentry. Then the middle class and its mercantile
dreams. Then it was about you—Gina, Gilda: social realism. Then it was about them: lowlife.
Villains. The ironic age. And he was saying, Richard was saying: now what? Literature, for
a while, can be about us (nodding resignedly at Gwyn): about writers. But that won’t last
long. How do we burst clear of all this?
Martin Amis, 19951
Videogames
It’s amusing to play with the idea that certain cultural forms lie at the very
heart of certain cultural movements, embodying, in some sense, their most
emblematic characteristics. For modernism it might have been cinema,
newly invented; though Michael Wood has warned against such an identi-
fication on the grounds that silent films overwhelmingly anchored them-
selves in traditional narrative modes, anyone seeking a quick and strong
sense of what European modernism was about could do worse than watch
such studies of the machine, the city, dislocation, and anxiety as Sunrise or
The Man with a Movie Camera. It can be argued too that the format of the
mass-distribution daily newspaper, equally new, lies behind Ulysses: Joyce’s
novel, also a kind of encyclopedia of one day, comprises a sequence of
disparate forms of writing oriented on a major city and, through mise en
abyme, uses journalism and advertising as motifs (similar points can be
made about The Waste Land). As for postmodernism, its sense of the
swamping influence of the “spectacle” and the precession of the image
owed much to the spread of television; its delight in mixed registers, tones,
and genres suggests the experience of channel-hopping across blurred and
mingled fragments of myriad cultural discourses. Also central to postmod-
ernism, it could be said in the same spirit, was the recent invention of the
theme park, the acme of the simulacrum.
Whatever the validity of these identifications, we can say that, for
digimodernism, the role of formal exemplum is taken by the videogame
(hence its primary position in this chapter).5 It could be objected that
videogames predate the arrival of digimodernism by a couple of decades,
the ludic universe. Some videogames, like the many versions of chess or
golf available, are electronic adaptations of existing games or sports; others,
like Peter Jackson’s King Kong or Spider-Man 2, are electronic versions of
existing narratives, especially movies. This tension is so integral to video-
games it has marked them since their inception: while Pong redesigned
table tennis, Asteroids was intended to echo currently popular narratives
(the original Star Wars trilogy, Close Encounters, etc.). Over the years, movies and videogames have converged on occasion almost to the point of fusion (though only within a very narrow set of low movie genres).
This resource-stripping doesn’t, however, establish videogames as an art
in their own right. Moreover, the lists of qualities (complexity, subtlety,
etc.) commonly ascribed by enthusiasts to certain games are clearly para-
sitic on existing conceptions of art. Yet the one thing you would expect
of a new form of art would be its separateness from older ones, just as you
wouldn’t expect a baby to look exactly like its mother—you’d anticipate a
redistribution of family traits. Books with titles such as Video Game Art
reduce the form to visual imagery, which they study as one might analyze
a film’s cinematography.6 Yet a landscape that you play through and characters you play off are entirely distinct from (though not wholly different from) landscapes and characters that you watch. One might just as relevantly study the rendering of some carved chess figures: their beauty would
be real, and interesting, but the pieces wouldn’t derive their meaning from
it. The visual imagery of videogames resembles in its functionality the
look of a building (a game’s “architecture”), but you don’t play buildings
either. And whether videogames are art or not, or texts, you definitely
play them.7
This issue can be resolved, I think, through what I take to be, functionally,
the rupturing novelty of videogames: their grammatical reliance on super-
subjectivity. All games are subjectivist in their basic operation because “I”
play them: I am physically involved in the actions and deliberations, the
incidents and maneuvers of play. While gaming can be watched it’s clear
that any audience is peripheral and insignificant; all that matters is the
playing self (in theater and movies if nobody watches there is no perfor-
mance). Subjectivity in a traditional game is literal: it’s really you who win
and lose (the source of games’ emotional pull) even if it’s mediated through
inanimate objects like pieces, tokens, an iron or ship; and this is carried
over into the fundamental structure of videogames, their ludic heart. Yet
the subjectivity that videogames allow is actually a super-subjectivity.
Super-subjectivity can take many gaming forms (I give here only the
briefest of sketches). A player’s self can map on to many game selves: in a
soccer game, s/he can incarnate all eleven members of their team plus the
coach during one matchup alone, plus all the players and coaches of all of
the other teams during a single session of play; in ten hours a player might
map him or herself on to hundreds of different selves. (Pathologically this
one-to-many correspondence can be considered as latently schizophrenic.)
Conversely, the game self mapped on to may be a single fictive individual,
that is, a character, with a name, history, traits, feelings, social place, and so
on, though set in a universe where “selfhood” is invested with personal
power(s) or an ego-emphatic lifestyle impossible in the real world. Playing
as such a character you really are him/her (pathologically, this is narcis-
sism) and you inherit all his/her enhanced rights, strengths or invulnera-
bility, diminished responsibilities and eliminated needs or weaknesses.
Indeed, whether equipped with a personality or not, the game self assumed
by a player may possess a subjectivity more extreme, forceful, or immune
than the player’s own: s/he can be killed many times, or can slaughter
with impunity, or drive cars at 200 mph and step unscathed from infinite
appalling crashes, and so on. Knowing no fear, stripped of the consider-
ation of consequences, this subjectivity seems heroic, mythical, legendary;
knowing no shame or guilt, no psychological attachment to the past or the
external world, it has the pathology of the psychopath. Alternatively, the
player’s self may map on to anthropomorphized creatures that retain
the (disavowed) consciousness of humans while furnishing a whole new
set of qualities and powers. Such a player remains him/herself, only much
more so.
By super-subjectivity, you play through your gaming self or selves: you
play, then, as yourself (it’s you whose game ends when all your lives have
gone) but vastly inflated. The process of self-identification that is involved
owes something to the ways in which readers and viewers identify with
characters in fiction, but the textual universe of games gives it a distinctive
ontology. In gaming, you can often switch the object-self of your super-
subjectivity from one instant to the next, whereas film or literary identifi-
cations tend to be deeper and more inflexible; and while the latter rely on
an optional self-recognition by which the character is felt to be “just like
me” or to embody “my values,” gaming super-subjectivity enforces self-
identification at a grammatical level: either you identify yourself thus, or
you don’t play the game. These structural considerations are rendered more
complex again by multiplayer action, whereby your super-subjectivity
interacts with, is thwarted by, or joins forces with somebody else’s.
As videogames have developed so far, super-subjectivity seems their
most essential feature. It also distinguishes them clearly from other forms
art derives from the universality of loss, from inexorable ageing and
tarnishing and forgetting and wearying, from the inescapable mortality
of self and loved ones: “Where there is beauty there is pity for the simple
reason that beauty must die: beauty always dies, the manner dies with the
matter, the world dies with the individual.”9 A similar problem seems to be
faced by digital art, that is, art produced either using or within digital tech-
nologies: often visually arresting and intellectually interesting, it tends to
feel shallow for the same moral reason.10 The digimodernist crisis of textu-
ality alluded to above is in both videogames and digital art already (though
not insuperably) apparent: the two cultural modes most integrally reliant
on digital technology struggle to constitute themselves as art.
Film
brings into filmic being the other, something from another world, another
time; the monstrous, the impossible, what lies outside our observable
world. This was inevitable, since conventional cameras had for a hundred
years been able to capture the naked-eye universe, that medium range of
vision which rests only on the here and now. Everything that belongs
beyond this circle is provided by CGI. Already in Terminator 2 CGI had
made material, not elements of the distant past, but fragments of the future,
projected into our present. Watching Spielberg’s movie it must have
occurred to some studio executives that all cinema’s historical monsters
could be resuscitated by CGI, and so came Roland Emmerich’s Godzilla
(1998) and Peter Jackson’s King Kong (2005). The second half of Jurassic
Park is both cinematically and ecologically controlled by the dinosaurs,
who are finally glimpsed roaring triumphantly while a banner about them
ruling the world floats symbolically down. CGI has been flung up against
conventional filmmaking, and prevailed; it is also, in this incarnation, wondrous, truly magical, jaw-droppingly so. Popular cinema’s reliance on CGI is, of course, one reason it has undergone infantilization: CGI-dominated movies look like a child’s magic show, a firework display, a kiddies’
theme park; the Spielberg-like entrepreneur introduces two under-twelves
to the scientists with an allusion to his “target audience.”
A similar line separates the postmodern from the CGI in Emmerich’s
Independence Day (1996). This is a blend of commerce and subversion,
a megabucks blockbuster and brainless neo-con flag-waver that weirdly
and transgressively climaxes with an act of world-saving anal rape. It
depicts the arrival of the imperialistic aliens in media terms, as they hijack
and cause the malfunction of satellites and, through them, television sets;
the long preamble of warnings and chaos is mostly conveyed via TV broad-
casts watched by captivated crowds, a national and international outbreak
of transmitting and staring that establishes the events as essentially spec-
tacular. This apocalypse will only be televised. Constructed from the modes
of viewing and passivity, directed also at the spaceships themselves, the
script calls up a host of movie allusions to It Came from Outer Space, E.T.,
2001: A Space Odyssey, and Close Encounters, among others. The whole is a
palimpsest of The War of the Worlds with Wells’s deadly bacteria wittily
replaced by a fatal computer “virus.”
Nevertheless, in its exact middle and at its conclusion Independence Day
crosses the line. It’s vital here that, contrary to most of its sources, it has
presented the aliens as psychotically malevolent; the only human response
can be to annihilate them first, so two dogfight sequences are played out.
Cinematically they differ radically in their mise-en-scène from the rest of
the film: the concern with representation, spectacle, and watching is sup-
planted suddenly and violently by an immediate and visceral engagement;
the screen is filled with fizzing lights and careering craft, the humans dodge
and fire, spin and attack the enemy in fast, kinetic, and material involve-
ment with a digitized world. It doesn’t look “real” in either conventional or
postmodern terms; it looks like a computer game, a more sophisticated
Space Invaders. The enemy whizzes brightly at and around you while you
try to avoid being hit and fire madly back in a survivalist blur. The change
of aesthetic is temporary but absolute; the US President, until then just
another viewer/broadcaster, is transformed into a fighter pilot.
This use of CGI to make film resemble preexisting videogames has
become, of course, widespread: among the many examples of games made
into movies are Final Fantasy (2001, 2005), Tomb Raider (2001, 2003),
Resident Evil (2002, 2004, 2007), and Doom (2005). It can even be argued
that the videogame has replaced theater as cinema’s other. From its incep-
tion film struggled to distinguish itself from the mere recording of what
belonged on a stage; in One A.M. (1916) Chaplin played a drunk returning
home late at night, and the camera sat before his supposed living room
following his misadventures from middle distance like a spectator in the
front row at the music hall. The maturation of cinema required it to find its
autonomy, to shrug off this dependence, this mechanical reproduction of
theatrical performance; it never entirely succeeded either, as the movement
of actors, directors, and writers between the two suggests. CGI cinema
arguably replaces that ambiguous reliance on photographed theater by a
new frère-ennemi: the fabrication of reality-systems and human experience
by computers, the precise area of expertise of the videogame. Consequently,
while bad 1930s’ movies look literally “stagey,” many CGI pictures look
slightly “unreal,” that is, insubstantially computerized. In the latter the
videogame is never far away; or, rather, it is never further away than theater
and vaudeville were from Chaplin or Welles, Renoir or the Marx brothers.15
This shift lies at the root of much contemporary complaint about cinema;
but in principle it reorients film, it does not destroy it.
Stephen Sommers’ The Mummy (1999) regenerates a CGI-made ancient
Egyptian and his dead world’s practices, and an early 1930s’ horror movie.
This double revivification makes for the presence throughout of two lan-
guages, two epochs, but also two aesthetics and tones. On one side you have
some very self-aware, self-mocking, and depthless Middle Eastern hokum,
which knows itself to be the latest in a low tradition already remodeled in
postmodern terms by Raiders of the Lost Ark (1981). Filled with inverted
commas, it’s a self-parody that surfs its intertexts. Ranged against all this is
the CGI world of the dead with its digimodernist earnestness and mythol-
ogies, its unquestioning embrace of remote abstract values, its sacrifice and
eternity and tragedy, and its apocalyptic destructiveness: flights of locusts,
plagues of beetles, murderous towering sandwalls—epic forms of slaugh-
ter. Once again, the CGI domain is “evil” without ours really being “good,”
since the latter has junked, in its ironic reflexivity, all moral dichotomies.
Instead, the postmodernism here defangs the horror, and the CGI invigo-
rates, with unexpected reciprocity, the film’s weary postmodern strategies.
If The Mummy runs these discourses together, Jackson’s King Kong, another
remake of an early 1930s’ movie, gives us an hour of popular postmodern-
ism engulfed in its second hour by rampaging digimodernism: the journey
to Skull Island is all knowingness, mise en abyme, transtextuality, and cine-
literacy, then the film is taken over by the mythology and devastating vio-
lence of the ape and the dinosaurs. By now the former seemed gratuitous,
mere forelock-tugging to obsolete film school theory. By contrast, Guill-
ermo del Toro’s Pan’s Labyrinth (2006), which juxtaposes a similar “real”
period (the early 1940s) with a CGI realm of fairy tale and horror, omits
entirely the postmodern as presumably superfluous. Its intertwining of
ontological levels is richer and more suggestive than that of many more
commercial movies; at the same time, and not unlike Jackson’s Lord of the
Rings trilogy, it excitingly opens up new narrative possibilities through a
redeployment of the traits of the children’s story.
In the Harry Potter series (2001– ) the line divides the Muggle from the
magic world: every episode ritualistically reestablishes the former so that
the joyous crossing into the latter can ensue (we’re never rid of the vile
Dursleys). This traversal is so important it’s multiply enacted: by traveling
diagonally (to Diagon Alley), by accessing a fractional railway platform, by
voyaging on extinct means of transportation—it’s wondrous and symbolic.
Once in Hogwarts, the magical dimensions of boarding school life emerge
from the familiar routines, calendar, and characters for which fictive fore-
runners have long prepared us (they play sport in houses, not cricket but
its magical near-homophone). One kind of reality is conveyed by conven-
tional means, the other by CGI: corridors, detentions, headmasters, and
janitors by one method, swirling staircases, talking centaurs, and moving
paintings by the other. In the Chronicles of Narnia series (2005– ), the fault-
line is materialized, from the original children’s novels, as the back of a
wardrobe; but it also marks a different form of narrative, as nonsignifying
and temporal history passes into meaning-rich and eternal allegory.
Many critics struggle with CGI cinema. Those weaned on 1960s’ mod-
ernism lament the loss of a distinctive authorial “vision” and the lack of
try to escape from destroyers of worlds (aliens, apes, cold, etc.). They come
to seem very puny faced with these overwhelming, overpowering sources
of annihilation, and their stories shrink emotionally with them. Such films
set up the personal or political, human triggers for the arrival of their CGI,
and they spend time establishing human relationships and predicaments to
be traced through the subsequent CGI-driven bombardment. In practice,
the latter swamps the former: we wind up caring about and believing in
nothing else. So an ironic doubling occurs: while the characters run around
trying not to be obliterated by the CGI-made killers, their actors are textu-
ally erased by them.
As Spielberg showed, the second major narrative use of CGI is to
revivify the past (rather than blot out the present): the CGI-historical movie.
It revitalizes vanished places, ruined buildings, lost worlds. As this is, by
the nature and name of the technology, a visual resurrection, it works most
effectively among civilizations whose written histories have come down to
us but whose visible sites have been half-erased (though some residual trace,
to signal the very act of reconstitution, is essential). Beginning with Ridley
Scott’s Gladiator (2000), CGI-history brings us the ancient and medieval
world. Gladiator rebuilds imperial Rome and restores the Coliseum to its
pristine entirety: this is CGI as the work of a sort of architectural heritage
trust renewing the past’s pure look. In the same vein, CGI operates as a sort
of historical reenactment society busily restaging past battles. There’s much
to learn here about the close-quarter combat that disappeared forever in
1914–18; again and again a sword is raised only to chop down at its enemy
while seething masses hack and slice each other in the background. CGI
can recreate the decapitation and the impaling and the disemboweling
attendant on the sword, spear, and lance with startling truthfulness. It can
also present an ancient or medieval army of tens of thousands of men
standing on a plain, CGI-made extras producing a sight not seen with such
verisimilitude in living memory. CGI evokes the actual scale and horror of
such warfare; if it suggests these subjects to filmmakers, it also suffuses
them with powerful conviction. The same can be said of the gladiatorial
scenes fought in the Coliseum under Scott’s direction.
This is (alpha) male history; this is (great) man’s historiography, all
emperors and generals and warlords, kings and captains, spurting blood
for their noble causes. It’s the sort of history, indeed, that the ancients and
medievals wrote of themselves: of power and battles, coups and scheming,
armies, wars, and conquests. It’s not the history of the poor, the weak, or
even the female. CGI, as did these men, builds dazzling cities, assembles and
terrific. Zack Snyder’s 300 (2007) is CGI-history (it reenacts the Battle
of Thermopylae) with its homoeroticism and uneasy neo-con utility. An
adaptation of a graphic novel, it deliberately looks “drawn” and artificial.
There is no attempt here to fuse the CGI with the naked-eye universe;
cinema itself is surrendered to its digitization. Much of 300 is absolutely
fascinating cinematically and suggests a brave new world of filmmaking of
whose topography I don’t pretend to know anything. The challenge though
will be to find subjects as original, rich, and arresting as the mise-en-scène
itself. As for digital “rotoscoping,” where actors are filmed conventionally
and the footage then painted over by animators as in Richard Linklater’s
Waking Life (2001) and A Scanner Darkly (2006), the technology suits
themes of derealization and identity loss and the evocation of dreamstates
by looking like what it is, both real and invented. Beyond this, it’s hard to see
what narrative applications such insanely labor-intensive work could have.
What, in summation, does CGI bring to films? I’ve already suggested that
it brings what we cannot see with our own eyes or with existing technology
(telescopes, microscopes): the noncontemporary, the nonexistent, the
nonscientific. On one level it can be argued that CGI has added nothing
new, since apocalypse, history, and myth are, as subjects, almost as old as
cinema itself. The real revolution is ontological. CGI embodies neither the
contents of the mind nor of the world, neither idea nor substance. To over-
simplify, traditional cinema lay between two poles: at one extreme, you
could station a camera somewhere, record what happened in front
of it, and relay the images via a projector on to a screen, as the Lumière
brothers did in the 1890s to depict workers leaving a factory or a train
arriving in a station; at the other extreme, you could make films out of your
imagination, like Méliès in 1902 when he pictured a rocket ship landing in
the eye of a moon recreated as a human face. These two poles can be defined
platonically as thought versus actuality, the mind against the world, inven-
tion versus fact. In practice, all films (including these) blend the two: “The
ability of a shot to be about both what it objectively photographs—what is
in front of the camera—and about the subjectivity of its maker explains
the alluring dualism at the heart of cinema.”17 In 1967 you might have
watched both Disney’s The Jungle Book (cartoon, pure imagination) and
Andy Warhol’s Empire (what a camera placed before a skyscraper hap-
pened to record). All cinema can be—or could be—seen as the outcome of
a negotiation between the two poles: the interior and the exterior, the
dreamed-up and the already existing.
In CGI cinema a third element is added to this ontological structure.
Seemingly “natural” images, apparently of the world, are yet immaterial,
insubstantial; and yet they are not just expressions of thought either, not
just products of imagination, intention, or invention. CGI lies closer to the
“world” of the Lumières, but it’s not our world; and, although consciously
manipulated, it’s not reducible to the contents of the filmmaker’s head
either. Such images are the actual material stuff of the movie, but are not in
themselves any such thing. Charles Foster Kane is framed and lit so that his
creator can imply things about him; but the tens of thousands of creatures
awaiting the Battle at Helm’s Deep aren’t an expressive tool, they’re genu-
inely there . . . except they aren’t: they’re digitized. It’s in the aftermath of
films like 300 and phenomena like Gollum, where digitization breaks free
of mere “special effects” to become a film’s conceptual and aesthetic point
of departure, that a separate and new level of cinematic ontology has
become identifiable.
Outgrowing its earlier role as a supplier of striking images, digitization
has restructured the reality of film. Its importance is reflected in a plethora
of “CGI narratives” without actual computerization, from Chuck Russell’s
The Scorpion King (2002, a spin-off from The Mummy, which restores
Gomorrah) to Kevin Reynolds’ Tristan and Isolde (2006); and in culturally
esoteric CGI movies like Ang Lee’s Crouching Tiger, Hidden Dragon (2000),
or Zhang Yimou’s House of Flying Daggers (2004). Furthermore, it can
be argued that this destabilization of film’s mind/world duality (ever com-
promised) has been countered immediately by the appearance of a
fourth element (or axis). Fiction/fact, a reformulation of the duality,
reworked the Lumières’ material actuality as the documentary film of
a Flaherty or Jennings, which ostensibly provided a celluloid record of the
lives of people remote to the viewer. The assumption of objectivity did
not withstand the challenge of 1960s’ postmodernism, however, and the
form increasingly incorporated the figure of the director as a factor in its
content.
In recent years a new factual genre has been noisily inaugurated: the essay.
A filmmaker has a thesis, a strong and definite opinion; s/he marshals
evidence for it, collects and shapes different kinds of visual material sup-
porting his/her case. I’m thinking of films like Morgan Spurlock’s Super
Size Me (2004), Michael Moore’s Fahrenheit 9/11 (2004), Al Gore’s An
Inconvenient Truth (2006), Robert Greenwald’s Wal-Mart: The High Cost of
Low Price (2005), and Kirby Dick’s This Film Is Not Yet Rated (2006). There’s
no attempt here at documenting people’s objective lives, skewed or not by
the presence of the lens. Instead, these movies seek to explore and establish
a preconceived viewpoint, their production’s raison d’être. This is not a
“scandalous” betrayal of the documentary ethic, but a new and perfectly
valid approach to factual cinema. Rather than fetishize “what happens out
there” (with or without their intervention), such filmmakers begin with
a thought, an argument, and research and present imagery and data con-
firming it. (Super Size Me, whose apparently real aesthetic underpins a
pseudoscientific “experiment” discourse seen in Chapter 5, is typical: if
Spurlock had thrived eating fast food he wouldn’t have had a movie—he’d
have had an advert.) It’s anachronistic though that such films are nomi-
nated for Best Documentary Oscars alongside very different kinds of movie
like Jeffrey Blitz’s Spellbound (2002) or Alex Gibney’s Enron: The Smartest
Guys in the Room (2005). The essay is formally distinct from a documentary: it’s a thesis not a portrait, op-ed not reportage, and a key instance of
the digimodernist aesthetic of the apparently real.
Of the four axes of digimodernist cinema the CGI movie, the essay, and
the documentary are in robust health, while the cartoon, the film of pure
imagination, was never stronger, as I’ve discussed elsewhere. Pixar is the
world’s leading studio, and successful international cartoons over the last
decade or so have been legion: Hayao Miyazaki’s Spirited Away (2001) and
Howl’s Moving Castle (2004), Marjane Satrapi and Vincent Paronnaud’s
Persepolis (2007), Sylvain Chomet’s Les Triplettes de Belleville (2003), and
Ari Folman’s Waltz with Bashir (2008), among others.
What’s struggling is the film that until very recently seemed to epito-
mize the art of cinema: the personal, distinctive authorial vision or critique
of the material, social world. This encounter of a certain mind (individual,
characteristic, skeptical, politicized) with a certain actuality (often violent
or sexual, harsh or disturbing) enjoys critical prestige: it’s the cinema of
Eisenstein, Welles, Hitchcock, Godard, Antonioni, Kubrick. It’s bound up
with the notion of the auteur, first described in 1950s’ France but as old
as movies (Griffith, Scorsese, Greenaway himself) and probably indestructible even by digimodernism. Contemporary films that position themselves
within this conception of (art) cinema, however, appear out of date and
sterile, echoes from another era. When Sofia Coppola plays out the
climax to Lost in Translation (2003) with the Jesus and Mary Chain’s “Just
Like Honey” or soundtracks the royal court of Versailles with Bow Wow
Wow in Marie Antoinette (2006), she seeks the individual distinctiveness
of the 1960s’ auteur: Godard gave his guerrillas the titles of his favorite movies as codenames in Week End (1967); Fellini filmed his dreams, fears, and memories; and so Coppola, a fan of postpunk rock music, puts it in her
pictures whether it belongs there or not (it really doesn’t). The theme of
Lost in Translation—a weary, lonely but strangely attractive middle-aged
man connects with a clever, beautiful but lonely young woman—is a virtual
parody of French auteurist cinema of the 1970s; similarly, the film’s casual
anti-Japanese racism bespeaks the time when American and European
fears of the mighty yen were rampant (e.g., Blade Runner’s Nipponized
Los Angeles [1982]). Lost in Translation is the auteur film as nostalgia for
the auteur film. So is Michael Winterbottom’s 9 Songs (2004) which seeks
to fuse the revolutionary energy of rock music with sexual explicitness
and psychological claustrophobia in a strained recreation of films such as
Performance (1970), Last Tango in Paris (1973), and Ai No Corrida (1976).
It’s vitiated by the dull mediocrity of its songs, the joylessness of its sex, its
reactionary assumption of the male gaze, and an overriding feeling of
belatedness. Just as CGI movies don’t do sadism, digimodernist cinema has
no programmatic use for sexual explicitness (cf. earnestness, infantilism);
it’s a hallmark of an eclipsed cinematic modernism.
Canny auteurs have turned their attention instead away from the
exhausted values of the 1960s/70s toward digitization itself. Cousins, who
has the traditional anti-Anglophone bias of the “serious” British film critic,
thinks the Frodo trilogy “added nothing to the schemas of the movies” and
that if America “raced into the future of cinema technology . . . others . . .
thought through the implications of the new technology more rigorously.”18
For him, Alexander Sokurov’s Russian Ark (2002) “shows that, far from
being at an end, the history of this great art form is only beginning.”19
Sokurov’s film comprises a single unbroken ninety-minute shot, never
before feasible, recorded directly on to a computer hard drive embedded in
the camera and so bypassing film and tape. The infinitely gazing and mov-
ing camera journeys through the Hermitage’s endless rooms, past its art-
works and around its history; the “tourism” of the spatial premise and the
“passing” of time match conceptually the ever-rolling cinematic eye. Yet the
grammar/rhetoric problem rears its head again: the digital means of expres-
sion are awesome, miraculous; what’s expressed with them seems pointless
and shallow, pretty only because of its reverential treatment of its location,
and uninterestingly conventional in its nineteenth-century worldview.
This tension or slippage between ground-breaking digital filmmaking
and inadequate content recurs among contemporary auteurs. Lars von
Trier’s The Boss of It All (2006), while taking up some of the Dogme techniques explored in Chapter 1, is the first film made using “Automavision”:
the cinematographer chooses the best possible fixed position for the
camera, then a computer program randomly makes it tilt, pan, zoom, and
so on, producing off-kilter, irrational, and uneven framing and exposure.
With the sound recorded in similar fashion, the digital actually becomes,
Television
Big Brother was newer and more revolutionary than is generally recognized.
Its roots tend to be traced back to fly-on-the-wall documentaries like
An American Family, to the static cameras that endlessly filmed Andy
Warhol’s borderline exhibitionists, or the constant surveillance of the
There are also mini-tasks. In week 2 that year, together with their main task
of having to memorize ten things about each other, “Big Brother sets them
a task. They must paint portraits of each other, and then mount them on
the wall as if they were in an art gallery.”24 Their shopping allowance does
not hinge on this. In week 3 their main task is a “cycling challenge,” their
mini-task is to “write, design and stage a play”; for the latter triumph they
are given a treat (a video).25 These tasks are, then, small-scale and playful,
perhaps involving elements of sport or the creative arts, or simple chal-
lenges of physical or mental dexterity. The particular ontology of Big
Brother, where the outside world is suspended, means they are often self-
referential. But, as can be seen, they have no intrinsic importance. They are
pretexts, goads to collaborations and fallings-out, incitements to bits of
action and therefore to feelings and interactions. They are fuel for narra-
tive: negligible in themselves, they stimulate instead interpersonal behav-
ior; and the provision or withholding of rewards furnishes a stake that
will induce drama and intensify their investment in the task.
Vladimir Propp, one of the earliest systematic analysts of fictional nar-
ratives, argued that fairy tales, the bedrock of Western literature, contain
recurring generic functions. Prominent among these is the “task” issued
to the hero, which might comprise ordeals or riddles or tests of strength or
endurance. The hero’s successful resolution of the task is rewarded with the
hand of the princess in marriage. In the 1960s Roland Barthes and Umberto
Eco drew on Propp’s work to find similar structures in Ian Fleming’s James
Bond novels (M as the task-giver, etc.). Big Brother’s deployment of tasks
and rewards reflects this narratological framework, but with a difference.
In fairy tales or Fleming the issuing of the task prompts an action (a quest,
an investigation) which subsumes it; it is perceived as a mission, and in this
form triggers the whole narrative with its accompanying interpersonal and
emotional content. In Big Brother, though, the task remains just a task.
Trivial and ludic, it’s just a device for getting at that same emotional and
interpersonal content. The tasks that prompt rich narrative variations for
Propp, Barthes, and Eco become here a skeletal means to a narrative
by-product.
The tasks are announced by a voice calling itself “Big Brother.”
It addresses the housemates over the PA, but is localized in the diary room
where it speaks to one or a handful of them in a notably impersonal man-
ner. In this small enclosure the camera and so the viewer occupy the invisi-
ble position of “Big Brother”: the housemate speaks then to us, confiding
feelings or opinions or hearing instructions that emanate with deliberate
anonymity from where we are. We seem to be, or could be, this figure; in
any event, the show’s title, from Orwell’s “Big Brother is watching you,”
intimates that the viewer really is this person. And it’s s/he who dictates
and disseminates the narrative-inducing tasks, the situations and actions
to come. The housemates tell us what they think about the goings-on in the
house, which are in turn driven by:
(3) The artificial stimulus of “conflict.” From the start the “cast” appears
to have been chosen (and, one suspects, encouraged) to clash and bounce
off each other: some housemates are unbearably annoying, others are
acutely intolerant, incompatible extremes are hurled together; confined
and juxtaposed, they cannot but make narrative out of their conflicts and
turmoil. Again narrative theory lurks behind this: for years American
screenwriting gurus have asserted dogmatically (and exaggeratedly) that
S/he determines who will be left in the house next week, and therefore
what sorts of interpersonal behaviors, what kinds of events and dramas
and scenes and dialogues will be played out then. True, the person removed
leaves genuinely, not just narratologically; but then the whole show is
predicated on the fuzziness of the line separating reality from story; and
calling up or texting to cast a vote, while an interference in someone’s life,
works primarily here to shape and direct the future development of a TV
program itself discreetly cloaked in the devices of fiction.
The audience’s authorship can then be described as productive, creative;
privileged; ungainsayable, absolute. The viewer here is the supreme autho-
rial figure, the eviction the supreme narrative act. And the ostensible
unformedness of the program, supposedly just a bunch of real people in
a house, chatting and stuff, only strengthens the viewer’s sense of his or
her determination of narrative material, whether justified (the evictions)
or illusory (the tasks). Moreover, in its collectivity and anonymity, this
is stress-free authorship: it might be fun, it’s easy and unburdened with
responsibility, and yet it’s really a text-making act.
At the end of each series, the viewers, by definition, have chosen their
favorite housemate. In Britain, creditably, this has enabled a TV audience
to display a tolerance in advance of its elites by, for instance, plumping for
a transsexual. Structurally and formally Big Brother is fascinating and rich,
and socially it has done much good. In detail, though, stretched over the
one thousand six hundred hours of any series, the show is all but unendur-
able: this is again the difference between grammar and rhetoric, textuality
and content, which inflected the chapter on Web 2.0, and for precisely
the same reason. The message board or blog, translated into television,
would be Big Brother. And yet the show’s dullness or mindlessness is inevi-
table, not a willful shortcoming as its vast array of hostile critics seem to
imagine. If a show is to document the transformation of reality into narra-
tive it’s going to need a large dose of the former, and on a twenty-four-hour
basis other people’s reality often is dull. I once watched, for two or three
minutes, someone lying on his bed regarded by a static camera using some
kind of night light to penetrate the darkness; occasionally he opened his
eyes and stared into space, occasionally he shut them. And this was the
highlights show. It felt like the death of television, to be honest. Anyone
who watches the round-the-clock coverage will have “enjoyed” seeing
yawning people slurping mugs of tea or disheveled people urinating, or
listening to conversations so desultory and vacuous you think you’re
going crazy. But you can’t see narrative come into being without seeing
the shape of things prior to that narrative; and although watching almost
their interlocutor. Much of the comedy plays off the gulf between the
thoughts in the leads’ heads (voice-over) and their lives in the world, joined
by this use of first-person point of view. Mark (Mitchell) is internally filled
with self-hate and inchoate rage, but externally repressed and “nice”; Jez
(Webb) imagines himself cool and liberated but socially is hopeless and
hapless. Posing as a student to seduce a girl and asked the name of his tutor, Mark panics; his head camera swings across a college notice board as his voice-over, impotently conscious of his desperate ludicrousness, thinks:
“Keyser Söze?”26 A recurring joke has Jez, much the sexually more success-
ful of the two, stop listening to women when they’ve been talking for a
few seconds: his rambling, priapic, and airheaded thoughts almost drown
out the girl’s voice while her pretty face gazes eagerly and unwittingly up at
him/us. The overfamiliarity of Peep Show’s objective situation permitted
a formally original exploration of subjective states in the world; while also
very funny (though an acquired taste), it evoked nuances of thought, feel-
ing, and character hardly before seen in TV and film.
edited it. “Surreal” wasn’t appropriate; Monty Python had been that, but
they kept it to themselves. Digimodernist TV, however, can leave the
viewer neither alone nor idle.
With the commercialization of the DVD box-set this trend shifts from
individual shows to a larger issue. For a reasonable price now you can
acquire a season of House or Curb Your Enthusiasm, DVDs occupying so
much less informational and domestic space than videos did; at home you
can take in several episodes in one night, and a five-month run in a week.
This is potentially far-reaching, the tip of a digital iceberg. Traditional com-
mercial TV relies on a sleight of hand: watching their favorite shows, view-
ers may imagine they are being sold programs by the channels that make
them, since this is what happens when they go to the movies or the
theater—they purchase blocks of entertainment. Commercial TV channels
in fact sell quantities of viewers to advertisers; the programs are “bait”
held out to attract the attention of large numbers of potential customers so
that companies can show them their products. Commercial channels
deliver these people to advertisers, for which service they are paid money
with which to concoct future bait.27 Consequently such TV is virtually free
to viewers; they aren’t “buying texts”; they’re being sold. This point isn’t
really arguable since it’s unquestionably how commercial TV programs are
funded. There’s a moral issue about the exploitation or deception of the
viewer, but this arrangement necessarily impoverishes the TV text too:
functioning as bait (rather than “dinner”), it glows with immediate luster
and fascination, and soon after seems thin and unsatisfying: no great art
ages so evisceratingly as “great” commercial TV.
DVD box-sets, however, redefine TV programs as the textual equivalent
of novels. When you buy them you eliminate the advertisers; you acquire
and peruse Seinfeld as you would the latest Martin Amis, at your own
speed, when and where you feel like it, in a direct and personal textual
experience. This would seem to have impacted on the shows themselves:
many box-set favorites have a density and richness not found in traditional
programs; they’re dinner. This development dates back, I think, to the
advent of cable reruns and videotape sales, which offered shows the possi-
bility of endlessly repeated screening of episodes. Today, a program like
The West Wing is ideally suited to sustained viewing across two or more
hours a night: it has the subtlety and complexity, and demands the consec-
utive close attention, of a literary novel. Box-sets may have helped stimu-
late then a general improvement in the quality of TV drama and comedy.
Radio
The first song played, and therefore the first video screened, by MTV in
1981 was the Buggles’ “Video Killed the Radio Star.” It bade farewell to the
era of radio, engulfed by new technology: “Pictures came and broke your
heart/Put the blame on VCR.” Today, the VCR has gone the way of the
penny-farthing and the abacus to the museums of design and technology,
and, at least as a culturally significant mode, the music video with it. MTV
screened almost exclusively videos throughout the 1980s, and the form
became the focus for much postmodernist analysis: Madonna’s clips were
the subject of cultural theory conferences and academic articles. But by the
mid-1990s the majority of MTV’s programming was nonmusical, and
the channel has increasingly been dominated by teen-oriented comedy
and reality TV, an abandonment of its initial ethos that damningly indicts
the lack of vitality of the contemporary music scene. As an art form the
music video now seems exhausted, devoid of creativity, interchangeable,
and dull. Moreover, the rise of the MP3 player, the iPod, and file-sharing
has above all reconstituted music as primarily an audio experience to which
the mind alone supplies images.
Radio, however, is thriving in the digimodernist era. Digital technology
has enhanced the experience of listening, producing a crystal clarity vastly
superior to the distortion and strangulation of yore; it has improved access
to programs by permitting their transmission via the Internet, TV, and cell
phones as well as traditional sets; podcasting, “listen again,” and technolo-
gies like the BBC’s iPlayer have brought shows to more listeners, allowing
both the creation of personalized archives and a greater listener control
over the circumstances of textual reception; and the number of stations has
increased exponentially. Output, listeners, convenience, quality, access,
In August 2007 it was estimated that one quarter of British adults accessed
radio digitally, with digital-only stations increasing their audience by
600 percent in four years and listeners to podcasts up by 50 percent in
twelve months.29 Another survey suggested that the availability of podcasts
was increasing overall radio listening as new programs were thereby sam-
pled and discovered.30 But not everyone is a winner: in 2008 it was reported
that “almost 80% of digital listening is to stations already available on ana-
logue,”31 and that many commercial stations were struggling to compete, in
part due to the “record numbers” tuning in to the BBC but also to “under-
investment in new content.”32 Indeed, while the unchanging cheapness of
radio content underlies the beneficial impact of the new technology on
transmission and reception, the nature of digimodernist radio textuality is
less certain.
Textually, comparatively little of radio’s output bears the hallmarks of
digimodernism. Perhaps not 10 percent of the programming of Britain’s
five national BBC stations can be described as even vaguely digimodernist
in function. A reliance on prerecorded music, orienting shows around
material created on a previous occasion by people outside of the station,
will make a text rigidify even if transmitted live; structurally the nature of
the extemporized chat of the DJ linking these musical pieces has hardly
altered in half a century. Prerecorded spoken material is equally traditional,
and frequently restricted to professional voices. However, it is in the area of
speech radio that a digimodernist textuality becomes possible. Among the
BBC’s national stations it’s the youngest, 5 Live, founded in 1994, which has
a virtual monopoly on the form (this doesn’t make it, of course, necessarily
they too feed, in their written modes, into the spoken debate as it unfolds.
Derbyshire’s role in all this is fascinating. She’s deliberately reactive: she
asks questions, unpacks the implications of contributors’ remarks, greets,
encourages, and thanks callers. While the majority of the latter pile their
thoughts successively each on top of the last, she can also link simultane-
ous callers to each other so they can interact more directly. She seems to
shift perspectives and views throughout in order to manage the discussion,
to tease out nuances, identify conflicts and problematics, and to keep the
debate concise and focused (cutting off when necessary). Self-effacing and
deceptively withdrawn, she’s skilful, tactful, and sympathetic; but she’s firm
and controlling too, maintaining an implicit insistence on the quality
of debate, its cogency, pertinence, and shrewdness, protecting discursive
standards: stupidity, ignorance, arrogance, and abuse get short shrift indeed.
Whatever the caller’s view, she retains her stance of minimal disagreement:
offering contradictory evidence and pinpointing argumentative flaws, her
role is not to state opinions about the issue but to enforce an ethics of
discussion.
In short, she never allows a finality to obtrude. Consequently the textual
onwardness and haphazardness that she oversees so adroitly are destined
for intellectual inconclusiveness too. (This is digimodernist radio’s version
of endlessness.) Listening to discussions is stimulating but also finally frus-
trating. This stems in part from the BBC’s position as a public service
broadcaster; on commercial radio, by contrast, debates such as these often
give the impression of having intellectually been concluded several decades
before they went on air. The trajectory is therefore horizontal, toward the
clarification of all argumentative points and angles, rather than a vertical
shift toward higher resolution, even “truth.” The success of the
format derives, I think, from the unambiguous establishment of stringent
rules of debate, ones that valorize reason, objectivity, coherence, skepti-
cism, respect for one’s interlocutor, and the primacy of evidence. Der-
byshire’s soft voice imposes all this: her power is almost absolute, like her
silence.
5 Live’s “Drive,” which airs Monday through Friday from 4 p.m. to 7 p.m.,
offers a different perspective. It’s intended for workers on their way home
(hence its name), and rounds up and explores the day’s major news stories,
interviewing participants or experts: for example, on September 23, 2008,
and following a keynote speech by the British prime minister, one of the
presenters (Peter Allen) quizzed a government minister; shortly after, and
in the wake of the conviction of a woman for murdering her disabled
daughter, his copresenter Rachel Burden interviewed the policeman who
had led the investigation. This was not, then, so much a phone-in show as
a phone-out one, which, in place of Derbyshire’s “democracy,” traced and
called up high-profile and implicated professionals for their opinions. But
the listener could still contribute material. Allen and Burden invited texts
and e-mails about the show, and read some of this instant feedback out. On
the whole such commentaries, marginal to the show’s purpose, brought
spice to it: they could be witty, original or piquant, or reveal unusual but
valid takes on the day’s events. In all, they were noticeably funnier, cleverer,
more individual and unexpected than anything the presenters said. Allen
and Burden’s style, in keeping with 5 Live as a whole, was warm, engaging,
unpretentious, good-humored, and acutely interested in the world. But
the material sent in from outside the station, though technically gratuitous,
enriched both the news content and the show’s interpretation of it. You
could just as easily have made the program without it, and before the inven-
tion of the SMS and e-mail you would have; you could easily listen to it
now without noticing these contributions, randomly scattered at roughly
twenty-minute intervals; like culinary spice, though almost weightless they
added something extra, making the textual dish more palatable, more dis-
tinctive and interesting.
At around 5:30 this day Burden referred to reports coming in of an explo-
sion in the center of the city of Bath, and immediately invited a second
source of listener-made material: eyewitness accounts via cell phone, e-mail,
or text. On a regular basis the show runs travel updates detailing accidents
and tailbacks affecting homebound commuters, which include informa-
tion sent in by stranded motorists about their own particular impasse;
other drivers are then advised to find an alternative route. In both cases
this show—and others like it on radio and TV—thereby encourages what
Web 2.0 calls “citizen journalism”: the provision from affected private indi-
viduals of hard news that can then be taken up and diffused by mass-media
broadcasters. Once again, this enriched the show: supposed itself to accrue
stories and find travel information, “Drive” used its listeners as unpaid and
ad hoc reporters, as uncontracted stringers, and so extended its editorial
grasp out from a claustrophobic studio across the country as a whole.
The uninvolved listener received an improved journalistic service; the
broadcaster’s product was, for free, significantly upgraded; and the ideal-
ism propelling contributors—the desire to bring truth or to help others—
was laudable. Once again, this textual digimodernism seemed to suggest,
even in banal circumstances, the workings of a healthily democratic spirit;
Web 2.0 without the populism, perhaps.
5 Live’s “6-0-6,” on the other hand, sets the phone-in in the consumerist
jungle of the leisure industry. Broadcast just after the conclusion of the
day’s professional soccer games (at a time indicated by its name), it permits
homeward-bound fans to vent their postmatch emotions, and contribu-
tions, whether euphoric, vindictive, or despairing, tend to be voluble, flu-
ent, and impassioned. The callers are all defined as fans of a particular club
and valorized as eyewitnesses of its match; they are heard by Alan Green, a
commentator who had of course been present at only one particular fix-
ture. In the course of the show Green speaks little, reacting and reflecting
only on the callers’ points. Although the fans of twenty clubs may ring him
on one evening, for each conversation he positions himself as a co-fan
wanting only the very best for the caller’s team. “6-0-6,” though popular,
has none of the qualities of Derbyshire’s show: it lacks continuity of subject
and the ethic of objectivity, accepting instead a narrowness of focus and a
tone of frenzied partisanship. Green will challenge what he sees as espe-
cially untenable views, but mostly he sympathizes with all misery and
empathizes with all joy. The show can in turn degenerate into paroxysms of
incoherent loathing, overdone anguish, or rebarbative gloating; insight is in
short supply, along with proportion. It resembles a kind of talking cure for
dangerously emotional soccer fans who can share their near-hysteria with
a friendly ear; rather than inviting callers to describe their childhood, Green
asks whether the second goal was offside.33 Many of the callers are extremely
articulate and analytical, but the show is vitiated by its embrace of the myth
of the myopic bias of the “true fan.” In this, it’s a product of Britain’s soccer
culture, which has never accepted the idea of the “football intellectual” with
his weird objectivity and cool-headedness. Gabriele Marcotti, also employed
by 5 Live, might have chaired a very different debate.
The conclusion is that much depends here on the style of the presenter
and the ethos s/he establishes. All three shows instantiate the digimodernist
traits of onwardness and haphazardness. They also exemplify digimodern-
ism’s transfers of creative terminology: the role of the “writer,” the origina-
tor of textual content, is partly taken, in varying ways, by the “listener”; that
of the “presenter” occasionally resembles the show’s producer, managing
others’ inventiveness; on “6-0-6” Green’s primary function is to listen.
They’re also evanescent texts: it’s the prerecorded music and comedy that
tend to get podcasted, ironically. Of course this is far from exhausting the
range of possibilities of a digimodernist radio, and each has its strengths and
limitations. Such forms have also become increasingly common elsewhere
in British radio and TV, leading to transmission of audience feedback
that can be shallow or crass; and although radio’s “liveness” ideally adapts
it in principle to digimodernism, this has no necessary implications for the
nature or quality of such a program.
A day spent listening to such shows might end with Richard Bacon’s
round-midnight phone-in for 5 Live, which, like Derbyshire’s twelve
hours earlier (or later), invites “listener” comment on the latest major
news stories. Bacon plays his outside contributors off against studio guests
(generally minor political or media figures) to create discussions that
mingle his own deliberately emphatic but scattershot views, his milder
voice reading out texts and e-mails, the divergent positions of his flesh-
and-blood panelists, the contributions of his transitory, disembodied and
“ordinary” callers, and the interactions of the latter linked up to one
another. The tone is voluble, irreverent, and entertaining, like an argument
in the pub; there are no democratic pretensions here, and no conclusions
either. There are many ways, it seems, to skin a cat successfully; this surface
has only been scratched.
Music
In the digimodernist era rock and rock-related pop music are exhausted
musical forms. The time of their creativity and cultural achievement is
decisively over. In this they now resemble jazz, which continues to be
recorded and performed, bought and appreciated, but with no expectation
that any significant new development in its artistic history will ever again
occur. Yet rock and pop, though moribund, still impose the general aes-
thetic criteria by which we understand and value the contemporary arts:
film and TV in particular (also the novel) are in thrall to the ideologies
previously laid down by an art form that is today played out. This leaves us
historically stranded: our cultural king reigns over us dead and unburied.
To argue this, though, is to enter all sorts of murky waters. The era of
rock as an interesting and vibrant form, 1956 to (say) 1997, was more or less
that of postmodernism; and yet its vigorous espousal of authenticity,
passion, spontaneity, and self-expression was, on the face of it, embarrass-
ing and inimical to a cultural-dominant favoring the waning of affect,
depthlessness, the decentered self, irony, and pastiche. Lyotard does not
(to the best of my knowledge) mention rock, and Baudrillard traverses
America without noticing it (he prefers movies).34 Jameson reduces it in an
early article to an item in a list of postmodern examples: “and also punk
and new-wave rock with such groups as the Clash, the [sic] Talking Heads
and the Gang of Four.”35 In Postmodernism a similar sentence includes
his sole remark on the subject (the film Something Wild gets nine pages):
“and also punk and new wave rock (the Beatles and the Stones now
standing as the high-modernist moment of that more recent and rapidly
evolving tradition).”36 This is badly misinformed, as well as uselessly
brief. Theoretically orphaned, postmodernist critics got very excited about
sampling (more characteristic of hip-hop) and video (ads for songs, akin to
movies). Simon Frith noted in 1988 that “[i]n the relentless speculation on
mass culture that defines postmodernism, rock remains the least treated
cultural form.”37 Rock has little or no academic status: you can quote
Godard in an article in Modern Fiction Studies but not the Stones, critiqu-
ing television (theorized by Bourdieu, Derrida) is more credible than rock
(French TV is socially, if not culturally, important; French rock is neither),
and Christopher Ricks is generally perceived to be on vacation when
studying Dylan. Consumer-oriented rock writing tends, in this academic
void, to be historically unreliable, culturally philistine, temporally narrow,
aesthetically tendentious, and only tangentially interested in the actual
music (privileging legends and hype instead). It’s permeated by what Frith
has criticized as the “common sense of rock,” the belief “that its meaning
is known thoughtlessly: to understand rock is to feel it.”38 This ambient
intellectual nullity reproduces rock’s own irrationalist ideology, its emphatic
valorization of the intuitive over the studied: the Scott Fitzgerald reader
who doesn’t get it (Dylan), the uncool teachers who taught me (the Beatles),
throw your schoolbook on the fire (Bowie), school’s out forever (Cooper),
we need no education (Pink Floyd) so leave this academic factory (Franz
Ferdinand). Rock is suffused by a rejection of the values of education—
veracity, context, judiciousness, theory, perspective, knowledge—they’re
uncool, unrock.
There are other reasons why rock was intellectually unfashionable (too
male, too white, too unFrench . . .) but the principal legacy is a gulf between
artistic achievement and critical/academic evaluation. The best of rock
(Blonde On Blonde, Revolver, The Velvet Underground and Nico, Forever
Changes, Astral Weeks, Exile on Main Street, Horses, Marquee Moon, the Sex
Pistols’ four classic singles, etc.) is a towering and lasting cultural triumph;
at least as great as anything of its time in any other medium; hugely influ-
ential on every other art form from film (Scorsese, Coppola, Tarantino) to
classical music (Glass), the novel (Rushdie, Amis) to television (too many
examples to cite); deeply and richly meaningful to tens of millions of people;
and probably the greatest songs ever written in English, and conceivably in
any language (I’m in no position to adjudicate this, but would welcome very
warmly the song that’s better than “A Day in the Life,” “Marquee Moon,” or
“Madame George”). Rock also became, in a way that videogames can only
envy, an art form in its own right. But now it’s over.
Four versions of rock. One, as an ethos, an aesthetic, rock is: dynamic,
abrasive, dramatic, immediate; communicative, emotional, exciting; vary-
ing in mood from exultant to terrified, reassuring to threatening, but always
strong, intense, committed; apocalyptic, anxious, disaffected, alienated;
thoughtful, open, curious, accessible; urban, contemporary, hip, cool; eman-
cipatory, libertarian, skeptical; sensual, sexual, hedonistic; druggy, vision-
ary; perhaps not performatively complex but lyrically rich; white, male,
young, English-speaking. Not all of these are essential or sufficient quali-
ties; there are canonical rock texts lacking most (though not all) of them.
But they define rock as a cultural hegemonic: they are the aesthetic traits
desperately sought by every film producer, TV controller, and publisher
of fiction.
Two, as an afterlife of historical Romanticism rock: emphasizes the
individual and personal experience against the demands of an oppressive
society; valorizes freedom, self-expression, spontaneity, a return to
nature, political revolution, and social nonconformism; plays with anti-
Enlightenment, mysticism, drugs, and sexual unorthodoxy; implodes into
the occult, violence, madness; and fetishizes the figure of the unloved,
intense, suffering artist-hero burning bright and dying young. The Doors
cited Blake, Suede quoted Byron. After Rubber Soul the Beatles juxtaposed
a cult of the child with a journey into hallucinogen-fueled “visions”; after
“Satisfaction” the Stones explored the noble savagery of the sexually and
violently primal. This is rock as social meaning: as authenticity and coun-
terculture idealism, and the danger it represented to society.
Rock’s post-Romanticism severs it from pop and rock ’n’ roll’s “romance,”
but it is rarely sweetly Romantic in tone. Version three: as a lyrical/musical
form of late modernism, rock: conveys the sound of the city (discordant,
mechanical, cacophonic, overpopulated), the imagery of the urban (the
street), and the feel of modernity (dislocation, loneliness, terror, despair);
fetishizes speed and the machine; is hypnotized by images of war and dic-
tatorship, and haunted by Eliot’s apocalyptic nightmares; valorizes experi-
ment, can be obscure or considered obscene and censored; and draws
water at the wells of symbolist and high-modernist poetry (Rimbaud, Eliot
again) and avant-garde music (Stockhausen, Cage). This is rock as artistic
achievement: the burden of its claim to cultural significance resides here.
Modernism was brought to rock by Bob Dylan alone,39 between the writing
of “Mr Tambourine Man” in February 1964 and the release on August 30,
1965, of Highway 61 Revisited.
Like a four-line poem or the story of a thousand words, the individual song
can gleam like a jewel or knock you out, but it is too slight to become art of
the highest order. There’s not enough substance there; and acclaimed songwriters from non-English-speaking countries like Serge Gainsbourg have
accordingly insisted that song is a minor art. Rock, however, invented for
itself its own cultural unit, a new signifying form: the album, a coherent
suite of songs, forty or fifty minutes long, with the textual range, complex-
ity, richness, and variation that mark enduring artworks. Great albums had
appeared before 1965, such as Robert Johnson’s King of the Delta Blues
Singers (1961) or James Brown’s Live at the Apollo (1963), but they were
adventitious: by serendipity they just happened to contain an awful lot of
terrific songs. There were also immortal jazz albums, like Miles Davis’s Kind
of Blue (1959), but jazz never embraced the album as an expressive form
the way rock did. The songs on a rock art-album belong only there: they are
distinct but integral parts of a greater whole, they contribute to something
beyond themselves, they are linked thematically as well as sonically, flow
into one another and so extend and enrich each other. This coherence or
unity arises organically, though the “concept album” attempted, usually ham-
fistedly, to impose one artificially. The first such art-album was Highway 61
becomes the work of art designed for reproducibility.”41 The most extreme
example of the form may be the Stones’ Exile on Main Street, which contains
no one extraordinary song, but where the flows of meaning and emotion
across the ensemble generated by repeated replaying produce a sense of
wholeness, intensity, and beauty virtually second to none. This totality can-
not easily be described, but to connoisseurs it is unmistakable and unique.
This interwoven, slow-burning, and integral form, though distinctive to
rock, bears some family resemblance to the poetry recueil such as Lyrical
Ballads, Les Fleurs du Mal, or Swinburne’s Poems and Ballads First Series.
Though too numerous to yield the album’s overall shape, such poems gain
from being read as parts of a whole. If rock is to survive as an art form it
must be through the album, since decontextualized songs weigh too little
on the cultural memory. Creatively, however, the art-album is dead.
Rock ran itself into the ground under its own hypercombustible
steam, but it was helped on its way from the mid-1980s by the spread of
the compact disc. This was the first formal impact of digital technology on
rock. Producing one long, undifferentiated raft of songs, the CD made it
impossible to shape an album’s sequence. Because the CD permitted up to
seventy-five minutes of continuous music where an LP side had been restricted
to about twenty-five, albums conceived as CDs became amorphous, unwieldy, and
interminable quantities of often mediocre material. The CD didn’t “kill
rock,” which was showing signs of reaching the end of its natural life sev-
eral years before the format’s commercialization. Moreover, its influence
took a while to percolate through to artists reared on the art-album. Oasis’s
Definitely Maybe (1994) is shaped, but their Be Here Now (1997) is an inter-
minable raft. Radiohead’s OK Computer (1997), the last great art-album
ever made, is as beautifully formed as anything in rock.
The reason why OK Computer can be awarded such an accolade with
such confidence is not that it’s musically unsurpassable but that its form
is now obsolete. Debilitated by the CD, the art-album has been killed off
by the iPod and the MP3 player, the computerization of access to music
and therefore of the conception of the music text. Just as Led Zeppelin
abandoned singles, more and more artists have spoken of dropping the
album and releasing only tracks to be downloaded from the Internet. The
shift away from the commercial, social, and instant single in favor of the
album enabled experimentation, risk-taking, music as cultural achievement;
both single and album have now disappeared into the past. The track, so private, individualized,
fragmentary, and momentary, is made possible by and in turn embodies
the death of popular culture, in both its terms.
The era of high rock, of rock’s dynamism, creativity, and originality, its
social relevance and cultural potency, seems to me to lie between spring
1965 and the early 1980s. The start of this period is easy to date. In late
1964, the Beatles were still monosyllabic children’s entertainers and the
Stones a rhythm ’n’ blues tribute act; the release of Dylan’s Bringing It All
Back Home in March 1965 was shattering in its impact, a gauntlet thrown
down in terms of lyrical and musical quality that led directly to such water-
shed singles that year as the Byrds’ “Mr Tambourine Man” (April), the Stones’
“Satisfaction” (June), and the Beatles’ “Help!” (August). The end is
less easy to pinpoint, though a line was drawn in Britain by the suicide of
Ian Curtis in 1980. It’s habitual for people to think that music was most
exciting during their youth, but for me in 1985–88 it seemed, on the con-
trary, that rock had never been less vibrant. This was the climate that
prompted Simon Frith to state in 1988, prematurely in my view, “I am now
quite sure that the rock era is over.”42 The mood was encapsulated by Live
Aid: retrospective, nostalgic, creatively lifeless, the spirit of the greatest-hits
package rather than anything new. British music was dominated by
the Smiths, who drew lyrically and visually on late 1950s/early 1960s’
northern English “kitchen sink” drama in defiant rejection of contempo-
rary yuppie triumphalism, and musically on 1960s’ West Coast jangling
guitars in explicit repudiation of modern music’s synthesizers, samplers,
and beats. Their singer assumed a pose of adolescent torturedness though
by now in his mid-twenties. Their American equivalent as a successful sig-
nifier of “integrity,” “authenticity,” and “real music” was Bruce Springsteen,
whose songs, as Frith put it in 1987, emanated a “whiff of nostalgia” and
whose stage persona was that of a “37-year-old teenager.”43 Such 1980s’
rock was multiply lost to its memories.
In the decade from 1989 rock actually revived considerably, and under-
went what in retrospect can be seen as its aftershock or afterlife, an interval
between its achievement and its exhaustion in which a string of interesting
artists appeared without finally that much great music being produced.
After a handful of ground-breaking songs the Stone Roses and Happy
Mondays imploded; Kurt Cobain, one of the most gifted figures in all rock,
signaled, like Curtis, the cul-de-sac of Nirvana’s aesthetic (after two
wonderful albums) in the most terrible of ways. Britpop, after Suede’s
opening starburst, grew ever more reliant on nostalgia and retrospection,
though it substituted for the solemnity and sentimentality of the Smiths
and Springsteen an ethos of postmodern irony, pastiche, allusion, and wit.
Blur’s mid-1990s’ songs evoked the “naughty” sex and monoracial dreari-
ness of the England of the Carry On films and Benny Hill; their tales of
former rebelliousness. At the same time rock songs no longer break new
ground musically, creative originality has given way to conservatism; there’s
no artistic rebellion either. And rock’s social disruptiveness is gone too. The
Rolling Stones in 1967, the Sex Pistols in 1977, or Boy George in 1983
sparked a storm of fear and excitement that is inconceivable today, I think
because the terms of that tempest—drug-taking, sexual libertarianism,
media manipulation, and gender ambiguity—no longer stir. The personal
travails of Britney Spears, Pete Doherty, and Amy Winehouse evoke only a
mixture of prurient curiosity and parental concern; the forms of revolt
once embodied by rock are today experienced as exploitative entertain-
ment or human interest.
The White Stripes, Elephant (2003); Amy Winehouse, Back to Black (2006)
The Libertines, The Libertines (2004); The Streets, A Grand Don’t Come for Free (2004);
Arctic Monkeys, Whatever People Say I Am, That’s What I’m Not (2006)
It isn’t that rock has gotten “bad”: if Lou Reed or the Clash had been born
in 1980 they wouldn’t have made anything better than the Libertines’ “Can’t
Stand Me Now” or Arctic Monkeys’ “I Bet You Look Good on the Dance-
floor.” It’s necessarily different, not contingently less good. As an example
of this, when alive and dynamic rock was healthy and strong enough to
stretch, absorb, merge with and reshape other musical genres and aesthet-
ics: jazz, musique concrète, reggae, Spanish guitar, the Western art music
(classical) tradition, country. Arctic Monkeys, the Libertines, and others
play rock, instead, as if resuscitating an ancient form, as you’d sing madri-
gals today, locked inside its inflexible and dead limits, codes, and conven-
tions. Such music cannot evolve or innovate, only repeat itself; it’s narrow
(and sometimes enjoyable). It can be atmospheric, sexy, fun, groovy (so
can Kylie)—but all drama, rebellion, and meaning are gone. A pose of rock
cool is assumed, but the sound is constricted, inchoate, shallow, drained of
content; the songs echo vast quantities of earlier great songs but without
depth or significance, the way French groups used to ape the gestures and
tone of the best Anglo-American music. Such artists then are mostly
valued ideologically: for their personal-aesthetic-historical self-positioning,
their adherence to rock’s traditional versions, their reproduction of the
type of personality and the type of music rock comprised in its heyday
(especially the punk template). The reductio ad absurdum of rating artists
and songs according to their fidelity to personal and sociocultural criteria
laid down thirty years earlier—the ad hominem fallacy of criticism, instead
of paying attention to what’s actually there—is the overestimation of the
excruciating and risible Streets.
Radiohead, In Rainbows (2007); Gorillaz, Demon Days (2005); Tori Amos; Manu Chao
Such music is instantly familiar and comfortable: it’s quotation and pas-
tiche without irony or double coding, without postmodernism, the hewing
from a coalface whose rich seams have long since been extracted, leaving
only faint traces among common earth. Some artists, recognizing this, have
sought to move on, for if rock is exhausted then song doesn’t have to be and
one of my aims here is precisely to separate the two by historicizing and
theorizing the former a little. It could be time for artists outside rock’s
personal templates and musical heritage, like Manu Chao and Tori Amos,
though both are perhaps past their best. The work of Damon Albarn since
2003, especially with the fictitious (not “virtual”) group Gorillaz, rejects
the tradition of the rock artist and his sociohistorical context. Although
Gorillaz’ late electronica is not musically original, its meaning, such as it is,
is freed from the straitjacket of certain inherited aesthetic assumptions.
Radiohead have probed most rigorously the territory beyond rock, begin-
ning with OK Computer’s titular embrace of digitization. Its hushed,
sinuous, intimate wash of sound and sense of insubstantiality, unease, and
alienation perfectly evoke the contemporary computerized workplace; it
largely leaves behind the harsh guitar drive with which rock traditionally
suggested the mechanical world of the factory and the street. Kid A (2000)
and Amnesiac (2001) took refuge in 1950s–70s’ avant-gardism: academic,
inward, and muted, they’re dominated by a mood of self-eclipse, of non-
communication and nonbeing. Opaque, churning, indecipherable, and
rebarbative, Hail to the Thief (2003) found the band even further advanced
into a crisis of expression which was that of the musical form that spawned
them. Like its predecessors, it’s defiantly minority in its rewards, culturally
deliberately marginal; rather than the glowing, warm satisfactions of OK
Computer, the album is aloof, hard, and indeterminate. Four years of silence
tellingly followed, ended by the marvelous In Rainbows, initially available
solely as a digital download in a controversial experiment in the ongoing
restructuration of music’s economics. The songs again placed themselves
in a cultural ghetto, so personal and private it seemed extraordinary they
had ever found an audience. Radiohead’s response to the disappearance of
the rock context has been unwavering though varyingly expressed: an ever-
greater artistic withdrawal from the world around into the delicacy, beauty,
and weirdness of their own creations.46
Some may wonder why I have given such space to a form I consider
played out. The reason is partly, as I’ve said, that the rock ethos remains
culturally hegemonic in other fields: on the Internet, on TV, and in cinema,
the notion that what’s valuable and interesting, creative and contemporary
is what appeals to “young people,” what’s “edgy,” explosive, antieducational,
and so on—all this derives from the authority of a cultural model. If this
model were itself creative and interesting, this would be unexceptional;
instead, the aesthetic future is to be rewritten. The same applies to the recent
past. Now that postmodernism is (as good as) over, one of the tasks of digi-
modernism will be to revisit and reassess the artistic period it appropriated
free from the bias of its assumptions. For postmodernism, as I’ve indicated,
rock was aesthetically unacceptable, even inimical. Yet this led to a dis-
torted view of actual cultural achievement. Digimodernism is the chance
to reevaluate and understand anew the art of the past fifty years; it’s the
Literature
were not yet born). S/he would almost certainly have felt, instead, that
the effect of the Renaissance on the written word inhered mostly in the
invention of the printing press, in Gutenberg’s Bible, rather than any new
literary texts s/he could as yet proffer. In the same way, digimodernism in
literature has first and foremost been apparent in revolutions in publishing:
in the physical production of and access to literature. On one side there is
Amazon and the processes of online book selling, which have vastly
increased the number of texts kept in the commercial domain: whereas
stores are constrained in the books they can stock by their floor space,
Amazon’s huge warehouses are really limited only by the reach of customer
demand.48 Similarly, digital technology has made it possible for books to be
printed on demand, again immensely increasing the numbers potentially
in circulation; books live longer in the public domain than before (or have
longer deaths). Beside these developments comes the digitization of the
book itself, making existing works available on the Web for free through,
for instance, Google Books, though this is a problematic and contentious
area. And then there’s the putative e-book (you can read about Auster’s
typewriter on Kindle). It’s enough, though, to indicate that computerization
has not left literature alone.
More broadly, digimodernism inflects contemporary literature through
the increased socialization of reading. TV shows like Oprah Winfrey’s and
Richard & Judy regularly recommend novels to huge audiences, who buy
and devour them concurrently in an outbreak of mass identical literary
reception not seen since Dickens. Book clubs construct reading as a social
activity; writers’ tours (signings, readings, festivals) bring texts to vast
numbers of people simultaneously. To oversimplify, traditional reading was
solitary, driven by the “canon,” and seen as an ineffable contact with a tran-
scendent author; postmodernist reading was constructed as politicized,
skeptical, and a near-impossible engagement with a slippery text, the author
nowhere. Reading structured by such broadly digimodernist practices is
distinct again: social and commercialized, it favors the “fan” and makes a
cult of the author while assuming that a text’s meaning emerges from its
social use. An example of what happens to literature under such a con-
sumption appears in Robin Swicord’s film adaptation of Karen Joy Fowler’s
2004 novel The Jane Austen Book Club (2007), where a group of well-off
American women meet regularly to discuss the six novels and the vicissi-
tudes of their own relationships. Their personalities and predicaments
intertwine with those they’re reading about in a manner which, fifteen
years earlier, would have been postmodernist. Instead, the women’s attitude
to Austen is both humanist and pragmatic: having ironically posited her
work as “escapism” from the stresses of life, in practice they settle into read-
ing it as an escape-route, a key leading them out of the prison house of their
present anguish into the sun-kissed world of emotional fulfillment. They
read her, then, as a treasure-trove of wisdom about the eternal verities of
the human heart; she becomes a guide to contemporary satisfaction, a form
of amusing holy writ subsumed into the hard detail of her readers’ lives.
The digimodernism of such tendencies lies in the robust sense that
textual meaning is bound up with its use (Wittgenstein distantly presiding
here, displacing Saussure), and that texts exist largely as the focus of collec-
tive practices. By this Mansfield Park can be treated as though a blog or
a Wikipedia entry even though it can’t be “written.” Other aspects of the
literary landscape have also been forcefully transformed, notably the place
of the critic, though this applies equally to cinema and music. On social
networking sites and blogs, on Amazon and the IMDb (Internet Movie
Database), cultural criticism is turned out en masse by untrained
amateurs. I’ve talked about this already in Chapter 4, but it’s interesting
here that the discourse they employ is modeled on professional published
criticism: an “objective” summation of content, background information
about the author’s previous works, evaluations of the text’s success or
failure, and recommendation or not for its target audience. Subtle, stupid,
well informed, or ignorant, such online reviews pay tribute to published
professionals’ jargon and methods even as they make them redundant.
A disinterested comparison of what the amateurs and professionals offer
suggests that these days remarkably little of the latter is as shrewd, fair,
knowledgeable, and stimulating as much (but definitely not all) of the
former. The quality of newspaper and magazine arts reviewing has plum-
meted in places since the 1970s–80s, while academic criticism, which for a
while abandoned any sense of positive literary value except on an ad
hominem basis (down with DWEMs, up with their opposites), diverged entirely
from the realm of reading for pleasure. So, on one side there’s the socializa-
tion of reading (we all read the same book simultaneously); on the other,
the socialization of criticism (every woman and man an online critic).
What of the author? But this is too much like modernism: pockets of
“radical” and restless young men collaborating on coffee-stained manifes-
tos in an attempt, essentially, to model literary production on the behavior
of revolutionary groupuscles. It’s certainly the case, instead, that the shifts
outlined in Chapter 5 have been instantiated in the world of the novel: His
Dark Materials and the Harry Potter sequence reflect the infantilism, mytho-
logy, earnestness, and endlessness that characterize early digimodernism.
Mark Haddon’s The Curious Incident of the Dog in the Night-Time and Dan
Brown’s The Da Vinci Code (both 2003) will be discussed in Chapter 7. But
even the best of these are only the signs of changes that are underway, and
I wouldn’t want to call any of them “digimodernist literature” because I
can’t see yet what the category might mean, although such a label exists
in film and TV. There is, then, a disparate bag of transformations here—
Amazon, Google Books, Kindle, Oprah, tours, book clubs, critic-blogs,
mythology, endlessness—which all break with postmodernism and relate
to its successor. But not only does digimodernism await its Shakespeare
or Woolf, it also awaits its Barth, Barthelme, and Queneau, both its giants
and its recognizable type.
Whatever digimodernist literature may turn out to be, my sense is that
it won’t be hypertext or electronic interactive literature. This can be defined
as a form of fiction accessible only via a computer and consisting of
discrete quantities of text that the reader moves around by clicking on
links; the reader chooses his or her pathway among these textual units,
which have been previously created by the writer. A division of labor
transpires: the writer invents the material, the reader sequences it the
way s/he wants. The coming of hypertext as the future of literature was
announced as long ago as 1992 when Robert Coover published an inflam-
matory article called “The End of Books,”49 and was vigorously espoused by
people such as George Landow, the man responsible for the portmanteau
word “wreader,” which would have made the antilexicon had it ever caught
on. But in reality the future dominated by hypertext is already behind us.
There are perhaps three senses, largely unavoidable, in which hypertext
is now the literary master who will never rule. The first is that, outside of
niches such as pockets of academia or the electronic publisher Eastgate, no
one is interested, to be brutally honest. The hypertext world is even smaller
and more incestuous than the much-maligned contemporary poetry
world (who is hypertext’s Heaney, its Walcott?); most voracious readers of
fiction would struggle to name one hypertext title or author; there’s no
nonprofessional demand for it. There are professors who write it, like
Stuart Moulthrop, Michael Joyce, Shelley Jackson, and Mark Amerika, and
professors who write about them, and the rest of the world lets them do so
in peace. Nor is hypertext likely now to gain in popularity. The past fifteen
years have seen a wild enthusiasm for the Internet and electronic text
swamp the developed world; it’s been an age of exponential growth in the
number of Web sites and digital applications. And yet the level of general
interest in hypertext is, if anything, even lower than it was in the
mid-1990s; how will it survive when cultural fashions change, as they inev-
itably must, and people lose their fascination for computerized text?
The second reason is that, functionally, hypertext is already old hat.
Web 2.0 enables everybody to write and publish their own material before
a worldwide audience; to be allowed to sequence someone else’s stuff no
longer looks quite as astounding as it once did. Compared to Web 2.0, just
clicking your way around what someone else has provided is a minor thrill;
it’s like asking somebody accustomed to writing and directing movies to
take a job as an editor. A third possible reason, though more subjective, is
that, in my experience, hypertext fictions are somewhat joyless affairs. The
citation is overused, but they really do resemble Samuel Johnson’s walking
dog, about whom we marvel that the act is done at all and overlook how
badly it is done . . . except that, in the age of Web 2.0, we no longer do the
marveling bit. Engulfed by such a fiction, the reader’s response is likely
to be disorientation, followed by frustration and finally a sense of futility;
as The Unfortunates demonstrated, the right to sequence literary materials
is worthless unless doing so permits a distinctive aesthetic or human
experience. For its practitioners, hypertext offers the reader liberation,
empowerment, and so on, but this is both bogus and old-fashioned. It is
spurious to promise emancipation where there was never oppression
(however much you may hate a writer, you don’t feel stifled by her/his
monopoly on textual sequencing). Moreover, this rhetoric is heavily reliant
on post-structuralist assumptions about the writer and the reader, which
have lost their once unassailable position in a broadly “post-theory” intel-
lectual landscape. Hypertext advocates fetishize postmodernist textual
qualities (discontinuity, the aleatory, etc.) that have gone out of cultural
fashion. Consequently, the citizens of the hypertext community are, on the
whole, an ageing, nostalgic group already.
If you want to pursue this further you could google Geoff Ryman’s 253,
uploaded to the Web in 1996 and published in book form in 1998.50 You
click around the personal details of 253 people traveling across London by
tube one January morning: calling up any individual you get their name,
age, appearance, and a description of their thoughts, all condensed into
exactly 253 words. They share some characteristics and themes, there are
links among certain of them, and you can jump around both data and peo-
ple; there is no preset textual sequence beyond what you make up yourself.
But two problems with the content emerge. First, each descriptive capsule
must be striking and, to some degree, cryptic, to cope with its physical
isolation from the rest of the text: it has to induce in the reader a desire to
click on and can do so only by providing a strong textual experience—it’s
too easy for the reader to stop when s/he knows that there is no chance of
a sense emerging of textual completeness or its positive corollaries (coher-
ence, harmony, order, purpose). The need for something remarkable on
every screen produces in 253 a ludicrous melodramatic overwriting: every
single passenger seems to be embroiled in race rioting, underworld crimi-
nality, sudden bereavement, drug-trafficking, risky adultery, and so on—it’s
unreal and overblown. After all, if one page didn’t deliver a kick you
mightn’t go on to another: there’s no cumulative pleasure possible here, no
slow-burn or control of shifts in pace, focus, or mood. Similarly, Ryman
“ends” his narrative with a cataclysmic crash instantly slaughtering half his
passengers, an event with all the verisimilitude and proportion of a cartoon
character dropping a thousand-kilo weight on top of the train. But then
again, how do you “conclude” a text with no forward dynamic of its own?
Ryman himself seems to have regarded 253 as an interesting experiment
not to be repeated. But all these failings don’t prevent it from figuring in a
recent “canon” of hypertext.51
Prescient and pioneering twenty years ago, hypertext was hexed by con-
tent and pushed, so it would seem, into the footnotes of literature by the
onward speed of change. As with ergodic literature, the theoretical ances-
tor of the digimodernist text, the context is much broader now: things are
textual, cultural, social, historical.
***
Such a catalog of struggles and problematics within established media
provokes a last, unanswerable question. Is digimodernism finally another
name for the death of the text? Most of the crisis-ridden forms discussed
here provide a closed, finished text: you buy, own, and engage a film or TV
program or song as a total artistic entity, as a text-object. This objectivity
endures over time, is authored, reproduced; it has become, in its material
already-createdness, the definition of a text. Videogames and radio shows
are markedly weaker in this regard; they are less culturally prestigious too;
but socially they are thriving. The onward, haphazard, evanescent digi-
modernist “text” may seem finally indistinguishable from the textless flux
of life. Is digimodernism the condition of after-the-text?
The sections of this chapter, sequenced by the intensity of the digimod-
ernism manifested in each medium, divide into weak and strong forms
of delimited text. The latter are afflicted by declining audiences (film, TV),
fossilized canons (film, music), academic uncertainty (music, literature),
and the disappearance or undermining of their commodity form (film
downloads and pirating, the crisis of TV advertising, the death of the CD).
Kevin Kelly has dreamed of all books being digitized into “a single liquid
fabric of interconnected words and ideas” to be unraveled, re-formed,
and recomposed freely by anyone for any reason.52 There are signs across
the media landscape of such a development. Yet, unquestionably, this
would resemble a mass of unauthored and unlimited textualized matter.
A text, though, must have boundaries and a history, in the same way that
the distinction between “life” and “a life” ascribes to the latter physical
circumscription and biography. With the reception and commodification
of the individual text already imploding, will there be room under digitiza-
tion for a text?
There are two possible answers to this. The first sounds a futuristic note
of doomy jeremiad: early digimodernism will perhaps be remembered as
the last time one could speak of a new, emergent form of textuality, before
the singular object-text was drowned forever by the rising tide of undiffer-
entiated text; the 2000s naively saluted a textual revolution before it revealed
itself, in its totalitarianism, as the genocide of the text.
The second entails turning away from texts and the consideration instead
of history, or of contemporaneity placed in the long term. The survival
of the object-text depends on the continued valorization of competence,
skillfulness, and know-how, because these are, ipso facto, excluding forces:
they delimit, isolate, close. These are social and moral issues, and so we
come to the final chapter.
7. Toward a Digimodernist Society?
for life” as it is today, unable to conceive of the differentness that the future
implacably brings.
However, what marks us out most distinctively in time is our aggression
toward the future; a society forever congratulating itself on its newfound
tolerance for all its current members treats those to come with implicit
loathing. Imperial Western prosperity was built on stealing from the spa-
tially other, the kidnapping of Africans, the annexation of foreigners’
resources; prevented from doing this, contemporary wealth is founded on
stealing from the temporally other, the future. Personal, household, and
national debt amassed simply to live day to day spends the future’s money
in advance and effectively bankrupts it; the wild propagandizing for loans,
mortgages, store and credit cards, and the marginalization of savings, relate
the present to the future as a burglar to his/her victim. Similarly, far more
of the physical earth is consumed every day than is restored to it, leading
inexorably to an empty and filthy future planet that will trace its brutal
inhospitality to our temporal rapacity. This consumerist assault, this mug-
ging of the future’s cash and resources, produces, like a native uprising
against cruel invaders, both banking catastrophes and natural disasters.
It is mirrored in our attitude to children, the future made flesh: the wide-
spread depiction of infant torture, abuse, and murder in TV and film, often
only to trigger a plot or convey vague gravitas; the popularity of “true
accounts” of past childhood suffering (A Child Called “It,” etc.); the high
level of casually incurred divorce and separation of couples with children
under eighteen, inflicting deep and self-evident trauma; and the sense
of an explosion in pedophilia caused by inadequate criminal sentencing
(a shameful residue of patriarchy). Digimodernist societies steal the future,
and torment its citizens. Three sections follow: they respectively address
the destiny of self, thought, and action in such a possible time.
We live in the age of autism. In 1978 the rate of autism was estimated at
4 in every 10,000 people; by 2008 this figure had risen to 1 in 100, a 25-fold
increase.4 For Simon Baron-Cohen, research in the early 1990s was ham-
strung by “the now incorrect notion that autism is quite rare (today we rec-
ognize it to be very common).”5 Does this mean that contemporary society
is starting to be flooded by something that hardly existed before? This
seems unlikely, to say the least (though there are those who assert it); the
more probable explanation is that specialists now recognize a syndrome
and make informed diagnoses on the basis of previously unavailable
research. But at the same time, it is naïve to imagine that changes in clinical
diagnosis are wholly unlinked to changes in objective sociohistorical
conditions. The consulting room does not exist in a vacuum; scientific
research occurs in, without being engulfed by, history. Sociohistorical
factors don’t determine scientific results, of course, but occasionally, and
perhaps especially in the study of people or of the unobservable, they
imperceptibly strengthen the plausibility of an interpretation. It’s not then
absurd to hypothesize that we inhabit a society uniquely adapted to the
frequent ascription of autism and the identification of autistic traits. This
is not to reduce autism to a “social construct,” or to claim, offensively,
that nobody “really” suffers from it. I don’t doubt that many people, undi-
agnosed, endure a form of quotidian misery that could be alleviated by
suitable treatment, and that medical attention and research funding need
to be directed urgently toward both; my comments here should be seen
in this light. Yet it’s also reasonable to assume that our present under-
standing of the condition is imperfect; Asperger syndrome was recognized
by the World Health Organization only as recently as 1994; and part of this
incompleteness may well lie in the condition’s sociohistorical and cultural
identity.
In such a society a doctor like Andrew Wakefield, seeking to gain pub-
licity for his (now discredited) view that the MMR jab was harmful to
children, would describe it as the trigger for autism; a generation earlier, it
would have been schizophrenia. In such a society too a novel like Mark
Haddon’s The Curious Incident of the Dog in the Night-Time (2003), which
explores the interior mental landscape of Christopher Boone, a fifteen-
year-old boy with Asperger syndrome, could win the Whitbread Book
of the Year prize and go on to sell over ten million copies worldwide. Not
entirely, I think, due to its literary merits: its plot is thin and its depiction of
peripheral characters schematic, but it is at times very moving, and from
a clinical point of view utterly fascinating as it uncovers the thought-
processes, the psychic blocks and piercing astuteness of the autistic mind—
this was, quite simply, a novel whose psychopathological insights the world
had suddenly grown desperate for.
Cultural representations of autism have become so commonplace that,
as Ian Hacking wrote in May 2006, “everyone has got to know about
autism.”6 Despite this, two of the most telling of such portrayals date back
as far as the 1980s, one of which, Ridley Scott’s film Blade Runner (1982),
makes no mention of what was then an obscure syndrome. The story of a
policeman’s quest to eliminate highly sophisticated renegade replicants,
Blade Runner is predicated on the “Voigt-Kampff Empathy Test” which
economic and social ideology of the day with autism will assuredly increase
dramatically the incidence of diagnosis of the condition, though at the
price of emptying the term of all meaning.
While the new clinical conceptualization emphasizes that “we all have
some autistic traits—just like we all have some height,”11 two social develop-
ments or trends over the past fifteen years or so do seem to have encouraged
the greater production of autistic or pseudoautistic traits in the population.
(By pseudoautistic, I mean characteristics deriving locally from particular
experiences and lacking the neurological basis of clinical autism; it will be
understood that I use “autism” in this section as a linguistically convenient
umbrella term for the autistic spectrum; much of what I say is most appli-
cable to Asperger syndrome.)
The ascription of autism is encouraged by the emergence of new tech-
nologies, especially computers, the Internet, and videogames, which enable
individuals to engage with “worlds” or reality-systems without socially
interacting; this systemic desocialization is subsequently extended to the
“real world” in the form of a diminished capacity to relate to or to “read”
other people, a preference for solitude and a loss of empathy; such technol-
ogies also do little to stimulate language acquisition. Derivative gadgets
like the iPod hold their users in similarly isolated private worlds (cell
phones too, though with less obvious causes). This is largely the basis for
the claim that autism is integral to digimodernism, and plays much the
same role within it as the neurosis did for modernism and schizophrenia
for postmodernism: a focus for contemporary clinical study, a modish
social buzz word, and a dimension, in varying forms, of much of the most
emblematic culture of the time.
This is reinforced by the growing and widespread tendency to portray
the sociopathic as normative in popular TV drama, cinema, and music
(cop shows, action movies, rap, grunge, etc.). Drawing on the existential or
rock ‘n’ roll outlaw hero of the 1960s, such texts valorize acting according
to personal impulses with no reference to other people, the collectivity,
social rules, or conventions (e.g., Oasis’s injunctions, “don’t let anybody
get in your way . . . don’t ever stand aside/don’t ever be denied”—cf. the
video for the Verve’s “Bitter Sweet Symphony”); they may fetishize an
absence of empathy and a solitary, cruel fantasy potency; or they may
romanticize a state of helplessly total and self-harming isolation from
society. Such texts both normalize and glamorize a condition of nonsocial-
ization and noncommunication, which can be seen, up to a point, as
pseudoautistic.
Of all these tendencies that produce autism as the excluded or failed other
of the contemporary hegemonic, this last seems to me decisive. In the case
of Asperger syndrome all the others may be only its tributaries. A strand
of popular discourse about autism, found especially on the Internet,
ascribes the condition to just about every dead intellectual high achiever
you can think of: Newton, Kant, Darwin, Nietzsche, Wittgenstein, Einstein,
and so on. It’s inevitable that a society which hates autonomous intellectual
sophistication as ours does will wind up labeling its heroes and heroines
mentally ill; however, it should cause clinicians some unease that almost
all of the teenagers admitted in any year to the world’s top four or five
universities would score highly on the Autism Spectrum Quotient test.
Furthermore, the same strand of popular discourse also tends, in a move
that is integral to our collective sense of mental health, to assign the condi-
tion to any dead single-minded and self-denying person who achieved
anything of meaningful and lasting importance: Michelangelo, Mozart,
Beethoven, Jefferson, Van Gogh, Joyce. From this a clear (if disavowed)
definition of the normal appears: normalcy is frivolity, superficiality, igno-
rance, gregariousness, a short attention span, self-gratification, disengage-
ment, empty tolerance, social competence, and so on. Indeed, normalcy
is the condition of consumerism; everything inimical to consumerism is
reduced to mental sickness.
Autists cannot be seen as “rebels” against or “martyrs” of contemporary
society because they have not chosen their profoundly difficult relation-
ship to it. As for Haddon’s Christopher, whose life is unenviably and intrac-
tably hard in many ways, it is also, despite or because of this, richer, fuller,
and better than that of his mother, a half-illiterate and bad-tempered egoist
who abandons her child for another man. (Christopher responds to stress
by “groaning,” his mother by yelling at people.) It wouldn’t be so strange to
imagine a system of social values that treats Christopher’s problems with
more sympathy and acceptance than his mother’s. A society whose values
produce autism so perfectly as its excluded other does not deserve to sur-
vive; nor will it.
later airy assertions on the subject lack intellectual substance, for a histori-
cal critique of the status of metanarratives that broadly holds water21 we
return to a strict reading of The Postmodern Condition. This is in turn
entirely compatible with the thesis that one of the most striking social
characteristics of our time is the prevalence and power of grand narratives
in their most poisonous form.
Repeatedly in the digimodernist era an image of religion emerges,
especially in its public role, its cultural and social and political functions.
It comes across as pure toxicity. It would stand: for violence, murder,
destruction; for ignorance, superstition, irrationalism; for oppression,
hatred, cruelty; against education; against freedom; against democracy.
This is not an antireligious remark (I’m not an atheist, for many reasons) but a social,
educational, and political comment. Religions also show themselves in
private places, often with compassion and humanity. The nature of the
universe is another matter. But this is the contemporary drift of their
public face. They enter the contemporary social arena with apparently
only one thought in mind: to stamp out intellectual freedom; to obliterate
equality; to overthrow democracy; to extirpate the arts; to slaughter the
innocent; to brand and scourge the differently minded; to annihilate rea-
son; to short-circuit knowledge; to destroy thought. This is their one idea:
the death of the idea.
I am referring to events such as the Jyllands-Posten Muhammad car-
toons controversy (2005– ), the Mecca girls’ school fire (2002), the murder
of Theo van Gogh (2004), the Sudanese teddy bear blasphemy case (2007),
or Benedict XVI’s lecture on Islam (2006). I mean the bombings of Bali
(2002), Madrid (2004), London (2005), Mumbai (2006), and more. I mean
the suppression of Gurpreet Kaur Bhatti’s play Behzti (2004) and Sherry
Jones’s novel The Jewel of Medina (2008), the disruption of Jerry Springer—
The Opera (2005– ) and the censorship of film versions of His Dark Materi-
als (2006– ). I mean the spread of intelligent design and faith schools and
the “veil” and “honor” killings and the desecration of graves. I mean the
cruel punishments of Iran, Texas, and Saudi Arabia. I mean the messianic
Christian fundamentalism lying behind Bush and Blair’s invasion of Iraq
and the wanton slaughter that followed it. And more, and more. I mean
religion as killing, silencing, ignorance, and fear. Religion doesn’t have to
be like this; the territory has been poisoned.
Emerging from the iniquitous tale told in Genesis 22 where a psychotic
god sends a loathsome daddy to murder his own little boy for their
pleasures, 9/11 embodies this poison in our machine. Terrorism as pure
“home”), or how to eat, drink, and cook, or what clothes to buy and places
to visit. Newspapers sell advertising posing as journalism (travel supple-
ments, etc.); magazines merge adverts with “consumer guides”; DVDs
make us buy a film’s advertising and marketing (passed off as “extras”).
And so on.
Consumerism destroys political action (it becomes senseless to “choose”
or “engage” without spending) and revamps social idealism in its own
image: messianic, it believes the planet will be saved by a kinder and smarter
consumption (recycling, cutting out wastage, etc.). This is a lie: political
decisions will be necessary. Consumerism, as its name suggests, eats up the
planet and excretes back into it, at a rate well in excess of its capacity for
absorption. Consequently the only thing consumerism can contribute to
the environmental cause is to be less. But every fanaticism proffers its own
processes of “salvation.”
Megalomaniacal and messianic, consumerism is also deranged. It thinks
it is valid to eat 8,000 calories on Christmas Day or drink from bucket-
sized coffee cups in bars. Our social problems are those of consumerism,
of too much or the wrong kind of consumption, as Salem’s were those of
religion: obesity, anorexia, malnutrition, food panics, drug addiction, debt,
gambling, binge drinking. Banks frenziedly offer credit to spend in shops
and discourage savings, short-circuiting their own revenues and crippling
the world economy. Google, our preeminent structurant of information,
ranks pages on their popularity among other pages, on their cybersales in
the cybermarket, rather than on their content (let alone their quality); the
Internet implodes toward its lowest common denominator of sex and
trashy entertainment and becomes the habitat, like consumerism itself, of
the unsocialized.
Consumerism robs you of your home (turned into a get-rich-quick
investment opportunity), your community (become a no man’s land of fear
separating the “property” of strangers), your city (controlled by near-empty
but owned cars), your country (run by and for consumerism), and your self
(curdled into a lazy, passive, endlessly unsatisfied and demanding mouth).
And yet no single act of consumption causes any of this. The road to hell
begins with the smallest step. Equally there’s no inkling of an economic
system not based on personal consumption that would be better. I want
here to isolate consumerism primarily as a mode of thought, a moral code,
an ethos, a buried framework of understanding; to challenge it in its grand-
narrative imperialism, its demented ambitions to direct all; to roll it back,
to push it back. We need a new mental master. The social will follow in due
course.
equations can manipulate any logical system, a child who can send an
e-mail can only send an e-mail). The fear of authoritarianism generates
a terror of filling children with autonomous cleverness or knowledge
that will stay knowledge. The effects of such schooling are: (a) low
intellectual self-esteem, since that comes from demonstrable compe-
tence no longer permitted, (b) alienation, since children know nothing
of the world around them, (c) low skills, since competence stripped
of autonomy allows only mindless repetition, and (d) a pervasive and
understandable contempt for the pointlessness of school.
2. politically as deprofessionalization: (a) the self-presentation of politicians
and/or parties as fit for office not because of shown or putative compe-
tence (being good at governing), but because of their “morality.” Driven
by recent failure or by a platform serving only the economic self-interest
of a tiny minority, this claim to morality is at best political (promises of
probity, justice), at worst private (love of spouse, country, God; hatred
of others’ uses of their genitalia). It’s reflected by apolitical voters indif-
ferent to the tedious detail of administration and obsessed instead with
strangers’ sexual or spiritual preferences. Realizing Iraq had no weap-
ons of mass destruction and that he had sent Britons to slaughter and
die for nothing, Blair proclaimed his sincerity (he had “acted in good
faith”), and was in turn attacked by critics labeling him “Bliar”; nobody pointed
to his evident ineptitude. George W. Bush, corrupt and deranged in for-
eign policy, catastrophic and venal in economic policy, was elected twice
on his claims to private morality, finally of importance only to himself,
his loved ones, and his God; (b) the quotidian management, control,
and direction, not of the country (a political concern) but of its media
(a PR focus), transforming politicians into their own image consultants
(Cameron); and (c) the framing of policy not from a postideological
and technocratic tinkering with the machine, described since the late
1950s and recognizably postmodern (Fukuyama’s “end of history,” etc.),
but from throwback economic and religious prejudice, contemptuous
of “evidence” or even logic, and the triumph of myth.
3. in every area of human life, every time and society, the plausibility of
judgment and evaluation has varied according to knowledge and/or
training. The opinion of coq au vin of a child who has eaten only ham-
burgers means little; an architect and a doctor are not equally persuasive
on the structural solidity of your house. In cultural matters digimod-
ernist society has jettisoned this rule; its orthodoxy states: nobody’s
cultural judgment or evaluation is given greater or lesser weight by his/
her cultural knowledge and/or training. The assessment of Transformers
with the unproven or the half-true or the vacuous, books on such themes
sell in truckloads to a population dazedly inept, it would seem, in the
fundamental practices of life. To go about his/her humdrum existence
the digimodernist individual needs the constant support of hundreds
of TV shows and thousands of periodical or book pages; and yet this
profound existential feebleness and wish to be told how to “improve,” to
live “better,” meets only a mass of inadequate, shallow, and unscientific
pseudoauthority. This is the spiral of the death of competence.
5. the spurious cultural glamorization of illiteracy, innumeracy, and inar-
ticulacy. In Oxford, home of a world-class university and world-famous
dictionary, almost every public sign is scarred with spelling, punctua-
tion, and other linguistic errors (English itself gets blamed, like architects
accusing bricks when their buildings collapse). Schools and exams dis-
regard correctness; consecutive thought is defeated by the bullet point.
Debt is casually amassed at interest rates of 30 percent with no sense
of how this impoverishes. Public conversations are so voided by vague-
ness and vocabulary famine they convey only surges of will and taste.
There have always been illiterate, innumerate, and inarticulate people,
and perhaps not more now than ever; nor are such human failings to be
condemned out of context. The digimodernist era, inheriting postmod-
ernism’s critique of power/knowledge, its desire to “dismantle thought”
and “expose reason,” is distinguished instead by a bogus valorization of
these failings by electronic-digital culture. (They’re “cool,” “democratic,”
“antielitist,” “young.”) It’s a lie: socioeconomically, now and here as
always, power, wealth, and independence accrue to the highly literate,
numerate, and articulate. Only the naïve are fooled: you might almost
suspect a conspiracy (the competent few own and run society, while the
inept masses are told they’re “cool” by the “culture” the competent con-
trol). Democratic government, if self-serving and short-termist, reduces
all education to what will boost economic growth, since the latter
reelects politicians; consumerism finds innumeracy especially useful,
of course. In such a society knowledge and cleverness are inherently
radical, subversive. Foucault’s antihumanism, which bound knowledge
to power, was only the reverse of the humanist coin: ignorance, illiter-
acy, and innumeracy guarantee poverty, oppression, and exclusion
(or slavery, as Frederick Douglass saw). Anticompetence is death.
6. infantilized adults produce children and teenagers mired forever in
preschool behavior patterns: unable to listen or concentrate, seeking con-
stant entertainment, unwilling to do chores, verbally incontinent and
incoherent, acting and dressing in public as at home. The undermining of
Conclusion: Endless
Notes

Introduction
1. Gore Verbinski (dir.), Pirates of the Caribbean: The Curse of the Black Pearl (Walt
Disney Pictures, 2003).
2. Fredric Jameson, Postmodernism, or, The Cultural Logic of Late Capitalism. London:
Verso, 1991, p. 6.
3. Ibid.
4. Alan Kirby, “The Death of Postmodernism and Beyond” in Philosophy Now,
November/December 2006 http://www.philosophynow.org/issue58/58kirby.htm Retrieved
January 23, 2009.
37. Terry Eagleton, After Theory. London: Allen Lane, 2003, pp. 1–2. Further references
will appear in the text.
38. Terry Eagleton, “Capitalism, Modernism and Postmodernism” in Against the Grain:
Essays 1975–1985. London: Verso, 1986, p. 131.
39. David Alderson, Terry Eagleton. Basingstoke: Palgrave Macmillan, 2004, pp. 3–4.
40. Terry Eagleton, Literary Theory: An Introduction, 2nd edition. Oxford: Blackwell,
1996, p. 75; Ludwig Wittgenstein, Philosophical Investigations, trans. G. E. M. Anscombe.
Oxford: Blackwell, 1968, p. 20. Emphasis in original.
41. Terry Eagleton, Saints and Scholars. London: Verso, 1987, p. 23; Terry Eagleton, The
Gatekeeper. London: Allen Lane, 2001, pp. 62–68; Terry Eagleton, The Meaning of Life.
Oxford: Oxford University Press, 2007, p. 8. The former, with an unbearably delicious irony,
is a postmodern tale, which borrows, shall we say, its premise from Tom Stoppard’s play
Travesties (1974).
42. Gilbert Adair, The Postmodernist Always Rings Twice: Reflections on Culture in the
90s. London: Fourth Estate, 1992, pp. 19, 15. Emphasis in original.
43. G. P. Baker and P. M. S. Hacker, Wittgenstein: Meaning and Understanding. Oxford:
Blackwell, 1983, p. 279. Emphasis in original.
44. Alderson, Terry Eagleton, p. 61.
45. Jacques Derrida, Limited Inc, trans. Samuel Weber. Evanston, IL: Northwestern
University Press, 1988, pp. 144–46. Emphasis in original.
46. Raoul Eshelman, “Performatism in the Movies (1997–2003)” in Anthropoetics, vol. 8,
no. 2 (Fall 2002/Winter 2003), http://www.anthropoetics.ucla.edu/ap0802/movies.htm
Retrieved October 12, 2008.
47. Raoul Eshelman, “Performatism, or the End of Postmodernism” in Anthropoetics,
vol. 6, no. 2 (Fall 2000/Winter 2001), http://www.anthropoetics.ucla.edu/ap0602/perform.
htm Retrieved April 26, 2008.
48. Eshelman, “Performatism in the Movies.”
49. Quotes from Eshelman, “Performatism in the Movies,” and Raoul Eshelman,
“After Postmodernism: Performatism in Literature” in Anthropoetics, vol. 11, no. 2 (Fall
2005/Winter 2006), http://www.anthropoetics.ucla.edu/ap1102/perform05.htm Retrieved
October 11, 2008.
50. Eshelman, “After Postmodernism.”
51. Lipovetsky, Hypermodern Times, p. 30. Further references will appear in the text.
52. Gilles Lipovetsky and Jean Serroy, L’Ecran Global: Culture-médias et cinéma à l’âge
hypermoderne. Paris: Seuil, 2007.
53. Paul Crowther, Philosophy after Postmodernism: Civilized Values and the Scope of
Knowledge. London: Routledge, 2003, p. 2. Emphasis in original.
54. Ibid.
55. José López and Garry Potter (eds.), After Postmodernism: An Introduction to Critical
Realism. London: Athlone Press, 2001, p. 4.
56. Ibid.
57. Charles Jencks, Critical Modernism: Where is Post-Modernism Going? Chichester:
Wiley-Academy, 2007, p. 9.
58. I think on the whole their critical and/or commercial failure can be asserted, but
not with absolute assurance. Metacritic, a Web site that aggregates published reviews, gives
average rankings (out of 100) of 73, 63, and 48 for the three films respectively; it also gives
average “user” (customer) scores (out of 10) of 8.1, 6.4, and 5.3, all of which suggests a dra-
matic falling off (retrieved October 31, 2008). The first section of William Irwin (ed.), More
Matrix and Philosophy: Revolutions and Reloaded Decoded (Peru, IL: Open Court, 2005) is
called “The Sequels: Suck-Fest or Success?” with a first chapter by Lou Marinoff subtitled
“Why the Sequels Failed.” As Wikipedia notes, “the quality of the sequels is still a matter of
debate” (“The Matrix [series],” retrieved October 31, 2008). The tendency is unmistakable,
but not conclusive.
59. Both float whimsically romantic hypotheses about the inspiration for the national
playwright’s breakthrough (transtextuality, pastiche).
60. Hollywood Ending, unreleased in any form in Britain, was called “old, tired and
given-up-on” by the Washington Post, while Melinda and Melinda was described as “worn
and familiar” by Village Voice.
61. Famously described by Tibor Fischer in the London Daily Telegraph as “like your
favorite uncle being caught in a school playground, masturbating.”
62. Brian McHale, Postmodernist Fiction. London: Methuen, 1987; Ian Gregson, Post-
modern Literature. London: Arnold, 2004.
63. Linda Hutcheon, The Politics of Postmodernism, 2nd edition. London: Routledge,
2002, p. 165.
64. Ibid.
65. Ibid., p. 166.
66. Andrew Hoberek, “Introduction: After Postmodernism” in Twentieth-Century
Literature, vol. 53, no. 3 (Fall 2007), p. 233.
67. Wittgenstein, Philosophical Investigations, p. 48. Emphasis removed.
68. Steven Connor (ed.), The Cambridge Companion to Postmodernism. Cambridge:
Cambridge University Press, 2004, p. 1.
69. Ibid.
9. Raman Selden, Practicing Theory and Reading Literature. Harlow: Pearson, 1989,
pp. 113, 120.
10. Ibid., p. 125. Emphasis added.
11. Roland Barthes, “From Work to Text” in Image Music Text, trans. Stephen Heath.
London: Flamingo, 1984, pp. 157, 159, 159, 160, 161, 164 (translation modified). Emphases
in original.
12. Ibid., p. 157. Emphasis in original.
13. Ibid., pp. 162–63.
14. J. Hillis Miller, “Performativity as Performance/Performativity as Speech Act:
Derrida’s Special Theory of Performativity” in South Atlantic Quarterly, vol. 106, no. 2
(Spring 2007), p. 220.
15. Barthes, “From Work to Text,” p. 164.
16. Roland Barthes, “The Death of the Author” in Image Music Text, trans. Stephen
Heath. London: Flamingo, 1984, pp. 148, 145.
17. Michel Foucault, “What is an Author?” trans. Josué V. Harari, in Textual Strategies:
Perspectives in Post-Structuralist Criticism, ed. Josué V. Harari. Ithaca, NY: Cornell Univer-
sity Press, 1979, pp. 141–60. For a more recent view, see Seán Burke, The Death and Return
of the Author, 2nd edition. Edinburgh: Edinburgh University Press, 1998.
18. John Fowles, The French Lieutenant’s Woman. London: Vintage, 1996, pp. 97, 388, 389.
19. Martin Amis, Money. London: Penguin, 1985, p. 247; Martin Amis, The Information.
London: Flamingo, 1995, p. 300.
20. Christopher Lasch, The Culture of Narcissism: American Life in an Age of Diminish-
ing Expectations. London: Abacus, 1980, pp. 125, 127, 129.
21. Ibid., p. 150.
22. Allan Bloom, The Closing of the American Mind: How Higher Education has Failed
Democracy and Impoverished the Souls of Today’s Students. Harmondsworth: Penguin, 1988,
p. 62.
23. For certain languages, like Arabic and Japanese, other directions are clearly involved.
3. A Prehistory of Digimodernism
1. Michael Kirby’s Happenings (London: Sidgwick and Jackson, 1965) is an anthology
of statements, scripts, and production notes for happenings orchestrated by Allan Kaprow
(including his seminal 1959 piece “18 Happenings in 6 Parts”), Jim Dine, Claes Oldenburg,
and others. Based on first-hand textual experiences inaccessible to me, it’s recommended as
a replacement of sorts for the section “missing” from this chapter.
2. Laurence O’Toole, Pornocopia: Porn, Sex, Technology and Desire, 2nd edition. Lon-
don: Serpent’s Tail, 1999, p. vii. Both well researched and naïve, O’Toole’s book reflects the
immense difficulties intelligent discussion of pornography faces, caused, to a great extent,
by the form’s digimodernist shattering of conventional meta-textual categories.
3. Michael Allen, Contemporary US Cinema. Harlow: Pearson, 2003, p. 162.
4. Ceefax first went live in the mid-1970s, but take up was initially slow.
5. It was ever thus: mid-1980s’ research already found that heavy users tended to
be male. See Bradley S. Greenberg and Carolyn A. Lin, Patterns of Teletext Use in the
UK. London: John Libbey, 1988, pp. 12, 47.
6. See, for instance, the Wikipedia entry on international teletext: http://en.wikipedia.
org/wiki/Teletext Retrieved January 26, 2008.
7. The title conflates those of the TV game-show What’s My Line (originally CBS
1950–67, BBC 1951–63) and Brian Clark’s TV play Whose Life is It Anyway? (ITV 1972,
remade by MGM as a feature film in 1981).
8. Slattery was almost destroyed by the show (among other pressures), while Sessions,
McShane, Proops, Stiles, and Lawrence never broke out of the cultural margins. One or two
performers did, like Stephen Fry and Paul Merton, but through other shows.
9. Sean Bidder, Pump up the Volume: A History of House. London: Channel 4 Books,
2001.
10. B. S. Johnson, The Unfortunates. London: Panther Books in association with Secker
& Warburg, 1969, inside left of box.
11. Ibid., “First,” p. 4.
12. Ibid., pp. 1, 3.
13. Jonathan Coe, Like a Fiery Elephant: The Story of B. S. Johnson. London: Picador,
2004, pp. 230, 269.
14. For a good essay on related issues see Kaye Mitchell, “The Unfortunates: Hypertext,
Linearity and the Act of Reading” in Re-Reading B. S. Johnson, ed. Philip Tew and Glyn
White. Basingstoke: Palgrave Macmillan, 2007, pp. 51–64.
15. Coe, Like a Fiery Elephant, pp. 269–70.
16. John Fowles, The Collector. London: Vintage, 2004, p. 162; Martin Amis, Dead
Babies. London: Vintage, 2004, p. 21. (Amis echoes, deliberately or not, a phrase in Oscar
Wilde’s The Picture of Dorian Gray, chap. 4.)
17. Coe, Like a Fiery Elephant, p. 352.
18. Ibid., pp. 230–31.
19. Ibid., p. 343.
20. Julio Cortázar, Hopscotch, trans. Gregory Rabassa. London: Harvill Press, 1967,
unnumbered page.
21. Edward Packard, The Cave of Time. London: W. H. Allen, 1980, p. 1.
22. For an “adult” version of this narrative form see Kim Newman, Life’s Lottery.
London: Simon & Schuster, 1999.
23. Gill Davies, Staging a Pantomime. London: A&C Black, 1995, p. 90. Ellipses in
original.
24. Ibid., p. 92.
25. Tina Bicât, Pantomime. Marlborough: Crowood Press, 2004, p. 25.
26. Ibid.
27. Ibid.
28. Laurence Sterne, The Life and Opinions of Tristram Shandy, Gentleman. Oxford: Clarendon Press, 1983, p. 376.
29. Other derivations have also been proposed.
25. Andrew Keen, The Cult of the Amateur: How Today’s Internet is Killing Our Culture
and Assaulting Our Economy. London: Nicholas Brealey, 2007, p. 5.
26. Jonathan Zittrain, The Future of the Internet and How to Stop It. London: Allen Lane,
2008, p. 70. Emphasis removed.
27. See David Randall and Victoria Richards, “Facebook Can Ruin Your Life. And so Can MySpace, Bebo . . .” in London Independent on Sunday, February 10, 2008, www.independent.co.uk/life-style/gadgets-and-tech/.../facebook-can-ruin-your-life-and-so-can-myspace-bebo-780521.html Retrieved September 21, 2008; Anon., “Web Revellers Wreck Family Home,” BBC News Web site, April 12, 2007, http://news.bbc.co.uk/1/hi/england/wear/6549267.stm Retrieved September 1, 2008.
5. Digimodernist Aesthetics
1. Algernon Charles Swinburne, “Hymn to Proserpine.”
2. A good overview of these positions is to be found in Robert W. Witkin, Adorno on
Popular Culture. London: Routledge, 2003.
3. As a contrast to the above, sample Robert Miklitsch, Roll Over Adorno: Critical
Theory, Popular Culture, Audiovisual Media. New York: SUNY Press, 2006.
4. My source is Wikipedia; the tendency is so overwhelming that absolute precision
in the data becomes irrelevant.
5. The gay market for this kind of music is a secondary, derived one.
6. Countries that impose tighter controls on the possession of credit cards, like France,
Spain, and Italy, show a correspondingly weaker form of this shift in scheduling.
7. Friends (NBC), “The One with Rachel’s Assistant,” season 7 episode 4, first transmitted October 26, 2000.
8. http://en.wikipedia.org/wiki/Magi Retrieved August 30, 2008.
9. Catherine Constable, “Postmodernism and Film” in Connor (ed.), The Cambridge
Companion to Postmodernism, pp. 53–59.
10. Suman Gupta, Re-Reading Harry Potter. Basingstoke: Palgrave Macmillan, 2003, p. 9.
11. Jean Baudrillard, “History: A Retro Scenario” in Simulacra and Simulation, trans.
Sheila Faria Glaser. Ann Arbor, MI: University of Michigan Press, 1994, p. 43. Emphasis
added.
12. Cindy Sherman, The Complete Untitled Film Stills. New York: Museum of Modern
Art, 2003, p. 9.
13. Thomas Pynchon, The Crying of Lot 49. London: Vintage, 2000, pp. 117–18.
14. Baudrillard posed this question in 1981 about the subjects of the proto-reality TV
show An American Family, first aired in 1973 (“The Precession of Simulacra” in Simulacra
and Simulation, p. 28). Such shows used to appear once a decade; now they launch every
week. When in 2008 Channel 4 screened a structural remake of the program that so exercised Baudrillard, a British TV critic noted presciently: “it won’t have the same impact . . .
Reality shows, for want of a better expression, are now the norm” (Alison Graham, “Déjà
View” in London Radio Times, September 13–19, 2008, p. 47).
6. Digimodernist Culture
1. Amis, The Information, pp. 435–36. Emphases in original.
2. Steven Connor, Postmodernist Culture: An Introduction to Theories of the
Contemporary. Oxford: Blackwell, 1989.
3. Connor (ed.), The Cambridge Companion to Postmodernism.
4. Jameson, Postmodernism, p. 299.
5. “Videogames” here encompass all software-based electronic games whatever platform they may be played on, and are synonymous with “computer games.” Academically the
definition is moot, but mine is closer to the popular sense of the word.
6. Nic Kelman, Video Game Art. New York: Assouline, 2005. Andy Clarke and Grethe
Mitchell (eds.), Videogames and Art (Bristol: Intellect, 2007) sets games among the broader
practices of pictorial art.
7. Some games, like SimCity, are more accurately “played with” than “played.”
8. Vladimir Nabokov, Lectures on Literature. London: Weidenfeld & Nicolson, 1980,
p. 251. Emphasis in original.
9. Ibid.
10. See Christiane Paul, Digital Art, rev. edition. London: Thames & Hudson, 2008.
11. Quoted in “Hoffman Hits Out over Modern Film,” BBC News Web site, January 25,
2005, http://news.bbc.co.uk/1/hi/entertainment/film/4206601.stm Retrieved September 1,
2008.
12. Quoted in Clifford Coonan, “Greenaway Announces the Death of Cinema—and Blames the Remote-Control Zapper” in London Independent, October 10, 2007, http://www.independent.co.uk/news/world/asia/greenaway-announces-the-death-of-cinema--and-blames-the-remotecontrol-zapper-394546.html Retrieved September 21, 2008. Punctuation amended.
13. Mark Cousins, The Story of Film. London: Pavilion Books, 2004, p. 5.
14. Even more striking is the French poll conducted in November 2008 by Les Cahiers
du Cinéma, which could not find one film made since 1963 to put in its twenty all-time
greatest movies. This excision of the more recent half of cinema history suggests a paralysis
of critical appreciation.
15. Also the reliance on circus performance by Fellini and others.
16. In W. (2008) Stone depicts Bush junior as a bemused nonentity.
17. Cousins, The Story of Film, p. 9.
18. Ibid., pp. 447, 458.
19. Ibid., p. 493.
20. Federico Fellini (dir.), 8½ (Cineriz, 1963). My translation.
21. An example is Jean-Pierre Jeunet’s overpraised and complacent Amélie (2001). Its
real title, Le Fabuleux Destin d’Amélie Poulain, with its internal rhyme and use of “foal”
(poulain) as a proper name, echoes Chicken Licken or Mr Magorium’s Wonder Emporium.
Heavily indebted to Tim Burton and Ally McBeal, its blend of digitization and children’s
story motifs could have been made in New York, though no American studio would have
dared bankroll its implied politics.
22. My comments on A Picture of Britain quote verbatim from Niki Strange’s paper
“ ‘The Days of Commissioning Programmes are over . . .’: The BBC’s ‘Bundled Project’ ” at the
Television Studies Goes Digital conference, London Metropolitan University, September 14,
2007. See also David Dimbleby, A Picture of Britain. London: Tate Publishing, 2005.
23. Jean Ritchie, Big Brother: The Official Unseen Story. London: Channel 4 Books, 2000,
p. 154.
24. Ibid., p. 73.
25. Ibid., p. 92.
26. Peep Show (Channel 4), season 2 episode 4 [10], “University,” first transmitted
December 3, 2004.
27. This isn’t original, but I forget who argued it first (candor is a virtue).
28. Terry Kirby, “Radio Enters a New Golden Age as Digital Use Takes Off” in London Independent, February 2, 2007, www.independent.co.uk/news/media/radio-enters-a-new-golden-age-as-digital-use-takes-off-434732.html Retrieved September 22, 2008.
29. Anon., “Quarter of Radio Listeners Make Switch to Digital” in London Independent, August 16, 2007, www.independent.co.uk/news/media/quarter-of-radio-listeners-make-switch-to-digital-461813.html Retrieved September 22, 2008.
30. Ben Dowell, “Podcasts Help Lift Live Radio Audiences” in London Guardian, July 2,
2008, www.guardian.co.uk/media/2008/jul/02/radio.rajars Retrieved September 22, 2008.
31. John Plunkett, “Digital Radio Attracts More Listeners” in London Guardian,
May 1, 2008, www.guardian.co.uk/media/2008/may/01/digitaltvradio.rajars Retrieved
September 22, 2008.
32. Owen Gibson, “Record Numbers Tune in to BBC” in London Guardian, May 2,
2008, www.guardian.co.uk/media/2008/may/02/bbc.radio Retrieved September 22, 2008.
33. Green’s style is interestingly discussed in Andrew Tolson, Media Talk: Spoken
Discourse on TV and Radio. Edinburgh: Edinburgh University Press, 2006, pp. 94–112.
34. Jean Baudrillard, America, trans. Chris Turner. London: Verso, 1988.
35. Fredric Jameson, “Postmodernism and Consumer Society” in The Anti-Aesthetic:
Essays on Postmodern Culture, ed. Hal Foster. Port Townsend, WA: Bay Press, 1983, p. 111.
36. Jameson, Postmodernism, p. 1.
37. Simon Frith (ed.), Facing the Music: Essays on Pop, Rock and Culture. London:
Mandarin, 1990, p. 5.
38. Ibid. Emphasis in original.
39. I mean musically. The personal influence of Joan Baez on Dylan’s transformation
into a late modernist was, I suspect, immense.
40. Ed Whitley has argued that The Beatles (1968) is a postmodern album because of its
heterogeneity, pastiche, plurality, bricolage, and fragmentation. But for me its diversity
stems from an attempt at collective and multifaceted self-portraiture, suggested by the
title. This unifies and totalizes the text. Postmodern elements appear on Highway 61 Revisited and elsewhere before 1972, but subjugated to other aesthetics (alloys, alloys). See
Ed Whitley, “The Postmodern White Album” in The Beatles, Popular Music and Society:
A Thousand Voices, ed. Ian Inglis. Basingstoke: Macmillan, 2000, pp. 105–25.
41. Walter Benjamin, “The Work of Art in the Age of Mechanical Reproduction” in
Illuminations, trans. Harry Zohn. New York: Schocken, 1969, p. 224.
42. Simon Frith, Music for Pleasure: Essays in the Sociology of Pop. Cambridge: Polity,
1988, p. 1.
43. Ibid., pp. 99, 96.
44. Dylan Jones, iPod, Therefore I Am. London: Weidenfeld & Nicolson, 2005,
pp. 152–56, 259–64.
45. John Harris, The Last Party: Britpop, Blair and the Demise of English Rock. London:
Fourth Estate, 2003, pp. 370, 371.
46. See Joseph Tate (ed.), The Music and Art of Radiohead. Aldershot: Ashgate,
2005.
47. Paul Auster and Sam Messer, The Story of My Typewriter. New York: Distributed Art
Publishers, 2002.
48. The business implications of this are explored in Chris Anderson, The Long Tail:
How Endless Choice is Creating Unlimited Demand. London: Random House, 2006.
49. Robert Coover, “The End of Books” in New York Times Book Review, June 21, 1992.
50. Geoff Ryman, 253: The Print Remix. London: Flamingo, 1998; http://www.ryman-novel.com Retrieved November 11, 2008.
51. Astrid Ensslin, Canonizing Hypertext: Explorations and Constructions. London:
Continuum, 2007, pp. 84–86.
52. Kevin Kelly, “Scan This Book!” in New York Times, May 14, 2006, www.nytimes.com/2006/05/14/magazine/14publishing.html?ex=1305259200&en=c07443d368771bb8&ei=5090 Retrieved October 10, 2008.
like this, the “extreme female brain” is unlikely ever to be glimpsed; as with much discourse
around autism, a richer sense of the sociocultural context is needed.
13. Murray, Representing Autism, pp. 139–65.
14. Simon Malpas, Jean-François Lyotard. Abingdon: Routledge, 2003, pp. 1, 123.
15. Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge, trans.
Geoff Bennington and Brian Massumi. Manchester: Manchester University Press, 1984,
p. xxiv. Emphasis in original.
16. D. C. R. A. Goonetilleke, Salman Rushdie. Basingstoke: Macmillan, 1998, p. 105.
17. Christopher Butler, Postmodernism: A Very Short Introduction. Oxford: Oxford
University Press, 2002, p. 13.
18. Jean-François Lyotard, La Condition Postmoderne: Rapport Sur le Savoir. Paris: Les
Editions de Minuit, 1979, p. 7.
19. Jean-François Lyotard, “Answer to the Question: What is the Postmodern?” in The
Postmodern Explained to Children: Correspondence 1982–85, trans. ed. Julian Pefanis and
Morgan Thomas. Sydney: Power Publications, 1992, pp. 24–25.
20. Lyotard, “Apostil on Narratives,” p. 29.
21. This is not to overlook detailed problems such as the book’s misuse of Wittgenstein
or the term “narrative.” I return to the question of paralogy later.
22. Michel Foucault, The Order of Things: An Archaeology of the Human Sciences, trans.
Alan Sheridan. London: Routledge, 2002, p. 422.
23. Consumerist mechanisms have also restructured English primary school education,
imposing stressful and damaging entry and exit tests on small children solely in order to
construct prospective parents as “consumers” selecting the best “product” in the school
marketplace.
24. Quoted in Francesca Steele, “Go Back to School . . . Starting Right Here” in London
Times, January 2, 2008, http://www.timesonline.co.uk/tol/life_and_style/education/
article3117906.ece Retrieved November 20, 2008.
25. Dolan Cummings, “Introduction” in Dolan Cummings, et al., Reality TV: How Real
is Real? London: Hodder & Stoughton, 2002, p. xiii.
26. These compendia of dictates are called self-help books, an ingratiating deception
that foreshadows their contents.
27. In Jack Kornfield (ed.), Teachings of the Buddha. Boston, MA: Shambhala, 1993,
p. 13.
WORKS CITED
Jennings, David. Net, Blogs and Rock ’n’ Roll. London: Nicholas Brealey,
2007.
Johnson, B. S. The Unfortunates. London: Panther Books in association
with Secker & Warburg, 1969.
Jones, Dylan. iPod, Therefore I Am. London: Weidenfeld & Nicolson, 2005.
Joughin, John J. and Malpas, Simon (eds.). The New Aestheticism.
Manchester: Manchester University Press, 2003.
Keen, Andrew. The Cult of the Amateur: How Today’s Internet is Killing Our
Culture and Assaulting Our Economy. London: Nicholas Brealey, 2007.
Kelly, Kevin. “Scan This Book!” in New York Times, May 14, 2006, www.nytimes.com/2006/05/14/magazine/14publishing.html?ex=1305259200&en=c07443d368771bb8&ei=5090 Retrieved October 10, 2008.
Keulks, Gavin. “W(h)ither Postmodernism: Late Amis” in Martin Amis:
Postmodernism and Beyond, ed. Gavin Keulks. Basingstoke: Palgrave
Macmillan, 2006, pp. 158–79.
Kirby, Terry. “Radio Enters a New Golden Age as Digital Use Takes Off” in London Independent, February 2, 2007, www.independent.co.uk/news/media/radio-enters-a-new-golden-age-as-digital-use-takes-off-434732.html Retrieved September 22, 2008.
Kornfield, Jack (ed.). Teachings of the Buddha. Boston, MA: Shambhala,
1993.
Lasch, Christopher. The Culture of Narcissism: American Life in an Age of
Diminishing Expectations. London: Abacus, 1980 [1979].
Lauffer, Daniel. “Asperger’s, Empathy and Blade Runner” in Journal of
Autism and Developmental Disorders, vol. 34, no. 5 (October 2004),
pp. 587–88.
Laurance, Jeremy. “Keith Joseph, the Father of Thatcherism, ‘was Autistic’ Claims Professor” in London Independent, July 12, 2006, http://www.independent.co.uk/life-style/health-and-wellbeing/health-news/keith-joseph-the-father-of-thatcherism-was-autistic-claims-professor-407600.html Retrieved August 7, 2008.
Lipovetsky, Gilles. Hypermodern Times, trans. Andrew Brown. Cambridge:
Polity, 2005 [2004].
López, José and Potter, Garry (eds.). After Postmodernism: An Introduction
to Critical Realism. London: Athlone Press, 2001.
Lyotard, Jean-François. “Answer to the Question: What is the Postmodern?”
in The Postmodern Explained to Children: Correspondence 1982–85,
trans. ed. Julian Pefanis and Morgan Thomas. Sydney: Power Publications,
1992 [1986], pp. 9–25.
—. “Apostil on Narratives” in The Postmodern Explained to Children:
Correspondence 1982–85, trans. ed. Julian Pefanis and Morgan Thomas.
Sydney: Power Publications, 1992 [1986], pp. 27–32.
—. La Condition Postmoderne: Rapport Sur le Savoir. Paris: Les Editions de
Minuit, 1979.
—. The Postmodern Condition: A Report on Knowledge, trans. Geoff
Bennington and Brian Massumi. Manchester: Manchester University
Press, 1984.
MacCabe, Colin. Performance. London: BFI, 1998.
Malpas, Simon. Jean-François Lyotard. Abingdon: Routledge, 2003.
McBride, Nat and Cason, Jamie. Teach Yourself Blogging. London: Hodder
Education, 2006.
McMahan, Alison. The Films of Tim Burton: Animating Live Action in
Contemporary Hollywood. London: Continuum, 2005.
Miller, Michael. YouTube 4 You. Indianapolis, IN: Que Publishing, 2007.
Murray, Stuart. Representing Autism: Culture, Narrative, Fascination.
Liverpool: Liverpool University Press, 2008.
Nabokov, Vladimir. Lectures on Literature. London: Weidenfeld &
Nicolson, 1980.
O’Toole, Laurence. Pornocopia: Porn, Sex, Technology and Desire, 2nd
edition. London: Serpent’s Tail, 1999.
Packard, Edward. The Cave of Time. London: W. H. Allen, 1980 [1979].
Plunkett, John. “Digital Radio Attracts More Listeners” in London Guardian, May 1, 2008, www.guardian.co.uk/media/2008/may/01/digitaltvradio.rajars Retrieved September 22, 2008.
Pynchon, Thomas. The Crying of Lot 49. London: Vintage, 2000 [1965].
Ritchie, Jean. Big Brother: The Official Unseen Story. London: Channel 4
Books, 2000.
Schaeffer, Francis. The God Who is There. London: Hodder & Stoughton,
1968.
Seaton, James. “Truth Has Nothing to Do with It” in Wall Street Journal,
August 4, 2005, http://www.opinionjournal.com/la/?id=110007056
Retrieved March 29, 2008.
Selden, Raman. Practising Theory and Reading Literature. Harlow: Pearson,
1989.
Sherman, Cindy. The Complete Untitled Film Stills. New York: Museum of
Modern Art, 2003.
Steele, Francesca. “Go Back to School . . . Starting Right Here” in London Times, January 2, 2008, http://www.timesonline.co.uk/tol/life_and_style/education/article3117906.ece Retrieved November 20, 2008.
Sterne, Laurence. The Life and Opinions of Tristram Shandy, Gentleman.
Oxford: Clarendon Press, 1983 [1759–67].
Street-Porter, Janet. “Just Blog Off” in London Independent on Sunday, January 6, 2008, http://www.independent.co.uk/opinion/commentators/janet-street-porter/editoratlarge-just-blog-off-and-take-your-selfpromotion-and-cat-flap-with-you-768491.html Retrieved August 28, 2008.
Thomson, Charles. “A Stuckist on Stuckism” in The Stuckists: Punk Victorian, ed. Frank Milner. Liverpool: National Museums, 2004, pp. 6–31.
Tolkien, J. R. R. “Foreword to the Second Edition” in The Fellowship of the
Ring. London: HarperCollins, 2007 [1966], pp. xxiii–xxvii.
Walters, Ben. The Office. London: BFI, 2005.
Wittgenstein, Ludwig. Philosophical Investigations, trans. G. E. M. Anscombe.
Oxford: Blackwell, 1968 [1953].
Wolfe, Tom. “A Universe of Rumors” in Wall Street Journal, July 14, 2007,
http://online.wsj.com/article/SB118436667045766268.html Retrieved
August 28, 2008.
Yang, Jonathan. The Rough Guide to Blogging. London: Rough Guides,
2006.
Zittrain, Jonathan. The Future of the Internet and How to Stop It. London:
Allen Lane, 2008.
INDEX
“6–0–6” (BBC 5 Live) 205
9/11 151, 154, 177, 226, 237
Aardman Animations 10–11, 16–17
Chicken Run 10–13, 137
Wallace and Gromit: The Curse of the Were-Rabbit 17
The Wrong Trousers 10–11
Aarseth, Espen J. 53
Ergodic Literature 53–4
Adair, Gilbert 36–7, 150–1
addiction, digimodernist textual 79, 83, 148–9, 180
Adorno, Theodor 34, 125, 129, 134, 136
After Poststructuralism (Davis) 28
Against Postmodernism (Callinicos) 38
Albarn, Damon 46, 217 [See also Blur, Gorillaz]
Alderson, David 33–4, 37
Allen, Michael 80
Allen, Peter 203–4
Allen, Woody 45, 47
Stardust Memories 47
All Hail the New Puritans (ed. Blincoe and Thorne) 22–3
Ally McBeal (Fox) 196
Alteration, The (Amis) 138
Amazon 219–21
American Family, An (PBS) 190
American Idol (Fox) 131, 214
Amis, Martin 23–4, 45, 47, 59, 93, 117, 166, 207, 218
The Information 59, 91, 166
London Fields 59, 218
Money 47, 59, 140
Time’s Arrow 23–4
amnesia, digimodernist textual 64, 83, 149
Amos, Tori 217
anonymity, digimodernist textual 52, 60, 62, 69, 71, 75, 79, 83, 89, 106–7, 118, 171, 193, 195
Antonioni, Michelangelo 139, 187
L’avventura 187
apparently real, the 22, 48, 87, 120–1, 139–50, 177, 185, 187–8, 191, 196, 198
Arctic Monkeys 216
Whatever People Say I Am, That’s What I’m Not 216
Armageddon (Bay) 177
Aronowitz, Stanley 150
art-album 209–11, 213–15
Asperger syndrome 228–30, 232–3
Asteroids (Atari) 169
Astral Weeks (Morrison) 210
audience participation 83–7, 95–9, 191, 198
Auerbach, Erich 156–7, 159
Mimesis 156–7
Austen, Jane 163, 181, 219–20
Auster, Paul 218–19
Sex and the City (HBO) 119, 133, 160, 163
Sex Pistols, the 215
Shakespeare, William 219, 221
Shaun of the Dead (Wright) 154
Shelley, Mary 173
Sherman, Cindy 139–40
Untitled Film Stills 139–40
Simpsons Movie, The (Fox) 18
Simpsons, The (Fox) 10, 18, 46, 126, 132, 155, 162–3
Sim, Stuart 150
Sing-a-long-a Sound of Music 75
Singin’ in the Rain (Kelly and Donen) 181
Skinner, Frank 198
Sky Captain and the World of Tomorrow (Conran) 182–3
Smiths, the 87–8, 212, 214
“Panic” 87
SMS (text message) 69–71, 113, 148, 189, 191, 202, 204, 206
Snow Patrol 214–15
Final Straw 214–15
social networking sites 103, 121–3, 142, 220
Songs of Praise (BBC) 135
Sopranos, The (HBO) 163
Sound of Music, The (Wise) 135
South Park (Comedy Central) 132
Space Invaders (Taito) 175
Spears, Britney 131, 215
Spector, Phil 129
Spellbound (Blitz) 185
Spice Girls, the 130–1
Spice 130
Spider-Man (film series, Sony Pictures) 126, 153
Spider-Man 2 (Activision) 169
Spielberg, Steven 127, 173–4, 177–8
Close Encounters of the Third Kind 169, 174
Jurassic Park 9, 173–4
Raiders of the Lost Ark 175
The War of the Worlds 177
Springsteen, Bruce 212
Spurlock, Morgan 184–5
Super Size Me 184–5
Star Wars Episode I: The Phantom Menace (Lucas) 152
Star Wars Episode II: Attack of the Clones (Lucas) 180
Star Wars Episode III: Revenge of the Sith (Lucas) 180
Star Wars Episode IV: A New Hope (Lucas) 127, 160
Star Wars (film series, 20th Century Fox) 115, 128, 160–1, 179–80
Star Wars (original film trilogy, 20th Century Fox) 9, 169, 179–80
Star Wars (prequel film trilogy, Lucas) 126, 152–3, 160–1, 179–80
Stinky Cheese Man and Other Fairly Stupid Tales, The (Scieszka and Smith) 15
St. Matthew Passion (Bach) 237
Stone, Oliver 140, 177, 179
Alexander 152, 179
JFK 140
Natural Born Killers 177
Stone Roses, the 130, 212
Street-Porter, Janet 118
Streets, the 216
Strokes, the 214–15
Is This It 214–15
stuckism 24–7, 40
Suede 130, 208, 212
supermodernity 44
super-subjectivity 169–71
Swinburne, Algernon Charles 124, 211
Talking Heads 206
Fear of Music 215
Tamara (Krizanc) 98
Tarantino, Quentin 126, 155, 177, 207
Pulp Fiction 64, 155