
History of Computing

Thomas Haigh  Editor

Exploring the Early Digital

Founding Editor
Martin Campbell-Kelly, University of Warwick, Coventry, UK

Series Editors
Gerard Alberts, University of Amsterdam, Amsterdam, The Netherlands
Jeffrey R. Yost, University of Minnesota, Minneapolis, USA

Advisory Board
Jack Copeland, University of Canterbury, Christchurch, New Zealand
Ulf Hashagen, Deutsches Museum, Munich, Germany
Valérie Schafer, CNRS, Paris, France
John V. Tucker, Swansea University, Swansea, UK
The History of Computing series publishes high-quality books which address the
history of computing, with an emphasis on the ‘externalist’ view of this history,
more accessible to a wider audience. The series examines content and history from
four main quadrants: the history of relevant technologies, the history of the core
science, the history of relevant business and economic developments, and the history
of computing as it pertains to social history and societal developments.
Titles can span a variety of product types, including but not exclusively, themed
volumes, biographies, ‘profile’ books (with brief biographies of a number of key
people), expansions of workshop proceedings, general readers, scholarly expositions,
titles used as ancillary textbooks, revivals and new editions of previous worthy titles.
These books will appeal, varyingly, to academics and students in computer
science, history, mathematics, business and technology studies. Some titles will also
directly appeal to professionals and practitioners of different backgrounds.

More information about this series at

Thomas Haigh

Exploring the Early Digital

Thomas Haigh
Department of History
University of Wisconsin–Milwaukee
Milwaukee, WI, USA
Comenius Visiting Professor
Siegen University
Siegen, Germany

ISSN 2190-6831     ISSN 2190-684X (electronic)

History of Computing
ISBN 978-3-030-02151-1    ISBN 978-3-030-02152-8 (eBook)

Library of Congress Control Number: 2019931874

© Springer Nature Switzerland AG 2019

Chapter 4: © This is a U.S. government work and not under copyright protection in the U.S.; foreign
copyright protection may apply 2019
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, express or implied, with respect to the material contained herein or for any errors
or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims
in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

What the “early digital” does is rethink the technologies of computation within
a broader frame. The Springer History of Computing series very much welcomes
the broadening of scope exemplified by the present volume. Mirroring the develop-
ment of the history of computing as a scholarly field, the series has already expanded
beyond histories of computer science, computer business and computer manufactur-
ing to include analysis of the sociological and cultural dimensions of computing. In
a broad sense, history of computing has come to cover the history of the practices of
computing, and so too has this series.
Then does not a volume centered on early computing machinery imply a return
to a narrower history of computing? Here is the secret of this book. It returns to
some of the subjects familiar from the early days of the scholarly history of comput-
ing, to study them in a new light. The editor of the present book, Tom Haigh, has led
the way in this process in his previous work with Mark Priestley and Crispin Rope.
Their perspectives on ENIAC (in the book ENIAC in Action) and Colossus (in a
series of articles currently in press) have stripped away what seemed self-evident
about these machines to earlier historians. Priestley’s analysis of von Neumann’s
Routines of Substitution is appearing as a SpringerBrief in the History of Computing series.
Liberated from the knowledge of what was “only natural” about those machines
and unburdened by commitments to declaring which machines should count as
computers, let alone be the “first computer,” they were able to present the ENIAC to
the reader as an ensemble of practices of constructing and using computational
devices. That object now stands before us in a fresh mode. A historian freed from
the assumptions of the computer scientist can show how certain words and notions
gradually evolved in relation to the use of those pieces of technology.
The scholarly goal of “decentering the computer” has been much discussed in
the history of computing community. To me, decentering implies staying away from
anybody’s definition or demarcation. It means looking at the historical pieces with-
out assuming that they were a computer.
Now suppose a bunch of historians got together and took a similarly fresh look at a
set of historical objects which they had not necessarily deemed “computers” before.

vi Foreword

This is what the authors of this book did. They further decentered the computer, by
presenting it as one in a set of things, artifacts entangled with digitality.
By rethinking their trade, and decentering twice, historians have reached an
interesting new intellectual point. They have also reached a new physical place: the
German town of Siegen. Siegen University has been the venue of workshops and
conferences initiated by Erhard Schüttpelz and his colleagues. This book is the out-
come of not only conceptually combining ideas, it is the genuine product of bring-
ing people together.
Scholars of media share an inclination to treat the technological side of media
with overdue reverence. All of media studies? Those in the venerable German town
of Siegen are welcome exceptions. The black boxes of the “inter” are opened.
Computers, software, and networks are critically researched. If historians of com-
puting are broadening their frame, these students of media are offering a proper
setting for such broader scope. Welcoming Tom Haigh to Siegen has deepened and
supported its intellectual endeavors. His diplomacy convinced an impressive crew
of historians of computing to follow suit and enter dialog with an equally impressive
media studies team for the cross fertilization of ideas. We are proud to present the
resulting book as a contribution to the two series of Springer books.

Gerard Alberts
editor Springer Series History of Computing

The history and theory of computing and the history and theory of media were, for
a surprisingly long time, separate fields of scholarship. There were some obvious
historical reasons for this: computers were devised as computing machines and
understood to have been derived from devices for calculating numbers, while the
word “media” had been reserved for the “mass media” of communication and the
public services of telecommunications. Of course, ever since the 1960s there had
been predictions that telecommunications and computers would merge, and com-
puters, automation, and artificial intelligence were from the beginning a speculative
topic for media theory, for instance, in Marshall McLuhan’s writings. Thus, when
computing and media did merge, for a moment it seemed that media theory would
be well-prepared to develop theories about “the tele-computing-coordinated media”
future (to use a retro-futuristic term that does not really exist), which has become
our present and even our fate. But it seems both computer science and media studies carried such a heavy theoretical load on their shoulders that the anticipated intellectual fusion had to be postponed, though the practical fusion quickly overwhelmed
us. Computer science was not prepared for media theoretical discussions of net-
worked computing and their media interfaces, and at least some media theoretical
experts expected media to vanish in an all-encompassing hypertext reality or in a
convergence of all past media on one platform. In many respects, particularly in
Germany, media studies and computer history both remained hypnotized by the idea
that a computer was first and fundamentally a Turing Machine, and not the strange
amalgam of networks and nodes, of software and hardware, social relationships and
faddish interfaces, computerization movements and disillusioned bubbles, and simulations of old media and parasitic new practices that slowly emerged as the dominant reality of both media and computing. Thus, the history of computing had to
find its own way, along the edges of more dominant fields, Science and Technology
Studies (STS), library and information studies, business history, the history of sci-
ence, cultural studies, and especially the research done by computer scientists and
programmers into their own history. It is only recently and due to the relentless
efforts of Tom Haigh, Gerard Alberts, JoAnne Yates, Rebecca Slayton, and their
comrades-in-arms that the history of computing has been able to break out of its
niche at the intersections of other fields and emerge as a possible disciplinary field
in its own right.
The present volume is dedicated to this emerging synthesis. It will help to close
the gap between media studies and the history of computing by making both strands
meet in the historical reconstruction of digital computing devices, whether they
were called media or computers or neither. Because once we are able to treat digital
computing devices in their own right, the vexing questions of “are they computers?”
and “when did they turn into media?” become secondary. And by becoming second-
ary, they can be answered much more efficiently, because this move makes it easier
for scholarship to follow the symmetry standards set by science and technology
studies: the big opposition between “internalist” and “externalist” descriptions is
overcome; the successes and failures of innovations and projects may be explained
and described with the same concepts; technical details, social organization, and
cultural transmission turn out to be aspects of one seamless whole; and the different
kinds and attributions of human and non-human agency involved in computing can
be more easily acknowledged.
Luckily, at least one thing about digital computers seems to have withstood the
test of time. Digital machines they remain, or at least some digital machines are
what they are made of. And computerized media: digital machines they became, or
at least a vital part is made of digital machines. Even if other digital machines were
and are called neither “computers” nor “media,” this is the best common ground one
can find.
Following digital computing devices through history, we can expect the liberat-
ing effect Gerard Alberts described so well in his generous and thoughtful preface
to this volume: to work beyond a priori assumptions about media and computers.
And to be honest, we do not have much of a choice. Because our digital computing
devices today are ubiquitous and pervasive, it is a matter of great arbitrariness if
they are now called “computers,” if they were historically called “computers,” if
they are or were called “media,” if they are called “the software component,”
“machines,” “instruments,” “control panels” or whatever. Computing has become
ubiquitous, and somehow “the computer” as a clearly defined physical entity, from
mainframe to desktop and beyond, is gone. But computing devices with their vari-
able geometries and their physical and material effects still exist and proliferate.
And media, though we have long expected their demise or final convergence, have
not disappeared either, if only because we need devices to perform our long-cherished media practices. We know a smartphone is not a phone, but it is a phone
for us. And as long as we do the driving ourselves, we cannot escape the impression that this bundle of computing devices, wheels and seats, glass and steel, is a car. But
what about cashiers, hotel keys and locks, credit cards, and countless other devices?
Whether things are called computers or media or neither does not give us a theoretical or historical clue what the computing device in its belly does. That’s why we need a
thorough history of digital computing devices to answer the questions of historical
continuity: “How did we get here?” and “What to expect next?”
And concerning “what the future has in store,” let me close with a personal
remark. Actually, it may be a personal remark, but it concerns the professionalism
of our universities worldwide. The scholars represented in this volume know how
few experts in the history of computing field actually are able to meet their stan-
dards, and how fragile and tenuous the field still is. Because the experts in this field
have to combine a profound technical knowledge with a versatile knowledge of
changing historical circumstances in all scales, in the micro-scale of networks of
computing experts, but also in the macro-scale of shifting economical, political, and
social circumstances, in the guise of government sponsorship and guidance, of mar-
ket supply and demand, and of professional career-opportunities. And speaking of
career opportunities, the history of computing field is still on the verge of taking off.
After all, digital media do rule the world, or rather, if you change the rules of digital
media, in any of its numerous monopolies and cartels you are wielding enormous
power. Economics, education, administration, and entertainment, the list is long,
have all been digitalized and are being run with the help of digital algorithmic
machines and their asymmetrical power relationships. Come to think of it, each and
every major university in this world should have a professorship for the History of
Computing, to teach our students (and not only the computer scientists among them)
how it became possible that the world could be run by digital machines – and how
it was possible to do without them. And how many professorships for the History of
Computing do you think actually exist in this world, to take up that task? To whom
it may concern.

Erhard Schüttpelz
(Collaborative Research Center
“Media of Cooperation,” University of
Siegen and editor of the Springer Series
Media of Cooperation)

This book originated with two workshops held at Siegen University, in June 2016
and January 2017. My Siegen colleague Sebastian Gießmann, and his team, did a
great job in putting these events together and handling all the logistics. Funding for
the events came from SFB 1187: Media of Cooperation and from Siegen’s School
of Media and Information. Neither the workshops nor this book would have been
possible without the support and encouragement of Professor Erhard Schüttpelz,
who has been a tireless champion for the relevance of the history of computing to
media studies.
I am also grateful to all those who have contributed to the book’s development.
Gerard Alberts, as editor of the Springer Series in History of Computing, encour-
aged us along the way. Other workshop participants, including Liesbeth De Mol,
David Link, Pierre Mounier-Kuhn, and David Nofre, provided input that helped to
shape the final papers. Jeffrey Yost contributed detailed and helpful reviews of four
chapters. Catherine Barrett Abbott helped to proofread most of the submissions. My
family members, Maria, Peter, and Paul, supported the project with their willing-
ness to spend summers in Germany and to manage without me during other travels
for workshops.
At and for Springer, A. IshrathAra, Wayne Wheeler, Simon Rees, and Prasad
Gurunadham saw the book through the editorial and production process with admi-
rable speed.


Contents

1  Introducing the Early Digital .....................................................................    1
   Thomas Haigh
2  Inventing an Analog Past and a Digital Future in Computing ................   19
   Ronald R. Kline
3  Forgotten Machines: The Need for a New Master Narrative ..................   41
   Doron Swade
4  Calvin Mooers, Zatocoding, and Early Research on Information
   Retrieval ......................................................................................................   69
   Paul E. Ceruzzi
5  Switching the Engineer’s Mind-Set to Boolean: Applying Shannon’s
   Algebra to Control Circuits and Digital Computing (1938–1958) ..........   87
   Maarten Bullynck
6  The ENIAC Display: Insignia of a Digital Praxeology ............................  101
   Tristan Thielmann
7  The Evolution of Digital Computing Practice on the Cambridge
   University EDSAC, 1949–1951 .................................................................  117
   Martin Campbell-Kelly
8  The Media of Programming ......................................................................  135
   Mark Priestley and Thomas Haigh
9  Foregrounding the Background: Business, Economics, Labor,
   and Government Policy as Shaping Forces in Early Digital
   Computing History .....................................................................................  159
   William Aspray and Christopher Loughnane
10 “The Man with a Micro-calculator”: Digital Modernity and Late
   Soviet Computing Practices ......................................................................  179
   Ksenia Tatarchenko

Index ................................................................................................................  201
Chapter 1
Introducing the Early Digital

Thomas Haigh

Abstract  This introductory chapter outlines the objectives of the book, explaining
how adopting “early digital” as a frame can encourage new perspectives on estab-
lished topics within the history of computing and productively integrate concerns
from related fields such as media theory and communications history. Haigh encour-
ages historians to take digitality seriously as an analytical category, probes the dif-
ferences between analog and digital computing, and argues that the ability of a
machine to follow a program is fundamentally digital. He also introduces the con-
tributions of the individual chapters in the book, situating each within this broader
analysis of digitality and its historical materiality.

At the workshop held to prepare this book, Paul Ceruzzi noted that the digital
computer was a “universal solvent.” This idea comes from alchemy, referring to an
imaginary fluid able to dissolve any solid material. In the 1940s the first digital
computers were huge, unreliable, enormously expensive, and very specialized.
They carried out engineering calculations and scientific simulations with what was,
for the time, impressive speed. With each passing year, digital computers have
become smaller, more reliable, cheaper, faster, and more versatile. One by one they
have dissolved other kinds of machine. Some have largely vanished: fax machines,
cassette players, telegraph networks, and encyclopedias. In other cases the digital
computer has eaten familiar devices such as televisions from the inside, leaving a
recognizable exterior but replacing everything inside.
For those of us who have situated ourselves with the “history of computing,” this
provides both a challenge and an opportunity: an opportunity because when the
computer is everywhere, the history of computing is a part of the history of everything; a challenge because the computer, like any good universal solvent, has dissolved its own container and vanished from sight. Nobody ever sat down in front of their television at the end of the day, picked up their remote control, and said “let’s
do some computing.” Our object of study is everywhere and nowhere.
The startling thing is that “computer” stuck around so long as a name for these
technologies. The word originally described a person carrying out complex techni-
cal calculations, or “computations.” (Campbell-Kelly and Aspray 1996) The “auto-
matic computers” of the 1940s inherited both the job and the title from their human
forebears and retained it even when, after a few years, their primary market shifted
to administrative work. Well into the 1990s, everyone knew what a computer was.
The word stuck through many technological transitions: supercomputers, minicom-
puters, personal computers, home computers, and pocket computers. Walking
through a comprehensive computing exhibit, such as the Heinz Nixdorf
MuseumsForum or the Computer History Museum, one passes box after box after
box. Over time the boxes got smaller, and toggle switches were eventually replaced
with keyboards. Computing was implicitly redefined as the business of using one of
these boxes to do something, whether or not that task involved computations.
Even then, however, other kinds of computers were sneaking into our lives, in
CD players, microwave ovens, airbags and antilock brakes, ATMs, singing greeting
cards, and videogame consoles. Within the past decade, the box with keys and a
screen has started to vanish. Laptop computers are still thought of as computers, but
tablets, smartphones, and high-definition televisions are not. To the computer scien-
tist, such things are simply new computing platforms, but they are not experienced
in this way by their users or thought of as such by most humanities scholars.
Instead many people have come to talk of things digital: digital transformation,
digital formats, digital practices, digital humanities, digital marketing, and even
digital life. Other newly fashioned areas of study, such as algorithm studies and
platform studies, also define themselves in terms of distinctly digital phenomena. In
some areas “digital” now denotes anything accomplished by using computers which
requires an above average level of technical skill. This is the usual meaning of “digi-
tal” in the “digital humanities” and in various calls for “digital STS” and the like.
Using “digital” as a more exciting synonym for “computerized” is not wrong,
exactly, as modern computers really are digital, but it is arbitrary. Historians of
computing have so far been somewhat suspicious of this new terminology, only
occasionally using it to frame their own work (Ensmenger 2012). I myself wrote an
article titled “We Have Never Been Digital.” (Haigh 2014) Yet historians of comput-
ing cannot expect the broader world to realize the importance of our community’s
work to understanding digitality if we are reluctant to seriously engage with the
concept. Instead the work of conceptualizing digitality and its historical relationship
to computer technology has been left largely to others, particularly to German media
scholars (Kittler 1999; Schröter and Böhnke 2004).
In this volume, we approach digitality primarily from within the history of com-
puting community, rethinking the technologies of computation within a broader
frame. Subsequent efforts will build on this reconceptualization of computational
digitality as an underpinning to the study of digital media. This book therefore
makes the case that historians of computing are uniquely well placed to bring rigor
to discussion of “the digital” because we are equipped to understand where digital
technologies, platforms, and practices come from and what changes (and does not
change) with the spread of the digital solvent into new areas of human activity.
Hence the title of our book, Exploring the Early Digital.

1.1  Digital Materiality

Let’s start with what digitality isn’t. In recent usage, digital is often taken to mean
“immaterial.” For example, entertainment industry executives discuss the shift of
consumers toward “digital formats” and away from the purchase of physical disks.
The woman responsible for the long-running Now That’s What I Call Music! series
of hit music compilations was recently quoted (Lamont 2018) as saying that songs
for possible inclusion are now sent to her by email, unlike the “more glamorous…
analogue era, when labels sent over individual songs on massive DAT tapes by cou-
rier.” Such statements make sense only if one adopts a definition of “digital” that
excludes all disks and tapes. That is a stretch, particularly as the D in DVD stands
for Digital. So does the D in DAT.
The recent idea of “the digital” as immaterial is both ridiculous and common,
deserving its own historical and philosophical analysis. Langdon Winner’s classic
Autonomous Technology, which explored the history of the similarly odd idea of
technology as a force beyond human control, might provide a model. Some impor-
tant work in that direction has been done in “A Material History of Bits” (Blanchette
2011). Although the modern sense of “digital” was invented to distinguish between
different approaches to automatic and electronic computing, the characteristics it
described are much older. The mathematical use of our current decimal digits began
in seventh-century India before picking up steam with the introduction of zeros and
the positional system in the ninth century. Devices such as adding machines incor-
porated mechanical representations of digits. For example, in his chapter Ronald
Kline quotes John Mauchly, instigator of the ENIAC project and one of the creators
of the idea of a “digital computer,” explaining the new concept with reference to
“the usual mechanical computing machine, utilizing gears.”
Most of the chapters in this book deal with specific forms of digital materiality,
emphasizing that the history of the digital is also the history of tangible machines
and human practices. Ksenia Tatarchenko’s contribution deals with Soviet program-
mable calculators. These displayed and worked with digits, like the machines
Mauchly described, but exchanged gears for electronics.
Other digital technologies, used long after the invention of electronic computers,
avoided electronics entirely. Doron Swade provides a close technical reading of the
ways in which a complex mechanical odds-making and ticket selling machine rep-
resented and manipulated numbers.
Paul Ceruzzi’s chapter explores Zatocoding, a digital method of categorizing and
retrieving information using notches cut into the side of punched cards. Its creator,
Calvin Mooers, had early experience with digital electronics and used information
theory to help formulate a highly compressed coding scheme able to combine many
possible index terms. His work was a foundation for modern information retrieval
systems, including web search engines. Yet when Mooers went into business, he
was more excited by the promise of paper-based digital information media. The
cards represented information digitally, through different combinations of notches,
which were read using what Ceruzzi calls a “knitting needle-like device.”
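The principle behind Mooers’s superimposed coding can be illustrated with a short sketch (a simplification with hypothetical parameters and names, not a description of Mooers’s actual notch assignments): each index term determines a few notch positions, a card’s notches are the union of its terms’ positions, and a needle-based search retrieves every card notched at all the positions for the query term.

```python
import hashlib

NUM_POSITIONS = 40    # notch positions along the edge of each card
NOTCHES_PER_TERM = 4  # notches used to encode a single index term

def term_positions(term):
    """Map an index term to a small, fixed set of notch positions."""
    digest = hashlib.sha256(term.encode()).digest()
    return {digest[i] % NUM_POSITIONS for i in range(NOTCHES_PER_TERM)}

def encode_card(terms):
    """Superimpose (union) the notch patterns of all the card's terms."""
    notches = set()
    for term in terms:
        notches |= term_positions(term)
    return notches

def select(cards, query_term):
    """Return the term sets of cards notched at every needle position.
    Overlapping codes can cause occasional 'false drops'."""
    needles = term_positions(query_term)
    return [terms for terms, notches in cards if needles <= notches]

library = [(terms, encode_card(terms)) for terms in (
    {"information", "retrieval"},
    {"punched", "cards"},
    {"information", "theory"},
)]
matches = select(library, "information")
```

Mooers’s information-theoretic insight was that randomly chosen, overlapping term codes waste far less edge space than giving each term its own dedicated notch, at the cost of occasional false drops.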

1.2  Digital vs. Analog

The antonym of digital is “analog,” not “material.” As Kline explains, this distinc-
tion arose during the 1940s, with the spread of automatic computers. He locates it
in discussions between the creators of digital computers, tracing its initial spread
through enthusiasts for the new metascience of cybernetics, building on his work in
(Kline 2015). Both analog and digital machines could automate the solution of
mathematical problems, whether at the desk, in the laboratory, or, as control equip-
ment, in the field. However, the two kinds of computer represented the quantities
they worked on in fundamentally different ways.
Digital machines represented each quantity as a series of digits. Mechanisms
automated the arithmetic operations carried out by humans, such as addition and
multiplication, mechanizing the same arithmetic tricks such as carrying from less
significant digits to more significant digits or multiplying by repeated addition.
Within the limits imposed by their numerical capabilities, the machines could be
relied upon (when properly serviced, which was not a trivial task) to be accurate and
to give reproducible results. Machines with more digits provided answers with more significant figures.
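The digit-by-digit mechanization described here can be sketched in a few lines (a toy model of the general scheme, not of any particular machine):

```python
def add_digits(a, b):
    """Add two numbers stored as little-endian decimal digit lists,
    carrying from less significant digits to more significant ones."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        total = da + db + carry
        result.append(total % 10)  # the digit wheel shows total mod 10
        carry = total // 10        # the carry passes to the next wheel
    if carry:
        result.append(carry)
    return result

def multiply(a, n):
    """Multiply by repeated addition, as early digital machines did."""
    product = [0]
    for _ in range(n):
        product = add_digits(product, a)
    return product

# Digits are little-endian: [8, 7, 4] represents 478.
assert add_digits([8, 7, 4], [6, 5, 2]) == [4, 3, 7]  # 478 + 256 = 734
assert multiply([8, 7, 4], 3) == [4, 3, 4, 1]         # 478 * 3 = 1434
```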
In analog machines, in contrast, each quantity being manipulated was repre-
sented by a distinct part of the machine such as a shaft, a reservoir, or an electrical
circuit. As the quantity represented grew or diminished, the component representing
it would change likewise. The shaft would spin more or less rapidly, the reservoir
empty or fill, or the voltage across the circuit rise or fall. This explains the name
“analog computer.” An analogy captures the relationship between things in the
world, defining a specific correspondence between each element of the analogy and
something in the system being modelled. Some, like model circuits used to simulate
the behavior of electrical power networks, were essentially scale models. Others
substituted one medium for another, such as water for money. The accuracy of an
analog computer was a matter of engineering precision. In practice analog comput-
ers were specialized for a particular kind of job, such as solving systems of differ-
ential equations. They were often faster than digital computers but usually accurate
to only a few significant figures.
Once the categories of analog and digital computers were established, it became
natural to project the idea of analog vs. digital back onto earlier technologies. In
these broader terms, any discrete representation of numbers appears digital, whereas
continuous representations appear analog. Kline notes that some computing special-
ists of the 1950s were wary of “analog” for this reason and preferred the precision
gained by speaking of “continuous” representations. Adding machines, calculating
machines, cash registers, tabulating machines, and many other common technolo-
gies were digital. These machines typically represented each digit as a ten-faced
cog, which rotated to store a larger number. Newer, higher-speed devices stored
numbers as patterns in electromagnetic relay switches or electronic tubes. Other
calculating devices, going back to ancient navigational tools such as the astrolabe,
were analog. So was the once-ubiquitous slide rule (approximating the relationship
between a number and its logarithm). Automatic control devices before the 1970s
were, for the most part, analog: thermostats, the governors that regulated steam
engines, and a variety of military fire control and guidance systems. As David
Mindell has shown (Mindell 2002), engineers across a range of institutions and dis-
ciplinary traditions developed these techniques long before the mid-century fad for
cybernetics provided a unified language to describe them.
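The slide rule mentioned above multiplies by adding lengths proportional to logarithms, with accuracy limited by how finely the scale can be read. This can be sketched numerically (the resolution figure is an illustrative assumption, standing in for the reading precision of a physical scale):

```python
import math

def slide_rule_multiply(x, y, resolution=0.001):
    """Multiply by adding logarithmic 'lengths', as a slide rule does.
    Rounding to a finite resolution mimics the limited reading
    precision (a few significant figures) of an analog scale."""
    length = math.log10(x) + math.log10(y)        # slide the scales together
    length = round(length / resolution) * resolution  # read off the scale
    return 10 ** length

approx = slide_rule_multiply(3.7, 4.2)
exact = 3.7 * 4.2
assert abs(approx - exact) / exact < 0.01  # accurate to a few significant figures
```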
Although the application of “digital” to computing and communication was new
in the 1940s and bound up with automatic computers, many of the engineering
techniques involved were older and arose in other contexts. In his chapter, Doron
Swade explores technologies found in two recreational machines from the early
twentieth century which embedded computational capabilities: a golf simulator and
“automatic totalizator” machines used by dog racing tracks to calculate odds in real
time based on ticket sales. Swade notes that these machines have been left out of
traditional master narratives in the history of computing, which focus on scientific
calculation and office administration as the primary precursors of digital computer
technology. His work demonstrates the benefits of moving beyond the limitations
imposed by this traditional frame and taking a broader approach to the study of
computational technology. Indeed, even the categories of digital and analog, accord-
ing to Swade, are sufficiently tangled in engineering practice for him to challenge
the “faux dichotomous categories used retrospectively in the context of pre-elec-
tronic machines.”

1.3  Computer Programs Are Inherently Digital

Programmability, often seen as the hallmark of the computer, is itself a fundamentally
digital concept. As John von Neumann wrote when first describing modern
computer architecture, in the “First Draft of a Report on the EDVAC,” an “automatic
computing system is a (usually highly composite) device which can carry out instructions
to perform calculations of a considerable order of complexity….” (von
Neumann 1993). In that formulation, the device is a computer because it computes:
it carries out laborious and repetitive calculations according to a detailed plan. It is
automatic because like a human computer, but unlike a calculating or adding
machine, it goes by itself from one step in the plan to the next.
Kline’s contribution notes that digital and analog were not the only possible
terms discussed during the 1940s. Some participants advocated strongly for the
established mathematical terms “continuous” (instead of analog) and “discrete”
(instead of digital). These distinctions apply not only to number representations,
6 T. Haigh

which analysis has usually focused on, but also to the way the two kinds of com-
puter carry out their calculations. The latter distinction is perhaps the more funda-
mental, as it explains the ability of digital computers to carry out programs.
Analog computers work continuously, and each element does the same thing
again and again. Connections between these components were engineered to mimic
those between the real-world quantities being modelled. A wheel and disk linked
one shaft’s rotation to another’s, a pipe dripped fluid from one reservoir to another,
and an amplifier tied together the currents flowing in two circuits.
The MONIAC analog computer (Fig. 1.1) designed by William Phillips to simu-
late the Keynesian understanding of the economy illustrates these continuous flows.
Different tanks are filled or emptied to represent changing levels of parameters such
as national income, imports, and exports. Adjustable valves and plastic insets
expressing various functions governed the trickling of water from one chamber to
another. This gave a very tangible instantiation to an otherwise hard to visualize
network of equations, as economic cycles played themselves out and the impact of
different policy adjustments could be tested by tweaking the controls.
In a very obvious sense, analog computations occur continuously. In contrast, a
modern computer, or to use the vocabulary of the 1940s an automatic digital com-
puter, breaks a computation into a series of discrete steps and carries them out over
time. At each stage in the computation, the mechanism may work on a different
variable. For example, most early digital computers had only one multiplying unit,
so every pair of numbers to be multiplied had first to be loaded into two designated
storage locations. Over the course of the computation, the numbers loaded into
those locations would refer to completely different quantities in the system being
modelled.
The first step in planning to apply a digital computer to a problem was to figure
out what steps the machine should carry out and what quantities would be stored in
its internal memory during each of those steps. Two of the earliest efforts to plan
work for automatic computers were made by Lovelace and Babbage in the 1830s
and 1840s for the unbuilt Analytical Engine (Fig. 1.2) and by the ENIAC team in
1943 to plan out the firing table computations for which their new machine was
being built (Fig. 1.3). When I explored these episodes in collaboration with Mark
Priestley, we were startled to realize that both teams came up with essentially the
same diagramming notation when setting out sample applications for their planned
computers: a table in which most columns represented different storage units of the
machine and each row represented one step in the algorithm being carried out. A
single cell thus specified the mathematical significance of an operation being car-
ried out on one of the stored quantities.
The word “program” was first applied in computing by the ENIAC team (Haigh
and Priestley 2016). Our conclusion was that its initial meaning in this context was
simply an extension of its use in other fields, such as a program of study, a lecture
program, or a concert program. In each case the program was a sequence of discrete
activities, sequenced over time. An automatic computer likewise followed a program
of operations. The word was first applied to describe the action of a unit within
ENIAC that triggered actions within other units: the master programmer. (The same
Fig. 1.1  The Phillips Machine, or MONIAC, illustrates two key features of analog computing: the
“analogy” whereby different parts of the machine represent different features of the world and the
fixed relationships between these parts during the computation, which consisted of continuous
processes rather than discrete steps. (Reproduced from (Barr 2000), courtesy of Cambridge
University Press)
Fig. 1.2  This 1842 table, prepared by Ada Lovelace, is a trace of the expected operation of
Babbage’s Analytical Engine running a calculation. Each line represents 1 of 25 steps in the
computation (some of them repeated). Most of the columns represent quantities stored in
particular parts of the engine

term is given to the electromechanical control unit in an automatic washing machine,
though we are not sure which came first.) Quickly, however, “program” came to
describe what von Neumann (1993) called “The instructions which govern this oper-
ation” which “must be given to the device in absolutely exhaustive detail.”
“Programmer” became a job title instead of a control unit. Thus “programmer” and
“computer” passed between the domains of human and machine at around the same
time but in opposite directions.
Because each part of an analog computer carried out the same operation throughout
the computation, analog computer users did not originally talk about “programming”
their machines, though as digital computers became more popular, the term
was eventually applied to configuring analog computers. In contrast, digital comput-
ers built under the influence of von Neumann’s text adopted very simple architec-
tures, in which computations proceeded serially as one number at a time was fetched
from memory to be added, subtracted, multiplied, divided, or otherwise manipu-
lated. Such machines possessed a handful of general-purpose logic and arithmetic
capabilities, to be combined and reused as needed for different purposes.
As an automatic computer begins work, its instructions, in some medium or
another, are present within it and its peripheral equipment. If the computer is pro-
grammable, then these instructions are coded in a form that can be changed by its
users. In operation it translates some kind of spatial arrangement of instructions into
a temporal sequence of operations, moving automatically from one task to the next.
Fig. 1.3  A detail from the ENIAC project diagram PX-1-81, circa December 1943. As with the
Babbage and Lovelace table, the rows represent discrete steps in the calculation, and the columns
(32 in the full diagram) represent different calculating units within ENIAC

Early computers stored and arranged these instructions in a variety of media.
ENIAC, the first programmable electronic computer, was wired with literal chains
and branches, along which control pulses flowed from one unit to another to trigger
the next operation. In his chapter, Tristan Thielmann uses ideas from media theory
to explore ENIAC’s user interface, specifically the grids of neon bulbs it used to
display the current content of each electronic storage unit. Because ENIAC could
automatically select which sequence of operations to carry out next, and shifted
between them at great speed, its designers incorporated these lights and controls to
slow down or pause the machine to let its operators monitor its performance and
debug hardware or configuration problems.
The chapter Mark Priestley wrote with me for this volume explores the range of
media used to store programs during the 1940s and the temporal and spatial meta-
phors used to structure these media into what would eventually be called a “memory
space.” Several computers of the mid-1940s read coded instructions one at a time,
from paper tape. These tapes could be physically looped to repeat sequences.
Computers patterned after von Neumann’s conception for EDVAC stored coded
instructions in one or another kind of addressable memory. Whether this was a delay
line, tube memory, or magnetic drum had major implications for the most efficient
way of spacing the instructions over the medium. Like ENIAC, these machines
could branch during the execution of a program, following one or another route to
the next instruction depending on the results of previous calculations. These media
held both instructions and data, laying the groundwork for later systems that
encoded text, audio, and eventually video data in machine-readable digital forms.
The fundamental technology of digital computers and networks takes many dif-
ferent shapes and supports many different kinds of practice. In his chapter, Martin
Campbell-Kelly explores the variety of use practices that grew up around one of the
earliest general-purpose digital computers, the EDSAC. Its users were the first to
load programs from paper tape media into electronic memory, quickly devising a
system that used the computer itself to translate mnemonics into machine code as it
read the tape. The ability of computers to treat their own instructions as digital data
to be manipulated has been fundamental to their proliferation as the universal
machines of the digital age. Some practices from the first computer installations,
such as the preparation of data and instructions in machine-readable digital form, or
the practice of debugging programs by tracing their operation one instruction at a
time, spread with the machines themselves into many communities. Others were
specific to particular areas of scientific practice and remained local.
Almost all of the earliest projects to build automatic digital devices were spon-
sored in some way or another by government money. Charles Babbage was bank-
rolled by the British government, as was Colossus. Konrad Zuse relied on the
patronage of the Nazi regime, while ENIAC was commissioned by the US Army.
With the exception of the (widely misunderstood) connection of what became the
Internet to the military’s interest in building robust networks, the role of the state in
the later exploitation and improvement of digital technology is less widely appreci-
ated. Yet, as William Aspray and Christopher Loughnane show in their chapter in
this volume, the state remained vitally important in structuring the early use of digi-
tal computers as a procurer of digital technologies, a sponsor of research, and a
regulator of labor markets. Their chapter illustrates the contribution broader-based
historical analysis can provide to understanding the spread of digital technology, in
contrast to popular history with its focus on brilliant individuals, as demonstrated by
the title of the recent blockbuster The Innovators: How a Group of Hackers,
Geniuses, and Geeks Created the Digital Revolution (Isaacson 2014).
1.4  Digital Information

The chapters collected here give a new and broader idea of the material culture of
digitality. Nothing is immaterial. Yet there is something special about the relation-
ship of bits to their material representations: different material representations are,
from a certain viewpoint, interchangeable. Digital information can be copied from
one medium to another without any loss of data, and the same sequence of bits can
be recovered from each. Transcribe the text of a book into a text file, save that file,
compress it, email it, download it, and print it out. The text has been represented in
many material forms during this process, but after all those transformations and
transcriptions, one retains the same series of characters. Matthew Kirschenbaum
called this the “formal materiality” of digitality (Kirschenbaum 2007). Discussion
of “digital formats” as alternatives to material media, misleading as it is, captures
something about the truth of this experience.
Claude Shannon’s “mathematical theory of communication” (Shannon and Weaver
1949), popularized as “information theory,” is foundational to our sense of “the
digital” and to the modern sense of “information” (Kline 2006; Soni and Goodman
2017). Maarten Bullynck’s chapter in this volume examines the early adoption and
development of Shannon’s efforts to “synthesize” networks of relay switches from
logical descriptions defined using Boolean algebra. Such circuits provided the
building blocks of digital machines, including early computers. He suggests that it
took a decade of work by practicing engineers, and the creation of new craft prac-
tices and diagramming techniques, to turn his work into a practical basis for digital
electronic engineering. This also reminds us that Shannon’s work had a very spe-
cific institutional and technological context, looking backward to new digital com-
munication techniques developed during WWII and forward to anticipate the
generalization of these as a new basis for routine telecommunication.
Over time, the connection of digitality and information with computer technol-
ogy grew ever stronger and tighter. “Information” had previously been inseparable
from a process in which someone was informed of something. It now became what
Geoff Nunberg (1997) memorably called an “inert substance” that could be stored,
retrieved, or processed. “Information” became a synonym for facts or data – and in
particular for digitally encoded, machine-readable data. This process gained
steam with the spread of the idea of “management information systems” within
corporate computing during the 1960s (Haigh 2001), followed by the widespread
discussion of “information technology” from the 1970s and the introduction of the
job title “chief information officer” for corporate computing managers in the 1980s.
I suspect that the root of all this is in the digital engineering practices used to build
computers and other devices during the 1950s. Shannon-esque digital communica-
tion was taking place within computers, as memory tanks, tape drives, printers, and
processing units swapped signals. This is the context in which it became natural to
think of a process of information occurring without human involvement and, with a
slight linguistic and conceptual slippage, to think of the stored data itself as “infor-
mation” even when it was not being communicated.
1.5  When Was the Early Digital?

Our sense of what, exactly, “early digital” means shifted during our discussions. It
originally appealed as something indicating an era of historical interest, not unlike
the “early modern” period referred to by historians of science. This volume focuses
primarily on a time period from the 1930s to the 1950s, an era that provided the
original locus for work on the history of modern computing. It is during this era that
the concepts of “analog” and “digital” were invented, as were the technologies such
as programmable computers and electronic memories that we associate with “the
digital.” It is the era in which digital computational technologies are most clearly
defined against analog alternatives and a period in which their unmistakable and
sometimes monumental materiality makes it clearest that digitality does not mean
immateriality.
Yet “early digital” has an attractive temporal flexibility and encompasses other
devices that are not always considered to be “computers,” stretching back in time to
Babbage’s planned difference engine and wartime devices such as the codebreaking
Bombes. The phrase initially appealed to me because, like “modern” and “early
modern,” its boundaries are obviously permeable. Tatarchenko, for example, looks
at a late Soviet version of the early digital, which spread during the early 1980s and
centered on a more obviously digital technology: the programmable calculator.
Users coded programs, including games, as sequences of digits displayed on tiny
screens. From the viewpoint of a future in which humans have given up corporeal
existence to live forever in cyberspace, the present day would seem like the very
early digital.
As our thinking evolved over the course of several workshops, we came to think
of “early digital” less as something defining a general epoch in a society and more
as a very local designation describing the transformation of a specific practice
within a specific community. In particular, we do not believe that there was a single
“early digital” epoch or that one can follow those who talk about “digital revolu-
tions” into a view of the world in which a single event or invention creates a univer-
sal rupture between digital and pre-digital worlds.
The first instinct of the responsible historian is to challenge assumptions of
exceptionalism, whether made for nations or for technologies. Discourses of the
kind Gabrielle Hecht (2002) termed “rupture talk” have grown up around many new
technologies. These claim that the technology represents a break with all prior prac-
tice so dramatic that historical precedents are irrelevant. The now fashionable idea
of the “postdigital” is likewise premised on the idea that we are currently on the far
side of some kind of digital rupture.
Recognizing that rhetoric of a “digital transformation” parallels claims made for
nuclear power or space exploration as the defining technology of a new epoch, the
careful historian should begin with a default assumption that computer technology
is neither exceptional nor revolutionary. Yet all around us, we see the rebuilding of
social practices around computers, networks, and digital media. Even the most care-
ful historian might be moved to entertain the hypothesis that some kind of broadly
based “digital transformation” really is underway. The challenge is to find a point of
engagement somewhere between echoing the naïve boosterism of Silicon Valley
(Kirsch 2014) and endorsing the reflex skepticism of those who assume that digital
technology is just a novel façade for the ugly business of global capitalism and neo-
liberal exploitation.
As the papers gathered in this volume begin to suggest, technologies and prac-
tices did not become digital in a single world-historic transformation sometime in
the 1940s or 1950s (or the 1980s or 1990s), but in a set of localized and partial transformations
enacted again and again, around the world and through time, as digital
technologies were adopted by specific communities. Within those communities, one
can further segment the arrival of the early digital by task. The EDSAC users discussed
by Campbell-Kelly were using a digital computer to solve equations, reduce
data, and run simulations, but it would be decades before they could watch digital
video or count their steps digitally. From this viewpoint, the early digital tag indi-
cates the period during which a human practice is remade around the affordances of
a cluster of digital technologies.
The early digital is also structured geographically. For most of humanity, it
arrived within the past 5 years, with the profusion of cheap smartphones. The poor-
est billion or two people are still waiting for it to begin.

1.6  Warming Up to the Early Digital

Our opportunity as historians of computing confronting a so-called “digital revolution”
is to explain, rigorously and historically, what is really different about digital
electronic technology, how the interchangeability of digital representations has
changed practices in different areas, and how the technological aspects of digital
technologies have intertwined with political and social transformations in recent
decades. This means taking the “digital” in “digital computer” as seriously as the
“computer” part.
At one of the workshops in the Early Digital series, the phrase “I’m warming up
to the early digital” was repeated by several participants, becoming, by the end of the
event, a kind of shared joke. The new phrase was beginning to feel familiar and use-
ful as a complement to more established alternatives such as “history of computing”
and “media history.”
The identity of “history of computing” was adopted back in the 1970s, at a time
when only a small fraction of people had direct experience with digital electronic
technologies. Its early practitioners were computer pioneers, computer scientists,
and computer company executives – all of whom identified with “computing” as a
description of what people did with computers as well as with “the computer” as a
clearly defined artifact.
The history of computing community devoted a great deal of its early energy to
deciding what was and what was not a computer, a discussion motivated largely by
the desire of certain computer pioneers, their family members, and their friends to
name an “inventor of the computer.” As I have discussed elsewhere (Haigh et al.
2016), this made the early discourse of the field a continuation of the lawsuits and
patent proceedings waged since the 1940s. Though these disputes alas continue to
excite some members of the public, they have little to offer scholars and were
resolved to our satisfaction (Williams 2000) by issuing each early machine with a
string of adjectives its fans were asked to insert between the words “first” and
“computer.”
This did not resolve the larger limitation of “the history of computing” as an
identity, which is that it makes some questions loom disproportionately large while
banishing others from view. Our inherited focus on the question of “what is a com-
puter,” combined with the fixation of some historically minded computer scientists
and philosophers on “Turing completeness,” has forced a divorce between closely
related digital devices. Digital calculators, for example, have been discussed within
the history of computing largely for the purposes of discounting them as not being
computers, and therefore not being worthy of discussion. Yet, as Tatarchenko’s
chapter shows, electronic calculators (the most literally digital of all personal elec-
tronic devices) shared patterns of usage and practice, as well as technological com-
ponents, with personal computers.
Neither can the history of computing cut itself off from other historical com-
munities. Computing, informing, communicating, and producing or consuming
media can no longer be separated from each other. Thirty or forty years ago, that
statement might have been a provocative claim, made by a professional futurist
or media savvy computer scientist looking for a lavish book deal. Today that digital
convergence is the taken for granted premise behind much of modern capitalism,
embodied in the smartphones we carry everywhere. Yet the histories of these differ-
ent phenomena occupy different literatures, produced by different kinds of histori-
ans writing in different journals and in many cases working in different kinds of
academic institution. “Information history” (Black 2006; Aspray 2015) happens for
the most part within information schools, media history within media studies, and
so on.
My own recent experience in writing about the 1940s digital electronic code-
breaking machine Colossus makes clear the distorting effect this boundary mainte-
nance has had on our historical understanding. Since its gradual emergence from
behind government secrecy in the 1970s, Colossus has been claimed by its vocal
proponents (Copeland 2013) to have been not just a computer, in fact the first fully
operational digital electronic computer, but also programmable. These claims, first
made (Randell 1980) at a time when its technical details were less well documented
than they are today, do not hold up particularly well – the basic sequence of operations
of the Colossus machines was fixed in their hardware, and they could carry out
no mathematical operation more complicated than counting. So “computer” is a bad
fit, whether applied according to the usage of the 1940s (carrying out complicated
series of numerical operations) or that of later decades (programmable general-­
purpose machines). The Colossus machines did, however, incorporate some com-
plex electronic logic and pioneer some of the engineering techniques used after the
war to build early electronic computers. Their lead engineer, Tommy Flowers, spent
his career in telecommunications engineering dreaming of building an all-electronic
telephone exchange (Haigh 2018). Decades later, he was still more comfortable
viewing the devices as “processors” rather than computers. They applied logical
operations to transform and combine incoming bitstreams, anticipating some of the
central techniques behind modern digital communications. The Colossus machines
played an appreciable, if sometimes exaggerated, part in determining the course of
the Second World War. Within the frame of the “history of computing,” however,
they matter only if they can somehow be shoehorned into the category of computers,
which has motivated a great deal of special pleading and fuzzy thinking. Positioning
them, and other related machines used at Bletchley Park, as paradigmatic technolo-
gies of the “early digital” requires no intellectual contortions.
The “early digital” is also a more promising frame than “the history of comput-
ing” within which to examine digital networking and the historical development of
electronic hardware. Historians of computing have had little to say about the history
of tubes, chips, or manufacturing – these being the domain of the history of engi-
neering. While important work (Bassett 2002; Lecuyer 2006; Thackray et al. 2015)
has been done in these areas by scholars in close dialog with the history of comput-
ing, the material history of digital technologies has not been integrated into the
mainstream of the history of technology. Overview accounts such as  (Campbell-
Kelly and Aspray 1996) have focused instead on computer companies, architec-
tures, crucial products, and operating systems.
The need to engage with the history of digital communications is just as impor-
tant. Perhaps mirroring the origins of the Internet as a tool for the support of com-
puter science research, scholarly Internet history such as Abbate (1999) and Russell
(2014) has fallen inside the disciplinary wall surrounding the computer as a histori-
cal subject. On the other hand, the history of mobile telephony, which in the late
1990s becomes the story of digitally networked computers, has not. At least in the
United States (and particularly within the Society for the History of Technology),
historians of communications have so far focused on analog technologies – though
anyone following the story of telephony, television, and radio or the successors to
telegraphy past a certain point in time will have to get to grips with the digital. So
far, though, the history of computing and history of communications remain largely
separate fields despite the dramatic convergence of their respective technologies and
industries since the 1980s. Some scholars within media and communication studies
have been more engaged than historians of computing in situating digital technolo-
gies within broader political contexts (Mosco 2004; Schiller 2014), something from
which we can surely learn.
Media studies and media archaeology (Parikka 2011) have their own active areas
of historical enquiry. This work is often written without engagement with the his-
tory of technology literature and, in some cases, such as (Brügger 2010), has delib-
erately eschewed the disciplinary tools and questions of history to situate the
exploration of “web history” as a kind of media studies rather than a kind of history.
Enthusiasm for “platform studies” (Montfort and Bogost 2009) has similarly pro-
duced a new body of historical work only loosely coupled to the history of ­computing
and history of technology literatures. Neither have the authors of path-breaking
work at the intersection of digital media studies, cultural studies, and literature
such as (Chun 2011) and (Kirschenbaum 2016) found it useful to identify as histo-
rians of computing. The “history of computing” does not resonate in most such
communities, whereas study of “the early digital” may be more successful in forg-
ing a broader scholarly alliance.

1.7  Conclusions

As the computer dissolves itself into a digital mist, we are presented with a remark-
able opportunity to use lessons from decades of scholarship by historians of com-
puting to bring rigor and historical perspective to interdisciplinary investigation of
“the digital.” By embracing, for particular questions and audiences, the frame of the
early digital as a new way of looking at the interaction of people with computers,
networks, and software, we can free our work from its historical fixation on “the
computer” as a unit of study.
The papers gathered here form a series of provocative steps in this direction,
spreading out from different starting points to explore different parts of this new
terrain. Our position recalls Michael Mahoney’s description, a generation ago, of
the challenge faced by the first historians of computing: “historians stand before the
daunting complexity of a subject that has grown exponentially in size and variety,
looking not so much like an uncharted ocean as like a trackless jungle. We pace on
the edge, pondering where to cut in.” (Mahoney 1988) Today we face in “the digi-
tal” a still larger and more varied jungle, but like so many of the exotic places that
daunted Western explorers, it turns out to be already inhabited. To comprehend it
fully, we will need to find ways to communicate and collaborate with the tribes of
scholars who have made their homes there.

References


Abbate, Janet. Inventing the Internet. Cambridge, MA: MIT Press, 1999.
Aspray, William. “The Many Histories of Information.” Information & Culture 50, no. 1 (2015):
Barr, Nicholas. “The History of the Phillips Machine.” In A.W.H.  Phillips: Collected Works in
Contemporary Perspective, 89–114. New York, NY: Cambridge University Press, 2000.
Bassett, Ross Knox. To The Digital Age: Research Labs, Start-Up Companies, and the Rise of
MOS Technology. Baltimore: Johns Hopkins University Press, 2002.
Black, Alistair. “Information History.” Annual Review of Information Science and Technology 40
(2006): 441–473.
Blanchette, Jean-François. “A Material History of Bits.” Journal of the American Society for
Information Science and Technology 62, no. 6 (2011): 1042–1057.
Brügger, Niels, ed. Web History. New York: Peter Lang, 2010.
Campbell-Kelly, Martin, and William Aspray. Computer: A History of the Information Machine.
New York, NY: Basic Books, 1996.
Chun, Wendy Hui Kyong. Programmed Visions: Software and Memory. Cambridge, MA: MIT
Press, 2011.
Copeland, B. Jack. Turing: Pioneer of the Information Age. New York, NY: Oxford University
Press, 2013.
Ensmenger, Nathan. “The Digital Construction of Technology: Rethinking the History of
Computers in Society.” Technology and Culture 53, no. 4 (October 2012): 753–776.
Haigh, Thomas. “Inventing Information Systems: The Systems Men and the Computer, 1950–
1968.” Business History Review 75, no. 1 (Spring 2001): 15–61.
———. “Thomas Harold (“Tommy”) Flowers: Designer of the Colossus Codebreaking Machines.”
IEEE Annals of the History of Computing 40, no. 1 (January–March 2018): 72–78.
———. “We Have Never Been Digital.” Communications of the ACM 57, no. 9 (September 2014):
Haigh, Thomas, and Mark Priestley. “Where Code Comes From: Architectures of Automatic
Control from Babbage to Algol.” Communications of the ACM 59, no. 1 (January 2016):
Haigh, Thomas, Mark Priestley, and Crispin Rope. ENIAC In Action: Making and Remaking the
Modern Computer. Cambridge, MA: MIT Press, 2016.
Hecht, Gabrielle. “Rupture-talk in the Nuclear Age: Conjugating Colonial Power in Africa.” Social
Studies of Science 32, no. 6 (December 2002).
Isaacson, Walter. The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the
Digital Revolution. New York: Simon and Schuster, 2014.
Kirsch, Adam. “Technology is Taking Over English Departments: The False Promise of the Digital
Humanities.” The New Republic (May 2 2014).
Kirschenbaum, Matthew. Mechanisms: New Media and the Forensic Imagination. Cambridge,
MA: MIT Press, 2007.
Kirschenbaum, Matthew G. Track Changes: A Literary History of Word Processing. Cambridge,
MA: Harvard University Press, 2016.
Kittler, Friedrich A. Gramophone, Film, Typewriter. Stanford, CA: Stanford University Press,
Kline, Ronald. The Cybernetics Moment, Or Why We Call Our Age the Information Age. Baltimore: Johns Hopkins University Press, 2015.
Kline, Ronald R. “Cybernetics, Management Science, and Technology Policy: The Emergence of
‘Information Technology’ as a Keyword, 1948–1985.” Technology and Culture 47, no. 3 (June
2006): 513–535.
Lamont, Tom. “‘You Can’t Judge a Generation’s Taste’: Making Now That’s What I Call Music.” The Guardian, 23 June 2018.
Lécuyer, Christophe. Making Silicon Valley: Innovation and the Growth of High Tech, 1930–1970.
Cambridge, MA: MIT Press, 2006.
Mahoney, Michael S. “The History of Computing in the History of Technology.” Annals of the
History of Computing 10, no. 2 (April 1988): 113–125.
Mindell, David A. Between Human and Machine: Feedback, Control, and Computing Before
Cybernetics. Baltimore: Johns Hopkins University Press, 2002.
Montfort, Nick, and Ian Bogost. Racing the Beam: The Atari Video Computer System. Cambridge,
MA: MIT Press, 2009.
Mosco, Vincent. The Digital Sublime: Myth, Power, and Cyberspace. Cambridge, MA: MIT Press, 2004.
Nunberg, Geoffrey. “Farewell to the Information Age.” In The Future of the Book, 103–138.
Berkeley: University of California Press, 1997.
Parikka, Jussi. “Operative Media Archaeology: Wolfgang Ernst’s Materialist Media Diagrammatics.” Theory, Culture & Society 28, no. 5 (September 2011): 52–74.
Randell, Brian. “The Colossus.” In A History of Computing in the Twentieth Century, edited by N. Metropolis, J. Howlett and Gian-Carlo Rota, 47–92. New York: Academic Press, 1980.
Russell, Andrew L. Open Standards and the Digital Age: History, Ideology and Networks.
New York, NY: Cambridge University Press, 2014.
Schiller, Dan. Digital Depression: Information Technology and Economic Crisis. Champaign, IL:
University of Illinois Press, 2014.
Schröter, Jens, and Alexander Böhnke, eds. Analog/Digital – Opposition oder Kontinuum? Zur Theorie und Geschichte einer Unterscheidung. Bielefeld, Germany: Transcript, 2004.
Shannon, Claude E., and Warren Weaver. The Mathematical Theory of Communication. Urbana: University of Illinois Press, 1949.
Soni, Jimmy, and Rob Goodman. A Mind at Play: How Claude Shannon Invented the Information
Age. New York, NY: Simon & Schuster, 2017.
Thackray, Arnold, David Brock, and Rachel Jones. Moore’s Law: The Life of Gordon Moore, Silicon Valley’s Quiet Revolutionary. New York: Basic Books, 2015.
von Neumann, John. “First Draft of a Report on the EDVAC.” IEEE Annals of the History of
Computing 15, no. 4 (October 1993): 27–75.
Williams, Michael R. “A Preview of Things to Come: Some Remarks on the First Generation of
Computers.” In The First Computers: History and Architectures, edited by Raúl Rojas and Ulf
Hashagen, 1–16. Cambridge, MA: MIT Press, 2000.
Chapter 2
Inventing an Analog Past and a Digital
Future in Computing

Ronald R. Kline

Abstract  This chapter discusses why the venerable words analog and digital were appropriated by the inventors of numerical computers in the United States in the 1940s to name different types of computing machines, what alternatives were proposed, how the two terms became paired keywords, and why closure occurred so quickly in the United States (by 1950). It then examines the different ways in which the digital and analog engineering cultures interpreted the terms in the 1950s and 1960s, and speculates about why concerns raised at the 1950 Macy conference on cybernetics, and by a few other computer engineers, that the terms were vague and that analog was not the logical opposite of digital, were ignored. Finally, I comment on the relatively weak progress narrative of analog vs. digital computers in science journalism up to the early 1970s, even though scientists and engineers had appropriated the terms to distinguish between an analog past (Vannevar Bush’s differential analyzer) and a digital future (the ENIAC).

To paraphrase Evelyn Fox Keller’s argument on the role of metaphors in creating the field of genetics (Fox-Keller 1995, pp. 9, 10), we can say for computing that
scientists and engineers did more than develop the techniques and practices of elec-
tronic computing during World War II and the early Cold War. They also forged a
way of talking that framed the future course of research in computers and computer
networks. By adapting the venerable words analog and digital to classify all com-
puting machines, they invented a discourse that spread quickly from computers to
signal processing. When journalists embraced this language for new media in the
1990s, analog and digital shed their technical referents to become the keywords of
our time (Sterne 2016; Peters 2016).
This chapter recounts an early part of that history by examining the contested
origins and spread of the analog-digital discourse in computing in the United States
from about 1940 to 1970. The discourse communities (Oldenziel 1999) I study—
scientists and engineers in academia, government, and industry—did not simply

R. R. Kline (*)
Department of Science and Technology Studies, Cornell University, Ithaca, NY, USA


accept the new terms coming out of wartime laboratories. They questioned whether
analog and digital were appropriate terms for a new taxonomy, proposed alterna-
tives, and interpreted them from the point of view of distinct computing cultures
(Small 2001, Chap. 7; Mindell 2002). Yet they reached rhetorical closure in a
remarkably short time.
The new vocabulary was embedded in a technological progress narrative (Nye
2003) which predicted that an analog past would give way to a digital future in
computing. I argue that this progress narrative was understated and challenged
when scientific and engineering communities discussed the merits of analog versus
digital computers and that the popular press transformed it into a robust progress narrative.

2.1  Pairings

The setting in which analog and digital were yoked together was the development
of electronic computers for calculating artillery firing tables and controlling the
aiming of guns, under the auspices of the US Army’s Ballistic Research Laboratory
and the National Defense Research Committee (NDRC) during World War II. By all
accounts (Burks and Burks 1988, p. 124; Mindell 2002, pp. 295, 387; Ceruzzi 2012,
pp. 1–2), the early computer builders who first proposed the terms analog and digi-
tal for computing—physicist John Atanasoff at Iowa State College, physicist John
Mauchly at Ursinus College outside of Philadelphia, and mathematician George
Stibitz at the American Telephone and Telegraph Company (AT&T)—created this
taxonomy to distinguish their new (digital) machines from the (analog) machines,
such as Vannevar Bush’s differential analyzer and the Sperry Corporation’s gun
directors, that were calculating firing tables and aiming guns at the start of the war.
All three men became well-known in the history of computing for inventing pio-
neering (digital) computers: the Atanasoff-Berry Computer (ABC machine);
Mauchly’s and J.  Presper Eckert, Jr.’s, Electronic Numerical Integrator and
Computer (ENIAC); and Stibitz’s Bell relay computers (Burks and Burks 1988;
Haigh et al. 2016; Ceruzzi 1983). Atanasoff and Mauchly became entwined in com-
puter history during the extensive US patent litigation that ensued in the 1970s over
the invention of the electronic computer.
By 1950, scientists, engineers, and companies in the emerging US computer
industry had adopted the meanings Atanasoff, Mauchly, and Stibitz had initially
given to digital and analog to classify all computing machines, past and present.
Digital computers, such as the ENIAC and desk calculators, solved equations by
representing numbers as discrete physical quantities (such as electrical pulses and
the positions of gears), which were counted to give the result. Analog machines,
such as differential analyzers, electrical network analyzers, and harmonic analyzers,
solved equations by representing numbers as the magnitudes of continuous physical
quantities (such as electrical currents and the positions of rotating shafts), which
were measured to give the result (Hartree 1947).
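Hartree's counting-versus-measuring distinction can be sketched in a few lines of simulation. This is purely illustrative (the function names and the 0.01 error figure are my own assumptions, not from the chapter): a "digital" machine manipulates discrete digits exactly, while an "analog" machine returns a measured magnitude whose accuracy is bounded by the instrument.

```python
import random

def digital_add(a_digits, b_digits):
    """'Digital': numbers held as sequences of discrete decimal digits
    (like pulses or gear positions) and combined by exact counting."""
    a = int("".join(map(str, a_digits)))
    b = int("".join(map(str, b_digits)))
    return [int(d) for d in str(a + b)]

def analog_add(a, b, measurement_error=0.01):
    """'Analog': numbers held as continuous magnitudes (say, voltages);
    the physical analogy forms the sum, but the result must be *measured*,
    so accuracy is bounded by the instrument."""
    true_sum = a + b
    return true_sum + random.uniform(-measurement_error, measurement_error)

print(digital_add([2, 7], [1, 5]))  # [4, 2] -- exact and repeatable
print(analog_add(2.7, 1.5))         # roughly 4.2, within the error bound
```

The contrast is the point: the digital result is identical on every run, while the analog result varies within the resolution of the (simulated) measuring instrument.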
Analog appears before digital in this discourse. Since as early as the 1930s,
builders of differential analyzers and electrical network analyzers in the United
States had used analog as a noun to refer to the fact that their machines worked by
creating an analog (i.e., an analogy) of the equations to be solved or the system to
be simulated (Bush 1936; Skilling 1931). In 1940 Atanasoff had employed analog
in that manner in considering alternatives to the proposed design of his ABC
machine (Atanasoff 1940). It was not a big step then for analog to be used as an
adjective for computers in correspondence between Atanasoff and Mauchly in early
1941. The two men had met at a recent scientific meeting in Philadelphia, where
they discovered a common interest in automating the calculation of equations in
scientific research. At the time Atanasoff was working on the ABC machine, which
computed by representing numbers as discrete variables, while Mauchly was build-
ing harmonic analyzers, which computed by representing numbers as continuous
variables (Burks and Burks 1988).
In January 1941 Atanasoff wrote Mauchly that Samuel Caldwell, the head of the
differential analyzer center at the Massachusetts Institute of Technology (MIT)
(Mindell 2002, Chap. 5), had recently visited him in Iowa on wartime business with
the fire control division of the NDRC (Burks and Burks 1988, pp. 122–125). “His
visit,” wrote Atanasoff, “gave me the urge to attempt the construction of a differen-
tial analyzer on a dime-store basis,” which he was putting aside until he finished the
ABC machine. Mauchly replied in February that he had read an article in Nature by
a British physicist (Douglas Hartree) who had built an analyzer out of Meccano
parts (akin to an erector set) and thus thought the “‘dime-store’ analyzer ought to be
successful.” “Incidentally,” Mauchly continued, “do you consider the usual
d[ifferential] analyzer an ‘analogue’ machine? If so, how about a polar planimeter?”
(a mechanical device that calculated areas). Atanasoff replied that he had an “idea
as to how the [ABC] computing machine which we are building could be converted
into an integraph [for the war effort]. Its action would be analogous to numerical
integration and not like that of the Bush integraph [which was in the same family as
the planimeter] which is, of course, an analogue machine.”1 The Oxford English
Dictionary (OED 2018) cites Mauchly’s February letter, extracts of which were
published in the 1980s, as the first use of analog as an adjective for a computing machine.
In June 1941 Mauchly visited Atanasoff in Iowa. He came away impressed with
the ABC’s circuitry but was disappointed that it was not fully electronic. That
August, Mauchly, who had applied to be an instructor at the Moore School of
Electrical Engineering at the University of Pennsylvania (Akera 2006 pp. 82–84),
wrote an entry in his notebook titled “Notes on Electrical Calculating Devices.” In
a footnote, he wrote, “I am indebted to Dr. J. V. Atanasoff of Iowa State College for

1 John Atanasoff to John Mauchly, Jan. 23, 1941, PX 676; Mauchly to Atanasoff, Feb. 24, 1941, PX
699; and Atanasoff to Mauchly, May 31, 1941, PX 773, all in ENIAC Trial Records, University of
Pennsylvania Library, microfilm, Reel 9. They are partially quoted in (Mauchly 1984,
pp. 126–128).
the classification and terminology here explained.” In that terminology, Mauchly classified computing machines “as either ‘analog’ or ‘impulse types.’” He explained that the:
that the:
analog devices utilize some sort of analogue or analogy, such as Ohm’s Law [of electrical
circuits] or the polar planimeter mechanism, to effect a solution of a given equation. The
accuracy of such devices is obviously subject to limitations; at times the attainable is more
than sufficient, but there is increasing need for more computational aid not so restricted.
Impulse devices comprise all those which “count” or operate upon discrete units corre-
sponding to the integers of some number system. There is no theoretical limit to the accu-
racy to which such devices will work; practical limitations on the bulk or cost or convenience
of operation provide the only restrictions. The usual mechanical computing machine [e.g.,
a desktop adding machine], utilizing gears, pauls, etc., are [sic] examples of impulse

Arguing from these principles, Mauchly then stated the earliest instance I have
found of the progress narrative of analog vs. digital in regard to accuracy.2 “No fur-
ther attention will be given to the analog type here, for although differential analyz-
ers and other analog machines are now used and will continue to be used for some
problems, it is in the field of impulse machines that major improvements in speed
and accuracy are to be expected and sought for.”3
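Mauchly's accuracy claim (no theoretical limit for "impulse" machines, only cost and bulk) can be illustrated with arbitrary-precision decimal arithmetic. The sketch below is my own illustration; the figure of roughly 1 part in 10,000 for a good analog machine of the era is an assumed order of magnitude, not a number from the chapter.

```python
from decimal import Decimal, getcontext

# A digital machine gains accuracy just by carrying more digits -- more
# hardware and more time, but no new principle is needed.
for digits in (5, 15, 45):
    getcontext().prec = digits
    print(f"{digits:2d} digits: {Decimal(2).sqrt()}")

# An analog device, by contrast, is capped by how finely a physical
# magnitude can be measured (assumed, order-of-magnitude figure).
ANALOG_RESOLUTION = Decimal(1) / Decimal(10_000)
print(f"analog floor: about {ANALOG_RESOLUTION} of full scale")
```

Each pass through the loop reuses the same algorithm at higher precision, which is exactly the sense in which digital accuracy faces only "practical limitations on the bulk or cost or convenience of operation."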
Within a year, in the spring of 1942, two scientists connected with the fire control
division of the NDRC proposed alternatives to the paired terms analog and impulse
when discussing the feasibility of developing electronic (i.e., vacuum tube) comput-
ers to replace electrical and mechanical ones for antiaircraft directors. In April, Jan
Rajchman at the Radio Corporation of America (RCA), a contractor to the NDRC,
used the terms continuous and numerical to distinguish between the older and newer
types of computers (Rajchman et al. 1942). That same month, George Stibitz sug-
gested the terms analog and digital when he analyzed electronic computer designs
for the NDRC. On loan from AT&T to the government for the war effort, as a tech-
nical aide to the NDRC, Stibitz had invented in 1939 a relay computer, the Complex
Number Computer, to do calculations at Bell Labs (Ceruzzi 1983, pp. 84–93). In a
confidential memo, Stibitz commented on a meeting in which MIT, RCA, Bell
Labs, and Eastman Kodak presented proposed designs to the NDRC for an elec-
tronic fire control computer (Williams 1984, pp. 290–316; Mindell 2002, pp. 293–
296). Stibitz began his memo, entitled “Digital Computation for A. A. [Antiaircraft]
Directors,” by writing that “computing mechanisms have been classified as ‘analog’

2 After the war, Mauchly (1947–1948) and other digital proponents often combined the advantages
of electronics with those of numerical computation when they compared their machines to analog
electromechanical computers and electrical network analyzers. But with the advent of electronic
analog computers in the early 1950s, digital enthusiasts focused on the accuracy, data processing
capability, and flexibility of their computers, rather than on the speed of electronics (e.g., Ridenour
1952). Electronics was the subject of a separate progress narrative, which was apparent, for exam-
ple, in IBM ads for business machines (IBM 1949, 1950). I thank Tom Haigh for the insight about
a separate electronic progress narrative.
3 John Mauchly, “Notes on Electrical Calculating Devices,” Aug. 15, 1941, ENIAC Trial Records,
Reel 9, PX 846, partially quoted in (Mauchly 1984, p. 131). The word “speed” is probably an allu-
sion to vacuum tube electronics.
or as ‘pulse’ computers. The latter term seems to me less descriptive than the term
‘digital.’ All directors in use now are of the former type; that is, the value of each
variable in the computation is represented in the mechanism by the magnitude of a
physical quantity such as length, voltage, speed, etc. It has been suggested from
time to time that digital calculation, such as that performed by adding and calculat-
ing machines might be used in the A. A. Director, with advantage.”4 He used analog
and digital to compare the competing proposals throughout the lengthy memo.
It is unclear whether or not Stibitz had heard the terminology of analog and pulse
from Atanasoff, who consulted for the fire control division of the NDRC in the
spring and fall of 1941.5 The term pulse (or impulse), however, was common in the
NDRC. The head of its fire control division, Warren Weaver, an applied mathemati-
cian on loan from the Rockefeller Foundation, referred to the proposed AA comput-
ers as “impulse electronic computing devices” in a letter to the RCA labs in January
1942.6 Stibitz did not explain why he thought digital was more descriptive than
pulse for a taxonomy of computers. He may have chosen digital because it had long
referred to numerical digits, no matter how they were physically represented (OED
2018). He may have thought that pulse was too specific, referring to electrical sig-
nals. That did not, however, bother Mauchly.
It took awhile for the NDRC’s fire control division to adopt Stibitz’s terminol-
ogy. Weaver, for example, used the terms analog and numerical in official docu-
ments to describe AA computer designs in July 1942.7 The new head of the fire
control division, Harold Hazen from MIT’s servo lab, used the awkward phrase
“discrete number electronic calculating device” instead of “digital computer” to
refer to RCA’s electronic director in February 1943.8 Stibitz popularized the paired
terms digital and analog within the NDRC by writing a series of memos in 1943
about the Relay Interpolator, a computer he had developed at Bell Labs (Mindell
2002, p. 303). He began one memo in June 1943 by stating matter-of-factly that “the
two fundamentally different methods of solving numerical problems have been
called the analog method and the digital method; both are applicable to the solution
of differential equations.”9 Stibitz continued to push his terminology in a classified

4 George Stibitz, “Digital Computation for A. A. Directors,” April 23, 1942, Record Group 227,
Division 7 Records, Office of Scientific Research and Development, National Archives and
Records Administration, College Park, MD (hereafter OSRD-7), Office Files of Warren Weaver,
1940–1946, entry 83, box 2, partially quoted in (Williams 1984, p. 310).
5 Warren Weaver to John Atanasoff, Oct. 15, 1941, Norbert Wiener Papers, Institute Archives and
Special Collections, MIT Libraries, Cambridge, MA, box 4–61; and Burks and Burks (1988,
pp. 125–126).
6 Warren Weaver to V. K. Zworykin, January 20, 1942, OSRD-7, Office Files of Warren Weaver,
1940–1946, entry 83, box 2, partially quoted in (Mindell 2002, p. 292).
7 Warren Weaver to V. K. Zworykin, et al., July 21, 1942, OSRD-7, Office Files of Warren Weaver,
1940–1946, entry 83, box 2.
8 Harold Hazen to Paul Klopsteg, Feb. 6, 1943, OSRD-7, General Project Files, 1940–1946, entry
86-A, project 48, box 40.
9 George Stibitz, “Relay Interpolator as a Differential Analyzer,” June 19, 1943, OSRD-7, General
Project Files, 1940-1946, entry 86-A, box 50.
report on “relay computing,” published by the NDRC in February 1945, where he paid a good deal of attention to terminology. Distinguishing between the terms calculator, computer (machines, not humans), and computing system, he noted that
“computing devices may be classified in two groups, which have been called,
respectively, continuous or analogue and digital computers.” In a section comparing
the two types of computers, he applied the new terms retrospectively to such earlier
devices as slide rules (analog) and adding machines (digital), but did not engage in
the progress narrative of digital replacing analog (Stibitz 1945a, p. 2). In a talk on
relay computers at Brown University that same month, Stibitz classified computers
as “continuous or analog” and “discrete or digital,” concluding that the “continuous
computers are mechanizations of geometry, while the discrete computers are mech-
anizations of number theory” (Stibitz 1945b, p. 1).
The developers of the ENIAC picked up the word digital by 1945. They didn’t
use digital (or analog) in a draft report prepared for the Army’s Ballistic Research
Laboratory, which would fund the computer, in April 1943.10 But mathematician
John von Neumann at Princeton, who consulted on the project, extensively employed
the term digital in an influential report on how to design a follow-up computer to the
ENIAC in June 1945. That document, the First Draft of a Report on the EDVAC,
described the stored-program technique that became the basis of digital computing
(Aspray 1990, Chap. 2; Haigh et al. 2016, Chaps. 6 and 11). In November 1945,
shortly before the ENIAC was publicly unveiled, a classified NDRC report on this
computer—written by its principal designers, John Mauchly and electronic special-
ist J. Presper Eckert, Jr., and their colleagues at the Moore School—embraced the
digital part of the new taxonomy of computing machines (Eckert et al. 1945).
The rapid declassification of these and other reports spread the new vocabulary
of analog and digital in scientific and engineering journals, a vocabulary that had
circulated in the NDRC and its academic and industrial contractors during the war.
In this terminology, analog and digital marked both old and new computing
machines, distinguishing between those that calculated by measuring continuous
variables (e.g., slide rules, network analyzers, and differential analyzers) and those
that calculated by counting discrete variables (e.g., adding machines, Bell Labs
relay computers, and the ENIAC). Conferences held on the wartime computers in
1946 and 1947, showcasing the Mark I at Harvard and the ENIAC at Penn, helped
popularize this way of speaking to a wider audience (Campbell-Kelly and Williams
1985; Harvard University Computational Laboratory 1948).
Computer vendors also popularized the analog-digital discourse. My survey of
computer advertisements in Scientific American from 1949 to mid-1953 shows that
early computer firms in the United States—engineering start-up companies such as
Engineering Research Associates (ERA) and electronic companies such as RCA
and Raytheon (Norberg 2005, Introduction)—used the terms digital and analog in
product ads (ERA 1952; Raytheon 1952) and in help-wanted ads for engineers to
design and build computers (RCA 1952). In contrast, prominent business machine
firms such as IBM and Remington Rand did not use either term in their product ads,

10 Moore School of Electrical Engineering, “Report on an Electronic DIFF*. Analyzer,” April 2, 1943, ENIAC Trial Records, reel 15, PX 1431.

despite the fact that IBM advertised its first stored-program computer, the 701, and
Remington Rand advertised the UNIVAC in this period (IBM 1953; Remington
Rand 1952). Two electronic analog computer vendors—the Reeves Instrument
Company and George A. Philbrick Researches, Inc. (Small 2001)—used only the
term analog in their ads (Reeves 1949; George A. Philbrick 1949), in keeping with
the tradition of the analog engineering culture. All of these ads helped spread the
analog-digital vocabulary that Mauchly, who was now with Remington Rand, and
other scientists and engineers had created.

2.2  Variety

Yet the linguistic patterns in different computing communities varied a great deal in
the United States during the early Cold War period. Those working in the engineer-
ing culture of network and differential analyzers, pioneered at MIT, were slow to
adopt the terms analog and digital. A 1945 article by Bush and Caldwell on the
Rockefeller Differential Analyzer, an upgrade to Bush’s machine (Owens 1996), did
not use either term.11 In fact, Caldwell and Norbert Wiener, the MIT mathematician
who founded the science of cybernetics, proposed the alternative terminology of
measurement devices and counting devices (for analog and digital, respectively) at
this time.12
In contrast, engineers who built general-purpose machines readily adopted the
adjective analog for their new computers, and did not pair it with digital. As noted
previously, they had used analog as a noun in the 1930s, so applying it as an adjec-
tive apparently did not seem like a big step, nor a derogatory one, to them. This
usage appears as early as 1945 when a report of a computer conference noted that
Sibyl N.  Rock, a mathematician at the Consolidated Engineering Corporation in
California, “described an electric analogue computer used to solve a system of
simultaneous linear equations” on mass spectroscopy (Daus 1945, p. 416).13 In the
late 1940s and early 1950s, engineers at the California Institute of Technology, MIT,
and the Westinghouse Electric Company referred to the machines they had designed
as general-purpose analog computers (Anonymous 1947; McCann 1949; Hall 1950;
Small 2001, pp. 95–98; Harder and McCann 1948). They were probably imitating
the claim made by the designers of the ENIAC in 1945 that it was the “first general
purpose automatic electronic digital computing machine” (Eckert et  al. 1945,
Introduction-I).14 Edwin Harder at Westinghouse called his room-sized machine,
which could solve differential equations for a wide variety of engineering fields, a

11 Bush and Caldwell (1945, p. 258), did use analog in the 1930s sense to describe the basic func-
tion of the analyzer.
12 Goldstine and von Neumann (1946, p. 320), refer to this as the “Wiener-Caldwell terminology”
and say it provides a “more suggestive expression” than analogy.
13 This is the OED’s first citation under “analog computer.”
14 On the use of “general purpose” for digital computers in other early publications, see, e.g.,
Goldstine and Goldstein (1946), on p.  97, and an article by John Brainerd in the Pennsylvania
Gazette for 1946; see Haigh et al. (2016), p. 241.
General-Purpose Electric Analog Computer or Anacom for short (Aspray 1993). Engineers at Bell Labs designed the electronic General Purpose Analog Computer
(GPAC, known as Gypsy), which the Bell company used as a flight simulator and to
calculate the design equations for the Nike surface-to-air missile. They established
an analog computing culture in a laboratory known for its digital prowess in design-
ing relay computers (Curie 1951; Small 2001, p.  117). None of these engineers
referred in print to the analog-digital progress narrative.
Those working in the younger engineering culture of electronic numerical
machines were more likely to adopt the vocabulary of analog and digital, which
their colleagues had coined, but they employed a wide variety of terms before 1950.
The authors of the 1945 ENIAC report, for example, used the term digital, but not
analog. Instead, they employed several other adjectives—analogy, non-digital, and,
more commonly, continuous-variable—to refer to the differential analyzer and its
kin (Eckert et  al. 1945), despite the fact that one of the authors, Mauchly, had
applied the adjective analog to computers 4 years earlier. They apparently thought
these adjectives were more philosophically and mathematically sound than analog.
Analogy referred more directly to the philosophical principle behind this class of
machines. Continuous-variable referred to the mathematical type of equations
solved by the differential analyzer and the network analyzer and also to their method
of solution (by measuring continuously varying physical quantities). It was compat-
ible with discrete-variable, the name of the stored-program successor to the ENIAC:
the Electronic Discrete Variable Automatic Computer (EDVAC) (Williams 1993).
This variety is apparent at the Moore School Lectures held at the University of
Pennsylvania in the summer of 1946. Sponsored by the Navy’s Office of Naval
Research (ONR) and the Army’s Ordnance Department, which had funded the
ENIAC project, the lectures were given by the inventors of the ENIAC and some
outside experts to scientists and engineers from academia, industry, and govern-
ment. Mimeographed copies of the lectures disseminated the theory and practice of
numerical electronic computers throughout the United States and Britain (Campbell-
Kelly and Williams 1985, pp. xxii–xxiii; Wilkes 1985). While the lecturers often
employed the terms analog and digital, the latter of which was in the title of the
series, the range of alternatives (continuous, continuous-variable, analogy, discrete,
and discrete-variable) was similar to that in the ENIAC report. Furthermore, lectur-
ers applied these adjectives beyond the noun “computer” to “signals” and “system,”
a semantic extension of analog and digital into the field of signal processing.15
It is noteworthy that Mauchly and Stibitz did not employ those words consis-
tently in their Moore School Lectures. Mauchly spoke in terms of digital, analogy,
and continuous-variable machines, the vocabulary of the ENIAC report. Stibitz

15 For continuous, see George Stibitz (Lecture 1, p. 5), Perry Crawford, Jr. (Lecture 32, pp. 379,
380), and J.  Presper Eckert, Jr. (Lecture 33, p.  397); for continuous-variable, see Irven Travis
(Lecture 2, p. 21), John Mauchly (Lecture 3, p. 26), and Eckert (Lecture 33, pp. 393, 394); for
analogy, see John Mauchly (Lecture 3, p. 25) and D. H. Lehmer (Lecture 4, p. 46); for discrete, see
Stibitz (Lecture 1, p. 5) and Eckert (Lecture 33, pp. 394, 397); for discrete-variable, see Lehmer
(Lecture 4, p. 43) and Eckert (Lecture 10, p. 109). All in Campbell-Kelly and Williams (1985).
used the term digital, which he had proposed for computers, only in the title of his
lecture, “Introduction to the Course on Electronic Digital Computers.” In the lec-
ture, Stibitz, now a professor of mathematics at the University of Vermont, classified
and compared computers under the mathematical terms of continuous and discrete
(Stibitz 1947–1948).16 He used a similar taxonomy in 1947 (Stibitz 1947).
The alternatives flowered for a while. For example, John Brainerd, the project
supervisor of the ENIAC, paired digital with the phrase “continuous-variable or
analogue type of computer” in an engineering article on the ENIAC in early 1948
(Brainerd 1948, p. 164). But the more common terms at the Moore School Lectures,
analog and digital, did not take long to take root. One of the lecturers, British
mathematical physicist Douglas Hartree, then at the University of Manchester,
used the ENIAC to calculate the aerodynamics of projectiles in the summer of
1946 (Haigh et al. 2016, pp. 95–104). In an article on the ENIAC published that fall,
he noted that the “American usage is ‘analogue’ and ‘digital’ machines.” He pre-
ferred his own terms “instruments” and “machines” (Hartree 1946, p. 500), which
he also publicized in a textbook (Hartree 1949). But they did not catch on. By 1950,
alternatives to analog and digital had virtually disappeared in US publications in the
growing field of computing.
An exception was the persistence of the term analogy into the 1950s in the writ-
ings of two prominent scientists: John von Neumann and Norbert Wiener, who had
given up the term measurement devices (Von Neumann 1945, p.  66, 1948, p.  3;
Wiener 1948, pp. 138–139, 154, 177; Wiener 1950, pp. 75, 77). In a 1951 article on
the general theory of automata, artificial and natural, von Neumann compared com-
puters and the human brain on the basis of the “analogy principle” and the “digital
principle” and applied these adjectives extensively to computing machines (Von
Neumann 1951).17 Wiener explained why the term analogy was appropriate for
computing in his 1956 autobiography. “In Bush’s machine,” Wiener wrote, “num-
bers were represented as measured quantities rather than as sequences of digits.
This is what we mean by calling the Bush machine an analogy machine and not a
digital machine. . . The physical quantities involved in the problems which the
machine was to solve were replaced in the machine by other physical quantities of
a different nature but with the same quantitative interrelations,” creating an analogy
(Wiener 1956, pp. 136–137). The situation in Europe was more contested. At a con-
ference on computers and human thought in France in 1951, attendees from non-­
English-­speaking countries criticized the chaos that resulted when they adopted the
British and American terms (MacKay 1951).

16. Stibitz (1946) also called the Rockefeller Differential Analyzer a “continuous computer.” For
other usages of “continuous computer,” see Murray (1947, 1952).
17. Later, von Neumann (1958, p. 3) adopted the analog terminology.
28 R. R. Kline

2.3  Criticism

A striking statement about the rapid rhetorical closure around analog and digital,
despite problems with these terms, comes from a debate at the interdisciplinary
conference series on cybernetics sponsored by the Josiah Macy, Jr., Foundation in
New York City. At the 1950 meeting, prominent mathematicians, engineers, biolo-
gists, and social scientists debated whether the human brain was analog or digital in
light of cybernetics. A new postwar science that became a scholarly and popular fad
in the 1950s and 1960s, cybernetics claimed to be able to model all self-regulating
organisms—from humans to intelligent machines to society—using principles from
control and communication engineering. Wiener and other cyberneticians claimed,
for example, that the human brain operated like the ENIAC and vice versa (Heims
1991; Kline 2015). When neurophysiologist Ralph Gerard at the University of
Chicago argued at the 1950 meeting that the brain was a hybrid of digital and analog
processes, the group vigorously debated the challenge to cybernetics. The wide-­
ranging, rambling discussion did not settle the issue for neurophysiology or for
cybernetics. Instead, the debate opened up a space where we can observe a remark-
ably critical discussion of the meanings of the then newly paired terms analog and
digital.
Gerard sparked the debate by questioning whether the human brain and nervous
system operated solely by digital means, i.e., as a collection of neurons and impulses
that functioned in on-off states, like the vacuum tubes in the ENIAC. He stated the
consensus in neurophysiology that “chemical factors (metabolic, hormonal, and
related) which influence the functioning of the brain are analogical, not digital.” He
concluded that “digital functioning is not overwhelmingly the more important of the
two” (Gerard 1951, pp. 11, 12).18
When the cybernetics group could not decide this issue, the debate shifted to how
to define the terms analog and digital. Von Neumann, Wiener, and other mathemati-
cians and engineers at the meeting made the distinction that had become common
in computing, but it was contested by the interdisciplinary Macy group, starting
with anthropologist Gregory Bateson, the husband of Margaret Mead who helped
her organize the group’s social science contingent. Bateson broke into the debate to
say, “It would be a good thing to tidy up our vocabulary. . . . First of all, as I under-
stand the sense in which ‘analogical’ was introduced to this group by Von Neumann,
a model plane in a wind tunnel would be an ‘analogical’ device for making calcula-
tions about a real plane in the wind.” Wiener and von Neumann agreed. Then,
Bateson continued, “It seems to me that the analogical model might be continuous
or discontinuous in its function,” which questioned the strict duality between analog
and digital. Von Neumann drew on his comparison of analog and digital processes
in automata, computers, and the brain (Burks 1987; Aspray 1990, chap. 8) to reply:
“It is very difficult to give precise definitions of this, although it has been tried
repeatedly. Present use of the terms ‘analogical’ and ‘digital’ in science is not com-
pletely uniform” (Gerard 1951, pp. 26–27).

18. The debate is discussed in Edwards (1996, p. 192), Dupuy (2000), and Kline (2015, pp. 46–49).

Later in the discussion, Joseph Licklider, the MIT psychologist who would later
become the founding director of the Pentagon office that developed the predecessor
of the Internet (the Information Processing Techniques Office of the Advanced
Research Projects Agency), pointed out the importance of what Bateson had elicited
from the group: that models (analogs) could be continuous or discrete. “Analogue
and digit are not words that the ordinary person, even the intelligent person, holds
up and says: These are opposite. I can conceive of a digital system which is the digi-
tal process and the analogue of another digital process, and therefore really analogi-
cal. I need clarification. . . . We understand the distinction between continuous and
discontinuous or between continuous and discrete. We understand roughly what an
analogy is, but we would like to have explained to us here, to several of us and many
on the outside, in what sense these words [analog and digital] are used in reference
to the nervous system.” When no satisfactory answers were forthcoming, Licklider
asked, “Is it then true that the word ‘analogues’ applied to the context of the com-
puter’s brains, is not a very well-chosen word; that we can do well if we stick to the
terms ‘discrete’ and ‘continuous’ . . ?” Leonard Savage, a statistician from the
University of Chicago, replied, “We have had this dichotomy with us for four or five
years, Mr. Licklider. I think the word [analogue] has worked fairly well on the
whole. Most of us have been familiar with its meaning. There would be some fric-
tion for most of us in changing it now.” Licklider became more frustrated as the
discussion wore on. “These names confuse people. They are bad names, and if other
names communicate ideas they are good names.” When the group could not clarify
the meaning nor the origins of the terms analog and digital, Licklider thought the
debate had gone on long enough. “We really ought to get back to Gerard’s original
problem. We will use the words as best we can” (Gerard 1951, pp. 32, 36, 43, 44).
Prompted by Bateson’s questioning, the cybernetics group, in effect, separated
two meanings of analog that had been combined when it was applied to computers
as an adjective: to denote machines that worked on the principle of analogy and
those that measured continuous variables. By recognizing that analogs (models)
could be continuous or discrete, the group realized, as stated by mathematician
Walter Pitts, that “the digital and analogical sorts of devices have been defined quite
independently and are not logical opposites” (Gerard 1951, p. 48).
Although this criticism was swept aside, there are some traces of it in the com-
puter literature. A few analog enthusiasts claimed that analog was the general term
for all computing because all computers operated on the basis of analogy. W. Allison,
a control systems engineer at the American Bosch Arma company, took up this posi-
tion in the journal Control Engineering in early 1955. At first glance, Allison stated,
“it does indeed seem that two separate technologies are in competition for suprem-
acy. A closer examination, however, reveals that the digital approach is itself one
specific class of analog instrumentation,” because it sets up “an analog of a number
system,” on which it performs arithmetic. “Thus, rather than constituting a flaw or
discrepancy in the validity of the concept of analog operations, the high state of
development of digital computers and machines is a demonstration of the scope of
the more general category” of analog (Small 2001, p. 250).19 This point of view did

19. See also McLeod (1962). Small (2001, p. 6) also notes that both digital and analog computers
work by analogy.

not prevail in the United States, despite the fact that George Stibitz used it to cri-
tique the term analog in favor of his pet term continuous in a textbook he wrote on
computers in 1957 and in his autobiography (Stibitz 1957, pp.  150–151; Stibitz
1993, p. 163).20
One reason for the rapid adoption of analog and digital was that the art and sci-
ence of designing computing machines did not have an established taxonomy before
the war. The general terms for these machines varied widely, from “calculating
machines” to “mathematical machines” to “mechanical aids to computation.” None
of these demarcated between machines that counted and those that measured, which
were given specific names such as the tabulator and the differential analyzer, respec-
tively (Bush 1936; Lilley 1942). The analog-digital vocabulary coming out of World
War II provided a general taxonomy at the same time that the first professional
society for computing was formed in 1947. Organized by insurance consultant
Edmund Berkeley in the wake of a computing symposium at Harvard, the Association
for Computing Machinery (ACM) adopted the vocabulary of analog and digital and
established conference proceedings that carried articles on both types of machines
well into the 1950s (Williams 1954; Akera 2007). The same can be said for the
American Institute of Electrical Engineers Committee on Computing Devices,
founded in 1946, which sponsored Joint Computer Conferences with the ACM and
the Institute of Radio Engineers (AIEE Technical Committees 1950, pp. 3–4; IRE,
AIEE, ACM 1953).

2.4  Hybridity

A notable feature of the Macy conference debate is the lack of a progress narrative
contrasting an analog past with a digital future. The cybernetics group did not con-
sider whether or not the human brain was outdated because it had analog (continu-
ous) elements. They agreed with the neurophysiology of the day that continuous
and discrete processes worked together to handle information in the brain. At the
debate, Wiener pointed to the brain’s hybrid architecture to suggest how to better
design computers: “I think that the freedom of constructing machines which are in
part digital and in part analogical is a freedom which I profoundly believe to exist
in the nervous system, and it represents, on the other hand, with humanly made
machines, possibilities which we should take advantage of in the construction of the
automaton” (Gerard 1951, p. 18).21 The hybridity of a system containing analog and
digital elements played out, as well, in the debates in the technical press about ana-
log versus digital computers. Hybridity disrupted the progress narrative that accom-

20. Stibitz (1993, p. 36) also notes that the “terms used to classify all these devices [digital, analog,
discrete, and continuous] have become so completely confused and illogical that it is important to
try to straighten them out.”
21. John von Neumann also considered analog-digital hybridity as a design principle for computing
(Burks 1987, p. 368).

panied this debate from the time analog and digital were linked together in computing.
Typically, the narrative was understated in the technical press. Digital evangelists
(Owens 1996, p. 35) such as John Mauchly (Mauchly 1947–1948) and MIT engi-
neer Jay Forrester (Redmond and Smith 1980) predicted that the electronic digital
computer had more potential than the analog computer because it was an accurate
general-purpose machine, could process information numerically, and could be
more easily programmed via software instead of reconfiguring the hardware. They
painted the analog computer as the opposite. Advocates of the electronic analog
computer argued, instead, that it, too, could be a general-purpose machine. They
praised its low cost and its ability to more rapidly perform real-time calculations for
control functions and simulations. These contrasts—the flexibility, accuracy, and
data processing capabilities of the digital computer versus the lower cost, faster
simulation, and hands-on feel of the analog computer—characterized the debate
until vastly improved digital computers began to dominate the entire field of com-
puting in the 1970s. Not only was the electronic digital computer the obvious choice
for data processing, for which the analog computer was never suited because it
represented numbers by continuous rather than discrete variables; the digital com-
puter also began to displace the analog computer (whether mechanical, electrical, or
electronic) from its entrenched position in engineering calculation and control engi-
neering (Small 2001, Chap. 5).
In the 1950s, however, most commentators recognized the scientific and engi-
neering value of both modes of computing. Mina Rees, the head of the mathemati-
cal sciences program at the Office of Naval Research (ONR), a major funder of
computer projects at the time,
remarked in late 1950, “Although the substantial government interest, expressed in
plans and money, is, and has been for some time, in the digital field, it would be
false to give the impression that no considerable sums of money have been invested
by the government in analog computers; or that such machines and other analog
computers built without government support are not playing an important and use-
ful role in the scientific and industrial laboratories of the country.” She recom-
mended “adopting the broad point of view that recognizes merit in both the analog
and the digital aspects of the computer art and derives assistance from whatever
phase is relevant to the problem at hand.” At the time of this report, during the first
year of the Korean War, electronic digital computers were still in the early stages of
development, funded largely by the military. Recognizing the nascent state of digi-
tal computers, Rees argued that the federal government should fund what comput-
ing facilities it could to meet the “present international emergency” of the Korean
War (Rees 1950, pp. 731, 732, 735).22
Proponents of analog technology took advantage of the development of elec-
tronic analog computers alongside electronic digital computers to compose a coun-
ter narrative in the 1950s and 1960s. Spurred by lavish funding from the military
and the National Aeronautics and Space Administration (NASA), engineers used

22. Rees (1982) plays down the ONR’s support for analog computing. For a good analysis of the
merits of analog versus digital computers in the early Cold War, see Edwards (1996, pp. 66–70).

electronic operational amplifiers to build analog computers to calculate the design
characteristics of advanced aircraft, guided missiles, and spacecraft and to test the
designs through real-time simulation in the laboratory before building these com-
plex systems.23 In this gendered, homosocial world, “analog men,” as they referred
to themselves, often complained about snobbish treatment from the “digital men”
who thought their machines should take over these realms. A 1954 survey of analog
computing noted, “So much has been written recently about the truly wonderful
achievements in the field of digital computing that there has been a tendency to
forget about analogue computers and to overlook the progress they have been mak-
ing. Indeed there are those who would say that the analogue computer is outdated
and dispute the need to improve it further.” The “digital computer as it stands, in
spite of its undoubted superiority in many cases, still has a long way to go before it
can surpass the analogue device in all applications” (Small 2001, pp. 248–249).24 In
a special issue entitled “Analog-Hybrid Computing” of a major engineering journal
published in 1962, another analog enthusiast, Lee Cahn, recalled that “ten years
ago the digital boys were all saying that analogue computing was a crude anachro-
nism, which would shortly be swept away by the manifest superiority of digits.
Evidently this has not happened . . . it does not appear that any function has ever
been taken away” (McLeod 1962, p. 3). The guest editor of the special issue, John
McLeod of General Dynamics, agreed, “It is safe, for instance, to say that analog
computers are here to stay. They are uniquely superior for certain kinds of work. But
it is equally as safe to say that they will be quite different, and probably eventually
unrecognizable as analog—or digital—computers” in the future (McLeod 1962, p. 5).
McLeod referred to the growing popularity of the hybrid computer, which com-
bined complementary qualities of each type of computer: flexibility and precision
for digital, real-time simulation for analog. The first hybrids were created by con-
necting specially designed analog computers to commercial digital computers by
means of an electronic unit that would convert analog signals to digital signals and
vice versa. According to historian James Small, hybrids “were a user-led develop-
ment initiated in the mid-1950s by aeronautics firms in connection with the devel-
opment of intercontinental ballistic missiles” (Small 2001, p. 120). US and British
companies commercialized hybrids starting in the mid-1960s. But US sales of elec-
tronic analog and hybrid computers peaked in 1969 and then declined with federal
cutbacks in the space program and the advent of more powerful, integrated-circuit
digital computers. One writer estimated that digital computers performed about
three-quarters of the work in simulation in 1970 (Small 2001, pp. 151–170).
Electrical utility companies were slower to change from analog to digital. System
operators quickly adopted analog computers and digital-analog hybrids in the 1950s

23. Small (2001, pp. 78–79) notes that electronic analog computers were used less in the mode of a
differential analyzer to solve equations and more in the mode of a simulator of a dynamic system.
The term “digital differential analyzer” was often used to characterize the first usage. See, e.g.,
Northrop (1951), Palevsky (1953).
24. On the history of masculinity in computing, see the chapters by Haigh and Ensmenger in Misa
(2010).

and 1960s but took longer to abandon these technologies for all-digital systems to
control the loading of interconnected power systems in the 1970s. The typical
aspects of the analog-digital tradeoff were evident in this industry. Especially appar-
ent, though, was the desire to retain the tacit knowledge derived from analog simu-
lation to understand how the electrical power system could be controlled. The issue
was so important for these users that when they switched to digital, they created
digital simulations of the complex power systems to train new operators and engi-
neers to regain the “feel” for the system that was lost in moving from analog to digital.25
For many years, analog computing retained its edge in the field of signal process-
ing by being able to solve the numerous differential equations needed to calculate
Fourier transforms. By representing a signal in the frequency domain, rather than
the time domain, the Fourier transform enabled engineers to more easily design
feedback amplifiers, filters, and other elements in communication networks
(Nebeker 1998, pp.  68–70). Allen Newell, a founder of computer science, has
argued that the viability of analog computing ended around 1970: “The finish was
spelled out not just by the increased speed and cost-efficiency of digital systems, but
by the discovery of the Fast Fourier Transform, which created the field of digital
signal processing and thus penetrated the major bastion of analog computation”
(Newell 1983, p. 196).
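The distinction Newell invokes can be made concrete with a small present-day sketch (an illustration added here, not drawn from the period sources): computing a signal's frequency-domain representation digitally by the direct discrete Fourier transform costs on the order of n² operations per n samples, which is the cost the Fast Fourier Transform reduced to n log n and part of the reason analog filter banks long held the advantage.

```python
import math
import cmath

def dft(samples):
    """Direct discrete Fourier transform, O(n^2) multiplications.

    This quadratic cost, before the FFT's O(n log n) algorithm became
    widely known in the mid-1960s, helped keep frequency-domain signal
    processing in the analog domain.
    """
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# One cycle of a cosine over 8 samples: 8 numbers in the time domain,
# but in the frequency domain the energy concentrates in bins 1 and 7.
signal = [math.cos(2 * math.pi * t / 8) for t in range(8)]
spectrum = [abs(x) for x in dft(signal)]
```

The sketch shows only the representational shift at issue in the passage: the same eight samples, re-expressed as energy per frequency bin.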
The stakes for these discourse communities during the analog-digital debate
went beyond the economics of the computing industry. Scientific and engineering
status accrued to those who could claim they were producing general-purpose com-
puters or developing the computers of the future, which were usually seen as digital
(Small 2001, p. 247).

2.5  The Popular Press

This understated progress narrative in the technical press was mirrored by a similar
discourse about analog and digital computers in a major newspaper, the New York
Times, in the 1950s. But Scientific American, the best-known magazine of popular
science in the United States at the time, touted a digital progress narrative from the
end of World War II to the early 1970s, which eventually turned into digital utopia-
nism (Turner 2006).
Science journalists in the New York Times were not digital evangelists in the early
Cold War. Instead, they celebrated both analog and digital computers for doing the
engineering calculations necessary to design missiles, bombers, and nuclear weap-
ons to counter the Soviet Union. In this regard, the Times covered large electronic

25. While Cohn (2015) focuses on automatic control by analog and digital computers, Tympas
(1996) notes that utilities shifted from digital desk calculators to analog machines to digital com-
puters to calculate system operations. Analog computers survived in another area of the digitaliza-
tion of automatic control, the Apollo space program in the 1960s. See Mindell (2008).

analog computers, such as the Typhoon (Small 2001, pp. 93–96), as enthusiastically
as they did the large electronic digital computers, such as the ENIAC (compare
Kennedy 1946, with Laurence 1950), which, however, received much more media
attention. The enduring trope for both analog and digital computers in the Times
into the 1950s was the computer as a thinking machine, as a large “mechanical
brain” or “electronic brain.”26
In contrast, Scientific American stated a consistent digital progress narrative. In
the magazine’s first article on the new electronic computers, “Mathematical
Machines,” published in 1949, science writer Harry Davis—who visited the major
computer projects in the United States at the time—presented a nonlinear progress
narrative. He retrospectively used the terms analog and digital to discuss the devel-
opment of computers from the digital Chinese abacus to the analog slide rule, from
the digital Hollerith machines of the late nineteenth century to Bush’s analog dif-
ferential analyzer of the 1930s, which led to the digital Bell Labs relay computers
and the digital ENIAC of the current day. Davis noted that modern “analogue com-
puters are likely to be less bulky and expensive than the digital type; they provide
quick solutions. But like the slide rule . . . they have a limit to their possible accu-
racy. For the higher refinements of calculations, the digital or logical computer is
now coming to the fore,” which is why he focused on them for the article (Davis
1949, p. 30).
This implicit progress narrative was made explicit in a special issue on automatic
control systems in 1952. Physicist Louis Ridenour, a radar specialist and former
Chief Scientist of the Air Force, thought digital computers would replace analog
ones for controlling factories and solving engineering design equations. “Digital
begins to have advantage with more complex control problems,” Ridenour claimed,
“because analog machines will have to be increasingly complex and will reach a
limit. . . The great machine called Typhoon, built by the Radio Corporation of
America for the simulation of flight performance in guided missiles, closely
approaches that limit. It is perhaps the most complicated analogue device ever built,
and very possibly the most complicated that it will ever be rewarding to build.” Thus
the “most significant and exciting prospects reside in the digital machine” (Ridenour
1952, pp. 123–124, 126). Engineering professor Gordon Brown, head of MIT’s ana-
log Servomechanisms Laboratory, endorsed the digital computer while leaving a
place for analog in industrial control: “In the future, as Louis Ridenour explains in
this issue, the versatile digital computer will take over master control, perhaps
supervising analogue devices assigned to specific jobs” (Brown and Campbell 1952,
p. 62). Electronic circuits that converted analog signals into digital signals and vice
versa made Brown’s dream possible, just as they had kept alive the prospects of hybrid
computers in other areas.
26. The latest articles I found that carried this theme as a headline were Anonymous (1953a), for
analog computers, and Anonymous (1953b), for digital computers. For digital computers, the trope
began with the ENIAC; see Martin (1995/1996).

Analog computers then drop out of sight in Scientific American until the late
1960s and early 1970s, when the magazine published updates on the computerization
of factories and the electric power industry. By then, digital computers were
replacing analog computers in these and other areas of control systems (e.g.,
Karasek 1969; Glavitch 1974). At the same time, the analog-digital discourse was
adopted in articles on signal processing, especially in satellite communications and
the digitalization of telecommunications, where the need to convert between analog
and digital signals was pressing (e.g., Mengel and Herget 1958; Pierce 1966). More
and more, authors took it for granted that the term computer referred to a digital machine.

2.6  Conclusion

This progress narrative is a far cry from the contested analog-digital discourse cre-
ated by early computer builders in the United States. For them, the contrast between
analog and digital was not a naturalized narrative. They criticized the terms for not
being logical opposites of each other and proposed alternatives. Although rhetorical
closure soon occurred around analog and digital, in technical communities of the
1950s and 1960s, the terms did not mark an electronic distinction between comput-
ers—since both analog and digital computers were electronic—nor signify a deter-
ministic progress narrative, because of the prevalence of hybrid computers and the
advantage of analog in the areas of control and signal processing. What the articles
in Scientific American point to instead is a movement toward today’s creed of digital
utopianism. That narrative gained traction with the public during debates about the
competition between analog long-playing (LP) records and the new digital compact
disc (CD) technology in the 1980s (Kozinn 1980, 1981) and the dot-com boom of
the 1990s.

Acknowledgment  Thanks to Alana Staiti for her research for this article in the
University of Pennsylvania archives and in the National Archives.

References
Akera, Atsushi. 2006. Calculating a Natural World: Scientists, Engineers, and Computers During
the Rise of U.S. Cold War Research. Cambridge, MA: MIT Press.
Akera, Atsushi. 2007. Edmund Berkeley and the Origins of ACM. Communications of the ACM
50(5): 31–35.
American Institute of Electrical Engineers (AIEE) Technical Committees. 1950. 1949 Engineering
Developments. Electrical Engineering 69:1–11.
Anonymous. 1947. News and Notes. Science n.s. 105: 543.
Anonymous. 1953a. Electronic ‘Brain’ Made in Miniature. New York Times, Dec. 17, p. 67.
Anonymous. 1953b. New ‘Brain’ Excels in Process Control. New York Times, March 21.
Aspray, William. 1990. John von Neumann and the Origins of Modern Computing. Cambridge,
MA: MIT Press.

Aspray, William. 1993. Edwin L. Harder and the Anacom: Analog Computing at Westinghouse.
IEEE Annals of the History of Computing 15(2): 35–51.
Atanasoff, John. 1940. Computing Machine for the Solution of Large Systems of Linear Algebraic
Equations. Rpt. in The Origins of Digital Computers: Selected Papers, edited by Brian Randell,
3rd ed. (New York: Springer-Verlag, 1982), 315–335.
Brainerd, John. 1948. The ENIAC. Electrical Engineering 67: 163–172.
Brown, Gordon, and Donald P.  Campbell. 1952. Control Systems. Scientific American 187(3):
Burks, Alice R., and Arthur W. Burks. 1988. The First Electronic Computer: The Atanasoff Story.
Ann Arbor: Univ. of Michigan Press.
Burks, Arthur W. 1987. Introduction [to Part V]. In Papers of John von Neumann on Computing
and Computer Theory, edited by William Aspray and Arthur Burks (Cambridge, MA: MIT
Press), 363–390.
Bush, Vannevar. 1936. Instrumental Analysis. Bulletin of the American Mathematical Society 42:
649–669.
Bush, Vannevar, and Samuel Caldwell. 1945. A New Type of Differential Analyzer. Journal of the
Franklin Institute 240: 255–326.
Campbell-Kelly, Martin, and Michael R.  Williams, eds. 1985. The Moore School Lectures:
Theory and Techniques for Design of Electronic Digital Computers, 4 vols., 1947-1948; rpt.
Cambridge, MA: MIT Press.
Ceruzzi, Paul. 1983. Reckoners: The Prehistory of the Digital Computer, from Relays to the Stored
Program Concept, 1935-1945. Westport, CT: Greenwood Press.
Ceruzzi, Paul. 2012. Computing: A Concise History. Cambridge, MA: MIT Press.
Cohn, Julie. 2015. Transitions from Analog to Digital Computing in Electric Power Systems. IEEE
Annals of the History of Computing 39(3): 32–43.
Curie, A.  A. 1951. The General Purpose Analog Computer. Bell Laboratories Record 29(3):
Daus, P. H. 1945. The March Meeting of the Southern California Section. American Mathematical
Monthly 52: 415–426.
Davis, Harry M. 1949. Mathematical Machines. Scientific American 180(4): 29–38.
Dupuy, Jean-Pierre. 2000. The Mechanization of the Mind: On the Origins of Cognitive Science,
tr. M. B. DeBevoise. Princeton, NJ: Princeton Univ. Press.
Eckert, J. Presper, Jr., John Mauchly, Herman Goldstine, and John Brainerd. 1945. Description of
the ENIAC and Comments on Electronic Digital Computing Machines. Applied Mathematics
Panel Report 171.2. Washington, DC: NDRC.
Edwards, Paul N. 1996. Closed World: Computers and the Politics of Discourse in Cold War
America. Cambridge, MA: MIT Press.
Engineering Research Associates. 1952. Reliability: ERA Magnetic Drum Storage Systems.
Scientific American 187(3): 125 (ad).
Fox-Keller, Evelyn. 1995. Refiguring Life: Metaphors of Twentieth-Century Biology. New York:
Columbia Univ. Press.
George A.  Philbrick Researches, Inc. 1949. World’s Largest All-Electronic Analog Computer.
Scientific American 181(3): 8 (ad).
Gerard, Ralph W. 1951. Some of the Problems Concerning Digital Notions in the Central Nervous
System [and discussion]. In Heinz von Foerster, Margaret Mead, and Hans Teuber, eds.,
Cybernetics: Circular Causal and Feedback Mechanisms in Biology and Social Systems.
Transactions . . ., vols. 6–10 (New York: Macy Foundation, 1950–1955), 7:11–57.
Glavitch, Hans. 1974. Computer Control of Electric Power Systems. Scientific American 231(5):
Goldstine, Herman, and Adele Goldstine. 1946. The Electronic Numerical Integrator and Computer
(ENIAC). Mathematical Tables and Other Aids to Computation 2(15): 97–110.
Goldstine, Herman, and John von Neumann. 1946. On the Principles of Large Scale Computing
Machines. Unpub mss., rpt. in Papers of John von Neumann on Computing and Computer
2  Inventing an Analog Past and a Digital Future in Computing 37

Theory, edited by William Aspray and Arthur Burks (Cambridge, MA: MIT Press, 1987),
Haigh, Thomas, Mark Priestley, and Crispin Rope. 2016. ENIAC in Action: Making and Remaking
the Modern Computer. Cambridge, MA: MIT Press.
Hall, A. C. 1950. A Generalized Analogue Computer for Flight Simulation. Transactions of the
AIEE 69: 308–320.
Harder, E. L., and G. D. McCann. 1948. A Large-Scale General-Purpose Electric Analog Computer.
Transactions of the AIEE 67: 664–673.
Hartree, Douglas. 1946. The ENIAC, an Electronic Computing Machine. Nature 158: 500–506.
Hartree, Douglas. 1947. Calculating Machines: Recent and Prospective Developments and their
Impact on Mathematical Physics. Cambridge: Cambridge Univ. Press.
Hartree, Douglas. 1949. Calculating Instruments and Machines. Urbana: Univ. of Illinois Press.
Harvard University Computational Laboratory. 1948. Proceedings of a Symposium on Large-Scale
Digital Calculating Machinery . . . 7-10 January 1947. Cambridge, MA: Harvard Univ. Press.
Heims, Steve J. 1991. The Cybernetics Group. Cambridge, MA: MIT Press.
Institute of Radio Engineers (IRE), American Institute of Electrical Engineers (AIEE), Association
for Computing Machinery (ACM). 1953. Proceedings of the Western Computer Conference.
New York: IRE.
International Business Machines (IBM). 1949. Releasing the Human Mind. Scientific American
181(12): 65 (ad).
International Business Machines (IBM). 1950. Speeding Business through Electronics. Scientific
American 182(2): inside front cover (ad).
International Business Machines (IBM). 1953. The New IBM Electronic Data Processing
Machines. Scientific American 188(5): 62 (ad).
Karasek, F.  W. 1969. Analytic Instruments in Process Control. Scientific American 220(1):
Kennedy, T. R., Jr. 1946. Electronic Computer [ENIAC] Flashes Answers, May Speed Engineering.
New York Times, Feb. 15, pp. 1, 16.
Kline, Ronald. 2015. The Cybernetics Moment: Or Why We Call Our Age the Information Age.
Baltimore: Johns Hopkins Univ. Press.
Kozinn, Allan. 1980. The Future is Digital. New York Times Magazine, April 13, pp. 84ff.
Kozinn, Allan. 1981. A Pocket-Sized Digital Record for the Future. New York Times, April 26.
Laurence, William L. 1950. 4,000-Tube ‘Brain’ [Typhoon] Cuts Out Years in Designing and
Testing Missiles. New York Times, Nov. 22.
Lilley, S. 1942. Mathematical Machines. Nature, 149: 462–465.
MacKay, Donald. 1951. Calculating Machines and Human Thought. Nature 167: 432–434.
Martin, Diane. 1995/1996. ENIAC: Press Conference that Shook the World. IEEE Technology and
Society Magazine, 14(4): 3–10.
Mauchly, John. 1947–1948. Digital and Analogy Computing Machines. Rpt. in Campbell-Kelly
and Williams, 1985, 25–40.
Mauchly, Kathleen. 1984. John Mauchly’s Early Years. IEEE Annals of the History of Computing
6(2): 116–138.
McCann, G.  D. 1949. The California Institute of Technology Electric Analog Computer.
Mathematical Tables and Other Aids to Computation 3 (28): 501–513.
McLeod, John. 1962. Ten Years of Computer Simulation. IRE Transactions on Electronic
Computers 11: 2–6.
Mengel, John T., and Paul Herget. 1958. Tracking Satellites by Radio. Scientific American 198(1):
Mindell, David A. 2002. Between Human and Machine: Feedback, Control, and Computing before
Cybernetics. Baltimore: Johns Hopkins Univ. Press.
Mindell, David A. 2008. Digital Apollo: Human and Machine in Spaceflight. Cambridge, MA:
MIT Press.
38 R. R. Kline

Misa, Thomas, ed. 2010. Gender Codes: Why Women Are Leaving Computing. New York: IEEE
Murray, Francis J. 1947. The Theory of Mathematical Machines. New York: King’s Crown Press.
Murray, Francis J. 1952. [Review of] H. H. Goode, ‘Simulation—its place in systems design . . .’
Mathematical Tables and Other Aids to Computation 6(38): 121.
Nebeker, Frederik. 1998. Signal Processing: The Emergence of a Discipline, 1948 to 1998.
New York: IEEE History Center.
Newell, Allen. 1983. Intellectual Issues in the History of Artificial Intelligence. In The Study of
Information: Interdisciplinary Messages, edited by Fritz Machlup and Una Mansfield (New
York: Wiley), 187–227.
Norberg, Arthur. 2005. Computers and Commerce: A Study of Technology and Management at
Eckert-Mauchly Computer Company, Engineering Research Associates, and Remington Rand,
1946-1957. Cambridge, MA: MIT Press.
Northrop Aircraft, Inc. 1951. Now Available: Low-Cost Digital Electronic Desk-Side Computer.
Scientific American 185(3): 84 (ad).
Nye, David E. 2003. America as Second Creation: Technology and Narratives of New Beginnings.
Cambridge, MA: MIT Press.
OED: Oxford English Dictionary, on-line edition. Accessed June 25, 2018.
Oldenziel, Ruth. 1999. Making Technology Masculine: Men, Women, and Modern Machines in
America, 1870-1950. Amsterdam: Amsterdam Univ. Press.
Owens, Larry. 1996. Where Are We Going Phil Morse? Changing Agendas and the Rhetoric of
Obviousness in the Transformation of Computing at MIT, 1939-1957. IEEE Annals of the
History of Computing 18(4): 34–41.
Palevsky, Max. 1953. The Design of the Bendix Digital Differential Analyzer. Proceedings of the
IRE 41: 1352–1356.
Peters, Benjamin. 2016. Digital. In Digital Keywords: A Vocabulary of Information Society and
Culture, edited by Benjamin Peters (Princeton: Princeton Univ. Press), chap. 3.
Pierce, John R. 1966. The Transmission of Computer Data. Scientific American 215(3): 144–150,
152, 154, 156.
Radio Corporation of America (RCA). 1952. For Engineers, Your Career Opportunity of a
Lifetime, at RCA Now! Scientific American 186(1): 59 (ad).
Rajchman, J. A., G. A. Morton, and A. M. Vance. 1942. Report on Electronic Predictors for Anti-­
Aircraft Fire Control. Rpt. in The Origins of Digital Computers: Selected Papers, edited by
Brian Randell, 3rd ed. (New York: Springer-Verlag, 1982), 345–347.
Raytheon Manufacturing Company. 1952. Is There a Missing Link in Your Control Loop?
Scientific American 187(3): 77 (ad).
Redmond, Kent C., and Thomas M. Smith. 1980. Project Whirlwind: The History of a Pioneer
Computer. Bedford, MA: Digital Equipment Corp.
Rees, Mina. 1950. The Federal Government Computing Machine Program. Science, n.s., 112:
Rees, Mina. 1982. The Computing Program of the Office of Naval Research, 1946-1953. IEEE
Annals of the History of Computing 4(2): 102–120.
Reeves Instrument Corporation. 1949. REAC: The First Standard, High-Speed Electronic Analog
Computer. Scientific American 180(4): 27 (ad).
Remington Rand. 1952. Sensational New “Fact Power” Unleashed by Remington Rand
UNIVAC. Scientific American 187(3): 39 (ad).
Ridenour, Louis. 1952. The Role of the Computer. Scientific American 187(3): 116–118, 120–126,
128, 130.
Skilling, H. H. 1931. Electric Analogs for Difficult Problems. Electrical Engineering 50: 862-865.
Small, James S. 2001. Analogue Alternative: The Electronic Analogue Computer in Britain and the
USA, 1930-1975. London: Routledge.
Sterne, Jonathan. 2016. Analog. In Digital Keywords: A Vocabulary of Information Society and
Culture, edited by Benjamin Peters (Princeton: Princeton University Press), chap. 9.
Stibitz, George. 1945a. Relay Computers. Applied Mathematics Panel Report, 171.R. Washington,
DC: NDRC.
Stibitz, George. 1945b. A Talk on Relay Computers. Applied Mathematics Panel Memo,
171.1M. Washington, DC: NDRC.
Stibitz, George. 1946. [Review of] V. Bush & S. H. Caldwell, ‘A new type of differential analyzer
. . .’ Mathematical Tables and Other Aids to Computation 2(14): 89-91.
Stibitz, George. 1947. Film Slide Rule. Mathematical Tables and Other Aids to Computation
2(20): 325.
Stibitz, George. 1947-1948. Introduction to the Course on Electronic Digital Computers. Rpt. in
Campbell-Kelly and Williams, 1985, 5–16.
Stibitz, George. 1957. Mathematics and Computers. New York: McGraw-Hill.
Stibitz, George. 1993. The Zeroth Generation: A Scientist’s Recollections (1937-1955). (n.p).
Turner, Fred. 2006. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth
Network, and the Rise of Digital Utopianism. Chicago: Univ. of Chicago Press.
Tympas, Aristotle. 1996. From Digital to Analog and Back: The Ideology of Intelligent Machines
in the History of the Electrical Analyzer. IEEE Annals of the History of Computing 18(4):
Von Neumann, John. 1945. First Draft of a Report on the EDVAC. Unpub. report; rpt. in Papers of
John von Neumann on Computing and Computer Theory, edited by William Aspray and Arthur
Burks (Cambridge, MA: MIT Press, 1987), 17–82.
Von Neumann, John. 1948. Electronic Methods of Computation. Bulletin of the American
Academy of Arts and Sciences 1(3): 2–4.
Von Neumann, John. 1951. The General and Logical Theory of Automata. Rpt. in Papers of John
von Neumann on Computing and Computer Theory, edited by William Aspray and Arthur
Burks (Cambridge, MA: MIT Press, 1987), 391–431.
Von Neumann, John. 1958. The Computer and the Brain. New Haven: Yale Univ. Press.
Wiener, Norbert. 1948. Cybernetics: Or Control and Communication in the Animal and the
Machine. Cambridge, MA: Technology Press; New York: John Wiley.
Wiener, Norbert. 1950. The Human Use of Human Beings: Cybernetics and Society. Boston,
Houghton Mifflin.
Wiener, Norbert. 1956. I Am a Mathematician: The Later Life of a Prodigy. New York: Doubleday.
Wilkes, Maurice. 1985. Memoirs of a Computer Pioneer. Cambridge, MA: MIT Press.
Williams, Bernard S. 1984. Computing with Electricity, 1935-1945. Ph.D. disst., Univ. of Kansas.
Williams, Michael R. 1993. The Origins, Uses, and Fate of the EDVAC.  IEEE Annals of the
History of Computing 15(1): 22–38.
Williams, Samuel. 1954. The Association for Computing Machinery. Journal of the ACM 1: 1–3.
Chapter 3
Forgotten Machines: The Need for a New
Master Narrative

Doron Swade

Abstract  History of computing seeks, amongst other things, to provide a narrative
for the overwhelming success of the modern electronic digital computer. The datum
for these accounts tends to be a finite set of machines identified as developmental
staging posts. The reduction of the datum to a set of canonical machines crops out
of the frame other machines that were part of the contemporary context but which
do not feature in the prevailing narrative. This paper describes two pre-electronic
systems designed and built before the general-purpose stored programme digital
computer became the universal default and the de facto explanandum for modern
history. Both machines belong to the era in which “analog” and “digital” emerged
as defining descriptors. Neither of these machines can be definitively categorised as
one or the other. The Julius Totalisator is a large online real-time multi-user system
for managing dog and horse race betting; the Spotlight Golf Machine is an indoor
interactive golf gaming simulator. Their descriptions here are with a view to expanding
the gene pool of early devices of reference and at the same time voicing historiographic
concerns about the way in which master narratives influence criteria of
historical significance.

Modern histories of computing seek, amongst other things, to provide a narrative
for the overwhelming success of the general-purpose digital electronic computer
(Cortada 1993; Campbell-Kelly and Aspray 1996; Ceruzzi 1998). Perspectival his-
tories of this kind have self-serving criteria for historical significance. In the case of
computing, “significance” is based on the extent to which a machine, system or
practice anticipates or, better still, has a traceable influence on the electronic digital
computer, and the machines, prescriptively selected, serve as a set of canonical
devices of reference.
The first master narrative framed the electronic computer as a machine for math-
ematical, scientific and engineering calculation (Goldstine 1972; Metropolis et al.

D. Swade (*)
Royal Holloway, University of London, Egham, UK

© Springer Nature Switzerland AG 2019

T. Haigh (ed.), Exploring the Early Digital, History of Computing,
1980; Williams 1985; Rojas and Hashagen 2000). Its canon includes the set of
machines created between the late 1930s and early 1950s, a transitional period of
significant intensity, during which new principles, praxis and vocabulary (Lynch
1992) were explored, articulated and contested. In the USA these machines include
ENIAC, EDVAC, the Atanasoff-Berry Computer (ABC) and the Harvard Mk I; in
the European arena, we find Colossus, EDSAC, the Manchester “Baby” (SSEM),
Pilot ACE and Konrad Zuse’s machines.
The second master narrative framed the computer as an “information machine”
and broadened its reach to business administration, information management and
corporate office systems (Cortada 1993; Campbell-Kelly and Aspray 1996; Ceruzzi
1998). The narrative scope was extended to real-time systems such as SAGE, Project
Whirlwind and SABRE (an airline reservation system that came on stream in the
early 1960s) and software for non-mathematical purposes. This was the age of
mainframes, IBM and the nascent personal computer boom, email and the now
dated futurism of the “information superhighway.”
This chapter describes two pre-electronic machines. Neither features as a canoni-
cal device of reference. That they are unrepresented can be seen to signal the need
for a new master narrative, or at least an updated one, that newly encompasses cat-
egories of human activity to which these two machines, amongst others, belong.
They are offered here in anticipation of a narrative that will widen the historical
loupe and in doing so expand the gene pool of early machines.
The first machine is a Julius Totalisator widely used to calculate odds and manage
racetrack betting for greyhound and horse racing; the second is the Spotlight Golf
Machine, an interactive indoor golf gaming simulator. In the case of the Julius
Totalisator, installed at Harringay greyhound racing stadium in North London in
1930, we have a large-scale online real-time multi-user system that remained reliably
operational for nearly 60 years and gave unbroken service in parallel with and largely
unthreatened by major developments in electronic computing. The Spotlight Golf
Machine developed in the 1930s offered physical gaming interactivity well over a
half-century before Nintendo’s video gaming console Wii was launched in 2006.

3.1  Totalisators

A Totalisator is a device or machine for managing the pari-mutuel system of betting
and was widely used on horse racing and dog tracks. The name pari-mutuel is
derived from the French parier (to bet or wager, pari meaning a bet) and mutuel as
in “in common” or “among ourselves”, i.e. a system of collective betting. The
scheme was devised in France by Joseph Oller, who started using it in 1867
(Canymeres 1946, p. 35), although other sources show a spread of a couple of years
either way. Oller, born in Paris in 1839 of Catalan origin, attended a cockfight in
Bilbao when he was 17 (Canymeres 1946, p. 26). There he was struck by the chaotic
and complicated betting practices, the erratic computation of odds and the flaring
disputes amongst the punters. Oller’s pari-mutuel system, seen as simpler and fairer,
spread rapidly. It made his fortune with receipts in his first year topping 5 million
francs from racetracks in Paris alone (Canymeres 1946, p. 68).
In the pari-mutuel system, bets on all the runners in a race are accumulated to
form a pool which, after deductions for operating fees, commissions and tax, is
shared between those holding winning tickets, in proportion to the size of their bet.
The arithmetical operations involved in the mutuel system are cumulative addition
to form the individual totals for each runner, cumulative addition to form the grand
total or pool, subtraction of agreed deductions from the grand total and finally arith-
metical division, i.e. dividing the pool by the total amount wagered on the winning
runner to provide the dividend or pay-out.
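The arithmetic just described is simple enough to sketch directly. The following Python fragment is an illustrative model only: the function name, the stakes and the deduction rate are invented for the example and are not drawn from any actual Tote installation.

```python
# Illustrative sketch of the pari-mutuel arithmetic described above
# (hypothetical names and figures, not from any historical machine).

def pari_mutuel_dividend(bets_per_runner, winner, deduction_rate):
    """Return the dividend per unit bet on the winning runner.

    bets_per_runner: total amount wagered on each runner
    winner: key of the winning runner
    deduction_rate: fraction retained for fees, commission and tax
    """
    pool = sum(bets_per_runner.values())      # grand total of all wagers
    net_pool = pool * (1 - deduction_rate)    # pool after deductions
    return net_pool / bets_per_runner[winner] # share per unit staked

# Six runners, stakes in unit bets (e.g. the 20p units used at Harringay in 1987)
bets = {1: 500, 2: 300, 3: 800, 4: 150, 5: 200, 6: 50}
dividend = pari_mutuel_dividend(bets, winner=3, deduction_rate=0.10)
# pool = 2000 units; net pool = 1800 units; dividend is about 2.25 units per unit staked
```

Performing the same division against each runner's running total as the pools grow is what yields the continuously changing odds quoted before the off.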
Pari-mutuel betting does not require a machine to manage it. Oller’s first opera-
tions were entirely manual with totals displayed on blackboards (Canymeres 1946,
p. 68; Doran 2006-7, p. 1), tickets written and issued by hand and the dividend cal-
culated manually. A court case trying an alleged case of swindling was covered at
length in the New Zealand Herald, 24 April 1880. The number of bets made on each
horse was displayed on a piece of paper hung outside the booth (Mackie 1974,
p. 90). The charge against the defendant was that after the race, he had taken down
the sheet and fraudulently added bets to reduce the pay-out to the four legitimate
winners. While operable without machinery, a system reliant on the probity of
human agency was clearly vulnerable to abuse.
Various operations were mechanised, and with each appropriation of function by
machine came a new semblance of unbiased neutrality, immune from gainful self-­
interest. In place of hand tallies, for example, an operator registered a bet by depress-
ing a foot pedal or pulling a lever that incremented a counter, one for each runner,
displayed to the public (Mackie 1974, p. 91). Oller himself introduced a machine
that issued and counted tickets (“les billet compteurs”, literally “ticket counters”).
Counters and dials were themselves far from tamper-proof: before pay-out, one corrupt
Tote operator advanced the numbered wheels of the counter by hand, adding
ten fictional bets after the race to reduce the dividend to the
bona fide winners (Mackie 1974, p. 95). Fraud, real and suspected, was a motif of
betting practices for most of the history of the Tote.
Prior to the mutuel system, punters placed bets directly with bookmakers, and
there are significant differences between the two systems. Bookmakers gave fixed
odds, i.e. the odds determining the winning pay-out were those at the time the bet
was placed and remained unchanged whether or not the odds shortened or length-
ened afterwards. In the case of the “Tote”, as it colloquially became known in
Britain, since the odds are determined by the size of both the individual total for a
particular runner and the size of the pool to which a bet contributes, the odds change
continuously as betting progresses, and the final odds are not known till the start of
the race. So, a punter betting with the Tote did so “blind” as neither they nor the
stakeholder (usually the Totalisator operator or racetrack owners) knew the final
odds until just before the off when the traps opened and the mechanical hare was
released. A further difference was that bookmakers stood to lose if the winning pay-­
out exceeded the total takings. The odds they offered were designed to create a “bal-
anced book” – one that ensured that they profited regardless of the outcome of a
race. In contrast, the Tote cannot lose as dividends are disbursed from a paid-up pool
and, by the basic rules of the mutuel system, the pay-out cannot exceed the pool.
In Tote betting, the odds are the same for all participants and are fixed at the start
of the race. There is therefore no incentive for a punter to bet early as there is no
advantage to be taken of early long odds, say on an outsider, which if placed with a
bookmaker would be fixed at the time the bet was placed. Indeed, with the Tote
there is advantage in betting as late as possible as the odds, which reflect an aggre-
gate view of all the investors as to likely winners, converge to their final values as
the close of betting nears. An outcome of this late convergence was that the volume
of traffic escalated dramatically as the off approached, and managing excessive data
flows was a significant challenge both to manual and automatic Totalisators.

3.1.1  Contested Acceptance

Support for, or opposition to, the introduction of Totalisators reflected the interplay
of different vested interests: bookmakers, betting houses, owners of race horses,
owners of greyhounds and racetracks, trainers, Totalisator operators and manufacturers,
treasuries and exchequers, punters, moralists and religious organisations
all had a stake in promoting or obstructing the growth of the Tote for a range
of motives and shifting interests (Munting 1996).
The Tote was legalised in England for horse racing in 1928 during Churchill’s
chancellorship (Hansard 1928) but not for dog racing. In the final reading of the
Racecourse Betting Bill, Churchill framed horse racing as noble and dignified, a
sport for gentlemen, and dog racing he described as “animated roulette boards”, a
temporary “craze”, and “nothing more than a casino” which “working men might be
tempted to attend” (Hansard 1928; Times 1928, p. 7). Totalisators for dog tracks
were legalised in Britain in 1933, by which time several were already installed
albeit with dubious legality (Munting 1996, p. 119).
The benefits and disbenefits of the Tote to different interested parties were not
clear-cut and were fiercely debated. Social reformers saw the Tote as a cleansing
agent to divest the term “racing man” of its pejorative freight, and to contain, if not
eradicate, the “evil” of gambling (Times 1928, p. 11; Mackie 1974, pp. 90–3). The
exchequer had an eye to revenue and a more accountable way of collecting betting
taxes. Punters were divided and were, in part at least, at the mercy of publicists.
Some maintained that betting against a machine that could not err removed the
essential attraction of pitting your wits, luck and judgement against a bookmaker.
Disquiet about the ingression of the machine into the interpersonal was reflected in
an encyclopaedia entry on automata which reported that R. H. Hamilton, the pro-
genitor of the Hamilton Totalisator in 1928, had invented a “Robot bookmaker”
(1930, p. 1158). Others welcomed the Tote’s mechanical impartiality and the pros-
pect of increased insurance against fraud. It was the bookmakers who were most
unambiguously affected by the seemingly unstoppable growth of the Tote which
was in direct competition with them for punters’ custom (Graham 2007).
The Tote thrived and became a widely established feature of betting practice
often replacing bookies entirely or operating in grudging rivalry with them. The first
greyhound race in Britain was held at Belle Vue, Manchester, in 1926 with some
1700 spectators present. By 1931 attendances reached 18 million at 187 tracks run
by the National Greyhound Racing Association (Munting 1996, p. 32).

3.1.2  Automatic Totalisators

While Totalisators were still manual or only partially mechanised, the volume of
traffic was necessarily limited by the speed of ticket issuing, accumulating totals,
calculating dividends and distributing winnings before the start of the next race.
Delays queuing for pay-outs after a race were especially frustrating for punters.
Mobile “wagon Totes”, horse-drawn and mechanical in the late nineteenth century,
and motorised and battery driven in the twentieth century, travelled on race days to
suburban or rural tracks. Itinerant Totes were used to increase capacity by supple-
menting installations at metropolitan tracks (Barrett and Connell 2006-7, pp. 4, 8;
Caley 1929, pp. 71, 109) and were typically operated by their owners for a percent-
age of the take. The Julius Premier Totemobile, announced in 1949, a large trailer
installation with motorised cab, was advertised for use at “provincial and country
racetracks, where either the frequency of meetings or the volume of business does
not warrant expenditure upon a fixed machine” (1949, p. 8).
A sea change in the scale of use followed the introduction of automatic
Totalisators, the first of which was installed in 1913 at Ellerslie Racecourse in a
suburb of Auckland, New Zealand (Barrett and Connell 2006-7). The Ellerslie Tote
was commissioned by the Auckland Jockey Club and designed and built by George
Alfred Julius (1873–1946), a mechanical engineer and progenitor of the eponymous
Julius Tote. Automating the essential processes of mutuel betting had profound con-
sequences for managing the volume of betting traffic, for reliability, trust and speed.
In 1917 Julius founded Automatic Totalisators Ltd (ATL) based in Sydney, and
ATL went on to become a major supplier of automatic Totalisators for some 60
years, until the 1970s when the electronic digital computer swept all before it. Other
Tote manufacturers joined the fray: Graham Amplion Ltd made the Hamilton Tote;
there were machines by the British Automatic Totalisator Co., the Lightning
Automatic Totalisator Co. and in the USA by the American Totalisator Co. (ATC).
These Totes differed from the Julius designs: the Lightning Tote, known colloqui-
ally as the “ball bearing tote”, used steel balls to represent a unit bet (Caley 1929,
p.  71; Julius 1920, p.  17; Doran 2018); ATC’s machines used rotary telephone
uniselector switches; and all three used electric lamps in their display boards
(Randell 1982, p. 241; Caley 1929, p. 71, 109). While the differences in design and
implementation are non-trivial, the essential pari-mutuel functions are identical for
all Tote types. The Julius Tote described here was installed in 1930 at Harringay
Stadium in North London, and it serves here as a placeholder for the technical his-
tory of the whole class.
3.2  The Harringay Julius Totalisator

The Ellerslie Tote installed in 1913 was entirely mechanical, powered as it was by
cast iron weights attached to bicycle chains via sprockets (Barrett and Connell
2006-7, p.  3). Julius soon electrified his Totes, and a series of detailed patents
granted in the 1920s and 1930s describe a rolling progression of innovation and
enhancement that established the basis of the machine’s effective operation and its
The Harringay Tote was the first Julius Totalisator installed in Britain (1937b)
and amongst the last to come out of service. It was used for greyhound racing, giv-
ing continuous service, unbroken by the war, until the last race meeting on 25
September 1987 after nearly 60 years of active service (Norrie 1994; Swade 1987).
Ironically, financial vulnerability was partly responsible for its longevity. With the
rise in leisure activities, especially during the “Swinging Sixties”, greyhound racing
suffered a decline. Falling attendance coupled with long-standing uncertainty about
the future of the stadium kept the fate of the site semi-permanently provisional, and
the track owners (the Greyhound Racing Association) were deterred from mod-
ernising or investing in updates. So, the apparatus at closure was more representa-
tive of the original 1920s technology than at tracks more optimistic about their
While betting customs differ in various countries, it is common, at least in
Britain, for the Tote to offer three basic types of bet for a greyhound race: win, place
and forecast. The simplest is a win bet, which backs a single dog to come outright
first; a place bet backs a single runner to come first or second; a forecast bet backs
a pair of dogs to come first and second in a specific order. There are other more
exotic bets for both horse and greyhound racing: Quinella, Duella, Trio or Trifecta,
Exacta, the Plum, Two Twists and Reverse Forecast are some (Conlon 1989).
Automating each of these requires custom cabling and dedicated, albeit generic,
equipment. There is no evidence that bets other than the standard three were auto-
mated for dog racing.

3.2.1  Operation: Ticket-Issuing Machines

Totalisators had four distinct pieces of apparatus: input from ticket-issuing machines
(TIMs) which record and issue details of the wager; output in the form of drum
indicators that display the size of the pools and, in installations equipped with odds
calculators as was the case at Harringay, the public display of continuously changing
odds; and two computational devices: accumulators, which sum the wagers for
the various pools, and the Odds Machine, which continuously calculates
the odds for each dog. While the TIMs are distributed around the racetrack grounds,
the Tote machinery, including the accumulators and Odds Machine, was centralised
in machine rooms in specially constructed Tote buildings from which the public was excluded.
Fig. 3.1  Ticket-issuing machine (Harringay, 1987) (SSPL, Science Museum)

In informational terms the process of wagering started with a punter placing a bet
at a ticket-selling booth (Fig. 3.1) where an operator registered the bet on a ticket-­
issuing machine, a complex electromechanical motor-and-clutch-driven device
(Julius 1928).
The TIM performs two essential functions: it electrically transmits the wager to
the accumulators in the Tote room and prints on a blank ticket roll a “voucher strip”
recording the number of the runner, type of bet, value of bet and number of the race.
The operator prepares the selection of runner, finishing place and type of bet by
swinging a rotary handle or “setting lever” to position it over a selection hole. With
the selection made, the bet is registered by depressing the handle. This closes a
circuit to increment the cumulative total in the machine room and initiates the printing
cycle. The TIM then automatically guillotines and ejects the voucher strip, and
the operator hands the ticket to the punter as a contractual record of the wager.
Individual ticket-issuing machines are confined to only one denomination of bet,
so there are different TIMs for differently sized bets. In addition, there is one type
of TIM for win and place, another for forecast. The TIM in Fig. 3.1 is a forecast
TIM, and the arc of selection holes makes provision for each of the 30 combination
pairs for a six-dog race. Each TIM has a selection position marked “Test” which is
used to confirm correct operation while the TIM is off-line. A supervisor tested each
TIM before the booth opened to the public, and operators tested again before each race.
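The count of selection holes on a forecast TIM follows from the bet's definition: first and second place named in a specific order. A quick check in Python (illustrative only):

```python
from itertools import permutations

# A forecast names first and second in a specific order, so the
# selections are the ordered pairs drawn from the six runners.
forecast_pairs = list(permutations(range(1, 7), 2))
print(len(forecast_pairs))  # 30 selection holes for a six-dog race
```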
Security and reliability were paramount both for reputational trust and because
of instances of violence when disputes arose. Both the Tote manufacturers and the
track regulators introduced a raft of provisions to deter fraud and ensure reliability:
the facility to disable the TIM for a nonrunner to prevent fraud through phantom
bets; counters to automatically record the number of tickets sold by each TIM;
mechanisms that automatically disabled bet registration when a test ticket was
issued; serrated wheels to deface a partially printed ticket if any attempt was made
to withdraw it so that only validly completed tickets were issued (Julius 1928, p. 9);
automatically disabling the TIM in the event of interruption of the paper supply;
preventing the operator placing a new bet until the system cleared the current bet;
centrally disabling all TIMs at the start of the race; ensuring that the printing and
issuing cycle was triggered solely by acknowledgement from the accumulator so
that only bets that had verifiably contributed to the pools could be ticketed; freezing
any bets in progress not yet completed at the race start (Norrie 1994, p. 33); and a
differently coloured print roll used for each race to prevent false claims, deliberate
or inadvertent, for a win on a different race (13 different colours were used at Harringay).
Julius summarised the design briefly: “the whole installation must be so arranged
that no ticket can possibly be issued without its issue being correctly recorded and,
vice versa, that no ‘record’ can be transmitted and recorded without the correspond-
ing issue of a ticket” (Julius 1920, p. 16). As technical and security features evolved,
lawmakers enshrined these in Statutory Instruments specifying detailed regulatory
requirements for operating Tote equipment. There are eight pages of rigorous statu-
tory provisions in amendments to the 1935 Dog Racecourse Totalisator Regulations
(1967). Error and fraud were evidently taken seriously. Legislation was comprehen-
sive, exacting and technically detailed.
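Julius's invariant, a ticket if and only if a recorded bet, amounts to a handshake between the TIM and the accumulator. The following Python sketch is mine, not a description of the historical circuitry; the class and method names are hypothetical.

```python
# Sketch (not the historical circuit): the Julius invariant that a ticket is
# issued if and only if the bet is recorded, modelled as a two-phase interlock.

class Accumulator:
    def __init__(self):
        self.total = 0

    def register(self, amount):
        self.total += amount
        return True  # acknowledgement: the bet has entered the pool

class TIM:
    def __init__(self, accumulator):
        self.acc = accumulator
        self.busy = False          # operator locked out while a bet is in flight
        self.tickets_issued = 0    # the anti-fraud ticket counter

    def place_bet(self, amount):
        if self.busy:
            return None            # interlock: previous bet not yet cleared
        self.busy = True
        ack = self.acc.register(amount)
        ticket = None
        if ack:                    # print cycle triggered only by acknowledgement
            self.tickets_issued += 1
            ticket = ("TICKET", amount)
        self.busy = False          # system clears; operator may sell again
        return ticket

acc = Accumulator()
tim = TIM(acc)
t = tim.place_bet(2)
```

The point of the interlock is that the ticket count and the pool total can never diverge: every issued ticket implies exactly one acknowledged registration.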

3.2.2  Accumulators and the Aggregation of Bets

The aggregation of bets into pools is a central feature of mutuel betting, and the
accumulators are the computational heart of the machine. All that is of interest in
the mutuel system is the single figure for the cumulative totals, regardless of how
that figure is contributed to by different sizes of bet. The technical challenge is
twofold: how to aggregate different value bets into a single total, and, since the
majority of punters bet as close to the off as possible, how to manage the ava-
lanche of asynchronous bets from multiple sources that peak as the start of the
race approaches. The first is a computational issue and the second a data flow problem.
The mechanism for cumulative addition was a serial train of epicyclic gears
which form a shaft adder (Fig. 3.2). The central set of bevel gears (labelled “T”) is
pinned to the main central shaft which was subject to continuous torque from a slip-
ping clutch. The sprocket wheels (shown as thin vertical strips in Fig. 3.2a and in
elevation in 2b) are linked by differential gears which allow the continuously
torqued shaft to turn each sprocket wheel without affecting any other wheel, whether
or not other wheels are themselves turning. The freedom of the sprocket wheels to
rotate is controlled by an escapement, operated by a solenoid, which releases the
sprocket wheel to turn one increment of tooth separation each time the solenoid is
activated. The distance between the sprockets corresponds to the size of the bet; i.e.
tooth pitch is proportional to the fixed value of the bet in multiples of the lowest
denomination, the unit bet which in 1987 was 20 pence. So, the pitch of the sprock-
ets for a £1 bet is twice that for a 10-shilling bet.
The rotation of each escapement wheel on each release is imparted to the central
shaft independently and cumulatively via the chain of differential gears. The dif-
ferential action of the epicyclic train obviates problems of contention that would
otherwise be caused by the simultaneous arrival of a number of bets. The net result
is that the total number of revolutions of the shaft represents the summed value of
all bets fed to it by the various TIMs. The data on the size of
each bet is lost, absorbed as it is in the totals.
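The shaft adder's arithmetic can be caricatured in a few lines of Python (a sketch, not the mechanism itself; the names and the unit values are mine): each escapement release advances the shaft by a pitch proportional to that TIM's fixed denomination, and only the running total survives.

```python
# Sketch of the shaft adder's arithmetic: each escapement release turns the
# shaft by a pitch proportional to that TIM's fixed bet denomination, and only
# the running total survives (the size of individual bets is absorbed).

UNIT_BET = 1  # one unit of the lowest denomination

class ShaftAdder:
    def __init__(self):
        self.revolutions = 0  # net shaft rotation, in unit-bet increments

    def escapement_release(self, denomination):
        # pitch of the sprocket teeth is proportional to the bet's value
        self.revolutions += denomination * UNIT_BET

adder = ShaftAdder()
for bet in [1, 5, 1, 10, 5]:   # bets of mixed denominations, in units
    adder.escapement_release(bet)

# Only the total is recoverable; the sequence of bets is not.
print(adder.revolutions)  # -> 22
```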
Groups of accumulators were mounted on strong steel tables (referred to as
“table tops”), and subtotals from such groups could be aggregated by cascading
their outputs to the next tier of accumulators (Conlon, Randwick). “Table tops” were
arranged in large arrays, and their serried ranks disappearing into the middle dis-
tance are a visual signature of the vast machine rooms (Fig. 3.3).

Fig. 3.2  Drawing: Accumulator. (a) Epicyclic gears. (b) Escapement wheel (Julius 1920) (Brian Conlon)

Fig. 3.3  “Table tops” for win and place, White City (c. 1935) (Brian Conlon)

In his Mechanical Aids to Calculation (Julius 1920), Julius provides a survey of
the history of mechanical calculators from ancient finger counting to Babbage, in
which he demonstrates a detailed command of device design and technique. He
notes that all calculating machines to date had been designed for control and opera-
tion by a single operator, and he presents the need to manage input from multiple
simultaneous operators as a new problem (Julius 1920, p. 15). He cites no precedent
for the epicyclic shaft adder.

3.2.3  Multiplexing, Selectors and Controlling Data Flow

The relationship between the TIMs and the accumulators is not one to one. Clusters
of up to eight TIMs were multiplexed to a single escapement wheel via what Julius
called a “distributor” (Julius 1920, p.  24), also variously called a commutator or
selector. The selector consists of a rotary arm which continuously sweeps a series of
eight contact studs. The arm of the selector was connected to one escapement wheel
of an accumulator, and one TIM was connected to each of the studs. The rotation of
the selector arm polled each of eight TIMs and discharged their bets in turn to the
accumulator. The process is an electromechanical implementation of time-division
multiplexing (Conlon 2005a) first developed for telegraphy in the late nineteenth
century. The action of the rotor arm divides the cycle into equal fixed time segments
during which each accumulator input is serviced in a fixed sequence to the exclu-
sion of the others. The selectors were mounted in banks (Norrie 1994, p. 28), and
the rotor arms moved in unison with a sampling interval of about 100 milliseconds
between stud contacts.
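The polling scheme can be sketched as follows (in Python, with hypothetical queues standing in for the stud contacts): one revolution of the arm services each of the eight TIMs once, so at roughly 100 milliseconds per stud a given TIM is revisited about every 0.8 seconds.

```python
# Sketch of the selector: a rotary arm sweeps eight studs in a fixed sequence,
# discharging at most one pending bet per TIM per revolution to the accumulator.
from collections import deque

def selector_sweep(tim_queues, accumulator_total=0):
    """One full revolution of the selector arm over the eight studs."""
    for queue in tim_queues:          # fixed sequence, one stud at a time
        if queue:                     # a bet is waiting on this stud
            accumulator_total += queue.popleft()
    return accumulator_total

tims = [deque() for _ in range(8)]
tims[0].extend([1, 1])    # two unit bets pending at TIM 0
tims[3].append(1)         # one at TIM 3

total = selector_sweep(tims)          # first revolution
total = selector_sweep(tims, total)   # second revolution clears the backlog
print(total)  # -> 3
```

The fixed sampling rate is what caps the peak rate seen by each escapement wheel, which is the property the text credits to the selectors.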
The volume of betting was uneven, peaking just before the start of the race, and
the ability to register and clear bets was time critical. The data flow demands were
extreme. Julius took the average figure of 33,000 ticket sales per minute as a design
requirement with provision for double this at times of peak demand (Julius 1920,
p. 21): peak input to the accumulators from the TIMs came at 1,100 per second. The
irregular volume of betting traffic was technically problematic. The epicyclic gears
were light and responsive, and their inertia was not a limiting factor. The pinch
points were the counters recording cumulative totals. To be visible to the public, the
digit drum counters were some 2 feet in diameter, and their inertial mass prevented
them from accelerating fast enough to keep up. Slowing them down during low data
flows, or abruptly halting them when the race lock was applied at the end of the bet-
ting period, was problematic. With high data rates, the units wheel reached 200
revolutions per minute for even modestly sized installations (Julius 1920, p. 21),
and the units drum was an unreadable blur. Further, at high work rates, the carriage
of tens to the next higher wheel would entail damaging mechanical shocks.
The selectors with their fixed sampling rate capped the worst of the peaks, but
this was not sufficient on its own. A sophisticated device called a “storage screw”
buffered the accumulator outputs and balanced the rate at which the accumulators
discharge their running totals to the counters (Julius 1921, pp. 4–5; Conlon Adding
Mechanism). In this balancing device, the rotation of the accumulator shaft is stored
in a long coiled spring sleeved in a barrel. A variable speed feedback loop offloads
the bets to the counters at a rate regulated by the counter’s capacity to register the
inputs: the speed regulator automatically speeds up the counters when the data rates
are high and slows them down when data rates subside. So, the counters have a
tapered acceleration to reach higher speeds and gently come to rest as the accumula-
tor outputs reduce to zero. The buffering and controlled discharge to the counters
was automatic. If the data flowrates exceeded the buffer capacity, TIMs were auto-
matically locked down. The design made an additional provision: manually operat-
ing a local rheostat reduced the operating voltage of the accumulator motors, and
this slowed its responses to incoming bets by reducing the torque on the shaft
adders. The “table tops” with their shaft adders and barrels are clearly visible in
Fig. 3.3.
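The buffering-and-feedback behaviour can be caricatured in Python. This is a sketch under assumed numbers: BUFFER_CAPACITY and MAX_COUNTER_RATE are hypothetical values of mine, not figures from Julius's patents.

```python
# Sketch of the storage-screw buffering: accumulator output winds a spring
# buffer; a feedback loop discharges it to the display counter at a rate the
# counter can follow, and TIMs are locked out if the buffer would overflow.

BUFFER_CAPACITY = 100   # hypothetical spring capacity, in unit bets
MAX_COUNTER_RATE = 10   # most the counter can register per time step

def step(buffer_level, arriving_bets):
    locked = False
    if buffer_level + arriving_bets > BUFFER_CAPACITY:
        locked = True                      # automatic TIM lockout
        arriving_bets = BUFFER_CAPACITY - buffer_level
    buffer_level += arriving_bets
    # feedback: counter speed tracks buffer fill, up to its mechanical limit
    discharge = min(buffer_level, MAX_COUNTER_RATE)
    buffer_level -= discharge
    return buffer_level, discharge, locked

level, shown = 0, 0
for burst in [50, 40, 0, 0, 0]:  # a peak of betting, then quiet
    level, discharge, locked = step(level, burst)
    shown += discharge
```

After the peak subsides, the buffer drains at the counter's own pace; nothing is lost, which mirrors the lossless quality the text attributes to the storage screw.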

3.2.4  The Odds Machine and Elementary Trigonometry

The final piece of equipment in the Julius suite is the Odds Machine (Figs. 3.4 and
3.5) which calculates and displays the odds for each starter. The calculation of odds,
or dividend paid out on a given starter, requires the division of the grand total of all
bets (the pool) by the individual dog total for each of the six dogs. Since both the
grand total and the dog totals change continuously as betting proceeds, for results to
meaningfully inform the betting public as to which runners are favoured and to what
comparative degree, the calculation and display of odds need to be in real time. The
Julius Odds Machine is a computational device that performs continuous real-time
division and displays the result on large display indicators.

Fig. 3.4  Odds Machine, Harringay Totalisator (1987) (SSPL, Science Museum)

The principle of the division is based on schoolroom trigonometry. By definition
the tangent of the angle, θ, of a right-angle triangle is the ratio of the length of the
opposite side to the length of the side adjacent to the angle; i.e. the tan function defi-
nitionally involves the division of two quantities. The Odds Machine works by
physicalising this relationship. The machine has a right-angle triangle for each of
the six dogs with the vertical side (opposite) representing the grand total and the
horizontal side (adjacent), the dog total. The hypotenuse, called the “quotient arm”
(Julius and Julius 1932, pp. 3–4), is pivoted on moveable sliders at each end, one to
traverse the vertical and the other the horizontal (Fig. 3.5). (The quotient arm in
Fig. 3.5 is labelled "67" and not to be confused with part "47", which is a transport
strut.) Lifting the vertical slider extends the length of the opposite side of the
triangle; traversing the horizontal slider extends the adjacent. Local aggregators
output the grand total and dog totals as shaft revolutions. The rotational motion of
the grand total shaft adder is translated into linear motion by simple pulleys, lines
and weights to raise the vertical slider in proportion to the grand total. Similarly,
the position of the horizontal slider is extended in proportion to the dog total so
that the three limbs form a right-angle triangle where θ is the angle whose tan gives
the desired quotient.

Fig. 3.5  Odds Machine, patent drawing (1932) (Espacenet)
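The geometric division can be restated in a line of Python (a sketch of the relation, not of the linkage; the figures are invented for illustration):

```python
# Sketch of the Odds Machine's trigonometric division: the quotient arm's
# angle theta satisfies tan(theta) = grand_total / dog_total, so reading the
# angle reads off the odds.
import math

def odds_angle(grand_total, dog_total):
    return math.atan2(grand_total, dog_total)  # slider geometry sets theta

grand, dog = 600, 150
theta = odds_angle(grand, dog)
print(round(math.tan(theta), 6))  # -> 4.0, i.e. the pool pays 4 units per unit bet
```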

The quotient arm (colloquially called a “bayonet”) directly drives dial displays
local to the Odds Machine. The arms also drive large remote dial pointer public
displays using servo-systems in the form of back-to-back pairs of self-synchronising
“Selsyn” motors (Julius and Julius 1932, pp.  2, 7; Electrician 1932, p.  402).
However, the tan function is non-linear, and this is dealt with by tapering the scale
of the graduations on the dial face to take out the tan function (Fig. 3.6). “Barometer”
indicators were also used for outdoor indicators though Harringay used pointer dials
(Conlon World’s First).
The grand total is the same for all six runners, so the sliders on the verticals of
the triangles were all at the same height. Deductions were made from the grand total
before it was used for the calculation of the dividend. Since the deduction for bet-
ting tax, track operator’s fees and profit was levied as single compound percentage
of the pool, the deduction was simply made by gearing down the grand total shaft.
So, a 10% deduction would require a gear ratio of 10:9.
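The gearing arithmetic can be checked in a few lines (a sketch; the function name and the worked figure are mine):

```python
# Sketch of the compound deduction by gearing: a 10% take is a 10:9 gear ratio
# on the grand-total shaft, so the Odds Machine sees only the net pool.

def geared_total(gross_revolutions, ratio=(10, 9)):
    driver, driven = ratio
    return gross_revolutions * driven / driver

print(geared_total(1000))  # -> 900.0, i.e. the pool net of a 10% deduction
```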
The purpose of the Odds Machine was to provide to the public indicative com-
parative odds for each starter. But the division was not penny perfect as required for
the purposes of dividend pay-out or for calculating betting duty and house take. It is
also the case that the Odds Machine calculates and displays odds for a straight win
only and not for place or forecast. For all three classes of bet (win, place and fore-
cast), the exact amount for the dividend was calculated by operators using Brunsviga
manual desktop mechanical calculators. Totals for the calculation were read from
drum counters and cyclometer-style “Veeder” counters which recorded the aggrega-
tor outputs.
Fig. 3.6  Early Odds pointer indicator showing tapered dial scales (1927) (Brian Conlon)

The mechanical components of the accumulators are inherently analog (bevel
gears, sprocket wheels, shafts). However, the action of the escapement rockers
digitised their motions: movement was finitely incremented with positions between
fixed displacements necessarily transitional. It follows that the Odds Machine is
digital as the limbs of the right-angled triangle are altered in finite increments of
discrete bets. Finally, the dial pointer motions, which mirror the discretised move-
ment of the quotient arm, are also digital, though the discretised motions are con-
veyed to the outdoor displays via Selsyn motors, which are themselves analog.
Publicists describe a digital watch with hands as a “digital watch with analog dis-
play”, notwithstanding the fact that the hands move jerkily in obviously finite incre-
ments. “Analog” is here used as a descriptor for traditional watch displays to
differentiate them from seven-segment numerical displays. Prior to numerical digi-
tal displays, there was no need for differentiation, and the distinction was not made.
So, the Julius Totalisator is fully digital, and the Odds Machine has a digital (discre-
tised) display which for historical reasons we call analog. The French use the word
numérique for “digital.” The emphasis on counting may have its advantages. Digital/
analog distinctions do not appear in contemporary literature describing these
devices, and their limp and tangled use here flags some of the difficulties of these
faux dichotomous categories used retrospectively in the context of pre-electronic machines.

3.2.5  Overview

Totalisators are routinely underrepresented in the canon. They were substantial
installations providing time-critical service throughout their operational lives for
millions of users. By 1936, after less than a decade in business, and when the popu-
larity of greyhound racing was still on the rise, Julius’s company (ATL) alone had
supplied 74 Totalisators to greyhound and horse racing tracks in England, Scotland,
Wales, Ceylon, India, New Zealand, Burma, Singapore, Canada, France, Malta, the
USA, the Philippines and South Africa (1937b, pp. 1–2). ATL and other companies
that joined the fray went on to trade successfully into the 1970s.
Multi-user capacity, measured by the number of ticket-issuing machines the Tote
was able to service simultaneously, was unprecedented for an automatic machine in
the early twentieth century. The Julius Tote installed at the Paris Longchamp horse
racing track in 1928 could process ticket sales from 273 ticket machines; the Tote at
White City greyhound track in London serviced 325 TIMs at the height of its opera-
tional life. In 1920 Julius wrote that Totes for the largest French racecourses needed
to allow for 600 ticket machines and that one designed and built for the French
market had processing capacity extendable to 900 ticket machines with peak data
rates of 4000 bets per second (Julius 1920, pp. 24–25).
The physical scale of the installations was not for the fainthearted. The central
control room at Ascot Racecourse in Berkshire, England, measured 260 feet by 100
feet and required an improbable 1 million miles of electrical wiring and had 50,000
back-panel connections (1931, p. 7). One of the several machine rooms at Harringay
measured about 60 feet by 20 feet; the dog total accumulators occupied a room 40
feet by 40 feet. The selector panels were 7 feet high and 30 feet long (Norrie 1994,
p. 26).
The Julius Totes of the 1920s and 1930s featured a number of techniques and
practices more readily associated with later systems. With an apologetic wave to the
anachronistic use of modern terminology, such features include time-division mul-
tiplexing, polling, online real-time analog division and aggregation in real time of
different denominations into totals. Data security features include failsafe lossless
protection of transactions in progress in the event of power outage; interlocking and
handshaking between the processor and the input devices to ensure data integrity;
buffering and feedback control to cope with excessive data flows; and automatic
shutdown in the event of device failure or buffer overflow. At a systems level, the
machines were scalable by simply extending the number of identical processing
units to increase capacity, had self-monitoring fault alarms and featured modular
design allowing running repairs by ready replacement of processor subunits.
The product life of automatic electromechanical Totalisators as a class spans
over 60 years from the first installation at Ellerslie Racecourse in New Zealand in
1913 to the late 1970s when they were progressively replaced by less characterful
electronic systems. At least two Julius greyhound Totes are thought to have survived
in Britain after 1987, at Belle Vue, Manchester, and at Shawfield, Glasgow. A Julius
Tote was reportedly still operating in 2005 in Caracas, Venezuela (Conlon 2005b).

3.3  The Spotlight Golf Machine

Golf inspires both passion and uncomprehending boredom. It has an esoteric lan-
guage all its own: bogeys, birdies, eagles, mashie niblick, wedge, shank, fade,
woods, irons, drop out, lie, bunker, links, cup, pitch-and-run, and pin high. Thought
to originate in Scotland in the fifteenth century, the game has particular sociocul-
tural freight: presidents play as public symbols of political bonhomie; James Bond
and Auric Goldfinger played in a proxy power struggle; and P. G. Wodehouse wrote
short stories of comic satire, many of which featured the legendary Oldest Member
who had an inexhaustible fund of yarns with which he regaled the unwary. Possibly
because of the expense of equipment and club membership, golf has traditionally
been seen as a game for the posh, the elite or the wealthy, though this has been miti-
gated to some extent by a general movement to democratise privilege. What role for
new electromechanical technology of the 1930s, a brash intruder into hallowed tra-
ditions over 500 years old?
The overt purpose of a round of golf is to complete the course taking the
fewest number of strokes to propel the ball from the starting tee position into a cup
sunk into the putting green for each of 9 or 18 holes. The Spotlight Golf Machine
(Fig. 3.7) is an interactive golf gaming simulator that allowed nine holes of golf to
be played indoors using a full set of clubs and a real-life golf ball securely tethered.
Fig. 3.7  Craig Ferguson (restorer) with the opened Spotlight Golf Machine (Chris Crerar and Australasian Golf Museum)

For each stroke the machine instruments the principal features of the ball's flight
(velocity and direction), determines the distance the ball would reach and automati-
cally displays its rest position on an aerial map of the course layout. The rest posi-
tion of the ball is indicated by a small pencil spotlight that back-illuminates with a
dot a translucent chart or scroll on which the pictorial view of the layout of the hole
is imprinted, hence "Spotlight" in the name. The scroll, which shows the tee, fairway,
rough, sand traps (bunkers), water hazards, putting green and flag, is advanced
by hand after each hole is played to show the layout of the next hole. Scrolls can be
replaced with depictions of different courses, actual and fictional, of varying difficulty.
The game can be played by a single player or by two competing players using a
full range of clubs with no inhibition of swing. In a twosome the machine automati-
cally indicates which player has the next stroke (in golf it is the player whose ball is
furthest from the hole who plays next regardless of how many strokes it takes to get
closer to the hole than their opponent) and detects whether a player has gone out of
bounds, falls foul of a hazard, succeeds or fails in clearing the hazard and when a
player’s ball reaches the green. The position of each of the two balls is displayed by
its own spotlight. The machine does not keep score by counting the number of
strokes each player takes. Players do this themselves in the quaint tradition of mark-
ing their scorecards with a pencil (Simon and Mitchell 1936; Sale 2013, p. 7).
The Spotlight Golf Machine was invented in the UK and patented by Louis John
Simon, who secured four related patents filed between 1933 and 1936 variously in
the USA, Canada and France (Simon 1935, 1936a, b; Simon and Mitchell 1936).
From his range of other patents, Simon does not appear to have had any special
interest in golf: his patents include a floor-scrubbing machine, an illuminated adver-
tising sign, a distillation apparatus, an improvement in spanners and an oil extrac-
tion apparatus. The conclusion that Simon was not primarily motivated by a passion
for golf is supported by a newspaper reporter who maintained that Simon had never
played a round of golf in his life until he devised the Spotlight machine (1936a).
It is not known how many of the machines were made or sold, though one esti-
mate places their number at around 500. From press accounts in Great Britain,
Canada, Australia, New Zealand, Singapore and the USA (Sale 2013, p. 1), usage
was evidently fairly widespread. The Golf Machine was advertised widely in Great
Britain in the 1930s particularly in the sporting press (1936c, p.  3, 1936d, p.  7,
1937a, p. 4). Golf clubs offered indoor tuition, and upmarket hotels offered guests
the pleasure of playing a round of golf with no need to leave the premises. Two well-­
known Ryder Cup golfers gave a promotional three-hole demonstration in London
in 1936, and the press write-up eulogised the machine as an apparatus allowing play
without interference from the elements. It could be played in a lounge at home, in a
hotel or in a liner in the mid-Atlantic (1936a). The machine’s product life may have
been relatively brief, truncated perhaps by the priorities of war. Several appeared for
sale in classified ad sections in newspapers in the 1940s (1944).
At the time of writing, one near-complete example made in 1936, supplied from
London and found in New Zealand, is in the care of the Australasian Golf Museum
in Tasmania. While documentation for this machine is scant, the 1936 patents
describe a near-identical device (Simon and Mitchell 1936).

3.3.1  Playing the Game

The essential elements of the game shown in Fig.  3.8 are from a contemporary
advertisement. The ball is placed on a starting tee on a playing mat and tethered fore
and aft – to a floor unit behind and to the base of the safety net in front. The floor
unit is connected to the tall oak cabinet (centre rear of the image) by a stout multi-
core electrical cable not visible in the illustration. In the window of the cabinet
appears the course scroll (Fig. 3.9) showing the layout of the hole being played.
Lights on the front panel indicate which of the two players is to play next. The
distance of the hole from the tee position is shown on the scroll, which is calibrated
in increments of 50 yards marked on a scale along the scroll edges. The player
makes a club selection based primarily on distance, addresses the ball and swings
taking aim at a marker on the safety net. The ball is arrested by the tether before it
hits the safety net. The floor unit registers the energy of the struck ball and its direc-
tion, left or right (azimuth) and, when needed, up or down (elevation). Once the ball
has slowed to a stop, the floor unit automatically rewinds the tether in preparation
for the next play.
The floor unit electrically signals to the cabinet unit various physical parameters
of the ball’s flight including azimuth, elevation and the length of time the ball is in
flight. A sliding carriage bearing a small spotlight, located behind the graphic,
travels upwards on runners to show the advance of the ball towards the hole. The
progress of the spot on the chart is indicative rather than exact, but its stop position
is representative of the distance an untethered ball would travel when struck with
the same force.

Fig. 3.8  Spotlight Golf Machine, indoor setup (c. 1936) (Australasian Golf Museum)

Fig. 3.9  Example of a course scroll (Chris Crerar and Australasian Golf Museum)

Deflecting mirrors position the spotlight left or right if the shot is
off-centre. If the shot is out of bounds left or right, this is detected and indicated, and
motion of the spot is disabled. If the ball falls foul of a hazard, a blue hazard light is
lit (Sale 2013, p. 7).
If the player needs to recover from a sand trap, then the loft of the ball needs to
clear the lip of the bunker (Simon and Mitchell 1936, p. 5). The player chooses a
club based on the depth of the bunker, which is read from the course chart as a num-
ber in feet inscribed alongside the trap. If the player clears the bunker, the spot
indicates the landing position. If the ball does not clear the lip, then the player has a
choice of repeating the attempt or “dropping out”, i.e. repositioning the ball in a
playable position and taking a penalty stroke. The player indicates their wish to
“drop out” by depressing a button on the front of the cabinet which signals the logic
to disable the hazard (Sale 2013, p. 7) and position the ball in a playable lie. In the
case of a water hazard, the ball is unplayable, and a penalty stroke is incurred using
the same procedure as for “dropping out.” After each rewind of the tethered ball, the
display indicates which player plays next according to whose ball is furthest from
the hole. The play continues in this way until a player reaches the green.
A player’s ball reaching the green is indicated on the display cabinet. This sig-
nals that the player should move to the separate putting mat and place a standard
(untethered) ball the distance from the flag taken from the last position of their ball
as shown on the scroll display (Simon and Mitchell, Abstract, p. 5). The players
then putt to hole out, record their scores and reset the machine for the next hole.

3.3.2  Logic and Control

The logic and control of the golf machine is classically electromechanical: it uses
electrical switches, solenoid relays, copper wiring, springs, ratchets, pulleys and
motors. The interface between the ball and the machine is the floor unit, which
translates physical features of the ball’s motion into electrical signals fed to the
cabinet which houses the logic controlling the display’s responses. The floor unit
has a small turret pivoted to rotate (Fig. 3.10), and mounted on this is a “guide arm”,
which can lift and lower in the vertical plane (Simon and Mitchell 1936, pp. 2, 5). So,
the guide arm has two degrees of rotational freedom: lateral and vertical.
The golf ball tether runs the length of the arm through an eyelet at the player’s
end and through the turret. It is then wound onto a sprung drum. When the ball is
struck, the guide arm aligns with the direction of the tether, which unwinds from the
drum against the force of an internal spring. The loft of the ball raises the guide arm,
and lateral movement swings the guide arm (and therefore the turret) to the left or
right.

Fig. 3.10  Spotlight Golf Machine, floor unit (without cover) (Chris Crerar and Australasian Golf Museum)

Switch contacts sense the elevation angle in finite height intervals from horizontal
at one end to lofted at the other. Similarly, contact switches detect the lateral
swing of the guide arm dividing azimuth readings into three zones: left, right and
straight ahead. Extra contacts detect extreme swings of the guide arm as would
occur with a shot so laterally deranged as to be out of bounds left or right (Simon
and Mitchell 1936, pp. 4–5). The azimuth signals are used to deflect the spotlight
left or right from the centre line. The control logic is solenoid and relay-based, and
the outputs from the floor unit are switched on/off signals; i.e. control and measure-
ment of azimuth and elevation are digital.
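The switched zoning can be sketched as a simple quantiser (the threshold angles here are hypothetical, invented for illustration; the patent does not give numeric values in this passage):

```python
# Sketch of the floor unit's digitisation: continuous guide-arm angles are
# reduced to a handful of switched zones (threshold values hypothetical).

def azimuth_zone(degrees):
    if degrees < -45: return "out of bounds left"
    if degrees < -5:  return "left"
    if degrees <= 5:  return "straight"
    if degrees <= 45: return "right"
    return "out of bounds right"

print(azimuth_zone(-2))   # -> straight
print(azimuth_zone(30))   # -> right
print(azimuth_zone(60))   # -> out of bounds right
```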
The mechanism for predicting the distance an untethered ball would travel is in
the floor unit. The motion of the struck ball unwinds the tether cord against the
spring of the drum. Once the tethered ball has reached the limit of its travel, the
drum automatically rewinds the tether (Simon and Mitchell 1936, Abstract). A con-
ventional centrifugal speed governor limits the rewind speed and ensures that the
cord rewinds at a uniform rate. The duration of the rewind time is signalled to the
cabinet by switch contacts at the start and end of the drum rotation, and this time
interval is a measure of the initial velocity of the struck ball (Sale 2013, p.  8).
Empirical findings show that, while not exact, there is a strong proportionate rela-
tionship between the initial velocity imparted to an untethered ball and the distance
it travels (Barzeski 2006). So, the time taken to rewind the cord provides an indica-
tive measure of the distance an unconstrained ball would travel.
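The proportionality chain (rewind time to cord paid out to initial velocity to carry) can be sketched as follows; both constants are hypothetical calibration values of mine, not from the patent.

```python
# Sketch of the distance estimate: the governor makes rewind speed constant,
# so rewind time is proportional to cord paid out, which is itself roughly
# proportional to the ball's initial velocity and hence to carry distance.

REWIND_SPEED = 2.0               # metres of cord per second, hypothetical
YARDS_PER_METRE_OF_CORD = 40.0   # hypothetical calibration constant

def estimated_distance(rewind_seconds):
    cord_paid_out = REWIND_SPEED * rewind_seconds
    return cord_paid_out * YARDS_PER_METRE_OF_CORD

print(estimated_distance(2.5))  # -> 200.0 yards of simulated carry
```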
The spotlight advances up the scroll for the duration of this time signal. Where it
stops is the rest position of the virtual ball and indicative of the distance a real-life
ball would travel. There are two lamp carriers, one for each player, located behind
the course scroll (Fig.  3.7). These travel on vertical rails and carry the spotlight
optics. The carriers are independently raised by a falling counterweight draped over
a pulley: a solenoid-operated brake releases the pulley and allows the carrier to rise,
lifted by the falling counterweight. The speed of rise is constant and controlled by a
standard centrifugal speed governor. A brake is released at the start of the time
signal and re-engaged at the end when the tether cord is fully rewound. During this
interval one of the two carriers rises at constant speed drawing the spotlight upwards
hoisted by the “gravity motor” (Simon and Mitchell 1936, pp. 2–3). The spotlight
progresses towards the virtual putting green and stops at the end of the time signal
to indicate the rest position of the ball. The lowermost carrier determines which
player’s turn is next as it is the player furthest from the hole that has precedence. If
the carriers cross over, a switch is tripped which registers that the ball of the player
that has just played is now closer to the hole than their opponent’s, and the second
player is signalled to take the next stroke (Simon and Mitchell 1936, p. 3). If the
carriers do not cross, then the light indicating which player’s turn it is remains
unchanged, and the first player will continue to have precedence. When both players
have holed out, the carriers are restored to the bottom of the scroll by hand.
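The crossover logic reduces to a single comparison, sketched here in Python (function and variable names are mine):

```python
# Sketch of the turn logic: the lower lamp carrier (ball furthest from the
# hole) has precedence; the crossover switch hands the turn to the opponent.

def next_player(carrier_heights):
    """carrier_heights[i] = height of player i's carrier up the scroll."""
    return 0 if carrier_heights[0] <= carrier_heights[1] else 1

print(next_player([120, 300]))  # -> 0: player 0 is furthest from the hole
print(next_player([350, 300]))  # -> 1: player 0's shot crossed over
```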
Each of the carriers has a deflecting mirror that redirects the position of the spot-
light. Lateral deviation of the ball trajectory from straight ahead swings the guide
arm on the floor unit left or right so as to operate switches. These switched signals
from the floor unit activate solenoids that displace the mirrors to deflect the spotlight
to the left or right of centre provided the landing position of the ball is within the
corridor of allowed play. A wild shot outside the legitimate azimuth zone is detected,
signalled by an out-of-bounds panel light, and the gravity motor is braked so that the
virtual ball cannot move until the out-of-bounds condition is released by the player
pressing a panel button to indicate a penalty stroke (Simon and Mitchell 1936, p. 3).
The player then retakes the shot.

3.3.3  Scrolls and Slots

Slots cut into the scroll down each side provide a level of interactive sophistication
(Fig. 3.9). The slots code the presence of hazards and the location of the green. They
are sensed by contacts in the travelling carriers which detect where the ball is in
relation to traps and the green and determine the constraints and requirements for
the next play. There are four columns of spaced slots duplicated down each side of
the scroll. Each slot in the three columns of shorter slots indicates the proximity of
a sand trap: a slot in the left, centre or right column indicates a hazard to the left,
centre or right of the centre of the fairway, respectively. Longer slots indicate the
apron area in front of the hazards and the green (Simon and Mitchell 1936, p. 4).
The signal from the guide arm that deflects the spotlight mirror left or right also
activates the corresponding contact sensors in the lamp carrier. So, the contacts in
the travelling carrier detect whether the ball has landed in a hazard or whether the
ball is in the front of a hazard or on the apron in front of the green.
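As a modern illustration only, the slot code and the carrier's contact sensing can be modelled as set membership. The column names below are the present author's shorthand for the four columns described above, not terms from the patent.

```python
# Illustrative model of the scroll's slot code. At each carrier position a row
# of the scroll may carry short slots (sand trap left, centre or right) and a
# long slot (apron). Column names are the author's shorthand, not patent terms.

LONG_COLUMN = "apron"

def sense_contacts(slots_at_row, lateral):
    """Return what the travelling carrier's contacts detect at this row.

    `slots_at_row` is the set of slot columns cut at the carrier's position;
    `lateral` is "left", "centre" or "right", taken from the same guide-arm
    signal that drives the deflecting mirror.
    """
    return {
        "in_hazard": f"trap_{lateral}" in slots_at_row,
        "on_apron": LONG_COLUMN in slots_at_row,
    }
```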
64 D. Swade

If a ball is in a bunker, the next stroke must lift the ball clear of the lip. To ensure
sufficient loft to clear the hazard, the elevation signal from the guide arm is used. If
lofted sufficiently the travelling carrier is free to move to register the new landing
position of the ball (Simon and Mitchell 1936, p. 5, Abstract). Since the distance a
significantly lofted ball travels is shorter than that travelled by a ball struck with the
same velocity but with moderate loft, the patent description makes provision for
automatic adjustment to reduce the travel of a well-lofted ball for, say, an approach
shot to the green. Similarly, if the ball is in front of but not in the bunker, a lofted
shot is required to clear the hazard, and the elevation signal from the floor unit is
again used to ensure that this requirement is met. For other than bunker shots and
approach shots, elevation is not taken into account.
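The bunker interlock and the automatic shortening of a well-lofted ball's travel can likewise be sketched. The thresholds and the reduction factor below are invented for illustration; the patent states the behaviour but not the values.

```python
# Illustrative bunker interlock and loft adjustment. The thresholds and the
# reduction factor are invented for illustration; the patent gives no values.

LOFT_THRESHOLD = 30.0  # assumed elevation, in degrees, needed to clear a bunker lip
HIGH_LOFT = 45.0       # assumed elevation above which travel is automatically reduced
REDUCTION = 0.6        # assumed fraction of nominal travel for a well-lofted ball

def next_travel(in_bunker, elevation_deg, nominal_travel):
    """Carrier travel permitted for the next stroke.

    Returns None when a stroke from a bunker fails to lift the ball clear of
    the lip, in which case the carrier stays put.
    """
    if in_bunker and elevation_deg < LOFT_THRESHOLD:
        return None
    if elevation_deg >= HIGH_LOFT:
        return nominal_travel * REDUCTION  # shorter carry for a high, soft shot
    return nominal_travel
```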
There are features of the game of which the machine does not take account.
Tethering the ball prevents it from spinning, so the often substantial effects of side-
spin, topspin or backspin do not have any effect on simulated flight. In the case of a
slice or a hook, both of which are regarded as mishits giving sidespin, the landing
position is determined only from initial direction and velocity, and the flight path is
simulated as straight whatever the lateral direction. The same is true for moderate
forms of these (fade and draw), which are not mishits but used intentionally by more
advanced golfers to drift the ball to curve right or left in a controlled way. The
machine is similarly insensitive to backspin applied to stop a ball from running on,
particularly for approach shots landing on the green, or even to pitch the ball back-
wards towards the flag following a deliberate overshoot.
At least three of Simon’s patents include proposals for cinema back-projection in
lieu of stationary translucent scrolls so that the course, rather than the ball, would
appear to travel in response to a flighted ball. Alternatively, projection of changeable
static images by a “cinematograph machine” could be used in lieu of the static
scrolls (Simon 1936a, pp. 2, 3; 1936b, p. 8; 1935, pp. 3, 4). It is unclear whether this
was ever implemented in marketed versions of the machines.

3.3.4  Overview

The Spotlight Golf Machine is an interactive game simulator enabling one or two
players to play golf indoors. It was a successful product from the mid-1930s to the
early war years, and its use was publicly advertised by golf clubs and hotels. The
overall design and implementation are novel, and the machine is well engineered.
The game and simulation logic is constructed from conventional electrical and elec-
tromechanical switching technology using contacts and solenoids. The distance pre-
dictor is an analog device that codes its output into a switched digital signal in
which only the duration is significant. A degree of programmability is provided by
slots cut into the course scrolls which encode features of the particular course
depicted, determine the responses of the machine to player input and control admis-
sible player response. The Spotlight Golf Machine was not the first interactive
mechanical game (Freeland 1879; Monnens 2013), nor was it the first golf machine
3  Forgotten Machines: The Need for a New Master Narrative 65

(1922). However, the levels of interactivity and the sophistication of the simulation
were novel, as was the application of contemporary technology to gaming and simu-
lation. In a broader context, the Spotlight Golf Machine may be said to belong more
to the history of gaming than to the history of computing as currently framed.

3.4  Wrap-Up

Totalisators feature only fleetingly in the second master narrative (Campbell-Kelly
and Aspray 1996, p. 111) and the Spotlight Golf Machine not at all. Yet they embody
many of the logical principles, techniques and applications of their electronic suc-
cessors: polling, multiplexing, handshaking, buffering, memory and real-time com-
putation are some. At systems level the Totalisator is an online real-time multi-user
information management and calculation system; the Spotlight Golf Machine is a
real-time interactive simulator. There are early electronic systems that have related
capabilities that post-date our two examples and that do feature in the master narra-
tives. The canonical machine for real-time applications is SAGE, an extensive air
defence network developed in the USA in response to the threat of Soviet airborne
nuclear attack and that went into operation in 1958; Project Whirlwind, operational
in 1951, was a real-time flight simulator built to accelerate pilot training, which
featured real-time interactive simulation; SABRE, a multi-user real-time distributed
system for airline reservation, went live in the early 1960s (Campbell-Kelly and
Aspray 1996).
When it comes to canonical honours, it would seem that selection privileges
electronic technologies over mechanical and electromechanical technologies. But
one could argue that this on its own is not sufficient to account for the exclusion of
our two examples. Neither the Totalisator nor the golf machine was created for sci-
entific calculation or administration. They are machines for leisure and entertain-
ment. And the growth markets for on-line gambling and game simulators are outside
the time frame of the two master narratives. It is not that these machines are insig-
nificant but that the narrative that confers meaning is outdated.
Our narrative frames are provisional, and exclusion from the canon runs forbid-
ding risks. In Enigma Rebus, David Link argues that computer-like devices, being
intricate and opaque, soon become enigmas, lapsing into mystery unless engaged
with, preferably operationally, within the living memories of their creators or the
last practitioners. Link uses as a literary device Franz Kafka’s Odradek, an object
whose use and purpose are unclear (Link 2016, pp. 79–112). Odradek is an object
trapped in no man’s land between artefacts, which have use but no volition, and
humans, who have volition but no purpose. It is the archetype of the mystery object.
Link observes that artefacts become increasingly unreadable after they cease to
work and, the longer this goes on, unreadability progresses to the point “where even
its former overall purpose is forgotten.” The object lapses into mystery “because it
is not sufficiently charged by interest in it. Being purposeless and no longer under-
stood, the thing loses its place in the world” (Link 2016, p. 87).
Only three Spotlight Golf Machines have so far been traced: two of them shells,
one near-complete, albeit degraded. The machine’s purpose and function were partially lost and
had to be reverse-engineered from the one near-intact relic. There is no known sys-
tems diagram for a Julius Totalisator. The closest is a partial diagram drawn, prob-
ably within the last decade, by an ex-manager and engineer in Julius’s Totalisator
company (ATL), who had worked on Julius Totes. Link’s abyss of mystery seems
disturbingly real.
Attempts to build reconstructions of working machines of which there is no sur-
viving original provide further test cases for Link’s thesis. The reconstruction of
Colossus, developed for wartime code breaking at Bletchley Park, relied on frag-
mentary documentation, technical archaeology and the memories of surviving cre-
ators (Sale 2005, pp. 62–3; Swade 2017). Colossus is included in the canon but has
had to impersonate a general-purpose computer to keep its place. It remains some-
thing of an awkward guest, guiltily aware that it may be present under false pre-
tences. The reconstruction of EDSAC, which ran its first programme in Cambridge
in 1949, is based on incomplete documentation, reverse engineering and the exper-
tise of the last generation of engineers schooled in vacuum tube technology (Herbert
2017). The reconstruction of the Manchester “Baby”, which ran the first stored pro-
gramme in 1948, was assisted by the recovered memories of its original creators
(Burton 2005). In all these cases, documentation was incomplete, sometimes lam-
entably so. Even these machines, which enjoy inclusion, required substantial intel-
lectual, technical and practical effort to excavate. Had these reconstructions not
been undertaken within the lifetimes of contemporary practitioners, knowledge,
know-how and purpose would have been irretrievably lost.
Operating with outdated narratives deprives significant and potentially signifi-
cant machines of the attention that sustains their histories. Odradek is a baleful
reminder of the fate that awaits such neglect.


References

1928. “Dog Racing Bill: Emergency Committee’s Plea.” The Times, 23 October 1928, 11.
1922. “The Craig Golfmeter.” The Yorkshire Post, March 11, 14.
1926. “The Protest Against the Betting Tax.” The Yorkshire Post, 4 November 1926.
1928. Racecourse Betting Bill. Hansard, 16 March 1928.
1930-1. Robot. The Concise Universal Encyclopedia, edited by J. A. Hammerton. London:
Amalgamated Press.
1931. “Progress of the Totalisator: a Big Business Now Established.” Sporting Life, 7 March 1931.
1932. “An Improved Totalisator: particulars of the Latest Premier (Julius) Automatic Type  –
Continuous Indicating of Dividends Returnable – Use of Selsyn Motor.” The Electrician: 402–3.
1936a. “Ryder Cup Players Demonstrate Spotlight Golf.” Aberdeen Press and Journal, 23 May
1936, 5.
1936b. “Spotlight Golf.” The Evening News, 3 December 1936, 7.
1936c. “Spotlight Golf has Come to Stay.” Northern Whig, 31 October, 3.
1936d. “Spotlight Golf Now.” Daily Herald, 23 May, 7.
1937a. “Are your clubs entirely suitable?” Aberdeen Press and Journal, 2 July, 4.
1937b. “Timelines: the Premier (Julius) Automatic Totalisator.” accessed 29 April 2018. http://
1944. “Sale of Spotlight Golf Machine.” The Times, 22 August, 1.
1949. “Automatic Tote: Solve Difficulties of Country Clubs.” Kalgoorlie Miner, accessed 28 April
1967. Betting and Gaming: The Dog Racecourse Totalisator Regulations 1967. Home Office.
Whitehall: HMSO.
2013. Australia’s Oldest Computer found in Bothwell, Tasmania? Hobart, Tasmania.
Barrett, Lindsay, and Matthew Connell. 2006–7. “An Unlikely History of Australian Computing:
the Reign of the Totalisator.” The Rutherford Journal 2.
Barton, Charles. “Old Mechanical, Electromechanical Totalisator Computer Systems.” accessed
07 May 2018.
Barzeski, Erik J. 2006. “The Mythical ‘Ball Boost’.” accessed 15 May 2018. https://thesandtrap.
Burton, Christopher P. 2005. “Replicating the Manchester Baby: Motives, Methods, and Messages
from the Past.” IEEE Annals of the History of Computing 27 (3):44–60.
Caley, D. H. N. 1929. “Electricity and the ‘Tote’.” The Electrician 103 (3, 4):71–73, 108–109.
Campbell-Kelly, Martin, and William Aspray. 1996. Computer: A History of the Information
Machine: Basic Books.
Canymeres, Ferran. 1946. L’Homme de la belle èpoque. Paris: Les Editions Universelles.
Ceruzzi, Paul E. 1998. A History of Modern Computing: MIT Press.
Conlon, Brian. “A 1917 Randwick Julius Tote Shaft Adder.” accessed 7 May 2018. http://members.
Conlon, Brian. “Adding Mechanism and Mechanical Storage Device.” accessed 05 April 2018.
Conlon, Brian. “Sir George Julius, Inventor and Australian Nation Builder.” accessed 7 May 2018.
Conlon, Brian. “The World's First Odds Computer.” accessed 6 May 2018. http://members.oze-
Conlon, Brian. 1989. “Totalizator Wager Type Explanation: Pool Definitions from the ATL Diary.”
accessed 06 May 2018.
Conlon, Brian. 2005a. “The Harringay London Julius Tote.” accessed 7 May 2018. http://members.
Conlon, Brian. 2005b. “Julius tote installation in Caracas.” accessed 07 May 2018. http://members.
Cortada, James W. 1993. Before the Computer: IBM, NCR, Burroughs, and Remington Rand and
the industry they created. Princeton NJ: Princeton University Press.
Doran, Bob. 2006-7. “The First Automatic Totalisator.” The Rutherford Journal 2.
Doran, Bob. 2018. “Henry Hodsdon and his ‘marble’ Totalisator.” The Rutherford Journal 5.
Freeland, Frank, T. 1879. “An Automatic Tit-Tat-Toe Machine.” Journal of the Franklin Institute
CVII (No. 1):1–9.
Goldstine, Herman H. 1972. The Computer: from Pascal to von Neumann. Princeton: Princeton
University Press.
Graham, R. A. 2007. “Who Killed the Bookies?: Tracking Totalisators and Bookmakers across
Legal and Illegal Gambling Markets.” M A Anthropology, University of Canterbury.
Julius, George Alfred. 1920. Mechanical aids to calculation. Institution of Engineers, Australia.
Julius, George Alfred. 1921. Improvements in Race Totalisators. Patent GB166007, filed 15 May
1919, issued 11 July 1921.
Julius, George Alfred. 1928. Improvements in Machines for Printing and Issuing Totalisator
Tickets and for Operating Totalisator Registers and Indicators. Patent GB297048, filed 13 June
1927, issued 13 September 1928.
Julius, George Alfred, and Awdry Francis Julius. 1930. Method and Means for Calculating and
Indicating the Ratio of Two Variable Quantities. Patent GB332668, filed 13 May 1929, issued
31 July 1930.
Julius, George Alfred, and Awdry Francis Julius. 1932. Improvements in and relating to Ratio
Computers more particularly for Totalisator Odds Indicator Apparatus. Patent GB383649, filed
15 June 1931, issued 24 November 1932.
Herbert, Andrew. 2017. “The EDSAC Replica Project.” In Making IT Work, edited by Martin
Campbell-Kelly, 22–34. London: British Computer Society and The National Museum of Computing.
Link, David. 2016. “Enigma Rebus: Prolegomena to an Archaeology of Algorithmic Artefacts.” In
Archaeology of Algorithmic Artefacts, David Link. Minneapolis: Univocal.
Lynch, Richard K. 1992. “On Analytical ‘Engines’, Data ‘Architectures’ and Software ‘Engineers’:
Metaphoric Aspects of the Development of Computer Terminology.” PhD, Teachers College,
Columbia University.
Mackie, William. 1974. A Noble Breed: the Auckland Racing Club 1874-1974. Auckland: Wilson
& Horton.
Metropolis, N., J.  Howlett, and Gian-Carlo Rota, eds. 1980. A History of Computing in the
Twentieth Century: Academic Press Inc.
Monnens, Devin. 2013. “‘I commenced an Examination of a Game called “Tit-Tat-To”’: Charles
Babbage and the ‘First’ Computer Game.” Proceedings of the 2013 DiGRA International
Conference: Defragging Game Studies.
Munting, Roger. 1996. An Economic and Social History of Gambling in Britain and the USA.
Manchester: Manchester University Press.
Norrie, Charles. 1994. “Harringay Greyhound Stadium Totalisator and George Alfred Julius.”
London's Industrial Archaeology (5):24–34.
Randell, Brian, ed. 1982. The Origins of Digital Computers: Selected Papers. Berlin: Springer-Verlag.
Rojas, Raul, and Ulf Hashagen, eds. 2000. The First Computers – History and Architectures: MIT Press.
Sale, Anthony E. 2005. “The Rebuilding of Colossus at Bletchley Park.” IEEE Annals of the
History of Computing 27 (3):61–69.
Sale, Arthur. 2013. “1936 Computer Game: Spotlight Golf.” Australasian Golf Museum, accessed
13 May 2018.
Simon, Louis John. 1935. Apparatus for playing a game simulating golf. Patent GB440177, filed
21 March 1934, issued 23 December 1935.
Simon, Louis John. 1936a. Apparatus for playing a game simulating golf. Patent US2051751, filed
6 April 1934, issued 18 August 1936.
Simon, Louis John. 1936b. Golf game apparatus. Patent CA360189, issued 9 January 1936.
Simon, Louis John, and Frank Allen Mitchell. 1936. Apparatus for playing a game. Patent
GB458663, filed 19 March 1935, issued 21 December 1936.
Swade, Doron. 1987. “Science goes to the dogs: a sure bet for understanding computers.” New
Scientist (29 October):49–51.
Swade, Doron. 2017. “The Historical Utility of Reconstruction and Restoration.” In Making IT
Work, edited by Martin Campbell-Kelly, 7–21. London: British Computer Society and The
National Museum of Computing.
Williams, Michael R. 1985. A History of Computing Technology. Englewood Cliffs, N.J.: Prentice-
Hall, Inc.
Chapter 4
Calvin Mooers, Zatocoding, and Early
Research on Information Retrieval

Paul E. Ceruzzi

Abstract  Historians of computing know of Calvin Mooers (1919–1994) for several
contributions to electronic computing and programming languages. This paper
describes a less well-known development by Mooers. Around 1950, he devised a
coding scheme for edge-notched cards—a decidedly “low-tech,” nonelectronic
method of information storage and retrieval, based on cards with notches cut into
their edges. In spite of his experience and training in electronics, Mooers believed
that existing digital computer projects were ill-suited for the storage and retrieval of
large amounts of information. Edge-notched and other cards had been in common
use for data retrieval, but none were able to handle the explosion of information
occurring in the sciences after World War II. “Zatocoding” was to address the defi-
ciencies of both existing electronic computers and of other card systems, as it was
based on a more theoretical understanding of the nature of information and its
retrieval. Zatocoding did not prevail, but I argue that the theoretical work done by
Mooers proved later to be of fundamental importance to modern databases, encryp-
tion, and information retrieval.

Historians of computing know of Calvin Mooers (1919–1994) for his several con-
tributions to computing. One was his work at the Naval Ordnance Laboratory in
White Oak, Maryland, where during and after World War II, he worked with John V.
Atanasoff on wartime acoustics problems and later on a never-completed electronic
digital computer project. Another was his development of the “TRAC” program-
ming language in the late 1960s. Mooers’ assertion of intellectual property rights
over the TRAC language generated controversy and was the opening salvo in the
battles over rights to programming languages and operating systems. This paper
chronicles a less well-known development by Mooers from the early 1950s: a cod-
ing scheme for edge-notched cards—a “low-tech,” nonelectronic, albeit digital
method of information storage and retrieval.

P. E. Ceruzzi (*)
Smithsonian Institution, Washington, DC, USA

© This is a U.S. government work and not under copyright protection in the U.S.;
foreign copyright protection may apply 2019
T. Haigh (ed.), Exploring the Early Digital, History of Computing,
70 P. E. Ceruzzi

At the Naval Ordnance Laboratory (NOL) in the spring of 1945, Mooers was part
of a cutting-edge electronic computer project, one of the most advanced in the
country (Weik 1955, pp. 125–126). The project was led by J.V. Atanasoff and
attracted the attention of both John Mauchly, who was a consultant at the NOL, and
John von Neumann, who visited the NOL from time to time. Yet in September 1946,
Mooers left the NOL and enrolled in graduate school at MIT to study mathematics.
He then abandoned that degree program and began a study of information science,
which led to his design of a mechanical system he later called “Zatocoding” (Mooers
1992; Mooers 2003, pp. 51–66). Like others at the time, Mooers observed the expo-
nential increase in published scientific information after 1945, and he saw that exist-
ing indexing and retrieval systems were inadequate to handle it (Rayward and
Bowden 2004; Hahn and Buckland 1998). A major influence on his ideas was MIT
Professor James W. Perry, who had enlisted the support of the American Chemical
Society in exploring methods for retrieving chemical data (Casey and Perry 1951).
In 1947 Mooers founded a company, “Zator,” and began marketing a custom-
designed version of edge-notched cards (Fig.  4.1). Information was coded onto
these cards by a system he called “Zatocoding.” No electronics, digital or analog,
were involved in retrieving the data. The only mechanism involved was a motor-
driven shaker that helped extract the relevant notched cards.
Mooers’ decision raises the question of why someone with a background in elec-
tronics and an exposure to the beginnings of electronic digital computing would
pursue such a path. The NOL computer project foundered and was eventually can-
celled after John von Neumann assessed the project and gave a poor evaluation of it.

Fig. 4.1  Calvin Mooers, ca. 1950. (Photo: author’s collection, gift of Calvin Mooers)
4  Calvin Mooers, Zatocoding, and Early Research on Information Retrieval 71

Mooers stated that the project suffered from poor management. Although both von
Neumann and Mauchly were involved with the project, their ability to contribute to
it was limited. In the immediate postwar years, von Neumann was involved with a
number of other projects, which placed many demands on his time. At the same
time, the FBI was investigating Mauchly for alleged security issues, which restricted
his access to classified information. Mooers recognized that the NOL project was
not going to succeed, and that was a factor in his departure. But other contempora-
neous projects along the Eastern Seaboard, such as at the National Bureau of
Standards, the University of Pennsylvania, the Institute for Advanced Study, and
MIT, pointed toward the ascendancy of digital electronic computers in the postwar
world. Given his experience and training in electronics, Mooers might have found a
place in one of those other projects. A Whig view of history would say that Mooers
took a step backward. And indeed, some of his colleagues did not think highly of his
interpretation of Claude Shannon’s work on information theory as a basis for
Zatocoding. For example, Jerome Wiesner, who was charged with organizing an
American delegation to the Third London Symposium on Information Theory, did
not want to fund Mooers’ trip, even though he had been invited by the British (Kline
2015, pp. 113–114).
The Zator Company also foundered, and Zatocoding as an information retrieval
system did not become common. However, we shall see that in its failure, Mooers
made fundamental contributions to what later became computer science. But that
was years later, and one must use care in giving him credit for what others accom-
plished based on his pioneering steps. What follows is a look at why Mooers chose
the path he chose, away from electronics, and what his reasoning was behind the
creation of Zatocoding.

4.1  The Computer as a Data Retrieval Device

One answer to why he chose that path is that although the first electronic computers
could solve complex mathematical problems, they were ill-suited for data retrieval.
Their central storage capacities, whether mercury delay line, Williams tube, drum,
or magnetic core, were too small. Secondary memories of magnetic tape had more
capacity, but the challenge of storing and retrieving data from a serial medium such
as tape was not trivial. Popular images of early mainframes typically showed a con-
trol panel with switches and blinking lights, behind which was a long row of mag-
netic tape drives. That is why the name for one of IBM’s first forays into von
Neumann electronic computers was the “Tape Processing Machine.” The UNIVAC,
America’s first commercial electronic computer, was known as much for its
“Uniservo” tape drives as for its stored program architecture. In the mid-1950s both
RCA and Remington Rand introduced computers that deviated from the von
Neumann architecture, to provide for better data retrieval and to avoid the “von
Neumann bottleneck” of having a single channel between the central processor and
main memory. RCA’s “BIZMAC” (Fig. 4.2) had multiple channels between multiple
processors and magnetic tape drives.

Fig. 4.2  RCA “BIZMAC.” Note the multiple processor consoles and the number of employees
needed to operate the system. (Photo: Charles Babbage Institute)

RCA worked closely with von
Neumann and knew of his preferred architecture for computers, but they chose a
different design for the BIZMAC.
Likewise, although the Remington Rand UNIVAC I was a technological tour de
force, as late as 1956, the company was marketing a non-von Neumann computer,
the UNIVAC File, programmed by a plugboard. The Burroughs Corporation’s first
entry into electronic computing, the “E-101,” was also programmed by a plugboard.
These machines addressed the limitations of the electronic digital computers for
access to large amounts of business data (Weik 1955, pp.  157–159; Daly 1956,
pp. 95–98).
Well into the 1970s, one of Remington Rand’s most profitable products was a
manual card-based data retrieval system called the “Kardex” (shown in Fig. 4.3).
Libraries found the Kardex system useful for keeping track of serial holdings.
Hospitals used the Kardex to keep track of patients as they progressed from admit-
ting, to treatment, to discharge. Thus, the company that marketed the UNIVAC, one
of the most advanced electronic computers of the 1950s, made substantial profits
from a card-based system.
Thus manual, card-based information retrieval systems continued to find a market.

Fig. 4.3  Remington Rand “Kardex” in storage at a science library. (Photo courtesy of Linda Hall
Library of Science, Engineering, and Technology)

The persistence of manual filing reveals that, for problems involving storage
and retrieval of large amounts of data, magnetic tape and even the newly invented
disk memory had limits. As late as 1973, Donald Knuth, in his seminal Volume 3 of
The Art of Computer Programming: Sorting and Searching, devoted several pages
to the problem of “retrieval on secondary keys” and stated:
…some of [these applications] are better done by hand than by machine! Computers have
increased the speed of scientific calculations by a factor of 10⁷ or 10⁸, but they have
provided nowhere near this gain in efficiency with respect to problems of information
handling. When large quantities of data are involved, today’s computers are still
constrained to working at mechanical (instead of electronic) speeds, so there is no
dramatic increase in performance per unit cost when a computer system replaces a
mechanical system. We shouldn’t expect too much of a computer just because it performs
certain other tasks so well. (Knuth 1973, 551)

Knuth’s description of the “mechanical” speeds of data retrieval implies the reliance
on magnetic tape for “large quantities of data.” Although by 1973 disk storage was
becoming available on mainframe installations, bulk storage and retrieval were still
handled by tape drives.
Knuth was not alone in this assessment. In the recently declassified history of
computing at the National Security Agency (NSA), historian Colin Burke recounts
several attempts by the agency to adapt existing or build new computers to tackle
their data retrieval needs (Burke 2002). Most were disappointments, with one major
exception. In the late 1950s, the NSA contracted with IBM to develop a system that
employed tapes mounted in cartridges, with a robotic device that retrieved, mounted,
and demounted the tapes when summoned by the computer processor. The overall
capacity of this “Tractor” was 19 billion characters—a massive capacity for its day
(Bashe et al. 1986, pp. 224–230). The NSA considered the “Harvest” computer sys-
tem, with its Tractor tape drives, a success. But for IBM it was a one-off product,
although the technologies developed for it were later incorporated into successful
commercial products. The approach to data retrieval embodied in the Tractor did not prevail.
The impasse was only partially broken with the introduction of disk storage,
beginning in 1956 with the IBM Model 305 Disk Storage Unit, later marketed as
“RAMAC” (Random Access Memory Accounting Machine) and demonstrated at
the Brussels World’s Fair in 1958. IBM President Thomas Watson called the prod-
uct introduction “the greatest product day in the history of IBM” (Ceruzzi 1998,
p. 70). Watson was correct, although it took more than a decade before random-
access disk storage began to supplement tape as a viable storage medium for data
retrieval. These limits persisted well into the following decade.
Disk drives did not have the drawback of tape’s sequential nature of storage and
retrieval. The notion of “random access” implied that one could go directly to a
record. The time to access the information would be slightly different depending on
several factors such as the location of the read/write head, but access was much
faster than if one had to search through a reel of tape. Random access alone, however,
did not solve all the problems of rapid data retrieval. Fast and efficient retrieval depended
on an indexing scheme that directed the computer to the correct place on the disk
where the data were stored—not a trivial problem. We shall return to the indexing
problem later as we examine Mooers’ work on information retrieval.
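The indexing problem can be made concrete with a toy contrast between tape-style sequential reading and disk-style indexed access. The record layout and key names below are invented for illustration and correspond to no historical system.

```python
# Toy contrast between sequential (tape-like) and indexed (disk-like) retrieval.
# Records and keys are invented for illustration only.

records = [("acct-%04d" % i, {"balance": i * 10}) for i in range(10_000)]

def sequential_lookup(key):
    """Tape-style search: read records in order until the key appears.

    Returns (payload, number_of_reads).
    """
    for reads, (k, payload) in enumerate(records, start=1):
        if k == key:
            return payload, reads
    return None, len(records)

# An index maps each key to its position, standing in for a disk block address.
index = {k: pos for pos, (k, _) in enumerate(records)}

def indexed_lookup(key):
    """Disk-style search: one index probe, then one direct read."""
    pos = index.get(key)
    return (None, 0) if pos is None else (records[pos][1], 1)
```

Finding the record near the end of the file costs thousands of reads sequentially but a single read through the index, which is the difference an indexing scheme makes.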
Concurrent with the advent of large-capacity disk storage units was the develop-
ment of theories of searching data stored on them using, e.g., linear searches,
“B-trees,” and other techniques. Those theories were part of the emergence of a
fundamental shift in the understanding of computer storage and retrieval of data, an
understanding of the need for what later was known as “database management sys-
tems” (DBMS). Its origins are credited to Charles Bachman, who outlined the
need for such an approach in 1973 (Bachman 1973). For decades, until the advent
of “RAID” storage, high-capacity disk storage was expensive.1 With the exception
of the Harvest system, most mainframe installations employed people who would
mount and demount relatively inexpensive reels of magnetic tape from a drive as
needed. A human operator could also mount and demount disks from second-
generation disk drives, such as the IBM Model 1311 drive first offered in 1962. That
process was not straightforward. Because of the tight tolerances, these disks were
kept in sealed cartridges resembling cake boxes. The disk packs were expensive and
lacked the massive capacity of a roomful of tapes.
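The search theories mentioned above can be illustrated, in the simplest possible terms, by the gap between linear and logarithmic searching. The sketch below is an ordinary in-memory binary search, the principle that tree-structured disk indexes such as B-trees elaborate; it is not a B-tree implementation.

```python
# Linear versus logarithmic search over sorted keys: the principle behind
# tree-structured disk indexes such as B-trees, shown as an in-memory sketch.
from bisect import bisect_left

def linear_find(keys, target):
    """Linear scan: returns (position, comparisons). O(n) comparisons."""
    for pos, k in enumerate(keys):
        if k == target:
            return pos, pos + 1
    return None, len(keys)

def binary_find(keys, target):
    """Binary search on sorted keys: O(log n) probes."""
    pos = bisect_left(keys, target)
    if pos < len(keys) and keys[pos] == target:
        return pos
    return None
```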

1. RAID is an acronym for “Redundant Array of Inexpensive Disks.”

4.2  Edge-Notched Cards

Against that background, Mooers turned to edge-notched cards. Edge-notched cards
were once in common use for data storage and retrieval, including at many technical
libraries and at intelligence agencies, well into the 1980s. They became obsolete
with advances in disk storage, the personal computer, and research into the theory of
data retrieval. Kevin Kelly, a co-founder of Wired magazine, noted the speed with
which these cards became “dead media” and how few people born after 1980 have
any notion of what they were or how they worked (Kelly 2008). Edge-notched cards
were heavily used by the chemical, textile, and other industries into the mid-
twentieth century (Casey and Perry 1951). As the cartoon reproduced as Fig. 4.4
shows, they enjoyed a high public profile at the time. Yet the numerous standard
histories of punched card systems typically begin with Herman Hollerith’s cards for
the 1890s Census, proceed to the further development by IBM and Remington Rand,
and make no mention of edge-notched cards at all (e.g., Heide 2009). This is not
meant to criticize the many excellent accounts of punched cards’ use in data pro-
cessing. It only shows how quickly edge-notched cards faded from interest.
Edge-notched cards were similar to standard library cards, although larger, and
they had holes punched along the edges. Each hole or series of holes represented an
attribute, category, or trait, e.g., author, title, subject, date range, etc. By cutting
away a notch, and passing a knitting needle-like device through the desired field, all
of the cards that had that particular attribute would drop out. Using more than one
pass, or using more than one needle when conducting a search, gave simple Boolean
searches. More advanced cards used two or even three rows of holes and offered
more retrieval capability depending on the depth of the cut.
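The mechanics can be illustrated with a toy simulation. In the sketch below (all card titles and attributes are invented for illustration), a card's notches are modeled as a set of positions; a needle pass keeps only the cards notched at the needle's position, and passing several needles in one selection amounts to a Boolean AND.

```python
# A toy simulation of edge-notched card retrieval. One hole position per
# attribute; a notched position lets the card drop off the needle.
# Card titles and attributes below are invented for illustration.

CARD_FIELDS = ["author:smith", "subject:chemistry", "subject:textiles", "date:1940s"]

def make_card(title, attributes):
    """A card is its title plus the set of positions notched open."""
    notches = {CARD_FIELDS.index(a) for a in attributes}
    return (title, notches)

def needle_pass(deck, positions):
    """Cards notched at ALL the needle positions drop out of the deck (AND)."""
    return [card for card in deck if set(positions) <= card[1]]

deck = [
    make_card("Dye chemistry report", ["subject:chemistry", "date:1940s"]),
    make_card("Loom maintenance notes", ["subject:textiles"]),
    make_card("Smith's fiber study",
              ["author:smith", "subject:textiles", "subject:chemistry"]),
]

# Two needles at once: which cards are about chemistry AND textiles?
hits = needle_pass(deck, [CARD_FIELDS.index("subject:chemistry"),
                          CARD_FIELDS.index("subject:textiles")])
print([title for title, _ in hits])  # → ["Smith's fiber study"]
```

A Boolean OR would simply be two separate passes with the results combined, which is exactly how multiple passes or multiple needles gave simple Boolean searches in practice.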
To illustrate their advantages, consider how before 1980 the Library of Congress
(LC) organized its holdings using 3 × 5″ cards. For each book in its collection, the
library generated several identical 3 × 5″ cards: one for author, one for title, and
several more for subject. These were then filed alphabetically. For the first two sets
of cards, the filing was straightforward, but for the third, not so. What was a book’s
subject? The card could only categorize a few. The library maintained a list of sub-
ject headings in a bound volume kept adjacent to the card catalog (Library of
Congress 2018). The choice of subjects was nevertheless arbitrary. And the subjects
that readers searched for would change over the years, rendering the initial choice
of subjects obsolete. Both the Dewey Decimal System and the LC’s cataloging
scheme have difficulty coping with the flood of information and new topics brought
on by recent scientific and technological advances. Cataloging books on the history
of computing is a good example: the library catalogs some of them under “QA”—
mathematics—and others under “TK,” electrical engineering. Because the Library
of Congress made its cards and the accompanying catalog scheme available to other
libraries, the LC classification scheme and the 3 × 5″ card, with their limitations,
became a de facto standard.
76 P. E. Ceruzzi

Fig. 4.4  Cartoon revealing the “secret” manual method of data retrieval using Zatocoding.
(Cartoon in author’s collection, gift of Calvin Mooers)

Consider this more specific problem of data retrieval. In 2003 the comic Phyllis
Diller offered a collection of personal materials to the Smithsonian Institution for
preservation and display. Among the items offered were 48 drawers, containing
52,569 jokes written on 3 × 5″ note cards (Smithsonian Institution 2017). The cards
were organized alphabetically by subject: “airplanes,” “animals,” “drugs,” “eating,”
etc. But what if she wanted to retrieve a joke about the quality of food served on an
airplane? Ms. Diller did not have the luxury of making duplicate cards, as the
Library of Congress did. (The Smithsonian has appealed to the public to join a
crowdsourcing effort to index these jokes in a modern computerized database.)
Edge-notched cards (Fig. 4.5) allowed one to retrieve a card using several identi-
fiers. They represented what information scientists call “faceted classification”: a
method of classifying information that addresses the restrictions inherent in the
Dewey Decimal or Library of Congress schemes described above (Svenonius 2000;
La Barre 2002).

Fig. 4.5  An example of the use of the “Cope-Chat Paramount” edge-notched card system for a
dissertation on the history of astrophysics. Note the holes notched on all four sides of the card.
(Photo: David H. DeVorkin)

InDecks™ cards, for example, had two rows of holes around the
entire perimeter of the card. There was no need to keep the cards in alphabetical or
any other order; typically, one set of holes was coded to the alphabet, and one could
retrieve cards alphabetically by author simply by passing a needle through the rele-
vant holes. There was plenty of room for holes corresponding to different subjects,
time frames, etc. In the 1970s this author and several of his fellow graduate students
from a variety of disciplines used InDecks™ edge-notched cards for dissertation
research and for keeping track of a collection of 35 mm slides used for teaching.
Hundreds of 35 mm slides covering the history of technology were organized by the
name of a person associated with the slide, the era in which it flourished, the name
of the machine pictured in the slide, and finally, the classification of technology.
That last attribute followed the classification given by the Society for the History of
Technology (SHOT) and published annually with the Society’s “Current Bibliography
in the History of Technology.” In 1971 Stewart Brand praised edge-notched cards as
an aid to the production of the Last Whole Earth Catalog—for him it meant the dif-
ference “between partial and complete insanity” (Brand 1971, p. 320).

4.3  Zatocoding

What were the limits of edge-notched cards that caused their demise? One was the
physical limit of the edges in accommodating codes, even if those edges were a vast
improvement over ordinary 3 × 5″ note cards. One could not encode an arbitrary
number of subjects on a single card, much less do the almost-arbitrary searches that
one typically conducts with Google. Zatocoding attacked this limit. A more serious
drawback, which was the fatal weakness of all edge-notched systems, was that prior
to notching the cards, one had to construct an index of relevant attributes and sub-
jects. That could be a tedious effort and if done poorly could limit the usefulness of
the resulting system. Once a set of attributes was chosen, it was difficult to modify
or add to them based on progress in the research. Coding the author and title of a
document was straightforward, but what if, in the above example, the terms of the
“Current Bibliography in the History of Technology” were inappropriate to one’s
personal research? That bibliography, which this author employed in the 1980s, had
little mention of topics that are now popular in the history of technology, e.g., race,
gender, ethnicity, and third-world labor practices. It is also common during the
course of research that one’s approach to a topic will evolve. Kevin Kelly believed
that one reason for the cards’ demise was their tendency to become cumbersome
and unwieldy when applied to more complex searches. Casey’s and Perry’s exhaus-
tive study, published in 1951, gives several examples that unwittingly illustrate how
quickly the indexing schemes become unwieldy (Casey and Perry 1951).
“Zatocoding” overcame the first limit: the physical limit of the card. It did not
address the second limit: the need to construct an index prior to coding the cards. It
was, however, a step toward modern computerized indexing and retrieval schemes.
Based on his theoretic research, Mooers developed a coding scheme that allowed
one to index thousands of terms, far beyond what was implied by the physical limits
of a card. For every “descriptor” (a term coined by Mooers), a set of four two-digit
random numbers was generated, and those numbers, not the original attribute, were
notched onto the card. In one description of the process, the numbers were gener-
ated by a mechanical system similar to that used by state lottery offices to select
winning lottery numbers. In another it was implied that the numbers were chosen
from a table of random numbers. Mooers was writing in the early 1950s, and he
might have been aware of preliminary work done by the RAND Corporation to
generate such a table. In 1955 RAND published a seminal book, A Million Random
Digits with 100,000 Normal Deviates (RAND Corporation 1955), whose contents
may seem odd to a layperson, but which was well regarded and heavily used among
social and physical scientists.
Regardless of the source, Zatocoding used no computer, electronic or otherwise.
Zatocoded cards were notched at the bottom only; a mechanical device (Fig. 4.6)
lifted the cards that did not have the desired notches cut in them, and these were
removed by passing a needle through a hole in the top of each card.

Fig. 4.6  Zator retrieval device. The cards had holes punched only on the lower edge. (Photo:
Charles Babbage Institute)

Mooers called the Zatocoding a “superimposed” code, “initially generated by a
mechanical random process” (Mooers 1951, pp. 20–32). In this way, a single card could encode
many more attributes, subjects, or keywords, as a large number of them could be
encoded by a small number of codes. It was also more flexible in that it could handle
new attributes that might arise during a research project. In one description, Mooers
states that a Zatocard, with a row of 40 notching positions along its lower edge,
could encode up to 10,000 descriptors (Mooers 1951, p. 22).
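The essential trick, superimposing several descriptors' random codes on one notch pattern, can be sketched as follows. This is an illustrative reconstruction, not Mooers' actual procedure: descriptor names and parameters are invented, and a seeded pseudorandom generator stands in for his truly random mechanical source so the sketch is reproducible.

```python
import random

# A sketch of superimposed coding in the spirit of Zatocoding. Each descriptor
# is assigned a few random notch positions out of 40, fixed once in a codebook;
# a card's pattern is the union of its descriptors' codes, so far more
# descriptors than the 40 physical positions can be accommodated.

POSITIONS = 40
NOTCHES_PER_DESCRIPTOR = 4
rng = random.Random(1)  # reproducible stand-in for a truly random source

codebook = {}  # descriptor -> frozenset of notch positions

def code_for(descriptor):
    if descriptor not in codebook:
        codebook[descriptor] = frozenset(
            rng.sample(range(POSITIONS), NOTCHES_PER_DESCRIPTOR))
    return codebook[descriptor]

def notch_card(descriptors):
    """Superimpose the codes of all descriptors onto one notch pattern."""
    pattern = set()
    for d in descriptors:
        pattern |= code_for(d)
    return pattern

def select(cards, descriptor):
    """Return cards whose pattern contains every notch of the descriptor's
    code. True matches always appear; an occasional unrelated card whose
    superimposed pattern happens to cover the code -- a "false drop" -- may
    also appear."""
    needle = code_for(descriptor)
    return [name for name, pattern in cards if needle <= pattern]

# A card indexed under a descriptor is always retrieved by that descriptor.
cards = [("memo", notch_card(["chemistry", "patents"]))]
assert "memo" in select(cards, "chemistry")
```

The possibility of false drops is the price of superimposition; as discussed below, Mooers showed theoretically that they could be kept rare.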
Mooers argued that Zatocoding was far superior to the “Rapid Selector” pro-
posed by Vannevar Bush, who also noted the postwar exponential increase of sci-
entific information (Bush 1945). Bush proposed storing the information and
coding its authors, title, and subjects on photographic film. His proposal for what
he called a “Memex” and the preliminary design of his Rapid Selector have been
cited as the forerunner of today’s World Wide Web and its technique of hyperlink-
ing materials (Burke 1994). Mooers was correct: the Rapid Selector did not live
up to its promise, and it never had an ability to encode more than a few descrip-
tors. The concept of hypertext, as realized in the World Wide Web, had to await
advances not only in computer-­to-­computer communication but also in mass,
random-access disk storage and database software.
It was critical to the efficacy of Zatocoding that the codes thus generated be ran-
dom and that they have no relation to the item being indexed. This was based on
Mooers’ interpretation of Claude Shannon’s pioneering work on information theory
(Shannon and Weaver 1949). Mooers was also aware of, but ascribed less impor-
tance to, the analysis of randomness in Norbert Wiener’s pioneering work
Cybernetics (Wiener 1948). Wiener’s analysis of the random properties of Brownian
motion was a starting point to his lifetime of work, including the impetus for
Cybernetics (Heims 1980). Wiener and Shannon are typically cited as laying the
foundations of modern information and communications theories, although they
approached the topic from different directions. Mooers used their theories to argue
that random codes were the only way to map the descriptors onto the cards that took
maximum advantage of the theoretical capacity of edge notching (Mooers 1951).
His interpretation of those theories may have seemed odd to his contemporaries and
may have contributed to the limited adoption of the cards he marketed, although his
limited marketing skills were also a factor (Garfield 1997). Regardless of his adap-
tation of the concept, and regardless of its origins, randomness as a tool for science
became well-established in the 1950s, especially with the adoption of the “Monte
Carlo” method used by nuclear weapon designers—hence the popularity of the
RAND Corporation book mentioned above (Institute for Numerical Analysis 1949;
Ulam 1976, pp. 196–198). Beginning with the Apollo missions to the Moon in the
late 1960s, the notion of using pseudorandom numbers to encode radio transmis-
sions became an established method for not just space exploration but many modern
digital telecommunications systems: GPS, Wi-Fi, cell phones, Bluetooth, etc. These
codes allow multiple channels to be superimposed on a single radio frequency, just
as Zatocoding allowed the superimposition of codes on edge-notched cards.

4.4  Hashing

One still had to construct an index before using this system, regardless of the method
of coding. That was the fatal flaw of not only Zatocoding but of the other edge-­
notched card systems, many of which had been developed to a high degree of
sophistication. The descriptors did not automatically generate the codes. Automatic
generation may sound absurd, but ultimately this line of research led to systems that
did bypass the construction of indices.
Mooers developed this technique around 1947–1950. The Zator Company, which
Mooers founded to market the system, did not succeed. The reasons are complex
and beyond the scope of this study. By Mooers’ own admission, he was a poor busi-
nessman (Mooers 1992). Eugene Garfield was put off by Mooers’ insistence on
retaining intellectual property rights to the system—an issue that later resurfaced
when Mooers tried to protect his “TRAC” programming language (Garfield 1997).
His and other card systems are now gone. But the coding that Mooers pioneered and
analyzed has been cited as a forerunner of modern hash coding: one of the bedrock
techniques of modern data storage, data retrieval, and password encryption. It would
be historically inaccurate to credit Mooers with the invention of hash coding, but he
did make fundamental contributions to the theory.
In contrast to Zatocoding, hashing automatically generates a random code by
performing a function on the datum one wishes to encode, thus bypassing the
tedious, and for computers, memory-intensive process of constructing an index.
Donald Knuth, in his analysis of hash coding, examines the origin of the tech-
nique—and its somewhat whimsical name—in his Volume 3 of The Art of Computer
Programming (Knuth 1973, pp. 506–549). In this volume, he credits Mooers with a
significant contribution to the theory of such codes. Hash coding (no relation to
“hashtags”) has some overwhelming advantages over other methods of data
retrieval, such as allowing a computer to link a piece of data to a storage location in
memory without the prior generation of an index (Ralston and Reilly 1993,
pp. 1185–1191). The technique proved its worth as disk memories began to replace
sequential tape storage in large computer installations. Modern computer data
retrieval and storage systems rely heavily on hash coding. It is ironic that many
accounts of the impact of Norbert Wiener’s Cybernetics focus on famous
intellectuals like Margaret Mead and her then-husband Gregory Bateson, whose
interpretations of Wiener seem off-base today. Mooers, whose interpretation of Wiener was
less appreciated, made a modest but far more lasting impact on the modern
Information Age (Brand 1974).
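The core idea, hashing a key straight to a storage slot with no separately constructed index, can be shown in a few lines. This is a generic modern illustration, not any historical system; the toy hash function and the "chaining" used to resolve collisions are standard textbook devices.

```python
# A minimal sketch of hash-based storage: the key itself determines the slot,
# so no index is ever built or consulted. Collisions are handled by chaining
# (each slot holds a small list).

NUM_SLOTS = 8
slots = [[] for _ in range(NUM_SLOTS)]

def slot_for(key):
    # A toy hash function: sum of character codes, folded into the table size.
    return sum(ord(c) for c in key) % NUM_SLOTS

def store(key, value):
    slots[slot_for(key)].append((key, value))

def retrieve(key):
    for k, v in slots[slot_for(key)]:
        if k == key:
            return v
    return None

store("airplanes", "joke #1")
store("animals", "joke #2")
print(retrieve("airplanes"))  # → joke #1
```

With disk storage, the same computation turns a record's key into a disk address, which is why the technique proved its worth as random-access disks displaced sequential tape.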
Modern systems generate the hash code by an algorithm, not by a mechanical
random number generator. The algorithm is deterministic—for a given input, it will
generate the same number every time. The number is therefore not random but pseu-
dorandom. It is necessary to be able to replicate the random sequence, for example,
in the telecommunications applications mentioned above. Nevertheless, the
algorithm must generate numbers that pass a set of tests to determine how random
they are. That leads to deep theoretical issues about what constitutes a truly “ran-
dom” number (Knuth 1969). Mathematicians are familiar with a quotation from
John von Neumann, stated when he was advising researchers at the US atomic
weapon laboratories: “Anyone who considers arithmetical methods of producing
random digits is, of course, in a state of sin” (von Neumann 1949, p. 36). Finding an
algorithm that can generate a hash code efficiently and quickly and which satisfies
the criteria for randomness is not a simple matter. That was not an issue with
Zatocoding, as Zatocoding, at least in its early description, used truly random num-
bers generated by a non-deterministic, mechanical process.
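The determinism is easy to demonstrate with a modern hash function from Python's standard library (SHA-256 here is simply a convenient modern example, not anything Mooers used): the same input yields the same digest on every run and every machine, yet the output looks random, and a one-character change in the input produces an unrelated digest.

```python
import hashlib

# Deterministic: the same input always yields the same digest.
d1 = hashlib.sha256(b"Zatocoding").hexdigest()
d2 = hashlib.sha256(b"Zatocoding").hexdigest()
assert d1 == d2

# Pseudorandom-looking: a tiny change in the input scrambles the output.
d3 = hashlib.sha256(b"zatocoding").hexdigest()
assert d1 != d3

print(d1[:16])  # first 16 hex digits of the 256-bit digest
```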
A major issue with hashing, especially when codes are generated by a determin-
istic algorithm, is that a hash function will from time to time generate a collision, or
a “false drop”: the function may generate the same code from two distinct, unrelated
attributes. In an edge-notched card, that would mean that the wrong cards would
drop out when a needle is inserted. The problem may be illustrated by the
famous “birthday paradox.” Consider a database of information about a group of
people. The database keeps track of each person by a unique number, which points
to a storage location in a computer’s memory. A suggested hash function would
assign those numbers based on the month and day of each person’s birth: an easily
generated four-digit number that might seem unique, as a person has only one
birthday. However, in a group of as few as 23 people, there is a better-than-even
chance that 2 people will have the same birthday (Ball 1940,
pp. 45–46). Modern hash-coding algorithms are more
complex than this example, but collisions do occur, and when they do, additional
code must be written to resolve the ambiguity.
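The birthday figure can be checked directly. The short computation below (assuming 365 equally likely birthdays) multiplies out the probability that all birthdays are distinct and takes the complement:

```python
# Probability that, in a group of n people, at least two share a birthday,
# assuming 365 equally likely days.

def birthday_collision_prob(n, days=365):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

print(round(birthday_collision_prob(23), 3))  # → 0.507
```

For n = 23 the probability already exceeds one half, which is why a hash function must always be paired with some collision-resolution strategy.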
In his theoretical analysis of Zatocoding, Mooers was able to show that these
collisions could be held to a low and manageable level (Mooers 1947, pp. 14E–15E).
In other words, the prospect of false drops was not a fatal flaw. Mooers did not cite
the birthday paradox, but according to his theory, if one assigned a truly random
four-digit number to each person, one could encode hundreds of names with only a
few false drops. That theoretical work, independent of its implementation in Mooers’
edge-notched card system, would later turn out to be crucial in the spread of hash
coding for electronic digital computers.
After the dissolution of the Zator Company, Mooers abandoned Zatocoding,
although he continued to explore methods of automated text and data retrieval. He
founded another entity, the nonprofit Rockford Research Institute, and in the early
1960s, he developed a programming language called TRAC (“Text Reckoning and
Compiling”). Jean Sammet assessed it in her study of programming languages, and
she compared it favorably to other languages that dealt with text, including LISP,
IPL-V, and COMIT (Sammet 1969, pp. 448–454).

4.5  Conclusion

We return one final time to the topic of hash coding. For the pioneers of computing
software, hash coding was an efficient and effective way to store and retrieve data in
random-access memories. Since the advent of the Internet, it has taken on a new
significance. That is because a well-designed hash is a “one-way” function: given a
piece of information, one can develop a fast algorithm that will generate a
unique hash code. But given the hash code, it should be impossible to go backward
and determine the nature of the original datum. Hashing thus allows Internet service
providers to verify the authenticity of passwords as they are sent over the network.
With that one-way property, a person who intercepted the hash code could not use
that information to obtain the original password. Early generators of hash coding
did not always have that property, and indeed Knuth noted that for one technique, it
was an advantage that one could go from the code to the initial attribute (Knuth
1973). But other hash code generators did have this one-way property, and years
later that property would prove to be as useful as hashing’s ability to facilitate data
retrieval. As Internet service providers appeared, they found that by hashing a user’s
password, it was possible to secure the user’s account from unwanted intrusion.
This method was not foolproof, however, and depended on a careful design of the
hashing function to prevent various “brute force” and other attacks.
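A sketch of such a careful design follows. It uses PBKDF2 from Python's standard library as a stand-in for bcrypt-style functions (the actual functions used by any given provider differ): the hash is salted, so identical passwords yield different stored values, and deliberately slowed by iteration, which blunts brute-force attacks.

```python
import hashlib
import hmac
import os

# Sketch of server-side password handling with a salted, deliberately slow
# hash. Only the salt and the derived hash are stored -- never the password.

def hash_password(password, salt=None, iterations=100_000):
    salt = salt or os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, stored_digest, iterations=100_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time compare

salt, digest = hash_password("hunter2")
assert verify("hunter2", salt, digest)
assert not verify("hunter3", salt, digest)
```

An intercepted `digest` reveals nothing useful: going backward from it to the password would require trying candidate passwords one by one, and the iteration count makes each trial expensive.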
Potentially the most significant recent use of hashing is found at the heart of the
digital currency known as Bitcoin. In the words of Satoshi Nakamoto, the (pseud-
onymous) creator of the currency:
A timestamp server works by taking a hash of a block of items to be timestamped and
widely publishing the hash, such as in a newspaper or Usenet post. The timestamp proves
that the data must have existed at the time, obviously, in order to get into the hash.
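The timestamping idea can be sketched in a few lines; the real Bitcoin protocol is far more elaborate, and the item texts below are invented for illustration. Any later change to any item changes the published digest, so the digest proves the items existed in that form when it was published.

```python
import hashlib

# Sketch of timestamping by hashing a block of items and publishing the digest.

def block_hash(items):
    h = hashlib.sha256()
    for item in items:
        h.update(hashlib.sha256(item.encode()).digest())  # fold in each item's hash
    return h.hexdigest()

published = block_hash(["document A", "document B", "document C"])
tampered = block_hash(["document A", "document B (edited)", "document C"])
assert published != tampered  # any edit invalidates the published digest
```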

Once again, this use of hashing is far removed from its original purpose as a way to
retrieve data from computer memories.
Hashing is not perfect, even given the proof that collisions can be managed. For
example, in December 2016, users of Yahoo services were informed that their
passwords had been compromised and that, in response, Yahoo had implemented what
it described as a more secure hash function. In its message to users, Yahoo stated:
Law enforcement provided Yahoo in November 2016 with data files that a third party
claimed was Yahoo user data. … We believe an unauthorized third party, in August 2013,
stole data associated with more than one billion user accounts…the stolen user account
information may have included names, email addresses, telephone numbers, dates of birth,
hashed passwords (using MD5) and, in some cases, encrypted or unencrypted security
questions and answers.

Yahoo did not elaborate on the “MD5” hash function, although that function had
been known to have vulnerabilities. According to the Carnegie Mellon Software
Engineering Institute, MD5 is “cryptographically broken and unsuitable for further
use” (Carnegie-Mellon 2008). Yahoo stated that, in response to the intrusion, it
adopted a more secure hash function, “bcrypt.” But Yahoo did not give details about
bcrypt, either. Wikipedia has an extensive, highly technical description of bcrypt,
based on contributions to an online discussion forum.2
This discussion takes us far beyond the initial work of Mooers. It is also far
beyond the understanding of the millions of Yahoo subscribers who were affected.
But it does underscore how important that early theoretical work was to the devel-
opment of computing. None of this was foreseen by Mooers. This study of
Zatocoding looks at its theory, development, and failure not in terms of how the
computer and Internet evolved in subsequent decades but rather in the context of the
stored program electronic computers that were Zatocode’s contemporaries.
Zatocode competed against commercial business data processing machines such as
the UNIVAC I and the IBM 702. In that context, we see that during the pioneering
phase of electronic digital computing, which Professor Henry Tropp called the
“effervescent years,” the progress of digital electronics was by no means a
straightforward steamroller over classic punched card tabulators, the Kardex, analog
computers, and the lowly 3 × 5″ or edge-notched card (Tropp 1974). That story is
not just about how edge-notched card enthusiasts developed ad hoc techniques to
overcome the limits of a mechanical device; it is also about how the electronic digi-
tal computer, born as a device to “compute”—to do mathematics—evolved into a
device that stores and retrieves text, video, photographs, sound, and other forms of information.


Ball, W. W. 1940. Mathematical Recreations & Essays, 11th ed. London: MacMillan and Co.
Bachman, Charles. 1973. The Programmer as Navigator. Communications of the ACM 16/11: 653–658.
Bashe, Charles J., Lyle R. Johnson, John H. Palmer, and Emerson W. Pugh. 1986. IBM’s Early
Computers. Cambridge, MA: MIT Press.
Brand, Stewart, ed. 1971. The Last Whole Earth Catalog. Portola, CA, p. 320.
Brand, Stewart. 1974. II Cybernetic Frontiers. New York: Random House.
Burke, Colin. 1994. Information and Secrecy: Vannevar Bush, Ultra, and the other Memex.
Metuchen, NJ: Scarecrow Press.
Burke, Colin. 2002. It Wasn’t All Magic: the Early Struggle to Automate Cryptanalysis, 1930–
1960s. Fort Meade, MD: National Security Agency, Center for Cryptologic History.
Bush, Vannevar. 1945. “As We May Think.” Atlantic Monthly 176/1: 641–649.
Carnegie-Mellon University, Software Engineering Institute. 2008. “Vulnerability Notes Database,
VU 836068,” Accessed March 6, 2017

2  Wikipedia cites the original USENIX paper on the origins of bcrypt in 1999: https://www.usenix.
org/legacy/events/usenix99/provos/provos_html/node1.html. Accessed March 6, 2017.

Casey, Robert S. and James W. Perry, eds. 1951. Punched Cards: their Applications to Science and
Industry. New York: Reinhold Publishing.
Ceruzzi, Paul. 1998. A History of Modern Computing. Cambridge, MA: MIT Press.
Daly, R.P. 1956. “Integrated Data Processing with the Univac File Computer.” Proceedings
Western Joint Computer Conference.
Garfield, Eugene, 1997. “A Tribute to Calvin N. Mooers, A Pioneer of Information Retrieval,” The
Scientist 11/4 (March), accessed electronically December 20, 2016.
Hahn, Trudy Bellardo, and Michael Buckland. 1998. Historical Studies in Information
Science. Metuchen, NJ: American Society for Information Science.
Heide, Lars. 2009 Punched-Card Systems and the Early Information Explosion, 1880–1945.
Baltimore: Johns Hopkins University Press.
Heims, Steve J.  1980. John von Neumann and Norbert Wiener: From Mathematics to the
Technologies of Life and Death. Cambridge, MA: MIT Press.
Institute for Numerical Analysis. 1949. Monte Carlo Method: Proceedings of a Symposium Held
June 29, 30 and July 1, 1949, in Los Angeles, California (Washington, DC: US Government
Printing Office, 1951).
Kelly, Kevin. 2008. “One Dead Media.” The Technium, June 17. technium/one-dead-
media/. Accessed December 27, 2016.
Kline, Ron. 2015. The Cybernetics Moment. Baltimore: Johns Hopkins University Press.
Knuth, Donald. 1969. The Art of Computer Programming, vol. 2: Seminumerical Algorithms.
Reading, MA: Addison Wesley.
Knuth, Donald. 1973. The Art of Computer Programming, vol. 3: Sorting and Searching. Reading,
MA: Addison Wesley.
La Barre. 2002. “Weaving Webs of Significance: The Classification Research Study Group in the
United States and Canada.” Proceedings of the Second Annual Conference on the History and
Heritage of Scientific and Technical Information Systems: 246–257.
Library of Congress. 2018.
Mooers, Calvin. 1947. Proceedings of the American Chemical Society Meeting 112 (September).
Mooers, Calvin. 1951. “Zatocoding Applied to Mechanical Organization of Knowledge,” American
Documentation, 2: 20–32.
Mooers, Calvin. 1992. Private communication with the author.
Mooers, Calvin. 2003. “The Computer Project at the Naval Ordnance Laboratory.” IEEE Annals of
the History of Computing 23, no. 2 (April–June): 51–66.
Nakamoto, Satoshi. 2017. “Bitcoin: a Peer-to-Peer Electronic Cash System,”
bitcoin.pdf. accessed March 6, 2017.
Ralston, Anthony, and Edwin D. Reilly. 1993. Encyclopedia of Computer Science, Third Edition.
New York: Van Nostrand Reinhold, pp. 1185–1191.
RAND Corporation. 1955. A Million Random Digits with 100,000 Normal Deviates. http://www. Accessed February 28, 2017
Rayward, W. Boyd, and Mary Ellen Bowden. 2004. The History and Heritage of Scientific and
Technological Information Systems. Proceedings of the 2002 Conference. Medford, NJ:
Information Today, Inc.
Sammet, Jean. 1969. Programming Languages: History and Fundamentals. Englewood Cliffs,
NJ: Prentice-Hall.
Shannon, Claude, and Warren Weaver. 1949. A Mathematical Theory of Communication. Urbana,
Ill: University of Illinois Press.
Smithsonian Institution, National Museum of American History. 2017. “Help us transcribe Phyllis
Diller's jokes—and enjoy some laughs along the way!”
help-us-transcribe-phyllis-dillers-jokes. Accessed March 6, 2017.
Svenonius, Elaine. 2000. The Intellectual Foundation of Information Organization. Cambridge,
MA: MIT Press.
Tropp, Henry. 1974. “The Effervescent Years: A Retrospective.” IEEE Spectrum 11 (February):

Ulam, S. M. 1976. Adventures of a Mathematician. New York: Charles Scribner’s Sons.
Von Neumann, John. 1949. “Various Techniques Used in Connection with Random Digits,” in
Institute for Numerical Analysis, Monte Carlo Method: Proceedings of a Symposium Held
June 29, 30 and July 1, 1949, in Los Angeles, California (Washington, DC: US Government
Printing Office, 1951), pp. 36–38; quotation on p. 36.
Weik, Martin H. 1955. A Survey of Domestic Electronic Digital Computing Systems. Aberdeen,
MD: Ballistic Research Laboratories, Report No. 971 (December): 125–126.
Wiener, Norbert. 1948. Cybernetics. New York: John Wiley.
Chapter 5
Switching the Engineer’s Mind-Set
to Boolean: Applying Shannon’s Algebra
to Control Circuits and Digital Computing

Maarten Bullynck

Abstract  It belongs to the lore of computer science that Claude Shannon’s master’s
thesis (1937) revolutionized the design of (relay) switching circuit design. However,
as often is the case when taking a closer look at the historical records, things were
slightly more complex. Neither was Shannon’s paper an isolated result in switching
theory, nor was it immediately absorbed into the engineers’ daily practice. It proved
to be only usable in a small number of situations and had to be used in conjunction
with other techniques and the engineer’s know-how. Boolean algebra would only
become more important and more generally useful once standard situations were

In the Great Gallery of Heroes of the Information Age, where history is written with
capitals mostly, Claude Elwood Shannon figures alongside other luminaries such as
John von Neumann or Alan Turing. As the “father” of information theory, the math-
ematician and engineer Shannon, who spent most of his career at Bell Labs, has
become lionized and consecrated for both computer scientists and communications
engineers. As usual in such cases, selected episodes from Shannon’s biography have
been recast as telling anecdotes highlighting both Shannon’s “original” character and
his “seminal” contributions. Among those anecdotes that by now have been repro-
duced thousand times over, the one about his master’s thesis is a classic feature.
In 1937 Shannon, then a young student at MIT, successfully submitted his
master’s thesis; a year later it was published in the Transactions of the American
Institute for Electrical Engineers. The thesis, “A symbolic analysis of relay and
switching circuits,” has been hailed as perhaps the most important master’s thesis
ever written and is generally considered a classic milestone in the history of digital
computing. Its contributions to the design of switching (and computing) circuits are
frequently described as having transformed the art of circuit design into a science.

M. Bullynck (*)
Université Paris 8, Saint-Denis, France

© Springer Nature Switzerland AG 2019 87

T. Haigh (ed.), Exploring the Early Digital, History of Computing,
88 M. Bullynck

This often-recycled formula comes from H.H. Goldstine’s book on the history of the
computer,1 and, as with many of Goldstine’s other claims and stories, it simplifies
and heroifies to the point of misrepresenting historical facts.2 As is often the
case when taking a closer look at the historical records, things were slightly
more complex.
Switching elements had become crucial in electrical engineering in the early
twentieth century because they could be used to implement control of complicated
electrical systems such as telephone exchanges, industrial motor control, or railway
and subway operation. They would also increasingly be used in the digital calculat-
ing instruments built in the 1940s. Depending on the signals coming from the com-
plex system and their combination, the switching circuit would automatically
“decide” what course of action the system would follow. These switching circuits
had to be robust and were quite complex to design. The general problem of circuit
or network synthesis was that “given certain characteristics, it is required to find a
circuit incorporating these characteristics” (Shannon 1938, p. 713).
Until the 1940s electromagnetic relays were the key technology for switching;
later, the electronic vacuum tubes and transistors would replace them. Electromagnetic
relays have discrete behavior: they are either on or off, just like vacuum tubes later
could be engineered to flip between high and low voltage only. Shannon’s idea was
to formalize this discreteness as 0 for a closed relay and as 1 for an open relay. If
you combine two relays in sequence (serial circuit), only two closed relays (0 and
0) will give a closed circuit (0). If you combine two relays in parallel, only two open
relays (1 and 1) will result in an open circuit (1). This corresponds to a Boolean algebra that uses only the values 0 and 1 (with 1 + 1 = 1).3 Figure 5.1, taken from this paper,
illustrates this use of binary algebra.

Fig. 5.1  In his 1938 paper, Shannon showed how this binary algebra could be used to simplify
existing circuits. He demonstrated translations between circuit notations, logical expressions, and
binary arithmetic. Here, the left part illustrates how two relays in sequence can be interpreted as a
Boolean addition, while the right shows how two relays in parallel can be interpreted as a Boolean multiplication
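The hindrance convention described above can be checked mechanically. The following sketch is my own illustration, not Shannon’s notation; the function names are invented:

```python
# A minimal sketch of Shannon's 1938 "hindrance" convention: 0 stands for a
# closed relay (conducting), 1 for an open relay (blocking), and 1 + 1 = 1.

def series(x, y):
    """Two relays in sequence: the path is open (1) if either relay is open.
    Shannon writes this combination as a Boolean addition."""
    return 1 if (x + y) >= 1 else 0

def parallel(x, y):
    """Two relays in parallel: the path is open (1) only if both are open.
    Shannon writes this combination as a Boolean multiplication."""
    return x * y

# Exhaustive check of the two tables illustrated in Fig. 5.1:
for x in (0, 1):
    for y in (0, 1):
        print(f"x={x} y={y}  series={series(x, y)}  parallel={parallel(x, y)}")
```

Note that `series` saturates at 1, which is exactly the rule 1 + 1 = 1 of this particular Boolean algebra.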

1. “The paper was a landmark in that it helped to change digital circuit design from an art to a science” (Goldstine 1972, p. 120).
2. One of the major biases is the heroifying of John von Neumann, to the point of neglecting or underplaying contributions by others (especially in the context of ENIAC or the EDVAC design). But the most incorrect statement in the book may very well be that “the state of numerical mathematics stayed pretty much the same as Gauss left it until World War II” (p. 287); for this alone the author deserves to be haunted by the ghosts of all the mathematicians who worked on numerical mathematics for the military, for industry, or for government well before World War II.
3. Today, most authors do not use this particular Boolean algebra but rather its dual: a closed relay corresponds to 1 and an open one to 0, with a serial circuit corresponding to multiplication and a parallel circuit to addition (1 + 1 = 1 now describing two relays in parallel). Hartree 1949 was the first to suggest using this algebra.
5  Switching the Engineer’s Mind-Set to Boolean: Applying Shannon’s Algebra… 89

Though Shannon’s paper was important in eventually establishing a theoretical basis for designing switching circuits, it was neither that revolutionary nor immediately effective in practice at the moment it was published. First, Shannon’s paper
was not an isolated result in switching theory. On the contrary, it was rather a chapter in a longer process, started in the 1920s, in which (US) electrical engineering had been moving from a practical art to a scientific practice aided by
mathematics. People, including G.A.  Campbell, had been promoting the use of
advanced mathematical techniques like Fourier analysis for electrical engineering
since the early twentieth century. Other techniques such as matrices, quadratic
forms, and Heaviside’s operational calculus also slowly found their way into the curricula during the 1930s and 1940s. While classically most of these mathematical
tools were borrowed from analysis, the synthesis of large switching circuits using
electromechanical relays called for techniques from other mathematical domains.
For a discrete technology, elements of discrete mathematics were needed; hence
algebra, graph theory, and algebraic logic were mostly deployed. As large relay
networks became more common in the 1920s, a number of researchers in the USA
(Shannon, Aiken), in Japan (Nakasima and Hanzawa), in Germany and Austria
(Piesch, Plechl, Zuse), and in Russia (Shestakoff) had proposed formalisms and
techniques usable in the analysis and synthesis of relay circuits.4 Shannon’s formalism for relay circuits was thus but one among many mathematical formats.

Second, Shannon’s technique did not immediately become part of the engineers’
daily knowledge and practice. It apparently took more than a decade before Shannon’s method became better known to a larger group of engineers and was put into practice in a more general way. This delay was not the result of poor communication – Shannon’s article was published in a well-known and respected journal, and similar results had been found and published elsewhere. Rather, it was due
to a communication gap between the engineer and the mathematician, as well as the
difficulties of applying Shannon’s technique in actual practice. Indeed, engineers
had to be made familiar with Boolean algebra and other algebraic techniques. As
J.L. Martin, who wrote a thesis on circuit logic in 1948, noticed: “Symbolic logic is
a subject that is unfamiliar to most persons in industry” (Martin 1948, p. 4). And
even two decades after Shannon’s paper, Caldwell still wrote in his manual that
“most engineers are not familiar with the ideograms of symbolic logic, [s]ome of
these look to them more like the brands that identify western cattle ranches than
mathematical symbols” (Caldwell 1958, p. xi). This gap slowly closed with the
inclusion of logical design methods in the engineer’s curriculum in the late 1950s.
The other main hindrance becomes clear if one looks at the concrete problems the
circuit engineers were facing. Shannon’s Boolean algebra proved to be only usable
in a small number of situations and had to be used in conjunction with other techniques and the engineer’s knowledge. Boolean algebra would only become more important and more generally useful once standard situations in engineering networks of relays, and later of vacuum tubes and transistors, were created in the 1940s, although the friction between theory and practice would never really go away.

4. For a rich documentation of early publications in algebraic switching theory, see the second part of (Stankovic and Astola 2011).

This paper focuses on the difficulties encountered in the application of Shannon’s
methods in practice, mostly at Bell Labs and at MIT, and shows how it slowly
became a standard technique around 1950, although it only hit the handbooks and
the curricula one decade later.5 It tells of the frictions existing between theory and
practice and also of how the representation of circuits as Boolean equations was
promoted by a number of mathematicians as a useful simplification and pedagogic
tool for talking about the functional design of digital machines.

5.1  Ingenuity: Bringing Theory to Practice (1938–1951)

One main practical problem was that Shannon’s algebra could only be applied to a
restricted set of switching elements – essentially relays without memory (also known as combinational circuits). Relay circuits with memory (sequential circuits), as well as many other circuit elements such as flip-flops and rotary switches, could not easily be formalized using Shannon’s symbolism, let alone continuous
circuitry such as resistor-capacitor networks, wave filters, nonlinear oscillators, etc.
These had to be left “bracketed” in the circuit diagrams as black boxes whose behavior (input and output) was known, but not their internal machinery. Some even
tried, with little success, to extend the formalism or to add to the mathematical
idiom, using algebraic numbers or matrices, proposing an “algebra, which is a com-
bination of the algebra of fields (algebra of complex numbers, to represent the net-
work elements) and Boolean algebra (to represent the switches)” (Shekel 1953,
p. 913).
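The distinction can be sketched in code (an illustration of mine, not a period formalism): a combinational circuit is a pure function of its present inputs, so Shannon’s algebra describes it directly, while a sequential element such as a latch carries internal state that the 1938 symbolism could not express.

```python
def combinational(a, b, c):
    # Memoryless: fully described by one Boolean expression, f = (a AND b) OR (NOT c).
    return (a and b) or (not c)

class SRLatch:
    """A set-reset latch: its output depends on the *history* of its inputs,
    not just their present values, so no single Boolean expression captures it."""
    def __init__(self):
        self.q = False

    def step(self, set_, reset):
        if set_ and not reset:
            self.q = True
        elif reset and not set_:
            self.q = False
        # with set_ == reset == False the latch simply holds its state
        return self.q

latch = SRLatch()
print(latch.step(True, False))   # set -> True
print(latch.step(False, False))  # hold -> still True
```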
Another problem was that Boolean algebra was useful to minimize the number
of contacts once a network realizing a function had been found, but it could not be used to find a realization of a given function in the first place, nor could it be
practically used when the network was too large. Already in 1940, Shannon had
pointed this out:
Although all equivalent circuits representing a given function f…can be found with the aid of Boolean Algebra, the most economical circuit in any of the above senses will often not be of this type….The difficulty springs from the large number of essentially different networks available and more particularly from the lack of a simple mathematical idiom for representing these circuits. (Shannon 1949, p. 65).6

5. Note that this paper focuses on US reception, or more correctly, on only a small part of it, mostly using materials from Bell Labs, MIT, and Harvard. It nonetheless gives the general drift of Shannon’s reception in the USA. A more ambitious story (work in progress) would include not only a selection of East Coast sites but also other players in the field, such as IBM, and West Coast developments. Another restriction is that the sources used are mainly published papers, complemented with technical reports and patents. No use was made of Bell engineers’ notebooks or similar documents, which did not circulate.

Or as Samuel Caldwell remarked in 1953 during a discussion: “Switching algebra could be used directly for the simplification of contact networks,” but “the
situation was different with respect to the synthesis of a network” (Karnaugh 1953,
p. 598). Indeed, “design tools for the synthesis of the building blocks [are] outside
the province of Boolean algebra” (Washburn 1953, p. 380), and “when problems of
any magnitude were attempted, the method broke down both because of the dif-
ficulty of writing word statements and because of the difficulty of converting bulky
word statements into algebraic expressions” (Karnaugh 1953, p. 598). Hence, when
“one attempted to use the method, there arose a peculiar sort of frustration”
(Karnaugh 1953, p. 598).
It was only through the experience gathered during the production of large
(relay-based) translation circuits in a telephone system and during the development
of digital calculating machines at Harvard and at Bell that Shannon’s ideas became
integrated into a more systematic framework that connected theory with practice.
In these contexts, a number of practical tools, both algebraic and diagrammatic,
were developed that could be used in conjunction with Boolean algebra to dispel
some of its complexities. Aiken’s team developed the switching functions and the
minimization chart. Switching functions group a configuration of switches, say two switches u and v, and regard them as a mathematical function f(u,v). The minimization chart then puts all the positions of a circuit’s switches in a chart, much like a truth table in logic. Both would later be adapted and perfected at Bell Labs. Karnaugh turned the minimization chart into a form that is now known as the Karnaugh
map (Karnaugh 1953). Hohn and Schissler provided an algebra for manipulating
Boolean connection matrices that generalized the switching functions (Hohn and
Schissler 1955). This general development of moving more theory into practice, based upon the systematization of accumulated experience, is also witnessed by two textbooks that would become classic references. The first is a textbook by Keister et al. (1951), based on a course given by Bell engineers for MIT students from 1949 to 1950; the second was written by H. Aiken (1951), summarizing the Harvard courses of 1947 to 1948.
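The minimization chart mentioned above can be illustrated with a small sketch (my reconstruction in modern terms; Aiken’s actual chart layout differed): every position of the switches is tabulated, much like a truth table.

```python
from itertools import product

def minimization_chart(f, names):
    """Tabulate a switching function over every combination of switch positions."""
    rows = [" ".join(names) + " | f"]
    for values in product((0, 1), repeat=len(names)):
        rows.append(" ".join(str(v) for v in values) + " | " + str(f(*values)))
    return rows

# Example switching function of two switches, f(u, v) = u OR v
# (two contacts in parallel, in the modern convention where 1 = closed):
for line in minimization_chart(lambda u, v: u | v, ("u", "v")):
    print(line)
```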
The latter book is the outcome of a 1947–1948 course on large-scale computing
machines at Harvard (Aiken 1951), based upon their years-long experience with
building the Mark I and Mark II relay calculators and especially the Mark III and
Mark IV vacuum tube computers. Instead of Shannon’s Boolean algebra, Aiken opts
for another formalism, an algebraic approach. Essentially, Aiken uses Boolean alge-
bra but works with multivalued Boolean functions which he calls “switching func-
tions.” These functions can be interpreted on three levels: as a vacuum tube operator,
as a symbolic circuit, and as a schematic circuit. Aiken then goes on to define much-used switching functions that stand for larger components, viz., triodes, pentodes, etc. In other words, instead of working with relays as basic elements, Aiken develops an algebraic calculus for vacuum tube modules. The book gives practical, usable circuits for a variety of standard units, which are useful in the synthesis of computing units.7 Aiken’s work was quite influential, both on his students and, through the translation of the book into Russian in 1954, abroad.8

6. The main results of the paper date from 1940, but the paper was only published after the war, in 1949.
The former book on the design of switching circuits was written by three Bell
engineers and sums up years of practical experience, especially in designing the No.
5 Crossbar Dial Telephone Switching System. This crossbar system was a further
development of automating telephone switching, replacing the human operators by
automatic control. As they write in their preface: “switching art … was limited to a
few quarters where complex control mechanisms such as telephone switching sys-
tems were developed and used,” but now (1951) this has changed because of the
“appearance of automatic digital computing systems” (Keister et al. 1951, p. vii).
Their book adequately describes the standard switching elements and the techniques
to analyze and synthesize them, including Shannon’s method. While manipulating
and simplifying networks by “inspection” “may become tedious and time-consuming,” switching algebra offers “an extremely useful design tool for setting up, simplifying and combining complex networks” (Keister et al. 1951, p. 68).
However, “in its present forms it takes no account of time or sequential relation-
ships” (Keister et al. 1951, p. 68). A similar opinion is voiced by Staehler who had
also worked on the crossbar system: “switching (boolean) algebra in its present state
is not to be considered entirely self-sufficient, but, for the most beneficial results,
should be applied, when warranted, in conjunction with inspection techniques so
that the latter may fill in any limitations in the algebra techniques which have not
been completely systematized as yet due to the newness of this field” (Staehler
1952, p. 282).
These quotations from Bell Labs refer to a classic topic in engineering discourse:
the need for experience to supplement theory. This experience is denoted by the word
“inspection,” described by Staehler as “a background of considerable experience in
that the designer must recognize certain contact network arrangements that may
allow further rearrangements and thereby he must mentally develop his own rules”
(Staehler 1952, p. 281). For, “as with any tool, satisfactory results depend upon the judgment, ingenuity and logical reasoning of the user.” The word “ingenuity”
has a long history and has literally contributed to shaping the profession of “engi-
neer,” viz., the one who has ingenuity (see Vérin 1993). Its appearance in engineer-
ing literature, quite frequently even after the Second World War, points quite
precisely to the non-mechanizable aspect of the engineer’s job, viz., the engineer’s
“ingenuity” is aided but never replaced by design tools. Indeed, as Mealy put it, the
“engineer must make a judicious selection of his design tools and, most likely, must
invent methods and diagrammatic devices which fit the particular problem at hand”
(Mealy 1955, p. 1078).

7. The team of Stibitz and Williams, who developed the Bell relay calculators, proceeded in a similar way, developing basic units for their calculators, but they never published on the topic.
8. See (Stankovic et al. 2016) for its reception in the Soviet Union.

5.2  Getting out of the Black Box (1950–1960)

5.2.1  The Engineers’ Building Blocks

Around 1950 the relevance of Shannon’s papers in practice was still limited both by
the physical properties of the switching elements and the engineering day-to-day
reality and practice of combining circuits in a robust way in a particular context. As
Washburn noted, “However, the end result of this process [of using Boolean alge-
bra] in the present case is a circuit composed of more or less idealized elements.
Considerable ingenuity may be necessary to replace these elements with their phys-
ical equivalents” (Washburn 1953). More and more, however, the technique of
Boolean algebra proved to combine well with a classic engineering design approach,
functional block diagrams. While the block diagram displayed the general architec-
ture of the circuit or of the machine, its individual blocks had to be specified more
concretely. If these blocks were not too complex and used digital technology (relays,
vacuum tubes, or higher “functions” such as flip-flops or triodes), they could be
analyzed down to minimal components using Boolean algebra.
In the Bell Labs book on circuit design, this functional block diagram is described
as follows:
From a statement of circuit requirements, a functional plan is developed in terms of known
or conceptually evident circuit blocks, representing simple circuits similar to single-­
function circuits… as the design proceeds, the functional blocks are coordinated and inte-
grated to the point where a comprehensive block diagram of the proposed circuit exists. ...
The most satisfactory approach to developing a block diagram is to start with a few main
subdivisions of the over-all circuit and successively break these down until each block rep-
resents a unifunctional circuit….In a surprisingly large number of cases in the planning,
familiar functional circuits are found to be applicable. When a new circuit concept is encoun-
tered, the designer can usually recognize whether an appropriate circuit can readily be
designed. If this is so, the circuit can be designated on the diagram and the design deferred
until later….the attempt should be made to obtain the simplest and most efficient arrange-
ment among the various blocks….the designer should from the start make a conscious
effort to familiarize himself with different types of basic circuits already in use and to clas-
sify them in terms of function. In this way he develops a constantly growing ‘catalogue’ of
circuit building blocks which expedites his planning and design of circuits. (Keister et al.
1951, p. 497)

Building complex technology out of standard building blocks that could be easily
manufactured was an essential part of the Bell System design and development
philosophy (as it was of other companies). Shannon’s switching algebra could prove most useful for the economical design of standard building blocks that would be used frequently. The algebra can be used “to achieve the ultimate in efficiency and econ-
omy in that the number of relays used therein approaches the absolute minimum
necessary.” Indeed, as another Bell researcher wrote, “Better methods for synthesiz-
ing any imaginable function whatsoever will be of little help in practice…instead
one must try to isolate classes of useful switching functions which are easy to build”
(Gilbert 1951, p. 669). Instead of attacking general questions such as whether a function can be realized or not (which Shannon described as “somewhat like proving a number transcendental”), a more pragmatic attitude was taken.
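The building-block philosophy described above can be sketched in code (a modern illustration of mine, not Bell Labs material): each catalogued block is treated purely through its Boolean function, and larger units are composed from smaller ones rather than designed from first principles.

```python
# Catalogue entries: each block is just a Boolean function of its inputs.
def half_adder(a, b):
    return a ^ b, a & b            # (sum, carry)

def full_adder(a, b, cin):
    # Composed from two catalogued half adders and an OR gate.
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2             # (sum, carry-out)

def ripple_adder(xs, ys):
    """Add two little-endian bit lists by chaining catalogued full adders."""
    carry, out = 0, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

print(ripple_adder([1, 1, 0], [1, 0, 1]))  # 3 + 5 = 8 -> [0, 0, 0, 1]
```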
One important feature in the block diagram approach, as described by Keister,
Ritchie, and Washburn, was the use of “black boxes” for units that were as yet
unspecified. This permitted the designer to begin with the overall structure and fill in the gaps later. It was also useful if one had to abstract away from a specific unit (say, a continuous unit), so as to consider only its functioning within a network. The method had its merits for visualizing the general structure but had its limits when it came to making the connections between the boxes more concrete. As one technical report remarked, “You cannot calculate with block diagrams” (Jeffrey and Reed 1952). Since not every block that was left as a black box in the diagram was simple enough for Boolean manipulation to be applied to it straightaway, people at Bell Labs started looking at how they could split a complex problem up into simpler parts. This led to a quite general and sometimes abstract study of the problem of decomposition of a given circuit (or machine, or automaton).
In 1940, Shannon had raised the general problem:
In general, the more we can decompose a synthesis problem into a combination of simple
problems, the simpler the final circuits. The significant point here is that, due to the fact that
f satisfies a certain functional relation we can find a simple circuit for it compared to the
average function of the same number of variables. […] functional separability is often eas-
ily detected in the circuit requirements and can always be used to reduce the limits on the
numbers of elements required. We will now show that most functions are not functionally
separable. (Shannon 1949, p. 90)

In 1954, E.F. Moore, who closely collaborated with Shannon in the early 1950s, put
the problem in a more general form in his famous paper “Gedankenexperiments on
sequential machines” (1954, published 1956):
The machines under consideration are always just what are called ‘black boxes’, described
in terms of their inputs and outputs, but no internal construction information can be
gained….Suppose an engineer has gone far enough in the design of some machine intended
as a part of a digital computer, telephone central office, automatic elevator control etc. to
have described his machine in terms of the list of states and transitions between them…he
may then wish to perform some gedanken-experiments on his intended machine. If he can
find e.g. that there is no experimental way of distinguishing his design from some machine
with fewer states, he might as well build the simpler machine. (Moore 1956, p. 132)

This leads, in general, to the problem of decomposition of a given circuit (or machine) into components:

Many problems exist in relation to the question of decomposition [of a machine] into
component [machines]. Given a machine with n states, under what conditions can it be
represented as a combination of two machines having n1 and n2 states such that n1n2 = n […]
One way of describing what engineers do in designing actual automata is to say that they
start with an overall description of a machine and break it down successively into smaller
and smaller machines, until the individual relays or vacuum tubes are ultimately reached.
The efficiency of such a method might be determined by a theoretical investigation on such
decompositions. (Moore 1956, p. 153)
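Moore’s gedanken-experiment can be made concrete with a toy sketch (my reconstruction; the machine and its state names are invented for illustration): a Moore machine is a black box observed only through its outputs, and if no input sequence distinguishes two states, the engineer “might as well build the simpler machine” with those states merged.

```python
from itertools import product

def outputs(delta, out, state, seq):
    """Feed an input sequence to a Moore machine, collecting its outputs."""
    trace = [out[state]]
    for sym in seq:
        state = delta[(state, sym)]
        trace.append(out[state])
    return trace

def distinguishable(delta, out, s1, s2, alphabet, depth):
    """Is there an experiment (input sequence up to `depth`) telling s1 from s2?"""
    return any(outputs(delta, out, s1, seq) != outputs(delta, out, s2, seq)
               for n in range(depth + 1)
               for seq in product(alphabet, repeat=n))

# A 3-state machine in which states "A" and "B" behave identically:
delta = {("A", 0): "A", ("A", 1): "C",
         ("B", 0): "B", ("B", 1): "C",
         ("C", 0): "A", ("C", 1): "C"}
out = {"A": 0, "B": 0, "C": 1}

print(distinguishable(delta, out, "A", "B", (0, 1), 4))  # -> False: 2 states suffice
print(distinguishable(delta, out, "A", "C", (0, 1), 4))  # -> True
```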

Moore’s work, together with Huffman’s and Mealy’s, would lead to techniques to finally attack sequential circuits using Boolean algebra, a desideratum since the 1940s.9 “Recent developments in the synthesis of sequential circuits show that the
end result of a sequential synthesis is a combinational problem” (Caldwell 1954).
Mealy warns, however, that “the place of formal methods…in the everyday practice
of synthesis is smaller than might appear at first glance”; in practice, the theory
furnishes the engineer with “only generalized methods of attack on synthesis
together with a small handful of particularized tools for design.” However, the the-
ory does provide “a unified way of thinking about circuits during their design”
(Mealy 1955, p. 1078).

5.3  The Mathematicians’ Diagrammatic Notation

While the engineers had their block diagrams to design machines, mathematicians
had been using functional diagrams for talking about digital electronic computers since the First Draft of a Report on the EDVAC was written by John von Neumann in 1945. It is
well-known that von Neumann, in simplifying the existing designs of computing
machines, took inspiration from McCulloch and Pitts’ “A logical calculus of the ideas immanent in nervous activity.” Pitts had used Carnap’s variant of Whitehead and
Russell’s formalism for propositional logic to represent networks of neurons and
also introduced graph-like diagrams. Von Neumann now carried the idea of representing neurons as all-or-nothing devices over to the basic computing elements, using these formalisms and diagrams.
Upon reading the EDVAC report, the English mathematician Douglas R. Hartree
immediately saw how von Neumann’s use of diagrams could be translated into
Boolean algebra (Hartree 1945). This algebraic form was both more economical notation-wise and directly amenable to computation. In his 1949 book on com-
puting instruments and machines, Hartree summed up the advantages of the EDVAC
formalism and proceeded to reword it into Boolean algebra:
The analysis of the operation of a machine using two-indication elements and signals can
conveniently be expressed in terms of a diagrammatic notation introduced, in this context,
by von Neumann and extended by Turing. ... [it] is a functional rather than a structural analysis. ... The designer and the maintenance engineer will need diagrams more closely representative of the actual physical hardware. But for the user, who is more concerned with what it does than the means by which it does it, analysis into these functional elements seems to me more illuminating and easier
to follow than detailed or block circuit diagrams. There is a form of algebra which is very
suitable for expressing relationships between a set of connected elements […] this is
Boolean algebra. (Hartree 1949, pp. 97–99)

9. The work of Moore, Mealy, and Huffman also sparked a new rapprochement between circuit design and information theory, especially coding theory. This constitutes another chapter in the history of (theoretical) circuit design, where automata theory, information theory, and circuit design meet. M. Mahoney’s work on the emergence of theoretical computer science provides an angle on that particular chapter, but his studies are based on theoretical publications, not on the actual practices.

Through Hartree’s 1949 book, the algebraic notation reached a wider audience and
was taken up by many authors.10
In this particular context, Boolean algebra is a way to formalize logical
relationships between (idealized) elements. In contrast to propositional logic, its symbolism allows one to compute with its elements. However, the Boolean algebra in this case cannot be instrumentalized to analyze the circuits further or to find minimal
or equivalent arrangements. As Lebow noted in a similar context, “The term logical
design does not get down to the detailed logical configuration. Obviously, a lot of
work is necessary in going from the transfer level to the actual circuit details”
(Dinneen et al. 1958, p. 94). However, this logical and functional notation helps to
focus on the structural relationships between standard units and elements. These
functional diagrams would prove to be useful pedagogic tools, a practical formalism for designing the overall structure of computers, and a framework for more theoretical purposes, e.g., under the name of “automata.”
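What “computing with its elements” buys can be shown with a small sketch (my illustration, not from the chapter): two different functional descriptions of the same unit are proved equivalent by brute force over their truth tables, something a diagram alone cannot do.

```python
from itertools import product

def equivalent(f, g, n):
    """Check whether two n-input Boolean functions agree on every input."""
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=n))

# Carry-out of a binary full adder, written two ways:
carry_sop = lambda a, b, c: (a & b) | (a & c) | (b & c)   # sum-of-products form
carry_maj = lambda a, b, c: int(a + b + c >= 2)           # "majority" description

print(equivalent(carry_sop, carry_maj, 3))  # -> True
```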
Another figure who was important in popularizing the use of logic for talking
about serial digital computers was Edmund C. Berkeley (Sugimoto 2010). He was
an early reader of Shannon’s paper and introduced the algebra of logic as a formal lan-
guage to describe both the operations and the instructions of the simple model com-
puter “Simple Simon” in his well-known book Giant Brains (1949). Both his
position in the ACM and his role as popularizer helped in making the Boolean logic
and Shannon’s use of it better known. Through his correspondence with Alonzo Church, the driving force behind both the Association and the Journal for Symbolic Logic, Berkeley was also one of the first to communicate the relationship between Boolean logic and switching circuits to the community of (mathematical) logicians.
Another early communication link was provided by the students of Howard Aiken at Harvard, such as Theodore Kalin (who took classes with Willard V. Quine,11 also at Harvard), who embarked upon making logical machines in the late 1940s. Kalin regarded
Boolean logic as “a notation which refers both to switches on the machine and to a
syntax about Statements,” viz., referring to both operations and instructions of the
machine (ALS Meeting 1948). When Berkeley and Kalin spoke at the 10th meeting of the Association for Symbolic Logic (1948), this may have acquainted many logicians with this new application of Boolean logic. Later, the application of
Boolean logic to describe relay circuits would often become included in logic text-
books as an exercise completely stripped of its engineering intricacies, of course.

10. Including von Neumann himself, who used the notation in his lectures on probabilistic logic and the synthesis of reliable circuits from unreliable components.
11. Quine would later contribute an important result in logic directly useful for simplifying the Boolean functions used in circuit design (Quine 1952).

5.4  Theory and Practice Revisited

After the mid-1950s, owing both to the unified treatment of combinational and
sequential circuits and to the increased usage of the technique in the design of actual
computers, Boolean algebra did indeed become a standard technique for designing
switching circuits and would be included in nearly every textbook from that moment
onward. Exactly two decades after Shannon’s paper was published, Caldwell wrote
the following in the introduction of his classic book on switching theory from 1958:
Many of the world’s existing switching circuits were designed and built without the aid of
any appreciable body of switching-circuit theory. The men who designed them developed
and organized their experience into creative skills of high order without benefit of theory.
Why, then, is it important to have a theoretical basis for designing switching circuits?…
Shannon’s work presented an opportunity to supplement skill with methods based on sci-
ence, and thus to increase the productivity of circuit designers. Of greater importance
though, was the possibility of developing a science which would promote deep understand-
ing and nourish creative imagination. (Caldwell 1958, pp. vii–viii).

Caldwell’s vision of introducing algebra at all levels of circuit design is based on the fact that “algebra needs no physical support, … this understanding … unifies the subsequent study of switching circuits in which many kinds of physical
components are used” (Caldwell 1958, p. ix). This echoes Mealy’s prediction from
1955: that Boolean algebra could become the theoretical framework that unified the
treatment of switching circuits, although it did not exhaust the more practical and
physical intricacies of engineering a circuit. Therefore, Caldwell tried to steer “a course which avoids the extremes of strict utility and pure mathematics” (Caldwell
1958, p. xii).
This theorization of switching theory was not unanimously welcomed in the field.
In a review for the Journal of Symbolic Logic, Edward F. Moore accused Caldwell
of “extreme bias against the common and practical”:
Caldwell [included] problems whose truth-tables are ‘chosen at random’ while omitting
other circuits whose only fault is that they happen to be generally useful. Caldwell’s index
omits binary adders, buffers, lockout circuits, and shift registers. Attempting to write a logi-
cal design textbook which doesn’t treat any of these is almost as extreme as attempting to
write an English grammar which doesn’t deal with the verbs ‘to be’ or ‘to have.’ (Moore
1958, p. 434).

This goes to show that even after Shannon’s work was integrated into everyday
practice and into the engineer’s curriculum, the friction between theory and practice
did not disappear; rather, it was there to stay.
But much more than for designing circuits, Boolean algebra proved to be a
convenient vehicle for representing digital computing units. The approach pioneered
independently by Aiken and by Hartree in the late 1940s, viz., to use Boolean alge-
bra as a shorthand for functional units such as a binary adder or a flip-flop, was quite
convincing, both pedagogically and notationally. While Aiken’s formalism permitted
one to go down a level, to that of the physical vacuum tubes, Hartree’s formalism
functioned more as a black box. This same ambiguity, a formalism that can either
98 M. Bullynck

remain opaque or can be made transparent when adding extra layers, is still encoun-
tered today. For instance, in a modern textbook on microchip design, it is argued
that a “system can be more easily understood at the top level by viewing compo-
nents as black boxes with well-defined interfaces and functions rather than looking
at each individual transistor” but that a hierarchy of domains, from the behavioral
over the structural to the physical, should be kept equivalent throughout the design
process if one wants to pass from the abstract to the physical and vice versa (Weste and
Harris 2011, pp. 31–32). And if indeed the design of a NAND gate is determined
more by the physical properties of the semiconducting substratum than by switch-
ing logic (Weste and Harris 2011, p. 27), once synthesized, the NAND can be con-
veniently written down as a Boolean function. As a Boolean function, it can more
easily be integrated into a larger-scale unit that will be amenable to formal analysis.
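The move described here, from physical gate to composable Boolean function, can be illustrated with a minimal sketch of my own (not drawn from the period sources): a NAND abstracted into a Boolean function is composed into a half adder and then verified exhaustively against its arithmetic specification.

```python
# A minimal sketch: once a NAND gate is abstracted into a Boolean
# function, larger units can be built by composition and checked
# formally, here by exhaustive truth-table enumeration.
from itertools import product

def nand(a: bool, b: bool) -> bool:
    """The gate as a pure Boolean function; its physics is hidden."""
    return not (a and b)

# NAND is functionally complete: compose it into a half adder.
def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    n1 = nand(a, b)
    total = nand(nand(a, n1), nand(b, n1))  # XOR built from four NANDs
    carry = nand(n1, n1)                    # AND built from two NANDs
    return total, carry

# Formal check: the composed unit matches its arithmetic specification
# on every input, something infeasible to argue at the tube level.
for a, b in product([False, True], repeat=2):
    s, c = half_adder(a, b)
    assert int(c) * 2 + int(s) == int(a) + int(b)
print("half adder verified on all inputs")
```

The point of the sketch is exactly the one made in the text: the physical substrate that determines the NAND’s design disappears from view, and only the composable digital behavior remains.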
In the process of appropriating Boolean logic, the original intention of Shannon in
introducing this calculus, viz., as a technique for minimizing the elements of a given
switching circuit, increasingly gets lost. Instead, the Boolean algebra is used to
represent and arrange elementary and basic switching circuits and to formalize the
connections between them. As such, it functions as a “theory” for switching circuits.
It hides, with success, the physical intricacies of a circuit and makes it easier to view
the circuit as an abstract, purely digital unit. In a way unintended by Shannon him-
self, Boolean algebra became contaminated through its encounter with logic dia-
grams to form a mathematical “theory” for doing logical design for digital circuits.
It has been argued that Shannon’s information theory not only does a brilliant
job of mathematically unifying a large class of problems in communications engi-
neering but also hides its practical, military origins in antiaircraft control, radar
communications, and secret speech telephony (Roch 2010). In a somewhat similar
way, Boolean logic in its modern appropriation does a good job at hiding the engi-
neering details of computing circuits, foregrounding the digital switching behavior
and its interconnections. For each Great Hero of the Information Age, there is a
Great Concept that is a toy model formalizing and simplifying the complexities of
computing. Von Neumann has his architecture, Turing has his machine, and Shannon
juggles two: Boolean logic in one hand and information in the other.


References

Aiken, Howard H. (ed.). 1951. Synthesis of Electronic Computing and Control Circuits. Cambridge,
Mass.: Harvard University Press.
Association for Symbolic Logic. 1948. Tenth Meeting. Journal of Symbolic Logic, vol. 13 (1).
Caldwell, Samuel H. 1954. “The recognition and identification of symmetric switching functions”.
Transactions of the AIEE, vol. 73, 142–146.
Caldwell, Samuel H. 1958. Switching circuits and logical design. Cambridge, Mass.: MIT Press.
5  Switching the Engineer’s Mind-Set to Boolean: Applying Shannon’s Algebra… 99

Dinneen, G.P., Lebow, I.L. and Reed, I.S. 1958. “The Logical Design of CG 24.” AFIPS proceedings
Fall 1958, vol. 14, pp. 91–94.
Gilbert, E.N. 1951. “N-terminal switching circuits”. Bell System Technical Journal, vol. 30,
pp. 668–688, 1951.
Goldstine, Herman H. 1972. The computer, from Pascal to von Neumann. Cambridge, Mass.: MIT Press.
Letter from D.R. Hartree to H.H. Goldstine about “various points in the EDVAC report”, August
24, 1945. This letter was made available to me by Thomas Haigh and Crispin Rope.
Hartree, Douglas R. 1949. Calculating instruments and machines. Urbana: University of Illinois Press.
Hohn, Franz E. and Schissler, Robert L. 1955. “Boolean Matrices and the Design of Combinational
Relay Switching Circuits”. Bell System Technical Journal, Vol. 34 (1), 177–202.
Jeffrey, R.C. and Reed, I.S. 1952. The use of Boolean algebra in logical design. Engineering Note
E-458-2, Digital Computer Laboratory, MIT, Cambridge, Massachusetts, April 1952.
Karnaugh, M. 1953. “The map method for synthesis of combinational logic circuits.” Transactions
of the AIEE, vol. 72, 593–599.
Keister, W.; Ritchie, A. and Washburn, S. 1951. The Design of Switching Circuits. New  York:
D. Van Nostrand Co., Inc.
Martin, J.L. 1948. Solution of Relay and Switching Circuits through Symbolic Analysis. Master’s
thesis, Georgia School of Technology.
Mealy, George H. 1955. “A Method for Synthesizing Sequential Circuits.” Bell System Technical
Journal, Vol. 34 (5), 1045–1081.
Moore, Edward F. 1956. “Gedanken-experiments on sequential machines.” In Automata Studies,
edited by C.E. Shannon and J. McCarthy, Princeton: Princeton University Press, 129–153.
Moore, Edward F. 1958. “Review of Caldwell, Switching circuits and logical design.” Journal of
Symbolic Logic, 23 (4), 433–434.
Quine, Willard V. 1952. “The problem of simplifying truth functions.” American Mathematical
Monthly, vol. 59, 521–531.
Roch, Axel. 2010. Claude E. Shannon. Spielzeug, Leben und die geheime Geschichte seiner
Theorie der Information. Berlin: gegenstalt.
Shannon, Claude E. 1938. “A symbolic analysis of relay and switching circuits.” Transactions of
the American Institute of Electrical Engineers, Vol. 57, 713–723.
Shannon, Claude E. 1949. “The Synthesis of Two Terminal Switching Circuits.” Bell System
Technical Journal, Vol. 28 (1), 59–98.
Shekel, Jacob. 1953. “Sketch for an Algebra of Switchable Networks.” Proceedings of the IRE,
41 (7), 913–921.
Staehler, R.E. 1952. “An Application of Boolean Algebra to Switching Circuit Design.” Bell
System Technical Journal, Vol. 31 (2), 280–305.
Stankovic, Radomir S., Astola, Jaakko (eds.). 2011. From Boolean Logic to Switching Circuits and
Automata. Towards Modern Information Technology. New York, Berlin: Springer.
Stankovic, Radomir S.; Astola, Jaakko; Shalyto, Anatoly A.; and Strukov, Alexander V. (eds.).
2016. Reprints from the Early Days of Information Sciences, Early Work in Switching Theory
and Logic Design in USSR.  Tampere International Center for Signal Processing, Tampere
University of Technology, Tampere, Finland, TICSP no. 66.
Sugimoto, Mai. 2010. “Making Computers Logical: Edmund Berkeley’s promotion of logical
machines.” SIGCIS 2010 Workshop, Work in progress.
Vérin, H. 1993. La gloire des ingénieurs. L’intelligence technique du XVIe au XVIIIe siècle. Paris:
Albin Michel.
Washburn, S.H. 1953. “An application of Boolean algebra to the design of electronic switching
circuits.” Transactions of the AIEE, vol. 72, 380–388.
Weste, Neil and Harris, David, 2011. CMOS VLSI Design: A Circuits and Systems Perspective
(4th Edition). Boston (MA): Addison-Wesley.
Chapter 6
The ENIAC Display: Insignia of a Digital Praxeology

Tristan Thielmann

Abstract  This paper argues that digital computing is sustainably organized by the
distributed agency of visual displays. By a praxeological analysis based on Harold
Garfinkel’s “net-work theory,” it can be shown that the operability of the ENIAC –
the first programmable digital general-purpose computer  – is based on three
properties that are characteristic of computing today: the nonrepresentational,
public, and discrete nature of computer screens. This means that something can be
read off the display that wasn’t originally intended. The digital display is
characterized by an administrative practice (of registry) and not solely by the form
of visualization. In this case, light dots do not yet represent digital image signs as
we understand them today – as an arbitrary allocation of signifier and signified.
The single point of light does not exhibit a dissimilar but a strictly coupled
coordinative relationship to its reference object. The ENIAC display targets the
comprehensible representation of digit positions instead of the readability of digits.
Its purpose is not the semantic interpretation of primary information; its importance
is constituted at the level of secondary information, through which a praxeological
path structure is revealed.
Three praxeological characteristics indicate that the technical constitution of the
first computer display is designed for structural pragmatic incorporation of a human
counterpart, while, at the same time, the scope of action is being restricted: (1)
ENIAC’s computer program is defined by the fact that it controls the tasks that must
be completed and simultaneously rejects the human-readable semantic representa-
tion of interim results. The idea of electronic digital computers lies in the deletion
of human-interoperable intermediaries. (2) The public demonstration of the ENIAC
exhibited (a) that digital computing entails a sequence of operations and a distrib-
uted calculation, (b) that the data visualization on a light display is faster than on
paper, and (c) that all data are simultaneously visible on distributed displays in the
process of their computing. (3) The ENIAC display therefore constitutes a decoupling
of the calculation process and its integral representation. Since the ENIAC, we
have been dealing with analytical images that display situations that are not
immediately visible.

T. Thielmann (*)
University of Siegen, Siegen, Germany

6.1  Introduction

[T]here are good reasons for believing that the trend will be toward more flexible and
smaller automatic digital machines—electronic devices, quite general and all-purpose—
and away from mechanical machines and specific analogy devices. (History of Development
of Computing Devices 1946)1

From a media studies perspective, the history of technology always constitutes
itself retrospectively. A media historiography is constantly searching for alliances
and structural principles, an intellectual and epistemic history of “longue durée.” It
is therefore no surprise that, particularly in relation to the cultural infiltration of
digital images, the question arises as to whether a media specificity of digital dis-
plays exists that can be traced back to the very origin of digital computing.
Screen interactions are currently one of the central digital phenomena. From a
technological perspective, the origins of the representation of digitality are
undoubtedly to be found in the first fully functional digital electronic computer, the
ENIAC. However, there is also another genealogy from a media studies perspective.
In his “dromology,” the media theorist Paul Virilio describes how the history of
technology, physics, metaphysics, and military strategy are interwoven as a
“logistics of perception” (Virilio and Lotringer 1997). Herein, speed is the primary
force shaping society and technology. Virilio talks especially of the fusion between
communication technologies and weapons and sees a primum movens of media history
in the theory and practice of the projectile:
[T]he projectile’s image and the image’s projectile form a single composite. In its tasks of
detection and acquisition, pursuit and destruction, the projectile is an image or ‘signature’
on a screen, and the television picture is an ultrasonic projectile propagated at the speed of
light. (Curtis 2006, 144)

The initial condition for a common history of media and socio-technical practice
therefore inevitably includes the calculation of trajectories for firing and bombing
tables, as this is what the first digital computing machine was primarily developed
for. An in-depth look at how the visual means of representing arithmetic operations
changed with the advent of the computer should therefore be helpful to reevaluate
the media-historical and media-theoretical importance of displays  – of how their
media-specific properties can be determined.

 I am indebted to Thomas Haigh and Mark Priestley for their numerous suggestions and comments
on an earlier version of this paper. I wholeheartedly thank Thomas Haigh for providing me with the
ENIAC reports and press materials.

To this end, the ENIAC display is investigated from a praxeological perspective.
Such a practice theory must be assessed on the basis of “the practical procedures
being given precedence over all other explanatory parameters” (Schüttpelz 2015,
153). Under reference to Harold Garfinkel, we could also formulate this as follows:
“Praxeology seeks to formulate statements of method, and to extend their generality,
seeking as wide a domain of applicability as possible” (Garfinkel 1956, 191). The
aim of a practice theory of the display must be to unveil the methods of the medium
or  – in order to describe it along the line of Garfinkel’s Sociological Theory of
Information  – must have the capacity to describe how a situation is structured
through a socio-technical and communicative net-work (Garfinkel 2008). Given the
diversity and multiplicity of displays, this chapter therefore pursues the question of
what socio-technical properties are exhibited phenomenologically by the ENIAC
display. What constitutes its specific media characteristics, which distinguish it
from all other forms of electronic monitors and screens?

6.2  The Enigmatic Nature of the Display

There is no more apposite dispositif that can be used to describe the massive computer
production of the 1940’s in the USA than the growing flood of calculations for ‘firing
tables’ for all sorts of projectiles and missiles […]. (Hagen 1997, 41)

When the USA entered the Second World War, it was faced with the problem
that numerous firing tables were required for artillery, antiaircraft defense, and
bombing. In 1942, the Ballistic Research Laboratory in Aberdeen Proving Ground
recruited almost 200 “human computers” to calculate the trajectories  – almost
exclusively women (Light 1999, 463f.; Shurkin 1996, 126).
These women were capable of working out the position of a projectile to a
precision of one tenth (in some cases even one hundredth) of a second, taking
gravity, velocity, atmospheric disturbance, etc. into consideration. The female
computers derived the vertex of the trajectory and the point at which the target would
be reached from the completed calculation forms. In a next step, this information
was then used to compile firing tables.
The calculation of several thousand trajectories was required for the compilation
of a single firing table, which was accomplished with pencil and paper and the aid
of slide rules, electric table calculators, and, occasionally, a differential analyzer
that was capable of processing multiple differential equations simultaneously.
[I]t required 750 multiplications to calculate a single trajectory, taking ten to twenty minutes
for the differential analyzer to do just the multiplication, and twelve hours for a human with
a calculator. A typical firing table for one gun contained 2,000 to 4,000 individual
trajectories! It required thirty days for the analyzer, if it was working properly, to finish one
such table, and no gun could be put out in the field until the artillerymen were provided with
just such a table. (Shurkin 1996, 130f.)

Up to 4000 calculation sheets that had been completed by hand were required as the
basis for the compilation of a firing table. The intensive exploitation of human
computers and the growing backlog of uncompleted calculations for ballistic tables
prompted physicist John Mauchly to produce a memorandum in August 1942 that
can be referred to as an early draft for the Electronic Numerical Integrator and
Computer (ENIAC). John Mauchly and J. Presper Eckert are regarded as the inventors of
the first programmable digital general-purpose computer (Campbell-Kelly
et al. 2014, 70–73). Even though Mauchly’s memo initially appears to describe the
advantages of the electronic computer,2 it is primarily a document that is entirely
rooted in the procedures of human computing (Akera 2007, 85).
In this memo, Mauchly devises the idea of developing a computer program that
allows the transferal of operations from one machine to another without the interim
results having to be put down on paper. Within the framework of this concept, a
computer program is defined by the fact that it controls the tasks that must be
completed and simultaneously rejects the human-readable semantic representation
of interim results, i.e., by leaving these to reside within vacuum tubes:
[T]he design of the electronic computor allows easy interconnection of a number of simple
component devices and provides for a cycle of operations which will yield a step by step
solution of any difference equation within its scope. The result of one calculation, such as a
single multiplication, is immediately available for further operation in any way which is
dictated by the equations governing the problem, and these numbers can be transferred
from one component to another as required, without the necessity of copying them manu-
ally on to paper or from one keyboard to another as is the case when step by step solutions
are performed with ordinary calculating machines. If ones desires to visualize the mechani-
cal analogy, he must conceive of a large number, say twenty or thirty calculating machines,
each capable of handling at least ten-digit numbers and all interconnected by mechanical
devices which see to it that the numerical result from an operation in one machine is prop-
erly transferred to some other machine, which is selected by a suitable program device […].
(Mauchly 1982 [1942], 356)

The task of a computer program is to release the processes from the requirement of
copying results on to paper. The idea of the electronic digital computer lies in the
deletion of human-readable and semantically processable intermediaries – making
the interim result excess to requirements.
The problem at that time was the fact that manually writing down the numbers
took as much time as the calculation itself. The time saved by using a table
calculator was partially nullified by the requirement for manually writing down the
interim steps. One of the central ideas in the development of the electronic computer
was therefore the elimination of the required interim steps in the coordination of
slide rules, electric table calculators, and the differential analyzer.
Adele Goldstine, who helped to program the ENIAC between 1943 and 1946,
provides what is probably an early manual and therefore also an early praxeological
description of a computer in her “Report on the ENIAC” in 1946, in which she gives
a detailed description of the nonrepresentational character of the intermediaries.

 In the offprint of the memorandum, the term “computer” that was originally used by Mauchly was
replaced by the less common word “computor” (see Mauchly 1982 [1942]). In computer science,
there is the convention that the term “computor” is used, within the meaning of its use by Alan
Turing, for an “idealized human computing agent” and “computer” is used for electronic calculators
or also for the actual Turing machine (see Soare 1999, 3).

What is initially informative in this process is the fact that the term “digital” does
not appear in the technical descriptions of the first digital general-purpose computer
(Ceruzzi 1983, 108). Goldstine focuses on the representational plane of “digits” in
her Technical Manual (Goldstine 1946, Chapter I, 4–6).
The central components in the ENIAC were around 18,000 electron tubes
(triodes), pairs of which were interconnected to form a bistable “flip-flop.” Ten flip-flops
formed a decadic circulating register that was capable of adding numbers and
storing them. An accumulator was composed of ten such horizontally arranged ring
counters, each representing one digit in a ten-digit number. The digital element
of the ENIAC resides in digits being stored in the flip-flops.
On the front of the accumulator, 102 neon-filled glow lamps, one corresponding to
each flip-flop, were mounted above eye level, from which the currently stored
number could be read off (Figs. 6.1, 6.2, and 6.3). Two neon bulbs signified the
positive or negative sign; the remaining 10 × 10 glow lamps represented, read
vertically, the ten digit values and, read horizontally, the ten places of a ten-digit
number. When the ENIAC carried out a calculation, the neon bulbs positioned in
the square field lit up in constantly changing patterns.
These neons provide one of the most important visual checks on the operation of the
ENIAC. In addition to the continuous mode of operation at the 100 kc rate, the ENIAC has
2 special modes of operation, 1 addition time and 1 pulse time operation, which permit the
operator without disturbing the flip-flop memory, to stop the ENIAC at some point to exam-
ine the neons and, thus, to determine whether or not the proper sequence of events is taking
place. (Goldstine 1946, Chapter I, 13)
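The lamp arrangement described above (one column of ten neons per decade counter, with exactly one lamp lit at the row matching the stored digit, plus sign lamps) can be sketched in a modern simulation. The code is my own illustration, not anything from the period; the sample value is the one stored in the left-hand accumulator of Fig. 6.2:

```python
def render_accumulator(value: int) -> str:
    """Render a signed ten-digit number the way an accumulator panel
    shows it: rows are the digit values 0-9, columns the ten decades,
    'o' a lit neon and '.' a dark one."""
    sign = "+" if value >= 0 else "-"
    digits = [int(d) for d in f"{abs(value):010d}"]  # one digit per decade counter
    rows = [f"sign: {sign}"]
    for digit_value in range(10):
        rows.append(f"{digit_value} " +
                    "".join("o" if d == digit_value else "." for d in digits))
    return "\n".join(rows)

print(render_accumulator(4236093905))
```

Reading a column top to bottom locates its single lit lamp, and the row gives that decade’s digit value: the panel registers digit positions rather than printing digits, which is precisely the distinction the chapter draws.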

Fig. 6.1  Corner view of the ENIAC accumulators 11–18 in the Moore School. (US Army Photo)

Fig. 6.2  ENIAC accumulators with displays of 10 × 10 neon bulbs. The addition is in the process of
being completed. The left-hand accumulator stores +4236093905, and the carryover neons in the
second and sixth counters are lighted. (US Army Photo)

Fig. 6.3  Detailed view of an ENIAC display. (Courtesy of Thomas Haigh)

The processing power could be reduced to allow a number to be read from the
pattern of lights, either by slowing the electronic clock or by switching to a single-stepping
mode. When used this way, the computation rate of the ENIAC was thus
inherently connected to the ability of the computer user to perceive syntactical
information (Ceruzzi 1983, 125). While the radar display is designed for
complementarity between acquisition and representation, the first computer display
constitutes a structural coupling of processing power and representation of the
calculation. The technical constitution of the digital display is designed for the
structural incorporation of a human operator as a pragmatic entity (Morris 1977).
The glow lamps played a role in debugging hardware, as well as in displaying
numbers. When a neon bulb did not light up, this indicated that an electron tube in
the flip-flop circuit had burnt out. The vacuum tubes were mounted on the back of
the accumulator, so they were visible. This allowed easy replacement of defective
tubes. The display comes into play in that it denotes a surface field onto which the
locations of replacement in the background are mapped.
On the front face of each accumulator panel […] are to be found 10 columns of 10 neon
bulbs apiece. There is a 1 – 1 correspondence between the neon bulbs in a column and the
digits 0 to 9 and hence between the neon bulbs and the 10 stages of the corresponding
decade counter. (The ENIAC. Progress Report Covering Work from July 1 to December 31,
1944, 2)

The ENIAC display, 10 × 10 light dots in dimension, therefore not only represents
a numerical value – as a nonparticipating observer might assume – but rather the
different digit positions. The neons serve the purpose of enabling a visual registration
of the logical state of a flip-flop. Elsewhere they were described with the remark
that the neon lamps primarily “register the number stored in the accumulator” (The
ENIAC.  A Report Covering Work until December 31, 1943, IX (7)). The digital
display is thus characterized by an administrative practice (of registry) and not
solely by the form of visualization.
In this case, light dots do not yet represent digital image signs as we understand
them today – as an arbitrary allocation of signifier and signified. The single
point of light does not exhibit a dissimilar but a strictly coupled coordinative
relationship to its reference object.
The first digital computer display targets the comprehensible representation of
digit positions instead of the readability of digits. Its purpose is not the semantic
interpretation of primary information; rather, its importance is constituted at
the level of secondary information, through which the technical path structure is
revealed (Thielmann 2013, 402–405). This analytical separation, introduced by
Garfinkel in 1952 for the analysis of information infrastructures, is still visible in
the display itself. However, this changes with its entry into the public sphere.

6.3  The Public Nature of the Display

The first public presentation of the ENIAC in February 1946 resulted in changes to
the display.

Fig. 6.4  John Mauchly in front of the ENIAC display. (Fox Movietone News from 1946)

In planning their public demonstration in 1946, it occurred to Pres Eckert and the rest of the
ENIAC team to place translucent spheres—ping-pong balls cut in half—over the neon bulbs
that displayed the values of each of ENIAC’s twenty accumulators. Ever since, the flashing
lights of computers, often called electronic or giant ‘Brains’ in the early years, have been
part of the scene involving computers and science fiction. (Winegrad and Akera 1996, 7)

The ENIAC achieved its popularity as a “giant brain” mainly thanks to these plastic
covers which, in combination with the visible cabling, at least partially gave the
computer an organic aesthetic form and adaptive design (Fig. 6.4). Since the displays
distributed around the room were installed above eye level, they had the effect
of an intentional allusion to ideas sparking in a computer brain.
Having said this, in a program broadcast by Fox Movietone News in 1946, it is
not possible to see that all lamp matrices for the 20 accumulators had actually been
covered with ping-pong balls to increase their brightness. The news program simply
shows two 10 × 10 quasi-pixel displays, fixed in parallel, at a height at which it was
possible to include the inventor on the broadcast image. However, the human and
the display never faced each other in such a way during the computing practice. The
dispositive structure had only been produced in this form to create publicity.
The public demonstration event constituted an interaction arrangement in which
the operator and the display encounter each other at eye level. This interactional
relationship was an illusion right from the start, which, however, was to shape the
style of displays in action. The same applies to the ping-pong balls that had been cut
in half and placed over the neon bulbs on the ENIAC accumulators. They became a
formative element in style for data visualizations for subsequent generations of
computers and guided the dialogue over decades on how computing in practice was
imagined to be:
ENIAC’s accumulators ended up as nine-foot-tall monsters jam-packed with so many tubes
that from a distance their back sides looked like the points of a Jumbotron giant television
screen. The tubes were socketed in a mesh of wiring that caused lights to blink on the out-
side panel. The lights were put on the outside to make it easy to keep track of which circuits
were functioning; undoubtedly, this feature led Hollywood filmmakers to believe that com-
puters must have blinking lights to be working. (McCartney 1999, 74)

This form of visualization was designed such that widely separated values are also
optically depicted as widely separated and that increasing/decreasing values also
rise and drop visually. In the trajectory calculations, the individual variables of the
partial differential equation were each depicted on one of the arrayed 10 × 10 neon-bulb
panels, and all the displays taken together thus represented the different relevant
parameters in a mathematical equation. The ENIAC displays therefore cause a
decoupling of the calculation process and its representation. This makes clear what
actually constitutes the digital in the first digital computer:
The designers of the ENIAC speak of it as a ‘digital’ or ‘discrete variable’ computing
machine, as opposed to the ‘continuous variable’ type of machine, of which the differential
analyzer is an outstanding example. (Ordnance Department Develops All-Electronic
Calculation Machine 1946, 2)

The public presentation of the ENIAC already formed part of the discourse on the
general technological development to “more flexible and smaller automatic digital
machines,” as is made clear in the citation at the start of this chapter. This may
initially appear surprising, but the War Department was by all means also interested
in emphasizing to the general public the manifold fields of application in civilian
life. For example, a press release on the demonstration of the ENIAC on February
1, 1946, states:
Although the machine was originally developed to compute lengthy and complicated firing
and bombing tables for vital ordnance equipment, it will solve equally complex peacetime
problems such as nuclear physics, aerodynamics and scientific weather predictions.
(Ordnance Department Develops All-Electronic Calculation Machine 1946, 1)

A detailed list of the “Basic Items for Publicity Concerning ENIAC” was produced
as early as December 1945. The 11 points, of which only the first 6 are
illustrated in part here, are found both in the detailed press releases and in the pub-
lished media reports.
The ENIAC (Electronic Numerical Integrator and Computer) is a large-scale electronic
general-purpose computing machine of which the following can be said:

1. It is the fastest computing machine in the world and is 500 times faster than any existing
machine. (Confidential note: It is about 1000 times as fast as the Harvard University –
IBM computer which received much publicity about a year ago.)
2. It is the most intricate and complex electronic device in the world requiring for its
operation 18,000 electronic tubes (compare 10 tubes in an average radio, 400  in the
largest radar set, less than 1000 in a B-29).
3. It will perform more than one million additions or subtractions of ten-figure numbers in
five minutes, or, if used to complete capacity, more than ten million.
4. It can carry out in one hour more than one million multiplications of one ten-figure
number by another, producing a twenty-figure answer each time.
5. It is a general-purpose machine which can handle almost any mathematical problem
from the simplest to the most abstruse.
6. It automatically transfers numbers from one part of the machine to the other, so that only
final results of an extensive problem need be printed. The full advantage of the high-
speed electronic computation is thus gained. For example, in saying that an addition can
be performed in 200 microseconds (200 millionths-of-a-second), there is included not only
time to perform an addition but also time to transfer the result to another place in the
machine where that result can be used.
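Items 3 and 6 of the list above are mutually consistent, as a quick arithmetic check of my own shows:

```python
# At 200 microseconds per ten-figure addition (item 6), including the
# transfer of the result, five minutes of continuous running give:
add_time_us = 200
five_minutes_us = 5 * 60 * 1_000_000
additions = five_minutes_us // add_time_us
print(additions)  # 1500000, i.e. "more than one million additions" (item 3)
```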

The decisive aspects that are outlined here and intended for the general public are
therefore that this is (a) a very fast and (b) intricate and complex computer for (c)
solving abstract problems that is (d) capable of transferring data automatically.
These four factors constitute the publicly effective media specificity of the
ENIAC. The flashing neon lights underline precisely the public image that the
actors involved wished to convey of the ENIAC: the neons flashed at a lightning
speed that was hardly possible to follow, yet simultaneously represented the
programmed sequence and the abstract course of action. Moreover, the hitherto
largest computer display implicitly sends
the message that the “largest scientific computing group in the world” obviously
also has the “most modern, large-scale computing device” (Ibid., 2).
In a statement, Henry Herbert, the publicity manager of the University of
Pennsylvania, described the reasons for the selection of the different calculations
that were performed in the demonstration of the ENIAC on February 1, 1946.
According to this, the operation of addition was demonstrated first. Such a simple
operation was chosen because Arthur Burks wanted to show "that the
numerical solutions of even the most complicated partial differential equation can
be obtained by means of sequences of the simple operations of addition, subtraction,
multiplication, and division" (Herbert 1946, 1). Secondly, the ENIAC also
produced a table of the squares and cubes of the numbers from 1 to 100:
When the square and cube of each number was computed the ENIAC stopped and the
answer was punched on cards. In this manner 100 cards were punched, each containing a
number, its square, and its cube. […] This problem required one minute, but during most of
this time the ENIAC was lying idle while the answers were being punched on cards. To
show how fast the ENIAC computed the squares and cubes from 1 to 100 the problem was
repeated without taking the time required for punching. The problem was finished in 1/10
second – so fast, in fact, that some who blinked didn’t see it. (Herbert 1946, 2)

The demonstration of the ENIAC was therefore intended, above all, to make two
things clear: (a) that digital computing entails a sequence of operations and a
distributed calculation and (b) that the data visualization on a light display is faster
than on paper. These two aspects have remained central to the media specificity of
digital computing to this day.
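Burks's point that even elaborate results reduce to sequences of simple operations can be illustrated with the squares-and-cubes problem itself: using the method of finite differences, every table entry follows from the previous one by additions alone, with no multiplication at all. The fragment below is a modern illustrative sketch; the function name and its default value are ours, not part of the historical record.

```python
# Compute the table of squares and cubes of 1..100 using only addition,
# via finite differences: squares have a constant second difference of 2,
# cubes a constant third difference of 6.
def squares_and_cubes(limit=100):
    table = []
    sq, dsq = 0, 1           # square of 0, and its first difference
    cu, dcu, ddcu = 0, 1, 6  # cube of 0, its first and second differences
    for n in range(1, limit + 1):
        sq += dsq            # n^2 obtained from the previous square
        dsq += 2
        cu += dcu            # n^3 obtained from the previous cube
        dcu += ddcu
        ddcu += 6
        table.append((n, sq, cu))
    return table

table = squares_and_cubes(100)
# table[9] → (10, 100, 1000)
```

Each new square is reached by adding the next odd number, and each new cube by accumulating running differences: exactly the kind of addition-only scheme that Burks's remark about reducing complicated solutions to elementary operations describes.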
However, from a practical and theoretical perspective, this display exclusively
served the purpose of appresentation – a co-envisioning of the flip-flops and electron
tubes that were located on the back of the ENIAC accumulators where they were
fixed in place and not visible to the observer. The term “appresentation” is attributed
to Edmund Husserl, who coined it in his Cartesian Meditations in 1929. In this
Introduction to Phenomenology, Husserl describes appresentation as “a kind of
making ‘co-present’ […]. An appresentation occurs even in external experience,
since the strictly seen front of a physical thing always and necessarily appresents a
rear aspect and prescribes for it a more or less determinate content” (Husserl 1960
[1929], 109). The display on the front of the ENIAC units is, therefore, by necessity,
a co-envisioning of the vacuum tubes (the digital storage medium) on the back that
predetermines their image.
6  The ENIAC Display: Insignia of a Digital Praxeology 111

Not every non-originary making-present can do that. A non-originary making-present can
do it only in combination with an originary presentation, an itself-giving proper; and only
as demanded by the originary presentation can it have the character of appresentation some-
what as, in the case of experiencing a physical thing, what is there perceptually motivates
‘belief in’ something else being there too. (Ibid., 109f.)

The entanglement with the hidden side of the computer results in current
manifestations on the display always being transcended. From a noematic
perspective, the back and forth of the numbers lighting up appears to contain more
information than is immediately visible. The display raises expectations of the
entire object with its as yet undetermined reverse and internal views – whereby a
change of location or penetration and amalgamation with the operations room can
always also result in a “primeval presence [Urpräsenz].” Appresentations are
possible correlates of a future perception of the thing.
The ENIAC therefore already symbolizes the Janus-faced character of the digital,
which remains the insignia of digital praxeology to this day: at the material level of
the thing “computer,” the display raises expectations of the entire object; at the
electronic level of the process “computing,” the display can only render visible that
which corresponds to the operative occurrences. “Neon lights in each accumulator
displayed the entire contents of the electronic memory” (Haigh et al. 2016, 254).
This means that no data remain hidden. All data are simultaneously visible in the
process of their processing. The digital display archetype is thus an image that
always only represents parts of a whole, the wholeness of which is simultaneously,
however, never fully perceptible. It creates a copresence through the attribution of
that which is spatially present in situ and, at the same time, absent.
From a practice-theoretical perspective, the displays, as shaped by the first digital
computer, are characterized by a distributed perception of the thing through the
phenomenological entanglement (a) between the front and back and (b) between
sequential displays. This structural coupling forms part of the practical materiality
of ENIAC’s data visualization. A display appresents, even if it is now no longer the
case that this is associated with the practice of access to the back of the display (to
exchange the vacuum tubes) as was the case for the ENIAC. This appresentation
only gains a different import today, for example, when Knorr Cetina sees a central
cause for the “estranged” workings of financial markets in the media-specific
properties of the display. “Screen appresentations” are thus the starting condition
for the distributed agency of actors (Knorr Cetina 2012, 182, 2014).

6.4  The Discrete Nature of the Display

The fact that, right from the start, the display represented something additional, and
at the same time only partially, can be recognized from descriptions of what was
perceived on it. What was actually visible on the ENIAC display?
Barkley Fritz, who worked on the ENIAC as a programmer and numerical analyst
from 1948 to 1955 and who demonstrated the ENIAC to the US President Harry
Truman in 1951, stated in a discussion at the 50th anniversary of the ENIAC in 1996:

Many […] have seen ENIAC, and have seen it in operation—the way the lights kept flashing
in the accumulators and how one could see how the results were going. You could see a
trajectory moving. (Barkley Fritz, cited in Bergin 2000, 150)

In the opinion of Harry Reed, who had worked at the Ballistic Research Laboratory
since 1950 and later became Chief of its Ballistic Modeling Division, the
ENIAC display had a further dimension:
The ENIAC itself, strangely, was a very personal computer. Now we think of a personal
computer as one which you carry around with you. The ENIAC was actually one that you
kind of lived inside. And as Barkley [Fritz] pointed out, you could wander around inside it
and watch the program being executed on lights, and you could see where the bullet was
going—you could see if it happened to go below ground or went off into space or the wrong
direction. So instead of your holding a computer, this computer held you. (Harry Reed,
cited in Bergin 2000, 153)

The connection of living inside the computer, moving around inside it, and the
perception of chains of lights lighting up to represent columns of numbers in a
trajectory creates the impression of a projected simulation of a path. In addition, the
bond with the trajectory conveys to the external ballistics expert a sensation of
immersion that ties him to, and simultaneously captures him within, the computer.
However, Reed also admits that only diagnostic tests were generally demonstrated
at public presentations of the ENIAC, for example, to the cadets who were starting
their service at the Aberdeen Proving Ground.
It was a great display, because these tests were constructed so that you could watch the
numbers sort of flow through the registers in their patterns. So it reminded you of Times
Square in New York, and you could diagnose what was going on in the computer. So we
would put these diagnostic tests on, and as I said, it sort of looked like Times Square. Then
this escort officer would come in with these cadets, and he’d been briefed ahead of time. He
would say, ‘Over there, you can see in this register that this is the velocity of a bullet, and
you can see how it is moving …’ None of this was true. It was just these tests going on. But
we got away with it, and it did look good. (Ibid., 155)

The calculation of trajectories arises out of a pair of second-order differential
equations. The path of a shell was made visible in a distributed manner. Two
accumulators exhibited a shell’s horizontal and vertical distances from the muzzle
as a function of time. Two further accumulators displayed how the horizontal and
vertical velocities of the projectile changed over time (Haigh et al. 2016, 20ff.).
Taken together, they constitute a simulation of a trajectory but also a distributed
simulation of a projectile course of action. What the glow lamps that were light-
ing up represented or revealed was therefore only partially comprehensible to the
layperson. Only an engineer or programmer who was familiar with the distrib-
uted operating modes of the ENIAC was capable of recognizing a number in the
ping-pong-like lighting up of bulbs and the simulation of a trajectory in the consecutive
numerical values – from the launching of a projectile to its impact on the ground.

The graphical course is transformed into an arithmetic form of presentation.
Nevertheless, analogies can be determined between the sign systems that allow the
observer to follow what is happening during each moment in the calculation process.
Based on the flashes, the start and end points of the trajectory and the location of the
point where the projectile is located in the simulated space of the coordinate system
are recognizable.
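The distributed trajectory display described above can be made concrete with a small numerical sketch. Assuming a simplified velocity-proportional drag law (the actual firing-table computations used tabulated drag functions), four variables stand in for the four accumulators: horizontal and vertical distance from the muzzle, and horizontal and vertical velocity. The function and its parameter values are illustrative, not drawn from the historical record.

```python
import math

def trajectory(v0, angle_deg, k=5e-5, g=9.81, dt=0.05):
    """Euler integration of the pair x'' = -k*v*x', y'' = -k*v*y' - g.

    Returns the successive (x, y, vx, vy) states: the four quantities
    that four ENIAC accumulators displayed in a distributed manner.
    """
    x, y = 0.0, 0.0
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    states = [(x, y, vx, vy)]
    while y >= 0.0:
        v = math.hypot(vx, vy)        # current speed, used by the drag term
        vx += -k * v * vx * dt        # horizontal velocity update
        vy += (-k * v * vy - g) * dt  # vertical velocity update
        x += vx * dt                  # horizontal distance from the muzzle
        y += vy * dt                  # height above the muzzle
        states.append((x, y, vx, vy))
    return states

states = trajectory(v0=800.0, angle_deg=45.0)
# states[-1] is the first recorded state below ground level: the impact step
```

An observer tracking these four values step by step sees, as Reed described, whether the simulated shell "happened to go below ground or went off into space."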
However, this required specific skills: to interpret the calculative situation, the
observer had to be able to read off numbers from the flashing digits and possess
tabular knowledge of the numerical construction of the course of a projectile. "The
work required a high level of mathematical skill,
which included solving nonlinear differential equations in several variables” (Light
1999, 464). Right from the start, the computer displays therefore relied on a human
counterpart who had the ability to comprehend the computer calculations.
In the case of the ENIAC, these were six female computers who worked in shifts
with the computer in an almost symbiotic relationship.3 The ENIAC girls were not
only capable of calculating the path of projectiles and expressing this in the form of
tables, they also possessed the recursive ability to read trajectories off rows of
numbers and a set of function variables. By necessity, the early readers of digital
displays relied on knowledge of media methods to understand the contents that were
being presented.
When the ENIAC was demonstrated for the first time to journalists in February
1946, the female computer turned into an actual computer, and the human calculators
were “promoted” to operators (Light 1999, 469, fn. 40). This event demonstrated
that actual computers no longer required human computers. Even if this was not
really the case in practice, it was nevertheless semantically effective.
It was only the process of making it public that rendered the display into a
pointing device and no longer simply an operation indicator. The display acquired
an additional handling dimension through the public demonstration. The
transformation from a collection of inconspicuous small lamps to the aesthetic form
of – even if wavy – an illuminated flat field is simultaneously associated with the
expanded agency of the display. This meant that something could be read off the
display that had not originally been intended from a technical perspective.
The opening-up of the laboratory situation is a central programmatic factor in the
“definition” of a medium. There would have also been other options in the public
demonstration of the ENIAC. However, the decision was taken to show which other
media (such as paper) are no longer required. The ENIAC is therefore also a digital
medium because it inevitably refers to the obliteration of intermediaries and older
media. The first computer display thus essentially appears to undermine the thesis
put forward by Claus Pias, stating that an “arbitrariness of the representation” (Pias
2002, 77, 2017) was created through the digital computer. However, if we consider
that its use as an “expert system” conveys a different image, then this favors the
investigation not only of the media-technical structure of the display but also of its
socio-technical structure.

3 “[I]n this era, a computer was a person who did computing” (Fritz 1996, 13).

The early computer display as an arbitrary representation only exhibits a practice
of "decoupling of data and display" (Pias 2002, 90, 2017) to the external observer.
In contrast, for the operators, this manifests not only as a strict coupling between
calculations that are represented and carried out but also as an analog connection
between digit position and calculation position, as well as between the processed
data and the displayed data. The readability of the display is dependent on the
actors’ cognitive ability to recognize the course of trajectories in the flashing digit
positions and the distributed variables of a functional equation.
A further aspect that, up to now, has only been touched upon here refers to the
materiality of the display: the numerical values are represented with the help of
discrete points of light. This would emerge as an important media methodology
during the further development of displays. Seen phenomenologically, direct
addressing of the individual quasi-pixels on an illuminated area is made possible by
the glow lamp panel of the ENIAC and also the radar screen, without having to pass
through flashing “predecessors” and “successors” (Kittler 2001, 33). This means the
following for optical media: in contrast to the dispositive constituted by the
television, not only the rows but also the columns of an image are broken down into
constituent elements.
This has the following consequences for the computing media: a number is
broken down into its digits and is no longer represented as its semantic unit but
distributed across the area of a decimal digit(al) display.
This discrete nature of geometric location and chromatic value distinguishes the
ENIAC and radar display from the film/television image and, long before the
computer was to conquer media, already pointed toward our current media age in
which displays have become the insignia for (full) digitality.
The digital of the first electronic, general-purpose, large-scale digital computer is
defined by the "discrete variable" computing of the ENIAC. The digital display
archetype is thus an image that, from a phenomenological point of view, always
only represents parts of the whole, with its entirety simultaneously never being fully
perceptible. Since the ENIAC, we have been dealing with analytical images that
display situations that are not immediately visible. The ENIAC display does not aim
to simulate the course of a trajectory but to render visible the distributed operativity
of a projectile. A digital display is, by definition, always a distributed display.

References

Akera, Atsushi. 2007. Calculating a Natural World: Scientists, Engineers, and Computers During
the Rise of U.S. Cold War Research. Cambridge, MA: MIT Press.
Bergin, Thomas J. 2000. 50 Years of Army Computing. From ENIAC to MSRC. A Record of a
Symposium and Celebration, November 13 and 14, 1996, Aberdeen Proving Ground. Adelphi,
MD: ARL Technical Publishing Branch.
Basic Items for Publicity Concerning ENIAC. December 21, 1945. UPD 8.4. Moore School of
Electrical Engineering Office of the Director Records, 1931–1948, University Archives and
Records, University of Pennsylvania, Philadelphia, PA.
Campbell-Kelly, Martin et al. 2014. Computer: A History of the Information Machine. Boulder,
CO: Westview Press, A Member of the Perseus Books Group.
Ceruzzi, Paul E. 1983. Reckoners: The Prehistory of the Digital Computer, from Relays to the
Stored Program Concept, 1935–1945. Westport, CT: Greenwood Pub Group.
Curtis, Neal. 2006. War and Social Theory: World, Value and Identity. New York: Palgrave Macmillan.
Fritz, W. Barkley. 1996. The Women of ENIAC. IEEE Annals of the History of Computing 18 (3): 13–28.
Garfinkel, Harold. 1956. Some Sociological Concepts and Methods for Psychiatrists. Psychiatric
Research Reports 6: 181–198.
Garfinkel, Harold. 2008. Toward a Sociological Theory of Information. Boulder, CO: Paradigm.
Goldstine, Adele Katz. 1946. Report on the ENIAC (Electronic Numerical Integrator and
Computer). Technical Manual, University of Pennsylvania, Moore School of Electrical
Engineering. Philadelphia, PA.
Hagen, Wolfgang. 1997. Der Stil der Sourcen. Anmerkungen zur Theorie und Geschichte der
Programmiersprachen. In Hyperkult, edited by Wolfgang Coy et al., 33–68. Basel: Stroemfeld.
Haigh, Thomas, Mark Priestley, and Crispin Rope. 2016. ENIAC in Action. Making and Remaking
the Modern Computer. Cambridge, MA: MIT Press.
Herbert, Henry. Demonstration of ENIAC.  February 1, 1946. For Release February 16, 1946.
University of Pennsylvania, Philadelphia, PA. Arthur W. Burks Papers, Institute of American
Thought, Indiana University–Purdue University of Indianapolis, IN.
Husserl, Edmund. 1960. Cartesian Meditations. An Introduction to Phenomenology. Translated by
Dorian Cairns. The Hague: Martinus Nijhoff Publishers.
History of Development of Computing Devices. Future Release. War Department  – Bureau of
Public Relations. For Release in Morning Papers, Saturday, February 16, 1946. For Radio
Broadcast after 7:00 p.m. EST, February 15, 1946. Arthur W.  Burks Papers, Institute of
American Thought, Indiana University–Purdue University of Indianapolis, IN.
Kittler, Friedrich. 2001. Computer Graphics: A Semi-Technical Introduction. Grey Room 2: 30–45.
Knorr-Cetina, Karin. 2012. Skopische Medien: Am Beispiel der Architektur von Finanzmärkten.
In Mediatisierte Welten: Forschungsfelder und Beschreibungsansätze, edited by Friedrich
Krotz, and Andreas Hepp, 167–195. Wiesbaden: Springer.
Knorr-Cetina, Karin. 2014. Scopic media and global coordination: the mediatization of face-to-­
face encounters. In Mediatization of Communication, edited by Knut Lundby, 39–62. Berlin:
De Gruyter.
Light, Jennifer S. 1999. When Computers Were Women. Technology and Culture 40 (3): 455–483.
Mauchly, John. 1982. The Use of High Speed Vacuum Tube Devices for Calculating, privately
circulated memorandum, August 1942, Moore School of Electrical Engineering, University of
Pennsylvania. In The Origins of Digital Computers. Selected Papers, edited by Brian Randell,
355–358. Berlin/Heidelberg/New York, NY: Springer.
McCartney, Scott. 1999. ENIAC.  The Triumphs and Tragedies of the World’s First Computer.
New York, NY: Walker Books.
Moore School of Electrical Engineering. 1943. The ENIAC (Electronic Numerical Integrator
and Computer), Vol. I: A Report Covering Work until December 31, 1943. University of
Pennsylvania. Philadelphia, PA.
Moore School of Electrical Engineering. 1944. The ENIAC (Electronic Numerical Integrator and
Computer). Progress Report Covering Work from July 1 to December 31, 1944. University of
Pennsylvania. Philadelphia, PA.
Morris, Charles. 1977. Foundations of the Theory of Signs. 14th ed. Chicago: University of
Chicago Press.
Ordnance Department Develops All-Electronic Calculation Machine. Future Release. War
Department  – Bureau of Public Relations. For Release in Morning Papers, Saturday,
16 February 1946. For Radio Broadcast after 7:00 p.m. EST, 15 February 1946. Arthur
W. Burks Papers, Institute of American Thought, Indiana University–Purdue University of
Indianapolis, IN.
Pias, Claus. 2002. Computer-Spiel-Welten. Munich: Fink.
Pias, Claus. 2017. Computer Game Worlds. Zurich: Diaphanes.
Schüttpelz, Erhard. 2015. Skill, Deixis, Medien. In Mediale Anthropologie, edited by Christiane
Voss and Lorenz Engell, 153–182. Paderborn: Fink.
Shurkin, Joel. 1996. Engines of the Mind. The Evolution of the Computer from Mainframes to
Microprocessors. New York, NY/London: W.W. Norton & Company.
Soare, Robert. 1999. The History and Concept of Computability. In Handbook of Computability
Theory, edited by Edward R. Griffor, 3–36. Amsterdam: Elsevier.
Thielmann, Tristan. 2013. Digitale Rechenschaft. Die Netzwerkbedingungen der Akteur-Medien-­
Theorie seit Amtieren des Computers. In Akteur-Medien-Theorie, edited by Tristan Thielmann,
and Erhard Schüttpelz, 337–424. Bielefeld: transcript.
Virilio, Paul, and Sylvère Lotringer. 1997. Pure War. New York: Semiotext(e).
Winegrad, Dilys, and Atsushi Akera. 1996. ENIAC at 50: The Birth of the Information Age. A
Short History of the Second American Revolution. University of Pennsylvania Almanac 42
(18): 4–7.
Chapter 7
The Evolution of Digital Computing
Practice on the Cambridge University
EDSAC, 1949–1951

Martin Campbell-Kelly

Abstract  Cambridge University was very unusual, if not unique, among British
universities in that it had established a centralised computation facility—the
Mathematical Laboratory—in 1937, long before the advent of stored-program com-
puting. The laboratory contained a variety of computing machinery, including desk-
top calculating machines and a differential analyser. During 1947–1949, the
laboratory built the EDSAC, the world’s first practical stored-program computer.
The EDSAC provided a massive increment in computing power that rendered the
earlier equipment largely obsolete. However, the pre-existing computing infrastruc-
ture and practices profoundly shaped how the EDSAC was used and what it was
used for.

7.1  Formation of the University Mathematical Laboratory

The Cambridge University Computer Laboratory, formerly the Mathematical
Laboratory, recently celebrated its 75th anniversary (Ahmed 2013). The original
proposal for a computation facility was made by John Lennard-Jones FRS (1894–
1954). In 1932 Lennard-Jones had been appointed Plummer Professor of Theoretical
Chemistry where “he set out to build up a school of theoretical chemistry in which
the concepts of quantum mechanics and of interatomic forces were to be applied to
a wide field of phenomena in both physical and organic chemistry” (Mott 1955).
This demanded a computing resource for solving differential equations in particular.
In 1936 he organised the construction of a model differential analyser by a senior
departmental technician, J. B. Bratt, based on the Meccano model built in 1934 by
Douglas Hartree FRS (1897–1958) at Manchester University—itself based on
Bush’s differential analyser constructed at MIT.

M. Campbell-Kelly (*)
Warwick University, Coventry, UK

In March 1936 Hartree came down from Manchester to Cambridge to give a
lecture on computing methods, which was followed by a demonstration of the
model differential analyser. Maurice Wilkes (1913–2010), who was then a doctoral
student in the Cavendish Laboratory, found the machine “irresistible” and persuaded
Lennard-Jones to allow him to use it for calculations relating to his research topic of
radio wave propagation (Wilkes 1985, p. 25). Wilkes was such an enthusiast for the
machine that it led to five of his earliest publications. In late 1936 Bratt decided to
leave the university and Lennard-Jones invited Wilkes to take his place and assist
users. This carried a modest salary; Wilkes’ research grant was about to expire, so
he gladly accepted.
Lennard-Jones was an astute academic operator. He enlisted the support of over
a dozen senior science professors from other departments in proposing the develop-
ment of a “computing laboratory”. The proposed laboratory would be analogous to
the university library: it would be a resource centrally funded by the university,
available to all, at no charge. In December 1936 a report was presented to the
university. The report, published in the Cambridge University Reporter (1937), was detailed
and specific in its recommendations in terms of the equipment and staff needed.
First, it recommended the manufacture and purchase of a full-scale “Bush machine”,
similar to that commissioned from Metropolitan-Vickers by Hartree at Manchester
University. It also recommended the purchase of the “Mallock machine”, a simulta-
neous equation solver developed by the Cambridge Scientific Instrument Company.
The laboratory would also need “adding and multiplying machines of standard
types” from manufacturers such as Brunsviga, Monroe, and National. The total cost
of the differential analyser, Mallock machine, and desktop calculating machines
was estimated at £10,000 (about £500,000 in today’s money). Second, the labora-
tory would be staffed by a lecturer-grade individual to assist users, provide training,
and conduct research into computational techniques. There would also be a small
team of female assistants to perform computing tasks under supervision.
The proposal was accepted by the university, including its finance committee, in
February 1937. The original proposed name of Computing Laboratory, however,
was changed to Mathematical Laboratory. According to Croarken (1990, p. 57), this
was because it was “felt that the words ‘computing’ or ‘calculating’ did not ade-
quately describe the analogue nature of the differential analyser”.
The laboratory was to be formally established in October 1937 with an honorary
director (Lennard-Jones) and a full-time assistant director with the rank of univer-
sity demonstrator. Lennard-Jones invited Wilkes to apply for the assistant director
post, which he duly obtained. The new laboratory would be housed in a building
that the School of Anatomy was shortly to vacate, but because of building delays,
this did not actually take place until autumn 1939.
One of Wilkes’ first tasks in his role as assistant director was to commission the
full-scale differential analyser. He contacted Hartree at Manchester to arrange a
visit and an introduction to Metropolitan-Vickers. Hartree, who was a notably col-
legial individual, collected Wilkes from the railway station in his motor car and put
him up in his home for the night. It was the beginning of a deep friendship.
Construction of the differential analyser proceeded slowly because of the higher
priority of war-related work as the international situation deteriorated. The machine
arrived in September 1939 just as the new laboratory was ready for occupation.
However, war broke out the same month and the laboratory was immediately requi-
sitioned by the war office and the differential analyser put to use on ballistics calcu-
lations. Wilkes himself was enlisted in the scientific war effort, where he worked on
radar and operations research. This turned out to be an ideal training in pulse elec-
tronics and practical computation for the world of digital computing that would
emerge when the war was over in 1945.

7.2  EDSAC and the Development of Programming

Wilkes returned from the war in September 1945. Lennard-Jones—who had also
been away from the university on war service—had decided not to resume his posi-
tion as director of the Mathematical Laboratory, and Wilkes was appointed acting
director. He was formally made director in October 1946.
As peacetime conditions returned, Wilkes needed to re-equip the laboratory,
appoint staff, and decide on a research program. In connection with the former, he
consulted L.  J. Comrie, the founder and proprietor of the Scientific Computing
Service. In March 1946, Comrie made a visit to the laboratory and brought with him
a copy of von Neumann’s EDVAC Report, written on behalf of the computer group
at the Moore School of Electrical Engineering, University of Pennsylvania. Wilkes
stayed up late into the night reading the report, with a growing realisation that this
was the likely future direction of computing. A few weeks later, he received an invi-
tation from J. G. Brainerd, head of the Moore School, inviting him to attend a sum-
mer school entitled “Theory and Techniques for Design of Electronic Digital
Computers” to be held in July and August (Campbell-Kelly and Williams 1985).
The invitation probably came at the suggestion of Hartree, who was well acquainted
with and had visited the Moore School computer group. Transatlantic shipping was
in very short supply at the time, and Wilkes did not manage to get a berth until well
into August. Consequently, he missed the first 6 weeks of the 8-week course.
Because much of the material covered had been basic electronics and numerical
techniques in which he was already versed, Wilkes was not fazed. In the last 2
weeks of the course, he learned what he needed to know about the stored-program
computer and determined to build one at Cambridge.
Wilkes began the design of the computer on his return passage to England on the
Queen Mary in September. The machine was to be known as the EDSAC (electronic
delay storage automatic calculator). The name was chosen as a conscious echo of
the EDVAC on which it was based and the fact that it would use mercury delay-line
storage. Back in the Mathematical Laboratory, Wilkes recruited a small technical
staff. Chief among these were William Renwick (chief engineer) and Eric Mutch
(chief technician). While the computer was under construction, he appointed more
staff and research students. These included R. A. (Tony) Brooker, to take charge of
the differential analyser, and research students John Bennett, David Wheeler, and
Stanley Gill. In October 1946, Hartree had moved to Cambridge University as
Plummer Professor of Mathematical Physics. He gave a celebrated inaugural lecture,
Calculating Machines: Recent and Prospective Developments (Hartree 1947), and
became a mentor to Wilkes and a stalwart of the laboratory until his untimely death
in 1958.
While the computer was being built, there were unresolved issues concerning
programming. It had generally been assumed at the Moore School and elsewhere
that this would be a two-stage process. A mathematician would specify in detail the
numerical process (the algorithm as we would now say) and then a “coder” would
convert this into a form the machine could execute—probably in binary or some
variant. The boundary between these two activities was unclear.
Following the pre-war practice of the laboratory, Wilkes assumed that users
would solve their own problems using the equipment available. The laboratory
would supply training and advice where needed, but not do the actual programming.
Hence, the programming technique had to be something that a user could reason-
ably be expected to do for him or herself largely unaided. Whether investigators
chose to do their own programming or delegate it to a research fellow or student was
not the concern of the laboratory; but in either case the process had to be direct and
simple.
Wilkes (1949) decided that the programmer would use a symbolic code, using
mnemonics for operation codes and decimal addresses. For example, an instruction
written as “A 100” would mean “Add the number in location 100 into the accumula-
tor”. Programs would be punched onto five-hole teleprinter tape, and the EDSAC
would convert them into machine instructions using a set of “initial orders” (some-
what like the bootstrap of later computers). Wilkes assigned the writing of the initial
orders to his research student David Wheeler. Wheeler’s “Initial Orders 1” consisted
of some 30 instructions hardwired onto a set of uniselectors (telephone stepping
switches). When the start button of the computer was pressed, the initial orders were
placed in memory; they then proceeded to read the symbolic instructions from paper
tape, convert them to machine instructions, and plant them in the memory.
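The translation performed by the initial orders can be suggested with a modern sketch. The opcode numbers and word layout below are invented for illustration and do not reproduce EDSAC's actual order code, but the essential step, packing a mnemonic and a decimal address into a single instruction word, is the same.

```python
# Illustrative only: these opcode values and this bit layout are invented
# and do not reproduce EDSAC's actual order code.
OPCODES = {"A": 0b00011, "S": 0b00110, "T": 0b00101, "H": 0b01010}

def assemble(line: str) -> int:
    """Pack a mnemonic and a decimal address into one instruction word:
    a 5-bit function code followed by an address field."""
    mnemonic, address = line.split()
    return (OPCODES[mnemonic] << 11) | int(address)

# "A 100": add the number in location 100 into the accumulator.
program = ["A 100", "T 101"]
words = [assemble(line) for line in program]
```

Initial Orders 1 carried out this conversion on the fly as the tape was read, planting each assembled word directly into memory rather than producing an intermediate output.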
EDSAC sprang to life on Friday, 6 May 1949. Wheeler wrote the first program,
which was loaded by his initial orders and which then calculated and printed a table
of the squares of the first 100 integers, neatly in decimal. The following Monday,
two further and more complex programs were run—the  first to print a table of
squares and their differences and the second to print a table of prime numbers—
these were written by Wilkes and Wheeler, respectively. In June an inaugural con-
ference was held to celebrate the EDSAC’s arrival (Cambridge University
Mathematical Laboratory 1950a). After that, the laboratory settled down for the rest
of the year to making the machine more reliable, establishing an operating regimen,
and creating a complete programming environment.

1  This chapter gives a highly condensed description of the programming techniques devised in the
laboratory. A much more detailed history is given in Campbell-Kelly (1980).
7  The Evolution of Digital Computing Practice on the Cambridge University EDSAC… 121

Although Initial Orders 1 simplified programming compared with raw machine
code, getting programs right remained a minefield of difficulties. Wilkes discovered
this fact for himself when developing a program for the numerical integration of
Airy’s differential equation shortly after the June 1949 conference. An early version
of the program shows numerous errors, any one of which would have caused the
program to fail (Campbell-Kelly 1992). Granted, Wilkes was a careless program-
mer, but if he experienced difficulties, so would others. If the laboratory was to
maintain its practice of helping users, but not solving their problems directly, then a
much more helpful programming environment was needed. Wilkes assigned the
task of designing such a system to Wheeler.
Wheeler’s programming scheme was based on a library of subroutines. The idea
of subroutines was well understood. Earlier computers such as the Harvard Mark I
and Bell Labs relay machines used pre-written coding sequences, and Turing’s ACE
Report of 1946 included an extensive discussion of subroutines, which he called
“tables” (Carpenter and Doran 1986). At the Moore School Lectures, C. B. Sheppard
described how “sub-routines” could be used on a stored-program computer
(Campbell-Kelly and Williams 1985, p. 448). However, the mechanism for incorpo-
rating subroutines in a program and calling them was virgin territory. Some of the
early mechanisms were surprisingly clumsy; this was true even of von Neumann’s
group at the Institute for Advanced Study. Wheeler’s scheme was truly elegant—
and it established his reputation as the British programming genius of his
generation.
A key problem of implementing a subroutine library was to be able to write a
subroutine in position-independent code that would only become specific when the
program was loaded. A second issue was the subroutine call—so that a subroutine
could be used multiple times in a program yet return to the particular place where it
was originally called. Wheeler invented neat solutions to both these issues—later
called relocation and linkage—the latter invention became known as the “Wheeler
jump” and was used on the IBM 701 (Bashe et al. 1986, pp. 321–323).
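The flavour of Wheeler's linkage can be conveyed with a toy machine, sketched below. All the opcodes, addresses, and the tuple representation are invented; the real mechanism worked with actual EDSAC orders. The key idea survives: the caller leaves its own address in the accumulator, and the subroutine "plants" a return jump into its own final instruction before doing its work.

```python
# A toy machine, invented for illustration, showing the flavour of the
# Wheeler jump via self-modifying code.

def run(memory, pc=0, acc=0):
    while True:
        op, operand = memory[pc]
        if op == "LOADPC":        # acc := address of this instruction
            acc = pc
        elif op == "JMP":         # unconditional jump
            pc = operand
            continue
        elif op == "PLANTRET":    # plant a return jump at `operand`,
            # targeting two words past the call site recorded in acc
            memory[operand] = ("JMP", acc + 2)
        elif op == "ADD":
            acc += operand
        elif op == "HALT":
            return acc
        pc += 1

# Call site: record own address in acc, then jump to the subroutine at 10.
memory = {0: ("LOADPC", None), 1: ("JMP", 10), 2: ("HALT", None),
          # Subroutine: first plant the return jump over its last word...
          10: ("PLANTRET", 12),
          11: ("ADD", 5),         # ...then do the actual work...
          12: ("JMP", 0)}         # ...this word is overwritten at run time
print(run(memory))                # prints 5
```

Because the return address is computed afresh on every call, the same subroutine can be entered from many different places in a program and always find its way back.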
At the inaugural conference, Wheeler (1949) had already outlined how pro-
grams might be organised. In his scheme a program would consist of a “master
routine” and a set of subroutines. The initial orders would first load a set of “co-
ordinating orders” ahead of the user program. The co-ordinating orders would
then take over and input the master routine and subroutines, taking care of reloca-
tion, so that each program unit was loaded immediately after the previous one,
without any gaps. Wilkes agreed this was the way to go, and in August 1949, the
functions of the initial orders and the co-ordinating orders were combined in the
form of Initial Orders 2, which were used for the life of the EDSAC.  The new
initial orders were constrained by the 42-word capacity of the uniselectors, so they
did not do quite everything Wheeler would have wished. But they were more than
good enough.

7.3  The Subroutine Library and Documentation

The advantages of using a library of subroutines were threefold. First, they reduced
the programming burden by greatly reducing the number of lines of code a user had
to produce—in a typical program, library subroutines constituted two-thirds of the
code. Second, library subroutines had been written by experts and were therefore
more efficient than a user’s own code. Third, library subroutines had been thor-
oughly tested, which reduced the need for debugging.
Following the installation of Initial Orders 2, writing library subroutines began
in earnest, with most of the laboratory’s research staff and students contributing.
Within a year, there were about 90 subroutines in the library, classified by their
mathematical or operational function (Table 7.1).
The most basic of the subroutines performed the following tasks: reading paper-­
tape input and printing output (classes R and P), division (class D—the EDSAC had
a multiplier, but no divider), and common mathematical functions, including expo-
nentials, logarithms, trigonometrical functions, and square and cube roots (classes
E, L, T, and S, respectively). These subroutines would have been written in autumn
1949 (unlike later subroutines, they are undated). Each subroutine was meticulously
documented on an “EDSAC Programme Sheet” so that “people not acquainted with
the interior coding can nevertheless use it easily”. Wheeler (1952) noted “This last
task may be the most difficult”. The program sheets were spirit duplicated in bulk
and made available to users.

Table 7.1  Classification scheme of the EDSAC subroutine library

Code  Subject
A     Floating-point arithmetic
B     Arithmetical operations on complex numbers
C     Checking
D     Division
E     Exponentials
F     General routines relating to functions
G     Differential equations
J     Special functions
K     Power series
L     Logarithms
M     Miscellaneous
P     Print and layout
Q     Quadrature
R     Read (i.e. input)
S     nth root
T     Trigonometrical functions
U     Counting operations
V     Vectors and matrices

The more technically challenging subroutines in the library were for mathemati-
cal processes that enabled users to perform calculations that would formerly have
been done using the differential analyser, desk machines, and mathematical tables.
In this way, users could transfer their existing computational background to the new
world of electronic computing. The subroutine library was consequently shaped by
pre-existing practices, tempered by what EDSAC could actually achieve.
The most important set of mathematical subroutines were for integrating differ-
ential equations (class G). These were written by Stanley Gill (the index G no doubt
signalling both the initial letter of Gill’s surname and exhaustion of the alphabet).
The program sheets for these subroutines are dated May 1950. Gill used a Runge-­
Kutta method adapted for the new calculating economy of the EDSAC in which—as
Hartree famously noted—multiplication was cheap compared with desk-machine
calculation. The method that Gill devised (later known as Runge-Kutta-Gill) was
the subject of his first academic publication and a major part of his dissertation, in
which he acknowledged the advice of Hartree (Gill 1951a, 1952).
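The method can be sketched in its textbook form. The code below gives the modern statement of the Runge-Kutta-Gill coefficients, not Gill's EDSAC code; his formulation was prized partly because it economised on working storage.

```python
import math

def rk_gill_step(f, x, y, h):
    """One step of the fourth-order Runge-Kutta-Gill method for y' = f(x, y).
    Gill's coefficients involve sqrt(2) and reduce the working storage
    needed per step, which mattered in EDSAC's 1024-word memory."""
    r2 = math.sqrt(2.0)
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * ((r2 - 1) * k1 + (2 - r2) * k2) / 2)
    k4 = f(x + h, y + h * (-r2 * k2 / 2 + (1 + r2 / 2) * k3))
    return y + h * (k1 + (2 - r2) * k2 + (2 + r2) * k3 + k4) / 6

def integrate(f, x0, y0, x1, steps):
    """Advance from (x0, y0) to x1 in the given number of equal steps."""
    h = (x1 - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        y = rk_gill_step(f, x, y, h)
        x += h
    return y

# y' = y with y(0) = 1 over [0, 1]: the result approximates e.
approx = integrate(lambda x, y: y, 0.0, 1.0, 1.0, 100)
```

With 100 steps the result agrees with math.e to well beyond six decimal places, the fourth-order accuracy Hartree's "new calculating economy" made affordable.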
By contrast, class V for matrices was relatively undeveloped with only two sub-
routines. The 1024-word store of the EDSAC was too small to accommodate sig-
nificant matrix calculations, and the subroutines eventually fell into disuse. People
requiring matrix work had to use the Hollerith machines, punching out intermediate
results onto cards for subsequent re-input.
Two important classes of subroutines were for power series and quadrature
(classes K and Q). The K-series subroutines evaluated power series (such as poly-
nomial approximations). This was a fundamental requirement for both desk-­
machine and electronic computing. The Q-series evaluated definite integrals using a
variety of methods such as Simpson’s rule and Gaussian multipoint formulae.
Wilkes was probably involved in specifying these subroutines as an active user (see
the final section of this chapter). He subsequently published an article about an
adaptive technique for numerical integration (Wilkes 1956).
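As an illustration, a minimal composite Simpson's rule, one of the methods the Q-series offered, might look like this in modern code. This is a generic textbook formulation, not a transcription of the library subroutine.

```python
def simpson(f, a, b, n):
    """Composite Simpson's rule for the definite integral of f over [a, b],
    using n (even) subintervals of equal width."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # Interior points alternate weights 4, 2, 4, 2, ...
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3
```

Simpson's rule is exact for cubics, so for an integrand such as x squared even a coarse subdivision returns the true value of 1/3 on [0, 1] up to rounding.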
The miscellaneous subroutines in classes F and J are interesting for what they
reveal about computing on the cusp of change. They included subroutines for inter-
polation, which enabled function values to be computed using values taken from
standard mathematical tables. This was exactly as had been done in the desk-­
machine world, but on electronic computers, interpolation would generally be
replaced by subroutines to calculate values on the fly from first principles.
The library contained two classes of “interpretive” subroutines, for arithmetic on
complex numbers and floating-point numbers (classes B and A). The idea of inter-
pretation was due to research student John Bennett. Originally, he used interpreta-
tion as a means of making programs shorter to make better use of the memory, by
packing two artificial instructions in the place of one real instruction. He later
extended this idea into what would eventually be called a virtual machine, though
there was then no vocabulary for it. The virtual machine had a complex accumulator and
the basic arithmetic operations. The idea was not found to be particularly useful,
however, and just two subroutines were developed. During 1950 Tony Brooker
developed a very comprehensive scheme for floating-point arithmetic. Over 20 sub-
routines were developed—these included not only the basic arithmetic operations
but also subroutines for common functions and for solving differential equations.
This saved the programmer from having to deal with the formidable scaling prob-
lems involved with using the EDSAC’s fixed-point accumulator. Although the inter-
pretive overhead was very high (probably a factor of 20 or more), for one-of-a-kind
problems, it saved both the programmer’s time in coding problems and the machine
time spent debugging them. The work was described in a joint paper by Brooker and
Wheeler (1953). In 1951 Brooker moved to Manchester University where he devel-
oped these ideas into the Ferranti Mark I Autocode, a simple programming language
that had a major influence on programming in Britain (Lavington 1975, pp. 22–24;
Knuth and Trabb Pardo 1980).
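The interpretive idea can be sketched in modern terms: a loop that fetches artificial instructions and carries them out against a complex accumulator. The operation names below are invented; the sketch only illustrates the dispatch-loop structure of Bennett's class B subroutines, not their actual coding.

```python
# Sketch of an interpretive routine (operation names invented): artificial
# instructions are fetched and executed one by one against a "complex
# accumulator" held by the interpreter, not by the hardware.

def interpret(program, store):
    acc = complex(0, 0)          # the virtual machine's accumulator
    for op, addr in program:
        if op == "cload":
            acc = store[addr]
        elif op == "cadd":
            acc += store[addr]
        elif op == "cmul":
            acc *= store[addr]
        elif op == "cstore":
            store[addr] = acc
    return store

store = {0: 1 + 2j, 1: 3 - 1j, 2: 0j}
interpret([("cload", 0), ("cmul", 1), ("cadd", 1), ("cstore", 2)], store)
# store[2] is now (8+4j)
```

Each artificial instruction costs many real instructions to decode and execute, which is why the interpretive overhead of such schemes ran to a factor of 20 or more.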
The remaining classes of subroutine were not directly involved with calculation.
They were for processes related to getting work done on the computer. The most
important were the “checking” subroutines (class C). Designed by Gill, these were
a set of interpretive routines that enabled diagnostic information to be printed dur-
ing the course of a computation. This enabled subtle errors to be detected away from
the computer, thereby saving valuable machine time. Prior to the availability of
these subroutines, users had to insert their own ad hoc printing routines or else
debug a program by “peeping” at the CRT store monitors and single-stepping
through the program. The checking subroutines augmented the existing but much
less useful “post-mortem” programs that printed a section of memory after the
abnormal termination of a program. Gill published his work on checking routines
and included it as a major section of his dissertation (Gill 1951b, 1952).
Finally, classes M and U were subroutines designed to make programming eas-
ier. The M-series included “assembly” subroutines written by Wilkes to simplify the
consolidation of the master routine and subroutines in a program (and this may be
the origin of the term “assembler”). In the U-series there were “counting” routines
devised by Wheeler and Hartree that behaved somewhat like the for-loop of later
programming languages. Interestingly, none of these subroutines were ever much
used by anyone other than their authors. They added a further level of complication
and learning to an already difficult process.
For the first year of EDSAC’s operation, user documentation consisted of loose
“EDSAC Programme Sheets” and memoranda produced on a spirit- or stencil-based
duplicator. There were specifications and the actual code for each library subrou-
tine. The laboratory also produced two series of memoranda: “Operating
Memoranda”, which kept users up-to-date with hardware and operational changes,
and “Programming Bulletins” that described interesting programming ideas. In
September 1950, much of this information was consolidated into a Report on the
Preparation of Programmes for the EDSAC and the Use of the Library of Sub-­
routines of approximately 100 pages (Cambridge University Mathematical
Laboratory 1950b). The material included chapters on the EDSAC’s operation and
programming techniques with example programs and the specifications of all the
subroutines in the library with the code for a subset of them, as examples of good
programming style. Wilkes sent a copy of the report to every other computing labo-
ratory he knew about. A few weeks later Wilkes received a visit from the astronomer
Zdenek Kopal, then at MIT, with whom he had discussed atmospheric oscillations
earlier in his research career. Kopal offered to take a copy of the report to Addison-­
Wesley in Cambridge, Mass., with a view to publication. The Preparation of
Programs for an Electronic Digital Computer was published in April the following
year (Wilkes et al. 1951). Usually known as Wilkes, Wheeler, and Gill (or WWG for
short), it was the world’s first textbook on programming, and it influenced program-
ming development practically everywhere.

7.4  A Digital Computer Service and the Summer Schools

By the end of 1949, the EDSAC was running with tolerable reliability, and there
was a serviceable subroutine library. At first, users operated the machine themselves
with the inevitable inefficiencies and conflicts as they competed to get their work
done.
To improve efficiency, in January 1950 Wilkes set up a formal computing
service.2 The machine would in the future be run by an operator during “office hours”—
which ran from 9:00 to 17:45, with a lunch break 12:45–14:00. The operations team
consisted of a day operations manager in overall charge, a senior operator and a
junior operator, and a senior engineer. The EDSAC had frequent breakdowns, so it
was necessary for the operations manager to schedule time for testing and to co-­
ordinate repairs. The operators were responsible for running user programs and for
keeping a log of work done, ensuring library tapes and test tapes were in good con-
dition, keeping up-to-date catalogues, maintaining supplies of stationery, and so on.
The senior engineer was responsible for fixing the machine when things went
wrong. This involved maintaining a supply of spares and keeping a log of machine
fixes and alterations. The EDSAC was always something of a work in progress, so
another job for the senior engineer was to ensure that the machine drawings reflected
the current state of the machine. Users were excluded from the machine room,
although they could request to be present when their own program was running.
The computing day would begin at 9:00 with the running of a set of test pro-
grams that would test the memory and the correct functioning of the arithmetic
circuits. The mercury delay-line memory was unreliable at this time—and it only
got somewhat better over the following years. It was always the least reliable part of
the machine. If an individual memory tube (known as a “tank”) was not working,
the memory could be reconfigured to eliminate it, leaving the machine with less
than the nominal amount of memory. For this reason, users had to specify how much
memory their program needed.
2  The following description of the computing service is distilled principally from Cambridge
University Library, Computer Laboratory papers COMP. B. 3.15: Computer Laboratory Operating
Memorandum No. 3 (March 1951), 8 (January 1952), and 11 (September 1952).

Program tapes were assigned to one of two “queues”—in practice a horizontal
wire to which program tapes were pegged. Each program tape bore a written
identification and a coloured tag—yellow tags for “experimental” programs and blue for
production programs. Experimental programs (i.e. those still being debugged) were
given priority so that, with luck, a user could get several test runs in a day. The oper-
ating instructions for the program were written on a “tape ticket” (Fig. 7.1). This
specified the amount of memory needed, the expected run time and output, and
whether the user wanted to be present. The programmer could also specify what to
do in the event of an abnormal termination—this was typically a post-mortem print-
out of a specified region of memory. During 1952, the queuing system was refined
with red-tagged tapes (typically new library subroutines and machine test programs)
being given priority over user programs.
Fig. 7.1  EDSAC tape ticket. (Courtesy of The National Museum of Computing)

A second shift ran from 17:45 to 22:00. The role of the operations manager was
then assumed by a trusted user who had been instructed in dealing with potential
emergencies and was able to power the machine down safely. Because there was no
engineer present, if the machine broke down, it simply had to be shut down and the
session abandoned. The night operations manager (usually one of the research stu-
dents) had priority for his own programs. This was especially useful for program
development as it was possible to get many debugging runs during the session. In
theory, the other users would have to work around the operations manager, but in
practice social convention prevented monopolisation of the machine by a single
user. Later the night shift was extended from 17:45 to 9:00 the following day.
A major challenge for the laboratory was training users in programming. Wilkes
and Wheeler had made programming as easy as possible, but it was still difficult and
quite unlike anything that users had encountered before. At first Wilkes would loan
a research student to do the programming for important users. For example, a letter
from the distinguished geneticist R. A. Fisher FRS dated February 1949, well before
the EDSAC was complete, asked if Wilkes’ “fine new machine” could be set to
work on computing a table for him.3 When EDSAC was finished, this was one of
Wheeler’s first assignments, and the results were published in Biometrics (Fisher
1950). This was one of the first published outcomes of the EDSAC. In other cases,
a research student from outside the laboratory took up temporary residence and
learned from the lab’s research students. Wilkes tried to keep a few spare desks for
this purpose. Some research students, such as David Barron, never left (Campbell-­
Kelly, 2013).
There was, however, a limit to this ad hoc approach to user training. In order to
scale up the process, Wilkes decided to organise a summer school in programming
in September 1950. It was such a success that the summer schools continued for the
life of the EDSAC and after that for its successor EDSAC 2. For a fortnight, normal
activity in the laboratory stopped and all hands participated—senior staff and pro-
gramming experts like Wheeler gave formal lectures, and the rest of the laboratory
staff served as demonstrators. Early syllabuses have not survived, but those for the
mid-1950s indicate that there were 12 formal lectures on programming, with labo-
ratory sessions for worked examples, tape preparation, and actual machine running.
Attendees were not only Cambridge insiders but also visitors from other universities
and industry, both domestic and overseas, and of senior and junior rank. For this
reason, there were also broadening lectures on topics such as “the organization of a
computer centre”, “commercial computers”, and “business applications”. By the
mid-1950s there were about a hundred students, divided into six groups for practical
sessions, each supported by two demonstrators. Attending one of the summer
schools gave an entrée into computing for some of computing’s major figures. No
complete list of attendees is available, but we know from personal memoirs that
attending was a turning point for individuals such as Edsger Dijkstra and Peter
Naur.

3  Cambridge University Library, Computer Laboratory papers COMP. A. 9.


7.5  Applications and the Priorities Committee

One of the primary applications of computing machinery before the war had been
the production of mathematical tables.4 Both Wilkes’ Airy program and Fisher’s
genetics table followed in this tradition. A research student Richard Scorer (later a
professor of meteorology at Imperial College London) adapted Wilkes’ Airy pro-
gram for a table of waveforms in a dispersive medium (Scorer 1950). Wilkes also
worked with a visiting Canadian J. P. Stanley to produce a 100-page table published
by the University of Toronto Press (Stanley and Wilkes 1950). However, when mak-
ing tables EDSAC was doing what pre-war computing laboratories had done, only
faster. Already the machine was making a much bigger impact in direct problem-­
solving—particularly the integration of differential equations and Fourier analysis.
By spring 1951, user demand for the EDSAC had built up, and Wilkes decided to
establish a Priorities Committee with two purposes: first, to ensure that “the prob-
lems selected for running are those which are of the greatest intrinsic value and
which are, at the same time, well adapted to the machine”.5 The second objective
was to keep a complete record of the work done on the machine so that it could be
subsequently analysed and, no doubt, used to justify the investment in EDSAC. The
Priorities Committee maintained a record of work done for the life of the machine.
When EDSAC was decommissioned in 1958, over 300 individual projects had been
completed.
The first list of projects was published in May 1951 (Fig. 7.2 shows the first
page). The list gives a revealing insight into what the EDSAC was actually being
used for and the research community it had started to attract. There were some 43
projects and 25 users. The users included a sprinkling of senior faculty and a few
mid-career academics; the rest were junior staff and research students. Some of the
latter were working for senior faculty; in turn, some of the senior faculty were learn-
ing to use the EDSAC alongside their students.
The three heaviest users were Douglas Hartree, J. C. P. (Jeff) Miller (1906–1981),
and S. F. (Frank) Boys (1911–1972). Hartree had three personal projects underway,
developing numerical techniques for mathematical physics, and three further proj-
ects being undertaken by an Australian research student John Allen-Ovenstone
(Froese Fischer 2003, p.  179). Miller, who had joined the laboratory from the
Scientific Computing Service in 1949, had been one of Britain’s leading table-­
makers since the 1930s, and he used the EDSAC to produce some significant new
tables. However, at this moment in time, he had six projects underway—assisted by
Wheeler, among others—exploring the computation of prime numbers. In June
1951 Miller and Wheeler submitted a note to Nature stating that they had computed
the largest known prime number—by the time the note appeared in November, they
had already beaten their own record (Miller and Wheeler 1951).

4  Table making was the primary reason for the existence of both the Harvard Mark I and the ENIAC.
Indeed, the Harvard machine had a nickname “Bessie” due to the vast amount of time spent com-
puting tables of Bessel functions.
5  Unless otherwise indicated, the information and quotations for this section come from Cambridge
University Library, Computer Laboratory papers COMP. A. 9 (Priorities Committee).

Fig. 7.2  List of EDSAC projects, May 1951. (Reproduced by kind permission of the Syndics of
Cambridge University Library, COMP A. 9)
In 1948 S. F. Boys had been appointed to a lectureship in Lennard-Jones’ depart-
ment of theoretical chemistry. Boys had been computing atomic wave functions
since the 1930s and was a very enthusiastic EDSAC user. He had written to Wilkes
in March 1950:
I am informed that EDSAC has just completed a calculation in about twenty minutes of a
whole series of electrostatic integrals which will be very useful to us. … I might say that it
is with mixed feelings that I regard the result. I painfully calculated about forty sheets of
coefficients for these integrals about three years ago. …
This is the first of the two main parts of the atomic wave function calculations, in which
it appears that your activities will alter the whole aspect of the problem. So with repeated
thanks for all the facilities and for the help of you department.

By May 1951 Boys had one personal programming project underway and six fur-
ther projects by research students (including V. E. Price, later professor of comput-
ing at City University). This work led to a series of 12 papers published between
1950 and 1956 (Coulson 1973).
Wilkes was also an EDSAC user, though on a much smaller scale than the heavi-
est users. He continued to plough the furrow of his pre-war research into atmo-
spheric oscillations. At this time, he was working on a table of Chapman’s Integral
(named for Sydney Chapman FRS, a noted expert in geomagnetism and professor
of mathematics at Imperial College London). The table was published in 1954, and
the program also served as an example in WWG (Wilkes 1954). Two of Wilkes’
pre-war colleagues from the Cavendish Laboratory, K. Weekes and K. G. Budden,
were also working on related projects.
Among the junior members of the lab, Wilkes’ research students Bennett, Gill,
and Wheeler were assisting senior staff and working on their own somewhat ran-
dom topics. These latter eventually saw the light of day in their dissertations—in the
new world of electronic computing, few dissertations had much internal coherence.
Research assistant R. A. Brooker had by now migrated from the differential analy-
ser and, as well as assisting external users, was working on his interpretive floating-­
point subroutines. A visiting astronomy research student from Denmark, Peter
Naur, was working on a project “orbits for minor planets” (Naur 1980). Naur subse-
quently moved from astronomy to computing and became a prime mover behind
Algol 60.
The Priorities Committee’s list of projects is a remarkable snapshot of what the
EDSAC was being used for at a particular moment in time. However, perhaps even
more remarkable is the glimpse it gives of a handful of mid-career academics for
whom electronic computing would play a central role in their research life. These
included Hermann Bondi (1919–2005), Andrew Huxley (1917–2012), Alan
Hodgkin (1914–1998), and John Kendrew (1917–1997) (Roxburgh, 2007; Huxley
2000; Holmes 2001). All of them were then in their 30s and had been involved in the
scientific war effort. This experience had opened their eyes to the possibilities for
instrumentation and electronics in post-war research. All four were later elected to
the Royal Society, knighted, and three of them won Nobel Prizes.

Hermann Bondi, assisted by Eric Mutch, was working on the integration of dif-
ferential equations. Bondi and his colleague Fred Hoyle later invented the “steady
state” theory of the universe. Hoyle and his students (including Leon Mestel in the
May list) became heavy users of the EDSAC (Wheeler, J. 1992). The irrepressible
Hoyle had a sideline as a science fiction writer, and his novel The Black Cloud
(1957, pp. 36–37) includes an account of a night shift on the EDSAC.
Andrew Huxley and Alan Hodgkin—who both came from illustrious scientific
dynasties—were then mid-career academics in the Department of Physiology. They
used the EDSAC in their investigation into the mechanism of nerve impulses. The
so-called Hodgkin-Huxley equations were evaluated using the EDSAC and resulted
in the publication of five papers in 1952. They shared the Nobel Prize in Physiology
or Medicine in 1963 “for their discoveries concerning the ionic mechanisms
involved in excitation and inhibition in the peripheral and central portions of the
nerve cell membrane”. The Hodgkin-Huxley equations have survived with “rela-
tively little modification” (Huxley 2000).
John Kendrew, who had attained the rank of wing commander in the war, returned
to Cambridge in 1945. He shared an interest in the molecular structure of proteins
with Max Perutz, and they jointly formed the ponderously named Medical Research
Council (MRC) Unit for the Molecular Structure of Biological Systems, nominally
headed by Perutz. (The MRC Unit would shortly attract Watson and Crick, the dis-
coverers of DNA.) Kendrew established an X-ray crystallography project to deter-
mine the molecular structure of myoglobin. This protein contained approximately
2600 atoms, whereas the largest molecule mapped up to that date (and that only
recently) was Dorothy Hodgkin’s elucidation of vitamin B12, which had only 200 atoms.
After a programme of research taking several years, in which Kendrew was assisted
by John Bennett, the structure of myoglobin was finally established in 1957 after a
single run of 76 minutes on the EDSAC.6 This gave a low-resolution 6-Angstrom
view of the molecule. Later, with more computing power, a higher-resolution ver-
sion in which individual atoms could be discerned was achieved with EDSAC 2.
These four early users—Bondi, Huxley, Hodgkin, and Kendrew—went on to
have truly stellar careers. Besides the Nobel prizes, honorary degrees, and member-
ships of the Royal and other scientific societies, later in life all became masters of
Cambridge University colleges. Bondi became a chief scientific advisor to the gov-
ernment and Huxley a president of the Royal Society. All were recipients of the
Order of Merit, Britain’s highest scientific and cultural honour, whose membership
is limited to 24 individuals.
Talent will out, of course. But it is tempting to conjecture that their success owed
much to their realisation that scientific progress would become increasingly depen-
dent on high-speed electronic computing. They made a bet on the future.

6  Bennett and Kendrew’s work is described in several contemporary papers (particularly Bennett
and Kendrew 1952 and Kendrew et al. 1958). Historical accounts setting the work in its scientific
context include November (2012) and de Chadarevian (2002).

7.5.1  EDSAC’s Digital Legacy

In recent years “digital” has taken on a broader meaning than in the 1950s. Today
we have terms such as digital culture, the digital divide, and the digital economy.
These terms evoke a society that is increasingly reliant on stored-program comput-
ers and networks. Such usages were inconceivable in the 1950s.
When the Mathematical Laboratory was first mooted in 1936, it was proposed to
call it a “Computing Laboratory”. At this time, the terms “digital” and “analogue”
did not have their later meanings, and this vocabulary does not appear in the written
proposals. However, that there were two kinds of computing was well understood.
On the one hand, there were computations involving decimal numbers performed by
human computers using desktop calculating machines and mathematical tables; cal-
culations performed with the aid of punch card machinery were also digital in char-
acter. On the other hand, there were devices such as the Mallock machine and the
differential analyser in which numbers were represented purely by physical quanti-
ties. There was considerable debate about what to call the laboratory. To those
involved in the discussions, the term “computing” captured the notion of computing
with numbers but failed to recognise machinery in which numbers did not exist as
such. In the event, “Mathematical Laboratory” was settled upon; like “Physics
Laboratory” it connoted a wide range of possibilities (Ahmed 2013, p. 28).
As the EDSAC became the university’s computing mainstay, it rendered the pre-existing
equipment obsolete. There are few records of this transformation or what
happened to the machines themselves. According to Brooker (2010), the differential
analyser was acquired by the Royal Military College of Science in 1950. Differential
analysers remained a useful computing resource for many organisations well into
the 1950s, until commercially manufactured computers became available and
affordable. The Mallock machine had no afterlife. It was a technological dead end
that had never lived up to expectations, and it was broken up. Turning to the “digi-
tal” machinery, there are no records of what happened to the punched card machines.
However, they were loaned to the laboratory by the British Tabulating Machine Co.,
to whom they would have been returned; following the normal practice, they would
then probably have been refurbished and leased to a commercial customer. Punched
card machinery remained the dominant data processing technology in the UK until
the mid-1960s. Again, there are no records of what happened to the desktop calcu-
lating machines or the women who operated them. However, one can surmise that
the machines remained in use for small-scale calculations in the university and else-
where for at least a decade. Electro-mechanical calculating machines were valuable
items much in demand and were still manufactured well into the 1960s.
EDSAC was shut down in 1958 when it was replaced by its successor EDSAC 2.
The original EDSAC had swept all before it and established the pattern of the labo-
ratory’s digital future. The laboratory was renamed the Computer Laboratory in
7  The Evolution of Digital Computing Practice on the Cambridge University EDSAC… 133

References

Ahmed, H. 2013. Cambridge Computing: The First 75 Years. London: Third Millennium.
Bashe, C.  J., L.  R. Johnson, J.  H. Palmer, and E.  W. Pugh. 1986. IBM’s Early Computers.
Cambridge Mass.: MIT Press.
Bennett, J.  and J.  C. Kendrew. 1952. “The Computation of Fourier Syntheses with a Digital
Electronic Calculating Machine.” Acta Cryst. 5, pp. 109-116.
Brooker, R. A. 2010. “National Life Stories: Interview by Tom Lean.” London: British Library.
Brooker, R.A. and D. J. Wheeler. 1953. “Floating Operations on the EDSAC.” MTAC, 7, pp. 37-47.
Cambridge University Mathematical Laboratory. 1950a. Report of a Conference on High Speed
Automatic Calculating Machines, 22–25 June 1949.
Cambridge University Mathematical Laboratory. 1950b. Report on the Preparation of Programmes
for the EDSAC and the Use of the Library of Sub-routines.
Cambridge University Reporter. 2 February 1937. “Report of the General Board on the
Establishment of a Computing Laboratory.”
Campbell-Kelly, M. 1980. “Programming the EDSAC: Early Programming Activity at the
University of Cambridge”, Annals of the History of Computing 2, 1, pp. 7-36.
Campbell-Kelly, M. 1992. “The Airy Tape: An Early Chapter in the History of Debugging.” Annals
of the History of Computing 14, 4, pp. 16-26.
Campbell-Kelly, M. 2013. “David Barron: A Life in Software, 1935–2012,” Software—Practice &
Experience, 43, pp. 733-741.
Campbell-Kelly, M. and M.  R. Williams, eds. 1985. The Moore School Lectures. Cambridge,
Mass: MIT Press; Los Angeles: Tomash Publishers.
Carpenter B.E. and R.W. Doran, eds. 1986. A.M. Turing’s ACE Report of 1946 and Other Papers.
Cambridge, Mass: MIT Press; Los Angeles: Tomash Publishers.
Coulson, C. A. 1973. “Samuel Francis Boys, 1911-1972.” Biogr. Mems Fell. R. Soc. 19, pp. 94-115.
Croarken, M. 1990. Early Scientific Computing in Britain. Oxford: Oxford University Press.
de Chadarevian, S. 2002. Designs for Life: Molecular Biology after World War II. Cambridge:
Cambridge University Press.
Fisher, R.A. 1950. “Gene Frequencies in a Cline Determined by Selection and Diffusion.”
Biometrics 6, pp. 353-361.
Froese Fischer, Charlotte. 2003. Douglas Rayner Hartree: His Life in Science and Computing.
London: World Scientific Publishing.
Gill, S. 1951a. “Process for the Step by Step Integration of Differential Equations.” Proc.
Cambridge Philosophical Society, 47, pp. 96-108.
Gill, S. 1951b. “The Diagnosis of Mistakes in Programmes on the EDSAC.” Proc. Roy. Soc. (A)
206, pp. 538-554.
Gill, S. 1952. The Application of an Electronic Digital Computer to Problems in Mathematics and
Physics. Ph.D. Dissertation, University of Cambridge, November 1952.
Hartree, D. R. 1947. Calculating Machines, Recent and Prospective Developments. Cambridge:
Cambridge University Press.
Holmes, K.  C. 2001. “Sir John Cowdery Kendrew, 1917–1997.” Biogr. Mems Fell. R.  Soc. 47,
pp. 311-332.
Hoyle, F. 1957. The Black Cloud. London: Penguin.
Huxley, A. 2000. “Sir Alan Lloyd Hodgkin, 1914–1998.” Biogr. Mems Fell. R. Soc. 46, pp. 219-241.
Kendrew, J. C. et al. 1958. “A Three-Dimensional Model of the Myoglobin Molecule Obtained by
X-Ray Analysis.” Nature 181, pp. 662-666.
Knuth, D. E. and L. Trabb Pardo. 1980. “The Early Development of Programming Languages.” In A
History of Computing in the Twentieth Century. New York: Academic Press.
Lavington, S. H. 1975. A History of Manchester Computers. Manchester: NCC.
Miller, J. C. P. and D. J. Wheeler. 1951. “Large Prime Numbers.” Nature, 168, p. 838.
Mott, N.  F. 1955. “John Edward Lennard-Jones, 1894-1954.” Biogr. Mems Fell. R.  Soc. 1,
pp. 174-18.

Naur, P. 1980. “Impressions of the Early Days of Programming.” BIT 20, pp. 414-425.
November, J. 2012. Biomedical Computing: Digitizing Life in the United States. Baltimore: Johns
Hopkins University Press.
Roxburgh, I. W. 2007. “Hermann Bondi, 1919-2005.” Biogr. Mems Fell. R. Soc. 53, pp. 45-61.
Scorer, R. S. 1950. “Numerical Evaluation of Integrals.” Quart. J. Mech. and Applied Math. 3,
pp. 107-112.
Stanley, J. P. and M. V. Wilkes. 1950. Table of the Reciprocal of the Gamma Function for Complex
Argument. Toronto: Computation Centre, University of Toronto.
Wheeler, D.J. 1949. “Planning the Use of a Paper Library.” In Cambridge University Mathematical
Laboratory 1950a, pp. 36-40.
Wheeler, D. J. 1952. “The Use of Sub-routines in Programmes.” Proc. ACM Nat. Conf., Pittsburgh,
May 1952, pp. 235-236.
Wheeler, J. 1992. “Applications of the EDSAC.” Annals of the History of Computing 14, 4,
pp. 27-33.
Wilkes, M.  V. 1949. “Programme Design for a High Speed Automatic Calculating Machine.”
J. Sci. Instr. 26, pp. 217-220.
Wilkes, M. V. 1954. “A Table of Chapman’s Grazing Incidence Function.” Proc. Phys. Soc. B 67,
pp. 304-308.
Wilkes, M. V. 1956. “A Note on the Use of Automatic Adjustment of Strip Width in Quadrature.”
Nachrichtentechnische Fachberichte, 4, pp. 182-183.
Wilkes, Maurice V. 1985. Memoirs of a Computer Pioneer. Cambridge, Mass.: MIT Press.
Wilkes, M. V., D. J. Wheeler, and S. Gill. 1951. The Preparation of Programs for an Electronic
Digital Computer. Reading, Mass.: Addison-Wesley.
Chapter 8
The Media of Programming

Mark Priestley and Thomas Haigh

Abstract  We revisit the origins of the modern, so-called “stored program” com-
puter during the 1940s from a media-centric viewpoint, from tape-driven relay com-
puters to the introduction of delay line and cathode ray tube memories. Some early
machines embodied fixed programs, but all general-purpose computers use a
medium of some kind to store control information. The idea of a “memory space”
composed of sequentially numbered “storage locations” is crucial to modern com-
puting, but we show that this idea developed incrementally and was not fully articu-
lated in John von Neumann’s First Draft of a Report on the EDVAC. Instead, the
designers of computers based around delay line memories conceptualized their
structure using different temporal and spatial schemes. Referencing the correct data
was not just a question of “where” but also one of “when.”

The phrase “stored program computer” is often used as shorthand for computers pat-
terned after the design set down by John von Neumann in his 1945 First Draft of a
Report on the EDVAC. We have argued elsewhere (Haigh et al. 2014) that it is mis-
leading to compress the cluster of computing technologies developed during the mid-
1940s into a single innovation and to erect a rigid barrier between (comparatively)
ancient and modern computing depending on whether a machine stored coded instruc-
tions in an addressable electronic memory that was both readable and writable.
We consider it particularly unwise to speak of a stored program concept, as the
phrase suggests that the single innovation was an abstract idea. Professional histori-
ans have largely moved on from discussing the early computers of the 1940s to
focus on more recent developments. In their absence, authors such as Martin Davis

M. Priestley (*)
Independent Scholar, London, UK
T. Haigh
Department of History, University of Wisconsin–Milwaukee, Milwaukee, WI, USA
Comenius Visiting Professor, Siegen University, Siegen, Germany

© Springer Nature Switzerland AG 2019 135

T. Haigh (ed.), Exploring the Early Digital, History of Computing,

(2001) and Jack Copeland (2013) have claimed to find the source of this supposed
concept in logic, reinforcing an abstract view of the origins of modern computing.1
Perhaps because of this abstract turn, the widely held belief that the essence of the
modern computer is tied up with the question of the storage of programs has not led
to a significant engagement with the characteristics of the digital media doing the stor-
ing. Accounts of the kind offered by Davis and Copeland privilege mathematical and
abstract ideas and downplay the technological development of delay lines, Williams
tubes, and other forms of information storage device.2 Conversely, work engaging
with the details of early memory technology, even by some of the same authors
(Copeland et al. 2017), has not typically focused on the creation and initial adoption of the
EDVAC model for the modern computer or considered the interplay of memory tech-
nologies and architecture. In this chapter we retell the familiar story of the emergence
of modern computing in the mid-1940s from the unfamiliar perspective of the digital
media used to encode the programs of instructions that the machines carried out.

8.1  Automation and Programming

The mechanical aids to digital computation available in the 1930s, desk calculators
and punched card machines, only automated individual operations. Extended compu-
tation still depended on human labor: people ferried card decks around corporate
offices or transcribed numbers from calculator to paper and back again in the comput-
ing rooms of the Mathematical Tables Project (Grier 2006) and the country homes of
the retired mariners working for the UK’s Nautical Almanac Office (Croarken 2003).
Automation had been carried further in non-digital machines such as the differ-
ential analyzer. In a computation setup on an analyzer, elementary operations were
executed continuously and simultaneously on quantities that were not represented
digitally but by some physical analog such as the rotation of a shaft. Analyzers con-
tained devices to perform individual operations, such as integrators and adders, but
did not sequence discrete operations over time as human and automatic digital com-
puters did (Haigh and Priestley 2016).
Between the 1820s and the early 1940s, several groups described, and some
built, digital machines that automatically performed sequences of operations. In
some the sequence of operations was determined solely by the physical structure of
the machine. Babbage’s difference engines fall into this category, as does the Bell
Labs Complex Number Calculator of 1940, a special-purpose machine capable of
performing the four fundamental operations of arithmetic on complex numbers.
According to its inventor George Stibitz, it contained units handling the real and

1 Of course, the “stored program concept” has been discussed at length by many other historians,
including William Aspray, Paul Ceruzzi, and Doron Swade. We engage in detail with their work in
Haigh et al. (2016, Chap. 11) and Haigh et al. (2014), and will not repeat that analysis here.
2 Copeland has in fact produced an excellent study of early tube-based memory devices (Copeland
et al. 2017), but this engagement with materiality does not appear to have altered his conviction
that Alan Turing invented the “stored program computer” in 1936.

imaginary parts of the calculation which “operated in parallel, during multiplication,
for example, when the real and imaginary parts of the multiplicand were
multiplied by digits of the real part of the multiplier simultaneously” (Stibitz 1967,
p. 39). The machine implemented the operations of real-number arithmetic but gave
its users no way of re-sequencing those operations for any other purpose.
Other special-purpose machines of the era, with similarly fixed programs, included
the Colossus machines and the Atanasoff-Berry computer. These machines per-
formed a small number of operation sequences defined by their physical construc-
tion. Using a phrase introduced by the ENIAC group (Grier 1996), we describe such
machines as embodying a single “program of operations.” As with other contempo-
rary uses of the word “program,” the exact sequence of operations carried out would
vary, in this case according to properties of the input data or user configuration.
Greater flexibility could be obtained by allowing the sequence of operations to
be specified in advance of the computation starting, a process usually described as
giving “instructions” or “orders” to the machine. In the 1930s, Konrad Zuse (Zuse
1993) and Howard Aiken (Cohen 1999) began projects to construct machines capa-
ble of obeying sequences of instructions, and the general approach was theorized by
Alan Turing (1936) and Emil Post (1936). The metaphor of “giving orders” reso-
nated with the familiar dynamic of supervisors and human computers, something
brought out clearly in Post’s account. The term “automatically sequenced” was
introduced to distinguish these machines from their less capable forebears.
Around the middle of the 1940s, the term “program” began to be used to refer to
these sequences of instructions rather than the operations to which they gave rise (Haigh
et al. 2016). The creation of these sequences is one of the key senses of what we now
call “programming.” In the mid-1940s, however, “program” and its derivatives were
used in other senses: a program could be a sequence of operations or instructions, and
programming was a task carried out by control circuitry before it became “paper work”
(Turing 1946) performed by humans. These ambiguities highlight an important theme
of this chapter, namely, the relationship between the static program of instructions
given to an automatically sequenced machine and the dynamic unfolding of those
instructions into a computation or program of operations. This was originally thought
to be a fairly straightforward correspondence, and we trace the interplay of logical and
technological factors that led to a more complex understanding of its nature.
The distinction between machines which embody a single program and those
which carry out programs supplied to them as a set of encoded orders appears to be
fundamental. In the latter case, a novel computation can be carried out by simply
supplying a different set of orders instead of building a new machine. We say
“appears to be fundamental,” however, because the distinction is not really one
between different kinds of machine. The program embodied by an automatically
sequenced machine consists of the operations required to read and interpret a coded
sequence of instructions. A modern computer endlessly fetches instructions from
memory, decodes them, and executes the corresponding operations. Likewise,
Turing’s abstract machines embody a single program, defined by a table listing their
possible configurations and behavior. Turing’s universal machines are not a new
type of machine, but regular Turing machines put to a very specific use.
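This point can be illustrated with a minimal sketch in modern Python (entirely ours; the three-instruction repertoire is invented and corresponds to no historical machine). The only program the machine itself embodies is the fetch-decode-execute loop; everything else comes from the coded instructions held in its memory.

```python
# A toy fetch-decode-execute cycle. The interpreter loop below is the single
# "embodied program"; the behaviour it produces depends entirely on the coded
# instructions it fetches. The instruction set is purely illustrative.

def run(memory):
    acc = 0   # a single accumulator register
    pc = 0    # program counter
    while True:
        op, arg = memory[pc]   # fetch the next coded instruction
        pc += 1
        if op == "LOAD":       # decode it and execute the operation
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            return acc

program = [("LOAD", 2), ("ADD", 3), ("HALT", 0)]
print(run(program))  # prints 5
```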

The instructions defining computations were encoded in a variety of quite distinct
media. Each medium offered different affordances to the designer of the machine’s
repertoire of instructions and to its programmers and affected the ways
in which instruction sequences could be mapped into temporal sequences of opera-
tions during the execution of a program. Proceeding largely chronologically, we
describe how the instruction sets and practices of organizing computations on dif-
ferent machines were shaped by the properties of the media used to store the pro-
gram instructions.

8.2  Storing Instructions on Tapes

Three famous historical actors independently began projects to build machines that
would obey arbitrary instruction sequences: Charles Babbage (Swade 2001), Konrad
Zuse (1993), and Howard Aiken (Cohen 1999). All three conceived of computation
as the execution of a temporal sequence of basic operations and therefore started by
assuming that simple sequences of instructions, each causing the machine to exe-
cute a single operation, would be adequate to control computations. All three came
to recognize that this model was oversimple.
Their machines encoded instructions as patterns of perforations in paper or card-
board configured to make a sequential, machine-readable medium, or tape. For the
Analytical Engine, Babbage drew upon Joseph Marie Jacquard’s use of punched
cards to control the operation of mechanical looms. Instructions were punched on
cards which, following Jacquard’s practice, Babbage proposed to tie together to make
a sequential, machine-readable medium. For the Harvard Mark I, Aiken specified a custom-
designed 24-channel tape out of IBM card stock, each position holding a single
encoded instruction. Other machines, such as the Relay Interpolator built at Bell
Labs in the early 1940s, used standard five- or six-hole teleprinter tape.

8.2.1  Addressable Memory and the Format of Instructions

Unlike machines that embody a single program, tape-controlled computers were
designed to carry out different programs at different times. This required a different
approach to the storage of data. Special-purpose machines have special-purpose
memory components: the electronic counters of the Colossus machines, for example,
were permanently connected to the machines’ logic circuits and were used solely to
tally the number of times particular conditions were satisfied. In contrast, automati-
cally sequenced machines need general purpose storage units that can be switched to
connect with different computational units as demanded by different instructions.
Mark I stored numbers in “counters,” and a specific counter could at one moment
receive a number from the machine’s multiplier and at the next moment supply it as

an argument for the computation of a sine function. The relationship between the
store and the mill (CPU) of Babbage’s Analytical Engine was similarly flexible.
The arguments for a specific operation could be taken from any storage unit, and
the result of the operation placed in any storage unit. This meant that the storage
units had to be identifiable in some way. This was done by assigning what would
later be called an “address” (usually a number) to each individual memory unit.
Instructions to the machine to perform a specific operation therefore needed to
encode not only a reference to the operation but also the addresses of the storage
units involved.
Exactly how this encoding was carried out depended on the details of a machine’s
architecture. Many of Mark I’s instructions, for example, contained two addresses,
one specifying the counter from which data was to be read and the other specifying
the counter into which the data would be placed. A third field specified the opera-
tions that the machine would carry out in response to reading the instruction.
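By way of illustration only, the field layout below is our own invention (Mark I’s actual 24-channel tape code was different); it shows the general principle of packing an operation code and two addresses into a single coded instruction word.

```python
# Hypothetical two-address instruction word, for illustration only:
# bits 0-7 hold the source address, bits 8-15 the destination address,
# and bits 16-23 the operation code. This is NOT Mark I's real encoding.

def encode(operation, source, dest):
    return (operation << 16) | (dest << 8) | source

def decode(word):
    source = word & 0xFF
    dest = (word >> 8) & 0xFF
    operation = (word >> 16) & 0xFF
    return operation, source, dest

word = encode(operation=7, source=12, dest=34)
print(decode(word))  # prints (7, 12, 34)
```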

8.2.2  Controlling Computations

The first tape machines had only a single tape reader. They appear to have been
designed on the assumption that there would be a straightforward one-to-one cor-
respondence between the instructions punched on the tape and the operations car-
ried out by the machine, similar to the relationship between the tape of a player
piano and the music it performs.
The iterative nature of many mathematical calculations, where a relatively short
sequence of operations is carried out repeatedly until the results obtained approach
some required level of tolerance, complicated this simple picture. It would obvi-
ously be wasteful and error-prone to punch the same sequence of operations repeat-
edly onto a tape, and in cases where the number of iterations was not known in
advance, this would not even be feasible in principle. The obvious solution to this
problem was to punch the instructions once and provide some way for them to be
repeatedly presented to the machine. To this end, Babbage planned to introduce a
“backing” mechanism which would allow the Analytical Engine to rewind its con-
trol tape to an earlier instruction. More mundanely, the ends of Mark I’s tapes were
glued together to make loops, called “endless tapes,” which, when fed through the
tape reader, would generate potentially unending iterative computations.
In practice, of course, mathematical computations come to an end. To stop an
iterated instruction sequence, Mark I’s designers provided a conditional stop order:
when a calculated quantity was sufficiently close to the required value, the sequence
mechanism would be stopped, and the operator could manually adjust the machine
before it carried out the next sequence of instructions.
This simple feature introduces a significant complication into the relationship
between instructions and operations performed. In the case of a telegram, the structure
of the message directly corresponds to the structure of the tape. The characters of the
message are simply the (encoded) characters punched on the tape – the same number

of them and in the same order. Likewise, there is a one-to-one correspondence between
the notes played on a player piano and those punched on its control tape.
With conditional termination, however, the situation is different. Suppose that
100 coded instructions have been punched on a Mark I control loop. As the compu-
tation proceeds, the instructions will be read repeatedly, and the number of times the
tape cycles will vary from computation to computation, depending on the initial
data supplied. By changing the (spatial) topology of the tape from a linear sequence
to a cycle and allowing the machine itself to determine the number of times the tape
is cycled, the length of the (temporal) sequence of operations performed by Mark I
becomes difficult or impossible to predict in advance. What started as an apparently
simple relationship whereby a sequence of instructions was transformed into a
sequence of operations has become rather more complex, as the machine “unwinds”
a “loop” of instructions into a longer sequence of operations. As Herman Goldstine
and John von Neumann explained a few years later, “the relation of the coded
instruction sequence to the mathematically conceived procedure of (numerical)
solution is not a statical one, that of a translation, but highly dynamical” (Goldstine
and von Neumann 1947).
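Goldstine and von Neumann’s point can be made concrete with a toy simulation (ours, not period code): the same two-instruction “endless tape” unwinds into operation sequences of different lengths depending on the initial data.

```python
# Simulate a cyclic control tape with a conditional stop. The tape holds just
# two instructions, punched once, but the number of operations the machine
# actually performs depends on the data supplied, not on the tape's length.

def iterate_until(start, tolerance):
    tape = ["halve", "test"]           # the whole control loop
    x = start
    ops = 0                            # operations actually executed
    pos = 0
    while True:
        instruction = tape[pos % len(tape)]  # the tape cycles endlessly
        pos += 1
        ops += 1
        if instruction == "halve":
            x /= 2
        elif instruction == "test" and x < tolerance:
            return ops                 # the conditional stop ends the run

print(iterate_until(100.0, 1.0))   # prints 14
print(iterate_until(1000.0, 1.0))  # prints 20
```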

8.3  Programming with Multiple Instruction Sequences

It soon became evident that most problems required more than just a single instruc-
tion tape. Many of the examples given in Mark I’s Manual of Operation (Staff of the
Harvard Computation Laboratory 1946) use two tapes: a linear “initial tape” which
set up things like the initial values of parameters and an endless tape which per-
formed the iterative calculations. When the first tape finished, it was removed by a
human operator and the next tape loaded.
In general, control of a computation also involved making sure that the necessary
sequences were selected and performed in the right order. Various common patterns
of execution were recognized, namely, the need to choose between alternative
sequences, the need to perform a single sequence repeatedly, and the need to allow
a sequence to be performed at different points in a computation. As well as the
coded instructions on the tapes, programmers wrote orders for Mark I’s operators.
Both were supposed to leave no room for discretion or ambiguity. The conditional
stop order was used to pause the machine for manually executed conditional
branches, as one of Mark I’s first programmers, Richard Bloch, recalled:
Since orders were not in registers, but only on tape, in order to branch you physically
branched by stopping the machine or causing the machine to stop; whereupon the operator
would twirl the drum that held the tape over to an indicated branch line and proceed from
there. You had instructions that would say ‘if the machine stops here, move it over to the red
line next; if it stops somewhere else, ship it over to the blue line.’ (Bloch 1984)

Such procedures further complicated the relationship between instructions and
operations, as did the practice of “interpolating” orders. Mark I included special-
purpose units to perform multiplication and various other operations. To avoid the

rest of the machine sitting idly by while a multiplication was being carried out, say,
extra instructions could be punched between the three coded instructions that con-
trolled the multiplier. The consequence was that a single sequence of coded instruc-
tions on the tape would give rise to parallel sequences of operations as the machine
read and processed the instructions.
Mark I therefore operated with two quite distinct levels of control. Coded orders
were read from the tape and executed in the order in which they had been punched
while the tapes themselves were managed by the operators. Sequencing was there-
fore only partly automated, but in view of the machine’s speed and intended appli-
cation, the slowdown caused by the human operators was expected to be acceptable.

8.3.1  Program Pulses and Program Controls

The first programmable electronic computer, ENIAC, was expected to calculate
several orders of magnitude faster than Mark I. Its lead designers John Mauchly and
Presper Eckert understood from the beginning that ENIAC would need to automate
both levels of control and be able to switch between different sequences of opera-
tions automatically as well as following an individual sequence. They also knew
that reading instructions from an external medium such as paper tape would be
unacceptably slow, so individual instructions were set up by turning switches on the
many “program controls” distributed around the machine’s units. These instructions
were then linked into sequences by plugging a fixed connection between each
instruction and the next in the sequence in a problem-specific “setup.” At run-time,
the execution of the sequence of instructions was controlled by “program pulses”
that were received by a program control to trigger an operation and then passed on
in a kind of relay race to the next control in the sequence.
There were hierarchies of control within ENIAC, however, that complicated this
simple picture. Some units, notably the multiplier and the divider, made coordinated
use of several accumulators to carry out the sequences of additions and subtractions
necessary to form products and quotients. The number of additions needed to per-
form a multiplication varied as the size of the operands increased (unlike Stibitz’s
complex multiplication, which required exactly the same sequence of real number
operations on every occasion). A progress report described how, when a multiplica-
tion was triggered, “accumulators will be automatically programmed to receive the
multiplier and the multiplicand” (Anonymous 1944, p. IV-10). This was an extension
of a lower-level usage in which the “program circuits” within a unit controlled
the operation of its arithmetic circuits, setting and unsetting flip-flops and
controlling other simple electronic circuits.
The system of program pulses moving between units offered a way for ENIAC
to sequence operations at electronic speed, without the huge slowdown that would
be caused by waiting for the next instruction to be read from the tape. As the team
developed it further, it also offered a way to shift automatically from one sequence

[Fig. 8.1 diagram omitted: two master programmer stepper configurations, one labelled “stepper B (2), (no counter)”.]

Fig. 8.1  Douglas Hartree included this master programmer diagram in a published paper describ-
ing a computation performed on ENIAC (Cope and Hartree 1948). It describes both the configura-
tion of the master programmer and the overall structure of the computation (Source: W. F. Cope
and Douglas R.  Hartree, “The Laminar Boundary Layer in Compressible Flow,” Philosophical
Transactions of the Royal Society of London, Series A, Mathematical and Physical Sciences 241,
no. 827 (1948): 1–69; reproduced with permission of the Royal Society)

to another. They arrived at a definitive model early in 1944 after Adele Goldstine
and Arthur Burks had made detailed plans for a ballistic trajectory calculation.3 The
problem was broken down into four sequences  – initialization, the operations to
carry out a single integration step, printing results, and hardware checking – which
needed to be repeated in a relatively complex pattern involving two nested loops.
Coordinating these sequences was principally the responsibility of a dedicated
unit, the “master programmer.” Combining counters with devices called steppers
that allowed control to branch to one of up to six basic sequences, this unit allowed
complex nested sequences of sequences to be set up, sequences to be repeated a
fixed number of times, conditional branching either to choose between two
sequences or to terminate a loop, and control to be transferred to different places
after successive invocations of a “subsidiary” sequence.
To master this complexity, the ENIAC team also developed a graphical notation
(Fig. 8.1) for depicting the sequence programming involved in a particular
computation. These “master programmer diagrams” black-boxed the basic instruction
sequences and showed how the steppers controlled the repetition and time-varying
invocation of sequences.

3 In Haigh et al. (2016), we attributed this work to Burks, but subsequent archival research has
revealed the importance of Goldstine’s contribution.

8.3.2  Multiple Tapes

The creators of the tape-controlled computers soon realized the usefulness of letting
their machines switch automatically between control sequences. An initial step was
to add more tape readers, so that several sequences could be mounted at once, and
to provide the machine with instructions to shift control between them automati-
cally. Harvard’s operating procedures had already made the operators’ work as
mechanical as possible. The Harvard Mark II, designed in 1945 after only a few
months of experience of using Mark I, allowed up to four sequence units to be used
(Staff of the Computation Laboratory 1949). Similar proposals had been made for
the Bell Labs relay computer later known as “Model V.” In March 1944, Samuel
Williams proposed that the description of a problem would be split between a “rou-
tine tape” describing the operations to be performed and a “problem tape” contain-
ing “information necessary for the solving of the particular problem for which the
tape has been prepared” (Williams 1944). In August, following discussions with
Williams and Stibitz, von Neumann reported to Robert Oppenheimer that the
machine would also use “auxiliary routine tapes […] used for frequently recurring
subcycles,” with separate readers for the different tapes (von Neumann 1944).
In 1947 a “subsidiary sequence unit” was added to the Harvard Mark I. Similar
in intent to the Mark II proposals, this unit allowed 10 sequences of around 50
instructions each to be set up. The instructions comprising the subsequences were
not punched on tape but “wired with plug wires, the codes of each line were wired
with a series of plug holes for a particular line.” The sequencing of these instruc-
tions was carried out by stepping switches (Bloch 1984).
Mark II’s four sequence units were divided, with units 1 and 2 being on the
machine’s left side and units 3 and 4 on the right.4 Having more than one sequence
unit enabled the dynamic transfer of control between two instruction sequences.
The Harvard team conceptualized this as a hierarchical relationship, though the
hardware itself could have supported more flexible relationships:
Should the method of solution involve a repeated sub-routine, it is advantageous to employ
the sequence units in a dominant and subsidiary relationship. One unit—the dominant—ini-
tiates the computing routine and then calls in the other—the subsidiary. The latter executes
the repeated portions of the routine as many times as required and then calls back the domi-
nant to complete the routine. Thus the repeated sub-routine need be coded but once, permit-
ting short control tapes to be used. (Staff of the Computation Laboratory, 267)

The orders that transferred control between units could be read at any unit. Perhaps
for this reason, instructions did not use the numeric identifiers of the units, but
defined them relatively. Thus, operation code 67 had the meaning “switch to the
other unit on the same side,” while operation codes 71 and 72 switched control to
one or other of the units on the opposite side of the machine. Operation code 70, a

4 Mark II could be operated as one large machine or split to allow independent problems to be
computed simultaneously on its two “sides.”
144 M. Priestley and T. Haigh

conditional branch, transferred control to the other sequence unit if the value held in
a particular register was positive.
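The relative addressing of Mark II's transfer orders can be sketched as a small lookup. The code below follows the text's description of codes 67, 70, 71, and 72; which of 71 and 72 selects which opposite-side unit is an assumption, as is the reading of code 70 as a conditional switch to the same-side partner.

```python
# Model of Mark II's relative sequence-unit switching. Units 1 and 2 sit
# on the machine's left side, units 3 and 4 on the right. Code 67 switches
# to the other unit on the same side; 71 and 72 switch to one or other
# unit on the opposite side (the 71/72-to-unit mapping here is assumed);
# 70 conditionally switches, taken here to target the same-side partner.

SIDES = {1: "left", 2: "left", 3: "right", 4: "right"}

def next_unit(current, opcode, register_positive=False):
    same_side = {1: 2, 2: 1, 3: 4, 4: 3}
    opposite = {"left": (3, 4), "right": (1, 2)}[SIDES[current]]
    if opcode == 67:
        return same_side[current]          # other unit, same side
    if opcode == 71:
        return opposite[0]                 # first unit, opposite side
    if opcode == 72:
        return opposite[1]                 # second unit, opposite side
    if opcode == 70:                       # conditional transfer
        return same_side[current] if register_positive else current
    raise ValueError("not a transfer order")

print(next_unit(1, 67))   # unit 2
print(next_unit(2, 71))   # unit 3
```

Because the codes are relative, the same control tape could run unchanged in any of the four units, which may be why the designers avoided absolute unit numbers.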
The multiple tape readers of the Mark II and the Bell machine played the same
role as ENIAC’s master programmer, allowing the basic sequences of instructions
to be automatically invoked and repeated according to the needs of a specific prob-
lem. We have, however, found no evidence indicating that either approach was
directly influenced by the other.

8.4  Sequential Electronic Memory

Under pressure to build a highly innovative machine on a wartime contract, the
ENIAC team made two significant design decisions. Reasoning that computation at
electronic speed required similarly fast access to the numbers being operated on,
they built ENIAC’s rewritable store out of the only available electronic technology:
vacuum tubes. The costs of this approach meant that the store held only 200 decimal
digits split between 20 accumulators which, like Mark I’s counters, did not simply
store numbers but also implemented addition. Small though it was, this storage
capacity enabled ENIAC to solve the ballistic equations thought likely to form the
bulk of its workload. Secondly, as described above, its idiosyncratic programming
system was designed to get around the fact that reading instructions from paper tape
would be too slow.
In mid-1944, the team began to make plans for a new machine, soon code-named
EDVAC. They specifically wanted to address ENIAC’s small store and awkward
setup process. Von Neumann helped focus the proposal by identifying nonlinear
partial differential equations as a key application, important not only to the
Manhattan project but also to BRL’s ongoing research programs. However, the
numerical material required for the solution of these equations could not be eco-
nomically stored in a vacuum tube memory. The first requirement for the new proj-
ect was therefore to identify a fast storage device capable of holding large amounts
of numerical data.

8.4.1  The Invention of Delay Line Memory

Since 1942, Moore School staff, principally Eckert and T. Kite Sharpless, had been
working on an NDRC contract with MIT’s Radiation Laboratory on various aspects
of radar systems (Eckert and Sharpless 1945). They came across the liquid delay
line developed by Bell Labs’ William Shockley. Intended as a timing device for use
in a range detection system, this promising idea had practical limitations, such as its
weight. By June 1943, the Moore School team had developed a prototype delay line
using mercury instead of the water and ethylene glycol mix used by Shockley. The
Moore School device was demonstrated to Rad Lab staff, but work on the MIT
contract ceased shortly afterward because the start of the ENIAC project in May
diverted the efforts of the Moore School staff. Details of the mercury line were
handed over to MIT, who continued to develop it for range detection and the
“removal of ground clutter” in radar systems, applications which required the line
to delay the pulses traveling through it for a precisely specified period of time.
In the spring of 1944, Eckert returned to the mercury line and constructed a pro-
totype which could be used as a storage device by adding simple circuitry to read
the pulses emerging from the end of the line, reform them, and reinsert them at the
other end of the tank for another traverse. This process of regeneration and recircu-
lation turned the mercury line from a device which simply delayed a train of pulses
into one which could preserve them indefinitely. Thus reconfigured, the mercury
delay line seemed to hold out the promise of being a relatively cheap way to store
and quickly retrieve large amounts of data.5
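The regeneration-and-recirculation idea can be pictured with a toy model: pulses emerging from the output end are re-formed and reinserted at the input, so the stored pattern circulates indefinitely. The 8-pulse line length below is arbitrary, chosen only for readability.

```python
from collections import deque

# Toy model of a recirculating mercury delay line. Each tick, the oldest
# pulse emerges at the output; a regenerated copy is fed back into the
# input end, turning a mere delay into indefinite storage.

class DelayLine:
    def __init__(self, pulses):
        self.line = deque(pulses)

    def tick(self):
        """One pulse time: read the emerging pulse, recirculate it."""
        bit = self.line.popleft()
        self.line.append(bit)      # regeneration and recirculation
        return bit

line = DelayLine([1, 0, 1, 1, 0, 0, 0, 1])
out = [line.tick() for _ in range(16)]
print(out)  # the 8-bit pattern read out twice: storage, not mere delay
```

Without the `append` step the pattern would simply drain out of the line once, which is the original timing-device behavior.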

8.4.2  The Unification of Memory

In August 1944, Eckert and Mauchly circulated a description of the invention (which
they hoped to patent) to the ENIAC team explaining how delay lines or, more gener-
ally, “transmission line registers” might be used in a computer:
The transmission line register is easily adapted to the storage of large amounts of numerical
information […] A number of such registers may be employed as components in a comput-
ing machine. They may be used to receive from and read back into devices which do the
actual computing. The pulses stored in the registers need not represent actual numbers, but
may represent operations to be performed. (Such operations, or the code which represents
them, may of course be interpreted as “code numbers” for the operations.) The pulses from
a transmission line register may, for instance, be fed into an electronic stepping switch so as
to operate a chosen circuit at a given time. (Eckert and Mauchly 1944, p. 5)

By providing a large, fast, rewritable store, delay lines could store coded instruc-
tions, already used on the Harvard and Bell Labs machines, and supply them to
EDVAC at electronic speed. Delay lines therefore held out the promise of address-
ing both of ENIAC’s perceived shortcomings.
Beginning in the autumn of 1944, the EDVAC project progressed on the assump-
tion that the machine would be built around a delay line memory. Unlike ENIAC’s
accumulators and Mark I’s counters, these would simply store numbers which
would be passed to separate devices to “do the actual computing.” As Samuel
Williams had in his March proposals for the new Bell Labs machine (Williams
1944), which had been communicated to the Moore School group, Eckert and
Mauchly separated the two functions of storage and computation. The Moore
School group went further in suggesting that a single medium could store the differ-
ent kinds of information held by a fast electronic machine. This was a radical depar-
ture from machines that read numbers from electromechanical or electronic

5 This paragraph is based on the account given in Burks (n.d.).

counters, program instructions from tapes, and tabulated  functions from tapes or
resistor matrices.
Von Neumann joined the group as a consultant principally responsible for work-
ing on “logical control.” In practice, this meant the question of how the machine’s
structure could be represented to the programmer and how a set of instructions
could be designed to allow it to be efficiently used. His first systematic account of
the issues was contained in the First Draft of a Report on the EDVAC (von Neumann
1945), the manuscript of which he sent to the group toward the end of April 1945.

8.4.3  Memory in the First Draft

At the beginning of the First Draft, von Neumann enumerated various things
EDVAC would have to “remember”: the instructions governing the course of the
computation, intermediate numerical results, tabular representations of functions,
and the numerical parameters of a problem (von Neumann 1945, pp.  4–6).
Generalizing the suggestion made in Eckert and Mauchly’s description of delay line
storage, von Neumann proposed a single functional component to hold all these
disparate kinds of information. He called this the computer’s “memory.” Most of the
First Draft reflects the team’s commitment to delay lines, but von Neumann also
briefly discussed the use of iconoscopes as an alternative memory technology, as we
discuss in Sect. 8.5.
The First Draft is notable for its abstract approach. Putting aside the details of
pulses traveling through mercury, von Neumann described the basic “unit” of mem-
ory as simply the ability to “retain the value of one binary digit.” After deciding that
numbers could be stored in 32 bits, von Neumann commented that it was “advisable
to subdivide the entire memory […] into groups of 32 units, to be called minor
cycles” (von Neumann 1945, p. 58). He called the contents of minor cycles, whether
coded instructions or numbers, “code words,” a phrase soon shortened simply to
“words.”
Memory is useless unless the stored information can be easily located. Existing
machines did this in a variety of ways. The counters and registers of the tape-­
controlled machines had index numbers which appeared in instructions and con-
trolled switches temporarily connecting the specified storage device to other units
of the machine. Instructions, however, were sequenced by the physical properties of
the medium storing them, being punched in consecutive tape positions or, as on
ENIAC, physically linked by cabling. Tabulated function values were often retrieved
by a linear search: Mark I’s function tapes stored alternating arguments and function
values and to look up a tabulated value a dedicated unit “hunted” for a supplied
argument and then read the following value.6 In contrast, ENIAC used two-digit
function arguments to index its portable function tables. Finally, several machines

6 The tapes of Turing’s abstract machines of 1936 were accessed in a similar way: temporary marks
were left in squares adjacent to the squares holding data of interest and later “hunted” for.

provided read-only storage devices to hold numerical parameters. Typically, like
Mark I’s registers and the rows on ENIAC’s constant transmitter, these were indexed
and accessed in a similar way to the rewritable devices holding intermediate results.
Von Neumann defined a unified indexing system for EDVAC’s delay line mem-
ory while at the same time recognizing the benefits of sequential access. The 32 bits
making up a number were not individually indexed, and their meaning was deter-
mined by the fact that they were stored in a contiguous sequence. He further noted
that “it is usually convenient that the minor cycles expressing the successive steps in
a sequence of logical instructions should follow each other automatically” (von
Neumann 1945, p. 76).
EDVAC’s memory, as described in the First Draft, consisted of a battery of delay
lines, each holding 32 minor cycles indexed by ordered pairs (x, y), where x identi-
fied a delay line and y a minor cycle within a line.7 The two components of the index
signify in very different ways, however. A value of x denotes a physical delay line
which could be selected by means of switching in the familiar way. As the bits in the
delay line recirculated, however, the minor cycle y would only be available for read-
ing at one specific time within the line’s overall cycle. The values of y therefore
denote not fixed regions of space from which data could be copied, but periods of
time during which it was available. As von Neumann put it:
the DLA organs [delay line with amplifier] are definite physical objects, hence their enu-
meration offers no difficulties. The minor cycles in a given DLA organ, on the other hand,
are merely moving loci, at which certain combinations of 32 possible stimuli may be
located. Alternatively, looking at the situation at the output end of the DLA organ, a minor
cycle is a sequence of 32 periods τ, this sequence being considered to be periodically
returning after every 1,024 periods τ. One might say that a minor cycle is a 32τ ‘hour’ of a
1,024τ ‘day,’ the ‘day’ thus having 32 ‘hours.’ (von Neumann 1945, p. 89)

In his proposals for the ACE, Turing (1946) followed von Neumann’s proposals for
a delay line memory and simply described the two components x and y of the index
as the “delay line” and the “time” at which the desired minor cycle would be avail-
able. Von Neumann, however, struggled to find an intuitive way of describing the
situation. He initially used purely spatial language: writing to Herman Goldstine in
February, he described the “integers x, y which denote positions in the memory”
rather vividly as “house numbers,” before crossing the phrase out and replacing it by
“filing numbers” (von Neumann 1945). It seems very odd now to think of numbers
living on a street, but that’s the idea that has been naturalized in phrases like
“memory address” (just as it today seems strange to think of a computer as a brain
but natural to think of it having a memory).
By April, however, when the First Draft was written, von Neumann had moved
away from spatial descriptions of minor cycles, a poor fit with the delay lines’
dynamic properties, to the use of the temporal metaphors of “hours” and “days.”
Numbers were not stored in a fixed and very concrete physical device, like Mark I’s

7 These ordered pairs were written, problematically from a typist’s point of view, μρ in the First
Draft.

counters, but in “moving loci,” abstract regions of space containing waveforms in a
largely static medium. A locus may even consist of two non-contiguous regions, for
example, when a word is being transferred bit by bit from one end of a tank to the
other. Just as midnight is used as the starting point to number the hours in a day, the
minor cycles were numbered from an arbitrarily chosen point in the delay lines’
timing cycle.
English offers more ways of describing spatial divisions than temporal ones. An
alternative temporal metaphor was offered by Haskell Curry (1945), an early reader
of the First Draft who noted that there was “no generally accepted term for the fun-
damental unit of time.” He suggested using “beat”  – “the accepted term for the
fundamental time unit in music” – for the time taken for one pulse to emerge from
a delay line and considered carrying the metaphor further by referring to minor
cycles as “measures” or “bars” before concluding that von Neumann’s terminology
of minor cycles was “just as good.”
In an unpublished manuscript written shortly after the First Draft, von Neumann
gave a more precise mathematical characterization of how temporal indexes would
work (see Priestley 2018). A clock would keep a count as bits emerged from the
delay lines, and from this the index of the word currently emerging from a line could
easily be computed. Turing’s ACE report described a similar scheme in somewhat
more detail (Turing 1946).
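The clock-based scheme amounts to a short calculation: from a running pulse count, compute which minor cycle is currently emerging from a line. The sketch below assumes the First Draft's figures of 32-bit minor cycles and 32 minor cycles per line (a 1,024-period "day"); the function names are ours, not von Neumann's or Turing's.

```python
BITS_PER_WORD = 32                      # one minor cycle (First Draft figure)
WORDS_PER_LINE = 32                     # minor cycles per delay line
DAY = BITS_PER_WORD * WORDS_PER_LINE    # 1,024 periods, the full "day"

def emerging_minor_cycle(t):
    """Index y of the minor cycle emerging at pulse count t, numbered
    from an arbitrarily chosen zero point in the timing cycle."""
    return (t % DAY) // BITS_PER_WORD

def wait_for(y, t):
    """Pulse periods to wait, from count t, until minor cycle y next
    begins to emerge at the output of a line."""
    return (y * BITS_PER_WORD - t) % DAY

print(emerging_minor_cycle(40))    # 1: the second "hour" of the "day"
print(wait_for(3, 40))             # 56 periods until "hour" 3 begins
```

In von Neumann's metaphor, `emerging_minor_cycle` simply reads the "hour" off the clock; `wait_for` measures how long until a given "hour" comes round again.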

8.4.4  Coding in the First Draft

In the First Draft, von Neumann described a new approach to automatic control
which we call the “modern code paradigm.” We have previously (Haigh et al. 2014)
summarized its key aspects as the following:
1. The program is executed completely automatically.
2. The program is written as a single sequence of instructions, known as “orders”
in the First Draft, which are stored in numbered memory locations along with
the data.
3. Each instruction within the program specifies one of a set of atomic operations
made available to the programmer.
4. The program’s instructions are usually executed in a predetermined sequence.
5. However, a program can instruct the computer to depart from this ordinary
sequence and jump to a different point in the program.
6. The address on which an instruction acts can change during the course of the
program’s execution.
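The six points can be made concrete with a toy stored-program interpreter. The sketch below is entirely illustrative: its four-operation instruction set (ADD, INCA, DEC, JNZ) is invented for the example and bears no relation to the First Draft's actual order code.

```python
# Toy stored-program interpreter. Instructions and data share one numbered
# memory (point 2); execution is fully automatic (1); each instruction
# names one atomic operation (3); execution is normally sequential (4)
# but can jump (5); and an instruction's address field can be rewritten
# during the run (6). The operation set is invented for illustration.

def run(memory):
    pc = 0
    while True:
        op, arg = memory[pc]              # fetch from a numbered location
        if op == "HALT":
            return memory
        elif op == "ADD":                 # accumulator (cell 9) += mem[arg]
            memory[9] += memory[arg]
        elif op == "INCA":                # rewrite the address field of the
            o, a = memory[arg]            # instruction at cell arg (point 6)
            memory[arg] = (o, a + 1)
        elif op == "DEC":
            memory[arg] -= 1
        elif op == "JNZ":                 # conditional transfer (point 5)
            cell, target = arg
            if memory[cell] != 0:
                pc = target
                continue
        pc += 1                           # default sequential order (point 4)

memory = [
    ("ADD", 5),          # 0: add data; address field grows each pass
    ("INCA", 0),         # 1: self-modification of instruction 0
    ("DEC", 8),          # 2: count down the loop counter
    ("JNZ", (8, 0)),     # 3: repeat while the counter is nonzero
    ("HALT", None),      # 4
    10, 20, 30,          # 5-7: data, stored alongside the code (point 2)
    3,                   # 8: loop counter
    0,                   # 9: accumulator
]
print(run(memory)[9])    # 60: the three data words summed
```

Point 6 is the crucial novelty here: the loop works only because instruction 0's address field is itself overwritten on each pass, something no tape-controlled machine could do.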
This combined the fully automatic control of ENIAC (points 1 and 5) with the
ordered sequence of coded instructions found on the tape-controlled machines
(points 2–4). Exploiting the fact that instructions stored in EDVAC’s unified mem-
ory were themselves indexed, switching to a different instruction sequence required
nothing more than (in the language of the First Draft) “connecting” the central
control organ to a different minor cycle by executing a transfer instruction specify-
ing the address from which the next instruction should be read. This was simpler
and more efficient than adding additional tape readers to hold routine tapes or wir-
ing up a network of program controls to represent different execution pathways as
on ENIAC.
One of von Neumann’s motivations for insisting that the memory had a default
sequential ordering was to allow the machine’s control to read an instruction
sequence in a simple way (point 4). Tape-controlled computers moved naturally
from one instruction to the next as the tape was moved physically through its reader.
On EDVAC, instructions to be executed successively would be stored in consecutive
minor cycles, and the address of the next instruction would only have to be explic-
itly specified in the special case of a transfer order (point 5). As von Neumann
(1945, p. 86) put it, “as a normal routine CC should obey the orders in the temporal
sequence, in which they naturally appear at the output of the DLA organ to which
CC is connected.” This wording suggests that operations would be carried out at the
same rate as instructions appeared and hence that a computation could progress by
simply taking the instruction emerging from a delay line once the preceding opera-
tion was completed.
However, there is a significant difference between tape readers and delay lines.
Mark I’s instruction tape reader only advanced when the machine’s control recog-
nized that the preceding operation was complete, but EDVAC’s instructions would
emerge from the delay lines at a fixed rate. Von Neumann’s preference for sequen-
tial storage of instructions may have been a reflection of the natural properties of
instructions punched on paper tape, but its implementation could not rely solely on
the physical properties of the delay lines. Even if an operation could be carried out
as the next instruction was being read (which would have required buffering not
explicitly described in the First Draft), this would mean that all operations had to be
carried out in the time of one minor cycle. Von Neumann knew very well that this
was not the case, estimating, for example, that multiplication would take around 30
minor cycles. There were only a few places in the code where the delay lines’ tem-
poral properties coincided with the operation being performed, for example, in an
instruction to copy a number from the arithmetic unit to the minor cycle immedi-
ately following the instruction.
Orders should not be obeyed in “the temporal sequence in which they naturally
appear” at the output of a delay line after the completion of the previous instruction,
then, but rather in a logical sequence defined by their addresses. Most machines
patterned after EDVAC include a program counter, a dedicated memory location
holding the address of the instruction currently being executed. This is incremented
automatically but can be manipulated to produce a jump. In the First Draft, von
Neumann had not yet thought this through. He specified that the bus joining the
“memory organ” to the “central arithmetic organ” should be “connected” to a spe-
cific location by modifying a number stored in the “central control organ” (in later
computers this would be called the address register). Whenever an instruction trans-
ferred data from memory to the arithmetic organ, the address register would be
overwritten, with the result that EDVAC would lose its place in the program.
Realizing the need to avoid this, von Neumann specified that during such “transient
transfers” the address should be “remembered” (he did not say where) and restored
before the next instruction was read. Like a dedicated program counter, that scheme
would consume one extra storage location. Unlike a dedicated program counter, the
scheme would waste time shuffling numbers in and out of the address register. In
contrast, Turing’s ACE report goes into considerable detail about how a short delay
line CD (for “current data”) would accomplish this task (Turing 1946).
Unfortunately, executing orders in the default sequence defined by their addresses
causes substantial inefficiency. In most cases, the minor cycle holding the next
instruction will not be the next to appear at the end of the delay line, and EDVAC’s
control would have to wait for the instruction to arrive. The delay would depend on
the time taken to execute the preceding operation. Von Neumann was aware of this
to some extent: the length of the delay lines (32 minor cycles) was chosen to mini-
mize the delay in waiting for instructions that follow multiplications.
Perhaps this reflected the well-known observation that multiplication time domi-
nated most computations, but in fact the delay incurred by operations which took
only a handful of minor cycles to complete would be significantly longer than the
statistically expected delay of half the cycle time of the delay line.
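This mismatch can be quantified under a deliberately simplified timing model (ours, not von Neumann's): if an operation completes k minor cycles after its instruction emerged, the next sequential instruction sits one minor cycle further along a 32-cycle line and so emerges after a wait of (1 − k) mod 32 minor cycles.

```python
LINE = 32   # minor cycles per delay line (First Draft figure)

def wait_cycles(op_cycles, spacing=1):
    """Minor cycles spent waiting for the next instruction, under a
    simplified model: the operation completes op_cycles after its
    instruction emerged, and the next instruction is stored `spacing`
    minor cycles further along the line."""
    return (spacing - op_cycles) % LINE

# A ~30-cycle multiplication meshes well with a 32-cycle line:
print(wait_cycles(30))   # 3 cycles of waiting
# A fast 2-cycle operation waits nearly a full revolution, far longer
# than the "average" delay of half a line (16 cycles):
print(wait_cycles(2))    # 31 cycles of waiting
```

The `spacing` parameter also lets one check how storing consecutive instructions further apart trades the two cases against each other.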

8.4.5  Short Delay Lines

One approach to reduce the inefficiency inherent in the naïve use of delay lines was
to make critical data immediately available by moving it into a separate “fast” store.
In principle, this could use any suitable medium, but the second iteration of the
EDVAC design proposed the use of short delay lines. Storing only one minor cycle,
these would be economical to construct, and their timing properties would fit well
with the rest of the machine. As Eckert and Mauchly (1945) described in a progress
report in September 1945, EDVAC’s memory would now contain a mixture of the
original long lines and the new short lines. Similarly, the memory described by
Turing (1946) for the ACE consisted of a mixture of long and short lines.
This development made the code more complex. In the summer of 1945, before
coding an example merge routine, von Neumann developed a new instruction set
containing a range of instructions for moving data between long and short lines
(Priestley 2018). New instructions allowed sequences of words, rather than indi-
vidual words, to be moved in one operation. The short lines were used for a variety
of purposes: to hold the variables manipulated by the code, as a place to construct
new instructions for immediate execution, and as temporary storage to hold data
being moved from one location in the long lines to another.
The experience of detailed planning for the use of a delay line machine therefore
led to a substantial modification in both machine design and programming tech-
nique and substantially complicated the intentionally simple and abstract design
proposed in the First Draft.

8.4.6  Optimum Coding and the Pilot ACE

An alternative, and complementary, way to reduce the inefficiencies arising from
storing instructions in delay lines was to take into account the expected time taken
by each operation and organize the code so that the instructions and numbers did in
fact appear at the end of delay lines exactly when required. Von Neumann attempted
this in a limited way in his merge routine by intercalating blank instructions into
the code at various points but seems to have misunderstood the temporal interaction
between the delay lines and the central control (Knuth 1970).
The general approach became known as “optimum coding” and was embraced
more fundamentally in the ACE project. In his original report, Turing (1946) pro-
posed that consecutive program instructions be stored in alternate minor cycles. On
the assumption that many of the machine’s operations, such as addition or the trans-
fer of a number, could be completed within one minor cycle, this would signifi-
cantly reduce the waiting time.
The designers of the Pilot ACE, a simplified pilot version constructed a few years
later, went further. Its minimalist architecture was optimized for speed, producing a
small machine that could outpace much more expensive computers built on the
model of EDVAC. To eliminate instruction-decoding hardware and boost perfor-
mance, it eschewed conventional operation codes, instead treating all instructions
as specifying a transfer between a “source” and a “destination.” Sources and desti-
nations could represent delay lines for storage, circuits that performed arithmetic or
logical operations, or even pervasively useful constants such as 0 and 1 (Campbell-­
Kelly 1981). This mechanism did not allow individual minor cycles within delay
lines to be specified: transferring a number from one line to another would simply
copy it between the minor cycles appearing at the end of the lines when the instruc-
tion was executed. This eliminated any waiting time but placed a large burden on
programmers who had to consider the execution time of operations explicitly and to
track exactly what would be emerging from each delay line at each moment during
the execution of a program.
This went for instructions as well: the time at which an instruction arrived at the
end of a delay line had to be coordinated with the arrival of the data it was manipu-
lating. Instructions were not executed in a default sequence but were carefully scat-
tered throughout the memory, the aim being to “eliminate all the waiting time
associated with fetching instructions” (Campbell-Kelly 1981, p. 140). Each instruc-
tion nominated its successor by specifying a spatial index to identify a delay line and
a temporal index to pick out a specific minor cycle. The temporality of the delay line
was treated very differently from the First Draft, however, which had imposed what
we would now call addressability on the delay lines, effectively tagging each piece
of data with a location number that accompanied it as it moved through the delay
line. This let programmers completely ignore the actual temporality of the delay
lines by writing code as if the memory consisted of fixed locations. In contrast, the
Pilot ACE specified the position of the next instruction as a “wait” of so many
minor cycles from the current position. If, as von Neumann had suggested, tempo-
rality could be understood via clock-based metaphors, an EDVAC programmer
executing a jump would say “carry out the instruction that emerges from line 7 at 3
o’clock,” while a Pilot ACE programmer would say, in each instruction, something
like “carry out the instruction that emerges from line 7 in 10 minutes time.” While
the First Draft fixed time indexes to an agreed starting point, the Pilot ACE expressed
everything relative to the current time.
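The contrast between the two addressing styles can be sketched directly. The line length and example values below are illustrative only, and the function names are ours.

```python
WORDS_PER_LINE = 32   # minor cycles per long line (an assumed figure)

# EDVAC-style absolute index: the jump names a fixed "hour" in the
# line's timing cycle, regardless of when the jump executes.
def next_position_absolute(target_cycle, now):
    return target_cycle % WORDS_PER_LINE

# Pilot ACE-style relative index: the instruction says "wait this many
# minor cycles from now", so the position reached depends on when the
# instruction happens to execute.
def next_position_relative(wait, now):
    return (now + wait) % WORDS_PER_LINE

# The same order executed at two different moments:
print(next_position_absolute(3, now=5), next_position_absolute(3, now=20))
print(next_position_relative(10, now=5), next_position_relative(10, now=20))
```

The absolute scheme gives the same answer at both moments; the relative scheme gives different ones, which is exactly the burden (and the opportunity) that optimum coding placed on Pilot ACE programmers.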
The Pilot ACE approach was used for one of the most successful early British
commercial machines, the English Electric DEUCE.  However, most delay line
machines, such as the EDSAC and the commercial Univac I, followed the EDVAC
model and accepted complexity and suboptimal memory performance as the price
to pay for ease of programming. Programmers could place data for optimum
retrieval, if they chose, but instructions were fetched from sequential locations and
so could not be optimally positioned. Some computers, including EDVAC as built,
retained the addressable memory of the First Draft but added an additional address
field to each location specifying the next instruction to be executed. This permitted,
but did not mandate, the use of optimum coding for instruction placement.8

8.5  Random-Access Memory

The other potential memory technology mentioned in the First Draft was the icono-
scope. Developed in the 1920s by RCA as a component for television cameras,
iconoscopes were modeled on the retina. A light-sensitive electrostatic screen was
imagined as a matrix of tiny capacitors each holding a charge proportional to the
intensity of the light falling on the screen at that point. The matrix was scanned line
by line with an electron beam which converted the charge at each point into a
sequence of pulses that could be transmitted to a remote location where an image
would be reconstructed. Von Neumann imagined that a usable memory device could
be constructed by placing charges on the screen with a second electron beam rather
than by light.9
He noted that “in its developed form” an iconoscope could remember the state of
200,000 separate points, giving a single tube a storage capacity similar to EDVAC’s
array of 256 delay lines. The iconoscopes used in television scanned the screen in a
fixed linear order, line by line, and therefore accessed memory units in a default
temporal sequence which could be exploited, as with the delay lines, to store the bits
in a minor cycle next to each other and instruction sequences in consecutive minor

8 It became common for machines using delay lines or magnetic drums as their primary memory to
include an additional address to code the location of the next instruction, thus allowing optimum
coding. However, that was not the only motivation. EDVAC’s designer (Lubkin 1947) justified the
additional address by saying it would make programs easier to change, not as a way to improve
operational efficiency.
9 See the detailed discussion in von Neumann (1945, pp. 73–78). It is a curiosity that at this point
von Neumann seems to be envisaging a “memory organ” based on the structure of the vertebrate
retina.

cycles. However, the electron beam reading the charges could also be rapidly
switched to any point on the screen, meaning that arbitrary transfers could be carried
out without the delay caused by waiting for the desired minor cycle to emerge from
a delay line. Iconoscopes seemed to be an ideal memory technology, providing “a
basic temporal sequence of memory units” but also the ability to “break away from
this sequence in exceptional cases” without penalty (von Neumann 1945, p. 77).
Memory devices based on iconoscopes never became a reality. When the IAS
computer project began, in November 1945, it was assumed that the memory would
use delay lines, as in the latest version of the EDVAC proposals (IAS 1945). At the
beginning of 1946, however, RCA (who was one of the project partners) proposed a
novel type of storage tube, the Selectron (Rajchman 1946). These tubes would have
the crucial property of being able to switch to read any point without delay, and
design progressed on the assumption that it would be feasible to develop tubes with
a capacity of 4,096 bits within the time frame of the project (Burks et al. 1946, p. 9).
The modest capacity of the proposed Selectron meant that to have a sufficiently
large memory, the IAS machine would require an array of tubes. However, the team
chose to organize this memory in a novel way. The bits in a minor cycle would no
longer be stored contiguously and read in accordance with the default behavior of
the storage device. Rather, each bit would be stored in a different Selectron. Reading
a word would involve reading one bit from each Selectron, the bits being located at
corresponding positions in the tubes. The bits in a word would no longer be read
serially but in parallel. The team believed this would be faster and require simpler
switching circuits.
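The organization the team proposed can be illustrated with a small modern sketch (illustrative only; the 40-bit word length and the 256-position tube capacity are taken from the surrounding discussion, while the function names and data layout are ours): bit i of every word lives at the same position in tube i, so reading an address touches one position in each of the 40 tubes simultaneously, rather than 40 consecutive positions in a single device.

```python
# Toy model of bit-parallel storage: a word's bits are spread across
# N_TUBES storage tubes, one bit per tube, all at the same position.
N_TUBES = 40          # one tube per bit of a 40-bit IAS word
CAPACITY = 256        # positions per tube (the final Selectron's capacity)

# Each "tube" is modeled as a list of CAPACITY bits.
tubes = [[0] * CAPACITY for _ in range(N_TUBES)]

def write_word(address, value):
    """Distribute the bits of a 40-bit value across the tubes."""
    for bit in range(N_TUBES):
        tubes[bit][address] = (value >> bit) & 1

def read_word(address):
    """Read one bit from each tube at the same position, 'in parallel'."""
    return sum(tubes[bit][address] << bit for bit in range(N_TUBES))

write_word(17, 0b1011)
assert read_word(17) == 0b1011
```

The sketch also makes the point about switching visible: each tube needs only a single 256-way selection, instead of circuitry to route 40 consecutive bits out of one device.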
While it seemed very promising, the Selectron tube turned out to be very hard to
produce and the final version held only 256 bits. This was used in only one com-
puter, the RAND Corporation’s version of the IAS machine known as the
JOHNNIAC (Ware 2008). The IAS machine itself was eventually constructed using
an alternative technology developed at the University of Manchester and known as
the “Williams tube” after its inventor. Rather than trying to develop a completely
new type of tube, Williams drew on his wartime experience of radar and built a
functionally similar device based on standard cathode ray tubes.
In June 1946, von Neumann and his collaborators produced what they termed a
Preliminary Discussion of the design of the IAS machine and its code (Burks et al.
1946). In fact, the code changed only in details thereafter, and for many computer
builders, this document, rather than the First Draft, was the seminal reference on
computer design. Rather than the two-dimensional spatial and temporal indexes
used to identify minor cycles in the First Draft, the IAS machine’s memory was
conceptualized as a simple sequence of words indexed by a single integer. The
instruction set was much simpler than the one von Neumann had proposed for the
mid-1945 version of the EDVAC with short delay lines, and its basic capabilities
were stripped back almost to the level of the First Draft code. The major difference
arose from the different organization of the arithmetic unit and the provision of a
reasonably extensive range of arithmetic orders, presumably intended to ease cod-
ing of the machine’s core applications.
154 M. Priestley and T. Haigh

The Preliminary Discussion, then, conceptualized memory in purely spatial
terms. This was a considerable departure from the First Draft with its mixture of
spatial and abstract temporal coordinates. With von Neumann’s embrace of tube
storage and the assumption that any word could be accessed with equal efficiency,
the temporal aspect of the memory so prominent in delay line storage dropped out
of consideration altogether, and the memory was imagined to be a consecutive array
of words accessed by a single index, or address, which it was natural to imagine in
spatial terms. There was no need for programmers to manage the complexities of
distributing data between long and fast delay lines or to grapple with optimum coding.
Later readers, ourselves initially included, have therefore tended to read the
First Draft as if its system of coordinates represented an “address space” or specified
“memory locations” – both spatial models. The abstraction of its coordinates makes
that interpretation easier, but in reality, spatial metaphors of this kind were not gen-
erally applied to delay line memories. Well into the 1950s, materials describing
commercial delay line machines, such as the Univac I, continued to talk of the
memory as being structured into “major cycles” rather than “locations.”
The spatial model of the memory remained a considerable abstraction of the
physical reality of storage in the IAS machine. The bits in a word were not stored
contiguously, but were distributed across all the storage tubes. The “address” of a
word did not denote a boxlike region of space but rather a fragmented and distrib-
uted locus of small regions on an array of physical devices. Von Neumann had
reintroduced the “house number” metaphor of memory indexes in the early stages
of the IAS project, when it was still assumed that the machine would use delay lines
(IAS 1945), but dropped it thereafter. The same metaphor was revived, or indepen-
dently rediscovered, by Max Newman of Manchester University. In 1948 he spoke
to the Royal Society about “storing numbers […] in certain places or ‘houses’ in the
machine.” However, Newman focused on access to the stored data as much as its
physical properties and spoke of needing an “‘automatic telephone exchange’ for
selecting ‘houses,’ connecting them to the arithmetic organ, and writing the answers
in other prescribed houses” (Newman 1948).
The Preliminary Discussion presented a simple spatial abstraction of memory as
a sequence of boxlike containers of data. Like all good abstractions, this model
could be implemented in many different ways and served to insulate the business of
programming from inevitable changes in the development of memory technology.
The model could be applied as well to serial delay line machines such as EDSAC as
to the parallel computers modeled on the IAS machine and remained essentially
unchanged when both technologies were replaced by magnetic-core storage in the
1950s. It was even retrofitted to EDVAC itself. By 1947 its evolving design used a
single index for minor cycles that completely hid the temporal properties of its
delay lines. As Samuel Lubkin put it, “the number […] representing the location of
a word, will be referred to as an ‘address’ or ‘register number’” and addresses hid
the “precise arrangement of the memory” allowing programmers to think that num-
bers are “actually stored in individual registers” (Lubkin 1947, pp. 10–11). From
this point on, programming technique could develop autonomously and relatively
unaffected by changes in hardware, a point made by Maurice Wilkes and his
collaborators who observed that techniques developed for EDSAC could “readily
be translated into other order codes” (Wilkes et al. 1951). Some delay line machines
continued to rely on optimum programming techniques, as did certain later machines
with magnetic drum memories, but as memory technology evolved, these rapidly
became obsolete.
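What optimum coding was minimizing can be shown with a toy model of a rotating drum (every number here is hypothetical, chosen only for illustration): after an instruction is read and decoded, the drum has kept turning, so a coder who places the next instruction in the adjacent position forces a nearly full revolution of waiting, while a placement a few positions ahead costs nothing.

```python
# Toy model of rotational latency on a drum memory. All parameters
# (track size, decode delay) are hypothetical illustrations.
WORDS_PER_TRACK = 50   # storage positions around one track

def wait_time(head_pos, target_pos):
    """Word-times the drum must turn before target_pos is under the head."""
    return (target_pos - head_pos) % WORDS_PER_TRACK

DECODE_DELAY = 3       # word-times spent decoding the instruction just read

# Instruction read at position 0; by the time it is decoded, the head is at 3.
naive   = wait_time(0 + DECODE_DELAY, 1)  # next instruction placed adjacently
optimum = wait_time(0 + DECODE_DELAY, 3)  # next instruction placed 3 ahead

assert naive == 48 and optimum == 0
```

The additional next-instruction address discussed earlier in this chapter gave the coder exactly this freedom of placement; random-access memories made the exercise unnecessary.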

8.6  Conclusions

We have examined the evolution of the media used to store programs in the period
in the 1940s when the collection of ideas constitutive of the modern computer was
coming together and considered the effect of these media choices on coding and
code design. This reveals a dialectical and emergent relationship between the devel-
opment of storage technology and more abstract ideas about coding. Although
described by both von Neumann and Turing as a new kind of logic, the development
of computer programming did not follow the path of implementation of a well-formed
theoretical idea but was always responsive to developments in memory technology.

We mentioned the “stored program concept” at the beginning of this chapter, and
the media-inflected origins of the term are worth reemphasizing here. The term
“stored program” in this context can be traced to a prototype electronic computer
assembled in 1949 by an IBM team led by Nathaniel Rochester.10 The Test Assembly,
as the machine was usually known, was a mash-up of existing components, includ-
ing the IBM 604 Electronic Calculating Punch. It could read program instructions
from two different media: 60 instructions could be set up on the 604’s plugboard,
but there was also a magnetic drum which could hold 250 numbers or coded instruc-
tions. To distinguish between the two sources of instructions, the team began to
refer to instructions held on the drum as the “stored program” (Rochester 1949).
IBM disseminated the phrase in its marketing (IBM 1955) for the IBM 650 com-
puter which likewise partitioned control information between a magnetic drum
(holding the “650 stored program”) and traditional punched card controls. Over the
following decade, the phrase gradually became established as a way of referring to
the class of machines originally described more clumsily as “EDVAC-type
machines.” In its original use, however, it was not intended to mark any deep theo-
retical insight, but simply to distinguish between two media on a largely unknown
experimental machine.

 See Haigh et al. (2014). We have subsequently discovered the following occurrence of “stored
program” in a 1946 draft of Goldstine and von Neumann’s Planning and Coding reports: “acoustic
or electrostatic storage devices will […] provide […] the possibility to modify (erase and rewrite)
stored program information under the machine’s own control.” This usage is adjectival rather than
substantive, however, and does not appear in the published reports. It is therefore unlikely to have
inspired the use of the phrase within IBM in 1949, and we do not believe that this materially affects
our earlier discussion of the topic.
The turning point in the story we have told occurs in early stages of the EDVAC
project, when the rather disparate collection of media used to store numbers and
instructions on earlier machines was replaced by the conception of a single, internal
memory storing different types of information. We have traced in some detail the
tension in and around the First Draft between the abstract model of the memory,
which would later be called an “address space,” and the temporal properties of delay
line memory.
This highlights a tension between spatial and temporal modes of thinking that
recurs in the relationship between program instructions and the operations that are
executed. Initially conceived as a fairly simple relationship between corresponding
sequences of instructions and operations, by 1947, the coded instructions placed in
the memory were seen as merely the starting point of a complex process that could
generate an extremely long and complex sequence of operations, in the process also
altering the coded instructions themselves.
By the mid-1950s, computer designers had settled on core memory as a new
medium to replace both delay lines and display tubes. Like display tube storage, this
had a straightforward spatial organization of data rather than the fundamentally
temporal structure of the delay line. This eliminated the need for optimum coding as
practiced with drum and delay line machines. In a broader sense, optimizations
based on knowledge of the actual hardware underlying a simple instruction set and
memory model never went away, as programmers and compiler creators struggled
to optimize the performance over time of vector instructions, pipelines, and cache
memories.

References

Anonymous. 1944. ENIAC progress report dated June 30, 1944. In Moore School of Electrical
Engineering, Office of the Director Records (box 1): UPD 8.4, University Archives and
Records, University of Pennsylvania, Philadelphia, PA.
Bloch, Robert. 1984. Oral History Interview with William Aspray, February 22, 1984. Charles
Babbage Institute, Minneapolis, MN.
Burks, Arthur W. n.d. Unpublished book manuscript. In Arthur W.  Burks papers, Institute for
American Thought, Indiana University-Purdue University, Indianapolis, IN.
Burks, Arthur W., Herman Heine Goldstine, and John von Neumann. 1946. Preliminary Discussion
of the Logical Design of an Electronic Computing Instrument. Princeton, NJ: Institute for
Advanced Study, 28 June 1946.
Campbell-Kelly, Martin. 1981. “Programming the Pilot ACE: Early Programming Activity at the
National Physical Laboratory”. Annals of the History of Computing no. 3 (2):133–162.
Cohen, I. Bernard. 1999. Howard Aiken: Portrait of a Computer Pioneer. Cambridge, MA: MIT
Press.
Cope, W. F., and D. R. Hartree. 1948. “The Laminar Boundary Layer in Compressible Flow”.
Philosophical Transactions of the Royal Society of London, Series A no. 241 (827):1–69.
Copeland, B.  Jack. 2013. Turing: Pioneer of the Information Age. New  York, NY: Oxford
University Press.
Copeland, B. Jack, Andre A. Haeff, Peter Gough, and Cameron Wright. 2017. “Screen History:
The Haeff Memory and Graphics Tube”. IEEE Annals of the History of Computing no. 39
Croarken, Mary. 2003. “Tabulating the Heavens: Computing the Nautical Almanac in 18th-Century
England”. IEEE Annals of the History of Computing no. 25 (3):48–61.
Curry, Haskell. 1945. Letter to John von Neumann, August 10, 1945. In John von Neumann papers
(box 3, folder 2): Library of Congress, Washington, DC.
Davis, Martin. 2001. Engines of Logic: Mathematicians and the Origin of the Computer. New York,
NY: Norton.
Eckert, J. P., Jr, and J. W. Mauchly, 1944. “Use of acoustical, electrical, or other transmission line
in a device for the registering of pulses, the counting, switching, sorting and scaling of pulses,
and the use of such devices for performing arithmetic operations with pulses”. August 1944.
In Herman H. Goldstine papers (box 21): American Philosophical Society, Philadelphia, PA.
Eckert, J. Presper, and John W. Mauchly. 1945. Automatic High Speed Computing: A Progress
Report on the EDVAC. Report of Work Under Contract No. W_570_ORD_1926, Supplement
No 4. (Plaintiff Exhibit 3540). September 30. In ENIAC Patent Trial Collection: UPD 8.10,
University of Pennsylvania Archives and Records Center, Philadelphia, PA.
Eckert, J. P., and T. K. Sharpless. 1945. Final Report under Contract OEMsr 387. Moore School
of Electrical Engineering, University of Pennsylvania, November 14, 1945. In Britton Chance
papers (box 80, folder 7): American Philosophical Society, Philadelphia, PA.
Goldstine, Herman H., and John von Neumann. 1947. Planning and Coding Problems for an
Electronic Computing Instrument. Part II, Volume 1. Princeton, NJ: Institute for Advanced Study.
Grier, David Alan. 1996. “The ENIAC, the Verb ‘to program’ and the Emergence of Digital
Computers”. IEEE Annals of the History of Computing no. 18 (1):51–55.
Grier, David Alan. 2006. When Computers Were Human. Princeton, NJ: Princeton University Press.
Haigh, Thomas, and Mark Priestley. 2016. “Where Code Comes From: Architectures of Automatic
Control from Babbage to Algol”. Communications of the ACM no. 59 (1):39–44.
Haigh, Thomas, Mark Priestley, and Crispin Rope. 2014. “Reconsidering the Stored Program
Concept”. IEEE Annals of the History of Computing no. 36 (1):4–17.
Haigh, Thomas, Mark Priestley, and Crispin Rope. 2016. ENIAC In Action: Making and Remaking
the Modern Computer. Cambridge, MA: MIT Press.
IAS. 1945. Minutes of E.C. Meeting, November 19. Institute for Advanced Study. In Herman
H. Goldstine papers (box 27): American Philosophical Society, Philadelphia, PA.
IBM. 1955. IBM 650 Technical Fact Sheet, July 20. IBM Archives.
Knuth, Donald E. 1970. “Von Neumann's First Computer Program.” ACM Computing Surveys no.
2 (4):247–260.
Lubkin, Samuel. 1947. Proposed Programming for the EDVAC. Moore School of Electrical
Engineering, University of Pennsylvania, January 1947. In Moore School of Electrical
Engineering, Office of the Director Records (box 8): UPD 8.4, University Archives and
Records, University of Pennsylvania, Philadelphia, PA.
Newman, M. H. A. 1948. “General Principles of the Design of All-Purpose Computing Machines”.
Proceedings of the Royal Society of London, Series A no. 195:271–274.
Post, E. L. 1936. “Finite Combinatory Processes - Formulation 1”. Journal of Symbolic Logic
no. 1 (3):103–105.
Priestley, Mark. 2018. Routines of Substitution: John von Neumann’s Work on Software
Development, 1945–1948. Springer.
Rajchman, Jan. 1946. The Selectron. In Campbell-Kelly, M., and Williams, M. R., eds. 1985.
The Moore School Lectures. Cambridge, MA: MIT Press.
Rochester, Nathaniel. 1949. A Calculator Using Electrostatic Storage and a Stored Program. May
17, 1949. IBM Corporate Archives, Somers, NY.
Staff of the Harvard Computation Laboratory. 1946. A Manual of Operation for the Automatic
Sequence Controlled Calculator. Cambridge, MA: Harvard University Press.
Stibitz, George R. 1967. “The Relay Computers at Bell Labs.” Datamation 13 (4):35–44 and
13 (5):45–50.
Swade, Doron. 2001. The Difference Engine: Charles Babbage and the Quest to Build the First
Computer. New York: Viking Penguin.
The Staff of the Computation Laboratory. 1949. Description of a Relay Calculator, Volume XIV
of the Annals of the Computation Laboratory of Harvard University. Cambridge, MA: Harvard
University Press.
Turing, A. M. 1936. “On Computable Numbers, with an Application to the Entscheidungsproblem”.
Proceedings of the London Mathematical Society no. 42:230–265.
Turing, Alan. 1946. Proposed Electronic Calculator. NPL. Reprinted in Carpenter, B. E., and Doran,
R. W. (1986), A. M. Turing’s ACE Report of 1946 and Other Papers (MIT Press).
von Neumann, John. 1944. Letter to Robert Oppenheimer, August 1, 1944 (Los Alamos National
Laboratory, LA-UR-12-24686).
von Neumann, John. 1945. Letter to Herman Goldstine, February 12, 1945. In Herman H. Goldstine
papers (box 9): American Philosophical Society, Philadelphia, PA.
von Neumann, John, 1945. First Draft of a Report on the EDVAC. Moore School of Electrical
Engineering, University of Pennsylvania, June 30, 1945.
Ware, Willis H. 2008. RAND and the Information Evolution: A History in Essays and Vignettes.
Santa Monica, CA: RAND Corporation.
Wilkes, M., Wheeler, D. J., Gill, S. 1951. The Preparation of Programs for an Electronic Digital
Computer. Addison-Wesley.
Williams, S.  B. 1944. “Calculating System”. Bell Telephone Laboratories, March 29, 1944.  In
Herman H. Goldstine papers (box 20): American Philosophical Society, Philadelphia, PA.
Zuse, Konrad. 1993. The Computer—My Life. Berlin/Heidelberg: Springer-Verlag.
Chapter 9
Foregrounding the Background: Business,
Economics, Labor, and Government Policy
as Shaping Forces in Early Digital
Computing History

William Aspray and Christopher Loughnane

The decade and a half from the end of the Second World War to
the election of John F Kennedy as president in 1960 was one of
prosperity and progress for the American economy and its
business sector. Although the developing Cold War with the
Soviet Union overshadowed many of the developments of that
period, its effect was stimulative for the domestic economy.
Pent-up consumer demand from the Second World War years,
the availability of cheap energy, the application of government-
financed wartime innovations to peacetime commercial uses,
and liberal programs of foreign economic and military aid all
encouraged private investment and created expanding
employment opportunities. Increased productivity and the
application of new tools for monetary and fiscal management
helped keep the economy growing and minimized the potential
for inflation and excessive swings of the business cycle. Despite
the increased role of government intervention in the economy,
American business enjoyed an era of market-directed growth
that created an affluent society without parallel in the world
(Johnson, “American Business in the Postwar Era,”
pp. 101–113 in Bremner and Reichard, op cit, 1982).

Abstract  This paper places the early history of digital computing in the United
States, during the period from 1945 to 1960, in the larger historical context of
American business, labor, and policy. It considers issues concerning the business
sectors that chose to enter into early digital computing, the robustness of the general
economy and the importance of defense as an economic driver, the scientific race
with the Russians, and gendered issues of technical labor – and how each of these
helped to shape the emerging mainframe computer industry.

W. Aspray (*)
University of Colorado Boulder, Boulder, CO, USA
e-mail: William.Aspray@Colorado.EDU
C. Loughnane
University of Glasgow, Glasgow, UK

© Springer Nature Switzerland AG 2019 159

T. Haigh (ed.), Exploring the Early Digital, History of Computing,
This paper surveys a large topic: the shaping influence of business, economics,
labor, and government policy on the demand, development, purchase, and use of
early digital computers in the United States in the period from 1945 to 1960.1 This
was a formative period which began with the public revelation of the ENIAC and
the dawning recognition of the potential uses of the computer and ended a mere 15
years later with a mature mainframe computer industry. This paper does not claim
to be the first place where these exogenous forces are discussed. Indeed, a number
of excellent books and articles have been published that discuss these contextual
issues in both the United States and Western Europe, how these issues shaped par-
ticular decisions about which computers were made, which features they included,
which industries adopted them, and how users incorporated them into their business
and scientific activities.2 By foregrounding these contextual issues and moving the
development and use of particular technologies into the background, this paper can
encapsulate in only a few pages the importance of these exogenous forces in com-
puter history.3 The main purpose here is to provide computer historians and other
early digital scholars a ready reference to these large exogenous forces as they write
their focused technological histories.
Computer historians have been attracted to study American developments
between 1945 and 1960 because this period is characterized by the transition from
the first proof-of-principle computers to working scientific and commercial main-
frames, the development of workable solutions to constituent technologies such as
memory devices, and the emergence of a mainframe computer industry. America in
these same years is characterized at the societal level by the transition from wartime
to peacetime, a growing economy based in part on Cold War investments, increas-
ingly conservative Republican politics, a golden age for large corporations, and dif-
ferentiation of labor roles by gender. These two histories are intimately connected.

 Not very far into the writing of this paper, the authors discovered Thomas Haigh’s excellent but
infrequently cited paper, “Computing the American Way” (2010). His paper has a purpose similar
to this one, and we commend our readers to examine his paper carefully. We have intentionally
refocused our paper to take a different tack from the one Haigh takes, but nevertheless there are a
few places at which the two papers overlap.
 Some examples of excellent contextualized computer histories include Campbell-Kelly (1990) on
ICL and the British computer industry, Lecuyer (2005) on Silicon Valley, Bassett (2007) on semi-
conductor history, Edwards (1997) and Akera (2008) on Cold War computing, Heide (2009) on
punched-card systems, Rubin and Huber (1986) on the knowledge industry in the United States,
and Haigh and Paju (2016) on IBM in Europe. There are many more good examples of literature
that could be pointed to here. However, the goal of this paper is to present a brief and readable
account without trying to do justice to a full bibliographic analysis. This paper can thus be regarded
as more of an essay than a traditional research literature review.
 The focus of this paper is early computer history. Some audiences would be more interested in the
history of information rather than the history of computing, and this topic is also shaped by similar
exogenous forces. One approach to this broader topic in the United States is taken by Rubin and
Huber (1986), who describe the knowledge industry as including education, research and develop-
ment, the media of communication, information machines, and information services. A more
recent approach to the history of information in the United States is given by Cortada (2016),
which traces the use of information in government, business, education, and people’s everyday
lives since 1870.
This article ends in 1960, with the governmental reactions to Sputnik intended
to strengthen American prowess in science and technology (Neal et al. 2008). After
this, during the decade of the 1960s, America entered a new and different era char-
acterized in computing by the rise of families of computers, new semiconductor
storage and switching devices, emerging market niches for supercomputers and
minicomputers, and the rise of an independent software industry and on the national
scene by liberal Democratic administrations and social upheaval as represented by
the civil and women’s rights movements.
This article lightly touches on the connections between computer history and
contemporaneous societal trends in postwar America, leaving thorough investigation
of these connections to future studies. However, some of these connecting themes
are identified: the importance of defense spending on computing research and prod-
uct development, showing a continuous defense influence from ENIAC to ARPANET;
the availability of an adequate, pliable technical labor pool because of the GI Bill;
welfare capitalism and popular belief that striking against defense work is un-Amer-
ican; the rapid rise of the computing industry attributable as much to a robust econ-
omy and defense funding as to technological innovation; and an alternative view of
gendered labor in computing that does not argue for a golden age of work for women
in computing that fell off in the 1960s for unexplained reasons but instead sees com-
puting employment as following a national trend characterized by the low participa-
tion of women generally in the technical labor force in the postwar years.

9.1  Types of Entrants in Early Digital Computing

We begin this examination of endogenous and exogenous factors shaping the devel-
opment of early computing by looking at the makeup of the US computing industry
itself, which formed in the period between the end of the Second World War and
1960. It included entrants coming from other established industries, as well as a
number of entrepreneurial startups. Four types of companies produced digital com-
puters in the period of 1945–1960: electrical and electronic product and service
companies, business machine manufacturers, defense contractors, and startup firms.
The business machine industry, which had grown mightily during the Progressive
Era in the early decades of the twentieth century and during Roosevelt’s New Deal
Administration in the 1930s, was the most dominant group of entrants in the new
computer industry. Indeed, by the mid-1950s, the business machine manufacturing
firm IBM had come to dominate the mainframe computer industry. Its main com-
petitors were labeled by the public press as the “Seven Dwarves”: Burroughs,
Control Data, General Electric, Honeywell, NCR, RCA, and Sperry Rand.
Each firm of course had its own specific set of strengths and weaknesses.
However, each of these types of firms brought common organizational advantages
and disadvantages to the early digital computer industry simply because of the type
of firm they were, independent of the specifics of the particular firm. Table 9.1 gives
a highly simplified account of the kinds of strengths and weaknesses of early digital
computer manufacturers, organized by the industry of origin.
Table 9.1  A prosopography of the early digital computer industry, 1945–1960

Examples of firms, by industry of origin:
  Electrical and electronics products and services companies: General Electric, Honeywell, Philco, RCA, Bell Labs, Consolidated Electrodynamics Corp./Electrodata
  Business machine manufacturers: IBM, Remington Rand, NCR, Burroughs, Underwood
  Defense contractors: Raytheon, Sperry Rand, Hughes Aircraft/Ramo-Wooldridge, Computer Research Corp./Northrop Aviation, Bendix Aviation, Librascope/General Precision
  Startups: Eckert-Mauchly, Control Data, Digital Equipment, Technitrol, Engineering Research Associates, Electronic Computer Corp., Consolidated Engineering/Electrodata, Logistics Research Corp./El-Tronics, Packard Bell, Wang

Strengths and weaknesses (values given in the column order: electrical and electronics company | business machine manufacturer | defense contractor | startup):
  Capital: Strong | Strong | Strong | Weak
  Electronics expertise: Moderate to strong | Weak to moderate | Moderate | Moderate to strong
  Government: Moderate | Widely varying | Strong | Weak to moderate
  Research facilities: Moderate to strong | Moderate to strong | Moderate to strong | Weak
  Manufacturing facilities: Moderate to strong | Moderate to strong | Moderate | Nonexistent
  Technical management: Moderate to strong | Moderate to strong | Moderate to strong | Widely varying
  Sales network and experience: Strong | Moderate | Moderate to strong | Nonexistent
  Marketing skill: Moderate to strong | Moderate to strong | Moderate to strong | Nonexistent
  Customer base: Moderate | Strong (for […], weak to moderate for makers of […]) | Moderate | Nonexistent
  Alternative market: Moderate to strong | Moderate | Moderate to strong | Nonexistent
You will note that the table is labeled a prosopography. This was a term intro-
duced into historical practice by the British historian Lawrence Stone in the 1970s.
Prosopography is used to develop a composite portrait as part of a narrative account
(Keats-Rohan 2007). The earliest known use of prosopography in computer history
is by Pierre Mounier-Kuhn (1999) in his doctoral dissertation. In the case of early
digital computer manufacturers, as a prosopographer you would create a composite
image of the typical firm of a given type that entered the computer industry. Thus,
you would create a narrative in which there were only a few players competing to
control the early digital computing machine industry (or at least carve out a niche for
themselves in which they would have limited competition so that they could earn
high profits): business machine manufacturers, electronics firms, defense contrac-
tors, startups, governments, universities, and a few types of customer organizations.
When carrying out a prosopography, you would not reflect upon the differences
between individual firms within a given class, such as International Business
Machines and National Cash Register from the business machine manufacturing
class; instead, you would focus on their similarities. All of the business machine
firms, for example, were likely to have relatively good sales and marketing networks,
but they might not be up to speed on the use of electronics to build computers.4
The table above is intended to suggest – albeit sketchily – some of the issues that
should concern the historian writing about commercially developed computers of
this postwar era. How did the organizational strengths and weaknesses of the firm
shape the firm’s strategy as well as the technology it created? To what degree does
the story of a particular computer fit into a narrative of these four types of entrants
into the early computer industry? Historians have explored this question for the
entrepreneurial startups (Stern 1981; Norberg 2005) and for companies with back-
grounds in business machines (Pugh 1995) and in electronics and defense (Campbell-­
Kelly et al. 2014). The computer had originally been a military and, secondarily, a
scientific technology, but not primarily a commercial technology. Did the domi-
nance of the business machine manufacturers in the computer industry drive the
introduction of commercial computers and expedite the spread of computers into
various industry sectors? For example, consider the provision of computers by the
business machine manufacturers to different industrial sectors (Cortada 2003, 2005,
2007), particularly to the data-intensive insurance sector (Yates 2008). Was the
action of some entrants in the computing field dominated not by their organizational
strengths as they applied to providing computing products and services but instead
because these companies had alternative market opportunities, such as General Electric
in lighting, power, and home appliances, or some of the defense contractors in other
areas of defense electronics or avionics?

4 There are typically some aberrations in these composite profiles, and the prosopographer simply
has to live with these aberrations. For example, NCR, which had built codebreaking equipment
during the war, had already learned about electronics and computing when the mainframe comput-
ing industry was established. IBM handled its lack of electronics expertise in part by stepping out
of its role as a business machine manufacturer after the war and also becoming a defense contrac-
tor, utilizing government defense contracts to build up its research labs, research personnel, and
electronics expertise. So, in this one respect of familiarity with electronics, NCR and IBM were
uncharacteristic of the typical business machines company, which generally had little such exper-
tise. But in many other respects, both IBM and NCR were similar to the other business machine
companies.
164 W. Aspray and C. Loughnane

9.2  The US Business Climate and Its Impact on Early Digital Computing

The computer industry could not have formed so rapidly and thrived so well had it
not been for a robust postwar economy in the United States. There was widespread
concern among economists and policymakers after the war as to whether the United
States would return to the Depression economy of the 1930s once the artificial stim-
ulus of war spending ended. There was similar concern about the employment pos-
sibilities for the 12 million (mostly) men returning from the war who were anxious
to be released back into civilian life. However, these worst fears did not materialize.
The economy grew from $212B gross national product (GNP) in 1945 to $503B
GNP in 1960. There was steady growth and low inflation throughout the period.
Family income grew by approximately 40% during this period. Exports outpaced
imports during this era. In 1960, for example, the United States had a trade surplus
of more than $6B (Bremner and Reichard 1982).
One of the factors that enabled the economy to grow so robustly was new defense
spending in this era of the Cold War, especially during the early 1950s on account of
the Korean War (Flamm 1988; Jones 1986). For example, the military budget jumped
from $14B in 1950 to $34B in 1951 because of the Korean conflict – representing a
doubling in the military portion of GNP from approximately 5% to over 10%. In the
first half of the 1950s, GNP grew at an annual rate of more than 4.5% and at about
half that rate in the second half of the 1950s. These military purchases dominated
growth in the aircraft and electronic industries and created substantial growth in the
rubber, plastics, and other chemical industries. The aircraft industry, in particular,
was a major user of early computers (Bremner and Reichard 1982). Another reason
for the economic growth was the pent-up demand for consumer goods after the pre-
vious 15 years of the Great Depression and the war – during which times there had
been little opportunity to make consumer purchases. Sales of automobiles, televi-
sion receivers, and refrigerators – as well as new houses in the suburbs – skyrock-
eted. It is not surprising that a General Electric appliance factory in Kentucky was
one of the earliest factories to acquire a computer to help carry out its work.
These postwar years were a time of growth for large companies, partly through
mergers and acquisitions and partly through moving from the company’s traditional
business domain into adjacent business areas (Galambos 1970). Three of the fastest
growing companies in this era were companies that participated in the early main-
frame computer industry: IBM, RCA, and GE.  The latter two participated in the
computer industry by shifting from a base in electronics to electronic computers.
The government promoted business development in various ways during these
postwar years. Government support was critical to the advancement of companies
in the computer industry – not only for those coming from the defense sector but also
9  Foregrounding the Background: Business, Economics, Labor, and Government… 165

for companies such as IBM and NCR. Many factories built with government funds
during the war were sold to private firms at huge discounts after the war. For exam-
ple, Engineering Research Associates, one of the companies that became part of
Sperry Rand’s computer division, was set up in business after the war by the US
Navy in a former glider plane factory it owned in St. Paul, Minnesota. A number of
companies faced large tax burdens from their wartime profits, and the government
found various ways to ease the tax burden on these companies.5
Wartime scientific research and development in areas such as electronics, aero-
nautics, chemistry, atomic energy, and computing were put to use in peacetime
products by private industry during this postwar era in the markets for television,
commercial jets, automobile tires, nuclear medicine, and commercial computers.
Moreover, the close collaboration between government and industry that had devel-
oped during the war carried over into peacetime with increased corporate lobbying,
movement of executives back and forth between the government and private sectors,
and the use of private business models and managerial and financial strategies by
government agencies.
IBM had cultivated strong ties with the Roosevelt Administration in the 1930s,
leading to IBM being awarded the contract for data processing machinery for the
Social Security Administration. IBM continued to cultivate ties with the Truman
and Eisenhower administrations after the war. IBM also received numerous defense
contracts as well as contracts for office equipment and for data processing equip-
ment with data-intensive government agencies such as the Social Security
Administration, Bureau of Labor Statistics, and Bureau of the Census.
The computing industry gained greatly from the defense opportunities in the
postwar period. As Thomas Haigh has explained, in the postwar years, the business
machine firms such as IBM and NCR built on their already large customer bases
established during the 1930s and early 1940s:
This continued in the 1950s with the Defense Department, the world’s largest user of elec-
tronic computers for administrative purposes. Indeed, it was its growing array of incompat-
ible machines which inspired the DOD to nurture the COBOL standard effort and then to
spur compiler development by requiring computer manufacturers to provide an implemen-
tation of the language if they wanted their computers considered for procurement.

But computer companies also moved quickly to win government contracts in new military
markets far removed from their roots in the administrative technology business. Burroughs,
formerly known only for its adding and bookkeeping machines, became a major supplier of
military command and control systems. Office machine conglomerate Remington Rand
became part of Sperry Rand, lending its expertise in computer technology to a firm best
known for its specialized military automation technologies such as the marine gyrostabilizer,
computer controlled bombsights, autopilots, and airborne radar systems. Other firms
that entered the computer industry in the 1950s made similar transitions. Honeywell pio-
neered thermostatic heating control, but built up defense production during the second
world war and during the Cold War manufactured missile guidance systems, bombs, land
mines and napalm for the US military. Its computing business grew out of a 1955 joint
venture with military electronics firm Raytheon. General Electric had previously diversified
into the production of a wide range of equipment for producers, industrial consumers, and
domestic users of electrical power. Expertise in power and turbines led to major new
government contracts to build jet engines and nuclear reactors. (Haigh 2010a)

5 There had been excess profit taxes imposed by the federal government during the war. These were
revoked in 1945 but reimposed in 1950 because of the Korean War. For an analysis of corporate tax
incentives in the 1950s in the United States, see Thorndike (2011). The authors have not been able
to find specific cases in which companies entering the computer industry after the Second World
War were given tax incentives, but they may exist. IBM, for example, capped its profits at 1.5% on
government business during the war and had no postwar revenue slump for which it might have
taken advantage of tax credits. In fact, the company grew rapidly in the postwar years. Clearly,
defense contracts to certain of these companies were more important than tax incentives in the
postwar era.

IBM provides a good example of the importance of defense contracts to firms in the
computer industry. Not only were defense organizations important customers for
IBM, but government contracts also enabled IBM to build up its research staff and facilities
and conduct the research and development it embedded in its commercial products.
In the 1950s, for example, IBM held government contracts for the IBM 701
(“Defense Calculator”), the Naval Ordnance Research Calculator, the AN/FSQ-7 com-
puters for the SAGE air defense system, and the Bomb-Nav analog comput-
ers for guidance in B-52 bombers. Our implicit argument about the importance of
defense contracts to IBM’s success in the computing field runs counter to the argu-
ment of the distinguished historian of technology Steve Usselman (1993, 1996),
who argues that this defense work was a distraction from IBM’s business in busi-
ness data processing. The technologies developed for these defense products showed
up in the 650, 701, 702, 704, and 705 computers that IBM sold to commercial cus-
tomers. It is well known that military support to the ENIAC project during the
Second World War led to the development of the stored program computer and that
military support in the 1960s and 1970s, which created the ARPANET, led to the devel-
opment of the Internet. It is clear, however, that military support was unbroken from
the Second World War into the 1970s and that military support in the 1950s was
critically important to the creation of the mainframe computer industry in the United
States (Ceruzzi 2003a, b).

9.3  The US Labor Environment and Its Impact on Early Digital Computing

If the US computer industry was going to thrive, it needed an adequate supply of
qualified technical workers (as well as many nontechnical workers). Hardware
development, such as finding adequate memory technology and faster and more
reliable switching devices, was what historian Thomas Hughes would call the
“reverse salient” of the computing industry; it was what was holding back the cre-
ation of practical computing devices. Software was a secondary consideration at the
time: the first major programming languages, Fortran and COBOL, were only created
in the late 1950s, and the “crisis” over programming only emerged later, in the
1960s. Thus, the primary technical employment need in the computing field of
the postwar years was for engineers – mostly for electrical engineers (for storage
and switching technologies) but also for mechanical engineers (for input-output
devices such as printers). While there was occasional anxiety about finding an
adequate supply of these workers, in the end the computer industry found these
workers – and they were overwhelmingly white and male. The labor history of the
computer industry is, not surprisingly, like the more general labor history for
American engineers and scientists in the postwar years. On the history of the sci-
ence and engineering workforces in postwar America, see Rossiter (1995) and Bix
(2013), respectively. On the history of computing labor in the United States, see
Abbate (2012), Aspray (2016), Ensmenger (2012), and Misa (2010).
The US labor market was driven in the postwar period by four major factors: (1)
a return of workers from military service to civilian life, (2) the production of jobs
by the defense sector, (3) the growth of jobs related to pent-up consumer demand,
and (4) a conservative political pushback to the leftist politics of the labor unions.
During the war, half of total production and more than two-thirds of durable
goods production went to the war effort. There was strong political pressure to both
cut military spending and return military personnel to civilian life as quickly as pos-
sible after the war ended. Economists were concerned that the coupling of these two
changes would lead to massive civilian unemployment. Indeed, there were some
major dislocations, such as laying off 300,000 Michigan war workers immediately
after the Japanese surrender, but in the end, the worst fears of massive unemploy-
ment were not realized. One reason was the Servicemen’s Readjustment Act of 1944,
better known as the GI Bill of Rights, which enabled 2.3 million former military
personnel to attend college in the second half of the 1940s. This bill deferred the
entrance of many former military personnel into the civilian workforce and also
enabled them to train for the higher-skilled jobs that emerged in the postwar years.
The Servicemen’s Readjustment Act also provided loans to former service person-
nel to start their own businesses. This government program enabled numerous mili-
tary personnel to gain engineering degrees, which they put to use in the nascent
computer industry (Barnard 1982).
One casualty of the transition to peacetime was the loss of work for women
(Cayton et al. 1993). A survey in 1944 conducted by the US Women’s Bureau
showed that four-fifths of the women employed in war work desired to continue to
work after the war. However, women were more likely than men to lose their
employment after the war. In the aircraft industry, for example, women comprised
89% of those who lost their jobs after the war, even though only 39% of the wartime
aircraft industry workforce were women. While there was a big drop in employment
for women in the second half of the 1940s, there was slow but steady growth dur-
ing the 1950s, with the female share of the workforce rising from about 30% to 35%. Much of
this growth arose through married women reentering the workforce. The percentage
of married women working outside the home increased from 17% in 1940 to 32%
in 1960.
Closely associated with the work patterns mentioned in the previous paragraph
is the propensity of women in the postwar years to have careers as homemakers.
Between 1940 and 1960, a woman’s average age at the time of marriage dropped
from 21.5 to 20.3 in the United States. The number of live births per 1000 women
in the population grew from 80 in 1940 to 123 in 1957 (the peak year for this statis-
tic in the postwar period). It was customary at the time for women with small chil-
dren to stay at home rather than hold outside jobs.
The situation for women entering scientific careers in the postwar era was some-
what bleak. In 1946–1947, women made up 2.7% of the science and engineering
workforce, and that percentage only increased to 6.64% in 1954 and 9.37% in 1970.
Women fared better in employment as mathematicians (20.1% of the workforce)
than as engineers (0.3%) (Rossiter 1995). For a more extended discussion of this
topic, see Aspray (2016). In this postwar era, women often departed early from their
scientific education and careers. When they were employed, they often held posi-
tions below their talents and education and often worked in feminized occupations
such as routine testing, home economics, and chemical librarianship. While the GI
Bill was a boon to male servicemen returning from the war, it was often an impedi-
ment for women. The GI Bill led to oversubscription in US colleges and universi-
ties, and to handle the demand, these higher education institutions often set quotas
on the number of women they would enroll, especially for graduate education.
The story of the female programmers on the ENIAC is well known. However
much we might be troubled today by the lack of recognition for these women’s
accomplishments and by the lack of a career path for them to follow, this episode is
entirely consistent with what happened to women in the science and engineering
disciplines more generally during this time. It is a clear example of both gender-­
typing of work and the elimination of female workers after the war to make room
for the men returning home. There is a larger issue about what kinds of computing
workers we are talking about. The Golden Age discussion, such as in Ensmenger
(2012) and much of the rest of the literature, is about women programmers working
in computer companies or on scientific and military projects. But there are examples
of women working as computer operators, as keypunch operators in data processing
departments, in light manufacturing positions such as building the ENIAC or later
stringing core memories in the 1960s, or in traditionally gendered design, sales, market-
ing, and administrative support positions in computer companies. Expanding to this
larger set of occupations and looking at gendered work make for a much more com-
plex and interesting story, which is to date largely unexplored. Among the first schol-
ars to touch on these topics are Thomas Haigh (2010b) and Haigh et al. (2016,
especially pp. 71–74 and pp. 279–283). There is also some recent scholarship (Vogel
undated; Misa forthcoming) that does careful counting from archival sources to
show that common assumptions about a golden era for women in computing are
mistaken.
If the opportunities for women in science and engineering careers after the war
were limited, opportunities for people of color were much more constrained. At the
end of the war, African Americans, at the time the largest racial minority population
in the United States, had few good job opportunities. One common pattern for the
postwar black population was to move off the farm (often located in the rural South)
into factory jobs (often located in the urban North). Between 1940 and 1960, the
percentage of African Americans working on farms dropped from 35% to 8%.
While many African Americans had served in the armed services and were eligible
for college benefits under the GI Bill, many colleges and universities were not open
to them. The NAACP and other organizations used the court system during this
postwar period to pry open access to public post-secondary education for African
Americans, for example, Johnson v. Board of Trustees of the University of Kentucky
(1949), McLaurin v. Oklahoma State Regents (1950), Sweatt v. Painter (1950),
Hawkins v. Board of Control (1954), and Lucy v. Adams (1955). Despite these
efforts, change was slow in coming (Aspray 2016).
What pressure the computer industry faced in finding qualified technical workers
came primarily from the robust economy. During the second half of the 1940s, pent-
up consumer demand led to growth in both the economy and jobs. Just when con-
sumer demand began to slow at the end of the 1940s, the Cold War and the Korean
conflict led to major increases in defense employment, including work in the utility,
electrical, electronics, aircraft, instrument, steel, aluminum, and chemical industries
in support of the country’s defense needs. US unemployment in the 1950s averaged
4.6% – what we would now regard as “full employment” – with some significant
swings from a strikingly low 1.3% unemployment rate in 1953 to 7% unemploy-
ment during the recession of 1958 (Bremner and Reichard 1982).
The Roosevelt and Truman administrations had been pro-labor, but with the
increasing political power of the Republicans after the war, legislation became
increasingly pro-management; and this contributed to a workforce for the computer
industry that had little power to take labor action. Republicans took control of the
Congress in 1946 and immediately began to attack the power of the labor unions,
which had been strengthened greatly in the 1930s through President Roosevelt’s
New Deal legislation. While numerous anti-labor bills were introduced in the
Congress – more than 200 during the late 1940s – the signal piece of legislation was
the Taft-Hartley Act, enacted in 1947. President Truman’s veto of this legislation
was roundly overridden by both the House and Senate. The new legislation rebal-
anced the power relationship between labor and management that had swung in
labor’s favor with the Wagner Act, which had been passed in 1935. Taft-Hartley
included provisions limiting the ability of unions to strike or picket, to create closed
shops (in which the employer was permitted to hire only union members),
or to make financial contributions to political campaigns; union leaders had
to sign oaths to the federal government avowing that they were not communists; and
states were given new authority to pass right-to-work laws that limited union power
in the workplace. One main effect of the law was to reduce the unions’ ability
to mount national campaigns; they now had to divide their resources and carry out
their political efforts simultaneously in many different states (Barnard 1982).
The fact that much of the job growth of the 1950s came in the defense sector had
a shaping influence on labor action in the computing industry. There were relatively
few labor disputes in the defense industry, and collective bargaining was largely
nonexistent there. This was in part because pay was high and work was relatively
secure but also because a strike against a defense project was widely regarded as
un-American. While the nature of work in many office and manufacturing jobs did
not change, the defense industry led in both the creation of R&D jobs that required
a science or engineering background and the demand for highly skilled workers to
manufacture and operate increasingly complex defense equipment. Overall  – not
just in the defense sector – between 1940 and 1960, the proportion of white-collar
jobs increased from 31% to 42%, mostly at the expense of manual, blue-collar
jobs (Barnard 1982).
It was only partly because computing firms were participating actively in defense
projects that there was little to no labor unrest in the computer industry. Additionally,
the business machine manufacturers that entered the computing field, especially
IBM and NCR, had long practiced so-called welfare capitalism, characterized
by company newspapers, social activities, and benefit programs for employees as
well as by IBM’s practice of not laying off employees in hard times and offering
internal career paths. These efforts led to greater company loyalty of employees.
IBM in particular had a close relation to Roosevelt’s New Deal programs. IBM
president Thomas Watson cultivated a close personal relationship with President
Roosevelt and bought into Roosevelt’s pro-labor sentiments even though he carefully
managed an environment within IBM that avoided unionization.

9.4  Science Policy and Scientific Institutions in the Rise of the US Early Digital Computing Field

In the postwar years, federal science policy in the United States had a shaping influ-
ence on computing. Prior to the Second World War, involvement of the federal gov-
ernment in the funding of scientific R&D was limited to a few key areas, most
notably agriculture. The Morrill Acts of 1862 and 1890 granted states land to estab-
lish land-grant colleges with a focus on practical subjects including agriculture and
engineering. These were complemented by the Hatch Act of 1887, which provided
the funds for these colleges to create practical research centers known as agricul-
tural experiment stations, and the Smith-Lever Act of 1914 to fund cooperative
extension services to provide informal education opportunities for improvements in
farming, environment, and nutrition (see Hyman 1986; Rose 1962; Fiske 1989).
Federal regulation of electronic communications was reorganized and centralized
through statute by President Franklin Roosevelt with the Communications Act of
1934, under the oversight of the Federal Communications Commission (Paglin
1989; Smulyan 1996). Despite these various early regulatory efforts, federal science
policy prior to the Second World War maintained a decentralized and mostly hands-
off approach. The most significant role this early US science policy had for the
computing industry was to create a healthy group of public universities that trained
the hordes of engineers needed after the Second World War. Thus, the dominant
policy issue for computing in the period up to 1960 concerns government support for
computing research, not, for example, antitrust or tax or export regulation.
Before the 1941 US entry into the world conflict, funding for university research
came primarily from private and philanthropic sources, with the federal government
playing a minimal role – an arrangement that suited all parties. Donors included the
Carnegie and Rockefeller foundations and companies such as General Electric and
DuPont. After the Great Depression in the 1930s, research funding from corporate
and foundation sources had diminished significantly, motivating universities to look
to the government for assistance. However, the real lever for a change in federal
policy was the concern over world events, forcing President Roosevelt to abandon
previous laissez-faire policy and move toward a more coordinated approach to sci-
entific research, with the creation in 1940 of the National Defense Research
Committee and in 1941 of its successor, the Office of Scientific Research and
Development. Once war mobilization was underway, the award of significant fed-
eral contracts and grants established enduring research ties between government
and university. These ties included ones related to computing at various universities,
including Harvard, MIT, Pennsylvania, and UCLA (Greenberg 2001).
It is sometimes said that the atomic bomb ended the war but that radar won it. In
any event, the Second World War dramatically changed the relationship between
science and the US government. The federal government supported the research,
development, and manufacture of various medicines and technologies during the
war; and various scientists such as Vannevar Bush and J. Robert Oppenheimer took
leading roles in scientific research, management, and policy for the federal govern-
ment during the war. In 1940, less than 1% of the federal budget was devoted to
scientific research and development; the amount reached 2% in 1945; and by 1960
the percentage had grown to 10% (of a much larger federal budget base). Before the
war, two-thirds of US R&D was supported by industry and only one-fifth by the
federal government. In 1945, industry and government were approximately equal in
their support of R&D. By 1960, the government was paying for approximately two-
thirds of the national R&D expenditures. This research included various projects
involving computing either directly or indirectly at universities across the country
(Bremner and Reichard 1982; Hart 2010; Kleinman 1994).
The most famous episode in the transition from wartime to peacetime federal
support for science is the effort to create a national science foundation based upon
the plan in Vannevar Bush’s white paper to President Roosevelt, entitled Science,
the Endless Frontier. Many historians have written about this episode (see Appel
2000; England 1963; Hart 2010; Kevles 1977; Kleinman 1994, 1995; Lomask 1976;
Maddox 1979; Mazuzan 1988; Puaca 2014; Reingold 1987; Wang 1995), so here
we will only state that the creation of NSF was delayed until 1950 because of politi-
cal battles over whether the foundation should be controlled by the Congress or by
the scientists, the scope of the foundation (whether it should include military, medi-
cal, and social science research or technological development), and the counterpro-
ductive efforts of Bush to insert himself into the postwar control structure. Because
of these delays, multiple governmental organizations were created that played a part
in the financial support and intellectual direction of scientific research: the Atomic
Energy Commission, the National Institutes of Health, and military research orga-
nizations such as the Office of Naval Research. By the time that NSF was finally
created, it had a circumscribed scope, although it did hold primary responsibil-
ity in the federal government for basic, as contrasted with mission-oriented, scientific
research. However, during the 1950s, less than 10% of the federal science budget
was set aside for basic scientific research. In this period up to 1960, the Office of
Naval Research, the Army Research Office, the National Bureau of Standards, the
Advanced Research Projects Agency, and the National Science Foundation all were
supporting computing research of one kind or another. NBS and NSF had basic sci-
ence missions in computing, whereas the other government funders of computing
were mission oriented.
None of the efforts, such as the Interdepartmental Committee on Scientific
Research and Development created in 1947, were successful at creating a unified
federal stance on shaping scientific research – and government support for scientific
research has remained pluralistic ever since. In 1956, for example, 38 federal agen-
cies applied to the Congress for funds to carry out scientific research and develop-
ment programs – almost all of them mission oriented. There was unevenness in the
distribution of these funds with much higher percentages going to the physical sci-
ences than to the biological sciences (and within the physical sciences, an unevenly
large percentage going to physics rather than chemistry). Funds were heavily
invested for defense needs, with less funding for environmental, health, or other
national needs. Later, this pluralistic approach was regarded as a strength when it
became apparent how poor a job the federal government did of identifying win-
ning technological directions or predicting how large a scientific workforce was
needed (Greenberg 2001).
Funding for computing at the National Science Foundation in the 1950s came
primarily out of the Mathematics and Physical Sciences Directorate, although there
was also limited support from the much smaller Engineering Directorate and from
the Director’s office for large capital expenditures on computing. The pluralistic
approach to federal funding of computing was most clearly seen in the
contrasting approaches taken by NSF and ARPA in the 1960s, as is described below.
The United States had a serious wake-up call in 1957, when the Soviet Union
successfully launched the artificial satellite Sputnik. Up until this time, the United
States had complacently assumed that it had a commanding lead in the scientific
realm over its Cold War rival, the Soviet Union. Sputnik led to various new federal
initiatives to support US science, including five major policy changes: (1) the cre-
ation of the National Aeronautics and Space Administration; (2) the creation of a
major program to fund advanced scientific education through the National Defense
Education Act; (3) the reorganization of military scientific research, with the cre-
ation of the Advanced Research Projects Agency to conduct all the basic research
related to military missions while the military research agencies were refocused on
advanced development for their particular military branch; (4) substantial increases
in funding to NSF; and (5) the creation of the President’s Science Advisory Committee.
Through the NASA-Ames Research Center, established in 1939 under the then
National Advisory Committee for Aeronautics (NACA), NASA had a long pedigree
of groundbreaking work in advanced computation before it began to make the shift
from human to digital computers (Hartman 1970). With the creation of NASA in
1958 and a substantially increased budget, NASA became a major customer and
driver in the development of computing technology. The increasingly intense calcu-
lation requirements of NASA and the NASA-Ames Research Center meant methods
of computation had to rely not only on new modes of digital work but also on the
hardware to drive this work. From calculating rocket launch trajectories to deter-
mining real-time orbits with minimal delay, the space race was a huge driver of
early digital computing. Two IBM 7090s installed in 1960 at NASA’s Goddard
Space Flight Center, for example, allowed real-time rocket trajectories to be calculated
for the first time. In its mission to put a man on the moon in the 1960s, NASA’s
computing needs, limited timeframe, and expansive budget resulted not only in
unique hardware modifications that would be unthinkable in most commercial oper-
ations but also innovations in the development of software such as Mercury Monitor,
which enabled lifesaving interrupt functions (Ceruzzi 1989, 2003a, b). The kinds of
direct access, instant calculation, and networked data that NASA required laid the
path for further computing hardware innovations and uses in the commercial sector,
including banking, airline reservations, and online data sharing.
The National Defense Education Act provided $800 million to support the edu-
cation of both undergraduate and graduate students – many of whom were mathe-
matics, science, or engineering majors – and to help develop both curriculum and
capacity in the nation’s colleges and universities. Between 1955 and 1960, college
enrollment increased by 44% – to 3.6 million students. These increases were valu-
able to the computing industry in the education of male engineers whom it could
employ. In oral histories with a number of the first generation of computer scientists
in the United States, the interviewees talked about how the NDEA provided them
with the funds for their college study, especially their graduate training (Clowse 1981).
However, women continued to be disadvantaged in both graduate-level technical
education and in technical careers. For example, women were limited in the range
of schools that would admit them (including very few of the most prestigious uni-
versities), the fields in which they could study (in the sciences, women were often
channeled into psychology and home economics), and the range of professors who
would agree to advise their study (see Aspray 2016 for further detail).
As a result of the Sputnik political climate, the Congress allocated a budget
increase of 269% to NSF for FY 1959. In subsequent years, the Congress continued
to increase the NSF budget, though at more modest rates. This growth of NSF’s
budget, plus increased research expenditures in the mission agencies, meant that the
percentage of the federal R&D budget devoted to basic research grew from 3%
before Sputnik to 9% in 1961. However, NSF contributed only 9.3% of the federal
basic research budget in 1961; the rest remained under the control of the mission
agencies, which had a narrower view of basic research (Bremner and Reichard 1982).
NSF had only minor programs to support computing prior to Sputnik. It ran a
small computing facilities program to provide computing power to universities for
scientific research. Until 1959, computer purchases for universities were made in an
ad hoc way, but once the Congress appropriated the large budget increase to NSF in
1959, a long-term, separately funded program for computer facilities was estab-
lished. The same pattern occurred in computer education: NSF spent little funding
on computer education prior to Sputnik, but in 1959 it established a program that
carried into the 1960s to offer graduate fellowships and traineeships, fund teacher
institutes to train people to teach college-level computer science courses, and sup-
port curriculum development. Research grants also increased substantially after
Sputnik. In fact, much of the advancement in theoretical computer science was the
result of increased NSF research funding (Aspray and Williams 1993, 1994; Aspray
et al. 1996).
Over time, various federal agencies contributed to the development of computing
in the United States, e.g., the research offices of the Army and Navy, the Atomic
Energy Commission, the energy laboratories, the National Institutes of Health, the
National Bureau of Standards (later NIST), and the National Security Agency and
other defense agencies. However, the two main federal agencies that contributed to
computing were the Advanced Research Projects Agency (ARPA, later DARPA)
and the National Science Foundation. These organizations had different missions
and different funding practices. ARPA was responsible for building a strong scien-
tific base for defense purposes, and it tended to concentrate large amounts of fund-
ing in a few research areas and in a few elite institutions. NSF was responsible for
the overall health of the scientific community in the United States, and it tended to
spread its funds much more widely across many different project areas and across a
larger number of both elite and non-elite institutions. ARPA had major successes in
advancing the fields of time-sharing, graphics, and networking. NSF contributions
were more widely scattered across technical areas of computing.
In 1951 President Truman created the Science Advisory Committee as a unit of
the Office of Defense Mobilization to advise the president on science matters,
especially as they related to the military. The committee did useful but low-profile
work under the successive chairmanships of the communications engineer Oliver
Buckley and the physicists Lee DuBridge and I.I. Rabi. The activity was upgraded
in 1957 by President Eisenhower as a direct response to Sputnik. Eisenhower cre-
ated the President’s Science Advisory Committee (PSAC), reporting directly to the
office of the President. He appointed the management scholar James Killian, then
president of MIT, as his first presidential science advisor. In the
late 1950s and throughout the 1960s, PSAC advised on various issues concerning
the role of science in the welfare of the nation, especially in support of defense. (The
committee was terminated in 1973 by President Nixon, who objected to its indepen-
dence and its opposition to his plans for a supersonic transport aircraft and an anti-
ballistic missile defense program.) The science advisor role had its greatest early
significance for computing in the 1960s as the nation grappled with the demand for
high-priced computing equipment by its many colleges and universities for both
educational and research purposes.
9  Foregrounding the Background: Business, Economics, Labor, and Government… 175

9.5  Conclusions

From this brief review, we can see the shaping influence of business, economics,
labor, and government policy on the demand, development, purchase, and use of
early digital computers in the United States in the period from 1945 to 1960. The
industrial origins of individual firms shaped their organizational capabilities and
determined their strategy and success in the mainframe computer industry. IBM and
some of its competitors gained various capabilities by pursuing defense contracts in
the Cold War era. With built-up consumer demand after the Great Depression and
the Second World War, many industries flourished in the postwar period, and the
computing industry supported their administrative, manufacturing, and sometimes
their research activities. The Cold War also made the defense industry a major busi-
ness customer of the computer firms. The GI Bill, which was intended to reward
servicemen for their service during the war, as well as manage their return to the
private sector, enabled many individuals to gain the engineering training that pre-
pared them for technical work in the computing industry. Unfortunately, these
opportunities were not equally available to women and minorities, who had limited
opportunities for both advanced education and placement in technical jobs in the
computing industry and in other scientific and engineering positions more gener-
ally. A sharp change in science policy led the federal government to take an active,
hands-on role in science beginning with the Second World War and continuing
afterward. This had a profound impact on computer science research and education
through the programs of the NSF and ARPA and through the massive college and
graduate education programs of the NDEA. In these and many other ways, business,
economics, labor supply and demand, and government policy had a massive shap-
ing impact on the computing field in the years from 1945 to 1960.


References

Abbate, Janet. 2012. Recoding Gender: Women’s Changing Participation in Computing. Cambridge, MA: MIT Press.
Akera, Atsushi. 2008. Calculating a Natural World: Scientists, Engineers, and Computers During
the Rise of U.S. Cold War Research. Cambridge, MA: MIT Press.
Appel, Toby A. 2000. Shaping Biology: The National Science Foundation and American Biological
Research, 1945-1975. Baltimore, MD: Johns Hopkins University Press.
Aspray, William. 2016. Women and Underrepresented Minorities in Computing: A Historical and
Social Study. Cham, Switzerland: Springer International.
Aspray, William and Williams, Bernard O. 1994. “Arming American Scientists: NSF and the
Provision of Scientific Computing Facilities for Universities, 1950-1973,” IEEE Annals of the
History of Computing, vol. 16, no. 4, pp. 60–74.
Aspray, William and Williams, Bernard O. 1993. “Computing in Science and Engineering
Education: The Programs of the National Science Foundation,” Electro/93 International, vol.
2 (Communications Technology & General Interest), IEEE and ERA, Conference Record,
Edison, NJ, April 27-29.
176 W. Aspray and C. Loughnane

Aspray, William, Williams, Bernard O., and Goldstein, Andrew. 1996. “The Social and Intellectual
Shaping of a New Mathematical Discipline: The Role of the National Science Foundation
in the Rise of Theoretical Computer Science and Engineering.” In Ronald Calinger, ed. Vita
Mathematica: Historical Research and Integration with Teaching. Mathematical Association of
America Notes Series.
Barnard, John. 1982. “American Workers, the Labor Movement, and the Cold War, 1945-1960,”
pp. 115–145 in Bremner and Reichard, op. cit.
Bassett, Ross. 2007. To the Digital Age: Research Labs, Start-up Companies, and the Rise of MOS
Technology. Baltimore: Johns Hopkins University Press.
Bix, Amy Sue. 2013. Girls Coming to Tech: A History of American Engineering Education for
Women. Cambridge, MA, MIT Press.
Bremner, Robert H. and Reichard, Gary W. eds. 1982. Reshaping America: Society and Institutions, 1945-1960. Columbus: Ohio State University Press.
Campbell-Kelly, Martin. 1990. ICL: A Business and Technical History. Oxford, UK: Oxford University Press.
Cayton, Mary Kupiec et al., eds. 1993. “The Postwar Period Through the 1950s.” Encyclopedia of
American Social History. New York: Charles Scribner's Sons.
Ceruzzi, Paul. 1989. Beyond the Limits: Flight Enters the Computer Age. Cambridge, MA: MIT Press.
Ceruzzi, Paul. 2003a. A History of Modern Computing. Cambridge, MA: MIT Press.
Clowse, B.B. 1981. Brainpower for the Cold War: The Sputnik Crisis and National Defense
Education Act of 1958. Westport, CT: Greenwood Press.
Cortada, James W. 1993. The Computer in the United States: From Laboratory to Market, 1930 to
1960. Armonk, NY: M.E. Sharpe.
Cortada, James W. 2003. The Digital Hand: How Computers Changed the Work of American
Manufacturing, Transportation, and Retail Industries. Oxford: Oxford University Press.
Cortada, James W. 2005. The Digital Hand: Volume II: How Computers Changed the Work of
American Financial, Telecommunications, Media, and Entertainment Industries. Oxford:
Oxford University Press.
Cortada, James W. 2007. The Digital Hand: Volume III: How Computers Changed the Work of
American Public Sector Industries. Oxford: Oxford University Press.
Cortada, James W. 2016. All the Facts: A History of Information in the United States Since 1870.
New York: Oxford University Press.
Ceruzzi, Paul E. 2003b. A History of Modern Computing. Cambridge, MA: MIT Press.
England, J. Merton. 1963. A Patron for Pure Science: The National Science Foundation’s Formative
Years, 1945-1957. Washington, DC: National Science Foundation.
Edwards, Paul. 1997. The Closed World: Computers and the Politics of Discourse in Cold War
America. Cambridge, MA: MIT Press.
Ensmenger, Nathan L. 2012. The Computer Boys Take Over: Computers, Programmers, and the
Politics of Technical Expertise. Cambridge, MA: MIT Press.
Fiske, Emmett. 1989. “From Rolling Stones to Cornerstones: Anchoring Land-Grant Education in
the Counties through the Smith-Lever Act of 1914,” Rural Sociologist 9(4): 7–14.
Flamm, Kenneth. 1988. Creating the Computer: Government, Industry, and High Technology.
Washington, DC: Brookings Institution.
Galambos, Louis. 1970. “The Emerging Organizational Synthesis in Modern American History,”
The Business History Review Vol. 44, No. 3 (Autumn), pp. 279–290.
Greenberg, Daniel. 2001. Science, Money, and Politics. Chicago: University of Chicago Press.
Haigh, Thomas. 2010a. “Computing the American Way: Contextualizing the Early U.S. Computer
Industry,” IEEE Annals of the History of Computing, vol. 32, no. 2, pp. 8–20.
Haigh, Thomas. 2010b. “Masculinity and the Machine Man: Gender in the History of Data
Processing,” in Gender Codes: Why Women are Leaving Computing ed. Thomas J. Misa, IEEE
Computer Society Press: 51–72.
Haigh, Thomas and Petri Paju. 2016. “IBM Rebuilds Europe: The Curious Case of the Transnational
Typewriter,” Enterprise & Society 17:2 (June): pp. 265–300.
Haigh, Thomas, Mark Priestley, and Crispin Rope. 2016. ENIAC in Action: Making and Remaking
the Modern Computer. Cambridge, MA: MIT Press.
Hart, D.M. 2010. Forged Consensus: Science, Technology, and Economic policy in the United
States. Princeton, NJ: Princeton University Press.
Hartman, E.P. 1970. Adventures in Research – A History of Ames Research Center, 1940–1965.
Washington, DC: NASA.
Heide, Lars. 2009. Punched-card Systems and the Early Information Explosion, 1880-1945.
Baltimore: Johns Hopkins University Press.
Hyman, Harold. 1986. American Singularity. Athens, GA: University of Georgia Press.
Johnson, Arthur M. 1982. “American Business in the Postwar Era,” pp. 101–113 in Bremner and
Reichard, op cit.
Jones, Kenneth M. 1986. “The Government-Science Complex,” pp.  315–348 In Bremner and
Reichard, op.cit.
Thorndike, Joseph J. 2011. “Tax History: The Fifties: From Peace to War,” March 31. Tax History Project.
Keats-Rohan, K.S.B., ed. 2007. Prosopography Approaches and Applications: A Handbook.
Oxford: University of Oxford Linacre College Unit for Prosopographical Research.
Kevles, Daniel. 1977. “The National Science Foundation and the Debate Over Postwar Research
Policy, 1942-1945: A Political Interpretation of Science – The Endless Frontier,” Isis, vol. 68
(March 1977): pp. 5–26.
Kleinman, Dan. 1994. “Layers of Interest, Layers of Influence: Business and the Genesis of the
National Science Foundation,” Science, Technology, and Human Values, vol. 19: pp. 259–282.
Kleinman, Dan. 1995. Politics of the Endless Frontier: Postwar Research Policy in the United
States. Durham, NC: Duke University Press.
Lecuyer, Christophe. 2005. Making Silicon Valley: Innovation and the Growth of High Tech.
Cambridge, MA: MIT Press.
Lomask, Milton. 1976. A Minor Miracle: An Informal History of the National Science Foundation.
Washington, DC: U.S. Government Printing Office.
Maddox, R.F. 1979. “The Politics of World War II Science: Senator Harley M. Kilgore and the
Legislative Origins of the National Science Foundation,” West Virginia History, vol. 41 (1):
pp. 20–39.
Campbell-Kelly, Martin, William Aspray, Nathan Ensmenger, and Jeffrey Yost. 2014. Computer:
A History of the Information Machine. 3rd ed. Boulder, CO: Westview Press.
Mazuzan, George. 1988. The National Science Foundation: A Brief History. Washington, DC:
National Science Foundation.
Misa, Thomas, ed. 2010. Gender Codes: Why Women are Leaving Computing. Hoboken, NJ: John
Wiley and Sons, Inc.
Misa, Thomas. Forthcoming. “Gender Bias in Computing,” in William Aspray, Historical Studies
in Computing, Information, and Society: The Flatiron Lectures. Cham, Switzerland: Springer.
Mounier-Kuhn, Pierre. 1999. L’informatique en France de la Seconde Guerre mondiale au Plan
Calcul. Science, Industrie, Politique Gouvernementale (Paris: CNAM).
Neal, Homer A., Tobin L. Smith, and Jennifer B. McCormick. 2008. Beyond Sputnik: U.S. science
policy in the twenty-first century. Ann Arbor: University of Michigan Press.
Norberg, Arthur L. 2005. Computers and Commerce. Cambridge, MA: MIT Press.
Paglin, M.D. 1989. A Legislative History of the Communications Act of 1934. Oxford: Oxford
University Press.
Puaca, Laura Micheletti. 2014. Searching for Scientific Womanpower. Chapel Hill, NC: University
of North Carolina Press.
Pugh, Emerson W. 1995. Building IBM: Shaping an Industry and its Technology. Cambridge, MA:
MIT Press.
Reingold, Nathan. 1987. “Vannevar Bush’s New Deal for Research: Or the Triumph of the Old
Order,” Historical Studies in the Physical and Biological Sciences, vol. 17(2): pp. 299–344
Rose, Henry. 1962. “A Critical Look at the Hatch Act,” Harvard Law Review 75(3): 510–526.
Rossiter, Margaret. 1995. Women Scientists in America: Before Affirmative Action, 1940-1972.
Baltimore: Johns Hopkins University Press.
Rubin, Michael Rogers and Mary Taylor Huber, 1986. The Knowledge Industry in the United
States, 1960-1980. Princeton, NJ: Princeton University Press.
Smulyan, Susan. 1996. Selling Radio: The Commercialization of American Broadcasting 1920-­
1934. Washington, DC: Smithsonian Institution Press.
Stern, Nancy B. 1981. From ENIAC to UNIVAC. Bedford, Mass: Digital Press.
Usselman, Steve. 1993. “IBM and Its Imitators: Organizational Capabilities and the Emergence
of the International Computer Industry,” Business and Economic History Vol. 22, No. 2 (Fall).
Usselman, Steve. 1996. “Fostering a Capacity for Compromise: Business, Government, and the
Stages of Innovation in American Computing,” Annals of the History of Computing Vol. 18,
No. 2 (Summer 1996): 30–39.
Vogel, William F. undated. Shifting Attitudes: Women in Computing, 1965-1985. https://www. (accessed 5 June 2018).
Wang, J. 1995. “Liberals, the Progressive Left, and the Political Economy of Postwar American
Science: The National Science Foundation Debate Revisited,” Historical Studies in the Physical
and Biological Sciences, vol. 26: pp. 139–166.
Yates, JoAnne. 2008. Structuring the Information Age: Life Insurance and Technology in the
Twentieth Century. Baltimore: Johns Hopkins University Press.
Chapter 10
“The Man with a Micro-calculator”:
Digital Modernity and Late Soviet
Computing Practices

Ksenia Tatarchenko

Abstract  Technology played a defining role in the socialist version of modernity
across the entire life span of the Soviet state. During the 1980s, the Soviet popular-
izers of computing technology mobilized the expressive power of Vertov’s 1929
masterpiece, The Man with a Movie Camera. When the nation’s most prominent
popular scientific magazine, Nauka i Zhizn' [Science and Life], started a column
devoted to both playful and serious applications of programmable calculators, it
was titled “The Man with a Micro-calculator.” In this chapter, I argue that this refer-
ence reflected a consistent late Soviet preoccupation with introducing the popula-
tion to a “digital” version of the socialist technological modernity, where a modest
digital device, the programmable calculator, played a key role. I trace the massive
scale of diffusion of computing practices around programmable calculators during
the last decade of the Soviet Union’s existence to exploit the nonlinear temporality
encompassed in the notion of “early digital.” Breaking with the established chronol-
ogy of hardware development culminating with the so-called “Personal Computer
Revolution,” the “early digital” helps to reveal how the “man with a micro-calcula-
tor” was imagined as the man of the future.

The 1929 Soviet silent movie The Man with a Movie Camera is a milestone in
the world history of cinema. Shot at the time of the dramatic transformation of the
Soviet Union that became known as the “great break,” the film celebrated industri-
alization and its world of machines. The titular “movie camera” was at once the
film’s subject and its tool of production, accentuating the dialogical relationship
between human and machine. The “man” was an operator. He documented social
interactions with a whole spectrum of modern technologies of the period, ranging
from personal items such as an alarm clock to the bureaucratization of family life to
the large-scale infrastructures, including urban public transportation and production
facilities (Roberts 2000; Hicks 2007).
K. Tatarchenko (*)
Geneva University, Geneva, Switzerland

Half a century later, the man was still celebrating Soviet technological modernity,
but this time he was wielding a programmable calculator. When the nation’s
most prominent popular scientific magazine, Nauka i Zhizn' [Science and Life],
started a column devoted to both playful and serious applications of programmable
calculators, the column was titled “The Man with a Micro-calculator.” With its cir-
culation of some three million copies per issue and even larger effective readership,
the magazine’s editors recapitulated Vertov’s strategy of harnessing entertainment
to enlighten and technology to transform Soviet society (Kuklin 2017).
This was just one prominent instance of a widespread phenomenon. The bibliog-
raphies featured in Nauka i Zhizn’ point to a mass-scale publishing of literature on
programmable calculators throughout the 1980s: in about a decade’s time, the Soviet
publishing industry turned out several hundred titles. These titles ranged from text-
books for students and popular accounts printed in hundred thousand copies per
edition to highly specialized volumes and articles devoted to different professional
applications of programmable calculators, with modest prints produced mostly by
local presses.1
Despite continuities between early and late Soviet efforts to harness new tech-
nologies to build a new society, the late Soviet version of the early digital illumi-
nates the global story of digitization. As Ben Peters (2016) recently pointed out, a
focus on the rhetoric of centrally managed socialism versus innovative, transgres-
sive, and liberal capitalism can blind us to the reality that both systems relied on
state-driven technological projects, internal competition, and technological enthusi-
asm. Yet the sparse English-language publications on Soviet computing have
focused on discourses, rather than practices, and been more concerned with things
the Soviet Union didn’t do than with things that actually happened. The most promi-
nent examples are Slava Gerovitch’s account of Soviet cybernetics From Newspeak
to Cyberspeak (2002) and Peters’ How Not to Network a Nation (2016), which
discusses the failure of the Soviet projects to create a nationwide economic control
and production system. This focus on failure, built on the assumption of a Soviet
non-digitization, parallels the rhetoric of Soviet computing experts who themselves
mobilized the idea of a “computer gap” with the West when seeking resources.
However, it has distorted the literature on Soviet computing when compared to the
much better-developed study of Soviet nuclear energy and space exploration, which
acknowledges both achievements and disasters (Schmid 2015; Siddiqi 2010).
During the 1970s and early 1980s, the Soviet Union and the West had both experienced
the emergence of new computational practices often conceptualized in
terms of technological rupture. This rupture was called variously the “microelec-
tronics revolution,” the “microcomputer revolution,” the “postindustrial society,” or
the “information society.” Western governments made considerable investments in
programs to prepare their citizens for this new world: subsidizing the development
of personal computers and network services, funding television programs and edu-
cational initiatives, and in the case of France, deploying eight million domestic
terminals. A “hacker” culture of enthusiasts and tinkerers, epitomized by the famous
Homebrew Computer Club in Silicon Valley, drove the development of new hard-
ware and software.

 For examples of bibliographies, see and
papers.htm, last accessed on March 30, 2018.
The Soviet case is not exceptional in this regard: a similar mass culture of tinker-
ing and experimentation with programmable digital devices emerged about the
same time and was officially described in respect to the political goal of building
socialism. The ethos, operation, and materiality of this tinkering culture were not
identical to those of the West, however.
The renowned American expert on Soviet science and technology, Loren Graham
(1984), believed that “a hacker culture does not exist in the Soviet universities”
despite their strength in mathematical and theoretical aspects. I demonstrate that
Graham was looking in the wrong place for an “independent spirit” associated with
hands-on interaction with machines. The Soviet mass culture of programming grew
up around the programmable calculator, of which the USSR manufactured millions,
rather than the personal computer, which Soviet industry did not mass-produce.
It was not grounded in the universities and around computers but instead formed
around informal communities of calculator users coordinated via popular scientific
press. These communities were later mobilized for the national educational reform
promoting a universal diffusion of programming literacy.
As recent historical narratives increasingly describe computerization using
the term “digital revolution,” the exclusion of calculators, a literally and
obviously digital technology, is a paradox revealing underlying
presuppositions. The user communities around programmable calculators have
been systematically omitted in academic historiography. The most conspicuous
exception to this is Paul Ceruzzi’s standard A History of Modern Computing
(1998), which prominently features programmable calculators and their user
groups and newsletters as the vehicle through which ordinary professionals and
technological enthusiasts were first exposed to programmable digital technolo-
gies. Yet even Ceruzzi deploys these machines within his master narrative pri-
marily as a way of explaining why users were ready to embrace personal
computers. The peculiarity of the Soviet case allows us to decouple the calculator
from its status as a mere precursor of the digital era. A more nuanced analysis
of the similarities and differences between the Western and Soviet calculator
cultures is beyond the scope of this chapter, but the chapter does pay systematic
attention to comparisons and dependencies.
I start by showing important parallels between the Soviet and American trajecto-
ries of calculators as commodities by tracing the contours of the Soviet microelec-
tronics industry responsible for the mass production of calculators. Next, I turn to
the pages of Nauka i Zhizn’ to trace the meaning of “hands-on” work within the
Soviet material culture as well as the organization of the calculator user communi-
ties and the exploitation of the machines’ “undocumented” features. In the second
half of the chapter, I explore the domestication of programmable calculators in the
broader Soviet context of the state computerization campaign.
In what follows, I use the notion of “digital practices” to explore the range of
possibilities offered to Soviet users by the available technology. Recently analyzed
by the historians of computing Tom Haigh and Mark Priestley (2018) in the context
of early machines, the concept of “programmability” emerges at the junction of
hardware capabilities and human-machine interaction. When applied to
programmable calculators, it facilitates a break from the America-centered narratives
of technological triumphs. To appreciate the Soviet version of digital practices, we
need to focus less on the calculator qua the computer and more on its programmability.2

10.1  The Making of a Commodity

Miniaturization effected the penetration of electronic devices and products into every
sphere of life. It is often associated with “Moore’s Law”: the observed doubling of
the number of components per integrated circuit every year, later revised to every 18 months
(Mollick 2006; Brock 2006). In the mid-1960s, when Gordon Moore first formulated
his observations based on his work at Fairchild Semiconductor, “Silicon Valley”
was yet to obtain its name or status as the major center of innovation and production
(Lecuyer 2006; Mody 2017). At that time, the integrated circuit (IC) was one among
several promising paths to miniaturization, but all paths were costly to explore. The
driver of innovation in the Valley and elsewhere was not consumer products but
rather the military and aerospace. “Hardly any organization dealing with the
electronics field,” commented one industry participant in 1962 (Ramo 1962;
Atherton 1984), “remains untouched either directly or indirectly by the nation’s
guided missile and space programs.”
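The compounding implicit in Moore’s observation is easy to underestimate, and it can be sketched in a few lines. The starting count of 64 components and the two doubling periods below are hypothetical round numbers chosen for illustration, not figures taken from this chapter:

```python
# Compound growth implied by Moore's observation: the number of components
# per integrated circuit doubles once every fixed period.
def components(initial: int, years: float, doubling_period_years: float) -> int:
    """Project a component count after `years` of periodic doubling."""
    return round(initial * 2 ** (years / doubling_period_years))

# A hypothetical 64-component chip, projected a decade out:
print(components(64, 10, 1.0))  # annual doubling: 64 * 2**10 = 65536
print(components(64, 10, 1.5))  # 18-month doubling grows markedly slower
```

Ten annual doublings multiply the count roughly a thousandfold, which is why the exact length of the doubling period (one year versus 18 months) mattered so much to industry forecasts.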
In the West and East alike, the Cold War race turned the military into the first
customers and patrons of microelectronics, literally ready to pay more for less, as
microelectronic technology typically made for lighter onboard equipment. Civilian
products were often mere by-products. For instance, the Soviets revealed one of
their first mass-produced miniature consumer devices to an international audience in
1964: a transistor radio that used thin-film hybrid technology, in which two-dimensional
components were sputtered onto a ceramic substrate. Called the
“micro” (43 × 30 × 7.5 mm), the radio became a political gift to state leaders and
eventually a popular souvenir (Malashevich and Malashevich 2011).
In America, however, this dominance of federal funding over the electronics
industry faded in the 1970s with the opening of the large civilian markets. In his
classical overview of computing history, Paul Ceruzzi (1998) observed that whereas
the development of the metal-oxide semiconductor (MOS) in the late 1960s made it
possible to develop a computer-on-a-chip, “that did not mean that such a device was
perceived as useful” (p. 217). Calculators were the devices that both showed what
ICs could do and demonstrated the economies of scale governing their production.
During the 1970s, many actors stepped into the new niche of producing calculators, and the
prices were brought down drastically. As the simplest calculators became giveaway
commodities, the high-end (and more profitable) programmable calculators, with
performance characteristics rivaling that of a computer, were developed. Targeted at

 One of the major arguments against considering programmable calculators as computers
emphasizes that they are not general-purpose von Neumann machines: unlike general-purpose
computers, they store their programs in a memory deliberately kept separate from data.
10  “The Man with a Micro-calculator”: Digital Modernity and Late Soviet Computing… 183

engineers and scientists, these calculators publicized the idea of portable and per-
sonal computing before the advent of the “personal computer.”
In the Soviet case, the trend toward commodification was no less prominent but
predicated on the logic of state orders and the politics of showcasing. According to
Boris Malinovskii (1998), the Soviet Ministry of the Electronics Industry solicited
proposals in 1970 for demonstrating its achievements on the 100th anniversary
of Lenin’s birth. As the micro-calculator seemed a particularly suitable
demonstration object, the ministry funded two parallel projects for prototype devel-
opment.3 By early 1974, the first machines became available in Soviet retail under the
name “Elektronika B3-04,” where “B3” was the code for a series of electronic
“household appliances”: desktop electronic clocks, for example, were “B2” and
wristwatches “B5” (Frolov updated online resource). In a few
years’ time, there were many more models of pocket calculators: the editors of
Nauka i Zhizn' guided the readers by publishing comparative tables (Nauka i Zhizn’,
no. 9, 1981: 46–50).
The mass production of Soviet pocket calculators started in the mid-1970s and
the programmable ones at the end of the decade, only a few years behind the Western
and Japanese benchmarks. A 1990 publication by Trokhimenko devoted to the technical
characteristics of several Soviet programmable calculators contains tables listing
the main features of some prominent Soviet models alongside a number of Hewlett-Packard
(HP) and Texas Instruments models. The comparison with the HP models
is particularly insightful, as the Soviet developers clearly appreciated and utilized
the advantages of the reverse Polish notation used by HP. In Table 10.1, I reproduce
the information about the Soviet and HP devices, complemented with additional data
regarding the dates of production runs and retail prices upon first release.
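Reverse Polish notation dispenses with parentheses and an “=” key: operands are pushed onto a stack, and each operator immediately consumes the topmost stack entries. The following minimal sketch illustrates this evaluation model in general terms; the function name and token format are illustrative and do not reproduce any particular calculator’s firmware or its fixed four-register stack.

```python
# Minimal reverse Polish notation (RPN) evaluator, illustrating the stack
# discipline used by HP and Soviet programmable calculators.
# Token format and function name are illustrative, not historical.
def eval_rpn(tokens):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # second operand, entered last
            a = stack.pop()   # first operand, entered first
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# The algebraic expression (2 + 3) * 4 is keyed in as: 2 3 + 4 *
print(eval_rpn("2 3 + 4 *".split()))  # 20.0
```

Because intermediate results simply remain on the stack, no parenthesis keys are needed, which is one reason engineers valued the notation on keystroke-programmable machines.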
Whereas a controversy rages between the Western and Russian accounts of the
beginnings of the Soviet electronic industry and the foundation of Zelenograd, the
Soviet center of the electronic industry, it is important to situate the large-scale pro-
duction of calculators within a national geography that does not depend on the lim-
ited analogy between Zelenograd and the Silicon Valley (Usdin 2005; Malashevich
2007). By the early 1970s and the mass production of the first Soviet electronic
calculators, the Zelenograd center employed some 13,000 workers. But the center
itself was only the tip of the iceberg: a part of the larger national conglomerate of
some 40 organizations with a workforce of 80,000. Making calculators a Soviet
commodity was predicated on large-scale industrial development designed to serve
the needs of both the defense sector and the civilian population. For example, as
observed by the American intelligence experts in 1989 (CIA SOV 23306 1989),
some 200 plants throughout the Soviet Union were involved in the production of
microelectronic equipment, including Riga semiconductor plant, Angstrem plant in
Zelenograd, and Svetlana Production Association in Leningrad. The very same pro-
duction complexes were often involved in manufacturing the calculators.

See chapter “Mikroelektronika v Ukraine: proshloe bez budushchego?” available at http://mog-, last accessed
on March 30, 2018.

Table 10.1  Comparative characteristics of the Soviet and HP devices. Technical elements come from Trokhimenko (1990)

Device | Price | Mantissa digits/exponent | Registers: data/operational | Program steps | Alphabetic symbols | Direct/indirect addressing | Mag cards/peripherals | Size (mm) | Production runs
HP-33E | $100 | 7/2 | 8/5 | 49 | 50 | Direct | –/– | 130 × 68 × 30 | 1978–1983
HP-38E | $120 | 7/2 | 25/5 | 99 | 44 | D | –/– | 130 × 68 × 30 | 1978–1981
HP-55 | $395 | 10/2 | 20/5 | 49 | 68 | D | –/– | 152 × 68 × 30 | 1975–1977
HP-25C | $200 | 8/2 | 8/5 | 49 | 72 | D | –/– | 130 × 68 × 30 | 1976–1978
HP-29C | $195 | 8/2 | 30/5 | 98 | 72 | D/I | –/– | 130 × 68 × 30 | 1977–1979
HP-19C | $345 | 8/2 | 30/5 | 98 | 89 | D/I | –/yes | 152 × 81 × 34 | 1977–1979
HP-67 | $450 | 10/2 | 26/5 | 224 | 84 | D/I | yes/– | 128 × 80 × 15 | 1976–1982
B3-21 | RUB | 8/2 | 7/2 | 60 | 47 | D | –/– | 186 × 100 × 48 | 1977–1982
MK-46 | RUB 235 | 8/2 | 8/2 | 66 | 48 | D | –/yes | 280 × 240 × 28 (desktop) | 1981–1984
MK-64 | RUB 270 | 8/2 | 8/2 | 66 | 49 | D | –/yes | 208 × 240 × 48 (desktop) | 1984
B3-34 | RUB | 8/2 | 14/5 | 98 | 64 | D/I | –/– | 185 × 205 × 48 | 1980–1986
MK-56 | RUB 126 | 8/2 | 14/5 | 98 | 64 | D/I | –/– | 208 × 205 × 60 (desktop) | 1981–1992
MK-54 | RUB | 8/2 | 14/5 | 98 | 64 | D/I | –/– | 167 × 78 × 36 | 1982–1985
MK-61 | RUB | 8/2 | 15/5 | 105 | 78 | D/I | –/– | 167 × 78 × 36 | 1985–1991
MK-52 | RUB | 8/2 | 15/5 | 105 | 80 | D/I | –/yes | 212 × 78 × 34.5 | 1983–1993

Retail prices and production dates come from the online collection of the Museum of HP Calculators at, last accessed on June 25, 2018
K. Tatarchenko

For example, the “B3-34,” the most popular Soviet programmable calculator,
was produced at the Ukrainian branch of the Soviet microelectronics industry, the
scientific-production complex “Kristal.” Created in 1970, the organization brought together
several research institutions and production facilities under the leadership of
S.A. Moralev, whose team successfully developed the calculator prototypes early
that year. In his overview of the complex’s rapid growth in the 1970s, Boris
Malinovskii (1998) traced the importance of its key engineers and administrative
leaders, as well as their close connections with the aviation and shipbuilding
industries, which were also interested in the miniaturization of control equipment. While he does
not emphasize the tremendous popularity of “Kristal’s” civilian output  – the
calculators – among Soviet users, his account reproduces some of the statistics:
whereas in 1974 “Kristal” produced some 200,000 LSI chips, 100,000 calculators,
and 200,000 desktop calculating devices, by 1991 the growth of the Ukrainian
electronic industry enabled the annual production of over 300 million chips.
Moreover, the illustration showcasing the production of the calculators, and not the
military equipment, also makes visible the feminized workforce of the Soviet
electronic industry (Fig. 10.1).
Appreciating the distributed geography of the Soviet industry beyond its focal
node in Zelenograd makes it possible to gauge the multiple roles of the foreign
technologies within it: from serving as functional analogues, to exact copies, to use
as elements within domestic equipment or as part of the production lines. The ways

Fig. 10.1  The assembly line of the scientific-production complex “Kristal,” Ukraine, 1978–1979.
The History of Development of Computer Science and Technologies in Ukraine: The European
Virtual Computer Museum. ( html, last accessed on June 25, 2018)

of acquiring Western technology were no less varied. Whereas the sensationalist
accounts focus on espionage (Melvern et al. 1984), the CIA analysts (CIA SOV
23306 1989) pointed out that legal means were often dominant. For instance, in the
1970s, the period when Soviet acquisition efforts peaked as the new Ministry of the
Electronics Industry (created in 1965) was expanding, the trade restrictions were
relatively loose. When the official restrictions tightened, the Soviets relied on
multiple strategies to acquire the embargoed devices contriving long chains of trade
diverters operating through Western Europe and Eastern Asia.
To sum up, questions of the “origin” of the Soviet microelectronics industry and
its structural dependence on the input of Western samples should not obscure this
industry’s capacity for mass-producing electronic devices as commodities. Whereas
the Soviet difficulties in retooling from LSI to VLSI are well-documented,
this was not the full story. The variety of models and the production
of millions of units testify to the planned economy’s capacity to master the supply
of certain chips. For example, the “К145ИК1302” and its companion chip,
produced by the Rodon plant in Ivano-Frankivsk, Ukraine, were the heart of the “B3-34.”
The follow-on models, known as “MK-54” and “MK-61,” used
chips of the “K745” series. The ease of obtaining an operational device today,
more than three decades after production, reveals something not only about the wide
distribution of these objects but also about Soviet production standards: Western analysts
tended to stress the high IC failure rate during production rather than the robustness
of the finished chips, which was better known to Soviet consumers.
This leads to the question addressed in the following section: once programma-
ble calculators became commodities, what discourses, practices, and user commu-
nities developed around them? Building on David Arnold’s (2013) idea that everyday
technologies can take a significant place in the daily lives of people and in the
national imaginary, I trace how the everyday usages of a small machine, the
calculator, complicate our understanding of Soviet large-scale structures such
as the electronic industry, popular science media, and the educational system.

10.2  Quirks and Tricks

The planned economy regulated the production of all Soviet goods, and the postpro-
duction life of civilian devices was not a priority for producers. Famously,
Soviet industry mass-produced TV sets in high numbers, but these devices were
notorious for their need for frequent repair, which generated abundant stories and
anecdotes about poor consumer service.4 Yet virtually no such criticism appeared
with respect to electronic calculators. I suggest that, beyond the quality of the
machines’ electronic elements, the explanation must account for the particular role

 There are detailed accounts of repair practices common among Soviet TV watchers documented
by online communities brought together by Soviet nostalgia; for an example, see https://www., last accessed on March 30, 2018.

of the advanced users. The widely distributed popular magazines demonstrate that
Soviet users blurred the boundary between repair and modification. These magazines
both circulated details about user modifications and normalized such user behavior,
occasionally in direct confrontation with the producers. Moreover, they became
forums for virtual communities of new social groups of users structured by the
degree of familiarity with the calculator’s internal workings and non-documented features.

The relationship between the producers and users of technologies has been an
important issue for understanding technological change, and the historiography
indicates that the sites where such interactions were located varied across
technologies and historical contexts (Haring 2007; Tinn 2011). In the case
of Soviet micro-calculators, popular science magazines initially served as important forums
for introducing new devices to potential users. Two articles by R. Svoren’ from
Nauka i Zhizn’ (1976, 1981) illustrated early efforts at representing the calculators
as the technology of modernity. The 1976 piece was titled “Fantastic Electronics”
and guided the readers as potential customers through a virtual tour of shopping for
the calculator and discovering its functions. The author established electronics as
the next big technology of modernity by opening his piece with a long list of
everyday technologies for millions: phones, automobiles, clocks, radios, and photo
and movie cameras. Svoren’ was both mobilizing the technological wonder
provoked by the “fantastic electronics” and normalizing its consumption as the next
ordinary machine.
As witnessed by the caption on the cover of Nauka i Zhizn' in September
1981, watches became the favorite technology of comparison, implying an important
continuity in the social valences associated with personal technical devices. Like the
watch, the calculator implied a certain social status as well as personal
virtues associated with self-discipline. Unlike the watch, however, the calculator
was seen as a machine automating mental labor, a difference that entailed
a particular interest in the hidden mechanism regulating the computation.
Reflecting in 1976 on the three seconds that the “B3-18” took to
calculate a trigonometric function (arctangent), Svoren’ commented that the delay
“provokes involuntary questions regarding the big work that is happening inside the
small box, before you can see the result lighting up on the indicator” (Svoren’
1976, p. 29). This interest in the workings of the device was met with substantive
technical commentary. The internal structure of calculators was literally exposed: the
1976 piece illustrated the creation stages of the IC (Fig. 10.2). Svoren’s later (1981)
article also featured a description of the digital electronic elements involved in the
performance of computation such as logic gates, adders, and registers; two full-­
color images depicting circuits, transistors, and fragments of LSI accompanied
detailed explanations about their operation. Although the popularizer suggested that
users need not understand the operation of the machine to benefit from the
calculator’s capacities, the follow-up publications in the new 1983 column “The
Man with the Micro-calculator” demonstrated that opening up and fiddling with the
device were common. The journal’s publications were an encouragement and a
guide for opening the “black box.”

Fig. 10.2  Calculator architecture and the schematic explanation of the main stages involved in the
production of the IC. (Source: Nauka i Zhizn', no. 10 (1976), illustration by M. Smolin)

The readers of Western magazines from the late 1970s and 1980s would eas-
ily recognize the content of the pages devoted to calculators, which carried captions
reading “Little Tricks” and “Memory Nods.” These subsections consisted of reader
letters featuring practical advice and clever gimmicks. Although the suggestions
occasionally included programming tips, programs typically formed publication
topics of their own, so most suggestions published under these subtitles
were devoted to the material practices of working with calculators.
The “little tricks” were shared experiences of preserving, enhancing, or modify-
ing calculators’ capacities. A frequent topic in the section was energy con-
sumption. For instance, some readers offered advice for prolonging battery
life. Having observed that after a certain period of use a white coating covered the
seal ring of the “B3-34’s” accumulator, dispersing the electri-
cal flow and shortening the accumulator’s working life, one letter suggested cleaning the entire
surface of the accumulator with an eraser and then applying a lubricant such as indus-
trial Vaseline (Nauka i Zhizn’, no. 6, 1985, 43). Another user found that the
“memory regime” drained the battery less than the “waiting regime” (when zero is on
the screen) (Nauka i Zhizn’, no. 6, 1989, 194). The owners of the MK-52 calculator
discovered that in addition to the accumulator “316,” specified in the instructions, it
was also possible to use another type (NGKTs-045) by making a small change
with a soldering iron, namely, switching the contacts from gate 4 to gate 3 (Nauka i
Zhizn’, no. 12, 1989, 74).
The editors encouraged users’ identity as co-creators of the calculator. This
aspect of the engagement among calculator users is most obvious in the magazine-­
led initiative launched in 1988, in which users were asked to imagine the perfect
micro-calculator. S.  Komissarov called upon the readers with the article “Let’s
Invent the Calculator!” (Nauka i Zhizn’, no. 6, 1988, 108–109) specifying that the
initiative was coordinated with the producers. As the editors expected, there
were many contradictory suggestions regarding price and technical
characteristics, demonstrating the diverging needs of different user groups. The
analysis of letters published by the magazine revealed that about a third of
respondents requested a doubling or tripling of the number of programming steps that
could be held in memory. The distribution of ideas on this issue points to
different priorities and programming skills among the audience: about a quarter of
respondents (those less interested in programming) suggested interchangeable memory units
with ready-made programs, while an equal proportion of expert users emphasized the
need for a less volatile and more energy-autonomous RAM (many older
models lost all memory content when unplugged). The majority of respondents
argued for preserving the layout and command system of the most popular machines
such as “MK-61,” the modernized version of “B3-34.” Only a minority of
respondents expressed an interest in changing the operation input from the reverse
Polish notation to the regular one, comparable to the American calculators “TI-
58C” and “TI-59.” Behind the user consensus regarding the continuity with existing
machines was the perceived need for compatibility and access to the existing
libraries of programs (Nauka i Zhizn’, no. 10, 1989, 134–136). This need had little
to do with national characteristics and was also widely discussed among the

American users of TI and HP calculators in the pages of the PPC Journal and TI
PPC Notes (Ristanović and Protić 2012).5
While the users speculated on the promise of modular architecture and on the types
of devices to include in families of calculators, they were already familiar with the
difficulties of upgrading to more recent models. In fact, Soviet users and producers
had different notions of “compatibility.” In the producers’ description, compatibility
was guaranteed only for the commands specified in the user manual (Nauka i Zhizn’, no.
11, 1989, 124); users wanted to run their existing programs without modification.
This represented a major issue because of the widespread reliance on undocu-
mented features. The pages of Nauka i Zhizn’ offered both “soft” and “hard” solu-
tions to the problem. For instance, one could simply interrupt the program and
perform the missing command manually, or add a subroutine. V.  Kudryavtsev
consulted with “Kristal” engineers and contributed a description of how to make
the “MK-61” operate as an “MK-54.” Their more radical solution, making a permanent
change to the machine’s wiring, required skill with a soldering iron and access to the
right kind of work environment (Nauka i Zhizn’, no. 6, 1989, 105).
In addition to coordinating the users, the magazine also established a space for
user-producer communication by occasionally publishing interviews and exchanges
with the calculators’ designers. The dialogue was not always productive. In reaction
to the suggestions collected by editors during the initiative “Let’s invent the calcula-
tor!” the producers raised their concerns regarding users’ interaction with the
devices and the users’ responsibility for what they qualified as “malfunctions.”
The creators and designers of the calculators can only guarantee the error-free solution of
tasks on the condition that the user does not transgress the rules of exploita-
tion. The “side effects” arise from the unsanctioned usage of operations or combinations of
commands which are not specified in the “user manual.” It is impossible to describe in the
text all combinations of two or three buttons from the 30 keys on the calculator’s keyboard.
It is best to refrain from using combinations not specified in the instructions
(Nauka i Zhizn’, no. 6, 1989, 105).

However, what were considered as “errors” and “side effects” by the designer
were often precisely the features exploited by users who came up with multiple
“non-sanctioned” uses of calculators to reconfigure an engineering tool into an
entertainment device.
The search for and discovery of the calculator’s undocumented features became
known under the mocking name of “Eggogologia,” or “Errology,” a term formed
from “ЕГГОГ,” the garbled version of the “Error” message appearing on the calculator’s display. The
author of one publication did not mince words and called the instances when the
machine’s commands did not operate properly “quirks,” all while offering alternative
ways to obtain the desired functions. Some undocumented functions were considered
useful, such as using the “↑” key to access the content of the zero register without
modifying it or enabling indirect addressing via an undocumented function of the

 Some issues of TI PPC Notes are archived at
Notes%20Articles.pdf, last accessed on June 25, 2018. For an example of PPC Journal issue, see, last accessed on June 25, 2018.

“K” key (Fink 1988, 90). But the most widely known branch of “Errology” was
concerned with “illegal” numbers having exponents beyond 99, which enabled
users to obtain various alphabetic and numerical combinations on the display,
a capacity that proved useful for games.
A major aspect of this digital practice was sharing one’s discoveries via the net-
work of popular journals, which sustained the formation of virtual user communi-
ties structured by age, skills, professional makeup, and other interests such as
science fiction (Tatarchenko manuscript). But the groups of Soviet users were not
necessarily informal. The hands-on and playful character of the interaction with the
machine was mobilized in Soviet education.

10.3  Collective and Algorithmic

Like the first users in the United States, the first Soviet users were specialists ready to
invest in the calculator as a working tool. Also as in the United States, the mass
production of programmable calculators unleashed the creative forces of expert
users. But the Soviet experience was also different. When the planned economy did
not match the Western pace and scale of producing personal computers, the
calculator was reconfigured into a driver of computerization and became an
educational prop, a tool for mental transformation, and an instrument of digital
daily life.
The process leading to the national diffusion of calculator-based digital practices
in the Soviet Union of the 1980s is best understood in the context of a statewide
programming education initiative (Kerr 1991; Boenig-Liptsin 2015). One aspect of
this education reform was the 1985 introduction of a compulsory course called “The
Basics of Informatics and Computer Technology,” oriented toward fostering programming
skills and habits of “algorithmic thinking.” Both the course and its algorithmic
orientation were the product of the efforts of a group associated with the Soviet
programming expert, Andrey Ershov (Ershov 1981; Tatarchenko 2017). The TV
lessons supporting “The Basics of Informatics” aired on television in 1986, turning
Ershov into the public face of the reform.
As in the West, the TV lessons and the press campaign reflected a mechanism
of support in which the popularization of a new technology depends on the
already dominant media. For instance, Tilly Blyth (2012) has documented the use of
television for the promotion of the BBC computer literacy project. Among the
Soviet materials for publicizing education reform, a short 1986 film, A Game with
the Computer, is a particularly rich source.6 The film reproduced both the political
and social aspects of the debate surrounding Soviet computerization and conveyed
the material reality, where the programmable calculators appear among devices
available to Soviet teenagers. Unlike today’s transformation of the computer into a

 Igra s komp’iuterom (1986), a film available on YouTube at
watch?v=CW_0eWBySdA, last accessed on June 25, 2018.

media machine, the film performs the opposite operation: the TV screen becomes a
proxy for the computer screen and the newspaper page, connecting the official
discourse of computerization with individual fascination with a still rare object, the
personal computer.
The official framework for the computer’s potential in a socialist society,
and for a victory of the world of labor over the world of capital, was the familiar
acceleration of the scientific-technological revolution, a key term in Soviet
official discourse since the 1960s. “Microelectronics, computer technology,
instrument making, and the entire information science industry,” said the new
General Secretary, Mikhail Gorbachev, in Pravda on June 12, 1985, “are the
catalysts of progress.”
In the publicity film, this official discourse is embodied in two cuts from an
interview with the vice-minister of education, whose comments draw heavily on the
stereotypes of Soviet bureaucratic language: the intensification of production,
the preparation of a new worker, and new accomplishments in all areas of production.
The bureaucrat also admits the challenges of computerization in education and
expresses his satisfaction at participating in such a big, future-oriented agenda.
This general tone, emphasizing collective efforts and their futuristic temporal vector,
was shared by most official and expert discourses on digital technology of the period.

The assurances of the Soviet leadership that all Soviet schools were to be
equipped with computing technology by the end of the Twelfth Five-Year Plan pointed to the
familiar problem of supply. While several ministries competed to fulfill the state’s
promise to deliver 1,000,000 school computers, alongside the shared computer
classes and educational computer centers, the make-do solutions often involved the
use of already mass-produced desktop and pocket programmable calculators.
These classroom digital practices benefited from the established experiences
with calculators used as part of the electives offered in some schools and taught by
the technology’s enthusiasts from the late 1970s (Boltianskii 1979; Kovalev and
Shvartsburd 1979). Some elite schools specializing in teaching physics and
mathematics were supplied with desktop programmable calculators: devices
such as “MK-46” and “MK-64” were used during laboratory work, where the
machines were connected to sensors, analogue-to-digital converters, or other
instruments (Narkiavichus and Rachukaitis 1985; Sozonov 1985; Zakharov et al.
1983). The importance of the calculator’s role in the educational setting is best
illustrated by a device that reversed the logic of miniaturization: a section of a Riga
institute dedicated to research in professional-technical training
developed a large, poster-sized version of the “B3-36,” the so-called maxi-calculator, to
facilitate classroom demonstrations (Romanovskii 1987, p. 42).
But the calculator’s, and especially the programmable calculator’s, status as a
suitable technology for fostering computerization was not so much due to its role as
a small computer, in the way that HP marketed its products. Rather, the educational
function of the calculators was predicated on an understanding of computerization
as “algorithmization,” in which the calculator became a platform for the algorithm’s
implementation.

In the second, 1987 edition of his Micro-calculators in Stories and Games, which
followed the 1984 version, T.B. Romanovskii observed that the book’s educational
component had gained significance since the introduction of the national computer
education reform. Among the author’s major revisions was a change in notation:
all the games for the programmable calculator were now represented twice, in the
notation adopted in the school books – the school algorithmic language developed
by Ershov’s team – and in the input language of the programmable calculators.
Romanovskii did not dwell on the Soviet difficulties of access to personal computers.
Instead, he highlighted the educational rationale for requesting a double effort from
the students, namely, working with the paper version of the algorithm first and then
implementing it with the help of the calculator.
Learning to translate the commands of assignment, branching, and repetition into the input
language of the programmable micro-calculator, the student not only becomes convinced of the
reality of the automation of intellectual labor, but also acquires the general skills of pro-
gramming as a process of translating algorithms into the language of a particular proces-
sor (Romanovskii 1987, p. 5).

The late Soviet digital practices localized within the context of school or after-
school educational activities emphasized a fundamental approach to programming.
On that account, the calculator’s role as an educational tool was predicated on its
programmability. Nor was this educational function confined to the classroom:
many children owned machines and carried them between school and home.

10.4  Digital Domestic

The pocket size made calculators portable, a characteristic that provokes a reflection
on these devices’ transition between different environments. By following the
movement of calculators between work, school, and domestic milieus, we can
examine the distinctive roles assigned to different social groups in late Soviet
society and the articulation of digital technology as it crosses the boundary between
work and leisure and between public and domestic spheres. The concept of
domestication, developed in media and communication technology studies, is
particularly insightful for its emphasis that formal and moral economies of daily
media consumption are mutually constitutive (Silverstone and Hirsch 1992; Berker
et al. 2006).
In the Soviet case, however, the planned economy’s orientation on fulfilling
national needs makes the domestication dynamics a reflection not only of social
structures but also of political agenda. If national computerization was the general
frame related to the wide distribution of programmable calculators, then the Soviet
version of their domestication was predicated on the efforts that mediated the
introduction of calculators into schools and homes.
Whereas the “maxi-calculator” embodied the calculator’s function as a collective
device and an instrument of the state computer literacy agenda, a recurring theme across

the readers’ letters to popular journals is the story of acquiring the device as a
personal possession. Most often, the younger readers requested and received
calculators as a birthday gift from family members. The perception that the
calculator was a device with potential utility in a youth’s future was part of the
official rhetoric that became internalized into the family’s logic of purchase. The
publications in family-oriented magazines such as Nauka i Zhizn' also reflect the
logic of the machine’s domestication. The editors illustrated the device’s usefulness
by regularly publishing programs devoted to activities associated with the
female domestic sphere, such as knitting, dieting, and shopping, content
reminiscent of Western publications for early home computers.
The shared content does not lessen important differences in the domestic struc-
tures enabling the Soviet family’s encounter with the digital device. Beyond its
obvious function as a scientific instrument, the calculator was represented as a focal
point of social interactions – a warrant of future professional success for younger
generation, a tool of an immediate household value for both domestic chores and
entertainment, and a catalyst of communications among relatives and friends.
Following this logic, some publications introducing calculators to their target audi-
ence of youths chose to personalize their technical account by placing it in a fic-
tional family setting. Two representative publications that used the family for their
educational goals are The Five Evenings with the Micro-calculator by I.D. Danilov
and G.V. Slavin (1988) and Father, Mother, Me and Micro-calculator by L.M. Fink
(1988). The books’ labels stipulating their orientation toward a “wide public” are
borne out by democratic pricing and large print runs: 100,000 copies at 1 ruble 10 kopeks
and 120,000 at 60 kopeks, respectively. Both explicitly describe their target audience
as Soviet teenagers and use a controversy between fictional parents as a dramatic
device to enliven the narrative. The two publications highlight the accessibility of their
technical content and explicitly indicate the limited time investment necessary to
master the machine, using temporal categories (the evening and the month) to structure
their narratives. This is also the point of a major difference between the two books.
The “five evenings” in the title of Danilov and Slavin’s book correspond to
encounters spread across a period of several months, each emphasizing a
different aspect of daily life in which the calculator could be used. Accordingly,
although the difficulty of the programs increases progressively, the goal is less full
mastery than keeping the readers’ interest in both basic and advanced usage –
that is, in running already published, ready-to-use programs or in creating new
ones. In Fink’s book, one month of daily study is the period during which the
fictional character achieves proficiency in programming. Thirty short chapters
document the learning stages, but the total volume of the publication is also
considerably larger, and the programming techniques are more varied.
The authors’ strategies for employing the domestic setting thus also show important
distinctions, marked by the professions of the female figures – one mother is a
historian, while the other is a professional programmer – a metaphor of the machine’s
accessibility versus a rhetorical tool for introducing more theoretical content. For
Danilov and Slavin, the calculator was “a simple and convenient commodity, which
could be used by a person of any age and any profession” (Danilov and Slavin 1988,
10  “The Man with a Micro-calculator”: Digital Modernity and Late Soviet Computing… 195

p. 2). In contrast, Fink stressed that “programming skills are necessary to people
of a variety of professions and of all ages” (Fink 1988, p. 3).
Their differences notwithstanding, the two accounts illustrate the key element
grounding digital practices within the domestic sphere. This element is
conversational exchange between generations, localized within the family’s living
spaces – or literally in the middle of the dinner table, according to one
playful illustration in The Five Evenings with the Micro-calculator. In
the words of the “mother” of the book’s fictional family, “The user manual is not
an oeuvre worth reading aloud” (Danilov and Slavin 1988, p. 11). The quip
allowed the authors to introduce a casual style of explanation explicitly drawing on
an oral tradition. Dialogues were more generally crucial for driving the narrative.
For example, the book announced its goal of preparing readers to embrace the
forthcoming “world of computers” and introduced the machine’s varied scientific
capacities by associating them with different professionals, personified as guests
who came to discuss the matter.
The fictional family setting allowed machine and human genealogies to intertwine.
The two authors reminded the audience of the Soviet computing tradition:
the plot included a visit by the “grandparents,” during which the “grandfather” shared
his memories of the development of the first Soviet computers in the 1950s (Danilov
and Slavin 1988, p. 78). From a technical point of view, the memoirs were but a
literary device used to reflect on the storage of instructions in calculators and its
relationship to pre-ENIAC computer architecture. But the choice of this particular
literary device for making a technical point also carried a larger social function.
Here, the chronological line of technological development became a genealogy
emphasizing a tradition of technical enthusiasm passed down from generation
to generation. The form of this narrative – oral recollections shared in a close
domestic circle – was itself of particular significance within the milieu of the Soviet
technical intelligentsia. Barbara Walker (unpublished manuscript) argued that the
recollections of the Soviet computer pioneers reproduce the discursive structure and
functions typical of the traditional Russian intelligentsia culture. Similarly, Slava
Gerovitch documented that oral tradition was crucial for the professional identity of
Soviet rocket engineers and cosmonauts (Gerovitch 2014, 2015).
Despite its greater length, Father, Mother, Me and Micro-calculator does not
reproduce many details of Soviet social life, and its fictional family plays a more
schematic and functional role, in which both parents’ expertise in engineering and
programming is deployed didactically (Fink 1988). Yet the geography of Leningrad and
the choice of certain programs reveal ties to the real identity of the author. Behind
the fictional diary of a 15-year-old novice programmer was one of the most renowned
Soviet experts in telecommunications, and the fictional family recreated a
multigenerational domestic setting no longer within the author’s reach. Born in 1910,
L.M. Fink died only a few months after the release of this, his last book; his life story
reflects the complexity of Soviet experts’ roles in mediating the state’s goals and the
extent to which digital practices were infused with the technical intelligentsia’s values.
A radio amateur in his youth, Fink later became an engineer at one of
Leningrad’s key institutions for radio and subsequently microelectronics production; a
military expert awarded the Stalin Prize for his wartime achievements; and a
researcher working on information theory, an occupation that led him to encounters
with Norbert Wiener and Claude Shannon. During the academic part of his career,
Fink also turned to history and education. His commitment to popularizing
technical skills – literally to his last breath – did not stem from a full embrace of
the system’s ideological goals. The personal drama of his children’s emigration
to the United States had negative repercussions on his professional standing
(Bykhovskii 2006, pp. 158–61). Fink regarded both the moral responsibilities
associated with technological expertise and the study of computerization as part of
education writ large.
His choice of the diary form was far from accidental: the diary was a key
instrument of self-discipline among members of the Russian and Soviet intelligentsia
(Hellbeck 2006; Paperno 2009). Moreover, Fink’s insistence on documenting
multiple versions of the same program reveals an association between programming
and the ethic of self-determination, also familiar from Ershov’s pedagogy
of programming. The practice of writing as work on the self, together with work on
program documentation, formed the essence of Fink’s book. Programmability in this
context was not only a characteristic of the calculator. The program became a site
that made it possible to shift the center of gravity from the machine toward human
skill and personal development.
This human-centered version of programmability helps us appreciate the manifest
differences from the Western cultural and political valences of personal
computing, which entered popular culture through artifacts such as Apple’s 1984
Super Bowl commercial and Lee Felsenstein’s motto, “To change the rules, change
the tools,” and became the subject of critical analysis by Fred Turner (2006). Even when
Soviet domestic digital cultures drew on Western programming content, the
social baggage associated with metaphorical and real-life generations of Soviet
experts infused digital practices with distinct values. Whereas both the socialist and
the capitalist domestication scenarios of computerization attributed the future to the
figure of a young male, the Soviet authors reproduced the social mechanisms of the
technical intelligentsia milieu and associated the embrace of digital practices with
the virtue of responsibility.

10.5  Conclusions: The Digital Unbound

The fall of the Berlin Wall and the ensuing collapse of the Soviet Union, six decades
after The Man with a Movie Camera reached its first audiences, set in motion the
precipitous integration of the Soviet economy with the world marketplace, which rendered
most Soviet domestic technology obsolete. Western observers singled out the poor
state of Soviet information technology as symbolic of the crisis of the authoritarian
system in the face of the rapid innovation and technological progress
associated with liberal economics and democracy (Castells 1998; Rohozinski 1999).
In this light, the “absence” of computers in the Soviet Union was simultaneously
perceived as a cause and a consequence of the failure of Soviet modernity.
Personal computers and computer networks embodied the Western values of free
information and self-determination. The uses of calculators documented above
problematize this received narrative and demonstrate the complex interactions,
similarities, and differences shaping Soviet and Western mass encounters with
digital technology.
The Soviet electronic industry also produced millions of calculators in different
models, making them – unlike personal computers – a generally accessible
commodity. The relative simplicity of the machine allowed for expert user intervention
and hardware and software modifications. Unlike the rapid commercialization
of software for personal computers in the Western context, Soviet distribution
continued to rely on social networks and state-sponsored print culture. The emergent
communities of users were governed by a moral economy of prestige and credit for
software development and sharing. Moreover, as I document more fully elsewhere
(Tatarchenko 2013), the Soviet model of computerization emphasized algorithmization
as a tool for thinking and the classroom as a key site for promoting universal
programming skills leading to personal and social transformation.
Like the technological modernity of the 1930s, the late Soviet machines and
programs could be borrowed. The many forms of Soviet digital practices reveal no
simple reproduction of the Western model, however. Some involved learning
reverse Polish notation and came with state-organized technology transfer;
some were products of hands-on interaction with the machine and came to
resemble their bottom-up Western counterparts by convergence; and, unlike in the
West, the digital often interlocked with the algorithmic, and electronic devices
depended on paper tools.
Reconstructing the original functions of calculators in late Soviet society
holds significance beyond a single national case. The nonlinear temporality
encompassed in the notion of the “early digital” has the potential to transform the
geography of computerization as we know it. The local appropriation and
re-conceptualization of digital practices enabled both convergence with and
opposition to capitalist technological modernity. The process is still playing
out in many communities.
The polyvalent character of calculators also helps identify important areas of
digital history left out of today’s historiographical research. Whereas the focus on
noncommercial methods of software production and distribution is one obvious
aspect, no less important is that the calculator was but one among many digital
devices – from laboratory electronic instruments to industrial controllers – that
played and still play crucial roles in our technically mediated environments.
Stepping aside from the “computer revolution” and refocusing on “early digital”
practices, we can recover the rich material life and moral virtues associated with
computerization across a variety of global contexts.
Acknowledgments  This chapter is a product of many discussions with the participants at the
“early digital” workshops. My special thanks go to Thomas Haigh and Paul Ceruzzi for their
engaged reading of the draft version and to Sergei Frolov for sharing his knowledge about Soviet
calculators.

References

Arnold, David. 2013. Everyday Technology: Machines and the Making of India’s Modernity.
Chicago: University of Chicago Press.
Atherton, W. A. 1984. From Compass to Computer. San Francisco: San Francisco Press.
Berker, T., M. Hartmann, Y. Punie, and K. Ward, eds. 2006. Domestication of Media and
Technologies. Maidenhead: Open University Press.
Blyth, Tilly. 2012. The Legacy of the BBC Micro: Effecting Change in the UK’s Cultures of
Computing. London: Nesta.
Boenig-Liptsin, Margo. 2015. “Making Citizens of the Information Age: A Comparative Study of
the First Computer Literacy Programs for Children in the United States, France, and the Soviet
Union, 1970–1990.” PhD diss., Harvard University.
Boltianskii, V. 1979. “Shkola i mikrokomp'iuter,” Matematika v shkole, no. 2.
Brock, David, ed. 2006. Understanding Moore’s Law: Four Decades of Innovation. Philadelphia:
Chemical Heritage Foundation.
Bykhovskii, M. A. 2006. Pionery informatzionnogo veka: Istoriia razvitiia teorii sviazi. Moskva:
Castells, Manuel. 1998. End of Millennium, The Information Age: Economy, Society and Culture,
vol. III. Cambridge, MA; Oxford, UK: Blackwell.
Ceruzzi, Paul. 1998. A History of Modern Computing, 2nd Edition. Cambridge, MA: MIT Press.
CIA SOV 23306. 1989. “The Soviet Microelectronics Industry: Hitting a Technology Barrier.”
A Research Paper by Office of Soviet Analysis, Office of Science and Weapons, Directorate
of Intelligence, available at
pdf, last accessed on March 30, 2018.
Danilov, I. D. and G. V. Slavin. 1988. Piat’ vecherov s mikrokal'kuliatorom. Moskva: Finansy i
Ershov, Andrei. 1981. “Programming, the Second Literacy.” In Computer and Education: Proc.
IFIP TC-3 3rd World Conf. on Computer Education, WCCE 81. Amsterdam: North Holland.
Fink, L. M. 1988. Papa, mama, ia i mikrokal'kuliator. Moskva: Radio i sviaz'.
Frolov, Serguei. “Istoria sovetskikh kal'kuliatorov,” text available:,
last accessed on March 30, 2018.
Gerovitch, Slava. 2002. From Newspeak to Cyberspeak: A History of Soviet Cybernetics.
Cambridge, MA: MIT Press.
Gerovitch, Slava. 2014. Voices of the Soviet Space Program: Cosmonauts, Soldiers, and Engineers
Who Took the USSR into Space. New York: Palgrave Macmillan.
Gerovitch, Slava. 2015. Soviet Space Mythologies: Public Images, Private Memories, and the
Making of a Cultural Identity. Pittsburgh, PA: University of Pittsburgh Press.
Graham, Loren. 1984. “Science and Computers in Soviet Society,” Proceedings of the Academy of
Political Science 35, no. 3, The Soviet Union in the 1980s: 124–134.
Haigh, Thomas and Mark Priestley. 2018. “Colossus and Programmability,” IEEE Annals of the
History of Computing, vol. 40, no. 4: 5–27.
Haring, Kristen. 2007. Ham Radio's Technical Culture. Cambridge, MA: MIT Press.
Hellbeck, Jochen. 2006. Revolution on My Mind: Writing a Diary under Stalin. Cambridge, MA:
Harvard University Press.
Hicks, Jeremy. 2007. Dziga Vertov: Defining Documentary Film. London: I. B. Tauris.
Kerr, Stephen. 1991. “Educational Reform and Technological Change: Computing Literacy in the
Soviet Union.” Comparative Education Review 35, 2: 222–54.
Komissarov, S. 1988. “Izobretaem kal'kuliator!” Nauka i zhizn', no. 6: 108–109.
Kovalev, M. and S. Shvartsburd. 1979. “O sovremennykh usloviiakh obucheniia schetu,”
Matematika v shkole, no. 2.
Kuklin, I. 2017. “Periodika dlia ITR: Sovetskie nauchno-populiarnye zhurnaly i modelirovanie
interesov pozdnesovetskoi nauchno-tekhnicheskoi intelligentsii,” NLO, no. 3: 61–85.
Lecuyer, Christophe. 2006. Making Silicon Valley: Innovation and the Growth of High Tech,
1930–1970. Cambridge, MA: MIT Press.
Malinovskii, B. N. 1998. Ocherki po istorii vychislitel'noi tekhniki v Ukraine. Kiev: Feniks.
Malashevich, B. M. 2007. “Utinaia okhota, ili o ‘prichastnosti amerikantsev k sovetskoi
mikroelektronike’,” Elektronika: Nauka, Tekhnologiia, Biznes, no. 5, at, last accessed on March 30, 2018.
Malashevich, B. and D.  Malashevich. 2011. “The Zelenograd Center of Microelectronics.” In
Perspectives on Soviet and Russian Computing: First IFIP WG 9.7 Conference, SoRuCom
2006, Petrozavodsk, Russia, July 3-7, Revised Selected Papers, edited by John Impagliazzo
and Eduard Proydakov. New York: Springer, 2011: 152–163.
Melvern, Linda, Nick Anning, and David Hebditch. 1984. Techno-Bandits: How the Soviets are
Stealing America’s High-Tech Future. New York: Houghton Mifflin.
Mody, Cyrus. 2017. The Long Arm of Moore’s Law: Microelectronics and American Science.
Cambridge, MA: MIT Press.
Mollick, Ethan. 2006. “Establishing Moore’s Law,” IEEE Annals of the History of Computing 28,
3: 62–75.
Narkiavichus, V. K. and G. I. Rachukaitis. 1985. “Ispol’zovanie mikrokal’kuliatora ‘Elektronika
MK-46’ dlia avtomaticheskoi obrabotki eksperimental’nykh dannykh,” Pribory i tekhnika
eksperimenta, no. 2: 74–76.
Nauka i zhizn'. 1981. No. 9.
Nauka i zhizn', 1985. No. 6.
Nauka i zhizn'. 1989. No. 6.
Nauka i zhizn'. 1989. No. 10.
Nauka i zhizn'. 1989. No. 11.
Nauka i zhizn'. 1989. No. 12.
Paperno, Irina. 2009. Stories of the Soviet Experience: Memoirs, Diaries, Dreams. Ithaca, NY:
Cornell University Press.
Peters, Benjamin. 2016. How Not to Network a Nation: The Uneasy History of the Soviet Internet.
Cambridge, MA: MIT Press.
Pravda [Speech by M. Gorbachev], June 12, 1985.
Ramo, S. 1962. Proceedings of the IRE 50, no. 5: 1237–1241.
Ristanović, D. and J. Protić. 2012. “Once Upon a Pocket: Programmable Calculators from the Late
1970s and Early 1980s and the Social Networks Around Them.” 34(3): 55–66
Roberts, Graham. 2000. The Man With the Movie Camera: The Film Companion. London: I. B.
Tauris.
Rohozinski, Rafal. 1999. “Mapping Russian Cyberspace: Perspectives on Democracy and the Net,”
United Nations Research Institute for Social Development, Discussion Paper 115, November,
available at
D80256B67005B738A/$file/dp115.pdf , last accessed on March 30, 2018.
Romanovskii, T. B. 1987. Mikrokal'kuliatory v rasskazakh i igrakh. Riga: Izdatel'stvo “Universitetskoe.”
Schmid, Sonja. 2015. Producing Power: The Pre-Chernobyl History of the Soviet Nuclear Industry.
Cambridge, MA: MIT Press.
Siddiqi, Asif. 2010. The Red Rockets’ Glare: Spaceflight and the Soviet Imagination, 1857–1957.
New York: Cambridge University Press.
Sozonov, D. 1985. “Ispol’zovanie kal’kuliatora ‘Elektronika MK-46’ v izmeritel’nom komplekse,”
Pribory i tekhnika eksperimenta, no. 2: 73–74.
Silverstone, R. and E. Hirsch, eds. 1992. Consuming Technologies: Media and Information in
Domestic Spaces. London: Routledge.
Svoren', R. 1976. “Fantasticheskaia elektronika,” Nauka i zhizn', no. 10: 29–35.
Svoren', R. 1981. “Prishla pora ostavit’ schety,” Nauka i zhizn', no. 3: 46–55.
Tatarchenko, Ksenia. 2013. “A House with the Window to the West”: The Akademgorodok
Computer Center, 1958-1993. PhD diss., Princeton University.
Tatarchenko, Ksenia. 2017. “‘The Computer Does Not Believe in Tears:’ Programming,
Professionalization and Gendering of Authority,” Kritika 18, no. 4: 709–39.
Tatarchenko, Ksenia. “Right to Be Wrong”: Science-fiction, Gaming and Cybernetic Imaginary in
Kon-tiki: A Path to the Earth (1985-1986), unpublished manuscript.
Tinn, Honghong. 2011. “From DIY Computers to Illegal Copies: The Controversy over Tinkering
with Microcomputers in Taiwan, 1980–1984.” IEEE Annals of the History of Computing, vol.
33 no. 2: 75–88.
Trokhimenko, Ia. K., ed. 1990. Programmiruemye mikrokal’kuliatory: Ustroistvo i pol’zovanie.
Moskva: Radio i sviaz'.
Turner, Fred. 2006. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth
Network, and the Rise of Digital Utopianism. Chicago: Chicago University Press.
Usdin, Steven. 2005. Engineering Communism: How Two Americans Spied for Stalin and Founded
the Soviet Silicon Valley. New Haven: Yale University Press.
Walker, Barbara. “Aiming for Victory: Early Cold War Computer Competition between the US and
the Soviet Union.” Unpublished manuscript.
Zakharov, V. N., A. V. Oleinik, and L. M. Soldatenko. 1983. “Programmiruemyi mikrokal’kuliator
‘Elektronika MK-64’,” Elektronnaia promyshlennost’, no. 3: 17–19.

© Springer Nature Switzerland AG 2019 201
T. Haigh (ed.), Exploring the Early Digital, History of Computing,

Index

A
ACE report (Turing), 121, 147, 148, 150, 151
Aiken, H., 89, 91, 92, 96, 97, 137, 138
Analog, 4, 19, 54, 70, 102, 185
ARPANET, 161, 166
Atanasoff-Berry Computer (ABC), 20, 21, 42, 137
Atanasoff, J., 20, 21, 23, 69, 70

B
Babbage, C., 6, 8–10, 12, 50, 72, 79, 138, 139
Bachmann, C., 74
Bateson, G., 28, 29, 81
Bell Labs, 22–24, 26, 34, 87, 90–94, 121, 136, 138, 143–145, 162
Bell Labs machines, 145
Berkeley, E.C., 30, 96
Black boxes, 90, 93–97, 142, 187
Boolean algebra, 11, 88–98
Boys, S.F. (Frank), 128, 130
Brooker, T., 119, 123, 130, 132
Brown, G., 34, 80
Burroughs, 72, 161, 165
Bush, V., 20, 21, 25, 30, 34, 80, 117, 171

C
Calculator, electronic (or digital), 5, 12–15, 22–25, 31, 41, 45, 69, 70, 82, 84, 102–104, 136, 179, 181, 182, 185–189, 191–194, 196, 197
Caldwell, S., 21, 25, 89, 91, 97
Cambridge University, 7, 117–132
Cambridge University Computer Laboratory (Mathematical Laboratory), 117
Campbell, G.A., 89
Card, edge-notched (or punched), 3, 69, 70, 75–82, 84, 110, 120, 132, 136, 138, 139, 155, 160
Cathode ray, 153
Ceruzzi, P., 1, 4, 20, 22, 41, 42, 69, 70, 72, 73, 76, 77, 79, 105, 107, 136, 166, 173, 181, 182
Circuits, 4, 6, 11, 22, 34, 47, 87, 88, 107, 108, 125, 138, 141, 145, 151, 153, 182, 187
COBOL, 165, 166
Code words, 146
Coding, 3, 69, 78, 80–83, 95, 121, 122, 124, 148–156
Cold War, 19, 25, 31, 33, 160, 164, 166, 169, 172, 175, 182
Colossus, 10, 14, 42, 66, 137, 138
Computer, analog, 4, 6, 8, 22, 24, 25, 29, 31–35, 84, 166
Computer, digital, 1, 3–6, 8, 10, 13, 20, 23–25, 29–35, 41, 45, 69, 72, 82, 84, 94, 96, 98, 104, 107, 109, 111, 113, 114, 125–127, 136, 160–163, 172, 175
Cybernetics, 4, 5, 25, 28–30, 80, 81, 180

D
Delay lines, 10, 71, 119, 125, 136, 144–154, 156
Differential analyzers, 20–22, 24–26, 30, 32, 34, 103, 104, 109, 136
Digital, 2–16, 19–35, 41, 55, 62, 64, 69, 70, 72, 80, 82–84, 87, 88, 102, 117–132, 136, 160–175, 179–197
Digital revolutions, 10, 12, 13, 181

E
Eckert, J.P. Jr., 20, 24–26, 104, 108, 141, 144–146, 150
Electronic Delay Storage Automatic Calculator (EDSAC), 42, 66, 117–132, 152, 154
Electronic Discrete Variable Automatic Computer (EDVAC), 10, 13, 26, 42, 88, 95, 119, 136, 144–156
Electronic Numerical Integrator and Computer (ENIAC), 3, 6, 9, 10, 20–22, 24–28, 34, 88, 102, 104, 109, 128, 137, 141, 142, 144–146, 148, 160, 161, 166, 168

F
Fink, L.M., 191, 194–196
First Draft of a Report on the EDVAC, 5, 24, 135, 146
Fortran, 166
Fourier analysis, 89, 128

G
Garfinkel, H., 103, 107
General Electric, 161, 162, 164, 166, 170
GI Bill of Rights, 167
Gill, S., 119, 123–125, 130
Goldstine, A., 104, 105
Goldstine, H.H., 42, 88, 140, 142, 147, 155

H
Haigh, T. (Tom), 2–16, 20, 22, 24, 25, 27, 32, 102, 106, 111, 112, 135–156, 160, 165, 166, 168, 181
Hartree, D.R., 20, 21, 27, 88, 95–97, 117–119, 123, 124, 128, 142
Hashing (hash coding), 81–83
Hodgkin-Huxley equations, 131
Honeywell, 161, 162, 166

I
IAS machine, 153, 154
Iconoscopes, 146, 152, 153
International Business Machines (IBM), 22, 25, 42, 71, 73–75, 84, 90, 109, 121, 138, 155, 160–166, 170, 173, 175

J
Julius Odds Machine, 51
Julius Totalisator, 42, 46, 55, 66

K
Karnaugh, M., 91
Korean War, 31, 164, 165

L
Lennard-Jones, J., 117–119, 130
Licklider, J., 29
Lovelace, A., 6, 8, 9

M
Machine, analog, 4, 20, 22, 27, 33, 34. See also Computers, analog
Machine, digital, 4, 11, 20, 27, 34, 90, 102, 109, 132, 136. See also Computers, digital
Mahoney, M., 16, 95
Manchester University, 117, 118, 124, 154
Mark I (Harvard), 24, 91, 121, 124, 128, 138–141, 143–147, 149
Mark II (Harvard), 91, 143, 144
Massachusetts Institute of Technology (MIT), 21–23, 25, 29, 31, 34, 70, 71, 91, 117, 124, 144, 171, 174
Mathematical Laboratory (Cambridge University Computer Laboratory), 117, 124
Mauchly, J., 3, 20–26, 31, 70, 71, 104, 108, 141, 145, 146, 150
McLeod, J., 32
Mead, M., 28, 81
Memory, 6, 8, 10, 11, 65, 73, 81, 82, 90, 105, 111, 120, 123, 125, 135–139, 144–156, 160, 166, 182, 189
Miller, J.C.P. (Jeff), 128
Minicomputers, 2, 161
MONIAC, 6, 7
Mooers, C., 4, 69, 70, 72, 73, 76, 77, 79
Moore, E.F., 94, 97
Moore School, 21, 24, 26, 105, 119–121, 144, 145

N
National Aeronautics and Space Administration (NASA), 31, 172, 173
National Cash Register (NCR), 161–163, 165, 170
National Defense Education Act (NDEA), 172, 173, 175
National Defense Research Committee (NDRC), 20–24, 144, 171
National Security Agency (NSA), 73, 174
Nauka i zhizn' (Science and Life magazine), 181, 183, 187–190, 194
Naval Ordnance Laboratory (NOL), 69–71
Newell, A., 33
Nixon, R., 174
Nunberg, G., 11

O
Odds Machine, 46, 51–53, 55. See also Julius Odds Machine
Oller, J., 42, 43
Oppenheimer, J.R., 143, 171

P
Personal computers, 2, 14, 42, 75, 112, 180, 181, 183, 191, 192, 197
Philips, W., 6
Pilot ACE, 42, 151, 152
Pocket computer, 2
Post, E., 137
Praxeology, 102
Project Whirlwind, 42
Prosopography, 162, 163

R
Radio Corporation of America (RCA), 22, 24, 71, 72, 152, 153, 161, 164
Reed, H., 112
Remington Rand, 25, 71, 73, 75, 162, 165
Ridenour, L., 22, 34
Roosevelt, F.D., 161, 165, 169–171

S
SAGE, 42, 65, 166
Science and Life magazine (Nauka i zhizn'), 180
Scientific American magazine, 33, 34
Selectrons, 153
Shannon, C.E., 11, 71, 80, 87, 88, 196
Silicon Valley, 13, 160, 180, 182, 183
Soviet Union, 33, 92, 172, 179, 180, 183, 191, 196
Sperry Corporation, 20
Sperry Rand, 161, 162, 165
Spotlight Golf Machine, 42, 56–59, 62, 64–66
Sputnik, 161, 172–174
Stibitz, G., 20, 22, 23, 26, 30, 92, 136, 141, 143
Stored program, 24–26, 66, 71, 84, 119, 121, 132, 135, 136, 155, 166
Subroutines, 121–126, 130, 190

T
Texas Instruments, 183
Ticket issuing machines (TIMs), 46–48, 50, 51, 55
Totalisator, 42–46, 48, 52, 55, 56, 65, 66. See also Julius Totalisator
Truman, H., 112, 165, 169, 174
Turing, A., 87, 95, 104, 121, 136, 137, 147, 148, 150, 151, 155

U
UNIVAC, 25, 71, 84, 152, 154

V
von Neumann, J., 5, 8, 10, 24, 25, 27, 28, 30, 70, 71, 82, 87, 88, 95, 96, 98, 119, 121, 135, 140, 143, 144, 146, 147, 149–155, 182

W
Wheeler, D.J., 119–122, 124, 125, 127, 128, 130
Wiener, N., 25, 27, 28, 30, 80, 81, 196
Wilkes, M.V., 26, 118–121, 123–125, 127, 128, 130, 154
Williams tubes, 71, 136, 153
Wired (magazine), 9, 33, 34, 75, 143, 180, 187, 189, 190, 194
Women workers (employment), 167
World War II (Second World War), 15, 19, 20, 30, 33, 69, 88, 92, 103, 161, 165, 166, 170, 171, 175
World Wide Web, 80

Y
Yahoo, 83, 84

Z
Zatocoding, 3, 69, 70, 72, 73, 76, 77, 79
Zuse, K., 10, 42, 89, 137, 138