
The Fourth Civilization--

Technology, Society and Ethics


Fourth (2002-2003) Edition

by
Richard J. Sutcliffe

Table of Contents
Shareware Information
Introduction
Index

Part I--Laying the Groundwork

Chapter 1 History and Technology
Chapter 2 The Foundations of Science and Technology
Chapter 3 Basic Concepts in the Theory of Ethics

Part II--Four Wavefronts on a Sea of Change

Chapter 4 The Information Revolution
Chapter 5 Robotics and the Second Industrial Revolution
Chapter 6 The Intelligence Revolution
Chapter 7 The Biospace Revolution

Part III--Ethics and the Institutions of the Fourth Civilization

Chapter 8 Technology and Economic Institutions
Chapter 9 Technology, the State, and the Law
Chapter 10 A New Education for a New Civilization?
Chapter 11 Religion and the Transcendental in the Fourth Civilization

Part IV--Quo Vadis: Directions for People, Society and Technology

Chapter 12 Integration and the Fourth Civilization

Chapter 1
History and Technology
Seminar - "Who cares about history?"
1.1 The Art and Science of History
1.2 The First Four Phases of Civilization
1.3 History and Technology
1.4 A Brief History of Computing
1.5 Forecasting the Future
1.6 Summary and Further Discussion
Copyright © 1988-2002 by Rick Sutcliffe
Published by Arjay Books division of Arjay Enterprises

1.1 The Art and Science of History


Students often think of history as a simple listing of events, names and dates.
However, an understanding of such events also requires knowing something of both
the motivations influencing the people who made those events happen, and of
methods by which the end results were achieved. In particular, the history of every
society is intertwined on the one hand with its technological development, and on
the other with the moral and ethical principles upon which the society is built. This
book is concerned with all three concepts (history, technology, and ethics) and the
relationships among them.
One goal of the study of history is to look ahead as well as back,
for by understanding the past and present one gains keys to the future. For
instance, even though the technology that will influence the society of the future is
very different from that which shaped historical events, there is still much to be
learned by examining the past. It is possible to see how societies have already
responded to (or developed from) radical technological changes, and thus to
suggest how current trends might shape the future. To assist in this, a brief
examination of the nature of historical studies is in order.

There is a flow to history and culture. This flow is rooted in and has its wellspring in the
thoughts of people. People are unique in the inner life of the mind--what they are in their
thought world determines how they act. This is true of their value systems and it is true of
their creativity, true of their corporate actions, such as political decisions, and it is true of
their personal lives. The results of their thought flow through their fingers or from their
tongues into the external world. This is true of Michelangelo's chisel, and it is true of the
dictator's sword.

--the late Francis Schaeffer in How Should We Then Live?

What Does a Historian Do?

A historian is more than simply a collector of facts about the past or present.
In some ways, the "doing of history" is not unlike that of science, for in both
disciplines it is well understood that a collection of data, however vast, does not
become useful information until it is organized and interpreted. Like a courtroom
judge who must sift through often conflicting eyewitness reports to discover the
truth of events, the historian must reconcile accounts of the events under study that
are often in sharp disagreement.
There are various reasons for the contradictions that arise even between
eyewitness accounts of the same event. For instance, suppose two people standing
at the roadside witness a traffic accident from different angles, each noticing
aspects that the other does not. The first witness observes a car stop at an
intersection and another car attempt to pass on the left, whereupon the stopped car
suddenly makes a left turn and is struck broadside by the passing car. Everyone in
both vehicles is killed and little is evident about the cause of the accident from the
tangled wreckage. This witness is convinced that the driver of the passing car is at
fault for attempting to pass when it was not safe to do so. The second witness,
however, sees the accident from the front and to the left of the flow of traffic
instead of from the right side and behind as does the first. She observes that the
stopped car had a right-turn signal on when the moving car attempted to pass on
the left. To her, the accident is clearly the fault of the driver who signaled to turn
one way and actually did the opposite.
Yet, despite knowledge of the misleading turn signal, the coroner who
examines the bodies comes to agree with the first witness. She once barely avoided
a similar accident by braking quickly in response to a slight movement of
the leading driver's arm. Wondering why the following driver did not pick up such a
clue despite the false signal, she tests the body of the passing car's driver and finds
the blood alcohol content to be four times the legal limit. She has little doubt about
where most of the blame lies.
Previous experiences, the time of day, road conditions, lighting, the amount of
time spent watching, racial and sexual prejudice, and what a witness expected to
see all might also colour the reports that the court hears. Each person takes the
stand sworn to tell the whole truth, but even if all do exactly that to the best of their
abilities, there will still be disagreements and contradictions.
Likewise, when considering historical events, it is necessary to take into
account such things as nationalism, the pride of winners, the shame of losers, and
the tendency of historians to support a particular theory or historical figure. As a
result of such biases, the accounts of world events reaching a later historian will
diverge even more than do those of the traffic accident in the example above. Add
in the passage of hundreds or thousands of years and the perils of going through
third- or fourth-hand copies of originals--each perhaps embellished with the copyist's
ideas--and it may become difficult to sift contemporary fact from later myth.
What is more, historians have in the past usually concentrated on the few
outstanding figures who were at the centre of events--the kings, queens, generals,
politicians and other acknowledged movers and shakers. Where there were sources
available from common citizens, these were too fragmentary (and often too
voluminous) to shed much light on the larger events that shaped the time under
study. This is changing as computers allow such material to be assembled and
sifted to get a more everyday perspective on events.
Establishing the facts in a careful and scientific fashion using the same kind of
evidence weighing employed in a courtroom is the first task of a historian. The
accuracy of available accounts must be assessed by checking them against other
documents (perhaps describing related events) whose reliability is better accepted.
The personality, motives, and level of education and knowledge of the author of the
account must also be taken into consideration. For instance, in a society that
attaches great importance to the mythology of a variety of gods and goddesses, the
appearance of a comet or the conjunction of two planets might be viewed as a clash
between the gods, with a simultaneous war here on Earth regarded as incidental.
Although later historians would have the opposite view of which is the more
important, they may be left with very little useful material with which to work.
However, the practice of the discipline of history is more than a mechanical
sifting of facts and weighing of evidence, and the results must be more than a mere
narrative of human actions. A history must also serve as an explanation of actions
and events in their cultural and technological context, and it must at least attempt
to explain the motivations of the people involved. The facts alone (who, what and
when), however carefully verified, only slightly engage the mind in the study of
history. The how (including the technology) and why (ethical and other motivations)
capture one's interest at a much deeper level, for it is in the explaining of these two
that one gains an understanding of events and the ability to apply what has been
learned to new situations.
This is the art of history--to place events within a context that tells something
about the people and ideas that moved the events in the first place, and that were
in turn changed by the events after they took place. Events involve real flesh and
blood human beings, and if one is to understand the forces that move societies, one
must understand the people who shape those forces and are shaped by them.
History seeks to explain societal and individual experiences and to integrate
opinions, motivations, causes, actions, reactions and effects into a comprehensive
view of people in the context of their whole society. On the one hand, history must
take into account the technologies that may have caused events to take place or
that developed as a result of events. On the other hand, it must also take into
consideration the moral/ethical (and other) human motivations for action.

Interpreting History

The historian must also produce a narrative that is able to convey to others
the comprehensive picture created by sifting the collected materials. Thus the
scholar must be able to write clearly and effectively. It is important to realize,
however, that the result does not gain some canonical status ("truth") merely
because it has been published--every book on history is filtered through its author's
views about the world, people, their motivations, and the meaning of historical
events.
At the most radical extreme are those who ignore the evidence of history and
write their own. For instance, some deny that the holocaust of millions of Jews in the
Second World War ever took place. Evidence is not relevant to people with a
sufficiently strong view of a matter. The implied racism of these pseudo-histories
has such incendiary potential that it strains the ability of society to guarantee
universal free speech, for it raises the spectre of the actual history being repeated.
In like manner, the old-fashioned Marxist views history as the unfolding of a
class struggle between the poor masses and the wealthy (capitalist) elite: history
moves toward an inevitable climax wherein the mass of workers will control all
wealth and its means of production. Historical writing done within the framework of
this world view interprets the period under study in terms of such clashes because
that world view requires class struggle to be present. Such an account may
reinterpret what all other historians have said about events to the point where it
becomes almost unrecognizable. This reinterpretation is both acceptable and
morally right to the Marxist historian. In such ideologies the truth of past events is
variable and must serve doctrine, for it alone is fixed. The Marxist believes that
world view creates history, and the account generated by such a historian
necessarily conforms to that world view.
The main character in George Orwell's anti-totalitarian novel 1984 is
employed by the government to change old newspaper reports of party officials'
speeches so that they will conform to current party policy. His "Ministry of
Truth" is engaged not in securing and publishing factual material, but in
ensuring that the record of history is altered to fit the current policies of the party.
Truth attaches to party doctrine, not to mere facts.
Such views are prevalent in the study of all literature, not just the historical.
The deconstructionists hold that no body of writing has any inherent meaning, even
if one was intended by the author. Meaning is created and attached by the reader,
and such activity is unique and relative to each individual, not absolute. It is also
rather common to read present values into works of the past, or to criticize (or even
ban) them for not having certain modern political and social views. Thus the works
and thinking of past writers are often dismissed as irrelevant rather than studied for
understanding.
While these are extreme examples, they force us to recognize that accounts
of the past are always filtered through a world view that includes some theory of
what history is, or ought to be. Even when applied to the same body of data, this
filtering process may produce very different results, depending on who is doing the
filtering.
The view of some Greek philosophers--one that has had periods of popularity
ever since--was quite different. In this view, history was not an expression of
political dogma, but an eternal repetition in cycles of the same kinds of events.
Perhaps something could be learned from events, and perhaps not. What was
certain was that if Rome burned Carthage, enslaved its peoples, and poured salt on
its arable land, the same thing would one day be done to Rome by another people,
and that those conquerors too would ultimately meet destruction.
There would be another Plato to deliver the messages of another Socrates;
kingdoms would become democracies, which, when fully corrupt, would lapse into
dictatorship; and their dictators would in turn proclaim themselves kings and begin
the cycle of government anew. Time and history had no beginning, no purpose, and
no end--it just was. One could not rely on the gods to escape the cycle; one could
but be subject to the fates. In this view, no real explanation for history is possible,
for in the long run, inexplicable forces shape events--forces that are beyond the
scope and knowledge of mere human beings. Taking this idea to its extreme, one
could well conclude that there is little humankind can do in the face of events but
continue a fateful existence as a bit player on an unknown stage before an
unknowable audience.
The observation about the repetition of government types cannot be denied
entirely, for such cycles may be seen to some extent in modern times and societies
as well. For instance, in the first half of the twentieth century, both Russia and Germany went
from imperial monarchy to democracy, into dictatorship, and back out again.
Moreover, it is legitimate to ask whether democracy contains within itself the seeds
of its own destruction. A democracy assumes that people will act in the common
good, but its laws must reflect that they often act selfishly. As bureaucracy and
regulations grow, and selfish demands increase, some may come to believe that a
period of dictatorship or monarchy is necessary to salvage order out of what they
see as growing chaos.
A few thinkers have taken the idea of the predictability of history further,
wondering if it may become possible to develop systematic descriptions of trends in
history and society so that events can not only be forecast, but also be managed by
taking "corrective" measures. Perhaps the most popular of all science fiction works
was Asimov's Foundation series, whose premise was that just such a detailed analysis
and prediction was possible, even over the span of millennia.
However, while some insist that complete scientific descriptions of history are
both possible and necessary, others claim that no definitive explanation for history
is possible or needed. In the last hundred years or so, "scientific" views of history
have become increasingly popular, for humanity as a statistical whole is thought of
as being subject to analysis and prediction. In this thinking, once the motivations of
the masses could be measured and tabulated, their response to economic or
technological stimuli could be accurately predicted. Appropriate technology and
education could then be adapted to engineer and control the desired society. Such
theories are popular among both political rightists and leftists, neither of whom
realize that they are advocating the same kind of society--a sort of "scientific
totalitarianism" or "technocratic dictatorship."
Finally, it is worthwhile to consider a Judeo-Christian perspective on the
subject. The possibility of some thematic repetition as history progresses is not
completely ruled out by these historians, though they regard history as much more
than a record of purposeless recycling of events with no beginning and no end. Both
Jews and Christians hold that the world and its peoples had a definite beginning and
a purpose (to serve the Creator). The Bible chronicles events after creation: the
rebellion and falling away from God by the first human beings, a new start after the
worldwide catastrophe of the flood, the promise of a Messiah given to the nation of
Israel, the provision of the law to set that nation apart as an inheritance for God.
The Christian scriptures add that the law intentionally demonstrated the
impossibility of pleasing God through an imposed morality. They detail the coming
of Jesus Christ as the promised Messiah to usher in a new covenant with God based
on his grace alone. Christians look forward to a return of Christ for judgment and
reward, and to a final culmination of the Earth, its peoples, and their histories.
Thus, both Judaism and Christianity claim a comprehensive view of history as
a definite progression. In the latter, the sequence of history is centred on the cross,
but both root all their claims in a series of historical events. These events are
potentially verifiable by the same means applied to other occurrences. Although this
text will not present a detailed history of Western religions or discuss in other than
general terms how their institutions have directly affected the events of Western
nations, it is important to make two points:
First, whatever one thinks of the Judeo-Christian religions, one should not
underestimate the effect that they have had on Western culture and on ethics, the
law, and government in particular. Ideas derived from the Bible can be found at the
heart of much that is held dear in modern Western law, particularly in the important
area of human rights. The influence upon the U.S. Constitution is particularly potent.
Second, religious influences may be in the ascendant in Western culture
after at least a century of decline. In recent centuries, secular statism and agnostic
humanism had largely replaced Judeo-Christian thinking as "religious" forces. Yet in
the last decade or so, such phrases as "born again" once more appear in the
headlines of newspapers and magazines; prominent religious leaders have sought
the U. S. presidency; and the morality of politicians is once again scrutinized
publicly by a press newly sensitized to the deep interest of the public in such
matters. Membership in theologically conservative organizations that are both
socially and politically active continues to grow rapidly. Even though these members
are numerically offset to a degree by the continuing exodus from the more liberal
mainline groups, the net result is a higher visibility for religion in North America. In
this context, it is not surprising that religious views of history, technology, and
ethical issues should once again be regarded as legitimate topics for scholarly
study, and debates once thought conceded are now being rejoined. All this activity
has interesting side effects, for as Naisbitt (Megatrends) points out "evangelical
publishers now account for a third of domestic book sales." The work of religious
artists has also become an important factor in the music market.
Meanwhile, there is a broad resurgence of interest in other forms of
spirituality, including the various reinterpretations that the so-called "New Age"
movement has placed on some traditional Eastern religions--reinterpretations that
have made such ideas marketable in the West.
This increase of interest in religion following the end of the industrial age
might be seen in a broader context as part of a reaction against the perceived
"hardness" of the science and technology that have dominated the recent past. A
new view is often heard--that one can indeed know things to be true in ways other
than exclusively through the scientific method.
It is worth observing that these trends have not yet been (and may never be)
reflected in the legislative and judicial arenas, and that they are more pronounced
in the United States than in Canada, where the controlling paradigms are decidedly
anti-religious. A Canadian political candidate who made an issue of her religion
would be hurt by doing so, and such religious lobby groups as do exist have minimal
influence on politicians or the courts in that country.
The desire to have structure during a time of change may also be a factor,
especially in formerly Communist countries where people seeking stability often find
religious leaders to be more credible than political ones. There is also a new desire
to assert the importance of people over things, or at least to promote a "high-touch"
aspect of society to balance the "high-tech." Thus, not only do older religions like
Christianity appear to be enjoying at least nominal revival, but so are many other
forms of religion, philosophy, and mysticism. All these factors may well have a
strong influence on the peoples of the coming age.

Conclusion

It should distress no one that there are conflicting views about history, its
interpretation, or even the facts themselves. The important lesson is that each
person is part of a culture and has a view of the world through which all knowledge
(including that of history) is filtered. One who appreciates this lesson and has a
clear perception of both personal world view and cultural surroundings gains an
understanding of both the events of history and of one's place in them, as well as the
ability to engage in debate about their meaning in an informed way. Here again is
Francis Schaeffer on the subject:

People have presuppositions, and they will live more consistently on the basis
of these presuppositions than even they themselves realize. By presuppositions we
mean the basic way an individual looks at life, his basic world view, the grid through
which he sees the world. Presuppositions rest upon that which a person considers to
be the truth of what exists. People's presuppositions lay a grid for all they bring
forth into the external world. Their presuppositions also provide the basis for their
values and therefore the basis for their decisions.
"As a man thinketh, so is he," is really most profound. An individual is not just
the product of the forces around him. He has a mind, an inner world. Then, having
thought, a person can bring forth actions into the external world and thus influence
it.
--How Should We Then Live?

Placing history in a larger context is very much a theme of this book, for
throughout the text, events, technology, and ethical issues are discussed in relation
to one another. An important thesis here is that not only are ideas, actions, and
people inseparable, but also that many disciplines of human thought regarded as
distinct are really part of a whole. Because of this larger context, the title of this
chapter could have been expressed in terms of sociology rather than of history. Use
of the latter word reflects a need to place the discussions in a continuum of time, for
the major concern in this book will be that of the mutual influence of ethical and
social issues, of technology, and of events over time. This continuum will also be in
evidence in the attempts made in several chapters to peer into the future.

1.2 The First Four Phases of Civilization


The purpose of this section is to review briefly the broad progress of
civilization as it is driven by technology. Specific examples, however revolutionary,
are left for later. This review will divide the complex development of history into
more easily understood periods or phases, here called "civilizations". There is an
element of fiction to this, as a boundary a given historian might propose to demarcate
two phases of civilization is artificial, and could be disputed by others. After all, one
type of society actually flows smoothly into another. New information and new
technologies take time to be absorbed, to have applications developed, and to
become influential. Even when technologies are put to use decisively to prosecute a
war, their effects on the general population may not otherwise be seen until long
after the victory or defeat of one nation. It is sometimes only with the passage of
centuries that the vision of hindsight perceives a transitional period as a sharply defined
change.
A more detailed history of the world's peoples would surely refine the four
broad divisions given here into many more, with excellent arguments about why
additional divisions are necessary and important. However, more than one book
would be needed to complete that task. In addition, a goal of this book is to take the
moral, social and technological pulse of the fourth civilization, which has just begun.
Any speculations on a possible fifth will be left for other works.

The First Civilization--The Hunter-Gatherers

The most primitive type of society has little need for organization. People who
live by hunting animals and gathering edible plants wherever they find them need a
large expanse of territory just to keep each individual alive. Consequently, such
hunter-gatherers move about, following the weather and the food. Family groupings
are small; life is short and brutal; and sophisticated medical care does not exist.
There is no store of knowledge apart from what parents pass to their children, and
anyone who does have the time to invent the wheel has little chance to tell anyone
else, let alone to market it. Ethics and the practice of religion are personal and
cultural matters, and are highly localized.
The highest technologies in a hunter-gatherer society consist of fire for
cooking and protection, simple throwing or clubbing instruments for hunting or
defense, and clothing manufacture from animal products. There might be some
metal working, animal domestication, and possibly the use of wheels. People may
band together in extended families (tribes) of up to a few hundred with a common
language, limited trade, and some broad knowledge of traditional history and
geography. Such tribes may eventually establish fairly complex social structures.
The most highly developed society of this type was probably that of the North
American natives before the arrival of the Europeans. Theirs was a society with low
population density and primitive technology but with well developed social and
economic structures, including continent-wide trading routes. Yet, they found
themselves unable to meet the challenge presented by the arrival of the expansion-
minded Europeans. They were unable to compete effectively with the European
technology (wagons, guns, and iron tools), and fell before the better organized and
equipped invaders. They also suffered from a lack of immunity to diseases like
smallpox, and from the demise of the buffalo herds.
Hunter-gatherers must expend most of their total working energy on feeding
and defending themselves (though they may have some leisure time). They lack the
resources to support many non-food-producing members such as teachers, lawyers,
scholars, and other city dwellers. As a result, such societies gain technology only
very slowly, and may remain nearly unchanged over many centuries.

The Second Civilization--The Agriculturalists

As an intermediate stage before the second level of civilization begins, some
peoples domesticate animals and become nomads, with a wider geographical range
and a somewhat expanded social complexity. Whether this step takes place before
or after the domestication of plants, it transforms hunting from a solitary, weary
pilgrimage to a community experience. The key discovery needed to make the
actual transition to an agricultural civilization is that it is more economical to save
seed from desirable plants and grow them systematically in one location than to go
out and find them wherever they happen to have sprouted on their own. If a single
technology can be pointed to as the most important factor in developing an agrarian
economy, it is the use of the plough, though arguments could also be advanced for
earlier innovations such as the sickle or the flail.
Farming provides a powerful motivation for further invention, because the
more one's arm can be augmented by tools, the more land that can be put under
cultivation, and the more wealth that can be generated by one person. Wheeled
carts, animal-drawn ploughs, and increasingly complex planting and harvesting
machinery become highly prized as do the skills of the artisans who make these
tools. Once a family can produce more than it can eat, trade expands and service
settlements become towns, then cities. Metal tools replace stone implements, and
the economic advantages cause the change from copper and bronze to iron and
eventually to steel in the search for stronger ploughs and other tools.
Agriculture provides a basis for supporting large numbers of non-food-
producing people. Some manage to acquire wealth by using specialized knowledge
rather than by producing food, and a class structure grows. Scholars and students
can be supported, as can artists, musicians and theatrical players, for the new
upper classes must have something on which to spend their leisure. Once information can
be communicated to others in written form, it can also be transmitted to the next
generation. The amount of knowledge then increases greatly, and philosophers,
mathematicians and other scholars find a place in society.
At the same time, armies can be outfitted for adventures in other lands, and
societies grow far beyond the bounds of single families, cities or even districts.
Technological know-how is also turned to the production of decorative jewellery,
and gold, silver and gemstones may become important. The needs of cities also
drive invention, both in architecture and in the design of water and transportation
systems. The really ambitious empire builders must both construct and maintain
roads and also find ways of ruling the oceans. They must also codify systems of law
and apply them uniformly. Religious practice may also become organized and
standardized and its institutions grow in power and size along with the society,
perhaps forming an alliance with the state to maintain stability.
None of this progress is without cost to the ordinary person, for daily life in an
agrarian society demands steady, heavy labour and is more complex than in a
hunter-gatherer culture, though the food supply is more likely to be consistent.
There are more and broader civil obligations to meet, including taxes, conscription,
and dealing with government bureaucracy. Farmers have their lives ruled entirely
by the land, the weather, and the state; they are not free to take a day or two off
after a successful hunt. That is, the gains in security achieved through larger
networks of mutual obligation are partially offset by a loss of individual freedom.
This is particularly true if farmers are members of an empire-building society, since
the wealth generated by their work must support large armies. In such a case,
farmers may be virtual slaves to land they usually do not own.
The largest and most successful such society in the ancient world was the
Roman empire. Centred on the Mediterranean Sea, it dominated Europe, Asia Minor,
the Middle East, and Northern Africa for centuries. Its capital city, Rome, grew to a
population of over a million people, a level not reached again after its decline until
the 1930s. When the empire did fall, the Roman system of roads deteriorated and
communications and transportation suffered to the point that no other power could
grow to dominate in quite the same way. Because many such links were lost, a
great deal of knowledge was not transmitted to succeeding generations, and
progress toward the next stage of civilization was stalled for centuries. Instead,
nations rose and developed localized languages, customs, and technology.
During this time, one group that successfully straddled the first two
civilizations was the Mongols, whose base was initially nomadic, but who conquered
and ruled a settled empire stretching across much of Asia and part of Europe.
European empires were based on oceangoing communication, and these were
centred on such trade routes as proved strategic for their time. Venice and Genoa
came to dominate the Mediterranean, and afterward Portugal and Spain ruled the
South Atlantic. Later still, the British, French, and Dutch joined Portugal and Spain in
roaming the world's oceans in search of food, preservatives, subject peoples and
trade goods. As a result, advances in technology centred around ship building and
military applications. The widespread use of gunpowder, particularly for ship-
mounted guns, increased the ability to kill large numbers of people in a short period
of time. This changed the nature of warfare dramatically, but had little direct effect
on society itself. In such encounters, the nations that prevailed were those that took
the trouble to train their midshipmen in trigonometry and their munitions suppliers to
mix gunpowder uniformly. Countries with inferior education and technology gradually
lost both territory and influence.
The gradual increase in the number of foundries, the continuous search for
better metals, and the increasing use of small machines set the stage for the advent
of the next society. It was Britain that had carried the trade-centred empire to
global proportions more successfully than any other nation, and it was there in the
eighteenth century that the critical mass of technology first became great enough
to take the next step.

The Third Civilization--The Industrialists

The harnessing of the steam engine to factory-based machines for the
production of textiles started the next major series of changes. Known as the
Industrial Revolution because of the rapid conversion to the new technologies, this
period was characterized by a large-scale transfer of people from a rural to an urban
setting, as they gave up farm labouring and cottage industries for work in the new
factories. This urbanization became even more dramatic as time went on. It spread,
first to Europe, then to North America, and subsequently to other parts of the world,
even though in some places it was not accompanied by the necessary increase in
jobs to keep the new city population at work.

Profile On ... A Changing Society


The latter part of the industrial age saw a dramatic decline in the proportion of
the population involved in farming--the principal occupation of the agricultural
society. As time passes, fewer people grow food for an ever-expanding total
population. As the chart below shows, farm families had become an all-but-invisible
2% of the U.S. population by 1987, a share that was even then still dropping by 0.1%
per year.

Workers, at first little better off in urban slums than in rural poverty, gradually
became consumers of the goods they produced and their standard of living began a
rapid and almost unbroken rise that has continued to this day. As the list of factory
produced goods lengthened, machines were also revolutionizing mining and later
farming. For the first time, a society became possible in which the majority of
people did not have to live at subsistence level, expending nearly all their energy
just to stay alive.
Technological breakthroughs continued at a rapid pace through this time, and
these have left lasting marks upon the nations that made them. In the latter part of
the twentieth century (as in Roman times, but on a larger scale) national borders
became less important in the face of transportation and communications
technologies that were capable of bypassing such barriers. No one people can for
long keep secret or monopolize any given technology, and there is an increasing
sense that the world is one place.
The typical person in an industrial society is better off materially than at any
time in history, having more education, longer life, better medicine, faster personal
transportation, more consumer goods and better communications facilities than
ever before. To be sure, the new technologies have, as usual, been used to deal out
death and destruction on a wider scale in the past century than in any previous one,
but despite this, there has been a continuous increase in the ability to produce
goods. This increased production has been accompanied by an ongoing urbanization
and consumerism, and also by a greatly increased food supply.
Moreover, leisure time is available to the workers for the first time in history.
Whole new industries have been spawned by this development, and tourism is not
only big business, but in many places it is the biggest business of all. Indeed, as the
wealth one person can produce has increased, the percentage of people working in
non-goods-producing service industries has risen dramatically. This trend, too,
would no doubt continue, even in the absence of new dramatic changes in
production technologies.
The initial upheavals of the Industrial Revolution saw vast numbers of people
attempting to improve their economic conditions by leaving the land and moving to
the city. Many of them simultaneously severed their connections with organized
religion. The institutionalized church had begun to lose its authority in any case, for
it insisted upon the teachings of traditional authorities to explain the physical world
and its workings long after these had been undermined by the influence of the
philosophies associated with modern science. Some religious leaders came to
believe that the idea of an infinite, unchanging God with absolute moral standards
could be extrapolated to lend a similar absolutism concerning the physical world.
Religious traditions, whether liturgical, governmental, or scientific, were invested
with the weight of divine authority, becoming as unchangeable as God. Knowledge
was not an incomplete and inexhaustible aspect of an infinite God. Instead, it was
finite and complete. The Bible (and by association God) came to be seen as a
limited creation of the institutional church. What was created by humans could
eventually be seen as flawed, and then discarded. At the same time, the increasing
availability of consumer goods helped to promote a materialism that separated
people from the spiritual roots of traditional morality.
Meanwhile, the rising intellectual class was quick to seek new interpretations
and draw different conclusions in ethical and moral matters that religion had once
claimed for its own. Thus, the success of the Industrial Revolution also spawned new
ideologies to compete with Judeo-Christian teachings for the hearts and minds of
the modern Western peoples. People came to place their religious-like faith in the
philosophies of science (scientism), reason (rationalism), progress (progressivism),
the state (statism), or humankind (humanism) as the measure and end of all things.
Many discarded ideas like the worship of God as creator and sustainer of the
universe and dispensed also with the social aspects of religion. Over time, religion
ceased to be part of the glue that held society together. Simultaneously, the
abundance of wealth and the newfound ability to indulge in consumption tended to
replace the concepts of duty and interdependence with the notions of self-
actualization and autonomy epitomized by the "yuppie" phenomenon of the 1980s.
This is not to say that traditional religion (or social values) has altogether
vanished from everyday life; indeed, both may have made somewhat of a comeback in
recent years. Yet, religion seems to have had little influence upon intellectuals or
upon the leading institutions in Western society in the Industrial age, and this fact
alone would set off the last century-and-a-half as unique among all periods of
history.
Some hail the decline of traditional religious influence and the rise of
individualism, citing a beneficial increase in freedom for the human mind from such
changes. Others note that the simultaneous fragmentation of the culture tends to
make society more difficult to maintain as a working entity. Still others worry about
the track record modern humans have in using technology in the absence of religious
influence. Life and the Earth itself are at risk from nuclear weapons on the one
hand, and from widespread pollution on the other. Moreover, the past century has
seen the most devastating wars in all human history, as well as political and
economic exploitation on at least as large a scale as ever done before. It has also
seen deliberate mass killings for racial, religious, and political reasons that dwarf
the most ambitious pogroms of earlier centuries.
The machine age has brought unparalleled prosperity to those who own or
serve the machines and can buy the goods they make, but it has also brought the
world to the brink of destruction by the same technology. Thus, while one could
judge from material evidence that the human race is better off, such a judgement
cannot be unqualified. Material goods have not in the past been regarded as the
chief measure of the value of the human spirit, and it seems unlikely the machine
age will be looked back upon as an idyllic or utopian time.
Meanwhile, there have been striking new developments that promise to bring
even more radical changes. It is a commonplace observation by now that critical
mass in certain technologies has been reached, and the transition to a whole new
kind of civilization is well underway.

The Fourth Civilization--The Information Brokers

This transition is also characterized by many social changes, some
representing continuations of long-established trends. Others may be due to
reaction against what some regard as the excesses of the industrial age. For
example, society has embraced a set of changing attitudes and new technologies
that are concerned with the environment in which people live and the quality of life.
"High touch" is balancing "high tech." There has been an increased interest in
ethics, morality, religion, and the disciplines of thought and study that relate more
to people than to things, and consequences of this will be considered later in the
text. On the other hand, certain new technologies can be cited as formative for the
next mature phase of civilization. For convenience, they are here grouped into four
major categories, and these provide the chapter divisions for the next section of the
text.
The first, and most characteristic, is the rapid development of computer-based
data systems toward the goal of universal information availability. Anyone who
wants to learn the facts of a subject can find the desired material, and do so without
leaving home. This will have a profound impact upon political systems, education,
most institutions, and the use of various media. Along with this can be cited
improvements in communications and transportation. Not only will people be able to
travel farther and faster than ever, but they will be able to exchange information
with any point on earth easily and inexpensively. New communication methods are
also causing dramatic changes in the conduct of business. In all, the consequences
of freer information flow may well be greater and farther-reaching than those
caused by Gutenberg's invention of the printing press.
It is also worthwhile to note that such universal availability of information
means that little or none of it is likely to be lost, however difficult the social aspects
of the transition to the next civilization may prove to be. The redundancy of
information makes it easier to ensure that what is available to one generation will
be for the next. Of course, the next generation may not interpret or use a given
piece of information in the same way, because values (including spiritual ones) are
much harder to transmit than facts.
The second aspect of technological change characterizing the fourth
civilization is the culmination of the second industrial revolution. Jobs continue the
recent rapid shift away from the smokestack industries and into the service
industries, and in the light of other trends, this shift will likely accelerate. The most
revolutionary aspect of this is the introduction of robots to replace people on
assembly lines. The end result could be the reversal of many aspects of the first
industrial revolution--from the workers' point of view, the most radical change of all.
A third formative technology often cited is the further development of computer-based
or artificial intelligence (AI). Combined with developments in robotics, this could
further mechanize certain aspects of decision-making and managerial-level tasks.
This is the third, or intellectual, phase of the transfer of human tasks to machines
(first came manual labour, then skilled craftsmanship and repetitious jobs, and
finally some brainwork, too). Whether a machine employed in such tasks will ever
be said to understand either the issues or the decision is another matter. The chief
consequence of such a move could be yet another dramatic change in the way
many individuals make their living.
The fourth group of formative technologies has to do with life itself--the most
fundamental of all issues, and the one to which systematic study and some
understanding have come latest in human history. The developing understanding of the genetic code
implies the ability to manipulate life forms, engineer them for specific uses, prolong
human life far beyond the present limits, and solve medical problems that have
resisted all previous efforts.
In addition, the way in which people live is being given increasing attention. In
the fourth civilization, such things as air, soil and water may all be engineered for
human benefit, rather than treated simply as expendable raw material for factories.
There could also be an increasing focus on technologies for food production, on
developing new habitats in places people have not lived before, and on enhancing
the quality of life in other than simply material ways.
Since much of the dramatic change in all four of these characteristic groups is
due to the development of high-technology devices, many based on microprocessor
equipment, the invention of the computer in the late 1940s looms as the most
significant single technological advance responsible for entry into the new
civilization.
Details on the effects of these formative technologies will be left to the
appropriate specific sections. Perhaps this brief summary will assist the reader to
begin considering what kinds of societal change these new trends in technology
may cause, even before reading a more detailed analysis (and speculation) later in
the book.
The chart below gives a simple summary of the four civilizations insofar as
employment is concerned. The true story is more complicated, and it should not be
thought that some "inevitable advance of the cycles" is being presented. The latter
two transitions needed to take place only once among human societies; sufficient
communication had by then been established to ensure that the effects would
spread throughout the world. The availability of instant information immensely
complicates any such analysis as has been attempted here. Many more people have
knowledge of the most recent technologies, and are prepared to attempt to skip the
intermediate development others went through to get to that point. Thus, the path
through the four stages, while presented as characteristic, can only properly
describe the first time it happened; the experiences of nations trying to catch up
must be very different.

A different view of the four civilizations can be had by observing that the
hunter-gatherers acted for the most part alone or in small groups, and had little
flexibility to make changes in their life style or culture. The agrarians had more
flexibility, though they too tended to remain in the same occupation for their whole
lives. An industrial society is centred around organizations more than individuals,
but there is greater flexibility and ability to change than in the other two. An
information-oriented society returns us to a more individualistic orientation, but
there is more flexibility and freedom for both individuals and organizations. The
tension between flexible and inflexible organizational structures and between
collectivizing and individualizing trends will be important themes throughout this
book.

1.3 History and Technology

The last section presented an overview of the four main phases of civilization.
It is also useful to follow several threads of specific technological development
through these phases and to indicate effects they have had upon society. In later
chapters, attempts will be made to follow some threads forward to determine what
their future development will be. These topics have been chosen for discussion
because of their universal importance in all cultural change, because they strongly
impact behaviour (and thus tend to generate many basic ethical questions), and
because they are particularly important in understanding the fourth civilization.

Technology and Food

Increases in food production efficiency have been closely associated with
great societal transformations in the past. The transition from hunter-gatherer to
agricultural society depends entirely upon the recognition that food can be obtained
more efficiently through effective management of limited land areas (farms).
Further, rapid improvements in food production must take place simultaneously
with a nation's industrial revolution, for while one group of machines lures workers
to the city, another must make it possible for the land to allow them to go.
Food production is fundamental to the existence of any organized social
group. A nation whose people are starving will play no leading role on the world
stage, and its very existence will be threatened if the food shortage is prolonged.
When changes occur in soil fertility due to poor land management, climatic
alterations, or the devastations of war, a people may find itself on the move in
search of new land, or becoming absorbed by another group. They may simply
starve to death, or they may go to war against their neighbours in order to take
their food.
Particularly fertile land may have so many wars fought over it that it
becomes unusable, for its farmers may abandon it to the ravages of the fighting,
and topsoil with no crop to anchor it may blow away. Also, fertile land tends to be
concentrated in great river valleys and deltas, so some nations have much more of
it than do others. Consequently, in times of peace, food-producing nations will trade
in great volume with goods manufacturers from other countries, and the
establishment of food trade routes and transportation facilities also becomes
important.
Through to the mid- to late-industrial stage, whenever there is an abundance
of food, populations increase at an exponential rate. However, the amount of arable
land does not often increase; it may decrease with overcropping, poor
management, and desertification. Therefore, development of technology allowing
more food to be grown on a given piece of land has always been critical both to
social stability and to progress to the next phase of civilization.
From stone knives, bone spears, arrows, and cutting and cleaning implements
of hunter-gatherers, to the plough, sickle, and horse-drawn combine of the farmer,
to the modern collection of tractor-powered machinery, there have been steady
improvements in techniques of food production. Today there is also a wide variety
of fertilizers, pesticides, fungicides and other chemicals employed in growing food,
and both milk and beef cattle are fed an increasingly artificial diet in an effort to
squeeze the last gram of cream or hamburger from each animal.
At one point during a period of slow change in food production techniques,
Thomas Malthus (1766-1834) became convinced that population would always tend
to grow faster than the food supply. He could see no possible result other
than mass starvation in the near future. The fact that the world's food
supply supports far more people than Malthus thought possible is due to two
factors: First, for a considerable time, the more sparsely populated Americas
absorbed large numbers of Europeans, and even today their vast cereal-growing
lands provide food in great quantity for countries that have not been able to grow
their own. Second, technology has provided more and better varieties of food
animals and cereal grains, and this has had a particular impact in places like India,
which now produces a far larger proportion of its food than previously. Indeed,
subsidies to farmers in Europe and the United States had, even by the mid-1980s
(and the trend had continued), produced such a food surplus as to suggest modern
famines may be more the result of a failure of political will to feed everyone than of
any actual food shortage. Today, more people are fed by fewer people working less
land than a century ago, and this trend, too, shows no sign of slowing.
These developments do not mean that Malthus was entirely wrong; they just
postpone an inevitable shortage of arable land to some time in the future. After all,
at any given level of food technology, it seems there ought to be an absolute upper
limit to the population the planet can support. Yet, it can never be assumed at a
given time that the upper limit of all possible food production technologies has been
reached.
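Malthus's reasoning is at bottom arithmetic: a quantity that grows
geometrically must eventually overtake one that grows by fixed yearly increments,
however generous its head start. The following minimal sketch (in Python, with
illustrative growth figures chosen purely for demonstration--they are assumptions,
not data from this text) shows how quickly the crossover can arrive:

    # A toy Malthusian model: population compounds geometrically while
    # the food supply grows by a fixed yearly increment. All numbers
    # here are illustrative assumptions, not historical data.
    def years_until_shortage(pop=1.0, food=1.5,
                             pop_rate=0.02, food_step=0.02):
        """Return the first year in which population (in arbitrary
        units of 'people fed') exceeds the similarly measured food
        supply."""
        year = 0
        while pop <= food:
            pop *= 1 + pop_rate   # geometric (exponential) growth
            food += food_step     # arithmetic (linear) growth
            year += 1
        return year

    print(years_until_shortage())   # 44 years with these numbers

Raising the initial surplus or the food increment only delays the crossover; in the
model, as in the argument above, only continuing improvements in food technology
or a stabilized population can avoid it.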
Technological optimists are convinced that the ability to produce food will
continue to keep pace with (or be only just behind) population growth. They also
observe that factors like urbanization mean that more people live in a context
where having children confers little advantage (unlike on the farm) and assume that
reduced birth rates will eventually stabilize total population.
Pessimists are ready to forecast imminent mass starvation. They are sure that
the world's population will outstrip (or has already outstripped) the ability to provide
food long before it stabilizes. Whatever the case, the transition to and progress of the
next civilization will require that the food problem be met and resolved on a continuing basis.
Because food is a fundamental need, its provision is closely related to a number of
other areas, and it will be necessary to return to aspects of this issue at several
later points.

Technology and Energy

Primitive societies were little concerned with this class of problems. A warm
fire sufficed (where this had been discovered), and people walked unawares over
future Middle Eastern oil fields, the Athabasca tar sands, the British coal deposits,
and the Texan gas deposits. Later, human labour (including slaves), animals, wind,
and water were the power sources of the agricultural age. Additional forms of
energy were necessary to provide the enhanced life-style of farmers and seafarers,
for their ambitions to tame the environment had grown beyond the ability of mere
human strength to fulfil them.
The industrial age required vastly more energy than had ever been consumed
before. England became a leader partly because she sat on a mountain of coal, and
there was no reluctance to fill the land with soot and smoke in the name of
progress. Today, natural gas and oil have proven cleaner and more convenient, but
all three continue to be used at an ever-increasing rate. These fuels, plus water
power, nuclear fission, and alternative energy sources, are used to generate
electricity, which serves industry simultaneously as an energy source and a means
of energy transmission. The advent of the information age will not decrease the
total need for energy, for there are more people and they have higher expectations
for a life-style of abundance than ever. More goods than ever need to be produced,
and even if all people were to work in offices while the factories were filled entirely
with robots, manufacturing would still require energy.
New sources will eventually be required as old ones are exhausted, and there
will be proponents of a variety of replacement energy technologies, including solar,
geothermal, tidal, and nuclear fusion. The "high touch" culture--with its concern
over the quality of life--demands safe, clean, and renewable energy sources, and
therefore some of the old ones may pass out of common use before the resources
on which they are based become depleted.
However, cleanliness and safety both carry price tags. No matter how great
the desire for both, there is a point beyond which further progress toward such
goals is uneconomic. If the perceived cost of a power source--in monetary, energy,
or human terms--exceeds the expected return, it will not be used. In North America,
this had happened for nuclear power by the mid-1980s, for example, though
subsequent political and economic developments could yet change that decision.
Every civilization requires energy--and advanced societies need more than
primitive ones, witness the rolling blackouts of energy-guzzling California in early 2001.
Per capita energy consumption throughout the world is liable to increase for many
more years, so technologies that provide it will continue to be critical.

Technology and The Environment

At the hunter-gatherer stage, the environment determines the available
technology, and to a great extent, the cultural responses to technology as well. A
people with no access to copper will not have a bronze age, and those who live in a
favourable climate with plenty of game may never have the motivation to become
farmers. If fish are available, people take their food from the water. If neighbours
have desirable products, a people may either go to war and take what they want, or
develop trade. Technology used to overcome the environment is limited to
clothing, shelter, and simple hunting tools.
To a great extent, environment determines the technology of an agricultural
society as well. Soils and climates can support certain crops and cannot support
others. Grapes cannot be grown in the far north, nor wheat in a rain forest.
However, during this phase, there is a gradual increase in the stock of tools
designed to allow the farmer to overcome the limitations of environment.
The industrial age, with its great faith that humankind could master all
through machinery, saw an about-face in the relationship between man and the
environment. Technology became a tool to overcome and to exploit the Earth,
rather than simply a means to better live on it. Pollution was not merely accepted, it
was pointed to with pride as a visible sign of great progress--the smell of money. It
was only realized very late in this period that no life could survive on a poisoned
planet.
Each society has had to live in its environment and to manage it appropriately
to ensure survival. The hunter who killed all game within hunting range had to
relocate, and so did the farmer who exhausted the soil's nourishment through poor
cropping practices. The industrial age has seen the greatest impact on the
environment, with dramatic changes to the land, the forest, the air, and even to
near space around the planet.
A continuation of these uncontrolled changes to the environment on the scale
seen in the industrial age may eventually render the Earth uninhabitable. For
example, if the so-called "greenhouse effect" causes the climate to warm enough to
melt the polar ice, many large cities would vanish under the oceans, and much of
North America would become a desert. If temperatures go the opposite way, others
would disappear under sheets of ice. If acids from the burning of coal and oil
continue to pour down upon crops and forests, all plant life could die. If the rest of
the Amazon rain forest is cut and burned for short-term subsistence farming,
atmospheric oxygen levels could drop substantially. The enterprising pessimist may
choose from these and many other bleak futures. The entire Earth is involved, so it
would be difficult for its multiplied billions to secure new living quarters if the
current ones become uninhabitable.
In the information age, new possibilities exist to make informed choices with
respect to the environment, and to manage the quality of air, water, soil, and
climate. The knowledge that a problem exists and the techniques to solve that
problem can be communicated and implemented rapidly throughout the world. Of
course, the will to make the necessary changes and pay for them is less easily
transmitted. This situation also illustrates how use of one technology may
eventually require that another be developed to repair side effects of the first--a
theme that will be discussed further in Chapter 7. Suffice it to conclude for now that
a major task for the citizens of the fourth civilization will be to learn how to live with
the environment and manage it well.

Technology and Health

The systematic practice of medicine is a late development in human history.
The first physicians had little knowledge of anatomy and none about the causes of
disease. They relied on what would today be termed "folk remedies" for their cures.
For instance, a common technique for centuries was "bleeding" the patient to let out
supposedly diseased blood, a practice now known to be harmful in most
circumstances. The Romans maintained a staff of army doctors whose anatomical
knowledge and surgical skills became quite advanced, but their work had little
effect on the common citizen, who was at constant risk of death from disease and
such simple problems as appendicitis. Theories of disease developed in the
eighteenth and nineteenth centuries, and such practices as sterilizing surgical
instruments, quarantining sick patients, and eliminating unsanitary conditions made
an important contribution to increasing life span.
Some of the most important advances of modern medicine were the
development of vaccines and antibiotics. Diseases such as smallpox, polio, typhus,
yellow fever, diphtheria, and tuberculosis, which once killed millions of people, have
now been eliminated entirely or have had their effects greatly reduced (though TB is
on the rise again). Continuing improvements in sanitation, particularly in Western
cities where sewage is enclosed instead of running in the streets, have also
contributed to an increased chance of surviving childhood and beyond.

The use of antibiotics, coupled with the later development of hormone
treatments for birth control (the "pill"), had a profound effect on sexual practices,
particularly after the mid 1950s. Many people had already abandoned religious
notions of eternal consequences for promiscuity. Now, they were also freed from
such temporal consequences as pregnancy and sexually transmitted diseases. As a
result, the public perception of morality was redefined to fit the new freedoms and
sex came to be marketed as recreation instead of being seen as part of the old
social, religious, and moral contract of monogamous marriage. Whether the private
behaviour of people changed as consistently as the public view of it is more difficult
to assess. Both those who were fundamentally committed to the old moral
standards and those who had never followed them no doubt acted as they always
had. Others followed public opinion, and adopted a new life-style, for the perception
of what constituted normal behaviour was now the opposite of what it had been. Of
course, the medical story of the early 1990s was the failure of technology to provide
fast cures for the new venereal diseases of herpes and AIDS, and the consequent
abandonment of some aspects of the sexual revolution, at least for the time being
(and at least in public). Moreover, by the end of that decade, a number of strains of
bacteria had become resistant to antibacterial drugs, leaving researchers
scrambling for replacements as infection rates notched upwards.
One could use the sexual revolution and its effects to argue that modern
medicine had a negative impact on both morality and religion. However, one could
as easily blame modern communication and transportation technology, claiming
that by being in touch with other societies and their values, the people of the West
came first to take a relative view of both, and then simply to discard them
altogether.
Paradoxically, medical practice itself has its roots not just in the desire for
survival, but also in the ethical impulse. There have always been strong moral (and
often religious) convictions associated with the development and provision of
medical facilities for the masses of people who have not previously had them. This
behaviour--not so much one of self-interest, but of compassion--has been at the
heart of medical missions and humanitarian aid to undeveloped nations, disaster
relief, and universal medical care in the industrialized world and elsewhere.
The practice of medicine has also meant that the average age to which people in
all nations may expect to live is higher than at any time in recorded history, and
there is a better opportunity than ever for an individual to survive serious disorders
such as cancers, brain tumors and heart diseases. However, longer life spans mean
more people, exacerbating food and housing supply difficulties. In addition, medical
services are still not well distributed, and accessibility remains a problem in many
parts of the world. Improvements in medical technology shift the balance of
population (to the young at first, and then to the old). They also tie up expensive
resources in facilities and trained people, costing more money and affecting what
can be spent on other things.
That is, changes in medical practice have wide repercussions in the entire
society in which the new techniques are employed. Such interconnections of
medicine with society and public policy are well expressed by an important principle
of interdependence that applies to many other situations as well:

It is impossible to do one thing.

The modern challenges to medicine, if met, will cause new and dramatic
changes in the ways people live, in how many of them live, and in how long they
live. Some of these changes will be examined in a later chapter.

Technology and Warfare

The first stone tool developed for clubbing an animal could also be used for
hitting its inventor's neighbours over the head and taking their food. Bow and arrow
or spear could hunt both animals and men. Carts could carry produce to market,
men to war, or captured enemies back to slavery. Black powder could clear stumps
or fire cannonballs. Ships could carry trade goods or an invading army; and simple
machines could pump water or become battering rams.
The same technology that produced tractors also builds tanks. That which
made airplane passenger and mail service possible also created aerial warfare and
firebombing. The telephone lines by which one "reaches out and touches" far-away
relatives also carry military orders. Satellites can either transmit communications
over previously sealed borders, or spy on the enemy living there. Nuclear energy
can produce power and radioactive medicines and unlock the secrets of matter
itself, but could also destroy the world in a few hours. Chemistry can produce
healing medicines or the tools of warfare that kill millions.
Every technology has the potential both to improve the living conditions of human
beings and to harm them. Some complain bitterly because devices made for peaceful
use are twisted into weapons of war. Others point to civilian spin-offs of military
technology as sufficient justification to pursue the arts of making war.
Human history presents an unbroken record of nation coveting nation, of
peoples hating peoples, and of the endless making of war to realize such destructive
ambitions. Whether caused by shortages of food or land, envy over another's
prosperity, racial or religious hatreds, or competition over trade routes, there have
always been wars. Those involved have always sought out and used the highest
available technology for killing the enemy. A nation that lost one war due to inferior
technology, if allowed to survive, could always rebuild, create new weapons, and try
again. That is:

It is impossible to fight one battle.

More generally, no inventor or technological innovator in any field can ever
foresee the consequences for either peace or war of a particular idea or device. All
technology has consequences for society and for subsequent development of new
technologies. This leads to other statements of the principle of interdependence:

It is impossible to have one idea.

It is impossible to invent one thing.

These specific (and useful) statements can be generalized still further:

It is impossible to think one thing.

In nuclear weaponry, humankind now has the technology that could not only
kill every person in the world, but could also sterilize it of life altogether. Since wars
normally result in the highest available technology being used, the task facing
humanity today is nothing less than the elimination of war altogether, for the
human race cannot survive a new global conflict. With the fall of the Soviet Union,
and the collapse of its military apparatus, the potential for superpower warfare has
been greatly diminished, but the probability remains high that those same weapons
will find themselves one day in the hands of others with more hatred and fewer
scruples. At the same time, because the war industry is an important factor in the
economies of many nations, survival will also mean social and economic change.
The fundamental urge to survive will also mean that the new civilization will be
different in its view of humankind and of the appropriate use of technology; the
popular ethics of war will change to reflect the price of war.

Profile On ... Decisions and War

A few fateful decisions affecting World War II

Decision: In the 1930s, the Nazis, motivated by racial hatreds, decide to
persecute the Jews. Scientists and engineers from all over Germany (including
Albert Einstein) are forced to resign their positions. Many leave the country.
Consequence: Allied nations receive an influx of highly intelligent and well-
trained experts in the very fields critical to development of technologies needed to
win the ensuing war. Einstein is influential in the decision to develop the atomic
bomb.

Decision: On September 29, 1938, Britain's Neville Chamberlain abandons his
promises to Czechoslovakia and agrees to Hitler's demand that he be allowed to
annex large portions of the country. Hitler promises that he has no more territorial
ambitions.
Consequence: Other parts of Czechoslovakia are annexed by her other
neighbours. Germany not only takes over the remainder, but Hitler, emboldened by
what he sees as weakness in Britain and France, invades Poland as well, the action
that triggers war.

Decision: In the summer of 1940, Nazi Air Marshal Göring launches the Battle of
Britain, bombing major population centres to demoralize the British citizenry.
Consequence: Her industrial capacity all but untouched, Britain continues
manufacturing airplanes that prove superior to the German ones, inflicting heavy
losses on the Luftwaffe, and eventually forcing the Germans to accept defeat in the
air.

Decision: In June of 1941, Hitler decides to break his secret pact with Stalin,
and orders his army to invade Russia.
Consequences: (1) Cold winters, long supply lines, and the Russian army bring
the Germans to a halt within sight of Moscow. As the Germans are pushed back, the
misadventure weakens the German army, forcing Hitler to fight on two fronts.
(2) Blaming his generals for the defeat, Hitler dismisses them and assumes
personal command--a task that proves to be beyond his ability.

Decision: Despite steadily worsening relations with the Japanese, the United
States military command decides to ignore the early December 1941 warnings of an
Imperial fleet mobilization in the Pacific.
Consequence: Japan surprises the American fleet at anchor in Pearl Harbor on
December 7, 1941. Much of the U.S. Pacific force is destroyed in the sneak attack,
and Japan has a free hand to expand her empire throughout the South Pacific in
the months that follow. To this day, there are some who claim this was a deliberate ploy
by high U.S. officials to gain public approval for entering the war.

Decision: In August of 1945, the Japanese government decides to ignore
American warnings about their new and destructive weapon.
Consequence: Reasoning that the cost in lives would be much greater if an
invasion of Japan were to be launched, the U.S. drops atomic bombs on Hiroshima
and Nagasaki. The war ends, but at the cost of much of the population of both
cities.

Technology, Transportation, and Communication

The ability to maintain a nation of any size is as closely tied to the availability
of fast and efficient means of transportation and communication as it is to the
provision of sufficient food. Indeed, as previously observed, these two are closely
linked, for the food problem is one of transportation as well as one of production.
With every advance in the ability to move goods and people about or to
transmit messages over larger distances, the effective size of the world shrinks and
the potential size of nations (or empires) grows. The converse is true as well. For
example, when the roads deteriorated after the fall of Rome, European peoples
retreated into more localized communities. Not until centuries later did the resulting
nations build empires again.
In this century, barriers of distance (for developed countries) have for all
practical purposes ceased to exist. This does not necessarily mean the world will
become a single nation politically, though it is in some senses already one
economically. It must be remembered that familiarity breeds contempt, and
enhanced communication and transportation facilitate killing one's traditional
enemies as much as they foster understanding them.

Standard of Living

There can be little doubt that people are for the most part better off in the
material sense today than at any time in human history. Many of the poor are not so
poor as they once were, and the world could probably feed all its people (for a
while) if the political will existed to do so.
In Western nations, even the lower middle classes are wealthy beyond the
dreams of ancient peoples. Typical citizens own or control their living space, can
buy any kind of basic food (and many luxuries) and have several modern appliances
(stove, refrigerator, washer, dryer, central heating) that do more work than a
houseful of slaves or servants. Moreover, their leisure time is abundant, and
entertainment industries are big business. There are inequities of course, for even in
wealthy nations, the gap between the rich and the poor is great, and there are
homeless people even in the most prosperous cities. There is also still a chasm
between the rich and poor nations. However, there can be no doubt that technology
has improved almost everyone's standard of living, and few would care to return to
the days of poor nutrition, no medical care, high infant mortality and a thirty-year
average life span.
Where are these trends leading? There are two very different views of the
possible future, and the contrast between them illustrates the difficulties involved in
attempts to look ahead.
The first is to suppose that if the standard of living continues to rise, there
would eventually be no practical difference between the rich and the poor, for
beyond a certain point the actual amount of wealth is more a means of keeping
score than it is an indication of class differences. This is the most idealistic and
optimistic view. It must be tempered by the observation that such progress has not
in the past led to classless societies. It is much easier to imagine that there will
merely come to be new definitions of class than that there will be no class
differences at all.
A second view is that continued growth will inevitably result in collapse.
The more emphasis put on material goods, the less regard there would be for
people, straining the bonds of society. Such concepts as duty could break down as
people isolate themselves, become laws-unto-themselves, and live for their own
pleasure without regard for others. Moral consensus could vanish. Having lost the
glue that holds it together, society could dissolve into chaos. This is the most
pessimistic view.
Those who hold to cyclical views of history might subscribe to the bleaker of
the two scenarios, whereas those who hold to some purposeful view of history
may believe that society will remain cohesive even if material goods and war-
making abilities both increased without limit. The true course of future history is
likely to contain some elements of both.
In any event, production efficiency for material goods cannot continue to raise
the general standard of living without at some point causing profound changes to
the way people view and use consumer goods. Just what these changes might be is
unknown, though some possibilities will be considered later.
In addition, many now unknowable technologies and events will affect the
future. The generation of the 1940s might have forecast changes due to, say,
television, for it had been in the making for some time, and the example of radio
was available. It would have been impossible to predict the effects of computers,
however, for they were due to unforeseeable technical breakthroughs. Each
generation of new technology contains such elements--ones that would seem
magical to people twenty years earlier. By their very nature, aspects of tomorrow's
standard techniques that are magic today are unknowable. What is more, wars,
revolutions, stock market crashes and other monetary crises, terrorist attacks,
shortages in commodities (such as oil), nuclear disaster, new trade patterns,
changes in consumer preferences, political and religious scandals, and other factors
also shape the society to come, determining what behaviour and attitudes are
acceptable, what technologies will be pursued and used, and what ethical standards
will be followed or discarded.
Moreover, one should never underestimate the power of general disappointment
with, and the desire to distance oneself from, the perceived failure or irrelevance of
the values and activities of a previous generation. There is every reason to believe that
this perception is particularly pronounced today, and that sharp shifts in the
dominant world view are in the offing, with even more dramatic consequences for
society and the technology it uses.

Summary

New technology has always had profound social effects. Moreover,
technologies are not only linked to social change, but to each other. The pursuit of a
given technological change will necessarily cause changes in society, in other
technologies and--in typical feedback fashion--in the seminal technology itself. It
would be well to restate the principle of interdependence once again:

It is impossible to change only one thing.

Finally, because of its central importance to the present and the future, it is
necessary to consider the specific development of computing technology. For this, a
separate section is warranted.

1.4 A Brief History of Computing


People have long recognized the competitive advantages that could be
realized by having available more efficient data storage and computational ability.
From counting on the fingers, to making marks on the walls of caves, to the
invention of picture numbers, to the modern cheque or banknote, there has been a
steady progression away from directly manipulating the objects computations
describe and toward the use of abstractions to represent the originals. Mechanical
devices have played an important part in this progression. More than one culture
has come up with the idea of placing beads on a string (the abacus). In some places,
these are still the preferred calculating device after several thousand years. A
skilled operator can calculate the cost of a large number of purchases on an abacus
much faster than most people can enter them into a calculator.
Some who have studied the ancient British monument known as Stonehenge
have come to the conclusion that it was an enormous calculating device for making
astronomical predictions. Other monuments left by the Babylonians, South and
Central American Indians, and South Sea Islanders may have had similar purposes.
The Scottish mathematician John Napier (1550-1617) devised Napier's bones and
published tables of logarithms intended to simplify tedious arithmetic computations.
These led directly to the wooden or bamboo slide rule, known and loved by many
student generations prior to the development of inexpensive electronic calculators.
To the French mathematician and theologian Blaise Pascal (1623-1662) goes
the honour of inventing the first mechanical adding machine (1642). It was based on
a system of gears similar to those in a modern automobile odometer and was used
for computing taxes. However, parts for the device could not be manufactured with
sufficient precision to make it practical, and it never became widely used. About
thirty years later, the famous German mathematician and co-inventor (with Newton)
of calculus, Gottfried Wilhelm von Leibniz (1646-1716), made a similar but more
reliable machine that could not only add and subtract but also multiply, divide, and
calculate square roots. Many people improved calculating machines over the next
century, and by 1900 they had an important place in government and commerce.
But as late as the mid-1960s, electromechanical versions of these calculators could
do only basic four-function arithmetic, weighed thirty pounds, and took up half a
desktop.
Meanwhile, another idea important to the modern computer was emerging--
that of the stored program or instruction sequence. This idea arose in connection
with the development of automatic looms by the French inventor Joseph Marie
Jacquard (1752-1834). First shown at the 1801 Paris Exhibition, these looms used a
collection of punched metal cards to control the weaving process. The machine,
with variations, is still used today, though it is now controlled by punched paper
cards or tapes, or by direct connection to a microcomputer.
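In modern terms, Jacquard's cards embodied the idea that a machine's behaviour
can itself be expressed as data. A minimal Python sketch of that idea follows; the
card names and the "loom" operations are invented for illustration, and make no
claim to correspond to a real loom's mechanics:

    # A minimal sketch of the stored-program idea: the "program" is just
    # data (a sequence of cards), and a simple machine obeys one card at
    # a time. Card names and operations are invented for illustration.
    program = ["lift", "weave", "lower", "weave", "lift", "weave"]

    def run(cards):
        shed = "down"                 # state of the (imaginary) loom
        for card in cards:
            if card == "lift":
                shed = "up"
            elif card == "lower":
                shed = "down"
            elif card == "weave":
                print("pass the shuttle with the shed", shed)

    run(program)  # changing the card list changes the cloth, not the loom

The point of the sketch is the comment on the last line: to produce a different
pattern, one changes the cards, not the machine--precisely the separation of
program from mechanism that Babbage would later generalize.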
The first computer--a machine combining computational ability with stored
programs--was designed by the British mathematician Charles Babbage (1792-
1871). He worked on his "Difference Engine" for about eleven years before
abandoning the project. Later, he designed a much more ambitious "Analytical
Engine" that was intended to be an algebraic analogue of Jacquard's loom. Although
Babbage even had a programmer for the engine (Lord Byron's daughter, Ada
Augusta, the Countess of Lovelace), this machine was never constructed in his
lifetime. Its concepts were not realized until 1944 when the Mark I computer was
developed in the United States.
By this time, the punched paper medium had become standardized through
the work of Herman Hollerith. He devised a card data storage and sorting system for
the U.S. Census Bureau, which was first employed in the 1890 census. Hollerith left
the bureau six years later to form his own company, the name of which was
changed to International Business Machines in 1924.
Meanwhile, vacuum-tube technology had developed to the point where an
electronic computer could be manufactured. The first of these were the British code-
breaking devices Colossus Mark I and Colossus Mark II built in 1943 and 1944 for the
British intelligence service at Bletchley Park. The latter attained speeds not matched
by other computers for a decade. When the war was over, these machines were
dismantled and their parts sold as surplus.

At about the same time, the groundwork of a number of researchers in the
United States came to fruition in the construction of the Electronic Numerical
Integrator and Calculator (ENIAC) by J. P. Eckert and J. W. Mauchly at the University
of Pennsylvania. This machine, which contained over 18,000 vacuum tubes, filled a
room six meters by twelve meters and was used principally by military ordnance
engineers to compute shell trajectories. In subsequent years, many similar
computers were developed in various research facilities in the United States and
Britain. Such devices, which generally were limited to basic arithmetic, required a
large staff to operate, occupied vast areas of floor space, and consumed enormous
quantities of electricity.
Eckert and Mauchly were also responsible for the first commercial computer,
the Universal Automatic Computer (UNIVAC), which they manufactured after leaving
the university. Their company was eventually incorporated into Sperry (now merged
with Burroughs to become UNISYS), which still manufactures large industrial
computers. Today, those early vacuum-tube monsters are referred to as "first-
generation computers," and the machines that are their successors are called
"mainframes."
The transistor, developed at Bell Labs in late 1947 and improved during the
early 1950s, was designed to replace the vacuum tube, reducing both electrical
consumption and heat production. This led to miniaturization of many
electronic devices, and the size of typical computers shrank considerably, even as
their power increased. Transistorized machines built between 1959 and 1965
formed the second generation of computers.
Prices were still in the hundreds of thousands to millions of dollars, however,
and such machines were generally seen at first only in headquarters of large
research and government organizations. Even by the mid-1960s, not all universities
had even one computer, and those that did often regarded them as exclusive toys
for the mathematicians and research scientists. There were occasional courses at
the fourth-year level, but freshman introductions to computer science had not yet
become popular.
Invention of the integrated circuit dramatically changed things in the
computing world. The first result was another, even more significant size reduction,
for what once took up several floors of a large building now occupied a small box.
The first of these third-generation computers was the IBM System 360, which was
introduced in 1964 and quickly became popular among large businesses and
universities. This size reduction also resulted in the first "pocket" calculators, which
appeared on the market in the early 1970s. Even at the initial price of several
hundred dollars, these put into the hands of the average person more computing
power than the first UNIVAC had possessed. New models proliferated so rapidly and
so many new features were incorporated into the pocket calculator that one
company decided to have a chip designed that would allow it to program new
functions so as to cut down the time necessary to bring a new model to market.
The chip, called the 4004, gave way to the 8008, and then to the 8080 and
8080A. The latter became the backbone of the new small-computer industry, as
numerous companies developed kits and fully assembled computers. In its later
incarnations by Zilog as the Z-80 and other descendants, such as the 8085, 8088,
8086, and now the 80186, 80286, 80386, 80486, Pentium, and P6, this invention
lives on in millions of microcomputers. Not long after the 8080 became a
commercial reality, Motorola developed the 6800 chip, which had the advantage to
programmers of being cheaper and somewhat easier to work with than the 8080. It,
too, became popular for a time, but soon gave way to other designs.
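The pace of this progression is often summarized by Moore's observation
(popularly, "Moore's law") that the number of components fitting on a chip doubles
roughly every two years. A short Python sketch shows how quickly such doubling
compounds; the 1971 starting figure of about 2,300 transistors for the 4004 is
approximately right, but the projected counts are illustrative only:

    # Rough illustration of exponential growth in chip density, assuming
    # a doubling every two years from the 4004's roughly 2,300 transistors
    # in 1971. Projected figures are illustrative, not exact counts.
    transistors = 2300
    for year in range(1971, 1996, 2):
        print(f"{year}: ~{transistors:,} transistors")
        transistors *= 2

Twelve doublings carry the count from thousands into the millions, broadly
consistent with the Pentium-class chips just described.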
At about the same time the Z-80 was developed, the 6501 and 6502 chips
were derived from the 6800 as low-cost industrial process controllers. In 1976, the
6502 was also used to build a small computer, this one entirely contained on a
single board. It was called the Apple, and Apple Computer Corporation went on to
sell millions of the Apple ][ and its descendants, the ][+, //e, //c and //GS, surpassing
all other manufacturers of small computers in the process, and becoming the sole
source for nearly every important advance in small computer technology for two
decades.
In 1977, Radio Shack joined the competition with its Z-80 based machines. In
Europe, the equivalent popularizing role was played by Commodore (a Canadian
company) and by Sinclair (a British firm). A few years later, IBM came into this
market with the 8088-based PC. The mere presence of the giant changed the whole
market for a time, with most other manufacturers seeking to make machines
compatible with those of IBM. Eventually some of these "clone" makers, such as
Compaq, became a larger presence in the market than IBM itself. By the late 1990s,
the machines generating the most attention were capable of storing more and
manipulating larger numbers than anything previously seen in the microcomputer
market. They were also capable of handling processing requirements of the
graphical user interface (GUI) first realized in the Xerox Star, Apple Lisa and
Macintosh, then in Commodore's Amiga and Atari's machines, and now employed by
most computer users. Integration of circuits had now reached the point where
millions of components were being crammed into a single chip. Between 1987 and
1991, major new commitments were made by Apple with the Motorola 68030 and
68040-based Macintosh models and by IBM with their OS/2 machines. With the
latter, IBM also followed Apple's lead into graphics-oriented software, helping to
ensure this style of interface a continuing acceptance in the marketplace. Graphical
user interfaces were also adopted by the makers of scientific workstations such as
those made by Sun Microsystems, and were being attached to other machines
running the UNIX operating system.
In the early 1990s, Microsoft, already the dominant manufacturer of operating
systems for Intel 80x86 chips and of applications for both these and Macintosh
platforms, began to market a GUI called Windows that was a rough copy of the
Macintosh Operating System. The courts ruled, however, that it was not a close
enough imitation to fall under copyright law, and Windows (in various flavours)
gradually became dominant on Intel based machines (sometimes now called
"Wintel" systems).
By 1995, Apple had formed partnerships with Motorola and IBM to develop
new microprocessor technology and was already marketing machines based on the
new PowerPC RISC chip, while IBM was porting its operating systems to the new chip
as well. The two were readying new operating systems and preparing specifications
for a common hardware platform on which to run them. Apple had licensed its
operating system and the first Macintosh clones were appearing on the market--
some from very well known consumer companies such as Motorola. Microcomputers
had become powerful enough that the minicomputer category had been all but
crowded out of the market on price/performance considerations.
By 2002 Microsoft had moved through Windows 95, 98, and NT to Windows
2000 and ME. The world had also seen the demise of OS/2, and the migration of the
MacOS to a new UNIX-based OS (NextStep, later rebuilt and renamed OS X)
developed by Steve Jobs--the once ousted co-founder of Apple. At the same time,
Apple had transitioned to the RISC-based G4 PowerPC chip and was offering
machines whose raw processing power would once have placed them in the
supercomputer category. Meanwhile, in its lower priced line, Apple had made
computers into fashion statements, an innovation others were also quick to copy.
While much of the marketing activity and most headlines focused on the
microcomputer segment of the industry, larger machines had undergone startling
changes also. Fourth generation supercomputers can be used in situations where
calculation complexity or data quantity is so great as to be beyond the ability of
ordinary mainframe devices. These machines are used by governments, the
military, and in academic research institutions. Still newer generations of computers
are on drawing boards in the United States and Japan, and many of the new
developments will undoubtedly filter down to become consumer-oriented devices in
the future. At the same time, however, desktop computers, with their ever-faster
chips and larger memories, were encroaching on application domains once thought
to belong only to supercomputers.
At the opposite end of the scale, pocket sized computing devices had also
become important. These ranged from DOS- or Windows-based miniaturized
versions of their desktop siblings to specialized personal time and communications
organizers (Personal Digital Assistants or PDAs). Also called Personal Intelligence
Enhancement Appliances (PIEAs), these devices boast handwriting recognition,
wireless communications abilities, and sophisticated time management functions.
Apple's Newton was a key player and innovator in this market, but the 3Com/Palm
Pilot eventually took over that market.
For most applications in the near future, however, microprocessor-based
computing devices will have sufficient power to suit the majority of individual,
academic, and business uses. They are inexpensive, easy to link (network) for
sharing other resources (storage devices and printers), and they run languages and
other programs similar to those found on mainframe computers. Much development
work (particularly in programming and publishing) is being done with
microcomputers in view, and it is safe to predict that descendants of these machines
are the ones most people will be referring to when they speak about computers in the
future.
Larger machines will also continue to grow and change, as will organizations
depending on them. Moreover, computers of the future will be as different from
those of today as these are from ones of the late 1940s. They will be smaller (down
to pocket size), faster, and with greater storage capacity. They will be integrated
with video and communications technology to give immediate access to worldwide
databases. They will undoubtedly become easy to use, and at some point the need
to offer university level courses in their operation will cease, for they will have
become common technical appliances.
So broad and diverse have the applications of electronic processors become
that "computer" seems a misnomer, for the machines in which such devices are
embedded spend little time calculating, and much more finding, organizing,
preparing and communicating data. In this respect, the Internet, especially the
portion known as the World Wide Web (WWW), has become a kind of prototype for
the universal distributed library of the future, and most organizations have
connections, for e-mail if for nothing else.
Computers have already profoundly changed many of society's institutions
(business, banking, education, libraries). They will have even greater effects on
institutions in the future. They have also raised or caused new ethical issues, and
these will need to be addressed in the interests of social stability. In addition,
developments in computing have affected or given rise to other new products and
methods in a variety of fields, further demonstrating the interdependence of ideas,
society, and technology.
There are microprocessors in stereos, televisions, automobiles, toys and
games. Entertainment and telecommunications industries are heavily dependent on
new electronic technologies. Computers themselves are directly attached to
research instruments that gather and interpret data in basic physics, chemistry, and
biology experiments. The resulting changes and advances in scientific research
have also caused profound effects on society and its institutions. They have resulted
in new social and ethical questions being raised, whose very asking could not have
been anticipated in the industrial age. These include issues relating to software
copyright, data integrity, genetic engineering, artificial intelligence, displacement of
human workers by robots, how to live in and manage an information-based society,
and how to repair damage wrought in the industrial age.
Technical trends and possible social and ethical consequences will be
examined and extrapolated in more detail in later sections of the book. It is at least
possible to conclude at this point that advent of the fourth civilization (aka "the
information age") is owed more to the modern computer than to any other single
invention of the late industrial period.

1.5 Forecasting the Future


It has already been observed that events constituting history are understood
in the context of both motivation and technology over a time continuum. Similar
considerations apply to attempting to predict the future--that is, forecasting what
might yet happen to society as a result of current and new technology and
motivations. On the one hand, the flow of events perceived to date may usefully be
projected forward, if this is done in a reasonable fashion that takes into account the
most likely results of that flow.
On the other, few modern day forecasters can claim the authority of Biblical
prophets, who correctly predicted events in complex detail (and sometimes names)
decades or centuries ahead of time. Today's forecasters rely on extrapolating ahead
the trends of the recent past, rather than on Divine revelation. Therefore, all such
modern attempts at long-term prophecy will fail, at least in part, for they cannot
take into account the human-unforeseeable watershed events and decisions that
result from creative departures from tradition, and that change a technology and its
society quickly and dramatically.
Who could have forecast the uses of electricity, the internal combustion
engine, or atomic energy, even ten years before their discovery? Who can take into
account the serendipitous discoveries of ten years from now, or the result of, say, a
narrow election win or loss on the people governed? Who knows exactly how today's
decisions and discoveries will be applied to change the course of society a year from
now? Because of such uncertainty, all forecasting implies considerable speculation,
even when it appears to be a straightforward extrapolation. Indeed, given the
recent history of political and technological change, assuming that things will
continue according to current trends may be the most unreliable speculation of all.
Any one of the alternative futures proposed by today's forecasters may indeed
come to pass--or none may.
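Mechanically, trend extrapolation is simple, which is part of its seduction. The
Python sketch below fits a straight line to a handful of made-up data points and
projects it forward; every number in it is invented, and the method itself embodies
the very assumption--that the future continues the past--just criticized:

    # A minimal sketch of naive trend extrapolation: fit a straight line
    # (least squares) to past observations, then project it forward.
    # The data are made up; real forecasts fail exactly where reality
    # departs from the fitted trend.
    years = [1990, 1992, 1994, 1996, 1998, 2000]
    values = [10.0, 11.9, 14.1, 16.0, 18.2, 19.9]   # hypothetical indicator

    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
             / sum((x - mean_x) ** 2 for x in years))
    intercept = mean_y - slope * mean_x

    for future in (2005, 2010):
        print(f"{future}: projected value {slope * future + intercept:.1f}")

Nothing in the arithmetic warns the forecaster when a watershed event is about to
break the trend; the projection is only as good as the assumption behind it.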
Speculation about the future is actually necessary for progress, for scientific,
technological, economic, and political breakthroughs are all impossible without the
application of a lively imagination to possibilities no one else has seen. Noted
speculator and science fiction writer Arthur C. Clarke (Profiles of the Future) has
this comment on qualifications for a successful predictor of the future:

I would now go so far as to say that only readers or writers of
science-fiction are really competent to discuss the possibilities of the
future.
This claim may produce indignation, especially among second-rate
scientists who sometimes make fun of science-fiction (I have never
known a first-rate one to do so--and I know several who write it). But the
simple fact is that anyone with sufficient imagination to assess the
future realistically would, inevitably, be attracted to this form of
literature. I do not for a moment suggest that more than one per cent of
science-fiction readers would be reliable prophets; but I do suggest that
almost a hundred percent of reliable prophets will be science-fiction
readers--or writers.

There are also those who expect a day when forecasting of at least the broad
outlines of future society (and perhaps many details) will become possible--even
commonplace. Perhaps the best known fiction with this theme is Isaac Asimov's
Foundation series of novels. Asimov portrayed a day when "psychohistory" has
become a science, and the future is indeed forecastable. Though it is not possible to
dismiss such a possibility altogether, the advent of such techniques is surely a long
way off (A discussion of the concept also appears in Michael Flynn's An Introduction
to Psychohistory). It would also be foolish to assume that a technique for predicting
future history could be developed without the forecasters becoming the managers
of that history, influencing critical events to make the outcome better--at least in
their own eyes.
This book also deals with many speculations about the future. Most are
attempts to determine which outcomes of technology are likely, based on historical
experience and existing trends. That is, this work is more concerned with
extrapolation than it is with speculation. However, some things are new, or are yet
at the research stage, and it is difficult to make predictions with any degree of
confidence. This difficulty has not stopped others from publishing their ideas, and
there now exists a rich literature of future scenarios, aspects of which will be
discussed in more detail later (along with new ones).

Visions of the Future

One of the earliest classics in the field was Jacques Ellul's The Technological
Society, published in 1965. Ellul had a clear vision of the tragic aspects of the
technological revolution. He saw society losing what had made it truly human,
blundering rapidly down unexplored paths, following guides competent in narrow
technical fields but in little else. Ellul was not afraid of technology, but felt that its
material promises were empty, that its faith had come to be in progress for its own
sake, and apart from a higher standard of living, there was little for the human spirit
to celebrate in the new age. Although his comments were made in the context of
the old industrial age and its failures, many others have expressed similar views of
the information age since that time.
One of these is Theodore Roszak, long a critic of the goals of artificial
intelligence research, whose 1986 book The Cult of Information is subtitled The
Folklore of Computers and the True Art of Thinking. Roszak castigates many other
modern writers as members of an unthinking cult who have made "information" into
what he calls a "godword". He desires to re-establish a clear distinction between
what machines do when they process data, and what human beings do when they
think--a distinction that he feels has been incorrectly blurred by others. Here is
Roszak's view of some of the technological optimists:

We might almost believe, from their simplistic formulation of the information economy,
that we will all soon be living on a diet of floppy disks and walking streets paved with
microchips.

The most optimistic views of the future come from such as Alvin Toffler (The
Third Wave), John Naisbitt (Megatrends), Grant Fjermedal (The Tomorrow Makers),
Harry Stine (The Hopeful Future) and Eric Drexler (Engines of Creation). All of these
are willing to foresee many new and better potential worlds resulting from current
and projected technologies. A society of plenty, the colonization of space, near
immortality, and the removal of class barriers are among the predictions that these
and other writers make. They are in a long line of philosophers who believe that
Progress is an inevitable upward flow in the state of human affairs. Progress has
become a quasi-personalized idea--as have Gaia, Nature, Evolution, Love, and
Justice--invested with qualities that resemble those of a deity. Things will always get
better, wars are only "mistakes" in the flow of progress, and technological solutions
will always be found to all problems.

It is not hard to find data to support such optimism. After all, much of what
was science fiction in the 1950s is now a reality. Space flight, cancer cures,
information utilities, nuclear power, robots, and many other once fanciful ideas are
now taken for granted. Of course, far more of the old predictions have yet to be
fulfilled, and perhaps never will be, but the most optimistic in the scientific and
technical community often seem to believe that a permanent utopian civilization is
within the very grasp of humankind.

Profile On ... A technological optimist

G. Harry Stine is well known among readers of futurist publications for his
unabashed confidence in the future. The selection below, from The Hopeful Future
is typical of those who believe in Progress to solve problems through new
technology. The chapter title for the section from which this is taken is Enough
Energy for Everybody to do Everything.

The human race has never run out of energy or had a real energy shortage.
The human race will never run out of energy or suffer from an energy
shortage.
As Coanda observed, we're surrounded by energy. In the past when human
beings faced the possibility of exhausting or exceeding available energy supplies,
thereby creating an "energy crisis," they discovered new energy resources and
learned how to use them. Each time, we worked our way out of an energy crisis by
developing new energy sources and technologies. If the trends are reliable--and
there's no reason to suppose they're not--we'll also work our way out of the current
energy crisis . . .
Forecasts about limits to growth are based on specific energy resources and
have assumed no future technical developments. Technology defines resources.
Waterwheels made water into an energy resource. Steam engines did so for coal.
Internal combustion engines did the same for oil.
At the time forecasts about an energy crisis are made, inventors are already
quietly developing the new technology that will develop new energy sources within
twenty-five years and, within fifty years or less, will completely displace the older
energy technology.
Current technologists completely miss when they forecast how we'll work our
way out of an energy crisis. For various reasons, they discount or neglect to
consider the role that new technology will play in less than twenty-five years.
Technologists prefer to improve familiar technology by a fraction of a percent
than to gamble on a major improvement from unfamiliar technology. They manage
to make marginal improvements in old technology just before it is made obsolete by
new technology.

On the other hand, there is also a rich popular literature of apocalyptic visions
of the future--visions of imminent disaster. These see no hope at all for humanity or
for earth. If it does not perish in a nuclear holocaust, everyone on it will starve to
death when all arable land has turned to desert or been poisoned. Perhaps all will
freeze when air pollution becomes dense enough to block the sun and lower the
temperature, or die of heat when the "greenhouse effect" increases it instead.
Alternatively, life could all dissolve in acid rain, perish from hard solar radiation
when the ozone layer disappears, or even be destroyed by a superior race of aliens.
Not a year goes by without a forecast of global economic collapse or nuclear
conflict, or without some would-be prophet supplying a date on which Jesus Christ
will return and God is supposed to end the world, despite the Bible stating that no
one can know this date.
No conclusive evidence can be cited for any of these extreme scenarios. The
kind of future expected may depend more on the predictor's personality than on the
analysis of today's trends. The optimistic technologist says there is hope for the
future; thoughtful philosophers worry that humanity has lost more than it has gained; and
the doomsayers have given up all hope. In the midst of this uncertainty and
contradiction, others have tried to find spiritual answers to difficult questions. Some
have turned to mystical claims that meditation can bring on a new order; others to
the Biblical answer that God the creator alone determines the fate of the universe.
Some may see such a refocusing as another manifestation of the tension and
balance between the high touch and the high tech. It may be regarded as part of a
struggle for liberation of the human spirit from the perceived bonds of the machine
age. It may be a holding position while people await more definitive data from the
scientific community, in which their long-term faith resides.
The actual near future will likely fall somewhere between utopia and
apocalypse. New and existing technologies need to be examined both for their
potential to improve the human condition, and for their potential to cause harm.
Part two of this book concentrates on the various technological revolutions and the
direct effects they may have on society, and part three focuses on the roles of
certain major institutions. It is worth noting, however, that if any modern day seers
(including the author of this book) really knew what the future would bring, there
would be far more money to be made in the stock market than in writing books.

1.6 Summary and Further Discussion

Summary

There are a variety of views about the meaning of history and whether it is a
purposeful and controlled unfolding, or a series of fated cycles. To be useful,
accounts of history must be interpreted in a flow from the past through the present.
With some care, its lessons might be extrapolated to forecast the future. History is
influenced strongly by the motivations of the people who make it, and by the
technology that they use. Its events in turn shape ideas and technology for the next
generation.
Although many divisions in history can be identified, those useful to this text
are:

o Hunter-gatherer (the first civilization)
o Agricultural (the second civilization)
o Industrial (the third civilization)
o Information (the fourth civilization)

Many nations are now well into the fourth phase, and the transition is being
accompanied by broad changes in society and technology, as well as by a lively new
interest in ethical issues.
Specific examples of the historical development of certain technologies
include:

o Technology and food
o Technology and energy
o Technology and the environment
o Technology and health
o Technology and warfare
o Technology and transportation
o Technology and communication
o Technology and standard of living

The brief history of modern computing gives an indication of the way in which
this particular technology has developed and become a powerful influence on
society. Much of this text is concerned with such mutual influences, and many of
these will be developed further in later sections.

Research and Discussion Questions

1. If you are reading an account of some event in the past, what clues could
you look for in the narrative to determine how factual it is? In particular, how does a
falsehood or exaggeration distinguish itself from the truth? How does a mythical
account distinguish itself from an historical one? How could you spot possible
distortions designed to favour the author's political, religious or economic theories?
2. What are some of the external sources to which one could turn in an
attempt to verify a historian's account? Describe as many as you can think of, and
comment on their value.
3. Consider Tiberius Caesar and Jesus Christ. Do library research to find out for
which of these two there exists more complete and reliable documentary evidence
to verify the historical accuracy of the main events of their lives (This could be a
rather extensive research project). Now also comment on the extent to which such
evidence is actually accepted by scholars and by the population in general.
4. What effect, if any, did the invention of the printing press have on the
industrial revolution? On society in general?
5. This chapter speaks much about interplay of motivations (especially
ethical), technology, and the events of history. From your own knowledge or
research, provide examples of important historical events that hinged on (a) a
specific application of technology; (b) a moral/ethical or political decision by a key
player in the event.
6. Which do you think is more nearly correct: that societies develop in the way
they do because of technological advances, or that technological advances take
place because the society in which they are made is ready for them?
7. What were some of the effects on family life as a result of the industrial
revolution?
8. If you (a product of the industrial/information age) were suddenly thrust
into a hunter-gatherer society to make your way with no technological help, what
would you do to survive?
9. If the plough is the key invention for the agricultural society, and the
computer for the information society, what can be said of the industrial society in
this respect? Is there a single piece of technology that can characterize the whole
age? If so, what is it? If not, why not?
10. Write a history of the automobile, focusing on its effects on the economy
and society of the Western world.
11. Write an account of the effects of television on Western society.
12. Write an account of the effects of computers on Western society.
13. Some material in the chapter focuses on particular turning points in
history. Try to imagine how the world's history would have been altered if certain
events had not occurred. Write down what the major differences in today's society
and technology would be if:
a. the Romans had built a practical steam engine from Hero's model. Could an
industrial revolution have taken place in A.D. 100?
b. the Mayans and Incas had both discovered the wheel and begun to use
animal-drawn carts centuries before the Spanish arrived. (What if the Europeans
had met a civilization stronger than their own?)
c. half the munitions factories built in the United States before the Civil War
had been in the South, instead of (virtually) all in the North; or the South had
followed up at the battles of Bull Run, pursued the defeated Union army, and taken
the undefended city of Washington, D.C.
d. England had been overwhelmed by Germany in the Battle of Britain.
e. Lee Harvey Oswald had missed, and John F. Kennedy had lived to be re-
elected.
14. The text mentions some turning points in World War II. What were some
others?
15. What is the importance of studying history for our present day society? for
the future?
16. There have been many who have attempted to prophesy the future. Look
up one or two of these from before 1990 and assess the extent to which they
succeeded or failed.
17. Try to obtain one of the supermarket "tabs'" annual issues of psychics'
prophecies for the ensuing year. Describe these and say how many came true
during the following year.
18. The Western notion of prophecy comes largely from the Bible.
a. What does the word "prophet" mean in the Biblical context?
b. Make a list of at least twenty prophecies, both the making and fulfilling of
which are recorded in the Bible.
c. Make a list of at least twenty Biblical prophecies that do not appear to have
been fulfilled as yet.
19. Write your own prophecy of the next ten years of technology.

20. Look up the Gutenberg project. Who is it named after? What are its goals,
and why is it important (or not)?
21. The text attempts to formulate a principle of interdependence. Explain
this, and try to reformulate it in other words.
22. Make a list of the ten most important turning points in history and explain
why each was so important.
23. Make a list of the ten most important current problems that could be
turning points in an account of our history written in the future.

Bibliography

Asimov, Isaac. The Foundation Trilogy. London, England: Octopus Books, 1982
(first pub. 1952, 1962, 1964)
Barraclough, Geoffrey. Turning Points in World History. London, England:
Thames and Hudson, 1977.
Bebbington, David. Patterns in History. Leicester, England: InterVarsity Press,
1979.
Beres, Louis René. Apocalypse--Nuclear Catastrophe in World Politics.
Chicago: University of Chicago Press, 1980.
Burke, James. The Day the Universe Changed. Boston: Little Brown, 1985.
Cetron, Marvin & O'Toole, Thomas. Encounters with the Future: a Forecast of
Life Into the 21st. Century. New York: McGraw Hill, 1982.
Clarke, Arthur C. Profiles of the Future (second revised edition). Bury St.
Edmunds: St Edmundsbury Press, 1982.
Drexler, K. Eric. Engines of Creation. Garden City, NY: Anchor Press, 1986.
Durant, Will and Ariel. The Lessons of History. New York: Simon & Schuster,
1968.
Fjermedal, Grant. The Tomorrow Makers. New York: Macmillan, 1986.
Flynn, Michael. "An Introduction to Psychohistory." Analog, April 1988, pp.
60-78 (Part 1) and May 1988, pp. 38-64 (Part 2).
Hardison, O.B. Jr. Disappearing Through the Skylight--Culture and Technology
in the Twentieth Century. New York: Viking Penguin 1989
Hodges, Henry. Technology in the Ancient World. New York: Knopf, 1970.
Lower, Arthur R.M. A Pattern for History. Toronto: McClelland & Stewart, 1978.
Montgomery, John Warwick. The Shape of the Past. Minneapolis: Bethany,
1962.
More, Thomas. (tr. Ogden, H.V.S.) Utopia. New York: Appleton-Century-Crofts,
1949
Naisbitt, John. Megatrends. New York: Warner Books (pb), 1984.
Naisbitt, John & Aburdene, Patricia. Megatrends 2000. New York: Morrow,
1990.
Orwell, George. Nineteen Eighty-Four. Harmondsworth, England: Penguin,
1964.
Project Gutenberg. <http://www.prairienet.org/pg/>
Rogers, Everett M. & Larsen, Judith K. Silicon Valley Fever--Growth of High
Technology Culture. New York: Basic Books, 1984.
Roszak, Theodore. The Cult of Information. New York: Random House, 1986.

Schaeffer, Francis A. How Should We Then Live. Old Tappan, NJ: Fleming H.
Revell, 1976
Schuurman, Egbert. Technology and the Future--A Philosophical Challenge.
Toronto: Wedge, 1980.
Sutcliffe, Richard J. Introduction to Programming Using Modula-2. Columbus:
Merrill, 1987
Sutcliffe, Richard J. <mailto:rsutc@charity.twu.ca>

Chapter 2
The Foundations of Science and
Technology
Seminar - "I Don't Understand - What is Science?"
2.1 The Kinds of Knowing
2.2 The Nature of Scientific Enquiry
2.3 The Role of Abstraction
2.4 Science, Technology and Technique
2.5 Science and Technology--Practice and Practitioners
2.6 The Technological Society?
2.7 Summary and Further Discussion

2.1 The Kinds of Knowing

Logos

One of the most important philosophical questions has to do with the
meaning of "knowing" (epistemology). That is, what does one mean by such
statements as "I know this is true," or "We hold these truths to be self-evident"? The
answer to meaning questions like these depends very much on the culture, the
discipline, and the thought system of the one who is the alleged "knower," for there
are a variety of ways to regard this concept.
In the tradition represented by certain of the ancient Greek philosophers such
as Plato, and as later reinterpreted by thinkers such as René Descartes (17th
century), the highest and most reliable form of knowing was the most abstract
(including the mathematical), for knowing is equated with the result of reasoning.
True ideas, once appropriated from the realm of the divine and put into the
transmittable form of words by logical argument and rhetoric, were termed "logos,"
and this too had an element of the divine about it.
For instance, in this view, the god who created the universe was not just
unknown, but unknowable, unless he would deign someday to send to mortals a
logos (word) to reveal himself--a task that John assured them had been fulfilled in
Christ (John 1).
An example of this kind of knowing is the statement "two plus two equals
four." The truth of this statement seems to depend on universal ideas independent
of language or the notation in which they are written, and so this truth is knowable
absolutely (within the context of the usual real numbers). This is true regardless of
whether it is written in words, or as 2 + 2 = 4, II + II = IV, or deux + deux = quatre.
Such knowing also includes lines of reasoning such as:

All women are mortal.
Nellie Hacker is a woman.
Therefore, Nellie Hacker is mortal.

The conclusion is held with confidence (given the premises), because the rules
for such a logical process are regarded as infallible.
Logic is important in itself, and its study is worthy as a prerequisite for all
disciplines, for all scholars need to be able to think clearly and correctly. However,
taken to extremes, there is no truth but logic alone, and it judges everything else,
including the physical world. In this view, anything else is at best uninteresting,
perhaps suspect, and may not be knowledge at all. In the most radical view,

applications of the pure science of thinking to the mechanics of the physical world,
including the development of science and technology, are unimportant, even beneath
the notice of the philosopher. Knowledge is thought of as an end in itself rather than
a means to generate practical applications or products. Why should the Greek
thinkers have built steam engines? Did it not suffice to demonstrate their theoretical
possibility?

Empiricism

Another kind of knowing is that derived from experience, or, as Aristotle would
have said, from the substance a thing has (including its potential properties) rather
than from its abstract form. That is, this kind of knowing is practical, not just
theoretical. Such is the knowledge derived when the scientific method is applied to
the physical world.
One could also express this in terms of data and information. Data consist of
the raw facts of a matter, so far as these can be ascertained; information is the
meaning attributed to those facts by some community of appropriately informed
experts.
o That Canada has a $700 billion debt might be a fact; whether one should
conclude that the country is on the verge of bankruptcy (and what to do about it) is
a matter of interpretation.
o That a political leader has been pursuing secretaries sexually may be
factual; whether anything can or should be done about the matter is a consequence
of interpretation within a value system, including the values of political priorities.
This (empirical) kind of knowledge depends utterly on the ability to gather and
interpret evidence from the physical world. It also depends on the ability to give
meaning to that data and communicate that meaning reliably to other people. That
is, the data and the consensus on the information it conveys together constitute
"knowledge" in this realm.
o The fossils dug from the earth provide a factual record of dead organisms;
the meaning of that record depends on its interpretation, for no human alive has
actually seen the creatures who left those bones. This is true of all history, the
more so if it is sufficiently removed from the present.
It is important to realize that the consensus of experts that is at some point
called "knowledge," is always in process and may be wrong. Indeed, "knowledge
shifts" are not at all uncommon. A theory might be taught as universally accepted
fact for many years, only to be later (and perhaps suddenly) replaced by a
contradictory one. However, as long as one realizes that what is called knowledge in
this data/information sense is an approximation and a moving consensus, it is still
possible for those involved in a particular field to say they "know" a lot of things.
With some refinements, this is the model for knowledge actually used in the
sciences today.
While modern scientific thought has roots in the rationalism of ancient
Greece, it owes its current form to modifications made first by Renaissance
humanists and later by the materialists and logical positivists of the nineteenth
century. The sphere of modern science is the systematizable, the organizable, and

the empirically investigatable. It is not always possible to tell what belongs in this
sphere, nor is it always possible to induce knowledge of absolute truth from
instances investigated by the senses. (That stones fall to the ground more quickly
than feathers does not mean that it is the nature of heavy objects to fall faster
than light ones.) Thus, there must always be an element of doubt and
incompleteness to science. Karl Popper believed that doubt expressed the very
essence of the scientific method: "It must be possible for an empirical scientific
system to be refuted by experience" (The Logic of Scientific Discovery). Absolute
verification was not the issue to Popper, but potential falsifiability by empirical
means was. Scientific results could be thought of in terms of probabilities of truth,
but this was the best knowledge a scientist could have.
It is important to note, however, that doubt implied by potential falsifiability is
not of the existence of the reality being investigated. Rather, it is doubt that current
descriptions of that reality constitute the final and most accurate word on the
subject.

Extremes of Empiricism (1)--Positivism

Some of empiricism's most radical philosophers have taken it to further
conclusions. They have held that the experience of human senses suffices to
describe the entire knowable universe. For them, the supernatural is specifically
defined out of existence, as is everything not approachable with the standard
methodology of science. In short, if it is not science, it is not knowledge. That is,
while such moderns do not disdain practical applications of their intellectual
achievements, some tend to scorn anything not achieved through a particular kind
of mental discipline.

Extremes of Empiricism (2)--Deconstructionism

In a departure from the classic Greek reverence for knowing as a pure
abstraction, knowledge is sometimes today held to be almost totally experiential--
even to the point that material phenomena are held to exist only as they are
perceived. For example, some modern philosophies of physics hold that if a tree
falls in a deserted location, and there is no one to hear it, not only is there no sound,
but the tree continues to exist in both fallen and not-fallen states until an observer
comes along to trigger it into one or the other condition. Should a century go by
meanwhile, the second state would spring into existence in an instant, complete
with old, decayed wood, as soon as the first traveller came that way. That is,
human observation is not
only necessary to give the physical world meaning; it actually creates the physical
reality to observe--the very existence of an objective reality is radically doubted; no
objective truth exists; and its place is taken by whatever a person perceives to be
the case or wants to be the case (deconstructionism again).

Empiricism and Practical Science

Whether they believe experience describes real-world phenomena (most
practising lab scientists), or that observations create the events they purport to
describe (some theoreticians), there is still a general belief among scientists that all
data is acquired through the senses, and becomes knowledge only as it is filtered by
the intellect. This approach is useful, provided everyone involved realizes the
relative truths it produces are determined by specific intellectual filters, with not all
views equally valued or listened to in the process. Every society has certain
dominant, ruling, or control paradigms that set its intellectual agenda and provide it
with its characteristic way of looking at the world. When these reigning paradigms
undergo a shift, some old knowledge ceases to be, and other things come to be
placed among the "known" (The false becomes true, and vice-versa).
o Scholars once "knew" that the earth was the centre of the universe; today
they "know" otherwise.
o Intellectuals once "knew" that God created the world, but most today say
they "know" it came about by chance, evolution, and natural processes.
There is another problem with taking radical doubt too far. Is the proposition
"nothing is real" itself real? David Stowe (Popper and After: Four Modern
Irrationalists) makes the following points about this approach to knowing:
(1) It implies that human knowledge has not been increasing--a proposition
that seems at variance with actual recent experience.
(2) It implies that all of the potentially infinite number of world views are
equally valid, and so every kind of physical law or theory is equally as improbable
as, say, a logical self-contradiction. Thus knowledge is impossible. This, says Stove,
is irrational, and neither can nor does provide a working philosophy for scientists.
However, the scientific method does depend on the idea that knowledge
gained by application of the senses has to have its truth content measured by
reliable standards and that its acceptance depends upon informed judgements. For
example, the statement "objects released in air fall to the Earth's surface" is
universally attested as true by the experience of every human being. The statement
"the sun will rise tomorrow," is very nearly in the some category. However most
people must accept "the Earth is an oblate spheroid" on the basis of evidence
gathered by others, for they have no means of performing the relevant
experiments. Likewise, only a few can verify "naphthalene has a molecular weight of
128.17." That is, the reliability of all these knowledge statements is subject to
human judgement. Yet, they all seem to imply that there is an objective reality to
judge. Other factors also influence the truth value of statements made from
empirical evidence; these will be discussed in more detail in the next sections.

The Contrast with other Fields

The arts and the humanities of the Western World, by contrast, are based on a
more subjective tradition. Their heritage is culturally characterized by a strong
Judeo-Christian influence (as redefined by the Reformation thinkers) and influenced
to a somewhat lesser extent by materialism. Though humankind is still given central
place in modern Western versions of these philosophies (again, most notably in the
humanities), humanity is not regarded as a mere observer, evolving by chance and

whim in a purely mechanistic universe. Rather, humanity is part of a whole that is
greater than the human mind or senses can comprehend and may therefore obtain
and use that which may legitimately be termed "knowledge" quite apart from
experience as an observer in the scientific sense of the word. An artist, musician, or
writer (fiction or non-fiction) also uses reality filters to make a statement about the
world and a personal response to it, but these filters are not identical to those of the
research scientist, though they are of a similar kind.
In this tradition there is a tendency to view the material world as a limited and
incomplete or even flawed manifestation of realities that go beyond sensory
perceptions of the physical universe. This is certainly true among religious thinkers,
though such a view is found in other disciplines as well. For instance, modern art
and music--perhaps in partial reaction to the success of the sciences--have both
moved away from interpretations and depictions of the physical world, and have
come to concentrate on representing the emotions of the artist or releasing raw
emotions from the audience. In common with the deconstructionists of the written
literary work, these practitioners have moved to the notion that there is no reality in
their work apart from the experience of responding to it. Thus, their connection with
the physical world has diminished even as the threshold of artistic activity needed
to release the raw emotions of the audience has gone up.
It is important to realize that a statement such as "this is the best piece of
writing (art, music) of the year" is a true observation about the speaker's own
interaction with the work. Even if no other person agrees, the statement is no less
true. That is, some knowledge can be highly personalized. Even the box office
success of a work is not a statement of absolute merit. Rather, it is the aggregate of
many such instances of personalized knowledge, or to use the common term, of
"taste."

Intellectual Multiculturalism

C. P. Snow, in a 1959 lecture later published as The Two Cultures, postulated
that there had come to be a division of intellectual activities into two distinct categories,
with scientists in one, and almost everyone else in the other. Snow detected deep
and sometimes bitter animosities between the two groups. Each had its own view of
what constitutes knowledge, how it is obtained, and what are the ethics of applying
it. What is worse, the depth of division between the two camps is directly
proportional to the sophistication of the technology developed by the one, and to
the despair of the other that it will never have the power to control it. There were
people who could travel in both circles, but they almost had to become different
persons when they moved from one culture to the other.
Some fourteen years later, Jacques Ellul (The Technological Society)
expressed a more comprehensive view of the situation. He saw the technological
mindset (if not strictly the scientific one) becoming overwhelmingly powerful,
sweeping all other forms of thought away--becoming not just dominant, but the only
way of thinking. Technique (i.e., efficient method) is in his view irresistible. Every
task or discipline has a most efficient technique that eventually emerges, develops
fully, and destroys anything of lesser efficiency. All humanity will ultimately be

caught up in a kind of amorphous technological totalitarianism extending over every
aspect of life--one that cannot be avoided because of its claim to maximal
efficiency.
Meanwhile, a group of intellectuals known as deconstructionists promoted the
radical rejection of the idea that objective truth and meaning exist. They claimed
both were lacking even in written words because only at the experiencing of
interaction with a text was meaning generated in the reader, and because this could
never be shown to be universal, the text had no meaning in itself, even if one was
intended by the author.
What antinomians did to the study of morality, deconstructionists in general
did to epistemology (the theory of knowledge), for there was nothing that could be
said to be known any more.
With despair over the perceived dominance of technique reinforced by the
parallel deconstruction of truth itself, many intellectuals were left viewing
humankind as shorn of purpose, hope, and values--its very humanity simultaneously
deconstructed of meaning and sold for a technical lentil stew.
Attempts to liberate technique by reconstruing it within a framework of
meaning--such as Schuurman's 1972 book Technology and the Future--underscore
the feeling among philosophers that the technological boat had set sail for
destinations unknown and left both them and the human spirit behind on the shore.
Indeed, by the mid-1980s, Allan Bloom (The Closing of the American Mind)
could lament that the battle seemed lost. In their obsession with technology, he
believed that Americans had entirely lost sight of the humanities, but especially of
philosophy and even of logical thinking. For Bloom, philosophy had become a voice
crying in an academic wilderness: no one was interested in hearing it, and none
were qualifying themselves to do so. Even science had given way to the demands of
the marketplace, relinquishing its claim to be a pure discipline. He recommended
returning students to the rigor of the classical Greek thinking as an antidote to the
sloppiness he detected in modern approaches to knowledge. His critics have not
been certain that Bloom's criticisms are valid, nor whether any such return is
possible. (The sharpest opponents of an important role for philosophy in education
claim it is irrelevant to economic reality, adds no understanding of the physical
world, and thus is not worth studying.)
In like manner, Charles Sykes (Profscam) claimed that the university had become
captive to professorial vested interests, neither doing research nor teaching well,
and offering a form of education that had little value for any but foreign students. A
spate of similar books joined the attack on the relevancy of the university enterprise
in the eighties and early nineties.
In more recent years, as high technology has become easier to use, artists
and writers have embraced the new machines as a means to their own ends, and
are now among the most enthusiastic and demanding users of computers. It is
interesting to ask whether this phenomenon is a refutation of Snow, a vindication of
Ellul, a further point of lament for Bloom et al., or something else altogether.
It is also worthwhile to observe that an atmosphere of despair is rather
unstable, for it presents opportunities for new infusions of hope into the mix,
perhaps from unexpected directions. This observation may explain the increase in

interest in spiritual answers being given to the truth and meaning questions by the
end of this period.
As noted in Chapter 1, some optimists see no loss at all in becoming an
overwhelmingly technically-oriented society, for there are manifest benefits to many
technologies. Still, doubts and questions remain--are there no other valid ways of
"knowing" other than by science? Must not empirical (scientific) sense-based
knowledge forever remain an approximation or interpretation of a reality of which it
cannot be known that it is absolutely known? Finally, is a technological society rich
or poor in human values?

Belief

Another model for knowledge is that of a belief widely, sincerely, and
reasonably held. Beliefs rest on some evidence for the thing believed, and so have an
empirical aspect. However, most people use "belief" in a slightly different way than
they do "knowledge." Things "believed in" are generally considered to be less
secure in their foundations, and perhaps less widely held to be true than things said
to be "known." That is, a small group (or one person's) certitude that something is
true, based on what others might regard as incomplete evidence, is termed a belief,
while a more general consensus about something is termed knowledge. Of course, a
belief, however widespread, is not necessarily true just because it is sincerely or
widely held--it could be sincerely wrong in all the holders. On the other hand,
scientific knowledge is not always true either--it is sometimes shown to be wrong
after having been defended as absolutely true for a protracted period of time. One
could even conclude that all knowledge is based on shades of belief.
The adherents of some religions, Christians included, would add yet another
term, "faith," by which they would mean an absolute knowledge derived through a
gift of God's revelation. Faith is knowledge that does not lose certainty because it
lacks universal consensus or current empirical evidence. The idea is that God exists
and knows all; humanity finds truth by paying attention to what God has revealed.
As John asserts: "In the beginning was the logos, and the logos was with God, and
the logos was God." (John1:1) He goes on in the same vein in an attempt to
convince literate Greek readers that the otherwise unknowable God had now sent a
revelation of Himself in Jesus Christ and so had become knowable in person.
In addition, much information about the universe God has made can also be
found by a sufficiently careful examination--this is sometimes expressed as
"thinking God's thoughts after Him," or "knowing God by his works." In the faith
context, absolute knowledge is external to the human race--it is revealed rather
than being discovered or invented. Thus, empirical knowledge (of things discovered
or invented) cannot in this view be absolutely relied upon, for nothing can be known
with the certainty of the things revealed. Neither science nor belief is therefore
qualified to judge such knowledge; they are simply tools to enhance it.
In this theory of knowledge, there is no a priori conflict between the absolutes
affirmed by faith in God's revelation and the approximations obtained by the senses
(scientific). However, conflict does exist in practice for two reasons.

First, there is the tendency for institutions to develop around the holders of
faith. Such institutions then demand some of the faith affirmation for their own
pronouncements, and these may touch upon empirical matters rather than on the
revelation allegedly being safeguarded. Individuals find it far easier than do
institutions to be simultaneously affirmers of the faith and seekers of empirical
knowledge. Organizations can sometimes officially adhere to statements on matters
peripheral to the original faith long after most of their (still faithful) members have
abandoned those statements.
The second reason for conflict between faith and empiricism is that members
of the scientific community often reject faith affirmation as inherently abhorrent,
regardless of whether such matters are within their sphere of competence and
training, and even though they themselves affirm a faith in and build institutions
around a set of philosophical presuppositions.
One could summarize these difficulties by saying:

Religion attempts to answer questions about ultimate meaning; these and not the detailed workings
of the physical world are its territory. That is, its theologians stray when they pronounce upon
physics as much as do the philosophers of science when they speak to the meaning of the universe.

Opinion

The last kind of knowing to be considered here is called opinion. This has the
weakest claim to the term "knowledge," and is also the hardest to define. Opinion
is commonly thought to consist of positions privately and personally held to be true,
and that can be so maintained without reference either to facts or to the effect of
the opinion upon other people. Determining whether a statement falls into the
category of opinion is extremely difficult. To disparage a statement by another, one
may say: "That's just your opinion." There often seems little rebuttal from such a
judgement, for it appears to rest on the democratic notion that all personal views
are of equal value and equally likely to be true. Statements such as "it's too cold,"
"that was a good book," "God exists," and "killing is bad" could all be disparaged as
mere "opinion."
However, the first two of these are true statements about the speaker's
reality--they are matters of taste, rather than of opinion in the casual sense of the
word. The third is a statement of faith, and the fourth is about moral objects. These
last two statements are surely more than private opinions, for they cannot be
privately held and acted upon but are by very nature about relationships, for to act
on them is to affect others.
If one tries to define an opinion as a claim about knowledge of which the
speaker is unsure (e.g., "I think the bum is guilty."), then the statement is not
directly connected to an external reality, however strongly stated. However, a
statement of doubtful knowledge of the truth is eventually resolvable as to its truth
or falsity, so it is related to facts, and is not just an entirely personal and private
reality. What is more, people act on their views, so what are called opinions do
affect other people, and their truth or falsity therefore matters to other people.

Thus, there may not be anything left for this category--what are called opinions are
either another kind of knowing or else are meaningless (as far as truth value is
concerned).

Summary

It would be far beyond the purpose of this book to analyse the shades of
meaning of the "knowledge-terms" in any greater depth, or to present all the
arguments concerning their correct use. Those interested should consult a good text
on epistemology and one on systematic theology. Instead, in this chapter, a further
examination of the notion of scientific knowledge will be undertaken. Comparisons
with other disciplines will be made, and these may shed light on how scientific ideas
develop and on the relationship between science and technology. Consideration will
be given to the role of science and technology in the development of a society, and
certain general technologies will be looked at with a view to their impact on the
future.

2.2 The Nature of Scientific Enquiry


One characteristic of humankind by which it distinguishes itself from all other
life forms is an insatiable curiosity about the workings of the universe. Societal
conditions have not always allowed this curiosity to be indulged; even practical
pursuits require wealth and a freedom from manual labour not available to
everyone. Theoretical pursuits also demand toleration of investigators whose entire
exercise may be mental, and whose results may mean little to the average person.
The systematization of such work gives rise both to technology (or technique, as it
might better be called) and to science. These are not the same thing, despite being
closely linked in the present culture, and the purpose of this section is to give
careful consideration to both, to see how they are related, and to investigate some
of the scientific and technological influences on society.

Science

First, consider science, the more theoretical of the two. The fundamental
premise of any discipline that is to call itself scientific is this:

The universe that is the object of a scientific study is sufficiently orderly to be in some sense
measurable, testable, and at least potentially predictable.

That is, radical doubters notwithstanding, the doing of science seems to
require at least the perception of a systematic reality in order to have something
that it can be about. It is usually a simple matter to distinguish disciplines of a
scientific type from those of most others. For instance, the creation of music,
painting, sculpture, and the writing of poetry or novels are not generally regarded
as scientific activities. These pursuits may have rigid rules for some aspects, but
need not always, for their practitioners are thought to be free to present ideas and
impressions without being bound by anything called reality.

It is also possible, though more difficult, to distinguish science from the
humanities (philosophy, languages, literature, etc.) and from the studies of society
(sociology, anthropology, psychology, economics, etc.). In the latter group in
particular, attempts are often made to apply scientific methods, but this may not be
entirely successful. After all, it is not known whether human behaviour is predictable
in the same manner as that of things under scientific study is supposed to be
(except, perhaps, statistically). Although it is methodology (and not results) that
determines whether a discipline is scientific, the method of science assumes an
element of predictability, and it is not until some work is done that one knows if this
factor is present. A discipline may use the methods of science on the assumption
that they are appropriate, but it is only as those methods produce reproducible
results that the practitioners gain confidence that they are indeed "doing science,"
and not something else.
It is easy to assume that scientific methods do or ought to apply to a given
field of study. It is more difficult to discover how to make them apply (which is
partly a matter of technique). It is harder still to demonstrate that the assumption
was correct and the phenomenon being studied can be demonstrated by the
methods being used to be predictable. Finally, if some apparently orderly pattern is
discovered, these techniques shed no light on the source of the perceived order (or
lack thereof).

The case of Mathematics

The case of the discipline of mathematics is particularly interesting, for its
philosophers can take one of two extreme (but not necessarily mutually exclusive)
views:
o that mathematical ideas are entirely theoretical and speculative, with no
necessary connection to the physical world
or
o that mathematics describes things with a real existence.
To put it another way: Do mathematical ideas come into being the first time
someone thinks about them (created by thought), or are they pre-existent (already
in the universe) and only being discovered as time goes on? For example, consider
the equation ax^2 + bx + c = 0 (a, b, c real numbers with a ≠ 0), solved by the
quadratic formula x = (-b ± √(b^2 - 4ac)) / 2a. Did the quadratic formula exist
before it was first written down by a human being, or has it always been
inherent in the concept of number?
Although there are some who will hold out for the absolute truth of one or the
other of these positions, mathematics actually has both aspects, for while the
entities it discusses are on the one hand mental ones, these ideas clearly do on the
other hand have some relationship to the physical universe.
o The concept of number is universal and pre-existent. God has always
existed in three persons, for example. However, the numerals humans employ to
express and communicate this idea are cultural inventions, not universal truths.
Thus, the ideas contained in the assertion that 2 + 2 = 4 are inherent in the concept
of number, and are not inventions. However, the notation in which the idea has been
written is an artifact, for rather than "two" or "2," one could use "deux" or "II"
without changing the meaning.
o The same is true of the meaning of the quadratic formula on the one hand,
and any particular way of writing it on the other.
o The use of base ten numerals like 4645 to express the idea of
4000+600+40+5 is probably due to the vast majority of humans having ten fingers
with which to begin learning to count. There is no a priori reason why one should not
use a system founded on a base of two, three, eight, sixteen, or some other
number. Indeed, one does use base twelve (dozens and/or gross) to measure
quantities of eggs, buns, or hours, base sixty to measure degrees, minutes, and
seconds, and bases two or sixteen inside computers (a short sketch following this
list renders one number in several of these bases).
o Pythagoras' Theorem on right triangles (a^2 + b^2 = c^2) is true regardless
of the way in which it is written out, and it unfailingly categorizes triangles as right triangles or
not regardless of what any observer may think or how that observer might write the
result down. It is true even if you call the things left triangles.
o Likewise, the interesting observation that the number 1961 reads the same
right-side-up or upside-down is entirely a construct of the notation; it has no
universal truth in itself. On the other hand, the idea of symmetry that this example
illustrates is universal, and can be found wherever some object can be rotated or
flipped onto a copy of itself.
o In a broader sense, this example illustrates the universal notion of
complementarity found in such pairs of opposite ideas as: left/right, up/down,
right/wrong, good/evil--all of which exist independent of the language that describes
them.
Similar arguments can be made, not just for number theory, but for statistics,
topology, algebra, analysis, discrete mathematics, calculus, transfinite numbers,
and set theory. Although many of these ideas have appeared on the human scene
recently, the very rationality of their interconnectedness argues that they are in
some manner inherent and inevitable (part of an objective reality) and that they will
certainly be discovered once one thinks long and deeply enough.

Who Can Understand Mathematics?

The difficulties in understanding the nature of mathematical statements are
compounded by the fact that in all but the simplest cases one must be a
mathematician in order to perform its mental experiments. A grade ten student in
remedial (general) mathematics once said to me "I know everything there is to
know about mathematics already; why should I have to take this course?" The sad
fact was that he barely had acquaintance with the multiplication of fractions and
had never heard of the aspects of mathematics already mentioned, much less of
computational geometry, complex analysis, relativity, probability, combinatorics, or
any of their applications--the chasm of his ignorance was unbridgeable.
That is, in this realm "truth" can only get informed consent--can only be
understood--if one has sufficient training and experience in mathematical thinking
to qualify as a member of the consensus. Not just anyone can comment
meaningfully on mathematics, for to grapple with its ideas requires special

knowledge and experience. Even among highly qualified mathematicians,
embarrassing errors take place. For instance, a proof for a widely accepted theorem
is sometimes later shown to be incorrect and either a new proof must be supplied,
or the theorem may be shown to be false after all. In one celebrated case in the 60's
and 70's a graph theory result was purported to have been proven in published
papers by three successive writers, and all three proofs were subsequently shown to
be incorrect.
One could summarize by saying that whether mathematical truths are created
or discovered by mathematicians, they certainly cannot be discerned apart from the
collective experience, training, and beliefs of the mathematical community, and this
is not unlike the situation in the scientific community and in other disciplines.
Specialized training is required to comprehend the ideas of a discipline.
That is, acceptance of mathematical and scientific results by most people,
even those trained in another branch of the discipline, requires some degree of
acceptance of the consensus of the expert part of the community. This consensus,
because it is an interpretation, is not necessarily true absolutely. For example, no
matter how much a mathematical model for the first few seconds of the existence of
the universe may be consistent with present-day scientific observations, acceptance
of the model as a fact is a leap of faith, one that bears a great resemblance to the
faith held by others that an all-powerful creator made everything in six literal
days.

Is Mathematics Certain?

In the latter part of the nineteenth century, a number of logicians showed that
the standard methods of logic employed at the time led invariably to fundamental
contradictions. For instance, consider the definition:

The barber of Seville shaves all the men of the city who do not shave
themselves.

or, similarly

S is the set of all sets that do not include themselves.

Now does the barber shave himself or not? Is S a member of S or not? Unless
one sneakily attempts to escape the logical trap by positing that the barber is a
woman, a machine, or an alien, either answer leads to a contradiction. The
existence of such contradictions introduces an uncertainty into mathematical logic
itself, not just into the correctness of part of its consensus. That this uncertainty
could not in any way be resolved was demonstrated in 1931 by Kurt Gödel, who showed
that no set of axioms used to describe a mathematical system could prove both the
consistency and completeness of the system.
Consider, for example, the natural numbers:

N = {1, 2, 3, 4, ...}.

Gödel showed that, on the one hand, any set of axioms (rules) that could be
used to prove all true statements about these numbers would necessarily be
inconsistent (lead to contradictions like the one above). On the other hand, if the set
of axioms is consistent (no contradictions possible) it could never be sufficiently
complete to derive all true statements about the system. As Douglas Hofstadter
puts it: "In short, Gödel showed that provability is a weaker notion than truth, no
matter what axiomatic system is involved." (Hofstadter, p. 19)

As far as the laws of mathematics refer to reality, they are not certain; and as
far as they are certain, they do not refer to reality.
--Albert Einstein

That is, unless one is willing to use logical tools known to be unreliable
(potentially inconsistent), there are always truths about the number system that
cannot be proven.
The same principle applies to computing, for an equivalent of Gödel's
theorem (the undecidability of the halting problem) demonstrates that no machine
built on logical systems can process all problems, or even determine ahead of time
whether it will be successful in the attempt. Simply put,
not everything knowable is computable. That is, human beings can know more than
finite logical machines, no matter how elaborate those machines may be.
In a similar vein, it is not possible to prove with the rules of human logic that
God exists (or that he does not). His existence may be strongly inferred or
authenticated from evidence, but not proven in a logical or a priori sense.

Comparing Other Disciplines to Science

This concept of uncertainty is applicable to science as well, because Gödel's
Theorem applies to all logic when it is applied to infinite systems, and not just to
mathematics. Science must also deal with the uncertainty that the closer something
is observed the more the very act of observation changes the thing being examined,
and so the less accurate the observations are.
Yet, the entrance of such uncertainties into the scientific realm does not
create the difficulties for its practitioners that it might for theoreticians. After all, a
researcher in some other part of the world with similar equipment needs only to be
able to duplicate a reported experiment and obtain essentially the same
experimental results within a reasonable margin of error. For science, duplicable
results are the important thing, even where there is no agreement about the
interpretation (meaning) of the results.
Indeed, questions about ultimate meaning are not really on the agenda of
science, and scientists who speak of them are no longer talking about their own
specialities, but about those of others. The Nobel prize winning physicist who goes
on the talk show circuit to proclaim there is no God has fewer qualifications to speak
on that subject than a theologian does to declaim on gravitational field theory.
It is also important to note that other intellectual pursuits, such as religion,
art, poetry, philosophy, languages, mathematics, and so on, while methodologically

different, are organized disciplines in the same sense as are the various sciences.
Each such field of study constitutes a recognized body of knowledge with its own
rules, practitioners, and special methods of interpretation. In fact, one distinguishes
a scientific discipline from the others precisely on the basis of its particular
intellectual methodology, the rules for which serve both to define what is science,
and to determine what are its appropriate fields of study.

The Scientific Method

A typical elaboration of the scientific method in five steps goes something like
this:

1. Observe the universe in question, collecting raw descriptive data.
2. Analyze the data, systematizing and interpreting it.
3. Synthesize a theory (formulate a hypothesis) to explain the data, or
develop a model to illustrate it (i.e., create a mental abstraction of the presumed
physical reality).
4. Test the theory or model under as many variant conditions as possible to
determine the degree of correspondence between the abstraction and the physical
universe.
5. Modify the theory or model and re-test it until it agrees with all the relevant
phenomena. If a universal consensus is reached on a particular theory it might be
promoted to the status of a "law."

This process might be summarized by saying that science is a search for true
descriptions of the world by making logical inferences drawn from empirical data.
Although this description of the scientific method would probably be accepted (with
variations) as a working definition by the vast majority of those who term
themselves scientists, one must realize that it is only a close approximation of what
science is. A number of cautions must be added to properly explain it.
First, applying the method implicitly assumes the hypothesis is testable. Some
are, some are not. Strictly speaking, the latter are not scientific, for if they cannot
be tested they can neither be refuted nor verified.
Second, taking a narrow view of this process would exclude mathematics--a
discipline that attempts to produce its results by logic alone. Yet mathematics not
only provides the language, structure, and tools for systematic investigation, it also
has reasons of its own to be applied to the real world. Mathematics is therefore
inextricably intertwined with all scientific disciplines. Not only can no science exist
without the language of mathematics to describe its investigations, but also the
boundaries between applied mathematics and science are quite unclear.
The term "mathematical sciences" has therefore become common today, and
few people are unhappy with the tendency to regard these disciplines as more a
science than an art. That this acceptance somewhat undermines the working
definition of science given above is of little practical importance to most scientists,
for an exact definition of the field's overall scope has little effect on their specific
work.

Third, whether mathematics is included or not, the definition given has one
serious drawback: the tendency of some to regard scientific knowledge as the
exclusive form of knowing, and to specifically exclude from the "knowable" category
any results obtained by other methods. This philosophy, called logical positivism,
asserts that logic combined with the empirical methods of science forms the only
possible way to know things. It rejects the conclusions of other methods of enquiry
as irrelevant--as not being knowledge.
While this view was widely held in the nineteenth and early twentieth
centuries, and some still try to defend it today, it has lost much of its popularity
among philosophers. It is now realized that scientists do not operate exclusively
within the empirical methods given above. On the contrary, as John Ziman remarks,
"they tend to look for, and find, in Nature little more than they believe to be there,
and yet they construct airier theoretical systems than their actual observations
warrant." (Klemke, p. 35) This may overstate the case somewhat, but it leads to an
important observation regarding the "doing" of science.

Scientists work within a world view.

That is, like everyone else, a scientist brings to the work at hand a framework
of ideas about how things ought to work, a set of conceptions about why the world
behaves the way it does, and a collection of goals that are regarded as desirable to
achieve. All the scientist's work is done within the context of such a world view--it
influences every decision and every step. Because of this, theory has a tendency to
come before the collection of data and not as a result of it. Consequently, data is
often collected under the influence of the theory that is supposed to explain it, and
researchers are naturally inclined to reject data that does not fit. This process is not
dishonest in any way; rather, it is human nature to observe and interpret the world
in terms of what one already believes about it. A shared community world view also
lends a consistency to the voice of science as it speaks to matters of public concern.
Fourth, this consensus in a scientific community tends to be remarkably
broad, monolithic, and very slow to change. Unfortunately, this very consistency can
sometimes hamper objective investigation and make truth harder to discover. The
important insights and discoveries in any field often come about because the world
view expands or changes to allow people to see things in a new way. Those who
make this breakthrough may have a difficult time convincing anyone else to listen
because a changed world view often forces people to re-examine matters previously
considered settled.
Such "new views" are common in the artistic world. Each generation reflects
its world view in its artistic creations, and may fail to communicate with the
previous generation through these forms. Rock music, for instance, expresses a
direct connection with the emotions, a raw "me-ism" that can be incomprehensible
to those who do not share its context. Indeed, it could be argued that it is not the
nature of some music to be comprehended--that both its medium and message are
entirely emotional.
Examples of such paradigm shifts from the history of science include the
heliocentric model of the solar system championed by Galileo, the periodicity of the elements, atomic

theory, radioactivity, Einstein's theories of relativity, and quantum mechanics. All
were ultimately accepted by the scientific community, but each had difficulty at first
due to the radical change in world view required to comprehend the new model.
This explanation is not intended to suggest, as some "new-age" philosophers
do, that a new way of looking at things (a new model or paradigm) changes or
becomes the underlying reality, a sort of ultimate "man is the measure of all
things." Rather, it is to point out that the practice of science does not quite conform
to the general view of its philosophy as expressed in the step-by-step scientific
method. Scientists do assume an underlying reality, but they interpret or filter that
reality, so their results partially depend on the nature of those filters.
Fifth, the results of scientific enquiry are always approximations, subject to
reinterpretation in the light of new data that may be more exact, be collected in a
different manner, or be interpreted with a different world view. There is also the
possibility that new data will overthrow a fraud, a hoax, or a conclusion derived
more from wishful thinking than from careful reasoning. For instance, a technician
being asked to do radioactive dating of some sample might ask the supplier what
range of dates are acceptable, and might not report any test results that fall far
outside this range. The non-conforming results are assumed to be spurious. But what
if there is another view that explains them as part of a consistent data set?
Another complication arises from the fact that many scientific workers engage
in the building of highly speculative theories with little or no connection to actual
data. This habit is particularly widespread among astronomers and others with an
interest in the origins and mechanics of the universe as a whole (i.e., cosmologists).
Such speculation is healthy, because it tends to open up many new lines for
investigation. Scientists must speculate if they are to make any progress at all, for
otherwise they will generate no hypotheses and be unable to do anything. However,
this necessity of practice does illustrate that the boundaries between science and
the more speculative or metaphysical disciplines are not always as sharp as is
generally believed.

The Role of Consensus

These observations on the imprecise aspects of science lead back once again
to the example of its ally, mathematics, for a better idea of what science is, if it is
not just pure logic combined with rigid experimentalism. As mentioned before,
mathematics relies on a community consensus of what is "true"--one that is not
infallible, but that is at least a reliable determinant of what things are part of the
discipline and of what constitutes a properly derived result.
In a way, the existence of this peer consensus is not unlike that of the high-
diving or figure-skating judge who holds up a score card after each performance.
The consensus of the group (i.e., the average scores) becomes the final judgement
on the dive. The determination of what constitutes good science or good
mathematics takes more people and a longer time (perhaps generations). It is
nonetheless the result of a community examination of the work in question, and is a
consensus of its value. It is even possible to quantify this agreement somewhat by
counting the number of times a paper or book is cited positively in the

bibliographies of later works--the higher the number, the more firm the consensus
of worth. In view of the fact that the majority of published "research" papers are
never once cited by anyone, those that are quickly distinguish themselves from the
rest.
Looked at in this way, science loses none of its empiricism, precision or
status--but it is seen as one among many consensual ways of arriving at an
agreement about the way the universe appears to work. On the other hand, this
view does cause science to lose some of the mystique and exclusivity that it has
built up during its 150-year ascendancy over other thinking methods in the West, for
this perspective places it in a continuum of disciplines, blurring the edges between
it and other concepts of truth-seeking.

Other Considerations

The methods of science are also important in a variety of fields (such as
government or economics) where facts need to be gathered and interpreted for the
benefit of the decision-making process. Scientific techniques can be invaluable in
discovering "what is going on." Of course, subsequent decisions will always depend
in part on values and evaluations not provided by the fact gathering process alone.
For instance, as the costs of techniques soar, decisions have to be made
about what research to fund and develop, and what to delay or drop altogether.
How much goes to AIDS research, how much to computing, how much to developing
new fields, to transportation, to the environment, and so on? Such decisions raise
political, economic, and ethical questions that scientific investigation cannot by
itself answer. Moreover, there is no particular reason to believe that the conclusion
about what should be done, when reached by a scientist, is any better or any more
logical than the conclusion reached by a politician, or by the general public. Indeed,
if a scientist takes pride in the belief that only empirical methods produce
knowledge and everything else is erroneous or irrelevant, then the resulting
ignorance of other thought processes, disciplines and people is more likely to
produce a bad decision than a good one. Knowledge, thinking, decisions, and their
consequences are interrelated. Science provides one of many methods of thinking
and of obtaining knowledge; it is most effective when integrated with others as well.
That is:

It is impractical to think in only one way.

One must conclude therefore, that the logical positivists, in seeking to exalt
the scientific method as the only road to knowledge, actually restricted its domain
and made it less useful than it could have been. In short, it is precisely integration
with other ways of thinking that makes scientific methods generally applicable and
practical. Such applications of science, as well as the relationship between science
and technology, are the subject of a later section of this chapter. The next section is
devoted to placing the theory-making of science into the larger context of a
common thinking device.

2.3 The Role of Abstraction
Among the activities of scientists, the forming and manipulating of scientific
theories is important enough to warrant a discussion of its own. Theory formation is
considered by some to be such a unique undertaking that it is the province of a
privileged few and has no parallels in other endeavours. However, like the scientific
method as a whole, theory formation is an example of a broader and relatively
common activity whose exercise is necessary for all citizens in a sufficiently
complex society. Indeed, the ability to propose theories may be necessary for the
formation of such a society in the first place.

What is an abstraction?

The Western Judeo-Christian religious tradition holds that God is capable of
holding in his thoughts all the details of the fine structure of the universe
simultaneously. This limitless knowledge and creative energy brought the universe
into being in the first place, and now holds it together. Although not all agree that
God even exists, much less is omniscient and all-powerful in this sense, no one
seriously believes it possible for any human being to achieve such a universal
awareness. Even mundane and ordinary objects (a chair, a tree, a cow, one human
cell) are sufficiently complex to make such a comprehensive understanding
impossible. It has been centuries since a single human being could have even a
passing acquaintance with all the available academic knowledge, and it is now no
longer possible for any person to comprehend the whole of any single discipline.
Neither is it correct to assume that everything knowable about a given discipline
has already been discovered, or ever will be.
Fortunately, it is not necessary to have such comprehensive knowledge about
something in order to make appropriate use of it. One can enjoy a car ride without
knowing how to drive. It is not necessary to be able to build an automobile in order
to drive one, and not required that the workers building it be able to design it. The
designers need not be able to produce the metals and plastics from which it is
made. None of these must know how to refine the petroleum products required to
run it. Road designers, builders, and mechanics occupy related specialties, and so
do the legislators, sales people, auto company executives, parts manufacturers, and
many others.
Each of these has different priorities for what they must know about the
automobile. For each, there is an essential subset or extract of detail taken from all
that is available to know about the subject. Each views an automobile by focusing
only on the details essential to a particular role, and needs only a cursory
acquaintance with details important to others.
A similar process is at work in the formation of theories by scientists (and
others). Here, it is clearly understood that no object can be comprehended in every
detail down to the sub-atomic. It is the concentration on essentials and the
exclusion of details that makes understanding manageable, and even possible. Such
a process gives a researcher an intellectual handle on the subject that would be
impossible if knowing everything were deemed to be the only adequate kind of
knowledge. It is therefore possible to conceive of something by knowing an
appropriate and sufficient subset of its properties. In this light, it is possible to offer
the following definition:

Abstraction is the process of excluding or digesting details in order to concentrate on essentials.

One aspect of abstraction is deciding which properties are the sufficient
essentials to the task at hand, and which are details that can be ignored. This
decision very much depends on the community within which the abstraction takes
place, for to be useful, an abstraction must not only be communicable, it must be
communicated. If only one person understands it, but cannot transmit its essence to
another, an abstraction has no practical use. Thus, the kinds of abstractions that
come to be widely accepted depend on the level of knowledge and education of the
community for which they are intended. For instance, a solar-system model for
explaining atomic structure is sufficient for those who are not equipped to grasp the
finer points of probability and quantum mechanics, but quite inadequate for
researchers at the frontier of knowledge in the field. Likewise, there are a variety of
models for explaining the workings of a modern economy, and these vary in
complexity and usefulness depending on whose understanding is being addressed.
The needs of most citizens are quite different from those of a politician making a
decision, or those of a professional economist summarizing available information for
that decision.

Other Abstractions

This process of attempting to explain a myriad of detail through an
abstraction of certain broad outlines or essentials is not confined to the sciences or
even to the academic disciplines that attempt to use the scientific method.
Numerous examples are possible from all fields:

o A computer program is an abstraction of a problem solution into a specially
devised symbolic language (notation); a sketch of this idea in code follows the list.
o A chart or graph is an abstraction of data or relationships into pictorial form,
in order to allow them to be visualized, and therefore understood from a different
perspective.
o Words and numbers are symbolic abstractions of specific ideas.
o A language (including a computing notation) is an abstraction designed for
the purpose of communicating other abstractions. It could be termed a meta-
abstraction.
o Whenever someone learns a skill or a trade, the necessary activities and
actions are abstracted from the task details. The skills become automatic, so they
can be exercised without thinking about the details.
o The manufacturing/wholesaling/retailing chain is an abstraction that allows
people to buy goods without having to make their own.
o All job specialization is a type of abstraction that frees people from excess
complication, allows them to concentrate on a small number of useful skills
themselves, and to deal with most of the necessities of life through other specialists
in a similarly abstract manner.
o Money, whether expressed as precious metal, coin, paper, cheque, or
electronically, is an abstraction for the wealth of nations, corporations, and
individuals.
o A representative democratic state is an abstraction that allows individual
input into the governing process without having to consider every detail of every
person's stand on every issue.
o The Judeo-Christian understanding of God is an abstraction for one who is
too complex ever to know entirely.
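
As a concrete illustration of the first point in the list--a program as an
abstraction--consider this minimal Python sketch (all names invented), in which
each layer hides the details of the one beneath it:

    def ignite_fuel():  # detail known only to the engine builder
        return "combustion"

    def run_engine():  # abstracts ignition away
        ignite_fuel()
        return "engine running"

    def drive_to(destination):  # the only interface the "driver" sees
        run_engine()
        return f"arrived at {destination}"

    print(drive_to("town"))  # -> arrived at town

The top-level call would remain unchanged even if every lower layer were
rewritten.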

Thus, far from being the province of academics alone, abstraction is a process
fundamental to all human activity. The totality of the abstractions used by a culture
is an important measure of its complexity. The most sophisticated abstractions are
those that allow people to perform complex tasks without much thought. For
instance, the graphical interface found on modern computers allows the user to
perform very complicated tasks with a minimum of effort (at a higher level of
abstraction) by comparison with the verbal interface found on old-fashioned
machines. Indeed, all computers are tools for high-level problem-solving--they
enable people to make abstractions and avoid detail. Likewise, most industrial
machines (and even bicycles) have to be operated abstractly--at a level of
unconscious skill, for so long as the details must still be thought about, the task
cannot be performed efficiently, if at all. (If you have to think about what you are
doing, you fall off your bicycle.)
While one could criticize the process of abstraction over many levels as
removing people from "real" understanding, it is precisely such distancing that gives
abstractions their power. It is not necessary to understand how cheese is made in
order to enjoy it. Neither is it a prerequisite to know how to make or program a
computer in order to make productive use of it for such tasks as word processing or
data analysis.
These examples illustrate that abstractions are the most useful when they are
far removed from the thing being abstracted; when they have been refined to the
point that they can be usefully employed by most people in an automatic fashion.

Other Names for the Process

So important and pervasive is the process of abstraction that it has a variety
of specialized names arising from different disciplines and from the terminology
adopted by the people who have considered different aspects of this activity.
Some of these equivalent terms are mentioned here, because they are of
importance in later chapters.
A digest is a summary of that portion of data deemed by the one making the
digest to be the most essential. It is an attempt to filter the data, removing the non-
essential, redundant, or irrelevant. For instance, data reported from experiments
are nearly always digested from the entire set obtained; this is necessary for brevity
and clarity.

A model is a representation of something in a more concrete or accessible
form than the original. The term is also used for a scale model of some proposed
project. It conveys the idea of explaining or showing by means of an analogy to
something else that is supposedly better understood (i.e., something for which
adequate abstractions are believed to exist already). The term modelling may be
used by scientists to describe the process of theory formation.
Theory formation is an attempt to abstract into some simple statement the
workings of the subject under study. This term tends to be less concrete than
modelling, for a theory is an attempt to define rather than to model, though in
practice the distinction may be a fine one.
A paradigm is also a way of looking at a subject by way of analogy or
example. It too is a model, but this term tends to be used in a broader sense to
describe abstractions of considerable importance or size (a collection of related
abstractions). One example is the evolutionary paradigm, within which are many
models for origins. Another is the Marxist paradigm of class struggle, to which its
adherents bend all political science and economics.
A meme is a (perhaps indirectly perceived) transmittable idea that is the basis
of a social movement or a political philosophy. Its spread through a population can
be studied in a manner similar to that of an infection, because it is the nature of a
meme to induce the desire to proselytize. A meme can be benevolent (e.g., the
ideals of democracy), fatal to its holders (e.g., the Jim Jones cult beliefs), or fatal to
others (e.g., Nazism and Stalinism).
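
The epidemiological analogy can be made concrete. The following minimal sketch
(a simple discrete-time model with purely illustrative numbers, not drawn from
any study) treats the holders of a meme like the carriers of an infection:

    susceptible, infected, recovered = 990.0, 10.0, 0.0
    population = susceptible + infected + recovered
    transmission, recovery = 0.3, 0.1  # hypothetical rates per time step

    for step in range(1, 11):
        newly_infected = transmission * susceptible * infected / population
        newly_recovered = recovery * infected
        susceptible -= newly_infected
        infected += newly_infected - newly_recovered
        recovered += newly_recovered
        print(f"step {step:2d}: current holders of the meme = {infected:6.1f}")
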
A world view is a complete set of philosophic or religious presuppositions
within which paradigms and individual abstractions are formed. It constitutes the
total way in which a person forms abstractions (thinks) about the real world, and
generally finds its expression within the communities of which the person is a
member. It encompasses the complete set of memes that a person possesses and
spreads. One may speak, for example, of a scientific world view, of a Christian one,
of a liberal one, or of an American one. Within each of these there exist numerous
specific views of parts of the world.
It would also be instructive in this connection to observe that some media
make use of word pictures and of various figures of speech to evoke a much
broader point (poetry is like this; so are many aspects of the Bible). Likewise, other
media make use of visual pictures to convey a broader message (television
commercials are like this). In both cases, a more subtle form of abstraction is being
used to transmit actual ideas that are related to or suggested by the formal
communication.
The mention of some abstraction term, theory title, or world view name
evokes in the hearer a vision of a set of beliefs, views, or typical activities. That
evoked image will invariably be to some degree inadequate or incorrect, especially
if the hearer is not a part of the community that devised or is described by the
abstraction. When such a misconception takes place, it is often because the hearer
holds to some popularly believed ideas about the group in question, in which case
the hearer's own (mistaken) abstraction is called a stereotype.

Thus such words as "fundamentalist" or "immigrant" or "liberal" or "Christian"
will generate in the hearer a collection of related impressions whose meaning
depends on what that person has abstracted under the term in question.
This is not to say that the deconstructionists are correct and that no message has
an absolute meaning; it is only to observe that communication requires agreement
on the meaning of abstractions.
Plants and animals do not make abstractions; this is a uniquely human
activity. Abstractions make thinking and communicating possible. They make it
possible to understand the world and its processes, whether by science or
otherwise. They make it possible to make, to build, to specialize and to cooperate.
They are therefore the essential building blocks, not of science alone, but of human
civilization itself. This section concludes with an attempt to abstract itself:

Abstractions are never the "real" thing, and therein lies both their power and their usefulness.

Abstractions are intellectual creations; they are not discoveries.

Abstractions are approximate and relative perceptions or descriptions, not precise or absolute
realities.

Before looking at how the making of abstractions bears on the meaning of
science, it is instructive to consider also the relationship between theory and
practice.
2.4 Science, Technology and Technique

The Relationship Between Science and Technology

One way of defining what is meant by technology is to view it as the
handmaiden and the child of the doing of science--as the practical adjunct to theory.
In this popular view, science serves as the tool to discover the rules by which the
universe operates, and technology provides the eventual payback for all the
investigative work. This way of looking at the relationship between science and
technology has elements of truth, but can be misleading. It is one thing to create a
model to explain, say, electromagnetism. It is quite another to use the theory to
make an actual product such as a radio, a television, or a computer. The kind of
thinking that goes into applying the principles worked out by scientists to the
making of real products is quite different from that which goes into discovering such
principles in the first place.
This can easily be seen when one realizes that science is essentially an
inductive and theoretical process, wherein one examines many actual instances in
the real world of some assumed underlying order and attempts to find a general
structure for those instances. The development of technology, on the other hand,
involves a deductive or tool-making mentality, by which one derives specific
applications of general principles. Perhaps the best way to distinguish between the
two is to say that science is concerned with why things work, whereas technology is
concerned with how to make something work, that is, how to do something.
The fundamental motivating factors are also very different. Pure science can
be driven by the desire to know, or by intellectual passion, and requires very little
more. The motivation may be pure curiosity; it may be a desire to "think God's
thoughts after him," or it may be to "become like God, knowing all," or it may be
anything between. As in mathematics, pure science may have an inner cry to be
applied (the cry may come from a funding agency), but the researcher need not be
personally interested in such aspects. Work in basic science can be done for the
same reason that climbers scale Mt. Everest--the challenge is simply there.
On the other hand, the drive to build tools (technology) comes from the need
for better and more efficient ways to get a job done. People innovate to better feed
themselves, to defend themselves from attack, to become more effective
aggressors, or to gain some other competitive advantage. They build higher, faster,
wider, cheaper, and more beautifully than the last person and what they have built
fulfils a need and may increase their wealth. They may even do it to help other
people achieve their full potential, or because they believe that God ought to be
honoured in the full use of their talents to benefit others. They may not even be
able to articulate a reason why they build, except to say that they enjoy tinkering.
One other difference between the two should be noted, and that has to do
with methodology. Since technology is required even in the absence of scientific
knowledge, it often uses trial-and-error methods. For instance, it is difficult to
predict what the physical properties of an alloy will be just by knowing those of the
metals to be mixed. The constant search for lighter, stronger, or more ductile alloys
cannot wait for science to provide a working model to explain what will happen
when a given collection of metals is mixed in specified proportions, for such a
theory is a long way behind the need. Rather, metallurgists actually mix different
combinations and then test the properties of the alloys they produce. They may use
only general rules of thumb based on past experience and not require a unifying
theory. This procedure may lack pure theoretical beauty, but it gets the job done,
and that is what technology is all about.
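
A minimal Python sketch of this trial-and-error technique follows; the
strength() function is an invented stand-in for testing a sample in the
laboratory, not a real materials model:

    def strength(iron, carbon, chromium):
        # Hypothetical formula standing in for a physical test.
        return 100 * iron + 250 * carbon + 180 * chromium - 300 * carbon ** 2

    best, best_mix = float("-inf"), None
    fractions = [i / 20 for i in range(21)]  # candidate proportions 0.00 to 1.00
    for carbon in fractions:
        for chromium in fractions:
            iron = 1.0 - carbon - chromium  # the proportions must sum to one
            if iron < 0:
                continue
            s = strength(iron, carbon, chromium)
            if s > best:
                best, best_mix = s, (iron, carbon, chromium)

    print("best mix (Fe, C, Cr):", best_mix, "-> strength", round(best, 1))

No unifying theory is consulted; the mixtures are simply tried and the best
result kept, much as the rule-of-thumb metallurgist proceeds.
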
Because many of the technological advances of this century have depended
on science, it is easy to forget that the creation of tools goes on independently of
science--even (to a great extent) in its absence. Moreover, each set of tools or
machines has the potential when once manufactured to enable the building of
others of a higher order--and to do this even before the first set is understood
clearly. Yet, the industrial age has seen a phenomenally successful partnership
between research science and engineering, and to a considerable extent the nature
and goals of science have come to be dictated as much by the needs for new
technologies as by pure curiosity. Pure science has become interwoven with its
applications and the two can no longer be completely separated. Indeed, it may no
longer be accurate to distinguish pure science from applied science, because the
separation does not really exist in practice. Nowhere is this relationship more
evident than in the technology parks adjacent to many North American universities.
Perhaps the best known of these is the Silicon Valley area of California, which owes
its existence to nearby Stanford University and the many professors and former
students who have successfully turned their knowledge into products and cash. The
same thing is now taking place in biochemistry, where academics are racing to turn
a profit by transforming their research into marketable pharmaceuticals.
Likewise, the U. S. space program has generated large numbers of
commercial spin-offs to the mass market. These technologies were developed
initially for conditions of zero gravity, extreme temperature, high stress, and limited
mass or size, and had ultrahigh reliability requirements, but quickly found uses in
more mundane environments as well.
All technical advances in these fields (computing, biochemistry, space) have
had consequences for a wide range of marketable products. The same comment
can be made for military technologies, for the entire aerospace industry has grown
as it has largely because of the impetus provided by the needs of two world wars.
The observation can be repeated for almost any research or technology. Thus, pure
research and pure invention do not exist alone and entire to themselves. Each
inevitably affects the other and reflects back onto itself. Answers generate both
products and new questions. Here, the interdependence principle could be stated:

There are no such things as pure theories, or pure applications.

Technology in its Own Right

However, the relationship between science and technology goes far beyond
the fact that one is inductive and creates abstractions and the other is deductive
and generates concrete results, for science as it is now known is only a few
centuries old, whereas technology has been around at least since the first person
thought of throwing a stone. It is not hard to argue that technology gave birth to
science by providing a critical mass of industrial tools and complex processes that
could only be understood and carried to the next step of their development by
inventing the exacting analytical techniques called science. Viewed in this way,
science could be regarded as the tool of technology rather than the other way
around.
In fact, if the definition of technology is broadened in the manner of Jacques
Ellul to include all systematic techniques--all searches for the most efficient way of
doing--then the scientific method itself is actually one example of a technique. As a
technique, it is subject to being studied for its own sake, and to being modified in
order to become more efficient. Seen in this light, scientific enquiries take place
under the control of one out of many possible techniques of thinking. They do not so
much generate products from theory as they apply a practical methodology
themselves. This concept is even more evident when one considers that scientific
investigations themselves almost always require tools other than simply the
particular mental discipline known as the scientific method. Whether the device is
the mass spectrometer of the chemist, the meson machine of the physicist, or the
computer employed by the molecular biologist to map genetic structures, there is
always a level of co-requisite technology without which the particular science
cannot be performed. Indeed, it becomes increasingly difficult to speak of the
science without the technology that is required to do the work. Moreover, there may
well be more efficient techniques to pursue a given line of enquiry. There may even
be a better way to do what is now called science as a whole. Such superior
techniques may not yet even be known to exist, but the point is that it cannot be proven that modern
science is the most efficient possible technique of its kind.
Furthermore, just as science and technology drive each other, and their
modern versions could scarcely exist without each other, each technological
advance drives others. That is, just as no scientific discovery is without its
implications to technology (and vice versa), the same is true of new products and
techniques themselves--none exists alone or is without a broader influence. Some
examples include:

o The development of reliable pumps made it possible to mine the deep
seams of coal underlying much of Britain, one of the prerequisites for the industrial
revolution.
o The burning of coal eventually forced the creation of scrubbing technology
for cleaning emissions.
o The development of steel made possible a wide range of machinery,
instruments, and consumer goods that could not have been foreseen by those who
made the first alloys of iron and carbon.
o The World War II German rocket program led directly to today's ICBMs and
also to space exploration technology.
o Radio led to television, and the demands of both led to communication
satellites.
o The growing complexity of telephone systems required automatic switching
systems and eventually computers.
o The modern microcomputer was made possible by a number of inventions,
most notably those of the vacuum tube, the transistor, and the integrated circuit. It
in turn has spawned new products, disciplines, and whole industries.

Examples of this sort of thing could be multiplied for product development
alone; they lead to two more statements of the interdependence principle:

It is impossible to discover one thing

and

It is impossible to make one thing.

That one application drives another explains why in the long run the overall
growth of technology is exponential, even though any one application reaches
natural limits, perhaps in a relatively short time. Consider transportation
technology, for instance, and its progression through walking, riding, sailing, driving,
and flying, until achieving space travel by rocket. Each of these on its own imposes
a natural upper limit on speed, but the need to travel farther and faster forces new
transportation technology to be developed. The theoretical limit on rocket speed is
some substantial fraction of the speed of light; the most optimistic of science fiction
writers take it for granted that a new technique of transportation (warp speed) will
eventually be developed to get around this barrier. More conservative voices assert
that this is impossible, and it appears to be so from most theoretical and practical
considerations. However, such voices have been heard before--the horseless
carriage, the aeroplane, the moon rocket, and the personal computer were all
impossible until they were done. These examples may serve to illustrate an
important fact of both science and technology that may be termed the
incompleteness principle. It applies to all knowers with the exception of an all-
knowing God.

For any field of study or application, it is either impossible to know everything, or it is impossible to
know when everything is known.

or, to put it another way

No body of knowledge can ever be known to be complete, and no technology can be absolutely
known to be the most efficient possible.
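
Returning to the transportation example: the claim that overall growth remains
exponential even while each individual technology saturates can be sketched
numerically. In this Python illustration (all ceilings, dates, and rates
invented), each technology follows a logistic S-curve to its own limit, yet the
combined total keeps climbing by roughly a factor of ten per period:

    import math

    def s_curve(t, start, limit, rate=1.0):
        # Logistic growth of one technology beginning near time `start`.
        return limit / (1 + math.exp(-rate * (t - start - 5)))

    # Successive technologies, each arriving later with a higher ceiling.
    technologies = [(0, 10), (10, 100), (20, 1000), (30, 10000)]

    for t in range(0, 41, 5):
        total = sum(s_curve(t, start, limit) for start, limit in technologies)
        print(f"t = {t:2d}: combined capability = {total:10.1f}")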

Technique

Broadening the notion of technology in order to view the scientific method as
one in a spectrum of techniques has other consequences as well. If technique is the
search for efficient methods as well as for efficient devices, then one may suppose
that virtually every discipline has techniques better suited to that field than to
others. This supposition leads to the further insight that the best techniques of
management or the study of sociology may resemble scientific technique, but do
not have to correspond exactly to it. Indeed, one ceases to expect that all technique
must be of the scientific kind, for efficiency will surely be related to the nature of
the field, rather than to theoretical considerations. Thus, it makes sense to speak of
techniques of economics, politics, management, advertising, communicating,
teaching, and of clear thinking (logic). One can also suppose that such techniques
lead to efficient methodologies in each of these areas, without having to apply
the label "scientific" to them.
Jacques Ellul observed that every field of human endeavour can be assumed
to be subject to the search for technique. As techniques develop, he observed, they
do so in the most efficient manner available, reducing the number of choices for
method, and tending to become rigid and authoritarian, admitting of no exceptions
because of the claim to be the most efficient. He saw the end result of this
progression of technique to be an amorphous totalitarian society with no individual
choice at all (everyone would of necessity always do the most efficient thing).

However, there was a factor that Ellul did not in his pessimism consider--the
incompleteness principle. What if some other path were followed from the start?
Could not a different "most efficient" point have been reached? How would anyone
know that such a point had in fact been reached?
Clearly, it is not possible to know when the ultimate efficiency possible has
been achieved in any field. It may be reached for a given technique applied in a
particular way, but there may be other techniques with vastly different results. The
high technology explosion in so many fields simultaneously illustrates this better
than any theory. The view of the 1950s, like that of the 1890s, was that certain
ultimate goals for both scientific knowledge and technological efficiency were close
at hand. This view cannot any longer be sustained. It is being replaced by a more
open-ended thinking that does not suppose that any state of equilibrium (in the
sense of an ultimate technique) must necessarily ever be reached in either product
development or in the potential application of technique--even to the social
sciences.
To put this concept another way, suppose humankind was indeed created in
the image of an omniscient and transcendent God. The process of learning may still
be at the stage of the infant who makes piles of someone else's blocks and then
knocks them over. Children naturally believe that they know everything, and are
constantly amazed to discover that they do not. The principle of incompleteness is
worth restating in these new terms:

No technique can ever be known to be ultimate, the best possible or
universally applicable to all situations and cultures. All are open-ended.

Summary

The popular conception that science discovers and technology applies
reverses the dependency of the two. Technique (efficient methodology)
encompasses both science (one technique) and what is commonly called technology
or engineering (efficient product development). It is also incorrect to assume that at
any given time the most efficient methods have been discovered--or even that an
optimum technique for something exists at all.
These insights assist in more properly placing science and technology within a
spectrum of related human activities, demythologizing them to an extent, and
partly removing the notion that technique irresistibly and inevitably progresses to
all-encompassing and dignity-destroying final goals. They lead to a more open-
ended and continually changing scenario for the future. They also lead to a more
realistic view of the practice and practitioners of science and technological
development.

Profile on . . . Society and Technology

The Telephone

What is it for? March 10, 1876: Alexander Graham Bell becomes the first
person to transmit speech electrically. The powerful telegraph companies, seeing no
business applications, refuse to have anything to do with the "electrical toy." Even
its inventors seem at first not to know what to do with the new machine.

A new occupation: Early telephone subscribers were connected to one
another's lines by central operators. Since they could (and usually did) listen to the
conversations, operators became powerful and important in their communities, for
they were the primary information clearing houses.

Women and the telephone company: Early operators were usually well-
educated single women with a status comparable to school teachers. They were
well cared for, but generally required to leave upon marriage and few entered
management. However, the sheer size of this work force contributed to the
acceptance of women working outside the home.

Depersonalization: As exchanges grew in large cities, it was no longer
possible for operators to know their customers. They became detached and
impersonal handlers of routine switching chores, many of which were ultimately
taken over by automated machinery. Today, even the operator's voice is
synthesized.

An information medium: The early practice of transmitting concerts and
sermons to homes and hospitals became the forerunner of similar entertainments
on radio and television. It was no longer necessary to go to an event to experience
the pleasure of having attended.

Business practice: At first, the telephone was seen not as a social
medium, but as a tool for conducting business. For instance, installed at resorts, it
allowed businessmen to keep in touch with their offices. Cellular telephones and
facsimile allow instant communications anywhere. Large businesses can be
cohesive, and small ones can compete using telephone technology.

Urban development: The suburbs and the upper floors of high buildings were
not practical as locations for doing business before the telephone. It has contributed
to the growth of cities both upward and outward.

Old technologies obsoleted: Telegraph usage, which peaked in the late 1920s
and again in the mid-forties, declined steadily thereafter. Today, use of the
telephone is growing even as the volume of first-class mail declines.

Better services: The telephone permitted the creation of efficient emergency
services over large areas. Medical aid, firefighting, and policing all improved
dramatically because of the ability to communicate requests for help quickly.

Crime: The telephone enabled new forms of crime. Prostitutes became call
girls. Obscene calls became a problem. Gambling networks became more
widespread. Wire-tapping became a new kind of crime and a new method of law
enforcement.

Environmental issues: From very early on, complaints were often heard that
wires, poles, and towers were disfiguring the countryside. Today, automated calling
equipment allows the individual's personal environment to be invaded by junk
phone calls.

It changes social behaviour: If two people are talking and a third enters the
room, the newcomer must wait for the chance to talk. If instead the third calls on
the telephone, most people cannot ignore the demand and will drop whatever they
are doing to answer immediately.

It is difficult to regulate fairly.


(1) How is a fair rate for service determined? Flat fees give business and other
high volume users a quantity discount, causing home users to subsidize them. On
the other hand, metering local calls requires more equipment and raises the rates
for everyone. (A small worked example follows this list.)
(2) How are costs and fees split properly between long distance and local
service? This is especially hard to determine when two or more companies are
involved.
(3) Should telephone service be a monopoly so as to ensure greatest
efficiency and uniformity of service? Or should it be competitive, so as to ensure the
lowest prices?
(4) Should telephone service be closely regulated as an essential public utility,
or should free competition be allowed? Which is most in the public interest?
(5) In either case, should it be government owned or private?
(6) Should all long distance directory service calls be free? Credit bureaus are
heavy users of this service, reasoning that a phone listing is an indicator of
creditworthiness. These are commercial operations, yet they pay nothing to use this
service.
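
As promised in item (1), a small worked example (with invented rates and call
volumes) shows the cross-subsidy that a flat fee creates:

    flat_fee = 30.00  # dollars per month regardless of usage (hypothetical)
    metered_rate = 0.05  # dollars per local call if metered (hypothetical)

    for user, calls in [("home user", 100), ("business", 2000)]:
        print(f"{user:9s}: {calls:4d} calls -> "
              f"flat: ${flat_fee:.2f} (${flat_fee / calls:.3f}/call); "
              f"metered: ${metered_rate * calls:.2f}")

Under the flat fee the home user here pays twenty times as much per call as the
business; under metering each pays for the load imposed, but at the cost of the
extra measuring equipment.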

The telephone changes society.


(1) It is an instrument for organizing and socializing people.
(2) It converges space and time, making rapid communications with remote
places as effective as those next door.
(3) Mail order shopping became important. "Let your fingers do the walking" is
not simply an advertising slogan, but a new way of life.

The telephone spawns new technologies:


(1) Demand for long distance and transatlantic service gave rise to copper
wire, undersea cables, microwave transmission, and then to satellite transponders.

(2) Demand for new services produces picture telephones (not widely used),
improved facsimile service, cellular telephones, and phone company sponsored
information networks.

Telephones empower the individual.


(1) They are sophisticated, but anyone can operate them.
(2) They create mobility, allowing people to find and apply for jobs at remote
locations.
(3) They provide access to information stored in distant computing systems.
(4) The telephone system guarantees that

Everyone is connected to everyone else.

2.5 Science and Technology--Practice and Practitioners


It is important to realize that even as science and engineering are disciplines
(techniques) like any other, their practitioners are people. They are therefore
subject to the same failings of jealousy, narrow-mindedness, pride, error, and even
fraud as those in any other field. Since this is a book on issues, a brief discussion of
some of the problems in the practice of science and the pursuit of technology is in
order.

World View and Scientific Debates

First, consider how pride and narrow views give rise to debates and
disagreements even in what are regarded as exact sciences. As has already been
pointed out, no scientist or engineer works independently of an internal
metaphysical framework or world view. Every step in the application of any
technique (including the scientific one) demands that judgements be made, and
these can at best be only relatively objective.
World views, whether cultural or personal, are non-unique concepts, so individuals see
things differently. If the internal thinking framework of any two scientists or
engineers (or any two people at all) were absolutely identical, then one of them
would be redundant. For instance, if the reader agrees with everything that is said
in this book, then clearly the author is unnecessary. The non-uniqueness of world
views means that different people choose different specialities for study in the first
place. It also means that two specialists in the same field may place entirely
different interpretations on the same set of data, may expound varying or even
contradictory theories, or may develop quite different products or applications from
the same theoretical base.
Indeed, the divergence may begin sooner in the process. The decision to
accept or reject certain data (or to seek it in the first place) is not necessarily
scientific or logical--rejection may occur when the data fails to "fit" the
preconceptions of the researcher. Armed with competing theories and possibly
differing data, two factions of the scientific community may seek to line up
institutional and individual support, particularly among the so-called scientific
celebrities. If the question is actually (or appears to be) decidable, one side (or
some third party differing from both) can eventually emerge from the ensuing
debate as temporary victor.
Some of the most controversial discussions take place when the issue is not
decidable, for either intrinsic or extrinsic reasons. If the problem is extrinsic, such as
the lack of technology for testing purposes, there is still hope for an eventual
solution. One of the best modern examples is relativity theory, many facets of which
were not at first amenable to investigation in the physical sense. As the years
passed, new techniques permitted experiments not previously possible, and the
general theory of relativity came to be universally accepted as experimental results
matched theory.
However, if the undecidability is due to intrinsic reasons, that is, the theory
itself is of a metaphysical or otherwise unprovable nature, then debates will rage
indefinitely. There are not always definitive and acceptable ways to answer non-
scientific questions through the use of science, whether or not it happens to be
scientists asking the questions.

Questions About Origins

This will certainly be the case when the two sides are arguing, say, about
events that took place in the Earth's past. It is impossible to prove or disprove in
any absolute sense many assertions concerning prehistoric times. Indeed, historians
cannot always agree on the facts concerning recent events, much less on their
interpretation, so one should not expect agreement on questions of prehistoric
events. This is particularly true where questions of the origin of the universe are
concerned, and each generation of modern scientists has adopted quite a different
cosmological model, defended it, and taught it as fact, only to have it replaced at a
later date. The confidence of the scientific modeller rises if the model correctly
predicts things that were not used to build the model, but the inability to test its
main premises experimentally means that this confidence will always be partly of
the faith kind. There is no safer prediction about the future of scientific theories
than that the widely accepted big-bang theory of the universe's origin will
eventually be replaced. In such cases, the new model must be able to explain
everything that the old one could, as well as resolving inconsistencies of the old.
A person could object that acceptance of any strictly mechanistic model for
origins, especially one acknowledged to be incomplete and temporary, is of such a
different degree than faith in a creator God that the two are not really comparable.
Some find this objection attractive, but its analysis may be superficial, for it does
not take into account the level at which the belief systems operate.
Is it a particular mechanism or the necessity to explain origins mechanistically
that is the subject of faith? The former may be a holding position pending suitable
confirmation of detail and possible modification; the latter may represent a
fundamental and non-negotiable philosophical position. If that which is believed in is
a universe presupposed to be mechanistic and without an intelligent planner, then
the mechanism currently accepted is mere window dressing for a deeper faith--one
that always insists upon a materialistic explanation for origins, regardless of
evidence. At this level, the two faiths (in a creator God, and in natural origins) would
be indistinguishable, though they appear different when considering specific details
(such as mechanism) rather than the broad presuppositions behind them. The
motivation of an affirmer of beliefs is as telling as the details of the belief held.
Evolutionary biology provides a second (related) example of an issue that is
not decidable for intrinsic reasons. Conclusions about the biological past will always
be tentative, describing what might have or could have happened, with backing
from empirical evidence resting on interpretations of data more than on the data
itself. Even the evidence gained by comparing the genetic material of organisms
catalogues relationships descriptively, not historically, and sheds no light on
whether they came about by chance or by design. It is easy to confidently assume
that new discoveries will support some current theory of biological evolution. Some
such discoveries may well be made, but different confident assumptions might
produce their own supporting evidence.
It is important to note that in both cases, it is not the discussion of specific
mechanisms that is likely to have a metaphysical flavour (though it may). To see if
that aspect is present, one must enquire deeper and determine whether the
individual is a priori committed to philosophical presuppositions demanding specific
categories of interpretations for origins and life and cannot conceive of alternatives.
Such a prior determination is likely to be the case for most people for whom such
questions are important. That is, if a person self-describes as "creationist" or
"evolutionist," a commitment to a philosophical position is being expressed that
underlies any specific scientific thought or investigation. It is here, and not in the
work the person may do, that the question of metaphysics arises.
For further discussion of the radically differing views on the subject of creation
and evolution, the reader is referred on the one hand to the Usenet newsgroup
"Talk.origins" and related homepages, and on the other to materials maintained by
such organizations as the Creation Research Society. These two present the poles of
thought, while other groups attempt to find middle ground. The reader may wish to
investigate whether such groups even mention their metaphysical presuppositions,
much less discuss them in any detail.

Questions Requiring the Use of Models

Similar situations can also arise if the objects under study are too small or too
fast to see directly and can only be described by reference to a model for their
behaviour (e.g., the wave/particle nature of light and the actions of subatomic
particles). In such cases, competing theories or models sometimes arise to explain
the same phenomena and it may be the case that the two (or more) sides forget
that they are arguing not about science, but about interpretations, that is, (possibly)
about metaphysics. Indeed, modern physics is as much concerned with philosophy
as it is with anything else, and sometimes has difficulty attaching meanings to the
terms it employs to describe the phenomena it investigates. The world that is
ordinarily seen by people in the everyday sense is not obviously related to the one
seen by the physicist, an example of the fact that physical knowledge does not
convey the thing itself, however useful an abstraction it may be.

The Case of Theology

Another example of the non-uniqueness of world views can be seen in the
answers various people would give to the question: "Is theology a science, or is it
entirely metaphysical?" This may seem like an obsolete question to ask, for the
majority view among educated people today would almost certainly be that
theology has no connection with science whatsoever. However, this is a new
consensus, for just as mathematics was historically Queen of the Arts, so theology
was Queen of the Sciences. To the practitioner, theology is the systematic study of
a body of factual information--which, being revealed by the deity, is no less
than if derived from a microscope slide. The receipt of this information from another
(instead of by personal observation) is not regarded as a problem, in view of what is
regarded as well-attested source reliability. Theologians observe that people in all
fields accept a great deal of information as factual in much the same way; the logic
of so doing is not different, though the nature of the source is. For example, no
scientist verifies the entire body of prerequisite knowledge before carrying on with
the next experiment; to do so would be considered absurd. Thus, the study of God
may begin with a faith affirmation, but it continues with a partially empirical,
scientific, and therefore fallible study called theology--one that differs in subject
matter but shares some methodology (technique) with other sciences.
To the typical modern scientist, who accepts the control belief of materialism
and leans toward logical positivism, such a definition of theology as akin to science
would be objectionable. There is a tendency to think that unless data can be
personally verified, it is unacceptable. Unless a theory is at least potentially
falsifiable by empirical means, it is not scientific. One could even say that since in
death the senses are left behind, empirical methods cannot be extended across the
gulf of the grave, even if one believes in life after death. There do remain the
methods of history and related disciplines, but in these the evidence itself, not just
the interpretation of it, may be selectively disputed, especially if the event is far
enough in the past.
For instance, one may accept documentary (and other) evidence that one's
own great-grandparents existed, though never having met them. The evidence is
compelling, though not strictly the result of repeatable experiments. It is easier to
dispute the validity of documentation for events and people farther in the past,
particularly if others' interpretation of those events does not accord with one's
preferred world view. Thus, some accept the Bible as a historically accurate
document collection describing the actions of God in history, while others selectively
dismiss all or portions as myth or fabrication. If even the evidence of history can be
so disputed, there is certainly no way to personally use science to verify or falsify
claims about the existence of God.
From a historical point of view this thinking is rather new. Scientists such as
Kepler, Bacon, Newton, Boyle, Fleming, Maxwell, Faraday, Joule, Davy, Pasteur,
Kelvin, Pascal, and a long list of others of past centuries "did" science because of
their deep-seated belief that they could discover more about God by unfolding the
nature of the universe that he had created. Indeed, few of the originators and
builders of what has become today's science would be comfortable with the
philosophical orientation of their heirs. Their world view was significantly different
from that of the moderns. Though they might rejoice at the progress made in the
fields they began, they would probably consider the move to a materialist
metaphysical basis to be costly.
Of course, one could object that an appeal to the theological views of past
scientists is invalid, regardless of how popular these views were--after all, they were
a product of a cultural world view. The objection is valid, but it must not be
overlooked that the same objection can be applied to any appeal to the uniformity
of world view and metaphysics of today's scientific world. Consensus in any age is
not necessarily evidence of absolute truth. Moreover, the modern scientific
community recognizes the greatness of the science that was done in the past,
despite the fact that it was accomplished within a different metaphysical
framework. Why then do points of difference among today's scientists result in so
very much hostility and acrimonious debate? Even today, excellent science can
legitimately arise from within the framework of a minority world view. One could
even argue that it must do so, in order to achieve the paradigm shifts that are
required to make the great breakthroughs. Moreover, religions that speak of a life
beyond death generally hold that there are also some senses that can be used
there. Thus, the argument that God's existence can never be verified or falsified is
not yet proven, for one must presumably die to the empirical world to find the
answer.
The kind of peer pressure and search for consensus discussed here can have
another, and more subtle effect. Academics are rightly conscious of the need for
their work to stand on the shoulders of those who have gone before, and so they
adorn their own reports with quotations from others so as to lend their own
conclusions support. If such quoting is done with due respect to the whole context
of the original, it is not only correct, but to some extent necessary. However, there
is always the possibility that the mutual respect of a small number of workers in a
field may generate circular quotations of one another and these may create an
impression of far greater authority than what actually exists. As Nellie Hacker said
in the seminar: "If I quote you, and you quote me, who is any the wiser?"
These issues will be picked up again in a later chapter with a more detailed
consideration of the creation/evolution debate--in some ways the classic clash of
world views. For now, it will suffice to make the point that the human element in
science removes some of its reputed precision, exactness, and reproducibility to the
theoretical realm. In practice, things don't happen in quite as orderly a way as they
are supposed to.

Publish or Perish

Another pressure on the practitioners of science is caused by the need for
them to prove themselves by getting some results accepted for publication in
recognized journals. A book placed with a reputable publisher counts for even more,
and two books may even be worth a promotion. In the case of technology-driven
research, working prototypes, patents, and production models determine success.
Part of the reason for this is the tenure system used by the universities where
most North American scientists do their work. Following the research that leads to a
doctoral thesis (duly defended before peers) and the conferring of the degree at
graduation, the new academic seeks to become attached to the faculty of a
reputable university. If successful, a probationary appointment is given that may be
renewed for up to four or five years. At that time the candidate's research output is
measured by the number of books and papers published. If the level is acceptable, a
permanent contract (tenure) is offered; if it is not, and a second review a year later
offers nothing better, the unfortunate would-be professor is instead terminated.
Teaching ability is not usually a major issue.
In most cases, denial of tenure status at one university ends the research
career entirely because a second chance at another institution is very unlikely to be
given. The (now ex-) academic can either find a position in industry, teach at a
community college or high school or chalk the degree papers up to experience and
find another line of work. For those who do become a part of the academic system a
continued high paper production level is required for consideration of promotion
from assistant to associate to full professor, and even more important, for the
acquisition of research grants from governments and private foundations.
There are a number of consequences of this system that are not very positive.
First, this practice fails to take into account that research in some areas is
much more difficult than in others and may take far more time to produce new
results. There is, therefore, pressure to stay away from such fields and concentrate
on those where answers can be obtained quickly. This increases the volume of
research papers greatly, but reduces the likelihood that any one of them will be
very memorable. It is questionable whether anyone reads the majority of such
reports once the journal editor is finished with them and officially puts them into
print. As the majority are never cited by anyone else, it seems likely many are never
read either.
Second, it fails to take into account that some papers are more publishable
than others because they are trendier. A mediocre work on a subject that happens
to be of current interest is much more likely to be published than a better work in a
more obscure area. For instance, it would be much easier to publish work on AIDS,
superconductivity, or cold fusion than on tuberculosis, the properties of naphthalene,
or heat engine efficiency. It is all but impossible to publish a substantial critique of a
majority interpretation on an important issue. There is nothing either morally wrong
or deceitful about this; it is just the human side of science showing through. This
kind of bias causes fads to be accentuated even more, but also dilutes the overall
quality of the work.

Third, it fails to take into account that money and influence speak more loudly
than other voices. Senior faculty can pressure their more junior compatriots into
their own fields and away from innovative ideas because they control tenure and
promotion committees. Women, blacks, and those who attempt to cross cultural or
religious boundaries can be systematically kept out of the system. Funding
agencies, particularly those under government control, can cut off grants for
political or military reasons and thus can also channel research according to their
desires. The result is that free and open enquiry is reduced and so is creativity. The
progress of new and innovative work must wait for the rare junior researcher not
only to become senior (and a funding referee), but to simultaneously retain some
creative spark. In the meanwhile, most research will be done in teams with agendas
defined by others. The risk of funding individuals is too great, no matter how
talented they may be.
Again, none of this is unique to science, for the dead hand of bureaucracy
reaches everywhere. Such problems are characteristic of any institution; that they
would eventually reach the scientific community was a foregone conclusion. For
example, the Soviet Union produced more university-trained scientists and
engineers per capita than any other nation. Yet, it struggled to catch up to the
United States in the quality of basic research and technology. Why? Because the
Soviet Union was also run by the largest bureaucracy on earth, and gaining approval
for a scientific project was even more difficult than in North America. By the time
the research had been allowed, the results may already have been in some
American journal. By the time a technological development was permitted, it might
have been cheaper to buy it in a New York surplus store than to build one from
scratch.
An old story with many variations illustrates the difficulty of developing new
technology in the former Soviet Union.
The noted Russian engineer Ivan Fedorvich arrives in Fort Langley, British
Columbia, to visit his old friend and correspondent Stan Barker. Upon arrival at his
house, Dr. Barker expresses interest in his visitor's watch. Fedorvich's face lights up
as he tells him it is not just a wrist watch, but also a computer, radio, data terminal,
and television all rolled into one. It has built-in voice recognition, a gigabyte of
memory, a programmable alarm clock, and even a miniature satellite dish in its
concave crystal--a veritable triumph of socialist engineering. "And what," asks
Barker, pointing to the large suitcase Fedorvich is carrying with considerable
difficulty, "is that?" "Ah," says Fedorvich ruefully, "the Politburo insisted on using
Russian batteries."

Finally, and related to these other factors, the pressure to publish at any cost
encourages scientists to find quick and easy solutions, to take short-cuts, and to
stay with traditional ideas and methodology. The safe and familiar can become so
comfortable that the scholarly apparatus begins to substitute for thought. The
watchword is "don't rock the boat," and this attitude, while it may get papers
published, does nothing to advance science.
Not all is wrong with the academic system, however, or it would not have
lasted as long as it has. It ensures new work is reviewed by peers on editorial
boards before being published, and serves as a check on very bad papers. It creates
a sense of community and a kind of apprenticeship for entry into the community,
ensuring that new applicants do have at least certain minimum qualifications. If the
system does promote mediocrity, it also promotes volume, and every bit of
knowledge, however small, pushes back the frontiers of human ignorance. Every
once in a while, a truly great insight is achieved, and the spin-off benefits from that
one-in-ten-thousand paper are incalculable.
Similar observations can be made about those engaged in technological
development, where building the wrong product or targeting it to the wrong market
may destroy both product and career. Far more devices and methods are created
than will ever see the marketplace, but the vast amount of activity does guarantee
that some revolutionary new products are developed, even though some good ones
never see the light of day.

Funding Pressures

It has already been remarked that many decisions for both basic research and
technological development are made on the basis of grants available from a variety
of funding agencies. It is worth observing further that the largest portion of this
money tends to come from government, if only because the size of some projects is
far too great for any private means. Specifically, many of the projects so funded are
likely to be sponsored by the military. Thus, political and military considerations
have the largest say in the direction of research, increasing the direct and indirect
control of the state over the technology that shapes society. More will be said in a
later chapter about the role of the state; the mere observation of its control over the
purse strings is sufficient for now. Like some of the other things noted in this
chapter, it leads to the conclusion that human, political, and economic factors, more
than curiosity or actual need, may often dominate selection and development in the
scientific/technological process.

Other Problems

It is also natural that the kinds of pressures indicated above will lead to
serious problems from time to time. Thus, science has not been without the
occasional scandal caused by fraudulent data, wishful thinking, fanciful conclusions
or hoax. In the celebrated case of the supposed ape/human "missing link" known as
Piltdown Man, a hoax got out of hand, and what was apparently intended as the
deception of a single individual continued to delude the entire scientific community
for years. In the case of Nebraska Man, another putative missing link, what turned
out to be the tooth of an extinct pig had at first an entire fanciful proto-man built
around it, complete with life-style to match. On the other side of the same debate,
far too much was made of some human-like footprints that appeared in the same
strata as those of dinosaurs. Time and due consideration led the people involved to
withdraw their original suggestions and reclassify the prints. More recently, there
have been a few dismissals of researchers who had been caught fabricating data to
maintain their standing with a high publication output.
There is also a large speculative content in some disciplines, and this too can
generate much discussion about very little. In such cases, the mere repetition of
speculation by enough of the leaders in the discipline is sufficient to have others
accept it as fact. This is a foible of scholarship that must be lived with, for it too is
human nature. However, it is one of the most subtle of difficulties to deal with,
because the generation that accepts speculation as fact is unlikely to tolerate
challenges to that dogma, and it may take a great deal of time to shift the discipline
in question to a different view.
All of these instances reveal the human tendency of scientists to see only
what they expect to see, and to continue to do so long after the means is available
to correct their misconceptions. Time is the best remedy for such problems. It also
helps to have a general determination to test occasionally even the most basic,
fundamental and longest held assumptions, just in case the universal faith in them
has been ever so slightly misplaced.
Faith is often placed in people, too, and science, like any other field of
interest, has its few celebrities among the many foot-soldiers. This can be a positive
thing, for such individuals are usually the ones who have the charisma and public
presence to sell the discipline to a sometimes skeptical and usually demanding
public. Celebrities can also mislead, however, particularly when they are presumed
to be experts on all of science and asked to express views to the public on matters
far from their own narrow field of expertise.
Thus, for instance, a book by a celebrity scientist on investment strategies,
playing golf, or understanding the Bible might sell very well, despite the writer being
entirely unqualified in the subject at hand. This is not only a problem with science,
but one it shares in common with the entire "star" system so prevalent in North
America. A realization that the eminent chemist Dr. Zork is plain Mrs. Zork outside
her own field would be healthy for all concerned.
Thus, if one is to ask why some study is undertaken, why some product is
built, why some technique is developed, one would not necessarily find the answer
within the nature of the discipline, but might find it in the society in which the
discipline is pursued. This is true of both the society of specialist practitioners, and
also of the larger culture from which they come. In turn, science, technology, and
technique change the societal context in which they develop, and new ideas become
feasible when such changes take place, for they enable all members of the society
to think and act in new ways.

2.6 The Technological Society?


The society of the late industrial and early post-industrial age is in some ways
most profoundly influenced by a scientific and technological mindset, and much less
so by other ways of thinking. This influence is seen in the academic world, where
terms such as "social science" are applied to disciplines whose claims to be
scientific in methodology are rather tenuous. It is also seen in the wide-scale
application of technique to social, political, and business problems, as discussed in a
previous section. The high-tech information society will in the long run be more than
simply a gloss over or renaming of existing practices, though even these name
changes reflect a genuine shift in the collective point of view of society. People are
not just using a new vocabulary, they are not just buying the consumer goods that
reflect the latest technological advances--they are planning for and assuming a
continuing state of such change. This is done in the purchase of household goods,
appliances, and automobiles, the building of houses, offices and apartments, and in
the other ordinary decisions of life. That all such goods will soon be obsolete, and
thus can be expected to have only a short lifetime, is assumed and planned for. A
disposable economy is a necessary by-product of rapid technological change. So is a
general familiarization with relatively sophisticated products. Moreover, people tend
to trust technology for solutions to such problems as food scarcity, disease,
overpopulation, pollution, and energy shortages. As they trust, so they act. When
the scientific/technological community has already delivered so much, it is difficult
not to assume that it can answer any question, solve any problem and build any
kind of machine.
This has also caused many old barriers to crumble. As high technology has
become commercialized, businessmen, accountants, and economists have been
conscripted to work side-by-side with electrical engineers and computer
programmers. Indeed, the computer has generated more crossover among
academic and other disciplines than C. P. Snow could have imagined in the early
1960s, for both social scientists and writers have been quick to use this tool to
enhance their work. In the first years of the computing discipline, most of its
theoreticians and practitioners were drawn from other fields (particularly
mathematics), and many a university computing department today is administered
by psychologists, philosophers, mathematicians, and economists, rather than by
those with doctorates in computing itself. University curricula also recognize this
crossover, offering introductory courses in computing, data processing, and
technology for non-science majors. Such students eagerly embrace the machine for
the benefits it can bring, particularly to word processing and data analysis tasks.
However, this does not yet mean that a technological culture is universal or
deep-seated, even for students, whom one would expect to adapt most easily to
change, for few of them emerge from such courses with much understanding of how
those machines work, except in the most vague and general terms. Given that
students' general science and mathematics background is often very weak upon
entry to the course, such basic concepts as the binary numbering system or simple
programming not only cause eyes to glaze over during lectures, but generate a firm
resolve to avoid any further courses with technological overtones. This situation
may change, but not until the computer becomes as simple to use as a toaster or
any other household appliance. The change will be the elimination of such courses,
not the requirement of more technical learning. After all, who would need a
university-level course in how to use a waffle iron? Indeed, as any high technology
matures, there are fewer and fewer people who understand it, even though there
may be many more who use it. Such developments ought to be expected.
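To make concrete the kind of introductory material at issue, consider the binary
numbering system mentioned above. What follows is a minimal sketch (in Python,
a language chosen here purely for illustration; neither the language nor the code
comes from any curriculum discussed in the text) of a typical first programming
exercise: writing out a whole number in binary by repeated division by two.

    # A typical first-course exercise: express a whole number in binary
    # by repeatedly dividing by two and collecting the remainders.
    def to_binary(n):
        if n == 0:
            return "0"
        digits = []
        while n > 0:
            digits.append(str(n % 2))  # the remainder is the next binary digit
            n = n // 2                 # integer division moves to the next digit
        return "".join(reversed(digits))

    print(to_binary(13))  # prints "1101", since 13 = 8 + 4 + 1

That a dozen such lines can cause eyes to glaze over is itself a measure of how wide
the gap between the users and the understanders of technology can be.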
Thus, the term "high-tech information society" should be understood in the
context of technology use, rather than in that of the search for technique. The latter
is the function of experts in each field who are seeking to optimize their work; the
former is the province of every member of society. Use of machines does not
necessarily mean there is change in the way people think. They embrace both
technological advances and new techniques for the personal benefits and
efficiencies they bring, not because the philosophy of science is fascinating. For the
typical person, changes in thinking and living patterns are caused by new
techniques, not the reverse. To put it another way, it is the highest level
abstractions (finished products) that most people employ; they are not interested in
the detailed work that went into making them ready for common use. There is,
therefore, an extent to which technological society is a thin patina over an
underlying culture--one that changes much more slowly than it appears on the
surface.
At the intellectual decision-making level, Snow's observations about the two
cultures still have a certain validity. Academics can still go their way with their
specialities without much regard for the changing world around them. Poets and
physicists can speak different languages, read different books, and need not talk to
each other. They may pretend each other's work is irrelevant, as if physicists had no
imagination or poets could live in primitive communes and use nothing of modern
technology. While there has been a "scientification" of many academic disciplines,
acceptance of such techniques is uneven, and viewed with suspicion by some
traditionalists. Thus acceptance of technology is mixed, and there is still division
among academics over the broad acceptance of and trust placed in it. The
suspicions run in both directions.
Such suspicions and divisions are potentially dangerous, for they may develop
into prejudices that are capable of destabilizing a society. Politicians (and ordinary
citizens) cannot make good decisions about technology they distrust or do not
understand. Scientists with little or no education in the arts and humanities cannot
express themselves in a way that makes their work accessible and believable to the
general public. They may also lack the foundation for making moral/ethical
decisions, and take the attitude that science and technology are always socially and
ethically neutral, when in fact neither is. Kranzberg (Ethics in an Age of Pervasive
Technology) puts it this way: "Technology is not ethically neutral because it is not
only an instrument of human practice but a form of it also; the ethics of technology
concerns human technical practice and its normative problems." Thus, scientists
and engineers are called on constantly to make decisions with ethical implications,
not only about the way they conduct their work, but about how their work is or will
be used. Because of this, they need to be ethically informed themselves, and
articulate enough to bring issues to the attention of those outside their own
subculture.
As they communicate with others, they close some of the interdisciplinary
gaps, and simultaneously become more familiar with the relationship science and
technology have with society as a whole. To the extent that this happens, their
techniques can also become the instrument and the object of social and political
policies. Eventually, those who do have technological familiarity may demand more
power to make decisions. They will need to qualify themselves to be decision
makers in order to do this. Likewise, politicians will have to gain greater
understanding of technology and its effects, or make way for those who will. The
wider public will not even struggle with the theory, but simply use its products as
tools. Society will continue to change, for there will be more things that people will
be able to do with their tools without thinking about them, and the size of this
collection of activities is an important measure of what a civilization is about.
In the longer term, academics may not be as divided as in the past. Their
separation into non-overlapping specialities was a response to the need to know
enough facts about one field to do useful work in a world suffering from information
overload. As this characterization is now irrelevant, the barriers between disciplines
have already begun to come down to some extent, for the means to manage
information effectively and find it on demand has become universally available. This
topic will be revisited in the chapter on the information society, as well as in the one
on education.

The Third World

There are more important tensions in the world outside academia, because
technological benefits continue to be inequitably distributed on a social and
geographical basis. The disparity in technology and wealth between the European,
North American and white Commonwealth countries on the one hand, and everyone
else on the other, may get much worse before it gets any better. To be sure, there
are hopeful signs of industrialization, agricultural change, and technical education in
the third world. Countries such as China sometimes seem capable of jumping
directly from primitive agricultural economies into the information age. However,
political and social instability through much of Asia, Africa, and South America
conspire to limit growth, and most countries in these regions are still pre-industrial
or mid-industrial, with few apparent prospects for improvement. Some African
nations have even worse problems, including drought, famine, and an AIDS
epidemic that threatens to carry away many of the educated people they have
managed to produce. The poor of the underdeveloped nations have little meaningful
interaction with the prosperous West and there seems little immediate prospect of
changing things. As long as this situation continues--and especially if it worsens (as
it seems likely to do)--there still exists the possibility that another war could engulf
the whole globe. Such a conflict has the potential not only to destroy centuries of
technological advances, but even the human race itself. One of the most important
technical problems to solve, therefore, is to find ways to bring the benefits of high
technology to all peoples of the world in a non-destructive way.

Technology and Trade-offs

There is a tendency on the part of those employed in the daily pursuit of new
technology to assume that progress is always good. However, the use of any new
technology has a variety of consequences, and there are times when trade-offs
have to be made between increasing efficiency and utility on the one hand, and
negative social and human factors on the other. Among many examples are the
following:

o Improvements in railroad equipment that are designed to make trains safer
and more efficient may cause the price of the service to rise, resulting in more
people using the highways. This produces the twin negatives of lower utilization of
the now more expensive service, and a higher death rate because highways are
much less safe than the railway was initially.
o Large amounts of money are spent making the control rooms of nuclear
power plants orderly and efficient. However, if the result is a sterile environment,
the resulting operator boredom may actually increase the risk of accidents.
o The introduction of chemicals into meat and other food may make it better
tasting and preserve it longer, but at the risk of other health-threatening side
effects when the food is consumed.
o A dam built to reduce random flooding and produce large amounts of
electricity may be politically advantageous and improve the economy for a time, but
may prevent silt deposition in the delta, reducing fertility, and increasing both net
erosion and dependence on imported chemical fertilizers. The flooded and ruined
valley will silt up (sometimes rapidly), eventually destroying the utility of the dam.
In the end, there may be little but damage to show for the expense of billions of
dollars.
o Even when technological and economic goals are achieved in the short
term, vast megaprojects create correspondingly large capital debts, and these may
in the long run ruin the economy and lower the standard of living of a whole nation.
A default to the international banks could threaten the economy of the entire world.
o The factories and foundries that bring wealth and prosperity may cause
acidic rains to fall (perhaps in another country) resulting in deforestation, soil
sterility, fish kills, and respiratory illnesses, and raising metals such as aluminium
in the human system to dangerous levels.
o The manufacture of dangerous chemicals may be conducted in a distant
part of the world, on the soil of another nation. This has the twin advantages of
reducing risks at home, and creating good jobs in a third world nation. It has the
disadvantage of increasing the risk that untrained personnel will make mistakes
that result in the release of the chemicals and cause large numbers of deaths.
o Computers introduced into offices allow employees to do more in less time.
This can lead to them wasting some of their time, producing more reports that
no one reads, or being laid off.
o A focus on technology may cause managers to forget that the principal
assets of a company in the information age are its people not its machines.
Continued, this attitude could destroy the enterprise.
o Technology developed for peaceful purposes can also be used for warfare.
In particular, it can be used by terrorists. The results might be more negative than
positive.
o Governments are constantly faced with demands for increased social
spending. If they fund technological developments instead, they may have to trade
off certain short term social pain for the hope of long term prosperity.
o Resources are limited. Governments and corporations are always faced with
choices between development proposals of uncertain benefit, where selecting one
will surely kill the other.

Examples could be multiplied, and many of the discussions later in this book
could be mentioned here as well. The point is that one must always question the
potential value of a proposed technology--not everything new is necessarily good or
positive, just as not everything old is necessarily obsolete. There is no shortage of
new things that can be done; the interesting problem is deciding which ones are
worth doing.

Technology and the Average Citizen

As noted above, typical citizens even in technically advanced countries
participate only as users of technology. They labour at stores, factories, and menial
office jobs, or stand in line for welfare or unemployment checks. Although they are
eager consumers (when possible) of technological products, they neither engage in
nor care about the issues dear to the learned.
The average North American knows something about how to maintain an
automobile or small machine, but would greet any conversation about molecular
biology or philosophy with equal parts disdain and amusement. The toilers in the
humanities and social sciences are not understood at all. There is more sympathy,
but not much more understanding for scientists (who are commonly stereotyped as
"mad"), and both sympathy and some understanding for engineers. Mathematicians
are looked at askance, and a computer scientist is regaled with tales of non-
functional hardware and software--much as a doctor at a social function would go
away knowing about everyone's arthritis.
Of course, the ordinary citizen is the one most affected by changes in
technology--for it always creates new jobs and eliminates others. However, the
intellectual and material gap between the consumers of high-tech goods, and the
creators, sellers, and managers of such products is considerable. What this will
mean to decision making and effective power in the society of the future is not yet
clear; both centralizing trends and individualizing trends need to be considered in
order to make any forecasts. For most people, understanding is not a prerequisite
for participation in a machine-oriented society. It should be noted, however, that in
the next (information-based) society it may become increasingly difficult for
ordinary citizens to function at all without a substantial technical background.
A continuing widening of the gap between an elite and the general population
in the wealthy Western countries would be just as destabilizing and potentially
dangerous as the same process on the international scene between countries. If
only those who can use the new information tools can work in the new civilization
(and this is increasingly so) what place is there for anyone else? Can the
industrialized nations remain stable if this question is not addressed?

Assessing the Situation

Important cautions must be sounded about the uneven distribution of
knowledge and technique, but there are some very encouraging signs. Modern
society is far from static, and some sub-cultures are moving into territories
previously occupied by others. Those in poorer countries, and the lower classes of
the richer ones generally know (in theory) how to achieve greater wealth. Thus,
education and industrialization are actively pursued by the disadvantaged who seek
to move up. There are no secrets about how a nation becomes wealthier, and there
are no peoples who, when offered a choice, would willingly retain the short life
span and disease-ridden poverty of the agricultural age.
The poorest people of the most impoverished nations will sacrifice anything to
send a child to school, for they know that the next generation can be better off.
They would also gladly trade their poverty for the problems of the industrial nations.
The same upward route exists for the children of the working class and poor of the
industrialized nations, and they take it whenever they can, particularly to the
sciences.
Meanwhile, the scientific community is pulling out of its own intellectual
isolation to some extent and beginning to address the ethical questions related to
the society its products are creating. Along the way, there is some measure of
reconciliation with its religious and philosophical roots, though the differences here
can still be severe. Also, the use of computers, particularly those machines having a
graphics interface, has increased among artists and writers. This may not yet have
removed all the intellectual barriers to the use of technology, but it has reduced
some of the emotional ones, and the anti-technology faction has become more
muted. It may be the use of this machine more than anything else that gives
legitimacy and common currency to the term "high-tech information society."
It is the contention of this book that all the peoples and cultures of the world
need each other, that technologies pursued by one have effects on the others that
cannot be ignored, and that it will be less and less possible for any individual,
profession, discipline, or nation to act in narrow self-interest without regard for the
interests of others. Just as there has come to be a human-machine cooperation
(synergy) for the solving of problems which neither can do alone, there needs to be
an understanding that all peoples of the earth are crew on the same ship. All
peoples have common interests (even if they are unwilling to admit this); they have
a common origin, and a common destination. These themes will be developed
further under a number of headings throughout the remaining chapters; for the
present here is another aphorism:

Society is maintained by communication, and it in turn requires acknowledgment and understanding of common ground.

2.7 Summary and Further Discussion

Summary

Philosophers of various times and in various disciplines have given different
meanings to the term knowledge. It has meant the result of a particular kind of
reasoning process (logic); it is confined by some to the outcome of the scientific
method; and it is equated by others to belief or faith. It can also be personal (taste)
or opinion, though most (all?) of what is placed in the last category may properly
belong in the others. There are a variety of conflicts among the groups that hold
these positions, and these conflicts show up in both academic disciplines and in the
gaps between members of academia and ordinary citizens.
As for the scientific method itself, it relies on the assumption that there is a
reliable and potentially predictable underlying reality behind the phenomena being
investigated, though the nature of that reality is itself the subject of some dispute.
Science is connected to, but cannot be completely identified with, technology, for
the search for tools and techniques has often been independent of theory, even
though the scientific and technical communities have largely merged in this century.
If technique is given its broadest possible definition, it may be seen to include
the scientific method as one technique. Whether technique is an irresistible force
driving society to certain inevitable goals depends on whether or not there exist
absolute techniques--the most efficient possible for a given task--and it is not
certain that this is knowable.
The very term "high-tech information society", which is often taken to imply a
monolithic culture, sure of its content and goals, is quite possibly misleading in view
of the number of factions that are present even now. The disparities between the
"haves" and the "have-nots" seem likely to continue for some time to come, both
within the advanced nations, and between them and the third world. Although high
technology is having a profound influence on society, people seem content to use
its products in everyday life without needing to understand either how the products
are made, or the science behind them. In many ways, such automatic and
unthinking routine uses of technique actually characterize a civilization, more than
(and perhaps despite) the way its intellectuals think.

Research and Discussion Questions

1. To what extent would it be possible to live without any use of modern
technology? Give your reasons in detail.
2. Write a research paper describing the historical origins and development of
the scientific method.
3. Compare and contrast the methods of historical and scientific studies.
4. What is the meaning of the word "knowledge" as it is used in science, in
mathematics, religion, economics (or some other social science of your choice) and
in English literature (or another of the humanities)?
5. Does "knowledge" mean one thing for the academic disciplines mentioned
in question 4, and a different thing for art, music, and sculpture?
6. To what extent can the knowledge obtained by the scientific method be
regarded as "true" in some absolute sense?
7. Develop further the argument that the academic disciplines are mutually
dependent, and cannot exist entirely on their own.
8. To what extent, and in what ways can the cultural and intellectual elements
of Western society function together more purposefully and harmoniously?
9. Which is more probable and why: that technological developments will
reduce class distinctions or that they will increase them? Consider both the short
and long term.
10. Does the presence of the human element invalidate the claims of Science
to be objective? If so, to what extent? If not, why not?
11. To what extent is the computer unifying or further dividing the academic
world?
12. Write a defence of the academic tenure system, or a detailed proposal for
changing it.
13. Expand further upon (or refute) the suggestion in the chapter that there is
no such thing as "mere opinion."
14. Are some beliefs more important than others? Why or why not? Consider
both the issues of probable truth and probable consequences.
15. Are some beliefs more permissible than others? Why or why not? Weigh
freedom of speech against the possibility of some beliefs harming their holders or
others.
16. Consider the two statements:
a) "Religious faith and scientific rationalism/empiricism are absolutely
contradictory and can never be reconciled."
b) "There is no conflict between true science and true religion."
Defend one or the other of these two statements.
17. Explore the contention that there is (or may be) a metaphysical element in
any position on origins. Do you take the same middle position as was advocated in
this text, or a radically different one?
18. To what extent are the high-technology, industrial, and agricultural
nations of the world mutually dependent? To what extent ought they be?
19. Does the advent of high technology mean that the gaps between the rich
and poor nations of the world will widen or narrow? Discuss ways in which
technology can be used to narrow such gaps, and ways in which national policies
can be formulated to achieve such goals.
20. Is it fair to those countries that develop high technology to have them
share it with poorer countries? What would be the consequences of not narrowing
such gaps?
21. Research some examples of fraud, wishful thinking, research padding, or
hoaxes in modern science and report on the significance of such events in the
overall progress of science.
22. Develop further the theme that science and technology are really very
different concepts.
23. Develop further the assertion that pure research is now seldom done apart
from associated technological goals. You may wish to take the position contrary to
that posed in question 22 and argue that science and technology are really just
different aspects of the same thing.
24. Expand further on the theme that one technological advance often drives,
or even requires others. Use specific examples from the past and suggest more for
the future, based on present problems.
25. The text mentioned the assertion of Jacques Ellul that there is an
inevitability to the quest for the most efficient techniques--one which tends to
sweep aside all other considerations. The author expressed certain reservations
about this, at least in theory. Read Ellul, and then support or attempt to refute his
thesis.
26. Alternatively, attack or defend the thesis that even if technique is an
irresistible force, it is leading nowhere (i.e., that it has no goal).
27. Discuss and expand upon the theme that one measure of a civilization is
the size of the set of tasks that its citizens can perform without having to think
about them.
28. Discuss the relative importance of teaching and research tasks for
university professors. Do the priorities change if the perspective is that of the
professor? the university? the student? the state? society as a whole?
29. Research the acid rain problem. What are the economic trade-offs
involved in finding and implementing a solution to this problem? in doing nothing?
30. A major city built around a navigable inlet with spectacular natural
scenery is considering the building of a crossing for the inlet. It could be a bridge,
which some say would blight the landscape and create a navigation hazard. It could
be a tunnel, which would do neither, but cost 50 percent more. The tunnel would
also create more construction jobs, and based on past experience, there is less
likelihood of accidental deaths during construction. How can this decision best be
made?

Bibliography

Asimov, Isaac. Science Past--Science Future. Garden City, NY: Doubleday &
Company, Inc., 1975.
DeGregori, Thomas R. A Theory of Technology. Ames, Iowa: The Iowa State
University Press, 1985.
deSola Pool, Ithiel (ed.) The Social Impact of the Telephone. Cambridge, MA:
MIT Press, 1977.
Creation Research Society (Homepage)
<http://www.iclnet.org/pub/resources/text/crs/crs-home.html>
Ellul, Jacques. The Technological Society. Trans. John Wilkinson. New York: Knopf, 1973.
Florman, Samuel C. The Existential Pleasures of Engineering. New York: St.
Martin's Press, 1976.
Henson, H. Keith. Memetics and the Modular Mind--Modelling the
Development of Social Movements. Analog, August 1987: 29-42.
Hofstadter, Douglas R. Gödel, Escher, Bach: an Eternal Golden Braid. New
York: Basic Books, 1979.
Holmes, Arthur F. All Truth is God's Truth. Grand Rapids, MI: Eerdmans, 1977.
Klemke, E.D., et al. (ed.) Introductory Readings in the Philosophy of Science.
Buffalo, NY: Prometheus Books.
Kranzberg, Melvin (ed.) Ethics in an Age of Pervasive Technology. Boulder CO:
Westview Press, 1980.
Kuhn, Thomas S. The Structure of Scientific Revolutions--Vol 2 No 2 in The
International Encyclopedia of Unified Science (Second Ed.). Chicago: The University
of Chicago Press, 1970.
Popper, K. R. The Logic of Scientific Discovery. London: Hutchinson, 1959.
Racism, Science, and Pseudo-Science. Proceedings of the symposium to
examine pseudo-scientific theories invoked to justify racism and racial
discrimination. Athens, 30 March to 3 April 1981. New York: UNESCO, 1983.
Schuurman, Egbert. Technology and the Future--A Philosophical Challenge.
Toronto: Wedge, 1980.
Snow, C.P. The Two Cultures: & A Second Look. London: Cambridge University
Press, 1963.
Smullyan, Raymond. Forever Undecided--A Puzzle Guide to Gödel. New York:
Knopf, 1987.
Stove, David. Popper and After--Four Modern Irrationalists. Oxford: Pergamon,
1982.
Susskind, Charles. Understanding Technology. Baltimore, MD: The Johns
Hopkins University Press, 1973.
Sykes, Charles J. Profscam. Washington: Regnery Gateway, 1988.

Talk.Origins Archive--exploring the Creation Evolution Controversy <http://earth.ics.uci.edu:8080/

Chapter 3
Basic Concepts in the Theory of Ethics

Seminar - "Can We Define 'Good'?"
3.1 What is the Study of Ethics?
3.2 Moral Philosophy--The Good, the Right, and the Loving
3.3 Ethics and Pure Reason--The Legacy of The Greek Philosophers
3.4 The Nonabsolutist Philosophers--Morals are Decided Upon
3.5 Traditional Absolutism
3.6 From Theory to Decision--Practical Morality
3.7 Summary and Further Discussion

3.1 What is the Study of Ethics?


The first task facing anyone who desires to understand ethical issues is to
determine what is the nature of the things being studied. This task does not appear
to be as straightforward as it does in some other disciplines. After all, moral objects
are not the same sort as chairs, automobiles, or electric motors. Nor are they of the
same sort as planaria, fir trees, water buffalo, harp seals, or even the girl next door.
Consequently, the study of moral or ethical ideas must be approached rather
differently than the study of physical objects, whether inanimate or animate.
For instance, an automobile can be measured; the relationships between its
parts can be described completely, and detailed specifications for building another
just like it can be developed. On a less exacting level, the owner of a car can use
the senses of sight, touch, and possibly smell to distinguish a particular vehicle
among a number of functionally similar but not identical ones. On yet another level,
an automobile can be described in terms of its performance. One might wish to own
a car that can stop from 100 km/hr in less than 10 seconds, or can accelerate to this
speed in less than 20 seconds, or uses less than 10 litres of gasoline to the hundred
kilometres (in some places this would be expressed in miles per gallon). These
performance factors can be tested for and the results published for all to see.
Decisions can then be made on the basis of concrete, reproducible, experimental
data.
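The fuel consumption figure offers a small worked example of the kind of
measurable criterion being described. The sketch below (a hypothetical illustration,
not drawn from the text; the constants assume US gallons and statute miles)
converts between the two conventions just mentioned:

    # Convert fuel consumption from litres per 100 km to miles per US gallon.
    LITRES_PER_US_GALLON = 3.78541
    KM_PER_MILE = 1.609344

    def l_per_100km_to_mpg(l_per_100km):
        km_per_litre = 100.0 / l_per_100km
        miles_per_litre = km_per_litre / KM_PER_MILE
        return miles_per_litre * LITRES_PER_US_GALLON

    print(round(l_per_100km_to_mpg(10.0), 1))  # 10 L/100 km is about 23.5 mpg

However the figure is expressed, the criterion is objective: two testers measuring
the same car should report the same number.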
The point is that scientific methods can be employed to describe (in some
kind of statistical or numeric sense) every physical object and every living thing. Not
only can the natural earth and its contents be so described (geology, biology,
chemistry, and physics), but so can the products of human invention (engineering
and technology)--all this despite the reservations about the nature of reality
discussed in Chapter 2.
There are some things that are relatively less tangible that may nevertheless
be physically measurable and therefore open to an exact study. Consider the colour
red, for instance. By agreement, people use the word red to describe a particular
part of the visible light portion of the electromagnetic spectrum. Someone could
object that "redness" might not be perceived by everyone in the same way.
However, the mutual agreement means that it is still possible for any person with
normal vision to decide whether or not something is red simply by referring to
personal knowledge of this consensus. Everyone has from birth been involved in an
indoctrination into a language for describing the properties of the physical world,
and in particular, into the meaning of red.
It is not even necessary to know what an electromagnetic spectrum is in order
to be a part of the consensus that some object has the redness property. Even
though there are many shades and kinds of red, the communication of the idea of
this colour does not at all depend on any technical understanding of the idea.
Redness can be communicated accurately, even though the term is an abstraction
of a physical property, and not, in the strictest sense, a measurement (although it
could be turned into one by attaching a particular wavelength of visible light to the
word). This is true even though it cannot be guaranteed that every person
experiencing redness does so in exactly the same way.
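Indeed, the suggestion that redness could be turned into a measurement is easy to
make literal. The short sketch below is an illustration only; the 620-750 nanometre
boundaries are a common convention for "red," not a law of nature:

    # 'Red' operationalized as a band of the visible spectrum.
    # The 620-750 nm boundaries are conventional, not physically privileged.
    RED_BAND_NM = (620.0, 750.0)

    def is_red(wavelength_nm):
        lo, hi = RED_BAND_NM
        return lo <= wavelength_nm <= hi

    print(is_red(680.0))  # True: inside the conventional red band
    print(is_red(550.0))  # False: this wavelength is conventionally green

No comparable band, threshold, or instrument exists for goodness, which is
precisely the difficulty taken up next.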
The difficulty with moral objects--like others that are not physical in nature--is
that one cannot often describe them in the same ways as one does the physical
ones. If one says that it is "good" to tell the truth, for example, one must ask what is
meant by "good." How can one tell when such a quality is present, and how does
one know whether some actions have more goodness than others--that is, how does
one quantify goodness?
Goodness is clearly not an adjective that describes a physical object like a
chair. It is also a different kind of abstraction than is "red," for the latter can be
thought of as referring to a measurable physical quality, even if neither directly nor
exactly. Redness describes something in the physical world, even if those who use
the description do not know or care about the scientific principles underlying the
concept. Goodness, on the other hand, may not be physical, but most people do
attach detailed meanings to the term. Though a Christian would ascribe the quality
of goodness to God alone, many do not point to any person or thing as its origin, yet
still assert that it exists.
The study of moral issues is not only different from that of science but also
from, say, history, sociology, or economics (even though it once included the latter
two). In the last three cases, the exact methods of science may not always be
applicable, but the practitioners of such disciplines all agree that they are studying
something tangible. That is, they are certain that factual determinations can be
made in these disciplines and that there are objective truths to study or discover,
even if the character of such determinations is quite unlike the character of physical
objects.
For instance, not all historians would agree that "Nero fiddled while Rome
burned", but they would agree that the truth or falsity of this statement is at least
theoretically determinable--capable of being decided on the basis of the weight of
testimony of a sufficient number of reliable witnesses. The historian gathers
accounts of the incident under study and attempts to weigh these accounts to get
at the truth--and assumes that there does exist an objective and discoverable truth.
The outcome of such a study may not be supported by a repeatable experiment in
the same sense as in a laboratory science, but the outcome is not regarded as less
than "knowledge". Furthermore, historians assume that any similarly competent
person can repeat a study of the available evidence and either come to
substantially the same conclusions or attempt to achieve some new consensus of
what is the historical truth. The important concept is the agreement among
historians on methodology, evidential content, and (ideally) conclusions. Even
where there are disagreements about these, there is no argument that an objective
truth does exist.
Likewise, not all economists would agree ahead of time on whether a tax
reduction would decrease the average price of a can of beans, but all would assume
that with good and sufficient data, such actions can be studied after the fact and
well-founded conclusions drawn as to what the effects have been. That is,
economists always suppose that they are studying something real in the sense of its
being perceivable and measurable, even if they cannot always agree on how to
make the measurements or on what the data mean.
To summarize, in the scientific disciplines one gathers first-hand, empirical
evidence in a repeatable fashion and evaluates this data to verify a knowledge
assertion statement. In several other fields, data of a slightly different type are
accumulated and conclusions are drawn on the basis of what seems to be the
weight of evidence. Even where the facts are in dispute, there is little doubt that all
these disciplines have a factual basis.
The study of moral issues is not as straightforward, for in this case one cannot
explain varying views of truth merely by making allowances for imprecision and
differences of interpretation. Disagreements go deeper, for it is more difficult to
obtain agreement about the nature of moral statements, what they are based on,
where they come from, and whether they are well-founded. This difficulty is not
lessened even when there is agreement about the content of a statement. For
example, several moralists might agree that "Abstinence from sexual relationships
outside marriage is good" constitutes a valid moral statement, but each one could
have a different reason for saying this. Other moralists might agree that such a
statement is deserving of study but would disagree with the content. Still others
might deny even that the statement is worth making or has any meaning.
In addition, two moralists might agree on the nature and validity of a factual
statement, but act in very different ways as a result because they hold to differing
views on related moral issues. For example, two people might agree that "the
incidence of AIDS is increasing" is a true statement, and even that this fact has
moral implications. They might then come to opposite conclusions about how those
having this disease should be treated socially. These differing conclusions have to
do with the philosophical and religious presuppositions behind their moral reasoning
processes, and the extent to which knowledge of the facts, fear, or prejudice enter
into their thinking.
It is easy to make statements about whether an action is right or good,
without giving the matter much thought or even being aware of what these two
words mean. Are they synonyms or do they have slightly different connotations?
Can they be defined in terms of other words that do not have moral/ethical
meanings, or is the concept each conveys an irreducible and indefinable idea? Are
they (as some claim) such subjective terms that their meaning is private to each
individual and not communicable to others?
At this point, it would be valuable to set this discussion aside for a while and
attempt your own definitions of these two words. (Try it!) Most people find that they
can readily produce a number of additional synonyms to elaborate on the moral
concept of goodness. This procedure sets a word in the cultural context of a list of
other words with similar or identical meanings.
One teaches children in this manner, first tying a word-abstraction to a
concrete object, and then enhancing the child's vocabulary by referring new words
to the abstractions the child has already learned. For instance, the word "car" could
be taught by pointing to the family vehicle. At a later date, if the child questions the
word "automobile," the earlier abstraction "car" can be referred to. If this fails, a trip
to the garage for another look at the physical object would be in order. At some
later time, perhaps in a high-school automotive course, the child will become able to
redefine car in terms of an assembly of simpler and more fundamental parts.
This example makes evident several difficulties in assigning meaning to words
with moral content, such as good and right. Here are a few of the more interesting
ones:

1. Are "good" and "right" synonyms in the same manner as "car" and
"automobile?" That is, if one sets aside varying meanings in other contexts, do they
have exactly the same meaning in a moral context?
2. What is the concrete object that can be pointed to in order to define a first
word with moral connotations, and so get a handle on the remaining synonyms?
That is, if goodness and rightness cannot be found in the garage or on the street,
where can they be found?
3. For the more sophisticated inquirer, what are the constituent parts of
goodness and rightness into which these complex ideas can be disassembled for
more detailed study? Or, are there none--because these are irreducible concepts
that cannot be defined in other terms?

Aspects of the last two questions, the most difficult, shall be dealt with in this
chapter. For the moment, note in connection with the first question that a problem
arises because there are many uses for the word "good" that carry the meaning
"desirable," "more than satisfactory," or the like. There are similar problems with
the word right. Consider, for instance, the use of the word good in the following
statements:

1. World War II was good for the North American economy.
2. Vanadium is a good catalyst.
3. Friendship is good.
4. It is good to tell the truth.

Historians and economists might argue about whether statement 1 is true as
it stands, but they would be comfortable with modifying and qualifying it until they
had a version that they could agree was either true or false. The record of their
decision would also carry with it a review of the facts or statistics that went into
making the decision, as well as a discussion of what were the agreed-upon criteria
to place positive or negative interpretations on movements in a nation's economy.
With all this in hand, similarly qualified experts who did not participate in the initial
decision would have the means to become a part of the consensus (or not) at a later
time. However, even if other experts did not agree with the conclusion, they would
have little trouble attaching rather specific meanings to the word good as it is used
in the initial statement.
Likewise, scientists would also have to qualify statement 2, for it may not be
true under all circumstances. Moreover, this assertion is true as it stands only
relative to the effectiveness of other catalysts in similar circumstances. It is not
even necessary to know exactly what vanadium is or what catalysts are in order to
realize that meaningful criteria could be established and experiments done to verify
the truth of statement 2. A person who knows little or nothing about chemistry
could imagine a numerical value being attached to this use of the word "good" such
as: "A good catalyst shall be defined as one which speeds up the progress of a
chemical reaction by a factor of at least 3.14 over what it would be in the absence
of said catalyst." In short, there is some general agreement in such cases about
what the word "good" will be taken to mean, and disagreements about the meaning
will be neither sharp nor divisive but simply indicate that a better definition or a
more specific term is needed.
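The imagined definition is precise enough to mechanize, which is the point: once
the criterion is fixed, "good" becomes an empirical matter. A minimal sketch follows
(the 3.14 threshold is the text's own example; the rate figures below are invented
for illustration only):

    # Operational definition from the example above: a 'good' catalyst
    # speeds a reaction by a factor of at least 3.14 over the uncatalyzed rate.
    GOODNESS_FACTOR = 3.14

    def is_good_catalyst(catalyzed_rate, uncatalyzed_rate):
        return catalyzed_rate >= GOODNESS_FACTOR * uncatalyzed_rate

    # Invented figures (arbitrary rate units), for illustration only:
    print(is_good_catalyst(0.40, 0.10))  # True: a fourfold speed-up
    print(is_good_catalyst(0.20, 0.10))  # False: only a factor of two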
It is much more difficult to say precisely what statements 3 and 4 mean, and
even harder to determine their validity. There are two groups of questions
associated with such statements.
The first questions focus on meaning. What kind of statement is being made?
Is it a description of a fact? Is it an expression of the belief of one person or of a
small group of persons? Is it, on the other hand, the declaration of a generally
accepted consensus--that is, a collective decision of society? Does the statement
represent the conclusion drawn from some logical thought process by a repeatable
method of deductive reasoning from more fundamental principles or assumptions?
Or, is it perhaps the announcement of a discovery in a way similar to the
determination of facts in other disciplines? That is, has some principle of the moral
universe been uncovered that has the same kind of validity as a physical law by
virtue of being inherent in the reality perceived? Does this mean that there is a
moral sense, like that of touch or taste? These questions will be considered in
following sections. For now, note simply that the greatest division among theories of
"good" is on whether such ideas are decided upon or discovered.
What exactly does the word good mean in the contexts of statements 3 and
4? It seems clear that it means something fundamentally different than in
statements 1 and 2, but just what? Does good carry the same meaning in 3 as it
does in 4, and would both statements still convey the same idea if one used the
word right instead?
The second group of questions focuses on the validity or content of such
statements. How does a person determine if statements about moral concepts are
true? Even granted that people can reach an understanding or at least an
agreement about the meaning of goodness in both statements, how does one
determine that friendship or truth-telling belong in the category of things that are
good? Is there only one good of which all others are aspects, or are there many
goods? If many, what happens if two goods are in conflict? Is it possible to prioritize
goods or rights so that such conflicts can be eliminated or at least reduced?
Furthermore, if something is in the category of good, does it also follow that it
should be pursued--that is, does there exist an imperative that what is good ought
to be promoted or done by everyone? (This last question adds a new one to the first
group: What do the words "should" and "ought" mean in the context of moral
statements?) Finally, who or what authority is entitled to pursue the shoulds, and
with what force? That is, if good does imply should, can society or an individual in it
legitimately require behaviour that ought to be done because it is good?
It will come as no great surprise to learn that many books have been written
in the attempt to answer these questions. A complete survey of all the schools of
thought on all of these points is far beyond the scope of this work, but in the
balance of the chapter an attempt will be made to summarize the major positions
on these issues. Anyone who considers seriously the specific social and
moral/ethical issues raised in this book must at least have some idea what it is that
people are doing when they make moral judgements, and how the judgement-
making method in question fits in with those of the major schools of philosophy.
Before going on, here are three working definitions:

The study of the meaning and nature of moral statements is called moral
philosophy.
The study of the content of moral statements with a view to applying them to
right and wrong human behaviour is called ethics, and one who makes statements
resulting from such study is termed an ethicist or a moralist.
Profile on Issues . . .

The Good and the Should--A Few Questions

Once the good and the right are known or believed to be true, what power have
they to constrain a course of action? Here are samples of questions that arise in such
contexts.

Self-enforcement:
Does the knowledge of good automatically imply a person will do that good?
Does failure to do good mean the person did not know the good? Who is responsible
for the failure -- the one who did not do good, or all those who did not ensure the
person fully knew the good? Does it make a difference if "belief" is substituted for
"knowledge?"
- Who is responsible for crime -- the criminal, society, or no one?

Individuals enforcing the good on others:
Can one individual require a second to do what the first knows or believes to
be good?
- May a parent require a child to submit to the parent's beliefs? discipline?
- Ought a person intervene to prevent another person from being harmed?
(killed, beaten, robbed, raped, defamed, economically exploited, harassed)
- Ought a person intervene to prevent another person from self-harm?
(suicide, reckless driving, using drugs, entering a bad business or social contract,
believing wrong or harmful things)
- If so, ought force be used? To what extent and in which situations?

The State enforcing the good on individuals:
1. Must the state enshrine its citizens' moral consensus in law or are there
circumstances in which law is above moral consensus? If the latter, does the state
have a duty to re-educate its citizens to a new and correct (by its own lights)
morality?
2. May the state require (with what force?) its citizens to submit to (agree to)
the political, moral, or religious theories on which it is based? To what extent ought
it permit seditious talk? action?
- If a parent has religious objections to blood transfusions, may the state
intervene and force one upon a child to save life?
3. Does the state have the right to require a certain religion of all its peoples?
no religion? If in the name of impartiality the state separates itself from or ignores
religion altogether, does this constitute anti-religious discrimination?
- Is the reason for the separation of church and state the prevention of state
involvement in the church or church involvement in the state? both?
- May the state legitimately regulate the employment practices, business
affairs, or teachings of churches? of church-owned schools? May it require the hiring
of an out-of-work pastor or teacher on welfare? May it overrule the church's
decisions on whether to admit a member to the church or a student to its school if
these decisions conflict with its own agenda?
- May the state overrule a church on questions of morality, declaring that
since a behaviour is legal, the church contravenes the law and violates individual
rights by declaring it to be immoral?
- Does the practice of granting property tax exemptions for churches and
income tax deductions for contributions to churches constitute state promotion of
religion? what about religious slogans or sayings in a nation's constitution? on its
coins?
4. How closely may the state observe and regulate the economic activities of
individuals in the name of promoting the common benefit, detecting cheaters, or
ensuring fairness?
- Ought it keep cross-matched records of all economic dealings so as to spot
income tax cheats?
- Ought it sell census data to private marketers?
- Ought it to guarantee certain minimum medical protection, dental
protection, living accommodations, food, clothing, or wages to its citizens?
5. Should the state enact laws discriminating against a dominant religious,
political, sexual, or ethnic group in order to redress perceived past inequities giving
that group an advantage?
- Should the state fund minority lobby organizations for them to press their
case to the state?
- Do university entrance quotas favouring minorities work to the advantage or
the detriment of the minority? the majority? the university? society?
- Ought women be front-line combat soldiers?
6. What punishments may the state legitimately employ against those who
break its laws (none, economic, physical, social)?

- Which of the following ought the state be permitted to do:
o require certain actions of its citizens to prevent self injury, and subsequent
economic loss to others and to the state? (e.g., compulsory seat belts, motorcycle
or hockey helmets)
o censor the advocacy of violence against some group? the promotion of
fraudulent schemes to obtain money? the advertising of dangerous goods?
o prohibit substances (drugs) or objects (hand guns, assault rifles) deemed
dangerous?
o publish the names of convicted criminals in the newspaper?
o confiscate the assets of criminals for state use?
o imprison those convicted of violent crimes? of economic crimes?
o make restitution to the victims of crime?
o require restitution from those convicted of a crime?
o physically punish certain criminals, say, whip a child molester or rapist, or
execute a murderer?
o lock a device on the leg of a convicted criminal or parolee to track the
person's location?

The State enforcing the good on another State:

May one state intervene with another when the second violates its citizens'
rights by the laws of the first? by international law? What if the international law is
unwritten, or has never been agreed to by the offending nation?
- Should a nation intervene with (a) economic, (b) political, or (c) military
sanctions if another state:
o invades a third state to capture its resources or to kill its peoples?
o systematically oppresses a group of its own people because of the colour of
their skin (blacks in South Africa), their religion (Moslems, Jews, and Christians in
Communist countries), or their economic, political, and ethnic background (the
middle class of Kampuchea, out-of-power tribes in Uganda)? What if oppression
becomes large-scale slaughter?
o kills large numbers of its own citizens for protesting state tyranny (students
in China)?
o engages in a methodical economic exploitation of most of its citizens in
order to enrich the rulers and their friends (rulers of many countries)?
o harbours (encourages and finances) terrorists or criminals (drug dealers,
murderers, thieves) whose activities are detrimental to other states?
o employs economic and social systems known to be inefficient and harmful
to its people (Communism)?
o is overfishing international waters whose resources are vital to the first nation?
o uses industrial processes that are polluting the first nation? (acid rain,
chemicals dumped into border rivers and lakes)
o subsidizes its own industries or otherwise allows them to sell goods in the
first nation at prices lower than they can be produced there?

Does God Intervene?

The oral traditions and scriptures (including the Bible) of several religions
record instances of God (or gods) intervening in the affairs of individuals or nations
to enforce some good or right action. Does such "higher intervention" still take
place? Is there a corresponding outside action directed against good and for evil?

3.2 Moral Philosophy--The Good, the Right, and the Loving


As long as the human race has existed, in all of its societies, there have been
codes of moral conduct. For example, a person might be expected to keep a
promise or to tell the truth. Indeed, it is difficult to imagine how any society could
exist where contract- or promise-keeping was not practised. Likewise, there are
always some restrictions on sexual relations, as well as on violence to settle
disputes. Behaviour deemed suitable on some occasions is not on others, and
severe violations of a given society's codes always result in organized
consequences.
There are four sets of these conventions governing interpersonal behaviour.
They are religion (including magic and witchcraft), ethics, etiquette (including
folkways), and the law. The last two are conventions to enforce behaviour patterns,
so they are largely derived from the first two, which are collections of beliefs about
behaviour. Also, the influence of religious ideas upon ethics is very strong. In
addition, the word "moral," though often used as an unqualified synonym for
"ethical," tends to have religious overtones. In this context, the Bible has had a
particularly powerful influence on Western civilization and its ethics. It offers an
explanation of the origin of ethical ideas external to humans by referring to God,
who is absolutely good. It also offers an internal one, citing the role of conscience.
The very existence of society implies that there is an organized control on the
interrelationships among members of the society. Agreements about what
constitutes acceptable behaviour, (i.e., rules of conduct, morals, and ethics) are the
essential glue that holds society together. When these rules are codified and
documented, they are called laws, and their enforcement may be delegated to
particular authorities such as police, lawyers, and judges. When they are enforced
by peer pressure alone, they may be referred to as etiquette. Among free peoples, a
consensus is necessary on what ought to be the content of laws for there to be any
practical possibility of enforcing them. Under a tyranny, any law deemed desirable
by the state--no matter how oppressive--can be maintained by sufficient application
of force. Examples of such tyrannies in the twentieth century include those headed by Stalin, Hitler,
Mao Zedong, Pol Pot, Idi Amin, "Papa Doc" Duvalier, and a host of other brutal
dictators in all parts of the world.
As long as there have been scholars, people have wondered where such ideas
of what is good or proper behaviour came from. What follows is a classification of
answers given to such questions. For purposes of simplification, the categories are
larger than those which moral philosophers would usually create. Distinctions are
made based on the kinds of responses that would be given to the questions raised in the last
section. The material here is only one way of summarizing a vast body of literature.
To begin with, schools of moral philosophy could be divided into three major
groups on the question of where ethical ideas originate.

Group I--Moral/Ethical Laws Are Deduced by Pure Reason

For this group of moral philosophers, ethical statements are obvious, in the
sense that logic alone is sufficient to arrive at a knowledge of what the statements
contain and how they are to be applied. This perspective, the position underlying
some of the traditional Greek philosophies, has had a strong influence on Western
civilization, particularly in its notion of justice as a high ideal that transcends both
law and actual human behaviour.
The fundamental assumption of these philosophers is that all who are
sufficiently trained in the art of reason--anyone who proceeds in a rational and
logical manner--will arrive at the same moral principles. In this view, ethics, too, is
not a product of culture, history, or opinion. Rather, to the properly trained mind,
moral rightness is thought to be found intrinsic to the universe.

Group II--Moral/Ethical Principles are Decided Upon

Others assert that moral questions are decided upon as an act of the will. To
this group, a moral principle such as the requirement for truth-telling represents a
collective decision of society that such behaviour is desirable--a decision that may
only partly be the result of some logical thought process. That is, moral laws are not
proven like mathematical theorems, but are arrived at because society collectively
deems them (for whatever reason) to be in the best interests of most of its
members. This theory does not so much describe why specific principles are agreed
upon; it merely asserts that this is the process by which they come about.

Group III--Morals and Ethics are Derived From External Absolutes

This group asserts that moral principles exist independent of the will of any
individual, or even that of humanity as a whole. Here, moral principles are universal,
either because they are part of the very attributes of God, or because they are in
some other manner built into the very fabric of human existence, or even of the
universe. In this view, humans do not so much deduce or decide upon appropriate
moral behaviour. Rather, they discover or have revealed to them preexisting
principles. They then choose whether or not to apply these.
The Psalms summarize nicely this view that goodness is part of the character
of God, flowing by revelation through to human beings.

The law of the Lord is perfect, reviving the soul.


The statutes of the Lord are trustworthy, making wise the simple.
The precepts of the Lord are right, giving joy to the heart.
The commands of the Lord are radiant, giving light to the eyes.
The fear of the Lord is pure, enduring forever.
The ordinances of the Lord are sure and altogether righteous.
They are more precious than gold, than much pure gold;
They are sweeter than honey, than honey from the comb.
By them is your servant warned; in keeping them there is great reward. --
Psalm 19:7-11

Blessed are they whose ways are blameless, who walk according to the law of
the Lord.
Blessed are they who keep his statutes and seek him with all their heart.
They do nothing wrong; they walk in his ways.
You have laid down precepts that are to be fully obeyed. -- Psalm 119:1-4

Within these three large groups one can further distinguish several positions
that depend on what the members of the various schools of philosophy say about
how many--if any--universal moral principles there are. One can also make
distinctions on whether moral statements are regarded as:

o binding--prescriptive of what ought to be done.
o non-binding--descriptive of what people actually do.
o emotional--expressing the opinion of what someone likes people to do.

The positions taken on these questions also depend heavily on where the
philosopher thinks moral ideas originate, so some of these will be considered as
subheadings under the three main groups. The experienced student of philosophy
will no doubt have seen a variety of other, slightly different classifications of this
same material.

3.3 Ethics and Pure Reason--The Legacy of the Greek Philosophers


This section will examine the first of the three views just mentioned--that
moral statements originate through a process of reason or logic. In this view, all who
are trained in the application of logic must necessarily arrive at the same conclusion
about ethical matters. Those in this group agree that moral principles are absolute,
for logically derived principles do not change with the majority opinion from one
place or time to another, as logic itself is immutable. They also tend to agree that
more than one absolute exists. Consider this statement as a simplified
representative position of this group:

Moral statements are absolute because they are arrived at by pure reason. They are related to self-
evident virtues, each statement promoting a single virtue. There are no conflicts among these moral
statements because they do not overlap.

As mentioned earlier, this was the position of certain Greek philosophers,
including Plato and Aristotle. It has also been adhered to in various forms in more
modern times, a common modification being the omission of the second sentence,
or even a recognition that conflicts may indeed exist between the different
absolutes.
However, despite the contention that logic alone is sufficient to arrive at
ethical statements, actual conclusions of this group about the number, nature, and
priority of ethical principles vary widely.

Plato held that the goal of the rational person was the cultivation of personal
virtue (or excellence) and happiness. In his view, such a person knows what is true
by pure reason, can control the desires, and is capable of both philosophy and
command. The ideal ruler in the Platonic state is its best philosopher. Some of the
virtues that Plato put forward were temperance, courage, wisdom, and justice.
Aristotle, on the other hand, emphasized those of friendship, pride, and moderation.
Today, it is easy to underestimate the importance to these teachers of human
reasoning and the spoken word (logos) they used to convey that reasoning by way
of argument. The logos of reasoned argument was not just a symbol or even just a
conveyer of meaning; it was the very substance of knowledge itself. Logos made
reasoned discourse possible; it was therefore what made one truly human.
On the other hand, the interesting thing for a modern reader of Plato and
Aristotle is the near total absence in these philosophies of any discussion of right
and wrong in the moral sense that these words were usually used in the Christian
societies that followed. These philosophers did not equate virtue with what has been
termed morality in modern culture. Rather, they believed that such concepts were
either self-evident or incidental to the training of the virtuous. Likewise, modern
concepts of justice--such as "all are equal before the law"--would have been foreign
or perhaps even immoral to Plato. To him, it was entirely correct that there be
differing standards for the virtuous philosopher-governor on the one hand and for
the uneducated masses on the other. Again, it would not be so much, say, truth-
telling, that was at issue to Aristotle, but loyalty to one's friends. The long-term goal
was the perfection of pure reason in governing the relationship between individuals
and the state. Indeed, it would be accurate to say that the advancement of a
person's rational life was the ultimate good in these schools.
Issues of right and wrong in ordinary life were in a different and much lesser
category than the pursuit of philosophical excellence. Such matters were regarded
as being common knowledge, within the reach of ordinary people, and sufficiently
self-evident even to the untrained as not to be worthy of detailed rational
consideration. Here is a clear separation between common morals, which anyone
could understand and apply, and the ethics of virtue, to which only the deep thinker
could truly aspire. Once having achieved an understanding of those ethics, such
thinkers could be justifiably proud of the difference between themselves and the
common person. Indeed, such issues as friendship could arise only between good
men; one could not be friends with a slave (thought of as a living tool) or a woman
(women not being regarded as rational beings). Some philosophers taught women as
well, but this was uncommon. These principles might be summarized in this way:

Goodness refers to virtue, and rightness to action.

Another illustration of the difference between modern Western ideas and the
ideas of some of the ancients can be found in Plato's concept of justice. In his view,
the just person must fulfil his or her proper role in a state--that of ruler,
administrator, or citizen. Each person has a natural position of control or
subordination, and any perversion of this is an injustice. No one should ever seek to
act outside their just station in life. To propose that the same laws should
apply to both commoner and king would be illogical, and therefore seditious.
As to the common morality, Plato's belief was that there was a moral nature
with which the rational person lived in harmony, even though this might often be in
contradiction to the conventions or practice of the non-rational person. In this view,
morality is part of nature itself; it is not man-made or dependent upon culture or
invention in any way. It is part of the natural order, as are male/female distinctions,
skin colour, and the nature of fire, earth and sun.
In an ethics based on reason, moral laws are supposed to exist apart from
convention, culture, or decree. They do not change with time or civilization. They
simply are. The task of both the individual and the state relative to such matters is
to determine the correct natural order of morality and justice and then to change
convention, law, behaviour, and legal justice so as to conform to that right order. In
this view, it is not only possible but also probable that the vulgar, uninformed, and
irrational masses will have as a conventional morality a code that upon rational
examination will prove to be immoral, because whatever common opinion may be,
true (logical) knowledge cannot be wrong.
Socrates, according to Plato, held that a person who once knew what was
good could not choose to do evil, and therefore the acquisition of knowledge
through philosophy was sufficient to attain to all virtue. Moreover, wrongdoing in
anyone's own eyes can never be a voluntary act. Thus, for example, an evil tyrant
could never be happy or informed.
By the time of Immanuel Kant (the late 18th century), these traditional
absolutist views were virtually unchallenged. Kant reformulated them in terms of a
law of duty (not love, which is an emotion) that he called the "categorical
imperative." Briefly stated, it is this:

Whatever one does, one must act in a manner that is consistent with wanting that action to become
a universal law. The corollary to this is that people are to be treated as ends, not as means to an
end.

Kant was so convinced of this law of duty, which he claimed to have
formulated by pure reason, that he rejected any mixture of love, compassion, or the
pursuit of happiness in governing actions as dangerous corruptions of the Moral
Law. He regarded the categorical imperative as the triumph of pure moral reason.
However, there are several flaws to the notion that true morality can only be
discovered through pure reason. The first is that the actual law discovered by Kant
seems, if it stands alone, to be rather arbitrary. Why not pick some other law, such
as "Do what enhances your own self interest?"
It seems apparent that Kant was trying to bring within the sphere of duty (his
highest goal) a statement incorporating the Golden Rule of Jesus Christ, "Do unto
others as you would have them do unto you." Because of the potent influence of
Christianity in the Europe of that era, it was important to Kant that reason seem to
achieve the same ethical result as religion. At the same time, Kant believed that he
was not merely modifying the Golden Rule but held that even if Christianity did not
exist, pure reason would have discovered this principle unaided. Kant believed duty
to transcend not only philosophy but also the results produced by the application of
the senses (science). It was by serving duty in accordance with the categorical
imperative that all true notions of etiquette, morality, and law would be derived.
In summary, Plato concluded that ethical duty was collectively owed to
society, or the state. Aristotle stressed friendship, and Kant decreed that the
primary imperative was to duty itself. For each, the well-governed state had an
obligation to enforce moral laws, putting weight behind the transition from the good
to the should.
There are five great difficulties with such views of ethics. The first is that if
they are valid, all philosophers ought to arrive at the same conclusions about what
are the highest principles of moral law, and ought to apply them to ethical conduct
in at least very similar ways. That they do not suggests that one needs to seek
another source of absolutes than unaided human reason.
The second is the abstractness of the concepts. Such theoretical ideas often
seem to have very little practical context. It is not clear how to use such a system to
make applications to specific situations in order to act morally. It is not always clear
what is meant by the term "virtue" or what specific qualities ought to be included
within its purview. Likewise, it is difficult to agree on what specifics do flow from the
categorical imperative. This abstractness is not necessarily just a weakness, for the
strength of the categorical imperative also lies in its generality, which is achieved
precisely because the statement speaks not to the ethics of specific actions but to
the moral process by which the ethics of any action is determined. On the other
hand, such generality, along with many specifics, was already present in the Bible
(and to a lesser extent in other religions) before Kant; his work refocused Biblical
thinking rather than providing a radical departure from it.
The third is that actual experience also forces one to question the assumption
of some absolutists that the sufficiently well-informed person cannot choose to do
what is wrong. On a most practical level, this assumption mocks the aching heart of
every parent who has taught a child to do right, only to have the child grow up to do
evil instead. That this actually happens, and does so frequently, calls into question
the Socratic assumption that adequate knowledge of good alone is sufficient for
enforcing good behaviour. On a global scale, the increase in all forms of knowledge
would presumably carry with it more knowledge of what is good, and this would in
turn result in a more moral society. Yet, the last three centuries have seen as much
war, tyrannical oppression, brutality, and other evils as has any other time in human
history, if not more. Indeed, although education has been more extensive in scope
and application during the latter part of the twentieth century, it has become
abundantly clear that knowledge and goodness do not increase
together. One could argue that it is the absence in the curriculum of the study of
virtue that is at fault, but as those who control the schools cannot themselves agree
on what, if any, moral principles ought to be inculcated, it appears that this avenue
is a dead end.
Fourth, there is a ring of arbitrariness to these philosophers'
conclusions. It is easy to imagine coming to a different conclusion from
Kant's, and indeed modern philosophers no longer place the categorical
imperative at the top of their list of logical conclusions about morality. Other
considerations have become paramount, and other priorities have risen to the top.
This would appear to be a fatal blow to the whole concept that sufficiently trained
thinkers will always arrive at the same conclusions about moral philosophy.
Fifth and finally, a Christian must argue that since all aspects of humankind,
including the intellect, are fallen and flawed due to sin, we do not have the ability to
reason perfectly, and therefore could not come to correct conclusions about moral
principles by unaided reason. In this view, the ability to reason as God would do is
damaged by the fall, and therefore the process and the conclusions are bound to be
wrong (at variance with God's) at least some of the time. Thus, at the end of the day,
the Christian discovers at the heart of this theory a mistaken confidence in human
reason and so must reject this theory of moral philosophy as fundamentally
defective, and even idolatrous. Nor is it enough to rescue the morality-as-pure-
reason theory to say that humankind is made in the image of God and can therefore
think His thoughts after Him, for this attempted recovery still ignores the fall, and
so is fatally flawed.
All these considerations and others are the subject of many books. They have
led modern philosophers to consider a number of other positions, some of them
nonabsolutist.

3.4 The Nonabsolutist Philosophers--Morals are Decided Upon

Position 1: Moral statements have little or no meaning.

The most extreme position here is occupied by those philosophers who assert
that there is no such thing as absolute morality. That is, there are no universal
norms on which choices of right or wrong can be based. To this group, called
antinomians, there are not only no discoverable moral ideas independent of human
reasoning, there are also none that can be reasoned out from first principles or
axioms--that is, antinomians deny the existence of any such axioms. A few of them
may accept the existence of a good god but yet deny that even his revealed
principles for human behaviour always apply. At the very least, members of this
group will assert that such words as good, right, moral, and ethical are all
essentially synonyms for some indefinable concept that is common to all these
terms but cannot be explained in simpler words. They conclude that such words are
therefore meaningless in any practical sense.
Some of these go farther, arguing that moral statements are absolutely
without meaning because they are not about physical objects and are not therefore
verifiable through scientific methods. They also assert that moral statements are not
logically deducible from non-moral premises. These thinkers, variously known as
logical positivists or materialists, hold that logical argument and the scientific
method applied to the material world are the only possible ways to know anything;
all else, including moral statements, is rejected as irrelevant. What isn't knowable
from the application of the five senses and the filtering of data through the scientific
method isn't knowable at all, so it isn't anything.
Despite taking this stand, some might still consider moral statements to be
useful, even if they are not verifiable. However, this usefulness would be entirely
utilitarian and pragmatic. For example, a speed limit serves the purpose of
promoting a valuable kind of order in which fewer people are inconveniently and
messily killed. Perhaps, they might argue, moral statements are of a similar nature,
provided all realize that they have no inherent compelling force of their own but are
merely convenient conventions or agreements. That is, etiquette has a use but not
morality, because the latter term implies a universalism that the former does not.
In this view, unethical behaviour, if there is such a thing, is not absolutely
wrong because wrong has no fixed meaning. However, some in this camp might
concede that if a behaviour inconveniences or harms a sufficient number of people,
society has a legitimate right to restrain it. This is a democratic view of ethics and
one that has some appeal, for right and wrong can be almost anything that the
majority in a society want them to be. Of course, to say that no absolute wrong has
been done seems like cold consolation to the victims of rape, to the families of
hostages, to those who have been defrauded, or to others whose "level of
inconvenience" is rather high. However, this view does assert that terms such as
"good" have some use, even if they have no meaning.
Some antinomians may go even further, holding that the terms right, wrong,
good and evil have neither meaning nor practical use. In this extreme view, all
people have an absolute right to do whatever they personally conclude is proper,
and there exists no authority that can legitimately restrict this right. All people have
the total personal responsibility to assess whatever situations they are in and to act
accordingly. To say that an action is wrong is an unwarranted invasion of privacy;
no person can legitimately participate in, or even comment upon, another's moral
decisions. Herein lies the ultimate freedom: there are no bounds, no chains, and
no responsibilities--one is accountable only to oneself. This view is sometimes
termed libertarianism, though those who hold to social and political movements
bearing that name might have less extreme personal views.
Some would moderate this view, correctly observing that it does uphold at
least one absolute, namely freedom. They advance this principle as the best
contribution of the antinomians:

Always act to maximize your own freedom and that of others.

Adopting very much of the antinomian position would make this book
either very short or entirely unnecessary. The whole subject of ethics would be
quickly disposed of if it were so neatly to be defined out of existence. The scientist
who denies the existence of reality may still be motivated to study the appearances
of phenomena, but the person who denies the existence of the moral appears to
have no basis to be a moralist. The chief difficulty with all antinomian positions,
even in their moderated forms, is that they provide little or no basis for agreed-upon
forms of appropriate social interaction, in particular, none for law. They are, in short,
a formula for anarchy rather than for society. If each person is a law alone, then
civilization is already dead and those who remain are but its pallbearers. This
observation also applies to unions, companies, and other organizations within the
larger society that occasionally claim the absolute right to act in the self-interest of
their owners or members without any regard for the rest of society. That is, they
define good to be what advances their collective power or position, even at the
expense of all others.
These difficulties lead to another variation on the antinomian theme: although
morality is not absolute, it is nonetheless appropriate for the strongest in any given
group to create and control society as they see fit. In this view, rules of conduct are
arbitrary rather than absolute. Those who are strong must arbitrate codes of
behaviour for the weak, enforcing such codes through their position of strength. It
may be argued further that the evident superiority of some human beings gives
them both the right and the duty to be the arbiters of morals. Anything else, they
could continue, would be an encroachment of the weak upon the strong, and such is
not to be borne. Clearly, there are borrowings here from Plato, even though these
would deny his premise that morality is absolute. These views are also compatible
with those of the social Darwinists, who hold that human society and ethics are
evolving in a process of natural selection that will guarantee the survival of
strong people, strong ideas, and strong ethics, as well as ensuring the unlamented
demise of the weak. That is, since the aspect of progress called evolution is
inevitable, the more highly evolved (the strong) need to be little concerned with the
rest (the weak) as they are bound to be selected into oblivion.
The problem with theories of this type lies in the determination of who the so-
called strong are, and why. As the Nazis showed so graphically, the logical
conclusion of any theory that purports to uphold a superman morality is that the
supposed superior beings may claim the right--even the obligation--to eradicate
those perceived to be lesser beings. After all, their reasoning goes, they are merely
helping the inevitable progress of evolution to achieve its predetermined goals, so
they are doing right.
The world is not so far removed from the Holocaust that it should forget what
such twisted reasoning did to the Jews of Europe during World War II. It is easy to
make a political or economic scapegoat of a group of people who, for philosophical
reasons, are regarded as lesser beings. Once a group has been intellectually
ghettoized (for whatever reason) it takes very little time to decide to physically
segregate them as well. It is a thus a small step from such a philosophy--which is
just a mask for religious or racial hatreds--to genocide, and it is a step that has been
taken many times in history. There is no reason to suppose that it will not be taken
many more times.
However, even if this most extreme conclusion--that the lesser beings should
be eradicated--is not drawn, but the rules that govern society are entirely arbitrary
(because there are no moral absolutes to derive them from), then it will at least
follow that the strongest arbiter will ultimately rule the rest. That is, the normal end
result of an arbitrary moral code is totalitarianism. Once this situation comes to
pass, it does not matter to those ruled by a tyrant whether the tyranny is of the
political left or right. It is here, in the arbitrary suppression of the ruled, that Fascism
and Communism, having left the stage on the right and left, meet and shake hands
behind the scenes.
The cyclical view of history popular with some Greek philosophers held that in
the decay of the moral principles that brought democracy onto the scene, such
tyranny was inevitable. To some extent, this theory has support from the historical
record, for it can be seen in operation in Greek and Roman times as well as in
modern societies. When the glue of moral consensus dissolves, the society also
disintegrates. It then becomes ripe for a takeover by a tyrant from within or without
who can impose a new order. On the other hand, if an imposed order is actually just
a thin arbitrary veneer over a number of competing hatreds, the removal of the
external force leads at once to anarchy, and this fact has been thoroughly
demonstrated in Eastern Europe in recent years.
Taking all this into consideration, the principle of maximizing freedom seems
to be the only valuable contribution of the antinomians. Yet this principle contradicts
the idea that there are no absolutes, for it is apparently being enshrined as just
such an absolute. For the purposes of this book, it will be assumed that both
anarchy and tyranny are unacceptable and that even freedom must be tempered,
for freedom is not the same thing as license. Because of the undesirable outcomes
of antinomianism, there is a strong practical motivation to look elsewhere for the
meaning of moral and ethical statements.
There are religious reasons to do so as well, for antinomianism expresses the
hostile antithesis of any belief in a supreme being who has the authority and the
character to define what is good and hold creation accountable to do it. Since, for
example, Christianity holds such a belief to be of fundamental importance, it is
impossible to follow Christ and also be an antinomian.

Position 2: Moral statements are a general consensus.

The philosophers who hold to this view accept that moral statements are
meaningful. They do not believe such expressions to be discoveries of universal
principles, but rather to be general decisions about behaviour made with a view to
the ends that such conduct should produce. That is, they concentrate on the results of
actions rather than on the actions themselves. Actions that lead to desirable ends
are defined to be good; others are less so. Two actions leading to the same end are
equal in moral content, even if they appear to be contradictory in themselves. For
example, in this view, if the same result can be achieved by lying as by telling the
truth, then the two courses of action are morally indistinguishable.
There are two main groups of philosophers who hold this view, the hedonists
and the utilitarians. The hedonist believes that the chief end of a person's life is the
maximizing of pleasure and the minimizing of pain. This is a natural outgrowth of
the starting premise, for if only the ends of actions are important and not the
actions themselves then one might as well put one's own pleasure first and follow
that with the pleasure of others, if any energy remains.
Some hedonist schools have attempted to define or even quantify the
measurement of relative amounts of pleasure for varying numbers of people, but
this philosophical position remains at its core a self-serving one, with little support
or concern for the benefit of others. Thus, since moral issues are raised principally
to discern what one's relationships and responsibilities to society as a whole ought
to be, the hedonist view has little to commend itself in a study of societal issues.
Indeed, from the point of view of society at large, it seems to have little to
distinguish it in practice from the antinomian. The latter disclaims mutual
responsibility for moral behaviour on the grounds that no such thing exists; the
former on the grounds that pleasure supersedes responsibility and is the only
worthwhile pursuit. It is difficult to imagine how either can provide a basis for any
kind of society--an association of people working and living together to fulfil
common goals--because neither provides a motivation for being especially
concerned about the other members of a society.
Perhaps hedonism's most serious shortcoming is its failure to account for the
extreme situation in which the majority in a society are sadists whose pleasure is
maximized by inflicting pain on others. The hedonist, even if uncomfortable with this
situation, would have little choice but to admit that such a sadistic majority
would be doing good in torturing, murdering, or otherwise causing pain to the minority.
In stark contrast to hedonism, the Bible draws entirely the opposite conclusion
about pleasing oneself by holding up the example of the Christ:

"Do nothing out of selfish ambition or vain conceit, but in humility consider others better
than yourselves. Each of you should look not only to your own interests, but also to the
interests of others. Your attitude should be the same as that of Christ Jesus: Who, being in
very nature God, did not consider equality with God something to be grasped, but made
himself nothing, taking the very nature of a servant, being made in human likeness. And
being found in appearance as a man, he humbled himself and became obedient to death--
even death on a cross!"--Philippians 2:3-8

Moral philosophers who are not hedonists but still hold a consensus view of
moral statements may be loosely termed utilitarians. These attempt to develop a
philosophy of the maximizing of good results for the largest number of people,
without necessarily using the word pleasure to describe that good. The essence of
this view can be summarized by the principle:

Always act to bring the largest benefit to the greatest number of people.

This is also a democratic view, though clearly of a different sort from the one
that says there are no norms of good at all. Utilitarianism acknowledges the existence of
both legitimate moral statements and a form of mutual responsibility. For this
reason, it is a widely accepted popular philosophy, and many people embrace moral
principles that they perceive as being utilitarian.
However, even the non-hedonist utilitarian still has the problem of calculating
the relative amounts of good in the ends of moral acts in order to justify the acts
themselves, and this problem stubbornly resists solution. The person doing the
calculation is almost certain to weigh personal benefit most heavily, so the dividing
line between hedonists and utilitarians tends to become obscure.
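
To make this calculation problem concrete, here is a toy sketch (in Python, purely
for illustration; the actions, the benefit scores, and the person-weights are all
invented assumptions, since choosing such numbers is precisely the step that
resists solution):

    # A toy "utilitarian calculus" -- hypothetical, for illustration only.
    # Each action lists the benefit it is judged to bring to each affected
    # person; an action's utility is the weighted sum of those benefits.

    def utility(action, weights):
        """Weighted sum of per-person benefit estimates for one action."""
        return sum(weights[person] * benefit
                   for person, benefit in action["benefits"].items())

    actions = [
        {"name": "tell the truth", "benefits": {"me": -2, "friend": +6}},
        {"name": "lie",            "benefits": {"me": +4, "friend": -1}},
    ]

    impartial   = {"me": 1.0, "friend": 1.0}  # everyone counts equally
    self_biased = {"me": 3.0, "friend": 1.0}  # the calculator favours himself

    for weights in (impartial, self_biased):
        best = max(actions, key=lambda a: utility(a, weights))
        print(weights, "->", best["name"])
    # Impartial weights pick "tell the truth" (utility 4 versus 3);
    # self-biased weights pick "lie" (11 versus 0).

Nothing in the theory itself dictates either the benefit scores or the weights; a
different chooser produces a different "moral" answer, which is exactly how the
dividing line just described becomes obscure.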
The chief difficulty with this position seems to be that actions are regarded as
having no intrinsic value in themselves. An attempt to save a drowning child would
not in this view be a good act if it failed. If the would-be rescuer dies in the futile
attempt, then far from being a heroine, she is a fool. If she dies, but the child is
saved, then the act is at best neutral, depending on how one evaluates the relative
worth of the two lives.
There is also very little in this philosophy for the person seeking any kind of
ultimate meaning to life and its activities, for unless one knows ahead of time what
will be the outcome of an action, there is no motivation to perform it or to avoid it--
yet no philosophy offers a method for predicting the future. Decisions must be made
at the time action needs to be taken, when the consequences are difficult or
impossible to foresee. It is then that a person needs a sense of whether an action in
itself is good; and it is not often that the time is available for computing probabilities
of various possible outcomes and weighing these for perceived good results.
Thus, on the one hand, this philosophy has considerable value as a means of
attempting to find a justification or condemnation for actions already completed,
based on their consequences. On the other hand, it fails as a means of making
decisions about conduct itself--it seems impractical to apply in real situations, even
though it sounds good in theory. Moreover, as with antinomianism, hedonism and
utilitarianism both conflict with the view that there is a God who can and does
dictate absolutes. Thus, the Christian, for instance, must decline to use such
theories as a basis for judging actions.

Position 3: The only moral statement is the law of love.

This position holds that the most desirable collective moral decision is to set
forth a standard of love for persons (not things) as the single universal ethical
imperative. This is an attempt to capture a middle ground between the antinomians
(no rules at all) and the legalists (rules for everything), and it seeks to do so by
setting forth a single intrinsic good, namely love. All actions are relative to the
principle of love; they have otherwise no positive or negative value of their own. The
principle might be stated as follows:

Always do the loving thing.

Once again, as in the previous cases, actions are not in themselves good.
Instead of having the relative value of actions decided by results (as in
utilitarianism), actions are judged by the motivations behind them. No general rules
for responses to particular situations can be given, because one cannot know in
advance what a lovingly motivated response or action will be. Instead, one must
wait to be in the situation to decide on the most loving course of action.
Because of its emphasis on doing the loving thing according to the situation,
this moral philosophy is sometimes called situationism. In this system, there is no
rule book for behaviour, and there are no principles by which actions themselves
are judged; only motives have a value attached to them. This position differs from
the ones above in that it holds that there is a universal norm--that of love--but it is
similar to utilitarianism in that each action is judged in a manner that attaches no
value to the action itself but is essentially pragmatic (with loving motives
replacing good outcomes).

This position is also relativistic, for any other ethical norms are valid only
relative to the one universal principle. Indeed, they are only valid if they happen to
serve the law of love in a given situation. It is not possible to say that either lying or
promise breaking is bad in itself, for the situationist might decide at some point that
lying is the most loving thing in particular circumstances and is therefore good.
Moreover, this strategy is a personal one. Its practitioners concentrate on the
person who is to be the object of loving action rather than on abstract ideas of right
and wrong actions.
Notice that the corollary to this principle is essentially the same as that of the
categorical imperative, even though the motive for stating it is quite different:

Things are to be used, people are to be loved. Above all, people are never to be used as means to an
end.

The love so expressed could even in some cases be akin to the New
Testament concept of agape--the giving of self without respect to merit or
expectation of return. It is most particularly not erotic love, which is seen as self-
serving rather than truly loving, and it is much more than brotherly love, fraternal
affection, or friendship. Therefore, such expressions as "sexual morality" are at least
difficult to discuss if not entirely meaningless in such a philosophy, for a sexual act
of whatever kind is never thought of as wrong in itself. Morality depends on the
motivation of the participants, rather than on the act itself.
Because of the emphasis on the value of persons and because of its claim to
be able to resolve apparent conflicts in marginal cases (do the most loving thing),
this theory is attractive to a variety of people, whether their moral convictions arise
from religious considerations or otherwise. However, this position is not without its
difficulties, though they are not as great as some of the ones already examined. The
chief problem is that love is ambiguous. If there are no discoverable universal
principles--and therefore no outside references from which to obtain a definition--
then what is love, and from where or whom does it acquire meaning? Does love get
its meaning from the situations in which the principle is applied? If so, situationists
are faced with a circular definition, for love was supposed to be the judge of the
situation. How can the term gain its only meaning from the situations for which it is
supposed to be the arbiter? Is love an emotion--and is one supposed to "feel" the
loving thing in a particular situation? If so, love may not be a moral idea at all, for
emotions differ both with personality and over time.
There seems to be no way to judge the lovingness of a situation other than by
being the one experiencing it. Once a principle becomes so personal that it cannot
be the same for two people (or for the same person at two different times) it can no
longer effectively be communicated at all, and so loses all practical claim to have
meaning. Thus, if situational experience or emotion alone are the guide for morality,
it is not clear how this system differs in any practical way from antinomianism.
Additional rules are needed to clarify what love is.
There is no way out of this difficulty, for if there existed any other rules by
which one could determine the meaning of love, then love would not be the only
universal norm but would share its position with some other norm. Not only that, but
the situationist also seems to have the same problem as the utilitarian in making
any decisions ahead of time as to the value of actions. Computations must still be
done at the worst possible time--when a decision is necessary and action must be
taken. Here it is the maximizing of love that must be computed rather than the
maximizing of so-called good results, but the effect is not likely to be much different
if such approaches are used, for in both systems actions have themselves no moral
content and are at best catalysts for something else.
What is more, it has become common to advocate self-love as the highest or
most important form of love. Whenever this is so, love-situationism becomes
indistinguishable from hedonism.
There is an even more serious problem, namely, the decision to choose this
particular single norm. The choice is supposedly not based on the discovery of any
more universal principle than love, but is a collective decision of society. However,
the motivation for this collective choice of love is unclear. Could not something else
have been chosen--say hatred? This possibility reveals that there must be some
more fundamental principle that leads to the decision to choose love. For example,
in the Christian religion love is an attribute of God that is revealed to human beings
in the form of the gift of his son to die for sin. This is reciprocated by believers in
him by loving God, and thereby love between human beings is also legitimized.
"Love your neighbour as yourself" is not the most comprehensive statement of this,
but rather, "hold others in higher regard than yourself." Thus, Biblical agape
(selfless) love has a context and is part of a hierarchy of activity in which love for
God--not simply love itself--is at the top of the pyramid.
With no authority beyond their choice of norm, the situationists' love, on the
other hand, stands alone, unsupported. In practice, this love is often identified with
sexual activity and situationism used to justify a complete license in this regard, as
if the broader society could not conceivably have any interest in any social or
medical consequences. While it is not quite fair to associate this position exclusively
with the so-called sexual revolution, the difficulty it has in dealing with this
important and closely related area is a powerful argument that the theory is
incomplete. Moreover, situationism is sometimes expressed in the slogan "if it feels
good, do it," and in this form it also becomes indistinguishable from hedonism.
Sexual mores raise yet another problem with situationism, particularly when
expressed in the latter form. When a paedophile has sexual relations with a child,
both parties may feel at the time that the activity is loving. Yet society persists in
regarding such actions as exploitive, harmful to the child, and wrong. However, it is
difficult to see how to reconcile this revulsion with situationism, for if the parties feel
right and loving about their actions, on what basis can anyone else condemn those
actions? To proscribe paedophilia is to say that the feelings of love at the time of the
act are not the same thing as true love, and therefore to establish a higher norm
that claims to be able to examine actions themselves for lovingness. While this
would seem to be an improvement on an ethic based on completely personalized
feelings of love, it does at least undermine the premise that one can indeed judge
what is the loving thing in a given situation.
Indeed, that society would want to urge any restraint at all on the satisfaction
of sexual cravings at any time or place, or with any person, suggests that self-
control is being held up alongside love as a parallel value, and that love does not in
fact stand alone.
Christians also have little choice but to reject this moral theory, because they
hold that humankind is fallen, and therefore that feelings of love are unreliable at
best, and twisted at worst. Moreover, they hold that God who defines what is good
does not change, and that therefore moral principles, while they might have to be
adapted to apply to a given situation, transcend all human experiences and
situations. Biblical morality has a universality that goes beyond one's feelings of
love at the time of an individual act.

Ethics as a Social Contract

This is another relativistic theory of ethics. Its operating principle and chief
contribution is contained in the following:

Ethics consists of a mutual behavioural agreement between individuals and the society in which
they live.

This statement contains an important truth, for it recognizes the dependence
of individuals upon society and vice-versa. As has been noted several times already,
society is a mutuality and its very existence depends upon predictability in the
relationships between its members. Here, this concept is acknowledged and ethics
is regarded as codifying the mutually agreed-upon contract. Being a part of society
means that individuals have both written and unwritten obligations to the culture as
a whole, including to other individuals. In return, society has an obligation to its
members to provide a predictable framework within which to live and act.
It is also possible to deduce from contractual ethics other principles, for
humanity as a whole has an ethical contract (by virtue of sharing the habitat) with
the global environment--particularly with other living things. Thus, there is an
obligation to secure and maintain both the physical and social environment.
Thus, the contractual view has great strength, for it seems to give individuals
a substantial framework within which to make ethical decisions. Yet this strength is
simultaneously a weakness, for it focuses upon the existence of contractual
dependence without giving any guidance about the contents of the contracts.
Neither does it contain an intrinsic way to determine the relative importance one
ought to attach to different contracts when their obligations conflict.
Not many people will acknowledge themselves to be bound by contracts
whose contents are vague or unknown, and for which there is no external
enforcement mechanism. Thus, the operating principle has worth but it does not go
far enough by itself to be of practical value; it must be combined with one or more
other expressions to guide the choice of good actions. In short, at least some of the
contracts need to be specified, and this need places all the specific contracts at the
same level of importance as the norm of their existence, shading this ethical theory
over to a somewhat rules-based system after all. That is, although the notion of
dependence is valuable because no one's actions exist entire of themselves, until
an ethical theory can provide specifics, it is inadequate for the whole task of
governing behaviour.
Contract ethics also shares the weaknesses of all democratic views of right
and wrong behaviour, for a contract agreed to by a majority may well be
unbeneficial or even fatal to a minority. The majority might agree together (a
contract) to exterminate all the Jews (or all the Christians) but the mere existence of
such an agreement surely is not sufficient to show that it is right. The fact that it is
possible to show that there are social contracts that are not in fact desirable once
again points us to the need for a higher set of norms whereby social contracts have
to be judged. Moreover, if it does attempt to create a hierarchy of value or
importance for contracts, it tacitly admits that there are better or even best
contracts, and so begins to become absolutism in the end.
The Judeo-Christian view that has shaped Western civilization does not deny
the existence of binding duty contracts, but would view them in the context of
higher obligations to an Almighty God rather than as just mutually agreed-upon
democratic ideals. Indeed, the Bible is replete with examples of covenants that
entail behavioural expectations, but these are agreements whose terms are
dictated by God Almighty, and subsequent human arrangements are
expected not to conflict with one's contractual obligations to Him.

Summary

Over the last few centuries a variety of nonabsolutist ethical theories have
been proposed by philosophers, some of which have become quite popular. On the
one hand, the extreme antinomian theories virtually deny the existence of right and
wrong; and on the other hand, the relativistic ones assert that nothing definite can
be said about an act itself, for rightness and wrongness depend on other things.
Considering the changing views of practical morality, it is uncertain whether these
(mostly) relativistic theories actually influence behaviour or are simply used to
explain and justify whatever a person intends to do anyway. Since a lack of
guiding principles is inimical to the very existence of society, and since actual
experience with relativism has not had very positive results, it may be that the
future holds a return to some form of absolutism.

Profile On . . . Issues

The Slippery Slope

An argument used by conservatives in all eras goes: "One departure from
traditional norms starts a process leading inevitably to complete corruption." This is
called a "slippery slope" argument, because the premise is that once society starts
down certain paths, it cannot help but slide to the bottom. Moral relativists discount
or ignore such arguments, but to illustrate that they have some validity, consider
how what might once have been thought a slogan for behaviour:

If it is right in God's eyes, then do it.

has tended to become the allegedly more democratic

If it seems right in our own eyes, then do it.

Situationism shortened and refocused this, rendering its social slogan:

If it feels good, do it.

This at least still requires some judgement (about the feelings, not the
actions). However, by the late 1990s (a time when little effort was put into thinking
about morality at all) the social slogan had become:

Just do it.

The final outcome of a few decades of the triumph of situationism has been
antinomianism after all.

3.5 Traditional Absolutism


In contrast to those who hold that ethics can be arrived at by human beings
alone--either by logical deduction or by mutual agreement--traditional absolutists
hold that ethics transcend not only human reasoning and society but humanity
itself. In this view, right and wrong are meaningful even without reference to
philosophy or culture. That is, moral ideas do not come from the human mind or
from mutual agreement, but from somewhere else. Philosophers of these schools
are agreed that moral principles are absolutes, but they differ on how it is that such
principles are known or discovered. In this section, several such positions are
examined. The first is based on the idea that every person apparently knows there
is such a thing as right and wrong.

Position 1: Ethics are discovered by an inner sense that is capable of distinguishing right from
wrong.

This widely held view has both great strengths and great weaknesses.
Proponents can claim that defenders of pure reason are likely to arrive at similar
conclusions because each is directed by the same inner moral sense. They can say
the same of utilitarians and situationists, who (they hold) ought normally to decide
that the greatest good or the greatest love are whatever the idealized inner voice
says they are.
Certain of the Eastern mystic and meditative religions have a view of morality
that could in some ways be thought of as falling in this absolutist category--even
though they are not always directly concerned with right and wrong in the same
sense as Western philosophies and theologies. Rather, some of them stress being
true to one's inner self, a self that is in some fashion part of a universal life force or
flow in the universe. The inner being is, in effect, a god--or at least part of a god.
Self-examination in the form of meditation, particularly if the physical body can be
cast aside or ignored, leads to knowledge of deity within. A life of peace with all (for
all share the life force) is assumed to be the consequence of such knowledge
becoming universally experienced.
In one sense, this is an absolutist theory, for it asserts the connection of the
inner voice to a universal "all." In another, it is relativistic, for each individual must
find the inner voice alone, and no specific and reliable absolutes for moral conduct
can be offered by those who have trod the path of enlightenment before, nor is any
guidance offered for recognizing when the true self has been found.
Moreover, it is not actions that are the issue for these mystics but the process
of meditation toward self-actualization itself. If there is a goal, it is a state of
harmony rather than a behaviour. Because of the individualistic emphasis and their
process orientation, theories like these have also recently become popular in the
West, where they are often combined with astrology and spiritism as parts of the so-
called New Age religions. It is too soon to judge whether this latest infusion of
mysticism will have any long-term effects on Western thinking and society, or
whether it will prove to be a passing fad. Note, however, that this theory of goodness
is also directly opposed to the Judeo-Christian one, which holds that God is entirely
external to the created order, and is not in some fashion contained or created within
the physical universe, or actualized only by each person's meditation.
The notion that good can be found through some inner sense--whatever that
sense is called--is held as doctrine by many religions, though they disagree on the
details. It is also held by secular philosophers, who give other explanations for it.
Some of the Greek philosophers were inclined to this view with respect to questions
of ordinary right or wrong, for they regarded these concepts as self-evident--matters
of common (inner) knowledge, and so not the proper focus of philosophy, which
ought rather to be goodness in the sense of the virtuous. These last concepts were
worth putting under the microscope of logic, but everyday morality was obvious to
all, and did not need to be questioned or examined in this way. All people knew
about moral rightness; philosophers had more difficult and more interesting
concerns to subject to the logos.
Likewise, since everyone in society supposedly has this inner voice, a social
contract ethics is also easily arrived at. All would desire the same agreement,
because all have (access to) the same inner knowledge.
The chief point of contention among those who hold this absolutist position
has to do with the reliability of this inner knowledge of good and evil. If every
human being has such an inner sense and the moral laws detected by the sense are
indeed absolute, then everyone should access the same body of knowledge and
produce the same results. However, people do not all act in a way consistent with
there being a single set of moral imperatives. What can one say about this?
There are at least three answers to this objection. To understand the first, it is
necessary to ask once again whether knowledge necessarily results in application. It
is easy to see, for example, that the knowledge of scientific principles does not
imply even the existence of an application, much less an exclusive or universal one.
Two people who know the same theory will not necessarily discover identical
applications.
Likewise, people often act contrary to all good advice, common sense,
etiquette, and even the law of the land. They can and do contradict other voices;
there is no reason to suppose that they could not likewise have an inner voice and
simply refuse to listen to it. That is, Socrates was wrong--knowing the good does not
always imply that a person will do it. Different actions do not mean that the right is
not absolute or that it is not known, merely that a person has chosen not to perform
it. Whenever
theory is put into action, there is an act of the will to make a decision. That a human
being is capable of willing to do good in agreement with conscience implies the
possibility of willing to do otherwise.
It is in an effort to make wrong choices less likely that laws are instituted,
both to codify the consensus and to mandate sanctions against violators. For the
sake of long-term stability, then, law ought to conform to the broad and historical
international consensus (many listeners to the inner voice), with such local
modification as thought necessary to suit local conditions or emergent technology.
Specific issues relating to law will be discussed in Chapter 9; for now, note that a
narrow self-interest, whether by one person or one nation, is unethical according to
the standards of this position (it contradicts hedonism).
Second, though the existence of a moral sense has long been widely believed
to be true, the notion does have its critics. Those inclined toward moral relativism
dismiss the whole idea, saying that no inner sense can exist to detect moral
absolutes, for there are none to detect. That is, they say that if any voice is being
heard, it is merely that of the majority custom of society.
There is a third answer to the difficulty of actions not following knowledge--a
Judeo-Christian one. In this tradition, the inner voice of right and wrong was given
by God in the context of the fall from grace into sin. Adam and Eve ate the fruit of
the tree of the knowledge of good and evil (i.e., of conscience) because they chose
to disobey. However, since through their act the whole human race fell into sin and
out of fellowship with God, conscience cannot be a reliable guide because it is
corrupt. Indeed, no person out of fellowship with God can assume that conscience is
trustworthy. Such a one may not even believe there is a voice of conscience, much
less act upon it.
Nonreligious proponents of the inner sense idea have a more difficult time
with the knowledge/action problem. The best answer may be that bad teaching and
some wrong choices corrupt the inner voice and cause it to be more easily ignored,
but this answer actually weakens their position. A critic might then ask, "How do you
know that the inner voice is not just collected memories of parents teaching the
behaviour they wanted?" One response in return is to observe that people seem to
be able to apply this sense even to situations they did not face as a child.
A weakness of that response is that there are sometimes contradictory claims
of conscience. For example, one person supports nuclear arms as a deterrent
against war and a second opposes such weapons altogether. One person advocates
funding recombinant DNA research and a second considers such work an
abomination. Similar contradictory claims of conscience are made for the use of
animals in research, in vitro fertilization, abortion, surrogate motherhood, artificial
intelligence research, and many other contentious techniques. In each case, two
sides cite the deep conviction of what they say is conscience and cannot understand
how an opposing view can be held. One may try to overcome this problem by
claiming that some of the issues cited here are questions of custom, and however
dearly held, customs are not morals. Such a reply may be partially correct but still
does not explain away all instances of contradictory conscience. Neither is such an
explanation likely to be heard by either side of a dispute whose protagonists hear
only their own inner voice, not that of any others. An assertion that conscience has
become corrupt may help somewhat, but only if there is something besides
conscience by which it in turn can be measured and corrected. Otherwise, there is
at the present time no logical difference between a corrupt conscience and none at
all. Moreover, if there was a point in history before (or after) which conscience
did not exist, there must at such times have been other standards by which good
and evil could be distinguished, if they could be distinguished at all.
It is therefore possible to go at least part way toward meeting all but a last
and most serious objection to the idea of an inner moral sense: its proponents are
unable to prove logically that it even exists. Its secular proponents acknowledge this
weakness when they call this theory intuitionism. Yet it nonetheless has the
authority of both an extensive tradition and some practicality behind it. The inner
voice theory is attractive because it seems to be true in the experience of most
people, despite the difficulty in bringing forward logical arguments to demonstrate
the existence of this sense. Perhaps most people would concede that there is such a
thing as conscience, but also agree that it can neither be proven to be reliable nor
be regarded as the exclusive source of ethics. Summarizing a modified form of this
position:

Everyone has an inner, though possibly flawed, moral sense.

This principle has in its favour the independent belief in it by peoples of widely
differing cultures and times. It has against it that conscience is used to justify widely
varying and even contradictory actions, and these differences can only be explained
in terms of flaws in conscience or by the existence of other absolutes with which
conscience coexists or by which conscience is judged.

Position 2: Rules for human conduct are absolute and at least ideally nonconflicting.

The variations within this general group depend on the extent to which human
conduct is covered by these rules. Staunch legalists may well have a rule for
everything; others will offer far fewer. Some make no claims about the origin of the
rules; others are sure that absolute rules can only come from Divine revelation.
They may believe that religion is the authority for their moral code, and that they
must adhere to the code approved by their god. This group believes that no moral
rule can ever be broken without incurring guilt. In religious legalism, the basic set of
god-revealed moral laws will often be augmented by a much larger codification of
institutional (church) law that is continually being added to, much as are national
laws.
Legalism in all its forms has a great attraction for many people. Neither
philosophy nor conscience is much needed, for the rules are readily available for
consultation. Furthermore, in very religious versions, fear of a god's punishment (or
institutional rejection) for the slightest violation of these codes is a powerful
incentive to obey.
The problem with legalism--and with any other theory that holds that absolute
norms do not conflict--is that people nevertheless sometimes have to choose
between norms. A standard example is that of the spy who is caught and, when
questioned, must either lie or be disloyal. To the classical legalist, this is a choice
between evils, and the person who makes the choice is not absolved from guilt by
the fact that the choice was forced. The resulting guilt is real and must be
confessed, repented of, and (possibly) atoned for.
Despite this problem, and regardless of the religious overtones, legalist
positions have probably been the most popular of traditional absolutist moral
theories in Western civilization and have served as the basis for many extensive
national codes of law. The chief contribution of rule-based absolutism can be
summarized by this statement:

There exist external and absolute rules for moral behaviour.

Also of interest here is that rule-based ethical systems are the most
vulnerable in times of rapid technological change, for in such transitional periods
there are always a large number of novel issues that arise in connection with the
development and use of new technology and that defy analysis by the old
interpretations of rules. Although there will be efforts to introduce new rules (such
as Internet censorship), in the period before the rule makers catch up, a kind of
moral anarchy may prevail with respect to the new techniques. Because this aptly
describes today's situation, the present sociotechnological difficulties serve as an
excellent illustration of the problems with legalism. These problems lead to another
absolutist position.
absolutist position.

Position 3: Principles for human conduct are absolute. They exist in a hierarchy wherein the doing
of a greater good absolves one from a conflicting choice of a lesser good.

Some of those holding this view would also (like the last group) state that the
absolutes in question come only from God. One difference between this position and
the last is immediately obvious. In the case where absolute norms come into conflict
and a person must choose, no guilt is here attached for breaking the lesser of the
norms (provided, of course, that the norm being rejected is indeed the lesser).
Arguments used against Nazi leaders at the Nuremberg war crimes trials after
World War II fall into this category. To the claims of Nazis that they killed Jews
because of a duty to follow orders, the prosecution replied that there was a higher
natural legal order forbidding genocide that made their actions a crime against all
humanity, and therefore punishable even though the defendants had broken no
laws of their own country. The court accepted this argument, enshrining in
international law the notion that there exists a hierarchy of values that can be used
to judge even law itself, and that this is true even if the higher principles have not
been formally codified by any country, much less by them all. That is:

Duty to all humanity is of a higher order than duty to one's country.

There are also overtones here of Plato's concept of an overarching justice that
is above law, behaviour, and opinion.
Consider also the example of the captured spy cited in the last section. The
situation would be interpreted quite differently from this point of view. The
hierarchical moralist says that since the good to be done for a just cause is greater
if the enemy is deceived than if told the truth, there is no guilt attached to breaking
the lesser norm (lying) for the sake of fidelity to the higher (a just cause). Of course,
if the spy is supporting the wrong cause...
Students of the Old Testament might be interested in consulting Joshua 2 for
an example of this type. This is the story of Rahab the prostitute who lied to the
soldiers of her own town of Jericho concerning the spies from Israel, throwing in her
lot with the invaders. Despite betraying her city, she gains a high commendation,
marries a prince of the realm, and becomes an ancestor of King David, and so also
of the promised Messiah. There is more to this than just happening to pick the
winning side; she chose the higher good by aligning herself with the forces of God
and against those of idolatry.
Another very modern-sounding instance of a hierarchy of values in the Bible
concerns the issue of surrogate parenthood--not for the mother but for the father.
Old Testament law forbade a man from having sexual relations with his brother's
wife. However, if an oldest brother should die childless, the next brother was
commanded to father his brother's children for him with the widow, so that the dead
brother's name would be perpetuated. Evidently the issue of family continuance was
sufficiently important to override the usual norm, and to do so even though it might
cost the younger brother his own chance at the inheritance, as he preserved it for
those who would be his older brother's legal children. So important was this
obligation to redeem the name, land, and heritage of the heir that the duty passed
to the nearest relative when no brother was available. When Ruth asked Boaz to
become her husband and kinsman-redeemer, it was to perpetuate the name and
line of Elimelech before establishing his own. In agreeing to this arrangement, the
two also became a part of the line of the Messiah, as David was their direct
descendant as well.
Workers in countless situations must trade off the values of company loyalty
and the pragmatism of profit for professionalism in their work, safety
considerations, and the quality of a product. Politicians must strike a balance
between personal friendships, party loyalties, personal beliefs, and the need to
govern a country. Athletes must choose between the value of winning and that of
playing an honest game. Students may need to trade higher marks and better job
potential for the same honesty in their writing.
Examples can be multiplied--in practice, people do prioritize their values. The
difficult problem is to create an actual hierarchy of principles that incorporates, as
much as possible, the important insights of the other theories but that still remains
absolute.
What can a hierarchical ethicist propose as a suitable ordering of moral
duties? Here is one of many possible outlines; this one is based on the discussions
of this chapter. The first duty, and possibly the second, are Judeo-Christian
contributions. Some would omit both, but moralists with a religious background
might argue that the first two are the only important part and that the rest depend
on them to such an extent that they cannot be neglected. They encapsulate the
idea that moral absolutes are not discovered or voted upon, but revealed by God as
part of His character. The third one includes duties not previously emphasized in the
chapter and does so in order to recognize the social contract and obligations to
people, and also to place humankind in a context of life and even the inanimate
environment.

An Ethical Hierarchy
First Principles:
1. Love of God comes before all else, for only in such love can one gain the
good virtues and the ability to perform right actions, and only by God's revelation
can one discover that there do exist external and absolute rules for moral
behaviour.
2. Love of other people takes priority over love of self. This is an aspect of
revealing the good character of God to others.

The origin of ethical principles:


3. Ethical norms are absolutes that are revealed by God as aspects of His
character.

Resolution of conflicts between norms:


4. There is a duty to people; next there is one to animals, then to other living
things, and finally to the inanimate world. This is an aspect of the stewardship God
gave at creation.
5. Duty to many people supersedes duty to a few people, yet the many have a
duty to protect the few who cannot protect themselves.
6. It is better to be a whole person than an incomplete person. This may be
applied to self or to others.
7. Actions with foreseeable or demonstrable effects weigh more heavily than
those with possible or theoretical effects.
8. People are more important than things, even if they are still in
development, or otherwise incomplete. Duty is owed to people regardless of
whether they are deemed to be completely developed mentally or physically.

New Situations:
9. When it is necessary to derive new ethical norms from the absolute
principles because revelation is insufficient and does not cover new situations or
technologies, one should adopt the following rules:
a. Act according to the inner voice (conscience) of virtuous people (not
necessarily only one's own).
b. As far as possible, do what is the most loving thing, not ignoring
conventional and prudent wisdom for emotion, nor following such customs blindly.
c. Act to maximize the benefit to the largest number of people (this includes
their freedom).
d. Remember that each person has a social contract with other people, the
biosphere, and the earth.

This is not a complete list, of course, because it reflects only the brief
discussions in this chapter, with a few additions. It will do for illustrative purposes
and will help in considering various cases later in the book. It should also be noted
that after point two the ordering is mainly within the points rather than between
them.
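Because this book is also concerned with computing, it may be worth noting that the conflict-resolving behaviour of such a hierarchy can be sketched in a few lines of program code. The sketch below is an illustration only, not part of any ethicist's formal apparatus; the duty names and their ordering are simplified assumptions drawn loosely from the early points of the list, and no claim is made that they capture its real content.

# A minimal sketch (illustrative assumptions only) of hierarchical conflict
# resolution: duties are ranked, and when norms conflict, the highest-ranked
# duty is performed, with no guilt attaching to the lesser norm set aside.

HIERARCHY = [                      # index 0 = highest priority
    "love of God",
    "love of other people",
    "duty to people",
    "duty to animals",
    "duty to other living things",
    "duty to the inanimate world",
]

def rank(duty):
    """Lower index means higher priority."""
    return HIERARCHY.index(duty)

def resolve(conflicting_duties):
    """Return the duty a hierarchical absolutist would perform."""
    return min(conflicting_duties, key=rank)

# Example: preserving a forest (duty to other living things) conflicts
# with feeding a hungry village (duty to people).
print(resolve(["duty to other living things", "duty to people"]))
# prints: duty to people

The sketch shows only that a hierarchy makes conflicts decidable in principle; the hard ethical work lies in constructing and defending the ordering itself, which no program can do.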
Apart from placing love of God and other people first and second (for those
who require those points there), this list does not rank the persons or things to
which duty is owed, only some of the duties themselves. Thus, one may wish to
place duty to family before duty to the next-door neighbour, and duty to one's own
nation before duty to people in other nations. Placing that hierarchy with this one
would add another dimension to the obligations, as well as another set of potential
conflicts to be resolved. Such complexities illustrate that the obligations that bind
people to other people may be hierarchically ordered to some extent but are
actually practised in a multidimensional network, rather than simply in a top-down
fashion. Also, the attempt to express duty to humankind within the context of duty
to God may be useful but in a rapidly changing society may appear for a time to be
inadequate to explain all the details of interpersonal ethical obligation. This problem
is not unique to ethical systems whose cultural heritage is religious; ethical
responses may grow from various original principles, but the specifics of how they
are worked out change as society and its technology do.
It can readily be seen from this discussion that hierarchical absolutism is not
the same as rules-based absolutism. It reflects the complexity of moral choices and
attempts to emphasize character rather than simply ritualistic obedience. That is, it
suggests that the making of moral choices is required from without, learned from
within, and applied as part of a dynamic growing maturity. This permits the
hierarchist to adapt rules to situations rather than making them up on the spot and
to respond with love without allowing feelings to supersede objective morality.
It is also expressed positively. The person who asks: "What is wrong with what
I am doing?" is waiting until it is too late and then asking the wrong question.
Rather, this hierarchy suggests that one should ask: "What is best about my
possible choices for action?" and then have some measure of the mature character
needed to discern or discover the answer. A Christian would do better still by
asking: "What does God want to do, and how can I line up with that?"

120
Elements of this list are also reflected in some of the other nonhierarchist or
even nonabsolutist philosophies, and this serves to illuminate what has been
presented in this section. For instance, the situationists' law of love is incorporated
by the second and last points, and Kant's categorical imperative is closely tied to
the fourth. That ethical norms come from outside individuals or even whole societies
is reflected in point three. Reflections of Plato's concepts of duty within the context
of the state or society are found in point five, and this item together with the last
also includes the notion of a social contract. The idea that a morally educated and
informed person has a natural advantage over one who is not is covered in point six.
It also suggests that actions that cause people to build or retain wholeness of mind
or body are better than those that do otherwise. The calculation of relative goods is
addressed by several points but embodied in a particular fashion in points seven
and nine. Point eight again asserts the primacy of people over things and extends it
even to human life whose full potential for development is not yet realized. Together
with point four, it asserts, for example, that an undeveloped, uneducated, or
otherwise helpless child is of more importance than, say, money.
hierarchical absolutism does not have a rule for every situation and must use every
available tool to derive new rules from the old.
One must not suppose that this list agrees in every point with those that all or
most hierarchical ethicists would provide, nor that it gives a complete statement of,
say, Christian ethics, which, according to Carl Henry (Christian Personal Ethics), is
best interpreted as hierarchical. Such a comprehensive undertaking would fill a far
larger book than this one. However, as indicated, this list does provide a touchstone
to important elements of several ethical systems. Although not everyone will agree
with it in every respect, it is an attempt to order the contributions of the major
ethical theories in a way that incorporates them into a non-legalistic absolutist
position.

Christian Ethics and Legalism

Even among many of its professed supporters, Christian views of ethics have
often been legalistic. However, if the Biblical documents (rather than institutional
traditions) are taken as defining Christianity, then this religion claims both to
explain and to set aside legalism. Those who followed Moses had a direct and
special relationship with God. They were to strive for holiness, not for the sake of
formal legalism, but as a witness to all other peoples and nations of the essential
good character of God. That they bore His Name was significant; being His people
meant being like Him.
New Testament doctrine holds that the Mosaic law was also intended to prove
that God's standard (perfection) was too high for any human to achieve unaided. He
is too holy to approach except in perfect holiness. In other words, achieving
essential goodness through legalism is impossible. On the contrary, argues the New
Testament, legalism can only condemn, because no person can obey a legal code
faultlessly and without guilt. Thus, an entirely different view of access to the
goodness of God is required.
The New Testament goes on to proclaim that Christ took all the punishment
required by the holy God for the guilty upon himself during the torment of his
crucifixion. Thus, those who believe in him and understand that his death was a
personal substitution are set free from their guilt. In addition, believers are
transformed and made fit for presentation to God by having Christ's perfect
righteousness attributed to them. Thus, for those who receive His grace, the
condemnation of an impossible legalism is paid for and, at the same time, Christ's
real goodness is imputed to the believer. That is, goodness is a gift from God rather
than a personal achievement.
Consequently, Christians do good actions not to gain God's approval, which
God has given them without respect to merit, but as acts of gratitude for having
already received his free favour. The result is supposed to be a living out of the
goodness of the inner spirit of God in practical life and actions. This is possible for
the faithful through God's power, despite a natural human inclination to do evil and
despite a corrupted conscience. In this view, such a life is the only achievable
human good, for goodness is a character attribute of God alone, discovered only by
knowing God in a personal way and having God's goodness placed within oneself.
Right actions then follow automatically, for they flow from a good heart, and are not
a striving to gain favour. To put it another way, God gives his goodness to the
believer, and this enables the person in question to do right actions.
In practice, this view of Christianity has only indirectly affected society.
Attempts to codify specific rules for Christian behaviour seem invariably to lead to
institutions that are to some degree legalistic. These organizations (whether
churches or governments), when grown large enough, have exerted most of the
actual religious influence on the culture and laws of the West. Still, Western legal
heritage owes much to the influence of the Judeo-Christian scriptures, and this is
nowhere more evident than in such notions as human rights, which are often
incomplete or missing in places that lack this influence.
This view of Christianity also suggests that although ethics must be practised
in social and institutional contexts, the moral absolutes are expressed
personally and individually as the outgrowth of a character directly impacted by that
of God's Holy Spirit for His purposes--and not as part of ritual obedience to either
the state or a church institution. Indeed, Christ condemns the Pharisees precisely for
the error of turning what should have been a matter of character into a set of
external rules.

Moving On

With the proposed hierarchy in place, it is time to conclude the subject of ethical
theory and turn to more practical matters. From this point forward, theory will not
be of foremost concern, but it will underlie many of the discussions in subsequent
chapters. For the purposes of examining actual issues, the author will take the view
that rules-based absolutism is both stifling and inadequate; that the non-absolutist
positions all inevitably lead one to antinomianism and the destruction of the social
fabric; and that only hierarchical absolutism is able to deal with the actual
complexities of life. The hierarchy given here attempts to borrow and incorporate
points from all the other theories, and will be used (whether implicitly or explicitly)
to judge ethical problems throughout the rest of this book. Readers who come to
different conclusions on specific points should at least be able to analyse their own
reasoning and to know which moral philosophy they have been following to arrive
where they did.

Profile On . . . Issues

Toleration

Introduction: The people who make up a nation may have a variety of ideas
and individual beliefs (religious, moral, political, and others). Since, for instance,
there are many religions and political parties, such beliefs may contradict each
other. In a stable society, there are certain "control beliefs" that characterize the
dominant culture, form the basis of normal government policies and laws, are
transmitted by its media, and generally present its public face. Tightly closed
societies presuppose that all non-control beliefs ought to be suppressed. More open
societies allow a plurality of beliefs some expression, even when these contradict
the control beliefs.

A Definition: Toleration is a practice based on the higher value of freedom. It
is the deliberate choice not to suppress the expression of beliefs or behaviour
differing from or disapproved of by the tolerator.

Is this a moral issue? At the heart of toleration is the belief that other people
are moral agents whose freedom to express that moral agency must be respected,
even when the beliefs they profess are not given credence. Tolerance is designed to
promote freedom, respect for persons, and the education of all who hear or express
moral views. It also recognizes that the consequences of intolerance can be
catastrophic for society, and that it is therefore in everyone's utilitarian self-interest
to practice toleration.
Problem: If the control believers use the word "toleration" to imply the dogma
that all expressions of belief are equally valid (equally likely to be true), then they
will be intolerant of any claim to be right, that is, to know an absolute truth. Such a
view of tolerance sounds very liberal and accepting, but when its own absolute is
challenged by those who claim on any other grounds to know an absolute right or
truth, the narcissism of this kind of tolerance causes it to self-destruct, sometimes
in spectacular ways. In such cases, those who advocate any moral, religious, or
political absolutes may find themselves under severe attack.

Is toleration absolute and unlimited? In theory, tolerance cannot be selective
and still be itself. In practice, it is always exercised over some range of permitted
dissent. For instance, if intolerance is one of the things allowed, and intolerance
proves more persuasive than tolerance, the latter may be obliterated. Although
refusing to permit the expression of intolerance seems self-contradictory, tolerance
must have some limits or it cannot survive, being quickly replaced by some form of
intolerance.
Problem: By virtue of their dominant position, the control believers in a
society are disinclined to tolerate challenges to any of their beliefs. If the control
believers are certain of the rightness of their beliefs, those who question these
moral, religious, or political absolutes will be at least marginalized, if not ghettoized.

Must all beliefs be tolerated? The holding of beliefs is not strictly in the
category of things to which tolerance applies, for there is no way to know what a
person is thinking until those beliefs are communicated. Toleration applies to the
expression of beliefs; it makes no demands on an individual for intellectual
conformance to the control beliefs.

Ought all expressions of belief to be tolerated? Even some of these are not in
the proper category to which toleration applies. For instance, expressions that
defame the character of or incite violence against a person or group violate the
higher value of freedom on which tolerance is based.
Problem: If the control believers are dominant and powerful enough, they may
come to define criticism of any of their beliefs as defamation and incitement, and so
to be a threat that must be eliminated. This is when intellectual ghettoization
becomes first physical segregation and then active persecution.

What are the limits of toleration for non-conforming actions?


1. Acts of violence or those taken in reckless disregard for the life and safety
of others restrict the victims' freedom, and must be regulated.
Problem: A completely passive people is ripe to accept a dictator, or to be
invaded by another nation.
2. A state has an obligation to be intolerant of expressions or actions that
threaten its own existence.
Problem: Fear of subversion or invasion can be used to destroy all freedom in
a state.
3. A state may have to restrict the ability of a group or individual to
accumulate wealth or power, so as to avoid a threat to the well-being or freedom of
others.
Problem: Some enterprises can only be conducted efficiently (or at all) with
large accumulations of capital. Too many restrictions on this result in a lower
standard of living for everyone.
4. Criminal acts are also presumed to be forbidden by higher principles and
are therefore not in the category of the tolerable.
Problem: The greater the freedom, the more scope there is for terrorists and
criminals. The more regulations there are to detect such activities, the less freedom
there is.
5. Acts that endanger a person's own health or safety may place an economic
burden on society. To the extent that this restricts the freedom of others, such acts
may have to be regulated.
Problem: Sufficiently dominant control beliefs may make expressions of
competing beliefs a criminal offence (this is how totalitarian rulers maintain power).

3.6 From Theory to Decision--Practical Morality


The focus of this book is not moral/ethical theory in isolation but rather the
interplay between high technology and the practical ethics of the society. Some
issues of great importance to everyday relationships will not be considered at all in
this text, and some that most people would not normally think about become
central to these discussions because they relate specifically to science and
technology.
Furthermore, it is time to move from theory to practice. It is useful to
examine, understand, and even adapt theories of making ethical statements, but if
these theories are to have more than abstract value, they must be put to practical
use--in this case by examining the high technology society and trying in part to
determine what difference ethical theories make when they are actually applied in
real life by the members of that society.
The relationship between moral philosophy and morality is akin to the one
between theoretical physics and engineering. For instance, it is interesting to know
something of how the structure of various metal alloys gives them certain physical
properties, but it is more useful to society to employ this knowledge to build a safe
and efficient bridge. In addition, mere knowledge of how to build a bridge will not
bring one into being; there must also be an engagement of the will, a decision to
take action, followed by the action itself.
Likewise, it is not enough just to know what is a good action that serves God
or humanity in the best possible way, for one could still choose to do the opposite
out of self-interest. For example, if law does derive from ethical consensus, then it is
at least in the long-term best interests of society to have a consensus that is
generally applied, that is reflected in the laws of nations, and that has been adapted
to the particular needs of the day and age.
Specific ethical and societal problems related to high technology will be
discussed in appropriate chapters. An attempt will be made in each case to provide
a historical context for the situation and to examine it within an ethical framework
as well. In many cases, the need for solutions to problems will be pointed out and
one or more possible directions for change will be given, but these will not be the
only possibilities. Readers will be expected to provide some of their own solutions,
particularly in questions at the end of chapters.

Profile On ... Applying Ethics To Technology

The following widely-circulated statement was adopted by an international
symposium on ethics and technology held in Haifa and Jerusalem in December,
1974.

The Mount Carmel Declaration

1. We recognize the great contributions of technology to the improvement of
the human condition. Yet continued intensification and extension of technology has
unprecedented potentialities for evil as well as good. Technological consequences
are now so ramified and interconnected, so sweeping in unforeseen results, so
grave in the magnitude of the irreversible changes they induce, as to constitute a
threat to the very survival of the species.
2. While actions at the level of community and state are urgently needed,
legitimate local interests must not take precedence over the common interest of all
human beings in justice, happiness, and peace. Responsible control of technology
by social systems and institutions is an urgent global concern, overriding all
conflicts of interest and all divergencies in religion, race or political allegiance.
Ultimately all must benefit from the promise of technology, or all must suffer--even
perish--together.
3. Technological applications and innovations result from human actions. As
such, they demand political, social, economic, ecological, and above all moral
evaluation. No technology is morally "neutral".
4. Human beings, both as individuals and as members or agents of social
institutions, bear the sole responsibility for abuses of technology. Invocation of
supposedly inflexible laws of technological inertia and technological transformation
is an evasion of moral and political responsibility.
5. Creeds and moral philosophies that teach respect for human dignity can, in
spite of all differences, unite in actions to cope with the problems posed by new
technologies. It is an urgent task to work toward new codes for guidance in an age
of pervasive technology.
6. Every technological undertaking must respect basic human rights and
cherish human dignity. We must not gamble with human survival. We must not
degrade people into things used by machines: every technological innovation must
be judged by its contributions to the development of genuinely free and creative
persons.
7. The "developed" and the "developing" nations have different priorities but
an ultimate convergence of shared interests:
For the developed nations: rejection of expansion at all costs and the selfish
satisfaction of ever-multiplying desires--and adoption of policies of principled
restraint--with unstinting assistance to the unfortunate and the underprivileged.
For the developing nations: complementary but appropriately modified
policies of principled restraint, especially in population growth, and a determination
to avoid repeating the excesses and follies of the more "developed" economies.
Absolute priority should be given to the relief of human misery, the
eradication of hunger and disease, the abolition of social injustice and the
achievement of lasting peace.
8. These problems and their implications need to be discussed and
investigated by all educational institutions and all media of communication. They
call for intense and imaginative research enlisting the cooperation of humanists and
social scientists, as well as natural scientists and technologists. Better technology is
needed, but will not suffice to solve the problems caused by intensive uses of
technology. We need guardian disciplines to monitor and assess technological
innovations, with especial attention to their moral implications.
9. Implementation of these purposes will demand improved social institutions
through the active participation of statesmen and their expert advisers, and the
informed understanding and consent of those most directly affected--especially the
young, who have the greatest stake in the future.
10. This agenda calls for sustained work on three distinct but connected
tasks: the development of "guardian disciplines" for watching, modifying,
improving, and restraining the human consequences of technology (a special but
not exclusive responsibility of the scientists and technologists who originate
technological innovations); the confluence of varying moral codes in common
action; and the creation of improved educational and social institutions.
From: Ethics in an Age of Pervasive Technology, Melvin Kranzberg (ed.)

3.7 Summary and Further Discussion

Summary

The study of what constitutes the knowledge of good and right is known as
moral philosophy; the actual application of these abstractions is ethics. There are
three main groupings of the schools of moral philosophers, those who believe that:
1. Ethical laws are deduced by pure reason. This group includes the ancient
Greek philosophers, to the extent that they discussed such things at all, and
Immanuel Kant, whose categorical imperative was claimed to be the final and
necessary conclusion of this reasoning process.
2. Ethical principles are decided upon. The positions within this group vary
from the antinomian (there are no ethical norms) through the consensus view
(including hedonism and utilitarianism) to the position that the law of love judges all
moral actions and finally to situationism and the social contract view. The first of
these is to an extent a denial of the existence of ethics; the last three assert that
goodness is relative to certain calculations about resulting benefits of actions and
that actions do not have this quality themselves.
3. Ethical norms are absolute, transcending both reasoning and decisions. This
includes (but is not limited to) the view that an inner sense--conscience or intuition--
exists that dictates ethical principles. It also includes legalistic absolutism (all-
encompassing absolute norms) and hierarchical absolutism.
This text offered one such hierarchy as a comprehensive synthesis of the
ethical theories examined under all three headings. Readers may well settle on
other hierarchies, possibly subsets of this one, or on one of the other schools of
thought. However, in examining specific issues, whether relating to technology or
not, it is valuable to understand what ethical criteria are being used.

Discussion Questions

1. Under what circumstances is it right (or excusable) to lie? Give some
examples and your reasons, based on your view of moral philosophy.
2. Under what circumstances is it right (or excusable) to break a promise?
Give some examples and your reasons, based on your view of moral philosophy.
3. Normally, part of the duty of a citizen to society is to obey the laws of the
land. When, if ever, is it better to break such laws?
4. In certain very rare medical circumstances associated with the birth of a
child, doctors may be faced with the choice between saving the life of the mother or
that of the child. Which should be saved and why? Would the same answer be given
by all schools of moral philosophy?
5. Is it always right to report to authorities the crime of another person that
you have witnessed? That you have heard about from a third party? Do your
answers change if the criminal is your friend?
6. Suppose the law requires you to report certain types of activities as crimes,
but you do not believe they are wrong. Do you have to report or not, and why? Does
it make any difference whether your own views of the matter are based on moral
philosophy? on political convictions? on religious beliefs?
7. Under what circumstances is it morally right or morally wrong to practice
birth control? What difference does it make if the law of the land requires (forbids)
this? Now repeat the question for abortion, for infanticide, for euthanasia.
8. The police have just revealed to you that your closest and dearest friend is
under investigation for tax fraud. He is about to be arrested and, if convicted, faces
a lengthy prison term. (a) You have been asked not to tell him because it is feared
he may flee the country. What should you do? (b) As soon as you are told this, you
realize that you have in your possession conclusive evidence that would convict
your friend, information that the authorities could not possibly know about. What do
you do? Does it make any difference to your answer if the friend is also your boss?
your spouse? your child? the mayor? the pastor of your church?
9. You are the prime minister of a country at war, and your secret service has
a spy at work infiltrating the enemy high command in an effort to discover its plans
for a major offensive three months away. She has just reported, however, on less
important plans for an attack that will be made in another place tomorrow. If you
use the information, your country's forces will win tomorrow's battle, but your spy's
activities will be unmasked and she will have to flee, abandoning the long-term
plan. If you do not use it, many more soldiers will die the next day than necessary,
but the spy will be able to continue in the hope of gaining a greater victory later.
What does a utilitarian do and why? What does a traditional absolutist do and why?
Can you give a hierarchical absolutist answer?
10. You see a young child drowning in a river. Being both an expert
mathematician and a good swimmer you instantly calculate a 40-percent probability
that you can save the child and a separate 70-percent probability that you can save
yourself once you do jump in. What should you do if you are: (a) a utilitarian? (b) a
situationist? (c) a traditional absolutist? (d) a hierarchical absolutist?
11. What effect does it have if the two probabilities in question 10 are
reversed?
12. You are starving and have no money. You see a passer-by drop a wallet,
and you pick it up. It contains over a thousand dollars in cash, some of which, you
are convinced, this richly dressed person could easily spare. What do you do? Does
it make any difference if the lost article was food instead of money? Does it make
any difference if you are the mother of two young children who are closer to death
from hunger than you are? Does it make any difference if you know that the owner
of the wallet is a notoriously tightfisted individual whom you are certain would never
reward your honesty? Does it make any difference if, in addition, he was once
responsible for cheating you out of your home, property and money and thus is the
cause of your destitution in the first place? What if the person is a known criminal
and you are certain that the money is profit from selling drugs?
13. Your country, which has very little money, has a severe famine due to the
failure of an irrigation system. The government has a choice between spending all available
funds to buy food, in which case it is estimated that the lives of 500,000 people will
be prolonged for a year, after which time, there will be no funds left to prevent
millions of projected deaths. Alternately, the available funds could be used to
rebuild the irrigation system, allowing the 500,000 deaths in the short term but
preventing the larger famine. Assuming that this is all the available information,
what is the best course of action? Does it matter what ethical school you belong to?
14. What difference does it make if the famine is in one country and the
money in a second? Which country should make the decision? What if the second
country has only this money; it is earmarked to update its own irrigation system, for
it is estimated that in the next year there is a 25-percent probability that this too
will break, and the second country would then also experience famine on a similar
scale to the first country? What difference does it make if a decision not to help
means a 50-percent probability of a war, in which hundreds of thousands would
surely die?
15. A man you know has been beating his wife. She kills him, not realizing that
you have witnessed the crime, but otherwise successfully conceals her deed. Do you
turn her in? Does it make any difference if she is your close friend? Does it make
any difference that you have just realized you are in love with her and want to
marry her? Does it make any difference if she has two young children and no
relatives who could care for them? What effect does your school of ethical thought
have on your answer?
16. You have evidence that a certain individual is a child molester. However,
for the case of which you are aware, you have been sworn to secrecy and asked by
the victims' parents not to involve the police. Now another child has brought
charges against the same offender. Should you come forward with your information,
despite your promise? Does it make any difference who the offender is, or what kind
of work he does? Suppose the new complainant has confided to you that the
charges now being advanced are false and being made for the purpose of revenge,
with no knowledge of any other offences? Do you betray the lie, or the truth?
17. How do the various ethical schools handle the issue of tolerance? For
example, can an ethical relativist tolerate a traditional absolutist? a hierarchical
absolutist? What about the reverse?
18. Consider the Mount Carmel declaration. What ethical theory does it
appear to be based on? Is there any evidence of a foundation for its statements;
that is, are they based on higher principles, or do they stand alone? How does it
envision that its "goods" be enforced as "shoulds"?
19. Rewrite the Mount Carmel declaration in the form of a hierarchy. What
additions or deletions do you propose, and why?
20. Attack or defend the statement made in section 3.5 that the "inner voice"
theory contradicts hedonism. Extend the discussion to the relationship between
conscience and utilitarianism.
21. It is well known that tobacco causes a myriad of illnesses, many of which
are very expensive to treat and a burden on society. Should the use of this product
be tolerated, regulated, or forbidden? Why?
22. Repeat the analysis of the last question for (a) heroin (b) alcohol. What are
the essential differences among these three?
23. (Research question--use the Internet or a library) Find a code of ethics that
has been adopted by some recognized group or profession and analyse it. What are
the presuppositions behind the code? What is its purpose? In what ethical school are
its framers? What are the specific things required? forbidden? How would you
rewrite the document?
24. To what extent ought freedom of religion be an absolute value? Consider
cases where the practice of religion conflicts with the law. What if the religion
demands that its adherents either convert others or destroy them?

Bibliography

Carman, John and Juergensmeyer, Mark (eds.) Bibliography of Comparative
Religious Ethics. Cambridge: Cambridge University Press, 1991.
Frankena, William K. Perspectives on Morality. South Bend, IN: University of
Notre Dame Press, 1976.
Geisler, Norman L. Ethics: Alternatives and Issues. Grand Rapids, MI:
Zondervan, 1971.
Hancock, Roger N. Twentieth Century Ethics. New York: Columbia University
Press, 1974.
Henry, Carl F. H. Christian Personal Ethics. Grand Rapids, MI: Eerdmans, 1957.
Horton, John & Mendus, Susan (eds.) Aspects of Toleration. London: Methuen,
1985.
Hudson, W. D. Modern Moral Philosophy. 2nd Ed. New York: St. Martins Press,
1983.
Kranzberg, Melvin (ed.) Ethics in an Age of Pervasive Technology. Boulder,
Colorado: Westview Press, 1980.
Logstrup, Knud E. The Ethical Demand. Philadelphia: Fortress Press, 1971.
Montgomery, John Warwick. Human Rights and Human Dignity. Grand Rapids:
Zondervan, 1986.
Nielsen, Kai. Ethics Without God. London: Pemberton Books, 1973.
Ramsey, Paul. Deeds and Rules in Christian Ethics. New York: Charles
Scribner, 1967.
Taylor, Richard. Good and Evil--A New Direction. Buffalo, NY: Prometheus
Books, 1984.
Tavani, Herman (ed.) The Tavani Bibliography of Computing, Ethics, and
Social Responsibility. Palo Alto, CA: Computer Professionals for Social Responsibility,
January 15, 1997.
<http://www.siu.edu/departments/coba/mgmt/iswnet/isethics/biblio/index.html>
Theory and Practice. Englewood Cliffs, NJ: Prentice-Hall, 1985.

Web resources on Ethics:


The Ethics Centre for Engineering and Science
http://www.cwru.edu/affil/wwwethics/index.html

The Institute for Business & Professional Ethics
http://condor.depaul.edu/ethics/about.html

Bioethics Online Service http://www.mcw.edu/bioethics/

Chapter 4
The Information Revolution

Seminar - "Knowing Everything - Man in the Image of ...?"
4.1 What is Information?
4.2 The Information Technologies
4.3 The Availability of Information
4.4 Toward The Metalibrary
4.5 The Accuracy and Security of Information
4.6 Information Analysis and Decision Making
4.7 Summary and Further Discussion

4.1 What is Information?


A collection of raw data, however large, and gathered from whatever source,
is not necessarily information. The term "information" implies that the things it
describes have some measure of meaning, significance or relevance. Meaning in
turn requires two things:

1. intentional and intelligent organization, and
2. capacity for being communicated.

That is, it can be said that something is information only if it has a deliberately
attached meaning that can be communicated and interpreted in such a manner as
to preserve the meaning for another person.
Thus, information is a product of purposeful organization and design, not
simply an assemblage of facts. The activity required to generate and communicate
information therefore requires intelligence. In addition, information enables change
in the people who have it. It empowers them to do things they could not otherwise,
giving them more choices and in turn affecting their available techniques.
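This definition can be made concrete with a short program sketch. The lines below are an illustration only; the greenhouse readings, the units, and the threshold for an anomaly are all invented for the example:

# A minimal sketch of the distinction drawn above: raw data becomes
# information only when it is deliberately organized and given a meaning
# that can be communicated to another person. All values are invented.

raw_data = [18.2, 19.1, 35.7, 18.9, 19.4]   # bare numbers; no meaning attached

def to_information(readings, units="degrees Celsius"):
    """Organize the data and attach a communicable interpretation."""
    average = sum(readings) / len(readings)
    outliers = [r for r in readings if abs(r - average) > 5]
    return (f"Average greenhouse temperature: {average:.1f} {units}; "
            f"{len(outliers)} reading(s) far from the norm--check the sensor.")

print(to_information(raw_data))
# prints: Average greenhouse temperature: 22.3 degrees Celsius;
#         1 reading(s) far from the norm--check the sensor.

Only the final sentence qualifies as information in the sense used here: it has been organized with intent, it means something, and the meaning survives being passed to another person.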
Some of what passes for information may not be in accord with the facts--
either because the underlying raw data is incorrect, or because it has not been
interpreted or communicated in accord with proper professional practice. That is,
questions of right and wrong need to be asked with respect to information as well as
with respect to beliefs and actions. For at least some such cases, the answer is
likely to be absolute; for others, there may be legitimate differences of opinion on
interpretative matters.
Some information is also more useful, and therefore more desirable to know.
This fact affects people's motivations--both in learning new things, and in applying
what they do know. In the style of the last chapter, one could ask "if some things
are good to know, to what extent should they be known?"
All this tells us that information does not exist in a vacuum or stand complete
on its own. Rather, human beings interact with it in a complex feedback pattern that
changes both them and the information continually.

The larger the data pool is, the more complex the processing activity must be
to produce useful information. This in turn requires a certain sophistication for the
society in which the information generation takes place. Thus, the higher (more
abstract) the level of the culture, the greater the demand for reliable information.
This provides a feedback mechanism that forces the techniques for the processing
of data into information to become ever more sophisticated as the quantity grows.
Some information theorists would present this transformation of data into
information as a purely mechanical process, suggesting that the entire task of
processing data into information may be relegated to a machine. This position
would not be very palatable to a Christian, or to anyone who believes that the
human mind is more than a machine, and that it does something unique when its
owner engages in assigning and communicating meaning.
The explosive growth of data sources and the demand for information also
require that filters be created to sift out the useful from the less so. Someone who
has been shot cannot afford to stop to wonder about the bullet's manufacture and
trajectory before arranging to have it pulled out. It is questionable whether
arguments over who wrote Shakespeare's plays or authored the book of Isaiah
contribute anything to knowledge. A farmer does not need much military technique.
An engineer does not need to know how to raise pigs. Thus, specialities are
developed--no one can know everything, even in an agricultural society, and much
less so in an industrial one. However, in the information society, anyone can find
out anything.
It is important to note that economic activity does not march automatically
and inevitably from hunter-gatherer to agricultural to industrial to information-based
society. A subtle interplay of available technique, culture, and political decisions
determines what happens next. Each civilization must build a sufficient base of
information and other infrastructure at each stage to progress to the next. It was
accumulated knowledge of plants and animals that permitted agriculture.
Agricultural production grew larger and more sophisticated in the industrial age, but
employed fewer people. Likewise, the information based society requires a highly
sophisticated and complex industrial base, and the production of goods must surely
continue to increase, even though the proportion of workers directly involved in that
production will diminish. The production of ever larger quantities of agricultural and
industrial goods requires better knowledge and more complete demographic
information all the time. Once the two older sectors become efficient enough,
managing the information required to maintain them becomes the most visible
occupation, even though the information sector only indirectly generates material
wealth.
It is at least in part for this reason that the efforts of missionaries and other
aid workers in underdeveloped nations are sometimes counterproductive. For
instance, there is little point in attempting to introduce electric ovens into a society
that lacks a reliable supply of electricity, or tractors into one that lacks a supply of
gasoline. A simple, easily repaired mechanical pump might turn out to be the single
most appropriate technology needed to lift a remote African village out of poverty.
Teaching its people to read and write may be the most important contribution
available from information techniques. Just as important in all such cases is the lack
of infrastructure to repair and maintain even middle technology products and
processes.
At the same time, every society has to solve information storage problems. When
oral traditions became inadequate to preserve information, it was written down on
slates or scrolls. Eventually, the sheets were piled up and bound into books to save
space. When it became necessary to produce many copies of such books, the
printing press was required. This all had to be put somewhere, for shoe boxes and
file cabinets can serve for mass data storage only up to a certain point. Thus,
libraries have been particularly important in the preservation and transmission of
both data and information. They are the repositories of what a civilization has found
out about the world, what it believes about the world, and what it has done to
change the world (and itself). There is now so much knowledge in the collective
human archives that it is no longer practical to store it all on paper, and electronic
means have become necessary. Today, the media size needed to store data and
information continues to shrink, and the speed at which data can be machine
processed is increasing.
It is necessary to do more than just store information, however. To be of much
use, it has to be interpreted (given meaning) and disseminated to other people,
including the next generation. Thus, as the amount of information grows with the
size and complexity of society, greater demands are also placed on the means of
communicating that information. This principle is of particular importance to any
cultural or religious group, such as Christians, whose continued existence depends
utterly upon transmitting its essential ideas and practices in toto to the next
generation, for failure in this task implies extinction of the group.

4.2 The Information Technologies


The speed and reliability with which goods, services, and ideas can be
transferred from one person to another has always been critical to human
civilization. Early on, information transfer was completely dependent on the
available physical transportation. Orders, government data, and intellectual
properties could only be conveyed to distant places by personally carrying them
there, and the effective size of any nation was limited by this fact. The printing
press made certain kinds of information more readily available (to those who could
read, so many learned) and ensured that knowledge would not so easily be lost
from one generation to the next. However, printed information still suffered from
the restrictions imposed by the limited means of transportation and communication.
The invention of the telegraph and telephone altered this situation profoundly,
for now information transfer could be effected anywhere that the wires could be
strung. The then-emerging countries such as Canada, the United States, and
Australia benefited the most from such developments, for they were able to weld
together enormous territories into single political entities because efficient
transportation and communication existed among the parts. First the railways and then the
copper wires tied these nations together, and without such technologies it is likely
that today they would be (as Europe has been) many small countries divided by
language and culture and unable to communicate effectively.
Increasing use of telephone services forced carriers to automate to prevent
the system from being overwhelmed by the number of operators required for manual
equipment. Thus came dial phones, automatic switching, and computerized routing.
Likewise, inter-city telephone cables gave way to radio, broadband transmission,
satellite routing, and fibre optics to keep pace with the ever-increasing traffic. Both
radio and image transmission techniques have merged with the telephone to
produce cellular phones and a practical facsimile system (fax technology is over a
century old, but was little used by most people until speed and quality improved).
More innovations are necessary as the quantity of data transmitted over these
circuits continues to grow.
Meanwhile, the entertainment media have also shrunk the effective size of the
world, as programs created in one place can be seen around the globe in a matter
of minutes. Thus, while the time it takes to physically transport an object to any part
of the world has been reduced to hours by modern jets, data can be transmitted
instantaneously. It is safe to predict that the efficiency of information transmission
will continue to grow for some time yet, and that there may also be further
improvements in the speed of air travel.
Faster, cheaper, more powerful computers can store and manipulate more
data. They are even more widely interconnected than they once were, and it is now
possible to send and receive information through this network at any time to any
location on the planet on the so-called "information highway".
In the near future, the transportation/communication network will expand
much more than it already has into near space. The entertainment industry will
depend more heavily on satellite transmission, to the point where the viability of
local television stations may become endangered. A small (30 cm or less) dish
antenna will be able to pick up hundreds of national and international channels from
orbit, and local broadcasts may well become redundant. This trend toward a global
information store may be offset in part by an increased interest in the local
community, so that there may well be new low-power local stations established as
well.
Except for physical transportation, the spread of information is now limited
only by the speed of light. At least inside the orbit of the moon, this is effectively no
limitation at all, though in the more distant future it may be. If other parts of the
solar system are colonized, there would be no instant communication possible, since
the time taken by light to travel to other planets is appreciable. Data transfer will
still work, as will any other communication that can tolerate waits of several
minutes or more. Though it is premature to speculate upon the next stage of
societal development, it seems clear that one of the current stage's most important
characteristics is the instant and unlimited availability of data and information.
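
To put rough numbers on these delays, here is a minimal sketch in Python (the
distances are rounded averages, chosen only for illustration):

    # One-way light-travel delays for some representative distances.
    # The distances are approximate averages, used only for illustration.
    SPEED_OF_LIGHT_KM_S = 299_792  # kilometres per second

    distances_km = {
        "Earth to Moon": 384_400,
        "Earth to Mars (closest approach)": 54_600_000,
        "Earth to Mars (farthest)": 401_000_000,
    }

    for route, km in distances_km.items():
        seconds = km / SPEED_OF_LIGHT_KM_S
        print(f"{route}: {seconds:.0f} s ({seconds / 60:.1f} min)")

A signal to the Moon takes little more than a second each way, so conversation
remains possible; the delay to Mars ranges from about three minutes to over
twenty, so only store-and-forward communication would serve there.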
As will be seen in subsequent discussions, this availability in itself raises many
important issues for those with an eye to the ethics of the situation and an interest
in the quality of life in society.

Profile On . . . Data and the Law

Several countries have passed laws regulating the security and privacy of
data. Some have also attempted to regulate transborder data flow. Here is a
selection:

Australia:
the Freedom of Information Act (public sector only); the Privacy Act (private
sector)

Canada:
the federal Access to Information Act and the federal Privacy Act (public
sector only)

France:
the Act on Data Processing, Data Files, and Individual Liberties (public and
private sectors, transborder data flows)

Israel:
the Privacy Act (public and private sectors, transborder data flows)

United Kingdom:
the Data Protection Act (public and private sectors; transborder data flows
can be prohibited)

United States:
o Privacy Act 1974 (information practices, notification procedures for
government agencies)
o Fair Credit Reporting Act 1970 (private sector credit, insurance and
employment information)
o Fair Credit Billing Act 1974 (privacy in granting credit)
o Freedom of Information Act
o Family Educational Rights and Privacy Act 1974 (information practices of
federally funded educational institutions)
o Right to Financial Privacy Act 1978 (limiting government access to financial
information to law-enforcement agencies)
o Privacy Protection Act 1980 (limiting government seizures of material
intended for public communication)
o Cable Communications Policy Act 1984 (privacy of cable television
subscribers)

West Germany:
the Federal Data Protection Act (federal public sector)

Source: The International Handbook on Computer Crime

4.3 The Availability of Information


Information services now in place may yield the most accurate view of the
future, for these have already come some distance toward the goal of unlimited
availability of information. It is not possible at the present time to count the number
of facts on file in publicly available data bases. Bibliographies and information files
for law, Bible study, medicine, the stock market, business, education, biography,
history, computer science, government activities, chemical and physical data, and
many others are readily available to anyone with a telephone and computer.
There are gateway services and indices that operate as data bases of data
bases. These include public utilities such as the various World Wide Web search
engines, which allow the seeker of information access to data repositories by subject
or title. Such services are utilities to extract meaningful information from the
splendid chaos that is the Internet, and they finance themselves either with user
fees or by displaying banner advertising. An initial query may produce thousands of
computers and/or data bases located in various parts of the world. The user then
narrows down the search to specifics, using the facilities of the particular
information provider, software package, or search engine until the required material
has been assembled.
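
The narrowing process itself is simple to illustrate. Here is a toy sketch in
Python (the miniature catalogue and the matching rule are invented for the
example; real search engines use far more sophisticated indexing and ranking):

    # Toy illustration of narrowing a search by adding terms.
    catalogue = [
        "stock market quotations archive",
        "medieval history bibliography",
        "medical diagnosis expert system",
        "stock market analysis software",
        "bible study concordance",
    ]

    def search(terms):
        # Keep only the entries that contain every query term.
        return [entry for entry in catalogue
                if all(term in entry for term in terms)]

    print(search(["stock"]))               # broad query: two matches
    print(search(["stock", "analysis"]))   # narrowed query: one match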
It takes little imagination to project major extensions to today's useful but
fragmentary facilities and to realize that--with very few additions to current
technology--citizens of the fourth civilization should be able to obtain any recorded
information on any subject in which they are interested. Most of the technology for
this is already in use, and changes that will come will not be revolutionary, as far as
the hardware is concerned. Rather, people and their way of working will change.
Indeed, heavy use of facilities like CDs, DVDs, and the World Wide Web has already
altered some traditional professions, as their practitioners have come to depend
upon the easy availability of technical data.

Information Services and the Professions

Doctors, dentists, lawyers, engineers, and accountants not only have local
client data bases in which to file histories, treatments, recall dates, project designs,
and billings, but they also have access to local or remote expert systems on which
they can diagnose, determine treatments, look up case law, find parallel situations,
and so on. These tools allow a single professional to handle a much larger number
of clients more efficiently and more accurately than previously. Such means also
reduce dramatically the number of facts that the professional must learn and retain
in personal memory in order to work competently. This not only affects how they do
their jobs but also radically changes the education required to become professionals
in the first place. Since many other segments of the marketplace are simultaneously
moving away from an employer-employee model to professional-client relationships,
these changes have the potential to alter the very nature of work for most people.
Similar advantages are also available, for instance, to real estate agents,
except that their files are less customer-oriented and more product-oriented,
because the available listings in a given geographical area change on a daily basis,
whereas customers tend not to repeat very often. Here, video technology is being
combined with computer searching so that a picture file of each listing can be made
available in any real estate office. Thus, a potential buyer need not go to the site to
find out what a house looks like on the outside or inside. The video technology can
also be combined with the telephone and cablevision to allow potential customers
the luxury of seeing videotapes of houses for sale on their home televisions.
Researchers and translators of the Bible and other specialized literature have
all the reference materials, manuscripts, parallel writings, commentary, and
language aids ever produced readily available. They are therefore able to produce
translations into new languages in a fraction of the time it previously took.
Other organizations already maintaining specialized databases include
government (taxation, geologic, geographic, demographic, and other statistical
information), law-enforcement agencies (arrests, fingerprints, DNA records, and
stolen property records), wholesalers and retailers (market trends, inventory,
accounts and customers), credit card issuers, libraries (loans, books on hand, and in
print), newspapers (articles by subject), and stock market and brokerage houses (prices,
press releases, and transactions).
There is also a growing number of private entrepreneurs who perform
contract information searches and data digests for their clients using public and
private data sources. These may be many of the same people who help businesses
to set up Web pages to get their own message out to the world in the most
compelling form possible.
Scientists and engineers can look up the physical properties of substances or
locate journal articles or books on specific topics. Researchers in all fields can do
periodic searches and keep up to date on the most current work in their field. These
facilities are becoming more standardized and organized as they continue to grow.
They will gradually become major information utilities in their own right, and the
professionals who rely on them now will find their dependence growing to the point
where they cannot work at all without them.
Meanwhile, commerce is alive and thriving on the Internet, and billions of
dollars' worth of business is handled on-line each year via electronic storefronts and
secure ordering systems. It is reasonable to expect this use will continue to grow
rapidly for the foreseeable future.

Sex and the Internet

On a less enlightening note, the purveyors of explicitly violent and/or sexual
materials have also used electronic distribution to further their own ends. There are
three possible responses to this particular development:

1. Some would introduce censorship of the electronic media and remove
sexually explicit materials altogether.
2. Others make and sell "filtering" programs that parents can use to prevent
access to the better known of such collections (a brief sketch of such a filter
follows this list).
3. Still others note that censorship implies censors; that is, someone must
decide what ideas are allowed expression and what are not. There is potential for
"good" ideas to suffer more than "bad" ones if that happened, so they prefer to
allow all ideas to compete openly in a free marketplace.
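
As an illustration of the second response, the core of such a "filtering"
program can be sketched in a few lines of Python; the blocked-site and
blocked-word lists here are invented examples, and real products rely on much
larger, professionally maintained lists:

    # Minimal sketch of a blocklist filter of the kind parental-control
    # software uses. The lists below are invented examples.
    BLOCKED_SITES = {"badsite.example.com"}
    BLOCKED_WORDS = {"explicit", "violence"}

    def allow(url, page_text):
        host = url.split("/")[2]  # e.g. http://host/path -> host
        if host in BLOCKED_SITES:
            return False
        return not any(word in page_text.lower() for word in BLOCKED_WORDS)

    print(allow("http://badsite.example.com/page", "hello"))          # False
    print(allow("http://news.example.org/story", "council meeting"))  # True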

Unfortunately, the latter (more freewheeling) view may not adequately
protect groups that find themselves the target of abuse in such materials, such as
women, children, and minorities. The mere existence of depictions of abuse lends
credence to its actual perpetration. That is, fictitious or potential abuse portrayed in
violent or sexual materials appears to promote real abuse of real people. Indeed, in
many cases actual abuse must be inflicted to make pictures of it in the first place. In
the same manner, false denials of the holocaust inflict great psychological pain on
the survivors of the death camps and their families. Indeed the law already
recognizes that a speaker is responsible for what is spoken--if it is destructive of
reputation, damages for libel may be granted. If it has only the potential to damage,
the law is less clear.
Moreover, the third view is based on the liberal hope that good ideas and their
use will overwhelm bad ones. This in turn implies that those expressing such a hope
have the ability to know which are which. Judging from actual human history, not
only are such hopes vain, but that ability appears to be wanting. At some point and in some
manner, it seems necessary to decide to what extent the good of freedom of
speech/information must be set aside for the good of preventing threats to people's
lives and health. In other words, there is a balance between the right to freedom of
speech and the right to enjoy safety under peace, order, and good government.
In like manner, there is a debate over whether the depiction or promotion of
homosexual acts is sinful or immoral on the one hand, or desirable for the freedom
of speech and the liberation of an oppressed minority who cannot help being what
they are on the other. Although the antagonists on both sides often seem to try to
shout each other down even on the Internet, such issues can be reduced to a choice
between perceived goods of freedom of religion and "minority group rights." In this
particular case, the issue can be further reduced to the choice of whose view of
human sexuality and its morality will be the dominant or control view for the society
of the future.
Perhaps the most interesting aspect of such issues for the purposes of this
discussion is that much of the interaction is taking place on the Internet itself, as
people from all over the world have, for the first time, a vast town hall in which to
participate in any debate they wish.

Information Services and Daily Life

It is interesting to observe that, while many specific professions have already
been changed by the information revolution, the effect on the broader society had
not by the late 1990s been particularly profound. In daily life, only a minority took
advantage of information services. Broad subscription gateway utilities were
offered, but they were not very popular, and some were closed or scaled down for
lack of interest. Only the Internet and its subset, the World Wide Web, seemed by
the end of the decade to have potential longevity.
To a point, people of the latter part of the industrial age remained willing to
continue relying on the existing mass media information filters rather than seeking
and filtering information personally. Thus, even in the first part of the information age,
universal access to information was conceded to employees of various news media,
and they in turn published or broadcast what they deemed to be in the best
commercial interests of their employers. Terrorist organizations make good use of
this when they stage events for media coverage to gain recognition that their small
numbers could not by their own efforts ever achieve. So do conventional politicians
when they time their news releases to hit or miss major telecasts or other media
deadlines, depending on the amount of publicity they want. For their part, those
who have controlled the news media have perceived little mandate to extend
access to the information from which their articles and editorials are constructed.
Consequently, consumers of the news media gradually lost the ability to distinguish
between factual news (data) and editorial interpretations of that data (assigned
meaning). This is an example of an abstraction (removal from detail) that is
potentially detrimental, for giving up to others the decisions on assigning meaning
to data threatens a person's ability to function as an informed citizen. Indeed, the
chief potential for tyranny in an information-based society lies in confining the
ability to provide data interpretations to a small number of people, who by virtue of
their positions are able to exercise some control over what all other citizens are able
to think about.
One reason for the initial lack of interest in broader access to news and other
information was that the benefits had not yet exceeded the costs. These services
are used effectively and efficiently by those whose work demands such use, or who
benefit enough from it to be willing to expend the time and effort to learn the
idiosyncrasies of the various information services. They are also
willing to pay the fairly high monetary cost because the information they get has a
substantial direct effect on their earnings.
Most people have few such motivations and rather than searching for
information at all, they become enamoured of recreational possibilities on the Web,
such as games and pornography. They only begin to use information services per se
(or search the Web) under three conditions:

1. The cost drops to the point that they think nothing more of it than they do
of paying for electricity, telephone, cablevision, or the newspaper.
2. Access to these systems is so simple that even the worst mechanophobes
will use them routinely without thinking of them as complicated or unusual in any
way (This requires a high level of abstraction; they become appliances rather than
complex machines).
3. There are direct, obvious, and immediate benefits to everyday activities--
they save time and money.

These three conditions are all developing together, for they are related.
Improvements in ease of use (toward satisfying the second condition) and in the
services offered (toward satisfying the third) automatically help to satisfy the first
as well, for widespread use greatly reduces the cost. For many in the general
population, the mid-1990s evidently saw an essential watershed passed with the
proliferation of World Wide Web sites, for in a span of two years the Internet
became a part of millions of homes.
This illustrates that there is a critical mass for wide-effect technologies that,
when reached, causes dramatic improvements in all three conditions (cost, ease of
access, and perceived benefits) and then begins to alter the society whose needs
gave rise to the technology in the first place. For example, the automobile went
from being a bicycle-and-carriage-shop sideline to a toy for the rich. Later, it was
raced and also used for taxis. However, it had no broad impact on society until
increased demand led to mass production, and that in turn lowered prices and
increased sales. In two generations Henry Ford's assembly line put a car in every
garage and transformed North American society, but no one could have anticipated
the critical conjunction of the necessary attitudes and technologies that made it all
possible. Similar comments could be made about the impact of the telephone, the
television, and, to an extent, the airplane.
Moreover, there is a critical point in the use of a technology beyond which it
becomes something other than what it was at first, and this is also true of
information utilities and the Internet. As they stood at the close of the millennium,
such facilities largely referenced and duplicated what already existed on paper in
various libraries. They enabled faster, broader, and more convenient access to such
information, but this did not in itself constitute a breakthrough to a new order. What
will constitute a breakthrough is a facility so extensive, powerful, and cheap that it
will replace older technologies altogether and simultaneously open up whole new
ways of dealing with information, opinions, and knowledge. The building of such an
information appliance will require not simply new types of machines, but new ways
of thinking about their use--new information paradigms. Seen in this broader
context, the Internet (including the World Wide Web) is a primitive first step along
the road to something much more profoundly significant.

Profile On . . . Issues

Information and Third World Nations


The Gap
The Third World is so called because of the large economic gap between the
industrialized nations of North America, Europe, Australia and New Zealand on the
one hand, and the heavily populated pre-industrial countries of Africa, South
America, and Asia on the other.

Will the gap narrow or widen?
Optimistic observers believe many nations will make the leap from an
agricultural economy directly to the information age, without an industrial phase.
However, information-based economies require sophisticated and efficient industrial
bases. If rich nations become richer still and the poorer do not quickly catch up, the
widening gap will threaten world peace.

Where does the third world get its information?


In 1980, 95.5% of printed matter, 95.1% of printed books, 85.6% of TV sets,
68.1% of radios, and 97.9% of data processing equipment exports originated in
technologically more advanced countries. Small shifts in these percentages since
then have been due more to Western countries moving manufacturing capacity to
low-wage regions than to any transfer of technology or decision making. Television
programs are still overwhelmingly American and European in origin. Four agencies--
Associated Press, United Press International, Reuters, and Agence France-Presse--
dominate news reporting, so most of it originates in London, Paris, and New York.

A new imperialism?
One-way flow of information and its associated technologies carries along the
culture, values, entertainment preferences, and commercial tastes of the exporters'
life styles. Poorer countries cannot afford to generate their own television programs
or news, so they buy both from those who can, increasing their economic and
cultural dependence. Printing, production, advertising, and packaging are cheaper
in English than in any other language (economy of scale), creating additional
disadvantages for non-English speaking countries. On the other hand, universal
availability of the same information to everyone inevitably has a homogenizing
effect on culture. Is it worth keeping the old culture and feeling independent and
self-reliant at the cost of giving up benefits that other nations have?

Is third world censorship justified?


Faced with challenges to culture or religion, some nations heavily censor
imported cultural materials. This may reduce the perceived threat, but at the cost of
making the imports more costly and less useful, and of partially isolating the
country from the rest of the world.

Will satellite communications allow poor nations to catch up?


Because of less efficient use of the transponder circuits, satellite facilities can cost a
poorer country twice as much as a western nation would pay for the same capacity.
Moreover, there are a limited number of satellite positions available in
geosynchronous orbit, and many have been taken by the richer nations, so even if
the poorer nations do develop the necessary technology, they may have to continue
to rent space from the first nations to claim it.

Does Information Technology have the same effect everywhere?


In a rich democracy, individuals can obtain information technology for
enhancing life style or improving their economic position. In poorer countries, often
with no previous democratic traditions, the government is more likely to draw such
technology into its own hands and control its use. It may therefore have more
potential for oppression than for democratization in such nations over the short run.

Ought poor nations to pay for information?


If information is just a commodity, poorer nations must buy it from the ones
that generate it, increasing their dependence. If all knowledge collected by the
human race is its common heritage, perhaps it ought not to have a price put on it.
Some argue necessity: because the poorer nations need knowledge and techniques,
they are justified in helping themselves, regardless of the patent, copyright or
property laws in other nations. How, in any event, can a poor nation obtain what it
needs, rather than have to take what is offered by the wealthier ones? These
comments apply equally to information technologies.

How valuable are current databases to the developing nations?


Industrial nations maintain large databases of scientific and technical
information, along with statistics on consumer preferences by age groups,
occupation, income, and the like. Much of this is of little value in planning product
development for developing nations. The techniques to make or use some products
may not exist; the demographics are completely different; some products are
inappropriate when moved into a new cultural setting; or there may not be enough
people at an appropriate income level to make a product feasible.

What moral responsibility is there on the part of wealthy nations to share?


Is bottom-line profitability and the corollary demand for a return from
information resources and tools the highest value? If so, this implies no obligation to
share at all without a direct return on investment. If the common good of the
human race, or a value that requires sharing with the poor is of more importance,
information resources ought no more to be hoarded than any other form of wealth.
A more pragmatic consideration might be that information resources are almost
impossible to hoard, and that trying to do so will lead either to theft or threats to
peace. Another pragmatic consideration is that ensuring that new technologies get
into the hands of poorer nations may enhance trade in many other goods and lead
to greater prosperity for all in the long run.

4.4 Toward The Metalibrary

Limitations of Current Information Technology

There are two major categories of information distribution that are of concern
to the discussion in this section--the commercial and the scholarly. Word of mouth
also conveys information but in smaller quantities and so inefficiently and
incompletely that it will have much less effect on the direction of society in future
than it did in the past.
In the third civilization, the chief commercial media have been magazines and
newspapers in the print category and radio and television in the electronic. These
are the products of highly developed institutions, and they are carefully tuned to the
desires of their consumers. As with all institutions, their essential mandate is to
perpetuate themselves; in common with all commercial enterprises, this goal is best
pursued by paying attention to the bottom line. Correctness, completeness, and
societal consequences are of little importance to the institutional media unless
there is some higher controlling ethical imperative, or unless such concerns coincide
with others that have an impact on profits.
Those who control such media have the power to decide whether something is
news or not, whether to present factual accounts or editorials, and whether to
identify the nature of either to the customers. If they hold profits to be a higher
good than, say, truth, then the "news" material their outlets present to the public
will be sifted through a profit filter but not necessarily through an honesty filter. In
such a context, the trivial--glamour, sex, money, power, violence and self-
gratification--are made to appear heroic, for they appeal to the sensual and can sell
product. Meanwhile the genuinely heroic--love, kindness, honesty, peace, moral
goodness, cooperation, and social duty--become trivialized; because they lack
sensual appeal, they lack profitability. Even if the for-profit media do not necessarily
set about consciously to change traditional values for reasons of conviction, they do
so inevitably for ones of gain.
However, it is the heroic (other-centred) values that forge bonds between
people, giving a society meaning and enabling it to be. Sensual pursuits are selfish,
individualist, and isolationist--they eat away at the bonds of society, and if
unchecked, can destroy it. Yet the survival of civilization is too distant and vague a
goal to affect short-term bottom-line thinking.
Commercial television is particularly susceptible to the temptation of
becoming the advocate of selfish sensuality because it can present the illusion of
being a "hot" or personally involving medium, even though it actually has no
feedback mechanism and hence no group dynamics. Its watchers do not actually
participate in the events portrayed, but they can be given an illusion they do have
the power to do so. Television's goal therefore need be no loftier than the
excitement of emotions; it has no inherent mandate to inform, except peripherally.
Originally, it was thought that television's ability to bring the world to one's
home would promote global understanding and cooperation, but the medium's
individualistic and sensual appeal can have very different results--reduced creativity
and scope of world view, and a levelling to mediocrity. Its immediacy brings the
random violence of terrorism to the living room, and can even encourage such acts
on the part of those who have no hope of any military victory. Consistent with its
natural sensual appeal, television has developed an almost continual portrayal (and
linkage) of sex and violence. There is no mechanism in place to assess the effect
this has on society and on children in particular. At the very least, such portrayal
desensitizes viewers to murder, rape, brutality, exhibitionism, and violence. In its
sensationalizing of world events, the product of the commercial media can easily
become disinformation and not even the people who produce it may be aware of
this fact. What is more, objectors have no means of making corrections, because
none of the present commercial media are interactive. Ratings tell advertisers how
many people watch a program, but reveal neither their reaction to it nor the effects
it has on their subsequent thinking and behaviour.
Among other things, television becomes its own reality, overwhelming any
message delivered through it. Nowhere is this fact more obvious than in the fate of
several televangelist superstars who lost their own message to the glamour and
sensuality of the medium in which it was being delivered. Rather than their
message changing the world of their viewers, they themselves became television
entertainer/stars, fully entering into the life-style and the values of the artificial
world in which their performances were being crafted, and their message became
emptied of its content as their lives put the lie to what they were saying.
Another aspect of the problem is that the late industrial age media came to be
a closed system of entertainment, offering only highly control-belief filtered versions
of certain currently fashionable world events. Thus "news" was created and
managed as much by the media as by the participants in events. In order to
entertain, they focused on the flamboyant, outrageous, and shocking rather than
the ordinary, on the negative and dangerous rather than the positive or uplifting.
Their nature is to be oriented to conflict and personality, rather than to information.
Some believe that North American news reporting is sufficiently negative to
create its own crisis--one of non-confidence in the very society that gives it free
rein to operate. These outlets cannot therefore be relied on as factual sources of
information. In this view, their product closely resembles historical fiction--a
tapestry of fancy hung upon a few threads of fact. Stories constructed within such a
medium may reveal more about the thinking of reporters than they do about the
real world.
Critics of such attacks will be quick to point out the many benefits of
television--its potential for informing, for entertaining, for educating, and for
allowing at least vicarious participation in events most people could never attend in
person. There are educational channels, family-oriented programs, sports networks,
and a thriving public television facility--all these indeed serve to mute the criticism
given above. By extending the choices available to consumers, these alternatives to
the standard commercial fare also whet the appetite for what television could be if
unlimited choice were in the hands of the individual viewer--that is, if each person
could supply the filters on all available entertainment and news without having
everything predigested. In short, television still has the great informing potential it
has always had, even though that potential has not yet been realized.
At the same time, and like the other traditional media (newspaper, magazine,
and radio), television may also have reached the limits of its particular technology.
Available time and channel space conspire together with the profit motive to ensure
that individual choices from these media are severely constrained. It would appear
that in order to permit unlimited access to basic information, a major technological
breakthrough is required--one that allows individuals to control their own
information filters.
Turning to another realm entirely, when one considers the available
information channels for scholars different problems become evident, for here
correctness and completeness are not usually as serious problems as are relevance
and information overload. Today's libraries--paper and electronic--are added to by
millions of book and journal pages a day. The use of book review digests, cross
references, citation indices, and bibliographies can be extremely time consuming
and does not always guarantee that the desired information will be found.
Conversely, if something is not found, that does not mean the information does not
exist.
What is worse, just because something is stated in a book on some library
shelf does not guarantee its accuracy. It might be based on poor research or
inadequate information. It might be deliberate falsehood, poorly reasoned, obsolete,
an opinion the author later withdrew, or such bad scholarship that it isn't even
wrong. It is always filtered through some world view before being presented, and
this means that academic information is no more value neutral than is television
entertainment. Correct information on a given subject may be in such a fragmented
and widely scattered form that it is impossible for one person to search enough of it
to synthesize an integrated whole from the many parts. Even in electronic research
libraries there are several problems that a new paradigm for information storage
and retrieval must be able to solve in order to create something that constitutes a
significant breakthrough.

Hypertext

The first step toward solving some of the problems with information access is
called hypertext, because it adds new dimensions to the referencing of textual
material. Its characteristics are as follows (a short sketch in code after the list
illustrates the first two):
1. The ability to follow up a book or research paper directly. That is, citations
and references are electronically linked to the original work rather than stored in a
separate citation index. This means that bibliographies become bidirectional, and
their entries have reference threads (citations) that extend forward from the date of
publication as well as backward, automatically.
2. The ability for readers of a paper or book to attach their own links to
individual arguments in a work. These may lead to entirely different threads, if
someone else has an appropriate link attached. Naturally, this applies to the original
author, who is thereby able to withdraw some points or retract errors, so that they
do not proliferate through the literature.
3. The ability to request missing knowledge from others using hypertext. One
can ask for threads to be attached directly to the request if someone can either
create or find the desired information somewhere in the library. Such requests to fill
"holes" can be published on their own or attached to other papers by threads in the
same manner as citations or comments.
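
As an illustration of the first two characteristics, here is a minimal sketch in
Python of a document store in which citation links are automatically
bidirectional and in which readers may attach their own links. The names and
structure are invented for the example, not taken from any existing system:

    # Minimal sketch of bidirectional hypertext links.
    from collections import defaultdict

    class HypertextStore:
        def __init__(self):
            self.cites = defaultdict(set)     # work -> works it cites
            self.cited_by = defaultdict(set)  # work -> works citing it
            self.reader_links = defaultdict(list)  # reader annotations

        def publish(self, work, references):
            # Registering a bibliography makes every reference
            # bidirectional: each older work automatically gains a
            # forward thread to the newer one.
            for ref in references:
                self.cites[work].add(ref)
                self.cited_by[ref].add(work)

        def attach_link(self, work, reader, target, note=""):
            # Any reader (including the author, e.g. to retract a
            # point) may attach a link to a work.
            self.reader_links[work].append((reader, target, note))

    store = HypertextStore()
    store.publish("Smith 2001", references=["Jones 1988"])
    print(store.cited_by["Jones 1988"])  # {'Smith 2001'}: the forward thread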

No such advanced facility yet exists for general use for several reasons. First,
no one computer has enough capacity and speed to implement a sufficiently large
data base. Second, (therefore) what scholarly data bases do exist are both
fragmentary and scattered over millions of computer stores. Third, Internet indexing
and searching are still extremely primitive, and there is no ability for users to attach
new links to old data either manually or automatically. Fourth, few journals are at
this point even available electronically, so the Internet lacks authoritative editorial
and peer review processes. This means that scholars have little assistance in
determining the reliability and authenticity of electronic data. Fifth, personalized
hypertext software on small computers made its first appearance on the market
only in 1987 (Apple's HyperCard for the Macintosh), and considerable work is yet
required to take such packages from single-user environments through multi-use (in
schools and offices) and thence to large-scale operation and general utility status.
However, enough has been done on this concept to make it clear that it is
both practical and viable on a small scale. Moreover, the success of HTML
(HyperText Markup Language) and related notations in creating the World Wide
Web shows the concept has promise on a large scale as well.
When advanced hypertext does become a comprehensive academic utility,
scholars will be able to keep up to date in their fields for the first time in more than
a century. Whenever a new work is published, its bibliographic references will
automatically generate links to the older works, so that someone viewing an article
in the middle of a chain will be able to move in either direction without consulting a
separate reference.
Moreover, problems created by vague, incorrect, poorly reasoned, retracted,
refuted, or irrelevant papers will also be alleviated. As things now stand, journals
employ referees and editors to sift out and reject some of the junk before
publication. This has the advantage of screening out most of the truly awful material
before it is published. The disadvantage is that it may prevent publication of
radically new ideas just because they are new or because the author is perceived
unfavourably by the editor. It can also prevent altogether the publication of ideas
not considered politically or religiously "correct" by the editorial establishment. The
present system also does not allow an author to retract a bad paper once it is in
print--at least not in a way that anyone seeing the original would automatically be
referred to the retraction.
In a hypertext environment, everything can be published, and it could be left
to each user to decide how to sift the material. Screening would be easily done, for
an individual account could be set to recognize only the links approved by reputable
editors and referees, who do their work after publication rather than before and who
are able to alter their approval if they have a subsequent change of heart. This
could be done by attaching priority numbers to links that could later be raised or
lowered. Editors' links could be removed altogether only if no threads depended on
them or if suitable warning were given to persons who had shown an interest by
using the link. This would give those users the chance to establish an independent
personal link. Naturally an individual's normal settings--to recognize certain editors'
approvals and not others--could be overridden in a search for any new links to the
field of interest regardless of whether they were on the "recommended" list.
Researchers could create a personal list of whose recommendations or links
their accounts would recognize and at what priority. A person could subsequently
reject some of these or approve other links if that seemed desirable. These filters
would grow and change as individual interests did and would make the total body of
scholarly work (including all the junk) look different to each user. This very growth
and change in individual filters could also be automated by appropriate software.
Such a system could serve research needs at various levels, for editors' links could
also have a difficulty index attached. These indices would determine whether
particular items would normally be of use to a grade school, undergraduate, or
graduate student, or to a professional. People would be free to change their
difficulty index on a topical, subject, or global basis as they learn more or as their
interests change. Alternately, they could set the system to change it for them
according to actual usage patterns.
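
A rough sketch of how such per-user filtering might work is given below in
Python; the data model, field names, and scoring scheme are assumptions made
for illustration only:

    # Sketch of per-user link filtering by editor, priority, and difficulty.
    from dataclasses import dataclass

    @dataclass
    class Link:
        target: str
        editor: str      # who approved or attached this link
        priority: int    # editors may later raise or lower this
        difficulty: int  # 1 = grade school ... 4 = professional

    links = [
        Link("Survey of Genetics", "editor_a", priority=8, difficulty=2),
        Link("Obscure Rebuttal", "editor_b", priority=2, difficulty=4),
        Link("Graduate Seminar Notes", "editor_a", priority=6, difficulty=3),
    ]

    def visible(links, trusted_editors, min_priority, max_difficulty):
        # An account recognizes only links from editors it trusts, above a
        # chosen priority, at or below the user's difficulty setting.
        return [link for link in links
                if link.editor in trusted_editors
                and link.priority >= min_priority
                and link.difficulty <= max_difficulty]

    for choice in visible(links, {"editor_a"}, min_priority=5, max_difficulty=3):
        print(choice.target)

The same body of literature would thus look different to each user, exactly as
described above.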
In addition, fees could be charged both for publishing and for reading
material. An author would pay to have work put on the system, but each time a
piece is referenced, the author would be credited with a portion of the fee paid by
the reader (The rest of the reading fee would maintain the system). This would also
filter out some low-quality material, for few people would continue to pay to publish
things that no one read.
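
For example (the fee and the split are arbitrary figures, chosen only to show
the mechanism):

    # Arbitrary figures illustrating the reading-fee split.
    reading_fee = 0.10   # dollars paid by each reader (assumed)
    author_share = 0.60  # fraction credited to the author (assumed)

    readers = 2500
    author_credit = readers * reading_fee * author_share
    system_revenue = readers * reading_fee * (1 - author_share)
    print(f"author: ${author_credit:.2f}, system: ${system_revenue:.2f}")
    # author: $150.00, system: $100.00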
The hypertext concept is not new; it originated with Vannevar Bush, Franklin
D. Roosevelt's wartime science advisor, during the 1940s. It was not feasible to
build Bush's "memex," however, and the idea languished until recently. The coiner
of the modern term "hypertext," Ted Nelson, calls a hypertext system with this
added publishing facility "hypermedia." His long-running Xanadu project was an
attempt to implement such systems on a marketable basis, but despite having
many corporate homes, no commercial product resulted. It seems, given the
magnitude of the task, that an advanced hypertext system is more likely to grow
from the collective efforts of researchers and editors already using the Internet
than from the workings of one mind, however fertile and energetic. As it does
grow, the challenge will be to maintain the openness of the Internet of the late
nineties while still allowing individual scholars to view the parts they need under
some structure.

Beyond Hypertext--The Metalibrary

If this were all that the next generation information systems could do, they
would be revolutionary enough, for hypertext alone would radically change
scholarship, publishing, and current libraries. For instance, paper books and the
need to store them could eventually cease to exist. The quality and quantity of
information could improve dramatically. It would always be possible to find out if a
piece of research has already been done, and every scholar would have access to
the most current material. A great deal of time would be saved, both in library
searches and in preventing unnecessary duplication.
However, there is no reason to stop with hypertext. A fourth characteristic
could be added to information access in order to transform hypertext from a
scholars' tool into an everyday appliance. It could be given the potential to
overcome some of the problems associated with the commercial news media by
using the scholars' tools to transfer the power of creating filters to the public at large.
That characteristic is:

4. The ability to link all publications, whether commercial or scholarly, and
whatever the original medium.

Before moving on from this point, a definition is in order:

An (abstract) metalibrary is the entire collection of a society's data, information, and techniques,
together with the means by which it is stored, accessed, and communicated.

The Metalibrary of the fourth civilization is the complete, electronically linked and accessed version
of its abstract metalibrary.

Personal data such as what the family has for breakfast, what colour the cat
is, and one's preferences in clothes, music and computing languages is not included
in a metalibrary, for this definition focuses on the general knowledge and
techniques that the culture as a whole uses and communicates. Although every
society has a metalibrary of its generally communicable knowledge, it has never
before been possible to assemble and index that information in a manner readily
accessible to all. Not everyone can physically drop by the Library of Congress, and
even those who do might find a search rather daunting (though it, too, has a web
site now). From this point on, references in this textbook to the Metalibrary will
always be to the electronic version.
Such a library could contain and link textual material (books, articles, papers,
and newspapers of all kinds), graphics (pictures, art, posters) and sound (music,
radio programs). It could also have integrated forms such as movies, TV programs,
recorded concerts, sports events, and daily news, weather and sports from around
the world, as well as lessons on every subject at every level in a variety of
languages or with universal translating ability.
Some of this has been done or is in process. Java applets and a variety of
other browser plug-ins already allow simple sound and video to be a part of Web
materials. Little indexing of visual-oriented material has been done thus far, and
communications bandwidth would have to be expanded enormously to handle much
of it, but what has been done is a move in the direction envisioned here, however
primitive it may be. Moreover, the necessary communications capabilities seem to
be being driven by a desire for video conferencing in any case, so by the time much
indexed video and other live material is available for access, the necessary
hardware may well be in place for other reasons.
In the remainder of this book, a hypertext system having this fourth
characteristic will be termed The Metalibrary. The difference between the two is that
the Metalibrary allows links to all information, not only to text and simple graphics
and animations as at present. Moreover, it would serve the general population, not
just scholars.
Emerging technology would give the Metalibrary a variety of abilities. Some of
the possibilities are detailed below, though not necessarily all would come to pass,
for other factors might make them unnecessary or unachievable.
Metalibrary terminals could become voice-activated, allow either large wall
screens or book-size wireless portable units, and be capable of displaying text or
colour graphics in the same resolution as a printed book. This would make it the
preferred publishing medium for such material as National Geographic as well as
the Journal of Combinatorics, including all the back issues.
For most purposes, such terminals would have the potential to replace books,
magazines, newspapers, television, and the telephone with a single inexpensive
appliance. It would be possible to ask one's home Metalibrary terminal, "What was
the gross national product of Belize each year from 1972 to 1999?" One might
expect the answer by voice with backup hard copy on the house printer--all without
getting up from one's living room chair.
A somewhat more "fuzzily" defined request for, say, a comparison of
conservative evangelical and Catholic twentieth century commentary on the
meaning and application of the first chapter of John's gospel should also be
processed to produce appropriate results.
Neither would information have to be confined to a textual form or be
statistical in nature. The command "give me the national news, topic government"
could result in the wall-sized flat screen delivering a series of news items, editorials,
and film clips tailored to the request. Everyone could design their own news,
weather, and sports show, with different announcers and different emphases. A
hockey fan could have an all-hockey sportscast, and a would-be traveller could see
the weather for Hawaii or Nice instead of Des Moines or Bradner.
The chosen announcer need not actually have ever read that day's news
before a camera, for sufficient information could be stored on the person's voice,
inflection, and appearance for the Metalibrary to synthesize a program with any
desired person's image appearing to do the reading. If people want Walter Cronkite
doing the evening news on December 12, 2046, they could have him. If they want
Marilyn Monroe, her electronic persona could do it instead. One could have a few
personal films and voice recordings made and anchor the news for oneself.
User interests as expressed in actual operation would determine to some
extent what current items were available, but once the growing technology allowed
enough storage, there would be no need ever to remove an item from the
Metalibrary once it had been recorded. Someone who had gone fishing could catch
up on a whole week's news on returning.
Movies, including ones now shown first in theatres, could be accessed in the
same manner. For the usual access fee, "The Sound of Music," "Ben Hur," "Bambi"
or "Rocky XXI" could be ordered and shown in one's own home. Parents would be
able to instruct their house computer about what, if anything, their children could
order. Television shows would be obtained in the same manner, though the lines
between the TV and movie industries could become quite blurred. Producers of a
given series would advertise their latest creation and the day when each episode
would first be available for viewing. Each family could make up its own schedule of
movies, news, comedy, drama, hockey or baseball, and so on--watching when
convenient for them, not according to any national or local schedule.
Commercials, however, would probably be inserted at viewing time, though a
premium might perhaps be paid to bypass this. On the other hand, it might become
economical for advertisers to pay viewers to look at their commercials. Ratings
would be compiled daily, weekly, and monthly, and would be cumulated on a long
term basis on actual rather than estimated use. Such a use of electronically
distributed "canned" entertainment might compete for some time with the already
ubiquitous videotape rental store, but in the end, the cheaper, more universal, and
more convenient of the two would predominate. If the information highway that is
the Metalibrary's infrastructure has sufficient lanes (channels) to transmit one or
more movies to each home, it would have the advantage of universal selection and
easy accessibility.
Books could be printed a page at a time on the screen for conventional
reading, or their contents could be acted out by synthesizing characters cast at the
request of the user, who could take the starring role personally, if desired. The same
is true of school lessons or university lectures that could be studied through a
Metalibrary terminal if desired. An interesting task for some future technician would
be to make these lectures interactive so that the students' questions would be
answered by the synthesized teacher on the screen. More difficult enquiries could
be deferred to the next lesson and taken on by a live expert connected to the
session. These questions and answers, once recorded, would remain for the next
student with similar interests. In the view of some, such a facility could eventually
replace schools, colleges, and universities, though implementing this would take
longer than would data-base functions. On the one hand, the "teacher" really would
know everything available to know; but on the other, there would be no social
interaction with other students, and no personal mentoring possible.
Very large screens could be used to download, store, and display (for a fee)
great works of art in homes and offices. These would remain until the owner decided
to change the pictures on the wall, at which time the rental contract with the owner
of the original art would cease (no fixed term). Eventually, an entire house might be
decorated in this fashion, with whole walls being massive screens that projected
suitable wallpaper and art collections. Three-dimensional projectors would
eventually become available and the images of sculptures could also be rented
through the Metalibrary. New television shows or movies, as well as live events,
would eventually be available in three dimensions; in fact, such technology may
well be among the first of these actually used (although it would require a vastly
greater bandwidth than conventional movies).
At the same time, Metalibrary services to professionals will be expanded, and
the number of jobs depending on banked information will grow. Anyone still having
a desk would have a Metalibrary outlet--probably supplied by the same utility as the
one at home, but with a smaller screen (or 3D projection volume). For people on the
move, a pocket unit would serve as well, but in less space still.
As in other mature industries, the number of information providers (or at least
infrastructure providers) would shrink as their scope grew. In all likelihood, three or
four competing Metalibrary utilities would emerge to replace the current patchwork
of small companies, but customer equipment would necessarily allow reception
from each, switching automatically from one to the other as the user requested. To
the end user, it would all appear as a single system.
It should be clear at this point that the Metalibrary might prove to be a
concept as revolutionary as was Gutenberg's printing press. It could become at once
knowledge machine, entertainer, teacher, home decorator, and communications
device. While this cluster of functions would develop over time, it is clear that there
would be many disruptions in traditional industries, jobs, and patterns of living.
The same utility that is built with the goal of improving access to information--
so that individuals can find out what it is that people collectively know--will by virtue
of the facilities it offers cause a massive reorientation of several industries and of
almost everyone's life. It is uncertain at this point what the effects will be, for it is
unlikely that all possible Metalibrary facilities will come into being exactly as
described here.
However, this examination may provide some indication of the possibilities, given
current technologies.
The Metalibrary described here is not just speculation. It could be said to be
partially extant already (albeit in primitive form) in the many interconnected
networks of government, academic, utility, and industry information systems that
even now exist. Furthermore, what have been described here are actually emerging
tools and techniques--ones that enable the manipulation of old data and the
generation of new information. These tools are not the totality of the information
itself, much of which is already available on the Internet via less comprehensive
tools.
In this book, the term Metalibrary will normally be used to refer to a
metalibrary that has at least some large subset of the tools and facilities discussed
in this section. Where there may be some ambiguity, the term full Metalibrary may
also be used to emphasize that it is not simply the information content being
referred to, but also a set of techniques for universal access.

4.5 The Accuracy and Security of Information


The universal accessibility of information is not without potential problems. As
individuals, corporations, and governments make growing use of data repositories,
a series of difficulties arise. These problems have already been widely reported on
in the popular media, but as more people use stored information, the number of
those who could be adversely affected also increases. It should be noted that
information accuracy is a genuine issue only if truth in information is a broadly held
value; otherwise this discussion is irrelevant.

Information Accuracy

As things now stand, it is not always possible for individuals to know whether
information about themselves exists, or where it is stored, much less what such a
file might actually contain. There are a number of ways in which errors can creep
into files, there to remain for years unchallenged, all the while affecting the lives of
people. Credit rating, job prospects, accessibility to government services, and travel
opportunities can all be influenced by incorrect information on file. Such errors
arise in several ways, the most common being malice, typographical error, guilt by
association, and incomplete systems.

Malice

A neighbour or worker who has been offended in some way might deliberately
place false information into another's file--either by entering it directly, say, as a
credit bureau employee, or by complaining to authorities and having an
investigation undertaken. For instance, an anonymous tip that an individual has
been molesting neighbourhood children could get a name onto a list of potential
suspects regardless of whether any evidence was offered. One could also get into
such police molester files through evidence in a divorce case, where the temptation
to offer false evidence in custody hearings is very great. In some countries,
government security agencies compile lists of people considered to be risks
because of their political views or their membership in organizations deemed to be
subversive. Unions and corporations have also been known to have such
"blacklists." Unless denied a visa or a job, the person might never suspect that the
list exists. Some promote hatred against individuals or groups, while others compile
lists of those they claim are promoting hatred and in turn vilify them.

Typographical Error

A clerk who types a slightly misspelled name in an arrest record or adds an
extra zero to a balance owed can set off a chain of embarrassing events for the
person affected. Police data bases are not generally public, and correcting their
mistakes may be very difficult. Changing faulty financial records, especially such
government ones as taxation files, can be a formidable and costly task, consuming
much time and large legal fees.

Guilt by Association

An innocent party who happens to share an aeroplane seat with a known
terrorist could be entered into an international police file and be classified as a
security risk, denied government jobs, or forbidden to travel to other countries--all
without knowing why. Because such files are kept secret by the authorities who
maintain them, it can be extremely difficult to find out what is going on and to
correct the problem.

Incomplete Systems

There are numerous examples of large systems in which information once
entered is never updated, verified, or removed when it becomes out of date. For
instance, police departments routinely record arrests, but may not follow up with
the courts' disposition of cases. Likewise, reported thefts are recorded, but recovery
of goods may not be. A person could report a car as stolen one day, have it
recovered the next by the police, and be arrested on the third for driving a stolen
car. With that cleared up, a promised job could suddenly be denied on the fourth
day because a check of arrest records indicates a positive match. The individual
may never realize what has happened.
Some jurisdictions have already recognized these problems and passed laws
to deal with them, but protection from incomplete information is still very poor in
most parts of the world. Such cases illustrate the adage: "A little knowledge may be
worse than none."
One of the more spectacular illustrations of incomplete systems is the so-
called "year 2000 problem" (Y2K) or "millennium bug." Because many software and
hardware systems recorded only two digits for the year (omitting the century), they
had the potential to cause disruptions in all industries dependent on dated personal
data, such as banking and government record systems. When the clocks on many
systems rolled over to January 1, 2000, they treated the date as if the year were
1900, throwing off calculations of interest and pensions, and rendering inoperative
many real-time devices (bank machines, equipment controllers) that depended on
the time and date for their correct operation. Much work went into repairing this
problem beforehand, however, and actual effects turned out to be minimal, though
several other problematic dates are yet to arrive.
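The defect itself is easy to reproduce. The sketch below (in Python; the
variable names are illustrative) shows how two-digit year arithmetic fails across the
century boundary, and how "windowing"--one of the remediation techniques
commonly used--repairs it by guessing the century from a pivot year.

    # The Y2K defect in miniature: storing only two digits for the year.
    def years_elapsed(start_yy, end_yy):
        """Elapsed years computed from two-digit year fields, as legacy systems did."""
        return end_yy - start_yy

    # An account opened in 1985, examined on January 1, 2000:
    print(years_elapsed(85, 0))    # -85, not the correct 15

    # A common repair, "windowing", guesses the century from a pivot year:
    def expand_year(yy, pivot=50):
        """Interpret two-digit years below the pivot as 20xx, the rest as 19xx."""
        return 2000 + yy if yy < pivot else 1900 + yy

    print(expand_year(0) - expand_year(85))    # 15
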

Solving Information Accuracy Problems

The Y2K problem had to be solved, and was, but it cost vast sums of money and
drove up programmers' salaries and lawyers' fees for a few years while the work
was done. It also had the potential (or so it was thought) to cause disruptions in
government, banking, general commerce, and the operation of much automated or
robotic equipment.
As for the other problems described above, all of them are likely, up to a point,
to become worse. However, with the advent of universal information accessibility,
everyone could be given access to all files relating to them, regardless of who has
created the file. Provided a person checks periodically to see what has been filed--
particularly before applying for a job--the problems of inaccurate information could
(in theory) be nearly eliminated. Ideally, all personal information would be stored in
a single place, with access to individual items available only to qualified authorities
or by permission of the person named in the file. Even better, the system could
contain a program that electronically mailed people's files to them whenever the
contents were changed.
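A notification program of this kind would be technically trivial once the file was
centralized. The following sketch (in Python; the record layout, the mailer, and the
address are all hypothetical) detects any change to a personal file and sends the
subject a copy:

    # A minimal sketch: notify the data subject whenever their file changes.
    import hashlib
    import json

    def fingerprint(record):
        """Stable hash of a record's contents, used to detect changes."""
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    class PersonalFile:
        def __init__(self, subject_email, record, mailer):
            self.subject_email = subject_email
            self.record = record
            self.mailer = mailer              # any callable taking (address, message)
            self.last_seen = fingerprint(record)

        def update(self, field_name, value):
            """Apply a change and mail the subject a copy of the revised file."""
            self.record[field_name] = value
            new = fingerprint(self.record)
            if new != self.last_seen:
                self.last_seen = new
                self.mailer(self.subject_email,
                            "Your file was changed: " + json.dumps(self.record))

    # Usage with a stand-in mailer:
    f = PersonalFile("subject@example.net", {"credit_rating": "A"},
                     lambda addr, msg: print(addr, msg))
    f.update("credit_rating", "C")    # the subject is notified of the change
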
However, this is an ideal. In an actual society, it is impossible to control all
abuses. It is too much to hope that reorganizing the form of and the access to
information would be sufficient to prevent the kinds of problems described here
(and new ones) from recurring. Only a conscious effort to build carefully designed
system safeguards would offer individuals security from bad personal information.
After all, the mere computerization of a careless and flawed data system makes its
problems worse, not better, as many a university and business can testify.
Moreover, the centralization of personal information even for the purpose of
making it accessible and changeable for the person it names is itself dangerous, for
it gives the controllers of the system containing that information the potential for
great power over everyone.

Profile On . . . Issues

The Correctness of Information

When a search for information returns incorrect results, or a data security
flaw allows a crime to be committed, the consequences can be serious: damaged
reputations, lost income, or even physical danger.
Who owns data?
o Does personal information belong to the individual or organization that
entered it, the data bank that stores it, the person it is about, or to no one at all?
o Is government-gathered statistical information the domain of the state, or
does it belong to each person in the state?
o Is corporate data the private property of the company in question, or are
the shareholders entitled to it? the customers? the state?
Examples: ought the magnetic coding system for bank machine cards be
public information? What about prison records? medical records? school records?
tax information? marriage, divorce, birth and death records?
o Is it the ownership or the possession of data (or is it neither) that carries
with it the responsibility to ensure its correctness?
o Does "news" information belong to the people in the story, the reporter who
gathers it, the wire service that assembles it, the state in which it is disseminated,
or to no one?
o Suppose a gene that confers immunity to a serious disease (such as AIDS)
is discovered in a person's DNA. Who owns this information--the person in whose
body it resides, or the one who discovered the presence and effect of the gene?

Who is (ought to be) responsible?

o If a bank relies on incorrect credit data and so denies a loan, causing the
customer a loss, is the bank liable? the credit agency? the individual who entered
the faulty data?
o If the security facilities of a system are inadequate, allowing one user to
defraud another with the system, is only the perpetrator liable, or are the owners of
the system as well? What about the manufacturers of the hardware and software?
o If a stolen bank card can be used by the thief because the owner has written
the PIN access number on the card, is the owner partly liable?
o If an investment company continues to do business with the public while
concealing its poor financial state, who is responsible when the firm collapses? only
the principals of the firm? the regulatory authorities who failed to monitor the
situation closely enough? a journalist who knew the truth, but was afraid to print it
and so trigger the collapse? the investors, who ought to have been more cautious?
o If a commercial program is faulty and causes damage to a business, are the
publisher and author of the program liable? What if the package had a statement
disclaiming such consequential damages? What if the copy in question had been
pirated rather than purchased?
o When incorrect conclusions are drawn because data is incomplete, what
liability attaches to the gatherer or user of the data?

What about compensation?

When economic or other loss is caused to some party due to incorrectly
stored or stated information, who ought to compensate the injured person? (the one
who caused the error, the party who ran the storage system, the one who used the
data, or no one?) Does it make a difference if
o the data was maliciously entered wrongly? accidentally?
o the data was changed because of a machine fault with no human
intervention?
o the data was incorrectly processed into information because of a faulty
program?
o the data had simply been allowed to become outdated?
o the correct data was destroyed accidentally by human carelessness? by the
action of a computer virus designed to destroy the data?
o rather than losing money, the injured party lost a job opportunity? her
children in a divorce case? her reputation?
o the injured party never discovered the error, but someone else did?

Who has jurisdiction (Where does the crime take place?)

o when a computer crime is committed over the telephone lines in a distant
computer across state or provincial lines? national borders?
o if data (such as pornography) that is stored in one country is used in or
triggers a crime causing death in another country?
o if an electronic copy of data is stolen in one state or country, then taken to
another where a paper copy is made, then to a third where the data is actually used
for the first time?
o when a "hacker" creates a virus, turns it loose on a network, and thousands
of computers all over the world suffer loss of data?
o over the information owned by a multinational company with headquarters
in one country and branch offices in others? Can one government order the firm to
comply with its laws outside its own borders? What if so doing would cause it to
break the laws of other countries where it operates?

Technical Legal Issues

o Is electronically stored data tangible? If it is not a "thing," can it have value?
Can it be stolen?
o If funds are embezzled from many sources using a single program that
generates many illicit transactions by running in a loop, is this one crime or many?

Who (or what) is the victim

o when money is stolen from a bank machine? (A machine is not a person; is
the element of deceit (of a person) necessary for fraud?)
o when false data is used to win an election, engineer (or prevent) a merger,
or kite stock prices?

Privacy

Observations about correctness immediately lead to questions about who
ought to have access to personal information. It seems at times that one must not
only assume that government and private companies know every intimate detail of
the lives of ordinary citizens, but also make the same assumption about the nine-
year-old down the street with the cheap computer and modem in her bedroom.
Although it may be possible to establish a system of safeguards that requires
permission of the subject before personal information could formally be obtained
and used, the spread of such data may never be entirely controlled, for
information exists in many locations. Some of these are less secure than others, or
have less than scrupulous owners. Any such system that was sufficiently
comprehensive to enforce rules about personal data access would by its very
existence pose a threat to privacy greater than any it could prevent. Since criminals
will also use data facilities, it is not hard to imagine someone setting up, say, a
blackmail data bank to store sensitive or embarrassing personal information for sale
to the highest bidder.
There was a time when such information was not readily available. A
president of the United States could be a notorious womanizer and the news media
collectively choose not to report it. A member of Parliament could hope that an old
police record would never surface. A vice-presidential candidate could keep hidden
an old stay in a mental institution, and a would-be senator could keep secret a
string of shady business deals or underworld connections. A high official could have
an affair with a secretary or a student intern and not be found out. The past could
be hidden and forgotten, whether it included unusual sexual practices, divorce,
illegitimate children, molestation, abuse, bankruptcy, tax fraud, a criminal record,
failure in school, a dishonourable discharge, cowardice, bad judgement, the
misappropriation of funds, or a collection of traffic violations large enough to fill a
car.
Today, investigative reporters armed with terminals can discover all these
things and more in public records (today's Metalibrary). In the society of the future,
everyone will have to assume that all details of their past life, however
embarrassing, are a matter of public record. For those in the public eye, whether as
government, corporate, or union leaders or as professionals in positions of trust, life
will therefore be much more an open book than it has been in the past. For better or
worse, the ability to forget the embarrassments of one's past is on the road to
extinction. Thus, it is hard to say whether having the full Metalibrary would make
blackmail any more or less likely. If all information is readily available, there can be
little embarrassment in having it revealed, for it could never be concealed.
Whether anyone will care or not about others' morality or judgement is a
separate question. When such information is so readily available, the result could
well be a cynical and jaded public that, hearing about the private lives of the rich
and famous, turns a blind eye to morality altogether.

157
However, what would be left of a right to privacy in such a world? Only that
which leaves no record behind. Since many people would choose to have their home
Metalibrary terminal monitor activities inside the house as well as their use of what
is available in the outside world, there might be very little human activity that is not
recorded in some manner. At the place of work, performance monitoring will be
increased, and more information retained about individual commercial transactions.
While there may be some restrictions, it is not difficult to imagine the state (or
society collectively and informally) gathering the power to continuously record all
the activities of every person. This could initially be justified in terms of law-and-
order enforcement efficiencies, for every criminal would be documented. However,
the corresponding possibility for absolute state control over every citizen cannot be
ignored.
Likewise, if the state has control over the strong encryption of data, and
forces vendors of such products to give the state "keys" to decode any data back to
plain text, there could be no privacy of data or communication. In this instance,
however, the technology for message and file encryption was already sufficiently
widespread by the late 1990s that it could no longer be controlled--government
officials simply had not yet realized this fact.
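To see how far out of reach such control had slipped, consider how little effort
strong encryption now requires. The sketch below uses the freely available third-
party Python package "cryptography"; any comparable tool of the late 1990s (PGP,
for example) made the same point.

    # Strong encryption with freely available tools (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()       # the secret key stays with the user
    cipher = Fernet(key)

    token = cipher.encrypt(b"a private message")   # unreadable without the key
    print(cipher.decrypt(token))                   # b'a private message'

    # Without the key (or an escrowed copy of it), recovering the message is
    # computationally infeasible; once such tools circulate freely, control is lost.
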
Even at present, a record of every credit card transaction is kept by the card
issuer. While little could be done in the past to systematize such records because of
their sheer number, the technical obstacles are melting away even as the rewards
to merchants and card issuers become more tempting. After all,
if you know who buys what kind of goods, you can target advertising very cost
effectively and efficiently, and this alone would make keeping and analysing such
records worthwhile.
If all that were done was the elimination of cash so that every retail
transaction were on record, it would then be impossible for any person to hide
anything significant. An institution (governmental or not) that could know
everything could also control everything. In such a scenario, one could easily
imagine that a "universal person code" could be placed on the hand of every citizen,
to be passed over the supermarket scanners along with the beans and bread--
permanently recording not only all human activity but also humanity itself. That
exactly such a society would one day exist was predicted by the Apostle John
writing in the first century A.D.:

"He also forced everyone, great and small, rich and poor, free and slave, to receive a mark
on his right hand or on his forehead, so that no one could buy or sell unless he had the
mark..." Revelation 13:16-17a (NIV)

It is not difficult to see that the technology to institute an Orwellian 1984-style
state already exists and that such collectivizing trends are present.

Big Brother and Little Brother

On the other hand, the effect of universal information availability upon
governments may prove to be neutral or even positive. There may even be greater
democracy, for there is a counterbalance here that promotes individualism. While
there is the potential for increased government control of information, individual
access to knowledge of government activities could also be improved. So too could
the opportunities for citizens to express themselves and change the course of
government. Some envision a participatory democracy emerging--one in which
citizens have daily opportunities not just to express opinions, but to learn the facts
and decide the issues.
Thus, even while people lose some ability to act as "private" citizens,
governments may also lose much of their capacity to operate arbitrarily and in
secret. That is, loss of personal privacy does not necessarily mean a gain in
centralized power--it just means that nothing can be hidden from anyone.
This could also frighten away from public office those with a seamy past to
hide. However, since no one has a perfect past, perfect judgement, or perfect
morality, the effect even upon the aspirations of society's leaders might not be very
great. People would have to judge others (including their leaders) for who they were
in the present and what they might be in the future rather than for their past.
Two more extreme responses are possible. On the one hand, standards of
behaviour for people in the public eye could come to include a stricter practice of
moral actions. A swing of the pendulum towards a comprehensive and rigid moral
legalism of the type popularly attributed to the Victorian era could not even be ruled
out.
On the other hand a variation of antinomianism is already prevalent among
modern liberals. This is the notion that in many areas of human activity the idea of
morality is simply irrelevant. This is usually phrased in terms of tolerating alternate
life-styles, but there is no effective difference between permitting all moral systems
as equally valid and saying that none are valid. Although this position, as usual,
carries with it the logical contradiction that it tolerates everything except
disagreement with itself, it has nonetheless become a popular response to the
"outing" of information with moral overtones. Indeed, it has become so popular that
it is today the control belief in this arena, threatening the freedom or the very
existence of those who hold that moral issues are important--especially if they say
they are absolute.
Whatever the case, the implications for the information age are profound--
actions will be public, and so will be the moral judgement of them (or the lack of
such judgement).
Turning from the action of individuals in government to those of the state
itself, there are similar tensions between the desire for secrecy and the need to
gather and manage information. Although most people in the Western world do not
want comprehensive statism, the opposite extreme--no government, only daily
electronic democracy--may well be too unstable and discontinuous to work.
The most likely outcome is a situation involving gains and losses to both
privacy and democracy--not a swing of power to either the individual or the state,
but a realignment that changes both. Information availability does create the
potential for a new kind of tyranny, but it also provides for new kinds of checks and
balances by giving the individual citizen greater knowledge and therefore more
power. The two trends may not simply cancel each other out, because an open
information society will be very different, but these trade-offs between privacy and
knowledge may well become generally accepted and thus little remarked upon.
Another possibility is that power over information storage and transmission
will become concentrated in the hands of a few technology managers and corporate
suppliers. Such developments are commonly advocated to achieve efficiency,
security, or convenience, but these are not the central issues. Control is. Given the
lessons of history, one must assume that where there is centralized control, there
will inevitably be abuse of power, regardless of whose hands hold the reins of
power and how (why) they obtained it. To date information technology has had a
largely decentralizing and democratizing effect, but there is no reason to suppose
that this situation will last indefinitely. Those who wish little brother and sister to
win out in the long run must be diligent to retain their freedoms or they will surely
lose them.

Why Privacy?

The ethical question here relates to the fundamental basis for the desire for
privacy. Is privacy a fundamental human right, or is it merely a culturally derived
preference? One could argue on religious grounds, for example, that since human
dignity and self-esteem are at stake, the greatest possible amount of privacy ought
to be granted other people in order to affirm their value. On the other hand, one
could argue that the New Testament requires the people of God to be an open and
transparent community and that they ought, therefore, to have no secrets from one
another. One could even argue that both of these principles are true and that they
do not contradict each other.

Data Security

At the corporate and government levels, it may at first be somewhat easier to
keep information confidential than at the personal level, for there will be fewer
copies and these will be stored in more carefully guarded systems, not (initially)
readily available through the public Metalibrary. However, sophisticated
computerized analysis of the activities of business and government even now
leaves them with few secrets of the quantifiable kind. Any skilled individual should
be able to analyse the market share and profitability of most companies. The trick
will be to keep one's actual plans for the future secret for as long as possible.
Moreover, the proliferation of international corporations and the consequent
increases in money, data, and technology flows across national boundaries make it
much more difficult for governments to control corporate activities. This is already
illustrated by the international banking system, within which large sums of money
are routinely shifted from one country to another instantaneously and without much
possibility of government intervention. Even today, no one nation or group of
nations can be said to control the banking system. Thus, the ability to retain
information within national boundaries has already all but vanished in the Western
nations and will also do so eventually in the (previously) more closed East.
Governments will still attempt to keep national security, taxation, and military
information secret. Corporations, credit providers, and banks will need to guarantee
the security and integrity of the information they store, just to survive. Ultimately,
government and corporations must also operate in a more open environment, for it
will become progressively harder to keep anything out of the public view. A secrecy-
oriented government can keep fax machines and photocopiers under lock and key
and track every sheet of paper they produce for only just so long. Once it develops
its own appetite for the efficiency of information machines and acquires several
thousand of them, effective control becomes all but impossible. When those paper
files become computerized, the security problems are multiplied.
This is not good news for those who desire to keep at least some information
confidential. There have been numerous incidents of computer security violations at
government or corporate installations, both by insiders and by enterprising hackers
from without. Freebooters have rummaged through medical records, corporate
finances, and even some military files. Insiders have stolen data for competing
companies or nations, and saboteurs have destroyed whole installations.
The victims of these violations have learned from their woes and tightened up
their poor security. Inside personnel are screened more closely on hiring and may
be searched when leaving the job site. Modems are now designed to call back only
to authorized numbers before connecting, passwords are checked regularly and not
left lying around, and backup copies of important files are made regularly and
stored in secure, off-site locations. Disks brought in are routinely scanned for virus
programs that could destroy data. Critical installations often have an entire physical
duplicate, usually in another city, so that service to customers can continue
uninterrupted even through an explosion or fire at the main data centre (this is
standard banking practice).
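As a small illustration of one such practice, verifying that an off-site backup still
matches the original can be done by comparing cryptographic digests of the two
copies. The sketch below (in Python; the file paths are invented) shows the
principle:

    # Verify a backup by comparing digests of the original and the copy.
    import hashlib

    def file_digest(path):
        """SHA-256 of a file's contents, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def backup_intact(original, backup):
        return file_digest(original) == file_digest(backup)

    # backup_intact("/data/accounts.db", "/offsite/accounts.db") -> True if identical
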
As security consciousness increases and governments attempt to control data
flow across borders, some countries may set up data havens, much as they now
establish tax havens. There will also be an increase in data traffic (buying and
selling) on a very large scale, as economic, legal, and consumer files are copied
from owner to owner.
The net long-term result will surely be even greater data availability on an
international basis, and a general breaking down of national borders in favour of a
more global view of information. While this is tending to make Western societies
more open in some ways, it has already sounded the death knell for the old closed
societies of the communist world. The very efficiency of information techniques
militates against a tightly controlled society. Widespread availability of information
is inimical to totalitarian forms of government, and a computer and modem are
much more deadly enemies to statism than is a copying machine. Perhaps the best
way to hasten the fall of tyranny is to ensure that it is well supplied with
photocopiers, computers, and fax machines.
Thus, on balance the information age may favour the individual, but nagging
doubts do remain. The gains available through individual access to information
imply a corresponding loss of privacy. Are the trade-offs fair? Will Big Brother still
end up watching? Will the millions of "little brothers and sisters" triumph? Will
people have any vestige of personal privacy, or will everyone really be able to know
everything about everyone else? The answers to these questions will vary from time
to time and country to country, but the extreme scenarios now seem less likely than
that some middle course will instead be charted.

4.6 Information Analysis and Decision Making


Even in the fully realized Metalibrary, universal accessibility of information
does not in itself solve practical problems. Finding solutions is a multistage process
resembling the scientific method that leads from raw data, first to knowledge and
understanding and then to decisions. Moreover, discovering knowledge and making
decisions are not necessarily on the same path, but may often be nearly
independent of each other.
The full Metalibrary, like present-day paper libraries, provides material for the
first step in the process, by organizing raw data by category and giving users tools
to relate the data to other categories, to analyse it, and to record conclusions or
argue with those drawn by someone else. All this is done now in scientific journals,
though not very efficiently. However it takes place, a community or collective
consideration of data is necessary before information can be derived from it.
Indeed, it has been in high density, strongly interactive population concentrations
that great new ideas took root and flowered in the past, and there is a sense in
which the Metalibrary makes the entire Earth into a single city.
When the subject matter can be described in quantitative terms (chemistry,
cell biology, economics, and demographics), the first step is to establish what the
facts are, that is, what data are valid and what information they convey. Expert
forums operate through today's journals (and tomorrow's Metalibrary) to achieve
consensus on what the facts are, given the data available.
However, managers and other administrators often need to make decisions
long before there is widespread consensus about what the pertinent information
means. Ostensibly, such a manager makes decisions on the basis of available
information about past history and probable consequences for the future. On the
other hand, two people may easily make a different decision under the same
circumstances. Here are a few examples:

1. The task is to implement a trial version of a universal data base. The
problem is to decide on who should have access to what information. The company
hires a lawyer and a computer scientist to advise. Their recommendations flatly
contradict each other; one wants tight controls, the other a completely open
system.
2. A bank determines that it requires a new computerized billing system for its
expanded safety-deposit department. Extensive studies are run, and software is
chosen to control the data base. But many compatible machines can run this
software, including brand I, brand C, and brand X. Systems from all three vendors
are tested and the results charted. Brand C comes out on top in price and
performance, with brand I second and X third. The branch manager then overrules
the selection committee's recommendation and decides to buy brand I, because the
mainframe presently in her office is from the same company and she values brand
loyalty higher than price or performance.
3. The Fraser Valley Library, acting on recommendations from the Ministry of
Human Resources, has decided to build a branch to serve a slum neighbourhood of
Aldergrove. The only available property is an old park adjacent to a heritage
building, formerly the residence of a certain well-known author. Psychologists, social
workers, and government officials claim that dramatic improvements in similar
slums have always resulted when a library was built. They insist that the house and
park ought to be sacrificed. Historians and local community leaders point to
community pride for one of theirs who made good, as well as to the benefits of the
park for their children. They do not deny the potential value of the library, but hold
that the value of the house and park are greater, intangible though that value may
be. The recommendations are again contradictory.
4. During World War II, the British scored an intelligence coup by breaking the
German coding scheme and routinely translating military messages from the
opposite side. One night, the decoded message contained instructions to bomb the
city of Coventry at a particular time. The British Prime Minister, Winston Churchill,
knew
that many lives could be saved if he evacuated Coventry. He also knew that this
move would reveal to the German High Command that their secrets had been
breached, and the codes would immediately be changed. By the time the British
could decipher the new ones, many more lives could be lost on the battlefront than
could be saved at Coventry. A utilitarian, Churchill did not warn the city; the bombs
came, and civilian lives were lost. Clearly, an act-oriented ethic would have dictated
the opposite course of action.

The point of these four examples is that the mental filters through which both
history and consequences are passed often have more influence on a decision than
the facts and probabilities themselves. People do not make decisions on facts; they
make them for other reasons. The decision in example 2 hinges not at all on the
data--in fact collecting it turns out to be a wasted effort, for the manager makes the
choice irrationally, basing it on emotional familiarity rather than on facts. Business
people commonly do decide things emotionally, particularly when it comes to
technology--this explains why inferior systems can become commonly used. Such
scenarios are normal in any situation where the people making the decision are not
personally familiar with the technology, do not understand the data, or trust
advertising more than either--and so ignore fact and embrace emotion.
The human element is critical to the outcome of the decision-making process,
and the world view (including the ethical view) of the decision maker may well
determine the outcome quite apart from (or in contradiction to) the facts. Above the
individual's world view, and creating its context, is that person's group culture.
Depending on education, peer group, social status, local ideas, organizational
outlook, and national goals or prejudices, each person shapes a world view in some
degree of conformity with others sharing the same culture. Membership in a given
subculture of society will determine whether a person even sees certain data, much
less understands it enough to make informed decisions. Thus, even extending
decision making to the entire populace would not guarantee that better decisions
would be made, or that they would last long in the face of the fickleness of popular
opinion.
The full Metalibrary could help with some of these problems, would
exacerbate others, and would create new ones. For instance, it could be used to
enforce a requirement that some expertise be demonstrated before participating in
a decision. In matters such as the building of libraries or parks, a simple test on the
facts of the case could be required to gain voting status. Those living in the affected
area would read a selection of the arguments for each course of action, and answer
simple questions to show that the issue is understood. The decision would then be
made by the informed and affected people.
Decisions with wider effects and more profound consequences might require a
different voting structure, in which the degree of knowledge about the problem
would determine each person's share of the vote. If a dam is proposed on the
Columbia River, economic benefits would have to be weighed against environmental
effects. It might be too much to expect everyone affected to become sufficiently
knowledgeable about the proposal and its effects to cast an informed vote--there is
too much technical information for non-experts to digest. Moreover, those with the
best engineering expertise are not necessarily those most knowledgeable about
costs and benefits or about environmental effects. Perhaps a formula could be
devised to weigh the votes of those with greater (or multiple) expertise more
heavily than those who qualify with less knowledge but are still affected. This would
give those with a strong interest a powerful motivation to do some research, and
might make it more likely that a consensus on the decision could be reached.
However, this particular issue is complicated in that the river in question crosses the
Canada-U.S. border. How could the relative interests of two entire countries be
weighed when one is larger in area and has more environment to affect, and the
other has more people and a bigger economy?
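One such formula--offered here only as an invented example of the kind of scheme
contemplated above, not as a proposal--might grant each affected person one base
vote plus one additional vote per demonstrated area of expertise:

    # A sketch of weighted voting; the weighting rule is an invented example.
    def weighted_result(ballots):
        """ballots: list of (choice, set_of_expertise_areas) pairs."""
        tally = {}
        for choice, expertise in ballots:
            weight = 1 + len(expertise)       # affected residents always count once
            tally[choice] = tally.get(choice, 0) + weight
        return max(tally, key=tally.get), tally

    ballots = [
        ("build", {"engineering"}),
        ("build", set()),
        ("reject", {"economics", "environment"}),   # multiple expertise, heavier vote
        ("reject", set()),
    ]
    print(weighted_result(ballots))   # ('reject', {'build': 3, 'reject': 4})
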
The premise behind such electronic participatory democracy schemes is that
everyone sufficiently informed would be more likely to come to the same
conclusion. This would be a major--and in many cases unjustified--assumption; as
has already been indicated, good information is not the only factor in decision
making. Such systems would also be a substantial modification of current
democratic practice; whether they would be found acceptable is another question,
especially since weighted voting does away with the cherished idea of "one person,
one vote" and may not therefore be seen as an improvement. It is also important to
note that the mere technological enabling of weighted voting is not in itself a
reason for implementing such a scheme.
Yet another common supposition is that the existence of comprehensive
communications and information facilities such as the Metalibrary would tend to
reduce or eliminate differences in culture and world view and thereby promote
unanimity in decision making. This would continue a process begun by books, radio,
and television and fostered by modern-day population mobility. However, the world
of the 1990s was still far from the global village envisioned by some in the 1960s,
even though its peoples hold far more in common now than for thousands of years.
Indeed, though there might gradually be fewer sharply distinct cultures based on
geography, and fewer international boundaries as well, there are some very basic
conflicts of world views that are unlikely ever to be eliminated. If the fall of the
former Soviet Union has taught us anything it is that centuries-old ethnic hatreds
such as the ones it brutally suppressed can still survive for generations and readily
be called upon to create new bloodbaths when that repression is removed. In such
cases, the availability of more technology merely means that people are killed at a
faster rate than before.
Moreover, along with its new kinds of information filters, the Metalibrary could
well create new culture and world view conflicts, for not only would people perceive
information differently, they would also be able to personalize their view of the
information to the extent that they would not have to look at the same data.
An ivy-leaguer with great pride in her type of institution might accept
information connection threads only from people at similar schools and choose not
to see the threads attached by anyone from smaller or less prestigious universities.
Prejudices over spiritual ideas would remain, with some religious people refusing to
read certain scientific works and some scientists refusing to read certain religious
works. The same applies to those of differing political persuasions. Except when a
person placed a foot into the other camp or crossed over altogether, people on one
side of any debate could pretend that the other side did not even exist, much as
happens today.
When a new link did cause a thread to trail over self-imposed borders, it would
at once be obvious. Denial of recognition would remove the threat and the troubled
mind could again be safely closed. New ideas and related data would not reach
people unless allowed to do so. As is done today with existing information
techniques, such denials perpetuate an already well-formed group thinking pattern
and increase the possibility that decisions made by such people would be bad ones,
because they are not fully informed. Once again we see that automating a bad
process (here it is decision making) does not make it better. Rather, it merely
produces the bad results faster.
On the other hand, as the pornography issue illustrates, not all information is
either useful or beneficial, and it may be a good thing to prevent some of it from
entering one's home, or, in such extreme cases, to prevent it from even being
available. With a fully implemented Metalibrary, the former may be rather easy, but
the latter likely very difficult.
One could suppose that a kind of natural selection (good decisions and advice
are more efficient and useful than bad) would gradually reduce the influence of the
close-minded as the poor quality of their decisions became evident. However, there
is no guarantee that a particular discipline or speciality would not become as rigid
and unbending as can now happen using the medium of journal articles. A control
belief group has the power to reject new ideas by collectively refusing to look at
them. Denial of recognition by the cultural leaders--who may still be termed
reviewers or editors--would guarantee that new ideas would not be read. That is, for
all its promise as an information utility, the Metalibrary might make it even harder
to challenge the control beliefs of a society, for each sub-culture using it would still
have unlimited ability to effect intolerance of competing views. A possible way
around this difficulty would be to have the Metalibrary rules allow universal visibility
to new information links regardless of who makes them, at least until such time as a
person reads the item and expressly denies the link. Another possibility would be to
create ombuds-reviewers who can make connections that every user will see for at
least a certain period of time after the person first reads the new material.
There is no completely satisfactory solution to the problem of intellectual
intolerance, however. Everyone filters what they will read and who they will talk to.
They must, for there is too much for one person to assimilate. The filters in the
Metalibrary will in some senses be more tangible, but they too are necessary.
Although the narrowness of specialization may be greatly reduced because much
less knowledge will need to be memorized--looking it up will be better--specialities
will still remain, and their practitioners will still have difficulty communicating cross-
culturally.
Once again, it becomes evident upon some thought that ideas, like goods and
services (whether cultural or academic), are accepted or rejected by society as a
whole in the short run on their perceived merits, not on absolute standards. In their
own generation, the guardians of the control ideas and beliefs can always refuse to
acknowledge anything else, or even suppress competition. It is only in the longer
(historical) run that they come to be evaluated with more global measuring sticks.
Neither will language barriers necessarily be broken down, for eventually the
Metalibrary would communicate with users in their own languages. There would
therefore be no incentive to learn another tongue, and meeting other people
personally might even become more difficult. Spoken communication could suffer
and isolationist tendencies increase, balancing off the improvements in written
communication.
The full Metalibrary will also be sophisticated enough to allow the use of
cultural, religious, or personal values to assist in filtering information and making
decisions. Since it will record every data search and every decision, it could record
how each person's filters operate and suggest solutions to problems consistent with
one's stated values and past decisions. Again, though there would be benefits to
this, there would be no incentive to re-examine one's presuppositions periodically,
for the Metalibrary could be set to reinforce them.
In any event, the advent of the full Metalibrary would make it clear that
everyone has a world view. Each person would construct a reflection of that world
view in the process of learning the system, developing the filters, and making
decisions at both the information and interpretation levels. Since there would still be
"superstars" of each discipline even in this new medium, there would be a demand
for the ability to adopt other people's world views (or sets of connecting threads).
So, in addition to being able to modify one's own personal set of connecting
threads to recognize any other person's links, it ought to be possible to rent
another's. This is different from incorporating in one's own set the links with that
person's name on them, for that does not also add the connections the second
person has recognized from other people. Borrowing a whole world view would allow
people literally to see things as others do. In this scenario, world views would be a
commodity for rent or sale, and would be mergeable with one's own. A person could
keep several independent world views on hand and switch between them or revert
to an older version of a connection set.
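Representing a world view as a set of connection threads makes the renting and
merging operations concrete. In the sketch below (in Python; the representation
and the example links are invented), borrowing another person's whole view is
simply taking the union of the two sets of threads:

    # World views as mergeable sets of connection threads (links between items).
    from dataclasses import dataclass, field

    @dataclass
    class WorldView:
        owner: str
        links: set = field(default_factory=set)   # each link: (from_item, to_item)

        def merge(self, other):
            """Borrow another's whole view: the union of both sets of threads."""
            return WorldView(self.owner, self.links | other.links)

    mine = WorldView("me", {("John 1", "Logos commentary")})
    rented = WorldView("scholar", {("John 1", "Philo"), ("Philo", "Middle Platonism")})

    combined = mine.merge(rented)   # see things as the scholar does, plus one's own
    print(len(combined.links))      # 3
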
Many scientists who are also writers have remarked that although they can
travel in several academic cultures, they seem almost to become different people
when they do so. This is a routine phenomenon also, for everyone has a different
mindset and vocabulary (called "registers") to communicate with different people
(one vocabulary subset and thinking pattern for the children, another for clients,
and other for co-workers, and so on). The Metalibrary would allow someone to be
(intellectually) as many different people as desired, though it is likely that most
users would integrate their interests into a single collection. Some people would
undoubtedly make their world views available as a public service for anyone to use;
others might make a tidy profit selling theirs, much as they now do from books.
All of this would allow for decision making that is potentially more factually
informed and that enables participants to better consider each other's points of
view and how these were formed. This does not mean that making decisions will be
any easier than it is now or that most (or any) will be unanimous. It does imply
broader participation and less bureaucracy, as well as the possibility of more
satisfaction with the results. There would still be differences of opinion and there
would be more opinions expressed than ever. All opinions could be considered, even
though all surely would not be. To put this another way, being better informed may
be good in the ethical sense of the word, but it is not clear to what extent that
"good" would be sought after. It is even less clear how well it could be enforced.
Yet another problem with an information-based society is the potential to rely
too much on machines for the decision making process. When this is done, it is easy
to forget that information is more than whatever is stored in or processed by
computers. To have meaning, it must be communicable. Assigning and
communicating meaning, judging value, and taking action based on informed
decisions are all part of the unique province of human activity, and there is as yet
no indication that any of these can be automated. It is easy to rely on the neat rows
and columns of figures in a spreadsheet, but unless the assumptions behind the
formulas used to produce the output are known, the reader cannot make informed
human judgement on the information content. There are value judgements behind
the process of data collection in the first place; there are value judgements involved
in organizing it; and there are value judgements involved in deciding on what
meaning to assign to (and what action to take upon) the material in the end. Thus,
who decides, and out of what value system, turns out to be what gives information
its ultimate quality and meaning. Humans can think about and evaluate their
thinking process; machines cannot. This appears to provide an answer for the
(ethical) question: who ought to decide for humans--themselves, or machines?
The availability of instant information also creates pressure to make instant
decisions. For instance, because it is easy to do so, and the means are at hand,
many people respond to electronic mail messages and Internet news postings as
soon as they receive them. As users of such systems are well aware, this results in a
large volume of intemperate, ill-considered, and impolite mail traffic and news (such
messages are called "flames"). Likewise, if thought processes and analytical
techniques are unsound or if decision makers are so culturally conditioned as to be
incapable of considering alternatives, the Metalibrary facility will not help.
Computerizing a bad decision-making process does not produce good decisions, it
only causes the bad ones to be arrived at in milliseconds instead of days.
As mentioned, prejudice will also remain. That is, irrational dislike of others
and refusal to consider things from another person's world view would be as likely
then as now. Perhaps the greatest contribution to decision making of instant and
universal information availability could be the recognition of legitimate differences
among world views as people realize (in the process of automation) how they have
been making their decisions. Perceptual and decision-making filters would be
obvious instead of hidden; their existence could no longer be denied or ignored. This
has the potential to blur boundaries between sub-cultures, promote communication,
broaden specialities, make learning easier, and promote the possibility of sounder
decisions. On the other hand, prejudice has stood the test of time as a stronger
force in human affairs than any of these potential benefits.
Thus, as for all technologies, the impact of electronic media on knowledge and
decision making will be mixed. Great benefits will be available, great abuses will be
possible, and for many people there will just be a transfer of their old ways of
thinking to a new medium.

4.7 Summary and Further Discussion

Summary

Information is more than data. It must be processed and communicated in
order to have the potential to convey meaning. When this takes place, participants
in the exchange (and the whole culture) are altered. Every civilization depends on
its transportation and communications technologies. These were once essentially
the same, but the latter have now become critically important on their own.
Together with the new means for storing and manipulating large amounts of
information, fast communications make unlimited access to information available to
citizens of the industrialized nations for the first time.
Services offering such data access exist now and have already had a strong
impact on many professionals, who have begun to rely more on looking up facts
than on memorizing them. It is likely that this way of doing things will be adopted by
most people in the near future, though costs must come down and ease of use
improve substantially first--steps that all technologies require to become widely
accepted.
As this takes place, new technologies will have their usual transforming effect
on the ideas and demands that created them. Already, hypertext promises to
revolutionize the scholarly use of libraries. The extension of this concept to that of
the full Metalibrary facility promises to make a wide range of benefits available to
the general populace.
Information technology has the problems of accuracy, privacy, prejudice, and
state control to overcome; if it does not, it could cause more problems than it solves.
People may have to live in a world where the concept of privacy has changed
radically or ceased to exist for many aspects of life.
The effect on decision making is also dramatic because information
availability empowers more informed decisions. It does not guarantee good ones,
however. The Metalibrary may also make world views into more visible and obvious
entities, to the point where they can become commodities for rent or sale. There are
both benefits and disadvantages to information technologies, as has been the case
for all others.

Discussion Questions

1. Describe the terms hypertext and Metalibrary and distinguish between them.
2. Use your present library to find and describe the term dynabook. Try such
subject headings as technology--the future; Alan Kay; Xerox Corp. How long did it
take? How long would it take in a hypertext system?
3. You are in a (paper-based) library researching your master's degree thesis
in mathematics and stumble across a brilliant paper: Sutcliffe, Richard J., and
Alspach, Brian. "Vertex Transitive Graphs of Order 2p," Annals of the New York
Academy of Sciences, v. 319 (May 14, 1979): 18-27. You are captivated by the
ideas presented there, and several new theorems that follow from their results
immediately come to mind. Has anyone already thought of your ideas? Describe the
steps you must take in a paper library to find out who has referenced this paper in a
bibliography in the intervening years. Go to your library and do this, making a list of
derivative papers and following them through later works as well. When you get
tired, estimate how much time it would take to finish. Now describe how this would
work using a hypertext system and estimate the time savings. Keep in mind that
this is a rather obscure paper with few citations. More popular ones can be orders of
magnitude more difficult to trace entirely, because the citations fan out into a maze
of papers.
4. Another library task is the making of book bibliographies. This is a little
easier than following a paper through citations but can still be quite a challenge.
Use your paper library catalogue to make a bibliography of all available books on
the computer programming language Modula-2. Now obtain access to an electronic
bibliographic data base (your library may subscribe) and perform the same search.
How many titles do you get using each method? How long did each take?
5. This chapter has mainly presented the positive side of universal information
availability and has been relatively optimistic about the technology becoming
available to do it. Write a paper attacking this concept, pointing out its weaknesses,
and saying why it can never, should never, or will never come to pass--either from a
software/hardware or from a social point of view.
6. Write a paper in which you extend the concept of the Metalibrary in content
or use. There are many things it could be or do besides those that are given in this
chapter. The more unusual or original ideas can be sent to the author, who will include
some of the best in a subsequent edition if enough people buy the first to make a second
worthwhile. Some small prize may be given for the best idea.
7. What effect would the Metalibrary have on a hobby like stamp collecting?
Be careful!
8. What effect would worldwide availability of information have on the gap
between rich and poor nations?
9. What will the effect on the size and scope of government be? Will it tend to
become larger or smaller? Why?
10. Who should manage the Metalibrary and how--or should anyone?
11. What degree of privacy over personal details can and should be
guaranteed? What should an individual be able to keep secret? Perhaps you would
care to argue that privacy should no longer exist, or at least that the general
diffusion of information cuts down on abuses and on the need for privacy.
12. If privacy is a fundamental human right or urge rather than, say, only a
legislated right, how will people compensate for their loss of information privacy by
increasing some other aspects of personal privacy?
13. Research the subject of computer security and describe the methods of
preventing unauthorized access to data in some detail.
14. How much control should government have over data repositories and
data transmission? Should such be regulated, taxed, or even run by the
government? Give reasons.
15. Look up and explain, in at least some detail, the methods used to encrypt
data and messages. Include a discussion of DES, RSA, and PGP.
16. The body of the text argues that it is effectively impossible for
government to control encryption technology. Refute this.
17. In this chapter, much of the content of current news media was
described as "news editorials." Do you agree with this description? Why, or why not?
What (if anything) should be done to change the situation?
18. Discuss carefully the degree to which the Metalibrary facility would
promote understanding, cross-cultural communications, and better decision making.
Will such things be improved, or will people simply become more isolationist?
19. To what extent is information available electronically now? Write a
summary of the major categories of data bases that can be accessed by the public,
their cost, and the type of information they contain.
20. What effect would the Metalibrary have on poverty, illiteracy, poor
sanitation, economic exploitation, and discrimination in (a) Western industrial
nations, (b) present and former communist nations, and (c) third world nations?
Specifically, what ethical obligations (if any) do users of such an intellectual facility
have to employ it in bettering living and working conditions for others?
21. The author suggests that collapse of the Soviet Union precipitated the
ethnic wars of Eastern Europe (indeed a much earlier version of this text predicted
both). Either argue that Western Europe is unstable and subject to the same kind of
warfare, or argue that there is good reason to believe that Western Europe is now
immune to such problems.
22. Attempt to apply an information analysis to the problems of the Middle
East. Could more knowledge of other peoples and their ways make any difference to
the inhabitants of Israel and her neighbours?
23. Answer the same question as in #22 but with reference to India and
Pakistan.

Bibliography

Denning, Peter J. and Metcalfe, Robert M. Beyond Calculation--The Next Fifty Years of Computing. New York: Springer-Verlag, 1997.
Drexler, K. Eric. Engines of Creation. New York: Anchor, 1986.
Fjermedal, Grant. The Tomorrow Makers. New York: Macmillan, 1986.
Inose, Hiroshi, and Pierce, John R. Information, Technology and Civilization. New York: Freeman, 1984.
Marchand, Donald A. and Horton, Forest W. Infotrends--Profiting From Your Information Resources. New York: Wiley, 1986.
Naisbitt, John. Megatrends. New York: Warner Books, 1984.
Naisbitt, John and Aburdene, Patricia. Megatrends 2000. New York: William Morrow, 1990.
Nelson, Theodore Holm. Literary Machines. Bellevue, WA: Electronic edition by Owl International, 1988.
Orwell, George. Nineteen Eighty-Four. 1949. Reprint. Harmondsworth, England: Penguin, 1964.
Plant, Raymond, et al. (eds.) Information Technology: The Public Issues. Manchester: Manchester University Press, 1988.
Roszak, Theodore. The Cult of Information. New York: Random House, 1986.
Sieber, Ulrich. The International Handbook on Computer Crime. New York: Wiley, 1986.
Stover, William Jones. Information Technology in the Third World. Boulder, CO: Westview Press, 1984.

Internet resources:

Cranor, Lorrie <lorracks@cs.wustl.edu>
<http://www.cec.wustl.edu/~cs142/links.html>
<http://www.wofford.edu/~kaycd/competh.htm>

Chapter 5
Robotics and The Second Industrial
Revolution
Seminar - "Work, Workers, and Machines"
5.1 Tracing a Second Industrial Revolution
5.2 Robots and the New Industries
5.3 Work and Workers in the New Society
5.4 Some Issues in Automation and Robotization
5.5 Other Industrial Futures
5.6 Summary and Further Discussion

5.1 Tracing a Second Industrial Revolution


The Industrial Revolution involved harnessing machines for production
previously done by hand. By "second industrial revolution" is meant the automation
of those same tasks so machines require few, if any, human attendants. This is not
new, for the entire machine age tells the story of machines having ever greater
efficiency, power, and productivity. Automated textile devices first claimed the
livelihood of thousands of independent artisans when the English garment and lace
factories came into being in the early nineteenth century. Each machine that
mechanized work previously done by hand reduced the number of people required
to produce a given quantity of goods. This affected agriculture and industry
simultaneously. The machine revolution proceeded through all parts
of the economy because the various sectors competed for raw materials and human
resources. Also, new technologies developed for one industry are applied to others
in short order.
The machines of the industrial age, though as diverse as the industries in
which they were employed, had one thing in common that distinguished them from
the human workers before them. Each was a speciality device, designed and built
for a specific task. To accommodate any subsequent changes in an industry
invariably meant retooling--a euphemism for scrapping much of the existing
machinery and replacing it with new. In only a very few cases does any part of an
industrial-age machine survive a substantial technological change; it becomes
noncompetitive or irrelevant, so it is unplugged and thrown away.
Human workers, on the other hand, can be retrained to use new skills and
new tools--if the employer takes the time and effort to do so, and the unions will
allow it. Throughout the machine age, human retraining took place continuously as
new machines demanded different skills of their operators. However, as time
passed, more of the physical tasks in manufacturing became automated, and the
machines started to become more general-purpose and to require fewer operators.
Logically, the next step in the sequence is the replacement of the human operators
by machines sufficiently versatile that the "retraining" could be applied to them
instead of to the workers.
Until recently, this step could not be taken because there were no satisfactory
ways of encapsulating retrainability in a machine, and the jobs of at least some of
the human workers were safe. With the advent of programmable automatons, or
robots, they no longer are.

Profile On . . . Technology

Robots

Where did the term "robot" come from?


In 1921 Czech dramatist Karel Capek wrote a play, R.U.R. or Rossum's
Universal Robots. The Czech word robota means "drudgery" or "forced labour."

What disciplines are involved in robotics?


Robotics is a difficult multidisciplinary field embracing computing science,
mechanical engineering, control systems, and knowledge of the design and
operation of the manufacturing process.

What are the Laws Of Robotics?

1. A robot may not injure a human being, or through inaction allow a human
being to come to harm.
2. A robot must obey the orders given it by human beings except where such
orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not
conflict with the first or second laws.

Who enforces these laws?


No one does. They were formulated for use in the fiction of Isaac Asimov in
1940 and popularized since then by a number of other writers of science fiction.
There are no robots yet capable of being programmed to "obey" these laws, and it
is not certain there ever will be.

Does this mean that a robot could kill a human being?


Present day robots are little different from any other industrial machinery in
this respect. Some have detectors that allow them to avoid a human being, but
apart from this, it is as dangerous to get in the way of a working robot as it is to
stand in the path of a moving truck.

What can robots do?


Robots have been equipped with grippers, manipulators, motion sensors,
heat, light, and sound detectors and are capable of handling tools, moving about,
lifting, carrying, and fitting parts.
They can weld, assemble electronic components, spray paint, sand and polish,
apply adhesives and other coatings, drill, make tools, load, unload and store
materials, move parts about in a factory or warehouse, mine coal, make castings,
and assemble and inspect finished products.
They can be sent to Mars to rove a hostile landscape, gather and assess data
for scientific experiments, or serve as a child's toy.

Can robots see?


There are many manufacturers of robotic vision systems. These allow robots
to sense colours and shapes, position parts in the correct location, and inspect
products for flaws. The patterns read by the optical systems are compared with
ones in storage. Whether this is "seeing" depends on the definition of sight.
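To make the comparison idea concrete, here is a minimal sketch (in Python; the
patterns, part names, and threshold are invented for illustration, and real
machine-vision systems are far more elaborate) of matching a camera image
against stored patterns by counting pixel disagreements:

    # Toy template matching: a binary camera image is compared with stored
    # reference patterns, and the closest match wins. All data is invented.

    def difference(image, pattern):
        """Count the pixels on which an image and a pattern disagree."""
        return sum(1 for a, b in zip(image, pattern) if a != b)

    def classify(image, stored_patterns, threshold=3):
        """Name the closest stored pattern, or None if nothing is close."""
        name, score = min(
            ((n, difference(image, p)) for n, p in stored_patterns.items()),
            key=lambda pair: pair[1],
        )
        return name if score <= threshold else None

    # Stored patterns: flattened 4x4 binary silhouettes of two part shapes.
    stored = {
        "bolt":   (1,1,1,1, 0,1,1,0, 0,1,1,0, 0,1,1,0),
        "washer": (0,1,1,0, 1,0,0,1, 1,0,0,1, 0,1,1,0),
    }

    camera = (1,1,1,1, 0,1,1,0, 0,1,0,0, 0,1,1,0)  # a slightly noisy "bolt"
    print(classify(camera, stored))                # prints: bolt

A production system works with colour images at far higher resolution, but the
principle--compare what the camera reads with what is in storage--is the same.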

Which industries use robots?


Examples include: automobile and aeroplane manufacturing, shipbuilding,
electronics assembly, appliance manufacturing, tool and die making, mining,
warehousing, transportation, and undersea exploration.

In what manufacturing environments do they work best?


o where the products are hard items that must be moved about and stacked.
o (so far) where the items being moved are relatively large.
o where the actions required are relatively simple and are repeated in exactly
the same way every time until a reprogramming is done.
o where decisions are simple, have few options, and do not call for shades of
judgement (i.e., the domain of action has clear and strict boundaries).
o where any visual inspections can be handled with a low resolution,
monochrome, two dimensional vision scan.
o where quantities are great enough to warrant using robots, but not so large
as to make fixed machinery more economical.
o where the plant can be run in continuous shifts.
o where labour, land, buildings, and other costs are high, but capital is easy to
obtain, and interest rates are reasonable.
o new factories where the entire building and assembly line can be designed
with robots in mind.
o where conditions are too hazardous to risk many (or any) human beings
(inside a volcano or nuclear plant, inspecting or disassembling bombs, or exploring
in space).

5.2 Robots and the New Industries


Robotic devices have gone far beyond the realm of science fiction, having
become a day-to-day reality in the lives of many people. Home appliances have
built-in microprocessor controllers and timers. Automobiles include diagnostic
centres and several computers to control their operations. Golfers ride about on
robotized caddy-carts. Computers have revolutionized the writing and publishing
industries by automating many tedious functions. However, such devices have not
caused dramatic large-scale changes in basic living and working patterns for most
people. Instead, they have produced simple, small-scale changes to the existing
industrial society. To constitute anything revolutionary, they would have to be
capable of displacing large numbers of workers from their positions. However, both
on the assembly line and in the office, that displacement has now begun.
There is a fundamental difference between the Industrial Revolution and this
second revolution. Many of the workers who have kept their jobs up to this time
because of the (human) ability to be retrained will now lose them, for the new types
of automated manufacturing machinery are indeed reprogrammable. Machines can
now be given not only computational and routine work, but also something that
passes for decision-making ability. Robotic tools are used extensively in such
situations as automobile assembly lines, where the fact that robots cannot make
wrong decisions makes them more economical than human workers. Many of these
are heavy equipment models, with limited and rigid capabilities, but these are
rapidly giving way to much more flexible devices.
It was long a piece of American folklore that one ought not buy a car made on
a Monday or a Friday because the workers were not at their peak on those days.
However, robots do not get hung over, bored, angry, sleepy, or careless. They do
not require time-and-a-half, lunch breaks, sleep at night, salaries, pensions,
washrooms, or stock-sharing plans, nor do they go on strike or make demands.
Robots require a substantial capital expense but small operating cost, for they are
paid no salary. They are reliable, and can be retrained without expensive courses.
A robot can replace between five and ten assembly-line workers and pay for
itself in three years or less. It will do exactly the right job time and time again,
welding two parts with the right temperature and pressure and in the right place, or
applying a nut to a bolt with exactly the specified torque. Parts fit better and are
stronger, and the final product can be counted on to be of uniform quality every
time. Small pieces can be attached to machines or electronic devices with any
desired degree of precision, and this can be done quickly as well as accurately
throughout an entire production run, then reprogrammed for a different run.
Most important, technological changes can be worked out ahead of time and
new computer programs devised to direct the manufacture of new products. With
reprogrammable tools, the assembly line can be retooled with much less scrapping
of machines and very little lost production time. Ultimately, it should be possible for
a new automobile to be designed entirely by computer and for the assembly line to
be switched over to the new product automatically and with minimal human
intervention on the factory floor.
The first four-hundred-fifty-nine vehicles made some future day could be four-
wheel drive trucks, and the four-hundred-sixtieth could be a newly-designed
compact car--with no intervening space or time on the assembly line. The line could
then switch back to trucks until the prototype is tested. For the foreseeable future,
the decision to make the switches would be a human one--there is no method yet of
automating the reasoning leading up to it.
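The difference between retooling and reprogramming can be suggested in a few
lines of code. The following Python sketch is purely illustrative--the product
"programs" and the controller interface are hypothetical, not any real
factory's--but it shows how a change of products becomes a change of software:

    # Toy model of a reprogrammable assembly line: switching products means
    # loading a new control program, with no scrapped machinery and no gap
    # on the line. The product specifications are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class ProductProgram:
        name: str
        steps: tuple  # the instructions the robotic tools will execute

    TRUCK = ProductProgram("4WD truck", ("weld frame", "fit drivetrain", "paint"))
    COMPACT = ProductProgram("compact car", ("weld unibody", "fit engine", "paint"))

    class AssemblyLine:
        def __init__(self, program):
            self.program = program

        def reprogram(self, program):
            # "Retooling" is just swapping the control program.
            self.program = program

        def build(self, serial):
            print(f"#{serial}: {self.program.name} -- " + ", ".join(self.program.steps))

    line = AssemblyLine(TRUCK)
    for serial in range(457, 460):  # ...vehicles up through #459 are trucks
        line.build(serial)
    line.reprogram(COMPACT)         # vehicle #460 is the newly designed compact,
    line.build(460)                 # with no intervening time on the line
    line.reprogram(TRUCK)           # then the line switches straight back
    line.build(461)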
While the kind of flexibility suggested here is not yet available, robotic devices
are already used extensively in Japanese automobile assembly and seem destined
to take over the same functions in North American plants as well, if their owners
wish to stay in business. Ultimately, assembly lines of all types will be automated in
this fashion, and most consumer goods will be produced with few or no human
workers in the plant.
The resulting changes will be as sweeping as were those following the original
Industrial Revolution, for millions of skilled and semiskilled jobs in manufacturing,
mining, forestry, materials processing, warehousing, and other smokestack
industries will no longer be required. Most of the small staff remaining in such
industries will be white collar workers, accounting for and running the machines that
operate the machines--and doing so from the office environment, rather than from
the factory floor. Other employees will be the highly trained and versatile
technicians whose task it will be to effect the inevitable repairs.
Eventually, factories also will be designed and built to order largely by
machines, and can be placed in remote or uninhabitable regions without blighting
either the urban or rural landscape. They could be built beneath the ground, inside
mountains, under the ocean, in outer space, or on the moon. Of course, some
people must continue to work on the design and operation of factories. However,
the consumers who benefit from their production will need neither to know nor care
where those factories are physically located, so long as the flow of goods continues
unhindered. In many cases it might be difficult or impossible for an unprotected
human being to pay a physical visit to the floor of one of these factories; ultimately
it will perhaps be almost entirely unnecessary.
Some mark 1956 as the watershed year in the progress of automation, for in
that year the number of service jobs in North America exceeded the number in
manufacturing and farming (i.e., in production) for the first time. The next four
decades saw a steady growth in the number and sophistication of available
consumer goods and in the general standard of living even while jobs continued to
shift to the service and information sectors. Recessions notwithstanding, there was
during that period a level of economic expansion and prosperity such as has never
before been seen. If this could be projected into the future, those who have jobs of
any kind would probably be able to afford far more technological luxury than ever
before.
Even now television antennas sprout on the roofs of the most primitive tin
shacks in the barrios of South America and Asia. Video cassette recorders and tapes
of dubbed American movies can be rented in small Pakistani towns. There are few
but the remotest of jungle dwellers who lack radios or who are unfamiliar with at
least some modern technological amenities. Even in such settings, the local
missionary-cum-Bible translator is likely to come equipped with a microcomputer,
word-processing software, and a portable electric generator.
As familiar as people are with the recent economic impact of existing
technology, they may not be very well prepared for the changes that are coming.
The Industrial Revolution took over a century to run its course in England with the
most dramatic changes between 1780 and 1850. A critical mass of new industrial
technology has again collected, but the changes this time may take place over a
much shorter period. Some forecasters predict that the transition to robotic goods
production could be essentially complete in a few decades. By another generation
after that, few people would have much detailed knowledge of what a factory is, or
where any are located. Industrial production could then be as invisible and as much
taken for granted as a farm is now.
The economic impacts would be as profound as those of the industrial age, for
even as smokestack industries all but disappeared from sight, consumer goods
could simultaneously improve in quality and sophistication and be reduced in price.
The distribution chain could also be shortened, for there would be much less need
for retailers and wholesalers in any of the big-ticket items. Stereos, televisions,
refrigerators, and many other products could be ordered by the customer directly
from the factory (through the Metalibrary) and delivered to the door without the
need of intermediaries such as wholesalers. Smaller appliances, clothing, shoes, and
such other goods as household robots could be obtained in the same fashion (It
could be a long time--or never--before robotic truck drivers are deployed, however).
Information providers on the Internet already allow such direct ordering of a variety
of goods and conduct business activities electronically on a large scale, so these
comments are saying little that is new.
If such methods were to become more widely adopted, stores and shopping
centres as they now exist could be much reduced in size and importance, perhaps
becoming manufacturers' showrooms. If Metalibrary terminals eventually had three-
dimensional colour-projection ability, many items could be accurately previewed in
the home. With fully automated factories, clothing could be guaranteed to fit, for
single items could be made-to-order for the customer's measurements with no loss
of production line efficiency. Indeed, goods might only be made to order, with mass
production disappearing altogether.
Such large scale automation also suggests to some observers that the new
era would see more planned economies, though presumably not along the lines of
the now discredited and abandoned communist statism. Planning could consist of
surveys and projections of customer wants by the companies engaged in satisfying
those wants, and not involve government at all. This requires no new techniques
other than better information access and processing, for decision making by polling
public opinion has long been a feature of both the commercial and political
scene in North America. Naturally, the advertising industry will continue to seek out
new ways to change those wants so that consumers focus on new products. Indeed,
even a full Metalibrary's entertainment facilities would undoubtedly be as heavily
commercialized as are today's television networks.
Looked at optimistically, and only from a material point of view, the robotic
manufacturing technologies appear to promise a rosy future. However, the people
of this projected new society would be profoundly affected in ways other than
simply having more and better products available to buy, consume, and dispose of.

5.3 Work and Workers in the New Society


Automation and robotization do not simply influence institutions, as if the
economy were an abstract entity that does not touch real people. On the contrary,
large numbers of people are directly affected, for nearly every job that existed in
the 1980s and 1990s could either change beyond recognition or vanish altogether
within the working lifetime of its holder--as many jobs of the 1960s and
1970s already had. As in the first Industrial Revolution, the effect of large scale
robotization in the workplace will be profound, particularly in the transition years
when the new industries are just becoming established. Service industries, the
information sector itself, and the professions have so far done well to absorb new
workers, shifting the balance of employment with relatively little pain. However,
more rapid changes that appear to be in store for the future could overwhelm for a
time the ability of society to cope with them.
At any time, there are three kinds of dislocation that may be experienced by
workers whose jobs become obsolete. The most severe is outright termination,
leading at least temporarily to unemployment. A worker's job may cease to exist
because of automation, reduction in market share, or because the enterprise goes
bankrupt. During stable times, the person may have a reasonable expectation of
obtaining a nearly equivalent position with another company. However, in changing
times those other companies are reducing staff, for the problems encountered by
the original employer are common to the whole economy. Many jobs lost during the
periodic downturns in economic activity are never regained; the companies involved
each time introduce new techniques and new efficiencies to reduce their labour
needs. As a result, North American structural unemployment (minimum levels
during good times) has increased substantially during the last thirty years and
seems destined to grow higher still. Indeed, the minimum rate at the top of the
cycle may be well above six percent (nine in Canada; higher in some countries)--
levels that until recently would have been regarded as unacceptable and warranting
massive government intervention in the economy. In the long run, it can be
reasonably expected that the number of new manufacturing sector jobs created
during good times will be far fewer than the number eliminated in the bad times.
The second kind of dislocation is called displacement. This occurs when a
worker's old job vanishes but there is immediate retraining available for a new
position that has opened up because of the new technology. Here, the employer
shifts and grows with the economy and, despite new technology, need not reduce
the work force. Perhaps the employer also perceives a moral obligation to retrain
current employees for new positions rather than counting on schools and
universities to supply trained workers at no cost.
Alternatively, the worker may have the foresight, initiative, and imagination to
seek appropriate retraining when the time is ripe. Such a worker may displace to
another employer or industry or become a self-employed professional, but does so
voluntarily and perhaps even with confidence. While such visionary and mobile
workers were relatively rare in the past, they could well be the norm in the future.
Technically, a worker replaced by a machine is only displaced, for
retrainability supposedly implies that everyone can find other employment. In
practice, the displaced very often become unemployed because they (or their
employers) are unwilling or unable to effect retraining. Semiskilled workers with a
poor educational background and those who are relatively new to the labour force
are the most vulnerable in such situations. It is often perceived (and was once
stated as fact by Marxists) that there is little to restrain industrialists from seeking
maximum profits while having no regard for the human consequences. Such a
perception is a stereotype, for no business or economy could operate that way
openly and indefinitely in a competitive marketplace. Too many valuable workers
(and customers) would be alienated, and profits would eventually suffer.
A third kind of dislocation, job growth, is more subtle, for it may be visible only
in retrospect. Here, the job holder and the job are mutually transformed over a
period of time, often without anyone noticing that the original job no longer exists--
the old job has been replaced by an entirely new one with no break in continuity.
Although not always possible, this is the least traumatic type of dislocation and can
bring a high degree of satisfaction to everyone involved. This kind of growth does
not ordinarily take place by accident. Managers who wish to foster it must ensure
that workers have a degree of independence and job control that enables them to
plan their own change and growth as employment conditions demand. Rigid,
locked-in job descriptions or contracts prevent people from learning new skills,
whereas flexibility to meet the challenges of change fosters such growth. These
observations suggest a trend toward more flexible and educated workers, a more
professional style of employment, and a correspondingly greater worker control
over terms and conditions of the job. Adaptability to new environments would
become the key to remaining employable.
As existing positions are metamorphosing or vanishing, many new ones are
being created. The computing industry now employs millions of people with job
descriptions that English lacked the vocabulary to write three decades ago. General
affluence has resulted in large numbers of new jobs being created in the
entertainment, tourism, and hospitality industries. Likewise, the global information
and communications industries, the biochemical field, and space-based enterprises
will soon employ millions who once might have worked in factories, and one can
only guess at what their job descriptions will be. Certainly, few of them will be on
production lines. Most will be administrators, office workers, information brokers,
researchers, data handlers, medical personnel, computer operators, pilots, and the
like. This reinforces the suggestion above that the new positions will be for
technicians or professionals rather than for unskilled or semi-skilled labourers.
Thus, jobs and wages will continue to flow out of smokestack industries and
into the service and professional fields. The holders of these new jobs will
presumably make more money, expanding the demand for both goods and services.
Perhaps most people will eventually be employed (or self-employed). However,
depending on the speed at which robotization takes place, there could be a period
of 16-24 percent unemployment in some countries. In the past, when
unemployment reached such levels, riots, revolutions and great social unrest have
occurred. Thus, the rise of modern-day groups of Luddites (machine smashers) or
the establishment of totalitarian states in some previously democratic countries are
possibilities that cannot be completely dismissed. Passions could run very high
during such dislocations, and racial, religious, or political scapegoats could once
again be sought. These possibilities (and natural human resistance to change) might
argue for a slow transition to complete automation, but the market forces
demanding quick action may be too powerful to be tempered by anything short of
total societal collapse.
In the long run a higher percentage of people may be self-employed or work
in what are now called part-time positions. Some predict that tourism,
entertainment, and the arts will be the largest employers. Central governments may
grow dramatically in size for a time, as they attempt to regulate or seize even more
of the wealth and production. There may also be pressure on them to employ many
of those displaced from market sector jobs, just to give them something to do. In
the long run, however, government may become much smaller and less significant
in the overall economy as some of its current functions become irrelevant. Any such
changes could take place rather painfully, for the state never relinquishes power
easily.
As in past transitions, new technology will demand changes in educational
content and practice. The new work force will have to be much better educated and
informed than in the industrial age, and the changes will be greater in relative terms
than in the transition from an agricultural to an industrial society. Such education
must be focused on the ability to change and adapt over a person's working years,
for jobs may well come and go at a rapid rate--this may be at least a medium-term
feature (if not a permanent one) of the information age. If most people are faced
with changing jobs or professions repeatedly, they will have to be broadly educated
beyond any narrow speciality in order to cope (Chapter 10 will cover the topic of
education in detail).
If industry and government will be transformed, then so will the unions--the
third institutional leg on which the industrial age has stood. These organizations
were created to provide a means of representing relatively uneducated workers'
interests to a possibly exploitive management. Some models of the information age
suggest that in a society where it is difficult to keep secrets, cooperation may be
easier to establish and confrontation may be frowned upon. New industries tend not
to inherit either social baggage or technique from the old ones; they use a
substantially different work force and often locate in different places.
According to Robert Blauner (Alienation and Freedom), those in the new
industries find their work more satisfying and less alienating than do those working
in typical factory jobs. With the advance of technology, drudgery work is reduced or
eliminated and work requiring a substantial intellectual component is created.
Workers can become more skilled and achieve the high levels of job satisfaction
that typified earlier types of craft occupations. Perhaps the difference is that people
felt themselves to be servants when they tended the old machines, whereas in the
new order, they perceive that the machines work for them. Of course, this analysis
is true only of the larger picture. It tells us nothing about the many unskilled
workers who become permanently alienated from employment when replaced by
automatons and their small cadre of highly-skilled technicians. The latter have both
education and jobs, and have every reason for self-satisfaction.
This satisfaction has other consequences. Workers in the newer industries,
and in white-collar positions generally, have not joined their industrial counterparts
by unionizing in any great number. The percentage of union members among all
workers in North America peaked some time ago and has declined rapidly in recent
years. There is every reason to suppose this trend will continue and even
accelerate. Unions that merely hold onto their traditional power bases seem
destined to gradually lose members and power. They may disappear as the jobs
they now represent vanish. Others might change into consumer associations or find
some other way to represent the interests of service-industry workers. Some
observers predict that the traditional trade unions will not have any substantial
influence in the long term. In the shorter term, certain unions may gain both
members and power, depending on their circumstances. However, models of the
information age seem to have little room for traditional industrial unions, so their
survival may depend on a willingness to change substantially.
On the other hand, professional organizations, such as those representing
nurses, doctors, lawyers, accountants, and so on, could well be formed for computer
scientists and other professional knowledge workers. In the 1960s and 1970s the
job-description buzzwords were "technician" and "engineer;" for the 1980s and
1990s the buzzword has been "professional." To some extent professional
organizations will be like unions for they will likely inherit some of their politics and
a few business managers from the traditional labour movement. As they grow in
influence and power, they could also come to resemble guilds with high entrance
barriers and elaborate codes of what constitutes the proper practice of the
profession. They might concentrate on raising their members' standing and status in
the community, rather than on making strictly material gains. They might convey
social status alone, and have little practical power. However these scenarios are
speculative, for the formation, growth, and role of political parties, professional
societies, unions, and other organizations is subject to too many unknowns to
predict reliably. A single accident, scandal, malpractice suit, or election can make or
break the power of any group. Thus, of unions and the like, it is only possible to say
that, like all institutions, they must develop and change with society or vanish as
they lose their vitality.
One thing that can be said with some assurance is that any such organizations
whose sole interest is maintaining the status quo of their own power and influence
will surely go the way of the butter churn, the horseless carriage, the keypunch
operator, and the silent movie.
Changes in the workplace will not be confined to the industrial scene. Many
office tasks that are today performed by the white-collar counterparts of the skilled
factory worker will also become obsolete. The number of secretaries, receptionists,
and clerks could decline dramatically as Metalibrary facilities develop. Past
projections of the advent of a paperless office proved to be erroneous--there is now
more paper than ever--but this was because the emerging technology was fitted
into and used to promote existing ways of doing things, rather than providing new
models for office work. This is to be expected of new techniques, which are
generally used at first only to supplement existing practice and do not generate new
ways until a certain critical mass is reached. This example also indicates the
dangers inherent in making projections. All of them (including the ones in this book)
are likely to be partly if not wholly wrong.
The Metalibrary (even as it now exists on a small and disconnected scale)
does provide a new office model by making most paper files unnecessary, for it
renders obsolete many of the clerical jobs in countless offices, including most of those
in the government sector. Such jobs are still done by people for two reasons.
First, the power and productivity of existing facilities for electronic data
search, document creation, information storage, and paperless communication are
only just being realized (i.e., the Metalibrary as it now exists is so new that it is
being under-utilized).
Second, these facilities are still quite primitive. Problems to contend with
include lack of universal connectivity, fragmentation of data storage, data
inconsistency, and difficulty of existing interfaces to the Metalibrary. Before there
can be a substantial impact on office routines, the Metalibrary must become
completely connected, consistent, fully functional, easy to use, cheap, and offer
access to all public databases and mailing systems. No lesser technology will
suffice, for only a completely reliable, size-unlimited, ultra-fast and convenient
facility with obvious competitive advantages over the filing cabinet can replace the
office routine of the past. If it were made so, even microfilm would be unnecessary,
for documents could be stored in a form reproducible on any terminal.
It would take decision makers some time to get used to a relatively paperless
environment, but competitive advantages would overcome initial concerns about
information security and loss. Backup systems in local versions and on the
worldwide version of the Metalibrary would have to be extensive to earn the trust of
decision makers. Use of the facimile machine, despite it consuming even more
paper, was a step in the direction of the paperless office routine. Once people
become used to the idea of carrying about and using light, portable devices that
allow them to send and receive information anywhere and at any time, they will also
demand much larger electronic storage capacity and other features that will
eliminate paper consumption except when necessary.
Meanwhile, middle management may continue to be a casualty of the workplace
revolution, as each recession in the business cycle squeezes out more workers who
have made it thus far. There is less need all the time for people to collect data and
then filter and summarize it on paper for the attention of senior management.
Already, decision makers can obtain such summaries and form projections on
alternative decisions easily, more quickly, and more accurately from computers on
their own desks (let alone from the Metalibrary) than they could ever get through
relying on several layers of middle management. Improvements in the capacity to
do such things only imperil more mid-level jobs. The task of doing such gathering
and filtering will become more common than ever, and the time required will be
less, because much of it can be automated. Decision makers will be the ones
assigning meaning to the data; they will not need to rely on others to do it for them.
Not all of yesterday's senior management will survive the changeover. Those
who fail to obtain the necessary technical skills for making computer-assisted
decisions will join their less capable middle managers on the unemployment rolls,
their places taken by those who have prepared for the move up.
There may also be less need for in-person meetings, except as an excuse to
visit convention centres in exotic vacation destinations. For those people who do
work at what are now called office jobs, the bulk of what they do could be
accomplished at home rather than by commuting to a central location. Not all such
face-to-face gatherings (meetings and communal offices) can be eliminated, for it is
difficult to take a person's measure, to know who they are, and what their responses
mean except by arranging a personal meeting. Today's executive is also quite
dependent on the business lunch--an institution that can only be maintained by
clustering offices in a central location, and one that would take some time to be
abandoned. There is, in short, a need for some socialization in the conduct of
business--one that machines will not fill, and therefore will not eliminate entirely.
Research for potential decisions can also be contracted out by the decision
maker to experts who work with the Metalibrary, assembling the relevant data into
the desired format, and collecting their fee without leaving home, seeing their
employer, or even knowing who it is. Offers can be made on the Metalibrary for so
much money in return for the solution to a particular problem within a certain
number of hours. The solutions offered could be collected by yet another person and
the contributors paid in proportion to the amount of their ideas that was actually
used in the final decision. This is not much different from present practice, except in
the means of communication, and except for the fact that the largest of the existing
networks are non-commercial, so there is no monetary value in answering the
questions of others.
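The settlement scheme just described is easy to state precisely. The following
Python sketch (with invented contributor names and usage figures) divides a
fixed fee among contributors in proportion to how much of each submission was
actually used:

    # Toy settlement for the contract-research scenario: a fixed fee is
    # divided among contributors in proportion to how much of each one's
    # submission was used in the final decision. All figures are invented.

    def settle(fee, usage):
        """Split fee among contributors in proportion to usage weights."""
        total = sum(usage.values())
        if total == 0:
            return {name: 0.0 for name in usage}
        return {name: fee * weight / total for name, weight in usage.items()}

    # Fraction of each submission incorporated into the final report:
    used = {"analyst_a": 0.5, "analyst_b": 0.3, "analyst_c": 0.0}
    print(settle(1000.00, used))
    # prints: {'analyst_a': 625.0, 'analyst_b': 375.0, 'analyst_c': 0.0}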
Clearly, telecommuting of all types has some advantages for those who are
involved: they can save time and money; those unwilling or unable to commute can
work at home; and fewer cars, freeways, and office buildings are required. It also
has disadvantages. It promotes isolation from other people, a loss of identity with
the employer, and the holding of loyalties to oneself alone. Thus, futurists differ
sharply when discussing forecasts of how large a percentage of the population will
ever work at home. Those who focus on the advantages paint an idyllic picture of
such a life and make extravagant projections indeed.

Your granddaughter does her job right from home. She's a teacher specializing in
exceptionally bright children as well as severely retarded ones. She has never met most of
her students face-to-face because they live all over North America. She's in contact with
them daily by video link on an individual basis. She sets up their daily work schedules and
programs their home learning computers with problems and exercises. She discusses their
daily work with them and guides them through their individual problem areas. No
computer can do that. Because of time zones, her work is over for the day and she has only
to do tomorrow's session planning and student reviews before going to bed tonight. She's
good at her work and is paid well--sometimes by parents, sometimes by local school
boards, and sometimes by institutions. She and her students have the Central Data Bank
available to them twenty-four hours a day. The little red schoolhouse has become the
whole continent.
- Harry Stein in The Hopeful Future

Those who are more concerned with what they see as the dehumanizing and
desocializing aspects predict that few people will ever make the home their
workplace. Roszak (The Cult of Information) sees an eeriness in visions like Stein's--
they are part of what he calls "megahype" employed by information industry people
to sell products and increase the value of their company stock. The true future is
probably somewhere between these extreme visions--fewer offices, but not none.
Any large-scale telecommuting would also have important demographic
implications, for the need to build large cities to host vast armies of office workers
could be greatly reduced. This would profoundly affect patterns of where people
choose to live and how they travel. Cities that failed to attract new residents on the
basis of living amenities would lose population rapidly, and some of them could well
decline into ruin. Certain old-time industrial cities in the United States have already
lost as much as 25 percent of their population due to the departure of the former
industrial workers. If job loss at the office became as substantial, the effect would
be both greater and more widespread.
The most important effects of telecommuting would be felt by workers
themselves. Matters could be worst of all for those who have lived in the inner city--
a group already at the lower end of the economic scale--who might find themselves
even further disadvantaged. Those who lose jobs also lose status and dignity in a
society that has traditionally measured people's worth by what they do for a living.
What is more, much of the traditional strength of the middle class in the industrial
age has been drawn from well-paid unionized factory workers (and lately from
middle management). When these people lose their positions, they often find
themselves unqualified for anything but very low-paying (sometimes part-time)
service-sector jobs, and they suffer a dramatic decline in their standard of living.
Here, for contrast, is Roszak's critical version of the vision of the empty office:

The fully automated office will do for white collar workers what the automated assembly
line has done in the factories: it will "save" labour by eliminating it, starting with the file
clerks and secretaries, but soon reaching to the junior executives and the sales force.
Possibly these casualties of progress will find work at Burger King down the street, where
the cash registers come equipped with pictures, not numbers, or as the janitors who clean
up whatever there is left to clean up at the end of the day--at least until these jobs are
turned over to robots. There may soon be no one left in the high-rise ziggurats of our cities
but a small elite of top-level decision makers surrounded by electronic apparatus. They will
be in touch around the globe with others of their kind, the only decently paid work force
left in the information economy, manipulating spreadsheets, crafting takeover bids,
transferring funds from bank to bank at the speed of light, arranging "power lunches." As
time goes by, there will be less and less for them to do, for even decision making can be
programmed...

At that point, even the corporate leadership will not have to report to the office. Most of
what needs to be done by way of human intervention will be done out of the home. One
forms an eerie vision of the high industrial future: a vista of glass towers standing empty
in depopulated business districts where only machines are on the job networking with
other machines.
- Theodore Roszak in The Cult of Information

Taking a more middle course, others forecast that those who would have jobs
in the new order might simply work fewer hours for higher pay. Job sharing could
become routine--one person working only four hours and someone else the next
four. Or, a person might work seven hours a day for three days a week. More people
would go into business for themselves, and fewer would use a time clock, because
even in working for someone else, salaried contracts would be the norm and hourly
wages the exception. Such people set their own hours, so those who earn their
living through the Metalibrary would keep the system in continuous use around the
world twenty-four hours a day.
The hope of the most optimistic is that the amount of wealth generated by
those who choose to work will be so large that there will be plenty for everyone, and
a guaranteed minimum income will by itself keep the world's population well
supplied with both necessities and luxuries. Even as things now stand, the food
problem is one of distribution, not of quantity. There are people starving to death in
some parts of the world, but there are surpluses large enough to feed them in other
countries. If the loss due to rats and insects alone could be eliminated, the net
availability of food would increase by 30 percent worldwide. Of course, the optimists
also assume the inherent goodness of humanity. They discount population growth
and shifts, and take it on faith that food production techniques will somehow adapt.
They also discount tyrants, wars, famines and plagues as mere "accidents" in the
inevitable upward spiral of progress. History is not really on their side.
It seems likely that underdeveloped nations will at first continue to experience
high population growth as the available wealth increases. At some point, they could
follow the industrialized West and have stable or even declining populations. For a
time, present-day third-world countries would have to erect trade barriers to protect
their human-run factories from the cheaper competitive products of the West's
robotic plants. However, underdeveloped countries would experience both industrial
revolutions in close succession, and at least some of them seem destined to catch
up eventually, though perhaps at the cost of even more social upheaval than in the
more developed nations.
High unemployment during the transitional time could cause severe social
dislocations, rising crime, and the possibility of the new social order being cut off in
violence and poverty before even getting started. There are other problems to
overcome, and the new society will have its own difficulties as well. There will still
be workaholics trying to get ahead. Some will still be bored or hate their lot in life
and will always be dissatisfied. Despite the optimism of some observers, there will
probably still be those who are richer and those who are poorer, and the rich will
still have their status symbols and privileges, even if the means by which they
obtain both is very different.
Is automation, then, a good thing? Perhaps, if by "good" is meant only an
increase in the availability of material goods. It will also likely mean much more
time for everyone to do what they choose, even if some of this free time is enforced
by unemployment. If "good" means morally good, the answer is unknown, for
although technological advances in general are anything but morally neutral,
specific ones often turn out to have more "good" applications than others. This is
something that is difficult to guess ahead of time even when the motives for
developing the particular technology are known.
Some of the problems with automation have already been touched on in this
section; in the next, certain of them will be considered in more detail. Some of the
other implications of the new industrial revolution and of the role of automated
machinery will be examined in Chapter 6.

Profile On ... Motives

Eight Reasons to Automate

1. To reduce overall costs


o If the cost (amortized over some number of years) of a capital purchase that
replaces a worker is less than the wages and benefits that would be paid to the
worker for the same number of years, then automation has a direct and irresistible
effect on the bottom line.
NOTE: workers' fringe benefits may add 30% to salary. Suppose, symmetrically,
that interest and maintenance add 30% to the robot's annual capital charge (its
purchase price amortized over ten years). Then, whenever the cost of the robot is
less than ten years' salary, it is cheaper than the worker (such figures may vary
widely; a worked sketch of this arithmetic follows the list of reasons).
o Other savings can come from reduced heating, cooling, and lighting bills, for
robots can work in harsher environments. They do not need lunch rooms, vending
machines, recreational facilities, company social events, or daycare facilities -- all
these affect capital as well as operating costs.
o The more widespread the use of robots, the lower the cost of making them,
and the more cost effective it becomes to use them. Some manufacturers use
robots to make robots. Computers already design robots, and the human input is
decreasing.
o Wages go up with inflation. The principal cost of servicing a capital loan is
fixed, only the interest rate and maintenance charges are affected by inflation.

2. To eliminate unreliability
An automaton can be programmed to do the required task exactly the same
way every time, producing a higher-quality and more uniform product (e.g.,
welding).

3. To overcome a shortage of skilled labour


At times, workers with particular skills may be in short supply, and those who
are available command high wages. It is usually easier to make or reprogram more
machines on short notice than it is to get more skilled workers quickly.

4. To achieve results that would be impossible manually


o Hazards: Remote robotic manipulators can work close to the core of a
nuclear reactor, or with very hot or cold parts. Some can work in the vacuum of
space, in poisonous gases, or underwater.
o Strength: They may be built to lift heavier parts or apply more force or
pressure in an assembly than could a human.
o Precision: They can be designed to work on a microscopic scale with a
precision that a human cannot achieve.

5. To increase output from a given factory floor area


It may be possible to place robots closer together or run them faster than is
practical for human workers. In places where space is at a premium (e.g., Japan)
this may be the most important consideration.

6. To lower inventory
o A faster assembly line implies that fewer of the raw parts are tied up in the
process.
o If inventory of finished product grows too large, a robotic assembly line can
be closed down simply and cheaply, and re-started easily. The cost of either with
human workers can be very high.
o Robots may be employed in the warehouse to achieve efficiencies similar to
those obtained by the ones on the manufacturing floor.

7. To improve flexibility
o It may be easier (and cheaper) to reprogram a robot than to retrain a
human worker.
o The more capable such machines become, the more feasible it is to use
them for small volume production runs, and even one-of-a-kind or made-to-order
manufacture.

8. To improve market share

Anything that reduces costs and improves efficiency and quality relative to
the competition in related industries can increase market share. Improved sales can
lead to other economies of scale, further reducing costs.

Is Automation Inevitable?

"In any repetitive manufacturing process, 95% of the shop-floor work-force


can be eliminated ... Manual skills will no longer be marketable as such." -- David
Bell (Employment in the Age of Drastic Change)

"robotization now seems imperative for car manufacturers if they wish to


remain competitive."
"So we move towards the factory that has just one man and a dog: the dog is
there to make sure no one touches the machinery, and the man is there to feed the
dog!" -- Christopher Rowe (People and Chips)

"Eventually, robots could do all the robot-assembly work, assemble other


equipment, make the needed parts, run the mines and generators that supply the
various factories with materials and power, and so forth. -- Eric Drexler (Engines of
Creation)

5.4 Some Issues in Automation and Robotization


As indicated in the last section, the chief motivation for automation, as well as
its chief effect, is to reduce the number of workers and save operating costs, while
producing more goods. This is an illustration of technique at its best (or worst), for
in this case the search for efficiency would clearly result in massive job
displacement if taken to its fullest extent.
Whether the apparent material benefits are worth the disruption can be
debated with good arguments on both sides. This situation however, does seem to
illustrate the irresistibility of technique--even if one is capable of assessing the
broader costs, automation will still take place because it produces more efficient
results for the business. Also, the important ethical and social issues do not all lie at
the start of the path, for the road is partly travelled already and the way back is cut
off. Rather, they are found along the way, and relate to the appropriate responses
that can be made to the process of automation. Only a few will be considered here.

Who is Responsible For Retraining?

It was remarked in the last section that relatively more of the future workers
may be professionals, taking charge of their own education and training and
contracting their services. Yet this route cannot be taken by everyone in the
present-day work force. The typical assembly line or factory workers facing job-
threatening automation will need considerable education and/or retraining to qualify
for any new job, much less to take charge of their own destinies. Faced with a
choice between unemployment and an arduous re-education, many will slide onto
the welfare rolls, not as an active choice but as a passive one, for nothing in their
background convinces them of the value of the harder path.
What is the ethical obligation of the other parties, including government and
employers, to the large numbers of workers who are thus displaced? Surely the
ethical imperative to assist others to be whole persons implies at least an offer to do
the necessary retraining to allow re-employment. The employer may prefer, in
consideration of the bottom-line profit, to simply terminate an unneeded worker.
However, the months and years of employment have created a mutual bond and
obligation (a social contract) that cannot exist between owner and machine but that
always does between employer and employee. The employer who breaks this bond
and discards the worker like a worn-out part creates bitterness and resentment that
are certain to cost the broader society far more than job-retraining would have. The
implicit social contract that the employer has with society as a whole binds both to
act responsibly. Both therefore have an obligation to the person whose job has been
automated to help make reasonable alternatives possible, or both will suffer the
consequences of exaggerated class structure and broad social unrest (a pragmatic
consideration). Government also has a responsibility to promote social stability, if
for no other reason than (utilitarian) survival of the state itself.
The difficulty is that such responsibilities cannot easily be seen by employers,
for they do not benefit the immediate bottom line, and they are hard to put into law.
Some companies are too small or too unprofitable to afford such education. Yet
unless retraining schemes are universal, a firm that does act responsibly in this way
may become uncompetitive if others in its industry ignore those same
responsibilities. Since life spans will probably increase in the future (see Chapter 7)
and national economies will continue to change rapidly, the typical employee may
need to retrain many times over a working lifetime. This projection also argues for a
universal job retraining scheme, one in which employers, government, workers, and
unions all participate.
A possible solution would be a comprehensive savings/insurance plan into
which all parties pay--something similar to present-day pension and unemployment
compensation schemes. If a job is automated, the employer could be required to
increase payments to the plan for a period of time. On the other hand, after a
certain number of years' service, a worker ought to be able to take voluntary
retraining at no additional cost, much as one might now take early retirement. Or,
perhaps industry could learn from the sabbatical system used by academics to
recharge their intellectual batteries every few years. After six years of service, a
tenured university professor can normally apply for a one year leave at partial pay
for the purpose of further study. Such plans have the advantage of recognizing the
mutual obligation of all parties to retrain workers; they have the disadvantage of
creating yet another payroll deduction and yet another administrative headache.
Whatever it is called, though, some such retraining insurance or educational pension
plan may well be necessary in the light of events.
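
The following sketch suggests, in Python, how contributions to such a fund might be
modelled. Every rate in it is an invented placeholder, not a proposal; question 11
at the end of this chapter invites the reader to supply defensible figures.

    # A minimal sketch of contributions to a retraining-insurance fund.
    # All rates are invented placeholders, not proposals.
    def annual_contribution(payroll, employer_rate=0.01, worker_rate=0.005,
                            government_match=0.5):
        """Yearly amount paid into the fund on behalf of one firm."""
        base = payroll * (employer_rate + worker_rate)
        # Government matches a fraction of what employer and workers pay.
        return base * (1 + government_match)

    # A firm with a $5,000,000 payroll would contribute $112,500 per year:
    print(annual_contribution(5_000_000))  # 112500.0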

What About The Unretrainable?

Such retraining plans do not provide the whole answer, however, for there
would remain a core of workers who would be unwilling or unable to accept
retraining. Since the newer jobs require more technical skills than the jobs being
obsoleted, they also demand a more educated work force. For those who have held
menial jobs because they could not do anything else, a technical education may not,
in many cases, be a realistic prospect.
No society could afford to have such people simply remain unemployed and
collecting welfare the remainder of their working lives, for large numbers of jobless
people have always been a destabilizing force in the past, and there is no reason to
suppose that such a situation would not also lead to widespread rioting and
destruction in the future. That is, the "haves" cannot for long wall themselves off
from and ignore the "have-nots," for their own way of life is also at stake.
The number of unskilled labouring jobs in traditional sectors will continue to
decline; the challenge is to find new jobs for those to whom it is not practical to give
professional or technical educations. Since such jobs can only be created in the
service sector, it is easy to predict great increases in, say, tourist-industry
employment. This may be enough, but if it is not, there could be pressure to hire
personal servants, estate caretakers, cooks, and maids--even to put human crews
on farms or construction projects that could be done safer, faster, cheaper, and
better with robots. There will also be some pressure to make intelligence
enhancement devices and drugs (see Chapter 6), but it seems unlikely that these
could soon be made universally available and thus they alone will not solve the
problem.
The utopian ideal of some science fiction portrays every future citizen as idly
rich, dabbling in professional activities while the robots do all the work. This vision is
unrealistic even at current population levels, let alone at the higher ones that will
soon prevail. If the expansion of service and information sector employment fails to
absorb the displaced industrial work force, considerable creativity in job creation
may be necessary to avoid massive social unrest. This problem could be severe
even in the developed countries, testing the skill of the most democratic, honest,
and caring of governments working with relatively prosperous citizens. In
underdeveloped countries, the potential for disaster is great. It is not difficult to
imagine the rise to power during a time of civil unrest of a tyrant who decides to rid
his nation once and for all of "undesirable" elements, slaughtering his own people in a
dreary repetition of Nazi-like themes of racial or religious purity. It would also not be
difficult to imagine wars between the "have" and the "have-not" nations or civil
unrest in the prosperous ones. Change exposes fragility in social contracts, and it
can create more. The trick is to keep the fragile from breaking.
In the very long run, increased prosperity historically results in a significant
decline in birth rates, but on a worldwide scale, no such remedy can be hoped for
until several more decades have passed.

Techniques of Automation

On a smaller scale, the methods chosen for introducing automation to a
workplace, or for implementing any new system, can have a considerable effect on
the morale and the jobs of the workers. The most common strategies are:

o Cold Turkey On the day assigned for the changeover to the new system, the old
methods cease and the new replace them at once. Depending on the employer, this could
involve either an extensive training program ahead of time, so that all the employees are
ready to take up their new responsibilities, or a wholesale replacement of personnel. The
latter approach is fraught with peril, not only for the workers displaced but also for the
employer, who may find that employees awaiting the day of unemployment do very
little work or even engage in sabotage. Extensive retraining is also not without its
difficulties, for even if the workers are able to take part in the actual planning of the new
system, there are bound to be many errors in the period following the change. There may
still be the problem of disgruntled workers, because automation is usually undertaken to
achieve personnel efficiencies, and this implies that there will eventually be fewer workers.
Moreover, as many organizations have discovered after such a change, few new products
are either what they were advertised to be or bug free, and most existing data sets have
corruptions or anomalies that are revealed only in the changeover. In general, this is the
worst possible way to introduce an automation or a new system.
o Phased Here, the change to a new system is made gradually, with some parts being
operated in the old way while some are switched to the new. This method has the
advantages of being less abrupt and disruptive and may be less error prone than the cold
turkey changeover. However, there are inherent inefficiencies involved in partial
automations or introductions, and the employer who does things this way must be prepared
to wait until some time after the completion of the process to realize the expected gains.
Indeed, since parts of the enterprise are operating under one system and parts under
another, there may be some losses, for additional employees may be required during the
changeover.
o Parallel This is the most costly of the three methods of implementing a new system. It
involves running both the old and the new side-by-side for a period of time, comparing the
results, and working out the bugs in the new through experience. In most manufacturing
systems, the parallel method is not practical, and one of the first two methods must instead
be chosen. However, in accounting systems, or student or employee record systems where
great care must be taken to ensure the accuracy of the results at all times, it is unwise to
place any trust in a new system until its results have been carefully compared to those of
the old over a period of time. While such caution may be necessary in some cases, it may
also imply the use of parallel staffs to operate the two systems. If the new staff is intended
to replace the old, there will be great tensions; the old employees cannot be expected to
work in the employer's interest during this time, for their energies will be put into finding
new positions. If, on the other hand, the staff operating the new system is temporary and its
other function is to train existing personnel in the new techniques, the atmosphere in the
workplace may be better, though there may still be those whose jobs have been lost, and
they are certain to resent the others. In general, parallel implementation, done properly, is
the safest method.
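
The caution behind the parallel method can be expressed as a small sketch (in
Python; the two system functions and the record set are hypothetical stand-ins):
run both systems over the same inputs and trust the new one only when the
disagreement list stays empty.

    # A minimal sketch of a parallel run: old and new systems process the
    # same records, and every disagreement is collected for human review.
    def parallel_run(records, old_system, new_system):
        disagreements = []
        for record in records:
            expected = old_system(record)  # the trusted legacy result
            actual = new_system(record)    # the candidate replacement
            if expected != actual:
                disagreements.append((record, expected, actual))
        return disagreements

In an accounting office, for instance, several complete payroll cycles might have to
pass with an empty disagreement list before the old system is finally retired.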

Thus, the method chosen for automation can have a profound effect on the
workplace. In the industrial age, the perception on the part of many employers was
that the workplace belonged to them and they could do as they pleased with it,
without regard to the effect on the workers. The perception on the part of many
workers in the industrial age was that employers tended to exploit them, so they
banded together in unions to create a power base of their own and to force changes
in working conditions. The very idea of the information age, however, carries with it
the assumption that workers will not only be fully informed about proposed changes
but also that they as professionals will more often be in control of their own
workplace. It will be they who will design and implement changes, for they will be
the company. They will be less likely to be in opposition to the shareholders, as
represented by the board of directors and officers, for they too will be shareholders
and operators. In such circumstances, the problems associated with automation will
not disappear, but the perception of them will, for change will not be something
imposed from above; instead, it will be a result of the informed collaboration of
professionals. Of course, conflict in the workplace will not be eliminated altogether
by such changes; it will merely shift to other focal points.

Why Automate?

Some of the considerations mentioned thus far lead to a questioning of two
basic assumptions most forecasters make--that automation is both desirable and
inevitable. Perhaps it is neither; perhaps it is one but not the other.
Both benefits and problems are easily seen when considering the process in an
abstract way. For the people directly involved, though, such dispassionate
considerations may be impossible. Where automation is possible and does bring
economic benefits, it will surely be done, whatever the other consequences (unless
constrained by some higher authority). Perhaps it will be necessary to require that
such consequences be examined in each case and specific provision made for
displaced workers before proceeding. It may even be that most automations will
proceed slowly enough that no serious large-scale problems develop, though that
would not release individual employers from their obligations to workers.
It may also be that there are other solutions to the displacement problem that
use yet-to-be-deployed technologies, and some of these possibilities will be
examined in later chapters. It is clear that this second industrial revolution
(automation) can no more proceed without social consequences than did the first,
and that the choices involved in dealing with such problems are to a great extent
ethical in nature.

5.5 Other Industrial Futures


Automation is not the only influence upon the industries of the future, even
though it is the major one that can now be measured because it is well underway.
There are two other technological developments that may eventually play important
roles, but both are in the very earliest stages, so detailed comments on their long-
term effects are highly speculative.

Space--A Third Industrial Revolution?

On first consideration, it might seem unlikely that much manufacturing will
ever be done off this planet, because of the great expense and logistical difficulties.
However, there are products that may be worth the trouble. For instance, certain
alloys are very difficult to mix homogeneously within Earth's gravitational field. The
more massive constituents either form into globules or collect on one side of the
molten mix, preventing the desired alloy from forming when the metals are re-
solidified. Such mixtures may also have strength-robbing air bubbles because the
metals do not completely lose all their gas during solidification. These problems do
not occur in the zero gravity and vacuum of space, and there may well be alloys
that are sufficiently valuable, say, for microelectronics or jewelry, to be worth
manufacturing in Earth orbit.
Indeed, orbital environments are probably the best place to make alloys from
which to build the space factories and habitats themselves, for the desired materials
will have properties utterly unlike those required by Earthbound construction. Down
here, a large building must support its own weight, and this is the first consideration
in erecting its framework. In space, structural strength need only hold a building
together against rotational forces; the materials need to combine strength with low
mass, for they need not hold anything "up." Alloys that can do this and continue to
perform well in a space environment are likely to be those made in the same
environment.
There are obstacles, of course, to any such construction on a large scale. The
main one is the expense and difficulty of supplying raw materials from the Earth
below. However, once such manufacturing reached a certain scale, it could become
economical to mine raw materials in the asteroid belt or on the moon.
Transportation to Earth orbit from either location would be time consuming but
relatively inexpensive. Or, manufacturing facilities could be located on the moon
itself, where the vacuum is nearly as good as in orbit and gravity is only one-sixth
that of Earth.
It is not clear whether the optimistic projections some make (of large numbers
of people living in orbital habitats) are well founded, though, even if substantial
manufacturing were transferred there. After all, if Earth-based factories (and
research or military establishments) would need few workers, the same would apply
to those in space. For large communities to be built there, some other economic
justification would have to be found, and it is not yet known what this could be.
Other substances whose manufacture might be easier in space include
various chemicals, particularly pharmaceuticals. For very fine work involving precise
reaction conditions and requiring fast and uniform mixing, zero gravity may be
ideal. For example, if it turned out that a cure for some fatal disease could best be
made in orbit, it surely would be.
Such work, however practical it may turn out to be in the long run, is still very
experimental. Only when it becomes clear to entrepreneurs that there is money to
be made by commercializing space will they rush to construct orbiting factories.
One way to encourage this would be for the U.S. government to allow private
companies to bid for the delivery of materials to Earth orbit, for such delivery can
certainly be done for a much lower cost than it is at present. On the whole, however,
suggestions that a move of industry off planet will constitute a third industrial
revolution (already underway) may be somewhat premature. Indeed, tourism might
be a stronger motivation to make money from space than is manufacturing.

Nanotechnology and Manufacturing

At the opposite end of the size scale from large space factories are the
microscopic technologies of the silicon chip and the even smaller molecular-scale
technologies. It is already possible to etch very small electronic features on glass,
but even these are still hundreds of atoms wide. Yet living cells contain much
smaller protein factories and assemblers that are capable of working atom by atom
to build very specific molecules. It is, therefore, easy to wonder whether such
assemblers can be made to order, like any machine, and directed to build the
desired molecules by a chemist, engineer, or geneticist.
Eric Drexler (Engines of Creation) uses the term "nanotechnology" for work of
this kind and observes that some success in the engineering of proteins has already
been achieved. He sees the first generation of nanomachines as programmable and
able to work like cellular organelles to build molecules into artifacts according to
patterns coded into some auxiliary molecule acting like a "memory" enzyme. He
terms such machines general purpose assemblers, for they could build a variety of
molecules, not just proteins. In particular, they could make more robust, much
smaller, and more specialized assemblers that could operate on atoms rather than
on molecules.
These specialized assemblers would first have to build many more copies of
themselves, or the quantity of the intended end product would not amount to much.
Once this step was complete, they could in theory manufacture any amount of the
target substance out of its atomic constituents--from houses to hot dogs to
electronic circuits only one or two atoms wide. For example, carbon atoms could be
laid down in the correct lattice to make diamond fibres that would give, say, engine
parts great strength. Drexler envisions nanomachines that could even build entire
rocket engines or computers in a fluid environment containing the raw elemental
materials. Other potential nanotechnologies will be mentioned in later chapters.
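
The need for self-replication is simple arithmetic: one molecular assembler could
never produce a visible quantity of anything, but a population that doubles each
generation grows astronomically. The sketch below (an idealized doubling model,
not Drexler's own figures) counts the generations required.

    # A minimal sketch: how many doublings turn one assembler into a
    # population of a given size. An idealized model with no losses.
    def generations_to_reach(target_population):
        population, generations = 1, 0
        while population < target_population:
            population *= 2
            generations += 1
        return generations

    # About eighty doublings yield more than 10**24 assemblers -- on the
    # order of a mole of them -- starting from a single unit.
    print(generations_to_reach(10**24))  # 80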
The potential for large-scale manufacturing by such methods is difficult to
assess. While it is true that only a few breakthroughs may be necessary to start on
this route, it is not clear that nanomachines will necessarily be better or more
efficient for large-scale work than ordinary machines. Assuming that they are
developed, it seems more likely that such assemblers will be used principally for
very fine work on speciality molecules and in chemical, genetic, and biological
applications than on the making of consumer goods. At least, this is likely to be the
case for some time until the technology matures.
However radical the changes that nanotechnology might bring, therefore, they
may not represent another industrial revolution, but may instead have their impact
on society in other ways that are less direct. Some of these will also be touched
upon in later chapters.

5.6 Summary and Further Discussion

Summary

What is called here the second industrial revolution is the process of
eliminating human workers and machine operators from the industrial scene by
building and deploying devices that are sufficiently general purpose and
programmable to operate with little or no human supervision.
The advantages of robotized manufacture are considerable, ranging from
ultra-high reliability to lower operating costs, and the ability to redirect assembly
lines without either re-tooling or retraining. Commercial advantages include higher
quality goods at lower cost in greater variety and the ability to manufacture to
individual orders.
The problems generated by the use of robots centre on large-scale
displacement of the existing work force. Retraining and replacement are necessary
in order to keep unemployment from rising to disruptive levels. Whether concurrent
economic changes will be rapid enough to absorb the released workers is not yet
known. Certainly, such problems generate ethical issues for those directly involved,
as well as for the broader society.
Other factors that may influence future industry are space-based
manufacturing and nanotechnology. While both seem poised for near-term
breakthroughs, it may be some time before either has a large-scale influence.

Discussion Questions

1. Desk jobs tend to be sedentary, having adverse effects upon general
health, increasing the probability of heart disease, hypertension, and obesity.
Discuss probable effects on general health of a large increase in the percentage of
the population employed at desk jobs.
2. Discuss the probable effects on farming of the second industrial revolution.
3. Defend this thesis: The second industrial revolution will decrease the
percentage of the population living in large cities.
4. Now defend the opposite position; use "increase" rather than "decrease."
5. The optimistic view is that despite robotization, underdeveloped countries
will catch up to developed nations. Write the most pessimistic scenario and defend
it as more realistic.
6. You are the owner of a small snowmobile factory in Quebec that employs
about 100 people and is in fact the major employer in your town. These people have
nearly all worked for you for more than ten years and are completely dependent on
these jobs. You have just learned that your major competitor is about to robotize its
factory and will be able to sell snowmobiles for half your retail price. You may either
follow suit, laying off 75 employees, or see your business go bankrupt in a year,
costing everyone their jobs. What should you do?

7. What is the likelihood (or unlikelihood) that a "mad engineer" could develop
and build an army of robots to conquer the earth? Answer in practical, well-
reasoned and well-justified terms, please.
8. In the previous chapter, concern was expressed about the balance between
information availability and privacy. The reduced need for people to congregate for
work also cuts down on social interaction and promotes individualism. Discuss the
advantages and disadvantages of this aspect of automation.
9. Many believe that people have traditionally found self-worth in their jobs.
How will they do this if most people are essentially self-employed, or not employed
at all?
10. What effect will there be on pollution in a more highly automated society?
11. In the text, a retraining insurance scheme was suggested. Flesh out this
idea to a detailed proposal, complete with appropriate premiums for two or three
industries.
12. Attack the suggested retraining scheme and show why it cannot work.
Then find another solution for the same problem and show why it is superior
economically and/or ethically.
13. Refer to situations like that in question 6. Now discuss carefully the ethical
issues involved in job displacement for the worker, employer, and for government.
14. Make a case for transferring all manufacturing off the Earth's surface.
What ethical issues are involved? Deal with them in your discussion.
15. Research the "mass driver"--a device for removing raw materials from the
Moon to Earth orbit--and discuss its operation and economics in the light of the level
of space manufacturing activity you think is likely.
16. Research the arguments for building large-scale habitats in space. Now
argue for or against such projects in detail. Address specifically the oft-raised
objection that it is unethical to embark on such projects while there are still people
who are hungry and in poverty.
17. Write a report summarizing the major potential applications of
nanotechnology. How likely do you think each is, and why? What ethical issues need
to be addressed?
18. Research the extent to which transportation is now automated. Consider
railways and airlines and describe their attempts to automate traffic. Now, propose
a way to automate automobile and truck traffic, or argue convincingly that this task
is impractical.
19. Suppose that passenger transport via commuter railways, aeroplanes, and
possibly cars could be automated so that no human pilots or drivers were
necessary. Ought such technology be implemented if it became available? Why or
why not?
20. Re-read the quotation from Stein in section 5.3 on the woman who
teaches from her own home. Now list the assumptions about the future society, its
politics, and its social norms that Stein makes. Are these assumptions reasonable?
21. Argue for or against (economic, political and ethical grounds) Stein's
specific assumption that teaching children from the home via the Metalibrary is
such a good idea that it will become commonplace.

22. An industrial robot is being used to move parts from a tray to an assembly
line. It is enclosed in a security fence. A technician turns the robot off and enters the
fenced area to effect repairs on the assembly line. While she is there, another
worker (who does not see her because of the fence) re-activates the robot which
moves up against the technician, trapping her against a piece of machinery and
crushing her to death. Discuss the degrees of liability here. How much attaches to
(1) the technician herself for not shutting off the power at the breaker box and for
failing to post the work site, (2) the co-worker who turned the robot back on, (3) the
owner of the plant, (4) the builder of the robot, (5) the builder of the fence--intended
for protection, but instrumental in the death, (6) society as a whole for not somehow
preventing the accident.

Bibliography

Asimov, Isaac and Frenkel, Karen A. Robots--Machines in Man's Image. New
York: Harmony, 1985.
Ballard, Edward Goodwin. Man and Technology--Toward the Measurement of a
Culture. Pittsburgh: Duquesne University Press, 1978.
Bell, David A. Employment in the Age of Drastic Change--The Future With
Robots. Tunbridge Wells, U.K.: Abacus Press, 1984.
Blauner, Robert. Alienation and Freedom. Chicago: University of Chicago
Press, 1964.
Drexler, K. Eric. Engines of Creation. Garden City, NY: Anchor Press, 1986.
Drexler, K. Eric, and Peterson, Chris. "Nanotechnology." Analog (Mid-December
1987): 48-60.
Hall, Ernest L. Robotics--A User-Friendly Introduction. New York: CBS
Publishing, 1985.
Hunt, V. Daniel. Smart Robots--A Handbook of Intelligent Robotic Systems.
New York: Chapman and Hall, 1985.
Jenkins, Clive, and Sherman, Barrie. The Collapse of Work. London: Eyre
Methuen Ltd., 1979.
Leach, Donald, and Wagstaff, Howard. Future Employment and Technological
Change. London: Kogan Page, 1986.
Menzies, Heather. Women and the Chip. Montreal: The Institute for Research
on Public Policy, 1984.
Naisbitt, John, and Aburdene, Patricia. Re-inventing the Corporation. New
York: Warner, 1985.
Naisbitt, John. Megatrends. New York: Warner, 1984.
Peccei, Aurelio. One Hundred Pages for the Future. Elmsford, NY: Pergamon,
1981.
Roszak, Theodore. The Cult of Information. New York: Random House, 1986.
Rowe, Christopher. People and Chips--The Human Implications of Information
Technology. London: Paradigm, 1986.
Sherman, Barrie. The New Revolution--The Impact of Computers on Society.
New York: Wiley, 1985.

Chapter 6
The Intelligence Revolution
Seminar - "What is Intelligence?"
6.1 Building Thinking Machines
6.2 Simulating Human Intelligence
6.3 Augmenting Human Intelligence
6.4 Issues in Artificial Intelligence
6.5 Summary and Further Discussion

6.1 Building Thinking Machines


Can a machine ever be regarded as intelligent? British mathematician and
theoretical computer scientist Alan Turing proposed in 1950 what he called the
"imitation game." The person performing the test sits in a room that has two
computer terminals at which questions can be typed. One is connected to a room
where a human responds to the questions, and the other has a computer generating
the responses. The tester engages in a lengthy conversation with the two
concerning any topic, such as the weather, sports, politics, mathematics, and so on,
and then decides which responder is the human and which the computer.
Turing proposed that one may regard the computer as intelligent when it is
no longer possible to distinguish between the two any more reliably than by
guessing--that is, when the tester guesses correctly which respondent is human
only 50 percent of the time. This is now known as "Turing's test" and is commonly
regarded as fulfilling every practical need for the verification of a machine as
intelligent in the human pattern. Consider a fragment such as:

Question Are you able to tell a lie?
Answer Yes, I am.
Question Are you self-aware?
Answer But, of course.
Question Do you have a soul?
Answer Please explain what is a soul?
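
An exchange of this kind could be generated by a few lines of keyword matching
against anticipated questions, which is one reason it proves so little. A minimal
sketch in Python (the patterns and replies are invented for illustration):

    # Canned question-answering in the style of the fragment above. A
    # lookup table of anticipated keywords is enough to produce the
    # exchange -- and demonstrates no understanding whatsoever.
    CANNED = {
        "lie": "Yes, I am.",
        "self-aware": "But, of course.",
        "soul": "Please explain what is a soul?",
    }

    def respond(question):
        for keyword, answer in CANNED.items():
            if keyword in question.lower():  # naive substring matching
                return answer
        return "Please rephrase the question."

    print(respond("Are you able to tell a lie?"))  # Yes, I am.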

Such an exchange would not in itself be enough to settle the issue, for these
are obvious questions for a programmer to anticipate and make provisions for. At
the very least, a machine would have not only to claim self-consciousness but also
to defend the claim capably in order to pass the Turing test. At this point there is no
device that can come close to approximating human behaviour this well. Whether it
will ever be possible is a question open to argument--one that some suggest can
only be settled by the machines, acting on their own behalf. While the production of
a machine that can behave in a way indistinguishable from a human (social
intelligence) is regarded by some as the ultimate goal of research in this field, there
are also other more practical and more immediate goals.
The most important of these shorter term projects have to do with knowledge-
based machines, which carry out tasks using processes that in some ways could be
described as human thinking, but that are so far also profoundly different. A
machine whose purpose is the analysis of knowledge is far easier to build than one
that could pass Turing's test. There are four kinds of tasks that current machines
are commonly programmed for: simulations, expert tasks, inference tasks, and
design tasks.

Simulations

One of the most popular computer games is a program to simulate the
controls of an aircraft. The player can practice flying and landing at various airports
under safe conditions, where a crash signifies only the end of the game, not the end
of the pilot's life. The aircraft industry has long had specialized machines for this
purpose, and with the help of computers, these are becoming more realistic. Some
occupy an entire room and come complete with a cabin at the end of a long rotating
arm capable of both motion and acceleration. No matter how elaborate they are,
such simulators are cheaper and safer than employing a genuine aeroplane in the
same exercises.
Simulations are also used in the design of expensive components or systems.
Once again, the aircraft industry is an important user of such devices and programs.
For example, it is now possible to run a graphical simulation of a wind tunnel and
picture the stresses on an airframe using a computer. Expensive though such
machines are, they are cheaper than building the wind tunnel itself, and far less
expensive than testing a prototype of the plane.
Medical schools have found it difficult to obtain cadavers on which students
can practice surgical technique. Artificial cadavers connected to a computerized
analyser allow safe practice of many types of operations, and provide a
detailed summary for the instructor afterwards. The military also uses war games or
simulations to train personnel in order to avoid unnecessary risk to human lives.
Indeed, wherever the cost in money or lives of doing a test of technique or
machinery is very great, simulations can be used to reduce the risk. The goal is to
approach realism as closely as possible, without subjecting the learners to any real
risk beyond the failures necessary to learn.

Expert Tasks

Perhaps the best-known and most successful examples of computers
performing expert tasks are in the field of medical diagnosis. There have been
several programs, including the ones known as CADUCEUS, CASNET and MYCIN,
that have performed at the level of human diagnosticians. The idea behind such
systems is to create a very large data base of diseases and their symptoms
together with probabilities that the two will be associated, and the suggested
treatments together with their success rates, side effects and contra-indications.
Such a program uses a search scheme to take a list of symptoms provided by
a doctor and suggest tests that can be performed to narrow down the possible
causes. Once the results of a series of tests have also been entered, a probable
diagnosis is made and treatment suggested. At each stage of the testing regimen,
the medical practitioner is given a list of the possible diagnoses still "in the running"
together with the probability that each is the correct one. During treatment, the
program can be updated with patient responses and provide expert assistance with
drug dosages and alternate treatments.
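
The search scheme can be suggested by a much-simplified sketch: score each
disease by the stored probabilities of the symptoms actually reported, then rank
the candidates. Real systems such as MYCIN used far more elaborate certainty
calculations; the profiles and numbers below are invented for illustration.

    # A much-simplified sketch of diagnosis by symptom scoring. The
    # disease profiles and probabilities are invented placeholders.
    PROFILES = {
        "influenza": {"fever": 0.9, "cough": 0.8, "fatigue": 0.7},
        "measles":   {"fever": 0.95, "rash": 0.9, "cough": 0.5},
    }

    def rank_diagnoses(symptoms):
        """Rank diseases by the summed probabilities of reported symptoms."""
        scores = {d: sum(p.get(s, 0.0) for s in symptoms)
                  for d, p in PROFILES.items()}
        return sorted(scores.items(), key=lambda item: -item[1])

    print(rank_diagnoses(["fever", "cough"]))
    # [('influenza', 1.7), ('measles', 1.45)]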
Similar software is available in other fields, including the law, metals and
minerals prospecting, and chemistry. It is from the base of such knowledge devices
that hypertext and ultimately Metalibrary systems are gradually growing.

Logical and Inference Tasks

Although inference tasks overlap the expert tasks and will one day merge with
them, these are somewhat different in concept. Here, the major data base on which
the tasks operate is not so much a pool of facts but a history of the success or
failure of previous decisions made by the system. Moreover, the program is
designed not so much for the analysis of data as it is to follow a collection of rules.
Consider, for example, a program designed to play chess. There are two kinds
of rules that must be made available to the program. The first are the rules of the
game, wherein the program is instructed how to move the board pieces legally. The
second group consists of a set of rules of thumb, which are also called heuristics.
These are collections of general ideas about the overall strategies that work best at
various stages of the game. Standard opening sequences, together with such ideas
as controlling the centre of the board and when to trade pieces for advantage, are
all among the chess heuristics. A set of sample games completes the system, and
this collection is added to by the machine as it plays.
The chess program uses the board rules, the heuristics, and the history,
together with brute force computational methods that can examine tens of
thousands of combinations that may arise from any of the possible legal moves at a
given time. The actual move made by the program is based on what generates the
best possibilities two, three or even ten moves ahead. A human chess player does
not work in this fashion, but employs a broader and subtler array of heuristics for
making such decisions. Even the masters of the game do not try to envision all the
possibilities more than a couple of moves ahead of the current position, but play for
strategic advantage based on experience.
Even though chess playing machines are now capable of generating games
that can defeat a world champion, the type of machine logic used here is very low
level. It is based entirely on fast computational ability, and does not even
approximate human thinking. Thus, it does not have human intelligence, even
though it can achieve some of the same results.
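
The brute-force element can be sketched abstractly: examine every legal move,
then every reply, and so on to a fixed depth, scoring the resulting positions with a
heuristic and assuming each side chooses its best option. This is the classic
minimax scheme; the game-specific functions here are hypothetical placeholders
for a real game's rules.

    # A minimal sketch of depth-limited minimax search. The functions
    # legal_moves, apply_move and evaluate stand in for a real game's
    # rules and heuristics.
    def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)  # heuristic score of this position
        scores = [minimax(apply_move(position, m), depth - 1, not maximizing,
                          legal_moves, apply_move, evaluate) for m in moves]
        return max(scores) if maximizing else min(scores)

    # With thirty legal moves per turn, looking a mere four levels ahead
    # already means about 30**4 = 810,000 positions -- hence the reliance
    # on raw computational speed.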
The approach taken by other programs designed to simulate intelligence,
such as EURISKO, developed at Stanford, is rather different from the chess-playing
ones. It relies more on heuristics and less on computational speed and is capable of
developing logical lines of analysis, suggesting new heuristics and rating these with
other heuristics. It can also develop competing heuristics, and remove defective or
parasitic ones. EURISKO has been used to solve problems in computer
programming, mathematics, games, circuit design and various engineering
applications.
This approach may have great potential, for the ability to devise and test
competing models of the universe being studied and to make logical inferences
based on both data and decision history are both essential to anything that will be
called artificial intelligence. Such an approach is also a better simulation of human
thinking than is the purely computational, even though it too depends for its
success on computational ability.

Design Tasks

These also overlap the other two, but are important enough to discuss
separately. Drawing on a knowledge base and sometimes using rules for analysis
and inference, computers are already being used to assist in the design of both
manufactured products and of the machines to make them. They are increasingly
being employed to develop new designs for more complicated devices such as
three-dimensional integrated circuits. It is a short step from this point to the
successful design of more powerful computers using software alone. Better
designing software could then be designed by a computer for installation in the next
machine, and the history of the first designer downloaded as the initial data base for
the second. Thus, computers could eventually design their successors' hardware
and software, and each machine in the sequence would be smaller, faster, and a
better designer. In theory, the process could be continued with more intricate
machines being built in this fashion until the processing power and memory reached
and exceeded that of the human brain. Numerous such elaborate processors,
working in parallel, would be required to control all the functions of the Metalibrary.
One task of an automated designer will be to monitor the available computer
technology and continue the process in order to make improvements to its own
capabilities. Perhaps with some robotic help, these improvements could be
automatic, achieved without human intervention.
At some point along this trail, enough will also become known about the
chemical construction of large molecules to design new ones, and these new
molecules might in turn be programmed to design others. Some researchers have
suggested that people may one day be able to employ virus-sized machines (i.e.,
another form of nanotechnology) for such tasks as studying brain functions neuron
by neuron, locating and repairing arteries blocked by strokes, and eliminating
specific toxins, bacteria and viruses from the body. AIDS, herpes, and other
retroviruses that go dormant and hide inside certain cells for extended periods
might be eliminated from the body by such means.

Current Research

Japan for a time made artificial intelligence (AI) research a national priority in
an effort to secure a lead in computer technology for so-called fifth-generation
machines. Similar work has also been undertaken at various universities in North
America and Europe. This research has been given high priority by funding
agencies of governments, the military and private foundations. As a result, those
making research commitments in areas relating to AI have little difficulty securing
monetary support.
Problems of language translation have also provided one of the strongest
motivations for the Japanese involvement with these projects, for one of their goals
has been machines that can translate to and from Japanese and other languages in
both spoken and written form. At first, it was thought that only a machine that could
do this would be worthy of the label "fifth-generation." However, such
problems have a variety of full and partial solutions in software alone. Such
programs will be employed by telephone companies to allow verbal and written
communication between speakers of different languages, and for the deaf or blind.
They will also be used by cable companies to convert the closed-captions
transmitted with their programs into the language of the viewer's choice. Whether
later systems in this category will be thought of as artificially intelligent in any new
sense of the term remains to be seen.
On the hardware scene, attention is focusing upon parallel processing, in an
effort to break the von Neumann bottleneck associated with the traditional
sequential processing. Machines that rely on a central processing unit must execute
instructions from a stored program one at a time in sequence--a technique
suggested by the mathematician John von Neumann in the 1940s. Even at the limits
of today's fastest experimental processors, such machines are limited to speeds
under a billion instructions per second, or BIPS. If problems can be broken down into
many parts for processing, each portion being handled simultaneously by a different
processor (i.e., parallel processing), the overall rate can go up many orders of
magnitude. Suppose that, say, a PowerPC chip runs at 1 BIPS by itself; a computer
with 10,000 of these working simultaneously would execute 10,000 BIPS or 10
trillion instructions per second (TIPS). Even this machine would have only a small
fraction of the power of a human brain, but if it were reduced to a single chip, and
10 of these were in turn paralleled, the resulting device would be up to 100 TIPS.
The last figure may be close to that of the brain.
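
The arithmetic of the preceding paragraph can be restated in a few lines; the
one-BIPS chip is, of course, the text's assumed starting point.

    # The parallelism arithmetic above, made explicit.
    chip_speed_bips = 1                        # assumed speed of one chip
    first_stage = chip_speed_bips * 10_000     # 10,000 chips in parallel
    second_stage = first_stage * 10            # ten such stages paralleled
    print(first_stage / 1_000, "TIPS")         # 10.0 TIPS
    print(second_stage / 1_000, "TIPS")        # 100.0 TIPS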
Of course, new hardware demands new types of software. Traditional AI work
has been done in the programming languages known as LISP and PROLOG, but
lately the Smalltalk and Prometheus notations have gained some credibility in this
field. In order to work on a multiply paralleled machine, the language must be
modular and have the ability to schedule its processing both sequentially and
simultaneously. Notations such as Modula-2 (designed to replace Pascal) have this
capability, and perhaps the new machines will initially be programmed in some
common descendant of these current programming notations. Of course, devices
that will be used extensively as design tools for other machines and to simulate
intelligence must be capable of programming themselves, and of devising the
languages in which to do this. Ultimately, it may not be necessary to have many
human programmers or human-readable notations, for the machines (in theory) will
be capable of translating voice or other requests into programs and then executing
these without further human intervention.
As indicated, the ultimate goals of artificial intelligence research extend
beyond computation and design to the understanding and emulation of the
behaviour of the human brain. There are two paths down which this research may
lead, and these are examined in the next two sections. The first path, seen as an
ultimate goal by some researchers, is probably the more difficult. The second path,
a somewhat more short-term solution, may be easier to accomplish.
Profile On ... Technology

Expert Systems

What is required to build an expert, or knowledge-based system?


o An acknowledged human expert at performing the task must be available.
o The performance of the human expert must be based on special knowledge
and the application of techniques.
o The expert must be able to explain the special knowledge and techniques.
o The rules used by the expert must each be capable of controlling decisions
for large data sets and combinations of situations.
o The boundaries of the application in question must be clearly defined.
o The use of the system must improve the performance of the expert.
o The expert must remain available, if not as the system operator, then as the
consultant to the operator.

What situations are not good candidates for expert systems?


o Those requiring the application of common sense.
o Those involving open-ended questions.
o Those with large numbers of special cases and subtleties (e.g., language
processing).
o Those in which a belief system or world view is a factor in producing a
decision.
o Those that involve the generation of new ideas from data, rather than the
application of existing ideas to data.

A few examples of early expert systems:


Name Use Developer
NOAH Robotics planning University of California, Santa Cruz
MOLGEN Molecular genetics work Rand Corporation
CADUCEUS Medical diagnostics University of Pittsburgh
MYCIN, PUFF Medical diagnostics Stanford University
DENDRAL Chemical data analysis Stanford University
PROSPECTOR Geological data analysis SRI
ELAS Analysis of oil well logs AMOCO
MACSYMA Symbolic mathematics MIT
SPERIL Earthquake damage Purdue University
IDT Computer fault diagnosis Stanford University/IBM
CRITTER Digital circuit analysis Rutgers University
EMYCIN, AGE Expert system construction Stanford University
ROSIE Expert system construction Rand Corporation
VISIONS Image processing University of Massachusetts
BATTLE Weapons in battle National Research Lab
EURISKO Learning from experience Stanford University
RAYDEX Radiology assistant Rutgers University
TECH Naval task force analysis Rand Corporation
OP-PLANNER Mission planning Jet Propulsion Lab
SYM Circuit design MIT

6.2 Simulating Human Intelligence


Many researchers today believe that the totality of what it means to be
human will be known when they can fully describe the activity of the brain. A great
deal of work has been devoted toward this end. So far, it is known that nerve cells
called neurons respond to electrochemical signals in the brain in a complex
switching operation or neurotransmission taking place at a junction known as a
synapse. The patterns for transmissions through synapses change through time,
and these changes no doubt have something to do with learning and memory.
Some things are also known about the speed at which a synapse operates and
the rate at which signals move through the brain. These turn out to be substantially
slower than in electronic switches, by several orders of magnitude. Even though the
mechanism by which all these take place is not clearly understood, the information
that is available makes some researchers confident that the functions of the human
brain can be duplicated in a smaller and faster electronic device. Note that it is not
necessary to duplicate the human brain itself, only to understand how it works well
enough to build a functional equivalent--that is, a machine that calculates and
stores in a way capable of producing the same results.
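
What a "functional equivalent" might mean at the smallest scale is suggested by
the standard artificial-neuron model: a weighted sum of inputs compared against a
firing threshold. This is a gross simplification of a real synapse, offered only as a
sketch; the weights are arbitrary.

    # A minimal sketch of one artificial neuron: weighted inputs are
    # summed and compared against a threshold, crudely mimicking the
    # switching behaviour of a synapse.
    def neuron(inputs, weights, threshold):
        activation = sum(i * w for i, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0  # fire or stay silent

    # Two excitatory inputs and one inhibitory input:
    print(neuron([1, 1, 1], [0.6, 0.6, -0.4], threshold=0.5))  # 1 (fires)
    print(neuron([0, 1, 1], [0.6, 0.6, -0.4], threshold=0.5))  # 0 (silent)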
If a functional equivalent of the human brain could be constructed, two things
could be done with such a device. The first possibility is to program it or "teach" it
so that it can perform a few simple tasks. Then it can act as an "intelligent"
controller or designer capable of making decisions and acting upon them in a way
that is the electronic equivalent of the fashion in which the human brain works.
It might eventually be possible to build an ambulatory body for this thinking
machine and thus create the mobile robot of science fiction. This perfect
servant/slave could be given instructions such as "take out the cat," "bathe the
kids," and "go to the grocery store."

204
It would presumably be able to carry out such tasks without any further
human intervention. Indeed, it should be possible to let the machine decide when
the house or office routine dictates that something needs doing, and go ahead
without asking, or being told. Whether it will ever be possible to discuss child
rearing, philosophy, or one's emotions with such a machine is quite another matter.
The motivation to spend the enormous sums of money that would be required
to develop such machines would have to be powerful indeed. While building the
ideal butler, maid, secretary, lover, or factory worker might be interesting, it is not
clear that machines are necessary for such tasks, or that they need either a human
shape or the equivalent to a human brain. Perhaps such devices are needed in very
hostile environments where humans could not go--for instance, the ocean floor,
space, the moon, underground, or a nuclear reactor core. To do crucial jobs that
cannot be done otherwise, machines will be built. They need not look or act
anything like a human being for such purposes, and it is uncertain that many robots
ever will. Moreover, such devices will not be used at all in the home unless there are
substantial benefits to cover the enormous cost.
There is a second, and perhaps even more ambitious potential use for a
functional analog of the human brain, however. The most optimistic of AI
researchers are confident that not only can such a machine be built, but that a
human brain could eventually be scanned on a molecule-by-molecule basis and its
activity duplicated in the artificial version. Thus, they believe the totality of the
human's thinking would have been downloaded into the mechanical construct. Give
the electronic brain a mechanical body to match and the result is hoped to be not
just an intelligent robot, but a mechanical copy of the human being.
Since the duplicated human would now reside in a more easily repairable
body, or in no body at all, and since backup copies could be made at any time, the
body of flesh could supposedly be discarded when the downloading was complete.
The net result: immortality would have been achieved in a mechanical form. A
human would cease to be blood and bone and become a cyborg (part machine) or
a fully machine intelligence--with electronic capabilities projected to be many times
as great as those of the bodies they currently inhabit.
The end result of this line of research is supposed to be nothing less than the
ultimate in man-made salvations from death--eternal life in a manufactured body
and brain--not in heaven, but here on Earth. Not only that, but the ability to make
backups means that a person really could be in two places at one time, and merge
the memories afterward into a single copy. If backups were frequently made, even
the fatal destruction of a single unit would cost no more than a few days out of
one's life and experience.
Quite apart from any other ethical and moral problems that may come to
mind, this goal raises an old conundrum, the answer to which may in part determine
whether this is possible.

Is the mind more than the brain?

If on the one hand, the human mind and soul can be expressed
unambiguously as the sum of the brain's electrical parts, then downloading its
activities to a functional equivalent would transfer a copy of the personality from a
"machine made of meat" to one made of electronic parts. This would be regarded
by many as offering a final proof that the empirically verifiable material world is the
sum total of all existence, that the spiritual and supernatural are fantasies, and that
logical positivism is permanently triumphant. On the other hand, if such things as
emotions, friendship, anger, fear, intuition, poetic appreciation, conscience,
intentionality, self-awareness, and the ability to enquire about the existence of God
cannot be expressed as sets of electrochemical impulses, this endeavour may well
fail, for the mind is then more than the brain.
These issues touch on the essence of what it means to be alive and what it
means to be human. Thus, attempts to achieve practical immortality by such means
are sure to touch many raw nerves. Those who oppose such research may say that
there are some things that ought never to be tried. To those who support it, the
potential prize is great enough to pursue at all cost. Furthermore, there is no
stopping such work now that it has begun. To forbid such research and make the
prohibition stick is impossible, as long as qualified researchers are not yet satisfied
that the question has been answered one way or the other.
Even if such a transfer succeeded, questions would still remain. Would the
downloaded person's thinking and memories really constitute the person, or is this a
simulated person, and not a full duplicate of the original? Assuming there is a soul,
perhaps it would depart when the flesh and blood body was discarded, and would
not transfer along with the contents of the brain. The question of whether such a
copy is fully human would still remain.
There is also the question of timing. As Grant Fjermedal (The Tomorrow
Makers) notes (p. 5):

In the weeks and months that followed my stay at Carnegie-Mellon in February of 1985, I
would be surprised and intrigued by how many researchers seemed to believe downloading
would come to pass. The only point of disagreement was when -- certainly a big
consideration to those still knocking around in mortal bodies. Although some of the
researchers I spoke with at Carnegie-Mellon, and later at MIT, Stanford, and in Japan
thought that downloading was still generations away, there were others who believed we
were actually so close to achieving robotic immortality that some of the researchers
seemed to be driven by private passions never to die. And perhaps this explained the
eagerness of Hans's young research assistants to work through the nights and weekends to
further this quest for the life ever after.

Some regarded these developments as imminent two decades ago. Others


believe it could be many years, if ever, before such questions need to be answered
seriously, for research on the activity of the brain is moving very slowly, and it
seems unlikely that it can be functionally duplicated soon, if ever. Indeed, the small
progress made in the intervening years seems to argue this work may be at a dead
end.
Biological enhancements to the human brain achieved by genetic
manipulation or chemical means might obviate the necessity to go the mechanical

206
route altogether. In the meantime, computer and communications research may
take other turns, the products of which could also render the production of
artificially intelligent brains unnecessary. On the other hand, startling new
techniques have a way of appearing on the scene almost full-blown and with great
rapidity. Predictions about this area of research and its potential applications have
as much likelihood of being proven too conservative as too extravagant.

6.2 Simulating Human Intelligence


Many researchers today believe that the totality of what it means to be
human will be known when they can fully describe the activity of the brain. Toward
this end a great deal of work has been devoted. So far, it is known that nerve cells
called neurons respond to electrochemical signals in the brain through a complex
switching operation (neurotransmission) that takes place at junctions known as
synapses. The patterns of transmission through synapses change over time, and
these changes no doubt have something to do with learning and memory.
Some things are also known about the speed at which a synapse operates and
the rate at which signals move through the brain. These turn out to be substantially
slower than in electronic switches, by several orders of magnitude. Even though the
mechanism by which all these take place is not clearly understood, the information
that is available makes some researchers confident that the functions of the human
brain can be duplicated in a smaller and faster electronic device. Note that it is not
necessary to duplicate the human brain itself, only to understand how it works well
enough to build a functional equivalent--that is, a machine that calculates and
stores in a way capable of producing the same results.
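To get a feel for the scale of the speed difference mentioned above, consider a
rough, back-of-envelope comparison. The figures below are round-number
assumptions (a synaptic event on the order of a millisecond, an electronic logic
gate switching on the order of a nanosecond), not measurements drawn from this
chapter; only the order of magnitude matters. A minimal sketch in Python:

    import math

    # Assumed round figures, for illustration only:
    synapse_delay = 1e-3   # ~1 millisecond per synaptic switching event
    gate_delay = 1e-9      # ~1 nanosecond per electronic gate switch

    ratio = synapse_delay / gate_delay
    print(f"electronic switch is roughly {ratio:,.0f} times faster")  # ~1,000,000
    print(f"about {math.log10(ratio):.0f} orders of magnitude")       # ~6

The brain compensates for its slow switches with massive parallelism, which is one
reason the text speaks of duplicating the brain's function rather than its mechanism.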
If a functional equivalent of the human brain could be constructed, two things
could be done with such a device. The first possibility is to program it or "teach" it
so that it can perform a few simple tasks. Then it can act as an "intelligent"
controller or designer capable of making decisions and acting upon them in a way
that is the electronic equivalent of the fashion in which the human brain works.
It might eventually be possible to build an ambulatory body for this thinking
machine and thus create the mobile robot of science fiction. This perfect
servant/slave could be given instructions such as "take out the cat," "bathe the
kids," and "go to the grocery store."
It would presumably be able to carry out such tasks without any further
human intervention. Indeed, it should be possible to let the machine decide when
the house or office routine dictates that something needs doing, and go ahead
without asking, or being told. Whether it will ever be possible to discuss child
rearing, philosophy, or one's emotions with such a machine is quite another matter.
The motivation to spend the enormous sums of money that would be required
to develop such machines would have to be powerful indeed. While building the
ideal butler, maid, secretary, lover, or factory worker might be interesting, it is not
clear that machines are necessary for such tasks, or that they need either a human
shape or the equivalent to a human brain. Perhaps such devices are needed in very
hostile environments where humans could not go--for instance, the ocean floor,
space, the moon, underground, or a nuclear reactor core. To do crucial jobs that
cannot be done otherwise, machines will be built. They need not look or act
anything like a human being for such purposes, and it is uncertain that many robots
ever will. Moreover, such devices will not be used at all in the home unless there are
substantial benefits to cover the enormous cost.
There is a second, and perhaps even more ambitious potential use for a
functional analog of the human brain, however. The most optimistic of AI
researchers are confident that not only can such a machine be built, but that a
human brain could eventually be scanned on a molecule-by-molecule basis and its
activity duplicated in the artificial version. Thus, they believe the totality of the
human's thinking would have been downloaded into the mechanical construct. Give
the electronic brain a mechanical body to match and the result is hoped to be not
just an intelligent robot, but a mechanical copy of the human being.
Since the duplicated human would now reside in a more easily repairable
body, or in no body at all, and since backup copies could be made at any time, the
body of flesh could supposedly be discarded when the downloading was complete.
The net result: immortality would have been achieved in a mechanical form. A
human would cease to be blood and bone and become a cyborg (part machine) or
a fully machine intelligence--with electronic capabilities projected to be many times
as great as those of the bodies they currently inhabit.
The end result of this line of research is supposed to be nothing less than the
ultimate in man-made salvations from death--eternal life in a manufactured body
and brain--not in heaven, but here on Earth. Not only that, but the ability to make
backups means that a person really could be in two places at one time, and merge
the memories afterward into a single copy. If backups were frequently made, even
the fatal destruction of a single unit would cost no more than a few days out of
one's life and experience.
Quite apart from any other ethical and moral problems that may come to
mind, this goal raises an old conundrum, the answer to which may in part determine
whether this is possible.

Is the mind more than the brain?

If on the one hand, the human mind and soul can be expressed
unambiguously as the sum of the brain's electrical parts, then downloading its
activities to a functional equivalent would transfer a copy of the personality from a
"machine made of meat" to one made of electronic parts. This would be regarded
by many as offering a final proof that the empirically verifiable material world is the
sum total of all existence, that the spiritual and supernatural are fantasies, and that
logical positivism is permanently triumphant. On the other hand, if such things as
emotions, friendship, anger, fear, intuition, poetic appreciation, conscience,
intentionality, self-awareness, and the ability to enquire about the existence of God
cannot be expressed as sets of electrochemical impulses, this endeavour may well
fail, for the mind is then more than the brain.
These issues touch on the essence of what it means to be alive and what it
means to be human. Thus, attempts to achieve practical immortality by such means
are sure to touch many raw nerves. Those who oppose such research may say that
there are some things that ought never to be tried. To those who support it, the
potential prize is great enough to pursue at all cost. Furthermore, there is no
stopping such work now that it has begun. To forbid such research and make the
prohibition stick is impossible, as long as qualified researchers are not yet satisfied
that the question has been answered one way or the other.
Even if such a transfer succeeded, questions would still remain. Would the
downloaded person's thinking and memories really constitute the person, or is this a
simulated person, and not a full duplicate of the original? Assuming there is a soul,
perhaps it would depart when the flesh and blood body was discarded, and would
not transfer along with the contents of the brain. The question of whether such a
copy is fully human would still remain.
There is also the question of timing. As Grant Fjermedal (The Tomorrow
Makers) notes (p. 5):

In the weeks and months that followed my stay at Carnegie-Mellon in February of 1985, I
would be surprised and intrigued by how many researchers seemed to believe downloading
would come to pass. The only point of disagreement was when -- certainly a big
consideration to those still knocking around in mortal bodies. Although some of the
researchers I spoke with at Carnegie-Mellon, and later at MIT, Stanford, and in Japan
thought that downloading was still generations away, there were others who believed we
were actually so close to achieving robotic immortality that some of the researchers
seemed to be driven by private passions never to die. And perhaps this explained the
eagerness of Hans's young research assistants to work through the nights and weekends to
further this quest for the life ever after.

Some regarded these developments as imminent two decades ago. Others
believe it could be many years, if ever, before such questions need to be answered
seriously, for research on the activity of the brain is moving very slowly, and it
seems unlikely that it can be functionally duplicated soon, if ever. Indeed, the small
progress made in the intervening years seems to argue this work may be at a dead
end.
Biological enhancements to the human brain achieved by genetic
manipulation or chemical means might obviate the necessity to go the mechanical
route altogether. In the meantime, computer and communications research may
take other turns, the products of which could also render the production of
artificially intelligent brains unnecessary. On the other hand, startling new
techniques have a way of appearing on the scene almost full-blown and with great
rapidity. Predictions about this area of research and its potential applications have
as much likelihood of being proven too conservative as too extravagant.

6.4 Issues in Artificial Intelligence


At this point, it is not known what limits, if any, there are to the building of
artificially intelligent artifacts. Neither is it known whether these will be silicon
based, carbon based, some combination of the two, or neither. Such artifacts may
well turn out to be genetically modified living organisms, inorganic devices, or both.
The issues raised in this chapter are more or less independent of such
considerations, and would eventually surface regardless of the type of construction.
Some are in the form of common objections to be considered and possibly
answered; others are rather more speculative. Many are philosophical or even
theological rather than social or strictly ethical. If the most enthusiastic projections
of some in the AI community are correct, all these issues will have to be dealt with
in this generation. Moreover, some issues will need to be addressed regardless of
whether or not the artifacts are exact simulations of the human brain, whether or
not downloading succeeds, or whether the only "intelligence" involved is that which
PIEA and Metalibrary devices seem to exhibit due to their current users.

Playing God?

One objection raised against AI work (as well as against genetic engineering)
is that the researchers are "playing God." On the surface this is a comprehensive
and definitive moral objection to such work. However, its meaning and validity are
very difficult to analyse. This objection could be taken casually, to mean simply that
those involved in the research are overstepping safe bounds and playing with things
they ought not to. It may also be regarded as a gut feeling, an emotional response,
or a desire for the world forever to remain the same. It may be a statement that the
desired action is contrary to nature or God's laws and therefore to be forbidden.
It may be intended and taken literally. If so, then on face value it appears to
be a claim of acquaintance with God's agenda and to imply that the research in
question is known to usurp rights and privileges God reserves for Himself. Similar
assumptions are found in such arguments as: "if God had meant us to fly, we would
have wings." One could reply that if God is the all-knowing one, this implies that
quests for knowledge about creation are, in general, good because they are
searches for something that is of God.
"But," the response could be "it is not knowledge at stake, but its application.
Only God has the right to make intelligent beings." The first part of this statement is
one few people would argue with, for whatever "good" and "evil" are defined to be,
their meaning takes substance in the effects of applications, not in theory alone,
even when theory seems to lead to action rather directly. However, the last part of
the statement is a rather large presumption, for even if God alone has the power to
create ex nihilo, it does not follow that human beings cannot create at all. One could
even suggest that if human beings are created "in God's image," this would seem to
imply the ability to create in a similar (if not identical) manner.
Those who make this objection have two practical obstacles to overcome. The
first is to demonstrate that they do have the credentials to speak for God in such
matters, and the second is to devise an effective means of preventing the research
they propose to forbid. There does not, at this point, seem to be a way to do either.
First, there seems little evidence to support the idea of modern divine appointments
to prophetic office. Second, the control of research on an international scale is
effectively impossible. Perhaps it is more practical to control the products that
might emerge from such work (by taxation?) than to attempt to ban it and thereby
drive it into secrecy.
There may still be something to the objection that AI researchers are "playing
God," for pronouncements made in the name of science can occasionally have a
ring of divine revelation to them. Even so, this is a comment on the motivations of
the people involved, and not directly on the legitimacy of the work. On the other
hand, perhaps the objectors themselves might be accused of "playing God" if they
cannot substantiate their claim to know the mind of God. It is possible that the
spokesmen/objectors for God may be judging the motivations of researchers by
their perceptions of the work in question. That is, the objection might focus upon
what the objector supposes the products of the AI work are to be used for. However,
it is not clear how applications that are only potential or theoretical (or even
imagined) can invalidate the AI research itself.
The objection "playing God" may be well-taken if the goal of AI researchers is
to become immortal or to become "like gods." Even such an avowed goal would not
validate or invalidate all AI work, for there may well be other motivations and
applications. However, these possibilities do suggest other questions about
motivations and potential applications. Would the benefits of AI be confined to the
creators and controllers, who would indeed become "gods" and "lords" over the rest
of humanity? Or, would they be shared with all? History would seem to suggest that
the former outcome is more likely than the latter, and therefore to urge a
considerable degree of caution when it comes to the development and deployment
of any new technology.
Perhaps the value-laden "playing God" objection is instructive, for if nothing
else, it forces reflection once again on the interaction of motivation, technique,
society, and history. It weakens the classical argument that pure knowledge and
technique are morally neutral, for it illustrates that both have a context from which
they cannot be extracted pure and value free.

What Does Success in AI Mean?

There are two kinds of questions to consider here. The first has to do with
knowing how to apply the word "intelligent" to an artifact, whether organic or
inorganic. For example, one would not consider that a motor-driven slide rule or
calculator is intelligent. As previously remarked, even a device that passes Turing's
test could not thereby be considered "human." Even if a machine could duplicate
the results of human thinking exactly, it is not necessarily "intelligent" in the same
way as a human being, because the process by which the results are obtained is
different. For example, the ability to manipulate objects or symbols does not imply
that the manipulator understands what they are. One does not regard a language
translator that works by substituting one set of symbols for another set as an
understander or speaker of--much less a scholar in--either of the languages
involved. It is simply a deterministic machine; it comprehends nothing. Neither is it
clear that a mechanical device can have a dynamic memory, be able to learn or
forget, be able to think associatively rather than strictly linearly, or be able to learn
by making mistakes.
In other words, there is an element in the processing of data to a form that
can be called information that cannot (yet?) be ignored. The assigning of meaning
to data is a uniquely human activity, and it may turn out to have been
presumptuous to assume that anything analogous to this thing called
"understanding," "knowing" or "perceiving" can be programmed into an artifact.
One could even question the appropriateness of using cognitive terminology,
such as "intelligence," to refer in any way to manufactured things. After all, to do so
seems to assume that such machines are already intelligent, or surely will become
so, when the outcome is still very doubtful, to say the least. For instance, it is
already common to speak of the "memory" of a computer, but this is no more
analogous to human memory than is a printed page in a book, even though there
may be a limited amount of shared functionality. Because machine manipulation of
symbols does not require or imply human-like understanding in the machine, it may
well be inappropriate to use the word "know" in connection with anything that it
contains or does. On the other hand, if the duplication of functionality is the only
criterion for calling a machine "intelligent," the use of this word may indeed
someday (already?) be appropriate. Perhaps the issue hinges on whether or not
logic and empiricism are adequate to express "knowing." If they are adequate, then
a similar logic can surely be applied to a "thinking" artifact that is capable of
"knowing" in every way equivalent to humankind. If they are not adequate, it may
always be necessary to have one word for human intelligence and another for the
manufactured kind.
It should also be noted that success in AI, however measured, does not bear
on questions of ultimate origins or meaning for the human race. The great
expenditure of time, money, energy and planning needed to build artificially
intelligent artifacts will not somehow prove that man's existence is itself an
accident. Indeed, though it may suggest the opposite--that humanity too was
planned--it will not prove that either. Neither will it provide any new meaning to
human life or destiny, however great an achievement it may be. Knowledge about
how the universe works, even when taken to the point of building something never
before made, does not answer questions of ultimate origins or meaning. From an
ethical point of view, it is far more important to consider the uses and effects of the
proposed devices as they relate to people than to imply from them fanciful castles
of philosophical implication about the origin, purpose, and destiny of humankind.

Who Makes Decisions?

Under this heading come several questions without ready answers. The first
concerns the decisions about whether AI devices ought to be made and how they
should be used if ever they are finished. As in all major scientific/technological
projects there is the question of funding. It is not clear at this point who should
make money decisions for such research. At present, it is the national science
research councils of various countries, and the board members of various
foundations and corporations who decide which projects may go ahead. The public,
and even their political representatives, have no direct input here, even though the
implications for ordinary citizens could be very important.
Oddly enough, questions of appropriateness are seldom asked at the onset of
actual research and development, and ordinary citizens are even less often
consulted. That is, although all scientific and technological discoveries have
consequences to society as a whole, the decisions to proceed are made by a very
narrow group of specialists who do not, in general, have much training in either
social theory or ethics. Although most scientists would not wish to involve what they
would term "unqualified" people in such decisions, true democracy would seem to
demand that those affected ought to have a voice in the process. In fact, since no
technology is pure in itself and free from the context of motivations and social
effects, all three of these, and not just the technology itself require careful and
informed consideration before proceeding with research, and again prior to
deployment.
Given the relatively narrow education many specialists have had in the
past, they may in fact be among the least well qualified to assess the
appropriateness of developing and using a proposed technique. It therefore seems
necessary to reform both current funding practices and technological decision
making, and also to restructure the educational system to produce better qualified
decision makers among those who do work directly with technological development.
The second question concerns what artificially intelligent machines might be
used for, assuming they are built. Their very nature suggests that they will make
decisions or assist in making them. This raises the question of who ought to be
making decisions for people, if not people themselves? If the AI devices are an
extension of human intelligence and a decision making aid, this objection goes
away. But if they are to be independent decision makers, there is a serious potential
conflict. If such devices are capable of making autonomous decisions, then in what
interests will they make them--the human, or their own? If such autonomy is ever
achieved, there is no reason to suppose that these artifacts will be partners with
humankind, or share any of the same interests or goals. It is therefore not clear that
these ends of AI research are desirable. Those who speak in this context of
designing the successor to mankind appear to be casually writing off the whole
human race to extinction--hardly a helpful outcome to those extinguished.

What Will be the Nature of AI Devices?

The discussion above leads directly to questions about the attributes of AI
devices and their relationship to humankind. Suppose for a moment that the most
optimistic and generous assumptions turn out to be true, and truly autonomous AI
artifacts were built that were cooperative, benign, and worked in human interests.
Would they be thought of as "alive"? If they were biological entities, that is, if they
had carbon-based chemistry, the answer is probably, in some sense, "yes." But if
they were to be silicon-based, or electrical and mechanical artifacts, then even if
they had human-like attributes of "intelligence" so far as can be measured, the
question returns. Should "living organisms" then be taken to include silicon-based
ones? In either case, the matter can be taken a step further. Will such devices be
self-aware? Some futurists are sure they will be, provided they have sufficient
memory and computational complexity. Others are equally convinced otherwise.
The only way to find out if this is possible is to actually achieve it; but by then the
issue of whether the achievement is desirable would be rendered irrelevant.
Other questions centre on whether such entities would be regarded as
subordinate to humans, or as equals. If the latter, at what point in their
development would they be given the status of persons? For instance, if it were
considered that an AI device were alive, and had some semi-human status, would
turning off or destroying one be murder? Would an AI device be more human than a
child in the mother's womb, or less? No ready answer exists to these questions, and
they only become more complex if the thinking machine is housed in an ambulatory
robot body, for then its qualities become even more human-like. Moreover, if AI
researchers are ever able to download themselves into an artificial brain and hope
in that state to be regarded as human, it may be difficult to withhold the same label
from one of the same devices that has been programmed but lacks a human
download. This subject is already the focus of controversy, for whatever the
resolution, it touches upon what is the definition of a human being, and the extent
to which that definition is to change.
It should be noted that the definition of a "person" in Western civilization has
changed several times in the last century and a half, first to include women, then
blacks and orientals, and more recently to exclude unborn children. Will it be
changed again?

Rights of AI Devices

This brings the discussion to the heart of the matter, for if devices are to be
made that are intelligent in a meaningful human fashion--if they are to share with
mankind the essence of thinking humanly, must they not then be accorded the
rights of a human being? Should they not have freedom of speech, the right to
liberty, and the right to own property? If so, could they by virtue of superior
computational ability capture ownership of the entire economy, all the stocks, bonds
and properties, and place the original and slower humans entirely in their debt?
There are other ways to dominate than by force.
What of the right to bear arms or, for that matter, to marry each other (or
human beings) and to adopt or have offspring? Indeed, what about such artifacts as
sexual partners with human beings? Futurist Frank Ogden, who goes under the
name of Dr. Tomorrow, is fond of shocking his audiences with the notion of "live-in
robot lovers"--an idea he presents as a technological fix for the problem of diseases
such as AIDS. "If you own 'em, don't clone 'em or loan 'em" is one of his slogans.
Given the amount of time and energy the average person spends on things
sexual, it seems inevitable that various forms of technology will be applied to the
satisfaction of sexual urges. While some will question these scenarios as patently
unnatural and deviant, such objections are unlikely to make much impression, for
these practices would merely be taking a place in the long line of ones for which
similar unsuccessful protests have already been made. Indeed, the recent history of
the Internet as it stands suggests that technologically-inspired and assisted sexual
commerce will only expand.
Objections to new sexual practices are often made with a view to supporting
the ideal of monogamous heterosexual marriage relationship as the sole focus of
legitimate sexuality. It is not clear that this kind of relationship will be any harder to
maintain in the future than it is now. In fact, it may be easier, for it provides a
definitive answer to problems of sexually transmitted disease (STD). What is more,
if AI artifacts do turn out to be so much more capable, they may be as much
interested in cohabitation with humans as most humans would be with a
grasshopper.
It should be emphasized at this point that these are not merely fanciful
speculations; they are natural questions that arise out of following the logic of
building human-like artificially intelligent artifacts. They lead the line of reasoning to
its final set of questions.

Can Artificially Intelligent Devices be Moral Agents?

Any discussion of the potential human-like qualities of AI devices would be
incomplete without inquiring whether such intelligence will be sufficient for them to
be able to ask ethical questions and make moral decisions of their own. Human
beings have always assumed that they were the only moral agents on earth, for it
can readily be observed that plants and animals do not ordinarily appear to act out
of such considerations.
Is it intelligence alone that confers the ability to ask questions about right and
wrong and to act on the answers? This question could be answered whenever AI
artifacts are first programmed. At one extreme, if "downloading" were to succeed,
some would say that the answer would have to be "yes," contending that the human
being had simply transferred her residence into the device. Others would counter
that the device is an animated dead creature--a zombie--and accuse all participants
of murder if the original human body were destroyed after an apparently successful
transfer. The tape containing the recording of such a human brain scan also has a
doubtful status. It would manifestly not even be alive, yet it would contain the
pattern not to grow a human body--as does DNA--but to reconstitute the essence of
humanness supposedly residing in an artifact. Would such a tape, if it could ever be
made, also have to be regarded as a moral agent? (What difference is there
between a running program in a computer, and a copy of one on a disk?)
Even though downloading is one of the express goals of the most optimistic of
AI researchers, it is still highly speculative and there may be many more immediate
goals with a greater likelihood of achievement. Some, however, lead to very similar
questions. For example, suppose an AI device were programmed in such a way that
it could make errors--not just those due to data problems, but to choose to ignore
what the program indicates is the right choice based on data analysis and to act on
the wrong choice instead. The question now becomes whether the choice is merely
random, or whether it is volitional. That is, will AI devices ever be able knowingly to
make wrong choices? The mere possibility of this, whether through software or
hardware is the single key issue in AI work, regardless of the nature of the device.
For if the artifact is intelligent, and has volition by standards that are essentially
human, then it follows that it is a moral agent, regardless of whether or not it is
human. If so, then it is surely capable of choosing to do wrong when it "knows" the
right, and this perhaps in nanoseconds rather than upon lengthy deliberation. Some
fear AI devices might be capable of doing good or evil at billions of times the rate
human beings can. If so, the eventual quick destruction of much of humanity by a
berserk machine would seem to be assured, for human beings have done almost as
much with a far slower time frame within which to decide and act.
At the very least, this turns the tables on an earlier question, for it leads one
to ask whether a house or office computer could be arrested for murdering its
owner. Could not such a device, if possessed of free will, become jealous or angry if
its tenants decided to move or its users wanted to replace it with a better model? A
little judicious fiddling with the building climate controls or the air supply, or a small
short circuit on the operator keyboard would quickly eliminate the human
inconvenience with no effect on the computer. This scenario too is not just
speculation, but a probable outcome of any decision to engineer volitional
machines. Thus, if they are to be devised, so must the mechanisms for protecting
humans from and dealing with the criminal element among the new devices. This
includes arrest procedures, reading their rights, and the means to try them and
pass sentence on artifacts. When more ambulatory devices are made, would
citizens need protection against mechanical muggers or bank robbers? Would AI
machines discover some expensive electronic addiction to help forget their troubles,
and need physical and psychological treatment for their equivalent of alcoholism?
How would human beings respond to volitional machines? Would they riot and
try to destroy them? Would protective societies come into being, label the
termination of AI machines "genocide" and vow to establish legal rights for their
mechanical "brothers and sisters"? Such questions should be considered at the time
decisions are being made to build thinking machines, rather than after the fact.
Christian and other theologians would also face interesting problems with
volitional machines. Granting that a device were regarded as "alive,"--that is, had a
soul, the breath of life--would it also be considered to have inherited a fallen nature
from man, its creator? If so, could it be said to have a spirit, that is, the ability to
relate to God? If so, does salvation apply to it equally as to human beings?

Simple Answers?

The answers to most of the questions raised in this section are simple if AI
devices are to remain mechanical expressions of subsets of human thinking
patterns--not to be thought of as actually intelligent, or alive in the human sense.
Given the present-day agenda of the AI research community, however, the difficult
questions seem liable to re-surface from time to time with increasing complexity. At
some point, it seems likely that a careful definition of "human" and "intelligent" will
have to be agreed upon, and it is not at this time clear what that definition will
eventually include, especially since human "intelligence" can scarcely be said to be
well defined.

Profile On ... Issues

Machines and Understanding

understand verb 1 : be aware of the meaning of 2 : deduce 3 : assign a meaning to

What Does Understanding Involve?


o awareness: Understanding implies the existence of an understander, that is,
of a personality that is at least self-aware.
o intentionality: There must be a capacity for mental states that are directed
at things or objects outside the understander, i.e., the ability to be aware of the
things that are to be understood and to consciously direct understanding at them.
(e.g., beliefs)
o meaning: There must be an aspect or an idea that is the subject of the
understanding. As noted in Chapter 4, meaning requires both intelligent
organization, and the capacity for meaning-preserving communication.

Will Machines Ever Understand?

YES
o Information is the basis of understanding, and it is just data that has been
processed. Machines process data, so they can also be said to be intelligent.
o Biological systems are finite and bounded as are physical ones. They can
therefore be completely comprehended by the inputs and outputs of their own
processes without reference to anything else--including the method of internal
processing.
o The human mind can be completely explained by knowing how the brain
works. This activity can be duplicated.
o All human activity results from the interaction of the electrical, chemical,
and physical properties of the body (brain included). These constitute programs that
determine all human activity. We need only learn how to write similar programs to
duplicate human activity.
o The processes of the brain are nature's way of executing algorithms with a
finite number of (possibly repeated) steps. Since every such algorithm is executable
on a machine (Church's Thesis) everything the brain does can be duplicated
artificially. Meaning is just something that is implicit in data and algorithms and is
encoded along with them.
o Understanding is just a name for a certain kind of data processing. It too can
be understood (from within the process, so to speak).
o Experience is another name for memory and social conditioning. Since all
information can be encoded, experience too can be programmed.
o It is not even necessary to know what the mind is to be able to duplicate the
results it can produce. Only the functionality of the human brain needs to be
duplicated to produce an intelligent machine, not the exact way that it works.
o Understanding is something that is contained within programming. A
simulation of the activity of neural circuits is sufficient to produce understanding
equal to what humans have.
o For instance, a chess playing program is intelligent. It uses different
techniques than do human players, but it yields the same results (a minimal sketch
of such a search-based technique follows this profile).

Understanding is a duplicatible mechanical process.

NO
o Understanding data so as to create useful information requires
interpretative ideas, not just mechanical organization. Humans create ideas,
machines do not.
o Biological systems are not just physical. The human mind, viewed as an
information processor, is unbounded. The brain's biological state constitutes the
environment within which its processes take place. This environment is different in
a computing machine.
o The human mind is more than just the brain. The mind might understand
the brain, but the brain cannot comprehend the mind.
o There is no known link between the brain's hardware (its constituents as a
material and physical system) and its software (its constituents as an information
processor). There are no known material elements that give it the ability to be a
semantic processor, self aware, or intentional.
o Formalizing the execution of algorithms on a machine (encoding data and
its processing) requires only a syntactic notation (like a programming language).
Semantic analysis (assigning meaning) is a different kind of activity altogether. It
too can be described with a notation, but it can only be created by a human.
o The awareness, intentionality, and meaning that compose understanding
are motivators for techniques, but are not themselves techniques. Thus no process
or technique used by understanding, whether biological or physical, is sufficient to
understand understanding. Nothing (including understanding) is the same when
viewed from within itself and its processes as it would be viewed by an observer
from outside itself and those processes.
o Interaction with the environment, including other human beings, is
necessary for understanding.
o The process of understanding is at least as important as (if not more than) the
functionality, for the ways in which meanings are assigned and communicated are
intimately attached to human experience, and that is something no machine can
ever have.
o Consequently, understanding involves more than neural circuits, chemistry,
or formal symbols, and is closely tied to the social and biological organism it serves.
A simulation may to some degree approximate this, but it will not model it.
o The exhaustive search of all possible moves by a high speed computer is
fundamentally different from the technique used by a human player. Even if the
result on the board may be the same, the mental outcomes are not comparable,
and that is what matters.

Understanding is non-duplicatible and uniquely human.
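
To make the chess disagreement concrete, here is a minimal sketch of the
exhaustive-search style of play described in the last point of each column, applied
to the tiny take-away game of Nim instead of chess (the game choice and the code
are illustrative assumptions, not drawn from the text; a chess searcher is vastly
larger but works on the same principle). The machine simply examines every
reachable position and backs up the results; nothing in it resembles human pattern
recognition:

    def best_move(stones, takes=(1, 2, 3)):
        """Exhaustively search the game tree of simple Nim.
        Return (a winning move or None, True if the mover can force a win)."""
        for take in takes:
            if take == stones:
                return take, True          # taking the last stone wins outright
            if take < stones:
                _, opponent_can_win = best_move(stones - take, takes)
                if not opponent_can_win:   # leave the opponent a lost position
                    return take, True
        return None, False                 # every move hands the opponent a win

    print(best_move(10))   # (2, True): the search rediscovers, position by
                           # position, the "multiples of four" rule a human sees

Whether the identical move, produced this way, amounts to "understanding" the
game is precisely what the two columns above dispute.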

6.5 Summary and Further Discussion

Summary

The degree to which machines can mimic the results of human thinking is
becoming progressively greater. It is not known whether there are limits to this,
either short of complete human capability or beyond this point. At the present time
machines are already used for expert tasks, logical and inference tasks, design
tasks, and language translation.
Over the longer term, some researchers would like to simulate human
thinking exactly, and even be able to download themselves into such devices to
achieve immortality. The former goal is difficult enough, but too little is known
about the human brain to expect the latter to be achieved soon.
Various artificial augmentations to the human brain may be more immediately
realizable, and even more practical. Devices here called PIEAs (Personal Intelligence
Enhancement Appliances) but more commonly termed "pocket brains" may be
developed to allow quick computation and large-scale local memory storage of
information. Eventual implantation of these is also a possibility. The same device
could be the individual's Metalibrary link and allow fast interpersonal
communication and data exchange. The method by which any brain/machine link
could be made is not yet known.
Whatever route is taken with AI-capable devices, the very possibility they
might exist raises basic questions about whether they would be regarded as alive,
as human, or as moral agents. If they are autonomous decision makers, there is no
reason to assume that they will share many, if any, goals in common with humanity.
The consequences could be catastrophic, so decisions on construction need to be
broadened as to participants, and to take these issues into consideration.

Research and Discussion Questions

1. Why does one not call a chess-playing machine intelligent?
2. If a machine passes Turing's test, claiming for itself self-awareness, does
that mean that it is in fact self-aware? How can one tell?
3. Why are language translation problems so important?
4. Research the extent to which language translation software has already
been developed and deployed.
5. What advantages would an artificially intelligent machine have over today's
expert systems?
6. Perform the necessary library research to determine whether the human
brain is generally regarded as a parallel processor or a sequential processor.
Summarize, with citations.
7. Write a paper summarizing some of the problems involved in the machine
representation of knowledge.
8. Discuss the technical problems involved in simulating vision in a machine.
9. Write a paper defending human brain downloading as the ultimate goal of
research into intelligence.
10. Write a paper attacking the concept of human brain downloading on either
scientific or moral grounds, or both.
11. Attack or defend the proposition "A human being is a machine made of
meat."
12. If the Metalibrary or any AI device behaved in every other way as though it
were intelligent, how could one determine if it were independently self-aware?
13. Suppose a person recorded in detail every one of life's experiences, all learning,
reading, thinking, motives and so on, and placed this into a mechanical brain. Would
this constitute a better "copy" than one made by a molecule-by-molecule scan of
their brain at one point in time? Would the copy be human?
14. Argue that AI machines must be made autonomous moral agents, and that
the consequences for the human race will be good.
15. What is intentionality? Could a machine have it? Why or why not?
16. Discuss the problems involved in making machine simulations of
emotions.
17. Argue that only catastrophe could occur if AI devices were to become
self-aware, autonomous decision makers and moral agents. Make a case therefore that
such machines must be kept strictly subservient to humankind, and unable to be
autonomous.
18. Research Isaac Asimov's fictional "Laws of Robotics" and discuss their
enforceability, implementability, and adequacy to prevent such machines from
mastering the human race.
19. Discuss the kind of work best done by Metapersons, and why a special
legal status is necessary for such partnerships.
20. Should an AI device that is capable of duplicating the results of human
thinking be regarded as equal in status to human beings?
21. What legal rights should AI devices have? Does your answer depend on
their appearance or only on their "thinking" capabilities?
22. What social rights should AI devices have? Does your answer depend on
their appearance or only on their "thinking" capabilities?
23. Should an employer give a job to a human applicant in preference to an AI
device that can do it better for the same salary? Why or why not?
24. In view of the problems discussed in this Chapter, what restrictions, if any,
should be placed on AI research and development? How can these be enforced, and
who should do this enforcing?
25. You are the pastor of a small church and are visited one night in 2065 by a
mobile AI device that tells you: "I've been listening to your sermons and I want to
become a believer and a member of your church." What do you do and why?

Bibliography

Denning, Peter J. & Metcalfe, Robert M. Beyond Calculation--The Next Fifty
Years of Computing. New York: Springer-Verlag, 1997.
Drexler, K. Eric. Engines of Creation. New York: Anchor, 1986.
Drexler, K. Eric with Peterson, Chris, and Pergamit, Gayle. Unbounding The
Future - The Nanotechnology Revolution. New York: Simon and Schuster, 1991.
Fjermedal, Grant. The Tomorrow Makers. New York: Macmillan, 1986.
Grimson, W. Eric L. and Patil, Ramesh S. AI in the 1980's and Beyond, An MIT
Survey. Cambridge, MA: MIT Press, 1987.
Minsky, Marvin. The Society of Mind. New York: Simon and Schuster, 1986.
Ramo, Simon. Century of Mismatch. New York: David McKay, 1970.
Roszak, Theodore. The Cult of Information. New York: Pantheon Division of
Random House, 1986.
Scientific American, October 1987 (special issue on Artificial Intelligence).
Sowa, John F. Knowledge Representation: Logical, Philosophical, and
Computational Foundations. New York: Brooks Cole, 2000.
Turing, Alan Mathison. Computing Machinery and Intelligence. Mind 59: 433-
460, 1950.

Internet resources:

Corne, D. W. <mailto:D.W.Corne@reading.ac.uk> <http://www.cs.reading.ac.uk/people/dwc/ai.html>

Chapter 7
The Biospace Revolution
Seminar - "How Shall we Live?"
7.1 Life in Time and Space--Defining the Biospace
7.2 Disease and Surgery
7.3 Engineered Medicine
7.4 Engineering New Life Forms
7.5 Human Genetic Engineering
7.6 Rights, Health Care, and Life
7.7 The Environment and Human Life
7.8 Building New Environments
7.9 Summary and Further Discussion

7.1 Life in Time and Space--Defining the Biospace


Every life form occupies a unique niche in the context of all life on earth. This
niche can be expressed in terms of the physical space that it requires to make or
gather food and also in terms of relationships with other life forms with similar
agendas. Human life, however much it might be considered as different from other
forms, is also lived out in such a context. It draws sustenance in the form of
clothing, shelter, and food from other forms of life and cannot exist without the
support of the plants and animals with which it shares the earth. The whole of the
environment is a complex network of dependencies of one form of life upon
another, and it is now understood that no part can be changed without having an
effect on the rest. This is by no means a concept new to the information age,
though it has not been an important paradigm of the industrial one. Such an idea
was expressed by the Elizabethans as the "chain of being." It is the principle of
interdependence again, this time expressed as:

Every life form depends on every other life form.

Every habitat has to be shared.

Likewise, human life is lived out over a span of years, and this implies a
certain balance in the relationship of people to each other and to the environment.
The passing of the years has been counted upon to supply a steady stream of new
people with new ideas, to be the new consumers and the new leaders. If a society
does not grow in size and power by increasing in absolute numbers, the inexorable
passage of time will, at the very least through the cycle of birth, life, sickness, and death,
ensure some measure of renewal in that society. One characteristic of modern
medicine has been a great increase in the number of years lived by the average
human being, and there is some prospect of further gains. Of course, such changes
in life span produce profound side-effects--and not just in the population or society
where they take place, but through the rest of the chain of related societies and life
forms.
It is the combination of life in both space and time that can be expressed as
biospace, specifically:

The niche occupied by human life in space and time together is called the human biospace.

With each of the earlier transitions to a new society, the interrelationships
among humans changed (in some cases dramatically), and this is now happening
again. The purpose of this chapter is to consider the aspects of the continuing
scientific and technological revolution that directly affect the living of human life
itself--that is, medical, environmental, and living space concerns. These issues are
multifaceted and complex. It could be argued that they are all somewhat
independent of one another and that several chapters are required to do them
justice. That they are here collected into one is precisely because there is a unifying
theme--all relate to how many people will live, where they will live, and what quality
of life they will enjoy in the Fourth Civilization.

7.2 Disease and Surgery


One of the most dramatic differences between this time--the close of the
industrial age--and all earlier ones is that people now have an entirely different
attitude to disease and to the practice of medicine. Until the late nineteenth
century, death was ready to knock at every door at any time in the form of plague,
smallpox, diphtheria, typhus, polio, and a host of other diseases. In developed
countries, none of these need be feared today because of the widespread use of
antiseptics, antibiotics, and vaccinations. Smallpox has now been completely
eradicated, and many of the others on the old list of killers have been reduced in
certain countries to rare cases of little more than nuisance value to society as a
whole. This has come about because hospitals are kept clean, patients are
segregated from one another and from the healthy, and drinking water is treated at
central locations to remove contamination. It is difficult for moderns to appreciate
the scope of these changes. Until less than a century ago, the practice of medicine
other than surgery was based on little more than superstition, and a sick person was
in many cases better off dying in peace than calling a doctor.
Will this trend continue to the point that most communicable diseases
eventually become a thing of the past? Pessimists point to modern population
mobility and suggest that some new disease (natural or deliberately engineered)
could even now sweep the earth in record time and carry away a large percentage
of the population before a cure could be developed. A disease as deadly as AIDS but
as easy to catch as the flu could infect most of the world's population in a matter of
weeks. No defense could be devised in time to save any except the small
percentage who would be naturally immune. Moreover, people in the developed
nations have become complacent about vaccinations against what they regard as
rare diseases, and may have made themselves vulnerable. Optimists are sure that
new drugs combined with new methods of tailoring enzymes, proteins, and "fake"
viruses to stimulate antibody production will all but eliminate the transmission of
disease in the next few decades.
The hardest problems to solve are those of viruses, the semi-living capsules of
genetic material that invade the body's cells and take them over as factories to
replicate themselves. Some are capable of hiding in the body for years before being
triggered to begin or continue their damaging behaviour; these latent viruses include
the ones that cause herpes and AIDS, the latter caused by a so-called "retrovirus."
One line of research has concentrated on curing or preventing the symptoms
of retroviruses, possibly by forcing them back into dormancy with the hope that
eventually, a way may be found to remove them from the body altogether. Another
line focuses on interfering with the virus in such a way as to prevent its
reproduction or to foil its mode of attack on the body. Still others seek to tailor
molecules that can bind with the DNA of the virus directly to kill or inactivate it. All
hold the promise that eventually all viral disease can be eradicated.
When the human immune system and its behaviour with viruses is sufficiently
well understood to achieve this, a true watershed will have been passed--the last
major barrier to the elimination of communicable disease will have been removed.
The "cure for the common cold" is an old touchstone in such research; achieving
that implies the ability to defeat almost all viruses.
Recent experience with antibiotics, however, ought to sound a cautionary
note. These work by selectively killing off most of the target organism population.
Usually, the smaller numbers that remain are then taken care of by the body's
immune system on its own. However, the prolonged use of antibiotics guarantees
that the only remaining bacteria will be the ones that were resistant to the
antibiotic. Even if this resistance is conferred by an otherwise unfavourable or
recessive mutation, eventually the entire population of the organism will be
resistant. Such "superbugs" can then defy modern medicine in wreaking their havoc
on humankind. This selection of the existing bacterial population for resistance has
some potential for catastrophe, and it cannot be denied that a similar caveat may
hold for the treatment of viruses. However, the human population had, by the end
of the twentieth century, a much better strategic position in the war against both
kinds of disease than it had fifty years earlier.
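The selection effect just described is easy to see in a toy model. The parameters
below are invented for illustration (a drug that kills 99 percent of susceptible
bacteria and none of the resistant ones, with survivors regrowing to a fixed
population between courses); real pharmacodynamics are far more complicated,
but the direction of the outcome is the same:

    POPULATION = 1_000_000
    susceptible, resistant = POPULATION - 10, 10   # resistance starts very rare
    KILL_RATE = 0.99                               # assumed drug effectiveness

    for course in range(1, 6):
        susceptible *= (1 - KILL_RATE)      # the drug spares only the resistant
        scale = POPULATION / (susceptible + resistant)
        susceptible *= scale                # survivors regrow to carrying
        resistant *= scale                  # capacity in the same proportions
        print(f"after course {course}: {resistant / POPULATION:.1%} resistant")

Under these assumptions the resistant fraction climbs from one thousandth of one
percent to over 99 percent within four courses--the "superbug" scenario in
miniature.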
Neither has medical progress been confined to the problem of communicable
diseases; numerous surgical procedures are routinely performed today that were
unheard of a century ago. Appendicitis is now seldom a killer, and many kinds of
cancer and heart disease can be beaten. The people of industrialized countries live
longer, are healthier, and are more active and productive later in life than at any
time in the past. They have forgotten the times when the limited skills of surgeons
were exercised without antiseptic or anesthetics and hospital patients usually died--
if not of trauma, then of infection. Instead, most live long enough to become the
victims of cancer, heart disease, accident, or suicide rather than of surgery,
infection, or communicable disease.
Those advances have not come without cost, for medical science now allows
more people with genetic defects or chromosomal damage to live and pass on those
defects to the next generation. Old age comes more slowly than it ever has, but
death is postponed only at an ever increasing cost for more sophisticated medical
techniques and larger extended-care facilities. Thus, the modern medical system is
under constant pressure, forced to make difficult choices over the allocation of
scarce and costly resources. The resulting ethical questions concerning the
application of life-saving or life-prolonging technology can be divided into three
kinds--those of facilities, those of cost, and those of appropriateness.

Facilities

It is often the case that a hospital is severely constrained in its ability to
provide complex services. For instance, the number of open-heart repair procedures
that can be performed at a given facility in a year is limited by the availability of
both operating rooms and qualified surgical teams. Some means of scheduling the
clients of such services must always be devised. This may be strictly first-come-first-
served, or surgery may be prioritized by the severity of the case, or it may be
dependent on some attempt to place a value on the client. This value may derive
from the perceived contributions of the person to society (i.e., a famous scientist or
artist versus a common criminal or skid road habitue). It may be a monetary value
(i.e., the ability to pay for priority treatment). It may be potential value to society--a
young person with a whole life to live being given surgical priority over an aged one
whose contributions are in the past. The more severe the shortage of resources and
the more fundamental the issue--here it is life--the greater the pressure both on
those who must wait and on those who control the resources.
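As a sketch of how one such scheduling policy might be encoded, the fragment
below implements severity-first triage with arrival order breaking ties; the patients
and the severity scale are hypothetical. Note that switching to any of the other
policies named above (ability to pay, perceived social value, age) would mean
changing nothing but the sort key, which is one way of seeing that the choice of
policy is ethical rather than technical:

    import heapq
    import itertools

    arrival = itertools.count()   # monotone ticket: earlier arrivals sort first
    waiting = []                  # a heap of (severity, arrival, patient) tuples

    def admit(patient, severity):
        """Queue a patient; severity 1 is most urgent, so it sorts first."""
        heapq.heappush(waiting, (severity, next(arrival), patient))

    def next_patient():
        """Release the most urgent patient, earliest-arrived among equals."""
        severity, _, patient = heapq.heappop(waiting)
        return patient

    admit("patient A", severity=3)
    admit("patient B", severity=1)
    admit("patient C", severity=3)
    print(next_patient())   # patient B: the most severe case jumps the queue
    print(next_patient())   # patient A: among equals, first come, first served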
In addition, new diagnostic and treatment techniques are always very limited
in availability, sometimes for a protracted period. For example, two of the latest are
magnetoencephalography (MEG), which measures the magnetic fields of brain cells,
and nuclear magnetic resonance (NMR), which is a means of scanning and mapping
the internal body organs. The former is useful in the diagnosis of brain disorders
such as epilepsy, cysts, and tumors. The latter can produce whole body images and
even a spectroscopy, or chemical map, to pinpoint subtle imbalances in the body's
functioning. It also can pinpoint tumours and soft tissue damage. Such machines
represent very high technology, and come with an equally high price tag. They are
expensive and complex to build and operate, and their results require special
training to interpret. As a result, they are relatively scarce, as the latest techniques
in medicine always must be.
Limited availability of resources is a broad and growing problem for modern
medicine. As the population ages, and as new ways are developed to prolong the
life of some people, the shortage of facilities and trained personnel becomes ever
more severe, as do the problems of rationing--deciding who will get sophisticated
treatment, who will get basic treatment, and who will get none. An act-oriented
ethic may seem to demand that the maximum possible medical effort be available
to all. A utilitarian one demands the greatest net medical benefit for the largest
number of people. But both are ideals, for as long as resources for diagnosis
and treatment are scarce--and it seems inherent in the present system that they
always will be--there may be no solution to the problems of allocation that can
possibly satisfy everyone. Consequently, it becomes necessary for both practical
and ethical reasons to pursue the development of medical services that can be
delivered with the least use of scarce and expensive facilities and personnel. One
such method is the delivery of treatment at home rather than in a hospital, by
personnel with less training than a physician (perhaps even self-administered).
At the same time, it is necessary to seek technological solutions to certain
problems caused by existing techniques. For example, diagnosis is often achieved
only by batteries of complex tests, sometimes using scarce and expensive machines
or drugs. It is impossible to deploy such facilities for every doctor in the world, so it
is necessary to devise cheaper, more automatic diagnostic techniques in order to
make even the existing level of medical knowledge usable.
In addition, society itself, by its very industries, sometimes creates new
medical disorders that require sophisticated diagnosis and treatment. Examples
include harm done by chemicals such as urea formaldehyde and a variety of
pesticides, by substances such as asbestos, and by radiation accidents.

Cost

The problem of paying for new types of treatment is also becoming more
acute with each passing year. For instance, should the person who receives a heart
or lung transplant be required to pay all, part, or none of the cost? Does it make any
difference if the person's own life-style (smoking, drinking, driving) contributed to or
caused the problem in the first place? Should the employer be responsible if the
procedure was necessary because of some job-related activity? Should the
government pay just because the person is a citizen? What if the patient is not a
citizen?
As diagnostic techniques and drugs increase in sophistication, they increase in
cost as well. Complex treatments, particularly when new, may require hundreds of
thousands of dollars, amounts far beyond the ability of most individuals to pay.
Neither can medicare programs grow without bound, for they would consume the
entire budgets of the governments that sponsor them long before any end to their
growth came in sight. It is not possible to afford every available medical
technique for every patient, yet the physician who fails to perform one risks a
malpractice suit. Insurance premiums to cover such legal expenses push up medical
costs even more. In addition, there is the expense of training medical personnel so
that they stay abreast of the latest developments. Thus, from a cost perspective alone,
it is clear that new and expensive technologies will always be rationed. There will
always be the question of whether a procedure should be performed just because it
is possible, or only when it is affordable. There will also be the question of who
should make such decisions--patients, doctors, lawyers or politicians?
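The claim that such programs cannot grow without bound is, at bottom, compound
arithmetic. The sketch below (Python; all three rates are invented for illustration,
not actual figures) shows that whenever medical costs grow faster than government
revenues, health spending must eventually swallow the whole budget.

    # Illustrative arithmetic only: the figures below are assumptions chosen
    # to show the shape of the problem, not real budget data.
    health_share = 0.30    # fraction of the budget spent on health care today
    cost_growth = 0.08     # assumed annual growth in medical costs
    budget_growth = 0.03   # assumed annual growth in government revenues

    years = 0
    while health_share < 1.0:
        health_share *= (1 + cost_growth) / (1 + budget_growth)
        years += 1
    print(f"Health care would consume the entire budget in about {years} years.")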
For instance, suppose that the president of the United States, a billionaire
industrialist, and a poor but Nobel-winning cancer researcher are all dying. Each
requires a liver transplant within twenty-four hours and only one surgery can be
performed. Which should be given the treatment? Suppose the issue is complicated
because the first two could each make hard-to-refuse offers to the medical facility
that would see it grow in size and in the ability to treat others. Treating many
people like the poor researcher could, on the other hand, put the facility out of
business, even though treating a cancer researcher might eventually cause more
benefit to more people. Can the course of action likely to cause the most good to
the largest number of people be determined? Can the right thing to do be
determined? On a larger scale, can medical practitioners ever refuse necessary
treatment to groups of people who cannot pay, while advancing it to those who can?
What about the same question asked of unnecessary (say, cosmetic) treatment that
is requested by a patient?
Some examples may be extreme, but similar ones are commonplace.
Consider the dilemma faced by the administrators of medical insurance plans when
asked to fund complex, expensive, and risky transplant operations for children at
remote facilities. The cost of one such operation may well be greater than that of
keeping a hospital bed at home open for a year. How many such special procedures
can be afforded before the whole system deteriorates in its ability to provide care
for society as a whole? How many community appeals for such special cases will
distraught parents be able to make before both sympathy and wallets become
exhausted? Should elective surgeries be funded just because the patient wants
them, or should only the ones needed be paid for--and who is to decide which ones
are needed?
Choices may also arise between expensive and inexpensive treatments. For
example, a patient with poor eyesight might be treated with a prescription for
glasses or contact lenses at a cost of under a hundred dollars or might be given an
expensive laser treatment to reshape the cornea and eliminate the need for
external correction. Should there be (in this case, or any other) different treatments
depending on the wealth of the patient? Who should pay for the poor man's glasses,
or the rich woman's corrective surgery? As the cost of medicine rises, so does the
number of people who cannot afford even basic services, let alone exotic new
techniques.
If MRI diagnosis is needed to determine if a person has a tumour and there is
a six-month backlog, should the person be allowed to have the procedure done
privately if willing to pay (this may not be an issue in countries that have no public
medicare)? Or, should exactly the same opportunity be available to everyone,
regardless of their wealth and the time it would take? What if the private clinic's
purchase of a machine means that none is available for a public hospital? What if
the hospital can charge for some procedures and use the money to finance more
hours for public clients?
The wealthy Western nations make important choices all the time when they
elect to make resources available for expensive surgeries whose cost could fund
thousands of simple, sight-restoring eye operations in third-world countries. Rarely
are things so simple that it is apparent to those directly involved that such a choice
is being made, but any comparison of medical facilities in different parts of the
world makes it obvious that some nations choose to afford far more than others
possibly can.
Organ transplant surgery will always have such limitations because of the
problem of finding donors and matching the tissue type with the intended recipient.
Even if transplant operations become as simple as appendectomies or plastic
surgery, ways will have to be found to increase the supply of spare body parts.
Today, the presumption in Western countries is that such parts can be removed
from a fresh cadaver only with prior authorization from the deceased or the
permission of the immediate family. If the individual did not make a point of
granting such permission while living, medical staff are understandably reluctant to
ask relatives immediately after death, for fear of giving offence to those grieving.
On the other hand, if a potential donor comes into a hospital after, say, a
traffic accident, and there is another patient there in need of the badly injured
patient's heart, will the potential donor get the best medical care, even though the
donor's death could mean life for the other, and vice-versa?
One possibility for alleviating the shortage of organs would be to move to the
opposite legal presumption--that the body parts of the newly dead are the property
of society and freely available to the medical profession unless authorization has
been specifically denied. However, what is more important, the right of the dead to
privacy or the need of the living for a new organ? Surely, life must take precedence
over privacy in any ethical hierarchy, and yet there is something at the very least
dissatisfying or even ghoulish about the idea of routinely harvesting body parts from
the deceased, without express prior approval.
Yet, if this seems unattractive, what are the alternatives? One could argue
that human organs ought to become commodities like gold, pork bellies, or coffee.
After all, the Red Cross already buys blood in many parts of the world (some
countries, such as Canada, have in the past forbidden both the purchase of blood
from donors and commercial transactions in organs). If blood can be a commodity, then
perhaps kidneys, lungs, or even limbs could be treated as such. What is more
important, the need to make body parts available when and as needed, or people's
reluctance to engage in something that could be likened to the operations of a used
automotive parts business?
It should be noted that in countries where people are sufficiently desperate,
there is a temptation to make money by selling one's own body parts to the citizens
of "enlightened" western countries. Alternately, sufficiently repressive states in
need of foreign exchange could decide to farm such organs from its citizens without
their consent. Meanwhile, western governments might find the prospect of
benefiting their own citizens in this way to be politically irresistible. Who ought to
have the authority to allow or deny such transactions? Is it even possible to stop
them? Perhaps not, for there is already a lively traffic in such organs from third-
world countries to the operating rooms of the West, with only sporadic and
ineffectual outcry.
The complexity of these issues is also illustrated by the difficulty experienced
by doctors wishing to use transplant parts from newborn but nonviable infants.
Sometimes a child is born with much of the brain missing (anencephalic) and will
surely die in a matter of hours or days. Some doctors are unhappy that even when
parents' permission is given, it may be legally impossible to use organs from such
an infant for transplant purposes until breathing ceases, at which point the parts are
much less useful. One rationale for forbidding earlier transplants is that the child is
a human being--though a very much injured one--and therefore entitled to be
allowed to die in peace without being torn apart for spare parts while still alive.
Some hospitals attempt to steer a middle course, keeping the doomed infant well
oxygenated and healthy, so that the organs will be in good condition when death
does come. However, even this practice is controversial, for the definition of death
as the cessation of brain activity is hard to apply when much of the brain is missing.
The success of the pro-abortion movement in effectively re-defining an unborn
child as a non-person and thus part and property of the mother's body raises the
possibility that a cash-starved mother could sell the parts of her unborn child. After
all, attempts to prosecute pregnant women for child abuse for taking drugs, or for
murder for shooting their unborn children, have already been dismissed by the courts
because the child in the womb has no legal personhood.
The danger in breaking new legal ground in such respects lies in defining
severely injured or malformed infants as not human, and therefore available to be
scavenged. If that could be done in such an admittedly extreme case, there would
be pressure to do the same to those born with, say, cerebral palsy or Down's
syndrome. If those cases were also permitted, then any child deemed unacceptable
by the parents for any reason--say, by being the wrong sex--might be at risk.
Moreover, there could be no compelling reason to limit this to infants once such a
practice were begun, for the precedent would be set whereby anyone could be
declared insufficiently human and subject to salvage, such as for failing in school or
for being too old. The state that had the power to do these things could also define
the members of a race, religion, or banned political party to be subhuman and
available for spare parts. Under current North American law, it would seem that
even healthy children could be disassembled and sold for parts while still in the
womb even if not after birth. After all, it is already common practice to harvest stem
cells and other tissue from aborted children, in some cases without any
authorization being necessary.
On the other hand, under a hierarchical ethic of the type developed in Chapter
3, even the most severe "defects" would not disqualify a person from the right to
live, for such disablements reflect on the quality of the child's future life, not on the
fact of it. Life itself clearly has a higher priority than its quality.
A related and equally contentious issue is the use of tissues from miscarried
or aborted infants in transplants, in corrective surgeries, and for other experimental
purposes. It has long been known that such tissue has growth and restorative
potential not shared by the corresponding tissue from a mature human. Thus, if
bone marrow, skin, liver cells, and other organ parts are required to grow well after
transplant, there may be no better source than fetal tissue. Whatever one thinks of
the ethics of abortion or of the status of the unborn, such a practice has some
unique ethical hazards. If the use of such parts were to become an important
medical technique, they would become a correspondingly important commercial
commodity. Not only would there be a lively trade in such tissues by hospitals and
abortion clinics, there might even be a sufficient economic incentive for destitute
women to become pregnant and sell the right to abort the child to the highest
bidder.
Such outcomes may seem bizarre and repulsive to those who regard the
human body at all stages as something much more than a convenient assemblage
of chemicals into tissue. To others, such practices are simply the logical and normal
outcomes of regarding the parts of the human body as commodities like any other.
That is, the conclusions one draws on these and similar issues depend on one's view
of the human body. If the body is held either as sacred and inviolable (on the one
hand) or as entirely material but constituting the whole of a person (on the other),
then it could be argued that the body's parts ought to be left alone. Two normally
opposed groups find common ground here--one because of a transcendent view of
the body and the other because of the belief that the body is the whole of a person.
Members of the former group would want to bury the dead respectfully--and by that
they mean intact. Members of the latter group might suggest instead freezing the
body against the day when it could be thawed out and brought back to life with all
parts intact. Respect is the common theme here.
An entirely different conclusion might be drawn by those who emphasize the
immaterial aspects of what they see in a human being. A human being is sometimes
viewed as a body and soul dichotomy, or as body, soul, and spirit trichotomy, and
there are those who regard the material body as by far the least important of these.
With such an emphasis, the dead body is not the whole person and not at all sacred.
It is just part of the material baggage left behind when the essence of the person
(the soul) departs. Since there is in this view no reunion of personality with that
body, the previous owner has no more use for its parts, so they might as well be used to
benefit someone else. Other combinations of these ideas are also possible, and the
two conclusions about the value of the body can be reached by other means, so
there is little agreement on such issues.
There are two possible techniques for alleviating the supply difficulties
associated with transplant operations. The first is to use animal donors, such as pigs
or chimpanzees, to obtain the organs destined for human beings. This would have
the advantage of solving the supply problem, though some people might be
repelled at the prospect of having a pig's heart replace their own, and animal rights
activists might also be offended. The second is to develop artificial hearts, kidneys,
livers, and so on. This would also solve the supply problem, though the cost of such
work has so far been very high, and there is no immediate prospect that such
devices can be designed and produced in the necessary quantity, with the required
reliability and at a reasonable price. Nor does either of these solutions address the
availability of surgeons and facilities, or the cost of performing the procedures. Mass
availability of artificial hearts would not in itself mean that more transplant
operations could be performed; other factors are at least as complex.
This discussion highlights the built-in limitations of surgical techniques--
barring some dramatic changes in the availability of transplant parts and other
costs, the application of such methods simply cannot increase without limit. Thus,
the most sophisticated of today's surgical methods are unlikely ever to become
available to much of the world's population. There will never be sufficient funds
available to perform every necessary surgical procedure, much less every desirable
one--and this is true even if robotic surgeons are developed. However, no one with
any sensitivity would want to turn away the sufferer with no hope for relief. Again,
necessity seems to force the conclusion that simpler and less costly methods must
be found to replace the use of surgery and other complex procedures.

Appropriateness

A third set of problems that accompany the allocation of scarce medical
resources has to do with the appropriateness of treatment in some cases. For
instance, should a doctor do an expensive heart transplant on a patient who is also
suffering with a terminal cancer that will probably kill her within two years? Should
expensive surgery be done to allow a child with Down's syndrome to live, even
though it is known that the quality of that life will be impaired? Some, including
courts called upon to judge such cases, have said "no," since denial of treatment
can be supported for economic reasons--neither patient is likely to be able to repay
society for the treatment, so the expense of the procedure would be wasted.
According to the ethical hierarchy developed in Chapter 3, on the other hand, the
answer to both questions should be "yes." Those giving the latter answer could
argue that the future cannot be known in advance and that the termination of a life
is not an answer, but a reaction born of despair. What is more, everyone dies
eventually, even if it is simply of old age. The physician who may be able to give a
woman dying of bone cancer a few more years, albeit pain-filled ones, does not know
if tomorrow will bring a cure or a fatal traffic accident to the patient. Has the doctor
a mandate to prejudge her life on the basis of an unknowable future?
Questions of appropriateness are also raised in connection with fertility
problems, for treatment of such conditions can also be expensive and time
consuming and can require repeated hospitalization. Neither is it easy to argue that
a woman has a fundamental right to bear children. Moreover, there is already
population pressure, and this grows steadily worse. In that light, it is difficult to
justify using scarce resources to enhance fertility. On the other hand, it is hard not
to sympathize with the plight of one who desperately desires children but is unable
to have any, particularly when adoption is unlikely to be an option.
Other questions of appropriateness are quite different. Many concern
cosmetic surgery. Should surgery to reduce a nose, enlarge breasts, change the
smile, or "Westernize" the eyes be paid for by medical plans, or even allowed? After
all, these take up a substantial portion of scarce hospital and surgical resources--
ones which, it could be argued, would be better spent on those who are actually
sick. How important to society as a whole is the self-image of some of its members?
One might agree that correcting actual defects is good and should be done, but how
does one respond to the argument that a nose perceived as less than perfect by its
owner is a defect? As the population ages and life spans increase, cosmetic surgery
will be in greater demand. Some argue that this area should be left rather
unregulated, apart from defining "cosmetic" and excluding such surgery from public
medical insurance plans. Even some of these advocates might want the government to step in
if cosmetic surgery took a bizarre turn. For instance, perhaps some future youth cult
would find it "cool" to have orange cheeks, green lips, an elongated nose, and skin
flaps instead of hair--all arranged by surgery. Is this different from what is often
called "corrective cosmetic" surgery? Even if it is, it may be impossible to draw the
line between the frivolous and the acceptable, so "user-pay" may have to rule this
area. One could suggest that hospitals might make sufficient profit on such
enterprises to subsidize other areas. On the other hand, such activities might make
operating rooms and personnel unavailable for other types of work.

Summary and Conclusion

As long as current surgical methods are relied upon for replacement or repair
of defective body organs, a regulated means of providing the necessary resources
must be developed, however unsatisfactory it may be. Yet many of the surgical
methods themselves are likely to remain complex, scarce, and expensive. In all
likelihood, those methods will never become available to the world's general
population, and the chief problems associated with them will continue to be how
patients are selected and who pays rather than, say, the supply of organs as an
economic mass commodity. What is required are different technologies--ones that
use simple, cheap, and nonsurgical methods. In many cases, the functioning of
human body systems are fairly well understood (on a macro level), and so are their
dysfunctions--, in theory. This means that solving the problems may be more a
matter of applied science or engineering than of pure science or understanding.

7.3 Engineered Medicine


As mentioned earlier, the development of effective ways to combat viruses
will represent a major medical turning point. If viral diseases can be conquered
without hospitalization, the cost of medical care will decline and life spans will
increase--both perhaps rather substantially. Enough progress has now been made to
make some health researchers confident that most infectious diseases will soon be
a thing of the past, provided no intervening political or economic catastrophe sets
the work back.
This leaves three other categories of organic malfunctions for which to
consider treatment strategies. The first is invasive illness, such as cancer. Here,
encouraging progress has already been made toward the development of biological
and chemical agents capable of targeting specific cancerous cells in the body and
either destroying them or tagging them in such a way as to invite the body's own
immune system to eliminate the intruder. It is now known that a healthy immune
system is able to make antibodies for almost any foreign protein; the trick is to keep
that system healthy and working. Whatever the method, many forms of cancer can
now be completely defeated, especially if detected early. The most difficult
remaining ones to overcome may be the non-localized cancers of the bone, blood,
and lymph. Still, optimists point to the progress to date and predict that even these
forms of cancer will be curable by injection, radiation, and other nonsurgical
methods within twenty years.
Lung cancer may also be hard to cure. It is presently on the rise, especially
among women, who became smokers in large numbers more recently than men and
have not been giving up the habit as readily. This particular problem raises a
subsidiary ethical question--whether the production, sale, and advertising of so
potent a carcinogen as tobacco should even be allowed. It would not be if it were a
new food additive or drug, but the vested interests of a large existing industry are
not easy to set aside, even when the lives of many people are at stake. This is an
example of the way that economic considerations sometimes overpower ethical
ones.
The second category of malfunctions comprises those involving the accumulation of
extraneous material in the body. Calcium deposits cause painful spurs on bones and
cholesterol accretions block arteries, causing damage to the heart and other organs.
It seems likely that in many cases, substances will eventually be developed to
dissolve such accretions in a harmless fashion. After all, there are already drugs
that can dissolve gallstones or block secretions of stomach acid. These eliminate the
need for such surgeries as gall bladder removal and duodenal ulcer repair.
In an interesting sidelight, it was long thought that ulcers were caused by
excess stomach acid, and the typical treatment was a sedative prescription
combined with a bland diet. It is now known that ulcers are caused by bacteria, and
antibiotics are quite an effective treatment. That is, rather than being a systemic
failure of one's own body, ulcers are caused by an invasive agent. There may well
be other such misunderstandings in modern medical knowledge.
Another problem that may be of a similar type (system failure) relates to body
cells' seeming inability to divide and replace themselves more than a given number
of times before dying. One theory was that this may be due to the action of
substances that built up between or inside the cell and eventually blocked its
reproduction. If the cell-division inhibiting agents could be identified, an anti-
inhibitor could surely be designed. Understanding what to do is not the problem
here; the difficulty lies in actually performing the necessary engineering. But even if
an accretion-dissolving molecule must be designed atom by atom--and the ability to
do that is limited as yet--such design problems are not theoretically insurmountable.
In fact, overcoming barriers of this type involves only a modest expansion of today's
already formidable battery of pharmaceuticals.
A new theory of cell aging is equally interesting. It suggests that multiple
copies of sequences called telomeres at the end of DNA chains vanish one at a time
with each cell division. Eventually, no telomeres remain, and the cell can no longer
duplicate itself, thus limiting the organism's lifetime. Perhaps an agent can be found
to change this action so cells can reproduce indefinitely. On the other hand, such a
cell would bear a strong resemblance to a cancerous one.
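The telomere theory lends itself to a toy model. In the sketch below (Python; the
repeat count is an arbitrary assumption), a cell line is reduced to a counter of
remaining telomere copies, one being lost at each division, and the closing comment
notes why simply removing the limit is a double-edged idea.

    def divide(telomeres):
        """One cell division: one telomere repeat is consumed."""
        return telomeres - 1

    telomeres, divisions = 50, 0   # 50 repeats is an invented starting figure
    while telomeres > 0:           # no repeats left means no further division
        telomeres = divide(telomeres)
        divisions += 1
    print(f"The cell line senesced after {divisions} divisions.")
    # An agent that restored a repeat at each division would remove the limit
    # altogether--and an unlimited replicator is just what the text likens to
    # a cancerous cell.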
The third set of challenges for medical engineering relates to repairing
physical damage to the human body. In this context, one is tempted to view the
body as a biochemical machine, albeit of extraordinarily intricate design.
Unfortunately for the mechanics of this machine (the surgeons), their smallest tools
are thousands of times larger than some of the very fine parts they wish to repair.
Heavy structural members (bones) and outer protective sheeting (skin) are
relatively easy to work on, as are the larger subsystems (organs). But as many
paraplegics know to their sorrow, the nervous system is another matter. Sewing
these with thread is like trying to tie up a flea with an ocean liner's hawser. Finding,
let alone repairing, individual damaged cells is impossible with traditional surgeons'
tools.
The engineering challenge here is to develop first the knowledge of the fine
structure of the human body at the molecular level, and then the ability to design
biological or chemical agents that can effect repairs at that level. This is not as far-
fetched as it may seem, for the body can already conduct repair operations to a
great extent, and some animals are even capable of regenerating severed limbs.
Human bodies cannot effect this, because even as they grow in the womb, their cell
tissue differentiates sufficiently to lose the ability to replace parts. However, the fact
that such tissue could grow a limb at one stage of development suggests that it
could be given that capability again when necessary. This is the point of working
with stem cells (ones that retain the ability to produce various kinds of tissue), for
these may be induced to grow a variety of organs or parts thereof.
If limb regeneration seems too grand a task, perhaps promoting the healing of
severed nerves will be easier to do and reward far more people. Once again, the
problem is one of biochemical engineering--of building the appropriate substances
to stimulate the body to repair itself. An old engineer's motto is worth mentioning
here:

If it used to work, it can almost certainly be made to work again.

There are more comprehensive repair problems, however. As the body's cells
grow older they gradually lose the ability to replicate themselves correctly, or at all.
As noted, this may be due to some inhibitor. It may simply be that the body's parts
gradually wear out and so die or that each DNA replication causes some portion of
the genetic material to be discarded, and eventually the cell simply lacks enough
DNA to reproduce. That is, perhaps a body cell is not unlike a page of text that has
undergone many successive photocopyings. After a time, the text loses definition,
and it eventually becomes illegible. If the DNA of a body cell is subject to similar
losses, its successors would eventually lose too much of their information content to
remain viable. Here is yet another case where it is easy to visualize the problem and
to have some idea how to fix it--on a large scale and a theoretical level. Engineering
a solution that allows the body to repair or replace structures damaged by age is a
much more difficult matter. It could involve the development of many biochemical
agents, some natural to the body at some stages of development (enzymes) and
others that are new drugs.
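The photocopy analogy is easily made concrete. In the following small simulation
(Python; the error rate and the sequence are invented for illustration), a string
standing in for genetic text is copied again and again, each pass corrupting a
small fraction of its symbols, until most of the original information is gone.

    import random

    random.seed(1)
    ERROR_RATE = 0.002   # assumed chance any one symbol is corrupted per copy

    def photocopy(text):
        """Copy the text, randomly corrupting a few symbols per pass."""
        return "".join(random.choice("ACGT") if random.random() < ERROR_RATE else ch
                       for ch in text)

    original = "ACGT" * 250          # a stand-in for 1000 bases of genetic text
    current, generations = original, 0
    while sum(a == b for a, b in zip(current, original)) > 0.80 * len(original):
        current = photocopy(current)
        generations += 1
    print(f"After {generations} successive copies, under 80% of the text survives.")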
It should be emphasized in this respect, however, that the body subsystem
once termed the "simple cell" is anything but. On the contrary, it is known to have a
biochemical design of incredible complexity and sophistication--more so than any
computer, for example. Thus, finding problems at the molecular level, and designing
answers at the same level will not be a simple or a short task.
However, the potential for such medicine extends from simple cell repair to
the dramatic and even to the far-fetched. If the human body could be induced to
grow a new limb, then perhaps it could also be made to grow a new heart, lung, or
liver and then to dissolve the old one. Restoring hair to the bald may not turn out to
be difficult or even important. Restoring hearing to the deaf or sight to the blind is
another matter, for both involve problems of fine structure whose repair is often not
amenable to surgery but could be to new pharmaceuticals. Some have even
wondered whether a "memory pill" could be devised to stimulate the brain in such a
way that while under its influence anything heard or read would never be forgotten.
Regardless of whether any of these are achieved, the principal research focus will
be on replacing expensive and difficult surgical methods with cheap and easy
chemical and biological ones.
One possible method of producing biological agents is to design cells, say,
bacteria, to produce proteins that in turn could be used to make specific enzymes or
antibiotics. Such living nanomachines could be developed much further--to the point
where a collection of them can act as a miniature assembly line for new DNA, new
proteins or new enzymes. Such substances could then be built to order, molecule by
molecule. Other optimistically projected nanomachines would be programmable or
instructable--and may be termed nano-computers or general purpose assemblers.
The reconstruction of a damaged heart, liver, or other organ and even the
rebuilding of damaged nerves or neurons could be well within the ability of agents
made in this manner. Another possible technique involves the direct construction of
DNA strands that can manufacture the desired molecules. Another still is the
chemical stimulation of the affected parts to induce them to self-repair through
growth. Although these ideas are still in their infancy, there are already machines
that are capable of analysing or constructing specific protein molecules. In the
longer term, nanomachines might also be employed to grow a PIEA as an implant in
the brain or to make alterations to body or brain structures to improve both or to
repair congenital or genetic damage in situ--not on a gross structural level, but by
editing gene sequences.

Automating preventive medicine

Setting aside the more spectacular speculations for a moment, an important
potential use of existing technology lies in computerizing the information and
activities relating to preventive medicine. In particular, the most important
contributors to health--or to the lack of it--are nutrition and exercise. Although the
appropriate levels of neither are yet known exactly, a great deal of general
information is known about both. Average citizens have little access to much of this
in ready form until they come under professional care for a back injury, obesity,
diabetes, or a heart attack. Most people will not make use of what is known without
such a powerful motivation, unless it is in a form that makes it very easy to obtain.
This is an interesting but not insurmountable challenge to some in the high-tech
industry, for if people had and used the available information on nutrition and
exercise, there would probably be a significant decline in health care costs, and an
increase in the average life span.
This is yet another instance where reducing the barriers to finding information
has great potential. People who would not make a trip to a library to find nutritional
data are more likely to do so if it is easily available in their homes via an appliance
that they use frequently, and on which the presence of such information is
advertised frequently. The Metalibrary would not itself solve health problems, but it
might prove to be an important tool in providing people with the means to solve
some of their own.

Consequences of longer life

As understanding of the aging process, of preventative medicine, and of how to
do molecular engineering grows, a substantial increase in life spans seems likely.
Longevity researchers differ widely in their estimates of what the eventual human
average life span could be, with figures of 200, 500, and 1000 years being tossed
about. Even if one believes the more conservative of these optimists, and assumes
that some of today's under-50-year-olds would live to, say, 150 years instead of to
the current average of about 75 years, the social implications are staggering. At
every stage in the development of longevity treatments there will be pressure from
the rich, the powerful, and the intellectuals to obtain priority treatment. Moreover,
since the already highly developed countries would have such agents first, the
medical gap between rich and poor countries (and individuals) may grow even
greater, increasing the destabilizing forces on world society. Some attempts might
even be made to keep the fact of such treatments a secret at first. However, even if
the recipients had complete facial make-overs and entirely new identities, their
continued survival could not long be kept from the rest of the wider population or
from the citizens of other countries--all the more so since effective treatments for
old age are likely to be independently discovered by many researchers more or less
simultaneously.
Over the longer term, age sixty-five retirements, the whole concept of
pensions, the hope of inheritances, and the ability of youth to obtain jobs vacated
by their elders will all be affected by any substantial increase in life span. In
addition, unless birth rates are substantially reduced, population sizes could
increase dramatically. Some already crowded countries might restrict any longevity
treatments that are developed to a small elite for this reason alone. Power and
money concentrations could grow, not only because their holders might at first
control the treatments but also because they would live longer and have more time
to accumulate both.
Some of these problems might be resolved by pragmatic force of
circumstances. The managers of large pension plans would either re-market their
funds as general investment packages or go into a different line of work. The tax
structure might have to change to limit accumulations of wealth. Perhaps a means
would be found to encourage people to change careers every few years in order to
alleviate the job entry problem. However, the social and economic disruption due to
such changes would still be substantial. On the whole, these changes may increase
inequities and tensions between the rich and the poor of the world--a prescription
for disaster if the treatments are not seen to be fairly administered. Marriage could
change, for it is even now seen by many as a temporary commitment during part of
a longer life--such a view could become even more prevalent if lives lengthened and
bearing children were discouraged or forbidden.
There are also some balancing forces to contend with on this issue. While
increased longevity would suggest much larger populations, the birth rate in
industrialized countries has been declining for decades, and it seems likely to decline
in third-world countries as well, as their economies change. The net result could
simply be a stable but much older population, and an increase in the retirement age
because of a lack of younger workers to take jobs.
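A rough demographic identity lies behind that last sentence. In a stationary
population, the number of people alive is approximately the annual number of births
multiplied by the life span, and the median age sits near half the life span. The
sketch below (Python; all figures invented) shows that constant births with a
doubled life span double the population, while halved births restore the old size
at double the median age.

    def stationary(births_per_year, life_span):
        """Stationary-population rule of thumb: people alive = births * life span."""
        return births_per_year * life_span, life_span / 2   # (population, median age)

    for births, span in ((4_000_000, 75), (4_000_000, 150), (2_000_000, 150)):
        population, median_age = stationary(births, span)
        print(f"{births:>9,} births/yr, life span {span:>3}: "
              f"~{population / 1e6:.0f} million people, median age ~{median_age:.0f}")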
It is also not clear whether people who live longer would also stay healthy for
longer or whether they would merely have to spend more years in extended-care
facilities for the aged. If the latter were all that could be achieved, the benefits
would be small indeed. The most optimistic of longevity workers are convinced that
the greatest benefits of longer life will come by extending the productive years of
those people whose generation of new ideas and techniques most profoundly
impact the direction of society. This is a large assumption, for many people are
productive only in a few of their present years. One might also hope that more
years will mean greater productivity, but this may be only an occasional side effect. It
may equally be supposed that longer-lived people would gradually become less
innovative and productive, and not contribute anything of benefit to others for most
of their lives.
Thus, although human life may be extended considerably over the next few
decades, the long-term implications of such increases for society as a whole are
unclear. There may be a declining birth rate, a more stable and conservative
population, upheavals in the job market, and the disappearance of some institutions
catering to retirement as it is now known. Whether the extra years would mean
"better" people from either a moral or educational point of view is unknown. History
would seem to suggest that there would likely be the same proportion of scoundrels
and saints regardless of how long both lived.
Another aspect of the increased use of medical drugs is the corresponding
increase in their abuse. The more drugs that are discovered, the more mind-altering
substances will be among them. Moreover, as the workings of the human brain and
body become better understood, so will the ways of stimulating the pleasure
centers. Consequently, one should expect that there will always be addicts. What is
not known is whether the societal changes now in progress will result in a higher
percentage of the population becoming "wired" for pleasure than at present, or
whether a significant sacrifice of general civil liberties will have to be made to
detect and eliminate such practices. This is already an important question, not just
for athletes who attempt to perform better on drugs but also for the employees of
railways, airlines, hospitals, and other places where human performance affects the
safety of others.
Thus, it can be seen that the development of new pharmaceuticals, as that of
any technique, is likely to have mixed results--some very beneficial, some much less
so.

7.4 Engineering New Life Forms

Some of the methods described in the last two sections are also being
employed to develop new or modified life forms. Current examples include special
strains of bacteria that can attack ocean oil spills, digest the material, and reprocess
it into harmless substances. Others make antibiotics, antibodies, and other
pharmaceuticals for a variety of human and animal disease treatments. On the
drawing board are bacteria that can concentrate the minute amounts of, say, gold in
a rock ore, making the extraction of very low percentages of such metals feasible.
Other useful strains could process garbage, produce crude oil, or separate discarded
metals and plastics into their elemental constituents for recycling. In conjunction
with the biological nanomachines of the last section, scavenger species could also
be devised and targeted to particular harmful substances or organisms in the
human body. Such goals are already being achieved through recombinant DNA
techniques, wherein the specific gene(s) responsible for some attribute of one life
form are identified and spliced into the DNA of another form. In the case of simple
proteins and DNA strands, genetic engineers have been able to design and build the
entire strand from the base materials, and methods for doing this are gradually
being extended to more complex proteins.
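The logic of the splice itself can be caricatured in a few lines. The toy sketch
below (Python) models DNA as ordinary text strings--it is in no sense real molecular
biology, and every sequence and marker in it is invented--but it captures the idea
of locating a gene between recognizable flanking sequences in a donor and inserting
it at a chosen site in a host.

    def extract_gene(donor, start_marker, end_marker):
        """Return the subsequence lying between two known marker sequences."""
        i = donor.index(start_marker) + len(start_marker)
        j = donor.index(end_marker, i)
        return donor[i:j]

    def splice(host, gene, site):
        """Insert the gene into the host immediately after the given site."""
        k = host.index(site) + len(site)
        return host[:k] + gene + host[k:]

    donor = "TTACGG" + "AAATTTCCC" + "GGCATT"   # a gene flanked by two markers
    gene = extract_gene(donor, "TTACGG", "GGCATT")
    host = "GGGTTT" + "CCCAAA"
    print(splice(host, gene, "GGGTTT"))         # -> GGGTTTAAATTTCCCCCCAAA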
Possible uses of these techniques on plant genetic material include
development of new species of high-yield grains, ones that can grow even in poor
soils or climates, as well as the combining of two food producing methods into one
plant. An example of the latter is the "pomato," which grows fruit above the ground
and tubers below. Higher forms of life may also be modified in this manner. Cattle
could be developed that are hardy to colder weather, can graze on poorer ground,
give more meat or milk, require less care, or are more resistant to disease. Chickens
may be induced to grow larger and to lay bigger eggs. Since the change is at the
DNA level, the result is not just a hybrid cross-breed; the new characteristics breed
true. Science fiction writers have long speculated that someday all food may be
manufactured by bacterial action on raw materials at a far higher efficiency than
photosynthesis, with conventional farming becoming obsolete.
There is still concern that a genetically engineered virus or bacterium might be
released that would cause a plague taking millions of lives. Another disaster
scenario involves creating a life form that is capable of nothing but making copies of
itself, using the entire biological world for its own ends--the "grey goo" finale to all
other life forms. Laboratories that work with genetic materials must be very careful,
for it is not yet possible to predict all the side effects of gene splicing. The section
being spliced may control characteristics other than the one being targeted, and the
life form developed may not be what is expected.
Such difficulties are to be expected in any technology in its infancy. It is safe
to assume that the understanding of genetic coding will continue to grow to the
point where the DNA even of complex life forms can be mapped. Several nations
have already funded the complete mapping of human genetic material (the human
genome project). Now that this information has been gathered, the raw data are
available for discovering what each gene controls and for editing human gene sequences.
Methods for changing specific characteristics will also become more
sophisticated. It will be several years before new life forms can be developed from
scratch, made to measure for their niche in the earth's ecological system, but this
too seems inevitable. New plant and animal species will likely be made to improve
the food supply or replace it with chemically manufactured substances. In the
future, some adventuresome workers might tackle the revival of extinct species like
the mastodon, certain dinosaurs, or the passenger pigeon. Another possibility is the
enhancement of existing animal species. Could some be given enough intelligence
to perform menial tasks, become factory workers, cleaners, or message carriers?
Can some kind of animal/machine be developed that is not only alive but also
programmable? The answers to such questions are not known at this time, but if
they are positive, then humanity will ultimately face a period of adjustment to its
own creations, and the necessity of finding them a place alongside the human
biospace.
Questions have been asked about who owns the products of such research. In
the United States, patents on new life forms developed in the lab have been
granted--a practice that is certain to remain controversial. At issue is whether living
materials developed in the lab at the cost of time, talent, energy and money are
qualitatively different from medical drugs developed in the same way. As long as
viral, bacterial, or plant material is in question, the public may take relatively little
interest in the matter. When animal species are genetically re-engineered,
opposition to patents and even to the research itself may run high.
However, the controversy generated by plant and animal genetic research pales by
comparison to that from questions of applying this research to humans.

7.5 Human Genetic Engineering


The methods of gene splicing to re-engineer the human race fall into three
broad categories--those intended to make repairs, those intended for selection, and
those intended to make improvements.

Genetic Repairs

Repairs of cellular-level damage fall into two categories. The first, the least
controversial and probably the easiest to effect, would be those done after birth, or
at least after conception. The development of DNA-driven machines to produce
specific proteins could lead to the ability to repair damage from mutations, remove
dangerous or hostile proteins from the body, or even to correct some chromosomal
or genetic defects after the fact. That this is of somewhat limited benefit can easily
be seen by considering Down's syndrome. Even if all the associated genetic damage
were somehow to be repaired, say by a nanomachine, there would still remain the
physical and mental deformities programmed in before birth.
The second category is the elimination of genetic disorders before they
happen, or at least before they have any opportunity to do any lasting harm. There
is a long list of candidates for such work. Sickle-cell anemia, hemophilia, and the
predilection to organic problems such as cancer and heart disease are some that
come immediately to mind. Some disorders can already be identified with specific
gene locations in human DNA. Diagnosis from material in the fluid surrounding the
child in the womb is also possible in many cases. It may not be many years before
the genetic causes of most major inherited disorders are well understood, though
there is little indication at this time that cures for any of them are feasible once they
have been inherited.
In the time between the discovery of the genetic cause for an ailment and
that of a cure, a new issue arises, however. Is a person known to have a genetic
predisposition to heart failure, diabetes, or drug addiction insurable? Is such a
person employable?
At present, prenatal examinations for such disorders are usually done to
determine whether or not to abort the child rather than with any healing process in
view. Where governments permit such abortions, it is not hard to imagine them also
mandating that genetically defective children be aborted in order to protect the
economy by avoiding costly treatment and care after birth. After all, a state that has
the power to allow abortion clearly has the power to require it. Indeed, in view of
growing population pressures and strains on the medical system, such a possibility
must be regarded as quite likely in some countries--a final solution to the cost of
birthing, raising, and caring for those deemed genetically defective. The chief
question would then be how to determine which genetic characteristics constitute
the "pure" human race and which others ought to be exterminated. As the Nazis
showed in the 1930s, the decision can be made on a political level and then
technology used to enforce it afterwards. Since such a policy would be effective only
if it were enforced on the already-born as well as on those not yet born, it would be
as easy to establish in the former case as in the latter. It might be a short step to a
new program of genocide directed against those deemed sub-human for national,
religious, or racial reasons. Because this would be a politically motivated policy, it
would also be easy to label dissidents as defective, much as they were in the old
Soviet Union, except that a mental hospital expense would not be incurred.
Can such frightening outcomes be avoided? The assumption here is that they
must be. One possibility is to focus research on conception itself, rather than on
some later point in human development. Assuming that gene splicing methods have
also advanced in step with diagnostic techniques, the same methods applied to
plants and animals will be available to alter human DNA. The egg and sperm of
would-be parents could be examined, and the coding at the target gene sites
changed before inducing conception in vitro. The "corrected" fertilized egg could
then be implanted in the mother's womb for carrying to full term. The child would
still be biologically the offspring of its parents but would lack the damaged genes
that they would have passed on in the normal mode of conception. Moreover, those
damaged genes would then be eliminated from future generations as well.
There are also objections to this kind of research on the grounds that the
result is unnatural, and therefore ought to be forbidden. For their part, advocates
respond that, in the case of hemophilia, for instance, blood that clots is more
natural than blood that does not. Surely, goes the argument, it would be of great
benefit to the whole human race to find ways of correcting and eliminating
hemophilia (and similar problems). Heading off genetically induced disorders
before they happen also avoids the more unpleasant aspects of attempting control
after the fact. It would also solve some problems that now exist for infertile couples
who use sperm banks in order to conceive a child. As things now stand, two children
independently conceived through these services could well be half-siblings with no
means of knowing this. If they met and married, the probability of genetic defects in
their offspring would be much higher than normal. The ability to eliminate genetic
defects would make the operation of anonymous banks for human eggs and sperm
much safer. Whether, in the light of population pressures, the cost of continuing to
use these techniques is justified is quite another matter. So is the potential right of
the child to know who the biological parents are.
On the face of it, choosing to seek the ability to make genetic repairs seems
an easy decision to make. Indeed, the numerous research efforts already underway
along these lines testify to the fact that the decision has already been made. As long
as fertilized eggs are not discarded during the process, even the most conservative
observer might not object too strenuously to such an apparently beneficial program.
However, there are clearly implications of this technology that would be much more
controversial. Moreover, there is no guarantee that the best of such techniques,
chosen with "good" motives, would not still result in the worst of abuses as well.

Selection

The ability to repair genetic material implies that traits other than those
involving defects could also be selected. The most obvious application is choosing
the sex of the child. On a personal level, this might be regarded by many as
ethically neutral. However, if sex selection were easy enough to implement on a
large scale, it might not be neutral, for in some cultures there is a powerful bias
against female children. In such situations, the ability to select sex would quickly
throw the male/female ratio far off the rough balance it now enjoys. Indeed, in some
parts of the world, amniocentesis is already performed solely for this purpose. It is
hard to imagine the consequences for the particular generation so selected, but the
practice of selecting only males would certainly have to die out quickly, with or
without the culture that once used it.
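The arithmetic behind such a skew is simple. In the sketch below (Python; the
selecting fractions are invented), parents who select are assumed always to obtain
a boy, while the remainder get the natural near-even odds.

    def fraction_of_boys(selecting):
        """Share of male births if some parents guarantee a boy and the rest
        receive the natural near-even odds."""
        return selecting * 1.0 + (1 - selecting) * 0.5

    for selecting in (0.0, 0.2, 0.5):
        boys = fraction_of_boys(selecting)
        print(f"{selecting:.0%} of parents selecting males -> "
              f"{boys:.0%} boys, {1 - boys:.0%} girls")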
Eventually, various selections would be just as feasible--for beauty, strength,
longevity, intelligence, height, colouring, and other characteristics. Since some of
these characteristics might be perceived to give the next generation a decided
advantage, there would at the very least be strong social pressure on parents to
adopt whatever technology was available to ensure that their children had the best
possible genetic heritage. Indeed, if they chose not to do so, they would surely find
themselves on the losing end of a parental malpractice suit brought by their
offspring or even the target of criminal negligence charges brought by the state.
What is more, if current population pressures are heightened by a dramatic increase
in life spans, the birth rate will have to decline in equal measure. If society is able to
tolerate only a few children, the pressure to use whatever techniques are available
to select the "best" parental genetic material would undoubtedly become extreme.
In the same way that abortion of "defective" children might come to be
required, the availability of genetic selection methods could lead to their enforced
use, because there would be compelling arguments that selection is in the best
interests of society (another example of efficient technique being irresistible). There
would always be the problem of who defines the best interests, of course. Some
future government might want to breed docility into the general population to
enhance its power or to eliminate certain racial characteristics in the interests of
what it regarded as purity. This could not be done easily unless authorities also
limited longevity treatments to an elite, for otherwise the population would change
too slowly to achieve this goal. While a government with enough power to do the
one could clearly do both, it would still be faced with population pressure and might
opt instead for mass sterilization or some equally draconian means of compulsory
birth control. The more neutral path of requiring reproduction licenses--and only
issuing them to those deemed genetically fit, or those able to afford gene editing--
might be impossible to enforce under any but a totalitarian regime.
The major assumptions that lead to the most severe of these difficulties are
three: that such selection will become possible; that life spans will increase; and
that living space will be limited. There may, therefore, be a troubled future for
human genetic research--and now that it has begun, it cannot be stopped, for it will
go on in some part of the world even if banned in another. There may be ways
around these problems, if room can be found for a much larger population, and that
aspect of the human biospace will be discussed later in the chapter. There are even
more troubling aspects of human genetic research, however.

Making Genetic Changes

There is a process of genetic selection acting upon the pool of genes--in every
generation of living things most genetic traits are inherited and remain possibilities
for transmission to the next, but some do not and thus die out. Of course, genetic
selection does not on its own result in the production of anything fundamentally
new in succeeding generations, just in variations on the central theme for that life
form. Information is selectively lost, not gained, even when speciation results. Even
done intelligently (as in plant breeding), selection only allows designers to take
advantage of characteristics already inherent in the normal range of genetic
variation. Careful selection allows the distribution of traits already present in the
population of a life form to be moved toward one end of the existing range. If the
population is then left alone, the distribution tends to regress back toward the
natural mean. Genetic
modifications have the potential to force this selection to become permanent,
because undesirable genes would be eliminated altogether.
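To make this shift-and-regression dynamic concrete, consider the following
rough simulation, written in Python purely for illustration. Every parameter in
it is invented, and the model--natural selection pulling a trait back toward an
optimum once artificial selection is relaxed--is a deliberate simplification of
the process described above, not an established result.

    import random

    # Hypothetical toy model: a quantitative trait under stabilizing natural
    # selection toward an optimum of 0. Artificial truncation selection
    # (breeding only from the top 20%) shifts the population mean; once that
    # pressure is relaxed at generation 30, the mean drifts back.

    N = 2000      # population size (invented for illustration)
    NOISE = 0.1   # transmission noise added to each offspring's trait value

    def next_generation(parents):
        # Each offspring inherits a randomly chosen parent's value plus noise.
        return [random.choice(parents) + random.gauss(0, NOISE) for _ in range(N)]

    def stabilizing(pop, strength=0.5):
        # Natural selection: survival odds fall with distance from the optimum.
        survivors = [x for x in pop if random.random() < 1.0 / (1.0 + strength * x * x)]
        return survivors or pop

    def truncation(pop, keep=0.2):
        # Artificial selection: breed only from the top fraction of the range.
        return sorted(pop)[-int(len(pop) * keep):]

    pop = [random.gauss(0, 1) for _ in range(N)]
    for gen in range(60):
        selected = truncation(pop) if gen < 30 else pop  # relax selection at 30
        pop = stabilizing(next_generation(selected))
        if gen % 10 == 9:
            print(f"generation {gen + 1:2d}: mean trait {sum(pop) / len(pop):+.2f}")

Typical runs show the mean of the trait climbing while truncation selection is
applied, then slipping back toward the old mean once it is relaxed--the sense in
which selection alone is temporary unless the genes themselves are removed.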
This is already done with plants and animals, even to the point of generating
new species. Is there therefore any reason to suppose that human genetic research
would stop with repair and selection? Various "improvements" or "modifications"
would certainly be suggested, and no amount of government control could halt such
experiments indefinitely. Thus, as with any fundamentally new technology, this one
has the potential to be used for what seem to be attractive ends, for what may be
frivolous ends, and for ends that seem very threatening.
Possible attractions include further enhancements to intelligence and more
extensions of the life span. A strain of human beings with very large
lungs and an altered circulatory system could live in high mountains where the air is
thin. Some do this now, so such an ability might fit in the selection category.
Perhaps someone would try to develop a human with gills and fins to colonize the
oceans, or one with hollow bones and wings to take to the air, or a human with a
modified chemistry to live on some hostile planet. Are such things feasible? Not yet,
but no one knows at this point whether they are impossible. In the quest for knowledge,
some such things may well be attempted, assuming there are no enforceable
limitations placed on genetic research.
What of still stranger alterations in the name of making improvements, such
as extra limbs or eyes? Experimenters could try everything from dual sets of sexual
organs to new skin colours. Perhaps a docile subspecies with limited intelligence
and great strength would eliminate the need for robots or enhanced animals to fill
menial positions. Such possibilities may seem shocking, but the point is that at
present no one has any idea where genetic research may lead. It may not be either
wise or practical to prohibit all such research, but society does have a vested
interest in ensuring this work is regulated with codes that have moral/ethical
strength beyond the simply legal. Otherwise, human beings may eventually find
themselves asking whether a creature derived from their own genetic stock is in
fact human--or, worse still, having the question asked of them. At the very least, the
question of who owns the rights to new forms of life would have to be reopened. If
changes to "lower" forms of life such as plant or animal modifications are
patentable, then a degree of ownership is implied. If the same is true of
modifications to the human stock, it could lead to the enslavement of the modified
humans. It could equally lead to the enslavement of the original human variety by
enhanced versions. As for all technologies with potential for enormous benefit, there
is equal potential for enormous mischief and harm. Genetic research is a Pandora's
box with the lid already off and it may already be too late to effectively regulate it;
certainly it is not possible to prohibit it.
The "playing God" objection that was discussed in connection with AI research
is also used by critics of human genetic work. As in that case, it now seems to carry little
weight any longer. People in the field are usually not conscious of "playing God" in
any way, and they are therefore inclined to ignore the argument as false, irrelevant,
or meaningless. Perhaps what the objectors really mean is that they believe a
Creator intended the creation to be left as humankind found it. But people who
believe they are created in the image of God, yet treat specific kinds of knowledge
and creativity as blasphemous, may not yet have come to terms with the meaning
of the creative part of that image in humanity; the accusation therefore has at
least some appearance of inconsistency. What is more, mere selection at the
genetic level is not different from the selective breeding that has
been the stock-in-trade of plant and animal husbandry for millennia, and that has
few objectors today.
Still, the objection has merit, particularly if the genetic technology falls under
the total control of the state. In a totalitarian system the idea of designing good
docile citizens could be very attractive, for in these systems the state is the deity.
Thus "playing God" is a real problem, for such research is not play but deadly
serious. For now, these issues must be left open for discussion, but there is a time
limit. Choices will have to be faced in the next few years, and it is better to come to
grips with the issues before they are presented as fact than after. If a consensus as
to what constitutes "good" and "bad" outcomes of genetic research cannot be
reached early on, an unregulated chaos could ensue, with consequences that could
face humanity with threats every bit as grave as those from nuclear explosives.
What is more, as the level of technology increases, so does its potential to
affect all life. Eventually, many people will have the means to single-handedly
destroy all of civilization. What can prevent some mad (wo)man from doing so? If
there is no return to a pervasive moral code, such destruction seems likely. Even if
there is, how can the consequences of a single rejection of moral codes be policed
against, and who would control the police? Some of these issues are examined in
fictional form by this author in his Timestream novels. See the web site at
http://www.arjay.bc.ca.
What makes regulation difficult is the traditional autonomy of the largely
university-based researchers, as well as the large amounts of money involved in
commercial aspects of genetic research. Potential economic return for certain
animal vaccines, specialized viruses, and modified plant stock is large enough, but
opportunities to make money on human genetic manipulations are far more
substantial. One means of control that may work is to regulate the sale of products
and techniques of such research, much as is now done with chemicals, pesticides,
and pharmaceuticals. As experience with illicit drugs has shown, however, if the
amount of money to be made in trafficking is large enough, no degree of regulation
suffices to stop it altogether.

7.6 Rights, Health Care, and Life


Any discussion of genetic engineering leads naturally to questions about the
rights of people to benefit from this work. Present-day medical practice is
loosely founded on a generally accepted, but often unstated right of all persons to
live healthy, normal lives. There are two possible ways of viewing this right. One is
to suppose that everyone ought to have the best possible medical and genetic
treatments available to bring that person's individual health to an optimal level. A
second is to conclude that the human race as a whole should be brought to
optimal health. The danger in the latter approach is that it may also lead to
mandating that "undesirable" persons have no right to live.
Improving the human race as a whole through a program of selective
breeding and the elimination of those considered undesirable gained great
popularity in the first third of the twentieth century under the name of eugenics.
The acceptance of this idea was enhanced by the way in which it fit the prevailing
model of evolutionary progress. There was great optimism that the human race
could take control of its own evolutionary destiny this way, and few voices of
concern were raised about the consequences. By the time World War II started, Hitler's
program of eugenics was already well under way in Germany, but it was not until
the war was over that the horrifying consequences of arbitrarily defining who is
acceptably human were seen. State-run eugenics has been anathema ever since,
but new medical techniques now mean that individuals could practice a more
personal eugenics for their offspring. This apparently attractive option still has the
same potential negatives, however, and it would not do to lose sight of those in
some euphoric optimism that a new age is dawning for the human race.
On the one hand, selection and modification, as well as effective means of
birth control, could well do away with the demand for abortion and infanticide. This
would be beneficial for all concerned--child, mother, family, and society. On the
other hand, those born before or without the benefit of genetic selection, or whose
abilities came to be regarded as inferior would be at a disadvantage. Their genes
would not be the latest models. In addition, long life spans will not necessarily solve
the problem of caring for the old, who might need more rather than less medical
care. A future society could become so obsessed with obtaining and maintaining
genetic perfection that individuals of any age who were perceived to fall short could
simply be discarded. The state now can permit death at the request of a
sufferer--"pulling the plug" on someone who wishes to die in peace. As with other
issues, the state that can permit a death can easily assume the power to require the
same death. If it has the power to allow certain lives never to develop past the point
of discovery of some physical deformity, then it might also claim the power to deny
life to any individual for any defect, including economic, political, racial, and
religious ones.
Pressures on the medical system lead to questions about when it is
appropriate to deny medical treatment. Likewise, economic and demographic
pressures conspire to persuade people to limit new life, and abortion or infanticide
are used where birth control has not been employed successfully. In addition, the
continued existence of humans with limitations caused by genetic defects,
accidents or even old age is threatened. All the issues involving life itself are
difficult ones, even when the individual involved makes the choice.

Patient Choices

What should a physician do when a terminally ill patient facing painful
treatment asks the doctor to withhold further treatment in order to allow a relatively
peaceful death? This is a difficult question. Free will and freedom of choice are
argued in favour of the patient's freedom to make that choice. Yet, some would say,
the doctor is in the position of deciding whether to become an accomplice to
suicide. If a woman were about to jump to her death from a bridge, a passer-by
would be expected to intervene, to attempt to prevent the suicide, or to call for
more help. Most involved would exert themselves in the cause of life, not being
willing to give up until she actually jumped, and even then launching an extensive
search of the water below in case she lived. If she were to survive, every resource of
the medical establishment would be brought to bear to save her life, restore her
body to functionality, and provide the necessary counselling to ward off another
such attempt on her own life. Shall a doctor do less in the treatment of other
patients? If not, what is the essential difference between the two situations, and
who decides when that difference exists? Is it more humane to allow an escape from
suffering for those who desire it? Or, is the desire to escape from life prima facie
evidence of lack of competence, which should therefore be ignored?
These questions have already been resolved, in the West's legal and medical
systems at least. Living wills, in which the testator dictates a do-not-resuscitate
order for extreme eventualities, are now accepted and acted upon in most
jurisdictions. However, this has taken place without much debate, and it seems to
have escaped the notice of most that if a state has the power to permit a practice, it
also has the power to require it.
Some utilitarians who focus on the money issues, and some proponents of a
right to die argue that death in terminal illness should be made quick and easy.
Some act-ethic moralists would condemn this conclusion, pointing out that cures
might still be possible, and suggesting that participating in another's suicide is not
different from murdering the person. One possible conclusion that a consensus
moralist could come to is that if the majority of, say, inoperable cancer patients
wanted to die, they all should. If this conclusion seems stark, consider a single
paralysed patient whose doctor has already participated in assisted suicides in
similar situations. It would not be difficult in such circumstances, especially with the
permission of anxious heirs, to have the patient declared incompetent and then
argue that if the person were able, she would want to die, and therefore must do so.
Indeed, such a decision may seem quite utilitarian. A society that had already
allowed assisted suicide would have little motivation to enquire about such deaths,
and perhaps not much to care about them either.
Such decisions can be even more difficult if a second condition exists that will
kill the patient anyway and the (possibly expensive) treatment would only postpone
the apparently inevitable. This issue was discussed earlier in the chapter without
the complication of the patient's own request for death. When simultaneously faced
with such a request as well as a waiting list of patients who can be treated more
inexpensively, the pressure to allow or require death increases. There must, some
would argue, be some limit on paying for extending the lives of the terminally ill.
That there are such limits cannot be disputed. What is in question here is the
degree to which active intervention in allowing or encouraging death should be
tolerated. Even the most enthusiastic spenders realize that economic considerations
ultimately force many life-and-death issues, even if these may be hidden in
government appropriations measures and seen as entirely political in nature. It is
therefore part of the challenge for new medical technologies to remove as many of
the limitations as possible to the extension of productive life, making it cheaper and
easier to achieve year by year.
The opposite problem may also arise, for in some cases the technology is
available to save the life of a patient, but treatment is refused, perhaps for religious
reasons. The doctor, the hospital, the law, and the state must in such cases decide
whether the extension of a patient's life is to take precedence over that patient's
beliefs. The difficulty is particularly acute when the patient is a child, and it is the
parents who are refusing medical treatment. In the case of religious sects refusing
blood transfusions for children, the courts have sometimes stepped in and ordered
the treatment over those objections. Would they do so for someone old and infirm?
Parents have also in some cases been charged with manslaughter when they have
refused to seek medical help for life-threatening conditions and allowed a child to
die. In other cases, the opposite is done, and permission to rely on such alternatives
as faith healing is explicitly written into the laws of some states, with such parents
being protected from prosecution.
In either type of situation, the hierarchical ethic of Chapter 3 would insist on
the primacy of life, and the making of every reasonable effort to preserve it.
However, there would be many who would not accept such an ethical framework
and would arrive at the opposite conclusion. The problem for a utilitarian, for
example, would be to decide whether there is more good in preserving one life, or
more good in alternative uses of the same medical resources. Limits are bound to
be reached in some cases, and it may be necessary to force treatment in others, but
great care must be taken in making life and death decisions for other people
without their personal and informed consent; for the right to life is the most
fundamental of all.

Euthanasia

As already suggested, similar considerations apply to the terminally ill who
cannot themselves request a quick death because of a coma or some other
circumstance. Proponents of euthanasia would put such people out of their misery,
much as they would compassionately shoot a dying horse. They reason that while
perhaps extending life might be good, supplying a pain-free death is better. Given
the reality of pain and the other pressures cited above, this argument cannot simply
be dismissed. An ethical absolutist, on the other hand, is likely to draw the line at
"relative good" when it comes to life and death issues. As long as there is any hope,
this argument goes, the patient should be kept alive, even at great cost. To the
absolutist, death may be an enemy to be fought with all available resources, and its
victims to be sorrowfully mourned as casualties in a war. Paradoxically, many
religious persons who hold this position believe death to be a release to a new and
better kind of life--to be welcomed--even while being fought against as an enemy.
One of the major difficulties with any policy that allows such deaths to be
administered has already been mentioned--what the state can once allow, it can at
a later time require. People who support the voluntary euthanizing of the aged may
one day find the process forcibly applied to them too, for reasons could be found for
declaring almost anyone an undesirable or an incompetent.

Infanticide

It was for similar absolutist reasons that in the early days of the Church,
Christians collected unwanted infants who had been abandoned by their parents to
die of exposure, even though the rescuers then suffered being accused of killing
and eating the children they rescued. The Christian view of the sacredness of life
prevailed, and for centuries after, infanticide was close to unthinkable--an act that
evoked universal moral outrage. In the twentieth century it became respectable again.
For instance, at issue at the moment is whether severely damaged or impaired
newborns should be left to die without food, water, and treatment, or whether they
should be provided with extensive, painful, and costly attention to allow them to
live. In China of the 1980s, infanticide was reported to be widespread after the
government decreed that couples could have no more than one child. Boys were
more desirable than girls, so many female infants were killed. Population pressures
as well as political, economic, and racial considerations are potential factors in
deciding which infants shall live or die. These have already been decisive in
liberalizing the availability of abortion, and it is surely an arbitrary legal fiction to
say that a child situated at one end of a birth canal is disposable tissue, and at the
other end is a protected human. All the same arguments that allow abortion can
therefore be brought to bear on newborn infants, or on children of an older age, or
indeed on adults.
While this issue will not go away soon, one major contribution of the new
medical technologies may be to render it moot. The goal is to remove from the
genetic pool the causes for severe birth handicaps, but it will not be achieved soon,
and in the interim there are likely to be renewed calls to do away with "defective"
infants and a new insistence that no life at all is better than one lived impaired.
Again, ethical absolutists will oppose any such policy, insisting as always that the
quality of life is not pre-judgeable or even knowable in another, and that it is a
lesser ethical consideration than life itself.

Abortion

The practice of abortion also generates many issues not likely to be settled
soon, and it is one on which the medical profession has done a complete about-face
in recent years, switching from a long-standing view of an unborn child as a patient,
to that of it as a disposable appendage of the mother. The debate is sometimes
couched in terms of the mother's right to privacy with respect to parts of her own
body versus the right of an unborn child to life itself. The mass of cells that will
become a child cannot, with current technology, be separated from the mother and
live, and in this sense is a part of her. On the other hand, these cells are genetically
distinct from her even at the point of conception, and there is therefore a sense in
which even the first cell is not part of her body at all. Some regard a particular
demarcation point (e.g., end of first trimester) as the start of a distinct human life.
To others, any boundaries are purely arbitrary. What is really in dispute is the point
at which full human rights ought to be accorded--at birth or at some prior time. Or,
do the rights grow gradually with the developing child, and does the ratio of these
rights to those of the mother change from zero at first to equal at birth? Does
equality come at some earlier time, or at some later time?
Proponents of a woman's right to abortion on demand consider the
procedure to be a cheap, safe, and effective way of ensuring that only wanted
children are born. Opponents point to the right of the child to life, and also claim
abortion to be both physically and psychologically threatening or damaging to the
health of the woman. Furthermore, they consider abortion to be reckless of human
life in general, and to be murder of a child in particular.
New technologies may enliven the abortion debate even more, for it is
possible sometimes to save infants born prematurely at a stage earlier than many
others who are aborted. Two surgeons can work side-by-side on two pregnant
women, each the same number of months past the point of conception, with one
performing an abortion and the other delivering a premature baby. As the ability to
live outside the womb (with technological help) is pushed back closer to the point of
conception, the medical establishment in particular, and society in general, faces an
ethical problem whose difficulty is increasing with time and with the availability of
new life-saving techniques. As a result, arguments about the point at which life
begins become increasingly irrelevant and the abortion issue becomes more an
ethical and political one. That is, the answer to the medical and scientific question
"From what point is an unborn child alive?" is now obviously "from conception", so it
is now the economic and political question "From what point is an unborn child
human?" that has become the central issue, and North American courts have now
answered "not until birth". As we have seen, however, this answer itself raises new
questions about what can or should be done to (for) the child before it takes its first
breath of air.
New techniques may also take all of the risk out of having an abortion, as they
will for many other surgical procedures. It may even become possible to reverse the
sterility that is an occasional side effect. Furthermore, it has already been noted
that there will be a tendency to reduce birth rates in the face of increased life spans
and to exercise quality control over births as this becomes possible. Three points
are worth making:
First, it is not yet possible to discern what are the very long term effects on
total population of combining increased longevity and declining birth rates. In
developed (and nearly developed) countries, the birth rate is below replacement
levels. This would argue for an eventual population decline, except that any
substantial longevity increase would overwhelm this trend. Indeed, increased
longevity is the main reason for the world's population increase over the last
century. It may be over the next as well. Thus, current methods of birth control may
be either already more effective than necessary, or woefully inadequate to control
population. It is too soon to tell; a crude numerical sketch of these two
opposing forces appears at the end of this section.
Second, abortion-inducing drugs have already been produced. Recent history
would suggest that religious protests over such drugs will have no effect on their
marketing. Thus, although it may take some time to sort out potential side-effects, it
will only be a matter of time before these are available worldwide. The significance
of these drugs is that they have the potential to take the matter of abortion out of
highly visible hospitals and clinics and make it a decision that can be undertaken
entirely in private. Thus, there would be no specific targets to protest against once
the sale of a drug was approved. In view of the fact that ever fewer doctors are
interested in doing abortions, drugs may eventually become the only means of
providing them--yet another example of the scarcity of medical resources forcing
the adoption of non-surgical procedures (whether one likes the outcome or not).
Third, the technologies considered in this chapter may also be capable of
producing cheap and efficient conception control agents for men or women that the
state could widely disperse and then require a licence to obtain the antidote. There
are several ways in which this could be done. The simplest might be a drug that
could cause sterility even in very low doses. Another possibility for the genetic
engineer might be a communicable virus like those that cause common colds that
would be capable of preventing conception without causing any other symptoms.
Since several research teams could build variations on one or both of these themes,
it might be expected that at least one of the sponsoring governments involved
would release the agent. An antivirus might be as easy to produce, but the
opportunity to control it and regulate population would likely be seized upon by
most of those involved. After all, it is difficult to imagine some types of government
passing up the opportunity to regulate population growth absolutely. Such a
development and deployment would imply extensive and intimate control by the
state over the lives of citizens, but such things do have a way of coming about, even
in Western democracies.
Another way to regulate population growth would be to include a sterilizing
agent in longevity drugs, or to package sterility and longevity antidotes together.
There would then have to be a strong incentive to have children, for they
would cost parents potential life span. Abortions would cease almost entirely if an
enforceable conception licensing scheme were devised. While this might remove
some of the population pressures caused by longevity, it would simultaneously
change the whole structure of society by cutting off renewal, all but eliminating the
family, and promoting a long-term status quo. Could such a scheme be enforced if it
were to become technically feasible? It seems likely that an antidote to the agent
would quickly become available on the black market--perhaps even supplied by
foreign governments bent on destabilizing a country by increasing its population.
Moreover, given the record of the West on control of now illicit drugs, it seems hard
to imagine the pharmaceuticals suggested here could be so tightly controlled as
this. The net result on population could be neutral with the single exception of
eliminating unwanted births and therefore removing most of the desire for abortion.
Such a technological "fix" may seem unsatisfying, but the alternatives are
unclear. Neither unlimited population growth nor unlimited abortions are politically,
economically, or ethically desirable. The population question must therefore be left
unresolved with a technical answer only intimated.
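As promised under the first point above, a little arithmetic can make the two
opposing forces concrete. The sketch below, in Python purely for illustration,
uses a per-capita birth rate typical of today's below-replacement developed
countries and approximates the crude death rate as the reciprocal of life
expectancy--a stationary-population shortcut. Every figure is an assumption,
not a forecast.

    # Crude, hypothetical projection: a below-replacement birth rate versus
    # three different life expectancies. All numbers are invented.

    def project(population, birth_rate, life_expectancy, years):
        for _ in range(years):
            deaths = population / life_expectancy  # crude death rate ~ 1/e0
            births = population * birth_rate
            population += births - deaths
        return population

    start = 6.0e9                      # roughly the world population c. 2000
    for e0 in (70, 100, 150):          # present, extended, greatly extended
        final = project(start, birth_rate=0.012, life_expectancy=e0, years=100)
        print(f"life expectancy {e0:3d}: {final / 1e9:5.1f} billion after a century")

Under these invented numbers, the same birth rate produces a shrinking
population at roughly present life spans and a steadily growing one at greatly
extended ones--which is precisely why the long-term trend cannot yet be
discerned.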

7.7 The Environment and Human Life


Human life is lived out in the context of the whole continuum of other life
forms and the physical environment that surrounds them. The industrial age has
generally been viewed as an exploitive one insofar as the environment has been
concerned. Progress has been the byword, and the bottom lines were the standard
of living and the gross national product. Simultaneously, the rapid decline in the
agricultural work force and the gathering into cities have isolated industrialized
peoples from the natural environment and left them largely unconcerned with
harmful changes, except when such issues periodically become fashionable.
The earth is a large place, and its systems have both a great deal of inertia
and a massive capability to absorb damage. Nevertheless, there have been severe
strains in a number of areas, and it is now clear that the next civilization must
continue to develop and deploy a technology of the environment, if it wishes to
maintain the earth as a viable living place. Problems have shown up in the quality of
the air (acid rain), the water (dead and dying lakes or streams) and the land
(erosion, desertification, salt poisoning, and fertility loss). There are also scarcities
of strategic minerals, oil, and other energy forms. These problems become most
visible in the scars left by open pit mines, in the changed climate and ruined soils
from deforestation, in the extinction of entire plant and animal species, and even in
the quantity of nonbiodegradable waste floating about on the ocean surface. Cities
themselves have largely grown up from old agricultural centres in river valleys,
spilling out into the surrounding area and swallowing prime farmland in the process.
The contribution of technology has thus far been negative, accelerating
damage to the environment, but this is because the governing model has been
exploitive. Moreover, many of the 1960s environmentalists were associated,
whether correctly or not, with radical left politics, and this made it easy for
conservatives to discount their legitimate message. Political considerations can be
of first importance in facing environmental problems. For example, North American
governments are reluctant to tackle solutions to the acid rain problem, so they fund
studies to see if it really exists--when it has been known and described in some
detail in the literature for over a century. In more recent years, though,
environmental groups have successfully called attention to the more spectacular
damage and a new model has emerged--one that uses technology in a conserving
manner. Gradually, this conserving image is replacing the radical one, and care for
the environment is becoming conventional conservative wisdom.
Thus, smoke emissions are now scrubbed in Great Britain, and city air is once
more breathable. The same thing will eventually be required of new North American
installations, and over time, lakes and forests destroyed by acid rain will probably
recover, though perhaps with different species inhabiting them. Likewise, energy
sources will in the future likely be required to be clean, giving impetus to research
on solar, geothermic, nuclear fusion, and other nonpolluting supplies. At some point,
petroleum and coal will no longer be used as fuel, and electricity will be the
principal medium for delivering energy. The desire--indeed, the necessity--for a
cleaner environment will thus alter many industries and result in further structural
changes to society as resource-based industries in certain sectors go out of
existence. The political map will likely also be affected, for any current economy
that depends on, say, oil or coal production and does not industrialize or otherwise
diversify will be seriously impaired.
There will no doubt continue to be a variety of environmental activists for
some years--protesting logging, whaling, sealing, habitat destruction, and the
experimental use of animals. Although these voices were muted or neutralized
somewhat by the 1980s and 1990s concentration on business and the bottom line,
their impact will be permanent, for they have expressed the important truth that
humanity cannot go on fouling its nest but must come to terms with the fact that
the human biospace is part of a complex continuum that must be lived in.
Some environmental groups have in their tactics raised interesting new
ethical issues by going beyond civil disobedience to property destruction and
violence. Such methods have been criticized even by those who support the causes.
Some argue that protesters ought to take a legislative route to make their point, but
the more radical environmentalists have claimed that only dramatic action is
sufficient to sensitize enough people to the difficulties even to make them public
issues. Thus, there have been invasions of labs doing animal experiments; harp
seals have been painted red to destroy the commercial value of their pelts; nuclear
tests have been interfered with; and a whaling fleet and processing plant have been
vandalized to put them out of business. In addition, protesters for a variety of
causes throw themselves in front of trains or trucks when they dislike their cargoes;
they picket the homes and offices of researchers or politicians who oppose them,
and they block roads to proposed mining and lumbering sites or nail ceramic spikes
into the trees, and lie down in front of bulldozers to prevent them from clearing
land. Regardless of what one thinks about the ethics of these tactics (what is the
higher norm that must be obeyed?) there can be little doubt that the
environmentalists have indeed successfully touched a raw nerve of new-found
sensitivity to the environment and that the effects will be very long lasting.
They also have going for them that new technologies are indeed likely to be
cleaner--at least in the industrialized countries, and provided that clean power
sources become available. On the other hand, they have against them that they can
be premature or sensationalist in their pronouncements--especially when they offer
incomplete or preliminary scientific research to bolster their claims. For instance,
that the climate has warmed over the last century is a fact. That it is due to certain
gases causing an alleged greenhouse effect is still speculative; the cause may be
sunspot cycles or some other cyclical effect instead. If the latter turns out to be the
case, the bad science will prove costly to all environmental efforts.
In any case, the day of the exploitive society may now be coming to an end. If
so, all those individuals and institutions aligned with it will decline in influence. This
may include some religious institutions that have so allied themselves, taking for
example the "subdue the Earth" command of Genesis as exploitive instead of as
management responsibility. Political thinking has also often focused upon the
immediate economic benefits to be had from industry, however conducted, rather
than upon its long term effects. With the new-found realization of the network of
biospace dependencies, such voices will no longer be listened to, for people will
know that there are many more than the purely short-term considerations to the
making of decisions. In an ideal information-based society, decisions would always
be made both openly and in a fully informed fashion, and specifically with the effect
on the larger environment having been considered. At least, this is the ideal--
whether human nature will allow it to be achieved is another matter.
There is also a potential downside to these environmental concerns that must
not be neglected, and that is the cost of making suitable changes. Ironically, the
cost for the cleanup technology may be so great that only the largest of industrial
firms can afford to develop and deploy it. This could have the effect of
strengthening the very conglomerates that created many of the difficulties in the
first place, at least in the intermediate term. On the other hand, industry could
respond to Western environmental concerns simply by moving manufacturing (and
its pollution and jobs) to the third world, with negative effects on the Earth as a
whole due to increased production and decreased regulation.
In the long term, however, the preservation of plant and animal species,
recycling, and a concern for soil conservation and clean air and water will all be part
of an environment-conscious ethic of the next civilization. There may even be some
who will wish to live at one extreme--indoors, in completely controlled and managed
environments. There may be others who will want to get back to nature and live in more
direct communion with it, but without giving up any of their technological benefits.
Both may well be possible, along with numerous alternative life-styles.

7.8 Building New Environments
Much has already been said here and elsewhere about the pressures of
unrestrained population growth. Those who find unpleasant the suggestion that
longer life will imply enforced birth control and various other practices will want
instead to expand the available physical space for human life. Even the present
world population is having difficulty finding places to live. For example, as countries
industrialize they experience the phenomenon of urban expansion. In already
heavily populated nations, the growth of a handful of urban centres can be very
dramatic, and several more such megalopolises in the developing nations could reach
20 to 30 million in a few years at the current pace. Many of the newcomers are
housed in shacks precariously perched on sewage-filled mud or on unstable
hillsides. There are no paved streets, lights, running water, official police services,
fire departments, or building codes. Yet the slums and barrios of dozens of cities
grow by thousands of immigrants from the countryside every day, with no end and
little immediate hope in sight. Under such circumstances, the state can easily lose
control to anarchy. Criminal elements tend to step in and to become a de facto
government. This can be clearly seen in a number of third world countries today,
and there is no reason to suppose that the urbanization giving rise to these
problems will slow in the foreseeable future.
Nearly the opposite is taking place in cities of the already industrialized
nations as they move beyond industrialism to the next stage of civilization. Large
old cities--especially industry centres--have lost population in two waves of out-
migration. One is the continuing exodus to the lawns, gardens, and golf courses of
suburbia. This shift forces relocation of schools, shopping, and some offices as well
as the costly extension of transit lines. In this kind of move, people at least remain
in the broad surrounds of their original cities. It does cause jurisdictional problems
and tends to harm the city core, which is often left with the lowest socioeconomic
group, a declining tax base, and neighbourhoods impoverished to the point of
destitution. Meanwhile, at the edges, such cities grow together with their
neighbours in broad bands of alternating urban centres and suburbs. One can drive
hundreds of miles through such areas on the Eastern seaboard of the United States,
and a similar situation is developing around the central Great Lakes, in Florida, and
in California.
However, other cities, notably in the Midwest, have lost net population to the
South and Southwest or to smaller urban centres at some distance from the major
cities. In these cases, the central city suffers all the problems mentioned above, and
the urban region as a whole cannot compensate economically, for the population
has moved too far away.
The net result of these two migrations could be a spreading out of North
American population over a much larger percentage of available land, and a
dramatic lessening of crowding in large city centres. Suburban areas and new cities
gradually develop centres of their own, but these are smaller and somewhat less
concentrated than those of the older cities. Better communication and
transportation systems and relocation of factory work, once only located near
resources or shipping, all contribute to this migration. There are fewer reasons
every year for information workers to locate at traditional city centres, and so they
move. As communications improve, working out of one's living space becomes more
and more feasible, and attachment to cities may lessen further.
Meanwhile, the cities affected by migration from the core see large tracts of
former housing and industrial land becoming surplus, and some try to renew and
attract counter-migration by making their cores attractive places to live, shop, play,
or tour--even when the people work elsewhere, and especially when they work at
home. If some people work where they live, perhaps others can be induced to live
where they work. To achieve this, it may be that some present office buildings
would become a mixture of offices and apartments. Certainly, new construction and
renovation at the core of older cities would have to be radically different in order to
make them attractive places once more.
Improved transportation of both short- and long- range types helps to allow
such changes and is also driven by them. New technologies for reducing airport
congestion and improving takeoff and landing efficiencies are a high priority, for
example. Another priority (in North America) is improved high-speed ground
transportation technologies for commuters in order to relieve the traffic jams in
such cities as Los Angeles and Chicago. At the same time, it will become critical to
devise strategies to lessen the impact of the loss of industrial tax base at the city
core and to alleviate conditions for those left behind in poverty, with no hope of
obtaining jobs or of migrating to places where their prospects might be better. That
is, whatever one's ethical framework, there are a large number of factors that have
to be traded off and prioritized, and the task is far from simple.
At the same time, new building technologies and a new respect for arable
land are driving development in places once considered inhospitable. For instance,
mountain sides cannot be cultivated but they can be lived on or in, and an
underground house can have many floors, with a large lawn and garden on the roof.
Building technology is becoming more and more effective at sealing off living space
users from hostile environments. It is now feasible to live comfortably in deserts, on
infertile land, or in some of the earth's coldest regions--if population pressure
demands. There are vast empty lands in the Canadian north, the Australian interior,
and the African and Asian deserts, as well as in mountainous regions in all parts of
the world. It may be desirable to cover, heat, live in, and grow food on the Arctic
tundra. It may be desirable, say, to run a pipeline from the mouth of the Amazon
through the Atlantic and bring abundant water to the Sahara desert. It might
become practical to roof over a valley in the Canadian Rockies and build a large city
underneath. When population pressures are great enough, technology often can be
found to respond to produce new living space--yet another example of the need to
find technological solutions to problems caused in part by technology. That is, the
doomsday forecast by Malthus may be postponed by the development of still more
technique, though it cannot be put off indefinitely if the population continues to rise.
It ought also to be noted that new technology may be required to develop new
living space to replace that which has been rendered unusable by older
technologies. Even assuming that nuclear war is never the cause of such problems,
people may be driven underground by cosmic radiation if, as some suggest, the
ozone layer of the atmosphere is any further depleted by the use of
chlorofluorocarbons (CFCs) and other pollutants. On a less global scale, overfishing
and pollution may force the inhabitants of fishing communities to relocate and take
up different work. The same thing happens when acid rain defoliates a commercial
forest or cropland. In each case, jobs, people, the environment, and politics play out
an intricate dance of interlocking responsibilities and duties complex enough to
deeply engage the most sophisticated of ethicists.
Perhaps the ultimate in environmental technology would be the ability not just
to predict but to manage the weather. Though often wished for and frequently
assumed by science fiction writers, this goal has proven elusive, and there is little
immediate prospect of much progress toward it. There has not even been
agreement on whether the climate is warming or cooling, what the major factors in
such changes are, whether human activities have had a great effect, or if there are
long term cycles over which little control can be exercised. There is presently a
warming trend, and sea levels are rising. Some call this the greenhouse effect and
pin the blame entirely on carbon dioxide pollution; others cite sunspot cycles and
assert that major climate changes are caused by forces far larger than anything
humans have yet deployed. Whether these are short term fluctuations, and whether
anything can or ought to be done about them is not known. There is always the
possibility that any large-scale attempts to change global climate will make things
worse. When a complex dynamic system is ill understood, it is perilous to make
dramatic changes to any part. On the other hand, some argue that the industrial
society has already made just such major changes, and that these must be reversed
before it is too late.
On another note, there may be modest efforts made to establish living places
underwater and even off the planet. The former could substantially increase the size
of livable areas even in the short run; the latter would have little immediate impact
but possibly a dramatic long-term effect. Some suggest that if the mining of raw
supplies from the moon and asteroids can ever be done economically and space or
moon-based manufacturing becomes feasible, there could be a third industrial
revolution that transfers a substantial percentage of human resources and a
sizeable population off the planet entirely. Since some manufacturing has already
been done in space, it could be argued that this new industrial revolution has
already begun. Optimists suggest that facilities in orbit, on the moon, and on
various asteroids or artificial planetoids would in all likelihood make the earth a
wealthier and more livable place. At the same time, a new frontier of indefinite size
would be created and another age of expansion begun. At some point in the distant
future, some suggest the earth might hold only half the inhabitants of the entire
solar system, and that long before that stage is reached there may well be attempts
to reach others. The colonization of space has an important side effect. Once self-
sufficient communities exist off the planet, it will no longer be easy to destroy the
whole human race in a nuclear war--even if the earth itself becomes uninhabitable.
Whether this factor would make war more or less probable is impossible to guess.
On the other hand, more pessimistic voices point out the high cost of doing
anything in space, and demand that the money instead be used to improve
conditions here on earth. Others don't want space to be used at all, reasoning it will
only be exploited as the earth has been. Still others point out there would be little
need for a substantial population to leave Earth, as (presumably) automated
factories located there would need few human workers to staff them in any case.
This illustrates an interesting point--in the long run, profitability will drive all but the
most modest of space exploration and colonization. If there are no tangible benefits,
the lure of science alone cannot indefinitely sustain the kind of expenditures
necessary for such adventures. What is more, the prohibitive cost makes it
infeasible to move any significant percentage of Earth's people out of its deep
gravity well. The only way large numbers will populate extraterrestrial regions is to
be born there.
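A back-of-envelope calculation, sketched below in Python for illustration only,
shows why. All three figures are deliberately generous assumptions: a launch
cost well below anything yet achieved, and a minimal mass allowance per
emigrant.

    # Rough arithmetic behind the "deep gravity well" claim; every figure
    # is an invented, optimistic assumption.

    cost_per_kg = 2000         # dollars per kilogram to orbit
    kg_per_person = 1000       # a person plus minimal life support and supplies
    people = 60_000_000        # one percent of a six-billion world population

    total = cost_per_kg * kg_per_person * people
    print(f"Moving 1% of humanity to orbit: ${total / 1e12:,.0f} trillion")
    # Prints $120 trillion--several times the world's entire annual output.

Even moving one percent of humanity, on these optimistic terms, would cost
several times the planet's yearly economic product.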
Back on earth itself there probably will be advances in genetic engineering
and other techniques that improve the ability to feed and clothe a larger population.
Other applications of the same techniques have the potential to trigger dramatic
changes in living space as well. For example, instead of planting trees, fertilizing
them, thinning them, and then cutting them down after fifty years to build houses, it
would be much simpler to develop genetically modified trees so that they would
grow directly into living spaces. Perhaps various species could be grown as specific
types of rooms that could be harvested and joined together in modules to make
complete houses.
If that proves too ambitious, there are far less spectacular ways to achieve
manufacturing efficiency and modularity. One is by applying mass factory
techniques to housing. This is now being done on a small scale, with encouraging
results. Houses are built in pieces, trucked to the site, and assembled on a prepared
foundation. Widespread use of such methods promises to improve quality, decrease
building time, and dramatically lower costs. It would also cause many old trades and
professions to vanish, and new ones to come into being. Far fewer blue collar
workers would be needed to assemble a house in an even partially automated
factory than on the site. On the other hand, there might be a need for used-room
salesmen and house junkyards. Certainly, these methods would lead to changes in
the way people live; tract row houses of older subdivisions would give way to more
customized homes on rugged terrain. New transportation and communication
methods would be required as well, and all of these would change the physical
surroundings--the space part of the biospace.
For the technology in the home itself, there is already a substantial
automating trend. This trend can be expected to continue, at least for those homes
that "must" have the latest machines. In time, refrigerators, ovens, home heating
systems, lighting, and even the distribution of electricity will be microprocessor
controlled as part of the invisible infrastructure. One technology with promise is the
so-called smart electrical system, wherein outlets for appliances and telephones,
whether for high or low voltage use, are all identical, but power is delivered only as
requested by a "smart house" client device plugged in to one of the outlets. This
system eliminates shock, short circuits, and multiple hardware and wiring types and
is programmable to improve its utility. To some extent, houses may well look very
much like they do today, for there are few floor plan styles on offer that were not
available four decades ago, and taste in such things tends to repeat cyclically in a manner
similar to clothing styles. However, any increase in the number of people working
out of their homes would cause a shift to a more functional residential architecture.
When both husband and wife work away from the home, it can be more
decorative than functional. If one or both use it as an office, it must have
appropriate facilities for work and not just for play. Formal living and dining rooms
may tend to be deprecated, and the minuscule dens now included in new
construction may grow into useful offices, perhaps with separate entrances. There is
no reason other than zoning bylaws and personal preference why most professions--
and these are the model for future work--cannot be practised from the home office.
Another new technology with domestic implications is the use of broadcast
energy, such as electromagnetic radiation, to provide operating power directly (no
plug) to small appliances such as televisions, radios, and computing devices. Very
low power chips are now being manufactured, and prototypes of such devices have
been shown. Whether the necessary electricity will be converted from power
broadcast here on earth or from stations established in near orbit remains to be
seen; the present experimental devices use the radio waves already available. In
the home, this technology would reduce consumption of power transmitted by
wires, but away from it such devices would be even more important, for they would
make many appliances truly portable, with no need for batteries. The most
important effect, however, would be the elimination of overhead or underground
power servicing, an enormous reduction in costly infrastructure required by
industrial age housing. Removal of this requirement would also lessen the need for
houses to be clustered in urban subdivisions, further reducing the need for cities in
the information society. What would be the health implications of using broadcast
power is unknown; there are those who claim deleterious effects from levels already
present in radio, television, and cellphone transmission.
Other technologies not yet guessed at will subsume some of the ones
discussed here or render them irrelevant before they develop. It is difficult to see
exactly how market forces will drive them, but it is worth observing that population
pressures are urgent. Such pressures in the past have only been reduced in a
limited number of ways. Three of the oldest ones are war, which today could
destroy the whole earth; famine, which might be eliminated with international
cooperation; and plague, which it is hoped will be eliminated altogether. Other ways
are wholesale abortion, infanticide, euthanasia, and genocide, all of which are at
worst abhorrent or at best problematical. Another way is birth control, but society
may well hesitate at making this compulsory. The last method is to continuously
expand the available living space by creating new frontiers and new livable areas in
the old living spaces.
Of these options, most are unacceptable in any global strategy with a claim to
an ethical base or are unenforceable even by the most totalitarian of governments.
Only the methods of expanding living space seem workable in the medium term,
and even these will generate new problems that cannot now be foreseen. However,
increasing living space might postpone some of the harsher alternatives and the
need for the less desirable forms of population control. New living space and the
technology to achieve it and use it effectively may also bring other important
benefits. The very existence of new frontiers could provide a refreshment and
revitalization, new kinds of innovation, and an outlet for the creative and restless. It
might also help prevent stagnation in a dreary status quo, because it would provide
for fresh starts, new opportunities, and a place for youth and enthusiasm--all of
which are in jeopardy if the future were to hold only a stable population of gradually
increasing life span. Moreover, it would postpone, perhaps forever, Ellul's
amorphous totalitarianism of maximal technique, for it would ensure that expansion
and change were the prevailing models rather than efficiency alone.
It is important to realize, however, that deployment of new technologies for
the creation and improvement of human living space has an effect on the earth as a
whole. Animal species are displaced or extinguished, natural vegetation is
destroyed, soils are paved over and made forever useless, and the climate itself is
altered. For example, it was once conventional wisdom that large hydroelectric
power dams were an unmixed blessing for the state that built them. They would
improve living conditions for ordinary people, attract industry, and provide much-
needed downstream benefits in the form of flood control. However, in some cases
where large areas were flooded, the lack of downstream overflow reduced fertility
and increased salinity, industry still found conditions unattractive, and silt buildup
behind the dam ensured that it would have a very short life. The Aswan High Dam,
built by Egypt with Soviet help, has all these problems and is also contributing to
the reduction in size of the Nile delta, the fertile bread basket of that nation.
In the future, it will become more important at all times to consider long-term
environmental effects of building large housing projects, converting land to other
uses, or constructing massive utilities. There will be more people to accommodate,
but there will also be more at risk when things go wrong. Moreover, advances in
habitat technology in the affluent West will not go unnoticed in the rest of the world
as it struggles with the older problems of wide-scale poverty and continuing
urbanization. In an effort to catch up, there will be pressure to take short-cuts--parks
and wildlife preserves could be threatened, and the very magnitude of short-term
people problems will ensure that long-term considerations are de-emphasized.
While it will always be impossible for the West to solve the people problems of the
third world by donating money (because such difficulties are cultural and relate to a
state of civilization), it may be possible to assist in the financing of park and wildlife
preservation until such time as the industrializing nations can afford or are ready to
use other help. Whether such a global view of the environment will ever be
politically feasible in the West is another matter, but highly targeted aid of this type
at least has a higher probability of accomplishing its goals than do unspecified
handouts of money.
These and other considerations lead once more to the observation that new
civilizations are both enabled by and subsequently demand new techniques, even
while the new techniques bring mixed blessings. They have great potential for
raising the standard of living and human comfort, but an equal potential for causing
long-term deleterious effects. The challenge is to achieve the benefits and plan to
minimize the harm. Such planning has not always been done in the past, but it
cannot be done without in the future.

7.9 Summary and Further Discussion

Summary

Human life is lived out in the context of both a physical space and a time
span. This biospace also has various quality aspects. Numerous issues affecting
both were discussed in this chapter. New surgical and diagnostic techniques have
proliferated in recent years, greatly increasing the number of treatable problems
and simultaneously generating questions of facilities, cost, and appropriateness of
treatment. If medical techniques are to become universally available, new ways must be found to reduce the pressure on costs, personnel, and facilities. Among the new techniques being developed are cell-level treatments that tailor bacteria or DNA to produce new drugs, and molecular machines called nanomachines. It is hoped that such machines can effect repairs, conquer
communicable diseases, reduce the need for surgery, and prolong life.
Genetic engineering may also be used to improve food supplies, reduce
defects, and enhance many desirable traits. It may also be used to change plant
and animal species or even human beings dramatically. Some benefits or difficulties
with such technologies were considered, including the effect of changing technology
on the right to life.
Finally, the space aspects of human life were considered, and it was pointed
out that the creation of new living spaces on earth and off could both alleviate
population pressures and supply new frontiers to prevent technical stagnation and
postpone the totalitarianism of the efficient.

Discussion Questions

1. Explain the term "biospace".
2. What are some of the reasons that nonsurgical medical methods ought to be preferred over surgical ones? Give specific examples where one or the other choice may be necessary in the long term, and others where nonsurgical techniques ought eventually to replace surgical ones completely.
3. Suppose you are in charge of an agency responsible for a medical
insurance scheme run by the state. Which (if any) of the following treatments
should the plan pay for, or not pay for, and why?
a. Open-heart surgery on an 85-year-old, when the average life span is 77. Does it
make a difference if average life span is 150?
b. An experimental procedure (40 percent chance of success) to remove a tumour from the brain of an infant who otherwise has an 80 percent chance of dying within a year.
c. The surgery in b when, without it, the child would have impaired hearing and sight but would otherwise live normally.
d. The surgery in b performed in another country at a cost higher than
keeping a bed in a hospital in the home country for an entire year.
e. A sex-change operation for a patient who claims to be in great distress over
being in the wrong kind of body.
f. Plastic surgery to remove small and harmless but unpleasant looking
growths from the face of a teenage boy.
g. A liver transplant for a boy whose body has already rejected two livers and
whose older brother died after unsuccessful surgery of the same type.
h. Sterilization (tubal ligation or vasectomy) for those who wish to have no further children but have no organic malfunction.
i. Tubal or ovarian repairs (30 percent success rate) for a woman who wants to bear a first child.
j. The surgery in i for a woman who already has two children; five children; ten children.
k. An in vitro fertilization ("test-tube baby") for a couple who are infertile.
l. A new and experimental drug whose preliminary tests indicate it may relieve some AIDS symptoms and postpone death by as much as a year, but whose safety and
side effects are unknown. A year's supply is estimated to cost $30,000 per person.
m. A $30,000 pacemaker for an 85-year-old woman.
n. Radiation treatments and/or costly surgery for an 80-year-old man with
prostate cancer.
4. Discuss the idea of dispersing a 100 percent effective (and otherwise known to be harmless) birth control agent in the water or atmosphere, with a state-
licensed anticontrol agent being available.
5. Suppose it became possible to grow a clone in a nutrient vat from a
person's own DNA and then transplant one's brain into the new, youthful body
(all at great expense). Under what circumstances should this be allowed, if at all?
6. Suppose a simple, cheap chip implant became available that would allow
the blind to regain their sight with a video device. What if some blind people do not
wish the operation? Should they be required to have it, so as to reduce the burden
of their care on society? Should their disability tax benefits be denied if they refuse
it?
7. There is already a wide medical gap between the developed and
underdeveloped nations of the world, including a substantial gap in life expectancy.
What would happen if this gap widened to 100 years?
8. Ought longevity treatments, when they become available, be provided to
everyone--including, say, the poor of third world nations--if the consequences are
greatly increased population pressure, and possible famine? Even in countries with
little such pressure, ought such treatments be limited to those who can "earn"
them? Be sure to say what you mean by earn.
9. A hospital has a wealthy patient who has been comatose and on life-
support systems for five years with no discernible change in condition. The man's
family approaches the hospital and asks for the support to be removed so he can
die in peace. The hospital refuses at first, citing the fact that brain waves are still
present, though severely impaired. A family member cites the patient's own
expressed desire for release from such a state, but can offer no written proof. What
should be done?
10. Does it make any difference to your answer in question 9 if:
a. the hospital desperately needs a liver for another patient who will surely die
without it and the comatose man's liver is a perfect tissue match?
b. the man's family suggests it will give a large sum of money to the hospital--
enough to allow a new surgery to be established wherein many lives can be saved
in a year?
c. the comatose patient is a Nobel scientist; the prime minister of Canada; a
convicted rapist?
11. Discuss the pros and cons of establishing an organ bank of spare body
parts for transplants. If it is done, should it be public, non-profit, or commercial?
12. If genetic engineering could triple intelligence, should the requisite
treatments be compulsory?
13. Which, if either, genetic-engineering project should be undertaken: (a)
enhancement of domestic animals to enable them to perform menial tasks or (b)
the development of "subhumans" for the same reasons. Explain.
14. Suggest several other new environment-enhancement technologies in
addition to those discussed in the chapter.
15. What are some ways in which the human environment might be
engineered, other than those discussed in the chapter? (Good and bad, pros and
cons.)
16. Discuss the pros and cons of building new habitat (a) on the ocean floor,
(b) in the Russian or Canadian north, (c) in the Sahara desert, (d) on the Moon, or
(e) in outer space.
17. Which has a higher priority: (a) plant and animal genetics with the end of
improving food supply or (b) human genetic research with the end of repair,
selection, or improvements? Explain.
18. You are a nurse in an elementary school in a close-knit rural community
and become aware of various medical and other problems. Which of the following
do you report to the authorities? Explain your answers.
a. A child who often has severe bruises and lacerations on the arms, legs, and
buttocks. The child seems not to be under any stress and has a reputation for
clumsiness. You must weigh the consequences of not reporting a possible case of
child abuse against those of becoming a false accuser.
b. A child who brings no lunch to school and who is always hungry. You know
that the parents care for the child but have very little money. Yet the child may be
apprehended and taken from them if you report this case.
c. A child with a medical condition (spine curvature, poor eyesight or hearing)
that is debilitating though not life-threatening, but that the parents cannot afford to have treated, or refuse to have treated for religious reasons.
19. You are a hospital nurse with many years of experience on duty with a
doctor who is a recent medical school graduate. An emergency patient comes in
and the doctor orders tests but leaves one out that you know ought to be done. Do
you order the test, exceeding your authority, if (a) the doctor has by now gone
home and cannot be reached, or (b) you bring it to her attention and she dismisses your concern as unimportant?
20. Discuss the problems of medical diagnosis as they relate to cost, time,
facilities, and the possibility of a malpractice suit.
21. What authority ought the state have to (a) permit, (b) require, or (c) forbid
various medical procedures? To what extent ought this to be done in defiance of,
say, the religious beliefs of the patients, or their parents? Be specific and give an
ethical argument to justify your conclusions.
22. Suppose that advanced medical technology made it possible to maintain a fetus outside the womb from the moment of conception to the normal end of
gestation.
a. What effect would this have on the abortion debate?
b. Should public funds pay to allow infertile couples to use this technology to
have children?
c. Should public funds pay for women to use this technology in order to avoid
the inconvenience of pregnancy?
d. Should such technology be required for all births and pregnancy be
forbidden as too hazardous?
23. The harmful effects of poisonous substances such as tobacco and alcohol
are well documented. In view of this, discuss the implications of forbidding
advertising of such products. Is the value of free speech more important than the
value of deprecating the use of such products?
24. Which is preferable, to attempt to prevent trafficking in mind-altering
drugs altogether or to make such drugs legal and freely available and treat the
consequences? There are ethical, medical, and economic issues here. You might
start with alcohol and tobacco and go from there.
25. One way of reducing medical costs is to transfer some techniques and
responsibilities from expensive doctors to other less-expensive medical personnel.
Discuss the benefits and limitations of this from ethical, medical, and economic
standpoints. In particular, consider the potential responsibilities of nurses, nurses'
aides, orderlies, technicians, and other hospital staff.
26. It was remarked in the text that a potential advantage of colonizing space
is the survival of humanity in the event of global nuclear conflict. Is this really the
case, or would such colonies themselves be added as targets? Indeed, would the
very existence of such colonies make nuclear conflicts more or less likely?
27. Research the use of frozen embryos in livestock breeding programs. Now
consider their use in human reproduction. Suggest several situations in which they
could be employed, and then consider the ethical, legal, and any other problems
that arise from this technique. On balance, is this a useful and desirable technique?
28. Research and discuss the issue of patenting new life forms developed in
the laboratory. What stand have the courts taken thus far? To what extent do you
think new life forms ought to be patentable? Answer for plant, animal, and human-
derived genetic material.
29. Research the use of animal organs in human transplant cases. What are
the advantages and disadvantages to such work from a medical and ethical point of
view?
30. Under what circumstances, and for what groups of people, ought the law to
mandate periodic checks for the use of drugs? When the law itself does not so
mandate, should employers do it themselves?
31. A number of Canadian Aboriginals still make their living as trappers. In
response to the concern of animal rights activists, Great Britain proposed new
legislation mandating the labelling of Canadian furs with a warning that the animals
may have been caught in leg-hold traps. The intended effect of the law was the
elimination of such trapping in Canada. Critics responded that a secondary effect
would be massive unemployment among one of Canada's poorest groups. Discuss
the ethics of this situation, giving particular attention to the relative rights of the
trappers and the animals.
32. In the course of the discussion on scarce medical resources, the text made
the statement: "Neither is it easy to argue that a woman has any fundamental right
to bear children." Develop an argument for or against such a right. Can your
argument be applied to support or deny (as the case may be) a parallel right to not
bear children? Why or why not?
33. Which is a higher priority, and why--to spend public money on researching
the causes of AIDS or to spend it on cancer research?
34. Determine how much money your government puts into research on
prostate cancer and breast cancer. Now, what are the incidences of both in men and
women, respectively? Do expenditures match? Should they?
35. Which is more important, and why--space exploration or poverty relief? Or is there a link between spending on the one and neglecting the other?
36. Ought a state to permit (to encourage?) the use of suicide to reduce
medical costs and keep the population down? If you say "no," how do you deal with
those who are terminally ill, in great pain, and costing the state large sums of
money to maintain alive? If you say "yes," how do you deal with the young and
healthy who commit suicide and leave behind small children and a destitute
spouse? In either situation, how would you deal with the case if the potential suicide
were a renowned cancer researcher on the verge of a significant breakthrough?
37. Research the contention that population pressures are the result of
poverty, and that raising the standard of living will eliminate these pressures.
38. Suppose that a government determines its territory has become
overpopulated to the point where its people are at risk from starvation. Is an
enforced birth control program justifiable?
39. Suppose that a government determines its territory has become
underpopulated to the point where its national identity is at risk of extinction. What
measures can it legitimately take to increase its population?
40. Propose and defend against the alternatives a solution of your own for (a)
the problem of medical scarcity (b) population pressures on food supplies and living
space.

Bibliography

Asimov, Isaac. Science Past--Science Future. Garden City, NY: Doubleday, 1975.
Drexler, K. Eric. Engines of Creation. Garden City, NY: Anchor Press, 1986.
Drexler, K. Eric, and Peterson, Chris. "Nanotechnology." Analog (Mid-December 1987): 48-60.
Fjermedal, Grant. The Tomorrow Makers. New York: Macmillan, 1986.
Geisler, Norman L. Ethics: Alternatives and Issues. Grand Rapids, MI:
Zondervan, 1971.
Henry, Carl F. H. Christian Personal Ethics. Grand Rapids, MI: Eerdmans, 1957.
Northrop, Cynthia E., and Kelly, Mary E. Legal Issues in Nursing. St. Louis, MO: C.V. Mosby, 1987.
Kenney, Martin. Biotechnology--The University-Industrial Complex. New Haven, CT: Yale University Press, 1986.
Minsky, Marvin. The Society of Mind. New York: Simon & Schuster, 1986.
Montgomery, John Warwick. Human Rights and Human Dignity. Grand Rapids,
MI: Zondervan, 1986.
Naisbitt, John. Megatrends. New York: Warner Books, 1984.
Toffler, Alvin. The Third Wave. New York: Morrow, 1980.

Internet resources:

Biotechnology Information Centre. <http://www.nal.usda.gov/bic/>

Chapter 8
Technology and Economic
Institutions
Seminar - "Well, who would you buy a used car from?"
8.1 Foundations of Business and Economics
8.2 Technology, Business, and Economies
8.3 Wealth in the New Era
8.4 Technology and Business Organizations
8.5 Financial Techniques in the New Economy
8.6 Business Ethics and Technology
8.7 Summary and Further Discussion

8.1 Foundations of Business and Economics


The development of human societies through the various phases from hunter-
gatherer through agrarian and industrial to the present information age has been
accompanied by--and on occasion driven by--the techniques of business and
economics as much as by any other kind. Early economies were based on
transactions as informal and unregulated as the peoples who undertook them.
Surplus fish might be bartered for firewood, meat, or berries. Metal and arrows
might be traded for clothing. Rates of exchange for one transaction, if they were
remembered at all, had no necessary bearing on other barters in the surrounding
neighborhood, or even on those conducted by the same individuals at a later date,
and no records were kept except mentally. Each exchange was unique and set its
own rules, and there was no notion of profit and loss in any "bottom line" sense.
Even hunter-gatherers sometimes developed more elaborate economies with
the passage of time. The division of labour into hunting, weaving, gathering,
administering, transporting, war making, and spirit consulting necessitated the
development of formal, if unwritten, sets of rules for trade and commerce and
eventually caused the introduction of various media for exchange, that is, currency.
These could be hunted objects such as teeth or hides, gathered objects such as
shells, stones, flints, or metals, and even manufactured objects such as arrows,
cured foods, or beaded belts. For example, for more than a century the currency of
British North America was the beaver pelt. The prices of guns, whiskey, utensils,
blankets, and other trade goods were all measured in these furs. In the case of
blankets, a number of colored bars called "points" were woven into one side of the
blanket to indicate its price. Thus, one spoke of a "three point" blanket or a "five
point" blanket, and this meant that it cost three or five beaver pelts, respectively.
There were also rules of thumb for testing the quality of the trade goods. For
example, to prevent traders from cheating on the alcohol content of the whiskey,
those to whom it was offered would spit a portion into the fire. If the drink was of
sufficiently high proof, the fire would flare up; otherwise it would not--thus the origin
of the term "firewater."
Agrarian societies had relatively more complex trading problems to solve, for
by their very nature they produced local surpluses of foodstuffs that had to be taken
to distant markets. Surpluses, the lifeblood of those societies, fed citizens, armies,
and distant peoples. Surpluses also fueled the search for trade routes to exotic
lands and their goods, creating further trading complexities. These societies all
developed metal-based exchange media in order to create smooth running and
convenient business environments. They also developed the means to keep
permanent records, taxation, corporate partnerships, trading cartels, sophisticated
transportation, and international trade routes. In addition, the economies of
agrarian societies came to depend more and more on government, for only the
state was strong enough to protect trade routes, guarantee the value of currency,
settle disputes, and regulate the growing techniques of the economy.
The industrial age brought with it a greater dependence on capital
accumulation and therefore on the means of recording these accumulations. Thus,
double-entry bookkeeping was added to the techniques of accounting, which began
to become systematized and regulated, that is, to become a collection of formal
techniques. For the same reason, banks were developed to assist in capital
accumulation on a large enough scale to finance industrial undertakings--for some
had grown too large to be within the means of any single private individual. Banks
made it possible for both the nobility and the wealthy middle class to loan money to
industrialists without direct participation in or even knowledge of the actual use to
which it was being put. The anonymity of the capital pool allowed the nobility to
profit from industry without the "taint of trade." It allowed others a relatively safe
return on investment by virtue of the diversification of the lending institution.
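For the technically minded, the core rule of double-entry bookkeeping is simple enough to express in a few lines of code. The following is a minimal sketch in Python (the account names and amounts are invented for illustration): every transaction is recorded as a matched debit and credit, so the books always balance.

# A minimal double-entry ledger: each transaction posts an equal debit
# and credit, so all entries always sum to zero. Account names and
# amounts are invented for illustration.
from collections import defaultdict

class Ledger:
    def __init__(self):
        self.balances = defaultdict(int)  # account name -> net amount (cents)

    def post(self, debit_account, credit_account, amount_cents):
        """Record one transaction as a matched debit/credit pair."""
        if amount_cents <= 0:
            raise ValueError("amount must be positive")
        self.balances[debit_account] += amount_cents
        self.balances[credit_account] -= amount_cents

    def in_balance(self):
        """The defining invariant of double-entry bookkeeping."""
        return sum(self.balances.values()) == 0

ledger = Ledger()
ledger.post("Equipment", "Bank Loan", 50_000_00)  # borrow to buy machinery
ledger.post("Inventory", "Cash", 1_200_00)        # buy raw materials
assert ledger.in_balance()                        # holds after every posting

The invariant checked by in_balance is precisely what made the technique so amenable to systematization, and eventually to automation.
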
Organizational techniques for the conduct of business also developed. These
included limited liability companies, corporate groupings, and holding or investment
companies. A variety of state-run enterprises also sprang up, some operated
directly by governments, and others through stock holding partnerships with private
enterprises. Still others were maintained at a fictional arms length by the state as
"crown corporations." In some cases, only the state could undertake a venture,
because only it could risk sufficient capital--both because of its power to accumulate
such funds, and because of its lack of accountability to shareholders.
Flexibility and liquidity were improved, as were the means of raising still
larger capital sums, by the introduction of publicly traded stocks and bonds and by
the subsequent development of technical apparatus needed to support such trade
in the form of stock and bond markets, brokerage houses, regulatory authorities,
and so on. In the industrial age, these trading enterprises have sometimes grown to
enormous proportions, occasionally taking on the status of international
conglomerates or monopolistic cartels that in some cases were more powerful than
the states of which they were nominally citizens.
At the same time, the state itself has become progressively more involved in
trade and commerce, regulating such conduct to the minutest detail and even
entering the marketplace directly on its own behalf. When large corporations have
failed, governments have sometimes bailed out the owners and employees by
taking over the operation, an action that would have been beyond the ability, or
beneath the interest, of even the largest groups of private investors. In some
situations, governments have felt compelled to intervene to force the breakup of
very large companies into smaller entities. In all such cases, the governments
involved have had to walk a tightrope between the need to manage the economy
and the danger of taking it over entirely--a danger clearly illustrated by the
bankruptcy and collapse of the former Soviet Union and its client states. The public-
sector economy is important, but no market forces exist within it, and it therefore has relatively little incentive to be efficient unless the state itself approaches bankruptcy.
By the late twentieth century, the systematic and methodical application of
management technique to business and the economy in most developed parts of
the world was widespread and very advanced. At the same time, Western
democratic governments' economic involvement and interference in their
economies was scarcely less than what had been attempted in Eastern Europe,
though such control achieved far more success in the West in terms of material
benefits for the majority of citizens.
The information age owes its existence in large part to the freedom of
individuals to innovate, to employ venture capital, and to operate as entrepreneurs.
The ability of freely flowing capital to back new ideas and make them winners in the
marketplace before any government has been able to respond with new regulations
has been critical in the development of computer hardware and software,
communication systems, the Internet, medical and other technologies. Major
developments in all these fields have taken place primarily in countries like the
United States, where new things are freely allowed until they become subject to
regulation.
Revolutions are periods of rapid change; they do not take place unless
conditions allow such change. Thus the information age cannot even start unless
information can flow freely from one individual to another--a state of affairs that still
exists in the West, but did not under the closed economies of the former Soviet
Union, where new ideas were routinely forbidden until regulations were changed to
allow them. The inability of these nations to compete in a rapidly changing world,
and their subsequent disintegration, also illustrates that it is not change, however
rapid, that endangers a people, but stagnation.
In each phase of economic development thus far, a certain level of technique
has been required before the next stage began. For example, an agrarian society
must trade and will not thrive unless that trade becomes international in scope. An
industrial society requires banks and cannot finance its enterprises without them.
Ultimately, the growing complexities of the industrial age brought forth the
automation of both production and record keeping. Thus was born the computer--
the characteristic and necessary technology for the information age.
Like the others before it, the information age not only has been accompanied
by a transforming technology that in itself necessitates the rise of many new
enterprises and the demise of those made obsolete, but also has produced
fundamental changes in society and in the attitudes of citizens. It has therefore
demanded new ways of viewing the conduct of business and the management of
the economy. These changes will result in many "winners" and "losers" among
individual business enterprises, various corporate sectors and among national
economies. Some corporations and peoples are in a much better position than
others to exploit the new paradigms and become the next era's economic and
political leaders. The remaining sections of this chapter give a more detailed
examination of a few of these issues.

8.2 Technology, Business, and Economies


Each change in the characteristic technology of a given society results in a
shift in economic activity and power. For example, those skills or techniques that
are most highly valued in an agricultural society are much less at centre stage in its
industrial successor, while those of the hunter-gatherer, though they still exist, fade from significance altogether. Businesses that depend on the importance of a
particular form of trade or technique may find their very existence jeopardized
when new technologies or even new trading routes and patterns come to
predominate. Such alterations are always necessary whenever technology changes,
for some countries will have the now required resources in greater abundance, will
better develop the new techniques, or will be more strategically located than others.
Indeed, those peoples most securely wedded to the previous technologies--and
most successful with them--may well be the most reluctant or unable to change.
They may have, in human and other terms, the poorest resources for moving into
the following age. On the other hand, countries that are able to use the new
technologies to bring the old ones up to date, then move on a large scale into the
new are those that prosper when changes come.
When the industrial age dawned, the principal trade was in food, clothing,
spices, precious stones, and metals. As machine-based industry grew in importance,
these older markets all continued to exist and even enlarged in absolute terms.
However, they declined in importance relative to the overall economy, which was
growing many times faster. Eventually, the trade in consumer goods and in the raw
materials required to produce them took on an importance that dwarfed the older
economy in volume, value, and number of people involved. Those enterprises that
remained in older types of trade were required to become more efficient in order to
retain investment capital, as well as to release the large number of employees
needed by the new industries. Thus, new farming technologies grew up side-by-side
with and benefited from new industrial techniques. Farming transformed into a type
of industrial activity in its organization, machinery, and prerequisite education. The
flow of people and money to new economic dominators was disruptive to the degree
that individual businesses either adapted to the new conditions or died. Meanwhile,
overall gross national product in the industrializing nations grew at a rapid pace, as
did the disposable income and standard of living of all their citizens. The key was
the ability to increase productivity dramatically in virtually all sectors of the
economy simultaneously. At the same time, provision of transportation and
communications themselves became a major industry, as has the provision of
information services in the Fourth Civilization.
Profile On . . . Business Changes

What business am I in?


o Am I a hunter-gatherer or a food supplier? If the latter, can I become a
farmer as well?
o Am I a farmer who takes my produce to market or am I in the transportation
business? If the latter, could I establish a distribution network?
o Do I make bicycles and horse-drawn buggies, or am I a vehicle
manufacturer? If the latter, could I make the transition to building cars and trucks?
o Do I run a telephone company or an information infrastructure? If the latter,
could I establish Metalibrary facilities?
o Do I loan money, or do I assemble capital? If the latter, could I arrange
financing for multi-trillion dollar projects involving a network of organizations?

Where Should I be Located?


o If I am a hunter-gatherer, I need to live and work where the animals happen
to be on any given day in order to survive.
o If I am a farmer, I need to live and work where the soil will allow me to grow
food and the transportation routes will allow me to sell it.
o If I am in the transportation business, trade and travel routes and population
concentrations are critical factors in the location of my work.
o If I am in the food distribution business in the late industrial age, my
location is divided between producing and consuming areas, and includes
everything between.
o If I am in sales, my location is wherever my customer is.
o If I distribute goods and information via the Metalibrary, do I need even to
think of myself as being located anywhere, or is the very concept of a place of
business obsolete?

The Future of Economic Institutions

Transitions of a similar nature are now taking place on an even larger scale as
industrial paradigms give way to those of the information age. The manufacture and
distribution of consumer goods will continue to be important into the indefinite
future, though the nature and sophistication of such goods will change with startling
rapidity. Likewise, the production and distribution of foodstuffs must continue as
well. That is, the basics cannot be discarded, though both these sectors will
continue to shrink in relative importance to the overall economy. Manufacturing and
food production will both lose even more jobs than they already have to the service
and information sectors, though this shift will be more obvious in manufacturing
because it has recently been relatively larger than agriculture. Even the latter will
undergo wrenching changes as it too adopts the tools and techniques of the
information age. The workers of both sectors will continue to need higher skills than
ever, while they simultaneously decline in numbers. Reductions in employment in
these sectors will be achieved in part by extensive robotization as many manual
forms of labour are turned over once and for all to machines. In the same way as
farming became industrialized, both it and industry will be remade in the image of
the next age.
Successful manufacturers in the post-industrial age will continue to switch
rapidly from product to product as their research and development departments
improve on or replace the old and as their public relations arms either detect or
create changes in consumer tastes. Because computerized design and
manufacturing of goods can so easily be re-programmed, lead time for bringing new
goods to future market becomes much shorter. In addition, it will be more cost
effective than in the past to order short production runs of specialized items,
perhaps even to the point of manufacturing single pieces to a customer's personally ordered specifications. Thus, the variety of goods increases within broad standard
categories. It will still be possible for small companies that carefully select niche
markets to compete with larger ones on even terms, though it may take some time
for the prices of manufacturing robots to fall to the point of being affordable by the
smaller firms. There will be a continuing increase in the volume of goods ordered
through electronic shopping malls that collect subscribers' orders and send them
direct to manufacturers, thus eliminating wholesale and retail middlemen.
Improvements in efficiency and price to the end-user could be very substantial,
because from 30 to 70 percent of the retail cost of some goods represents post-
manufacture markup. This disintermediation or exclusion of the middle man is an
important trend in the information age, as it places manufacturers in direct contact
with their customers for the first time since the agrarian age.
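The arithmetic behind this claim is straightforward. The following sketch (in Python; the figures are illustrative only, not data from the text) computes the retail price implied when a given share of it is post-manufacture markup, and hence the saving available to a customer who buys direct:

# Back-of-envelope disintermediation arithmetic (illustrative figures):
# if a given share of the retail price is post-manufacture markup, buying
# direct from the factory saves roughly that share.
def retail_price(manufacture_cost, markup_share):
    """Retail price when markup_share of it is wholesale/retail markup."""
    return manufacture_cost / (1.0 - markup_share)

cost = 100.0  # hypothetical manufacturing cost of an item
for share in (0.30, 0.50, 0.70):
    price = retail_price(cost, share)
    saving = 1.0 - cost / price  # fraction saved by buying at cost
    print(f"markup {share:.0%} of retail: price {price:.2f}, saving {saving:.0%}")

Under these assumptions the direct buyer saves exactly the markup share--up to the 70 percent cited above--before allowing for the costs of the electronic channel itself.
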

Everyone is connected to everyone else, but the chains that form those connections are getting
shorter.

Ultimately, however, service and information sectors, and manufacturers supporting them, will dominate other sectors as much as industry came to
overshadow agriculture in the past. More goods may well be manufactured, and
they may be cheaper and more sophisticated, but their making will employ fewer
people, and this part of the economy will contribute less obviously to everyday life.
Except when they take the time to design their own, people will buy and use
consumer goods as they now do food, that is, with little thought for or
understanding of the means by which such goods come to their hands, since few
people will have direct contact with those processes except at the retail and
delivery (i.e., service) stages. It could become as rare to be or even to know an
assembly-line worker as it is today a full-time farmer. In other terms, the
manufacture and distribution processes will become higher level (more invisible)
abstractions, passing out of sight and out of mind, no longer things many people think consciously about. Perhaps "hobby factories" could develop, much as hobby farms
have, to provide outlets for a certain kind of nostalgia and a source for hand-made
goods for specialty markets, or as some kind of tax shelter.
Capital will still be needed for the extraction of raw materials and for the
establishment of factories--even those run by robots--but much less money would
be spent on operating costs because of the relatively lower numbers of salaries
paid. These funds would go instead into shareholder profits, research and
development, and large-scale capital-intensive projects. Medical research and
technology, information systems, communications, transportation, the building of
new habitat on earth, and the colonization of space could become the dominators of
the large capital economy and the most visible vehicles for the business of doing
business on a grand scale.
Some of these teraprojects would require concentrations of capital orders of
magnitude larger than those arranged by the wealthiest tycoons of the industrial
age. This need could result in even larger banks and multinational corporations and
even more government involvement, for these institutions would also need to grow
to meet such challenges. At the same time, however, the rapid economic changes
taking place are catching many industrial giants unprepared, creating many new
opportunities for individual entrepreneurs and small companies, if for no other
reason than that a small number of people can reach decisions faster than can a
large number. Some of these enterprises are in turn becoming the middle-sized and
larger corporations of a decade or two later. Change opens up numerous
opportunities for small-scale business, so one could predict that numerous and
dramatic changes could cause the average size of business enterprises to shrink for
some time, reach some equilibrium, and not begin to grow until a new stability
emerges (if one does).
In other words, there are collectivizing and individualizing trends in tension
here as well. Another kind of equilibrium could be reached if manufacturing and
financing concerns formed temporary project-oriented partnerships on an item-by-
item basis, rather than merging into permanent and enormous conglomerates.
Despite a proliferation of small companies, large enterprises will have to
operate on a scale never before seen, and this will require new financing techniques
to make sufficient capital available. Just as the introduction of stocks broadened
participation in business during the industrial age, new instruments would have to
be devised to create larger and broader-based capital pools than ever before. Many
ordinary citizens, companies, and governments will probably have to have a stake--
even if only through mutual funds--in such things as space-based industries,
biochemical research and sales, land reclamation, the Metalibrary, the robotization
of industry, and a variety of transportation, communications, and habitat projects.
The result may well be that even more international commercial enterprises will
transcend national interests to the point that people will have more feeling of
loyalty to and involvement in the companies in which they own shares than they
have an attachment to the countries in which they reside. For the increasing
number who work as single entrepreneurs under service contracts to make a living,
corporate ties will be entirely proprietary. The employees that businesses do have
will more often be either part owners or independent contractors, for the times of
viewing labor as a vast pool are now past and the day of the professional worker
has dawned. This professionalizing of the work force focuses attention on the
individual rather than on the mass of workers and may well create an economy and
social values that are more individualistic in many aspects of life.

The Question of Size

It is worth observing that there may be an upper limit to the size of a company as a single financial or structural entity, for some governments have acted
strongly in recent years to break up large concerns into smaller ones, acting in what
they believed to be the best interests of the public. In the past, such divestitures
had widespread popular support on the theory that giant size and monopolies
automatically produce predatory behavior. Classic U.S. cases are the breakup of
Standard Oil early in the twentieth century and the more recent creation of the regional Bell operating companies from the breakup of AT&T. It is too soon to judge the results of the
latter action, or of the blocking of proposed mergers in other fields. By the late
1990s Microsoft had grown to be the most dominant company in the software world,
and also became the target of successful antitrust suits, though political
considerations following a change of government may have precluded applying
appropriate penalties for the illegal activities. It is also worth noting that very large
companies sometimes break themselves into a number of entities based on
geographical or market sector considerations, so as to gain efficiencies and
maximize shareholder value. A case in point is the giant Canadian Pacific Railway,
which in the late 1990s re-invented itself as several non-overlapping entities,
allowing each to concentrate on a specific market sector.
Even if it is the case, however, that companies were never individually
allowed to grow beyond a certain size, they may still form multi-trillion dollar capital
pools through use of corporate linkings and partnerships. After all, each expansion
of the scope of the economy has required a corresponding increase in the size of
capital pools to serve the growing market--just as it has required a larger
transportation and communication sector. The difference is that such partnerships
need not be permanent, but could instead be temporary and flexible--large versions
of the metaperson. Some formalization of international cooperation among specialist firms or individual professionals could give such an alliance a formal identity. These entities could be corporations whose stakeholders are other (possibly personal) companies--a practice already common for longer-term entities. Though not necessarily large in itself, such a conglomerate could have an enormous supporting structure.

Suggestions for Large Projects

Among the more remarkable proposals (fictional and otherwise) that would
take large sums of money, people, and other resources:

o reclaim large deserts such as the Sahara for living space,
o build domed cities under the ocean,
o build a circular dam in mid-ocean, pump out the water behind it, and build a
new nation-city within the dam walls,
o construct an elevator to near-earth orbit,
o construct cities in space,
o explore the solar system and beyond,
o build colonies on the Moon, Mars, Venus, and a moon of Jupiter,
o build a pollution-free power source and transmission facility in space,
o mine the asteroids,
o build solar energy power plants in space to transmit energy to Earth,
o establish the full Metalibrary,
o move manufacturing to non-habitable locations underground, inside
mountains, to space, or to the moon,
o not only map the human genome, but find out what every gene does.

Some of these may never come to pass, but others on a similar scale surely
will. It should be clear that they can only do so through individual, corporate and
government cooperation of a type that has never before been experienced.

Trends for the Future

In all, it is clear that the post-industrial economy has both collectivizing and
individualizing trends. It has facets that promote individual professionalism and
entrepreneurship on the one hand and those that promote the growth of very large
scale enterprises on the other. None of the present aspects of the economy will
vanish--manufacturing of goods and food distribution will continue to grow with
population in absolute terms--but they will not continue to dominate either public
interest or employment. Instead, those sectors will shrink back into a nearly
invisible infrastructure run by machines that are superintended by a few managers,
who would themselves be part of the information economy rather than traditional blue-collar workers. Of necessity, the companies engaged in such activity
would be relatively large (in capital terms) by today's standards but would occupy a
middle ground in the economy. It is, of course, uncertain how long some of this will
take. Social responses to new technology are even harder to predict than
availability of the techniques themselves, but the very possibility of moving in the
indicated directions will itself create pressure to do so, in the name of efficiency if
nothing else.
There will be vast new economic frontiers opening up, and the instability
common to all rapidly growing and changing economies will be a feature of the
economic landscape for some time to come. An interesting side-effect of this is that
the study of economic technique itself could become more important, though efforts
to apply systematic planning to the economy could continue to be frustrating
because old data never quite catches up to current reality. It could be argued, for
instance, that one reason for the recent success of the computing industry is that
governments have not understood it soon enough to be able to stifle it through
regulation. It could also be argued that a state of stasis or economic stability is
undesirable and should not be striven for, because it would imply stagnation and
eventual decay, and that change ought therefore to be welcomed as a benefactor
and encouraged.
There is no shortage of information about the economy, and computing
hardware will soon exist to collect, store, and manipulate this information even on a
global scale, but it is not yet evident that an economic calculus exists that can
comprehend the effects of change on the scope being discussed here. Such a
calculus will have to deal more with change and growth than with a stable economy.
Explaining what is happening in economic terms even while the object of study is
rapidly transforming itself beyond recognition is one of the major academic and
technical challenges of the information age. There is certain to be no shortage of
candidates for the position of Economic Newton to the new era. If a means of
explaining how the economy works can be found, there will likewise be no shortage
of those proposing to manage it on a professional basis. Thus, economic technique
might be part of the collectivist aspect of the future order, despite the failure of
comprehensive management efforts in the past.
Jacques Ellul saw this trend clearly, and argued that like all techniques, those
of economies could brook no opposition in being developed to their logical
conclusion--a comprehensive planned economy, micromanaged to the last detail by
an army of economic technocrats in the name of maximum efficiency. However, his
conclusion was based on certain assumptions of the industrial age, some of which
may no longer be valid.
First, such an analysis presupposes that there does exist at least one
comprehensive economic technique. During an age when industrial technology
provided the paradigms, this may have seemed to be a reasonable assumption. It is
possible to observe techniques in many fields of human endeavor as they are born,
develop, and mature to a comprehensive statement or discipline. They become
well-understood components of their environment, taking on an automatic and
machinelike aspect. Although this is certain to continue, it is unclear as yet that this
process applies to knowledge itself--that is, to information, the supplier of
paradigms for the current age. Is the sum total of knowledge in any field of study, or
in all of them collectively, limited or unlimited? If it is limited, a final equilibrium
state is perhaps possible; otherwise it is not.
However, it will never be possible to prove that everything knowable has
already been discovered by the human race, even if it has. Humanity may well style
itself as its own god but can never know that it is all-knowing. A transcendent deity
is one that can only be revealed, not constructed. If, therefore, the information
economy continues to transform itself rapidly, growing without any apparent limits,
a comprehensive set of economic techniques might never be devised--attempting to
do so might be as difficult as managing the weather at every point on the globe.
Second, it is not clear how the general availability of economic information
and general knowledge of economic technique will affect application. The citizenry
may well submit to the professional opinion about what is economically efficient,
thus creating a de facto managed economy. However, there are certain to be
competing voices offering alternative theories, so that even if the best of economic
techniques is discovered and applied, there will be the political problem of
persuading people that this is so. There will always be some prepared to argue that
a current technique, efficient as it may seem, ought to be replaced by a different
one. The only way to discover if the argument is correct may be to try it; theoretical
demonstrations of efficiency are unconvincing and often wrong. This may always be
the case, for technique is application, not theory.
That is, Ellul's worries about the inevitability of economic techniques may be
valid, but as long as there is neither understanding of the role of information nor the
ability or will to manipulate it, the day when the economy will be manageable in
every detail will be postponed indefinitely. Indeed, economic theory has had a hard
time in recent decades. It failed to explain the "stagflation" of the late 1970s, which
saw high unemployment and high inflation together--a combination previously
thought impossible. It also failed to predict or account for the dramatic stock market
plunge of October 1987, or for the prolonged rise through the nineties. Economic
pundits had widely differing, even contradictory, interpretations of why such events
took place and what the effects would be, even in the short term. For its part, the
information economy is not only new, but entirely out of regulatory control. No one
person or group "owns" or manages the Internet (nor is anyone likely to own the Metalibrary). It is
growing and changing so rapidly that its major features may be in place long before
there is even any theoretical work done on its management.
Third, the sorry history of attempts to do comprehensive micromanagement
of the economy must lead us to question whether it would be wise to attempt it
even if it did seem possible. Putting such power into the hands of a government has
invariably resulted in brutal dictatorship, and there is no reason to suppose that
because we collectively became competent to do what could not be achieved in the
past, we would be better off for doing it. History suggests that such power would
simply produce another tyranny.

Profile On . . . The Economy

Productivity

Throughout the 1980s and 1990s billions of dollars have been spent on
computing and information technology. Yet, the standard measure of productivity
(product value divided by average wage) has remained constant during this time.
This fact is called the "productivity paradox" and many commentators have used it
to suggest that money spent on new technology has been wasted.
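
To make the measure concrete, here is a toy calculation in Python (all figures invented) showing how the ratio described above can remain flat even while technology spending raises both output and wages; whether that flatness really means the money was wasted is questioned next.

# A toy version of the measure quoted above: productivity as product
# value divided by average wage. All numbers are invented for illustration.
def productivity(product_value, average_wage):
    return product_value / average_wage

# Before computerization:
before = productivity(product_value=1_000_000, average_wage=40_000)

# After heavy IT spending: output value is up, but so are wages, and the
# underlying way of working is unchanged--so the ratio stays flat.
after = productivity(product_value=1_250_000, average_wage=50_000)

print(before, after)   # 25.0 25.0 -- the "paradox" in miniature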

However, this analysis fails to take into account two factors:

First, this is not the first productivity paradox, but the third. The first two (in
North America) were the electrification of factories from 1899 to 1939, and the
electrification of urban homes from 1907 to 1929. In both cases, there was little or
no change in productivity until the end of the period. Then, on the one hand, more
efficient factories could be built with radically different designs, and on the other,
the gains made by using machines in the home could be translated into economic
advantages for the family, and the standard of living could improve dramatically.

Second, productivity gains are unlikely during the time an old system is being
modernized. Such a process consumes time, energy, and capital, but leaves
essentially the same way of working in place when it is done. Rather, the important
changes take place after the technology has been deployed for traditional tasks,
when it begins to transform the workplace into something new and fundamentally
different.

Thus, it is only when the economy is well into the latter phase of computerization, and adoption into existing systems has reached at least 50 percent, that the transforming enablement of the information age is likely to produce great productivity gains--these take place after the old system has been changed. Perhaps just such a radical transformation can be seen in the growth of the World Wide Web after about 1993, when the Internet went from being a scholars' tool to a household appliance in a few short years. (Such changes now take place much faster than in previous technology revolutions.)

Ted Lewis, writing in the May 1998 issue of the IEEE journal Computer
suggested that the economy is moving to a friction-free model, wherein new
information can be acted upon almost immediately. He notes some of the same
trends mentioned in this chapter--the exclusion of the middle man
(disintermediation), the integration of many economic functions under single
entities, the increasing flexibility of the workplace under a more professional and
less rigid organizational model, and the increasing ability for manufacturers to
target individual customers' wants and needs.

If this analysis is correct, then the lengthy expansion of the North American
economy experienced in the 1990s might well continue for a decade or more, as the
productivity gains of the Metalibrary begin to be realized by the majority of citizens.

Conclusions

What, then, can be drawn as a definite conclusion about future business activity and the economy as a whole? Although there are certain broad collectivist
trends, individualist ones seem also likely to be manifested, with economic
knowledge and power becoming widely disseminated. This very knowledge will feed
back into the system to reshape it. A group of self-professed economic technocrats
or professional managers may well exist, though they may not be successful in
achieving a comprehensive understanding or much real control over a very rapidly
changing system.
However, less neutral opinions are not hard to find. The stock market could
suffer a great crash, confidence in government could decline from low to none, or
the new century could see a variety of other calamities that render late 1990s
economic and social analysis unusable. It is too soon to say, for instance, whether
the bear market begun in 2001 constitutes a trend, the beginning of a deep decline,
or merely a temporary technical setback. Ted Lewis might be wrong, and a deep
recession just around the corner. Governments could collapse, nations self-destruct,
and corporations important in the 1990s cease to exist. Some of this has happened
already, with problems in Eastern Europe and the Middle East continuing to affect
the world economy, perhaps in the end profoundly. In this volatile time, when
dramatic change is normal, some apparently small factor could trigger radical
alterations to the world's economic, corporate, and political structures that dwarf all
the considerations mentioned thus far.
Given recent patterns of economic change, it ought, however, to be possible
to forecast with reasonable accuracy what will be the winning and losing sectors in
the new civilization--in at least the intermediate to long run--and the next section
will be devoted to these issues.

8.3 Wealth in the New Era


Perhaps it seems pessimistic to speak of economic losers in the information
age. After all, is it not an axiom of progress that it automatically results in everyone
being better off than they were before? Indeed, progress, thought of as an
independent force toward human betterment has been very much an icon of the
industrial age. In material terms--goods and services--a citizen of a modem
industrialized country is certainly much better off than one of a century ago, and
this is particularly evident in terms of medicine. Thus, in some spheres, progress
seems real, even though in the social, political, and moral ones it could be
questioned. Whether the almost mystic notion of inevitable progress will continue to
be a control belief of the new era will be examined partially in Chapter 11.
Assuming for the sake of argument that (at least economic) progress is an
inevitable force, the wealth it brings has been distributed very unevenly both by
industry and by geography. Despite its name, which seems to suggest specific
routes and particular goals, progress is fundamentally unpredictable--most of its
prophets seize on one or another utopia as the eventual resting place, the nature of
which reflects the economic and political biases of the predictor. In addition, such forecasts are often made in absolute terms, ignoring the fact that people's perceptions of change are relative--relative both to where they have been and to where they see
everyone else as being. Each individual in a society has a somewhat different view
of the past, the present, and the future. With this come different hopes, and so
different views of what has already or might yet constitute progress.
For instance, one person might define progress solely in absolute economic
terms such as employment or home ownership. Another might define social
progress in terms of the percentage of the population with incomes, say, one
standard deviation below the mean. Since this figure--the number below a relative
poverty line--need not change as the mean rises, those fond of such analysis may
see no progress at all, even if more people can buy more things. Still others might
survey the moral landscape of the population and draw conclusions that are not
dependent on material prosperity at all--or even suggest that there is an inverse
relationship. Thus, it is not difficult to generalize the arguments of the last section
and suggest that progress, if it is real, may have no general goals, with the possible
exception of efficiency.
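The arithmetic behind the relative-poverty observation is easy to check. The
following is a minimal sketch in Python (the lognormal income distribution and
all figures are hypothetical, chosen only for illustration). When every income
rises by the same proportion, the fraction of the population falling below a
line drawn one standard deviation under the mean does not move at all:

    import random
    import statistics

    def below_relative_line(incomes):
        # Fraction earning less than one standard deviation below the
        # mean--the "relative poverty line" discussed above.
        mean = statistics.mean(incomes)
        line = mean - statistics.pstdev(incomes)
        return sum(1 for x in incomes if x < line) / len(incomes)

    random.seed(1)
    incomes = [random.lognormvariate(10.5, 0.6) for _ in range(100_000)]
    richer = [1.5 * x for x in incomes]     # everyone 50% better off

    print(f"before: {below_relative_line(incomes):.3f}")
    print(f"after:  {below_relative_line(richer):.3f}")   # identical fraction

Both printed fractions are the same, because the mean, the standard deviation,
and every individual income scale together; absolute gains leave this relative
measure untouched.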
Economic change also takes time, for there is a great deal of inertia in the
spending habits of governments, industries, and individuals. However radical the
changes in these are, this inertia acts as a governor or brake, and is itself a key
factor in determining how society can change. Moreover, economic information
disseminates through society slowly, and decisions made in one year might not
realize their full impact for several years. Nevertheless, government and industry
leaders are often expected to take immediate action and produce immediate results
for the bottom line. Since the economy is probably already in a different part of the
cycle than it appears to be, such pressure often results in actions that exacerbate
the very problems they were designed to solve, or that exaggerate the swing away from the
difficulty. One could hope, as Ted Lewis does, that in a friction-free economy such bad
decisions will become less likely, but such a hope assumes that decision
makers will also ignore emotional pressure, bad economic models, and
demagoguery, and history would suggest this to be unlikely.
Moreover, although the new economy will inherit from the old in a straight line
fashion, the steps whereby a new order is attained are documentable after the fact
but are not necessarily obvious or even evident during the transition. With all these
cautions, it is still interesting to analyze present and possible trends by industry and
by country to determine which could gain ground and which could lose it relative to
the ever-changing norms of economic activity.

Economic Change by Industry

It may seem easy to pick a few of the industries, occupations, and enterprises
that will be economic winners in the information age. Electronic and biochemical
high technology, information industries, robotics, habitat engineering, and space
technologies are poised to become economic leaders, and many businesses in such
enterprises will undoubtedly grow and prosper. It is difficult to be more specific,
however, and to select individual companies as the greatest beneficiaries in the new
economy, for much depends on adaptability, vision, the readiness to make decisions
and take risks, and the ability to combine all this with sound management. The
winners also need to be able to form and maintain associations or formal
partnerships with other companies on an international basis--an exercise in cross-
cultural cohabitation that may be beyond all but the most imaginative and flexible.
For instance, even well-positioned giants such as IBM, which should have been
at the center of computing technology innovation, sometimes found themselves
pushed to the wings by the likes of Apple, Dell, and Compaq. The big players of a
previous wave of innovation often find themselves forced to play catch-up in a
marketplace that is highly competitive and demands rapid decision making and
change--both of which are difficult to do in traditional big companies. When IBM
attempted to set standards, these were not particularly innovative and were either
easily duplicated by a host of smaller companies, or ignored altogether. The result:
though its sales have grown in absolute terms (they could hardly help but in this
sector), IBM can no longer claim to be the single dominant force in the industry and
innovative product leadership has often come from elsewhere.
Yet, though IBM may have lost some of the edge of innovation, it retained the
marketing skills that kept it in a prominent position even without the mantle of
leadership. It also retained sufficient trust in its name to be able to charge the
highest prices, as well as sufficient volume to manufacture at the lowest cost in the
industry. These factors increased the size of the company and allowed it to do costly
research on supercomputers, superconductivity, new materials, and new kinds of
software--all of which will be required to build the kind of devices that will establish
the Metalibrary, will result in new and innovative products, and will enable
management of the wider economy. IBM may not be the first to manufacture the
devices made possible by its research, but its role is important to their eventual
production. This situation nicely illustrates the need for both large-scale, capital-
intensive research and small-scale, innovative applications of new knowledge.
Apple Computer, on the other hand, illustrates that even being the
company responsible for almost every innovation in computing for two decades is
not enough to maintain a dominant market share. This is because it is not quality
that sells in a mass market, but perceptions (the sizzle rather than the steak).
Better managed companies whose people understood this were able to out-market
Apple, even though their software and hardware products were often inferior. With
the return of Steve Jobs to Apple, and the subsequent change in management style
and marketing techniques the company once again became prosperous. This
example serves to teach two lessons: first that any one of bad management, poor
decisions, or worse marketing can easily overcome superior product quality; and
second that as in the moral realm, knowing what is best does not imply doing it--or
in this case, buying it. In other words, even perfect information does not lead to
corresponding action; Aristotle was simply wrong about this.
The automobile industry provides another interesting example. In its infancy,
it boasted hundreds of small enterprises, but those gradually coalesced into a
handful of giant car makers with numerous cross-industry standards. These in turn
came to be threatened by more sophisticated and automated offshore firms that
took advantage of new technologies to build better products before their North
American counterparts could react to new market realities, and it took a long time
for the latter to catch up.
As the computing industry grows, aspects of it will also mature and become
standardized, and there will continue to be a reduction in the number of vendors,
though some of the remaining ones will grow in size. Such a rationalization in the
computer industry has taken place before, in the 1960s and 1970s, when
mainframe manufacturers adapted to the IBM standards. A similar process may be
as all-encompassing in the small-computer industry, even though there are many
personal tastes to accommodate.
For instance, the visual desktop metaphor that was pioneered by Xerox and
popularized by Apple as the MacOS, was then partially imitated by Microsoft, and as
"Windows" has taken most of the market. There remains room for some divergence
in basic hardware, storage devices, operating systems, languages, and even
consumer applications. In the end, the number of small computer manufacturers
may shrink substantially and fall in line behind Intel, Microsoft, IBM, Sun, Apple, and
one or two others insofar as standards are concerned. While some will undoubtedly
be casualties in the consolidation crossfire, others will prosper as they do now
supplying parts, peripherals, service, consulting, and software. At the same time,
international organizations such as ISO, the IEEE, and ECMA have also become more
important for the setting of useful standards in the industry for the simple reason
that they are perceived as impartially transcending any single economic interest.
The establishment of the Metalibrary might serve to open up individual
opportunities to gain and use technical knowledge for profit. It may well be some
future start-up company run out of a garage that builds and sells, say, the first
"pocket brains" or the first lap-top or pocket Metalibrary terminals. Today's
computing giants must contend with innovation from start-ups at home and also
with attacks on their market share from abroad. They may not even exist in sixty
years; their survival depends on their adaptability and vision.
Apart from the computing industry, others likely to be successful in the
information age include biotechnology, communications, transportation, and high-
technology construction. Engineers of all kinds will be in great demand, as will be
lawyers, teachers, analysts, system designers, accountants, document writers, and
contract supervisors. Certain professions that may seem to be obvious winners,
such as circuit design and programming, may enjoy only a short-lived boom. When
computers are used to automate such tasks, such jobs may not be needed as much.
However, their holders are used to being problem solvers, and as such will always
experience demand for their services, regardless of the specific job title.
It is more difficult to forecast which industries will be unsuccessful, especially
outside the high technology fields. Some managements have the vision to adapt
new tools to their fields efficiently, and others do not. The properly perceptive
corporation in any field--including agricultural and smokestack industries--may not
lose market share or importance, even if its overall sector declines relative to the
entire economy. Manufacturers such as General Electric have often been cited as
examples of companies that have been able to grow and change with their
customers over long periods of time. Likewise, the Hudson's Bay Company (HBC)
has become the world's oldest continually operating commercial concern by being
able to discern those same changes on a retail level, though it was very late getting
on the Internet. Interestingly, it too is currently re-inventing itself as discrete chains
of speciality outlets instead of as monolithic department stores.
By contrast, many firms that were successful in the late industrial or early
information age collapsed when they were unable to make the transformation from
one product or market to another that a rapidly changing situation demands.
Whether a given company-- old or new--can expand its vision sufficiently to thrive
(or survive) change remains to be seen. After all, there are numerous high tech
firms, some of which were for a time quite large, that have already succumbed to
market pressures, poor management, and negative marketing perceptions. A
journalistic feeding frenzy alone can do serious damage to a company--even in the
absence of any genuine marketing or technical problems. A classic example was the
treatment Apple got from the press in 1996-1997 when reporters with no real news
on hand were able to halve Apple's market share on the basis of nothing more than
mutually-reinforced negative reporting.
The HBC serves as an example that it is possible to establish trade to hunter-
gatherers from an agricultural society, maintain that trade throughout the entire
industrial age, and participate in the next as well, by retailing electronic devices.
That no other companies have survived the same time span is telling evidence of
the difficulty of managing commercial change and might lead to the conclusion that
few of the present companies will survive, though information on potential change
will also be available to them if they can take advantage of it. The point is that the
new technologies necessarily transform the old institutions even while they create
new ones. Those that can effectively manage the transformation survive and thrive;
those that cannot die out.

Geographical Considerations

Just as there have been geographical, economic, and political considerations
that have resulted in uneven concentrations of certain technologies in the past,
such factors will also cause some regional disparities in the near future. That is,
there will probably be geographic winners and losers at least in the initial stages of
the new economy.
North America has been a leader in the computing industry and that may
continue, though it is always hazardous to predict that some economic pattern of
the past will continue into the future for very long. This is particularly true in the
context of the Metalibrary, where neither information nor vendors need respect
artificial lines drawn on the Earth as national boundaries. As the biomedical and
aerospace industries grow and change, they will go through startling transitions. It
is tempting to suggest, for instance, that the progress of the former could be similar
to that of the computing industry, with the giant drug companies losing ground to
enterprising garage and basement operations. Research and development costs in
this field can be remarkably small in some situations; the potential return is often
very high, and numerous academics have left universities in order to reap profits
from their research. These factors are certain to encourage many new entrants to
the field. On the other hand, the existing North American regulatory environment
can result in a ten year, 250 million dollar cost to bring a new pharmaceutical
product to the commercial market. This fact alone stacks the deck heavily in favour
of the very large companies, and the industry has already become quite
concentrated.
In the aerospace industries, the relatively large amounts of money involved
will likely benefit large companies in the near future, but some contract work will
continue to go to much smaller firms, especially those in close physical proximity to
the industry leaders. Again, it is easy to forecast more mergers as the market
globalizes and capital needs for new and larger products skyrocket.
As the routine use of global communications networks becomes a reality,
geographical proximity may diminish in importance in many industries. This could
affect such enterprises as banking, retail sales, travel, investing, and the offering of
professional services. Eventually, economic partnerships will know few national
boundaries.
Like corporations, those countries and regions that foster economic innovation
and permit rapid change will benefit the most economically. Some countries are in
much better position in this respect than others. The United States, though
experiencing massive shifts from old northern manufacturing belts to sunnier climes
in the South and West, has a wide lead in all the future technologies. The former
Soviet Union also has certain advantages in basic science and engineering,
including the largest concentration of doctorates in the world. These people now
have more freedom to pursue their work without state interference, and could
become an important factor in the future. In the short term, however, they have
little funding, outdated manufacturing capability, food shortages, political instability,
and a collapsed economy to deal with, and will therefore be severely handicapped
for a time.
For a time, the United States lost some of its economic leadership to smaller,
more rapidly developing and innovative countries whose entrepreneurs seemed
ready to manage change effectively. These included Japan, Germany, South Korea,
Taiwan, Singapore, and the underpopulated but resource-rich Canada and Australia.
However, bad management, government corruption, and poor market supervision
led to excesses in the Asian economies in the late 1990s and a sharp contraction
ensued in that part of the world. It is too soon to say whether those countries will
recover and resume the mantle of leadership, or whether they will give it up to
others. In the meantime, capital and other resources have flowed back to safer
havens in North America and this has been partly responsible for a renewed
domination of the world economy by the United States.
Canada and Australia might be thought to have some important advantages
because they can establish robotized manufacturing without as large a work-force
displacement as in some countries. Canada in particular has an abundance of
natural resources and space for potential habitat. Like the United States, it is
positioned both on the old Atlantic trade routes and on the new Pacific ones. It also
has considerable experience in both transportation and communications
technologies--essential prerequisites for information age leadership. However, it has
had decades of poor government management at both the national and provincial
levels, and so has squandered most of its natural advantages. Australia likewise
could be well positioned for the new trading realities, has substantial natural
resources, and a people with a reputation for adaptability and innovation. The
downside for both is that if the climate continues to warm as it has done recently,
important agricultural lands could become too dry to use, costing them important
economic leverage. Both will also need to establish visionary economic and political
leadership; electorates do not necessarily make the wisest or most appropriate
choices of governments, and even the best of these can lose office for reasons that
have nothing to do with their technical expertise, management ability, or vision.
Canada also has the handicap of its continuing identity and unity crisis; if it does
splinter into two or more parts, no one of these can expect to be significant players
on the world scene.
Even the rest of old-world Europe need not fare badly in the future, for the
information age does not require absolutely that a nation be a manufacturer of
goods so much as a manager of that manufacture and of the flow of information.
This can be done even if there are no industrial-age style trade routes, just as Japan
was able to establish industry without many raw materials of its own. Indeed, a
unified Europe could constitute a large enough market to be virtually self-
sufficient in technology. On the other hand, such unity has never come easily to the
Europeans. Lacking the threat of the Soviet Union, they may revert to their age-old
hobby of cutting each other's throats.
Indeed, since the first version of these materials was written in 1988, that is
exactly what happened in Eastern Europe. There is no particular reason to suppose
that it cannot happen in Western Europe as well. Lest the author seem to boast of
inappropriate prescience, however, it should also be noted that the earliest versions
of this book mentioned Indonesia as a dawning world power, but that nation
continues to be hampered by its own government; has not progressed socially or
economically; and probably will not until some time after its system of governance
has substantially altered.
It is also important to note that in recent decades Western industrialized
nations have adopted many Marxist ideas, and their people are less free than they
once were. That is, the trail of failure and death left by radical egalitarianism
throughout the last century has not dampened the desire of Western intellectuals to
tread this path again. This desire has given rise to some stifling of creativity and
free speech, denial and revision of actual history and literature thought to be
embarrassing, and a repudiation of the moral basis for law. History would caution us
that these nations could easily drag themselves back from whence they came,
passing the mantle of civilization to others in a rather short time.
The biggest winner might be China. Rich in resources, people, and innovative
spirit, she could be poised to move from an inefficient agrarian society to the
information age in a single leap. China has the potential to become the leader of the
Pacific Rim economies in the near future, taking the baton from the United States
and Japan, which have shared it since the Second World War. China is not without
political problems, or its own unity issues, however, and it remains to be seen
whether it can fare any better as a free nation than it has under communism.
It is worth noting that the Pacific region has the U.S., Canada, Australia, Japan,
South Korea, Taiwan and China, all of which are mentioned in most projections as
economic leaders of the next few decades. Clearly, a dramatic shift in trading
patterns from the Atlantic to the Pacific is indicated, not so much reflecting any
absolute decline in volume on the older routes, but massive gains on the newer
ones. One local region in North America that may have much to profit is the
Vancouver-Seattle port complex. Over the medium term these two could grow
together into one of the world's largest trading hubs. They could become
international centers for banking and information as well, because of their strategic
location on the trade and communication routes of the future. They could together
handle thousands of times their present volume of trade goods and raw materials,
but of course with much more automation, and relatively fewer human
longshoremen.
Other countries with potential for gains include Brazil, Argentina, and South
Africa--providing the first two can become politically and fiscally stable and the last
can rid itself of the ghosts of ruinous racial policies without disintegrating into
chaos. These and a handful of others could move into a modified industrial age
comparable to the 1950s and 1960s in North America, but with many of the benefits
of high technology added. There will be other countries that will surprise all analysts
by how well they do, and there will be some of the apparently favored few that do
not succeed, but instead slide back into either genteel poverty or outright chaos.
Many other third world nations will continue to make absolute progress toward
agricultural success and even industrialize to an extent, but it is difficult to forecast
anything but relative economic loss for many of the politically unstable,
overpopulated, badly divided, or resource-poor regions of the world. For just as an
industrial revolution must be built on a sound base of agriculture and established
financial, political, and physical infrastructure, so also must high-technology
revolutions be built on their proper bases. At the very least, such revolutions require
an innovative spirit, a commitment to extensive education, political stability, and at
least some previous industrial success. Too many countries lack some or all of these
qualifications and do not seem to be obtaining them. It may therefore be a long
time before the benefits of the industrial age--let alone those of the information
era--come to the majority of the people in many parts of the world. Such benefits
are not easily sharable with people who are not ready for them or who do not want
them, but are locked into a more primitive society and economy. That is, high
technology cannot simply be developed in one part of the world and transplanted to
another. Without the proper groundwork preparation, the seed will not take; even
with much hard work, it might mutate and grow into a very different form than in
the place that it originated. Moreover, techniques that in a democracy enhance
freedom and the quality of life may be used elsewhere to destroy both. It cannot,
therefore, be just assumed that technology transfer will take place. However, if it
does not, disparities in living standards will grow larger, and this would be
destabilizing to world peace. Thus, the successful countries may find that it is in
their self-interest to find ways to encourage other countries to adopt the new
techniques in some form--to gain markets, if nothing else.
Finally, there is the economy of Russia and its former satellites and puppets in
South Central Asia, Eastern Europe, and Cuba to consider. As much as any countries
in the world, the former Soviet Union and its allies have been wedded to the
industrial age and to its paradigms--and even this without much success. Only a
new found openness to allowing profound change on a large scale could allow these
nations to prosper in the information age. It was the very prospect of such changes,
and the fact that the people were aware of them taking place elsewhere, that
revealed to them the essential economic and ideological bankruptcy of their
nations. Whatever value Marxism may have had in the industrial age, it was too
rigid to cope with the information era, and collapsed with its advent. Its problems
are especially well illustrated by the economies of countries such as Hungary,
Romania, Poland, and the former Czechoslovakia, Yugoslavia, and East Germany, all
of which ran up enormous debts and fell on hard times. Poland's economy has
rested on steel and shipbuilding, and the same shift away from such activities is
taking place in Europe as in the United States. Since what is needed in this sector
can be undertaken more efficiently elsewhere, no amount of planning could revive
these industries to their old glories. Similarly, the Russian determination after World
War II to punish what it controlled of its enemy's territory left East Germany in
economic shambles. Its people were not ignorant of the economic contrast between
their nation and West Germany, and when their Soviet masters lost the will to
terrorize them into submission, the unification of the two Germanys became
inevitable, creating a new candidate to dominate the European community, if not
the world economy. It remains to be seen, however, whether even united Germany,
let alone the rest of Eastern Europe, can soon recover from the ravages of Marxism
and become important players in the next civilization.
Another consideration at the international level is the degree to which some
nations may feel threatened by the shifts in global trading patterns. Those that
allow old trade and industries to die and foster the formation of the new in order
best to take advantage of the changing world economy will likely enjoy continuing
prosperity. Those that attempt to protect whatever they think they already have
behind high tariff barriers, or that subsidize the inefficient and bail out the
incompetent will probably find their trade deficits growing and their relative
prosperity declining. Those that indulge in trade protectionism out of nationalistic or
xenophobic motives will damage not just their own people, but the prosperity of
other nations as well.
For example, Canada and the United States have a trading relationship of a
size and complexity that dwarfs any other. Yet special economic and political
interests continually threaten to damage this relationship, especially during election
years. Such actions are not only counter to long-run self interest; they oppose
information age realities and threaten the prosperity of all the world's peoples.
Openness, cooperation, and trust are by necessity the watchwords of the future.
There will be little economic sufferance of the closed, the suspicious, or the unfair
traders.
On the other hand, protectionism could induce trade wars that result in a new
world-wide depression, substantially delaying the advent of any new economy. Even
when the dangers of such behaviors ought to be well known, history suggests there
is a significant possibility that politicians might want to adopt the most irrational
course of action, and that the populace will be sufficiently ignorant of history to
allow them. That is, the optimistic economic scenario is not the only possible one.
Whether or not the world's economy both modernizes and globalizes, so that
individual and regional economic disparities are reduced, government and corporate
leaders will face many serious issues in population, employment, health care, and
wealth distribution--and unlike other times, they will not be able to keep their
people ignorant when other people are more prosperous than theirs. This alone will
force them to provide a different quality of leadership--one that is both accountable
and result-oriented.
The political implications of these geographic economic problems will be
considered in more detail in the next chapter. For now, having looked at institutional
change at the macro-economic level, it is time to focus attention on business
organizations.

8.4 Technology and Business Organizations
In the hunter-gatherer and agricultural civilizations, people who were engaged
in commerce were an integral part of their community, and their social status
reflected that fact. The village blacksmith, for example, had a specialized function,
and that function defined a total role in the community, dictating much of the life
and expectation of family members as well. One of the effects of industrialization
has been the fragmentation of work into many highly specialized job tasks, many of
which are only a small part of some major enterprise. Since such specialties are not
the means by which their practitioners relate to the community as a whole, many
people have found themselves dividing life into a series of roles, each depending on
the situation in which they have found themselves. This personal fragmentation has
been one aspect of a pervasive societal one; people play at religious and moral
roles in scattered fragments just as they conduct all their relationships with the
community in this way. Even language can be fragmented into a series of registers
for different life contexts so that one becomes different people speaking different
dialects for various discrete roles.
For example, an industrial-age entrepreneur might play the role of business
owner to the community, philanthropist to a church, customer to suppliers, boss to
employees, rival or friend to peers, partner to mate, and parent to children.
However, the connections among these roles are often incidental and tenuous. In
each of the various contexts, this might as well be a different person--little need is
seen for an integration of the aspects of life into a whole. Such lack of integration
extends to the moral/ethical realm as well. Different roles are seen to call for
different ethics. Thus, the generous philanthropist might also be a ruthless business
practitioner and merciless employer, and no one sees any contradiction in this.
Likewise, she may be unfaithful to her marriage partner and not detect an ethical
issue in the situation. The woman who is a Children's Bible School teacher on
Sunday might pirate software for her employer on Monday and use her employer's
machine to view pornography on Tuesday. The politician who takes a high moral
stand on integrity in government during the day might cheat on his wife by visiting
prostitutes in the evening and not consider this a contradiction.
The work of any employee in a dull and repetitious job might be even more
fragmented from life as a whole, for that job defines no role in the community and
provides little satisfaction to the worker. Such work is likely to have little to do
with family relations, for the children of such workers are usually not encouraged to
pursue such a job but to better themselves and move up the socioeconomic ladder.
The advent of the techniques of the information age suggests that individual
workers will know about and operate more of their enterprise. Very highly
specialized and repetitive jobs are those most likely to become automated, and
human beings are more likely to find themselves either unemployed, controlling a
large number of these automatons, or working in the service or information sectors
instead. That is, new techniques not only change existing business organizations
and bring new ones into being, but they also create new organizational forms. The
demand of technique is always for greater efficiency, higher productivity, more
automation, more profit, and fewer workers. Up to the early 1980s, this had the
greatest effect on the blue-collar worker on the assembly line or plant floor. Better
mechanical devices have gradually reduced the need for such jobs, and the payroll
has shifted to white-collar workers in secretarial and middle management positions.
However, the computing and information technology now available is causing
a similar automation to take place in the office as well. Fewer secretaries can have a
higher output if they have word processors. Senior management can get
information summaries done to order without needing their juniors to do research
for them--such work can be done easily, quickly, and automatically. The result is a
new round of job displacement in what had previously been some of the fastest
growing employment areas.
In general, organizations are becoming leaner and more efficient, and even
very large enterprises can be operated by fewer people with each passing year. One
result is an expectation of rapidly rising productivity and efficiency that creates a
stress not every worker is equipped to handle. That stress can
sometimes be reduced by diffusing or networking responsibilities among all the
workers (see Chapter 12). Another result is that the average amount of training and
experience that a firm has invested in employees is increasing rapidly as the
responsibility and knowledge expected of each increases. Still another result is that
it once again becomes possible to define one's total role in the community by one's
job because it is a profession or a craft--not merely an hourly drudgery for someone
else.
The employees who remain under such circumstances become progressively
more essential to the operation and more difficult and expensive to replace. For this
reason, many firms have turned to stock options and profit sharing to lock in
loyalties and high performance through part ownership. When this continues, the
employees over time are the company, for they eventually become partner-owners
of the whole enterprise. This model has worked well for professional firms of lawyers
and accountants in the past and is likely to be adapted in the future by many
others. Possible candidates for future conversion to employee ownership include
many public facilities now operated by the state, such as hospitals, road
maintenance, schools, social services, and even tax collection. For those who
promote this model, the potential benefits for the quality of goods, service and
employee satisfaction are so great that many public and private services--and even
some manufacturing--seem likely to become professional collectives in the future.
Those opposed to the privatization of government services, on the other hand,
worry about potential declines in quality and universality of services and the lack of
direct control by central authority. However, the necessary accountability to meet
these concerns can be established in many ways--and these may be much more
effective than in the past.
The adversarial relationships that have characterized management/labour
interactions in the past are an industrial age phenomenon that seems not to fit
easily with the new paradigms. Confrontation results as much from a lack of
information as from different ideologies. Information sharing promotes (but does not
guarantee) collaboration. In all business and government enterprises, the trend to
collaboration may become pronounced; this seems to be a required characteristic of
an information society. Of course, there will likely be organizations and places
where for ideological and emotional reasons this does not happen; it is a truism that
familiarity can breed contempt as much as it can cooperation. Thus some will still
fight old battles for control of dying enterprises; but such people seem destined to
become footnotes to history, for such engagements will know no working survivors.
Both within and without organizations, therefore, there will at least have to be
a network of relationships, responsibilities and authorities rather than a strict
hierarchy as in the past. A few companies in high-tech and other industries already
operate with networked rather than hierarchical structures. These are characterized
by minimal or no job descriptions and complex reporting lines, which may at times
be reciprocal (one person is the boss for some things, another for others). All
employees become a part of the management of the total enterprise and
accountable for its success. In the future, there may be an understanding that the
various professionals contribute to a common cause differently, and need not do so
equally in monetary terms to be given equal respect. Although high technology does
not require networked models of authority, it does make such models possible on a
large scale by providing distributive communication methods. These are generally
more versatile and comprehensive and may therefore supplant the relatively
inflexible top-down methods that characterize a hierarchy.
Another model is that of the flexible organization, where teams are formed for
specific projects, and then disbanded afterwards. For the purposes of one such
team, it could choose to have a hierarchy during its lifetime; the same people on a
new team at a later date might find themselves in a different hierarchical
relationship, or forming a network instead. One could go further still and suggest--
in the Metaperson model--that under a professional/contract model for work,
employers per se will become less important, and that individuals will band together
on their collective initiative to form corporate entities for specific projects, and not
be permanent employees of anyone else.
The whole range of possibilities is shown on the chart below--the original of
which was used to show the four civilizations in overview in Chapter 2.
In this overview, there is a near linear relationship depicted from lower left to
upper right--the direction of the apparent trend in organizational structure and
emphasis. As individual-oriented structures ought to be in some sense more flexible,
this makes sense. Structures off this line are perhaps not only inappropriate for their
time, but probably also inherently unstable. As can be seen in the next chapter,
such instability may indeed be characteristic of certain styles of government. It
should also be noted that the style, not the size of an organization alone,
determines where it would be placed on such a scale.
The organization gains in the shift to greater flexibility because there is more
uniformity of goal, purpose, and technique, and this is by mutual agreement of the
professional participants. The individual partner-employee also gains professional
status, a degree of autonomy, control over personal job conditions, and a voice of
influence in the whole organization. A consequence is that much attention will
continue to be paid in the future to organizational cultures, for the relationships of
the employees to one another and to the organization as a whole, as well as the
commitment of the group to the perceived mission of the enterprise, will be the
keys to success in the information age.
At the same time, improved communication facilities have made it more
feasible to work at home, and increasing numbers of people are already
telecommuting. As observed in the last chapter, this could change the home
environment and even affect house design. It could also affect the office, for at least
some of the workers in an enterprise will have little incentive to be with their fellows
for much of the week. Since this is a potentially fragmenting and isolating trend,
means may have to be found to overcome it by making the work place more
attractive--perhaps even as a place to live. Some interaction with fellow
professionals is necessary, and it may be that the majority of work done at home is
contracted out to professionals who are not part of the organization but are free-
lancers.
In the next chapter, some of the concepts developed here will be extended to
society as a whole. For now, it is time to turn attention to the methods of financing
and ownership of the typical information-age enterprise.

8.5 Financial Techniques in the New Economy


The importance of financial techniques to the success of an industrial-based
economy cannot be stressed too much. Without the means of collecting together
many smaller savings and then loaning large sums, the capital-hungry companies of
the industrial age could never have grown as they did. The next era will have its
own kinds of capital requirements and thus its own kinds of financial institutions. To
see what these may be, it is necessary to consider current trends in the handling
and transmission of money (the medium of exchange), stocks (the instruments of
equity), and bonds (the instruments of debt).

Money and its Equivalents

Among the early hunter-gatherers, the medium of exchange was generally
something that had intrinsic value. That is, it was valuable because it was useful in
itself. Beaver pelts, buffalo hides, and flints are examples of intrinsically valuable
exchange media. Because value was tied to use, the rates of exchange fluctuated
with both the supply and the perceived need of the valued object but were capable
of long-term stability--and prices were as likely to go up as down. In late hunter-
gatherer and early agricultural societies, the medium of exchange shifted to objects
with symbolic or decorative value, such as pieces of metal. These also gain their
status because they are used, but the use tends to be by the upper classes and for
ornamentation rather than being universal and necessary. When such use is
widespread enough, the state eventually organizes the minting and distribution of
coins in order to provide a guarantee of weight and purity and to lend some
authority to the practice of using such coins for settling all transactions involving
goods and services. Thus, money passes from being an object of concrete value to
that of an abstraction known as "legal tender." As is often the case, the abstraction
that no one needs to think about is more useful, more universal, and has a wider
scope than the concrete object it represents.
The metal in question, such as gold, is still often said to have an intrinsic
value, but this is only true to the extent of its direct usefulness--already inherent in
such a medium of exchange is a considerable degree of abstraction from tangible
wealth. As the economy grows in complexity, the supply of metals with symbolic
value eventually becomes inadequate to serve the needs of industrial capitalization,
and a second abstraction takes place, this time to paper money. A certain level of
technology is required--advanced printing presses and an art of paper making and
engraving that can prevent counterfeiting--but this second abstraction is necessary
once the money supply reaches a certain size. At first, the fiction may be
maintained that gold backs the currency, but gold too is only an abstraction for real
wealth--which lies in the resources, productivity, inventiveness, and enterprise of a
people who are behind the value of their money. Eventually, the monetary
abstraction becomes more obvious when alongside the cash economy there grows
up the practice of writing notes or checks to cover larger settlements--for paper
money is also inadequate to meet all the needs of the industrial age. Once more, a
certain level of technology is needed to maintain an efficient and large-scale cheque
clearing system; beyond a certain level of use, it is next to impossible without
computing machinery.

Plastic Cards

Credit cards, a more automated equivalent of checks, are also impossible to
implement without sophisticated record keeping. The number and percentage of all
commercial transactions presently covered by credit cards is growing rapidly with
each passing year and exceeds those represented by the cash economy by a wide
margin. Debit cards in turn represent a slight refinement of this system, for they are
used at the retail level in an identical fashion to the credit card. The difference is
that the account to which the billings are posted is the user's bank account, not a
credit account. The user is directly debited for the transaction, and the writing of a
month-end check is eliminated. Meanwhile, cash itself is more easily obtained with a
card by using bank machines than by waiting for office hours and speaking with a
human teller. A third alternative is the smart card, which carries its own balance
electronically encoded and debits that whenever it is used for a purchase. All this
automation results in efficiency at the cost of some jobs. It also makes for more
mobile capital, for money can be transferred internationally as well as locally. It can
be moved instantly instead of with substantial delay. Both of these further
contribute to the ability to accumulate larger capital pools than ever before.
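The distinctions among these three cards reduce to where, and when, the debit
is recorded. A schematic sketch in Python (the classes are hypothetical and
ignore networks, authorization, and security entirely) captures the difference:

    class CreditCard:
        # Purchases accumulate in a credit account, settled by the
        # user at month's end.
        def __init__(self):
            self.owed = 0
        def purchase(self, amount):
            self.owed += amount

    class DebitCard:
        # Purchases post immediately against the user's bank account.
        def __init__(self, bank_balance):
            self.bank_balance = bank_balance
        def purchase(self, amount):
            if amount > self.bank_balance:
                raise ValueError("insufficient funds")
            self.bank_balance -= amount

    class SmartCard:
        # The card itself carries an electronically encoded balance
        # and is debited directly at each purchase.
        def __init__(self, loaded):
            self.stored = loaded
        def purchase(self, amount):
            if amount > self.stored:
                raise ValueError("card balance too low")
            self.stored -= amount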

Taking the Next Step--Abstracting Money Altogether

Another level of abstraction has already taken over for a significant portion of
transactions. This is the electronic transfer of funds. Here, there is no paper trail
whatsoever, just a message sent from the machine managing one account to the
one managing the other. Once the two are agreed on security measures (correct
password and electronic signature) the one account is credited and the other
debited, with an electronic record of the transaction entered into files at both ends.
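In outline, such a transfer can be sketched in a few lines of Python. This is
a toy model only: the accounts, the shared secret standing in for password and
electronic signature, and the logs are all hypothetical, and a real system
would add clearing, cryptography, and atomicity guarantees.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Account:
        owner: str
        secret: str          # stands in for password/electronic signature
        balance: int = 0     # in cents, avoiding floating-point money
        log: list = field(default_factory=list)

    def transfer(src: Account, dst: Account, amount: int, credential: str) -> bool:
        # Verify the agreed security measure, then debit one account,
        # credit the other, and record the transaction at both ends.
        if credential != src.secret or amount <= 0 or src.balance < amount:
            return False
        stamp = datetime.now(timezone.utc).isoformat()
        src.balance -= amount
        dst.balance += amount
        src.log.append((stamp, f"-{amount} to {dst.owner}"))
        dst.log.append((stamp, f"+{amount} from {src.owner}"))
        return True

    a = Account("Alice", secret="s1", balance=10_000)
    b = Account("Bob", secret="s2")
    assert transfer(a, b, 2_500, credential="s1")
    print(a.balance, b.balance)      # 7500 2500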
The end result could well be a cashless economy--there isn't enough of it
anyway--with automated record keeping of salaries, purchases, services, and taxes.
Such a system has the advantages of efficiency and accuracy, and as Ellul points
out, such considerations are often decisive. Whether the potential for the loss of
personal control, freedom, and privacy is a price that people are willing to pay for
such efficiencies remains to be seen. If enough are convinced that such a trade-off
is worthwhile, however, the rest will not have the choice of remaining in a cash
economy, because in that event, the latter would disappear. Black markets, criminal
activities, and other economic undergrounds will all have to find other means than
cash to keep track of their enterprises. However controversial the cashless society is
to some, it may come about so gradually that its advent is scarcely noticed. Even
now its greatest critics generally use checks if not credit cards, rather than cash.
This development in itself is not at issue; it is the potential use of financial records
for control and direction of individuals by the state that creates ethical issues. Those
issues will be considered in more detail in the next chapter; it is time to move on to
other financing means.

Stocks and Bonds

Money, however abstracted, represents services, whereas the instruments of
equity and debt represent collective ownership and financing. These, like paper
money, are abstractions for wealth, but the wealth represented, while of monetary
value, is of a different and less tangible sort. The value of an industrial-age
company lies only partially in its assets of money, land, machines, buildings, and
inventory. Value is also found in the skills of the employees, the company's ability to
capture and hold market share or good will, its efficiency, and its ability to change.
It is these latter assets that are the most important in an information and service
economy. In fact, if key employees depart, the good will of customers evaporates,
or the company cannot adapt to changing markets, the machines and inventory
may be worthless. How all these values are perceived is reflected in the trading
patterns of its shares and bonds, which move up and down with the willingness of
people to buy and sell.
In a sense, the currency of a country reflects the same kinds of perceived
values for the nation as a whole, as it also trades up and down on the market. The
flow of information affects such trading, and the assumption of the currency
marketplace is that when information availability is perfect, the trading price will
reflect this. On the other hand, when information is free to flow in near zero time,
the volatility in such markets may increase, especially if only part of the system is
automated, and other parts have a restricted information flow.
The existence of both speculators and options to purchase currency, bonds,
and stocks at a later date adds to this volatility, especially when the volume of
trades rises and the current prices do not immediately reflect the actual information.
This occurs when the transactions are executed more slowly than information is
acquired. If the information is negative, the price of the option may drop faster than
that of the actual stock or currency, whereupon dealers will sell the underlying
instrument and buy the option. This selling should in theory immediately drive down
the price of the actual stock or currency, but if volume is high and execution time
lags, the gap may get larger and the activity accelerate. When the buy-and-sell
decisions are made by computers, this acceleration can cause enormous gaps and
very high volumes to develop, resulting in dramatic price declines. Prices can rise in
a similar fashion if the reverse situation takes place. Market volatility (or at least its
appearance) can be exacerbated when an index is based on capitalization and one
company's valuation dominates the entire index, such as Nortel did on the TSE
before its collapse.
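The arithmetic of a capitalization-weighted index shows how this can happen.
In the hypothetical five-stock market sketched below in Python (all figures
invented for illustration), one company holds most of the capitalization, so
its fall alone nearly halves the index even though the other four prices never
move:

    def cap_weighted_index(prices, shares, base_cap):
        # Index level proportional to total market capitalization.
        cap = sum(p * s for p, s in zip(prices, shares))
        return 100 * cap / base_cap

    shares = [4000, 100, 100, 100, 100]      # stock 0 dominates
    prices = [100.0, 50.0, 40.0, 30.0, 20.0]
    base = sum(p * s for p, s in zip(prices, shares))

    prices[0] = 50.0                         # the dominant stock halves
    print(f"{cap_weighted_index(prices, shares, base):.1f}")   # about 51.7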
Stock markets have therefore been forced to automate in the same way as
some investors have, so that all orders are executed immediately by computers and
no one part of the system runs out of control, as it has done at times past under
only partial automation. Eventually, this may mean that brokers and floor traders
will be unnecessary, for there would be no reason why individuals could not buy or
sell shares in their own names without professional intervention. There is sure to be
no shortage of professionals offering advice, however, so the people working in the
investment industry today may only be slightly displaced.
Once this point is reached, there would be no reason to maintain trading on a
100-share or 1000-share block basis as traditional rules require. Individuals would
be able to execute buy-and-sell orders on a per share basis directly from their own
homes and offices with the centralized trading facility. Indeed, equity and debt
could be expressed in any fraction of a single stock or bond as long as computing
facilities are large enough to keep the records. A section of the Metalibrary could
eventually be devoted to this function. Because market participation would become
so easy, it could be almost universal. Indeed, mutual funds and the ability to day
trade have already taken us partway down these roads. Professional fees could be
paid in share fractions of the company, and the payee could order those to be
automatically converted to other shares, bonds, or uncommitted reserves--
anachronistically known as "cash." Again, there are advantages implied in such
abilities for individuals, organizations, and even for society as a whole.
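A sketch of such a registry is simple enough, record-keeping capacity aside.
The toy Python ledger below (the owners and the "ARJ" symbol are hypothetical)
uses exact rational arithmetic so that equity can be held and traded in any
fraction of a single share:

    from collections import defaultdict
    from fractions import Fraction

    holdings = defaultdict(Fraction)   # (owner, symbol) -> fractional shares

    def trade(buyer, seller, symbol, qty):
        # Move an arbitrary fraction of a share between owners.
        assert holdings[(seller, symbol)] >= qty, "seller lacks the shares"
        holdings[(seller, symbol)] -= qty
        holdings[(buyer, symbol)] += qty

    holdings[("founder", "ARJ")] = Fraction(1000)
    trade("alice", "founder", "ARJ", Fraction(1, 8))     # an eighth of a share
    trade("bob", "founder", "ARJ", Fraction(3, 1000))    # 0.003 of a share
    print(holdings[("alice", "ARJ")], holdings[("bob", "ARJ")])   # 1/8 3/1000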
Individual professionals could incorporate as personal services companies and
might pay some bills in shares of themselves. The advice of counsellors and
investment specialists--all stored on line in a subsection of the Metalibrary--would
become sought after and extensively used (electronically). One's investment
portfolio could be intensely personalized and entirely tailored to individual desires.
Of course, if everyone were an incorporated entity, there would be a tendency to
abstract the value of a person by the value of his or her equity instrument. Greater
worth as a person might be attached to an individual whose shares traded at $100
than at $1. However, people already do this today when they ask "how much is she
worth?" when they mean "how many possessions has she at the moment?" Still, a
universal investment registry such as this would allow anyone to attempt to finance
projects by equity or debt--provided it was possible to convince enough individual
investors to subscribe. Bankruptcy would remain possible, with all participants
taking part of the loss. Just as is the case now, there will probably be both the
"doers" and the financiers, but there may well be more of both.
The same broadening of investment instruments would make the financing of
large projects possible. State involvement could add some guarantees to such
undertakings, making them more attractive opportunities than they might otherwise
be. Such financings could be undertaken without wholesale discounts to middlemen,
for they could be advertised directly to consumers, who could purchase them like
loaves of bread, boxes of cereal, or a new dress.

Banks and Credit Unions

While the size of large banks has, as expected, grown in the late 1990s with a
number of high profile mergers, many people are not comfortable with this trend.
Thus, while large business and large banks have formed a natural alliance, so have
individuals and small businesses with credit unions. Once again the collectivizing
and individualizing trends can be seen--this time defining who puts their money into
or borrows from what kind of institution. If the trend to large bank mergers and the
proliferation of small to medium size Credit Unions continues, it would not be
difficult to predict that banks will eventually see very little of the retail money trade
in personal loans and mortgages, and will instead confine themselves to the
wholesale or commercial trade. However, there is no reason to suppose that the
formation of temporary project-oriented pools of capital would not be just as good
an idea as the same kind of flexible working partnerships.

Controlling Enterprise

The scenarios of the previous section may not come to pass exactly as
described here; but whatever the means, there will have to be broader participation
in debt and equity as the working population becomes more professionalized,
corporate ownership broadens, and the size of the largest projects increases. Along
with ownership comes responsibility and a degree of power. At present, voting by
shareholders of a company is done mostly by proxies signed over to one or more
representatives to a firm's annual general meeting, at which time a board of
directors is elected. They in turn hire the operating officers and seldom intervene
otherwise in the everyday affairs of the company. Except in the rare cases where a
large shareholder votes herself into the directorship or holds a key operating
position, corporate owners are usually removed two or three steps from actual
control. For practical purposes, small shareholders of large enterprises, even if they
may collectively own 80 percent, can be ignored by the operating officers; there is
no means for such small fry to influence company decisions. They also have no
channel for communicating their dissatisfaction to, or even locating, each other and
thus cannot organize their dissent.
If corporate information were disseminated and voting collected through the
Metalibrary or a similar facility, things could change dramatically. Such a facility
offers broad possibilities for the formation of alliances, the developing of voting
strategies, and the influencing of decisions. Although many shareholders might still
elect to automatically turn over their proxies to the directors, dissident shareholders
would more easily be able to organize an opposition, combine interests, and force
changes. A larger percentage of the outstanding shares would be required to
maintain effective control.
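The arithmetic of proxies makes the point plain. In the tally sketched below in
Python (all numbers hypothetical), shares that are never voted default to the
board as proxies, so the same ownership produces opposite outcomes depending
only on whether the small holders can find one another:

    def tally(votes, outstanding):
        # Unvoted shares are treated as proxies assigned to the board.
        unvoted = outstanding - sum(votes.values())
        return votes.get("board", 0) + unvoted, votes.get("dissent", 0)

    # The board bloc holds 20 of 100 shares; small holders own the rest.
    print(tally({"board": 20, "dissent": 5}, outstanding=100))   # (95, 5)
    # Networked small holders vote as a bloc instead of staying silent.
    print(tally({"board": 20, "dissent": 60}, outstanding=100))  # (40, 60)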
Of course, this innovation, like the cashless society, would not necessarily
come about just because it is efficient, desirable, or possible. Social response to
such technologies is highly unpredictable, and there are certain to be people who
would want to prevent these developments or to lead real events in different and
unforeseen directions. However, even a limited implementation of these ideas in
which only some people participate would still enable some of the economic
activities described to take place.
Because of the high value people tend to place on economic interests that
directly affect them, any scheme for personal involvement in corporate affairs is
likely to draw interests and loyalties away from the state and toward the company
or group of companies in which they participate. Such ventures will not need to
respect geographic boundaries, and eventually some may not even be based on
Earth. The result may be a gradual diffusion and internationalization of economic
power and interest. The potential loss of individual freedom implied by joining a
completely centralized record-keeping facility may therefore be offset by at least a
perception of increased individual power over the international economy, a greater
ability to raise capital, and a possible shift of power away from the state and to the
professionally owned corporations. Such developments are not necessarily good or
bad in themselves. There are some alarming potentials here, just as there are some
interesting potential benefits, and such potentials lead naturally to the discussions
of the next section.

8.6 Business Ethics and Technology


Perhaps more than in any other field of human endeavor, success in business
is quantified in terms of a single outcome--profit. As a result, there is always
pressure to cheat the customer or trading partner by failing to disclose information
in order to improve the monetary return. Business people, whether honest or not,
have always been regarded with some suspicion, and every treatise on morals has
had something specific to say about the ethics of doing business.

Cheating

For instance, the New Testament Greek word that is usually translated as
"sincere" literally means "without wax" and refers to the practice of holding pottery
up to the sun to determine whether a dishonest peddler had filled cracks with wax
to make the pot appear sound when it was not. Every society that has used coins as
a medium of exchange has had difficulties with those who debased the coinage
either by shaving off a portion or by mixing inferior metals with the "precious" ones.
Suppliers to the military have been notorious throughout history for cheating the
state. The Roman army, the British Navy, and the United States Marines have all
had to deal with high-priced but cheaply made goods, with rotting food supplies,
with late or missing deliveries, overbilling, kickbacks, bribes to commanding
officers, clothing that fell apart, and weapons that did not work.
Modern consumers face the same kind of problems with misleading or false
advertising, industry cartels or monopolies, the selling of used goods as new, and
such high-pressure sales techniques as the "loss leader" or "bait and switch." They
must also deal with goods that do not work or fall apart but carry no warranty, or
whose manufacturer goes out of business rather than repair the item, replace it, or
even deliver it in the first place. The scam may even involve advertising goods that
do not exist, collecting some money, and leaving town. Nor are software goods
immune, for there is no means other than the collective will of the market to enforce their quality, and in such a rapidly changing industry, there are no effective
guarantees that they will even work. Yet, large vendors have been successful selling
badly flawed goods, even though the target audience is able to become informed of those flaws.
In most sectors of the economy, such problems arise with only a small
percentage of all commercial transactions, and only over a short time, for if they
were pervasive or enduring, there would be no economy to speak of. When they do
arise, they attract considerable publicity for the same reason. Like any social
institution, trade and commerce can exist at all only if the people can trust them to
be reliable; the instances where they cannot be trusted tend to undermine the
entire economy and society. The software industry is so immature that its
customers have so far been willing to put up with inconveniences they would not
tolerate from any other group of manufacturers, but vendors cannot count on this situation continuing. Its customers will surely begin to demand the same kind of functional warranties for software as they do for waffle irons. If its workers can move to a more professional model, they may simultaneously be more able and willing to deliver such guarantees.

Unbridled Competition

Less publicly obvious but equally problematic are the practices of cutthroat
competition intended to damage or destroy other businesses in the same field. In
the name of such competition, product or business plans may be stolen, employees
bribed or lured over to the competition, merchandise duplicated despite copyright
or patent, and damaging information circulated about competitors. Alternately, a
company with a large enough bank account might force competitors out of business
by selling at a loss in the targeted marketplace, buying off the alternatives, or using
market muscle to force vendors to carry, display, or bundle only the dominant
product.
To some extent, these difficulties stem, on the one hand, from concentration
on market share and profit as the exclusive measures of business success and, on
the other hand, from the frequent separation of any sense of ethical behavior from
the world of business. The prevalent world view of the industrial age has been that
business is done in a jungle, that it is necessary to destroy competitors before one
gets the same treatment from them, and that any means justifies the end of higher
profits. For the most part, absolutist ethics had no place in business because an
action that maximized profits in one situation might not in another. A business
person might seriously claim to hold to an absolutist ethic as part of a religion but
live a business life as though mutual responsibility and ethical obligations were
outdated or did not even exist--and do this without seeing any contradiction. A
common slogan has been "business is business." The implication is that it carries its
own rules, whatever ethical code the doer of business might assent to in all other aspects of life. In short, such a world view is focused on money, not on the actions or
process by which it is obtained. It is also consistent with a kind of economic
Darwinism--only the profitable survive to make more money.
However, such attitudes and practices contain the seeds of their own future
destruction, for they can effectively help achieve the desired ends only if they are
kept secret. Consider the White House whose dirty tricks become exposed, the
stockbroker caught stealing information and manipulating prices, the chip
manufacturer whose FPU cannot do proper arithmetic, or the soft-drink company
that continues to market bottles it knows will explode when tipped over. The
common thread in all such events is information release. Automobile companies can
no longer cover up their release of a lemon, and a software manufacturer whose
newly released product still contains bugs will have the fact trumpeted on electronic
bulletin boards all over the world within days.
What could once be effective only by being kept secret can no longer be
hidden in the information age, and is therefore much less likely to work. Too many
people have too much access to information and there are too many disgruntled
employees ready to blow the whistle for any modern corporate executive to
suppose that damaging information or questionable business practices can be
withheld. Such whistle-blowers might not only become celebrities, but also sell their
memoirs for a tidy sum, and have the protection of the law to keep their jobs
besides. There are too many reporters with the time, patience, and computerized
access to government and other public files and a desire to win a Pulitzer prize.
Moreover, board members can now be held personally liable for the companies they
direct, and this gives them a direct stake in doing what is, if not broadly right, at
least narrowly legal. The net result is that the past can seldom be hidden and the
present likewise is becoming an open book. While not everyone will necessarily care
about the deleterious information, and it may be blunted by effective advertising, it
is usually available.

Honesty Revisited

The model of universal information availability therefore forces a reexamination of business practices. In the process, it could be discovered that honesty in business may be not only the best policy but also the only one that a fully informed public will accept. A tycoon of the industrial age could sometimes
oppress the work force, ignore safety standards, bribe officials, lie about the
competition, create a monopoly, produce shoddy goods, and overcharge the public.
All this could escape condemnation or even notice if the tycoon could
simultaneously pose as a god-fearing, church-going philanthropist and benefactor. It
would be impossible in an ideal information age to operate this way, because such
practices would quickly become common knowledge.
In addition, the perceived failure of moral relativism to provide society with
useful and consistent answers to ethical problems has opened the way for a
possible return to moral absolutism, and this is true in business as a part of the total
society. In the conventional mid-twentieth century view, right and wrong had little
meaning, so to have a different view of them in the various fragmented
compartments of life created no philosophical problems, even if it did create social
ones. In an absolutist view, right and wrong do have consistent meanings, and these meanings are true in all contexts, including business. Commerce is a part of life, and
governed by the same rules; it does not constitute a world of its own. Its leaders will
be expected to be privately and publicly clean, and it may become impossible to
pretend that one's personal morality is irrelevant to one's fitness to lead a
corporation in the public interest. Indeed, the corporation, which as a fictitious
person is accountable for the ownership of property and the making of commercial
transactions, will be held accountable for its moral integrity as well. Thus both
business leaders and their corporate entities are increasingly being held responsible
for the results of their decisions--in the workplace, the environment, the
marketplace, and the personal space where products are used.
This trend is seen in two developments: first, in the widespread interest in developing codes of ethical conduct for professions and conflict-of-interest guidelines for politicians, and second, in the proliferation of "ethical" mutual funds
targeted to those who wish guarantees that they are investing in industries that do
not, say, pollute, produce tobacco or alcohol products, or traffic in weapons,
pesticides, or child labour. It is not that there is general agreement on which such
activities are or are not ethical; the interesting thing here is the revival of interest in
the idea of ethical behaviour--even in the absence of agreement on what it is and where it comes from.
Moreover, the private lives of business leaders are no longer seen as a thing
apart from their public ones, but as part of a seamless whole. Such things as insider
trading, bribery of officials, sexual exploitation of employees, false and misleading
advertising, bait-and-switch selling tactics, and the theft of secrets through
industrial espionage are all becoming harder to do and much harder to conceal from
public scrutiny. That scrutiny may not in itself change anything--whether the public
cares about such things is a separate issue--but the information will be available.
The public is paying closer attention to such arrangements as the "golden
parachutes" out of takeover situations by the senior officers of a company, to
insincerity in labor negotiations by both parties, to age and gender discrimination,
to advertising innuendos, to high executive compensation, and to predatory pricing
policies. In an information age, one might also suppose that there could be less
tolerance of strikes, because they could be seen as communication breakdowns and
therefore as violations of the reigning social paradigm.
Manufacturers are being held more accountable all the time for the effects of
the products they make. For example, the maker of a defective computer printer
that requires expensive repairs after a few months will not be able to hide behind
the legalese of a ninety-day warranty limitation but will be required to make good on
the product for a much longer period of time. Such openness was forced on the
automobile industry by consumer advocates in the 1970s and 1980s and it now
routinely conducts recalls for safety and other reasons that it would never have
considered in the 1960s. Likewise, tobacco companies have been successfully sued
for making, selling, and advertising products with reckless disregard for (and even
concealment of) the danger to consumers' lives. The very advertising that glorified
smoking as the epitome of the "good life" has been cited in lawsuits as deliberately
leading people to their deaths. The bottlers and sellers of alcoholic drinks may be held partially liable for the carnage on the highways caused by the use of their
products.
Such surprising (to a 1960s mindset) outcomes lead one to wonder whether the counsellors, politicians, child-rearing specialists, educators, and ethicists whose advice produces injurious social outcomes will also be sued for malpractice in the future. After all, if their product (bad advice) is found faulty, can they not be held
liable as well?
As a result of such considerations, many managers are already concluding
that profit now depends very much on at least a public perception of a business as
honest, and on the fact that the ordinary citizen has many means of finding out if it
is not. That is, the mutuality and dependency so often mentioned in this book
extend to the relationship between corporations and their customers. Ignoring that
mutuality of interests may have immediate and possibly devastating consequences
for a company. A seamless WYSIWYG ("What You See Is What You Get") is clearly an
aspect the kind of open society implied by universal information access. Honest
guarantees, fast and sure repairs, and a genuine atmosphere of serving the
customer are becoming the watchwords of business in the new age. The
seamlessness of all aspects of life will not only be reflected in this way; it will
require new attention to be paid to the education of children. After all, the child who
is taught to see nothing wrong with stealing from a neighbor's apple tree or a
classmate's desk is not likely to be honest with advertising and expense accounts as
an adult.
Thus, along with the information paradigm come other imperatives. Honest guarantees, fast and workmanlike repairs, and a genuine atmosphere of serving the customer may well have to become the watchwords of business in the new age. That is, caveat emptor is being replaced by caveat venditor. These trends could be
magnified by further shifts away from the selling of goods and toward the
provision of services, wherein the human element is even more important, for the
whole product and process is on view to the customer at all times. In sum, if the
industrial age created the consumer, the premise of the information age is that it
enlightens and empowers the consumer.
Neither will it be easy for industrial or business concerns to cover up their
wrongdoings or shortcomings in the future. For the sake of their own professional
reputations and personal integrity, their own employees will "blow the whistle"
much more frequently than in the past, and they will do so with the full protection of
both law and public opinion.

Another Possibility

Lest all this discussion seem impossibly optimistic, it is worth repeating that
another outcome of universal information availability is also possible--that no one will care about "damaging" facts. Perhaps the current attention to public and
corporate morality is a passing phenomenon, a novelty whose bloom will soon wear
off. What is more, the principle of holding manufacturers responsible for defective
goods could be carried so far that many companies become afraid to introduce new
products, lest they someday become the target of a liability suit for unforeseen
defects. Should such fears spread further, technological innovation, along with
much of the promise the information age seems to offer, could be thwarted. Or, innovation could shift to those parts of the world where product liability suits are less common.
On a more theoretical level, a society that has bought into a relativistic
morality can have its collective view of right and wrong changed very easily. This
could lead it to accept shoddy goods, corrupt or unfaithful politicians, and cheating
at business as though they were normal. That this is not at all far-fetched can be
seen by the radical changes in recent years in views on sexual morality. That is,
arbitrary or democratic ethics are clearly unstable--information availability could
result in anarchy as easily as it could result in demands for greater integrity.
Thus, while the information age promises to change society, it will not
necessarily do so in ways that are all for the better--people have to choose to act on
information before it has any effect, and they may decide to do nothing.

8.7 Summary and Further Discussion

Summary

The increasing complexity of economic activity through the various stages of civilization generates more sophisticated techniques for trade and commerce at
each step. In particular, money is an abstraction for value that has been
progressively refined from barter to furs to beads to metal to paper money and
cheques. It is fast becoming just an electronic entry.
Meanwhile, shifts in the characteristic technology alter the balance of
economic activity and power. Not only do old trades and professions vanish, but so
do industries, while even entire countries slide from center stage to the sidelines as
trading patterns change. In the information age, both industry and agriculture are
being transformed by the new models, and once again there will be corporate and
geographical winners and losers, in the relative sense at least.
Winners may include the electronics, communications, biochemical, habitat
construction and aerospace industries along with the Pacific Rim nations. Relative
losers include industrial activities such as making steel or machines and the
countries either most tied to such enterprises or the most closed to rapid change
and new ideas.
Both collectivizing and individualizing trends were noted in the shifts in
economic power, with larger than ever capital accumulations being made necessary
to finance some of the construction projects, and simultaneously better facilities for
individual participation in and control over the economy. Certain trade-offs of
privacy and freedom to gain such power were noted. A cashless society and more
flexible instruments of debt and equity were among the possible innovations discussed, though the actual social responses to enabling technology cannot be
reliably predicted.
The consequences of universal access to information on the conduct of
business were discussed, and two possible outcomes suggested--one that business
would be forced to conduct itself ethically, and the other that little change would
take place in the long run regardless of the amount of information available.

Discussion Questions

1. What would be the effect on a gold-based currency system of the discovery in one country of a gold deposit containing more than the entire world supply previously known?
2. When some governments run a deficit, rather than borrow money, they just
print more to pay the debts. What effect does this have on the price of goods and
on the exchange rate for that currency internationally?
3. Suppose a government were to tax information access (so much per byte of
information transferred on the net). What effect would this have on the progress of
the information society? Consider both economic and control implications.
4. Suppose that food production were to become an entirely robotized
industrial process with chemical and energy raw materials so that plant and animal
husbandry were no longer necessary. What economic effect would this have? How
would it change society?
5. Which is now more nearly correct--that consumer tastes determine which
products get manufactured, or that available technology determines this and then
advertising must be used to make consumer tastes match? Which way ought it to
be, and why? Who should enforce this?
6. Discuss the advantages and disadvantages of electronic shopping. Are
there things that will never be bought this way, or is it possible to eliminate most, if
not all, retail stores?
7. Discuss the advantages and potential dangers of the cashless society.
8. Attack or defend the suggestion that in the future economic and corporate
loyalties will be more important than political and geographical ones.
9. Suggest ways in which high technology can be used to assist the peoples of
underdeveloped nations without making their situation even worse.
10. If technology does allow the economy to be micromanaged, should it be?
What are the advantages and potential dangers?
11. Discuss the effect of standards in high-tech industries. Do they promote
economic activity, or do they smother innovation?
12. What are some specific ways in which high technology is favoring
economic development in the Pacific Rim countries?
13. Argue that in the long run the information age means that economic
prosperity will be independent of geography.
14. The role of Central Europe in the information age was scarcely mentioned
in this chapter. Do some research of your own and write a paper outlining the most
probable role that these nations will play in the future economy.
15. The text suggests that labor will be more professional and less adversarial
than in the past. Argue that this cannot and indeed should not be so.
16. Develop further the concept of paying for services in shares of the payer,
whether an individual or company. What practical, economic, and social
consequences would such a system have? Perhaps you could argue that it cannot
work or must not be allowed for ethical reasons.
17. Argue that a Metalibrary facility would make it harder, not easier, for the
ordinary citizen to have an economic influence.
18. Consider the proposition, "Business is business and ethics is ethics and
never the twain shall meet." Either attack or defend this statement (a) as an
observation on what now is the case and (b) as a statement of what ought to be the
case. A classroom debate on the subject could be organized.
19. Explore further the relationship between the openness of information and
the ethical behavior of the business world. Which is more likely--that business
conduct will improve or that there will be little change?
20. Is there a religious view of business ethics? Try to use examples from as
many religious and historical periods as possible.
21. Discuss the advantages and disadvantages of working from the home.
Consider the personal, social, economic, and ethical aspects. For instance, are such
workers more or less likely to be exploited?
22. Research the causes of the great depression of the 1930s and make your
own assessment of the likelihood of such a thing happening again in the foreseeable
future.
23. Which is more important--loyalty to one's company or the right of the
public to know about that company's activities? In particular, if you became privy to
potentially damaging information about your employer's pricing, advertising, or
other business practices, what would you do?
24. You work in a hospital and have a strong commitment to the patients and
to their community. You are also in a hospital employees' union and believe in the
importance of that community as well. After a protracted and unpleasant set of
negotiations breaks off, your union calls a strike, which you believe could have been
avoided if both sides had been more open and honest with each other. The union
says you cannot in conscience cross the picket line, but you have been taught all
your life that you cannot in conscience abandon your responsibilities. What do you
do?
25. Suppose instead you do not work at the hospital but are a member of
another union, and its leaders tell you that you must honor the hospital picket line.
What do you do if: (a) You break your leg and need hospitalization, and no other
facilities are available within forty kilometers? (b) You have a dying relative in the
hospital who wants you to visit her? Is there an essential difference between this
situation and a strike of teachers? Of industrial workers? If so, what is the
difference?
26. At the company you work for, software is routinely pirated. Your manager
buys one copy of a new word processor and expects you to put a copy on the
computer you are using. She as much as tells you that if you don't you can find
another job. What do you do, and why?
27. Research and report on several examples of high-tech companies that had
great success initially but eventually went bankrupt. What were the chief causes of
such failures?
28. Give additional examples of products that are sold today with little regard
for the consequences of their use, and detail the changes that need to be made as
people become more informed about such matters.
29. A chemical company makes a herbicide that has been shown to be
particularly effective on a plant from which a black-market drug is derived. The
government orders a large quantity of the herbicide intending to spray it
indiscriminately over remote areas of South American nations, with their
cooperation, in an attempt to reduce trafficking in the drug. You, as president of the
company, are concerned that eventually there may be damaging lawsuits against
your firm if this program goes ahead. Should you place the potential interests of
your shareholders first and refuse to sell the chemical, or should you place the
declared interests of the government first? What do you do, and why?
30. Several times in the chapter it was noted that people who have more
information about unethical activities in the political or economic sphere might not
choose to act on their knowledge, but might instead change their view of what is
ethical or simply tolerate the activity. Research this further and either argue that it
is the civic duty of all persons to act on their ethical beliefs, or argue that it is their
civic duty to tolerate all ethical systems, regardless of how they differ from theirs. In
either case, be sure to say how the legal system is connected with society's sense
of ethics.
31. "Monopolies are good because they lead to standardization and
efficiency." Argue for or against this position.
32. Research innovation and monopolistic practice in the computer industry
and describe the relationship between the two.
33. The last section of the chapter for the most part argued that a
consequence of the information paradigm would be better warranties, higher quality
goods, and more open and honest business. Argue that this is not the case and that
the opposite result will occur.
34. Suggest a "teraproject" of your own and outline how it could be organized
and financed.
35. Which is more likely and why--that the general population will demand
that business return to a more absolutist ethic, or that people will simply ignore
behaviour once thought unethical?
36. Select a particular type of business or professional activity and write a
code of ethics for it. Defend your points, saying what is the basis for each.
37. The author suggests a trend from hierarchicalism to networking to flexing
to professional/contracting models for organizing enterprise. Argue that he is wrong,
and rigid hierarchicalism is the best business model.
38. What measures other than profit might determine the efficiency of the techniques of an economic enterprise? Argue that one of them is better.

Bibliography

Braybrooke, David. Ethics in the World of Business. Totowa, NJ: Rowman and
Allanhold, 1983.
Chewning, Richard C. Business Ethics in a Changing Culture. Richmond, VA: Robert F. Dame, 1983.
DeGeorge, Richard T. Business Ethics. New York: Macmillan, 1982.
Donaldson, Thomas, and Werhane, Patricia H. (eds.). Ethical Issues in
Business--A Philosophical Approach. Englewood Cliffs, NJ: Prentice-Hall, 1979.
Ellul, Jacques. Money and Power. Downers Grove, IL: InterVarsity Press, 1984.
Ezorsky, Gertrude (ed.). Moral Rights in the Workplace. New York: SUNY Press,
1987.
Glos, Raymond E., et al. Business--Its Nature and Environment, An Introduction. 9th ed. Cincinnati: South-Western, 1980.
Hess, J. Daniel. Ethics in Business and Labor. Scottdale, PA: Herald Press, 1977.
Hirschmeier, J., and Yui, T. The Development of Japanese Business. 2nd ed. London: Allen & Unwin Ltd., 1981.
Hosmer, LaRue Tone. The Ethics of Management. Homewood, IL: Irwin, 1987.
Lewis, Ted. "Why the Economy Is So Good." IEEE Computer, May 1998, pp. 110-112.
Naisbitt, John, and Aburdene, Patricia. Re-inventing the Corporation. New
York: Warner, 1985.
Naisbitt, John. Megatrends. New York: Warner, 1984.
Pemberton, Prentiss L., and Finn, Daniel Rush. Toward a Christian Economic
Ethic--Stewardship and Social Power. Minneapolis, MN: Winston, 1985.
Westin, Alan F. Whistle Blowing! Loyalty and Dissent in the Corporation. New
York: McGraw-Hill, 1981.
Williams, Oliver, and Houck, John (eds.). The Judeo-Christian Vision and the Modern Corporation. South Bend, IN: University of Notre Dame Press, 1982.

Internet Resources:

MacDonald, Chris <chrismac@ethics.ubc.ca>: <http://www.ethics.ubc.ca/resources/business/>

Chapter 9
Technology, the State, and the Law
Seminar - "Computers, the Law and Rights"
9.1 Foundations for Law and the State
9.2 Technology and the State--Big Brother and Little Brother
9.3 Technology and War
9.3.1 Confrontation Outcome Scenarios
9.3.2 Other Kinds of War
9.3.3 Policing a Peace
9.3.4 Summary
9.4 Some Legal Problems for the Next Age
9.4.1 An Overview of Current Problems
9.4.2 The Present Law
9.4.3 The Insufficiency of Law Alone
9.4.4 Some Proposed Remedies
9.4.5 Summary
9.5 Technocrime
9.6 Ethics, the Law, and the State
9.7 Solving the Problem
9.8 New State and Legal Forms for the New Age
9.9 Summary and Further Discussion

9.1 Foundations for Law and the State


The network of relationships among people that is termed society is not
simply a random affair; it requires organization and supervision. To achieve this,
there have developed the related institutions of the law and the state. For its part,
law exists to codify the moral/ethical consensus of a society and to provide the
"ought" with some force to back up the "shall". It prevents citizens from harming or
exploiting each other and protects the nation itself from those who would destroy it
from within or without. It regulates behaviour in personal and commercial ways and
punishes transgressors for the benefit of society as a whole.
In times past, law's presumption was that it flowed from the emperor or king
who gave it either benevolently or despotically to protect the royal regime and its
peoples. Later--in particular because of the Magna Carta and to a lesser extent
because of Irish Brehon law--the presumption became lex rex, that is, that law rules
over all, including the monarch. This principle came to underlie the laws of modern
Western democracies. Even more fundamental is the assumption, applied in such
situations as war crimes trials and international trade cases, that there is such a
thing as justice in an absolute and abstract sense--even when it may be missing
from the laws of one or more nations. This concept is not exclusively due, even in
the West, to Judeo-Christian principles being incorporated into law but was also
present in Greek philosophic thought.
Another assumption is that the people and institutions covered by the law
agree to be regulated. Even if the ethical consensus is cultural rather than religious,
there must be one, for if the law is not either respected as an authority in itself or as
deriving from another (higher) source, it will not be obeyed without the application
of force (which itself is a higher appeal). The best laws in a democracy are those in
which the citizens perceive they have a personal stake, and are therefore willing to
obey even if they may also have reasons to dislike them. Even better are those laws
with which few citizens will ever come into conflict because their acceptance is so
universal. This presumes, however, that the citizens take an active role in exercising
and so maintaining their democratic freedoms, or they will surely lose them.
It is also worth observing that because law itself depends upon and derives
from the ethical consensus of society, the law's direct impact on society is
secondary to that of the underlying values. To put it another way, law does not so
much change behaviour as it regulates the exceptions to what the moral consensus
regards as acceptable behaviour. This dependence of law upon ethics, and the
simultaneous assumption that there exists an absolute justice, mean that law
assumes the existence of an absolute ethic from which it may derive justice. Thus,
there are fundamental conflicts between the assumptions of relativistic moral
theories and those of law, and these are difficult to reconcile without changing one
or the other. For if morality is relative, so is the law that is based upon it, and there is
no abstract justice, nor is there any means of deriving or enforcing national or
international law. For example, there would have been no case to make against the
Nazi war criminals or modern terrorists under a strictly relativistic law, and this
illustration alone is a powerful argument for a form of absolutism in both realms.
Moreover, without a pervasive and compelling ethical consensus, society cannot
survive the high-tech empowerment of many unrestrained individuals with the
ability to destroy everything.
Even if the influence of law upon society is neither direct nor immediate, the
law does touch and is touched upon by technology. For instance, the Romans had
great success with their legal system because they viewed it as a management
technique for an orderly society, and so long as they continued to do so, their law
proved very durable. The extensive ocean-based trading systems of the European
imperial powers enabled by ship-building technology led to a system of maritime
law. The industrial society had to develop laws to regulate all its technologies in
order to protect its users and the society at large. Today, the ability to store and
compare fingerprints, retinal patterns, and DNA records has become critical to the
resolution of many criminal cases, and has potential to partially restore confidence
in the modern legal system.
The state, on the other hand, is not only involved in both law and technology,
but also affects its citizens' lives directly and immediately in these and many other
areas. The state has two principal reasons for its existence. One is to devise and
enforce laws of all kinds to promote orderly behaviour and commerce, through
which it serves to institutionalize the economic life of the community. The other is to
provide leadership for the people being governed--to represent them to other
peoples; to have its leaders act as role models for virtuous behaviour; and to plot a
course for the future by making choices for resource allocation. Because the body of law is large, and tends to develop slowly, the state's impact for change can be much
greater in the leadership role than in the legislative.
Every state must also convince its people of its ongoing right to rule them, for
unless they are so persuaded, they will eventually destroy that state and bring
forward a new state and new leaders. This appeal for legitimacy can be made by force,
and often has been. Indeed, the overwhelming experience of the majority of the
Earth's peoples has been government by tyranny. Sometimes, philosophical or
theological foundations for the state have been given, such as the "divine right of
kings" or Plato's "philosopher-king" concept. There have almost always been
alliances of religion and state in the business of governing, and these have added
an authority to the state that it has not always been able to find on its own. It is
interesting to observe in this connection that the Bible itself, though cited as an
authority by many Western states through the centuries, makes no statements
whatever about an ideal form of human government--only about the duties of
citizens in the state and the obligations of leaders to their people. Its teachings on
universal justice, human dignity, and the equality of all persons have, however,
been a powerful indirect influence in the development of democratic forms of
government.
The modern secular state has no exemption from this need to appeal for its
legitimacy to higher authority of some kind. In today's totalitarian states of the left
or the right, xenophobic nationalism or statism takes over the role of religion in this
respect. The Caesars were quite consistent in viewing the Christians as a threat to
their state--the idea that there was a higher authority than their own persons was indeed
subversive. Likewise today, the current leader may be honoured, praised and
paraded in ways not particularly distinguishable from the worship of a deity, except
in terminology. Totalitarianism may attempt to establish the state itself as the
highest authority, but it is more common to appeal to other loyalties as higher still--
the party, the cause, the revolution, or an assortment of political dogmas are
common choices. These all share a vagueness that must be supplemented with a
daily dose of propaganda to keep citizens loyal. This can work quite well if other
information flows are restricted or can be overwhelmed, for lack of substance is no
more an obstacle to a clever marketer in the political realm than in the economic
one.
In the modern democracies, personalities are also emphasized, but the
objects of such adoration are more likely to be entertainers or athletes than
politicians. The original operating philosophy of the democratic secular state was
that its people could collectively and more or less infallibly determine the will of God
and/or what was absolutely right, then express these in action. Instead of the "might
makes right" of tyranny, democracy proclaimed "majorities discern right". The
fundamental assumption was that most of the people, at least when voting, would
collectively know what is best and act upon it. That is, democracy is founded on
something more than the whims of the electorate and the manipulation of these by
clever advertising or demagoguery. There is an assumption built into the very fabric
of the democratic idea that a generalized or universal morality or ethical consensus
exists and that the people can be trusted both to know what this is and to reflect
that knowledge consistently in their actions. Should this notion be lost, say if the prevailing view became that the state serves neither a deity nor abstract principles,
but simply the current will of the majority, democracy would be in as much jeopardy
as if its peoples simply failed to exercise its practice.
All this is not to say that propaganda and other emotional appeals are not
employed in democracies, for they most certainly are. However, they are used more
in the service of political or entertainment personalities who are seeking public
acceptance than by the state itself. The legitimacy of the democratic state is more
or less assumed by those involved, though this was not always so. Such a state
does not usually require propaganda to support its existence, though it may use it to sway
public opinion to its policies--at least to the point of making people believe they
have been consulted, and are therefore part of the enterprise. Moreover, political
parties tend to abuse power by using state funds to trumpet their virtues in the
effort to get re-elected, but this is an aberration in an otherwise cleaner record. The
democratic state is fundamentally dependent upon a mutual trust in the collective
rightness of its electorate and in the integrity of its leadership, not on propaganda.
The notion that there exists a knowable "right" is therefore implicit to Western
democracy. It depends for its very existence on an absolutist ethic, for it claims to
be absolutely superior to tyranny. If the best ethic is relativistic or situationist, an
arbitrary form of government would have as much claim to legitimacy as does a
democracy--that is, neither would have any. Provided that the concepts of free speech, universal justice, and liberty are themselves absolutes with a higher moral authority than any form of government, that the people governed know these principles to be true, and that they act upon them, rule by the wishes of the majority is legitimate.
This establishes an interesting paradox, for the freedom to hold differing
views can exist only in a state whose chief absolute is that such freedom is
necessary, yet the genuine pluralism this implies could easily lead to an ethical
chaos that produces anarchy or tyranny and so destroys freedom. Likewise,
pluralism can easily slide into the notion that all idea systems--including the
absolute ones underlying democracy itself--are not just tolerable, but of equal value.
If this happens, the philosophical foundations of democracy evaporate. The
democratic state is, therefore, a fragile institution--always open to the possibility of
being persuaded that its values are only relative and of being taken over by an
arbitrary and tyrannical ruler--yet always needing to remain confident enough of its
fundamental absolutes to give even those greatest enemies a free platform to
attack it verbally.
What is more, as the World Trade Center incident shows, democracy's
enemies can also use those freedoms to assemble physical attacks without much
fear of discovery until it is too late.
How can this confidence be maintained in the future? As shall be seen,
technology may have something to do with it, for it opens up some interesting
possibilities for both types of state.

9.2 Technology and the State--Big Brother and Little Brother


It was observed in Chapter 1 that technology development is of critical
importance in understanding changes in society. The same is true of the state--the
governing institution of society--for it is the character and actions of the state that ultimately will write its peoples' history and affect the neighbouring states both in
place and time.
Reviewing the development of the various stages of civilization, it is easy to
see that the state has grown in size and complexity with the numbers of people it
governs, with the technologies which it uses and administers, and with the economy
it seeks to manage. When transportation and communication were primitive, the
state was a localized affair. The strongest or most successful individual in a small
area could rule or influence it solely for personal enrichment or benefit. Even
hunter-gatherers were not so busy surviving that they could not mount excursions
to other territories, waging war to control land, food, animals and slaves. There
were elaborate social structures developed in some such societies, and those of one
of the most successful--the Eastern North American natives' "Six Nations"--had a
direct influence on the constitution of the United States itself.
Agricultural societies had developed a variety of kingdoms, empires and even
democracies by the time of Christ--some of them far more successful than others.
The success of each was in direct proportion to its ability to use the best available
technology for transportation and communication. As the focus of these eventually
shifted from land to sea and from the Eastern Mediterranean and Orient to Western
Europe, and then to the American East, so did the centres of the successful states.
That is, the current centre of economic activity and expansion will invariably be the
centre of political influence for the period immediately following.
Throughout this long period, democracy has had only relatively short runs.
The Greek version was based on the votes of free and educated citizens who met
and debated the day's issues in public forum. It was limited to such city states for
three closely related reasons. First, democracy assumes an informed citizenry, and
therefore requires a means by which they can be kept informed. So long as they can
physically meet together and debate face to face, this can be achieved. However,
there are obvious physical limitations to such a system, and these ensured that it
never grew to encompass more than a small city. Second, democracy assumes an
informable (that is, educated) citizenry. The people who gather together to make
decisions must understand the issues fully and be able to explore the consequences of the courses of action they may take. Third, since the
state was closely allied to the economy, decision making was generally limited to
those with economic interests in the state. Such a democracy was possible,
therefore, only for those who had economic interests (i.e., not for women or slaves).
The greater the degree of economic participation by individuals, the more broadly
based was decision making power. Early forms of democracy, and not just the
Greek, always limited the franchise in some way--usually to free, adult, educated,
land-owning males, on the assumptions that everyone else was not informable, and
being unable to make informed decisions, should not be allowed to participate in
them at all. It was also assumed that not everyone had an economic interest, and so
could not be qualified to make economic decisions.
The industrial age caused some of these assumptions to be swept aside
permanently, for the workers in mines and factories had to be educated to some
extent in order to work the equipment and use the techniques. As the machines
became more sophisticated, and work shifted to white collar occupations, they had to become much more educated and then economically independent as well in
order not only to work but also to be consumers. From this starting position at the
agricultural/industrial transition point, the state variously followed one of two paths.
One reinforced the democracies that were simultaneously developing, and the other
resulted in new kinds of tyranny.

Technology and Big Brother

As Jacques Ellul argues in The Technological Society, the development of new techniques, and the necessity for applying them on a universal scale to achieve full
potential, results in an inevitable growth of the state, for it is the only agency
capable of satisfying the demand for such universal applications. This is true in
economic management, in education, in the provision of medical care, in
transportation, in communication, in the provision of consumer utilities, in law
enforcement, and in basic scientific and technological research. For one field after
another, the size, costs, and risk become too great for the private sector, so the
public sector first participates and eventually takes over.
Both communism and fascism came out of this milieu as attempts to achieve
total management of the citizenry through systematic application of appropriate
technologies. Both are industrial-age philosophies that require a citizenry
sufficiently educated to use technology, but sufficiently uninformed so as not to
question the state's decision-making powers. Fascism failed in Europe because of
necessity it had other items on its agenda--after all, an educated, but uninformed
citizenry must have its attention focused away from the business of the state lest it
become informed. Putting that attention onto the destruction of "enemies" is an
effective temporary measure, but either victory or defeat renders this technique
moot, and all states relying on it must therefore eventually fail. Thus Fascist states
are either defeated by external forces, or they suffer a revolution from within.
Neither of these necessarily relieves the tyranny, for it may only replace one set of
masters by another, and these may be far harsher. The fascist states that did
survive World War II, or were founded later, remained far from the world's economic
mainstream and away from public attention. In the 1970s and 1980s, many of these
went through peaceful revolutions and saw the establishment of more democratic
regimes, and by the 1990s there were relatively few fascist states or military
dictatorships remaining.
Radical egalitarianism, or communism, on the other hand, was, for a time,
more successful for two reasons. First, in theory, its declared enemies were poverty
and injustice, so it had genuine incentives to offer an oppressed people in
persuading them to exchange one form of tyranny for another. Second, it delivered
on its promises, at least to some extent, in part by acting socialistically according to
its stated philosophy, but also in part by pragmatically borrowing capital from the
rest of the industrial world and acting as a state capitalist. However inefficient this
may have been by Western standards, Russian statism did offer its peasants more
than did the old Czarist agrarian system. However, the very arena of its success
illustrates why the Russian system was never successfully exported to a country
where the industrial revolution had already taken place, even though Marx intended his ideas for industrialized societies. Simply put, industrial societies require
educated consumers, and an educated citizenry cannot forever be kept uninformed
by the state. Moreover, consumers have an economic role by definition--their
economic voice is eventually heard regardless of any attempt to suppress it. Neither
is everyone the same--people differ in intelligence, the willingness to learn or work,
and their aptitude for the ingredients of economic success. The state that demands
equal outcomes regardless of inputs can therefore achieve its goals only by
increasing its application of force to the citizenry until it results in either bankruptcy
or rebellion. Doctrinaire socialism will therefore always fail.
Ellul and others are therefore correct when they observe that the actual
application of Communism is essentially the same as Fascism--the two have
somewhat different philosophical roots, but their manifestations as states are not
particularly distinguishable by the average citizen. Losing one's life, property, or
freedom to enrich an elite, maintain an army, and support an all-encompassing
state does not appear to be different just because of the label attached by
politicians or academics to the philosophy behind the tyranny. It can be anticipated
therefore that the old Russian style statism is applicable only to relatively primitive
agrarian societies, or to those in shock trauma after a war, and not elsewhere. This
is due to the information revolution for two reasons: first, because the equally poor
of such a nation can see that their system does not work--those of other countries
have visibly more, and they want to be equal to them, not to each other; and second,
because informed citizens do not long tolerate any tyranny. Since even the people
of third world nations have greater access to information than ever before,
establishing state capitalist tyranny is becoming more difficult all the time.
When people were already under the unjust and repressive despotism of a
king, general, or dictator, they were loath to believe that their lot could become
any worse, and sometimes welcomed Marxism. Despite its consistent failures, its
credentials as a supposed liberator were not always examined closely, for potential
subjects often lacked both information access and democratic experience to realize
its true nature. Because of the prerequisite level of education and information
access present in an already industrialized nation, it is becoming less likely with the
passage of years that any nation will embrace communism overtly. It could get
much the same thing, however, if its people are not vigilant and they allow the state
to acquire similar power gradually and by default, or if the state appears to lose its
moral legitimacy and they demand a dictatorship to restore confidence.
Even the authoritarian state based on a genuine "big brother"--the god-like
larger-than-life glorious leader whose image is constantly held before the people--
cannot continue indefinitely in the information age. Even if the despot can prevent
other information sources from reaching her people, she must eventually die, or
perhaps a wide-ranging natural disaster such as a famine strikes. In either event,
the feet of the icon are eventually revealed to be of clay, and the illusion fails. At
this point the state either crumbles into anarchy, is taken over from the outside, or
the successors of the autocrat preside over a metamorphosis of the government
into some new form.
It is important to emphasize that the idea that it is possible to keep an
educated citizenry uninformed is self-contradictory and, in the medium to long run, self-destructive. Even in a purely technological age, such a state of affairs is
inherently unstable, and requires ever more frequent exertion of tight state control
over intellectuals to maintain--an activity that cannot be continued indefinitely
without robbing the state of its own future. The most damaging development to
such philosophies is the information revolution, for its advent has destroyed forever
the second of the two assumptions on which tyranny is based--for in an information
age the citizenry cannot be kept uninformed. The growing base of information about
the more affluent West was one of the major contributing factors in the demise of
the Soviet Union and its satellites in the early 1990s, and it will eventually prove the
downfall of all remaining dictatorships.
The case of the two Germanys is instructive in this respect. Ultimately it
proved impossible to keep half a nation under totalitarian dictatorship while the
other half was free. The Berlin Wall showed the failure of the Soviet doctrine. Once it was erected, it was only a matter of time before the eastern regime toppled, and a costly re-unification was begun. The two Koreas make a similar point. One is affluent,
thriving, a substantial player in the world economy. The oppressed North is
impoverished to the point of being destitute. In this case, the economic gulf has
become so wide, however, that even were the North to become free, the South may
want nothing to do with unification.

Little Brother

While all states have of necessity grown with the techniques they are called
upon to manage, democracy has also been able to adapt as it has grown. It is
important to realize that democracy depends utterly upon the informed electorate--
and modern means of communication are what have kept it informed. When
information transfer was slow, the franchise was limited and representative
democracy was both appropriate and reasonably efficient. The people's elected
representatives would meet and debate the issues and then report back to their
segment of the electorate who would then decide whether to keep them in office or
send someone else the next time. In such a system, representatives were elected in
order to become fully informed decision makers because the electorate could not.
They then had to explain the decisions they had made to the less well-informed
population at large. The electorate was essentially delegating the process of
assessing information, but only to an extent--they expected to be advised and
consulted by their representatives. The slowness of information transfer meant that
office holders had to be given terms of two to six years in order to ensure mature
deliberation and to allow them to be judged on more than a single issue. Meanwhile, as education and economic interests became universal, the once limited franchise
did as well, and the base of democracy broadened, first to former slaves, then to
women, and finally to some of the economically dependent through the lowering of the
voting age.
As the next era dawns, the electorate is better educated than ever, and much
more capable of fully informing themselves, rather than finding it necessary to
delegate this responsibility to representatives. Consequently, people now expect
more direct participation in the decision-making process. The information lag time in
political matters as in all others has virtually ceased to exist. An obscure Middle East
newspaper can publish a paragraph on an arms for hostages deal buried on an
inside page one day, and a thousand North American newspapers can pillory the
U.S. president over the contents the next day. The bedroom antics and drug habits
of each office seeker or holder constitute information available to every household,
and disaffected clerks, bodyguards, and secretaries can achieve media fame and
fortune by blowing the whistle on their political masters. In the new era, therefore,
nothing can be assumed to be a political or personal secret; citizens are fully
informable. Among other things, politicians can no longer assume that they have
any private life whatever, nor that their ethical standards can be substantially
different from those who elect them--without at least the latter's knowledge and
implicit consent. This does not force politicians to adopt an absolute code of
morality, but it does require them not to deviate much from popular standards.
Another consequence of this information availability is a growing desire to
participate directly in decision making rather than simply send representatives to do
it. Thus far, as Naisbitt details in Megatrends, this has resulted in greater interest in
state or provincial, local, and community affairs, for this is where citizens have
direct access to public forums and referenda and thus an immediate influence.
There has been a corresponding decrease in interest and voter turnout where
national politics are concerned, for these so far present fewer opportunities for such
direct participation, and lacking this, people are becoming increasingly alienated
from the central government--a dangerous state of affairs in geographically large
nations such as Canada and the United States.
In the medium-to-long term, the nature of Western democracies seems likely
to change in order to take this informed electorate directly into the decision-making
process. At least some national decisions may be made by direct debate and vote of
all interested citizens. Many others now handled at the national level might be
delegated to state and local governments, reflecting the new realities by reducing
central powers.
Still another variation is possible: require potential voters on given issues to
qualify by demonstrating both an interest in and a knowledge of the subject--
perhaps by reading and participating in the debate preceding the vote. There are
now few technical obstacles to such direct decision making, only political and
traditional ones. The full facilities of the Metalibrary are not required, only those of a
much smaller and more easily implemented political data system. The chief danger
of participatory democracy, as opposed to representative democracy, is the one
inherent in all situations involving zero-time information flow: they tend to result
in instant and therefore ill-considered decisions. The qualifying of an electorate for an
issue might cause a sufficient delay to prevent this. Perhaps some classes of
decisions will need a constitutionally guaranteed debating period in order to ensure
that a resolution will come out of mature deliberation.
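The mechanics of such a scheme can be sketched in a few lines of code. The
following Python fragment is a minimal illustration only--the thirty-day debate
period, the qualification rule, and all the names in it are invented for the
example, not drawn from any actual proposal:

    from datetime import timedelta

    DEBATE_PERIOD = timedelta(days=30)  # assumed constitutionally guaranteed delay

    class Referendum:
        def __init__(self, issue, opened):
            self.issue = issue       # the question to be decided
            self.opened = opened     # when the public debate began
            self.qualified = set()   # voters who joined the debate
            self.votes = {}

        def participate(self, voter):
            # Reading and debating the issue is what qualifies a voter.
            self.qualified.add(voter)

        def cast(self, voter, choice, now):
            # Refuse votes until the mandated deliberation period has elapsed,
            # blunting the instant decisions of zero-time information flow.
            if now - self.opened < DEBATE_PERIOD:
                raise ValueError("debate period still in progress")
            if voter not in self.qualified:
                raise ValueError("voter did not qualify on this issue")
            self.votes[voter] = bool(choice)

        def tally(self):
            yes = sum(1 for c in self.votes.values() if c)
            return yes, len(self.votes) - yes

The point of the two checks is that the system itself, rather than goodwill,
would enforce both the qualification and the delay.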
At the same time, the internationalization of trade and commerce has had a
tendency to create super-states that transcend present national borders. This is
already taking place with the European Community, and may one day do so in
Southeast Asia. The free trade treaty between Canada and the United States (with
later additions of Mexico, Chile, and others to come) could eventually result in the
largest international economy in the world, and the Europeans are responding to
such initiatives with a greater unity of their own, to prevent becoming subject to
North American and Japanese trade domination for years to come. There will no
doubt be other such trade zones established, because the benefits of international
trade and the harm done by protectionism are manifest. For instance, Asian
countries whose economies were badly damaged in the late 1990s might hope to
recover by establishing stronger trade organizations of their own.
Despite the necessary internationalization of trade, legislators in some
countries may enter into protectionism anyway, for they may be unable to see
beyond certain narrow local issues, and they may be prepared to destroy the
international economy for years to gain a few years' jobs for one city. It was exactly
such behaviour that was one of the chief causes of the Great Depression of the
1930s. Neither can nationalism be discounted as a factor, and this element may
ultimately prevent the traditionally fractious Europeans from achieving unity. As the
attacks on the Multilateral Agreement on Investment (MAI) in the late 1990s
illustrate, nationalism is still a potent enough force to doom a treaty deemed to
surrender too much sovereignty to a vague and ill-defined internationalism. This
suggests that even though such an internationalization might well be in most
people's best economic interests, efficiency of technique is not sufficient in itself to
guarantee the outcome, and it may not happen for a generation or two, if ever.
However, in the long run, it is probably in the best economic interests of all nations
to give up some sovereignty and allow fuller and freer international trade to
operate. Whether such trade alliances will eventually also result in formal political
unity (whether regional or global) may not matter, for the trade alliance itself may
become the de facto senior government for most practical purposes.
What do these twin flows of power from the national level to the local one (on
the one hand) and to the international one (on the other) mean for the future of the
national state? Its office holders could lose personal power and prestige, and
nationalism could diminish as a world force. If so, war could also lessen in
importance. However, the regional governments that are today's nations would not
necessarily vanish, but could serve as local checks and balances on the world
economic and political system. The individual voter may have an extended reach of
power, for the Metalibrary is capable of promoting the billions of little brothers of
the world into the driver's seat of participatory democracy on a global basis. At the
same time, it seems that interest in the governance of the local community will
continue to increase, and this could eventually result in the substantial transfer of
decision making and taxation powers to this level. If these economic trends were
the only factors, the net result could be a gradual diminishing of nationalism in
favour of global and local interests. Nationalism as it is known today is largely a
creation of the departed agricultural and industrial ages, and it might not be
expected that it would necessarily survive far into the next era. On the other hand,
the break-up of the former Soviet Union has revealed a great deal of traditional
nationalism that had been suppressed through much of this century. Without the
Russian threat hanging over their heads, the old/new nations of eastern Europe
readily returned to their traditional violent nationalism, ignoring the unifying trends
of the information revolution until they could either settle some old scores, or give
birth to new citizens whom they could teach to have new priorities.
Note, however, that this last comment assumes they might want to teach new
priorities or perspectives. It is doubtful that the ethnic factions in the former
Yugoslavia desire this. Those in the Middle East and on the Indian subcontinent who teach
hatred to their children in their schools are unlikely to stop doing so any time soon.
There are therefore collectivist and individualist trends in the affairs of state
as well as in the operation of the economy. Individuals may become more protective
of, more interested in, and more desirous of control over what they regard as their
local turf, and if they can achieve this even while participating occasionally in global
decisions, they may be as content and as democratic as the citizens of any of
today's nations, even though their relationship to the state will have changed
dramatically.
In the last two civilizations, the government bureaucracy made most decisions
of state, and it can certainly be expected that the civil service will be as much a
feature of the next era as of the last. It may, along with other service groups,
become more of a professional and managerial cadre. If so, the role of elected
political leaders and heads of state might become even more ceremonial in nature
than it is now. Whether this subsumption of the political and economic life in a
technical one is entirely a good thing or not remains to be seen; but it seems
reasonable to suppose that one day it could become no more appropriate to elect
the state's managers than to elect its doctors or electricians. One does not need to
suppose that such a situation will imply a loss of liberty for individual citizens, nor
that it would result in an amorphous technical dictatorship. On the contrary,
individual citizens could have more power in local matters as well as input into
global policy making.
It should be kept in mind, however, that even though technology both enables
such changes and may drive society toward them, they may actually come about
only slowly, or not at all. Institutions have a momentum and life of their own, and
have a way of surviving in roughly their traditional way--even if in form alone--long
after they have outlived their usefulness, or better replacements are available. Such
change as does come to Western-style democracy may be slow--perhaps almost
unnoticeably so--and thus lag far behind the level of enabling technique, for there
are always many with a vested interest in the status quo.
What is more, change is a fragile thing. The availability of new techniques for
better democracy does not mean that these techniques will never be subverted to
destroy democracy, for tyranny is always a possibility. Indeed, although information
technology appears to reduce the risk of traditional despotism based on fear and
ignorance, it also allows for rapid spread of bad ideas along with the good, and so
increases the risk of a self-inflicted tyranny based on demagoguery or on oppression
of minorities by the majority, or both. Which path (greater freedom or less) will
actually be followed depends more on the attitudes and motivations of the peoples
of the present democracies than it does on the enabling techniques. It also depends
on there still being an ethical consensus--the social glue that is the basis for the
rule of law. If the present infatuation with relativism continues to erode this base, it
is difficult to see what basis there would eventually be for government at all.

The Global Struggle

However, it should also be clear that universal availability of information is
ultimately the mortal enemy of tyranny. The computer, radio and television, copying
machines, and the free press all strike at the heart of regimes like the old Russian
statist empire, and it could not long endure in its previous form in the face of such
technology. The Chernobyl incident well illustrates this, for it was Western media
coverage of satellite photographs that forced the Russian leader to go on television
and explain the nuclear accident to his people--an act without precedent in that
nation. Even China, with its centuries-old reverence for authority, has already
transitioned itself from Maoism to a mixed system in which communism is
disappearing. Undoubtedly, this process will continue.
The underfed, poorly clothed, and poorly housed billions of the third world can
now also see in living colour how the peoples of the industrialized world live. They
are ready to accept either democracy or a new tyranny if it means they will get a
share of the same material pie. It is entirely possible that, as the information age
and the participatory democracy it brings come to Russia and China, their tyranny
and state capitalism will be temporarily adopted elsewhere, in pre-industrial
countries. All this makes for very precarious times, for it is still possible for tyranny
to use existing technology to overwhelm democracy, and attempt to abort the
information age. Indeed, some degree of international cooperation is needed for it
to continue; several well-placed nuclear explosions in medium orbit would effectively
destroy most electronic equipment in the world, and there are a number of nations
with such a capability.
Television also makes it possible for a morally conservative third world to pass
harsh judgement upon the West for its self-portrayal in this medium (the religious
leaders of countries like Iran, for instance, refer to the United States as "the great
Satan" for moral as well as political reasons). As such countries grow in power, it is
possible to imagine these judgements eventually resulting in new "holy wars" with
the goal of bringing the West in line morally. Such conflicts would also serve to
maintain the tyrannies that promote them, just as in the past, for there seems to
be no end to the propping up of totalitarianism by the technological making of war.

9.3 Technology and War


The Professor is waiting in the seminar room with Johanna, Dorcas, and Ellen.
As he frowns at his watch, the door opens and Nellie arrives with a visitor who has
to duck even more than she does to get under the six-foot-eight door frame.
The visitor is a broad-shouldered woman with a somewhat dark, freckled face and
red hair done up in a bun; she wears a kilt and a brocaded shirt and vest. A
broadsword, a heavy cudgel, and a slingshot hang from her belt, and over her
shoulder are holstered two throwing knives. She is limping slightly. Nellie is similarly
attired and armed, and sporting a large bruise on her face.

Johanna: (to Ellen) I thought Nellie was big, but this one must be over seven
feet!
Professor: (rubbing his hands and brightening) Lady Mara, thank you for
taking time from your busy schedule. Nellie, glad the two of you made it. Would you
do the introductions, please?
Nellie: (indicating each in turn) Mara, these are Johanna, Dorcas, and Ellen.
The Professor you know. May I present Lady Mara Meathe, administrator, physician,
and regular army officer.
Professor: You're a little late.
Nellie: We did a stick work-out in the gym. Just time for one hit apiece.
Ellen: (looking at Mara skeptically) And which of the Professor's imaginary
worlds are you from, dearie?
Mara: (turning sharply to Nellie, who suddenly grows pale) Is this sacred
ground?
Nellie: (shuddering) I think we better assume that, Mara.
Ellen: What's that supposed to mean?
Mara: Our officers' schools are held under the sword truce of sacred ground.
Nellie: (glaring savagely at Ellen) Under those rules she's not allowed to kill
you for insulting her as long as the course is in session.
Professor: (motioning everyone to a seat) Well, enough idle pleasantries.
Today's topic is: "War in an information age."
Dorcas: There will always be wars. Only the means of waging them changes.
Johanna: I disagree. War is an unnatural condition. Human beings are
naturally good and normally peaceful. Now that we know the horrors of nuclear and
biological warfare, there will never be major wars again.
At this, Mara laughs derisively.
Professor: Lady Mara, would you care to give us a thumbnail sketch of warfare
on your home world since the nuclear age.
Mara: Are you sure it's all right?
Professor: This conversation is fictional, remember. You may speak freely.
Mara: Glad to, Professor. In a nutshell, there hasn't been a single year free of
war in the two centuries since. On my planet, the banning of nuclear weapons,
chemical and biological warfare, and gunpowder wasn't intended to eliminate war
but to ensure that when soldiers fought they would have to do so with what they
could hold in their own hands. Swords, sticks and knives are the weapons of choice.
Some like the battle-axe, but I find it cumbersome.
Professor: When was the last time nuclear weapons were used?
Mara: At the end of the war with Japan in 1750. There haven't been two
decades without a major conflict, and no year without several minor ones since.
Biologicals were used a few times, but no one has fired atomics in that span.
Johanna: (shocked) Your leaders still send people to war?
Mara: No, they take them. A noble who declares war on a neighbour must
personally lead the troops into battle. Only the First Lord at Tara is exempt, and that
only because he is not allowed to declare war, just to respond to breaches of the
peace.
Johanna: (sarcastically) What about the women? Do they stay at home and
cook for their husbands?
Nellie: Mara is a high government minister, Johanna, and a major general of
the army. Women on Ortho have equality.
Johanna: (horrified) You don't mean you're a combat officer?
Mara: Of course I enter combat. I told you. Leaders go first; they never send.
Johanna: You've killed people with that sword?
Mara: Certainly. Do you think I use it to shave my legs?
Johanna: What if you get wounded or killed?
Mara: There are army physicians, of which I am one. But, if the Lord of Heaven
has done with counting out my days, then who am I to object?
Ellen: (disbelievingly) How many people have you killed in combat?
Nellie: Careful, Ellen. An honourable soldier does not boast, but in Mara's case
I know it's around twenty. She has come close to being killed a couple of times
herself.
Johanna: (incredulous) Men would kill a woman in battle?
Mara: No one has to take up the sword, but once you do, everyone's equal. In
battle you kill or are killed.
Nellie: Isn't that what you and Ellen want, Johanna--equality? Don't you like
the consequences?
Ellen: All right, all right. I'll bite. You're from a mythical world where you're a
barbarian queen. How do you reconcile your apparent religion with being a warrior?
Mara: I am loyal to the Lord of Heaven, to the throne of Tara, to my family and
sworn friends, and to the liege people sworn to me--in that order. Sometimes those
loyalties require me to fight against injustice or despotism, or for the cause of an
ally. I do my duty under Heaven.
Ellen: (grunting) Another Christian, but what happened to "love your
enemies..."
Mara: "..and do good to those that hate you?" When you have the choice, you
take the higher road, of course. In war, your duty is to win--but not at the cost of
justice or honour.
Johanna: (stubbornly) There is no such thing as a just war.
Mara: You live on the world that spawned Hitler and can say such a thing?
Dorcas: There will always be those who believe the strongest must rule, even
if they have to kill everybody else to get what they want. It is just to defend society
against such.
Ellen: (smiling) There's an inconsistency in your story, Mara, dear. If soldiers
on your imaginary world have to kill with their hands, why are you carrying a sling?
Mara: The sling and the throwing knives protect against a coward's attack on
a defenceless person, or defend against a banned weapon such as a gun, gas, or
bow. One who does or uses such things need not be treated with honour, and can
be killed out of hand. A knife through the throat is quite effective. (Glancing around
and spotting a wooden bust on a platform by the opposite wall, she turns to the
Professor.) Shall I demonstrate?
Professor: Be my guest. That's why I brought it.
Nellie stands up, grinning, bows to Mara, strides to the bust and stands
behind.
Nellie: Now suppose I were to draw a throwing knife.
Reaching over her own shoulder, she produces one. There is a breath of
motion, a blur, and Mara's knife buries itself almost to the hilt in the throat of the
wooden bust, rocking its platform slightly.
Johanna: (in a shaky voice) Nellie could have been hurt!
Nellie: Oh, nonsense, Johanna. We've practised hundreds of times. Besides,
Mara is a superb surgeon. She'd patch me up in a jiffy.
Ellen: (rising, reaching for the knife with both hands and giving it a good pull)
Hey, I can't get this thing out.
Nellie yanks it out with one hand and tosses it to Mara, who catches and re-
holsters it in one motion.
Nellie: (grinning) Want another demonstration? Mara and I could go a couple
of rounds with the sword if you like.
Ellen: No. I'll buy the proposition that you're somehow and somewhere the
real thing. I just don't see how a society with an advanced enough science to have
nuclear weapons could go back to using primitive swords.
Mara: These blades are the best product of the swordsmiths' profession, and
scarcely primitive. Besides, it was a deliberate choice to limit weapons as part of the
warriors' code. We found killing by proxy offensive. So the change is an advance,
not a retreat.
Ellen: But what of those who can't or won't learn the sword? Where does that
leave them?
Mara: Wearing a white shirt and the kilt of someone who can and will defend
them.
Johanna: But, that is still rule of the strongest over the weakest. It's
exploitation.
Nellie: On the contrary, the lord or lady to whom noncombatants owe
allegiance has the sworn duty to protect them from exploitation and ensure their
rights and freedoms. That's the only reason they carry the sword. It's the reason
God ordained government, of whatever form.
Dorcas: Fallen human nature is corrupt. What if a noble fails in that duty and
does exploit?
Mara: Then the liege covenant is broken, and those freed thereby can replace
the base noble. If they are unable to do so, it becomes the responsibility of the rest
of the nobility to do it for them.
Nellie: You didn't mention they also have a duty to kill the faithless noble.
Mara: I thought it obvious.
Ellen: Only the workers are fit to rule. No one's born noble.
Mara: Of course nobility isn't by birth. The most fit to inherit family
responsibility takes the name and goes to Tara, whether it be the child of the
previous lord, or of the village blacksmith. One is part of the nobility if one can and
does fulfill its responsibilities.
Ellen: What about seniority?
Nellie: (when Mara looks puzzled) Sorry Ellen, there are no unions on Ortho. If
someone tries to hang on to responsibility too long, one more capable will surely
challenge and take it away.
Johanna: Yours is an information-based society?
Mara: It has been for over two hundred years.
Johanna: I would have thought in such an environment war would only be
fought at a terminal on the network.
Mara: That happens, too. Nellie is my consultant in that line of work.
Johanna: It couldn't work. A standing army is an intolerable tax burden on
society.
Mara: Local lords collect the difference between twenty percent of income and
the up to ten percent that may have been given to the Church. Ten percent of that
tax in turn is owed to the central government, but the cost of maintaining the
prescribed army units is included, except during an aggressive or rebellious war of
local making--then the domains pay troops out of their own pockets.
Dorcas: Is a soldier bound to the same lord for life?
Mara: If they swear to be, yes. Many officers prefer to hire out to whomever
they wish.
Johanna: They work as mercenaries?
Nellie: Nominally all local units are subject to the crown of Ireland at Tara.
Ellen: (chuckling) I'll have to tell my aunt about this one. She'd get a kick out
of a world where the Irish are in charge. She's always talking about how the English
mistreated them for seven hundred years. (pausing suddenly) Say, how do you treat
the English?
Mara: They are partners in the United Kingdom of the Emerald Isles.
Johanna: Is it a democratic partnership?
Mara: Some countries elect a parliament to look after local government
matters under the authority of the crown, but England is not among them. Her chief
magistrate is Lord Kent.
Professor: Nellie, you act as summarizing prosecutor, and Mara you put on
your brehon's chain. Give some examples of how military and criminal discipline is
handled.
Nellie: Right. A local ruler stirs up a dispute with another noble and declares
war, but stays home from the battle.
Mara: His troops refuse to go. They banish or kill him and install a new lord.
Nellie: An officer in the heat of battle kills the non-combatant historian
recording the scene for the opposing side.
Mara: The guilty officer's troops suspend battle under a truce, execute the
coward, supply a bard to replace the fallen one, recognize a new leader, then decide
whether to continue the battle.
Nellie: The General of the Army stages a successful coup against the central
government.
Mara: The coup leader is now First Lord, has to give up all his lands, money,
and titles, and assume the responsibility of rulership. It's not a job many people
want, by the way.
Dorcas: I thought you had a monarchy. You mentioned a throne.
Mara: The King was deposed almost sixty years ago and the nobles took over.
Their obligation is to the principles behind the green chair, even though no one can
sit in it for over a year yet because it's under an ill-advised ban.
Nellie: Suppose a coup fails, and the instigator escapes.
Mara: The Donal sends officers after the loser to kill or capture him.
(hesitates) We had a case like that once where the coup instigator was allowed to
escape, changed his name, rejoined the army as a lieutenant and eventually
regained his honour. The senior officers know about him but no one ever turned him
in to the government and now that the twenty-year ban has passed he could take
back his old name if he wanted.
Nellie: A soldier runs away from battle.
Mara: Someone runs after him and kills him. That's not an ethical issue, Nellie,
just the practical reality of fighting with swords.
Nellie: The House of Lords tries to abolish the army.
Mara: The people would rise up in rebellion against the house, kill them all,
and replace them with ones with more sense. The same thing would happen if one
of them tried to govern without the people's consent. The government is there to
protect the people. They get very nasty when their security is threatened by
abrogation of duty.
Dorcas: That's the theory, at least. I doubt if it's always like that in practice.
Mara: You're right. There have been despots among domain lords and even at
Tara, but the high nobility is sworn to prevent such, and most take it seriously.
Dictators don't live long.
Nellie: An officer challenges an enlisted trooper, or a lord deliberately
provokes a fight with someone much less skilled.
Mara: The guilty party is dismissed from all positions of responsibility,
dishonoured, and assigned to work as a field hand for one to three years.
Nellie: An officer or noble uses the position for self-enrichment.
Mara: The purpose of power is the fulfilment of responsibility. Abuses indicate
high-handed and deliberate dishonour. Three to five years of field work.
Professor: Very good. We'll have a formal debate next week on the proposition
"War may be waged justly." Ellen and Johanna have the positive side, Nellie and
Dorcas the negative. Mara will be the judge. (noticing Ellen about to protest) Mara
must judge fairly. If she did not, she would lose her brehon's chain and be
sentenced to field work herself.
He dismisses class, and leaves. Mara turns to Ellen and Johanna.
Mara: The gym here's pretty good. How be I take you both on together in a
couple of rounds unarmed? If I break any bones, I've got my medical kit along.

******

As observed in Chapter 1, new technologies have historically found many of
their applications in the waging of war. The modern military, like that in every age,
carefully evaluates all techniques and develops some of its own for their potential to
kill or to defend. The presence or absence of critical war-making technologies has
changed the course of history too many times in the past to suppose that it will not
happen again. This is actually the case in every war, for if combatants are
technological equals, strategy may win the day, but strategy is a technique in its
own right--that is, there is always a most efficient way to wage war in a given
situation. In this century, superior firepower, much more than strategy, has been the
decisive factor in the major wars. However, in the 1960s, lightly armed and ill-
trained young North Vietnamese guerillas took on the heavily armed and
mechanized United States army and forced it into a humiliating retreat. Not so in
the Gulf War, where the United States overwhelmed Iraq with sheer military might.
In the 1970s and 1980s a rough balance of power was perceived to exist
between Russia and the United States, and neither side wished to start a war that
could not be won, or whose fighting might sterilize the planet, so strategy became
paramount again. In the end, the United States essentially bankrupted the Soviet
Union by raising the technology stakes to the point where the Soviet economy could
not keep up. For a while, pundits hailed this as the end of history--the ultimate
triumph of liberal democracy. That this was a foolish hope should have been
apparent even then--there are several more countries (and possibly terrorist
organizations) that have nuclear capability, and are willing and able to face off
against each other or the United States. Thus, one cold war is over; but others have
begun between other participants and in different parts of the globe.

9.3.1 Confrontation Outcome Scenarios


There are several ways in which a balance of power in a stalemated
confrontation can change as a direct result of new high technologies, or new
appreciations of strategic technique. At least three kinds of scenario are possible,
though some may be less likely than others.

Escalation--a lose/lose scenario

It is now clear in the aftermath of the cold war between the United States and
Russia that two nations cannot escalate military confrontation indefinitely. This is so
for both military and economic reasons, for unlimited escalation eventually leads
either to war or to the effective bankruptcy of one or both parties.
First, since there is little to be gained militarily by being the responder in a
conflict, striking first in such a way as to obliterate the enemy provides the highest
probability of survival. The cold war participants realized this, and designed their
capability so that they could both survive a first strike and respond with enough
force to obliterate the aggressor. This policy of Mutual Assured Destruction (MAD)
takes away the strategic advantage of the first strike, but in the nuclear age has the
consequence that any conflict has the potential to destroy all life on Earth.
Second, each technical and scientific advance that is applied to the making of
war costs more than the last and, like medical costs, military ones have the potential
to bankrupt all nations that do not limit spending. The demise of the Soviet Union was
brought about to a great extent by overspending on the military to the point of
bankruptcy. Even though there is now much less chance of nuclear war involving
Russia, the economic aftermath still could cause a catastrophe of global
proportions--a depression from which it might take decades to recover. Such a
result may well have been inevitable if the Soviets and Americans had continued on
the course of the 1970s and 1980s; both were spending far beyond their means,
and the following years saw many painful readjustments as the United States
sought to balance its budget and trade deficits, and Russia struggled to feed its
people and bring its economy up to date. None of these goals was achievable
without large tax increases and massive reductions in spending--either by a general
demilitarization, or by making sharp cuts in social programs.
The former communist nations had to do both, but any Western governing
political party that emptied its citizens' pockets, whether by taxation or social
spending reductions, would quickly lose power to some other party, so it would
appear that it is impossible for them to maintain military spending near historical
levels for long without at least a major economic upheaval. Moreover, the larger the
percentage of Gross National Product committed to social programs, the smaller
the capacity to mount an all-out mobilization to fight a conventional war--though this
consideration does not apply so much to a nuclear conflict.

Advantage--A Win/Lose Scenario

The second possibility is that one side achieves a decisive advantage over the
other. This result is possible if one side or the other succeeds in developing and
deploying a (non-nuclear) technology that breaks down the stalemate dramatically
and permanently in its favour. Such an outcome was possible in the Cold War--it
would not be the first time a long standoff ended this way. Indeed, it could be
argued that computerization was the trump card that made the technical lead of the
United States insurmountable.
Possible candidates for future confrontations include the deployment of
biochemical agents to render the other side impotent long enough for a
conventional military takeover, or the deployment of orbital weapons capable of
waging a decisive war and/or preventing any attack from the other side. However,
the United States has a vast technological advantage in some areas such as
computing systems, biochemical and physical research, and it is doubtful that other
nations are currently in a position to employ any of them strategically and
decisively. For their part, biochemical agents, such as new viruses, could not be
confined to one territory, but would spread world-wide in a matter of days.
One relatively safe option might be to deploy the "flying crowbar"--a smart,
internally guided chunk of rock or metal dropped from orbit to a pinpoint
landing on missile and other military installations. In theory, such non-explosive
devices could be made small enough to avoid detection and yet strike with such
force as to destroy buried missile silos, such accuracy as to take out mobile ones,
and in such numbers as to prevent retaliation. Indeed, the flying crowbar was
probably a component of the proposed U.S. Strategic Defense Initiative. However,
the ability of even the United States to utilize space in a cost-effective manner is in
some doubt, so an effective space-based weapon or deterrent may be a long way
off.
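The destructive potential claimed for such non-explosive devices follows from
elementary physics. A back-of-envelope calculation in Python, using assumed
figures (a 100-kilogram projectile arriving at roughly low-orbit speed, with
losses to atmospheric drag ignored) rather than any real design:

    # Kinetic energy of an orbitally dropped projectile (illustrative figures only).
    mass = 100.0        # kg -- assumed projectile mass
    velocity = 8000.0   # m/s -- roughly low-orbit speed, drag neglected

    energy = 0.5 * mass * velocity ** 2   # joules
    tnt_kg = energy / 4.184e6             # one kilogram of TNT yields about 4.184 MJ

    print(f"{energy:.2e} J, about {tnt_kg:.0f} kg of TNT equivalent")
    # prints: 3.20e+09 J, about 765 kg of TNT equivalent

Even allowing for substantial losses on re-entry, a small inert mass arrives
with the energy of most of a tonne of TNT, which is what makes the idea
credible on paper.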
Neither can the expenditure of resources needed to gain decisive
technological advantages be kept entirely hidden from the other side. For instance,
the aggressor in a biological attack would have to inoculate its entire population
with an anti-agent prior to launching the agent against the other side. Such an
action could not be kept secret in the information age, and this fact alone seems to
make this win-lose scenario unlikely--though not impossible.
Another win/lose scenario has one nation gradually gaining the upper hand in
space, to the point where it could dictate terms to the Earth-bound losers. This
strategy has its own risks, for here the aggressor would have to agree to short-term
military spending cuts to free up money for space hardware, accepting temporary
losses in ground level power in exchange for long-term total superiority. The nation
that did this could be hailed as a peacemaker by the third world, while
simultaneously pursuing a strategy that would eventually result in total domination.
What makes an apparently passive strategy likely to succeed where an active one
would not is the tendency of people to overlook the obvious, and the desire, often
shown in the Western democracies, to grasp peace at any price, or to pretend that
an aggressor state is really a benevolent friend. In this version of the win/lose
scenario, no war will be fought, but tyranny might triumph anyway--at least until its
inherent instability eventually caused it to collapse.
It should also be noted that it is too early in the life of post-communist Eastern
Europe to tell whether the cold war is actually over, or merely in an interregnum.
necessary to deal with the threat of nuclear action from any of the former Soviet
states now in control of part of the arsenal, and from any one of several other
nuclear club members and organizations in other parts of the world. Perhaps the
same kind of high technology "Star Wars" threat that proved too much for the
Soviet Union can be effective against other nations as well--if it could be deployed.

Profile on . . . Losing a War


It's not over until...

The greatest drawback of any win/lose scenario is the frequent inability of the
loser to accept defeat and live with it.

o Seven centuries of oppressive British rule in Ireland could not shake that
people's determination to avenge the loss of their sovereignty, and the Irish
eventually won their freedom.
o Germans chafed under sanctions and reparations imposed after their loss
of the First World War to the point where this issue alone probably made the second
inevitable.
o The Serbs, Croats, and Muslims have never done better than live in a state
of uneasy stalemate.
o Rwanda's Tutsi and Hutu have an even worse relationship, and the situation
is similar in other African countries.
o Neither India nor Pakistan is happy with the borders between the two
countries, and neither is likely to accept the outcome of any new war as final.
o China makes territorial claims that are unacceptable to Taiwan, Tibet, and
India.
o There are numerous unresolved border disputes in South America.
o Iraq seems destined to fight more wars with the West, and she and her
neighbours with Israel.

Likewise, one has to wonder whether the people of the former Soviet Union
will be able to live with the loss of prestige that came from the end of the cold war
and the disintegration of much of their empire. It is still possible that a revived
Russian nationalism will renew this conflict, with disastrous consequences.

Genuine Peace Strategies--Win/Win Scenarios

The discussion (and supporting events) thus far leads some to the historically
improbable and highly idealistic conclusion that high technology may have made
war obsolete, at least war of the global variety. "We are at the end of history," say
some--meaning that civilization has reached a new and permanently peaceful state.
If so, it is communications technology that is the main cause, on the one hand by
making it difficult for one side to gain a technological advantage, and on the other
by reducing suspicions and promoting global cooperation. Indeed, if it could be
assumed that all military knowledge were equally available to both sides, then no
global war would be winnable. If the leaders on both sides of a nuclear confrontation
were rational only to the point of believing that those on the other side would never
strike first, they might be tempted to strike first themselves. But if they were
sufficiently rational to understand that even a first strike that destroyed 90 percent
of the potential counterstrike would still result in their own country's annihilation,
then neither side would strike first, or indeed at all. Realizing this and acting upon it logically--that is, by an eventual total
world nuclear disarmament--are not the same thing, however. If a perfect defensive
system could be devised, and any hostile ICBM destroyed at or shortly after launch,
then the best insurance for peace would be to share this technology publicly and
with all nations. Since it is the presumption of the information age that this
knowledge, like any other, could not be kept secret for long in any case, there would
be important real advantages for peace in sharing it from the outset.
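The deterrence arithmetic behind that claim is worth making explicit. A minimal
sketch, with every number invented for the purpose of illustration:

    # Why even a 90-percent-effective first strike fails (invented numbers).
    arsenal = 1000                # defender's warheads before the attack
    first_strike_kill = 0.90      # fraction destroyed by the first strike
    annihilation_threshold = 50   # warheads sufficient to destroy the attacker

    surviving = arsenal * (1 - first_strike_kill)
    print(f"warheads surviving the strike: {surviving:.0f}")   # 100

    if surviving >= annihilation_threshold:
        print("the counterstrike still annihilates the aggressor")

On these assumptions, a "successful" first strike leaves a hundred warheads in
play, twice what is needed to destroy the attacker--which is why leaders
rational enough to do this arithmetic do not strike at all.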
The first obstacle to such a result would be the development of provably
reliable defense hardware and software--a goal often striven toward, but so far
elusive. The second obstacle to the peace scenario is that a considerable portion of
the world economy is dependent on the war industry, and any cutbacks in the level
of spending on arms would create severe economic dislocations. However, measured
against the ethical considerations, whether absolute or utilitarian, the payment of
even a very large economic cost to secure the number of human lives at risk must
be judged beneficial, and ought to be undertaken. In a nuclear
age, the alternative to peace is no longer war, but the destruction of the whole
human race--something that cannot be entertained, and that must be avoided if
there is to be a next civilization. It is worth remarking that the transitional period of
the early years will be the most difficult, for nationalism is an opponent of both the
spread of technology and of peace, and so are many traditional institutions.
However, if the twin trends of globalization and localization rob national
governments of power, and the universal availability of information takes away their
ability to keep secrets, both the motivation for waging war and some of the means
will be seriously impaired.
That does not mean war will not happen anyway. For political or religious
reasons, some countries are still relatively closed, and information about them is
still difficult to obtain and unreliable. If the Russians have not in fact given up their
desire to rule the entire world, their best course of action would be to pose as
peacemakers and lull the West into a false sense of security; pretend to disarm
while actually re-arming; and hope to strike at a weak moment. That such an
approach may seem improbable in the West might not deter them; war has often
seemed unlikely until it has actually started. Neither are the new democracies in
Eastern Europe very secure; one or more of them could easily return to the old ways
in an effort to stave off economic collapse. In addition, China has yet to emerge
entirely from under the yoke of tyranny, and it is much too soon to speculate on
what will happen to her when she does. Add to these the perennial powder keg that
is the Middle East, tensions between some South American countries, the poisonous
relations between India and Pakistan, and the vested interests of the arms industry,
and there remain all the ingredients for many minor and major wars for decades to
come.

9.3.2 Other Kinds of War


Military wars are not the only kind, moreover. There are a variety of
substitutes for gaining or expressing dominion over others. One outlet is sports,
which in some countries takes on warlike dimensions, both on the field and off. Thus
the Olympics became the focus of great endeavours not only for the athletes, but
also for the national governments who funded their training. Success on the field,
slopes, or ice was often seen as a proof of national superiority, and such proofs are
easier to buy than those obtained from winning a military victory. Revenge for an
old invasion of one's territory may be impossible to achieve on the battlefield, but
very sweet at the hockey arena or soccer stadium in front of thousands of
screaming spectators and the hungry eye of the television cameras. Moreover, as in
conventional warfare, some may decide that winning at sports is everything--and
such an attitude has both ethical and medical consequences, as performance-
enhancing drugs come to be seen as essential for such high-level competition.
A more important form is economic war. For the sake of promoting
employment at home, governments often adopt strategies to ensure the
establishment and prosperity of industries under their jurisdiction. These may
include tax breaks, low-cost land or loans, production or wage subsidies, high tariff
or other import barriers, or combinations of these.
These activities take place at all levels of technological development and in all
industries, and the results are much the same in each case. For example, the United
States engaged in an agricultural products subsidy war with the Europeans and
Japanese through the 1980s. Farmers enjoyed government-sponsored prices and
protection from competition, but overproduced and caused enormous surpluses,
which had to be stockpiled at even greater cost, or risk a price collapse. No more
people are fed by such policies, and the natural market becomes so distorted that a
substantial portion of government resources comes to be devoted to the subsidies.
The costs easily escalate to the point where they destabilize the economies of the
countries using them as much as they do their ostensible targets in other nations.
For instance, by 1988, the elaborate subsidies and protections built into the
Japanese system had become among the most costly in the world, and some cracks
had already begun to appear. These became more serious in the 1990s, and it
remains to be seen whether Japan can recover. Subsidies, like all forms of
protectionism, have the potential to cause either wars or economic collapse.
Other examples can be taken from high technology trade. When American
manufacturers of memory chips (DRAMs) found they could not compete with the low
wages and subsidies of the foreign manufacturers, they asked for government
protection in the form of import quotas. This reduced the supply of the chips and
drove prices up. Since the American manufacturers could not respond immediately
to fill the demand, prices continued to increase, and offshore manufacturers got
most of the profit from the windfall. By mid-1988, the prices for such devices had
reached their highest level in years, due to the distortion of the market caused by
political actions. That is, the result was not an immediate increase in domestic jobs,
but shortages, higher prices, and a transfer of wealth to other countries.
Subsequently, of course, supply and demand took over as the information that there
were profits to be made disseminated; supply was ramped up; and prices came
down sharply.
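The qualitative behaviour of that market is easy to reproduce with a toy model.
The linear demand and supply curves below are invented, not estimates of the
actual memory-chip market, but they show how capping supply moves the clearing
price up:

    # Toy linear market: demand falls with price, supply rises with it.
    def demand(p):
        return max(0.0, 100.0 - 2.0 * p)   # units demanded at price p

    def supply(p):
        return max(0.0, 4.0 * p - 20.0)    # units offered at price p

    def clearing_price(cap=None, lo=0.0, hi=100.0):
        # Bisection on price until (possibly quota-capped) supply meets demand.
        for _ in range(60):
            mid = (lo + hi) / 2.0
            offered = supply(mid) if cap is None else min(cap, supply(mid))
            if offered < demand(mid):
                lo = mid   # excess demand: the price must rise
            else:
                hi = mid
        return (lo + hi) / 2.0

    print(f"free market clears at {clearing_price():.1f}")                  # 20.0
    print(f"with an import quota of 40 units: {clearing_price(40.0):.1f}")  # 30.0

The quota does nothing to increase supply in the short run; it simply raises
the price everyone pays, which is precisely the windfall the offshore makers
collected.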
Airplanes are also an important commodity, both for the sake of jobs, and for
national pride. Aerospace contracts are lured to the soil of a particular nation by
governments that subsidize manufacturers to offer their goods at a loss in order to
get the business. In such an atmosphere efficient companies are penalized, and the
inefficient are encouraged to become more so. Fewer planes are actually made,
because inefficiencies drive the price up in spite of state largesse, and consumers
pay the higher cost for the waste. The space industry is also often cited as an
example of state subsidies creating artificial barriers to the kind of free trade that
could result in much greater efficiency and lower cost. Manufacturers whose only
customer is the government, and who work on cost-plus contracts, have no
incentive to reduce costs and are discouraged from or even forbidden to export.
Sometimes, the market is interfered with by the establishment of cartels at
the state level--again for the purpose of ensuring the highest possible price for the
goods of the nations involved. The Organization of Petroleum Exporting Countries
(OPEC) is an excellent example of this kind of activity. By combining the major oil
producers into a single price-fixing group, it was able to engineer enormous price
increases in the 1970s, and an equally large transfer of wealth to its members.
All trade wars must come to an end eventually, however, because they are by
their very nature self-destructive. No nation can afford to increase subsidies or trade
barriers indefinitely, and every cartel eventually causes other sources of supply to
be developed in response to the high prices. In their collapse, trade wars can do
even more harm than while they are in progress, for if they do not end in a
negotiated peace, they may end in a shooting war; thus they may constitute one of
the greatest risks to world peace. A premise of the information age is that national
leaders will realize that greater long term prosperity is available for everyone if
trade and other tensions can be eliminated and the prospect of new wars reduced.
However, enablement does not mean implementation. The ability to eliminate trade
wars will not necessarily lead to their demise; in some places and for some
industries they may become worse.

9.3.3 Policing a Peace


All these calculations and arguments may easily be upset, for the history of
this century in particular is replete with the ascension to power of fanatics who were
prepared to see their own nations crippled militarily and economically, and to die
themselves, in their efforts to dominate or destroy either their neighbours or
some hated ethnic or religious minority. Even if the superpower governments have
now become super-rational and do succeed in disarming because they realize that a
policy of mutual assured destruction is neither moral nor sane, there is no
guarantee that those who follow them in power will even be rational, much less
super-rational. Unless all the world's citizens are not only fully informed but also
free to determine their governments, there is little likelihood that disarmament
would be permanent; and even then there remains a small possibility that it would not be.
What is more, there are dozens of small states that spend even larger
percentages of their much more meagre budgets on arms than the cold war
superpowers did. In some of these, the military has not just a vested interest in the status
quo, but directly operates the state for its own benefit. Such regimes, like all such
tyrannies before them, are stable in the short term only if they can persuade their
citizens that there are real threats to meet, or good reasons to become an
aggressor. So long as there are nations, there will probably also be wars. Even if the
combatants gradually kill off each other's populations, and wear out their
economies, there will be larger powers prepared to use such local conflicts both for
economic gain and as testing grounds for their own weapons--so the world is still
threatened. At some point, many more of the smaller countries will achieve nuclear
capability, raising the probability that one of them will use it in an effort to
decisively settle things with their enemies.
Other nations will clamour to have nuclear capability too, and the temptation
to sell this technology for the very high prices being offered will be too great for the
world's arms merchants to resist, so nuclear weaponry may well spread world wide.
After all, there are already many countries that see the sale of conventional
weapons solely as a way of obtaining cash, not as a moral issue. Once such
technology has spread, it will be impossible to stop its use unless war itself can be
prevented. This too is possible, but can only take place with the cooperation of all
the large powers, for every nation with weapons capability would have to be
persuaded to stop their manufacture and sale. Such an enterprise would involve a
surrender of sovereignty on a scale never before seen, and a placing of trust in an
international body to a degree not yet imagined. Such action will be hard for all the
world's nations, regardless of their size, but the alternative--gradually escalating
nuclear arsenals until one is used--is not an option for human survival, and cannot
be followed much longer.
A new threat, and perhaps one more difficult to police, comes from the many
nuclear technicians and scientists of the former Soviet Union who are now looking
for a place to use their skills and knowledge. If only a few of these were to sell their
expertise to some of the more warlike of the third world nations, there could arise
several new nuclear dangers to world peace. It seems unlikely that all these people
can be gainfully employed by non-belligerent nations for peaceful purposes, and
those affected are hardly likely to enjoy unemployment after their many years as
part of an elite establishment.
As the revival of Naziism illustrates--even if in small numbers of adherents--no
nation is safe from demagoguery. It is always possible for a small number of the
disaffected to raise the spectre of real or imagined ills, blame them on a domestic or
foreign scapegoat, and persuade the majority to initiate genocide or war. No
minority can be assumed to be free from the fear of such activities--whether Jews,
Christians, or some relocated ethnic group. Neither is any country immune from
such activities--whether in North America, Europe, Asia, South America, or Africa.

9.3.4 Summary

The discussion in this section raises further questions about the viability of
nationalism, totalitarian forms of government in general, and the morality and
sanity of any policy that emphasizes the ability to wage war as a high national
priority. It is assumed here that there can be no such thing as a just or
moral nuclear conflict; no imaginable goal of public policy or national interest can
justify the annihilation of most or all of the human race. The question is no longer
whether the world ought to disarm and to effectively police a general peace, even
at the cost of some national interest; it is rather a question of how to go about it.
For it is no longer national interests, but human survival that is at issue, and that is
surely a high priority in any ethical system. In the long run, an information economy
is a global phenomenon, and nationalism may decline. If it does so gracefully, peace
is possible. If its death paroxysms are sufficiently violent, any other outcome
becomes possible.

9.4 Some Legal Problems for the Next Age


It is time now to turn from matters of high state and international
relationships to consider the interaction of the law and technology in more practical
and mundane matters. One area of law that is of specific concern to the
development of high technology is that of property rights, for the recent direction of
change here may indicate some potentially serious obstacles to the widespread
creation of new intellectual properties that would otherwise be a hallmark of the
fourth civilization.
Specifically, when a work exists only in electronic form, what rights of
ownership are there? More generally, who owns the information that is the basis for
the new society, and who owns or may regulate the right of access to it? The same
question could be asked of the programs that organize that data and even of the
machines on which they run. What will be the status of the books, articles, poetry,
and plays that are produced on a word processor and whose originals are stored in
non-paper form? Can anyone own the way a class of software looks and feels to the
user? Before attempting to answer these questions, it is worth pointing out that
these issues have been singled out for detailed examination because they are
representative of the manner in which developing technology forces changes to the
legal assumptions and institutions of a previous age. It is not intended to suggest
that these are the most important legal difficulties, only that they are good
illustrations.

9.4.1 An Overview of Current Problems


As the larger manufacturers of equipment have found to their dismay, the
more successful their products are, the more likely it will be that someone else will
have an identical or "cloned" version on the market in a matter of months. The first
company does the expensive research and development, and a host of imitators
with little developmental overhead reap the benefits. The authors of best selling
programs have also discovered that for every legitimate copy their publishers sell,
many pirated ones are distributed. Some pirates have even sold copies of well-
known programs under a different name, or have counterfeited the original label.
It is not that the mousetrap designers of older technologies did not face the
same problem; after all, there are only so many ways to make wax paper,
photocopying machines, or flashlight batteries, for instance. However, it used to
take much more time to copy new inventions. The protection of the law was clearer
and easier to obtain, and the concept of private property was thought to be a
touchstone of Western society, with the law being applied vigorously for its
protection.
By contrast, in the early stages of computer development, lawyers sometimes
admitted in court that their clients had copied thousands of bytes of computer code
for their own version of some machine, but claimed that the original owner had no
right to the creation because, as machine readable code, it was not really property.
The argument was that once a legitimate creative expression had been reduced to
electronic form, ownership was lost, because no one could "own" electrical impulses
or codes that can be read only by a machine. Alternately, they claimed that the
software expressed a formula in essentially the only possible way, and therefore
was not protectable. Adding to the legal difficulties was the fact that there was very
little lead time for the copying of either hardware or software. In the former case, it
took but a few months. In the latter, the feat could often be accomplished in a
matter of seconds. For heavily copy-protected programs, an expert may have
needed hours or days, but all such locking schemes could be broken in a far shorter
time than it took to create them. A vigorous race between protectors and pirates
developed, and there was often more effort going into this sort of activity than into
the production of original programs.
The educational marketplace, where copying was epidemic, suffered the
most. Few publishers would enter it, for they knew there was no money to be made
when they could sell only one copy of a program to a School District employing a
thousand or more teachers. In a celebrated hardware case, judgment was rendered
for Apple Computer against the leading domestic maker of imitation Apple ][
computers, and after 1984 such clones were no longer legal in the United States.
However, this case was based on the contents of the ROM programs built in to the
computer, which had been simply copied by the clone makers. Later, when similar
companies copied the IBM microcomputer, they were careful to make new built-in
code with the same functionality, but different instructions. By doing so, they were
able to get around the earlier judgment, so that current litigation is focused more on
software than on hardware. In a later suit over the look and feel of the operating
system, Apple claimed that Microsoft had illegally copied the essential substance of
its intellectual property by imitating its functionality in the various versions of
Windows. Although lower courts ruled that there was no infraction, the suit was
resolved only when Microsoft settled out of court in 1997, buying shares in Apple in
the process.
Despite the fact that courts in several countries have now also ruled that the
creators of computer software can copyright their programs, and despite the
subsequent dropping of software protection schemes by most major manufacturers,
these problems still exist, and the production of genuinely new forms of software
and hardware has suffered. Nor are these difficulties confined to the computing
industry. A similar situation is faced by the manufacturers of pharmaceuticals, who
see their expectation of profit from research and development taken away by
inexpensive generic copies of their original drug formulas, and who therefore
decline to do such work within the boundaries of countries that allow the practice.
medicines; it remains to be seen whether this deliberate restriction of technology
transfer will be more productive and beneficial, or whether it will result in higher
costs and profits with little or no public benefit.
A third area, besides patent and copyright, in which property rights have been
endangered is that of access to stored information. Many computers that served as
information repositories, including those of governments, corporations, and banks,
were not at first nearly as well-protected against the casual intruder as they should
have been. Consequently, there came to be people who specialized in breaking into
computer systems to examine or change the data they found. While there were only
a few cases of this sort that gained great public notoriety, there was for a time a
widespread underground trade in access numbers and passwords for a variety of
installations, and it is still not known to what extent the security of important
systems was or is being compromised. This is probably not as much of a problem at
the present time, as these few incidents have resulted in much more effective
security measures being taken. However, it is not clear what the extent of liability is
when a system is broken into and data stolen or vandalized. Is it the fault of the
operating system vendor for allowing security holes, the software manufacturer for
not plugging these, the owner of the computer for not working around them, or the
owner of the data for irresponsibly trusting any of the above?
Another form of vandalism that has become a major problem is the spread of
"computer viruses". These are small programs that attach themselves to
applications when they are run, and whose instructions cause them to attach
themselves to all other programs in the system. At some point, they begin to
destroy the hapless owners' data and program files or cause random interruptions
during computing tasks. Even in the cases where they merely print messages on the
users' screens, they constitute an invasion of property and privacy, and reflect a
side of the information age that claims there ought not to be any rules. The
computing community has been quick to respond to this threat, and there are a
number of "vaccine" programs available to protect systems against such infections
or remove them afterwards. Sometimes the virus, or some other destructive
behaviour, comes coded inside some otherwise useful program that a person
installs and uses. Such a program is termed a "Trojan horse," and while rarer than
viruses, these can cause just as serious data loss. Files created under
Microsoft's Word and Excel can contain a macro that runs whenever the file is
opened. This capability has also been exploited to vandalize other systems.
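To make the mechanism of these "vaccines" concrete, the heart of many of them is a simple integrity check: record a fingerprint of every program file while the system is believed to be clean, and flag any file whose contents later change, as they would if a virus attached itself. The following minimal sketch shows the idea in Python; the file names and command interface are invented for illustration and do not describe any particular product.

    # Sketch of a "vaccine"-style integrity checker (illustrative only).
    # It records a fingerprint (SHA-256 digest) of each program file while the
    # system is presumed clean, then reports any file whose contents change,
    # as they would if a virus appended itself to the executable.
    import hashlib, json, os, sys

    BASELINE = "baseline.json"  # invented name for the stored fingerprints

    def fingerprint(path):
        """Return the SHA-256 hex digest of a file's contents."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def record(paths):
        """Store fingerprints of the given (presumed clean) files."""
        with open(BASELINE, "w") as f:
            json.dump({p: fingerprint(p) for p in paths}, f, indent=2)

    def check():
        """Report any file that has changed since the baseline was taken."""
        with open(BASELINE) as f:
            baseline = json.load(f)
        for path, digest in baseline.items():
            if not os.path.exists(path):
                print("MISSING:", path)
            elif fingerprint(path) != digest:
                print("MODIFIED:", path, "(possible infection)")

    if __name__ == "__main__":
        if sys.argv[1:2] == ["record"]:
            record(sys.argv[2:])
        else:
            check()

A checker of this kind cannot prevent an infection, but it can make one visible before much damage is done, which is essentially what the simpler vaccine programs attempt.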
There is still a feeling, both among those who copy designs and programs and
among those who "hack" into others' systems, that there is, or ought to be, no such
thing as private property. "Ideas are the property of the masses" and
"information is for the people" are the kind of slogans that they have adopted. Will
this attitude ultimately characterize the next age? Will there be no rights of
ownership to electronic property in the future? Or, is private property a leftover
from the old industrial age, and destined for the same oblivion? On the other hand,
if the right to private property ought to be a characteristic of the Information Age,
what steps can be taken to ensure that the creators or intellectuals have their
interests protected? There is a fundamental conflict here between the need to
widely disseminate information and the need to reward its creators in order to allow
them a living sufficient to continue their creative activity.

9.4.2 The Present Law
Present law already vests certain rights for the protection of intellectual
property. Usually, these are based on international treaties and
conventions. Not all nations sign each treaty, and even for those that do the details
vary somewhat from one country to another. In the United States, there are often
state laws or court decisions enhancing or limiting such protections further. Thus,
the discussion in this section is limited to the principles on which such protections
are based. Briefly, there are four general classes of protection, each with certain
advantages, and each with drawbacks.

Trade Secrets

If a manufacturer sells a product that is produced by some process or recipe
and that technique has been carefully kept a secret, not being published or
registered in any way, the secret offers a measure of legally recognized protection
against imitators. Even should a disgruntled employee leave and "spill the beans" to
a competitor, the law still protects the original owner of the secret. However, if the
secret is made public inadvertently, through the owner's failure to protect it
carefully, or if it is independently discovered by someone else, all protection is
gone. What was secret then becomes part of the public domain, available to
everyone. It may in some cases even become patentable by a competitor, and thus
lost altogether to the originator.
The advantage is total protection while the secret is kept. The disadvantage is
total loss if someone else can discover or legally obtain the secret for themselves.
This method is essentially worthless for protecting either computer hardware and
software or pharmaceuticals, as the secrets of these quickly become open to an
intelligent prober; as a result, this type of protection holds little promise for
much of high-tech industry.

Patent

Patents are intended to protect devices or artifacts. Typically, they are applied
to machinery and equipment, including both consumer items and devices used in
the manufacture of other goods. They provide an exclusive right to make the
patented artifact for a fixed number of years and are effective in preventing the
distribution of identical imitations.
A specific computer may be patentable, for example, as may many of its
components. However, mathematical algorithms cannot have this protection, nor
can any other intellectual expression that is not a physical device. However, in a
move that has generated considerable controversy, the U.S. Patent Office has begun
to grant protection to certain processes that are part of various software packages.
On the other hand, there is no computer (and few circuits) sufficiently unique in an
electronic sense that the same end result cannot be achieved in some other way. In
practice, there is therefore little in patent protection to prevent another company
from building a virtual copy of a computer or other machine, and nothing to prevent
others from manufacturing an "improved version". Cheap foreign counterfeits or
exact clones can now be kept out of the United States, but many countries have
thus far not even taken that step, so that a computer patent in such places is
essentially worthless. Moreover, workalike machines are usually so easy to make
(and these are often called clones as well) that there is little that can be done to
stop very close imitation of computers.
Following the older practice of patenting medicines and medical machinery,
patents have also been sought and granted on genetically engineered life forms--a
practice that is sure to create considerable controversy as this field expands. As
remarked in Chapter 7, this is likely to worsen if animal life is worked on, the more
so for human genetic modifications.
A patent does have the advantage of blanket protection while it lasts, and
where it is enforced. It has the disadvantage of being relatively difficult to obtain
and often impractical to enforce, particularly in the case of electronic equipment. It
may also artificially inflate prices and deprive the general population of the benefits
of the discovery by creating a monopoly for an unscrupulous manufacturer
determined to exploit its position for all the available cash. In the case of
patents on key aspects of software, it may prevent manufacturers of even non-
competing software from using some ideas or code in their products. For such
reasons, most countries are now limiting drug patents to a smaller number of years,
and may do the same in other strategic industries, such as computing. Patent is also
an artifact-oriented protection; it is more applicable to industrial age devices than it
is to the stock-in-trade of an information driven economy.

Copyright

This third type of protection has traditionally been applied to printed material,
such as is found in books and periodicals. It is supposed to confer on its owner the
sole right to make copies of a work, with limited exceptions, such as allowing one
copy for study or archival purposes. For instance, it is illegal to make and sell copies
of a novel, textbook, or scholarly paper without the consent of the copyright holder,
except that copies of insubstantial portions can be made for research purposes.
It is also illegal to copy cassette tapes or record albums, video tapes, sheet
music or song lyrics, to make classroom sets of magazine articles, or to distribute to
students copies of chapters in supplementary textbooks. Yet, all of these things are
done daily in many homes and offices and in virtually every school in North
America. This will only increase as personal copiers become as common as
personal tape recorders. The ease of accomplishing the deed has led to an even
more widespread ignoring of copyright in the case of computer software.
What taping has done to the record and video industry and the dry copier has
done to the music industry is nothing compared to the effect of disk and file
duplication on the software industry. The deed can be done in a matter of seconds
and in complete privacy. It is nearly impossible to discover afterward unless a
"friend" turns the perpetrator in to the offended company. This situation is
complicated by the fact that the courts in many countries (including the United
States and Canada) took several years to decide that computer software was indeed
protected by copyright laws. Yet, despite all these problems, the affected industries
do thrive, even if some portions do not grow as expected. The companies that do
well achieve success by offering to their legitimate buyers a variety of personal
services such as toll free help, continuous updates, and comprehensive manuals
that the owners of illegal copies do not have. These successes illustrate that a
reliance on the legal protection of copyright may not be necessary; it is apparently
possible to grow and thrive even when it is being widely ignored. In recognition of
this, most commercial software manufacturers had dropped copy protection
schemes from their products by the early 1990s.
Unfortunately for record companies and booksellers, the advent of electronic
book and digital music storage has meant the same considerations now apply to
their copyrighted materials as well. It is too soon to say what kind of answer can be
devised to protect the livelihood of writers and artists.
Copyright has the advantage of being easy to obtain. One does not even have
to register the work to get protection, just to ensure that every copy that is
distributed carries the standard notice. In the case of small printed materials, tapes,
and computer diskettes, however, copyright has the disadvantage of being
practically impossible to enforce.

Software License

Some software vendors require their customers to sign an agreement that
they will make only personal back-up copies of the software, not give it to another
person, use it only as directed, and so on. These documents will often disclaim all
warranties or guarantees on the part of the vendor, declare that the purchaser
takes all the risks, and state that the customer has only a license to use the
software, but does not own it. Such contracts are holdovers from the days when
software was designed for and sold to only a handful of customers, or only to a
single one. Today, notices like these may even appear inside the front cover of the
documentation and state that they took effect when the package was opened.
This form of protection has the advantage, in the case of an actual signed
agreement, that the vendor can point to a piece of paper and say, "You agreed!".
They are also good for the lawyers who make handsome fees by designing them.
However, "contracts" that are discovered only after opening a package are
worthless, as are any agreements obtained by duress, false pretences, or
accompanied by a failure to disclose material facts. They are also void if their
primary purpose is actually to disclaim all responsibility on the part of a vendor. The
few such documents that may be legally enforceable from a technical point of view
(provided that they are actually signed by the customer) are of little more value
in practise than is the copyright notice. People have been ignoring them, and
making copies anyway, and will probably continue to do so. Moreover, in many
places laws have now been proposed or passed that limit the validity of these
"shrink wrap" licenses. On the other hand, there are initiatives to enshrine them in
law.

Monopoly

Sometimes, the hold of a particular manufacturer over a market segment or
technology becomes excessive, and the company acts to take unduly large profits,
restrain the trade of competitors, and prevent customers from using rival products.
In the communications and information storage and retrieval industries, there is a
natural tendency toward such market concentrating activities because of the need
for standards to make the technology work at all. Many countries have laws against
monopolistic business practices. For instance, the United States government forced
Standard Oil and Bell Telephone to be broken into smaller entities, and in the late
1990s took on Microsoft over what were alleged to be predatory practices.
However, seeing an actual case through the courts may take many years, and
some, at least, of the issues may be moot by the time it is settled. This is
particularly so in the computing industry, where product lifetimes are measured in
months, and legal proceedings perhaps in decades. Given this time frame, and the
sufficient application of money and influence, a company found guilty of illegal
practices can probably influence the political process sufficiently to escape penalties
and continue the same practices in slightly different ways.
The real harm done by a monopoly is not so much in price gouging and the
reduction of choice, but in the stifling or destruction of competition, and therefore of
innovation. One could argue that the lack of monopolies in the early days of the
small computing industry was precisely the reason for its rapid pace of
technological change and innovation. If all common applications were rolled into the
operating system, and this became standardized on everyone's desktop, the
incentive to compete from outside the monopoly or to innovate from within it would
vanish, and the industry could simply stagnate.

What are the courts doing?

After some initial waffling over whether a computer program was indeed
copyrightable in its electronic expression, the courts in a number of countries have
now given several clear indications that copyright can be applied both to external
storage media (diskettes, CDs, DVDs) and to Read Only Memory (ROM) chips that
contain programs. Such rulings have slowed down the activities of those who
"clone" computers, because an essential part of such a device is usually a large
amount of built-in ROM programming. For their part, manufacturers initially
responded to the challenge by greatly increasing the amount of code built into ROM
so as to make it more difficult for other programmers to build a functional copy or
create work-alike code. However, except in flagrant commercial cases, there have
been few attempts to bring even large-scale violators to justice. The courts in many
countries have yet to follow their U.S. counterparts even as far as the latter have
gone, so there is in effect no protection as yet for hardware manufacturers or
software authors in such places.
Meanwhile, patents granted on genetic modifications are sure to be
challenged, and seem unlikely to be maintained over the long run. The clear
direction of change is away from information restriction and secrets; the courts are
not unaware of this fact, and are beginning to reinterpret the law in the spirit of the
age in which it is applied, rather than in that of which it was written--a trend that
can either be welcomed or feared, depending on one's point of view.
The fundamental problem with treating intellectual creations as general
information, rather than as property, is the great disincentive this provides to the
creators. If they cannot make a living by their creation, and few software authors,
book authors, or musicians can, they become less inclined to create again. They
devote their time to enterprises that can provide an income, and their creative
endeavour is stifled, impoverishing everyone. It is this fact that leads us to search
further for some way of achieving the contradictory goals of protection and
dissemination.

9.4.3 The Insufficiency of Law Alone
Even if there is a change of attitude on the part of lawmakers to provide
clearer protections, there still remain great obstacles in the way of solving the basic
problem. Software piracy is encouraged by the very high prices that foster the
attitude: "They're trying to rip me off, so it's okay if I do it to them." The ranks of the
copiers are swelled even more by the fact that the deed is easily, quickly, and
privately accomplished, generally by amateurs, and that few of these could ever be
apprehended even if enforcement officers were to try.
All of this leads some to ask whether unenforceable laws should remain on the
books. There can be little doubt that if this trend becomes one of the hallmarks of
the new age, electronically expressed "private property" will cease to exist as it is
now known, and this will sharply reduce the incentive to create such materials. Since
it is unlikely that the new society could live with the broad implications of such a
change, new approaches will have to be attempted.

9.4.4 Proposed Remedies
Several means have been proposed to improve ownership rights for
electronically expressed intellectual work. What follows is a brief exposition of a few
of these, with the attendant advantages and disadvantages.

1. Improve Legal Protection

This remedy would be welcomed by lawyers, software authors and publishers,
since the present laws could certainly use some clarification. However, the
enforcement problem would still remain, and it is unlikely that this approach alone
would suffice.

2. Improve Copy Protection

A number of promising new methods have become available to make diskette
duplication more difficult. If a foolproof copy protection could be devised, it could
solve much of the problem. However, this avenue has looked promising before and
has yielded few results. A combination of built-in serial numbers and machine-
customized software does hold some promise, and this might be pursued, but the
larger software houses have now all bowed to criticisms of inconvenience from their
major customers and dropped copy protection altogether. A few have employed a
hardware key or "dongle" that attaches to the parallel or serial port of a computer
and that is checked for by the software. This has the advantage that the software
can be moved from machine to machine, but the disadvantage of inconvenience,
especially if the dongle is lost. All things considered, most observers feel that
barring a dramatic new discovery, this approach probably has little future.
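To illustrate what a combination of built-in serial numbers and machine-customized software might involve, consider the following sketch. It is hypothetical: the secret key and the use of the network hardware address as a machine identifier merely stand in for whatever a real vendor might choose. The vendor derives a license code from the buyer's machine identifier, and the program refuses to run unless the code it was shipped with matches the machine it finds itself on.

    # Hypothetical machine-customized license check (illustrative only).
    import hashlib, hmac, uuid

    VENDOR_SECRET = b"demo-secret"  # a real vendor's key would be kept private

    def machine_id():
        """A stand-in machine identifier (the network hardware address)."""
        return str(uuid.getnode())

    def issue_license(machine):
        """Vendor side: derive the license code for one specific machine."""
        return hmac.new(VENDOR_SECRET, machine.encode(), hashlib.sha256).hexdigest()

    def license_is_valid(code):
        """Program side: accept only a code issued for this very machine."""
        return hmac.compare_digest(code, issue_license(machine_id()))

    if __name__ == "__main__":
        code = issue_license(machine_id())   # what the vendor ships to this buyer
        print("license valid on this machine:", license_is_valid(code))

A dongle check works the same way in principle, with the hardware key answering the challenge instead of the machine itself; in either case a determined attacker can simply remove the check from the program, which is one reason the approach has yielded so little.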
To a limited extent, the same comments have applied to the security of
systems and their data--doors capable of being bolted could also have their security
bypassed by a sufficiently clever and patient thief. However, large systems are
gradually becoming more secure, and those thieves usually need to obtain
passwords from the legitimate users--if they do, it is carelessness that has defeated
the security, not superior technique.

3. Software Transmission

Potential users of a work could be required to sign on to a host computer
(software server) by telephone and pay a fee for each use. This solution reduces the
power of the user's computer somewhat, but does ensure that the program itself is
never copied, for it runs only on, or is loaded at each use from, the larger remote
computer. The local device (a network computer) ends up with comparatively little
or no storage for programs, for each use is rented.
This approach would solve the copying problem, but would have the
disadvantage of being expensive to implement. It would also remove some
computing power from the hands of individuals, but in the long run, it has
something to be said for it. The Metalibrary, if ever fully implemented, would
provide a fast and operationally inexpensive method of recording all accesses to
original ideas or programs and of crediting the author with the fee charged, with a
percentage retained by the utility to finance its own activities. Like the earlier
suggestion that art could be rented from the owning museums for a fee paid as long
as it was displayed, this too could be implemented on a time-related rental basis.
Alternatively, if billing information were present in the software itself and were
updated at each use in a special non-user accessible section of the individual's
Metalibrary terminal, the software could be retained and stored locally by its user
and the information on fees charged transferred to the central computer on the next
access to it. Indeed, the software could even refuse to run without making such a
contact. Of course, there might be people who would try to change the billing
information, but if there were only one thing to protect, security could be more
sophisticated.
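A sketch of this billing arrangement appears below. It is illustrative only: the fee, the file name, and the stand-in for the network call to the central computer are all invented. Each run of the program increments a locally held count of unbilled uses, and the accumulated fees are transferred on the next contact with the central facility.

    # Sketch of locally stored, centrally settled use-billing (illustrative).
    import json, os

    METER_FILE = "meter.json"   # stands in for the non-user-accessible record
    FEE_PER_USE = 0.05          # assumed fee charged for each run

    def record_use():
        """Increment the locally stored count of uses awaiting settlement."""
        meter = {"unbilled_uses": 0}
        if os.path.exists(METER_FILE):
            with open(METER_FILE) as f:
                meter = json.load(f)
        meter["unbilled_uses"] += 1
        with open(METER_FILE, "w") as f:
            json.dump(meter, f)
        return meter["unbilled_uses"]

    def settle(send):
        """On the next contact with the central computer, report and clear
        the fees owed; `send` stands in for the actual network call."""
        with open(METER_FILE) as f:
            meter = json.load(f)
        send({"uses": meter["unbilled_uses"],
              "amount": meter["unbilled_uses"] * FEE_PER_USE})
        meter["unbilled_uses"] = 0
        with open(METER_FILE, "w") as f:
            json.dump(meter, f)

    if __name__ == "__main__":
        print("uses awaiting billing:", record_use())
        settle(lambda report: print("reported to central computer:", report))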
If either style of software rental were adopted, there would also be many who
would bemoan the end of the chaotic, free-and-easy era of computing where the
power was on individuals' desktops. However, a central system may be the only way
to maintain the integrity of either the software or the accounting to pay for it. On
the other hand, if a single company controlled this security system and could tax all
access, it would rapidly assume power comparable to (or greater than) government.
It seems likely that processes and formulas could be licensed in much the
same manner. Such information is not now too difficult to get, but if an automatic
method to credit the discoverer existed, it might be feasible to do away with
monopolies on its use. The most efficient manufacturer would only sometimes be
the originator, but the research and development would not simply be lost to the
originator; it would just be reimbursed through royalties instead of profits. This is a
technological solution to a technological problem; presumably it will create new
problems for later solution, but may have some potential. It is worth remarking that
profits are easier to make, and protection less important, when the software is in a
constant state of rapid and substantial change. The real crunch comes for the
vendors of stable products over the long term; on these the original market can
almost always be ultimately swamped by cheap duplicates. This fact forces an
eventual resolution to these problems, though it does not suggest what it can be.
A related difficulty arises from the fact that there are often many ways to
duplicate the result of a technique without duplicating the technique itself. Should
the result be protectable, even though the process is not, or has been avoided? This
is the very issue behind several "look and feel" suits launched in the late 1980s by
original software vendors against competitors who had created functional work and
look-alikes to their programs without copying the actual code. Some courts initially
ruled that the "look and feel" is protectable, others the contrary; the latter view
seems to have prevailed even though the Apple-Microsoft case was never heard at
the highest levels.

4. Lower Prices

The high prices charged for much of the microcomputer software now being
sold are also a holdover from the days of low volume production runs for very
expensive machines. With some brands already having hundreds of millions of
computers installed, a mass market potential exists for well-designed software.
Once word processors and computing languages sell for the price of a textbook--
say, under $50--much of the incentive for copying will vanish. Of course, many
retailers, particularly computer dealers, will resist this trend because of the lower
dollar margin involved. Under such circumstances, they would not remain on the
scene, because computers would become consumer products and be sold in
department stores, and by other general merchants. Bookstore chains would handle
software, and sell it as they would their other products.
No set of solutions that fails to address the pricing problem has much chance
of success, for the motivation to pirate software is strongly influenced by the
economic barriers to owning it. Low prices and no copy protection at all is a better
solution than high prices with protection, and some vendors have already
discovered this fact. This policy would certainly result in a better utilization of
creative energies in the writing of programs, contrasting sharply with the waste
inherent in the protect/deprotect cycle.
At the same time, the licensing of drug manufacturing rights, among others,
would also help to drive down a range of consumer prices, because this too would
promote volume without reducing the long-term return on investment. As this
discussion indicates, it may be the case that marketing solutions can sometimes be
found for ethical problems.
On the other hand, a near monopoly by a single company of much of an
industry probably indicates that no lower-price solution is achievable in that
industry without government intervention.

5. Royalties

One answer to the cassette/record industry copying problem has been to
impose a surcharge on all blank cassette sales. This is then distributed as a royalty
to all performing artists affected, in proportion to their actual sales. The same
approach could be taken with blank CDs and DVDs and the accumulated royalties
distributed among software artists.
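The arithmetic of such a distribution is straightforward, as the following sketch with invented figures shows: each creator receives a share of the collected surcharge in proportion to actual sales.

    # Proportional distribution of a media surcharge (figures invented).
    surcharge_pool = 1_000_000.00   # total collected from the levy on blank media
    sales = {"creator A": 400_000, "creator B": 100_000, "creator C": 500_000}
    total_sales = sum(sales.values())

    for name, units in sales.items():
        royalty = surcharge_pool * units / total_sales
        print(f"{name}: ${royalty:,.2f}")
    # creator A: $400,000.00   creator B: $100,000.00   creator C: $500,000.00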
This would have the advantage of redressing the economic injury to the
creators of the software, and that is probably the main concern of the industry. On
the other hand, this approach has the drawback that it deliberately chooses to
accommodate the underlying problem, and appeasement is a policy that can
come back to haunt; sooner or later reality must be faced and action taken that
leads to a definite solution. Moreover, such a solution is temporary. A given medium
is unlikely to remain the primary method of software distribution for long. As
indicated above, a royalty on the use of ideas is a better policy, but it can only be
enforced by the use of a very large scale and heavily used computing facility such
as the Metalibrary.

9.4.5 Summary
The information paradigm itself causes new legal questions to arise, as well as
new versions of old ones. If the creators of new ideas are to be encouraged to make
a living at such activity, some new form of protection and reimbursement needs to
be tailored to the information age. It will likely be a highly technological solution
requiring great cooperation and widespread support, but such a solution will help to
guarantee that the flow of ideas continues. Full implementation of the Metalibrary is
one solution that would encourage continued creation and publication, and provide
protection and revenues to the creator, but at low cost for distribution. The
drawback of this solution is the potential for control or abuse by a small number of
individuals or companies that might be able to exercise effective control over it.

9.5 Technocrime
High-technology devices and processes do not just generate new forms of old
legal questions, however. Along with their enormous potential to benefit society,
they have a similar potential for harm, and some have already found extensive
employment in the service of crime. These uses can be divided into three broad
categories, and there are some possible future uses in other criminal activities.

Technology as an Auxiliary to Conventional Crime

Conventional crime can be big business even when it is organized over a
single city, but much more so when this is done on a regional or national basis. Drug
dealers, illegal gamblers, prostitution rings, protection racketeers and smugglers
are like any other business people when it comes to the need for efficient record
keeping. They use computers to keep track of their traditional enterprises and
improve their bottom line.
If the illegal operation is espionage, the computer may be the means of
storing large quantities of stolen information. It may simply be used to store names
and phone numbers of easy marks--including passwords of unprotected computer
databases. Some criminal enterprises buy legitimate businesses and use computers
in entirely conventional ways; others keep records of a less savoury kind. So long as
there are laws, there will be those who find profit in flouting the law; and criminal
organizations have certainly adopted high technology in much the same way as
have others.

Technology as the Target of Crime

At the same time, the value of expensive hardware, software, or technical
devices does not get overlooked by criminals. As objects of value, they become
worth stealing for re-sale--one more category of hot goods for police forces to track
down. They may, in the technology they represent or in the information they
contain, also be targets of espionage. From integrated circuit chips to
microcomputer designs, to finished mainframe computers--all are candidates to be
stolen and shipped to countries that the nation of manufacture regards as enemies,
or to a competing company in the same country. The same is true of space,
medical, broadcast, and military technologies. It is also true of information filed in
data banks of corporate and government files and bank accounts. These latter have
spawned numerous new approaches to old crimes and a few new ones as well.
Technical devices are broken into, vandalized, stolen, or tampered with because
there is a potential for gain or satisfaction in the criminal act. In this they are no
different from low-technology objects of crime such as cash or jewellery, or from
those of older technologies (TVs, stereos, and cameras), for all of which there exist
lively black markets. In short, while it is military technology and bank deposits
whose theft may generate the most publicity, anything that is perceived to have
value is bound to become a target for theft. There is a broad spectrum of possible
responses to such activities, ranging from attempts to improve security on the one
hand, to placing information in the public domain where anyone can have it, with
little concern about its fate and use, on the other.
Clearly, the same response is not always appropriate; while it may be
consistent with the general thrust of the next age to make most data public, access
to banking systems, government files, and military hardware will not likely ever be in
this category, and so there will always be thieves, and there will always be police.

High Technology as the Instrument of Crime

Where the computer differs from other targets of criminal activity, however, is
in its potential to be a powerful instrument for crime in its own right. Thus,
computers are used to attempt illegal entry of data systems belonging to banks,
schools, corporations and governments--whether out of curiosity or for vandalism. In
a day of careful outside audits, it gets harder all the time to embezzle money from a
financial institution. This does not stop those who understand accounting
procedures well enough to try to turn them against their victims. For example,
utility programs intended to correct errors can be used instead to transfer money
from one account to another. If the account robbed is dormant, and the total
deposits continue to balance, such schemes may take some time to uncover. Other
thefts have been made by clever programmers who instruct the system to round
down all partial cents on interest payments and place the fractional excess in their
own accounts. Given enough accounts and sufficient time, very large sums can be
stolen with little risk. Yet another scheme is a variation on an age-old flimflam. The
thief sets up a dummy company and generates payables from the organization
being victimized to the fictitious supplier of goods or services. These appear on the
record to be ordinary invoices for routine activities, but no goods are ever delivered
or received; only cheques go out.
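The rounding scheme mentioned above succeeds on sheer volume. A rough calculation, with all figures invented to show the scale, explains why it can go unnoticed while still paying handsomely:

    # Back-of-envelope arithmetic for the interest-rounding theft (figures invented).
    accounts = 1_000_000        # accounts whose interest is rounded down
    avg_skim = 0.005            # average half a cent diverted per posting, in dollars
    postings_per_year = 12      # monthly interest postings

    diverted = accounts * avg_skim * postings_per_year
    print(f"diverted in one year: ${diverted:,.2f}")   # $60,000.00
    # No single account is short by more than a cent, and the books still
    # balance to the penny, so a routine audit has nothing obvious to find.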
False records inserted into a database via a remote computer can create or
destroy damaging information. Stock market manipulation can be attempted by
carefully synchronizing computer-placed buy and sell orders. Computers may also be
used to fraudulently transfer equities among interlocking companies to disguise
irregularities or to create an appearance of prosperity for the auditors of annual
reports. False information can allow the dead to collect government paycheques,
goods never ordered or delivered to be paid for, and audit trails to vanish. The
larger the organization, and the more insiders it has with special knowledge, the
easier it is to steal from it using a computer. The state is itself the most obvious
target of such activities, and its agencies must now take elaborate steps to prevent
becoming large-scale victims. It is also the largest employer, and so is the most
vulnerable to abuse by insiders. The situation is complicated by five additional
factors that apply to crimes committed using computer technology.
First, such crimes are widely regarded as "victimless" because they are not
directed against a specific person in such a way as to cause bodily harm. There is a
general perception that holding up a bank with a gun and fleeing with $2000 is a
worse crime than embezzling $200 000 from the same institution, despite the fact
that the latter crime also involves a breach of trust. Both are regarded as lesser still
than mugging an elderly lady in the park for the $23.57 in her purse. This puts a
high monetary price tag on the nonviolent nature of the crime, which may be a good
thing in a violent society. However, it illustrates the difficulty in weighing the
relative evils of different kinds of thefts. The large sums involved may also make
restitution impossible, and this, too, narrows the courts' choices.
Second, the argument is sometimes advanced that the institutions embezzled
from have lots of money and can afford to lose some. It is not clear that this
argument is relevant in deciding on the degree of wrongness in the theft, or even on
the punishment that ought to be meted out.
Third, technological crimes are usually committed by white collar workers with
no previous criminal record. The judge and jury tend not to make a connection
between office tower criminal activity and that perpetrated on the streets of the
slums below. These first three factors together generate another complication.
Fourth, despite the fact that crimes involving the use of high technology such
as computers are on average far more costly in dollar terms to the victim than are
"blue collar" crimes of violence, the sentencing tends to be lenient. There is less a
sense that the person convicted is guilty of anything serious, and the punishment
does not provide much disincentive, especially if a three-year prison term can be
followed by the enjoyment of the fruits of the crime. This is further exacerbated by
the general crowding of prisons in North America and the high cost of incarceration,
which make it even more unattractive to imprison those guilty of economic (rather
than violent) crimes.
Fifth, crimes committed with high technology, such as computing devices,
may not even be properly understood by the courts, nor may their consequences be
appreciated, because of the rate at which technology is changing, and the difficulty
that the law has in keeping up with new situations.
These difficulties are likely to increase with further research on genetics,
intelligence, and computing machinery. The possibility may already exist for
genetically engineered life forms to be used to commit crimes or to pursue wars.
Sensitive banking or military installations will have to be protected against
infiltration not only by personal or electronic thieves, but against biological and
chemical ones as well. After all, if the people or the electronics of such an operation
could be temporarily incapacitated by the agent, it would become easy prey for a
more conventional break-and-enter.
Artificial intelligence research could also have its products turned to criminal
activity, for great intelligence, so misdirected, has always been capable of
correspondingly great mischief. Such devices, should they be devised, could well
become the de facto directors of criminal organizations or tyrannical governments
as well as of more beneficial organizations such as hospitals, and the courts
themselves. They may become both the agents and also the targets of attempts at
corruption, simply because of the high stakes involved in such controlling activities.
Even the highly personalized PIEA could become a target for thieves. The
owner who stores plans, outlines, new ideas and methods on the PIEA could see it
stolen by someone with an interest in marketing the ideas. An artist, poet, or writer
who created a new work with another person's PIEA could produce a new kind of
collaboration--it might well be possible for an expert to trace the activity of the
owner of the PIEA and to have charges laid for a new kind of theft of intellectual
property. Perhaps these devices will have to be built to work only with the mind of
the person that first "imprints" them, and not with any other. On the other hand,
perhaps the notion of ownership of ideas will become obsolete, no longer protected
by the law.
Some kinds of technocrime are already becoming a thing of the past. Large
computing systems are usually protected against hackers by call-back modems that
only allow a user to access the system from pre-specified phone numbers.
Moreover, organizations are becoming more security conscious and more often use
appropriate account password protection and data compartmentalizing to prevent
unauthorized access and to reduce the consequences when it does take place. So,
although technology may introduce new openings for criminal activity, the use of
such technology does not mean that the crime cannot be prevented; it may simply
mean that potential victims must take greater care to protect themselves. As
indicated in the next section, it also means that the state and the legal system will
have to adapt to the use of the new technologies as well.
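The call-back rule itself is simple enough to express in a few lines, as in the following sketch (illustrative only; the directory of numbers and the helper functions stand in for real ones). Whatever credentials a caller presents, a session is established only by dialing back the number on file for that user, so a thief with a stolen password is connected nowhere.

    # Sketch of a call-back modem's access rule (illustrative only; `verify`
    # and `dial_back` stand in for the password check and the outgoing call).
    AUTHORIZED_NUMBERS = {"alice": "604-555-0101", "bob": "604-555-0199"}

    def handle_login(user, password, verify, dial_back):
        """Grant access only via a return call to the pre-specified number."""
        if not verify(user, password):
            return False                 # bad credentials: no session
        number = AUTHORIZED_NUMBERS.get(user)
        if number is None:
            return False                 # no number on file: no session
        dial_back(number)                # the session exists only on our call
        return True

    if __name__ == "__main__":
        granted = handle_login("alice", "secret",
                               verify=lambda u, p: p == "secret",
                               dial_back=lambda n: print("dialing back", n))
        print("access granted:", granted)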

9.6 Ethics, the Law, and the State
Some problems associated with law and the state have been aggravated by a
general disrespect for both that has become endemic in the West in the last few
decades. Some observers feel that the situation has been exacerbated by actions of
some judiciaries, which have in recent years undertaken adventuresome rewritings
of laws and constitutions with an almost total disregard for the intentions of the
original framers or legislators. On the one hand, proponents of judicial activism
argue that the courts need to reinterpret law to suit new social realities, even if this
means taking this function away from the legislative branch. On the other,
proponents of judicial conservatism argue that the judges' attitude has become that
the law means what they say it means, nothing more, and nothing less. They point
out that such an outlook not only fails to foster respect for the law and its
enforcement, but destroys the system of checks and balances needed for the
functioning of democracy.
Will the direction of change reverse? There may come to be some realization
that the courts have usurped too much legislative prerogative by changing laws
through reinterpretation, and a more legally conservative era could begin. There
could be a trend back to a traditional view of the separation of the legislative and
judicial functions, to a more cautious view of law itself, and to a greater willingness
to enforce the laws that do exist. On the other hand, no group or institution willingly
yields power once it has obtained it, and one could as easily go to the opposite
extreme and suggest that a new tyranny imposed by a judicial system run amok
could be in the offing. After all, such an outcome would be one way that a system of
arbitrary law could succeed the chaos of a system based on moral relativism.

The Relationship Among Government, Commerce, and Society

In addition to these difficulties, the role of the state in regulating and
providing order to the economic and social system has always been one that has
made government officials vulnerable to a variety of temptations to abuses of
power and conflicts of interest. Some typical falls to such temptations include:
o The owner of a company who runs for office or uses political influence to
direct government business to the family business,
o The union official who is elected to high office and uses the opportunity to
change labour laws and government contracts to favour his union associates,
o The former government or military official who takes employment after
leaving office and then uses insider information from the years of public service to
enrich the employer,
o The banker who allows her institution to be used to launder drug money or
finance terrorism in support of her personal loyalties,
o The social activist appointed to the bench or to a quasi-judicial but
supposedly impartial body who uses the position not to dispense justice but to bring
about social change consistent with her political beliefs,
o The person in a position of government, policing, teaching, or professional
authority over others who uses that power for personal enrichment or the obtaining
of sexual favours,
o The government official who personally directs lottery profits or tax money
toward organizations in her own district to ensure re-election,
o The utilizing of networks of school friends extending over business and
government to work to each other's mutual benefit,
o The awarding of contracts to specific companies or regions for political
reasons instead of economic ones,
o The bribing of government officials for information or contractual favours,
o The maintenance of expensive lobbies to press for measures favourable to
particular businesses, when the voice of the ordinary citizen has little opportunity to
be heard, and
o The frequent transfer of government and industry officials to and from the
lobbying profession that creates an incestuous relationship rife with conflicts.

Much more has been heard about such things in recent years, though it would
be difficult to say that the increase in publicity represents any change in levels of
questionable activities. Indeed, in some parts of the world, bribery is the only way to
get anything done. Rather, the publicity about government/industry relationships
and the conflicts therein reflects the information age paradigms beginning to
operate.
deal, a secret agreement, an undisclosed relationship, or an unknown potential
conflict. Regulating these problems is another matter, however. No number of
regulations and audits can control the desire for illicit gain from public office if
everyone involved believes the behaviour to be normal, or has no absolute
standards. Indeed, the words "illicit" and "corrupt" in such a connection lose their
meaning when there is no standard to say that any particular behaviour is wrong,
that is, when morals are only a relative or arbitrary matter. This is easily illustrated
by travelling to one of the many countries where accepting bribes to perform public
duties is a way of life.

The Problem Behind the Problems

The legal problems associated with technology are not simply those of
mechanics, of law, or of economics. Rather, they involve moral and ethical issues.
Until they are addressed as such, there is unlikely to be any real progress, and
solutions will be only cosmetic. Among other things, there is often an attitude that
no longer sees it as wrong to steal or to trespass. In fact, if one can rationalize that
there shouldn't be private property, then stealing and trespassing do not even exist.
This is a risky paradigm, for a destruction of all personal rights to privacy and to
ownership would mean that the individual person has far less protection than a chip
on a machine.
It is important to understand at this point that Western civilization is based on
concepts of respect for the individual citizen, and the notions of liberty, freedom of
speech, privacy, and the right to hold property in private ownership that this
implies. Moreover, these concepts cannot be divorced from the stream of Judeo-
Christian culture and thinking in which they were developed. This assertion by no
means implies that Western society ever was "Christian", but only that it has a
heritage of respect for Judeo-Christian values, and that those values are to a great
extent reflected by a legal system that assumes that there is an absolute basis in
morality for law.
The growth in statism, and popularity of the concept of law as relative and
arbitrary has eroded that basis for law and the democratic state without replacing it
with a new one, and the result has been diminished regard for both. Simply put, the
social compact that once provided the basis for ethical decisions has been set aside,
and for a growing number of people there is no underlying set of principles on which
they can base ethical decisions. Hence the law, once regarded as solidly grounded
on certain immutable principles, is now treated as subject to arbitrary change or
reinterpretation.
This is in part why there is little hope of enforcing drug or property laws under
such circumstances. If every person is a law unto herself to the extent that she can
get away with behaving as she chooses, there is no reason for her to consider
obeying laws whose violation carries only minor penalties. If the self-abuse of
drugs is not seen to be a matter of wrongdoing, the pleasure obtained from using
them is, pragmatically, a greater good for the individual than abstaining for the
sake of a vague social contract. If a person wants money and property, and can take
it from someone else who has it with little prospect of legal retribution, and no
concept of guilt for doing wrong, the formality of a law is not a sufficient obstacle to
deter theft.
Moreover, that which is relative can be changed at a whim either by the
courts or by the legislators. That which is arbitrary is subject to the will of the
strongest arbiter, a condition that easily leads either to tyranny or to chaos. To an
extent, this is the situation society now faces, with a myriad of special-interest
pressure groups vying for dominance over the political system amidst growing
economic tension. Each group has a set of "rights" it insists upon for its own
members, sometimes at the expense of everyone else, and no interest in discussing
its corresponding responsibilities. Thus, it becomes more difficult all the time to
pass and enforce laws that are obeyed because they are universally perceived to be
just and fair. Orwell's vision may turn out to be slightly blurred--legal chaos may be
as possible an option as legal rigidity.

Is any Solution Possible?

This suggests a rather bleak general picture, but there are good reasons to be
optimistic as well. The recession of the early 1980s and the 1987 stock market
crash changed a lot of attitudes, and a new breed of high-school graduate with a
more work-oriented and education-conscious outlook emerged in the aftermath.
Coincidentally, these are the first wave of information age men and women and
they are obtaining the skills and attitudes necessary for the next great awakening of
creative spirit. There is also some reason to be optimistic that a new consensus in
support of the law may also emerge, but if so, it may at first be rooted for many
people in the somewhat novel grounds of economic pragmatism and self-fulfillment
rather than in any absolutes. They may not have learned much from the stock
market, though, for these were some of the same people who drove its prices to
unreasonable highs in the technology bubble of the nineties, only to suffer the
predictable meltdown in 2001.
A coincident increase in religious interest has also characterized recent years,
and will undoubtedly come to bear on ethical questions, but if this activity turns out
to be largely experiential and nominal, it may not have significant impact on the
mainstream of society. If, on the other hand, it results in a revival of the notion that
faith and ethical behaviour are an integrated whole, religion may yet play a
significant role in establishing a new ethical consensus. Of course, even though
some have always believed that moral and ethical issues were absolutes, it may be
that only a minority will be committed to such an approach in the new society. In
the past, many others have agreed with the ethical conclusions of traditional
religious thought, either for reasons of self-interest, or out of purely pragmatic
considerations. These may join a new ethical consensus for similar reasons, but the
consensus they join may well have a different basis than any in the past if religion
continues to play only a small role. One such basis could be enlightened self-
interest, but the very real possibility exists that no ethical consensus will be found,
and so long as this is true, the nascent society could lack the social glue on which to
base workable laws.

9.7 Solving the Problem
It will never be possible to settle all legal problems; there will always be those
who ignore the most widely accepted standards of behaviour. If the real problem is
with the ethical base on which law rests, and not with law itself, then any solution
must address itself to ethics and to education.
If men and women of the information age are to believe in the rights of others,
including, say, property rights, those beliefs will not come about by accident. If their
hallmark is economic pragmatism and self-fulfilment, it is in their best interest to
develop property rights, for the promise of the information age cannot be fulfilled if
production of new intellectual properties is discouraged, or if the state and the law
either grow burdensomely large or become impotent. Can citizens be so educated,
and will teachers want to do this? This question may now be unanswerable in the
broadest sense, but it is possible to see how partial solutions could be worked out in
the computer industry, and this approach might be useful in other sectors as well.
This solution could involve data processing and programming personnel,
particularly those with managerial and teaching responsibilities, banding together in
a professional guild with a high-profile code of ethics. They would commit
themselves to teach, promote, and enforce that code for themselves, for their
employees, and for their students.
For at least pragmatic and economic reasons, members could give a specific
written commitment to:
1) Respect the copyright of other programmers and neither sell nor give away
copies of others' work.
2) Respect the privacy of data and agree never to use their skills to enter into,
examine, or change the contents of someone else's system.
3) Provide to clients and customers only structured, thoroughly tested and
debugged, properly documented and fairly priced products and stand behind those
products with a guarantee that errors and deficiencies will be fixed.
4) Advertise only finished products and make no exaggerated price,
performance, or delivery claims.
A rigidly enforced discipline could make violators unemployable, depriving
them of the means and opportunity to continue their activities. Cross-representation
with the governing bodies of other professions could raise the profile of these guilds
and increase the likelihood of success. Strict enforcement and heavy promotion in
educational institutions could eventually reduce the number of thieves and vandals.
Similar codes of ethics could be devised by those doing work in Artificial
Intelligence, genetic research, or robotics.
Codes of ethics for professionals are nothing new. Doctors, lawyers,
accountants, engineers, and teachers have had them for some time. There are
already associations of data processing professionals that have promulgated codes
as well, but membership in such associations is voluntary and a large percentage of
those working in the field belong to no such group. If the projections of this book are
correct, there will be a considerable professionalizing of all work in the fourth
civilization, and this implies the creation of new organizations for information workers
to provide performance guarantees and stability. The result would be that most
workers would belong to a professional organization, and that such groups, or
guilds, would have detailed and enforceable codes of ethics, so performance
expectations of their members would be clear.
These guilds would be a temporary measure, of course. Even a new
cooperative professionalism would not last indefinitely, and their inevitable decline
would eventually render such structures ineffective--the fate of all human
institutions. By that time, society might be ready for a new try at making modified
forms of government work. It might even discover that moral absolutes have been
there all along and are a more enduring foundation for the notion of ethical
behaviour and the rule of law.

9.8 New State and Legal Forms for the New Age
As previously observed, the modern state exists in a condition of tension,
balanced between big-brother statist collectivism and little-brother individualist
participatory democracy. While, on the one hand, new and increasingly complex and
expensive techniques seem to require massive state involvement in administration
and regulation, the ordinary citizen has greater knowledge and therefore broader
power than at any time in history. The state is clearly at a crossroads. With its
citizens' consent, the modern state could continue to grow in size and influence
until it encompasses the entire economy and technically regulates every aspect of
its citizens' lives. A side effect would be the resolution, through the use of force, of
the ethical problems discussed in the last section. Such a prospect cannot be ruled
out even in the relatively free West, despite its obvious failure in communist states.
Alternatively, the state could give up administrative power and regulatory
authority, and turn decision making into a participatory democratic process. There
could be great danger for the liberties and rights of minorities even on the latter
path, however, for unless safeguards were built in, computer-tabulated majority
voting could become just another kind of mob rule. Moreover, the state,
though somewhat amorphous, might be equally powerful under participatory

348
democracy as under more explicitly statist regimes. There is also the danger that
liberty could become rugged individualism--taken to the point that the state
fragments into an ungovernable chaos. Modern Lebanon and Yugoslavia, if such
entities can still be said to exist, have become the archetypal examples of
fragmentation. In these situations, power eventually devolves to the strongest
available tyrant, but not before thousands have died. Individual liberty and freedom
depend more upon the general consensus that these things ought to be fostered
and practised than they do on the specific form of democracy within which they are
attempted. If a free people ceases to act free, it has already lost its freedom,
and may not get it back. There is a delicate balance between the need to govern,
and the need for freedom--too much of either one can destroy the other.
Similar observations are true of the law. If the majority will not obey a
particular law, it cannot be maintained except through force, and not very well even
then. At the very least, the general population must be convinced that a law is both
fair and in its best interests, or the law will fail. It is even better if they are
convinced that the law is right in the absolute sense--that it expresses a moral
principle they all agree with. The judicial, policing, and legislative functions have
been separated in some democracies precisely to improve these perceptions of the
law and its enforcement. This separation has blurred in recent years; perhaps it will
have to be sharpened once more to restore the old balance of power. It is also worth
observing that if, on the one hand, technology is creating new difficulties with laws
and their enforcement, on the other it is providing new tools for both.

Technology and Legislation

Legislative decision-making can benefit from the information age in much the
same way as does the decision-making process in any other enterprise. However,
the need to debate proposed new laws in the light of prevailing political dogmas,
and the large number of people involved in these decisions, guarantee that political
decision making is relatively slow. If new technologies do permit wider participatory
debate, they will be neutral insofar as the speed of this process is concerned, for
debates take time, even if conducted electronically. This may be a good thing, for it
would help to ensure that instant information will not lead to instant and ill-
considered decisions, so the quality of legislation may not deteriorate much. On the
other hand, participatory democracy could result in laws that reflect majority
opinion without regard to whether there is an overriding justice or principles of right
and wrong. Although this trend now exists, there is no reason to believe that a legal
system with a relative and arbitrary base can lead to anything other than tyranny--
the rule of the strongest (even if collective) arbiter. If, therefore, technology is used
to expand the participation in law making, new constitutional safeguards for
minority rights may be necessary to prevent unfair laws from being passed simply
because the majority desires them. Not every proposition that becomes popular
ought to be made law, for as Hitler showed, it is possible to make scapegoating
popular to the point of genocide. However unwelcome his ideas are to most at
present, one must remember that they once were popular. What can be done once
can surely be repeated, so active measures have to be taken to prevent tyranny by
the majority.
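
To make the safeguard idea concrete, here is a minimal sketch, in Python, of one
way an electronic lawmaking system might encode such a protection: measures
tagged as touching entrenched minority rights require a supermajority rather than a
bare majority. The tagging rule, the two-thirds figure, and all names here are
illustrative assumptions, not a proposal drawn from any existing system.

def passes(yes_votes, no_votes, touches_protected_rights):
    # A bare majority carries ordinary measures, but a measure tagged
    # as affecting constitutionally protected rights needs two thirds.
    total = yes_votes + no_votes
    if total == 0:
        return False
    threshold = 2 / 3 if touches_protected_rights else 1 / 2
    return yes_votes / total > threshold

print(passes(5_100_000, 4_900_000, False))  # True: 51% carries an ordinary law
print(passes(5_100_000, 4_900_000, True))   # False: 51% cannot curtail rights

The interesting design question, of course, is who tags a measure as touching
protected rights--a human court, presumably, since that judgment is exactly what
cannot be automated.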
In any event, the same considerations that were applied in the last chapter to
suggest a move from hierarchical business management to flexible or
professional/contract models can also be used to suggest a move to participatory
democracy as a model for the state. Here is another version of the same chart.

The trend toward the top right of the chart, identified earlier, seems to favour
participatory democracy in the information age, even though it need not be
particularly flexible. It is worth noting that the radical egalitarian model appears to
lie off the line of stability, even though its placement somewhere along the bottom
depends on whether one takes its theory or its practice as the chief determinant of
whether it emphasizes the individual or the collective.

Technology and Law Enforcement

If technology has broadened the scope of criminal activity, it has also added
new means of enforcing laws. Police of the information age are obtaining database
readouts on automobiles, victims and suspects via computer terminals in their cars.

It is already possible to identify a criminal by genetic mapping from
semen, skin, hair, blood, and fingernail samples, and the use of retinal patterns may
also be feasible. At some point, an automatic blood sampler and analyzer could be
built to instantly identify any human being by genotype--an ultimate identity card.
Whether any such device will be used in the near future is another matter--it would
be very likely under a totalitarian state, moderately likely in many European
countries, and relatively less likely in Canada, the United States, New Zealand and
Australia.
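
As a concrete illustration of what such genotype identification involves, the
following minimal Python sketch matches a sample profile, expressed as allele
pairs at a few short tandem repeat (STR) loci, against a database. The locus names
are real forensic markers, but the profiles and the database are invented for
illustration only.

# Hypothetical STR profiles: allele pairs keyed by locus name.
sample = {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24)}

database = {
    "subject A": {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24)},
    "subject B": {"D3S1358": (16, 18), "vWA": (14, 17), "FGA": (20, 22)},
}

def matches(profile, candidate):
    # Two profiles match only if every typed locus agrees exactly.
    return all(candidate.get(locus) == alleles
               for locus, alleles in profile.items())

hits = [name for name, prof in database.items() if matches(sample, prof)]
print(hits)  # ['subject A']

Real systems type more loci and must weigh partial or degraded samples
statistically, but the principle--an exact lookup against a population-scale
database--is what makes the "ultimate identity card" both powerful and troubling.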
Another device whose reception has been mixed is the electronic locator and
tracer. Fixed to the arm or leg of a criminal on supervision but not in an institution,
it provides an ongoing readout of the subject's activities and a means of checking
on parole violations. It has the potential to replace many prison sentences with
house or community arrests, and to prevent or solve new crimes because the
individual's whereabouts would always be known. Like any device with beneficial
potential, it has an equal capacity for harm--the same device could be fixed to every
citizen, and the state could then monitor and control all daily activities. However,
despite this downside potential, reducing the prison population will soon become so
pressing a necessity that these devices will probably go into large-scale use without
much debate.
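
The supervision logic such a tracer implies is simple enough to sketch. The
following Python fragment, a minimal illustration only (the coordinates, radius, and
names are invented), flags a reported position fix that falls outside a permitted
zone around a residence:

from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, by the haversine formula.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # Earth's mean radius in km

HOME = (49.05, -122.30)  # hypothetical residence
RADIUS_KM = 0.5          # permitted range under house arrest

def check_fix(lat, lon):
    # Flag a possible parole violation if the fix lies outside the zone.
    return "OK" if distance_km(lat, lon, *HOME) <= RADIUS_KM else "VIOLATION"

print(check_fix(49.051, -122.301))  # inside the zone: OK
print(check_fix(49.20, -122.30))    # well outside: VIOLATION

That a dozen lines suffice is exactly the point: nothing but policy prevents the
same check from being run against every citizen's position fixes.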
Applications of technology to the practice of law that have already begun are
the use of videotaped testimony and teleconferenced trials. The former is used in
cases where victims would be traumatized by having to relive the incident and be
cross-examined, perhaps years after the crime. This includes rape and other assault
cases, and situations involving child molestation and abuse, or others where
testimony taken immediately after the incident is preferable to that made much
later. Teleconferenced trials are useful when it is too expensive to transport a judge
to a remote location, or the various parties are widely scattered. These ideas have
yet to be well exploited but have the potential to effect considerable savings in the
judicial system.
Yet another potential application for technology is the much maligned "lie
detector" or polygraph. These devices, if ever made reliable, could be installed at
airports and banks, whose customers could be required on entry to place a palm on
the sensing plate. Those whose emotional state appeared abnormal, as indicated by
sweat and pulse, could be further scanned for weapons before being allowed to
continue. Indeed, devices now coming into use at airports to detect even parts per
billion of plastic explosive molecules on a person's clothes or skin also have
potential to be used in detecting chemicals given off when a person is emotionally
distraught. These sniffing machines would not prevent all incidents, but would offer
an additional level of screening. As in all such proposals, however, there would be a
loss of liberty and privacy to gain security, but this is a common trade-off. It is only
after some time that it becomes possible to determine whether such trade-offs are
satisfactory; by the time this is known, it may be too late to change course.
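
The screening being proposed is, at bottom, a simple threshold test on
physiological readings. A minimal sketch, assuming a sensor plate that reports
pulse and skin conductance, and using invented baseline figures, might look like
this in Python:

# Hypothetical population baselines and spreads for the two readings.
BASELINE = {"pulse": 75.0, "conductance": 5.0}
SPREAD = {"pulse": 12.0, "conductance": 2.0}

def needs_secondary_scan(pulse, conductance, threshold=2.0):
    # Flag any reading more than `threshold` spreads above its baseline.
    z_pulse = (pulse - BASELINE["pulse"]) / SPREAD["pulse"]
    z_cond = (conductance - BASELINE["conductance"]) / SPREAD["conductance"]
    return max(z_pulse, z_cond) > threshold

print(needs_secondary_scan(80, 5.5))    # False: unremarkable readings
print(needs_secondary_scan(120, 11.0))  # True: route to a weapons scan

The sketch also exposes the scheme's weakness: a nervous but innocent traveller
trips the same threshold as a bomber, so the choice of cut-off is a trade-off between
false alarms and missed threats--another form of the liberty-for-security bargain
just described.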
Law enforcement will make incremental new uses of technology as does the
criminal element. Police work will never become perfect, because if it did the result
would be a permanent police state. It seems most likely that insofar as both law and
government are concerned, there will continue to be a big brother/little brother
tension, and that there will continue to be both a state and a legal system that are
in rough accordance with the prevailing moral/ethical consensus of the citizenry. So
long as the absolute principles upon which democracy is based are upheld by its
citizenry, it will survive, though its form may change somewhat.
It is worth noting, however, that in an age when many individuals can
command weapons of mass destruction and the loyalty of suicide attackers to deliver
them, no nation is safe from attack, however democratic it may be.

Internationalization

As with state and economic institutions, law will also exhibit both collectivist
and individualist tendencies. That is, more of its application will tend to be at the
local and community level, where it can be tailored to suit local needs. At the same
time, it may tend to become internationalized as nations and economies spill over
their present frontiers and operate more and more on a continental and global
basis. In this arena, law is likely to become more standardized and universal. This
change would be complex and exceedingly difficult for four reasons. First, it is hard
for present nations to concede a measure of sovereignty to new global authorities in
order to begin writing universal laws. Second, actual agreement on what kinds of
law ought to be internationalized and what their content should be would be very
hard to achieve--especially since some countries are now returning to a religious
base for law while others are moving away from one. Third, the entire effort must be
handled with the greatest caution, for once it is agreed that a law is a manifestation
of universal justice, it will be almost impossible to change it--there will be no
external comparisons or cross-fertilizations possible with global laws. Fourth, there will
always be people who vigorously oppose change, whether it is logical and
necessary, or not. These considerations point to a long process of change; the
economy may well be a global entity long before law is.
However, there are pressing concerns that can only be handled on an
international basis, because they affect more than one nation. These include air and
water pollution; biological, nuclear, and chemical experimentation; and control over
the environmental effects of industrial by-products such as pesticides and
ozone-destroying refrigerants. They also include the export and import of resources like
water, which may be abundant in one country while very scarce in a neighbour.
Many of these problems have to be solved together, for if they are not, even a
reduction in nuclear arms would leave the world a less safe place, not a safer one.

The Control of Space

Some of the problems that will be faced are not confined to the globe, but
relate to explorations off its surface. Thus far, control and operation of these
adventures have rested in the hands of two nations--the United States and Russia--with
Europe, Japan, China, and Canada as bit players. In the long run, what happens
even in immediate Earth orbit, let alone in the solar system as a whole, affects all
nations, and all will have to be part of the process of controlling it. Thus, at the
same time as both law and government are becoming to some extent globalized,
new forms of each will have to be developed to regulate space.
As soon as any number of people become resident off planet, there will arise a
need for commercial, civil, and criminal law in space, as well as regulations
governing behaviour of agencies of the various states. The only way that these
undertakings could fail to become international would be for space to become the
monopoly of a single nation and, while that possibility cannot be entirely ruled out,
it may be unlikely, given the vast profits to be made, and the large volume in which
to make them. Indeed, by such a consideration, it may be more likely that
corporations will dominate space rather than governments.

9.9 Summary and Further Discussion

Summary

The state and the law are critical institutions in any society, and both are
bound up in technology. Not only do new techniques demand new laws and new
statecrafts, but they provide new methods for both.
The problem of respect for law has little likelihood of a solution based entirely
on extensions of the present law or technology. Practical solutions must also
incorporate new respect for ethical behaviour, whatever its basis. One method of
adopting and promoting an ethical code for technical professions has been
suggested here as a potential solution; in conjunction with the other remedies
discussed, it could provide us with the kind of respect for electronically expressed
intellectual property that is necessary for the information age to happen.
New types of crime have already come into existence because of new
technologies, and so have new types of law enforcement, and new possible forms of
government. There are collectivist and individualist trends in these areas as well as
in the economy; there may be losses of some privacies and liberties, and
corresponding gains in individual influence on government, especially over local
decisions.
There are both chaotic and unifying trends in law and statecraft, and the
ethical consensus for both institutions is in a state of flux--necessary for a new
society, but unnerving because the destination is unknown. A new stability requires
a new ethical consensus, and this is possible, even if its basis changes.

Research and Discussion Questions

1. What is the basis for the existence of the state? Answer from both
theoretical and practical points of view.
2. It was asserted several times in this chapter that law codifies an existing
ethical consensus. Give some other possible foundations for law and then argue that
these are either more or less important.
3. Examine and discuss the extent to which it would be correct to suggest that
law in Western democracies is substantially based on Judeo-Christian principles of
justice.
4. Research and report on the extent to which law is influenced by religion in
Muslim and in Buddhist societies, as well as in largely irreligious ones.
5. Write a paper either supporting or attacking the idea that there exist
universal absolutes of justice upon which global laws can be based. Be sure to
consider the practical implications of your theoretical conclusions.
6. Suppose that statecraft does to some extent become globalized. What
degree of power and authority ought to be transferred from the current national
level to a global one? How can the details of universal involvement in decision
making be worked out in a practical fashion?
7. On the other hand, what power and authority that is currently national in
scope should become regional or local in scope, and why?
8. Argue that some power and authority ought to remain at the national level--
say which, and why.
9. Some nations are either too small or too resource poor to be economically
viable in the long run. How can they participate in the global economy most
effectively? Should they give up their national identity and join other nations to
achieve greater economic prosperity, or do they have some other reasonable course
of action?
10. Free speech, even to the point of tolerating antidemocratic views, is a
cornerstone of democracy. To what extent must even this liberty be regulated to
protect minorities from attacks? Explore the question of whether some views are so
dangerous to the public interest that their expression must be limited. Discuss
specifically whether defamatory speech and writing can be permitted if (a) it is false
or (b) it is true.
11. This chapter expresses optimism that the spirit of the information age is
antithetical to that of tyranny. Argue for or against this view--is a global dictatorship
likely or unlikely?
12. What is the difference between propaganda and advertising, and what are
the legitimate limits to the use of each by the state?
13. Some suggestions are made in the chapter about individual involvement
in statecraft. Are there any corresponding roles for individuals to play in the legal
system? Why or why not?
14. The practice of courtroom law is very much an argument over precedent
and content. Could this be mechanized and made automatic so that human lawyers
and judges could be (at least partially) eliminated and the application of the law
become more certain and uniform?
15. Argue for or against the suggestion that global war has already become
impossible.
16. Argue for or against the suggestion that global nuclear disarmament is
inevitable.
17. Argue for or against the continuance of some form of intellectual property
rights. Should access to information be a universal right, or should it be restricted in
some way?
18. Argue for or against the separation of legislative and judicial authority.
19. Look up and report on at least two existing codes of professional ethics in
your own areas of interest. How would the code for your profession need to be
modified in the information age?
20. What role can professional associations or guilds play in the next society,
besides promulgating ethical codes? What social implications do guilds have? How
do they fit in to the individualization and collectivization trends? How lasting an
institution are they likely to be?
21. Authorities in some countries turn a blind eye to software piracy, arguing
that their nation is so poor that it could never move into the information age by
buying the technology and so stealing software is justified. Consider carefully both
sides of this argument and attempt to come to a resolution of this issue.
22. Women's groups often argue for government sponsorship of child care
centres in order to allow them equality of opportunity in the job marketplace. Critics
argue that these programs constitute unfair trade subsidies, discriminate against
traditional families, and undermine traditional moral views. Is this an appropriate
activity for the state? Why or why not?
23. "The rugged individualism of Americans is likely to lead to a Lebanon-style
fragmentation of their country." Argue for or against this view.
24. Research the profession of government lobbyist. Explain what they do and
why. Also, outline the chief ethical difficulties common to this activity and propose
ways to control it. Be sure to include recent examples of real or alleged ethical
conflicts in these situations.
25. Research from the popular press recent incidents of alleged conflict of
interest on the part of government officials. How serious was each, from an ethical
and legal point of view? What steps could be taken to ensure that those particular
problems do not arise again?
26. In 1988, the United States customs office announced a drug policy of
"zero-tolerance," under which vehicles found to contain minute quantities of
controlled substances or paraphernalia for using them would be seized. Proponents
of such harsh actions argue that they are necessary to control drugs. Opponents
counter that they impede civil liberties. What is the ethical response here? Research
specific instances of these seizures from media of the day, and state whether each
was justified.
27. In connection with robbery prevention, the author suggests banks employ
scanning devices to detect people in abnormal emotional states and further scan
them for weapons. Would this differ substantially from the practice of scanning for
weapons at airports? What are other cases in which gains in security are traded off
for losses in personal freedom? Are trade-offs in general a good thing?
28. The computing language Ada was originally developed by the Department
of Defense of the United States. Suppose you are a pacifist. Ought you to refuse a
contract that specifies software is to be written in this language? Why or why not?
29. In section 9.3 (Technology and War) Mara gives us a view of a society that
is warlike, but has banned all but hand-wielded striking weapons such as swords. Is
this situation possible? enforceable? stable?
30. Research a warrior culture such as that of Ireland or Japan and detail the
idea of honour among and between warriors.
31. Argue that a feudal system is still a viable form of social organization--
even in the information age.
32. Argue either that Marxism is a stable and viable form of government or
that it is not.
33. The author (and some characters) suggest that a participatory democracy
is the most viable in an information society. Either argue that this is the case, or
refute it (Good debate topic).
34. In the Fall of 1965, the Undergraduate Debating Society of the University
of Calgary held a debate on the topic "Resolved that this house shall mind its p's
and q's." Prepare both a fifteen minute argument on the affirmative side and one of
equal length on the negative side of this topic. Make sure that what you say is
relevant to this chapter.
35. What are high technology ways to wage war in the information age
without using conventional weapons?
36. Give a detailed explanation of how governments responded to the year
2000 problem, and how those responses differed from those of business and
industry.
37. In view of your answer to question one, what services should the state
provide, and which ones that it currently is involved in ought to be moved to the
private sector?
38. Select a particular function or role of government and write a code of
ethics for it. Defend your points, being sure to say what is the basis for each.
39. Old Testament law limited government to taxes of 10% of income, with
another 10% due the priests. Today, most countries use a sliding scale, where the
higher income earners pay not just more money, but a higher percentage of what
they make. Find out what the income tax structure is in your country and
province/state, and determine the marginal rate for an income equivalent to $50
000. What level of marginal rate is fair and just? What are the ethical issues here?
40. The existence of international trade and investment agreements such as
the North American Free Trade Agreement (NAFTA), and the proposed Multilateral
Agreement on Investment (MAI), transfers a measure of national sovereignty to non-
elected trans-national tribunals, thereby globalizing both decision making and law
enforcement. As these agreements are made by nations, they override the authority
of state, provincial, and local governments, whose taxation, subsidy, and zoning
bylaws could be voided by such bodies on behalf of a multinational corporation,
even though they would still apply to local firms. While increased trade manifestly
increases prosperity, the trade-off is the loss of control over affairs within national
boundaries. Discuss these trade-offs and argue that the sovereignty of the nation-
state is too important to risk such globalization, or argue that internationalism is not
only necessary but good.
41. Research the anti-globalization protest movement. Examine its arguments
and its tactics and comment on (a) their success, and (b) their validity within a
specific moral framework.

Bibliography

Arden, Harvey. "The Fire That Never Dies." National Geographic 172, 3 (September 1987).
Drexler, K. Eric. Engines of Creation. Garden City, NY: Anchor Press, 1986.
Ellul, Jacques. The Technological Society. New York: Knopf, 1973.
Fjermedal, Grant. The Tomorrow Makers. New York: Macmillan, 1986.
Kaku, Michio. Visions--How Science Will Revolutionize the 21st Century. New York: Anchor, 1997.
Lund, Erik; Pihl, Mogens; and Sløk, Johannes. A History of European Ideas. Reading, MA: Addison-Wesley, 1972.
Montgomery, John Warwick. Human Rights and Human Dignity. Grand Rapids, MI: Zondervan, 1986.
Naisbitt, John. Megatrends. New York: Warner Books, 1984.
Ohmae, Kenichi. The End of the Nation State--The Rise of Regional Economies. New York: Simon and Schuster, 1995.
Schaeffer, Francis A. How Should We Then Live--The Rise and Decline of Western Thought and Culture. Old Tappan, NJ: Fleming H. Revell, 1976.
Snow, C. P. The Two Cultures: And A Second Look. London: Cambridge University Press, 1963.
Toffler, Alvin. The Third Wave. New York: Morrow, 1980.

Chapter 10
A New Education for a New
Civilization?
Seminar - "What do they Teach in These Schools, Anyway?
10.1 Foundations--Theories of Learning
10.2 Learning, Education and Training
10.3 The Content of Learning in the Cultural Context
10.4 Issues in Formal Learning
10.5 Education and Technology
10.6 Schooling in the Fourth Civilization
10.7 The Role of the University
10.8 Summary and Further Discussion

10.1 Foundations--Theories of Learning


Learning is one of those difficult-to-define concepts that most people claim to
know the meaning of, but few are able to explain. A variety of theories have been
proposed by educators and psychologists as paradigms for the learning process. For
a detailed discussion, the reader is invited to consult a text on educational
psychology; what follows is only a brief summary, organized by major category, in a
form that will be useful in the remainder of the chapter.

Behavioral Approaches

Behavioral theories hold that all learning involves (or perhaps consists solely
of) a change in the learners' behaviour--either externally and in an easily observable
fashion, or internally, but no less physically real and quantifiable. They depend
heavily on assumptions that there is nothing extra-material about the human spirit,
mind, or consciousness but that all these can be explained entirely in terms of
quantifiable responses to changes in the environment, that is, in physical terms.
Learning is supposed to be achieved when new electrical patterns are established in
the brain, and to be entirely objective and scientific in nature. Complex behaviours
are learned a piece at a time, with connections being made in the brain in order to
build up the whole pattern.
The techniques that are supposed to achieve the desired change of behaviour
vary according to different members of this school. They include:
o Classical conditioning, which involves the introduction of a stimulus
designed to evoke a particular response. These theories grew out of the work of
Pavlov and his experiments with dogs. His subjects were presented with an
unconditioned stimulus (food) along with a neutral one (a bell) and their salivation
was observed. Once this had been done many times, the dogs became conditioned
to salivate at the sound of the bell; they had "learned" its connection with food. In a
like manner, this theory holds, children can be conditioned to make certain
connections with what they already know, and all their learning can be explained by
this process.
o Operant conditioning, which is similar to classical conditioning except that
the animals were required to operate some apparatus such as a lever in order to
have the food dispensed. Here, learned behaviour is self-selected rather than simply
reflexive. That is, the learner actively participates in acquiring the conditioned
behaviour.
o Environmental shaping, which holds that control of the environment is the
principal tool to behaviourally engineer learners into any desired pattern. In
particular, heredity is regarded as relatively unimportant either to the ability to
learn or to the final outcome of the process.
o Contingent reinforcement, which reverses the order of classical conditioning
and supposes that reinforcing stimuli ought best to follow a learned behaviour.
Thus, the likelihood of some behaviour is increased or decreased depending on
whether it is followed by a reward or a punishment (see the sketch following this
list). One would view the dog's salivation at the sound of a bell as the learned
behaviour, and the provision of food as the reward. Here, all teaching methods,
whether using reward or punishment, are evaluated for their ability to produce the
desired behaviour, and not for any intrinsic value they might have. Practical
techniques based on this theory, which was developed by B.F. Skinner, include
scientifically scheduled and programmed learning and the use of a variety of
teaching machines.
o Social learning theories, which hold that new potential behaviours are
acquired by the observation and imitation or modelling of others, and that these are
stored for use on appropriate occasions, the suitability of which is also learned by
imitation.
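
Contingent reinforcement is mechanical enough that it can be caricatured in a few
lines of code. The following Python sketch is a toy model, not a claim about real
learners--the behaviours, rewards, and numbers are invented. It rewards one
behaviour and mildly punishes the others; the rewarded behaviour comes to
dominate, just as the theory predicts:

import random

class Learner:
    def __init__(self, behaviours):
        # Begin with equal propensities toward each behaviour.
        self.propensity = {b: 1.0 for b in behaviours}

    def act(self):
        # Emit a behaviour with probability proportional to its propensity.
        behaviours = list(self.propensity)
        weights = [self.propensity[b] for b in behaviours]
        return random.choices(behaviours, weights=weights)[0]

    def reinforce(self, behaviour, consequence):
        # The consequence follows the behaviour: reward strengthens it,
        # punishment (a negative value) weakens it, never below a floor.
        self.propensity[behaviour] = max(
            0.1, self.propensity[behaviour] + consequence)

learner = Learner(["press lever", "groom", "wander"])
for trial in range(200):
    b = learner.act()
    learner.reinforce(b, 1.0 if b == "press lever" else -0.05)

print(max(learner.propensity, key=learner.propensity.get))  # 'press lever'

What the sketch cannot model is precisely what the critics of behaviourism point
to: it says nothing about what, if anything, the learner understands.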
These behavioural theories have the advantage of being focused on end
results--the actual physical or brain activity of the learner that the process
produces. They have two disadvantages: First, they do not attempt to explain what
thinking is or offer a context in which to evaluate ideas--that is, they are useful only
for outcomes. Second, they do not provide an ethical framework within which to
judge techniques of teaching and learning, except for the end result of changed
behaviour. Together, these mean that the means are divorced from the ends,
except as cause-and-effect, and that could be hazardous indeed for the participants
in learning processes.

Cognitive-Discovery Approaches

Cognitive-discovery approaches concentrate on thinking patterns in the
learner, without the behaviourist stress on objective physical changes achieved in
the learners' overt activities or brain patterns. The whole of learning is held to be
greater than the sum of its parts, that is, the mind assigns meaning to patterns that
transcends the original data. In this view, learning takes place not just by adding up
the facts and physical associations of the stimuli, but by grasping the relationships
between them. The learner does not merely respond, but perceives. That is,
something new is mentally synthesized from the raw data that was not present in
any physical sense, but that is a result of classification, organization, and insight
taking place as mental activities within the learner. Some variations within the
theme include:
o Gestalt theory, which emphasizes the high level pattern of the whole as
opposed to the low level details of the structure. Perception is dynamic and at any
one moment concentrates on one pattern or "figure" against a background of detail,
much of it not actively being perceived. Another way of putting this concept is to
say that perception involves the making of abstractions, while learning is the
acquisition of or assent to new abstractions.
o Piaget's theory of the active learner, which holds that the person learning is
an active processor of the stimuli being presented, and has a built-in desire to
organize and make sense of the data. Thus, understanding is not the making of a
mental copy of what is seen and heard, but is the product of each individual's
unique ways of knowing or transforming data. Because the drive to learn is
inherent, success in doing so provides its own reward, and the learner need only be
encouraged in the active process, not given external rewards or punishments. In
particular, the painstaking memorization of material organized by others is
discouraged, because it bypasses the learner's own ability to create patterns, and
therefore carries no intrinsic reward. Piaget based his theory on two premises. First,
he postulated an underlying organizational ability that enables a human to develop
intellectually. Part of this is biological; that is, it is inherited genetically and is
therefore variable. Another part is generic to the human race as a whole and is
therefore constant. Second, he observed that the human system was capable of
adapting to the environment. This is done by assimilating new data into existing
behavioural patterns, and by accommodating (modifying) those patterns in view of
the new data.
o Cognitive-discovery theories that hold teaching ought to be concerned with
assisting students in the process of sorting data and organizing their own
conclusions. The idea is that if they grasp the total idea of the subject, they learn
principles that can apply to other studies at a later time. The learner is more likely
to remember and, having done so, to have a good foundation for extending the
study to more complex levels. Rather than making of learning the sum of many
details, the discovery approach asks the learner to generalize from a few
experiments and observations, and then apply the generalization to similar
situations in detail. It is assumed that the student not only can reason, but also
wants to, and that curriculum must be arranged so as to provide ample opportunity
to explore within a broad structure. In a sense, this method could be contrasted with
the technique of the Greek philosopher Socrates, who believed that knowledge was
to be drawn out of the student by a series of well-designed questions posed by the
teacher, the end of which was to convince the student of the truth by logical argument. By
contrast, the discovery approach assumes that the student poses the questions, and
can generate meaningful self-rewarding answers from the data. The teacher need
only provide the raw materials for gathering the data, not the actual answers.
Cognitive-discovery approaches have the advantage that they consider two
additional aspects beyond just outcome: the intellect and experience of the learner
on the one hand, and the act of learning on the other. They do, however, make
assumptions about the process of learning that may not be universally applicable,
and concentrate somewhat more on that process than on the results, in contrast to
the behavioural approach, which does the opposite.

Humanistic Approaches

Humanistic approaches focus on the human potential of the learner. They
agree to some extent with the cognitive ideas of the last section, but their emphasis
is on development of the social and emotional aspects of their students' lives more
than just on knowledge acquisition and new paradigms for organizing it. There are
many variations on this theme, but certain characteristics that all modern humanist
approaches share to a greater or lesser degree are:
o Progressivism, which is a term adopted to indicate a reaction against
traditional values and techniques. This reaction involved far more than education
and also included agrarian reform and changes to the status of workers in large
cities. Although the "progressivist era" is thought of as having ended by the Great
Depression, elements of its thinking continued to be important in education long
afterwards. For these purposes, progressivism includes the principles that
i) There are no absolutes, moral values included; all truth is relative.
John Dewey was the principal proponent of the application of relativism to
education, teaching that each individual generates personal truth--including values
and reality--by interacting with the environment and engaging in a transaction with
the consequences of that activity. There ought therefore to be no authority,
competition, or punishment involved in learning. Although many of Dewey's ideas
were no longer being explicitly used after the 1950s, the notion that values are
relative has survived in more modern theories. Thus, if there is any discussion of
values, it is to have as its end the "clarification" of students' values, and their
comparison with those of others, but certainly not the inculcation of any from a
predetermined or authoritative set. This rejection of truth reached its zenith in the
deconstructionism of the 1990s, and has the same problem in the learning arena as
elsewhere--there is no difference between a solely personal and relative truth and
no truth at all.
ii) There ought to be no repression of the ego, or of painful feelings or
thoughts, but complete freedom of self-expression. This is supposed to produce a
more open and creative learning environment. That it might simultaneously make it
impossible for anyone else within range to teach or to learn is less important than
unfettered self-expression.
iii) Learning should be child-centred in the sense that there ought to be as
little adult influence as possible, and the child's perception of needs and interests
ought to dictate both the curriculum and the methodology. All activities should be
democratic, with the teacher having only one vote, along with each of the students.
The drawback of this version of child-centredness is that uneducated students do
not yet have the ability or the techniques to ascertain what activities and ideas are
the most important.
iv) Children are naturally good, curious, energetic, and eager to learn, and one
of the tasks of the teacher is to facilitate the removal of traditional societal
inhibitors of these traits. Unfortunately, these assertions appear to contradict actual
classroom experience, and have, therefore, to be regarded as suspect.

o Existentialism, which in this context holds that learning is to be viewed as
part of the learner's unique and personal struggle to find meaning in existence.
Emphasis on this aspect of learning implies a corresponding de-emphasis on the
potential educational demands of the society as a whole. Full realization and full
actualization of the self become the most important factors in learning, and the
needs of others are taken into consideration only secondarily. Curriculum is also
secondary, because the way individual students feel about the subject at hand is
more important than the factual information itself or the students' understanding of
it. Sometimes, attempts are made to inculcate values or attitudes based on feelings.
Thus, there have been curricula designed to teach young children about sexual
abuse based on the child's feelings relative to the abusive activity. Attempts have
also been made to teach about the environment solely on the basis of students'
feelings about species extinction. The problem with such approaches is that
feelings, being individual, cannot be relied on to achieve a specific curriculum
outcome. Thus, far from validating the theory, this approach seems more likely to
contradict it.
o Freedom from fear, including fear of criticism, competition, punishment, and
failure. This is one of the most controversial aspects of this group of theories, for
others have held that one learns a great deal from all four, so they need not be
feared. Other freedoms have also been proposed, including those from dependence,
or from all forms of authority.
o The view that learning is principally a matter of experience, and is therefore
intensely personal and, to at least some extent, non-transferable.
According to its proponents, the humanist approach to learning has the
advantage that it extends the process to the whole person, and that it recognizes
individual strengths and differences. Its opponents suggest that it has the
disadvantage of negating all absolutes, and therefore of undermining any basis for
its host society. This may be the reason why so few of the progressivist experiments
last for very many years--a school by its very nature is both an organization and a
society, but the progressivist models are hostile to both. As with all relativists, the
progressivists are vulnerable to criticism on theoretical grounds, for their espousal
of the non-uniqueness of truth also undermines the foundations of their own
theories. No relativist theory can ever assert its own superiority over an absolutist
one with confidence, for if it does, the very relativism it proposes becomes an
absolute. The humanists also have practical critics who claim that children both
need and want authority in order to know their limits in society.

Toward a Unified View of Learning

Each of the three schools of thought examined so far has something to
contribute to the total understanding of what learning is. Each provides a useful
paradigm for some aspect of the process--the behavioural, the mental (or
cognitive), and the emotional and social, or some combination of two or more of
these. Each is also open to criticism for concentrating on one or two aspects to the
exclusion of others, and for the extremes to which such focusing may lead. There is,
for example, no shortage of "educators" who are ready to compel all children to
learn in a particular way, and to require all teachers to use a particular technique to
achieve this. After all, if there does exist a universally applicable theory of learning,
then it follows that there may be an optimal technique as well.
It is also important to note that these three schools of thought have an
implied definition of the totality of the learner that omits important considerations
that go to the heart of what it means to be human. Beliefs, values, convictions,
motivations, and meaning questions also have to be considered, for it is these that
provide the reasons for behaviour, the structure for cognitive filters, and the basis
for engaging in experiences, having emotional reactions, and being social. That is,
they do not deal with the issues behind ethical and social questions, or with the
meaning of what it is to be human. But, there is an aspect to humanness that is
more fundamental than those dwelled upon by these schools of educational
psychology, and it is necessary to consider how this aspect relates to the whole
person as a learner. In keeping with the integrative and wholistic themes of this
book, what follows is a suggested approach to understanding learning that attempts
to combine this fourth element with those from all three of these schools of thought
in a way that deliberately avoided answering the question of whether there exists
an optimal technique to achieve the desired goals, but that does suggest how the
process ought to work.
Learning can be elaborated in terms of certain physiological changes that take
place in the brain as new information is stored there, or it can be cast in terms of
the stored information itself. It has something to do with growth and development,
and with the process of finding out more about the world. It depends upon one's
beliefs, philosophy, commitments, and religion, and it also changes the learner. It
also depends on past experiences and relationships, and enables different ones for
the future (i.e., learning takes place in a continuum). Learning is not just any kind of
change in human capacity, for the forgetting years of advanced senility are not
what one wishes to regard as learning ones. However, learning does change as one
matures, so a definition must include this aspect as well. Here is an attempt to
include all this in a single statement:

Learning is a process of abstraction taking place within the context of existing culture, behaviour,
knowledge, and beliefs whereby the person who is the learner acquires or acquiesces to new
paradigms in order to explain experiences and by so doing changes in the ability to respond to new
circumstances.

Many interactions are involved in learning, even though some theorists
emphasize one more than another. The following diagram may be helpful to
illustrate these relationships and their mutual interaction.

Diagram 10.1--A Model for Learning
Note that although all interactions are mutual, there is a certain order of
priority implied by the positioning of the elements in this diagram. Who a person is
comes at the centre of all, for from beliefs and commitments flow everything else.
Indeed, all the others can be thought of as aspects of being. Experiences, including
the emotional, are next, at the top; then the intellect at the lower left, and finally
the relational at lower right. Also, the categories are not entirely distinct. For
example, inventing is not a purely relational activity, but requires the intellect as
well. The diagram also indicates a mutual interaction among the elements pictured,
for none of them stand alone. This description does have a comprehensiveness of
its own that, with some amendment, could be applied to machines that "learn";
however, this chapter will only be concerned with learning undertaken by human beings.
It can even be argued that learning is a uniquely human activity. Actual
response to stimuli--pushing a lever for food, or learning a maze--may be
appropriate terminology for rats, but there is little evidence that it can be adopted
uncritically for humans. Mental changes alone are of theoretical interest, but lack
practicality. Likewise, there is yet no convincing evidence that terminology used to
describe human cognition--words such as "perceive", "understand", "assent to",
"comprehend" or "intend"--have any application either to animals or to artificially
constructed devices.
Since learning involves the altered ability to change within and to respond to
alterations without, and since change is partly the result of mutual interaction
among motivations (ethics), society and technology, it is clear that a great deal of
learning is required of all members of a civilization in transition to a new mode, just
for them to remain functional. Another mutual dependence exists here: new
learning creates new behaviours and techniques, and these in turn require new
learning--oftentimes even on the part of those who developed the techniques but
may not understand what they have wrought until later. In addition, new techniques
also require new applications of old ethical principles, and the learning of a new
consensus of behaviour with respect to the new methods. Many examples of this
can be drawn, say, from new medical applications, which often raise related ethical
questions. Thus, the lines shown on the graph above are all two-way paths of
mutual influence.
The definition also includes potentially negative changes, for people can learn
Behaviours that inhibit their abilities to respond, but such "negative learning" tends
to make the person less efficient and reduces the ability to pass on the behaviour in
most cases. Thus, not all learning is useful either to the individual or to society, and
that which is will tend to carry with it a reinforcement that adds motivation for the
potential learner.
Learning is also something every human being experiences from the time of
the womb on. All have to learn how to make sense of sound, to talk, to read, to
write, and to operate within the myriad of common conventions that make up the
bond that is society. This commonality must include:
o a knowledge of its history, or else the bond is incomplete,
o understanding of its ethical norms, or else it is impossible to exhibit
behaviour appropriate for the society,
o skills in its techniques, or one cannot participate productively in it.
That is, learning that results in a potentially productive and contributing adult
does not just involve the acquiring of factual knowledge and social skills; it also
involves the acquisition of a set of restraints and imperatives that characterize the
culture and allow it to operate, and a set of experiences that provide empirical data
on which to base one's own actions.
A child, growing in mind and body and acquiring more of the total cultural
consensus, gradually becomes a part of it, and is enabled to enrich it in turn--this is
an outward-directed relational and transformational aspect of learning. In order for
all this to happen, the already functioning members of the society must organize
the child's learning in order to ensure that the total cultural context is passed on
efficiently and effectively; for with each generation the society that context
represents is but one step away from extinction. At some stage of this process, each
child must learn how to learn in order to take over this responsibility, for that too is
part of the expected adult role in every society. That is, this wholistic
description implies that the culture and not the child sets the learning agenda until
the learner has the fundamentals in place and is ready to take responsibility for
carrying on.
Thus, for the sake of their own survival, hunter-gatherers had to teach their
skills to their children. Likewise members of agrarian societies that followed them
had, and now have, to teach plant and animal husbandry. Indeed, in many
situations (including military ones), the choice is between learning or dying--such
circumstances have little room to allow for those who cannot or will not learn the
necessary skills; the non-learner is a non-survivor. This is just as true today. On the
one hand, one who does not learn effectively cannot participate in society appropriately.
On the other, ideas, beliefs, and cultures are all engaged in a variety of conflicts;
those that have no means of effective transmission to the next generation all
perish. And it is not the best ideas that survive, but those with the most adherents
to transmit them.

A civilization is always one generation from extinction.

This is not intended to suggest that learning is a part of evolving toward some
high goal, as though it were an aspect of "Progress" with its own sense of self-
direction. Rather, it is simply to observe that every society has associated with it a
set of attitudes, skills, techniques, and ideas that it must transmit to the next
generation if it is to survive in a recognizable form.
In at least one sense, learning continues throughout life, for every day that
passes brings experiences that are at least in some respects different from those of
all the yesterdays. Yet, in many societies, particularly the very primitive or stable
ones, substantive learning effectively stops at a very early age, except as later
required in order to survive in emergent conditions. An individual can find a niche in
such a society, and stay there from early adulthood until old age claims back the
abilities guarded and used through a lifetime. Even in the industrial age, a worker
could learn a single trade, such as automotive welding, and do nothing else until
retirement. However, in a rapidly changing society, such a luxury is available to
relatively few, for job descriptions and even whole industries change much faster
than the passing of the generations alone can accommodate, and social survival
becomes an immediate and very personal incentive for learning.
Having made a case for a wholistic and comprehensive approach to learning,
it is now time to make new distinctions, this time based on the subject and goal of
the learning, rather than on the process by which it takes place.

10.2 Learning, Education and Training


It is worthwhile to distinguish incidental learning from organized learning.
Learning that takes place with focus, purpose, and direction on the part of the
learner, and that specifically engages cognition toward understanding ideas is more
properly called "education". While it is correct in one sense to apply this word to the
acquisition of life skills as described at the end of the last section, it may be useful
to restrict it to that subset of learning undertaken with some sense of mindfulness,
deliberation, and purposefulness (intentionality) by the learner. Thus, even when
learning is organized as "schooling", it may not be entirely proper to consider the
process as education, for many of the "schooled" are having something done to
them, rather than actively and willingly participating in changing themselves toward
some goal.
For its part, schooling could be regarded as the attempt of a society to cause
a degree of learning to take place that will be useful to that society. It has ambitions
beyond mere functionality for the learner, but it may only partially enable
education. While it might therefore be in the interest of a society to insist upon a
certain minimum level of schooling for its members, it is not possible to ensure an
education for any of them unless their minds are engaged at some point to become
willing participants in the enterprise. Mere years spent in the schooling process will
not of themselves guarantee that the schooled person will be more productive,
more educable, or even more useful to society. Schools can be used as little more
than safe holding places for those awaiting a certain legal age.
It could also be useful to distinguish between human intelligence and that
potentially ascribed to artifacts (A.I.) on the basis of whether education is possible--
that is, whether understanding is achieved, or whether all that can be accomplished
is a technique of factual regurgitation. Even in the latter case, the recitation of facts
by a human being, though not high level synthesis, requires some integration of
memory and verbal skills. It is not at all clear that there is a machine equivalent
even to this; it too may be a uniquely human activity.
Yet another learning word is "training", which is distinct from education in that
it requires a lesser degree of cognitive activity and is not primarily focused on
mental activity. While education must deeply involve the mind, training need
engage it only slightly, for training is the perfection of skills, and these are at their
best when mastered to the level of instinctive and unthinking reaction to stimuli.
The main purpose of training therefore is to change an individual to conform to and
be able to use existing techniques. Education, on the other hand, ideally leads the
willing mind on to understanding, and enhances the ability for self-change--perhaps
in developing new techniques, or in demonstrating to society that faith in some of
the old ones has been misplaced. It may even result in substantial changes to
society.
Like education, training may also be attempted through formal schooling, but
this will never be entirely satisfactory or complete on its own, for the necessary
instinctive level of technique comes only from long practice of the method in its
actual application, not from classroom lessons. Thus, one need not expect there to
be much correlation between years of schooling and subsequent on-the-job
trainability, even when this connection is the stated purpose of school-based
training. Berg (Education and Jobs) points this out in referencing military and other
government statistics on recruits, which clearly indicate that years of schooling,
even in extreme cases, are not necessarily well-related to trainability. One could
therefore argue that job-related skills ought to be taught in the grade schools, but
there would not be ready agreement on the specific skills to be included.
Training cannot be completely separated from education, however, for there
are ideas behind all skills and techniques. Moreover, those with training in some
technique are among the best qualified to think about and improve upon those
techniques. They may also become capable of forming abstractions based on what
they do, and thereby making new intellectual contributions. However, it is not those
who are solely technicians who make new scientific discoveries--without the
creative and questioning aspect of the educated intellect, a technician can only
continue to do things the same way indefinitely. It is in the integration of education
and technique that the power of the scientific method lies, and that is why it is
essential that scientists be experienced evaluators of ideas, not just trained in the
application of technique.
Since new methods that remain confined to their inventors are of little use to
society, training is important for the dissemination and use of techniques of all
kinds, and education plays the same role for ideas. It is possible to be a well-trained
scientist/technician, but so unmindful of ideas as to be properly regarded as
uneducated. It is likewise possible to be somewhat unschooled, but of considerable
education--though this latter is perhaps rather less likely than the former.
It should be apparent that a society needs both forms of organized learning; it
cannot hope to survive with only one. One of these transmits actions and methods;
the other culture, beliefs, and ideas. It should also be evident that a whole person
ought to have both training in technique for the sake of a job, and also education in
ideas for the sake of understanding and wholeness as a human being. In the
hierarchy of Chapter 1, this makes wholistic learning an ethical priority. Of course,
one could also make it a priority on pragmatic grounds.
Because the existing members of society have a vested interest in
perpetuation of that society, both education and training have been organized and
institutionalized from very early times. Formal institutions of learning such as
schools, however, are like any other organizations. Once they become sufficiently
entrenched in the fabric of society, they take on reasons of their own for existing. In
addition, schools are not proactive institutions; rather, they react to what the
community deems important, and may lag behind those desires by a considerable
time. They also develop techniques for managing the enterprise of learning--ones to
deal with salaries, budgets, buildings, public relations, discipline in school, and a
variety of teaching strategies. They often have a specific agenda for reinforcing or
changing the broader society that has given them nurture. This additional agenda
comes variously from the state, the local community, the parents, or from their
teachers. It may have political, social, cultural, religious, ethical, or economic
motivations, or may simply arise from the growth momentum of the appropriate
bureaucracy. It may be expressly stated in printed goals for the jurisdiction, or it
may be kept hidden from public view to avoid controversy. In all, schools invariably
end up with far more concerns than the specific learning that is their ostensible
task.
For example, the state may use schools to reinforce its power, by dictating
both the content of the curriculum and the form of teaching. Teachers may desire to
use the schools to achieve economic or political goals of their own, and these may
have no intersection with those of the state. Parents are likely to want the school to
reinforce values they have taught their children, and their emotions can run
extremely high if they perceive that these values have instead been tampered with
or denied by the school. The community, as represented by its school board, may
have a vested interest in certain shared ideas--which in some places could include a
racist attitude toward some group, usually one highly visible for its color, national
origin, or economic status.
All parties consider that the students' values and loyalties are up for grabs to
the most persuasive, and that they can be secured through the school. Whether this
is true or not, the belief that it is, together with differing agendas of the parties
involved, guarantee that there will always be conflict over control and use of
schools. Furthermore, whatever agenda is adopted, the result is likely to be a
concentration on the most efficient techniques for achieving that agenda, rather
than on what the broad curriculum should contain, or who should teach it.
However, amid a preoccupation with the techniques of learning institutions--especially
when these are in turn used mainly to teach techniques--it is easy to lose sight of
education. Techniques are routine, safe, familiar, and easy to manipulate for
specific purposes, and so are the institutions that focus on them. Ideas, on the other
hand, change people at a far more fundamental level than does training in
technique. By their very nature, ideas can be dangerous, strange, threatening, and
difficult to engage another's mind with. When new ideas are adopted, the emotions
are also involved, and behavior changes; so do motivations for engaging in learning
new ideas and techniques. Ideas are also the whole stock-in-trade of education, and
know no institutional boundaries. This can make them very threatening indeed for
those with institutional and political agendas.
There is also a danger that preoccupation with technique in school-based
learning may sometimes cause practitioners of those techniques to forget their
clients are people--citizens in transit, supposedly becoming more productive and
better thinking and behaving adults. The follower of modern political debates over
educational philosophy, curriculum, teaching methods, or funding, cannot help but
be struck by the paucity of references to the actual students who are engaged in
the process. In such discussions, learners can easily become an amorphous
manipulable mass product, lacking personality, humanity, and individuality. The
irony is that this dehumanization can take place within the confines of the very
institutions entrusted with the task of making individual students functional as
humans. However, economic and political considerations have a way of forcing
some dehumanization by a kind of assembly line approach to schooling. It should
also be noted that students are often very much aware, both of agenda conflicts
and of the degree to which they are treated as products rather than as people, and
they often come to resent being less than they know they could be. This is true
even when they appear to cooperate willingly with an educational experiment, for
they are quite capable of simultaneously and contemptuously criticizing the same
arrangement whose benefits they enjoy.
There is also an interesting tension and competition between education and
training. Since training relates principally to technique--that is, to the ability to do
things a society considers important--it is one of the keys to obtaining a job that can
feed a family. A person may have much education, but insufficient training to earn a
living, for there is no compelling efficiency in hiring a learned but unable person. In
the industrial and prior ages, ideas alone would put bread on the table of only a
very few people, and it is not their activity that most citizens see and judge society
by.
However, it is not only the technological achievements, but also the ideas of a
people that generate the judgments of the future upon a society, and a civilization
would become stagnant and start to die without either. Neither can the two ever be
completely separated as might be inferred from the discussion above, for there are
ideas behind all techniques, and there are at least consequences, if not applications,
of all ideas.
An important challenge to formal schooling systems seems therefore to be to
provide people with a suitable mix of training and education to allow them to be
doers, experimenters with, and also thinkers about what it is that they are doing.
Another is to work within a student-oriented ethic, recognizing the importance of
developing human potential fully, for none of the other parties has so great an
interest and so much to gain or lose as the learners themselves. This is not intended
in the progressivist meaning, wherein the student sets the agenda, but rather in the
sense of recognizing the uniqueness of the individual within the total context of
society--having the good of students as first priority.
Of course, schools are not the only agents of learning. Families, churches,
peer groups, and the media also play an important part. The agendas and
institutional priorities of each of these are part of the process as well--whether of
education or training. Since there is a complex interaction of all these forces in the
whole culture, which is the learning milieu, an interesting set of tensions is created.
On the one hand, the voice of any one of the teaching agents is weakened
when they do not all speak consistently. The parent who speaks disparagingly of the
school to a child may well render ineffective much of what it is attempting to do.
Likewise, the teacher who sets out deliberately to undermine the parents'
authority--or that of the whole society--is likely to succeed at least to some extent.
The communications media with a social agenda may have an even easier time
persuading people, especially the young, to adopt novel values. This is particularly
true in a society in which the family, once the most important factor in value
transmission, has been greatly weakened, for there is a silence into which many
voices seek to speak authoritatively.
When there are no uniform voices from which to learn, the result may well be
confusion. The student who has not consistently learned the history and morals of
society cannot be expected to make a commitment to them, and may never
become a part of it, for being becomes confused and so does knowing; the result is
likely to be chaotic. This observation may bode ill for a society that seeks to be
ethically pluralistic, and therefore does not give any one set of values priority over
all others, for even its pluralism would then have no transmittable legitimacy, and
the freedoms that make pluralism possible would have none either. That is, there is
always a tension between the need to transmit values, and the need to allow for
diversity and challenge to those values.
For example, the constitutions of both Canada and the United States enshrine
fundamental freedoms as legal rights on the assumption that they are "self-
evident." Yet, the very rights to free speech, a free press and a free and secret vote
must be extended to the enemies of all three rights, who must have the freedom to
attack these values, attempt to persuade people to discard them, and to vote
against their continuance. Unless these values are believed in by most people, and
most of the society's institutions work in concert to transmit them to the next
generation, their enshrinement in the laws of this time may mean little in a few
years. The Greeks' cyclical theory of history held that such a deterioration of
democracy into dictatorship was inevitable and that to hold freedom as an absolute
is to walk near the edge of a precipice. The paradoxical challenge for any free
society is to systematically preserve that freedom without destroying the freedom
to speak and act inconsistently with itself. As noted earlier, "tolerance" that regards
only itself as the highest value becomes narcissistic and tolerates no voice that
claims to know absolutes. It then becomes intolerance.
Education with a reasonably consistent voice is essential to the continuance of
any society for it cannot survive without transmitting its particular ideals. In the
social compact, teaching and learning are aspects of a society's imperatives and of
the mutual dependence of its members, for there is an implied obligation on the
part of society to assist individuals to become functional members, and there is a
return obligation on the part of the individual to obtain the requisite learning
needed to make the mutuality called society work, and to repay society for that
learning by keeping it working.
For this reason, a school must always have a clearly defined mission
statement, philosophy, goals, and expectations of its students. After all, it must
present a consistent organizational culture to its clients that is an appropriate
microcosm of the broader culture that has entrusted to it the transmission of its
essence. It ought not be the forerunner of change, following every new whim and
opinion as soon as these are in the majority, and neither must it be too slow to
change to society's new paradigms. It must manage the task of preserving the
historical values that gave it birth and simultaneously enable students to live in
tomorrow. This is not a hard task in times of little change, but a nearly impossible
one when a society is rapidly metamorphosing into a new form.
On the other hand, while conformity and consistency are important for the
transmission of values, an excessive concentration on both would destroy the
freedom of enquiry necessary for democracy. The regimes of Hitler, Stalin, Mao, and
Pol Pot were nothing if not consistent, but they were brutally repressive of every
idea not deemed to be part of the state's agenda for indoctrination. Therefore, the
other players in society, such as corporations, governments, and the media, cannot
be required to have missions, philosophies, and goals that are entirely consistent
with the schools, or freedom will already have ceased to exist. The possibility of such
extremes of consistency also points out the need to teach students to identify propaganda, to
discern facts, to evaluate the content of opinions, to be able to propose and weigh
alternatives--in short, to think clearly. For instance, the author is continually
advancing various points of view, and it is sometimes clear that he is advocating
one over a number of others. Even in the cases where this fact is not evident,
however, the omission of some ideas from the discussion is also evaluative. The
reader is expected to assume that everything in a book of this kind is evaluative,
and to continue the evaluation process personally.
That is, education very much involves a conscious give-and-take among
informing, preserving, growing, and changing. Rigidly legalistic absolutist
philosophies often fail important tests here, for they do not allow sufficient flexibility
for necessary change. However, a completely relativistic and individualistic
philosophy also fails the test. Such thinking holds that no ethical principles, or
indeed anything else is absolute, but that the individual may assess and then accept
or reject all ideas equally. In this view, schooling is a smorgasbord from which the
student may sample piecemeal according to choice, regardless of the degree to
which that choice has been informed. However, people actually tend to absorb
beliefs, emotions, ideas and techniques wholesale from their teachers on the
strength of mere assertion, and without assessing them. Moreover, the very notion
of a society is that some ideas are held by its members to be more important than
others, and the idea of a civilization as a whole is that some ideas are universal and
absolute. These include honesty, the value of its members' work, empathy with
other members of the society, the ability to cooperate, and so on. If education, from
all its sources, fails to show clearly the superiority of the fundamental ideas at the
core of a society, it will actively prevent continuance of that society.
The notion that learning can be value free, or that one can learn about values
without assigning any sense of importance to any of them is a myth, for this attitude
itself is a statement about values. It is also one that is contrary to upholding
principles of liberty and democracy, for these provide the context for anything that
can be called "free" enquiry. There can be no such thing as context-free studying
about culture, religion, politics, economics, good taste, or morality. The very
attempt to remove context is a contextual act, one that asserts that there are
indeed other absolutes than those of the society. Moreover, if there are neither
truths to learn nor values to assess what is learned, it is not clear that education has
any ideas to talk about.
Thus the school in a democratic society must teach democratic cultural
absolutes as such and set students into the existing cultural context, while at the
same time including freedom to think differently as one of the absolutes, and
empowering them to change the culture dramatically, without necessarily
suggesting that they ought to. Achieving this also implies imparting a fine sense of
values, a deep sensitivity for both individual people and for the culture, and a broad
ability to integrate ideas from many subjects at once. An integrated agenda, rather
than a fragmented one, would likely therefore be a hallmark of the next civilization,
and this would surely be reflected in its schools.
In like manner, the attempt to distinguish too sharply between education and
training will ultimately fail. They may be somewhat different aspects of the greater
process of learning, but neither can be completely separated from the other. This is
true not only because there are ideas inherent in technique, and methods are
implied or necessitated by ideas, but also because both education in ideas and
training in technique take place within broader contexts. First, there is the context
of the whole human being who is undertaking the learning, and who is changing as
a whole because of it, and second, there is the context of the society in which the
two processes operate. Since education and training are connected through those
contexts, there is always a mutual influence of the two, so they can be treated
separately only insofar as they are aspects of useful learning in a total context.
While some schools may specialize in one or the other of the two, all must to some
extent integrate them into a seamless whole, for the people of the fourth civilization
will all have to know technique, but they will also be required to be evaluators of
ideas.
Just as the total context of what is learned is of critical importance to the
process, so also is the content of that learning and it is on these two that the next
section is focused.

10.3 The Content of Learning in the Cultural Context

"A human being should be able to change a diaper, plan an invasion, butcher a hog, craft a
ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort
the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new
problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die
gallantly. Specialization is for insects."

-- From the Notebooks of Lazarus Long, by Robert A. Heinlein

What ought education to contain, in order to achieve the goal of engaging the
mind in the task of bringing the learner into the mainstream of society? The
various interested groups such as government, teachers, administrators, and
school-based associations, have all conducted studies and composed their own lists
over the years. The one here is therefore in a long tradition, but it is presented in
the context of the discussion of this book, not so much as a summary of the extant
literature. Some of the items are fundamental and obvious, but worth restating
precisely because the obvious is sometimes invisible, and therefore ignored, even
when it is important. Other things may be less obvious, but nonetheless important
to achieve the appropriate balance in the entire cultural context.

Life Skills

It was once assumed that basic life skills were taught at home, and that the
school system need not concern itself with them. However, such things as
manners, etiquette, how to balance a chequebook, use the banking system, how to
shop, to budget, to raise a family, to look for a job, and to obtain government,
medical, dental and legal help--among many others--can no longer be assumed to
be in the possession of the student. To fulfill the mandate to help a child become a
functional adult, schools must pay attention to such things; failure to do so will
leave many students incapable of breaking out of family patterns of ignorance and
poverty. Critics of including such things in a school curriculum are quick to blame
the parents or some ethnic sub-culture, but in so doing they forget that the school
has an ethical obligation of its own that does not depend on the student's
background, and that this obligation is to the principal client.
Who is the principal client? Depending on one's point of view, it is either the
society that gave the school its mandate and agenda, or the student who is
receiving the knowledge. In a democracy, the difference may not seem great, but it
can be very large indeed in a totalitarian state. Ideally, the student is the focus of
the learning activity, and socialization into the broader society is for the benefit of
the student as much as it is to fulfill the mandate of the state.
Neither can the universities suppose that this observation about life skills is
relevant only to the grade schools. After all, what is the use of producing academics
who are filled with ideas and techniques from their specialty, but who are non-
functional in their society? Such learning may not have to be a large part of the
curriculum for mature adults seeking new horizons, but for the student fresh out of
high school, it is still a necessity.

Communications Skills

The ability to receive and transmit ideas, needs, and emotions is crucial at
every level of participation in society. On the one hand the person who cannot be
communicated to cannot be taught, and is therefore unable even to begin the
pilgrimage to adulthood and responsibility as a member of a society. Indeed, it is
precisely the ability to learn that delineates what professions or job roles are
available to a given individual. On the other hand, from the infant who needs to
inform parents of a dirty diaper to the Ph.D. in biochemistry trying to publish a
potential Nobel-winning breakthrough, the ability to get a message out is equally
crucial to functionality. The following maxim is offered to bring this critical need into
focus:

A person who is unable to communicate beliefs, feelings, information, and knowledge efficiently and
effectively might as well not have any.

The case has already been made in this book that the society of the future will
be characterized as one in which suitably trained people will have instant access to
all forms of information. Thus, the memorizing of many facts will take second place
to the ability to acquire, manipulate, and transmit information--that is, to organize
and communicate ideas. Since such communication will involve ideas as well as
facts, those who do work in the future will have to work smarter, and the specific
area of their learning that will need the most attention is the ability to communicate
clearly and effectively. There will be much less tolerance of incomplete or erroneous
communication, and there will be substantial pressure on all those using information
facilities to ensure that they do so correctly.
Those who are able to master the techniques of working with and
communicating ideas, which is part of being educated in the sense used here, may
have enormous advantages over those who cannot, so this will be a critical part of
future learning. At what point such training would become universal is not yet
certain, but the assumption being made here is that by the advent of the major
Metalibrary facilities, the ability to make use of them will simply be taken for
granted by both workers and employers. Thus, reading for understanding and
writing for the clarity of another's understanding will not simply be the goals of
future learning; they will of necessity be its major acceptable outcomes.

Literature

No people can be understood, nor any culture perpetuated, without serious
reference to its literature. The sophistication of the students will, of course,
determine the complexity of the writings with which they can grapple, but books are
essential for a people to know where they have been and who they are. For
example, no serious study of Western civilization is possible without coming to grips
with the Bible--the single most influential collection of books in this culture's
literature. The British heritage and even the English language are heavily
dependent on Chaucer, the Magna Carta, Shakespeare, and many others. Likewise,
the writings of the founding fathers of the United States, including the Federalist
Papers, the Declaration of Independence and the Constitution, are crucial to an
understanding of American society and culture. Works of fiction also hold up a
mirror to the soul of a people. It is no mere coincidence, for instance, that science
fiction became so popular in the machine age, or that fantasy exploring alternate
worlds, cultures and religions was on the ascendancy at its close. In both forms, this
genre of literature is reflecting culture--the stable one of the mature part of the
industrial age, and the changing, groping, and uncertain one of its passing. These
observations are true of every language and culture; all express themselves in their
writings, and no culture can otherwise be understood.
Likewise, the history of ideas is contained in books, and the modern person is
treading on dangerous ground in dismissing the thinkers of the past on the grounds
of supposed obsolescence. Plato, Aristotle, Augustine, Kant, Luther, Calvin, Galileo,
Newton, Locke, Marx, Freud, Einstein, and a host of others shaped the world and its
views as they now are; modern culture cannot be understood without reading them.
The great danger in making the transition to the new age is that foundational
attempts to grapple with ideas will be dismissed as irrelevant simply because they
are old, or were conceived by (and therefore are seen as tainted by) people whom
the moderns do not view as politically correct.

History

This topic and its importance have been remarked upon extensively in this
book. In order for citizens to understand their place in society, and their potential to
contribute to or change that society, the learning of its history is essential. The
cultural bond of any society is not just with its current citizens, but with all those
who made the society what it now is, and with those who will follow. When the
motivations and techniques that brought it to the present point are understood, and
the points at which the major decisions were made for stability or change have been
identified, it is possible to begin informed consideration about new directions.
Otherwise, decisions are taken in a vacuum of historical knowledge. Being ill-
informed, these are as likely to bring harm as good and so the mistakes of history
come to be repeated.
As for the literature of a culture, so for its history--the present cannot be
understood without the past, and the past exists for the present primarily through
its books. Fail to read these books and see where society has come from and why,
and the student will surely be unable to determine where it is going and why.

Ethics

It is also insufficient (and impossible) to study the values of a society in a
judgement-free context, as if they were of no real account for either the present or
the future. Both the values being studied and those of the one examining them are
inseparable from their respective cultures. These are not relativistic; they are part
of what makes a society and a people unique. When they are discarded, the people
lose their distinctiveness and become, even if only partly, some other people. This is
the case of every truth that a people holds to be self-evident, for such a statement
is an assertion that its values are absolute and essential to their distinctiveness.
Moreover, it has been argued several times in this book that it is impossible for a
society to exist at all without a collective conviction of what constitutes "good" for
that people. If all go their own way in this, doing what is right in their own eyes
without concern for anyone else, then there is no society, just a collection of
individuals, centered upon themselves and groping about in an increasing chaos.
Moreover, an absolutist ethic holds that there are "goods" that transcend all
cultures, and without an agreement on them, humanity itself may be at stake. That
is, the commonality of good and evil applies to the community of nations as well as
to each country individually. Lesser values may pertain to the survival of a culture;
but greater ones may have an impact on the continued existence of humans on the
planet. Since each generation must work these things through in order to avoid
extinction, values are a part of every education, whether this is acknowledged or
not. On the big issues, some of today's students will eventually make the decisions
that might precipitate or avoid a global war, cause or avert an ecological
catastrophe. For a myriad of others, they will have to live life and share a
community participating in its understanding of how to apply the good and the right.
In addition, since the lives of teachers are an open book to their students, and
can be far more persuasive and compelling than the content of the formal
curriculum, it is often the teachers' moral actions that students will imitate more
than their words. Thus, the ethics that are caught will depend only somewhat on the
subject matter in schools; they will depend much more on who does the teaching
and how. Students are very sensitive to the total classroom context; their response
to the subject matter is much more dependent on the teacher than it is on their
peers. This observation applies to all who teach, including the popular media and its
stars. Indeed, the more popular the media, and the more hero-worship offered to its
major figures, the more potential those people have for influencing change, and the
greater the responsibility if by that change, the values of the society are replaced
by others.
Once again, the dilemma of democracy is highlighted, for students must learn
to allow and even to hear voices that would destroy freedom in the name of the
absolute of free speech, but if they do not also learn a self-imposed restraint along
with their commitment to such absolutes, they will have the power to obliterate
democracy in a single generation.

Sex Education

This discussion highlights one of the most controversial curriculum items of
all. Every person must come to terms with sexuality; it must be understood, for it is
a part of being a human person. Sexuality is not just a collection of anatomical facts
and techniques, but a critical aspect of humanness, and an essential part of the
most important relationships one forms. Sexuality is critical to knowing and
participating in cultural rituals and to understanding literature; it is an undeniable
part of everyone's life. It has the potential for enormous pleasure and satisfaction in
the closest of all possible bondings two human beings can form. The corresponding
dark side shows as great a capability for harm, for evil, for disease, for perversion
or exploitation, and even for death.
Because of its power, pervasiveness, and importance for both personhood and
socialization, human sexuality generates a wide range of moral issues. This creates
an irresolvable dilemma for schools. On the one hand, a child must learn to become
a responsible adult, and sexual education is too critically important to leave out of
the curriculum. On the other hand, information about sexuality cannot be
transmitted free of values, for it is behavior that is in question, not simply facts. It is
impossible for children to be taught about their sexuality without some indication
(even by implication) of what is appropriate behavior; the attempt to require
teachers to do so itself makes the moral assertion that there is no moral question
involved. For example, if students are encouraged to use condoms in order to
prevent AIDS, they may be given the message that sexual indulgence carries only
disease risks, and is otherwise morally acceptable--a stand that contradicts
traditional beliefs. Yet, not to tell them about condoms at all might well be
irresponsible; the school is in a dilemma from which no easy exit exists.
Likewise, one cannot teach about homosexuality without making value
statements, for its practice is not merely a lifestyle issue, but a moral one. There are
also health implications to such practices.
In the ensuing debate, one side correctly points out that lives are at stake if
students remain ignorant; the other rightly complains that merely "factual"
education both ignores the relational aspect and undermines moral values learned
in the home. Some wish children to learn that sex was designed for free use only in
the context of strictly monogamous, permanent, and heterosexual marriage. Others
view it clinically--as a body appetite to satisfy, and no more a moral issue than
eating--one takes precautions to avoid tainted food, but nothing else. Schools are
caught in the middle of the conflict, unable to meet important needs without
causing offense.
There is no escape from this dilemma in an ethically fragmented society. Sex
education is essential, but can never be value free, and schools are unlikely to gain
a mandate to lead culture, especially in its values. The advent of new killer sexually
transmitted diseases (STDs) has merely sharpened the controversy. In some
communities, the short-term consequence may be a renewed emphasis on private
schools, many of which are religious in nature, so that families can ensure the
transmission of their sexual morality to the next generation. Of course, such families
generally do so at home; their private schools often do not teach such things
because they are established without such a mandate. Such a fragmentation of
schooling is inevitable when a portion of society changes direction and adopts a
new world view, but it bodes ill for consistency, unity, and cultural survival into the
future, for there are often other forces, such as racism, working to divide the school
system as well. In the longer term, there must be a new consensus on this issue as
well, for a society cannot be built on diversity alone.

Science, Technology, and Mathematics

Since the society of the future will be even more wedded to high technology
than it is now, far more of its citizens will require some technical knowledge in order
to function. Just as the factory workers of the industrial age had to be trained to run
the machines, so also will any who wish to work in the future be required to have
some technical literacy. This is not the narrow training of the late industrial-age
specialist, but rather the ability to understand and effectively use the new
technology and the information it provides. For example, it will no longer be
possible for even the most ivory-tower of academic intellectuals to work without
using computing equipment, for their publishing too will be accomplished in this
way. It will also not do for any citizen to be ignorant of basic science, for far too
much of life will be directly affected by and constantly changed by the new
discoveries.
Yet, while necessary, the objective of infusing broad technical training into
education will continue to be difficult to achieve. The majority of students abandon
even the modest general science courses offered in today's high schools at their
earliest possible opportunity. There are a number of reasons for this, but the chief
one appears to be an early loss of interest in mathematics, without which any
further science education is impossible. As such organizations as the National
Council of Teachers of Mathematics (NCTM) are too well aware, much of this
problem is probably traceable to the lack of qualified or even interested
mathematics teachers. In North America, it is rare for an elementary school to have
a mathematics specialist on its staff. Even at the junior secondary level, many of
these courses are taught by the great surplus of English and social studies teachers,
regardless of what mathematical background they might have. Math avoidance can
be perpetuated from one generation to another by elementary teachers who may
themselves be afraid of the subject, and by high schools that are too busy with a
broad agenda of other problems to attempt the costly and time consuming
rehabilitation of the avoiders. For their part, most universities operate on the
assumption that the student chooses the major, and outline the curriculum within
that narrow choice. Few of them operate the traditional liberal arts curriculum and
demand at least a few courses from each of the major areas of study, and their
students can easily depart with little or no mathematical literacy. Yet those who do
avoid mathematics cut themselves off from a broad range of careers, including most
of those that will be at the center of the action for decades to come.
Some of this problem is cultural, for there is a broad perception that
mathematics is not necessary for many occupations, and indeed the very abstract
materials often included in the curriculum are not. However, it is exactly these types
of jobs that are threatened as the industrial age closes, and it is the ones requiring
greater technical knowledge and presupposing some mathematical literacy that are
multiplying.
There are institutional reasons for these problems as well. Many school
administrators not only believe (or are forced to act as though they do) that anyone
can teach mathematics, they also act as though it can be taught anywhere,
anytime, and with no equipment. Thus a high school mathematics teacher may be
handed a piece of chalk and sent to the sewing or drafting room or to a vacant
science lab. The same teacher may be required to use decades-old books and lack
the budget, facilities, or expertise to produce local materials. For social and cultural
reasons, the few well-trained teachers of this subject are still far more likely to be
male than female, despite the fact that their high-achieving students are more likely
to be female. In such circumstances, the female students will lack role models
appropriate to breaking out from stereotypes.
There is no easy or short term remedy for these problems, but those countries
that are successful in engaging the attention of and convincing their peoples of the
necessity for mathematical and technical learning will be the big winners in the
economic sweepstakes of the future--provided that this is simultaneously combined
with the learning of effective communications skills. There is no indication at this
point that such a happy realization will soon come to North America. If it does not,
the unchallenged scientific leadership it once had will surely pass elsewhere.

Technique

In a sense, the industrial age majored on techniques--the many narrow
specialities of the academic, the tradesperson, and the industrial worker. The
information age demands techniques of its own--those of finding, assigning meaning
to, and using information. Thus the education of professionals must change
dramatically, for they will no longer need to be factual repositories when they have
machines to take over this function. Instead, they will concentrate on being finders
and users of facts as necessary, and on the creative artistry that has always
separated the many mundane practitioners from the few truly brilliant ones. Doctors
and lawyers will have to change their ways most dramatically, for they are currently
the most dependent on knowing facts when needed. However, every step toward
the Metalibrary will force similar changes on many other professions, and their
techniques will grow much more similar with the passage of time.
Thus, training in skills will tend to concentrate on the finding of appropriate
information, which, being about the idea of technique, is actually a matter of
education more than it is training. Alternately, one could term these "meta-
techniques." Education in the new civilization must be much more concerned about
the process or ability to learn. This is something many theorists had already hoped
it would be, and studies of education have usually called upon it to be, but there is
little evidence that it has been delivered on in the past. If suppositions here about
the nature of work in the next civilization are close to the mark, future education
must become more concerned about both the process of learning and the ability to
learn. The difficult task will be to devise the techniques to bring it about.

The Big Ideas

This in turn means that more people in the future will have to become
assessors of ideas, for this ability is part of the technique required by the
information paradigm. Implied by this is a reduced emphasis on some of the
discrete and narrow specialization considered important in the industrial age, and
an increased one on the activities of the mind itself. Certain very broad questions
that have always been important to philosophers will become part of the education
of the future, because they touch directly upon the assessment of ideas. These
include:

o Classical Metaphysics, or the study of ultimate reality and meaning. This is
the discipline that provides the frameworks within which to create world views, such
as those of the scientist. In it, one also studies questions about the origin and
development of the universe (cosmology), about the nature of being or existence
(ontology), about the existence and characteristics of God (philosophical theology),
and about the nature, role, and destiny of humanity (anthropology).

o Epistemology, or the study of the nature of knowledge statements, and the
sources and meaning of knowledge. Though this word as such has not previously
been used in this text, epistemology has been at the heart of several of the
discussions thus far. It bears on the truth value of statements, on the reliability that
can be ascribed to various forms of knowledge, on whether truth is relative or
absolute, and on whether knowledge is subjective or objective. It helps to
distinguish whether knowledge is based on another's authority, is revealed by God,
comes from a reasoning process, is intuitive, or is empirical and derives from the
senses. Clearly metaphysics and epistemology are closely, even circularly, related,
for one needs a theory of reality to say reality can be known, and a theory of
knowledge to say that one knows that reality exists. These are important questions
to the serious knower, and need to be considered by the would-be assessor of ideas.

o Axiology, or the study of values. This usually includes ethics, though that
topic has been treated separately in this book. It also includes aesthetics, which
expresses cultural values in art. Because it relates to the imagination and creativity
of a people, and because artistic media tend to express the history and other values
of a people, an understanding of aesthetics is important to the knowing of a
people's soul. The answers to questions such as "What ought I like?" or "What ought
I consider beautiful?", are important to the would-be member of a culture, and are
not techniques, but appreciations, without which both communication and
functionality are seriously impaired.
These three areas of study, while often regarded as unimportant and not
really needed by many people, are a part of the intellectual makeup of every human
being alive. If not taught, they are caught; either way, they are learned. Perhaps the
difference in the future is that they will be explicitly identified and discussed, for
there will be a greater realization that it is not behaviour that gives a person an
essential identity, but beliefs and values, for these shape the emotions, the
experiences, and the behaviour.

The Social Studies

In this final category are included all the studies of the behaviour of
humankind, both in the mass of society, and as individuals. Such disciplines as
economics, politics, psychology, and sociology are represented here. As indicated in
the chapters on the economy and on the state, these too will become progressively
more important as time goes on. However, if the projections of this book are even
close to correct, there could be a long period of dramatic social change in the
immediate future. This would increase the desire to develop social techniques and
the yearning for such disciplines as economics and sociology to become full-fledged
sciences, but the turmoil will seriously hinder the ability to achieve this goal.
It should be recalled at this point that social change initiates new techniques,
and is caused by the interactions and conflict that arise out of new ideas and
inventions once they have been implemented. Thus, the rate of social change and
the rate of technological innovation are closely linked, and it may often be unfruitful
to enquire which of the two came first in a given instance.

Will the Generalist Come Back?

There was a time, not too many hundreds of years ago, when it was possible
for a well-educated person to contain within a single mind virtually the whole body
of scientific and literary information known. Ideally, such a person would have
attempted to read all the works of philosophy and theology available, and would
have striven toward being the complete scholar. Such a day passed away with the
scientific and technical revolution of the nineteenth and twentieth centuries, when
the pool of knowledge became so vast that no one individual could hope to
comprehend it. Even artificially lengthening the adolescence of young scholars by
keeping them in school until their mid twenties could only produce a "doctor of
philosophy" who could be so narrow a specialist as to be almost non-functional
outside the tight little world delineated by the final dissertation.
In the hard sciences, particularly physics, it could take several more years for
the student to arrive at the frontiers of knowledge and begin to do useful and
original research. The minuscule overview the student received of all those fields
outside the speciality was hopelessly out of date even by graduation. However, in
the information age, remembering all the facts all of the time will become less and
less important. Being able to find the facts, associate them, and use them will be of
first importance. That is, in a given project, it will still be necessary to assemble the
factual information, for this process is fundamental to integration and synthesis, but
it will not be necessary to personally retain such information in order to continue
functioning in one's profession. This necessity to learn for skills rather than
information is most pronounced in the computing and information sciences
themselves, where material can become obsolete by the time it has come to the
attention of the person who proposes to teach it.
All workers in the next civilization, but especially its leaders, will have to be
more articulate communicators, broadly educated, with at least some technical and
some business background. Once certain minimum skills have been obtained, and
without diminishing any of them as broad, basic requirements for functionality, the
man or woman of the new age may be free to specialize. The present day
stereotype of the able science student who is nearly completely illiterate in the
English language will probably have to vanish. The equally stereotyped arts major
who fears, distrusts, and is willfully and even proudly ignorant of modern science
and technology will be equally out of place.
The quote from Heinlein at the beginning of this section provides a good
starting point, but men and women do not yet have the multi-century lifespan of a
Lazarus Long that is required to become knowledgeable in every field. Moreover, if
new progress is to be made in technology, specialists are needed to make it. The
amount that these specialists will have to know, the way they will work, and the
demands society will place upon them to enable them to function at all are
changing dramatically. The new civilization belongs to those who will specialize
enough to earn their bread and butter as distinct individuals but who will be
generalist enough to qualify them as functional human beings in a society where
information management and communication skills are paramount. They will have
to use their knowledge skills to work in several specialities simultaneously or
serially, switching from one to the other as the need dictates. It is to such ends that
learning is likely to be directed in the future.

10.4 Issues in Formal Learning

The compilation of issues discussed in this section is far from comprehensive--
these are only a few of the major concerns about learning: those that fit into the
themes of this book, those that deal with the broader society, those that are ethical
in nature, or those that touch upon the techniques of learning.

Why is Learning Undertaken?

Groups with an interest in the learning process were mentioned in the first
section of this chapter, along with their possible conflicts. The idealist's answer to
this first question is that learning is undertaken to make the learner functional and
preserve and enhance the knowledge and values of the society wherein it occurs.
However, there are many groups with special interests in learning for the benefits or
changes it can bring to them. For its part, the state often wishes to develop willing
citizens, and to this end an authoritarian state will dictate the entire curriculum,
perhaps in both a comprehensive and an arbitrary fashion. At the same time, a
democratic state must use the curriculum, albeit much more subtly, in an attempt
to convince students of the superior virtue of democracy. In any case, the state that
fails to persuade students of its legitimacy will soon cease to
exist, for there are other voices prepared to persuade them differently.
The state (at the national or regional level) may have a variety of specific
agenda items to achieve and hope to use the school system for these purposes. For
instance, catching up with the Soviets became the byword of the early 1960s after
the United States' perceived humiliation by the Sputnik launch of 1957. For a time,
being beaten into space provided the motivation for an extensive rewriting of the
national curriculum to emphasize mathematics and science. Academic theoreticians
were called upon for a quick fix of the mathematics curriculum, and produced the
impractical and abstract "new math" with its scores of axioms and abstractions and
few examples, applications or exercises. It did not seem to have occurred to anyone
to ask whether it was necessary for grade two students to become mathematical
theoreticians, nor whether any practical needs were being addressed. Not many
teachers even understood what the purists had given them, and some ten years
went by during which few had the courage to deprecate theory and try to relate
mathematics to students' lives or to the surrounding culture. It is not clear at this
point whether much is accomplished by such dramatic changes, nor even what the
general effect of state dictated or influenced curricula is in general. Neither is it
clear that the uniformity demanded in such cases is actually achieved; the
classroom teacher has opportunity to implement other agendas than those of the
state.
There may also be specific social reasons for undertaking schooling--it could
be done in order to perpetuate class differences or an existing division of economic
spoils by keeping certain groups "in their place". Since information is available from
many sources, such uses of schooling are becoming more difficult and could be
effectively impossible in the more open society of the future. Others have the
opposite view--that schooling ought to focus on the elimination of socioeconomic
distinctions; that it ought to be the great class leveller. Thus they promote "lowest
common denominator" curricula, on the assumption that if not everyone comes into
the school system the same, they can surely be made to leave it the same. Even
though this philosophy has very strongly influenced North American school
administrators, it is seldom carried out to such an extent in the classroom, because
its premises are observed by practising teachers and by their students to be untrue.
For instance, by the time students finish high school, there may be five to
eight years difference in reading and arithmetic capability, and there may be no
immediately available means to close this gap. This does not mean that the gap will
never close, or that those who are less able at that point are completely and
permanently dysfunctional; it merely suggests that the modern school lacks
techniques, resources, and the mandate to produce a uniform product in factory-like
fashion. Formal learning is not, for a variety of reasons, a class leveller, though it
has some trends in that direction, nor is it likely to become effective as one. Quite
the contrary, the provision of equal opportunity to excel in learning might result in
an entirely new class structure based on ability. Whether this turn of events is a
desirable outcome or not is questionable.
Another social agenda may sometimes be followed by employers, for whom
the number of years of schooling may be a screening device, on the understanding
that those who have successfully cleared one set of hurdles in life are better
equipped to clear another. However, this will work against the employer who hires
people who are overqualified, for when the reward offered by the job fails to match
the expectations of the jobholder with respect to responsibility or remuneration, the
result is dissatisfaction and a high turnover rate. Employers who hire those with
slightly lower paper qualifications than needed and train their employees on the job
will have higher job satisfaction, lower turnover, and a better retention rate. Some
employers know this, and deliberately hire overqualified workers for menial jobs on
the theory that they will not be around long enough to organize effective demands
for improvements in working conditions (See Berg--Education and Jobs: The Great
Training Robbery).
However, such exceptions aside, it is remarkable that employers have so far
had little interest in or opportunity to influence schools, even though they are
entirely dependent on them for their human resources. This is probably due to their
concentration on the short term bottom line--a practice that may be necessary to
meet immediate competition and satisfy shareholders, but tends to limit long term
potential. Such limitations are most problematic when, as in the fourth civilization,
the business milieu is changing rapidly and markets can appear and disappear
overnight. The ability to respond sufficiently rapidly in such situations requires a
flexible, generalist work force that can reinvent itself and its enterprise at a
moment's notice. Finding and training such people requires that long term
commitment to change have priority over the immediate bottom line.
Perhaps it is most accurate to say that society organizes learning for
economic reasons. These include the ability of a nation to compete effectively in
innovation and high-tech production, and the ready availability of a well-trained
labour force so that employers can make quick changes or additions to their staffs.
There is also the benefit to the state accruing from higher taxes paid by better
trained workers, who usually obtain higher salaries for their efforts. However, these
observations apply mainly to training in technique. On the other hand, the benefits
to society of education--the ability to think and evaluate ideas--are much less
immediate and tangible, especially at the close of an age that has been obsessed
with training in technique for short term rewards, and wherein education in ideas
has largely been an incidental by-product of such training.
One could advance the suggestion that education produces a better and more
complete person, and is therefore desirable for ethical reasons, but this would be
unlikely to impress those who have to pay for the process, and it would be difficult
to establish as true, even if it seems to be self-evident. A better argument might be
that more highly educated people are likely to be the very kind of versatile problem
solvers that a fourth civilization enterprise needs to survive and thrive. Thus, in the
future, education in ideas may be undertaken for economic reasons, because it is
more available, or because there are time and money to pursue it. These may not
be profound reasons, but they are pragmatic ones, and will likely serve.

Who Should Control Education?

Since there are public, family, and business interests in learning, and since
there are both societal and individual concerns, the issue of control over organized
learning is a difficult one indeed. Teachers claim the right to oversee it in the same
way as other professionals control their work. But doctors and lawyers police only
the entry to their professions and, to some extent, the ethical standards of practice.
Hospitals are built and their administrators are appointed by their owners, which
may be private organizations, churches, community boards, or officials of
government health departments. Likewise, judges are either elected or appointed
by government. That is, in neither of these cases is the entire medical or legal
apparatus operated by the professionals for their own ends. Thus, teachers cannot
expect to gain such control over schools, curriculum, and the administrative aspects
of learning, except as they are appointed by and represent the broader society.
Yet, just as hospital administrators have usually been doctors, and judges are
usually lawyers, the educational apparatus is normally run by teachers, or former
teachers, even if it is not their organizations that appoint them, for government
education departments are often operated by those whose training is as classroom
teachers, just as are the schools. There is not necessarily any direct conflict of
interest in this, because such people generally cease to be teachers and become
instead administrators--removed from and possibly unsuitable for the classroom
environment after a time. They have the time to be the developers and pursuers of
educational techniques of all kinds, and the persuaders of classroom teachers to
experiment with these techniques, but gradually become divorced from the
practical realities of the classroom and immersed in the business, politics and public
relations of schooling. Indeed, just as hospitals have in more recent years begun to
create a separate profession of medical administration, so also a distinct breed of
educational administrator is beginning to emerge.
However, such comprehensive control by teachers, former teachers, and
professional administrators, while not a conflict of interest, is very narrow. Parents
and children have the most at stake in learning, but there has not been much
mechanism for their voices to be heard. Employers also have a major interest in the
learning process, for they are expected to hire the students afterward. Yet, very
little attention is paid to their needs, and they are seldom represented in the design
of curriculum. Likewise, universities may set entrance standards, but their influence
on the grade school is otherwise confined to the trickle-down effect of their
graduates who take on teaching positions there. This gives them a delayed-action
philosophic control over the grade school, but no direct voice in decision making.
There are exceptions to these observations. In some cases, universities have
worked closely with grade schools on curriculum, and some states in the United
States are mandating a broader business and community representation on the
boards of technical colleges. However, these are usually experimental or isolated
test cases, and it is not clear how soon these will become general practices. That
they will so become is one of the information society paradigms, but it may take
some time.
Another difference between teaching and the practice of law or medicine is in
the direction of accountability. All may be somewhat administratively accountable
to government, and to institutional bureaucrats, but lawyers and doctors are always
understood to be responsible for the outcome of their work directly to the client.
Although the ideal professional teacher has a deep sense of responsibility to
students, there has in the past been little to hold teachers externally accountable
for the work they do, or to ensure that the interests of their clients are taken
directly into consideration. If they used the techniques promoted by their superiors and did not
offend anyone, it was unlikely that any enquiry would ever be made about whether
their teaching had been effective or if it had achieved the desired learning
outcomes. Moreover, there is frequently sharp disagreement about what outcomes
are desirable, how to achieve them, or how to measure them. It is also in the vested
interests of teachers and their unions to oppose any measurement of teaching
effectiveness.
This too is changing, however. The combination of easy access to information
and the litigious nature of American society is already starting to produce
educational malpractice suits, and there have already been cases involving students
who sued because they were allowed to graduate while still illiterate. There is
nothing like the threat of legal responsibility to concentrate the mind on making
improvements. Thus, in far more learning situations, specific goals are being
delineated before the process begins, and these are being tested for once it is
finished. Also, more people are becoming involved in the making of curriculum
decisions, in the development of teaching strategies, and in the measuring of
outcomes, and the premise of the information society is that such integration of the
efforts of the interested parties will increase substantially in the future.
Since the learning process is addressed to a critical ethical point--the
completing of functional human beings--the question of effectiveness and the
matter of accountability to the student are crucial issues. It would be inappropriate
to have the student, if a young child, control the process and the curriculum, even
though this may well be suitable for an adult learner. However, if teacher-student
accountability were practised, and the outcomes of the process measured, it might
become possible to determine which, if any, of the many educational techniques are
most effective. However, it should be noted that the institution of teacher
performance assessments will probably take a long time and the process will likely
involve much pressure by clients and much resistance by unions. Theory may
indicate that such assessments are desirable, but theory must still be put into
practice. The wide variation in control over education will continue to ensure that
the move from theory to practice is uneven.

What is the Status of Teachers?

At the present time, teachers are often regarded as professionals only at the
university level, and those in K-12 schools may find themselves accorded a rather
low status in the community, especially considering the importance of the task in
which they are engaged. Yet, there is little evidence that university professors are
better teachers. Indeed, as a rule they are not expected to be competent in
anything but research--for that, and not teaching, is their profession. In the public
eye, there is often suspicion that the K-12 teacher does little that could not be done
by anyone. There are sometimes deep concerns about the results of grade school
education that are said to be partly justified by the scores on some standardized
tests, and many of those concerns are focused on the teacher.
The problems of the broader society also impinge on the classroom, making
the job more difficult still. Because public schools can operate with training or
educational goals that are either few in number, not clearly defined, or that
contradict one another, and because of the competing desires of society, teachers
find themselves at the conflict point of the various demands made for learning.
Instructors in technique have an easier time, because they know what the outcome
of their work is supposed to be. University professors have the least problem of all
with such conflicts because the general public does not expect to know what they
are doing, and they need not have educational goals, so they are virtually immune
from criticism. They are, however, academics and not men and women of action, so
they are extremely vulnerable to direct frontal attack, especially with the threat of
violence. Thus, in the 1960s academic standards and course content came to be
dictated by the most violent student radicals, and few academics were courageous
enough to withstand the onslaught. After a period of relative calm, such attacks
were renewed in the 1990s, this time focusing upon the removal of the last vestiges
of cultural relevance and values, and on the grading system (see Bloom--The
Closing of the American Mind).
This discussion leads directly to the question of qualifications for teaching. In
the formal sense, these vary widely from one jurisdiction to another. Until well into
the twentieth century, many teachers took only one year of teacher education
beyond high school at a "Normal School," and were never formally tested for
competence once they were certified. They were at the mercy of superintendents,
inspectors, and board members, and could be fired for any reason. Later, the single
professional year of training came to be appended to the end of a standard
university degree, and teachers were also able to form powerful associations or
unions. In some places, it became virtually impossible to fire a teacher, regardless
of competence. However, many certification boards now have rigid formal
standards, periodically require recertification or upgrading of skills, and make
every effort to remove those unable to perform the task set to them.
There are three aspects to teacher qualification: personal suitability, teaching
skills, and subject competence. A degree of administrative ability may be included
in the first of these, and the last has traditionally been directly equated with
academic background. At the primary school level, the first two are held to be more
important, but as one moves up through the years, they decline in regard relative to
the last, and at the university level, subject competence becomes paramount.
Yet, at all levels, the learner is also searching for role models to assist in the
becoming of a whole human being. This fact would seem to suggest that the order
of priority should always be: (1) who teaches, (2) how it is taught, and (3) what is
taught. Care must be taken not to allow such a shift in emphasis to become an anti-
intellectual relegation of subject matter to a low priority, or an emphasis on
charismatic leadership at the expense of content. All three are important factors in
education, but the industrial age has tended to diminish the human element, and it
may need some restoration.
Such a realignment of priorities might be achieved if the essential unity of
learning at all levels is realized, and teaching becomes a more professional, self-
policing discipline (with public input). Given the size of the teaching force, this goal
seems difficult to achieve, but the ethics of the task of completing human beings
would seem to demand much more careful selection for personal suitability to the
task. In addition, it requires both a greater subject competence at the lower levels,
and a greater ability to teach at the higher ones. These will not come about by
accident, but require deliberate action on the part of educational administrators.
Information paradigms seem to encourage such action, however, because it
will become increasingly difficult to hide incompetence of any kind in an open and
highly competitive society, and this will force these issues into public view where
they must be faced. It is also possible that the passing of the baby boom and the
increased demand for more highly trained and educated workers will put heavy
pressure on schools, and this will help to ensure that teachers are suitable, do
become competent, and are effective. Here, too, change does not come easily, and
there are sure to be teacher organizations that will fight any perceived loss of power
and influence.

Who Should Pay for Education and Training?

Population stability also means that North American school enrolment has
peaked and will begin to decline. However, increasing participation at the upper
levels, raised expectations by students, the great number of new capital-intensive
techniques, and higher teacher salary expectations, continue to put upward
pressure on school expenses. Like the health system, the learning system has
potential to break the budgets of even the wealthiest funding agency. This has
resulted in two trends.
First, there is a search for new technology in order to turn over some of the
teaching process to machines--this in the hope that it would make learning more
capital intensive and less salary intensive. Whether this is a good thing or not will be
discussed at length later in the chapter. This is, however, yet another illustration of
demanding that technique solve problems, without considering whether the
problems ought to be solved first, and then technique applied to make the solution
more efficient. As in the computerization of a bad office system, when it is done the
other way, with technique before solution, it is the problem itself that becomes
more efficient, that is, it becomes worse faster.
Second, there is a growing lack of enthusiasm on the part of the state for the
whole process, at least in some places. Governments, and the people who elect
them, have in some cases begun to lose both ability and will to fund the learning
system to the extent that it has come to expect. The result has been a series of
conflicts between governments and teacher unions, and in some jurisdictions a
sharp decline in formal learning's share of the overall government budget.
Quite apart from long-term survival issues, this has created a funding vacuum
for existing schools that can be filled only by appealing for funds to the private sector
or by going out of business and turning their students over to private sector
institutions, which charge fees, but on average spend much less. In either case, it
would appear that, over the short term, private sector involvement in and private
funding of education is likely to increase dramatically. The companies and
individuals providing these funds will of course demand a corresponding control.
Whether the resulting competition with the public sector will improve learning or
fragment it remains to be seen. It is likely to mean that education and training will
become more distinct, for the private sector is much more interested in the latter
than in the former; it will not pass up the opportunity to influence schools to
produce graduates who are immediately useful to them.
One widely advocated proposal to fund publicly while allowing the competitive
advantage of a private system is to issue vouchers for all children in a jurisdiction.
They, or their parents, would then select a school and turn over the voucher, which
would then be exchanged for a fixed number of dollars from the state. Such a
method would have been cumbersome to implement without computing equipment,
but would now be relatively easy to administer. Critics have attacked the proposal
as a way to create two standards and two tiers of education and so divide society--
all at public expense. A way around this objection might be to use a partial voucher
for half of the estimated cost, or some other suitable proportion. Public schools
would receive the other half from the state as they do now. Private ones would be
expected to raise their second half from the private sector. This system could
possibly reduce the total cost of education, allowing the public sector to redirect
money, say, into the inner city schools and actually provide improved learning
experiences--if politicians could avoid the temptation of using it elsewhere, such as
in reducing deficits or increasing other spending.
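The bookkeeping involved is indeed simple enough to automate. The following
sketch (in Python, with wholly hypothetical dollar figures and function names,
since the proposal itself fixes no numbers) illustrates how a partial-voucher
scheme might be accounted for:

    # Partial-voucher accounting -- a minimal sketch. All figures and names
    # here are hypothetical illustrations; the proposal fixes no numbers.

    ESTIMATED_COST = 8000   # assumed per-student cost, in dollars
    VOUCHER_SHARE = 0.5     # the "half of the estimated cost" proportion

    def public_school_funding(students: int) -> float:
        # Public schools: the voucher half plus the other half direct from the state.
        return students * ESTIMATED_COST * VOUCHER_SHARE * 2

    def private_school_funding(students: int, privately_raised: float) -> float:
        # Private schools: the voucher half from the state; the rest raised privately.
        return students * ESTIMATED_COST * VOUCHER_SHARE + privately_raised

    # A school of 300 students under each model:
    print(public_school_funding(300))             # 2400000.0 -- fully state funded
    print(private_school_funding(300, 1100000))   # 2300000.0 -- state half plus donations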
A further objection to a voucher system in the United States is that it would
necessarily result in government funding of religious schools, including Catholic
parochial schools and various other Christian institutions. Some believe this to be
unconstitutional.
Other funding methods are possible as well. As the corporate sector grows in
power and influence it may set up its own schools for employees and their children.
Corporations in fast-changing technologies that must engage in continuous
retraining could become more like technical schools or even universities than like
the businesses of old. Indeed, this is already true to a great extent of companies like
IBM, Apple, and Microsoft. However, since the average size of corporations will likely
decline in the information age, and these may undergo very rapid and continuous
change, the ability to run formal schools in this way may be rather limited.
In theory, universal public schools are the prime agency for democratization
and socialization, but this ideal is difficult to maintain in a divided society.
Privatization may have the advantages of maximum efficiency and precise
targeting, but it has the disadvantage of potentially severe fragmentation. This is
already taking place, and seems likely to continue until some time after the broader
society has reached a new consensus on its values and priorities. What seems
certain is that schools will have to find new sources of revenue and support in the
private sector. Where these will be, and what this will lead to is not yet clear.

Is Learning Fair?

Every society has a variety of socioeconomic groupings or classes. As
mentioned earlier, one socialist ideal is the levelling of these through learning, but
what may well happen is that either the existing class structure is exacerbated or
that new ones based on ability are established. There can be little doubt that the
wealthy have access to better and more expensive private schooling for their children,
and therefore to substantial opportunities to perpetuate their economic advantage
for another generation.
Even in the public system, schools in better neighbourhoods or wealthier
regions have a greater capacity to raise money for school projects, equipment, field
trips, and even renovations to the physical plant. They may also attract the better
teachers, and people of poorer districts may not be able to complain effectively
about such inequities. Thus, in some inner city schools, problems with low status,
poor self-esteem, broken families, despair, inability to speak the language, and
skepticism of the value of schooling, all conspire to make the schools relatively
ineffective. Since wealthy suburbs may be under different school boards, and the
city core often lacks a solid tax base, these problems get worse as time goes on.
Such schools have little pride, little of the latest technology, and few outstanding
graduates to any kind of post-secondary institution other than the local prison.
Some have relatively few graduates of any kind, and some become focal points and
breeding grounds for crime, substance abuse, and yet another despairing
generation. It is important to note that although such problems are often
complicated by the fact that particular ethnic or racial groups may dominate the
inner city, these difficulties are social and economic, and have nothing to do with
race, though in some cases official reluctance to address them might.
Busing students to and from the inner core was once seen as the panacea for
such problems; as a social experiment, it did focus the attention of uncaring
suburbia for a while. However, experience has shown that busing does not in
itself solve the real problems, and it is now generally recognized to have been
inappropriate. This seems to be an instance where only the centralizing of control
and the funding of education over much broader regions can spread out the tax
base enough to equalize facilities and salaries. That, and a broad-scale injection of
private funds may have some potential to improve such situations, but these are
long-term problems and will not go away soon--there is no quick fix for complex
problems.
In this instance as well, there is pressure to find technological solutions to
social inequities, but it must be appreciated that neither money nor technique alone
will solve human problems; it takes people motivated by an ethical compassion,
love, and a desire to break the cycle of poverty. On the other hand, if the fourth
civilization's information industries have a sufficiently high demand for workers,
they may only be able to get them from the thus-far neglected inner city schools.
Thus the cycle might also be broken by economic pragmatism.
Another fairness question has to do with ability. As mentioned earlier, and
notwithstanding socialist doctrine, it is readily observed in real classrooms that not
all people of a given age have equal ability to learn. Since training will continue to
be important and education will likely make a comeback--with either or both
becoming more necessary for job holders--there is a distinct possibility for the
creation of new and very sharp class distinctions in the future. For instance, one
could worry that if the Metalibrary came into being, the 10 percent or so who were
sufficiently well-learned to use it effectively would have the potential to control it,
even to the point of denying its use to those they deemed unfit. On the other hand,
with the proto-Metalibrary now known as the Internet/World Wide Web, such trends
do not appear to be very evident.
One possibly longer term alternative to such a class structure is to find
technological fixes for the learnability problem--drugs or electronic implants that
erase the advantage of the more able by raising everyone's ability. In some ways,
the latter of these especially seems like a rather inhuman solution, for it appears to
make cyborgs of all, but if the alternative is a meritocracy or dictatorship of the
learned, there may be those who will prefer to become part machine instead. Yet
another possibility with problematic overtones is that genetic selection could be
employed to change the next generations and make them more able. Such eugenics
programs are still closely associated with fascism and would not likely be well
received in the West. Indeed, there are more frightening alternatives still, for there
will always be demagogues who wish to initiate a neo-Nazism based on hatred of
the less able, and propose a new "final solution". These dangers are as real as the
lack of resolve by society to solve the problems of educational and other inequities
in real and lasting ways. Whether such a resolve will ever come about or be
effective is yet to be seen.
Finally, the premise of public schooling is that there is a shared set of
fundamental values that society has the duty to transmit to the next generation. As
long as North American society was viewed as a melting pot with a homogeneous
result--one that presupposed an essentially British view of law, justice, and
civilization, and assent given to a Judeo-Christian moral background--public
schooling was workable and generally acceptable. However, in a multicultural
environment that presupposes no absolutes of culture or morality, each group
would appear to have an equally legitimate right either to demand that the schools
respect its ethnic or religious traditions or to insist that it have its own schools in
order to survive.
Thus, some religious groups (often including Christians) find that they cannot
tolerate a school or university system operated on principles antithetical to their
existence, so they set up their own schools. Others do the same for linguistic,
cultural, and religious reasons. Such efforts result in fragmentation, and militate
against the long-term survival of a recognizable uniformity of culture.
Is there a short-term way out of such a dilemma? Short of a new
totalitarianism, there is probably not, for even using the Metalibrary for much
learning may be potentially more fragmenting than unifying. Indeed, it will almost
certainly be used by some at first to promote industrial paradigms such as further
fragmentation and isolation and only later be seen to enable something better.
In the long term, the very existence of the fourth civilization will mean that a
new consensus has been reached, and a new stability may well come to the school
system along with it. But the shape of moral/ethical and societal consensus is still
very much up for grabs in the marketplace of ideas, and it is not now possible to
confidently predict a stable or unified immediate future for North America's schools
and universities.

Is There a Right to Learn?

At first, the answer to this question may seem to be an obvious and
unqualified "yes". Everyone must learn certain basic techniques and background
context to function in society. But as some of the above discussion indicates, it may
be difficult to determine how much each individual has a right to learn, and there
are subsidiary questions:

o Is there an obligation on the part of every person to learn?
o Does any right and obligation to learn extend to the provision and forced
acceptance of twelve years of schooling?
o Does it extend, as asked above, to a fundamental right to become equal in
ability by technological means? ...to be born equal in the first place by genetic
selection or manipulation?
o What limit, if any, is there on the obligation of society to provide training
and offer education, and at what point do both become the individual's
responsibility to society?
o If twelve years of schooling produces an inept, illiterate, and ignorant
graduate, who is responsible? Is it the parents', for passing on a poor genetic
heritage or providing an anti-education atmosphere; or the schools' for not
overcoming the inability or lack of interest; or is it society's in some general way?
On the other hand, is it no one's fault but the student's, or perhaps no one's at all?
o Is there a right to be educated in one's own moral, religious, or cultural
heritage, or is there a right and duty to become part of that of the historical majority
in whatever nation one finds oneself living? Or, is there a correct mix of rights and
obligations in this connection?

In the increasingly litigious society of North America, these questions need to
be phrased in terms of who the defendants shall be in the learning malpractice suit.
For that matter, who should the plaintiff be--the student, the parents, or the
employer or society upon whom the student is eventually inflicted?
Over the longer term suppose that very capable artificially intelligent artifacts
were built. Would they have a right to training because they are able? Would they
have more such right than an incapable human student? More seriously yet, would
the ability to receive training imply understanding? If so, would it imply also the
right to be educated? If so, could it be taught aesthetics? etiquette? morals? Do
history, sociology, or human society have any meaning for such a device? The
answers are not simple; seeking them forces one to go back to the questions of
what it means to be alive or to be human--and the answers shed very little light on
the problem of less able humans.
As previously mentioned, there is here much potential for social troubles, up
to and including violence; the question of what it means to be human is one whose
answer must be universally accepted in the ethical consensus, or the cost in misery
and human lives could once again be great.

Is Learning Sexually Biased?

This is another flashpoint question--one that may well have as many answers
as there are students or teachers. The author recalls one occasion on which an inner
city school teacher from another country held forth at length during a conference on
the subject of the inability of women to learn mathematics. He cited the fact that in
many years of teaching, no female student had ever taken his analysis (pre-
calculus) course. When informed by the author that his own two classes in the same
subject were at that very moment overwhelmingly female (and had been for years),
he reacted with disbelief.
In many cultures, women have been regarded as ineducable. They were
routinely and systematically denied a variety of types of education even in the
nineteenth century Western industrialized nations, and this is still the case in many
parts of the world.
Some authors claim that North American teachers in all grades still give most
of their attention to boys--either because they ask for it more often and more
effectively, or because of a built-in cultural bias of their own. On the other hand, it
may easily be observed in most high school graduating classes that the majority of
graduands are female, and this is overwhelmingly so of the valedictorians and other
medal winners. By the late nineties, women outnumbered men in Canadian
universities by nearly two-to-one. If this latter point were the criterion by which to
judge, it would appear schools were at this point heavily biased in favour of female
students. Since the majority of elementary school teachers are women, such a
conclusion is not only tempting, but also has a ready explanation (though not
necessarily the right one).
Interestingly, however, though women have begun to outnumber men even in
highly competitive disciplines such as medicine, they still are a decided minority in
the mathematical, computing and information sciences, and to a lesser extent in
physics and engineering. Typically, first-year calculus still has more women than
men, but they all but vanish from the discipline by second year. Likewise, the few
women in first-year computing courses are seldom seen at the higher
levels, even though there are many women in managerial positions in the
computing industry.
Why are these things so? Is there systematic discrimination favouring boys in
some parts of the schooling experience, and girls in others? Some claim there is,
and governments through the 1990s launched expensive programs to change
the outcomes, on the theory that equal numbers of both sexes ought to be
successful in each endeavour. However, some other questions about the cause of
such observed differences in outcome have not been carefully investigated. Are
there some innate differences between the male and female brain that affect what
students are capable of learning, or perhaps what they want to learn? Or, on the
other hand, are boys and girls culturally conditioned to differing social roles and
learning modes by the toys they are given as toddlers, so that they become unlikely
ever to develop certain interests? One might suppose, for example, that if a person
has never as a child played with construction toys or been encouraged to take
things apart and put them back together, he or she could never become an
engineer or a physicist. It is certainly reasonable to suppose that if female primary
teachers conveyed a fear of mathematics to young girls, their students would not
likely train for any profession needing that discipline.
Answers to such questions have only sometimes been sought, and then have
proven elusive. Performing such investigations would be as politically difficult as
doing so on the basis of racial differences. This would appear to be one of those
areas of educational mythology and policy where much effort and funds have been
expended to fix the outcomes of an ill-understood problem.

Is There a Technique for Learning?

Many attempts have been made to describe what happens when a person
learns, and how to induce it efficiently. One ancient method of teaching, the
Socratic, attempts to draw out of a student the ability to think via a series of
questions. A question this technique raises is whether every person has the ability to
think at the desired level.
A modern theory, the behaviourist, couches every part of the process in terms
of the actual response to stimuli and the altered behaviour patterns observed. There
are champions for the lecture method, for classroom demonstration, and for the
discovery method. On the other hand, there are critics of all. Teaching has been
attempted with large groups and with small, with "lock-step" lectures and with
individualized progress, with rigid structure and with none, by dispensing knowledge
and by demanding that students discover it. Along with techniques, cognitive
theories to explain how learning takes place have also multiplied. However, there is
little convincing evidence of the general superiority of any one of these theories or
techniques in all situations. Each one of them may be shown to work given the right
teacher, student, and subject material. The challenge is to suit the best available
method to all three, addressing the whole person through a variety of strategic
techniques.
One thing that does seem necessary is to provide students with not just
factual knowledge, but also systematic (and perhaps formal) techniques for doing
their own analysis afterwards. For instance, children can learn to read by
memorizing vocabulary as whole words, but this method has its limitations, because
not every student can memorize very many words, and all eventually run out of
capacity. At some point, children have to learn the techniques of phonics so that
they can analyze new words in a systematic way without committing to memory
each new combination of letters that arises in their reading. Similar comments could
be made about mathematics--only so many number facts and formulas can be
learned by rote; sooner or later analytical methods have to substitute for
memorization.
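The contrast between rote memorization and systematic analysis can be made
concrete in a toy program. In the sketch below (the word list and letter rules are
invented purely for illustration), the rote reader fails on any word it has never
memorized, while the rule-based reader decodes novel words:

    # Rote lookup versus rule-based analysis -- a toy illustration only.

    MEMORIZED = {"cat": "k-a-t", "dog": "d-o-g"}    # whole words: finite capacity

    LETTER_SOUNDS = {"c": "k", "a": "a", "t": "t",  # simplistic "phonics" rules
                     "d": "d", "o": "o", "g": "g", "h": "h"}

    def read_by_rote(word: str) -> str:
        # Succeeds only for words already committed to memory.
        return MEMORIZED[word]

    def read_by_rules(word: str) -> str:
        # Decodes any word systematically, letter by letter.
        return "-".join(LETTER_SOUNDS[ch] for ch in word)

    print(read_by_rules("hot"))   # works, though "hot" was never memorized
    try:
        print(read_by_rote("hot"))
    except KeyError:
        print("rote reader fails: 'hot' was never memorized")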

The question of learning effectiveness is compounded by the fact that it is not
always easy to measure it. If the object is training, the student can be asked to
demonstrate the technique after a suitable period; it is for this reason that there are
examinations and theses. If the object is education, it is not clear how the outcome
can be measured, for the ability to assess ideas is more than an ordinary technique.
Questions of motivation are also difficult to deal with. Some students are motivated
for training by the prospect of social status, employment, or approval. But, what
makes a person want to learn about ideas, and can such a desire itself be taught? In
other words, is there a meta-education that must precede education, and if so,
where does it come from and how? It is the very difficulty of answering such
questions that makes the task of educational administration discussed earlier so
very daunting, and the need to develop reliable techniques for learning so pressing.
If a consensus on the answers to such questions remains elusive after
thousands of years of schooling, what prospect is there of discovering them now,
say, as part of the artificial intelligence agenda? They are so far a part of the human
mystery and will need to be understood much better before any kind of intelligence
enhancer or autonomous artificially intelligent device can be constructed.

Summary and Conclusions

The questions in this section have thus far been presented without many
suggestions for answers. The hierarchical ethical framework developed in Chapter 3
presents an obligation for all involved in the learning process to work in their mutual
interest in order that the learners will become more functional human beings and
society might become enriched. This wholistic approach would tend to increase
emphasis on educational aspects of learning. A Christian would further suggest a
larger obligation to God to become (and to enable others to become) the most able
human being with the talent available just in order to serve Him and to express this
in service to others. On the other hand, a Marxist might aver that the only aspect of
learning worth considering is the extent to which it serves the needs of the society.
But, these motivations for an obligation to learn are idealistic, and there are
other agendas. For example, the technocrat might wish either to raise everyone's
intelligence or to impose a dictatorship of the most able--whichever seemed to be
most efficient. Since such an approach is little concerned with the individual as
such, it would likely mean that training would come to be the chief factor in
learning, because education, and anything else that might produce a "better"
person, might not be regarded as relevant to such a society's needs. Moreover,
there are many pressure groups vying to dictate what it means to be that better
person. Whose agenda will the student be compelled to follow?

10.5 Education and Technology


The purpose of this section is not to examine educational technique in
general, interesting though such a discussion would be. Rather, it is to consider the
effect upon the learning process of the use of specific products of technology, and
to consider in what direction current and projected technologies may influence or
drive institutions of learning.

The Purpose of Technology in Learning

It has long been assumed, with some evidence, that if the number of sensory
gateways to the learner's mind could be increased, so could the efficiency of the
learning process. For example, a lecture on Shakespeare would normally involve
verbal and aural skills, but watching his plays as they are acted out would engage
the visual as well, and therefore initiate more complex and comprehensive learning.
The richest experience would be gained from acting in the play, but this is not an
option available to everyone. Perhaps at some point, it may be possible to build an
interactive feedback to the Metalibrary that would allow the watcher to become a
participant, but this enticing possibility is still very speculative.
Thus, various types of print media and audio-visual materials have been
produced for schools over the years in an effort to expand sensory interaction with
the lesson being learned, and to reinforce its contents. Administrators often greet
such innovations with great enthusiasm, even though in many cases classroom
results are inconclusive.
The examples that follow are not to be taken as the sum and substance of the
application of technology to the classroom environment, but rather as illustrations
of the way in which some of the products of technology have affected it. After all,
the object or machine is not itself the learning experience--it is a support for some
intended lesson. However, the medium in which the experience is presented does
have an important effect, and cannot be completely separated from the motives for
using it.

Common Classroom Materials

To meet such ends, charts, maps, posters, and various graphics materials
made their way to the walls of the modern elementary school classroom. Such
materials become less common as the student moves through the schooling
process, and the walls at the higher levels are often rather bare. Various object
lessons are also available for "hands-on" experiences in counting, exploring science,
social studies, health, life skills, and so on. These can be effective in the hands of a
skilled teacher, but care must be taken to ensure that the desired lesson is
delivered. Some educational theorists think that unguided exploration can too easily
become unfocused play to no particular end. Others hold that unguided exploration
is the central method of learning in early childhood and is necessary for most
children to learn. In the last three decades, this latter theory has been in vogue
twice and gone out of favour both times. It will undoubtedly make another
comeback.
Primary classrooms keep counting tokens available as concretions for the
abstractions of arithmetic. They also might have students dissect owl pellets to
discover rodent skeletons and reinforce lessons on the food chain and life cycle.
Secondary schools will have a wide range of scientific and other equipment and
universities far more, for they require exacting original research.
There can be little doubt that the wide range of common equipment does
make the classroom a more interesting place to be, and that fact alone may justify
its purchase, even in the absence of hard data about its effectiveness. In fact, it is
possible to show that discovery methods of learning can be more effective than
reading about something in a book and then demonstrating what is known.
However, it is not practical to allow the student to personally rediscover the entire
body of scientific or other knowledge. Neither is it possible for every school to own a
piece of equipment to support each item on the curriculum, so there must be some
limitations placed on such methods.
Without some centralized control, moreover, teachers may decide to purchase
equipment largely to reinforce personal interests, and this becomes surplus when
they move to another school. Many are the rooms filled with unused and obsolete or
broken equipment, and few are the budgets that can continue such uncoordinated
purchases indefinitely.
With pressures on budgets increasing, many schools are having to turn to
parental donations and community fund raising for their equipment purchases. This
is forcing them to be more accountable to these new sponsors, and is increasing
community ties to the school after a long period of separation. Thus, the ordinary
day-to-day technical provision for and operation of the schools is no longer the
exclusive domain of their professional administrators, but is once more coming
under the closer scrutiny and control of their local constituency. What is more, a
greater portion of that constituency has more schooling than ever before, and is
both qualified and prepared to ask whether a given piece of equipment or a
particular technique is appropriate. Again, availability of information, which in this
case has nothing to do with computer access, enables wide cooperation among the
interested parties and encourages broad control over purchase decisions.

Slides, Filmstrips, and Movies

That which is too expensive to purchase for the classroom can always be
photographed and shown on film. Though still lacking in the ultimate experience of
personal participation, such media have the potential to bring techniques and ideas
to the classroom from effective teachers, and thus to supplement locally available
resources. They have the disadvantage of being inflexible, impossible to question,
and subject to rigid scheduling constraints as several classrooms compete for the
same material at the same point every year. Since such media may remain unused
the rest of the time, and since they require an investment of time, training, and
inconvenience on the part of the teacher, their cost-effectiveness is difficult to
establish. They also take years to justify their purchase, so school audio-visual
centres tend to become filled with 30-year-old material of doubtful relevance that
still makes its dutiful circuit about the district once or twice a term--often to the
great hilarity of the students who are subjected to it. Likewise,
libraries at all levels, including university ones, can easily become filled with
obsolete materials that may physically prevent new arrivals, but that no one has the
heart to throw out.
Furthermore, if the message of the film is not immediately reinforced by the
teacher, say, using supplementary print media or by requiring some feedback from
the students, the use of such technology is likely to be regarded by them as a form
of entertainment rather than as a part of the instructional process. Since such
materials are often of much lower quality than the television and arcade
entertainment to which students are accustomed, they have difficulty taking the
lesson seriously. In addition, parents tend to view frequent use of films with
suspicion. They are not, therefore, asked to fund such purchases, nor are these
made at the school level, for the cost is too high. Since purchase decisions are
made centrally, there is additional resistance by classroom teachers to some of this
material, and they often turn to media over which they have more control.

Radio and Television

The high hopes that were initially expressed for radio-assisted instruction
were later repeated for television. In both cases, broadcast lessons were supposed
to revolutionize learning, as school boards established their own transmitters and
developed series of lessons that could be taught to an entire district at once. These
expectations were followed in both cases by disillusionment. In the case of radio,
the lessons suffered by comparison with those offered by a live teacher, who could
be seen and talked to, not just heard. Television added the visual dimension, and
was a "hotter" medium, but the participation was still relatively passive, the
scheduling inflexible, the costs very high, and the return on investment impossible
to measure. Moreover, the school system has always had very high inertia, with
many of its teachers uninterested in changing their ways. Faced with the extra
responsibility for scheduling their classes and lessons to suit what was available on
educational radio or television, few such teachers went beyond the experimental
stage with either, and eventually both disappeared from most classrooms.
Moreover, at the university level, neither was ever used extensively outside the
faculties of education, so the broadcast media have had little effect on this level of
learning.

Videotape and Video Disks

The advent of videotape has revived television as an educational medium to a
great extent, for it allows lessons to be given once and recorded, rather than
broadcast. Taped lessons can be played at the teacher's convenience and do not
require transmission facilities. However, it is not clear that videotape has important
advantages over film, except in cost, nor has it been established that students
regard it as more important than the home entertainment it so closely resembles.
While teachers are already making extensive use of video recordings, it will be
some time yet before their impact on learning can be assessed. The potential seems
great, but it also did for its two older cousins, so a degree of caution would seem to
be in order.
The perils are also great, for videotapes are often made in schools with little
regard for copyright law, just as are copies of printed materials. Programs are
commonly taped off the air for later classroom showing without asking permission,
or copies of commercial products are made and retained in the school for re-use.
Those who engage in such practices need to consider carefully what it is they are
really teaching their students.
At the university and technical school, the use of television and video is a little
different. Lecture scale monitors are now being found useful in classrooms where
computing and information retrieval skills are demonstrated, and video also has
great potential in the recording and viewing of complex or rare techniques, such as
those to which a medical student may need exposure. Such larger institutions also
will often have the facilities to make their own materials, and rely far less on those
taped from the commercial media.
As the 2000s begin, DVDs are replacing videotape, but their purpose and
potential use are similar, and there is no reason to suppose they represent an
educational breakthrough, even if they are a technological innovation.

Computers in the Classroom

The latest technological marvel has also been touted, as others before it, as
the answer to many learning problems, and judging by the multi-billion dollar sales
of hardware and software to schools of all types, there is widespread confidence in
the value of this technology to deliver that answer.
At the university level, computers are indispensable to both the research and
learning processes. No one who has grown accustomed to the enormous power
offered by the simplest word processor would trade it for a stick with graphite on
one end and a piece of rubber on the other, or even for a typewriter--if one could be
found on a modern campus. Neither would the business faculty want to trade
electronic spreadsheets for the paper kind, or go back to the hand-calculated trial
balances of pre-computerized accounting courses. Their students will be working in
office-like environments after graduation for the most part, so the computer forms
an essential component of their training. Programming these machines has also
become a discipline of its own, and though not yet very mature as such, has already
carved out an important place in the university curriculum. It should be noted,
however, that while there is an educational component to some of these activities,
for the most part they involve training--somewhat of a departure from the
traditional role of the university as repository, transmitter, and generator of ideas.
Much of this new business and computing curriculum would once have been thought
to be the province of the technical school, but such is the power of the new
technology to transform society that computing studies has become accepted at the
university without its credentials as an academic discipline being questioned.
Similar remarks, to somewhat less effect, could be made of the high school.
There also, the computerization of marks gathering, attendance, student records,
and timetabling has proven an important time and money saver in an
environment that is severely stressed financially. These schools all have the
advantage of having technically oriented mathematics or science specialists ready
to seize upon the machine for the learning of computing or doing computations, and
a clientele mature enough to take advantage of such instruction. They usually have
appropriate business courses to take advantage of the typical applications packages
and train students in their use. On the other hand, teachers in other disciplines have
not found them particularly useful as yet, because of a lack of both training and
suitable software. Also, network access has yet to come into its prime at this level
because of the relatively high cost of school infrastructure, and so secondary school
computer-use mirrors that at the university, on a reduced scale. In both cases its
larger potential for other disciplines is unrealized, and may be for some time.
However, the picture is quite different in the younger grades. There, a variety
of uses for the computer have been proposed and tried, but in this context, the
computer remains a machine for which no effective technique has yet been found. There
are several reasons for this. The first is that elementary teachers are not always
qualified in either mathematics or science and may have little interest in technology
that requires technical expertise. The second is that there is little need to study the
machine for its own sake at this level, so no programming need be taught.
Moreover, computers are not required for computational purposes in elementary
school. Programming was for a time advocated by some for use at this level, but it
was never clear what usable skills such exercises imparted, so this too fell into
disfavour. The third is that traditional word processing use is not relevant unless
schools consider having primary students learn typing instead of, say, writing--for
they would be worse off attempting to apply word processing skills if they could not
touch-type. The fourth is that the flood of "educational software" that hit the market
in the 1970s and 1980s did not prove to be very useful. As programming for its own
sake or as a teaching tool, such material was at first uniformly poor--the result of
being written either by teachers with amateur programming skills or by gifted
computerists who knew nothing about teaching. The computer language LOGO was
once advocated for teaching programming to children, but its implementations were
clumsy, and neither teachers nor students were convinced of its value in the
instructional process.
Have the billions been wasted? If the testimony of many idle elementary
school computers with no one to use them is to be believed, the answer is "yes," at
least for many schools.
What went so wrong? The same thing that often does in business, where the
computer is also commonly regarded as the solution to a badly run and inefficient
operation. Computers cannot, by themselves, solve the problems in an organization;
they can only exacerbate them by making them occur faster and by entailing a
capital outlay that turns out to have a negative return. Schools have been under very
heavy criticism of late, and the computer came billed as an easy fix for their
difficulties--one that the public and funding agencies were quick to agree to--too
quick, for they did not consider whether the solution had the potential to be
effective. As in business, it is critically important to have sound, well-run
systems in place before introducing computers, because machines can never by
themselves help a badly managed operation. The vague and enthusiastic
assumption that computers were the wave of the future, coupled with the usual
bandwagon effect, has caused schools to spend enormous sums on these
machines with little research done on how they could be used effectively. At this
point, few elementary teachers have any clear ideas what to use them for, so unless
an elementary school is fortunate enough to have a computer specialist available,
the machines may well sit idle. This problem is compounded by the rapid changes in
computing technology, which can render a school's entire inventory of such
equipment obsolete in one or two years, and make it cost ineffective to replace
much of it.
Some recovery may be possible as computer software becomes easier to use
and therefore more accessible to young children. A limited set of word processing
skills, some art and music, and certain information searching may yet prove to be
valuable; but these applications may arrive too late to avoid having many of the
existing computers join other little-used machines from past years on the
educational technology scrap heap--at least as far as elementary schools are
concerned.
On the other hand, what could be right about a decision to use computers in
elementary schools? In a word--the Metalibrary. If a school can afford the
infrastructure, the opportunity to perform and present research using a world wide
data store is an educational key to unlock the techniques of the future for today's
students. Used in this way, networked computers have enormous potential to
enable even very young students to obtain the very life skills that will be required in
the information society. It may be some time before the actual data available on the
system is complete, reliable, and easily accessible, but the Internet of the late
1990s is at least a primitive beginning, and a well-educated teacher can assist the
students to sift it for relevance.
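What such sifting might amount to, in its most primitive form, can be suggested
by a short sketch. The one below (its document store and scoring are invented
for illustration; an actual Metalibrary-scale search would be vastly more
sophisticated) simply ranks entries by the vocabulary they share with a query:

    # A toy relevance sift over a small document store -- illustration only.

    documents = {
        "owls": "owl pellets contain rodent skeletons from the food chain",
        "math": "counting tokens make the abstractions of arithmetic concrete",
        "maps": "charts and maps support social studies lessons",
    }

    def relevance(query: str, text: str) -> int:
        # Score a document by how many query words it shares.
        return len(set(query.lower().split()) & set(text.lower().split()))

    def sift(query: str) -> list:
        # Return the names of matching documents, best first.
        scored = {name: relevance(query, text) for name, text in documents.items()}
        return sorted((n for n in scored if scored[n] > 0),
                      key=lambda n: scored[n], reverse=True)

    print(sift("rodent food chain"))   # ['owls'] -- only the relevant entry survives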

Teaching Machines

Various audio-visual and print media have been combined in a number of
attempts over the years to build machines that could automate some portion of the
learning process. These have included hand-held wands touched to metal studs
through holes in question cards that lighted or rang a bell when the stud for the
correct answer was touched. Later, there were slide sets keyed to experiments or
post-tests, high school and university language labs, and in more recent years,
various "drill-until-it-kills" software packages, often in the form of computer games.
Recent attempts involve multimedia integration of computers with videotape or
optical disk (CD-ROMs and DVDs) to provide both textual and graphical lessons
based on a very large database and with multiple paths through the material
depending on the student's responses. While this technology is not widely available
as yet, it probably will be in the near future. If it is used, it apparently has the
potential to reduce labour costs somewhat, but also to increase capital ones
dramatically. Like all other such technologies, it will have to prove its worth, and
this may not be easy; thus far machinery has not done well in the classroom.
Moreover, there are few meta-techniques of proven worth for evaluating the process
of learning, and in the absence of convincing evidence of effectiveness, complex
machines are unlikely to receive a warm welcome from many teachers, particularly
at the lower grades. Rather than struggle with such devices in their own classrooms,
they might prefer to wait for the day when interactive multimedia lessons are
available prepackaged from the Metalibrary.
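At bottom, the "multiple paths" idea is a branching graph of lesson nodes keyed
to student responses. A minimal sketch (its content and structure wholly
hypothetical) might look like this:

    # A branching lesson graph -- a minimal, hypothetical sketch.

    LESSON = {
        "q1":     ("Is 7 a prime number?", {"yes": "done", "no": "review"}),
        "review": ("Review: a prime has exactly two divisors; 7 has only 1 and 7.",
                   {"ok": "q1"}),
        "done":   ("Correct -- lesson complete.", {}),
    }

    def run(answers):
        # Walk the graph, taking whichever path each canned answer selects.
        node = "q1"
        for answer in answers:
            prompt, branches = LESSON[node]
            print(prompt)
            if not branches:
                return
            node = branches[answer]
        print(LESSON[node][0])    # show the final node reached

    run(["no", "ok", "yes"])   # a wrong answer detours through the review material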
School jurisdictions that welcome these machines so enthusiastically that they
replace human teachers on a large scale--as a few have attempted--may soon
realize that it is untried and unproven technology. Heavy commitments to the
unknown will be hailed as visionary if they succeed, and excoriated as foolishly
shortsighted if they do not. Unfortunately, it is real human beings who are being
experimented with, not just techniques, and this raises the stakes uncomfortably
high for such commitments. If the earlier experiences are any guide, the new
teaching machines ought to provoke a reaction of extreme caution. History would
suggest they might instead be met with wild enthusiasm and high levels of
spending.

Summary

The effect of these products of technology used in modern schools and
universities has been mixed. It is difficult to discover whether it has improved the
efficiency of the learning process, or whether the continuing problems of learning
institutions would be worse without it. Clearly, a better technique for measuring the
effects is needed, and so are more carefully defined goals for their use. Methods will
also have to be found to ensure that the use of such hardware in the classroom
does not exacerbate socioeconomic differences, for some schools can afford it, and
some cannot.
As observed at the beginning of the section, these are only examples of the
use of equipment, and this discussion does not encompass all of the term
"technology." Indeed, if the extension to "technique" suggested in Chapter 2 is
used, one must include methods of administration, classroom management, school
organization, and all the quantifiable methods employed directly or indirectly in
specific schooling or general learning activities. While new technologies and
paradigms of the information age affect all these things, there is insufficient space
here to consider them in detail. Interested readers are encouraged to research
further such specific topics. It is also important to ask whether young people ought
to be taught by machines or by role-model adults, and the answer to that question
may determine whether much more, if any, of the learning process will be
experienced through machines.

10.6 Schooling in the Fourth Civilization


There are other factors influencing schooling, and it is time now to consider
the effect of major new technological revolutions on the organized school system. If
the specific machines discussed in the last section have had mixed and ambiguous
effects on schooling, the same cannot be said of the societal changes being driven
by the major technological revolutions discussed earlier in this book. Organized
schooling did not exist at all in hunter-gatherer societies, and was an individual or
small group affair arranged directly with the tutor in agrarian ones. During this time
it became possible to divert surplus production to the systematic maintenance of
scholars who could spend their time generating and examining ideas. The greater
the prosperity, the more such people could be afforded, and thus the university
came into being, and was an important fixture before other types of institutionalized
schooling existed. Indeed, part of being a member of the elite ruling class came to
mean either having an education, or having an educated advisor available, but
there was no motivation even to teach the general population how to read, for they
had little use to which to put such skills.
It was only with the economic and social demands for the schooled labour
force of the industrial age that it became necessary to organize grade school and
technical training on a widespread, even universal basis. As techniques became
progressively more sophisticated, so did requirements for training in their use.
Gradually, higher and higher levels of schooling became necessary in order to
function effectively in the society. First grade school, then secondary school became
all but compulsory, and college began to have a similar imperative by the close of
this period. At the same time, education became, like many other enterprises, far
too large for any but the state to organize and control. However, industrial age
assumptions that drove learning to become mass training in technique, and all but
obliterated education in the process, are no longer valid.
To consider what changes might take place in the next civilization, it is
important to examine the potential effect on learning of all four of the major modern
technological revolutions.

The Computer and Information Revolution

Some effects of the information revolution have already been discussed in Chapter 4
and earlier in this chapter. On the one hand, the reception of computers themselves in classrooms has
been mixed, and they have yet to have practical uses established in the lower
grades. On the other, the information they make available so universally must
eventually change learning dramatically even if only in curriculum. Much more
emphasis will be placed on communications skills and broad techniques--including
those of information--and on ideas, because far less time will be required to develop
deep, narrow specialists, since these will not need to retain as much information in
their own minds.
This new emphasis will, among other things, involve teaching students to use
terminals to gain access to data bases, and ultimately, the Metalibrary. It seems
much less certain that most of them will ever need to know how to program the
machines, or that they will need to learn how to use many business applications,
especially in the younger grades, though the widespread use of word processing has
created a new emphasis on typing skills at a younger age.
In an earlier chapter, it was remarked that the Metalibrary had the potential
to become a universal teacher, storing and reproducing lessons on demand
on every subject and at any level. Perhaps a live teacher could
be simulated, and so could the ability to answer questions interactively. One could
even claim that schools at all levels as they are now known are already obsolete,
and that machines will eventually suffice for all learning. This might sometimes be
the case for adults engaged in continuing training, for they need only to add new
techniques to an existing background of general knowledge. They usually already
know how to do this and are interested in achieving training quickly and efficiently--
not jumping through a series of arbitrary hoops established by a school to flesh out
its programs and provide further reasons for existence. Such incremental skills may
be learned individually and by machine to a great extent, though it might be some
time before this method becomes common, much less universal.
However, the obtaining of essential background knowledge and cultural
context as a child is a different matter. Here, the learner is finding out what it
means to be a human being and how to be a part of the culture. However attractive
teaching machines may be, they are not role models, for they have not experienced
the culture, lived in it, observed it, known new life, cried over death, and pieced
together the memories of a typical human being by living them out one day at a
time. Even if the Metalibrary were to have available the whole vast store of facts, it
would still be something altogether different from a human being. While it may be
possible to obtain facts and background information from a Metalibrary, it is
questionable whether it is possible to learn how to be a functioning human being
from a machine.
The same objections could be made, say, of an ambulatory artificially
intelligent (AI) device built to look roughly human--even if it were smarter, faster,
and more capable in every way than a human. Unless it were possible to certify that
such constructs are identically human, it seems inadvisable to turn over to them the
responsibility of teaching children what it means to be human, for they lack the
capacity to understand and to model the role they are supposedly teaching, and
these are the most important aspects of the learning process. Unless the machine
had understanding and intentionality, it could scarcely teach what either of these
meant.
A similar reservation could be held when it comes to education in general. The
entertainment, creation, and evaluation of ideas is a distinctly human activity, for
ideas are held and examined within the context of a person's total cultural and
world views. There is no reason to believe at this point that a machine can be
provided with anything resembling a human world view. If it were other than exactly
human, then it would either not be capable of sharing the human commonality of
culture, experience, and world view, or it would be able to have a distinct non-
human one of its own--perhaps in common with other machines. It ought not be
expected that a non-human device would share much, if any, of the human
commonality even if it could be regarded as intelligent in a meaningful sense and
even if it could initially be given much (all) human knowledge. What is more, no
such device ought to be expected to have human interests in mind--if it can be said
to have a mind--but rather its own interests, and these are unlikely to include
teaching human children to be the best they can be in order to advance the human
race.
Thus, at all levels, the education of humans in culture and ideas probably
ought to be undertaken by human beings, rather than by machines, even if some
training may be done by machines, and even if there may be short-term economic
benefits to replacing teachers by such devices. The need to be distinctively human
has a higher priority than the need to save money.
If these arguments prevail, what would the effect on schooling be from the
information revolution alone? Elementary schools would show little change in the
first grades, and there might be some use of teaching machines and later
Metalibrary terminals in the higher ones. Secondary schools might split into streams
emphasizing education on the one hand and training on the other, with teaching
machines playing a more important role in the latter. Universities could consider
going back to their original business of education in ideas and not directly use
teaching machines very much at all, though they would use computers even more,
but technical institutes, trade schools, and community schools could use such
devices extensively.
On the other hand, proponents of using technology in learning have always
been persuasive with both the public and the educational hierarchy, and the
possibility that all learning will actually come for a time to be mechanized in the
name of efficiency and lower cost cannot be completely discounted. Since such a
transition, even if only partly achieved, would be extraordinarily expensive, it would
also add weight to an earlier suggestion that more of the funding would have to
come from the private sector. The great initial capital outlay required would also
give such an experiment a life of its own and a necessity to succeed that would in
itself fuel demand to see it through to completion, regardless of whether it could be
shown to be effective. Moreover, private funding would imply private objectives, and
these would be more likely to include training than education, and so more likely to
involve machines in the process. Indeed, the university that wished to be a place of ideas
might find that no one was willing to provide funding for such endeavours.

The Effect of the Second Industrial Revolution

The advent of large-scale robotization of many manufacturing processes, and
widespread use of efficient office machines, will likely continue to obliterate many
jobs and create others, with the new ones tending to be either more technical or
more service oriented. Up to a point, the computing industry will continue to have
very large personnel needs, though computers will eventually be utilized in much of
their own design, manufacturing, and programming. Many field engineers and
technically trained construction workers will be required for new habitat creation
and any expansion off planet. After all, robots have to be much more complex and
mobile to be useful on a construction site than inside a factory. Other people will be
employed in the biochemical and pharmaceutical industries.
Since, as earlier observed, new human workers in such fields can only be
drawn from among those who once would have been content to leave school early
and take a low-skill factory job, there will have to be a much larger and more
effective participation by children on the training side of schooling, and for a longer
time. At the same time, the service sector, and knowledge-related and other
technical industries will all expand, but these changes will create a sellers' market
only for those with the appropriate skills. There might also be more people with the
time, inclination, and need for an education in ideas, and more older workers who
must return to the learning process either for retraining or for an education. These
trends would seem to indicate that a larger percentage of the population at all ages
will be involved in formal learning in the future. Since not all of this activity will ever
be mechanized, even if the full Metalibrary becomes a reality, the result of these
workforce realignments alone is likely to be an increased demand for both training
and educational facilities for the indefinite future, though not necessarily an
increase in the number of teachers, because of the decline in the number of
children.

The Intelligence Revolution

The effects of AI research on learning are very hard to guess. If the most
optimistic scenarios prove to be accurate, there are two ways in which the success
of AI efforts could render moot the entire question of humans learning from
humans. On the one hand, if AI devices are built that are faster and more
knowledgeable than humans, and in the unlikely event that they are also capable of
autonomous decision making, then they might deem it to be in their best interests
to dispense with the human race entirely.
On the other hand, if partially programmable intelligence-enhancing devices
like the PIEA are built, the factual part of learning could take place through a simple
electronic transmission or through the addition of a new ROM (see Chapter 6). A
practicum could then follow to allow the skills transmitted to be experienced, but on
the whole, little interaction with human teachers would be necessary for factual
learning. It is not clear whether ideas could ever be handled in the same way as
facts and skills, for if it is possible to electronically represent ideas and the meta-
idea of evaluating them, then AI might already have been achieved, and these
methods might be unnecessary.
There is a certain efficiency-related attraction to such approaches as this one,
though there is likewise an opportunity for the truly comprehensive statist to plan
the production of trained people as never before. If the state determined it needed
5000 doctors, it need only produce 5000 sets of physician ROMs and sell them to or
implant them in selected individuals. If it wanted a million soldiers, it would do
likewise with a ROM programmed with efficient killing routines. These last
considerations ought to give pause to the idea of taking some of these technologies
to their logical conclusions, however attractive some of them may seem to be on
grounds of efficiency.
A third possibility is that intelligence may become somewhat enhanceable via
implants, drugs, or genetic engineering, but that current teaching methods would
survive. Education and training would then take on a somewhat different character,
but would be natural and straightforward extensions of what is being done now.
All three of these possibilities are interesting, but speculative, for as
previously indicated, the nature of any achievements for all the hard work in AI is
not yet clear. In any event, dramatic changes in learning due to AI work may be
some time in coming; the fourth of these technological revolutions has more
immediate implications.

Effects of the Biospace Revolution

Many of the changes in medicine will have little direct effect on learning.
However, the cumulative result of new life-saving and prolonging techniques, and
the outcome of longevity research will almost certainly have the effect of
substantially increasing life spans. This factor would increase the number of
potential working years, and along with a rapidly changing economy, guarantee a
continued strong demand for re-training. Indeed, continuous training on the job so
that job holders change with their jobs will probably become the norm. Longer life
means more time for ideas as well, and it seems likely that it will result in new
pressures on the educational part of the learning system, provided that people
want, and are able, to continue entertaining new ideas at an advanced age.
One result could be a considerable increase in the demand for learning facilities at
the university and technical school level.
However, longer life creates population pressures and this, combined with
other economic factors, will put very strong downward pressure on the birth rate.
This long-term trend is already clear in Western countries. There will continue to be
fewer children, fewer schools, and less need for teachers. Since the skills and
expertise involved in training and educating adults are different from those involved
in teaching children, surplus teachers cannot simply transfer to other levels without
themselves retraining. Thus in many jurisdictions attrition through resignation and
retirement may be insufficient to match staffing to the greatly reduced demand, and
there could be layoffs as well.
Paradoxically, there could be dramatic increases in short-term demand for
teachers in newly developing areas (new towns or suburbs). Families with young
children, or who are still childless, are highly mobile, and are among the leaders in
migration from the city core to the suburbs and other regions for economic
purposes. This creates a demand for new facilities in some areas, even while those
in others are being closed for lack of use. Moreover, in many parts of North America,
a high percentage of the teaching force is close to retirement age and must be
replaced over the next few years. Even more important is the fact that in much of
the world it is not the transition to the fourth civilization that is underway, but the
one to the third. Thus, outside the already industrialized West, the population is still
increasing, especially in the cities; it is still very young; and it is still relatively
uneducated.
All these factors taken together may mean that there is little cause for
concern in the overall teacher employment picture in the near term. However,
teachers will have to scramble from one level to another and from one geographical
area or country to another in large numbers to stay in the occupation, and many will
decide it is not worth the effort and will find other employment.
does present a unique opportunity for standards to be raised and for teaching to
become professionalized, but in the midst of such turmoil, it may be that this aspect
will not receive much attention.
In the long term, the need for K-12 teachers in the previously industrialized
countries might be very much less, even if governments do not impose limitations
on births. At the same time, training schools and universities could need far more
teachers, and they may obtain these by retraining and re-educating some of the
ones they have temporarily placed in the K-12 schools to take care of their short-
term boom. In addition, developing parts of the world will need K-12 teachers in
great numbers and for a longer time than Western ones, unless arbitrary birth
control measures are adopted there. However, this would only create a demand for
Western-trained teachers if language and culture were not regarded as barriers.
Should the habitat-expansion scenario develop to any great extent, the formal
learning system may actually expand in absolute terms, even though the K-12
portion eventually seems likely to become a much smaller part of the overall
educational picture.

The Overall Picture

Overall trends affecting the size of formal learning systems are mixed, with
some factors increasing demand, and some reducing it. The number of trainers in
specific techniques will undoubtedly increase, though these too will need constant
updating as techniques in all fields continue to change rapidly. The number involved
in education in ideas seems likely to increase as well, but there could eventually
be sharp declines in the workforce at the K-12 level. Certainly, there will be very
substantial pressure to cut costs here, and this is also the sector in which private
participation will be most difficult to obtain.
One possible means of cutting costs is to move to a more effective utilization
of resources by using schools for twelve months instead of nine or ten. In theory this
could produce a fifteen percent efficiency improvement. In practice, there would be
many difficulties to overcome in making such a change, and great resistance on the
part of families and teachers to the potential loss of summer vacations. The future
of this oft-proposed (and just as oft disposed of) idea is still cloudy.
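The arithmetic behind the fifteen percent figure is easily sketched. The
following back-of-the-envelope calculation (written in Python purely as an
illustration; the ten-month year and the assumption that plant costs are fixed
whether or not the buildings are in use are assumptions of this sketch, not
data from any study) shows where a number in that range comes from:

    # Year-round schooling: rough check of the claimed efficiency gain.
    # Illustrative assumption: a school plant costs the same to own
    # whether it is used ten months of the year or all twelve.
    months_now = 10        # months of instruction, current calendar
    months_yearround = 12  # months of instruction, twelve-month calendar
    annual_cost = 1.0      # normalized fixed cost of the physical plant

    cost_per_month_now = annual_cost / months_now
    cost_per_month_yearround = annual_cost / months_yearround
    saving = 1 - cost_per_month_yearround / cost_per_month_now
    print(f"Fixed cost per instructional month falls by {saving:.0%}")
    # Prints: Fixed cost per instructional month falls by 17%

A gross figure near seventeen percent, eroded somewhat by the added maintenance
and staffing costs of summer operation, is consistent with the theoretical
fifteen percent improvement cited above.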
Since present schooling practice is highly labour dependent, great cost
savings could also come from a substantial reduction in the relative number of
teachers, by whatever means. They might in turn respond to this in one of two
ways. They could further unionize and refuse to entertain any cuts in staff under
threat of a strike, insisting that staffing levels remain as they are regardless of the
use of machines or the number of students. This reaction would be consistent with
the economic self-ism and tendency to fragment so prevalent in the late industrial
period. It would also promote both the privatization and the automation of
education, for strikes by teachers are easily seen by students and parents alike as
betrayals, and could both lower teacher status and promote a determination to
replace at least some of them at any cost. On the other hand, teachers could, even
in the face of a reduction in their numbers, professionalize and attempt to raise
their status--something that would take much more courage and foresight, but that
would be more in accord with the spirit of the information age.
The trends to more local and participatory forms of government discussed in
Chapter 9 will also have a substantial impact on schools at all levels. These ought to
mean that more decision making authority is devolved to the local level from the
state, and that many individual schools will have far more control over budget and
hiring than in the past. Along with greater local control will go greater local
supervision, and this may mean higher expectations, and ought also to promote the
professional model for teachers.
At the university level, the situation is more complex, but there is also
greater independence, flexibility, and understanding of change. Thus, in theory
at least, there is a greater potential to discern these forces and restructure to meet
the needs. The potential for this to happen is the subject of the next section.

10.7 The Role of the University

Many of the changes discussed here could have a dramatic effect on the
modern university in the long run. The original premise of the university was that it
was an educational gathering, in the sense used in this chapter. That is, its special
concern was the development, examination, and transmission of ideas. Training in
technique was done elsewhere--either on the job, in technical institutes, or as a by-
product of some other form of schooling.
The ideal university was free from any constraints of technique or utility and it
was free to question any idea, presupposition, or world view; it neither leaned upon
nor was very much obligated to produce a specific product for the broader society.
It did depend on that society for funds and students, but it was free to go its own
way intellectually, even if it attacked the very structure that gave it birth. This "ivory
tower" separation of intellectuals from ordinary life was not a bad thing, but a
necessary one, for only through a relatively free detachment could objectivity be
achieved.
Whether this ideal ever existed apart from theory, even for a short period, is
debatable. The first universities were intimately connected with and dependent
upon the church. Later, many had mandates deriving from a local ruler, and served
the state--if in no other way, by loaning it intellectual prestige. The university has
always depended on the broader society to supply it with a cultural context, a
collection of scholars, and the resources to carry on its investigations even when
there was no economic return. At the same time, the relentless pressure of the
search for efficient technique, which has been a hallmark of the industrial age, has
gradually re-created the university in a very different image from this original,
perhaps never-achieved ideal.
Although some academics still attempt to maintain what they hope is a
detached attitude from the broad culture, the university as a whole has gradually
adopted the search for technique as its own mandate. Thus, university-trained
scientists major in technique. In fact, today they study little else, for philosophical
questions about the meaning and legitimacy of their discipline and its place in the
whole spectrum of knowledge-gathering activities are of little interest to the person
whose whole orientation is the empirical method or the economic survival of the
department. The gestalt is one of barely suppressed excitement at the prospect of
personally finding out something never before seen or touched; ideas are divorced
from the asking of whys about the validity of what is being done and instead are
limited to mechanical explanations of empirical data. The metaphysical studies are
generally ignored as uninteresting or unimportant, or even dismissed as
nonexistent.
At the same time, the prestige of the university has been borrowed for other
technical studies, for it has also become the home of computing science,
engineering, teaching technique, and the technical study of economics and other
social disciplines in a manner that attempts to mimic the methods of the natural
sciences. The humanities have also subjected their subject matter to technical
analysis, with linguistics becoming more of a science than an art, and research on
literature sometimes tending to concentrate on computerized dissection of the great
works to discern the supposed involvement of many hands in the writing of them.
Even theology has been invaded by such "higher criticism"--by which is meant a
similar technical analysis of the books of the Bible. The fact that such efforts nearly
always accomplish little other than to confirm the researchers' presuppositions
about disputed authorship and content does not slow such work down. In fact, it
increases the opportunities for learned scholarship, for it seems to be possible to
develop a pseudo-technique to establish every possible hypothesis about a piece of
literature. The older the work, the less likely there is to be any parallel material to
corroborate its professed authorship or content, and the more freedom there is for a
modern researcher to create and apply such techniques to the desired end.
Most universities have gone much farther with theology--they have dismissed
it altogether from the curriculum as an intruder in the domain of technique--this,
the discipline that was the cornerstone of the original Western university movement.
Theology, for its part, has obliged by removing itself to seminaries and schools of
religion that are no longer associated with the unfriendly university, and for the
most part, ignoring it altogether.
But theology is not the only embarrassment to the modern university; so is
the whole historical and cultural context that gave it birth, and this too is now
deprecated if not denigrated. Consequently, courses in Western culture, while they
make an occasional reappearance, have been systematically removed from the
broad curriculum, along with the study of philosophy and ethics.
Clearly, deconstructive scholarship is on all fronts destructive not only of the
historicity, content, and meaning of the work studied, but also of the relevance of
the discipline doing the study. Once the subject matter of a group of scholars has
been deconstructed to that point, they themselves are easily seen as irrelevant to
society and so to the school. If a discipline has no important ideas, why should
people study it? If theology, history, philosophy, ethics, and the humanities all
likewise have no answers to the meaning questions, why operate a university when
technical schools are cheaper, more efficient, and more immediately useful? If
students have no idea where their society has come from, how will they decide
where to take it?
Indeed, it has become unnecessary for a university graduate to have even a nodding
acquaintance with the great thinkers and writers of the past; their views are
considered inadequate merely because they are old. Also, a technically oriented society
cares most about the day-to-day pursuit of what it regards as being practical,
functional, and relevant; that is, doing rather than thinking. What matters is the
development and analysis of ever more sophisticated technique, and such pursuits
are expected to supply their own reasons and meanings. The modern university, by
and large, has adopted the technical philosophy that apart from empiricism there is
nothing, or at least, there is little of value, but at the same time it no longer
questions what constitutes "values" in the first place.
Allan Bloom, himself a philosopher, argues most cogently (The Closing of the
American Mind) that North American universities in particular have discarded their
formative culture in favour of a nihilism and relativism in which there are no absolutes, no
values, no culture, no religion and no education worthy of the name. He accuses
them of blindly following popular culture and the latest trends in pseudo-
intellectualism without pausing to examine the ideas they accept to see if they have
any content. In his view, the universities have betrayed their intellectual heritage,
debased culture, and are pursuing they know not what. He believes they have
cravenly caved in to the nihilistic demands of "student causes" because they no
longer had a purpose of their own, and the thing they offer called education has
become devalued to the point of worthlessness. Only the domains of the sciences
and technologies have escaped the abandonment of academic standards, because
only they can measure the outcome of what they propose to do and only they have
confidence in what they are doing.
He could have added that since what they primarily do is train in technique,
science and engineering faculties cannot be expected to defend the traditional
educational domain, or to believe that they have any common cause whatever with
the rest of the university or its traditional educational goals. Bloom suggests that of
all the humanities, only philosophy is left as the guardian of ideas, and its role has
become much like that of a museum keeper, taking students on guided tours
through the musty and discredited past of their culture--all the while fearing the
wrath of the radical left that controls the university lest they permit their charges to
touch anything there, much less evaluate it or embrace it. Any of the older writers
whose works can be labelled as racist, sexist, or religious can be banned from the
curriculum without further examination for content or validity, and if no such
reasons can be discovered, they could always be accused of being irrelevant
because of age alone. Not only do these attitudes tend to render the university
irrelevant, they also ensure that the control ideas of the dominant intellectual
culture cannot be examined and will not be challenged. Thus, when authors such as
Emberly--Zero Tolerance--complain about the systematic politicizing of the
university, they may be discussing symptoms rather than causes.
Since it is always possible to discern cultural overtones in the most abstract
and philosophical of work, and much of it cannot be understood without that
context, it is always possible to bring culturally-based charges against the old
writers and find them guilty of failing to support modern causes. Bloom concludes
that North American academics have taken the same journey into an unthinking
nowhere as their European predecessors before World War II, and are blithely
unconcerned with where this nihilism may lead them, for they do not study history
any more either. Such trends can be seen in modern fiction as well, whose authors
often write of the past not by holding up for examination the values and ideas people
actually espoused in those times, but by projecting modern ideas and conflicts into
characters who almost certainly would never have entertained them.
Bloom has his critics of course, and these believe that he has either misstated
or overstated his case. Since he is a philosopher, his lament of the decline of his
own discipline and his promotion of its revival could also be viewed as somewhat
narrow self-interest. Moreover, his comments on the scientists of academia carry
with them the flavour of one who is a distant observer, rather than a personal and
professional acquaintance. Others undoubtedly think there is nothing wrong with
either nihilism or the casting of the culture adrift from the past, and could easily
welcome his observations while rejecting his criticisms.
But like C. P. Snow with his 1959 "Two Cultures" critique of academia,
he appears to have touched a raw nerve; to have stated what many people had
come to believe or perceive even though they had not articulated it. Many
academics, in common with the whole culture, have indeed discarded idea
examination for technique. They have also separated themselves from the human
aspect of their work, that is, from student contact. Retreating into the safety of their
tenured sanctums, they have left the work of teaching undergraduates to graduate
students and untenured temporary instructors. It has become too easy to redirect
the university apparatus that was designed for the process of careful thinking about
ideas into the pursuit of ways to make money, and to quote one another in an
incestuous circularity that gives an appearance of learned scholarship without
substance.
While he may be criticized on many points of detail, the broad thrust of his
criticism is factually correct, for the modern university has become a creature of the
technique-obsessed industrial age--after all, it has found its niche within the social
and technical apparatus. Whether his suggestions about the consequences of all
this are appropriate or not is another matter, but there is no denying the
intimate relationship between the modern university and modern technology.
This relationship goes beyond the specific disciplines studied by it or ignored
by it, for it is also expressed in its ties to the broader society. For example, what
Eisenhower first termed the "military-industrial complex" in his 1961 farewell
address has since become a military-industrial-university axis. As remarked in Chapter 2,
a large percentage of the funding for technical research in the university comes
from government defense projects. These are sometimes directed to basic science,
but their major thrust is often the development of specific military hardware. The
U.S. Strategic Defense Initiative (SDI), or "Star Wars" project of the 1980s drew
much comment and criticism, but is typical of much military spending on research,
and is not by any means unique. For the sake of the research funds, its agenda was
readily adopted by many universities. In such ways, whatever independence the
university may have once had, it has come today to be tied very closely to
government funding; its ability to pursue knowledge for its own sake has been
severely curtailed, and its freedom to speak out is largely ceremonial.
New connections are being made all the time to new industries. As economists
and professors of commerce have found in joining with colleagues in engineering,
physics and chemistry, there is much money to be made in selling consulting
expertise, techniques, and products to the broad marketplace. They have been
eager to meet such challenges, and the web of connections to outside interests has
been woven ever closer in recent years. Their discipline has even been renamed,
and is now the study of business; that is, it is now concerned about technique more
than it is about people or even social institutions. Questions are occasionally asked
by some traditionalists about whether the university is an appropriate location for
the study of business technique, but such voices are drowned in the flood of
applicants for places in such schools, and this creates a compelling economic
argument of its own--it allows some threatened universities to survive what had
been a declining enrolment.
Biotechnologists have followed suit, and begun marketing the products of
their research labs via their own companies or in alliance with existing
pharmaceutical firms. Their work too is frequently tailored to the marketplace, and
they borrow business techniques to enhance their prospects as vendors. Like the
members of other disciplines, these are no longer academics in the old sense,
trading in ideas; they have become entrepreneurs in generating and selling patents.
All this activity has made university administrators consider whether the
institution itself ought to receive a share of the income. Thus, there are starting to
be more joint ventures, profit sharing, and quid pro quo arrangements with industry
that involve profits returning to the university. Reductions in public funding are
increasing the pressure to find other sources of revenue, and business and industry
need more than just a tax receipt for their money. It seems, therefore, that the
existing research institutions, especially in the sciences, engineering, computing,
and business, will be under long-term pressure to become more closely associated
with industry all the time. This may be a good thing if it relieves the public purse,
but it is sure to enhance the role of technique in the university and push it even
more in the direction of training and away from education. After all, privatization of
this type could be very hard on those university departments that have no
commodities on which they can obtain patents, but whose stock-in-trade remains
ideas.
However, as already noted in this chapter, there are other long-term trends
pointing to a revival of education as well, because less time will need to be taken to
learn technical facts, so more will be available for a broad consideration of ideas.
addition, the more people with time available to think, the more thinking that could
get done. However, there is no guarantee that this will take place, for many people
may not want to be confused by ideas. After all,

You can lead a son or daughter to Horace, but you can't make one think.

Thus, while there may be a window of opportunity for the university to revive
its traditional role as a forum for the enquiry into ideas, this niche could be
preempted by other institutions or vanish altogether.
For example, electronic discussion forums are even now being conducted on
the Internet, and are certain to be included in the Metalibrary of the future. These
are much more open and free-wheeling than the university lecture hall, and though
quantity dilutes quality in this medium, it is impossible for the possessor of a
doctorate and chair, say, at Harvard to use paper credentials and position to bully
the other participants into accepting ideas. It is possible for a thoughtful janitor
to participate with no formal education past the third grade, but with a big reading
list. The slogan is "On the Internet, nobody knows you're a dog." It is too soon to
determine where this will lead, but it is possible that much larger groups of people
will take over the philosophical roles of guardians and evaluators of ideas from the
universities and exercise these independently of academia. This would require
many changes from the late 1990s version of the Metalibrary called the Internet, for
in the latter's news and discussion groups there is often little that passes for rational
argument, and the bullying for politically correct views is cruder and more blatant
even than in the universities.
Another candidate to take over some of the university's late industrial role is
the new-style corporation that does much of its own training and re-training. Any
substantial shift of training in technique to the business sector itself will leave the
university without many of its industrial-age clients and groping about for a new
identity.
Such trends are not likely to go unnoticed, however, and the universities may
attempt to recover some of their traditional function. It is possible to make a case
for an immediate and long-lasting return to a very broad form of liberal arts
education in the universities, based solely on the observation that the day of the
narrowly trained, fact-knowing technical specialist has passed, and that the future
belongs to the fact-finding and integrating knowledge worker of much wider
interests and adaptability. At any given time, such people could still be regarded as
specialists because of the results they produce, but there will be an important sense
in which they are actually generalists.
The challenge for the university will be to find new ways to integrate
education in ideas with training in technique. Their graduates must become
evaluators and potential users of ideas and techniques in general, rather than be
wedded to a small selection of both. This argument is even stronger if one concedes
to the teaching machines and corporate sector some of the on-going training in
specific techniques, but reserves education almost entirely for human teachers, for
then the university must either integrate its traditional role back into its agenda or
become little more than a small branch of the Metalibrary. This could happen
quickly for some institutions that are unable to hire from the shrinking pool of
potential computing science professors. There is too much money to be made
elsewhere for people with such qualifications to settle in to academia.
How likely is such change? The momentum built up in the industrial age
suggests that the trend to specialize in technique at the university will continue and
that its relationship with industry will become closer. Information age paradigms, on
the other hand, suggest that there will be a renewed need for education as well--
one that could possibly be filled by the universities, but will not necessarily be.
History seems to suggest that institutions have a tendency to allow their own inertia
to resist even inevitable changes, and that not many of them survive radical social
alterations. Whether this will be true of the university remains to be seen. Perhaps
some of them will die slow and painful deaths, or become absorbed by the private
sector, and new ones will be established that suit the new paradigms. One
possibility is that universities will become private umbrella consortia of academic
professionals offering their services electronically and with no physical campuses at
all. An easy prediction is that the traditional tenure system will change radically or
even vanish, for the universities will have to have flexibility in order to compete in a
world characterized by rapid change. Moreover, their academics will have to involve
themselves more personally with students if these institutions are to remain in the
idea business at all.
It is also worth observing that the role of many private universities is in some
doubt. While on the one hand, there is pressure to increase funding from private
sources, there is also the tendency to employ the same "star system" that applies to
sports and entertainment figures--established institutions with gilt-edge reputations
receive the new private funding and the best students, while all others suffer cuts in
public money and attendance. In addition, many private colleges and universities
have merged or folded because they were unable to maintain a distinctive character
sufficiently great to allow them a niche, and they were equally unable to compete
on general terms. It seems likely that this trend will continue. Most of the smaller
institutions cannot hope to compete with the larger ones in changing times, unless
they have a clear distinctiveness that extends to their student population and to
their constituency--including funding. In addition, there is little for small schools to
gain by seeking support from the government, but they may have to seek it from
the private sector.
For example, there are a large number of church-related colleges and
universities in North America. To the extent that these maintain their historical ties
to their founding constituencies and simultaneously offer sound academic
instruction, they have the opportunity to grow and prosper. But if they sacrifice
either scholarship or their traditional orthodoxy, they will lose their reason for being
in short order. Identity problems in such cases cannot be kept secret in the
information age; such an institution can vanish from the scene almost overnight if
its constituency is caused to lose faith in it. The day when a distinctively religious
school like Harvard could evolve in genteel fashion into a secular one without
anyone taking notice has now passed. At the same time, if the world view of any
university, whether public or private, remains mired in an inflexible pattern that
relates to a bygone culture so that it has no basis to speak to the new civilization, it
too will fail. Thus, the church-based school is faced with the task of remaining
distinctive and separate in its world view, yet becoming able to speak relevantly to
the new one in the culture around it. This is an especially daunting task, and the
number of such schools that survive in recognizable form to make an impact on the
fourth civilization may well be few indeed.
This need for a distinctive identity is not unique to religious or even private
institutions. Like schools at all levels, and any other cohesive organization,
universities must have an organizational purpose believed in and promoted by their
members, and a sense of shared pride in their accomplishments. There are many
strains on both of these, and only the schools that can maintain both will survive--
even the currently public ones will be in trouble if they fail to convince their
constituencies that they are needed. They will also be in trouble if they fail to
flexibly re-organize to meet the new challenges and demands upon them, or if they
discard altogether the old role of idea-brokers for the patenting of techniques.
In the long run, information age paradigms would suggest that the
educational sector will become like the business sector in its new identity--more
entrepreneurial, more accountable, and more people-oriented. There may be a
greater realization that the people involved are the enterprise, and more teachers
will actually own their place of employment. There may also be a greater emphasis
on establishing a long-lasting relationship between teachers and students as they
wrestle together with ideas. Some of these integrative ideas are discussed in a more
comprehensive form in Chapter 12. In the meanwhile, the following conclusion is
offered:

The proper task of the university is to analyse, integrate, and transmit ideas.

10.8 Summary and Further Discussion

Summary

Education and training can be thought of as distinct subsets of the whole
learning process, and may take place either as part of a formal schooling, or outside
it. Training can be largely associated with technique, and education largely with
ideas. They exist together as parts of integrated packages, but different cultures
and professions tend to emphasize one, often to the exclusion of the other. One of
the challenges of the next civilization will be to integrate the two, particularly in the
universities.
The content of school curricula will be broadened and somewhat de-
specialized to meet the needs of the next civilization, as it will depend more on
ideas (education) and somewhat less on specific technique (training). People will
have to acquire the meta-techniques of information retrieval and retrainability in
order to hold jobs in the future.
Issues of importance in the process of learning also include:

o Why is learning undertaken?
o Who should control education?
o What is the status of teachers?
o Who should pay for education and training?
o Is there a right to learn?
o Is learning sexually biased?
o Is there a technique of learning?

Technology has an effect on learning by dictating a new curriculum to meet
the new life-needs, by requiring learning to be organized once society reaches a
certain complexity, and by forcing it to examine itself for effective teaching and
learning techniques. The actual machines employed in the process have had a
mixed reception and doubtful results, but each new wave of machines is greeted
and purchased more enthusiastically than the last. The various technological
revolutions now in progress could change the formal schooling system in several
ways, as some may increase the demand for teachers, and some may reduce it. If
education for life skills in a cultural context on the one hand and for the ability to
assess ideas on the other is to be regarded as uniquely human, it should not
become the province of machines, even if most training does.
Finally, the role of the university in the future is in some doubt. It may
continue to be wedded to current technology and even more fully integrated into
the state and economy to serve those interests, or it may revive its old activities as
the developer and guardian of ideas, allowing philosophy, for example, to make a
comeback.
In any event, adults will have more time for learning, and more need of it.
Both education and training will tend to become lifelong and continuous pursuits
rather than being confined to specific years and schooling experiences. These
trends, as well as the new population demographics, are certain to force major
changes in the philosophy and methodology of the teaching/learning process,
especially where this is formally undertaken in schools. The Metalibrary may also
come to play an important role in training and in idea exchange, but there seems to
be great importance to retaining human teachers for many of the learning
experiences as role models of what it means to be human--at least for the education
of children.
Funding constraints and demand may simultaneously cause private sector
involvement in education to grow, and the relative share of government control to
decline accordingly; this trend could either level out or increase current
socioeconomic gaps in education, depending on the response of government.

Research and Discussion Questions

1. Argue convincingly that all present-day forms of schooling are obsolete and
that the entire process can and should be mechanized as soon as possible. Or,
argue that machines are of little value in schooling, and that all of it should remain
in the hands of human teachers.
2. As an alternative to the extremes suggested in question #1, attempt to
propose a reasonable balance between the two. What things can be automated and
what things must remain in the hands of human teachers?
3. The author suggests that the private sector will become more involved in
learning in the future. Either support this argument in detail, or attempt to refute it
and to show that the state is the only institution qualified to control education and
training.
4. Research and discuss some or all of the major theories of learning. In
addition to those mentioned in the first section of this chapter, some modern ones
include the Gagne-Briggs, Algo-Heuristic, Structural Learning, Inquiry Teaching,
Component Display, Elaboration, Motivational, and the method of Complements and
Contrasts. Compare some or all of these with each other and/or with the Socratic
method, or with the discovery approach.
5. Research and discuss the origin and development of compulsory schooling
in Western nations. What important differences are there between the European
and North American approaches?
6. What differences are there between European and North American
universities? Consider curricula, techniques, training versus education, and the
status of teachers.
7. "North American schools are doing a good job in preparing students to
become functional adults and useful workers." Attack or defend this thesis.
8. The author argues that the school system must become more people- and
idea-oriented. Either argue that this is not so, or detail specific changes that would
have to be made to the present schooling system to achieve this.
9. Who should control schooling, and why?
10. Who should fund schooling, and why?

11. "Ethical principles and cultural values must be part of education." Argue
for or against this thesis.
12. The author argues strongly that the day of the industrial-age narrow
specialist has passed and that of the knowledge worker or generalist has arrived.
Develop this idea further, supporting it with research into current trends and other
authors who make the same claim.
13. Refute the thesis in the previous question, citing convincing authorities
and research to make your case.
14. Should teachers (a) unionize or (b) professionalize in order to best
advance their economic and political interests?
15. Is learning fair? Should it be? If so, how can it be made fair?
16. What should the K-12 school of the future be like? Sketch out your ideal,
and then propose a way to pay for it. Alternately, answer this question for the
university.
17. Which of the following should be part of the mandatory K-12 curriculum
and in what form? Why or why not? (a) personal finance, (b) business principles, (c)
health and hygiene, (d) sex education, (e) ethics and morality, (f) politics, (g)
religion. Alternately, answer this question for the university.
18. In which direction ought the university go--toward training, toward
education, or toward some mix of the two? How should the goal you prefer be
achieved? How do tenure, the academic ranking system and student/professor
contact fit your model? How does research?
19. Make a list of what you regard as the ten most important books of all time
and defend in detail your choice of each as an important part of every education. At
what level should these books be a part of the schooling experience? Are there
books or classes of books that ought not be part of the schooling experience? Why
or why not?
20. Bloom argues that the North American university has discarded its
traditional role as the guardian, expositor, and evaluator of ideas. Do you agree?
Why or why not? Be sure to read him first.
21. The author argues, as does Bloom, that there are absolute principles,
including moral ones, that must be a part of every education, and that all cultures
are not equal, but some are better than others. They also argue that education can
never be entirely value free. Support these arguments by references to the specific
principles that underlie democracy and demonstrate why such things must be
taught as absolutes.
22. Argue on the contrary that no cultural or moral absolutes exist, but that
society can still exist even with no such commonality. Specifically argue that
education can and must be entirely value free, except for the dispassionate
examination of all values as equals. Argue that democracy can still survive even
when its principles are not held as absolutes.
23. The author argues that "who teaches" is more important than teaching
technique, and that technique is in turn more important than curriculum. Support this in
general, but show that there may be specific exceptions.
24. The author argues for a wholistic approach to education as superior to the
strictly behavioural, cognitive, discovery, or humanistic. Refute this argument in
favour of one of these approaches or some other.

Some Ethical and Other Issues For Teachers

25. You have promised your twenty-four biology students a field trip to the
local aquarium as part of the current unit; indeed you believe it to be essential to
what you are trying to teach. What do you do if (a) the school board cancels funding
for field trips? or (b) some parents refuse to allow their children to participate in
"such frivolity"? (c) your union orders you to stop all such activities to protest slow
negotiations with the board over a new contract? (c) your orders you to stop all such
activities to protest new social policies of the provincial government?
26. A student of the opposite sex comes to your counselling office and unfolds
a tale of academic and personal woe that has left him (or her) in a state of near
despair. Feeling loveless, deserted, alone, and a complete failure, the student is
nearly hysterical and inconsolable. As part of your counselling and role-modelling of
humanity, do you hug him (her)? What criteria do you use to decide? Does it matter
whether the teacher or the student is male? Why?
27. You are a high school physics teacher and a young lady (age 17) comes to
you who wishes to sign up for your course because she wants a career as an
aeronautics engineer. Her parents are adamantly opposed and demand she take the
child-care course instead, stating "She's just going to get married". All efforts to
convince them otherwise have failed. The student wishes your help to deceive her
parents and take your course without telling them. What should you do, and why?
28. Reverse the scenario in the above question. The student in question is
being pressured by her parents to take physics. She is quite able to do so, but wants
to take the child care class, because she rejects her parents' ambition that she
become an engineer like them and wants to get married and have children,
regarding that as a higher goal in life than a career. Do you help her to get the
course she wants despite her parents? Is your conclusion the same as in the
previous question? Why or why not?
29. You are a school administrator with enough funds to set up three regular
classrooms for standard courses and reduce class size by 10% throughout your
school. Alternatively you could set up two special classes--one for slower students
who need extra help to catch up to the others, and one for better students to have
an opportunity to excel. This option will only reduce general class size by 5%. Which
do you do, and why?
30. You are the same administrator with a choice between hiring one new
teacher, and purchasing much needed gym equipment and some computers. Which
do you pick and why?
31. A student in her high school graduating year confides in you that she is
pregnant and plans to leave town to live with her boyfriend,
abandoning her education. You are unable to convince her otherwise. Do you inform
her parents, breaking the confidence? Does it make any difference if you are (a) her
friend and classmate, (b) one of her subject teachers, (c) her school counsellor, (d)
the school principal, (e) her social worker, (f) the pastor of her church?
32. Repeat the analysis in the previous question, but this time what has been
confided in you is (a) a recent incident of sexual abuse, (b) a many-years-old
incident of sexual abuse, (c) a criminal act by her parents, (d) a perceived
derogatory remark by another teacher, (e) perceived unfairness in marking or other
treatment by another teacher, (f) a racial bias on her parents' part, (g) religious
discrimination by another teacher.
33. You become aware because of frequent student comments and complaints
that one of your fellow teachers is very strongly pressuring students to adopt a
particular philosophical view. Those who disagree are shouted down in class, have
their marks reduced, or are simply ignored or ostracized in class. What should you
do about this? Does it make any difference whether the philosophy is (a)
Christianity, (b) an aboriginal religion, (c) an oriental religion, (d) new age, (e)
political liberalism, (f) atheism, (g) feminism, (h) Marxism?
34. An election campaign is in progress, and the teachers' union has taken
issue with the present government and is strongly supporting the opposition party.
Is it legitimate for the teachers to (a) send some of the members' dues to the
campaign headquarters of their favoured party, (b) wear political buttons and
slogans to class, (c) proselytize their students for their cause, (d) send literature
home to the parents with their students?
35. You are a department head who requires one new Math/Physics teacher
and have a choice between one who could teach both and has all the correct
qualifications on paper, and a second who is certified only for Physics but is more
energetic, personable and could sponsor several extracurricular activities. Which do
you pick and why?
36. You are a school administrator, and it is traditional for administrators in
your jurisdiction not to teach classes themselves. One of your teachers puts it to
you strongly that as a leader in education you have a responsibility to role-model
the activity you are leading and so should teach at least one class. What should be
done? Your discussion should include an examination of possible conflict of interest,
union opposition and time pressures on your "real" job.
37. Research an existing code of ethics for teachers or propose one of your
own. Examine it point by point and give reasons for or against the inclusion of each
item. Be careful to analyse items for the reasons they are present--occasionally they
are there for political or power purposes rather than ethical ones.
38. The argument is often made that some or all of the schools in a given
public school district ought to specialize in some fashion. Thus a variety of special
schools have been established--for the fine arts, emphasizing technology,
committed to the fundamental skills, or language immersion. Either argue for such
models, or argue that the neighbourhood school ought to provide exactly the same
programs for all students in the area.
39. Should ideas that the majority culture deems offensive be examined at all
in the university, or should they be brought up only in the context of condemnation?
Does it matter if the ideas are philosophical, political, economic, racial, or religious?
40. Survey the class in "Mathematics for Elementary School Teachers" at
your university. What is the male/female distribution? Ask if the students are afraid
of mathematics or dislike it, and why. If time permits, compare with the Calculus III
course. What conclusions do you draw about sex differences?

Bibliography

Barlow, Daniel Lenox. Educational Psychology: The Teaching-Learning
Process. Chicago: Moody Press, 1985.
Barrow, Robin. Radical Education. Oxford: Martin Robertson, 1980.
Barr, Donald. Who Pushed Humpty Dumpty--Dilemmas in American Education
Today. New York: Atheneum, 1971.
Barton, Len and Walker, Stephen (Ed.). Education and Social Change. London:
Croom Helm, 1985.
Berg, Ivar. Education and Jobs: The Great Training Robbery. New York:
Praeger, 1970.
Berman, P. Debating PC: The Controversy over Political Correctness on
College Campuses. New York: Laurel, 1992.
Biehler, Robert F. & Snowman, Jack. Psychology Applied to Teaching (Fifth
Ed.). Boston: Houghton Mifflin, 1986.
Bloom, Allan. The Closing of the American Mind: How Higher Education Has
Failed Democracy and Impoverished the Souls of Today's Students. New York:
Simon and Schuster, 1987.
Bloom, Harold. The Western Canon: The Books and School of the Ages. New
York: Riverhead Books, 1995.
Bowen, James. A History of Western Education. New York: St. Martin's, v1
1972; v2 1975.
Bowers, C.A. The Promise of Theory--Education and the Politics of Cultural
Change. New York: Longman, 1984.
Coleman, James S. (Ed.). Education and Political Development. Princeton:
Princeton University Press, 1965.
Craft, Maurice (Ed.). Education and Cultural Pluralism. Philadelphia: Falmer
Press, 1984.
Dewey, John. Experience & Education. New York: Collier, 1968.
Emberley, Peter C. & Newell, Waller R. Bankrupt Education: The Decline of
Liberal Education in Canada. Toronto: University of Toronto Press, 1993.
Emberley, Peter C. Zero Tolerance--Hot Button Politics in Canadian
Universities. Toronto: Penguin, 1996.
Hill, Winifred F. Learning: A Survey of Psychological Interpretations (Fourth
Ed.). New York: Harper & Row, 1985.
Hirsch, E.D., Jr. Cultural Literacy: What Every American Needs to Know.
Boston: Houghton Mifflin, 1987.
Jencks, Christopher & Riesman, David. The Academic Revolution. New York:
Doubleday, 1968.
Karier, Clarence J. Man, Society, and Education. Glenview, Illinois: Scott
Foresman, 1967.
Kline, Morris. Why Johnny Can't Add--The Failure of the New Math. New York:
St. Martin's, 1973.
Kozol, Jonathan. Death at an Early Age. New York: Bantam Books, 1968.
Lewis, C.S. The Abolition of Man. New York: Macmillan, 1953.
Reigeluth, Charles M. Instructional Theories in Action. Hillsdale, N.J.: Lawrence
Erlbaum Assoc., 1987.
Rossi, Peter H. and Biddle, Bruce J. (Ed). The New Media and Education--Their
Impact on Society. Garden City NY: Anchor, 1966.
Saettler, Paul. A History of Instructional Technology. New York: McGraw-Hill,
1967.
Sandin, Robert T. The Search for Excellence--The Christian College in an Age
of Educational Competition. Macon, GA: Mercer University Press, 1982.
Snyder, Tom and Palmer, Jane. In Search of the Most Amazing Thing--Children,
Education, and Computers. Reading, MA: Addison-Wesley, 1986.
Tesconi, Charles A., Jr. & Morris, Van Cleve. The Anti-Man Culture--
Bureautechnology and the Schools. Urbana, IL: The University of Illinois Press, 1972.
Tyack, David B. (Ed.). Turning Points in American Educational History.
Waltham, MA: Blaisdell, 1967.
Vandenberg, Donald. Human Rights in Education. New York: Philosophical
Library, 1983.

Chapter 11
Religion and the Transcendental in
the Fourth Civilization
Seminar - "Who Needs Religion?"
11.1 Foundations of the Study of Religion
11.2 The Major World Religions
11.3 Religion and the Scientific and Industrial Revolutions
11.3.1 Rome to Galileo
11.3.2 Reason in Science and Protestantism
11.3.3 The Partnership Dissolves
11.4 The Debate Over Origins--an Illustration of World View Clashes
11.5 Learning How to Conduct Debates
11.6 Science, Technology, and Religion in the Twentieth Century
11.7 Summary and Further Discussion

11.1 Foundations of the Study of Religion
Every people and culture has one or more possibly overlapping sets of
absolute values, outlooks, attitudes, and beliefs that can be regarded in part as its
religion. While it is in some cases difficult to classify aspects of a culture as religious
or otherwise, the characteristics of a religion in general are:

o It makes absolute assertions about the existence or nonexistence of a god.
o It provides reasons for human existence. These may include statements
about the origins or destiny of humanity in general or of individuals in particular.
o It makes statements about the content, reliability, and manner of human
knowing, and about the ultimate meaning of knowing. That is, it has
intellectual content.
o It addresses the meaning of human experience and engages the emotions
of the individual and society. It also includes an element of experience; that is, it
must be experienced.
o It makes statements about how existence, knowledge, and experience
interact in the conduct of relationships with other people, with the cosmos in
general, and with God (or gods). That is, it includes an ethical code, though the
source of this may vary.
Such statements provide a comprehensive world view through which the intellect
can filter and interpret both human existence and human experience.

Figure 11.1 Model of Religious Experience

Figure 11.1 is a model of religious experience. Note that this diagram is
similar to that used to model the learning process. It must be, for education and
training address the issues of being human and of acquiring appropriate techniques
to act humanly, while religion serves to answer the question, "What is a human
being?" That is, religion makes comprehensive or wholistic declarations about the
ultimate meaning for the existence of humankind, both as individuals and as a
society. Such comprehensive answers necessarily embrace all of being, knowing,
experiencing, and doing. The central titles in this version of the circles are derived
from the Bible's statement about what is owed by human beings to their Creator:

"Hear O Israel, the Lord our God, the Lord is one. Love the Lord your God with all your
heart and with all your soul, and with all your mind and with all your strength."
--Deuteronomy 6:4-5 and Mark 12:29b-30

There have been a variety of approaches to the study of the role and function
of religion in culture that have focused on one or more of these four aspects, but
each such approach is insufficient in itself to explain religion, even though all
provide valuable insights. A summary of some of these is provided here, although
aspects of them will be expanded on further in later sections.

Religion as Philosophy

It is possible to view religion as a branch of philosophy--that is, as part of a
general intellectual search for ultimate answers and meaning in life. This aspect
differentiates religion from, say, experimental science. The latter generally deals
with the measurable physical universe and usually sidesteps questions of ultimate
reality, even though its operating assumption must be that one does exist (i.e., that
it measures something real).
Religion deals not just with what people believe about meaning but with how
they think, emote, and live in the light of such beliefs. That is, it always has an
important experiential and relational component, and in this way it differs from
philosophy and has an abstract kinship with both the empiricism of science and the
applicability of technology. However, religion also tries to provide answers to
questions about ultimate origins--about an originating deity--whereas neither
philosophy nor science need do this. Thus, the philosophical approach to religion,
taken alone, is insufficient to determine its substance or even its meaning, for
religion gains much in its empirical interaction with the deity on the one hand and
with people and life on the other.

The Historical Approach

If anything, this may be opposite to the philosophical approach, for in it,
religion is assessed by the roles its adherents have played throughout history. Such
an approach is commonly used in overview courses having such titles as
"comparative religion." Inevitably, this view of religion concentrates on religious
institutions, for it is through these that the state and its decisions (and so, history)
are most often and most directly influenced. However, the historical approach alone
may discern little about the effect on individuals and in turn their influence on
society.
For example, histories of England discuss Wilberforce and his long and
eventually successful political campaign to eliminate slavery, but often omit his
Christian commitment and the high view of the value of human beings that led him
to oppose slavery--this at a time when the institutionalized church favoured it.
Likewise, the religious convictions of the framers of the American constitution
played an important role in determining its contents--but today it is the words and
the judges' own perceptions of social needs that matter to the courts, not the
principles that motivated the framers.
It is worth noting that carefully preserved and transmitted religious writings
often do have a substantial historical content that is potentially verifiable by parallel
documents from other sources, and by archaeological evidence. For example, many
archaeological digs in the Middle East would be difficult to interpret without the
accounts found in the Bible. Archaeology in turn casts light on many details of life
and culture in the times described in religious works. However, this light is confined
to externals, and likely to be bereft of insight into the thoughts and beliefs of
practitioners.
Further, a historical approach alone need not take into account the ideas
behind religions (doctrine) or even the religious ideas behind social change, but
may view them only as mass movements on the broad stage of human history.
Thus, while history is clearly an important component of the religious picture, it too
cannot stand on its own as a description of religion.

Religion as Art and Literature

These are aesthetic approaches, wherein various religious ideas are not
approached directly, but via the literary value of their sacred writings, and the
artistic value of the painting, sculpture, and music that their followers generate. This
approach has the advantage of recognizing the rich contributions of religion to
culture in these broad areas--ones so large that they could easily be the subject of
many books far larger than this one. It has the disadvantage that it deals with
religious ideas in a peripheral and abstract manner, as themselves forms or aspects
of artistic expression, and not so much for their value as ideas. Neither can an
aesthetic approach shed much light on the application of those ideas to life.
This method is certainly worthy of being included in a comprehensive
examination of religion, even though its details are beyond the scope of this book.
However, the aesthetic engages only part of the whole person, and this approach
therefore reveals only a portion of religion's scope.

A Psychological Approach

There have also been a number of attempts to use scientific methods to study
religion as a collection of mental or behavioural phenomena and even to make
systematic evaluations of religious ideas. This is more comprehensive than a purely
descriptive historical approach, for it recognizes that, while mass religious
behaviour is important, it does not give the whole picture. The ideas themselves
must at some point be grappled with, and as more than just mental or personal
effects. However, religion has an element of the transcendental or supernatural
about it that extends its realm beyond that of empiricism and makes it very difficult
for an analytical science to fully cope with it.
Yet psychological effects are of interest, and there is considerable material of
mutual concern to psychologists and to religious adherents. For example, those
most concerned with finding answers to the meaning questions are the ones who
are likely to be the clients of both religious and psychological counsellors. The two
approaches may occasionally have superficial resemblances, for both attempt to
secure a suffering client in an environment that can comfort and soothe, thus
transcending anguish with a reality of another order. The modern psychologist
might say that the better reality already lies within the client waiting to be
summoned forth, and a mystic may well agree, differing only in the means of
drawing it out.
By contrast, many religions declare that ultimate reality and meaning lie
outside the individual and provide a framework in which to relate an individual not
just to society, but also to the whole of being. Thus, the mystical idea of locating
god within a person is in direct competition with the more conventional religious
idea that God is a distinct and separate entity.
Some religions point to a supreme being as the ultimate knowable reality, all
else having been created at that being's command. A few religions assert that all
created matter is therefore an illusion, and so is any knowledge of it, denying that
there is much, if any, reality to draw upon, and counselling an acceptance of such
nihilism as the path to mental health. Others counter that such a conclusion is the
very antithesis of religion, asserting that, whatever else they may be, religious ideas
are every bit as much about specific realities as are scientific ones.
Psychology also makes interesting contributions to the study of behaviour
under the influence of faith assertion, particularly in the proclamation and
profession of that faith to large groups of people, either in person or through the
media. Certain threads common to the persuasion and response of any crowd can
readily be identified, and these seem to be applicable to the pronouncements of
both preachers and, say, politicians. In addition, the new cognitive theories of
modern psychology attempt to evaluate beliefs and knowledge, also in a scientific
fashion.
The strictly psychological study of religion, however, limits religion to physical
and empirical phenomena on the assumption that the mind is not more than the
brain. Yet most religions assert that the two are distinct and that religious
activities transcend the physical. Measuring such transcendence appears to be
beyond the empirical bounds of modern science. On the other hand, measuring only
its effects seems to be somewhat incomplete.
Thus, all attempts to find common ground for science and religion are fraught
with delicate problems. On the one hand, science seeks to evaluate religion as a
quantifiable and measurable physical phenomenon, because such are its realm. On
the other, religion asserts that it derives from and deals with things that are no less
real for going beyond the physical realm and that science has nothing to say about
such matters--only religion's effects can be scientifically measured and not its
causes and motivations. Since the existence or nonexistence of the supernatural
realm is a matter of basic assumption and is unlikely ever to be provable or
disprovable by empirical means without a revelation from the supernatural realm,
this debate is irresolvable.
Yet, it is unsatisfactory to assert that religion and science have non-
overlapping magisteria (teaching domains) and leave things at that, for science can
at least attempt to describe behaviour--whether religious or otherwise--and religion
does attempt to describe motives--whether those of scientists or others. The way in
which the two once interacted comfortably is described later in this chapter, and
some thoughts on achieving new rapprochements are expanded upon in Chapter
12.

The Social Aspect of Religion

Groups of people with like beliefs play a role in society that also cannot be
ignored. There is a social need--to be with, to talk to, to share emotions with, and to
do with--that is part of the human experience. Such general needs extend to the
commonality of religious belief and in that sphere are part of what give rise to the
assemblies known as synagogues, churches, temples, reading rooms, mosques, and
so on. Such gatherings are a partial expression of what it means to be a distinct
culture, or a society-within-a-society, and are part of the reason for being of every
person.
Put in other terms, the total cultural world view of a person or a people always
includes a set of religious beliefs, even if these are mostly negations. It is not always
possible to tell whether certain aspects of a society are the cause or the effect of its
religious beliefs, nor is it easy to separate the religious aspect of a culture from its
other elements, so intertwined are they. One can easily argue that this is true even
in professedly irreligious cultures, where similar commitment, activities, behaviour,
and passions are evoked for causes with other names. Thus, Western liberal
humanism and communist statism both have religious significance, even though
both often claim to have none--for all people act on a set of values and in turn
attribute these to some cause, and all hold some values to be more important than
others, even to be supreme. All people must deal with questions of the ultimate
origin and meaning of life, and all cultures have some reflection of the answers to
such questions in the lives of their members. Even a firm denial that such things
exist, or that they matter even if they are real, is a statement with religious content
and has consequences for the person or society making such a faith statement. That
negative statements on religion are indeed faith affirmations becomes evident when
one realizes that the same quality of emotion and the same kind of empirical
evidence lies behind a rejection of the supernatural as behind a belief in it.
Thus, the common argument that the beliefs and values of irreligious people cannot
be compared at all to religious faith affirmations is very weak. So is the assertion
that social behaviour has no necessary connection with religion, for both logic and
historical experience argue otherwise.
A social approach to the study of religion is, therefore, potentially more
comprehensive than a purely historical one. However, it still tends to have the
limitation of being primarily descriptive rather than evaluative and does not
therefore suffice alone, for the thing being described demands an evaluation by its
very nature. That is, at some point, the claims of religion upon the whole person
must be dealt with. Moreover, it is evidently impossible to stand entirely outside the
thing called religion and give it a dispassionate evaluation, free from any
preconceived notions about the thing being described. Thus, there will be bias in
any description of religion.

A Moral (Behavioural) Approach

Religion also addresses moral issues, for it is concerned with how to be a good
person, how to live life in harmony with those around, and perhaps how to live a life
that pleases a supreme deity--any or all of which may lead to rewards in a life to
come. It is to behavioural issues generally (and moral ones specifically) that
comparers of religions look, for it is here that the most common ground is found,
even though the reasons for arriving at moral laws may be very different. Such
commonality is yet another evidence for the universality of a moral sense and, to
some extent, for the content of moral laws. Moreover, many people think of religion
as a set of practices and/or rituals, that is, in behavioural terms.
What distinguishes various religions on this score, however, is the reasoning
behind the rules, and the motivation for having them. One must ask if ethical rules
exist just to have an orderly society, for the production of relative good, or for the
pleasing of a god who is absolutely good. Do the rules exist to gain a salvation or
are they behavioural guideposts for those who already have one? Do they arise
from within people or do they come only from the character of the deity being
worshipped? It is, incidentally, only in the context of worshipping a moral god that
the concept of sin (violation of Divine standards) arises.
The moral aspect is evidently essential to comprehending a religion but
clearly also does not stand alone. Moral codes need to be assessed in the context of
their reason for being and also in the context of the actual behaviour people exhibit
when they claim such codes for their own. Do the moral codes change a person; do
they change the person's behaviour; or do they change nothing outside the person's
mind? That is, do they have a transforming effect, or are they entirely mental
abstractions with peripheral, if any, application to life?
Do the deeds (behaviours) dictated by moral codes become the means to the
end of an eternal salvation, or are they the result of already having it, the external
consequences of a prior inner transformation? Do they co-exist with belief, produce
it, or are they produced by it?
That deeds themselves do not answer these fundamental questions illustrates
that there is more to religion than the behaviour associated with its adherents.

Religion as a Commodity

There are two ways in which one can approach religion as a commodity. The
first is to treat it as a marketable item--one with a perceived need, a set of clients, a
product to respond to the need, a sales force, an organizational structure, and an
appropriate pension plan. In this view, religion is a package of ideas and life-style
that has to be sold into a competitive marketplace, and the methodology for doing
so is similar to that for the advertising, packaging, and selling of any other product.
Potential customers need to be convinced that they have a need and then
persuaded to accept a particular brand-name solution. The deal must be closed, and
then the client must be maintained as such over the long term. The competition has
to be assessed and beaten back at every turn. The sales force (preachers) must be
trained in all the latest marketing techniques. Available media have to be utilized to
their fullest potential to get the message out, and the entire enterprise must be
financed through voluntary payments of its purchasers. If the marketplace (cultural
milieu) changes, then the company (the religious institution) must either change
with it or diversify, lest it become insolvent and either cease to exist or have its
assets in people taken over by a competitor.
In this view, religion maintains cultural relevance as long as it remains
economically viable in the marketplace of ideas, so its models and techniques must
continually be adapted to those of the culture into which the ideas are being sold.
Thus, religion has discovered television, and numerous sellers of religious products
now hold forth from electronic pulpits, backed by advertising, orchestras, choirs,
and the latest broadcast technology. The difficulty with this approach to religion as
commodity is that once the product has been wrapped up for retail sale, it may be
difficult for customers to discern what are the contents of the package. Exactly what
is being sold, and what is its cost? Is there a warranty?
In fact, when any product is marketed with standard methods, the lines
between substance and packaging blur, and in this way, religion can metamorphose
into either a business or a form of religious entertainment. If so, it necessarily
comes to engage the emotions far more than the intellect. At such a point, it no
longer stands on its ideas or on its ability to transform people but only on the
marketing technique that has provoked the response. That is, it is not religion itself
that becomes in such cases the commodity but an emotional experience described
by a religious-like vocabulary. Since this is at least one step removed from the
religious ideas themselves, it may be best to assess this particular phenomenon
strictly in terms of its business and entertainment value and to regard its most
visible sales force as media celebrities who are only incidentally related to the
actual religion. They sell an emotional experience that many people are prepared to
buy as a less comprehensive and less demanding substitute for religion itself, let
alone for the beliefs motivating religious behaviour. These comments apply
regardless of whether the emotional experience comes from being an observer at the pomp and
ceremony of some ancient ritual or from the charged atmosphere of a charismatic
miracle meeting.
A second problem created by the constant repackaging of religiosity for the
marketplace is that ideas which started with a claim to transcend time and culture
come to be heavily modified in the light of both, and perhaps ultimately to be
discarded. Thus, the irony of the search for relevance in the application of religion to
modern culture is that the modified religion can all too easily thereby become
irrelevant to the search for universal applicability.
To be sure, religious ideas are indeed in competition with those from other
sources and with each other. In the West the very notion of a democracy requires
the free competition of ideas. But, as illustrated here, the model of competition
among ideas cannot be pressed too far in practice without dissipating their vitality
in spectacular if unengaging show. The same observation, to the same effect, can
be made of politics, which when partnered with modern media may also become a
personal and emotional experience that lacks much intellectual content or
relevance to people's lives. After all, the most effective marketing sells the sizzle of
outward appearances to the emotions, not the steak of content to the intellect, and
this is true whether the product is computer software, religion, or politics. That is,
the treatment of religion strictly as a commodity obscures its nature rather than
explains it.
This marketing of religious ideas and fragments also raises some interesting
legal and ethical questions related to the definition and status of religious
organizations in the United States and Canada. Since these organizations are tax
exempt in both countries and there is in both a societal and legal prohibition against
government entanglement with religion, there are few means to ensure that funds
raised on television and by direct marketing will indeed be used for the stated
purposes. It is not always clear that the actual use of such funds has anything to do
with religion. Yet attempts to regulate activities operated by religious bodies are
sure to provoke charges of interference by the government in areas prohibited to
the state. On the other hand, failure to protect a trusting and unsuspecting public
from fraud may be irresponsible, for only the government has the power to
undertake such a role. As in the case of pluralism, there is a paradox here. To the
extent that religion is to be marketed as a commodity, there must be some measure
of accountability and responsibility for the activities of its people and organizations.
On the other hand, freedom of religion ceases to exist if the government dictates
how it shall be practised, or acts as arbiter of the validity of its ideas. But, if it
ignores religion altogether, the state may fail in its duty to protect its citizens from
danger. One must conclude from these marketplace considerations alone that,
although the state perhaps should avoid excessive entanglement in the affairs of
religion, there can never in practice be a total wall of separation between the two.
The second, and rather different, approach to religions as commodities is to
assess them competitively by the benefits they bring people. Which ones promote
war, famine, corruption, totalitarianism, and moral decadence? Which brings
hospitals, food, medicine, new agricultural techniques, freedom from slavery,
equality of opportunity, and high moral standards? What payment is asked for the
material benefits offered? Is political or cultural assimilation demanded? Does the
preacher demand the buyer's soul to hang at a denominational belt like a trophy
scalp? Is the motivation for the message a love of God translated into love for
people? What benefits are offered the believer? These could vary from a nice feeling
of doing good, to avoidance of divine punishment, to divine acceptance for eternal
life, to the nirvana of escape from another dreary reincarnation, or to some vague,
but unspecified reward. Whatever the salvation offered by a religion, it too could be
viewed as an idea in competition with similar ones, and this is one way of assessing
religions that may be useful if taken in conjunction with others listed here.
Analyses of the commodity aspect of religion are quite legitimate, for they
deal with the effects of beliefs on people and culture, and such effects are of
interest to everyone, not just those who profess a certain religion. For example, a
religion that believed in and practised human sacrifice, the killing of people with
some other skin colour or religion, or the violent overthrow of the state has a direct
impact on everyone in the society. Consequently, the society has a vested interest
in protecting itself from beliefs that threaten its existence or the lives of its citizens.
The comparison of religions may be undertaken with the implicit assumption
that there is a standard to which the comparison is being made--that absolute
religious ideas exist. This assumption is especially evident when the comparison is
being conducted with the ostensible purpose of debunking the very notion of
religion (common in secular universities), and there is a logical circularity here from
which no immediate escape is evident. Even when no standard is made apparent,
the effects chosen for comparative study reflect the relative importance attached to
such things by the one making the report. Why is one group of outcomes from
religious beliefs more desirable than another, and who is qualified to judge between
the two? In particular, who can say that one moral standard is higher than another,
apart from an all-knowing deity who Himself reveals the truth? It is no easier,
therefore, to make comparisons between religions as commodities in an objective
manner than it is to do so as objects of scientific, historical, or psychological
examination. To a greater or lesser degree, the comparer of religions is emotionally
and religiously affected by the objects under study, making an objective analysis
difficult if not impossible.
On the other hand, such comparisons may be conducted with a view to
choosing the portions one likes from various religious systems. This practice is one
contributor to religious fragmentation in the twentieth century--the picking of
religious ideas from this or that system of beliefs as one might select a dinner from
a smorgasbord. The result may be an exotic but ephemeral taste experience, but in
the long haul and for practical purposes the components may not go together at all.
The same kind of fragmentation may also take place when religious ideas are
exported to other cultures. If they go attached to the cultural strings of the sending
nation, they will be perceived as strictly cultural exports and not as religious ones. If
a religion is indeed universal, then it is culturally independent and it can be believed
in and practised by the peoples of any language, culture, civilization, time, or
economic status. If it is not, pieces of it will perhaps be added to the importing
culture and religion, but the religion as a whole will not be widely adopted.
Even when religious and charitable organizations go to other nations with aid
for hunger or disaster, such help may be received only as a mixed blessing. It may
create dependence, encourage poverty, cause resentment, and actually promote
rejection of the religious or cultural message accompanying it. Thus, those who
undertake missionary or religious-based relief work must have a clear idea of what
it is that they are attempting to do or to persuade a people of, as well as how to
make it that people's own belief, independent of all cultural and material considerations.
Similar comments could be made of the political and social strings attached to aid
provided by governments.
In summary, religion can be treated as a commodity, but this is a level of
abstraction that fails to do justice to any of its content and claims, or to measure its
impact on people. Such a view of religion is also superficial, for it looks only at its
competitive behaviour and not at the object itself. Religions can also be analysed
from a commodity perspective, with their effects compared to one another or to
some implied standard. This is more useful, though it is impaired both by the
difficulty in achieving objectivity and that of knowing what can be included in the
study. In addition, all such attempts suggest that there is a kind of meta-religious
standard by which the truth claims of religion can be assessed, and this too
becomes a religious statement. One ends up in the same place as before--religious
statements can be validated against a standard only if that standard exists
independently and absolutely.

Religion as a Personal Experience

This final view of religion regards it as a sum of personal experiences, an
intimate thing difficult to expound upon in universal terms. Such an element is the
focus of the mystical religions that search for ultimate reality within each person--
one that must necessarily constitute a different experience for each individual. This
aspect is present in a different sense in evangelical Christianity, which holds that
true religion involves a personal relationship with a personal God--one that
recognizes individual differences but whose character is nonetheless essentially
similar in every believer, because God never changes.
While this view of religion expresses an important aspect, it is one that is of
most interest to individuals and their own experience and is of lesser concern to the
historian or forecaster because it addresses societal and ethical questions only
indirectly, by hypothesizing how the changed individual might work out
relationships with respect to other people. That is, assuming that religion is a
personal experience means that its behavioural consequences in general terms are
elusive. When actions do not appear to match the religious theory, it is not certain
whether the theory is faulty or whether the personal experience of it is absent.
This uncertainty has always, for instance, been a particular problem for
evangelical Christians, who assert that the believer in Christ has been born again to
a new life and has the indwelling power of the Holy Spirit to live that new life
successfully. How are actions completely inconsistent with the religion's moral code
to be explained? How are the apparent conflicts among faith, emotions, the
intellect, and one's relationships to be resolved? Is the person's understanding
defective or was the experience a false one? The only answer that can be given is
that God alone knows absolutely who His true people are, and this reply is unlikely
to satisfy the skeptical observer. Another difficulty is caused by the fact that people
respond emotionally in very different ways to the same set of beliefs. Is this simply
an artifact of their differing personalities and interests, or does this observation
invalidate either the experiences of some, on the one hand, or the claim to
universality on the other?
As a result, personal experience alone is also insufficient to explain the
phenomenon of religion. It does not hold religious ideas up to scrutiny in
themselves, but attempts to assess them solely in terms of experiences--and
experiences claimed in the same cause may be very diverse. External reference
points are needed in doctrine to account for experiences; experiences on their own
are insufficient to explain religion.

Summary

As this overview indicates, religion is rich and complex, affecting both the
individual and group life in a wide variety of ways, none of which taken alone
provides a satisfactory explanation. At least the entire context of the model
diagrammed at the start of this section in Figure 11.1 is necessary, and even then it
is not clear whether what has been achieved is an understanding of religion or just a
partial redefinition of something that lacks a full explanation. It is clear that the
impact of religion on both individuals and society as a whole is substantial, and it is
appropriate to consider that impact further, first by looking at the teachings of some
of the major religions, then by discussing the origins of modern science and
technology in their religious context. It will also be of interest to consider whether
that impact will continue to diminish, as it has in the industrial age, or whether
religion will make a comeback and become an important factor in the information
age.

11.2 The Major World Religions

In this brief overview of the history and teachings of the major world religions,
emphasis will be on the ethical teachings of each, the impact each has had on
society, and the current status of each. Doctrine will be discussed only to the extent
that it directly affects teachings on ethical and social behaviour, and not in any
comprehensive terms. It is beyond the scope of this book to make detailed
comparisons of major religions at all doctrinal points.
It is worth noting that the classifying of many such systems of teachings as
"religions" is a modern Western idea. A body of beliefs and activities may not be
separable from the culture and society of which they are a part. It could therefore
be argued that such classifications can be done only for the sake of making
comparisons between ones that claim cultural independence and universality
(Christianity, Buddhism, and branches of Hinduism and Islam) and ones that do not.
A better model for most of the latter group might be as social or philosophical
systems. For instance, one properly speaks of the religion of the ancient Romans,
Greeks, Egyptians, and Babylonians as having been principally cultural and
educational phenomena. Their priests were the primary preservers of culture--of all
the available knowledge--and not just of their philosophy or theology. However,
these have all passed from the world stage with the cultures that spawned them,
and those that do remain must compete into the future for human souls, so it is
they that must submit to such comparisons.
There are also numerous folk religions that survive to this day, but these are
usually confined to limited geographical areas and only occasionally play an
influential role on the world scene. Their impact can be substantial on local
variations of some of the major religions, as the latter will often accommodate
themselves to local practices and beliefs and incorporate many of them into their
own structure. Because of the wide variety of such systems and their limited impact
on the larger society, they will not be considered in any detail here.

Buddhism

The philosophical and ethical system attributed to Buddha (563-483 B.C.) in
India was not at first a religion as the people of the West might view one. Buddha,
the wealthy son of a warrior king, renounced his inheritance and family for the
asceticism of a monk and then for the role of reformer. The Buddhist philosophy is
based on the "Four Noble Truths":

1. Existence involves suffering.
2. Suffering is caused by indulging insatiable desires.
3. Suffering will cease if these are suppressed.
4. This suppression can be achieved by following the eightfold noble path,
which consists of striving for: right views, right goals, right speech, right actions,
right livelihood, right effort, right mindfulness, and right meditation.

Buddha did not mention a supreme being, but after his death he himself came
to be venerated as a deity, for the people of the polytheistic societies in which his
teachings spread quickly added him to their pantheon of gods. Followers of some
Buddhist sects came to believe that anyone could reach a state of Buddhahood or
enlightenment and also become an immortal deity (or at least absorbed into the life
force of such). As can be seen, the ethics of Buddhism are negatively expressed and
individualistic. They are directed to improving the self through suppression of
desire, and have little to say to the society at large. Evil is entirely an individual
responsibility, and if it has not been sufficiently put down to achieve Buddhahood or
nirvana as did the founder, then his followers teach that the person's karma will
cause reincarnation to another life.
In China and Japan, where most modern Buddhists live and where there are
also more Buddhist deities, the faithful are often organized somewhat as in Western
churches. Nirvana--the salvation offered--involves a deliverance from the necessity
to live another life and to continue suffering. It is not always clear whether this is
comparable to the Western idea of heaven or if it is simply personal annihilation.
Though Buddhism has split into many sects, it is not confined to national
boundaries but has been adapted to a number of rather different societies and
could be said to claim it is universal. Certainly it has been missionary, and portions
of it are even now being adopted by many in the West as interest in new
philosophies and exotic religions grows.
The industrial age was a very late arrival in traditional Buddhist countries, and
its presence there is still rather uneven. It remains to be seen whether Buddhism's
essential pessimism about humanity in general, and about the body and women in
particular, will allow it to secure a place in the optimistic, humanistic, and
egalitarian late industrial and early information age. It may be that in the short
term aspects of Buddhism will be incorporated by Westerners into a diverse and
fragmented religious menu that simply ignores any aspects that seem
inappropriate. The transition to the fourth civilization will likely bring about the
demise of a number of totalitarian regimes in that part of the world, as it already
has in the West. However, there do not appear to be inherent conflicts between
Buddhism and the information paradigms.

Confucianism

The teachings of Chinese philosophy and culture were organized by their
greatest expositor, Confucius (551-479 B.C.), who did not so much set about to
found a religion as to effect social, political, and educational reform. It was
important to him to place a sound and authoritative philosophical foundation under
the institutions of society--family, social class, and nation. Veneration of this sage
began after his death. He was given many titles by later emperors, and temples for
his worship came to be erected throughout China. This worship had begun to
decline somewhat even before the communists came to power in China and has
been suppressed since then, as have all other religions. It is not clear how much of
it has survived at this point as a distinct religion, but elements of its cultural and
nationalist zeal can be detected in the devotion to the communist leaders, which
was similar to that commanded by the earlier emperors.
Confucianism concentrates on relationships, especially those of friends, of
family and of the subject to the state, with particular emphasis on the last. The
superior man (this philosophy has little to say about women) does his proper duty in
each relationship in a dignified and aristocratic fashion. Virtues such as propriety,
sincerity, faithfulness, studiousness, justice, benevolence, reverence, moderation,
calmness, and honesty are encouraged. These virtues were guaranteed by heaven
or by an impersonal god, and the deity has supposedly implanted in everyone an
inherently good moral sense. There are remarkable similarities between the good
Confucian ruler and citizen and those of Plato.
Worship is to be directed toward heaven, earth, and one's ancestors. It was
conducted by the emperor on behalf of the people of the whole nation, for there was
no priestly role except for that of government officials. Emphasis is placed on social
duties, a variation of the golden rule, the family, religious values in the state, and
the wisdom of the past. Confucianism is national rather than universal, and
salvation is humanistic and social rather than personal and other-worldly. Despite
its religious-like observances, it is not clear that Confucianism ought to be termed a
religion; perhaps it is better regarded as part of the Chinese culture. For this reason,
it has not been exportable, and its devotional aspect may continue to have a
troubled time in the light of rapid and dramatic changes in Chinese society.
Indeed, the most remarkable thing about the suppression of religion by
communists has been the conversion of tens of millions to Christianity during the
first fifty years of its reign, a process that continues at a rapid pace. This change has
been even more substantial in South Korea, where fully a third of the population is
now Christian and where the ancient philosophies of Buddhism and Confucianism
are now on the decline, in relative terms.
Much in China does still remain of Confucianism, but it is woven into the social
and cultural patterns of the Chinese people and is much less discernible as a
devotional-style religious worship today than it was in the past. China has long been
a closed and insular society, one with great Confucian regard for authority and self-
sufficiency. However, Chinese communism has apparently not survived rapid
industrialization, internationalization, and the beginning of the information age. It
has not suffered catastrophic collapse as in the former Soviet Union, but has begun
the process of dismantling itself from within by changing into a form of state
capitalist dictatorship. Even this is only temporarily stable because of the cultural
Confucian-like reverence for authority. Otherwise, the regime's brutal suppression of
political and religious dissent would long since have resulted in collapse. Thus, Mao
Zedong may have been the last Chinese emperor to receive old-style veneration,
and China seems poised to perform the great leaps forward into the technological
and information ages that it has hitherto been unable to make. How much of the
Confucian philosophy will survive the ongoing wrenching social adjustments that will
accompany the information age remains to be seen.

Hinduism

Hinduism is the term used to refer to the religious beliefs of the majority of
India's people. These date to about 1500 B.C. and include a very broad range of
philosophical and social ideas and gods--so broad a variation that they are difficult
to characterize. A Hindu may be an atheist, a polytheist, a monotheist, a nature
worshiper, a contemplative, a mystic, an agnostic, or a follower of formal ritual
Hinduism. The last kind was until recently chiefly characterized by its caste system;
to its followers, it was much more important what caste one's neighbours belonged
to than the specifics of what they believed--as long as they were not adherents of a
different religion altogether. There were more than fifty major castes and well over
a thousand subcastes in addition to those who were noncaste, or "untouchables".
This last concept was outlawed by the current constitution of India, and the whole
caste system is under great pressure from foreign ideas, though in practice it is still
an important feature of daily life.
The only unifying theology is belief in one all-present being or world soul
called Brahma-Atman. Hinduism is more a religion of nature (pantheism) than of one
god (monotheism), for the goal of human beings is to separate themselves from the
illusion of life and reality as it is commonly perceived and merge themselves into
the Brahma-Atman, or rather to fully realize that they are already part of it. Death is
not final, for the individual soul (the atman) is reincarnated in some new form,
which may be an animal or a higher caste member. These two--the belief in Brahma
and that in the transmigration of the soul--were added after the caste system
became prevalent.
There are a multiplicity of legal codes, movements, and deities in various
Hindu traditions. One of the most popular gods is Krishna, the compassionate
warrior-teacher, whose love his followers devote themselves to imitating. The
eclectic nature of Hinduism is also illustrated by the Ramakrishna reform movement
dating from the nineteenth century, which teaches that the same degree of mystic
enlightenment can be achieved whether one comes from Islam, Christianity,
Buddhism, or Hinduism. The founder of this group, Sri Ramakrishna, is now revered
as the reincarnation of Krishna, Rama, Christ, and Buddha. A variant of this offshoot
of Hinduism with its Krishna worship and belief in reincarnation has been exported
to other countries in missionary fashion, whereas Hinduism in general has not,
primarily because of the caste system.
The ethics of Hinduism are as diverse as its theology. Good and evil are not
entirely distinguishable, and defects such as ignorance, or the violation of caste
rules, while lamentable, can always be corrected in another incarnation. Evil is an
illusion, and it is overcome by being immersed in the Brahma-Atman and by
complying with the social conventions of caste. Individuals have little value as such,
nor can they improve their situation in this life. Worship is ceremonial and
meditative, and a deity is more a force than a personal being. In general there are
no universal absolutes of behaviour.
Thus, even the industrial revolution, let alone the information one, runs up
against religious or cultural obstacles in India, because both require improvement of
the individual and of society as a whole through education. As long as the caste
system and the relatively low estimate of the individual and of women survive,
movement into the information age will probably continue to be slow in India.
Moreover, despite broad religious tolerance within Hinduism, India is troubled by
religious differences internally with the minority Sikhs and externally with its Islamic
neighbour of Pakistan, and the temptation to use nuclear weapons to resolve these
conflicts may yet prove irresistible. Yet the diversity of Hinduism is such that
occasional elements of it can easily be exported and added piecemeal to the
religious menu of the West. That same diversity may in time allow India to import
the cultural assumptions, scientific ideas, and techniques from the West that make
the third and fourth civilizations possible.

Shintoism

Like Hinduism, this religion is a national cultural and social phenomenon
confined to a single country--in this case to Japan. Its chief feature is a belief in the
divine origin of the islands of Japan, and the divine appointment of the Mikado, or
emperor. Its ceremonies are both patriotic and devotional, and its gods are many,
including the emperor himself. There are a variety of nature gods, the most
important of which is the sun-goddess. To these are made many ceremonial
offerings to purify the faithful from guilt and to cement their relationship to the state
and to Japanese culture as a whole. In this, there is a strong resemblance to the
Confucian philosophies.
At times, Shintoism has been combined with Buddhism and other Chinese
religions, and it has always expressed a tolerance of these, even though it has
traditionally taught the essential superiority of the Japanese people and culture. This
doctrine was formally repudiated after the surrender by the emperor to the
Americans in 1945, as was his own divinity, and thus the connection of the Japanese
state to Shintoism was officially severed. Devotional Shintoism is still self-sacrificing
and patriotic, though the patriotism is now somewhat more vague. It emphasizes
purity, though it lacks specific moral injunctions except as these have been
borrowed from Buddhism and Confucianism. It teaches reverence for one's
superiors, and especially for the state, but offers no hope for a new life after death,
no place for outsiders, no intrinsic individual value, nor specific guidance for living
morally.
The Japanese are quite prepared to use both Western science and technology
to advance their collective cultural interests, and it may be said that their religious
zeal (if the word is appropriate for a cultural phenomenon) has been turned from
the former goal of military superiority to one of economic domination. Thus, the
Japanese continue to have a patriotism and desire to serve national interests that
give them organizational unity and flexibility. These seem likely to serve them in
good economic stead well into the information age, even though the elements
Westerners would call "religious" are now of somewhat lesser importance than they
once were. A philosophy that is attuned to this life rather than to a hope for the next
must be pragmatic, and the pragmatism of collective economic advantage can
serve to unify and energize a nation in the place of a religion for a generation or
two, if not longer.
As the information age progresses and the Japanese people continue to be
exposed to religious ideas that claim to be universal from the rest of the world,
there may be some reassessment of their cultural beliefs. Such a reassessment may
also happen as a result of inevitable economic declines that are part of the normal
cycle of activities; if economic success has indeed been incorporated into the Shinto
culture, such events could be extremely painful for the entire nation.
At the present time, the Japanese still remain much less open to outside
religious and cultural ideas than, say, the Koreans and the Chinese. They have,
however, no reluctance to borrow technique and, up to the late 1990s, were among
the most successful of the late adapters of many industrial age methods. These
adoptions were, however, into a closed and highly nationalistic context, and
reluctance to become full international partners in trade and banking ultimately
brought them serious economic difficulties in the late 1990s, ones that have still not
been dealt with a decade later. Since success in an information age requires
openness, cooperation, and the free flow of goods, ideas, and capital, continuing
Japanese success into the fourth civilization appears to hinge on making extensive
internal and external adjustments. As any such changes would have cultural (and
therefore religious) overtones, it is not clear that they can easily be made. On the
other hand, the economies of the rest of the world's nations have become too
closely interconnected with that of Japan for them to allow her to collapse and so
threaten their own stability. Thus an accommodation to circumstances will
undoubtedly be found that will either bring Japan into much greater international
cooperation (at the expense of local cultural sensibilities), or cushion her decline
back into isolationism (preserving traditional nationalism at economic expense) so
that it takes place gradually.

Judaism

This name, given to the national religion of the descendants of Abraham,
Isaac, and Jacob (Israel), is borrowed from that of one of the last surviving
identifiable tribes of Israel's children--that of Judah. It is the earliest of three
religions--the others being Christianity and Islam--to proclaim a single, personal,
all-knowing, ethical creator God who has revealed His existence and actions, and
who has righteous moral demands upon all people. These revelations began with
Abraham, who was influenced by them to break away from his polytheistic culture
and religion to become the father of a new faith in one single highly ethical and all-
powerful God. Typical of those of a nomadic culture, neither he nor his descendants
did anything to propagate their faith beyond their own family. It was Moses,
centuries afterward, who codified both sacred history and the ethical demands of
God in the Pentateuch (the first five books of the Bible). However, when his
followers subsequently conquered Palestine under Joshua, they quickly adopted as
their own a large pantheon of local gods and goddesses known as "baals." Much of
the prophetic literature of the Old Testament was written to counter this idol
worship and to warn the people of the consequences of continuing in it.
The Jewish God's ethics are revealed in His commands respecting
relationships--to Himself, family, neighbour, and nation--which are highly detailed
and reflect concern and care for fair treatment of the defenceless orphan and
widow, of the poor, and even of the foreigner. He emphasized the production of an
ethical nation to reflect God's character in the conduct of an entire people. When
they turned away from His law and participated instead in temple prostitution and
infant sacrifice, worshipping Molech and other gods, He turned His back on their
nation for a time. Their subsequent captivity in Babylon burned away all trace of
polytheism, and the Jewish people have been relatively monotheistic since. On their
return from this exile, their teachers gradually expounded upon and expanded the
codes of Moses until priestly interpretations of law became comprehensive legalistic
regulation of every aspect of life. Simultaneously, there arose an ever more
elaborate religious ritualism centred about the rebuilt temple in Jerusalem.
Much of this was obliterated by the Romans when they sacked Jerusalem and
dispersed the Jews about 70 A.D. following a rebellion against the empire. During
this time the rabbis (teachers), rather than the priests, became their religious
leaders, and their sayings too came to be collected in a work known as the Talmud.
Though dispersed widely, speaking many languages, and severely persecuted, the
Jews maintained both their religion and their cultural identity for centuries,
principally in Europe and later in America as well. Their persecution reached its
greatest depths in Hitler's holocaust, during which some six million Jews perished,
nearly a third of their total number.
Subsequently, the Jewish people were able to re-establish a national
homeland in Palestine, naming their new country Israel, though it is a secular rather
than a religious state. Its citizens are diverse, both in cultural origin, and in religious
practice, which vary from the highly traditional to the rather liberal, with some
professing no religion at all. Israel's national unity may depend not so much on
religion as on a desire to survive the hatred of enemies on all sides.
Two major streams of religious thought have existed in Judaism--that of
detailed observance of the form of the law as the means of salvation, and that of
devotion to the law as part of a personal relationship to God. In both cases, the chief
characteristics are a scrupulous ethic and monotheism as well as an insistence upon
the sovereignty of God over all of life. God's ethical demands are universal, even if
His promise of a saviour (Messiah) is held to be a national one. At the same time,
their long history of persecution by countries whose leaders they would not bow to,
and by the zealots of other religions who could not convert them, left the survivors a
profoundly pragmatic people. This, coupled with their belief in the regularity of
God's creation, has helped to make them eager adapters of techniques of all kinds,
and given them great success in the industrial age and good promise in its
successor.

Islam

Next to Christianity, Islam is the second largest of present-day religions. Like
Christianity, it was personally founded, claims universality, is monotheistic, and is
missionary--to the point of being, perhaps, the fastest-growing religion today. It was
founded in distinct opposition to Christianity by Muhammad of Mecca (570-632
A.D.), whose experiences, teachings, and visions were later recorded by his
followers in the Koran, the holy book of Islam. After his death, Muhammad came to
be regarded as more than a prophet, approaching the status claimed by Jesus Christ
except that he is not considered to be God.
God is represented in Islam as an ultimate unity, and what Islam represents as
Christianity's worship of three gods is explicitly attacked. God, or Allah as he is termed, is
punisher of the wicked and rewarder of the good. However, good deeds alone do
not necessarily assure one of paradise, for nothing is certain about the next life,
except that Allah, in his good pleasure, will reward whom he regards as the faithful
and will punish others.
Throughout much of its history, Islam has been closely associated with the
state, and there have been numerous Islamic theocracies (officially Islamic
countries). There have also been many sects in Islam, though the chief ones today
are the Sunni (traditionalists) of the majority and the Shia (militant mystics) of Iran,
Lebanon, and some parts of Africa. Other sects have included the Baha'i--though
they now claim to be a world movement that encompasses all the major religions
and are severely persecuted in their birthplace of Iran by the Shia.
Insofar as technology is concerned, Islamic scholars were the great preservers
of philosophy and developers of mathematics during the Middle Ages of Europe.
Yet, for the most part, modern technology and the industrial
revolution were not imported into Middle Eastern Islamic countries until lately, when
oil revenues allowed those nations to purchase the products of both. In more recent
years, there has been a simultaneous increase in missionary expansion and a
turning inward to a strict fundamentalism. Highly ethical, Islam castigates the people
of Western countries--which nations it often equates with Christianity--for what is
seen as their immorality, and it continues to promote the establishment of officially
Islamic nations whose laws are those of the Koran. In some such countries, converts
to any other religion face the death penalty, on the theory that they have insulted
the prophet.
With its current power, wealth, and success, Islam is aggressive, expansionist,
confident, and devotional. It appeals to force when necessary, is somewhat
fatalistic, and postulates a sensuous heaven. It gives women a low social and
spiritual status, though one greatly improved over what Arabian women previously had,
and it continues to be somewhat fragmented. It is therefore difficult to predict the
future of Islam, but for the time being it is one of the most potent religious forces in
the world, and therefore one of the most important shapers of ethics and of culture
even if not presently of technique. It appears to be the chief contender, along with
Christianity and Buddhist/Hindu syncretisms, for the religious-style heart allegiances
of all peoples in the years to come.
On the other hand, the closed nature of Islam, with its hostility to new ideas
and information, places it in fundamental conflict with information age paradigms,
and this makes it more likely to aggressively resist the fourth civilization than to
embrace it. This does not bode well either for the material prosperity of the
predominantly Islamic nations or for world peace.

Christianity

Statistically, Christianity is the largest of all religions--in its various forms
numbering perhaps a billion or more adherents. Its scriptures incorporate and
explain those of Judaism, its predecessor, as the Old Testament, and add to these
the account of the life and sayings of Christ together with those of His apostles.
Christianity is monotheistic but teaches that the one God is manifested in three
personalities--the Father, Jesus Christ the Son, and the Holy Spirit. Uniquely among
all religions, its personal founder claimed to be the Almighty God Himself, having
taken on human form for the express and sole purpose of providing an answer for
the general problem of evil and for offering a way to relate to a holy God despite the
pervasiveness of sin.
Specifically, Christianity holds that evil is an offence against God's standards
and character as revealed in moral laws He gave to Moses. All human beings violate
those laws and are therefore already judged by God, condemned, and sentenced to
the eternal and painful punishment of separation from Him--this by their own choice
not to seek Him. The Jewish sacrificial laws established through Moses are explained
as pictures of the one final and completely effective sacrifice--that of Christ on the
cross who alone, being perfect and divine, is capable of actual substitution for the
punishment of death due the sinner. By this act, God extends His grace to individual
human beings, giving each the power to overcome the problem of sin, to gain
salvation, and to have an eternal life in His presence.
The New Testament teaches that salvation is entirely a gift of God, not due to
any merit on the part of the one saved. The sins of the one coming to faith are
forgiven, and God chooses to regard the one so redeemed as having the perfect
righteousness of Christ and so fit to enter heaven. It also teaches that not only are
God's past acts toward humankind rooted in history, but that He will yet return
personally to earth to judge each person individually according to his or her
relationship with God. The resurrection of Christ from the dead is not just the
evidence of His defeat of death, but is also a foreshadowing of the general
resurrection of all people to an eternal body in which each person will individually
receive either reward for Christ's righteousness seen in them by God because of
their faith, or will instead receive punishment for their pervasive and unatoned-for
evil.
There are several major divisions within Western Christianity. The Roman
Catholic tradition holds that the New Testament was created by the church and can
be interpreted and modified by it. Thus, the supreme source of doctrine is the
church and the final arbiter of the faith is its head, the Pope, when speaking
formally as its doctrinal teacher (ex cathedra). Here, good works by a baptized
person are also held to be essential to participate in the salvation offered by Christ,
and one's status is never really secure until one dies, for at any given moment one's
sin account may be larger than one's works account. Similar doctrines are found in
many other groups within Christendom as well. In this particular case, it is coupled
with a large body of Catholic law that is held to have the same force as the ethical
principles of the Bible itself.
The Protestant Reformation of Luther, Calvin, and others was an attempt to
remove institutional trappings and to uphold the Bible as the only rule of Christian
faith and practice. The reformers taught that the Bible documented the church's
reason for being, rather than the other way around. They concluded, therefore, that
good works were not a means to the end of salvation, for that was God's finished
work and perfect gift. Rather, moral behaviour was something the already saved
would naturally exhibit out of gratitude for the gift of God, and was due to the Holy
Spirit dwelling in each believer and so incarnating in that person the character and
works of Christ.
However, many of the churches founded by the reformers themselves
acquired the status of self-perpetuating institutions, and their elders and deacons
became, if not priests, at least a professional class of clergy with their own agenda
for self-preservation. Eventually, some of them discarded reform teachings either
for a vague doctrine of salvation by works or a teaching of universal salvation for
everyone regardless of what they believed or practised. Others retreated into
nominalism, perhaps retaining social action for its own sake, but losing interest in
doctrine and beliefs. Some came to view Christ as an interesting moral teacher, His
actions exaggerated by His followers to provide an example to emulate but one that
is impossible for anyone, even Himself, to achieve. Eventually finding themselves
with no raison d'être in faith, such groups began a slow decline into oblivion. Thus,
there have periodically been new reform movements within Protestantism in an
attempt to sharpen the distinction between institutional and nominal Christianity as
represented by the formal denominations on the one hand and the cross-
denominational "true church" of faith-affirming converts on the other. For example,
modern evangelicals hold with the earlier reformers that a person is a Christian not
by virtue of being a citizen of a country or a member of a church of Christendom,
but only by a specific act of the will by which one becomes a member of the family
of God and is assured of salvation.
It is therefore possible to view Christianity as an institutionalized religion, one
among several others, or as an individual relationship with a personal and living
God. Seen in the former way, it can be analyzed beside other institutions and
cultural movements. Seen in the latter way, it is not a religion in the institutional or
cultural sense at all, but something quite different. Indeed, in the latter view, much
of what is popularly or traditionally seen as within the realm of Christendom (e.g.,
religious wars or persecutions) is not Christian at all.
It is also worth remarking that there are numerous sects or cults that have
borrowed ideas or language from Christendom and which are sometimes loosely
regarded as part of it, but which ought properly to be considered as different
religions. The usual test for inclusion in any form of Christianity would be whether a
group at least believed in a triune God and specifically in Christ as God's Son (and
Divine Himself); lacking this distinguishing doctrine would classify them as
something other than Christian in even the most liberal and general sense of that
term.
As remarked earlier in this book, it is principally the Christian institutions that
have made their way into Western historical accounts and that have had the main
recorded interactions with society and with science and technology. The rise of
science and the industrial age both took place in a society galvanized and energized
spiritually by the Christian reformers in particular, and both must be considered in
the context of the religious atmosphere in which they began. That is, the history of
the relationship of technique, especially that of science, to religion is essentially the
story of its relationship to Christianity, and it is to this that the next section will be
devoted.

Summary

It is not easy to separate many of the religions from the culture of which they
are an integral part. Of the ones considered here, Shintoism, Confucianism, and
Hinduism are so bound up in their national cultures that it may be a misnomer to
term them generally as religions at all, in the Western sense, and this is particularly
true of Hinduism, which has only a few beliefs common to all its people. Judaism is
also a national religion and claims to have universal application, though it is not at
the present time missionary. Besides certain fragmented portions of Hinduism,
three religions claim universality and are missionary: Buddhism, Christianity, and
Islam. Of these, only the last two, along with Judaism, are monotheistic. All these
have been associated with the state (or with particular economic models) in one
form or another, though some, including Christianity, contain no doctrines in their
scriptures to support such a partnership.
All religions contain some references to ethical codes, but only in some
branches of Christianity is moral behaviour regarded as a natural consequence of
having received the gift of salvation from God rather than as the means of earning
it. All have at one time or another acted as the sole means for their culture of
preserving and teaching knowledge, including the available techniques, but it was in
Christian-influenced countries that modern science, industry, and technology arose.
Most of the other religions still have the task of developing a doctrine and suitable
cultural response to industrial-age and information-age ideas that arose in (to them)
a foreign religious and social context. For Christianity itself to speak with authority
to the people of the future, it needs to find a way of reconciling its own former
partners of science and technology to itself--and this may prove to be an even more
difficult task than that faced by the culturally foreign importers of science and
technology. These prospects will be discussed in greater detail later.

11.3 Religion and the Scientific and Industrial Revolutions

11.3.1 Rome to Galileo

The rise of science and of industry must both be considered within the total
cultural context that gave them birth. The exact causes of the Industrial Revolution
may always elude historians. However, there are certain attitudes, certain cultural
values, a particular world view, and specific societal conditions within which science
came to the fore and the Industrial Revolution began. The fact that they did not do
so in other cultures is telling evidence of the importance of this context. Since the
societal context of both--as of anything--cannot be entirely separated from its
religious aspects, it is worthwhile to consider what specific influence religion has
had on these developments.
As noted, Islam played an important role, preserving and developing
philosophical and mathematical knowledge for many centuries while Europe
declined from the days of Roman glories and also lost much of its knowledge of the
past. However, it was the various forms of Christianity that had nearly exclusive
direct religious influence upon Northern Europe and Great Britain, where the major
historical developments of interest here took place--that is, the rise of science and
of industrial technology.
Through the decline of the Roman empire and the long political and legal
vacuum that was its aftermath, the Christian church--especially the Irish-led
monastic tradition--gradually became the custodian not only of Europe's learning,
but of its statecraft as well. Like all religious institutions, it was not the least
reluctant to extend the doctrines of its scriptures into an ever wider field. On the one
hand, its care over the moral values of its faithful came to reach into all social
realms--including the legal and political--so that state and church became
inextricably entangled. On the other, learning under its care came to be scrutinized
by and subject to reconciliation with or incorporation into church doctrine, so that
accepted paradigms for understanding the physical world were themselves
eventually moulded into doctrines, and given the weight of ecclesiastical
magisterium.
Thus, scientist-philosophers used the notion that God is beyond time and
reflects no change in time to explain His works--the world--and also science, the
body of learning about those works. That is, the Biblical notion of progressive
revelation in the context of both history and society was not applied to the physical
world. It was regarded as created perfect, fixed, and unchanging. Thus, the earth
did not move. Other bodies moved about it and did so in circles, for circles were the
most "perfect" of motions. Even though this model was an import into Christianity
from another world view--that of Greece and Rome--the fact that church scholars
had adopted it gave it a weight equal to the most sacred of doctrine. In this way a
common-folk or broad cultural world view came to be incorporated into church
tradition, then to be reimposed upon the surrounding culture by both religious and
political means whenever deviation was suspected.
This fixed view of the universe was applied not only to the physical world but
also to the biological one (all species must have existed unchanged from creation
on) and to the social/political one as well (the place of both state and citizen in
society were fixed and immutable). The scriptures themselves were made
subordinate to such traditions, even viewed as having been produced by them, so that
their display of a dynamic flow of history and society was subservient to the Church
teaching of a static one. Finally, the fixed world view came to be applied to
knowledge itself. Aristotle's mechanics, Galen's medicine, and Ptolemy's astronomy
all found a home in the church and took up permanent residence as doctrine--
even though none of these writers was Christian. What seemed like adequate and
plausible explanations in the light of available knowledge were accepted as the last
word and absolute truth on such subjects. As a result, medieval scholars came to be
concerned with logic, rhetoric, and oratory as means to the end of making
intellectual points to support what were viewed as obvious truths fully known. They
were not very interested in making new discoveries because of their confidence in
the completeness of the understandings they had. Institutional Christianity came to
see itself as the guardian of a specific culture, rather than as a critic of all cultures.
Despite these rigidities in the prevailing world view, the stage was already set
for the dawn of both modern science and the industrial age. A number of important
points in the historical development can be identified.
First, Christians seized Toledo and Sicily from the Muslims--in 1085 and 1091,
respectively--and took over the libraries and the scholarship practised there. This
opened up not only Arab learning, but the complete set of Greek works, rocking
scholars from their complacency and triggering a sweeping Renaissance of learning,
scholastic enquiry, and art.
Second, a century later (1214-1258) the Mongols conquered both China and
the Eastern portion of the Islamic world, establishing a conduit for the flow of both
goods and ideas between China and Europe. During this period Roger Bacon, a
Franciscan monk, criticized the basing of learning on the authorities of the past
alone and advocated questioning and experimentation. In one of his letters (1249)
he also mentions gunpowder, an oriental import that was to shake the secure
foundations of many fortified castles dominating feudal societies with the church's
help. Though he was reprimanded for his views on learning by his ecclesiastical
superiors, his recommendations were later to become the basis for the science of
Galileo and his successors.
Third, the centre of economic power was shifting northward, away from
Rome's direct influence and watchful eye, a move marked in part by the founding of
the powerful Hanseatic trading league of Northern Europe in 1241. This set the
stage for political power to pass into northern hands as well.
Various industrial techniques had also become important by this time, among
them metal foundries, water wheels, windmills, hay making, and the heavy horse-
drawn plough, and during the thirteenth century many crafts organized guilds for
the perfection and transmission of techniques among their members. This brought
another power influence and knowledge base onto the scene to compete with the
church; it also broadened control over economic power. By the sixteenth century,
there was to be a marriage of technique and academic studies that the Greeks
could never have contemplated, because by that time technique in itself had
become worthy of study.
In the fifteenth century, great strides were made in the understanding of
human anatomy--many made by painters such as Leonardo da Vinci who dissected
cadavers for study in order to be able to accurately render the external human
form. That century also saw the introduction of the most revolutionary technique of
all, for paper had been made in France by 1189 and now the printing press was
perfected by Gutenberg during the period 1436-1450, so that it became impossible
to control the dissemination of knowledge. Thus, new ideas and physical models
proliferated, and the church's adopted static world view came under increasing
pressure. This was no more apparent than in the new models put forth to explain
the motion of the sun and planets.
The complexity of Ptolemy's fixed-earth model, with some sixty circles and
epicycles of motion about it, first came under attack by Copernicus, himself a church canon,
in 1543 when he showed that a sun-centered solar system could reduce this model
to forty-eight circles. Copernicus attributed to God the skill of a clockmaker who had
put the sun in the centre of the universe to govern and rule over it all--a
characterization that was also to play a major role in later science. Later Tycho
Brahe (1546-1601) attempted a mathematically indistinguishable compromise that
had the earth fixed, the sun moving about it, and the planets orbiting the sun. His
successor, Johannes Kepler, was able to use Brahe's painstakingly gathered data
and derive an even simpler model based on a sun-centred solar system and only
seven ellipses. Galileo drove the final nail in the Church-sanctioned Ptolemaic world
view when he discovered sun spots, mountains on the moon, and moons orbiting
Jupiter, thus removing the perfection of the heavenly bodies and the earth as the
centre of all, by observation rather than by argument. He also used and promoted
mathematical/scientific methods in his early experiments on mechanics and so laid
the foundation for the modern scientific technique of examining the physical world
by observation, hypothesis, prediction, and experimentation, rather than accepting
a view of it based on arbitrary dogma borrowed from pagan philosophers.
Unfortunately, he tended to be dogmatic himself, promoting Copernicanism as
absolute truth (rather than a model) and ignoring Kepler's detailed work in favour of
his own general and qualitative observations of the solar system. This dogmatism
was also to guarantee him an escalating conflict with the Catholic Church.
Meanwhile, the Church was under simultaneous attack from within, for
Gutenberg's presses were busy printing what turned out to be the most
revolutionary book of all--the Bible. It did not take long for people to realize that
there were great discrepancies between it and the church's teachings. Not only was
the church's view of the physical world not to be found there, many of its other
doctrines were also absent from or contradictory to it.
Those who subsequently broke with the Church were reformers at first,
preaching against corruption and immorality among its officials, but they soon
differed substantially in doctrinal matters as well, and after the 1520s, the teachings
of Luther and Calvin became progressively and rapidly more important in Northern
Europe. They taught personal responsibility for sin, salvation through grace rather
than works, and they eschewed the religious institution per se as the means of (or
substitute for) establishing a relationship with God. England joined the reformation
in 1534 when Henry VIII had parliament confirm him as head of the church in that
nation following his dispute with the Pope over a divorce he wanted. Much later,
Calvin's followers in Scotland and England (Puritans) were to stress the importance
of good works on the part of the elect of God, and they included in those the pursuit
of science and the building of machines when such activities were undertaken to
improve the lot of humanity. This Puritan attitude had much to do with the
promotion and rapid spread of industrial techniques and was one of the key factors
in the scientific and industrial revolutions in England. Science also was promoted
there by Francis Bacon (1561-1626), the lord chancellor of England under James I.
Bacon analyzed and promoted the scientific method espoused by Roger Bacon and
by Galileo.
Thus, it was a church under fire on many fronts that assessed Galileo's
publications and scientific methodology in the early seventeenth century. But, it was
also a church that had some experience with dissent, having run the Spanish
Inquisition since 1483 and the Italian since 1542--both dedicated to the destruction
of heresy and to the purification of doctrine. The Catholic Church had also clarified
and reaffirmed its own doctrines and asserted the supremacy of the Pope in the
Council of Trent, begun in 1545. So, when the Inquisition summoned Galileo in
1633, the trial represented a classic clash of world views, but the outcome was a
foregone conclusion. Galileo was forced to recant, was placed under arrest, and his
books prohibited--a ban that would last for over two centuries. To some extent, he
was the author of his own misfortune, for he dogmatically insisted that the
Copernican view was not just a better mathematical model for the universe (that is,
an abstraction to explain it), but was also the ultimate physical reality--something
that is impossible ever to prove. After all, Earth could be the centre of the universe
with everything else moving about it, even though this is not the simplest of
explanations for what we observe.
The church was not prepared to move on what it had come to regard as a
doctrinal point merely because Galileo's explanations were more useful. He wished
to remain a good Catholic, but was convinced that the church's interpretation of the
Bible, and not the Bible itself, was at fault. Neither he nor the cardinals who tried
him seemed to realize that both had trespassed into areas in which they were not
competent to judge. Conflicts over church doctrines about the physical world, and
corresponding insistence by scientists on their models as ultimate reality, have
been a feature of the intellectual landscape ever since.

11.3.2 Reason in Science and Protestantism
The Catholic Church continued to fortify itself politically and doctrinally,
effectively ending both the Reformation and the pursuit of science in its realm of
Southern and Western Europe. Meanwhile, the scientists and religious reformers of
Northern Europe and England discovered that they had more in common with each
other than merely being mutual enemies of the church that had given both birth
and then cast them out.
First, they shared the scholarly attention to detail, to rigor, and to logic that
was their common heritage from their scholastic predecessors. This did not mean
that such scholars always agreed, but it did mean that they could argue well--and it
is in the crucible of intellectual argument that many a theological and scientific
truth is born or tested.
Second, they also inherited the general Christian belief that God had not only
expended His creative energy as the Old Testament described, and sustained it
through the power of Christ as the New taught, but that He did both in a rational
way. God was therefore regarded as the best of both Hebrew and Greek ideals.
Moreover, humankind, being made in His image, could also expend creative energy
and do so rationally. These ideas, coupled with a new reliance on the Pythagorean
concept that reality is rooted in its observable and numerically describable form,
meant that God's works could be analysed after Him and that enlightened reason
could think His thoughts after Him. Consequently, the rules of His rational universe
could be deduced (and ought to be) for the sake of understanding God and applying
this to the good work of improving the human condition. That is, the created order
could be studied and used for good because it had an underlying rationality--and
this unique idea is in fact fundamental to all science. This was expressed in
Calvinism as part of God's predetermination; He had ruled by fixed decrees from the
beginning of time and these were observable as the laws of nature. This very
observability, however, meant that such laws were not untouchable; they were not
quite a part of God Himself so they could be understood and used.
The great seventeenth century scientist Isaac Newton believed God sustained
the workings of the universe, such as those of planetary motion, by supplying
additional force on an ongoing basis to keep the system constant. As far as
Protestant churchmen and most scientists were concerned, the Bible, properly
understood, was literally true, and it could not contradict the truths discovered in
the physical world, provided they were also properly interpreted.
Third, Protestant emphasis on faith as a personal experience rather than a
cultural and institutional one coincided with scientific compulsion to personally
doubt, investigate, hypothesize, and experiment. In its formative years, science was
intensely personal and individualistic, and its great thinkers were not afraid to
become highly emotional over science as the great gift of God placed in their hands
to understand the world He had made. In its later years, there was a tendency to
downplay this aspect of science and to present it as if it were impersonal,
institutional, a finished work, and entirely value free--but it still does have the
personal element, as demonstrated in Chapter 2. Even though the professed goal of
modern science is the specific exclusion of all factors that depend on variations
between individual experimenters, and the practice of science assumes that a
reality exists that transcends both beliefs and culture, it is impossible for its
practitioners to achieve transcendence themselves. Paradoxically, they are even
reluctant to do so, first, because transcendence is seen as religious, and second,
because the very assumption that an objective reality exists must also be
questioned if science is to be consistent. That is, just as democracy must give its
enemies leave to criticize and even condemn its fundamental assumptions--for it
would cease to be democracy if it abandoned either its absolutes or its toleration of
potentially fatal opposition--so also must science encourage and even conduct a
potentially fatal criticism of its own foundational presuppositions. Failure to do so
would be a betrayal of those foundations.
Fourth, both science and Christianity profess to be designed for seekers of
truth (or at least seekers of reality), and to need observable facts, though not to be
dependent totally upon them. That is, both claim to be the revealers of the
meanings of mysteries, not the creators of myths. There is a strong and striking
element of commonality in thinking, in methodology, and in application. It is
expressed in the theoretical realm by the need to conceptualize, explain, predict,
generalize, and synthesize--one with theological ideas and God's revelation, and the
other with physical ideas and the empirically observed universe. In experience, both
are personal, descriptive, and experimental. That is, theoretical knowledge satisfies
neither; both need to relate knowledge, classify it, predict with it, and consider its
consequences. Also, both are relational and transformational; they both have an
imperative to be applied to and to change real people in real situations. Further,
they both have a philosophy of being, a set of presuppositions, or a world view
through which everything else is filtered. In addition, in both Protestant Christianity
and science, these four--world view, theory, description, and application--act in
mutual feedback to change one another, and cannot exist alone.
Fifth, their mutual disdain of the kind of hierarchicalism expressed by the
earlier Church in both its theological/ecclesiastical and physical views led both
science and Protestant Christianity to dispense with hierarchical views of nature.
There was no longer need for either to suppose the heavenly bodies were virtuously
ordered, or that a host of angelic beings of various ranks were employed in
maintaining heavenly movements. Similarly, there was a general loss of belief in
evil spirits, the other side of the angelic host, and with it the personality and activity
of Satan as a progenitor of evil faded from view. Humanity began to be viewed as
autonomous--free from external influences and bonds, and able to make anything of
the race. Scientists were to take this much further, however, and eventually to
make differences on this point a crucial factor in a near total break with Christianity.
Galvanized by the techniques of science, the growth of industry, and an
entirely new view of religion, the society of Northern Europe and Great Britain
underwent a startling transformation after the seventeenth century. The momentum
was slow at first, for many wars came to be fought in the complex and unstable
political environment left behind by the Reformation. But scientists continued as
partners with the Protestant churchmen until well into the nineteenth century, to a
time when the industrial revolution was solidly underway, and the scientific one
highly advanced. There was a slackening of scientific momentum at the start of the
eighteenth century, but it revived as scientists turned practical in England and
fostered the Industrial Revolution, and as they turned theoretical in France and
promoted the political revolution there in 1789.
The opening of the Americas to settlement also had a profound influence on
the European scene. The new frontier provided a population safety valve, and a
place to flee from religious persecution. It also triggered a major expansion in the
European economy and assisted in raising the standard of living--both for those who
emigrated and for those who stayed at home. During these years, it was common to
be both theologian and scientist; indeed the practice of combining the roles of
ordained minister with scientific seeker-of-knowledge was widespread, especially in
Great Britain.

11.3.3 The Partnership Dissolves

However, this partnership was not to last, and several factors contrived to
drive a wedge between the world view of science and that of its mainly protestant
church partners. One was that even though both movements had begun in a
reaction from the institutionalized church, both had been busy creating institutions
of their own. These institutions took on new reasons for being, entrenched positions,
and dogmas that their originators did not have. The institutions of the scientific
world took form as the Royal Society of London (1662) and the Paris Academy of
Sciences (1666--the same year as Newton's gravitational experiments). On the
religious side, the new churches became more firmly connected to the state, though
the 1662 Act of Uniformity in Britain had many dissenters, who set up their own
universities and also became prominent in scientific circles. Though they had much
in common at first, these institutions were also eventually to play a role in
destroying that commonality. Usually begun under the protection of a church, they
gradually went their own way and declared their independence of all things
religious, and even of all society.
Meanwhile, many Protestant church organizations took on a nominalism,
legalism, and formalism not unlike that which had characterized the Catholic Church
before the reformation. They formed self-perpetuating bureaucracies and lost their
sense of urgency and mission. It became possible to be a "good Christian" by
occasionally attending services and giving a donation while pursuing economic gain
in the service of industry with the energy of one's substance. Their theologians had
also followed their earlier Catholic counterparts by adopting as absolute fact the
current scientific world view, in this case that God had wound up creation like a
clock and had then stepped out of the picture. As this abstraction seemed adequate,
churchmen came to believe in it as dogma. Thus, they came to discount miracles as
fables and to define the supernatural out of existence. For all practical purposes,
their clockwork god became irrelevant to real (empirical) life.
At the same time, the Catholic Church had not only consolidated its doctrinal
hold on the peoples it influenced, but it had also come to terms with science--not
adopting its world view but tolerating it at arm's length and making an attempt to
integrate an understanding of its methodology into the faith. Thus, by 1820,
Galileo's books were no longer forbidden, and scientific investigations were thriving
in some Catholic nations as well as in the Protestant ones. This was particularly true
in France where, through the turmoil of revolution, church and state had a very
rocky relationship and intellectuals had great freedom from church control.
Meanwhile, the world view of scientists was changing, too. Flushed by the
success of both the scientific method and of industrial revolution's machine and by
a rising standard of living, they developed four ideas much further than they had
before and adopted the product as the guiding principles of a comprehensive world
view of a very ambitious scope.
First, their own mechanistic view of nature was gradually reinforced by their
successes to the point where it replaced the personal God who sustained the
universe, and Christian belief was discarded, with God being reconceptualized or
represented in daily practice as a vague personality termed "nature." Because such
a god had at best only created, then stepped back and let nature take its course, he
was impossibly remote, could not be experienced and was therefore essentially
unknowable. To the deists among scientists, God existed only as a creative force; he
had no personality and certainly no personal interest in the real world or in any
individuals. He was an abstraction like any other and could therefore be redefined.
Moreover, the Christian scriptures themselves were subjected to a new analysis
based on the exclusion of miracles as even theoretical propositions. Thus, many
scientists first rejected any intersection of the supernatural with the known physical
world, then the historical accuracy of the Bible, and finally Biblical doctrines as well,
for if its contents could be regarded as substantially mythological in some areas,
then none of it need be considered authoritative or legitimate knowledge. In many
circles, this vague deism was, of course, carried much further and became either
disinterested agnosticism or hostile atheism.
Second, the philosophical techniques of science--rationalism and empiricism--
gradually became elevated to the position of unquestioned absolutes, achieving a
status equivalent to apodictic religious doctrine. This version of materialism
ultimately came to be known in science as logical positivism--the view that
positively asserts the sole existence and knowability of logical reasoning and
empirical data, and that explicitly defines out of existence the supernatural (the
corresponding political expression was the Marxist doctrine of dialectical
materialism). That is, their world view went beyond indifference to religion, and
became hostile. So great was confidence in science by 1900 that it was believed
that the essential workings of the universe were now well known in the absolute
sense and that only a few details--such as more places after the decimal in certain
physical constants--needed to be settled. It was inconceivable that a god, if he
existed, could have worked outside techniques knowable to science; since these were
believed to be already known by humanity, there was no need to propose there
were higher ones known only to him by which he could have created and sustained
the universe or worked miracles--say, of healing. Such a god could be treated as a
dispensable hypothesis, as irrelevant to science. For all practical purposes, Newton
had created the world, for the power of his mechanics and calculus served to
explain all but a few minor details. It appeared to be plainly obvious to scientists
that the physical world was indeed as it could ordinarily be observed to be.
Newtonianism was no longer an abstraction, it was the real thing. Such adoptions of
current scientific models and world views as though they were absolute truth had
happened before, to later regret, but, as with many lessons of history, this one had
also been forgotten.
Unfortunately for the nineteenth century view that only a few more decimal
places needed to be calculated in some constants, those few niggling details were
later to give birth to the revolutions of relativity theory and quantum mechanics--
revolutions that shook the theoretical foundations of science and caused a few
cracks to appear in the epistemological ones as well.
Third, scientism (for by now it was a full-blown belief system) expanded upon
the humanism of the Greeks and of the renaissance thinkers. Since God either did
not exist at all or did not matter, humanity was self-evidently the autonomous
pinnacle of all observable nature. At most, God needed only be consulted on those
questions where humans had difficulty finding scientific answers. The world was the
human oyster, to do with as desired, and with no higher accountability. There were
no bounds, no one need submit to another authority and no one needed a social or
other context to give meaning to self. The whole cosmos existed and drew meaning
from human observations; indeed, it could well be regarded as having been created
by humankind, for the rational/empirical world view was the only ultimate reality.
Human beings could therefore recreate the cosmos; they had either collectively
become god or they could define new ones for themselves.
This new scientific humanism had a great deal going for it, because the record
of achievement and progress was substantial, and there was therefore reason for
great optimism that both would continue indefinitely. It seemed to nineteenth
century scientists that the human race had pulled itself up by its own unaided
efforts from poverty, disease, and superstition, and if not already in a golden age, it
was on the very edge of one. All problems could be solved. Unlimited wealth was
available for exploitation and the inherent goodness of people was evident in the
technology they had created. Soon war would be abolished and a glorious age of
health, wealth, harmony and global community would be entered upon. In the
process of coming to this view, "progress" came to be a god word of status akin to
"nature." The assumption that humanity was progressing to a higher state was
unquestioned and unquestionable--it was considered a self-evident fact.
Fourth, the scope of progress was extrapolated backward as well as forward in
time, and a number of theories of evolution came forward in the early part of the
nineteenth century. One effect of these was to produce a naturalistic explanation for
the presence of humans on the earth, one tied to the material world alone and
contained entirely within the context of naturally observable processes. Thus, the
mechanistic view of the universe came to be applied to humans as well; biological
as well as physical systems all must have progressed (evolved) to higher forms by
past operations of the law of progress in a purely mechanistic universe. Several
means had been proposed whereby this mechanistic evolution might have taken
place, but experimentation refuted most, and there was no popularizer for such
models to make them more widely acceptable. However, the stage was set for
someone who could produce a plausible theory. Then, in 1858, Charles Darwin
published his idea that evolution had taken place by the natural selection of traits
according to their survival value in changing environments (survival of the fittest).
In addition, he was able to communicate this notion effectively to the public.
Evolution had its mechanism; moreover, it was firmly placed in the past and could
not be refuted by experimentation.
Though acceptance in some scientific circles and many religious ones was
slow at first, the philosophical ground had been well-prepared, and evolution by
natural selection eventually swept all other naturalistic theories of origins away.
Along the way, it gave rise to a new hierarchicalism--this time of species--ordered
according to the height (i.e., the complexity) to which each had evolved. This was
later to have disastrous effects when applied to the "races" of humanity, when
science came to be employed to "prove" one or another nationality or skin colour to
be inherently superior, because of supposedly being more highly evolved.
Moreover, natural selection came to be applied to social, political, religious,
economic, and moral systems as well as the biological. In these fields, there was
even less inclination to regard this kind of evolution as a model or an abstraction--it
quickly became the only possible expression of ultimate reality. For example,
human opinions were part of the social machine; opinions therefore and necessarily
determined social and moral patterns, which were relative, not absolute, for they
too could evolve and would necessarily and inevitably do so to a higher and better
order of existence. Christian morality had survived in a world that had rejected its
theology; now it too was set aside. The old morality was replaced by a new one,
based not on absolute divine fiat, but on changing human opinion--higher and more
sophisticated principles than right and wrong applied to evolved mankind. That
there was in practice little difference between the new morality and the old
immorality was not much remarked upon.
It is important to note, however, that there is no logical connection between
theories of social evolution and the biological ones of Darwin, so that evidence for
one lends the other no support, and the refutations, problems, or failures of one
cannot be used to criticize the other.
Thus, by the end of the nineteenth century, those phenomena associated with
what is called religion in other cultures had been abdicated by a largely nominal
Christianity, and had been taken over by a confident and aggressive scientism
which, though it by that time tended to eschew all religions, was filling the same
intellectual, philosophical, and practical role, and could be properly regarded as a
religion itself.
Conservatives in the religious community had by this time so little knowledge
of, interest in, or commonality with science that they were unable and unwilling to
debate the issues on scientific grounds. They had so thoroughly committed
themselves to a view of the physical world as the product of a God of the clockworks
that they had forgotten this idea was an abstraction and could not be proven to be the
physical reality itself. When science changed course and committed itself instead to
a new model that conservative Christians could not accept, the evolutionary model,
a classic irresolvable clash of world views resulted, one that gradually drove
Christians into silence, accommodation, or an intellectual ghetto.
Thus matters stood until well into the twentieth century, when events began
once again to force changes, not only in both world views, but in the societal
context in which world views are formed and exercised. Before considering the
direction in which these are now going, a more detailed examination of the
creation/evolution debate is in order. It is not only an important case study in
differing world views, but has recently been restored as a debatable topic principally
through the efforts of a new breed of conservative Christians who are not prepared
to assume that the last word on the subject was said in the nineteenth century.

11.4 The Debate Over Origins--an Illustration of World View Clashes

Because of timetabling conflicts at MUSHEAT, Nellie is taking her biology
course at the local public community college. Hoping to finish at least his first year
of university before completing high school, Lucas is also enrolled in this course.
The instructor is Ed Mandel, and this scene takes place the night before their
seminar at the university, close to the end of class time as Mandel is summarizing
his lesson.

Mandel: This great panorama presented by the evolution of the universe and
of life from nothing at all ought to result in a sense of awe at the wisdom of nature.
We are who and where we are because of being the best adapted product of that
grand process, superior to all else, and veritable lords of all we survey. We do not
need to hypothesize that some god created us, for we have created the gods. Man
is the epitome...
Lucas: (interrupting) How can you be so sure of all this?
Mandel: (smiling coldly) What do you mean?
Lucas: I mean, you seem pretty dogmatic about it. How do you know it all
happened the way you say, when you weren't there to see it? Isn't evolution a
theory rather than an absolute fact?
Mandel: Evolution is a theory. It is also an absolute fact. We are as certain that
it happened as it is possible to be about anything in science.
Lucas: But the "certain" mechanism seems to change with each new
generation of scientists. There isn't much in common between biology text chapters
on evolution that are more than five years apart. For instance, isn't natural selection
now rather discredited?
Mandel: It makes no difference whether we know what the mechanism is or
not. Even if every possible mechanism for evolution were proven to be wrong, we
would still know it happened.
Lucas: How would we?
Mandel: Because we're here, and there is no other possible explanation other
than evolution.
Lucas: But that's an ex post facto argument; it's not valid to say that the
present determines what history must have been, only that the present is a result of
whatever history was. Aren't there other possibilities?
Mandel: Well, for one thing, you can see it in embryology. You started out as a
one-celled creature, passed through a variety of stages including that of a fish with
gills, and became a human. Thus your own development recapitulates that of the
species.
Lucas: I've read about the ridges you are referring to as "gills." They never
actually serve such a purpose, and develop into important structures in themselves.
Besides, no one who knew anything about it could ever mistake a human embryo at
any stage of development for the embryo of any other species, so that argument
won't hold water either. Not only that, but it has been conclusively shown that the
man who first suggested that theory faked his photographs, and the whole thing is
now discredited.
Mandel: What other explanation could there possibly be for the existence of
complex organisms like ourselves than that they all evolved by chance through
natural processes? You cannot deny the similarities among various forms of life that
prove their evolutionary relationship.
Lucas: Similarities don't prove relationship, even if they may suggest it. If you
see two similar buildings in downtown Vancouver, you don't think one evolved from
the other, though you might suppose both were designed by the same architect.
Mandel: You're not one of these creationists, are you? What's your name
again?
Lucas: Lucas Dominic, sir.
Mandel: (startled) Dominic? You're not related to that old Bible preacher that
used to live up near Cultus Lake are you?
Lucas: (biting his lip and speaking almost inaudibly) I live at Berea, sir, and Mr.
Dominic is my guardian.
Mandel: So the old fellow is still infecting impressionable boys with the
disease of Christianity, eh? What do you think he would say to a public debate on
the believability of evolution versus that of the creation myths? Oh, never mind--I'll
go phone him myself right now. Just wait here.
As soon as Mandel has rushed out the door, an excited buzz of conversation
erupts. A few sidelong glances are shot Lucas' way, but others seem embarrassed
to speak to him. Finally, Nellie leans over and whispers:
Nellie: What's Berea?
Lucas: (in a very pained voice, and after a long pause) It's an orphanage.
Nellie: (startled) I'm sorry, I didn't know such things existed any more.
Lucas: (looking forward) I'm the last one there. (and then, turning accusingly
to her) You weren't much help.
Nellie: Well, I couldn't get a word in between you two. Besides, you have little
to lose.
Lucas: What do you mean?
Nellie: This is a required course for me. I have to pass it. You can always take
it again later.
Lucas: What are you getting at?
Nellie: Mandel fails anyone who defends creation in the class.
Lucas: I didn't. I'm not certain what I believe in myself; I just thought he was
too sure of himself, so I challenged him. Look, I feel sick and I don't want to be
around when he comes back; I'm heading out.
Lucas leaves, and a few minutes later Mandel enters by the other door. He is
rubbing his hands together and seems pleased with himself. He immediately notices
Lucas is missing, and remarks to the rest of the class:
Mandel: He actually agreed to do it. We'll settle his hash. A few invited experts
will make mincemeat of a country preacher, and I'll arrange for it all to happen in
front of a few thousand people and some television cameras.

*****

Debates over scientific issues arise for a variety of reasons. These may
include disputes over the data employed, the methods of doing the research, the
qualifications of the researchers, the validity of the results, or even personality
conflicts. Many such disagreements can be solved by attempting to repeat the
experiments under a variety of conditions to see whether the suggested conclusions
are indeed justified. Because of the human element, a variety of ethical issues may
become involved in the doing of science, but these too are in many cases resolvable
within the scientific community. After all, the premise of the information age is that
mismanagement, sloppy research, bad analysis, wrong conclusions, or outright
deceit will eventually be exposed to the light of public scrutiny and be corrected. In
such situations, there may be no need to appeal to outside absolute ethical
authority for assistance in reaching a resolution. The experiments either
demonstrate the validity of a theory or they do not.
However, some issues that are not as straightforward touch on the very
meaning of science, as well as motivations for doing it and interpretations of the
results. As usual, such fundamental issues are further complicated by the ethical or
religious questions that are often related to them. An interesting illustration of this
is the debate surrounding the theories competing to answer the question: "How did
we all get here?" This not only is one of the most fundamental of such issues but
also is frequently made a focal point for disputes between science and religion. In
one form or another, the question of origins and its principal answers have been
around from the beginning of recorded history. The usual answers are often broadly
classified as either "creationary" or "evolutionary." However, the meanings of these
words vary as much as do the meanings attached to words with ethical content
(such as "good"). Moreover, the boundaries between the two principal positions are
not always distinct or easy to determine, even though some modern-day
protagonists insist that they are. Neither is it always clear what, if any, are the
ethical consequences of holding to one position or another. In their most general
form, the major views of origins can be summarized as follows:

o Strictly materialist view
The universe and all that is in it, including humanity, can be explained in
terms of time, chance, and physical laws. There is no design, purpose, or
direction; what is, simply is. All that can be observed is all that there is, and this in
turn is entirely the product of time, random events, and the operation of natural
processes.
o Deistic/mechanistic view
The universe was created, as were the physical laws governing its operation.
After that, the deity stepped back from the process and allowed everything to run
its course without interference. Life, including humanity, is the product of the
predetermined operation of created processes. If the universe and the life it
contains have any purpose or direction, it is to a large extent built-in.
o Theistic view
The universe and its physical laws were created. At certain points in the
subsequent action of time, chance, and natural processes, a deity may have
intervened. This might have been done once to cause life to come into being, and
again to provide a distinct nature to human life. The particular involvement of the
deity may imply some measure of purpose and direction to life. It may also imply
some accountability to that deity for humanity as a whole. Whether such purposes
are knowable or not is debatable.
o Supernatural view
The universe and all that is in it, including humanity, exists by intelligent and
personal design. It had a definite beginning, has a definite purpose for being, and
will have a definite end. Natural processes operate within a purposefully created
framework of both time and space. All of creation is accountable to the creator.

There are a variety of positions on the date of origin, on the duration of
various creative or evolutionary processes, and on mechanisms for origins within
each group. Summaries such as these four are very broad; they contain statements
about both the cosmos in general and the origin of human life in particular. Models
for origins that fit in one of the first two categories are, roughly speaking,
evolutionary. Those that fit one of the latter two are to a greater or lesser extent
creationist. Some theories have elements from two or more of these groups, and
indeed the variations shade almost imperceptibly from one to the next. It is not
necessary to examine all of the possibilities in order to understand the debate over
origins. However, it is important in debating such issues to distinguish among
o the origin of matter, that is, of the initial cosmos,
o the origin of the earth in the cosmological context,
o the origin of life from nonlife,
o the development and speciation of plants and animals, and
o the special case of the origin of human beings.
Many scientists would confine the term "evolution," for example, to just the
fourth and fifth of these issues. Others use it for all five. A creationist, on the other
hand, would accept variation within species as the only observable form of evolution
(i.e., microevolution, if she used the word at all), and might deny that any of the
others have ever happened.
Actual debate on such issues tends to involve mostly the extremes, with the
deists and theists--who would rather be accommodative than raise issues--being
largely ignored, or declining to participate on various grounds. Within their
immediate circles, both conservative creationists and strict materialists tend to be
outraged at any suggestion that the other side (or any middle ground) has any
credence whatsoever. It is important to remember, therefore, that in the detailed
discussion of major issues presented in this chapter, mere provocation is not the
goal, but rather the inducement to think. As the following section concludes, the
point is to consider the nature of scientific debate; a particularly provocative topic is
a useful foil for doing so. The reader is free to elaborate any position, and indeed is
encouraged to attempt this, but only after considering positions on some of the
issues of concern to the major participants.

Philosophy and World View

Which model for origins a person holds to be true depends on that person's world
view as it touches such issues as the existence and attributes of deity, the
revelation that such a deity is alleged to have provided, and the relationship of
human beings to that deity. There are also consequences of holding particular
beliefs about origins.
First, world-view considerations, and the degree of allegiance to those views,
tend to determine attitudes toward interpreting data that bear on the question of
origins. Whatever their world view may be, scientists necessarily work within its
framework, fully convinced of the reasonableness of their beliefs. It is
therefore natural for them to filter their scientific experiences through their world
view in a way that yields results supporting their beliefs. There is nothing wrong
with this as long as those involved are aware of it. However, in some cases, it may
be that there does not exist any conceivable data or argument that could ever be
accepted as invalidating the world view held. Such a position, in effect, removes the
matter in question from the realm of the scientific altogether, for something that is
not even conceivably falsifiable by scientific methods is also not verifiable by them
either.
Thus, to some the deity is creator by very definition, and a random universe
without that creative power is inconceivable. A completely mechanistic universe is a
denial of the existence of the overseeing deity in whom they trust. Even if they
cannot explain how creation was performed (in the scientific sense), and regardless
of how many people claim that the physical evidence not only supports but
completely documents materialism, they must still say, "We absolutely know that
we were created."
To others, the world can only conceivably consist of the material, and it is
meaningless to suggest that anything called the "supernatural" could exist or be
perceived if it did. That is, they must believe in some form of evolution, as generally
defined above, and do so regardless of the current state of the evidence for or
against; the alternative is not possible even to consider. Theirs is an episteme that
requires evolutionary explanations for origins lest it be self-contradictory. That is,
even if every mechanism for it were shown to be impossible, strict
rational/empiricists and materialists must still say, "We absolutely know that we
evolved." Even if they do not completely reject religious accounts of origins, they
will consider them as subject to scientific analysis and potential refutation, for the
ideal of this brand of science is that no beliefs, including their own, are free from the
possibility of being falsified by actual experiment.
Others, who are between these two extremes, may want to believe to some
extent in a creation, but are also persuaded to accept the majoritarian view that
evolution is an established fact rather than a model. This group seeks one of a
variety of possible accommodations between two extremes, but their faith is in a
world view more compatible with materialistic models than supernatural ones.
Second, answers to the question of whether or not there is a God who created
are related to whether one believes that such a God has the authority to make
ethical demands. If a creator does exist, and has made ethical rules, then such
demands constitute a higher authority than human opinion, reasoning,
methodology, or law. To a greater or lesser extent, this is the position of most
religions, though they may differ on the details of ethical demands. If, on the other
hand, human life exists solely because of the operation of time, chance, and natural
processes, then no external moral authority need be assumed. That is, if any
meaning exists for humanity as a whole or for human ethics in particular, it can be
found only in the natural world, and by using natural processes. In summary, some
claim that there are answers to meaning questions and preexisting rules of ethics
that affect the entire human social dynamic and that these transcend the natural
order. Others claim that answers can only be found in the observable natural order,
for there is nothing else. One possible conclusion from this is that the principle of
"survival of the fittest" can be applied not only to the development of life itself, but
also (in some fashion) to moral rules--indeed that it is itself the only moral rule. This
conclusion is problematic, for it makes morality not only relative to the situation, but
also to the evolutionary place of the one applying the rules, and it is unclear how
this could in the end produce anything but hedonism.
It is not hard to see that this debate involves far more than science and
religion, for anyone who ascribes transcendence to any aspect of the human
experience, in such things as art, music, or philosophy, will also be in conflict with
strict materialists.

Recent Developments

Many people are happy with what they see as a compromise on one of the
two middle positions outlined above, and it was on this centre ground that most
North Americans probably stood before the 1950s. However, deists and theists may
have the most difficulty formulating a belief framework for their position, for their
desire is to find a middle ground that allows for the action of a creator, but that still
accepts the mechanism of evolution as an established fact. It is not easy to do this
when the underlying principles of the two positions they are attempting to reconcile
contradict each other so fundamentally and so thoroughly.
Moreover, since that time, new conservative creationist groups have arisen
and the debate over origins has been rejoined in earnest. This has resulted in the
founding of such organizations as the Institute for Creation Research and the
Creation Research Society, both of which consist of scientists committed to working
and publishing within a creationist world view. Much of the material produced by
these groups is sharply at variance with the interpretations of the majority of
modern scientists and is severely criticized or ignored by mainstream schools and
journals.
The new creationists assert that time, chance, and natural processes could
not possibly have produced life in the first place or resulted in any evolution
afterward other than minor variations. They claim that the fossil record is better
evidence for creation than for evolution, and that neither biology nor geology
supports an evolutionary scenario. Many (not all) of them doubt or deny that the
earth is very old, suggesting an age of thousands rather than billions of years. They
flatly deny there is any evidence for human evolution. They attempt to conduct the
debate on scientific rather than religious terms, and try to use scientific methods to
demonstrate their points, though few in the majority scientific community will
engage them on those terms. As will be seen below, those at the other extreme
have counterattacked in force on all fronts.
While one could argue about the degree of success both sides have had in
convincing the general public of the rightness of their views, a major effect has
been to polarize the lively discussion, all but excluding the middle ground from the
debate.

Belief and Faith

Thus, someone like Stephen Jay Gould will say: "Well, evolution is a theory. It is
also a fact." (Montagu, p. 118) and also: "'Scientific Creationism' is a self-
contradictory, nonsense phrase . . . dogma, not science" (Montagu, p. 120). Another
in the same vein is Isaac Asimov, who says of creationists [ibid., p. 184] that they use
the general belief of most peoples in a Creator as evidence there must have been
one, and dismisses all such accounts as myths, none inherently more credible than
any other. The essential point made at this end of the spectrum is that the very idea
of creation is inherently religious and so not open to examination. It can therefore
have no connection whatever to science. Most of its writers do go further, however,
and assert that creation is not only unscientific, but actually false.
In a piece written as a direct response to Asimov for the Institute for Creation
Research, Henry Morris responds that creationists do not use such arguments, but
that evolutionists: "use the argument that 'all scientists believe evolution' as the
main proof for evolution." (Impact #99 pamphlet) It ought to be clear that
majoritarian arguments are not sound logic, so that it makes little difference
whether either side actually uses them or not.
Both creationist and evolutionist scientists believe themselves to be engaged
in legitimate and experimentally sound searches for knowledge. The result is an
indefinite impasse, for apart from a religious-style conversion experience, neither
can ever convince the other to abandon what they believe to be valid science
consistent with their world view. And yet, truth is unique, for the fundamental
principle with which science begins forces one to believe there is such a thing as a
'right answer' to a scientific investigation of the physical universe, that the cosmos
as we perceive it today only has one history. The assumed uniqueness of truth
would appear to mean that one side or the other could be proven wrong, but as
indicated above, no such purported proof could ever be accepted by the "losing"
side. Asimov: "...creationism has clearly lost. But creationists, placing myth above
reason, refuse to accept the decision..." [ibid., p. 191] Morris: "Asimov, in his anti-
creationist harangue, does not attempt to offer even one slight scientific evidence
for evolution." [ICR pamphlet]
The problem is that the issue is not decidable by physical means, because at
least part of the argument is not about science at all. The difference in world views
causes the two groups to search in different places for different things and to
interpret what they do find consistently with what they already believe. Because the
question of the existence or otherwise of a creator's initiation of the universe is

460
fixed in the empirically unknowable remote past, it is not decidable by the kind of
mechanistic process that the strict rational/empiricist claims science to be, and so
there is no possibility of resolving the impasse in general terms. One of the
difficulties in analysing a debate of this sort is that the two sides do not agree on
what it is that they are debating--science or religion. Indeed, both sides will claim
that theirs is a scientific position and that the other's is a religious one.
Turning to the scientific aspects, evolutionists say that having a falsifiable
mechanism for evolution makes it inherently superior to creation, on the grounds that
the theory of revelation by a deity is unfalsifiable, and is therefore not science,
but is accepted by blind faith. Asimov: "To those who are trained in Science, creationism
seems like a bad dream ... a renewed march of an army of the night risen to
challenge free thought and enlightenment." [ibid., p. 183] They are not concerned if
they have disagreements about the possible mechanism for some aspect of
evolution: "However much scientists argue their differing beliefs in details of
evolutionary theory ... they firmly accept the evolutionary process itself." [ibid., p. 187]
Creationists claim that such an insistence that evolution must have taken
place in turn constitutes the faith of the other side. Morris says in the ICR pamphlet:
"Evolutionists walk by faith, not by sight!" One side accuses the other of naivety in
belief systems, and the other counters that proposals to fill in evolutionary 'gaps'
involve more wishful thinking and flights of fancy than factual evidence. Both
believe that the opposing world view is mythological, fragile, and unscientific, and
they do so principally because of their opposing world views. In addition, the two
sides reject each other's interpretation of things like fossil evidence, claiming that
this too is based on faith and not reason.

Thermodynamics

When they turn from the philosophical to consider more specific scientific
matters, creationists argue that the idea that a high degree of order in both the
cosmos and in living organisms could arise spontaneously from a primordial chaos is
a violation of the entropy principle, also known as the second law of
thermodynamics. They assert it is not good enough to claim that matter in chaos is
inherently able to organize itself or that the provision of raw energy to a system
causes its order and information to increase spontaneously without any previously
existing program. Evolutionists reject these arguments as simplistic and assert that
energy input to the earth from the sun is sufficient for evolution (i.e., the earth is
not a closed system, so the second law is inapplicable). They also search for
mechanisms whereby order can increase even in closed systems, but the examples
of this, such as the growth of crystals in solution, are rejected by creationists as
irrelevant. The creationists insist that there are no known physical laws by which the
voluminous information coded by a DNA molecule could self-organize.
It should be noted that the two sides profess to be unable to understand each
other's arguments on these points, and they differ sharply on the meaning and
applicability of entropy to the whole question of evolution.

Mechanism--Chance and Design

An important argument of the creationists derived originally from an analogy
to the old "clockwork" view of the universe. If one found a watch
somewhere, it would be immediately obvious from its very intricacy that it did not
come about by chance; no one would dispute that a designer was involved.
A single living cell, much less a human being, displays an intricacy of far greater
order than does a watch, and therefore also implies a designer. To put it another
way, if one beholds a woven wicker basket full of flowers, one would immediately
"know" the basket had a maker. However, the flowers are much more complex.
Surely therefore, the argument goes, they have a maker too.
Asimov responds: "This argument seems unanswerable ... [but] to surrender
to ignorance and call it God has always been premature ... the complexity of the
universe ... is not in itself an argument for a creator." [ibid., p. 184] Morris contradicts
this in his response: "The principles of mathematical probability and scientific
causality certainly do not constitute a 'surrender to ignorance,' but provide a
compelling demonstration that complex systems do not originate out of chaotic
systems by random processes."
Asimov is correct in stating that this argument seems unanswerable; the question
of a cosmic design is moot to one who does not or will not conceptualize such a
designer. The paradigms of modern science require that the act of such design be
demonstrated, even repeatable. Familiarity with the design of machines in actual
experience makes the origin of the watch obvious; but nothing less than seeing God
in the act, not even doing it in His stead, can satisfy the strict materialist. As things
stand, they have at best nothing but what some other people hold to be His word for
evidence, and their paradigm for knowledge rejects that as insufficient and circular
reasoning.
However, creationists go much too far if they assert the existence of the
clockworks proves that the God of the clockworks also exists. At best, it is only a
hint that this may be so, not a proof. Existence of a watch maker or a flower basket
weaver is only inferred, then believed on the basis of what appears to be reasonable
and probable evidence; and this is the best this argument can do for creationists.
However, this is also the nature of scientific evidence in general; even the balance
of a preponderance of empirical evidence does not constitute an absolute logical
proof for a theory, only reasonable support for one. That is, the best conclusion that
can be made of this kind of argument is negative--no known physical mechanism
could cause the basket, clock (or flower) to self-assemble, so asserting that it must
have done so places the evolutionist in at least the awkward position of having to
assert the existence of a mechanism even though there may be no evidence for
one.
Asimov puts his finger on the real issue, for the question here is whether we are ignorant
of the existence of the creator, or whether there are reasonably reliable evidences
for the existence of one. Science alone is inadequate to answer this question, for
the supernatural is outside its realm, and therefore creationists need more than the
existence of the clockworks to offer as scientific evidence; they need verifiable traces
of the creator's work to carry this argument. The Greeks had it right in their
theology of the unknown god. If there was a creator who operated from outside the
realm of the physical to create it, he could be known within the creation only by
revelation. On the other hand, scientists cannot use the inability of their
methodology to examine the supernatural as proof the latter does not exist. Neither
can it be forgotten that there are scientists who are also creationists, who believe in
a supernatural god-creator employing evolution as one of his mechanisms, so this
issue is not as sharply focused as some claim it to be.
Evolutionists at one time believed the entire past history of man, life, the
Earth, the solar system, and the whole cosmos could be completely explained in terms
of physical processes now being observed (this principle, in general, is called
uniformitarianism). Creationists said that in the ultimate origin of the universe there
had to be different processes, because those presently operating are incapable of
explaining the very beginning. In particular, they asserted the mathematical
impossibility of chance mutations producing better adapted life forms, rather than
simply genetic damage, with its accompanying information loss. They reject Thomas
Huxley's comparison of evolution to a large number of monkeys at typewriters
pressing keys at random, and given enough time producing the entire works of
Shakespeare--even this, they say, is clearly impossible even in all the billions of
years one might wish to grant the universe, and the probabilities of producing a
single protein by chance are far lower.
Uniformitarianism is now less often advanced as a point of argument. Faced
with the actual calculations, Huxley's descendants have abandoned the line of
argument that given enough time even the most improbable events must occur.
Evolutionists now concede physical laws must have been different in the first
few moments of the universe; indeed they claim that the processes that took place
then determined what those laws would be. One sees more reference to
catastrophe and sudden, sweeping change in their literature than to the old
gradualism. However, they reject the creationists' statistical and probabilistic
arguments as irrelevant; after all, the presence of life on Earth demands some
explanation, regardless of its apparent improbability, and they are not prepared to
give it other than a materialistic one. They point to the driving force of natural
selection, asserting that the changes in life forms are not random at all, but driven
by the need to adapt to changed environments. Creationists respond that
anatomical adaptations have to come from somewhere, and the only mechanism
proposed to this point is mutation. Changes could be selected among after they
occur (or if a suitable genetic variant were already present), but they have to
happen in the first place, and this puts the argument back into the realm of
vanishingly small probabilities.
In the 1990s, Johnson, Dembski, and others brought forth refined and updated
arguments that the complexity of natural systems shows characteristics of design,
quite apart from what one may believe about a designer. They argue, for instance,
that many systems have an irreducible complexity that could not have evolved in
stages because any one component or behaviour of the whole has no particular
survival value. Among complete complex organisms they may cite the woodpecker
and hummingbird, among large structures the elephant's trunk or giraffe's neck,
and on a smaller scale the living cell.

The Fossils

Creationists claim that the fossil record contains the strongest evidence
against evolutionary scenarios; that it shows the sudden appearance of the major
types of life in the various strata, and that it shows a complete lack of the many
transitional forms that would have had to have existed in order for a gradualist
mechanism to have ever operated. Indeed, many more lines of descent have now been
traced back to the Cambrian, now known to have had a rich diversity of complex life forms, a
fact they claim to be entirely at variance with Darwinism. They assert that the vast
numbers of fossils now catalogued do not show inter-family gradualism, and that it
is therefore safe to conclude (from a statistical argument) that still larger numbers
never will. They sometimes question the entire concept of the geologic column,
claiming that it too is a fanciful interpretation of fragmented data, and that the
indexing of fossils by rocks and of rocks by fossils constitutes circular reasoning.
They further assert that numerous examples of rock strata laid down in the "wrong"
order refute evolution absolutely.
Evolutionists are confident there are indeed abundant cases of transitional
fossils, and that the geologic column, while not existing entire in any part of the
Earth, can legitimately be pieced together by comparing similar strata from many
places. They are unconcerned about "wrong order" strata, referring to them as
"overthrusts" in most cases, and maintaining confidence that they can eventually
explain the others as well. They also claim that, in general, geology is the
creationists' weakest point, and that the latter would have to do a great deal of
work to explain what is seen in the Earth's crust consistently with any of their
current sub-models.
However, evolutionists have also now conceded the fossil record does not
support the thesis of uniformly gradual changes over millions of years via natural
selection. A number of suggestions for a replacement mechanism have been made,
chief among them the 'punctuated equilibrium' scenario. This hypothesis holds that
evolution proceeded in sudden bursts, perhaps triggered by vast ecological changes
or multiple mutations from large doses of radiation. Large changes over a short
period of time would leave few or no traces in the fossil record, and this is much
more in accordance with what is actually found. In an interesting side-development,
this theory has found much greater favour among Marxists, who have promoted it
actively because revolution fits their world view better than slow evolution.
This very willingness of evolutionists to change their theories in the light of
new data is regarded by them as a great strength, for they point to the
unwillingness of creationists to do the same. The latter maintain that changing
theories is a weakness, and that holding to revealed religious ideas is their strength.
Creationists use the difficulties with interpreting the fossils as evidence
against evolution, but their opponents are unconcerned. If one interpretation fails to
work, another can always be tried out as a model; moreover, they are confident that
the fossils will eventually provide entirely adequate proof that large scale evolution
has taken place. For their part, creationists would look on even the most complete
taxonomy of life forms, including intermediates between the various families and
orders, as nothing more than an interesting chart of similarities, not as proof that
any relationship by descent exists. Indeed, they regard such similarities as evidence
of the handiwork of a common designer. From a logical point of view, these two
conclusions are equally valid. Whether the evidence does equally support both is
another matter.

The Age of the Earth, and Related Issues

Some creationists dispute the multi-billion year time scale favoured by
evolutionists. While this group cites evidence for the young Earth it often advocates,
in such things as magnetic field decay, lunar orbit decay, and sediment deposition
(land to ocean, and space to the Earth/Moon system), almost all scientists interpret
radioactive dating methods as authoritatively yielding much greater ages.
Geologists are virtually unanimous in asserting that there is ample evidence in the
pattern of sedimentary rock formation and erosion for a very long earth history.
Creationists also have the difficulty posed by light and its speed relative to the
apparent size of the universe. They must decide whether to assert that the light
from stars was created in transit so that the universe had an apparent age, or
perhaps, as a few have, that its speed may decay over time, or that the geometry of
the universe is not actually as commonly accepted.
Asimov says of the apparent age suggestion: "Can it be that the Creator is a
cruel and malicious prankster, with a vicious and adolescent sense of humour?"
[ibid., p. 189] Morris responds: "There is no deception involved at all. As a matter of
fact, the world does not even look old, except to the distorted vision of an
evolutionist." [ICR pamphlet] It should also be noted at this point that some
creationists do accept an old earth model, and do not regard these issues as
problems. For example, among Christians, there are a variety of hypotheses to
stretch the six days of creation into a sufficient number of years to harmonize with
the evolutionary scale. However, all such compromises tend to be roundly
denounced by those at both ends of the spectrum.
Creationists claim that evolutionists have their own problems with time, and
that the data do not unequivocally support either a young or an old earth. They say
that the indicators for a young earth need to be properly explained, as must the
nature and significance of the assumptions inherent in radioactive dating methods.
They point out the problem of the missing mass--galaxies are not observed to have
enough of it to have stayed together for as long as their claimed age requires. Mainstream
cosmologists have conceded this is a problem, and are actively trying to find the
missing mass. Creationists add that there is no evident source for dust and comets
in the solar system that could account for their presence for longer than about thirty
thousand years, because such small particles are constantly being swept up by the
larger bodies. Some even question the nuclear model for the sun's energy, stating
that the observed gravitational collapse is sufficient for current energy output, and
suggesting that the predicted but not yet observed flow of neutrinos needed to
support the nuclear model will never be found, and that this too supports a young
solar system scenario. A new argument is that the existence of halos characteristic
of the decay of a very short-lived polonium isotope in granitic rocks indicates an
Earth solid from the moment of creation, rather than cooling over time from a
molten state. They also assert that rock formations are fragile, and could not bend
into the domed upthrusts that are so common except when they were first formed
and the rocks were still quite plastic, denying that layering and upthrusting needed
to take place over long periods of time.
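As a small illustration of the assumptions inherent in radiometric methods that
the previous paragraph mentions, the following minimal sketch (in Python; the
function and all sample values are hypothetical) shows how an age follows from the
decay law only after one assumes both an initial daughter concentration and a
closed system:

    # Generic age from the decay law N(t) = N0 * exp(-decay_const * t).
    # The result depends on the *assumed* initial daughter content and on the
    # assumption that the sample neither gained nor lost material -- exactly
    # the points in dispute above.
    from math import log

    def age_years(parent, daughter, half_life_years, initial_daughter=0.0):
        """Age implied by measured parent/daughter amounts (arbitrary units)."""
        decay_const = log(2) / half_life_years
        produced = daughter - initial_daughter  # daughter attributed to decay
        original_parent = parent + produced
        return log(original_parent / parent) / decay_const

    # Identical measurements, different assumed starting conditions
    # (the half-life shown is that of uranium-238):
    print(age_years(1.0, 1.0, 4.47e9))       # ~4.47e9 years if no initial daughter
    print(age_years(1.0, 1.0, 4.47e9, 0.9))  # ~6.1e8 years if 0.9 was primordial

In practice, geologists use techniques such as isochron plots to constrain the
initial conditions; the dispute summarized above is over whether such constraints
are sufficient.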
Evolutionists believe that they are on very solid ground on the age of the
Earth, despite the many interesting questions and issues connected with this. For
instance, they dismiss the problem of cometary and dust supply by hypothesizing a
cloud of such materials at some distance from the Sun, and from which material is
periodically ejected into the solar system proper.
Creationists have become somewhat reluctant to offer a specific date for
beginnings, so they have confined themselves to criticizing the specific dating
methods and assumptions of the other side. If Bible-based, they may claim that the
flood of Noah is entirely sufficient to explain the fossil record without any
assumption of age beyond a few thousand years, but this group has not until
recently worked out such an explanation in any great detail, and it is subjected to
some of the most scathing attacks on this particular issue, for it is at this point that
the lack of detailed work leaves it most vulnerable.

Life From Non-Life

A critical issue, on which there seems destined to be no end to speculation, is
the way in which life could have arisen from non-living matter in the very first
instance. Lately, this discussion has focused on the possible manufacture of life
forms from non-living chemicals in the laboratory. Some think success in this area
would be definitive evidence for an entirely mechanistic view of life. Creationists are
quick to point out that if the application of sufficient time, energy, intelligence, and
creative ability resulted in the manufacture of a living thing from non-living
components, this would instead be evidence that life was intelligently designed in
the first place.
Many possible mechanisms have been proposed to explain how this step
could have taken place in the past, based on a variety of assumptions about the
primitive Earth; but the suppositions about both the conditions and the mechanisms
are highly speculative and change with each generation of scientists, so this
discussion is unlikely to bring about any quick resolution of the debate.
Some schools of thought would shift the focus elsewhere, proposing that life,
or at least organic chemicals, evolved in outer space and that the Earth was
'seeded' in some manner from there. This avoids the problem of the non-
demonstrability of the appropriate conditions at any time in Earth history by simply
assuming that these conditions existed in some place that cannot be observed at
all. However, this suggestion is also unlikely to provide much in the way of
resolution to the argument; for it clearly begs the question. Creationists are also
accused of begging the question by assuming that life was created and did not in
any substantial way evolve. The issue here is whether either belief is an a priori
assumption, or whether it is based on reliable evidence. The problem is that even a
demonstrated proof that life as we know it could come about in a particular way
does not show that such was the path of actual history.

Human Evolution

By far the most contentious and important issue is that of the evolution of the
human race. Whatever else they may concede, Christian creationists at least must
insist upon the direct hand of God in the making of the first human beings. For
evolutionists, on the other hand, establishing a line of descent for man has always
been a kind of holy grail--if found, to be considered as the final and definitive proof
that man is the pinnacle of a series of chance processes aided by natural selection,
and nothing more. Over the years, many such lines for man have been proposed,
and even regarded as firmly established, but a number of once highly-trumpeted
ancestors have now been relegated to other roles. As for the current candidates,
both Java Man and Australopithecus, including the skeleton known as "Lucy," are
regarded by creationists, and by some others, as having been entirely ape. At any
given time, the "missing link" promoted by one group may be derided as an
irrelevant evolutionary offshoot by others with their own candidate. As with other
such topics, the evidences offered in textbooks all tend to disappear within a
decade or so, to be replaced by something else.
The difficulties involved in constructing an ancestry for the supposedly most
recently evolved creature ought to provide future interpreters of fossils with a larger
dose of the more customary scientific caution, and forestall some of the premature
rush to speculative publication that has characterized this particular field. For their
part, the creationists have the difficulty that if man came on the scene at roughly
the same time as the animals, the bones of man ought to be found with those of all
animal species. Likewise, evolutionists require many more bones to be found to
make a case for their side. Is the fact that both generally have not found the
evidence to support their positions simply a matter of interpretation, or must some
grave theoretical difficulty be acknowledged?
Nothing is being conceded by anyone on this front. One side is certain
humankind has evolved, and that at least some fossils are evidence of it; the other
is certain no such thing happened, that no catalogues of bones (or even of
comparative DNA), however arranged and sorted by human-like characteristics, can
possibly prove otherwise, and indeed that nothing that could possibly be interpreted
as a link between humans and animal primates has ever yet been found. They are
confident that none ever will be, and that they will have little difficulty debunking
evolutionists' claims to the contrary.

The Law

Conservative creationists say they are eager to debate the scientific merits of
their theories and that they wish only a fair examination of their ideas and work. To
this end, they did manage to convince a few state legislatures their theories ought
to be taught in schools alongside the majority ones. There, they attempted to make
three points: (a) that the intent of the separation clause of the First Amendment to
the U.S. Constitution was to keep state influence out of churches, not the converse,
and that recent rulings to
the contrary have amounted to an adventuresome rewriting of that document by
the courts, (b) that forbidding any mention of creation paradigms in public schools
and allowing only the teaching of evolution is a violation of the free exercise clause
of the same amendment, and (c) that if creation is to be banned from schools on the
basis of its religious or metaphysical underlay, so must evolution for precisely the
same reason.
Evolutionists counter: (a) that whatever the scope of the original intent, the
correct interpretation of the U.S. Constitution in the light of present day society is to
mandate an absolute separation of church and state, that is, a rigid secularism, (b)
that allowing any mention of religious ideas in the schools is a violation of the anti-
establishment clause of the first amendment, (c) that creationism must be banned
from the schools because it--and not evolution--is unscientific, inherently religious
however taught, and promotes sectarian religion in what must be a secular and
pluralistic society.
Eminent scientists, theologians, and lawyers lined up on both sides to testify,
but in two celebrated court decisions in Arkansas (1982) and Louisiana (1987), laws
mandating the teaching of creation as an alternative theory to evolution were ruled
unconstitutional on the grounds that they promoted entirely religious doctrines
rather than fairness between equal competing theories. The strength of the rulings
indicates that it will probably be a long time before such questions are ever again
examined in an American court of law, and there is no immediate prospect they will
be considered at all by Canadian ones. Thus, from a legal point of view at least, the
issue has been settled in favour of the majority scientific establishment. However,
law will not stop this kind of debate, for it involves abstractions competing to
explain the existence of the cosmos, life in general, and humans in particular. That
is, it is a conflict between comprehensive world views, not just on interpretations of
the law, and such a conflict will undoubtedly exist forever in some form.
Interestingly, a British Columbia court, in a case involving whether a school
board could decline to use homosexual-advocacy books in primary school
classrooms, went much farther, ruling that such public bodies could not make
decisions that were even informed, much less influenced, by religious belief. Such a
novel doctrine, if upheld, would effectively bar Christians from all public office, not
just remove their ideas from public view.

Education

Educational institutions are caught in the middle of this dispute. The religious
right claims that its forebears invented the whole concept of mass education and is
dismayed to find its influence and presence there now reduced to naught by the
ascendancy of what it calls "secular humanism". Its members regard this as a threat to
their very existence, for if their children are taught only in a hostile philosophical
setting, then their beliefs are a generation from extinction at the hands of the state.
They argue that the education of children exclusively in such theories is an infringement
on their right to hold their beliefs unhindered. For their part, the other side is not
about to relinquish any portion of its control over Western educational institutions.
They in turn regard the exposure of children to any theory other than evolution as
tantamount to the teaching of witchcraft, astrology, and superstition. They are
confident in the essential rightness of evolution and of the evidence for it, not just
as a good abstraction, but as actual physical truth. Two things seem to have
escaped notice here:
first, that the assumption of models as absolute truth has never been
historically tenable for long, and
second, that the exclusion of religiously based models is inconsistent with
holding tolerance as a high value.

Summary

Creationists accuse those they term "secular humanists" of practising a
narcissistic and exclusive tolerance--one that has room only for their views and
none for any other religious ideas. Their opponents countercharge that creationists
are censors who wish to replace good science with religious myths, and that they
must be suppressed for the good of society. Asimov: "...church and family can easily
censor printed matter or television. Only the school is beyond their control."
[Montagu, p. 191] Creationists respond that it is their views that are being censored,
not those of the evolutionists. Morris: "Unfortunately his (Asimov's) last statement is
mostly correct." [ICR pamphlet] Both sides accuse the other of having a hidden (or
not-so-hidden) agenda--that of the imposition of its own views via a program of
mind control and propaganda. Those on the extremes also excoriate or ridicule any
attempts to take a middle position, and those who do try for some intellectual
compromise are likely to be equally emotional in their views.
Both sides have found weaknesses in the other position--and though from a
logical viewpoint it makes a poor argument, the shortcomings of one side are used
as a psychological boost for the other. Occasionally, answers to criticisms have been
provided by one side in a very speculative fashion, and this practice gives great
encouragement to the other. Much of the argument is conducted in extremely
emotional and personal terms; such rhetoric could encourage the other side
because of its lack of content, but instead tends to inflame it to respond in kind.
On both sides, some old untenable positions have been abandoned. For
example, only the most outdated of books still mention "vestigial organs" or claim
that "ontogeny recapitulates phylogeny". These were two of the interesting, but
ultimately misleading speculations of the past that have now been laid to rest, a
fate no doubt to be shared by some of their successors. For their part, creationists
concede that life forms can be observed to change via natural selection, even if
they do not agree that they do so to the extent extrapolated by evolution's
supporters or that mutation is a possible source of this change. Events may also
force a realization that humankind will soon have some measure of control over
future changes to itself, something sure to enliven the debate, though proving
nothing about the past.

11.5 Learning How to Conduct Debates
Whether one thinks that such debates are about science or not, several things
can be learned from this particular one. First, new theories, or any theory that seeks
to displace the consensus view, will always be the subject of vigorous examination
and criticism. Such debates are healthy, for they are a sign that science is still alive
and growing. If established ideas are never questioned, stagnation will result and
new discoveries will cease. Second, it is important to know what is actually being
debated. Is it factual matters (data), methodologies, theories, interpretations, or
conclusions? Is it about intellectual honesty, scientific integrity, personal
qualifications, or the associations of the participants? Or, do the disputes arise
because of differing world views? If so, are they therefore irresolvable? While those
who differ sharply on such fundamental issues cannot be made to compromise on
their world views even in the long term, they could learn to live with and honour one
another. They could also learn to give and take in a vigorous and yet respectful
fashion.
In the particular case discussed here, any rapprochement between modern
science and most religions (especially conservative Christianity) would have to
involve an acknowledgment on the one hand that science is not an entirely
mechanistic process and on the other that religion does not necessarily include a
rigidly fixed view of the physical universe. If each side took a friendly though surely
disbelieving attitude toward the other, a great enrichment of both science and
religion could take place as each sought answers to the legitimate questions raised
by the other, questions that might never be asked if one of the two views absolutely
dominated. One way of achieving this might be to increase the intellectual content
and reduce the emotional aspect of the debate. Alas, since many people reading the
last section will become angry that it is even in print, believing that it gives credence
to the idea there is something to debate (when they are convinced there is not), such
a lowering of the temperature seems unlikely, to say the least. Perhaps there is a
reason why the author picked such a provocative subject.
In an open information society, there could be a realization on the part of
scientists that there is room for such debates, even with those who are not
materialists. After all, science does have a tendency to overlap into important
metaphysical territory and is not just a subset of logical positivism. From the ethical
principles of wholeness, respect for the individual, and the promoting of the
common welfare, one could conclude that there ought to be a greater tolerance of
those whose views are at variance with those of the current majority. Once again, if
a society can use the perception that a minority poses a threat in order to suppress
that minority, the society is not truly free. Pluralism and tolerance must be real and
not exclusive; they must not demand freedom exclusively for one set of views.
World view affects the science one does, and it shapes the interpretation of
data. If this were more generally realized--as it may be in the information age--all
scientific debates, whether seen as having world-view implications or not, would be
perceived as healthy, potentially enhancing knowledge and human understanding of
the universe. Such common good is an ethical goal for all seekers of truth, whatever
their belief framework.
For their part, creationists--who are usually religiously motivated--must
abandon any fear that conceding anything to the other side would destroy their
entire belief framework. Furthermore, if they expect creationism to be evaluated
seriously as a scientific study, they will need to develop testable hypotheses, for
there is likely to be little future in their attempts to have evolution labelled as a
religion, even if it does have such characteristics. They might not abandon their
fight to have the influence of evolutionary ideas reduced, but the one to have
creationism taught alongside it in science classes has apparently been lost--at least
for the foreseeable future in North America.
With these considerations in mind, the following suggestions are offered for a
potential ethical framework for participating in debates and discussions on issues
that relate to science. They may be particularly useful for those issues that are
entirely philosophical or world-view dependent, or otherwise have a high level of
personal and emotional content, but they can be applied to many other situations.

o Participants in such debates need to agree on what it is they are debating. Is
it science? If so, what are the testable hypotheses, what experiments have been
done, and where are the repeatable results? If it is not science, then let that fact be
clearly stated before the debate begins.
o Specialists who enter debates against generalists over broad world-view
related topics are at some risk, for they are likely to lack the breadth of knowledge
to function in such circumstances. They may also be too aware of the difficulties of
supporting their position from within their own speciality and will attempt to use
what they believe to be better-founded interpretations from other fields, not
realizing that those abstractions too are no more substantial than their own. Models
of the information age also suggest there will be fewer narrow specialists in the
future, and this fact alone may make such debates more productive.
o Personal attack, innuendo, and scornful remarks concerning the opponents'
world view, politics, friends, or religion should all be avoided. So should the
emotional denigration of the other side's views. Anger wins nothing in a debate
about science or theology; onlookers will carry away a negative impression of
everything that the angry person has said. Ideas that one finds unpleasant will not
go away just by shouting at them loudly enough.
o A line of argument should never be based on assumptions about an
opponent's beliefs. For instance, one who attempts to discredit a creationist by
referring to the Bible could be dismayed to discover the opponent to be Zoroastrian,
an Animist, or a believer in some creation account other than Genesis. Comments
about the irreligion of a scientist could also be quite off the mark. People often set
up a stereotype (or "straw man") of an opponent's beliefs and attack that. If sufficiently
eloquent, such a tactic may impress a gullible audience, but it too is irrelevant to
the main issues.
o Arguments from silence are useless. It cannot be assumed that someday
someone will find experimental evidence for or against a particular theory when
none exists today. Neither is the absence of such evidence relevant in the most
general terms, even if it does appear to refute a particular aspect of some theory.
For example, both evolution and creation are abstractions for describing a past
reality, and they will continue to exist in some form independently of all suggested
mechanisms or refutations thereof.
o Indefensible positions cannot be defended. When one group has seen fraud,
wishful thinking, over-confidence, bad management, or poor experimentation, it
may as well concede this when opponents point out these specific problems. If one
side has not thought out a complete explanation of part of its position, this too must
be acknowledged, and not covered up. It will become more difficult all the time to hide
anything. Good work can stand up to any amount of public scrutiny.
o Speculation is futile in a debate. The fact that it is taking place at all may be an
indication that the topic has metaphysical overtones and that it is about world view,
not science. In any case, trying to fill in the gaps in a theory with speculation only
displays more clearly its weakness both to opponents and to the audience. This is
not to suggest that speculation in general is futile, for it leads to new abstractions
and new ideas to test for theoretical consistency, coherency and practical value.
There is a tendency, however, to advance such speculation as fact, especially when
attempting to fill in the details of a cherished theory. Just as Ptolemy's geocentric
abstraction became a doctrine, so can a modern theory. No matter how cohesive,
useful, or even beautiful an abstraction about the physical world, it does not convey
the thing itself; it remains an abstraction. No matter how closely a model can be
made to conform with interpretations of the observable evidence, it is not absolute
truth. Many models for one underlying truth may be possible; far more can be
devised when not all the evidence is available.
o Arguments from consensus are invalid, even if they appear to have a facade
of authority. The fact that one group may well have a majority of living or dead scientists
on its side does not mean that it is right. Neither does it matter if the current
scientific or religious media superstars are in agreement. What does matter in
science is not how many people find a particular abstraction or explanation of the
phenomenon more satisfying to their world view, but which explanations best fit the
experimental evidence, if indeed any experiments can be done.
o The losers of a particular debate should be gracious. It is easy to claim that
one's opponents were better funded, perhaps by some shadowy conspiratorial
group, and that an innocent lamb such as oneself is no match for such sophisticated
tactics. This is really an excuse for not doing one's homework, or for tackling a
debate one lacked the knowledge or ability to prosecute.
o No one should fear such debates. The worst that can happen is that both
sides will reinforce their positions. A better outcome is that both will realize they do
not have the whole truth and all the answers, not yet. The best is that both sides
will be forced to abandon some bad science or incorrectly interpreted religion and
find better answers.
o Most important, no one should be afraid to examine ideas, regardless of how
much they may seem to be at variance with one's own views. Likewise, it ought to
be possible to be free from fear of danger or ridicule in putting one's own ideas up
to public scrutiny. Such fears prevent growth, foster prejudice and censorship, and
impair the viability of a whole culture. It may be possible for the fourth civilization to
be freer from such fears because of the openness of information, but tolerance of
others' religious or scientific views is an ethical issue and not just a matter of having
better information.

11.6 Science, Technology, and Religion in the Twentieth Century


The hold of the nineteenth century's secularist thinking models (humanism, rationalism, empiricism, materialism, progress, and evolution) on all aspects of Western society continued to grow in the twentieth as the influence of religion declined.
For instance, the doctrine of the separation of church and state began in the United
States as a way to protect churches from state interference and to prevent the
establishment of state religion. In this century, this doctrine has also been used to
eliminate from public life and action all mention of Christian ideas. A case in point
was a lower court decision in 2002 declaring that the words "under God" in the
American pledge of allegiance were unconstitutional. Although Europeans could not point to any law enshrining this doctrine (most have state churches), the same situation existed there as much as fifty years earlier.
A similar self-censorship was nearly complete in academic publications early in the century. By the 1980s, it had been successfully extended to politics, public bodies, music and art, public prayer, books, the media, entertainment, and schools. One could grow up in either Europe or North America without ever hearing or
seeing religious themes or ideas or having them positively referred to by any figure
in authority. It became possible to read newspapers and magazines and watch
television entertainment without seeing much mention of religion or religious ideas,
except in paid-for programs, promotions, and advertisements, or as negative
stereotypes in programming. Likewise, popular fiction seldom mentioned religion
except negatively. Religious ideas had largely been relegated to their own
publications, radio and television stations, and book publishers. They were not to be
seen in public, unless there was a scandal of some kind, and then they were very
public indeed.
The assumption of both the media and of intellectuals in general was that
religion had been relegated to the scrap heap of superstition and had no more
relevance for the present or the future. Television, movies, and school textbooks
came to be so thoroughly sanitized of religion that no one from outside Western
culture would ever guess from them that even a tiny percentage of their audience
even believed in God, much less was devoted to Him. Many publications would no
longer accept paid advertising from religious groups, even on social or political
issues. Voluntary school prayer groups were forbidden, and religion was virtually
banned from public life, the courts being enlisted to enforce these prohibitions and
enshrine them into law.
Meanwhile, churches and their people had refined and redefined the ways in
which their doctrines related to the physical world, in order to remain rational and
consistent. In general, this involved a pulling back from the older practice of
incorporating views of the physical world into doctrine, except to assert that God
was the ultimate source of all creation. The realm of Western religion came to be morals and relationships alone, and the academic assertion of the moral neutrality of technique was accepted and used to justify noninvolvement in science and technology, especially by conservative Christians. However, since
technique was the chief concern of life in the industrial age, this withdrawal
increasingly implied that religion had less and less to do with the everyday activities
of ordinary citizens. Even for those who wanted to go beyond nominalism, there
came to be little of relevance to be found in the churches, and it became easier to
compartmentalize life into the sacred on one day of the week and the secular for
the rest of it, or to ignore religion altogether.
The tendencies of late machine-age people to be very narrow specialists and
to pick and choose only fragments of the culture outside their specialities were also
applied to religion. So was the tendency to view ideas as commodities. The result
was an increasing fragmentation of religion itself, as people became shoppers in the delicatessen of religious ideas. Thus, people might regard themselves as associated with a church, but would select what doctrines they wanted and reject the rest.
For instance, many Catholics will accept church teachings on some theological and
moral issues, but will reject that authority on an issue like birth control. Likewise,
many Protestants will say they believe in the Bible's guidance but will actually follow
the daily horoscope (Babylonian religion forbidden in the Old Testament).
The same trend can also be seen in the tendency to form religious
organizations independent of any church body, as well as in the rise of theologies,
such as the modern charismatic movement, that have themselves become
fragments of the menu selected by members of many traditional churches. The result is that people attend the church of their choice a few times a year (Easter, Christmas) and buy its commercial services for rites of passage such as weddings, baptisms, and funerals. Otherwise, what was a day of worship is incorporated into the rest of the week, making shopping, lawn mowing, and golf more relevant Sunday activities than church attendance. Alternatively, they
require their churches to diversify and to offer a larger menu of theological choices
so that they can remain within them, even if they do not agree with their church's
traditional beliefs. Yet another approach is to incorporate diverse ideas and
activities into Church itself, transforming worship either into entertainment or a
series of self-improvement seminars. One effect of these trends has been to
produce mega-churches that can entertain masses of impersonal and uncommitted
attendees.
Even the use of technology and techniques by the churches of North America
has contributed to their separation from life, rather than promoting bonds, for such
use has tended to make them over in the image of the world their original doctrines
purported to be trying to change. Nowhere is this more evident than in the plight of
some television evangelists who rose to media stardom on a wave of adulation
through the early 1980s and suffered such scandalous falls afterward. Most began
as apparently legitimate and sincere preachers of a message, but the very nature of
the media they used replaced that message with the same kind of personality cult those media tend to generate for all their stars. The magnitude of their success destroyed the
authenticity of their message by building in them the first of the seven deadly sins,
pride--and the least Christian of all loves--that of money.
For others, the shift in message was of a different type--first accommodating
within their gospel the success philosophy, which is a different kind of materialism,
and then allowing that in turn to replace the Christian message altogether. These
writers and television personalities encourage their audience to make material
prosperity their principal goal and deliver this message with a Christian-like
vocabulary, declaring that God owes His followers health, wealth, success, and
prompt delivery upon demand. Of course, such a god is neither transcendent nor
master of anything, but the creation and servant of those making the demands--an
appropriate deity for the self-centred.
Thus, some segments of Western Christianity distanced their religion from life
to the point of meaninglessness, while others became assimilated to a materialistic
culture to the point of indistinguishability. Both of these trends may have buried the
historic Christian message and made it irrelevant to modern society. By the late
1960s, popular philosophers, some purporting still to be Christians, could well
proclaim that "God is dead." It is interesting to observe in this connection that in
both compartmentalization and assimilation there was a search for sophistication or
"relevance," in a trendy sense of the latter word. Both were historically regarded as
heretical, but now together represented the mainstream of Western Christendom. In
particular, the philosophy that equated success with a blessing bestowed by God on
the deserving righteous and material failure with His judgement on the unrighteous
was entirely at variance with both modern and historical interests of the majority--
the poor and oppressed. These had always flocked to Christianity, not to find a way
out of their problems and into material prosperity, but to find a way to transcend
and to live through them. This "gospel of sophistication" was the same one that
Western Christians scorned as "worldliness" a few generations before, and the
change demonstrates the redefinition of religion in the 1980s better than anything
else could.
The process of mutual withdrawal by science and religion into non-overlapping
spheres or magisteria is known as secularization. Among social scientists it became
an axiom that in any scientific and technical society, secularization would inevitably
proceed to the point of extinguishing all remaining traces of religious expression.
Christianity was held to be particularly vulnerable because its very rationality
accelerated the process. Up to a point, this assumption appeared to be correct, for
as long as the machine age's materialism persisted, religion fought a rear guard
action, nominalism increased, and churches continued to become either refugees
from culture or assimilated by it. Churches that accepted secularization gradually
became unable to refute anything science affirmed, and afraid to affirm anything
scientists denied or doubted. The more doctrinally conservative groups retreated
almost entirely from the intellectual world between 1850 and 1950, abandoning
both education and technique and abdicating a role in or responsibility for
defending their faith in the open marketplace of ideas. There, an optimistic and
confident science reigned supreme and unchallenged in the Western world for
decades, reaching its peak in the 1950s. However, the spiritual leadership provided
by nineteenth century scientific paradigms eventually faltered somewhat, for
several reasons.
First, the optimism of humanism was shaken by two world wars, a devastating
arms race, and numerous famines and other local political crises that demonstrated
the practical failure of the idea that humans could be autonomous from any external and absolute moral standards. It began to appear that autonomy was a myth, for it led
not to freedom but to either anarchy or the arbitrariness of totalitarianism. Too
many such tyrants had become mass murderers. The very emphasis on the
individual that had fragmented religion led people to question the validity of the
notion of collective and statistical man--summarized, probed, and laid out rationally
with all the parts on the table and open to view. The old ideas that mind might be
more than the brain began to gnaw away at humanist assumptions, and the failure
of paradise to arrive on schedule suggested that perhaps it ought after all to be
looked for outside humanity or even outside the physical world.
Second, these same failures also led to a re-examination of rationalism and
empiricism as intellectual absolutes. A variety of people came to wonder whether
the scientific technique really had all the answers, to question its ability to
find them all, and to suspect that its practitioners might not even know what the
most important questions were. A certain amount of technological cynicism
developed among ordinary citizens. Meanwhile, Gödel's work on logical systems,
Einstein's on relativity, Heisenberg's on the uncertainty principle, and indeed the
whole unsettling field of quantum mechanics had seriously eroded the late nineteenth-century scientific community's confidence in the absolute authority and completeness of its own work. There was a growing realization (in physics at least) that the best
understandings of the universe were indeed models or abstractions, and that
ultimate reality was far more elusive than had once been thought.
Third, materialism too came to be questioned, as more people began to react
to the impersonality of the machine age and to the dehumanization of the
individual. Through the 1960s and 1970s, there came to be a much greater
emphasis on the individual and on the rights of minorities. This emphasis was
reflected in a shift in government spending priorities from the military and
technological into social programs. Perhaps the benevolent state could bring
salvation even if people could not save themselves by their own efforts. Meanwhile,
the idea of the melting pot for American culture--assimilation to the traditional
majority--was abandoned, and the best society came to be seen as multicultural.
The structure of big business also began to change. Large firms tried to become (or
be seen as) social leaders and relate to people as well as to products, and many
new companies were set up as unstructured entrepreneurships.
The 1980s saw the emergence of a new materialism, based on economic
pragmatism. Everyone was seen as a potential entrepreneur out to become better
off, either economically or spiritually. Personal satisfaction and individual self-
realization to the maximum of human potential became important, and a vague
supernaturalism was attached to these new materialisms that partially contradicted
the old materialism. The spiritual and transcendental came back into vogue, with
many new religions being founded, and both Christianity and Islam began a new
period of rapid missionary-style expansion, particularly in emerging industrial
nations. In North America, however, new age mysticism rose to challenge and
compete with the rational faiths of both Christianity and scientism. It filled a void; it
was new; and it was vague enough to require no commitment and make no
demands on its followers' life-style--one could design one's own religion, and even
be one's own god.
Fourth, the notion of inevitable progress was also set back badly by the
political, military and economic "accidents" of the twentieth century--so much so
that even the greatly accelerated pace of the scientific revolution after 1945 could
not convince people to trust progress as they previously had. Some of them were
now prepared to march against it or even lie down in front of the bulldozers and
stop it. Opposition to nuclear technology, for example, eventually became
sufficiently widespread at the grass-roots level to force both governments and
corporations to cut back or cancel many programs in this area. Environmental
groups also became influential in the battle against the inevitability of progress, and
their cause had by the 1980s become not only respectable, but positively chic.
Many of the same people became protesters against globalization by the turn of the
millennium, apparently fearing that individual freedoms would disappear into the
maw of international corporate greed. The political and philosophical ecology had
changed; evolutionary-style progress was no longer a god, nor even inevitable; it
was perhaps incorrect.
There can be little doubt that the Three Mile Island and Chernobyl incidents
set back progress in nuclear power technology by years, if not permanently. Even
though nuclear fusion (an entirely different technology) might have been safer and
cheaper than other energy forms, such plants may never become a commercial
reality because of political and emotional considerations. This nucleophobia leads to
extraordinary extremes--Western nations may spend millions of times as much per
life saved in nuclear safety as they do in their relief to the Third World or on disease
control and prevention, for example. But famine, earthquake, volcanic eruption,
flood, fire, and the like can all be attributed to the acts of the impersonal nature-
god. When people die because of technology, the wounds appear self-inflicted; they
are acts of a god created by man, and are intolerable where the others are not.
Paradoxically, the family automobile is perceived as a god of a lower order of
technology, more familiar, less powerful, and less emotionally threatening than
nuclear energy, even though it destroys legions of lives each year. In addition, acid
rain attributed to industrial pollution is slowly poisoning lakes and rivers, killing
trees, and dissolving the clay feet of the god of progress. Its fall is not one easily
acknowledged; this shrine may be badly eroded, but it still receives much homage.
Fifth, the only recently elevated god of the secular and benevolent state that
could itself bring about a socialist utopia on Earth also became discredited. Not only
did the Soviet Union and its client states all fall utterly, but there came to be an
increasing sense in the West that there were limits to the extent of control that the
state should have over individuals' lives, and limits to the percentage of income that
could be extracted to ensure a social safety net. Confidence in government, only
recently risen to unprecedented heights, fell to corresponding lows, and the new
millennium's beginning finds the average citizen of most Western countries
indifferent to, cynical about, or hostile to the state, and distrustful of its leaders.
That is, the political version of evolutionary natural selection has collapsed,
because "survival of the fittest" has come to be seen as a dangerous social doctrine,
and because historical evidence is against it. Neither the Nazi superman, the
Marxist strongman, nor the liberal humanist man has proven to be better, much less
to be the end of history, as their proponents supposed. Note that there was never a logical connection between biological natural selection and survival of the fittest in the social and political sense when these ideas were in the ascendancy, and there
is still no connection when one of them becomes discredited. However, the
connection, once made in some minds, is hard to disown.
Sixth, as seen in the last section, evolution itself came under renewed attack
from a completely unexpected quarter--scientists who were also conservative
Christians. Rather than accommodate two world views by compartmentalizing life,
or by mythologizing or re-interpreting the Bible, they chose to believe it and attack
the conclusions of scientists, attempting to use science to do so.
There is a tendency among intellectuals to write off this renewed challenge to
the accepted doctrine as irrelevant, incompetent, and insignificant. It is none of
these, for it strikes at the very heart of the central issue--not creation versus
evolution, but rather what ought to be the proper sphere of influence of science and
religion. Can scientists really insist that the evolution of the universe, of life, and of humankind, as they model them, constitutes the ultimate historical reality? The new
breed of creationists say they cannot, and so offer creation as an alternate model,
with or without the Biblical account, and with or without physical evidence. Both this
new challenge, and the necessity to make periodic changes to evolutionary models
despite having previously taught them as ultimate fact, cause a credibility problem,
one that will be compounded when the big bang model for the origin of matter is
eventually dramatically revised or discarded.
For these reasons, it is gradually becoming more obvious that progress and
evolution are both abstractions or models and not ultimate reality. That is, scientism
is losing the emotional and religious-like following it has had on both points. Thus, if
the rationalism of the Christian faith was eventually to contribute to its
secularization, the same can be said to some extent for the rationalism of scientism.
Science and technique are useful for developing the means of describing and using
resources implicit in the physical world, but the descriptions and techniques so
generated are neither final answers nor ultimate reality, and may therefore be
legitimately competed with on those grounds.
The absolutism of recent scientism could in the end be as troubling to itself as
was the absolutism of the Catholic Church in the seventeenth century. If it cannot
learn to question its own most basic philosophic foundations by applying its
methodology as a meta-technique to itself, it could find itself swept aside and
replaced by new gods. This outcome is not one that humanity can afford, for
whatever its failings, science cannot be dispensed with altogether, or lived without
for very long, and neither materialists nor Christians would be very comfortable in
an anti-rational world.
At the same time that doctrines of scientism have come under challenge, the
information age has begun, and its models are replacing those of the machine age.
Although there are new machines, new techniques, and new kinds of progress at
the heart of this change, the magnitude of the break with the past ensures that
people's basic assumptions and beliefs will be up for grabs for some time. Just as
politics, economics, corporations, and society are being redefined from the
collection of fragments they have become, so are beliefs, world views,
epistemological systems, and religions. As the scientific and industrial revolutions
progressed, Christianity seemed to lose its vigour and go into decline. Likewise, as
the new civilization dawns and the basic cultural premises settle into new patterns,
the religious aspects of those cultures will also undergo transformation. What will
the gods of the information age be? Computers themselves? The Metalibrary? The
dominant monopoly providing software and hardware?
Religious change has already begun. Some churches with roots in pre-
industrial revolution days have retreated into a genteel intellectualism, abandoning
both experiential and relational aspects of their source. They have suffered sharp
declines in membership, and these appear to be accelerating. Not only are they
having increasing difficulty holding onto existing members, but also their missionary
emphasis has all but vanished, cutting off the flow of new blood and causing
substantial increases in the average age of their memberships. Like all other
organizations perceived as lacking fresh and vital contact with the current culture and
its models, they cannot survive on old glories when the culture changes a second
time. Nor might they survive on their new-found sophistication, however culturally
accommodating this may be, for by itself, this is too insubstantial, and the less
erudite readily observe that the god-emperor of sophistication not only has no
clothes, but is the enemy of piety. Such churches become like hunter-gatherers in
an industrial society--irrelevant and all but invisible.
Even the most vibrant and rapidly growing churches are barely holding their
own relative to the total North American population, and they are doing this chiefly
through migration, birth and marriage, not by bringing in previously unchurched
people. Moreover, the process of fragmentation and the delicatessenizing of
religious beliefs and practices has become advanced even in the growing churches,
and it is unclear how much of their growth represents renewed commitment to
conventional orthodoxy or how much is due to doctrinal diversification and
insubstantial cultural accommodation. In short, religion in the West is rapidly
becoming as much form as substance, and what growth is observed may well be
illusory.
But the set of beliefs about God and about moral absolutes--the transcendent
things that one calls religion--appears to be a necessary part of the experience of
being human in any culture, by the example of historical record if by nothing else. If
this need is not met (in the West) by traditional Christian denominations, and if the
answers provided by a century of scientism are no longer an entirely adequate
substitute, it becomes reasonable to ask what will fill their role in the next culture.
For, even if all the old gods are fragmented beyond repair, religion will still exist.
People will still seek a meaning for being, a foundation for knowledge, an
understanding for experience, and a basis for human and other relationships, and
they will want all these integrated into a comprehensive belief system by which to
live.
In the next chapter, the possibilities for a revival of the human spirit and of
religion will be considered, and contenders for leadership roles will be examined.

11.7 Summary and Further Discussion

Summary

Religion is a comprehensive and vital energizing force in every society. It
encompasses beliefs and the reason for being, knowledge, and a world view. It has
the power to transform behaviour through its ethical imperatives--though it is not
the only such transforming force. It can be viewed philosophically, historically,
artistically, psychologically, socially, morally, economically, or personally. Each of
these approaches contributes to the total picture of religious activity, though none alone is sufficient to explain it.
The major world religions today include Buddhism, Confucianism, Hinduism,
Shintoism, Judaism, Islam, and Christianity. Of these, some are more philosophical
and cultural, or even strictly national. The Western notion of religion may not always
be applicable to certain patriotic or cultural expressions such as Confucianism,
Hinduism, and Shintoism. Judaism, Islam, and Christianity are monotheistic and in
the same tradition, and the last two are the largest and most aggressive today.
Besides them, only Buddhism claims to be universal and has had some missionary
history. It is uncertain whether the "New Age" syncretism of fragments from
scientism, Buddhism, and Hinduism is a religion in the same sense as some of
these.
Science and the Industrial Revolution both came about in the crucible of
European Christianity, and the Protestant reformation was a critical factor in the
development of both, sharing key elements of their world views for nearly two
centuries, until the rise of scientism in the late nineteenth century coincided with a
retreat of Christianity into defensiveness and nominalism. This new world view
became progressively more important up to the mid-twentieth century when it
began to come under pressure due to internal inconsistencies, the failure of
doctrines of progress, antitechnology sentiment, some resurgence of conservative
Christianity, and the imminent passing of the industrial age.
The debate over origins was selected as representative of a typical and likely
permanent world view clash. Suggestions for debating these topics were made, and
it was asserted that such debates, properly conducted, are healthy.

Research and Discussion Questions

1. Research and discuss in detail one of the approaches to the study of religion mentioned in section one. First attempt to encompass the entire phenomenon of religion under this single heading, then discuss the extent of limitations, if any, to this approach.
2. Research at least three additional religions other than those mentioned in
the text. Summarize their principal beliefs about God, humanity, and ethics. Are
they universal? missionary? Attempt to determine the extent to which they are
cultural phenomena or philosophies rather than religions (in the more
comprehensive sense). In each case, be sure to assess their followers' prospects for
living with and utilizing machine-age and information technology.
3. Research and discuss in much greater detail than in the text the history and
beliefs of Islam, particularly as these interacted with Europe during the time of the
scientific and industrial revolutions.
4. What was the effect of the plague ("black death") on (a) the scientific and
industrial revolutions; (b) the Christian Churches in Europe?
5. What role did Irish-led monasticism during the dark ages have in the
preservation of western learning?
6. The author indicated that the printing press was one of the most important
technical innovations. Research and discuss in detail the history and effects of this
device on both religious and scientific institutions. Why is the advent of personal
desktop publishing seen to be of equal or greater importance today?
7. Some have suggested that the age of the printed word or verbally modelled
communication has passed and that the future belongs to visual communication or
images. What are some of the aesthetic, religious, social, and scientific
consequences if this is true?
8. Consider the suggestion that secularization (the separation of religion from
daily life, politics, and work) will no longer be an operating premise in the fourth
civilization, either supporting this with a detailed argument or attempting to refute
it.
9. Examples were given of the way in which institutions (scientific and
religious) can take on a life of their own and become something very different from
what they started as due to their mutual interaction with society and technology.
Research and describe this process in detail with one or two specific institutions of
any kind.
10. Write a detailed biography of one of the scientists mentioned in this
chapter, or another of your choice who made revolutionary contributions to the
development of science. Pay particular attention to his or her religious beliefs and
how these affected the science that was done. Now do the same for a religious
figure of equivalent stature, paying particular attention to his or her world view as it
related to science and technology.
11. The author argues that in the late nineteenth and early twentieth
centuries, scientism played the role and achieved the status of a religion.
Refute this argument.
12. Research and discuss some of the cosmological models for the origins of
the universe (continuous creation, big bang, and so on). Why were previous models
abandoned and the big bang generally adopted? What are weaknesses of the
theory?
13. Research and discuss paradigms for the evolution of life (spontaneous
generation, inheritance of acquired characteristics, gradual uniform natural
selection, punctuated equilibrium). What experimental or other evidence caused the
first ones to be abandoned? What is the evidence for the last?
14. What are the differences between what a creationist paradigm predicts
will be found in the fossil record and what a natural selection paradigm (both slow
and punctuated equilibrium versions) predicts will be found there?
15. A slogan once found in textbooks on animal evolution was "ontogeny
recapitulates phylogeny". Find out and explain what this means, and why it
eventually came to be criticized as misleading.
16. Obtain from the library a text on evolution (or a first year biology text)
published in the 1960s, the 1970s, the 1980s and the 1990s. Make a brief summary
of the principal arguments and evidence used in each decade for evolution, and try
to account for the differences.
17. Do an actual calculation of the probability that if a thousand monkeys were set to typing at random, one of them would, in under 100 billion years, produce all the works of Shakespeare in a contiguous piece within the text. (A rough worked sketch follows this question list.)
18. "The absence of nascent (developing) organs refutes the notion that
others are vestigial (remnants)." Either develop this argument further, or refute it.
19. Several court battles have been fought over the creation/evolution issue.
Research this history and summarize the arguments made on both sides and the
conclusions of the court in each case.
20. Research and assess the numerical extent of religious faith in the modern
world. Which is the fastest growing religion in the world as a whole? In the West? In
information-age countries? Why?
21. The author asserts that fragmentation of beliefs is a religious expression
corresponding to the phenomenon of specialization in the machine age. Research
this trend in the literature and summarize statistics supporting the conclusion that
this fragmentation has taken place. Alternatively, argue that no such fragmentation
has occurred and support that conclusion with data.
22. Give detailed examples of the Western Christian assimilation of and
redefinition by the surrounding culture.
23. "God is dead." What does this mean? Either support this thesis or refute it.
24. Read some of the creation/evolution debate literature and analyse the
quality of the arguments on both sides against the criteria in this chapter, or on the basis
of criteria you develop yourself.
25. What are additional examples, besides those in the text, of the adoption
as dogma of a fixed view or model of the universe into either science or religion?
Show how the idea came to be discarded.
26. What are UFOs? What evidence is there they are visitors from somewhere
else? What factors contribute to beliefs that they are/are not?
27. Research the topic of cold fusion. Define the term and give a brief history
of its initial discovery and the aftermath. Now, research later literature for
subsequent references. Is there anything to cold fusion? What do you think of the
initial response of the scientific community?
28. Who is Uri Geller?
29. Very recent trends (see Bibby's Restless Gods) may show an increase in
religious interest by younger North Americans. Research this suggestion and
comment on it.

Bibliography

Aardsma, Gerald E. Minisymposium on the Speed of Light--Part I: Has the Speed of Light Decayed Recently--Paper 1. Creation Research Society Quarterly 25, 1 (June 1988).
Barbour, Ian. Issues in Science and Religion. Englewood Cliffs, NJ: Prentice
Hall, 1966.
Barbour, Ian (ed.). Science and Religion--New Perspectives on the Dialogue.
New York: Harper & Row, 1968.
Bibby, Reginald W. Fragmented Gods--the Poverty and Potential of Religion in
Canada. Toronto: Irwin, 1987.
Bibby, Reginald W. Restless Gods--the Renaissance of Religion in Canada.
Toronto: Stoddart, 2002.
Berggren, W. S., and Van Couvering, John A. (eds.). Catastrophes and Earth
History--The New Uniformitarianism. Princeton, NJ: Princeton University Press, 1984.
Butterfield, H. The Origins of Modern Science 1300-1800. New York:
Macmillan, 1962.
Braswell, George W. Jr. Understanding World Religions. Nashville: Broadman,
1983.
Brophy, Donald (ed.). Science and Faith in the 21st Century. New York: Paulist
Press, 1968.
Cahn, Steven M., and Shatz, David (eds.). Contemporary Philosophy of
Religion. New York: Oxford University Press, 1982.
Chittick, Donald E. The Controversy--Roots of the Creation-Evolution Conflict.
Portland, OR: Multnomah Press, 1984.
Dannenfeldt, Karl H. The Church of the Renaissance and Reformation--Decline
and Reform from 1300 to 1600. St Louis, MO: Concordia, 1970.
Dembski, William. The Design Inference: Eliminating Chance through Small Probabilities. Cambridge: Cambridge University Press, 1998.
Dembski, William (ed.). Mere Creation: Science, Faith, & Intelligent Design. Downers Grove, IL: InterVarsity Press, 1998.
Dembski, William. Intelligent Design: The Bridge Between Science and Theology. Downers Grove, IL: InterVarsity Press, 1999.
Dembski, William, and Kushiner, James (eds.). Signs of Intelligence: Understanding Intelligent Design. Brazos Press, 2001.
Gish, Duane T. Evolution: The Fossils Say NO! San Diego: Creation-Life, 1973.
Granberg-Michaelson, Wesley (ed.). Tending the Garden--Essays on the
Gospel and the Earth. Grand Rapids, MI: Eerdmans, 1987.
Green, Ronald M. Religious Reason--The Rational and Moral Basis of Religious
Belief. New York: Oxford University Press, 1978.
Hammond, Phillip E. The Sacred in a Secular Age--Toward Revision in the
Scientific Study of Religion. Berkeley: University of California Press, 1985.
Hawkin, David J. Christ and Modernity--Christian Self-Understanding in a
Technological Age. Waterloo, Ontario: Wilfrid Laurier University Press, 1985.
Henry, Carl F. H. (ed.). Horizons of Science. San Francisco: Harper & Row,
1978.
Hillerbrand, Hans J. Men and Ideas in the Sixteenth Century. Chicago: Rand
McNally, 1969.
Hooykaas, R. Religion and the Rise of Modern Science. Grand Rapids, MI:
Eerdmans, 1972.
Hume, Robert E. The World's Living Religions. (rev. ed.) New York: Scribners,
1959.
Hummel, Charles E. The Galileo Connection--Resolving Conflicts Between
Science and the Bible. Downers Grove, IL: InterVarsity Press, 1986.
Humphreys, D. Russell. Minisymposium on the Speed of Light--Part I: Has the
Speed of Light Decayed Recently--Paper 2. Creation Research Society Quarterly 25,
1, (June 1988).
Johnson, Phillip. Evolution as Dogma: The Establishment of Naturalism. Haughton Publishing, 1990.
Johnson, Phillip. Darwin on Trial (Revised edition). Downers Grove, IL: InterVarsity Press, 1993.
Johnson, Phillip. Reason in the Balance: The Case Against Naturalism in Science, Law & Education. Downers Grove, IL: InterVarsity Press, 1995.
Johnson, Phillip. Defeating Darwinism By Opening Minds. Downers Grove, IL: InterVarsity Press, 1997.
Johnson, Phillip. Objections Sustained: Subversive Essays on Evolution, Law, & Culture. Downers Grove, IL: InterVarsity Press, 1998.
Johnson, Phillip. The Wedge of Truth--Splitting the Foundations of Naturalism. Downers Grove, IL: InterVarsity Press, 2000.
Kenyon, Kathleen M. The Bible and Recent Archaeology. Atlanta: John Knox
Press, 1978.
Knight, David. The Age of Science. New York: Blackwell, 1986.
Ling, Trevor. A History of Religion East and West. London: Macmillan, 1968.
Kuhn, Thomas S. The Structure of Scientific Revolutions. Chicago: The
University of Chicago Press, 1970.
Lund, Erik, Pihl, Mogens, and Slok, Johannes. A History of European Ideas. W.
Glyn Jones, trans. Reading, MA: Addison-Wesley, 1972.
Mason, Stephen F. A History of the Sciences. New York: Collier, 1962.
Montagu, Ashley (ed.). Science and Creationism. Oxford: Oxford University
Press, 1984.
Morris, Henry M. An Answer For Asimov. Impact #99. El Cajon, CA: Institute for Creation Research, 1981.
Morris, Henry M. (ed.). Scientific Creationism. San Diego: Creation-Life, 1974.
Omni magazine, February 1987 issue.
Owens, Virginia Stem. And the Trees Clap Their Hands--Faith, Perception, and
the New Physics. Grand Rapids MI: Eerdmans, 1983.
Quebedeau, Richard. By What Authority--The Rise of Personality Cults in
American Christianity. San Francisco: Harper & Row, 1982.
Reid, W. Stanford (ed.). The Reformation--Revival or Revolution. New York:
Holt, 1968.
Schaeffer, Francis A. How Should We Then Live--The Rise and Decline of
Western Thought and Culture. Old Tappan, NJ: Revell, 1976.
Wilder-Smith, A. E. The Creation of Life--A Cybernetic Approach to Evolution.
Wheaton, IL: Harold Shaw, 1970.
Tillyard, E. M. W. The Elizabethan World Picture. Harmondsworth, England:
Penguin, 1966.
Walsh, Brian J., and Middleton, J. Richard. The Transforming Vision--Shaping a
Christian World View. Downers Grove, IL: InterVarsity Press, 1984.
Weber, Max. The Protestant Ethic and the Spirit of Capitalism. New York:
Scribner's, 1958.
Wilson, Edward O. Consilience: The Unity of Knowledge. New York: Knopf, 1998.
Internet resources:

Leadership U. Origins. <http://www.origins.org/menus/design.html>

Chapter 12
Integration And the Fourth Civilization
Seminar - "Did We Get Anywhere?"
12.1 A Caveat Concerning Futures
12.2 The Case for Integration
12.2.1 Integration as a Process of Consilience
12.2.2 Integration to Show Concinnity
12.3 Aspects of Integration
12.4 Integration and Relationships
12.5 Integration and Society
12.6 Renaissance and Reformation in the Fourth Civilization
12.7 Missing the Mark--Some Potential Difficulties
12.8 Technique and the Fourth Civilization
12.9 Summary and Further Discussion

12.1 A Caveat Concerning Futures


This book has been relatively optimistic about the advent of the fourth
civilization--even taken some aspects of it for granted. There may be a certain
inevitability to some of the trends that have been discussed, but one should not
suppose that a kind of utopia is being suggested. Far from it. Instead, the necessary
conditions for the establishment of any civilization following the late industrial
period are being put forward, and it is obvious that these too have been filtered by a
particular world view--that of the author.
Depending on the way its etymology is taken, "utopia" can mean either the
"best place", or "no place". The fourth civilization may turn out to be some place,
some time, but it will not be the best possible society. Some of its potential is
described here, and an attempt made to show why certain trends exist and what
their outcome might be, on the whole. However, all the cautions advanced in
Chapter 1 concerning the study of history apply even more so to the study of
potential futures. The trends and predictions discussed here are just potentials; even after one of the possible futures has come into being and turned out to be different from all those forecast by every futurist, it might not be entirely clear to historians, even with the benefit of hindsight, why society changed as it did.
Thus, this chapter will pursue the integrative themes of the last few chapters,
summarizing them and attempting to tie them together in additional ways.
However, it will also present major obstacles to the kind of future foreseen herein,
and in addition provide a short list of specific problems for which the next
civilization may be interested in seeing solutions.

12.2 The Case for Integration


The possibility and availability of wholistic integration are important, and
it is worth considering these ideas apart from the specific contexts of the last two
chapters. Moreover, it is important to consider integration with respect to
relationships on both personal and societal levels. After all, general philosophy may
have its interest, and even its fascinations, but it does not, even when it deals with
ethics, tell a wholistic story.
In each of the last two chapters a four-fold model of the person in a cultural
context was presented--once to discuss the nature of learning, and once to explain
religion. It has been an important theme of this book that specialization and
compartmentalization, which of necessity characterized the machine age, will not be the dominant features of the next civilization (once it is mature), or at least that they
will be very much muted. Instead, far more people will be required by the nature of
their work to be generalists, and by the nature of society to be less private, more
open, and more consistent.
Beyond that, it is the premise here that all aspects of knowledge,
understanding, behaviour, emotion, and being are interwoven (the principle of
connectedness), even when they have not been perceived as such.
The academics of the industrial age, and in particular its latter-day
postmodernists, deconstructed that seamless whole into a series of apparently
disconnected and supposedly meaningless fragments. However, this lack of
connectivity is just an illusion fostered by the inability of any one person to organize
vast quantities of material, diverse forms of experiences, and different modes of
learning. Computing and information technology remove such limitations and
enable all forms of knowledge to be examined for interconnections.
These tasks can also be described in the definition of a term that has already
been used informally several times:

Integration is the process of decompartmentalizing, connecting, and generalizing beliefs, knowledge, experience, and relationships, thus showing that what appear to be fragments are actually a seamless whole.

The essential idea behind fourth civilization integration is that its new
paradigms and technologies enable a more wholistic approach to being, knowing,
feeling, and relating. Not only can each of these be broadened, enhanced, and de-
specialized, but they can be better interconnected with one another, or
decompartmentalized. The process is one of re-conceptualizing the knowledge as a
whole from its postmodern fragments. It is important to realize, however, that such
a result is only enabled by new technologies and new ways of thinking--it is not
guaranteed. As remarked in the sections on the use of the Metalibrary, people
would just as easily be able to tailor its facilities so as to reinforce their existing world views (filters on the body of knowledge) and thus simply ignore anyone else's. It is
also conceivable that a totalitarian regime of political or religious origin might be
able to enforce a certain world view and set of relationships on its people, at least
for a time.

12.2.1 Integration as a Process of Consilience


The argument that all knowledge has always been part of a seamless whole
waiting to be discovered or at least assembled is far from new. Neither is it unique
to this text. In his 1998 book Consilience--The Unity of Knowledge, the noted biologist Edward O. Wilson argues this point in a powerful manner.

"Most of the issues that vex humanity daily--ethnic conflict, arms escalation,
overpopulation, abortion, environment, endemic poverty, to cite several persistently
before us--cannot be solved without integrating knowledge from the natural
sciences with that of the social sciences and humanities. Only fluency across the
boundaries will provide a clear view of the world as it really is... Wilson:
Consilience--The Unity of Knowledge, p. 13

This, according to Wilson, was not only the view of the Enlightenment, it is the
only correct way to approach knowledge. He presses for an integration of the natural
sciences, social sciences, humanities, the arts, religion, and ethics, with arguments
similar to those already familiar to readers of earlier chapters of this book. Wilson
calls the empirical process of joining knowledge together by the linkage of facts and
fact-based theories "consilience," and views the ability for some fact or idea to
become a part of the whole in this specific way to be the only possible test of its
truth. To Wilson, if something is not scientifically integrable with the whole, it is not
knowledge.
Central to Wilson's thesis is that knowledge can only be unified under the
rubric of evolution, and that the empiricism of the natural scientist is the sole valid
means by which anything--including ethics--can be known. In all realms, humans
are, believe, emote, act, think, and perceive as they do, according to Wilson,
because they have evolved to do so down through the millennia in a self-organizing
and entirely autonomous fashion. They need only understand what they have
evolved into, and they will know everything else as a by-product. His faith is that the
scientific method in general, and evolutionary biology in particular, will ultimately
explain and subsume everything else.

"Once we get over the shock of discovering that the universe was not made with us in
mind, all the meaning that the brain can master, and all the emotions it can bear, and all the
shared adventure we may wish to enjoy, can be found by deciphering the hereditary
orderliness that has borne our species through geological time, and stamped it with the
residues of deep history. Reason will be advanced to new levels, and emotions played in
potentially infinite patterns. The true will be sorted from the false, and we will understand
each other very well, the more quickly because we are all of the same species and possess
biologically similar brains."
--Wilson: Consilience--The Unity of Knowledge, p. 43

Thus, Wilson's consilience is a sweepingly general process by which autonomously evolved humankind builds an understanding of evolution, and so
takes control of its own evolution. It is a rational and empirical joining of knowledge
to demonstrate truth by virtue of its unity under the rubric of evolution. Anything
that cannot be conciliated empirically is not knowledge; indeed it may not be
anything. Wilson's view could be summarized:

The process of consilience is the next step in human evolution--the cataloguing, validation,
integration and subsumption of all knowing under the empirical.

12.2.2 Integration to Show Concinnity


However compelling to the secularist, Wilson's argument for what amounts to
a revival of logical positivism hinges on some important presuppositions, none of
which are provable by empirical methods. Here are some of them:

1. That self-directing evolution is not just an organizing paradigm, but a historical fact, even though unverifiable,
2. That a transcendental creator-God not only can be dispensed with as a
hypothesis, but that Wilson can himself positively and accurately assert that no
such being exists,
3. That empiricism will eventually become capable of explaining ultimate
meaning as well as of describing and organizing some kinds of facts and theories,
even though it has never before been capable of this,
4. That the mind can be completely understood by a scientific description of
the brain, and
5. That religion itself is only an evolved internalized mechanism growing out
of the survival value of ethical behaviour.

However, an almighty God who created the universe as something other than
Himself could only be known if He chose to reveal Himself in that universe. Lacking
such action, no empirical method could uncover one who exists outside the universe
He created. Such a failure is not evidence that God does not exist; it only reflects on
the inability of the human senses to judge and measure one who transcends those
faculties by virtue of being their creator. To be sure, some of the creator's qualities
might be inferred from the physical world, but He personally could not be deduced,
reduced, described or related to without moving beyond the physical. Thus, in
promoting what amounts to an aggressive atheism as the conciliator of all
knowledge, Wilson makes himself the transcendent arbiter of ultimate truth--and in
areas not knowable by him in the exclusive manner he proposes for discovering that
truth.
Thus, there is plenty of space for an opposing view, and as in other contexts
in this textbook, the obvious candidate is design--a position that must be taken by
most Christians, and likely will be by those of many other religions as well. In this
view, knowledge is a seamless whole not because it happens to have become self-
woven with the thread of human evolution, but because it comes from an integral
God whose creation was always a well-designed integral whole, and can only be
perceived correctly as such--not in limited fragments.
In this view, it is not the process (consilience-integration) of building up the
whole from parts that is the primary focus. Integration of parts is here not the end of
an evolution of wholistic knowledge, but a means to discover and demonstrate that
all creation was from the beginning well-designed intentionally and elegantly as a
seamless whole in order to glorify the creator. This could be summarized as:

Integration is a means to discover and illustrate the comprehensive concinnity (i.e., skilful and harmonious unity, aesthetic beauty, and rational organization) of creation's design.

Thus, while "consilience" is a process of pulling (literally "jumping") together
pieces of empirical knowledge into a coherently evolved whole, "concinnity"
describes the well-designedness of creation that humankind can discover by virtue
of having been made in the image of the creator and so having the capacity of
thinking His thoughts after Him. Further, while consilience limits itself to the
empirical and has no referents external to humankind, the creator's concinnity
necessarily includes the spiritual or transcendental realm as well.

The two positions differ sharply on whether integration is an end in itself
(consilience) or a means to the end of discovering a preexisting design (concinnity).
They agree, however, that knowledge, understanding, behaviour, emotion, and
being need to be seen as inextricably interwoven, either because they were made
that way (concinnity) or because it is useful to construct them that way
(consilience).

12.3 Aspects of Integration


Setting aside for the time being the question of under which rubric one ought best, on the one hand, to attempt a consilience of all knowing (as Wilson might put it) or, on the other, to reveal the concinnity of all that exists (as this book would have it), the purpose
of the next few sections is to demonstrate specific needs for integration as a means
to piece together post-industrial fragments.

Integrating Being and Beliefs

The world views of the citizens of the next civilization will initially inherit much of the rationality, pragmatism, empiricism, and humanism of their machine age counterparts in a very straightforward manner. Because of the de-emphasis on the
need for personal factual knowledge of specialized techniques and the new
emphasis on ideas, fact finding, and evaluation, people may generally be more
aware of what they believe, of their world view, of who they are, of why they know
what they do, and of how their knowledge is related. This implies a greater
emphasis on the influence of such beliefs (who a person is) on knowledge,
experience, and relationships. It may be clearer that the kinds of things that are
said to be "known" (and what is meant when this is said) depend heavily on one's
total world view. It may be more evident that the nature of one's experience or
experimentation is a consequence of such beliefs as well. There is already a greatly
increased interest in ethics--one result of applying a set of beliefs to relationships.
Moreover, such an integration will not be confined to an intellectual elite, because
the working demands of the information age require far more people to be well-
trained and educated. It is not difficult to predict, even to discern already, a greatly
increased interest in philosophy and religion as a result of this new emphasis on the
heart of what it means to be human.

Integrating Experience

Everyone gathers data and experiments with the surrounding world on a continuous basis. These data come from the senses of touch, taste, and so on, in a
direct manner, and from relating with other people or using the communications
media in a less direct way. These data are constantly being integrated with past
experiences and emotional responses to create new reactions, relationships, and
ideas. For example, one might read about a new type of cancer, and file away the
fact of it. Later, a television documentary could provoke recollection of the earlier
fact and heighten interest. If a relative is diagnosed with the ailment, the knowledge
becomes more personal and emotional. Both what are called "hard facts" of

490
everyday living and the more hidden emotional aspects of life are a part of such
experience. The emotional and the audio-visual-tactile are both a part of the raw
data or base truths that each person experiences and remembers, and though
contrary to Wilson there is no evidence yet that the emotions can simply be
subsumed under the empirical, the two aspects do bear on each other and need to
be taken into consideration together. Here, both are taken as aspects of experience.
Data gathering (including reading and listening) also takes on new forms and
meanings in the fourth civilization because new information media, such as the
Metalibrary, enable new ways of obtaining and using knowledge, and inform the
determination of answers to the "What else is?" questions. There may be somewhat
less relative emphasis on the empirical (data from experience) and this will have to
be more fully integrated into the whole person as it becomes realized that the
rational/empirical description of humankind is insufficient to achieve a general
integration of the person. One of the interesting challenges for artificial intelligence
research would be to develop machines that can experience sight, touch, sound,
emotions, and so on. This may be almost as difficult as providing such artifacts with
the ability to comprehend or even to know in an abstract sense. It may turn out to
be the chief obstacle in developing any sense of "being" for such machines.
People will have a broader range of experiences in the fourth civilization.
Those who have employment will necessarily have more training and re-training,
more education, and more factual knowledge available to them. There will be better
communications, transportation and information access. As in the past, higher
technology will mean that there will be more sharable wealth, although this does
not mean that everyone, even in the prosperous countries, will share such benefits
equally. History would suggest that there will continue to be both rich and poor
individuals but that the general standard of living will rise, even for the poor. It
would also suggest that there will continue to be rich and poor nations, and that
people may continue to starve to death even in a more prosperous world.
In the same manner, information technology gives more access to the
experiences of very different people. Thus, there would be more sharable
experiences in the information age, and this could suggest that greater acceptance
of others might be enabled. As with the sharing of wealth, this enabling does not
mean that there will be more tolerance of other peoples, only that there will be
additional forces empowering it. If the arguments presented in this book do turn out
to be correct predictors, there will be more international cooperation as a result of
greater knowledge--not only knowledge of other peoples, but of the total
interdependence each person and nation has with every other. One also, as always,
must be careful to define what is meant by "tolerance." Here it is used in the sense
of a benign acceptance of differences, not in the aggressive sense that demands
that all beliefs be given equal credence or value and that marginalizes those who
believe they know one or more absolutes.
It is important to note that, although greater sharing of experiences promotes
greater toleration, this sharing does not in itself solve the problems of hunger,
racism, waste, pollution, and war. Actual solutions will not come easily or soon, but
the premise here is that it is likely to make more people aware of the mutual nature
of such problems and increase the desire to solve them.

Integrating Knowledge

This may be the area that changes most in the new civilization. The number of
people whose livelihoods depend on the depth of detail in their speciality knowledge
will be far smaller. Although specialities will continue to exist, the people who master
them will either be serial specialists who move on to new areas of interest after a
few years, or parallel specialists who have a wide and shifting range of related
interests. That is, there may well be even more speciality knowledge, but it will be
more diffuse insofar as individuals are concerned, for they will need to be able to
find it when needed, not know everything about a speciality all the time. Their
knowledge will also have to extend to the ethical and legal effects of their work, and
will be predicated on better organized and more efficient thinking, greater
productivity and creativity in ideas, and a much wider base of ideas and techniques
for work skills. This means that people will have broader and somewhat less focused
educations and be acquainted with a wider variety of disciplines. They will
therefore be enabled to apply insights obtained in one to solve problems in others,
and might, for instance, even look to Medieval and Renaissance scholars for insights
into how to be this way.
If the prediction of longer life spans is correct, this trend may be sharply
accentuated in the long run as knowledge workers, including academics, change
from one area of interest to another over their lifetime. The longer a life becomes,
the smaller will be the number of people who regard their current special interests,
job, or social situation as permanent, and the greater the percentage who will have
spent at least part of their professional lives engaged in what are now called
academic or scholarly pursuits. It is now taken as commonplace that most people
entering the work force in the fourth civilization will have to be prepared either to
change careers altogether several times or else re-invent their jobs continuously
throughout their working lives.
It should be evident from discussions in Chapter 1 that this scenario also
depends on large-scale transfer of goods production to machines--the second
industrial revolution--and perhaps to some extent also on the intelligence
revolution. After all, the proportion of people depending on intellectual activity for a
living can only increase if the necessities of life are provided, and the premise of
automation is that indeed they will be--by machines requiring very few human
attendants.
It seems reasonable to suppose that one consequence of this will be that the
"culture" of the scientist and technologist identified by Snow and others may
become much broader and more diffuse as it extends into other disciplines of
thought--borrowing, and changing itself as it goes. The fragments of late industrial
culture may be reunited in some strange new ways, but if a new civilization is to
rise, there must necessarily be a new intellectual cohesiveness. This scenario, which
Wilson shares, but for different reasons, is quite unlike Ellul's--instead of a
total victory of soul-less depersonalized technique, it predicts a merging of what has
been technique with the humanities, social sciences, and religion to produce new
and much more comprehensive ways of knowing.

Note, however, that new paradigms and new facilities (like the Metalibrary) only
enable such a diffusion and generalization; they do not determine its actual path.
The broad new view of knowledge presented both here and by Wilson is very
idealistic, and may turn out to be impossible for reasons as yet unforeseen, such as
wars, politics, economics, and new technological directions. Nonetheless, it seems
likely that what is regarded as knowledge in the future will be more broadly
integrated across what have been very narrow intellectual specialities. Not only
that, but the emphasis on the intellectual and scholarly will itself be more diffuse
over the general population, as people pursue learning on a less single-minded,
specialized, and full-time basis. Neither does admitting far more people to what has
often been an elite club of academics have to dilute the quality of thinking; it could
simply be the case that more adults have the time, inclination, and access to such
activities. Moreover, in an information-based society, ideas are economically
valuable, so practicality alone ought to dictate that the number of people involved
in assessing and manipulating them will increase.
An integrated view of knowledge and the intellect cannot be confined only to
its own branches, but necessarily involves other aspects of the total person as well.
Thus, one could also expect the fourth civilization would bring a greater emphasis
on the mutual effect of knowledge on beliefs and relationships, and possibly a
corresponding decrease in the relative stress placed on the specific aspect of the
relationship of knowledge to the empirical--a special emphasis characteristic of the
machine age, but one that need not be of the information one.

Integrating the Transformational and Relational

People change, and they cause other people to change because of what they
believe, how they think, and what their experiences are. Free and open access to
information suggests a possible breaking down of old relational barriers and a new
emphasis on other people. This would be in sharp contrast to the "I-it" relations of
the machine age and the specific raw "me-ism" of its latter decades. Such a shift
seems to be necessary because, while me-ism is part of basic human nature, the
continued existence of civilization is possible only if it is muted sufficiently to avoid
excessive fragmentation and achieve a high level of cooperation. If this does happen,
there will be a greatly increased emphasis on the application of ethical principles in
relationships--a shift that does seem to have begun already.
In the working out of ethical relationships, both parties are transformed, just
as in the working out of technologies knowledge is transformed. It is partly for this
reason it was suggested in an earlier chapter that the new stress on environmental
issues is not simply a flash in the pan, but will be a permanent part of the social and
intellectual landscape. More broadly, this new integration and emphasis on the
transformational and relational is intimately connected with all aspects of the
biospace revolution. The advent, to any significant degree, of artificially intelligent
artifacts would also force re-examination of the relational. Not only would the
question of who or what is a human need careful examination, but so also might, in
the extreme case, the question of how to get along with other intelligences.

In addition, for many people the effective size of the world will shrink further,
as communication technology improves. Some of the more totalitarian nations may
be forced to operate with greater openness, for their dictators will be unable to hide
their activities from the critical scrutiny of other nations. The industrial and then the
information revolutions will come to more countries, and a far greater number of
people will be enabled to have frequent contact with those from distant locations.
These forces promoting unity will not necessarily homogenize culture or create a
single "global village"; but they will at least force everyone to stress relationships
with people they would once have ignored--just to function in their jobs. Again, it is
not necessarily large social groupings and organizations such as the state,
multinational corporations, and old-time institutional churches that will benefit the
most. Rather, it is smaller social units such as local governments, entrepreneurial
organizations, local clubs and churches, and families that might have the most to
gain.
The integration of relationships does not just refer to other people and how
one's actions affect them. It refers inward as well--the relational and
transformational aspects of the individual person cannot be considered in isolation
apart from world view, knowledge, and experience. But greater personal integration
and wholeness do enable more substantial and deeper interpersonal relationships,
even though they do not lead inevitably to these.
As events in Eastern Europe in the late 1990s have shown, old hatreds, racial
and religious divisions and old xenophobic nationalisms die hard, and there will
always be those who perceive an advantage in the fanning of such flames anew.
Even though such behaviour is manifestly not in the long-term self-interest of either
individuals or nations, it may still take place. Indeed, one could argue that it is
precisely new and close contacts with different cultures--especially very similar
close neighbours--that lead to divisions and wars. That this is especially so among
old enemies that have only been kept apart by fear of a greater one is now obvious
throughout Eastern Europe. The most difficult thing to do with a relationship is to
get it past the stage where little knowledge and much misunderstanding are
dangerous things and onto the one where there is enough understanding to attempt
conflict resolutions with a reasonable chance of success.

The New Renaissance Person

The admittedly idealistic portrait painted here is that of a fourth civilization
people who think and act in greater harmony with themselves and with others. It is
for this reason that religion may rise in importance--such a radical integration as
suggested here forces people to re-examine and re-apply the meaning questions:
"Who is God?" "Who am I?" "Why am I here?" "What can I know?" "What does my
experience mean?" and "How then shall I live and relate to other people?" Such a
re-examination also supposes that the answers are rationally and coherently
integrated by a whole people into a whole culture.
However, the kind of integrated equilibrium suggested for this idealistic new
Renaissance person is far from static. After all, the principal feature of the fourth
civilization will be rapid, continual, and substantial change in economics, politics,
technique, and ideas. Therefore, any new equilibrium of being, thinking,
experiencing, and relating will necessarily be a dynamic one as well. This provides
yet another reason to predict a rise of religion and of moral absolutism along with
it--people will need "being anchors" to give them a definite sense of who they are
and how they can go about integrating the other three in a rapidly changing
environment. After all, there comes a point where being "in process of change"
must be replaced with a sense of having arrived at someone definite, even if this
does turn out only to be a temporary stop. Furthermore, moral relativism was one of
the fragment-promoters of the late industrial age; it does not fit an integrative
paradigm very well.
Such anchors are most useful if they are non-contextual; that is, if they take
their meaning and substance from outside knowledge, experience, and
relationships. In other words, they are at least in part religious, for they need to rest
on that which is external to all of life in order to provide a meaning for life.
Such a radical re-integration of what had in the machine age become discrete
compartments and fragments of life, intellect, and religion would force a re-
invention of education, government, economics, corporations, family life, and
society in general. The first four of these have been considered in some detail in
earlier chapters, but there are some issues in family life and personal relationships
that have not previously been mentioned, and it is with some of these that the next
section is concerned.

12.4 Integration and Relationships


To this point, most of the discussions in this book have focused on the big
issues--knowledge, philosophy, technique, and large institutions. One of the four
areas of life involved in the radical integration of people is the relational, about
which little has been said in detail thus far.
Personal relationships can be thought of as taking place on a variety of
levels--from those with the whole society and culture to those with other individuals.
It is the latter that give relationships their most direct sense of meaning, generating
both experience and knowledge of those other persons. However philosophical and
general one may become about relationships, they are actually conducted one at a
time and in small groups. Specifically, they consist of a number of one-on-one pairs
of personal and individual interactions that collectively form a group gestalt of an
ever widening, more dilute, and less personal nature. Thus, it is appropriate now to
give brief consideration to some of the more personal relational issues that grow out
of the new paradigms and techniques and that themselves form aspects of the new
civilization.

Men and Women

In some hunter-gatherer societies women were little more than possessions.
Survival depended on the strength of one's arm or on those of one's allies or mate.
In many cases, a strong man could take as many women as he could provide food
and security for. Those physically less able--including women--had nothing to say
about the matter--they could often be forced into submission, or simply killed.
Relationships could be harsh, hard, and even brutal--but so was much of life.
Agrarian societies have generally had a more substantial role for women
because they depend on animal power and some machines to enhance strength and
rely less on personal brute force--at least on the part of farmers themselves.
However, such societies have shortages and trade disputes, and therefore armies
and wars. With these there often came an organized exploitation of those
considered weak. Slaveries of whole nations or "races" became possible or even
economically advantageous. In such an atmosphere the ability of women to become
economic and political partners with men was severely limited. Along with all others
who could be overcome by force, they were often still regarded as little more than
chattels, and had few opportunities to break out of such a pattern. Matriarchies, or
even the occasional woman monarch, were rare exceptions to the universal rule and
authority of men, and this was in large part because such societies still depended
on physical strength, even though for different things. All but the most rudimentary
of training and education were unavailable to women in most manifestations of the
second civilization, and they had few economic or other influences to use in order to
change their lot.
In the industrial society, the picture changed dramatically. As more of the
labour was transferred to machines, there was less need for physical strength to
provide for daily sustenance. It gradually became evident that women could run
machines as efficiently and effectively as men, and during times of war they did just
that. Since the machines required a more educated work force, it also became
obvious that women were as intellectually capable as men, so the educational and
economic barriers to their full participation in society began to crumble, though
centuries of tradition ensured that this process was slow.
Moreover, in the West, where the industrial and scientific revolutions took
place, Christianity was the dominant religious force. After a long self-struggle, it
found it already had a paradigm for the essential equality of the sexes--at least
before God ("There is neither Jew nor Greek, slave nor free, male nor female, for you
are all one in Christ Jesus." Galatians 3:28). This idea eventually broke through into
the broader society, where it ultimately became a political and social equality that
went even further than many church groups desired, or thought doctrinally correct.
Passages in the New Testament about the leadership role of men were cited by
some as reason not to have equality in the broader society. But most Christians
argued that, whether the original intent of these was doctrinal or cultural, they
applied at most to Church and family, and had nothing to do with the workplace or
politics.
These discussions became moot, however, for women joined the workplace in
force starting as early as World War II, and this accelerated to record numbers in
the 1970s and 1980s, so that the tradition of their staying at home became a
cultural fragment, even among the conservatively religious. This change was not
brought about solely by a different view of work. Much of it was economic--it
became nearly impossible to support a family on a single income. Some reversal of
this trend in the 1990s has not made much of an impact on the way most people
think about work. Another reason why women gained economic power in this period
is that widows rather than first-born sons came to be the inheritors, and since most
women survived their mates, family monies passed into their hands at peak size.
The great economic story of the late industrial age years was the way that
thriving Western economies absorbed both the post-war baby boom and the
simultaneous sharp increase in female participation in the work force. In North
America, the majority of women came to take working for granted.
In the late industrial age, men for the first time had no advantages due to
their greater physical strength. In the information age, what counts in most
occupations is intellectual and integrative ability. In this context, newcomers to the
work force have great advantages, for they are not set in traditional ways of doing
things--something often seen with new immigrants, for instance. Thus, even though
women may be concentrated in areas such as secretarial, service, and middle
management that are vulnerable to automation, they are also more mobile and
flexible--not because of their gender, but because many are relatively new to the
market. Those who are already working may therefore be somewhat less subject to
long-term unemployment, more aggressive about retraining and redeploying themselves,
and more able to survive radical change. They may also overtrain and overcompensate
if they believe themselves to be making up for what they perceive to be past
injustices.
Enrollment trends in medical, dental, and law schools by the late nineties
indicated that the equality of numbers already reached among teachers
would ultimately exist in these professions as well. Also, as Western women
continue to reduce both the number of children they have and the amount of time they
devote to raising them, salary inequities caused by childbearing dropout diminish. It
is not difficult to predict that as these relative newcomers work their way to the top
of their professions, there will continue to be many more women chief executive
officers, judges, hospital administrators, and politicians. It seems possible that
within a few decades it will be as likely to have a woman president, prime minister,
premier, or governor in most Western nations as it will be to have a man in such
roles.
On the other hand, many women who were not inclined or educated to make
their own way received a harsh introduction to the realities of sexual "equality." For
a substantial percentage, this has meant being abandoned, along with their children,
by men who developed other interests. Increasingly, it meant leaving men who
drank too much, assaulted them, abused their children, promiscuously pursued
other women, or did all four. Indeed, this became one of the major social problems
of the 1980s and 1990s, for such women suffer a sharp decline in living standards,
while their former partners often enjoy an increase in disposable income. Because
women in the twenty-five to forty-five-year age bracket were at that point the
product of families and schools that operated on industrial age assumptions, they
often had little education, less work experience, and no marketable skills, or, if they
had any of these, they were long out of date. As a result, they had to live either on
social assistance or on the income from very low-paying jobs in which they were the
most vulnerable to unemployment. They usually had young children to care for, and
could not both work and do this themselves, nor could they afford to pay someone
else to do it for them.

They were the victims both of the "sexual revolution" and of the trends to
fragmentation and individualization that conspired together to cause many to throw
off the former constraints of religion and custom and instead regard sexual liaisons
as a passing thing rather than as part of a permanent commitment. In such an
atmosphere, there was no underlying motivation to hold marriage together once
passion had cooled, and many people became uninterested in trying. Too many
women became like their men's used cars--traded back into the marketplace as
soon as models perceived to be more glamorous became available. The culture did
not help, for it had exploited women as decorations in the service of selling a variety
of consumer goods, and the disillusionment that resulted when marriage stripped
away the veneer and revealed a real person was too much for many to take.
If anything, the idea of autonomy and independence for all women promoted
by the radical feminist movement has exacerbated the problem. Such independence
may be a worthwhile goal for some, but there were many women who neither
wanted nor were prepared for such independence in the industrial age, and instead
found it thrust upon them at an awkward stage of life. Moreover, hyper-aggressive
anti-male feminism has probably generated some costly backlash that has made
things worse. Indeed, it is not even clear that the gains women made in the
marketplace had anything to do with such political movements, or whether both
were independent results of more fundamental economic and cultural forces. In any
event, women newly and involuntarily made heads of households have few
opportunities to retrain and enter the job market on the same terms as less
constrained and more highly educated younger women.
The overall result of these gender shifts has been a re-definition of the entire
workplace, with some women aggressively moving into the middle and higher end
of the labour market to claim equal pay and status with men while others were
forced into the lower end because their traditional male bread winners left them to
fend for themselves. In this new milieu, a young woman of high school age can no
longer expect to be able to find a lifelong male provider; she has to assume the
responsibility of carving out her own niche in the marketplace. She therefore has to
make personal learning and career choices as an individual in the same way that
men have always done in the past, and quite independent of any considerations of
marriage--even if she does not immediately enter the work force on graduation from
school or university. Schooling becomes a necessity, as does the choice of the
"right" courses. For example, women cannot afford to give in to social or other
pressures and avoid the hard subjects such as mathematics, physics, and
computing science if they expect to play the economic game in the future on an
equal basis with men.
On a more personal level, the days when one or the other partner could
dominate a marriage relationship, make all the money and the decisions, and
determine the entire family agenda, had passed in most Western families by the
time the information age was underway. In the future, women might afford so little
time from their careers for childbearing, and not much more for child raising, that
the impact of their sex on their careers will be very much reduced. It may, however,
take an entire generation to rescue the victims of this transition from the ghettos
into which they have been forced by making what turned out to be the wrong
learning choices and mating assumptions. Much of a generation has been caught off
guard by a sudden change in course, unprepared for independence and equality;
their daughters are unlikely to rely on the same assumptions.
These changes have severely strained men as well, for they have been
shaken out of their traditional roles in order to accommodate women moving in.
Some men may feel threatened by the prospect of reporting to a woman, or of
being married to one who commands a higher salary--perhaps high enough to
relegate the man to the housekeeping role, or at least to that of follower when his wife
is transferred to another city. At the close of the industrial age, there was an
increasing fragmentation of male/female relationships; fewer men and women were
selecting long-term commitment or faithfulness as a way of life, and more of them
were either dropping out of marriage or never entering it. In this casualization of
relationships, they were supported by various pop philosophy versions of relativism
and hedonism, by the sexual revolution's inversion of morality, by an increasingly
accommodating and largely irrelevant religious climate, and by the corresponding
fragmentation of the once monolithic society around them. Much less support
existed for the old-time and long-term stability of commitment, so it was no longer
being followed as much as in the past. At the same time, the wide gap between
male and female life expectancies, coupled with the tendency of past decades of
men to marry younger women, would have ensured that most women would spend
many years alone, as widows if not as divorcees.
What is more, there are early information age factors that have tended to
exacerbate male/female tensions. In its early days, the computer was largely a male
preserve, a toy for the men who were already dominating mathematics, science and
engineering, and who once would have had model trains, collected stamps, or built
furniture for a hobby--none of which many women did. Now they bought home
computers, and isolated themselves from their families even more with their new
pastime than with their old. There was also initially a perceived appeal in
programming work to a certain kind of pioneering male who once might have been
seen as a misfit and dropped out of school or even society altogether, but who could
now legitimately fall in love with a predictable, safe, and impersonal machine. At the
same time, he could master it, dictate to it, and make it perform extraordinary
feats--all without any concern for its potential to damage his ego when something
went wrong. It would never question his manhood, care about his faithfulness,
criticize, complain, use sarcasm as a weapon, or fight at all, much less fight dirty.
It was some of these very social misfits who were the most successful early
programmers, and who eventually had sufficient money from their efforts that
everyone listened to them, so that they could define for others how they would fit
in. For a variety of social, cultural, and economic reasons, this was a path to success
that attracted few women, and almost none took it. The result was that men
dominated the computing and information sciences at first even more than they had
some of the traditional ones.
However, programming is now a sufficiently complex and mature technique to
be studied as an academic discipline rather than learned entirely on one's own, so
there is a path into the heart of the industry that is more conventional, and at least
some women are taking it. They are also entering high-technology realms as users,
trainers, managers, editors, executives, sellers, designers, and manual writers--all
areas in which a highly skilled and specialized programmer is often severely lacking.
Thus, as the industry matures, women are finding excellent positions, and often
ending up in management--more than making up for being latecomers. There can
be little doubt that there are far greater opportunities for educated women in the
new industries than in the old, because in the new it may be more possible for them
to compete on equal terms without having to be concerned with stereotypes or with
an entrenched male-dominated establishment. Moreover, they are in a job-seeker's
market, because sufficiently knowledgeable and skilled people for the
information industry are in ever-increasing demand, and will be for some time.
There are even opportunities for women to work at home and manage their families
without a man, although telecommuting is not yet as important a factor in the job
market as it may become.
Thus, there continue to be strong forces acting to promote equal participation
of women in the new marketplaces, despite the setbacks many of them have had
due to the dissolution of large numbers of marriages, and despite lingering
reluctance to study mathematics. The new civilization will be the age of non-
specialized ideas and information, and physical strength without intellectual ability
and education will be a severe disadvantage. It is even possible that women may
come to dominate men, rather than be their subjects as in the past. While this
suggestion may seem far-fetched, history would seem to suggest that when a large
group of people who have had to survive on their wits is suddenly freed from
bondage, an extreme swing of the power pendulum may well take place. If it did, it
would counter other trends toward egalitarianism, but it is during transitional
periods such as the coming of the fourth civilization that just such startling and
unpredictable exceptions take place. After all, cultural fragments can be assembled in a variety of
ways; the "best" is not necessarily discovered first. Indeed, it is those who first
successfully assemble the fragments into a cohesive cultural whole who thereby
acquire the power to define their way as having been best all along.
Many men, especially younger ones who have grown up in a more egalitarian
society, will adapt well to the changes in sexual roles. Others will react very
negatively, and one also cannot rule out the possibility of a substantial backlash
against successful women. Some men, finding their physical skills no longer in
demand, and ill-equipped to face a new culture, could retreat into defeatism, random
violence, and alcoholism. Indeed, if the "macho-male" culture suffers a substantial
defeat in the face of new cultural patterns, the result for many could be similar to a
military defeat by a superior civilization--total despair. Judging by certain historical
parallels, it could take at least several generations to effect either a recovery from
the cultural shock or assimilation by the new one.
It is interesting to note in this context that social institutions such as schools,
and parental and peer pressure still promote an effective gender segregation. Most
boys and girls still grow up in two non-intersecting worlds, with their own friends,
toys, activities, and values. They live these separate worlds side by side in the elementary school but do
not really meet until their teenage years, when a different set of the same kind of
pressures rushes them into sexual intimacy before they even know who they are.
That is, the forces favouring sexual equality are economic, not yet educational, and
certainly not yet very culturally rooted. They are therefore fragile, and are
vulnerable to being diverted to a variety of possible extremes.
It is also worthwhile to cite the despecializing and integrative themes of the
fourth civilization in connection with the trend to establish local and personal
relationships. Thus, whereas university students in the sixties had their causes, and
in the seventies their self-fulfillment, those of the late eighties and nineties opted
either for financial security or for friendship as the highest value and most
important goal. Because men and women often still grow up in sexual solitudes,
such friendships are mainly of the same-sex type, but the openness of the fourth
civilization and its new stress on personal relationships may well lead to the
breaking down of some of the cultural barriers to opposite-sex friendships. Broad
networks of friends will gradually become more important in both business and
politics, and will be the principal basis for personal loyalties and professional
partnerships, much more so than obedience to hierarchical command structures.
Similar points might be cited, taking a contrarian view, to predict an increase in
the number of permanent, monogamous marriages, and there is some indication
that this may be taking place. However, such commitments in the future will be
partnerships of essential equals as the old gender solitudes give way to more open
and less private but deeper relationships. On the other hand, the advent of any
substantial increase in life spans will be a severe test for marriage, perhaps more so
than was the so-called sexual revolution, for under such circumstances a lifetime
commitment is a far more serious matter, because it must be maintained over a
much longer time. In such a context, and from a little distance, it might almost be
as tempting to predict the virtual demise of such marriages, but the suggestion here
is that commitments, because they are integrative, will become more important,
and that they will overwhelm the other factors.
Thus, there are social and economic factors that may promote sexual equality
over the long term. There are also factors that are exacerbating some of the old
tensions, some that are creating new ones, and some promoting new harmonies. It
may be a couple of generations at least before things settle down, and a new kind
of male/female relationship is established. There is also the possibility that some
new extreme may become the norm for a time.
It is also important to realize that not all women find the social trends
described here to be to their liking. Some want to work as home managers and feel
uncomfortable with, or even antagonistic to, the messages that seek to persuade
them otherwise. They wonder if there is a place in the new world for wives and
mothers, or whether they will be forced into a marketplace they have no desire to
enter, and they feel that the apostles of feminism are too busy advancing their own
careers to hear dissenting voices. Their voices too will shape the new society, and in
some ways may prove more powerful than some of the economic factors.

Children

Much of what has been said so far might be construed as suggesting that
children are endangered and that the traditional family may turn out to be a thing of
the past. To a great extent, this may well be true, and there are a variety of reasons
for it. While families were the centre and mainstay of community life in both hunter-
gatherer and agrarian societies--because of the need for cooperative action to
survive--they were only one component of industrial society, which took its shape on
the basis of other organizations and from work, rather than on the basis of personal
relationships. The industrial society has ended in a highly specialized and
fragmented state, with the traditional family seeming to play a much reduced role,
and many observers claiming that it no longer need constitute the basic societal
framework. The extent to which this perception is true is somewhat doubtful;
although there are certainly far more single-parent and otherwise non-traditional
family units, the family still exercises an important influence on society. There can
be little doubt, however, that this influence has been in a period of perceived
decline.
Moreover, economic and population pressures in an information society cause
extreme downward pressure on the birthrate, and so does the fact that a mother
can no longer assume that there will be a father on the scene for very much of her
potential children's lives. These factors, coupled with what in effect is free abortion
on demand in many Western countries, are causing a dramatic reduction in the
relative number of children, especially in the cities. Prolongation of life much
beyond the present seventy to eighty years is likely to increase this pressure, even
though it could extend the number of possible childbearing years. Even the post-war
"baby boom" was only a temporary wavering above the trend line of births, which
has been falling steadily throughout the entire industrial age. There is no reason to
suppose that in the fourth civilization there will be anything but cyclical variations in
this trend, despite the continuing high birthrates in agrarian societies elsewhere.
Indeed, even those reflect a lag behind a declining death rate, which can confidently
be expected to be followed by a corresponding decline in births, judging from the
Western experience.
One could argue that growing up has itself become more difficult, for far fewer
children in North America can now count on the presence or the attention of even
one parent for any substantial portion of their developmental years. In such
families, children are frequently left to fend for themselves at baby-sitters', at
school, and at soccer clubs and other activities--out of the sight and hearing of their
parent(s). Starting at a very early age, they make their own decisions, pick their
own friends and activities, buy their own clothes and music, and live their own
independent lives without much traditional parental oversight. The one parent, or
both where applicable, works all day and there is much less energy left for children
than there once would have been. These children have little guidance, and lack
much opportunity to learn by example or teaching about traditional familial,
cultural, and religious values--for in all these things too they are left on their own to
make individual decisions. While some claim this produces better decision makers,
this practice also contributes to an increased fragmentation of culture and society,
and helps to ensure that the only value passed on by example is that of a ceaseless
striving for self-satisfaction and economic success.

The building of a social fabric begins with young children; if this is simply
abandoned there ought not to be any surprise when they reach adulthood and have
no commitment to the values and laws of the society they find. Even if parents
could transmit 75% of their ideas to the next generation, and it has 25% new ones
of its own, it would take only five or six generations before the surviving original
transmission constituted less than the new ideas of the current generation. But, if the value
transmittal is less than 50 percent, the world view of parents is potentially
overturned with every new generation.
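These figures can be checked with a simple compounding model, sketched below in
Python. The model and its counting convention are illustrative assumptions rather
than part of the original argument: each generation is taken to retain a fixed
fraction of the previous generation's world view and to originate the remainder itself.

    # A minimal sketch (hypothetical compounding model): each generation
    # retains a fixed fraction `rate` of the previous generation's world
    # view and adds the remainder as new ideas of its own.
    def generations_until_overturned(rate: float, threshold: float) -> int:
        """First generation at which the surviving share of the original
        world view falls below the given threshold."""
        share, generation = 1.0, 0
        while share >= threshold:
            share *= rate       # compound the loss in transmission
            generation += 1
        return generation

    # 75% transmission versus the 25% of new ideas each generation holds:
    print(generations_until_overturned(0.75, 0.25))   # -> 5
    # Below 50% transmission, the parents' view is outweighed at once:
    print(generations_until_overturned(0.45, 0.55))   # -> 1

Under this reading, 0.75 raised to the fifth power is roughly 0.24, so the crossover
arrives within five or six generations, while a transmission rate below one half is
outweighed in a single generation.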
On the other hand, if each decade or so changes a very small percentage of
the population because of low birth and mortality rates, the progression of change
through society could be slow. Moreover, low birth rates could mean a high value for
children and more attention paid to them, and this might increase the rate of
culture transmission. Thus, despite recent fragmentation of the family, it may yet
turn out to be an important institution of the fourth civilization. After all, there are
not many other candidates for the role of basic societal building block.

Relationships and Sexual Behavior

As observed in Chapter 3, it is not certain how much of the "sexual revolution"
was real and how much was simply more publicity for activities that had always
been undertaken. It can scarcely be doubted, however, that public perceptions have
an effect on behaviour. The encouragement of the media, peers, and social leaders
to throw off old norms and experiment with sex has brought about a substantial
change in the way the most intimate of relationships is viewed and practised.
Rather than being seen as a way to express commitment in a permanent,
monogamous and heterosexual marriage, sexual relations have come to be as
fragmented as other relationships. In at least their public and media image, they became
temporary, promiscuous, casual, and indiscriminate as to gender. Except to suggest
a possible partial reversal of this trend, fourth civilization paradigms have little to
say about such matters, and they would be best considered as moral issues.
However, there are social and medical consequences to the casualization of
sexuality that cannot be ignored. Apart from the obvious connection--not
necessarily cause and effect--to the decline of the family, society's most pressing
concern is now with the spread of disease by such means. There have always been
STDs (sexually transmitted diseases), but old nemeses such as gonorrhea and
syphilis had been thought to have been relegated by drugs to the role of mere
inconveniences. Genital herpes might in time have changed that view, but, though
as yet incurable, it was at least not life-threatening. Even the very widespread
bacterium Chlamydia trachomatis caused only a few cases of blindness, and its
more common effect--infertility--was much less publicized.
The arrival of AIDS changed the situation permanently, however, because
casual sexual behaviour can now result in death. Not only is no cure in sight, there
is little to hold back the grisly symptoms of a wasting, painful, and inevitable
demise. Although health authorities were at first confident that only injected blood
or male homosexual sex would cause transmission, unsterilized needles, blood
transfusions, and even routine dental work are all likely to continue as factors in
the global spread of AIDS for some time, for proper syringe sterilization, much less
routine blood screening, has not yet come to many countries. Moreover, it is now
apparent that heterosexual sex also spreads AIDS, though less rapidly.
Moreover, the early recommendations that condoms were sufficient
preventatives are now being hedged in the light of their well-known poor
performance even as birth control devices, for they break or leak in up to ten
percent of uses. An individual who engages in sex on a casual basis in a population
with, say, a 5% infection rate faces even odds of having been exposed to infection
after about 13.5 sexual encounters. The same individual who consistently employs
condoms--which break or leak 10% of the time--reaches the same odds only after
about 138 encounters. Indeed, virus particles can go right through the pores in latex rubber,
and few doctors feel safe from infection even when double gloved. This analysis of
course does not apply to non-random encounters, for if one's sole sexual partner is
not and does not become infected, no number of encounters will expose one. Thus,
although the use of condoms may slow down the spread of the disease, it does not
stop it, and the higher the percentage of the population infected and the greater the
degree of sexual adventurism, the faster the disease will proliferate even in places
where needles and drugs are clean and condoms are used. In other places, it
spreads unhindered, threatening to sharply reduce the population in some regions
of the world. The "safe sex" campaigns are not just misleadingly named; they have
tended to encourage casual sexuality and minimize consciousness of risk, while
increasing actual risk.
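The 13.5 and 138 figures quoted above can be reproduced by treating each encounter
as an independent trial and asking when the cumulative probability of at least one
exposure first reaches one half. The Python sketch below makes that assumption
explicit; the function name and the independence assumption are illustrative, not
from the original text. (The mean number of encounters to exposure in such a model
would be higher still: 20 and 200 respectively.)

    import math

    # A minimal sketch (assuming independent encounters): the number n of
    # encounters at which the cumulative chance of at least one exposure
    # reaches 50%, i.e. the n solving 1 - (1 - p)^n = 0.5.
    def encounters_to_even_odds(p_exposure: float) -> float:
        return math.log(0.5) / math.log(1.0 - p_exposure)

    print(encounters_to_even_odds(0.05))         # ~13.5: 5% infection rate, no condom
    print(encounters_to_even_odds(0.05 * 0.10))  # ~138: condom failing 10% of the time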
Thus, there is a potent pragmatic reason to change actual sexual behaviour--
the fear of death. Society as a whole also has a powerful incentive to stop the
spread of AIDS and to find a treatment, for even the number of individuals infected
by the late 1990s had the potential to bankrupt the entire Western medical system
within another decade. These considerations could trigger a dramatic change in the
practice of sexual relations--possibly a swing of the pendulum far to a conservative
or even repressive extreme. Among church groups there are already large numbers of
young men and women who have made public vows of chastity until marriage.
At the very least, relationships are likely to be more cautious than in the
recent past and the trend toward the integrating of relational fragments may
reinforce a behavioural change. The economic result of such a change could be bad
news for the operators of brothels and the owners of sex boutiques, but any move to
greater sexual equality and less relational fragmentation would put additional
pressures on these industries anyway, sharply reducing their presence and
influence even without the AIDS scare. Countering this suggestion, however, is that
the marketers of pornography have made their business a major world industry,
especially on the Internet, and show every indication of gaining ground.

Summary

Nothing is certain about any of the trends in personal relationships discussed
in this section, and other factors could intervene to render some of the points made
here entirely moot. However, when viewed on a historical basis, the casual and
fragmented state of relationships of recent years appears as somewhat of an
anomaly; one could be on relatively safe ground in forecasting radical changes for
this reason alone. The two greatest relational challenges will be to re-invent the lost
art of friendships and to break out of the gender solitudes. A society that fails on
both counts may still form the fourth civilization, but it will be nonetheless
impoverished for remaining fragmented.

12.5 Integration and Society


Integrating the fragments to build a new civilization will be neither a simple
nor an easy task. The new will be built partly from remnants of the old, but it will
have to have a new social glue, a new world view, new technology, and a new
ethical consensus. This section will contain a summary of how some of the ideas
throughout the book relate to this integrative theme.

Fragmentation and Dependence

The theme of dependence has been an important one in this book. As
indicated in Chapter 1, the course of history depends on the ethics and technology
of the people it describes. Indeed, the very notion of society assumes human
interdependency. All actions are part of chains of consequence--they have effects,
and none of them is isolated--everyone's depend on everyone else's. The same is
true of technology--everything is connected to everything else. Thus in sum, as in
Chapter 1:

It is impossible to be, to know, to experience or to change one thing.

The recognition of this principle is at the root of building of new cultures, for
whatever its diversity, a society consists of the commonality of interests and mutual
dependencies that have been agreed to by the people involved--even in the
inherently unstable situations where this agreement has been temporarily secured
by force. In the difficult years of transition when an old society is brought face-to-
face with its successor, the old inevitably fragments. This has been pronounced in
the current transition because of two additional factors: First, rapid technological
change has coincided with pronounced shifts in the ethical and spiritual foundations
of Western culture, and these have promoted equally rapid social changes. Second,
the specialization that was so important to the industrial age tended to encourage
people to fragment their life roles into discrete compartments. One result has been
to place great stress on the individual and reduce the emphasis on duties and
responsibilities to society as a whole.
This book has attempted to make the case that, as a new world view takes
hold, new bonds of dependency must emerge and that these will give structural
form to the successor civilization. The new will still depend on machines, on
techniques, and on speciality knowledge, but these will exist and operate in the
background of society, rather than by being its sum and substance. People will tend
to the needs of other people for more of their time than they spend watching over
machines, and they will concentrate more on ideas and somewhat less on
technique.

Two trends may affect the speed at which cultural fragments of the machine
age are assembled into a new form. First, reaction to perceived extremes of
fragmentation may promote even more rapid integration--in the same contrarian
manner that the stock market often reacts after a too-sharp increase or decrease in
prices. Such perceptions of concern may include those of the breakdown of old
moral codes and religions, an increased crime rate, the isolation of individuals from
each other and from government, emphasis on rights to the exclusion of duty and
responsibilities, and the focus on individuals at the expense of society. The extent to
which such beliefs are true matters less than that people become uneasy with social
instability and seek a new and more solid social framework in which to think, work
and relate to others. Chaos--even the mistaken perception of it--is simply not stable.
An opposing trend is the continuing rapid pace of technological change, which
may contribute to the propagation of new technological and social fragments on an
ongoing basis. As a result, it may be necessary for the next society to have
continuing dynamic integration as an important operating principle, at least for a
time. There is also the possibility, in those nations that are already highly
industrialized, that the momentum of the pattern of fragment creation in late-
industrial society will result in an "overshoot" into wide-scale social disintegration,
aborting the coming of the fourth civilization as described here, or shifting its locus
to other parts of the world. The potential for such "overshoot" will be examined
more carefully in section 6 of this chapter. It is therefore conceivable that the
nations best equipped to successfully implement an information age integration
may be those that do not have a long industrial history accompanied by a
debilitating load of cultural fragments. In other words, China or some other nation
building from scratch may inherit the future, rather than the West.

Redefining Culture

This last observation applies to all aspects of the machine age culture--they
must be re-integrated into a cohesive unity, in order for it to be meaningful to speak
of the advent of a fourth civilization. This means, for example, that the academic
disciplines must cross-fertilize and begin to integrate. It means that the population
as a whole must have better access to and better opportunities to participate in
what have in the past been called "academic" activities, and that the intellectual
elitism of the past will continue to diminish. It therefore means that there must be a
better and more broadly educated population. It implies a renaissance in the arts,
new interests in the social sciences and humanities, a more human face on science
and technique, a new ethical consensus, and likely a revived general interest in
underpinning religious ideas. It means coming to terms with a dynamic view of the
physical world and a rapprochement even between science and religion. More
broadly, it means gaining the ability to assess ideas, however strange they may
initially seem. After all, if certain ideas, disciplines, or doctrines are fit only for
intellectual ghettos, it would not be long before the people who expressed them
found themselves likewise isolated. In short, integration takes place as
interdependence is recognized and a whole culture is pieced together, a whole
civilization is built.
In particular, the conundrums faced as the fourth civilization begins are so
large, so pressing, and so comprehensive, that humankind can no longer afford the
luxury of not using cross-disciplinary approaches in their solution. Environmental
problems, for instance, require scientific, economic, social, and political solutions
that make sense not in isolation from each other--which is where their practitioners
worked in the machine age--but in unified concert. Snow's "two cultures" have to
cross boundaries and become one--not by way of a takeover by the sciences as
Wilson would have it, but as a true partnership of equals.
What is true in the academic sphere is true of the world as a whole. Its
peoples live on a single planet, and are interdependent whether they care to be or
not. Environmental, social, economic, and political problems know no boundaries;
their solutions cannot either. This is not to suggest a homogenization of all the
world's disparate cultures; it is rather to point out the necessity of their cooperating
more than competing. The fourth civilization is global in scope; people and cultural
groups who attempt to stay entirely their own course and keep others at bay will
either fail in the attempt or find themselves shut out of the new era.

Hierarchicalism and Networking

The nature of relationships is also different in a more open and unified, less
individualistic and fragmented culture. In all of the first three civilizations,
relationships were principally conducted through a hierarchy. The hierarchical
family, which one could argue was shaped by the prevailing technological
environment, served as a model for all of society. Thus, religious institutions,
governments, and eventually corporations all were structured in hierarchical fashion
and had a command pyramid. One individual was the leader in each of these
organizations or in each sub-cell, and had a variety of assistants, each with their
departments and authorities. Most people in the organization or larger society were
a part of the led, the commanded, the governed, or the workers, and their function
was to obey, to follow and to accomplish the tasks given them.
The fourth civilization's focus on the dependency of people upon each other and
on the importance of their relationships with each other increases the value of what
each person contributes to the commonality. This is antithetical to a hierarchical
command structure in most activities of the culture. Interconnections of information,
technique and relationships are too complex to maintain under a simple hierarchy,
and the technology is available to handle them in several dimensions
simultaneously, not just vertically. The appropriate model for a culture in which
barriers are dismantled, privacy is lessened, and fragments are integrated, is an
informed network of partnerships in work and relationships, rather than an inflexible
chain of command and obedience. There will of course always be leaders, but
effective leadership in the fourth civilization will be more by enlightened consent,
and much less by absolute fiat.
Thus, the suggestion was made in Chapter 9 that representative democracy
might ultimately give way to participatory forms--on the local level first, but with the
potential for extension to a worldwide scope eventually. Other organizations now
having an extensive hierarchy, such as some churches, may find their influence
declining rapidly unless they devolve the decision-making process to their members.
More specifically, they will have to realize and respond to the fact that fourth
civilization organizations will be the people who make them up; the organizations
will neither command nor necessarily employ those people. Decision making and accountability in
all areas will become much broader and more mutual; in most organizations people
will be mutually responsible rather than hierarchically accountable. Such a model
could be termed networked accountability and the term implies that each person
has direct and mutual responsibility for the success of the total enterprise, precisely
because they are the enterprise.
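To make the contrast concrete, here is a minimal sketch in Python (the names
and the tiny example data are invented for illustration; the book proposes no such
formalism). It represents a hierarchy as a tree, in which each member answers to
exactly one superior, and networked accountability as a graph, in which
responsibility links run between peers in many directions at once:

    # Hierarchical accountability: each member reports to exactly one superior,
    # so the whole structure is a tree with a single apex.
    hierarchy = {
        "worker_a": "manager",
        "worker_b": "manager",
        "manager": "director",
        "director": None,  # apex of the command pyramid
    }

    # Networked accountability: mutual, many-to-many responsibility links,
    # so the structure is a graph with no privileged vertical direction.
    network = {
        "worker_a": {"worker_b", "manager"},
        "worker_b": {"worker_a", "director"},
        "manager": {"worker_a", "director"},
        "director": {"worker_b", "manager"},
    }

    def accountable_to(person):
        """In the tree, one superior; in the graph, a set of peers."""
        return hierarchy[person], network[person]

    print(accountable_to("worker_a"))  # ('manager', {...}) -- set order may vary

The structural point is the one made above: severing the single link above a
member of the tree cuts that member off entirely, whereas the graph tolerates the
loss of any one link, so no single line of command is indispensable.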
In a hierarchical system, those at the lower and wider end of the pyramid
have only to do as they are told. Even if they wish to know about policy, decision
making, and the management objectives and goals of the total enterprise,
management/politicians/religious leaders can frustrate this desire by keeping
necessary information from the lower ranks. Authority flows one way and
accountability flows the other. Absolute monarchies, feudal systems, and their
surviving relatives--modern communist states, third-world police states, and
religious theocracies--epitomize this in the political realm, and the closely held and
managed corporation does in the economic one.
A networked model for government would surely imply a participatory
democracy with minimal formal government apparatus, as the functions of
providing society with an infrastructure would be broadly diffused among many
small professionally owned and operated enterprises. These would have a local
scope and focus in most cases. Relationships with distant places could be handled
by agents on the scene and by long distance communications rather than by large
numbers of political appointees and diplomats.
Recent moves by some Western governments such as those of England and
Canada to privatize many of the corporations they have built or acquired could
therefore be just the beginning of a trend that could also see a massive
localization of many previously centralized services. There is resistance to this both
from statists and from those who perceive the apparent fragmentation of
hierarchical structures as the end of the process rather than as a means to
reassemble them on a diffusely networked basis. There will also be opposition from
vested interests such as unions, whose leaders will try in vain to dissuade their
members from participating in such activities, say, by purchasing their workplace.
The opposition will be particularly strong where the enterprise has been a part of
the industrial age centralized state, and it is necessary to move it to the
entrepreneurial private sector, for the opponents will then attempt to arouse public
passions against the inevitable by wrapping themselves in patriotism to make their
pronouncements. As the example of the former Soviet Union shows, the process of
diffuse privatization can also be taken hostage by organized criminals along the
way, temporarily halting any benefit to the common people.
Similar trends can be seen in churches, which, though slower than business to
respond to social change, are much faster than government. Centralized and tightly
governed denominations are giving way to fellowships of like-minded
congregationally-governed groups having a support staff, but no particular authority
in their central offices. Independent community churches are already thriving,
particularly those that are stressing the relational and acting as social change
agents because of their beliefs. Among Christians, the idea of the "priesthood of all
believers" is making yet another comeback, as part of a new conception of the
essential spiritual equality of all humans under the sole authority of God.

Re-inventing the Corporation

A 1985 best-seller of this name by Naisbitt and Aburdene detailed changes
that were even by that time already taking place in the work force and tried to
project these forward into the next millennium. The trends they detailed were based
not only on information age paradigms, but also on their perception that the new
industries already in place would have a higher demand for workers than could
easily be satisfied, once women and the baby-boomers were absorbed into the
labour market. They foresaw dramatic labour shortages, with future workers being
in such demand they would be able to write their own tickets with respect to job
security, working hours and location, fringe benefit packages, retraining and other
terms of employment. Some of the key trends identified either by them or in this
book, and that have already begun to reshape the corporate environment are:

o the trend away from large corporations toward small entrepreneurships,
o the increase in participation of women at management levels,
o the corporate provision of contracted child care, medical and dental
programs, health and fitness facilities and counselling services--in order to attract
and retain workers,
o the new emphasis on company loyalty and involvement in the decision
making process,
o the new commitment to extensive retraining and job satisfaction,
o the provision of flexible hours and benefit packages,
o the extension of ownership to the employees in recognition that they are
the enterprise,
o the merging of university and corporate interests as each grows more like
the other, and
o the opening, broadening, networking, and general demystifying of
management.

Many of these themes have been touched upon in earlier chapters of this
book. Some of them have already been widely implemented; others have not
become important as yet. For instance, workplace demographic change has been
slower than they expected; in the mid 1990s jobs were still being lost in sunset
industries about as fast as they were being created in new ones, and shortages did
not appear even in the computing and information sectors until after 1997.
However, in general, changes in corporations may be the most prophetic of all
institutional ones, because companies are forced to respond rapidly even to slight
changes in the economy. Thus the forces that are destroying hierarchicalism and
promoting networked models are most advanced in their case and their
consequences are good indications for what is in store for the slower-moving
institutions of society.
The average corporation of the future seems likely to be smaller and largely
worker-owned, and to have few or no job descriptions. Existing
owner/worker/partners will decide when to add new professionals to their teams,
and these will be expected to responsibly and quickly contribute to the joint
interests. A typical worker will, like Heinlein's human being (see section 10.3), both
take and give orders--and likely to and from the same people at the same level of
authority at the same time, rather than in a chain of command.
On the larger scale, big projects will be undertaken by networked corporations
whose relationships are put together by high-tech engineering/consulting firms.
Such meta-specialists in the technique of assembling techniques will play an
important role on the corporate scene, for the number of very large corporate
players with a permanent identity will be quite small, and the formation of strategic
and often temporary alliances among smaller companies will be much more
important. An individual professional can become a team player in these big-league
enterprises because it is already possible for anyone to maintain a seven
day/twenty-four hour office accessible to the entire world, and do so without
secretarial assistance.

Rejoining Ethics with Technology

A networked model of mutual responsibilities within and between corporations
and in personal relationships also implies a larger emphasis on ethical conduct.
Since fourth civilization society is postulated to be built on relationships rather than
on job descriptions, and on integration rather than on fragments, a high degree of
dependability and predictability in those relationships is essential. While a
relativistic ethic might be temporarily compatible with a specialist, role-playing, and
fragmented society, the fourth civilization demands a cohesive and reliable absolute
ethic, because it will be built on a rather diffuse network of relationships. People will
be once again counted upon for who they are, and not just for the way in which they
can play a specific role in one compartment of their fragmented lives.
To put things another way, a fast-paced, rapidly changing and relation-
dependent civilization cannot afford to feel its way tentatively through a minefield
of potential civil and criminal lawsuits over its every decision or action. Things are
happening too fast for that. By the time a major lawsuit can be settled, the
elements of the dispute are rendered irrelevant by the passage of months. The
professionals one deals with in one's own networked corporation will have a mutual
economic interest and community that will work to reduce such hazards internally.
They will also need to have enforceable codes of ethics so that their behaviour can
be depended upon by the owner/workers of the other corporations who do business
with them, either professionally, or as consumers. While using computers may make
it easier than it has been to produce comprehensive and iron-clad legal documents
governing such undertakings, and even to produce quick decisions on
disagreements, a networked model must depend far more on the a priori
assumption of ethically reliable behaviour than it does on a posteriori threats of
legal action over non-performance. To put it another way, the costs of lawsuits
cannot be allowed to continue growing, or they will swallow the entire world's
production.
In a professional trust environment, there is little time for and too much to be
lost by legal action; means will be found to avoid the litigious death-embrace by
ensuring that conditions leading to it do not arise in the first place. It may also
become more likely for a partner/worker who has acted unethically and damaged
another company--even a competitor--to be bought out by the other partners and
decertified by the professional body than for the injured parties to sue for damages.
While there would still be criminal activities, the use of the courts to settle
contractual disputes between professionals would, in this scenario, decline
dramatically. In sum, there will be practical economic and professional reasons for
ethical behaviour; its rewards will promote it, even if no intrinsic value is perceived.
This ideal for professional ethics is based on pragmatism, honesty, full
disclosure, truth in advertising and the concept that a person's word is as good as a
bond. Mutual, rather than competitive advantage is important in such an
environment. After all, the competitor on one project is a potential partner on
another. The assumption will not be that a professional knows everything about the
field of current practice, but that such a person is equipped to find out what is
necessary quickly and efficiently, and can effectively integrate it into the whole
project at hand because of a wide-ranging knowledge and the ability to grasp
interdisciplinary work as a whole. Such a professional would do so not with an eye to
exclusive personal advantage, but to the successful completion of the total
enterprise and to the mutual benefit not only of the partnership but, to some extent,
of the society as a whole.
At least, such seems to be an indicated outcome of the networked society.
Human nature being what it is, there are likely to be many who will want to make a
living subverting this model rather than contributing to it. However, in doing so,
they would be generating fragments, not a viable integration. The latter requires a
bond of trust on the part of those integrating.
To such ends, an important part of every education will be an in-depth study
of ethics--both in general terms, and in the ways that such principles relate to
specific professions. Throughout the nineties, early versions of such courses were
already proliferating at many universities, particularly in schools of business and of
computing science, where there had often previously been the most difficulties. It is
important to note however, that such courses are of little use if they are designed
only to raise sensitivity levels on ethical questions. Professionals having to make
real choices in real situations are little served if they are only aware that ethical
considerations happen to be involved. They get little from reading numerous case
studies if they have no ethical apparatus to employ in drawing conclusions and
modifying their own behaviour. Instead, they need to use specific, generally reliable,
and probably hierarchically ordered guidelines in order to function predictably. If
such cannot be achieved, the fourth civilization as described here will not come
about, for the necessity for professionals to rely upon each other in such respects
will dictate that no lesser course can be followed.

Integration and Education

As indicated in Chapter 10, the school system will also have to change to
encompass the new social and economic paradigms. For the grade school, this
means a new stress on communications skills so as to prepare students to enter a
world where these are essential to earning a living. High school graduates will
already be skilled in the finding of information in electronic databases, and in
integrating and recasting this into new forms for their benefit and that of others.
Universities will have three major challenges. The first will be to achieve an
integration of their own. This means a new emphasis on beliefs, ideas, emotions,
and relationships, and a new integration of these with the study of technique,
including that of politics, economics, and the hard sciences. Their graduates will
need to know who they are, what they believe and why, and be able to place an
intellectual foundation under their experience of technique, for all this will be
expected of them as leaders in the fourth civilization.
The second challenge, related to the first, will be a shift in strategy when
dealing with ideas. It has become the practice to do so only by asking questions.
But, while a questioning and healthily doubting attitude is necessary for intellectual
maturity, questions alone too often constitute a destructive criticism that leaves the
student empty and cynical. Since the imperative of the fourth civilization will be the
finding of practical answers, students will need to be taught how to do so. Thus,
universities must become places that teach not just how to question, but how to find
answers, and having found them, how to integrate them into the total person and
society.
The third will be to learn how to deal with the information revolution itself. The
existence of the electronic Metalibrary implies the availability of information, and of
courses on any subject a student might wish. By the late 1990s there were already
virtual universities on the Internet, some of them sponsored by old and prestigious
institutions. It remains to be seen whether the "bricks-and-mortar" college can
compete with one that has no travel expenses or dormitory fees, but can buy
interactive lectures from the best teachers in the world. In other words, can the
traditional university survive?

Integration and Computing Technology

Computing technology is the driving force behind the changes to the
institutions, professions, and other disciplines. It is re-shaping society at least as
profoundly as the industrial revolution did in its day. It has become essential for
decision making, knowledge acquisition and representation, and for the functioning
of communications and commerce. This technology is, therefore, by its very nature,
interdisciplinary, and its proper study and application require broadly educated and
experienced practitioners.

The Global Society

The ethical consensus of the industrial age was based on a belief in one's
place in society. Church (if applicable), job, family, and state all delineated one or
more socially approved roles that were played out by the majority of people. As that
age closes, these roles have been compartmentalized and fragmented, and they
now await a new integration that will provide a basis for the next civilization. That
such integration is necessary for a coherently identifiable civilization is taken here
as axiomatic--continuing fragmentation can surely lead only to chaos and anarchy.
Some might regard networked relational models as anarchistic by comparison with
hierarchical ones, but they are not. Rather, they are different models for different
times, hierarchicalism having served its purpose and run its course.
The suggestions made for modifying democracy will also seem troubling to
some, but if the basic presupposition that free people ought to govern themselves is
still to be taken as absolute, then form must inevitably follow function and bring
change to the structure of modern government. Most troubling of all to many will
surely be the idea that religion will not only revive but also become a major spiritual
energizing force of the fourth civilization. History would suggest, however, that this
must be so; indeed it may be the safest prediction of all those in this book. The
question of where and when this will come together most effectively and what
people will lead the information society has been left open, however, for the best
confluence of all the integrating forces is extremely difficult to forecast.
This matter of potential revival of the human spirit and of religion is an
important leadership question that was posed in Chapter 11 and has yet to be
considered in detail; the next section is devoted to doing so.

12.6 Renaissance and Reformation in the Fourth Civilization


It has been noted already that information paradigms will have a tendency to
break down the rigid barriers that delineated narrow specialities and compartments
of the industrial age. At the same time, people will have more time to consider
ideas and will be better educated to do so; they will also be better equipped to
communicate their own ideas to a wide variety of people. The Metalibrary could break
down some of the authoritarianism and insularity inherent in the present academic
system and throw open the examination and discussion of ideas to a far wider
constituency than ever before. The result is likely to be as great a turmoil, and as
intellectually stimulating an atmosphere as that which resulted from the infusion of
Arabic and Chinese ideas into Europe after the fourteenth century.

A New Renaissance

It is not therefore difficult to dip into history and predict a time of great
flowering of the arts and an infusion of new life into the collective human spirit. In
such an atmosphere, new forms of music are being generated, many of them
synergistically with new types of machines. Indeed, most music is now composed
with synthesizers driven by computers. The flexibility of this form of composition
means that music is no longer limited to "real" instruments or even to major
variations on the traditional ones. Any sound can be generated electronically,
regardless of whether there has ever been a stringed or wind device that could
duplicate it (a minimal sketch of this idea follows this paragraph). The result is an
ever-broadening experimentation with sound effects in a search for new styles of
music. This activity may be so intense and so diverse that music appears to be
chaotic, but some forms will surely seem more aesthetically pleasing than others
and will be added to the traditional ones or modify them. It should be
noted, however, that much of this experimentation thus far is individualistic; for the
large symphony, there may be considerable difficulty attracting customers. In fact,
the better the home entertainment facilities offered by the Metalibrary, the more
difficult it could become for theaters and concert halls to attract customers to live
performances. A few of the very best orchestras may come to dominate the
"sensorium" that television becomes, making things very difficult for smaller groups.
It may even become possible to synthesize an orchestra of choice (both sight and
sound) from stored recordings of individual musicians--without physically
assembling them. On the other hand, there will be a demand for social events to
offset extreme individualization prevalent in the late industrial age, and this could
translate into renewed life for venerable old artistic institutions at a local and
community level. For instance, theatre chains have been quick to seize upon such
trends and reconceptualize large movie houses as complexes of smaller, more
intimate entertainment experiences.
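As a minimal illustration of the claim above that any sound whatever can be
generated electronically, the following sketch (Python standard library only; the
function name and the particular partial frequencies are invented for illustration)
additively mixes sine waves into a timbre corresponding to no physical instrument
and writes the result to a sound file:

    import math
    import struct
    import wave

    def synthesize(partials, seconds=1.0, rate=44100):
        """Additively mix sine partials, given as (frequency_hz, amplitude)
        pairs, into one waveform normalized to the range [-1, 1]."""
        n = int(seconds * rate)
        samples = [sum(a * math.sin(2 * math.pi * f * t / rate)
                       for f, a in partials) for t in range(n)]
        peak = max(abs(s) for s in samples) or 1.0
        return [s / peak for s in samples]

    # An inharmonic spectrum that no string or air column would produce.
    tone = synthesize([(440, 1.0), (567, 0.6), (1310, 0.3)])
    with wave.open("tone.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(44100)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in tone))

Because the partials here are arbitrary numbers rather than the overtone series of
a vibrating string or air column, the resulting sound has no acoustic counterpart--
precisely the freedom the paragraph above describes.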
New multimedia art forms will continue to proliferate, particularly as the
resolution of video and graphics displays improves, and holographic image storage
and retrieval become refined. The full Metalibrary could be an artistic medium of
major importance, for works could be rented from it for display in individual homes
and offices. More adventuresome artists might perhaps experiment with sculptures
in orbit that would be visible from the Earth's surface, or attempt to transform large
landmasses in an artistic fashion for viewing from space. All this soft high-touch
counteraction to hard high technology may also encourage the dance medium both
to revive ancient forms and to produce new ones.
Electronic publishing will continue to make it much easier to express oneself
in writing as well. It is more difficult for a new writer to gain a conventional mass
audience than in the past, for there is far more material for editors to consider.
However, electronic distribution will become simpler and the collection of royalties
more efficient. Publishing houses can work through the Metalibrary facilities to
conduct the editing and approval (publishing) functions and collect a royalty for
their efforts on the author's behalf--but in the electronic medium at least it is the
artist who is the cornerstone of the whole endeavour, not the editor. Printed paper
editions may still be made on a one-off or print-on-demand (POD) basis, but perhaps
more for collectors than for the general trade.
The visual media will be much improved as well, so drama could take on new
life. While live theater will probably not vanish altogether, the quality and quantity
of dramatic presentation available through the Metalibrary will be very much
greater than that on today's television, and this too will mean many more
opportunities for writers, producers, media technicians, and actors. The number of
channels available is increasing in order to provide a greater diversity of material,
and this could reduce the current reliance on the old networks, on formula writing,
and on the faddism that have dominated television in the past. What is said above
about the potential for live music performances to become important social events
applies here as well. This will not necessarily be manifested in better opportunities
for well-known national and international troupes; it may simply mean that more
people participate in and watch local and amateur theater. This would also imply a
continuing growth in the number and variety of athletic events and team sports, as
available time and money for such activities also increase.
It was noted earlier that a new civilization needs a new economics and
politics. It will also need a new sociology, a new psychology, a new anthropology,
and a new way to integrate these disciplines to gain insights from all of them and
bring their ideas to bear on the problems of people. Although the beginning of this
new Renaissance of the arts, fine arts, humanities, and social sciences is only just
being seen, the intellectual turmoil and excitement will surely be as great as in the
first Renaissance. For the first time in over a century, ideas about the human
condition, knowledge, thinking, and the human spirit are available to be freely and
widely discussed in a cross-disciplinary fashion. It would be far too speculative to
suggest what specifics might emerge from this ferment, but not at all speculative to
predict that it will continue to take place, for such activity is a necessary part of
reconciling the academic world with the new culture and developing a new world
view to match the new civilization. The intellectual fragments of the machine age
will surely be carefully and thoughtfully picked over before being pieced into a new
coherence.
The duration of the new Renaissance is also uncertain. Even though an
important feature of the new civilization will be continuing rapid changes in
economic, social, and political systems, history suggests that its peoples' world
views will crystallize on a new set of paradigms, and that once this has been done,
the culture will view itself through that new filter and stabilize around it. These are
conflicting suggestions, to be sure; perhaps a way will be found to make rapid
change and healthy conflict part of a new world view.
An important part of such a world view is a meaning framework, or religious
system. If there will be a great increase in interest in ideas, a more integrative world
view, and a renaissance of the human spirit, then there will be a new view of
religion as well. There are, at this point, three major groups of candidates to capture
the religious loyalties of the men and women of the fourth civilization; these will be
considered in the remainder of this section.

A New Religion

There is no shortage of new candidates to move into the vacuum left by
Christianity's retreat and scientism's shortcomings, and to claim the heart-
allegiances of information-oriented peoples. More than one pundit has remarked
that starting a new religion is a good way to become rich, famous, or both. Some
have gone on to do so. New groups that have attracted the greatest attention thus
far have been those with mystical overtones and connections to the religions of the
Far East. The leaders of such groups have not been hesitant to give themselves the
titles of guru, blessed teacher, or holy one, or to claim to be Christ re-incarnate and
God himself. Apart from devotion to the group--love manifest in a variety of ways--
and to the leader, the teachings of such groups tend to be theologically and morally
vague. There is not always agreement, even among their members, on whether
such systems are religions. For instance, on the one hand, L. Ron Hubbard's
Scientology group lost a court case whose decision stripped it of tax-exempt church
status, and on the other, Transcendental Meditationists have sometimes had their
teachings declared in court to be religious against their will and so forbidden from
public schools. Again, not all followers of the very diverse group known as the "new-
age movement" would characterize their belief system as religious, despite its
Hindu mysticism and reincarnation, its pantheism and its search for the God-in-the-
self, in the manner of Buddhism. It is a blend of all these with elements of scientism
in a mixture that appeals to the intellectual who wishes to engage in spiritual
experimentation without accepting the accountability demanded by a personally
theistic religion like Christianity. It also seems to be highly commercialized, and
constitutes an excellent example of a modern religion as a marketplace commodity,
for many of its entrepreneurs have garnered great wealth by manufacturing new
religious ideas and re-packaging old ones--not as whole systems, but in typical late-
industrial age specialized fragments for the modern shopper at the religious market.
Such movements are a natural outgrowth of the specialization and
fragmentation of the late machine age peoples. They are the ultimate in designer
philosophy, for each individual can create a personal religion with whatever gods
are desired--self included. This makes such Hindu/Buddhist mysticism an ideal
religious form for an individualistic time. It also fits in well with some elements of
scientism such as humanism and materialism, though it conflicts with others by
being somewhat fatalistic, and indifferent to both progress and evolution. But
mysticism is most sharply antithetical to both science and Christianity in rejecting
the rationality that they share, and in treating experience as an end in
itself when they both regard it as a means to other ends. Thus, it is difficult to
assess whether the "new age" movements in the West are only late-industrial
fragments, whether they constitute the nascent form of a significant new religion
that, when matured, integrated, and organized, will become a major force in the next
civilization, or whether they will simply be incorporated into new versions of
both science and Christianity.
Indeed, the latter has happened to at least some extent, as many ostensible
Christians have shopped the marketplace of ideas and put together a stew of ideas
that bear little resemblance to any taught in the several churches they attend, as
the need and whim strike them.
Of the new movements that have formally organized themselves as religions,
few have thus far sustained memberships of more than a hundred thousand at any
one time, and fewer still have survived long after the death of their founder. Among
the groups using some Christian vocabulary, if not its doctrines, and that are
sometimes termed "cults", there have been some major scandals involving tax
avoidance, the sexual proclivities of leaders, and of course the best known of
several mass suicide/murder pacts, such as the Jonestown massacre. These have
taken their toll on some of the new religions, and it is not clear whether any of
them, even the new-age groups, have a sufficient following to substantially
influence the next civilization.
The reason why some of them have failed to make a great impact is the same
as that for the decline of statism. They are often arbitrary and autocratic, and are
run for the benefit of charismatic individuals who convince people that salvation
entails following and serving those superiors. The information age being what it is, it is
impossible for leaders to hide either the wealth they thereby accumulate, or the
inconsistencies between their lives and their teachings. It is also harder for followers
to fail to notice the radical doctrinal changes commonly made by arbitrary leaders.
Thus the new religions, especially those of the very exposed religious entertainment
industry, often have a very high turnover, for people make and keep long term
commitments in the information age only to those organizations in which they can
be a part of the decision-making process, and within which they can be kept fully
informed. Such commitments also imply a consistency in their agreement with the
religion that is hard to achieve in an arbitrary organization.
It is a telling commentary on the current fragmented state of religious belief
and practice that the diverse and unorganized New Age movement has been more
successful than many of the other new groups that have organized themselves.
Were one to suppose that cultural fragmentation were to continue, it would be the
best contender. However, the supposition here is that it will not and that integrative
paradigms will rule instead. If so, a more cohesive religious view may turn out to be
more successful.
In the interim and out of the fragments, there is also a potential opening for a
new fanaticism based on some of the old hatreds to arise, and certain European
Fascist political movements must in this context be viewed with great alarm.
Religious-like allegiances are sufficiently open as the millennium approaches that a
suitably charismatic leader could well lead many captive once again into the
darkness of racial and religious hatred, or even into a new large-scale war. Even though
this kind of gathering of the fragments might seem counterproductive and
contradictory to information age paradigms, the religious vacuum does exist, and
could well be occupied by such passions. This result too, however unlikely it may
seem, would constitute a new religion, or at least a new manifestation of an old one.
What is more, universal information in this case means that such religio-political
movements can spread rapidly and unpredictably, and are therefore even more
dangerous. They can brook neither critics nor competitors, whether those critics
bear the name "North American" or "scientist" or "Christian". It is also of little
utility to enquire whether the passionate advocates of such movements are
politically of the left or of the right, for there is little difference in the effects upon
people of either extreme.
The prospects for the continuing expansion of Islam are also difficult to
assess, because it has not yet come to grips with modern science or technology. It
is becoming more difficult all the time to read modern scientific advances into the
Koran, and yet Islam has always held it to be potentially dangerous to know anything
that is not in the book. Neither is it unaffected by superstition, materialism, and a
nominalism of its own, even in professedly Islamic nations. Meanwhile, education
and modern communications chip away at the often monolithic Eastern states and
introduce ideas from the West. The tentative alliances of some such nations with
the former Soviet Union were seriously undermined by the invasion of Afghanistan,
as Islamic unity was by the Iran/Iraq war and then the Gulf war. Indeed, about the
only thing that unites the Islamic states of the Middle East is their desire to
annihilate Israel, and that kind of unity is scarcely encouraging for world peace.
Neither is the newfound willingness to engage in suicide bombing, either with a vest
full of explosives, or at the controls of a passenger jet. Thus, despite earlier
successes in the North American marketplace, Islam appears to have enough
internal problems and difficulties left over from the current age to raise serious
questions about its ability to maintain its influence in the next, and this is
particularly so when one considers that the oil money will not last long into the
fourth civilization, nor much influence it. Moreover, even though the World Trade
Center incident was engineered by a small group of fanatics, there will undoubtedly
be a backlash against all adherents of Islam as a result.

A New Philosophy for Science

Many of the shifts in thinking mentioned in this book as taking place in the move
to a new civilization have been noted by others as well, and there has
been no shortage of technical writers pointing out the new paradigms to the
scientific and technical community. Moreover, science and other techniques will
continue to be of great importance in the information age; its very existence
depends on them--both in the heritage of the last age and in their new
developments. The hold of the tenets of materialism, humanism and progress
(collectively called scientism here) on the hearts of Westerners has been shaken,
however, by its failure to deliver on the promise of a better humankind to wield all
the new high technology toys.
At the same time, a new society has by definition a new world view, breaking
with the past on the transcendental issues along with everything else. It is much
easier to pass technique along from one generation to the next than it is world view,
cultural assumptions, or religion.
Indeed, history would suggest that world leadership tends to pass to those
nations that undergo revitalization in both technology and religion at the same time.
This was certainly true in the industrial revolution, and there is every reason to
suppose that it may be true in the information age as well. The changes, when they
come, may well be more rapid and more dramatic than the last time, just as will the
technological and other societal ones.
There are some signs of a revitalization in the world view of many modern
scientists. The old insistence upon empiricism and rationalism tended to exclude
ethics, and therefore to allow only an agnostic antinomianism in science. But the old
assertions that technology is ethically neutral are being replaced by an increasing
caution, and by a willingness to consider carefully the effects of new techniques on
real peoples in real societies. Moreover, the populace at large is demanding such
caution, and greater accountability as well, as it learns more about the activities
and potential dangers of certain types of scientific research or technologies. Indeed,
it was precisely such greater awareness of the potential environmental
consequences that effectively killed off any possibility of building more nuclear
power plants in the United States.
More people now realize that technological solutions to problems are not the
only ones possible, and that they are not necessarily the best. Certainly, there are
always trade-offs to consider with the solutions not pursued. Part of this caution
stems from the experiences of scientists who worked on nuclear energy before 1945
and must still live under the shadow of the bomb. Thus, anti-nuclear protests in the
1990s were not just anti-technology; they were sometimes being led by scientists
themselves--ones who were trying to find and express new forms of social
responsibility. Moreover, there is a new and growing concern for the doing of ethical
science and for the wielding of ethical techniques over the broad range of both
science and technology. A number of books have appeared that examine the ethical
side of specific professions or of science and technique in general. Most of these
would not even have been contemplated, much less have been published, as
recently as 1980, but today's teachers and students have already been sensitized
by the scandals and conflicts brought their way courtesy of universal information,
and they are anxious to study what they can of ethical behaviour. While much of
this material is still the product of a relativistic and individualistic age, and while
many of these books consider ethical systems uncritically, describe case studies
without guidance, or ask questions without suggesting answers, the very fact that
such issues are being discussed represents a substantial change.
The message of Snow about the danger of living in two isolated cultures (or
many fragments of one) is also being listened to. In the future, there will be more
time to listen, to learn, and to integrate what were disparate fields, because a
scientific education and career will not need to be so narrowly specialized as in the
past; it will have to include a substantial general education as well. This seems likely
to generate a great influx of ideas from the arts and humanities into the formerly
isolated scientific community. The exact effect this will have is uncertain, but it is
sure to cause some softening in the exclusivity of the scientific world view, and a
greater acceptance of the notion that knowledge can legitimately come from other
sources than rationalism and empiricism. This would make scientific debates much
broader, much deeper, less dogmatic and far more interesting. There could be a
return to fundamentals, with theories and models being seen once again as the
abstractions of reality that they are; this would allow evolutionists, for example, to
debate easily with creationists (and vice versa) without feeling that their whole
persons, livelihoods and spiritual beings are under attack.
In short, information paradigms ought to force openness and destroy narrow
exclusiveness. Ideas are put out onto the open marketplace to compete for the
hearts of people. If the paradigms of science can be modified accordingly, the new
ones will form an important part of the basis for the religious-like attachments of the
men and women of the next era. Scientism as it has been known may have passed,
however; those who dogmatically cling to it may find it to be as relevant to the
information age as Medieval scholasticism was by the nineteenth century--an
honoured, but no longer believed spiritual ancestor.
A new scientism could recognize and integrate into its structure an
understanding and appreciation of the human spirit and in particular of the role of
belief systems and world views. It could employ itself as a meta-technique to
criticize its own basic assumptions and world view. It could employ models and
paradigms without supposing that either constitutes ultimate reality. If it answered
meaning questions, it could acknowledge their religious nature openly and frankly. It
could freely debate its own meaning and that of other systems without either fear or
aggression. It could accept that there are things beyond itself, that ethical
considerations can shape it, and that it has a total societal responsibility, and
take these mutual effects seriously. It could recognize that its own technique is one
of many possible, and is not the generator of all possible knowledge. A scientism
less confident of its own invincibility and inevitability is not as aggressive a religious
force as in the past, but a frank, responsible one that knows its limitations may be
one that more ordinary people will believe in, trust, and follow in the future.
This scenario is like that of Wilson's consilience in seeing the need for and
inevitability of a new assembly of all the intellectual fragments; it differs from his in
suggesting that whatever integration may come about, it will not be by a total
intellectual takeover by empiricism--a new triumph for hard, exclusive logical
positivism and the complete expunging of supernatural ideas from society seems
extremely improbable.
If the reviews for the newly-minted religions are mixed, and the prospects for
a modified scientism depend on its muting its religious force (rather than, per Wilson,
increasing it), it is also important to consider the potential for a Western revival of
Christianity.

A New Reformation

First of all, it is worth repeating that the historical conjunction of religious
revitalization and great social and cultural dynamism is no mere coincidence.
Neither are the great technical revolutions conducted by those whose spirits are
bound to the paradigms of old cultures; the re-awakening and re-focusing of the
human spirit in the transcendental sense--a new renaissance--seems to be a
necessary factor in the building of a new civilization. The powerful dynamic that
causes people to break decisively with the past and stride forth confidently to
fashion new ways of thinking, experiencing, and transforming, is not technical
alone, as some scientists believe, nor is it only the result of inevitable social and
historical forces, as, for instance, Marxists aver. It is partly both of these, but a
motivator is also required, a spiritual awakening that re-focuses people in large
numbers to strive toward new goals and re-energizes them to achieve things
previously thought impossible.
For millennia, the spiritual ancestors of modern Western society defined their
responsibilities to each other in terms of their responsibilities to an almighty God.
The fact that they generally do so no longer has led some twentieth century
intellectuals to think that religion has been forever banished from the mainstream
of society and to suppose that a new order could be ushered in on the basis of
technique alone. Was Ellul's despairing concession of apparently inevitable loss by a
weakened Christianity the last word?
However, the effect so far has been a dehumanization and depersonalization,
an emptiness reflected in a nihilistic philosophy, a modern literature of pessimism
and despair, and a hope focused only on economic advantage. Living under the
shadow of nuclear extinction, pollution and a multitude of social problems that new
technologies seem only to exacerbate, the people of the Western nations may well
be ready for a swing of the religious pendulum back from the secularism of the first
half of this century. The very extremes to which the fragmentation of culture and
religion have gone would seem to reinforce this view. There certainly appears to be
a spiritual vacuum into which some religious system has an opportunity to step and
to re-unite the pieces into an integrated whole.
There is no obvious prospect for this among new religions, though the high
level of experimentation with them is indicative of the strong undercurrent of
religious interest. It is important therefore to consider the prospects for a greatly
revived and energetic Christianity to become the spiritual galvanizing factor in the
West. These are mixed, depending on how they are viewed. After all, the Christian
response to culture in the past has ranged from radical rejection, such as in the
monastic or ascetic life-style, to near-total identification and assimilation, as in
the last culture. Any Christian revival in the West would have to forge a new place in
society based on an open, energetic and critical participation that engaged the
culture at all its important points. To do this it would need to develop or rediscover
several characteristics:
First, it would need a new sense of being, that is, a new sense of commitment.
Part of this is a rediscovery of its ancient doctrine that moral goodness is an all-
encompassing character quality, not something that is just done. Nominalism may
suffice for a declining organization to survive through a single generation, but not
beyond. If Christianity (or any other religion) is to mean anything to the fourth
civilization, it will have to reflect total commitments, faith-assertions sufficiently
broad to encompass the whole person, and sufficiently deep to stand up to testing.
Generalization and integration rather than specialization and fragmentation will be
important in the future; the organizations that have diversified and fragmented and
lost all their distinctives may vanish from the scene in a single generation.
Successful religions for the future would, therefore, have coherent beliefs and
confident, comprehensive faiths. Their world views will of necessity contain filters
understanding the physical world, but these will be general assertions and, if they
are to learn from the lessons of history, will not stray far into the territory of science
or adopt its tentative conclusions as inflexible dogma. Most important of all, they
will have a definitive and transcendent view of God that makes them distinct
alternatives to humanist and pantheist philosophies. Without this, all else is wasted,
for religion without transcendence and a clear view of God lacks any meaning, and
if it cannot answer meaning questions about itself, it certainly cannot answer them for its people.
Second, a renewed Christianity would require a new intellectual discipline,
aggressiveness, and willingness to debate the ideas it holds to be truths in an open
and free-wheeling fashion. If its doctrines are indeed self-evident, or rationally
supportable, and were actually historically revealed by God, they can stand up to
scrutiny in the open marketplace of ideas, and Christian intellectuals need not be
afraid to peddle their wares outside the ghetto's dark corners. As part of any
intellectual revival Christianity will have to come to terms with a response to
evolution in particular, for it cannot sacrifice its general assertion that God created
the known physical universe and its specific claim that He created man, without
losing its whole reason for being. Without the real first Adam and a literal fall from
grace, Christ, the second Adam is simply irrelevant--it makes no difference whether
He came if there was no reason for Him to have died for sin. The aggressive
assertiveness of the ultra-conservatives is one way to reclaim intellectual territory
here. An understanding of Genesis as literally true but undetailed as to time, order
and mechanism could be another. However, the liberal assertion that such things do
not matter is cowardly and tantamount to intellectual suicide; issues like this one
have to be faced, not defined out of existence by willful ignorance.
There is also in this category the problem of what to say about miracles.
Should they be regarded as a result of the work of God who invented all the physical
processes and can therefore set them aside as He wishes? Or, should God be
regarded as employing as-yet-unknown techniques that are in accord with the
physical laws of the universe, but that humanity does not yet possess paradigms to
understand? Whatever its response to challenges of this nature, Christianity must
end its century-long retreat from the mainstream of academia, for religion that is
anti-intellectual has little relevance or foundation even in a fragmented intellectual
landscape--and none at all in an integrative one. It is important to note, however,
that taking the conservative and even literal ground on such matters as creation is
not in itself anti-intellectual. Rather, that term describes those who cannot or will not
debate their ideas openly. It is also not wrong to assert a specific interpretation of
the Christian Scriptures as literally true. After all, they either have no meaning at all
or some specific one; it is a cop-out to assert that they may have many possible
meanings and turn one's back on debate.
Third, such a revival would need to have a new excitement, a new sense that
its doctrines are not musty things for theologians to argue about, but part of an
encounter with the power of the living God. There is some evidence this is
happening, because for some time, there has also been an increased emphasis on
experience in some Christian churches. This emphasis is roughly termed the
charismatic movement, and is at least some indication of a return to the
experiential aspects of Christianity. This particular manifestation is as yet immature
and intellectually incomplete, and its consequences are as yet unknown. However,
religion that fails to touch the emotions or to explain the experiences may be an
interesting philosophy, but has no humanity. Moreover, an experience grounded in a
success philosophy is a fleeting one, good only for today. The authentic and
orthodox experience of Christianity was not just emotionalism or success, but both
within a context of faithfulness in social action and intellectual vigour; to be revived,
it would have to be so again.
Fourth, it would have to regain its power to change both people and society.
Its claims to be able to do so are both radical and comprehensive; if these are to
have any credibility they must be seen in action. That is, it must rediscover that
moral goodness is a character quality that leads to things that ought to be done,
rather than things not to be done. At the same time, there is a large collection of
activities that Christians have no conceivable motivation to pursue, which many of
them do now in the name of sophistication, but ought to desist from--more to regain
their distinctiveness and voice over culture than for the sake of legalism.
Unfortunately, there is little evidence at the present time of the transformational
aspect of Christianity in Western society. Nominalism, and the confining of Christian
life to a single day of the week are the rule rather than the exception. It is worth
noting that the objections of outsiders to Christianity are seldom intellectually
grounded; they are nearly always morally based. Yet, the actual morality practised
by those who profess to be Christians is often not especially different from that of
anyone else. But, if it is impossible to tell that people are Christians by observing a
life radically transformed morally--in priorities, life-style, and relationships--then
they may as well not claim to be. Religion that lacks effective, practical,
transforming compassion is not only useless; it is misleadingly dangerous. Faith,
knowledge, and experience with no consequences in human relationships are not
just dead, they are counterfeits.
Fifth, these four aspects cannot exist alone and entire unto themselves; they
have to be integrated into a unified and whole people in order to become the
galvanizing factor in the intellectual life and experiences of the broader society and
play a role in transforming it. Some denominations emphasize one aspect of the
four, and some another, but there is little effective and comprehensive integration
that can be pointed to. Progress has been made by many Christian academics
seeking to integrate their faith and discipline, but the radical integration of these
two with experience and relationships to demonstrate the concinnity of God's
design has not yet overcome the fragmentation that has been going the opposite
way.
One church may pride itself on doctrinal correctness, another on its faith, a
third on its emotional experiences and a fourth on its program of social works. All
four are out of balance, so all these organizations are impotent in an age that
demands comprehensive integration and generalization for relevance. Nominalism
and fragmentation go hand in hand and so do commitment and integration to see
God's concinnity. The former are safe, because they are defined and sanctified by
the old culture; but the latter are dangerous, because they come out of a claim to
stand above culture and to have the authority not only to critique it, but even to
rule on the legitimacy of many of its elements.
Thus, sixth, any revival of Christianity from its present state will require a
catalyst, a rallying point, and leadership. If the fourth civilization is to see
Christianity as a religious force, it will require visionaries such as were Luther and
Calvin in another age to refocus nominal, existential, and fragmented Christians,
rally them behind a comprehensive view of their faith, and restore the dynamic and
missionary force.
There is not much evidence that any of this is happening in the West. But,
there are indications that such a radical reformation is taking place elsewhere--in
China, Korea, and Africa. The historical record would also seem to suggest that
Christianity may rise elsewhere even as it continues to die in the West. After all, it
began in the Middle East, where today it scarcely exists. It flourished for centuries in
Europe, where it had its most glorious successes, but where it is today quite
indiscernible. It was exported to the Americas, where it is still visible, but socially
and culturally ineffective, and where it has lost its transformational aspect and its
ability to dynamically criticize and interact with society.
It must be concluded, therefore, that the question of spiritual leadership in the
fourth civilization, and with it the energizing of political and economic leadership, is
very much open. A modified scientism will play an important role; whether it will
have a corresponding religious partner remains to be seen. Both the New Age
movement and the remnant of Christianity seem to be too fragmented as they
currently are to provide a general spiritual dynamic in the West, but there is an
interesting possibility that the latter may do so elsewhere. There may be reasons to
suspect that a revival of Christianity could supply the spiritually galvanizing force to
complement the enabling technology and transform civilization into a new form, but
there are few reasons to suppose that this new energizing will take place in North
America, and even fewer to offer Europe as a candidate. If it takes place anywhere,
it might be in China, for that nation has both the technical momentum and the
openness to a new spiritual paradigm required to assume the leadership role.
Moreover, the comments made in this section can be applied, with suitable modifications, to religions other than Christianity; Christianity has been used for illustrative purposes because of its past role in the development of the technological society. But the possibility cannot be discounted that some entirely unrelated religion might arise to play the role of spiritual motivator to the fourth civilization (in both negative and positive senses).

12.7 Missing the Mark--Some Potential Difficulties


There are many things that could go wrong with any or all of the integrative
scenarios presented in this chapter. Indeed, one reason for suggesting that the leading nations of the new civilization may differ from those of the old is precisely the momentum built up over the last century by the most successful exploiters of machine age paradigms. Clinging to these paradigms now and resisting change would result in stagnation, but there is a heavy
investment in the old ways, and many good reasons why people may not want to
change. This is often seen in computing technologies, where people will stick with
inferior software and hardware and ignore better operating systems and computer
designs because of familiarity with and investment in the installed base.
Thus, this section examines a few problems, all of which have the potential to
derail features of the information age and divert attention away from the integrating
themes discussed here.

The Infrastructure Problem

In economic terms, investments in infrastructure take the form of large
corporate structures, buildings, ships, roads, aeroplanes, factories, and other
machinery. The short-sighted view of infrastructure is that it is built once and used
forever. A realistic view recognizes that nothing lasts indefinitely and envisions a
specific plan for infrastructure replacement and modernization. It is for this reason
that every modern accounting system considers the depreciation of capital goods.
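To make the point concrete, the following minimal sketch (in Python, with purely hypothetical figures) shows how straight-line depreciation spreads an asset's replacement cost over its service life--precisely the reserve that an accounting system without depreciation never accumulates.

def straight_line_schedule(cost, salvage, life_years):
    # Yearly depreciation charge and remaining book value for one asset.
    # A simplified sketch: real accounting adds conventions (part-year
    # rules, revaluation, declining-balance methods) omitted here.
    charge = (cost - salvage) / life_years
    book = cost
    schedule = []
    for year in range(1, life_years + 1):
        book -= charge
        schedule.append((year, round(charge, 2), round(book, 2)))
    return schedule

# Hypothetical example: a $2,000,000 bridge deck, no salvage value,
# 40-year service life: $50,000 must be set aside every year.
for year, charge, book in straight_line_schedule(2_000_000, 0, 40):
    if year in (1, 20, 40):
        print(year, charge, book)

A public body that builds such a bridge but budgets nothing like the yearly charge is, in effect, consuming its replacement reserve--the problem the text turns to next.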
However, this system works well only in a relatively unregulated private
sector where market forces directly affect decision making. The provision of public
infrastructure is entirely different, because it can be built without regard to either
financial or future considerations, and the concept of depreciation is not used in the
public sector. The massive expansion of water, sewer, and roads over a relatively
short period of time in North America means, for example, that much of this
infrastructure will have to be replaced over an equally short period of time. Indeed,
some of the more recent construction techniques have proven less durable, and this shortens the overall replacement period. For example, steel-reinforced concrete, if not properly treated, deteriorates rapidly as the embedded steel reacts with de-icing salts in road runoff. As a result, the majority of bridges and buildings constructed with this method in the 1950s and 1960s will need to be replaced much sooner than expected. Meanwhile, governments have placed social spending at the top of their
agendas, and much of the infrastructure has simply been left to decay. Similar
remarks could be made of the health and education infrastructures.
This may not be as great a problem as it seems, for the new civilization will
surely require much new infrastructure in any case. However, those nations that build theirs from scratch may have the advantage of no attachment to obsolete structures and ways of doing things. As post-war West Germany and Japan
have shown, building from scratch, or even out of chaos, may be easier and more
economically beneficial than attempting to change the momentum of an existing
nation; this may prove to be the case in the next civilization as well.
The same problem exists in heavily regulated industries such as shipping,
agriculture, and public utilities. Though not necessarily owned by the state, these
tend to be so closely supervised by it as to become its close associates; they then
take on an economic aspect similar to that of enterprises directly constructed by the
state. It is not surprising, therefore, that these sectors have similar infrastructure
problems, and that change comes only very slowly. Yet all are important to the
fourth civilization, and its success in any given nation will depend heavily on the
ability of these sectors to solve their infrastructure problems, even if it means
casting them loose from the protective and smothering embrace of government.

Vested Interests

Organizational infrastructure can also be an impediment to change. New techniques demand new methods of structuring businesses, educational institutions,
and governments. The case for such change has been made at length in this book,
and so has the assumption that it will take place. Hierarchies, for example, seem
inimical to the new ways of doing things. However, some of these will be very much
attached to their current arrangements and will resist change. Those that do may not survive; but if enough of them should survive unchanged, the fourth civilization may in many quarters be only a pale extension of the third, with little new to commend it.
The present communications media may be among those institutions with the
greatest interest in maintaining their current status. Never have they been more
powerful than at the close of the machine age. They serve as official oppositions to
all governments, expose the sordid side of public figures, filter the news and shape
public opinion to their standards, and confer widespread recognition on the causes
and leaders of their choice, regardless of how few followers they may initially have.
As more people have direct access to information, the role of traditional media may
be threatened, and they could use their influence to block such changes in order to
retain power. Censorship, heavy taxes, or technology restrictions on the Internet
could, for example, slow its growth in favour of the traditional media.
Politicians too may be more interested in extending the last age than in
ushering in a new one in which their scope of influence narrows, especially if
participatory democracy does become important. The fragmentation generated by
political polarization or jingoistic nationalism can work well for them in the short
term, for they can keep their own answers vague and meaningless and still appeal
to enough of the electorate to remain in office, especially when many lose interest
and do not vote.

Ethics Problems (reprise)

One could also wonder where the kind of integrity envisioned for the fourth
civilization professional will be found. After all, as the fragmented machine age
draws to a close, integrity appears to be sick, even to the point of death. The future
is nothing but an undefined nightmare if its peoples have no interest in the making
and keeping of promises to their spouses, to employers, and to society as a whole.
But, the very publicity that integrity problems are generating can be taken as evidence that they have at least been diagnosed, and that is a hopeful sign for the future. Certainly, lack of integrity is one aspect of the machine age to which fourth civilization openness paradigms would appear to be hostile, but it may be some time before integrity becomes the norm. If a marriage between a religiously based moral philosophy and the technological and scientific mindset cannot be achieved, it may never do so.

The Scapegoat Problem

Sometimes, change is resisted more actively, and those who feel themselves
to be powerless can find themselves in very dangerous straits indeed. If only a few
are perceived to desire the change, and they achieve a high profile, the majority
could begin to blame them for all the ills it believes society to be afflicted with. If
only a few resist the changes, they could find themselves culturally isolated, and
ready targets for scapegoating when things do not go quite as the majority had expected (they seldom do). Although the open information paradigm says this
should not happen as much in the future, the possibility of some charismatic demagogue stirring up one nation against another (or even against one of its
minorities) can never be completely discounted.
Thus, blacks and orientals are still vulnerable in North America and Europe;
Christians are hounded to their deaths in many parts of the world; and the spectre
of anti-Semitism continues to haunt modern civilization. Muslims and Hindus stare
each other down over Kashmir, and Israelis and Palestinians do the same in their
partitioned land. It is also easy to blame the peoples of another nation for the ills at
home, for this shifts the responsibility for not solving them to someplace else.
Western governments often ignore such problems if a politically unpopular group such as Christians, Jews, or Muslims is the target, or if there is money to be made
by trading with the oppressors. Better not to offend the voters back home or
damage fragile trade prospects by taking up the cause of ethnic or religious
minorities for the sake of justice.
One could express the optimistic hope that greater openness and willingness to take responsibility ought to reduce the risk of scapegoating, but historical experience cautions that the risk will not be eliminated. There will always be people for
whom the pace of technical and social change is either too slow or too fast, and who
would rather fix blame than work on solutions for the problems they perceive. Such
people will always be a threat to civilization as a whole; the challenge will be to
eliminate the threat without destroying the freedom to dissent.

More Problems of Race and Discrimination

This leads once again to a dilemma. Every society and every nation, if it is to
continue to believe in the things that make it distinct, must promote itself and find
ways to demonstrate its own cultural superiority, while resisting encroachments
from others. However, such healthy vitality and the defense of values, customs,
religion, and a way of life can easily be taken to extremes, and become racial or
religious discrimination. These problems are compounded by historical leftovers:
o The uninhibited racism of the past, inflamed by doctrines that one's own
"race" is a product of higher evolution, religious hatreds, or the cultural remnants of
black slavery.
o The legacy of white European colonization of large areas that were already
inhabited by peoples of less advanced technology who were pushed aside and
treated as inferiors, fit only to be ruled.
o The continuing, deep-seated problems posed by religio-cultural hatreds in
various parts of the world (former Yugoslavia, the Indian subcontinent, and the
Middle East).
o The misuse of the Bible to justify racial discrimination and white superiority.
All of these are fragmenting, rather than unifying. Integration paradigms apply
in this area as well, so that it is easy to hope that these attitudes may change in the
future. Indeed, there is evidence in some parts of the world that they are changing already, though hatreds with their roots in religious controversies seem intractable.
Some of these attitudes may only be cured by the passage of time and the
education of successive generations in a more forbearing attitude. Others may
never be.
Only a fringe minority now attempts to make an evolutionary case for the
superiority of a particular race, and slavery by race, while not forgotten, no longer
has such a profound effect on Western society as it once did. It is too soon,
however, to determine whether the aftermath of the civil rights movement of the
1960s in the United States will make a lasting difference to race relations there--
only time can judge this.
In most of the regions colonized by Europeans, the aboriginal population is
now so greatly outnumbered that problems of race seem invisible. However, they
still do exist, and nowhere is this more evident than in South Africa, where blacks
outnumber whites by a wide margin. Like North American whites, those of South
Africa are natives; they are generations removed from knowing any other
homeland, and have no desire to leave. Because of the relative numbers involved,
their repression of blacks was more severe than any discrimination against native
Indians in North America, yet it is of the same kind and origin. However, it is much easier and more realistic to foresee the elimination of paternalistic, government-controlled Indian reservations, and a reduction of discrimination against aboriginals, than it is to predict the early arrival of a stable and integrated society in South Africa, despite the disbandment of the old white supremacist government there. One might note, however, that the general drift of some land claims settlements in Canada seems to be in the opposite direction, more towards segregation than integration.
Apart from the history of discrimination itself, there is no external support for
the notion that one type of human is inherently superior to another. There is no
scientific evidence for such claims, and there is no more reason to cite differences on the basis of skin colour than on the basis of height or eye colour. Moreover, the Biblical passages often cited in support of racism are not only obscure and used in isolation from their context, but provide no mandate to pursue a policy of discrimination by race. Indeed, the modern idea of "race" was not known to the writers of the Bible at all, and such discrimination as the Israelites were commanded to practise was intended to maintain the purity of their religion, not so much that of their physical ancestry. This much is made obvious by the inclusion of
foreigners in the genealogy of David, for instance.
Thus, from every rational point of view, it is safe to assert without fear of
objective contradiction that there is only one race--the human race, and that the
interests of all humanity are nearer the top of an ethical hierarchy than are the
interests of any one nation. Will the men and women of the fourth civilization act on
this self-evident truth? For the sake of their collective survival, they have to, but
history tends to suggest that they might not. Racism is an old problem, a leftover
from a dark and evil age, but it is not one that will go away easily. We will know that
it has when people think no more of skin colour than arm length, number of
freckles, or toenail shape--not only for decisions about business, but also those for
church membership, choosing a neighborhood, and for marriage. We can suppose
racism has passed when a Baptist Pakistani can move in next door to Muslim
Japanese and Liberian families, and across the street from a northern Ireland
Catholic, and all join the same clubs, their children all play together without eliciting
comments from anyone, and their religious differences provoke no more than

528
vigorous discussions. We will know that it has passed when the peculiarities of some religious group can be satirized without that group's leaders sentencing the offender to death or complaining of discrimination. We will know when all ideas (including
unpopular ones like creationism) have equal chance to compete for space in the
newspapers, magazines and airwaves without being censored by advocates of the
currently fashionable version of scientific or political correctness.
The possibility that racism and other forms of discrimination will "overshoot"
into the next civilization and mar its ability to work and communicate in universal
fashion cannot be discounted. Neither can the possibility that religious groups
rather than national ones could become the focus of a new and fatal form of
intolerance--there are indications of such attitudes today.
However, it must be appreciated that solving such a deeply entrenched
problem will not be easy--even in the most "enlightened" of societies. Care must be
taken that attempts to redress old inequities do not become a reverse discrimination and create a backlash of new racism, or simply redefine "tolerance" to mean the same as an old intolerance. Care must also be taken that sanctions applied to countries engaged in racial or cultural wars or internal discrimination do not simply make things worse by strengthening the resolve of the racists to maintain power at any cost. That such divisive problems will
eventually be solved and a new unity of humankind emerge is a premise of this
book; that it will come easily or soon is not.

Summary

All the institutions of the machine age can react to its demise either by
embracing change and re-defining themselves to suit new roles, or by acting to
perpetuate themselves and their existing agendas at any cost. Likewise, its peoples
can welcome and work with the changes for good, or resist them or turn them to evil ends. However, it is the contention of this book that embracing change is the only course that will achieve social stability, and that attempting to go on living in the industrial age will only result in further fragmentation. The real challenge of the new civilization is not the preservation of old and disintegrating institutions; it is the
integration of their fragments into a new and cohesive whole--not change for its own
sake, but for the construction of a new and better society.
Finally, and more specifically, since this book is intended to challenge students of technology to consider not only ideas but also problems, it is appropriate to put forth some very specific questions for solution. In keeping with the tradition for such things, these are presented in lists, with only minimal comment.

12.8 Technique And The Fourth Civilization


The techniques that will characterize or drive the fourth civilization are not difficult to discern, and it was to the major ones of these that Part II of this book
was devoted. There are a number of specific issues that will likely receive a great
deal of attention in the coming years, however. Some of these represent likely or
potential developments in science or technology, and not all are central to the character of the fourth civilization. Also, in accord with such traditions, the technical issues raised in this section take the form of deliberately open-ended questions and requests for action. They constitute one person's summary of some of the
interesting outstanding puzzles or problems for solution by investigators over the
next few years. They have been selected partly because of the impact on basic
science that a solution might represent, and partly because of the magnitude of the
potential effects some of them have for the integrated body of knowledge or for
society in general. A few may be regarded as resolved already but perhaps are in
need of reopening. Others may be answered in part by the time these lists are
published.
In keeping with caveats already sounded in this chapter, some of the social
issues raised here are critical, for failure to develop appropriate techniques to solve
them could mean that there will be no fourth civilization, but a new dark age
instead. In no case should these lists of questions be regarded as exhaustive or
static; they will certainly change if there is another edition. For convenience, they
are divided into categories, not all disjoint, and not all traditional. Oh, and these are not homework problems, but portions of them could become lifelong research projects for a few readers.

Basic Physical Sciences and Applications

o How constant are physical constants? Do any of them, such as the speed of light, change over time--say, decay?
o Does the proton decay?
o Develop a grand unified theory (GUT) extended to incorporate gravity--a so-called Theory of Everything (TOE). Is a practical antigravity device feasible?
o How does superconductivity work? For that matter, how is electricity
conducted, really? Make a room-temperature superconducting material that can be
drawn into a wire.
o Is there an effective way to travel faster than the speed of light, or to avoid the problem, as assumed by so much science fiction?
o Does quantum mechanics imply that alternate universes exist, or that the behaviour of particles can be determined by what happens to entangled partner particles even at great distances? If the latter, what practical use does this have? Build a working quantum computer.
o Devise a practical method for molecular-level data storage. Build a
Drexlerian assembler (nanomachine) and employ this in both manufacturing and
medical applications.
o Build a commercially viable lightweight battery or fuel cell and eliminate
petroleum use from transportation applications (forecast the political implications).
o Of what class of phenomena is time? Is it a physical phenomenon, and does it even have an existence independent of the physical world? Or is it a mental construct devised by humankind to accommodate a limited perception of some more fundamental phenomenon?
o What is the most fundamental particle of matter? The quark, or something
else (smaller)? Is it really a particle? Define "particle."

Cosmology and the Earth Sciences

o Demonstrate the existence of a black hole close enough at hand to run experiments with (i.e., closer than those hypothesized to exist at the centres of
some galaxies). Is it possible to use one?
o If the universe is closed and does contain enough mass so that it will
eventually contract, where is the mass? It is not sufficient to hypothesize "dark
matter" that cannot be perceived.
o Settle the question of whether the neutrino is massless.
o Revise the nuclear model for the Sun, or find the neutrinos it ought to emit
under the present one.
o Find an actual, not just a hypothesized, source for comets and solar system
dust.
o What is the mechanism whereby the earth's magnetic field is produced and
maintained?
o Is any substantial portion of natural gas or coal produced directly by earth
processes (e.g., outgassing from the core) rather than being of organic origin?
o Predict earthquakes and volcanic eruptions with a high degree of reliability.
Do the same for hurricanes; for the weather generally.
o Devise a cohesive cosmology that is substantially different from the
standard big bang theory and that can worthily compete with it or even replace it.
Pay particular attention to a mechanism for galaxy formation and the large-scale
structure of the universe.
o Is the Earth getting hotter or cooler? Whatever it is doing, is the process
naturally cyclical, or do man's activities have much to do with it? Do something
about it, without making an even worse mess than the one you think already exists.

Biospace Problems

o Extend the detailed mapping of human DNA to include the function of every gene. Is this all there is to life and heredity, or do other parts of the cell, such as the mitochondria, also play a role? If so, what is it?
o Discover a way to block or remove viruses--one that can be tailored to any
virus. Achieve a complete analysis of the human immune system.
o What is sleep? Is it necessary, or can it be dispensed with?
o Settle the mind-brain question. That is, is the mind something more than the brain, or something less? Or are these two names for the same thing?
o Is aging a disease or perhaps a complex of diseases or malfunctions? If so,
how can it be treated?
o Find a way to persuade the body to regenerate damaged organs or
otherwise remove the necessity for organ banks or transplant surgery or both.
o Find a foolproof sterility drug and an antidote. Or, find some other means to
eliminate nearly all abortions.
o Determine whether the earth's ozone layer is diminishing. If so, is this part of some geological or solar cycle, or a result of human activity, and can you do something about it without creating a worse problem?
o Find a way to stop acid rain. Solve some of the other deforestation and stream pollution problems while some forests and streams remain alive.
o Construct new human living space for a substantial number of people
without destroying farmland or extinguishing any more animal species in the
process.

Miscellaneous Applications

o Is a railway or elevator to near-earth orbit feasible? Build one. Otherwise, find a method of transporting materials to orbit at, say, one percent of the cost of
space shuttle launches.
o Is there a simple and cheap method of preventing either the launch or the detonation of nuclear warheads?
o Develop an automated prospecting method for coal, oil, and natural gas; for
minerals.
o Develop a commercially viable, inexpensive, plentiful, safe, and publicly
acceptable power source, and find a way to stop using organic fuels altogether.
o Develop a new, fast, cheap, and safe method of (computer controlled)
ground transportation, solving the problem of highway massacre.
o Find a way to solve the airport bottleneck problems without raising fares,
reducing competition or compromising safety.
o Simulate human vision with a computing device.
o Build the data repository and searching facilities of the academic
Metalibrary. Build the PIEA.
o Build a permanent and self-supporting space station in orbit, on the Moon,
or both.
o Build a holographic broadcast/receiving system with a fine resolution on
each plane.

Integrations and Other Social Questions

o Develop a science of history (i.e., a psychohistory) that explains historical phenomena in terms of statistical behaviour, or prove this cannot be done (a toy illustration of the statistical idea appears after this list).
o Develop an economic calculus that can operate even in a highly complex
and rapidly changing climate.
o Dismantle the statist regimes and restructure democracy without either
falling into anarchy or global war. In both cases, achieve a state that concentrates
on collective and networked obligations, rather than on either the demanding of
fragmented rights or a hierarchical command structure.
o Find a way to prevent either science or religion from creating a societal intolerance of the other. Forge a reconciliation between the two that includes a mutual respect for, and use of, each other's intellectual territories.
o Find a way to overcome racial and national prejudices and avoid the
establishment of a new Nazism. Establish the principle that there is only one human race.
o Find a way to rebuild the education system, making it locally relevant, but
culturally and sexually integrated.
o Devise workable codes of ethics for the major professions, and a means to
enforce them.
o Either legalize the use of all drugs, or find an effective way to prevent the
production of the illegal ones. In either case, use the solution to reclaim both the
third world governments and the inner cities of the West that are now hostage to
the drug gangs.
o Solve the problem of the creation of new class structures based on the
ability to use the fourth-civilization techniques. In particular, find a way to prevent
the disappearance of the middle class.
o Find a way to resolve the sexual differences and gender solitudes without
creating a new social extreme in the process.
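On the psychohistory item above: here is the promised toy illustration (a minimal sketch in Python; the probability and population figures are entirely hypothetical) of the statistical premise such a science would rest on--individual choices may be unpredictable, yet by the law of large numbers their aggregate can be forecast quite narrowly.

import random

def individual_choice():
    # One person's unpredictable decision (hypothetical model):
    # supports a given change with probability 0.6, resists otherwise.
    return 1 if random.random() < 0.6 else 0

def population_support(n):
    # Fraction of a population of n people supporting the change.
    return sum(individual_choice() for _ in range(n)) / n

random.seed(42)  # make the run reproducible
# No single choice can be predicted, but the aggregate fraction
# clusters ever more tightly around 0.6 as the population grows.
for n in (10, 1_000, 100_000):
    print(n, round(population_support(n), 3))

Whether anything like real history can be modelled this way is, of course, precisely the open question the item poses.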

The most important social problems are implied by items in the last list. The most daunting task of all is the piecing together of the fragments of the exhausted machine age civilization into a cohesive culture that has the dynamic energy to fulfil some of the potential of the information age--and doing so despite the fragment-
creating momentum that still exists. The peoples who achieve these things in their
nations will dominate the early years of the next era, for they will indeed be the
inheritors of the fourth civilization.

12.9 Summary and Further Discussion

Summary

Each phase of civilization fragments before the next one can be built. This process has been most noticeable in the present transition because of its rapidity, and because the specialization of the machine age, as it developed more fully, tended to encourage intellectual, professional, and religious fragmentation.
The task of building the fourth civilization is that of forging a new integration
of the fragments. This takes place both on an academic level, and on a broad
cultural one. It requires a redefining of such institutions as government,
corporations, education, science, and religion and a wholistic approach to personal
and institutional relationships. The heart-commitments of those in the new era are
not yet evident, but there is much experimentation with various forms of religion,
some of them quite novel. A muted scientism may be an important factor and the
new age philosophies may play some role; however, any resurgence of Christianity,
for instance, would require it to come to terms with its world view, intellectual and
spiritual foundations, and work out a transformational relevance in the new society.
North America is a possible venue for such a revival, but the Far East may be more
likely. There is a tension between the use of integration to self-evolve a consilience of purely empirical knowledge and its use, on the other hand, to demonstrate a preexisting concinnity, or design.
The last section presents several interesting technical and critical social
problems whose solution could have an important impact upon the further
technological and social development of the fourth civilization or even on its very
existence.

Research and Discussion Questions

1. The author argues that fragmentation is caused by machine age specialization paradigms exacerbated by normal transitional problems, and that integration must characterize the fourth civilization. Argue instead that the chief paradigm of the new era will be fragmentation, and that a cohesive culture can be built upon such a foundation.
2. What is the future of the family? Do not assume anything that the author
has said in your answer, but argue from such other sources as you have available,
filtered through your world view. State your definitions and assumptions and
develop your reasoning clearly.
3. Argue that a participatory democracy is just another form of totalitarianism
and propose a better alternative.
4. A woman who is eight months pregnant applies for a position for which she
turns out to be the best qualified person. Should she be hired without regard to her
pregnancy? Are there any exceptions to your conclusion? Does it matter if this is a
sedentary desk job or one requiring considerable physical exertion--say, as a deck
hand on a tuna fishing boat? Does it matter if this is a key management position
whose occupant will be a critical company player over the next four months?
5. Now repeat your analysis in the last question for the case where the father-
to-be is the applicant. Is there any difference? Why or why not?
6. Now, do the same analysis of the last two questions for the case where the
task is a combat posting for the army. Is there any difference? Why or why not?
7. The author has expressed some doubt as to whether the sexual revolution
involved a change of behaviour or a change in public perceptions of behaviour.
Research the issue and try to settle it one way or another.
8. Argue that a technical solution to AIDS and other venereal diseases will have (a) no effect or (b) a permanent effect on sexual behaviour.
9. The author makes much of the "principle of interdependence". Argue that it
is the individual and not the total society that is of paramount importance, and that
this principle is of much less importance than suggested here. You may wish to
defend the position that individual rights are far more important than obligations to
society.
10. Take one of the categories of questions and tasks proposed in the last
section and examine two or three items in detail. Research what is now known,
explain the magnitude of the task remaining and summarize the probable effect on
society and on knowledge in general that a solution would have.
11. The author asserts that in the information age the late machine age postmodernist religious fragmentation will be replaced by an integrative view of religion. Argue the opposite--either that religious fragmentation must continue, or that religion will die out altogether. Base your argument on what you believe will be the
prevailing paradigms of the fourth civilization.
12. Do a detailed review of Wilson's Consilience, summarizing his arguments
and defending them.
13. Argue contra Wilson, but in more detail than in this chapter, that all the
physical universe is a concinnity, that is, an integral design.
14. Argue contra both Wilson and this author, that there is no need for a
comprehensive integration of knowledge.
15. Expand upon and detail the means by which a "new Reformation" of
Christianity might take place. It might be appropriate to critique a particular church, preferably your own if applicable, with respect to the proposal of radical integration and cultural criticism.
16. Apply the same integrative model to a religious system other than Christianity, and detail how it would have to change to become the major dynamic religious force in the new civilization.
17. Explain in detail the beliefs, practices and cultural impact of the New Age
movement. Is it a religion? Why or why not?
18. Make your own list of ten interesting and important unsolved problems in
any field. Justify the inclusion of each item.
19. What of the role of the family in society? Some claim that the traditional
family is obsolete. Either defend the traditional family as the only reasonable
building-block for a society, or propose an alternative view of the way society can
cohere.
20. Write a detailed explanation of the way in which you personally will be
attempting to integrate your being, knowing, experiencing, and relating as you look
forward to living and working in the fourth civilization.
21. Write a detailed explanation of why you do not need to do any of these things, but be sure to state your presuppositions about the main features of the fourth civilization and how you will relate to them, if not in an integrative way.

Bibliography

Alcorn, Paul A. Social Issues in Technology: A Format For Investigation. Englewood Cliffs, NJ: Prentice-Hall, 1986.
Asimov, Isaac. Science Past Science Future. Garden City, NY: Doubleday,
1975.
Bibby, Reginald W. Fragmented Gods--The Poverty and Potential of Religion in
Canada. Toronto: Irwin, 1987.
Bloom, Allan. The Closing of the American Mind. New York: Simon & Schuster,
1987.
Denning, Peter J. and Metcalfe, Robert M. Beyond Calculation--The Next Fifty
Years of Computing. New York: Springer-Verlag, 1997.
Hawkin, David J. Christ and Modernity--Christian Self-Understanding in a Technological Age. Waterloo, Ontario: Wilfrid Laurier University Press, 1985.
Huber, Charles H. and Baruth, Leroy G. Ethical, Legal, and Professional Issues
in the Practice of Marriage and Family Therapy. Columbus, OH: Merrill, 1987.
Kaku, Michio. Visions--How Science Will Revolutionize the 21st Century. New York: Doubleday, 1997.
Klass, Morton and Hellman, Hal. The Kinds of Mankind--An Introduction to
Race and Racism. Philadelphia: Lippincott, 1971.
Kuhn, Thomas S. The Structure of Scientific Revolutions--Vol 2 No 2 in The
International Encyclopedia of Unified Science (Second Ed.). Chicago: The University
of Chicago Press, 1970.
McCuen, Richard H. and Wallace, James M. (ed.). Social Responsibilities in
Engineering and Science--A Guide for Selecting General Education Courses.
Englewood Cliffs, NJ: Simon & Schuster, 1987.
Naisbitt, John and Aburdene, Patricia. Re-inventing the Corporation. New York:
Warner Books, 1985.
Rybczynski, Witold. Taming the Tiger: The Struggle to Control Technology.
New York: Viking Press, 1983.
Schaeffer, Francis A. How Should We Then Live--The Rise and Decline of
Western Thought and Culture. Old Tappan, NJ: Revell, 1976.
Scorer, C.G. The Bible and Sex Ethics Today. London: Tyndale, 1966.
Sowa, John F. Knowledge Representation: Logical, Philosophical, and
Computational Foundations. New York: Brooks Cole, 2000.
Toffler, Alvin. Previews and Premises. New York: Morrow, 1983.
Trefil, James S. Space--Time--Infinity. New York: Random, 1985.
Wilson, Edward O. Consilience--The Unity of Knowledge. New York: Knopf,
1998.
Yourdon, Edward. Nations at Risk: The Impact of the Computer Revolution.
New York: Yourdon Press, 1986.
