The Agony Over Entropy

1. An Often Misunderstood Concept
   1.1 The Heat Death of the Universe
   1.2 Inexorable Disorder
   1.3 Flavors of Entropy
   1.4 Distinguishing Flavors of Entropy
       Comparing Entropy Concepts
2. Homes for Entropies: Network-Hierarchy Theory
   2.1 Focus and Scale in Network and Hierarchy Theories
   2.2 Process-Response and Control Systems
   2.3 Process-Response and Configurational Entropy
3. References for Appendix C

1. An Often Misunderstood Concept


While some readers will already have a strong understanding of the concept of entropy,
others may not, and many may harbor misconceptions regarding this concept. I may very
well be harboring misconceptions myself! For this reason, I have decided to address
some of the most serious misunderstandings surrounding the concept of entropy in this
appendix. It will serve as orientation to those unfamiliar with the concept, as a corrective
to those with misconceptions about the concept, and also as a chance to display my own
ignorance, if I myself have a flawed understanding of this concept. If I am in error, I
hope that this will be pointed out to me by someone more knowledgeable, that I might
further refine my understanding of this important physical and ecological concept.

1.1 The Heat Death of the Universe


At first glance, entropy seems like a very basic concept. It is part of the second law of
thermodynamics, which simply describes the tendency for heat to reach an even
distribution. A hot cup of coffee in a cool room will tend to cool to the temperature of the
room. The reverse never happens. Once that heat energy is ‘out’ of the cup and
dispersed around the room, it never gathers itself back up to spontaneously flow back into
the cup. In other words, room-temperature coffee in a cup will not somehow draw heat
from the room-temperature surroundings until it is hot again. (Since all forms of energy
are interchangeable, the laws of thermodynamics apply to all forms of energy).
Surprisingly enough, this simple fact illustrates a physical law with some of the farthest-
reaching implications for many branches of science.

When the coffee cup ‘loses’ heat, the heat is not lost in the sense of vanishing or
disappearing. Energy can neither be created nor destroyed, only transformed (this is the
first law of thermodynamics). So rather than disappearing, the heat that has been
dispersed throughout the room is transformed into a diffuse, dissipated, low-energy
form that cannot easily be directed at something to do work. This form is sometimes
called anergy, which is diffuse heat at the temperature of the environment. The heat has

been dissipated as much as possible within the boundaries of this coffee+room system. If
it is cold outside and you open the doors and windows, the heat will dissipate itself
further, flowing as always from a place of higher temperature to a place of lower
temperature. That kind of a heat difference or gradient across a boundary between a
warmer and a cooler system is needed in order for heat to flow.
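
For readers who like to see the bookkeeping, here is a minimal sketch in Python of the entropy change when a parcel of heat leaves the hot coffee and enters the cooler room. The temperatures and the heat quantity are illustrative assumptions of mine, not figures from the text.

```python
# Entropy bookkeeping for a small parcel of heat Q flowing from hot coffee
# into a cooler room. Temperatures and Q are illustrative assumptions.
# For a small Q we treat both temperatures as approximately constant.

T_coffee = 350.0   # K, hot coffee (about 77 C)
T_room   = 293.0   # K, room temperature (about 20 C)
Q        = 100.0   # J, heat leaving the cup

dS_coffee = -Q / T_coffee           # the coffee loses entropy
dS_room   =  Q / T_room             # the room gains more entropy than the coffee lost
dS_total  = dS_coffee + dS_room     # net change is positive, as the second law requires

print(f"coffee: {dS_coffee:+.3f} J/K, room: {dS_room:+.3f} J/K, total: {dS_total:+.3f} J/K")
```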

However, if you do not open the doors and windows, the dispersed heat in the room can
still do work. If you take out a frozen dinner to defrost on the countertop, that dinner will
be the region of lower temperature that the heat from the room flows into. So energy that
is spent and dispersed from one system can still be available for useful work by another
system. The second system just has to provide a sink for that energy, a pathway for the
energy to reduce itself from a less dispersed, higher-energy state to a more dispersed,
more ‘anergic’ lower-energy state. The energy will flow down that pathway, and it can be
dissipated as it does work on the target system.

As the energy gets more and more degraded and dissipated, however, it becomes harder
to direct down a specific pathway, or better, it becomes harder to set up a situation where
there is a big, clean, sharp difference between the energy source and the energy sink. The
difference in energy between the source system and the target system is called a
thermodynamic potential. When the energy source is concentrated (like the sun), and the
thermodynamic potential is large (like the difference between the heat of the sun and the
cold of space), then heat will flow very decisively in one direction, and this energy can be
harnessed and transformed by systems that are positioned inside that flow. The part of
the heat that is useful and available to do work is called exergy. The sun is the ultimate
source of most of our exergy on Earth. By contrast, energy that has become totally
dissipated and unable to do work in any system is called entropy. This is a fully-
dissipated state, not just relatively dissipated within the confines of a system, which we
earlier called ‘anergy’ (Odum, 1994).

Energy flows in nature because it continually seeks more and more entropic (lower-
energy) states. One might say that what gravity does for matter, entropy does for energy:
always pulling it ‘down’ and giving it a direction of movement.

This law of dissipation is the second law of thermodynamics, which states that entropy
always increases in a closed system. Assuming that the universe is a closed system
(which it is not in all cosmologies), the total amount of entropy in the universe increases
with every physical interaction. This happens because with every interaction, there is
some random motion, i.e. some heat that dissipates entropically each time. Every process
loses energy, so if you were to chart the energy input for a system against its energy
output in terms of the work and materials it externalizes, the output would always be a bit
less than the input. Energy is lost as heat at each step in the system’s transformative
processes. This makes a perpetual motion machine impossible. In a chain of physical
reactions, a pulse of usable energy (exergy) – transferred from node to node along the
reactive pathway – will eventually dwindle to nothing (anergy and entropy). All of the
energy in the pulse will have either been used to do work or dissipated as heat. No
system can power itself for long. It will always need additional energy from the outside.
Usable energy always runs out.

As mentioned, entropy never decreases in a closed, isolated system. We should clarify
what is meant by a system in this context. A thermodynamic system is matter-energy that
is separated from its environment by a boundary in space and time. The system could be
a car engine, or it could be a mass of warm air. Systems have energy. Energy links the
system to its environment in two ways, via heat or via work. Work is the capacity to
make something move, i.e. to apply force across a distance to move an object. Heat
occurs when two objects of different temperatures are brought together. Heat is the
energy that gets transferred from the high temperature object to the low temperature
object. Once the temperature of the system has lowered, it will take work or thermal
contact with a higher temperature source to raise its temperature again. Once heat is
dispersed, it cannot be re-gathered except by spending more energy than one would gain
from re-gathering it. Similarly, once energy has been used to do work, it is no longer in a
form that can be used to do that kind of work. Once energy has been dissipated
as anergy or entropy, it cannot be un-dissipated to form exergy again, unless work is
expended to gather it up and direct it, and that work will create more entropy than existed
previously. Entropy always increases in a closed system.

A closed system, as the term is used here, is one that receives no energy from external sources (in stricter usage, an isolated system). In a closed system,
every physical interaction produces more entropy, until there are no more differences
between high or low energy areas in the system. Energy concentrations will even out,
producing a uniformly entropic thermal equilibrium. This unstoppable, inexorable
increase of entropy has given rise to prematurely pessimistic assessments that the
universe will one day run out of usable, concentrated energy sources and all turn into one
thin uniform gas at the same low temperature – the so called ‘heat death’ of the universe.
(This is possible, but so are other scenarios, and our species would be extinct by then
anyway!) This image of thermal decay is one that some pessimists relish, while it makes
many optimists recoil. It is a touchpoint in the emotional struggle over entropy, and one
we will address later in this section.

1.2 Inexorable Disorder


Turning away from the heat death scenario, there is another doomsday scenario that dogs
our understanding of entropy, this one grounded in an alternate description of the second
law drawn from statistical mechanics. Because it comes from statistical mechanics, this second
concept of entropy has nothing specifically to do with heat or energy. This is an
important point, which is often ignored and becomes the source of enormous confusion
over the concept of entropy (Klyce, 2005). We shall examine this conceptual confusion
shortly, but for now, we shall focus on the affective distractor of the doomsday scenario,
described not as heat death here, but rather as the ruthless and inexorable increase of
disorder in all systems.

The statistical interpretation of entropy is based on a ‘whole-parts’ kind of distinction.
The ‘whole’ term is the macrostate of a system, and the ‘parts’ terms are the microstates
of all the system components that come together to make the macrostate happen. The
macrostate can be seen as the measure of the system as a whole at its boundary, whereas
the microstates are the possible internal arrangements that might produce that overall
macrostate. Say that you are in an auditorium and a smoke machine pumps a cloud of
smoke into the middle of the room. The macrostate of the cloud could be given in terms
of its volume, shape, position, size, direction of movement, speed, acceleration and so on.
In order for the macrostate to have these specific values, each individual particle of
smoke has to be within a very narrow range of its own possible values, especially for
position in the room, speed and direction. Given that an individual particle might drift
just about anywhere in the room, for the cloud to have the attributes it does, each
individual particle has to be within a narrow range of positions, directions and speeds.

It is very unlikely that the particles will stay within those value ranges, given that smoke
particles continually collide with each other and with air particles. Those collisions will
have an element of randomness to them that makes it unlikely that the cloud will
maintain its shape, or other macrostate values. It is much more likely that the cloud will
thin out, with particles scattering themselves out evenly across the auditorium, finally to
reach an equilibrium where the incidence of collisions is about the same in every
direction – as a thin fog. That is a more likely macrostate than the cloud formation,
particularly because each individual particle could be just about anywhere in the room
moving within a broad range of directions, and the macrostate values would not change
much. The thin fog would still be a thin fog. The more highly ordered cloud-formation
macrostate was only compatible with a narrow range of microstates. The less orderly
diffuse-fog macrostate is compatible with a vast number of microstates. In other words,
the less orderly state is more probable. Random particle interactions are thus more likely
to ‘find’ this more probable macrostate. The reason systems drift towards more
disorderly macrostates is simply that those states are more likely than very orderly
macrostates. The random movement of air and smoke particles could, entirely by
accident, bump themselves into a cloud in the shape of a bust of Elvis. No law of nature
forbids this. However, given the narrow range of microstates that are compatible with
this macrostate, it is vanishingly unlikely that this would ever happen. A thin fog is a
much more likely macrostate.
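
To make the counting argument concrete, here is a minimal sketch in Python. It uses a crude lattice-gas simplification, with toy numbers of particles and cells that are my own assumptions rather than anything from the text: count the arrangements W compatible with a compact cloud versus a room-filling fog, and compare ln W, which is entropy in units of Boltzmann's constant.

```python
# Count microstates W compatible with two macrostates of the same particles:
# a compact cloud confined to a few cells, and a fog spread over the whole room.
# Toy assumption: at most one particle per cell, particles indistinguishable.
from math import comb, log

N_PARTICLES = 100        # identical smoke particles (toy value)
CELLS_CLOUD = 200        # cells making up a compact cloud in the middle of the room
CELLS_ROOM  = 10_000     # cells covering the whole auditorium

W_cloud = comb(CELLS_CLOUD, N_PARTICLES)   # ways to place the particles in the cloud
W_fog   = comb(CELLS_ROOM, N_PARTICLES)    # ways to place them anywhere in the room

print(f"ln W, compact cloud: {log(W_cloud):10.1f}")   # far fewer compatible microstates
print(f"ln W, thin fog:      {log(W_fog):10.1f}")     # vastly more: the probable macrostate
```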

Entropy thus increases due to the fact that disorder is a more probable result than order
during random or stochastic interactions. The second law of thermodynamics can be
rewritten to say that disorder always increases in a closed system. Furthermore, since
some energy is lost to random motion with every physical interaction, this randomness
accumulates in the cosmos until all order vanishes and the randomness is all we have left.
The specter of death arises once more in the guise of the inexorable universal increase of
disorder. The optimist recoils. The cynic smirks. There is a dramatic pause in the
emotional debate between them. But against this onslaught of disorder, Life comes to the
rescue. Living systems emerge from the darkness, their torches held aloft, creating order
in the face of growing disorder, bravely defying the odds, and even defying the law of
entropy itself!

The perception that living systems somehow ‘undo’ or ‘combat’ entropy is erroneous and
misguided, but quite widespread. On this view, non-living systems may run down into
structureless entropy, but the evidence of our eyes shows us the intricate complexity of
living systems, which seems to increase rather than decrease in time. Thus, the thinking
goes, living systems must negate entropy. Concepts such as ‘negative entropy’ or
‘negentropy’, introduced by Erwin Schrödinger in his popular-science book What Is
Life?, are seized upon to explain the order-creating effect of living organisms. Life
brings hope to the world through the generation of negentropy! Entropy and negentropy
are torn from their scientific context, and cast as characters in a Manichean cultural
battle, pitting order against disorder, love against war, unity against strife and health
against decay. The following quote exemplifies this essentially mythico-religious cultural
drama, playing itself out through scientific terminology:

Two great processes operate in all systems, be they cells, humans or what
have you: 1) entropy, in which energy is decreasing and disorder
increasing, and 2) negative entropy or negentropy, in which energy is
increasing and disorder decreasing…

Negative entropy is the critical choice for humanity at this time in history.
This is the path of conscious evolution: do we unconsciously follow the
well-worn groove of cultural, social and religious entropy, or…

…do we marshal the forces of Life – Life, the great force of negentropy in
the universe – and claim our freedom from the mass mind that pulls us
down into degeneration.

... In order to optimize the universal life force in our own lives… We have
to wholeheartedly embrace a negentropic life… (to) reverse the forces of
entropy in our own bodies and households, at a time when other
planetary forces are accelerating the breakdown of all systems that have
vulnerable energy supplies.

http://www.tachyon-energy-products.com/main/entropy_negentropy.htm
Entropy or the Negative Entropy of Tachyon? (Negentropy) date accessed:
Dec 10/05 12:50am © 2001-2005 Gene Latimer

This example epitomizes the affective reaction that has distorted many people’s
understanding of entropy and thermodynamics. People are here challenged to stand up
‘for’ Life/Energy/Negentropy and ‘against’ Entropy. This reflects a profound
misunderstanding of the role of entropy in nature. Without entropy, there could be no
evolution, no life, no change, no way to sustain biological order. The image of entropy as
a chaotic monster against which the forces of order must struggle needs to be replaced by
a more organic metaphor; one that supports an affective stance that resonates more
meaningfully with the role entropy actually plays in nature.

A river system might furnish such a metaphor. A river system has a concentration of
water at its head, and it releases water into the sea at its mouth. Entropy is a lot like the
receptivity of the ocean and the environment in accommodating that release. The mouth
of the river continually dissipates water to the sea. Water is also lost through seepage into
the ground and evaporation into the air as it travels through the system. Without this
receptivity and dissipation into air, earth and sea, there is no river. Dissipation makes
room for water from the main channel to flow towards the mouth, which makes room for
water from the head of the system to flow into the main channel. Only the constant
dispersal of water into the endless sea makes the flow possible, and we want that flow.
We can place our waterwheels in that flow, to power our mills – or turbines to power our
cities. No dispersal, no flow, no available energy to be put to work. Energy would be
eternally locked up as potential energy, like in a perfectly sealed lake or reservoir. To get
it to power our lives, that energy needs some way to be dispersed – it needs some room to
move and a direction in which to flow. The growth of entropy provides the room that
makes flow happen. Entropy is not some primordial chaos threatening to overwhelm us.
It is the receptivity of the universe that draws energy down from its reservoirs and
through the channels whose flows drive our world. Entropy is the yin side of flow. It
enables life as space enables light.

Living systems themselves are just highly specialized reduction channels. (“The most
general objects of selection are not individual or genes or populations, but informed
patterns of thermodynamic flow” (Wicken, 1988)) Energies travel from source to sink,
because lower-energy states are always sought. We want that. Every living system
wants to be the sink for some select group of energy flows – flows that preserve rather
than disrupt its structure. We want to reduce exergy to anergy, by extracting work from a
high-exergy, low-entropy resource. Consider the flow of solar energy through
ecosystems. Energy flows out from the sun towards the sink of deep space, and along the
way it encounters our planet. Once here, it looks for a path towards the entropic state. It
looks for ways to disperse itself further, seeking reduction pathways, channels or sinks.
Living systems eagerly offer such channels for energy, their metabolic waterwheels
primed and ready to extract work from the flow. That work creates the living order that
we see, and that we are. As energy flows from one form of life to another, one reduction
pathway to another, the biosphere becomes one gigantic gradient along which energy can
direct itself towards its final dissipation. All the way down the line, living systems
accomplish their dissipation in part by doing the work that allows them to grow, maintain
and reproduce their own improbable structures.

We all position ourselves at strategic points along the energy cascade. For example, high
energy photons from the sun may dissipate some of their exergy by working upon
chlorophyll molecules in photosynthetic plants. Plants can then sink some of their energy
into the task of producing high-exergy fruits. Energy seeks reduction pathways, and the
energy in the fruit is no exception. There will thus be a race or competition among
reduction pathways to get to the energy source first. If we get to it before anything else
does, then that exergy will flow along the reduction pathway we offer it through our own
digestive systems, where we will extract energy and dissipate it through work to grow
and maintain our bodies, and conduct our daily activities. The chemistry of life – all
chemistry for that matter – is only possible because chemicals seek lower-energy
arrangements, because energy seeks maximal dispersal. In the words of PW Atkins, the
second law directs us to appreciate “the chaotic dispersal of energy as the purposeless
motivation of change.” Entropy makes life possible.

The heat death of the universe is not a fruitful image of the future, and it misrepresents
the role of entropy in producing order. The fact is that projections into the far
cosmological future are very hypothetical, and need concern no-one but cosmologists and
physics enthusiasts. The large-scale structure of the universe is not completely
understood, and much of what is understood about it has been hard to reconcile with
knowledge about the small-scale structure of the universe. The heat death outcome is not
certain, and is certainly nothing that concerns us.

The second law does not primarily teach us that we are doomed. It teaches us about the
finitude and value of high-energy resources, the interdependence of all life forms, the
endless work of living and the need to economize and conserve. It is a law about
limitations, but these limits are ecological and economic, not cosmological. Dissipation
drives life, and it needs to be well directed. People who learn of the second law and recoil
at the seemingly pessimistic implications are wide of the mark. If they react against
entropy as a concept, they are unwittingly reacting against the fragility of life, our
dependence upon limited resources, and the need to economize, conserve and sustain
those resources.

Entropy implies earthly custodianship, not a cosmic war between chaos and order. We
can free our emotions from the Manichean drama of order-versus-chaos, and appreciate
how the reduction of energy allows us to extract work from the flow that will finally
release itself into the ocean of entropy. Both the thermal bogey of heat death and the
statistical bogey of growing disorder can be replaced by a more accurate image of
entropy as the enabler of the flow that feeds life.

Having discussed the affective issues surrounding these two interpretations of entropy, it
is time to return to a point we touched upon earlier: thermal entropy and the entropy of
statistical mechanics are not the same. This point is now worth emphasizing. The thermal
interpretation is an empirical description of how heat behaves, tending toward a maximally
and uniformly dissipated state. The statistical interpretation makes no specific reference to heat.
The smoke cloud in the auditorium used in the statistical example could easily have been
at ambient room temperature throughout its dissipation. In that case, the room would
have remained in thermal equilibrium no matter what the smoke particles did. ‘Entropy’
as disorder would have increased dramatically without changing thermal entropy very
much. These are therefore not equivalent concepts. They are two of, I would say, at least
three distinct categories of entropy concepts. Blurring these conceptual distinctions
creates great confusion over the meaning of entropy.

It is a fairly simple matter to demonstrate that the different categories of entropy are not
equivalent. It is much harder to describe how the different entropies relate to each other.
Despite their differences, the entropies (and the terms for order paired with them) seem to
interact in various ways. Characterizing their relationship is quite difficult. In fact, I
believe that clarity will not be achieved on this topic on definitional grounds alone. I
believe a clear cognitive model of the interactions between forms of entropy is needed, to
at least begin to make sense of the relationships between different terms for order. This
cognitive model already exists. It has emerged in theoretical ecology, in the form of
network-hierarchy theory. However, prior to discussing it, some definitional clarification
remains necessary, and that is the task I undertake below.

1.3 Flavors of Entropy


Rudolf Clausius was the first to describe and calculate entropy in his formulation of
the second law of thermodynamics. Ludwig Boltzmann later developed a description of entropy
based on statistical mechanics. As mentioned earlier, that second description was not
specifically anchored in thermodynamics, and it has supported the importation of the
concept of entropy into other domains.

Claude Shannon famously used the concept and the mathematics of entropy in the
context of information and communications theory. ‘Shannon entropy’ represents the
uncertainty that a recipient of a message has before receiving a piece of information.
Entropy in this case measures uncertainty. There have been several mathematical
extensions of and departures from Shannon’s entropy concepts and calculations (e.g.
Renyi entropy and Havrda-Charvat-Tsallis entropy). I will not be treating these entropies
in this work, and will use Shannon as my example of what might be called ‘information
entropy’ (uncertainty).
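
As a minimal illustration (the probability tables are simple assumptions of mine, not examples from Shannon's own text), Shannon entropy can be computed directly from a probability distribution over symbols, giving the receiver's average uncertainty in bits per symbol:

```python
# Shannon entropy H = -sum(p * log2 p): the receiver's average uncertainty,
# in bits per symbol, before each symbol arrives.
from math import log2

def shannon_entropy(probabilities):
    return -sum(p * log2(p) for p in probabilities if p > 0)

fair_coin   = [0.5, 0.5]      # maximally unpredictable binary source
biased_coin = [0.99, 0.01]    # nearly predictable source

print(shannon_entropy(fair_coin))     # 1.0 bit per symbol
print(shannon_entropy(biased_coin))   # about 0.08 bits per symbol
```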

Beyond thermodynamics and information theory, there are also some ‘entropies’ used in
what might be called the material sciences, i.e. condensed matter physics, materials and
structural engineering, chemistry, biochemistry, molecular biology, oncology, seismology,
geochemistry and other disciplines concerned with molecular structure and organization.
Researchers in these fields are interested in orderly and disorderly material states, and the
concept of entropy is used to describe degrees of disorder. Two entropies stand out as
important: configurational/positional entropy, and structural entropy. Configurational entropy
describes the overall disorder in a material system, and structural entropy deals with the
distribution and degree of discontinuity in a system. Both are described more fully
below.

All of these entropies make some use of the mathematical formula for entropy introduced
into physics by Boltzmann. This mathematical continuity across domains can create the
impression that all measures of entropy measure the same ‘thing’. Entropy isn’t a thing,
however, any more than height is a thing. Entropy is a measure, and the equation used to
determine that measure was initially developed, not by Boltzmann, but by the 18th-century
mathematician Abraham de Moivre, in his pioneering investigations of probability
distributions in games of chance (Wicken, 1987; Wiley & Brooks, 1988). It might be
argued that the only feature these ‘entropies’ have in common is a way of computing the
combinatorial likelihood of a state using a probability distribution. There are certainly
indications that apart from this feature, the various entropies exhibit some sharp
differences. Below, I review some of the similarities and differences between different
types of entropy, divided for analysis into three categories based on three contexts of use:
thermodynamics, information theory and material science. Each is briefly reviewed
below, then compared and contrasted to highlight their categorical differences.

Thermodynamic Entropy
In thermodynamics, entropy refers to the tendency for heat energy to disperse towards an
even distribution. This refers to the distribution of heat specifically, not disorder in
general. A heat-specific example might make this clear. If you microwave a frozen
burrito, and it comes out of the oven unevenly heated, with hot spots and cold spots
throughout it, and you put it out in a room, that is a thermally ordered state, out of
thermal equilibrium. There are thermodynamic potentials between the hotspots and cold
spots in the burrito, and between the burrito and the plate, countertop and air in the room.
Heat will tend to flow so as to minimize those potentials, evening out the temperatures of
all areas, until a uniform thermal equilibrium is reached in the burrito-room system. The
system will have gone from a thermally ordered state (with various regions and thermal
boundaries delineating them) to a thermally disordered state (no more thermal
distinctions).

Note that if this system remains closed, with no extra energy being pumped into it, there
is no way to get the lost heat back into the burrito. The growth of entropy is irreversible.
However, by directing energy towards one part of this system (by reheating the burrito),
we can keep the system out of thermal equilibrium (keep one part hotter than another
part). If the system is open to new energy coming in, the system can be kept far from
equilibrium.

Information Theoretic Entropy


In information theory, entropy is a state of uncertainty or indeterminacy that characterizes
a receiver or recipient, regarding some message in a communications channel of some
kind. There are several different ways in which this concept is used. In the signal
detection aspects of information theory, the receiver’s uncertainty is based upon the
amount of background noise in the channel. To be recognized as a signal, the signal has
to diverge sharply from this noise. It has to have very different properties than one would
expect from random bits of noise. The greater this difference, the more detectable the
signal.

In the message reception aspects of information theory, entropy refers to the recipient’s
uncertainty about which symbol will arrive next, and whether or not it faithfully reflects
what the sender sent. This entropy is reduced not only by clear signals, but also by the
receiver’s knowledge of what arrangements of symbols (or ‘spellings’) are legal or
grammatical (or at least commonly occurring close together). Less common (but still
legal) clusterings provide more information than more common ones, because they
reduce uncertainty more. If you are receiving an English-language message one letter at
a time, and you get the string “th-“, that tells you a lot less about what the sender wants to
say than the string ‘zy-‘ would. Many English words start with ‘th-‘, but the number of
words that start with ‘zy-‘ (e.g. zygote) is small.
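
A minimal sketch of this point, using made-up prefix probabilities (my own rough assumptions, not corpus figures): the information carried by an observed event is its surprisal, -log2(p), so the rare prefix is far more informative than the common one.

```python
# Surprisal -log2(p): how much uncertainty a single observation removes.
# The prefix probabilities below are illustrative guesses, not measured values.
from math import log2

prefix_probability = {"th": 0.15, "zy": 0.0001}   # assumed chance a word starts this way

for prefix, p in prefix_probability.items():
    print(f"'{prefix}-': surprisal = {-log2(p):.1f} bits")

# 'th-': roughly 2.7 bits; 'zy-': roughly 13.3 bits, because seeing 'zy-' rules
# out far more possibilities about what the sender intends.
```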

This is important because codes can be made more efficient by giving more careful
representation to higher-information elements, and less careful representation to lower-
information elements that can be easily reconstructed from context. Thus, the printed
messages “Please send help”, “pls snd hlp” and “s.o.s.” all carry the same information.
Inessential elements can be dropped. However, the inessential or ‘redundant’ elements
both reduce the processing effort on the receiving end, and protect the message from
corruption during transmission. For example, losing three letters from “Please send help”
gives us the very understandable “ leas sen help”. Losing three letters from “pls snd hlp”
is a more serious problem, leaving us with “ls sn hl”, for example. Losing three letters
from “s.o.s.” leaves us with “ ” .

The concepts of redundancy and information have different relationships with each other
in the signal-detection and coding/compression aspects of information theory, which I
shall not elaborate on here. Instead, I want to focus on the concept of entropy as more
probable states, versus information as less probable ones. Notice that very much unlike
thermodynamic entropy, information-theoretic entropy is larger before the system does its
work, becoming smaller or even vanishing afterwards. This is in clear contrast with
thermodynamics, where entropy never decreases in a closed system. Also, there is no
stipulation in information theory that all communications increase the overall amount of
‘Shannon entropy’ in the universe. The math for computing the combinatorial likelihood
of a state using a probability distribution may be the same, but it is not clear that the
underlying concepts are related by anything more than that. This is important to
emphasize, because some people who respond emotionally ‘against’ entropy claim that
the concept of information can be used as an ‘antidote’ against thermodynamic entropy.
While there are interactions between these concepts, they are not precisely the same
concepts, and should not be mistaken as such.

Material Sciences
Two concepts of entropy stand out as important in the material sciences: configurational
entropy and structural entropy. Configurational entropy is a straightforward measure of
the disorder of a system, given in terms of the arrangement of its molecules. When some
element is in its solid phase, a mole of that element will occupy a small space with fixed
boundaries, and its molecules will be tightly bound together by electromagnetic forces.
Each molecule in this arrangement will only be able to access a limited number of
configurations, so the solid state can be described as a low entropy state. In the liquid
phase, molecules are spread out. They are less constrained by intermolecular forces and
they can move past one another more freely, thus accessing more configurations. This
generates more disorder and more configurational entropy. In a gas, constraints are very
weak and the number of possible molecular configurations is huge, so configurational entropy
is high. Configurational entropy is thus related to the energy within a system that is
associated with the action of the attractive and repulsive forces that organize it.
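
A minimal sketch of the trend from solid-like to gas-like, using a toy lattice count (all numbers are my own illustrative assumptions): the more sites the molecules can occupy, the more configurations W are accessible, and the larger ln W, i.e. the configurational entropy in units of Boltzmann's constant.

```python
# Configurational entropy as a count of accessible arrangements: W = C(M, N)
# ways to place N indistinguishable molecules over M lattice sites (at most one
# per site). Toy numbers chosen only to show the solid -> liquid -> gas trend.
from math import comb, log

N = 50   # molecules
for phase, M in [("solid-like", 60), ("liquid-like", 200), ("gas-like", 5000)]:
    W = comb(M, N)
    print(f"{phase:12s} sites = {M:5d}   ln W = {log(W):8.1f}")
```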

Structural entropy is a measure of regularity versus local discontinuity in materials.
Basic structural units are determined, and neighboring units are compared to see how
similar they are. Materials whose units differ greatly have high structural entropy. If you
fold a piece of stiff cardboard in half, there will be more structural entropy in and around
the fold than in the regions that have remained flat. The fold is the region of
discontinuity. Like configurational entropy, structural entropy measures disorderliness in
molecular arrangements of things, rather than the distribution or flow of heat, or the
uncertainty of a receiver in communications. Structural entropy is a ‘thermally agnostic’
concept.

1.4 Distinguishing Flavors of Entropy


These three categories of entropy are different from each other in several ways. First of
all, their units of measurement are quite different. Comparing the units of measurement
in thermodynamics and information theory, for example, we see thermodynamic
measurements of heat distributions calculated in joules per kelvin. In information
theory, the uncertainty of a message receiver is the focus, expressed in units
of bits per symbol of information. It is not instantly obvious how one set of
measurements might map onto the other (Klyce, 2005). Other entropy calculations
import thermal, statistical or information measurement units, and apply them to target
systems by analogy. One might thus measure the ‘temperature’ of urban pedestrian traffic
by noting how disorganized it becomes at different times of day, perhaps identifying
critical ‘phase transitions’ where the traffic goes from a ‘simmer’ to a ‘boil’. This
analogical use of thermodynamic concepts can be useful and illuminating, but that
does not make the object of analysis identical across both models.

Each category of entropy also has a different orientation in time. Thermodynamic
entropy describes a flow into the future. Thermal entropy increases with time, inexorably
and irreversibly, marking an absolute difference between past and future. The second law
of thermodynamics is the only law of nature that requires a temporal progression from the
past into the future. Information entropy, on the other hand, exists in the past or present,
where it is vanquished by information, ultimately to be left behind in the past. Current
information entropy can be reduced or eliminated in the future, very much unlike thermal
entropy. Entropy in the form of noise does increase in communication channels as they
get longer, but successful communication will still relegate that entropy to history.
Configurational and structural entropy exist in the present, in the degrees of freedom and
patterns of regularity that constitute the orderliness of the system. These material
entropies do tend to grow with time, but not in the inexorable, moment-by-moment,
gradient/potential defining way that thermodynamic entropy does. A system with very
low material entropy can be very stable, and work can be applied to reverse material
entropy and recreate order without necessarily increasing the overall amount of material
disorder in the universe (beyond the thermal contribution to growing disorder).

Besides units of measurement and time orientations, the ontological status of the
entropies is different. Thermal entropy is a real, empirically detectable state of affairs.
It does not merely describe a probabilistic ground for comparison, or a logical space of
combinatorial possibilities, but rather an actual stochastic fluctuation of microstates
within the macrostate of a thermal system – real heat. Moreover, this thermal entropy
never vanishes. It grows irreversibly with every physical interaction. Information entropy
is not actually existent like that. It only exists as a spread of unresolved communicative
possibilities, prior to the successful decoding and decompression of a message. After
successful communication, information entropy vanishes. Information entropy has neither
the property of inexorable increase nor that of irreversibility (Wicken, 1987). Furthermore, only
thermal energy is directly governed by a conservation law – the first law of
thermodynamics. Energy can be neither created nor destroyed, only transformed. Information
and configurational order are not the targets of conservation laws in this manner.

Configurational and structural entropies are themselves empirically measurable, but only
once the units and parameters under investigation have been defined. They are not
universally given in terms of quantities like joules per kelvin for any system. Rather,
concepts like temperature are assigned to analogous properties of the system under
analysis. Material entropies describe the arrangement of molecules that may all be at the
same temperature. Stochastic processes tend to disrupt the order of a material system, but
it is not the case that every configurational interaction necessarily and irreversibly
increases the overall configurational entropy of the universe. This is an entropy without a
second law. Material structures bind energy, and they can resist perturbations within
certain parameters. Thermal or stochastic disturbances of structures do not necessarily, as
a matter of law, override those structural bonding parameters. Material structures can
dissipate thermal perturbation while preserving configurational and structural order.
Their entropies are thus more descriptive of properties of regularity that gain their
importance from the use made of them by engineers and scientists.

Comparing Entropy Concepts


Given all of the differences, there have been many suggestions for new terminologies that
make the distinctions between all of these ‘entropies’ clear. One proposed distinction
would differentiate between ‘thermal’ entropy and ‘logical’ entropy. Both information
and configurational entropies would fall under the ‘logical’ heading, since they are based
on probability distributions rather than energy distributions (Klyce, 2005). Many favor
reserving the term entropy for thermodynamic entropy, and using the term ‘uncertainty’
to refer to its counterpart in information theory (Schneider, 1997). Jeffrey Wicken
suggests differentiating thermal entropy, where microstates are fundamentally
indeterminable, from the other forms of entropy, which he would call ‘complexity’, in
reference to the importance of combinatorial possibilities for these other entropies
(Wicken, 1987; Wicken, 1988). There are clearly differences here that need to be
marked.

Comparison of Entropy Concepts

                                    Thermal    Informational            Configurational
Macrostate likelihood given
by microstates                      Yes        Yes                      Yes
Actual or logical                   Actual     Logical                  Logical
Time of increase                    Future     Past                     Present
Never decreases in a
closed system                       Yes        Noise grows in the       Stable bonds can form
                                               channel                  as energy fades
Taxes every interaction             Yes        Noise grows in the
                                               channel
Irreversible                        Yes
Governed by a
conservation law                    Yes

The case for differentiating the entropies appears to be strong. Terms for these concepts
might include ‘entropy’ for thermodynamic entropy, ‘uncertainty’ for information theory,
‘disorder’ for configurational entropy and ‘discontinuity’ for structural entropy. This
would disambiguate the concepts. However, such a sharp differentiation might actually
obscure some important commonalities and interactions across these different entropic
concepts. All of these types of disorder impact each other. For example:

The Chemistry of Life


The chemistry of life exploits an inverse relationship between thermal entropy and
configurational complexity or organization. This is the point Schrödinger made in What
Is Life?. This inverse relationship characterizes biological, proto-biological and chemical
systems. These systems can all increase their order or configurational complexity by
finding new equilibria (stable configurations at higher energy levels) that allow them to
dissipate thermal energy and thus generate thermal entropy. By increasing thermal
entropy, chemical systems can decrease their material entropy, provided there is a higher-
energy equilibrium available to them.
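
This trade-off can be restated compactly in standard thermodynamic notation (a textbook relation, not a formula taken from this appendix): a system may lower its own entropy so long as it exports enough heat to raise the entropy of its surroundings by at least as much.

```latex
% Standard entropy bookkeeping for a process at constant temperature T,
% with the system releasing heat -\Delta H to its surroundings:
\begin{align*}
\Delta S_{\mathrm{universe}} &= \Delta S_{\mathrm{system}} + \Delta S_{\mathrm{surroundings}} \;\ge\; 0,\\
\Delta S_{\mathrm{surroundings}} &= -\,\frac{\Delta H_{\mathrm{system}}}{T},\\
\Rightarrow\quad \Delta S_{\mathrm{system}} &\ge \frac{\Delta H_{\mathrm{system}}}{T}.
\end{align*}
% So \Delta S_system can be negative (the system becomes more ordered) provided
% \Delta H_system is sufficiently negative: enough heat is released to the surroundings.
```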

Chemical systems that become self-producing and self-reproducing (alive) actively seek
out gradients of energy where they can play a reducing role. They maintain their material
organization by reducing energy from low-entropy to higher-entropy forms. Thermal and
material entropies are thus counter-linked in the production of chemical and biological
organization. Renaming material entropies as complexity does not change the nature of
this relationship with thermal entropy.

The Determinacy of Information


Information cannot be transmitted without some kind of matter-energy carrier, and the
entropic state of that carrier is not neutral to the task of informing. In order to decrease
information entropy in a system, one also needs to decrease thermal entropy. This
coupled relationship with thermal entropy characterizes all aspects of information
systems. For example, signal detection in information theory requires a signal that stands
out against or contrasts with background signals and noise. This kind of bounded energy
system, with distinct boundaries and a sharp gradient between it and the surroundings, is
necessarily in a high-energy, low-entropy state. The signal must then propagate along an
energy gradient whose path includes the communication channel. The signal
source must dissipate free energy relative to the channel to do the work of propagation.

Thermal entropy continues to play a role in transmission by contributing to noise in the
channel. The thermal properties of systems involved in information carriage influence
the capacity of information systems to reduce uncertainty/entropy. Since all information
is carried by real matter-energy systems, these are real points of contact between
information theory and thermodynamics, even though the two entropies involved are
markedly different.

Hawking Radiation
Thermal and information entropies seem to interact in much more direct ways in
theoretical physics. Take the example of Hawking radiation. In the vicinity of a black
hole, if two photons in paired states are created by the decay of an atom just outside the
black hole’s event horizon, one photon might get pulled into the black hole. This leaves
the other particle on an outwards trajectory whose history is lost to us. Information
regarding the paired photon and the state of the atom that produced it will be lost, so the
trajectory of the photon we do see will seem random. Random motion is heat, so we see
the energy released by such interactions as a photon gas – real heat – radiating from the
black hole. Lost information produces real heat. An increase in information entropy
produces an increase in actual thermal entropy, under these special conditions.

Unification of the Entropies


At entropic extremes, the entropies seem to dovetail. Systems that reach very high levels
of thermodynamic entropy will necessarily have high material entropies (disorder) and
very high signaling entropy – all noise offering no message potential at all. Using the old
bogey of the heat death of the universe, if thermal entropy would become maximal across
the whole cosmos, it seems that all of the other kinds of entropy would have to be
maximal too. Thermal, material and information entropies, though different, seem to
depend on or impact each other in important ways. Arguing for categorical distinctions
between the entropies might very well overstate the case for their conceptual separation.
Some model of interaction may be preferable instead.

Next, I propose one possible model for understanding the interaction of thermal and
configurational energy and entropy. This interaction is at the heart of the negentropy
confusion, i.e. the idea that living systems accumulate order, and thus annul the growth of
entropy, in the very process of producing entropy. The tendency to conflate the two meanings of
‘entropy’ here can be diminished by describing a more robust model of the two
dimensions of order – thermal and configurational – and how they interact. The model is
related to ones I use to discuss and describe the structure of concern in the main body of
the paper. It is drawn from an area of theoretical ecology known as network-hierarchy
theory.

2. Homes for Entropies: Network-Hierarchy Theory


The linear model of causality used in experimental science is of limited use in ecological
studies. Experimental science investigates causal relations by controlling the systems
under investigation, leaving only one variable free. Applying this kind of control to an
ecological study would be self-defeating. It would get rid of precisely the
multidirectional web of ecological interactions one would be trying to study! Ecology
requires a richer conceptual framework for understanding causal relationships. This
framework can be used to describe the ways the different ‘entropies’ relate to each other.

The two frameworks we will examine are called network theory and hierarchy theory.
Each focuses on a different class of relationships that one might trace within an
ecosystem. Network theory takes a ‘follow the money’ approach to ecological
relationships, tracing flows of energy and materials across ecological communities.
Hierarchy theory focuses more upon the various conditions and constraints imposed
upwards on systems by their components, and downwards on them by their
environments. Rather than tracing a flow, hierarchy theory models multiple scales of
ongoing, concurrent, interrelated activity. These two perspectives on ecological causality
are compatible, and have been combined in the ecological literature (e.g. Burns, Patten,
& Higashi, 1991; Mikulecky, 1991; Jørgensen, 1992). The combined paradigm is
sometimes referred to as network-hierarchy theory.

Network theory and hierarchy theory are fairly easy to combine. Neither one excludes
the other in any way, and an ecological study could easily contain both a network analysis
and a hierarchy analysis of the relationships of a given system. It is somewhat more
challenging to merge network and hierarchy analysis into a single framework. This
unification has been achieved by some, however, including in a model of the interaction
between flows and structures in physical geography put forward by Chorley and Kennedy
(Chorley & Kennedy, 1971). Their model allows us to combine network and hierarchy
models, and to describe the interactions between thermal and configurational energy.

2.1 Focus and Scale in Network and Hierarchy Theories

The focus of an ecological study is known as the focal system. Relationships with other
systems in the overall ecosystem only make sense with reference to a focal system.

The network perspective represents the focal system as a bounded unit that receives
inputs, processes them and releases outputs (Miller, 1978; Miller, 1987; Tracy,
1989). These units have some stability in time, including the capacity to replicate, which
means that these systems are stable enough to enact ecological roles within larger
networks, such as food webs. Flows of energy and materials can be mapped from node to
node in an ecological network (e.g. from organism to organism in a biome or from
population to population in a community). Each node takes up input from its
environment, and outputs work, waste and byproducts back into the environment.
Organisms thus make resources available to each other, each reducing energy by some
degree, and leaving it in a form that another organism will be able to reduce to some
additional degree, releasing more materials as it does. The ‘food chain’ is perhaps the
most widely recognizable example of ecological network-type reasoning.

Systems are related through their input-output relationships, which can be mapped.
Energy circuits can be drawn to represent flows of energy through these systems, with
ecosystem components acting like components in electrical schematic diagrams, signal
flow graphs, process flow diagrams, metabolic networks or other such network
representations (Mikulecky, 1991; Odum, 1994). Besides tracing the flow of energy,
material cycles can be mapped as well, such as the carbon dioxide cycle (made
increasingly salient by the acceleration of global warming). Network analysis is the tool
most suitable for representing thermodynamic flows from node to node in an ecological
energy reduction chain. It can thus represent the thermal side of the thermal-
configurational interaction.
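
As a minimal sketch of this 'follow the energy' bookkeeping (the chain and the transfer efficiency are illustrative assumptions of mine, loosely echoing the textbook ten-percent rule, not values from the sources cited above), usable energy dwindles as it passes from node to node, with the remainder leaving each node as dissipated heat and waste:

```python
# Usable energy (exergy) dwindling along a simple ecological network.
# The chain and the 10% transfer efficiency are illustrative assumptions.

chain = ["producers", "herbivores", "carnivores", "top predators"]
efficiency = 0.10        # fraction of usable energy passed on to the next node
exergy = 1000.0          # arbitrary units entering the chain

for node in chain:
    print(f"{node:14s} receives {exergy:8.1f} units of usable energy")
    exergy *= efficiency  # the other 90% is dissipated as heat and waste at this node
```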

Network theory emphasizes sequential relationships, and a full network model of an
ecosystem would show a web of sequential movements of matter and energy, centered on
a focal system. However, sometimes a higher-level event can influence every point along
a sequence simultaneously, e.g. a major climatic event. Hierarchy theory grapples with
the fact that ecosystems are radically nested, with organisms within organisms within
populations within communities within ecosystems within the biosphere. Whichever
focal level you specify when analyzing an ecological system, there will be upwards
constraints on that system from lower levels, and downwards constraints on it from
higher levels.

For example, say that the focal level of analysis is the population level, and the role of a fish
population in keeping an insect population down is the specific focus of the study. The
fish population has the qualities it has in part because of qualities possessed by its
components – individual fish. Thus focal level activity, such as the role of the fish
population in keeping insect levels down, is impacted and enabled by lower-level
constraints – the attributes of individual fish (age, size, state of health, lifecycle needs,
foraging behavior, etc). These components (individual fish) may be vulnerable to a
fungal infection that attacks their skin cells and compromises their boundaries. A climate
change that raises the temperature of their waters a few degrees might create more
favorable conditions for the proliferation of that fungus. It may also create more
favorable conditions for insect proliferation. The climate change would thus be an upper-
level constraint, setting new limits on what is possible across the whole system. At the
focal level, it would boost the insect population directly, and it would also boost that
population by simultaneously exerting effects on the components of the fish population,
reducing the insects’ rate of predation. So multi-level causation becomes describable
using hierarchical concepts that enumerate both the lower-level and upper-level
constraints on system activity.

Hierarchy theory, more than network theory, requires us to explicitly consider scale when
selecting a focal system and the relationships that situate it. Scaling criteria can be
spatiotemporal, quantitative, or based on some other analytical dimension. A range of
values along the chosen dimensions have to be assigned to micro, meso (focal) and macro
scales of the study. Macro scale events typically occur over larger spatio-temporal spans
than focal system events. They thus constitute the conditions which frame the activities
at the focal level. Micro scale events typically involve the flux of material, energy and
events which support the activities at the focal level. They can be represented as
networks of events that enable or disable the focal system. When lower level events
change, focal level behaviors would have to change. The scales of the levels are not
theoretically fixed, however. The choice of focal level is determined by the needs of the
study, so the component level for one study could become the focal level in another, and
vice versa.

Focal level activities can be represented as networks or nodes within networks (as can
higher-level activities), but some of the determining values in that network would result
from higher or lower-level events, rather than same-level network interactions. At lower
levels, spatiotemporal frequencies are much higher than at the levels above. One can also
say that the spatiotemporal grain is finer at lower levels, and that matter-energy
fluctuations are faster at these levels (Hölker & Breckling, 2002). Lower-level or
upwards constraints are enabling constraints, in the sense that they make events at the
focal level possible. Say that the focal level event is an episode of behavior: fighting.
For that focal level event, hormone levels would be a lower-level constraint – a necessary
but not sufficient condition (Salthe, 1985). Lower-level values generate possibilities
and probabilities at the focal level without participating in focal level events. Hormones
do not fight, for example. Lower-level constraints are necessary conditions for focal
activity, but they are agnostic about sufficient conditions. Sufficient conditions exist only
where goals and functions can be defined – at the focal level – as constrained by macro-
level conditions.

Upper level constraints also participate in focal-level events without playing a specific
role in the focal event structure. Upper level constraints are contextual or environmental
constraints, and they reduce (or permit) the variety of options that systems of the focal
level have for action. Cold weather for warm-blooded animals, for example, forces
metabolic changes, changes in calorie intake or reductions in expenditure, which
increases the value of enclosed shelter, etc. Upper level constraints in this example alter
the cost structure at the focal level, but do not otherwise direct the activities at the focal
level. Focal activities are self-directed by the animals, but upper-level constraints have
changed what it is sensible for them to do. Focal level systems themselves enjoy little or
no upwards impact on these higher-level constraints. For example, we do not interact
directly with the control parameters of seasonality, like the earth’s axis of rotation and
orbital position around the sun. We cannot change these parameters as our strategy for
managing seasonal temperature changes.

Buffering and emergence represent two other ways in which events at different scales can
influence each other. For an example of buffering, the rain cycle over an ecosystem may
not deliver water at sufficiently regular intervals to meet the water needs of many of its
life forms. However, the structure of storages, reservoirs, channels and flows of water
may be such that even with irregular rain, water is distributed predictably throughout the
ecosystem at an essentially constant rate. Lower-level processes with higher-frequency
water needs are protected from the lower-frequency rainfall pattern by the higher-order
structure of the local hydrological system. This is an important point to understand for
our description of how thermal and configurational entropies interact. Here a material
flow – a network process – is managed by the morphology or configuration of the system
it flows through. Chorley and Kennedy ((Chorley & Kennedy, 1971)) develop a
vocabulary that allows us to describe this kind of interaction in more detail.
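
To make this buffering relationship concrete, here is a minimal numerical sketch in
Python (the quantities are invented for illustration, not drawn from Chorley and
Kennedy): irregular rain pulses fill a reservoir, and the reservoir releases water to
the ecosystem below at a nearly constant rate.

import random

def simulate_buffered_flow(days=365, demand=2.0, capacity=400.0, seed=1):
    # Irregular rain pulses fill a reservoir; the reservoir releases water
    # to the ecosystem below at a roughly constant rate.
    random.seed(seed)
    store = capacity / 2                        # start the reservoir half full
    rains, deliveries = [], []
    for _ in range(days):
        rain = random.choice([0, 0, 0, 0, 0, 10, 25]) * random.random()
        store = min(capacity, store + rain)     # storage absorbs the pulse
        release = min(demand, store)            # buffered, near-constant output
        store -= release
        rains.append(rain)
        deliveries.append(release)
    return rains, deliveries

rains, deliveries = simulate_buffered_flow()
print(max(rains), max(deliveries))   # input swings widely; output barely varies

The rainfall input fluctuates between nothing and large bursts, while the delivered
flow stays pinned near the demand rate: the higher-order structure shields the
lower-level, higher-frequency processes from the irregular input.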

Emergent properties arise in hierarchies when lower-level processes come together to
produce focal-level properties that could not have been deduced by extrapolating from
lower-level properties alone. Common examples of emergent behaviors include market
lower-level properties alone. Common examples of emergent behaviors include market
interactions, which regulate the supply and pricing of goods around the world without
any central entity to govern them, and flocking/herding/schooling formations among animals
that allow them to benefit from the various properties of the collective as a unit.
Hurricanes and tornadoes are also emergent phenomena, with properties that are very
different from the meteorological conditions that give rise to them. Buffering and
emergence are special instances of the kinds of downward and upward causality and
constraint that characterize hierarchical systems.

There is nothing in network theory that excludes hierarchical considerations. Network
relationships can be described among systems known to be composed of multiple
subsystems, for example. This would be tantamount to a ‘white box’ analysis of a
network, where subsystems within focal systems would be of interest, as opposed to
‘black box’ analyses where input-output relations of same-level systems are the only ones
that matter, and the internals of each node are irrelevant. White box analyses of networks
are certainly possible. Furthermore, systems at each level of a hierarchical analysis can
be understood to participate in network relationships. There is thus no incompatibility
between these two ways of describing causal relationships, but a common causal idiom
describing both of them at once is needed. The conceptual challenge lies in describing
how the two kinds of causality interact point-by-point to produce observed events. This
point-by-point interaction can be described by pointing out variables that are shared or
coupled between the two causal orders, such that changes along the network dimension
lead to changes within the hierarchical dimension, and vice versa. This solution has been
explored at some length in the field of physical geography by Chorley and Kennedy
((Chorley & Kennedy, 1971)). Their account of the relationship between networks and
hierarchies is the focus of the upcoming section.

2.2 Process-Response and Control Systems

Chorley and Kennedy based their work in physical geography on a systems approach.
Land formations were seen as open systems, exchanging matter and energy with their
environments, and achieving steady states or stable structures. As such, many of their
insights apply to open systems in general, beyond the domain of physical geography. The
crossing of an energy-exchange dimension with a structural dimension makes their
framework useful for conceptually positioning thermal and configurational entropies in a
way that keeps them distinct but interrelated.

Relationships between system variables are fundamental units of analysis in Chorley and
Kennedy’s scheme. They describe geographical formations as a set of relationships
between system components. These relationships tend to adjust themselves in a self-
regulatory manner towards a steady state, as matter and energy flow through the systems.
In other words, geographical systems find equilibria. Energy throughput thus tends to
produce and maintain discernible patterns of system organization. This organization
includes both network-type causal patterns and hierarchical causal patterns as described
above, in addition to two other modes of causality: one reflecting the interaction between
network and hierarchical systems, and one reflecting the purposeful modulation of
intersecting network and hierarchical systems by human agents.

Chorley and Kennedy rank these patterns of system organization in what might be
considered an inverse order relative to the ecological ranking of levels. The slowest-
changing, most spatio-temporally extended elements are described as ‘lower’ on this
analytical hierarchy, with the fastest-cycle, most plastic elements described as occupying
‘higher’ levels. In ascending order, these four organizational subsystems are:
morphological systems, cascading systems, process-response systems and control
systems. This is a hierarchy of increasing plasticity and control, and it does not describe
the same relationships as hierarchy theory from theoretical ecology. The ecological
notion of hierarchy is most closely related to the topological kind of reasoning used to
analyse morphological systems in Chorley and Kennedy’s scheme. These similarities
will be described below in the discussion of the four patterns of geomorphological
organization.

Morphological systems
Morphology is a configurational concept. The energies of attraction and repulsion that
both energize and stabilize morphological formations are critical for morphological
analysis, as are component interactions, emergent phenomena and global dynamics. For
this reason, land formations, taken just as they are, can be analysed as systems. The
physical properties of components and the strength and direction of their connectivity
give rise to the emergent properties of the whole formation. Chorley and Kennedy give
the example of a beach, with parameters such as slope, mean grain size, range of grain
sizes, beach firmness, and the like. The relationships between these parameters can be
cross-correlated, and the strength of these correlations determines the operational
efficiency of the whole system. This dynamic interrelationship of geomorphic properties
lets us predict how the system will react to different stresses or changes in individual
variables. Positive feedback loops may help explain sudden landscape changes, and
negative feedback loops may hold morphological change within a narrow range. Land
formations thus resemble mechanical structures, where an externally applied stress is
released by a strain, readjusting the affected variables to produce a new equilibrium.
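
As a rough sketch of the kind of analysis involved, the following Python fragment
cross-correlates a few hypothetical beach parameters. The figures are purely
illustrative, not field measurements, and the parameter list is only a subset of what
Chorley and Kennedy discuss.

import numpy as np

# Hypothetical beach survey (illustrative values only, one row per station):
slope      = np.array([4.0, 5.5, 6.1, 7.2, 8.0, 9.1])   # beach slope, degrees
mean_grain = np.array([0.2, 0.3, 0.4, 0.6, 0.8, 1.0])   # mean grain size, mm
firmness   = np.array([8.5, 8.0, 7.2, 6.0, 5.1, 4.0])   # firmness index

corr = np.corrcoef(np.vstack([slope, mean_grain, firmness]))
print(np.round(corr, 2))
# Strong correlations mark tightly connected variables: a stress applied to one
# (coarser grains after a storm, say) propagates through the whole morphology.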

Morphological and configurational concepts are inherently hierarchical. Any discussion
of component arrangements and interactions presupposes both a componential scale of
description and a systemic one. The very idea of combinatorial freedom presumes a shared
combinatorial space within which different ensembles can be configured. We must
assume the distinction between macrostates with macrostate values separate from
microstates with their values. This hierarchy of levels of description is inherent in
Chorley and Kennedy's description of morphological systems. They correlate microstate
values or lower-level interactions to see how focal-level attributes may respond to
system-wide stresses and strains. The macrostate capacity of the system to absorb
stresses and strains is described as the result of the strength and direction of component
connectivity, and this macrostate capacity also buffers the component relationships,
protecting them from entropic disruption. These strong hierarchical macro-micro
interactions justify associating Chorley and Kennedy's concept of morphological systems
with the ecological concept of hierarchical constraint and causation.

Cascading Systems
Cascading systems in this geomorphological model are almost identical to the energy and
material networks described in ecological theory. They consist of chains of subsystems,
dynamically linked by cascades of matter or energy. Material or energy output from one
subsystem becomes input for adjacent subsystems. For example, water runoff and debris
from a slope becomes input for a stream channel. Nodes in this network regulate each
other by the input and output transactions they make. Thus, the slope helps to regulate
the stream, by partially determining how much water and silt the stream has to move, and
thus how much receptive capacity, throughput capacity or reservoir/storage capacity it
needs. In a complementary fashion, the stream helps to regulate the slope, helping to
determine how far and how quickly it can shed eroded material. Thus the surface flows
of matter and energy in this cascade participate in sculpting the underlying morphological
system: the slope of the hill and the shape and course of the riverbed. Flows of silt, sand
and water have roles in both input-output cascades and in the strength and direction of
connections between components in this overall system. The two orders of causality
interact by sharing variables which double-function across them. These variables are
coupled across both dimensions of causality, and this forms the basis for the next level of
systems analysis.
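
A toy model may help fix the idea of a cascade: each subsystem's output becomes the
next subsystem's input. The functions and coefficients below are invented for
illustration; they are not Chorley and Kennedy's formulations.

def slope_output(rainfall, infiltration=0.4):
    # Runoff and debris shed by the slope subsystem (toy relationships).
    runoff = max(0.0, rainfall * (1.0 - infiltration))
    debris = 0.05 * runoff
    return runoff, debris

def stream_response(runoff, debris, channel_capacity=6.0):
    # The stream subsystem receives the slope's output as its own input.
    overflow = max(0.0, runoff - channel_capacity)
    carried_silt = debris if runoff <= channel_capacity else 0.5 * debris
    return {"throughput": min(runoff, channel_capacity),
            "overflow": overflow,
            "silt_carried": carried_silt}

for rain in [2.0, 8.0, 15.0]:
    runoff, debris = slope_output(rain)
    print(rain, stream_response(runoff, debris))   # slope output regulates the stream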

Process-Response Systems
Process-response relationships are defined at the intersection of morphological and
cascading systems, and they represent how energy cascades can interact with
configurational hierarchies. Cascades are essentially regulated sequences of storage and
flow. The structures that comprise stores, channels and regulators have to be realized or
accommodated by some kind of morphological state of the land system. This can be
achieved by the one-to-one sharing of variables by the two systems, so that a variable
plays a role in regulating both the morphology of the system and the flow of matter and
energy across it. One example would be the water infiltration capacity of a slope. This
is both a morphological property of that slope and a decision regulator in a larger
watershed system. When strict double-functioning is not occurring, one might yet find a
high correlation between a variable in the cascading subsystem and one or more in the
associated morphological system. The added flexibility results in feedback loops, with
cascading and morphological components mutually adjusting themselves to changing
input-output relationships. This can be seen in social systems such as shopping districts,
where one large retailer can draw crowds of shoppers, encouraging more large retailers to
set up in that neighbourhood, drawing even more shoppers to the district. Identifying
these relationships between a cascading process (crowds) and the resulting change in
forms or morphology (shops, parking lots) is key for analysing process-response systems.

Cascade processes interact with morphological structures to reach equilibria between the
dynamic forces of the cascade and the binding forces of the morphological formation.
Changes in the cascade will thus be reflected by changes in the morphological system as
variables are adjusted towards a new equilibrium. Changes in morphology may also alter
the way a cascade operates, since morphological elements may be stores, channels or
regulators of the flow. Such changes will also drive the combined system towards a new
equilibrium. The time that it takes the process-response system to adjust to input-output
changes, arriving at a new equilibrium, is called the relaxation time of the system. This
time period depends on things like the amount and direction of change in the energy
cascade and the number of links between the morphological variables.
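
A minimal sketch of this mutual adjustment, with invented coefficients, might look like
the following: a shared variable (here, an infiltration capacity) regulates the cascade
while also being reworked by it, and the number of iterations until it settles stands in
for relaxation time.

def relax_process_response(rain, infiltration=5.0, k=0.2, tol=1e-3):
    # The shared variable (infiltration capacity) regulates the cascade and is
    # reworked by it; iterate until the mutual adjustment settles.
    steps = 0
    while True:
        runoff = max(0.0, rain - infiltration)    # cascade responds to the form
        target = 5.0 + 0.3 * runoff               # form responds to the cascade
        change = k * (target - infiltration)      # gradual morphological work
        infiltration += change
        steps += 1
        if abs(change) < tol:
            return round(infiltration, 3), steps

print(relax_process_response(rain=8.0))    # new equilibrium, shorter relaxation time
print(relax_process_response(rain=20.0))   # larger change in the cascade, longer relaxation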

Process-response analysis extends network analysis by making configurational
information part of the input-output equation. As Chorley and Kennedy write:

…the inputs into such a process-response system at one time, t, consist of
both mass and energy and the prior configuration of the morphologic
variables. As a result of the operation of the system, energy and mass are
output, together with a new configuration of the morphological variables.
Both the form and the effect of the cascade inputs at time t + 1 will then be
influenced by the new state of the morphological structure. ((Chorley &
Kennedy, 1971) p. 126)

In dynamic terms, part of the output of a process-response system is the work that it does
in changing its own formation, in ways that alter subsequent inputs. This is a feedback
relationship, and feedback processes can come to be facilitated by control systems that
intervene on key regulators to produce intentional results.

The interaction between thermal and configurational entropies can be described in
process-response terms. Chorley and Kennedy's schema also describes an additional sort
of geomorphological system, which they call a control system and which is described
briefly below. The issues that arise in their discussion of control systems are highly pertinent to
cybernetics and information theory, of course, and this suggests that their schema could
provide a home for yet another form of entropy – information entropy or uncertainty.
This third home for entropy would require much time to discuss, however, and I do not
undertake that analysis in this paper.

Control systems
Control systems are based upon process-response systems. In process-response systems,
certain key variables act as decision regulators or matter-energy valves. Valves possess
some degrees of freedom within the process-response system, such that their state within
those degrees is either indeterminate, or bistable/multistable and easily
shifted from one state (or attractor) to another by stochastic fluctuations of the cascade,
etc. At these points, an additional flow of energy or a redirection of energy can eliminate
the indeterminacy or stabilize the system around specific attractors. This additional flow
or control signal may come from a source that itself fluctuates stochastically (a river may
be dammed by a mudslide), in which case the valve is just another regulator in the
cascade system. However, purposeful intervention is also possible at these points to
achieve various process-control goals (a dam may be constructed across a river for a
hydroelectric plant).

Valves are the points where an intelligent control system can intervene, to change the
matter-energy flow of the cascade, and the equilibrium settings of the morphological
variables linked with them. Intelligent control implies a hierarchical relationship between
the control system and the plant (the process-response system being managed), as for
example between a socio-economic decision-making body and a territory. The control
system will be characterized
by very complicated flows and relationships, centrally implicating a model or
representation of the plant in question. Effectors between the decision-making body and
the control valves of the process-response system will rectify the targeted discrepancies
between the model and the system.
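
The following toy control loop illustrates the idea, under assumptions of my own (a
reservoir level as the managed variable, a dam valve as the regulator, and a simple
proportional rule as the effector); it is not drawn from Chorley and Kennedy.

def control_system(target_level, steps=40, gain=0.1):
    # A decision-making body compares the actual reservoir level with the level
    # its model calls for, and an effector sets the dam valve to close the gap.
    level = 10.0                                        # state of the managed plant
    history = []
    for _ in range(steps):
        error = level - target_level                    # model-vs-system discrepancy
        valve = min(1.0, max(0.0, 0.5 + gain * error))  # effector acts on the valve
        level += 3.0 - 6.0 * valve                      # inflow minus controlled outflow
        history.append(round(level, 2))
    return history

print(control_system(target_level=12.0))   # the level climbs and settles near 12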

This rectifying of discrepancies between the model and the system is a major focus of
study in cybernetics, and this opens the door for a discussion of information-theoretic
concepts in the context of adaptive systems. For the time being, I simply indicate that
this door is open. It is not a door we will walk through below.

2.3 Process-Response and Configurational Entropy

The ‘negentropy confusion’ we are trying to clear up arises from an apparent paradox of
chemistry, which is that thermal dissipation through a system, and the consequent
production of thermal disorder, often results in increasingly complex chemical
organization. Order arises through the production of disorder, and this seems
paradoxical. A standard response to people who seem captivated by this paradox is to
point out that the production of order requires ‘fuel’ essentially, and that so much ‘fuel’
needs to be burned to create the order we see, that if you look at the whole system,
overall order is reduced and disorder increased, even if in one small corner of the system,
order increased locally. However, this still does not always dispel the impression that
chemical systems nevertheless accomplish something miraculous in snatching order from
the jaws of disorder. This standard response reasserts the second law of thermodynamics,
but does not dispel the aura of mystery around what life ‘accomplishes’.

If we interpret chemical systems as process-response systems, the appearance of paradox
vanishes, as does the mystification of life’s ‘accomplishment’. It becomes clear that as
matter-energy cascades progress, they necessarily do morphological work. The task
before us thus becomes representing chemical systems as process-response systems. To
do this, we have to describe both morphological relationships and energy cascades,
illustrating how they interact.

Morphological relationships are produced by physical properties of our focal systems that
allow their lower-level elements to form operational assemblies of wholes and parts.
These relationships are characterized by various connective and correlative strengths, and
they give rise to the macroproperties of the system. The references to wholes, parts and
emergent macroproperties alert us immediately to the hierarchical explanatory patterns
involved. We need to describe these hierarchical properties for chemical systems in their
interactive environments, in order to appreciate how energy flow through such systems
results in increased organization.

However, the energy cascades that ‘flow through’ chemical structures are not something
entirely separate from those structures. On the process-response model, some aspects of
chemical morphology will act as storage elements in the cascade. Some will be relays
that transfer force from one area to another as energy is dissipated. Other elements will
be decision gates, changing flows when certain thresholds are reached, or when certain
other constraints and conditions apply. These elements will be part of the energy
cascade, and also parts of the morphological system, and each of these systems will
adjust the values of shared variables until an equilibrium is reached between them.

Relaxation time is relevant to this analysis. How quickly or slowly does the process-
response system find equilibrium as cascades fluctuate? As flows subside, does the
morphological system relax away from that process-informed configuration, or does it
hold the cascade-shaped configuration in a way that anticipates future flows? That is, the
morphological system may develop memory, such that it may quickly relax into
configurations associated with different states of flow, essentially learning the way that
flows are most likely to come.

Our task of explaining thermal and configurational interaction begins with a look at the
configuration side; the morphological or hierarchical aspects of process-response
systems. We need to know what the compositional units of chemical systems are, what
forces operate at different scales of chemical analysis, and how larger patterns of forces
constrain and delimit what happens at lower levels. All of this is very well known
information, so the main points will be reviewed here only in the most basic terms.

The components of the morphological structure of chemical systems are atoms, whose
properties are determined in part by laws of nature, and in part by the mechanics of
atomic structure. The forces of nature are asymmetrical in their current form. It is
thought that at one point in the history of the early universe, there was only a single force
of nature that had the same effect on all ‘things’. However, as the universe aged and
cooled, that symmetry was broken, leaving us with four fundamental forces that operate
at different scales, on different material objects and in different ways. Gravity was the
first, very large scale, very weak fundamental force to separate off from the unified force.
It concentrates around clumps of matter in the universe, and it pulls more matter towards
it, enabling a positive feedback cycle of more and more matter coming together. In very
massive gravitational systems like stars, gravity can crush matter together closely enough
that atomic and nuclear reactions occur within them, creating heavier and heavier
atomic species.

The medium-range force is the electromagnetic force, which operates at classical
(ordinary, Newtonian) ranges, and which mediates most chemical reactions at the focal
level. It is the scale of most importance to the study of chemistry. At a much smaller
scale, inside the tiny atomic nucleus, nuclear forces hold protons and neutrons together.
The strong nuclear force has a very short range, but it overpowers the electromagnetic
repulsion between protons to hold them in nuclei, and the number of protons determines
how many electrons the system can hold, creating the electron shell structure that
determines how atoms will interact in chemical reactions. So we have a scenario where
external events in the gravitational frame of reference may direct flows of energy and
matter in ways that produce chemical reactions at the classical range through the
electromagnetic force, or at the subatomic range through electromagnetism or nuclear
forces. At the focal level, electromagnetic reactions can release potential energy from
within molecules to produce kinetic energy or radiant energy at classical ranges. The
decay of nuclei and other such nuclear events can also cause these changes at the focal
level. The triadic structure of hierarchical causation is thus evident in the chemical
environment.

Other morphological aspects of chemical systems – such as connection strengths,
relaxation time, memory capacity, energy storage capacity – come not from the
stratification of fundamental forces but rather from the quantum structure of the
components themselves. At the subatomic level, elementary particles show a definite
structure in that they have definite rather than continuous quantum numbers. They differ
from ordinary-sized or classical objects in this regard. This limits their potential for
deformation and relaxation, giving them memory and durable morphologies.

To illustrate, a classical object like a basketball could be made to spin, move or roll in
any direction and any speed with any momentum, within some broad outer limits. It
could be rolled at a speed of one kilometer an hour, two kilometers an hour, or any speed
in-between. Elementary particles do not behave this way. Fictionalizing a bit here, if you
were ‘rolling’ an electron, you could only roll it at, say, 1.2 km/h, 1.65 km/h, or
1.917 km/h, and nothing in-between. Those discrete, specific speeds would be the only
speeds it could have, according to the laws of nature, and nothing in-between. This is a
fanciful example of course, and the numbers I have used here are meaningless, but it gets
at the idea that sub-atomic particles are measured by numbers that take discrete values
rather than continuous ones. There is a strictly limited set of possible values (quantities,
quanta) that such an elementary particle can have – 1, 3, 5 but nothing in-between; never
1.01, and never 2 or 4.

If we think of an elementary particle as a tiny ball (like a basketball only smaller), it is
hard to understand why some values are ‘cut out’ of the range of possible values a
particle can have. It is easier to comprehend this pattern when particles are understood
by their wave functions. When the position, momentum, energy and spin attributes of
elementary particles are given as wave functions (impulse waves, sine waves, spherical
harmonics, etc.) ((Herbert, 1987)), we have to consider their wave-like interactions not
only with other particles, but with themselves. In their wave functions, there will be
nodes, like the non-vibrating nodes on a guitar string, where nothing happens at all. All
of the states of the electron have to respect nodes, or else the electron would be out of
phase with itself and interfere with itself. This means that certain values for the quantum
numbers that describe particles will never be found.
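
The standard ‘particle in a box’ model from introductory quantum mechanics makes this
discreteness concrete: a wave confined between two walls must fit a whole number of
half-wavelengths, respecting its nodes, so only certain energies are allowed. The short
calculation below uses textbook constants; the box width is an arbitrary illustrative
choice, not a value taken from the sources cited here.

# Particle-in-a-box energy levels: a confined wave must respect its nodes,
# so only discrete energies are allowed, with nothing in between.
h = 6.626e-34    # Planck constant, joule-seconds
m = 9.109e-31    # electron mass, kilograms
L = 1e-9         # box width: one nanometre (an arbitrary illustrative choice)

def energy(n):
    # Allowed energies for n = 1, 2, 3, ...; a value such as 1.5 * energy(1)
    # simply does not exist as a state of the system.
    return n ** 2 * h ** 2 / (8 * m * L ** 2)

for n in range(1, 5):
    print(n, energy(n))   # energies in joules, growing as n squared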

There is thus no slow gradation of energy levels from minimum to maximum at the
quantum level. There are discrete, definite values, with all intermediate values taken out
of the realm of possibility. This discreteness is also felt on the molecular scale, giving
molecules definite shapes and interactive potentials. Remove or rearrange components of
a table and it still may be a table (put it on three legs instead of four, for example).
Remove or rearrange atoms in a hemoglobin molecule and it is no longer a hemoglobin
molecule, but rather something quite different. Its reactive properties will be vastly
different. It will not be an alternative design for hemoglobin. Molecular structure enjoys
a specificity that larger objects do not, due to the crispness and absence of intermediate
values at the quantum level. The definiteness of structure at these component levels forms
a chemical alphabet, complete with combinatorial rules and restrictions, that enables the
construction of complex morphologies of definite kinds, obeying definite laws.

There are thus absolute differences between the world of the large and the world of the
very small. ((Icke, 1995)) As we have already noted with our basketball example, large
object interactions can be measured along continuous gradients. A lump of clay, for
example, can be continuously deformed in any way at all. Very small objects like atoms
and molecules are not like that. You can deform a molecule, but only by snapping it into
one of its allowable states, and if you do so, it is no longer the same molecule. Molecular
interactions are very sensitive to shape. Twist one, and you have created a new building
block.

Thus, an energy cascade traversing the morphological system of a molecule is not free to
alter its morphometric properties in an arbitrary fashion. It has to exert enough force to
overcome morphological binding forces (i.e. thresholds), and then it can either move the
system to one of a few particular other states (relaxing with them into new equilibria), or
break the system in specific ways, leaving behind specific residues. These conditions
governing chemical morphology furnish part of what we need to represent its process-
response relationship with thermal systems. Energy cascades can push molecules into
high-energy configurations that the molecules will not be able to release unless further
energy is directed towards overcoming the local binding forces defining the new
morphology. The morphological dimension is discretized and quantized.
Intermediate values are not possible, and it takes energy to move from one state to
another. Thus, the morphological dimension will not deform and relax continuously with
the cascades of energy that traverse it. Quantum effects at this scale give the
morphological dimension a longer average relaxation time than the classically-governed
cascade dimension.

To appreciate the difference between morphological and cascading systems, we have to
understand energy. Energy is not a thing. It is not a kind of ‘stuff’ like water or light
which flows or propagates. When we talk about a system being in a ‘high energy state’
we are really referring to the play of fundamental forces and counterforces in and around
the system. ((Scott, 1988)) A rock that is balanced on the lip of a high cliff is in a high
energy state. It is in a gravitational system that would have the rock accelerate
downwards and hit whatever lies below it with some force. However, a large interwoven
electromagnetic/material matrix (the cliff) is holding the rock up, in a tenuous
equilibrium against the gravitational force. I say tenuous because a very slight amount of
activating energy – a gust of wind, the cane of a hiker, a smaller rock falling on top of the
balanced rock – will nudge that rock over the equilibrium threshold. There will then be a
huge potential between its precarious position and the ground. The gravitational force, by
doing the work of moving the rock, will reduce this potential, converting it into the
kinetic energy that the falling rock will transfer to the ground on impact.
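
For a rough sense of the quantities involved, here is the textbook arithmetic for such a
rock; the mass and height are invented for illustration.

import math

m = 20.0    # mass of the rock in kilograms (illustrative)
h = 50.0    # height of the cliff in metres (illustrative)
g = 9.81    # gravitational acceleration, metres per second squared

potential_energy = m * g * h            # about 9,810 joules held back by the cliff
impact_speed = math.sqrt(2 * g * h)     # about 31 m/s once the threshold is crossed
print(potential_energy, impact_speed)
# The energy that the cliff quietly 'stores' is delivered to the ground as
# kinetic energy when the rock is nudged past its equilibrium threshold.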

In a low energy state, the interplay of fundamental forces has all been ironed out. There
aren’t any big potentials left to close. Forces no longer counter each other in ways that
define separate systems. Andrew Scott writes:

The clue is that high energy states always seem to be associated with some
resistance against or a violation of a fundamental force. Low energy states
are associated with compliance with the fundamental forces. This implies
that energy can be thought of as some sort of ‘force resistance’ or ‘antiforce’
able to counteract the pushes or pulls of the fundamental forces. ((Scott,
1988))

Energy can also be thought of as a consequence of the morphology of a space or system
with subsystems that possess different relaxation times. The cliff feels the same
gravitational force as the rock, but it takes much longer for the cliff to tumble to the lower
plain than the rock does. The cliff has a much longer relaxation time, due to the
counterforce of the electromagnetic bonds that hold it together. This defines a
gravitational potential at the threshold of the cliff system. Objects in the larger
gravitational system that includes the cliff can thus be held at that level, as stored
potential gravitational energy. Objects near or at the threshold are still trapped as
potential energy, unless some activating energy pushes them over the threshold, releasing
all that gravitational potential. Alternatively, the conditions on the cliff edge might
change (part of it may loosen and fall off), or a larger cascade may overwhelm the cliff
threshold (e.g. a mudslide or an avalanche). Energy cascades are thus material changes
that close gaps and overcome counterforces to reach states where perturbations are less
likely to lead to further cascades. The systems that receive them as input, release them as
output, store them and regulate their flow are also material systems with different
relaxation times, arranged so that they define potentials between energy states.

Energy comes in three forms: kinetic (including heat), potential, and radiant (involving
photon release, exchange and absorption). Energy is the capacity to do work, which is
movement in the direction of a force, from one state or position to another. Energy
cascades themselves can be seen as morphological systems (e.g. fluid dynamic systems)
with much more rapid relaxation times than the structural morphological systems that
define the energy potentials of the overall process-response system.

Chemical systems seek lower-energy arrangements as much as any other physical system does.
When two chemicals in the world bump into each other and bind to each other by sharing an
electron, they do so because that shared electron arrangement is a lower-energy
arrangement than one where those two chemicals are in such proximity without sharing
the electron. They will also release some heat in this process (they will lose energy to
random motion), putting them in an even lower energy state. If brought in contact with a
third chemical that would normally react with the initial two, the two-molecule chemical
might not respond – it might not be able to cross that gradient to an even lower energy
state than it is in currently – because its components are trapped in their current
equilibrium, like the rock sitting up on the cliff. However, add a little activation energy
(like a lighted match) to replace the energy lost by heat in the original reaction, and
molecules would become more active, overcoming energy thresholds and falling down
other gradients towards a new final equilibrium. The forces in this energy system are
subatomic and electromagnetic (and gravitational in terms of the characteristics of the
spacetime context), and they set up the energy potential that the cascade could flow
along. The counterforce aspect of the potential is the pre-existing molecular structure,
which molecules are now ‘straining against’ due to the attraction towards a new
configuration that better balances the various morphological forces pulling and pushing at
the system.
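
The effect of that ‘lighted match’ can be put in rough quantitative terms using the
standard Arrhenius factor, exp(-Ea/RT), which gives the fraction of molecular encounters
energetic enough to cross the activation barrier. The activation energy below is an
illustrative magnitude, not a measured value for any particular reaction.

import math

R = 8.314          # gas constant, joules per mole-kelvin
Ea = 80_000.0      # activation energy, joules per mole (illustrative magnitude)

def arrhenius_factor(temperature_kelvin):
    # Fraction of molecular encounters energetic enough to clear the barrier.
    return math.exp(-Ea / (R * temperature_kelvin))

room  = arrhenius_factor(298.0)    # roughly room temperature
flame = arrhenius_factor(900.0)    # roughly the neighbourhood of a match flame
print(flame / room)   # the hotter system reacts billions of times faster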

The addition of extra energy to a chemical system can also excite its components to a
level where they can react and stabilize at a higher energy level. They would start off at
lower energy levels, then become excited by an energy cascade, and in that excited state
they would find the lowest possible energy position, losing heat as they moved into that
position. Then as the background energy dropped they would find themselves trapped in
the higher energy state, unable to get out of the energy hole they dug during their
reaction. This process stores energy as potential energy in chemical structures. Living
cells trap energy in this fashion by adding a phosphate ion to ADP to form ATP. The
quantum nature of chemical components is relevant to this phenomenon. Atoms and
molecules can only take up certain very definite shapes and values. If energy is not
enough to lift or nudge a molecule from one energy state to another, the structure of the
molecule will not change, even if there is a chemical potential for change. Not every
pulse of energy overcomes the bonding energy of the chemical morphology, so not every
impulse can release the potential energy of a molecule. A kind of ‘ratcheting’ effect
becomes possible, due to the existence of thresholds, which must be overcome before that
stored energy is released to cascade once again.
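
A toy simulation can make the ratcheting effect vivid: random energy pulses arrive, most
dissipate harmlessly, but the occasional pulse that clears the binding threshold lifts the
system into a higher-energy configuration that is then held in place. The threshold and
pulse distribution below are arbitrary choices for illustration.

import random

def ratchet(pulses=500, barrier=3.0, lift=1.0, seed=2):
    # Random energy pulses arrive; only those that clear the binding threshold
    # lift the system into a higher-energy configuration, where the quantized
    # structure holds it. Sub-threshold pulses simply dissipate as heat.
    random.seed(seed)
    stored = 0.0
    for _ in range(pulses):
        pulse = random.expovariate(1.0)    # mostly small, occasionally large
        if pulse > barrier:
            stored += lift                 # ratcheted up into a less probable state
    return stored

print(ratchet())   # energy accumulates stepwise even though most pulses dissipate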

An open system, one that is always open to new energy pulses, can continually lift and
ratchet molecules into more and more complex, high-energy states. The work of the
energy cascade, as it relaxes towards entropy, acts as a counterforce against the entropic
relaxation of morphology, by lifting molecules into more and more improbable states. By
pushing molecules into different states, energy from the cascade is dissipated. The
energy potential between two energy states is reduced, resulting in a loss of thermal order
(loss of difference between source and sink). It just so happens that the morphological
work that was done as part of this energy cascade created configurations with a much
slower relaxation time than the cascade itself.

This is a very often repeated rationale for how chemical systems can increase their
organization while also dissipating energy and increasing thermal entropy. What has
been added is the network-hierarchy or process-response perspective. The morphological
components of chemical systems have a definiteness of structure that is unlike anything
on the macroscopic plane. Macroscopic forces at classical scales thus are limited in their
capacity to transfer changes down to the chemical level. Some applications of force
result in microscopic changes, some do not. There are differences in relaxation time
between scales. Energy cascades through molecular systems can thus dissipate energy
through heat and work without doing much to the morphologies of the molecules
involved. Because of the quantum nature of those morphologies, energy cascades can
also move through systems and lift them into higher-energy states that have longer
relaxation times than the cascades themselves. Energy can be stored at the molecular
level due to its discrete, rather than continuous nature. The inverse relationship between
thermal entropy and configurational entropy thus turns out to hinge on absolute
differences of scale. Classical level interactions impose constraints upon chemical
systems, but do not interact with the quantum-scale forces and regularities that determine
the morphological properties of chemicals. Focal-level chemical interactions that result
in macroscopic material changes are a product of both upwards and downwards
constraints and same-level interactions with other material systems. At the focal level,
continuous values for system properties are possible, as are continuous deformation and
smooth process-response relaxation curves. This morphological hierarchy is robust.
There are absolute differences between the different scales of activity.

The differences in relaxation times between cascades and configurations do not mean
that chemicals or living systems accomplish anything permanent to counteract the growth
of entropy in the universe. Chemical and living order are only maintained in systems that
receive an ongoing flow (or regular pulses) of energy to maintain that order. Without
energy cascades keeping a system in a certain shape, random fluctuations and random
encounters at the atomic level will eventually cause the system to relax. Entropy
degrades structure in open systems, so structure can only be maintained if ongoing
environmental input supports it. This is a cybernetic law known as the Law of Limited
Variety. Structure will only be maintained to the degree that environmental input induces
or requires it. Otherwise, structure is not ultimately maintainable.

Thus, what we see in the phenomenon of life is a temporary configurational storage of
energy. This can be understood by an analogy to the circulatory system. The heart
pumps blood around the body. However, between heartbeats, blood still moves. One
reason for this is that when the heart contracts and blood surges through the circulatory
system, the walls of the blood vessels stretch a bit. They store elastic energy. Then as the
surge of blood abates and the heart cycles over into a new pulse, that stored elastic energy
in the blood vessels pushes the blood along a bit more. The energy that had been pumped
into the elastic matrix of blood-vessel wall morphology is released back into the cascade.

Order in chemical and living systems is analogous to the elastic potential energy in blood
vessel walls. As organisms, we are able to seek out and exploit opportunities for energy
reduction in order to maintain our structures, but this is only a temporary diversion away from the
ultimate relaxation that awaits us all. The flip side of this is that it is only the tendency
for energy to flow that makes life possible at all. Life is about limits, and about
dwindling resources, and about pulses of new energy and their careful use towards
constructive ends. An ecological awareness requires us to remain cognizant of the right
use of energy, the growth of disorder and the sources of order in the biological world.
This awareness is not well served by a mystical elevation of life as a force that overturns
the second law of thermodynamics. This error often underlies a bio-centric sense of
triumphalism and invincibility that pushes an awareness of our limits to the margins of
consciousness. The art of living is not unlike the art of surfing – a temporary ride along a
pulse of energy. To maintain our balance and to ride it well, we have to respect the wave.

3. References for Appendix C


1. Burns, T. P., Patten, B. C., & Higashi, M. (1991). Hierarchical evolution in ecological networks:
environs and selection. In M. Higashi & T. P. Burns (Eds.) (pp. 211-239). New York: Cambridge
University Press.

2. Chorley, R. J., & Kennedy, B. A. (1971). Physical Geography: A systems approach. London:
Prentice-Hall International.

3. Herbert, N. (1987). Quantum Reality: Beyond the new physics. Garden City, New York: Anchor
Press/Doubleday.

4. Hölker, F., & Breckling, B. (2002). Concepts of scales, hierarchies and emergent properties in
ecological models. In F. Hölker (Editor), Scales, Hierarchies and Emergent Properties in
Ecological Models . Frankfurt am Main: Peter Lang.

5. Icke, V. (1995). The Force of Symmetry. Cambridge UK: Cambridge University Press.

6. Jørgensen, S. E. (1992). Integration of Ecosystem Theories: A Pattern. (Ecology and Environment
No. 1). Dordrecht, The Netherlands: Kluwer Academic Publishers.

7. Klyce, B. The Second Law of Thermodynamics [Web Page]. URL
http://www.panspermia.org/seconlaw.htm [2005, November 26].

8. Mikulecky, D. C. (1991). Network Thermodynamics: a unifying approach to dynamic nonlinear
living systems. In M. Higashi & T. P. Burns (Eds.) (pp. 71-100). New York: Cambridge University
Press.

9. Miller, J. G. (1978). Living systems. New York: McGraw-Hill.

10. Miller, J. G. (1987). The study of living systems: A macroengineering perspective. Technology in
Society, 9, 191-210.

11. Odum, H. T. (1994). Ecological and General Systems: An introduction to systems ecology (Revised
edition of Systems Ecology, John Wiley and Sons ed.). Niwot, Colorado: The University
Press of Colorado.

12. Salthe, S. N. (1985). Evolving Hierarchical Systems: Their Structure and Representation. New York:
Columbia University Press.

13. Schneider, T. D. (1997). Information Is Not Entropy, Information Is Not Uncertainty! [Web Page].
URL http://www.lecb.ncifcrf.gov/~toms/information.is.not.uncertainty.html [2005,
November 26].

14. Scott, A. (1988). Vital Principles: The molecular mechanisms of life. Oxford: Basil Blackwell.

15. Tracy, L. (1989). The Living Organization: Systems of Behavior. New York: Praeger Publishers.

16. Wicken, J. S. (1987). Evolution, Thermodynamics and Information: Extending the Darwinian
Program. New York: Oxford University Press.

17. Wicken, J. S. (1988). Thermodynamics, Evolution, and Emergence: Ingredients for a New Synthesis.
In B. H. Weber, D. J. Depew, & J. D. Smith (Eds.) (pp. 139-169). Cambridge, Massachusetts: The
MIT Press.

18. Wiley, E. O., & Brooks, D. R. (1988). Evolution as Entropy: Towards a Unified Theory of Biology
(2nd ed.). (Science and its Conceptual Foundations). Chicago: The University of Chicago
Press.
