
SYSTEMIC SYMBIOTIC PLANETARY ECOVILLAGE NETWORK

Systemic Symbiotic Planetary Ecovillage Network


P O Box 1674
Middletown, CA
95461-1674
USA

silverj6@mchsi.com

Silver J. H. Jones

TABLE OF CONTENTS

Where did grid computing begin?
Who needs computational grids?
Current and expected participants in the world grid
Projected near-term deliverable computation
Applications where the grid will be important
A new paradigm in computational capacity
The global computational grid paradigm
Exploring internet ecology and topology
References

CHAPTER XII
Planetary Biocomputational Grid

Silver J. H. Jones
2008

Copyright © 2002 by Silver (J. H.) Jones. All rights, electronic, multimedia, and print, reserved. A publication of SSPEN - the Systemic Symbiotic Planetary Ecovillage Network.
It may come as a shocking surprise to most people to learn that we are currently wasting a very large portion of our planet's total computational capacity. Estimating just how many CPU cycles are wasted in any given time period is a difficult task; we are limited to making educated guesses based upon smaller, sub-global surveys. If one simply considers how many office computers are mainly occupied with receiving and sending intranet and extranet e-mail, occasional database retrievals, and word processing (all very low-intensity tasks, perhaps utilizing no more than 5 to 10% of the total processing power available), one begins to see how large the waste must be. If we add to this all the processing power being wasted in home computing, it seems quite possible that 75 to 95 percent of the total processing power available to the planet is currently going unused.
The path to correcting this enormous waste of computational capacity is to encourage the voluntary participation of all computational resources in a planetary computational grid. In their book The Grid: Blueprint for a New Computing Infrastructure, Foster and Kesselman (eds.) have described this brave new adventure as follows [1]:
“The grid is an emerging infrastructure that will fundamentally change the way we think about-and use-
computing. The grid will connect multiple regional and national computational grids to create a universal
source of computing power. The word “grid” is chosen by analogy with the electric power grid, which
provides pervasive access to power and, like the computer and a small number of other advances, has had
a dramatic impact on human capabilities and society. We believe that by providing pervasive, dependable,
consistent, and inexpensive access to advanced computational capabilities, databases, sensors, and people,
computational grids will have a similar transforming effect, allowing new classes of applications to
emerge.”
“Computational grids do not yet exist. However, the essential building blocks from which they will be
constructed - advanced optical networks, fast microprocessors, parallel computer architectures, communi-
cation protocols, distributed software structures, security mechanisms, electronic commerce techniques -
are now, or soon will be, in place. Furthermore, we are starting to see the construction of large-scale,
high-performance national- and international-scale networks. Already the first steps are being taken toward using these networks as large-scale computational environments. Within the United States, efforts
such as the Advanced Strategic Computing Initiative (ASCI) and DOE2000 programs in the Department
of Energy, the Information Power Grid in the National Aeronautics and Space Administration, and the
National Science Foundation’s Partnerships for Advanced Computational Infrastructure are investigating
how to build a persistent computational grid infrastructure. Existing high-speed networks - such as ESnet,
vBNS, AAInet, NREN, to name just a few-have enabled extensive experiments with collaboration tech-
nologies (“collaboratories”) and with “metacomputing”- the coupling, usually by heroic efforts, of geo-
graphically distributed supercomputers. These experiments demonstrate that new applications are feasible
and useful” [2].
We are now approaching a computational era in which it will be possible to perform petaflop computa-
tions and access petabyte storage systems.
Given the above description of computational grids, one might be led to think that they are a totally new phenomenon, a further extension of the current technological era. That conclusion would be premature.

Where did grid computing begin?


Parallel scientific advances in biology, ecology, emergence, and self-organization are producing extensive evidence of massively parallel and distributed computing in biology. Furthermore, we are learning that biology and human technology have a lot in common. The following comments by Sole, Cancho, Montoya, and Valverde give us some perspective on these new developments [3]:
“Complex biological networks have very different origins than technologic ones. The latter involve exten-
sive design and, as engineered structures, include a high level of optimization. The former involve (in
principle) contingency and structural constraints, with the new structures being incorporated through tink-
ering with previously evolved modules or units. However, the observation of the topological features of
different biological nets suggests that nature can have a limited repertoire of “attractors” that essentially
optimize communication under some basic constraints of cost and architecture or that allow the biological
nets to reach a high degree of homeostasis. Conversely, the topological features exhibited by some tech-
nology graphs indicate that tinkering and internal constraints play a key role, in spite of the “designed”
nature of these structures. Previous scenarios suggested to explain the overall trends of evolution are re-
analyzed in light of topological patterns.”
“Comparison between the mechanisms that drive the building process of different graphs reveals that op-
timization might be a driving force, canalized in biological systems by both tinkering and the presence of
conflicting constraints common to any hard multidimensional optimization process. Conversely, the pres-
ence of global features in technology graphs that closely resemble those observed in biological webs indi-
cates that, in spite of the engineering design that should lead to hierarchical structures (such as the one
shown in figure 1) the tinkerer seems to be at work” [4].
It would seem that nature has at least a multibillion-year head start when it comes to parallel and distributed computing. If biology and ecology have already adopted highly parallel and distributed computational approaches, it seems inevitable that we, as an increasingly technological civilization, shall also have to master this science and art. Furthermore, it is becoming ever more obvious that we are immersed in a natural and cocreated ecowomb that is suffused with parallel and distributed processing - a planetary biocomputational grid. Stephen Wolfram has recently published an enormous new book (1,200 pages), A New Kind of Science, in which he refers to the ubiquity of computation as "computational equivalence," and he further suggests that this phenomenon is so prevalent that it will transform mathematics and science once a full awareness of this reality reaches a broader audience.
Given the enormous head start biology has in computational networking, it would seem that we could
learn a great deal from looking at some examples of how the problems of distributed computation have
been tackled. We shall only provide a very cursory discussion of some of the major criteria of biological
computation which have begun to surface in recent research [5]:

Systemic Symbiotic Planetary Ecovi!age Network


4
small world property
• Many biological, ecological, social, and artificial networks reveal common structural and functional qualities often referred to as 'small world' properties. The degree of this property is measured by a combination of the path length (the average minimum distance between any two nodes) and the clustering coefficient (the probability that two neighbors of a given node are also neighbors of each other). Small world character tends to arise in otherwise highly distributed networks when some sort of 'directory' node or process becomes a main connector of many other locations or processes. WWW directories and search engines, like Yahoo and Google, cause an otherwise highly distributed Web network to take on small world properties, because people need some way to locate all of the web pages available within the network. In biochemical reaction networks, certain molecular structures act as centralized hubs around which numerous other chemical reactions coordinate their activities. Such hubs increase the degree of clustering in a network, and small world networks propagate information very efficiently. (A brief sketch of how these two measures are computed follows this list.)

degree distribution

• The degree distributions of networks can usually be broken down into three categories. Single-scaled link distributions - exponential or Gaussian - are highly homogeneous; electrical power grids and neural networks are examples of this type. A second type is the broad power law distribution; clustering increases in this type, and protein interactions, metabolic pathways, and ecological networks are good examples. A third type is the scale-free network, which exhibits an even greater degree of clustering; internet topology, scientific collaborations, and lexical networks are examples of this type.

redundancy and degeneracy

• Redundancy and degeneracy are methods used by systems to provide backup against single-node failures. They provide robustness by diminishing fragility in networks. Redundancy replaces individual units with identical backup units. Degeneracy, on the other hand, provides different units that can perform similar functions. Biology has favored degeneracy as its main mechanism for providing robustness.

modularity
• Very complex systems which self-organize over extended periods of time show a high degree of modularity. Subunits of processes tend to cluster into functional groups. Modularity may reduce the complexity of integration in the early stages of development, and it may also reduce the amount of interference between similar functions in the early stages of integration.
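
To make the 'small world' measures mentioned above concrete, here is a minimal sketch in Python of how the two quantities - average path length and clustering coefficient - might be computed for a toy network. The graph itself and the function names are purely illustrative inventions, not taken from any of the cited studies:

from collections import deque

# A toy undirected network: a ring of six nodes plus one shortcut (A-D),
# the kind of shortcut that gives a network small-world character.
graph = {
    "A": {"B", "F", "D"},
    "B": {"A", "C"},
    "C": {"B", "D"},
    "D": {"C", "E", "A"},
    "E": {"D", "F"},
    "F": {"E", "A"},
}

def shortest_path_length(graph, source, target):
    """Breadth-first search: minimum number of hops from source to target."""
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == target:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    raise ValueError("graph is disconnected")

def average_path_length(graph):
    """Average minimum distance over all pairs of nodes."""
    nodes = list(graph)
    pairs = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]]
    return sum(shortest_path_length(graph, u, v) for u, v in pairs) / len(pairs)

def clustering_coefficient(graph, node):
    """Fraction of a node's neighbor pairs that are themselves linked."""
    nbrs = list(graph[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i, u in enumerate(nbrs) for v in nbrs[i + 1:] if v in graph[u])
    return links / (k * (k - 1) / 2)

if __name__ == "__main__":
    print("average path length:", round(average_path_length(graph), 2))
    for n in graph:
        print(n, "clustering:", round(clustering_coefficient(graph, n), 2))
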

Microsocieties or ecovillages and their related networks, spanning the ecosphere, would seem to be a model of societies, economics, and governance very much in tune with biology's previous success story. Microsocieties connected via scaled and layered global networks would seem to capture all of the essential ingredients necessary to duplicate biology's successes - strong small world properties, varied degree distributions, a high degree of degeneracy, and very substantial modularity. We hope that this information, brief as it is, establishes that many of the principles we are asking people to consider utilizing in ecovillages are not really new, untested, and revolutionary. From the point of view of biology these are very old, well established, and exhaustively tested practices which have produced the highly homeostatic ecosphere we inhabit - one that has flourished and survived for billions of years. Tribal societies exhibited many of these properties at an earlier stage in human evolution. It was the coming of the Industrial Revolution that moved humanity into a less heterogeneous 'one model fits all' scenario. What is new about the microsociety and ecovillage concept put forth here is the intentional application of these principles to social, economic, and governance practices in a highly technological civilization.
We hope that we have established the case that the functional principles of a systemic and symbiotic planetary ecovillage network are already well established in the history of our planet's evolution, and that we can move forward to how these principles can be adapted and utilized in the new information age era we are entering.

Who needs computational grids?


The average citizen on our planet may not understand why we would need a planetary computational grid, and yet this new possibility may end up transforming our lives to an even greater extent than railroads, automobiles, air travel, radio, and television did. We are just beginning to get a sense of what a world will be like in which computation, sensors, and real-time on-demand data storage and retrieval are all linked together as a functional whole on a global scale. The possibility of sitting in your living room in downtown San Francisco, Hong Kong, London, Madrid, or on top of a ridge in the Himalayan or Andean mountains, and working on a world class telescope facility from one of these remote locations, is a real possibility in the near future. The ability to program, operate, and download a computer simulation of the dynamical behavior of the universe during the first billion years of its evolution onto your home computer, from any position on the planet, in near real-time, can be a reality in the not too distant future. Imagine, if you are an ecologist, being able to access a single directory of information, in near real-time, about the historical or current sensor readings of the conditions of our planet's ecology, at any position on the earth. An artist could produce a new video, utilizing facilities from all around the world for special effects and editing, and then place the completed final work on the planetary computational grid, available for viewing at any position on the earth, within a matter of minutes. These and many other kinds of possibilities are potentially available to us in the 21st century. Smarr [6] has listed some broad categories of groups that stand to benefit from a planetary computational grid:

Computational scientists and engineers are perhaps the most obvious category. As we move into the age of complexity, more and more of what we seek to understand at a deeper level is going to require nonlinear iterative mathematical computation. Many of these calculations exceed the capacity of the largest supercomputer facilities. The ability to further accelerate these computations by distributing the computational workload over numerous supercomputer facilities would be a great improvement.

Experimental and applied scientists will also find the grid to be an extremely valuable tool. If there are 10 experimental scientists for every theoretician, then we certainly want this very valuable portion of our scientific community to have access to the grid. The ability to link all research equipment, sensors, and database outputs to high-speed computer control centers will be a tremendous boon for this group. Applied scientists will be able to collaborate and share time on equipment at research facilities around the world. Most research laboratories only operate 8 hours out of every 24-hour period, due to the sleep patterns of human beings. With extensive remote control, researchers from two other time zones in the world could operate the same facility for two additional 8-hour shifts, allowing a much greater return on investment at these facilities, which could then operate on a 24/7/365 schedule.

Nongovernmental associations can be formed in a manner which more equally distributes the use of re-
search and manufacturing facilities that currently tend to be centered in the largest urban areas. Smaller
states, cities, and ecovillages could access these facilities remotely, thereby producing a more equitable
usage distribution. Three existing examples of such programs are the Committee on Institutional Coopera-
tion, The Southeastern Universities Research Association, and the Experimental Program to Stimulate
Competitive Research.

Corporations will certainly benefit from the grid. Caterpillar and NASA have been working to move the concept of intranets to a much more sophisticated level, involving collaborative real-time interactive virtual prototyping and simulation. Caterpillar has already prototyped multiple tele-immersion sites at the international level. Allstate is looking at new types of computational pattern recognition, performing as close to real-time as possible, as a way to integrate its 1,500 claim offices into a single database.

Environmental researchers and associations have a real need to functionally integrate their multidisciplinary work into an interactive knowledge base, which will allow the study of the nonlinear, highly coupled, multi-time-space-scale interactions of changes in the environment in near-real-time.

Educational institutions could benefit enormously from the possibilities of distance learning all around the world. If we are to have any hope of providing high quality education to earth's growing population, it will have to be accomplished by amplifying the educational productivity and capacity of existing institutions, and this can be accomplished by adding virtual classroom capability to all of our existing facilities. Lectures at Stanford, Berkeley, MIT, Oxford, and Cambridge, to mention only a few, could reach additional millions of people by simply installing virtual broadcasting and archiving at all major universities.

National and state governments can better coordinate, distribute, and archive information, share their databases, and interact with their citizens if they have grid capabilities. The recent adoption of online automobile registration and renewal by departments of motor vehicles, and the Internal Revenue Service's adoption of online income tax filing, are examples of the increased convenience which governments can provide to citizens and businesses. Online voting, with proper citizen and nongovernmental monitoring, may become another convenience available to citizens in the future.

Business has already begun to explore the additional ways that a grid can be utilized to advertise, sell,
distribute, and provide support for products and services in the form of e-commerce, business-to-business
commerce, supply chain management, and customer relations management.

Consumers benefit from the grid by being able to purchase products, and to check competitive pricing on products and services from all around the world, without ever having to leave their homes.

Current and expected participants in the world grid


How much progress had already been made, as of the 2002 era, toward these planet-wide objectives? The National Science Foundation has funded a sustained project referred to as STAR-TAP. The goal of this project is to facilitate the long-term interconnection and interoperability of new and advanced international networks. This project will help us better understand the types of software applications and hardware technology that provide the greatest performance in highly distributed computation. The United States currently has the most aggressive grid program, and many nations have already tied their computational facilities into STAR-TAP. Some of the groups already participating are:
• the Canadian Network for the Advancement of Research, Industry, and Education
• the Singapore Internet Next Generation Research and Education Network

The nations or consortia which are moving toward establishing a relationship with STAR-TAP are:

• Asian-Pacific consortium
• NORDUnet (Denmark, Finland, Iceland, Norway, and Sweden)
• RENATER (French Education and Research Network)
• Japan
• Taiwan

• Brazil
• Russia
• APAN (Asia-Pacific Advanced Network)

The Electronic Visualization Laboratory at the University of Illinois at Chicago is currently working on the CAVERN Research Network. This project is intended to produce software that allows collaborative virtual spaces, where people at globally distributed locations can interact in near-real-time via hypernews, application galleries, and shared programs.

The grid vision is one that will require extensive collaboration to accomplish. The degree of amplification of human productivity that we can achieve with collaboration is not yet foreseeable. For centuries we have focused on competition. What we can accomplish with collaboration - in terms of a more efficient utilization of our resources, and the dynamic nonlinear amplification of our intellectual resources when combined with artificial intelligence cocreation technologies - is the challenge of the future.

Projected near-term deliverable computation


Foster and Kesselman [7] in 1999 provided some estimates of the increase in computational capability that it would be possible to offer the public over the following ten years. Their estimates projected an increase of three orders of magnitude within five years, and an increase of five orders of magnitude within ten years. These authors identify five areas in which they expect to see this improvement:
technological improvement
• Current trends in VLSI design and manufacturing promise a factor of 10 increase in computational
power over the next five years, and a factor of 100 over the next ten years.

an increase in demand-driven computational power

• Many software applications and their normal usage patterns demand substantial CPU time only on an infrequent basis; the remainder of the time, these applications sit idle. On-demand clustered servers could provide these episodic bursts of computation when required, freeing up numerous non-networked computers for other tasks. Such a program could increase overall computational power by three or more orders of magnitude.

utilization of idle capacity


• Studies indicate that minimal-capacity computers such as personal, business, and workstation desktops tend to use no more than 30% of their computational capacity while in operation. Studies also show that, even for parallel programs, a factor of two increase in utilization will not significantly diminish the productivity of the primary tasks already running on them. Proper and coordinated utilization of all this computational power could result in anywhere from a 100 to 1,000 times increase in peak computational capacity.

improved sharing of computational results

• Current estimates of the computational capacity needed to compute a daily weather forecast are about 10^14 numerical operations. If this computation covers a geographical area of interest to 10^7 people, and if we combine these two needs, we end up with a combined computation requirement of 10^21 operations. This figure is roughly equal to all the computations performed on all the computers around the world in one day. Thanks to
considerable sharing of weather computation resources, this larger figure is never required. However, this is not generally the case in other areas of demanding computation, where considerable parallel and duplicated effort is expended on a daily basis, wasting considerable computational resources through a lack of sharing. The overall improvement in computational capacity that could be achieved with sharing is difficult to estimate, but it must be very substantial. (A brief sketch of this arithmetic follows this list.)

improvements in problem-solving

• New approaches to computation should allow sophisticated computations at the desktop level to be handed off to network-enabled servers, without having to install the high-demand software on local node computers. Various types of tele-immersion techniques, allowing collaborative operation of simulations and exploration of the databases those simulations produce, should reduce the demand on many computers, which can then be freed up for other computational purposes.
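
As a quick check on the 'improved sharing' example above, the following sketch simply reproduces its order-of-magnitude arithmetic in Python; the figures are the rounded estimates quoted from Foster and Kesselman, not independent measurements:

# Order-of-magnitude arithmetic for the weather-forecast sharing example.
ops_per_forecast = 10**14        # numerical operations for one daily forecast
interested_people = 10**7        # people covered by the same forecast region

# Without sharing, each person's need would be counted separately ...
unshared_demand = ops_per_forecast * interested_people   # 10**21 operations

# ... but because the forecast is computed once and shared, the real cost
# stays at 10**14 operations.
saving_factor = unshared_demand // ops_per_forecast

print(f"unshared demand: {unshared_demand:.1e} operations")
print(f"actual demand:   {ops_per_forecast:.1e} operations")
print(f"saving factor:   {saving_factor:.1e}")
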

The greatest increase in computational capacity comes from the combined and synergistic application of all of these techniques simultaneously. Thinking of computation as a systemic planetary activity, rather than a personal or single-business activity, can bring us an enormous increase in computational efficiency and optimization.
In order for a planetary computational grid to be successful compared to non-grid alternatives, it must have a sufficient quantity and quality of infrastructure to ensure dependable, consistent, pervasive, and inexpensive computation [8]. If we can provide these conditions, along with new improvements in speech recognition interfaces, we can look forward to an era in which computational resources far beyond anything we have previously imagined will be available, on demand, almost anywhere on our planet. It is very possible that computation will transform every aspect of our intellectual, social, entertainment, economic, and governance activities. This will not be an easy task. It will require considerable and sustained investment over decades, if not centuries, and it will require new forms of innovation. A combined effort of human and artificial intelligence will collectively design and grow this enormous convergence of biocomputation with DNA, molecular, optical, silicon, and quantum computation.
Microsocieties or ecovillages have the opportunity of being on the cutting edge of these developments,
because they can design their societies from the outset with grid principles in mind. The same practices
would considerably increase the computational capacity of the intranet, the extranet, the capabilities of the
ecovillage network, and the network interface to the larger planetary computational grid.

Applications where the grid will be important


Foster and Kesselman [9] have categorized five types of applications where grid computation will prove
to be highly valuable, if not indispensable:
distributed supercomputing

• Surprising as it may be to the average person, there are computer simulations which exceed even the capacity of the largest supercomputer facilities, involving 5,000-30,000 CPUs. Distributed interactive simulations (DIS) - complex interaction scenarios with large numbers of interacting components (up to 10,000,000) arising in cosmology, computational chemistry, climate modeling, and war game modeling - can exceed the capacity of some of the largest supercomputers. Grid-coupled supercomputers, on a limited scale, have already been successfully utilized to solve such problems. As these grids grow larger, the co-scheduling aspects of these computations become ever more challenging.

high throughput computing

• High throughput computing can differ considerably from DIS, because it involves the additional problem of scheduling many different tasks across a widely distributed network. Here we are utilizing the combined computational capacity of a grid to perform hundreds or thousands of different tasks. One example of this type of computing is the use of hundreds or thousands of workstations, in a coordinated fashion, to design a new computer. The University of Wisconsin has utilized the Condor system of workstations, located around the world, to study problems such as molecular simulations, ground-penetrating radar, and diesel engine design. A similar approach has been used to solve large cryptographic problems. (A brief sketch of this pattern follows this list.)

on-demand computing

• On-demand computing is usually focused on cost-effective solutions for cases where computational capacity is not required on a constant basis. Some examples are NEOS and NetSolve, which involve network-enhanced numerical solver systems. This technique has also been used to provide real-time image processing for scanning tunneling microscopes, and the Aerospace Corporation has utilized it to process data from meteorological satellites.

data intensive computing

• Data-intensive computing focuses on synthesizing information from numerous sources into a comprehensive centralized database repository. Examples of research where this type of computation is required include the databases generated by high energy physics experiments, which require terabytes of storage daily and petabytes of storage yearly, and the astronomical digital sky surveys, which also require terabytes of image storage.

collaborative computing

• Collaborative computing is intended to support distributed interactive computational tasks where the participants are located far apart but need to collaborate within a single virtual work space. Examples include BoilerMaker at Argonne National Laboratory, which allows multiple users to design industrial incinerators; the CAVE5D system, which allows users to generate and explore large geophysical databases, such as a model of the Chesapeake Bay; and the NICE system, a collaborative virtual reality environment in which children create and interact in virtual worlds.
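
The high throughput pattern described above can be sketched in a few lines. The example below, in Python, farms a batch of independent, invented 'design point' jobs out to a local pool of worker processes; a real grid scheduler such as Condor would queue the same kind of task list to idle machines across a network rather than to local processes, and the simulate function and its toy efficiency model are hypothetical stand-ins:

from concurrent.futures import ProcessPoolExecutor

def simulate(rpm):
    """Stand-in for one independent job, e.g. one diesel-engine design point.
    A real grid job would run a full simulation; here we do toy arithmetic."""
    efficiency = 0.30 + 0.05 * (rpm % 7) / 7.0   # invented toy model
    return rpm, efficiency

def main():
    # A batch of independent tasks: the defining trait of high-throughput work.
    design_points = range(1000, 4000, 100)

    # Locally we use a process pool; a grid scheduler would instead hand each
    # task to whichever idle workstation on the network picks it up next.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, design_points))

    best_rpm, best_eff = max(results, key=lambda r: r[1])
    print(f"best design point: {best_rpm} rpm, efficiency {best_eff:.3f}")

if __name__ == "__main__":
    main()
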

Since the greatest amount of research taking place in grid computing is in universities and research insti-
tutions, it should come as no surprise that most of these examples center around research activities, but it
should be very easy to imagine how these same approaches could be used in manufacturing, design, the
entertainment arts, and economics.

A new paradigm in computational capacity


As a civilization poised on the leading edge of the 'information age,' where computation and simulation will play such a fundamental and important role in our lives, can we really afford such enormous computational waste? The obvious answer is no! One must ask why this subject has not captured the attention of the media. Perhaps it is because we still think of the computer as an individually owned appliance. In the pre-internet era this perspective may have been justified. However, in the post-internet era, when our computers are networked on a highly distributed global network, this perspective makes no sense at all. The vast majority of these computers are already interconnected across the highly distributed network we refer to as the internet. This is not primarily a hardware problem. The hardware is already connected
via the global network. So what is holding us back from utilizing this tremendous resource in a much
more optimal fashion?
The roadblocks consist of the following:
• We do not currently have a Distributed Computing Global Operating System Standard (DCGOSS), which would allow all the computers on the vast and growing internet to share the burden of the planet's total computational load. We need software operating systems that support multitasking, multithreading, and distributed computation. This should be handled at the operating system level (Microsoft Windows, Mac OS, Unix, etc.), so that application software developers do not have to deal with this complication themselves. (A minimal sketch of the idea follows this list.)

• At the network level we need to complete the optical fiber roll-out, so that all of our computers are connected by optical fiber, which would allow us to efficiently distribute and collect the local node computational products to and from higher level network synthesis and integration centers, where the distributed computation events can be assembled into comprehensible data.

• The final roadblock is purely conceptual in nature. We simply must learn not to think about our computers as individual technological devices. We must learn to see them as individual nerve cells in a much larger planetary organic nervous system. This total nervous system consists of all the computers and networks on the planet, in addition to all the human biocomputational power inherent in the sum total of human brains functioning on our planet.
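
To make the first roadblock above more tangible, here is a minimal sketch of the behavior a DCGOSS-style client might standardize: an ordinary computer donating idle cycles by repeatedly fetching a work unit, computing it, and reporting the result. The in-memory coordinator, the work-unit format, and the summing task are all hypothetical placeholders, since no such standard yet exists:

import time

# Hypothetical in-memory "coordinator"; a real grid client would talk to a
# network service defined by a shared standard, not to a local list.
PENDING_WORK = [(i, list(range(i, i + 1000))) for i in range(0, 5000, 1000)]
RESULTS = {}

def fetch_work_unit():
    """Ask the coordinator for the next unit of work, if any."""
    return PENDING_WORK.pop(0) if PENDING_WORK else None

def compute(unit):
    """Stand-in computation: sum a block of numbers.  A real unit might be
    one slice of a climate model or one protein-folding trajectory."""
    unit_id, numbers = unit
    return unit_id, sum(numbers)

def report(unit_id, value):
    """Return the finished result to the coordinator."""
    RESULTS[unit_id] = value

def donate_idle_cycles():
    """Main loop of a volunteer node: fetch, compute, report, repeat."""
    while True:
        unit = fetch_work_unit()
        if unit is None:
            break                      # no work queued; a real client would sleep
        unit_id, value = compute(unit)
        report(unit_id, value)
        time.sleep(0)                  # yield so foreground tasks keep priority

if __name__ == "__main__":
    donate_idle_cycles()
    print(f"completed {len(RESULTS)} work units:", RESULTS)
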

What can be done about this intolerable oversight?


The reality of this situation has not been missed by some of the current technology companies involved in computation. Sun Microsystems and IBM are already developing, and offering, distributed computing systems. However, these systems are still proprietary, or operating system dependent. This is a big step in the right direction, but we must have a global standard (DCGOSS) for distributed computing if we are ever to realize the full universal advantage of this new paradigm in computation. The distributed computing portion of every operating system should be operating system independent - the DCGOSS approach. Independent companies can compete at the operating system level, but they should cooperate and share the distributed computing development costs. Governments could also contribute to the development costs of distributed computing, by making NSF grants available to universities and research groups, with the intention of accelerating the deployment and adoption of global distributed computing.
For those of you who are unfamiliar with computer simulation, and with the enormous computational resources that large computations require, we will briefly summarize the basic reasons why large computer simulations need so much computational capacity.
Professionals who make a living attempting to forecast large complex systems - universes, clusters of galaxies, galaxy evolution, the long-term evolution of stellar systems, the internal and external dynamics of individual stars, the complex interaction of planets in solar systems, long-term global weather cycles and patterns, long-term global ocean cycles and currents, economic systems, and the complex nonlinear interactions of biological species within a global ecosystem over geological ages - know only too well that these large dynamical systems cannot be simulated analytically. These systems must be iterated numerically. Analytical formulas exist for some phenomena in the natural world. Where they do exist, they allow us to predict any future position, velocity, or acceleration of a system, at any given time in its evolution, provided one has the initial and boundary conditions necessary to plug into the equation. Most systems which exhibit linear, or very nearly linear, behavior can be solved in this fashion. Unfortunately, most of the systems in our universe are not linear. The high degree of system-to-system interconnectivity makes these phenomena so complex that they are not subject to analytical solution. This leaves the vast majority of systems in the nonlinear category, and nonlinear systems must be approached by iteration. Iterative solutions are referred to as numerical solutions. They require us to
predict the future position, velocity, and acceleration of a system based only upon its previous position, velocity, and acceleration - one step at a time. Any digital computation involves finite precision, which produces round-off errors, and these round-off errors, while small for any given step, add up collectively over many steps. The finite number of variables that can be tracked, the total number of digits of accuracy that is achievable, the hardware and software limitations, and the energy and financial costs of the computation eventually limit the step size, accuracy, overall size, and length of these iterated computations. The combination of all these factors defines what we refer to as a computational simulation. Even a single large nonlinear simulation taxes the capabilities of the most advanced massively parallel supercomputers, containing 10,000 or more CPUs. This is why we so desperately need to take full advantage of global distributed computing. We need the approximately 90 percent of the global computational capacity that is now being wasted because hundreds of millions of computers spend a large portion of their operating time idling at a small fraction of their full computational capacity.
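
To illustrate what iterating a nonlinear system 'one step at a time' means in practice, here is a minimal sketch that advances the Lorenz equations - a classic small nonlinear system, used here purely as an illustration - with a simple fixed-step update. The step size, parameters, and initial state are arbitrary illustrative choices:

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one step with a simple Euler update.
    Each new state depends only on the previous one, as described above."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def simulate(steps=5000, state=(1.0, 1.0, 1.0)):
    """Iterate the system; the cost grows with the number of steps, and
    finite-precision round-off accumulates along the way."""
    trajectory = [state]
    for _ in range(steps):
        state = lorenz_step(state)
        trajectory.append(state)
    return trajectory

if __name__ == "__main__":
    path = simulate()
    x, y, z = path[-1]
    print(f"state after {len(path) - 1} steps: x={x:.3f}, y={y:.3f}, z={z:.3f}")
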
We realize that galaxy cluster simulations may seem far removed from anything of importance in our everyday lives. However, the behavior of our native star and of our planetary climate system are of relevance to us all. The complex weather and ocean cycles, the ever growing effects of our own technology, the effects of our energy utilization habits upon the planet's weather and ecology, and the global interaction of our markets and economic systems are items that strike much closer to home in our daily lives, even if they are not yet topics of daily interest in the media. As our new electronically connected technological civilization moves forward, we will become ever more in need of, and dependent upon, large scale computational processes. We cannot afford to waste any of our potential computational capacity as we move forward in this age of complexification. Most of the problems we will face going forward will not be of the linear, analytical type. They will fall into the nonlinear, interactive, simulation camp, and therefore will require massively parallel and distributed computational capabilities. The scientific community is becoming fully cognizant of this reality, but unfortunately our general social, economic, corporate, and governmental institutions are still well behind the curve in this paradigm shift. Perhaps most important of all, the general public around the globe is almost completely in the dark about the importance of this issue. This condition must be reversed as quickly as possible! Our survivability as a civilization depends upon it!
We hope that our very brief introduction to this subject has been sufficient to convince you of the true
importance of distributed computing in this new and very challenging 21st century.

The global computational grid paradigm


As we stated in our introduction, this new 21st century we have entered is the leading edge of 'the information age.' What does this mean? It means that the objective ahead is to be able to supply the entire world's information archive and real-time information capability 24/7/365 - any time, anywhere, as efficiently and inexpensively as possible. In order to accomplish this, we must not only provide for the rapid acceleration of our computational capacity, but we must also utilize every single CPU cycle available in important and meaningful ways, which contribute to our increased understanding of the highly complex and nonlinear civilization we are in the process of building. It is also extremely important that we apply these same principles to the way we utilize our own distributed human biocomputational capabilities. We can no longer afford to waste human computational capability. Every human mind in the biosphere that is not fully engaged to the maximum of its genetic and educational capability is a waste of computational ability that we can no longer afford. Unless we want to become a terminal civilization that produced a brief flash of intelligence, only to stumble, fall, and inevitably remove ourselves from the galactic catalog of ascension civilizations, we must learn to utilize our combined human and artificial intelligence to the maximum. The evolution of the universe, the evolution of life, and the evolution of self-aware, evolving, cognitive beings like ourselves are all computational processes in the
broader meaning of computation. Our universe, and everything we encounter in it, is a computational process. All of these processes are forms of 'computational equivalence' which exist in our universe, and which can be thought of as a form of universal computation. To think that we can somehow advance through the enormous number of levels of intellectual and spiritual advancement necessary to comprehend this universe, without mastering the combined science and art of computation, is inconceivable. Our current concept of what computation is remains very limited. Computation is more than a 'black box' with a silicon-based motherboard, CPU, and memory. Computation is all around us. For billions of years preceding our arrival, biology has been performing molecular computation. The warm oceans of our early earth provided the incubator for the 10^60 to 10^100 self-organizing evolutionary computations that would seem to be required to produce life forms at their current state of development. The earliest experiments may have been no more complex than some of the simplest cellular automata computation rules. We now know that some cellular automata rules have already been proven to support universal computation, in a manner similar to a Turing machine's capacity to support universal computation. More advanced levels of biology probably involve multiple levels of computation. The early stages of cellular-automata-like computation may have led to the development of simple organic molecules, and then higher level programming languages may have been built upon this foundation, much as our many high level programming languages (Fortran, COBOL, C, Mathematica, etc.) are built upon machine level languages. If you think computation is a subject foreign to your life, take a look inside your body and your brain, and you will see computation everywhere you turn. We must stop thinking of computing as something that just showed up in the arena of earth's evolution 50 years ago. Biology is already computing at the molecular, if not the atomic, level. With the coming of nanotechnology, and its potential for quantum computing, our man-made computers may soon be computing at the atomic level. We should also not disregard the earlier pre-biology universe as being too simple to have involved computation. The principle of computational equivalence suggests that as we go forward in our understanding of the universe, we will find this new principle at every level of experience and in every time period of evolution. It suggests that a significant portion of even relatively simple computational systems are capable of producing complex behavior - if given extended and significant iteration time spans. For a much more extended discussion of this topic, we highly recommend Stephen Wolfram's book [10].
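
As a concrete taste of the 'simplest cellular automata computation rules' mentioned above, here is a minimal sketch of a one-dimensional elementary cellular automaton. Rule 110 is chosen because it is the elementary rule that has been proved capable of universal computation; the grid width, step count, and starting pattern are arbitrary illustrative choices:

def step(cells, rule=110):
    """Apply an elementary cellular automaton rule to one row of cells.
    Each new cell depends only on its three-cell neighborhood in the previous row."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right      # neighborhood as 0..7
        new.append((rule >> index) & 1)                  # look up the rule bit
    return new

def run(width=64, steps=30, rule=110):
    """Start from a single live cell and print the evolving rows."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

if __name__ == "__main__":
    run()
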

So we hope that you are now convinced that computation is not quite so foreign to our lives as you might have thought, and we hope we have persuaded you to become avid promoters of distributed computation. We very much need to start a bottom-up groundswell movement to advance distributed computing. We can think of no better place to begin such a movement than in a microsociety network of optical-fiber-linked ecovillages. Each ecovillage should operate its own intranet in as distributed a fashion as possible, and then provide any additional CPU time to the broader network on a reciprocal time share basis. Simulations which exceed the computational capacity of the local nodes can be spread across larger and larger portions of the network.
The challenges are enormous; the thrill of participating is exhilarating; the results of this collaboration are
truly beyond our imagination.

Exploring internet ecology and topology


The ecology and topology of the cyberspace that is growing within the internet is a very important and fascinating subject. Dodge and Kitchin [11] have published a beautiful book, Atlas of Cyberspace, which provides a reference source on the various experimental means of visualizing and utilizing the vast resource of the internet. If you are unable to buy or borrow a copy of the book, many of the links below point to the web sites that it covers (at the time this article was written, in 2002, these web sites were current, but academic web sites often change location). The following web sites should
prove very useful in trying to gain an overview of the early stages of what will one day be the planetary
computational grid.
http://www.cybergeography.org/
http://www.telegeography.com/
http://www.ee.surrey.ac.uk/Personal/L.Wood/constellations/general.html
ftp://ftp.cs.wisc.edu/connectivity_table
http://www.caida.org/tools/visualization/mapnet/
http://graphics.stanford.edu/papers/mbone/
http://graphics.stanford.edu/videos/mbone/
http://209.9.224.243/peacockmaps/
http://www.caida.org/tools/visualization/walrus/gallery1/
http://moat.nlanr.net/Software/Cichlid/gallery/gallery.html
http://www.nordu.net/stat-q/load-map/ndn-map,,traffic,busy
http://acg.media.mit.edu/people/fry/valence/
http://www.cs.umd.edu/hcil/treemaps/
http://smg.media.mit.edu/people/Judith/VisualWho/
http://acg.media.mit.edu/people/fry/anemone/
http://www.activeworlds.com/community/maps.asp

Well, we have come to the end of this journey for now. We hope you have enjoyed this discussion of how we look at computation, and of the role it must play in SSPEN and within our global biosphere.

References
1. Foster, Ian, and Kesselman, Carl (eds.). The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 1999, p. x.
2. Foster, Ian, and Kesselman, Carl (eds.). The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 1999, pp. x-xx.
3. Sole, R. V., Cancho, R. F., Montoya, J. M., and Valverde, S. Selection, Tinkering and Emergence in Complex Networks. 2002, p. 1. (http://www.santafe.edu/sfi/publications/Working-Papers/02-07-029.pdf)
4. Sole, R. V., Cancho, R. F., Montoya, J. M., and Valverde, S. Selection, Tinkering and Emergence in Complex Networks. 2002, p. 3. (http://www.santafe.edu/sfi/publications/Working-Papers/02-07-029.pdf)
5. Sole, R. V., Cancho, R. F., Montoya, J. M., and Valverde, S. Selection, Tinkering and Emergence in Complex Networks. 2002, pp. 3-4. (http://www.santafe.edu/sfi/publications/Working-Papers/02-07-029.pdf)
6. Smarr, Larry. Grids in Context. In Foster, Ian, and Kesselman, Carl (eds.), The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 1999, pp. 7-12.
7. Foster, Ian, and Kesselman, Carl. Computational Grids. In Foster, Ian, and Kesselman, Carl (eds.), The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 1999, pp. 16-17.
8. Foster, Ian, and Kesselman, Carl. Computational Grids. In Foster, Ian, and Kesselman, Carl (eds.), The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 1999, pp. 18-19.
9. Foster, Ian, and Kesselman, Carl. Computational Grids. In Foster, Ian, and Kesselman, Carl (eds.), The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 1999, pp. 21-25.
10. Wolfram, Stephen. A New Kind of Science. Wolfram Media, 2002.
11. Dodge, Martin, and Kitchin, Rob. Atlas of Cyberspace. Addison-Wesley, 2001.

