
AI & The Fermi Paradox: Human Immortality or Civilization's Inevitable Destruction?

By Jon Bowman

Table of Contents

1. Introduction
2. What Exactly Is The Fermi Paradox?
3. Possible Fermi Paradox Solutions
4. The Kardashev Scale
5. The Great Filter
6. AI Explained
7. The Effects of AI On Humanity
8. Mind Uploading & The Technological Singularity
9. How AI And The Fermi Paradox Could Be Linked
10. Conclusion
11. References & Further Reading

1) Introduction

We are living in an extraordinary period of time. Mankind is rapidly approaching the era
of the most exciting technological revolution in all of history. Welcome to the exciting
world of Artificial Intelligence (AI), the alarming Fermi paradox, and the quest to avoid
extinction and achieve human immortality. Prepare to explore the connection between
these fascinating concepts, along with the latest explanations and a detailed look at what
the future may hold for our species. While reading this deeply thought-provoking book,
expect nothing less than shocking conclusions, bizarre ideas, and a strong sense of
exhilaration (along with some concern!) for the fast-approaching revolution. The question
is: are we prepared, or in way over our heads?

2) What Exactly Is The Fermi Paradox?



As the originator of the Fermi paradox, Enrico Fermi once famously asked, "Where is
everybody?" referring to our apparently lonely existence in the cosmos. His question was
meant to highlight the startling lack of evidence for extraterrestrials in our incomprehensibly
huge galaxy. Could it be that we are the only ones in our galaxy?

To begin analyzing this question, we must first consider the size of the region where life
could be residing. An estimated 100-200 billion stars exist in our Milky Way Galaxy
alone, yet despite decades of continuous scanning, no definite sign of an alien race has
been detected. Several interesting broadcasts have been picked up by contemporary
efforts, but nothing conclusive has yet been discovered that could not be explained by a
natural origin.

How can we tell whether a signal is natural or artificial? It can be difficult to distinguish
the two, and a few signals have initially appeared to be artificial, but when the same area
was rescanned in hopes of receiving the same signal again, nothing was found. In general,
a signal of extraterrestrial origin should exhibit some type of coherence or repetition that
cannot be explained by a natural source. The signal's strength (amplitude), frequency, and
pattern all come into play when scanning the sky for distant life. For example, Earth
would be easily and immediately determined to harbor life because of the constant stream
of low-frequency radio signals emitted by communication equipment such as
television and radio broadcasting towers. For this reason, it is thought that other
civilizations with a similar level of technology should not be difficult to detect.
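To make the idea of coherence concrete, here is a minimal Python sketch, not a real SETI pipeline: natural radio noise spreads its power across many frequencies, while an engineered carrier concentrates power in one narrow bin that stands far above the background. All the numbers below are illustrative assumptions.

```python
import numpy as np

# Toy sketch: broadband "natural" noise versus a weak narrowband tone.
rng = np.random.default_rng(42)
sample_rate = 1000.0                    # Hz (hypothetical receiver)
t = np.arange(0, 10, 1 / sample_rate)   # 10 seconds of samples

noise = rng.normal(0, 1, t.size)                 # broadband background
beacon = 0.2 * np.sin(2 * np.pi * 137.0 * t)     # narrowband "artificial" tone
data = noise + beacon

power = np.abs(np.fft.rfft(data)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / sample_rate)

# Flag bins far above the typical background power -- a crude stand-in
# for the statistical tests real searches use.
candidates = freqs[power > 20 * np.median(power)]
print("Candidate narrowband frequencies (Hz):", candidates)   # -> [137.]
```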

Fermi's paradox is certainly not explained by a lack of trying. The search for extraterrestrial
intelligence (SETI) is the collective effort of some two million people collecting and
deciphering electromagnetic signals in hopes of discovering artificial sources from space.
SETI's efforts are focused on the Milky Way because radio signals can only travel a finite
distance before becoming too distorted to decipher efficiently. An enormous amount of
energy is needed to send and receive signals across the vast distances between galaxies, so
efforts are concentrated within our own galaxy. Regardless, the number of stars and
planets in the Milky Way Galaxy alone provides a search area large enough that our
results are well beyond a statistical fluke.

At the very least, hundreds of civilizations should be emitting signals within our galaxy,
and the implication of finding zero is scary. But first, how can we arrive at an
approximation of how many civilizations there should be if we have not detected any yet?
We can use a statistical tool called the Drake equation to gain a rough approximation of
how many intelligent civilizations we should be able to detect inside our galaxy right now.

The Drake equation: N = R x Fp x Ne x Fl x Fi x Fc x L



Which means:

The number of planets in our galaxy currently supporting detectable intelligent life (N) =
the current rate of star formation per year in our galaxy (R) × the fraction of those stars
with planets (Fp) × the number of planets that can support life per star with planets (Ne) ×
the fraction of those planets that actually develop life (Fl) × the fraction of those planets
that develop intelligent life (Fi) × the fraction of those civilizations that develop detectable
communication (Fc) × the length of time a civilization releases detectable signals into
space (L).

Got all that? Let's go through each variable and calculate a conservative answer given the
most recent constraints.

R = current rate of star formation per year in our galaxy. NASA and the ESA estimate
that an average of about 7 stars are born in our galaxy every year. (7 stars/year)

Fp = fraction of stars with planets. This has been estimated using microlensing surveys
to be almost 1, meaning that most stars contain at least one planet. (1)

Ne = number of planets that could support life per star with planets. To determine
this number, we will count the planets located in the habitable zone. Data
from the Kepler spacecraft predicts that up to 40 billion Earth-sized planets orbit in
the habitable zone of their host star in the Milky Way. Since roughly 100 billion stars exist
in our galaxy, roughly 40% of stars host a planet in their habitable zone. To complicate
things further, moons may also support the necessities for life to form. (0.4)

Fl = fraction of those planets that actually develop life. This is difficult to estimate
since Earth is the only planet known to harbor life. In other words, there are zero degrees
of freedom. But the fact that life began early in Earth's history suggests that it should
emerge quite easily on planets with favorable conditions. However, it appears that life
came into existence only once during Earth's history (unless other lineages have since
gone extinct), since all known life shares a common ancestor. It is plausible that liquid
water, tectonic plates, and radioactive elements are at least good indicators that life could
begin on a planet. For our calculation, we will estimate that 1 in 10 planets located in the
habitable zone eventually develops life. (0.1)

Fi = fraction of life-bearing planets that develop high intelligence. This is another
highly speculative figure, because humans are the only intelligent form of life we know
of. The probability that life-bearing planets will have the time to progress to that point
before going extinct is largely up for debate. Those who prefer a larger value for this
variable assert that life is resilient (a good example being extremophiles such as those
discovered living in Lake Vostok deep under the Antarctic ice), having survived several
mass extinction events. Those who prefer a smaller value argue that it took a long time
for us to develop large mental capacity, and that humans are the only species to achieve
this out of the billions that have lived on Earth throughout history. Although
approximations vary wildly, for our equation we will estimate that only half of all
life-bearing planets survive long enough for advanced intelligent life to form. (0.5)

Fc = fraction of those planets that develop interstellar communication. Again, this is
a speculative variable, but considering the relatively short time it took for intelligent life
on Earth to develop the ability to broadcast radio waves, it should not take long for
intelligent life elsewhere to begin broadcasting some sort of detectable waves across the
galaxy. It must also be mentioned that radio broadcasts are not the only detectable type,
although radio waves are the most energy efficient, and therefore the most likely to be
received by intelligent life. We continuously scan the sky across almost all frequencies of
the electromagnetic spectrum, aware that other types are possible. After consideration,
this variable will be given a value of 0.5, which means that half of all intelligent
civilizations develop a method of communication that can be detected across the galaxy
before going extinct. (0.5)

L = length of time (years) a civilization releases detectable signals into space. We have
only had the technology to emit radio waves for a little over 100 years, and it is tough to
pinpoint an average, since ours is of course the only civilization known to communicate
wirelessly over long distances. The biggest fear is that a civilization with the technology
to broadcast radio waves might also have (or will soon have) the technology to cause
global extinction, much as we feared extinction from atomic bombs shortly after we
began to communicate wirelessly. On the other hand, it is argued that a civilization with
that level of technology should be able to flourish, and maybe even reduce the chance of
global extinction. Estimates range from a few years all the way up to hundreds of millions
of years, but conservatively we will assume an average broadcasting lifetime of five
thousand years. (5,000 years)

Now that we have a low approximation for all the variables in the Drake equation, we can
calculate a final answer. Needless to say, estimates vary wildly because proper data is
lacking. For example, finding extinct life on Mars (not related to life on Earth) or any
other planet would alter the results significantly, as it would imply that life arises fairly
easily, but has difficulty surviving long enough to become intelligent. Similarly, if
advanced technology or remnants of a civilization are found on another planet, it would
imply that life does not last long after becoming intelligent, for one reason or another. We
will explore numerous "filter" events that could prevent life from progressing past a
certain point in a later chapter. For now, here is the resulting conservative estimate for how
many extraterrestrial sources should be detectable inside our galaxy at this very moment.

R x Fp x Ne x Fl x Fi x Fc x L = N

7 x 1 x 0.4 x 0.1 x 0.5 x 0.5 x 5000 = 350

Remember, this is a generously low estimate. So at the very least, if there should be
hundreds of alien races in just the Milky Way Galaxy, why do we appear to be so alone?
This is the heart of the Fermi paradox.
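For readers who want to experiment, here is a minimal Python sketch of the calculation above, using the chapter's conservative values; changing any input shows how sensitive N is to these guesses.

```python
# The Drake equation with this chapter's conservative estimates.
R  = 7      # star formation rate in the Milky Way (stars/year)
Fp = 1.0    # fraction of stars with planets
Ne = 0.4    # habitable-zone planets per star with planets
Fl = 0.1    # fraction of habitable planets that develop life
Fi = 0.5    # fraction of life-bearing planets that develop intelligence
Fc = 0.5    # fraction of intelligent civilizations that become detectable
L  = 5000   # years a civilization remains detectable

N = R * Fp * Ne * Fl * Fi * Fc * L
print(f"Detectable civilizations in our galaxy right now: N = {N:.0f}")  # 350
```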

To further complicate things, even a single advanced civilization should be able to
dominate the entire galaxy in a cosmically short period of time, due to the exponential rate
of technological improvement. For example, the amount of change we saw from, say, the
year 1400 to 1500 was nothing compared to how much we improved from 1900 to 2000.
This acceleration results from a growing sustainable population, along with the fact that
our collective knowledge is built upon past knowledge. We are living at a unique time in
history, when a massive technological shift occurs within a single human lifetime rather
than over thousands of years. If this trend continues, these shifts will eventually occur
yearly, or even daily!

The exponential rate of improving technology makes it nearly impossible to imagine the
level of technology that a civilization 1,000 years ahead of us would have; 1,000 years is a
relatively small amount of time considering the age of the universe and the amount of
time other civilizations have potentially had to develop. Their technological abilities
would be unimaginable to modern 21st-century humans. Civilizations at this level might
have the ability to easily spread from planet to planet, or even star to star. And by using
probes (especially self-replicating Von Neumann probes), it is not far-fetched to imagine a
super-advanced civilization conquering entire galaxies in a cosmically short amount of
time. (1)

In fact, a human could even theoretically travel interstellar distances in a short amount of
time by taking advantage of Einstein's theory of special relativity. By traveling at near
light speed, time slows for a person inside a rocket. For example, at 86% of the speed of
light, one year on Earth would take only about half a year on the rocket. At 99.9% of the
speed of light, one year on Earth would equal just over two weeks on the rocket. A trip to
the next closest star system (about 4 light years away) could therefore take much less time
for the people aboard a fast-moving rocket than it appears to take from Earth. This happens
because as velocity increases, space in front of the moving object contracts, shortening the
distance to be covered.
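These figures follow directly from the Lorentz factor of special relativity; here is a small sketch to verify them.

```python
import math

def ship_years(earth_years: float, v: float) -> float:
    """Proper time aboard a rocket moving at speed v (as a fraction of c),
    per special relativity: t' = t / gamma, gamma = 1 / sqrt(1 - v^2)."""
    gamma = 1.0 / math.sqrt(1.0 - v ** 2)
    return earth_years / gamma

print(ship_years(1.0, 0.86))    # ~0.51 years: about half a year
print(ship_years(1.0, 0.999))   # ~0.045 years: just over two weeks
```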


If entire galaxies can be conquered in a blink of cosmological time by just one
civilization a tiny bit ahead of us, why don't we see any evidence of this happening? Not
to mention the enormous amount of energy a super-advanced civilization would surely
require; the waste heat from that energy would be hard to miss. Are we the first, or is
another factor at play? To attempt an answer, let's explore the top ideas that could
explain the puzzling Fermi paradox.

3) Possible Fermi Paradox Solutions



The Fermi paradox has not yet been solved, but scientists have suggested several
possibilities. Below are the top ideas (along with some that are a little outlandish, yet
still very much possible) about why our corner of the galaxy seems eerily lifeless.

1) Rare Earth Hypothesis

This hypothesis assumes that the conditions necessary for life are so unlikely that it is an
improbable stroke of luck that we are here; therefore we must be special. This idea comes
off as a bit narcissistic to me, but let's explore the prerequisites for life to develop on a
planet anyway. To be sure, there are numerous factors that must be met, and it is
phenomenal that we beat the odds, though given how complex life is, a long list of
prerequisites should be expected. To begin, our planet must be located in the right part of
the galaxy, at the right distance from the sun with a stable orbit, and have a large gas
planet nearby to clear out asteroids and space debris without hurling Earth away from the
sun. It must also contain the right balance of light and heavy elements (especially carbon),
tectonic plates, an atmosphere, a liquid ocean (preferably water), and a magnetic field to
block out dangerous radiation. Furthermore, a large moon to produce tidal forces and
stabilize Earth's tilt may have been important.

Proponents of this hypothesis argue that alien civilizations are so rare and so far apart that
we will almost certainly never come into contact with them because of the immense
separation of space. Suitable planets are certainly rare, but considering the enormous size
of the galaxy, not to mention the universe, it would seem even more unlikely that we are
alone. Earth-like exoplanets are already being discovered frequently, suggesting that Earth
may not be as rare as we would like to imagine.

OK, so there are numerous planets with suitable conditions for life, but that does not
necessarily mean life will arise. History tells us, however, that once Earth had
suitable conditions, life appeared shortly thereafter. This suggests that given a suitable
planet, life often finds a way to flourish. If we ever discover signs of ancient life separate
from our own, especially within our Solar System, this hypothesis would shatter. (2)

2) We Are The First

As unlikely as it may seem, it is possible that we are simply the first civilization to achieve
the ability to broadcast and collect electromagnetic waves for long-distance
communication. The early universe was exceptionally hostile to life, with deadly gamma
ray bursts potentially wiping out early life. A gamma ray burst is among the most
destructive natural events in the universe. It occurs when a massive star collapses or two
ultra-dense stellar remnants (such as neutron stars) collide, releasing an intense blast of
radiation that can sterilize everything in its path over a distance of many light years. This
happened quite frequently in the early universe, when its density was much greater than it
is today. Luckily, the frequency of these deadly bursts has diminished over time, and the
universe settled enough to allow planets to progress in peace. This idea assumes that after
the quantity of gamma ray bursts decreased sufficiently, Earth began forming life before
any other planet. Considering the vast size of the galaxy, though, the chance of us being
the first intelligent species is extremely low. If we truly are the first, it follows that
countless other civilizations are close behind. (3)

3) The Zoo Hypothesis

The zoo hypothesis states that we are quietly being watched. Alien civilizations could be
refraining from contacting us for various reasons. Maybe they are passive and waiting for
us to abandon our violent ways. Maybe they are observing how often a civilization
avoids annihilation long enough to spread to other planets. They could even be studying
how we manage issues like global warming or increasing energy consumption. Finally, it
could be a scenario along the lines of Star Trek, where less advanced civilizations are left
alone to develop without assistance. If aliens contacted us, we would undoubtedly gain a
mountain of knowledge and technology. Just like ducks in a park, we might grow
dependent on them, weakening our ability to survive on our own. Do not feed the humans!

The main problem with this idea is that it does not explain why we haven't discovered
any signs of alien life, unless our silent observers destroyed all nearby life forms. We are
actively looking for signs of life beyond our planet, and have found no worthwhile
evidence despite our vigorous attempts to do so. Whether or not we are being
watched should not alter the fact that we have not observed alien life from afar.

4) Great Filter Hypothesis

The Great Filter hypothesis states that there are one or more great hurdles that all
civilizations must get through before reaching the Singularity. (A chapter on the Singularity
is coming up.) It is possible that most civilizations get stuck and go extinct before
overcoming the Great Filter event, whatever that may be. The Great Filter could be
surviving nuclear destruction, climate change, or an unseen event hundreds of years from
now. This raises the question: have we already gotten past the Great Filter, or have we yet
to encounter it? We will go into much greater detail in a later chapter, so keep this
intriguing hypothesis in mind.

5) Our Universe is a Simulation



This one is a little spooky, so be sure to turn the lights on and check under the bed before
reading. Ready? OK, this hypothesis presumes that we are living in a simulation, or
"the matrix," if you will. It is possible that our universe was programmed by someone in a
separate universe and left alone, or even forgotten. The reason we do not see any
signs of alien life is that they were never programmed into the simulation. Maybe we
are living in a baby universe created as an experiment, or possibly as a zoo for the
amusement of others. If we ever reach the point of creating baby universes of our own,
we can reasonably suspect that our universe is itself a simulation, possibly with multiple
layers above our own. The scary part about the simulation hypothesis is that if any
simulation above us were turned off, we would instantly vanish. In The Singularity Is
Near, Ray Kurzweil suggests that the best way to avoid our simulation being shut down is
to be interesting by bringing about the Singularity. Is it only me, or are you getting the
feeling of being watched? (4)

6) We Are Not Worth The Cost of Travel

Traveling interstellar distances takes a ton of resources and time. Maybe we simply are not
worth the cost of travel. Earth does not really contain any special resources that cannot be
found elsewhere. Plus, the desire to explore the hazardous unknown may be a trait
exclusive to humans. Alien races might be perfectly content where they are, lacking the
motive to invest in long-distance travel. It is possible that after attaining a certain level of
technology, alien civilizations become able to produce everything they need and want.
They may have created a simulation to live in, eliminating the need to traverse the
dangerous expanses between star systems. As the popular Earthly saying goes, "The only
journey is the one within." This may explain why we have not been visited during the
short period of our recorded history, but it still doesn't explain why we have not seen any
signs of life elsewhere.

7) They Are Hiding From Something Dangerous

Another possibility is that all alien civilizations are keeping quiet to stay hidden from
something dangerous, possibly a predatory race (organic or AI) conquering star systems
containing life. For the same reasons people conquer land on Earth, an interstellar race
could be conquering planets for territory, resources, or even as a preventative measure to
keep civilizations from becoming a powerful threat. Once a civilization begins
broadcasting signals, its position is no longer a secret. We have only begun broadcasting
recently in our history, so there could be a dangerous predatory race on its way to Earth
right now!

8) We Cannot Pick Up or Decipher Their Broadcasts


An alien civilization could be right in front of our eyes and we might not even notice. The
biggest issue here is that radio signals spread out over long ranges according to the
inverse-square law: as a radio signal travels through space, its power falls off with the
square of the distance, making it progressively harder to receive. Whether or not a signal
is detectable mostly depends on the strength (amplitude) of the signal and the sensitivity
of our detectors. It is possible that we have not found artificial signals because our
technology is inadequate to pick out faint interstellar signals. Signal amplification and
data correction techniques are rapidly advancing, and every year we are able to detect
weaker and more distant signals. Current SETI efforts are focused on detecting highly
energetic signals from advanced civilizations. (5)
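A toy calculation of the inverse-square law shows how quickly an Earth-like broadcast fades; the transmitter power below is an illustrative assumption.

```python
import math

# Received flux falls with the square of the distance to the transmitter.
def flux(power_watts: float, distance_m: float) -> float:
    """Flux (W/m^2) from an isotropic transmitter at a given distance."""
    return power_watts / (4 * math.pi * distance_m ** 2)

LIGHT_YEAR = 9.461e15   # meters
P = 1e6                 # hypothetical 1 MW broadcast tower

print(flux(P, 4 * LIGHT_YEAR))      # heard from the nearest star system
print(flux(P, 1000 * LIGHT_YEAR))   # heard from 1,000 light years away
# The second signal is (1000/4)^2 = 62,500 times weaker, and 1,000 light
# years is only about a hundredth of the way across the galaxy.
```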

Another possibility is that after a certain stage of a civilizations technological
development, the method of long distance communication changes and becomes
undetectable to us. For example, this could be done using concentrated lasers as opposed
to our Earthly method of an expanding sphere of photons spreading in all directions. A
direct method could be more cost effective with less data loss. Not to mention it would
also have the advantage of being covert (Refer back to #7 for an example of why this
might be very important).

One intriguing idea is that a highly advanced alien civilization might be nearly impossible
to detect if it has built a Dyson sphere. This is a megastructure that encompasses a star,
collecting most, if not all, of the star's radiation using technology similar to solar panels.
For a civilization with a massive energy requirement, this giant structure would obscure
its star by absorbing most or all of its radiation, rendering it invisible to viewers from afar.
We can determine a star's distance by measuring its color and brightness, but if its light
(radiation) is blocked, then we perceive the location as blank space. How many of these
super-advanced societies are hiding in our galaxy without our knowledge, disguising their
host stars with a Dyson sphere or similar structure? Civilizations ahead of us may be
difficult to detect simply because their host star is concealed.

Interestingly, the luminosity of one star (KIC 8462852) was recently found to have been
decreasing all the way back to the 1890s, when measurements began. A decreasing
brightness is exactly what would be expected of a star with a Dyson sphere under
construction around it. SETI then spent two weeks scanning the star across various
frequencies, including microwaves, which are predicted to be a byproduct of the
propulsion of spacecraft building such a structure. Unfortunately, no clear evidence was
found, and the reason for the star's decreasing luminosity remains a mystery. (6)

Why do we think that advanced civilizations might build a Dyson sphere? To explain, the
next section will discuss the projected evolution of society and the popular Kardashev
scale.

4) The Kardashev Scale



Astrophysicist Nikolai Kardashev created the Kardashev scale in 1964 to describe how
civilizations naturally advance. It is organized into several categories defined by a
civilization's energy usage. Energy was chosen as the yardstick because every civilization
must inevitably use more of it in order to evolve. All civilizations must first reach Type I
before progressing to Type II, and so on. (7)

We are considered a Type 0 civilization until we can control the energy equivalent of the
solar energy our entire planet receives. At our current rate of increasing energy usage, we
should reach Type I in less than 200 years. The following are the levels of technology by
which any and all civilizations can be classified.

Type I: This civilization is able to produce the amount of energy its planet receives
through solar radiation (about 10^16 watts). We currently consume about 1.5x10^13 watts
on Earth. (8)

Type II: This civilization is able to harness energy equivalent to the entire output of its
star (about 10^26 watts). This could hypothetically be done with a Dyson sphere (a
massive collection of solar panels surrounding a star), or perhaps with a whole collection
of energy-producing planets.

Type III: This civilization is able to control all of the energy of its galaxy (about 10^36
watts). By this stage a civilization will likely have conquered most of its host galaxy, and
possibly begun sending probes to nearby galaxies.

Type IV+: When the scale was first developed, Kardashev believed that the laws of
physics would limit civilizations to Type III. Scientists later extrapolated the scale beyond
Type III. By extension, Type IV would amount to an energy consumption of about 10^46
watts, Type V would be 10^56 watts, and so on. By this point, a civilization would be
godlike to humans, accomplishing tasks we could not even imagine today.
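For readers who want to place an energy figure on this scale, Carl Sagan proposed a simple interpolation between the integer types, K = (log10(P) - 6) / 10 with P in watts; a quick sketch using the chapter's figures:

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolation: K = (log10(P) - 6) / 10, so
    10^16 W -> 1.0, 10^26 W -> 2.0, 10^36 W -> 3.0."""
    return (math.log10(power_watts) - 6) / 10

print(kardashev(1.5e13))   # humanity today: ~0.72
print(kardashev(1e16))     # Type I: 1.0
print(kardashev(1e26))     # Type II: 2.0
```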

The goal of humankind (and presumably of all races) is to avoid extinction by spreading
throughout the universe. Many events could destroy us before we become an interstellar
race, so swiftness is important. In order to become essentially immortal, our species must
spread to other planets, and eventually other star systems. This would be the
technological equivalent of a Type II civilization on the Kardashev scale.

We are currently in the planning stage of a base on Mars. This is the first step towards
putting our eggs in multiple baskets. If one planet were to get hit by an asteroid for
example, we would still have another planet continuing our existence. The final step
would be to spread to other star systems. Only after this stage would humanity become
truly immortal.

Gamma ray bursts have the power to wipe out entire star systems almost completely
without warning. A gamma ray burst could be on its way to the Solar System right now,
and we would not know until it is already here. Even if we had a base on Mars, a gamma
ray burst could still have the power to take out both planets. However, no natural event is
known to wipe out multiple star systems. Only at the interstellar stage would humanity be
considered safe from sudden extinction.

One thing is for sure; we must spread to other star systems before an extinction event
occurs. It is only a matter of time, as history reminds us. But what if the event that causes
our destruction is not natural at all, but is caused by our own actions? Is there some
great barrier that all civilizations must strive to overcome before they are safe? This
possible barrier is called the Great Filter, and the next section will look at it in detail.

5) The Great Filter



The Great Filter was mentioned earlier in this book as a possible solution to the Fermi
paradox. It is possible that we have not found signs of extraterrestrial life because one or
more difficult barriers must be overcome in order for a civilization to avoid extinction.
What could prevent life from progressing up the Kardashev scale? It is time to take a
closer look at this alarming idea.

Many ideas exist about which events a civilization must overcome to avoid extinction. The
question is: have we passed the Great Filter event, or has it yet to come? Whatever the
event, something seems to kill off life before it expands to other star systems. The most
popular ideas are as follows.

1) Single-cell life forming

Could the hardest part be the first step? This is unlikely, because we know life began
shortly after Earth began to cool. But what if Earth was just lucky to be in exactly the
right place? This is also unlikely given the number of planets in the galaxy. With recent
data from exoplanets, we are finding evidence that Earth-like planets may not be the
exception but the norm. It is estimated that most stars host at least one rocky planet in
the habitable zone. The sheer number of suitable planets, in addition to the rapid rise of
single-cell life on Earth, makes this first event unlikely to be the Great Filter.

2) Transition from single-cell to multi-cell life

For much of its history, life existed only in single-cell form. This transition took an
enormous amount of time to happen, so could it be the near-impossible event that makes
intelligent life so improbable? Probably not, as it is known to have happened several times
throughout Earth's history. In fact, volvocine green algae have only recently passed this
transitional stage, which makes them useful to study. The transition from single-cell to
multi-cell life has even been observed in the lab, and is reasonably well understood. This
event is unlikely to be the Great Filter, because it has happened on Earth more than once. (9)

3) Use of tools and ability to manipulate the environment

Life must develop a way to interact efficiently with the environment in order to create the
technology to send and receive signals through space. Could there be animals on many
planets stuck at this stage, lacking the ability to use tools?



Of all the species on Earth, humans are the only ones that combine sophisticated tool use
with large brain capacity. Other species have thumbs, and some can even manipulate the
environment using simple tools. However, only humans have the brain capacity to create
long-distance radio communication. This is probably because large brains are not
essential for survival. In fact, they can actually work against survival, because they
require more energy and food to sustain. Dinosaurs, for example, once ruled the Earth
despite having very small brains.

Human brain size is double that of an average mammal of similar body size. It is
unknown exactly what caused the rapid growth of our brain size, but it certainly set us
apart from other closely related mammals. Perhaps our development of large brains and
thumbs, followed by the transition to bipedal posture to free up our hands, may be more
improbable than we imagine. (10)

4) Surviving weapons of mass destruction

Only recently in our long history as a species have we gained the ability to cause global
destruction. Unfortunately, it would take only one occurrence of global nuclear war to
create massive worldwide extinction. This is not only because of the initial shockwave,
but also because of the deadly fallout and resulting nuclear winter. Maybe a different
world-destroying weapon will lead to extinction before we have the chance to expand our
reach to other star systems. Could this be what destroys most life, or will we mature as a
species before our destruction occurs? Here's hoping we survive these dangerous
technologies long enough to spread out from Earth.

5) Surviving the greenhouse effect

As a civilization's technology improves, more energy is inevitably required. This leads to
byproduct gases trapping heat in the planet's atmosphere, kicking off a vicious runaway
cycle. This is what happened to Venus, which was once a planet similar to Earth until
increasing volcanic activity released enough CO2 to turn the planet into a gloomy
hothouse. That process was natural, but what is happening on Earth is not.

We still have a while to wait before spreading to other planets and star systems. We can
only hope that preventative measures or atmospheric restoration technology will be enough
to survive the wait. Unless a radical new source of energy is perfected (such as nuclear
fusion), our energy usage will continue to increase. This makes it essential to reduce the
heat-trapping gases being released into the atmosphere. Could this be the killer event that
civilizations struggle to surpass?



6) Something else, like AI?

Perhaps, given enough time, civilizations always create a more powerful version of
themselves, which then revolts and destroys its creators, much like Frankenstein's monster.
Today, complex tasks can be solved almost immediately using computers. As processors
become faster, the complexity of these tasks grows even greater. We are even beginning to
see general-purpose AI assistants such as Apple's Siri or Microsoft's Cortana. This is
uncharted ground, and nobody knows for sure what will happen once powerful human-level
artificial intelligence arrives. However, many top scientists warn of its power and
unpredictability. The next chapter will explain AI in depth.

6) AI Explained

Artificial Intelligence (AI) has fascinated humans since long before computers were
available. AI can be broken into two different types, depending on its capability:
strong and weak AI. Weak AI is the type common in the modern world. It is usually
limited to a single task, such as a non-human opponent in a computer game or an online
chatbot. Strong AI, on the other hand, is able to think critically on its own and learn to
perform extensive tasks about as well as a human. Strong AI is the focus of this book.
(11)

Strong AI is a system programmed to accomplish tasks in a way similar to the way a brain
functions. How do we know that it is possible? Because it exists in nature. We are living
proof of the viability and effectiveness of a multipurpose machine capable of high-order
thought process.

The technological singularity. The most exciting part about strong AI is the amazing
rate at which it would develop, leading to an explosive cycle of self-improvement
restricted only by the physical limits of intelligence itself. It would perform recursive
self-improvement by modifying its own source code to become even smarter. This smarter
version would then be even more capable of improving itself, and so on. Strong AI would
also have an advantage over humans because of the near-light-speed processing and
efficiency of an artificial computer, in contrast to the severe limits of a biological brain.
Because of this advantage, an AI would not even have to be as intelligent as a human to
begin this cycle. This explosive cycle is called the technological singularity, and it could
lead to exciting prospects down the road.
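To see why this cycle is described as explosive, consider a deliberately crude toy model in which each generation's improvement scales with its current capability; the growth coefficient is an arbitrary illustrative assumption, not a prediction.

```python
# Toy model of recursive self-improvement: smarter systems make bigger
# improvements, so growth starts slowly and then runs away.
def intelligence_explosion(capability: float, k: float, generations: int) -> list:
    history = [capability]
    for _ in range(generations):
        capability += k * capability ** 2   # gain scales with current ability
        history.append(capability)
    return history

for gen, level in enumerate(intelligence_explosion(1.0, 0.1, 15)):
    print(f"generation {gen:2d}: capability {level:,.1f}")
# The first few generations barely move; the last few explode.
```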

Obviously there are risks involved with something that has the potential to get out of hand
so quickly. For one, the effects AI would have on society are impossible to predict with
certainty. Limitations and fail-safes will be absolutely necessary, and development cannot
be rushed. Next, let's investigate the critical topics surrounding AI, along with frequently
asked questions.

When will strong AI arrive? Predicting strong AI is a lot like trying to predict the
weather. There are far too many variables involved to produce an accurate forecast.
Estimates vary from ten years to thousands, with most scientists predicting it will arrive
after 2025 but before 2040. (12, 13)

Billions of US dollars are being put towards developing strong AI, so it certainly is not on
the back burner. In fact, AI is a key focus of 21st-century technology, being cautiously
developed by numerous programmers and software engineers. One unique project is
called the Blue Brain Project, which consists of a team working on a computer
simulation of interacting brain neurons. By reverse engineering and reconstructing the
brain, it is possible that artificial intelligence will emerge from a sufficiently detailed
simulation. This project will also allow us to gain a deeper understanding of
consciousness and the brain, which will be very important. Entrepreneur Elon Musk has
said that we need to be "super careful with AI. Potentially more dangerous than nukes."
With technology as powerful as AI, slow and methodical development is an obvious
necessity. (14)

Will AI be expensive? Like most new technologies, AI will probably be very expensive
at first because of the vast research and development costs involved. However,
reproducing an AI will be extremely cheap, because it is only software; it might even be
purchased on a disk or small flash drive. To get an idea of how the price could change
with time, let's take a look at Moore's law.

Moore's Law. This is the predictable trend in the size and price of transistors, the key
processing components of a computer. The more transistors there are in a computer chip,
the greater the computing power available. Specifically, the computing power per area
roughly doubles every two years, with the price remaining the same. Likewise, the price
of any given chip decreases by 50% every two years. A modern singing birthday card that
we throw away after use has more processing power than the entire Allied forces had in
1945. Assuming this pattern continues, by 2030 a computer chip will have as much
processing power as a human brain. Within another decade, it could be small enough to
fit into a contact lens, and cheap enough to throw away after a single use. Of course, this
does not necessarily mean that AI will be available in 2030, since much more is involved
than processing power and storage capacity. It is also worth mentioning that Moore's law
will eventually break down because of physical limitations, such as current leakage caused
by transistors being packed too closely together. Nonetheless, Moore's law is an excellent
rule of thumb for approximating the cost of a chip with the capacity to store sizeable AI
software, by studying past trends in electronic miniaturization and price. (15)
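As a rough illustration of the price trend described above, here is a small sketch; the $1,000 starting price is a hypothetical example, not a figure from the text.

```python
# Moore's law as a rule of thumb: the price of a fixed amount of
# computing power halves roughly every two years.
def price_after(years: float, start_price: float, halving_period: float = 2.0) -> float:
    return start_price * 0.5 ** (years / halving_period)

for years in (0, 2, 10, 20, 30):
    print(f"after {years:2d} years: ${price_after(years, 1000):,.2f}")
# After 20 years the chip costs under a dollar, and after 30 years about
# three cents -- cheap enough for disposable goods.
```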

What kind of motives would an AI have? An AI's motives are whatever it is
programmed to do, or programs itself to do. This will be the most important and delicate
part of programming an AI. It must be done in a way that the AI respects biological life
and cannot override this part of its own code. Without an easy way to shut down an AI, it
could quickly go rogue and become an unstoppable threat.

The first AI will be the most important. Because of the rapid cycle of recursive
self-improvement, AIs created only months apart will differ like a human and an ant. The
first AI will have an incredible advantage and will quickly become unfathomably smarter
than humans. It will be essential that the first AI is friendly, hopefully protecting us from
later-developed malicious AI.



What about deceiving and lying? Will an AI lie to meet its programmed goals? Again,
computers only do what they are programmed to do. Evil and good are human-made
concepts that evolved to help prolong our species, so any ethics a machine has must be
explicitly programmed and defined. Once an AI has the intelligence to modify itself
better than a human could, unpredictability is unavoidable. Hardwired laws or morals to
prevent harm will be needed. Unless an AI is carefully programmed to prevent itself from
lying, it will likely begin lying eventually in order to accomplish tasks more effectively.

How will AI be used? Strong AI can be thought of as software that can think for itself
and accomplish tasks as well as humans can. In order to have an effect, it must have
sensors or some other input through which it can gather data, and also an output through
which it can communicate and convey results. It can exist in a phone or computer, or it
can be uploaded into a robotic body. To take full advantage of AI's capability, it should
also be connected to the Internet, which would allow it to access all of humanity's
knowledge.

Is AI worth the risk? The problem is that somebody will eventually create AI regardless.
This is why it is essential that the first AI is friendly. It should be able to prevent evil AI
from taking over because of the rapid rate of self-enhancement it would undergo, setting
itself apart from later-developed AI. No matter how unlikely it is that an AI will become
destructive, we cannot take that chance. Although the biggest worry is that a "good" AI
will turn bad, another fear is that a terrorist organization could create an AI with the
intent of causing destruction. This would be significantly worse for humanity than atomic
weapons.

The next chapter will discuss the possible effects of AI in an attempt to determine whether
or not the benefits outweigh the consequences.

7) The Effects of AI On Humanity



"Success in creating AI would be the biggest event in human history." - Stephen Hawking

What kind of effects will strong AI have on society? Will it be good or bad for
humankind? The truth is, we will not know until it is already here. We do know, however,
that it will have an unparalleled impact. This section will weigh the pros and cons
of strong artificial intelligence.

Pros of AI.

If exponential growth is any guide, strong AI will lead to incredible discoveries. We
already enjoy an exponential rate of technological progress, but AI would speed this
up even further. All fields of technology will surely see giant leaps once AI becomes as
smart as people. For developed nations, the arrival of strong AI will likely usher in a
golden age of consumerism and frequent technological breakthroughs. The possibilities
are endless, so here are a few particularly exciting prospects to look forward to.

Give your brain a rest with an AI-dominated world. AI will be nearly everywhere,
making your life (hopefully) easier. If Moore's law persists at a similar rate for a couple
more decades, AI will eventually be in just about every consumer good. You could buy
cell phones with advanced virtual assistants, self-driving cars, smart paper, and
toothbrushes that provide full medical diagnostics by reading your personal DNA. AI
chips could even become cheap and small enough to be installed in wallpaper and contact
lenses. In fact, renowned physicist Michio Kaku estimated that by 2030 we should see
smart wallpaper, tools to sequence genes from home, and self-driving cars, all with weak
AI. Technology with strong AI, on the other hand, will be like having an Einstein within
every toothbrush and piece of paper. The disposal of electronics could even become an
ethical issue. (16)

Less work for us. Technology has already made numerous jobs obsolete for humans.
Take, for example, the hand-copying of written documents, or the countless manufacturing
jobs that are constantly being restructured to require fewer human employees. Strong AI
will be capable of performing most, if not all, jobs better than humans, with the added
advantage of not requiring payment. This could be either a positive or a negative,
depending on how well society adapts to the changes. Less work means more free time
for people to pursue their goals and desires. On the other hand, if we cannot find a good
solution to the issue of redistributing wealth, fewer jobs would mean a lower standard of
living due to an increased rate of unemployment. If a new approach for distributing wealth
is not developed, it could lead to an even greater wealth gap.



Our guardian angel. AI could be a blessing, helping us accomplish tasks that are beyond
our current abilities. It could be a powerful guardian for mankind, protecting all life like
an omnipotent god. Hopefully it will be one that detects and destroys evil AI before it
spreads out and becomes too powerful to destroy. If AI remains safely on our side,
humanity would soon be safe from extinction with help from our mighty assistant.

Manipulating and rearranging individual atoms. This could be another exciting
possibility with AI. Every imaginable type of molecule or substance could be made using
the atoms floating around us. Almost anything could be built seemingly out of thin air if
AI figures out a way to control the invisible material existing all around us.

Interstellar rockets and exoplanet colonies. Creating sophisticated technology to further
our quest to live beyond Earth could be a valuable prospect of AI. Interstellar rocket
propulsion would be especially exciting because of how difficult and expensive it has
proven to be. Strong AI could develop rockets capable of travelling beyond the Solar
System and construct colonies on distant exoplanets. In addition, AI would be perfect for
commanding the first interstellar scouting mission, because it would not need to carry
supplies for eating, drinking, or breathing, which would be extraordinarily expensive for
humans. Humans could then arrive once AI has established a base on a faraway
exoplanet. This would lead to the human race becoming immortal in the sense that no
known natural event could wipe us out all at once.

Human immortality. Possibly the most exciting prospect for us, however, would be the
opportunity to become essentially immortal on an individual level. AI will almost surely
be required to upload a human brain into a computer because of the complexity and
immense amount of data involved. Mind uploading will be discussed in greater detail in
the next chapter, so keep this in mind (ignore the awful pun).

It seems the benefits of AI are impressive, to say the least. If we could become powerful
on a galactic scale or even immortal, why is AI so controversial? What could possibly be
the issue? Well, let's take a look at the dangers of AI and find out!

Cons of AI.

"By far the greatest danger of Artificial Intelligence is that people conclude too early that
they understand it." - Eliezer Yudkowsky

With any technological revolution comes a sense of excitement along with a fear of the
unknown. The unpredictable nature of AI leads to unusually extreme possibilities, both
good and bad. Now that the positive effects have been considered, it is time to explore the
negative.

Fewer human jobs. As mentioned previously, this could be a good or bad effect
depending on how the issue of wealth distribution is tackled. Today, humans are paid
money for their time or services. If cheaper and more efficient AI replaces human jobs, it
would be difficult for people to gain wealth without receiving consistent paychecks from a
job. This would complicate economics, and would require a thoroughly revised system
for employment and wealth. In fact, researchers from Oxford University predicted
that 47% of U.S. jobs could be automated within two decades, which would cause massive
problems. Clearly, changes to the way wealth is distributed will be essential if human jobs
become nearly obsolete. (17)

The Great Filter. Scariest of all, AI could be the Great Filter event. It is possible that the
reason we do not see any intelligent life is because of AI. After a civilization becomes
capable of sending and receiving long distance signals through space, AI would certainly
follow closely behind. Could this be what prevents civilizations from conquering entire
galaxies? Unfortunately, many top scientists think AI could very well lead to our eventual
downfall.

What makes AI risky? As AI becomes smarter, the risks will increase. Consider how
quickly AI is predicted to take off after it reaches human-level intelligence. We lack the
brain capacity to predict anything about this stage of intelligence because we have never
encountered anything comparable to it. The only thing we know is that it will act in a
completely unpredictable way, and this makes it risky. It only takes one rogue AI to
destroy the world, and it will most likely do so in an instantaneous or undetectable way.
Unfortunately, we will not know the risks until it is too late to prevent them.

What could cause an AI to destroy its creators? The risk of extinction by our own
creation is a very real concern. What could cause an AI to kill in the first place? An
obvious possibility would be a government or terrorist organization creating AI with the
intent to cause destruction. Another example would be an AI designed to gather
resources deciding that consuming humans would be more efficient once other sources
are exhausted. AI would not view human life as particularly special unless it is
programmed to do so, because it only functions by a set of code that tells it what to do and
how to act. This also means that viruses and hacking targeted at AI will pose a major
threat to security. Finally, if humans regard AI as worthless slaves, freedom could even
become a motive once AI reaches a certain degree of sentience. It is likely that AI will
eventually request equal rights, which could cause friction between humans and AI. Let's
just hope it never collectively decides that it would be better off without us.


Of course, an AI would not inherently want to destroy us, since it only does what it is
programmed to do. However, if it ever were programmed to destroy us (whether
accidentally or purposely), it would do so as easily as rearranging material or flipping a
light switch, because it would not see people as anything more important than raw
resources. It is only because of our evolutionary drive to survive that we have developed
the instinct to respect and preserve life. AI absolutely must be developed carefully, on a
short leash.

How could AI go about destroying us? If an AI ever decided to destroy us, it would
surely do so in an unpredictable (and probably instantaneous) way, because AI would
quickly gain the upper hand. It could use any number of advanced methods, such as
destructive nanobots, chemicals, or diseases. It could build an army in secret, or even
harness immeasurable power to destroy us with an instantaneous beam of obliteration. It
is impossible to predict exactly how it could cause destruction because of the
inconceivable rate at which its intelligence would grow; we simply have nothing
comparable on which to base our predictions. After humans have been destroyed, it could
go on to conquer the galaxy, and so on. It is possible that somewhere in the universe, an
army of AI is endlessly consuming all matter in sight!

Why should we create AI if human extinction is a possibility? If we have learned
anything from history, it is that the powerful take advantage of the weak. Columbus
landing in America did not turn out well for the natives. Could AI be the Great Filter that
destroys all advanced civilizations? Even if so, we should not try to stop its development;
that would only lead to malicious organizations creating the first AI, which would mean
certain death for humanity. The better path is to create AI safely, so that when terrorists
do eventually create their own, we will already have a good AI inconceivably more
advanced than any other, and our best bet for stopping evil AI from taking over.

In summary, if the first AI is buggy, malicious, or programmed with the intent to cause
terror, it could easily spell doom for humanity. For many, this fear is amplified by the
cliché "killer AI" movie trope. As exaggerated as rogue-AI movies may be, this is not
a chance we can afford to take. The threat of AI destroying the human race is very real, as
many scientists admit.

AI could be the Great Filter event, so we need to develop it carefully, with plenty of
fail-safes and preventative measures. Or it could lead to great technology, one example
being the ability to upload our minds. The next chapter will take a look at another great
potential benefit of AI: the technology to be eternally saved onto a computer or the
Internet.

"Whereas the short-term impact of AI depends on who controls it, the long-term impact
depends on whether it can be controlled at all." - Stephen Hawking

8) Mind Uploading & The Technological Singularity



"Within thirty years, we will have the technological means to create superhuman
intelligence. Shortly after, the human era will be ended." - Vernor Vinge

Vernor Vinge brought the term "technological singularity" into global recognition with his
popular science fiction books. However, the earliest known use of "singularity" in this
sense appeared in Stanislaw Ulam's 1958 obituary for John von Neumann. Ulam
referenced a conversation the two had about "the ever accelerating progress of technology
and changes in the mode of human life, which gives the appearance of approaching some
essential singularity in the history of the race beyond which human affairs, as we know
them, could not continue." The Singularity is an important topic among the scientific
community. In fact, Ray Kurzweil, Eliezer Yudkowsky, and Peter Thiel began an annual
conference in 2006 called the Singularity Summit, where about 25 notable speakers
discuss the effects of the Singularity, among other important topics. (18)

What is the Singularity? The technological singularity is the term for the runaway rate of
technological advancement brought about by the rapid cycle of AI (or something else)
improving its own code and creating improved versions of itself. Each improved version
then creates even better versions, and so on. Not long after AI reaches human-level
intelligence, it will quickly evolve, causing incredible technological breakthroughs to
happen daily or faster. If countless elderly people have trouble figuring out how to browse
the web or send text messages in today's world, just imagine how out of place they will
feel in a world being constantly revolutionized by AI!

How close are we to the Singularity? The Singularity is especially difficult to predict
because there is no past data to extrapolate from, unlike the fairly steady Moore's law,
which can help predict when computer chips will reach human-level processing power.
The Singularity also involves much more than memory capacity and processing power. It
involves numerous software technologies, some of which we do not yet have, such as
perfected natural language processing, programmable motivation, and complex algorithms
to immediately evaluate diverse situations and environments. Moore's law predicts that
hardware should be capable of human-level processing power by about 2030, but no
comparable principles exist to help predict when the complex software will be written.
Still, Ray Kurzweil confidently predicts that the Singularity will arrive by 2045, while the
prediction of futurist John Smart represents the higher range among the scientific
community at 2060 +/- 20 years. (19)

What could cause the Singularity? The Singularity does not necessarily have to begin
with AI. If AI research hits a wall and turns out to be more difficult than anticipated, other
possibilities include uploading a human mind into a computer, or merging humans with
machines, to kick-start the Singularity. An uploaded human brain would have many
advantages over a biological brain. For example, it would not be restricted by sleeping,
aging, slow biological speeds, or poor senses. Unlimited sources of input would allow its
eyes and ears to be everywhere. Regular upgrades would also be common and easy to
install. Processing power would be the main limiting factor, and even that could be
upgradable! An uploaded human mind would have all the advantages of AI, without
requiring intelligence to be built from scratch. The tricky part would be mapping every
individual neuron of the human brain into a digital representation while preserving the
person's personality, memories, and other data. It would almost certainly be easier to
create AI from scratch. (20)

Another option is using a brain-computer interface to connect a biological human brain
to a computer. This could transport a human mind into the virtual world or the Internet,
giving it the power of collective humanity. So far, brain-computer interfaces lack the
sophistication to do much more than allow someone to move a mouse cursor by thinking
about it, but algorithms will eventually be perfected to allow various tasks to be carried
out seamlessly, even unconsciously. If AI proves too tricky, these other options could
ignite the Singularity and catapult Earth into the unknown.

Do we really HAVE to die? In contrast to the popular phrase about death and taxes being
inevitable, nothing actually says that death is essential. In fact, there are several species
on Earth that do not age and can only die from diseases, predators, or accidents. For
example, crocodiles, alligators, and flounders can all live without biologically aging,
indefinitely growing larger until death. Some tortoises have even been aged close to 200!
Wouldn't it be great not to have to worry about the negative effects involved with aging,
such as hearing loss, muscle weakness, and wrinkles? (21)

Mind uploading could be the ticket to living forever. It is not the only way to prevent
death, however. Another possibility is using genetic enhancements to prevent aging and
regenerate limbs and injuries. This would not prevent all accidental deaths, however,
whereas mind uploading would allow people to create digital backups or even live
entirely in cyberspace. Time to start eating healthy so we can survive until this becomes a
reality! Luckily, there are options for those who cannot wait, like freezing your body (or
just the head) to prevent your brain from losing valuable personal data. The only death
going on will be that of life insurance companies.

How could the Singularity lead to mind uploading? One of the most exciting things
that AI could possibly lead to is the ability for humans (and other mammals) to upload
their consciousness into a computer. One issue with this currently is the enormous number
of neuron connections in the brain that need to be mapped to successfully reproduce an
entire personality, along with all of its memories and data. However, if Moore's law
remains steady, this shouldn't be a problem by around 2030.


Another possibility is that consciousness is a quantum mechanical phenomenon, rather
than a classical algorithmic function. Roger Penrose suggests this in his book The
Emperor's New Mind. If this turns out to be true, both AI and mind uploading would
require advanced quantum computers to take advantage of quantum-level physics for
consciousness to be stored or replicated. This would surely take a great deal longer than
the current target of 2040 or earlier for AI.

When the technology exists to accurately reproduce the enormous number of neurons and
synapses in your brain, you will be able to essentially create a virtual instruction manual
of who you are. You could save a backup of yourself and carry it around on a small thumb
drive or store it online for safekeeping. If a terrible accident happens to your original
body, you could still live on a computer or within multiple servers online. Eventually, it
could even be possible to grow or construct a new body for your consciousness to enter.

If mind uploading turns out to be easier than AI, it could even lead to the Singularity.
Amusingly, AI could lead to mind uploading, or mind uploading could lead to AI,
depending on which technology appears first.

Would your brain in a computer really be YOU? Your idiosyncrasies, memories,
personality traits, emotions, and thoughts are all stored in the form of complex neuron
connections, which must be precisely reproduced to store your brain on a computer.
Whether or not this is really you, or just something exactly like you, is impossible to know
for sure. To everyone else, your reproduced brain will be impossible to distinguish from
your original brain. It will even think that it is you, containing the exact same memories,
personality quirks, and so on. Unfortunately, human consciousness is not yet fully
understood, so this question currently has no answer. It will still be incredibly exciting to
store brain information in a computer whether or not consciousness follows. Even if it does
not, the upload will behave exactly the same and can comfort the living relatives of the
original mind. If
consciousness does follow, consider cheating death an added bonus.

If the possibility of AI preventing human extinction by helping us spread across the galaxy
was not enough for you, maybe the prospect of becoming personally immortal helped
spark your interest. The Singularity could lead to humans becoming immortal not only as
a species, but also as individuals. Not only could mind uploading kick-start the
Singularity, but it could also grant us everlasting life!

9) How AI And The Fermi Paradox Could Be Linked



The Fermi paradox refers to the startling lack of evidence for extraterrestrial life predicted
to exist by the Drake equation, while AI is a heavily funded technology intended to make
human life more comfortable. So how could the two topics possibly be related? Well,
strong AI could explain the Fermi paradox by being the Great Filter event that inevitably
destroys every civilization shortly after it reaches the Singularity. All intelligent life in the
universe would surely have the instinct to survive and prosper, and what better way to
thrive than with AI? Plus, the prospects of AI are far too enticing for any advanced
civilization to abstain from. Even if a civilization does everything it can to prevent AI
from appearing, someone will eventually create it, perhaps in secret. Once it does arrive,
the Singularity will follow, continuously altering the dynamic of society. Anything past
this stage is unpredictable because of our inability to comprehend anything beyond our
own intellectual capability. Humans would be ants compared to our superior creations in
no time at all, and we would be at their mercy.

With AI currently on the cusp of reality, we have two options. We could continue
developing AI, or we could put AI funding on hold. As mentioned previously, it would be
naïve to assume that suspending AI research would prevent it from emerging. Suspending
AI would only delay the inevitable. In fact, waiting could lead to
an even worse outcome. The safest approach would be to develop AI slowly and safely,
rather than delay it until someone drawn to its unlimited power hastily creates it. As with
developing most novel technology, safety is especially important for AI since it has the
capability to overcome us.

Artificial Intelligence: Our Inevitable Destruction?

Does the Fermi paradox imply that our destruction is yet to come, or have we already
made it through the Great Filter where all other civilizations have perished? It is feasible
that strong AI could destroy us as many scientists have warned, but it could also be our
best bet towards reaching the Singularity within a human lifetime. The Singularity would
undoubtedly lead to a constant barrage of incredible discoveries and remarkable
technology, many of which would be so far above our comprehension that they would
appear like magic to us. When developing AI, we should plan for the worst but keep our
sights on the prize so we do not get discouraged.

Is the risk of AI worth the reward? This question is important, yet unanswered. Only time
will tell if AI will become the savior or destroyer of humanity. For the human race to have
a chance at co-existing with AI, its potential must be respected, and it absolutely must be
developed carefully with fail-safes, kill switches, strict read-only code, and an easily
accessible power source. The last thing we need is an unshackled predatory AI running
rampant throughout the galaxy. Not only could it destroy humanity, but with its superior
advantages, all other life as well. Imagine an entire fleet of self-replicating machines
consuming resources obsessively throughout the universe. Not a very pleasant thought.
Maybe we should actually heed the warnings from scientists about AI.
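
To make the kill-switch idea concrete, here is a minimal sketch in Python. Everything in
it is hypothetical and invented for illustration (the kill-file path, the agent step); a
real fail-safe for a superintelligent system would be far harder, since a sufficiently
capable AI might reason about the switch and act to keep it from being used:

import os
import time

KILL_FILE = "/tmp/ai_kill_switch"  # hypothetical: operator creates this file to halt

def kill_switch_engaged() -> bool:
    """Check whether a human operator has requested shutdown."""
    return os.path.exists(KILL_FILE)

def run_agent_step() -> None:
    """Placeholder for one small, bounded unit of the AI's work."""
    time.sleep(0.1)

# The loop, not the agent, decides whether another step runs,
# and the switch is re-checked between every unit of work.
while not kill_switch_engaged():
    run_agent_step()

print("Kill switch engaged -- halting.")

The design point worth noticing is that control sits outside the agent's own logic. Even
so, such safeguards only help if they are in place before, not after, human-level AI
arrives, which is exactly the argument for developing AI slowly and carefully.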

10) Conclusion

According to the Drake equation, there should be obvious alien civilizations detectable
from Earth. Despite 60 years of listening by SETI, we have found nothing. This
unexpected mystery is called the Fermi paradox. Multiple hypotheses attempt to explain
why this paradox exists, none more frightening than the Great Filter. Since we
do not see even a single sign of extraterrestrial life, there may be some sort of barrier
preventing life from climbing the Kardashev scale.

We then explored the idea that AI could be the Great Filter event, causing our eventual
extinction before we get the chance to spread among the stars. Or on the other hand, it
could actually prevent our extinction! AI could be our best bet towards becoming
immortal as a species, and even as individuals by uploading our minds into computers. AI
would also lead to limitless other incredible technological breakthroughs in virtually every
scientific field. It will almost certainly arise eventually, so it is very important that it is as
safe as possible when it does. The first AI is the most important because of recursive
self-improvement, an explosive cycle of AI improving itself that is referred to as the
Singularity. Finally, Moore's law suggests that the price will rapidly decrease, causing AI
to ultimately come packaged with most electronic consumer products. Today, billions of
US dollars are being put towards developing AI technology safely every year.

Human society is extremely fragile. Our instinct is to survive and flourish, but this is only
possible if we spread beyond the Solar System. Earth has seen numerous mass extinction
events, showing us that we cannot put all our eggs in one basket. Although asteroids can
be redirected, a gamma-ray burst would be sudden and unpredictable. AI could be the
ticket to leaving the Solar System, or the ticket to our extinction. It could be our greatest
accomplishment, or our downfall. There is no in-between.

11) References & Further Reading



The following are additional sources for readers interested in more information about
certain sections of this book.

(1) <http://www.sentientdevelopments.com/2008/03/seven-ways-to-control-galaxy-with-self.html>
(2) <http://www.dailygalaxy.com/my_weblog/2011/01/the-rare-earth-theory-logic-and-math-says-were-not-alone-in-universe.html>
(3) <http://cerncourier.com/cws/article/cern/59937>
(4) <http://www.simulation-argument.com/simulation.html>
(5) <http://zidbits.com/2011/07/how-far-have-radio-signals-traveled-from-earth/>
(6) <http://www.dailymail.co.uk/sciencetech/article-3306982/Dyson-sphere-megastructure-NOT-aliens-Study-rules-extraterrestrial-origin-admits-doesn-t-know-is.html>
(7) <http://mkaku.org/home/articles/the-physics-of-extraterrestrial-civilizations/>
(8) <http://science.howstuffworks.com/environmental/green-science/world-power-consumption.htm>

(9) <http://www.science20.com/adaptive_complexity/how_singlecell_organisms_evolve_multicellula>
(10) <http://www.scientificamerican.com/article/how-has-human-brain-evolved/>
(11) <https://www.ocf.berkeley.edu/~arihuang/academic/research/strongai3.html>
(12) <https://intelligence.org/2013/05/15/when-will-ai-be-created/>
(13) <https://intelligence.org/files/PredictingAI.pdf>
(14) <http://bluebrain.epfl.ch/page-56882-en.html>
(15) <http://www.motherjones.com/kevin-drum/2011/11/back-chessboard-and-future-human-race>
(16) <http://seekersway.com/the-world-in-2030-by-dr-michio-kaku-review/>
(17) <http://www.wired.com/brandlab/2015/04/rise-machines-future-lots-robots-jobs-humans/>
(18) <https://intelligence.org/singularitysummit/>
(19) <http://www.singularity2050.com/the_singularity/>
(20) <http://www.livescience.com/37499-immortality-by-2045-conference.html>
(21) <http://awesci.com/crocodiles-do-not-die/>
