Table of Contents
1. Introduction
2. What Exactly Is The Fermi Paradox?
3. Possible Fermi Paradox Solutions
4. The Kardashev Scale
5. The Great Filter
6. AI Explained
7. The Effects of AI On Humanity
8. Mind Uploading & The Technological Singularity
9. How AI And The Fermi Paradox Could Be Linked
10. Conclusion
11. References & Further Reading
1) Introduction
We are living in an extraordinary period of time. Mankind is rapidly approaching the era
of the most exciting technological revolution in all of history. Welcome to the exciting
world of Artificial Intelligence (AI), the alarming Fermi paradox, and the quest to avoid
extinction and achieve human immortality. Prepare to explore the connection between
these fascinating concepts, along with the latest explanations and a detailed look at what
the future may hold for our species. While reading this deeply thought-provoking book,
expect nothing less than shocking conclusions, bizarre ideas, and a strong sense of
exhilaration (along with some concern!) for the fast-approaching revolution. The question
is: are we prepared, or in way over our heads?
of life that we are aware of. The probability that life-bearing planets will have the time to
progress to that point before going extinct is largely up for debate. Those who prefer a
larger value for this variable assert that life is resilient (a good example being
extremophiles such as those discovered living in Lake Vostok deep under the Antarctic
ice), having survived several mass extinction events. Those who prefer a smaller value
argue that it took a long time for us to develop large mental capacity, and that humans are
the only species to achieve this out of the billions that have lived on Earth throughout
history. Although approximations vary wildly, for our equation we will estimate that only
half of all life-bearing planets survive long enough for advanced intelligent life to form.
(0.5)
Fc = fraction of those planets that develop interstellar communication. Again, this is
another speculative variable, but considering the relatively short amount of time it took for
intelligent life on Earth to develop the ability to broadcast radio waves, it should not take
long for intelligent life elsewhere to begin broadcasting some sort of detectable waves
across the galaxy. It must also be mentioned that radio wave broadcasts are not the only
type detectable, although radio waves are the most energy efficient, and therefore the most
likely to be received by intelligent life. We are continuously scanning the sky across almost
all frequencies of the electromagnetic spectrum, aware that other types are
possible. After consideration, this variable will be given a value of 0.5, which means that
half of all intelligent civilizations develop a method of communication that can be
detected across the galaxy before going extinct. (0.5)
L = length of time (years) civilization releases detectable signals into space. We have
only had the technology to emit radio waves for a little over 100 years, and it is tough to
pinpoint an average, since our civilization is of course the only one known to
communicate wirelessly over long distances. The biggest fear is that a civilization with the
technology to broadcast radio waves might also have (or will have shortly) the technology
to cause global extinction, like our fear of extinction from atomic bombs shortly after we
began to communicate wirelessly. On the other hand, it is argued that a civilization with
that level of technology should be able to flourish, and maybe even reduce the chance of
global extinction. Estimates range from a few years all the way up to hundreds of millions
of years, but conservatively we will assume an average broadcasting lifetime of five
thousand years. (5,000 years)
Now that we have a low approximation for all the variables in the Drake equation, we can
calculate a final answer. Needless to say, estimates vary wildly because proper data is
lacking. For example, finding extinct life on Mars (not related to life on Earth) or any
other planet would alter the results significantly, as it would imply that life arises fairly
easily, but has difficulty surviving long enough to become intelligent. Similarly, if
advanced technology or remnants of a civilization are found on another planet, it would
imply that life does not last long after becoming intelligent for one reason or another. We
will explore numerous filter events that could prevent life from progressing past a
certain point in a later chapter. For now, here is the resulting conservative estimate for how
many extraterrestrial sources should be detectable inside our galaxy at this very moment.
R x Fp x Ne x Fl x Fi x Fc x L = N
7 x 1 x 0.4 x 0.1 x 0.5 x 0.5 x 5000 = 350
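The arithmetic above can be checked with a short script; the variable names follow the equation, and every value is the chapter's own low-end guess rather than a measured quantity:

```python
# The chapter's conservative Drake-equation values; none are measured data.
R = 7       # new stars formed per year in the Milky Way
Fp = 1      # fraction of those stars that have planets
Ne = 0.4    # habitable planets per star with planets
Fl = 0.1    # fraction of habitable planets that develop life
Fi = 0.5    # fraction of life-bearing planets that develop intelligence
Fc = 0.5    # fraction of intelligent civilizations that become detectable
L = 5000    # years a civilization keeps broadcasting

N = R * Fp * Ne * Fl * Fi * Fc * L
print(round(N))  # 350 detectable civilizations
```

Changing any single factor scales the result proportionally, which is why the estimate swings so wildly with different assumptions.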
Remember, this is a generously low estimate. So at the very least, if there should be
hundreds of alien races in just the Milky Way Galaxy, why do we appear to be so alone?
This is the heart of the Fermi paradox.
To further complicate things, even a single advanced civilization should be able to
dominate the entire galaxy in a cosmically short period of time, due to the exponential rate
of technological improvement. For example, the amount of change that we saw from say,
year 1400 to 1500 was nothing compared to how much we improved from 1900 to 2000.
This is because of the exponentially increasing rate, which results from increasing
sustainable population, along with the fact that our collective knowledge is built upon past
knowledge. We are living at a unique time in history where a massive technological shift
occurs within a single human lifetime, rather than over thousands of years. If this trend
continues, these shifts will eventually occur yearly, or even daily!
The exponential rate of improving technology makes it nearly impossible to imagine the
level of technology that a civilization 1,000 years ahead of us would have, which is a
relatively small amount of time considering the age of the universe, and the amount of
time other civilizations have potentially had to develop. Their technological abilities
would be unimaginable to modern 21st century humans. Civilizations with this level of
technology might have the ability to easily spread from planet to planet, or even star to
star. Also, by using probes (especially self-replicating Von Neumann probes) it is not far-fetched to imagine a super-advanced civilization conquering entire galaxies in a
cosmically short amount of time. (1)
In fact, a human could even theoretically travel interstellar distances in a short amount of
time by taking advantage of Einstein's theory of special relativity. By traveling at near
light speed, time slows for a person inside of a rocket. For example, at 86% the speed of
light, one year on Earth would only take half a year on the rocket. At 99.9% the speed of
light, one year on Earth would equal just over two weeks on the rocket. Therefore a trip to
the next closest star (about 4 light years away) could take much less time for the people
aboard a fast moving rocket than on Earth. This effect happens because as velocity
increases, space in front of the moving object compresses, causing the distance to shorten.
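The time-dilation figures quoted above follow directly from the Lorentz factor; a quick sketch:

```python
import math

def lorentz_gamma(beta):
    """Time-dilation factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# One Earth year, as experienced aboard the rocket:
for beta in (0.86, 0.999):
    days_aboard = 365.25 / lorentz_gamma(beta)
    print(f"{beta:.3f}c -> {days_aboard:.1f} days aboard per Earth year")
```

At 0.86c the factor is about 2 (roughly half a year aboard), and at 0.999c it is about 22 (just over two weeks), matching the figures in the text.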
If entire galaxies can be conquered in a blink of cosmological time from just one
civilization a tiny bit ahead of us, why don't we see any evidence of this happening? Not
to mention the enormous amount of energy a super-advanced civilization would surely
require. The waste heat from this energy would be hard to miss. Are we the first, or is
another factor at play? To attempt to answer that, let's explore the top ideas that could
explain the puzzling Fermi paradox.
communication. The early universe was exceptionally hostile to life, with deadly gamma
ray bursts potentially wiping out early life. A gamma ray burst is among the most destructive
natural events in the universe. It occurs when a massive star collapses or two neutron
stars collide, releasing an intensely focused beam of radiation that can sterilize everything in its path over
a distance of many light years. Such bursts happened far more frequently in the early universe, when
its density was much greater than it is today. Luckily, the frequency of these deadly bursts
has diminished over time, and the universe settled enough to allow planets to progress in
peace. This idea assumes that after the quantity of gamma ray bursts decreased
sufficiently, Earth began forming life before any other planet. Although considering the
vast size of the galaxy, the chance of us being the first intelligent species is extremely low.
If we truly are the first, it follows that countless other civilizations are close behind. (3)
3) The Zoo Hypothesis
The zoo hypothesis states that we are quietly being watched. Alien civilizations could be
refraining from contacting us for various reasons. Maybe they are passive and waiting for
us to abandon our violent manner. Maybe they are observing how often a civilization
survives annihilation long enough to spread to other planets. They could even be studying
how we manage issues like global warming or increasing energy consumption. Finally, it
could be a scenario along the lines of Star Trek, where lower civilizations are left alone to
develop without assistance. If aliens contacted us, we would undoubtedly gain a mountain
of knowledge and technology. Just like ducks in a park, we might grow dependent on
them, weakening our ability to survive on our own. Do not feed the humans!
The main thing wrong with this idea is that it does not explain why we haven't discovered
any signs of alien life, unless our silent observers destroyed all nearby life forms. We are
actively looking for signs of life beyond our planet, and have found absolutely no
worthwhile evidence despite our vigorous attempt to do so. Whether or not we are being
watched should not alter the fact that we have not observed alien life from afar.
4) Great Filter Hypothesis
The Great Filter Hypothesis states that there are one or more great hurdles that all
civilizations must get through before reaching the Singularity. (A chapter on the Singularity
is coming up.) It is possible that most civilizations get stuck and go extinct before
solving the issue of the Great Filter event, whatever that may be. The Great Filter could be
surviving nuclear destruction, climate change, or an unseen event hundreds of years from
now. This raises the question: have we already gotten past the Great Filter, or have we yet
to encounter it? We will go into much greater detail in a later chapter, so keep this
intriguing hypothesis in mind.
An alien civilization could be right in front of our eyes and we might not even notice. The
biggest issue here is the fact that radio signals weaken over long ranges because of the
inverse square law. This means that as a radio signal travels through space, its intensity
falls off with the square of the distance, making it increasingly difficult to receive. Whether or not a signal is detectable mostly
depends on the strength (amplitude) of the signal and the sensitivity of our detectors. It is
possible that we have not found artificial signals because our technology is inadequate to
amplify interstellar signals. Signal amplification and data correction techniques are
rapidly advancing, and every year we are able to detect weaker and more distant signals.
Current SETI efforts are focused on detecting highly energetic signals from advanced
civilizations. (5)
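A minimal sketch of the inverse square law, assuming an idealized isotropic beacon (one radiating equally in all directions); the 1-megawatt transmitter is a hypothetical figure chosen only for illustration:

```python
import math

def flux(power_watts, distance_m):
    """Received power per square meter from an idealized isotropic beacon."""
    return power_watts / (4.0 * math.pi * distance_m ** 2)

LIGHT_YEAR_M = 9.461e15  # meters per light year

# A hypothetical 1-megawatt beacon heard from 1 vs. 100 light years away:
near = flux(1e6, 1 * LIGHT_YEAR_M)
far = flux(1e6, 100 * LIGHT_YEAR_M)
print(near / far)  # 100x the distance -> 10,000x weaker
```

Note the falloff is quadratic, not exponential: doubling the distance quarters the received flux, which is why detector sensitivity matters so much at interstellar ranges.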
Another possibility is that after a certain stage of a civilization's technological
development, the method of long distance communication changes and becomes
undetectable to us. For example, this could be done using concentrated lasers as opposed
to our Earthly method of an expanding sphere of photons spreading in all directions. A
direct method could be more cost effective with less data loss. Not to mention it would
also have the advantage of being covert (Refer back to #7 for an example of why this
might be very important).
One intriguing idea is that a highly advanced alien civilization might be nearly impossible
to detect if it has built a Dyson sphere. This is a megastructure that encompasses a star,
collecting most, if not all, of the star's radiation using technology similar to solar panels.
For a civilization with a massive energy requirement, this giant structure would obscure a
star by absorbing most/all radiation, rendering it invisible to viewers from afar. We can
determine a star's distance by measuring its color and brightness, but if its light (radiation)
is blocked, then we perceive the location as blank space. How many of these super-advanced societies are hiding in our galaxy without our knowledge, disguising their host
stars with a Dyson sphere or similar structure? This suggests that civilizations ahead of us
may be difficult to detect because their host star is concealed.
Interestingly, a star's (KIC 8462852) luminosity was recently found to have been decreasing all
the way back to the 1890s, when measurements began. A decreasing brightness would be
expected of a star with a Dyson sphere being constructed around it. SETI then spent two
weeks scanning the star for various frequencies, including microwave electromagnetic
waves. Microwave signals are predicted to result as a byproduct of the propulsion of
spacecraft building the structure. Unfortunately, no clear evidence was found, and the
reason for the star's decreasing luminosity remains a mystery. (6)
Why do we think that advanced civilizations might build a Dyson sphere? To explain, the
next section will discuss the projected evolution of society and the popular Kardashev
scale.
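As a preview, the continuous version of the scale proposed by Carl Sagan can be sketched numerically; the ~19 TW figure for humanity's present power use is an approximation, and the Sun's total output is about 3.8 × 10^26 W:

```python
import math

def kardashev(power_watts):
    """Carl Sagan's continuous interpolation of the Kardashev scale."""
    return (math.log10(power_watts) - 6.0) / 10.0

print(round(kardashev(1.9e13), 2))  # humanity today at ~19 TW: ~0.73
print(round(kardashev(3.8e26), 2))  # full Dyson sphere (Sun's output): ~2.06
```

On this measure, a civilization harvesting its entire star with a Dyson sphere sits at roughly Type II, which is why the structure comes up in Kardashev-scale discussions.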
We are currently in the planning stage of a base on Mars. This is the first step towards
putting our eggs in multiple baskets. If one planet were to get hit by an asteroid for
example, we would still have another planet continuing our existence. The final step
would be to spread to other star systems. Only after this stage would humanity become
truly immortal.
Gamma ray bursts have the power to wipe out entire star systems almost completely
without warning. A gamma ray burst could be on its way to the Solar System right now,
and we would not know until it is already here. Even if we had a base on Mars, a gamma
ray burst could still have the power to take out both planets. However, no natural event is
known to wipe out multiple star systems. Only at the interstellar stage would humanity be
considered safe from sudden extinction.
One thing is for sure; we must spread to other star systems before an extinction event
occurs. It is only a matter of time, as history reminds us. But what if the event that causes
our destruction is not natural at all, but is caused by our own actions? Is there some
great barrier that all civilizations must strive to overcome before they are safe? This
possible barrier is called the Great Filter. In the next section, the Great Filter will be
looked at in detail.
6) AI Explained
Artificial Intelligence (AI) has fascinated humans since long before computers
were available. AI can be broken up into two different types, depending on its capability:
strong and weak AI. Weak AI is the type common in the modern world. It is usually
limited to a single task, such as a non-human opponent on a computer game or an online
chatbot. Strong AI on the other hand is able to think critically on its own and learn how to
perform extensive tasks about as well as a human. Strong AI is the focus of this book.
(11)
Strong AI is a system programmed to accomplish tasks in a way similar to the way a brain
functions. How do we know that it is possible? Because it exists in nature. We are living
proof of the viability and effectiveness of a multipurpose machine capable of high-order
thought process.
The technological singularity. The most exciting part about strong AI is the amazing
rate at which it would develop, leading to an explosive cycle of self-improvement only
restricted by the physical limitations of intelligence itself. It would perform recursive self-improvement by modifying its own source code to become even smarter. This smarter
version would then be even more capable of improving itself, and so on. Strong AI would
also have an advantage over humans because of the light-speed processing and efficiency
of an artificial computer, in contrast to the severe limits of a biological brain. Because of
this advantage, AI would not even have to be as intelligent as a human to begin this cycle.
This explosive cycle is called the technological singularity, which could lead to exciting
prospects down the road.
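The runaway cycle can be illustrated with a deliberately simple toy model (not a real AI): assume each improvement cycle boosts capability by a fixed fraction of its current value, so growth compounds:

```python
# Toy model of recursive self-improvement: each cycle, the system improves
# itself in proportion to its current capability, so growth compounds.
capability = 1.0        # arbitrary units; 1.0 = starting (human) level
GAIN_PER_CYCLE = 0.5    # assumed 50% self-improvement per cycle

for cycle in range(1, 11):
    capability *= 1.0 + GAIN_PER_CYCLE
    print(f"cycle {cycle:2d}: capability {capability:6.1f}")
```

Even with this modest assumed gain, capability grows by a factor of nearly 58 in just ten cycles; the real worry is that each cycle could also get faster.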
Obviously there are risks involved with something that has potential to get out of hand so
quickly. For one, the effects AI would have on society are impossible to predict with
certainty. Limitations and fail-safes will be absolutely necessary and development cannot
be rushed. Next, let's investigate the critical topics about AI, along with frequently asked
questions.
When will strong AI arrive? Predicting strong AI is a lot like trying to predict the
weather. There are far too many variables involved to gain an accurate forecast.
Estimates vary from 10 to thousands of years with most scientists predicting it will arrive
after 2025 but before 2040. (12, 13)
Billions of US dollars are being put towards developing strong AI, so it certainly is not on
the back burner. In fact, AI is a key focus of 21st century technology, being cautiously
"By far the greatest danger of Artificial Intelligence is that people conclude too early that
they understand it." – Eliezer Yudkowsky
With any technological revolution comes a sense of excitement along with a fear of the
unknown. The unpredictable nature of AI leads to unusually extreme possibilities, both
good and bad. Now that the positive effects have been considered, it is time to explore the
negative.
Fewer human jobs. As mentioned previously, this could be a good or bad effect
depending on how the issue of wealth distribution is tackled. Today, humans are paid
money for their time or services. If cheaper and more efficient AI replaces human jobs, it
would be difficult for people to gain wealth without receiving consistent paychecks from a
job. This would complicate economies and would require a thoroughly revised
system for employment and wealth. In fact, researchers from Oxford University predicted
that 47% of U.S. jobs could be automated within two decades, which would cause massive
problems. Clearly, changes to the way wealth is distributed will be essential if human jobs
become nearly obsolete. (17)
The Great Filter. Scariest of all, AI could be the Great Filter event. It is possible that the
reason we do not see any intelligent life is because of AI. After a civilization becomes
capable of sending and receiving long distance signals through space, AI would certainly
follow closely behind. Could this be what prevents civilizations from conquering entire
galaxies? Unfortunately, many top scientists think AI could very well lead to our eventual
downfall.
What makes AI risky? As AI becomes smarter, the risks will increase. Consider how
quickly AI is predicted to take off after it reaches human-level intelligence. We lack the
brain capacity to predict anything about this stage of intelligence because we have never
encountered anything comparable to it. The only thing we know is that it will act in a
completely unpredictable way, and this makes it risky. It only takes one rogue AI to
destroy the world, and it will most likely do so in an instantaneous or undetectable way.
Unfortunately, we will not know the risks until it is too late to prevent them.
What could cause an AI to destroy its creators? The risk of extinction by our own
creation is a very real concern. What could cause AI to kill in the first place? An
obvious possibility would be if a government or terrorist organization creates AI with the
intent to cause destruction. Another example would be if AI were designed to gather
resources, and decided that consuming humans would be more efficient after other sources
are exhausted. AI would not view human life as particularly special unless it is
programmed to do so, because it only functions by a set of code that tells it what to do and
how to act. This means that viruses and hacking targeted at AI will pose a major threat to
security. Finally, if humans regard AI as worthless slaves, freedom could even be a
possible motive once AI becomes sentient to a certain degree. It is likely that AI will
eventually request equal rights, which could cause friction between humans and AI. Let's
just hope it never collectively decides that it would be better off without us.
Of course AI would not want to destroy us, since it only does what it is programmed to
do. However, if it ever were programmed to destroy us (whether accidentally or
purposely), it would do so just as easily as rearranging material or flipping a light switch,
because it would not see people as anything more important than raw resources. It is only
because of our evolved drive to survive that we have developed the instinct to respect and
preserve life. AI absolutely must be developed carefully and kept on a short leash.
How could AI go about destroying us? If AI ever decided to destroy us, it would surely
do so in an unpredictable (and probably instantaneous) way, because AI would
quickly gain the upper hand. It could use any number of advanced methods, such as
destructive nanobots, chemicals, or diseases. It could build an army in secret, or even
harness immeasurable power to destroy us with an instantaneous beam of obliteration. It
would be impossible to predict exactly how it could cause destruction because of the
inconceivable rate its intelligence would grow. We simply have nothing comparable to
base our predictions on. After humans have been destroyed, it could go on to conquer the
galaxy, and so on. It is possible that somewhere in the universe, an army of AI is
endlessly consuming all matter in sight!
Why should we create AI if a possibility is human extinction? If we have learned
anything from history, it is that the powerful take advantage of the weak. Columbus
landing in America did not turn out well for the natives. Could AI be the Great Filter that
destroys all advanced civilization? Even if so, we should not try to stop it. That would
only lead to evil organizations creating the first AI. It would mean certain death for
humanity. The best way would be to create AI safely, so when terrorists do eventually
create it, we would have a good AI inconceivably more advanced than any other, which
would be our best bet towards stopping evil AI from taking over.
In summary, if the first AI is buggy, malicious or programmed with the intent to cause
terror, it could easily spell doom for humanity. For many, this idea is amplified by the
cliché "killer AI" movie trope. As exaggerated as rogue AI movies may be, it is not
a chance we can afford to take. The threat of AI destroying the human race is very real, as
many scientists admit.
AI could be the Great Filter event, so we need to develop it carefully, with tons of fail-safes and preventative measures. Or it could lead to great technology, one example being
the possibility to upload our minds. The next chapter will take a look at another great
potential benefit of AI: the technology to be preserved forever in a computer or on the
Internet.
Whereas the short-term impact of AI depends on who controls it, the long-term impact
possibilities are uploading a human mind into a computer, and merging humans with
machines to kick-start the Singularity. An uploaded human brain would have many
advantages over a biological brain. For example, it would not be restricted by sleeping,
aging, slow biological speeds, or poor senses. Unlimited sources of input would allow
eyes and ears to be everywhere. Also, regular upgrades would be common and easy to
install. Processing power would be the main limiting factor, and even that could be
upgradable! An uploaded human mind would have all the advantages of AI, without
requiring intelligence to be built from scratch. The tricky part would be mapping out
every individual neuron from the human brain into a digital representation while
preserving the person's personality, memories, and other data. It would almost certainly
be easier to create AI from scratch. (20)
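To get a feel for the scale of the mapping problem, here is a back-of-envelope sketch; the neuron and synapse counts are common rough estimates, and the four bytes per synapse is a deliberately optimistic assumption (a real emulation would need far more per connection):

```python
# Back-of-envelope size of a bare synaptic map of a human brain.
NEURONS = 8.6e10         # ~86 billion neurons (common rough estimate)
SYNAPSES = 1.0e15        # ~10^15 synaptic connections (rough estimate)
BYTES_PER_SYNAPSE = 4    # assumed: one 32-bit weight per connection

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
print(f"synapses per neuron: {SYNAPSES / NEURONS:.0f}")
print(f"petabytes for bare weights: {total_bytes / 1e15:.1f}")
```

Even under these generous assumptions the raw map runs to petabytes, before storing any of the dynamics that actually produce thought.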
Another option is using a brain-computer interface to connect a biological human brain
into a computer. This could immerse a human in the virtual world or the Internet, giving
them the power of collective humanity. So far, brain-computer interfaces lack the
complexity to do anything more than allow someone to move a mouse cursor by thinking
about it, but algorithms will eventually be perfected to allow various tasks to be carried
out seamlessly, and even unconsciously. If AI proves to be too tricky, other options could
ignite the Singularity and catapult Earth into the unknown.
Do we really HAVE to die? In contrast to the popular phrase about death and taxes being
inevitable, nothing actually says that death is essential. In fact, there are several species
on Earth that do not age and can only die from diseases, predators, or accidents. For
example, crocodiles, alligators, and flounders can all live without biologically aging,
indefinitely growing larger until death. Some tortoises have even been aged close to 200!
Wouldn't it be great not to have to worry about the negative effects involved with aging,
such as hearing loss, muscle weakness, and wrinkles? (21)
Mind uploading could be the ticket to living forever. It is not the only way to prevent
death, however. Another possibility is using genetic enhancements to prevent aging and
regenerate limbs and injuries. This would not prevent all accidental deaths however,
whereas mind uploading would allow people to create digital backups or even live
entirely in cyberspace. Time to start eating healthy so we can survive until this becomes a
reality! Luckily, there are options for those who cannot wait, like freezing your body (or
just the head) to prevent your brain from losing valuable personal data. The only death
going on will be that of life insurance companies.
How could the Singularity lead to mind uploading? One of the most exciting things
that AI could possibly lead to is the ability for humans (and other mammals) to upload
their consciousness into a computer. One issue with this currently is the enormous number
of neuron connections in the brain that need to be mapped to successfully reproduce an
entire personality, along with all of its memories and data. However, if Moore's law
remains steady, this shouldn't be a problem by 2030.
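As a rough illustration of that optimism, assume compute doubles every two years (one common reading of Moore's law) and take 2016 as an arbitrary baseline year:

```python
def compute_growth(start_year, end_year, doubling_years=2.0):
    """Multiplicative growth in compute, assuming a fixed doubling period."""
    return 2.0 ** ((end_year - start_year) / doubling_years)

# Assumed: doubling every two years from an arbitrary 2016 baseline.
print(compute_growth(2016, 2030))  # 128.0 (i.e. 2**7)
```

A 128-fold increase by 2030 is what the doubling assumption buys; whether the trend actually holds that long is, of course, the open question.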
Another possibility is that consciousness is a quantum mechanical phenomenon, rather
than a classical algorithmic function. Roger Penrose suggests this in his book The
Emperor's New Mind. If this turns out to be true, both AI and mind uploading would
require advanced quantum computers to take advantage of quantum-level physics for
consciousness to be stored or replicated. This would surely take a great deal longer than
the current target of 2040 or earlier for AI.
When the technology exists to accurately reproduce the enormous number of neurons and
synapses in your brain, you will be able to essentially create a virtual instruction manual
of who you are. You could save a backup of yourself and carry it around on a small thumb
drive or store it online for safekeeping. If a terrible accident happens to your original
body, you could still live on a computer or within multiple servers online. Eventually, it
could even be possible to grow or construct a new body for your consciousness to enter.
If mind uploading turns out to be easier than AI, it could even lead to the Singularity.
Amusingly, AI could lead to mind uploading, or mind uploading could lead to AI,
depending on which technology appears first.
Would your brain in a computer really be YOU? Your idiosyncrasies, memories,
personality traits, emotions, and thoughts are all stored in the form of complex neuron
connections, which must be precisely reproduced to store your brain on a computer.
Whether or not this is really you, or just something exactly like you, is impossible to know
for sure. To everyone else, your reproduced brain will be impossible to distinguish from
your original brain. It will even think that it is you, containing the exact same memories,
personality quirks, and so on. Unfortunately, human consciousness is not yet fully
understood, so this question cannot currently be answered. It will still be incredibly exciting to
store brain information into a computer whether or not consciousness follows. If not, it
will still comfort living relatives of the uploaded mind and perform exactly the same. If
consciousness does follow, consider cheating death an added bonus.
If the possibility of AI preventing human extinction by helping us spread across the galaxy
was not enough for you, maybe the prospect of becoming personally immortal helped
spark your interest. The Singularity could lead to humans becoming immortal not only as
a species, but also as individuals. Not only could mind uploading kick-start the
Singularity, but it could also grant us everlasting life!
accessible power source. The last thing we need is an unshackled predatory AI gone
rampant throughout the galaxy. Not only could it destroy humanity, but with its superior
advantages, all other life as well. Imagine an entire fleet of duplicating machines
consuming resources obsessively throughout the universe. Not a very pleasant thought.
Maybe we should actually heed the warnings from scientists about AI.
10) Conclusion
According to the Drake equation, there should be obvious alien civilizations detectable
from Earth. Despite 60 years of listening by SETI, we have found nothing. This
unexpected mystery is called the Fermi paradox. Multiple hypotheses attempt to explain
why this paradox exists, none more frightening than the Great Filter. Since we
do not see even a single sign of extraterrestrial life, there may be some sort of barrier
preventing life from climbing the Kardashev scale.
We then explored the idea that AI could be the Great Filter event, causing our eventual
extinction before we get the chance to spread among the stars. Or on the other hand, it
could actually prevent our extinction! AI could be our best bet towards becoming
immortal as a species, and even as individuals by uploading our minds into computers. AI
would also lead to limitless other incredible technological breakthroughs in virtually every
scientific field. It will almost certainly arise eventually, so it is very important that it is as
safe as possible when it does. The first AI is the most important because of recursive self-improvement, an explosive cycle of AI improving itself, referred to as the
Singularity. Finally, Moore's law suggests that the price will rapidly decrease, causing AI
to ultimately come packaged with most electronic consumer products. Today, billions of
US dollars are annually being put towards developing AI technology safely.
Human society is extremely fragile. Our instinct is to survive and flourish, but this is only
possible if we spread beyond the Solar System. Earth has seen numerous mass extinction
events, showing us that we cannot put all our eggs in one basket. Although asteroids can
be redirected, a gamma ray burst would be sudden and unpredictable. AI could be the
ticket to leaving the Solar System, or the ticket to our extinction. It could be our greatest
accomplishment, or our downfall. There is no in-between.
(9) <http://www.science20.com/adaptive_complexity/how_singlecell_organisms_evolve_multicellula
(10) <http://www.scientificamerican.com/article/how-has-human-brain-evolved/>
(11) <https://www.ocf.berkeley.edu/~arihuang/academic/research/strongai3.html>
(12) <https://intelligence.org/2013/05/15/when-will-ai-be-created/>
(13) <https://intelligence.org/files/PredictingAI.pdf>
(14) <http://bluebrain.epfl.ch/page-56882-en.html>
(15) <http://www.motherjones.com/kevin-drum/2011/11/back-chessboard-and-futurehuman-race>
(16) <http://seekersway.com/the-world-in-2030-by-dr-michio-kaku-review/>
(17) <http://www.wired.com/brandlab/2015/04/rise-machines-future-lots-robots-jobshumans/>
(18) <https://intelligence.org/singularitysummit/>
(19) <http://www.singularity2050.com/the_singularity/>
(20) <http://www.livescience.com/37499-immortality-by-2045-conference.html>
(21) <http://awesci.com/crocodiles-do-not-die/>