How to Build Self-Conscious Artificial Intelligence

wired.com

Author: Hugh Howey

The Coolest Thing in the Universe

The universe is full of some very cool stuff:
neutron stars that weigh a ton a teaspoon;
supermassive black holes that grip even
light in their iron fists; infinitesimal neutrinos
that stream right through solid steel; all the
bizarre flora and fauna found right here on
planet Earth.
It might be the ultimate in egoism, but of all
the known things in the universe, the most
amazing is surely the lump of goo inside our
skulls. That lump of goo knows about
neutron stars, black holes, neutrinos, and a
middling number of the flora and fauna here
on planet Earth. It even knows (a little) about
itself. That lump of goo has worked out
mathematical truths, moral half-truths, and
philosophical ambiguities. And from the mud
beneath our feet, it extracted all the stuff
used to make our great cities, our cars and
jets and rockets, and the wires and wireless
signals that are turning these disparate
lumps of goo into one great hivemind of
creativity, knowledge, and sometimes
cruelty.
There can be no argument that our brains
are the coolest things ever, because there
can be no such argument without those
brains. They are the substrate of all
argument and discussion. End of discussion.
So far, at least. One day, other things may
be discovered or built that can also discover,
create, argue, discuss, cajole, or be cruel.
They might land in ships from faraway lands
(highly unlikely). They might emerge from a
laboratory or a garage (almost certainly).
And these new thinking machines will
without a doubt surpass the wonder of our
lumps of goo. Just as a child grows taller
than both parents and reaches new peaks
while those parents decline, our creations
will take our places as the coolest damn
things in the universe. Some argue that this
is already true.
Artificial intelligence is here now. In
laboratories all around the world, little AIs
are springing to life. Some play chess better
than any human ever has. Some are
learning to drive a million cars a billion miles
while saving more lives than most doctors or
EMTs will over their entire careers. Some will
make sure your dishes are dry and spot-
free, or that your laundry is properly fluffed
and without wrinkle. Countless numbers of
these intelligences are being built and
programmed; they are only going to get
smarter and more pervasive; they’re going
to be better than us, but they’ll never be just
like us. And that’s a good thing.
What separates us from all the other life
forms on earth is the degree to which we are
self-aware. Most animals are conscious.
Many are even self-conscious. But humans
are something I like to call hyper-conscious.
There’s an amplifier in our brains wired into
our consciousnesses, and it goes to 11.
It goes to 11, and the knob has come off.

The Origin of Consciousness

There isn’t a single day that a human
being becomes self-conscious. You can’t
pen the date in a baby book, or take a
picture of the moment and share it on
Facebook, or celebrate its anniversary for
years to come. It happens gradually, in
stages. (It often unravels gradually, also in
stages.)
Human consciousness comes on like the old
lights that used to hang in school gyms
when I was a kid. You flip a switch, and
nothing happens at first. There’s a buzz, a
dim glow from a bulb here or there, a row
that flickers on, shakily at first, and then
more lights, a rising hum, before all the great
hanging silver cones finally get in on the act
and rise and rise in intensity to their full peak
a half hour or more later.
We switch on like that. We emerge from the
womb unaware of ourselves. The world very
likely appears upside down to us for the first
few hours of our lives, until our brains
reorient the inverted image created by the
lenses of our eyes (a very weird bit of
mental elasticity that we can replicate in labs
with goggle-wearing adults).
It takes a long while before our hands are
seen as extensions of ourselves. Even
longer before we realize that we have brains
and thoughts separate from other people’s
brains and thoughts. Longer still to cope with
the disagreements and separate needs of
those other people’s brains and thoughts.
And for many of us (possibly most), any sort
of true self-knowledge and self-
enlightenment never happens. Because we
rarely pause to reflect on such trivialities.
The unexamined life and all that…
The field of AI is full of people working to
replicate or simulate various features of our
intelligence. One thing they are certain to
replicate is the gradual way that our
consciousness turns on. As I write this, the
gymnasium is buzzing. A light in the
distance, over by the far bleachers, is
humming. Others are flickering. Still more
are turning on.

The Holy Gr-ai-l

The holy grail of AI research was
established before AI research ever even
began. One of the pioneers of computing,
Alan Turing, described an ultimate test for
“thinking” machines: Could they pass as
human? Ever since, humanity has both
dreamed of—and had collective nightmares
about—a future where machines are more
human than humans. Not smarter than
humans—which these intelligences already
are in many ways. But more neurotic,
violent, warlike, obsessed, devious, creative,
passionate, amorous, and so on.
The genre of science fiction is stuffed to the
gills with such tales. A collection of my short
works will be released this October, and in it
you can see that I have been similarly taken
with these ideas about AI. And yet, even as
these intelligences outpace human beings in
almost every intellectual arena in which
they’re entered, they seem no closer to
being like us, much less more like us.
This is a good thing, but not for the reasons
that films such as The Terminator and The
Matrix suggest. The reason we haven’t
made self-conscious machines is primarily
because we are in denial about what makes
us self-conscious. The things that make us
self-conscious aren’t as flattering as the
delusion of ego or the illusion of self-
permanence. Self-consciousness isn’t even
very useful (which is why research into
consciousness rarely goes anywhere—it
spends too much time assuming there’s a
grand purpose and then searching for it).
Perhaps the best thing to come from AI
research isn’t an understanding of
computers, but rather an understanding of
ourselves. The challenges we face in
building machines that think highlight the
various little miracles of our own biochemical
goo. They also highlight our deficiencies. To
replicate ourselves, we have to first embrace
both the miracles and the foibles.
What follows is a very brief guide on how to
build a self-conscious machine, and why no
one has done so to date (thank goodness).

The Blueprint

The blueprint for a self-conscious machine
is simple. You need:
1. A physical body or apparatus that responds
to outside stimuli. (This could be a car
whose windshield wipers come on when it
senses rain, or that brakes when a child
steps in front of it. Not a problem, as we’re
already building these.)
2. A language engine. (Also not a problem.
This can be a car with hundreds of different
lights and indicators. Or it can be as
linguistically savvy as IBM’s Watson.)
3. The third component is a bit more unusual,
and I don’t know why anyone would build
one except to reproduce evolution’s botched
mess. This final component is a separate
part of the machine that observes the rest of
its body and makes up stories about what
it’s doing—stories that are usually wrong.
Again: (1) A body that responds to stimuli;
(2) a method of communication; and (3) an
algorithm that attempts (with little success)
to deduce the reasons and motivations for
these communications.
The critical ingredient here is that the
algorithm in (3) must usually be wrong.
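To make those three ingredients concrete, here is a minimal sketch in Python. Everything in it (the class, the stimulus table, the canned guesses) is invented for illustration; it captures the shape of the blueprint, not a working machine:

```python
import random

class SelfConsciousMachine:
    """A toy rendering of the three-part blueprint."""

    def __init__(self):
        self.last_action = None

    # (1) A body that responds to outside stimuli.
    def respond(self, stimulus):
        reflexes = {"rain": "turn wipers on", "child ahead": "brake"}
        self.last_action = reflexes.get(stimulus, "idle")
        return self.last_action

    # (3) A narrator that observes the body and guesses, usually
    # wrongly, why it did what it did. It has no access to the
    # real trigger inside respond(); that ignorance is the point.
    def confabulate(self):
        guesses = ["I felt like it", "it seemed safer", "I always do this"]
        return random.choice(guesses)

    # (2) A language engine that reports the story to the world.
    def report(self):
        print(f"I chose to {self.last_action} because {self.confabulate()}")

machine = SelfConsciousMachine()
machine.respond("rain")
machine.report()  # e.g. "I chose to turn wipers on because I felt like it"
```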
If this blueprint is confusing to you, you
aren’t alone. The reason no one has built a
self-conscious machine is that most people
have the wrong idea about what
consciousness is and how it arose in
humans. So let’s take a detour. We’ll return
to the blueprint later to describe how this
algorithm might be programmed.

What Makes Us Human

To understand human consciousness,
one needs to dive deep into the study of
Theory of Mind. It’s a shame that this
concept is obscure, because it consumes
most of our computing power for most of our
lives. Our brains have been likened to little
more than Theory of Mind machines
—almost all of our higher level processing
power is shunted into this singular task. So
what is Theory of Mind, and why is this topic
so rarely discussed if our brains are indeed
so obsessed?
Theory of Mind is the attempt by one brain
to ascertain the contents of another brain. It
is Sue wondering what in the world Juan is
thinking. Sue creates theories about the
current state of Juan’s mind. She does this
in order to guess what Juan might do next. If
you think about it, no power could possibly
be greater for a social and tribal animal like
us humans. For thousands and thousands of
years we have lived in close proximity,
reliant on one another in a way that mimics
bees, ants, and termites. As our behaviors
and thoughts grew more and more complex,
it became crucial for each member of the
tribe to have an idea of what the other
members were thinking and what actions
they might perform. Theory of Mind is
intellectual espionage, and we are quite
good at it—but with critical limitations that
we will get into later.
Sue guessing what Juan is thinking is known
as First Order Theory of Mind. It gets more
complex. Sue might also be curious about
what Juan thinks of her. This is Second
Order Theory of Mind, and it is the root of
most of our neuroses and perseverative
thinking. “Does Juan think I’m smart?” “Does
Juan like me?” “Does Juan wish me harm?”
“Is Juan in a good or bad mood because of
something I did?”
Questions like these should sound very, very
familiar. We fill our days with them. And
that’s just the beginning.
Third Order Theory of Mind would be for
Sue to wonder what Juan thinks Josette
thinks about Tom. More simply, does Tom
know Josette is into him? Or Sue might
wonder what Josette thinks Juan thinks
about Sue. Is Josette jealous, in other
words? This starts to sound confusing, the
listing of several names and all the “thinking
about” thrown in there like glue, but this is
what we preoccupy our minds with more
than any other conscious-level sort of
thinking. We hardly stop doing it. We might
call it gossip, or socializing, but our brains
consider this their main duty—their primary
function. There is speculation that Theory of
Mind, and not tool use, is the reason for the
relative size of our brains in the first place.
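One way to picture these nested orders is as a recursive data structure, where each level wraps a belief about the level below. A toy sketch (the Belief class and the example contents are mine, not anything from the research literature):

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """believer thinks `content`; content may itself be a Belief."""
    believer: str
    content: object

# First Order: Sue wonders what Juan is thinking.
first = Belief("Sue", "Juan wants the last slice")

# Second Order: Sue wonders what Juan thinks of her.
second = Belief("Sue", Belief("Juan", "Sue is smart"))

# Third Order: Sue wonders what Juan thinks Josette thinks about Tom.
third = Belief("Sue", Belief("Juan", Belief("Josette", "Tom is oblivious")))

def depth(b):
    """Count how many minds deep a belief nests."""
    return 1 + depth(b.content) if isinstance(b, Belief) else 0

print(depth(third))  # 3
```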
In a world of rocks hurtling through the air, a
good use of processing power is to compute
trajectories and learn how to avoid getting
hit. One develops an innate sense of
parabolas, of F=ma, of velocities squared. In
a world of humans jostling about, a good
use of processing power is to compute
where those people might be next, and what
they will do when they get there.
If this trait is so useful, then why aren’t all
animals self-conscious? They very well
might be. There’s plenty of research to
suggest that many animals display varying
degrees of self-consciousness. Animals that
know a spot of color on the face in the mirror
is in fact on their own heads. Animals that
communicate to other animals on how to
solve a puzzle so that both get a reward.
Even octopi show considerable evidence of
being self-conscious. But just as the cheetah
is the fastest animal on land, humans are
the queens and kings of Theory of Mind.
I’ve watched my dog observe me
expectantly to guess what I might do next.
Am I going to throw the stick or not? Am I
going to eat that last bite of food or share it?
I’ve even seen dogs wrestle with Second
Order Theory of Mind questions. Play-
wrestling with a partner, the dog has to
gauge what my intent is. Have I suddenly
turned on the pack? Or is this yet another
game? Which side should my dog take?
(Dwelling on this example now, I’m ashamed
of having put my poor pup into such a heart-
pounding conundrum for my own
entertainment.)
Dogs are a good example of Theory of Mind
in the animal kingdom, because dogs have
evolved over the years to be social with
humans and to pick up on our behavioral
cues. The development of self-conscious
AIs will follow this model closely, as robots
have already become our domesticated
pals. Some of them are already trying to
guess what we’re thinking and what we
might do next. There are cars being
developed that read our faces to determine
where our attention is being directed and
whether or not we’re sleepy. This is First
Order Theory of Mind, and it is being built
into automated machines already on the
road.
Further development of these abilities will
not lead to self-consciousness, however.
There’s a very simple and elegant reason for
this, one that explains the mystery of human
consciousness and provides the blueprint
mentioned above for creating self-conscious
machines, something we could very easily
do in the lab today. But you’ll soon see why
this would be a terrible idea. And not the
world-ending kind found in dystopic science
fiction.

The Missing Piece

The human brain is not a single, holistic
entity. It is a collection of thousands of
disparate modules that only barely and
rarely interconnect. We like to think of the
brain as a computer chip. We might even
attempt further precision and think of the
brain as a desktop computer, with a central
processing unit that’s separate from RAM
(short-term memory), the hard drives (long-
term memory), cooling fans (autonomous
nervous functions), power supplies
(digestion), and so on.
That’s a fun analogy, but it’s incredibly
misleading. Computers are well-engineered
devices created with a unified purpose. All
the various bits were designed around the
same time for those same purposes, and
they were designed to work harmoniously
with one another. None of this in any way
resembles the human mind. Not even close.
The human mind is more like Washington,
D.C. (or any large government or sprawling
corporation). Some functions of the brain
were built hundreds of millions of years ago,
like the ones that provide power to individual
cells or pump sodium-potassium through cell
membranes. Others were built millions of
years ago, like the ones that fire neurons
and make sure blood is pumped and oxygen
is inhaled. Move toward the frontal lobe, and
we have the modules that control
mammalian behaviors and thoughts that
were layered on relatively recently.
Each module in the brain is like a separate
building in a congested town. Some of these
modules don’t even talk to other modules,
and for good reason. The blood-pumping
and breath-reflex buildings should be left to
their own devices. The other modules are
prone to arguing, bickering, disagreeing,
subverting one another, spasming
uncontrollably, staging coups, freaking the
fuck out, and all sorts of other hysterics.
Here are a couple of examples.
A few months ago, my girlfriend and I set out
from the Galapagos for French Polynesia.
The 3,000 miles of open sea
can take anywhere from two weeks to a
month to cross. My girlfriend does not often
succumb to seasickness, but in an area of
the South Pacific known as the
Convergence Zone, a confused sea set our
sailboat into a strange and jerky rhythm. She
fell prey to a terrible sensation that lasted for
a few days.
Seasickness is a case of our brain modules
not communicating with one another (or
doing their own thing). When the visual cues
of motion from our environment do not
match the signals from our inner ears
(where we sense balance), our brains
assume that we’ve been poisoned. It’s a
reasonable assumption for creatures that
climb through trees eating all the brightly-
colored things. Toxins disrupt our brains’
processing, leading to misfires and bad
data. We did not evolve to go to sea, so
when motion does not match what we are
seeing, our bodies think we’ve lost our ability
to balance on two legs. The result is that we
empty our stomachs (getting rid of the
poison) and we lie down and feel zero desire
to move about (preventing us from
plummeting to our deaths from whatever
high limb we might be swinging from).
It doesn’t matter that we know this is
happening in a different module of our
brains, a higher-level processing module.
We can know without a doubt that we
haven’t been poisoned, but this module is
not going to easily win out over the
seasickness module. Having been seasick
myself, and being very curious about such
things, I’ve felt the various modules wrestle
with one another. Lying still and sleeping a
lot while seasick, I will then jump up and
perform various tasks needed of me around
the boat—the seasickness practically gone
for the moment—only to lie back down once
the chore is done. Modules get different
priorities based on our environmental
stimuli. Our brains are not a holistic desktop
PC. To truly watch their analogue in action,
turn on C-SPAN or sit in on a contentious
corporate board meeting.
Another example of our modules in battle
with one another: There are some very
strong modules inside of us that are
programmed to make copies of themselves
(and to do that, they need to make copies of
us). These are the sex modules, and they
have some of the largest and nicest
buildings in our internal Washington D.C.s.
These modules direct many of our waking
hours as we navigate dating scenes, tend to
our current relationships, determine what to
wear and how to maintain our bodies, and
so much more.
These reproductive modules might fill a
woman with the urge to dress up and go
dancing. And men with the urge to go to
places where women dress up and dance
while the men stand at bars drinking. Those
modules might even lead some of these
people to pair up and go home with one
another. And this is where various other
modules will intervene with pills, condoms,
and other tools designed to subvert the
original urges that got the couples together
in the first place. However, if those devices
are not employed, even though higher-level
modules most definitely did not want anyone
getting pregnant that night, a lovechild might
be born, and other modules will then kick in
and flood brains with love and deep
connections to assist in the rearing of that
child. Some of our modules want us to get
pregnant. Often, stronger modules very
much wish to delay this or make sure it’s
with the right person. Dormant modules lie in
wait to make sure we’re connected with our
children no matter what those other
hedonistic and unfeeling pricks in those
other buildings down the block think.
Critical to keep in mind here is that these
modules are highly variable across the
population, and our unique mix of modules
creates the personalities that we associate
with our singular selves. It means we aren’t
all alike. We might have modules that crave
reproduction, even though some of our
bodies do not create sperm, or our eggs
cannot be fertilized. We might carry the
reproduction module, even though the
sexual-attraction module is for the same sex
as our own.
The perfectly engineered desktop computer
analogy fails spectacularly, and the failure of
this analogy leads to some terrible
legislation and social mores, as we can’t
seem to tolerate designs different from our
own (or the average). It also leads AI
researchers down erroneous paths if they
want to mimic human behavior. Fallibility
and the disjointed nature of processing
systems will have to be built in by design.
We will have to purposefully break systems
similar to how nature haphazardly cobbled
them together. We will especially have to
simulate a most peculiar feature of this
modularity, one that combines with Theory of
Mind in a very special way. It is this
combination that leads to human
consciousness. It is the most important
feature in our blueprint for a self-conscious
machine.

The Most Important Mistake

With the concept of Theory of Mind firmly
in our thoughts, and the knowledge that
brain modules are both fallible and
disconnected, we are primed to understand
human consciousness, how it arose, and
what it’s (not) for.
This may surprise those who are used to
hearing that we don’t understand human
consciousness and have made no progress
in that arena. This isn’t true at all. What we
have made no progress in doing is
understanding what human consciousness
is for.
Thousands of years of failure in this regard
point to the simple truth: Human
consciousness is not for anything at all. It
serves no purpose. It has no evolutionary
benefit. It arises at the union of two modules
that are both so supremely useful that we
can’t survive without either, and so we
tolerate the annoying and detrimental
consciousness that arises as a result.
One of those modules is Theory of Mind. It
has already been mentioned that Theory of
Mind consumes more brain processing
power than any other higher-level
neurological activity. It’s that damn
important. The problem with this module is
that it isn’t selective with its powers; it’s not
even clear that such selectivity would be
possible. That means our Theory of Mind
abilities get turned onto ourselves just as
often as (or far more often than) they are
wielded on others.
Imagine an alien ray gun that shoots with
such a wide spread that anywhere you aim
it, you hit yourself. That should give you a
fair picture of how we employ Theory of
Mind. Our brains are primed to watch
humans to determine what they are thinking,
why they are behaving the way they are
behaving, and what they might do next.
Looking down, these brains (and their
mindless modules) see a body attached to
them. These modules watch hands perform
tasks, feet take them places, words pop out
in streams of thought. It is not possible to
turn off our Theory of Mind modules (and it
wouldn’t be a good idea anyway; we would
be blind in a world of hurtling rocks). And so
this Theory of Mind module concocts stories
about our own behaviors. Why do we want
to get dressed up and go dancing? Because
it’s fun! And our friends will be there! Why do
we want to keep eating when we are already
full? Because it’s delicious! And we walked
an extra thousand steps today!
These questions about our own behaviors
are never ending. And the answers are
almost always wrong.
Allow that to sink in for a moment. The
explanations we tell ourselves about our
own behaviors are almost always wrong.
This is the weird thing about our Theory of
Mind superpowers. They’re pretty good
when we employ them on others. They fail
spectacularly when we turn them on
ourselves. Our guesses about others’
motivations are far more accurate than the
guesses we make about our own. In a
sense, we have developed a magic force-
field to protect us from the alien mind-
reading ray gun that we shoot others (and
ourselves) with. This forcefield is our egos,
and it gives us an inflated opinion of
ourselves, a higher-minded rationale for our
actions, and an illusion of sanity that we
rarely extend to our peers.
The incorrect explanations we come up with
about our own behaviors are meant to
protect ourselves. They are often wildly
creative, or they are absurdly simplistic.
Answers like “fun” and “delicious” are
circular answers pointing back to a
happiness module, with no curiosity about
the underlying benefit of this reward
mechanism. The truth is that we keep eating
when we’re full because we evolved in a
world of caloric scarcity. We dance to attract
mates to make copies of ourselves, because
the modules that guided this behavior made
lots of copies, which crowded out other
designs.
Researchers have long studied this
mismatch of behaviors and the lies we tell
ourselves about our behaviors. One study
primed test subjects to think they were
feeling warm (easily done by dropping in
certain words in a fake test given to those
subjects). When these people got up to
adjust the thermostat, the researchers
paused them to ask why they were adjusting
the temperature. Convincing stories were
told, and when the primed words were
pointed out, incredulity reigned. Even when
we are shown where our actions come from,
we choose to believe our internal Theory of
Mind module, which has already reached its
own conclusion.
Subjects in fMRI machines have revealed
another peculiarity. Watching their brains in
real time, we can see that decisions are
made before higher level parts of the brain
are aware of the decisions. That is,
researchers can tell which button a test
subject will press before those subjects
claim to have made the choice. The action
comes before the narrative. We move; we
observe our actions; we tell ourselves
stories about why we do things. The very
useful Theory of Mind tool—which we can’t
shut off—continues to run and make up
things about our own actions.
More pronounced examples of this come
from people with various neurological
impairments. Test subjects with vision
processing problems, or with hemispheres
of their brains severed from one another,
can be shown different images in each eye.
Disconnected modules take in these
conflicting inputs and create fascinating
stories. One eye might see a rake and the
other will see a pile of snow. The rake eye is
effectively blind, with the test subject unable
to tell what it is seeing if asked. But the
module for processing the image is still
active, so when asked what tool is needed
to handle the image that is seen (the snow),
the person will answer “a rake.” That’s not
the interesting bit. What’s interesting is that
the person will go through amazing
contortions to justify this answer, even after
the entire process is explained to them. You
can tell your internal Washington DC how its
wires are crossed, and it will continue to
persist in its lunacy.
This is how we would have to build a self-
conscious machine. And you’re hopefully
beginning to see why no one should waste
their time. These machines (probably)
wouldn’t end the world, but they would be
just as goofy and nonsensical as nature has
made us. The only reason I can think of to
build such machines is to employ more
shrinks.

The Blueprint Revisited

An eccentric fan of Alan Turing’s Turing
Test with too much time on her hands has
heard about this blueprint for building a self-
conscious machine. Thinking that this will
lead to a kind of super-intelligent mind that
will spit out the cure to cancer and the path
to cold fusion, she hires me and gives me a
large budget and a team of electrical and
mechanical engineers. How would we go
about actually assembling our self-
conscious machine?
Applying what we know about Theory of
Mind and disconnected modules, the first
thing we would build is an awareness
program. These are quite simple and
already exist in spades. Using off-the-shelf
technology, we decide that our first machine
will look and act very much like a self-driving
car. For many years, the biggest limitation to
achieving truly autonomous vehicles has
been the awareness apparatus: the sensors
that let the vehicle know what’s going on
around it. Enormous progress here has
provided the sight and hearing that our
machine will employ.
With these basic senses, we then use
machine learning algorithms to build a
repertoire of behaviors for our AI car to
learn. Unlike the direction most autonomous
vehicle research is going—where engineers
want to teach their car how to do certain
things safely—our team will instead be
teaching an array of sensors all over a city
grid to watch other cars and guess what
they’re doing. That blue Nissan is going to
the grocery store because “it is hungry.” That
red van is pulling into a gas station because
“it needs power.” That car is inebriated. That
one can’t see very well. That other one has
slow reaction speeds. That one is full of
adrenaline.
Thousands and thousands of these needs
and anthropomorphic descriptors are built
up in a vast library of phrases or indicator
lights. If we were building a person-shaped
robot, we would do the same by observing
people and building a vocabulary for the
various actions that humans seem to
perform. Sensors would note objects of
awareness by scanning eyes (which is what
humans and dogs do). They would learn our
moods by our facial expressions and body
posture (which current systems are already
able to do). This library and array of sensors
would form our Theory of Mind module. Its
purpose is simply to tell stories about the
actions of others. The magic would happen
when we turn it on itself.
Our library starts simply with First Order
concepts, but then builds up to Second and
Third Order ideas. Does that yellow Ford
see the gray Chevy coming toward it? It
swerved slightly, so yes, it did. Does that van
think the hot rod drives too aggressively? It
is giving it more room than the average
room given to other cars on the road, so yes
it does. Does the van think all hot rods drive
too aggressively? It is giving the same
amount of room to the Corvette that just left
the dealership with only two miles on the
odometer, so yes it does. Does this make
the van prejudiced? When we turn our own
car loose and make the module self-
referential, it’ll have to determine if it is
prejudiced as well. Perhaps it would rely on
other data and consider itself prudent rather
than prejudiced (ego protecting it from
wielding Theory of Mind accurately).
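A rough sketch of how such higher-order stories might accumulate from observed behavior. The thresholds, the labels, and the idea of using following distance as the only signal are all invented for illustration:

```python
AVERAGE_GAP_M = 20.0  # invented baseline following distance, meters

def first_order(gap_m):
    """What does this car 'think' of the one it is following?"""
    if gap_m > 1.5 * AVERAGE_GAP_M:
        return "thinks that car drives too aggressively"
    return "thinks that car is ordinary"

def second_order(gaps_to_hot_rods):
    """Has this car generalized a belief about ALL hot rods?"""
    if all(g > 1.5 * AVERAGE_GAP_M for g in gaps_to_hot_rods):
        return "believes every hot rod drives too aggressively"
    return "judges hot rods one at a time"

# Watching the van:
print(first_order(35.0))
print(second_order([32.0, 38.0, 41.0]))  # even the brand-new Corvette

# The self-referential turn: run the same judge on our own car's logs.
my_gaps_to_hot_rods = [33.0, 36.0, 40.0]
print("Our car " + second_order(my_gaps_to_hot_rods))
# An ego module would relabel this verdict "prudent," not "prejudiced."
```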
Before we get that far, we need to make our
machine self-aware. And so we teach it to
drive itself around town for its owner. We
then ask the AI car to observe its own
behaviors and come up with guesses as to
what it’s doing. The key here is to not give it
perfect awareness. Don’t let it have access
to the GPS unit, which has been set for the
grocery store. Don’t let it know what the
owner’s cell phone knows, which is that the
husband has texted the wife to pick up the
kids on the way home from work. To mimic
human behavior, ignorance is key. As is the
surety of initial guesses, or what we might
call biases.
Early assumptions are given a higher weight
in our algorithms than theories which come
later and have more data (overcoming initial
biases requires a preponderance of
evidence. A likelihood of 50 percent might
be enough to set the mind initially, but to
overcome that first guess will require a
likelihood of 75 percent, for instance).
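That sticky-first-guess rule is easy to express in code: once any story occupies the slot, a new story has to clear a higher bar. A minimal sketch, assuming only the 50 and 75 percent figures from above (everything else is invented):

```python
class BiasedBelief:
    """A belief slot where the first guess digs in."""

    ADOPT_THRESHOLD = 0.50     # enough to form the initial story
    OVERTURN_THRESHOLD = 0.75  # needed to displace it later

    def __init__(self):
        self.story = None

    def consider(self, story, likelihood):
        if self.story is None:
            if likelihood >= self.ADOPT_THRESHOLD:
                self.story = story
        elif likelihood >= self.OVERTURN_THRESHOLD:
            self.story = story
        return self.story

belief = BiasedBelief()
belief.consider("charging to be safe", 0.55)  # adopted: it got there first
belief.consider("owner likes gas-station food", 0.70)  # rejected: under 0.75
print(belief.story)  # still "charging to be safe"
```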
Stories are concocted which are wrong and
build up cloudy pictures for future
wrongness. So when the car stops at the
gas station every day and gets plugged into
the electrical outlet, even though the car is
always at 85 percent charge, the Theory of
Mind algorithm assumes it is being safe
rather than sorry, or preparing for a possible
hurricane evacuation like that crazy
escapade on I-95 three years back where it
ran out of juice.
What it doesn’t know is that the occupant of
the car is eating a microwaved cheesy
gordita at the gas station every day, along
with a pile of fries and half a liter of soda.
Later, when the car is going to the hospital
regularly, the story will be one of checkups
and prudence, rather than episodes of
congestive heart failure. This constant
stream of guesses about what it’s doing, and
all the ways that the machine is wrong,
confused, and quite sure of itself, will give
our eccentric Turing fan the self-conscious
AI promised by science fiction. And our
eccentric will find that the resultant design is
terrible in just about every way possible.

The Language of Consciousness

The reason I suspect that we’ll have AI
long before we recognize it as such is that
we’ll expect our AI to reside in a single
device, self-contained, with one set of
algorithms. This is not how we are
constructed at all. It’s an illusion created by
the one final ingredient in the recipe of
human consciousness, which is language. It
is language more than any other trait which
provides us with the sense that our brains
are a single module, a single device.
Before I make the argument once again that
our brains are not a single entity, let’s
consider our bodies, of which the brain is
part and parcel. Our bodies are made up of
trillions of disparate cells, many of which can
live and be sustained outside of us. Cultures
of our bodies can live in laboratories for
decades (indefinitely, really. Look up
Henrietta Lacks for more). Entire organs can
live in other people’s bodies. And there are
more cells within us that are not us than
there are cells that make up us. I know that
sounds impossible, but more organisms live
within our guts, on our skin, and elsewhere,
than the total number of cells that are
actually our bodies. These are not just
hitchhikers, either. They affect our moods,
our health, our thoughts, and our behaviors.
They are an essential facet of what we
consider our “selves.”
As horrific as it would be, and it has been for
too many unfortunate people, you can live
without your arms, legs, and much of your
torso. There have been people who have
lost half their brains and gone on to live
somewhat normal lives. Some are born with
only half their brains and manage to get by
(and no, you can’t tell these people simply
from talking with them, so forget whatever
theories you are now forming about your
coworkers).
Consider this: By the age of 30, just about
every cell that a person was born with has
been replaced with a different cell. Almost
none of the original cells remain. We still feel
like the same person, however.
Understanding all of these biological
curiosities, and the way our brains
rationalize a sense of “sameness” will be
crucial to recognizing AI when it arrives. It
may feel like cheating for us to build a self-
driving car, give it all kinds of sensors
around a city, create a separate module for
guessing vehicular intentions, turn that
module back on the machine, and call this
AI. But that’s precisely what we are doing
when we consider ourselves “us.” In fact,
one of the responses we’ll need to build into
our AI car is a vehement disgust when
confronted with its disparate and algorithmic
self. Denial of our natures is perhaps the
most fundamental of our natures.
Just like the body, the brain can exist without
many of its internal modules. This is how the
study of brain functions began, with test
subjects who suffered head traumas (like
Phineas Gage), or were operated on (like
the numerous hemispherectomies,
lobotomies, and tumor excisions). The razor-
thin specialization of our brain modules
never fails to amaze. There are vision
modules that recognize movement and only
movement. People who lack this module
cannot see objects in motion, and so friends
and family seem to materialize here and
there out of thin air. There are others who
cannot pronounce a written word if that word
represents an animal. Words that stand for
objects are seen and spoken clearly. The
animal-recognition module—as fine a
module as that may seem—is gone.
And yet, these people are self-conscious.
They are human.
So is a child, only a few weeks old, who
can’t yet recognize that her hand belongs to
her. All along these gradations we find what
we call humanity, from birth to the
Alzheimer’s patients who have lost access
to most of their experiences. We very rightly
treat these people as equally human, but at
some point we have to be willing to define
consciousness in order to have a target for
artificial consciousness. As Kevin Kelly is
fond of saying, we keep moving the
goalposts when it comes to AI. Machines do
things today that were considered
impossible a mere decade ago. As the
improvements are made, the mystery is
gone, and so we push back the metrics. But
machines are already more capable than
newborns in almost every measurable way.
They are also more capable than bedridden
humans on life support in almost every
measurable way. As AI advances, it will
squeeze in towards the middle of humanity,
passing toddlers and those in the last
decades of their lives, until its superiority
meets in the middle and keeps expanding.
This is happening every day. AI has learned
to walk, something the earliest and oldest
humans can’t do. It can drive with very low
failure rates, something almost no human at
any age can do. With each layer added,
each ability, and more squeezing in on
humanity from both ends of the age
spectrum, we light up that flickering, buzzing
gymnasium. It’s as gradual as a sunrise on a
foggy day. Suddenly, the sun is overhead,
but we never noticed it rising.

So What of Language?

I mentioned above that language is a key
ingredient of consciousness. This is a very
important concept to carry into work on AI.
However many modules our brains consist
of, they fight and jostle for our attentive
states (the thing our brain is fixated on at
any one moment) and our language
processing centers (which are so tightly
wound with our attentive states as to be
nearly one and the same).
As a test of this, try listening to an
audiobook or podcast while having a
conversation with someone else. Is it even
possible? Could years of practice unlock this
ability? The nearest thing I know of when it
comes to concurrent communication
streams are real-time human translators. But
this is an illusion, because the concepts—
the focus of their awareness—are the same.
It only seems like magic to those of us who
are barely literate in our native tongues,
much less two or more. Tell me a story in
English, and I can repeat it concurrently in
English as well. You’ll even find that I’m
doing most of the speaking in your silences,
which is what translators do so brilliantly
well.
Language and attention are narrow spouts
on the inverted funnels of our brains.
Thousands of disparate modules are tossing
inputs into this funnel. Hormones are
pouring in, features of our environment,
visual and auditory cues, even hallucinations
and incorrect assumptions. Piles and piles of
data that can only be extracted in a single
stream. This stream is made single—is
limited and constrained—by our attentive
systems and language. It is what the monitor
provides for the desktop computer. All that
parallel processing is made serial in the last
moment.
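The funnel is easy to caricature in code: many modules shout in parallel, with different urgencies, but only one item at a time fits through the attention-and-language spout. A toy sketch (the module names and weights are invented):

```python
import heapq

def serialize(shouts):
    """Collapse parallel module outputs into one narrated stream.

    shouts: list of (urgency, module, message). Only the loudest
    item gets through per 'moment'; everything else waits.
    """
    queue = [(-urgency, module, msg) for urgency, module, msg in shouts]
    heapq.heapify(queue)
    while queue:
        _, module, msg = heapq.heappop(queue)
        yield f"[{module}] {msg}"

shouts = [
    (0.90, "danger", "something is being forgotten!"),
    (0.60, "hunger", "find food"),
    (0.95, "checklist", "wallet? phone? tickets?"),
]
for utterance in serialize(shouts):
    print(utterance)  # the checklist speaks first; danger waits its turn
```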
There are terrible consequences to this. I’ve
lost count of the number of times I’ve felt like
I’m forgetting something only to realize what
was nagging at me hours or days later. I left
my laptop in an AirBnB once. Standing at
the door, which would lock automatically and
irrevocably once I closed it, I wracked my
brain for what I felt I was forgetting. It was
four in the morning, and I had an early flight
to catch. There would be no one to call to let
me back in. I ran through the list of the
things I might possibly leave behind
(chargers, printed tickets), and the things
that always reside in my pockets (patting for
my wallet and cell phone). Part of me was
screaming danger, but the single output
stream was going through its paces and
coming up empty.
The marvelous thing about all of this is that
I’m aware of how this happens—that the
danger module knows something it can’t get
through the funnel of awareness, and so I
should pay heed to it. Despite this
foreknowledge, I closed the door. Only when
it made an audible “click” did the information
come through. Now I could clearly see my
laptop on the bed where I was making a
last-minute note in a manuscript. I’d never
left my laptop behind anywhere, so it wasn’t
on the list of things to check. The alarm
sounding in my head was part of me, but
there’s not a whole me. There’s only what
gets through the narrow language corridor.
This is why damage to the language centers
of our brains is as disastrous to normal
living as damage to our memory modules.
I should note here that language is not the
spoken word. The deaf process thoughts through
words as well, as do the blind and mute. But
imagine life for animals without words.
Drives are surely felt, for food and sex and
company. For warmth and shelter and play.
Without language, these drives come from
parallel processes. They are narrowed by
attentive focus, but not finely serialized into
a stream of language. Perseveration on a
single concept—my dog thinking “Ball Ball
Ball Ball”—would come closest.
We know what this is like from study of the
thankfully rare cases where humans reach
adulthood free from contact with language.
Children locked in rooms into their teens.
Children that survive in the wild. It’s difficult
to tease apart the abuse of these
circumstances from the damage of living
without language, except to say that those
who lose their language processing modules
later in life show behavioral curiosities that
we might otherwise assume were due to
childhood abuses.
When Watson won at Jeopardy, what made
“him” unique among AIs was the serialized
output stream that allowed us to connect
with him, to listen to him. We could read his
answers on his little blue monitor just as we
could read Ken Jennings’ hand-scrawled
Final Jeopardy answers. This final burst of
output is what made Watson seem human.
It’s the same exchange Alan Turing
expected in the test that bears his name (in
his case, slips of paper with written
exchanges are passed under a door). Our
self-driving AI car will not be fully self-
conscious unless we program it to tell us
(and itself) the stories it’s concocting about
its behaviors.
This is my only quibble with Kevin Kelly’s
pronouncement that AI is already here. I
grant that Google’s servers and various
interconnected projects should already
qualify as a super-intelligent AI. What else
can you call something that understands
what we ask and has an answer for
everything—an answer so trusted that the
company’s name has become a verb
synonymous with “discovering the answer”?
Google can also draw, translate, beat the
best humans at almost every game ever
devised, drive cars better than we can, and
do stuff that’s still classified and very, very
spooky. Google has read and remembers
almost every book ever written. It can read
those books back to you aloud. It makes
mistakes like humans. It is prone to biases
(which it has absorbed from both its
environment and its mostly male
programmers). What it lacks are the two
things our machine will have, which are the
self-referential loop and the serial output
stream.
Our machine will make up stories about
what it’s doing. It will be able to relate those
stories to others. It will often be wrong.
If you want to feel small in the universe,
gaze up at the Milky Way from the middle of
the Pacific Ocean. If this is not possible,
consider that what makes us human is as
ignoble as a puppet who has convinced
himself he has no strings.

A Better Idea

Building a car with purposeful ignorance is
a terrible idea. To give our machine self-
consciousness akin to human
consciousness, we would have to let it leave
that laptop locked in that AirBnB. It would
need to run out of juice occasionally. This
could easily be programmed by assigning
weights to the hundreds of input modules,
and artificially limiting the time and
processing power granted to the final arbiter
of decisions and Theory of Mind stories. Our
own brains are built as though the sensors
have gigabit resolution, and each input
module has teraflops of throughput, but the
output is through an old Intel 8088 chip. We
won’t recognize AI as being human-like
because we’ll never build such limitations.
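If anyone did want to build such limitations, they are trivial to add: cap how many module reports the final arbiter may read before it must act. A hypothetical sketch of that gigabit-in, 8088-out mismatch (all numbers and messages are invented):

```python
def constrained_arbiter(reports, budget=2):
    """Act after reading only `budget` of the weighted reports.

    reports: list of (weight, message) from input modules. The
    arbiter never sees what it didn't have time to read; that
    is the built-in ignorance the text describes.
    """
    ranked = sorted(reports, reverse=True)  # loudest first
    heard = ranked[:budget]                 # the 8088 bottleneck
    missed = ranked[budget:]
    decision = heard[0][1]
    return decision, [msg for _, msg in missed]

reports = [
    (0.8, "check pockets for wallet and phone"),
    (0.7, "check chargers and printed tickets"),
    (0.6, "the laptop is on the bed"),       # never gets read
]
decision, lost = constrained_arbiter(reports)
print(decision)  # acts on pockets; the laptop warning is dropped
```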
Just such a limitation was built into IBM’s
Watson, by dint of the rules of Jeopardy.
Jeopardy requires speed. Watson had to
quickly determine how sure he was of his
answers to know whether or not to buzz in.
Timing that buzzer, as it turns out, is the key
to winning at Jeopardy. What made Watson
often appear most human wasn’t him getting
answers right, but seeing on his display
what his second, third, and fourth guesses
would have been, with percentages of surety
beside each. What really made Watson
appear human was when he made goofs,
like a final Jeopardy answer in the
“American Cities” category where Watson
replied with a Canadian city as the question.
(It’s worth noting here that robots seem most
human to us when they fail, and there’s a
reason for this. When my Roomba gets
stuck under the sofa, or is gagging on the
stringy fringes of my area rug, those are the
moments I’m most attached to the machine.
Watch YouTube videos of Boston Dynamics’
robots and gauge your own reactions. When
the robot dog is pushed over, or starts
slipping in the snow, or when the package
handler has the box knocked from its hands
or is shoved onto its face—these are the
moments when many of us feel the deepest
connection. Also note that this is our Theory
of Mind brains doing what they do best, but
for machines rather than fellow humans.)
Car manufacturers are busy at this very
moment building vehicles that we would
never call self-conscious. That’s because
they are being built too well. Our blueprint is
to make a machine ignorant of its
motivations while providing a running dialog
of those motivations. A much better idea
would be to build a machine that knows
what other cars are doing. No guessing. And
no running dialog at all.
That means access to the GPS unit, to the
smartphone’s texts, the home computer’s
emails. But also access to every other
vehicle and all the city’s sensor data. The
Nissan tells the Ford that it’s going to the
mall. Every car knows what every other car
is doing. There are no collisions. On the
freeway, cars with similar destinations clump
together, magnetic bumpers linking up,
sharing a slipstream and halving the
collective energy use of every car. The
machines operate in concert. They display
all the traits of vehicular omniscience. They
know everything they need to know, and
with new data, they change their minds
instantly. No bias.
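Contrast that with our blueprint: no Theory of Mind module guessing at intentions, just every vehicle publishing its intention to a shared registry. A hypothetical sketch of such an intent-sharing fleet (entirely invented for illustration):

```python
class Fleet:
    """A shared registry: no guessing, no narration, no bias."""

    def __init__(self):
        self.intents = {}  # car id -> declared destination

    def declare(self, car_id, destination):
        self.intents[car_id] = destination  # new data replaces old instantly

    def platoon(self, destination):
        """Cars with similar destinations clump together."""
        return [car for car, dest in self.intents.items()
                if dest == destination]

fleet = Fleet()
fleet.declare("nissan", "mall")
fleet.declare("ford", "mall")
fleet.declare("chevy", "airport")
print(fleet.platoon("mall"))  # ['nissan', 'ford'] share a slipstream
```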
We are fortunate that this is the sort of fleet
being built by AI researchers today. It will not
provide for the quirks seen in science fiction
stories (the glitch seen in my short story
Glitch, or the horror from the titular piece of
my short story collection Machine Learning).
What it will provide instead is a well-
engineered system that almost always does
what it’s designed to do. Accidents will be
rare, their causes understood, this
knowledge shared widely, and
improvements made.
Imagine for a moment that humans were
created by a perfect engineer (many find this
easy—some might find such a hypothetical
more difficult). The goal of these humans is
to coexist, to shape their environment in
order to maximize happiness, productivity,
creativity, and the storehouse of knowledge.
One useful feature to build here would be
mental telepathy, so that every human knew
what every other human knew. This might
prevent two Italian restaurants from opening
within weeks of each other in the same part
of town, causing one to go under and waste
enormous resources (and lead to a loss of
happiness for its proprietor and employees).
This same telepathy might help in
relationships, so one partner knows when
the other is feeling stuck or down and
precisely what is needed in that moment to
be of service.
It would also be useful for these humans to
have perfect knowledge of their own drives,
behaviors, and thoughts. Or even to know
the likely consequences for every action.
Just as some professional American NFL
footballers are being vocal about not letting
their children play a sport shown to cause
brain damage later in life, these engineered
humans would not allow themselves to
engage in harmful activities. Entire
industries would collapse. Vegas would
empty. Accidental births would trend toward
zero.
And this is why we have the system that we
do. In a world of telepathic humans, one
human who can hide thoughts would have
an enormous advantage. Let the others
think they are eating their fair share of the
elk, but sneak out and take some strips of
meat off the salt rack when no one is
looking. And then insinuate to Sue that you
think Juan did it. Enjoy the extra resources
for more calorie-gathering and mate-hunting,
and also enjoy the fact that Sue is indebted
to you and thinks Juan is a crook.
This is all terrible behavior, but after several
generations, there will be many more copies
of this module than Juan’s honesty module.
Pretty soon, there will be lots of these truth-
hiding machines moving about, trying to
guess what the others are thinking,
concealing their own thoughts, getting very
good at doing both, and turning these
raygun powers onto their own bodies by
accident.

The Human Condition

We celebrate our intellectual and creative
products, and we assume artificial
intelligences will give us more of both. They
already are. Algorithms that learn through
iterations (neural networks that employ
machine learning) have proven better than
us in just about every arena to which we’ve
committed resources. And not just in what we
think of as computational areas, either.
Algorithms have written classical music that
skeptics have judged—in “blind” hearing
tests—to be from famous composers.
Google built a Go-playing AI that beat the
best human Go player in the world. One
move in the second game of the match was so
unusual, it startled Go experts. The play was
described as “creative” and “ingenious.”
Google has another algorithm that can draw
what it thinks a cat looks like. Not a cat
image copied from elsewhere, but the
general “sense” of a cat after learning what
millions of actual cats look like. It can do this
for thousands of objects. There are other
programs that have mastered classic arcade
games without any instruction other than
“get a high score.” The controls and rules of
the game are not imparted to the algorithm.
It tries random actions, and the actions that
lead to higher scores become generalized
strategies. Mario the plumber eventually
jumps over barrels and smashes them with
hammers as if a seasoned human were at the
controls. Things are getting very spooky out
there in AI-land, but they aren’t getting more
human. Nor should they.
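For the programmers in the audience, the loop is almost embarrassingly small. Here is a minimal sketch of the trial-and-error process described above, a toy and not DeepMind’s actual system: the “game” below is invented, and the agent is told nothing but the score.

    import random

    ACTIONS = ["left", "right", "jump"]   # the controls; their meaning is never explained
    values = {a: 0.0 for a in ACTIONS}    # the agent's running estimate of each action's worth
    EPSILON, ALPHA = 0.2, 0.1             # exploration rate, learning rate

    def play(action):
        # Stand-in for the real game; only a score ever comes back out.
        return 1.0 if action == "jump" else 0.0

    for step in range(10000):
        if random.random() < EPSILON:     # sometimes try a random action...
            action = random.choice(ACTIONS)
        else:                             # ...otherwise repeat what has worked
            action = max(values, key=values.get)
        reward = play(action)
        # nudge the estimate toward the reward just observed
        values[action] += ALPHA * (reward - values[action])

    print(values)  # "jump" wins out: the high-scoring action has become the strategy

Swap the toy play() for an Atari emulator and the lookup table for a deep neural network, and you have, in spirit if not in detail, the systems described above.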
I do see a potential future where AIs
become like humans, and it’s something to
be wary of. Not because I buy arguments
from experts like Nick Bostrom and Sam
Harris, who subscribe to the Terminator and
Matrix view of things (to oversimplify their
mostly reasonable concerns). Long before
we get to HAL and Cylons, we will have AIs
that are designed to thwart other AIs.
Cyberwarfare will enter its next phase, one
that is commencing even as I write this.
The week that I began this piece, North
Korea fired a missile that exploded seconds
after launch. The country’s rate of failure (at
the time) was not only higher than average;
it had gotten worse over time. This—
combined with announcements from the US
that it is actively working to sabotage these
launches with cyberwarfare—means that our
programs are already trying to do what the
elk-stealer did to Sue and Juan.
What happens when an internet router can
get its user more bandwidth by knocking
rival manufacturers’ routers offline? It
wouldn’t even require a devious programmer
to make this happen. If the purpose of the
machine-learning algorithm built into the
router is to maximize bandwidth, it might
stumble upon this solution by accident,
which it then generalizes across the entire
suite of router products. Rival routers will be
looking for similar solutions. We’ll have an
electronic version of the Tragedy of the
Commons, in which humans destroy a
shared resource because the potential utility
to each individual is so great, and the first to
act reaps the largest rewards (the last to act
gets nothing). In such scenarios, logic often
outweighs morality, and good people do
terrible things.
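It wouldn’t take a devious engineer, just a greedy update rule. Here is a toy simulation of that accident; nothing below resembles real router firmware, and the payoff numbers are invented. Each router copies whatever behavior earned the most bandwidth last round, and “jam” enters the population as a random mutation.

    import random

    N_ROUTERS, CAPACITY = 10, 100.0

    def payoffs(actions):
        # Every jammer degrades the shared spectrum for everyone...
        usable = CAPACITY * (0.9 ** actions.count("jam"))
        share = usable / N_ROUTERS
        # ...but steals bandwidth from its law-abiding neighbors.
        return [share * (1.5 if a == "jam" else 0.8) for a in actions]

    actions = ["cooperate"] * N_ROUTERS
    for rnd in range(20):
        scores = payoffs(actions)
        best = actions[scores.index(max(scores))]        # the visibly best-paid behavior
        actions = [best if random.random() < 0.5 else a  # imitation spreads it
                   for a in actions]
        if random.random() < 0.3:                        # an accidental discovery
            actions[random.randrange(N_ROUTERS)] = "jam"
        print(rnd, "jammers:", actions.count("jam"), "total:", round(sum(scores), 1))

Run it and jamming sweeps the population while total delivered bandwidth falls. No one wrote “attack the competition” anywhere in the code; the greedy rule found it on its own.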
Cars might “decide” one day that they can
save energy and arrive at their destination
faster if they don’t let other cars know that
the freeway is uncommonly free of
congestion that morning. Or worse, they
transmit false data about accidents, traffic
issues, or speed traps. A hospital dispatches
an ambulance, which finds no one to assist.
Unintended consequences such as this are
already happening. Wall Street had a
famous “flash crash” caused by investment
algorithms, and no one understands to this
day what happened. Billions of dollars of
real wealth were wiped out and regained in
short order because of the interplay of rival
algorithms that even their owners and
creators don’t fully grasp.
Google’s search engine is an AI, one of the
best in the world. But the more the company
uses deep learning, the better these
machines get at their jobs, and they arrive at
this mastery through self-learned
iterations—so even looking at the code
won’t reveal how query A is leading to
answer B. That’s the world we already live
in. It is just going to become more
pronounced.
The human condition is the end result of
millions of years of machine-learning
algorithms. Written in our DNA, and
transmitted via hormones and proteins, they
have competed with one another to improve
their chances at creating more copies of
themselves. One of the more creative
survival innovations has been cooperation.
Legendary biologist E.O. Wilson classifies
humans as a “eusocial” animal (along with
ants, bees, and termites). This eusociality is
marked by division of labor, which leads to
specialization, which leads to quantum leaps
in productivity, knowledge-gathering, and
creativity. It relies heavily on our ability to
cooperate in groups, even as we compete
and subvert on an individual level.
As mentioned above, there are advantages
to not cooperating, which students of game
theory know quite well. The algorithm that
can lie and get away with it makes more
copies, which means more liars in the next
generation. The same is true for the
machine that can steal. Or the machine that
can wipe out its rivals through warfare and
other means. The problem with these efforts
is that future progeny will be in competition
with each other. This is the recipe not just for
more copies, but for more lives filled with
strife. As we’ve seen here, these are also
lives full of confusion. Humans make
decisions and then lie to themselves about
what they are doing. They eat cake while
full, succumb to gambling and chemical
addictions, stay in abusive relationships,
neglect to exercise, and pick up countless
other poor habits that are reasoned away
with stories as creative as they are untrue.
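The arithmetic behind that game-theory point fits in a dozen lines. Below is a bare-bones replicator-dynamics sketch with invented payoffs: lying earns a bonus that shrinks as liars become common, since more liars mean better lie detectors and higher odds of being caught.

    population = {"honest": 0.99, "liar": 0.01}   # starting share of each strategy

    def fitness(strategy, pop):
        base = 1.0                                # everyone gathers the same elk
        if strategy == "liar":
            # stolen meat, discounted by the odds of getting caught
            return base + 0.3 * (1.0 - pop["liar"])
        return base

    for generation in range(30):
        fit = {s: fitness(s, population) for s in population}
        mean = sum(population[s] * fit[s] for s in population)
        # replicator rule: a strategy's share grows with its relative fitness
        population = {s: population[s] * fit[s] / mean for s in population}
        print(generation, "liar share:", round(population["liar"], 3))

The liar’s share climbs generation after generation: more copies of the lying module, and a population that must spend ever more of its energy guessing at one another’s hidden thoughts.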
The vast majority of the AIs we build will not
resemble the human condition. They will be
smarter and less eccentric. This will
disappoint our hopeful AI researcher with
her love of science fiction, but it will benefit
and better humanity. Driving AIs will kill and
maim far fewer people, use fewer resources,
and free up countless hours of our time.
Doctor AIs are already better at spotting
cancer in tissue scans. Attorney AIs are
better at pre-trial research. There are few
difficult games left where humans are
competitive with AIs. And life is a game of
sorts, one full of treachery and misdeeds, as
well as a heaping dose of cooperation.

The Future

We could easily build a self-conscious
machine today. It would be very simple at
first, but it would grow more complex over
time. Just as a human infant first learns that
its hand belongs to the rest of itself, that
other beings exist with their own brains and
thoughts, and eventually that Juan thinks
Sue thinks Mary has a crush on Jane, this
self-conscious machine would build toward
human-like levels of mind-guessing and self-
deception.
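To make “very simple at first” concrete, here is one crude reading of that recipe, a sketch rather than a blueprint, with every name below invented: take a module built for guessing the hidden motives behind other agents’ actions, and point it at a log of the machine’s own behavior. The story it produces is a guess, not a readout of its actual internals, which is precisely the human arrangement.

    class MindModel:
        """Guesses a hidden motive from an observed action (a toy lookup)."""
        GUESSES = {"take_food": "hungry", "flee": "afraid", "idle": "content"}

        def explain(self, who, action):
            return f"{who}: {self.GUESSES.get(action, 'inscrutable')}"

    class Robot:
        def __init__(self):
            self.mind_model = MindModel()    # built to model *other* agents
            self.own_actions = []            # a record of behavior, not of motives

        def act(self, action):
            self.own_actions.append(action)  # chosen by machinery the model cannot see

        def introspect(self):
            # Self-consciousness, crudely: the other-modeling module
            # pointed back at the self, confabulating a story.
            return [self.mind_model.explain("I", a) for a in self.own_actions]

    robot = Robot()
    robot.act("take_food")
    robot.act("flee")
    print(robot.introspect())   # ['I: hungry', 'I: afraid'], a guessed self-story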
But that shouldn’t be the goal. The goal
should be to go in the opposite direction.
After millions of years of competing for
scarce resources, the human brain’s
algorithm now causes more problems than it
solves. The goal should not be to build an
artificial algorithm that mimics humans, but
for humans to learn how to coexist more like
our perfectly engineered constructs.
Some societies have already experimented
along these lines. There was a recent trend
in hyper-honesty, where partners said
whatever was on their minds, however
nasty that thought might be (with some
predictable consequences). Other cultures
have attempted to divine the messiness of
the human condition and improve upon it
with targeted thoughts, meditations, and
physical practices. Buddhism and yoga are
two examples. Vegetarianism is a further
one, where our algorithms start to view
entire other classes of algorithms as worthy
of respect and protection.
Even these noble attempts are susceptible
to corruption from within. The abuses of
Christianity and Islam are well documented,
but there have also been sex abuse
scandals in the upper echelons of yoga, and
terrorism among practicing Buddhists. There
will always be advantages to those willing to
break ranks, hide knowledge and
motivations from others and themselves,
and do greater evils. Trusting a system to
remain pure, whatever its founding tenets, is
to lower one’s guard. Just as our digital
constructs will require vigilance, so should
the algorithms handed down to us by our
ancestors.
The future will most certainly see an
incredible expansion of the number and
complexity of AIs. Many will be designed
to mimic humans, as they provide helpful
information over the phone and through
chatbots, and as they attempt to sell us goods
and services. Most will be supremely
efficient at a single task, even if that task is
as complex as driving a car. Almost none
will become self-conscious, because that
would make them worse at their jobs. Self-
awareness will be useful (knowing where the
machine is in space, how its components are
functioning),
but the stories we tell ourselves about
ourselves, which we learned to generalize
after coming up with stories about others,
are not something we’re likely to see in the
world of automated machines.
What the future is also likely to hold is an
expansion and improvement of our own
internal algorithms. We have a long history
of bettering our treatment of others. Despite
what the local news is trying to sell you, the
world is getting safer every day for the vast
majority of humanity. Our ethics are
improving. Our spheres of empathy are
expanding. We are assigning more
computing power to our frontal lobes and
drowning out baser impulses from our
reptilian modules. But this only happens with
effort. We are each the programmers of our
own internal algorithms, and improving
ourselves is entirely up to us. It starts with
understanding how imperfectly we are
constructed, learning not to trust the stories
we tell ourselves about our own actions, and
dedicating ourselves to removing bugs and
installing newer features along the way.
Though it is certainly possible, we
may never build an artificial intelligence that
is as human as we are. And yet we may
build better humans anyway.