
THE NEW YORK REVIEW OF BOOKS

Mind Control & the Internet


JUNE 23, 2011
Sue Halpern

World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet
by Michael Chorost
Free Press, 242 pp., $26.00

The Filter Bubble: What the Internet Is Hiding from You
by Eli Pariser
Penguin, 294 pp., $25.95

You Are Not a Gadget: A Manifesto
by Jaron Lanier
Vintage, 240 pp., $15.00 (paper)

Early this April, when researchers at Washington University in St. Louis reported that a woman
with a host of electrodes temporarily positioned over the speech center of her brain was able to
move a computer cursor on a screen simply by thinking but not pronouncing certain sounds, it
seemed like the Singularity, the long-standing science fiction dream of melding man and machine
to create a better species, might have arrived. At Brown University around the same time,
scientists successfully tested a different kind of brain-computer interface (BCI) called BrainGate,
which allowed a paralyzed woman to move a cursor, again just by thinking. Meanwhile, at USC, a
team of biomedical engineers announced that they had successfully used carbon nanotubes to
build a functioning synapse, the junction at which signals pass from one nerve cell to another,
which marked the first step in their long march to construct a synthetic brain. On the same
campus, Dr. Theodore Berger, who has been on his own path to make a neural prosthetic for more
than three decades, has begun to implant a device into rats that bypasses a damaged
hippocampus in the brain and works in its place.

The hippocampus is crucial to memory formation, and Berger's invention holds the promise of
overcoming problems related to both normal memory loss that comes from aging and pathological
memory loss associated with diseases like Alzheimer's. Similarly, the work being done at Brown
and Washington University suggests the possibility of restoring mobility to those who are
paralyzed and giving voice to those who have been robbed by illness or injury of the ability to
communicate. If this is the Singularity, it looks not just benign but beneficent.
Michael Chorost is a man who has benefited from a brain-computer interface, though the kind of
BCI implanted in his head after he went deaf in 2001, a cochlear implant, was not inserted directly
into his brain, but into each of his inner ears. The result, after a lifetime of first being hard of
hearing and then shut in complete auditory solitude, as he recounted in his memoir, Rebuilt: How
Becoming Part Computer Made Me More Human (2005), was dramatic and life-changing. As his
new, oddly jejune book, World Wide Mind: The Coming Integration of Humanity, Machines, and
the Internet, makes clear, he is now a cheerleader for the rest of us getting kitted out with our
own, truly personal, in-brain computers. In Chorost's ideal world, which he lays out with the
unequivocal zeal of a convert, we will all be connected directly to the Internet via a neural implant,
so that the Internet would become "seamlessly part of us, as natural and simple to use as our own
hands."
The debate between repair and enhancement is long-standing in medicine (and sports, and
education, and genetics), though it gets louder and more complicated as technology advances.
Typically, repair, like what those Brown, USC, and Washington University research teams are
aiming to do for people who have suffered stroke, spinal cord and other injuries,
neurodegeneration, dementia, or mental illness, is upheld as something good and necessary and
worthy. Enhancement, on the other handas with performance drugs and stem cell line
manipulationis either reviled as a threat to our integrity and meaning as humans or conflated
with repair until the distinction becomes meaningless.1
Chorost bounces over this debate altogether. While the computer in his head was put there to fix
a deficit, the fact that it is there at all is what seems to convince him that the rest of us should
become cyborgs. His assumption (it would be too generous to call it an argument) is that if that
worked for him, this will work for us. "My two implants make me irreversibly computational, a
living example of the integration of humans and computers," he writes. "So for me the thought of
implanting something like a BlackBerry in my head is not so strange. It would not be so strange for
a lot of people, I think."

More than a quarter-century ago, a science writer named David Ritchie published a book that I've
kept on my bookshelf as a reminder of what the post-1984 world was supposed to bring. Called
The Binary Brain, it extolled the synthesis of human and artificial intelligence via something he
called a "biochip." "The possibilities are marvelous to contemplate," he wrote.
You could plug into a computers memory banks almost as easily as you put on your shoes.
Suddenly, your mind would be full of all the information stored in the computer. You could
instantly make yourself an expert in anything from Spanish literature to particle physics. With
biochips to hold the data, all the information in the MIT and Harvard libraries might be stuffed into
a volume no greater than that of a sandwich. All of Shakespeare in a BB-sized module. You may
see devices like this before this century ends.
"Remember," he says gravely, "we are talking here about a technology that is just around the
corner, if not here already." Biochips would lead to the development of all manner of man-machine
combinations.
Twenty-six years later, in the second decade of the new millennium, here is Chorost saying almost
the same thing, and for the same reason: our brains are too limited to sufficiently apprehend the
world.2 "Some human attributes like IQ appear to have risen in the twentieth century," he writes,
"but the rate of increase is much slower than technology's. There is no Moore's Law for human
beings." (Moore's Law is the much-invoked thesis, now elevated to metaphor, that says that the
number of components that can be placed on an integrated circuit doubles every two years.)
Leaving aside the flawed equivalences (that information is knowledge and facts are intelligence),
Chorost's transmog dream is rooted in a naive, and common, misperception of the Internet
search engine, particularly Google's, which is how most Internet users navigate through the
fourteen billion pages of the World Wide Web.
Most of us, I think it's safe to say, do not give much thought to the algorithm that produces the
results of a Google search. Ask a question, get an answer: it's a straightforward transaction. It
seems not much different from consulting an encyclopedia, or a library card catalog, or even an
index in a book. Books, those other repositories of facts, information, and ideas, are the template
by which we understand the Web, which is like a random, messy, ever-expanding volume of every
big and little thing. A search is our way into and through the mess, and when it's made by using
Google, it's relying on the Google algorithm, a patented and closely guarded piece of intellectual
property that the company calls PageRank, composed of "500 million variables and 2 billion
terms."
Those large numbers are comforting. They suggest an impermeable defense against bias, a
scientific objectivity that allows the right response to the query to bubble up from the stew of so
much stuff. To an extent it's a self-perpetuating system, since it uses popularity (the number of
links) as a proxy for importance, so that the more a particular link is clicked on, the higher its
PageRank, and the more likely it is to appear near the top of the search results. (This is why
companies have not necessarily minded bad reviews of their products.) Chorost likens this to
Hebbian learning, the notion that "neurons that fire together, wire together," since

a highly ranked page will garner more page views, thus strengthening its ranking. [In this
way] pages that link together think together. If many people visit a page over and over again, its
PageRank will become so high that it effectively becomes stored in the collective
human/electronic long-term memory.
Even if this turns out to be true, the process is anything but unbiased.
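The mechanism is easy to make concrete. What follows is a minimal sketch, in Python, of the power-iteration idea that underlies PageRank; the tiny link graph, the damping factor, and the iteration count are illustrative assumptions, not Google's actual code or data.

# Toy illustration of the PageRank idea: a page's score is fed by the
# scores of the pages that link to it, so popularity compounds.
# The link graph, damping factor, and iteration count are invented.

links = {
    "A": ["B", "C"],   # page A links to pages B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],        # D links to C, but nothing links to D
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:               # skip dangling pages with no outlinks
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))   # heavily linked-to pages float to the top

Even in this toy form, the self-reinforcement Halpern describes is visible: the page with the most inbound links keeps accumulating score with each pass.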
A Google search, which Chorost would have us doing in our own technologically modified
heads, curates the Internet. The algorithm is, in essence, an editor, pulling up what it deems
important, based on someone else's understanding of what is important. This has spawned a
whole industry of search engine optimization (SEO) consultants who game the system by
reconfiguring a website's code, content, and keywords to move it up in the rankings. Companies
have also been known to pay for links in order to push themselves higher up in the rankings, a
practice that Google opposes and sometimes cracks down on. Even so, results rise to the top of a
search query because an invisible hand is shepherding them there.
It's not just the large number of search variables, or the intervention of marketers, that shapes the
information we're shown by bringing certain pages to our attention while others fall far enough
down in the rankings to be kept out of view. As Eli Pariser documents in his chilling book The Filter
Bubble: What the Internet Is Hiding from You, since December 2009, Google has aimed to contour
every search to fit the profile of the person making the query. (This contouring applies to all users
of Google, though it takes effect only after the user has performed several searches, so that the
results can be tailored to the user's tastes.)
The search process, in other words, has become personalized, which is to say that instead of
being universal, it is idiosyncratic and oddly peremptory. "Most of us assume that when we google
a term, we all see the same results: the ones that the company's famous PageRank algorithm
suggests are the most authoritative, based on other pages' links," Pariser observes. "With
personalized search, now you get the result that Google's algorithm suggests is best for you in
particular, and someone else may see something entirely different. In other words, there is no
standard Google anymore." It's as if we looked up the same topic in an encyclopedia and each
found different entries, but of course we would not assume they were different, since we'd be
consulting what we thought to be a standard reference.
Among the many insidious consequences of this individualization is that by tailoring the
information you receive to the algorithm's perception of who you are, a perception that it
constructs out of fifty-seven variables, Google directs you to material that is most likely to
reinforce your own worldview, ideology, and assumptions. Pariser suggests, for example, that a
search for proof about climate change will turn up different results for an environmental activist
than it would for an oil company executive and, one assumes, a different result for a person whom
the algorithm understands to be a Democrat than for one it supposes to be a Republican. (One
need not declare a party affiliation per se; the algorithm will prise this out.) In this way, the
Internet, which isn't the press, but often functions like the press by disseminating news and
information, begins to cut us off from dissenting opinion and conflicting points of view, all the
while seeming to be neutral and objective and unencumbered by the kind of bias inherent in, and
embraced by, say, The Weekly Standard or The Nation.
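Pariser's argument can be made mechanical with a small, entirely hypothetical sketch: the same pool of results, reordered against a crude interest profile, produces a different front page for each user. None of this corresponds to Google's actual signals, which are not public; it only illustrates why two people typing the same query can see different worlds.

# A deliberately crude sketch of personalized re-ranking: the same base
# results are reordered by how well each one matches a user profile.
# The results, tags, and profiles are invented for illustration only.

base_results = [
    {"title": "IPCC summary of climate evidence", "tags": {"science", "policy"}},
    {"title": "Op-ed: the case against climate regulation", "tags": {"opinion", "industry"}},
    {"title": "How to shrink your carbon footprint", "tags": {"lifestyle", "science"}},
]

def personalized_rank(results, profile):
    """Order results by overlap between their tags and the user's inferred interests."""
    def score(result):
        return len(result["tags"] & profile["interests"])
    return sorted(results, key=score, reverse=True)

activist = {"interests": {"science", "lifestyle"}}
executive = {"interests": {"industry", "opinion"}}

for profile in (activist, executive):
    ordering = [r["title"] for r in personalized_rank(base_results, profile)]
    print(sorted(profile["interests"]), "->", ordering)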
Why this matters is captured in a study in the spring issue of Sociological Quarterly, which echoes
Pariser's concern that when ideology drives the dissemination of information, knowledge is
compromised. The study, which examined attitudes toward global warming among Republicans
and Democrats in the years between 2001 and 2010, found that in those nine years, as the
scientific consensus on climate change coalesced and became nearly universal, the percentage of
Republicans who said that the planet was beginning to warm dropped precipitously, from 49
percent to 29 percent. For Democrats, the percentage went up, from 60 percent to 70 percent. It
was as if the groups were getting different messages about the science, and most likely they were.
The consequence, as the study's authors point out, was to stymie any real debate on public policy.
This is Pariser's point exactly, and his concern: that by having our own ideas bounce back at us, we
inadvertently indoctrinate ourselves with our own ideas. "Democracy requires citizens to see
things from one another's point of view, but instead we're more and more enclosed in our own
bubbles," he writes. "Democracy requires a reliance on shared facts; instead we're being offered
parallel but separate universes."
It's not difficult to see where this could lead: how easily anything with an agenda (a lobbying
group, a political party, a corporation, a government) could flood the echo chamber with
information central to its cause. (This, in fact, is what has happened, on the right, with climate
change.) Who would know? Certainly not Michael Chorost, whose blind allegiance to Google
(which he believes is the central part of the nascent forebrain, hippocampus, and long-term
declarative memory store of the coming World Wide Mind) is matched by his stunning political
naiveté. "A government that used the World Wide Mind for overt control would have to be more
ominously totalitarian than any government in existence today (except perhaps North Korea)," he
writes. "The push-pull dynamic of evolution tends to weed out totalitarian societies because they
are, in the long run, inefficient and wasteful." Contrast this to the words of the man who invented
the World Wide Web, Sir Timothy Berners-Lee, writing not long ago in Scientific American:
The Web as we know it is being threatened. Some of its most successful inhabitants have begun
to chip away at its principles. Governments, totalitarian and democratic alike, are monitoring
people's online habits, endangering important human rights.
One of the most significant changes in the Internet since the release in 1993 of the first graphical
browser, Mosaic, which was built on the basis of Berners-Lee's work, has been the quest to
monetize it. In its inaugural days, the Web was a strange, eclectic collection of personal
homepages, a kind of digital wall art that bypassed traditional gatekeepers, did not rely on
mainstream media companies or corporate cash, and was not driven by commercial interests. The
computer scientist and musician Jaron Lanier was there at the creation, and in his fierce,
coruscating manifesto, You Are Not a Gadget,3 remembers it like this:

The rise of the web was a rare instance when we learned new, positive information about human
potential. Who would have guessed (at least at first) that millions of people would put so much
effort into a project without the presence of advertising, commercial motive, threat of
punishment, charismatic figures, identity politics, exploitation of the fear of death, or any of the
other classic motivators of mankind. In vast numbers, people did something cooperatively, solely
because it was a good idea, and it was beautiful.
But then commerce moved in, almost by accident, when Larry Page and Sergey Brin, the duo who
started Google, reluctantly paired small ads with their masterful search engine as a way to fund it.
It was not their intent, at first, to create the largest global advertising platform in the history of the
world, or to move marketing strategy away from pushing products toward consumers to pulling
individual consumers toward specific products and brands. But that is what happened. Write the
word "blender" in an e-mail, and the next set of ads you're likely to see will be for Waring and
Oster.4 Search for information on bipolar disease, and drug ads will pop up when you're reading
baseball scores. Use Google Translate to read an abstract of a journal article and an ad for Spanish
translation software will appear when you are using an online English dictionary. (All this activity
leads to a question that will not be rhetorical if Chorost's World Wide Mind comes to fruition: Will
our thoughts have corporate sponsors, too?)
Targeted ads (even when they are generated by what may have appeared to have been a private
communication) may seem harmless enough; after all, if there is going to be advertising, isn't it
better if it is for products and services that might be useful? But to pull you into a transaction,
companies believe they need to know not only your current interests, but what you have liked
before, how old you are, your gender, where you live, how much education you have, and on and
on. There are something like five hundred companies that are able to track every move you make
on the Internet, mining the raw material of the Web and selling it to marketers. ("Stop calling
yourself a user," Lanier warns. "You are being used.") That you are overweight, have diabetes,
have missed a car payment or two, read historical novels, support Republicans, use a cordless
power drill, shop at Costco, and spend a lot of time on airplanes is not only known to people other
than yourself, it is of great monetary value to them as well. So, too, where you are and where
you've been, as we recently learned when it was revealed that both Apple and Google have been
tracking mobile phone and tablet users and storing that information as well.
Even reading devices like Amazon's Kindle pay attention to what users are doing: highlight a
passage in a Kindle book and the passage is sent back to Amazon. Clearly, the potential for privacy
and other civil liberty abuses here is vast. While the FBI, for instance, needs a warrant to search
your computer, Pariser writes that if you use Yahoo or Gmail or Hotmail for your e-mail, "you lose
your constitutional protections immediately," according to a lawyer for the Electronic Frontier
Foundation. At least one arrest has been made by law enforcement officers using Apple location
data. And this past April, the Supreme Court heard arguments in Sorrell v. IMS Health, in which
IMS Health, in challenging Vermont's statutory restriction on the sale of patients' prescription
information to data-mining companies, argued that harvesting and selling medical records data is
a First Amendment right. Clearly, data tracking and mining give new meaning to the words
"computer monitor."
In the commercial sphere, marketers are also looking beyond facts and bits of information, in
order to determine not just what you have bought, but what kinds of pitches appealed to you
when you did. Once they have compiled your "persuasion profile," they will refine those targeted
ads even further. And if marketing companies can do this, why not political candidates, the
government, or companies that want to sway public opinion? "There are undoubtedly times and
places and styles of argument that make us more susceptible to believe what we're told," Pariser
observes.
One thing that we, the denizens of the Internet, have come to accept without much thought is
that commerce is a really cool aspect of the Web's shift into social networking. The very popular
Foursquare, Loopt, and Groupon sites, for example, make shopping and branding the basis of the
social encounter. People on Foursquare vie to become the "mayor" of bakeries and clothing stores
by visiting them more than anyone else. They proudly display badges that they've earned by
patronizing certain businesses, as if they were trophies celebrating excellence. Facebook users
who click on the "like" button for a product may trigger the appearance of an ad for that product
on the pages of their friends. Companies like Twitalyzer and Klout analyze data from Twitter,
Facebook, and LinkedIn to determine who has the most influence online (these can be celebrities
or ordinary people with significant followings) and sell that information to businesses that then
entice the influencers to pitch their products or evangelize their brand. This, according to The
Wall Street Journal, has ignited a race among social-media junkies who, eager for perks and
bragging rights, are working hard to game the system and boost their scores.5 As Lanier points
out, "The only hope for social networking sites from a business point of view is for a magic formula
to appear in which some method of violating privacy and dignity becomes acceptable." That
magic, it seems, is already in play.
The paradox of personalization and the self-expression promoted by the Internet through Twitter,
Facebook, and even Chatroulette is that it simultaneously diminishes the value of personhood and
individuality. Read the comments that accompany many blog posts and articles, and it is
overwhelmingly evident that violating dignity, someone else's and, therefore, one's own, is a
cheap and widely circulated currency. This is not only true for subjects that might ordinarily incite
partisanship and passion, like sports or politics, but for pretty much anything.6
The point of ad hominem attacks is to take a swipe at someone's character, to undermine their
integrity. Chorost suggests that the reason the Internet as we now know it does not foster the kind
of empathy he sees coming in the Web of the future, when we will feel people's inner lives
electronically, is that it is not yet an integral part of our bodies, but Lanier's explanation is
more convincing. The hive mind created through our electronic connections necessarily obviates
the individual; indeed, that's what makes it a collective consciousness. Anonymity, which
flourishes where there is no individual accountability, is one of its key features, and behind it,
meanness, antipathy, and cruelty have a tendency to rush right in. As the sociologist Sherry Turkle
observes:
Networked, we are together, but so lessened are our expectations of each other that we can feel
utterly alone. And there is the risk that we come to see others as objects to be accessed, and only
for the parts that we find useful, comforting, or amusing.7
Here is Chorost describing the wonders of a neural-networked friendship:
Having brainlike computers would greatly simplify the process of extracting information from one
brain and sending it to another. Suppose you have such a computer, and you're connected with
another person via the World Wide Mind. You see a cat on the sidewalk in front of you. Your
rig sees activity in a large percentage of the neurons constituting your brain's invariant
representation of a cat. To let your friend know you're seeing a cat, it sends three letters of
information, CAT, to the other person's implanted rig. That person's rig activates her brain's
invariant representation of a cat, and she sees it. Or rather, to be more accurate, she sees a
memory of a cat that is taken from her own neural circuitry.
Now, many important details would be missing. The cat's breed, its color, its posture, what it's
doing, and so forth. But it would convey a key piece of information: your friend would know that
you are seeing a cat.
Of course, if you called or texted or e-mailed your friend, she would also know that you were
seeing a cat, and she'd know what it looked like, and what it was doing, and that it was a
significant enough event in your life that you were telling her about it. Do we want to know every
time someone we know sees a cat?
It's easy to make fun of this, just as it is easy to dismiss the Singularity as a silly science fiction
fantasy, but that would be even sillier. Of course, one of the groups of people most drawn to
science fiction is the engineers who write code and build robots and have, in less than a
generation, changed the way we do research and medicine and read books and communicate with
each other and pay the bills and on and on. (In a 2004 interview, Larry Page envisioned a future
where one's brain is augmented by Google, so that when you think of something, your cell
phone whispers the answer into your ear.) As Lanier points out:
We [the engineers] make up extensions to your being, like remote eyes and ears (webcams and
mobile phones) and expanded memory (the world of details you can search for online). These
become the structures by which you connect to the world and other people. We tinker with your
philosophy by direct manipulation of your cognitive experience. It takes only a tiny group of
engineers to create technology that can shape the entire future of human experience with
incredible speed.
Moore's Law is predicted to hit a wall around 2015, when it will be impossible to squeeze more
circuitry onto a silicon chip without it overheating. By then, though, computers may have switched
over to magnetic random access memory, chips that operate with subatomic circuitry. One of the
main creators of MRAM, Stuart Wolf, developed it at DARPA, the agency that invented ARPANET,
the precursor to the Internet as we know it. A few years ago, in an interview with Fortune, Wolf,
envisioning the future of computing, imagined that before too long we'll be wearing a headband
that feeds directly into the brain and lets us, among other things, talk without speaking, see
around corners, and drive by thinking.8
Another branch of DARPA is pouring millions of dollars into the development of a battlefield
"thought helmet" that will let soldiers in the field communicate wordlessly by translating brain
waves, which will be read by sensors embedded in the helmet and arrayed around the scalp,
into audible radio messages. (One researcher called it "a radio without a microphone.")9 As early
as 2000, Sony began work on a patented way to beam video games directly into the brain using
ultrasound pulses to modify and create sensory images for an immersive, thoroughly inescapable
gaming experience.10 More recently, computer scientists at the Freie Universität in Berlin got a
jump on Stuart Wolf's vision of a car operated solely by thought. Using commercially available
electroencephalogram (EEG) sensors to first decode the brain wave patterns for "right," "left,"
"brake," and "accelerate," they were then able to connect those sensors to a computer-controlled
vehicle, so that a driver was able to control the car with no problem; there was "only a slight
delay between the envisaged commands and the response of the car," according to one of the
lead researchers.11
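Reduced to an outline, the Berlin demonstration is a loop: classify each window of EEG data as one of a few discrete commands, then forward the command to the vehicle's controller. The Python sketch below is purely illustrative; the classifier and the vehicle interface are stand-ins, not the researchers' software.

# Illustrative outline of a thought-controlled car loop: an EEG classifier
# (a stand-in here) labels each window of brain-wave data as one of four
# commands, and the label is forwarded to a (stand-in) vehicle controller.

import random
import time

COMMANDS = ["left", "right", "accelerate", "brake"]

def classify_eeg(window):
    """Stand-in for the trained EEG classifier; returns one command label."""
    return random.choice(COMMANDS)

def send_to_vehicle(command):
    """Stand-in for the computer-controlled vehicle's interface."""
    print("vehicle received:", command)

def control_loop(n_windows=5, delay_seconds=0.5):
    for _ in range(n_windows):
        window = None                    # a real system would read sensor data here
        command = classify_eeg(window)   # decode the driver's intention
        send_to_vehicle(command)         # act on it
        time.sleep(delay_seconds)        # the slight lag the researchers mention

control_loop()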
Moreover, a group at the University of Southampton in England has developed a BCI, a brain-
computer interface, that enables people to communicate with each other brain to brain, or, as the
developers call it, B2B. Again a kind of EEG cap is used: one person thinks of left (represented by a
zero) or right (represented by a one) and sends one of those digits to a second person, who is also
wired with electrodes connected to a computer that receives the digit. Once it is understood, the
second person flashes the digit back to the sender by way of a light-emitting diode (LED), which is
read by that person's visual cortex. It's not quite the soundless, wordless, almost thoughtless
integration of our thoughts, but it's a fourth or fifth step toward a future that is becoming
increasingly visible.
Jaron Lanier is right: you are not a gadget, yet.
1 For instance, if glasses are reparative, is Lasik surgery too? As William Saletan wrote years ago in
Slate, is it still considered reparative when a famous golfer has surgery on his nearly perfect, but
not quite perfect, eyesight so he can see the ball better? See "The Beam in Your Eye," Slate, April
18, 2005.

2 According to Ritchie:
There was a time not too long ago... when a mathematician could be expected to know, if not
master completely, all the branches of math. Now our mathematical knowledge is expanding so
fast that even an expert in mathematics...could reasonably expect to know only about 10 percent
of it all, at the very most. As long as we depend on the crude input systems of sight and hearing,
and the limited storage capacity of our own natural brains, that 10 percent figure is likely to keep
dropping.
3 See also Zadie Smith's discussion of Lanier's book in these pages, "Generation Why?," November
25, 2010.
4 This is not a rhetorical example, as the following exchange on the website Garden Web from last
February illustrates. A little over an hour after a woman writes about her thirty-five-year-old Oster
blender, she posts on the site again. This time, instead of the subject line being "Re: Blenders," it is
"Advertisements." "And then, like magic," the woman, who calls herself annie1992, writes, "look
what appears at the top of my screen...." It is, not so magically, an ad for a blender.
5 See Jessica E. Vascellaro, "Wannabe Cool Kids Aim to Game the Web's New Social Scorekeepers,"
The Wall Street Journal, February 8, 2011.
6 Note the comments to this New York Times piece where the author, a doctor, mistakenly gave
her dog ibuprofen and was writing about her mistake to warn others: Randi Hutter Epstein, "How
the Doctor Almost Killed Her Dog," New York Times Well blog, January 20, 2011.
7 See Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each
Other (Basic Books, 2011), p. 154.
8 See Peter Schwartz and Rita Koselka, "Quantum Leap," Fortune, August 1, 2006.
9 Mark Thompson, "The Army's Totally Serious Mind-Control Project," Time, September 14, 2008.
10 See Brian Osborne, "Sony May One Day Beam Sensory Data into Your Brain," Geek.com, April 5,
2005.
11 See "Scientists Steer Car with the Power of Thought," fu-berlin.de, February 17, 2011.
