Autopoiesis is a theory holding that living systems have the capacity to produce, sustain, and renew themselves. This capacity requires the system to regulate its own composition and preserve its own boundaries; that is, to maintain a particular form despite the inflow and outflow of materials.
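As a rough illustration of that definition, here is a minimal toy simulation (a sketch of my own, not Maturana and Varela's formalism, with invented numbers): internal components decay and are replaced by new ones the system builds from imported nutrients using its own existing components, so a stable form persists despite continual turnover of material.

```python
import random

# Toy autopoietic loop (illustrative numbers only): components decay each
# tick, and the system uses its *own* surviving components to convert
# imported nutrients into replacements, so the network of production
# regenerates the very parts that produce it.

def step(components, nutrients):
    # each component has a 10% chance of decaying this tick
    survivors = sum(1 for _ in range(components) if random.random() > 0.10)
    # production requires both nutrients and existing components (self-production)
    produced = min(nutrients, survivors // 2)
    return survivors + produced, nutrients - produced

random.seed(0)
components, nutrients = 100, 0
for _ in range(300):
    nutrients += 20                      # matter crossing the boundary
    components, nutrients = step(components, nutrients)

print(components)  # hovers near 200: decay (10% of 200) balances the influx (20)
```

The point of the sketch is the loop itself: the form (a component count near 200) stays put even though every individual component is eventually replaced.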
The idea was introduced by the Chilean biologists Francisco Varela and Humberto Maturana in the early 1970s as an attempt to answer the question "What is life?", or "What distinguishes living beings from non-living things?" The answer, in essence, was that a living system reproduces itself.
This capacity for self-reproduction is what they called autopoiesis. They defined an autopoietic system as one that constantly produces new elements through its own elements: the various elements of the system interact in a way that produces and reproduces the elements of the system itself.
In other words, the system reproduces itself through its own components. Notably, the concept of autopoiesis has also been applied to the fields of cognition, systems theory, and sociology.
Index
1 Characteristics
o 1.1 Self-defined boundaries
o 1.2 They are capable of self-production
o 1.3 They are autonomous
o 1.4 They are functionally closed
o 1.5 They are open to interaction
2 Examples
o 2.1 Cells
o 2.2 Multicellular organisms
o 2.3 Ecosystems
o 2.4 Gaia
3 References
Characteristics
Self-defined boundaries
Autopoietic cellular systems are delimited by a dynamic material created by the system itself. In living cells the boundary material is the plasma membrane, formed from lipid molecules and spanned by transport proteins produced by the cell itself.
They are capable of self-production
Cells, the smallest autopoietic systems, are able to produce more copies of themselves in a controlled way. Autopoiesis thus refers to the self-production, self-maintenance, self-repair, and self-relational aspects of living systems.
From this perspective, all living beings, from bacteria to humans, are autopoietic systems. Indeed, the concept has been stretched even further, to the point where planet Earth, with its organisms, continents, oceans, and seas, is regarded as an autopoietic system.
They are autonomous
Organisms have the capacity to perceive changes in their environment, which they interpret as signals telling the system how to respond. This capacity allows them to ramp their metabolism up or down as environmental conditions warrant.
The above means that for a cell to produce another like itself requires certain processes, such as the synthesis and assembly of the new biomolecules needed to form the structure of the new cell.
However, interaction with the environment is regulated by the autopoietic system itself: it is the system that determines when, what, and through which channels energy or matter is exchanged with the environment.
Usable energy sources flow through all living (autopoietic) systems. Energy may come in the form of light, of carbon-based compounds, or of other chemicals such as hydrogen, hydrogen sulfide, or ammonia.
Examples
Cells
A living cell is the smallest example of an autopoietic system. A cell reproduces its own structural and functional elements, such as nucleic acids, proteins, and lipids: they are not simply imported from outside but are produced by the system itself.
Bacteria, fungal spores, yeasts, and all other unicellular organisms possess this capacity for self-replication, since every cell always comes from a pre-existing cell. The smallest autopoietic system is therefore the fundamental unit of life: the cell.
Multicellular organisms
Multicellular organisms, being made up of many cells, are also an example of an autopoietic system, only a more complex one. Their fundamental characteristics are nonetheless preserved.
Thus a more complex organism, such as a plant or an animal, likewise has the capacity to produce and sustain itself through the exchange of matter and energy with the external environment.
They remain autonomous systems, however, separated from the external medium by membranes or by organs such as the skin; in this way the system maintains homeostasis and self-regulation. In this case, the system is the organism itself.
Ecosystems
Autopoietic entities also exist at higher levels of complexity, as in the case of ecosystems. Coral reefs, grasslands, and ponds are examples of autopoietic systems because they display its basic characteristics.
Gaia
The largest and most complex known autopoietic system has been named Gaia, after the ancient Greek personification of the Earth. The name was coined by the English atmospheric scientist James E. Lovelock, and the system is, with respect to matter, essentially a closed thermodynamic one, since there is little exchange of matter with the extraterrestrial environment.
There is evidence that Gaia's global living system shows properties similar to those of organisms, such as the regulation of the chemical composition of the atmosphere, of the global mean temperature, and of the salinity of the oceans over periods of several million years.
This kind of regulation resembles the homeostatic regulation performed by cells. The Earth can thus be understood as a system grounded in autopoiesis, in which the organization of life is part of a complex, cyclic thermodynamic system that is open to energy.
-I think he's best categorized as a philosophical pessimist. Here are a few others in that vein
that I enjoy:
Zapffe, Ligotti, John Gray, Eugene Thacker
-Jacques Derrida was a French-Algerian Jew, born in 1930; he died in 2004. He was one of the most famous yet controversial thinkers in recent French philosophy, known above all for one movement: deconstruction.
Derrida was born in Algiers (in what is now Algeria), then a colony of France, and it was a rough time to be a Jew there. He had to accept the reality that he was considered to occupy an inferior position relative to the other religious communities around him.
Derrida invented a revolutionary way of going about doing philosophy. One part of this philosophical movement attempts to dismantle (deconstruct) our preconceived ideas. Take, for example, the Bible: why should we, as a civilization, give so much unquestioned loyalty to a book written by many authors thousands of years ago and altered by many more translators?
Derrida believed our thoughts could not be trusted or justified, because we have loyalties to certain things over others. Here are a few examples of this idea. We place emphasis on:
Words over pictures. We would look down on a picture book, thinking it's only for children.
The point of all this was to shift our attention to the lesser of the two counterparts, to see the value that the supposedly inferior thing has, because there is tremendous value there.
He was stressing that, by attending to the neglected counterparts in some of our key expressions, we could start to grow intellectually, coexisting with these conflicting ideas and recognizing that no side is better than the other.
A lesson to take away from this is to not go with the crowd, or follow the most-trodden path. We need to start to see the value of the things that go unrecognized, for they hold unique worth of their own.
Are you thinking of reading some influential classic book? Why not choose instead a book by a censored author, whose work never got to reach as many people as it should have?
The phrase "missing link" arose during the nineteenth-century debate over evolution to denote the lack of fossil finds that would complete the evolutionary lineages of living forms. In modern neo-Darwinian evolutionary theory, however, both the expression and the concept behind it have entirely lost their scientific value. It survives today in parascientific debate (above all in anti-evolutionist criticism) and, as a phrase, in popular culture.
In the nineteenth century, the discovery of a missing link between humans and the so-called "lower" animals was awaited as decisive proof of the theory of evolution; that notion has since been thoroughly superseded, and the theory of evolution has been refined, abandoning the idea of a linear evolutionary "chain" in favor of "bush-like" diagrams in which every species and every population is a transitional form.
5 myths about evolution
Darwin Day is almost here; these are the errors we most often run into. How many have you believed?
On February 12 the birth of Charles Darwin will be celebrated around the world. With conferences, seminars, and science cafés, Darwin Day events, in Italy as elsewhere, will try to bring the public closer to his theory.
And yet wildly popular myths about how evolution works persist to this day, with no need even to drag in creationism, whether the caricatural Young Earth kind or the more refined crypto-creationism of Intelligent Design. Here are some of the most widespread:
The iconic image of evolution is a series of hominids walking in single file, seen in profile. From left to right, the closer we get to our own species, the more upright the posture becomes and the less primitive the features. This image, known as the March of Progress, is so famous that it has become a pop icon which, like Warhol's Marilyn, has been reworked in hundreds of variants. We see it on all sorts of websites, even institutional ones. Whatever the best way of representing a generalized version of the evolutionary process might be (a tree? but growing in which direction? and why?), the March of Progress suggests a continuous procession from ancestor to ancestor, arriving finally at Homo sapiens, fully erect and ready to take possession of the world. Evolution is thus presented as goal-directed and linear, when in fact the history of the human family could not be more intricate, and it is only an effect of contingency that just one species of the genus Homo, our own, is present today. The March of Progress first appeared in a 1965 textbook; not even then was it intended with that meaning, and the accompanying text was clear: some of the hominids in the series were even then not considered ancestors of man. But memes very often do not follow the destiny their creators planned, and this image, with its miseducational and misleading baggage, went viral.
L’uomo non discende dalle scimmie attuali, né viceversa. Scimmie e uomo hanno invece un antenato in
comune. Nel caso degli scimpanzè (Pan troglodytes) e bonobo (Pan paniscus), con i quali condividiamo
buona parte delle sequenze genomiche, l’antenato più recente in comune con noi è vissuto, secondo le
attuali stime, tra i 4 e gli 8 milioni di anni fa. Questo era senz’altro diverso sia dagli scimpanzè sia
dall’uomo, e lo potremmo pure chiamare “scimmia” se non fosse che nel linguaggio comune con questa
parola ci si riferisce implicitamente a una specie attuale. A qualcuno non piacerà, ma la realtà è che
l’uomo non deriva dalla scimmia perché, più correttamente, l’uomo è una scimmia.
Less evolved than whom?
After Darwin, humans once again had a place within nature, rather than being something apart from it. Yet human narcissism is such that living things are still instinctively ranked along a Scala Naturae, or Great Chain of Being, on which humans naturally occupy the highest positions. But since today we are all "evolutionists", we try to dress this anthropocentrism up in the very theory that demolishes it, and we end up saying, for instance, that "mammals are more evolved than reptiles": expressions like this have no scientific meaning. A human is no "more evolved" than an insect or a potato, or even a bacterium: all organisms alive today are the expression of an evolutionary process that began about 4 billion years ago, and we are all deeply related. Do you really think you could establish who is "more evolved" between, say, your brother and your aunt?
Missing links
The fable of the missing links that scientists are supposedly forever hunting has always been a godsend for creationist spin doctors, who can put on a show by loudly demanding that they be produced. Speaking of a "missing link" would make sense only if evolution were a chain, but that conception, as noted above, is a legacy that predates the formulation of the theory. The theory of evolution implies that the fragmentary fossil record can offer us (and repeatedly has) transitional forms, that is, organisms with characteristics intermediate between an older group (for example, fish) and a more recent one (for example, amphibians), but there is no guarantee that the specimens found belong to the direct line of descent. That is what happened with Ida (Darwinius masillae), the fossil of a small primate that lived 47 million years ago, presented with great fanfare (complete with its own website) as the missing link in primate evolution.
It is not true that in evolution the strongest survives, nor even the fittest. The expression "survival of the fittest", coined by Herbert Spencer and adopted by Darwin as well, was meant as a catchphrase summing up the process of adaptation by natural selection, but today it is unfortunately used to define evolution as a whole, which in fact involves several other forces. Moreover, even restricting ourselves to natural selection, mere survival is not enough on its own: what counts is the quantity and quality of the offspring. When biologists speak of fitness, they mean the mathematically calculated probability that a given genotype has of surviving and reproducing in a population.
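As a hedged worked example of that definition (all numbers invented), fitness can be computed as survival probability times expected offspring, then rescaled against the fittest genotype:

```python
# Toy fitness calculation (invented numbers): absolute fitness is the chance
# of surviving to reproductive age times the expected number of offspring;
# relative fitness rescales so the best genotype scores 1.
genotypes = {
    "AA": {"survival": 0.90, "offspring": 2.0},
    "Aa": {"survival": 0.80, "offspring": 2.2},
    "aa": {"survival": 0.50, "offspring": 2.5},
}

absolute = {g: v["survival"] * v["offspring"] for g, v in genotypes.items()}
w_max = max(absolute.values())
relative = {g: round(w / w_max, 3) for g, w in absolute.items()}

print(absolute)  # {'AA': 1.8, 'Aa': 1.76, 'aa': 1.25}
print(relative)  # AA wins even though aa has the most offspring per parent
```

Note how the toy captures the article's point: genotype aa has the highest fecundity, but its poor survival gives it the lowest fitness, so neither survival nor reproduction alone decides the outcome.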
By Meghan O’Gieblyn
December 4, 2019
If the current science of consciousness frequently strikes us as counterintuitive, it's because even the most promising theories often fail to account for how we actually experience our interior lives.
In order to do science, we’ve had to dismiss the mind. This was, in any case, the bargain that was made in the
seventeenth century, when Descartes and Galileo deemed consciousness a subjective phenomenon unfit for
empirical study. If the world was to be reducible to physical causation, then all mental experiences—intention,
agency, purpose, meaning—must be secondary qualities, inexplicable within the framework of materialism. And so
the world was divided in two: mind and matter. This dualistic solution helped to pave the way for the Enlightenment
and the technological and scientific advances of the coming centuries. But an uneasiness has always hovered over
the bargain, a suspicion that the problem was less solved than shelved. At the beginning of the eighteenth century,
Leibniz struggled to accept that perception could be explained through mechanical causes—he proposed that if there
were a machine that could produce thought and feeling, and if it were large enough that a person could walk inside
of it, as he could walk inside a mill, the observer would find nothing but inert gears and levers. “He would find only
pieces working upon one another, but never would he find anything to explain Perception,” he wrote.
Today we tend to regard the mind not as a mill but as a computer, but, otherwise, the problem exists in much the
same way that Leibniz formulated it three hundred years ago. In 1995, David Chalmers, a shaggy-haired Australian
philosopher who has been called a “rock star” of the field, famously dubbed consciousness “the hard problem,” as a
way of distinguishing it from comparatively “easy” problems, such as how the brain integrates information, focusses
attention, and stores memories. Neuroscientists have made significant progress on the easier problems, using fMRIs
and other devices. Engineers, meanwhile, have created impressive simulations of the brain in artificial neural
networks—though the abilities of these machines have only made the difference between intelligence and
consciousness more stark. Artificial intelligence can now beat us in chess and Go; it can predict the onset of cancer
as well as human oncologists and recognize financial fraud more accurately than professional auditors. But, if
intelligence and reason can be performed without subjective awareness, then what is responsible for consciousness?
Answering this question, Chalmers argued, was not simply a matter of locating a process in the brain that is
responsible for producing consciousness or correlated with it. Such a discovery still would fail to explain why such
correlations exist or why they lead to one kind of experience rather than another—or to nothing at all.
One line of reductionist thinking insists that the hard problem is not really so hard—or that it is, perhaps, simply
unnecessary. In his new book, “Rethinking Consciousness: A Scientific Theory of Subjective Experience,” the
neuroscientist and psychologist Michael Graziano writes that consciousness is simply a mental illusion, a simplified
interface that humans evolved as a survival strategy in order to model the processes of the brain. He calls this the
“attention schema.” According to Graziano’s theory, the attention schema is an attribute of the brain that allows us
to monitor mental activity—tracking where our focus is directed and helping us predict where it might be drawn in
the future—much the way that other mental models oversee, for instance, the position of our arms and legs in space.
Because the attention schema streamlines the complex noise of calculations and electrochemical signals of our
brains into a caricature of mental activity, we falsely believe that our minds are amorphous and nonphysical. The
body schema can delude a woman who has lost an arm into thinking that it’s still there, and Graziano argues that the
“mind” is like a phantom limb: “One is the ghost in the body and the other is the ghost in the head.”
I suspect that most people would find this proposition alarming. On the other hand, many of us already, on some
level, distrust the reality of our own minds. The recent vogue for “mindfulness” implies that we are passive
observers of an essentially mechanistic existence—that consciousness can only be summoned fleetingly, through
great effort. Plagued by a midday funk, we are often quicker to attribute it to bad gut flora or having consumed
gluten than to the theatre of beliefs and ideas.
And what, really, are the alternatives for someone who wants to explain consciousness in strictly physical terms?
Another option, perhaps the only other option, is to conclude that mind is one with the material world—that
everything, in other words, is conscious. This may sound like New Age bunk, but a version of this concept, called
integrated information theory, or I.I.T., is widely considered one of the field’s most promising theories in recent
years. One of its pioneers, the neuroscientist Christof Koch, has a new book, “The Feeling of Life Itself: Why
Consciousness Is Widespread but Can’t Be Computed,” in which he argues that consciousness is not unique to
humans but exists throughout the animal kingdom and the insect world, and even at the microphysical level. Koch,
an outspoken vegetarian, has long argued that animals share consciousness with humans; this new book extends
consciousness further down the chain of being. Central to I.I.T. is the notion that consciousness is not an either/or
state but a continuum—some “systems,” in other words, are more conscious than others. Koch proposes that all sorts
of things we have long thought of as inert might have “a tiny glow of experience,” including honeybees, jellyfish,
and cerebral organoids grown from stem cells. Even atoms and quarks may be forms of “enminded matter.”
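I.I.T.'s actual measure, Φ, is far more elaborate than anything shown here, but the underlying intuition that integration comes in degrees can be illustrated with a much simpler cousin: the mutual information between two halves of a toy system. This is a sketch of my own, not Koch's mathematics, and the distributions are invented.

```python
from math import log2

def mutual_information(joint):
    """joint[(a, b)] = p(A=a, B=b) for two binary parts A and B."""
    pa = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
    pb = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Two systems with identical parts but different coupling between them.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
coupled     = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

print(mutual_information(independent))  # 0.0 bits: the whole adds nothing
print(mutual_information(coupled))      # ~0.53 bits: the whole exceeds its parts
```

On a measure like this, "how integrated" a system is falls on a continuum rather than an either/or, which is the aspect of I.I.T. the passage above describes.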
Another term for this is panpsychism—the belief that consciousness is ubiquitous in nature. In the final chapters of
the book, Koch commits himself to this philosophy, claiming his place among a lineage of thinkers—including
Leibniz, William James, and Alfred North Whitehead—who similarly believed that matter and soul were one
substance. This solution avoids the ungainliness of dualism: panpsychism, Koch argues, “elegantly eliminates the
need to explain how the mental emerges out of the physical and vice versa. Both coexist.” One might feel that
aesthetic considerations, such as elegance, do not necessarily make for good science; more concerning, perhaps, is
the fact that Koch, at times, appears motivated by something even more elemental—a longing to reënchant the
world. In the book’s last chapter, he confesses to finding spiritual sustenance in the possibility that humans are not
the lone form of consciousness in an otherwise dead cosmos. “I now know that I live in a universe in which the inner
light of experience is far, far more widespread than assumed within standard Western canon,” he writes. Koch
admits that when he speaks publicly on these ideas, he often gets “you’ve-got-to-be-kidding-stares.”
It is an irony of materialist theories that such efforts to sidestep ghostly or supernatural accounts of the mind often
veer into surreal, metaphysical territory. Graziano, in a similarly transcendent passage in his book, proposes that the
attention-schema theory allows for the possibility of uploading one’s mind to a computer and living, digitally,
forever; in the future, brain scans will digitally simulate the individual patterns and synapses of a person’s brain,
which Graziano believes will amount to subjective awareness. Like Koch, Graziano, when entertaining such
seemingly fanciful ideas, shifts into a mode that oddly mixes lyricism and technical rigor. “The mind is a trillion-
stranded sculpture made of information, constantly changing and beautifully complicated,” he writes. “But nothing
in it is so mysterious that it can’t in principle be copied to a different information-processing device, like a file
copied from one computer to another.”
The strangeness of all this does not mean that such speculations are invalid, or that they undermine the theories
themselves. While reading Koch and Graziano, I recalled that the philosopher Eric Schwitzgebel, in 2013, coined
the term “crazyism” to describe the postulate that any theory of consciousness, even if correct, will inevitably strike
us as completely insane.
If the current science of consciousness frequently strikes us as counterintuitive, if not outright crazy, it’s because
even the most promising theories often fail to account for how we actually experience our interior lives. “The
result,” Tim Parks writes in his new book, “Out of My Head: On the Trail of Consciousness,” “is that we regularly
find ourselves signing up to explanations of reality that seem a million miles from our experience.” In 2015, Parks, a
British novelist and essayist, participated in a project funded by the German Federal Cultural Foundation which put
writers in conversation with scientists. The initiative led Parks to meet with a number of neuroscientists and observe
their research on consciousness. Parks finds that most of the reigning theories upend his intuitive understanding of
his own mind. Truth, these experts tell him, lies not in our fallible senses but in the bewildering decrees of science.
Our minds, after all, are unreliable gauges of the objective world.
Parks takes a different approach: mental experience lies at the core of “Out of My Head,” not only as subject but as
method. For Parks, our subjective understanding of our minds is trustworthy, at least to a degree; he admonishes the
reader to weigh every scientific theory against their knowledge of “what it’s really like being alive.” Throughout his
account of his travels, he dramatizes his inner life: he notices how time seems to slow down at certain moments and
accelerate at others, and how the world disappears entirely when he practices meditation; he describes his fears
about his girlfriend’s health and his doubts about whether he can write the book that we are reading.
Most of the neuroscientists whom Parks meets believe that consciousness can be reduced to neuronal activity, but
Parks begins to doubt this consensus view. As a novelist, attentive to the nuances of language, he notices that these
theories rely a great deal on metaphor: the literature of consciousness often refers to the brain as a “computer,”
chemical activity as “information,” and neuronal firing as “computation.” Parks finds it “puzzling that our brains are
made up of things—computers—that we ourselves only recently invented.” He asks one neuroscientist how
electrical impulses amount to information, and she insists that this is just figurative language, understood as such by
everyone in the field. But Parks is unconvinced: these metaphors entail certain theoretical assumptions—that, for
instance, consciousness is produced by, or is dependent upon, the brain, like software running on hardware. How are
these metaphors coloring the parameters of the debate, and what other hypotheses do they prevent us from
considering?
Parks’s skepticism stems in part from his friendship with an Italian neuroscientist named Riccardo Manzotti, with
whom he has been having, as he puts it, “one of the most intense and extended conversations of my life.” Manzotti,
who has become famous for appearing in panels and lecture halls with his favorite prop, an apple, counts himself
among the “externalists,” a group of thinkers that includes David Chalmers and the English philosopher and
neuroscientist Andy Clark. The externalists believe that consciousness does not exist solely in the brain or in the
nervous system but depends, to various degrees, on objects outside the body—such as an apple. According to
Manzotti’s version of externalism, spread-mind theory, which Parks is rather taken with, consciousness resides in
the interaction between the body of the perceiver and what that perceiver is perceiving: when we look at an apple,
we do not merely experience a representation of the apple inside our mind; we are, in some sense, identical with the
apple. As Parks puts it, “Simply, the world is what you see. That is conscious experience.” Like Koch’s
panpsychism, spread-mind theory attempts to recuperate the centrality of consciousness within the restrictions of
materialism. Manzotti contends that we got off to a bad start, scientifically, back in the seventeenth century, when
all mental phenomena were relegated to the subjective realm. This introduced the false dichotomy of subject and
object and imagined humans as the sole perceiving agents in a universe of inert matter.
Manzotti’s brand of externalism is still a minority position in the world of consciousness studies. But there is a
faction of contemporary thinkers who go even further—who argue that, if we wish to truly understand the mind,
materialism must be discarded altogether. The philosopher Thomas Nagel has proposed that the mind is not an
inexplicable accident of evolution but a basic aspect of nature. Such theories are bolstered, in part, by quantum
physics, which has shown that perception does in some cases appear to have real causal power. Particles have no
properties independent of how you measure them—in other words, they require a conscious observer. The cognitive
scientist Donald Hoffman believes that these experimental observations prove that consciousness is fundamental to
reality. In his recent book “The Case Against Reality: Why Evolution Hid the Truth from Our Eyes,” he argues that
we must restart science on an entirely different footing, beginning with the brute fact that our minds exist, and
determining, from there, what we can recover from evolutionary theory, quantum physics, and the rest. Theories
such as Hoffman’s amount to a return of idealism—the notion that physical reality cannot be strictly separated from
the mind—a philosophy that has been out of fashion since the rise of analytic philosophy, in the early twentieth
century. But if idealism keeps resurfacing in Western thought, it may be because we find Descartes and Galileo’s
original dismissal of the mind deeply unsatisfying. Consciousness, after all, is the sole apparatus that connects us to
the external world—the only way we know anything about what we have agreed to call “reality.”
A few years before Parks embarked on his neuro-philosophical tour, he and his wife divorced, and many of his
friends insisted that he was having a midlife crisis. This led him to doubt the reality of his own intuitions. “It seems
to me that these various life events might have predisposed me to be interested in a theory of consciousness and
perception that tends to give credit to the senses, or rather to experience,” he writes.
By the end of the book, it’s difficult to see how spread mind offers a more intuitive understanding of reality than
other theories do. In fact, Parks himself frequently struggles to accept the implications of Manzotti’s ideas,
particularly the notion that there is no objective world uncolored by consciousness. But perhaps the virtue of a book
like Parks’s is that it raises a meta-question that often goes unacknowledged in these debates: What leads us, as
conscious agents, to prefer certain theories over others? Just as Parks was drawn to spread mind for personal
reasons, he invites us to consider the human motivations that undergird consensus views. Does the mind-as-
computer metaphor appeal to us because it allows for the possibility of mind-uploading, fulfilling an ancient,
religious desire for transcendence and eternal life? Is the turn toward panpsychism a kind of neo-Romanticism born
of our yearning to reënchant the world that materialism has rendered mute? If nothing else, these new and
sometimes baffling theories of consciousness suggest that science, so long as it is performed by human subjects, will
bear the fingerprints of our needs, our longings, and our hopes—false or otherwise.
Mark Vernon
Philosophers that break scientistic taboos, such as Thomas Nagel with Mind and Cosmos, risk much, but
we need them
A Victorian cartoon satirising Darwin's On the Origin of Species. Today, 'goal-directed explanations automatically
question your loyalty to Darwin'. Photograph: Corbis
Fri 4 Jan 2013 09.00 GMT
Every year, I give an award to the Most Despised Science Book of the Year. The 2010 award went to Jerry
Fodor and Massimo Piattelli-Palmarini for What Darwin Got Wrong. In 2011, Ray Tallis won with Aping
Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity.
My runner-up this time is Rupert Sheldrake's The Science Delusion, though in fact it had a strikingly
decent reception for a book also critiquing scientistic dogmatism.
So the winner for 2012 must be Thomas Nagel, for his book Mind and Cosmos: Why the Materialist Neo-
Darwinian Conception of Nature is Almost Certainly False.
Steven Pinker damned it with faint praise when he described it in a tweet as "the shoddy reasoning of a
once-great thinker". Jerry Coyne blogged: "Nagel goes the way of Alvin Plantinga", which is like being
compared to Nick Clegg. All in all, Nagel's gadfly stung and whipped them into a fury.
Disparagement is particularly unfair, though, because the book is a model of carefulness, sobriety and
reason. If reading Sheldrake feels daring, Tallis thrilling and Fodor worthwhile but hard work, reading
Nagel feels like opening the door on to a tidy, sunny room that you didn't know existed. It is as if his heart
said to his head, I can't help but feel that materialist reductionism isn't right. And his head said to his
heart, OK: let's take a fresh look. So what caused the offence?
Several things, but consider one: the contention that evolution may tend towards consciousness. Nagel is
explicit that he himself is not countenancing a designer. Rather, he wonders whether science needs to
entertain the possibility that a teleological trend is immanent in nature.
There it is. The t-word – a major taboo among evolutionary biologists. Goal-directed explanations
automatically question your loyalty to Darwin. As Friedrich Engels celebrated, when reading On The
Origin of Species in 1859: "There was one aspect of teleology that had yet to be demolished, and that has
now been done." But has it? This is the moot point.
The scientifically respectable become edgy when approaching this domain. Read Malcolm Thorndike
Nicholson's measured piece on the reaction Nagel's book sparked, published in Prospect. The possibility
that the universe wants, in some way, to become conscious will "appear absurd" or "strange", he warns.
But bear the anxiety, he doesn't quite continue, and consider the arguments.
I'm considering some of them with Rupert Sheldrake in a series of podcasts, if you'll forgive the plug. But
it is striking that they can be aired in relatively kosher scientific circles too. A recent example is Paul
Davies's bestseller, The Goldilocks Enigma. Davies argues that the refusal of natural teleology rests on an
assumption that nature obeys laws that are written into the fabric of the cosmos. However, quantum
physics offers every reason to doubt that this is so. The upshot is that Davies himself favours a universe
that contains a "life principle".
So how come teleology is acceptable among cosmologists? It may be that they are used to the basic
assumptions of their science being regularly overturned. Biology, though, has had a very good run since
1859. Questioning their science feels like a form of self-sabotage and dangerous. Hence, Brian Leiter and
Michael Weisberg, reviewing Nagel for the Nation, evoked the spectre of supernaturalism; and Simon
Blackburn, reviewing for the New Statesman, jested that "if there were a philosophical Vatican, the book
would be a good candidate for going on to the Index".
That was written tongue-in-cheek, but it is a purity argument no less. As Mary Douglas pointed out,
secular societies still draw symbolic boundaries to keep the permissible in and threatening stuff out.
Those who cross them risk expulsion. The media ritual of the public review offers a mechanism.
As Freeman Dyson recently wrote in the New York Review of Books, contemporary philosophers bow too
low to science, mostly because they haven't done any, and have simultaneously lost touch with the
elements that made their predecessors so great: the truths held by history, literature, religion. The 2012
award is well earned. We need those prepared to face the flak.
These strange, subterranean cities are eerily like our own. But they’re ruled by ants.
Tales From the Vault: A monthly series.
Ted Schultz, the research entomologist at the National Museum of Natural History, talks about a type of ant that
plants fungus. (Mahnaz Rezaie/The Washington Post)
By Sarah Kaplan
"A lot of non-biologists are attracted to things like butterflies or ladybugs — certain charismatic insects —
and ants are not among those," said Schultz, back in his office in the entomology department at the
National Museum of Natural History. "But actually I think if people could see what ants look like really
close up they would change their minds."
In front of him sit several of the ant colonies he's collected over the course of his 26-year career. The
communities inside the clear plastic boxes are miniature cities teeming with tiny life — the rare specimens
in the museum's 145-million-item collection that are still alive.
Schultz's ants are farmers, and that white fungus he pulled out of the ground is their crop. Between 55 and
60 million years ago, an ancient insect evolved the ability to feed a fungus and live off of its spores. Now
there are some 250 ant species that farm fungi — tending to them, weeding them, ensuring that they have
the nutrients they need. They depend on the fungus for food, and in turn, the fungus depends on the ants
to grow.
"It was immediately like, 'holy moly,' just a light bulb going off," said Peter Peregrine, an anthropologist at
Lawrence University, recalling the first time he heard about these ants. "That's agriculture. That's really
interesting."
For years, Schultz has been tracking these ants in the tropics and scrutinizing them in the lab in an effort to understand how the creatures evolved into farmers millions of years before Homo sapiens even existed on the planet.
the planet. But recently he's teamed up with anthropologists like Peregrine to look for parallels between
people and ants. If they can find them, perhaps the history of human agriculture will help illuminate how
farming insects evolved. And in return, Schultz believes, perhaps the ants can teach us humans something
about ourselves.
It's a high calling for a tiny bug with a trifling brain. But Schultz has long known that ants are not to be
underestimated. When he was 8, his mother gave him a book called "The World of Ants" (he still owns a
copy), which included three pages on fungus farmers.
"I remember reading that and thinking, like, 'What?' " Schultz recalled. "How can something seemingly so
simple as an ant, with such a small brain, do things like agriculture that are so complicated? That always
kind of stuck with me."
Ted Schultz digs for an ant colony while doing field research in 2008. (NMNH)
Ant colonies illustrate a phenomenon called emergence: Parts of a system acting in very simple ways can
produce collective behaviors that are incredibly sophisticated, without anyone telling them how. The
same phenomenon explains how individual birds can form a flock, and how the millions of neurons in our
nervous systems can create consciousness. Many entomologists argue that social insects like ants should
be studied on the scale of the colony, rather than the individual — it's by acting as a community that these
creatures do what they do.
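A minimal sketch of that idea, loosely modeled on the classic "double bridge" ant experiment (parameters invented): each ant follows one dumb local rule, prefer the path with more pheromone, yet the colony reliably converges on a single shared trail that no individual chose.

```python
import random

# Each ant picks one of two identical paths with probability proportional to
# the square of that path's pheromone level, then deposits pheromone on it.
# No ant sees the global state, yet one trail comes to dominate: emergence.

def run(n_ants=2000, evaporation=0.01, seed=7):
    random.seed(seed)
    pheromone = [1.0, 1.0]
    for _ in range(n_ants):
        w0, w1 = pheromone[0] ** 2, pheromone[1] ** 2
        path = 0 if random.random() < w0 / (w0 + w1) else 1
        pheromone[path] += 1.0                          # reinforce chosen path
        pheromone = [p * (1 - evaporation) for p in pheromone]
    return pheromone

print(run())  # e.g. one path near 100, the other near 0: a single shared trail
```

The squared weighting is the whole trick: tiny random fluctuations get amplified by positive feedback until the colony "decides", even though the rule never mentions a decision.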
"The original ant 130 million years ago was social," Schultz said, "and on that foundation of sociality all
kinds of ant lineages have evolved very, very complicated behaviors."
An agricultural ant colony begins with a single member, the queen. She seeds her garden with spores
carried from her mother's home, and gives birth to the workers who will cultivate it. As the new colony
grows, worker ants fertilize the fungus with bits of plant material collected from the outside — flower
pollen, chewed up leaves. The most massive colonies can defoliate an entire tree in a matter of days, given
the opportunity (though trees have evolved their own defenses).
Some species have learned to "herd" aphid "cattle." The ants keep their bugs docile with tranquilizing
chemicals the ants secrete from their feet (there are probably more than a few human cattle ranchers who
wish they could do the same), then feed on the honeydew that the aphids excrete, much the way that
humans drink cows' milk.
Schultz and his colleagues have reconstructed the evolution of these abilities by comparing the genomes of
more primitive species with those of advanced ones. DNA analysis of the ants' fungi shows that the crops'
evolution mirrors that of the species that farms them. Many of the ant species have adaptations that make
them better farmers, including crevasses on their bodies containing microbes that produce an antibiotic
they can apply to their crops. Likewise, the fungi have evolved to become more appealing to their ants.
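A hedged sketch of the kind of signal behind that claim (the sequences below are invented, not real data): if ant lineages and their cultivars co-diverged, pairwise genetic distances between ant species should track the distances between their fungi.

```python
# Invented 8-base "sequences" for three ant species and their fungal crops.
def hamming(a, b):
    """Count of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

ants  = {"A": "ACGTACGT", "B": "ACGTACGA", "C": "ACGAACTA"}
fungi = {"A": "TTGCATGC", "B": "TTGCATGA", "C": "TTGAATCA"}  # crop of each ant

for i, j in [("A", "B"), ("A", "C"), ("B", "C")]:
    print(i, j, hamming(ants[i], ants[j]), hamming(fungi[i], fungi[j]))
# Ant distances (1, 3, 2) match fungus distances (1, 3, 2): mirrored phylogenies.
```

Real analyses use far richer models than Hamming distance, but the mirrored distance pattern is the essence of what "the crops' evolution mirrors that of the species that farms them" means.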
"They're true farmers," Schultz said of his insects. "I suppose if you didn't want to call it farming you could
also call it symbiosis. But then you’d have to also talk about human agriculture as a symbiosis too."
Primitive farming ants atop their fungus crop. (Ted Schultz/NMNH)
Strange as it may seem, humanity's relationship with our cultivars is a mutually beneficial one. Like ants
with their fungus, we fertilize our crops, protect them from weeds and insects and help them spread
around the globe.
"Domestication is really successful for the plant or animal being domesticated," said Peregrine, the
anthropologist. "For example, corn began as this very localized, weedy grass, and now it's the most widely
planted crop on Earth. ... So who is to say that corn doesn't have a stake in this relationship as well?"
Three years ago, after hearing about farming ants from one of his colleagues, Peregrine convened a
meeting of archaeologists, anthropologists and entomologists at the Santa Fe Institute to discuss possible
parallels between human farmers and insects (in addition to ants, beetles and termites have also evolved
the ability to farm). Agriculture could be a rare example of convergent evolution — unrelated organisms
evolving a similar trait — happening across entirely different orders of animal.
"We wanted to see, do the same sort of ecological rules and laws and forces govern both systems?" Shultz
said.
The answer seems to be "yes." As was the case with humans, insects became sedentary when they became
farmers. Initially, farming wasn't as profitable as being a hunter-gatherer — both primitive ants and early
subsistence farmers are thought to have been malnourished — but as agriculture became more advanced,
it became more productive. These more sophisticated farms were able to sustain larger populations, which
promoted division of labor, which gave rise to incredibly complex civilizations. Farming societies built the
pyramids and the Internet, wrote the Bhagavad Gita and "Pride and Prejudice," sailed the seas and visited
the moon, and began to divine the fundamental nature of the universe.
And ants? Well, ants became one of the dominant ecological forces on the planet. If you weighed every
living thing in the American tropics (there are no ant farmers in the Eastern hemisphere), agricultural
ants would make up 25 percent of the weight.
"Ants are super successful," said Schultz, a hint of pride in his voice. "They really shape their ecosystems,
way more than any large, charismatic herbivore."
The head and powerful jaw of an Atta laevigata leafcutter worker. (Ted Schultz/NMNH)
Among both ant and human farming communities, all participants in the relationship have been
irrevocably changed by it. Farming ants and their fungus have fundamentally different DNA than their
non-farming relatives. Human farmers have evolved genetic adaptations that allow us to digest milk and
metabolize fats; our crops, meanwhile, bear little resemblance to their wild ancestors.
That said, ant farmers are not directly comparable to people. They're not consciously manipulating their
fungi (indeed, Schultz said, you can imagine a scenario in which the fungi rule the relationship, bending
millions of tiny ant servants to their will). Ant agriculture is a product of natural selection, of innumerable
accumulated genetic accidents. New strategies aren't learned, they're evolved. But humans have
consciousness and culture, and that allowed us to achieve in a few thousand years what took ants five
hundred thousand centuries to accomplish. Even though ants have been farming for much longer,
Peregrine said, there isn't much they can teach us about agriculture that we don't already know.
Except this:
"One thing that is really important and a little scary is that it places humans in the natural world,"
Peregrine said. "The development of agriculture, which we see as this great watershed in human history ...
is not a unique moment. We have our own twist on it, and we do things way faster because we have
culture, but at the base of it we are creatures that are subject to evolution just like all other organisms."
"Some people find that as taking away our humanity or something, but I find it humbling. It says, we’re
part of nature too."