The Emergent Mind,

A Critique of the Penrose Hypothesis

Resti Luca, matriculation number 905039

Philosophical Issues of Computer Science, Paper

e-mail: luca.resti@mail.polimi.it

Abstract

The purpose of this paper is to critique mathematician Sir Roger Penrose’s position on the nature

of intelligence. Specifically, I shall examine part of what’s commonly known as the Lucas-

Penrose argument as expressed by Penrose in his book Shadows of the Mind, as well as related

assertions he has made in his recent public appearances on the topic of human intelligence and

the Computational Theory of Mind.

I do not claim to conclusively disprove these arguments, but merely to point out several

inconsistencies and oversights, which lead me to be skeptical of their conclusion that human

mental processes cannot be described in terms of computations. These issues include what I

consider to be a questionable interpretation of the implications of Gödel’s incompleteness

theorem, as well as some apparent misunderstandings about the capabilities of artificial

intelligence (AI).

I would like to point out that, despite my critique, I have the utmost respect for Sir Penrose’s

important contributions to science. Lastly, it's important to note that in his writings Penrose draws several interesting connections between the mind and quantum mechanics. Responding to such technical claims goes beyond the scope of my knowledge and of this paper, whose focus is on the premises of the proposed arguments, which fortunately make no appeal to the odd world of quantum physics.

Introduction

Can a machine that operates according to precise, deterministic rules capture the essence of the

human mind?

According to Sir Roger Penrose, the answer to this question is no.



His point is not that today's technology is inadequate, but that, as a consequence of his extension of Gödel's Incompleteness Theorems (Gödel, 1931), machines won't ever be equivalent to a mind.

But is it actually reasonable to compare high-level thought processes of the human mind to the

formal systems that Gödel’s theorems concern themselves with?

Can we find a different approach that more closely resembles what we do know about the mind

and the underlying physical processes of the brain?

I try to respond to these questions in the first two sections of the paper, providing what I consider

to be a more reasonable way of thinking about the relationship between mental processes and

computation.

In the last section, I focus on the contents of some of Sir Penrose’s recent public lectures, in

which he supports his hypothesis that the mind cannot be reduced to a computational process.

He constructs an argument for the claim that human minds can solve tasks that are impossible for machines. In support of this, he argues that humans can easily solve instances of the undecidable Polyomino Tiling Problem, just as they can solve a particular chess configuration that chess-playing programs struggle with.

I believe I have identified some critical issues in his arguments, and I cite several results from the field of theoretical computer science in order to show how the mind's processes might in fact not be as different from computational ones as Penrose suspects.

Ultimately Penrose wishes to dismiss the computationalist viewpoint by constructing a

syllogism, but I will attempt to show that such an approach exposes him to many potential

mistakes, which are the result of deep-rooted and unproven presuppositions about the nature of

the mind.

The Soundness Problem

The basis for Penrose's argument is Kurt Gödel's Incompleteness Theorems (GIT), which may be summarized as follows:

I. Any consistent formal system S that is expressive enough to describe elementary arithmetic, and whose theorems can be enumerated algorithmically, contains at least one true statement, known as a Gödel sentence (GS), that can be neither proved nor disproved within S

II. If such a formal system S is consistent, then it is unable to prove its own consistency
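
In symbols, the two theorems can be restated compactly as follows (my formulation in standard notation, assuming S is consistent, recursively axiomatizable, and expressive enough for elementary arithmetic; G_S denotes the Gödel sentence of S and Con(S) its arithmetized consistency statement):

    \text{(GIT I)} \quad S \nvdash G_S \quad\text{and}\quad S \nvdash \lnot G_S, \quad\text{yet}\quad \mathbb{N} \models G_S

    \text{(GIT II)} \quad S \nvdash \mathrm{Con}(S)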

These theorems have been extremely influential in the field of theoretical computer science, where they help define the limits of what a computer, or more generally a Turing machine, can do, by showing that sufficiently expressive formal systems are inevitably incomplete.

Penrose, however, takes this idea further in his 1994 book Shadows of the Mind and formulates a

version of Gödel’s theorems that applies to human cognition.

He begins with the presupposition that human minds intuitively know themselves to be sound. He then states that if the human mind could be described as a formal system, then, as per Gödel's first Incompleteness Theorem, there would be at least one statement - GS - which it would be unable to recognize as true. However, since the mind is sound according to Penrose, it would have to recognize the truth of GS, and thus be in contradiction with GIT. His conclusion is that GIT cannot be applied to the mind, and therefore the mind is not a formal system.
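
Laid out schematically, the argument runs roughly as follows (my reconstruction, not Penrose's exact wording):

1. The human mind is sound (Penrose's premise).
2. Suppose the mind were equivalent to some formal system F.
3. By GIT, F has a Gödel sentence GF which F can neither prove nor disprove.
4. According to Penrose, a sound mind can nevertheless see that GF is true.
5. The mind can then do something F cannot, contradicting (2); hence the mind is not a formal system.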

This type of argument is extremely attractive to anyone aiming to disprove the Computational

Theory of Mind (CTM), as it tries to invalidate its very core: if a mind cannot be described as a

purely algorithmic system, then the computationalist hypothesis that human intelligence is a

computational process must be false.



More specifically, following his adaptation of GIT, Penrose concludes that mathematical thinking in humans is unique:

"The inescapable conclusion seems to be: Mathematicians are not using a knowably

sound calculation procedure in order to ascertain mathematical truth. We deduce that

mathematical understanding - the means whereby mathematicians arrive at their

conclusions with respect to mathematical truth - cannot be reduced to blind calculation!"

(Penrose, 1994)

Indeed, this would be a reasonable conclusion to reach, if the premises that led to it were well-

founded; however, hidden in Penrose’s argument lie many unsupported assumptions about

human beings and the mind. The most critical assumption seems to be that a human mind would

be able to determine its own soundness.

How can a mind know itself to be sound?

Informally speaking, a system is sound when its rules of inference, applied to true premises, can only ever lead to true conclusions. The claim that a human mind may be able to correctly see the soundness of its own inner workings seems to me a very strong one, deserving of a convincing demonstration. If an individual wanted to know whether her mind is sound, she would have to demonstrate it by using the only tool available: the mind itself.

This is a questionable proposition at best.

One can obviously become convinced of the truth of a given statement even without any formal

proof, but this fact alone has no bearing on the actual truth value of said statement.

After all, we can just as easily be fooled by our senses when trying to make sense of the world

around us, as is often the case any time we witness a magic trick being performed or are

confronted with optical illusions. Indeed, even a flawed mathematical theorem can sometimes

appear to be entirely valid, at first glance.

This is clearly problematic for Penrose's Gödelian argument, since it relies on human understanding being sound. In my opinion, the fact that Penrose provides no formal justification constitutes enough reason to reject this premise of his argument.

I am convinced that the moment to believe a noteworthy claim is after it has been demonstrated

to be true. Penrose has to meet his burden of proof and demonstrate that the mind is, in fact,

sound.

As a result, I call into question any conclusions that may follow from this unfounded premise.

An alternative criticism of this premise was offered by Chalmers (1995), who argued that it leads to a paradox.

The Emergent Mind

Before starting this section of the analysis, I would like to clarify my position on

computationalism in order to avoid equivocation.

Could an appropriately programmed computer not only act like a brain, but actually be

equivalent to a brain?

This is the position of “Strong AI” as defined by John Searle in his influential 1980 paper

“Minds, Brains and Programs” (Searle, 1980) wherein he states:

“[…] according to strong AI, the computer is not merely a tool in the study of the mind;

rather, the appropriately programmed computer really is a mind, in the sense that

computers given the right programs can be literally said to understand and have other

cognitive states.”

I fully subscribe to this definition, as well as to Searle's conclusion that no purely formal model is by itself sufficient to duplicate, rather than merely simulate, the causal properties of a mind.

But doesn’t this actually work in favor of Penrose’s hypothesis?

It does not, because his claim is that human understanding cannot be a deterministic

computational process, whereas Searle’s claim is that the processes which give rise to

intentionality must be related to the chemical and physical properties of the brain - a position I

happen to agree with. I believe that these physical processes may still be described and studied in

computational terms with the hope that, once understood, we might explain how they produce

high-level human thought.

This can be described as a bottom-up approach.

Penrose’s Gödelian argument looks at the matter from a different angle, by focusing on the mind

as its own self-contained phenomenon.

This strategy has a critical weakness, in my opinion: the implication that high-level human

understanding can be directly compared to the evaluation of mathematical propositions.

I define high-level understanding as our experience of intuitively recognizing the truth value of

certain basic facts or propositions about the world, such as accepting that 1 plus 1 equals 2 or

that touching a hot pan will probably burn your hand.

As stated in the previous section, Penrose focuses on mathematical understanding, which is a form of high-level understanding, but I believe that the following considerations can easily be

extended to other types of intuitions. For the sake of argument, let’s accept his previously

described Gödelian hypothesis that the functions of the human mind cannot be entirely captured

by a formal system which follows a set of mechanical rules.



From this, he infers that “Mathematicians are not using a knowably sound procedure in order to

ascertain mathematical truth”.

This line of reasoning revolves around high-level human thought processes rather than their underlying causes; it is the opposite of the bottom-up approach described above.

In other words, Penrose might be looking at the phenomenon of human thought processes from

the wrong angle, which could make it impossible to get any closer to the truth, much like in the

case of a doctor trying to cure her patient’s chronic headache by only focusing on the symptoms,

rather than the concussion that is causing it.

I propose instead that high-level, intuitive human understanding can be viewed as an emergent

phenomenon.

A system possesses an emergent property P if P does not depend on any single part of the system, but rather results from all the parts working together. For instance, wetness is an

attribute we assign to water, and we know water to be made of many H2O molecules. Yet, we

cannot say that any single molecule of water is itself wet, but if we take many of them together,

at some point the resulting system takes on a property we call wetness.

I believe there are good reasons to suspect the human mind to be exactly this type of system, and

that features such as intelligence and consciousness are, in fact, emergent. As such, intelligence and consciousness themselves don't need to be computational processes, even if the parts of the underlying system from which they "emerge" (neurons, synapses, etc.) are computational.
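
A standard illustration of this idea - my example, not one taken from Penrose or from the paper's sources - is Conway's Game of Life, where each cell obeys a trivial local rule, yet coherent higher-level objects such as "gliders" emerge and travel across the grid, a behavior no individual cell exhibits. A minimal sketch in Python:

    from collections import Counter

    def step(live):
        # One generation of Conway's Game of Life; 'live' is a set of
        # (x, y) coordinates of live cells on an unbounded grid.
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # A cell survives with 2 or 3 live neighbors, is born with exactly 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # The classic glider: after 4 generations the same five-cell shape
    # reappears shifted one cell diagonally - "movement" that exists only
    # at the level of the pattern, never at the level of a single cell.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the same shape, translated by (1, 1)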

The value of emergence in science is perhaps best described by theoretical biologist Claus Emmeche and his co-authors in the introduction to their paper on the subject (Emmeche et al., 1997):

“The concept of emergence has an ambiguous status in contemporary science and

philosophy of science: On the one hand, many scientists and philosophers regard

emergence as having only a pseudo-scientific status. On the other hand, new

developments in physics, biology, psychology, and crossdisciplinary fields such as

cognitive science, artificial life, and the study of non-linear dynamical systems have

focused strongly on the high level 'collective behaviour' of complex systems which is

often said to be truly emergent, and the term is increasingly used to characterize such

systems.”

Whether this line of thinking ever leads to important breakthroughs in the study of human intelligence remains to be seen, but I think it's a paradigm in line with what is often done in science when studying complex phenomena: breaking them down and studying them in terms of their constituent parts.

Polyominoes, Chess and Heuristics

On May 12th, 2018, I attended a debate between Sir Penrose and philosopher Emanuele Severino that took place in Milan. The topic of the day was "Artificial Intelligence vs. Human Intelligence", and Penrose gave a presentation with some accompanying slides, laying out his views on the matter.

His talk was largely identical to one he gave in 2017 at the Center for Consciousness Studies, where he used the same slides and arguments. There is a publicly available video recording of this 2017 talk, which I reference throughout this section.

In this presentation, Penrose tries to deconstruct computationalism from an angle that is different from his Gödelian argument. He makes his case using two separate but related examples: polyominoes and chess-playing AI.



Let's start with the polyomino argument.

First of all, polyominoes are geometric shapes constructed by stacking together squares of equal size, as shown in Fig. A.

[Fig. A: A polyomino and a possible tiling. Source: https://www.researchgate.net/figure/A-pseudo-square-polyomino-its-decomposition-and-a-tiling_fig1_220343103]

Penrose brings up what's known in the literature as the Tiling Problem, which is defined as follows: let S be a finite set of polyominoes of differing shapes. Is it possible to cover a two-dimensional plane by arranging an arbitrary number of polyominoes from S in a non-overlapping pattern?

This problem can be considered a variation of the Domino Problem, which was proven to be undecidable by Robert Berger (Berger, 1966); by reduction, so is the Polyomino Tiling Problem (Golomb, 1970; see also Ollinger, 2009). In essence, this means that there exists no algorithm that can always determine in a finite amount of time whether a given set of polyominoes is or isn't capable of covering the plane.

What does this actually tell us with respect to intelligence?

According to Penrose, because humans are capable of finding sets that satisfy the required

conditions just by using logic and intuition, the Polyomino Tiling Problem is an example of

something minds can do and computers can’t.

I am not convinced that this is a well-founded conclusion.

The fact that the problem is not decidable only means that there exists no single algorithm that always and reliably finds an answer for every instance. However, even undecidable problems admit instances which can be solved through heuristics, in the form of programs ultimately executed by computers. Unlike an algorithm, a heuristic is not bound by the stringent requirement of always finding a solution, and - if it does find one - the solution is not guaranteed to be optimal; a heuristic merely provides a good method for finding acceptable, even approximate, solutions.
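
To make this concrete, here is a minimal sketch of such a heuristic in Python (my own illustration, with conventions of my choosing - not anything Penrose or the tiling literature prescribes). Rather than deciding the undecidable question about the infinite plane, it exhaustively searches for a tiling of a finite rectangle; if a rectangle whose copies tile the plane can itself be tiled by S, the original question is settled for that set, while failure settles nothing. That trade-off is precisely what makes it a heuristic rather than a decision procedure:

    def tile_rectangle(w, h, shapes):
        # Backtracking search: cover every cell of a w x h grid with
        # non-overlapping copies of the given polyominoes. Each shape is
        # a list of (dx, dy) offsets from an anchor square; rotations and
        # reflections must be listed explicitly (a simplifying assumption).
        grid = [[False] * w for _ in range(h)]

        def first_empty():
            for y in range(h):
                for x in range(w):
                    if not grid[y][x]:
                        return x, y
            return None

        def solve():
            cell = first_empty()
            if cell is None:
                return True  # every cell is covered: a tiling was found
            x0, y0 = cell
            for shape in shapes:
                cells = [(x0 + dx, y0 + dy) for dx, dy in shape]
                if all(0 <= x < w and 0 <= y < h and not grid[y][x]
                       for x, y in cells):
                    for x, y in cells:
                        grid[y][x] = True
                    if solve():
                        return True
                    for x, y in cells:
                        grid[y][x] = False  # undo and try the next shape
            return False

        return solve()

    # A horizontal 2x1 domino tiles a 4x4 square but not a 3x3 one.
    domino = [[(0, 0), (1, 0)]]
    print(tile_rectangle(4, 4, domino))  # True
    print(tile_rectangle(3, 3, domino))  # False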

In fact, there are several heuristics that have been proposed for certain undecidable problems.

One prominent example is found in the field of Static Program Analysis (SPA), which is the

process of automatically analyzing and making predictions about a piece of software by

examining its source code without executing it. SPA is a particularly significant case, because

it's the quintessential example of the kind of problem that Rice's Theorem (Rice, 1953) shows to be undecidable: no algorithm can decide, for arbitrary programs, any non-trivial property of the function they compute.

I am stressing the well-documented nature of SPA's undecidability to show that I haven't simply cherry-picked an obscure, badly researched case to argue against Penrose's claim: the fact that SPA poses an undecidable problem which can nevertheless be tackled reasonably well with heuristics is a foundational finding of theoretical computer science. The takeaway is that the undecidability of a problem does not place it outside the domain of computation.
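
As a toy illustration of what such a heuristic can look like (my own example - real static analyzers are vastly more sophisticated), the following Python checker flags possible divisions by zero. Because the exact property is undecidable, it over-approximates: it trusts only nonzero literal divisors and conservatively flags everything else, accepting false alarms in exchange for always terminating:

    import ast

    def may_divide_by_zero(source):
        # Walk the syntax tree and inspect every division. A divisor is
        # considered safe only if it is a nonzero constant; any other
        # expression is conservatively treated as possibly zero.
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div):
                divisor = node.right
                if not (isinstance(divisor, ast.Constant) and divisor.value != 0):
                    return True
        return False

    print(may_divide_by_zero("x = 10 / 2"))        # False: provably safe
    print(may_divide_by_zero("x = 10 / (2 + 3)"))  # True: a false alarm,
                                                   # the price of imprecision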

Given this, wouldn’t it be possible that the underlying computational dynamics of the brain

operate on the basis of certain heuristics as well?

If that were true, then the example provided by Penrose would not constitute an instance of

something a mind can do that a machine can't. Naturally, it's beyond the scope of this paper to demonstrate that the brain actually does use heuristics for problem solving.

However, ongoing research in the field of cognitive psychology suggests that heuristics might

play an important role in human decision-making, such as the so-called Availability Heuristic

(Tversky & Kahneman, 1973), as well as many others which I am unable to cover in detail in this

analysis.

A similar argument can be made in response to the second example Penrose gives: chess-playing

AI. He shows a slide depicting a chess board populated with pieces in a specific configuration, shown in Fig. B.

[Fig. B: Slide from Penrose's presentation. Source: http://www.dailymail.co.uk/sciencetech/article-4311932]

He claims that this configuration has the particular property of being difficult to assess correctly for chess-playing AI systems, whereas it's relatively easy for human players to solve. While this is an interesting phenomenon to study, I don't think it bears great significance with respect to his original thesis.

Once again, it's worth asking what this finding actually means.

Firstly, the given configuration is not an instance of an undecidable problem like the polyomino case. It's merely a configuration that current chess-playing programs are inefficient at solving.
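
To see why a classical engine can misjudge such a position while a human does not, consider the crudest possible evaluation heuristic: a material count. (This is a deliberately simplified toy of my own; real engines combine material with far richer positional terms and deep search.) In a fortress-like position of the kind Penrose showed, one side can be far behind in material and yet hold an effortless draw, which a material-driven evaluation systematically misreads:

    # Conventional piece values; positive scores favor White.
    PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

    def material_eval(white_pieces, black_pieces):
        # A caricature of a classical engine's static evaluation: the
        # difference in total material, blind to fortresses and blockades.
        white = sum(PIECE_VALUE[p] for p in white_pieces)
        black = sum(PIECE_VALUE[p] for p in black_pieces)
        return white - black

    # White holds only king and pawns against queen and rooks; material
    # alone says "lost for White", even if no breakthrough exists.
    print(material_eval(["K", "P", "P", "P", "P"],
                        ["K", "Q", "R", "R"]))  # -15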

However, it appears to me that nothing about this example implies that a better chess-playing

program could not be written. Already, machine learning (ML) demonstrates how it’s possible

for computers to solve problems which were previously deemed the sole domain of human

intuition, such as image and speech recognition. In fact, the very concepts of machine learning

and heuristics are closely connected, as they share characteristics such as finding approximate

solutions to problems (though this is not always the case with ML).

In conclusion, Penrose’s use of the Polyomino and AI examples seems to show a potential

misunderstanding of the implications of undecidability on one hand, and a rather rigid view of

computation on the other.

Conclusion

Sir Penrose certainly makes a very compelling and expertly crafted case. Rather than arguing

against computationalism by criticizing its various formulations, he attempts to strike at its very

foundations, constructing a logic-based and seemingly airtight counter-argument using Gödel’s

incompleteness theorems as a basis.

This is where his pedigree as a mathematician is made manifest: he wants to prove the

impossibility of the computationalist hypothesis and not merely cast a shadow of doubt over it.

However, such a rigid and logical approach also represents the greatest weakness of his entire

argument, because if even just one of his premises can be shown to be false or even questionable,

then everything that follows automatically collapses. Over the course of my analysis, I believe I

have identified some of these vulnerabilities. My purpose was not to demonstrate that he is

wrong, but merely to show that it might be premature to accept his claims.

Why is this important? Mainly for two reasons.

Firstly, because if his hypothesis is accepted as true without proper demonstration, it could steer

future research in the field of AI and cognitive sciences in the wrong direction.

Computationalism, especially strong computationalism, makes some very strong and equally unsubstantiated claims about the nature of the mind, and while I am not ideologically affiliated with it myself, I believe there is much potential value in applying a computational paradigm to the

study of the mind.

Secondly, Penrose’s argument highlights another important question worth considering

whenever one sets about studying such a complex and elusive phenomenon as the mind.

Am I looking at it from the right angle?

As I pointed out in section 2 (The Emergent Mind), it can be argued that it might not be too

useful to examine high-level human thinking in the context of mere evaluations of logical

propositions. I suspect that Penrose might have approached the issue of the mind with the

presupposition that if there is a path to the truth, it must necessarily come in the form of

mathematical proof.

This bias in favor of mathematics is entirely understandable, given his profession, but even

Stephen Hawking seemed to have noticed Penrose’s penchant for the abstract as shown by the

following comment (Hawking & Penrose, 2000):

“I take the positivist viewpoint that a physical theory is just a mathematical model and

that it is meaningless to ask whether it corresponds to reality. All that one can ask is that

its predictions should be in agreement with observation. I think Roger is a Platonist at

heart but he must answer for himself.”

Of course, we all possess internal biases, but I believe it’s critical to be aware of them and be

able to recognize when they might be influencing a formal argument.

Ultimately, Penrose or others might still be able to prove that there is more to the mind’s inner

workings than just mere deterministic mechanisms. However, should they eventually succeed, I

believe it will be through rigorous experimentation, rather than pure deductive reasoning.

References

• Berger, R. (1966), The Undecidability of the Domino Problem (No. 66), American Mathematical Society

• Center for Consciousness Studies, "Sir Roger Penrose - How can Consciousness Arise Within the Laws of Physics?", retrieved on August 10th, 2018 at: https://youtu.be/h_VeDKVG7e0

• Chalmers, D. J. (1995), Facing up to the problem of consciousness, Journal of Consciousness Studies

• Emmeche, C., Køppe, S., & Stjernfelt, F. (1997), Explaining emergence: towards an ontology of levels, Journal for General Philosophy of Science

• Gödel, K. (1931), Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I, Monatshefte für Mathematik und Physik, v. 38, n. 1, pp. 173–198

• Golomb, S. W. (1970), Tiling with sets of polyominoes, Journal of Combinatorial Theory

• Hawking, S., & Penrose, R. (2000), The Nature of Space and Time, ch. 1, Princeton University Press

• "Intelligenza Artificiale vs. Intelligenza Naturale" ("Artificial Intelligence vs. Natural Intelligence"), https://communitasonline.org/

• Ollinger, N. (2009), Tiling the plane with a fixed number of polyominoes, in International Conference on Language and Automata Theory and Applications, Springer, Berlin, Heidelberg

• Penrose, R. (1994), Shadows of the Mind (Vol. 4), Oxford: Oxford University Press

• Rice, H. G. (1953), Classes of Recursively Enumerable Sets and Their Decision Problems, Transactions of the American Mathematical Society

• Searle, J. R. (1980), Minds, brains, and programs, Behavioral and Brain Sciences

• Tversky, A., & Kahneman, D. (1973), Availability: A heuristic for judging frequency and probability, Cognitive Psychology
