e-mail: luca.resti@mail.polimi.it
Abstract
The purpose of this paper is to critique mathematician Sir Roger Penrose's position on the nature of intelligence. Specifically, I shall examine part of what is commonly known as the Lucas-Penrose argument as expressed by Penrose in his book Shadows of the Mind, as well as related assertions he has made in his recent public appearances on the topic of human intelligence and artificial intelligence (AI). I do not claim to conclusively disprove these arguments, but merely to point out several inconsistencies and oversights, which lead me to be skeptical of their conclusion that human mental processes cannot be described in terms of computations. These issues include what I regard as unsupported assumptions about the soundness of the human mind.
I would like to point out that, despite my critique, I have the utmost respect for Penrose's important contributions to science. Lastly, it's important to note that Penrose draws several interesting connections between the mind and quantum mechanics in his writings. Responding to such technical claims goes beyond the scope of my knowledge and of this paper, whose focus is on the premises of the proposed arguments, which fortunately make no appeal to the odd world of particle physics.
Introduction
Can a machine that operates according to precise, deterministic rules capture the essence of the
human mind?
Penrose's answer is a firm no. His point is not that today's technology is inadequate, but that, as a result of his extension of Gödel's Incompleteness Theorems (Gödel, 1931), machines won't ever be equivalent to a mind.
But is it actually reasonable to compare the high-level thought processes of the human mind to the operations of a formal system? Can we find a different approach that more closely resembles what we do know about the mind?
I try to respond to these questions in the first two sections of the paper, providing what I consider
to be a more reasonable way of thinking about the relationship between mental processes and
computation.
In the last section, I focus on the contents of some of Sir Penrose’s recent public lectures, in
which he supports his hypothesis that the mind cannot be reduced to a computational process.
He constructs an argument for the claim that human minds can solve tasks that are impossible for machines. In order to prove this, he argues that the undecidable Polyomino Tiling Problem can easily be solved by humans, just like a particular chess configuration that chess-playing programs allegedly fail to solve.
I believe I have identified some critical issues in his arguments, and I cite several results from the field of theoretical computer science in order to show how the mind's processes might in fact not lie beyond the reach of computation. Penrose frames his case as a rigorous syllogism, but I will attempt to show that such an approach exposes him to many potential mistakes, which are the result of deep-rooted and unproven presuppositions about the nature of the mind.
The basis for Penrose's argument is Kurt Gödel's Incompleteness Theorems (GIT), which may be summarized as follows:
I. Any consistent formal system S powerful enough to express basic arithmetic contains at least one true statement, known as a Gödel sentence (GS), that can neither be proved nor disproved within S
II. If a formal system S is consistent, then it is unable to prove its own consistency
These theorems have been extremely influential in the field of theoretical computer science for defining the limits of what a computer, or more generally a Turing machine, can do, by showing that sufficiently expressive formal systems are inevitably incomplete.
Penrose, however, takes this idea further in his 1994 book Shadows of the Mind and formulates a novel argument against the computational view of the mind.
He begins with the presupposition that human minds intuitively know themselves to be sound. He then states that if the human mind could be described as a formal system, then [as per Gödel's first theorem] there would exist a Gödel sentence that the system could not prove, and hence could not recognize as true. However, since the mind is sound according to Penrose, it would have to recognize the truth-value of GS and thus be in contradiction with GIT. His conclusion is that GIT cannot be applied to the mind, and therefore the mind is not a formal system.
This type of argument is extremely attractive to anyone aiming to disprove the Computational Theory of Mind (CTM), as it tries to invalidate its very core: if a mind cannot be described as a purely algorithmic system, then the computationalist hypothesis that human intelligence is a form of computation collapses.
More specifically, following his adaptation of GIT, Penrose concludes that mathematical thinking in humans is unique:
"The inescapable conclusion seems to be: Mathematicians are not using a knowably sound procedure in order to ascertain mathematical truth."
(Penrose, 1994)
Indeed, this would be a reasonable conclusion to reach if the premises that led to it were well-founded; however, hidden in Penrose's argument lie many unsupported assumptions about human beings and the mind. The most critical assumption seems to be that a human mind would be able to recognize its own soundness.
Informally speaking, a system is sound when it's based only on true premises which always lead to valid and inevitable conclusions. The claim that a human mind may be able to correctly see the soundness of its own inner workings seems to me a very strong one and deserving of a convincing demonstration. If an individual wanted to know whether her mind is sound, she would have little more than her own intuitive conviction to rely on.
One can obviously become convinced of the truth of a given statement even without any formal
proof, but this fact alone has no bearing on the actual truth value of said statement.
After all, we can just as easily be fooled by our senses when trying to make sense of the world
around us, as is often the case any time we witness a magic trick being performed or are
confronted with optical illusions. Indeed, even a flawed mathematical theorem can sometimes appear convincing until an error in its proof is discovered.
This is clearly problematic for Penrose's Gödelian argument, since it relies on human understanding being sound. In my opinion, the fact that Penrose provides no formal justification for this premise is telling. I am convinced that the moment to believe a noteworthy claim is after it has been demonstrated to be true. Penrose has to meet his burden of proof and demonstrate that the mind is, in fact, sound.
As a result, I call into question any conclusions that may follow from this unfounded premise.
An alternative criticism of this premise was offered by Chalmers (1995), who argued that it results in a paradox.
Before starting this section of the analysis, I would like to clarify my position on the following question: could an appropriately programmed computer not only act like a brain, but actually be equivalent to a brain?
This is the position of "Strong AI" as defined by John Searle in his influential 1980 paper "Minds, Brains, and Programs":
“[…] according to strong AI, the computer is not merely a tool in the study of the mind;
rather, the appropriately programmed computer really is a mind, in the sense that
computers given the right programs can be literally said to understand and have other
cognitive states.”
I fully subscribe to this definition, as well as to Searle's conclusion that no purely formal model is by itself sufficient for duplicating, rather than merely simulating, the causal properties of a mind.
One might wonder whether this position conflicts with Penrose's. It does not, because his claim is that human understanding cannot be a deterministic computational process, whereas Searle's claim is that the processes which give rise to intentionality must be related to the chemical and physical properties of the brain; a position I happen to agree with. I believe that these physical processes may still be described and studied in computational terms, with the hope that, once understood, we might explain how they produce intelligence and consciousness.
Penrose's Gödelian argument looks at the matter from a different angle, by focusing on the mind's high-level operations rather than on their physical substrate.
This strategy has a critical weakness, in my opinion: the implication that high-level human understanding works like the formal evaluation of logical propositions.
I define high-level understanding as our experience of intuitively recognizing the truth value of certain basic facts or propositions about the world, such as accepting that 1 plus 1 equals 2. Penrose concentrates on mathematical insight as the paradigmatic form of high-level understanding, but I believe that the following considerations can easily be extended to other types of intuitions. For the sake of argument, let's accept his previously described Gödelian hypothesis that the functions of the human mind cannot be entirely captured by any formal system. From this, he infers that "Mathematicians are not using a knowably sound procedure in order to ascertain mathematical truth" (Penrose, 1994).
This line of reasoning revolves around high-level human thought processes, rather than their underlying causes; it is the opposite of the bottom-up approach described above.
In other words, Penrose might be looking at the phenomenon of human thought from the wrong angle, which could make it impossible to get any closer to the truth, much like a doctor trying to cure her patient's chronic headache by focusing only on the symptoms rather than on their underlying cause.
I propose instead that high-level, intuitive human understanding can be viewed as an emergent
phenomenon.
A system possesses an emergent property P if P does not depend on any single part of the system, but rather results from all parts working together. For instance, wetness is an attribute we assign to water, and we know water to be made of many H2O molecules. Yet we cannot say that any single molecule of water is itself wet; only when we take many of them together does the property of wetness appear.
I believe there are good reasons to suspect the human mind to be exactly this type of system, and
that features such as intelligence and consciousness are, in fact, emergent. As such, intelligence and consciousness need not be computational processes themselves, even if the parts of the underlying system from which they "emerge" (neurons, synapses, etc.) behave computationally.
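The intuition that global behavior can outrun any single component is easy to demonstrate computationally. The sketch below is my own illustration, not drawn from Penrose or Emmeche: it implements Conway's Game of Life, in which every cell obeys one fixed, deterministic local rule, yet the grid as a whole produces patterns that no individual cell encodes.

```python
# Conway's Game of Life: each cell follows the same simple, deterministic
# local rule, yet the grid as a whole exhibits patterns (oscillators,
# gliders) that no single cell "contains" -- a toy model of an emergent
# property arising from computational parts.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (row, col) live cells."""
    # Count, for every cell on the board, how many live neighbours it has.
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation with exactly 3 live neighbours,
    # or with 2 live neighbours if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker": three cells in a row oscillate between horizontal
# and vertical orientation with period 2.
blinker = {(1, 0), (1, 1), (1, 2)}
```

Applying `step` twice to the blinker returns it to its starting shape: a stable, global rhythm produced purely by local computational rules.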
The value of emergence in science is perhaps best described by theoretical biologist Claus Emmeche in the introduction to his paper on the subject (Emmeche et al., 1997):
"[…] philosophy of science: On the one hand, many scientists and philosophers regard emergence as having only a pseudo-scientific status. On the other hand, new developments in physics, biology, psychology, and crossdisciplinary fields such as cognitive science, artificial life, and the study of non-linear dynamical systems have focused strongly on the high level 'collective behaviour' of complex systems, which is often said to be truly emergent, and the term is increasingly used to characterize such systems."
Whether this line of thinking ever leads to important breakthroughs in the study of human intelligence remains to be seen, but I think it's a paradigm in line with what is often done in science when studying complex phenomena: breaking them down and studying them in terms of their simpler constituent parts.
On May 12th, 2018, I attended a debate between Sir Roger Penrose and philosopher Emanuele Severino that took place in Milan. The topic of the day was "Artificial Intelligence vs. Human Intelligence", and Penrose offered a presentation with some accompanying slides, laying out his case against a computational view of the mind.
His talk was largely identical to one he had given in 2017 at the Center for Consciousness Studies, where he used the same slides and arguments. There is a publicly available video recording of this 2017 talk, which I reference throughout this chapter.
In this presentation, Penrose tries to deconstruct computationalism from an angle that is different from his Gödelian argument. He makes his case using two separate but related examples: the Polyomino Tiling Problem and chess-playing AI.
The Polyomino Tiling Problem asks whether a given finite set of polyominoes can tile the entire plane. It can be considered a variation of the Domino problem, which was proven to be undecidable by Robert Berger (Berger, 1966); by reduction, so was the Polyomino Problem (Golomb, 1970). In essence, this means that there exists no algorithm that can always determine in a finite amount of time whether a given set of polyominoes is or isn't capable of covering the plane.
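It is worth being precise about what undecidability does and does not rule out: it concerns tiling the infinite plane, while for any finite region the question can be settled by plain exhaustive search. The sketch below is my own illustration of that distinction, not Penrose's construction, and the piece definitions are hypothetical examples: it backtracks over placements of a polyomino set inside a bounded rectangle.

```python
# Brute-force bounded tiler: undecidability of the Polyomino Tiling Problem
# concerns the *infinite* plane; for a finite rows x cols box, exhaustive
# backtracking always terminates with a yes/no answer.
# Pieces are given as collections of (row, col) cells; they may be reused
# and rotated.

def normalize(cells):
    """Shift a shape so its first cell in row-major order sits at (0, 0)."""
    cells = sorted(cells)
    r0, c0 = cells[0]
    return tuple((r - r0, c - c0) for r, c in cells)

def rotations(cells):
    """All distinct 90-degree rotations of a shape."""
    shapes, cur = set(), list(cells)
    for _ in range(4):
        shapes.add(normalize(cur))
        cur = [(c, -r) for r, c in cur]  # rotate 90 degrees
    return shapes

def can_tile(pieces, rows, cols):
    """Can copies of the given polyominoes exactly cover a rows x cols box?"""
    shapes = [s for p in pieces for s in rotations(p)]
    grid = [[False] * cols for _ in range(rows)]

    def first_empty():
        for r in range(rows):
            for c in range(cols):
                if not grid[r][c]:
                    return r, c
        return None

    def solve():
        spot = first_empty()
        if spot is None:
            return True  # every cell covered
        r0, c0 = spot
        # The piece covering the first empty cell must anchor its
        # row-major-first cell there, so trying each shape here is complete.
        for shape in shapes:
            coords = [(r0 + dr, c0 + dc) for dr, dc in shape]
            if all(0 <= r < rows and 0 <= c < cols and not grid[r][c]
                   for r, c in coords):
                for r, c in coords:
                    grid[r][c] = True
                if solve():
                    return True
                for r, c in coords:
                    grid[r][c] = False  # backtrack
        return False

    return solve()

# Example pieces (my own, for illustration):
domino = [(0, 0), (0, 1)]
l_tromino = [(0, 0), (1, 0), (1, 1)]
```

For instance, a 1×2 domino tiles a 2×2 square; a single L-tromino cannot (its area does not divide 4), but two of them do tile a 2×3 rectangle.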
According to Penrose, because humans are capable of finding sets that satisfy the required conditions just by using logic and intuition, the Polyomino Tiling Problem is an example of a task that minds can carry out but machines cannot.
This conclusion does not follow. The fact that the problem is not decidable only means that there exists no algorithm that always and reliably finds an answer. However, even undecidable problems admit instances which can be solved through heuristics in the form of programs ultimately executed by computers. Unlike an algorithm, a heuristic is not bound by the stringent requirement of always finding a solution and – if it does find one – it's not guaranteed to be optimal; it merely provides a good method for obtaining acceptable answers in many practical cases.
In fact, there are several heuristics that have been proposed for certain undecidable problems.
One prominent example is found in the field of Static Program Analysis (SPA), which is the practice of reasoning about a program's behavior by examining its source code without executing it. SPA is a particularly significant case, because it's the quintessential example of the kind of problem Rice's Theorem (Rice, 1953) has shown to be undecidable.
I am stressing the well-documented nature of SPA’s undecidability to show that I haven’t simply
cherry-picked an obscure, badly researched case to argue against Penrose’s claim: the fact that
SPA poses an undecidable problem which can be tackled reasonably well with heuristics is a
foundational finding in the field of theoretical computer science. The takeaway is that the
undecidability of a problem does not imply it being outside of the domain of computation.
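A toy example can make this trade-off concrete. Exactly determining whether a program can divide by zero is undecidable per Rice's Theorem, but a heuristic can still catch some cases. The sketch below is my own illustration, not a real SPA tool: it pattern-matches Python source for divisions whose right operand is a literal zero, and deliberately misses divisions by variables that happen to be zero, which is exactly the incompleteness a heuristic accepts.

```python
# A toy static analyzer: an exact "can this program divide by zero?"
# checker is impossible in general (Rice's Theorem), but this heuristic
# still flags the obvious cases by inspecting the AST without running
# the code.
import ast

def may_divide_by_zero(source):
    """Heuristically flag divisions whose right operand is the literal 0.

    Incomplete by design: a division by a *variable* holding zero is
    not detected -- the price a heuristic pays for always terminating.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div):
            right = node.right
            if isinstance(right, ast.Constant) and right.value == 0:
                return True
    return False
```

The checker flags `y = 1 / 0` but not `z = 0` followed by `y = 1 / z`: a false negative that no exact analyzer could eliminate in general, yet the heuristic remains useful and entirely computational.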
Given this, wouldn't it be possible that the underlying computational dynamics of the brain are themselves employing heuristics?
If that were true, then the example provided by Penrose would not constitute an instance of something a mind can do that a machine can't. Naturally, it's beyond the scope of this paper to demonstrate that the brain actually uses heuristics for problem solving.
However, ongoing research in the field of cognitive psychology suggests that heuristics might play an important role in human decision-making; one example is the so-called Availability Heuristic (Tversky & Kahneman, 1973), alongside many others which I am unable to cover in detail in this analysis.
A similar argument can be made in response to the second example Penrose gives: chess-playing AI. He shows a slide depicting a chess board populated with pieces in a specific configuration which, he claims, chess programs misjudge while human players can easily find the correct continuation.
However, it appears to me that nothing about this example implies that a better chess-playing program could not be written. Already, machine learning (ML) demonstrates how it's possible for computers to solve problems which were previously deemed the sole domain of human intuition, such as image and speech recognition. In fact, the very concepts of machine learning and heuristics are closely connected, as they share characteristics such as settling for approximate solutions to problems (though this is not always the case with ML).
In conclusion, Penrose's use of the Polyomino and chess examples seems to show a potential misunderstanding of the implications of undecidability on one hand, and a rather rigid view of what machines can do on the other.
Conclusion
Sir Roger Penrose certainly makes a very compelling and expertly crafted case. Rather than arguing against computationalism by criticizing its various formulations, he attempts to strike at its very core.
This is where his pedigree as a mathematician is made manifest: he wants to prove the impossibility of the computationalist hypothesis, and not merely cast a shadow of doubt over it.
However, such a rigid and logical approach also represents the greatest weakness of his entire
argument, because if even just one of his premises can be shown to be false or even questionable,
then everything that follows automatically collapses. Over the course of my analysis, I believe I
have identified some of these vulnerabilities. My purpose was not to demonstrate that he is
wrong, but merely to show that it might be premature to accept his claims.
Firstly, because if his hypothesis is accepted as true without proper demonstration, it could steer future research in the field of AI and cognitive sciences in the wrong direction. Computationalism, especially strong computationalism, makes some very strong and equally unsubstantiated claims on the nature of the mind, and while I am not ideologically affiliated with it myself, I believe there is much potential value in applying a computational paradigm to the study of the mind. Secondly, because caution is warranted whenever one sets about studying such a complex and elusive phenomenon as the mind.
As I pointed out in section 2 (The Soundness Problem), it can be argued that it might not be too
useful to examine high-level human thinking in the context of mere evaluations of logical
propositions. I suspect that Penrose might have approached the issue of the mind with the
presupposition that if there is a path to the truth, it must necessarily come in the form of
mathematical proof.
This bias in favor of mathematics is entirely understandable, given his profession, but even Stephen Hawking seemed to have noticed Penrose's penchant for the abstract, as shown by the following passage (Hawking & Penrose, 2000):
"I take the positivist viewpoint that a physical theory is just a mathematical model and that it is meaningless to ask whether it corresponds to reality. All that one can ask is that its predictions should be in agreement with observation."
Of course, we all possess internal biases, but I believe it's critical to be aware of them and to keep them in check.
Ultimately, Penrose or others might still be able to prove that there is more to the mind’s inner
workings than just mere deterministic mechanisms. However, should they eventually succeed, I
believe it will be through rigorous experimentation, rather than pure deductive reasoning.
References
• Berger, R. (1966), The Undecidability of the Domino Problem (Memoirs of the American Mathematical Society, No. 66), American Mathematical Society
• Center for Consciousness Studies (2017), Sir Roger Penrose - How can Consciousness Arise Within the Laws of Physics?, public video recording of the talk
• Chalmers, D. J. (1995), Minds, Machines, and Mathematics, Psyche, 2
• Emmeche, C., Køppe, S., & Stjernfelt, F. (1997), Explaining emergence: towards an ontology of levels, Journal for General Philosophy of Science, 28(1)
• Golomb, S. W. (1970), Tiling with sets of polyominoes, Journal of Combinatorial Theory, 9
• Hawking, S., & Penrose, R. (2000), The Nature of Space and Time, ch. 1, Princeton University Press
• Gödel, K. (1931), Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I, Monatshefte für Mathematik und Physik, 38, 173-198
• Ollinger, N. (2009), Tiling the plane with a fixed number of polyominoes, in International Conference on Language and Automata Theory and Applications, Springer, Berlin, Heidelberg
• Penrose, R. (1994), Shadows of the Mind (Vol. 4), Oxford: Oxford University Press
• Rice, H. G. (1953), Classes of Recursively Enumerable Sets and Their Decision Problems, Transactions of the American Mathematical Society, 74(2), 358-366
• Searle, J. R. (1980), Minds, brains, and programs, Behavioral and Brain Sciences, 3(3), 417-424
• Tversky, A., & Kahneman, D. (1973), Availability: A heuristic for judging frequency and probability, Cognitive Psychology, 5(2), 207-232