
PROPOSITIONAL LOGIC

– An Introduction

GUY DAVIES
LOVE EKENBERG
JOHAN THORBIÖRNSON
The authors, in brief
Guy Davies holds a Ph.D. in computer science from the KTH Royal Institute
of Technology, Stockholm, and is also affiliated with Stockholm University.
Love Ekenberg holds a Ph.D. in computer science and a Ph.D. in mathemat-
ics from Stockholm University and is full Professor at Stockholm University,
KTH Royal Institute of Technology as well as Mid-Sweden University.
Johan Thorbiörnson holds a Ph.D. in mathematics and is Associate Profes-
sor in Mathematics at the Royal Institute of Technology, Stockholm, as well
as Director of Resource Centre for Net-Based Education at the KTH Royal
Institute of Technology.

Supplementary materials are available at:


http://sites.google.com/site/logicbasicsbeyond/

Other works by the same authors:


Davies-Ekenberg-Thorbiörnson: Logic - Basics and Beyond, ISBN 978–91–
978450–1–4

Edition 1, first print run, 2009


© 2009 Guy Davies, Love Ekenberg, Johan Thorbiörnson

SINE METU, Valhallavägen 82, Stockholm, Sweden, www.sinemetu.se

Order and information: www.sinemetu.se


ISBN 978–91–978450–2–1
Attribution–Noncommercial–No Derivative Works 2.5 Sweden
http://creativecommons.org/licenses/by-nc-nd/2.5/se/deed.en

You are free:

to Share – to copy, distribute and transmit the work

Under the following conditions:


Attribution – You must attribute the work to the authors
Guy Davies, Love Ekenberg, Johan Thorbiörnson with link to
http://www.sinemetu.se (including reference to license terms in the
manner specified in the notice below).

Noncommercial – You may not use this work for commercial pur-
poses.

No Derivative Works – You may not alter, transform, or build
upon this work.

With the understanding that:


Waiver – Any of the above conditions can be waived if you get permission from
the copyright holder.
Other Rights – In no way are any of the following rights affected by the license:
• Your fair dealing or fair use rights;
• The author’s moral rights;
• Rights other persons may have either in the work itself or in how the work
is used, such as publicity or privacy rights.
Notice – For any reuse or distribution, you must make clear to others the license
terms of this work. The best way to do this is with a link to this web page:
http://creativecommons.org/licenses/by-nc-nd/2.5/se/deed.en
This is a human-readable summary of the Legal Code. For the full license, see
http://creativecommons.org/licenses/by-nc-nd/2.5/se/legalcode
http://creativecommons.org/licenses/by-nc-nd/2.5/legalcode
Preface

Purpose
This small book in propositional logic is written for everybody who would like
to be introduced to that realm of human cognition that most differentiates
us from all other creatures we know of in the universe. It is written to touch
upon this faculty as the ultimate embodiment of thought in an age when the
superficial and irrational is not only rife, but has become a dominating cultural
expression of intellectual laziness. The danger is that this culture presages the
degeneration of civilisation into a society where unschooled minds appreciate
the opinions of celebrity ignoramuses, military bullies, and the values of pop-up
religions more than academia's systematically scrutinised wisdom. Such a
culture stimulates the emergence of narrow-mindedness, prejudice, and
foolishness, feeding totalitarianism and oppression.
We hereby join the battle that may ensure a tolerable existence for future
generations, by expounding the virtues of critical thinking in its purest embod-
iment. We hope that this book will show how deeper meaning lurks beneath
the skin of every man and woman. Anyone can harness the tools that can
empower us to escape the relentless sea of puerile mass-culture. Allow us to
persuade you with the allure of truth and the means to discover it.

Audience
The book is for anybody who wishes to learn more of our innate faculty of
reason. In practice the book can be used for basic undergraduate studies in
logic, while still observing both a formal and a philosophical perspective. For
more advanced studies, with a special focus on applications in systems science,
we strongly recommend Logic – Basics and Beyond, by the same authors. Read
this as intellectual ‘entertainment’ with a view to glimpsing the power and
beauty of thought or simply to understand the formal culmination of centuries
of cultural history.

Free use and printing


This book is free to copy and distribute for noncommercial use. Details are
given with reference to Creative Commons Licence in the cover page of the
book.
For those who want to read the material in a printed version, a significantly
extended book by the same authors, Logic – Basics and Beyond, ISBN 978-91-
978450-1-4, www.sinemetu.se, is available in print. That book not only includes
the chapters from Propositional Logic – an introduction but also covers
propositional and predicate logic, set theory and functions, complexity theory
and algorithm analysis as well as modal logic and model theory. It can be used
for basic and also for more advanced studies in logic with a special focus on
applications in systems science. The material there is sufficient for 2–3 courses
in logic and can be divided into two basic parts and a more advanced part.
There you will also find a large number of exercises together with solutions
to many selected problems.

Courses
The material is sufficient for a very basic course in elementary logic. The book
Logic – Basics and Beyond by the same authors provides considerably more
material for further studies in logic.

Reading
Reading order is fairly straightforward. It is basically a brief overview of basic
concepts and methods of classical propositional logic. Texts in logic sometimes
become very technical and the natural beauty and usability of it is then lost.
We have tried to avoid that by providing a substantial intuition for the issues
involved. We also include some important meta-logical perspectives.

Maths
Some of the sections in the book assume some acquaintance with pre-university
mathematics, but these are mostly illustrations rather than an integral part of
the logic, and are not necessary in order to assimilate the main ideas of the
book. Mathematical sections, especially those marked with an asterisk, can be
passed over without loss of understanding of the material as a whole.

Reading strategies
The book contains a large number of exercises and answers. Solutions as well
as additional material are to be found on the web page

http://sites.google.com/site/logicbasicsbeyond/

We recommend that the reader first try to find the solution independently
before looking it up. Effort, even when unsuccessful, focuses attention on the
key difficulties. Do not look at the solutions too early; encourage your own
thinking, rather than ... that’s right ... instant gratification.

Acknowledgments
The authors are insignificant amoebas living in the context of intellectual
giants. We would especially like to extend our gratitude to the following people
who have been of great significance in the creation of this book: professors,

doctors, inspirers, and friends ... our families, Veselka Boeva, Torkel Franzén,
Paul Johannesson, Per-Erik Malmnäs, Thomas Oakland, Elisabeth Ohlson
Wallin, Petra Östergren, Vide Jansson, Lars Asker, Karl Karlander, Fidel,
Ernesto, Selima, Google, Kazuo Koike, and the Cheshire Cat.

Go forth
Enjoy your adventure into logic, and all the activities that will distract you
along the way.

Contents

Preface 5
Chapter 1. Introduction 10
1. The History of Logic 10
2. So What is Logic? 12
3. Content Disposition 16
Chapter 2. Introduction to Sentence Logic 18
1. Negation, implication and equivalence 20
2. The Connectives and, or 27
3. False Hypotheses and Bogus Solutions* 32
Chapter 3. The Language SL 40
1. Alphabet 41
2. The Syntax for Sentences in SL 41
3. The Meaning of a Sentence 43
4. The Expressive Power of Connectives 50
5. The Semantics of SL 52
6. Information Content of a Sentence 55
Chapter 4. Deductions and Arguments 58
1. Logical Consequence 59
2. Incomplete Arguments 66
3. Some Important Logical Relationships 71
Chapter 5. Rule Systems 75
1. Axiomatic Systems 76
2. Semantic Tableaux 80
3. The Resolution Method 89
4. Conjunctive Normal Form 89
5. Deductions with the Resolution Method 93
6. Natural Deduction 98
7. A note on sequent calculus * 107

Chapter 6. Soundness and Completeness 111
1. Soundness and Completeness for sentence logic 112
Solutions to Exercises 121
Index 122

CHAPTER 1

Introduction
Most rational lines of thought and calculations build on certain more or less
clearly expressed assumptions and conditions. When reasoning or arguing in
a way that can be called rational with the purpose of persuading others about
the plausibility of an argument, it is important to be able to see the intercon-
nection between assumptions and conditions in order to be able to assess what
conclusions these can lead to. It is also important to be able to recognise an
incorrect argument, and to understand what rational reasoning actually can
tell us.
In this book we will be analysing some of the fundamental types of state-
ments and rules that are used in rational argument. We will also introduce
certain symbols that are commonly used when studying statements and con-
texts that arguments can contain. Concepts are generally introduced in an
informal way first, offering the gist or intuition of the ideas, before these are
formally presented.

1. The History of Logic


Inquiry into methods of formal deduction is called logic. Logic has been
studied since antiquity (by Aristotle and others), but has really only been
intensively developed during the 20th century. Logic gained a new role after
interest arose in constructing a formal system that could support attempts to
prove mathematical theorems in a way that would be absolutely irrefutable.
Mathematical proofs are often taken for granted, but what actually charac-
terises them? G. H. Hardy wrote that: “A mathematical proof should resemble
a simple and clear-cut constellation, not a scattered cluster in the Milky Way”1
Irrespective of whether or not this is helpful, it is in no way obvious what a
mathematical argument should look like in order to serve as a proof. During the
latter part of the 19th century and during the first decades of the 20th, many
people thought that by developing a formal calculus in order to prove theorems,
it would be possible to provide a more precise meaning to the concept of proof.
The first to develop so-called symbolic logic to this end were G. Boole² and
A. de Morgan³.

¹ G.H. Hardy, A Mathematician’s Apology, Cambridge University Press, 1992.
The first comprehensive description of a logical system arrived with the
publication of Gottlob Frege’s Begriffsschrift⁴. Frege later expanded those
methods and axioms in his work Grundgesetze der Arithmetik 5 with the ex-
pressed purpose of providing mathematics with a formal foundation. A partial
motivation for this was that mathematics had broken free from its direct ba-
sis in physical reality as exemplified by non-Euclidean geometry where results
were difficult to verify in any concrete way without a clear cut notion of proof.
Just as Frege was about to publish his result, the philosopher Bertrand Rus-
sell demonstrated through the so-called Russell paradox, that Frege’s system
contradicted itself. A very unhappy Frege stated in a final commentary to his
second edition of Grundgesetze der Arithmetik that Russell’s critique was cor-
rect6 . Even if this was a personal tragedy for Frege, who had thereby failed
to achieve his primary ambition, his work marks the birth of modern logic.
Some years after Frege’s work, Russell together with Alfred North Whitehead,
published an alternative system in Principia Mathematica 7 . This however, has
been considered by many to be too artificial a system and of marginal interest.
The most influential school was the formalists which it could be said the
work of David Hilbert lead to. The formalists envisioned two things. The
first was to construct a system of axioms and deduction rules from which, by
purely formal means, mathematical truths could be proved. The second was,
by intuitively irrefutable methods8 to prove that their system was itself free
from internal contradictions. If this had been possible to do, it would also
have shown in a specific way that mathematics was free from contradictions
and that mathematical results are valid. The formalists devoted themselves
to these activities for a number of years until in 1931, Kurt Gödel presented
his famous work.⁹

² G. Boole, An Investigation of the Laws of Thought, Cambridge, Macmillan and Co, 1854.
³ A. de Morgan, Formal Logic, or the Calculus of Inference, Taylor and Walton, 1847.
⁴ G. Frege, Begriffsschrift, eine der Arithmetischen Nachgebildete Formelsprache des Reinen
Denkens, Nebert, Halle, 1879.
⁵ G. Frege, Grundgesetze der Arithmetik, Begriffsschriftlich Abgeleitet, vol. 1, H. Pohle, Jena,
1893.
⁶ It should be emphasized that Russell’s paradox does not mean that mathematics is incon-
sistent, only that Frege’s attempt to axiomatise parts of it was inconsistent.
⁷ A.N. Whitehead and B. Russell, Principia Mathematica, Vol. 1–3, University Press, Cam-
bridge, 1910–13.
⁸ These are usually, though somewhat incorrectly, referred to as meta-mathematical methods.

Gödel showed two things. First, that there are statements
that we consider to be true but which are not actually deducible from any of
the systems of the formalists. This dashed precisely one of the hopes most
essential to the formalists, who aspired to describe all mathematical reasoning
purely and formally. Gödel’s second result showed moreover that mathematics
could not be proved to be free from contradictions using those methods
the formalists were employing, thereby showing that their second aim was
impossible to achieve in the way they had envisioned.
Similarly to the way in which Frege’s work has had enormous influence
over the development of modern logic, the formalists’ work within meta-
mathematics has been tremendously fruitful, in spite of the fact that their
original intentions could not be fulfilled. The formalists’ work has laid the
foundations for a large number of important methods and results, such as
those of complexity theory, an area that has delimited the conditions for the
development of computers.¹⁰
It should also be noted that despite the comparatively short history of mod-
ern logic (as defined here), reasoning of a primarily formal nature has certainly
been used for over 2000 years and probably as long as mankind has possessed
language. A classical example is Euclid’s work Elementa (4th century B.C.) in
which theorems of geometry are deduced from basic axioms. Logical methods
are also used when scientific knowledge needs to be structured as well as in
everyday reasoning.

2. So What is Logic?
Characteristic of formal systems is that the study of them clearly distinguishes
between the form that expressions take and what this form actually means.
In simple terms you could say that in logic, language is studied as a system of
symbols that do not actually need meaning or interpretation. The important
thing is rather how different linguistic statements relate to each other – not
what the individual statements actually express. When examining the prop-
erties of formal languages it is usual to differentiate between, on the one hand,
the syntax of the language – the form that statements in the language may
assume – and on the other hand the semantics – the precise meaning of an
expression in the language.

⁹ K. Gödel, Über Formal Unentscheidbare Sätze der Principia Mathematica und Verwandter
Systeme I, Monatshefte für Mathematik und Physik, vol. 38, pp. 349–360, 1931.
¹⁰ Apart from those mentioned above there have been a large number of important logicians
during the 20th century. Some of the most influential were Luitzen Brouwer, Rudolf Carnap,
Alonzo Church, Gerhard Gentzen, Leon Henkin, Jacques Herbrand, Stephen Cole Kleene,
John von Neumann, Willard van Orman Quine, Thoralf Skolem, Alfred Tarski, and Alan
Turing.

Logic is therefore scientific enquiry into the properties of linguistic utterances,
the truth or falseness of which is independent of whatever interpretation
or whatever value you might choose to impose on the objects and variables
included in those utterances. The only property of interest is that of whether
a statement is “true” or “false”11 (which in this book will be represented with
the values 1 and 0 respectively). Interest is thus in the logical content of an
expression and its parts.
For example the sentence Castro is cuddly expresses a property of the cat
Castro which may be of interest for studying, say, biology. In logic however,
interest lies not in the detailed properties of the cat Castro, but rather in the
logical content sentences can have. In this light

(1.1) If Castro is a cat then he is cuddly

has the same logical content as

(1.2) If ”Castro is cuddly” is not true then neither is ”Castro is a cat”.

Note that we are not really bothered about whether Castro is a cat or
whether he is cuddly. Frankly speaking, we do not care much about Castro
from a logical perspective. We rather study the structural properties of the
sentence, i.e.

(1.3) IF Castro is a cat, THEN he is cuddly

There are a large number of logical languages that have been constructed for
various purposes, which are briefly described below.
The simplest form of logic is sentence logic, also known as propositional
logic. Propositional logic pertains, as its name implies, to assertive statements,
known as propositions. In particular, it is concerned with the relationships
between these propositions, which in sentence logic are expressed by connectives.
The most basic propositions, basic in the sense that they cannot be meaningfully
subdivided any further, are usually referred to as atoms. Atoms are com-
bined with connectives that often correspond with certain words in natural
languages like Portuguese, Amarinja or Hungarian. Common connectives used
are:

and
or
not
if ... then
if and only if

¹¹ It should be emphasized that this characterisation is simplified, and languages of logic
have been developed with considerably greater expressive power. For example the so-called
multi-valued logics leave room for more values than just ‘true’ and ‘false’, for instance
‘possibly true’, ‘possibly false’, ‘certainly false’, and ‘neither true nor false’.
Propositional logic examines assertions like
’If Castro is a cat then Castro eats fish’
Here ’Castro is a cat’ and ’Castro eats fish’ are atoms and ’if ... then’ is their
logical relationship or connective. A cat, indeed very similar to Castro, can be
seen in the figure below.

The expressive power of sentence logic however is limited and a richer lan-
guage is often needed. For example given the statements

’All cats are black’


and

’There is a black cat’

sentence logic would have to formulate each assertion as an atom. Nor can
sentence logic reveal anything of much interest about the relationship be-
tween these two statements. A richer language that offers such possibilities is
predicate logic. This logic can represent statements using predicates and
variables in a particular way.
In predicate logic the first statement can be represented as
’It holds for all x that if x is a cat then x is black’
the second statement can be written
’An x exists, such that x is a cat’
In predicate logic we can deduce that if both the statements are true then it
is also true that something black exists.
Even predicate logic’s expressive power is limited since it only uses terms
such as “it holds for all ... that” and “it holds for some ... that”. Sometimes
statements need expressing like
’It is possible that all cats are black’
or
’It is necessary that all cats are black’
The languages of logic that take care of these examples and other variations
are called modal logics. This is because the above examples cover the
degree of likelihood and requirement in the statements, otherwise known in
linguistic circles as modalities of expression. In a similar way we might want
to represent a statement like
’In 119 years’ time, all cats will be black’
In this case the logic must express the temporal modality of the natural lan-
guage. Languages of logic that cater to this are therefore called temporal
logics. In order to express
’Most cats are black’
higher order languages are needed. These operate not only on individual
objects or variables, but also on whole sets of objects. In simple terms, in order
to determine the validity of the expression above, count all objects that have
the property of being a cat, and compare with the tally of those that also have
the property of being black. Predicate logic does not offer any semantics
(system of meaning) for doing this, which however higher order languages do.

Many consider classical logic too limiting for representing common everyday
expressions. For this reason variants such as fuzzy logic have been invented.
The purpose of fuzzy logic is to be able to reason with vague expressions like
’All cats are fairly black’
or
’Many cats are black’
Here the concepts of ’fairly’ and ’many’ don’t have any exact meaning. None
the less people often use such expressions when reasoning. Fuzzy logic offers a
number of methods for dealing with inexact meaning.
The study of formal languages and deductions is also usually concerned with
demonstrating certain important aspects of the languages, such as whether
they are free from contradictions. Since the purpose of such languages is to
be able to express matters correctly, precisely and clearly, the methods used
to study such languages need similar properties. Therefore a so-called meta-
language is usually introduced in order to help investigate the primary object
of study, which is therefore usually referred to as the object language. The
field in which languages of logic are studied is thus referred to as meta-logic.
Two primary meta-logical concerns are whether the object language is sound
and complete. For a logic to be sound, everything that it can prove must be
true. In other words nothing false can be proved. To be complete, a logic must
be able to prove every truth that it can express. These notions will be dealt
with thoroughly in the chapter about soundness and completeness.

3. Content Disposition
The book introduces basic propositional logic and examines some essential
theories in conjunction with these. The introductory chapters deal with both
informal and formal syntax and semantics for sentence logic and some basic
meta-logical results. The first chapter looks at the syntax and semantics for
the language of sentence logic. Then the concepts of logical consequence,
provability and deducibility are introduced. Various types of deductions are laid
out and the relationship between them shown. In the final chapter a treatment
of propositional logic’s soundness and completeness can be found. Answers and
solutions to exercises can be downloaded from the internet; see the reference at
the end of the book.
For those who want to read the material in a printed version, a significantly
extended book by the same authors, Logic – Basics and Beyond, ISBN 978-
91-978450-1-4, www.sinemetu.se, is available in print. That book
includes the chapters from Propositional Logic – an introduction and can be
used not only for basic, but also for more advanced studies in logic with a
special focus on applications in systems science. It covers propositional and

predicate logic, set theory and functions, complexity theory and algorithm
analysis as well as modal logic and model theory. The material there is suffi-
cient for 2–3 courses in logic and can be divided into two basic parts and a more
advanced part. There you will also find a large number of exercises together
with solutions to many selected problems.
Each chapter begins with ’Learning Objectives’ and ’Concepts Covered’.
Learning objectives state the skills an applied reader should possess after work-
ing through the text and exercises. Each chapter finishes with a section to help
you ’Revise & Reflect’. The questions are usually at a fairly high level of ab-
straction and require a good understanding of important concepts in order to
be answered. They are designed to help you revise, evaluate and synthesize
your knowledge but also to help you identify the limits of your understanding
as well as to dispel common misunderstandings.

CHAPTER 2

Introduction to Sentence Logic


Learning Objectives
After working through this chapter you should be able to:
• represent propositions from everyday language using logic’s symbols
and structure with a translation legend.

Concepts covered
Drawing conclusions Connectives de Morgan’s laws
False hypotheses Argumentation Lexicon
Proposition Sentence Equivalence
Premiss Implication Transpositive
Conjunction Disjunction Negation

Consider two assertions that can be represented in sentence logic:


’Castro has a fish and a snake’
and
’If Castro has a fish then he howls of happiness’
Note that here the significance of the connectives for the logical form is
crucial in sentence logic. The connective of the first assertion is ’and’. The
second assertion has the form ’if ... then’. Now we can draw a conclusion from
the assertions above. If we assume that both of them are true, it follows by
reason from the first proposition that
’Castro has a fish’

The second assertion states that if Castro has a fish then he howls of happiness.
It therefore seems reasonable, given these assertions together, to conclude

’Castro howls of happiness’


The table below itemises characters denoting atomic propositions. These char-
acters are called propositional variables. A table like this is known as a
lexicon.
B : ’Castro has a fish’
O : ’Castro has a snake’
Y : ’Castro howls of happiness’
Using the lexicon, the first assertion of this chapter can now be written
as
B and O;
the second as
If B then Y ;
and the conclusion simply as
Y.
The line of reasoning above can now be written like this:

B and O (This was assumed to be true)


B (This follows by reason assuming that both B and O hold true)
If B then Y (This was also assumed to be true)
Y (Follows, since B holds and it holds that if B holds then Y does too)

Looking a little closer at the propositions in the example, it appears that
the exact content, or lexical reference, of the propositional variables B, O and
Y does not have any direct significance for the line of reasoning. What is
important is which connectives are used. If we use a different lexicon, the line
of reasoning still retains the same structure.
B : ’Rabbits like rabbits’
O : ’Rabbits are in a hurry’
Y : ’There are lots of rabbits’
The line of reasoning given this new lexicon is identical with that above and
from the assumption that both ’B and O’ and ’If B then Y ’ hold true, it
follows that Y holds true too, i.e.
’There are lots of rabbits’

The only thing that is important for the line of reasoning here is the form of
the assertions. Just to make this point about form very clear and distinguish it
from content, consider the following example which challenges normal intuition
because the statements do not correspond with what we normally believe
about the world. This also illustrates how logic can help us to arrive at truths
in conceptually contorted areas where intuition easily fails.

T : ’Trees feed fish’


F : ’Fish fly kites’
K : ’Kites and trees eat fish’
Using this lexicon, and given that ’T and F ’ and ’If T then K’ hold true, it
follows that K holds true too. In expanded form we reason, given that ’Trees
feed fish and fish fly kites’ and ’If fish fly kites then kites and trees eat fish’
hold true, it follows that ’Kites and trees eat fish’ holds true too. This is
not nearly as obvious. The pitfalls of trying to reason in a counter-intuitive
area or one alien to our direct experience, are a primary reason for relying
on logic. Imagine 50 assertions of this kind or even just 10, and the value of
a methodical approach that is independent of the asserted content becomes
clear. This separation of content from logical form is also the very essence of
what makes automated computation at all possible.
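As an aside – this is our own illustration and not part of the original text – the
reasoning above can be carried out mechanically in exactly this content-independent
way. The short Python sketch below (assuming only the standard library) runs through
every possible assignment of truth values to B, O and Y and confirms that whenever
'B and O' and 'If B then Y' are both true, Y is true as well, no matter what the
lexicon says the variables stand for.

    from itertools import product

    def implies(p, q):
        # The truth value of 'if p then q': false only when p is true and q is false.
        return (not p) or q

    for B, O, Y in product([True, False], repeat=3):
        if (B and O) and implies(B, Y):   # both assumptions hold under this assignment
            assert Y                      # ... and then the conclusion Y holds as well
    print("Y follows from 'B and O' and 'If B then Y' under every assignment.")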
Any rational line of reasoning uses some logical relationship between as-
sertions, as expressed by connectives, to help fill in gaps in our knowledge.
Rational reasoning is based on what we assume is true and what we consider
to be a logical argument. How we can reason using logic is determined by 1)
what we would like to know or show, 2) the connective structure of what we
already know, and 3) the logical rules of deduction.
We will now look more closely at what is meant by this. First we will look
at various aspects of the use of some common connectives and point out some
common mistakes people make when reasoning. This will then be formalised
through definitions and logic notation that together describe the syntax and
semantics of sentence logic.

1. Negation, implication and equivalence


One of the most common connectives is implication. It is used in assertions
of the form
’If Charley’s cat is naughty then Charley beats his cat’

This connective expresses a logical relationship between two shorter propositions
– namely the antecedent ’Charley’s cat is naughty’ and the consequent
’Charley beats his cat’. The relationship that the ’if ... then’ connective ex-
presses is that whenever the antecedent is true then the consequent must also
be true.
By this requirement and given that the whole proposition is true, it is im-
possible for Charley’s cat to be naughty and not be beaten by Charley. This
tells us, conversely, that if we know that Charley does not beat his cat then we
also know that his cat is not naughty. What the proposition does not tell us
however, is anything about whether or not Charley beats his cat, if we know
the cat is not naughty. Charley might be psychopathic and beat his cat even
when it is well-behaved, or perhaps not.1

In either case, and this is vital, the truth about what Charley does to his
cat when it is not naughty is NOT affected by accepting the whole proposition
as true.² Restating this last sentence more generally: when the antecedent is
false, the consequent can be true or false and is unaffected by the implicational
proposition being true. In other words, an implication is always true when the
antecedent is false.

¹ This picture is a reconstruction. No animal was harmed during the process.
² It might say something about the psychology of Charley though.
Another way to look at this is to imagine that Charley’s cat never has been,
nor ever will be naughty and so the antecedent is false. Yet this still allows the
proposition to be perfectly true, since it only says something about if the cat
were to be naughty. The proposition can still be true even if the antecedent
never actually comes true.
A third way of looking at this – more set theoretical – is to say the set of
occasions when the cat is naughty is included in the set of the occasions when
Charley beats his cat. In the next chapter we will deal more formally with the
semantics of implication, which we now represent with the symbol → and let
the form
P → Q
denote
If P holds true then Q holds true
This form is thus called an implication, and P → Q reads
P implies Q
Alternative ways of reading P → Q are
– If P holds true then Q holds true
– If P is true then Q is true
– P implies Q
– If P then Q
– P leads to Q
– P is a sufficient condition for Q
– Q is a necessary condition for P
– P holds only if Q holds

In the implication P → Q, P is called the antecedent, hypothesis,
premiss or assumption, and Q is called the consequent or conclusion.

Example 2.1
To say that
(2.1) If Castro eagerly jumps up and down then he is happy
and
(2.2) If it is not the case that ”Castro is happy”
      then neither is ”Castro eagerly jumps up and down”
have the same logical content means that they are either both true or both
false, which is totally independent of any mental states or physical actions

of the cat Castro. As we mentioned before, from a logical perspective, we
do not care. We do not even care whether there exists a cat at all.
But what we care about are the structures involved here. If some condition
(Castro eagerly jumps up and down) implies that some other condition
(Castro is happy) holds, then the other condition cannot escape being true
if the first one is not false. If the first condition were to be true then the
second condition must be true – and the second condition cannot be both
true and false. This is what we study in logic. º
The reasoning in the example above depends neither on the cat nor on its
possible actions. To emphasise this we introduce (quite meaningless)
propositional variables, say, P and Q, and write out the structural skeleton of
the sentence:³
(2.3) [If P holds then Q holds] is equivalent to
      [If Q does not hold then P does not hold].

³ Square brackets are of no importance, except for conveniently framing the propositions.
Above the implication was discussed a bit and we now turn our interest to
another structural component in sentences, namely the equivalence, denoted
by the symbol ↔, and let
P ↔Q
denote
P is equivalent to Q
The equivalence above is true exactly when P and Q both are true or both
are false. This means, in this case, that P and Q must have the same logical
content or in other words that they are essentially equal seen from a particular
view point (in this case logic).
Alternative ways of reading P ↔ Q are

– P is equivalent to Q
– P holds if and only if Q holds, which is sometimes abbreviated
P iff Q
– P has the same logical content as Q
– P is a necessary and sufficient condition for Q
– P and Q are either both true or both false

A proposition is negated with the symbol ¬ such that ¬P denotes the nega-
tion:
P does not hold

also called the negation of P . We can now write the proposition (2.3), using
the symbols introduced so far, as

(2.4) [P → Q] ↔ [¬Q → ¬P ].

According to the informal discussion above, this is a logically true proposition.
It is now clearer why this is so. The only possibility for

P → Q,

the left hand side of the equivalence (2.4), to be false is that P holds but not
Q. In all other cases the implication is true. This can be expressed by saying
that P → Q and ¬(P and ¬Q) have the same logical content, that is

[P → Q] ↔ ¬[P and ¬Q]

is true.
To make this clear, we have the sentence

(2.5) If Castro eagerly jumps up and down then he is happy

What does it mean for this to be true? Reconsidering the discussions around
implication above, this means exactly that Castro cannot eagerly jump up
and down at the same time as he is not happy. Thus the logical content of
”If Castro eagerly jumps up and down then he is happy” is the same as
”Castro cannot eagerly jump up and down at the same time as he is not
happy”. And this relation is what is stated in the equivalence (2.4) above. The
actual jump can be seen in the picture below.⁴

⁴ After the superb artist Vide Jansson.

So here we are using that


P holds but not Q
means the same as
P holds and ¬Q holds.
Looking now at the right hand side of the equivalence (2.4), the only possibility
for the implication
¬Q → ¬P
to be false is that ¬Q holds but not ¬P; in all other cases the implication is true.
In the same way as above, this can be expressed as
[¬Q → ¬P ] ↔ ¬[¬Q and ¬(¬P )].
But ¬(¬P ) is synonymous with P so it holds that
[¬Q → ¬P ] ↔ ¬[¬Q and P ].

From this reasoning it follows that both P → Q and ¬Q → ¬P are equivalent
to ¬[¬Q and P ]. Therefore they are equivalent to one another. In some sense
therefore we have proved (2.4).
In practice (2.4) can now be used: to prove a proposition of the form
P → Q,
one can prove ¬Q → ¬P instead, which is often much easier.
Note that when reversing the implication P → Q to ¬Q → ¬P negation
symbols must be added, otherwise the logical content is changed. This reversal
¬Q → ¬P
is called the transpositive proposition of P → Q.
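The informal argument above can also be verified by brute force. The following Python
sketch – again our own illustration rather than part of the book's formal development –
checks all four combinations of truth values for P and Q and confirms that P → Q and
its transpositive ¬Q → ¬P always receive the same truth value, which is the content
of (2.4).

    from itertools import product

    def implies(p, q):
        return (not p) or q   # false only when p is true and q is false

    for P, Q in product([True, False], repeat=2):
        assert implies(P, Q) == implies(not Q, not P)   # (2.4) holds in this case
    print("P -> Q and ¬Q -> ¬P agree in all four cases.")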

Example 2.2
If P represents ”Castro has three rats” and Q represents ”Castro is hilari-
ous” then
P →Q
expresses something that we assert to be true about Castro, whereas
Q → P
expresses something that might well be false, since Castro might not have
seen rats for months but nevertheless be hilarious, e.g.,
from having a couple of fishes before him. º
If both the implication P → Q and its converse Q → P hold, then this is
synonymous with P ↔ Q 5 , that is
(2.6) [[P → Q] and [Q → P ]] ↔ [Q ↔ P ].
To emphasize this, P ↔ Q can be written6

P ⇄ Q.

⁵ This is explained on page 45.
⁶ Note that in programming languages, statements like
if s then t
are very common and mean that t is executed only when the condition s is true. This should
not be confused with the logical truth value of
s → t,
which is automatically true when s is false.

Exercises
2.1 If the proposition the cat has nine tails → Bill’s back is turned is true, is it
possible that the cat has one tail and that Bill has his back turned anyway?

2.2 Write down the transpositive proposition for Ketch botched the job → the
crowd is delighted.
2.3 Does the cat is in the bag ↔ the contestants are excited mean the same thing
as ’either the cat is in the bag and the contestants are excited, or the cat is
not in the bag and the contestants are not excited.’ ?
2.4 * Show for all integers n that
n² is odd → n is odd
holds, by showing that the transpositive proposition holds.
2.5 * Let P represent the proposition x + y > 2 and Q represent the proposition
at least one of the variables x and y is larger than 1. Show that P → Q holds
by showing that the transpositive proposition holds.

2. The Connectives and, or


The previous section showed how, given propositions P and Q, new
propositions can be put together by forming equivalences, implications and
negations, for example P → Q, and [P → Q] ↔ [¬Q → ¬P ].
So obviously, we can construct more complicated sentences by using these
connectives. So let us continue a bit with two other important connectives –
and and or. Maybe it will then not come as a total surprise that these can be used
for forming new sentences like
P and Q
and
P or Q.
Usually, the symbols ∧ and ∨ for the logical connectives and and or are
used:

[P and Q] is denoted [P ∧ Q]
[P or Q] is denoted [P ∨ Q]
In the same way as earlier these connectives can be used to construct more
complex expressions:

Charley swings the cat ∧ the cat screeches


the cat is cuddly ∨ Charley is psychopathic
(1 + 1 = 2) ∧ (1 + 3 = 4)
(1 + 1 = 2) ∨ (1 + 3 = 4)
(1 + 1 = 3) ∨ (1 + 3 = 4)

The various connectives can also be combined as in the following examples:


(the cat is ever so cuddly) → (Charley is vicious ∨ the cat is victimised)
x(x − 1) = 0 → [x = 0 ∨ x = 1]
x(x − 1) = 0 ↔ [x = 0 ∨ x = 1]
[x > 0 ∧ x(x − 1) = 0] ↔ x = 1
[x² ≥ 5 ∨ y² ≥ 1] → x² + y² ≥ 6
¬[(1 + 1 = 3) ∧ (1 + 3 = 4)]

P ∧ Q is called a conjunction of P and Q, whereas P ∨ Q is called a
disjunction of P and Q.
Apart from the word and, the word but is, not least in mathematical
texts, translated by ∧, as in this case
[x is larger than 0 but less than 1] ↔ [x > 0 ∧ x < 1].
Also in spite of and although can be translated by ∧. The particulars of
word choice express emphasis or perspectives, such as time, that are lost when
translated by ∧ but that are of no interest in the logical analysis, where only
the logical content is of interest.
An important observation for example is that (1 + 1 = 2) ∨ (1 + 3 = 4)
is true even though both components are true. This is because of the meaning
of the connective ∨. The proposition
P ∨Q
denotes
P or Q or both P and Q,
which has a different meaning from
P or Q but not both.
Sometimes we write and/or in English prose when emphasizing the former
meaning. This is reason enough to be careful, as the following example illus-
trates.

Example 2.3
In the expression
[Castro is in Sweden or at Cuba] → [Castro is in Sweden] or [Castro is at Cuba]
it is not really possible that Castro is both in Sweden and Cuba at the same
time. To emphasize this we could therefore write
[Castro is in Sweden or at Cuba] →
[[[Castro is in Sweden] ∨ [Castro is at Cuba]] ∧ ¬[[Castro is in Sweden]
∧[Castro is at Cuba]]].

Rereading this expression might yield

Castro is either in Sweden or at Cuba.

º
The or that is used in the example above is called exclusive or and it is
sometimes denoted Y such that

(P Y Q) ↔ [(P ∨ Q) ∧ ¬(P ∧ Q)].

In the context of programming and in electronics, xor is sometimes used to
denote exclusive or.
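For readers who like to experiment, the defining equivalence of the exclusive or can be
tabulated as well. The small Python sketch below (our own illustration) shows that
P Y Q differs from the inclusive P ∨ Q only in the single case where both components
are true.

    for P in (True, False):
        for Q in (True, False):
            inclusive = P or Q
            exclusive = (P or Q) and not (P and Q)   # (P Y Q) as defined above
            print(P, Q, "P ∨ Q:", inclusive, "  P Y Q:", exclusive)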

Example 2.4
Another example where it can be useful to separate the exclusive cases is

x is an integer → [x is even] Y [x is odd].

Example 2.5
Even in everyday language the meaning of otherwise and unless is often
that of Y for example

[The number n is even otherwise n is odd]↔ [n is even] Y [n is odd]


[The number n is even unless n is odd] ↔ [n is even ] Y [n is odd]

º
Note that even in everyday language and can imply chronological or even
causal succession, which the impression conveyed by

Castro clawed Charley and got beaten

clearly shows when compared to the impression conveyed by

Castro got beaten and clawed Charley.

in spite of the fact that P ∧ Q and Q ∧ P are equivalent.7

⁷ The figure below shows the precise moment when Charley was clawed.

Two important properties of ∧ and ∨ are that P ∧ Q is true only in one
case, namely when both P and Q are true, and P ∨ Q is false only in one case,
namely when both P and Q are false. In all other cases P ∧ Q and P ∨ Q
are false and true respectively. This means that ¬P ∨ ¬Q is synonymous with
P ∧ Q not holding, that is to say
(2.7) ¬(P ∧ Q) ↔ ¬P ∨ ¬Q.
Similarly ¬P ∧ ¬Q is synonymous with P ∨ Q not holding, that is to say
(2.8) ¬(P ∨ Q) ↔ ¬P ∧ ¬Q.
These two laws are called de Morgan’s laws.

Example 2.6
The negation of
(x > 0) ∧ (y = 5)
is
(x ≤ 0) ∨ (y ≠ 5).
º

We saw earlier that the implication P → Q is false in only one case, namely when
P is true but Q is false. This means that
(2.9) ¬[P → Q] ↔ [P ∧ ¬Q],
which means the same as
[P → Q] ↔ ¬[P ∧ ¬Q].
With the help of de Morgan’s laws this can be rewritten as
¬[P ∧ ¬Q] ↔ [¬P ∨ ¬¬Q] ↔ [¬P ∨ Q].
This shows that
(2.10) [P → Q] ↔ [¬P ∨ Q].
Note that the logical implication P → Q does not require there to be any
causal relationship or any chronological sequence between P and Q. This
is noticeable in (2.10) where ¬P ∨ Q expresses no relationship between P and
Q whatsoever. In everyday speech however the impression conveyed by
If Charley teases the cat then Charley gets clawed
is clearer than the meaning conveyed by
Charley does not tease the cat or Charley gets clawed
which with suitable intonation will be understood by native speakers as mean-
ing the same thing, but not as immediately. This is in spite of the fact that
according to (2.10) they are logically equivalent. The second proposition is
more abstract concerning the result of Charley’s actions. A somewhat better
translation might be to use otherwise instead of or, yielding
Charley does not tease the cat otherwise Charley gets clawed.
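Both de Morgan's laws and the rewriting (2.10) of the implication lend themselves to
the same kind of exhaustive check. The Python sketch below is, once more, only an
added illustration for the curious reader.

    from itertools import product

    def implies(p, q):
        return (not p) or q

    for P, Q in product([True, False], repeat=2):
        assert (not (P and Q)) == ((not P) or (not Q))   # (2.7)
        assert (not (P or Q)) == ((not P) and (not Q))   # (2.8)
        assert implies(P, Q) == ((not P) or Q)           # (2.10)
    print("(2.7), (2.8) and (2.10) hold under every truth assignment.")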

Exercises
2.6 Use the symbol D to represent the proposition “the cat is out of the bag” and
the symbol P to represent the proposition “the contestant is black”. Express
the following using logical symbols:
a) The cat is out of the bag and the contestant is black.
b) If the cat is out of the bag then the contestant is black.
c) The cat is in the bag and the contestant is white.
d) The cat is out of the bag if the contestant is black.
e) The cat is out of the bag only when the contestant is black.

3. False Hypotheses and Bogus Solutions*


Since the only possibility for the implication
P →Q
to be false is for P to be true and Q to be false, the remaining cases make it
clear that P → Q is true if P is false, regardless of whether Q is true or not.
This means that a false proposition can imply anything, which is one reason
why it is so important to verify and prove results in mathematics with such
thoroughness – a hidden contradiction would imply that every other state-
ment is true, even the most absurd. This would be disastrous for applications
of mathematics. In other sciences the requirement of proof of results is not
so meticulous, and experiment can sometimes be enough to validate results.
However, mathematics as a scientific method must prove its results, since
mathematics is intended to be applicable to so very diverse situations. If a
mathematical result were to contain a contradiction, this could lead to almost
any conclusion with possibly dire consequences in the area of its application.8

For example, it holds true that


(2.11) [1 = 2] → [18 = 36].
The implication doesn’t really indicate that there is any causal relationship
between the antecedent 1 = 2 and the consequent 18 = 36, but rather just
says that if 1 = 2 has the logical content true then 18 = 36 also has the logical
content true. Indeed, the rules of arithmetic do actually allow us to show⁹,
and easily so, that the assumption that 1 = 2 is true really does lead to the
consequent 18 = 36.
At first glance it may seem strange that (2.11) is true, but it is not so strange
when remembering what implication actually stands for. Stating P → Q, does
not say that P need be true. It only says that if P were to hold then Q would
hold too. Implication only actually applies to the case when P is true – and
when P is false, as we have seen, the implication cannot be false.¹⁰ A common
line of reasoning, not to be confused with the implication P → Q, is to assert
P holds, therefore Q holds,
which asserts that both P and P → Q hold.

⁸ The rest of this chapter contains some more mathematically oriented discussion and can
be skipped by readers uncomfortable with such.
⁹ The following kind of calculation can be used to show that one number is equal to any other
number: 1 = 2 → 1−1 = 2−1 → 0 = 1 → 0·6 = 1·6 → 6+0 = 6+6 → 6·3 = 12·3 → 18 = 36.
¹⁰ By the same reasoning, it could be argued that an implication with a false antecedent
cannot be true either. However most logicians quietly ignore this view because assigning
truth to an implication with a false antecedent leads to a nice tidy theory with relatively
easy proofs. There are however logicians who do not accept this, claiming that all proofs
must be based on what actually is the case rather than on what would be.


It is also practical and extremely important to be able to cope with false as-
sumptions and understand what these lead to, for example in solving equations,
as the following example shows.

Example 2.7
If
√(x + 3) = −1 + √(x + 2)
then it holds that
x + 3 = (−1 + √(x + 2))²,
since if two numbers are equal, then one of them multiplied by itself is equal
to the other number multiplied by itself. (The root sign denotes the positive
root of a given number.) So expanding the square in the right hand side
yields
x + 3 = 1 + x + 2 − 2√(x + 2),
which yields
0 = −2√(x + 2),
which holds if and only if x + 2 = 0, in other words
x = −2.
This shows that the implication
(2.12) [√(x + 3) = −1 + √(x + 2)] → [x = −2]
is true. We know that an implication P → Q can be true in two cases:
1) if P is true and Q is true,
2) if P is false.
The question arises now why the implication (2.12) is true. Is this
1) because √(x + 3) really is equal to −1 + √(x + 2) for some x, which in
that case must also fulfill x = −2, or
2) because √(x + 3) = −1 + √(x + 2) is false? In that case it must be false for
all x, since according to 1) it holds that if it were true for some x then this
same x must be equal to −2.
The only way to decide which case applies is to substitute x = −2 in the
equation √(x + 3) = −1 + √(x + 2). For x = −2 this yields
√(x + 3) = √(−2 + 3) = 1,
while
−1 + √(x + 2) = −1 + √(−2 + 2) = −1.
The supposed solution x = −2 as calculated is thereby not a solution to the
given equation. This does not mean that there is anything wrong with the
calculation, since it showed only that the implication
[√(x + 3) = −1 + √(x + 2)] → [x = −2]
holds, not the converse [x = −2] → [√(x + 3) = −1 + √(x + 2)]. This reversal
is partly what is meant by saying that x = −2 fulfills the equation
√(x + 3) = −1 + √(x + 2), and partly that x = −2.
All together this means that for x = −2 to satisfy the equation
√(x + 3) = −1 + √(x + 2), both
x = −2 and √(x + 3) = −1 + √(x + 2)
must be true, in other words, this is case 1) above and not case 2). This
means that the given equation has no solution – if it had a solution then this
would be x = −2. From knowing that P → Q is equivalent to ¬Q → ¬P
it follows that x ≠ −2 → √(x + 3) ≠ −1 + √(x + 2). º
The example above shows a general phenomenon when solving equations, simul-
taneous equations or inequalities – however careful the calculation, apparently
bogus solutions can still arise.
This is because the implications that link the various steps in the calculation
often hold only in one direction (→), and are not reversible (←). So it
is important to keep track of the direction of the implication when solving
equations. If there is no equivalence between steps in the calculation then the
implied solutions must be substituted in the original equation. The reasoning
in the example above shows that this is the method to use – there is nothing
wrong with the calculations and in general it is not possible to recognise which
of the possible solutions is correct without checking them.¹¹

¹¹ This kind of problem is also dealt with in the section on truth- and solution sets in the
chapter Set Theory. Compare also with the treatment in the example on page ??.

Example 2.8
Solve the equation
x = √(2x + 3).
º
Solution:
x = √(2x + 3)
→ x² = 2x + 3
↔ x² − 2x − 3 = 0
↔ x = 1 ± √(1 + 3)
↔ x = −1 ∨ x = 3.
Note that there is only an implication and not an equivalence in the first
step. Substituting x = −1 in the original equation yields −1 in the
left hand side and √(2 · (−1) + 3) = 1 in the right hand side. This x-value
is therefore not a solution to the given equation. However the left hand
side becomes 3 and the right hand side √(2 · 3 + 3) = 3 for x = 3. The
reasoning in the previous example ensures that x = 3 is the only solution
to the given equation, since if [x = √(2x + 3)] → [x = −1 ∨ x = 3] then
¬[x = −1 ∨ x = 3] → [x ≠ √(2x + 3)] holds.
Answer: x = 3. º

Example 2.9
Solve the equation
−x = √(2x + 3).
º
Solution:
−x = √(2x + 3)
→ (−x)² = 2x + 3
↔ x² − 2x − 3 = 0
↔ x = 1 ± √(1 + 3)
↔ x = −1 ∨ x = 3.
Substituting x = −1 in the original equation yields −(−1) = 1 on the
left hand side and √(2 · (−1) + 3) = 1 on the right hand side. This x-value
is therefore a solution to the given equation. However the left hand side
becomes −3 and the right hand side √(2 · 3 + 3) = 3 for x = 3. The value
x = 3 is therefore not a solution to the equation.
Answer: x = −1. º
Comparing the last two equations, they become the same equation after squar-
ing both sides. This is the reason why each of the two x-values satisfies only
one of the equations. Their complete relationship can now be expressed as
[x = √(2x + 3) ∨ −x = √(2x + 3)] ↔ [x² = 2x + 3] ↔ [x = −1 ∨ x = 3].
It is therefore particularly important to check the solutions after squaring both
sides of the equation. A simpler example that also shows this is
x = −1 → x² = 1 ↔ [x = −1 ∨ x = 1],
which asserts the correct implication x = −1 → x² = 1. Obviously it would
be quite wrong to assert that x = 1 satisfies the original equation.
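The moral of the last two examples – substitute every candidate back into the original
equation – is easy to automate. The following Python sketch (our own illustration,
restricted to real-valued solutions) tests the candidates x = −1 and x = 3 against the
original equations of Examples 2.8 and 2.9; only x = 3 satisfies the first and only
x = −1 the second, exactly as found above.

    from math import sqrt

    for x in (-1, 3):
        print("x =", x,
              "|  x = √(2x+3):", x == sqrt(2 * x + 3),     # Example 2.8
              "| -x = √(2x+3):", -x == sqrt(2 * x + 3))    # Example 2.9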

Example 2.10
Solve the inequality
x + 3 > x + 2.
º

Solution: Clearly x + 3 > x + 2 is synonymous with the inequality 3 > 2,
which is true for all x, since it makes no demands on the value of x. The
original inequality is therefore fulfilled for all real numbers x. This can be
written
[x + 3 > x + 2] ↔ 3 > 2.
Answer: All x fulfill the inequality. º

Example 2.11
Solve the inequality
x + 2 > x + 3.
º
Solution: Clearly x + 2 > x + 3 is synonymous with the inequality 2 >
3, which is false, regardless of whatever value x has. The inequality
therefore lacks a solution.
Answer: The set of solutions is empty. º

Example 2.12
Solve the inequality
|x − 3| + x ≤ 5.
º
Solution: Observe that the notation with the absolute value |x − 3| has
two different meanings depending on whether x is greater than 3 or not. If
x is greater than 3 then |x − 3| simply means x − 3, and if x is less than
3 then |x − 3| denotes the number −(x − 3) = −x + 3. Divide the problem
into two cases.
Case 1: Assume that x ≥ 3. Then it holds that [|x−3|+x ≤ 5] ↔ [x−3+x ≤
5] ↔ [2x ≤ 8] ↔ [x ≤ 4]. This shows that
[x ≥ 3 ∧ |x − 3| + x ≤ 5] → x ≤ 4.
Not all x ≤ 4 satisfy the inequality, since substituting x ≤ 4 in the condition
[x ≥ 3 ∧ |x − 3| + x ≤ 5] only fulfills it for those values of x where x ≥ 3.
Solutions to the inequality in this case are all values of x where 3 ≤ x ≤ 4.
Case 2: Assume that x < 3. Then it holds that |x−3|+x ≤ 5 ↔ −x+3+x ≤
5 ↔ 3 ≤ 5. Since 3 ≤ 5 is fulfilled regardless of the value of x, the inequality
is therefore fulfilled by all x that also fulfil the condition x < 3.
Answer: Together this shows that the inequality is fulfilled by all x ≤ 4.
º

Example 2.13
Solve the inequality
|x| > 2|x − 1|.
º
Solution: The absolute values |x| and |x − 1| have different meanings
depending on whether x is greater or less than 0 or whether x is greater or
less than 1 respectively. Divide the problem into three different cases.
Case 1: Assume x ≥ 0 and x ≥ 1, that is x ≥ 1. Then the inequality is
equivalent to x > 2(x − 1) which is equivalent to x < 2. From this it follows
that all x such that 1 ≤ x < 2 satisfy the given inequality.
Case 2: Assume x ≥ 0 and x < 1. This yields |x| > 2|x − 1| ↔ x >
−2x + 2 ↔ 3x > 2 ↔ x > 2/3. From this it follows that all x such that
2/3 < x < 1 are solutions to the given inequality.
Case 3: Assume x < 0. This case yields |x| > 2|x − 1| ↔ −x > −2x + 2 ↔
x > 2. No values of x > 2 satisfy the condition x < 0 so the inequality lacks
solutions in this case.
Together this shows that [|x| > 2|x − 1|] ↔ [(1 ≤ x < 2) ∨ (2/3 < x <
1)] ↔ [2/3 < x < 2].
Answer: 2/3 < x < 2. º
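A rough numeric scan can serve as a sanity check on case analyses of this kind. The
Python sketch below (our own illustration; the grid and the step size 0.01 are arbitrary
choices) collects the grid points where |x| > 2|x − 1| holds and prints the smallest and
largest of them, which indeed lie just inside the interval 2/3 < x < 2.

    points = [k / 100 for k in range(-300, 500)]
    holds = [x for x in points if abs(x) > 2 * abs(x - 1)]
    print(min(holds), max(holds))   # prints 0.67 and 1.99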

It might appear that if solutions can be calculated for a problem then it
shouldn’t be necessary to worry about whether there really are any solutions.
The examples above show that grave mistakes can be made by not checking
whether there is a solution or whether the solutions produced by the calcula-
tion really are solutions. The next example is a classical example that shows
this even more clearly – by attempting to calculate “the largest integer”.

Example 2.14
Assume that n is the largest positive integer. Then it holds that n² is a
positive integer, and since n is the largest integer, n² ≤ n. It therefore holds
that
[n² ≤ n] ↔ [n(n − 1) ≤ 0] ↔ [n − 1 ≤ 0] ↔ [n ≤ 1].
But because n is a positive integer, it also holds that n ≥ 1. Since [n ≤
1 and n ≥ 1] ↔ [n = 1] it follows that n = 1 is the largest positive integer.
This shows that
[n is the largest positive integer] → [n = 1].

This really is a true implication, but that it is true does not depend on
n = 1 being the largest positive integer, but rather on the antecedent
n is the largest positive integer
which is false for all n. º
When conducting mathematics and reasoning logically – for example when
solving equations – the implications and equivalences are not always explicitly
written out. Instead there is a convention of only noting down the various
steps one above the other with possible comments, the intention being that
every new line follows from previous lines in some way or is equivalent to some
previous line.

Example 2.15
Solve the simultaneous equations
x2 = 1
x + y = 2
xy = 1
º
Solution: The bracketed system
x2 = 1
x + y = 2        (∗)
xy = 1
indicates that x2 = 1, x + y = 2 and xy = 1 all hold. From the first
equation in (∗) it follows that x = 1 or that x = −1.

Case 1: Assume x = 1. Then it follows from the second equation in (∗)
that 1 + y = 2, that is y = 1.
This shows that x = 1 → y = 1. (Actually this shows that [(x, y) satisfies (∗)] →
[x = 1 → y = 1].)
Substituting (x, y) = (1, 1) in (∗) confirms that this is a solution.

Case 2: Assume x = −1. Then it follows from the second equation in (∗)
that −1 + y = 2, that is y = 3.
This shows that x = −1 → y = 3.

Substituting (x, y) = (−1, 3) in (∗) refutes this as a solution.

Answer: (x, y) = (1, 1). º
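A quick way to confirm which of the two candidate pairs really solves (∗) is to substitute them back mechanically. The short Python sketch below (the helper name solves_system is ours, purely for illustration) does just that.

# Substituting both candidate pairs from the case analysis into all three
# equations of (*) shows which of them really is a solution.
def solves_system(x, y):
    return x**2 == 1 and x + y == 2 and x * y == 1

for x, y in [(1, 1), (-1, 3)]:
    print(f"(x, y) = ({x}, {y}): solves the system: {solves_system(x, y)}")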



Exercises

2.7 Solve the equation x + 1 = 1 − x.
2.8 Solve the simultaneous equations
x2 + y2 = 2
xy = −1.
2.9 Solve the simultaneous equations
x2 + y2 = 2
xy = −1
x − y = 2

Revise & Reflect

1. True or False? If a triangle has 4 corners then it only rains on Mondays.
2. Explain necessary and sufficient conditions in terms of implication.
3. How are conjunction, disjunction and implication interrelated?
? A false antecedent makes any implication true even if the consequent
is false.
? For any true implication, the consequent becomes a necessary condition
for the truth of the antecedent, and the antecedent becomes a sufficient
condition for the consequent – sufficient but not necessary, because the
consequent could be true even if the antecedent is not.
? P → Q ⇔ ¬P ∨ Q ⇔ ¬(P ∧ ¬Q).
? Check that you can explain all the ‘Concepts Covered’ listed at the
beginning of the chapter.

CHAPTER 3

The Language SL
Learning Objectives
After working through this chapter you should be able to:
• distinguish between well formed and ill-formed logical sentences
• analyse and categorise the logical truth value of a sentence using a
table.

Concepts covered
Alphabet Priority Truth value
Sub-formulae Contradiction Tautology
Contingent Law of the excluded middle Expressive power
Counter-model Distributive laws Logically false
Categorical Logically true Negation
Information content Satisfy Falsify

There is a subtle distinction between a proposition and how it is expressed.
Propositions are the conceptual ideas or thoughts inside our heads which we
express in language1 . For any one proposition there may be many different
expressions in the same or in different languages that all express the same
1
Some would have it that propositions are independent of a cerebral substrate. Mathemati-
cians in particular favour this view. The nature of a proposition is also debated. Some philoso-
phers suggest a proposition as being uniquely characterised by the set of possible worlds in
which it is true. Mathematicians however find this unsatisfactory since many mathematical
statements would be indistinguishable, such as 1+1=2 and 3+4=7. There is an extensive
and inconclusive literature on the true nature of propositions. McGrath, Matthew, “Proposi-
tions”, cf. The Stanford Encyclopedia of Philosophy (Spring 2006 Edition), Edward N. Zalta
(ed.).

proposition. Any language therefore is just an external expression of propositions,
which are essentially intangible. Bearing this thought in mind we are now
ready for a formal treatment of the language SL which represents propositions
with sentences. A sentence in SL is constructed from atoms and connectives
according to a well defined syntax. Here again, note that the syntax bears no
concern whatsoever for what these components might mean. All that matters
is strict adherence to the rules that govern how sentences are built up. The
semantics for SL, on the other hand, will be dealt with after syntax has been
covered.

1. Alphabet
In order to be able to describe a language we need an exact picture of which
propositions the language needs to describe and what expressions it allows.
And in order to define these, an alphabet is first needed. This alphabet will
need to consist of all the possible characters used to write expressions
in the language. An alphabet for a propositional language consists of three
different components: atoms, connectives and punctuation marks.
Atoms can be regarded as representing the basic components of proposi-
tional expressions. An atom is denoted by a symbol in the language. And
a symbol is quite simply one or more characters that we decide will denote an
atom. Atoms are then combined with each other in the language using symbols
for connectives. Which connectives are chosen depends on what the language
needs to be able to express and prove, but the most common are concepts that
correspond with and, or, not, if ... then, and if and only if in natural lan-
guage. Punctuation marks are merely used to demarcate the structure of
compound sentences according to their construction.

Definition 3.1
The alphabet SLA is a structure (A, C, M ), where:
A is the set of symbols p, q, r, s, t1 , t2 , ...2
C is the set of symbols ¬, ∧, ∨, →, ↔
M is the set of symbols ), (, ], [

2. The Syntax for Sentences in SL


The Syntax for a language determines the form of the sentences in that
language. The syntax for the propositional language SL is provided by the
schema of rules in the following definition.
2
Lower case will be used most often to denote atoms.

Definition 3.2
Given the alphabet SLA, and propositional variables P and Q.
An element in A from SLA is a sentence in SL:
If P and Q hold sentences then the following are also sentences in SL (recall
that parentheses are used for grouping sentences):
(i) (P )
(ii) ¬(P )
(iii) (P ∧ Q)
(iv) (P ∨ Q)
(v) (P → Q)
(vi) (P ↔ Q)
SL consists only of those sentences that can be constructed using the cases
specified above.
The sentences held by P and Q are subsentences of the compound sen-
tences (ii) – (vi) above.3

Example 3.3
The expression ¬((p ∧ q) ∨ (r → s)) is a sentence. This becomes clear by
applying rules (i) – (vi) above. The atoms p, q, r and s are elements in A
and are therefore sentences. It therefore follows from rules (iii) and (v) that
(p ∧ q) and (r → s) are also sentences. By rule (iv) then (p ∧ q) ∨ (r → s) is
also a sentence. Applying (ii) now yields the sentence ¬((p ∧ q) ∨ (r → s)).
º
Example 3.4
Note that p ∧ q ∨ r is not a sentence since it cannot be generated by the rules
above. This is reasonable since (p ∧ q) ∨ r and p ∧ (q ∨ r) mean different
things. º
The schema of rules above is formulated so that ambiguity in sentences is
avoided by the way they are constructed. However, in order to reduce the
clutter of parentheses in expressions without ambiguity creeping in, a prece-
dence order is usually assigned to the connectives. This determines the order
in which they should be evaluated when omitted parentheses would otherwise
leave this unclear.
From now on, the following precedence order ¬, ∧, ∨, →, ↔, from highest
to lowest, will be used. Applied to ¬p ∧ q ∨ r, evaluation proceeds as it would
for (¬p ∧ q) ∨ r and not ¬(p ∧ q) ∨ r, ¬(p ∧ (q ∨ r)), ¬p ∧ (q ∨ r) or ¬((p ∧ q) ∨ r).
3
Square parentheses may be used in addition to rounded parentheses when this increases
legibility.
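As an aside for readers who program, Python’s own Boolean operators happen to follow the same precedence order, which makes the convention easy to experiment with. The sketch below is only an illustration of the grouping, not part of the definition of SL.

from itertools import product

# Python's Boolean operators share the precedence order not > and > or,
# mirroring the order ¬ > ∧ > ∨ adopted above, so the unparenthesised
# expression below is grouped as ((not p) and q) or r.
for p, q, r in product([False, True], repeat=3):
    assert (not p and q or r) == (((not p) and q) or r)
print("not p and q or r is read as ((not p) and q) or r in every case")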

A little technical remark is that the rules in Definition 3.2 are generalised
over all atoms by using propositional variables. Propositional variables are
denoted by capital letters to distinguish them from atoms in the language
(which, in contrast, are used in the examples above when applying Definition
3.2). A propositional variable can be assigned an arbitrary sentence from the
language SL and when this is done it is said to be instantiated. When every
variable in a rule is instantiated, this constitutes an instance of that rule and
the rule is said to be instantiated. The rules in the schema can be instanti-
ated any number of times in any order. Repeated instantiation using various
rules from the schema is how more complicated sentences are syntactically
constructed starting with simple atoms, or deconstructed starting with more
complex sentences and applying the rules in reverse. 45

3. The Meaning of a Sentence


The above section dealt with the form of sentences in logic. Note however that
not all expressions in everyday language can be represented in a natural way.
Consider the following example:
Is T a right angled triangle?
Rub out that triangle T !
Oh, the beauty of this triangle!
Even though in one sense “Oh, the beauty of this triangle!” expresses “T is
a beautiful triangle” or “I think T is a beautiful triangle”, it is still not a
sentence of logical nature. Questions, imperatives and interjections are not
propositions that assert something with a logical content that can assume a
value of either true or false.
Meaning in a sentence in sentence logic is expressed by truth values. Each
atom can be assigned exactly one proposition and each proposition must as-
sume one of two values, true or false. When this has been done, the atom
is said to have an interpreting assignment. The truth value of an atomic
sentence is determined by the truth of the proposition assigned to the atom6 .
The truth value of a compound sentence is determined by the connectives in

4
The latter process is basically how a parser works that checks the syntax of computer
programs. The source code of a computer program is essentially one huge sentence in a
computer language.
5
Note that the syntax is defined relative to the alphabet SLA, i.e. the structure (A, C, M ) in
the above definition.
6
In practice when solving tasks in logic, you can skip this distinction and assign truth values
directly to the atoms, and in most expositions of propositional logic the link between truth
values and atoms via propositions is omitted for simplicity and truth values are thought of as
being assigned directly to atoms. This does not affect the logical properties of the language;
however this link is essential when the logic is applied in practice to a real problem.

the sentence, and their evaluation rules for the values of the subsentences that
they connect.

For example the truth value of p → q is determined by the value of the com-
ponent atoms p, q, and the connective →. Since there are two independent
possible assignments to each atom in a sentence, the number of possible as-
signments of truth values that a sentence may take is 2^a where a is the number
of unique atoms in the sentence. Since there are finitely many assignments of
truth values it is possible to completely list in a table, all possible assignments
to any sentence, as well as the resulting values for each sub-sentence. Such
tables are called truth tables. In this way truth tables effectively define what
sentences can logically mean.
Let 1 denote the value true and 0 denote the value false. The truth table
for any implication P → Q looks like this:7
P Q P →Q
0 0 1
(3.1) 0 1 1
1 0 0
1 1 1
Note how the table lists all 2^a possible combinations of truth assignments to
the a atoms held in P and Q. Note also, as explained previously, that P → Q
is false only in the one case, namely where the sentence held in P is true and
that held in Q is false. In all other cases P → Q is true.
The truth table for any conjunction P ∧ Q is formed in the same way.
P Q P ∧Q
0 0 0
(3.2) 0 1 0
1 0 0
1 1 1
Note that P ∧ Q is true in only one case – when the sentences held in both P
and Q evaluate to 1.
Disjunction P ∨ Q has the following truth table.
P Q P ∨Q
0 0 0
(3.3) 0 1 1
1 0 1
1 1 1
7
Note that we use the signs P and Q, rather than the symbols in SL. This is because the rules
are supposed to be read as variables for arbitrary sentences. For instance, we can substitute
the sentence r → s for Q. For simplicity, we will sometimes write “the sentence P → Q” and
similar when we more rigorously should write “sentences of the form P → Q”.

Note that P ∨ Q is false in only one case – when the sentences held in both P
and Q evaluate to 0.
Equivalence P ↔ Q expresses that sentences held in P and Q have the
same truth value:
P Q P ↔Q
0 0 1
(3.4) 0 1 0
1 0 0
1 1 1
Equivalence P ↔ Q can also be understood as meaning that both P → Q and
Q → P hold, since in the table below the values in the two expressions’ respective
columns are equal:
P Q P ↔ Q (P → Q) ∧ (Q → P )
0 0 1 1 1 1
(3.5) 0 1 0 1 0 0
1 0 0 0 0 1
1 1 1 1 1 1
Negation ¬P has the following truth table.
P ¬P
(3.6) 0 1
1 0
A sentence that is true for all possible assignments is called a tautology (i.e,
the truth value of the sentence is 1 in every row). A sentence that is false for
all possible assignments is called a contradiction.
Some examples of tautologies are to be found in de Morgan’s laws, one
of which states:
(3.7) ¬(P ∧ Q) ↔ ¬P ∨ ¬Q,
and which when examined in a truth table reveals that its truth value is 1 in
all cases:
P Q P ∧ Q ¬(P ∧ Q) ¬P ¬Q ¬P ∨ ¬Q ¬(P ∧ Q) ↔ (¬P ∨ ¬Q)
0 0 0 1 1 1 1 1
0 1 0 1 1 0 1 1
1 0 0 1 0 1 1 1
1 1 1 0 0 0 0 1
Other examples of tautologies are sentences with the form
P ∨ ¬P,

which has the truth table


P ¬P P ∨ ¬P
(3.8) 0 1 1
1 0 1
This can be expressed as
(3.9) P ∨ ¬P ↔ T0 ,
where T0 denotes an arbitrary tautology (i.e. with a constant value of 1).
The sentence (3.9) is actually a tautology called the law of the excluded
middle 8 .
Examples of contradictions include sentences with the following forms
P ∧ ¬P
which yields a truth table with only zeros:
P ¬P P ∧ ¬P
(3.10) 0 1 0
1 0 0
This can be expressed
(3.11) P ∧ ¬P ↔ F0 ,
where F0 denotes an arbitrary contradiction (i.e. with a constant value of 0).
The sentence (3.11) is called the principle of contradiction.
Unlike tautologies and contradictions the truth value of which is immutable,
the truth value of some sentences can be contingent on the value assignment
to its atoms. Such a contingent sentence can assume the value true or false.
An example of the form that a contingent sentence can take is
(P → Q) → Q,
the contingency of which is apparent in the truth table:
P Q P → Q (P → Q) → Q
0 0 1 0
(3.12) 0 1 1 1
1 0 0 1
1 1 1 1

8
Whether this law is reasonable or not is actually debated among some logicians. So called
mathematical intuitionists (or constructivists) are forcefully arguing against it, claiming that
all mathematical objects must be constructed. It is not sufficient for an object to exist just
because it has been proven impossible for it not to exist.
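The classifications tautology, contradiction and contingent can also be checked mechanically by enumerating every row of a truth table. The following Python sketch is one possible way of doing this; the function name classify and the encoding of sentences as Boolean functions of their atoms are our own illustrative choices, not part of the formal development.

from itertools import product

def classify(sentence, n_atoms):
    # Enumerate all 2**n_atoms rows of the truth table for a sentence given
    # as a Boolean function of its atoms, and classify the sentence.
    values = [sentence(*row) for row in product([False, True], repeat=n_atoms)]
    if all(values):
        return "tautology"
    if not any(values):
        return "contradiction"
    return "contingent"

imp = lambda a, b: (not a) or b  # P -> Q rendered as (not P) or Q

print(classify(lambda p, q: (not (p and q)) == ((not p) or (not q)), 2))  # tautology (de Morgan)
print(classify(lambda p: p or not p, 1))                                   # tautology (excluded middle)
print(classify(lambda p: p and not p, 1))                                  # contradiction
print(classify(lambda p, q: imp(imp(p, q), q), 2))                         # contingent, cf. (3.12)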

The construction of truth tables is a method by which differences in the
truth values of sentences can be examined. For example if sentences with the
form P ∨ (Q ∧ R) and (P ∨ Q) ∧ R are thought to be equivalent, this table,

P Q R Q ∧ R P ∨ (Q ∧ R) P ∨ Q (P ∨ Q) ∧ R
0 0 0 0 0 0 0
0 0 1 0 0 0 0
0 1 0 0 0 1 0
0 1 1 1 1 1 1
1 0 0 0 1 1 0
1 0 1 0 1 1 1
1 1 0 0 1 1 0
1 1 1 1 1 1 1

shows that the truth values of the two sentence schemata differ on the fifth
and seventh row. Therefore

[P ∨ (Q ∧ R) ↔ (P ∨ Q) ∧ R]

is not a tautology.
Here the precedence of connectives can be important, but usually parenthe-
ses are used to add clarity to intention. Compare this to arithmetic, where
there is a more established convention of precedence:

a·b+c

means (a · b) + c and not a · (b + c), i.e. multiplication is done before addition.


In arithmetic + and · fulfill what is known as the distributive law for
multiplication over addition (a is distributed over both b and c):

a · (b + c) = a · b + a · c

The corresponding distributive laws for ∧ and ∨ are

P ∧ (Q ∨ R) ↔ (P ∧ Q) ∨ (P ∧ R)

and

P ∨ (Q ∧ R) ↔ (P ∨ Q) ∧ (P ∨ R)

respectively. That both of these are correct is clear in the following
truth tables (corresponding columns have the same truth values):

P Q R P ∧ (Q ∨ R) (P ∧ Q) ∨ (P ∧ R) P ∨ (Q ∧ R) (P ∨ Q) ∧ (P ∨ R)
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 1 0 0 0 0 0 0 0 0 1
0 1 0 0 0 1 0 0 0 0 0 0 1 0 0
0 1 1 0 0 1 0 0 0 0 1 1 1 1 1
1 0 0 1 0 0 0 0 0 1 1 0 1 1 1
1 0 1 1 1 1 0 1 1 1 1 0 1 1 1
1 1 0 1 1 1 1 1 0 1 1 0 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

Note that in SL there are two different distributive laws, whereas in arithmetic
there is only one: a · (b + c) = a · b + a · c, and not the other: a + (b · c) ≠
(a + b) · (a + c). This becomes clear with an assignment of a = b = c = 1, since
then a + (b · c) = 1 + 1 = 2, whereas (a + b) · (a + c) = 2 · 2 = 4.

Sometimes translating statements from everyday speech to logical symbols
is fraught with uncertainty. Truth tables can then sometimes help to elicit the
intended meaning from the everyday intention.

Example 3.5
Formalise the sentence:

In actual fact it holds that x > 1, but that is of no matter, since y < 0 in any case.

Solution: The statement contains the components x > 1 and y < 0, which
can be represented by p and q respectively. Let s denote the statement as
a whole. Clearly s is false if p is false (i.e. if x ≤ 1) or if q is false (i.e. if
y ≥ 0). On the other hand if both p and q are true then s must also be true.
Listing these results in a truth table yields
p q s
0 0 0
0 1 0
1 0 0
1 1 1

which bears the same values as the truth table for P ∧ Q, so s is logically
equivalent to P ∧ Q. Answer: (x > 1) ∧ (y < 0). º

Example 3.6
Formalise the sentence:

It makes no difference whether x > 1, because y < 0 holds in any case.

Solution: As in the previous example this statement is clearly false if
y ≥ 0. It is true otherwise (i.e. when y < 0) since the value of x “makes
no difference”. The logical content of the statement is therefore only y < 0,
regardless of whether x > 1 or not.
Answer: y < 0. º

Example 3.7
Consider the following statement
(3.13) The value α is positive [a], but the function g still doesn’t become
positive [¬p] and therefore there must be something wrong [w].
Show that the formalisation

(3.14) (a ∧ ¬p) → w

is not correct. º

Solution: Clearly the given statement is false if a is false, since the state-
ment says explicitly that a holds. However, the implication (3.14) is true if
the antecedent is false, for example if a is false. The given statement and
the formula (3.14) therefore do not have the same logical content, so the
formalisation is not correct.
In order to find a possible correct formalisation note that the statement
(3.13) expresses partly that a∧¬p holds and partly that (a∧¬p) → w holds.
A correct formalisation is therefore

(a ∧ ¬p) ∧ [(a ∧ ¬p) → w],

which can be simplified to sentence s :

a ∧ ¬p ∧ w.

The truth table method can also determine whether s captures the logic of the
given statement (3.13): s is false if a is false, or if p is true, or if w is
false. If a is true and ¬p is true, then s is true if w is true. This yields the

truth table
a p w s
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 0
1 0 0 0
1 0 1 1
1 1 0 0
1 1 1 0
which is precisely the table for a ∧ ¬p ∧ w. º

Exercises
3.1 Construct the truth table for the sentence p ∨ ¬q.
3.2 Construct the truth table for the sentence (p ∧ q) → r.
3.3 Determine whether the following sentences are tautologies, contradictions or
contingencies:
a) p ∧ ¬p, b) p ∨ ¬p, c) p ∨ q, d) p ∧ q.

4. The Expressive Power of Connectives


The section above covered some of the most common connectives. You might
well ask why these in particular are preferred. The answer is really that
there isn’t actually any formal reason for this, but rather that these are the
connectives that are considered to be the most natural since they correspond
to common words and notions in everyday use.
The fact is that it is possible to have fewer connectives without sacrificing the
expressive power of sentence logic. For example a language just as expressive
as SL need only contain the connectives ¬ and ∨. That this is so can be verified
by defining the other connectives ∧, → and ↔, using only the connectives ¬
and ∨. The tables below show just this using basic sentence forms that have the
exact same semantic content. The first table shows that the sentence form P → Q
can be expressed as ¬P ∨ Q. Note that this is possible because the truth tables
for these sentences are identical for all assignments to the sentences held by
P and Q. This means that the sentences express precisely the same logical
content.

P Q P → Q ¬P ∨ Q
0 0 1 1
0 1 1 1
1 0 0 0
1 1 1 1
Similarly ¬ and ∨ suffice to define the connective ∧

P Q P ∧ Q ¬(¬P ∨ ¬Q)
0 0 0 0
0 1 0 0
1 0 0 0
1 1 1 1
Finally the connective ↔ is now easily defined since P ↔ Q has the same
semantic content as (P → Q) ∧ (Q → P ).
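One way to convince oneself of these reductions is to compute the tables mechanically. In the sketch below (a minimal illustration; the upper-case helper names are our own), ∧, → and ↔ are defined from ¬ and ∨ exactly as above and then compared with the usual tables on every assignment.

from itertools import product

# Defining ∧, → and ↔ from ¬ and ∨ alone, then checking against the usual tables.
def NOT(p):    return not p
def OR(p, q):  return p or q
def AND(p, q): return NOT(OR(NOT(p), NOT(q)))    # P ∧ Q as ¬(¬P ∨ ¬Q)
def IMP(p, q): return OR(NOT(p), q)              # P → Q as ¬P ∨ Q
def IFF(p, q): return AND(IMP(p, q), IMP(q, p))  # P ↔ Q as (P → Q) ∧ (Q → P)

for p, q in product([False, True], repeat=2):
    assert AND(p, q) == (p and q)
    assert IMP(p, q) == ((not p) or q)
    assert IFF(p, q) == (p == q)
print("∧, → and ↔ agree with their definitions from ¬ and ∨ on every row")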
Another interesting question is how many different connectives are
available for inclusion in a language. For connectives that bind two sentences,
so called 2-place or binary connectives, there is a total of 16. This can be seen
in the following table:

P Q 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
0 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
1 0 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
The columns 1–16 contain all the possible evaluations of a truth table given
the truth values assigned in each row to sentences held by P and Q. Naturally
the table contains the connectives dealt with earlier in the chapter. For exam-
ple the connective ∧ corresponds to column 2; connective ↔ corresponds to
column 10. Further, column 14 shows the values of the truth table for → and
column 8 corresponds to the connective ∨.
Even some of the other connectives are used sometimes with special symbols,
like xor (column 7) and nand (column 15). The pattern in the previous table
suggests a general answer to the question of how many n-place connectives
there are if n is the number of sentences that the connectives bind together.
This number 2^(2^n) grows very rapidly, but many-place connectives are of little
practical interest anyway, since they can all be defined by binary connectives, which you
might like to verify for yourself using truth tables.
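The counting argument itself can be illustrated with a few lines of code; the sketch below (our own illustration, not part of the text’s formal development) simply enumerates the possible result columns for n = 1, 2, 3.

from itertools import product

# An n-place connective is a way of filling the result column of a truth table
# with 2**n rows, so there are 2**(2**n) of them.
for n in (1, 2, 3):
    rows = list(product([0, 1], repeat=n))             # the 2**n assignments
    columns = list(product([0, 1], repeat=len(rows)))  # one column per connective
    print(f"{n}-place connectives: {len(columns)}")    # 4, 16, 256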

5. The Semantics of SL
Formally calculation in logic is all about sentences. A sentence is an expres-
sion that can assume a truth value of true or false but not both. This is done
by interpreting the sentence.
Recall that a tautology is a sentence that is true regardless of the values
assigned to its atoms. Such sentences are sometimes called theorems. A sen-
tence is a contradiction if it is false regardless of the values of the atoms.
A sentence that is neither a tautology nor a contradiction is contingent on
the values of the atoms.

P ∨ ¬P is a tautology
P ∧ ¬P is a contradiction
P ∧ Q is contingent

The formal semantics for the language SL now follows, starting with a precise
definition of what an interpretation of a sentence is. First think about a set
of propositions P in some reasonable sense of the word, e.g., the content of a
meaningful declarative sentence or something like that. We then formalise
interpretations in terms of mappings from the atoms to these propositions.
This is maybe a little confusing, but it is primarily a technical trick just to
formalise the correspondence between symbols in the language SL and ..hmmm..
bah ... something that we might call the ”real” world. Or something like that
at least.

Definition 3.8
An interpretation in SL is a function I : A → P from the set of atoms A
in SL to the set P of propositions.

Propositions can be true or false and, depending on how we combine the atoms
denoting them, we get resulting truth values. This is expressed by the following
definition.9

9
Note in (i) below that the concept of interpretation underlies the definition. Again, this
is just a formality and we could as well consider the set of propositions to be the set {1,0}
directly. That is, for all purposes needed here, it suffices to think of evaluations as the rows
in a truth table.

Definition 3.9
Given an interpretation I, and sentence variables P and Q, then an eval-
uation V of I is a function V I : SL → {1, 0} from the set of sentences in
SL to the truth values, such that the following holds for any sentence in SL:

(i) If P is an atom then V I (P ) = 1 if I(P ) is a true proposition, otherwise V I (P ) = 0.
(ii) V I (¬P ) = 0 iff V I (P ) = 1
(iii) V I (P ∧ Q) = 1 iff V I (P ) = 1 and V I (Q) = 1
(iv) V I (P ∨ Q) = 1 iff V I (P ) = 1 or V I (Q) = 1
(v) V I (P → Q) = 1 iff V I (P ) = 0 or V I (Q) = 1
(vi) V I (P ↔ Q) = 1 iff V I (P ) = V I (Q)
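Read as a recipe, clauses (i)–(vi) describe a recursive procedure. The sketch below is one possible rendering in Python; the nested-tuple representation of sentences and the function name evaluate are assumptions made here purely for illustration and are not part of the definition itself.

# A minimal sketch of an evaluation function mirroring clauses (i)-(vi).
# Sentences are represented as nested tuples, e.g. ('->', 'p', ('not', 'q'));
# the interpretation is modelled directly as a mapping from atoms to truth values.
def evaluate(sentence, interpretation):
    if isinstance(sentence, str):                      # (i) atoms
        return interpretation[sentence]
    op, *args = sentence
    if op == 'not':                                    # (ii)
        return not evaluate(args[0], interpretation)
    left, right = (evaluate(a, interpretation) for a in args)
    if op == 'and':  return left and right             # (iii)
    if op == 'or':   return left or right              # (iv)
    if op == '->':   return (not left) or right        # (v)
    if op == '<->':  return left == right              # (vi)
    raise ValueError(f"unknown connective {op!r}")

# p → (q ∨ r) under the interpretation p = 1, q = 0, r = 1 evaluates to 1 (True).
print(evaluate(('->', 'p', ('or', 'q', 'r')), {'p': True, 'q': False, 'r': True}))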

As a function, an evaluation of an interpretation of a sentence in SL maps the
sentence to a truth value, and when that truth value is 1 the interpretation can
be said to make the sentence true, or make the sentence false when the
evaluation maps the sentence to 0. When the evaluation of an interpretation
maps a sentence or set of sentences to 1, that interpretation is called a model.
An interpretation of a sentence that evaluates to 0 is called a counter-model.

Definition 3.10
A model m for a sentence P is an interpretation I such that V I (P ) = 1. A
model m for a set of sentences Γ is an interpretation I such that V I (P ) = 1
for every sentence P ∈ Γ, in which case it is usual to say that m satisfies
Γ, or that Γ holds in m.

Definition 3.11
A counter-model c to a sentence P is an interpretation I such that the
evaluation V I (P ) = 0. A counter-model c to a set of sentences Γ is an
interpretation I such that V I (P ) = 0 for some sentence P ∈ Γ in which
case it is usual to say that c falsifies Γ.

Sentences and sets of sentences have different properties depending on which
truth value they assume after the atoms have been assigned propositions.

Definition 3.12
A sentence that is falsified by all interpretations is called unsatisfiable. A
set of sentences that is falsified by all interpretations is called unsatisfiable.

The following definition expresses the concepts tautology, contradiction and
contingency in terms of models and counter-models.

Definition 3.13
A sentence is logically true (a tautology) iff every interpretation of the
sentence is a model for the sentence. A set of sentences is logically true iff
all interpretations are models for all sentences in the set.

Definition 3.14
A sentence is logically false (a contradiction) iff every interpretation of a
sentence is a counter-model to the sentence. A set of sentences is logically
false iff every interpretation is a counter-model to the set.

Definition 3.15
A sentence is contingent iff it is neither logically true nor logically false.
A set of sentences is contingent iff it is neither logically true nor logically
false.

In the next section an interesting special case of contingency called a categorical
sentence will be described. First though, its definition:

Definition 3.16
A sentence is categorical iff it has precisely one model. A set of sentences
is categorical iff it has precisely one model.

Exercises
Determine whether the following sentences are logically true, contradictory or
contingent.
3.4 p → (q ∧ r)
3.5 p → (p → p)
3.6 p ∨ ¬p
3.7 p ∧ ¬p

3.8 p → (p ↔ r)
3.9 (p → q) → ((p → q) → (p → q))
3.10 ¬(p ∨ q) ↔ (¬p ∧ ¬q)
3.11 ¬(p ∧ q) ↔ (¬p ∨ ¬q)
3.12 (p → q) ↔ (¬q → ¬p)
3.13 p ∨ (q ∧ r) ↔ ((p ∨ q) ∧ (p ∨ r))
3.14 p ∧ (q ∨ r) ↔ ((p ∧ q) ∨ (p ∧ r))

6. Information Content of a Sentence


An essential component in logical reasoning is how much information a
sentence contains. If S is a sentence, the number of 1’s in the truth table for
S can be seen as a measure of the information content in the sentence. The
more 1’s a sentence has the less information the sentence contains.
Recall that a sentence that only has a single 1 in its truth table, all other
values being 0’s is called categorical. A categorical sentence thereby asserts
only one single thing and not several possible things. A tautology is true in all
possible cases and therefore contains in this sense, no information at all and
is empty.
Consider the following two sentences:
Castro is happy.
Castro is sad or not sad.
From the first sentence Castro’s state of mind can be gleaned. However the
other sentence tells us nothing about his state of mind because it is true no
matter how he is feeling.
A more formal example of a sentence containing no information is the tau-
tology P ∧ Q ↔ ¬(¬P ∨ ¬Q). This has the following truth table.
P Q P ∧Q ↔ ¬(¬P ∨ ¬Q)
0 0 1
0 1 1
1 0 1
1 1 1
If a sentence S only has 0’s in its truth table, i.e. if S is a contradiction, then
along the same line this could be interpreted as meaning that S contains all
information, so much so that S contradicts itself, since S contains the negation

of all information too. An indication of this is that every sentence follows from
a contradiction. The implication
S→T
is true for all sentences T if S is a contradiction.
Yet another application of the concept of information content is that a
sentence Q cannot follow from a sentence P if Q has greater information
content than P . If Q has greater information content than P then Q has fewer
1’s in its truth table than P , and in that case the table for P → Q would
contain at least one row where P has the value 1 and Q has the value 0, which
yields the value 0 by the implication. This can be formulated as a theorem.

Theorem 3.17
Let P and Q be two sentences. Assume that Q has greater information con-
tent than P . Then Q cannot follow from P .
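Since the measure is just the number of satisfying rows, it is easy to compute by enumeration. The sketch below (the helper name count_models is our own, chosen for illustration) counts models for a tautology, a categorical sentence, a contingent sentence and a contradiction.

from itertools import product

def count_models(sentence, n_atoms):
    # Number of rows (interpretations) that satisfy a sentence given as a
    # Boolean function of its atoms; fewer satisfying rows = more information.
    return sum(sentence(*row) for row in product([False, True], repeat=n_atoms))

print(count_models(lambda p, q: (p and q) == (not ((not p) or (not q))), 2))  # 4: tautology, no information
print(count_models(lambda p, q: p and q, 2))                                   # 1: categorical
print(count_models(lambda p, q: p or q, 2))                                    # 3: contingent
print(count_models(lambda p: p and not p, 1))                                  # 0: contradiction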

Exercises
3.15 Which sentence contains the most information: p ∧ (q → r) or p ∨ (q → r)?
3.16 Which sentence contains the most information: (¬p ∧ ¬q) ∨ r or (p ∧ q) ∨ r?
3.17 Which sentence contains the most information: p ∨ ¬p or p ∧ ¬p?

Revise & Reflect

1. In what order are the connectives evaluated in the sentence
¬A → B ↔ C ∨ D ∧ E?
2. How does an interpretation relate the real world to the propositional
variables in a sentence?
3. What is the difference between syntax and semantics?
4. Is the information content of a sentence independent of interpreta-
tion?

? Evaluation priority can also be thought of as how tightly bound connectives
are to their arguments. Tightest bound is ¬, thereafter ∧, ∨, →, ↔.
? An interpretation assigns to each propositional variable the value
1 or 0. The meaning of the propositional variables in a domain is
provided by a lexicon that assigns a proposition from the domain to
each variable. When the value assigned to each variable matches the
truth value of the corresponding propositions assigned by the lexicon,
then the interpretation represents the logical conditions of the real
world domain. An interpretation may make a sentence true in one
domain but not in another, unless the sentence is a tautology, and
then it is true in all interpretations in all domains.
? The symbols in a language’s alphabet are strung together according
to rules (grammar) which ensure the resulting strings (sentences) are
well formed (grammatical). That is syntax. The strings of symbols
are meaningless until they are made to correspond with objects in
a domain and relationships between those objects. Variables map
to propositional objects or truth values, connectives map to logical
relationships. That is semantics.
? The truth value of a sentence is determined by the semantic rules of
evaluation and the values that an interpretation assigns to its vari-
ables. A truth table represents all possible interpretations, each row
representing one interpretation. The evaluations in the table indicate
the number of interpretations that satisfy the sentence. Information
content, as defined, is inversely proportional to the number of satis-
fying interpretations.
? Check that you can explain all the ‘Concepts Covered’ listed at the
beginning of the chapter.

CHAPTER 4

Deductions and Arguments


Learning Objectives
After working through this chapter you should be able to:
• determine whether a sentence follows logically from other sentences
and be able to verify this using truth tables
• identify and exemplify three common flaws in an incomplete or false
line of argument.

Concepts covered
Logical consequence Logical equivalence Deduction theorem
Enthymematic Insinuation Commutative laws
Associative laws Distributive laws Idempotency
Distribution laws Dominance laws Syllogism
Modus ponens Modus tollens Duality

This chapter examines how to conduct formal deduction in sentence logic.


There are two kinds of methods, those that utilise the semantic properties of
sentence logic and those that utilise various axioms and schemata of rules that
govern how formal proofs and deductions may be done.
In some sense, the simplest way to carry out a deduction is to use truth
tables. So we will start with these and show how they can be used to deduce
propositions. However the use of truth tables is tremendously time consuming,
especially if large numbers of sentences need working on simultaneously. A
further disadvantage (except for special cases) is that truth tables cannot be
used to draw conclusions in predicate logic. The next chapter will therefore
present four methods based on rule schemata.

1. Logical Consequence
Truth tables can be used to analyse various types of reasoning, and it will
become apparent that this method can detect faulty reasoning and hidden
assumptions. First let us be clear about what logical consequence is.

Definition 4.1
A sentence Q is a logical consequence (follows logically) of a set of sen-
tences (premises) {P1 , P2 , . . . , Pn } iff every model for {P1 , P2 , . . . , Pn } is
also a model for Q. When sentence Q is a logical consequence of a set of
sentences this will be denoted {P1 , P2 , . . . , Pn } ² Q.1

This means that every assignment of truth values that satisfies all premisses,
also satisfies the conclusion.

Definition 4.2
A sentence Q is logically true (or a tautology) iff ∅ ² Q, where ∅ denotes
the empty set. When a sentence Q is logically true, this will be denoted ² Q.

Note that the truth value of the conclusion is only of interest when the pre-
misses are true;2 not when the interpretation is not a model for the premisses.

Definition 4.3
The sentences P and Q are logically equivalent iff {P } ² Q and {Q} ² P .
When the sentences P and Q are logically equivalent this will be denoted
P ⇔ Q.

Logical consequence is sometimes denoted by first writing down the premisses
one above the other, drawing a horizontal line, and writing the symbol ∴ which
means conclusion and finally adding the intended conclusion, like this:
P1
P2
(4.1) ..
.
Pn
∴Q
Sometimes this is written like this instead:
P1 , P2 , . . . , Pn ∴ Q.
1
Until the chapter on set theory, curly brackets will usually be omitted for this kind of
statement. In this case that means that {P1 , P2 , . . . , Pn } ² Q is written P1 , P2 , . . . , Pn ² Q.
2
This also means that all sentences follow logically from a false premiss.

Example 4.4
Consider the following argument:
Castro is a dog if God has so decided. But Castro
is not a dog. Therefore God has not so decided.
The sentences here are composed of the atoms p and q, where p stands
for “Castro is a dog” and q for “God has so decided”. The argument can
now be written as
q→p
(4.2) ¬p
∴ ¬q
To say that this is correctly reasoned – regardless of whether the premisses
q → p and ¬p are actually true – means that whenever q → p and ¬p both
are true, ¬q is also true. This is precisely what is meant by
(4.3) {q → p, ¬p} ² ¬q.
It is now possible to check whether ¬q follows logically from {q → p, ¬p}
either by a truth table
p q [(q → p) ∧ ¬p] → ¬q
0 0 1 1 1 1 1
0 1 0 0 1 1 0
1 0 1 0 0 1 1
1 1 1 0 0 1 0
or by noting that q → p is equivalent to ¬p → ¬q according to the earlier
result about transpositive statements (see (2.4) on page 24). So if the premiss
¬p holds, then ¬q thereby holds too. This means that the conclusion follows
logically from the premisses.
Note that this analysis of the flawlessness of the reasoning above, was done
without concern for whether the premisses q → p and ¬p are mathematically
true or not. º
The alert reader will probably have noticed the use of the → symbol for
implication in the previous table rather than the ² symbol for logical conse-
quence. The validity of doing this to argue a proof of logical consequence is
itself a consequence of the following meta-logical result known as the deduc-
tion theorem.3

3
Once it is understood how truth tables work, the theorem seems so obvious that it can
be hard to see why it even needs proving. But remember that this book intends a formal
presentation of logic. Everything must be proved – even things that appear obvious.

Theorem 4.5
{P1 , P2 , . . . , Pn } ² Q iff ² (P1 ∧ P2 ∧ . . . ∧ Pn ) → Q.

Proof: Show that ² (P1 ∧ P2 ∧ . . . ∧ Pn ) → Q if {P1 , P2 , . . . , Pn } ² Q. The
proof of the converse is similar.
Assume that {P1 , P2 , . . . , Pn } ² Q. It follows from the definition of logical
consequence that every model m for {P1 , P2 , . . . , Pn } is a model for Q. But
m is a model for {P1 , P2 , . . . , Pn } iff m is a model for P1 ∧ P2 ∧ . . . ∧ Pn ,
according to the semantics for ∧. According to the semantics for →, m is
a model for (P1 ∧ P2 ∧ . . . ∧ Pn ) → Q exactly when it holds that if m is a
model for P1 ∧ P2 ∧ . . . ∧ Pn then it is a model for Q. º
That (4.3) is a correct argument means therefore that [(q → p) ∧ ¬p] → ¬q is
a tautology. By this theorem the truth table for this tautology could equally
well have been constructed in the following way.
p q q→p ¬p ¬q
0 0 1 1 1
0 1 0 1 0
1 0 1 0 1
1 1 1 0 0
Only the interpretation of the first row maps the expressions q → p and
¬p to truth value 1. Since this interpretation also maps ¬q to 1, every model
for q → p and ¬p is thereby also a model for ¬q.
When using this method the aim is to find all the interpretations that map
all the premisses to the value 1 and then to check whether they also map the
conclusion to 1.
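This search can be automated: enumerate the interpretations, keep those that satisfy every premiss, and check the conclusion in each. The Python sketch below (the helper name follows and the encoding of sentences as Boolean functions are our own illustrative choices) applies the method to the argument of Example 4.4; it returns a counter-example if one exists and None otherwise.

from itertools import product

def follows(premisses, conclusion, n_atoms):
    # Check {P1,...,Pn} |= Q by testing every interpretation: whenever all
    # premisses evaluate to 1, the conclusion must too. Returns a falsifying
    # assignment (a counter-example) if one exists, otherwise None.
    for row in product([False, True], repeat=n_atoms):
        if all(p(*row) for p in premisses) and not conclusion(*row):
            return row
    return None

# Example 4.4: {q -> p, ¬p} |= ¬q  (atom order: p, q)
premisses = [lambda p, q: (not q) or p, lambda p, q: not p]
print(follows(premisses, lambda p, q: not q, 2))   # None: the consequence holds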
Now let us look at how truth tables can be used to discover faulty reasoning.

Example 4.6
Consider the following argument. The letters in square brackets here stand
for primitive sentences.
Karadžić can be reborn [r] if he has been kind to all beings [k].
Karadžić has not been kind to all beings.
Therefore, Karadžić cannot be reborn.

This can be written with symbols like this

k→r
¬k
∴ ¬r

Here the conclusion does not follow from the premisses by any obvious
logical principle, since from k → r, r can only be concluded if k holds. By
constructing the corresponding transpositive statement ¬r → ¬k it would
be possible to conclude ¬k if ¬r were to hold. But we do not know that
since it is ¬r we want to conclude. So the argument above looks suspiciously faulty.
However there could be some logical principle that has not been uncovered
by which the conclusion follows. This possibility can be fully investigated
using a truth table.
k r [(k → r) ∧ ¬k] → ¬r
0 0 1 1 1 1 1
0 1 1 1 1 0 0
1 0 0 0 0 1 1
1 1 1 0 0 1 0
Clearly when k and r map to 0 and 1 respectively, both premisses are true
but the suggested conclusion ¬r then maps to 0. This constitutes a counter-
example. Therefore the argument is not correct and the conclusion does not
follow from the premisses. This can be expressed as
Maybe Karadžić can be reborn after all.
The argument is correct however if the first premiss is replaced by

Karadžić can be reborn
only if he has been kind to all beings.
that is r → k. This formulation yields the following truth table
k r [(r → k) ∧ ¬k] → ¬r
0 0 1 1 1 1 1
0 1 0 0 1 1 0
1 0 1 0 0 1 1
1 1 1 0 0 1 0
º
If the implication is a tautology then it is sometimes called valid although
this word is usually reserved for predicate logic. Regardless of whether the
antecedent is actually true or not, it is important that if the antecedent is true
then the consequent is also true. Since an implication P → Q is automatically
true if P is false, the whole sentence is automatically true if any one of the
premisses P1 , . . . , Pn is false, regardless of the truth value of Q. It is only the
cases (rows, interpretations) in which all the premisses map to 1 that need
to be checked. If in each of these cases Q also maps to 1 then the sentence
(P1 ∧ P2 ∧ . . . ∧ Pn ) → Q is a tautology and according to the deduction theorem
the conclusion thereby follows from the set of premisses. Otherwise, if Q maps

to 0 in any case when P1 , . . . , Pn all map to 1, then a counter-example has
been found that shows that the conclusion does not follow from the set of
premisses.

Definition 4.7
Construct a model c for a set {P1 , P2 , . . . , Pn } of sentences, such that c is a
counter-model to a sentence Q. The existence of c shows that Q cannot follow
logically from {P1 , P2 , . . . , Pn }. In this case c is called a counter-example
to {P1 , P2 , . . . , Pn } ² Q. That Q does not follow from {P1 , P2 , . . . , Pn } is
denoted {P1 , P2 , . . . , Pn } 2 Q.

In order to show that logical consequence holds, it is necessary to show that
the consequent Q maps to 1 in all cases when the antecedent maps to 1, but
to show that logical consequence does not hold, it is enough to find just one
counter-example, that is, any interpretation that maps the premisses to 1 and
the conclusion to 0.

Example 4.8
Is the following argument correct, that is, does the conclusion follow from
the premisses?
If A is a triangle then the sum of the angles of A is 180◦ .
(4.4) If A is a trapezium then the sum of the angles of A is 360◦ .
∴ Therefore no triangle has four corners.
º
Solution: Clearly a triangle never has four corners, but the question here
is whether this conclusion follows from the two premisses given. In order
to use the method with truth tables the following symbols can be used to
represent the propositions that the premisses and conclusion are composed
of:
T : the object A is a triangle
F : the object A has four corners
t : the sum of the angles of object A is 180◦
f : the sum of the angles of object A is 360◦
This can be formalised
T →t
F →f
∴ ¬(T ∧ F )
Note that No triangle has four corners is translated as A cannot both
be a triangle and have four corners, that is, it is not the case
that A is a triangle and that A has four corners. Compare this with the

exposition of the previous example, the premisses P1 and P2 are here T → t
and F → f respectively and the conclusion Q is ¬(T ∧ F ). This can now be
written
P1 ∧ P2 → Q.
Next examine only the cases where both P1 and P2 are true, since if either of
the premisses is false, the implication P1 ∧ P2 → Q is immediately true.
Clearly the premiss T → t is true when T and t both map to 1 or when T
maps to 0, which yields three true cases for this premiss. The premiss F → f
is true in the same way when F and f both map to 1 or when F maps to
0, which independently also yields three true cases for this premiss. It is
therefore sufficient to check through the possible combinations of these two
sets of 3 cases of true values. These combinations are listed for T, t, F, f in
the table below. (The table contains 3 · 3 = 9 combinations, saving us from
writing out the entire table of 2^4 = 16 rows.)

T F t f [(T → t) ∧ (F → f )] → ¬(T ∧ F)
0 0 0 0 1 1 1 0
0 0 0 1 1 1 1 0
0 0 1 0 1 1 1 0
0 0 1 1 1 1 1 0
0 1 0 1 1 1 1 0
0 1 1 1 1 1 1 0
1 0 1 0 1 1 1 0
1 0 1 1 1 1 1 0
1 1 1 1 1 0 0 1
In the last row the implication assumes the truth value 0, and is therefore
not a tautology. In spite of this the proposition in the consequent
is correct (as is well known from geometry), but it is not a valid conclusion.
The error is that the consequent does not follow from the given premisses
– maybe A is both a triangle and has four corners after all – there is a
premiss missing. Clearly the implication maps to 0 only in the last row,
where T, F, t, f all map to 1. In particular it holds in this last case
that both t and f are true, that is, that t ∧ f is true. Our geometrical intuition
tells us that the angles of an object cannot both sum to 180◦ and 360◦ , so
this case is really of no interest. And so the addition of the extra premiss
– the angles of an object cannot simultaneously sum to both 180◦ and 360◦ –
excludes the last row of the table above, thereby allowing the conclusion
to follow from the premisses. The extra premiss can be formalised ¬(t ∧ f ),
which is equivalent to t → ¬f . It is now obvious that if P3 is the premiss

t → ¬f then the implication


P1 ∧ P2 ∧ P3 → Q
is a tautology:
T F t f [(T → t) ∧ (F → f ) ∧ (t → ¬f )] → ¬(T ∧ F)
0 0 0 0 1 1 1 1 1 0
0 0 0 1 1 1 1 1 1 0
0 0 1 0 1 1 1 1 1 0
0 0 1 1 1 1 0 1 1 0
0 1 0 1 1 1 1 1 1 0
0 1 1 1 1 1 0 1 1 0
1 0 1 0 1 1 1 1 1 0
1 0 1 1 1 1 0 1 1 0
1 1 1 1 1 1 0 1 0 1
It is worth noting that this example could have been approached from the
other direction in order to find a counter-example. A counter-example re-
quires the consequent to be false, and since Q is ¬(T ∧ F ) it maps to 0 only
when T and F both map to 1. So it would have been easier in this example
to check only those cases, to see whether all the premisses there assume the
value 1 or not. This would have reduced the table to
T F t f [(T → t) ∧ (F → f )] → ¬(T ∧ F )
1 1 0 0 0 1 0 1
1 1 0 1 0 1 0 1
1 1 1 0 0 1 0 1
1 1 1 1 1 0 0 1
with only four rows and where, as above, the last row contains the important
counter-example to the erroneous logical consequence. º

Exercises
Determine whether the conclusions below follow from the premisses. For any
that does not, construct a counter-example.
4.1 {p, q} ² q ∧ r
4.2 p²p→p
4.3 ² ¬p ∨ ¬¬p
4.4 ²p∧r
4.5 {p, r} ² (p ↔ r)
4.6 {¬p ∨ ¬q, p} ² ¬q
4.7 {¬(p → q)} ² p ∧ ¬q
4.8 {p ∧ q} ² ¬p ∨ p

4.9 {(p → q), r} ² (¬q → ¬p) ↔ r


4.10 {(p ∧ s) ∨ (q ∧ r)} ² (p ∨ q) ∧ (p ∨ r) ∧ (s ∨ q) ∧ (s ∨ r)

2. Incomplete Arguments
Deductions like the one in Example 4.8, where a premiss is missing but where
the conclusion actually does correspond to reality, are very common in everyday
speech. For example:
If I do not study this evening I will not pass the exam on Saturday.
Therefore I must study this evening.
Here the premiss
I want to pass the exam on Saturday
is omitted. Incomplete reasoning like this is called enthymematic. The con-
cept enthymeme was already in use in the context of Aristotle’s classical syllo-
gisms. An English phrase commonly used to refer to incomplete reasoning
is train of thought.
When in everyday speech someone asserts
P1
P2
(4.5) ..
.
Pn
∴Q
they usually mean only that Q follows logically from {P1 , P2 , . . . , Pn } without
meaning that P1 , . . . , Pn actually are true. The complete intended meaning of
(4.5) is therefore
(1) P1 , . . . , Pn are true,
(2) P1 ∧ . . . ∧ Pn → Q is a tautology,
(3) therefore Q is true.
Note however that P1 , . . . , Pn from a purely logical perspective can be either
true or false, so determining whether (1) holds is not a question of logic, but
rather a question of correspondence with what we loosely call reality (empirical
facts or mathematical facts), a question about belief, or something that is
dependent on earlier assumptions.
In practice, communication would be an impossible enterprise were it not
possible to implicitly assume premisses. Following any reasoning would be
most arduous. We would lose track and miss the salient points in an argumentation
if every obvious premiss had to be stated explicitly. Often such premisses

are generally accepted facts, something that has just been mentioned, or the
minimal premiss needed to fill the gap in the argumentation.
On the other hand, it is not hard to imagine reasoning where the conclusion
corresponds with known facts but where the reasoning is impossible to repair
by adding any further premiss other than a premiss that is synonymous with
the conclusion itself, or that contradicts one of the premisses that has already
been declared. Reasoning of this kind is not just incomplete, it is either vacuous
or totally erroneous.

Example 4.9
As an example of an omission from an argumentation that is precisely what
is logically required in order for the reasoning to be correct, imagine being
offered coffee just before going to bed early. You might reply to such an offer
with.
No, thank you [¬K].
(4.6)
When I drink coffee [K] I find it hard to fall asleep [¬S].
This incomplete argumentation can be written (note that in the natural
language formulation above, the conclusion is spoken first)
K → ¬S
∴ ¬K
What is missing (and implicit) and needed for the argumentation to be
complete is the premiss S (“I want to fall asleep easily”) and (4.6) is thereby
an abbreviation for
K → ¬S
(4.7) S
∴ ¬K
Note that S is the minimal4 premiss (without involving K) needed to be
able to draw the conclusion ¬K from K → ¬S (since K → ¬S is equivalent
to S → ¬K). This can also be seen in the truth table
K S [K → ¬S] → ¬K
0 0 0 1 1 1 1
0 1 0 1 0 1 1
1 0 1 1 1 0 0
1 1 1 0 0 1 0
where it is only in one of the rows where S maps to 0, namely row 3, that the
implication assumes the value 0. By including S in the set of premisses, the
antecedent on this row would evaluate to 0 instead of 1 as it does in row 3 above. With
4
Minimal in the sense that as few rows as possible are excluded in the corresponding truth
table for [K → ¬S] → ¬K.

a false set of premisses, the implication would immediately hold for the
interpretation of this row. If we wished to exclude only the 3rd row in the
table in this way, rather than both the 3rd and the 1st , then we could assert
premiss ¬(K ∧ ¬S) which is equivalent to ¬K ∨ S, but since ¬K is the
desired conclusion this is of less interest. º
Example 4.10
As an example of an enthymematic reasoning where the conclusion is also
implicit, imagine replying only
(K → ¬S) ∧ S
to the offer of coffee, rather than (4.6), or even shorter just
(4.8) K → ¬S.
This is sufficient explanation for the complete argumentation (4.7). Whoever
just offered you coffee [K] would wonder what your reply (4.8) has to do
with K, and sees that the only conclusion that can be drawn from (4.8) that has to
do with K (i.e. the only situation where K occurs in the consequent of an
implication) is ¬K, formed by taking the contra-positive of the reply
S → ¬K.
But in order to be able to draw the conclusion ¬K the premiss S is required.
The most reasonable (i.e. simplest) implicit conclusion in (4.8) is therefore
¬K, but then the premiss S is implicit. This results in the complete argu-
mentation
K → ¬S
S
∴ ¬K
Reasoning with statements, possibly together with implicit propositions,
that constitute premisses for a conclusion that is preferably left unsaid, is
called insinuation. º
Is it then possible to repair all incorrect and incomplete reasoning by adding
some suitable premiss? The following example shows that it is not. A necessary
extra premiss could very well contradict other premisses.

Example 4.11
Reasoning of this kind
P → ¬P
is completely fallacious and impossible to repair into a tautology through the
addition of any further premisses, as can be understood from the following

argument. Assume that it were possible to repair in this way. Then every
extra premiss Q, for which
(4.9) P ∧ Q → ¬P
is a tautology, would then have to be such that P ∧ Q may never map to
1 when ¬P maps to 0 because that would make the implication false – a
counter-example. Clearly P maps to 1 exactly when ¬P maps to 0, which is
unproblematic when Q maps to 0, because then P ∧ Q also maps to 0, but
it is problematic when Q maps to 1, because then P ∧ Q maps to 1 and ¬P
maps to 0; a counter-example to (4.9) and to the assumption that adding a
premiss can repair (4.9). All cases are shown in the truth table below, with
the counter-example in the last row.
P Q P ∧Q → ¬P
0 0 0 1 1
0 1 0 1 1
1 0 0 1 0
1 1 1 0 0
But could not the value of Q be made dependent on P in such a way that Q
never maps to 1 when P does? If (4.9) is to be a tautology, then Q must depend
on P in such a way that the last row (the only row where the implication has
the truth value 0) never occurs. For instance, we can let Q be ¬P . Then, in all
the rows, P ∧ Q would clearly have the truth value 0, i.e. P ∧ Q is a contradiction,
which maybe was not such a great surprise.
A concrete example of completely erroneous reasoning that is not repara-
ble through the addition of any premiss is the following:
Charley does not swing the cat [¬J] or the cat claws Charley [D].
Therefore Charley swings the cat and the cat does not claw Charley.
This argumentation can be written
¬J ∨ D
∴ J ∧ ¬D
The reasoning here is of the kind P → ¬P since if P is ¬J ∨ D then
the negation is ¬(¬J ∨ D), which according to de Morgan’s laws is equivalent
to J ∧ ¬D. The truth table for the corresponding implication becomes
J D ¬J ∨ D → J ∧ ¬D
0 0 1 0 0
0 1 1 0 0
1 0 0 1 1
1 1 1 0 0

The extra premiss required is one that allows only row three to hold true,
i.e. the premiss J ∧ ¬D, and this yields the correct argumentation
¬J ∨ D
(4.10) J ∧ ¬D
∴ J ∧ ¬D
But according to de Morgan’s laws ¬(¬J ∨ D) ⇔ J ∧ ¬D holds, so the
premisses are each other’s negations. Reasoning as in (4.10) is reasonable,
but meaningless. º

Example 4.12
An example of reasoning that builds on implicit generally accepted facts is
the following:
Charley is taller than Mr. Archer [P1 ]. Mr Archer is taller than the target [P2 ].
Therefore Charley is taller than the target [Q].
which can be written
P1 ∧ P2 → Q.
The hidden assumption here is the mathematical property that the ordering
relationship > fulfills, namely that for all numbers x, y, z it holds that 5
(x > y) ∧ (y > z) → x > z.
Without the mathematical axiom it is perfectly logically possible that P1
and P2 are true while Q is false. This can be illustrated by the following
argumentation:
Charley resembles Mr. Archer. Mr. Archer resembles Bill the baboon.
Therefore Charley resembles Bill the baboon.
This depends on the property with which resemblance is associated, which
can be different in each comparison. In the example above it is perhaps
their facial properties that create the resemblance between Charley and Mr.
Archer. In the second comparison however it is probably Mr. Archer’s extra
long arms, hairy body and hunched back that create the resemblance with
the baboon, but Charley need not share these last three properties, since
they are independent of facial features (unless you are a baboon). º

Example 4.13
It is not only logically possible but it is also possible in reality that A
resembles B, and B resembles C even though A does not resemble C:
5
This is normally called transitivity.

[Figure: three shapes – a circle A, a shape B combining a round part with an acute angle, and a purely angular shape C.]

The circle A and the figure B resemble one another since both contain
something round. Figure B and figure C also resemble one another since
both have an acute angle. But A and C do not resemble each other since A
is only round and C is only angular. º

Example 4.14
Yet another fallacious way of arguing with unordered relationships like re-
semblance is the following:

Charley resembles a stick figure. This stick figure resembles a graph.
Therefore Charley resembles a graph.

This also involves a shift in the properties associated with the comparison, but in
addition the identity of the objects being compared is left undefined. Only
Charley is identified. The stick figure can be drawn as a graph. However ’a
graph’ could be any graph, and in the conclusion, in the mind of the reader, ’a
graph’ assumes whatever the reader deems to be an archetypal graph rather
than just a graph like a stick figure. This makes the conclusion all the more
absurd. Comedians frequently turn logic on its head to great effect. º

3. Some Important Logical Relationships


The summary below contains many of the most useful logical relationships
for proofs and transformations of logical operators. They are stated as one
theorem and their proofs are all easily shown with truth tables.

Theorem 4.15
For all sentences P, Q and R and every tautology T0 and contradiction F0
it holds that
1. ¬¬P ⇔ P                                   The law of double negation
2. ¬[P ∧ Q] ⇔ [¬P ∨ ¬Q]                      de Morgan's laws
   ¬[P ∨ Q] ⇔ [¬P ∧ ¬Q]
3. P ∧ Q ⇔ Q ∧ P                             Commutative laws
   P ∨ Q ⇔ Q ∨ P
4. (P ∧ Q) ∧ R ⇔ P ∧ (Q ∧ R) ⇔ P ∧ Q ∧ R     Associative laws
   (P ∨ Q) ∨ R ⇔ P ∨ (Q ∨ R) ⇔ P ∨ Q ∨ R
5. P ∧ (Q ∨ R) ⇔ (P ∧ Q) ∨ (P ∧ R)           Distributive laws
   P ∨ (Q ∧ R) ⇔ (P ∨ Q) ∧ (P ∨ R)
6. P ∧ P ⇔ P                                 Idempotency laws
   P ∨ P ⇔ P
7. P ∧ T0 ⇔ P                                Identity laws
   P ∨ F0 ⇔ P
8. P ∧ ¬P ⇔ F0                               Law of contradiction
   P ∨ ¬P ⇔ T0                               Law of the excluded middle
                                             (inverse laws)
9. P ∧ F0 ⇔ F0                               Dominance laws
   P ∨ T0 ⇔ T0
10. P ∧ (P ∨ Q) ⇔ P                          Absorption laws
    P ∨ (P ∧ Q) ⇔ P
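Such truth-table checks are easy to mechanise. The Python sketch below verifies de Morgan's laws (item 2 of the theorem) by enumerating every truth assignment; the representation of sentences as Python functions and the name equivalent are illustrative assumptions of this sketch, not part of the language SL.

from itertools import product

def equivalent(f, g, n_atoms):
    # True if f and g take the same truth value under every assignment.
    return all(f(*v) == g(*v) for v in product([True, False], repeat=n_atoms))

# not(P and Q) <=> (not P) or (not Q)
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q), 2))   # True
# not(P or Q) <=> (not P) and (not Q)
print(equivalent(lambda p, q: not (p or q),
                 lambda p, q: (not p) and (not q), 2))  # True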

The truth tables on page 44 showed that the tables for ∧ and ∨ are the opposite
of one another. It is therefore useful to define duality as follows.

Definition 4.16
Let S be a sentence that only contains atoms and the symbols ¬, ∧ and
∨. Then the corresponding dual sentence S d is the sentence obtained by
replacing in S every occurrence of ∧ with ∨, every occurrence of ∨ with ∧
as well as every occurrence of T0 with F0 and F0 with T0 , where T0 and F0
denote an arbitrary tautology and contradiction respectively.

For example if S represents a sentence with the form P ∧ ¬Q then it holds


that
S d = P ∨ ¬Q.
Note that ¬Q is unchanged in S and S d . In the same way it holds that ([P ∧
¬P ] ↔ F0 )d is a sentence with the form [P ∨ ¬P ] ↔ T0 .
When a sentence also contains other propositional connectives than just
¬, ∧ and ∨, its dual sentence can be found by first transforming the sentence
to expressions that only contain the connectives ¬, ∧ and ∨, and thereafter
applying the definition of duality.
For example, let S represent P → Q. Here S can be rewritten using only
the connectives ∨ and ¬ as follows:
[P → Q] ⇔ [¬P ∨ Q],
from which the dual sentence of S becomes ¬P ∧ Q.
The opposite truth tables for ∧ and ∨ give rise to the general duality
principle, namely that every assertion that is a theorem has a corresponding
dual assertion that is also a theorem. We will not prove this here.

Theorem 4.17
Let S and T be sentences that only contain the connectives ¬, ∧ and ∨. Then
it holds that
[S ↔ T ] ⇔ [S d ↔ T d ].

Examples of application of the duality principle are de Morgan's laws. Here


¬[P ∧ Q] ↔ [¬P ∨ ¬Q] and ¬[P ∨ Q] ↔ [¬P ∧ ¬Q] are one another’s dual
sentences.
In the same way the following principles can be shown for the purpose of
reasoning.

Theorem 4.18
For all sentences P, Q and R and every contradiction F0 the following holds
1. {P, P → Q} ² Q modus ponens
2. {P → Q, Q → R} ² P → R syllogism principle
3. {P → Q, ¬Q} ² ¬P modus tollens
4. {P ∨ Q, ¬P } ² Q disjunctive syllogism
5. {¬P → F0 } ² P contradiction principle
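These deduction principles can be checked semantically in the same brute-force way. The Python sketch below tests logical consequence by running through all truth assignments; the function name entails and the encoding of P → Q as (not p) or q are illustrative choices only.

from itertools import product

def entails(premisses, conclusion, n_atoms):
    # True if every assignment satisfying all premisses also satisfies the conclusion.
    for v in product([True, False], repeat=n_atoms):
        if all(p(*v) for p in premisses) and not conclusion(*v):
            return False
    return True

# 1. Modus ponens: {P, P -> Q} entails Q
print(entails([lambda p, q: p, lambda p, q: (not p) or q],
              lambda p, q: q, 2))                        # True
# 3. Modus tollens: {P -> Q, not Q} entails not P
print(entails([lambda p, q: (not p) or q, lambda p, q: not q],
              lambda p, q: not p, 2))                    # True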

Revise & Reflect

1. How can a counter-model be used as proof and what does it prove?


2. Which are the parts of a logical argument, and which of them are
commonly left out in human dialogue?

? A counter-example is used to show that a proposed entailment does


not hold. To do this the model must be constructed so that it satisfies
the premisses and falsifies the conclusion. This proves that not all
models satisfy the proposed entailment and therefore the definition
of logical truth is not fulfilled.
? Premisses may consist of atomic propositions or logical relationships
between atoms. Axioms may also be used as instances of axiom
schemata. Deduction rules lead to a conclusion that depends on the
premisses. In dialogue people normally omit a large portion of an ar-
gumentation and express only the least obvious parts, leaving the rest
to the listener. Any part can be omitted and when it is the conclusion,
the argument is called an insinuation.
? Check that you can explain all the ‘Concepts Covered’ listed at the
beginning of the chapter.

CHAPTER 5

Rule Systems

Learning Objectives
After working through this chapter you should be able to:
• construct a formal logical derivation of a sentence from a set of pre-
misses using three different systems of predefined rules.

Concepts covered

Axiomatic schema Derivation Proof


Semantic tableau Open and closed paths Resolution method
Sound Complete Conjunctive normal form
Clause Resolution rule Literal
Premiss Assumption Indirect derivation
Deducibility Introduction and elimination rules Natural deduction

This chapter examines four common rule systems for logical reasoning. These
four systems are axiomatic systems, semantic tableaux, resolution, and natural
deduction. The point of having a rule system is to be able to reason more or
less mechanically according to its strict rules. No step in a deduction may
occur that does not exactly correspond to the application of a rule that specifies
how the deduction may be constructed. The advantage of this is partly
that it helps eliminate faulty reasoning and partly that such systems lend
themselves well to automated theorem proving, as with computers. Since the
calculations are purely mechanical and do not require any real intelligence,
they can be implemented in a fairly straightforward way in various kinds of
computer programs.

1. Axiomatic Systems
In the formalists’1 original system a number of propositional logical axioms
that were considered reasonable were defined. Also, one or more rules of infer-
ence were introduced. Such systems are quite difficult to work with, so they
are described only briefly next, before moving on to the other main systems of
the chapter.
A typical axiomatic system – the H system – is shown below.2
Axiom schema:
ax1. P → (Q → P )
ax2. (P → Q) → ((P → (Q → R)) → (P → R))
ax3. P → (Q → P ∧ Q)
ax4. P → P ∨ Q
ax5. Q → P ∨ Q
ax6. P ∧ Q → P
ax7. P ∧ Q → Q
ax8. (P → R) → ((Q → R) → (P ∨ Q → R))
ax9. ¬¬P → P
ax10. (P → Q) → ((P → ¬Q) → ¬P )

Deduction rule (modus ponens):


P P →Q
Q

Note that this description of the axiomatic H system and its deduction rule,
once again uses upper case letters P , Q and R, rather than the lower case sym-
bols of the alphabet of the language SL. The axioms and their accompanying
rule are intentionally written in this way and similarly to the syntactic schema
on page 42, the characters P , Q and R should be interpreted as variables that
can be assigned any sentences. When the variables are assigned sentences they
constitute instances of the axiom- and rule schema. These instances are the
real axioms and the actual application of the rule. The schema itself does not
actually contain any written axioms, but rather constitutes a skeleton or tem-
plate for what they should look like. The schema is written in a meta-language
for SL. There is nothing mysterious about this, it is merely a convenient way
to denote the infinite number of axioms and rule applications that are possible
in H.
The axioms are the system’s point of departure and the rule, modus ponens,
is used to draw conclusions. The axioms appear to be intuitively reasonable.
1
The formalistic school is briefly described in the introduction.
2
From S.C. Kleene, Introduction to Metamathematics, North-Holland, 1971, p. 82.

For example it seems perfectly reasonable to believe the following instantiation


of axiom ax7.

If Castro eats fish and Ernesto is a snake, then Ernesto is a snake.

Even the rule, modus ponens, seems reasonable. Assume that the following
holds:

Castro is a cat
If Castro is a cat then he is ever so cuddly

If we were to meet somebody who asserted this, then we would probably think
him irrational if he simultaneously maintained that

Castro is not ever so cuddly.

The idea with the H system is to enable reasoning to be carried out quite
mechanically according to specified rules. Each new line in an argumentation
must either be a premiss, correspond to an axiom, or follow by an application
of modus ponens.
This is what constitutes a deduction in the H system.

Definition 5.1
Assume that P1 , ...Pn , Q are sentences in SL. A deduction in H of the
conclusion Q from the premisses P1 , ...Pn , is a sequence of sentences
S1 , ..., Sm where one of the following holds:
(i) Si is included in the set of premisses
(ii) Si is an axiom
(iii) the sequence of sentences includes Sj and Sk , where Sk = Sj → Si and j, k < i.
This means that Si follows from an application of the rule modus ponens.
(iv) Sm is Q.

That Q is deducible from P1 , ..., Pn in the H system is denoted


{P1 , ..., Pn } `h Q.

Example 5.2
Now let’s deduce the sentence s from the sentences p, q and p → (q → s) in the
H system, i.e. show that {p, q, p → (q → s)} `h s.

1. p Premiss
2. q Premiss
3. p → (q → s) Premiss
4. q→s This follows from steps 1 and 3 together with
an application of modus ponens
5. s This follows from steps 2 and 4 together with
an application of modus ponens

The deduction is thus p, q, p → (q → s), q → s, s.


Note that this deduction uses no axioms, only the premisses and the deduction
rule, modus ponens. Lines 1 to 3 in the deduction constitute only a list of
the premisses. In line 4 two of the premisses are used together with modus
ponens in order to draw a conclusion that is then used together with another
premiss and a second application of modus ponens in order to arrive at the
conclusion in line 5. The use of the schema above is clear. In order
to arrive at the conclusion q → s in line 4, p instantiates P and q → s
instantiates Q in the deduction rule. Similarly s in line 5 is concluded by
instantiating P with q and Q with s in the deduction rule. º
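The mechanical character of such deductions can be illustrated with a small checker. The Python sketch below verifies only conditions (i) and (iii) of Definition 5.1 (premisses and modus ponens; matching against the axiom schemata is omitted for brevity); the tuple representation of sentences and the name check_deduction are illustrative assumptions, not part of the H system.

def check_deduction(premisses, steps):
    # Each step must be a premiss or follow by modus ponens from two earlier steps.
    seen = []
    for s in steps:
        ok = s in premisses or any(('imp', a, s) in seen for a in seen)
        if not ok:
            return False
        seen.append(s)
    return True

# Example 5.2: deduce s from p, q and p -> (q -> s), written as ('imp', 'p', ('imp', 'q', 's')).
prem = ['p', 'q', ('imp', 'p', ('imp', 'q', 's'))]
steps = ['p', 'q', ('imp', 'p', ('imp', 'q', 's')), ('imp', 'q', 's'), 's']
print(check_deduction(prem, steps))   # True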

A formal proof in sentence logic is a deduction of a conclusion from an empty


set of premisses. This can be written ∅ `h Q, where ∅ denotes the empty
set, but for clarity’s sake here is a full definition, which differs only from the
previous definition in that premisses are excluded.

Definition 5.3
Assume that Q is a sentence in SL. A proof of Q in the H system is a
sequence of sentences S1 , ..., Sm where one of the following holds:
(i) Si is an axiom.
(ii) the sequence of sentences includes Sj and Sk , where Sk = Sj → Si and j, k < i.
This means that Si follows from an application of the rule modus ponens.
(iii) Sm is Q.

That Q is provable in the H system is denoted `h Q.

Example 5.4
In a similar way to the example above this is a proof of p → p in the H
system, that is `h p → p.

1. p → (p → p) (ax1)
2. [p → (p → p)] → [(p → ((p → p) → p)) → (p → p)] (ax2)
3. (p → ((p → p) → p)) → (p → p) (1,2 with
modus ponens)
4. p → ((p → p) → p) (ax1)
5. p → p (3, 4 with
modus ponens)

Note that in line 1, p instantiates both P and Q in the axiom schema ax1.
In line 2, p instantiates both P and R in the axiom schema ax2. In addition
(p → p) instantiates Q. Line 3 applies the deduction rule modus ponens to
the axioms on line 1 and 2. Line 4 uses another instantiation of the axiom
schema ax1, but this time p instantiates P and (p → p) instantiates Q. Line
5 applies the deduction rule again but this time to lines 3 and 4. º
Two fundamental meta-logical theorems now follow that allow deductions
in the H system to be used to show logical consequence, and thereby relinquish
the need for truth tables. The gist of the first theorem is that if a sentence
is deducible in the H system from a set of premisses, then the sentence is
a logical consequence of those premisses. Somewhat sloppily expressed, this
means that deductions yield correct results. The second theorem shows the
converse of the first, namely that if a sentence is a logical consequence of a set
of premisses, then a deduction of that sentence is possible in the H system.3

Theorem 5.5
(Soundness of the H system) If {P1 , ..., Pn } `h Q then {P1 , ..., Pn } ² Q.

Theorem 5.6
(Completeness of the H system) If {P1 , ..., Pn } ² Q then {P1 , ..., Pn } `h Q.

These theorems establish that the power of the H system to prove sentences
is no less than that of the truth tables (theorem 5.6), and that the H system
does not result in false conclusions (theorem 5.5). At the time when the for-
malists were investigating axiomatisations, it was believed that all theorems
of logic and mathematics could be calculated as proofs from first principles.
Although computers were still a few decades away it was clear that mechanical
calculation would be possible in some way, so those first principles needed to
be as simple as possible, preferably with only one connective, only one rule
3
Later in this book, there are proofs of the corresponding theorems for the soundness and
completeness of natural deduction. The corresponding proofs for the H system is very similar
to these.

and very few axioms. The desire to discover what such a minimal logic might
achieve led to the sacrifice of all theorems of SL requiring negation. An ulti-
mate compression of a deductive system for this subset of sentence logic was
eventually found and uses nothing but modus ponens, implication, and the
sole axiom ((p → q) → r) → ((r → p) → (s → p)). A complete system for
SL, however, requires negation, implication, modus ponens and three axioms.
Anything more is just syntactic sweetening.

2. Semantic Tableaux
A common method of carrying out deductions is with semantic tableaux.
The principal idea is that if P ² Q then {P, ¬Q} is unsatisfiable, that is, no
possible interpretation can make both the premiss and the negated conclusion
true. Therefore, the technique of semantic tableaux is first to negate
the sentences and show that no branch escapes contradictions. For instance,
assume that the following premisses hold:
Castro is a cat
If Castro is a cat then he is ever so cuddly
If Castro is ever so cuddly and he clawed Charley then he has been swung
Castro clawed Charley
With a little thought it seems reasonable to conclude that
Castro has been swung
Now by adding
Castro has not been swung
to the set of premisses yields a set of contradictory sentences. It is this that is
systematically exploited by the tableau method.
More formally, the tableau method is based on the following meta-logical
lemma4 .

Lemma 5.7
{P1 , ..., Pn } ² Q iff {P1 , ..., Pn , ¬Q} is not satisfiable

Proof: Beginning with the ’only if’ direction, assume that every model for
{P1 , ..., Pn } is also a model for Q. Assume further that i is an interpretation.
The interpretation i is either a model for {P1 , ..., Pn } or not.
Assume first that i is a model for {P1 , ..., Pn }. In that case, according to the
assumptions so far, i is a model for Q, that is, V I (Q) = 1. The interpretation
4
A theorem that is used to support other theorems is usually called a lemma.

i can therefore not be a model for ¬Q, according to the semantics for the
connective ¬. This means that i is not a model for {P1 , ..., Pn ,¬Q}.
Assume now instead that i is not a model for {P1 , ..., Pn }. In that case i
cannot be a model for {P1 , ..., Pn ,¬Q}. Because this holds for every inter-
pretation i of {P1 , ..., Pn }, it follows that {P1 , ..., Pn ,¬Q} is not satisfiable.
Now, to prove the converse direction, assume that {P1 , ..., Pn ,¬Q} is un-
satisfiable. Assume further that i is an interpretation. In this case there
must be at least one Pj such that V I (Pj ) = 0, or else V I (¬Q) = 0.
If V I (Pj ) = 0 for some j, then i is not a model for {P1 , ..., Pn }. Otherwise
V I (¬Q) = 0, so that V I (Q) = 1; hence if i is a model for {P1 , ..., Pn }, it is
also a model for Q. Because this holds for each interpretation i, it follows
that {P1 , ..., Pn } ² Q according to definition 4.1 of logical consequence. º
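For small sentences the lemma can also be checked directly with a satisfiability test. The sketch below assumes that the third-party Python library sympy is available; its satisfiable function returns a model when one exists and False otherwise, which is exactly the test the lemma refers to.

from sympy import symbols, And, Not, Implies
from sympy.logic.inference import satisfiable

p, q = symbols('p q')
# {p -> q} entails (not q -> not p): the premiss with the negated conclusion is unsatisfiable.
print(satisfiable(And(Implies(p, q), Not(Implies(Not(q), Not(p))))))   # False
# p -> q does not entail q -> p: here a counter-model is returned instead.
print(satisfiable(And(Implies(p, q), Not(Implies(q, p)))))             # e.g. {p: False, q: True}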
A deduction using the semantic tableau method is constructed by creating closed
branches in a tree through the repeated application of the rules in the following
rule schema:

1. ¬¬P: extend the branch with P.
2. P ∧ Q: extend the branch with P and then Q (on the same branch).
3. P ∨ Q: split the branch in two, one part continuing with P and the other with Q.
4. P → Q: split the branch in two, one part continuing with ¬P and the other with Q.
5. P ↔ Q: split the branch in two, one part continuing with P and Q, the other with ¬P and ¬Q.
6. ¬(P ∨ Q): extend the branch with ¬P and then ¬Q.
7. ¬(P → Q): extend the branch with P and then ¬Q.
8. ¬(P ↔ Q): split the branch in two, one part continuing with ¬P and Q, the other with P and ¬Q.
9. ¬(P ∧ Q): split the branch in two, one part continuing with ¬P and the other with ¬Q.
First let’s see how the method is used and then examine the formal details.

Example 5.8
Suppose you want to prove that ¬q → ¬p is deducible from p → q. Start, as
always in this method, by negating the conclusion. The negated conclusion
is ¬(¬q → ¬p). Then list the premisses and negated conclusion under each
other and use the rule schema above to construct the following tableau:

1. p → q               (premiss)
2. ¬(¬q → ¬p)          (negated conclusion)
3. ¬q                  (from 2 by rule 7)
4. ¬¬p                 (from 2 by rule 7)
5. p                   (from 4 by rule 1)
6. ¬p        q         (from 1 by rule 4; the branch splits)
   ×         ×
º
Clearly all the branches of the tableau end with a cross. This denotes closed
branches in the tree. A branch is closed when a sentence and its negation
occur on the same branch5 . As we shall see below, if all branches are closed,
then the set of premisses together with the negated conclusion is not satisfiable.
According to the lemma above, this means that the conclusion follows logically
from the set of premisses.
The intuition behind the tableau method is fairly simple. The method op-
erates on the connectives ∧ and ∨ by using various transformation rules, such
as de Morgan’s laws. For example the connective ∧ is eliminated for P ∧ Q by
writing its components P and Q under each other as an extension of the branch
beneath P ∧ Q. The connective ∨, on the other hand, gives rise to a bifurcation
as the branch is extended. Consider rule 4,
P → Q
¬P        Q
This is based on the relationship (P → Q) ⇔ (¬P ∨ Q).
Clearly bifurcation corresponds to a disjunction. We can therefore under-
stand the tableau in example 5.8 in the following way:
What needs to hold in order for ¬(¬q → ¬p) and p → q to be true? Well,
to start with both ¬q and ¬¬p need to be true. This is expressed in the
rule schema for ¬(P → Q). Similarly p needs to be true for ¬¬p to be true.
Furthermore, it holds that for p → q to be true, ¬p or q must be true. This
corresponds to a ∨ – bifurcation in the schema above. But assume that ¬p is
true. Then p is obviously false and therefore ¬(¬q → ¬p) cannot be true (since
5
A ’branch’ in this particular context means a sequence of sentences between the root (at the
top of the tree) and a sentence that is not connected to any sentences below. More extensive
definitions pertaining to trees can be found in expositions of graph theory.

one of the conditions for this was that p was true). Assume instead that q is
true. But nor can ¬(¬q → ¬p) be true even then (since ¬q being true was a
condition for ¬(¬q → ¬p) being true). Clearly whichever branch we choose in
an attempt to find a way to make both ¬(¬q → ¬p) and p → q true, we fail.
Success in either case would require simultaneously satisfying a sentence and
its negation, which by definition is impossible. The sentences ¬(¬q → ¬p) and
p → q can therefore not be made true simultaneously. From this, it follows by
the lemma 5.7 above that ¬q → ¬p follows logically from p → q.
Below a somewhat larger tableau is shown in which (p ∧ q) ∨ (p ∧ r) is deduced
from the premiss p ∧ (q ∨ r).
1. p ∧ (q ∨ r)
2. ¬[(p ∧ q) ∨ (p ∧ r)]
3. p
4. q ∨ r
5. ¬(p ∧ q)
6. ¬(p ∧ r)
7. ¬p        ¬r                 (the branch splits)
8. ×         ¬p      ¬q         (the ¬r branch splits again)
9.           ×       q     r    (the ¬q branch splits again)
10.                  ×     ×
Row 1 states the premiss and row 2 the negation of the conclusion to be
deduced. Rows 3 and 4 follow from the rule schema for the connective ∧.
Rows 5 and 6 follow in a similar way from the rule schema for the negated
connective ∨. Row 7 follows from row 6 by the rule for a negated ∧. The left
branch of row 7 now contains the sentence ¬p, while further up the tree the
sentence p lies on the same branch and we therefore close the branch at the
bottom with the symbol ×. It is pointless to continue building on the tree
from this point on because that branch will always contain a sentence and its
negation. However not all branches are yet closed. The right hand branch of
row 7 can be extended. In row 5 there is the sentence ¬(p ∧ q). This bifurcates
into row 8 by rule 9, where the ’leaves’ become ¬p and ¬q. Next, rule 3 for

the connective ∨ can be applied to the sentence on row 4. This makes the tree
bifurcate into row 9. Every branch in the tree now contains a sentence and its
negation and so the whole tableau is closed and the deduction ended.
Some expositions of the semantic tableau method express the rules in terms
of α- and β-rules. α-rules are conjunctive, leading the component sentences
α1 and α2 to be written beneath one another, so that the end of every branch
below the sentence α is extended with two vertices. β-rules are disjunctive,
leading the component sentences β 1 and β 2 to bifurcate from the ends of all
branches below the sentence β.

α              α1      α2           β              β1         β2
¬¬P            P                    ¬(P ∧ Q)       ¬P         ¬Q
P ∧ Q          P       Q            P ∨ Q          P          Q
¬(P ∨ Q)       ¬P      ¬Q           P → Q          ¬P         Q
¬(P → Q)       P       ¬Q           ¬(P ↔ Q)       ¬P ∧ Q     P ∧ ¬Q
                                    P ↔ Q          P ∧ Q      ¬P ∧ ¬Q

From this it is possible to show more formally how a semantic tableau is


constructed.
A semantic tableau is a tree, in other words a graph that is both connected
and has no cycles.

Definition 5.9
Given a set M = {P1 , ..., Pn ,¬Q} of sentences. A semantic tableau T for
M is constructed in the following way:
(1) Define a root vertex in T . Build a single branch originating from the
root by adding a vertex for each element in {P1 , ..., Pn ,¬Q} labelled
accordingly.
(2) Check each branch to see if it contains a sentence and its negation.
If so, mark the bottom vertex of the branch as closed using the
symbol ×. If all branches are closed the construction is finished.
(3) Choose a branch B that is not closed. If there are no sentences
on B to which a rule can be applied then the construction is
finished. Otherwise choose a sentence S on B that is not a literal
and apply an α- or β-rule to it.

case 1) When applying an α-rule add two vertices in se-


quence to the bottom vertex of B and label them with sentences
α1 and α2 . Repeat this for every open branch that contains S.

case 2) When applying a β-rule add two vertices that are both directly
connected to, and bifurcate from, the bottom vertex of B, and label them
with sentences β 1 and β 2 . Repeat this for every open branch that contains S.

(4) Go back to step 2.
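A recursive Python sketch of this construction is given below. Sentences are written as nested tuples such as ('imp', 'p', 'q') or ('not', 'p'); only the connectives ¬, ∧, ∨ and → are handled, and the function reports whether every branch closes. The representation and the name closed_tableau are illustrative assumptions of the sketch, not prescribed by the method.

def closed_tableau(branch):
    # Step 2: close the branch if it contains a sentence and its negation.
    for s in branch:
        if ('not', s) in branch:
            return True
    # Step 3: pick a sentence that is not a literal and apply an alpha- or beta-rule.
    for s in branch:
        if not isinstance(s, tuple):
            continue                                   # an atom: no rule applies
        rest = [t for t in branch if t is not s]
        op = s[0]
        if op == 'not' and isinstance(s[1], tuple):
            inner = s[1]
            if inner[0] == 'not':                      # alpha rule for ¬¬P
                return closed_tableau(rest + [inner[1]])
            if inner[0] == 'and':                      # beta rule for ¬(P ∧ Q)
                return (closed_tableau(rest + [('not', inner[1])]) and
                        closed_tableau(rest + [('not', inner[2])]))
            if inner[0] == 'or':                       # alpha rule for ¬(P ∨ Q)
                return closed_tableau(rest + [('not', inner[1]), ('not', inner[2])])
            if inner[0] == 'imp':                      # alpha rule for ¬(P → Q)
                return closed_tableau(rest + [inner[1], ('not', inner[2])])
        if op == 'and':                                # alpha rule for P ∧ Q
            return closed_tableau(rest + [s[1], s[2]])
        if op == 'or':                                 # beta rule for P ∨ Q
            return (closed_tableau(rest + [s[1]]) and
                    closed_tableau(rest + [s[2]]))
        if op == 'imp':                                # beta rule for P → Q
            return (closed_tableau(rest + [('not', s[1])]) and
                    closed_tableau(rest + [s[2]]))
    return False   # no rule applies and the branch is still open

# Example 5.8: the premiss p → q with the negated conclusion ¬(¬q → ¬p) yields a closed tableau.
print(closed_tableau([('imp', 'p', 'q'),
                      ('not', ('imp', ('not', 'q'), ('not', 'p')))]))   # True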

Definition 5.10
A branch in a tableau is closed if it contains both a sentence and its nega-
tion. A branch is open if it is not closed. A tableau is closed if every branch
in the tableau is closed. A tableau is open if it is not closed.

The tableau in the previous example is thus closed. Below, an open tableau
shows an unsuccessful attempt to deduce p → ¬q from p → q.

1. p → q
2. ¬(p → ¬q)
3. p
4. ¬¬q
5. ¬p        q
6. ×         (open)

What might this indicate, if the tableau method does not succeed in deducing a
desired result? Probably that the conclusion is not a logical consequence of the
premisses at all. In that case there must be a counter-model, and the tableau
can be used to generate this.
No rule application can close the remaining open branch of the tableau above,
so not all branches are closed. This means (which we state here
without proof) that p → ¬q does not follow from {p → q}. Instead a counter-
model has been constructed. This becomes clear from the informal discussion
about the tableau method in the beginning of this section. A necessary and
sufficient condition for the set {p → q, ¬(p → ¬q)} to be satisfiable is (accord-
ing to the open branch of the tableau above) that both p and q are true. But
this is in fact achievable, since it is possible to satisfy these sentences with an
evaluation V of an interpretation I such that V I (p) = V I (q) = 1.
Similarly, counter-models can be found for the sentence p ∧ q.

¬(p ∧ q)

¬p ¬q

In the left hand branch ¬p remains without any constraints on q. This means
that it is enough that ¬p is true in order for ¬(p ∧ q) to be true. Similarly,
in the right hand side ¬q remains without any constraints on p. The tableau
shows therefore that there is more than one counter-model to the sentence,
namely all interpretations where ¬p maps to 1 and all interpretation where ¬q
maps to 1. The whole set of counter-models is thus {V I (p) = 0, V I (q) = 1},
{V I (p) = 0, V I (q) = 0}, {V I (p) = 1, V I (q) = 0}, although one would be
enough to invalidate the proposed logical consequence.

Definition 5.11
Given a set M = {P1 , ..., Pn ,¬Q} of sentences. If there is a closed semantic
tableau for M then Q is deducible from the premisses P1 , ..., Pn through the
tableau method. This is denoted {P1 , ..., Pn } `t Q.

This helps with the definition of a proof in the tableau method.

Definition 5.12
If ∅ `t Q, where Q is a sentence and ∅ the empty set, then Q is provable
through the tableau method. This is denoted `t Q.

The following theorems express that the tableau method is both sound and
complete. These theorems mean that deductions using semantic tableaux to
show logical consequence are just as acceptable as deductions using axiomatic
systems. Soundness means that if a sentence is deducible from a set of pre-
misses using a deduction with semantic tableaux, then the sentence is a logical
consequence of those premisses. Completeness means that if a sentence is a
logical consequence of a set of premisses, then this can be shown through de-
duction with the tableau method.6

Theorem 5.13
(Soundness of the tableau method) If {P1 , ..., Pn } `t Q then {P1 , ..., Pn } ² Q.

Theorem 5.14
(Completeness of the tableau method) If {P1 , ..., Pn } ² Q then {P1 , ..., Pn } `t
Q.

These theorems establish that the power of the tableau method to prove sen-
tences is no less than that of the truth tables (theorem 5.14), and that the
tableau method does not result in false conclusions (theorem 5.13).

Exercises
Show the following.
5.1 ¬¬p `t p
5.2 p `t ¬¬p

6
Later in this book, there are proofs of the corresponding theorems for the soundness and
completeness of natural deduction. The corresponding proofs for the tableau method is very
similar to these.

5.3 p → q, ¬q `t ¬p
5.4 `t (¬p → p) → p
5.5 p ∨ q, ¬q `t p
5.6 ¬(p ∨ q) `t ¬p ∧ ¬q
5.7 ¬p ∧ ¬q `t ¬(p ∨ q)
5.8 p ∧ q `t q ∧ p
5.9 p ∧ (q ∨ r) `t (p ∧ q) ∨ (p ∧ r)
5.10 p ∨ (q ∧ r) `t (p ∨ q) ∧ (p ∨ r)
5.11 p → q, ¬q `t ¬p
5.12 p → q `t ¬p ∨ q
5.13 ¬p ∨ q `t p → q
5.14 `t p ∨ ¬p
5.15 `t ¬(p ∧ ¬p)
5.16 p → q `t ¬q → ¬p
5.17 ¬q → ¬p `t p → q
5.18 `t ¬(p ↔ ¬p)
5.19 q `t p → q
5.20 q ∧ ¬q `t p
5.21 p ∨ q, q → r, p ∨ r → t `t t
5.22 p → q, q → r, ¬r `t ¬p
5.23 `t p → (p → p)
5.24 p → (q → r), p ∨ r, ¬q → ¬p `t r
5.25 (p → q) → (p → r) `t p → (q → r)
5.26 p → [(q ∧ r) ∨ t] , (q ∧ r) → ¬p, s → ¬t `t p → ¬s

Using the tableau method show that the statements below are correct, and
construct a complete set of counter-examples.
5.27 p → q 6`t q → p
5.28 6`t p ↔ (p ∧ q)
5.29 p → q 6`t ¬p → ¬q
5.30 p → q 6`t p ↔ q
5.31 6`t p ∨ q
5.32 6`t p ∧ q ∧ r

3. The Resolution Method


With the advent of computers, a representation of SL with a minimal number
of rules and symbols became desirable. The resolution method, introduced in 1965
by J. A. Robinson, was such a representation. It cleverly uses the placement of
sets of literals on either side of a generalised disjunction symbol to indicate the
presence or absence of negation, thereby dispensing with connective symbols
and the need to interpret them. Resolution has shown itself to be the deductive method
most suited to automatic theorem proving. There are a number of variations
on the resolution method,7 one of which underpins the programming language
Prolog.
The rule of resolution in sentence logic is based on the simple rule8 :
¬P        P ∨ Q
Q
The rule appears intuitively reasonable. Suppose that we are told that
Castro is in the sauna or in the freezer.
If there is nobody in the sauna it is reasonable to conclude that
Castro is in the freezer

4. Conjunctive Normal Form


The resolution method is based on sentences written in the so called conjunc-
tive normal form (CNF), which is a conjunction of disjunctions of atoms
or negated atoms. This does not entail any limitations on the power of the
method to prove sentences, because all sentences in SL can be rewritten in
logically equivalent form in CNF. Before explaining how this is done a few
definitions are needed.

Definition 5.15
A literal is an atom or a negated atom.

Definition 5.16
A disjunction of literals is a clause.

7
See, e.g., L. Wos, R. Overbeek, E. Lusk and J. Boyle, Automated Reasoning: Introduction
and Applications, Prentice-Hall, 1984.
8
This rule is sometimes called disjunctive syllogism.

In other words a clause is a number of atoms and negated atoms joined by


∨–symbols. For example the sentence p ∨ q ∨ ¬r is a clause in SL.

Definition 5.17
A sentence that has the form (P11 ∨ P12 ∨ ... ∨ P1m1 ) ∧ (P21 ∨ P22 ∨ ... ∨ P2m2 )
∧ ... ∧ (Pn1 ∨ Pn2 ∨ ... ∨ Pnmn ), where Pij are literals, is in conjunctive
normal form.

A sentence in CNF is therefore a conjunction of clauses. For example the


sentence (p ∨ q ∨ ¬r) ∧ (¬p ∨ ¬q) is in CNF.
The following theorem with its constructive proof shows that any sentence
in SL can be rewritten as a logically equivalent sentence in CNF.

Theorem 5.18
Given an arbitrary sentence S in the language SL, it holds that there is a
sentence SCN F in conjunctive normal form such that S is true exactly when
SCN F is true.

Proof: By stepwise transformation of an arbitrary sentence in SL using the


rules below, a logically equivalent sentence can be obtained in conjunctive
normal form. The correctness of these rules can easily be checked using truth
tables or the tableau method.9
(i) ¬¬P ⇔ P
(ii) ¬(P ∧ Q) ⇔ (¬P ∨ ¬Q)
(iii) ¬(P ∨ Q) ⇔ (¬P ∧ ¬Q)
(iv) (P → Q) ⇔ (¬P ∨ Q)
(v) ¬(P → Q) ⇔ (P ∧ ¬Q)
(vi) (P ↔ Q) ⇔ (¬P ∨ Q) ∧ (¬Q ∨ P )
(vii) ¬(P ↔ Q) ⇔ (P ∨ Q) ∧ (¬P ∨ ¬Q)
(viii) P ∨ (R ∧ S) ⇔ (P ∨ R) ∧ (P ∨ S)
(ix) P ∨P ⇔ P
(x) P ∧P ⇔ P
(xi) (P ∧ Q) ∧ R ⇔ P ∧ (Q ∧ R)
(xii) (P ∨ Q) ∨ R ⇔ P ∨ (Q ∨ R)
Any of the rules above can be applied to any sentence that is not in CNF.
Rules (i)–(vii) remove a least one negation, the scope of which is the whole
sentence, or remove a connective that is not ∧ or ∨. Since there are a finite
number of connectives in a sentence, after a finite number of applications
9
Note that the appearance of the rules is strongly reminiscent of the rule system of the
semantic tableau method.

of rules (i)–(vii), a sentence will be obtained that only contains literals held to-
gether with the connectives ∧ and ∨. This sentence can then be mechanically
transformed to a sentence in CNF through a finite number of applications
of the rules (viii)–(xii). º

Example 5.19
The sentence ¬((p → q) → (¬p → ¬q)) can be rewritten as (¬p ∨ q) ∧ ¬p ∧ q
by following the application of the rules above.

¬((p → q) → (¬p → ¬q))
(p → q) ∧ ¬(¬p → ¬q)          Rule (v)
(¬p ∨ q) ∧ ¬(¬p → ¬q)         Rule (iv) applied to the left side of the sentence
(¬p ∨ q) ∧ (¬p ∧ ¬¬q)         Rule (v) applied to the right side of the sentence
(¬p ∨ q) ∧ (¬p ∧ q)           Rule (i) applied to the double negation above
(¬p ∨ q) ∧ ¬p ∧ q             Rule (xi) applied to the conjunction above
º
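For experimentation, the rewriting can also be delegated to software. The sketch below assumes that the third-party Python library sympy is available; its to_cnf function applies rewrite rules of the same kind as (i)–(xii) above.

from sympy import symbols, Not, Implies
from sympy.logic.boolalg import to_cnf

p, q = symbols('p q')
# The sentence of Example 5.19: not((p -> q) -> (not p -> not q))
s = Not(Implies(Implies(p, q), Implies(Not(p), Not(q))))
print(to_cnf(s))   # a conjunction of clauses equivalent to (¬p ∨ q) ∧ ¬p ∧ q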
The transformation in Example 5.19 replaces sub-sentences with other sub-sentences ac-
cording to the equivalences in the theorem above. However, the theorem pertains
to whole sentences and strictly speaking, it does not thereby automatically
follow that sub-sentences can be transformed into logically equivalent sub-
sentences without changing the truth value of the whole sentence. Fortunately,
the following theorem establishes that substitutions of sub-sentences do not
affect the truth value of the whole sentence. The proof is omitted, but it should
not be hard to see that the theorem is valid.

Theorem 5.20
Let P be a sentence and let Q be a sub-sentence of P . Assume that Q ⇔ R
and let P 0 denote the result of replacing one or more occurrences of Q in P
with R. Then P ⇔ P 0 .

Example 5.21
The equivalence ¬((p ∧ q) ∨ (¬p ∧ ¬q)) ⇔ (¬p ∨ ¬q) ∧ (p ∨ q) can now be
shown to hold by the two theorems above.

¬((p ∧ q) ∨ (¬p ∧ ¬q))
¬(p ∧ q) ∧ ¬(¬p ∧ ¬q)          Rule (iii)
(¬p ∨ ¬q) ∧ (¬¬p ∨ ¬¬q)        Rule (ii) and theorem 5.20
(¬p ∨ ¬q) ∧ (p ∨ q)            Rule (i) and theorem 5.20
º

Here is a little new notation, not to confuse you, but because this or very
similar notations are well established in the context of the resolution method.10

Definition 5.22
A clause P1 ∨ P2 ∨ ... ∨ Pn ∨ ¬Q1 ∨ ¬Q2 ∨ ... ∨ ¬Qm in many contexts is
written P1 , P2 , ..., Pn ← Q1 , Q2 , ..., Qm .

The table shows some examples of how the two notations correspond with
each other.
P                                    P ←
¬P                                   ← P
P ∨ ¬Q                               P ← Q
P1 ∨ P2 ∨ ... ∨ Pn                   P1 , P2 , ..., Pn ←
¬Q1 ∨ ¬Q2 ∨ ... ∨ ¬Qm                ← Q1 , Q2 , ..., Qm
⊥ (Falsum)11                         ←

The resolution method operates on clauses that are components in sentences


in CNF exploiting the following theorem.

Theorem 5.23
The interpretation m is a model for (P11 ∨ P12 ∨ ... ∨ P1m1 ) ∧ (P21 ∨ P22 ∨
... ∨ P2m2 ) ∧ ... ∧ (Pn1 ∨ Pn2 ∨ ... ∨ Pnmn ), where Pij are literals, exactly when
m is a model for {(P11 ∨ P12 ∨ ... ∨ P1m1 ), (P21 ∨ P22 ∨ ... ∨ P2m2 ), ..., (Pn1 ∨
Pn2 ∨ ... ∨ Pnmn )}.

Exercises
Write the following sentences in the equivalent CNF form.
5.33 ¬(p ∨ q)
5.34 p → (q ∨ r)
5.35 (p ∧ q) ∨ (p ∧ r)
5.36 p→q
5.37 ¬(p ∧ ¬p)
5.38 (¬p → p) → p
5.39 ¬q → ¬p
5.40 ¬(p ↔ ¬p)
10
It could be said that in general, logicians use the notation ∧, ∨ and ¬ in resolution, while
computer scientists prefer the arrow notation.
11
Falsum is sometimes called the empty clause.

5.41 p → (p → p)
5.42 p → (q → r)
5.43 (p → q) → (p → r)
5.44 p → [(q ∧ r) ∨ t]

5. Deductions with the Resolution Method


Basically the resolution method aims at deducing the empty clause from a set
of unsatisfiable clauses. Suppose we want to prove that Q follows from the
premisses P1 , P2 , ..., Pn , all of which are clauses. The steps in the method are
the following.
(1) Negate Q. This yields the set of clauses {P1 , P2 , ..., Pn ,¬Q}.
(2) Use the resolution method on these clauses until the empty clause
has been deduced.
Deduction of the empty clause shows that {P1 , P2 , ..., Pn ,¬Q} is unsatisfiable.
From lemma 5.7 on page 80 it now follows that Q is a logical consequence of
the premisses P1 , P2 , ..., Pn .
What the resolution rule means is defined next.

Definition 5.24
The resolution rule has the following form:
P, H ← Q R ← H, S

P, R ← Q, S
Where P, Q, R, S and H are literals.

Example 5.25
p← ← p, q

←q
º
Naturally, the resolution rule can also be used in the most general case for
arbitrary clauses.

Definition 5.26
The general form of the resolution rule has the following form:
P1 , ..., Pk , H ← Q1 , ..., Qm R1 , ..., Rn ← H, S1 , ..., Sp

P1 , ..., Pk , R1 , ..., Rn ← Q1 , ..., Qm , S1 , ..., Sp


Where all Pi , Qi , Ri , Si and H are literals.

Example 5.27
In order to show that (p ∧ q) ∨ (p ∧ r) follows logically from p ∧ (q ∨ r)
using the resolution method, the conclusion must first be negated. By de
Morgan’s laws this yields (¬p ∨ ¬q) ∧ (¬p ∨ ¬r). Then the premiss and
the negated conclusion are written as a set of disjunctions. This yields the
set {p, (q ∨ r), (¬p ∨ ¬q), (¬p ∨ ¬r)}. In the arrow notation this is written
{p ← , q, r ← , ← p, q, ← p, r}. Finally the deduction of the empty clause:

p ←     ← p, q
    ← q       q, r ←
        r ←       ← p, r
            ← p       p ←     (note here how the clause p ← is repeated)
                ←

º
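The whole procedure is easy to sketch in Python. Below, clauses are frozensets of literals written as strings, with '~' marking negation (so ← p, q becomes {'~p', '~q'}); the names negate, resolvents and derives_empty_clause are illustrative, and the saturation loop is a brute-force rendering of the method rather than an efficient implementation.

from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    # All clauses obtained by resolving c1 and c2 on one literal at a time.
    return [(c1 - {lit}) | (c2 - {negate(lit)}) for lit in c1 if negate(lit) in c2]

def derives_empty_clause(clauses):
    # Saturate the clause set; True exactly when the empty clause can be deduced.
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:
                    return True        # the empty clause: the set is unsatisfiable
                new.add(r)
        if new <= clauses:
            return False               # nothing new can be derived
        clauses |= new

# Example 5.27: premiss p ∧ (q ∨ r) and negated conclusion (¬p ∨ ¬q) ∧ (¬p ∨ ¬r).
print(derives_empty_clause([{'p'}, {'q', 'r'}, {'~p', '~q'}, {'~p', '~r'}]))   # True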

Deduction and proof with the resolution method will now be defined more
precisely.

Definition 5.28
Assume that P1 , ..., Pn , Q are sentences in SL. A deduction with the
resolution method of the conclusion Q from the premisses P1 , ..., Pn , is a
sequence of sentences S1 , ..., Sm where one of the following holds:
(i) Si is in CNF and Si ⇔ Pj for some premiss Pj .
(ii) Si is in CNF and Si ⇔ ¬Q.
(iii) Si follows from an application of the resolution rule to Sj and Sk , j, k < i.
(iv) Sm is ←.

The existence of a deduction with the resolution method of the conclusion


Q from the premisses P1 , ..., Pn is denoted {P1 , ..., Pn } `res Q.

Example 5.29
Show that {¬(p ↔ q)} `res (¬p ∧ q) ∨ (p ∧ ¬q). Start by writing the premiss
and the negated conclusion in clausal form.

Premiss:                           Negated conclusion:
¬(p ↔ q)                           ¬((¬p ∧ q) ∨ (p ∧ ¬q))
(p ∨ q) ∧ (¬q ∨ ¬p)                ¬(¬p ∧ q) ∧ ¬(p ∧ ¬q)
                                   (¬¬p ∨ ¬q) ∧ (¬p ∨ ¬¬q)
                                   p ∨ ¬q, ¬p ∨ q
p, q ← , ← p, q                    p ← q , q ← p

Then carry out the deduction, for example in the following way.12

q ← p     p, q ←
    q ←       p ← q
        p ←       ← p, q
            ← q       q ←
                ←


º
Proofs with the resolution method are similar, except there is no set of pre-
misses, only the negated conclusion.

12
We are here using that q, q ← is equivalent to q ←, since q ∨ q is equivalent to q.

Definition 5.30
If ∅ `res Q, where Q is a sentence and ∅ is the empty set, then Q is
provable by the resolution method. This is denoted `res Q.

Example 5.31
Show that `res p → (p → p). Begin by writing the negated conclusion in
clausal form.
¬(p → (p → p))
p ∧ ¬(p → p)
p ∧ (p ∧ ¬p)
(p ∧ p) ∧ ¬p
p ∧ ¬p
p, ¬p
p← ←p
Then carry out the (in this case very short) deduction:
p ←     ← p
    ←                 º

A common mistake when carrying out deductions with the resolution method
is trying to resolve several literals at once. This is not permitted and yields
incorrect results.

Example 5.32
{¬p → q} `res p ∧ q does not hold. But still the empty clause is obtained
below. This is because two literals have been resolved simultaneously.

¬p → q ¬(p ∧ q)
¬¬p ∨ q ¬p ∨ ¬q
p∨q
p, q ← ← p, q
p, q ← ← p, q

← Both p and q have been resolved simultaneously, which is not permitted.

The resolution method can therefore not be applied to two literals simulta-
neously. For example in the case above there is a V I that makes both p ∨ q

and ¬p ∨ ¬q true simultaneously, namely V I (p) = 1 and V I (q) = 0. The


clauses are therefore not contradictory. º
In a similar way to the tableau method, the resolution method is both sound
and complete. The first theorem below establishes that if a sentence is de-
ducible from a set of premisses by the resolution method, then the sentence is
a logical consequence of those premisses. The second theorem establishes the
converse of the first, namely that if a sentence is a logical consequence of a
set of premisses, then a deduction of that sentence exists using the resolution
method. Later in the book, soundness and completeness are proved for natural
deduction.

Theorem 5.33
(Soundness of the resolution method) If {P1 , ..., Pn } `res Q then
{P1 , ..., Pn } ² Q.

Theorem 5.34
(Completeness of the resolution method) If {P1 , ..., Pn } ² Q then
{P1 , ..., Pn } `res Q.

So the resolution method is also sufficiently powerful to show all logical con-
sequences and tautologies in SL (theorem 5.34). It is also sound in the sense
of not being able to deduce erroneous conclusions from a set of premisses
(theorem 5.33).

Exercises
Show the following.
5.45 ¬¬p `res p
5.46 p `res ¬¬p
5.47 p → q, ¬q `res ¬p
5.48 `res (¬p → p) → p
5.49 p ∨ q, ¬q `res p
5.50 ¬(p ∨ q) `res ¬p ∧ ¬q
5.51 ¬p ∧ ¬q `res ¬(p ∨ q)
5.52 p ∧ q `res q ∧ p
5.53 p ∧ (q ∨ r) `res (p ∧ q) ∨ (p ∧ r)
5.54 p → (r ∨ s), s → r, p `res r
5.55 p → q, ¬q `res ¬p
5.56 p → q `res ¬p ∨ q

5.57 ¬p ∨ q `res p → q
5.58 `res p ∨ ¬p
5.59 `res ¬(p ∧ ¬p)
5.60 p → q `res ¬q → ¬p
5.61 ¬q → ¬p `res p → q
5.62 `res ¬(p ↔ ¬p)
5.63 q `res p → q
5.64 q ∧ ¬q `res p
5.65 p ∨ q, q → r, p ∨ r → t `res t
5.66 p → q, q → r, ¬r `res ¬p
5.67 `res p → (p → p)
5.68 p → (q → r), p ∨ r, ¬q → ¬p `res r
5.69 (p → q) → (p → r) `res p → (q → r)
5.70 p → [(q ∧ r) ∨ t] , q ∧ r → ¬p, s → ¬t `res p → ¬s

6. Natural Deduction
This deduction method was constructed as a reaction against the counter-intu-
itive and unnatural experience of using the axiomatic systems of the formalists.
Natural deduction was propounded first by Gerhard Gentzen and developed
further by Dag Prawitz13 . Natural deduction is considered by many to be the
formal system that best corresponds to our intuitive way of logical reasoning.
This should become very apparent from the structure and techniques used
in many of the proofs of lemmas and theorems in this book. Indeed it will
help you considerably to understand the proofs in this book if you look for
the resemblance between their reasoning and the rule schemata in the natu-
ral deduction system. The proof of the completeness of natural deduction for
sentence logic on page 118 shows this resemblance.
Natural deduction uses a number of introduction and elimination rules in
order to derive sentences from premisses. These rules operate on connectives.
For example if we know that
Castro is a cat ∧ Bill is a baboon
then we obviously know that
Castro is a cat
13
D. Prawitz, Natural Deduction: A Proof Theoretical Study, Almqvist & Wiksell, 1961.

just as well as we know that

Bill is a baboon

In some sense then the ∧ – symbol has been eliminated leaving two independent
statements. It seems equally reasonable to expect that if we know that

Mr. Archer needs target practice

is true, and that

Charley let the cat out of the bag

is true, then we know that

Mr. Archer needs target practice ∧ Charley let the cat out of the bag

is also true. In a similar sense the ∧ – symbol has been introduced.


The following schema lists the elimination and introduction rules of the
natural deduction system:14

(∧ – I): from P and Q, conclude P ∧ Q.
(∧ – E): from P ∧ Q, conclude P; likewise, from P ∧ Q, conclude Q.
(∨ – I): from P, conclude P ∨ Q; likewise, from Q, conclude P ∨ Q.
(r): from P, conclude P (repetition).
(∨ – E): from P ∨ Q, a derivation of R from the assumption P (a), and a derivation of R from the assumption Q (a), conclude R.
(⊥ – I): from P and ¬P, conclude ⊥.
(→ – I): from a derivation of Q from the assumption P (a), conclude P → Q.
(→ – E): from P and P → Q, conclude Q.
(¬ – E): from a derivation of ⊥ from the assumption ¬P (a), conclude P.
(¬ – I): from a derivation of ⊥ from the assumption P (a), conclude ¬P.
(↔ – E): from P ↔ Q, conclude P → Q; likewise, from P ↔ Q, conclude Q → P.
(↔ – I): from P → Q and Q → P, conclude Q ↔ P.
14
The abbreviations I and E stand for introduction and elimination respectively. The ab-
breviation a stands for assumption and r stands for repetition.

The principle behind the rules ¬ – E and ¬ – I is sometimes referred to as
indirect argument. Argumentation analogous to the rule → – I is sometimes
known as hypothetical argument, and the structure of the rule → – E is well
known as modus ponens. Note that the rules above are written in a fairly
compact form. In the deductions below, the rules will be applied in a more
linear way. For example, the components in the deduction schema for ∨ – E
(the sentence P ∨ Q, a derivation of R from the assumption P, and a derivation
of R from the assumption Q) can be written in several orders, such as
1. P ∨ Q        1. P ∨ Q        1. P (a)
2. P (a)        2. Q (a)        2. R
3. R            3. R            3. Q (a)
4. Q (a)        4. P (a)        4. R
5. R            5. R            5. P ∨ Q
6. R            6. R            6. R
The schema of rules is written in its compact form in order to emphasize that
any of these uses is correct. Note also that in some rules assumptions occur
(a). These are used in a deduction when explicit assumptions are made from
which conclusions are drawn. These assumptions are said to be discharged
just before the conclusion of the rule is asserted. Discharging assumptions
can also be seen as relinquishing the deduction’s dependency on assumptions.
The lines that follow the assumptions may be completely dependent on the
assumptions in order to hold, but once a pattern of statements matches a rule,
then that rule’s conclusion can be asserted without this dependency on the

assumptions. In the sequences above this happens as the deduction reaches


line 6. In the next example, the deduction of ¬¬p from p, this happens just
before line 4.

Example 5.35
1. p (premiss)
2. ¬p (assumption)
3. ⊥ (1,2, ⊥ – I)
4. ¬¬p (2–3, ¬ – I)
Line 1 is the premiss p. Line 2 assumes ¬p. Line 3 applies the rule of ⊥ –
I. This can be done because both p and ¬p occur earlier in the deduction.
In other words the assumption ¬p leads to ⊥ (falsum), which once it has
been obtained, allows an application of the rule of ¬ – I in line 4. There
the assumption ¬p, on which ⊥ on line 3 is dependent, can be discharged
according to the rule of ¬ – I, and the conclusion ¬¬p can be drawn, which is no
longer dependent on the assumption of ¬p. The right column in the example
contains an abbreviated commentary to explain which rules are being used
and to which line the rule is being applied. º
Natural deduction operates on all connectives. It is therefore not necessary to
rewrite sentences in clausal form before starting a deduction. The following
example shows how p → r can be concluded from the set of premisses {p →
q, q → r} using natural deduction. 15

Example 5.36
1. p → q (premiss)
2. q → r (premiss)
3. p (assumption)
4. q (1, 3,→ – E)
5. r (2, 4, → – E)
6. p → r (3–5, → – I)

Lines 1 and 2 specify the premisses. Line 3 makes the assumption p.
The purpose of this assumption is to allow a later application of the rule of
→ – I in order to obtain the desired conclusion. Line 4 uses the rule of → – E
in order to arrive at q from the premiss p → q and the assumption p. Recall
that this rule is also called modus ponens. Line 5 uses the conclusion from
line 4 together with a premiss in order to deduce r using a new application
of the rule → – E. Finally the rule of → – I allows the assumption of p to
be discharged leading to the conclusion p → r on line 6. Note the hyphen
between the digits 3–5 in the commentary on line 6. This emphasizes the
15
This rule is usually called the syllogism principle.

fact that the conclusion of line 6 is dependent on the reasoning that starts
with the assumption on line 3 up to and including line 5. º
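Proof assistants are built around exactly this correspondence between rules and reasoning steps. As a hedged illustration, the deduction in Example 5.36 can be written as the following term in the Lean language (Lean 4 syntax is assumed here); introducing an assumption corresponds to → – I and applying a function to an argument corresponds to → – E.

-- Example 5.36 as a Lean proof term: assume p, then chain the two implications.
example (p q r : Prop) (h1 : p → q) (h2 : q → r) : p → r :=
  fun hp => h2 (h1 hp)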
The examples below show how ¬p → r can be deduced from {¬p → q, (p →
q) → r} and how ¬p ∨ ¬q can be deduced from ¬(p ∧ q).

Example 5.37
1. ¬p → q (premiss) º
2. (p → q) → r (premiss)
3. ¬p (assumption)
4. p (assumption)
5. q (1, 3, → – E)
6. p→q (4–5, → – I)
7. r (2, 6, → – E)
8. ¬p → r (3–7, → – I)

Example 5.38
1. ¬(p ∧ q) (premiss) º
2. ¬(¬p ∨ ¬q) (assumption)
3. ¬p (assumption)
4. ¬p ∨ ¬q (3, ∨ – I)
5. ⊥ (2, 4, ⊥ – I)
6. p (3–5, ¬ – E)
7. ¬q (assumption)
8. ¬p ∨ ¬q (7, ∨ – I)
9. ⊥ (2, 8, ⊥ – I)
10. q (7–9, ¬ – E)
11. p∧q (6, 10, ∧ – I)
12. ⊥ (1, 11, ⊥ – I)
13. ¬p ∨ ¬q (2–12, ¬ – E)
An effective technique in natural deduction is often to begin by assuming
the negation of the conclusion and then deduce a contradiction, ⊥ from the
premisses (echoes of resolution). Using the rule for ¬ – E then allows us to
conclude the desired result from the set of premisses. This technique is clearly
similar to the tableau and resolution methods, and is usually called indirect
deduction. Note however that whilst this technique was necessary for the
tableau and resolution methods, it is not necessary for natural deduction. The
technique is illustrated again in the following example.

Example 5.39
This deduction contains no premisses, and is therefore a proof of (¬p →
p) → p.

1. (¬p → p) (assumption)
2. ¬p (assumption)
3. p (1, 2,→ – E)
4. ⊥ (2, 3, ⊥ – I)
5. p (2–4, ¬ – E)
6. (¬p → p) → p (1–5, → – I)

Line 2 makes the assumption ¬p. This leads to a contradiction, allowing the
rule of ¬ – E to be applied, which leads to p. º

As in the previous example, it is very important to keep track of which as-


sumptions have been discharged. If an assumption has not been discharged
according to one of the schema’s rules, then any sentence that is written fur-
ther down may be dependent on the assumption, i.e. only hold true if the
assumption is upheld as true. When the process of discharging assumptions is
not carefully observed, erroneous conclusions can easily be drawn.

Example 5.40
The assumption p is not discharged in the deduction below, and q on line 4 is
still dependent on the assumption that p holds. Obviously q does not follow
logically from p → q ∧ r.

1. p → q ∧ r (premiss) º
2. p (assumption)
3. q∧r (1, 2, → – E)
4. q (3, ∧ – E)

A deduction and a proof in natural deduction are now defined.



Definition 5.41
Assume that P1 , ..., Pn , Q are sentences in SL. A deduction in natural de-
duction of the conclusion Q from the premisses P1 , ..., Pn , is a sequence of
sentences S1 , ..., Sm where one of the following holds:
(i) Si is included in the set {P1 , ..., Pn }.
(ii) Si is an assumption that is discharged in some Sj , i < j ≤ m and there
is no assumption Sk , i < k < j that is discharged in any Sl ,
j < l.
(iii) Si follows from the application of an introduction- or elimination rule
to sentences earlier in the sequence. This means that Si is the conclusion
of a rule where it holds for Sj and Sk (and, in the case of ∨ – E, also Sl ),
on which application of the rule is conditional, that j, k, l < i.
(iv) Sm is Q.

The deduction through natural deduction of the conclusion Q from the pre-
misses P1 , ..., Pn is denoted {P1 , ..., Pn } `nd Q.

In condition (ii) above, the part that follows the word ‘and’ means that an
assumption may not be discharged before other assumptions on which it is
dependent have also been discharged.

Definition 5.42
If ∅ `nd Q, where Q is a sentence and ∅ is the empty set, then Q is provable
by natural deduction. This is denoted `nd Q.

Note that it is often easier to write deductions and proofs in a more compact
form by using results that have been proved earlier.

Example 5.43
The proof of p ∨ ¬p can be done in two stages. First ¬p ∧ ¬q can be derived
from ¬(p ∨ q).
1. ¬(p ∨ q) (p)
2. p (a)
3. p ∨ q (2, ∨ – I)
4. ⊥ (1, 3, ⊥ – I)
5. ¬p (2–4, ¬ – I)
6. q (a)
7. p ∨ q (6, ∨ – I)
8. ⊥ (1, 7, ⊥ – I)
9. ¬q (6–8, ¬ – I)
10. ¬p ∧ ¬q (5, 9, ∧ – I)

This deduction can then be referred to and its result used in order to prove
p ∨ ¬p as below.

1. ¬(p ∨ ¬p) (a)


2. ¬p ∧ ¬¬p (1, from the deduction above where ¬p is substituted for q)
3. ¬p (2, ∧ – E)
4. ¬¬p (2, ∧ – E)
5. ⊥ (3, 4, ⊥ – I)
6. p ∨ ¬p (1–5, ¬ – E)

The use of earlier deductions or proofs is intended not to limit their
length but to increase their legibility and the ease with which they can
be constructed. In order to reconstruct a deduction with external references
into one that uses only the introduction- and elimination rules, all that is
required is to mechanically copy the deductions referred to into the main
deduction, making suitable substitutions of variable names. The proof below
could be made shorter, but the essential thing for this example’s sake is that
it is a mechanical reconstruction of the two deductions above.
1. ¬(p ∨ ¬p) (a)
2. p (a)
3. p ∨ ¬p (2, ∨ – I)
4. ⊥ (1, 3, ⊥ – I)
5. ¬p (2–4, ¬ – I)
6. ¬p (a)
7. p ∨ ¬p (6, ∨ – I)
8. ⊥ (1, 7, ⊥ – I)
9. ¬¬p (6–8, ¬ – I)
10. ¬p ∧ ¬¬p (5, 9, ∧ – I)
11. ¬p (10, ∧ – E)
12. ¬¬p (10, ∧ – E)
13. ⊥ (11, 12, ⊥ – I)
14. p ∨ ¬p (1–13, ¬ – E)
º
As with the previous systems, natural deduction is both sound and complete.
This is established by the following two theorems, which are proved later in
the book.

Theorem 5.44
(Soundness of natural deduction) If {P1 , ..., Pn } `nd Q
then {P1 , ..., Pn } ² Q.

Theorem 5.45
(Completeness of natural deduction) If {P1 , ..., Pn } ² Q
then {P1 , ..., Pn } `nd Q.

The two theorems above mean that natural deduction has sufficient power to
deduce all logical consequences and tautologies in SL (completeness), and that
in the same sense as the tableau- and resolution methods no deduction can
lead to a conclusion that does not logically follow from the given premisses.

Exercises
Show the following:
5.71 ¬¬p `nd p
5.72 p `nd ¬¬p
5.73 p → q, ¬q `nd ¬p
5.74 `nd (¬p → p) → p
5.75 p ∨ q, ¬q `nd p
5.76 ¬(p ∨ q) `nd ¬p ∧ ¬q
5.77 ¬p ∧ ¬q `nd ¬(p ∨ q)
5.78 p ∧ q `nd q ∧ p
5.79 p ∧ (q ∨ r) `nd (p ∧ q) ∨ (p ∧ r)
5.80 p ∨ p `nd p
5.81 p → q, ¬q `nd ¬p
5.82 p → q `nd ¬p ∨ q
5.83 ¬p ∨ q `nd p → q
5.84 `nd p ∨ ¬p
5.85 `nd ¬(p ∧ ¬p)
5.86 p → q `nd ¬q → ¬p
5.87 ¬q → ¬p `nd p → q
5.88 `nd ¬(p ↔ ¬p)
5.89 q `nd p → q
5.90 q ∧ ¬q `nd p
5.91 p ∨ q, q → r, p ∨ r → t `nd t
5.92 p → q, q → r, ¬r `nd ¬p
5.93 `nd p → (p → p)
5.94 p → (q → r), p ∨ r, ¬q → ¬p `nd r
5.95 (p → q) → (p → r) `nd p → (q → r)

5.96 p → [(q ∧ r) ∨ t] , q ∧ r → ¬p, s → ¬t `nd p → ¬s

7. A note on sequent calculus *


The sequent calculus was introduced by Gerhard Gentzen as a tool for study-
ing natural deduction in 1934.
Sequent calculus is very similar to natural deduction, but it makes the set
of assumptions explicit.

Definition 5.46

A sequent has the form Γ ⇒ ∆, where Γ and ∆ are finite (and possibly
empty) sets of formulæ.
The sequent

A1 , . . . , Am ⇒ B1 , . . . , Bn
is true if A1 ∧ . . . ∧ Am → B1 ∨ . . . ∨ Bn is true. Thus, given that all Ai are
true, at least one of the Bi must be true for the expression to hold.

Sequent calculus is built up from deduction rules analogous to those in


natural deduction. Each rule is classified as a right or left rule, depending
on whether it operates on the expression to the right or left of the arrow.
Rules that operate on the right-hand side are analogous to introduction rules
in natural deduction. Rules that operate on the left-hand side are analogous
to elimination rules.
A basic sequent is one in which the same formula appears on both sides,
e.g. A, Γ ⇒ A, ∆. This is trivially true because, if all the formulæ on the left-
hand side are true, then A is too. Then at least one formula (namely A) on
the right-hand side is true.
The calculus therefore regards all basic sequents as proved.
For instance,
A, Γ ⇒ ∆, B
Γ ⇒ ∆, A → B
directly corresponds to the rule (→ – I) in natural deduction.
In practice, sequent calculus is usually conducted “backwards”, that is,
starting with the sequent to be proved and working backwards, reducing it
to simpler sequents. To prove (A → B) ∨ (B → A) the calculus is applied
like this (working upwards from below, and with notes to the right about the
corresponding deduction rule being applied):

A, B ⇒ B, A
(→ r)
A ⇒ B, B → A
(→ r)
⇒ A → B, B → A
(∨ r)
⇒ (A → B) ∨ (B → A)
Note that the calculation terminates with the basic sequent A, B ⇒ B, A.
A set of deduction rules for the sequent calculus for SL is provided below.
The first set corresponds to the introduction and elimination rules for
connectives.
Basic sequent: A, Γ ⇒ A, ∆

Negation rules:
(¬ l)  from Γ ⇒ ∆, A infer ¬A, Γ ⇒ ∆
(¬ r)  from A, Γ ⇒ ∆ infer Γ ⇒ ∆, ¬A

Conjunction rules:
(∧ l)  from A, B, Γ ⇒ ∆ infer A ∧ B, Γ ⇒ ∆
(∧ r)  from Γ ⇒ ∆, A and Γ ⇒ ∆, B infer Γ ⇒ ∆, A ∧ B

Disjunction rules:
(∨ l)  from A, Γ ⇒ ∆ and B, Γ ⇒ ∆ infer A ∨ B, Γ ⇒ ∆
(∨ r)  from Γ ⇒ ∆, A, B infer Γ ⇒ ∆, A ∨ B

Implication rules:
(→ l)  from Γ ⇒ ∆, A and B, Γ ⇒ ∆ infer A → B, Γ ⇒ ∆
(→ r)  from A, Γ ⇒ ∆, B infer Γ ⇒ ∆, A → B

Next are the structural rules for the calculus. These do not introduce or
eliminate connectives, but are useful since they allow additional formulæ
to be inserted to the left or right of the arrow.
(weaken: l)  from Γ ⇒ ∆ infer A, Γ ⇒ ∆
(weaken: r)  from Γ ⇒ ∆ infer Γ ⇒ ∆, A

In addition, contraction rules allow formulæ to be used more than once.
(contract: l)  from A, A, Γ ⇒ ∆ infer A, Γ ⇒ ∆
(contract: r)  from Γ ⇒ ∆, A, A infer Γ ⇒ ∆, A
Finally, the cut rule allows a formula A that is proved in the first premise to
be assumed in the second premise; it thus allows the use of previously proved
sub-formulas and lemmas. The cut-elimination theorem states that this rule
is not strictly required.
(cut)  from Γ ⇒ ∆, A and A, Γ ⇒ ∆ infer Γ ⇒ ∆
Example 5.47
The proof of the law of the excluded middle using sequent calculus might
look as follows:

A ⇒ A
(¬r)
⇒ ¬A, A
(∨r)
⇒ A ∨ ¬A, A ∨ ¬A
(contract: r)
⇒ A ∨ ¬A
º
Example 5.48
The proof of the distributive law A ∨ (B ∧ C) ⇒ (A ∨ B) ∧ (A ∨ C) using
sequent calculus might look like this:

B, C ⇒ A, B
(∧ l)
A ⇒ A, B        B ∧ C ⇒ A, B
(∨ l)
A ∨ (B ∧ C) ⇒ A, B
(∨ r)
A ∨ (B ∧ C) ⇒ A ∨ B
(∧ r)
A ∨ (B ∧ C) ⇒ (A ∨ B) ∧ (A ∨ C)
(The second premiss of the final (∧ r) step, A ∨ (B ∧ C) ⇒ A ∨ C, is derived in
exactly the same way and has been omitted here.)
º
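Backwards proof search of this kind is also easy to sketch in code. The Python function below implements the propositional rules listed above, with formulas as nested tuples such as ('imp', 'A', 'B'); the name provable and the representation are illustrative assumptions of the sketch.

def provable(left, right):
    # True if the sequent left => right is derivable with the propositional rules.
    if set(left) & set(right):                        # basic sequent
        return True
    for i, f in enumerate(left):                      # left rules
        if isinstance(f, tuple):
            rest = left[:i] + left[i+1:]
            if f[0] == 'not':
                return provable(rest, right + [f[1]])
            if f[0] == 'and':
                return provable(rest + [f[1], f[2]], right)
            if f[0] == 'or':
                return provable(rest + [f[1]], right) and provable(rest + [f[2]], right)
            if f[0] == 'imp':
                return provable(rest, right + [f[1]]) and provable(rest + [f[2]], right)
    for i, f in enumerate(right):                     # right rules
        if isinstance(f, tuple):
            rest = right[:i] + right[i+1:]
            if f[0] == 'not':
                return provable(left + [f[1]], rest)
            if f[0] == 'and':
                return provable(left, rest + [f[1]]) and provable(left, rest + [f[2]])
            if f[0] == 'or':
                return provable(left, rest + [f[1], f[2]])
            if f[0] == 'imp':
                return provable(left + [f[1]], rest + [f[2]])
    return False

# The sequent proved earlier in this section:  => (A → B) ∨ (B → A)
print(provable([], [('or', ('imp', 'A', 'B'), ('imp', 'B', 'A'))]))   # True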

Revise & Reflect

1. What is semantic about the tableau method?


2. Why do semantic tableaux and resolution require the consequent to
be negated?
3. Which of the alternatives are wrong and why? An open path in a
semantic tableau
- means the path is not satisfied
- represents a falsifying value assignment
- represents a model for the premiss set and negated conclusion
- proves the negated conclusion is false.
4. Why must no more than one pair of literals be resolved at a time?

? The tableau method assigns the value true to the initial set of sen-
tences and then decomposes them by their connectives. Which values
are assigned to the decomposed parts of a sentence is determined by
rules that exactly correspond to the definitions of the semantics of
SL.
? Given the task of proving {P1 , ..., Pn } ² Q, the tableau method as-
sumes the value assignments that would make {P1 , ..., Pn , ¬Q} true.
By failing to satisfy {P1 , ..., Pn , ¬Q} the proof is achieved, since {P1 , ..., Pn } ²
Q iff {P1 , ..., Pn , ¬Q} is not satisfiable.
? A path is a conjunction of expressions all of which have been assigned
the value 1. Unless the path contains an expression and its negation,
it is a consistent conjunction of expressions and their constituent
literals, which indicate the truth values required by a counter-model
to the proposed logical consequence being tested.
? The clauses are disjunctions of one or more literals. p, q ← in normal
notation would be p ∨ q, whereas ← p, q would be ¬p ∨ ¬q. Clauses are
in conjunctive normal form, so p, q ← , ← p, q is (p ∨ q) ∧ (¬p ∨ ¬q),
which is clearly satisfiable when p and q have different truth values. But
to derive the empty clause, ←, is to assert that the clauses were not
simultaneously satisfiable.
? Check that you can explain all the ‘Concepts Covered’ listed at the
beginning of the chapter.

CHAPTER 6

Soundness and Completeness


Learning Objectives
After working through this chapter you should be able to:
• explain the principle arguments and techniques used in the soundness
and completeness proofs of predicate logic.
• explain the principal arguments and techniques used in the soundness
and completeness proofs of modal sentence logic.

Concepts covered
Consistency Maximally consistent set Inconsistency
Soundness Beauty Completeness

This section deals with the proofs of soundness and completeness for natural
deduction. In the midst of so much symbolic precision it is easy to lose sight
of the whole purpose of the study of logic. The objective is to gain a greater
understanding and ability to analyse, model, or mechanise aspects of human
rationality. This we want to do because human rationality, though powerful,
is extremely prone to error. However, the fact that we can recognise error is
evidence that there is some underlying rationality to be found; some underly-
ing deductive process by which the prize – true conclusions – can be found.
However before we can evaluate any system of finding those conclusions we
need to be able to test, either directly or in principle, whether or not the sys-
tem’s conclusions really are true, for otherwise the system is useless. Testing
the truth of a derived conclusion is similar to recognising a correct solution
to a puzzle. The process of testing the solution is often quite different from
the method of finding the solution. So too with logic systems. The theorems

produced by a system of logic should stand up to a test that embodies our


intuitive sense of what truth is. This sense has already been expressed in the
semantics of the logic in terms of validity, so what remains is to ensure that the
system is faithful to those semantics as they were intended.
Although it may seem obvious that validity and theoremhood should apply
to the same formulae, the two are quite independent concepts and it is only
in certain formalisations of reasoning that these concepts have successfully
been found to correspond. There are infinitely many systems of logic in which
theoremhood and validity do not fully coincide. Most of these are obviously
deficient, but some cost many years’ work before their shortcomings became
apparent and they were discarded. Even so, many ’incomplete’ systems are and
will continue to be used because they are the best that we can ever hope for.
Assessing a formal system for quality is similar to a performance review.
The performance target for a proposed system of logic is that in its output,
there should be no false conclusions (soundness) and no missing truths (com-
pleteness). The performance review method to test achievement is metalogical
proof of both soundness and completeness, as this chapter will lay out. Proof of
these properties for the other deduction systems builds on a similar approach.
The proofs are first shown in detail for sentence logic and later supplemented
with what is needed to extend them to predicate logic.

1. Soundness and Completeness for sentence logic


Rewriting the deduction rules in another form will facilitate the proof. This
form is equivalent, it just makes the proofs more easily explained. The table
below expresses the deduction rules of natural deduction as sets of premisses
denoted by Γ, together with variables for arbitrary sentences P , Q and R. The label
r (repetition) simply means that a sentence is repeated further down in the
deduction. The label a.p. (added premiss) means that no fewer conclusions can be
drawn from a set of premisses through the addition of an extra premiss. This
section uses some notations from set theory such as ∪ for union and ⊆ for set
inclusion.1

1
In the table curly brackets are omitted for the sake of legibility though strictly Γ, P `nd P
should really be written Γ ∪ {P } `nd P for example.

Γ, P `nd P r
If Γ `nd P , then Γ ∪ Γ′ `nd P a.p.
If Γ, ¬P `nd Q and Γ, ¬P `nd ¬Q, then Γ `nd P ¬–E
If Γ, P `nd Q and Γ, P `nd ¬Q, then Γ `nd ¬P ¬–I
If Γ `nd P → Q and Γ `nd P , then Γ `nd Q →–E
If Γ, P `nd Q, then Γ `nd P → Q →–I
If Γ `nd P ∧ Q, then Γ `nd P ∧–E
If Γ `nd P ∧ Q, then Γ `nd Q ∧–E
If Γ `nd P and Γ `nd Q, then Γ `nd P ∧ Q ∧–I
If Γ, P `nd R and Γ, Q `nd R, then Γ, P ∨ Q `nd R ∨–E
If Γ `nd P , then Γ `nd P ∨ Q ∨–I
If Γ `nd P , then Γ `nd Q ∨ P ∨–I
If Γ `nd P ↔ Q and Γ `nd P , then Γ `nd Q ↔–E
If Γ `nd P ↔ Q and Γ `nd Q, then Γ `nd P ↔–E
If Γ, P `nd Q and Γ, Q `nd P , then Γ `nd P ↔ Q ↔–I
If Γ `nd Q and Γ `nd ¬Q, then Γ `nd ⊥ ⊥–I
This form of notation and other symbols correspond to those used in the
section Natural Deduction. This correspondence can be expressed as in the
following theorem.2

Theorem 6.1
Γ `nd P if there is a sequence Γ1 `nd P1 , ..., Γn `nd Pn of applications of the
rules above and where Γ = Γn and P = Pn . Instances of the if-part of the
rules above must precede their then-part in order for the latter to be included
in the sequence.

The example below demonstrates how to show that {p → q, q → r} `nd p → r


using this new form of notation.

Example 6.2
1. {p → q, q → r, p} `nd p → q (r)
2. {p → q, q → r, p} `nd q → r (r)
3. {p → q, q → r, p} `nd p (r)
4. {p → q, q → r, p} `nd q (1, 3,→ – E)
5. {p → q, q → r, p} `nd r (2, 4, → – E)
6. {p → q, q → r} `nd p → r (5, → – I)
Comparing this deduction with the corresponding deduction in example
5.36 it becomes clear that the notations are alike except that assumptions are
made in different ways. Here all assumptions are held in the set of premisses.
2
Also note the similarities with the sequent calculus.

Clearly, deductions written in one of the notations can be transformed into


the other notation in a mechanical way. º
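Because every rule in the table is stated purely in terms of premiss sets and previously derived lines, a deduction in this notation can also be checked mechanically. The sketch below is our own Python illustration, not software from the book; it implements only the rules r, →–E and →–I, encodes formulas as tuples such as ('imp', 'p', 'q') for p → q, and accepts the deduction of example 6.2.

def check(steps):
    """Each step is (premisses, formula, rule, refs); refs are 1-based step numbers."""
    for n, (gamma, formula, rule, refs) in enumerate(steps, start=1):
        if rule == 'r':                                 # repetition: formula is a premiss
            ok = formula in gamma
        elif rule == 'imp_e':                           # →–E from steps refs[0], refs[1]
            (g1, f1, _, _) = steps[refs[0] - 1]
            (g2, f2, _, _) = steps[refs[1] - 1]
            ok = g1 == g2 == gamma and f1 == ('imp', f2, formula)
        elif rule == 'imp_i':                           # →–I, discharging the antecedent
            (g1, f1, _, _) = steps[refs[0] - 1]
            ok = (formula[0] == 'imp' and
                  g1 == gamma | {formula[1]} and
                  f1 == formula[2])
        else:
            ok = False
        if not ok:
            return False, n                             # first faulty step
    return True, None

p_q, q_r, p_r = ('imp', 'p', 'q'), ('imp', 'q', 'r'), ('imp', 'p', 'r')
G = frozenset({p_q, q_r, 'p'})
deduction = [                                           # example 6.2
    (G, p_q, 'r', []),
    (G, q_r, 'r', []),
    (G, 'p', 'r', []),
    (G, 'q', 'imp_e', [1, 3]),
    (G, 'r', 'imp_e', [2, 4]),
    (frozenset({p_q, q_r}), p_r, 'imp_i', [5]),
]
print(check(deduction))                                 # (True, None)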
This makes the proof of soundness in natural deduction easy to express. The
notation for evaluation V and interpretation I is that used in the definitions of
the semantics of sentence logic on page 53.

Theorem 6.3
(Soundness of natural deduction) If {P1 , ..., Pn } `nd Q then {P1 , ..., Pn } ² Q.

Proof: The premiss of the theorem asserts a deduction, that is, a finite
sequence of applications of the rule schemata in the natural deduction system.
This leaves the task of proving that the form of each of those rules also holds
for truth in interpretations, i.e. with ² in place of `nd . Each rule can be
proved independently.
That Γ, P ² P holds is obvious. It is also obvious that if Γ ² P , then
Γ ∪ Γ′ ² P.
Now to show, if Γ, ¬P ² Q and Γ, ¬P ² ¬Q, then Γ ² P.
Assume that Γ, ¬P ² Q and Γ, ¬P ² ¬Q hold. Assume further, for indi-
rect argument, that Γ 2 P and show that this leads to a contradiction.
If Γ 2 P then there must be an evaluation V of an interpretation I such
that V I (Γ) = 1 and V I (P ) = 0. The latter entails that V I (¬P ) = 1. Since
the premisses of the two entailments in the first assumption are satisfied,
then V I (Q) = 1 and V I (¬Q) = 1. This is clearly impossible, since by the
semantics of sentence logic V I (¬Q) = 0 iff V I (Q) = 1 (see page 44), and so
this is the desired contradiction.
Finally, we show that if Γ, P ² Q, then Γ ² P → Q. Other cases are dealt
with in a similar way and are left to the reader.
Assume that Γ, P ² Q and for indirect argument that Γ 2 P → Q. From
the latter statement it follows that there is an interpretation I such that
V I (Γ) = 1 and V I (P → Q) = 0. According to the semantics for sentence
logic it now holds that V I (P ) = 1 and that V I (Q) = 0. It therefore holds
that V I ({Γ, P }) = 1. But from Γ, P ² Q it then follows that V I (Q) = 1
which yields the desired contradiction.
In the corresponding way, it can be shown for all rules in the table that if
{P1 , ..., Pn } `nd Q then {P1 , ..., Pn } ² Q. A deduction consists of a finite
sequence of such steps, and therefore the property clearly holds for the last
step in the deduction, which constitutes the theorem. º
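Soundness can also be illustrated in a particular case by brute force: once a conclusion has been deduced, no valuation should satisfy the premisses while falsifying it. The following small test, our own Python illustration, confirms in this way that the conclusion of example 6.2, p → r, is a logical consequence of p → q and q → r.

from itertools import product

def imp(a, b):                       # the truth function for →
    return (not a) or b

def entails(premisses, conclusion, atoms):
    """True iff every valuation satisfying all premisses also satisfies the conclusion."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premisses) and not conclusion(v):
            return False             # a counter-example was found
    return True

premisses = [lambda v: imp(v['p'], v['q']),
             lambda v: imp(v['q'], v['r'])]
conclusion = lambda v: imp(v['p'], v['r'])
print(entails(premisses, conclusion, ['p', 'q', 'r']))    # True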

The proof that all logical consequences are deducible through natural deduc-
tion is more difficult. First a number of definitions and lemmas are needed.

The first central concepts are those of consistency and inconsistency. From


an inconsistent set of sentences, it is always possible to deduce a contradiction.

Definition 6.4
A set of sentences Γ in SL is consistent iff no sentence Q exists in SL such
that Γ `nd Q and Γ `nd ¬Q. If Γ is not consistent then Γ is inconsistent.

The proof of completeness can appear somewhat difficult, but this is only due
to the quantity of details involved. The proof structure itself is fairly simple.
The completeness proof needs to show that if {P1 , ..., Pn } ² Q then it holds
that {P1 , ..., Pn } `nd Q. The proof has the following structure.
1. {P1 , ..., Pn } ² Q (Assumption)
2. {P1 , ..., Pn , ¬Q} is unsatisfiable (Follows from 1 and lemma 5.7.)
3. {P1 , ..., Pn , ¬Q} is inconsistent (Follows from 2 and the main theorem below.)
4. {P1 , ..., Pn , ¬Q} `nd R and {P1 , ..., Pn , ¬Q} `nd ¬R, for some sentence R (Follows from 3 and definition 6.4 above.)
5. {P1 , ..., Pn } `nd Q (Follows from 4 and the definition of ¬ – E.)

Clearly there is really only one difficulty in the proof of completeness. This
is the proof that an unsatisfiable set is inconsistent. The proof is carried out
by extending a consistent set to something that is called a maximally con-
sistent set, and then showing that this is satisfiable. The result then follows
by contraposition.3 First a definition of maximal consistency is needed.
As indicated by the name, no sentences can be added to maximally consistent
sets without them becoming inconsistent. They are a special case of consistent
sets.

Definition 6.5
A set of sentences Γ in SL is maximally consistent iff Γ is consistent and
it holds for every sentence P in SL that if P ∉ Γ then it holds that Γ ∪ {P }
is inconsistent.

That a set of sentences is maximally consistent means that every sentence


that is deducible from the set is already an element in that set. The set there-
fore contains everything that can be derived from it. This is expressed more
precisely in the following lemma.
3
A result is usually said to follow from the transpositive if in order to show P → Q it is first
shown that ¬Q → ¬P , or vice versa.

Lemma 6.6
Given a maximally consistent set Γ of sentences, then it holds for each
sentence P in SL that P ∈ Γ iff Γ `nd P .

Proof: The only-if-direction: P ∈ Γ only if Γ `nd P , follows immediately


from the r rule of repetition: Γ ∪ {P } `nd P since P ∈ Γ.
Now to show the if-direction: if Γ `nd P then P ∈ Γ.
Assume that Γ `nd P and, for indirect argument, that P ∉ Γ. The latter
assumption together with the assumption that Γ is maximally consistent
leads to Γ ∪ {P } `nd ¬Q and Γ ∪ {P } `nd Q for some Q. According to the
rule of ¬ – I it follows that Γ `nd ¬P , which together with the assumption
Γ `nd P means that Γ is inconsistent. This contradicts the consistency of Γ,
and therefore, if Γ `nd P then P ∈ Γ. º
The next lemma shows a mechanical process by which a consistent set of
sentences can be expanded to a maximally consistent set of sentences.

Lemma 6.7
(Lindenbaum’s lemma) Every consistent set of sentences is a subset of
some maximally consistent set of sentences.

Proof: Let Γ be a consistent set of sentences and P1 , P2 , P3 , ..., be an


enumeration of all sentences in the language SL. An infinite sequence of sets
can now be constructed in the following way:
(i) Γ0 = Γ
(ii) Let Γi+1 = Γi ∪ {Pi+1 } if Γi ∪ {Pi+1 } is consistent and let Γi+1 = Γi
otherwise.
Clearly it holds that Γi ⊆ Γi+1 and that Γi is consistent for all i.
Now to show that Γ′ = Γ0 ∪ Γ1 ∪ Γ2 ∪ Γ3 ∪ ... is a maximally consistent
set.
First show that Γ′ is consistent.
Assume that this is not the case. Then some finite subset {Pi1 ,...,Pik } of
Γ′ exists that is inconsistent. Since Γi ⊆ Γi+1 for all i there must be some Γj
such that {Pi1 , ..., Pik } ⊆ Γj . But this means that Γj is inconsistent, which
contradicts that Γi is consistent for all i.
Second show that Γ′ is maximally consistent. Assume that a sentence Q
in SL is not in Γ′ . This means that Q ∉ Γi for every i. Q is in SL and
is therefore identical with some Pi+1 in the enumeration above, since this
contains all sentences in SL. From the way in which Γ′ was constructed,
Γi ∪ {Pi+1 } must be inconsistent, for otherwise Pi+1 would have been added
to Γi+1 . Hence Γi ∪ {Pi+1 } `nd R and Γi ∪ {Pi+1 } `nd ¬R for some R,
which, since Γi ⊆ Γ′ and by the rule a.p., means that Γ′ ∪ {Pi+1 }, i.e. Γ′ ∪ {Q}, is

inconsistent. Since that holds for every sentence Q ∉ Γ′ , by definition
6.5, Γ′ is maximally consistent. º
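The construction used in the proof can be tried out in a small, finite setting. The toy sketch below is our own and is deliberately restricted to literals, so that consistency can be tested by simply looking for a complementary pair; it walks through an enumeration and keeps exactly those candidates that preserve consistency, just as Γi+1 is formed from Γi in the lemma.

def consistent(literals):
    """A set of literals is consistent iff it contains no complementary pair."""
    return not any(('not', l) in literals for l in literals)

enumeration = ['p', ('not', 'p'), 'q', ('not', 'q'), 'r', ('not', 'r')]

stages = [frozenset({('not', 'q')})]          # Γ0, a consistent starting set
for s in enumeration:                         # Γi+1 = Γi ∪ {Pi+1} if that set is consistent
    candidate = stages[-1] | {s}
    stages.append(candidate if consistent(candidate) else stages[-1])

print(stages[-1])   # contains 'p', ('not', 'q') and 'r': maximal among these literals

In the lemma itself the enumeration covers all of SL and consistency is the deductive notion of definition 6.4, but the mechanism of the construction is the same.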
In order to prove the next lemma a so-called induction proof over the
sentences’ complexity will be used. The idea for this is first
to show the base case, that the lemma holds for all atoms, and thereafter to
show that if it holds for the base case then it must also hold when connectives
are introduced, i.e. that if it holds for the sentences P and Q then it must
also hold for P ∧ Q – the induction step. Having shown the lemma holds for
all atoms then entails that it holds for all conjunctions of two atoms, and any
conjunctions of those conjunctions, and so on. For example, if the lemma holds
for p, q, r, s and for p ∧ q and r ∧ s then by the induction step it holds for
(p ∧ q) ∧ (r ∧ s), and so on for conjunctively structured sentences of arbitrary
complexity. If the induction step can be shown to hold for all the connectives
∧, ∨, ¬, → and ↔, this means that the lemma holds for any sentence in the
language SL, since these are all syntactically constructed starting with atoms
and then connectives to form more complex sentences, which in turn can be
used in the formation of more complex sentences, and so on. A proof of the
base case and the induction step for all connectives is a proof of all possible
cases. For example, consider the sentence [(p → q) ∧ (r ∧ s)] ∨ s. If something holds
for the atoms p, q, r and s (the base case), and also holds for combinations
of those atoms using → and ∧ (the induction step), then it holds for p → q and
r ∧ s. But since it holds for these constructions, it holds, by induction
on ∧, for the sentence (p → q) ∧ (r ∧ s). Finally, induction on ∨ requires that
the construction [(p → q) ∧ (r ∧ s)] ∨ s also holds.
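The same recursion over complexity is what the semantics itself uses: the truth value of a compound sentence is determined by the truth values of its immediate parts, with the atoms as base case. The sketch below, our own Python encoding with sentences as nested tuples, evaluates the example sentence [(p → q) ∧ (r ∧ s)] ∨ s in exactly this inductive fashion.

def value(sentence, v):
    """Truth value of a sentence under the valuation v of the atoms."""
    op = sentence[0]
    if op == 'atom':
        return v[sentence[1]]                                   # base case: look up the atom
    # induction steps: the value of a compound is determined by its parts
    if op == 'not':
        return not value(sentence[1], v)
    if op == 'and':
        return value(sentence[1], v) and value(sentence[2], v)
    if op == 'or':
        return value(sentence[1], v) or value(sentence[2], v)
    if op == 'imp':
        return (not value(sentence[1], v)) or value(sentence[2], v)
    if op == 'iff':
        return value(sentence[1], v) == value(sentence[2], v)
    raise ValueError('unknown connective')

# [(p → q) ∧ (r ∧ s)] ∨ s:
s = ('or',
     ('and', ('imp', ('atom', 'p'), ('atom', 'q')),
             ('and', ('atom', 'r'), ('atom', 's'))),
     ('atom', 's'))
print(value(s, {'p': True, 'q': False, 'r': True, 's': True}))  # True, via the right disjunct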
The next lemma says that if all the atoms in a maximally consistent set are
true, then all the sentences in the set are true.

Lemma 6.8
Given a maximally consistent set Γ of sentences and an interpretation I
such that for each atom p in SL it holds that V I (p) = 1 iff p ∈ Γ, then it
holds for every sentence Q in SL that Q ∈ Γ iff V I (Q) = 1.

Proof: If p is an atom then lemma 6.8 holds by its given conditions. Now
show that the lemma holds for an arbitrary sentence in SL, by showing
this for connectives ¬ and ∧, but leaving the remaining connectives to the
reader, since these are dealt with similarly.
Assume that the lemma holds for the sentence P . Show that the lemma
holds for ¬P by first showing that ¬P ∈ Γ iff P ∉ Γ, and then expressing
this in terms of the evaluation V I .
Assume for indirect argument that ¬P ∈ Γ and P ∈ Γ. According to
lemma 6.6, the rule of repetition, and definition 6.4 this means that Γ is

inconsistent, which contradicts the given condition that Γ is a maximally


consistent set of sentences. Therefore it holds that if ¬P ∈ Γ then P ∉ Γ.
Assume for hypothetical argument that P ∉ Γ. By definition 6.5, this
means that Γ ∪ {P } is inconsistent. The rule of ¬ – I then entails that
Γ `nd ¬P . By lemma 6.6 it then holds that ¬P ∈ Γ. Consequently if P ∉ Γ
then ¬P ∈ Γ.
This shows that ¬P ∈ Γ iff P ∉ Γ.
Furthermore, P ∉ Γ iff V I (P ) = 0, by the transpositive of the assumption
that the lemma holds for the sentence P . Since, by the semantics of ¬,
V I (P ) = 0 iff V I (¬P ) = 1, this means that ¬P ∈ Γ iff V I (¬P ) = 1.
Now for connective ∧ assume the lemma holds for sentences P and R,
and show that the lemma then holds for P ∧ R. To show this, first show that
P ∧ R ∈ Γ iff P ∈ Γ and R ∈ Γ.
Assume for hypothetical argument that both P ∈ Γ and R ∈ Γ. By lemma
6.6 Γ `nd P and Γ `nd R. From the rule of ∧ – I it follows that Γ `nd P ∧ R.
By lemma 6.6 again, it then holds that P ∧ R ∈ Γ.
For hypothetical argument in the other direction assume that P ∧ R ∈ Γ.
By the rule of ∧ – E it then holds that Γ `nd P and Γ `nd R. By lemma
6.6 again it follows that P ∈ Γ and R ∈ Γ. Thus it holds that P ∧ R ∈ Γ iff
P ∈ Γ and R ∈ Γ. And according to the given conditions, the latter holds iff
V I (P ) = 1 and V I (R) = 1, which holds iff V I (P ∧ R) = 1, by the semantics
of ∧. º

Theorem 6.9
(Completeness of natural deduction) If {P1 , ..., Pn } ² Q
then {P1 , ..., Pn } `nd Q.

Proof: Assume that {P1 , ..., Pn } ² Q. According to lemma 5.7 on page 70,
M = {P1 , ..., Pn , ¬Q} is unsatisfiable. Assume further, for indirect argument,
that M is consistent. Then by lemma 6.7, M can be expanded to a maximally
consistent set M′ . By lemma 6.8 an interpretation I exists such that
V I (R) = 1 for each sentence R in M′ . Since M ⊆ M′ , I is also a model for M ,
which is therefore satisfiable, which is a contradiction. Therefore M must be
inconsistent. By definition 6.4 this means that there is a sentence R such that
M `nd R and M `nd ¬R. Since M = {P1 , ..., Pn , ¬Q} it follows from the rule
of ¬ – E that {P1 , ..., Pn } `nd Q. º
In order to illustrate the resemblance of a meta-logical proof to proofs in the
natural deduction system, the structure of the proof above is presented below
in a similar way to a proof in natural deduction. Constructing an exposition
like this often helps to understand proofs.

1. {P1 , ..., Pn } ² Q (assumption for hypothetical arg.)
2. let M = {P1 , ..., Pn , ¬Q} (abbreviating substitution)
3. M is unsatisfiable (1, lemma 5.7)
4. M is consistent (assumption for indirect arg.)
5. M expands to M′ (4, lemma 6.7)
6. V I (R) = 1 for all R in M′ (5, lemma 6.8)
7. M ⊆ M′ (4, 5, lemma 6.7)
8. I is a model for M (6, 7, definition 3.9)
9. M is satisfiable (8, definition 3.11)
10. contradiction (9, 3)
11. M is inconsistent (4-10, negation introduction)
12. M `nd R and M `nd ¬R (11, definition 6.4)
13. {P1 , ..., Pn } `nd Q (2, 12, rule of ¬ – E)
14. If {P1 , ..., Pn } ² Q
    then {P1 , ..., Pn } `nd Q (1-13, entailment introduction)

Exercises
6.1 Show that a sentence logic case only containing the connectives ∨ and ¬ is
sound and complete.
6.2 Show that a sentence logic case only containing the connectives → and ¬ is
sound and complete.
6.3 Sketch a proof of the soundness and completeness of the Semantic tableau
method for the sentence logic case.
6.4 Show soundness and completeness of the Resolution method for the sen-
tence logic case.
6.6 Show that a sentence logic case only containing the connective ∨ is not com-
plete.

Revise & Reflect

1. Which of the following is not analogous, and why?


- Logical truth is to validity as tautology is to theoremhood
- Unsatisfiability is to a model as inconsistency is to a proof
- A contradiction is to a proof as a counter-example is to an eval-
uation
- Completeness is to provability as soundness is to validity
2. What is the role of maximally consistent sets in the proof of completeness?

Solutions to Exercises

Answers and solutions to selected exercises can be found at:


http://sites.google.com/site/logicbasicsbeyond/
Index

Symbols List → – I, implication-introduction,


F0 , falsum, contradiction (arbitrary), introduction rule natural deduction,
constant value 0, 72 99
F0 , falsum, contradiction (arbitrary), →– E, implication-elimination,
constant value 0, 46 elimination rule natural deduction,
H, system H, 76 99
M 2 P , counter-example, P does not →, implication, 22
follow logically from M , 63 ∴, logical consequence, conclusion, 59
M ² Q, logical consequence, 59 ×, closed branch in tableau method,
P ⇔ Q, logically equivalent sentence, 83, 85
59 ² Q, logically true sentence, 59
P1 , P2 , ..., Pn ← Q1 , Q2 , ..., Qm , arrow `h Q, sentence Q is provable in system
notation in resolution method, H, 78
denotes `nd Q, sentence Q is provable by
P1 ∨P2 ∨...∨Pn ∨¬Q1 ∨¬Q2 ∨...∨¬Qm , natural deduction, 104
92
`res Q, sentence Q is deducible by
S d , dual sentence, 72
resolution method, 96
T0 , tautology (arbitrary), constant
`t Q, sentence Q is provable by
value 1, 46, 72
tableau method, 87
Γ, premiss set, 112
∨– E, or-elimination, elimination rule
⊥ – I, falsum-introduction,
natural deduction, 99
introduction rule natural deduction,
99 ∨– I, or-introduction, introduction rule
⊥, falsum, 92 natural deduction, 99
↔ – E, equivalence-elimination, ∨, disjunction, or, 27
elimination rule natural deduction, Y, exclusive or, 29
99 ∧– E, and-elimination, elimination rule
↔ – I, equivalence-introduction, natural deduction, 99
introduction rule natural deduction, ∧– I, and-introduction, introduction
99 rule natural deduction, 99
↔, equivalence, 23 ∧, conjunction, and, 27
¬ – I, not-introduction, introduction {P1 , ..., Pn } `h Q, deducible sentence
rule natural deduction, 99 from premisses in system H, 77
¬– E, not-elimination, elimination rule {P1 , ..., Pn } `nd Q, sentence Q is
natural deduction, 99 deducible by natural deduction for
¬, negation, 23 the premisses, 104


{P1 , ..., Pn } `res Q, sentence Q is cat


deducible by resolution method from Castro, 14, 98
premisses, 95 Charley’s, 20
{P1 , ..., Pn } `t Q, sentence Q is eats fish, 14
deducible by tableau method from ever so cuddly, 28
premisses, 87 categorical
ap, added premiss, 112 sentence in SL, 54, 55
u, repetition (deduction rule, 112 Church, Alonzo, 12
u, repetition, deduction rule, 99 clause, 89
nand, 51 closed
branch in tableau method, 85
absorption laws, 72 tableau, 85
alphabet CNF (conjunctive normal form), 89
in sentence logic, SLA, 41 coffee, 67
although, 28 commutative laws, 72
Amarinja, 13 completeness
An Investigation of the Laws of Thought, of a logic, 16
11 of natural deduction, 106, 118
and, 27, 28 of resolution method, 97
antecedent, 22 of sentence logic, 112
argument of system H, 79
enthymematic, 66 tableau method, 87
hypothetical, 100 conclusion, 22
in sentence logic, 58 condition
incomplete, 66 necessary and sufficient, 23
indirect, 100 necessary, 22
Aristotle, 10 sufficient, 22
assignment, 43 conjunction, 28, 44
associative laws, 72 conjunctive normal form, 89
assumption, 22 connective, 13, 27
atom, 13 although, 28
in the language SL, 41 and, 27
automatic theorem proving, 89 exclusive or, 29
axiomatic rule expressive power, 50
modus ponens, 76 in spite of, 28
axiomatic systems, 76 in the language SL, 41
baboon nand, 51
Bill, 98 or, 27
basic logical relationships, 71 otherwise, 29, 31
Begriffschrift, 11 precedence, 42
bogus solutions, 32, 34 unless, 29
Boole, G., 11 xor, 29, 51
branch consequent, 22
closed, in tableau method, 85 consistent
open, in tableau method, 85 mathematics, 12
Brouwer, L. E. J., 12 set of sentences in SL, 115
contingent
Carnap, Rudolf, 12 sentence in SL, 46, 52, 54
Castro, Fidel, 14, 77, 89, 98 contraction rules

sequent calculus, 109 equivalence, 23


contradiction, 45 truth table, 45
free from, 12 Euclid, 12
in the language SL, 52 evaluation
contradiction principle, 73 of propositional symbols in the
counter example, 63 language SL, 53
counter-example, 62 exclusive or, 29
counter-model expressive power
in sentence logic, 53 of connectives, 50
with tableau method, 86
cut rule false hypotheses, 32
sequent calculus, 109 falsifiable
cut-elimination theorem in sentence logic, 53
sequent calculus, 109 falsified
sentence in SL, 53
de Morgan’s laws, 30, 45, 72 falsum
de Morgan, A, 11 in resolution, 92
deducible form of a sentence, 20
in system H, 77 Formalists, 11
deducible sentence Formal Logic, or the Calculus of
in natural deduction, 103 Inference, 11
in system H, 77 free from contradictions
tableau method, 87 ‘consistent’, 12
with resolution method, 94 Frege, Gottlob, 11
deduction Begriffschrift, 11
in natural deduction, 103 Grundgesetze der Arithmetik, 11
in sentence logic, 58 fuzzy logic, 16
with resolution method, 93, 94
Gödel, Kurt, 11
deduction theorem, 61
Gentzen, Gerhard, 12, 98, 107
discharge assumption, 100
Grundgesetze der Arithmetik, 11
disjunction, 28
truth table, 44 H, deduction system, 76
disjunctive syllogism (deductive rule), Hardy, G. H., 10
73, 89 Henkin, Leon, 12
distributive law Herbrand, Jacques, 12
for ∧ and ∨, 72 higher order language, 15
for ∧ and ∨, 47 Hilbert, David, 11
for multiplication in arithmetic, 47 history of logic, 10
dominance laws, 72 Hungarian, 13
dual sentence hypothesis, 22
duality principle, 73 false, 32, 33
duality, 72 hypothetical
argument, 100
Elementa, Euclid, 12
elimination rules idempotency laws, 72
natural deduction, 99 identity laws, 72
empty clause, 92 if and only if, iff, 23
enthymematic reasoning, 68 if s then t
enthymeme, 66 programming language, 26

iff (if and only if), 23 what is logic, 12


imperatives, 43 logical
implication, 22 content, 13
truth table, 44 logical consequence
valid, 62 in SL, 59
implies, 22 logical content, 43
in spite of, 28 logical relationship
incomplete argument, 66 contradiction principle, 73
indirect modus ponens, 73
argument, 100 modus tollens, 73
indirect deduction, 102 syllogism principle, 73
induction proof, 117 logically
information true sentence in SL, 54
categorical, 55 logically equivalent, 59
content in a sentence, 55 logically false
empty, 55 sentence in SL, 54
insinuation, 68 logically true
instantiate sentence in SL, 59
axiom schema, 76
rule, 43 maximally consistent set of sentences,
variable, 43 115
interjections, 43 meaning
interpretation of a sentence in SL, 43
in the language SL, 52 meta-language, 16
of a sentence in SL, 52 meta-logic, 16, 118
of propositional symbols in the meta-mathematical methods, 11
language SL, 52 modal logic, 15
introduction rules model
natural deduction, 99 in sentence logic, 53
sequent calculus, 108 modus ponens
inversion law axiomatic rule, 76
law of the excluded middle, 72 in natural deduction, 101
in SL, 73
Kleene, Stephen Cole, 12, 76 modus tollens, 73

language nand, 51
higher order, 15 natural deduction, 98
logical, 13 completeness, 106, 118
law of double negation, 72 elimination rules, 99
law of excluded middle, 46, 72 introduction rules, 99
leads to, 22 proof with, 103
lexicon, 19 soundness, 105
Lindenbaum’s lemma, 116 soundness in propositional logic, 114
literal, 89 necessary
logic condition, 22
history of, 10 negation
languages for, 13 truth table, 45
sentence/propositional, introduction, Neumann, John von, 12
18 normal form

conjunctive (CNF), 89 resolution


in propositional logic, 89
object language, 16 resolution method, 89
open completeness, 97
branch in tableau method, 85 deduction, 93
tableau, 85 deduction with, 94
or, 27, 28 proof with, 94
otherwise, 29, 31 soundness, 97
resolution rule, 93
Portuguese, 13
general form, 94
Prawitz, Dag, 98
resolving literals, 96
precedence order for connectives, 42
Robinson, J. A., 89
predicate logic, 15
rule
premiss, 22
instantiation, 43
minimal, 67
rule system
Principia Mathematica, 11
semantic tableaux, 80
principle of contradiction, 46, 72
rule systems
programming language
for sentence logic, 75
if s then t, 26
rules
Prolog, 89
alfa-, 84
proof
beta-, 84
in natural deduction, 103
Russel’s paradox, 11
in system H, 78
Russel, Bertrand, 11, 18
tableau method, 87
with resolution method, 94
satisfiable
proposition, 13
in sentence logic, 53
duel, 72
satisfied
in sentence logic, 13
sentence in SL, 53
transpositive, 26
schema
propositional
axiom, 76
logic, introduction, 18
rule, 76
variable, 19
semantic tableau, 80, 85
propositional language SL, 41
closed and open branchs, 85
propositional logic, 13
semantics, 12
propositional variable, 43
of sentence logic, 52
provable
sentence, 43
by natural deduction, 104
categorical, 54
sentence by resolution method, 96
contingent, 54
sentence in system H, 78
dual, 72
sentence tableau method, 87
falsified, 53
punctuation marks
in SL, 42
in the langauge SL, 41
information content, 55
questions, 43 logically false, 54
Quine, Willard van Orman, 12 logically true, 54
satisfied, 53
reasoning semantics in SL, 52
correct, 60 unsatisfiable, 54
enthymematic, 66 sentence logic, SL, 41
faulty, 61 sequent

basic, 107 natural language to logic, 48


Skolem, Thoralf, 12 transpositive law, 115
SL, 41 transpositive proposition, 26
semantics, 52 tree
syntax, 41 semantic tableau, 84
SLA, alphabet in sentence logic, 41 truth table, 44
solutions truth value
bogus, 32, 33 of a sentence in SL, 43
soundness Turing, Alan, 12
of a logic, 16
of natural deduction, 105 unless, 29
propositional logic, 114 unsatisfiable
of resolution method in SL, 97 sentence in SL, 54
of sentence logic, 112 vague expression, 16
of system H, 79 valid
of tableau method, 87 in sentence logic, 62
structural rules variable, 15
sequent calculus, 108 in the language SL, 43
subsentence, 42 instantiation, 43
sufficient propositional, 19
condition, 22
syllogism Whitehead, Alfred North, 11
disjunctive, 89 Principia Mathematica, 11
syllogism principle, 73, 101
symbolic logic, 11 xor, 29, 51
Symbols list Über Formal Unentscheidbare Sätze der
xor, exclusive or, 29 Principia Mathematica und
synonymous with, 23 Verwandter Systeme, 12
syntax, 12
of sentence logic, 41
system H, 76

tableau
closed, 85
open, 85
semantic, 85
tableau method
completeness, 87
soundness, 87
Tarski, Alfred, 12
tautology, 45, 52, 59
temporal logic, 15
theorem
in the language SL, 52
theorem proving
automatic, 89
train of thought, 66
transitivity, 70
translation
