
Introductory Logic

by John G. Bennett
The University of Rochester

© 1999-2008 by John G. Bennett


Preface
This book reflects a consensus that developed in the University of Rochester Philosophy Department among David Braun, Ted Sider, and myself about what should be taught in the introductory symbolic logic course. The main outlines of this consensus were these:

• The course should give students a working knowledge of first order predicate logic with identity.
• The students should learn a natural deduction system somewhat like that of Kalish, Montague, and Mar, Logic: Techniques of Formal Reasoning,¹ but simplified in various ways. The Kalish-Montague system employs SHOW lines to state conclusions before the steps that derive them from premises, allows assumptions only immediately after an appropriate SHOW line, and has quantifier rules that are simple, at the cost of being slightly more restrictive than necessary. The natural deduction system should be designed for ease of use, rather than for ease of proving meta-theorems.
• The course should include appropriate informal semantics.
• The course should focus on using and understanding the first-order language, rather than on meta-theory.

We agreed that there was no satisfactory book meeting these criteria; hence this book. I have made a variety of additional decisions. I chose the rules for the natural deduction system to make proofs easy, without requiring the use of a theorem list, although I generally supply students with a sheet of diagrams of all the rules for tests. I also tried to include among the rules many of the simple inferences I would make myself in convincing myself that an argument was valid. In addition, I wanted the rules to include many logical principles whose omission would be disgraceful. All such choices of rules are ultimately arbitrary, and I do not pretend that my set is ideal. I have divided the rules into basic rules and derived rules, even though a rigorous showing that the derived rules are dispensable is not supplied. Examples or exercises are available to illustrate how the derived rules of inference can be dispensed with, but the replacement rules cannot so easily be shown dispensable. Indeed, the most straightforward proof is as a corollary to a completeness proof for the basic rules, something that is well beyond the scope of this text.

I have emphasized in the book the artificial nature of the logical languages, and have never suggested that the logical symbols are in any way abbreviations of the English terms to which they more or less roughly correspond. This seems to me the only intellectually defensible position, though it can be difficult for beginners. Although I wish to make the logic as simple as possible, I don't think it acceptable to mislead students about the nature of the subject. Given this attitude, some might wonder at the use of the pseudo-sentence ! adapted from Hardegree's logic text.² The more traditional method would be to treat ! as a logical constant having the value False, but since the ! is here used only in derivations, the method adopted in the text seemed more sensible, and seems intellectually defensible as well. Anyone who disagrees may easily adopt the traditional approach. The advantages of using ! on either interpretation are chiefly pedagogical, since its use makes more obvious the point at which an indirect derivation is finished, requires the student to cite the contradictory sentences derived explicitly, and provides an appropriate SHOW line for the indirect derivations.

Metatheory is generally ignored in the text, being limited to some very informal remarks without proof. Generally the techniques needed for even a rigorous statement of metatheorems are beyond what seems feasible to teach in the one-semester course for which this book is designed.

Some instructors may wish to use this text for a course which does not go beyond monadic predicate logic. This can be done by skipping Chapters 7 and 11, section 7 of Chapter 9, and section 3 of Chapter 10, and omitting exercises in Chapters 8 and 10 which involve polyadic predicates.

As I am a bad proofreader, I am especially grateful to the long-suffering students of my Spring, 1999 logic course, who had to use a preliminary version of this book that was riddled with errors. Especially helpful in identifying typographical errors were Alan McDaniel, Doug Clouden, Kieshelle Cudjoe, Jason Baxter, Tara Blackman, Frank Georges, Brian Kolstad, Emily Lewis, Liz Peters, Shuomon Sharif, and Brianna Winters. I am also grateful to Gabriel Uzquiano, who used the book in his Fall, 1999 logic course, and found many more typos. Also, John Kwak, my Teaching Assistant for the Fall of 2007, found a staggering number of errors, most of which I hope I have corrected. I am very grateful to him and to all who have helped by pointing out errors. For whatever errors and stupidities remain, I am alone responsible. Correspondence about them should be directed to jbennett@philosophy.rochester.edu

John G. Bennett
University of Rochester

¹ Second edition, Harcourt Brace Jovanovich, 1980.
² Gary Hardegree, Symbolic Logic: A First Course, McGraw-Hill, 1994. Hardegree uses a different symbol in place of !.

Preface to the Student


This logic book tries to tell you all that you need to know to master your logic course. In theory you could learn the material of the course without ever going to class. But in practice, skipping class is probably a very bad idea. This book has not been written for the autodidact; rather, it is intended to accompany a course in which crucial points are explained more fully, illustrative examples are given, and student questions are answered. Furthermore, nearly all students learn better if they not only read, but also hear the material explained by a human being. Any course that uses this book will be a hard course. The book and your instructor try to make it as easy as possible to learn logic, but there is no avoiding the fact that logic at this level is a difficult subject. However, there are things you can do to make sure that you master the material. Here are some:

• Work on logic every day for at least a couple of hours. You might be able to get away with working on logic only every other day, but do not expect to do well if you study only once a week or only when exercises are due.
• Keep up with the material. You cannot cram logic. If you haven't been doing logic regularly, there is little chance that you can learn enough the night before a test to do well. The material of a logic course is cumulative. If you don't understand one part, you may not understand the next.
• Do lots of exercises. You learn logic mostly by doing exercises. There are more exercises in this book than your instructor will require you to hand in, but you should try to do as many of them as possible, even when they are not assigned.
• Pick up any written work you hand in as soon as it is available. (This includes tests.) You need to get feedback to find out how you are doing. Since the material is cumulative, you can't do well on the next assignment if you don't understand the last one. Be sure to understand any mistakes you made on work you handed in and how to avoid them in the future.
• Ask questions when you don't understand things or can't do many of the exercises. If there is something you don't understand, you need to do something to remedy the situation. Ask your instructor, teaching assistant, or another student who knows the material. Don't put off your questions any longer than necessary. Remember, the only stupid question is the one that is not asked.
• Don't worry if you can't do one or two of the exercises; some are rather hard, and not all students will be able to do all of the exercises. But if you find yourself unable to do most of the exercises in a given set, you should get some help.

The author of this book is not a very good proofreader, so there are likely to be errors in the text. Please send any you find to the author by e-mail at the following address: jbennett@philosophy.rochester.edu


Table of Contents
Preface
Preface to the Student
Table of Contents
Chapter 1: Basic Logical Concepts
   1: Truth
   2: Statements and Sentences
   3: Logical Form
   4: Logical Inconsistency and Equivalence
      Exercises 1-4
   5: Arguments, Validity, and Soundness
      Exercises 1-5
   6. The Rest of this Book
   Definitions for Chapter 1
   Appendix I: Logical Possibility
   Appendix II: Use and Mention
      Exercises 1-A2
Chapter 2: L1, Syntax and Translation
   1: Vocabulary and syntax
      Exercises 2-1
   2: Interpretations
   3: Truth-functional Connectives
      Exercises 2-3
   4: Translation
      Exercises 2-4
   5: Complex Translations
      Exercises 2-5
Chapter 3: Truth Functional Logic
   1: Truth of Sentences of L1
      Exercises 3-1
   2: Calculating all Possible Truth Values
      Exercises 3-2
   3: Logical relations in L1
   4: Shortcuts
      Exercises 3-4
   5: Truth-Functional Logical Properties of English Sentences
      Exercises 3-5
   Appendix: Knights and Knaves
Chapter 4: Derivations I
   1: Overview of Derivations
   2: Getting Started with Derivations
      Exercises 4-2
   3: More Rules of Inference
      Exercises 4-3
   4: Subderivations and two more rules
      Exercises 4-4
   5: Using Rules Correctly
      Exercises 4-5
   6: Derivation Strategies
      Exercises 4-6
Chapter 5: Derivations II
   1: Derived Rules
      Exercises 5-1
   2: Showing Inconsistency and Equivalence with Derivations
      Exercises 5-2
   3: Replacement Rules
      Exercises 5-3
   4: Derivation Strategies
      Exercises 5-4
   5: D1 Derivations and English proofs
      Exercises 5-5
   6: The Deductive System D1
   Rules for D1
      Assumption Rules
      Basic Rules of Inference
      Pseudo-inference rule
      Structural Rules
      Completed Derivation
      Derived Rules of Inference for D1
      Replacement Rules for D1
Chapter 6: Monadic Quantification
   1: Names and Predicates
      Exercises 6-1
   2: Variables, Open Sentences, and Satisfaction
      Exercises 6-2
   3: Interpretations and Small Domains
      Exercises 6-3
   4: Quantifiers
      Exercises 6-4
   5: Translation: All and its Variants
      Exercises 6-5
   6: Translation: At least one and its variants
      Exercises 6-6
   7: Some additional common idioms of English
      Exercises 6-7
   8: Translation into L2: Complex Sentences
      Exercises 6-8
   Appendix: Syntax Rules for L2
Chapter 7: Polyadic Quantification
   1: Predicates with more than one place
      Exercises 7-1
   2: Translating Quantifiers and Polyadic Predicates
      Exercises 7-2
   3: Small domains with Polyadic Predicates
      Exercises 7-3
   4: Ambiguities in English
      Exercises 7-4
Chapter 8: Quantificational Relations
   1: Quantificational logical relations
      Exercises 8-1
   2: Logical Relations with Polyadic Predicates
      Exercises 8-2
   3: Confinement
      Exercises 8-3
   4: Expansion in Finite Domains
      Exercises 8-4
Chapter 9: Quantificational Derivations
   1: Instances of quantified sentences
      Exercises 9-1
   2: The Universal Exploitation and Existential Introduction rules
      Exercises 9-2
   3: Existential Exploitation
      Exercises 9-3
   4: Universal Derivation
      Exercises 9-4
   5: Using Rules Correctly
      Exercises 9-5
   6: Strategy in Quantificational Derivations
      Exercises 9-6
   7: Derivations with Polyadic Predicates
      Exercises 9-7
New Derivation Rules in Chapter 9
   Basic Quantificational Rules of Inference
   Quantificational Structural Rule
Chapter 10: More Quantificational Derivations
   1: Quantifier Negation
      Exercises 10-1
   2: Strategies for Derivations
      Exercises 10-2
   3: Enthymemes
      Exercises 10-3
Chapter 11: Identity
   1: Translations Using the Identity Predicate
      Exercises 11-1
   2: Definite Descriptions
      Exercises 11-2
   3: Showing Invalidity, Consistency, Non-equivalence
      Exercises 11-3
   4: Derivations with Identity
      Exercises 11-4
Answers to Odd Exercises
   Chapter 1
   Chapter 2
   Chapter 3
   Chapter 4
   Chapter 5
   Chapter 6
   Chapter 7
   Chapter 8
   Chapter 9
   Chapter 10
   Chapter 11



Chapter 1: Basic Logical Concepts


Peter is tying the shoes of his baby brother. His younger sister, Ellie, who has just learned to tie her own shoes and is still having a bit of trouble doing so, asks, "Will you tie my shoes too, Peter?" Peter says, "No, Ellie. I don't tie the shoes of those who are old enough to tie their own shoes. You are old enough to tie your own shoes, and you should do so. It's good practice." Ellie asks, "Do you tie your own shoes?" Peter replies, "Of course. I'm old enough to tie my own shoes, and I do so. You are too, so you should tie your own shoes." Peter has spoken carelessly and made inconsistent statements. What he said, taken strictly and literally, cannot be true. (Can you figure out why? Can you figure out what he meant to say instead?)

Alice is reading a web page that is part of the Mystics-and-mystery.com web site. She reads, "There is something called The One. Each thing is such that it is identical to The One if and only if The One is divine." Alice notices that it is a consequence of this second sentence that there is only one thing and it is Divine. (Can you see why this is a consequence of the web page's statement?) "That's a sort of Pantheism," she thinks.

The inconsistency of Peter's statements and the consequence relation that Alice notices between the statement of the web site and the conclusion that she draws are examples of deductive logical properties and relations. Of course, there are simpler and more familiar examples. For instance, you may know that this is a good argument:

All humans are mortal.
Socrates is human.
Therefore, Socrates is mortal.

Since this is a good argument, its conclusion must be true if its premises are. Logic deals with these sorts of properties and relations. In this book, we will learn about some logical relations and develop tools for dealing with them in a rigorous fashion.

We won't develop the tools for demonstrating Peter's inconsistency until Chapter 9, and the tools for dealing with what he probably meant to say instead won't appear until Chapter 11, where we will also develop the tools needed to show that Alice's inference is a good one.

This chapter introduces the basic logical concepts we will work with in this course. The most fundamental one is truth, and we shall begin by saying something about it. Next we must distinguish statements, which are true or false, from sentences generally, because not all sentences are true or false, though all statements are. The fundamental idea of deductive logic is that statements may have important properties in virtue of their forms. We shall introduce the notion of logical form and use it to explain the fundamental logical relations: consistency, equivalence, and validity. Finally we shall say a word about how we will proceed in the remaining chapters of this book.

1: Truth

Logic deals with truth. Although philosophers have argued about the nature of truth for centuries, for logical purposes we only need to get clear about a few simple and relatively uncontroversial facts about truth. The basic idea is very simple: a statement is true if, and only if, things are as the statement says they are. It is false if things are not as it says. For instance, it is true that all dogs bark if all dogs do bark; otherwise, it is false. It is true that Columbus sailed to the New World in 1492 if Columbus did so in that year, false otherwise.

Truth is not knowledge. Although everything that is known is true, not everything true is known. What is known varies from person to person, from place to place, and from time to time, but truth does not vary in this way. For instance, one thousand years ago no one knew that water was composed of hydrogen and oxygen. Still, it was composed of these elements then, just as it is now. It was true then, as it is now, that water is composed of hydrogen and oxygen, although no one then knew that it was true. There may even be truths that no one will ever know. Consider the following statement:

On July 4, 3042 B.C.E., at least 0.2 inches of rain fell on (what is now) Rochester, New York.

Probably no one has ever known, and probably no one ever will know whether that statement is true or false.

And if it is not true, then the following statement is true:

On July 4, 3042 B.C.E., there was less than 0.2 inches of rain on (what is now) Rochester, New York.

One of these two statements must be true, but I doubt that anyone will ever know which one.

Truth may be controversial. Whether something is true is not ordinarily determined by whether people agree about it. Scientists disagree (at the time of this writing) about whether there is a genetic component to homosexuality. Nevertheless, the following statement,

There is at least one gene, possession of which makes a person more likely to be homosexual.

is either true or false, no matter how controversial it may be. Controversy cannot rob a statement of truth, though it may make it harder for us to know what is true.

2: Statements and Sentences

Statements are true or false. In this course, we shall use the term "statement" for things that are either true or false. So far, all the statements we have considered have been given in sentences, and so it will be throughout this book, even though there are good reasons for believing that there are many truths that cannot be expressed in any sentence of English. However, even though we will make statements using sentences of English, statements cannot be simply identified with sentences of English: some sentences of English make no statements, or make different statements on different occasions. Let's consider some of the ways English sentences may fail to make statements.

English sentences need not be declarative sentences. English sentences may ask questions or issue orders:

Shut the door!
Have you shut the door?

Only sentences in declarative mood, like

The door is shut.

can make statements, although sentences in other moods can presuppose, insinuate, or suggest statements. The sentence, "Have you stopped cutting classes, yet?" presupposes, and so suggests, that the person to whom it is addressed has been cutting classes, but it does not make that statement directly. In this course, we shall only be concerned with what sentences say directly, and all our sentences that express statements will be declarative sentences.

English sentences may contain indexical terms or other terms whose reference varies with context. Indexical terms are terms like "I", "you", "here", and "now", whose reference varies on different occasions of use. When I say, "I am hungry," I am making a different statement from the one you make when you say, "I am hungry." One of these statements may be true and the other false. So the English sentence, "I am hungry," is not simply true or false, but is sometimes true and sometimes false, depending on who is uttering it. The word "now" also varies similarly. If I should say, "It is now raining," what I say would be true if I say it when it is raining, but false when said at other times. This feature of "now" is pervasive in English, because of tense: a present tense sentence always says what's happening now, a past tense sentence says what happened before now, and so on. Other terms in the language may need the circumstances of their utterance to make clear what statement they make. If I say, "The table is cluttered," I may be referring to my dining room table or to the table in my office; ordinarily we tell which by the circumstances under which I utter the sentence.

We can, at some cost in convenience, regiment our sentences so that they make a single statement regardless of the circumstances under which they are uttered. Occurrences of "I" can be replaced by names; I would replace "I" by "John G. Bennett", while you would replace "I" by "Wilma S. Freeblestrom".³ "Now" can be replaced by a date and time, and tensed sentences can be replaced by sentences with the time reference made explicit.

³ Or whatever your name is, if it is not Wilma S. Freeblestrom.

We shall not always be that careful in giving examples of statements in this book, because that would make the examples even more cumbersome and artificial than they will be anyway. But you should always imagine that sentences that are supposed to make statements have been appropriately spelled out in this way.

English sentences may be ambiguous. Ambiguous sentences can be understood in more than one way. Understood one way an ambiguous sentence may be true, while understood another, it may be false. For instance, consider the following sentence, uttered by a spirit conjured up by some characters in Shakespeare's play, Henry VI Part II (Act i, Sc. iv):

The duke yet lives that Henry shall depose;
But him outlive, and die a violent death.

Will Henry depose the duke, or the duke depose Henry? The sentence can be read either way. It does not make a single statement, but can be understood to make either of two. To identify a single statement we need an unambiguous sentence.

English sentences may be too vague to make a statement. An English sentence may be too vague to be definitely true or false, and for this reason may fail to make a statement. Consider a word like "bald". A man with no hair on his head is certainly bald, while a man with a full head of hair is not. But there is no precise line between bald heads and heads that are not bald; some people may have lost just enough hair to put them in the indeterminate area between bald and not bald. If a sentence is too vague to be true or false, it does not make a statement.

There are other ways sentences can fail to make statements. Any sentence that fails to be true or false fails to make a statement. Consider the following sentence, for instance:
The sentence in the box on page 5 is false.

If this sentence makes a statement, the statement is true or false. But it cannot be true, for if it is, since the sentence says that it is false, it must be false, and not true. But it can't be false either, for in that case, the sentence, which says that it is false, would be true. Consequently, this sentence can't make a statement. A statement can't say of itself that it is false, though a sentence can;

if a sentence does that, it doesn't make any statement at all. (Recall that by definition statements are true or false.)

In summary, statements are either true or false, though not all sentences of English make statements. A sentence of English may fail to make a statement, or may make different statements on different occasions, because of mood, indexicality, ambiguity, vagueness, or other factors. We shall exhibit statements in the course by presenting English sentences which are (or which are imagined to be) free of such problems.

3: Logical Form

We begin with an example. Forms are categories of statements that can be applied without knowing whether the statements are true or false. A statement that belongs to a given form is said to be an instance of that form. We cannot yet give a precise account of logical form, but we can begin to appreciate the idea through an example. Consider the following statements:

641 is a prime number.
It is not the case that 641 is a prime number.

These two sentences cannot both be true; indeed, exactly one of them must be true. Compare the following two statements:

January 23, 2031 is a Thursday.
It is not the case that January 23, 2031 is a Thursday.

These two statements also cannot both be true. In fact, again, exactly one of them must be true. There is a general pattern here:

(F1)
φ
It is not the case that φ

(The symbol φ is the Greek letter phi, pronounced like "fie".) No matter how we fill in the blank indicated by the Greek letter φ, if both occurrences of the blank are filled in the same way, and the result is a pair of statements, then exactly one of the statements will be true. The pattern exhibited above has two characteristics of a logical form:

(1) The pattern itself does not make any statements; indeed it doesn't say anything. But we can get a variety of statements by supplying what we shall call an interpretation for the Greek letter. By assigning to the Greek letter a sentence that makes a statement, we get a pair of sentences that makes statements. We shall call what we get when we interpret a form's Greek letters an instance of that form.

(2) By examining the forms we can find out something about the truth of the statements for all of the infinite number of interpretations we could give. In this case, for each interpretation we will get a pair of sentences, and we know that any such pair will have exactly one true sentence.

We have not yet defined the notion of a logical form, but these features will be important to understand. Another example may help:

(F2)
If φ then ψ.
φ
ψ
In this form, the Greek letters are interpreted by sentences that make statements. (The symbol ψ is the Greek letter psi, pronounced like "sigh".) The same sentence must be used to interpret a particular Greek letter at each of its occurrences, but the sentences that interpret the different Greek letters may be the same or different. Any instance has three statements; for instance:

If Juan is president then Tawana is vice-president.
Juan is president.
Tawana is vice-president.

If the sample is representative, then the disease is more widespread than we thought.
The sample is representative.
The disease is more widespread than we thought.

This form has the following property: whatever interpretation we supply for the Greek letters, if the first two statements of an instance of the form are true, the third one is also. That is, we never can get an instance where the first two statements are true and the third one is false.
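Nothing in this chapter requires any programming, but the claim just made can be spot-checked mechanically. The following Python sketch is only an illustration, not part of the text, and it makes one simplifying assumption: it reads "if ... then" truth-functionally, so that the first statement is false only when φ is true and ψ is false. It then tries every way of making the two Greek letters true or false and looks for an interpretation with true first and second statements but a false third statement.

```python
from itertools import product

def if_then(phi, psi):
    # Truth-functional reading of "if phi then psi": false only when
    # phi is true and psi is false.
    return (not phi) or psi

# Collect every interpretation of (F2) whose first two statements are true
# while the third statement is false.
counterexamples = [
    (phi, psi)
    for phi, psi in product([True, False], repeat=2)
    if if_then(phi, psi) and phi and not psi
]

print(counterexamples)  # prints [] -- no such interpretation exists
```

The empty list corresponds to the claim in the paragraph above: on this truth-functional reading there is no interpretation with true first and second statements and a false third one.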

4: Logical Inconsistency and Equivalence

Logical properties are defined in terms of forms. Again we begin with an example. Recall (F1) above:

(F1)
φ
It is not the case that φ

Whatever interpretation we may supply, we get two statements, and as we noted above, in every case one of the statements is true and one is false. Consequently, we can never find an interpretation that yields two true statements. When a form has this property, we say that it is an inconsistent form. Generally, the definition of an inconsistent form is this:

A form is inconsistent if and only if there is no interpretation on which all of the statements are true.

Note that when we speak of the statements not all being true, we are not speaking of the sentences we may substitute to make the instance; we are speaking of the statements that result when we have made the interpretation. For instance, in (F1) above we may substitute the sentence, "3024 is evenly divisible by 7," for the Greek letter φ and obtain this:

(I1)
3024 is evenly divisible by 7.
It is not the case that 3024 is evenly divisible by 7.

These two statements are not both true; you can know this even if you don't know whether 3024 is evenly divisible by 7. We can use the notion of an inconsistent form to define inconsistency and consistency for statements.

A set of statements is inconsistent if and only if it is an instance of an inconsistent form.

A set of statements is consistent if and only if it is not inconsistent.

Note that the definition of consistent statements implies that there is no inconsistent form of which they are instances.

Since (F1) is an inconsistent form, all interpretations yield sets of inconsistent statements. But notice that any set of inconsistent statements is an instance of many forms, some inconsistent, some not. For instance, the sentences (I1) above are not only an instance of (F1) but also of (F3):

(F3)
φ
ψ

which is not inconsistent; any pair of true sentences is an instance of this form. An instance of an inconsistent form is inconsistent, but an instance of a form that is not inconsistent (like (F3)) need not be consistent.

Speaking casually, we may say that if statements are inconsistent, you have a sort of logical guarantee that they are not all true. On the other hand, knowing that statements are consistent gives us no logical guarantee at all about their truth values; all consistency amounts to is an absence of the logical guarantee that inconsistency supplies. Here are two statements that form a consistent set of sentences:

New York City is in New York State.
Chicago is in Illinois.

We know that these statements are consistent, because they are both true, and hence cannot be an instance of any inconsistent form. (Why?) The following statements also form a consistent set,

New York City is in Illinois.
Chicago is in New York State.

but we are not in a position to show that they are. Since the statements are both false, we cannot offer the sort of reason we offered for the first pair's being consistent. To show them consistent we would have to show that they are not an instance of any inconsistent form, and we are not yet in a position to do that.

Equivalence is also defined by forms. Intuitively, when statements are equivalent we have a logical guarantee that they have the same truth value.

This notion is also defined in terms of forms.

A form is an equivalence form if and only if: (1) all its instances consist of exactly two statements, and (2) there is no interpretation on which the statements have different truth values.

Consider this example:

(F4)
All φs are ψs.
Nothing that is a φ fails to be a ψ also.

Here we don't replace the Greek letters by sentences, but rather by general terms, like "cow" or "mammal". For example, one instance of (F4) is:

All cows are mammals.
Nothing that is a cow fails to be a mammal also.

Another is:

All astronauts are males.
Nothing that is an astronaut fails to be a male also.

In the first of these examples, both statements are true; in the second, both are false, but in either case, both statements have the same truth value. In fact, (F4) is an equivalence form.
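As a side illustration only (the text itself argues this by thinking about what the two sentences say), a claim like this can also be spot-checked by brute force over a small collection of things. The Python sketch below is not from the book, and its function names and three-member domain are invented for the purpose; it tries every way of interpreting the two general terms over that domain and finds no interpretation on which the two statements of (F4) differ in truth value.

```python
from itertools import chain, combinations

domain = {1, 2, 3}

def subsets(collection):
    items = list(collection)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def all_phis_are_psis(phis, psis):
    # "All phis are psis": every member of phis is a member of psis.
    return all(x in psis for x in phis)

def nothing_that_is_phi_fails_psi(phis, psis):
    # "Nothing that is a phi fails to be a psi also."
    return not any(x not in psis for x in phis)

# Interpret the two general terms in every possible way over the domain.
for phis in map(set, subsets(domain)):
    for psis in map(set, subsets(domain)):
        assert all_phis_are_psis(phis, psis) == nothing_that_is_phi_fails_psi(phis, psis)

print("No interpretation over this small domain gives the two statements different truth values.")
```

Of course, checking a three-member domain is not a proof that (F4) is an equivalence form; it is only the kind of finite spot-check that Chapter 8 ("Expansion in Finite Domains") later makes systematic.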

We define equivalence for statements in terms of equivalence for forms.

Two statements are equivalent if and only if the pair of them is an instance of an equivalence form.

Two statements are non-equivalent if and only if they are not equivalent.

For instance, in virtue of the fact that (F4) is an equivalence form, the following statements are equivalent:

All logic students work hard.
Nothing that is a logic student fails to work hard.

We don't need to know whether these sentences are true or false to know that they must have the same truth value.

Exercises 1-4: Declare the following true or false; if false, explain briefly why. Note that (F1), (F2), etc. refer to the forms discussed in the text.

1. (F1) is inconsistent.
2. (F2) is inconsistent.
3. (F3) is inconsistent.
4. (F4) is inconsistent.
5. If a pair of sentences is an instance of (F1) the sentences are inconsistent.
6. If a pair of sentences is an instance of (F3) the sentences are consistent.
7. If a set of sentences is inconsistent, all the sentences are false.
8. If two sentences are equivalent, they form an inconsistent set.
9. The following sentences are equivalent:
   64 is a perfect cube (i.e., there is an integer, n, such that n³ = 64).
   64 is divisible by 3.
10. The following three sentences are inconsistent:
   5 + 3 = 8.
   Some elephants live in Africa.
   The earth has one natural satellite.

5: Arguments, Validity, and Soundness

In logic, "argument" has a special meaning. In ordinary English, an argument may be (among other things) either a quarrel or a stretch of reasoning, designed to convince or persuade. Its meaning in modern logic is derived from the second meaning mentioned, but it is more abstract:


An argument is a non-empty set of statements, exactly one of which is designated the conclusion. The other statements, if any, are called premises.

Notice that according to this definition an argument must have exactly one conclusion, but premises are optional. In presenting arguments, we must settle on a standard way of designating the conclusion. We shall use the "therefore" sign, made of three dots (∴). We adopt a policy of always presenting the premises (if any) first, and then the conclusion, preceded by ∴, as in the following example:

(A1)
If Alonzo was at the party, Bertha was at the party.
Alonzo was at the party.
∴ Bertha was at the party.

Here, "Bertha was at the party" is the conclusion, and the other two statements are premises. The most important logical property arguments may have is validity. As you would expect, validity is defined in terms of forms:

An argument form is valid if and only if it has no interpretation on which all the premises (if any) are true and the conclusion is false.

An example of a valid argument form can be obtained by modifying form (F2) mentioned earlier so as to designate the last statement as a conclusion:

If φ then ψ.
φ
∴ ψ
The argument above, (A1), is an instance of this form. As you would expect, an instance of a valid argument form is a valid argument, as indicated by this definition:

A valid argument is an argument that is an instance of a valid argument form.


An invalid argument is an argument that is not valid.

If we have a valid argument, we have a logical guarantee that if the premises are true, the conclusion is also. This guarantee comes from the fact that the argument is an instance of a valid logical form, and that form, being valid, has no instances with true premises and false conclusions. If the premises of a valid argument are not true, we have no logical guarantee at all about the conclusion; it may be either true or false. If an argument is invalid, that fact tells us nothing at all about the truth values of its premises and conclusion: any of them could be either true or false.

Above we pointed out that an argument need not have any premises. What is a valid argument with no premises like? It must be an instance of a valid argument form, and from the definition, if there are no premises, then the form must have no instances where the conclusion is false. Here is an example of a valid argument form with no premises:

∴ Either φ or it is not the case that φ.


This is a valid argument form, because no matter what statement-making sentence we use to replace φ, the conclusion of the argument is true.

Naturally, we have a special interest in valid arguments with true premises. The validity of an argument does not guarantee the truth of its conclusion, as we have noted; for this the argument must also have true premises. There is a special term for arguments that are valid and have all their premises (if any) true: they are said to be sound.

An argument is sound if and only if it is valid and has true premises.

An argument is unsound if and only if it is not sound.

A sound argument must have a true conclusion (why?) even though the definition does not say that it does. As we will learn later, it is easy to construct sound arguments for any true statement. When we use arguments to learn things or to show to others that statements are true, we must use arguments that satisfy criteria that go beyond mere soundness.

But as these other criteria are part of epistemology, not of logic, we will not discuss them further.

Exercises 1-5: Declare the following true or false; if false, explain why.

1. Any argument with true premises and a true conclusion is valid.
2. Any argument with true premises and a false conclusion is invalid.
3. A valid argument may have false premises.
4. A sound argument may have false premises.
5. A valid argument may have a false conclusion.
6. A sound argument may have a false conclusion.

6. The Rest of this Book

Modern logic uses artificial rather than natural languages. You will have noticed that we have still not defined precisely what a form is. We have not done so because this is difficult to do for natural languages like English, Chinese, Hindi, or Tagalog. Instead, modern logic uses artificial languages, a technique developed by Gottlob Frege (1848-1925), a German mathematician. These artificial languages are free of most of the problems connecting English sentences with statements that we discussed in Section 2 of this chapter. The artificial languages we will study provide the forms of statements; we give interpretations to get instances. The artificial languages can be specified rigorously, and we will learn how to use them and how to translate between English and the various artificial languages we discuss. We shall assume that the logical properties of the artificial language sentences that translate English sentences are approximately the same as the logical properties of the statements made by the English sentences they translate. Although this is not always correct (and we shall point out some problems with it), we shall assume that this procedure does a good enough job for us to understand the logic of natural language sentences.


Definitions for Chapter 1:

A form is inconsistent if and only if there is no interpretation on which all its statements are true.

A set of statements is inconsistent if and only if it is an instance of some inconsistent statement form.

A set of statements is consistent if and only if it is not inconsistent.

A form is an equivalence form if and only if: (1) all its instances consist of exactly two statements, and (2) there is no interpretation on which the statements have different truth values.

Two statements are equivalent if and only if the pair of them is an instance of an equivalence form.

Two statements are non-equivalent if and only if they are not equivalent.

An argument is a non-empty set of statements, exactly one of which is designated the conclusion. The other statements, if any, are called premises.

An argument form is valid if and only if there is no interpretation on which all the premises (if any) are true and the conclusion is false.

A valid argument is an argument that is an instance of a valid argument form.

An invalid argument is an argument that is not an instance of any valid argument form.

An argument is sound if and only if it is valid and has true premises.

An argument is unsound if and only if it is not sound.

Appendix I: Logical Possibility

In this chapter we have defined validity using the notion of a form. You may have heard the following definition of validity:


An argument is valid if and only if it is impossible that the premises should be true while the conclusion is false.

This definition is not incorrect, but it is not very helpful, because the notion of possibility is not sufficiently clear. For instance, consider the following argument:

Alpha Centauri is more than 4 light years away from earth.
Therefore, no signal can reach Alpha Centauri from earth in less than four years.

Physics tells us that it is impossible for any signal to travel faster than light. Hence physics tells us that it is impossible for the premise of the argument above to be true while the conclusion is false. Nevertheless, the argument is not valid. Some philosophers would say that this is a mere physical impossibility, and physical impossibility is not the sort of impossibility we are talking about in the definition of validity. Apparently something is physically impossible if it is inconsistent with the laws of nature. However, for the definition of validity this is not the correct notion of possibility.

Physical possibility is not the only sort of possibility that differs from logical possibility. Consider the following argument:

John G. Bennett was a human being on July 1, 2000.
Therefore, John G. Bennett was not a tomato on July 2, 2000.

Many philosophers argue that it is impossible for a human being to turn into a tomato. This is supposed to be not merely physically impossible, for it is supposed that even if the laws of nature were different than they are, it would still be impossible for a person to become a tomato. Something that is a human being has a different essence from anything that is a tomato, it may be claimed, and hence nothing that is a human being can be identical with anything that is a tomato. According to these philosophers, it is impossible for the premise of the above argument to be true if the conclusion is false. Still, the argument above is not a valid argument.

We shall not stop to determine whether the philosophers in question are correct, or even whether what they say makes sense. We merely observe that the sort of possibility involved (metaphysical possibility, it is called) is not relevant to defining validity.

What sort of possibility might be relevant to validity, then? Logical possibility is the answer, but what is logical possibility? Logical possibility is just consistency as we have defined it in this chapter. To define validity in terms of consistency, we must have the notion of the negation of a statement. If we take a sentence that makes a statement and put "it is not the case that" immediately in front of it, we get a sentence that makes a statement that is the negation of the original statement. An argument is valid if and only if the premises together with the negation of the conclusion are logically impossible, that is to say, if and only if the premises and the negation of the conclusion form an inconsistent set. This definition is equivalent to the one given above in the chapter.

The upshot of this discussion is that while it is true in a sense that it is impossible for the premises of a valid argument to be true while the conclusion is false, the sort of possibility involved (logical possibility) must be explained using the logical notions we have explained in this chapter. We cannot clearly explain the logical notions using the concept of possibility; rather the relevant type of possibility must be explained using the logical concepts.
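Stated compactly, and only as a restatement of the paragraph above (the symbol ¬C is used here as shorthand for the negation of the conclusion C, that is, the result of prefixing "it is not the case that"; the book itself introduces no such symbol in this chapter):

```latex
% Validity restated in terms of inconsistency (Appendix I).
\text{The argument with premises } P_1, \ldots, P_n \text{ and conclusion } C \text{ is valid}
\quad\Longleftrightarrow\quad
\{\, P_1, \ldots, P_n, \neg C \,\} \text{ is an inconsistent set.}
```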

Appendix II: Use and Mention

When I was young, I was acquainted with the following riddle:

Railroad crossing: Watch out for the cars!
Can you spell it without any Rs?

Those of us who had heard the riddle before would answer, "Yes: I-T spells it." This riddle turns on what is called a use-mention ambiguity. When a letter, word, phrase, sentence, or other bit of language is mentioned, it is not being used with its usual meaning; rather it is being exhibited so it can be talked about. For instance, in the riddle above, the word "it" seems to be used in the ordinary way to talk about the previous two lines,

but the answer assumes it is being mentioned, in other words that the riddle is asking whether the word "it" can be spelled without using the letter "R". When a bit of text is used, it is contributing its ordinary meaning to the sentence and not (ordinarily) referring to itself. Since we are trying to be precise in talking about languages, we don't want to be tripped up by the sort of use-mention ambiguity that riddle depends on, so we need to settle on a standard way of distinguishing use and mention.

In American English, the standard way of marking that a bit of text is being mentioned rather than used is to surround it with quotation marks. For example, in the following two sentences, we use quotation marks to indicate that the word "toast" is being mentioned in the first sentence; the absence of such marks indicates that the word is being used in the second sentence.

"Toast" has five letters.
Whole wheat toast with butter and honey is delicious.

Another way of indicating that a bit of text is being mentioned rather than used is to display it indented and separated from ordinary text with blank space, thus:

This is sample text.

When you see text displayed like this, you know that it is not being asserted by the author, but is being mentioned (i.e., talked about).

As with everything else in English, there are complications to the use-mention distinction. One complication is that sometimes it is possible to both use and mention a word at the same time. Consider this sentence:

Barbarossa was so-called because of his red hair.

In this sentence, the word "Barbarossa" is being used to refer to Frederick I, Holy Roman Emperor from 1152 to 1190. But the word is also being mentioned at the same time. You can see this by considering that the sentence above means roughly the same as the following:

Barbarossa was called "Barbarossa" because of his red hair.

Where a word is being both used and mentioned, it is not put in quotation marks.

Exercises 1-A2: Where appropriate, punctuate the following to indicate which bits of text are being mentioned. In case more than one answer is possible, choose the most plausible one, that is, the one most likely to make the sentence true.

1. Send has four letters.
2. My mailbox has four letters.
3. His kiss meant nothing to her.
4. Kiss has more than one meaning.
5. The sign said, Keep Off the Grass.
6. The sign says that no refunds will be given.
7. Schnee is the German word for snow.
8. The German word for snow is monosyllabic.
9. Frederick Barbarossa shares the name Barbarossa with a Barbary pirate of the sixteenth century, Khayr al-Din (d. 1546).
10. Nova Scotia was named that by British settlers.
11. Phtholognyrrh is pronounced the same as turner; the phth as in phthisis, the olo as in colonel, the gn as in gnu, the yrrh as in myrrh.

[Here all punctuation except the final period has been removed from the last sentence, not just quotation marks.]

12. Smith and Jones translated the same text from Sanskrit to English. Smith where Jones had had had had had had had had had had the instructors approval.



Chapter 2: L1, Syntax and Translation

In Chapter 1, we saw how sentences of natural languages may not unambiguously make a single statement. Modern logic deals with this by inventing artificial languages in which the sentences can be guaranteed by the rules of construction to make unambiguous statements. This chapter introduces the simplest symbolic language, which we call L1, a language adequate for truth-functional logic. We begin with the vocabulary and the rules by which basic symbols are combined to make sentences (the syntax). In order for the language to make statements it must be interpreted, so we shall explain what an interpretation is. The meanings of the logical terms in the language will then be explained, and finally we will learn how to translate between English and L1.

1: Vocabulary and syntax

The symbols of L1 include sentence letters, connectives, and punctuation. We are going to learn a new, symbolic language. The first thing to do is to become acquainted with the symbols that appear in the language (its basic vocabulary). There are three types of symbols.

First there are sentence letters. These are upper case Roman letters, with or without Arabic numeral subscripts. Here are some of them, in what we shall declare to be alphabetical order:

A, B, C, D, ..., Y, Z, A1, B1, ...

Most of the time we will not use the subscripted sentence letters, but it is handy to have an unlimited stock of sentence letters around, since you never know how many you may need.

Next we have five sentence connectives. The language has one unary connective and four binary sentence connectives.

The unary connective is the tilde:

~

and the binary connectives are

the ampersand: &
the wedge: ∨
the arrow: →
the double arrow: ↔

There are two official punctuation marks:

the left parenthesis: (
the right parenthesis: )

That's all the vocabulary of L1.

The vocabulary of a language (the basic symbols) must be combined to produce sentences. The rules that specify how the symbols may be combined are the language's syntax. One of the advantages of the artificial languages used by logicians is that their syntax can be exactly specified by a set of fairly simple rules. Our language, L1, has just three rules, though one of them has five parts. Here is the first syntax rule of L1:

1. A sentence letter standing alone is a sentence.

Rule 1 guarantees that all of the following are sentences:

A
Q
M27
X234395

But it does not provide for the following to be sentences (why not?):

Dx
g

Rule 2 is more complex than rule 1:

2. If φ and ψ are sentences, then so are
   ~φ
   (φ & ψ)
   (φ ∨ ψ)
   (φ → ψ)
   and (φ ↔ ψ)

The first complexity is the use of the Greek letters (phi, φ, pronounced like "fie", and psi, ψ, pronounced like "sigh"). These Greek letters are not part of L1, as we noted above. Instead they are part of the language we use to talk about L1 (ordinary English, extended by some special terms that we will discuss when they are needed).⁴ Any of these Greek letters stands for any sentence of L1. So, for example, Rule 2 implies that since A and B32 are sentences of L1 (by Rule 1), the following are also sentences:

~A
(B32 → A)
(A & B32)

and so forth. Rule 2 can be used over and over again on the sentences it has provided for. For instance, since A and B are sentences, so are ~A and (B → A), by Rule 2. But now we can use Rule 2 again and see that the following is also a sentence:

(~A ∨ (B → A))

and similarly, the following will also be a sentence:

((A & B) → (~A ∨ (B → A)))

There is one more rule of the syntax of L1:

3. That's all; that is, nothing is a sentence unless these rules provide that it is.

This is a very important rule! Without it, although we could say that some sequences of symbols definitely are sentences, we could not know that any sequence failed to be a sentence. Because of Rule 3, the following are not sentences:

g
W&R&T
MB

The syntax rules of L1, though simple, can be tricky to use. One difficulty arises because of all the parentheses the complex sentences contain.

4. Technically, the language used to talk about another language is called its metalanguage. The Greek letters we are using are called metalinguistic variables.

For instance, can you easily tell which of the following is a sentence of L1?

((A & B) → (A w (B → A))
((A & B) → (A w (B → A)))
((A & B)) → (A w (B → A))

Actually, only the second one is a sentence, according to the rules we have given. The other two have errors in the placement of parentheses. There is a simple checking procedure that can help you find some errors with parentheses: Give a left parenthesis a value of +1 and a right parenthesis a value of -1. Start with a value of 0 and proceed through any putative sentence from left to right, adding the values of parentheses as you go. If you reach the end of the sequence of symbols with a value other than 0, or if you reach the value 0 before reaching the end of the sequence of symbols, the symbols do not make a sentence of L1. Using this checking procedure on the three putative sentences above we get:

((A & B) → (A w (B → A))      counts: 0 1 2 1 2 3 2 1      Not a sentence! (Not 0 at end.)
((A & B) → (A w (B → A)))     counts: 0 1 2 1 2 3 2 1 0    No problem found.
((A & B)) → (A w (B → A))     counts: 0 1 2 1 0            Not a sentence! (0 reached before end.)

This checking procedure cannot find all possible problems with parentheses; for instance, it won't find any problem with this sequence of symbols, even though the parentheses are misplaced:


((A & B) → (A) w (B → A))     counts: 0 1 2 1 2 1 2 1 0    No problem found.

However, the checking procedure can be helpful in sorting out parentheses. It often finds syntax errors that one might miss, especially cases where one parenthesis is missing.

We can make things still easier by adopting a couple of informal syntax conventions. We agree that we may drop the outermost parentheses on any sentence, and that any matching pair of parentheses may be replaced by brackets ([ and ]). We are not modifying our official syntax rules: they remain rules 1-3 above. But we are allowing ourselves some unofficial shortcuts. Here are some samples of the official and unofficial ways of writing sentences:

Official Sentence                                    An Unofficial Version
(F → M)                                              F → M
((A & B) → (A w (B → A)))                            (A & B) → [A w (B → A)]
((P → (Q → R)) → ((P → Q) → (P → R)))                [P → (Q → R)] → [(P → Q) → (P → R)]

Incorporating these conventions into our syntax would make our syntax rules much more complicated, so we will not do so; but the unofficial versions of the sentences above are obviously easier for humans to read and understand. Note that the checking procedure, given above, will only work on official sentences; unofficial sentences must have outermost parentheses replaced (if they were dropped) and brackets changed to parentheses before using the checking procedure.

Sentences of L1 are built up out of smaller sentences, which in turn are built of smaller sentences, and so forth. The smallest sentences, those that are not built up out of any smaller sentences, are called atomic sentences. In L1, the atomic sentences are all sentence letters. Non-atomic sentences can be analyzed into the parts that make them up. We do such an analysis with what we shall call a parse tree. The idea of a parse tree is much easier to understand than to explain. Let us construct a parse tree for our example sentence: ((A & B) → (~A w (B → A)))

First we break it into the two parts from which it is made, showing which connective was used to put it together:

((A & B) → (~A w (B → A)))
            |
   (A & B)     (~A w (B → A))

We complete the parse tree by repeating the process for each of the component sentences until we have reached all the atomic sentences that make up the sentence:

((A & B) → (~A w (B → A)))
            |
   (A & B)         (~A w (B → A))
      |                   |
    A   B           ~A        (B → A)
                     |            |
                     A          B   A
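Before looking more closely at what parse trees show, it is worth noting that the parenthesis-counting check given a few paragraphs back is purely mechanical and easy to write out as a short program. The sketch below is only an illustration in Python; it is not part of L1, and the function name is ours. It assumes the putative sentence is typed as an ordinary string of characters.

    # A minimal sketch of the left-to-right parenthesis count described above.
    def paren_check(symbols: str) -> str:
        count = 0
        for i, ch in enumerate(symbols):
            if ch == "(":
                count += 1
            elif ch == ")":
                count -= 1
                # Returning to 0 before the last symbol signals a problem.
                if count == 0 and i < len(symbols) - 1:
                    return "Not a sentence! (0 reached before end.)"
        if count != 0:
            return "Not a sentence! (Not 0 at end.)"
        return "No problem found."

    # The three putative sentences from the text:
    for s in ["((A & B) → (A w (B → A))",
              "((A & B) → (A w (B → A)))",
              "((A & B)) → (A w (B → A))"]:
        print(s, "--", paren_check(s))

Running the sketch reproduces the three verdicts given above; like the procedure itself, it cannot detect every misplacement of parentheses.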

The parse tree for a sentence shows how the sentence was built up from atomic sentences using Rule 2 of the syntax for L1.5 Each vertical line below a non-atomic formula points to the connective attached to the sentence or sentences on the line below to make the sentence above it. We can explain the main connective of a sentence of L1 using the parse tree. The main connective of a sentence of L1 is the connective involved in the last use of Rule 1 or 2 at the top of the parse tree. This is the connective that should be pointed to by the vertical line immediately below the top level. Obviously, an atomic sentence has no main connective. But any non-atomic sentence does, and the main connective is important determinant of the logical properties of a sentence. We will be referring to it often in what follows. Note that if an official sentence begins with a tilde,

It can be proved mathematically that there is only one parse tree for any sentence formed in accord with Rules 1-3, but the proof is beyond the scope of this course. 25

Introductory Logic that is the main connective. If it does not, the checking procedure that we used before to find incorrect uses of parentheses can be used to find the main connective of a sentence. The main connective is the first binary connective one encounters when the count is at 1, provided the sequence of symbols is a sentence. If the sequence of symbols is not a sentence, there is no main connective, so this procedure may give the wrong answer. In cases where it is not obvious whether a sequence of symbols is a sentence or not, here is a procedure that will give a definitive answer. First replace all square brackets by parentheses, then check to see that every symbol is a part of the vocabulary of L1. Next check to see whether the number of left parentheses is the same as the number of occurrences of binary connectives. If the number is the same it is an official sentence, if it is a sentence at all. If there is one fewer left parenthesis than occurrences of binary connectives, it is an unofficial sentence, if it is a sentence at all. Replace the outermost parentheses to get something that might be an official sentence. If the difference between the number of occurrences of binary connectives and left parentheses is not 0 or 1, it is not any kind of sentence. If you have not yet determined whether or not the sequence is a sentence, make a parse tree, using the procedure for checking parentheses to identify the main connective of each sequence of symbols you obtain. If you can make a parse tree correctly from the sequence of symbols, it is a sentence. Exercises 2-1: Which of the following is an official sentence? Which an unofficial sentence? For those that are official or unofficial sentences, indicate the main connective and produce a complete parse tree. For those that are not, explain why not. 1 2 3 4 5 6 7 26 A (B w C) B ~(C w D) (N w M) & (P (Q R) (X w Y) w (C & D) (B w C) ~(A w B) ~(A) ~~~(A ~~Q) (p w q)

L1: Syntax and Translation 8 9 10 11 12 13 14 (N & R) ((~P R) & S) w T -(S w T) [P (Q & R)] [(P Q) & (P R)] {S T} [Q & (R T)] (([P Q] w R)) & (S T) ~(S & T) [(P & Q) w ((S T) w M)] 2: Interpretations Sentence letters are used to make statements. As you may have guessed, sentence letters are used to make statements. To indicate which statements sentence letters make is to provide an interpretation of L1. In providing interpretations for L1 we could adopt either of two strategies. One would be to assign all the sentences of the language statements once and for all)this would be to provide one complete interpretation. However, as you can imagine, since there are so many sentences of L1, this would be very inconvenient. So we shall adopt the other alternative of providing partial interpretations each time we wish to use L1. Since on any occasion, we shall want only a few of the sentence letters, we can explain the statements made by each of the sentence letters conveniently without worrying about all the sentence letters we are not using on that occasion. Here is a sample partial interpretation: A: G: L: P: Q: Alonzo is a logic student. Gertrude is a logic student. Logic is a hard subject. Plums are delicious. Quadrupeds have legs.

We present an interpretation by presenting a list, each item of which consists of a sentence letter, followed by a colon, followed by a sentence of English which makes a statement. Here is another partial interpretation: A: B: G: L: Aardvarks eat ants. Brenda studies logic. Geometry is studied in high school. Lauren excels at math. 27

Introductory Logic We choose an interpretation on any occasion that enables us to say what we wish to say using L1. 3: Truth-functional Connectives The connectives in L1 are truth-functional. Of course, if we only had sentence letters in L1, it would not be a very interesting language. L1 gets its interest from the connectives and the way that complex sentences are built up from simple ones using them. All the connectives in L1 are truth functional connectives. A method of compounding sentences is truth functional if and only if the truth of a sentence compounded by that method depends only on the truth or falsity of the component sentences of which it is compounded. Because all of the connectives in L1 are truth functional, we can illustrate this idea by discussing each of the connectives in turn. The tilde creates a negation of a sentence. The meaning of the tilde can be given by this rule: A sentence of the form ~N is true if N is false; otherwise it is false. There is nothing more to the meaning of the tilde than this. A connective that has this property is called negation, and sentences of the forms N and ~N are said to be negations of each other. The tilde is somewhat like the English expression, it is not the case that, and putting a tilde in front of a symbolic sentence is much like putting it is not the case that in from of an English sentence. Thus, suppose we have this interpretation:6 A: Aardvarks eat ants. Since we will be talking only about partial interpretation in this book, we will drop the word partial and speak only of interpretations. 28
6

L1: Syntax and Translation Then the sentence ~A makes the negation of the statement that aardvarks eat ants. We could translate ~A as It is not the case that aardvarks eat ants. Of course, this last sentence is not a very good sentence of English, but it does convey exactly what ~A means, on the interpretation above. The ampersand creates the conjunction of two sentences. The meaning of ampersand is given by this rule: A sentence of the form (N & R) is true if both N and R are true, and false otherwise. A sentence connective that works this way is called conjunction,7 and a sentence of the form (N & R) is said to be the conjunction of N and R. N and R are said to be conjuncts. We can present the rule about the meaning of ampersand more perspicuously in a truth table:

N
T T F F

R (N & R)
T F T F T F F F

In this table, the first two columns contain all the possible combinations of truth values that N and R might have, and the third column presents the corresponding values of their conjunction. This use of the term conjunction must be distinguished from the grammatical use. Grammatically, a conjunction is a word that joins or connects words, phrases, or clauses. In logic, a conjunction is a sentence connective with the meaning of ampersand, or a sentence having such a connective as its main connective. Whenever we use the term conjunction in this book, we shall use it in the logical sense, unless we say, grammatical conjunction, in which case, we mean the grammatical sense. 29
7

Introductory Logic The & works something like the English word and in some of its uses. Using the following interpretation: A: Aardvarks eat ants. C: Cows eat grass. we can translate A & C into English as Aardvarks eat ants and cows eat grass. Just like the sentence of L1, the English sentence is true when both of its component sentences are and false otherwise. The English and does not always mean exactly the same as &, however; we will discuss complications later. The wedge creates the disjunction of two sentences. The meaning of wedge is given by this rule: A sentence of the form (N w R) is true if at least one of N and R is true, and false otherwise. A sentence connective that works this way is called disjunction, and a sentence of the form (N w R) is said to be the disjunction of N and R. N and R are said to be disjuncts. As with ampersand, we can present the rule about the meaning of wedge more perspicuously in a truth table:

N
T T F F

R (N w R)
T F T F T T T F

The w works something like the English word or in some of its uses. Using the interpretation above, we can translate A w C into English as Aardvarks eat ants or cows eat grass. Just like the sentence of L1, the English sentence is true when at least one of its component sentences is and false otherwise. In saying that the wedge corresponds to the English or we are assuming that the English word is being used in its inclusive sense, so that the sentence, Aardvarks eat ants or cows eat grass, is true. The 30

L1: Syntax and Translation English or also has an exclusive sense; we will discuss that sense later. The arrow expresses the truth-functional conditional. The meaning of arrow is given by this rule: A sentence of the form (N R) is true if either N is false or R is true (or both), It is false only if N is true and R is false. A sentence connective that works this way is called a material conditional, or just a conditional, and a sentence of the form (N R) is said to be a conditional. N is said to be the antecedent of the conditional and R is said to be the consequent of it. As before, we can present the rule about the meaning of arrow more perspicuously in a truth table:

N
T T F F

R (N R)
T F T F T F T T

The is something like the English if ... then. Using the following interpretation: A: Aardvarks eat ants. C: Cows eat grass. we can translate A C into English as If aardvarks eat ants then cows eat grass. This is only an approximation; we will discuss later some of the apparent differences between the arrow and if in English. A more accurate translation into English would be, Either aardvarks dont eat ants or cows eat grass; nevertheless, we shall generally translate with the English if ... then. Note that the arrow always goes between its antecedent and consequent, whereas in English, if precedes its antecedent, and between antecedent and consequent a then may appear. 31

Introductory Logic The double arrow creates biconditionals. The meaning of double arrow is given by this rule: A sentence of the form (N R) is true if both N and R have the same truth value; it is false otherwise. A sentence connective that works this way is called the biconditional, and a sentence of the form (N R) is said to be a biconditional. As above, we can present the rule about the meaning of double arrow more perspicuously in a truth table:

N
T T F F

R
T F T F

(N R) T F F T

The works something like the English if and only if in some of its uses. Using the interpretation above, we can translate A C into English as Aardvarks eat ants if and only if cows eat grass. As with and if this is only very approximately accurate; a more accurate translation would be, either its both true that aardvarks eat ants and that cows eat grass, or its neither true that aardvarks eat ants nor true that cows eat grass. You can see why we would prefer to stick with the if and only if translation. The double arrow turns out to be equivalent to two conditionals; that is, N R turns out to be equivalent to the conjunction of N R and R N. Exercises 2-3 Using the interpretation given, translate each of the following into our symbolic language.

32

L1: Syntax and Translation A: B: C: M: 1 2 3 4 5 6 7 8 Alonzo studies logic Bertha studies logic. Cows are mammals. Cows give milk.

It is not the case that Alonzo studies logic. Alonzo studies logic and Bertha studies logic. Either Alonzo studies logic or Bertha studies logic. If cows are mammals, then cows give milk. Cows give milk if and only if cows are mammals. If Bertha studies logic, then it is not the case that Alonzo studies logic. If it is not the case that cows are mammals then it is not the case that cows give milk. Either it is not the case that Alonzo studies logic or it is not the case that Bertha studies logic.

Using the interpretation above, translate each of the following into English. 9 10 11 12 13 14 BA BA ~A & ~B CwB ~C B ~~B

4: Translation The basic ideas in translation are easy. In translating, we try to understand what a sentence in one language (English or L1) is saying, and then to say that same thing, as nearly as possible, in another language (L1 or English). Naturally, any translation to or from L1 requires an interpretation; without one, sentences of L1 don't say anything at all. When we select an interpretation for translation from English into L1, we always seek an interpretation that allows us to represent as much logical structure as possible in L1. For instance, compare these two interpretations; first: A: It is not the case that Wakili played in the NBA. 33

Introductory Logic B: Rajib and Francisco study logic. second: A: Wakili played in the NBA. B: Rajib studies logic. C: Francisco studies logic. The first interpretation is unsuitable, because it does not help to make clear all the logical structure of the sentences. Translating between English and L1 cannot be a mechanical process: while L1 is a simple language, English is not. Fortunately, you already know English, and while you may not realize how very complex the English language is, you have mastered it well enough to translate between it and L1. Let's see one example of the complexity of English. Consider this sentence: (S1) Alonzo and Bertha are logic students.

Since you know that this sentence means the same as Alonzo is a logic student and Bertha is a logic student. you can see that S1 can be treated as a conjunction and translated using &. However, if we consider this sentence: (S2) Alonzo and Bertha are lovers.

we cannot suppose that this means the same as Alonzo is a lover and Bertha is a lover. because the latter sentence does not say that Alonzo and Bertha love each other, as the first sentence does. We cannot represent S2 as a conjunction of two sentences, though we can represent S1 that way. Consequently, given the following interpretation, A: Alonzo is a logic student. B: Bertha is a logic student. you can easily translate S1 into L1 thus: 34

L1: Syntax and Translation A&B but no interpretation will allow us to say that conjunction is a translation of S2.8 The moral of this is that we should not look for mechanical ways of translating English into L1. We can tell without concerning ourselves about the meaning of the sentence that A & B is a conjunction, but the presence of and in an English sentence does not always indicate that the sentence is a conjunction. To tell whether an English sentence is a conjunction, we must understand it. If it is made true simply by two other sentences being true, we may understand it as a conjunction; otherwise not. Negation can be complicated in English. So far we have been translating ~ as it is not the case that, but this can be awkward. You can recognize that It is not the case that Alonzo studies logic. means the same as Alonzo does not study logic. and hence that the latter may translate ~A as well as the former. However, not in English is somewhat trickier than ~ in L1. You can't always form the negation of an English sentence by inserting not into it. The following sentences do not mean the same: It is not the case that all cows are purple. All cows are not purple. The second of the sentences is ambiguous, and might be interpreted to mean the same as Every cow is some color other than purple.

Later in this book, when we learn L2, a more complicated language, we will be able to represent the logical structure of statements like S2. 35

Introductory Logic which certainly does not mean the same as the first sentence. Or again, the following do not mean the same: It is not the case that you must wear a tie to the party. You must not wear a tie to the party. These complexities do not arise with L1: in that language the tilde always appears in front of the sentence, and always has the same meaning. L1 is simple, English is complicated; fortunately, you already know English. Conjunction may be expressed in English in a variety of ways. We have seen that a sentence with a compound subject may sometimes mean the same as the conjunction of two sentences; the same is true of a compound predicate. For instance, Alonzo studied logic and played football. means the same as Alonzo studied logic and Alonzo played football. and hence can be translated using &. The English and has some complexities connected with tense. Consider the following sentences: We saw Stromboli and it was erupting. We visited London and then Edinburgh

while we were seeing it. If that is so, then it is not a mere

The first sentence seems to say that Stromboli was erupting

conjunction of We saw Stromboli and Stromboli was erupting, because this conjunction could be true if the two events happened at different times. The second sentence says that our visit to Edinburgh occurred after our visit to London, and hence is not a mere conjunction of We visited London, and We visited Edinburgh. The latter conjunction says nothing about the order of visits. We have so far considered English sentences using the word "and" to express conjunction. Conjunction can also be expressed using other English words; here are some that can at least sometimes be used to express logical conjunction:

36

L1: Syntax and Translation although but9 even though furthermore both ... and however moreover though whereas

None of these words means exactly the same as and. For instance, but is appropriately used only when there is some contrast between what precedes and what follows it, since it means something like, on the contrary. However, both and and but can be used to join two sentences to create a third that is true if and only if both the component sentences are true. That is why they express conjunction and can be translated using &. The wedge translates or only in its inclusive sense. The English word or has both an inclusive and an exclusive sense. In its inclusive sense it means the same as wedge, that is, it is used to join two sentences to make a third which is true if either or both of the original sentences is true. In its exclusive sense, it joins two sentences to make a new sentence which is true if and only if exactly one of the two component sentences is true. The two senses of or differ only when both component sentences are true: in that case the inclusive or makes a true sentence, the exclusive or a false one. The exclusive or is rare in English, but it does occur. For instance, if I say, You may have ice cream or cake, I should not be understood to have said that you may have both. However, in this book, all uses of or will be inclusive unless clearly indicated as exclusive. We have no separate connective in L1 for the exclusive or, because none is needed. In the exclusive sense,

N or [exclusive] R
means the same as

N or [inclusive] R but not both


A glance in a good dictionary will reveal several quite different senses in which but may be used as a grammatical conjunction; only one of these expresses logical conjunction; the sense in which it means the same as on the contrary. 37
9

Introductory Logic and hence may be translated into L1 as (N w R) & ~(N & R) The arrow approximates the English if ... then. The translation of arrow as if .. then is only approximate, because the English if .. then is generally not truth functional. Nevertheless, the arrow is not a bad approximation of if ... then, so logicians use it. On occasion we will point out situations where the approximation is not entirely satisfactory. The arrow can approximate if .. then only in some of its uses. For instance, the counter-factual conditional is definitely not truth functional and cannot be approximated by any truth-functional connective. Counter-factual conditionals are those with false antecedents, such as, If Oswald had not shot Kennedy, someone else would have. If you are a conspiracy theorist, you might hold that this is true, while I hold that it is false. But we both may recognize that Oswald did shoot Kennedy, and no one else did. The truth of the counter-factual does not depend solely on the truth or falsity of its components. Contrast the sentence above with If Oswald did not shoot Kennedy, someone else did. Whatever you think about the Kennedy assassination, you will agree that this is true. It could only be false if Oswald did not shoot Kennedy and no one else did either. Hence it can be translated using the arrow. In English, a clause beginning with if may come anywhere in the sentence. All of the following mean the same: If the ancient accounts are correct, the king came to the battle unprepared. The king came to the battle unprepared, if the ancient accounts are correct. The king, if the ancient accounts are correct, came to the battle unprepared.

38

L1: Syntax and Translation In English, the word if marks the clause which is the antecedent of the conditional. In L1 there is no such marker; the antecedent must always come to the left of the arrow. Hence using the following interpretation, A: The ancient accounts are correct. K: The king came to the battle unprepared. all of the above sentences would be translated by the sentence, AK There are several other words in English that can express approximately the same idea as the English if and which can also generally be translated by the arrow. Here are some: provided that on condition that in case in the event that on the assumption that Each of these has a slightly different meaning from if, but they are close enough that we will translate all of them by arrow. Occasionally the English if is used to mean even if, in which case it is used to make a conjunction, rather than a conditional. For instance, consider: His excuse was creative, if hardly credible. which means almost the same as His excuse was creative, but his excuse was hardly credible. which you should be able to recognize as a conjunction and translate using &. Certain English words deserve special discussion. One such is neither ... nor. Neither in English means not either, and to say neither ... nor is to say it is not the case that either ... or ..., although this latter wording is not very good style. Thus

39

Introductory Logic Neither the Democratic candidate nor the Republican candidate is free of scandal. may be translated thus, using the indicated interpretation: D: The Democratic candidate is free of scandal. R: The Republican candidate is free of scandal. ~(D w R) As we shall show in the next chapter, this is equivalent to ~D & ~R which is an equally good translation. Another interesting word is unless; it means if not. Thus We will have the picnic on Sunday unless it rains. means the same as We will have the picnic if it does not rain on Sunday. and hence must be translated, using the indicated interpretation, thus: P: We will have the picnic on Sunday. R: It rains on Sunday. ~R P Note that unless always introduces an antecedent of a conditional, and hence the sentence representing the clause beginning with unless must always come to the left of the arrow, regardless of where it occurs in the English sentence. It is interesting to note that ~R P is logically equivalent to R w P (and to P w R), as we shall show in the next chapter. Hence it would not be incorrect to translate unless using w, although this may seem incorrect. In fact, it is not precisely correct, though it would be if if meant exactly what means. Only if is another expression that we can translate. Only if 40

L1: Syntax and Translation

does not mean the same as if. Instead, something of the form N only if R means not N if not R. Thus,
You will get a good grade only if you work hard. means the same as You will not get a good grade if you do not work hard. or If you do not work hard you will not get a good grade. This may be translated thus, using the indicated interpretation: H: You work hard. G: You will get a good grade. ~H ~G As we shall show in the next chapter, this is equivalent to GH hence NR can also translate N only if R. Finally, consider otherwise. This term, when it is used to make a logically compound sentence, means if that is not the case, but just precisely what condition it refers to depends on the context. Here are two examples: Give me the money; otherwise Ill shoot you. means approximately this: If you don't give me the money, Ill shoot you. But this sentence, If it is good weather Sunday, well go to the beach; otherwise, we'll go to a movie.

41

Introductory Logic means approximately this: If it is good weather Sunday, well go to the beach, and if it is not good weather, well go to a movie. In each case, otherwise stands for the antecedent of a conditional, but just what that conditional is depends on the context. There is a final complication about natural language. Language is a complex phenomenon, and in our logical discussions we try to ignore many features of it. For instance, when a person utters a sentence, the literal meaning of what he or she utters is not always the same as what the person is trying to communicate to his or her listeners. Consider this exchange: A: Is Alonzo intelligent? B: He can tie his own shoes without assistance. We would all recognize that B is here conveying the message that Alonzo is not very intelligent at all, though his sentence does not literally say this, and its literal truth is in fact perfectly compatible with Alonzos being very intelligent. We can think of sentences of natural language as having a relatively fixed literal (dictionary) or conventional meaning. But speakers dont always use sentences to convey that literal meaning. Because of laziness, carelessness, cleverness, wit, or delight in word play, or for any of a number of other reasons, a speaker may use a sentence to convey a variety of messages in addition to or even other than the literal meaning of the sentence. When we listen to speakers, we primarily are trying to get the message they want to convey; the literal meaning of what they say is often important only as it enables us to infer the message they have in mind. In making these inferences we rely on context, information we may have about the speaker or about people in general, and all sorts of general information that we have. On the other hand, when we translate sentences into L1, we are concerned only with what is literally said by the English sentences we are translating. Hence our L1 sentences do not capture any of the information that we infer from the English sentences, but only their literal meaning. Some assumptions about peoples conversational behavior are so common that we may easily think what we infer on the basis of 42

L1: Syntax and Translation these assumptions is part of the meaning of what they say. For instance, we usually assume that people are being helpful and cooperative when they talk to us, trying to give us the best information they can. If we fail to distinguish these assumptions from the literal meaning of the sentence, we can misunderstand the literal meaning of natural language connectives. For instance, suppose we consider the sentence, Alonzo or Gertrude came to the party last night. If I am trying to be helpful and informative when talking to you about who came to the party, I will ordinarily use this sentence only if I dont know which of the two came. If I knew that Gertrude came or that both came, I would not be being as helpful as I could if I only said that Alonzo or Gertrude came. But this fact that you infer is not part of the literal meaning of the sentence, and so we can translate the sentence, using a suitable interpretation, by an L1 sentence like A w G. One indicator that can help us distinguish the literal meaning of a sentence from what we infer from its use on many occasions is that the inferences can be canceled but the literal meaning cant be. We can say, for instance, Alonzo or Gertrude came to the party last night; in fact, Alonzo did. The second clause cancels the ordinary implication of the disjunction, and seems merely to give further information, not to contradict the original statement. Contrast that with this statement: Alonzo or Gertrude came the party last night; in fact neither did. Here the second clause does not cancel an implication of the first clause, it contradicts it. When an implication can be canceled in this way, it is not part of the literal meaning of the sentence. There are other examples involving connectives. We would not ordinarily say If Alonzo came to the party last night Gertrude did too if we knew that Gertrude came to the party. Hence, when we say this, our hearers reasonably take us to imply that we dont know whether or not Gertrude came. But this implication can be canceled, as in a sentence like this: If Alonzo came to the party then Gertrude did too; in fact, Gertrude came whether or not Alonzo came. Hence it is no part of the literal meaning of the sentence that the speaker does not know whether the consequent of the conditional is true. Exercises 2-4

43

Introductory Logic Translate the following sentences into L1 using the interpretation given. A: E: J: L: S: T: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 Alonzo disgraced himself. Ellen was drunk. Julius had a good time. Ellen danced lewdly. Sally came to the party. Tom came to the party.

Tom didn't come to the party. Tom and Sally both came to the party. Either Tom or Sally came to the party. If Sally came to the party, Julius did not have a good time. Tom came to the party if and only if Sally did. Ellen neither danced lewdly nor was drunk. Julius had a good time only if Alonzo did not disgrace himself. Ellen danced lewdly if Tom came to the party. Alonzo disgraced himself unless Ellen was drunk. Julius had a good time provided Ellen was not drunk. Ellen danced lewdly but was not drunk. If Ellen danced lewdly, then Julius had a good time; otherwise, he did not. Neither Tom nor Sally came to the party. Tom, but not Sally, came to the party. Although Ellen was not drunk, she danced lewdly. Unless Alonzo disgraced himself, Julius did not have a good time. Ellen danced lewdly if she was drunk. Alonzo disgraced himself if and only if Ellen was drunk. If Sally came to the party, Julius had a good time; otherwise Alonzo disgraced himself. Ellen danced lewdly; however, she was drunk. Sally came to the party if Alonzo disgraced himself. Ellen danced lewdly only if Tom came to the party.

Using the interpretation above, translate the following into graceful, idiomatic English: 23 44 EwL

L1: Syntax and Translation 24 25 26 27 28 TS ~J ~A ~(A w E) ~L ~A ~(T & S) 5: Complex Translations Complex translations are best done step by step. Complex translations from English to L1 are best done by proceeding step by step. First one finds the main connective, and the parts joined by it. Then these parts are further analyzed in the same way, and one proceeds down the parse tree of the L1 sentence. Perhaps an example will make this clear. Consider the following interpretation and sentence of English: B: The Braves will have a serious obstacle between them and the World Series. M: The Mets continue to lose. O: The Orioles will have a serious obstacle between them and the World Series. T: The Tigers continue to lose. If the Mets and Tigers continue to lose, then neither the Orioles nor the Braves will have any serious obstacles between them and the World Series. To translate the sentence into L1, we begin by identifying the main connective of the translation. In this case it is obviously the arrow, so we rewrite the sentence in this way as a sort of amalgam of L1 and English: The Mets and Tigers continue to lose Neither the Orioles nor the Braves will have any serious obstacle between them and the World Series. Now we just have to translate the two sentences on either side of the arrow and replace the English sentences with their translations. Let us begin with The Mets and Tigers continue to lose. 45

Introductory Logic Again we look for the main connective; it is obviously &. Hence this sentence is translated as (M & T). Note the parentheses! Using them at each stage (except possibly the first) will guarantee that they end up in the correct place. Inserting our translation into the partial translation above, we get: (M & T) Neither the Orioles nor the Braves will have any serious obstacle between them and the World Series. The remaining sentence is of the form, neither N nor N, and so may be translated ~(O w B). Replacing this in partial translation gives the completed translation into L1: (M & T) ~(O w B) Lets remind ourselves of the parse tree for this sentence: (M & T) ~(O w B) | M & T | M T O ~(O w B) | O w B | B

Our translation proceeded along the lines of the parse tree, beginning by identifying the main connective of the whole sentence and the parts joined by it. The steps of translation reconstruct the parse tree from the top down. When translating from L1 to English, it is usually better to proceed from the bottom up. Lets translate from this sentence of L1 back into English to see how this works. The object of translation into English is to produce a well-crafted English sentence that says what the sentence of L1 says. Typically, there will be more than one way to do this. In the case of the sentence above, we start by recognizing the interpretations of the sentence letters. Then we put together the sentences, working up from the bottom of the parse tree. Our first attempt to create a part of the translation might be this: The Mets continue to lose and the Tigers continue to lose. 46

L1: Syntax and Translation but we observe that this can be put much more succinctly as The Mets and Tigers continue to lose. We cant go further without working on the other part of the bottom of the tree, and here we might get this: Either the Orioles will have a serious obstacle between them and the World Series or the Braves will have a serious obstacle between them and the World Series. Once again, a more succinct version is obvious: Either the Orioles or the Braves will have a serious obstacle between them and the World Series. At the next step we might try, It is not the case that either the Orioles or the Braves have a serious obstacle between them and the World Series. However, this is awkward, and we observe that it can be simplified: Neither the Orioles nor the Braves will have any serious obstacle between them and the World Series. Finally, we put everything together. We can choose any of the equivalent expressions we have noticed for expressing conditionals; lets choose provided that: Neither the Orioles nor the Braves will have any serious obstacle between them and the World Series, provided that the Mets and Tigers continue to lose. This is our completed translation. You will notice that this English sentence is not the same as the sentence that we started with. It means very nearly the same, however. Since there are many English phrases that can translate any particular bit of L1, we cannot expect that we will get the same 47

Introductory Logic English sentence back when we translate into L1 and back. However, we should get sentences that say very nearly the same thing. L1 provides parentheses for grouping, but English doesnt. So in translating from L1 into English, it is sometimes necessary to take special measures to preserve the grouping. For instance, if we want to translate this sentence, using the given interpretation, A: Alonzo owns a computer G:Gertrude owns a computer. C: Gertrude is a C++ programmer (A G) & C we must produce an English sentence which has the same main connective as the L1 sentence. This would not do: If Alonzo owns a computer, then Gertrude owns a computer, and Gertrude is a C++ programmer. This last English sentence is ambiguous; it might have either conjunction or the conditional as its main connective. It is not a good translation of the L1 sentence, because the L1 sentence is not ambiguous. Students sometimes think that grouping can be provided by punctuation, and sometimes it can, but not always. English punctuation rules were not designed by logicians and require, for instance, both commas in the sentence above. English provides various resources for indicating grouping. One way is to combine identical subjects or predicates. For instance, this sentence, If Alonzo owns a computer then Gertrude owns a computer and is a C++ programmer. is not ambiguous, because we have used a single subject with conjoined predicates. It can only have this translation: A (G & C) What grammarians call correlative conjunctions provide another mechanism for grouping in English. Correlative conjunctions are 48

L1: Syntax and Translation grammatical conjunctions that are used in pairs like either ... or, both ... and, and if ... then. These serve to group together whatever comes between the two conjunctions, and to show that the connective corresponding to the coordinating conjunctions is higher in the parse tree than any connective involved in the material between the conjunctions. Consider this: If Gertrude is a C++ programmer and Alonzo owns a computer, then Gertrude owns a computer. This sentence is not ambiguous, because the If and then group together into the antecedent the two sentences joined by and and put them together in the antecedent of the conditional. So this sentence can only be translated this way: (C & A) G Sometimes to make correlative conjunctions fit into sentences naturally, we need to add some other words. For instance, we could translate our original sentence unambiguously in this way: Its both true that if Alonzo owns a computer then Gertrude does, and also that Gertrude is a C++ programmer. Correlative conjunctions cannot solve all grouping problems, though, because they can only group together items on the left side of the conjunction. Sometimes a sentence is best translated by changing the order of the components, when that produces a sentence equivalent to the original. For instance, our original sentence might best be translated this way: Gertrude is a C++ programmer, and if Alonzo owns a computer then Gertrude does too. because C & (A G) is logically equivalent to (A G) & C, as we shall learn in the next chapter. Not all ambiguities in English grouping are harmful. For instance, although this sentence, Either Alonzo owns a computer, or Gertrude owns a 49

Introductory Logic computer, or Gertrude is a C++ programmer.

has ambiguous grouping, this doesnt matter, because the two readings, translated by (A w G) w C and A w (G w C) respectively, are logically equivalent. Another example is this sentence: If Alonzo owns a computer, then Gertrude owns a computer, unless Gertrude is a C++ programmer. The unless clause might be part of the consequent, in which case the sentence is translated by A (~C G), or it might qualify the whole sentence, in which case the sentence would be translated by ~C (A G). But again, the ambiguity is harmless, because the two readings are logically equivalent; well learn how to demonstrate this in the next chapter.

Exercises 2-5: Translate the sentences into L1 using the following interpretation. A: E: G: M: P: R: S: 1 2 3 50 Alonzo will come to the picnic. Ethel will be distressed. Gertrude will come to the picnic. Alonzo and Gertrude will go to a movie together. There will be a picnic on Sunday. It will rain on Sunday. Gertrude is sick.

Alonzo will come to the picnic only if Ethel is not distressed; however, Gertrude will come to the picnic if and only if Alonzo does. If Alonzo comes to the picnic, Gertrude wont, and if Gertrude doesnt, Ethel will be distressed. If there is a picnic on Sunday and Alonzo comes then Ethel

L1: Syntax and Translation will be distressed, but if there is no picnic, Alonzo and Gertrude will go to a movie together, unless Gertrude is sick. If it doesnt rain on Sunday and neither Alonzo nor Gertrude comes to the picnic, then Ethel will be distressed unless Alonzo and Gertrude go to a movie together. Alonzo and Gertrude will go to a movie together unless it fails to rain on Sunday, in which case they will go to the picnic and Ethel will not be distressed. Unless it rains on Sunday, there will be a picnic and either Alonzo or Gertrude will come; but if it rains, Ethel will be distressed. If it rains on Sunday there will be no picnic and Ethel will be distressed, but otherwise there will be a picnic and both Alonzo and Gertrude will come. Alonzo will come to the picnic only if Gertrude does; however, if it rains on Sunday and there is no picnic, Ethel will be distressed if and only if Alonzo and Gertrude go to a movie together.

4 5 6 7 8

Translate the following into smooth, accurate, and idiomatic English, on the basis of the abbreviation scheme above. 9 10 11 ~R [P & (A G)] ~P w (A ~ G) (R & P) [E & (~A w ~G)]

Translate the sentences into L1 using the following interpretation. B: The Boss is angry. E: Ethel is taking pregnancy leave. H: Hubert has hepatitis. N: The network is down. P: The plotter has overheated. S: Silvia will be promoted. W: The Wilbursteen project will be two weeks late. 12 If the plotter has overheated and the network is down then either the Wilbursteen project will be two weeks late or Silvia will be promoted. 51

Introductory Logic 13 14 If Hubert has hepatitis then the Wilbursteen project will be two weeks late unless the Boss is not angry Silvia will be promoted only if the Wilbursteen project will not be two weeks late; unfortunately, Ethel is taking pregnancy leave, Hubert has hepatitis, and either the network is down or the plotter has overheated.

Translate the following into smooth, accurate, and idiomatic English, on the basis of the abbreviation scheme above. 15 16 [(B & E) & N] (W w S) (N P) & [(W B) w H]

52

Logical Properties in L1

Chapter 3: Truth Functional logic

In this chapter we will explain the logic of L1, which is the logic of the truth-functional sentence connectives, or truth-functional logic, as we shall call it. We shall begin by explaining how to calculate the truth-value of a sentence, given the truth values of its atomic components. Then we will see how to calculate all possible truth values for a sentence in a truth table. We shall explain how these calculations enable us to establish the logical properties of the sentences, and shall define the truth-functional logical relations in this way. Finally, we shall explain the relation of all this to the logic of English sentences.

1: Truth of Sentences of L1 The truth of a sentence of L1 depends only on the truth or falsity of its atomic components. Sentences of L1 are built up out of atomic components by truth-functional connectives, so the truth of a whole sentence is determined by the truth or falsity of its atomic components. Thus if S is true and T is false, S T is false and S w T is true. (If you are not sure why this is so, consult the truth tables for these connectives in Chapter 2.) Thus, to find out whether a sentence of L1 is true, we only need to know whether its atomic components are true or false. From the point of view of truth and falsity, we dont need to know the interpretation of the sentences, except to find out whether the sentences are true or false. Thus, the following interpretation, S: Snow is white T: Tellurium occurs in a dark red crystalline form. makes S T false, because it makes S true and T false. However, for many purposes, we dont need to concern ourselves with the interpretation at all, we only need to be concerned with the truth values it gives the sentences. To be precise about this, we need the notion of an assignment 53

Introductory logic

of truth values to sentence letters of L1. An assignment of truth

values to sentence letters gives each sentence letter either the value True or the value False. In presenting assignments of truth values, we shall only concern ourselves with the truth values of the sentence letters we have to deal with on any given occasion. So, for example, if we are concerned only with the sentence, P w (Q R), our assignment of truth values need only assign truth values to the letters P, Q, and R, like this: P: T Q: F R: F This assignment gives P the value True, Q the value False, and R the value False also. We use the syntax of the sentence as our guide when calculating its truth value. Using the syntax of the sentence as a guide, we may determine the truth of a sentence of L1 using an assignment of truth values to its atomic components. For instance, consider the sentence above, P w (Q R). Its parse tree looks like this: P w (Q R) P Q Q R R

We can begin by assigning truth values to the bottommost entries in the parse tree, the atomic sentences, in accord with the above assignment of truth values. P w (Q R) P = T Q R Q = F R = F

Next we calculate the values of each other entry in the parse tree referring to the truth tables for each connective (which we have 54

Logical Properties in L1 memorized!). First, since a conditional is true if its antecedent is false, P w (Q R) P = T Q R = T Q = F R = F

and then, since a disjunction is true if at least one disjunct is true: P w (Q R) = T P = T Q R = T Q = F R = F

We can obviously calculate the truth value of any sentence of L1 this way. Calculating the truth table using the parse tree takes up a good deal of space, so we shall adopt a more compact representation of the calculation, compressing it to two lines. We begin thus:

P Q R T F F P w (Q R)

To the left we write the truth value assignment, and to the right we calculate the truth value of the sentence. To make it easier to see the result, we mark the main connective of the sentence. We begin by entering the truth value assignment on the left, then transfer it to the right:

55

Introductory logic P Q R P w (Q R) T F F T F F

Now we calculate the truth values of the various component sentences, entering the resulting truth value under the main connective for that sentence:

P Q R P w (Q R) T F F T then FTF

P Q R P w (Q R) T F F TT FTF

Though we write the calculation in two lines, it is important to calculate the values in the same order that we calculated them on the parse tree. This helps us to avoid mistakes.

Exercises 3-1 For each of the following sentences, make a parse tree. Then calculate the truth value of the sentences according to the given assignment of truth values using the parse tree.

56

Logical Properties in L1 A: B: C: D: E: F: 1 2 3 4 5 6 T F T F T F

A (~B w C) ~(C (D & ~E)) ~~[(A & B) (~F w B)] (A w B) ~[C & (D E)] (B & A) [(C w F) D] ~(C w F) & [(A ~B) D]

For each of the following sentences, calculate the truth value in the two-line compact form, using the assignment of truth values given above. 7 8 9 10 11 12 (A B) w (C D) (A B) & ~[C & (D w E)] (B C) [(D & F) w ~(B & C)] (C E) & [(D F) w (F D)] (B w C) ~[D (E & F)] [B (C & D)] [(C A) w~ F]

2: Calculating all Possible Truth Values We calculate all the possible truth values for a sentence in a truth table. A truth table is a rectangular tabulation of truth values, each line of which corresponds to an assignment of truth values to the sentence letters of the sentence. In fact, each line of the truth table after the first looks just like the second line of a two-line calculation of the truth or a sentence given an assignment of truth values. Here is a sample truth table for the sentence P Q:

57

Introductory logic

?
P T T F F Q T F T F PQ TTT TFF FTT FTF

Of course, we can also make truth tables for longer sentences with more sentence letters, for instance:

?
P Q R (P (Q w ~R) T T T TT T T F TT T F T TF T F F TT F T T FT F T F FT F F T FT F F F FT T T FT T T TF F F FT F T TF T T FT T T TF F F FT F T TF

Because this sentence has one more sentence letter than the previous one, its table has twice as many rows. In general, a sentence with n different sentence letters in it will generate a truth table of 2n rows. For convenience, all of our truth tables will be done in a specified order. In theory, it does not matter in what order we present the rows of a truth table, so long as we present all relevant rows and calculate them correctly. In practice, making sense of truth tables is much easier if they are always presented in the same order. This makes it possible to compare different truth tables easily. We specify two features of the truth table order: the order in which the sentence letters are to appear in the left of the 58

Logical Properties in L1 table, and the order the truth value assignments that appear on the various rows of the table. Sentence letters occur on the left side of the table in alphabetic order, regardless of the order in which they occur in the sentence. So, for instance, for the sentence S (Q R), we would begin the table thus:

S (Q R)

We assign truth values according to a simple binary pattern. To assign truth values to the sentence letters, we assign them so that the rightmost column in the left part of the table)that for the alphabetically latest sentence letter)alternates T, F, T, F, etc., thus:

?
Q R S T F T F T F T F S (Q R)

The column immediately to the left of this one has two Ts followed by two Fs, etc., thus:

?
59

Introductory logic Q R T T F F T T F F The column immediately preceding that has four Ts followed by four Fs, like this: S S (Q R)

Q T T T T F F F F

S (Q R)

The whole looks like this:

60

Logical Properties in L1

?
Q T T T T F F F F R T T F F T T F F S T F T F T F T F S (Q R)

The same basic pattern is continued for more or fewer sentence letters, remembering that a truth table involving n sentence letters always has 2n rows.

Exercises 3-2: Make complete truth tables for each of the following, taking care to use the order prescribed above. 1 2 3 4 5 6 M (~N w M) (S ~T) ~S (A D) (~C & D) S (Q & ~R) A w (~F G) (P w ~M) (N M)

3: Logical relations in L1 Sentences of L1 are their own forms. All of our definitions of logical relations in Chapter 1 involved the notion of forms and of instances of forms. To define the logical relations and properties of sentences in L1 we must explain the notion of a form and instances of forms for L1 as well. In L1 sentences are their own 61

Introductory logic

forms.10 Thus the form of P 6 Q is P 6 Q. An instance of a form

is that form plus an interpretation. Thus an instance of the form P Q is this: PQ P: Plutonium has atomic number 94. Q: The Queen of the United Kingdom in 1997 was Elizabeth II.

Notice that on this definition, as we would expect, forms are neither true nor false. A sentence of L1 doesnt say anything unless it has an interpretation, and so it cannot be true or false at all. Instances, however, are true or false. To determine whether an instance of a form is true or false, we must know whether the statements assigned the sentence letters by the interpretation are true or false; then we can calculate the truth values of the sentence as we have learned to do in the first section of this chapter. Of course, for some interpretations, we may not know whether the statements the interpretation assigns to sentence letters are true or false. In this case we will not be able to determine the truth value of that particular instance. For example, consider this instance of P 6 Q: P6Q P: Goldbachs conjecture11 is true. The observant reader will note that on this account P Q and R S have different forms, and this may seem incorrect. In fact, this will not cause any problems for us. It would be straightforward to define the notion of a form in such a way that P Q and R S would have the same form, but this would unnecessarily complicate the notion of form and all the subsequent discussion. 11 Goldbachs conjecture is the claim that every even number greater than 2 can be expressed as the sum of two prime numbers. Mathematicians suspect that it is true, but it has not been proved, and there remains the possibility that a counter-examplean even number that cannot be expressed as the sum of two prime numbersexists. If there is one, it must be a very large number, beyond the range that can be checked by computers. 62
10

Logical Properties in L1 Q: The Queen of The United Kingdom in 1997 was Elizabeth I. We dont know whether P is true or false on this interpretation, though we know that on this interpretation Q is false. Hence we dont know the truth value of this instance of P Q. Forms of sets, arguments, and pairs of sentences of L1 are defined in the obvious ways. The form of a set of sentences of L1 is that set of sentences of L1. The form of an argument of L1 is that argument. The form of a pair of sentences of L1 is that pair of sentences. In each case, the instances of the form are the form plus an interpretation. Inconsistency can be explained as in Chapter 1. With an understanding of forms and instances, we can apply the definitions of the logical relations in Chapter 1 to L1. We had these definitions for inconsistence and consistency: A form is inconsistent if and only if it has no instance all of whose statements are true. A set of statements is inconsistent if and only if it is an instance of some inconsistent form. A set of statements is consistent if and only if it is not an instance of any inconsistent form. To determine whether a form is inconsistent, we need to determine whether it has any true instances. This may seem like a formidable task, since any form has an infinite number of instances! But, fortunately, we only need to check a finite number of possibilities. The only thing that matters to the truth or falsity of sentences of L1 is the truth or falsity of the atomic sentences)sentence letters)that make them up. That is to say, the only thing that matters is the truth value assignment generated by any particular interpretation. Furthermore, by our account of form, each sentence has exactly one form. So we can use the following definitions for L1: A set of sentences of L1 is truth-functionally inconsistent if and only if there is no assignment of truth values on which all the sentences in the set are true.


A set of sentences of L1 is truth-functionally consistent if and only if there is at least one assignment of truth values on which all the sentences in the set are true. Some examples will make this clearer. Consider the following set of sentences:

P ↔ Q
P ↔ ~Q

We can determine whether it is consistent or inconsistent by constructing the following truth table:

P  Q  |  P ↔ Q  |  P ↔ ~Q
T  T  |    T    |    F
T  F  |    F    |    T
F  T  |    F    |    T
F  F  |    T    |    F

Here we have made one truth table that shows the truth values of all the sentences simultaneously. These show all the possible truth values that the sentences can have for all their infinitely many instances. We can see by inspection that there is no line in this truth table where all the sentences are true, so there is no instance of this form that has all the sentences true. Consequently, this form is inconsistent. Since the form is inconsistent, any instance of it is as well. On the other hand, consider this set of sentences:

P → Q
P → ~Q

We can similarly check these for consistency:


P  Q  |  P → Q  |  P → ~Q
T  T  |    T    |    F
T  F  |    F    |    T
F  T  |    T    |    T
F  F  |    T    |    T

The truth table has both sentences true in the third and fourth rows of the table, so the sentences are consistent. In this case, we don't actually need the whole table. One line is enough to show that there is one assignment of truth values that makes all the sentences true simultaneously. So this would suffice to show the sentences consistent:

P  Q  |  P → Q  |  P → ~Q
F  T  |    T    |    T
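For readers who would like to see this check made fully mechanical, here is a minimal Python sketch of the procedure just illustrated: it enumerates every assignment of truth values to the sentence letters and asks whether some assignment makes every sentence in the set true. The encoding of sentences as Python functions and all of the names below are illustrative choices, not anything defined by L1 or by this chapter.

    from itertools import product

    def assignments(letters):
        # Yield every assignment of truth values to the given sentence letters.
        for values in product([True, False], repeat=len(letters)):
            yield dict(zip(letters, values))

    def consistent(sentences, letters):
        # Truth-functionally consistent: some assignment makes every sentence true.
        return any(all(s(a) for s in sentences) for a in assignments(letters))

    # The two sets discussed above.
    inconsistent_set = [lambda a: a["P"] == a["Q"],             # P <-> Q
                        lambda a: a["P"] == (not a["Q"])]       # P <-> ~Q
    consistent_set   = [lambda a: (not a["P"]) or a["Q"],       # P -> Q
                        lambda a: (not a["P"]) or (not a["Q"])] # P -> ~Q

    print(consistent(inconsistent_set, ["P", "Q"]))   # False
    print(consistent(consistent_set, ["P", "Q"]))     # True

Checking all 2^n assignments is exactly what a full truth table does, so a sketch like this is practical only for small numbers of sentence letters.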

Equivalence may be defined similarly. Recall the definition of equivalent forms from Chapter 1: A form is an equivalence form if and only if: (1) All its instances consist of exactly two statements, and (2) there is no instance where the two statements have different truth values. Two statements are equivalent if and only if the pair of them is an instance of an equivalence form. Given our account of forms for L1, we can give a simpler definition for truth-functional equivalence of sentences of L1: A pair of sentences is truth-functionally equivalent if and only if there is no assignment of truth values on which they have different truth values. A pair of sentences is not truth-functionally equivalent if there is at least one assignment of truth values on which the

sentences have different truth values. We can check sentences for equivalence by means of a truth table. For instance, this truth table shows that F → G and ~F w G are equivalent:

F  G  |  F → G  |  ~F w G
T  T  |    T    |    T
T  F  |    F    |    F
F  T  |    T    |    T
F  F  |    T    |    T

Since the table shows that on every line the sentences have the same truth values, the two sentences are equivalent. On the other hand, these sentences are not equivalent, as the table shows:

P  Q  |  P → ~Q  |  ~(P → Q)
T  T  |    F     |     F
T  F  |    T     |     T
F  T  |    T     |     F
F  F  |    T     |     F

The sentences are not equivalent because in the third and fourth rows of the table they have different truth values. Of course, we do not need the whole table to show nonequivalence; one line would suffice, like this:

P  Q  |  P → ~Q  |  ~(P → Q)
F  T  |    T     |     F
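The same brute-force idea handles equivalence. The sketch below is again an illustrative Python fragment, not anything official; it reports whether two sentences agree in truth value under every assignment, applied to the two pairs just discussed.

    from itertools import product

    def equivalent(s1, s2, letters):
        # Truth-functionally equivalent: same truth value under every assignment.
        return all(s1(dict(zip(letters, v))) == s2(dict(zip(letters, v)))
                   for v in product([True, False], repeat=len(letters)))

    # F -> G and ~F w G: equivalent.
    print(equivalent(lambda a: a["G"] if a["F"] else True,      # F -> G
                     lambda a: (not a["F"]) or a["G"],           # ~F w G
                     ["F", "G"]))                                # True

    # P -> ~Q and ~(P -> Q): not equivalent.
    print(equivalent(lambda a: (not a["P"]) or (not a["Q"]),     # P -> ~Q
                     lambda a: not ((not a["P"]) or a["Q"]),     # ~(P -> Q)
                     ["P", "Q"]))                                # False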

Validity may be defined in a similar way. Recall the definitions of validity from Chapter 1: An argument form is valid if and only if it has no instance

where all the premises (if any) are true and the conclusion is false. A valid argument is an argument that is an instance of a valid argument form. An invalid argument is an argument that is not an instance of any valid argument form. These definitions, along with our account of forms, are the basis for the following definitions of validity and invalidity in L1: An argument in L1 is truth-functionally valid if and only if there is no assignment of truth values on which all its premises (if any) are true and its conclusion is false. An argument in L1 is invalid if and only if it is not valid, that is, if and only if there is at least one assignment of truth values on which all its premises (if any) are true and its conclusion is false. As with inconsistency and equivalence, we can check arguments for validity with a truth table. For instance, consider this argument:

M → (R w D)
~D & M
R

This argument is valid, as demonstrated by the following truth table:

D  M  R  |  M → (R w D)  |  ~D & M  |  R
T  T  T  |      T        |    F     |  T
T  T  F  |      T        |    F     |  F
T  F  T  |      T        |    F     |  T
T  F  F  |      T        |    F     |  F
F  T  T  |      T        |    T     |  T
F  T  F  |      F        |    T     |  F
F  F  T  |      T        |    F     |  T
F  F  F  |      T        |    F     |  F

There is no line in this table where both premises have the value T while the conclusion is false. Although the conclusion, R, is false in lines 2, 4, 6, and 8, the second premise is false in lines 2, 4, and 8 and the first premise is false in line 6, so there is no line that shows the argument invalid. The following argument, however, is not valid:

W → K
K
W

as the following truth table shows:

K  W  |  W → K  |  K  |  W
T  T  |    T    |  T  |  T
T  F  |    T    |  T  |  F
F  T  |    F    |  F  |  T
F  F  |    T    |  F  |  F

Here the second line of the table shows that the argument is invalid; both premises are true there, but the conclusion is false. Of course, the whole table is not needed to show the argument invalid; the second line alone would do:


K  W  |  W → K  |  K  |  W
T  F  |    T    |  T  |  F

This displays the assignment of truth values that shows the argument invalid.

4: Shortcuts

We can sometimes take shortcuts rather than doing a complete truth table. A complete truth table is needed to show inconsistency, equivalence, or validity. But all that is needed to show consistency, nonequivalence, or invalidity is a single line of a table. Sometimes it is easier to calculate that line directly rather than to search for it in a truth table. For instance, suppose that we wish to show the following set consistent:

A → G
G → ~A
G

We begin by setting up the single line of the truth table we need.

A  G  |  A → G  |  G → ~A  |  G

We know that we want to make all three sentences true, so we enter those values in the table.

A  G  |  A → G  |  G → ~ A  |  G
      |    T    |    T      |  T

Since we have to give G the value T, we enter that in the other columns of the table for G:


A  G  |  A → G  |  G → ~ A  |  G
   T  |    T T  |  T T      |  T

Now we look for other entries that are forced upon us by this one. We notice that we have given G in G ~A the value T, and we have also specified that the whole sentence must have the value T. So the consequent, ~A, must also have the value T (since a true conditional with a true antecedent must have a true consequent), so we enter that in the table.

A  G  |  A → G  |  G → ~ A  |  G
   T  |    T T  |  T T T    |  T

But since ~A has been given the value T, A must get the value F, so we can now fill in those values:

A  G  |  A → G  |  G → ~ A  |  G
F  T  |  F T T  |  T T T F  |  T

We must check to be sure all these entries are now correct, but in this case they are. We have shown the sentences consistent. Of course, we can treat nonequivalence and invalidity similarly. Finding an appropriate assignment of truth values may involve trial and error. When we are using this shortcut method, things do not always work out as simply as they did in the example above. Suppose we wish to show the following sentences nonequivalent:

B w C
B & C

We begin thus:


B  C  |  B w C  |  B & C

We know we want to make one of the sentences true and the other false, but which should be which? We will just have to guess; if our guess turns out wrong, we will try another guess. Let us try the guess that makes continuing easiest: if we make B w C false, we have determined the truth values of B and C, so let's hope that works.

B  C  |  B w C  |  B & C
      |    F    |    T

Now, there is no way to complete the line. For instance, if we try this: INCORRECT!

B  C  |  B w C  |  B & C
F  F  |  F F F  |    T

there is no way to complete the last cell of the table. We must try again.

B  C  |  B w C  |  B & C
      |    T    |    F

Now we have two ways to go on. We can either make B true or C true, but we must not make both true. Let us try making B true.


B  C  |  B w C  |  B & C
T     |  T T    |  T   F

Now if we make C false, everything will work.

B  C  |  B w C  |  B & C
T  F  |  T T F  |  T F F

Here we have shown the sentences nonequivalent. The moral of this example is that you must systematically try other possibilities if your first try doesn't work. Of course, a truth table is a systematic way of trying all possibilities; but sometimes you can limit the number of possibilities you have to try.

Exercises 3-4: Show that each of the following sets of sentences is inconsistent by providing a truth table. Indicate how your truth table shows inconsistency.

1  B → C, B, ~C
2  C D, ~(D w ~C)
3  E & ~F, G F, ~E w G
4  H & ~(I J), ~H w (I & J), ~J ~H
5  K & (L w M), L (K ~M), ~M ~L
6  (N ~O) P, N ~P, ~O & P

Show that each of the following sets of sentences is consistent either by providing a full truth table or by providing a single line of a truth table. If you provide a full truth table, mark a line that shows the set consistent. 7 8 9 10 11 72 (P Q), P ~Q (P & Q) Q, (P w Q) ~P (R w S) P, R ~P, ~R S U w (W & X), W (~U ~X), U & ~(W & X) (Z & ~Y) ~A, A ~Z, A w ~Y

12 (G & Y) w S, S ~G, ~Y & (G w S)

Show that each of the following pairs of sentences are equivalent by providing a truth table. Indicate how your truth table shows equivalence. 13 14 15 16 17 18 A ~M ~(A M) ~P w T ~T ~P ~C w (D ~R) (C & D) ~ R ~(M & L) w S ~M w (L S) ~(R & K) & T (T & ~R) w (T & ~K) (P w ~S) & (H w S) [P & (S w H)] w (~S & H)

Show that each of the following pairs of sentences is nonequivalent either by providing a full truth table or by providing a single line of a truth table. If you provide a full truth table, mark a line that shows the pair non-equivalent. 19 20 21 22 23 24 IQ ~(M G) (P & Q) X (Z w T) E (L & K) (L & S) ~(R w C) & M QI M ~G (P X) & (Q X) (Z E) w (T E) L & (K S) ~[(R & C) w M]

Show that each of the following arguments is valid by providing a truth table. Indicate how your truth table shows validity. 25 26 27 28 29 30 P&Q PwQ H&B BH (P w S) & T ~(P & T) (S & T) P (S w T), ~S ~T, P S K (I & F), F I F K (P & A) w (A J), (A ~J) & (A w J)

PA

Show that each of the following arguments is invalid either by providing a full truth table or by providing a single line of a truth table. If you provide a full truth table, mark a line that shows the argument invalid. 31 QX XQ 73

Introductory logic 32 33 34 35 36 P w ~Q ~Q P B w (M & A) B & (M w A) R (T W) (R T) W (P J) & R, ~R ~J R (P & ~J) (M S) ~T, (S & M) w (S & T) (S & M) T

Determine whether the following are consistent, and provide appropriate demonstration of your finding -- either a truth table or a line of a truth table, as required. 37 38 39 40 41 42 F R, R L, ~L & (~R w F) G w (Y N), ~N & Y, G N P (C Z), ~(Z & P), (C P) & (~C P) (W & A) w (W & B), W (~A w ~B), B ~W I (D & K), D ~K, D w I E & (O Z), Z ~(E & O), ~E w Z

Determine whether the following are equivalent, and provide appropriate demonstration of your finding -- either a truth table or a line of a truth table, as required. 43 44 45 46 47 48 P (B & J) (P B) J (M T) C M (T C) (C w P) (P & U) (P & U) w (~P & ~C) E & ~E [L (F & ~L)] & L (P & D) w (~P & D) (Q D) & (~D Q) (F C) w K F (C w K)

Determine whether the following are valid, and provide appropriate demonstration of your finding -- either a truth table or a line of a truth table, as required. 49 50 51 52 53 54 K L, L w M K w M A (D & Z), ~Z A H (R w T), ~T H W (N C), W w N (N C) w (W C) ~[B & (S Q)], B & (Q S) Q ~(X w ~O) T, T & (O T) X

5: Truth-Functional Logical Properties of English Sentences

Generally, the truth-functional logical properties of English sentences are the truth-functional logical properties of their translations into L1. This will usually be true if all of the truth-functional connectives in the English sentence have been represented by connectives of L1. However, there are some limitations. In the first place, L1 captures only truth-functional logic, not all logic. For instance, the following two sentences are logically equivalent, but not truth-functionally equivalent:

Some sophomores do not study logic.
Some who do not study logic are sophomores.

So truth-functional logic is by no means all of logic. Second, L1 can only represent the logic of sentences that are accurately translatable by sentences of L1. Insofar as the translations are not accurate, there will be differences between the logic of English and the logic of the sentences of L1 that purport to translate it. We have mentioned before that there is some controversy about whether English sentences involving "if" are accurately translated by L1 sentences involving →; naturally it follows that there is controversy about whether logical relations involving sentences of English using "if" can be adequately represented by the logical relations of sentences of L1 involving → that purport to translate them. Here is an example intended to illustrate these problems. Consider the proposition that if I pray, the evils of the world will be eliminated. I don't know whether that is true, but if it is true, then surely God exists. That is, the following is surely true: If it's true that if I pray, the evils of the world will be eliminated, then it is true that God exists. With that as a premise, I can easily prove that God exists; all I need is the additional premise that I don't pray. Do you doubt that this argument is valid? We can show it valid in L1. Using this interpretation:

E: The evils of the world will be eliminated.
G: God exists.
P: I pray.


the argument may be symbolized thus in L1:

(P → E) → G
~P
G

This argument is plainly valid, as the reader may demonstrate with a truth table.12 Whether or not God exists, surely the argument offered in the example cannot prove that God exists. Some philosophers, who believe that "if" is accurately translated by →, believe that the L1 argument is a good translation. They say that if we really understand what "if" means (namely, what → means, according to them), we will see that only confusion or misunderstanding could make someone accept the first premise of the argument if he or she did not already believe that God exists. Other philosophers think that there is no mistake involved in granting the English first premise, but deny that → accurately translates "if", and so hold that although the L1 argument is valid, the English argument is not. We shall not try to settle such controversies here. We will only stipulate that "if" in exercises in this book means exactly the same as →.
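For those who would rather let a machine grind out the eight-line truth table, here is a small Python sketch (illustrative only; the encoding of → as a Python function is our own choice) confirming that the symbolized argument is truth-functionally valid.

    from itertools import product

    def valid(premises, conclusion, letters):
        # Valid: no assignment makes every premise true and the conclusion false.
        for values in product([True, False], repeat=len(letters)):
            a = dict(zip(letters, values))
            if all(p(a) for p in premises) and not conclusion(a):
                return False
        return True

    arrow = lambda x, y: (not x) or y          # the truth function for ->

    premises = [lambda a: arrow(arrow(a["P"], a["E"]), a["G"]),  # (P -> E) -> G
                lambda a: not a["P"]]                            # ~P
    conclusion = lambda a: a["G"]                                # G

    print(valid(premises, conclusion, ["P", "E", "G"]))          # True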

Exercises 3-5: Determine whether the following are truth-functionally consistent and establish your answer by a truth table or a single line of a truth table as appropriate.

12 This argument is adapted from C. L. Stevenson. Other amusing examples can be found in his article, "If-iculties," Philosophy of Science v. 37 (1970), pp. 27-49; reprinted with a few changes in Rudner and Scheffler, eds., Logic and Art: Essays in Honor of Nelson Goodman (Indianapolis: Bobbs-Merrill, 1972), pp. 279-309.

1  If the Flugelheim project is delayed, then either the pointy-haired boss is mad or Wally will not be promoted. The pointy-haired boss is mad, but Wally will be promoted. If the pointy-haired boss is mad, then the Flugelheim project is not delayed.
2  It's not true that if the computer has a virus then it should be scrapped. If it's not true that the computer should be scrapped, then a new printer is needed. It's not both true that a new printer is needed and that the computer has a virus.
3  Oscar got lots of sleep; however, Ernie got lots of sleep if but only if Bert did. If Oscar got lots of sleep, then Ernie did. It's not true that both Ernie and Bert got lots of sleep.
4  Either the mummy dates from the time of Cheops or we'll find the gold and all become famous. We'll all become famous only if we find the gold. If the mummy dates from the time of Cheops, we will all become famous.

Determine whether the following are truth-functionally equivalent and establish your answer by a truth table or a single line of a truth table as appropriate.

5  If the red team is free from injuries then if the blue team's star player is healthy then it will be a good game. If it's not a good game, then it's not both true that the red team is free from injuries and the blue team's star player is healthy.
6  The king is safe if and only if the rook is not attacked and the pawn structure is uncompromised. It's both true that either the king is safe or the rook is not attacked and also that either the king is safe or the pawn structure is uncompromised.
7  Provided we work hard, we will both pass the course and

learn a lot. If we work hard, we'll pass the course; furthermore, if we work hard we'll learn a lot.
8  Emily will enjoy the party unless neither Tom nor Charles comes. Emily will enjoy the party unless Tom doesn't come; furthermore, she will enjoy the party unless Charles doesn't come.

Determine whether the following are truth-functionally valid and establish your answer by a truth table or a single line of a truth table as appropriate.

9  The Pirates will win the pennant if but only if it's true that if they get a new manager they'll get a good one. It's neither true that they'll win the pennant nor that they will not get a new manager. Therefore, the Pirates will get a good manager.
10  If the television set is working, then either we can see Varsity Athlete or we can see Leno. We can see Varsity Athlete if but only if we cannot see Leno. Therefore, if we cannot see Varsity Athlete, then if the television is working we can see Leno.

Appendix: Knights and Knaves

There are many logic puzzles of the following sort. You are visiting the island of knights and knaves. Knights utter only true statements, knaves only false ones. Knights and knaves are otherwise indistinguishable to one who first meets them. You encounter two inhabitants of the island who speak as follows:

A: We are not both knights.
B: I am a knight.

You are to determine whether A is a knight or a knave and whether B is a knight or a knave. Puzzles of this sort yield readily to truth-functional analysis. We start with this interpretation:

A: A is a knight
B: B is a knight

Since we are told that every inhabitant is either a knight or a knave, ~A says that A is a knave. Now what A says may be translated this way:

~(A & B)

We don't know whether this is true or false, but it is true if and only if A is a knight. Hence we can be sure that this is true:

A ↔ ~(A & B)

Similarly, we can know, given what B said, that this is true:

B ↔ B

So to find out whether A and B are knights or knaves, we just have to find an assignment of truth values that makes both of these sentences true. There is only one such:

A: T
B: F

Hence A must be a knight and B a knave. You might like to try your hand at some puzzles of this type.

1  You encounter two inhabitants of the island of knights and knaves, A and B. A says, "If B is a knave then I am too." What are A and B?
2  You encounter two more inhabitants of the island, C and D. C says, "Neither of us is a knight." What are C and D?
3  You encounter yet another pair of inhabitants, E and G. E says, "At least one of us is a knave." What are E and G?
4  Two inhabitants of the island of knights and knaves are said to be the same type if both are knights or both are knaves; they are said to be different types if one is a knight and the other a knave. You encounter two inhabitants of the island, J and K. J says, "I am a different type from K." K says, "I am the same type as J." What are J and K?
5  You are on your way to the Capital City of the island of knights and knaves, and you come to a fork in the road where two inhabitants, M and N, are standing. You ask M whether the left fork leads to the Capital City. He says, "N is a knight if and only if the left fork goes to the Capital City." N immediately chimes in, saying, "M is lying." Does the left fork go to the Capital City?
6  A traveler told me the following story: "One day while I was visiting the island of knights and knaves I met two inhabitants of the island, O and P. O said, 'There is at most one knight among us.' P said, 'There is at least one knight among us.'" What should you conclude?

For many logic puzzles like this, see Raymond Smullyan, What Is the Name of This Book? The Riddle of Dracula and Other Logical Puzzles (Englewood Cliffs, NJ: Prentice-Hall, 1978).
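Readers who enjoy these puzzles may also like to check their answers mechanically. The Python sketch below is an illustrative fragment, not part of the text's official apparatus: it solves the worked example above by trying all four ways A and B could be knights or knaves and keeping the assignments that make both biconditionals true.

    from itertools import product

    # A is True means "A is a knight"; likewise for B.
    # A's statement gives A <-> ~(A & B); B's statement gives B <-> B.
    def fits(A, B):
        return (A == (not (A and B))) and (B == B)

    solutions = [(A, B) for A, B in product([True, False], repeat=2) if fits(A, B)]
    print(solutions)   # [(True, False)]: A is a knight and B is a knave

The same brute-force pattern settles each of the numbered puzzles above once the speakers' statements are translated into biconditionals.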



Chapter 4: Derivations I
In this chapter we will learn how to construct formal proofs of validity, called derivations, in a system we shall call D1. First we will have an overview of the topic of derivations, then we will begin learning the details of D1. We will begin with simple derivations using simple rules of inference and simple structural rules; then we will learn more complex rules involving sub-derivations. The subject of derivations is so complex that it will be continued in the next chapter.

1: Overview of Derivations

Logic is about inference. When we think of logic, we usually (at least, before taking a course) think of argument and inference. A paradigm of logic is the rigorous mathematical proof. So far, nothing we have done seems to resemble a mathematical proof, but in this chapter we will discuss a method of showing arguments valid that resembles mathematical proofs. A derivation begins with the premises (if any) of the argument, and proceeds by simple steps to demonstrate the conclusion. A derivation's steps can be mechanically checked for correctness; there is no room for judgment or discretion in determining whether a derivation is correct, although the process of creating derivations is not typically a mechanical one. The derivation system we will learn, D1, is intended to model the sort of proof one might offer for a conclusion, if one were rigorously arguing for the conclusion. For this reason, this sort of system is called natural deduction. There are other reasons to want a natural deduction system. Making truth-tables can be very tedious (and hence error prone). For instance, consider the following simple argument:

P → (Q w R)
Q → S
R → S
S → T
P → T

This argument is valid, but a truth table that could show it valid would require 32 lines. More complex arguments would take even more lines. A derivation demonstrating the validity of this argument would be much shorter; it would take less work and be more likely to be correct. Another and more important reason for derivations looks toward topics to come in future chapters. When we discuss quantificational logic, it will turn out that there is no technique like constructing a truth table by which we can mechanically check for validity or invalidity. Constructing derivations will be our only method of showing validity. A derivation is a sequence of lines of a particular sort. Derivations in our system, D1, are sequences of lines. A set of very strict rules governs what may occur on these lines and what may follow earlier lines in the sequence. We number the lines in order, starting with 1.13 In addition to its number, a line must contain three other parts: a sentence of L1, which is said to be on the line, a justification for the line, which is an annotation which cites the rule licensing the line, and possibly a structural element, which we shall discuss below.14 The first lines of the derivation must contain premises (if any) of the argument, and the line after the premises contains the conclusion we seek to derive, preceded by the word SHOW, which serves as a structural indicator to mark the conclusion we seek to derive. When we finish the derivation, we cancel the SHOW (striking a line through it) as an indication that we have finished the derivation and indeed derived the conclusion successfully. The details of the rules governing derivations vary depending on the logic text one uses. The selection of rules and methods of deduction is largely a matter of convenience, pedagogical usefulness, and taste. Of course, there are some requirements that all

13 These numbers are theoretically dispensable, because given a sequence of lines we can always count to find the number. But they are very convenient, and so we shall require them.

14 Strictly speaking, the justifications and some of the structural elements are not needed, since they too can be reconstructed from the basic sequence of sentences that makes up the derivation. But it would be extremely inconvenient to omit them.

Derivations I deductive systems of sentential logic must satisfy; we will talk about these more at the end of Chapter 5. 2: Getting Started with Derivations The easiest way to learn about derivations is to start a sample one. Consider the following obviously valid argument: A&B B&A We chose a trivial argument like this so we can concentrate on the mechanics of derivation. We begin with premises. We begin our derivation by writing the premise as line 1; premises must come before everything else. 1. A & B P

Lines of a derivation need justification; the justification is indicated by the annotation in the right column. Each justification must appeal to a Rule. In this case the rule is the Premise Rule:

Premise Rule: Any sentence may appear on a line as a

premise, provided there are no lines other than premises earlier in the proof. (Annotation: P.) The rule says that you may put anything as a premise, but if you are trying to show a particular argument valid, you must use only the premises of that argument as premises in your derivation. The premise rule allows you to write anything as a premise, because it must cover every possible derivation you make. Next comes a SHOW line with the conclusion in it. Immediately after the premises we must state what we are trying to derive as our conclusion. 1. A & B 2. SHOW B & A P

A show line indicates what we are trying to prove. Unlike lines in a proof not beginning with SHOW, it cant be used to infer other things, because it is not something we are given to reason 83


from, but something we are to reason to, a goal we have set for ourselves in doing the derivation. We will say that it is not accessible, because we cannot use it with our inference rules until

we have actually derived the sentence on the SHOW line. When we succeed in showing the sentence following the SHOW, we will cancel the SHOW (SHOW) to indicate that it is no longer a goal but rather something that we have established. At that point it becomes accessible. We need rules of inference. So far we have merely been setting up the derivation for the particular argument we wish to show valid. To actually derive the conclusion we must use rules that license us to infer other lines from the lines we already have. The basic rules of inference that we will encounter in this chapter are associated with particular connectives. Each is either an Exploitation Rule or an Introduction rule. An Exploitation rule for a particular connective tells us how to use (exploit) earlier lines in the proof with the rules connective as their main connective. Introduction rules tell us how to infer new lines with the rules connective as a main connective. Let us see how this works with the connective &, the only one involved in our sample derivation. Introduction and Exploitation rules for &. The exploitation rule for & is this:

&-Exploitation Rule: If N & R appears on an earlier accessible line of a derivation, either N or R may appear alone on a line. (Annotation: the line number of the earlier line plus &E.)

This rule corresponds to perfectly ordinary reasoning: If I know that both diamonds and rubies are gems, I can surely conclude that rubies are gems. &-Exploitation will allow us to infer new lines from line 1 of our sample derivation, but not from line 2. (Line 2 is not accessible, because it begins with uncanceled "SHOW".) We can infer either conjunct; in fact, we will want both, so we will get them one after the other:

1. A & B          P
2. SHOW B & A
3. A              1, &E
4. B              1, &E

Rules of inference always get new lines from previous lines; the

justification of these lines always includes the line numbers of these earlier lines, as well as the abbreviation of the rule used. The introduction rule for & is this:

&-Introduction Rule: If N and R appear on earlier accessible lines of a derivation, N & R may appear on a line. (Annotation: the line numbers of the earlier lines on which the conjuncts appear plus &I.)

This rule also corresponds to ordinary reasoning: if I know that dogs are canids and also that jackals are canids, I can surely conclude that both dogs and jackals are canids. We can use this rule to complete the sample derivation we have started.

1. A & B          P
2. SHOW B & A
3. A              1, &E
4. B              1, &E
5. B & A          3, 4, &I

We inferred the line five from lines 3 and 4 as indicated in the justification column. Boxing and Canceling. We have completed our derivation by deriving the sentence on the SHOW line. We must now indicate that this is a completed derivation. We do this by canceling the SHOW and boxing the lines that justify it; the lines following it that led up to the line on which B & A appears. 1. A & B 2. SHOW B & A +))))))))))), 3. *A * * 4. *B 5. *B & A * .)))))))))))P DD 1, &E 1, &E 3, 4, &I

Canceling the SHOW indicates that the sentence on that line has actually been derived; it is no longer merely a goal. We box the lines that justify canceling both to show which lines justify the cancellation and to indicate that they are no longer accessible. (In this case it doesn't matter that they are accessible, because we have completed the derivation.) Boxing and canceling requires justification. When we cancel a "SHOW," we must add a justification to the SHOW line. The

rule cited here is the Direct Derivation rule:

Direct Derivation Rule: When a sentence on a line (other

than an uncanceled SHOW line) is identical to the sentence on the most recent uncanceled SHOW line, the SHOW may be canceled and all the subsequent lines boxed. (Annotation: DD appears on the SHOW line.) When we have boxed and canceled the SHOW line immediately following the premises, the derivation is completed. A derivation is completed when and only when the first SHOW line is canceled. The sentence on that line is the conclusion of the derivation; the derivation is said to be a derivation of its conclusion from its premises (if any). Thus our sample derivation is a derivation of B & A from A & B. We can present rules diagrammatically. The rules of D1 are often easier to grasp when presented diagrammatically. Here are diagrammatic presentations of the &-exploitation and &-introduction rules:

&E

   N & R          N & R
   -----          -----
     N              R

&I

   N
   R
   -----
   N & R

The diagrams use the meta-linguistic variables from Chapter 2. Where a Greek letter occurs in the diagrams, it may be replaced by any sentence of L1, with the understanding that, in a given diagram, the same sentence must replace that Greek letter at all of its occurrences. Above the line in the diagram we place the representations of the premises that must be available on earlier accessible lines to use the rules; the order in which they occur does not matter. Below the line we place the representation of the sentence that the rule allows us to infer. We will present all rules of inference from now on as diagrams, relegating the verbal

presentation of the rules to an appendix to this chapter. The DD rule can also be presented as a diagram:

DD

   SHOW N
   +----------+
   |          |
   |          |
   |    N     |
   +----------+

This diagram indicates that you are permitted to box and cancel when you have arrived at the sentence on the SHOW line. (The diagram does not clearly indicate, but the rule specifies, that there must be no uncanceled SHOW lines between the canceled SHOW and the final occurrence of the sentence shown. Because the diagrams of structural rules may be more easily misunderstood than those of inference rules, we will always present the rule as well as the diagram.)
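The earlier remark that a derivation's steps can be mechanically checked is easy to make concrete. The Python sketch below is a toy illustration, not part of D1: it encodes sentences as nested tuples and checks whether a single step citing &E or &I is licensed by the cited lines. A full checker would also have to track SHOW lines, boxing, and accessibility.

    # Sentences as nested tuples: ("&", "A", "B") stands for A & B.
    def licensed_by_and_E(cited, inferred):
        # &E: from a conjunction on an accessible line, infer either conjunct.
        return isinstance(cited, tuple) and cited[0] == "&" and inferred in (cited[1], cited[2])

    def licensed_by_and_I(cited1, cited2, inferred):
        # &I: from the two conjuncts on accessible lines, infer their conjunction.
        return inferred == ("&", cited1, cited2)

    premise = ("&", "A", "B")                              # line 1:  A & B
    print(licensed_by_and_E(premise, "A"))                 # True  (line 3 of the sample derivation)
    print(licensed_by_and_E(premise, ("&", "B", "A")))     # False (B & A is not a conjunct of A & B)
    print(licensed_by_and_I("B", "A", ("&", "B", "A")))    # True  (line 5, by &I)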

Exercises 4-2: In each case, indicate which of the subsequent sentences can be inferred from the given sentence or sentences using the indicated rule. 1 From A & (B & C) using &E, you may infer: a) A b) B c) C d) A w B e) B w C f) A w C From A w B and C using &I, you may infer: 87

Introductory Logic a) (A w B) & C b) C & (A w B) c) C & A d) C & C e) (A w B) & (A w B) f) (A w B) & (A w C) Construct proper derivations in D1 showing the following arguments valid. 3 4 5 6 7 8 A & (B w C) B w C D & (E & F) D & F G & H, I & J G & J K & (L M), (K M) & L (K & L) & (L M) P & (Q R), (Q w R) & R (P & (Q w R)) & R ((S w T) & R) & W, T & W R & T

3: More Rules of Inference There are Exploitation and Introduction rules for w. Here are the exploitation and introduction rules for w:

wE

NwR ~N
))))))

NwR ~R
))))))

wI N
)))))

R
))))

NwR

NwR

Each rule has two forms, because the connective is symmetric. The w-Exploitation rules corresponds to ordinary reasoning: for instance, if I know that either Janet is at the movies with Bill or she is in her room studying, and I know that she is not in her room, I can conclude that she is at the movies with Bill. The w-Introduction rule does not exhibit a common pattern of reasoning, but it is surely sound; if I know that one disjunct is true, I can conclude that the disjunction is. We can use these new rules, along with those we have already learned, to construct the following derivation: 88

Derivations I 1. C & (D w E) 2. ~E 3. SHOW D w F +)))))))))))))), 4. * D w E * 5. * D * 6. * D w F * .))))))))))))))P P DD 1, &E 2, 4, wE 5, wI

We used the rules at lines 5 and 6; line 4 was inferred with the &-exploitation rule. There is an exploitation rule for . For there is no introduction rule, but there is an exploitation rule. (We can derive conditionals; but we do so by a structural rule rather than by a rule of inference; we will discuss this in section 4 of this chapter.) The -Exploitation rule is sometimes called Modus Ponens;15 its diagram looks like this:

NR N
)))))

This rule corresponds to ordinary reasoning: When I know that if Pamela insulted Quentin, then Quentin was upset, and I know that Pamela insulted Quentin, I may conclude that Quentin was upset. We can use it in the following derivation: 1. P Q 2. R & P 3. SHOW R & Q +))))))))))), 4. * P * 5. * R * 6. * Q * 7. * R & Q * .)))))))))))P P DD 2, &E 2, &E 1, 4, E 5, 6, &I

Modus Ponens is Latin for the method of positing or the method of asserting. The name is of medieval origin. 89

15

Introductory Logic The introduction and exploitation rules for reflect its equivalence to a conjunction of conditionals. The Exploitation rule has two forms:

NR N
))))))

NR R
))))))

This rule corresponds to ordinary ways of reasoning: if I know that porpoises are mammals if and only if whales are, and I know that whales are mammals, then I can surely conclude that porpoises are mammals. The -Exploitation rule reflects the -Exploitation rule and the fact that a biconditional is equivalent to a conjunction of conditionals. The -Introduction rule reflects this as well:

I N R RN
))))))

NR

We can illustrate the use of both these rules in the following derivation: 1. 2. 3. 4. 5. 6. 7. 8. 9. H&I H (G F) I (F G) SHOW F G +))))))))))), *H * *G F * *I * *F G * *F G * .)))))))))))P P P DD 1, &E 2, 5, E 1, &E 3, 7, E 6, 8, I

90

Derivations I Exercises 4-3: For each of the following, indicate which of the sentences can be inferred from the given sentence using the indicated rule. 1 From A w B using wI you can infer: a) A b) (A w B) w C c) B w A d) (D C) w (A w B) e) (B w A) w (A w B) f) M w (A w B) g) (A w B) w (A w B) h) A w (B w C) From A w (C & D), ~A, ~C, and ~(A & D) using wE you may infer: a) A w D b) C & D c) C d) A & D From B (C & D), C, C & D, and B, using E you may infer: a) C & D b) B D c) D & C d) B From A B, A, and A w C using E you may infer: a) B b) B & C c) A B d) C e) B w C From A B, C B, B A, and C A using I you may infer: a) A B b) B A c) C A d) A A

Provide derivations in D1 showing each of the following valid.

91

Introductory Logic 6 7 8 9 10 11 12 13 14 15 P (Q w R), P & ~Q R & P A & (C D), C & ~P D & (A & ~P) M (T C), M & T C w (T M) P & Q, Q A, A R R w S A w B, C ~A, C & D E w B F G, ~H & I, H w G F & I J w (K & ~M), N ~J, N & (M w O) K & O W & (T V), ~X, X w (V T) T V (E & L) Y, Z w E, ~Z & L Y w (X & Z) (A & R) & G, R (M &~S), S w Q Q w S 4: Subderivations and two more rules Subderivations are derivations inside other derivations. We can illustrate the idea of a subderivation by doing the last example from the previous section in a different way. Suppose we have started our derivation like this: 1. 2. 3. 4. H&I H (G F) I (F G) SHOW F G P P P

Suppose we werent sure how to proceed, but we recognize that if we can derive F 6 G and G 6 F we will be able to reach the conclusion by I. So we set ourselves the goal of getting F 6 G thus: 1. 2. 3. 4. 5. H&I H (G F) I (F G) SHOW F G SHOW F G P P P

?. SHOW F 6 G We leave space between line 7 and the next line, because we havent yet figured out how to show F G Similarly, we put a question mark in place of the line number for the next show line, because we dont yet know how many lines will go between them. We now set about to get F G, as though it were all we needed 92

Derivations I to do. 1. 2. 3. 4. 5. 6. 7. H&I H (G F) I (F G) SHOW F G SHOW F G I FG P P P 1, &E 3, 6, E

Now that we have reached our goal, we can box and cancel)but as always, we cancel only the most recent SHOW. 1. 2. 3. 4. 5. H&I H (G F) I (F G) SHOW F G SHOW F G +))))))))))), 6. * I * 7. * F G * .)))))))))))P P P DD 1, &E 3, 6, E

Now that we have boxed lines 6 and 7, they are no longer accessible. We cannot use them again. They have done their work in supporting the derivation of line 5. Line 5, however, is now accessible. Since the SHOW has been canceled, it is no longer a mere goal, but something we may use. We can complete the derivation this way: 1. 2. 3. 4. H&I H (G F) I (F G) SHOW F G +)))))))))))))), 5. *SHOW F G * *+)))))))))), * 6. **I * * 7. **F G * * *.))))))))))- * 8. *SHOW G F * *+)))))))))), * 9. **H * * 10. **G F * * *.))))))))))- * 11. * F G * .))))))))))))))P P P DD DD 1, &E 3, 6, E DD 1, &E 2, 9, E 5, 8, I 93

Introductory Logic A derivation may have as many subderivations as you wish, and may nest them inside each other, so that there may be a subderivation inside a subderivation inside a subderivation inside the main derivation. Subderivations may be nested as deeply within derivations as you wish. Subderivations may not overlap, however; this is guaranteed by the rule that whenever we box and cancel, we must always cancel the most recent uncanceled

SHOW.

In this example, the subderivations havent really enabled us to do anything we couldnt have done without them. The SHOW lines merely served to remind us of the strategy we were pursuing for the derivation. But SHOW lines play a special role in connection with the next two rules we shall learn. Deriving conditionals involves assumptions and a new box-and-cancel rule. Above we noted that there was no introduction rule for . To derive conditionals we use a special sort of derivation, the conditional derivation. In a conditional derivation we begin with a SHOW line. Immediately after the SHOW line we make an assumption: we assume the antecedent of the conditional. We then attempt to derive the conclusion of the conditional. When we have done so, we box and cancel the SHOW we began with. In a diagram, the procedure looks like this: CD SHOW N R +)))))))))))))), *N * * SHOW R * * +)))))))))), * ** * * ** * * ** * * **R * * * .))))))))))- * .))))))))))))))-

Lets work through an example. 1. Q ~P 2. P w R 3. SHOW Q R P P

As a rule, whenever we encounter a SHOW line with a conditional on it, we follow the slogan, To derive a conditional, assume 94

Derivations I

the antecedent and try to derive the consequent. That slogan tells
us what the next two lines of our derivation should be: 1. 2. 3. 4. 5. Q ~P PwR SHOW Q R Q SHOW R P P ACD

The annotation for line 4 is ACD, which stands for Assumption for Conditional Derivation. Now we must derive R. In doing so, we may use our assumption, Q. Deriving R from lines 1, 2, and 4 is not difficult: 1. Q ~P 2. P w R 3. SHOW Q R 4. Q 5. SHOW R 6. ~P 7. R P P ACD 1, 4, E 2, 6, wE

Having shown R, we may box and cancel the SHOW on line 5. Q ~P PwR SHOW Q R Q SHOW R +))))))))))), 6. *~P * * 7. * R .)))))))))))1. 2. 3. 4. 5. P P ACD DD 1, 4, E 2, 6, wE

Now we have reached our goal of showing R, and thereby we have done everything we need to do in order to show Q R, so we may cancel the SHOW on line 3 and box the subsequent lines. 1. Q ~P 2. P w R 3. SHOW Q R +))))))))))), 4. *Q * * 5. *SHOW R *+))))))), * 6. **~P * * 7. ** R * * P P CD ACD DD 1, 4, E 2, 6, wE 95

Introductory Logic
*.)))))))- * .)))))))))))-

The derivation is complete. Notice that the annotation for line 3 is CD, referring to the rule of Conditional Derivation. We need two rules to justify this procedure, one to justify the assumption line and one to justify boxing and canceling the SHOW preceding the conditional. Here is the first:

CD assumption rule: Immediately after a line of the form SHOW NR , a line N may appear. (Annotation: ACD.)
Notice that the assumption is optional according to the strict formulation of the rule; nevertheless, we will always make the assumption whenever we can, because the cases where it is not needed are relatively rare, and there is no harm in having an extra assumption available. Notice also that an assumption may be made only immediately after a SHOW line of the proper form. Assumptions cannot be made whenever one wants; they may be made only in connection with the attempt to derive something of an appropriate form. We also need a rule that justifies canceling the SHOW preceding the conditional; here it is: line and the most recent uncanceled SHOW line is of the form N R, then the SHOW line may be canceled and subsequent lines boxed. (Annotation: CD appears on the canceled SHOW line.)

Conditional Derivation Rule: If R appears on an accessible

There are now two rules that allow canceling of SHOW: the DD rule and the CD rule. The Conditional Derivation procedure may at first seem complicated, but it corresponds to ordinary ways of thinking. I may reason as follows: I dont know whether Gertrude went to the movies with Alonzo, but suppose she did. Well, if she did, then shes not in her room. But either she is in her room, or her roommate Ethel is there. So I conclude that if Gertrude went to the movies with Alonzo, then her roommate Ethel is in her 96

Derivations I room.16 It is common to make a supposition (assumption) and reason on the basis of it. In reasoning on the basis of such an assumption we cannot conclude that something established on the basis of an assumption is true, but only that it is true if the assumption is. Indirect derivation also involves subderivations and assumptions. In order to complete our derivation system, D1, we need one more derivation procedure, called indirect derivation (sometimes also called reductio ad absurdum or proof by contradiction). In this procedure, which is perhaps more common in mathematics than in ordinary reasoning, we make an assumption and show that it leads to a contradiction. We then conclude that our assumption must be false. Here is how the procedure works in a diagram: ID SHOW N +)))))))))))))), * ~N * * SHOW ! * * +)))))))))), * ** * * **! * * * .))))))))))- * .))))))))))))))-

We can best explain this diagram by working through an example. Suppose we wish to complete this derivation: 1. P w ~Q 2. Q 3. SHOW P P P

You may at first think that we can complete this derivation immediately by using w-Exploitation, but we cannot. Recall the diagram for w-Exploitation:

wE N w R
16

NwR

The reader will recognize that this argument can be translated, with a suitable interpretation, into that of the sample derivation we completed above. 97

Introductory Logic ~N )))))) ~R ))))))

w-Exploitation requires that one premise be a disjunction and the other premise must have a ~ as its main connective. We dont
first assume the negation of what we are trying to show. 1. 2. 3. 4. P w ~Q Q SHOW P ~P P P AID

w-Exploitation. So let us use indirect derivation. To do that we

have that in our sample derivation, so we cannot use -

The annotation here is AID, for Assumption for Indirect Derivation. Now we want to try to derive a contradiction; that is, we want to have something of the form N on one line and ~N on another. We may not know yet exactly which sentences will be involved in this contradiction, so we cannot enter any of them in a SHOW line. So to remind ourselves that we seek a contradiction)any old contradiction)we use ! in place of a sentence in the show line: 1. 2. 3. 4. 5. P w ~Q Q SHOW P ~P SHOW ! P P AID

Line 5 is a reminder that we seek a contradiction. The ! looks as though it were a sentence in L1, but it isnt; its a pseudo-sentence. It can appear in some places where sentences would appear, but its just there to remind us of the contradiction. Now we can exploit our assumption, ~P: 1. 2. 3. 4. 5. 6. P w ~Q Q SHOW P ~P SHOW ! ~Q P P AID 1, 4, wE

We now have a contradiction: the Q on accessible line 2 is 98

Derivations I explicitly contradicted by the ~Q on line 6. To mark the fact that we have found a contradiction, we enter a ! on the next line, and then box and cancel. P w ~Q Q SHOW P ~P SHOW ! +))))))))))), 6. * ~Q * 7. * ! * .)))))))))))1. 2. 3. 4. 5. P P AID 1, 4, wE

We need an annotation for line 7. To fit in with the pattern that is developing, we will use the following pseudo-inference rule: !I

N ~N
))) !

This is a pseudo-inference rule because we are not really inferring anything: ! is not a sentence of L1, but just a way of indicating that we have succeeded in deriving a contradiction. The annotation requires citing the two earlier lines and adding !I: P w ~Q Q SHOW P ~P SHOW ! +)))))))), 6. * ~Q * 7. * ! * .))))))))1. 2. 3. 4. 5. P P AID DD 1, 4, wE 2, 6, !I

Now we are in a position to cancel the SHOW on line 3 and box subsequent lines: 1. P w ~Q 2. Q 3. SHOW P +)))))))))))))), 4. *~P * P P ID AID 99

Introductory Logic 5. *SHOW ! * * +)))))))), * 6. * *~Q * * 7. * * ! * * * .))))))))- * .))))))))))))))DD 1, 4, wE 2, 6, !I

In addition to the pseudo inference rule, !I, we need two new rules: one to justify the assumption in line 4, and one to justify canceling the SHOW in line 3.

ID Assumption rule: Immediately after a line of the form, SHOW N, a line ~N may appear. (Annotation: AID.) Indirect Derivation Rule: If ! appears on a line and the most recent SHOW line is not SHOW !, the SHOW may be canceled and subsequent lines boxed. (Annotation: ID appears on the canceled SHOW line.)
Note that in the case that the most recent SHOW line is SHOW !, direct derivation is used instead of indirect derivation. All the comments about assumptions are still important. Assumptions may be made only immediately after show lines. After a line of the form, SHOW NR, N may be assumed. After any line SHOW N, ~N may be assumed. Also, notice that the assumption in the indirect derivation is, strictly speaking, optional. However, we will always make it when we seek to complete a derivation by indirect derivation, since the cases where the derivation can be completed without it are rare. The two rules, CD and ID, are necessary to provide derivations of what are called theorems of D1. Theorems of D1 are sentences of L1 that can be derived from no premises. For instance, P P is a theorem of D1; here is a derivation that shows this: 1. SHOW P P +))))))))))), 2. *P * 3. *SHOW P * *+))))))), * 4. **P & P * * 5. **P * * *.)))))))- * .)))))))))))100 CD ACD DD 2, &I 4, &E

Derivations I The assumption makes it possible to get the derivation started, even though there are no premises.17 Indirect derivation may be used to derive theorems that do not have as their main connective; for example, 1. SHOW P w ~P +)))))))))))))))))))), 2. *~(P w ~P) * 3. *SHOW ! * *+)))))))))))))))), * 4. **SHOW P * * **+)))))))))))), * * 5. ***~P * * * 6. ***SHOW ! * * * ***+)))))))), * * * 7. ****P w ~P * * * * 8. ****! * * * * ***.))))))))- * * * **.))))))))))))- * * 9. **P w ~P * * 10. **! * * *.))))))))))))))))- * .))))))))))))))))))))ID AID DD ID AID DD 5, wI 2, 7, !I 4, wI 2, 9, !I

With the addition of the three rules involved in indirect derivations, our system has all the rules it needs. We can, using these rules, derive everything we wish to derive.18 In Chapter 5 we will introduce additional rules, but they only make derivations easier, they dont enable us to show any arguments valid that cant be shown valid with the rules we have already introduced.

The reader who studies the derivation rules carefully will note that the lines after line 2 are not actually needed for this derivation. 18 We will give a somewhat more rigorous statement of this claim in Chapter 5, section 5, but the proof of it is beyond the scope of this book. 101

17

Introductory Logic Exercises 4-4: Provide derivations showing the following arguments valid. 1 2 3 4 5 6 7 8 9 10 11 12 B w C, ~C (A w B) D & E, D E F w G, H & ~I, ~G (F & ~I) J (K J) L (M N) (L M) (L N) (O & P) Q, O (P Q) R w (S & ~S) R ~P Q, ~P ~Q P ~P P P ~P P Q ~T w V T V ~(~W w ~X) W & X 5: Using Rules Correctly Though the rules of D1 are not complicated, they can be used incorrectly. The key to using the rules correctly is to understand that they are to be understood absolutely literally, as though they were instructions to a computer. (In fact, a computer can be programmed to check whether the rules have been correctly used.) A common error in using rules is forgetting that rules refer to whole lines, and the connectives mentioned in the rules must be the main connectives of the sentences. For instance, you know that you can do this: n.P & Q... n+1. Q n, &E

But you cannot use the &-Exploitation rule in this manner: n.(P & Q) R n+1. P R ... n, &E WRONG!!

In line n the & is not the main connective of the sentence. So the &-Exploitation rule cannot be used on line n. Similarly, you cannot do this: 102

Derivations I n.P Q ... n+1. (P w R) Q

n, wI WRONG!!

Again, the w is not the main connective of line n+1; hence it cannot be inferred using I; using I on line n would give something like (P Q) w R where the whole of line n is one of the disjuncts of the inferred line. Negation must be treated with care. For instance, the following fragment of a derivation is incorrect: n. K w ~T... n+1. T n+2. K ... n, n+1, wE WRONG!!

According to the w-Exploitation rule, one sentence must be a disjunction and the other must be one of the disjuncts with a tilde (~) in front of it. But line n+1 doesnt have a tilde in it at all; much less is it one of the disjuncts with a tilde in front of it. (cf.. the discussion on p. 97.) You can infer the sentence on line n+2 from the previous lines, but you must go to a bit more trouble. n. K w ~T... n+1. T n+2. SHOW K +)))))))), n+3. *~K * n+4. *SHOW ! * *+)))), * n+5. **~T * * n+6. **! * * *.))))- * .))))))))... ID AID DD n, n+3, wE n+1, n+4, !I

(In chapter 5 we will learn some simpler ways to get this result.) Another error can arise in complicated derivations. For instance, consider this attempted derivation: 1. 2. 3. 4. 5. 6. 7. PR ~P ~Q Q SHOW R ~R SHOW ! SHOW P P P P AID

103

Introductory Logic 8. ~P 9. SHOW ! 10. ~Q 11. ! AID 2, 8, E 3, 10, !I

Now we can box and cancel. We may be tempted to cancel the SHOW on line 6, but this would be an error! We can only cancel the most recent SHOW. We must cancel the SHOW on line 9 (and also the one on line 7). 1. 2. 3. 4. 5. 6. 7. PR ~P ~Q Q SHOW R ~R SHOW ! SHOW P +)))))))))))), 8. * ~P * 9. * SHOW ! * *+))))))), * 10. **~Q * * 11. **! * * *.)))))))* .))))))))))))We might now be tempted to do this: PR ~P ~Q Q SHOW R ~R SHOW ! SHOW P +))))))))))), 8. *~P * 9. *SHOW ! * *+))))))), * 10. **~Q * * 11. **! * * *.)))))))- * .)))))))))))12. ! 1. 2. 3. 4. 5. 6. 7. P P P AID ID AID DD 2, 8, E 3, 10, !I 3, 10 !I WRONG!! P P P AID ID AID DD 2, 8, E 3, 10, !I

But this is wrong, because line 10 is no longer accessible. It is boxed, and so we cannot use it. We must complete the derivation 104

Derivations I this way: 1. P R 2. ~P ~Q 3. Q 4. SHOW R +))))))))))))))))), 5. *~R * 6. *SHOW ! * *+))))))))))))), * 7. **SHOW P * * **+))))))))), * * 8. ***~P * * * 9. ***SHOW ! * * * ***+))))), * * * 10 ****~Q * * * * 11 ****! * * * * ***.)))))- * * * **.)))))))))- * * 12. **R * * 13. **! * * *.)))))))))))))- * .)))))))))))))))))P P P ID AID DD ID AID DD 2, 8, E 3, 10, !I 1, 7, E 5, 12, !I

This correct derivation could be shortened by omitting lines 5, 6, and 13, and using DD to justify canceling the SHOW on line 4. Of course, it does not matter how long derivations are, so long as they are correct. Sometimes longer derivations are easier to discover than shorter ones. Exercises 4-5: Find all the mistakes in the following derivations. 1. 1. ~P w Q 2. M P 3. P & ~Q 4. SHOW ~M +))))))))))), 5. *P * 6. *Q * * 7. *M 8. *Q * 9. *P * 10. *~P * P P P ID 3, &E 1, wE 2, E 1, 5, wE 2, 7, E 1, 6, wE 105

Introductory Logic 11. *! * .)))))))))))2. 1. R S 2. T & M 3. ~R 4. SHOW ~S & M +))))))))))), 5. *S & M * 6. *S * 7. *M * 8. *R * 9. *~S * 10. *! * .)))))))))))1. R 2. SHOW P (Q R) +)))))))))))))), 3. *Q * 4. *SHOW Q R * 5. *Q * 6. *SHOW R * 7. *Q & R * 8. *R * .))))))))))))))1. 2. 3. 4. 5. 6. 7. 8. SHOW P (P w Q) +)))))))))))))), *P * *SHOW P w Q * *+)))))))))), * **P & P * * **(P & P) w Q * * **Q & P * * **P * * **P w Q * * *.))))))))))- * .))))))))))))))9, 10 !E

P P P ID AID 5, &E 2, &E 1, 6, E 4, &E 6, 10, !I

3.

P CD ACD ACD 1, 6, &I 7, &E

4.

CD ACD DD 1, &I 4, wI 5, wE 4, &E 7, wI

6: Derivation Strategies

106

Derivations I The process of creating derivations is not mechanical,19 although the process of checking derivations for correctness is. Creating a derivation requires ingenuity and thought. Since we cannot provide a system of mechanical rules for creating derivations, we must content ourselves with advice)what I shall call strategy suggestions. These suggestions will not be as definite as a recipe, but they are helpful when one doesnt know what to do. Here are some strategy suggestions that may prove useful. Use trial and error. Derivations other than the simplest are seldom simply written down as you would write down the lines of a truth-table. Typically you simply try out various things until you see what will work. If one plan isnt working, you try another, trying to be systematic and cover all possibilities. You should never be afraid to start using rules without knowing exactly where you are going; similarly, if you run into a dead end, you should always be prepared to go back to the beginning and start over. Set goals for yourself, as SHOW lines, that will help you derive your conclusion. Then complete the subderivations. To show a conditional, assume the antecedent and try to derive the consequent. There is almost no exception to this rule. Whenever you are trying to derive a conditional, either as the main conclusion or as part of a subordinate derivation, always assume the antecedent and try to derive the consequent, on this pattern: SHOW N R

SHOW R

ACD

This is nearly always the best way to derive a conditional, although sometimes it may not be obvious how to carry out this strategy. Use exploitation rules to derive things from premises and other earlier accessible lines. Typically, you will need to use all the premises to derive the conclusion. If you are stuck, therefore,

It would be possible to create a mechanical procedure for creating derivations in D1, but it would be complicated, and the derivations would generally be longer than those we can think up ourselves. In the system for quantificational logic developed later in the book, such a system would not be feasible. 107

19

Introductory Logic it is a good idea to check which premises you have not used, and see whether exploitation rules can be used on the premises you have not yet used. Similarly, if there are useful-looking earlier accessible lines of the derivation that you have not used, look for ways to use exploitation rules on those lines. Look for ways to use introduction rules to derive the conclusion (if it is not atomic). If your conclusion is a conjunction, try to derive both conjuncts and use &I. If your conclusion is a biconditional, you will nearly always use I; usually you will get the two conditionals you need for that rule by successive subderivations using CD. Look for sentence letters in common between available lines; this may suggest ways to proceed. When you find sentence letters in common, you may see how to use exploitation rules. Or you may see what you need to derive to complete the derivation. When you cant see what else to do, assume the negation of what you seek and try to use ID. Lots of derivations use ID, and sometime ingenuity is required in the use of it. Consider the following derivation, for instance. 1. A ~B 2. SHOW ~A w ~B P

We might hope to get to the conclusion using w-Introduction, but a quick check with a truth table will show that neither ~A nor ~B follows from line 1. Hence we will have to use indirect derivation. This is a very common strategy when trying to derive a disjunction.

1. A → ~B            P
2. SHOW ~A w ~B
3.    ~(~A w ~B)     AID
4.    SHOW !

How shall we derive a contradiction? A little truth-value analysis may help. Line 3 says that neither A nor B is false; hence it must imply that they are both true. In particular, it must imply that A is true. If we could show this, we could use it with line 1 and →-Exploitation. So this looks like a promising line. Let us try to get A from line 3. We will set this as a subordinate goal, and since no obvious strategy suggests itself for getting A, we will try indirect derivation.

Derivations I 1. 2. 3. 4. 5. 6. 7. AB SHOW ~A w ~B ~(~A w ~B) SHOW ! SHOW A ~A SHOW ! P AID AID

Now it is easy to get our contradiction by wI: 1. 2. 3. 4. 5. 6. 7. 8. 9. A ~B SHOW ~A w ~B ~(~A w ~B) SHOW ! SHOW A +)))))))))))))), *~A * *SHOW ! * *+)))))))))), * **~A w ~B * * **! * * *.))))))))))- * .))))))))))))))P AID

AID DD 6, wI 3, 8, !I

With A available, we can use line 1 and finish the derivation: 1. A ~B 2. SHOW ~A w ~B +)))))))))))))))))))), 3. *~(~A w ~B) * * 4. *SHOW ! *+)))))))))))))))), * 5. **SHOW A * * **+)))))))))))), * * * * * 6. ***~A 7. ***SHOW ! * * * ***+)))))))), * * * 8. ****~A w ~B * * * * 9. ****! * * * * ***.))))))))- * * * **.))))))))))))- * * 10. **~B * * 11. **~A w ~B * * 12. **! * * *.))))))))))))))))- * .))))))))))))))))))))P ID AID DD ID AID DD 6, I 3, 7, !I 1, 5, 6E 10, wI 3, 11, !I


Introductory Logic This derivation is somewhat difficult, but with practice you will be able to discover such derivations. Exercises 4-6: Provide derivations showing the following valid: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 ~P (S w T), ~S & ~T, P P & Q, P Q ~P w Q , P Q ~~P, P P, ~~P P Q, ~Q ~P P R, Q R, P w Q , R ~(S & ~T), S T ~P, P Q ~(P Q), ~P Q A w B, B w A (K & H) ~R, ~(~K w ~R), ~H ~R M B w H, B M, H M, M C L, ~C L, L



Chapter 5: Derivations II
In this chapter we will learn additional rules to make derivations simpler. We will also learn how to use derivations to show sets of sentences inconsistent and pairs of sentences equivalent. Finally, we shall review some general questions about derivations. 1: Derived Rules Modus Tollens is a derived rule of inference. In this section we will learn some derived rules for D1. Derived rules provide short cuts: they enable us to do in one step what would otherwise require many steps. For instance, here is a common sequence of steps in a derivation: PQ ~Q ... n. SHOW ~P +)))))))))))))))), n+1. *~~P * n+2. * SHOW ! * *+)))))))))))), * n+3. ** SHOW P * * **+)))))))), * * n+4. *** ~P * * * n+5. *** SHOW ! * * * ***+)))), * * * * * * * n+6. ****! ***.))))- * * * **.))))))))- * * n+7. ** Q * * n+8. ** ! * * *.))))))))))))- * .))))))))))))))))j. k.

ID AID DD ID AID DD n+1, n+4, !I j, n+3, E k, n+7, !I

This sequence of steps comes up often enough that we will find it handy to abbreviate it. One way to do this is to introduce a derived rule of inference. We shall call this rule Modus Tollens;20

20 Modus Tollens is Latin for "the method of denying." The name is traditional.

here is its diagram:

MT      φ → ψ
        ~ψ
        ----------
        ~φ

This rule corresponds to our ordinary ways of reasoning. When I know that if the battery is dead the car will fail to crank when I turn the ignition key, but I also know that the car does not fail to crank when I turn the key, then I can conclude that the battery is not dead. We can use this rule in derivations like this one:

1. M → S            P
2. M w E            P
3. ~S               P
4. SHOW E           DD
5.    ~M            1, 3, MT
6.    E             2, 5, wE
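Since MT is a derived rule, it should never lead from truths to a falsehood, and the truth-table method of Chapter 3 can confirm this by brute force. The following short Python sketch (purely an illustration of that semantic check, not part of the system D1) runs through all assignments of truth values to P and Q:

from itertools import product

def implies(p, q):
    # A conditional is false only when its antecedent is true
    # and its consequent is false.
    return not (p and not q)

# Modus Tollens: whenever P -> Q and ~Q are both true,
# the conclusion ~P must be true as well.
for P, Q in product([True, False], repeat=2):
    if implies(P, Q) and not Q:
        assert not P, "counterexample to MT found"

print("MT never leads from true premises to a false conclusion.")

The same kind of check could be run for any of the derived rules introduced in this chapter.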

Repetition is another derived rule of inference. It allows us to repeat any earlier accessible line. Its diagram looks like this:

R       φ
        ----------
        φ

You might well wonder when we would need to use a rule like this. In fact, we seldom do, but occasionally it is helpful when using CD. Because CD requires that we obtain the consequent of the conditional on the last line of the subderivation, it can be handy to have the Repetition rule in cases where we already have the consequent before starting the CD derivation, such as this simple one:

1. Q                P
2. SHOW P → Q       CD
3.    P             ACD
4.    SHOW Q        DD
5.       Q          1, R


This rule does not enable us to complete any derivations that could not be completed without it. Like the Modus Tollens rule, it merely makes derivations shorter (and a bit easier). Without the rule, it would always be possible to use steps like the following to repeat an earlier line:

i.    P
      ...
n.    P & P         i, &I
n+1.  P             n, &E

The hypothetical syllogism rule is another derived rule. Sometimes we wish to derive a conditional from two others in the following pattern:

1. A → B            P
2. B → C            P
3. SHOW A → C

Now we can complete this proof easily, thus:

1. A → B            P
2. B → C            P
3. SHOW A → C       CD
4.    A             ACD
5.    SHOW C        DD
6.       B          1, 4, →E
7.       C          2, 6, →E

But it is handy to be able to avoid the conditional derivation in this case and simply get line 3 directly from lines 1 and 2. The traditional rule that allows us to do this is called hypothetical syllogism; its diagram looks like this:

HS      φ → ψ
        ψ → χ
        ----------
        φ → χ


Using this rule, the derivation above could be done in four lines:

1. A → B            P
2. B → C            P
3. SHOW A → C       DD
4.    A → C         1, 2, HS

There are two forms of the Separation of Cases rule. Another handy derived rule is called Separation of Cases (also sometimes called Constructive Dilemma). The general form of the Separation of Cases rule looks like this:

SC      φ w ψ
        φ → χ
        ψ → χ
        ----------
        χ

This rule corresponds to ordinary inference patterns. Suppose I know that Alonzo is either studying or sleeping. I also know that if he is sleeping, he will not wish to be disturbed, and if he is studying, he will not wish to be disturbed. I can certainly conclude that he will not wish to be disturbed. If you provided a derivation for Exercise 13 from Exercises 4-6, you will see how you could do without this rule, and also how inconvenient it would be to do so. We can use the rule in derivations such as this one:

1. W w (G & N)            P
2. W → N                  P
3. SHOW N                 DD
4.    SHOW (G & N) → N    CD
5.       G & N            ACD
6.       SHOW N           DD
7.          N             5, &E
8.    N                   1, 2, 4, SC
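The Alonzo example can be checked semantically as well. Here is a minimal Python sketch (again only an illustration of the truth-table semantics; the letters S, Z, and D for "studying," "sleeping," and "does not wish to be disturbed" are my own labels, not the text's):

from itertools import product

def implies(p, q):
    return not (p and not q)

# Separation of Cases: from S v Z, S -> D, and Z -> D, infer D.
for S, Z, D in product([True, False], repeat=3):
    if (S or Z) and implies(S, D) and implies(Z, D):
        assert D, "counterexample to SC found"

print("No assignment makes the premises true and D false.")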


A special case of SC arises when the first premise is of the form φ w ~φ. Since this is a theorem of D1 (we proved it at the end of Chapter 4, §4), it need not be explicitly present in order to use SC; in fact, we have a second form of SC, which we may call SC2:

SC2     φ → ψ
        ~φ → ψ
        ----------
        ψ

If you supplied a derivation for exercise number 14 of Exercises 4-6, you will have a good idea how we could do without this rule, and how inconvenient it would be to do so. This rule corresponds to ordinary reasoning as well. If the accountant knew about the embezzlement, he is culpable and should be dismissed; if he didn't know, he is incompetent and should be dismissed. Therefore, he should be dismissed.

1.  A → (G & D)           P
2.  ~A → (I & D)          P
3.  SHOW D                DD
4.     SHOW A → D         CD
5.        A               ACD
6.        SHOW D          DD
7.           G & D        1, 5, →E
8.           D            7, &E
9.     SHOW ~A → D        CD
10.       ~A              ACD
11.       SHOW D          DD
12.          I & D        2, 10, →E
13.          D            12, &E
14.    D                  4, 9, SC2

This derivation looks more complicated than it is. The basic strategy, using CD to get the two conditionals needed for SC2, is not difficult to discover, and carrying it out is easy. SC2 can also

provide a handy way to tackle derivations that don't present any obvious strategy. For instance, consider this theorem:

1. SHOW (P → Q) w (P → ~Q)

It is not obvious how to tackle this derivation, but sometimes one can choose a strategic sentence letter and use SC2. Here is a sketch showing how this strategy might work in this case:

1. SHOW (P → Q) w (P → ~Q)
2. SHOW Q → [(P → Q) w (P → ~Q)]
   ...
n. SHOW ~Q → [(P → Q) w (P → ~Q)]
   ...
   (P → Q) w (P → ~Q)                2, n, SC2

The reader should have no difficulty finishing the derivation using this strategy.

Finally, we shall add a second form of indirect derivation. In the form we are familiar with, the assumption we make for indirect derivation must consist of the sentence on the preceding SHOW line with a tilde (~) in front of it. This is so even if that sentence already begins with a tilde; in this case the assumption will have to have two tildes. Our new form of the rule will allow us to drop the tilde to form the assumption. (We will still use the annotation AID in this case.) Its diagram looks like this:

ID      SHOW ~φ
           φ              AID
           SHOW !
              .
              .
              !

This new form of indirect derivation will not enable us to complete any derivations we could not complete without it. For, as the following derivation illustrates, we could always achieve the same result by a more cumbersome method.

1. P → (S & ~S)           P
2. SHOW ~P                ID

3.    ~~P                 AID
4.    SHOW !              DD
5.       SHOW P           ID
6.          ~P            AID
7.          SHOW !        DD
8.             !          3, 6, !I
9.       S & ~S           1, 5, →E
10.      S                9, &E
11.      ~S               9, &E
12.      !                10, 11, !I

Some well-known proofs in mathematics use this version of the rule. For instance, Euclid's famous proof that there is no largest prime begins with the assumption that there is a largest prime, say p. The proof proceeds by considering the number we get when we take the product of p with all the numbers less than it and add 1 to this product. This number is obviously bigger than p, and it is not divisible by any number other than 1 that is less than or equal to p (since division by any such number leaves a remainder of 1). Hence this number is either itself a prime larger than p, or else is divisible by some prime number less than itself but larger than p. Either way there is a prime larger than p, which contradicts the assumption that p is the largest prime. Since the assumption that there is a largest prime leads to a contradiction, it must be false; that is, it is not the case that there is a largest prime. (Did you notice that SC was also used in this proof?)

Although it may seem as though our system becomes more complicated with more rules, it actually is easier to construct derivations with the larger set of rules. The derivations are shorter, because we have the shortcuts of the new rules, and they offer us additional strategies for constructing derivations.

Exercises 5-1: Indicate which of the sentences can be derived from the given sentences with the given rule (using no other rules):

Introductory Logic 1. From A B, C ~D, ~B, D, and ~C, using Modus Tollens, you may infer: a) ~A b) ~~D c) A d) ~C e) ~~A f) D 2. From E w ~F, E G, ~F G, E H, F H, and G w H using Separation of Cases (first form, SC) you may infer: a) G b) H c) ~F w E d) ~H 3. From J (R & S), S w R, ~S w R, ~J (R & S), using Separation of Cases (second form, SC2) you may infer: a) R b) S c) S & R d) R & S Use the new rules in constructing correct derivations that show the following valid: 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. (Q w R) S, ~S, ~(Q w R) (T w R) N, Q & ~N, ~(T w R) P (S & T), (S & T) J, P J R I, I S, S R, S R S & T, M S R w M, W & ~M, F R Q w L, L T, Q T, T P (D w Y), R w P, ~R & (D Q), Y Q, Q T (W O), ~T L, L (W O), W O K S, P & ~S, ~K L (S w G), ~S, ~(L & ~G)

2: Showing Inconsistency and Equivalence with Derivations

Derivations may be used to show sentences equivalent or sets

of sentences inconsistent. So far we have only used derivations to show arguments valid, but derivations can also be used to show sentences equivalent or sets of sentences inconsistent. To show two sentences equivalent, we derive them from each other. That is,

To show φ is equivalent to ψ, complete the following two derivations:

1. φ              P                    1. ψ              P
2. SHOW ψ                              2. SHOW φ

Once you have completed these two derivations, you have shown the two sentences equivalent. Let us do an example. We can show P → Q equivalent to ~P w Q by the following two derivations:

1.  P → Q                      P
2.  SHOW ~P w Q                DD
3.     SHOW P → (~P w Q)       CD
4.        P                    ACD
5.        SHOW ~P w Q          DD
6.           Q                 1, 4, →E
7.           ~P w Q            6, wI
8.     SHOW ~P → (~P w Q)      CD
9.        ~P                   ACD
10.       SHOW ~P w Q          DD
11.          ~P w Q            9, wI
12.    ~P w Q                  3, 8, SC2

1.  ~P w Q                     P
2.  SHOW P → Q                 CD
3.     P                       ACD
4.     SHOW Q                  DD
5.        SHOW ~~P             ID
6.           ~P                AID
7.           SHOW !            DD
8.              !              3, 6, !I
9.        Q                    1, 5, wE
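The truth-table method of Chapter 3 gives an independent, semantic check of the same fact: P → Q and ~P w Q receive the same truth value on every assignment. A small Python sketch of that check (an illustration only, not part of D1):

from itertools import product

def implies(p, q):
    return not (p and not q)

# P -> Q and ~P v Q should agree on every row of the truth table.
for P, Q in product([True, False], repeat=2):
    assert implies(P, Q) == ((not P) or Q)

print("P -> Q and ~P v Q are truth-functionally equivalent.")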

The first of the derivations further illustrates the use of SC2. The second of these is fairly straightforward, though later in this chapter we will learn how to simplify it even further. Together, these derivations show that the two sentences are equivalent.

To show a set of sentences inconsistent, you must derive ! (i.e., a contradiction) from premises all of which are in the set to be shown inconsistent. Suppose, for instance, we wish to show the following set inconsistent:

P w Q
~(~P → Q)

To show this set inconsistent, we must complete the following derivation:

1. P w Q             P
2. ~(~P → Q)         P
3. SHOW !

A promising strategy here is to use premise 1 to derive the negation of premise 2, thus:

1. P w Q                 P
2. ~(~P → Q)             P
3. SHOW !                DD
4.    SHOW ~P → Q        CD
5.       ~P              ACD
6.       SHOW Q          DD
7.          Q            1, 5, wE
8.    !                  2, 4, !I
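Deriving ! shows the set inconsistent; the semantics of Chapter 3 agrees, since no assignment of truth values makes both members of the set true. A brief Python sketch of that semantic check (illustration only):

from itertools import product

def implies(p, q):
    return not (p and not q)

# The set {P v Q, ~(~P -> Q)}: is there any row making both members true?
satisfiable = any(
    (P or Q) and not implies(not P, Q)
    for P, Q in product([True, False], repeat=2)
)
print("Satisfiable?", satisfiable)   # expected output: Satisfiable? False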


Exercises 5-2: Show the following pairs of sentences equivalent by deriving each member of the pair from the other. 15. 16. 17. 18. 19. 20. 21. 22. QX P PQ ~(P w Q) PwQ M w (N & P) ZU B (B & Y) (Q X) & (X Q) ~~P ~P w Q ~P & ~Q QwP (M w N) & (M w P) ~U ~Z BY

Show the following inconsistent by deriving ! from them. 23. 24. 25. 26. P (Q & R), ~R & Q, Q P M T, M w T, M ~T R (H A), ~A H, H & R (M w W) C, ~C & J, ~W ~J 3: Replacement Rules: Replacement rules are a different sort of rule. So far, all our rules have operated on whole lines of a derivation and have allowed inferences in one direction only. Replacement rules are different. They license replacement of whole sentences on a line or of sentential parts of sentences on a line by equivalent sentences, and they can work backwards as well as forwards. Lets see how such rules work by considering an example, the Double Negation Replacement rule. The Diagram of this rule looks like this: DN

N :: ~~N

This rule licenses us to replace any instance of the left side of the double colon (::) by the corresponding instance of the right hand side, and vice versa. This rule will allow us to replace any of the sentences on the left below with the corresponding sentence to its 121

Introductory Logic right (or vice versa): P P ~~Q ~~(P w Q) P (N w S) ~(R J) ~~P PQ PwQ P (~~N w S) ~~~(R J)

We can do this because the sentences on each side are truth-functionally equivalent to each other. For a sample derivation using this rule, we can simplify the derivation in the previous section. 1. ~P w Q 2. SHOW P Q +)))))))))))), 3. *P * 4. *SHOW Q * *+)))))))), * 5. **~~P * * 6. **Q * * *.))))))))- * .))))))))))))P CD ACD DD 3, DN 1, 5, wE

Steps 5 and 6 are common ones; we cannot get line 6 from line 1 and 3 directly using wE, because line 3 is not a tilde followed by one of the disjuncts of line 1. But we use DN to get a line that we can use with wE and line 1. We often do the same with MT, as in this simple derivation: 1. P ~Q 2. Q 3. SHOW ~P +))))))))), 4. *~~Q * 5. *~P * .)))))))))P P DD 2, DN 1, 4, MT

All replacement rules work like this. They give us pairs of equivalent statement forms, and they license us to derive new lines by replacing any line or part of a line that is an instance of one of the two forms by the corresponding instance of the other form. Some replacement rules are useful for deriving things from lines beginning with a tilde. When a premise or earlier line of a derivation begins with a tilde and is more complex than the 122

Derivations II negation of a sentence letter, we often need to do a round-about indirect derivation to derive anything from the line. Here is a simple example: 1. ~(F w Q) 2. SHOW ~F +)))))))))))), 3. *F * 4. *SHOW ! * *+)))))))), * 5. **F w Q * * 6. **! * * *.))))))))- * .))))))))))))P ID AID DD 3, wI 1, 5, !I

A pair of replacement rules will help us shorten such derivations. They are usually known as De Morgans Laws,21 and their diagrams look like this: DeM ~(N w R) :: ~N & ~R ~(N & R) :: ~N w ~R Using De Morgans Laws we can shorten the preceding derivation to two simple steps: 1. ~(F w Q) 2. SHOW ~F +))))))))), 3. * ~F & ~Q * 4. * ~F * .)))))))))P DD 1, DeM 3, &E

De Morgans Laws are of no help when dealing with conditionals or biconditionals; for them we introduce two additional replacement rules: ~ ~ ~(M R) :: N & ~R ~(N R) :: ~N R

We can illustrate both of these in the following derivation:


21

After the British mathematician and logician Augustus De Morgan (1806-1871). 123

Introductory Logic 1. ~(U D) 2. ~(D P) 3. SHOW P +))))))))), 4. *U & ~D * 5. *~D * 6. *~D P * 7. * P * .)))))))))-

P P DD 1, ~ 4, &E 2, ~ 5, 6, E

Some replacement rules reflect fundamental properties of the logical operators. You are probably familiar with the fact that addition is commutative; we might express that fact thus: x+y=y+x A similar property is the basis for the following three replacement rules, which we will call commutativity: Com

N & R :: R & N N w R :: R w N N R :: R N

Note that there is no replacement rule of commutativity for . (Why not?) We can illustrate all three commutativity rules in the following simple derivation: 1. [(B w Q) & M] W 2. SHOW W [M & (Q w B)] +))))))))))))))))))))), 3. * W [(B w Q) & M] * 4. * W [M & (B w Q)] * 5. * W [M & (Q w B)] * .)))))))))))))))))))))P DD 1, Com 3, Com 4, Com

Another group of three rules may be called Associativity, again by analogy with the familiar arithmetical property: Assoc.

N & (R & 2) :: (N & R) & 2 N w (R w 2) :: (N w R) w 2 N (R 2) :: (N R) 2

We can illustrate them in this simple derivation: 124

(G & A) & F (G F) ~R (R w M) w H SHOW M w H +))))))))))))))), 5. *G & (A & F) * 6. *G * 7. *G (F ~R) * 8. *F ~R * 9. *F * 10.*~R * 11.*R w (M w H) * 12.*M w H * .)))))))))))))))-

1. 2. 3. 4.

P P P DD 1, Assoc 5, &E 2, Assoc 6, 7, E 1, &E 8, 9, E 3, Assoc 10, 11, E

Derivations II

Next we have the distributive rules. You are familiar with the arithmetic fact that multiplication distributes over addition: x (y + z) = xy + xz Similarly, disjunction distributes over conjunction, and conjunction distributes over disjunction. Dist

N & (R w 2) :: (N & R) w (N & 2) N w (R & 2) :: (N w R) & (N w 2)

We can illustrate this in the following derivation: ~(P & Q) P & (Q w R) (P & R) [S w (T & M)] SHOW S w T +)))))))))))))))))), 5. *(P & Q) w (P & R) * 6. *P & R * 7. *S w (T & M) * 8. *(S w T) & (S w M) * 9. *S w T * .))))))))))))))))))1. 2. 3. 4. P P P DD 1, Dist 1, 5, wE 3, 6, E 7, Dist 8, &E

We also have rules representing fundamental equivalences involving the connectives. We have pointed out before that N R is most accurately translated Either not N or R. The following two rules reflect that:

125

Introductory Logic

N R :: ~N w R ~N R :: N w R

The second of these rules is the basis for a new strategy for deriving disjunctions: assume the negation of one of the disjuncts and try to derive the other. We can illustrate it in this derivation: 1. ~(P Q) 2. SHOW P w Q +)))))))))))))))))), 3. * SHOW ~P Q * *+)))))))))))))), * 4. **~P * * 5. **SHOW Q * * **+)))))))))), * * 6. ***~P Q * * * 7. ***Q * * * **.))))))))))- * * *.))))))))))))))- * 8. * P w Q * .))))))))))))))))))P DD CD ACD DD 1, ~ 4, 6, E 3, w

One final rule is included because it is traditional. The contrapositive of a conditional is the conditional you get by exchanging antecedent and consequent and negating both. Since the contrapositive of a conditional is equivalent to it, we add the following Contraposition rule: Ctr

N R :: ~R ~N

We can illustrate the use of this rule with the following derivation: 1. ~(~H & V) 2. SHOW V H +)))))))))))), 3. *~~H w ~V * 4. *~H ~V * 5. * V H * .))))))))))))P DD 1, DeM 3, w 4, Ctr

The addition of all these replacement rules to our stock of rules does not enable us to complete any derivations that cannot be completed using only the basic rules of Chapter 4, but this fact is not at all easy to prove. It is a consequence of two facts. The first

Derivations II is that the replacement rules are sound, that is, they can never lead us to infer a false sentence from a true one. You can probably convince yourself that this is true by verifying with truth tables that all the replacements involve replacing sentential parts of sentences by equivalent parts. (Check with a truth table to verify the equivalence if necessary.) Hence only valid arguments have derivations using the new rules. The second fact is quite difficult to prove, and we will not attempt to prove it. It is that the rules of Chapter 4 are complete, that is they enable us to provide derivations for every valid argument formulated in L1. There is more about this in Section 6 of this chapter, but its proof is beyond the scope of this book. Exercises 5-3: Show the following valid: 27. 28. 29. 30. 31. 32. 33. 34. 35. 36. 37. 38. 39. 40. 41. 42. 43. 44. P w ~L, L & (Z ~P), ~~L & ~Z R (~M ~E), M S, E & R, S (H w T) ~S, S & (T w Q), ~H & Q (O & L) (T w K), P w (~T & ~K), ~P & L, ~(O w P) ~(C T), H w T, H (M Q) S, ~(T S), T & ~Q S (A N), Q w (~A N), ~Q & (~S T), T (B K) R, ~(R w K), B (P & Q) (H w X), G (Q & P), G (X w H) S (~P & Q), S (~Q w P), S H A w (B w C), (C w A) w B M w [(R & H) & K], M w [(K & R) & H] B w (~ J & W), (J & W) w (J & H), B M (Z w W), ~K w (G w Z), K, M [Z w (W & G)] ~I ~(E w ~X), I w X (C w R) N, S (~C R), S N P (Q R), ~R w S, P & (S ~Q), ~Q (~H ~R) (M w T), S (R H), S (M w T) 4: Derivation Strategies We will have no mechanical recipe for constructing derivations. Derivations are constructed by trial and error and by using ingenuity. Although we have no mechanical method for constructing derivations, some general advice is useful for the 127

Introductory Logic student. First, dont be afraid to start a derivation without seeing how it will end. In all but the simplest derivations, you generally start out doing promising things without knowing how the derivation will be finished. You start off in a promising direction and keep going until you either complete the derivation or come to a dead end where there is nothing else to do. If you reach a dead end, you go back to an earlier point at which you could have tried something different and try that different thing. Finding a correct derivation often requires erasing or crossing out and starting over. Scrap paper and a pencil with a good eraser are useful tools. Second, be sure to try all possible routes to a complete derivation before giving up. In logic, as elsewhere in life, a good deal of problem-solving is simply a matter of systematically trying all the possibilities. Third, if you are stuck in the middle of a derivation, look around at all the accessible lines for rules that you can use. Look for lines that have sentence letters in common to see whether they can somehow be useful together. Following this advice requires familiarity with the rules, of course. This comes mainly with practice. Fourth, remember that in most derivations you will need to use all the premises. When stuck in the middle of a derivation, check to see that you have in fact made use of all the premises. If not, try to find a way to use those that are as yet unused. Guidance in completing derivations comes from setting goals. When one begins a derivation, one automatically has one goal: it is given by the first SHOW line of the derivation. But it is also useful to set up subsidiary goals. For instance, consider the following derivation: 1. ~D & ~W 2. SHOW D W P

It is reasonable to expect that the derivation will be completed by using I. To use this rule, you will need to have available D W and W D. So it is plausible to set these as subsidiary goals, and to enter SHOW lines for each goal: 1. ~D & ~W 2. SHOW D W 3. SHOW D W 128 P

.. SHOW W D

Derivations II

We have not entered a line number for the second SHOW line, and we have left a space above it because we need to complete the first subderivation, beginning with SHOW D W, before we begin the second one. The reader should have no difficulty completing the derivation. Goals are often achieved using the introduction rules for the connectives. If you are trying to derive a non-atomic sentence, you will often use the introduction rule for the main connective of the sentence. Sometimes it will be helpful to set subsidiary goals using SHOW lines. Here are some suggestions: To derive something of the form N R, assume the antecedent, N, and use CD. To derive something of the form N & R, try to derive both N and R and use &I. To derive something of the form N R, derive N R and R N and use I. Derive each of the conditionals using CD. Suggestions for goals of the form N w R and ~N are more complicated. Sometimes N w R can be inferred using wI, but often not. You can check whether wI can be used by doing a quick truth-table check to see whether either disjunct alone follows from the premises. If so, try to derive that disjunct and use wI. If neither disjunct alone follows, you will have to derive the disjunction either by an indirect derivation or by use of the w replacement rule. For instance, suppose we wish to complete the following: 1. ~T M 2. SHOW M w T P

First we check to see whether either M or T follows from the premise. But since the premise can be true when M is false, and also when T is false, neither alone can be derived from the premise, so we cannot expect to complete the derivation using wI. If we use an indirect derivation, however, we can complete it thus:

1. ~T → M            P

Introductory Logic 2. SHOW M w T +)))))))))))), 3. *~(M w T) * 4. *SHOW ! * *+)))))))), * 5. **~M & ~T * * 6. **~M * * 7. **~T * * 8. **M * * 9. **! * * *.))))))))- * .))))))))))))-

ID AID DD 3, DeM 5, &E 5, &E 1, 7, E 6, 8, !I

Alternatively, we can complete the derivation using the w rule: 1. ~T M 2. SHOW M w T +))))))))))))))), 3. *SHOW ~M T * *+))))))))))), * 4. **~M * * 5. **SHOW T * * **+))))))), * * 6. ***~~T * * * 7. ***T * * * **.)))))))- * * *.)))))))))))- * 8. *M w T * .)))))))))))))))or even more simply: 1. ~T M 2. SHOW M w T +)))))))))), 3. *T w M * 4. *M w T * .))))))))))P DD 1, w 3, Com P DD CD ACD DD 1, 4, MT 7, DN 3, w

Either method is as good as the other; the choice between them is a matter of taste. Conclusions beginning with "~" can always be derived using indirect derivations. Sometimes, however, it is simpler to derive the conclusion directly, if it is contained in the premises and you can get it using exploitation rules. We will show how to do a derivation in several ways to illustrate this: 130

Derivations II 1. ~(P & Q) w (R S) 2. R & ~S 3. SHOW ~(P & Q) +))))))))))))))))))))), 4. *P & Q * 5. * SHOW ! * *+))))))))))))))))), * 6. **~~(P & Q) * * 7. **R S * * 8. **R * * 9. **S * * 10.**~S * * 11.**! * * *.)))))))))))))))))- * .)))))))))))))))))))))1. ~(P & Q) w (R S) 2. R & ~S 3. SHOW ~(P & Q) +)))))))))))))))))), 4. *SHOW ~(R S) * 5. *+)))))))))))))), * 6. **R S * * 7. **SHOW ! * * **+)))))))))), * * 8. ***R * * * 9. ***S * * * 10.***~S * * * 11.***! * * * **.))))))))))- * * *.))))))))))))))- * 12.*~(P & Q) * .))))))))))))))))))P P DD ID AID DD 2, &E 6, 8, E 2, &E 9, 10, !I 1, 4, wE P P ID AID DD 4, DN 1, 6, wE 2, &E 7, 8, E 2, &E 9, 10, !I

1. ~(P & Q) w (R S) P 2. R & ~S P 3. SHOW ~(P & Q) DD +)))))))))))))))))), 4. *~~R & ~S * 2, DN 5. *~(~R w S) * 4, DeM 6. *~(R S) * 5, w 7. *~(P & Q) * 1, 6, wE .))))))))))))))))))All three derivations are equally good. The last illustrates the use of the replacement rules. In order to reach goals one must use the rules to infer things from accessible lines. Which rules to use depends on what lines are accessible. In general, the rules one can use on a line depend 131

on the main connective of the line.

If the main connective is        use
        &                        &E
        w                        wE
        →                        →E
        ↔                        ↔E

Note that in the cases of w, →, and ↔, another sentence is needed to use the exploitation rules. If there is no appropriate accessible line, try to derive one, by starting a subderivation to SHOW it, if necessary. If the main connective of a line is ~, then using it will depend on what follows the ~. If the following sentence is atomic, try to use the line with MT, wE, or in an indirect derivation. Otherwise, if the main connective of what follows the ~ is

        ~        use DN
        &        use DeM
        w        use DeM
        →        use ~→
        ↔        use ~↔

Using these rules under these circumstances will usually simplify what you have and make it easier to continue the derivation. Exercises 5-4 Show the following valid: 45. 46. 47. 48. 49. 50. 51. 52. 53. 132 L (E w H), ~E (~L w H) T w (B & Q), ~H ~T, ~B w J, H w J (A M) w (B ~M) Q ~G, (G Q) w (S R), ~(S Q) R & G Q S, Q w T, T (Q w S) (S & T) w (S & Q) M w (~R & T), ~(R S), ~(S w Q), M & ~Q (P Q) R, (S ~Q) T, (R w ~T) (S R) B ~B, J ~C F G, F w G, F & G

Derivations II Show the following pairs of sentences equivalent. 54. 55. 56. P (Q R) K (N & F) AB (P Q) (P R) (K N) & (K F) (A & B) w (~A & ~B)

Show the following sets of sentences inconsistent. 57. 58. (P Q) (S & T), ~S, Q w ~P (P & Q) R, R ~R, ~R Q, P 5: D1 Derivations and English proofs Derivations in D1 correspond approximately to the way proofs are given in English. The system D1 is one of the deductive systems called natural deduction systems. Natural deduction systems are so-called because they are supposed to model, to some extent, the way we present logical proofs when we are reasoning correctly and presenting careful arguments for others, as when we are giving proofs of mathematical theorems, for instance. Now you may have occasionally presented proofs that you might think looked somewhat like our proofs. Usually high school geometry, for instance, presents proofs of theorems in a formal style with numbered lines and justifications in a separate column to the right. But most mathematical proofs dont look that way. If you open a mathematical text or journal, you will usually find proofs given in a more discursive manner, in paragraphs without numbered lines and justifications. And even proofs in high school geometry texts do not have SHOW lines and boxes. We can get an idea of how our derivations relate to proofs we find in mathematics by translating our derivations into English. Consider, for instance, the following derivation. 1. A w ~B 2. ~B C 3. SHOW ~A C +)))))))))))), 4. *~A * 5. *SHOW C * *+)))))))), * 6. **~B * * 7. **C * * *.))))))))- * .))))))))))))P P CD ACD DD 1, 4, wE 2, 6, E

133

Introductory Logic In English, we might present this derivation like this. We are given that either A or not-B and that if not-B then C. We are to show that if not-A then C. So assume that notA., and try to show C. Then obviously, since we were given that either A or not-B, we must have not-B. But since were given that if not-B then C, we have C. QED.22 In the English version, we typically dont cite the rules we use, taking them to be obvious. We also dont number the lines, and so we cant refer to earlier lines by number, though in complex mathematical proofs, it is typical to label crucial intermediate results for later reference. Heres a derivation using indirect derivation. 1. B A 2. ~A 3. SHOW ~B & ~A +))))))))))))))), 4. *SHOW ~B * *+))))))))))), * 5. **B * * 6. **SHOW ! * * **+))))))), * * 7. ***A * * * 8. ***! * * * **.)))))))- * * 9. **~B & ~A * * *.)))))))))))- * .)))))))))))))))P P DD ID AID DD 1, 5, E 2, 7, !I 2, 4, &I

In English, we could present the argument this way. We are given that B is true if and only if A is, and that A is not true. We want to show that both not-B and not-A are true. First we show that not-B is true by an indirect proof. We assume B and derive a contradiction. From B, with the premise that B if and only if A, it follows that A. But this

QED stands for the Latin phrase, quod erat demonstrandum, meaning which was to be demonstrated. It is traditional to use it to signal the end of proofs. 134

22

Derivations II contradicts the premise that not-A. We have found our contradiction, so we conclude that not-B. But that together with the premise that not-A yields not-B and not-A. QED. Finally, we present a derivation of a biconditional. 1. ~A w B 2. ~B w A 3. SHOW A B +))))))))))))))))), 4. *SHOW A B * *+))))))))))))), * 5. **A * * 6. **SHOW B * * **+))))))))), * * 7. ***~~A * * * 8. ***B * * * **.)))))))))- * * *.)))))))))))))- * 9. *SHOW B A * *+))))))))))))), * 10. **B * * 11. **SHOW A * * **+))))))))), * * 12. ***~~B * * * 13. ***A * * * **.)))))))))- * * *.)))))))))))))- * 14. *A B * .)))))))))))))))))P P DD CD ACD DD 5, DN 1, 8, wE CD ACD DD 10, DN 2, 12, wE 3, 9, I

In English, the argument might look like this. We are given that either A is false or B is true, and that either B is false or A is true. We want to show that A is true if and only if B is. We begin by showing that if A is true, B is. Suppose A is true. Then its false that A is false. But since were given that either A is false or B is true, B must be true. So weve shown that if A is true B is. Now we show that if B is true, A is. Suppose B is true. Then not-B is not true. But since were given that either not-B is true or A is, A must be true. So weve shown that if B is true, A is. Since weve shown that if B is true, A is and if A is true B is, weve shown that A is true if and only if B is. [Note: a mathematics text would probably omit this last sentence as 135

Introductory Logic too obvious to mention.] QED. A comparison of these derivations with their English equivalents will show how natural our derivations are. Our derivation differ in that we use only symbols, set down each step on a separate line, and specifically refer to the rules and the previous steps we employ. But they have the same overall structure as the arguments we might find in mathematics texts. Two features of mathematical arguments are not modeled by our logical arguments, though if we wanted to complicate our logic we could easily add them. We dont refer to previously proved theorems, and we dont make use of definitions of terms. Exercises 5-5 For each of the following arguments, construct a derivation in D1 and then give an equivalent in English. 59. 60. 61. 62. A & (B w C), ~C, B A B, A (A & B) A w ~B, B w A, A ~A & ~B, A B 6: The Deductive System D1 The deductive system D1 includes all the rules we have so far learned and all the derivation rules. It includes the inference rules &I, &E, wI, wE, E, I, E, MT, R, SC, and SC2; the replacement rules DN, DeM, ~, ~, Com., Assoc. Dist. and w, and the derivation rules, DD, CD, and ID (both forms), as well as the associated rules allowing assumptions (ACD, AID). The rules are stated in the appendix to this chapter, and their diagrams are provided. D1 provides a way of showing arguments valid, sentences equivalent, and sets of sentences inconsistent that is different from the method of truth tables presented in Chapter 3. It is therefore natural to ask what the relation between these two methods is. We would hope that two things would be true. First we would like to be sure that anytime there is a derivation of a sentence from 136

a set of premises, there is no assignment of truth values that makes the premises true and the conclusion false. In other words, we would like to be sure that our derivation system will not lead us from truths to falsehoods.

A deductive system for the language L1 is sound if and only if whenever a sentence of L1 can be derived from premises using the derivation system, there is no assignment of truth values that makes the premises all true and the conclusion false.

A deductive system that was not sound would not be very useful. Using an unsound system, one could derive conclusions that don't follow from the premises. Fortunately, D1 is sound. The proof of this is beyond the scope of this book, but you can perhaps become convinced that none of the rules can ever lead from truths to falsehoods.23 For instance, consider →E. It allows us to infer ψ from φ → ψ and φ. If both φ and φ → ψ are made true by an assignment of truth values, obviously ψ must be made true by that assignment as well. So →E will never take us from truths to falsehoods.

The second thing we want to be true of D1 is that it be complete, that is, that it make possible a derivation for every valid argument.

A deductive system for L1 is complete if and only if for every argument that is valid (i.e., no assignment of truth values makes all premises true and the conclusion false), there is a derivation of the conclusion from the premises in the deductive system.

A deductive system that was not complete could be very frustrating. Even if we knew that an argument was valid, it might be impossible to find a derivation showing the argument valid in that incomplete system. Fortunately D1 is complete. In fact, only the rules given in Chapter 4 are needed to make a complete

23 A rigorous proof that D1 is sound requires the use of mathematical induction.

Introductory Logic system.24 The additional rules of Chapter 5 help to make derivations easier, but they dont make possible any derivations that couldnt be done using only the rules of Chapter 4. There are many different deductive systems for L1 that are both sound and complete. Nearly every different logic book has a different set of rules. So long as the rules are sound and complete, one deductive system is as good as another, from the point of view of logic. The choice of rules to include is based on convenience, tradition, and pedagogy.

24 The proof that D1 is complete is beyond the scope of this book.



Rules for D1
Assumption Rules: Any sentence may appear on a line as a premise, provided there are no lines other than premises earlier in the proof. (Annotation: P.) CD assumption rule: Immediately after a line of the form SHOW NR , a line N may appear. (Annotation: ACD.) ID Assumption rule: Immediately after a line of the form, SHOW N, a line ~N may appear. (Annotation: AID.) Basic Rules of Inference

Premise Rule:

&-Exploitation Rule: If N & R appears on an earlier accessible line of a derivation, either N or R may appear alone on a line. (Annotation: the line number of the earlier line plus &E.) &-Introduction Rule: If N and R appear on earlier accessible lines of a derivation, N & R may appear on a line. (Annotation: the line numbers of the earlier lines on which the conjuncts appear plus &I.) w-Exploitation rule: If N w R and ~N appear on earlier accessible lines of a derivation R may appear on a line; if N w R and~R appear on earlier accessible lines of a derivation N may appear on a line. (Annotation: line numbers of the two earlier lines and wE.) w-Introduction rule: If N appears on a line then either N w R or R w N may appear on a line. (Annotation: the number of the earlier line plus I.) -Exploitation rule: If N R and N appear on earlier accessible lines of a derivation, then R may appear on a line. (Annotation: the numbers of the earlier lines plus E.) -Exploitation rule: If N R and N appear on an earlier accessible line then R may appear on a line; if N R and R appear on earlier accessible lines then N may appear on a line. (Annotation: the numbers of the earlier lines plus E.) -Introduction rule: If N R and R N appear on earlier accessible lines, then N R may appear on a line. (Annotation: the numbers of the earlier line plus I.)

139

Introductory Logic

Pseudo-inference rule

!-Introduction rule: If φ and ~φ both appear on earlier accessible lines, then ! may appear on a line. (Annotation: the numbers of the earlier lines plus !I.)

Structural Rules

Accessible lines: A line is accessible only if (1) it does not have uncanceled SHOW; and (2) it is not boxed. SHOW rule: Any sentence may appear on a line preceded by SHOW. One SHOW line must appear before any lines other than premise lines. Direct Derivation Rule: When a sentence on an accessible line is identical to the sentence on the most recent uncanceled SHOW line, the SHOW may be canceled and all the subsequent lines boxed. (Annotation: DD appears on the canceled SHOW line.) Conditional Derivation Rule: If R appears on an accessible line and the most recent uncanceled SHOW line is of the form N R, then the SHOW line may be canceled and subsequent lines boxed. (Annotation: CD appears on the canceled SHOW line.) Indirect Derivation Rule: If ! appears on an accessible line, and the most recent SHOW line is not SHOW !, the SHOW may be canceled and subsequent lines boxed. (Annotation: ID appears on the canceled SHOW line.)
Completed Derivation A derivation is completed when and only when the first SHOW line is canceled. The sentence on that line is the conclusion of the derivation; the derivation is said to be a derivation of its conclusion from its premises (if any).

Derived Rules of Inference for D1

Modus Tollens Rule: If N R and ~R appear on earlier accessible lines of a derivation, then ~N may appear on a line.
140

Derivations II (Annotation: The line numbers of the earlier lines plus MT.) Repetition: If N appears on an earlier accessible line, N may appear on a line. (Annotation: The line number of the earlier line plus R.) Hypothetical Syllogism: If N R and R 2 appear on earlier accessible lines of a derivation, N 2 may appear on a line. (Annotation: The line number of the earlier lines plus HS.) Separation of Cases: If N w R, N 2, and R 2 appear on earlier accessible lines of a derivation, 2 may appear on a line. (Annotation: the line numbers of the earlier line plus SC.) Separation of Cases 2: If N R and ~N R appear on earlier lines of a derivation, R may appear on a line. (Annotation: the line numbers of the earlier lines plus SC2.) ID Assumption Rule Second Form: Immediately after a line of the form, SHOW ~N, a line N may appear. (Annotation: AID.) Replacement Rules for D1 If N appears on an earlier accessible line, the result of replacing one sentential part, R of N with a sentence, 2, may appear on a line, provided R and 2 are instances of one of the pairs of forms in the list below. Annotation: The appropriate abbreviation below.

DN

~~N

:: N

DeM

~(N & R) :: ~N w ~R ~(N w R) :: ~N & ~R ~(N R) :: ~N R

~(N R) :: N & ~R

~ Com

Assoc N & (R & 2) :: (N & R) & 2 N w (R w 2) :: (N w R) w 2 N (R 2) :: (N R) 2 Dist

N & R :: R & N N w R :: R w N N R :: R N N R :: ~N w R ~N R :: N w R

N & (R w 2) :: (N & R) w (N & 2) N w (R & 2) :: (N w R) & (N w 2) N R :: ~R ~N

Ctr

141

Introductory Logic

Chapter 6: Monadic Quantification


In this chapter we will begin to learn a more complex language, L2. In this language we can express some of the internal structure of sentences. We will begin by learning about names and simple predicates; these can be combined to make sentences of L2. Then we will learn about variables, which enable us to create open sentences and complex predicates. The new features of the language require its interpretations to be more complex than interpretations of L1, and we will learn what these more complex interpretations are like. The real power of L2 comes from combining open sentences with quantifiers, so we will learn about quantifiers and how they work. Finally we will work on translating between L2 and English. 1: Names and Predicates In order handle more kinds of arguments, we must look at some of the internal structure of sentences. Many valid arguments cannot be shown valid with any of the techniques we have developed for L1 For instance, consider the following argument: All humans are mortal. Socrates is human. Therefore, Socrates is mortal. The best we can do symbolizing this argument in L1 is this (using the obvious interpretation): P Q

R
But this argument is not valid, as you can see by considering an assignment of truth values that makes the premises true and the conclusion false. In order to understand the validity of the English argument, we must consider the structure of the statements. To do this we will introduce a new language, L2. L2 is an extension of L1, 142

Monadic Quantification that is, it has all the features of L1 and more besides. You should note that L2 is not very much like English. Although L2 is not a complex language, it has a different structure from English, and many things are done in different orders or in different ways. Sentences that appear very similar in English may be quite different in L2. But this is not because L2 is hard; on the contrary, L2 is quite a simple language. English, however, is not a simple language. Fortunately, you already know English. In L2 we take a very simplified view of the structure of sentences. We break them into four basic elements: names, predicates, variables, and quantifiers. In this section we will begin to learn about names and predicates. A name in L2 is a term that designates a particular thing. For example, a name might designate any one of the following: Susan B. Anthony Harriet Tubman Bill Clinton The number 10

The intersection of Elmwood Ave. and Goodman St. in Rochester, New York The Crab Nebula The desk in Mr. Bennetts office, 521 Lattimore The first p on this page. For names in L2 we will use lower case Roman letters in the range a through u, with and without Arabic numerical subscripts. Thus all of the following are names of L2: a, k, c24, u239, r, b But none of the following are names in L2: P, v, x97, d, biv, * We can give meaning to a name in L2 by indicating which particular object it designates, for example, e: Queen Elizabeth II of England n: Nelson Mandela 143

Introductory Logic This makes e designate Queen Elizabeth, and n designate Nelson Mandela. Predicates are what is left of sentences when names are removed. When we speak of predicates in Logic class, we are not using the term in its ordinary sense. In logic, a predicate is (approximately) something you get by taking a sentence and removing one or more names from it. For instance, consider this English sentence: Alonzo kissed Susan. We can get a logical predicate out of this by removing the name Susan (we use the symbol to mark the place where we took the name out): Alonzo kissed . We can get a different predicate by taking out the name Alonzo instead:

kissed Susan.
And yet a third predicate by removing both names (in this case we use the symbol to indicate where the second name was removed):

kissed .
(In this chapter, we will be confining ourselves mostly to what we shall call one-place or monadic predicates: those that are made by removing only one name from a sentence. But you will need to be aware that there are other predicates in order to understand some features of our treatment.) Predicates in L2 are upper case Roman letters with or without Arabic numeral subscripts. Thus all of the following are predicates in L2: A, G, Z, Q27 but none of the following are: 144

Monadic Quantification Big, Mvii, 1 We can say what a predicate of L2 means by matching it with an English sentence from which names have been removed:25 H: is a head of state. S: is a South African This gives H in L2 nearly the same meaning as is a head of state in English. We can make sentences in L2 by combining names with predicates. In L2 we can make a sentence from a one-place predicate and a name by placing the name after the predicate. For instance the following are sentences of L2: He Hn Se Sn Given the meanings we assigned above, the first says that Queen Elizabeth is a head of state, the second that Nelson Mandela is, the third that Queen Elizabeth is a South African, and the fourth that Nelson Mandela is a South African. In L2 we also have available all the resources of L1. In particular, we have all the truth-functional sentence connectives of L1 available. So the following are also sentences of L2: He w Sn Se Hn (He & Sn) ~(Hn w Se) You should have no difficulty figuring out what these sentences say. Note that all the formation rules of L1 still apply in L2. Especially note that you cannot do things like this in L2:

Note that we wouldnt really need to use the symbol here, but since we will need such symbols in the next chapter when we treat predicates of more than one place, we will for consistency use the symbols here too. 145

25

Introductory Logic H(e w n) WRONG!! (H w S)e WRONG!! (In the next section of this chapter we will see how to make complex predicates out of simple ones that will do something like what the expression H w S might be intended to do above.) Exercises 6-1 Indicate which of the following are sentences of L2; for those that are not, explain why not. 1. 2. 3. 4. 5. 6. 7. 8. Pc & Fa Km & Kn Kr Bz ~Bc P(c & a) Da (Pa & Qa) ~Ra Gc Hu Za & (Cb Tr) Cc (Vd Cy)

Given the meanings below, indicate what the following sentences of L2 say: B: is on the baseball team. H: is on the field hockey team. L: is on the lacrosse team. S: is on the softball team. a: Alice b: Benjamin c: Carol d: Donald 9. 10. 11. 12. 13. 14. Ld & Bb Ha & ~Hc ~Hc Sc La Lb La (Sd & Hb) ~(Lb & La) ~(Ba w Bd)

146

Monadic Quantification 2: Variables, Open Sentences, and Satisfaction L2 also has variables. Variables in L2 are lower case Roman letters in the range v through z, with or without Arabic numeral subscripts. Thus, the following are all variables of L2: v, w, x, y, z, x23, w234 but the following are not: c, h, v, xmine Variables can be used in L2 anywhere that names can be used (and, as we shall see in Section 4, in some places that names cant be used). When we combine a variable and a predicate, though, we dont get a sentence. Variables dont designate anything in particular; they are merely place holders. We call the result of replacing a name by a variable in a sentence of L2 an open sentence. This expression could be misleading. An open sentence is not a sentence (any more than a decoy duck is a duck). The following are all open sentences of L2: Fx Gx Gr Cy Mz ~ Mw Mr w (Lx & Ty) For the rest of this chapter, we will only be concerned with open sentences that have only one variable in them (that may occur more than once, as in the first sentence above.) Open sentences are not true or false, rather they are satisfied by things. Consider a simple open sentence, Px and suppose the meaning of the predicate P is given thus: P: is President of the United States in 2005. The open sentence Px does not say anything; but it can be true of, or satisfied by, things. For instance, it is satisfied by George Walker Bush, and by no one else. If the predicate F is given this 147

Introductory Logic meaning: F: is female. Then the open sentence Fx is satisfied by all females, and by nothing else. In general we can give the following simple account of an objects satisfying an open sentence: An object satisfies an open sentence with only one variable if and only if when a name for the object replaces the variable at all of its occurrences in the open sentence, the resulting sentence is true. So, for instance, if we want to know whether the open sentence Fx is satisfied by Hilary Rodham Clinton, we just replace the x in Fx by some name for Hilary Rodham Clinton, say h, to get a sentence Fh and ask whether the resulting sentence is true. Obviously, it is, so Hilary Rodham Clinton satisfies Fx. Similarly, you can easily verify that George Walker Bush does not satisfy Fx. This holds for open sentences of any complexity. Suppose we consider a sentence like Fx w Px, where Px has the meaning assigned to it above. Hilary Clinton will satisfy this sentence, for Fh w Ph is true. George Walker Bush will also satisfy it, because, if g is a name for George Walker Bush, Fg w Pg is true. In fact, all females as well as George Walker Bush satisfy Fx w Px. The set of objects that satisfy a given open sentence (on some interpretation) is called the extension of that open sentence (on that interpretation). So the extension of Fx w Px is the set consisting of all female persons plus George Walker Bush. The extension of a monadic predicate is the extension of the open sentence we get by putting a variable after the predicate. Now consider the sentence Px ~Fx. This sentence is satisfied by George W. Bush, because Pg ~Fg is true. But in fact, it is also satisfied by every other person on earth. Consider, say, the pope. (Assume that p is a name designating him.) Pp ~Fp is surely true, since its consequent is true. It is also true of Elizabeth Dole (assume d is a name designating her), since Pd ~Fd will be true because its antecedent is false. It should be easy to see that everyone on earth will satisfy Px ~Fx. In open sentences, it doesnt matter which variable we use in 148

Monadic Quantification sentences in which only one variable occurs. Fy w Py is satisfied by exactly the same things as Fx w Px and Fz w Pz Exercises 6-2 On the basis of the given account of the meaning of the predicates, say which animals satisfy the open sentences following: C: is a canid.26 D: is a dog. M: is a mammal. R: is a reptile. W: is a wolf. b: Fido (a dog) s: Tabby (a cat) 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. Dx Dy w Wy Dz & Wz Rw w Mw Dv Mv Mx Dx Dy My Cz ~Mz (Rw & Cw) Mw (Dv w Wv) (Mv & Cv) Db w Wx Mx Ws 3: Interpretations and Small Domains We must now explain interpretations for L2. In L1 an interpretation only needed to assign statements to each sentence letter, because in L1 the only language elements to interpret were sentence letters. In L2 there are both names and predicates, and an interpretation must give meaning to those as well. In addition, an interpretation must identify a set of objects, which we shall call the domain, about which we are talking. The domain will identify

A canid is an animal that is a member of the family Canidae that includes dogs, wolves, coyotes, foxes, and jackals, but not cats. 149

26

Introductory Logic objects to which the open sentences of the language will apply. An interpretation for L2 will begin by specifying the domain. We shall use an upper case D to designate it. Thus we might have taken the set of all animals as the domain for exercises 6-2. We will write the domain this way: D: animals Any set of objects may be the domain for an interpretation. If we want to have a small domain with only a few objects in it, we may specify it by presenting a set in the usual mathematical notation: D: {George W. Bush, the moon, 3} This specifies a domain containing exactly three objects: George W. Bush, the moon, and the number three. The objects in a domain need not have anything in common. An interpretation will have to provide meanings for the predicates and names that occur in sentences we are interested in. As in L1, a complete interpretation would provide meanings for all the expressions possible in the language. But, as with L1, complete interpretations would be extremely inconvenient to use. So we shall get along with partial interpretations: interpretations that give meanings only to those language elements that we are using on any particular occasion. Here is all of the interpretation presented in exercises 6-2: D: animals C: is a canid. D: is a dog. M: is a mammal. R: is a reptile. W: is a wolf. b:Fido (a dog) s: Tabby (a cat) Not every interpretation uses a large domain like the set of all animals. Sometimes it is useful, especially when dealing with complicated sentences, to create small domains as we want them. For instance, we could take as our domain {The Pope, Madonna}. But it can be useful to have a small domain for which we can 150

Monadic Quantification create artificial predicates that apply to whatever we want in the domain. One way to custom design small domains is to make diagrams of an imaginary domain. At the left is a domain of two objects. We can refer to this domain thus: D: The stick figures to the left. To say which predicates are true or false of the objects in this domain, we can put letters above the objects which satisfy the predicate letters. For instance, if we are concerned with only the predicates F and G, we might have the diagram below. This diagram shows that the predicate F is true of both figures, but the predicate G is true only of the left one. If we were concerned also with the predicate H, it would be true of neither figure in this diagram. The diagrams are not themselves interpretations, but there is a simple way to generate interpretations from them. For instance, the following is an interpretation for the second diagram: D: The stick figures to the left F: has an F above it G: has a G above it. H: has an H above it. We can ask which figures in the diagram open sentences are true of. For instance, consider the open sentence Fx Gx. Will the figure on the left satisfy this open sentence? Yes, because both F and G are true of it. You can work this out in detail (if necessary) by assigning a name, say a, to the figure on the left and checking the truth value of the sentence you get by replacing the x in Fx Gx by that name. Making the replacement gives the sentence, Fa Ga. We can determine that this is true by first noting that Fa and Ga are true on this interpretation: they say respectively that the figure on the left has an F above it and that the figure on the left has a G above it. Both are true. Then we calculate the truth-value of the whole sentence using the rules for the connective . These tell us that the whole sentence has the value True. Since it does, the 151 A small domain

Introductory Logic figure on the left satisfies the open sentence Fx Gx. Does the figure on the right? Which figures satisfy Gx Hx? There is a second way to produce small, custom-made domains. We can use numbers as the objects in the domain. (This method is particularly suited for people who are more comfortable with numbers than with pictures of people.) We could pick any numbers to be in our small domains, but for convenience, we will pick the first few positive integers. So a domain of one object would be {1}, a domain of two objects would be {1, 2}, and a domain of three objects would be {1, 2, 3}. To assign meanings to the predicates, we simply specify the objects in the domain that satisfy the predicate by presenting the set of all objects that do. For instance, we might have an interpretation like this: D: {1, 2} F: is in {1, 2} G: is in {1} H: is in {} [Note that {} is the empty set.] (The reader should have no difficulty recognizing that this is very similar to the interpretation above using the stick figure diagram.) We can easily discover which objects in the domain satisfy various open sentences on this interpretation. For instance, consider the sentence, Fx Gx. If a names 1, it is easy to verify that Fa Ga is true on this interpretation, so 1 satisfies the open sentence Fx Gx. Does 2 satisfy it? Which numbers in the domain satisfy Gx Hx?
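As an illustration outside the text proper, this bookkeeping can be done mechanically. The short Python sketch below is not part of the book; it models the numerical interpretation D: {1, 2} above with sets and lists which objects satisfy a given open sentence, here represented as a Python function of one argument (the conjunction used as the sample open sentence is my own choice, not the one in the text).

```python
# Illustrative sketch (not from the text): the interpretation with
# D: {1, 2}, F: is in {1, 2}, G: is in {1}, H: is in {} modeled as sets,
# plus a helper that lists which objects satisfy an open sentence.

D = {1, 2}
F = {1, 2}      # "is in {1, 2}"
G = {1}         # "is in {1}"
H = set()       # "is in {}", the empty set

def satisfiers(domain, open_sentence):
    """Return the set of objects in the domain that satisfy the open sentence."""
    return {a for a in domain if open_sentence(a)}

# The conjunction Fx & Gx is satisfied only by 1:
print(satisfiers(D, lambda x: x in F and x in G))   # {1}

# The conjunction Gx & Hx is satisfied by nothing, since H is empty:
print(satisfiers(D, lambda x: x in G and x in H))   # set()
```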


Exercises 6-3
Using the diagram and interpretation given, tell which stick figures satisfy the given open sentences. (For the purpose of giving your answers, name the figures, from left to right, a, b, and c.)
D: the stick figures to the left.
R: has an R above it.
S: has an S above it.
T: has a T above it.
1. Rx & Sx
2. Ry w Sy
3. Rz Sz
4. Sv Rv
5. Rw Tw
6. ~Rx ~Tx
7. (Ry & Ty) Sy
8. (Rz v Sz) ~Tz

Using the interpretation given, tell which numbers satisfy the given open sentences.
D: {1, 2, 3}
M: is in {1, 2}
N: is in {1}
P: is in {}
9. Mv w Nv
10. Mw & Pw
11. Px Nx
12. (Px v Nx) Mx
13. ~Pz (Mz & Nz)
14. (~Pv & Nv) Mv
15. Mw (~Pw v Nw)
16. ~(Px Mx)


Introductory Logic 4: Quantifiers The sentences we have so far seen in L2 do not enable us to say much beyond what could be said in L1. But we can go beyond L1 using quantifiers. Quantifiers in L2 are complex symbols; each quantifier is made up of a quantifier symbol followed by a variable. There are two quantifier symbols, the universal quantifier symbol, and the existential quantifier symbol, . Here are some sample quantifiers:

∀x ∀w ∀x17 ∃y
∃z ∃v13

A quantifier beginning with a universal quantifier symbol is called a universal quantifier, and a quantifier beginning with an existential quantifier symbol is called an existential quantifier. Thus the first three quantifiers above are universal quantifiers and the second three are existential quantifiers. Syntactically, quantifiers are one-place logical operators, and they can be put anywhere a ~ could be put.27 Thus the following are all acceptable:

x(Fx Gx)

x(Fx & ~Gx) Fy ~x(Rx w ~Tx)

Although these are all syntactically correct, they are not all sentences of L2. Some are only formulas of L2. Every sentence of L2 is a formula, but not every formula is a sentence. To explain the difference we must explain free and bound variables. A quantifier goes at the beginning of a formula of L2. Usually this formula is an open sentence. A quantifier is said to bind occurrences of its variable in the open sentence to which it is attached, provided that variable is not already bound by another quantifier occurrence. Lets illustrate this. Fx Gx is a formula that is an open sentence; it has a variable, x, that occurs at two places and is not bound by any quantifier. If we put in front of Fx Gx a universal quantifier containing x, we get

27 We shall defer a formal statement of the syntax of L2 to the appendix at the end of this chapter. However, you may find it helpful to refer to that more rigorous discussion in connection with the points about syntax in this and the next few paragraphs.

Monadic Quantification x(Fx Gx). (Notice how, just as with ~, we had to restore the parentheses before adding the quantifier.) Now both occurrences of x within the parentheses are bound by the quantifier. Consider also this more complex formula: Fx (xGx Tx) If we put a universal quantifier containing x in front of this, we get

x(Fx (xGx Tx))


In this case, the initial universal quantifier binds the x immediately after the F and the x immediately after the T. It does not bind the x immediately after the G, because that x is already bound by the existential quantifier, x, immediately in front of the G. We will try to avoid this sort of situation; preferable is the equivalent sentence,

x(Fx (yGy Tx))


A quantifier can only bind variables identical to the one it contains. In x(Fx Gy) only the x is bound; the y is not bound. A variable that is not bound by any quantifier in a formula is said to be free. Sentences of L2 are formulas that contain no free variables. Sentences are formulas, but so are strings of symbols which would be a sentence but for having free variables. Any formula containing free variables is not a sentence, and is not true or false. Our syntax rules define what it is to be a formula, then explain what free variables are, then declare that sentences are formulas without free variables. The meanings of the quantifiers are quite simple, even though the meanings of the sentences that can be made with them are very complex. When we put a universal quantifier in front of an open sentence whose only free variable is the universal quantifiers variable, the resulting sentence says that everything in the domain satisfies the open sentence the quantifier is attached to. For instance, if we put a universal quantifier x in front of an open sentence Rx & Tx to get the sentence x(Rx & Tx), this last sentence says that everything in the domain of our interpretation satisfies the open sentence Rx & Tx, or in other words, 155

Introductory Logic everything is both R and T. We can come to understand quantified sentences better by looking at an interpretation and some quantified sentences of L2. Consider this interpretation: D: University Students J: is a junior. L: studies logic. S: is a senior. W: works hard. Using this interpretation, the sentence xSx would say that every university student is a senior; that is false. y(Ly & Wy) would say that every university student studies logic and works hard; that is also false. z(Jz ~Sz) would say that every university student is such that if he or she is a junior, then he or she is not a senior. That is true. If you are not sure that this last sentence is true, consider any arbitrary student. Either he or she is a junior, or not. If not, then the antecedent of the open sentence, Jz ~Sz is false of him or her, and hence the whole is true of him or her. On the other hand, if the student is a junior, then he or she is certainly not also a senior, and so the consequent is true of him or her, and consequently the whole is. In either case, the open sentence is true of the student, and so the sentence, z(Jz ~Sz) is just true. The existential quantifier is equally simple. When we put an existential quantifier in front of an open sentence whose only free variable is the existential quantifiers variable, the resulting sentence says that at least one thing in the domain satisfies the open sentence the quantifier is attached to. For instance, using the interpretation above, putting x in front of Sx & Lx to get x(Sx & Lx) says that there is at least one university student who satisfies the open sentence Sx & Lx, that is, at least one university student is both a senior and studies logic. This is true. Other sentences involving the existential quantifier get similar accounts under the above interpretation. x(Sx & Jx) says that at least one university student is both a senior and a junior, and that is false. x(Sx w Jx) says that there is a university student who is either a junior or a senior; that is true. y(Ly Sy) says that there is a university student who is a senior if and only if he 156

Monadic Quantification or she studies logic; this is also true.
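As a further illustration outside the text, these truth conditions can be checked mechanically on a small domain: a universally quantified sentence is true just in case every object in the domain satisfies the open sentence, and an existentially quantified sentence is true just in case at least one object does. The Python sketch below uses a made-up three-student domain and made-up extensions for the predicates of the interpretation above; only the pattern matters.

```python
# Hedged sketch (mine, not the book's): a tiny stand-in for the
# university-students interpretation, with invented extensions.

D = {"ann", "bob", "cal"}          # a toy domain of students
J = {"ann"}                        # is a junior
S = {"bob"}                        # is a senior
L = {"ann", "bob"}                 # studies logic
W = {"ann", "cal"}                 # works hard

# "Every student is a senior": false, since ann and cal are not seniors.
print(all(x in S for x in D))                           # False

# "Every junior is not a senior": Jz -> ~Sz, read as "not a junior, or
# not a senior"; true, vacuously so for the non-juniors.
print(all((x not in J) or (x not in S) for x in D))     # True

# "Every student who studies logic works hard": false, because of bob.
print(all((x not in L) or (x in W) for x in D))         # False

# "At least one student is a senior who studies logic": true (bob).
print(any(x in S and x in L for x in D))                # True
```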

Exercises 6-4
Using the appropriate interpretation, tell whether each of the following sentences is true or false.
D: People
F: is female.
G: plays golf.
M: is male.
P: is currently president of the United States
1. x(Mx w Fx)
2. y(Py & My)
3. z(Gz w ~Pz)
4. w(Fw Gw)
5. v(Fv & Pv)
6. ~x(Fx & ~Px)
7. y(Py ~Fy)
8. ~z(Gz Mz)

D: The stick figures in the diagram, left
A: has an A above it.
B: has a B above it.
C: has a C above it.
9. v(Av Bv)
10. w(Bw Aw)
11. x(Cx w Bx)
12. y(Ay & By)
13. ~z(Az & ~Bz)
14. ~v(Av & ~Cv)
15. ~w~Bw
16. ~x~Cx

Introductory Logic D: {1, 2, 3} R: is in {1,3} S: is in {2, 3} T: is in {1, 2, 3} 17. 18. 19. 20. 21. 22. 23. 24.

y(Ry Ty) z(Tz w ~Rz) vTv vRv wRw wSw x(Rx & Sx) ~xSx ~x~Tx y(Ry Sy) z(Tz vRv) w(Sw ~xTx)
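For interpretations given with small numerical domains, such as the one used in exercises 17-24, the same mechanical check is available. The sketch below is mine, not the book's, and the two sample sentences in it are not exercise items; it only shows how such an interpretation can be written down and tested.

```python
# Sketch (not from the text): encoding the interpretation for
# exercises 17-24 and checking two sample sentences of my own.

D = {1, 2, 3}
R = {1, 3}        # "is in {1, 3}"
S = {2, 3}        # "is in {2, 3}"
T = {1, 2, 3}     # "is in {1, 2, 3}"

# A universal conditional: everything that is R is also T.
print(all((x not in R) or (x in T) for x in D))    # True

# An existential conjunction: something is both R and S.
print(any(x in R and x in S for x in D))           # True, namely 3
```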

5: Translation: All and its Variants Translation between English and L2 can be tricky. Most of the difficulty is with English: L2 is a much simpler language than English, but it has a different structure, and this can confuse beginners. Fortunately, you already understand English, but you will need to pay careful attention to how it is used in making translations. The first step in translation is to understand the translation of sentences like All As are Bs. Given the following interpretation, D: People H: is a human. M: is mortal. we can translate All humans are mortal as x(Hx Mx). Lets try to understand why this translation works. x(Hx Mx) will be true just in case every person satisfies Hx Mx. The only way a person can fail to satisfy this open sentence is by satisfying the antecedent, but not the consequent, that is, by being human but not mortal. So the sentence says that there arent any humans who arent mortal, or in other words, all humans are mortal. This same reasoning can work for all sentences that say that 158

Monadic Quantification all such-and-suches are so-and-sos. All such sentences will be translated by a sentence of the general form, <(N< R<).28 Lets consider some examples to see how this translation works. Suppose we have the following interpretation: D: People J: is a junior L: studies logic S: is a student W: works hard

Consider the sentence, All who study logic work hard. This is of the form All such-and-suches are so-and-sos, so it should be translated by a universally quantified conditional. We will do best to work step by step, even with these simple sentences. We can start by making a pseudo-sentence combining English and L2:

∀x(x is one who studies logic → x works hard)

Now we can easily see how to translate the antecedent and consequent, arriving at this translation into L2:

∀x(Lx → Wx)
The sentence All students who study logic work hard, can be translated similarly, but is slightly more complicated. It is of the same form, so the first step is to make a similar pseudo-sentence:

∀y(y is a student who studies logic → y works hard)


(I have used y as the variable this time; it doesn't matter which variable one uses here.) The consequent is easy; it is Wy. The antecedent is slightly more complicated: a student who studies logic is someone who is both a student and a studier of logic. Hence the antecedent is (Sy & Ly) and the whole is

∀y([Sy & Ly] → Wy)


28 In the expression using Greek letters, the Greek nu (ν) stands for any variable, and Nν and Rν stand for open sentences with ν as their only free variable.

Introductory Logic (Note that to improve readability I have used square brackets instead of parentheses for bracketing the antecedent; we can use the square brackets here just as we did in L1.) Proceeding step by step and inserting the brackets as you go helps you to get the brackets in the right places. The sentence, Not every junior studies logic can also be translated easily; to translate it you just need to note that it is just the logical denial of Every junior studies logic. Hence it will have the translation, ~z(Jz Lz) Dont confuse Not every junior studies logic with Every junior does not study logic. The latter sentence is ambiguous: it might mean Not every junior studies logic, but its favored reading is Every junior fails to study logic which means something different. These sentences make two quite different statements! The two can be translated easily into L2: ~x(Jx Lx) and

∀x(Jx → ~Lx)

The first denies that something is true of every junior, allowing that it may be that some juniors study logic and some dont. The second asserts something of every junior: it claims that each and every one does not study logic. Because properly formed sentences of L2 are never ambiguous, L2 provides a good way to represent clearly different ways of reading ambiguous English sentences. In this book, we shall try to avoid ambiguous English sentences, except when we are deliberately illustrating ambiguity. There are other English idioms that say the same thing as all without using that word. You can see that the following are all equivalent: All students who study logic work hard. Every student who studies logic works hard. Each student who studies logic works hard. Students who study logic work hard. A student who studies logic works hard. If a student studies logic, he or she works hard. 160

Monadic Quantification The sentences would all be translated by the same sentence of L2. Be careful, though: in other contexts words like each and every may not be equivalent, and often a is used in a sentence that would be translated using an existential, not a universal quantifier. In translating between English and L2, any simple, mechanical rules will sometimes lead you astray, because English is such a complicated language. You should always try to understand what English sentences say, and then see how to say that in L2. Ask yourself whether the English sentence you are translating is really saying (or denying) that all such-and-suches are so-and-sos. If it is, then you will translate it by an L2 sentence of the form <(N< N<) (or ~<(N< N<), in the case of the denial). If the sentence is not saying or denying that all such-andsuches are so-and-sos, then whatever words may be used, it should not be translated by an L2 sentence of one of those forms. The placement of quantifiers and parentheses can be crucial. Consider the following interpretation: D: People W: works hard a: Alonzo h: Homer and the sentence, If everyone works hard, Homer does. This sentence may be correctly translated by this sentence of L2:

∀xWx → Wh
However, it could not be correctly translated by this sentence:

∀x(Wx → Wh)

WRONG TRANSLATION!!

To see that these sentences do not mean the same, consider their truth values if Homer does not work hard but Alonzo does. Since in that case not everyone works hard, the first sentence has a false antecedent, and so is true, since it is a conditional. But the second statement is a universally quantified statement. Since Wa Wh is false, Wx Wh is not satisfied by Alonzo, and the second sentence is false. A true sentence cant mean the same as a false one, so the two sentences of L2 dont mean the same. Sometimes it is useful to note that when you have to use the same variable in two parts of a sentence, the quantifier will have 161

Introductory Logic to come at the beginning of the sentence so it can bind both occurrences; otherwise there will be a free variable, like the last occurrence of x in

xFx Gx     (NOT A SENTENCE: the last occurrence of x here is free.)

This probably should be x(Fx Gx). On the other hand, if the quantifier does not bind any variable in part of the sentence, that part often does not need to be in the quantifiers scope, as in

∀xWx → Wa
(The scope of the quantifier is limited to the antecedent. There is no variable in the consequent.)
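The following sketch, which again is not part of the text, makes the Homer and Alonzo example concrete: on a two-person domain in which Alonzo works hard and Homer does not, the first sentence comes out true and the second false.

```python
# Sketch (mine): ∀xWx → Wh versus ∀x(Wx → Wh) on a two-person domain,
# with the extension of W chosen as in the discussion above.

D = {"alonzo", "homer"}
W = {"alonzo"}                  # works hard
h = "homer"

# ∀xWx → Wh : the antecedent "everyone works hard" is false, so the
# whole conditional is true.
sentence_1 = (not all(x in W for x in D)) or (h in W)
print(sentence_1)               # True

# ∀x(Wx → Wh) : Alonzo works hard but Homer does not, so Alonzo fails
# to satisfy Wx → Wh, and the universal sentence is false.
sentence_2 = all((x not in W) or (h in W) for x in D)
print(sentence_2)               # False
```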

If nothing satisfies the antecedent of a universally quantified conditional, it is true. This follows from what we have said. Consider, for instance, x(Tx Px), and suppose that nothing satisfies Tx. Then no matter what item a may be, the antecedent of Ta Pa will be false, and therefore Ta Pa will be true. So everything will satisfy Tx Px and thus x(Tx Px) will be true. For instance, if we have this interpretation, D: People P: is purple. T: is over 15 feet tall. then x(Tx Px) will translate Everyone over 15 feet tall is purple. Since we wouldnt ordinarily say something like that unless we believed there were people over 15 feet tall, we may be uncertain whether the English behaves as the L2 sentence does. However, we shall declare that in this book all English sentences of the form All such-and-suches are so-and-so are true if there are no such-and-suches. Some consequences of this may be surprising. For instance, both Everyone over 15 feet tall is purple and Everyone over 15 feet tall fails to be purple will come out true, on this account. It may seem as though this is a contradiction, but it is not. Everyone over 15 feet tall fails to be purple does not deny that everyone over fifteen feet tall is purple; 162

Monadic Quantification to deny that we would have to say, Not everyone over fifteen feet tall is purple. However, the following three statements are inconsistent, as are their translations into L2: Everyone over fifteen feet tall is purple. Everyone over fifteen feet tall fails to be purple. There are people who are over fifteen feet tall. Exercises 6-5 Translate the following into L2 using the interpretations given. D: musical instruments B: is a brass instrument D: is a double-reed instrument E: is an English horn. K: is a krummhorn. S: is a string instrument. V: is a violin. W: is a woodwind instrument 1. 2. 3. 4. 5. 6. 7. 8. All violins are string instruments. Every double-reed instrument is a woodwind. If every violin is a string instrument, then not every violin is a brass instrument. Each English horn is a woodwind, but not every violin is. A krummhorn is a double-reed instrument and not a brass instrument. Neither English horns nor krummhorns are brass instruments. Both English horns and krummhorns are double-reed instruments. Krummhorns are double-reed instruments unless English horns are brass instruments.

D: The stick figures to the left. L: has an L above it. 163

Introductory Logic M: has an M above it. P: has a P above it.

9. 10. 11. 12. 13. 14. 15. 16.

All figures with an M above them have an L above them. Not every figure with an L above it has an M above it. Every figure with an L above it has either an M or a P above it. Not every figure with a P above it has an L above it. Every figure has either an L or a P above it. A figure has both an M and an L above it if and only if it does not have a P above it. Any figure that has neither an M nor an L above it has a P above it. If a figure has a P above it, then it doesnt have an L above it. D: {1, 2, 3} C: is in {1} N: is in {1, 3} T: is in {2, 3} t: 2

17. 18. 19. 20. 21. 22. 23. 24.

A number that is in {1} is also in {1, 3} Not every number in {2, 3} is in {1, 3} 2 is in {2, 3}. If 2 is in {2, 3}, then every number in {1} is in {1, 3}. If a number is in {2, 3} and 2 is also, then that number is not in {1}. A number is in {1} if and only if it is both in {1} and in {1, 3}. Every number in {1, 3} is either in {1} or in {2, 3}. A number is in either {1} or {2, 3} if and only if it is either in {1, 3} or {2, 3}. 6: Translation: At least one and its variants At least one such-and-such is so-and-so may be translated by


Monadic Quantification an L2 sentence of the form, <(N< & R<).29 The existential quantifier is used to say that the open sentence following it is satisfied by at least one thing in the domain. So it can translate sentences that say that in English. For instance, using the following interpretation, D:People D: is diligent. S: is a student. the sentence, There is at least one student who is diligent, may be translated x(Sx & Dx). We shall also use the existential quantifier to translate sentences involving the English word some, because we will understand some in the way logicians do so that it means at least one. Some refers to an indefinite number or quantity, and some people seem to think it must refer to a number greater than one. If you want to know what some means in English, you should look it up in a good dictionary; here we shall simply declare that in this book, some always means at least one. So using the above interpretation, y(Sy& Dy) can translate Some students are diligent. Note that the structure of the translation of Some students are diligent is different from the structure of the translation of All students are diligent. The English sentences look identical except for the words some and all, but their translations into L2 differ in more than just the quantifiers:

∃x(Sx & Dx)
∀x(Sx → Dx)


These sentences differ not only in their quantifiers, but also in the connective used in the open sentence inside the quantifier. Keep in mind this rule: All: Universally quantified conditional. Some: Existentially quantified conjunction.
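A small mechanical check, offered here as an aside rather than as part of the text, may make the rule vivid. In the sketch below the interpretation is invented, and on it no student is diligent: the existential conjunction is false, as it should be, while an existentially quantified conditional comes out true for a spurious reason, namely that something merely fails to satisfy the antecedent.

```python
# Sketch (mine, not the book's): why "some" wants a conjunction rather
# than a conditional. On this made-up interpretation, 1 and 2 are
# students and nothing is diligent.

D = {1, 2, 3}
S = {1, 2}         # is a student
Dil = set()        # is diligent (called D in the text; renamed here
                   # so it does not clash with the domain)

# ∃x(Sx & Dx): "some student is diligent" -- false, as it should be.
print(any(x in S and x in Dil for x in D))                # False

# ∃x(Sx → Dx): true anyway, merely because 3 is not a student and so
# satisfies the conditional vacuously.
print(any((x not in S) or (x in Dil) for x in D))         # True
```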


For an account of the Greek letters, see footnote 4 above. 165

Introductory Logic It almost never makes sense to put an existential quantifier in front of a conditional. To reinforce this point, consider the drinking principle.30 The drinking principle derives from a joke. It seems a drunk went into a bar and called out, Gimme a drink, and give all my friends here a drink, because when I drink, everybody drinks! Drinks were sent round to general approval. Later the drunk cried out again, Gimme another drink, and give all my friends here another drink, because when I drink, everybody drinks! More drinks were passed round to more general approval. A while later the drunk called out, Now Im gonna pay, and when I pay, everybody pays! So much for the joke. The drinking principle is this: There is some person such that if he or she drinks, everyone does. Astonishingly, this is true; in fact its translation into L2 is a logical truth, as we shall be able to show rigorously in later chapters. We can translate the drinking principle, using the obvious interpretation: D: People D: drinks (The Drinking Principle:)

∃x(Dx → ∀yDy)

It is easy enough to see that this must be true. Either everyone drinks, or not everyone drinks. If everyone drinks, then the consequent of Dx yDy is true, and the open sentence will be satisfied by anyone; hence the drinking principle is true. If not everyone drinks, then there is at least one person who does not drink; this person will satisfy Dx yDy since its antecedent will be false of him or her. Hence the drinking principle is also true if not everyone drinks. Hence the drinking principle is true. The drinking principle may serve to remind us that we generally dont want to put an existential quantifier in front of a

The drinking principle and the associated joke are from Raymond Smullyan, What is the Name of this Book?, (PrenticeHall, Englewood Cliffs, New Jersey, 1978) p.209 166


Monadic Quantification conditional. Such a sentence will be true on an interpretation if there is even one item in the domain that fails to satisfy the antecedent of the conditional. We seldom want to say anything like that. The moral to remember is: All: Universally quantified conditional. Some: Existentially quantified conjunction. You will discover a few exceptions to this rule of thumb, but generally it will be good advice to follow. There are other idioms that are translated by the existential quantifier. Given this interpretation, D: students F: fell down yesterday. L: studies logic W: works hard. all of the following are equivalent, and equally translated by x(Lx & Wx): At least one student who studies logic works hard. Some students who study logic work hard. There is a student who studies logic and works hard. There exists a student who studies logic and works hard. The indefinite article, a or an, can be tricky in English. Compare the following English sentences; try to translate them using the interpretation above before reading the text following them. A student who studies logic works hard. A student who studies logic fell down yesterday. The first sentence is a general statement: it talks about any student who studies logic. Hence it is translated using a universal quantifier, x(Lx Wx). But the second sentence, despite looking very much the same, talks about some particular, unidentified student, and hence is translated using an existential quantifier, x(Lx & Fx). The moral here is that you cannot tell merely from the fact that an indefinite article is used how the 167

Introductory Logic sentence is to be translated. Rather, you must understand what it says and think how to translate that into L2. Ask yourself whether the sentence is making a general claim about all people of some sort or other. If it is, then it will be translated using a universal quantifier. If it is making a statement about just one person, not specifically identified, or about some indefinite number of persons, then it will be translated with an existential quantifier. The denial of an existentially quantified statement says that there is nothing that satisfies the open sentence following it. xSx says that there is something that satisfies Sx. ~xSx denies that; it says that it is not the case that something satisfies Sx, that is, that nothing does. To say in L2 that no student who studies logic works hard, we deny that there is a student who studies logic and works hard: ~x(Lx & Wx). This sentence of L2 translates all of the following English statements: No student who studies logic works hard. There is no student who studies logic and works hard. There does not exist a student who studies logic and works hard. ~x(Lx & Wx) turns out to be equivalent to x(Lx ~Wx). You can see that this is plausible: to say that no student who studies logic works hard is obviously very much the same as saying that any student who studies logic fails to work hard. In Chapter 9 we will learn how to show that these are equivalent. So either sentence can be used to translate any of the English sentences above.
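The equivalence just mentioned will be shown properly in Chapter 9, but it can be spot-checked by brute force. The sketch below is not part of the text: it tries every way of interpreting L and W over a two-element domain and confirms that the two sentences never differ in truth value there (which is, of course, a check rather than a proof).

```python
# Sketch (mine): spot-checking that ~∃x(Lx & Wx) and ∀x(Lx → ~Wx) agree
# on every interpretation over a two-element domain.

from itertools import chain, combinations

D = {1, 2}

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

for L in map(set, subsets(D)):
    for W in map(set, subsets(D)):
        no_logic_worker = not any(x in L and x in W for x in D)        # ~∃x(Lx & Wx)
        all_logic_lazy = all((x not in L) or (x not in W) for x in D)  # ∀x(Lx → ~Wx)
        assert no_logic_worker == all_logic_lazy

print("The two sentences agree on every interpretation tried.")
```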

Exercises 6-6 Translate the following into L2 using the interpretation given. D: Planets of the solar system 168

Monadic Quantification C: is closer to the sun than the earth. F: is farther from the sun than the earth. J: is a Jovian planet. (= Jupiter, Saturn, Uranus or Neptune) R: has rings 1. 2. 3. 4. 5. 6. 7. 8. Some planets are closer to the sun than the earth. There is at least one planet that has rings. Some planets that are farther from the sun than the earth are Jovian planets. At least one Jovian planet has rings. Some planets without rings are closer to the sun than the earth. No Jovian planet is closer to the sun than the earth. Some Jovian planets have rings, but no planets closer to the sun than the earth do. Some Jovian planets that are farther from the sun than the earth have rings, but no planets farther from the sun than the earth that are not Jovian planets have rings.

D: the stick figures to the left A: has an A above it. B: has a B above it. C: has an C above it. 9. 10. 11. 12. 13. 14. 15. 16. Some figure with an A above it has a C above it. At least one figure has both an A and a C above it. Some figure has both an A and a B above it, but no figure has both a B and a C above it. Some figure has a A above it if and only if no figure has both a C and a B above it. No figure fails to have a A above it. Theres a figure with a B above it, a figure with a C above it and a figure with an A above it. There exists a figure with an A above it that has neither a B nor a C above it. If there is a figure that has a B above it, then there is a figure that does not. 169

Introductory Logic D: {1, 2, 3} O: is in {1} P: is in {3} Q: is in {1, 2} b: 2 17. 18. 19. 20. 21. 22. 23. 24. Some number is in {1} No number is in both {1} and {3} If there exists a number in both {1} and {1, 2} then there exists a number in {3}. Either no number is in both {1} and {3} or some number is in neither {1} nor {1, 2}, No number is in both {1} and {3} if and only if some number that is in {1, 2} is in {1}. There exists a number that is in both {3} and {1,2} only if there is no number in both {1} and {1, 2}. If 2 is in {1, 2} then no number is in both {3} and {1}. Either 2 is not in {1} or some number is in both {1} and {3}.

7: Some additional common idioms of English Only has a variety of uses in English. For instance, consider the sentence, Only the brave deserve the fair. Using the following interpretation, we seek to translate this sentence into L2. D: People B: is brave. D: deserves the fair. To say that only the brave deserve the fair is to say the same as None but the brave deserve the fair, so one translation would be: ~x(~Bx & Dx) As we shall see in later chapters, this is equivalent to

∀x(Dx → Bx)
This makes sense when you think about it: If no one who is not 170

Monadic Quantification brave deserves the fair, then everyone who deserves the fair must be brave. Either of these sentences will translate Only the brave deserve the fair. Sentences involving the only seem to work differently. If I say, The only students who will pass are those who work hard, I seem to be saying that every student who passes will have worked hard. Thus using the given interpretation, we can render this in L2 as x[(Sx & Px) Wx] D: People P: passes. S: is a student. W: works hard. Some people find it easier to think of The only students who will pass are those who work hard as saying the same as No student who fails to work hard will pass, and hence to translate it as ~x[(Sx & ~Wx) & Px]. This is equivalent to the previous L2 sentence, and either is an acceptable translation. Both can be confusing in sentences that are translated with quantifiers. Suppose we consider the sentence, Both wolves and tigers eat meat, and use the following interpretation: D: Animals eats meat M: T: is a tiger W: is a wolf. We might be tempted to translate the sentence thus:

∀x[(Wx & Tx) → Mx]

WRONG TRANSLATION!!!

This sentence says that anything that is both a wolf and a tiger eats meat. But the original sentence did not speak of things that were wolves and also tigers; it spoke of wolves and of tigers. One way to translate this sentence would be to take it to be a conjunction:

∀x(Wx → Mx) & ∀y(Ty → My)


This makes the same claim that the original English sentence makes. But we can also translate it by a sentence with just one 171

Introductory Logic quantifier. We want a sentence like this:

∀x(x is ____________ → x eats meat)


How shall we fill in the blank? Remember the English sentence: Wolves and tigers eat meat. What must a thing be in order for the sentence to declare it a meat eater? It must be either a wolf or a tiger. So the blank is filled in thus:

∀x(x is either a wolf or a tiger → x eats meat)


and the correct translation is

∀x[(Wx w Tx) → Mx]


The moral is that to pick out all the things which you can select with two predicates, you must disjoin the predicates with w. The existential quantifier can be confusing with both as well. Consider the sentence, Both some men and some women study logic, and the following interpretation: D: People L: studies logic M: is a man. W: is a woman. We mustnt translate the sentence this way:

∃x[(Mx & Wx) & Lx]


This sentence says that someone is both a man and a woman and also studies logic. Again we can treat the sentence as a conjunction:

∃x(Mx & Lx) & ∃x(Wx & Lx)


However, in this case there is no way to translate this sentence using just one existential quantifier. Thats because the original English sentence can be only made true by the existence of two things, one a man who studies logic and one a woman who studies logic. So well need two existential quantifiers to express it. 172
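To see the difference between the disjunctive translation and the tempting conjunctive one, it can also help to test both on a small made-up domain. The sketch below is mine, not the book's; it uses three invented animals and deliberately makes the tiger a non-meat-eater, so that the intended reading comes out false while the wrong translation comes out vacuously true.

```python
# Sketch (mine): the wolf/tiger point on a made-up three-animal domain
# in which the tiger, oddly, does not eat meat.

D = {"wolf", "tiger", "rabbit"}
W = {"wolf"}              # is a wolf
T = {"tiger"}             # is a tiger
M = {"wolf"}              # eats meat

# ∀x[(Wx w Tx) → Mx] : false, because the tiger is a counterexample.
print(all((x not in W and x not in T) or (x in M) for x in D))   # False

# ∀x[(Wx & Tx) → Mx] : the wrong translation; true only because nothing
# is both a wolf and a tiger, so the antecedent is never satisfied.
print(all((not (x in W and x in T)) or (x in M) for x in D))     # True
```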

Monadic Quantification

Exercises 6-7 Translate the sentences into L2 using the given interpretation. D: People B: bicycles every day. H: is healthy. J: jogs every day. P: has high blood pressure. has a sedentary job. S: W: works in my office. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. Everyone who bicycles every day is healthy. Some who have sedentary jobs are not healthy. A person who is healthy and jogs every day does not have high blood pressure. A person who bicycles every day and does not have high blood pressure works in my office. Only those who are unhealthy have high blood pressure. No one who jogs every day bicycles every day. The only people in my office who are unhealthy are those that neither jog every day nor bicycle every day. Only healthy people bicycle every day. Some people who bicycle every day are not healthy although they do not have high blood pressure. Not everyone who jogs every day is healthy but everyone who bicycles every day is. Some who have sedentary jobs, but not those who bicycle every day, are not healthy. Everyone who has a sedentary job and neither bicycles every day nor jogs every day is unhealthy unless he does not have high blood pressure.

D: The Stick Figures to the left E: has an E above it. 173

Introductory Logic L: has an L above it Y: has a Y above it.

13. 14. 15. 16. 17. 18.

Figures with an E above them have either an L above them or a Y above them. Some figures with an E above them have a Y above them also. Only figures with an E above them have an L above them. The only figures that have an E above them that do not have an L above them have a Y above them. Only figures that have a Y above them have neither an E nor an L above them. Every figure has either an L or a Y above it, but some figures dont have an E above them. D: {1, 2, 3} A: is in {1, 2} M: is in {2} T: is in {2, 3}

19. 20. 21. 22. 23. 24.

A number is in {2} only if it is in both {1, 2} and {2, 3} Only the numbers that are both in {1, 2} and in {2, 3} are in {2}. Some numbers are both in {1, 2} and {2, 3}. Some numbers in {2, 3} are not in {1, 2}. Every number in both {1, 2} and {2} is also in {2, 3}. Every number in {2} is in {2, 3} and so are some numbers in {1, 2}. 8: Translation in to L2: Complex Sentences

The basic principles of translation into L2 are similar to the principles of translation into L1. The first principle is to proceed step by step, from the top down (or from the outside in). Always determine first what the principle logical operator of the whole sentence is. Next express the sentence in a sort of amalgam of L2 and English, and then proceed to translate the parts. Be sure to 174

Monadic Quantification insert appropriate parentheses at each stage; do not wait to put them in at the end. Doing some examples will help. Consider the following interpretation: D: E: L: P: S: W: Days of the week is a weekend day is a day on which logic class meets is a day on which you party. is a day on which you do lots of studying. is a weekday.

Suppose we have to translate the following sentence: Every weekday you do lots of studying, but every weekend day you party. We must first determine what the principle logical operator of the sentence is. Here it is obviously indicated by the English word, but so we know that this sentence is a conjunction. We write down this: Every weekday you do lots of studying & every weekend day you party. (We dont need to insert parentheses here, because they are outermost parentheses and may be dropped.) Now we translate the component sentences in any order. For each of these sentences we ask what the principle logical operator of the sentence is. The first one obviously has a universal quantifier as its main logical operator. To decide which quantifier is the main logical operator of a quantified sentence, ask whether the sentence is saying that all of some class of objects have some property or whether it is saying that one or more is. In this case, the first conjunct obviously says that all weekdays are days on which you do lots of studying. We can write the next approximation:

∀x(x is a weekday → x is a day on which you do lots of studying) & every weekend day you party.
Notice that the parentheses are essential at this step. The first conjunct can now be rendered entirely in L2:

Introductory Logic

∀x(Wx → Sx) & every weekend day you party.


A similar analysis on the second conjunct yields the full sentence:

∀x(Wx → Sx) & ∀y(Ey → Py)


(Note that it was not essential to choose a different variable for the second conjunct, x could have been used just as well.) Now consider a more complex sentence. Only weekdays on which logic class meets are days when you neither do lots of studying nor party. As we saw in the previous section, sentences that say only suchand-suches are so-and-sos, like this one, can be translated as denials of existential quantifications. We can thus render this at a first step like this: ~x( ~(x is a weekday on which logic class meets) & x is a day when you neither to lots of studying nor party.) (Note again that all the parentheses are essential here.) To say that x is a weekday on which logic class meets is to say Wx & Lx. To say that x is a day on which you neither do lots of studying nor party is to say ~(Sx w Px). So the whole is: ~x( ~(Wx & Lx) & ~(Sx w Px) ) Finally, consider this sentence: A weekday on which logic class meets is a day on which you do lots of studying if any day is. The first difficulty with this sentence is to determine the main logical operator. Is the sentence a conditional, There is a day on which you do lots of studying every weekday on which logic class meets is a day on which you do lots of studying, 176

Monadic Quantification or is the universal quantifier the main logical operator:

∀x(x is a weekday on which logic class meets → x is a day on which you do lots of studying if any day is)
Fortunately, these two are equivalent, so we needn't choose between them. Either is correct. We will continue with the second one. The antecedent of the conditional is easy enough:

∀x[(Wx & Lx) → x is a day on which you do lots of studying if any day is]
The consequent says that x is a day on which you do lots of studying if any day is; it's obviously a conditional. It says that if there is a day on which you do lots of studying, x is such a day. So it is translated (∃ySy → Sx) and the whole becomes

∀x[(Wx & Lx) → (∃ySy → Sx)]


Notice that we used a new variable, y, with the existential quantifier. This is not strictly necessary, but it helps to make the L2 sentence more readable. When translating from L2 to English, it is usually best to proceed from the bottom up (or from inside out). Consider this sentence of L2:

∀x[(Wx & Sx) → (Lx → ~Px)]


We can begin by translating the antecedent and consequent of the embedded conditional:

x[ x is a weekday on which you do lots of studying if

logic class meets on x then x is a day on which you dont party.] The sentence overall has the form of a universally quantified conditional, so we recognize that it is saying that all the things that satisfy the antecedent satisfy the consequent. Hence a first attempt at translating the whole might look like this: All days that are weekdays on which you do lots of studying are days such that if logic class meets on them, you dont 177

Introductory Logic party on them. However, this is not good English. Recognizing what the sentence is trying to say, we can rewrite it into better English, thus: Every weekday on which you do lots of studying is a day on which you dont party, provided logic class meets that day. Of course, this is not the only English sentence that we might use as a translation of our original sentence into L2. There are many subtly different ways to express sentences of L2 in English. For instance, we might have chosen this translation: Only days on which you dont party if logic class meets are weekdays on which you do lots of studying. This is a very different English sentence than the one preceding it, but for logical purposes they are equivalent, and either is a good translation of our original sentence.
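Once a complex sentence has been translated, its truth value on a small interpretation can also be checked mechanically. The sketch below is an illustration of mine, not part of the text, with an invented three-day "week"; it evaluates the sentence discussed above on that made-up interpretation.

```python
# Sketch (mine): evaluating ∀x[(Wx & Sx) → (Lx → ~Px)] on an invented
# three-day interpretation.

D = {"mon", "tue", "sat"}
W = {"mon", "tue"}        # is a weekday
S = {"mon", "tue"}        # is a day on which you do lots of studying
L = {"mon"}               # is a day on which logic class meets
P = {"sat"}               # is a day on which you party

def implies(p, q):
    return (not p) or q

value = all(
    implies(x in W and x in S, implies(x in L, x not in P))
    for x in D
)
print(value)   # True: on every studying weekday, if logic meets, you don't party
```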

Exercises 6-8 Translate the given sentences into L2 using the interpretation given. D: Students F: is a fraternity member is a student government office holder G: M: is male L: is a logic student is a sorority member S: W: is female 178

1. 2. 3. 4. 5. 6. 7. 8. 9.

Monadic Quantification All fraternity members are male and all sorority members are female. Some male fraternity members are student government office holders, but no male logic stud ents are. A male logic student is a fraternity member only if he is not a student government office holder. No sorority members are student government office holders, but some female logic students are. There are male and female logic students, and male and female student government office holders, but there are no male sorority members or female fraternity members. If a male student is a logic student, he is not also a student government office holder, but some female students are both logic students and student government office holders. Only male students are fraternity members, but both male and female students are student government office holders. A female student who is a sorority member is not a student government office holder unless she is a logic student.

Translate the following into good, idiomatic English, on the basis of the given interpretation. D: People A: claims to have been abducted by aliens in a flying saucer. B: believes that aliens regularly visit earth in flying saucers. C: is completely credible on the subject of aliens. G: is very gullible. is a respected physicist. P: 10. 11. 12. 13. 14. 15. 16. 17. ~y(Py & By) x[(Ax & Bx) Gx] ~z(Bz ~ Cz) w[(Aw w Bw) ~Cw] v[(Cv & Pv) ~(Av w Bv)] x(Gx & Ax) y[(Gy & ~Cy) & By] z(Cz ~Bz) ~w(Pw & Aw) ~v[(Pv & Cv) & Av] w x(Px & Gx)


Introductory Logic Appendix: Syntax Rules for L2

Predicate letters: Upper case Roman letters with or without Arabic numeral subscripts.
1. names: lower case Roman letters in the range a-u, with or without Arabic numeral subscripts. 2. variables: lower case Roman letters in the range v-z, with or without Arabic numeral subscripts.

Vocabulary:

Singular Terms:

Formulas: A quantifier symbol followed by a single variable is a quantifier. A predicate followed by any number of singular terms is a(n atomic) formula. If N and R are formulas, then so are: ~N (N & R) (N w R) (N R) (N R) any quantifier followed by N. Thats all. (Nothing is a formula unless it is so in virtue of the above rules.) Bondage and Freedom: Scope: The scope of an occurrence of a quantifier is the smallest sub-formula containing that occurrence of the quantifier, that is, the sub-formula of which it is the main logical operator. Bondage: an occurrence of a variable is bound if and only if it is within the scope of an occurrence of a quantifier containing the variable. Freedom: An occurrence of a variable is free if and only if it is not bound. Sentences: A sentence of L2 is a formula containing no free occurrences of any variable. 180
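These definitions can be made concrete with a small program. The sketch below is not part of the appendix: the tuple encoding of formulas, the tags "atom", "all", and "some", and the connective labels are my own representational choices, and only single-character variables are handled. It computes the variables with free occurrences, so a formula counts as a sentence exactly when the result is empty.

```python
# Sketch (mine): free variables of formulas encoded as nested tuples.

def free_variables(formula):
    """Return the set of variables occurring free in the formula."""
    op = formula[0]
    if op == "atom":                      # ("atom", "F", ("x", "a"))
        return {t for t in formula[2] if t in "vwxyz"}
    if op == "~":                         # ("~", subformula)
        return free_variables(formula[1])
    if op in ("&", "w", "->", "<->"):     # ("&", left, right), etc.
        return free_variables(formula[1]) | free_variables(formula[2])
    if op in ("all", "some"):             # ("all", "x", subformula)
        return free_variables(formula[2]) - {formula[1]}
    raise ValueError("not a formula")

# Fx -> Gx has x free; quantifying over x binds it and yields a sentence.
open_sentence = ("->", ("atom", "F", ("x",)), ("atom", "G", ("x",)))
print(free_variables(open_sentence))                    # {'x'}
print(free_variables(("all", "x", open_sentence)))      # set()
```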

Quantifier Symbols: Connectives: ~ & w Punctuation: ( )

Monadic Quantification Parse Trees: Parse trees can be made for sentences of L2 on the basis of the syntax rules above just as they were for L1. For L2, the leaves of the parse tree will be atomic formulas. For example, here is a parse tree for xy(Fx (Py & z(Gxyz Rxy))):
xy(Fx (Py & z(Gxyz ~Rxy))) y(Fx (Py & z(Gxyz ~Rxy)))

Fx (Py & z(Gxyz ~Rxy)) Fx Py (Py & z(Gxyz ~Rxy))


z(Gxyz ~Rxy)

Gxyz ~Rxy Gxyz ~Rxy Rxy


Introductory Logic

Chapter 7: Polyadic Quantification


In this chapter we will learn how to use 2-place, 3-place, and more place predicates in translating. In principle, many-place predicates dont raise any new issues beyond those discussed in Chapter 6. In practice, sentences involving many place predicates can be much more complicated that those involving only one-place predicates. 1: Predicates with more than one place We form logical predicates from English sentences by removing names from the sentences. For instance, from the sentence, Gertrude slapped Alonzo, we can get the predicate, slapped Alonzo. We may also get the predicate, slapped . This last is a two-place (or dyadic) predicate. Given this interpretation, D: People S: slapped a: Alonzo g: Gertrude Sga says that Gertrude slapped Alonzo, Sag says that Alonzo slapped Gertrude, xSxa says that someone slapped Alonzo, xSgx says that Gertrude slapped someone, and xySxy says that someone slapped someone. The numbered blanks that follow the predicate letters in an interpretation must always appear in numerical order from left to right, and never in any other order. S WRONG!!

While the numbers in the English predicates that appear on the right side in any interpretation may in principle appear in any order, it will generally be best to write them in right to left order as well. Thus the following is not wrong, but could easily be confusing: D: People S: slapped 182

Possibly confusing

Polyadic Quantification a: Alonzo g: Gertrude On this interpretation Sag would say that Gertrude slapped Alonzo. Of course, predicates may have more than two places. From Chelsea is a daughter of William and Hilary, we can get is a daughter of and , and from The intersection of Main and Clinton is south of the intersection of Norton and Clinton we can get, The intersection of and is south of the intersection of and . A careful reading of our syntax rules for L2 given in the appendix to chapter 6 will demonstrate that the following is an acceptable sentence of our language: Sagm & ~Sag Here we must treat the two occurrences of S as different predicates; hence an interpretation might look like this: D: People S: slapped S: stood between and The first occurrence of S in the sentence above is interpreted by the last line of the interpretation, because it has three terms following it; the second occurrence is interpreted by the middle line, because it has two terms following it. Such double uses of

predicate letters, though allowed, are likely to be confusing and should be avoided. Instead, use different subscripts on the
predicate letters, or better yet, use different letters. In this respect, English is more flexible than L2. In English, for instance, the verb read can be both transitive and intransitive. We can say any of the following: Alonzo read. Alonzo read The Catcher in the Rye. Alonzo read The Catcher in the Rye and Raise High the Roof

Beams, Carpenters.

Someone who understands one probably understands the others. In L2, on the other hand, when different occurrences of a predicate 183

Introductory Logic letter are followed by different numbers of terms, the two occurrences need have nothing to do with each other.

Exercises 7-1 Using the given interpretation, translate the following sentences into L2. D: People L: likes O: owes money to R: is rooming with a: Alonzo g: Gertrude 1. 2. 3. 4. 5. 6. 7. 8. Alonzo owes money to someone, but he doesnt owe money to Gertrude. Alonzo owes money to someone, but is not rooming with that person. Gertrude owes money to someone that she does not like. Gertrude doesnt like anyone Alonzo is rooming with. Alonzo likes everyone he rooms with. Gertrude is rooming with someone who likes someone who is rooming with Alonzo. Alonzo is rooming with someone he likes, but Gertrude is not rooming with anyone she likes. Alonzo owes money to someone and Gertrude owes money to someone, but Gertrude does not owe money to anyone to whom Alonzo owes money. D: Positive Integers B: is between and G: is greater than L: is less than R: The ratio of to is equal to the ratio of to a: 1 b: 2 c: 3 d: 4 184

Polyadic Quantification 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 3 is between 1 and 4. 2 is between 1 and 4 but not between 3 and 4. 3 is greater than 2 and 1 but not greater than 4 If 1 is greater than 2 then 2 is greater than 4. Either 1 is greater than 2 or 3 is greater than 2, but not both. The ratio of 1 to 2 is equal to the ratio of 2 to 4. If 3 is greater than 1 then the ratio of 3 to 2 is not equal to the ratio of 2 to 4. The ratio of 1 to 2 is equal to the ratio of 2 to 4 if and only if the ratio of 2 to 1 is equal to the ratio of 4 to 2. Some positive integer is greater than 4. There is some positive integer such that the ratio of it to 4 is equal to the ratio of 4 to 2. Every positive integer greater than 3 is greater than 2. 1 is greater than no positive integer. If a positive integer is greater than 1 and less than 3, then the ratio of this number to 1 is equal to the ratio of 4 to it. Some positive integer is greater than 4 and some positive integer is less than 4, but no positive integer is both greater than 4 and less than 4. 1 is less than 2 and 2 is less than 4 if and only if 2 is between 1 and 4. Any number between 1 and 3 is less than 4.

Using the given interpretation, translate the sentence of L2 into smooth, idiomatic English. D: People D: lives in the same dorm as . W: is in the same writing class as a: Alonzo g: Gertrude 25. 26. 27. 28. 29. 30. 31. 32. Wag Wag & ~Dag xWax & xWgx y(Wya & Wyg) y(Wya & Wgy) z(Dza & Wzg) w(Dwg Waw) v[(Wvg & Dvg) Wav] 185

Introductory Logic

2: Translating Quantifiers and Polyadic Predicates Some uses of quantifiers with polyadic predicates are straightforward. For instance, on this interpretation, D: People at the party last night K: kissed a: Alonzo g: Gertrude Kag says that Alonzo kissed Gertrude, xKxg that someone at the party kissed Gertrude, and yKay that Alonzo kissed someone who was at the party. Sentences with two identical quantifiers are also straightforward: xyKxy says that someone at the party kissed someone at the party, and zwKzw says that everyone at the party kissed everyone at the party. Sentences with two unlike quantifiers can be trickier. Consider xyKxy, for example. It says that someone at the party kissed everyone there. It needs to be distinguished from xyKyx, which says that someone at the party was kissed by everyone there. Both of these in turn must be distinguished from xyKxy and zwKwz; these say, respectively, that everyone at the party kissed someone or other, and that everyone at the party was kissed by someone or other. I have used the phrase someone or other rather than simply someone in order to try to counter the ambiguity of Everyone at the party kissed someone who was there. This English sentence might be thought to say that everyone at the party bestowed a kiss, without saying anything about whether all kissed the same person, or it might be understood as saying that everyone at the party kissed one and the same person (perhaps Gertrude). No sentence of L2 is ambiguous, and the two readings of the English sentence can be expressed unambiguously in these sentences of L2: English: Everyone kissed someone. Could mean either of: xyKxy yxKxy

The two L2 sentences differ only in the order of the initial quantifiers. It may help to spell out exactly what the two sentences 186

Polyadic Quantification of L2 say. The first begins with a universal quantifier. This sentence will be true if everything in the domain satisfies the open sentence following the quantifier, that is, yKxy. This sentence will be satisfied by things that kissed someone. If Alonzo, Bill, Carey, Donna, Edgar, Fran, and Gertrude were all the people at the party last night, then xyKxy will be true if and only if Alonzo kissed someone, Bill kissed someone, Carey kissed someone, and so forth, in other words, only if each of them kissed someone or other. The other sentence begins with an existential quantifier. It will be true if some one thing in the domain satisfies the open sentence following the quantifier, that is, xKxy. In other words, it will be true if there is one person that everyone kissed. Another complication concerns the difference between everyone and everyone else. Consider the sentence, xKxg. This says everyone kissed Gertrude. If it is true, then since Gertrude is someone, she must also have kissed Gertrude. This may not be what someone meant to say in saying Everyone kissed Gertrude. If your younger sister says, Leonardo is sexier than anyone, she almost certainly does not mean to claim literally that Leonardo is sexier than himself. She has gotten carried away in her enthusiasm; she meant to say that Leonardo is sexier than anyone else. People are typically careless and lazy in speech, and we often understand what they meant to say without noting carefully what they did say. Dont let this carelessness infect your speech. In this book, when we say everyone, we mean everyone. When we mean everyone else, we will say it.31 We can use binary predicates in more complicated sentences. For instance, suppose we add an additional predicate to our interpretation: D: People at the party last night C: is cute K: kissed a: Alonzo g: Gertrude Suppose we wish to translate into L2 the English sentence, We wont say it until the last chapter, because until then we will have no way of saying it in a formal language. 187

Introductory Logic Gertrude kissed everyone at the party who was cute. To translate this sentence, we must first recognize that it is a universal generalization. It says that everyone at the party who was cute was kissed by Gertrude. So we must begin by putting it into our basic form for universal generalizations:

x(x is cute x was kissed by Gertrude)


To say that x was kissed by Gertrude is obviously to say that Gertrude kissed x. So we have:

x(x is cute Gertrude kissed x)


which is obviously

x(Cx Kgx)32
Note that although Gertrudes name comes at the beginning of the English sentence, it comes very near the end of the L2 sentence. This is one of the many ways in which L2 is very different from English. We can also say that Gertrude was kissed by every cute person at the party. Following the same pattern as above, this will be:

x(Cx Kxg)
This differs from the previous sentence only in the order of the terms following the predicate K. In L2, who is kissing whom is determined by the order of the terms. In English, things work differently. We can say either, Gertrude kissed Alonzo, or Gertrude was kissed by Alonzo. The terms occur in the same order in English, but the passive voice and the preposition are used to change who is kissing whom. Suppose we want to say that Gertrude kissed everyone who kissed Alonzo. We analyze that sentence in the same way. It is a universal generalization, so we begin this:

z(z kissed Alonzo Gertrude kissed z)


Note that this sentence can be true even if Gertrude never kissed anyone. How? 188

Polyadic Quantification Once we have made this beginning, we can see easily how to finish:

z(Kza Kgz)
Sentences in L2 can express quite complicated ideas in a small space. Consider this sentence:

w(z(Kwz & Kza) Kgw)


This says that Gertrude kissed everyone in some category of persons at the party, but what category is it? It is the category of persons who satisfy z(Kwz & Kza), that is, the category of persons who kissed someone who kissed Alonzo. So the original sentence says that Gertrude kissed everyone who kissed someone who kissed Alonzo. This thought is quite complicated, and you may have to take a moment to think through the meaning of the L2 sentence. Polyadic predicates are useful for translating temporal facts. Consider the following interpretation: D: People at the party last night and times B: is earlier than C: is cute K: kissed at P: is a person T: is a time a: Alonzo e: Ethel g: Gertrude m: midnight last night Using an interpretation like this, we can say that Alonzo kissed Gertrude at midnight: Kagm We can also say that Alonzo kissed Gertrude before midnight. The key to saying this is that we understand it as saying that there is a time at which he kissed Gertrude and that time is before midnight. 189

Introductory Logic v[Tv & (Bvm & Kagv)] Note that we must specify that v is a time, which we do by including the conjunct, Tv. We do this because the English sentence we are translating says that there is a time ... Note also that on this interpretation we can no longer say simply that Alonzo kissed Gertrude; to say that we must say that Alonzo kissed her at some time:

x(Tx & Kagx)


We can say that Alonzo kissed Ethel before he kissed Gertrude. This is a bit tricky, because we have to consider the possibility that Alonzo kissed each of them several times. It will not do to say that there is a time at which Alonzo kissed Ethel and a time at which he kissed Gertrude, and the first of these is earlier than the second, for that could be true if he kissed Gertrude first, then Ethel, then Gertrude again. Probably the English sentence means that he kissed Ethel at some time before every time at which he kissed Gertrude. To translate this into L2, it will be helpful to proceed step by step. First we see that the general form is this:

x(x is a time & [Alonzo kissed Ethel at x & x is earlier than


every time at which Alonzo kissed Gertrude]) The first two conjuncts are translated easily thus:

∃x(Tx & [Kaex & x is earlier than every time at which Alonzo kissed Gertrude])


The right conjunct is a universal generalization, so the next pass gives us this:

∃x(Tx & [Kaex & ∀y(y is a time at which Alonzo kissed Gertrude → x is earlier than y)])
And now the whole sentence is easily seen to be:

∃x(Tx & [Kaex & ∀y([Ty & Kagy] → Bxy)])


Exercises 7-2

Translate the following into L2 using the interpretation given.

D: People
O: is older than
L: likes
T: is taller than
a: Alonzo
g: Gertrude

1. Alonzo is older than someone, but he's not older than Gertrude.
2. Alonzo is older than someone, but he is not taller than that person.
3. Gertrude is older than someone she does not like.
4. Gertrude doesn't like anyone Alonzo is taller than.
5. Alonzo likes everyone he is taller than.
6. Gertrude is taller than someone who likes someone who is taller than Alonzo.
7. Alonzo is taller than someone he likes, but Gertrude is not taller than anyone she likes.
8. Alonzo is older than someone and Gertrude is older than someone, but neither likes anyone.

D: People
F: is female
M: is male
P: is a parent of
S: is a spouse of
a: Alonzo
e: Eustace
g: Gertrude

(Note: for these exercises, assume the kinship system common in the United States.)

9. Alonzo is a male parent of someone, and Gertrude is someone's mother.
10. Alonzo is Gertrude's father, and Gertrude is Eustace's mother.
11. Alonzo is Eustace's grandfather.
12. Alonzo is Eustace's uncle. [Hint: Remember that Alonzo is not Eustace's father.]
13. Alonzo is married to Gertrude's sister, but not to Gertrude.

14. Gertrude is Eustace's mother-in-law.
15. Gertrude is Alonzo's sister-in-law. [Hint: there are two ways one can be someone's sister-in-law: one can be a sister of a spouse or a wife of a sibling. Also, one's sister-in-law is not one's spouse.]
16. Alonzo is Gertrude's first cousin. [Hint: first cousins are not siblings.]

Using the given interpretation, translate the given sentence into smooth, idiomatic English.

D: People
P: is a professor
S: is a student
A: admires
D: despises
T: teaches

17. x(Px yTxy)
18. x[Sx y(Py & Axy)]
19. x[Px y(Sy & Dyx)]
20. ~w[Pw & z([Sz & Twz] Azw)]
21. z[Sz w(Pw & Azw)]
22. xyAxy & xyDxy
23. ~v[Pv & z([Sz & Tvz] Dvz)]
24. x[Px & (y[Sy & Ayx] & z[Sz & Dzx])]

Translate the following into L2 using the interpretation provided.

D: People, automobiles, and times
C: is a car
B: bought at
F: is a Ford
L: is later than
M: is a Mazda
P: is a person
T: is a time
e: Ethel
g: Gertrude
b: Ethel's birthday (October 18)

25. Ethel bought a Ford on her birthday.


26. Ethel bought a Ford when Gertrude bought a Mazda.
27. Someone bought a Ford when Gertrude bought a Mazda.
28. Gertrude bought a car before Ethel's birthday.
29. Gertrude bought a used car. [Hint: understand a used car to be one that had been previously bought by someone.]
30. Ethel bought a car that Gertrude had previously bought.
31. Gertrude has never bought a Mazda.
32. Gertrude has always bought Fords. [Hint: this does not mean that every time is one at which Gertrude buys a Ford!]

3: Small domains with Polyadic Predicates

Small domains can be helpful in understanding sentences with polyadic predicates. Just as with 1-place predicates, small domains can help us understand sentences of L2 containing predicates of two or more places. Let us consider diagrams first. For our diagrams to accommodate predicates of more than one place, we need a way of representing such predicates in the diagrams. A natural way to represent such predicates is with arrows. For instance:

D: the stick figures to the left
F: An arrow goes from to
G: A broken arrow goes from to

It is a bit more difficult to accommodate three and four place predicates, but some variation of the following technique may do:

D: The stick figures to the left
F: An arrow goes from to and then to .


We can use our diagrams to illustrate the difference between ∀x∃yFxy and ∃y∀xFxy. Consider the following interpretation.

D: The stick figures to the left
F: An arrow goes from to

On this interpretation, ∀x∃yFxy is true. It says that from every stick figure in the diagram an arrow goes to some stick figure or other. But ∃y∀xFxy is false. It says that there is a stick figure to which an arrow goes from every stick figure. To make ∃y∀xFxy true, we'd need an interpretation like this, for instance:

D: The stick figures to the left
F: An arrow goes from to

These are not the only diagrams that make these sentences true. There are diagrams that make both sentences true and diagrams that make neither sentence true, as well as diagrams that make one or the other sentence true. Try to construct some of these other diagrams and figure out which of the sentences are made true on the corresponding interpretations. We can also use small numerical domains. If we want to use small domains in the positive integers we must have a way of representing relations. To spell out a relation, we must spell out the pairs for which it holds. We'll use the notation of pointy

brackets (<>) to indicate pairs, so <1, 2> will indicate the pair consisting of 1 and 2, in that order. (Note that these are ordered pairs. <1, 2> is not the same as <2, 1>.) A relation may be represented by the set of all pairs for which it holds. For instance, a two-place relation may be interpreted this way:

R: <, > is in {<1, 2>, <2, 1>}

This relation holds between 1 and 2, and between 2 and 1, but between no other pairs of numbers. A relation of more than two places can be represented in the same way, except that there will be more items between the pointy brackets. For instance, with a domain limited to {1, 2, 3},

S: <, , > is in {<1, 1, 2>, <1, 2, 3>, <2, 1, 3>}

could assign to S the same interpretation as would

S: + =

when limited to the same domain. If we number the figures in the four previous diagrams above from left to right, those four diagrams would correspond to the following numerical interpretations:

D: {1, 2}
F: <, > is in {<1, 2>}
G: <, > is in {<2, 1>}

D: {1, 2, 3}
F: <, , > is in {<1, 2, 3>}

D: {1, 2}
F: <, > is in {<1, 2>, <2, 1>}

D: {1, 2}
F: <, > is in {<1, 1>, <1, 2>}

Try to understand how these interpretations are related to those given by the diagrams. This provides a good exercise in understanding the abstraction involved in using L2.
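For readers who like to experiment, the set-of-pairs idea translates directly into a few lines of code. The following Python sketch is not part of the text and is offered only as an optional illustration; it encodes the third numerical interpretation listed above and checks the two sentences compared earlier in this section (the variable names are my own).

# Domain and a two-place predicate F, given as the set of ordered pairs for which it holds.
domain = {1, 2}
F = {(1, 2), (2, 1)}

# "From every item an arrow goes to some item or other"
forall_exists = all(any((x, y) in F for y in domain) for x in domain)

# "There is some item to which an arrow goes from every item"
exists_forall = any(all((x, y) in F for x in domain) for y in domain)

print(forall_exists)   # True
print(exists_forall)   # False

Changing F to {<1, 1>, <2, 1>}, so that every item sends an arrow to 1, makes the second sentence true as well.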

Exercises 7-3

Tell whether the following sentences are true or false on the given interpretations.

D: The stick figures to the left
F: has an F above it
G: An arrow goes from to

1. 2. 3. 4. 5. 6. 7. 8.

xyGxy yxGxy z(Fz wGzw) v(Fv & x~Gxv) yz[(Fy & ~Fz) & Gyz] wz(Fz & Gwz) ~vy[Fy & (Gvy & Gyv)] x[Fx & z(Fz & ~Gxz)]

D: The stick figures to the left M: has an M above it R: An arrow goes from to



9. 10. 11. 12. 13. 14. 15. 16.

xyRxy yxRxy z[Mz w(Rww & Rwz)] vz(Mz Rvz) ~wx(Rwx & Rxw) y~v(Mv & Ryv) z(~Mz wRzw) x[Rxx y(My & Ryx)]
D: {1, 2} {1,2} B: L: {<1, 2>, <2, 2>}

17. 18. 19. 20. 21. 22. 23. 24.

xyLxy yxLxy zy(By Lyz) xz[(Bx & Bz) & (Lxz & Lzx)] z[Bz w(Bw & Lzw)] ~y(By vLvy) w[Bw ~y(By & Lyw)] v~x(~Lxx & Lxv)
4: Ambiguities in English

English sentences involving quantifiers can be ambiguous. Sentences of L2 are never ambiguous, even though many sentences of English are. Sentences of L2 can often be used, by people who understand them, to explain clearly and succinctly the different possible readings of an ambiguous sentence. Negation in English is a fruitful source of ambiguity in quantified sentences. Consider this sentence: Everyone does not support the President. This might say that no person supports the President, or it might be taken to be the denial of the claim that everyone supports the President. The first reading of the sentence gets this translation, using the indicated interpretation:

D: People
S: supports
p: The President

∀x~Sxp
while the second reading of the sentence gets this translation (using the same interpretation):

~∀xSxp

These two sentences of L2 differ only in the placement of the tilde, but they make quite different claims. The first claims that no one supports the President; the second only that some people fail to do so. Of course there is no particular need, in English, to be ambiguous in this way. One can say either, Not everyone supports the President, or No one supports the President. Neither of these sentences is ambiguous. If one is asked to translate an ambiguous sentence of English into L2, one is confronted with a dilemma: no one translation is correct. The best one can do is offer one translation for each distinct reading the sentence has. Some ambiguities concern the order in which the quantifiers occur. Consider the following English sentence: Some person is hit by a car every day. There are four different ways of understanding this sentence, and each of them has a different translation into L2. The sentence might say that on each and every day a collision occurs between some car or other and some person or other. Given this interpretation,

D: Persons, objects, and days
C: is a car
D: is a day
H: hits on
P: is a person

This reading would be represented thus in L2:

∀x(Dx → ∃y∃z[(Py & Cz) & Hzyx])



But the sentence might also (less plausibly) assert that there is some one particular person who is involved in a collision every day; this would be translated thus:

∃x(Px & ∀y[Dy → ∃z(Cz & Hzxy)])


Yet again, the sentence might assert that there is a car that each day is involved in a collision with some person or other:

∃x(Cx & ∀y[Dy → ∃z(Pz & Hxzy)])


Finally, and perhaps least plausibly, the sentence might assert that there are a particular person and a particular car that collide each day:

∃x(Px & ∃y[Cy & ∀z(Dz → Hyxz)])


Not all of these readings of our original sentence are equally plausible, but each is a possible reading, and in some context would be the natural one.

Exercises 7-4

Translate the following sentences into L2 using the interpretation supplied. If a sentence is ambiguous, provide a translation for each of its possible readings. (Note: Not every sentence is ambiguous, but some are. Also note that a sentence is not ambiguous simply because it has more than one translation into L2. A genuinely ambiguous sentence must have two or more non-equivalent translations into L2.)

D: People
R: was in the room
S: saw in the corridor
g: Gertrude

1. Everyone in the room saw Gertrude in the corridor.
2. Everyone in the room did not see Gertrude in the corridor.
3. Not everyone in the room saw Gertrude in the corridor.
4. Everyone in the room saw someone in the corridor.
5. Everyone in the room did not see someone in the corridor.

6. No one in the room saw someone in the corridor.
7. No one in the room saw anyone in the corridor.



Chapter 8: Quantificational Relations


In this chapter we will learn how to define quantificational validity, quantificational inconsistency, and quantificational equivalence. We will also learn how to show that arguments are not quantificationally valid, how to show that a set of sentences is not quantificationally inconsistent (i.e., that it is quantificationally consistent), and how to show that two sentences are not quantificationally equivalent. We shall not learn how to show arguments valid, sets of sentences inconsistent, or pairs of sentences equivalent; the only technique for that is to provide derivations, and these are the topic of chapters 9 and 10.

1: Quantificational logical relations

Logical relations in L2 are defined much as we defined logical relations in L1. The major difference is that there is no shortcut involving truth tables. Once again, sentences of L2 are their own forms. Instances of forms are just sentences with interpretations. Hence we may define the logical relations this way:

Definition: An argument of L2 is quantificationally valid if and only if there is no interpretation on which the premises are all true and the conclusion false.

Definition: A set of sentences of L2 is quantificationally inconsistent if and only if there is no interpretation on which all of its sentences are true.

Definition: Two sentences of L2 are quantificationally equivalent if and only if there is no interpretation which makes one of them true and the other false.

Notice that these definitions apply just as well to L1 as to L2, but in the case of L1 we were able to simplify the survey of all the infinite numbers of interpretations by concentrating on the finite number of truth value assignments possible. No such simplification is possible for L2, and there is in general no mechanical procedure for testing whether logical relations hold. (The proof of

this is beyond the scope of this course.) To show that logical relations do not hold, we must use ingenuity in constructing interpretations that show this. To show that they do hold, we must provide a derivation (using the rules introduced in the next two chapters). To make it possible to check whether a given interpretation makes a particular sentence true or false, we shall require that whenever possible the interpretations be those using small finite domains that we have discussed previously, that is, either one whose domain is the set of stick figures in a drawing accompanying the interpretation, or one whose domain is a small set of positive integers. Let's see how we might go about showing an argument invalid. Consider the following argument:

∀x(Ex w Ix), ∃x(Ex & Ix)


The premise says that everything is either E or I; the conclusion says that something is both E and I. To show the argument invalid, we want to make the premise true and the conclusion false, so we want there to be nothing that is both E and I, but we want everything to be one or the other. First we must choose a size for our domain. A domain of one member is sufficient in this case. We cannot make this figure both E and I, because that would make the conclusion of the argument true. But to make the premise true, we'll have to make it one or the other. Let's make it E. That gives us the following diagram and interpretation:

D: The stick figure to the left
E: has an E above it
I: has an I above it

On this diagram the premise is true, because every figure has either an E or an I above it, as the premise says. But the conclusion is false, because no figure

has both letters above it, as the conclusion requires. We could just as well have used an interpretation in the positive integers, like this:

D: {1}
E: is in {1}
I: is in {}

Other arguments may require larger domains, and some trial and error may be involved in constructing an interpretation. For instance, consider the following invalid argument:

∀x(Lx ↔ Mx), ∃xMx,

∀x(Lx & Mx)

We look for a small domain in which this argument can be shown invalid. We might try a one-member domain, like the one to the left. But then the second premise will require us to put an M above that item, and the first will then require an L above it also, making the conclusion true, which is not what we want. To show this argument invalid, we will need a domain with at least two items in it. The diagram at the left can be used to provide such an interpretation:

D: The stick figures to the left
L: has an L above it
M: has an M above it

We can show sets of sentences consistent and pairs of sentences non-equivalent in a similar way, remembering which sentences are to be shown true and which false. For instance, to show the following set of

sentences consistent,

∃xYx, ∃xJx, ∀x(Jx → ~Yx)


all we need to do is find an interpretation which makes all three sentences true. Our interpretation will need something that is J and something that is Y, but since (according to the third sentence)

a J thing isn't Y, we'll need two distinct things. So this interpretation will work:

D: The stick figures to the left
J: has a J above it
Y: has a Y above it

Alternatively, this will work as well:

D: {1, 2}
J: is in {1}
Y: is in {2}

If we wish to show the two sentences,

∀x(Fx ↔ Gx)

∀xFx ↔ ∀xGx

non-equivalent, we need to find an interpretation that makes one of the two sentences true and the other false. A little thought will show that there is no way to make the left sentence (a thing is F if and only if it is G) true and the right sentence (everything is F if and only if everything is G) false. So we must make the right sentence true and the left sentence false. We can make the right sentence true in either of two ways: by making everything F and also G, or by making at least one thing fail to be F and at least one thing fail to be G. If we make everything both F and G, then both sentences will be true, so that is not the right strategy. Hence we must try to make at least one thing not F and at least one thing not G, in order to make the right sentence true. There are several ways to do this while making the left sentence false; here is one:

D: The stick figures to the left
F: has an F above it
G: has a G above it

Here's another, different from the first:

D: {1, 2}
F: is in {1}
G: is in {}
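When the domain is this small, the search for a suitable interpretation can even be done by brute force. The short Python sketch below is not from the text; it is an optional illustration that runs through every interpretation of the one-place predicates F and G over the domain {1, 2} and prints those on which the two sentences of this example disagree (the encoding and names are mine).

from itertools import chain, combinations

domain = [1, 2]

def subsets(s):
    # all possible extensions of a one-place predicate over the domain
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

for F in map(set, subsets(domain)):
    for G in map(set, subsets(domain)):
        left = all((x in F) == (x in G) for x in domain)                      # Ax(Fx <-> Gx)
        right = all(x in F for x in domain) == all(x in G for x in domain)    # AxFx <-> AxGx
        if left != right:
            print("F:", F, " G:", G)

One of the interpretations it prints is exactly the numerical interpretation given above, with F: is in {1} and G: is in {}.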

If all the predicates in our sentences are monadic (1-place), then if an argument is invalid, or a pair of sentences non-equivalent, or a set of sentences consistent, there will always be an interpretation with a finite domain on which the sentences are true and false as required to show that the logical relations do not hold.33 In some cases, the domain might be large; in general one can only say that if the sentences have n distinct predicates, there will be an interpretation in a domain of 2ⁿ or fewer items. In our exercises, however, we will not require finite domains of more than three items.

Exercises 8-1

For each of the following, provide an interpretation with a small, finite domain that shows the argument invalid.

1. 2. 3. 4. 5. 6. 7. 8. 9. 10.

xFx, x~Gx, x(Gx Fx) Fa, x(Fx Hx), xGx, x(Gx & ~Hx) x(Ax Cx), x(Cx ~Ax), ~xCx x[(Px w Qx) Rx], x~Px, x~Rx ~xQx x(Dx & Ex) w x(Rx & Ex), x(Ex Dx) x(Fx Gx), x(Hx Gx), x(Hx & Fx) x(Bx Cx), xBx, xCx x(Bx Cx), xCx, xBx x(Bx Cx), xBx, xCx x(Fx Gx), x(Hx ~Fx), x(Hx Gx)

Show that the following are not equivalent 11. 12. 13. 14. 15. 16.

xy(Mx Hy) xFx Ga xFx & xGx xFx w xGx x(Fx w Ga) x(Fx Gx)
33

xMx yHy x(Fx Ga) x(Fx & Gx) x(Fx w Gx) x(Fx w Gb) xFx xGx

The proof of this is beyond the scope of this book. See Bernays, P. and Schönfinkel, M., "Zum Entscheidungsproblem der mathematischen Logik," Mathematische Annalen, 99 (1928), 342–372.

Introductory Logic 17. x(Fx Gx) 18. x(Mx Nx)

y(xFx Gy) xMx xNx

Show the following sets of sentences are consistent. 19. 20. 21.

x(Ax Bx), x(Ax ~Bx) x(Fx Gx) xNx, x(Nx Gx), ~x(~Fx w Gx) (~xFx xFx) w ~Fa, ~zFz
2: Logical Relations with Polyadic Predicates

The definitions of the quantificational logical relations are the same for polyadic predicates as for monadic ones. Indeed, the definitions (given at the beginning of the previous section) make no mention of whether the predicates are monadic or polyadic. So we show that the relations do not hold for sentences in the same way, by finding appropriate interpretations. However, it can be more challenging to find the interpretations we want, because the sentences involving polyadic predicates can make more complex statements. For instance, suppose we wish to show this argument invalid:

∀x∀y(Fxy w Fyx)
∃x∀yFxy


We know that the interpretation we need will look something like this:

D: The stick figures to the left
F: An arrow goes from to

We need to figure out how many stick figures we need and exactly where to draw the arrows in the diagram. We should begin by noting that the premise requires that each item in the domain have an arrow going from it to itself. If this is not obvious, remember that the premise will be true if and only if the open sentence, ∀y(Fxy

w Fyx), is true of everything. Now if this open sentence is true of some item, say a, everything must either have an arrow coming to or an arrow from a. Everything, of course, includes a itself. So a must either have an arrow coming to itself from itself or an arrow going from itself to itself; that is, a must have an arrow from itself to itself.

D: The stick figures to the left
F: An arrow goes from to

But now we can see that we will need more than one stick figure to show the argument invalid. For each figure must have an arrow from itself to itself, and with one figure we cannot make the conclusion false. So let us try two figures. With two figures, we will need to have an arrow from each to itself and also an arrow from one to the other to make the premise true. But still the conclusion, which says that an arrow goes from some figure to all figures, is true, and we need to make it false. So we'll need to have three figures, each with an arrow going from it to itself, and each with an arrow going to one other, like this:

D: The stick figures to the left
F: An arrow goes from to

Now we have an interpretation on which the premise is true and the conclusion is false, so we have shown the argument invalid. Unfortunately, it is not always possible to provide an interpretation in a small, finite domain to show

that logical relations do not hold. A finite domain would always suffice for sentences with only monadic predicates, as we pointed out at the end of the previous section. But when polyadic predicates are involved, a finite domain may not suffice. For instance, consider the following set of sentences:

∀x∃yFxy
∀x∀y∀z[(Fxy & Fyz) → Fxz]
~∃xFxx


These sentences are consistent, but there is no interpretation having a finite number of objects in its domain that makes all of these sentences true. For an interpretation that makes them all true we need to have an infinite domain. For instance, on this interpretation all these sentences are true:

D: positive integers
F: <

Although the proof is beyond the scope of this course, it can be shown that every consistent set of sentences of L2 has an interpretation whose domain is the positive integers and which makes all the sentences of the set true. Most of our examples will require only small finite domains, however.

Exercises 8-2

For each of the following, provide an interpretation that shows the argument invalid. If possible, use an interpretation with a small, finite domain.

1. 2. 3. 4.

xSxa, xy(Sxy Syx), xSax x(Lx yMxy), x(~Lx & yMxy), x(Lx & yMxy) xyz[(Wxy & Wyz) Wxz), xyWxy, xy(Wxy &
Wyx) xGx, x(Gx Dxx), xy(Gx & Dxy)

Show that the following are not equivalent 5. 6. 7. 208

x[Fx y(By & Kxy)] x(Bx & yDyx) x(Bx & yDyx)

x[Fx y(By Kxy)] x(Bx & yDyx) x(Bx & yDxy)

8.

xy(Fx Kyx)

xy(Fx Kyx)

Show the following sets of sentences are consistent. 9. 10. 11. 12. 13. 14.

x(Cx yDxy), xy[(Cx & Cy) & ~Dxy] xy(Fxy Fyx), xy(Qy Fxy), ~xy(Qx & Fxy) x[(Rx w Px) Tx], x(Tx Vxx),~Vaa, x~Rx & x~Px xy~Lxy, ~x~Lxx xy(Hxy w Jxy), xy~Hxy, ~xyJxy xy(Mxy Nxy), xy[Mxy & (~Nxy w ~Nyx)]

3: Confinement

We can more easily understand some complex sentences of L2 by using confinement equivalences. Some sentences of L2 are difficult to understand because they often have no natural English equivalent. For instance, consider this sentence:

xy(Fx Gy)
It may not be obvious at first what this sentence says. But this sentence is equivalent to a simpler one:

xFx yGy
which is easier to understand. There is a group of equivalences which can help simplify some sentences of L2 in this manner. You will find it useful to master the following confinement equivalences. By using them you can transform some complex sentences of L2 into sentences that are simpler to understand.

NOTE: in the following, Nν is a formula containing the variable ν free; R is a formula containing no free ν. In each case the sentence in the left column and the corresponding sentence in the right column are equivalent.

∀ν(Nν & R)        ∀νNν & R
∀ν(R & Nν)        R & ∀νNν
∃ν(Nν & R)        ∃νNν & R
∃ν(R & Nν)        R & ∃νNν
∀ν(Nν w R)        ∀νNν w R
∀ν(R w Nν)        R w ∀νNν
∃ν(Nν w R)        ∃νNν w R
∃ν(R w Nν)        R w ∃νNν
∀ν(Nν → R)        ∃νNν → R
∀ν(R → Nν)        R → ∀νNν
∃ν(Nν → R)        ∀νNν → R
∃ν(R → Nν)        R → ∃νNν





Note that there are no confinement equivalences of the above pattern for ↔.34 Further note that in all but two cases, the quantifier is simply attached to the subformula containing the free variable. However, if the antecedent of a conditional is involved, the quantifier must change type, from universal to existential or from existential to universal. Keep in mind the following points about the confinement equivalences. First, they only apply to quantified formulas in which a binary connective connects two formulas and only one of the two has the quantified variable free. Thus no confinement principles apply to these sentences:

x(Fx Gx), xy[(Fx & Gy) (Fy & Gx)], x~(Fx & Ha)
Second, since the confinement equivalences concern equivalence, they may apply to subformulas as well as whole sentences. Thus they license the claim that the following pairs are equivalent:

xy(Fx Gy) x(Fx yGy) ~x(Fx Ja) ~(xFx Ja) xBx & x(Mx & Qa) xBx & (xMx & Qa)
Finally, notice that the confinement equivalences may apply more than once to a sentence, as with the first example in this

34
Note that ∀ν(Nν ↔ R) is equivalent to (∃νNν → R) & (R → ∀νNν), but this does not fit the pattern above. Similarly, ∃ν(Nν ↔ R) is equivalent to (∃νNν & R) w (∃ν~Nν & ~R).


section:

xy(Fx Gy)
is equivalent to

x(Fx yGy)
which is in turn equivalent to

xFx yGy
Keeping the confinement equivalences in mind sometimes makes it easier to understand complex sentences of L2 and hence to find interpretations that can show arguments invalid, sets of sentences consistent, or pairs of sentences non-equivalent. For instance, suppose we wish to show that the following sentences are consistent:

x(Fx Ga), x(Ga Hx), y(~Ga & ~Fy)


It is easier to see what an interpretation must be like to make these three sentences true if we consider the following equivalents:

xFx Ga, Ga xHx, ~Ga & y~Fy


Since the first of our original sentences was a universally quantified conditional whose consequent did not contain the quantified variable, we could confine the quantifier to the antecedent. In confining to the antecedent of the conditional, the quantifier must change from universal to existential. The second sentence is also a conditional, but there the quantifier is confined to the consequent, so it does not change type. Finally, the third sentence is an existentially quantified conjunction. Here the quantifier can be confined to the right conjunct. The resulting sentences are easier to understand than the originals. We can easily see that Ga must be false to make the third sentence true. If Ga is false, then automatically the second sentence will be true. Finally, if Ga is false, ∃xFx must be false as well, to make the first sentence true. So we need an interpretation in which one thing, named a, is not G and nothing is F, like this

one:

D: The stick figure to the left
F: has an F above it
G: has a G above it
a: The stick figure to the left
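Because the confinement equivalences are claims about every interpretation, they too can be spot-checked by brute force over a small domain. The following Python sketch is not part of the text; it is an optional illustration (the encoding is mine) that verifies that ∀x(Fx → Ga) and ∃xFx → Ga never disagree on an interpretation whose domain is {1, 2}. A check over one small domain is of course evidence, not a proof.

from itertools import combinations

domain = [1, 2]
extensions = [set(c) for r in range(len(domain) + 1) for c in combinations(domain, r)]

agree = True
for F in extensions:              # every possible extension of F
    for Ga in (True, False):      # every possible truth value for Ga
        left = all((x not in F) or Ga for x in domain)       # Ax(Fx -> Ga)
        right = (not any(x in F for x in domain)) or Ga      # ExFx -> Ga
        agree = agree and (left == right)

print(agree)   # True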

Exercises 8-3

Find interpretations showing the following sentences consistent. Use the confinement equivalences to simplify the sentences, where possible.

Monadic Exercises:

1. 2. 3. 4.

x(Fx w yGy), y(Ga & Fy), z~Fz x(Nx & Gd), x(Gx Ld), x~(Nx & Lx) x(~Pj w Bx), x(Bj & Px), x(Bx Pj) xy(Hx w My), x[Hx y(My w Jn)], x(~Mx & ~zJz)

Polyadic Exercises: 5. 6. 7. 8. ~x(yAxy Bc), x(Bx w Acc), x~(Bx & Cc) xy(Hxa Gxy), xy(GayGxx), x(Hax w Hxa) xy(Rxy Mx), z~(Mz w Ma), xyz(Rxy Rzx) xy(Fxy Ga), x(Gx Ha), x~Gx

4: Expansion in Finite Domains

If we want to understand what a quantified sentence says on an interpretation with a small, finite domain, sometimes it helps

Quantificational Logical Relations to find a quantifier-free sentence that says the same thing, on that interpretation. For instance, consider the sentence, xFx. This says that everything in the domain is F. If there are only two items in the domain of an interpretation, like this one: D: {The Pope, Madonna} F: is a Roman Catholic then xFx will be true if and only if both the Pope and Madonna are Roman Catholics. If we add to our interpretation names for the two members of the domain: D: {The Pope, Madonna} F: is a Roman Catholic a: The Pope b: Madonna then we can say that on this interpretation xFx is true if and only if Fa & Fb is true. Similarly, xFx will be true if and only if at least one of the Pope and Madonna is a Roman Catholic. In other words, it will be true if and only if Fa w Fb is true. The same thing is true of more complicated quantified sentences. For instance, consider the sentence x(Fx & xGx). On an interpretation with only two items in its domain, named a and b, this will be true if and only if (Fa & xGx) w (Fb & xGx) is true. We can expand this sentence further. The xGx that occurs twice in the sentence immediately above will be true in the domain we are talking about if and only if Ga & Gb is true there. So we may say that our original sentence is true in this domain if and only if (*) (Fa & (Ga & Gb)) w (Fb & (Ga & Gb))

is true there. Although this sentence is much longer than the original, it is in many ways more tractable. If we want to know how to make it false, for example, we can use a truth table. You can readily see that if Fa and Gb are both false on whatever interpretation we have chosen, then the whole sentence (*) is false. And since (*) is true on this interpretation if and only if 213

Introductory Logic our original sentence is, you can see that our original sentence, x(Fx & xGx) must be false in that case as well. This procedure can be generalized. It is easy to find a quantifier-free sentence that is true in a particular finite domain if and only if a given quantified sentence is true there. You can follow these steps: 1. 2. Decide how many items you want in your domain. You may pick any positive integer, but once you get beyond three, things can become very unwieldy. Select names for the items in the domain. Include any names that occur in the sentence (or sentences) you are interested in. You should choose as many names as you have items in your intended domain. Beginning with the leftmost quantifier of the given quantified sentence, replace each subsentence beginning with a quantifier as follows: If the quantifier is a universal quantifier, replace the subsentence by the conjunction of all the sentences that can be made by removing the quantifier and substituting one of the names you have chosen for the quantifiers variable at each of its free occurrences. If the quantifier is an existential quantifier, replace the subsentence by the disjunction of all the sentences that can be made by removing the quantifier and substituting one of the names you have chosen for the quantifiers variable at each of its free occurrences.

3. 4.

5.

This process is called expansion in a finite domain. Lets do an example, following the rules explicitly. Well expand this sentence:

x[Fx y(Gy & Hr)]


1. 2. 3. 4. Lets use a domain of two elements. Since the name r occurs in the sentence, we must use it as one of the two names. We will use a for the other. The leftmost quantifier is the x. After we follow rule 4 above for it, we can use rule 5 on the subsentence beginning with y. The universal quantifier is expanded with a conjunction:

214

Quantificational Logical Relations [Fa y(Gy & Hr)] & [Fr y(Gy & Hr)] 5. Now the quanfied subsentences are expanded, one at a time: [Fa ([Ga & Hr] w [Gr & Hr])] & [Fr y(Gy & Hr)] [Fa ([Ga & Hr] w [Gr & Hr])] & [Fr ([Ga & Hr] w [Gr & Hr])] We dont have to choose a two-membered domain for an expansion. We can chose a domain with only one member. If we expand x[Fx y(Gy & Hr)] in a one-membered domain, we get the following: Fr (Gr & Hr) If we want we can expand in a three-membered domain, with members named a, b, and r: [Fa ([Ga & Hr] w [(Gb & Hr) w (Gr & Hr)])] & ([Fb ([Ga & Hr] w [(Gb & Hr) w (Gr & Hr)])] & [Fr ([Ga & Hr] w [(Gb & Hr) w (Gr & Hr)])]) We could expand the sentence in a domain of four members, or indeed, of any positive integral number of members, but obviously, for domains larger than three members the resulting expansions would be very cumbersome. The method of expansion in finite domains gives us a way of producing quantifier-free sentences that will have the same truth values as the quantified ones on any interpretations where all the individuals in the domain are named by the names used in the expansion. When a sentence has been expanded in a finite domain, we can use a truth table to determine how to make it true or false. For instance, consider the sentence we have been using as our example, x[Fx y(Gr & Hr)]. The expansion in a one membered domain we found above to be Fr (Gr & Hr). If you make a truth table, you can easily find many assignments of truth values to the atomic sentences of the expansion that make the sentence true, for instance: Fr : T Gr: T Hr: T 215

With this information, we can easily construct an interpretation that makes the sentence (and hence the original quantified sentence) true. We start by using our basic pattern for interpretations in small finite domains. We will use a stick-figure diagram:

D: The stick figure to the left
F: has an F above it
G: has a G above it
H: has an H above it
r: The stick figure to the left

To get this diagram we start with the single stick figure. We put an F above the object named by r because we have assigned Fr the value True. If we had assigned it the value False, we would not have put the F there. Similarly for G and H. Constructing an interpretation involving numbers would be equally straightforward. In this case, our domain will consist of the number 1. The extensions of the predicates will contain just the objects needed to make the predicates true or false, according to the truth value assignment we have above. Here is the resulting interpretation:

D: {1}
F: is in {1}
G: is in {1}
H: is in {1}
r: 1
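The expansion procedure itself is mechanical enough to hand to a computer. The Python sketch below is not from the text; it is one possible encoding (the tuple representation and helper names are my own) of steps 3 through 5: universal quantifiers become conjunctions of instances, existential quantifiers become disjunctions. Run on the example sentence ∀x[Fx → ∃y(Gy & Hr)] in a two-membered domain named by a and r, it prints the same expansion we produced by hand, with "->" standing in for the conditional arrow and round brackets throughout.

def expand(formula, names):
    """Return a quantifier-free string equivalent to formula in a domain named by names."""
    op = formula[0]
    if op == 'all':                      # universal quantifier: conjunction of instances
        _, var, body = formula
        return '(' + ' & '.join(expand(substitute(body, var, n), names) for n in names) + ')'
    if op == 'exists':                   # existential quantifier: disjunction of instances
        _, var, body = formula
        return '(' + ' w '.join(expand(substitute(body, var, n), names) for n in names) + ')'
    if op in ('&', 'w', '->'):           # binary connectives
        return '(' + expand(formula[1], names) + ' ' + op + ' ' + expand(formula[2], names) + ')'
    if op == '~':
        return '~' + expand(formula[1], names)
    return op + ''.join(formula[1:])     # atomic formula such as ('F', 'a') or ('H', 'r')

def substitute(formula, var, name):
    """Put name in for the free occurrences of var; a quantifier on var rebinds it."""
    if isinstance(formula, tuple):
        if formula[0] in ('all', 'exists') and formula[1] == var:
            return formula
        return tuple(substitute(part, var, name) for part in formula)
    return name if formula == var else formula

example = ('all', 'x', ('->', ('F', 'x'),
           ('exists', 'y', ('&', ('G', 'y'), ('H', 'r')))))
print(expand(example, ['a', 'r']))
print(expand(example, ['r']))            # the one-membered expansion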

We can use expansion in a finite domain to find interpretations that show arguments invalid. Suppose we wish to show the following argument invalid:

x(Rx Kx), xKx, xRx


We begin by guessing how big the domain must be for the interpretation we seek. Well guess that only one member will be needed. Next we expand each of the sentences in a domain of 216

Quantificational Logical Relations only one member (well pick a as the name of the one member). We get these sentences: Ra Ka, Ka, Ra We must find a way to make the conclusion false and all the premises true. Obviously, this assignment of truth values will do: Ka: T Now we construct an interpretation: D:The stick figure to the left K: has a K above it R: has an R above it Alternatively, we could use this D: {1} K: is in {1} R: is in {} This interpretation will show our original argument invalid, because it will make our original premises true and our original conclusion false. We could also have found an interpretation with a domain of two members. If we expand our original argument in a domain of two members, we get this: (Ra Ka) & (Rb Kb), Ka w Kb, Ra w Rb Obviously, we can make the premises true and conclusion false by making both Ra and Rb false, and making one or both of Ka and Kb true. This illustrates an important point about L2 : If a sentence is true on an interpretation with a given sized domain, then for any larger domain size there will be an interpretation that also Ra: F

217

Introductory Logic makes it true.35 We can obviously show sets of sentences consistent, or pairs of sentences non-equivalent by the same method. The only difference is that instead of making a conclusion false and premises true, we make all sentences true to show consistency, or make the two sentences have different truth values to show non-equivalence. This method has limitations. Although many aspects of expanding in finite domains are purely mechanical, expanding in a finite domain is not a purely mechanical method for checking arguments for validity. There are two reasons for this. First, as mentioned at the end of the previous section, there are some arguments that can only be shown to be invalid by considering domains with an infinite number of members. Second, even if a finite domain is available, there is no convenient general method other than trial and error for finding out how big it must be. You must make a good guess, or try various possibilities. Exercises 8-4: Expand the following sentences in a domain of one member: 1. 2. 3. 4.

x(Fx Grx) y(My & Lyy) xFxc xRcxx x(yGsy yPyx)

Expand the following in a domain of two members: 5. 6. 7. 8.

x(Mx w yFxy) z[Bz w(Jw Swz)] xy(Dxt Wxy) vz(Bzq & Pjv)

Expand the following in a domain of three members: 9. 10.

xyGxy y(Vy & wLyw)

This is not difficult to prove, but the proof is just beyond the scope of this course. 218

35

Quantificational Logical Relations Use the method of expansion in finite domains to show that the following sets of sentences are quantificationally consistent: 11. 12. 13. 14.

x(Cx Mx), y~My, z~Cz x(Wx Rx) yWy, z~Rz xyHxy, x(Hxx Zx), xZx x(Ufx Ax), ~zAz, yUfy

Use the method of expansion in finite domains to show that the following arguments are quantificationally invalid: 15. 16. 17. 18.

w(Lw & Dw), ~v(Lv Dv), x(Dx & ~Lx) x(Mx Px), xMx, xPx x(yHxy yHyx), ~zHzz, xyHxy x(Rx & ySxy), ~yxSxy, x~Sxx

Use the method of expansion in finite domains to show that the following pairs of sentences are not quantificationally equivalent: 19. 20. 21. 22.

x(Mx & Cx) xyWxy x(yQxy yKxy) xy(Oxy Txy)

~x(Mx Cx) xyWxy xy(Qxy Kxy) y(xOxy xTxy)

219


Chapter 9: Quantificational Derivations


In this chapter we shall learn a basic set of derivation rules for quantifiers. We will learn introduction and exploitation rules for the existential quantifier, an exploitation rule for the universal quantifier, and a new form of derivation, universal derivation, that is used to derive universally quantified sentences. We shall also discuss strategies for completing derivations. The only method for showing arguments valid, sentences equivalent, or sets of sentences inconsistent in L2 is to provide an appropriate derivation or derivations. Because L2 has quantifiers added to it, we must add to the derivation system D1 some rules for quantifiers to get a derivation system, D2, for L2. D2 contains all the rules of D1, and all of the derivation techniques for D1 apply to D2 as well.

1: Instances of quantified sentences

The derivation rules refer to instances of quantified formulas. In order to explain the derivation rules, we must first learn the concept of an instance of a quantified sentence. A quantified sentence is one whose main logical operator is a quantifier. An instance of a quantified sentence is a sentence obtained from the quantified sentence by dropping the initial quantifier and replacing each free occurrence of the quantifier's variable with a name (the same name at all occurrences). A quantified formula can have as many instances as there are names in L2. Here are some examples of quantified sentences with some of their instances:

Sentence            Instances
xFx                 Fa, Fb, Fc
zGz                 Gu, Gt, Gs
xAxd                Aad, Abd, Add
x(Bx Cx)            Ba Ca, Bf Cf
xyGxy               yGay, yGby, yGgy
x(Fx xGx)           Fa xGx, Fc xGx


Exercises 9-1

In each of the following exercises, tell which of the sentences are instances of the given sentence.

1. The instances of x(Fx & Gx) are:
a. Fb & Gb
b. Fx & Gx
c. x(Fa & Gx)
d. Fa & Gb
e. Fc & Gc

2. The instances of x[(Hx & Rc) Mx] are:
a. (Ha & Rc) Ma
b. (Hb & Rb) Mb
c. (Hc & Rc) Mb
d. (Hc & Rc) Mc
e. (Hc & Rb) Mb

3. The instances of x[(Ax & Bx) w y(Gx & Hy)] are:
a. (Ax & Bc) w y(Gx & Hy)
b. (Ac & Bc) w y(Gb & Hy)
c. (Am & Bm) w y(Gm & Hy)
d. (Af & Bg) w y(Gf & Hy)
e. (Ar & Br) w (Gr & Hc)

4. The instances of zv(Jz [Fz w Kv]) are:
a. Ja (Fb Kb)
b. Jg (Fg Kv)
c. z(Jz [Fz w Ke])
d. v(Jt [Ft w Kv])
e. v(Jn [Fv w Kv])

2: The Universal Exploitation and Existential Introduction rules

The Universal Exploitation rule is simple and intuitive. This rule says that from any universally quantified sentence we can infer any of its instances. Thus from xFx we may infer Fa or Fb or Fp or any other instance. We can use the rule in this derivation:

1. x(Hx Mx) P

Introductory Logic 2. Hs 3. SHOW Ms +))))))))))))))), 4. *Hs Ms * 5. *Ms * .)))))))))))))))-

P DD 1, E 2, 4, E

This derivation corresponds to the well-known argument, All humans are mortal; Socrates is human; therefore Socrates is mortal. The Universal Exploitation rule corresponds to our ordinary ways of reasoning. If everyone (or everything) has some property, then surely anyone (or anything) we name has it. If every dog barks, then surely if Fido is a dog, Fido barks. If every car has a battery, then surely my car has one. We can use Universal Exploitation to infer any of the following from ∀x(Fx & Ga):

Fb & Ga
Fa & Ga
Fm & Ga

Note that Universal Exploitation is a rule of inference. Hence it can only be used when the Universal Quantifier is the main logical operator of the sentence. That means that Universal Exploitation cannot be used to derive anything from any of the following: ~xMx yz(Kx & Gz)

zPz yHy

Fg & xPx

We can represent the Universal Exploitation rule with a diagram in our usual fashion. To do this, well need to explain some notation. We will use a Greek letter < to stand for variables of L2, and the Greek letter 0 to stand for names of L2. Well use the notation N[< > 0] to stand for the result of replacing all free occurrences of < in N by 0. Then we can diagram Universal Exploitation thus:

<N
))))))) N[< > 0]

Existential Introduction is also a simple and intuitive rule. Existential Introduction allows us to infer an existentially quantified sentence from any one of its instances. Thus from Fa 222

Quantificational Derivations we may infer xFx or yFy or zFz, for instance. We can use the rule in this derivation: 1. x(Lx & Mg) 2. SHOW x(Lb & Gx) +))))))))))))), 3.*Lb & Mg * 4.*x(Lb & Mx) * .)))))))))))))P DD 1, E 3, I

The Existential Introduction rule corresponds to our ordinary ways of thinking. If we know that Alonzo is a logic student, then we can infer that someone is a logic student. If we know that Alpha Centauri is a star that is four light years from earth, we know that some star is four light years from earth. We can use Existential Introduction to infer any of the following from Ga & Wb:

x(Ga & Wx)

y(Gy & Wb)

z(Ga & Wb)

These are all correct because Ga & Wb is an instance of each of those sentences; that is, you can get Ga & Wb from each of those sentences by dropping the initial quantifier and replacing the variable by the appropriate name. Like Universal Exploitation, Existential Introduction is a rule of inference. Whenever it is used the existential quantifier that is introduced must have scope over the whole sentence. So none of the following can be inferred by Existential Introduction: ~xFx

yBy yMy

Vxy(Hx & Ty)

We can represent the Existential Introduction rule with a diagram in our usual fashion. Using the notational conventions we used above for Universal Exploitation, the diagram looks like this:

I N[< > 0]
)))))))))

< N

Exercises 9-2 223

Introductory Logic Provide derivations using the new rules that show the following arguments valid: 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.

x(Fx w Gx), x~Gx, Fb x(~Mx Lxa), x(Lxx Px), ~Ma Pa Fa & Ga, zGz Fg & (yGy Hg), Gr, z(Fz & Hz) xMx w ~yPy, ~(Pq Mg), xMx ~(Bo Si), ~(Lo w Si), vBv xy[(Jx & Ky) (My & Nx)], Ja & Kb, y(Mb & Ny) x[y(Oy & Px) Hx], Ot & Pt, zHz x(Kx & Mx) yGy, xy[Nx (Kx & My)], Np, Gm w(Cw Dw), Cc, zDz
3: Existential Exploitation

The instances of an existentially quantified sentence don't follow from it. We would like to make inferences from existentially quantified sentences. For instance, from the premise that all murderers should go to jail and the premise that someone is a murderer, we would like to be able to infer that someone should go to jail. A derivation to show that this follows would start like this (using the obvious interpretation): 1. x(Mx Jx) 2. xMx 3. SHOW zJz P P

But how shall we continue the derivation? If we somehow could take it for granted that Colonel Mustard was the murderer in question, then we could complete the derivation thus: 1. x(Mx Jx) 2. xMx 3. SHOW zJz +))))))))), 4. *Mc * 7. *Mc Jc * 8. *Jc * 9. *zJz * .)))))))))224 P P DD ??????? 1, E 5, 6, E 7, I

But, of course, we can't simply assume that Colonel Mustard was the murderer; the murderer might well have been Ms. Scarlet, or Professor Plum instead. The instances of an existentially quantified sentence don't follow logically from it. If all we know is that someone is a murderer, we are not entitled to conclude that anyone in particular is. Even though the instances of an existentially quantified sentence don't follow from it, we can still sometimes pretend that they do. If you look back at the derivation, you will see that it doesn't matter which name we used; the same conclusion could have been derived if we had assumed that Ms. Scarlet was the murderer, or that Professor Plum was the murderer, or even that the President was the murderer. This observation suggests a way we might safely carry out the derivation: simply assume that we can name the unknown individual with a particular name, and proceed with the derivation! Obviously, this procedure might lead to trouble if it were not used carefully: we mustn't assume that the individual is someone about whom we are trying to prove something, and we mustn't assume that the individual is someone about whom we already know something that we don't know about everyone. Fortunately, we can guarantee that we don't get into trouble by imposing a restriction on the names we use: we may not use a name that has appeared already in the derivation.36 A careful statement of the rule we have been discussing, the Existential Exploitation rule, is in order. The rule allows us to infer an instance from an existentially quantified sentence,

provided the name we introduce does not appear on any earlier line of the derivation. Note that the reference to earlier lines of the derivation here is not limited to accessible lines. In particu-

lar, the name also must not occur in lines containing uncanceled SHOW. The Existential Exploitation rule is very different from any of the rules we have discussed before, and very different from the Existential Introduction and Universal exploitation rules. Existential Introduction and Universal Exploitation can be used whenever we want, without restriction, because instances follow from a universally quantified sentence, and existentially

36
This condition is more restrictive than necessary, but it is simple. A less restrictive condition would be more complicated.

quantified sentences follow from their instances. Since the instances of an existentially quantified sentence are not logical consequences of it, when we use the Existential Exploitation rule we are inferring lines that do not follow from earlier lines. In fact, the lines are really assumptions, and in a more formal treatment would be treated as such. But we can allow ourselves to infer lines using Existential Exploitation without fear that we will reach conclusions (canceled SHOW lines) that do not follow from premises, provided we observe the restriction that the name chosen in a use of Existential Exploitation does not appear on any earlier line of the derivation.
The Existential Exploitation rule seems to correspond with some ordinary ways of thinking. If we wanted to reason about murderers, as above, we might start out like this: Someone is a murderer; for convenience let's call him Clive. An informal version of the derivation we gave above might go like this: All murderers should go to jail. Someone is a murderer. For convenience, let's call that person Clive. So Clive is a murderer. By our first premise, if Clive is a murderer, he should go to jail. So he should go to jail. And since Clive should go to jail, someone should go to jail.

Existential exploitation, like all the rules of this chapter, is a rule of inference. It can only be used when the existential quantifier is the main logical operator of the sentence. Thus it cannot be used to infer anything from any of these sentences: ~xFx xFx yGy

xyJxy

Fb w xTx

We can represent the Existential Exploitation rule in a diagram, using the notation we have introduced earlier in this chapter:

<N
))))))) N[< > 0] provided that 0 is new to the derivation

Note that we include the restriction with the diagram to emphasize its importance. The restriction on the use of Existential Exploitation can lead 226

Quantificational Derivations to troubles in derivations. One common error can be seen here: 1. x(Mx w Dx) 2. x~Mx 3. SHOW xDx +))))))))))))), 4.*Ma v Da * 5.*~Ma * 6.*Da * 7.*xDx * .)))))))))))))P P DD 1, E 2, E <<< WRONG!! 4, 5, wE 6, E

The use of Existential Exploitation in line 5 is incorrect: a already appears in the derivation (on line 4). To use Existential Exploitation, a new name must be chosen, such as b. Unfortunately, if the sentence on line 5 is ~Mb, it cannot be used with line 4 to infer Da on line 6. The remedy is simple: one must reverse the order of lines 4 and 5. If ~Ma is derived first, the restriction on the use of Existential Exploitation is not violated. Since there is no restriction on the use of Universal Exploitation, it can be used to get Ma v Da any time in the derivation. This example illustrates the following rule of thumb: When possible, use ∃E before ∀E. The reason for this rule is simple, though the rule is easy to forget. Often you want to get the same name as a result of Existential Exploitation and Universal Exploitation, so that the results can be used together with sentential rules to derive something. Unless Existential Exploitation is used first, the restriction on its use will prevent using the same name that is used in Universal Exploitation.

Exercises 9-3

Provide derivations showing the following arguments valid.

1. 2. 3. 4. 5. 6.

x(Ax Bx), yAy, zBz xy(Rx & Sy), yx(Rx & Sy) y(Ga & Fy), xy[(Gx & Fy) (Mx & Sy)], Ma & zSz x(Fx & Gx), xFx & xGx x(Fx Ga), xFx Ga xMx w xLx, x(Mx w Lx)
227

7. x~Dx, x(Dx yDy)
8. xPx, ~x~Px
9. xQx, ~x~Qx
10. x(Tx yLy), xTx yLy
11. x(Fa & Mx), xy[(Fx & My) wBw], zBz
12. x(yBxy yLyx), xy[(Mx & My) Bxy], zMz, xyLyx

4: Universal Derivation

There is no Universal Introduction rule; rather, there is a method of deriving universally quantified statements. Universally quantified statements do not follow from their instances. Gertrude likes rap, but it does not follow that everyone does. My car is gray, but it doesn't follow that every car is. To show that a universally quantified statement is true, we show that an arbitrary instance of it is true. In this context, an arbitrary instance is one about which we assume nothing, except what is true of everything. For instance, to show that the sum of the angles of a triangle is equal to a straight angle, we prove that this is true of an arbitrary triangle, to which we give an arbitrary name. In D2, we have a derivation rule that provides for such derivations. It says that to show a universally quantified sentence, we show an arbitrary instance of it, where an arbitrary instance is one where the name that replaces the variable is new to the derivation. We call this sort of derivation Universal Derivation (UD). Here is a sample derivation using it:

1. x(Kx Qx) 2. x~Qx 3. SHOW x~Kx +))))))))))))), 4.*SHOW ~Ka * *+))))))))), * 5.**Ka Qa * * 6.**~Qa * * 7.**~Ka * * *.)))))))))- * .)))))))))))))P P UD DD 1, E 2, E 5, 6, MT

We chose the letter a for the name that replaced the x in ~Kx, but we could have chosen any name, since no name appears earlier in the derivation. Here's another sample

Quantificational Derivations derivation: 1. x[(Lx & Mg) (Dx & Ca)] 2. xy[(Dx & My) ~(Lx & My)] 3. SHOW x[(Lx & Mg) ~(Lx & Ma)] +)))))))))))))))))))))))))))), 4.*SHOW (Lb & Mg) ~(Lb & Ma) * *+)))))))))))))))))))))))), * 5.**(Lb & Mg) (Db & Ca) * * 6.**y[(Db & Cy ~(Lb & My)] * * 7.**(Db & Ca) ~(Lb & Ma) * * 8.**(Lb & Mg) ~(Lb & Ma) * * *.))))))))))))))))))))))))- * .))))))))))))))))))))))))))))P P UD DD 1, E 2, E 6, E 5,7, HS

Notice that in this derivation we could not choose a or g as a name for the arbitrary individual in line 4. Both of those names already occur earlier in the derivation. We chose b instead; we could equally well have chosen q, or j, or any other name but a and g. We can diagram the pattern of Universal Derivation, just as we diagram the other derivations rules (DD, ID, and CD): UD SHOW <N +)))))))))))))), *SHOW N[< > 0] * *+))))))), * ** * * ** * * ** * * *.)))))))* .))))))))))))))-

provided 0 is new to the derivation

When using Universal Derivation, the second SHOW line is not optional and must immediately follow the line where Universal Derivation is used. As a general rule, we almost always use universal derivation to show statements that begin with a universal quantifier, so whenever you find a SHOW followed by a universally quantified statement, the next line should be a SHOW followed by an instance of the statement on the line above, using a name that is new to the derivation.

229

Introductory Logic Exercises 9-4 Provide derivations showing the following arguments valid. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12.

x(Ax Bx), x(Bx Cx), x(Ax Cx) xFx & yGy, x(Fx & Gx) x(Fx Gx), y(Py Gy), x(Px Fx) x~Fx, x(Fx Gx) x(~Px w Rx), x(~Dx ~Rx), x(Px Dx) Ns, x[y(Ny Dxy) Dxs] xFx Ga, x(Fx Ga) z(Hz Fz), xGx x(Gx Hx), x(Gx Fx) xy(Fx Gy), x~Gx, x~Gx xFx yGy, xy(Fx Gy) xWx xRx, x~Rx xWx, x(Wx w Rx) xPx yQy, ~xLx ~Qb, xy[Pa (Qx & Ly)]
5: Using Rules Correctly

The quantification rules are not difficult, but they must be used correctly. Using rules incorrectly may result in the derivation of falsehoods from truths. The three quantifier rules (and Universal Derivation) can only be used when the relevant quantifier is the main logical operator of the sentence. So you cannot do this: 6. ∃x∀y(Gx & Hy) 7. ∃x(Gx & Ha) WRONG!! 6, ∀E

The universal quantifier (∀y) is not the main logical operator of line 6 in the example above; the existential quantifier (∃x) is. Hence you cannot use ∀E on line 6. You must use ∃E first. You cannot do this either: 9. 10.

xRx w xKx Ra w xKx WRONG!!

9, E

The universal quantifier is not the main logical operator of line 9, even though it comes at the beginning of the line. The main logical operator of line 9 is the wedge (w). Remember, a quantifier that is the initial symbol of a sentence need not be the 230

Quantificational Derivations main logical operator of that sentence, just as a tilde which is the initial symbol in a sentence need not be the main logical operator of that sentence (as, for example, in ~S w T). You can see clearly that this is so by restoring the parentheses that our informal conventions allow us to drop. The official sentence corresponding to xRx w xKx is (xRx w xKx). This point applies to UD as well as to the other rules. So this derivation is incorrect: 1. x[Ga y(Fx & Hy)] P 2. xGx P 3. SHOW xy(Fx & Hy) UD <<< WRONG!! +)))))))))))))))))))))))), 4. *SHOW x(Fx & Ha) * DD *+)))))))))))))))))))), * 5. **Gb * * 2, E 6. **Gb y(Fb & Hy) * * 1, E 7. **y(Fb & Hy) * * 5, 6, E 8. **Fb & Ha * * 7, E 9. **x(Fx & Ha) * * 8, I *.))))))))))))))))))))- * .))))))))))))))))))))))))The use of UD at line 3 is wrong, because the universal quantifier is not the main operator of xyFxy on line 3. This sentence cannot be derived by Universal Derivation. The restrictions on the E and UD rules must be followed scrupulously, no matter how inconvenient they seem. Thus you cannot do this: 4. x(Ga & Kx) 5. Ga & Ka WRONG!! 4, E

because the a already occurs in the derivation before line 5. And this partial derivation is wrong: 1. x(Mx Hx) 2. yMy 3. SHOW zHz 4. Ma Ha 5. Ma P P 1, E 2, E <<< WRONG!!

because a already occurs in the derivation before line 5. In this case, the derivation can be fixed by putting line 5 before line 4, but the remedy is not always so easy. The requirement for a new name applies even when the 231

Introductory Logic previous occurrence of the name is in an uncanceled SHOW line, as in this bogus derivation: 1. xFx P 2. SHOW Fa DD +)))))))))), 3.*Fa * 1, E .))))))))))-

<<<< WRONG!!

There is no way to fix this derivation; we cannot conclude that any particular thing is F just because something is. The same restriction applies to the UD rule, making this bogus derivation wrong: 1. Fa 2. SHOW xFx +)))))))))))), 3. *SHOW Fa * *+)))))))), * 4. **Fa * * *.))))))))- * .))))))))))))P UD <<<< WRONG!! DD 1, R

We surely cannot conclude that everything is F just because one particular thing is. The UD rule is used incorrectly here. The a occurs already in the derivation, before line 3, so the use of UD is incorrect. Exercises 9-5 Each of the following derivations contains several errors. Find ALL the errors in them. Then try to redo the derivations correctly. Sometimes the best way to do this will involve an entirely different strategy than that used in the erroneous derivation. 1. 1. xFx xFx 2. Fa 3. SHOW Fb +)))))))))))), 4. *Fa xFx * 5. *Fa Fb * 6. *Fb * .)))))))))))) P P DD 1, E 4, E 2, 5, E

Quantificational Derivations 2. 1. x[y(Fx & Gy) z(Hx & Gz)] P 2. yx(Fx & Gy) P 3. SHOW zy(Hy w Gz) DD +)))))))))))))))))))))))), 4. *y(Fa & Gy) * 2, E 5. *Fa & Ga * 4, E 6. *y(Fa & Gy) z(Ha & Gz) * 1, E 7. *y(Fa & Gy) * 5, I 8. *z(Ha & Gz) * 6, 7, E 9. *zy(Hy w Gz) * 8, I .))))))))))))))))))))))))3. 1. xFx ~yGy 2. yx(Hy & Fx) 3. SHOW xy(Hx & ~Gy) +)))))))))))))))))), 4. *Fa ~yGy * 5. *y(Hy & Fa) * 6. *Ha & Fa * 7. *Fa * 8. *~yGy * 9. *~Gb * 10.*Ha * 11.*Ha & ~Gb * 12.*x(Hx & ~Gb) * 13 *xy(Hx & ~Gy) * .))))))))))))))))))P P DD 1, E 2, E 5, E 6, &E 4, 7, E 8, E 6, &E 9, 10, &I 11, I 12, I P P ID 2, E AID DD 5, E 1, E 7, &E 8, 9, MT 10, E 11, &E 4, DeM 13, &E 12, 14, !I

4. 1. x[~y(Rx & Sy) Px] 2. ~z(Sz w Tz) 3. SHOW ~x(Rx & ~Px) +))))))))))))))))))))), 4. *~(Sa w Ta) * * 5. * x(Rx & ~Px) 6. * SHOW ! * *+))))))))))))))))), * 7. ** Ra & ~Pa * * 8. **~y(Ra & Sy) Pa * * 9. **~Pa * * 10.**y(Ra & Sy) * * 11.**Ra & Sa * * 12.**Sa * * 13.**~Sa & ~Ta * * 14.** ~Sa * * 15.** ! * * *.)))))))))))))))))- * .)))))))))))))))))))))5. 1. x[(Fx & Ga) Hx] 2. x(Fx & Ga)

P P 233

Introductory Logic 3. SHOW xHx +)))))))))))))))))))), 4. * SHOW Ha * *+)))))))))))))))), * 5. ** xFx * * 6. **Ga * * 7. ** (Fa & Ga) Ha * * 8. ** Fa * * 9. ** Fa & Ga * * 10. ** Ha * * *.))))))))))))))))- * .))))))))))))))))))))-

UD DD 2, &E 2, &E 1, E 5, E 6, 8, &I 7, 9, E

6: Strategy in Quantificational Derivations Constructing derivations requires ingenuity, persistence, and mastery of the rules. Any strategy suggestions are best thought of as rules of thumb only, guides to what is often the best way to find a derivation. Only experience and practice can develop skill, since real skill is not a matter of following mechanical rules. But the strategy suggestions provide a good place to start. Note: in this section, you may find it useful to work the derivations along with the text, and finish those that are not completed. All the strategy suggestions of D1 apply as well to D2. We will not repeat here the discussion of D1 strategy we gave in Section 4 of Chapter 5, but all the suggestions there apply to D2 as well as to D1. For instance, suppose we must complete this derivation: 1. x(Cx w Ox) 2. SHOW xCx w xOx 3. Ca w Oa P 1, E

Thinking over the various strategies we used for derivations in D1, it may occur to us that Separation of Cases might work here, so we try to establish the necessary statements:
1. x(Cx w Ox) P
2. SHOW xCx w xOx
3. Ca w Oa 1, E
4. SHOW Ca (xCx w xOx)
...
?. SHOW Oa (xCx w xOx)

Now it is relatively easy to complete each of these sub-derivations using Conditional Derivation, and then to use Separation of Cases to get xCx w xOx. Many derivations are done by taking instances of quantified sentences and using the D1 rules to complete the derivations. In the following simple derivation, we merely use our quantifier exploitation rules to eliminate the quantifiers; then the remaining steps are mostly applications of D1 rules. For instance, we may easily complete this derivation: 1. x[Wx y(Hx & Ry)] 2. xWx 3. SHOW Ha & Rb +)))))))))))))))))), 4. *Wa y(Ha & Ry) * 5. *Wa * 6. *y(Ha & Ry) * 7. *Ha & Rb * .))))))))))))))))))P P DD 1, E 2, E 4, 5, E 6, E

In such derivations it is important to use the same name for successive uses of Universal Exploitation. We discover which names to use in this case by looking ahead to the conclusion. Since a occurs in Ha in the conclusion, it seems likely that we will use the name a to replace the variable x when we use Universal Exploitation of line 1. Then we use the same name in Universal Exploitation on line 2, so that the lines we get from the two uses (lines 4 and 5) can be used together with D1 rules. Once you have understood this derivation, you should be able to recognize how to derive other conclusions from the same premises, such as xy(Hx & Ry) or even xy(Hx & Ry). (Try to construct such derivations.) Sometimes it is necessary to look ahead in order to figure out which names to use. In such cases, it often pays to work simultaneously back from the end of the derivation and forward from the premises. For instance, consider this derivation: 1. xy[(Nx & My) (Sy & Tx)] 2. xy(Nx & My) 3. SHOW Sd & Tg P P

Here it would be a good idea to figure out what steps are likely to lead to the conclusion, and work backwards to see which

names need to be used at each stage. It seems like a good guess that the conclusion will be reached from a sentence like this: ... ?. (Ng & Md) (Sd & Tg) and one like this: ... ? Ng & Md (We use question marks for the line numbers here, because we don't know yet which line numbers they will actually get.) Now it is easy to see how the whole derivation should be completed: 1. xy[(Nx & My) (Sy & Tx)] 2. xy(Nx & My) 3. SHOW Sd & Tg +)))))))))))))))))))))))), 4. *y(Ng & My) * 5. *Ng & Md * 6. *y[(Ng & My) (Sy & Tg)] * 7. *(Ng & Md) (Sd & Tg) * 8. *Sd & Tg * .))))))))))))))))))))))))P P DD 2, E 4, E 1, E 6, E 5, 7, E

We must always remember to use Existential Exploitation before Universal Exploitation whenever possible. Remember that once a name has appeared in a derivation, it cannot be used in an Existential Exploitation inference. So if you need to get the same name from a use of Universal Exploitation and a use of Existential Exploitation, you must use Existential Exploitation first. As a corollary of this rule, if you have several premises whose main operators are existential quantifiers, you may want to start out your derivation by using Existential Exploitation to eliminate these. For instance, consider this derivation: 1. xy(Kx & Ly) 2. x[Kx (Mx & Tb)] 3. SHOW xy(Lx & My) P P

The first step is to get rid of both the existential quantifiers in the first line:
4. y(Ka & Ly) 1, E

5. Ka & Lc 4, E

Note that at line 5, we could not have chosen b as the name to introduce, because it already appears in the derivation. Line 5 now gives us a target for the Universal Exploitation we will use on line 2. 6. Ka (Ma & Tb) 7. Ka 8. Ma & Tb 9. Tb 10. Ma 2, E 5 &E 6, 7, E 8, &E 8, &E

Now the derivation is easy to finish. (Finish it.) To SHOW a universally quantified sentence, start a universal derivation by trying to SHOW an instance. Since the name you pick for the instance needs to be new to the derivation, it doesnt matter what it is, but you need to introduce it to the derivation before any other names are introduced. If you have both existentially quantified premises and a SHOW line that is universally quantified, first start the universal derivation and then eliminate the existential quantifiers, as in this derivation: 1. xAx 2. x[zAz (zQz & Rx)] 3. SHOW xQx 4. SHOW Qa 5. Ab P P 1, E

From this beginning the rest of the derivation is straightforward, but without the proper start, it would be very difficult. Line 5 provides a target for the use of Universal Exploitation on line 2:
6. zAz (zQz & Rc) 2, E
7. zAz 5, I

We figure out that line 7 is needed by comparing line 6 and line 5, bearing in mind the desire to use E on line 6. Now the derivation is practically finished. (Finish it.) Exercises 9-6 For each of the following, provide a derivation showing the argument valid. 237

Introductory Logic 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.

xAx, Aa y(Cy & Dy), yAy z(Hz Az), Ha, yDy x(Ex Nx), xEx x(Ex & Nx) xy(Fx ~Gy), ~x(Fx & Gx) x(xMx Mx) [Hint: Find a way to use SC2] xy(Dx Fy), xFx yDy x(Kx Zx), x(Bx & Zx) & x(Bx & ~Zx), ~x(Bx
Kx) xy(Rx Wy), x(Rx w Wx), zRz xBx xJx, Ba & Pa, ~xBx ~xPx xJx ~xLx x~Lx ~ xSx x~Sx

7: Derivations with Polyadic Predicates


Derivations with polyadic predicates do not require any different rules or strategies from derivations with monadic predicates, although the derivations can be somewhat more complicated. Consider, for instance, this derivation: 1. xyBxy 2. x(zBxz zSzx) 3. SHOW xySxy 4. SHOW ySay 5. yBby 6. Bbc P P 1, E 5, E

Here we have simply followed the rules to show a universally quantified sentence by universal derivation, and to use Existential Exploitation before Universal Exploitation. Now we must use Universal exploitation on line 2, thus: 7. zBbz zSzb 8. zBbz 2, E 6, I

We chose the name used in line 7 to make the antecedent as much like line 6 as possible, and we noticed that we could then get that antecedent from line 6. Now the derivation is practically finished. (Finish it.) We can easily derive xyFxy from yxFxy, thus:


Quantificational Derivations 1. yxFxy 2. SHOW xyFxy +))))))))))))))), 3. * SHOW yFay * *+))))))))))), * 4. **xFxb * * 5. **Fab * * 6. **yFay * * *.)))))))))))- * .)))))))))))))))P UD DD 1, E 4, E 5, I

But we cannot derive yxFxy from xyFxy because the restrictions on names in our rules will prevent the derivation from being completed. You will find it instructive to try to construct such a derivation. If you think you have done it, check your derivation carefully, because you will have made a mistake. The corresponding argument is not valid, as you can easily show, and hence our rules must not permit the derivation. We will end this section on strategy by working through an amusing but complicated derivation. In working through it, we will try to reproduce the trial and error that one usually uses to discover derivations. Suppose we are given the premises Everyone loves a lover37 and Someone loves someone and asked to derive Everyone loves everyone. Using an obvious interpretation, we begin this way: 1. x(yLxy zLzx) 2. xyLxy 3. SHOW xyLxy 4. SHOW yLay P P

We have started one universal derivation, but we still have a SHOW line containing a universally quantified sentence. So we start another universal derivation: 5. SHOW Lab

Now we get rid of the existential quantifiers in the second premise choosing names that are new to the derivation:
6. yLcy 2, E
37. Note that this sentence is ambiguous; we interpret it to mean that every person loves every lover.

7. Lcd 6, E

Now we try to see how to connect what we have with the first premise. Using the name c with Universal Exploitation seems promising.
8. yLcy zLzc 1, E

At this point, we realize that line 7 was unnecessary; the antecedent of line 8 matches line 6. However, it never hurts to have an unnecessary line, so we proceed.
9. zLzc 6, 8, E

Now what shall we do? Remember that we are trying to get to Lab. Perhaps the right move here is to get as close as we can, so we try this: 10. Lac 9, E

(Perhaps you have already noticed that this will not work, but we will go on with it to see how to recover. If we had looked forward to the end of the derivation and then worked backwards, we might have avoided this poor strategy.) What do we do next? We have used both premises, and we haven't gotten the result we are aiming for. One thing to do is to go back and look at previous steps to see whether we could have chosen differently so that we could have a b here where we have a c. But a little inspection shows that we could not have. That c was introduced by Existential Exploitation in line 6; it needed to be new to the derivation then, and since b already appeared, we had to use a new name. So perhaps we should look at some other ways to go on from here. One thing is possible: since a loves c, a is a lover, and hence we can use the first premise again. Let's try that and see what happens.
11. yLay 10, I
12. yLay zLza 1, E
13. zLza 11, 12, E

This is very close to what we want! We could do this as the next line:


14. Lba 13, E

So near, and yet so far! We want Lab, but we have Lba instead. Let's look back at the derivation and see whether there is anything we could have done differently that would get around this problem. In fact, there are two ways to fix the derivation. First, we could easily have arranged things so that our target was Lba instead of Lab. This would only require changing two lines:
4. SHOW yLby
5. SHOW Lba

A quick check of the rules will show that these lines are just as good as the lines we started with. Another way to fix the derivation is to go back to line 10 and make a different decision. If we choose b there, rather than a, we can complete the derivation with the original lines 4 and 5:
10. Lbc 9, E
11. yLby 10, I
12. yLby zLzb 1, E
13. zLzb 11, 12, E
14. Lab 13, E

Now we only need to box and cancel three times, and we are done. Exercises 9-7 Provide derivations showing the following arguments valid. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11.

xy[Fxy (Gx w Hy)], Frs & ~Hs, Gr xy(Fxy Fyx), y(Fya Py), Fab, Pb xy(Pxy w Pyx), xyPxy x(Ix Gx), x[(Gx & Vxx) Wxx], yVyy, x(Ix Wxx) xyFxy yxFxy xy(Lxy Lyx), xy(Lxy Lyx) xy(Wxy Ixx), xyWxy, xIxx xy(Rxy w Ryx), xRxx x(Lxa w Lxb), x(yLxy Hx), xHx xy(Pxay Qxy), xyPxyb, xyQxy xy(Bxy w Byx). xBxx

Introductory Logic 12. 13. 14. 15. 16. ~Gyx) xyz[(Cxy & Cyz) Cxz], xy(Cxy Cyx), x(~Cxx y~Cxy) x[Sx y(Wy Axy)], x[Sx y(Ty ~Axy)], xSx x(Wx ~Tx) xy(Rxy Rxx) xyMxy, xMxa ~xMax, xy~Myx

xyz[(Gxy & Gyz) Gxz], x~Gxx, xy(Gxy



New Derivation Rules In Chapter 9:


Instances: If N = <R or N = <R is a sentence of L2, then an instance of N is R with all free occurrences of < replaced by any name, 0. In this case, 0 is called the instantiating name.
Basic Quantificational Rules of Inference

Universal Exploitation Rule: If <N appears on an earlier accessible line of a derivation, any instance of it may appear on a line. (Annotation: the number of the earlier line plus E.) Existential Introduction rule: If an instance of <N appears on an earlier accessible line of a derivation, <N may appear on a line. (Annotation: the number of the earlier line plus I.)
accessible line of a derivation, an instance of it may appear on a line, provided the instantiating name has not occurred already in the derivation, even in an inaccessible line. (Annotation: the number of the earlier line plus E.) Quantificational Structural Rule

Existential Exploitation: If <N appears on an earlier

Universal Derivation Rule: If an uncanceled SHOW line is of the form <N, and immediately after it a canceled SHOW line appears on which there is an instance of the sentence on the preceding SHOW line, and the instantiating name does not appear in the derivation on the uncanceled SHOW line or earlier in the derivation, even in an inaccessible line, the SHOW line may be canceled and subsequent lines boxed. (Annotation: UD appears on the canceled show line.)
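Stated compactly in standard quantifier notation (with φ a formula, ν a variable, and η the instantiating name, as in the verbal statements above), the Chapter 9 rules come to this:

\[
\begin{array}{ll}
\forall\text{E:} & \text{from } \forall\nu\,\varphi \text{ infer } \varphi(\eta/\nu), \text{ for any name } \eta\\
\exists\text{I:} & \text{from } \varphi(\eta/\nu) \text{ infer } \exists\nu\,\varphi\\
\exists\text{E:} & \text{from } \exists\nu\,\varphi \text{ infer } \varphi(\eta/\nu), \text{ provided } \eta \text{ has not yet occurred in the derivation}\\
\text{UD:} & \text{to cancel SHOW } \forall\nu\,\varphi, \text{ derive SHOW } \varphi(\eta/\nu) \text{ for an } \eta \text{ new to the derivation}
\end{array}
\]

This is only a compact summary; the statements above give the official formulations, including the accessibility requirements.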



Chapter 10: More Quantificational Derivations


In this chapter we will learn equivalence rules involving quantifiers that make derivations easier. We shall also discuss strategy for derivations and learn about enthymemes, arguments with missing premises. 1: Quantifier Negation The quantifier rules of Chapter 9 are complete, in the sense that if an argument of L2 is quantificationally valid, there is a correct derivation employing only the rules of Chapter 9 that has only the premises (if any) of the argument as its premises and the conclusion of the argument as the first sentence on a SHOW line. However, the rules of Chapter 9 make some derivations complex and difficult to discover. For instance, the rules of Chapter 9 give us no way to derive something from premises that are negations of quantified sentences, except indirect derivation. This can make some derivations complicated. Consider, for instance, this argument. ~x(Fx Gx), ~x~Hx, x(Fx & Hx) It is not obvious how to find a derivation that shows this argument quantificationally valid; the premises are negated quantifications, and if we try do do an indirect derivation, we simply get another negated quantification. A derivation can be found, with sufficient ingenuity, for instance, this one: 1. ~x(Fx Gx) 2. ~x~Hx 3. SHOW x(Fx & Hx) +))))))))))))))))))))))))))), 4. *SHOW xFx * *+))))))))))))))))))))))), * 5. **~xFx * * 6. **SHOW ! * * **+))))))))))))))))))), * * 7. ***SHOW x(Fx Gx) * * * ***+))))))))))))))), * * * 8. ****SHOW Fa Ga * * * * ****+))))))))))), * * * * 244 P P DD ID AID DD UD CD

More Quantificational Derivations 9. *****Fa * * * * * 10. *****SHOW Ga * * * * * *****+))))))), * * * * * 11. ******~Ga * * * * * * 12. ******SHOW ! * * * * * * ******+))))),* * * * * * 13. *******xFx ** * * * * * 14. *******! ** * * * * * ******.)))))-* * * * * * *****.)))))))- * * * * * ****.))))))))))- * * * * ***.))))))))))))))- * * * **.))))))))))))))))))- * * * * 15. **! *.))))))))))))))))))))))* 16. *Fb * 17. *SHOW Hb * *+)))))))))))))))))))))), * 18. **~Hb * * 19. **SHOW ! * * **+)))))))))))))))))), * * 20. ***x~Hx * * * 21. ***! * * * **.))))))))))))))))))- * * *.))))))))))))))))))))))* 22. *Fb & Hb * 23. *x(Fx & Hx) * .)))))))))))))))))))))))))))ACD ID AID DD 9, I 5, 13, !I

1, 7, !I 4, E ID AID DD 18, I 2, 20, !I 16, 17, &I 22, I

To make it easier to find derivations showing arguments like this one quantificationally valid, we shall introduce two new derived rules. These new rules do not enable us to find derivations for any arguments for which there are not already derivations using merely the rules of Chapter 9, but they will make many derivations easier. The new rules will be equivalence rules that allow us to replace negated quantifications with sentences from which it is easier to derive things. The two rules have the same name, quantifier negation. Here are their diagrams: QN ~<N :: <~N ~<N :: <~N
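Written out with explicit quantifier symbols (ν the variable, φ the formula), the two diagrams say:

\[
\neg\forall\nu\,\varphi \;::\; \exists\nu\,\neg\varphi
\qquad\qquad
\neg\exists\nu\,\varphi \;::\; \forall\nu\,\neg\varphi
\]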

The rules tell us that we can replace a sentence that is a negated quantification by the sentence we get by putting the negation sign

Introductory Logic on the other side of the quantifier and changing the type of the quantifier, from existential to universal, or from universal to existential, as the case may be. Using these new rules, we can get a much simpler derivation showing the argument above to be quantificationally valid. 1. ~x(Fx Gx) 2. ~x~Hx 3. SHOW x(Fx & Hx) +))))))))))))))), 4. *x~(Fx Gx) * 5. *~(Fa Ga) * 6. *Fa & ~Ga * 7. *Fa * 8. *~~xHx * 9. *xHx * 10.*Ha * 11.*Fa & Ha * 12.*x(Fx & Hx) * .)))))))))))))))P P DD 1, QN 4, E 5, ~ 6, &E 2, QN 8, DN 9, E 7, 10 &I 11, I

Here are some points to keep in mind concerning the Quantifier Negation rules: Whenever you use a QN rule, you must not only move the negation sign to the other side of the quantifer, you must also change the type of the quantifier. The QN rules are replacement rules and hence can be used on subformulas. For instance, from x(Mx ~ySxy) you may infer x(Mx y~Sxy) using QN. Use the QN rules whenever you want to infer something from a premise or other line that is a negated quantification. The QN rule can help when you are using indirect derivation to derive existentially quantified sentences. The assumption for such an indirect derivation is the negation of the existentially quantified sentence. Using QN turns this into a universally quantified negation, which is usually easier to use in the derivation. The following derivation illustrates the last point. 1. Ca w Cb 2. SHOW xCx +)))))))))))), 246 P ID

More Quantificational Derivations 3. *~xCx * 4. *SHOW ! * *+)))))))), * 5. **x~Cx * * * * 6. **~Ca 7. **Cb * * 8. **~Cb * * 9. **! * * *.))))))))- * .))))))))))))Exercises 10-1 Show the following arguments quantificationally valid by constructing a derivation: 1. ~x(Ax & ~Bx), x(Ax Bx) 2. ~x(Cx Dx), x(Cx & ~Dx) 3. x(Ex Fx), ~xFx, x~Ex 4. ~x(Tx & ~Wx), ~ x(Zx w Wx), x~Tx 5. ~x(Rx & ~Px), ~x(Jx & Px), ~x(Jx & Rx) 6. x(Qx ~ yPy), ~x(Px Qx), ~xQx 7. x(Cx w Nx), ~x(Kx & Nx), x(Kx Cx) 8. ~xyGxy, xy~Gxy 9. ~xyHxy, xy~Hxy 10. x(Ix Jxb), y~xJyx, x~Ix 11. ~xKmx, x(Lx yKyx), x~Lx 12. ~x(yMxy yNxy), xy~Nxy 13. ~x~yOxy, xy(Oxy Pyx), xyPxy 14. ~x(Qxa & Rxb), x[Qxa (~Sx Rxb)], x(Qxa Sx) Show the following quantificationally equivalent by producing two appropriate derivations: 15. ~x~Ax 16. ~x~Bx AID DD 3, QN 5, E 1, 6, wE 5, E 7, 8, !I

xAx xBx

Show the following quantificationally inconsistent by deriving ! from them: 17. ~x(Fx Gx), ~x(Fx & ~Gx) 18. ~xy(Mx & Rxy), xy(Rxy Mx), xyRxy 247

Introductory Logic 19. xy(My Pxy), x(Mx & ~yPyx) 20. xy(Fx ~Fy), ~xy(Fx ~Fy) 2: Strategies for Derivations As this book has often said, constructing derivations is not a mechanical matter. Ingenuity and perseverence are often required. But there are strategies that can be useful. In this section we will review the useful strategies for doing quantificational derivations, incorporating discussion of the new rules. This section wont review the discussion of L1 strategies in Chapter 5, section 4, but you will find it useful to do so. When premises or other lines have quantifiers as their main logical operators, use the quantifier exploitation rules. Wherever possible, use E before E. When picking names for use with the quantifier exploitation rules, remember to follow these guidelines: A name introduced by E must be completely new to the derivation. (Otherwise the use of E is incorrect.) Usually, it is a good idea to use names already in the derivation when you use E, if there are any. In some derivations E must be used more than once on the same universally quantified sentence, so that it can be used with all the names in the derivation. (See the last derivation at the end of the text of section 1 above for an example.) In some derivations, it is useful to look ahead and choose names with an eye to what you want to get. To derive a universally quantified sentence, use UD. This strategy, like the idea that to derive a conditional one should assume the antecedent and try to derive the consequent, is so generally useful that situations where it does not give the shortest derivation can be safely ignored. Following it will never make a derivation significantly more difficult. Use QN whenever premises or other lines from which you need to derive things are negated quantifications. The QN rules help you avoid cumbersome indirect derivations and make derivations simpler. Deciding on a strategy for deriving an existential quantification is tricky. There are two general strategies for deriving existential quantifications: direct derivation and indirect derivation. A direct derivation is often best if the premises contain 248

More Quantificational Derivations names or existential quantifications that provide for the existence of the thing whose existence is asserted by the conclusion. An indirect derivation may be necessary when the premises dont assert the existence of anything in particular. An indirect derivation is possible even in the first case, but it may be unnecessarily long and convoluted. Here is an example where an existential quantification is derived by direct derivation. Here the thing said to exist by the conclusion is the thing said to exist in the first premise. 1. x(~Fx & Gx) 2. x(Hx Fx) 3. SHOW x~Hx +)))))))))))), 4. *~Fa & Ga * 5. *~Fa * 6. *Ha Fa * 7. *~Ha * 8. *x~Hx * .))))))))))))P P DD 1, E 2, &E 3, E 5, 6, MT 7, I

But the following derivation uses indirect derivation, because there are no premises to assert the existence of anything. 1. SHOW x(Fx w ~Fx) +)))))))))))))))))), 2. *~x(Fx w ~Fx) * 3. *SHOW ! * *+)))))))))))))), * 4. **x~(Fx w ~Fx) * * * * 5. **~(Fa w ~Fa) 6. **~Fa & ~~Fa * * 7. **~Fa * * 8. **~~Fa * * 9. **! * * *.))))))))))))))- * .))))))))))))))))))ID AID DD 2, QN 4, E 5, DeM 6, &E 6, &E 7, 8, !I

(Note: there's a shorter derivation that gets this conclusion by DD, but it is much trickier to think up.)


Introductory Logic Exercises 10-2 Show the following arguments quantificationally valid by constructing a derivation: 1. 2. 3. 4. 5. 6. 7. 8.

xZx Wa, x(Zx Wa) x(Dx yDy) (The drinking principle) x(xMx Mx) (Cf. #4 of Exercises 9-6. Is it easier with xFx Ga, x(Fx Ga) xy(Txy zTzy) xy(Fx Gxy), ~x(yGxy & Hx), ~x(Fx & Hx) x[Wx & ~y(Ay & Bxy)], x[Ax y(Wy & ~Byx)] xyz(Fxz Fyz)
ID and QN?)

Show the following quantificationally equivalent by producing two appropriate derivations: 9. 10. 11. 12. 13. 14. 15. 16.

x(Cx w Dx) x(Ex & Fx) x(Fa Gx) xy(Gx Hy) xy(Ix Iy) xy(Zx Zy) x(Sx La) x(Sx La)

xCx w xDx xEx & xFx Fa xGx xGx yHy ~xIx w xIx xy(Zx Zy) (xSx La) & (La xSx) (xSx & La) w (x~Sx & ~La)

Show the following quantificationally inconsistent by deriving ! from them: 17. 18. 19. 20.

xyFxy, x~yFyx Fab & (Fbc & Fca), xyz[(Fxy & Fyz) Fxz], ~Fcc xyRxy, ~x(Fx yRyx) x(Ax & y[Ay (Bxy ~Byy)])

Given the premises below, derive the indicated conclusions. Premises:

xyz[(Rxy & Ryz) Rxz] xy(Rxy w Ryx) xy[Pxy (Rxy & ~Ryx)]

More Quantificational Derivations Conclusions: 21. 22. 23. 24.

xRxx x~Pxx xy[(Pxy w Pyx) w (Rxy & Ryx)] xyz[(Pxy & Pyz) Pxz]

Provide a suitable interpretation, translate each argument into L2, and provide a derivation showing it quantificationally valid. 25. Neither Alonzo nor anyone in the same Physics section as Alonzo is a Philosophy major. Either Alonzo or Gertrude is a Philosophy major. Therefore, Gertrude is not in the same Physics section as Alonzo. All animals that are suitable as pets are affectionate. No otter is affectionate, unless it is not yet an adult. Every animal in this room is suitable as a pet. Therefore, if any otters are in this room, they are not yet adults. Jeffrey is a person who likes some red wines, but none from New York State. Alonzo does not like anyone who likes no wine from New York State. Therefore, if Alonzo likes Jeffrey, then some wine from New York State is not red. Alonzo does not play chess with people whom he does not like. Alonzo does not like anyone who has not defeated any member of the chess team in a chess game. Alonzo plays chess with Gertrude. Gertrude has only defeated women at chess. Therefore, some woman is a member of the chess team. 3: Enthymemes Arguments are often stated with crucial premises missing. Consider the following argument, for instance: Bertrand is a brother of Susan. Herbert is married to Susan. Herbert is married to a sister of Bertrand. If you translate this argument into L2 using the following interpretation, you will find that it is quantificationally invalid. 251

26.

27.

28.

Introductory Logic D: People B: is a brother of M: is married to S: is a sister of b: Bertrand h: Herbert s: Susan On this interpretation, the argument becomes Bbs, Mhs, x(Sxb & Mhx) But this argument is not quantificationally valid; this can be shown by the following interpretation: D: The stick figures to the left. B: A solid arrow goes from to M: An dotted arrow goes from to S: A dashed arrow goes from to b: the leftmost figure h: the rightmost figure s: the center figure We need to add premises to make the argument valid, premises that someone making the argument might plausibly take for granted. Here are two that could turn the argument into a quantificationally valid one: Susan is female. If one person is a brother of another, and that other is female, then that other is a sister of the first. If we add a predicate to our interpretation, F: is female we can provide a derivation showing that the argument with the added premises is quantificationally valid: 1. Bbs 252 P

More Quantificational Derivations Mhs Fs xy[(Bxy & Fy) Syx] SHOW x(Sxb & Mhx) +))))))))))))))))))))), 6. *Bbs & Fs * 7. *y[(Bby & Fy) Syb] * 8. *(Bbs & Fs) Ssb * 9. *Ssb * 10.*Ssb & Mhs * 11.*x(Sxb & Mhx) * .)))))))))))))))))))))2. 3. 4. 5. P P P DD 1, 3, &I 4, E 7, E 6, 8, E 2, 9, &I 10 I

Arguments like this, in which an obviously true premise is not explicitly stated, are called enthymemes.38 Many arguments we encounter outside of logic texts are actually enthymemes. When we encounter one, we naturally want to discover the missing premises. Discovering missing premises is not a matter which can be mechanized. It is a matter of trying to understand what the person who produced an argument might have had in mind. One strategy is to first see why the given argument is quantificationally invalid. Producing an interpretation that shows it invalid, and comparing it with the original interpretation will often suggest the premises that need to be added. Premises that are added should be ones that make the argument quantificationally valid, and if possible, they should be premises that are either obviously true or widely believed to be obviously true. Exercises 10-3 The following arguments are enthymemes. Choose suitable interpretations and translate the arguments into L2. Show the resulting arguments are quantificationally invalid by producing suitable interpretations. Then discover obviously true premises which can be added to make quantificationally valid arguments. Provide derivations that show the resulting arguments quantificationally valid. 1. New York City is farther from Washington, D. C. than it is

The word derives from the Greek verb meaning to keep in mind. 253

38

Introductory Logic from Philadelphia. New York City is farther from San Francisco Ca. than it is from Washington, D. C. Therefore, New York City is farther from San Francisco than it is from Philadelphia. Only offices on the fifth floor have good views. Alonzos office is in the basement. Therefore, Alonzos office does not have a good view. I vacation with only those who are married to me. I go on vacation with some physician. Therefore, I am married to a physician. Gertrude can spell better than anyone who is not older than she is. Gertrude is older than Alonzo. Therefore, Gertrude can spell better than Alonzo. Lawyers who commit crimes are liable to disbarment. H. Louis Dewey (of Dewey, Cheatham, & Howe) is a lawyer who embezzles money from his clients. Therefore, H. Louis Dewey is liable to disbarment. Alonzo likes any woman who laughs at herself, but detests any woman who laughs at all her friends. Therefore, if any woman laughs at all her friends, some woman is not a friend of herself. (Argument due to Kalish and Montague)

2. 3. 4. 5.

6.



Chapter 11: Identity


This chapter introduces the logical identity predicate, explains translations using it, including translation of numerical concepts, and presents derivation rules for identity. 1. Translations Using the Identity Predicate: Identity extends our language. Consider the following claim: Everyone is afraid of Dracula, but Dracula is afraid only of me. It follows from this that I am Dracula!39 Since this result is somewhat surprising, we would like to be able to show it with a derivation. Using the obvious interpretation (D: people, d: Dracula, m: me, F: is afraid of ), we can easily symbolize the first conjunct of the premise:

xFxd
But how do we express the second conjunct (Dracula is afraid only of me)? We would express Dracula is afraid only of crosses, thus (C: is a cross): ~x(Fdx & ~Cx) or

x(Fdx Cx)
39. Raymond Smullyan reports getting this argument from Richard Cartwright. See his What Is the Name of This Book? (Prentice-Hall, Englewood Cliffs, 1978), p. 212.
In order to express Dracula is afraid only of me we need to

Introductory Logic have a predicate that can say is me. We can express this using the two-place identity predicate. If we have I: is identical to , then we can express x is me thus: Ixm Following tradition, and because the identity predicate is a logical predicate rather than an ordinary one, we will use the equals sign, = instead of the I, and instead of writing it in front of two terms (=ab) we will put it in the more natural position between them (a = b). So the second conjunct of the premise about Dracula becomes

x(Fdx x = m)
The whole premise is thus

xFxd & x(Fdx x = m)
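Written out with explicit quantifiers and the arrow for the conditional, this premise reads:

\[
\forall x\,Fxd \;\wedge\; \forall x(Fdx \rightarrow x = m)
\]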


From this you should be able to derive d = m. The inference about Dracula and me is surprising because we tend to confuse the first conjunct of the premise, Everyone is afraid of Dracula, with the very different statement, Everyone else is afraid of Dracula. These two statements are very different: don't say everyone when you really mean everyone else. To symbolize everyone else is afraid of Dracula (i.e., Everyone but Dracula is afraid of Dracula), we also use the identity predicate, thus:

x(~x = d Fxd)
From now on we shall adopt the very common practice of abbreviating ~ x = y by x ≠ y. So the symbolic sentence above becomes


x(x d Fxd)
Using the identity predicate we can also symbolize such claims as Only Alphonse saw Gertrude (with D: People; a: Alphonse, g: Gertrude, S: saw )

x(Sxg x = a)
and No one but Gertrude saw Alphonse ~x(x g & Sxa) and Only Alphonse and Gertrude were in the classroom. (using C: was in the classroom) ~x[Cx & (x a & x g)] or equivalently,

x[Cx (x = a w x = g)]
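For reference, the three only translations just given, written with explicit quantifiers and connectives, are:

\[
\begin{array}{ll}
\text{Only Alphonse saw Gertrude:} & \forall x(Sxg \rightarrow x = a)\\
\text{No one but Gertrude saw Alphonse:} & \neg\exists x(x \neq g \wedge Sxa)\\
\text{Only Alphonse and Gertrude were in the classroom:} & \forall x[\,Cx \rightarrow (x = a \vee x = g)\,]
\end{array}
\]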
We can use identity to express numerical quantities. Recall that (using P: is president of the United States)

xPx
says that there is at least one president of the United States, which is rather imprecise. With some ingenuity, we can say, using identity, that there is at most one president (that is, that there is no more than one):

xy[(Px & Py) x = y]
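In explicit notation this is the universally quantified conditional

\[
\forall x\forall y[(Px \wedge Py) \rightarrow x = y].
\]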



Introductory Logic Observe how this works: it says that if x and y are both presidents of the United States, then they are identical. It thus rules out there being two distinct presidents of the United States. It also rules out there being more than two distinct presidents, because if there were more than two, there would have to be two who were distinct, and the statement rules that out. So it guarantees that if it is true, there can be no more than one president of the United States. On the other hand, notice that it does not imply that there is any president of the United States. It is a universally quantified conditional: if there are no presidents of the United States, the antecedent of the embedded conditional will always be false, so the whole will be true. We can also say that there is exactly one president of the United States: for if there is at least one and also at most one, there must be exactly one. Thus

xPx & xy[(Px & Py) x = y]


says that there is exactly one president of the United States. There is also a simpler way of saying that there is exactly one president of the United States, that combines the two conjuncts above into one, in a manner of speaking:

x[Px & y(Py x = y)]


This says that there is a president of the United States, and every president of the United States is identical to him or her40 As with one, so with two, three and all the other positive integers. Suppose that we want to say that there are at least two

There's a still more compact way of saying that there is exactly one president of the United States, but it's quite a bit harder to understand:

40

xy(Py x = y)
See whether you can understand why that statement is true if and only if there is exactly one president of the United States.

Identity houses of Congress. It would NOT do to say this (using H: is a house of Congress):

xy(Hx & Hy)

WRONG TRANSLATION!

This just says that there is at least one house of Congress (and says it twice). To say that there are at least two houses of Congress, we must say that the x and y in question are distinct, that is, not identical:

xy[ x y & (Hx & Hy)]


To say that there are at most two (no more than two) we use the same technique as with at most one; in this case, however, we must say that if x, y, and z are all houses of Congress then some pair must be identical:

xyz([(Hx & Hy) & Hz] [x = y w (x = z w y = z)])


Finally, we can say that there are exactly two houses of Congress simply by conjoining the last two statements with &. We can also use the simpler method, saying that there are two distinct houses of Congress, and every house of Congress is identical to one or the other of them:

xy[([Hx & Hy] & x y) & z(Hz [z = x w z = y])]


If you are catching on to the pattern behind all this, you can figure out how to say there are at least three members of the House of Representatives from Nebraska (using N: is a member of the House of Representatives from Nebraska):

xyz([ x y & (y z & x z)] & [ Nx & (Ny & Nz)])


Saying that there are at most three members of the House of Representatives from Nebraska requires four variables, if you follow the pattern; this is left as an exercise. And of course, we can go on to say that there are exactly three members of the House of Representatives from Nebraska.
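Collecting the patterns of this section in explicit notation (using the standard ∀, ∃, ∧, ∨, →, and ≠ for the symbols employed above, with P an arbitrary one-place predicate):

\[
\begin{array}{ll}
\text{at least one:} & \exists x\,Px\\
\text{at most one:} & \forall x\forall y[(Px \wedge Py) \rightarrow x = y]\\
\text{exactly one:} & \exists x[\,Px \wedge \forall y(Py \rightarrow x = y)\,]\\
\text{at least two:} & \exists x\exists y[\,x \neq y \wedge (Px \wedge Py)\,]
\end{array}
\]

The at most n and exactly n cases for larger n follow the same patterns described above.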

259

Introductory Logic Exercises 11-1 Using the given interpretation, translate the following into L2 using the identify predicate. D: Animals B: is in my back yard S: is suitable for a pet M: is owned by me t: Tabitha, the cat 1 2 3 4 5 The only animal in my back yard is Tabitha I own at least two animals. There is exactly one animal in my back yard that is suitable for a pet. I own exactly three animals. No animal in my back yard, except Tabitha, is suitable for a pet. D: People M: is male S: is a logic student W: is a woman L: loves T: is taller than a: Alonzo e: Elaine g: Gertrude 6 7 8 9 10 11 12 No woman logic student other than Gertrude is taller than Alonzo. Every other logic student is taller than Elaine. Elaine and only Elaine loves Alonzo. No more than one male logic student loves Gertrude. Gertrude is taller than at least two male logic students. Alonzo is taller than any other logic student. There are more than two women logic students, but no more than two of them love Alonzo.

Choose a suitable interpretation and translate the following. Provide your interpretation along with the translation. 260

Identity 13 14 15 16 Nobody loves me but my mother, and she could be jivin too. (B. B. King) James Buchanan has been the only president from Pennsylvania. No presidents other than Andrew Johnson and William Clinton have been impeached. No man can serve two masters. [Understand this to say no man can serve more than one master.] 2. Definite Descriptions: Definite descriptions identify a single thing. Suppose we have the following argument: The major league baseball team from Detroit defeated the Baltimore Orioles last night. The Detroit Tigers are a major league baseball team from Detroit. The Detroit Tigers defeated the Baltimore Orioles last night. This argument looks valid, but how can we symbolize it so that the resulting symbolic argument is valid? The first premise says that a certain team nearly defeated the Baltimore Orioles last night. If we choose a name to represent that team, e.g., m: The major league baseball team from Detroit we will not have any way to connect the first premise with what is said in the second premise. In order to discover a better way of symbolizing such sentences as the first premise of the argument above, we need to think about terms like The major league baseball team from Detroit. Philosophers call such terms definite descriptions. A definite description is a term that refers to just one thing by identifying a property that is possessed uniquely by that thing. Here are some examples: 261

Introductory Logic The tallest building in the world The fifth floor of Lattimore Hall The intersection of Elmwood and Goodman in Rochester The first test for this course The first of these definite descriptions, for instance, refers to a particular building by means of the property of being taller than any other building in the world. The building referred to by the description is the one and only building in the world with that property. The philosopher Bertrand Russell proposed a way of analyzing sentences using definite descriptions,41 and we shall adopt his analysis in this book. According to Russells analysis, the sentence, The major league baseball from Detroit defeated the Baltimore Orioles last night. Should be analyzed this way: There is one and only one major league baseball team from Detroit and it defeated the Baltimore Orioles last night. With this analysis, we can easily symbolize the sentence, remembering what we have learned about symbolizing claims for the form, there is exactly one ... (With D: things; M: is a major league baseball team, F: is from Detroit, D: defeated last night, b: The Baltimore Orioles)

x([(Mx & Rx) & y[(My & Ry) y = x]) & Dxb)
This says that here is something that is a major league baseball team and is from Detroit, and any major league baseball team from Detroit is identical to it, and it defeated the Baltimore Orioles last night. This sentence will be true only if there is exactly one major league baseball team from Detroit and that team defeated the Baltimore Orioles last night. It will be false if

Russell, Bertrand, On Denoting, Mind, vol. 14 (1905), pp. 479-493


41

262

Identity there is no major league baseball team from Detroit, or if there is more than one, or if the Baltimore Orioles were not defeated by such a team last night. In general, on Russells account, a sentence of the form The F is G is analyzed as There is exactly one F and it is G and symbolized as

x[(Fx & y[Fy y = x]) & Gx]


When Russell developed his analysis, he was concerned about sentences like this: The present king of France is bald. Someone might think that this sentence refers to something, the present king of France, and says that it is bald. But since there was no king of France when Russell was working on this problem (and still is none today), the phrase the present king of France cannot refer to any king of France. Could it refer to some other mysterious entitya subsisting but not existing king of France, for instance? Russell thought such an idea nonsense. The sentence, he claimed, simply claimed that there was one and only one thing that was presently king of France, and that this was bald. The sentence is false, therefore, since there is no such thing, and does not refer to any odd entity. We would symbolize the sentence thus (D: people; K: is presently king of France, B: is bald):

x[(Kx & y[Ky y = x]) & Bx]


Complications arise, on this analysis, for a sentence like The present king of France is not bald. This sentence is ambiguous. It might be taken to be attempting to assert of the present king of France that he is not bald, in other 263

Introductory Logic words, to say that there is exactly one entity that is presently king of France and it is not bald. This sentence would be false, since there is presently no King of France. Or it might be taken to be the denial of the sentence the present king of France is bald, in which case it is true, since the latter sentence is false. Heres how to symbolize the two sentences,

x[(Kx & y[Ky y = x]) & ~Bx] ~x[(Kx & y[Ky y = x]) & Bx]
Note that the only difference between the two symbolizations is the placement of the ~; we say they differ in the scope of the negation. This is analogous to the ambiguity of All that glitters is not gold. which has two readings that could be symbolized as follows (D: things; S: glitters, G: is gold):

x(Sx ~Gx) ~x(Sx Gx)


Sentences involving negations and definite descriptions often involve this ambiguity, even though we often dont notice it because one of the readings is not plausible in the context. If someone says, The President of the United States is not a crook, We would usually simply assume that he is asserting of the president that he is not a crook. Since the two readings of the sentence differ in truth value only when there is either no president or more than one, we dont pay any attention to the ambiguity because we are confident that there is exactly one president of the United States. A sentence may involve two definite descriptions, such as, The president congratulated the winner, which we can symbolize thus (D: people; P: is president, W : is a winner, C: congratulated ): 264

Identity

x[([Px & y(Py y = x)] & z[Wz & y(Wy y = z)]) &
Cxz] Exercises 11-2 Translate the following using the interpretation given. D: People M: is a man R: has red hair W: is a woman K: kissed L: loves a: Alonzo e: Ethel g: Gertrude h: Hubert 1. 2. 3. 4. 5. 6. The woman who kissed Alonzo has red hair. The woman whom Alonzo loves kissed Hubert. Alonzo loves a woman who kissed the man who loves Gertrude. Hubert loves no one but kissed the woman Alonzo loves. The man Gertrude loves kissed the woman Alonzo loves. Gertrude loves the man who loves Ethel, but not the man who loves her. 3: Showing Invalidity, Consistency, Non-equivalence The definitions of quantificational validity, quantificational inconsistency, and quantificational equivalence are the same for sentences of L2 generally (see Chapter 8). Hence we can show invalidity, consistency, and non-equivalence in the same ways that we did so for sentences of L2 in Chapter 8. For instance, the following set of sentences,

x(x = a w x = b), xFx, ~Fa


can be shown consistent by the following interpretation, which 265

Introductory Logic makes all of them true: D: The stick figures to the left F: has an F above it a: the figure on the left b: the figure on the right

There is one important difference that identity makes, however. As we noted in Chapter 8, for L2 sentences without identity, if there is an interpretation with a given sized domain that makes a sentence true, for any larger domain size there is also an interpretation with that size domain that makes the sentence true. But this is not true when sentences involving identity are involved. You can see this easily. Consider the following sentence:

x(x = a)
This sentence says that everything is identical to the item named a. This can be true only on interpretations having exactly one item in their domains. It cannot be true on interpretations that have more than one item in their domains. Expansion in finite domain can also be used with sentences involving identity, but identity makes several differences to the procedure. First, as already noted, a given domain may be too large to make a sentence true. Second there are complications due to the fact that atomic sentences involving identity may not be logically independent. When we perform an expansion in a finite domain for sentences without identity, the atomic sentences we get are logically independent: you can assign distinct atomic sentences any truth values you like. However, when the sentences involve identity, this is not true. For instance, suppose we expand the following sentence in a domain of two elements:

x(x=a Fx)
266

Identity We get this: (a = a Fa) & (a = b Fb) The atomic sentences are: a = a, a = b, Fa, Fb Now we are not free to assign these atomic sentences whatever truth values we please. Consider a = a, for instance. It must obviously be assigned the value True. Otherwise we would not be respecting the logical properties of identity. Also, consider what happens if we assign a = b the value True. If we do, then we must also make b = a true. Further, we must make Fa and Fb either both true or both false. If a and b are names for the same object, then Fa will be true if and only if Fb is. Hence we must follow these restrictions when assigning truth values to atomic sentences involving identity: 1. 2. If 0 is a name, then any sentence 0 = 0 must be assigned the value True. If : and 0 are names, and a sentence : = 0 is assigned the value True, and N and R are sentences that differ only in that one has 0 in one or more places where R has :, or vice versa, then N and R must be assigned the same truth value.

Lets illustrate all this by doing an example. Consider the following set of sentences:

xy(x = y Fxy) xy~Fxy


If we wish to show these consistent, we must find an interpretation that makes them all true. We try to expand in a domain of two members, getting this: [(a = a Faa) & (a = b Fab)] & [(b = a Fba) & (b = b Fbb)] (~Faa w ~Fab) w (~Fba w ~Fbb) 267

Introductory Logic We must make a = a and b = b true. Hence, to make the first sentence true, we must make Faa and Fbb true. Should we make a = b true? If we do, we must make all of the rest of the atomic sentences true as well. For since a = a is true, if a = b is true then b = a must be also. Further, since we made Faa true, if a = b is true, then Fab and Fba would have to be true as well. But if all the atomic sentences are true, then the second sentence will be false. Consequently we must make a = b false, and so b = a false as well. Then if we let at least one of Fab and Fba be false, the second sentence can come out true. So here is an interpretation that works: D: The stick figures to the left F: An arrow goes from to or, differently: D: {1, 2} F: <,> is in {<1,1>, <1,2> <2,2>}
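To summarize the assignment just described (a names 1, b names 2):

\[
a = b : \text{F},\quad Faa : \text{T},\quad Fab : \text{T},\quad Fba : \text{F},\quad Fbb : \text{T}
\]

Each conjunct of the first expanded sentence then has a false antecedent or a true consequent, so the first sentence comes out true; and since ~Fba is true, so does the second.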

Exercises 11-3 Provide interpretations in small finite domains showing the following invalid. 1. 2. 3. 4. 5.

z(z = r Wz) xy(Mxy & ~ x = y) xy[(Fx Fy) x = y], xFx, xz[ x z & (Fx & Fz)] xy(Gxy x = y), xy(x = y Gxy) xyz( x = y x z)

Show the following pairs of sentences are not equivalent by producing a suitable interpretation in a small finite domain. 6. 7. 8. 268

xy x = y xy x = y xy[x = y (Fx Fy)] xy[(Fx Fy) x = y] (a = b w a = c) a = d a = c (a = b w a = d)

Identity 9.

xy(x y Gy)

xy(Gy x y)

Show the following consistent by providing a suitable interpretation in a small finite domain. 10. 11. 12. 13. ~xy[(Fx & Gy) & x = y], xy[(Fx & Fy) & x y] x(Gx x = a), x(~Gx & x = a) x(Mx x = a), a b, ~x(x b & ~Mx) xy x y, ~zQzz, xyQxy 4. Derivations with Identity: We can do some derivations involving identity with the rules we have already learned. For instance, the derivation showing that the argument involving Dracula and me is valid might go like this: 1. xFxd & x(Fdx x = m) Pr 2. Show d = m +))))))))))))))))), 3. *xFxd * 4. *Fdd * 5. *x(Fdx x = m) * 6. *Fdd d = m * 7. *d = m * .)))))))))))))))))-

DD 1, &E 3, E 1, &E 5, E 4, 5, E

But since identity is a logical predicate, there are derivation rules involving it as well. We will introduce identity introduction and identity exploitation (in two forms). The identity introduction rule looks like this: =I
))))))) 0 = 0

where 0 is any name

In other words, whenever we want, we may write down any name, followed by the identity symbol, followed by the same name again. We dont need to have any earlier lines to license this; we may just write it down. It may seem that this rule would be useless, but it is not. For instance, suppose we wish to show that 269

Introductory Logic every thing is self-identical. 1. Show x x = x UD +))))))))))))))), 2. *Show a = a * *+))))))))))), * 3. **a = a * * *.)))))))))))- * .)))))))))))))))-

DD =I

Or consider the following derivation: 1. x(x = p Jx) 2. Show Jp +))))))))))), 3. *p = p Jp * 4. *p = p * 5. *Jp * .)))))))))))P DD 1, E =I 3, 4, E

The identity exploitation rule has two forms:

=E

0= : N[0]

:= 0 N[0]

0 and : are any names N[0] is any formula containing some 0s N[:/0] is N[0] with some or all of the 0s changed to :s

)))) )))) N[:/0] N[:/0]

This rule says that if we know some fact involving 0, and we know that : is identical to 0, then we know the same fact about :. If Mark Twain wrote Tom Sawyer and Mark Twain is identical to Samuel Clemens, then Samuel Clemens wrote Tom Sawyer. 1. Wmt 2. m = c 270 P P

Identity 3. Show Wct DD +))))))))), 4. *Wct * 1, 2, =E .)))))))))We can also use identity exploitation to provide a derivation showing that the following argument is valid: Only Alphonse is taller than Gertrude. Someone with red hair is taller than Gertrude. Alphonse has red hair 1. x(Txg x = a) 2. x(Rx & Txg) 3. Show Ra +)))))))))))))), 4. *Rb & Tbg * 5. *Tbg * 6. *Tbg b = a * 7 *b = a * 8. *Rb * 9. *Ra * .))))))))))))))P P DD 2, E 4, &E 1, E 5, 6, E 4, &E 7, 8, =E

Identity exploitation can also be used to do some tricky maneuvers, as in the following derivation: 1. a = b 2. Show b = a +))))))))), 3. *a = a * 4. *b = a * .)))))))))P DD =I 1, 3, =E

Here we first used identity introduction to get a = a. Then we used a = b and identity exploitation to replace one a in a = a with b. You could use this technique to show that a = b is equivalent to b = a. We also use identity exploitation to show that identity is transitive: 1. a = b 2. b = c 3. Show a = c P P DD 271

Introductory Logic
+))))))))), 4. *a = c * .)))))))))-

1, 2, =E

A characteristic use of the identity exploitation rule is in arguments involving definite descriptions. Suppose we have this argument: The person who met Alphonse last night had red hair. Gertrude met Alphonse last night. Gertrude has red hair. A derivation showing that argument valid could look like this: 1. x([Mxa & y(Mya y = x)] & Rx) P 2. Mga P 3. Show Rg DD +))))))))))))))))))))))))))), 4. *[Mba & y(Mya y = b)] & Rb * 1,E 5. *Mba & y(Mya y = b) * 4,&E 6. *y(Mya y = b) * 5,&E 7. *Mga g = b * 6,E 8. *g = b * 2,7,E 9. *Rb * 4,&E 10.*Rg * 8,9,=E .)))))))))))))))))))))))))))Usually we cannot derive a universally quantified statement from its existentially quantified counterpart. But the following derivation shows that there are exceptions to this. 1. xy x = y 2. Show xy x = y +)))))))))))))))))), 3. *Show y a = y * *+)))))))))))))), * 4. **Show a = b * * **+)))))))))), * * 5. ***y c = y * * * 6. ***c = a * * * 7. ***c = b * * * 8. ***a = b * * * **.))))))))))- * * *.))))))))))))))- * .))))))))))))))))))P UD UD DD 1, E 5, E 5, E 6,7 =E


Identity You sometimes hear people who have not studied logic claim that a universal statement cannot be derived from a particular one. The following demonstration shows that this is not so. In this derivation, recall that x y is just ~ x = y. 1. Fa 2. Show x(~Fx x a) +)))))))))))))))))), 3. *Show ~Fb b a * *+)))))))))))))), * 4. **~Fb * * 5. **Show b a * * **+)))))))))), * * 6. ***b = a * * * 7. ***Show ! * * * ***+)))))), * * * 8. ****~Fa * * * * 9. ****! * * * * ***.))))))- * * * **.))))))))))- * * *.))))))))))))))- * .))))))))))))))))))Exercises 11-4 Provide derivations showing the following valid. 1. 2. 3. 4. 5. 6. P UD CD ACD ID AID DD 4, 6, =E 1, 8, !I

xyz[(x = y & y = z) x = z] ~Cr, xy(Cx x y) x([Rx & y(Ry x = y)] & Nx), Rt, Nt xy(x = y y = x) xy x = y xFx & yGy ~x(Fx & Gx) xy( x y & [(Fx w Gx) & (Fy w Gy)])

Provide derivations showing the following equivalent. 7. 8.

xy(Ay x = y) xy[(Bx & By) x = y]

x[Ax & y(Ay x = y)] xy(By x = y)

Provide derivations showing the following sets of sentences inconsistent. 9. xyGxy, ~xGxx, ~xy x y 273

Introductory Logic 10. 11. 12.

xy(Fy y = x), x(Yx Fx), xy[ x y & (Yx & Yy)] xy x = y, xFx, yDy, ~x(Fx & Dx) xy(Fxy x y), xFxx

The following derivations indicate how we might justify some of the claims made in Chapter 8 about expansion in finite domains. Provide derivations that show the following arguments valid. 13. 14. 15. 16.

x x = a, xFx Fa x x = a, xFx Fa x(x = a w x = b), xFx (Fa & Fb) x(x = a w x = b), xFx (Fa w Fb)


Answers to Odd Exercises


Chapter 1
Exercises 1.4: 1. True 3. False. An inconsistent form must have no instances where all the sentences are true, but the following is an instance of F3: All dogs are mammals. Cheddar cheese is made from milk. Both these sentences are true. 5. True 7. False. For instance, all instances of F1 are sets of inconsistent sentences, but each instance consists of two sentences with opposite truth values, and so one of them must be true. Consider, for instance, (I1) on page 7. The first sentence is true. 9. False; the first is true and the second false, so they cannot be equivalent. Exercises 1.5 1. False. Here is an invalid argument with true premises and conclusion: January 13, 1999 is a Wednesday. Therefore, William Jefferson Clinton and Hilary Rodham Clinton are parents of Chelsea Clinton. 3. True 5. True (provided it has at least one false premise). Exercises 1-A2 1. "Send" has four letters. 3. [no change] 5. The sign said, "Keep Off the Grass." 7. "Schnee" is the German word for snow. 9. Frederick Barbarossa shares the name "Barbarossa" with a Barbary pirate of the sixteenth century, Khayr al-Din. 11. "Phtholognyrrh" is pronounced the same as "turner;" the "phth" as in "phthisis," the "olo" as in "colonel," the "yrrh" as in "myrrh."

275

Introductory Logic

Chapter 2: Exercises 2.1 1. Unofficial Sentence. Main connective:

(A (B w C)) | --------------| | A (B w C) | --------| | B C 3. 5. 7. 9. Not a sentence. (Lacks one right parenthesis, at end.) Not a sentence. No rule allows (A) to appear in a sentence. Not a sentence. Lower case letters are not part of the language. Unofficial sentence. Main connective: w (((~P R) & S) w T) | -------------------------| ((~P R) & S) | -------------| | (~P R) S | ---------------| | ~P R | P 11. Unofficial Sentence. Main connective: | T

((P (Q & R)) ((P Q) & (P R))) | ------------------------------------| | (P (Q & R) ((P Q) & (P R)) | | --------------------------------------| | | | (P R) P (Q & R) (P Q) | | | -------------------------------| | | | | | Q R P Q P R 13. Not a sentence. Too many parentheses, or misplaced parentheses. (([P Q] w R) & (S T)) or ([P Q] w R) & (S T)


would be correct. Exercises 2.3: 1. ~A 3. A w B 5. M C 7. ~C ~M 9. If Bertha studies logic Alonzo studies logic. 11. Alonzo doesn't study logic and Bertha doesn't study logic. 13. If cows are not mammals then Bertha studies logic. Exercises 2-4: 1. ~T 3. T w S 5. T S 7. ~~A ~J or J ~A or E w A 9. ~E A 11. L & ~E 13. ~(T w S) or ~T & ~S 15. ~E & L 17. E L 19. (S J) & (~S A) 21. A S [Note: there are other equivalent English translations also for the following.] 23. Either Ellen was drunk or she danced lewdly. 25. If Julius did not have a good time, Alonzo did not disgrace himself. 27. Unless Ellen Danced lewdly, Alonzo did not disgrace himself. Exercises 2-5: (A ~E) & (G A) [(P & A) E] & [~S (~P M)] (~~R M) & (~R [(A & G) & ~E]) [R (~P & E)] & (~R [P & (A & G)]) Unless it rains on Sunday there will be a picnic and Alonzo will come to it if and only of Gertrude does. 11. If it rains on Sunday and there's a picnic, then Ethel will be distressed and either Alonzo or Gertrude won't be there. 13. H (W w ~B) or H (~~B W) or (H W) w ~B or ~~B (H W) 15. If the boss is angry, Ethel is taking pregnancy leave, and the network is down, then the Wilbursteen project will be two weeks late unless Sylvia is promoted. 1. 3. 5. 7. 9.

Chapter 3
Exercise 3-1 1. A (~B w C) = T | ----------------------| | A = T (~B w C) = T | ----------| | ~B = T C = T


| B = F 3. ~~[(A & B) (~F w B)] = F | ~[(A & B) (~F w B)] = T | [(A & B) (~F w B)] = F | -----------------------------| | (A & B) = F ~F w B = T | | ---------------------------| | | | A = T B = F ~F = T B = F | F = F (B & A) [(C w F) D] = T | ------------------------| | B & A = F (C w F) D = F | | -----------------------------------| | | | B = F A = T C w F = T D = F | --------| | C = T F = F ----------------------------------------| A | B | C | D | (A B) w (C D) | ----------------------------------------| T | F | T | F | T F F F T F F | ------------------------------------------------------------------------------------------| B | C | D | F | (B C) [(D & F) w ~(B & C)] | --------------------------------------------------| F | T | F | F | F T T T F F F T TF F T | -----------------------------------------------------------------------------------------------------| B | C | D | E | F | (B w C) ~[D (E & F)] | ---------------------------------------------------| F | T | F | T | F | F T T F F F T T F F | ----------------------------------------------------

5.

7.

9.

11.

Exercise 3-2 1. ------------------------| M | N | M (~N w M) | ------------------------| T | T | T T FT T T | -------------------------


| T | F | T T TF T T | ------------------------| F | T | F T FT F F | ------------------------| F | F | F T TF T F | ------------------------3. -----------------------------------| A | C | D | (A D) (~C & D) | -----------------------------------| T | T | T | T T T F FT F T | -----------------------------------| T | T | F | T F F T FT F F | -----------------------------------| T | F | T | T T T T TF T T | -----------------------------------| T | F | F | T F F T TF F F | -----------------------------------| F | T | T | F T T F FT F T | -----------------------------------| F | T | F | F T F F FT F F | -----------------------------------| F | F | T | F T T T TF T T | -----------------------------------| F | F | F | F T F F TF F F | -----------------------------------------------------------------| A | F | G | A w (~F G) | ------------------------------| T | T | T | T T FT F T | ------------------------------| T | T | F | T T FT T F | ------------------------------| T | F | T | T T TF T T | ------------------------------| T | F | F | T T TF F F | ------------------------------| F | T | T | F F FT F T | ------------------------------| F | T | F | F T FT T F | ------------------------------| F | F | T | F T TF T T | ------------------------------| F | F | F | F F TF F F | -------------------------------

5.

Exercise 3-4 1. ------------------------------| B | C | B C | B | ~C | ------------------------------| T | T | T T T | T | FT | ------------------------------| T | F | T F F | T | TF | ------------------------------| F | T | F T T | F | FT | ------------------------------| F | F | F T F | F | TF | ------------------------------The table shows the sentences inconsistent because in no row are all three sentences assigned the value True.



3.

------------------------------------------| E | F | G | E & ~F | G F | ~E w G | ------------------------------------------| T | T | T | T F FT | T T T | FT T T | ------------------------------------------| T | T | F | T F FT | F T T | FT F F | ------------------------------------------| T | F | T | T T TF | T F F | FT T T | ------------------------------------------| T | F | F | T T TF | F T F | FT F F | ------------------------------------------| F | T | T | F F FT | T T T | TF T T | ------------------------------------------| F | T | F | F F FT | F T T | TF T F | ------------------------------------------| F | F | T | F F TF | T F F | TF T T | ------------------------------------------| F | F | F | F F TF | F T F | TF T F | ------------------------------------------The table shows the sentences inconsistent because in no row are all three sentences assigned the value True.

5.

-------------------------------------------------------------| K | L | M | K & (L w M) | L (K ~M) | ~M ~L | -------------------------------------------------------------| T | T | T | T T T T T | T F T F FT | FT T FT | -------------------------------------------------------------| T | T | F | T T T T F | T T T T TF | TF F FT | -------------------------------------------------------------| T | F | T | T T F T T | F T T F FT | FT F TF | -------------------------------------------------------------| T | F | F | T F F F F | F T T T TF | TF T TF | -------------------------------------------------------------| F | T | T | F F T T T | T T F T FT | FT T FT | -------------------------------------------------------------| F | T | F | F F T T F | T F F F TF | TF F FT | -------------------------------------------------------------| F | F | T | F F F T T | F T F T FT | FT F TF | -------------------------------------------------------------| F | F | F | F F F F F | F T F F TF | TF T TF | -------------------------------------------------------------The table shows the sentences inconsistent because in no row are all three sentences assigned the value True.

7.

-------------------------------| P | Q | P Q | P ~Q | -------------------------------| F | T | F T T | F T FT | -------------------------------| F | F | F T F | F T TF | -------------------------------[Either line will do.]

9.

-----------------------------------------


| P R S | (R w S) P | R ~P | ~R S | ----------------------------------------| T F T | F T T T T | F T FT | TF T T | ----------------------------------------11. ------------------------------------------------------| A | Y | Z | (Z & ~Y) ~A | A ~Z | A w ~Y | ------------------------------------------------------| T | T | F | F F FT T FT | T T TF | T T FT | ------------------------------------------------------| T | F | F | F F TF T FT | T T TF | T T TF | ------------------------------------------------------| F | F | T | T T TF T TF | F T FT | F T TF | ------------------------------------------------------| F | F | F | F F TF T TF | F T TF | F T TF | ------------------------------------------------------Any of these lines will do. 13. ------------------------------------| A | M | A ~M | ~(A M) | ------------------------------------| T | T | T F FT | F T T T | ------------------------------------| T | F | T T TF | T T F F | ------------------------------------| F | T | F T FT | T F F T | ------------------------------------| F | F | F F TF | F F T F | ------------------------------------In all lines the two sentences have the same truth values. 15. ------------------------------------------------| C | D | R | ~C w (D ~R) | (C & D) ~R | ------------------------------------------------| T | T | T | FT F T F FT | T T T F FT | ------------------------------------------------| T | T | F | FT T T T TF | T T T T TF | ------------------------------------------------| T | F | T | FT T F T FT | T F F T FT | ------------------------------------------------| T | F | F | FT T F T TF | T F F T TF | ------------------------------------------------| F | T | T | TF T T F FT | F F T T FT | ------------------------------------------------| F | T | F | TF T T T TF | F F T T TF | ------------------------------------------------| F | F | T | TF T F T FT | F F F T FT | ------------------------------------------------| F | F | F | TF T F T TF | F F F T TF | ------------------------------------------------In all lines the two sentences have the same truth values.

17.

---------------------------------------------------| K | R | T | ~(R & K) & T | (T & ~R) w (T & ~K) | ---------------------------------------------------| T | T | T | F T T T F T | T F FT F T F FT | ----------------------------------------------------


| T | T | F | F T T T F F | F F FT F F F FT | ---------------------------------------------------| T | F | T | T F F T T T | T T TF T T F FT | ---------------------------------------------------| T | F | F | T F F T F F | F F TF F F F FT | ---------------------------------------------------| F | T | T | T T F F T T | T F FT T T T TF | ---------------------------------------------------| F | T | F | T T F F F F | F F FT F F F TF | ---------------------------------------------------| F | F | T | T F F F T T | T T TF T T T TF | ---------------------------------------------------| F | F | F | T F F F F F | F F TF F F F TF | ---------------------------------------------------In all lines the two sentences have the same truth values. 19. ------------------------------| I | Q | I Q | Q I | ------------------------------| T | F | T F F | F T T | ------------------------------| F | T | F T T | T F F | ------------------------------Either line will do. 21. ------------------------------------------------------| | P | Q | X | (P & Q) X | (P X) & (Q X) ------------------------------------------------------| T | F | F | T F F T F | T F F F F T F | ------------------------------------------------------| F | T | F | F F T T F | F T F F T F F | ------------------------------------------------------Either line will do. 23. ------------------------------------------------------| K | L | S | (L & K) (L & S) | L & (K S) | ------------------------------------------------------| T | F | T | F F T T F F T | F F T T T | ------------------------------------------------------| T | F | F | F F T T F F F | F F T F F | ------------------------------------------------------| F | F | T | F F F T F F T | F F F F T | ------------------------------------------------------| F | F | F | F F F T F F F | F F F T F | ------------------------------------------------------Any of these lines will do. 25. -------------------------| P | Q | P & Q | P w Q | -------------------------| T | T | T T T | T T T | -------------------------| T | F | T F F | T T F | -------------------------| F | T | F F T | F T T |


-------------------------| F | F | F F F | F F F | -------------------------There is no line where the premise is true and the conclusion false. 27. ---------------------------------------------------| P | S | T | (P w S) & T | ~(P & T) (S & T) | ---------------------------------------------------| T | T | T | T T T T T | F T T T T T T T | ----------------------------------------------------| T | T | F | T T T F F | T T F F F T F F | ----------------------------------------------------| T | F | T | T T F T T | F T T T T F F T | ----------------------------------------------------| T | F | F | T T F F F | T T F F F F F F | ----------------------------------------------------| F | T | T | F T T T T | T F F T T T T T | ----------------------------------------------------| F | T | F | F T T F F | T F F F F T F F | ----------------------------------------------------| F | F | T | F F F F T | T F F T F F F T | ----------------------------------------------------| F | F | F | F F F F F | T F F F F F F F | ----------------------------------------------------There is no row where the premise is true and the conclusion false. 29. -----------------------------------------------------| F | I | K | K (I & F) | F I | F K | -----------------------------------------------------| T | T | T | T T T T T | T T T | T T T | -----------------------------------------------------| T | T | F | F F T T T | T T T | T F F | -----------------------------------------------------| T | F | T | T F F F T | T F F | T T T | -----------------------------------------------------| T | F | F | F T F F T | T F F | T F F | -----------------------------------------------------| F | T | T | T F T F F | F T T | F T T | -----------------------------------------------------| F | T | F | F T T F F | F T T | F T F | -----------------------------------------------------| F | F | T | T F F F F | F T F | F T T | -----------------------------------------------------| F | F | F | F T F F F | F T F | F T F | -----------------------------------------------------There is no line where both premises are true and the conlusion is false. 31. ------------------------------| Q | X | Q X | X Q | ------------------------------| F | T | F T T | T F F | ------------------------------------------------------------------------| A | B | M | B w (M & A) | B & (M w A) |

33.


------------------------------------------| T | F | T | F T T T T | F F T T T | ------------------------------------------| F | T | F | T T F F F | T F F F F | ------------------------------------------Either of these lines will do.

35.

------------------------------------------------------------| J | P | R | (P J) & R | ~R ~J | R (P & ~J) | ------------------------------------------------------------| T | T | T | T T T T T | FT T FT | T F T F FT | ------------------------------------------------------------| T | F | T | F T T T T | FT T FT | T F F F FT | ------------------------------------------------------------| F | F | T | F T F T T | FT T TF | T F F F TF | ------------------------------------------------------------Any of these lines will do.

37.

-----------------------------------------------------R L | ~L & (~R w F) | | F | L | R | F R | -----------------------------------------------------| F | F | F | F T F | F T F | TF T TF T F | -----------------------------------------------------Consistent.

39.

-------------------------------------------------------------|C |P |Z | P (C Z)| ~(Z & P) |(C P) & (~C P)| -------------------------------------------------------------|T |T |T | T T T T T F T T T T T T F FT F T | -------------------------------------------------------------|T |T |F | T F T F F T F F T T T T F FT F T | -------------------------------------------------------------|T |F |T | F T T T T T T F F T F F F FT T F | -------------------------------------------------------------|T |F |F | F T T F F T F F F T F F F FT T F | -------------------------------------------------------------|F |T |T | T F F F T F T T T F F T F TF T T | -------------------------------------------------------------|F |T |F | T T F T F T F F T F F T F TF T T | -------------------------------------------------------------|F |F |T | F T F F T T T F F F T F F TF F F | -------------------------------------------------------------|F |F |F | F T F T F T F F F F T F F TF F F | -------------------------------------------------------------Inconsistent.

41.

---------------------------------------------------| D | I | K | I (D & K) | D ~K | D w I | ---------------------------------------------------| T | F | F | F T T F F | T T TF | T T F | ----------------------------------------------------


Consistent. 43. ---------------------------------------------| | B | J | P | P (B & J) | (P B) J ---------------------------------------------| T | F | F | F T T F F | F T T F F | ---------------------------------------------| F | T | T | T F F F T | T F F T T | ---------------------------------------------| F | F | T | T F F F F | T F F T F | ---------------------------------------------| F | F | F | F T F F F | F T F F F | ---------------------------------------------Not equivalent. ----------------------------------------------------------| C | P | U | (C w P) (P & U) | (P & U) w (~P & ~C) | ----------------------------------------------------------| T | T | T | T T T T T T T | T T T T FT F FT | ----------------------------------------------------------| T | T | F | T T T F T F F | T F F T FT F FT | ----------------------------------------------------------| T | F | T | T T F F F F T | F F T F TF F FT | ----------------------------------------------------------| T | F | F | T T F F F F F | F F F F TF F FT | ----------------------------------------------------------| F | T | T | F T T T T T T | T T T F FT F TF | ----------------------------------------------------------| F | T | F | F T T F T F F | T F F F FT F TF | ----------------------------------------------------------| F | F | T | F F F T F F T | F F T T TF T TF | ----------------------------------------------------------| F | F | F | F F F T F F F | F F F T TF T TF | ----------------------------------------------------------Equivalent. 47. -------------------------------------------------------------| D | P | Q | (P & D) w (~P & D) | (Q D) & (~D Q ) | -------------------------------------------------------------| T | T | T | T T T T FT F T | T T T T FT T T | -------------------------------------------------------------| T | T | F | T T T T FT F T | F T T T FT T F | -------------------------------------------------------------| T | F | T | F F T T TF T T | T T T T FT T T | -------------------------------------------------------------| T | F | F | F F T T TF T T | F T T T FT T F | -------------------------------------------------------------| F | T | T | T F F F FT F F | T F F F TF T T | -------------------------------------------------------------| F | T | F | T F F F FT F F | F T F F TF F F | -------------------------------------------------------------| F | F | T | F F F F TF F F | T F F F TF T T | -------------------------------------------------------------| F | F | F | F F F F TF F F | F T F F TF F F | -------------------------------------------------------------Equivalent. 49. -------------------------------------------| K | L | M | K L | L w M | K w M | -------------------------------------------| T | T | T | T T T | T T T | T T T |

45.


-------------------------------------------| T | T | F | T T T | T T F | T T F | -------------------------------------------| T | F | T | T F F | F T T | T T T | -------------------------------------------| T | F | F | T F F | F F F | T T F | -------------------------------------------| F | T | T | F F T | T T T | F T T | -------------------------------------------| F | T | F | F F T | T T F | F F F | -------------------------------------------| F | F | T | F T F | F T T | F T T | -------------------------------------------| F | F | F | F T F | F F F | F F F | -------------------------------------------Valid. 51. ------------------------------------------| H | R | T | H (R w T) | ~T | H | ------------------------------------------| F | F | F | F T F F F | TF | F | ------------------------------------------Invalid. 53. ------------------------------------------------------| B | Q | S | ~[B & (S Q)] | B & (Q S) | Q | ------------------------------------------------------| T | F | T | T T F T F F | T T F T T | F | ------------------------------------------------------Invalid. Exercises 3.5 1. F: The Flugelhenim project is delayed. P: The pointy-haired boss is mad. W: Wally will be promoted. F (P w ~W) P & W P ~F --------------------------------------------------| F | P | W | F (P w ~W) | P & W | P ~F | --------------------------------------------------| F | T | T | F T T T FT | T T T | T T TF | --------------------------------------------------Consistent. 3. B: Bert got lots of sleep. E: Ernie got lots of sleep. O: Oscar got lots of sleep. O & (E O E ~(B & E) B)


-----------------------------------------------------| B | E | O | O & (E B) | O E | ~(B & E) | -----------------------------------------------------| T | T | T | T T T T T | T T T | F T T T | -----------------------------------------------------| T | T | F | F F T T T | F T T | F T T T | -----------------------------------------------------| T | F | T | T F F F T | T F F | T T F F | -----------------------------------------------------| T | F | F | F F F F T | F T F | T T F F | -----------------------------------------------------| F | T | T | T F T F F | T T T | T F F T | -----------------------------------------------------| F | T | F | F F T F F | F T T | T F F T | -----------------------------------------------------| F | F | T | T T F T F | T F F | T F F F | -----------------------------------------------------| F | F | F | F F F T F | F T F | T F F F | -----------------------------------------------------Inconsistent. 5. R: The read team is free from injuries. B: The blue teams's star player is healthy. G: It will be a good game. R
(B G)

~G

~(R & B)

-------------------------------------------------| B | G | R | R (B G) | ~G ~(R & B) | -------------------------------------------------| T | T | T | T T T T T | FT T F T T T | -------------------------------------------------| T | T | F | F T T T T | FT T T F F T | -------------------------------------------------| T | F | T | T F T F F | TF F F T T T | -------------------------------------------------| T | F | F | F T T F F | TF T T F F T | -------------------------------------------------| F | T | T | T T F T T | FT T T T F F | -------------------------------------------------| F | T | F | F T F T T | FT T T F F F | -------------------------------------------------| F | F | T | T T F T F | TF T T T F F | -------------------------------------------------| F | F | F | F T F T F | TF T T F F F | -------------------------------------------------Equivalent. 7. L: We well learn a lot. P: We will pass the course. W: We work hard. W
(P & L)

(W

P) & (W

L)

-----------------------------------------------------| L | P | W | W (P & L) | (W P) & (W L) | -----------------------------------------------------| T | T | T | T T T T T | T T T T T T T | -----------------------------------------------------| T | T | F | F T T T T | F T T T F T T |


-----------------------------------------------------| T | F | T | T F F F T | T F F F T T T | -----------------------------------------------------| T | F | F | F T F F T | F T F T F T T | -----------------------------------------------------| F | T | T | T F T F F | T T T F T F F | -----------------------------------------------------| F | T | F | F T T F F | F T T T F T F | -----------------------------------------------------| F | F | T | T F F F F | T F F F T F F | -----------------------------------------------------| F | F | F | F T F F F | F T F T F T F | -----------------------------------------------------Equivalent 9. G: The pirates will get a good manager. N: The Pirates will get a new manager. W: The Pirates will win the pennant. W

(N

G), ~(W w ~N),

therefore

-----------------------------------------------------~(W w ~N) | G | | G | N | W | W (N G) | -----------------------------------------------------| F | T | F | F T T F F | T F F FT | F | -----------------------------------------------------Invalid.

Chapter 4
Exercises 4-2: 1. 3. a. 1. 2. (b, c, can be derived with two uses) P DD 1, &E P P DD 1, &E 2, &E 4, 5, &I

A & (B w C) SHOW B w C -------------3. | B w C | --------------

5.

1. G & H 2. I & J 3. SHOW G & J ----------------4. | G | 5. | J | 6. | G & J | ----------------1. 2. 3. 4.

7.

P & (Q R) P (Q w R) & R P SHOW [P & (Q w R)] & R DD ---------------------| P | 1, &E


Q w R | | P & (Q w R) R | [P & (Q w R)] & R | ---------------------Exercise 4-3 5. 6. 7. 8. | | | | 1. 3. 5. 7. b, d, e, f, g a, a, b

2, 4, 2, 6,

&E 5, &I &E 7, &I

1. A & (C D) 2. C & ~P 3. SHOW D & (A & ~P) -------------------------4. | C | 5. | C D | 6. | D | 7. | A | 8. | ~P | 9. | A & ~P | 10. | D & (A & ~P) | -------------------------1. P & Q 2. Q A 3. A R 4. SHOW R w S --------------------5. | Q | 6. | A | 7. | R | 9. | R w S | --------------------1. F G 2. ~H & I 3. H w G 4. SHOW F & I --------------------5. | I | 6. | ~H | 7. | G | 8. | F | 9. | F & I | --------------------1. 2. 3. 4. P P P DD

P P DD 2, 1, 4, 1, 2, 7, 6, &E &E 5, E &E &E 8, &I 9, &I

9.

1 &E 5, 2, E 3, 6, E 7, wI P P P DD 2, 2, 3, 1, 5, &E &E 6, wE 7, E 8, &I

11.

13.

P W & (T V) ~X P X w (V T) P SHOW T V DD ------------------------| 1, &E 5. | T V | 2, 3, wE 6. | V T 7. | T V | 5, 6, I ------------------------1. 2. 3. 4. (A & R) & G R (M & S) S w Q -SHOW Q w S -----------------P P P DD

15.


5. 6. 7. 8. 9. | | | | | A & R | R | M & S | S | Q w S | -----------------1, 5, 2, 7, 8, &E &E 6, E &E wI

Exercise 4-4 1. 1. B w C ~C (A w B) 2. SHOW ------------------------3. | ~C | 4. | SHOW A w B | | ---------------------- | 5. || B | | | | 6. || A w B | ---------------------- | ------------------------3. 1. F w G 2. H & ~I ~G (F & ~I) 3. SHOW ------------------------4. | ~G | 5. | SHOW F & ~I | | ----------------------- | 6. | | F || 7. | | ~I || 8. | | F & ~I || | ---------------------- | -------------------------

P CD ACD DD 1, 3, wE 5, wI

P P CD ACD DD 1, 4, wE 2, &E 6, 7, &I

5.

1. L (M N) 2. SHOW (L M) (L N) ----------------------------3. | L M | 4. | SHOW L N | | -------------------------- | 5. || L | | N | | 6. || SHOW || ---------------------- | | 7. || | M | | | | | | 8. || | M N 9. || | N | | | || ----------------------- | | | -------------------------- | ----------------------------1. 2. 3. 4. 5. 6. 7. 8. R w (S & ~S) SHOW R -------------------| ~R | | SHOW ! | | ---------------- | | | S & ~S | | | | S | | | | ~S | | | | ! | | P ID AID DD 1, 5, 5, 6, 3 wE &E &E 7, !I

P CD ACD CD ACD DD 3, 5, E 1, 5, E 7, 8, E

7.


---------------- | --------------------

9.

1. 2. 3. 4. 5. 6.

P ~P P SHOW P ID ---------------------------| ~P | AID ! | DD | SHOW | ------------------------ | | | P | | 1, 3, E | | ! | | 3, 5, !I | ------------------------ | ----------------------------

11.

P 1. ~T w V 2. SHOW T V CD ----------------------------3. | T | ACD 4. | SHOW V | ID | -------------------------- | 5. | | ~V | | AID 6. | | SHOW ! | | DD | | ---------------------- | | 7. | | | ~T | | | 1, 5, wE 8. | | | ! | | | 3, 7, !I | | ---------------------- | | | -------------------------- | ------------------------------

Exercise 4-5

1.  6. wE requires two earlier lines; it can't be used here.
    7. →E requires two earlier lines; it can't be used here.
    8. Wrong form for wE; you'd need ~~P.
    10. Wrong form for wE; you'd need ~Q.

3.  2. The "SHOW" on this line cannot be canceled until those on lines 6 and 4 are.
    3. Q cannot be assumed here; only P could be.
    7. Justification should be 1, 5, &I.

Exercise 4-6 1. 1. ~P (S w T) 2. ~S & ~T 3. SHOW P -----------------------4. | ~P | | 5. | SHOW ! | -------------------- | 6. | | S w T | | 7. | | ~S | | 8. | | T | | 9. | | ~T | | 10. | | ! | | | -------------------- | -----------------------3. 1. ~P w Q 2. SHOW P Q -----------------------3. | P | 4. | SHOW Q | P P ID AID DD 1, 2, 6, 2, 8, 4, E &E 7, wE &E 9, !I

P CD ACD ID


| ------------------| 5. | | ~Q | | 6. | | SHOW ! | | | | --------------- | | 7. | | | ~P | | | 8. | | | ! | | | | | --------------- | | | ------------------| -----------------------5. 1. P 2. SHOW ~~P -----------------------3. | ~~~P | | 4. | SHOW ! | -------------------- | 5. | | SHOW ~P | | | | ---------------- | | 6. | | | ~~P | | | 7. | | | SHOW ! | | | | | | ------------ | | | 8. | | | | ! | | | | | | | ------------ | | | | | ---------------- | | 9. | | ! | | | -------------------- | -----------------------7. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17 18.

AID DD 1, 5, wE 3, 7, !I

P ID AID DD ID AID DD 3, 6, !I 1, 5, !I

P P R Q R P P w Q P SHOW R ID ---------------------| ~R | AID | DD | SHOW ! | ------------------ | | | SHOW ~P | | ID | | -------------- | | | | | ~~P | | | AID | | | DD | | | SHOW ! | | | ----------- | | | | | | | SHOW P || | | ID | | | | -------- || | | | | | | | ~P ||| | | AID | | | | | SHOW ! ||| | | DD | | | | | ----- ||| | | | | | | | | ! |||| | | 8, 11, !I | | | | | ----- ||| | | | | | | -------- || | | | | | | R || | | 1, 10, E | | | | ! || | | 5, 14, !I | | | ----------- | | | | | -------------- | | | | Q | | 3, 7, wE | | R | | 2, 16, E | | ! | | 5, 17, !I | ------------------ | ----------------------


9.

1. ~P 2. SHOW P Q ----------------------3. | P | 4. | SHOW Q | | ------------------- | 5. | | ~Q | | | | 6. | | SHOW ! | | --------------- | | 7. | | | ! | | | | | --------------- | | | ------------------- | ----------------------1. A w B B w A 2. SHOW ----------------------3. | ~(B w A) | | 4. | SHOW ! | ------------------- | 5. | | SHOW ~A | | | | --------------- | | 6. | | | ~~A | | | 7. | | | SHOW ! | | | | | | ----------- | | | 8. | | | | SHOW A | | | | | | | | ------- | | | | 9. | | | | | ~A | | | | | 10. | | | | | SHOW !| | | | | | | | | | --- | | | | | 11. | | | | | | ! | | | | | | | | | | | --- | | | | | | | | | ------- | | | | 12. | | | | B w A | | | | 13. | | | | ! | | | | | | | ----------- | | | | | --------------- | | 14. | | B | | 15. | | B w A | | 16. | | ! | | | ------------------- | ----------------------1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14.

P CD ACD ID AID DD 1, 3, !I

11.

p ID AID DD ID AID DD ID AID 6, 9, !I 8, wI 3, 12, !I 1, 5, wE 14, wI 3, 15, !I

13.

P B w H B M P H M P SHOW M ID ---------------------| ~M | AID | -SHOW ! | DD | ------------------ | | | -SHOW B | | ID | | -------------- | | | | | ~B | | | AID | | | -SHOW ! | | | DD | | | ---------- | | | | | | | H | | | | 1, 8, vE | | | | M | | | | 3, 10, -->E | | | | ! | | | | 5, 11, !I | | | ---------- | | | | | -------------- | | | | M | | 2, 7, -->E | | ! | | 5, 13, !I


| ------------------ | ----------------------

Chapter 5: EXERCISES 5-1 1. a (Note: D and ~C could be inferred, but not using MT) 3. d 5. 1. (T w R) N P 2. Q & ~N P DD 3. SHOW ~(T w R) --------------------------4. | ~N | 2, &E 5. | ~(T w R) | 1, 4, MT --------------------------1. 2. 3. 4. R I I S S R SHOW S R --------------------------5. | R S | 6. | S R | --------------------------R w M W & ~M SHOW F R -------------------------| F | | SHOW R | | ---------------------- | | | ~M | | | | R | | | ---------------------- | -------------------------P (D v Y) R w P ~R & (D Q) Y Q -SHOW Q ------------------------| ~R | | P | | D w Y | | D Q | | Q | ------------------------P P P DD 1, 2, HS 3, 5, I P P CD ACD DD 2, &E 1, 6, wE

7.

9.

1. 2. 3. 4. 5. 6. 7.

11.

1. 2. 3. 4. 5. 6. 7. 8. 9. 10.

P P P P DD 3, &E 2, 6, w 1, 7, E 3, &E 4, 8, 9, SC


13.

1. K S 2. P & ~S 3. SHOW ~K --------------------------4. | ~S | 5. | ~K | ---------------------------

P P DD 2, &E 1, 4, MT

Exercises 5-2 1. 1. Q X 2. SHOW (Q X) & (X Q) -------------------------3. | SHOW Q X | | ---------------------- | 4. | | Q | | | | 5. | | SHOW X | | ------------------ | | 6. | | | X | | | | | ------------------ | | | ---------------------- | 7. | SHOW X Q | | ---------------------- | 8. | | X | | 9. | | SHOW Q | | | | ------------------ | | 10. | | | Q | | | | | ------------------ | | | ---------------------- | 11. | (Q X) & (X Q) | -------------------------1. (Q X) & (X Q) 2. SHOW Q X --------------------------3. | Q X | 4. | X Q | 5. | Q X | --------------------------3. 1. P Q 2. SHOW ~P w Q ---------------------| 3. | ~(~P w Q) | 4. | SHOW ! | ------------------ | 5. | | SHOW P | | | | -------------- | | 6. | | | ~P | | | | | | 7. | | | SHOW ! | | | ---------- | | | 8. | | | | ~P w Q | | | | 9. | | | | ! | | | | | | | ---------- | | | | | -------------- | | 10. | | Q | | 11. | | ~P w Q | | 12. | | ! | | | ------------------ | ---------------------1. 2. ~P w Q SHOW P Q P ID AID DD ID AID DD

P DD CD ACD DD 1, 4, E CD ACD DD 1, 8, E 3, 7, &I P DD 1, &E 1, &E 3, 4, I

6, wI 3, 8, !I 1, 5, E 10, wI 3, 11, !I

P CD


---------------------3. |P | 4. | SHOW Q | | ------------------ | 5. | | ~Q | | 6. | | SHOW ! | | | | -------------- | | 7. | | | ~P | | | 8. | | | ! | | | | | -------------- | | | ------------------ | ---------------------5. 1. P w Q 2. SHOW Q w P ---------------------3. | SHOW P (Q w P) | | ------------------ | 4. | | P | | 5. | | SHOW Q w P | | | | -------------- | | 6. | | | Q w P | | | | | -------------- | | | ------------------ | | 7. | SHOW Q (Q w P) | ------------------ | 8. | | Q | | 9. | | SHOW Q w P | | | | -------------- | | | | | 10. | | | Q w P | | -------------- | | | ------------------ | 11. | Q w P | ---------------------1. Q w P 2. SHOW P w Q ---------------------3. | SHOW Q (P w Q) | | ------------------ | 4. | | Q | | | | 5. | | SHOW P w Q | | -------------- | | | | | 6. | | | P w Q | | -------------- | | | ------------------ | 7. | SHOW P (P w Q) | | ------------------ | 8. | | P | | 9. | | SHOW P w Q | | | | -------------- | | 10. | | | P w Q | | | | | -------------- | | | ------------------ | 11. | P w Q | ---------------------7. 1. Z U 2. SHOW ~U ~Z

ACD ID AID DD 1, 5, wE 3, 7, !I

P DD CD ACD DD 4, wI CD ACD DD 8, wI 1, 3, 7, SC P DD CD ACD DD 4, wI CD ACD DD 8, wI 1, 3, 7, SC P CD


-----------------------3. | ~U | ACD 4. | SHOW ~Z | DD | -------------------- | 5. | | ~Z | | 1, 3, MT | -------------------- | -----------------------1. ~U ~Z 2. SHOW Z U ---------------------3. | Z | 4. | SHOW U | | ------------------ | 5. | | ~U | | | | 6. | | SHOW ! | | -------------- | | 7. | | | ~Z | | | 8. | | | ! | | | | | -------------- | | | ------------------ | ---------------------9. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. P (Q & R) ~R & Q Q P SHOW ! ----------------------| Q | | P | | Q & R | | R | | ~R | | ! | ----------------------P CD ACD ID AID DD 1, 5, E 3, 7, !I

P P P DD 2, 3, 1, 7, 2, 8, &E 5, E 6, E &E &E 9, !I

1. 2. 3. 4. 5. 6. 7. 8. 9. 10.

R (H A) P ~A H P H & R P SHOW ! DD ----------------------| H | 3, &E | R | 3, &E | ~A | 2, 5, E | H A | 1, 6, E | A | 5, 8, E | ! | 7, 9, !I -----------------------

Exercises 5-3 1. 1. P w ~L P 2. L & (Z ~P) P DD 3. SHOW ~~L & ~Z -----------------------4. | L | 2, &E 5. | ~~L | 4, DN 6. | Z ~P | 2, &E 7. | P | 1, 5, wE 8. | ~~P | 7, DN 9. | ~Z | 6, 8, MT 10. | ~~L & ~Z | 5, 9, &I ------------------------


3. 1. (H w T) ~S 2. S & (T w Q) 3. SHOW ~H & Q ---------------------4. | S | 5. | ~~S | 6. | ~(H w T) | 7. | ~H & ~T | 8. | ~H | 9. | ~T | | 10. | T w Q 11. | Q | 12. | ~H & Q | ---------------------1. ~(C T) 2. H w T 3. SHOW H ----------------------4. | C & ~T | 5. | ~T | 6. | H | ----------------------1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 9. S (A N) Q w (~A N) ~Q & (~S T) SHOW T -----------------------| ~Q | | ~A N | | ~(A N) | | ~S | | ~S T | | T | -----------------------P P DD 2, 4, 1, 6, 7, 7, 2, 9, 8, P P DD 1, ~ 4, &E 2, 5, wE P P P DD 3, &E 2, 5, wE 6, ~ 1, 7, MT 3 &E 8, 9, E P P DD 2, Com 1, 4, HS 5, Com P DD 1, Com 3, Assoc. 4, Com P P P DD &E DN 5, MT DeM &E &E &E 10, wE 11, &I

5.

7.

1. (P & Q) (H w X) 2. G (Q & P) 3. SHOW G (X w H) -----------------------4. | G (P & Q) | 5. | G (H w X) | | 6. | G (X w H) -----------------------1. A w (B w C) 2. SHOW (C w A) w B ----------------------3. | A w (C w B) | | 4. | (A w C) w B 5. | (C w A) w B | ----------------------1. 2. 3. 4. B w (~J & W) (J w W) & (J w H) (W & H) B SHOW B ------------------------

11.

13.


5. | J w (W & H) | 2, Dist. | CD 6. | SHOW J B | -------------------- | 7. | | J | | ACD 8. | | SHOW B | | DD | | ---------------- | | 9. | | | ~~J | | | 7, DN | | | 9, wI 10. | | | ~~J w ~W 11. | | | ~(~J & W) | | | 10, DeM 12. | | | B | | | 1, 11, wE | | ---------------- | | | -------------------- | 13. | B | 3, 5, 6, SC -----------------------15. 1. ~I ~(E w ~X) 2. SHOW I w X -----------------------| 3. | SHOW ~I X | -------------------- | 4. | | ~I | | 5. | | SHOW X | | | | ---------------- | | 6. | | | ~(E w ~X) | | | 7. | | | ~E & ~~X | | | 8. | | | ~E & X | | | 9. | | | X | | | | | ---------------- | | | -------------------- | 10. | I w X | -----------------------P DD CD ACD DD 1, 6, 7, 8, 4, E DeM DN &E

3, w

17.

1. P (Q R) P 2. ~R w S P 3. P & (S ~Q) P 4. SHOW ~Q DD ------------------------5. | P | 3, &E 6. | Q R | 1, 5, E 7. | ~R ~Q | 6, Ctr 8. | S ~Q | 3 &E 9. | ~Q | 2, 7, 8, SC -------------------------

Exercises 5-4 1. 1. L (E w H) 2. SHOW ~E (~L w H) ---------------------------3. | ~L w (E w H) | 4. | (~L w E) w H | 5. | (E w ~L) w H | | 6. | E w (~L w H) 7. | ~E (~L w H) | ---------------------------P DD 1, 3, 4, 5, 6,
w

Assoc Com Assoc


w

3.

1. SHOW (A M) w (B ~M) DD -------------------------------------| CD 2. | SHOW M M | ---------------------------------- | 3. | | M | | ACD 4. | | SHOW M | | DD | | ------------------------------ | |


5. | | | M | | | 3,R | | ------------------------------ | | | ---------------------------------- | 6. | ~M w M | 2, w 7. | M w ~M | 6, Com | 7, wI 8. | ~A w (M w ~M) 9. | (~A w M) w ~M | 8, Assoc. 10. | (A M) w ~M | 9, w 11. | [(A M) w ~M] w ~B | 10, wI 12. | (A M) w (~M w ~B) | 11, Assoc. | 12, Com. 13. | (A M) w (~B w ~M) 14. | (A M) w (B ~M) | 13, w -------------------------------------5. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 7. P Q S Q w T P P T (Q w S) SHOW (S & T) w (S & Q) DD ----------------------| SHOW S | ID | ------------------- | | | ~S | | AID | | SHOW ! | | DD | | --------------- | | | | | ~Q | | | 1, 6, MT | | | T | | | 2, 8, wE | | | Q w S | | | 3, 9, E | | | S | | | 8, 10, wE | | | ! | | | 6, 11, !I | | --------------- | | | ------------------- | | S & (Q w T) | 2, 5, &I | S & (T w Q) | 13, Com | (S & T) w (S & Q) | 14, Dist -----------------------

1. (P Q) R P 2. (S ~Q) T P 3. SHOW (R w ~T) (S R) CD --------------------------| ACD 4. | R w ~T 5. | SHOW S R | CD | ----------------------- | 6. | | S | | ACD 7. | | SHOW R | | DD | | ------------------- | | | | | CD 8. | | | SHOW R R | | | --------------- | | | 9. | | | | R | | | | ACD 10. | | | | SHOW R | | | | DD | | | | ----------- | | | | 11. | | | | |R | | | | | 9, R | | | | ----------- | | | | | | | -------------- | | | | | | CD 12. | | | SHOW ~T R | | | --------------- | | | 13. | | | | ~T | | | | ACD | | | | DD 14. | | | | SHOW R | | | | ------------ | | | |


15. 16. 17. 18. 19. 20. 21.

| | | | | | | | | 22. | | | 1. 2. 3. 4. 5. 6. 7. 8. 9.

| | | ~(S ~Q) || | | | | | | S & ~~Q || | | | | | | ~~Q || | | | | | | Q || | | | | | | ~P w Q || | | | | | |P Q || | | | | | | R || | | | | | ------------ | | | | | -------------- | | | |R | | | ------------------- | | ----------------------- | --------------------------| | | | | | | | | | | P P

2, 13, MT 15, ~ 16, &E 17, DN 18, wI 19, v 1, 20, E 4, 8, 12, SC

9.

10. 11. 12. 13. 14. 11.

F G F w G SHOW F & G ---------------------| | SHOW F (F & G) | ------------------- | | | F | | | | SHOW F & G | | | | --------------- | | | | | G | | | | | | F & G | | | | | --------------- | | | ------------------- | | | SHOW G (F & G) | ------------------- | | | G | | | | SHOW F & G | | | | --------------- | | | | | F | | | | | | F & G | | | | | --------------- | | | ------------------- | | F & G | -----------------------

DD CD ACD DD 1, 5, E 5, 7, &I; CD ACD DD 1, 10, E 10, 12, &I; 2, 4, 9, SC P DD 1, w 3, Dist 4, w 5, w

1. K (N & F) 2. SHOW (K N) & (K F) -------------------------| 3. | ~K w (N & F ) 4. | (~K w N) & (~K w F) | 5. | (K N) & (~K w F) | 6. | (K N) & (K F) | -------------------------1. 2. 3. 4. 5. 6.

P (K N) & (K F) SHOW K (N & F) DD -------------------------| (K N) & (~K w F) | 1, v | 3, v | (~K w N) & (~K w F) | ~K w (N & F ) | 4, Dist | 5, v | K (N & F) -------------------------P P P DD 3, Com

13.

1. 2. 3. 4.

(P Q) (S & T) ~S Q w ~P SHOW ! -------------------------5. | ~P w Q |


6. 7. 8. 9. | | | | P Q | S & T | S | ! | -------------------------5, v 1, 6, E 7, &E; 2, 8, !I

Exercises 5-5

1.  1. A & (B w C)                 P
    2. ~C                          P
    3. SHOW B                      DD
       -------------------------
    4. | B w C |                   1, &E
    5. | B     |                   2, 4, wE
       -------------------------

    We are given that both A and either B or C, and we are also given that C is false. We want to SHOW B. Since we are given that both A and either B or C, we can infer that either B or C. And since we are also given that C is false, B must be true. QED.

3.  1. A w ~B                      P
    2. B w A                       P
    3. SHOW A                      ID
       -------------------------
    4. | ~A                        AID
    5. | SHOW !                    DD
       | -----------------------
    6. | | ~B                      1, 4, wE
    7. | | B                       2, 4, wE
    8. | | !                       6, 7, !I
       | -----------------------
       -------------------------

We are given that either A or not B, and also that either A or B. We want to SHOW that A. We will use an indirect proof. We first assume that not-A. But from this and the first premise, that A or not-B, it follows that not-B. But from our assumption and the second premise, that A or B, it follows that B. So a contradiction follows from our assumption, not-A, which must therefore be false. Hence A is true. QED.

Chapter 6: Exercises 6-1
1. Sentence
3. Not a sentence; "z" is not a name.
5. Sentence
7. Sentence
9. Donald is on the lacrosse team and Benjamin is on the baseball team.
11. If Carol is not on the field hockey team, she's on the softball team.
13. If Alice is on the lacrosse team then Donald is on the softball team and Benjamin is on the field hockey team.


Exercises 6-2 1. All dogs. 3. Nothing (assuming nothing is both a dog and a wolf; if you classify the result of breeding a dog with a wolf as both a dog and a wolf, such a creature would satisfy this open sentence.) 5. All animals. 7. non-mammals and dogs. 9. all animals. (no animal satisfies the antecedent.) 11. Wolves. Exercises 6-3 1. a 3. a, c 5. b, c 7. a, c 9. 1, 2 11. 1, 2, 3 13. 1 15. 1, 2 Exercises 6-4 1. T 3. T 5. F 7. T 9. T 11. T 13. T 15. T 17. T 19. T 21. T 23. F Exercises 6-5 1. x(Vx Sx) 3. x(Vx Sx) ~x(Vx Bx) 5. x(Kx (Dx & ~Bx)) 7. y[(Ey w Ky) Dy] 9. z(Mz Lz) 11. w(Lw [Mw w Pw]) 13. v(Lv w Pv) 15. x[~(Mx w Lx) Px] 17. y[Cy Ny] 19. Tt 21. z[(Tz &Tt) ~Cz] 23. w[Nw (Cw w Tw)] Exercises 6-6 1. 3. 5. 7. 9. 11. 13. 15. 17. 19. 21. 23.
xCx z(Fz & Jz) v(~Rv & Cv) y(Jy & Ry)& ~ y(Cy & Ry) w(Aw & Cw) x(Ax & Bx) & ~ x(Bx & Cx) ~z~Az v[Av & ~(Bv w Cv)] yOy w(Ow & Qw) xPx ~x(Ox & Px) x(Qx & Ox) Qb ~z(Pz & Oz)


Exercises 6-7 1. 3. 5. 7. 9. 11. 13. 15. 17. 19. 21. 23.
x(Bx Hx) z[(Hz & Jz) ~Pz] v(~~Hv ~Pv) y[(Wy & ~Hy) ~(Jy w By)] w[(Bw & ~Hw) & ~Pw] x(Sx & ~Hx) & x(Bx ~~Hx) z[Ez (Lz w Yz)] v(~Ev ~Lv) y[~Yy ~~(Ey w Ly)] w[Mw (Aw & Tw)] x(Ax & Tx) z[(Az & Mz) Tz]

Exercises 6-8 1. x(Fx Mx) & x(Sx Wx) 3. z[([Mz & Lz] & Fz) ~Gz] 5. [v(Mv & Lv) & v(Wv & Lv)] & [( v[Mv & Gv] & v[Wv & Gv]) & ~v([Mv & Sv] w [Wv & Fv])] 7. y(Fx Mx) & [x(Mx & Gx) & y(Wy & Gy)] 9. No respected physicist believes that aliens regularly visit earth in flying saucers. 11. Not everyone one who believes that aliens regularly visit earth in flying saucers fails to be completely credible on the subject of aliens. 13. Every respected physicist who is completely credible on the subject of aliens neither claims to have been abducted by aliens in a flying saucer nor believes that aliens regularly visit the earth in flying saucers. 15. If everyone who is completely credible on the subject of aliens does not believe that aliens regularly visit the earth in flying saucers, then no respected physicist claims to have been abducted by aliens in flying saucers.


Chapter 7:
Exercises 7-1 1. 3. 5. 7. 9. 11. 13. 15. 17. 19. 21. 23. 25. 27.
xOax & ~Oag x(Ogx & ~Lgx) x(Rax Lax) x(Rax & Lax) & ~ x(Rgx & Lgx)

Bcad (Gcb & Gca) & ~Gcd (Gab w Gcb) & ~(Gab & Gcb) Gca ~Rcbbd yGyd z(Gzc Gzb) w[(Gwa & Lwc) Rwadw] (Lab & Lbd) Bbad Alonzo is in the same writing class as Gertrude. Alonzo is in the same writing class as someone and Gertrude is in the same writing class as someone.[Note: NOT Alonzo and Gertrude are both in the same writing class as someone. (Why not?)] 29. Gertrude is in the same writing class as someone who is in the same writing class as Alonzo. 31. Alonzo is in the same writing class as everyone who lives in the same dorm as Gertrude.

Exercises 7-2 1. xOax & ~Oag 3. y(Ogy & ~Lgy) 5. x(Tax Lax) 7. z(Taz & Laz) & ~ z(Tgz & Lgz) 9. w(Ma & Paw) & w(Fg & Pgw) 11. v[Ma & (Pav & Pve)] 13. x[(Fx & Sax) & y(Pyg & Pyx)] & ~Sag 15. x( [(Sax & y(Pyx & Pyg)) & Fg] w [(Sxg & y(Pyx & Pya)) & Fg]) &~Sag 17. Every professor teaches someone. 19. Every professor is despised by some student. 21. Every student admires some professor or other. 23. No professor despises every student he or she teaches. 25. x(Fx & Bexb) 27. x[yz([Tx & (Fy & Pz)] & Bzyx) & y(My & Bgyx)] 29. xyzw[((Cx & Pz) & (Ty & Tw)) & ((Bgxy & Bzxw) & Lyw)] 31. ~x[(Tx & y(My & Bgyx)] Exercises 7-3 1. 3. 5. 7. 9. 11. 13. 15. 17. 19. 21. 23. T T T F F T F T T T T F

Exercises 7.4:


1. 3. 5.
x(Rx Sxg) ~x(Rx Sxg) x(Rx ~ySxy) yx(Rx ~Sxy) ~x(Rx ySxy) y~x(Rx Sxy) ~xy(Rx & Sxy)

7.

Chapter 8: Exercises 8-1 1. Your interpretation must have at least two things in its domain. At least one thing must be F, and something else must be G but not F. At least one thing must not be G. Your interpretation must have nothing that is A but at least one thing that is C. Your interpretation must have at least one thing that is either D but not E or E but not D. It must either have something that is R and E or something that is D and E. Your interpretation must have at least two things in its domain. At least one thing must be B and everything that is B must be C. Something must be neither B nor C. Your interpretation must have at least two things in its domain. Nothing may be C. Something must be B and something must not be B.

3. 5.

7.

9.

11. You interpretation must have at least two things in its domain. Either something must not be M and something must be H and something not H, or something must be M, something not M, and nothing H. 13. Your interpretation must have at least two things in its domain. At least one must be F and at least one must be G, but nothing may be both F and G. 15. You interpretation must have at least two things in its domain. The things named a and b must be distinct and one must be G and one not. Nothing may be F. 17. Your interpretation must have at least two things in its domain. There must be something that is F but not G, and something that is not F. 19. You interpretation must have nothing that is A. 21. Your interpretation must have nothing that is F.


Exercises 8-2: 1. You interpretation must have at least two things in its domain. If one thing Ss a second, the second must S the first. Something must S and be Sed by a, and something must neither S nor be Sed by a. Your domain must have an infinite number of things in it. W must be a transitive relation that everything has to something, and there must be nothing that Ws something that Ws it. (For instance, your domain may be the positive integers and a thing may W another if and only if it is less than that other). Your interpretation must have somethings that are F. If there are any Bs, there must be at least two, and every F must K some, but not all of them. Your interpretation must have at least two things in its domain. There must be at least one thing which is B, and every B must either D everything but not be Ded by everything, or must be Ded by everything but not D everything. Your interpretation must have at least two things in it. There must be at least one C. Every C must D something. Either there must be a C that doesnt D itself or there must be two Cs, one of which does not D the other. Your interpretation must have a thing named a which is neither R nor P nor T and must not V itself. Anything that is R or P or T must V itself. Your interpretation must have at least two things in its domain. Something must be such that no matter what thing you pick, it either Hs or Js it. Nothing may J everything. Something must H nothing.

3.

5.

7.

9.

11.

13.

Exercises 8-3 1. Using confinement the sentences become:


xFx w yGy, Ga & yFy, z~Fz

Your interpretation must have at least two members in its domain. It must make whatever is named "a" G and must make something F and something not F. 3. Using confinement the sentences become: ~Pj w xBx, Bj & xPx,
xBx Pj

Your interpretation must have some-


thing that is P, and the thing named j must be B. Either the thing named j must also be P, or there must be one thing that is not B. 5. Using confinement the sentences become: ~(xyAxy Bc), xBx w Acc, x~Bx w ~Cc [Note: we used Dem on the last sentence before using confinement.] Your interpretation must have something that As something. The thing named c must not be B. Either something must be B and the thing named c not C, or c must A itself and and if anything is B, c must not be C. 7. Using confinement, the sentences become:
x(yRxy Mx), z~Mz & ~Ma, x(yRxy zRzx)

Your interpretation must have nothing that is M. Nothing may R everything. If anything Rs something, then something must R it. Exercises 8-4: 1. 3. 5. 7. Fa Gra Fcc Rccc [Ma w (Faa w Fab)] & [Mb w (Fba w Fbb)] [(Dat Waa) w (Dat Wat)] & [(Dtt Wta) w (Dtt Wtt)]

9. ([(Gaa w Gab) w Gac] & [(Gba w Gbb) w Gbc]) & [(Gca w Gcb) w Gcc] 11. Your interpretation must have something that is not C and something that is not M. 13. You interpretation must have everything Z. Something must H everything. 15. Your interpretation must have at least two things in its domain. There must be something that is both D and L, and something that is L but not D. Everything that is D must be L. 17. Your interpretation must have nothing that H's itself. Something must H nothing. If anything H's something, something must H it. (Note: there needn't be anything that H's anything.) 19. In your interpretation, something must be M and either all the Ms must be C or all the Ms must


be not C. 21. The left sentence entails the right, so your interpretation must make the left sentence false and the right true. Hence something must Q something but not K anything, and everything that Q's everything must K something or other.

Chapter 9: Exercises 9-1 1. 3. a, e c

Exercises 9-2 1. 1. 2. 3. 4. 5. 6. 3. 1. 2. 3. 4. 5. 1. 2. 3. 4. 5. 6. 7. 8. 7.
x(Fx w Gx) x~Gx

SHOW Fb ----------------------| Fb w Gb | | ~Gb | | Fb | ----------------------Fa & Ga SHOW zGz ------------------| Ga | | zGz | ------------------P DD 1, &E 3, ]I

P P DD 1, E 2, E 4, 5, wE

xMx w ~yPy ~(Pq Mg) SHOW xMx --------------------------| Pq & ~Mg | | Pq | | | yPy | ~~yPy | | xMx | ---------------------------

P P DD 2, 4, 5, 6, 1, ~ &E I DN 7, wE

1. xy[(Jx & Ky) (My & Nx)] P 2. Ja & Kb P 3. SHOW ]y(Mb & Ny) DD --------------------------------| 1, E 4. | y[(Ja & Ky) (My & Na)] 5. | (Ja & Kb) (Mb & Na) | 4, E 6. | Mb & Na | 2, 5, E | 6, I 7. | y(Mb & Ny) ---------------------------------1. x(Kx & Mx) yGy 2. xy[Nx (Kx & My)] 3. Np 4. SHOW Gm --------------------------P P P DD

9.


5. 6. 7. 8. 9. 10. | | | | | |
y[Np (Kp & My)] | 2, E Np (Kp & Mp) | 5, E Kp & Mp | 3, 6, E x(Kx & Mx) | 7, I yGy | 1, 8, E Gm | 9, E --------------------------

Exercises 9-3 1. 1. 2. 3. 4. 5. 6. 7. 3. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 5. 1. 2. 3. 4. 5. 6. 7.


x(Ax Bx)

P P DD 2, E 1, E 4, 5, E 6, I P P DD 1, 2, 5, 4, 7, 7, 9, 8, P CD ACD DD 1, E 3, E 5, 6, E
E E E

SHOW zBz --------------------------| Aa | | Aa Ba | | Ba | | zBz | --------------------------y(Ga & Fy) xy[(Gx & Fy) (Mx & Sy)] SHOW Ma & zSz

yAy

--------------------------| Ga &Fb | | y[(Ga & Gy) (Ma & Sy)] | | (Ga & Fb) (Ma & Sb) | | Ma & Sb | | Ma | | Sb | | zSz | | Ma & zSz | --------------------------x(Fx Ga) SHOW xFx Ga

6, E &E &E I 10, &I

--------------------------| | xFx | SHOW Ga | | ----------------------- | | | Fb Ga | | | | Fb | | | | Ga | | | ----------------------- | --------------------------SHOW x(Dx yDy) --------------------------| ~Da | | | ~Da w yDy | Da yDy | | x(Dx yDy) | --------------------------xQx x~Dx

7.

1. 2. 3. 4. 5. 6.

P DD 1, E 3, wI 4, w 5, I P ID

9.

1. 2.

SHOW ~x~Qx


3. 4. 5. 6. 7.

--------------------------| | x~Qx | SHOW ! | | ----------------------- | | | Qa | | | | ~Qa | | | | ! | | | ----------------------- | --------------------------x(Fa & Mx) xy[(Fx & My) wBw] SHOW zBz

AID DD 1, E 3, E 5, 6, !I

11. 1. 2. 3. 4. 5. 6. 7. 8. 9.

P P DD 1, E 2, E 5, E 4, 6, E 7, E 8, I

--------------------------| Fa & Mb | | | y[(Fa & Mb) wBw] | (Fa & Mb) wBw | | wBw | | Bc | | zBz | ---------------------------

Exercise 9-4 1. 1. 2. 3. 4. 5. 6. 7.
x(Ax Bx) x(Bx Cx) SHOW x(Ax Cx)

P P UD DD 1, E 2, E 5, 6, HS

--------------------------| SHOW Aa Ca | | ----------------------- | | | | | Aa Ba | | Ba Ca | | | | Aa Ca | | | ----------------------- | --------------------------x(Fx Gx) y(Py Gy) SHOW x(Px Fx)

3.

1. 2. 3. 4.

P P UD CD ACD DD 1, E 2, E 5, 8, E 7, 9, E

--------------------------| | SHOW Pa Fa | ----------------------- | 5. | | Pa | | | | 6. | | SHOW Fa | | ------------------- | | 7. | | | Fa Ga | | | 8. | | | Pa Ga | | | 9. | | | Ga | | | 10. | | | Fa | | | | | ------------------- | | | ----------------------- | --------------------------5. 1. 2. 3. 4. 5. 6. 7. 8.


x(~Px w Rx) x(~Dx ~Rx) SHOW x(Px Dx)

P P UD DD 1, E 5, w 2, E 7, Ctr

| | | | | |

--------------------------| SHOW Pa Da ----------------------- | | ~Pa w Ra | | | Pa Ra | | | | | ~Da ~Ra | Ra Da | |


9. | | Pa Da | | | ----------------------- | --------------------------6, 8, HS

7. 1. xFx Ga 2. SHOW x(Fx Ga) UD --------------------------------| SHOW Fb Ga | | ------------------------------ | | | Fb | | | | SHOW Ga | | | | ------------------------- | | | | | | | | xFx | | | Ga | | | | | -------------------------- | | | ------------------------------ | ---------------------------------xy(Fx Gy) x~Gx -SHOW x~Gx

3. 4. 5. 6. 7.

CD ACD DD 4, I 1, 6, E

9.

1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14.

P P UD ID AID DD 2, E 1, E 8, E 5, 9, E 1, E 11, E 10, 12, E 7, 13, !I

------------------------| | SHOW ~Ga | --------------------- | | | Ga | | | | SHOW ! | | | | ----------------- | | | | | ~Gb | | | | | | y(Fc Gy) | | | | | | Fc Ga | | | | | | Fc | | | | | | y(Fc Gy) | | | | | | Fc Gb | | | | | | Gb | | | | | | ! | | | | | ----------------- | | | --------------------- | -------------------------

P 11. 1. xWx xRx 2. x~Rx xWx P UD 3. SHOW x(Wx w Rx) --------------------------4. | SHOW Wa w Ra | ID | ----------------------5. | | ~(Wa w Ra) | 6. | | SHOW ! | | | ------------------- | 7. | | | ~Wa & ~Ra | | 8. | | | ~Ra | | 9. | | | x~Rx | | 10. | | | xWx | | 11. | | | xRx | | 12. | | | Ra | | 13. | | | ! | | | | ------------------- | | | | | | | | | | | | | AID DD 5, DeM 7, &E 8, I 2, 9, E 1, 10, E 11, E 8, 12, !I


----------------------- | ---------------------------

Exercises 9-5 1. 4. WRONG Existential Exploitation can't be used on line 1 because the existential quantifier is not the main logical operator of line 1. The arrow is. (If you could use Existential Exploitation, you would have to use a new name, not a.) 5. WRONG Universal Exploitation can't be used on line 4 because the Universal quantifier is not the main logical operator of line 4. The arrow is. P 1. xFx xFx 2. Fa P DD 3. SHOW Fb ------------------4. | xFx | 2, I 5. | xFx | 1, 4, E 6. | Fb | 5, E -------------------

3. 4. WRONG You cannot use Universal Exploitation on line 1 because the universal quantifier is not the maim logical operator of line 1; the arrow is. 5. WRONG You cannot use Universal Exploitation on line 2 because the universal quantifier is not the main logical operator of line 2; the existential quantifier is. 6 WRONG Existential Exploitation requires a new name; "a" already occurs in the derivation. 9 WRONG You cannot use existential exploitation on line 8 because the existential quantifier is not the main logical operator of line 8; the tilde is. 13 WRONG If Existential Introduction is used on line 12, the "y" must be the main operator on line 13. P 1. xFx ~yGy 2. yx(Hy & Fx) P DD 3. SHOW xy(Hx & ~Gy) ---------------------4. | x(Ha & Fx) | 2, E 5. | SHOW xFx | UD | ------------------ | | | DD 6. | | SHOW Fb | | -------------- | | 7. | | | Ha & Fb | | | 4, E 8. | | | Fb | | | 7, &E | | -------------- | | | ------------------ |


9. | ~yGy | 10. | SHOW ~Gc | | -----------------11. | | Gc | | 12. | | SHOW ! | | | | --------------- | | 13. | | | yGy || | 14. | | | ! || | | | --------------- | | | -----------------15. | Ha & Fd | 16. | Ha | 17. | Ha & ~Gc | 18. | y(Ha & ~Gy) | | 19. | xy(Hx & ~Gy) ---------------------5. 1, 5, E ID AID DD 11, I 9, 13, !I 4, E 15, &E 10, 16, &I 17, I 18, I

3. WRONG You cannot use UD here because the instance on the next line uses a name that is not new to the derivation (it occurs in both lines 1 and 2). 6. WRONG The "&" is not the main operator of line 2, so you cannot use ampersand exploitation on line 2.
x[(Fx & Ga) Hx] x(Fx & Ga) SHOW xHx

1. 1. 2. 3. 4. 5. 6. 7.

P P UD DD 2, E 1, E 5, 6, E

------------------------| | SHOW Hb | --------------------- | | | Fb & Ga | | | | (Fb & Ga) Hb | | | | Hb | | | --------------------- | -------------------------

Exercises 9-6

1.
 1. ∀xAx                    P
 2. Aa → ∃y(Cy & Dy)        P
 3. ∃yAy → ∀z(Hz → Az)      P
 4. Ha                      P
 5. SHOW ∃yDy               DD
 6. |  Ab                   1, E
 7. |  ∃yAy                 6, I
 8. |  ∀z(Hz → Az)          3, 7, E
 9. |  Ha → Aa              8, E
10. |  Aa                   4, 9, E
11. |  ∃y(Cy & Dy)          2, 10, E
12. |  Cc & Dc              11, E
13. |  Dc                   12, &E
14. |  ∃yDy                 13, I


3.

1. 2. 3. 4.

xy(Fx ~Gy) SHOW ~x(Fx & Gx)

P ID

--------------------| x(Fx & Gx) | AID | SHOW ! | DD | ----------------- | 5. | | Fa & Ga | | 3, E 6. | | Fa | | 5, &E 7. | | y(Fa ~Gy) | | 1, E 8. | | Fa ~Ga | | 7, E 9. | | ~Ga | | 6, 8, E 10. | | Ga | | 5, &E 11. | | ! | | 9, 10, !I | ----------------- | --------------------xy(Dx Fy) P SHOW xFx yDy CD ------------------------| xFx | ACD | SHOW yDy | UD | --------------------- | | | DD | | SHOW Da | | ----------------- | | | | | y(Da Fy) | | | 1, E | | | Da Fb | | | 6, E | | | Fb | | | 3, E | | | Da | | | 7, 8, E | | ----------------- | | | --------------------- | ------------------------xy(Rx Wy) x(Rx w Wx) SHOW zRz

5.

1. 2. 3. 4. 5. 6. 7. 8. 9.

7.

1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19.

P P UD DD 2, E CD ACD DD 1, E 9, E 7, 10 E CD ACD DD 1, E 15, E 13, 16, E 6, 17, E 5, 6, 12, SC

---------------------------| SHOW Ra | | ------------------------ | | | Rb w Wb | | | | SHOW Wb Ra | | | | -------------------- | | | | | Wb | | | | | | | | | SHOW Ra | | | ---------------- | | | | | | | | | | | y(Ra Wy) | | | | Ra Wb | | | | | | | | Ra | | | | | | | ---------------- | | | | | -------------------- | | | | | | SHOW Rb Ra | | -------------------- | | | | | Rb | | | | | | SHOW Ra | | | | | | ---------------- | | | | | | | y(Rb Wy) | | | | | | | | Rb Wb | | | | | | | | Wb | | | | | | | | Ra | | | | | | | ---------------- | | | | | -------------------- | | | | Ra | | | ------------------------ | ----------------------------


9.

1. 2. 3. 4. 5. 6. 7. 8. 9.

10. 11. 12. 13. 14. 15. 16. 17.

18.

SHOW ~xLx x~Lx -------------------------------| SHOW ~xLx x~Lx | | ---------------------------- | | | ~ xLx | | | | SHOW x~Lx | | | | ------------------------ | | | | | SHOW ~La | | | | | | ------------------| | | | | | | La | | | | | | | | SHOW ! | | | | | | | | ---------------- | | | | | | | | | xLx | | | | | | | | | | ! | | | | | | | | | ---------------- | | | | | | | -------------------- | | | | | ------------------------ | | | ---------------------------- | | SHOW x~Lx ~xLx | | ---------------------------- | | | x~Lx | | | | SHOW ~xLx | | | | ------------------------ | | | | | | | | xLx | | | SHOW ! | | | | | | -------------------- | | | | | | | Lb | | | | | | | | ~Lb | | | | | | | | ! | | | | | | | -------------------- | | | | | | ----------------------- | | | | -------------------------- | | ---------------------------- | | ~xLx x~Lx | --------------------------------

DD CD ACD UD ID AID DD 6, I 3, 8, !I

CD ACD ID AID DD 13, E 11, E 15,16, !I

2,10, I

Exercises 9-7 1. 1. 2. 3. 4. 5. 6. 7. 8. 9. 3. 1. 2. 3. 4. 5. 6.
xy[Fxy (Gx w Hy)] Frs & ~Hs SHOW Gr --------------------------|Frs | | y[Fry (Gr w Hy)] | | | Frs (Gr w Hs) | Gr w Hs | | ~Hs | | Gr | --------------------------xy(Pxy w Pyx) SHOW xyPxy

P P DD 2, &E 1, E 5, E 4, 6 E 2, &E 7, 8, wE P DD 1, I 3, I CD ACD

| | | | |

---------------------------------y(Pay w Pya) | Pab w Pba | SHOW Pab xyPxy | ------------------------------ | | Pab | |


7. 8. 9. 10. 11. 12. 13. 14. 15. 5. 1. 2. 3. 4. 5. 6. 7.

| | SHOW xyPxy | | | | ------------------------- | | | | | yPay | | | | | | | | | xyPxy | | ------------------------- | | | ------------------------------ | | | SHOW Pba xyPxy | ------------------------------ | | | Pba | | | | SHOW xyPxy | | | | ------------------------- | | | | | yPby | | | | | | xyPxy | | | | | ------------------------- | | | ------------------------------ | | | xyPxy -----------------------------------

DD 6, I 8, I CD ACD DD 11, I 13, I 4, 5, 10, SC

CD SHOW xyFxy yxFxy --------------------------------| xyFxy | ACD | SHOW yxFxy | UD | ----------------------------- | | | SHOW xFxa | | DD | | ------------------------- | | | | | yFby | | | 2, E | | | Fba | | | 5, E | | | xFxa | | | 6, I | | ------------------------- | | | ----------------------------- | ---------------------------------

7.

1. xy(Wxy Ixx) P 2. xyWxy P 3. SHOW xIxx UD ------------------------------4. | SHOW Iaa | DD | --------------------------- | 5. | | yWay | | 2, E 6. | | Wab | | 5, E 7. | | y(Way Iaa) | | 1, E 8. | | Wab Iaa | | 7, E 9. | | Iaa | | 6, 8, E | --------------------------- | ------------------------------1. 2. 3. 4.
x(Lxa w Lxb) x(yLxy Hx) SHOW xHx

9.

P P UD DD 1, E 2, E CD ACD DD 8, I

----------------------------------| SHOW Hc | | ------------------------------- | | | 5. | | Lca w Lcb | | 6. | | yLcy Hc 7. | | SHOW Lca yLcy | | | | --------------------------- | | 8. | | | Lca | | | | | | 9. | | | SHOW yLcy | | | ----------------------- | | | 10. | | | | yLcy | | | | | | | ----------------------- | | | | | --------------------------- | |

11. | | 12. | 13. | | 14. | | | 15. | 16. | | SHOW Lcb yLcy | | --------------------------- | | | Lcb | | | | SHOW yLcy | | | | ----------------------- | | | | | yLcy | | | | | ----------------------- | | | --------------------------- | | yLcy | | Hc | | ------------------------------- | ----------------------------------| | | | | | | | | | P UD CD ACD DD 12, I 5, 7, 11, SC 6, 15, E

11. 1. 2. 3. 4. 5. 6. 7. 8 9.

xy(Bxy w Byx) SHOW xBxx

-------------------------| SHOW Baa | ID | ---------------------- | | | ~Baa | | AID | | SHOW ! | | DD | | ------------------ | | | | | | | | y(Bay w Bya) | | | Baa w Baa | | | | | | Baa | | | | | | ! | | | | | ------------------ | | | ---------------------- | -------------------------xyz[(Cxy & Cyz) Cxz] xy(Cxy Cyx) SHOWx(~Cxx y~Cxy)

1, 6, 4, 4,

E E

7, wE 8, !I

13.1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18.

P P UD CD ACD UD ID AID DD 2, E 10, E 8, 11, E 8, 12, &I 1, E 14, E 15, E 13, 16, E 5, 17, !I

-------------------------------------------| SHOW ~Caa y~Cay | | ---------------------------------------- | | | ~Caa | | | | SHOW y~Cay | | | | ------------------------------------ | | | | | | | | SHOW ~Cab | | | -------------------------------- | | | | | | | Cab | | | | | | | | | | | | SHOW ! | | | | ---------------------------- | | | | | | | | | y(Cay Cya) | | | | | | | | | | Cab Cba | | | | | | | | | | Cba | | | | | | | | | | Cab & Cba | | | | | | | | | | yz[(Cay & Cyz) Caz] | | | | | | | | | | | | | | | z[(Cab & Cbz) Caz] | | | | | (Cab & Cba) Caa | | | | | | | | | | Caa | | | | | | | | | | ! | | | | | | | | | ---------------------------- | | | | | | | | ---------------------------- | | | | | | | -------------------------------- | | | | | ------------------------------------ | | | ---------------------------------------- |


-------------------------------------------15. 1. 2. 3. 4. 5. 6. 7. 8. SHOW xy(Rxy Rxx) --------------------------------| SHOW y(Ray Raa) | | ----------------------------- | | | | | SHOW Raa Raa | | ------------------------- | | | | | Raa | | | | | | SHOW Raa | | | | | | --------------------- | | | | | | | Raa | | | | | | | --------------------- | | | | | ------------------------- | | | | Raa Raa | | | | | | y(Ray Raa) | ----------------------------- | --------------------------------UD DD CD ACD DD 4, R 3, I 7, I

Chapter 10 Exercises 10-1 1. ~x(Ax & ~Bx) P SHOW x(Ax Bx) UD --------------------------| CD 3. | SHOW Aa Ba | ----------------------- | 4. | | Aa | | ACD 5. | | SHOW Ba | | DD | | ------------------- | | 6. | | | x~(Ax & ~Bx) | | | 1, QN 7. | | | ~(Aa & ~Ba) | | | 6, E 8. | | | ~~(Aa Ba) | | | 7, ~ 9. | | | Aa Ba | | | 8, DN 10. | | | Ba | | | 4, 9, E | | ------------------- | | | ----------------------- | --------------------------1. 2. 1. 2. 3. 4. 5. 7. 8. 9.
x(Ex Fx) ~xFx SHOW x~Ex --------------------------| SHOW ~Em | | ----------------------- | | | Em Fm | | | | | | x~Fx | | ~Fm | | | | ~Em | | | ----------------------- | ---------------------------

3.

P P UD DD 1, E 2, QN 7, E 5, 8, E

5.

1. 2. 3. 4. 5. 6. 7. 8. 9.

~x(Rx & ~Px) ~x(Jx & Px) SHOW ~x(Jx & Rx) ------------------------| x(Jx & Rx) | | | SHOW ! | --------------------- | | | Ja & Ra | | | | | | x~(Rx & ~Px) | | x~~(Rx Px) | | | | x(Rx Px) | |

P P ID AID DD 4, 1, 7, 8,
E QN ~ DN

10. 11. 12. 13. 14. 15. 16. 17. 18. 19. | | | | | | | | | | | | Ra Pa | | Ra | | Pa | | x~(Jx & Px) | | ~(Ja & Pa) | | | ~Ja w ~Pa | Ja | | ~~Ja | | ~Pa | | ! | --------------------| 9, E | 6, &E | 10,11, E | 2, QN | 13, E | 14, DeM | 6, &E | 16, DN | 15, 17, vE | 12, 18, !I |

------------------------7. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14.


x(Cx w Nx) ~x(Kx & Nx) SHOW x(Kx Cx)

P P UD

-------------------| SHOW Ka Ca | CD | ---------------------- | | | Ka || ACD | | SHOW Ca || DD | | ------------------- || ||| 2, QN | | | x~(Kx & Nx) | | | ~(Ka & Na) ||| 7, E | | | ~(Ka & ~~Na) ||| 8, DN | | | ~~(Ka ~Na) ||| 9, ~ | | | Ka ~Na ||| 10, DN | | | ~Na ||| 5, 11, E | | | Ca w Na ||| 1, E | | | Ca ||| 12, 13, wE | | ------------------- || | ---------------------- | ------------------------~xyHxy SHOW xy~Hxy --------------------------| | x~yHxy | xy~Hxy | --------------------------P DD 1, QN 3, QN P P DD 1, QN 4, E 2, E 5, I 7, QN 6, 8, E 9, I

9.

1. 2. 3. 4.

11. 1. 2. 3.

~xKmx x(Lx yKyx) SHOW x~Lx --------------------------| 4. | x~Kmx 5. | ~Kmj | | 6. | Lj yKyj 7. | y~Kyj | | 8. | ~yKyj 9. | ~Lj | | 10. | x~Lx ---------------------------

13. 1. 2. 3. 4.

~x~yOxy P xy(Oxy Pyx) P UD SHOW xyPxy --------------------------| SHOW yPoy | UD | ----------------------- |


| | SHOW Pot | | UD | | ------------------- | | 6. | | | x~~yOxy | | | 1, QN | | | 6, DN 7. | | | xyOxy 8. | | | yOty | | | 7, E 9. | | | Oto | | | 8, E | | | 2, E 10. | | | y(Oty Pyt) 11. | | | Oto Pot | | | 10, E 11. | | | Pot | | | 9, 11, E | | ------------------- | | | ----------------------- | --------------------------15. 1. 2. 3. 4. 1. 2. 3. 4. 17. 1. 2. 3. 4. 5. 6. 7. 8. 19. 1. 2. 3. ~x~Ax SHOW xAx --------------------------| x~~Ax | | xAx | --------------------------SHOW ~x~Ax --------------------------| ~~xAx | | ~x~Ax | --------------------------~x(Fx Gx) ~x(Fx & ~Gx) SHOW ! --------------------------| x~(Fx Gx) | | ~(Fa Ga) | | Fa & ~Ga | | x(Fx & ~Gx) | | ! | --------------------------xy(My Pxy) x(Mx & ~yPyx) xAx

5.

P DD 1, QN 3, DN P DD 1, DN 3, QN P P DD 1, QN 4, E 5, ~ 6, I 2, 7, !I P P DD 1, E 2, E 5, &E 5, &E 4, E 6, 8, E 7, QN 10. E 9, 11, !I

SHOW ! --------------------------| 4. | y(My Pay) | 5. | Mb & ~ yPyb 6. | Mb | 7. | ~yPyb | 8. | Mb Pab | 9. | Pab | | 10. | y~Pyb 11. | ~Pab | 12. | ! | ---------------------------
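Most of the answers in this set appeal to QN at some point. Purely as an informal check (nothing in the text asks for this), the following Python sketch verifies by brute force that ~∃xFx and ∀x~Fx, and likewise ~∀xFx and ∃x~Fx, agree on every way of interpreting F over a two-element domain; encoding an interpretation as a Python set is my own choice.

from itertools import chain, combinations

D = [1, 2]  # a two-element domain

def powerset(xs):
    # Every subset of xs, i.e. every candidate extension for the predicate F.
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

for ext in map(set, powerset(D)):
    F = lambda x: x in ext                  # F is true of exactly the members of ext
    not_exists = not any(F(x) for x in D)   # ~ExFx
    all_not    = all(not F(x) for x in D)   # Ax~Fx
    not_all    = not all(F(x) for x in D)   # ~AxFx
    exists_not = any(not F(x) for x in D)   # Ex~Fx
    assert not_exists == all_not and not_all == exists_not

print("QN checks out on every interpretation over the domain {1, 2}")

Of course this is no substitute for the derivations: QN holds on every domain, not just this one.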

Exercises 10-2 1. 1. xZx Wa P 2. SHOW x(Zx Wa) ID -----------------------| AID 3. | ~x(Zx Wa) 4. | SHOW ! | DD | -------------------- | | | 3, QN 5. | | x~(Zx Wa) 6. | | x(Zx & ~Wa) | | 5, ~ 7. | | Za & ~Wa | | 6, E 8. | | ~Wa | | 7, &E

9. 10. 11. 12. 13. 14. | | ~xZx | | | | x~Zx | | | | ~Zb | | | | Zb & ~Wa | | | | Zb | | | | ! | | | -------------------- | -----------------------1, 8, MT 9, QN 10, E 6, E 12, &E 11, 13, !I

3.

SHOW x(xMx Mx) ID --------------------------------2. | ~x(xMx Mx) | AID 3. | SHOW ! | DD | ----------------------------- | | | 2, QN 4. | | x~(xMx Mx) 5. | | ~( xMx Ma) | | 4, E | | 5, ~ 6. | | xMx & ~Ma 7. | | xMx | | 6, &E 8. | | Mb | | 7, E 9. | | ~( xMx Mb) | | 4, E 10. | | xMx & ~Mb | | 9, ~ 11. | | ~Mb | | 10, &E 12. | | ! | | 8, 11, !I | ----------------------------- | --------------------------------1. 1. 2. 3. 4. 5. 6. SHOW xy(Txy zTzy) --------------------------| | SHOW y(Tay zTzy | ----------------------- | | | SHOW Tab zTzb | | | | ------------------- | | | | | Tab | | | | | | SHOW xTzb | | | | | | --------------- | | | | | | | zTzb | | | | | | | --------------- | | | | | ------------------- | | | ----------------------- | --------------------------x[Wx & ~y(Ax & Bxy))] SHOW x[Ax y(Wy & ~Byx)]

5.

UD UD CD ACD DD 4, I

7.

1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14.

P UD CD ACD DD 1, E 6, &E 6, &E 8, QN 9, E 4, DN 10, 11, vE 7, 12, &I 13, I

--------------------------| SHOW Aa y(Wy & ~Bya) | | ----------------------- | | | Aa | | | | SHOW y(Wy & ~Bya) | | | | ------------------- | | | | | Wb & ~ y(Ay & Bby)| | | | | | Wb | | | | | | ~ y(Ay & Bby) | | | | | | y~(Ay & Bby) | | | | | | ~Aa v ~Bba | | | | | | ~~Aa | | | | | | ~Bba | | | | | | Wb & ~Bba | | | | | | | | | y(Wy & ~Bya) | | ------------------- | | | ----------------------- | ---------------------------


9.

1. 2. 3.

x(Cx v Dx) SHOW xCx v xDx

P DD CD ACD DD 1, 4, 7, 6, 9, EI QN E 8, vE I

--------------------------| SHOW ~xCx xDx | | ----------------------- | | | 4. | | ~ xCx 5. | | SHOW xDx | | | | ------------------- | | 6. | | | Ca v Da | | | 7. | | | x~Cx | | | 8. | | | ~Ca | | | 9. | | | Da | | | 10. | | | xDx | | | | | ------------------- | | | ----------------------- | 11. | xCx v xDx | --------------------------1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16.
xCx v xDx SHOW x(Cx v Dx)

3, v P ID AID DD 3, QN 5, Dem UD DD 6, E 9, &E 7, QN 1, 11, vE 12, E 6, E 14, &E 13, 15, !I

--------------------------| |~x(Cx v Cx) | SHOW ! | | ----------------------- | | | x~(Cx v Dx) | | | | x(~Cx & ~Dx) | | | | SHOW x~Cx | | | | ------------------- | | | | | SHOW ~Ca | | | | | | --------------- | | | | | | | ~Ca & ~Da | | | | | | | | ~Ca | | | | | | | --------------- | | | | | ------------------- | | | | ~ xCx | | | | xDx | | | | Db | | | | ~Cb & ~Db | | | | ~Db | | | | ! | | | ----------------------- | --------------------------x(Fa Gx) SHOW Fa xGx

11. 1. 2. 3. 4. 5. 6. 7.

P CD ACD UD DD 1, E 3, 6, E

--------------------------| Fa | | | SHOW xGx | ----------------------- | | | SHOW Gb | | | | ------------------- | | | | | Fa Gb | | | | | | Gb | | | | | ------------------- | | | ----------------------- | ---------------------------

1. 2. 3. 4. 5.

P Fa xGx SHOW x(Fa Gx) UD --------------------------| CD | SHOW Fa Gb | ----------------------- | | | Fa | | ACD | | SHOW Gb | | DD

| | ------------------- | | | | | xGx | | | | | | Gb | | | | | ------------------- | | | ----------------------- | --------------------------xy(Ix Iy) SHOW ~xIx v xIx --------------------------| SHOW ~~xIx xIx | | ----------------------- | | | ~~ xIx | | | | | | SHOW xIx | | ------------------- | | | | | SHOW Ia | | | | | | --------------- | | | | | | | y(Ib Iy) | | | | | | | | xIx | | | | | | | | Ic | | | | | | | | Ib Ic | | | | | | | | Ib | | | | | | | | Ib Ia | | | | | | | | Ia | | | | | | | --------------- | | | | | ------------------- | | | ----------------------- | | ~xIx v xIx | ---------------------------

6. 7.

1, 4, E 6, E

13. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13.

P DD CD ACD UD DD 1, E 4, DN 8, E 7, E 9, 10, E 7, E 11, 12, E

14. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 20.

3, w P DD DD DD CD ACD DD 6, E 8, DN 1, vE 10. E CD ACD DD 13, E 15, DN 1, vE 17. E

~xIx v xIx SHOW xy(Ix Iy) --------------------------| | SHOW y(Ia Iy) | ----------------------- | | | SHOW Ia Ib | | | | ------------------- | | | | | | | | SHOW Ia Ib | | | --------------- | | | | | | | Ia | | | | | | | | | | | | SHOW Ib | | | | ----------- | | | | | | | | | xIx | | | | | | | | | | | | | | | ~~ xIx | | | | | xIx | | | | | | | | | | Ib | | | | | | | | | ----------- | | | | | | | --------------- | | | | | | SHOW Ib Ia | | | | | | --------------- | | | | | | | Ib | | | | | | | | SHOW Ia | | | | | | | | ----------- | | | | | | | | | xIx | | | | | | | | | | ~~ xIx | | | | | | | | | | xIx | | | | | | | | | | Ib | | | | | | | | | ----------- | | | | | | | --------------- | | | | | | Ia Ib | | | | | ------------------- | | | ----------------------- |


21. | xy(Ix Iy) | --------------------------15. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14.

3, I P DD

x(Sx La) SHOW (xSx La) & (La xSx)

15. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16.

------------------------------------| CD | SHOW xSx La | --------------------------------- | | | xSx | | ACD | | SHOW La | | DD | | ----------------------------- | | | | | Sb | | | 4, E | | | Sb La | | | 1, E | | | La | | | 6, 7, E | | ----------------------------- | | | --------------------------------- | | SHOW La xSx | CD | --------------------------------- | | | La | | ACD | | SHOW xSx | | UD | | ----------------------------- | | | | | SHOW Sc | | | DD | | | ------------------------- | | | | | | | Sc La | | | | 1, E | | | | Sc | | | | 10, 13, E | | | ------------------------- | | | | | ----------------------------- | | | --------------------------------- | | 3, 9, &I | (xSx La) & (La xSx) ------------------------------------(xSx La) & (La xSx) DD SHOW x(Sx La) UD ------------------------------------| SHOW Sb La | DD | --------------------------------- | | | SHOW Sb La | | CD | | ----------------------------- | | | | | Sb | | | ACD | | | SHOW La | | | DD | | | ------------------------- | | | | | | | 1, &E | | | | xSx La | | | | xSx | | | | 6, I | | | | La | | | | 7, 8, E | | | ------------------------- | | | | | ----------------------------- | | | | SHOW La Sb | | CD | | ----------------------------- | | | | | La | | | ACD | | | DD | | | SHOW Sb | | | ------------------------- | | | | | | | La xSx | | | | 1, &E | | | | 11, 13, E | | | | xSx | | | | Sb | | | | 14, E | | | ------------------------- | | | | | ----------------------------- | | | | Sb La | | 4, 10, I | --------------------------------- | ------------------------------------xyFxy x~yFyx

17. 1. 2. 3. 4.

P P DD 1, E

SHOW ! -----------------------| | xFay

5. 6. 7. 8. 9. | ~yFyb | | Fab | | y~Fyb | | ~Fab | | ! | -----------------------2, 4, 5, 7, 6,
E E

QN E 8, !I

19.

1. xyRxy P 2. ~x(Fx yRyx) P DD 3. SHOW ! ---------------------------------4. | x~(Fx yRyx) | 2, QN | 4, E 5. | ~(Fa yRya) | 5, ~ 6. | Fa & ~ yRya 7. | ~yRya | 6, &E | 7, QN 8. | y~Rya 9. | yRby | 1, E 10. | Rba | 9, E 11. | ~Rba | 8, E 12. | ! | 10, 11, !I ---------------------------------xyz[(Rxy & Ryz) Rxxz] xy(Rxy v Ryx) xy[Pxy (Rxy & ~Ryx)] SHOW xRxx

21. 1. 2. 3. 4. 5.

P P P UD ID AID DD 2, 8, 6, 6,
E E

----------------------------------| | SHOW Raa | ------------------------------- | 6. | | ~Raa | | 7. | | SHOW ! | | | | --------------------------- | | | | | 8. | | | y(Ray v Rya) 9. | | | Raa v Raa | | | 10. | | | Raa | | | 11. | | | ! | | | | | --------------------------- | | | ------------------------------- | -----------------------------------

9, vE 10, !I

23. Note: the direct derivation that follows is perhaps not quite as short as the indirect derivation, but it is instructive. In line 7 we state a lemma, which is proved in lines 8-26. The point of the lemma is to get both lines 30 and 34 without having to duplicate the maneuvers it contains.
1. 2. 3. 4. 5. 6. 7. 8. 9.
xyz[(Rxy & Ryz) Rxz] xy(Rxy v Ryx) xy[Pxy (Rxy & ~Ryx)] SHOW xy[(Pxy v Pyx) v (Rxy & Ryx)]

P P P UD UD DD UD UD CD

-------------------------------------------------------------| SHOW y[(Pay v Pya) v (Ray & Rya)] | | ---------------------------------------------------------- | | | (Pab v Pba) v (Rab & Rba) | | | | SHOW xy(Rxy [(Pxy v Pyx) v (Rxy & Ryx)]) | | | | ------------------------------------------------------ | | | | | SHOW y(Rcy [(Pcy v Pyc) v (Rcy & Ryc)] | | | | | | -------------------------------------------------- | | | | | | | SHOW (Rcd [(Pcd v Pdc) v (Rcd & Rdc)]) | | | | | | | | ---------------------------------------------- | | | |


10. | | | | | Rcd | | | | | ACD 11. | | | | | SHOW (Pcd v Pdc) v (Rcd & Rdc) | | | | | DD | | | | | ------------------------------------------ | | | | | 12. | | | | | | SHOW Rdc [(Pcd v Pdc) v (Rcd & Rdc)] | | | | | | CD | | | | | | -------------------------------------- | | | | | | 13. | | | | | | | Rdc | | | | | | | ACD | | | | | | | DD 14. | | | | | | | SHOW [(Pcd v Pdc) v (Rcd & Rdc)] | | | | | | | ---------------------------------- | | | | | | | 15. | | | | | | | | Rcd & Rdc | | | | | | | | 10,13,&I 16. | | | | | | | | (Pcd v Pdc) v (Rcd & Rdc) | | | | | | | | 15, vI | | | | | | | ---------------------------------- | | | | | | | | | | | | | -------------------------------------- | | | | | | 17. | | | | | | SHOW ~Rdc [(Pcd v Pdc) v (Rcd & Rdc)] | | | | | | CD | | | | | | -------------------------------------- | | | | | | 18. | | | | | | | ~Rdc | | | | | | | ACD 19. | | | | | | | SHOW [(Pcd v Pdc) v (Rcd & Rdc)] | | | | | | | DD | | | | | | | ---------------------------------- | | | | | | | 20. | | | | | | | | Rcd & ~Rdc | | | | | | | | 10,18,&I 21. | | | | | | | | y[Pcy (Rcy & ~Ryc)] | | | | | | | | 3,E 22. | | | | | | | | Pcd (Rcd & ~Rdc) | | | | | | | | 21. E 23. | | | | | | | | Pcd | | | | | | | |20,23, E 24. | | | | | | | | Pcd v Pdc | | | | | | | | 24, vI 25. | | | | | | | | [(Pcd v Pdc) v (Rcd & Rdc)] | | | | | | | | 25, vI | | | | | | | ---------------------------------- | | | | | | | | | | | | | -------------------------------------- | | | | | | 26. | | | | | | [(Pcd v Pdc) v (Rcd & Rdc)] | | | | | | 12,17,SC2 | | | | | ------------------------------------------ | | | | | | | | | ---------------------------------------------- | | | | | | | -------------------------------------------------- | | | | | ------------------------------------------------------ | | 27. | | y(Ray v Rya) | | 2, E 28. | | Rab v Rby | | 27, E 29. | | y(Ray [(Pay v Pya) v (Ray & Rya)] | | 7, E 30. | | Rab [(Pab v Pba) v (Rab & Rba)] | | 29, E 31. | | y(Rby [(Pby v Pyb) v (Rby & Ryb)] | | 7, E 32. | | Rba [(Pba v Pab) v (Rba & Rab)] | | 31, E 33. | | Rba [(Pab v Pba) v (Rba & Rab)] | | 32, Com 34. | | Rba [(Pab v Pba) v (Rab & Rba)] | | 33, Com 35. | | [(Pab v Pba) v (Rab & Rba)] | | |28,30,34,SC | ---------------------------------------------------------- | --------------------------------------------------------------

25.

D: People P: is a philosophy major. S: is in the same physics section as a: Alonzo g: Gertrude Translation: ~[Pa v y(Sya & Py)] Pa v Pg Therefore, ~Sga ~[Pa v y(Sya & Py)] Pa v Pg SHOW ~Sga ----------------------------------4. | Sga | 5. | SHOW ! | | ------------------------------- | 6. | | ~Pa & ~ y(Sya & Py) | | 7. | | ~Pa | | 8. | | Pg | | 9. | | Sga & Pg | | 10. | | y(Sya & Py) | | | | 11. | | ~ y(Sya & Py) 12. | | ! | | | ------------------------------- | 1. 2. 3. P P ID AID DD 1, Dem 6, &E 2, 7, vE 4, 8, &I 9, I 6, &E 10, 11, !I

-----------------------------------

27.

D: Persons and wines W : is a wine R : is red N : is from New York State P : is a person. L a : Alonzo j : Jeffrey (Pj & x[(Wx & Rx) & Ljx]) & ~ y[(Wy & Ry) & (Ny & Ljy)] x(~y[(Wy & Ny) & Lxy] ~Lax) Therefore, Laj y[Wy & Ny) & ~Ry]

1. 2. 3.

(Pj & x[(Wx & Rx) & Ljx]) & ~y[(Wy & Ry) & (Ny & Ljy)] x[(Px &b~y[(Wy & Ny) & Lxy]) ~Lax] SHOW Laj y[(Wy & Ny) & ~Ry]

P P CD

4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 30. 31. 32.

----------------------------------------------| Laj | | SHOW y[(Wy & Ny) & ~Ry] | | ------------------------------------------- | | | ~y[(Wy & Ny) & ~Ry] | | | | SHOW ! | | | | --------------------------------------- | | | | | Pj & x[(Wx & Rx) & Ljx] | | | | | | Pj | | | | | | | | | (Pj & ~ y[(Wy & Ny) & Ljy]) ~Laj | | | ~~Laj | | | | | | ~(Pj & ~ y[(Wy & Ny) & Ljy]) | | | | | | ~Pj v ~~ y[(Wy & Ny) & Ljy]) | | | | | | ~~Pj | | | | | | | | | ~~y[(Wy & Ny) & Ljy]) | | | y[(Wy & Ny) & Ljy]) | | | | | | (Wa & Na) & Lja | | | | | | y~[(Wy & Ny) & ~Ry] | | | | | | ~[(Wa & Na) & ~Ra] | | | | | | ~(Wa & Na) v ~~Ra | | | | | | Wa & Na | | | | | | ~~(Wa & Na) | | | | | | ~~Ra | | | | | | Ra | | | | | | Wa & (Na & Lja) | | | | | | Wa | | | | | | Na & Lja | | | | | | Wa & Ra | | | | | | (Wa & Ra) & (Na & Lja) | | | | | | | | | y[(Wy & Ry) & (Ny & Ljy)] | | | ~y[(Wy & Ry) & (Ny & Ljy)] | | | | | | ! | | | | | --------------------------------------- | | | ------------------------------------------- | -----------------------------------------------

ACD ID AID DD 1, &E 8, &E 2, E 4, DN 10, 11, MT 12, Dem 9, 13, vE 13, 14, vE 15, DN 16, E 6, QN 18, E 19, Dem 17, &E 21, DN 20, 22, vE 23, DN 17, Assoc 25, &E 25, &E 24, 26, &I 27, 28, &I 29, I 1, &E 30, 31, !I

Exercises 10-3
(Since the derivations are straightforward, I have omitted them; a short sketch for checking one of the interpretations below by brute force follows these solutions.)

1.  D: U.S. Cities
    F: ___ is farther from ___ than from ___
    d: Washington, DC
    n: New York City
    p: Philadelphia, PA
    s: San Francisco, CA

    Fndp
    Fnsd
    Therefore, Fnsp

    D: {1, 2}
    F: <_, _, _> is in {<1, 1, 2>, <1, 2, 1>}
    d: 1    n: 1    p: 2    s: 2

    Add premise: ∀x∀y∀z∀w[(Fxyz & Fxzw) → Fxyw]

3.  D: People
    M: ___ is married to ___
    P: ___ is a physician
    V: ___ goes on vacation with ___
    m: me

    ∀x(Vmx → Mxm)
    ∃x(Px & Vmx)
    Therefore, ∃x(Px & Mmx)

    D: {1, 2}
    M: <_, _> is in {<1, 2>}
    P: _ is in {1}
    V: <_, _> is in {<2, 1>}
    m: 2

    Add premise: ∀x∀y(Mxy → Myx)

5.  D: People
    C: ___ commits crimes
    D: ___ is liable to disbarment
    E: ___ has embezzled money from a client
    L: ___ is a lawyer
    h: Hugh Louis Dewey

    ∀x[(Lx & Cx) → Dx]
    Lh & Eh
    Therefore, Dh

    Add premise: ∀x(Ex → Cx)
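Here is the sketch promised above, checking the interpretation given for 3. The extensions below are exactly the ones listed in that answer; encoding them as Python sets and tuples is my own choice, made only for illustration.

# The interpretation given for 3. above: D = {1, 2}, M = {<1, 2>}, P = {1},
# V = {<2, 1>}, m = 2.
D = [1, 2]
M = {(1, 2)}
P = {1}
V = {(2, 1)}
m = 2

# Premise 1:  Ax(Vmx -> Mxm)
premise1 = all(((m, x) not in V) or ((x, m) in M) for x in D)
# Premise 2:  Ex(Px & Vmx)
premise2 = any(x in P and (m, x) in V for x in D)
# Conclusion: Ex(Px & Mmx)
conclusion = any(x in P and (m, x) in M for x in D)

print(premise1, premise2, conclusion)  # True True False: premises true, conclusion false

# The suggested additional premise AxAy(Mxy -> Myx) is false on this
# interpretation, which is why adding it blocks the counterexample.
extra = all(((x, y) not in M) or ((y, x) in M) for x in D for y in D)
print(extra)  # False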

Chapter 11: Exercises 11-1

x(Bx x = t) x[(Sx & Bx) & y[(Sy & By) x = y]) ~x[Bx & (x t & Sx)] x[(Sx & x e) Txe] xy[([My & Sy] & Lyg) x = y] or xy[([(Mx & Sx) & Lxg] & [(My & Sy) & Lyg] x = y] 11. x[(Sx & x a) Tax]
1. 3. 5. 7. 9. 329

Introductory Logic 13. D = People J: could be jivin' L: loves i : me m : my mother

~x(x m & lxi) & Jm 15. D = Presidents (sc. of the U.S.) I : has been impeached c : William Clinton j : Andrew Johnson ~x[x j & x c) & Ix] Exercises 11-2 1. x([(Wx & Kxa) & y[(Wy & Kya) x = y]] & Rx) 3. x[(Wx Lax) & y([(My & Lyg) & z([Mz & Lzg] y = z)] & Kxy)] 5. xy[([(Mx & Lgx) & z[(Mz & Lgz) x = z]] & [(Wy & Lay) & z[(Wz & Laz) y = z]]) & Kxy] Exercises 11-3 1. Your interpretation must have: the thing labeled r is not W 3. Your interpretation must have no more than two items in its domain and exactly one thing that is F . 5. Your interpretation must have exactly one item in its domain. 7. The sentence on the left is a logical truth. Your interpretation must have at least two items in its domain which are F or at least two items which are not F. 9. Your interpretation must satisfy one of the following two conditions: - Everything is G, or - At least two things fail to be G. 11. Your interpretation must have nothing that is G. 13. Your interpretation must have all of the following: - at least two items in its domain, 330

- nothing Qs itself, and - each thing Qs something other than itself. Exercises 11-4: 1. 1. 2. 3. 4. 5. 6. 7. 8. 9. SHOW xyz[(x = y & y = z) x = z] UD ----------------------------------------| SHOW yz[(a = y & y = z) a = z] | UD | ------------------------------------- | | | SHOW z[(a = b) & (b = z) a = z] | | UD | | --------------------------------- | | | | | SHOW (a = b & b = c) a = c | | | CD | | | ----------------------------- | | | | | | | a = b & b = c | | | | ACD | | | | SHOW a = c | | | | DD | | | | ------------------------- | | | | | | | | | a = b | | | | |5, &E | | | | | b = c | | | | |5, &E | | | | | a = c | | | | |7,8,=E | | | | ------------------------- | | | | | | | ----------------------------- | | | | | --------------------------------- | | | ------------------------------------- | ----------------------------------------P P DD 1, E 4, &E 5, &E 6, E 2, 7,E 4, &E 8, 9, =E UD DD =I 3, I

3.

x([Rx & y(Ry x = y)] & Nx) Rt SHOW Nt -----------------------------------4. | [Ra & y(Ry a = y)] & Na | 5. | Ra & y(Ry a = y) | 6. | y(Ry a = y) | 7. | Rt a = t | 8. | a = t | 9. | Na | 10. | Nt | ------------------------------------

1. 2. 3.

5.

1. 2. 3. 4.

SHOW xy x = y -------------------------------| SHOW y a = y | | ---------------------------- | | | a = a | | | | y a = y | | | ---------------------------- | ---------------------------------

7.

1. 2. 3. 4.

xy(Ay x = y) SHOW x[Ax & y(Ay x = y)] --------------------------------------| y(Ay a = y) | | Aa a = a |

P DD 1, E 3, E


Introductory Logic 5. 6. 7. 8. 9. 10. 11. 12. | | | | | | | | | | | | | | | | a = a | =I Aa |4, 5,E SHOW y(Ay a = y) | UD ----------------------------------- | | SHOW Ab a = b | | CD | ------------------------------- | | | | Ab | | | ACD | | SHOW a = b | | | DD | | --------------------------- | | | | | | Ab a = b | | | | 3, E | | | a = b | | | |9, 11,E | | --------------------------- | | | | ------------------------------- | | ----------------------------------- | Aa & y(Ay a = y) | 6, 7, &I x[Ax & y(Ay x = x)] | 13, I --------------------------------------P

13. 14. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 9. 1. 2. 3. 4. 5. 6. 7. 332

DD --------------------------------------| Aa & y(Ay a = y) | 1, E | Aa | 3, &E | y(Ay a = y) | 3, &E | SHOW y(Ay a = y) | UD | ----------------------------------- | | | SHOW Ab a = b | | DD | | ------------------------------- | | | | | Ab a = b | | | 5, VD | | | SHOW a = b Ab | | | CD | | | --------------------------- | | | | | | | a = b | | | | | | | | DD | | | | SHOW Ab | | | | ----------------------- | | | | | | | | | Ab | | | | |4,10,=E | | | | ----------------------- | | | | | | | --------------------------- | | | | | |8, 9, I | | | Ab a = b | | ------------------------------- | | | ----------------------------------- | | xy(Ay x = y) | 6, I --------------------------------------xyGxy ~xGxx ~xy xy

x[Ax & y(Ay x = y)] SHOW xy(Ay x = y)

P P P DD 1, E 5, E 3, QN

SHOW ! ------------------------------| yGay | | Gab | | y~y x y |

8. 9. 10. 11. 12. 13. 14. 11. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15.

| | | | | | |

~ y a y | y~ a y | ~ a b | a = b | Gaa | xGxx | ! | -------------------------------

7, E 8, QN 9, E 10, DN 6, 11, =E 12, I 2, 13, !I P P P P DD 2, E 3, E 6, 7, &I 1, E 9, E 9, E 10, 11, =E 8, 12, =E 13, I 4, 14, !I

xy x = y xFx YDy ~x(Fx & Dx)

SHOW ! ------------------------------| Fa | | Db | | Fa & Db | | | y c = y | c = a | | c = b | | a = b | | Fa & Da | | x(Fx & Dx) | | ! | -------------------------------

13. 1. x x = a P 2.SHOW xFx Fa DD --------------------------------3. | SHOW xFx Fa | CD | ----------------------------- | 4. | | xFx | | ACD 5. | | SHOW Fa | | DD | | ------------------------- | | 6. | | | Fa | | | 4, E | | ------------------------- | | | ----------------------------- | 7. | SHOW Fa xFx | CD | ----------------------------- | 8. | | Fa | | ACD 9. | | SHOW xFx | | UD | | ------------------------- | | 10. | | | SHOW Fb | | | DD | | | --------------------- | | | 11. | | | | b = a | | | | 1, E 12. | | | | Fb | | | | 8, 11, =E | | | --------------------- | | | | | ------------------------- | | | ----------------------------- | 14. | xFx Fa | 3, 7, E --------------------------------15. 1. x(x = a w x = b) 333 P

Introductory Logic 2. SHOW xFx (Fa & Fb) DD --------------------------------------| CD 3. | SHOW xFx (Fa & Fb) | ----------------------------------- | 4. | | xFx | | ACD 5. | | SHOW (Fa & Fb) | | DD | | ------------------------------- | | 6. | | | Fa | | | 4, E 7. | | | Fb | | | 4, E 8. | | | Fa & Fb | | |6, 7, &I | | ------------------------------- | | | ----------------------------------- | 9. | SHOW (Fa & Fb) xFx | CD | ----------------------------------- | 10. | | Fa & Fb | | ACD 11. | | SHOW xFx | | UD | | ------------------------------- | | 12. | | | SHOW Fc | | | DD | | | --------------------------- | | | 13. | | | | c = a w c = b | | | | 1, E 14. | | | | SHOW c = a Fc | | | | CD | | | | ----------------------- | | | | 15. | | | | | c = a | | | | | ACD 16. | | | | | SHOW Fc | | | | | DD | | | | | ------------------- | | | | | 17. | | | | | | Fa | | | | | | 10, &E 18. | | | | | | Fc | | | | | |15,17, =E | | | | | ------------------- | | | | | | | | | ----------------------- | | | | 19. | | | | SHOW c = b Fc | | | | CD | | | | ----------------------- | | | | 20. | | | | | c = b | | | | | ACD | | | | | DD 21. | | | | | SHOW Fc | | | | | ------------------- | | | | | 22. | | | | | | Fb | | | | | | 10, &E 23. | | | | | | Fc | | | | | |20,22,=E | | | | | ------------------- | | | | | | | | | ----------------------- | | | | 24. | | | | Fc | | | |13,14,19,SC | | | --------------------------- | | | | | ------------------------------- | | | ----------------------------------- | 25. | xFx (Fa & Fb) | 3, 9, I ---------------------------------------


Basic Rules &E

N&R
)))))

N&R R
))))))

&I

N wE

N R
))) N&R

NvR ~N
))))))

R E

NvR ~R )))))) N NR R N
)))))))

wI

N
)))

R
)))

NwR

NwR

NR N
)))))))

NR RN
))))))

R E

NR
!I

NR N
)))))))

~N ))))) !

<N
))))) N[<>0]

I
provided 0 is new to the
derivation

N[<>0]
)))))))

<N

<N
)))) N[<>0]

=E

0=: N[0] N[:/0 ]


))))

: = 0 0 and : are names N[0] N[0] contains one or more 0's


)))) N[:/0] N[:/0] is N[0] with one or more 0's replaced by :'s

=I

))))) 0=0

Where 0 is any name

Derivation rules SHOW N +))))))))))))), * * * * N .)))))))))))))DD SHOW N R CD +))))))))))))))))))))), N * ACD * * SHOW R * * *+)))))))))))))), * *.)))))))))))))).)))))))))))))))))))))SHOW <N UD +)))))))))))))))))), * provided 0 is * SHOW N[<>0] *+ ))))))))))), * new to the ** * * derivation *. )))))))))))* .))))))))))))))))))-

SHOW N ID +)))))))))))))))))), ~N * AID * * SHOW ! * DD *+))))))))))))), * ! * * ** * *.))))))))))))).))))))))))))))))))-


Derived Rules MT

NR ~R
))))))) ~N

N
)))

SC

NwR N2 R2
)))))))

SC2

~N R ))))))))

NR R

HS

NR R2
))))))))

N2

Second Form of ID SHOW ~N +)))))))))))))))))))), * N * * SHOW ! * *+)))))))))))))))), * ** * * ** ! * * *.))))))))))))))))- * .))))))))))))))))))))Replacement Rules


DN ~ ~~N :: N DeM ~ Com ~(N & R) :: ~N w ~R ~(N w R) :: ~N & ~R ~(N R) :: ~N R

ID AID DD

~(N R) :: N & ~R

Assoc N & (R & 2) :: (N & R) & 2 N w (R w 2) :: (N w R) w 2 N (R 2) :: (N R) 2 Dist

N & R :: R & N N w R :: R w N N R :: R N

N & (R w 2) :: (N & R) w (N & 2) N w (R & 2) :: (N w R) & (N w 2) N R :: ~R ~N

N R :: ~N w R ~N R :: N w R
~<N :: <~N ~<N :: <~N

Ctr

QN

