
Logic Programming

Adrian Craciun
November 22, 2013

Contents

I An Introduction to Prolog 5
1 Introduction 5
1.1 Traditional programming paradigms . . . . . . . . . . . . . . . . 5
1.2 Logic programming . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2 Prolog: Informal Introduction 7


2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Facts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Queries (Goals) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.4 Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.6 Further Reading and Exercises . . . . . . . . . . . . . . . . . . . 14

3 Syntax and Data Structures 15


3.1 Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1.1 Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1.2 Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.3 Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2 Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 Unification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.4 Arithmetic in Prolog . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.6 Further Reading and Exercises . . . . . . . . . . . . . . . . . . . 21

4 Using Data Structures 21


4.1 Structures as Trees . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2 Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

5 Recursion 27
5.1 Introducing Recursion . . . . . . . . . . . . . . . . . . . . . . . . 27
5.2 Recursive Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.3 Recursive Comparison . . . . . . . . . . . . . . . . . . . . . . . . 29
5.4 Joining Structures . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.5 Accumulators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.6 Difference Structures . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.7 Further Reading and Exercises . . . . . . . . . . . . . . . . . . . 36

6 Backtracking and the Cut (!) 37


6.1 Backtracking behaviour . . . . . . . . . . . . . . . . . . . . . . . 37
6.2 The cut (!) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.3 Common uses of the cut (!) . . . . . . . . . . . . . . . . . . . . . 41
6.4 Reading and Exercises . . . . . . . . . . . . . . . . . . . . . . . . 46

7 Efficient Prolog 47
7.1 Declarative vs. Procedural Thinking . . . . . . . . . . . . . . . . 47
7.2 Narrow the search . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.3 Let Unification do the Work . . . . . . . . . . . . . . . . . . . . . 48
7.4 Understand Tokenization . . . . . . . . . . . . . . . . . . . . . . 49
7.5 Tail recursion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
7.6 Let Indexing Help . . . . . . . . . . . . . . . . . . . . . . . . . . 51
7.7 How to Document Prolog Code . . . . . . . . . . . . . . . . . . . 52
7.8 Reading and Further Exercises . . . . . . . . . . . . . . . . . . . 52

8 I/O with Prolog 53


8.1 Edinburgh style I/O . . . . . . . . . . . . . . . . . . . . . . . . . 53
8.2 ISO I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
8.3 Reading and Further Exercises . . . . . . . . . . . . . . . . . . . 58

9 Defining New Operators 58


9.1 Reading and Further Exercises . . . . . . . . . . . . . . . . . . . 60

II The Theoretical Basis of Logic Programming 61


10 Logical Background 61
10.1 Predicate logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
10.2 Herbrand's Theorem . . . . . . . . . . . . . . . . . . . . . . . 64
10.3 Clausal Form of Formulae . . . . . . . . . . . . . . . . . . . . . . 66
10.4 Reading and Further Exercises . . . . . . . . . . . . . . . . . . . 68

11 Resolution 69
11.1 Ground Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . 69
11.2 Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
11.3 Unification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
11.4 Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
11.5 Reading and Further Exercises . . . . . . . . . . . . . . . . . . . 75

12 Logic Programming 76
12.1 Formulas as programs . . . . . . . . . . . . . . . . . . . . . . . . 76
12.2 Horn Clauses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
12.3 SLD Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
12.4 Reading and Further Exercises . . . . . . . . . . . . . . . . . . . 82

Overview of the Lecture
Contents of this lecture

Part 1: An introduction to the logic programming language Prolog. Based
largely on [Clocksin and Mellish, 2003].
Part 2: A review of the theoretical basis of logic programming. Based on
corresponding topics in [Ben-Ari, 2001] and [Nilsson and Maluszynski, 2000].
Part 3: Advanced topics in logic programming/Prolog. Based on corresponding
topics in [Ben-Ari, 2001] and [Nilsson and Maluszynski, 2000].

Part I
An Introduction to Prolog
1 Introduction
1.1 Traditional programming paradigms
Recalling the Von Neumann machine

The von Neumann machine (architecture) is characterized by:


large uniform store of memory,
processing unit with registers.
A program for the von Neumann machine: a sequence of instructions for
moving data between memory and registers,
carrying out arithmetical-logical operations between registers,
control, etc.
Most programming languages (like C, C++, Java, etc.) are influenced by
and were designed for the von Neumann architecture.
In fact, such programming languages take into account the architec-
ture of the machine they address and can be used to write efficient
programs.
The above point is by no means trivial, and it leads to a separation
of work (the software crisis):
finding solutions of problems (using reasoning),
implementation of the solutions (mundane and tedious).

Alternatives to the von Neumann approach

How about making programming part of problem solving?


i.e. write programs as you solve problems?
rapid prototyping?
Logic programming is derived from an abstract model (not a reorganiza-
tion/abstraction of a von Neumann machine).
In logic programming
program = set of axioms,
computation = constructive proof of a goal statement.

1.2 Logic programming
Logic programming: some history

David Hilbert's program (early 20th century): formalize all mathematics


using a finite, complete, consistent set of axioms.
Kurt Gödel's incompleteness theorem (1931): no consistent theory containing
arithmetic can prove its own consistency.

Alonzo Church and Alan Turing (independently, 1936): undecidability -


no mechanical method to decide truth (in general).

Alan Robinson (1965): the resolution method for first order logic (i.e.
machine reasoning in first order logic).
Robert Kowalski (1971): procedural interpretation of Horn clauses, i.e.
computation in logic.

Alan Colmerauer (1972): Prolog (PROgrammation en LOGique).

David H.D. Warren (mid-late 1970s): efficient implementation of Prolog.

1981 Japanese Fifth Generation Computer project: project to build the


next generation computers with advanced AI capabilities (using a concur-
rent Prolog as the programming language).

Applications of logic programming

Symbolic computation:

relational databases,
mathematical logic,
abstract problem solving,
natural language understanding,
symbolic equation solving,
design automation,
artificial intelligence,
biochemical structure analysis, etc.

Industrial applications:

aviation:
* SCORE - a long-term airport capacity management system for
coordinated airports (20% of air traffic worldwide, according to
www.pdc-aviation.com)

* FleetWatch - operational control, used by 21 international air
companies.
personnel planning: StaffPlan (airports in Barcelona, Madrid; Hovedstaden
region in Denmark).
information management for disasters: ARGOS - crisis management
in CBRN (chemical, biological, radiological and nuclear) incidents,
used by Australia, Brazil, Canada, Ireland, Denmark, Sweden, Norway,
Poland, Estonia, Lithuania and Montenegro.

2 Prolog: Informal Introduction


2.1 Overview
Problem solving with Prolog

Programming in Prolog:

rather than prescribe a sequence of steps to solve a problem,


describe known facts and relations, then ask questions.
Use Prolog to solve problems that involve objects and relations between
objects.

Examples:

Objects: John, book, jewel, etc.


Relations: John owns the book, The jewel is valuable.
Rules: Two people are sisters if they are both females and they have
the same parents.

Attention!!! Problem solving requires modelling of the problem (with its


respective limitations).
Problem solving with Prolog:

declare facts about objects and their relations,


define rules about objects and their relations,
ask questions about objects and their relations.
Programming in Prolog: a conversation with the Prolog interpreter.

2.2 Facts
Stating a fact in Prolog:
likes(johnny, mary).

Names of relations (predicates) and objects are written in lower case
letters.
Prolog uses (mostly) prefix notation (but there are exceptions).
Facts end with . (full stop).

A model is built in Prolog, and facts describe the model.

The user has to be aware of the interpretation:


likes(john, mary).
likes(mary, john).

are not the same thing (unless explicitly specified).


Arbitrary numbers of arguments are allowed.

Notation: likes/2 indicates a binary predicate.

Facts are part of the Prolog database (knowledge base).

2.3 Queries (Goals)


A query in Prolog:
?- owns(mary, book).

Prolog searches in the knowledge base for facts that match the question:
Prolog answers true if :

the predicates are the same,


arguments are the same,

Otherwise the answer is false:

only what is known is true (closed world assumption),


Attention: false may not mean that the answer is false (but more
like not derivable from the knowledge).
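
For example, assuming a knowledge base that contains only the single fact likes(john, mary). (a made-up illustration), the closed world assumption gives:

likes(john, mary).

?- likes(john, mary).
true.
?- likes(mary, john).
false.

Here false only means that likes(mary, john) cannot be derived from the stored facts, not that it is false in the real world.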

Variables

Think variables in predicate logic.

Instead of:
?- likes(john, mary).
?- likes(john, apples).
?- likes(john, candy).

ask something like What does John like? (i.e. give everything that John
likes).
Variables stand for objects to be determined by Prolog.

Variables can be:

instantiated - there is an object the variable stands for,


uninstantiated - it is not (yet) known what the variable stands for.
In Prolog variables start with CAPITAL LETTERS:
?- likes(john, X).

Prolog computation: example

Consider the following facts in a Prolog knowledge base:


...
likes(john, flowers).
likes(john, mary).
likes(paul, mary).
...

To the query
?- likes(john, X).

Prolog will answer


X = flowers

and wait for further instructions.

Prolog answer computation

Prolog searches the knowledge base for a fact that matches the query,

when a match is found, it is marked.

if the user presses Enter, the search is over,

if the user presses ; then Enter, Prolog looks for a new match, from
the previously marked place, and with the variable(s) in the query unin-
stantiated.
In the example above, two more ; Enter presses will make Prolog answer:
X = mary ;
false.

When no (more) matching facts are found in the knowledge base, Prolog
answers false .

Conjunctions: more complex queries


Consider the following facts:
likes(mary, food).
likes(mary, wine).
likes(john, wine).
likes(john, mary).

And the query:


?- likes(john, mary), likes(mary, john).

the query reads does john like mary and does mary like john?
Prolog will answer false : it searches for each goal in turn (all goals
have to be satisfied, if not, it will fail, i.e. answer false ).

Conjunctions: more complex queries (contd)


For the query:
?- likes(mary, X), likes(john, X).

Prolog: try to satisfy the first goal (if it is satisfied put a place-
marker), then try to satisfy the second goal (if yes, put a place-
marker).
If at any point there is a failure, backtrack to the last placemarker
and try alternatives.

Example: conjunction, backtracking


The way Prolog computes the answer to the above query is shown in Figures 1-3:
In Figure 1, the first goal is satisfied, Prolog attempts to find a match for
the second goal (with the variable instantiated).
The failure to find a match in the knowledge base causes backtracking,
see Figure 2.
The new alternative tried is successful for both goals, see Figure 3.

Figure 1: Success for the first goal.

Figure 2: Second goal failure causes backtracking.

Figure 3: Success with alternative instantiation.

2.4 Rules
John likes all people can be represented as:

likes(john, alfred).
likes(john, bertrand).
likes(john, charles).
likes(john, david).
...

but this is tedious!!!


likes(john, X).

but this should be only for people!!!

Enter rules: "John likes any object, but only one which is a person" is a
rule about what (whom) John likes.
Rules express that a fact depends on other facts.

Rules as definitions

Rules can be used to express definitions.

Example:

X is a bird if X is an animal and X has feathers.

Example:

X is a sister of Y if X is female and X and Y have the same parents.


Attention! The above notion of definition is not the same as the notion
of definition in logic:

such definitions allow deriving the predicate in the head of the rule,
but there may be other ways (i.e. other rules with the same head) to
derive that predicate,
in order to have full definitions, iff is needed instead of if.

Rules are general statements about objects and their relationships (in
general variables occur in rules, but not always).

Rules in Prolog

Rules in Prolog have a head and a body.


The body of the rule describes the goals that have to be satisfied for the
head to be true.
Example:
likes(john, X) :-
    likes(X, wine).
likes(john, X) :-
    likes(X, wine), likes(X, food).
likes(john, X) :-
    female(X), likes(X, wine).

Attention! The scope of the variables that occur in a rule is the rule itself
(rules do not share variables).

Example (royals)

Knowledge base:
male(albert).
male(edward).
female(alice).
female(victoria).
parents(alice, albert, victoria).
parents(edward, albert, victoria).
sister_of(X, Y) :-
    female(X),
    parents(X, M, F),
    parents(Y, M, F).

Goals:
?- sister_of(alice, edward).
?- sister_of(alice, X).

Exercise (thieves)

Consider the following:


/* 1 */ thief(john).

/* 2 */ likes(mary, food).
/* 3 */ likes(mary, wine).
/* 4 */ likes(john, X) :- likes(X, wine).

/* 5 */ may_steal(X, Y) :-
            thief(X), likes(X, Y).

Explain how the query
?- may_steal(john, X).

is executed by Prolog.

2.5 Summary
In this introduction to Prolog, the following were discussed:
asserting facts about objects,
asking questions about facts,
using variables, scopes of variables,
conjunctions,
an introduction to backtracking (in examples).

2.6 Further Reading and Exercises


All things SWI-Prolog can be found at http://www.swi-prolog.org.

Install SWI-Prolog and try out the examples in the lecture.

Read: Chapter 1 (including exercises section) of [Clocksin and Mellish, 2003].

3 Syntax and Data Structures
3.1 Terms
Prolog programs are built from terms (written as strings of characters).

The following are terms:

constants,
variables,
structures.

3.1.1 Constants
Constants are simple (basic) terms.

They name specific things or predicates (no functions in Prolog).

Constants are of 2 types:

atoms,
numbers: integers, rationals (with special libraries), reals (floating
point representation).

Examples of atoms

atoms:

likes,
a (lowercase letters),
=,
>,
'Void' (anything between single quotes),
george_smith (constants may contain underscores),
not atoms:

314a5 (cannot start with a number),
george-smith (cannot contain a hyphen),
George (cannot start with a capital letter),
_something (cannot start with underscore).

3.1.2 Variables
Variables are simple (basic) terms,

written starting with an uppercase letter or with an underscore (_),

Examples: X, Input, _something, _ (the last one is called the anonymous variable).

Anonymous variables need not have consistent interpretations (they need


not be bound to the same value):
?- likes(_, john).   % does anybody like John?
?- likes(_, _).      % does anybody like anybody?

3.1.3 Structures
Structures are compound terms, single objects consisting of collections of
objects (terms),

they are used to organize the data.

A structure is specified by its functor (name) and its components


owns(john, book(wuthering_heights, bronte)).
book(wuthering_heights, author(emily, bronte)).

?- owns(john, book(X, author(Y, bronte))).

% does John own a book (X) by Bronte (Y, bronte)?

Characters in Prolog

Characters:

A-Z
a-z
0-9
+-*/\ <>: .
? @#$&

Characters are ASCII (printing) characters with codes greater than 32.

Remark: quoting allows the use of any character.

3.2 Operators
Arithmetic operators

Arithmetic operators: +, -, *, /.
+(x, -(y, z)) is equivalent to x + (y - z).
Operators do not cause evaluation in Prolog.
Example: 3+4 (a structure) does not have the same meaning as 7 (a term).
X is 3+4 causes evaluation (is represents the evaluator in Prolog).
The result of the evaluation is that X is assigned the value 7.
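
For instance, a small illustrative session at the Prolog prompt:

?- X = 3+4.
X = 3+4.
?- X is 3+4.
X = 7.

The first query merely unifies X with the structure +(3, 4); only the second one evaluates the expression.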

Parsing arithmetic expressions

To parse an arithmetic expression you need to know:


The position:
* infix: x + y, x - y
* prefix: -x
* postfix: x!
Precedence: what does x + y * z mean?
Associativity: What is x + y + z? x + (y + z) or (x + y) + z?

Each operator has a precedence class:


1 - highest
2 - lower
...
lowest
*, / have higher precedence than +, -
8/2/2 evaluates to:
8 (i.e. 8/(2/2)) if / is right associative,
or 2 (i.e. (8/2)/2) if / is left associative.
Arithmetic operators are left associative.
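
These conventions can be checked directly at the prompt (an illustrative session):

?- X is 8/2/2.
X = 2.
?- X is 2+3*4.
X = 14.

8/2/2 is parsed as (8/2)/2 (left associativity), and 2+3*4 as 2+(3*4) (precedence of * over +).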

3.3 Unification
The unification predicate =

= - infix built-in predicate.


?- X = Y.

Prolog will try to match(unify) X and Y, and will answer true if successful.
In general, we try to unify 2 terms (which can be any of constants, vari-
ables, structures):
?- T1 = T2.

Remark on terminology: while in some Prolog sources the term match-


ing is used, note that in the (logic) literature matching is used for the
situation where one of the terms is ground (i.e. contains no variables).
What = does is unification.

The unification procedure


Summary of the unification procedure ?- T1 = T2:
If T1 and T2 are identical constants, success (Prolog answers true);
If T1 and T2 are uninstantiated variables, success (variable renaming);
If T1 is an uninstantiated variable and T2 is a constant or a structure,
success, and T1 is instantiated with T2;
If T1 and T2 are instantiated variables, then decide according to their
values (they unify if their values unify, otherwise not);
If T1 is a structure: f (X1 , X2 , ..., Xn ) and T2 has the same functor (name):
f (Y1 , Y2 , ..., Yn ) and the same number of arguments, then unify these ar-
guments recursively (X1 = Y1 , X2 = Y2 , etc.). If all the arguments unify,
then the answer is true, otherwise the answer is false (unification fails);
In any other case, unification fails.
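
A few illustrative queries exercising the cases above (the terms are arbitrary examples):

?- foo = foo.
true.
?- X = book(emily, bronte).
X = book(emily, bronte).
?- date(D, M, 2013) = date(15, november, Y).
D = 15,
M = november,
Y = 2013.
?- foo(a) = foo(a, b).
false.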

Occurs check

Consider the following unification problem:


?- X = f(X).

Answer of Prolog (the exact display depends on the implementation):
X = f(**).

X = f(X).

18
In fact, according to the unification procedure, the result would be
X = f(X) = f(f(X)) = ... = f(f(...f(X)...)) - an infinite loop
would be generated.

Unification should fail in such situations.

To avoid them, perform an occurs check: If T1 is a variable and T2 a


structure, in an expression like T1 = T2 make sure that T1 does not
occur in T2.
The occurs check is deactivated by default in most Prolog implementations
(it is computationally very expensive) - Prolog trades correctness for speed.
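
When the occurs check does matter, the ISO built-in unify_with_occurs_check/2 can be used instead of = (a brief illustration):

?- unify_with_occurs_check(X, f(X)).
false.
?- unify_with_occurs_check(X, f(Y)).
X = f(Y).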
A predicate complementary to unification:

\= succeeds only when = fails,
i.e. T1 \= T2 succeeds when T1 and T2 cannot be unified.
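
For example:

?- a \= b.
true.
?- X \= b.
false.
?- f(X, b) \= f(a, c).
true.

The second query fails because X can be unified with b.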

3.4 Arithmetic in Prolog


Built-in predicates for arithmetic

Prolog has built-in numbers.

Built-in predicates on numbers include:

X = Y,
X \= Y,
X < Y,
X > Y,
X =< Y,
X >= Y,

with the expected behaviour.

Note that variables have to be instantiated in most cases (with the excep-
tion of the first two above, where unification is performed in the case of
uninstantiation).

The arithmetic evaluator is/2

Prolog also provides arithmetic operators (functions), e.g.:


+, -, *, /, mod, rem, abs, max, min, random, round, floor, ceiling, etc., but these
cannot be used directly for computation (2+3 means 2+3, not 5) - expressions
involving operators are not evaluated by default.

The Prolog evaluator is/2 has the form:
X is Expr.

where X is an uninstantiated variable, and Expr is an arithmetic expres-


sion, where all variables must be instantiated (Prolog has no equation
solver).

Example (with arithmetic(1))

reigns(rhondri, 844, 878).
reigns(anarawd, 878, 916).
reigns(hywel_dda, 916, 950).
reigns(lago_ap_idwal, 950, 979).
reigns(hywel_ap_ieuaf, 979, 985).
reigns(cadwallon, 985, 986).
reigns(maredudd, 986, 999).

prince(X, Y) :-
    reigns(X, A, B),
    Y >= A,
    Y =< B.

?- prince(cadwallon, 986).
true
?- prince(X, 979).
X = lago_ap_idwal ;
X = hywel_ap_ieuaf

Example (with arithmetic(2))

pop(place1, 203).
pop(place2, 548).
pop(place3, 800).
pop(place4, 108).

area(place1, 3).
area(place2, 1).
area(place3, 4).
area(place4, 3).

density(X, Y) :-
    pop(X, P),
    area(X, A),
    Y is P/A.

?- density(place3, X).
X = 200
true

3.5 Summary
The notions covered in this section:
Prolog syntax: terms (constants, variables, structures).
Arithmetic in Prolog.
Unification procedure.
Subtle point: occurs check.

3.6 Further Reading and Exercises


Read: Chapter 2 of [Clocksin and Mellish, 2003].
Try out all the examples in these notes, and in the above mentioned Chapter 2
of [Clocksin and Mellish, 2003].

4 Using Data Structures


4.1 Structures as Trees
Structures as trees

Consider the following structure:


parent(charles, elisabeth, philip).

this can be represented as a tree:

          parent
        /    |     \
  charles elisabeth philip

Other examples:

a + b*c

      +
     / \
    a   *
       / \
      b   c

book(moby_dick, author(herman, melville)).

         book
        /    \
  moby_dick  author
             /    \
        herman   melville

sentence(noun(X), verb_phrase(verb(Y), noun(Z))).

        sentence
        /      \
    noun      verb_phrase
     |         /       \
     X       verb      noun
              |          |
              Y          Z

   (e.g. the sentence "john likes mary")

f(X, g(X, a)).

       f
      / \
     X   g
        / \
       X   a

In the above, the variable X is shared between nodes in the tree represen-
tation.

4.2 Lists
Introducing lists

Lists are a common data structure in symbolic computation.

Lists contain elements that are ordered.

Elements of lists are terms (any type, including other lists).

Lists are the only data type in LISP.

They are a data structure in Prolog.

Lists can represent practically any structure.

Lists (inductive domain)

Base case: [ ] the empty list.

General case : .( h, t) the nonempty list, where:

h - the head, can be any term,


t - the tail, must be a list.

List representations

.(a, []) is represented as a tree (the "vine" representation draws the same
tree growing downwards along the tail):

      .
     / \
    a   []

.(a, .(b, [])) is

      .
     / \
    a   .
       / \
      b   []

.(a, b) is not a list, but it is a legal Prolog structure, represented as

      .
     / \
    a   b

.(.(a, []), .(a, .(X, []))) is represented as

            .
          /   \
        .       .
       / \     / \
      a   []  a   .
                 / \
                X   []

Syntactic sugar for lists

To simplify the notation, the elements can be written between square brackets,
separated by commas.

The lists introduced above are now:

[a],
[a, b],
[[a], a, X].

List manipulation

Lists are naturally split between the head and the tail.

Prolog offers a construct to take advantage of this: [H | T].

Consider the following example:


p([1, 2, 3]).
p([the, cat, sat, [on, the, mat]]).

Prolog will give:


?- p([H | T]).
H = 1,
T = [2, 3] ;
H = the,
T = [cat, sat, [on, the, mat]] ;
no

Attention! [a | b] is not a list, but it is a valid Prolog expression, corresponding to .(a, b).

Unifying lists: examples

[X, Y, Z] = [john, likes, fish]
X = john
Y = likes
Z = fish

[cat] = [X | Y]
X = cat
Y = []

[X, Y | Z] = [mary, likes, wine]
X = mary
Y = likes
Z = [wine]

[[the, Y] | Z] = [[X, hare], [is, here]]
X = the
Y = hare
Z = [[is, here]]

[golden | T] = [golden, norfolk]
T = [norfolk]

[vale, horse] = [horse, X]
false

[white | Q] = [P | horse]
P = white
Q = horse

Strings
In Prolog, strings are written inside double quotation marks.
Example: "a string".
Internally, a string is a list of the corresponding ASCII codes for the
characters in the string.
?- X = "a string".
X = [97, 32, 115, 116, 114, 105, 110, 103].

Summary
Items of interest:
the anatomy of a list in Prolog .( h, t)
graphic representations of lists: tree representation, vine repre-
sentation,
syntactic sugar for lists [...] ,
list manipulation: head-tail notation [H|T],
strings as lists,
unifying lists.

Reading

Read the corresponding Chapter 3, Section 3.2, from [Clocksin and Mellish, 2003].

5 Recursion
5.1 Introducing Recursion
Induction/Recursion

Inductive domain:
A domain composed of objects constructed in a manageable way,
i.e.:
there are some simplest(atomic) objects, that cannot be decom-
posed,
there are complex objects that can be decomposed into finitely
many simpler objects,
and this decomposition process can be performed finitely many times
before one reaches the simplest objects.
In such domains, one can use induction as an inference rule.
Recursion is the dual of induction, i.e.:
recursion describes computation in inductive domains,
recursive procedures (functions, predicates) call themselves,
but the recursive call has to be done on a simpler object.
As a result, a recursive procedure will have to describe the behaviour
for:
(a) The simplest objects, and/or the objects/situations for which the
computation stops, i.e. the boundary conditions, and
(b) the general case, which describes the recursive call.

Example: lists as an inductive domain

simplest object: the empty list [ ] .


any other list is made of a head and a tail (the tail should be a list): [H|T].

Example: member

Implement in Prolog the predicate member/2, such that member(X, Y) is


true when X is a member of the list Y.

% The boundary condition.
member(X, [X|_]).
% The recursive condition.
member(X, [_|Y]) :-
    member(X, Y).
The boundary condition is, in this case, the condition for which the computation
stops (not necessarily for the simplest list, which is []).
For [] the predicate is false, therefore that clause is simply omitted.

Note that the recursive call is on a smaller list (second argument). The arguments
in the recursive call are getting smaller in such a way that eventually the
computation will either succeed, or reach the empty list (for which there is no
clause) and fail.
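
A few illustrative queries for member/2 as defined above:

?- member(b, [a, b, c]).
true.
?- member(d, [a, b, c]).
false.
?- member(X, [a, b, c]).
X = a ;
X = b ;
X = c ;
false.

The last query shows that member/2 can also generate the elements of a list through backtracking.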

When to use recursion?

Avoid circular definitions:


parent(X, Y) :- child(Y, X).
child(X, Y) :- parent(Y, X).

Careful with left recursion:


person(X) :- person(Y), mother(X, Y).
person(adam).

In this case,
?- person(X).

will loop (no chance to backtrack). Prolog tries to satisfy the rule and this
leads to the loop.

Order of facts, rules in the database:


is_list([A|B]) :- is_list(B).
is_list([]).

The following query will loop:


?- is_list(X).

The order in which the rules and facts are given matters. In general, place
facts before rules.
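
With the clauses reordered so that the fact comes first (a sketch of the corrected version), the same query no longer dives straight into an infinite branch:

is_list([]).
is_list([A|B]) :- is_list(B).

?- is_list(X).

now first answers X = [], and on backtracking produces lists of increasing length (with fresh variables as elements); there are still infinitely many answers, but each one is actually reached.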

5.2 Recursive Mapping


Mapping: given 2 similar structures, change the first into the second,
according to some rules.
Example:

"you are a computer" maps to "i am not a computer",
"do you speak french" maps to "no i speak german".

Mapping procedure:

1. accept a sentence,
2. change you to i,
3. change are to am not,
4. change french to german,
5. change do to no,
6. leave everything else unchanged.

The program:
change(you, i).
change(are, [am, not]).
change(french, german).
change(do, no).
change(X, X).

alter([], []).
alter([H|T], [X|Y]) :-
    change(H, X),
    alter(T, Y).

Note that this program is limited:

it would change i do like you into i no like i,


new rules would have to be added to the program to deal with such
situations.
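
An illustrative run of the mapping program (the sentence is just an example):

?- alter([do, you, know, french], X).
X = [no, i, know, german].

Because of the catch-all clause change(X, X), further (less interesting) solutions can be obtained on backtracking, with some words left unchanged.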

5.3 Recursive Comparison


Dictionary comparison (lexicographic comparison) of atoms: aless/2

1. aless(book, bookbinder) succeeds.
2. aless(elephant, elevator) succeeds.
3. aless(lazy, leather) is decided by aless(azy, eather).
4. aless(same, same) fails.
5. aless(alphabetic, alp) fails.
Use the predicate name/2, which converts between an atom and the list of its
character codes:
?- name(X, [97, 108, 112]).
X = alp.

The program:

aless(X, Y) :-
    name(X, L), name(Y, M), alessx(L, M).

alessx([], [_|_]).
alessx([X|_], [Y|_]) :- X < Y.
alessx([H|X], [H|Y]) :- alessx(X, Y).
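
Some illustrative queries, assuming the clauses above are loaded:

?- aless(book, bookbinder).
true.
?- aless(same, same).
false.
?- aless(alphabetic, alp).
false.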

5.4 Joining Structures


We want to append two lists, i.e.
?- appendLists([a, b, c], [3, 2, 1], [a, b, c, 3, 2, 1]).
true

This illustrates the use of appendLists/3 for testing that a list is the result
of appending two other lists.
Other uses of appendLists/3:
- Total list computation:
?- appendLists([a, b, c], [3, 2, 1], X).

- Isolate:
?- appendLists(X, [2, 1], [a, b, c, 2, 1]).

- Split:
?- appendLists(X, Y, [a, b, c, 3, 2, 1]).

% the boundary condition
appendLists([], L, L).
% recursion
appendLists([X|L1], L2, [X|L3]) :-
    appendLists(L1, L2, L3).

5.5 Accumulators
Summary
The recursive nature of structures (and in particular lists) gives a way to
traverse them by recursive decomposition.
When the boundary is reached, the decomposition stops and the result is
composed in a reverse of the decomposition process.
This process can be made more efficient: introduce an extra variable in
which the result so far is accumulated.
When the boundary is reached this extra variable already contains the
result, no need to go back and compose the final result.
This variable is called an accumulator.

Example: List Length

Without accumulator:
% length of a list
% boundary condition
listlen([], 0).
% recursion
listlen([H|T], N) :-
    listlen(T, N1),
    N is N1+1.

With accumulator:
% length of a list with accumulators
% call of the accumulator:
listlen1(L, N) :-
    lenacc(L, 0, N).
% boundary condition for the accumulator
lenacc([], A, A).
% recursion for the accumulator
lenacc([H|T], A, N) :-
    A1 is A + 1,
    lenacc(T, A1, N).

Inside Prolog, for the query ?- listlen1([a, b, c], N), the successive goals are:

lenacc([a, b, c], 0, N).
lenacc([b, c], 1, N).
lenacc([c], 2, N).
lenacc([], 3, N).

The return variable is shared by every goal in the trace.

Example: Reverse

Without accumulators:
%% reverse
% boundary condition
reverse1([], []).
% recursion
reverse1([X|TX], L) :-
    reverse1(TX, NL),
    appendLists(NL, [X], L).

With accumulators:

%% reverse with accumulators
% call the accumulator
reverse2(L, R) :-
    reverseAcc(L, [], R).
% boundary condition for the accumulator
reverseAcc([], R, R).
% recursion for the accumulator
reverseAcc([H|T], A, R) :-
    reverseAcc(T, [H|A], R).
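
An illustrative query, together with the accumulator states it goes through:

?- reverse2([a, b, c], R).
R = [c, b, a].

% reverseAcc([a, b, c], [],        R)
% reverseAcc([b, c],    [a],       R)
% reverseAcc([c],       [b, a],    R)
% reverseAcc([],        [c, b, a], R)   => R = [c, b, a]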

5.6 Difference Structures


Summary

Accumulators provide a technique to keep track of the result so far (in


the accumulator variable) at each step of computation, such that when the
structure is traversed the accumulator contains the final result, which
is then passed to the output variable.
Now we consider a technique where we use a variable to hold the final
result and the second to indicate a hole in the final result, where more
things can be inserted.
Consider [a, b, c | X] - we know that this structure is a list up to a point
(up to X). We call this an open list (a list with a hole).

a -- b -- c -- <hole>

Using open lists

Consider
?- X = [a, b, c | L], L = [d, e, f, g].
X = [a, b, c, d, e, f, g],
L = [d, e, f, g].

the result is the concatenation of the beginning of X (the list before


the hole) with L,
i.e. we filled the hole,
and this is done in one step!

Now fill the hole with an open list:


?- X = [a, b, c | L], L = [d, e | L1].
X = [a, b, c, d, e | L1],
L = [d, e | L1].

the hole was filled partially.

Now express this as a Prolog predicate:


diff_append1(OpenList, Hole, L) :-
    Hole = L.

i.e. we have an open list (OpenList) whose hole (Hole) is filled with a list (L):
?- X = [a, b, c, d | Hole],
   diff_append1(X, Hole, [d, e]).
X = [a, b, c, d, d, e],
Hole = [d, e].

Note that when we work with open lists we need to have information (i.e.
a variable) both for the open list and its hole.
A list can be represented as the difference between an open list and its hole.
Notation: OpenList-Hole

here the difference operator - has no interpretation (it is never evaluated),

in fact other operators could be used instead.

Now modify the append predicate to use difference list notation:


diff_append2(OpenList-Hole, L) :-
    Hole = L.

its usage:
?- X = [a, b, c, d | Hole]-Hole,
   diff_append2(X, [d, e]).
X = [a, b, c, d, d, e]-[d, e],
Hole = [d, e].

Perhaps the fact that the answer is given as a difference list is not conve-
nient.

A new version that returns a(n open) list (with the hole filled) as the
answer:
diff_append3(OpenList-Hole, L, OpenList) :-
    Hole = L.

its usage:

?- X = [a, b, c, d | Hole]-Hole,
   diff_append3(X, [d, e], Ans).
X = [a, b, c, d, d, e]-[d, e],
Hole = [d, e],
Ans = [a, b, c, d, d, e].

diff_append3 has

a difference list as its first argument,


a proper list as its second argument,
returns a proper list.

A further modification, to be systematic: for this version the arguments
are all difference lists:
diff_append4(OL1-Hole1, OL2-Hole2, OL1-Hole2) :-
    Hole1 = OL2.

and its usage:


?- X = [a, b, c | Ho]-Ho,
   diff_append4(X, [d, e, f | Hole2]-Hole2, Ans).
X = [a, b, c, d, e, f | Hole2]-[d, e, f | Hole2],
Ho = [d, e, f | Hole2],
Ans = [a, b, c, d, e, f | Hole2]-Hole2.

or, if we want the result to be just the list, fill the hole with the empty
list:
?- X = [a, b, c | Ho]-Ho, diff_append4(X, [d, e, f | Hole2]-Hole2, Ans-[]).
X = [a, b, c, d, e, f]-[d, e, f],
Ho = [d, e, f],
Hole2 = [],
Ans = [a, b, c, d, e, f].

One last modification is possible:


append_diff(OL1-Hole1, Hole1-Hole2, OL1-Hole2).

its usage:
?- X = [a, b, c | H]-H,
   append_diff(X, [d, e, f | Hole2]-Hole2, Ans-[]).
X = [a, b, c, d, e, f]-[d, e, f],
H = [d, e, f],
Hole2 = [],
Ans = [a, b, c, d, e, f].

Example: adding to back

Let us consider the program for adding one element to the back of a list:
% boundary condition
add_to_back(El, [], [El]).
% recursion
add_to_back(El, [Head|Tail], [Head|NewTail]) :-
    add_to_back(El, Tail, NewTail).

The program above is quite inefficient, at least compared with the similar
operation of adding an element at the beginning of a list (linear in the
length of the list - one goes through the whole list to find its end - versus
constant - one step).
But difference lists can help - the hole is at the end of the list:
add_to_back_d(El, OpenList-Hole, Ans) :-
    append_diff(OpenList-Hole, [El|ElHole]-ElHole, Ans-[]).
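
An illustrative query (the element and list are arbitrary examples):

?- add_to_back_d(z, [a, b, c | H]-H, Ans).
H = [z],
Ans = [a, b, c, z].

The element is placed at the end in one step: the hole H is filled with [z|ElHole] and the new hole ElHole is closed with [].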

Problems with difference lists

Consider:
?- append_diff([a, b]-[b], [c, d]-[d], L).
false.

The above does not work! (no holes to fill).


There are also problems with the occurs check (or lack thereof):
empty(L-L).

?- empty([a|Y]-Y).
Y = [a|**].

- in difference lists is a partial function. It is not defined for [a, b, c]-[d]:
?- append_diff([a, b]-[c], [c]-[d], L).
L = [a, b]-[d].

The query succeeds, but the result is not the one expected.
This can be fixed:
append_diff_fix(X-Y, Y-Z, X-Z) :-
    suffix(Y, X),
    suffix(Z, Y).

however, now the execution time becomes linear again.

5.7 Further Reading and Exercises
Read Chapter 3, Chapter 7, Sections 7.5, 7.6, 7.7 of [Clocksin and Mellish, 2003].

Read Chapter 7 of [Nilsson and Maluszynski, 2000].

Read Section 12.2 of [Brna, 1988].

Try out in Prolog the examples.

Solve the corresponding exercises.

6 Backtracking and the Cut (!)
6.1 Backtracking behaviour
Undesired backtracking behavior

There are situations where Prolog does not behave as expected.

Example:
father(mary, george).
father(john, george).
father(sue, harry).
father(george, edward).

This works as expected for:


?- father(X, Y).
X = mary,
Y = george ;
X = john,
Y = george ;
X = sue,
Y = harry ;
X = george,
Y = edward.

However, for this:


?- father(_, X).
X = george ;
X = george ;
X = harry ;
X = edward.

Once we find that george is a father, we don't expect to get the answer
again.

Consider the following recursive definition:


is_nat(0).
is_nat(X) :-
    is_nat(Y),
    X is Y+1.

In the following, by issuing the backtracking request, all natural numbers


can be generated:

?- is_nat(X).
X = 0 ;
X = 1 ;
X = 2 ;
X = 3 ;
X = 4 ;
X = 5 ;
X = 6 ;
...

There is nothing wrong with this behavior!

Consider:
member(X, [X|_]).
member(X, [_|Y]) :-
    member(X, Y).

the query:
?- member(a, [a, b, a, a, a, v, d, e, e, g, a]).
true ;
true ;
true ;
true ;
true ;
false .

Backtracking confirms the answer several times. But we only need it once!

6.2 The cut (!)


The cut predicate (!)

The cut predicate !/0 tells Prolog to discard certain choices of backtrack-
ing.

It has the effect of pruning branches of the search space.

As an effect,

the programs will run faster,


the programs will occupy less memory (less backtracking points to
be remembered).

Example: library

Reference library:

determine which facilities are available: basic, general.


if one has an overdue book, only basic facilities are available.

facility(Pers, Fac) :-
    book_overdue(Pers, Book),
    basic_facility(Fac).

basic_facility(references).
basic_facility(enquiries).

additional_facility(borrowing).
additional_facility(inter_library_exchange).

general_facility(X) :-
    basic_facility(X).
general_facility(X) :-
    additional_facility(X).

client('C. Wetzer').
client('A. Jones').

book_overdue('C. Wetzer', book00101).
book_overdue('C. Wetzer', book00111).
book_overdue('A. Jones', book010011).

?- client(X), facility(X, Y).
X = 'C. Wetzer',
Y = references ;
X = 'C. Wetzer',
Y = enquiries ;
X = 'A. Jones',
Y = references ;
X = 'A. Jones',
Y = enquiries.

Example: library revisited (and with cut)

facility(Pers, Fac) :-
    book_overdue(Pers, Book),
    !,
    basic_facility(Fac).

basic_facility(references).
basic_facility(enquiries).

additional_facility(borrowing).
additional_facility(inter_library_exchange).

general_facility(X) :-
    basic_facility(X).
general_facility(X) :-
    additional_facility(X).

The goal ?- client(X), facility(X, Y). is answered by Prolog in the following
way (a sketch of the search tree; at each step the selected goal is reduced,
and the alternatives discarded by the cut are pruned):

?- client(X), facility(X, Y).
        X = 'C. Wetzer'
?- facility('C. Wetzer', Y).
?- book_overdue('C. Wetzer', Book), !, basic_facility(Y).
        Book = book00101      (alternative Book = book00111: pruned by !)
?- !, basic_facility(Y).
?- basic_facility(Y).
        Y = references ;  Y = enquiries

Guarded gate metaphor!

Effect of the cut:

if a client has an overdue book, only allow basic facilities,


no need to look for all overdue books,
no need to consider any other rules about facilities.
! always succeeds (with empty substitutions).

When the cut is encountered as a goal:

the system becomes committed to all the choices made since the parent
(here this is facility) was invoked,
all other alternatives are discarded (e.g. the pruned branch indicated
above),
an attempt to re-satisfy any goal between the parent and the cut goal
will fail,
if the user asks for a different solution, Prolog goes to the backtrack
point above the parent goal (if any). In the example above the first
goal (client(X)) is the backtrack point above the parent goal.

6.3 Common uses of the cut (!)


1. Confirm the choice of a rule (tell the system the right rule was found).
2. Cut-fail combination (tell the system to fail a particular goal without
trying to find alternative solutions).
3. Terminate a generate-and-test (tell the system to terminate the
generation of alternative solutions by backtracking).

Confirm the choice of a rule

Situation:

There are some clauses associated with the same predicate.


Some clauses are appropriate for arguments of certain forms.
Often argument patterns can be provided (e.g. empty and nonempty
lists), but not always.
If no exhaustive set of patterns can be provided, give rules for specific
arguments and a catch all rule at the end.

sum_to(1, 1).
sum_to(N, Res) :-
    N1 is N-1,
    sum_to(N1, Res1),
    Res is Res1 + N.

When backtracking, there is an error (it loops - why?):


?- sum_to(5, X).
X = 15 ;
ERROR: Out of local stack

Now using the cut (!):


csum_to(1, 1) :- !.
csum_to(N, Res) :-
    N1 is N-1,
    csum_to(N1, Res1),
    Res is Res1 + N.
The system is committed to the boundary condition, it will not backtrack
for other solutions anymore:
?- csum_to(5, X).
X = 15.

However:
?- csum_to(-3, Res).
ERROR: Out of local stack

Placing the condition N =< 1 in the boundary condition fixes the problem:
ssum_to(N, 1) :-
    N =< 1, !.

ssum_to(N, Res) :-
    N1 is N-1,
    ssum_to(N1, Res1),
    Res is Res1 + N.

Cut ! and not

Where ! is used to confirm the choice of a rule, it can be replaced by


not/1.

not(X) succeeds when the goal X fails.

using not is considered to be good programming style:

but programs may become less efficient (why?)


there is a trade-off between readability and efficiency!

A variant with not:


nsum_to(1, 1).
nsum_to(N, Res) :-
    not(N =< 1),
    N1 is N-1,
    nsum_to(N1, Res1),
    Res is Res1 + N.

When not is used, there may be double work:


A :- B, C.
A :- not(B), D.

in the above, B is tried twice after backtracking.

The cut-fail combination

fail /0 is a built-in predicate.

When it is a goal, it fails and causes backtracking.

Using fail after the cut changes the backtracking behavior.

Example:

we are interested in average taxpayers,


foreigners are not average,
if the taxpayer is not foreigner, apply some general criteria.

average_taxpayer(X) :-
    foreigner(X), !, fail.
average_taxpayer(X) :-
    satisfies_general_criterion(X).

What if the cut ! isn't used?

then a foreigner that satisfies the general criterion will be considered
an average taxpayer.

The general criterion:

a person whose spouse earns more than 3000 is not average,

otherwise, a person is average if they earn between 2000 and 3000.

satisfies_general_criterion(X) :-
    spouse(X, Y),
    gross_income(Y, Inc),
    Inc > 3000,
    !, fail.

satisfies_general_criterion(X) :-
    gross_income(X, Inc),
    Inc < 3000,
    Inc > 2000.

Gross income:

pensioners with less than 500 have no gross income,


otherwise, gross income is the sum of the gross salary and the invest-
ment income.

gross_income(X, Y) :-
    receives_pension(X, P),
    P < 500,
    !, fail.

gross_income(X, Y) :-
    gross_salary(X, Z),
    investment_income(X, W),
    Y is Z + W.

not can be implemented with the cut-fail combination.

not(P) :- call(P), !, fail.
not(P).

Note though that Prolog will take issue with you trying to redefine essen-
tial predicates.

Replacing the cut in cut-fail situations

The cut can be replaced with not.

This replacement does not affect the efficiency in the cut-fail situations.

However, programs have to be rearranged:


average_taxpayer(X) :-
    not(foreigner(X)),
    not((spouse(X, Y),
         gross_income(Y, Inc),
         Inc > 3000)
        ), ...

Terminating a generate and test

Tic-tac-toe.

Natural number division:


divide(N1, N2, Result) :-
    is_nat(Result),
    Product1 is Result * N2,
    Product2 is (Result+1) * N2,
    Product1 =< N1, Product2 > N1,
    !.
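
Illustrative queries (this is division by generate-and-test, so it is simple rather than fast):

?- divide(7, 2, R).
R = 3.
?- divide(9, 3, R).
R = 3.

Without the final cut, a request for further solutions would make is_nat/1 generate ever larger candidates that can never pass the test, so the query would never terminate.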

Problems with the cut

Consider the example:


cappend([], X, X) :- !.
cappend([A|B], C, [A|D]) :-
    cappend(B, C, D).

?- cappend([1, 2, 3], [a, b, c], X).
X = [1, 2, 3, a, b, c].

?- cappend([1, 2, 3], X, [1, 2, 3, a, b, c]).
X = [a, b, c].

?- cappend(X, Y, [1, 2, 3, a, b, c]).
X = [],
Y = [1, 2, 3, a, b, c].

The variant of append with a cut works as expected for the first two
queries above. However, for the third, it only offers one solution (all the
others are cut!)

Consider:

number_of_parents(adam, 0) :- !.
number_of_parents(eve, 0) :- !.
number_of_parents(X, 2).

?- number_of_parents(eve, X).
X = 0.

?- number_of_parents(john, X).
X = 2.

?- number_of_parents(eve, 2).
true.

The first two queries work as expected.

The third query gives an unexpected answer. This is due to the fact that
the particular instantiation of the arguments does not match the special
condition where the cut was used.
In fact, here, the pattern that distinguishes between the special condition
and the general case is formed by both arguments together.

The predicate above can be fixed in two ways:

number_of_parents1(adam, N) :- !, N = 0.
number_of_parents1(eve, N) :- !, N = 0.
number_of_parents1(X, 2).

number_of_parents2(adam, 0) :- !.
number_of_parents2(eve, 0) :- !.
number_of_parents2(X, 2) :-
    X \= adam, X \= eve.
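
With the fixed versions, the problematic query from above now behaves as intended (illustrative queries):

?- number_of_parents1(eve, 2).
false.
?- number_of_parents2(eve, 2).
false.
?- number_of_parents2(john, X).
X = 2.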

The cut is a powerful construct and should be used with great care.

The advantages of using the cut can be major, but so are the dangers.

There are two types of cut:

green cuts: when no solutions are discarded by cutting,


red cuts: a part of the search space that contains solutions is cut
away.
Green cuts are harmless, whereas red cuts should be used with great care.

6.4 Reading and Exercises


Read: Chapter 4 of [Clocksin and Mellish, 2003].

Also read: Chapter 5, Section 5.1 of [Nilsson and Maluszynski, 2000].

Carry out the examples in Prolog.

Items of interest:

what is the effect of the cut predicate (!), guarded gate metaphor,
common uses of the cut: 1. confirming the use of a rule, 2. cut-fail
combination, 3. terminate a generate and test,
cut elimination (can it be done, does it cost in terms of computational
complexity?)
problems with cut (green cuts/red cuts).

7 Efficient Prolog
7.1 Declarative vs. Procedural Thinking
The procedural aspect of Prolog

While Prolog is described as a declarative language, one can see Prolog


clauses from a procedural point of view:
in(X, usa) :-
    in(X, mississippi).

The above can be seen:


from a declarative point of view: X is in the USA if X is in Missis-
sippi,
from a procedural point of view: To prove that X is in the USA,
prove X is in Mississippi, or To find X in USA, (it is sufficient to)
find them in Mississippi.
Procedural programming languages can also contain declarative aspects.
Something like
x = y + z;

can be read
declaratively, as the equation x = y + z,
procedurally: load y, load z, add them, store the result in x.

The need to understand the procedural/declarative aspects

The declarative/procedural aspects are not symmetrical: there are
situations where not understanding one aspect can lead to problems.
For procedural programs: A = (B + C) + D and A = B + (C +D) appear
to have equivalent declarative readings but:
imagine the biggest number that can be represented is 1000,
then for B = 501, C = 501, D = -3, the two expressions yield totally
different results!
The same can happen in Prolog. Declaratively, the following is correct:
ancestor(A, C) :-
    ancestor(A, B),
    ancestor(B, C).

However, ignoring its procedural meaning, this can lead to infinite loops
(when B and C are both unknown).

7.2 Narrow the search
The task of a Prolog programmer is to build a model of the problem and
to represent it in Prolog.
Knowledge about this model can improve performance significantly.

?- horse(X), gray(X). will find the answer much faster than ?- gray(X), horse(X).
in a model with 1000 gray objects and 10 horses.
Narrowing the search can be even more subtle:
set_equivalent(L1, L2) :-
    permute(L1, L2).

i.e. to find whether two lists are set-equivalent it is enough to see whether
they are permutations of each other. But for N element lists, there are N!
permutations (e.g. for 20 elements, about 2.4 * 10^18 possible permutations).

Now considering a faster program:


set_equivalent(L1, L2) :-
    sort(L1, L3),
    sort(L2, L3).

i.e. two lists are set equivalent if their sorted versions are the same. And
sorting can be done in N log N steps (e.g. approx. 86 steps for 20 element
lists).
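
An illustrative query; note that the standard sort/2 also removes duplicate elements, which is exactly what is wanted for set equivalence:

?- set_equivalent([b, a, c, a], [c, b, a]).
true.
?- set_equivalent([a, b], [a, c]).
false.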

7.3 Let Unification do the Work


When patterns are involved, unification can do some of the work that the
programmer may have to do.
E.g. consider variants the predicate that detects lists with 3 elements:


has_3_elements(X) :-
    length(X, N),
    N = 3.

has_3_elements([_, _, _]).

Also consider the predicate for swapping the first two elements from a list:
swap_first_2([A, B | Rest], [B, A | Rest]).

Letting unification work saves having to go through the whole list.
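
Illustrative queries for the pattern-based versions:

?- has_3_elements([a, b, c]).
true.
?- swap_first_2([a, b, c, d], L).
L = [b, a, c, d].

Both succeed by head unification alone, without any explicit traversal or arithmetic.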

7.4 Understand Tokenization
Atoms are represented in Prolog in a symbol table where each atom ap-
pears once - a process called tokenization.
Atoms in a program are replaced by their address in the symbol table.

Because of this:
f('What an awfully long atom this appears to be',
  'What an awfully long atom this appears to be',
  'What an awfully long atom this appears to be').

will actually take less memory than g(a, b, c)


Comparison of atoms can be performed very fast because of tokenization.

For example a \= b and aaaaaaaaa \= aaaaaaaab can both be done in the


same time, without having to parse the whole atom names.

7.5 Tail recursion


Continuations, backtracking points

Consider the following:


a :- b, c.
a :- d.

For ?- a., when b is called, Prolog has to save in memory:

the continuation, i.e. what has to be done after returning with success
from b (i.e. c),
the backtrack point, i.e. where can an alternative be tried in case of
returning with failure from b (i.e. d).

For recursive procedures the continuation and backtracking point have to


be remembered for each of the recursive calls.
This may lead to large memory requirements

Tail recursion

If a recursive predicate has no continuation, and no backtracking point,


Prolog can recognize this and will not allocate memory.
Such recursive predicates are called tail recursive (the recursive call is the
last in the clause and there are no alternatives).

They are much more efficient than the non-tail recursive variants.

The following is tail recursive:
test1(N) :- write(N), nl, NewN is N+1, test1(NewN).

In the above write writes (prints) the argument on the console and suc-
ceeds, nl moves on a new line and succeeds. The predicate will print
natural numbers on the console until the resources run out (memory or
number representations limit).

The following is not tail recursive (it has a continuation):


test2(N) :- write(N), nl, NewN is N+1, test2(NewN), nl.

When running this, it will run out of memory relatively soon.


The following is not tail recursive (it has a backtracking point):
test3(N) :- write(N), nl, NewN is N+1, test3(NewN).
test3(N) :- N < 0.

The following is tail recursive (the alternative clause comes before the
recursive clause so there is no backtracking point for the recursive call):
test3a(N) :- N < 0.
test3a(N) :- write(N), nl, NewN is N+1, test3a(NewN).

The following is not tail recursive (it has alternatives for predicates in
the recursive clause preceding the recursive call, so backtracking may be
necessary):
test4(N) :- write(N), nl, m(N, NewN), test4(NewN).

m(N, NewN) :- N >= 0, NewN is N + 1.
m(N, NewN) :- N < 0, NewN is (-1)*N.

Making recursive predicates tail recursive

If a predicate is not tail recursive because it has backtracking points, then


it can be made so by using the cut before the recursive call.
The following are now tail recursive:
test5(N) :- write(N), nl, NewN is N+1, !, test5(NewN).
test5(N) :- N < 0.

test6(N) :- write(N), nl, m(N, NewN), !, test6(NewN).

m(N, NewN) :- N >= 0, NewN is N + 1.
m(N, NewN) :- N < 0, NewN is (-1)*N.

Note that tail recursion can be indirect. The following is tail recursive:
test7(N) :- write(N), nl, test7a(N).
test7a(N) :- NewN is N+1, test7(NewN).

In the above we have mutual recursion, but note that test7a is just used
to rename part of the test7 predicate.

Summary: tail recursion

In Prolog, tail recursion exists when:

the recursive call is the last subgoal in the clause,


there are no untried alternative clauses,
there are no untried alternatives for any subgoal preceding the recursive
call in the same clause.

7.6 Let Indexing Help


Consider the program:
a(b).
a(c).

d(e).
d(f).

and the query ?- d(f).

Contrary to the expectation, most Prolog implementations will not have


to go through all the knowledge base.
Prolog uses indexing over the functor name and the first argument.

These indices will be stored as a hash table or something similar for fast
access.
Therefore, Prolog will find d(f) directly.

Using indexing can make predicates be tail recursive when they would not
be:
test8(0) :- write('Still going'), nl, test8(0).
test8(-1).

The second clause is not an alternative to the first because of indexing.


Note, however, that indexing works only when the first argument of the
predicate is instantiated.

7.7 How to Document Prolog Code
Consider some built-in predicates in Prolog, as presented in the help sec-
tion of the program:
append(?List1, ?List2, ?List3)

Succeeds when List3 unifies with the concatenation of List1 and List2. The predicate
can be used with any instantiation pattern (even three variables).
-Number is +Expr  [ISO]

True if Number has successfully been unified with the number Expr evaluates to. If
Expr evaluates to a float that can be represented using an integer (i.e., the value is
integer and within the range that can be described by Prolog's integer representation),
Expr is unified with the integer value.

The above examples use a notation (documentation) convention in Prolog:
when describing a predicate, use mode indicators for its arguments:

+ describes an argument that should already be instantiated when the
predicate is called,
- denotes an argument that is normally not instantiated until this
predicate instantiates it,
? denotes an argument that may or may not be instantiated,
@ is used by some programmers to indicate that the argument contains
variables that must not be instantiated.

Note that the above description does not guarantee what would happen if
an argument is used in another mode. For that matter, it does not even
guarantee the intended behaviour. A small example of the convention
applied to user code follows.
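A minimal sketch of the convention for one's own code (list_length/2 is an
illustrative name, not a built-in predicate):

%%  list_length(+List, -Length)
%   Length is the number of elements of List.
%   List should be a proper list when the predicate is called.
list_length([], 0).
list_length([_|Tail], Length) :-
    list_length(Tail, Length0),
    Length is Length0 + 1.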

7.8 Reading and Further Exercises


Read: the papers [Covington, 1989] and [Covington et al., 1997].
Try out the examples in Prolog.
Items of interest:
Procedural and declarative meaning of Prolog programs.
Narrowing the search space.
Using unification.
Understanding tokenization.
Avoiding string processing.
Tail recursion: recognizing tail recursion, reasons for considering tail
recursion (complexity issues).
Using indexing.

8 I/O with Prolog
There are two styles of I/O in Prolog:
Edinburgh style I/O is the legacy style, still supported by Prolog
implementations. It is relatively simple to use but has some limitations.
ISO I/O is the standard style, supported by all Prolog implementations.
There are some overlaps between the two styles.

8.1 Edinburgh style I/O


Writing terms

The built-in predicate write takes any Prolog term and displays it on the
screen.
The predicate nl moves the cursor to a new line.

?- write('Hello there'), nl, write('Goodbye').
Hello there
Goodbye
true.

Note that quoted atoms are displayed without quotes. The variant writeq
of write will also display the quotes.

?- write(X).
_G243
true.

?- write("some string").
[115, 111, 109, 101, 32, 115, 116, 114, 105, 110, 103]
true.

Note that Prolog displays the internal representation of terms: in particular,
the internal representation of an unbound variable (here _G243) and of a
double-quoted string (a list of character codes).

Reading terms

The predicate read accepts any Prolog term from the keyboard (typed in
Prolog syntax, followed by a period).

?- read(X).
|: hello.
X = hello.

?- read(X).
|: 'hello there'.
X = 'hello there'.

?- read(X).
|: hello there.
ERROR: Stream user_input:0:37 Syntax error: Operator expected

?- read(hello).
|: hello.
true.

?- read(hello).
|: bye.
false.

?- read(X).
|: mother(Y, ada).
X = mother(_G288, ada).

The read predicate succeeds if its argument can be unified with the term
given by the user (if this is a term). The examples above illustrate several
possible uses and situations.

File handling

The predicate see takes a file as argument; the effect is to open the file for
reading, such that Prolog gets its input from that file rather than from the
console.

The predicate seen closes the current input file; input now comes again
from the console.

?- see('myfile.txt'),
   read(X),
   read(Y),
   read(Z),
   seen.

When a file is opened, Prolog will keep track of the position of the cursor
in that file.

One can switch between several open files:

?- see('aaaa'),
   read(X1),
   see('bbbb'),
   read(X2),
   see('cccc'),
   read(X3),
   seen.

The predicate tell opens a file for writing and switches the output to it.
The predicate told closes the current output file and returns the output
to the console.

?- tell('myfile.txt'),
   write('Hello there'),
   nl,
   told.

Several files can be opened and written into:

?- tell('aaaa'),
   write('first line of aaaa'), nl,
   tell('bbbb'),
   write('first line of bbbb'), nl,
   tell('cccc'),
   write('first line of cccc'), nl,
   told.
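Putting these predicates together, a minimal sketch of a predicate that copies
the first term of one file into another (the predicate and file names are
illustrative; the input file is assumed to contain at least one term ended by
a period):

copy_first_term(InFile, OutFile) :-
    see(InFile),        % input now comes from InFile
    read(Term),         % read one Prolog term
    seen,               % input comes from the console again
    tell(OutFile),      % output now goes to OutFile
    writeq(Term), write('.'), nl,
    told.               % output goes to the console again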

Character level I/O

The predicate put writes one character (given as the integer ASCII code of
the character).

?- put(42).
*
true.

The predicate get reads one character from the default input (console).

?- get(X).
|: %
X = 37.

In SWI-Prolog, put can also handle non-printing characters:

?- write(hello), put(8), write(bye).
hellbye
true.

Complete Information: SWI-Prolog Manual

For the exact details of the Edinburgh style I/O predicates in SWI-Prolog,
consult [Wielemaker, 2008] (also available in SWI-Prolog by calling ?- help.).

8.2 ISO I/O

Streams

ISO standard I/O in Prolog is based on the notion of a stream (open files
or file-like objects).
ISO I/O predicates are provided for:
opening and closing streams in different modes,
inspecting the status of a stream, as well as other information.
Reading and writing are done on streams.
There are two special streams that are always open: user_input and user_output.

Opening streams

The predicate open(FileName, Mode, Stream, Options) opens a stream, where:
FileName indicates the file name (implementation and OS dependent),
Mode is one of read, write, append,
Stream is a handle for the file,
Options is a (possibly empty) list of options. Options include:
* type(text) (the default) or type(binary),
* reposition(true) or reposition(false) (the default), indicating
whether it is possible to skip back or forward to specified positions,
* alias(Atom), a name (atom) for the stream,
* the action taken when reading past the end of the stream:
eof_action(error) - raise an error condition, eof_action(eof_code) -
return a special end-of-file code, eof_action(reset) - examine the file
again (in case it was updated, e.g., by another concurrent process).

Example:

test :-
    open('file.txt', read, MyStream, [type(text)]),
    read_term(MyStream, Term, []),
    close(MyStream), write(Term).

Closing streams

The predicate close(Stream, Options) closes Stream with Options;
close(Stream) is the version without options.
Options include force(false) (the default) and force(true) - even if there is
an error (e.g. the file was on a removable storage device which was removed),
the file is considered closed, without raising an error.

Stream properties

The predicate stream_property(Stream, Property) can be used to get
properties like:
file_name(...),
mode(M),
alias(A),
etc.; consult the documentation [Wielemaker, 2008] for the rest of the options.
Example:

?- stream_property(user_input, mode(What)).
What = read.

Reading terms

Predicates for reading terms:
read_term(Stream, Term, Options),
read_term(Term, Options), using the current input stream,
read(Stream, Term), like the above, without the options,
read(Term), like the above, from the current input.

Read about the Options in the documentation [Wielemaker, 2008].

The following example illustrates the use of the variable_names, variables
and singletons options:

?- read_term(Term,
       [variable_names(Vars), singletons(S), variables(List)]).
|: f(X, X, Y, Z).
Term = f(_G359, _G359, _G361, _G362),
Vars = [X = _G359, Y = _G361, Z = _G362],
S = [Y = _G361, Z = _G362],
List = [_G359, _G361, _G362].

Writing terms

Predicates for writing terms include:
write_term(Stream, Term, Options),
write_term(Term, Options),
write(Stream, Term),
write(Term),

the variants being analogous to the ones for reading terms.

For options and other predicates for writing terms, consult the documentation.
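For example, the quoted(true) option makes write_term behave like writeq,
printing atoms that need quotes with the quotes:

?- write_term('hello there', [quoted(true)]).
'hello there'
true.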

Other I/O predicates

Other I/O predicates include predicates for reading/writing characters and
bytes:
get_char, peek_char, put_char, put_code, get_code, peek_code, get_byte,
peek_byte, put_byte.

Other predicates:
current_input, current_output, set_input, set_output, flush_output,
at_end_of_stream, nl, etc.

Consult the documentation for the details of the syntax.

8.3 Reading and Further Exercises


Read: Sections 2.2, 2.3, 2.6, 2.10, 2.12, A.7 of [Covington et al., 1997].

Try out the examples in Prolog.

Items of interest:

Edinburgh I/O, limitations.


ISO I/O: streams, term I/O, other I/O.

9 Defining New Operators

Operators in Prolog

The usual way to write a Prolog predicate is to put it in front of its
arguments:
functor(arg1, arg2, ...)

However, there are situations where a different position may make
programs easier to understand:
X is_father_of Y

In Prolog, one can define new operators like the one above.

The following have to be specified:

the position: whether the operator is prefix, infix or postfix,
the precedence: to decide which operator applies first (e.g. is 2+3*4
(2+3)*4 or 2+(3*4)?); operators with lower precedence bind more tightly;
precedences are between 1 and 1200,

the associativity: e.g. is 8/2/2 (8/2)/2 or is it 8/(2/2)?
Note that logic predicates (i.e. those expressions that evaluate to
true or false) are not associative in general. Consider
3 = 4 = 3, and suppose it were left associative;
then (3 = 4) = 3 evaluates to false = 3, which changes the
type of the arguments. (See also the parsing example below.)
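One can check how Prolog parses an expression under the current precedences
and associativities by printing it in canonical (functional) notation, e.g. with
write_canonical:

?- write_canonical(2+3*4).
+(2,*(3,4))
true.

?- write_canonical(8/2/2).
/(/(8,2),2)
true.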

Operator syntax specifiers

Specifier   Meaning
fx          Prefix, not associative.
fy          Prefix, right-associative.
xf          Postfix, not associative.
yf          Postfix, left-associative.
xfx         Infix, not associative (like =).
xfy         Infix, right-associative (like the comma in compound goals).
yfx         Infix, left-associative (like +).
Commonly predefined Prolog operators

Priority   Specifier   Operators
1200       xfx         :-
1200       fx          :-  ?-
1100       xfy         ;
1050       xfy         ->
1000       xfy         ,
900        fy          not
700        xfx         =  \=  ==  \==  @<  is  =..  =<
500        yfx         +  -
400        yfx         *  /  //  mod
200        xfy         ^
200        fy          -

Example

% note the syntax of declaring the new operator:

:- op(100, xfx, is_father_of).

michael is_father_of kathy.
X is_father_of Y :- male(X), parent(X, Y).

?- X is_father_of kathy.

X = michael.

9.1 Reading and Further Exercises
Read: Section 6.6 of [Covington et al., 1997].

Try out the examples in Prolog.

Items of interest:

Defining operators.

Part II
The Theoretical Basis of Logic
Programming
10 Logical Background
10.1 Predicate logic
We review here (first order) predicate logic:

the syntax,
the semantics,
illustrate some difficulties of the semantic evaluation of truth in first
order logic,
review some results that deal with this difficulty.

Syntax of first order predicate logic

The vocabulary of the language contains the symbols from which expressions
of the language are built:
Reserved symbols:
* the parentheses ( and ) and the comma,
* the connectives ¬, ∧, ∨, →, ↔,
* the quantifiers ∀, ∃.
The set of variables V (a countable set).
The set of language symbols L:
* F - function symbols (each with their own arity),
* P - predicate symbols (with arity),
* C - constant symbols.

Example language (symbols): L = {{+/2, -/1}, {</2, |/2}, {0, 1}}. We
use a notation similar to Prolog to indicate the arity of symbols.

The expressions of (first order) predicate logic:

Terms:
* variables v ∈ V are terms,
* constants c ∈ C are terms,
* if f/n ∈ F and t1, ..., tn are terms, then so is f(t1, ..., tn).
Formulae:
* if p/n ∈ P and t1, ..., tn are terms, then p(t1, ..., tn) is an atomic
formula,
* if F, G are formulae, then ¬F, F ∧ G, F ∨ G, F → G, F ↔ G
are (compound) formulae,
* if x ∈ V and F is a formula, then ∀xF, ∃xF are (quantified)
formulae (the universally and existentially quantified formulae,
respectively).

Semantics of first order predicate logic

The semantics of first order logic describe the meaning of expressions in
the language.
Such a language is used to describe:

a domain of objects,
relations between the objects (or properties of the objects),
processes or functions that produce new objects from other objects.

To find (compute) the meaning of an expression, one must first define an
interpretation of the symbols:

constants are interpreted as objects in the domain described by the
language,
function symbols are interpreted as processes (functions) in the domain
described by the language,
predicate symbols are interpreted as relations/properties between/of
objects in the domain described by the language.

Consider the language presented previously, L = {{+/2, -/1}, {</2, |/2}, {0, 1}},
and let us consider two interpretations of this language:

I1, an interpretation in the natural numbers:
* I1(0) = seven,
* I1(1) = zero,
* I1(+) = multiplication,
* I1(-) = factorial,
* I1(<) = smaller than,
* I1(|) = divides.
I2, an interpretation in the domain of strings:
* I2(0) = the empty string,
* I2(1) = one,
* I2(+) = concatenation,
* I2(-) = reverse,
* I2(<) = substring,
* I2(|) = sorted version.

Note that interpretation shows the correspondence between the name of


a concept (constant, function symbol, predicate symbol) and the concept
described by that name.

Once an interpretation has been defined, one can compute the value of
an expression E under the interpretation I, denoted valI(E) (i.e. the
meaning of the expression under the interpretation), in the following way:

The value of terms under an interpretation:

In general, terms will evaluate to objects in the universe of discourse.

If c ∈ C, valI(c) = I(c).
If v ∈ V, valI(v) is not defined, unless the variable v is assigned a
value. I.e. the value of expressions containing free variables cannot
be determined unless the variables have values assigned to them.
If f(t1, ..., tn) is a term, then

valI(f(t1, ..., tn)) = I(f)(valI(t1), ..., valI(tn)).

The value of formulae under an interpretation:

Formulae will evaluate to true or false (but not both).

For atomic formulae,

valI(p(t1, ..., tn)) = I(p)(valI(t1), ..., valI(tn)).

For compound formulae:
* valI(¬F) = true iff valI(F) = false.
* valI(F ∧ G) = true iff valI(F) = true and valI(G) = true.
* valI(F ∨ G) = true iff valI(F) = true or valI(G) = true (at least
one of them is true).
* valI(F → G) = false iff valI(F) = true and valI(G) = false.
* valI(F ↔ G) = true iff valI(F) = valI(G).
For quantified formulae:
* valI(∀xF) = true iff for all values of x from the domain, valI(F) = true.
* valI(∃xF) = true iff for some value of x from the domain, valI(F) = true.

For example, consider I1 as defined above:

valI1(-(0 + 1)) =
I1(-)(valI1(0 + 1)) =
factorial(I1(+)(valI1(0), valI1(1))) =
factorial(multiplication(seven, zero)) =
factorial(zero) =
one.

Validity, satisfiability, unsatisfiability

We are interested in the meaning of formulae, in particular:

Whether a formula is valid, i.e. true under all possible interpretations.
Whether a formula is satisfiable, i.e. there is an interpretation such
that the formula is true.
Whether a formula is unsatisfiable, i.e. the formula is false under
all possible interpretations.
Whether two formulae are logically equivalent, i.e. the formulae
have the same meaning under all possible interpretations (we denote
this F1 ≡ F2).
Whether a formula is a logical consequence of a set of other formulae,
i.e. the formula is true in all interpretations in which all formulae
in the set are true (we denote this F1, ..., Fn ⊨ G).

Using these notions in practice is very difficult: the number of possible
interpretations for a language is infinite. Checking the value of an expression
under all possible interpretations is therefore not practical.
If a formula is (or a set of formulae are) true under an interpretation in a
domain, then that domain is called a model of the formula(e).

10.2 Herbrand's Theorem

Herbrand Universe

Fortunately, the difficulty represented by the immense number of possible
interpretations of a language can be overcome.
We will define a domain and an interpretation that capture all the
properties of all potential domains and interpretations.
Checking satisfiability (and validity) of a formula (set) can then be done by
just checking the evaluation under a certain interpretation into this special
universe.

Let L be a language containing the constant symbols C, function symbols
F and predicate symbols P. Let F be a formula over L.

The Herbrand universe H corresponding to the language L (or corresponding
to the formula F) is defined in the following way:
If c ∈ C then c ∈ H.
If t1, ..., tn ∈ H and f/n ∈ F then f(t1, ..., tn) ∈ H.
Note that if C = ∅ then an arbitrary constant is added to the Herbrand
universe H.
The Herbrand universe is the set of ground terms that can be formed from
the constants and function symbols of the language.
The Herbrand base B of the language L or the formula F is the set of
ground atoms that can be formed from the predicate symbols in P and
terms in H.
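For example, for a language with C = {a}, F = {f/1} and P = {p/2}, the
Herbrand universe is H = {a, f(a), f(f(a)), ...} and the Herbrand base is
B = {p(a, a), p(a, f(a)), p(f(a), a), ...}; both are infinite as soon as the
language has at least one function symbol.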

A Herbrand interpretation IH for the language L is an interpretation
whose domain is the Herbrand universe H and whose symbols are interpreted
as themselves:
If c ∈ C, IH(c) = c.
If f ∈ F, IH(f) = f.
If p ∈ P, IH(p) = p.
A Herbrand model for a formula (set) F is a Herbrand interpretation
that satisfies F. A Herbrand model can be identified with a subset of the
Herbrand base, namely the subset of ground atoms p(t1, ..., tn) for which

IH(p(t1, ..., tn)) = true.

Herbrand's Theorem

Several remarkable results hold.

Theorem 1. Let F be a formula. F has a model iff it has a Herbrand
model.

Theorem 2 (Herbrand's theorem (semantic form)). Let F be a formula
(set). F is unsatisfiable iff a formula built from a finite set of ground
instances of subformulae of F is unsatisfiable.

Theorem 3 (Herbrand's theorem (syntactic form)). A formula F is provable
iff a formula built from a finite set of ground instances of subformulae
of F is provable in propositional logic.

Herbrand's theorem (semantic form) tells us that we can reduce the question
of unsatisfiability in predicate logic to the question of unsatisfiability
in propositional logic.
For propositional logic, the resolution method is used to decide the
question of satisfiability. See [Craciun, 2010] for details.
For using resolution in propositional logic, propositional formulae are
written in Conjunctive Normal Form (CNF).
To use Herbrand's theorem together with propositional resolution, one
needs a similar transformation for predicate logic.

10.3 Clausal Form of Formulae

A literal in predicate logic is an atomic formula or the negation of an
atomic formula.
A formula of predicate logic is in conjunctive normal form (CNF) iff
it is a conjunction of disjunctions of literals.
A formula of predicate logic is in prenex conjunctive normal form
(PCNF) iff it is of the form

Q1x1 ... QnxnM,

where Qi is a quantifier (either ∀ or ∃), for i = 1...n, and M is a quantifier-
free formula in CNF. Q1x1 ... Qnxn is called the prefix and M is called
the matrix.
A formula is closed iff it has no free variables (i.e. all variables are bound
by a quantifier).
A closed formula is in clausal form iff it is in PCNF and its prefix consists
only of universal quantifiers.
A clause is a disjunction of literals.
Example: The following formula is in clausal form:

∀x∀y∀z ((p(f(x)) ∨ q(y, z)) ∧ (p(x) ∨ q(y, f(z)) ∨ r(x, y)) ∧ (q(x, f(z)) ∨ r(f(y), f(z)))).

Notation: Since the prefix consists only of universal quantifiers, the quantifiers
can be omitted. The clausal form can be represented in the following manner
(clauses as sets of literals, formulae in clausal form as sets of clauses):

{p(f(x)), q(y, z)},
{p(x), q(y, f(z)), r(x, y)},
{q(x, f(z)), r(f(y), f(z))}.

Notation: Let F, G be formulae. We write F ≈ G if F and G are
equisatisfiable (i.e. F is satisfiable iff G is satisfiable).

Theorem 4 (Skolem). Let F be a closed formula. Then there exists a formula
F' in clausal form such that F ≈ F'.

Skolem's theorem can be used to decide whether a formula is unsatisfiable,
provided a method for deciding unsatisfiability of formulae in clausal form
exists. This is the subject of the next chapter.

Skolemization Algorithm
IN: closed formula F.
OUT: formula F' in clausal form such that F ≈ F'.
Running example:

∀x(p(x) → q(x)) → (∀x p(x) → ∀x q(x))

1: Rename the bound variables such that no variable appears in the scope of
two different quantifiers.

∀x(p(x) → q(x)) → (∀y p(y) → ∀z q(z))

2: Eliminate all the equivalence and implication connectives (↔, →).

¬∀x(¬p(x) ∨ q(x)) ∨ (¬∀y p(y) ∨ ∀z q(z))

3: Push the negations inside the parentheses, until negations apply only to
atomic formulae. Use the equivalences

¬¬F ≡ F,
¬(F ∨ G) ≡ (¬F ∧ ¬G),
¬(F ∧ G) ≡ (¬F ∨ ¬G),
¬(∀xF[x]) ≡ ∃x¬F[x],
¬(∃xF[x]) ≡ ∀x¬F[x].

∃x(p(x) ∧ ¬q(x)) ∨ ∃y¬p(y) ∨ ∀z q(z)

4: Extract the quantifiers from the matrix. Since the variables have been
renamed, the following equivalences can be applied: A op QxB[x] ≡ Qx(A op B[x])
and QxB[x] op A ≡ Qx(B[x] op A), where Q is one of ∀, ∃ and op is one of
∧, ∨.

∃x∃y∀z((p(x) ∧ ¬q(x)) ∨ ¬p(y) ∨ q(z))

5: Use the distributive laws P ∨ (Q ∧ R) ≡ (P ∨ Q) ∧ (P ∨ R),
(P ∧ Q) ∨ R ≡ (P ∨ R) ∧ (Q ∨ R) to transform the matrix into CNF.

∃x∃y∀z((p(x) ∨ ¬p(y) ∨ q(z)) ∧ (¬q(x) ∨ ¬p(y) ∨ q(z)))

6: Skolemization
If the prefix is of the form ∀y1 ... ∀yn ∃x, let f be a new n-ary function
symbol. Delete ∃x from the prefix and replace all occurrences of x in
the matrix by f(y1, ..., yn). The function f is called a Skolem function.
If there are no universal quantifiers before ∃x in the prefix, let a
be a new constant. Eliminate ∃x from the prefix and replace every
occurrence of x in the matrix with a. The constant a is a Skolem constant.

∀z((p(a) ∨ ¬p(b) ∨ q(z)) ∧ (¬q(a) ∨ ¬p(b) ∨ q(z)))

Note that steps 1-5 preserve logical equivalence. It is relatively easy to show
that step 6 preserves satisfiability. For details, see [Ben-Ari, 2001].

10.4 Reading and Further Exercises


Read: Chapter 7, sections 7.1-7.4 of [Ben-Ari, 2001].

Items of interest (no proofs required):

Predicate logic language: syntax, semantics (interpretation, model).

Herbrand universe, Herbrand base, Herbrand interpretation.
Herbrand's theorem, the significance of Herbrand's theorem.
Clausal form of first order formulae: Skolemization (Skolem constants,
Skolem functions), transformation algorithm.

11 Resolution
11.1 Ground Resolution
Herbrand's theorem reduces the problem of establishing unsatisfiability of
a formula (set) to the problem of establishing unsatisfiability of a finite
set of ground formulae.
For practical purposes, given a finite set of ground formulae, one can
rename the distinct ground atoms by distinct propositional atoms and
thus answer the question of unsatisfiability by propositional resolution.
See [Craciun, 2010] for details on propositional resolution.
However, this approach is not practical: there is no indication how to find
the finite set of ground formulae: the set of possible ground instantiations
is both unbounded and unstructured.

11.2 Substitution

A substitution of terms for variables is a set

{x1 ← t1, ..., xn ← tn}

where, for i = 1...n, the xi are distinct variables and the ti are terms such
that xi and ti are distinct. Substitutions will be denoted by lowercase Greek
letters (θ, σ, τ, ...). The empty substitution is denoted by ε.
An expression is a term or a formula (in particular a literal, clause, or set
of clauses).

Let E be an expression and θ = {x1 ← t1, ..., xn ← tn}. An instance of
E (or the result of applying θ to E), Eθ, is the expression obtained by
simultaneously replacing every occurrence of xi in E by ti.
Example: let E = p(x) ∨ q(f(y)) and θ = {x ← y, y ← f(a)}. Then

Eθ = p(y) ∨ q(f(f(a))).

Let θ = {x1 ← t1, ..., xn ← tn} and σ = {y1 ← s1, ..., yk ← sk} be
substitutions. Let X, Y be the sets of variables from θ and σ, respectively.
The composition of θ and σ, θσ, is the substitution

θσ = {xi ← tiσ | xi ∈ X, xi ≠ tiσ} ∪ {yj ← sj | yj ∈ Y, yj ∉ X},

in other words, apply the substitution σ to the terms ti (dropping any pair
that would collapse into xi ← xi), then append the pairs from σ whose
variables do not already appear in θ.

Example: let

θ = {x ← f(y), y ← f(a), z ← u},
σ = {y ← g(a), u ← z, v ← f(f(a))},

then:

θσ = {x ← f(g(a)), y ← f(a), u ← z, v ← f(f(a))}.

Let E be an expression and θ, σ substitutions. Then E(θσ) = (Eθ)σ.

Let θ, σ, τ be substitutions. Then (θσ)τ = θ(στ).

11.3 Unification
Unifiers

Consider two nonground literals, p(f(x), g(y)) and p(f(f(a)), g(z)):

the substitution

{x ← f(a), y ← f(g(a)), z ← f(g(a))}

applied to both literals will make them identical (will unify them);

the same effect is obtained when applying the substitutions

{x ← f(a), y ← a, z ← a},
{x ← f(a), z ← y}.

Given a set of literals, a unifier is a substitution that makes the atoms
of the set identical. A most general unifier (mgu) is a unifier θ such
that any other unifier σ can be obtained from θ by a further substitution
τ such that σ = θτ.
Note that not all literals are unifiable: if the predicate symbols are
different, the literals cannot be unified. Also, consider the case of p(x) and
p(f(x)). Since the substitution has to replace the variable x in both literals
at the same time, the terms x and f(x) cannot be made identical, and the
unification will fail.
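These notions are exactly what Prolog's =/2 computes (a small illustration in
SWI-Prolog; by default = does not perform the occurs check, so the last query
uses unify_with_occurs_check/2):

?- p(f(X), g(Y)) = p(f(f(a)), g(Z)).
X = f(a),
Y = Z.

?- p(X) = q(X).
false.

?- unify_with_occurs_check(X, f(X)).
false.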

Unification algorithm

Note that the unifiability of the literals p(f(x), g(y)) and p(f(f(a)), g(z))
can be expressed as a set of term equations:

f(x) = f(f(a))
g(y) = g(z).

A set of term equations is in solved form iff:

all equations are of the form xi = ti, where the xi are variables,
each variable xi that appears on the left hand side of an equation does
not appear elsewhere in the set.

A set of equations in solved form defines a substitution in a natural way,
by turning each equation xi = ti into an element of the substitution, xi ← ti.

Unification Algorithm
INPUT: A set of term equations.
OUTPUT: A set of term equations in solved form, or "not unifiable".

Perform the following transformations on the set of equations as long as
any of them can still be performed:
1. Transform t = x into x = t, where x is a variable and t is not.
2. Erase equations of the form x = x, where x is a variable.
3. Let t' = t'' be an equation where t', t'' are not variables. If the
outermost (function) symbols of t' and t'' are not identical, terminate
and answer "not unifiable". Otherwise, if t' is of the form
f(t'1, ..., t'k) and t'' is of the form f(t''1, ..., t''k), replace the equation
f(t'1, ..., t'k) = f(t''1, ..., t''k) by the k equations

t'1 = t''1, ..., t'k = t''k.

4. Let x = t be a term equation such that x has another occurrence in the
set of term equations. If x occurs in t (occurs check!), terminate and
answer "not unifiable". Otherwise, transform the equation set by
replacing each occurrence of x in the other equations by t.
Example 5 (Unification, from [Ben-Ari, 2001]). Consider the following two
equations:
g(y) = x
f (x, h(x), y) = f (g(z), w, z).

Apply rule 1 to the first equation and rule 3 to the second equation:

x = g(y)
x = g(z)
h(x) = w
y = z.

Apply rule 4 on the second equation to replace the other occurrences of x:

g(z) = g(y)
x = g(z)
h(g(z)) = w
y = z.

Apply rule 3 to the first equation

z=y
x = g(z)
h(g(z)) = w
y = z.

Apply rule 4 on the last equation to replace y by z in the first equation,


then erase the resulting z = z using rule 2:

x = g(z)
h(g(z)) = w
y = z.

Transform the second equation by rule 1:

x = g(z)
w = h(g(z))
y = z.

The algorithm terminates successfully. The resulting substitution

{x ← g(z), w ← h(g(z)), y ← z}

is the most general unifier of the initial set of equations.


Theorem 6 (Correctness of the unification algorithm). The unification algo-
rithm terminates. If the algorithm terminates with the answer "not unifiable",
there is no unifier for the set of term equations. If it terminates successfully,
the resulting set of equations is in solved form and it defines an mgu

{x1 ← t1, ..., xn ← tn}

of the set of equations.

Proof. See [Ben-Ari, 2001], pp. 158.

11.4 Resolution
Ground resolution was not practical.

It turns out that a practical version of resolution is possible, using unifi-
cation.
Recall the notions of literal, clause, clause sets introduced in Subsec-
tion 10.3.

Notation. Let L be a literal. We denote by Lc the complementary
literal (i.e. L and Lc are opposite: one is the negation of the other).

Definition 7 (General resolution step). Let C1, C2 be clauses with no variables
in common. Let L1 ∈ C1 and L2 ∈ C2 be literals in the clauses such that L1
and L2c can be unified by an mgu σ. Then C1 and C2 are said to be clashing
clauses, which clash on the literals L1 and L2, and the resolvent of C1 and C2
is the clause:

Res(C1, C2) = (C1σ − {L1σ}) ∪ (C2σ − {L2σ}).

Example 8 (Resolvent of two clauses).

Consider the clauses:

p(f(x), g(y)) ∨ q(x, y)        ¬p(f(f(a)), g(z)) ∨ q(f(a), g(z))

L1 = p(f(x), g(y)) and L2c = p(f(f(a)), g(z)) can be unified with the mgu
σ = {x ← f(a), y ← z}, and the resolvent of the clauses is:

q(f(a), z) ∨ q(f(a), g(z)).

Note that the requirement for clauses to have no variables in common does
not impose any real restrictions on the clause set. Remember that clauses
are implicitly universally quantified, so changing the name of a variable
does not change the meaning of the clause set.

General Resolution Procedure.

INPUT: A set of clauses S.
OUTPUT: "S is satisfiable" or "S is not satisfiable". The algorithm may also
not terminate.

Start with S0 = S.

Repeat

Choose clashing clauses C1, C2 ∈ Si and let C = Res(C1, C2).
If C = □ (the empty clause), terminate with the answer "not satisfiable".
Otherwise, Si+1 = Si ∪ {C}.

until Si+1 = Si

Return "satisfiable".

Note that the algorithm above may not terminate; it is not a decision
procedure (indeed, that would not be expected, since first order predicate
logic is undecidable). One reason for nontermination is the existence of
infinite models.

Example 9 (Resolution refutation, from [Ben-Ari, 2001]). Lines 1-7 contain
the initial clause set. The rest of the lines represent the execution of the
resolution algorithm. On each line we have the resolvent, the mgu used and
the numbers of the parent clauses (the clashing clauses that were resolved).

1.  ¬p(x) ∨ q(x) ∨ r(x, f(x))
2.  ¬p(x) ∨ q(x) ∨ s(f(x))
3.  t(a)
4.  p(a)
5.  ¬r(a, y) ∨ t(y)
6.  ¬t(x) ∨ ¬q(x)
7.  ¬t(x) ∨ ¬s(x)

8.  ¬q(a)                  x ← a       3, 6
9.  q(a) ∨ s(f(a))         x ← a       2, 4
10. s(f(a))                            8, 9
11. q(a) ∨ r(a, f(a))      x ← a       1, 4
12. r(a, f(a))                         8, 11
13. t(f(a))                y ← f(a)    5, 12
14. ¬s(f(a))               x ← f(a)    7, 13
15. □                                  10, 14
Example 10 (Resolution refutation with variable renaming, from [Ben-Ari, 2001]).
The first four clauses represent the initial clause set.

1.     ¬p(x, y) ∨ p(y, x)
2.     ¬p(x, y) ∨ ¬p(y, z) ∨ p(x, z)
3.     p(x, f(x))
4.     ¬p(x, x)
3'.    p(x', f(x'))                 Rename 3
5.     p(f(x), x)                   σ1 = {y ← f(x), x' ← x}       1, 3'
3''.   p(x'', f(x''))               Rename 3
6.     ¬p(f(x), z) ∨ p(x, z)        σ2 = {y ← f(x), x'' ← x}      2, 3''
5'''.  p(f(x'''), x''')             Rename 5
7.     p(x, x)                      σ3 = {z ← x, x''' ← x}        6, 5'''
4''''. ¬p(x'''', x'''')             Rename 4
8.     □                            σ4 = {x'''' ← x}              7, 4''''

The substitution resulting from composing all intermediary substitutions is:

θ = σ1σ2σ3σ4 = {y ← f(x), z ← x, x' ← x, x'' ← x, x''' ← x, x'''' ← x}

Restricted to the variables of the initial clause set, the resulting substitution
is:

{y ← f(x), z ← x}

Theorem 11 (Soundness of resolution).

If the empty clause □ is derived during the general resolution procedure,
then the set of clauses is unsatisfiable.

Theorem 12 (Completeness of resolution).
If a set of clauses is unsatisfiable, then the empty clause can be derived
by the resolution procedure.

For details on the proofs of these theorems, see [Ben-Ari, 2001].

Some remarks on the resolution procedure

Note that the resolution procedure is nondeterministic: which clashing
clauses to choose and which clashing literals to resolve on is not specified.

Good choices will lead to the result quickly, while bad choices may lead
to the algorithm not terminating.
The completeness theorem says that if the clause set is unsatisfiable then a
resolution refutation (a derivation of the empty clause) exists, i.e. one that
makes good choices. Variants that make bad choices may miss the solution.

11.5 Reading and Further Exercises


Read: Chapter 7, sections 7.5-7.8 of [Ben-Ari, 2001].

Items of interest:

Ground resolution, impracticality of ground resolution (reasons).


Substitutions: compositions of substitutions, unifiers, most general
unifiers.
Unification procedure.
General resolution, completeness of resolution (no proof).

12 Logic Programming
12.1 Formulas as programs
Consider a fragment of the theory of strings, with the binary function
symbol · (concatenation) and binary predicates substr, prefix, suffix,
described by the following axioms:

1. ∀x substr(x, x)
2. ∀x∀y∀z ((substr(x, y) ∧ suffix(y, z)) → substr(x, z))
3. ∀x∀y suffix(x, y·x)
4. ∀x∀y∀z ((substr(x, y) ∧ prefix(y, z)) → substr(x, z))
5. ∀x∀y prefix(x, x·y)

The procedural interpretation of these formulae is:

1. x is a substring of x.
2. To check if x is a substring of z, find a suffix y of z and check if x is
a substring of y.
3. x is a suffix of y·x.
4. To check if x is a substring of z, find a prefix y of z and check if x is
a substring of y.
5. x is a prefix of x·y.

The clausal form of these axioms is:

1. substr(x, x)
2. ¬substr(x, y) ∨ ¬suffix(y, z) ∨ substr(x, z)
3. suffix(x, y·x)
4. ¬substr(x, y) ∨ ¬prefix(y, z) ∨ substr(x, z)
5. prefix(x, x·y)

Now consider a refutation of ¬substr(a·b·c, a·a·b·c·c):

6.  ¬substr(a·b·c, a·a·b·c·c)
7.  ¬substr(a·b·c, y1) ∨ ¬suffix(y1, a·a·b·c·c)      6, 2
8.  ¬substr(a·b·c, a·b·c·c)                          7, 3
9.  ¬substr(a·b·c, y2) ∨ ¬prefix(y2, a·b·c·c)        8, 4
10. ¬substr(a·b·c, a·b·c)                            9, 5
11. □                                                10, 1

i.e. we have shown by resolution that substr(a·b·c, a·a·b·c·c) is a logical
consequence of the axioms.

Another way to use resolution is to check whether

∃w substr(w, a·a·b·c·c).

Denoting the conjunction of the axioms by the formula Axioms, we must
show that the following formula is unsatisfiable:

Axioms ∧ ¬(∃w substr(w, a·a·b·c·c)).

But this is

Axioms ∧ ∀w ¬substr(w, a·a·b·c·c),

which can be written in a straightforward manner into clausal form (so
resolution is applicable).

A resolution refutation works with the substitution {w ← a·b·c}: not only
was the logical consequence proved, but a value for w was computed such
that substr(w, a·a·b·c·c) is true. In this sense, the string axioms
constitute a program that computes answers to questions. However, the
program is highly nondeterministic. Choices that can be made during the
execution of the program (by resolution) influence the result and even the
answer.
The nondeterministic formalism can be turned into a practical logic
programming language by specifying rules for making choices.
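In Prolog, the same idea can be written directly over lists, with append/3
playing the role of concatenation (a minimal sketch; substr/2 is an
illustrative name): x is a substring of z if some suffix y of z has x as a
prefix, mirroring axioms 2-5.

substr(X, Z) :-
    append(_Front, Y, Z),   % Y is a suffix of Z
    append(X, _Back, Y).    % X is a prefix of Y

% ?- substr([a,b,c], [a,a,b,c,c]).
% true .
% ?- substr(W, [a,b]).     enumerates the substrings of [a,b]
% W = [] ;
% W = [a] ;
% W = [a, b] ;
% ...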

12.2 Horn Clauses

Definition 13 (Horn Clauses).

A Horn clause is a clause A ← B1, ..., Bn with at most one positive
literal. The positive literal A is called the head and the negative literals
Bi form the body. A unit positive Horn clause A ← is called a fact,
and a Horn clause with no positive literal, ← B1, ..., Bn, is called a goal
clause. A Horn clause with one positive literal and one or more negative
literals is called a program clause.

Note that the notation A ← B1, ..., Bn is equivalent to (B1 ∧ ... ∧ Bn) → A,
which in turn is equivalent to ¬B1 ∨ ... ∨ ¬Bn ∨ A.

The notation used in the first part of this lecture (in particular by the
Prolog syntax) for program clauses is A :- B1, ..., Bn. From now on we
will use this notation (a small illustration follows).
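For instance (with illustrative predicate names), the three kinds of Horn
clauses look as follows in this notation:

parent(tom, bob).                  % fact: a unit positive clause
ancestor(X, Y) :- parent(X, Y).    % program clause: head and body
?- ancestor(tom, Z).               % goal clause: no positive literal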

Definition 14 (Logic programs).

A set of non-goal Horn clauses whose heads have the same predicate is
called a procedure. A set of procedures is a (logic) program. A procedure
composed only of ground facts is called a database.

Definition 15.

A computation rule is a rule for choosing a literal in a goal clause to
resolve upon. A search rule is a rule for choosing a clause to resolve with
the chosen literal in the goal clause.

The difference between logic programming and imperative programming
lies in the control of the program:
in imperative programming the control of the program is given explicitly
as part of the code by the programmer,
in logic programming the programmer writes declarative formulae
that describe the relationship between the input and the output, and
resolution together with the search and computation rules supplies a
uniform control structure.
As a consequence of the uniform control structure, there are cases when
logic programs will not be as efficient as special handcrafted control
structures for specific computations.

Example 16 (Logic program, from [Ben-Ari, 2001]).


The following program has two procedures:
1. q(x, y):-p(x, y)
2. q(x, y):-p(x, z), q(z, y)

3. p(b, a)
4. p(c, a)
5. p(d, b)
6. p(e, b)
7. p(f, b)
8. p(h, g)
9. p(i, h)
10. p(j, h)

Definition 17 (Correct answer substitution).

Let P be a program and G a goal clause. A substitution θ for the variables
in G is called a correct answer substitution iff P ⊨ ∀(¬(Gθ)), where ∀
denotes the universal closure over the variables that are free in Gθ.

In other words, a correct answer substitution makes the negation of the
goal clause a logical consequence of the program.

Example 18.
Consider a refutation for the goal clause :- q(y, b), q(b, z) from the program
introduced in Example 16. At each step, choose a literal within the goal
clause and a clause whose head clashes with that literal:

1. Choose q(y, b) and resolve with clause 1, obtaining :- p(y, b), q(b, z).
2. Choose p(y, b) and resolve with clause 5, obtaining :- q(b, z). The needed
substitution is {y ← d}.
3. Choose the remaining literal q(b, z) and resolve with clause 1, obtaining
:- p(b, z).
4. Choose the remaining literal p(b, z) and resolve with clause 3, obtaining
□. The needed substitution is {z ← a}.

We obtained the empty clause □. With the correct answer substitution
{y ← d, z ← a} applied to the goal, we get that

P ⊨ q(d, b) ∧ q(b, a).
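Loading the program of Example 16 into Prolog (which, as discussed in the
next section, uses the leftmost-literal computation rule and depth-first
search), the corresponding query returns exactly this correct answer
substitution first:

?- q(Y, b), q(b, Z).
Y = d,
Z = a .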

12.3 SLD Resolution

Definition 19 (SLD Resolution).
Let P be a set of program clauses, R a computation rule and G a goal
clause. A derivation by SLD-resolution is defined as a sequence of
resolution steps between the goal clauses and the program clauses. The
first goal clause G0 is G. Assume that Gi has been derived. Gi+1 is
obtained by selecting a literal Ai ∈ Gi according to the computation rule
R, choosing a clause Ci ∈ P such that the head of Ci unifies with Ai by
an mgu σi, and resolving:

Gi   = :- A1, ..., Ai-1, Ai, Ai+1, ..., An
Ci   = A :- B1, ..., Bk
Aσi  = Aiσi
Gi+1 = :- (A1, ..., Ai-1, B1, ..., Bk, Ai+1, ..., An)σi

An SLD refutation is an SLD-derivation of □ (the empty clause).

Soundness and completeness of SLD-resolution

Theorem 20 (Soundness of SLD-resolution).
Let P be a set of program clauses, R a computation rule and G a goal
clause. If there is an SLD-refutation of G, θ = σ1...σn is the composition of
the unifiers used in the refutation and σ is the restriction of θ to the
variables of G, then σ is a correct answer substitution for G.

Proof. See [Ben-Ari, 2001], pp. 178.

Theorem 21 (Completeness of SLD-resolution).
Let P be a set of program clauses, R a computation rule and G a goal
clause. Let σ be a correct answer substitution. Then there is an SLD-
refutation of G from P such that σ is the restriction of the composition of
the unifiers σ1...σn to the variables in G.

Proof. See [Ben-Ari, 2001], pp. 179.

Note that the above results only refer to Horn clauses (logic programs),
not to arbitrary clauses. SLD resolution is not complete for arbitrary
clauses:

p ∨ q,   p ∨ ¬q,   ¬p ∨ q,   ¬p ∨ ¬q

is unsatisfiable, but there is no SLD-refutation of □ for it (exercise!).

Example 22 (Examples 16, 18 revisited).

If, in step 2 of Example 18, clause 6 had been chosen to resolve with, the
resolvent would have been :- q(b, z). The resolution would still succeed
(exercise!) but the correct answer substitution would have been different:
{y ← e, z ← a}. For a given goal clause there may be several correct
answer substitutions.

Suppose that the computation rule is to always choose the last literal in
the goal clause. Resolving always with clause 2 gives:

:- q(y, b), q(b, z)
:- q(y, b), p(b, z'), q(z', z)
:- q(y, b), p(b, z'), p(z', z''), q(z'', z)
:- q(y, b), p(b, z'), p(z', z''), p(z'', z'''), q(z''', z)
...

A correct answer substitution exists but this derivation will not terminate.

Consider now the computation rule that always chooses the first literal in
the goal. The SLD resolution can proceed as follows:
1. q(y, b) is chosen and resolved with clause 2, obtaining :- p(y, z'), q(z', b), q(b, z).
2. Choose the first literal p(y, z') and resolve it with clause 6, p(e, b),
then with clause 1, obtaining :- q(b, b), q(b, z), then :- p(b, b), q(b, z).
3. No program clause unifies with p(b, b), so this derivation fails.

Even though an answer substitution exists, this derivation fails.
Definition 23 (SLD-trees).

Let P be a set of program clauses, R a computation rule and G a goal
clause. All possible SLD-derivations can be displayed in an SLD-tree,
constructed in the following manner:
The root is labeled with the goal clause G.
Given a node n labeled with a goal clause Gn, create a child ni for
each new goal clause Gni that can be obtained by resolving the literal
chosen by R with the head of a clause from P.

Definition 24.
In an SLD-tree, a branch leading to a refutation is called a success
branch. A branch leading to a goal clause which cannot be resolved further
is called a failed branch. A branch corresponding to a non-terminating
derivation is called an infinite branch.

Example 25 (SLD-tree).
The following is the SLD-tree for the derivation in Examples 16, 18, where
the computation rule is to resolve with the leftmost literal in the goal (see
also Example 22). The chosen literal is underlined, and the clause resolved
with labels the edge. Success, failed and infinite branches are marked.

(SLD-tree diagram: the root is the goal clause, each edge is labeled with the
program clause resolved with, and the leaves mark success, failed and infinite
branches.)
Theorem 26.
Let P be a program and G be a goal clause. Then either every SLD-tree for
P and G has infinitely many success branches, or all SLD-trees for P and G
have the same finite number of success branches.

Proof. Not given here.

Definition 27.
A search rule is a procedure for searching an SLD-tree for a refutation.
An SLD-refutation procedure is the SLD-resolution algorithm together
with the specification of a computation rule and a search rule.

Some comments on the completeness of SLD resolution

Note that SLD resolution is complete for any computation rule (i.e. a
refutation exists). However, the choice of the search rule is essential for
whether or not this refutation is actually found.

If a more restricted notion of completeness is considered (a refutation
exists and is found), certain search rules make SLD resolution incomplete:
for example the depth-first search rule (which Prolog uses: Prolog =
SLD resolution with the leftmost literal as computation rule and the
depth-first search rule), as illustrated below.

There are search rules for which SLD resolution is complete (in the stronger
sense):
breadth-first (search every level of the SLD-tree),
bounded depth-first (go down to a certain depth, then try another
branch down to that depth, and so on; if the solution is not found,
increase the depth).
However, these complete search rules are computationally expensive. A
trade-off between completeness and efficiency is made.
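A minimal sketch of the incompleteness of the depth-first search rule: for the
program below, a refutation of ?- p(a). exists (using the second clause), but
Prolog keeps descending into the infinite branch generated by the first clause
and never finds it.

p(X) :- p(X).
p(a).

% ?- p(a).     does not terminate (loops until resources are exhausted)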

12.4 Reading and Further Exercises


Read: Chapter 8, sections 8.1-8.3 of [Ben-Ari, 2001].

Also read: Chapter 2, Chapter 3 of [Nilsson and Maluszynski, 2000].

Items of interest:

Declarative vs. procedural interpretations of formulas (again).


Resolution as computation. Nondeterminism of resolution.
Computation rules, search rules (in the context of resolution).
Horn clauses.
SLD resolution. Completeness of SLD resolution (no proofs required).
Prolog clauses as Horn Clauses.
Translation of Prolog (Horn) clauses in predicate logic formulae.
Prolog: completeness issues. Complete Prolog computation: com-
pleteness issues.

References

[Ben-Ari, 2001] Ben-Ari, M. (2001). Mathematical Logic for Computer Science.
Springer Verlag, London, 2nd edition.

[Brna, 1988] Brna, P. (1988). Prolog Programming: A First Course. Copyright
Paul Brna.

[Clocksin and Mellish, 2003] Clocksin, W. F. and Mellish, C. S. (2003).
Programming in Prolog. Springer, Berlin, 5th edition.

[Covington, 1989] Covington, M. (1989). Efficient Prolog: A Practical Guide.
Technical Report AI-1989-08, University of Georgia, Athens, Georgia.

[Covington et al., 1997] Covington, M., Nute, D., and Vellino, A. (1997). Prolog
Programming in Depth. Prentice Hall, New Jersey.

[Craciun, 2010] Craciun, A. (2005-2010). Logic for Computer Science.

[Nilsson and Maluszynski, 2000] Nilsson, U. and Maluszynski, J. (2000). Logic,
Programming and Prolog. Copyright Ulf Nilsson and Jan Maluszynski, 2nd
edition.

[Wielemaker, 2008] Wielemaker, J. (1990-2008). SWI-Prolog 5.6.60 Reference
Manual. University of Amsterdam.
