is the goal (L1, . . . , Lm−1, B1, . . . , Bn, Lm+1, . . . , Lk).
G, Y)} (where either X or Y must be different from Z; these are the only computed answers); composing it with {X/a, Y/a} will give {Z/f(a, a), X/a, Y/a} (or {Z/f(a, a), Y/a} if X = Z, or {Z/f(a, a), X/a} if Y = Z), which is different from {X/f(a)}.
18 CHAPTER 3. LOGIC AND LOGIC PROGRAMMING
Figure 3.1: Complete SLD-tree for Example 3.2.4
3.3.2 Programs with built-ins
Most practical logic programs make (heavy) use of built-ins. Although many of these built-ins, e.g. assert/1 and retract/1, are extra-logical and ruin the declarative nature of the underlying program, a reasonable number of them can actually be seen as syntactic sugar. Take for example the following program, which uses the Prolog [14, 42, 8] built-ins =../2 and call/1.
map(P, [], [])
map(P, [X|T], [PX|PT]) ← C =.. [P, X, PX], call(C), map(P, T, PT)
inv(0, 1)
inv(1, 0)
For this program the query map(inv, [0, 1, 0], R) will succeed with the computed answer R/[1, 0, 1]. Given that query, the Prolog program can be seen as a pure definite logic program by simply adding the following definitions (where we use the prefix notation for the predicate =../2):
=..(inv(X, Y ), [inv, X, Y ])
call(inv(X, Y )) ← inv(X, Y )
The so obtained pure logic program will succeed for map(inv, [0, 1, 0], R) with the same computed answer R/[1, 0, 1].
This means that some predicates like map/3, which are usually taken to be higher-order, can simply be mapped to pure definite (first-order) logic programs ([44, 36]). Some built-ins, like for instance is/2, have to be defined by infinite relations. Usually this poses no problems as long as, when selecting such a built-in, only a finite number of cases apply (Prolog will report a run-time error if more than one case applies, while the programming language Gödel [22] will delay the selection until only one case applies).
In the remainder of this thesis we will usually restrict our attention to those built-ins that can be given a logical meaning by such a mapping.
Chapter 4
Parsing
4.1 Operator Declarations
[TO DO: INSERT: material on op declarations]
4.2 DCGs
[TO DO: INSERT: Some DCG background.]
4.3 Treating Precedences and Associativities with DCGs
The scheme to treat operator precedences and associativity is as follows: one layer per operator precedence; every layer calls the layer below; the bottom layer can call the top layer via parentheses.
Program 4.3.1
plus(R) --> exp(A), plusc(A,R).
plusc(Acc,Res) --> "+",!, exp(B), plusc(plus(Acc,B),Res).
/* left associative: B associates to + on left */
plusc(A,A) --> [].
exp(R) --> num(A), expc(A,R).
expc(Acc,exp(Acc,R2)) --> "**",!, exp(R2). /* right associative */
expc(A,A) --> [].
num(X) --> "(",!,plus(X), ")".
Figure 4.1: Illustrating the parse hierarchy and tree for "x**y+x**y**z+y"
num(id(x)) --> "x".
num(id(y)) --> "y".
num(id(z)) --> "z".
parse(S,T) :- plus(T,S,[]).
This parser can be called as follows:
| ?- parse("x+y+z",R).
R = plus(plus(id(x),id(y)),id(z)) ?
yes
| ?- parse("x**y**z",R).
R = exp(id(x),exp(id(y),id(z))) ?
yes
| ?- parse("x**y+x**y**z+y",R).
R = plus(plus(exp(id(x),id(y)),exp(id(x),exp(id(y),id(z)))),id(y)) ?
yes
Exercise 4.3.2 Extend the parser from Program 4.3.1 to handle the operators
-, *, /.
4.4 Tokenisers
Exercise 4.4.1 Extend the parser from Program 4.3.1 and Exercise 4.3.2 to link up with a tokeniser that recognises numbers and identifiers.
Chapter 5
Simple Interpreters
5.1 NFA
Our first task is to write an interpreter for non-deterministic finite automata (NFA). [REF; Ullman?]
One question we must ask ourselves is how to represent the NFA to be interpreted. One solution is to represent the NFA by Prolog facts. E.g., we could use a Prolog predicate init/1 to represent the initial states, final/1 to represent the final states, and trans/3 to represent the transitions between the states. To represent the NFA from Figure 5.1 we would thus write:
Program 5.1.1
init(1).
trans(1,a,2).
trans(2,b,3).
trans(2,b,2).
final(3).
Let us now write an interpreter, checking whether a string can be generated by the automaton. The interpreter needs to know the current state St as well as the string to be checked. The empty string can only be generated in final states.
Figure 5.1: A simple NFA
A non-empty string can be generated by taking an outgoing transition from the current state to another state S2, thereby generating the first character H of the string; from S2 we must be able to generate the rest T of the string.
accept(S,[]) :- final(S).
accept(S,[H|T]) :- trans(S,H,S2), accept(S2,T).
We have not yet encoded that initially we must start from a valid initial state. This can be encoded as follows:
check(Trace) :- init(S), accept(S,Trace).
We can now use our interpreter to check whether the NFA can generate
various strings:
| ?- check([a,b]).
yes
| ?- check([a]).
no
We can even provide only a partially instantiated string and ask for solutions:
| ?- check([X,Y,Z]).
X = a,
Y = b,
Z = b ? ;
no
Finally, we can generate strings accepted by the NFA:
| ?- check(X).
X = [a,b] ? ;
X = [a,b,b] ? ;
X = [a,b,b,b] ? ;
Exercise 5.1.2 Extend the above interpreter to handle epsilon transitions.
5.2 Regular Expressions
Our next task is to write an interpreter for regular expressions. [REF] We will first use operator declarations so that we can use the standard regular expression operators: . for concatenation, + for alternation, and * for repetition. For this we write the following operator declarations [REF].
:- op(450,xfy,.). /* + already defined; has 500 as priority */
:- op(400,xf,*).
With append:
Program 5.2.1
:- use_module(library(lists)).
gen(X,[X]) :- atomic(X).
gen(X+Y,S) :- gen(X,S) ; gen(Y,S).
gen(X.Y,S) :- gen(X,SX), append(SX,SY,S), gen(Y,SY).
gen(*(_X),[]).
gen(*(X),S) :- gen(X,SX), append(SX,SSX,S),gen(*(X),SSX).
Observe how, to compute the meaning/effect of X.Y, we compute the effect of X and Y by recursive calls to gen. This is a common pattern that will appear time and time again in interpreters.
Usage:
| ?- gen(a*,[X,Y,Z]).
X = a,
Y = a,
Z = a ?
yes
| ?- gen((a+b)*,[X,Y]).
X = a,
Y = a ? ;
X = a,
Y = b ? ;
X = b,
Y = a ? ;
X = b,
Y = b ? ;
no
Problem: efficiency, as SX is traversed a second time by append. More problematic is the following, however:
| ?- gen((a*).b,[c]).
! Resource error: insufficient memory
Solution: difference lists: there is no more need to call append, as concatenation takes constant time using difference lists:
diff_append(X-Y,Z-V,R) :- R = X-V, Y=Z.
Program 5.2.2 Version with difference lists:
generate(X,[X|T],T) :- atomic(X).
generate(X +_Y,H,T) :- generate(X,H,T).
generate(_X + Y,H,T) :- generate(Y,H,T).
generate(X.Y,H,T) :- generate(X,H,T1), generate(Y,T1,T).
generate(*(_),T,T).
generate(*(X),H,T) :- generate(X,H,T1), generate(*(X),T1,T).
gen(RE,S) :- generate(RE,S,[]).
We can now call:
| ?- gen((a*).b,[c]).
no
[Insert Figure showing call graph for both versions of gen.]
Explain alternate reading: generate(RE, InEnv, OutEnv): InEnv = characters still to be consumed overall; OutEnv = characters remaining after RE has been matched.
Common theme: sometimes called threading. It can be written in DCG style notation:
Program 5.2.3
generate(X) --> [X], {atomic(X)}.
generate(X +_Y) --> generate(X).
generate(_X + Y) --> generate(Y).
generate(.(X,Y)) --> generate(X), generate(Y).
generate(*(_)) --> [].
generate(*(X)) --> generate(X), generate(*(X)).
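These DCG clauses can be queried via phrase/2; for instance (assuming the operator declarations from above are loaded, and parenthesising the starred expression as before):

```prolog
| ?- phrase(generate(((a+b)*).b), [a,b,b]).
yes
```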
5.3 Propositional Logic
Program 5.3.1
int(const(true)).
int(const(false)) :- fail.
int(and(X,Y)) :- int(X), int(Y).
int(or(X,Y)) :- int(X) ; int(Y).
int(not(X)) :- \+ int(X).
[Discuss problem with negation: int(not(const(X))) fails even though there is a solution. Explain that negation is only sound when the call is ground, i.e., contains no variables.]
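The following sketch illustrates the issue, assuming Program 5.3.1 is loaded; domain/1 is a hypothetical helper introduced here only to ground the call before negating:

```prolog
/* enumerate the possible truth constants */
domain(true).
domain(false).

/* | ?- int(not(const(X))).
   no                % \+ int(const(X)) fails: X can be bound to true

   | ?- domain(X), int(not(const(X))).
   X = false         % grounding X first makes the negation sound  */
```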
One common technique is to get rid of the need for the built-in negation by explicitly writing a predicate for the negation:
Program 5.3.2
int(const(true)).
int(const(false)) :- fail.
int(and(X,Y)) :- int(X), int(Y).
int(or(X,Y)) :- int(X) ; int(Y).
int(not(X)) :- nint(X).
nint(const(false)).
nint(const(true)) :- fail.
nint(and(X,Y)) :- nint(X) ; nint(Y).
nint(or(X,Y)) :- nint(X),nint(Y).
nint(not(X)) :- int(X).
Exercise 5.3.3 Extend the above interpreter for implication imp/2.
Literature: Clark completion and equality theory, SLDNF, constructive negation, the Gödel programming language, ...
5.4 A First Simple Imperative Language
Let us first choose an extremely simple language with three constructs:
variable definition def, defining a single variable. Example: def x
assignment := to assign a value to a variable. Example: x := 3
sequential composition ; to compose two constructs. Example: def x ; x := 3
Imperative programs access and modify a global state. When writing an interpreter we thus need to model this environment. Thus, we first provide auxiliary predicates to store and retrieve variable values in an environment:
Program 5.4.1
/* def(OldEnv, VariableName, NewEnv) */
def(Env,Key,[Key/undefined|Env]).
/* store(OldEnv, VariableName, NewValue, NewEnv) */
store([],Key,Value,[exception(store(Key,Value))]).
store([Key/_Value2|T],Key,Value,[Key/Value|T]).
store([Key2/Value2|T],Key,Value,[Key2/Value2|BT]) :-
Key \== Key2, store(T,Key,Value,BT).
/* lookup(VariableName, Env, CurrentValue) */
lookup(Key,[],_) :- print(lookup_var_not_found_error(Key)),nl,fail.
lookup(Key,[Key/Value|_T],Value).
lookup(Key,[Key2/_Value2|T],Value) :-
Key \== Key2,lookup(Key,T,Value).
We suppose that store and lookup will be called with all but the last argument bound, but other modes of use are possible:
| ?- store(Old,x,X,[x/3,y/2]).
X = 3,
Old = [x/_A,y/2] ? ;
no
Program 5.4.2
int(X:=V,In,Out) :- store(In,X,V,Out).
int(def(X),In,Out) :- def(In,X,Out).
int(X;Y,In,Out) :- int(X,In,I2), int(Y,I2,Out).
test(R) :- int( def x; x:= 5; def z; x:= 3; z:= 2 , [],R).
| ?- int( def x; x:= 5; def z; x:= 3; z:= 2 , [],R).
R = [z/2,x/3]
| ?- int( def x; x:= 5; def z; x:= 3; z:= X , In, [z/2,x/RX]).
X = 2,
In = [],
RX = 3 ? ;
no
Now let us allow expressions on the right-hand side of an assignment. To make our interpreter a little bit simpler, we suppose that variable uses are preceded by a $. E.g., x := $x+1.
We need a separate predicate to evaluate arguments. This is again a common pattern in interpreter development: we have one predicate to execute statements, modifying an environment, and one predicate to evaluate expressions, returning a value. If the expressions in the language to be interpreted are side-effect free, the eval predicate need not return a new environment. This is what we have done below:
:- op(200,fx,$).
/* eval(Expression, Env, ExprValue) */
eval(X,_Env,Res) :- number(X),Res=X.
eval($(X),Env,Res) :- lookup(X,Env,Res).
eval(+(X,Y),Env,Res) :- eval(X,Env,RX), eval(Y,Env,RY), Res is RX+RY.
eint(X:=V,In,Out) :- eval(V,In,Res),store(In,X,Res,Out).
eint(def(X),In,Out) :- def(In,X,Out).
eint(X;Y,In,Out) :- eint(X,In,I2), eint(Y,I2,Out).
| ?- eint( def x; x:= 5; def z; x:= $x+1; z:= $x+($x+2) , [],R).
R = [z/14,x/6] ?
Exercise 5.4.3 Extend the interpreter for other operators, like multiplication
* and exponentiation **.
Exercise 5.4.4 Rewrite the interpreter so that one does not need to use the
$. Make sure that your interpreter only has a single (and correct) solution.
5.4.1 Summary
The general scheme is
int( Statement[X1,...,Xn], I1, OutEnv) :-
    int(X1,I1,I2),   /* or eval(X1,I1,R1), I2=I1 */
    ...,
    int(Xn,In,OutEnv).
For evaluating expressions:
eval( Expr[X1,...,Xn], Env, Res) :-
    eval(X1,Env,R1), ...,
    eval(Xn,Env,Rn),
    compute(R1,...,Rn,Res).
5.5 An imperative language with conditionals
Two choices:
treat a boolean expression such as 1=x like an expression returning a boolean value
introduce a separate class of boolean expressions with a separate predicate. Here there are again two sub-choices:
write a predicate test_be(BE,Env) which succeeds if the boolean expression succeeds in the environment Env
write a predicate eval_be(BE,Env,BoolRes)
Program 5.5.1
eint(if(BE,S1,S2),In,Out) :-
eval_be(BE,In,Res),
((Res=true -> eint(S1,In,Out)) ; eint(S2,In,Out)).
eval_be(=(X,Y),Env,Res) :- eval(X,Env,RX), eval(Y,Env,RY),
((RX=RY -> Res=true) ; (Res=false)).
eval_be(<(X,Y),Env,Res) :- eval(X,Env,RX), eval(Y,Env,RY),
((RX<RY -> Res=true) ; (Res=false)).
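With the eval/eint clauses and operator declarations of Section 5.4 also loaded, a conditional would then run as follows (a sketch of the expected behaviour):

```prolog
| ?- eint( def x; x:= 2; if(=($x,2), x:= 1, x:= 0) , [],R).
R = [x/1] ?
```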
5.6 An imperative language with loops
Add a string object that gets converted to an atom:
eval([HS|String],_,Res) :- name(Res,[HS|String]).
Add a println command:
eint(println(S),E,E) :- eval(S,E,ES),print(ES),nl.
eint(while(BE,S),In,Out) :-
eval_be(BE,In,Res),
((Res=true -> eint(;(S,while(BE,S)),In,Out)) ; In=Out).
Alternate solution:
eint(skip,E,E).
eint(while(BE,S),In,Out) :- eint(if(BE,;(S,while(BE,S)),skip),In,Out).
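Putting the pieces together (again assuming the eval/eint clauses and operator declarations from Section 5.4), a small countdown loop would behave as follows:

```prolog
| ?- eint( def x; x:= 3; while(<(0,$x), x:= $x + (-1)) , [],R).
R = [x/0] ?
```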
Chapter 6
Writing a Prolog
Interpreter in Prolog
Def: object program
How to represent the Prolog program:
Clausal Representation: As in Section 5.1, use clauses to represent the object program.
Term Representation (Reified representation): As in Sections 5.2 and 5.3, use a Prolog term to represent the object program.
[ vanilla meta-interpreter (see, e.g., [21, 3]). ]
Program 6.0.1
:- dynamic app/3.
app([],L,L).
app([H|X],Y,[H|Z]) :- app(X,Y,Z).
solve(true).
solve(,(A,B)) :- solve(A),solve(B).
solve(Goal) :- Goal \= ,(_,_), Goal \= true,
clause(Goal,Body), solve(Body).
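With app/3 stored as clauses, the vanilla meta-interpreter behaves exactly like the object predicate:

```prolog
| ?- solve(app(X,Y,[1,2])).
X = [],
Y = [1,2] ? ;
X = [1],
Y = [2] ? ;
X = [1,2],
Y = [] ? ;
no
```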
Debugging version:
Program 6.0.2
isolve(Goal) :- isolve(Goal,0).
indent(0).
indent(s(X)) :- print(+-), indent(X).
isolve(true,_).
isolve(,(A,B),IL) :- isolve(A,IL),isolve(B,IL).
isolve(Goal,IL) :- Goal \= ,(_,_), Goal \= true,
print(|), indent(IL), print(> enter ), print(Goal),nl,
backtrack_message(IL,fail(Goal)),
clause(Goal,Body), isolve(Body,s(IL)),
backtrack_message(IL,redo(Goal)),
print(|), indent(IL), print(> exit ), print(Goal),nl.
backtrack_message(_IL,_).
backtrack_message(IL,Msg) :-
print(|), indent(IL), print(> ),print(Msg),nl,fail.
6.0.1 Towards a reified version
First (wrong) attempt:
Program 6.0.3
:- use_module(library(lists),[member/2]).
rsolve(true,_Prog).
rsolve(,(A,B),Prog) :- rsolve(A,Prog),rsolve(B,Prog).
rsolve(Goal,Prog) :- member(:-(Goal,Body),Prog), rsolve(Body,Prog).
At first sight it seems to work:
| ?- rsolve(p,[ (p:-q,r), (q :- true), (r :- true)]).
yes
| ?- rsolve(p(X), [ (p(X) :- q(X)) , (q(a) :- true), (q(b) :- true) ]).
X = a ? ;
X = b ? ;
no
But here it doesn't:
| ?- rsolve(i(s(s(0))), [ (i(0) :- true) , (i(s(X)) :- i(X)) ]).
no
What is the problem?
| ?- rsolve(i(s(0)), [ (i(0) :- true) , (i(s(X)) :- i(X)) ]) .
X = 0
We need standardising apart, i.e., to generate a fresh copy of the variables inside the clauses being used:
Program 6.0.4
rsolve(true,_Prog).
rsolve(,(A,B),Prog) :- rsolve(A,Prog),rsolve(B,Prog).
rsolve(Goal,Prog) :- get_clause(Prog,Goal,Body), rsolve(Body,Prog).
get_clause([ Clause | _], Head, Body) :-
copy_term(Clause,CClause),
CClause = :-(Head,Body).
get_clause([ _ | Rest], Head, Body) :- get_clause(Rest,Head,Body).
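With standardising apart in place, the query that previously failed now succeeds:

```prolog
| ?- rsolve(i(s(s(0))), [ (i(0) :- true) , (i(s(X)) :- i(X)) ]).
% succeeds: only fresh copies of the clause are instantiated,
% so X in the program term is left unbound
```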
Exercise 6.0.5 Improve the data structure for the above interpreter by grouping clauses according to predicate name and arity.
6.0.2 Towards a declarative Interpreter
The first vanilla interpreter can be given a declarative reading (cite De Schreye, Martens). However, the reified one is certainly not.
What do we mean by declarative?
We say that a predicate p of arity n is declarative iff
p(a1, . . . , an), Q ≈ Q, p(a1, . . . , an) (binding-insensitive)
p(a1, . . . , an), fail ≈ fail, p(a1, . . . , an) (side-effect free)
copy_term is not declarative: copy_term(p(X),Q), X=a is different from X=a, copy_term(p(X),Q).
Exercise 6.0.6 Find other predicates in Prolog which are not declarative and
explain why.
Why is it a good idea to be declarative:
clear logical semantics, independent of the operational reading
gives optimisers more liberty to reorder goals: parallelisation, optimisation, analysis, specialisation, ...
Solution: the ground representation:
Going from [ (i(0) :- true) , (i(s(X)) :- i(X)) ] with variables to [ (i(0) :- true) , (i(s(var(x))) :- i(var(x))) ]. What if the program uses var/1? Then: [ (i(term(0,[])) :- true) , (i(term(s,[var(x)])) :- i(var(x))) ].
But now things get more complicated: we need to write an explicit unification procedure! We need to treat substitutions and apply bindings!
InstanceDemo or the Lifting Interpreter
Trick: lift var(1) to variables; this can be done declaratively!
Try to write lift(p(var(1),var(1)), X), which gives the answer X = p(_1,_1).
lift can be used to generate fresh copies of clauses!
Program 6.0.7
/* --------------------- */
/* solve(GrRules,NgGoal) */
/* --------------------- */
solve([],_GrRules).
solve([NgH|NgT],GrRules) :-
fresh_member(term(clause,[NgH|NgBody]),GrRules),
solve(NgBody,GrRules),
solve(NgT,GrRules).
/* --------------------------------- */
/* fresh_member(NgExpr,GrListOfExpr) */
/* --------------------------------- */
fresh_member(NgX,[GrH|_GrT]) :- lift(GrH,NgX).
fresh_member(NgX,[_GrH|GrT]) :- fresh_member(NgX,GrT).
/* ---------------------------------------- */
/* lift(GroundRepOfExpr,NonGroundRepOfExpr) */
/* ---------------------------------------- */
lift(G,NG) :- mkng(G,NG,[],_Sub).
mkng(var(N),X,[],[sub(N,X)]).
mkng(var(N),X,[sub(N,X)|T],[sub(N,X)|T]).
mkng(var(N),X,[sub(M,Y)|T],[sub(M,Y)|T1]) :-
N \== M, mkng(var(N),X,T,T1).
mkng(term(F,Args),term(F,IArgs),InSub,OutSub) :-
l_mkng(Args,IArgs,InSub,OutSub).
l_mkng([],[],Sub,Sub).
l_mkng([H|T],[IH|IT],InSub,OutSub) :-
mkng(H,IH,InSub,IntSub),
l_mkng(T,IT,IntSub,OutSub).
test(X,Y,Z) :- solve([term(app,[X,Y,Z])], [
term(clause,[term(app,[term([],[]),var(l),var(l)]) ]),
term(clause,[term(app,[term(.,[var(h),var(x)]),var(y),
term(.,[var(h),var(z)])]),
term(app,[var(x),var(y),var(z)]) ])
]).
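The lifting predicate can also be tried on its own; up to the names chosen for the fresh variables, one would expect:

```prolog
| ?- lift(term(p,[var(1),var(1)]), X).
X = term(p,[_A,_A]) ?
```

Note how the two occurrences of var(1) are mapped to the same fresh variable, thanks to the substitution threaded through mkng/4.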
6.0.3 Meta-interpreters and pre-compilation
[TO DO: Integrate material below into main text]
A meta-program is a program which takes another program, the object program, as input, manipulating it in some way. Usually the object and meta-program are supposed to be written in (almost) the same language. Meta-programming can be used for (see e.g. [21, 5]) extending the programming language, modifying the control [9], debugging, program analysis, program transformation and, as we will see, specialised integrity checking.
6.1 The power of the ground vs. the non-ground
representation
6.1.1 The ground vs. the non-ground representation [Rewrite]
In logic programming, there are basically two (fundamentally) different approaches to representing an object level expression, say the atom p(X, a), at the meta-level. In the first approach one uses the term p(X, a) as the meta-level representation. This is called a non-ground representation, because it represents an object level variable by a meta-level variable. In the second approach one would use something like the term struct(p, [var(1), struct(a, [])]) to represent the object level atom p(X, a). This is called a ground representation, as it represents an object level variable by a ground term. Figure 6.1 contains some further examples of the particular ground representation which we will use throughout this thesis. From now on, we use T̄ to denote the ground representation of a term T. Also, to simplify notations, we will sometimes use p(t1, . . . , tn) as a shorthand for struct(p, [t1, . . . , tn]).
Object level    Ground representation
X               var(1)
c               struct(c, [])
f(X, a)         struct(f, [var(1), struct(a, [])])
p ← q           struct(clause, [struct(p, []), struct(q, [])])
Figure 6.1: A ground representation
The ground representation has the advantage that it can be treated in a purely declarative manner, while for many applications the non-ground representation requires the use of extra-logical built-ins (like var/1 or copy/2). The non-ground representation also has semantical problems (although they were solved to some extent in [10, 32, 33]). The main advantage of the non-ground representation is that the meta-program can use the underlying¹ unification mechanism, while for the ground representation an explicit unification algorithm is required. This (currently) induces a difference in speed reaching several orders of magnitude. The current consensus in the logic programming community is that both representations have their merits and the actual choice depends
¹The term "underlying" refers to the system in which the meta-interpreter itself is written.
on the particular application. In the following subsection we discuss the differences between the ground and the non-ground representation in more detail. For further discussion we refer the reader to [21], [22, 6], and the conclusion of [32].
Unification and collecting behaviour
As already mentioned, meta-interpreters for the non-ground representation can simply use the underlying unification. For instance, to unify the object level atoms p(X, a) and p(Y, Y ) one simply calls p(X, a) = p(Y, Y ). This is very efficient, but after the call both atoms will have become instantiated to p(a, a). This means that the original atoms p(X, a) and p(Y, Y ) are no longer accessible (in Prolog for instance, the only way to undo these instantiations is via failing and backtracking), i.e. we cannot test in the same derivation whether the atom p(X, a) unifies with another atom, say p(b, a). This in turn means that it is impossible to write a breadth-first like or a collecting (i.e. performing something like findall/3²) meta-interpreter declaratively for the non-ground representation (it is possible to do this non-declaratively by using for instance Prolog's extra-logical copy/2 built-in).
In the ground representation on the other hand, we cannot use the underlying unification (for instance p(var(1),a) = p(var(2),var(2)) will fail). The only declarative solution is to use an explicit unification algorithm. Such an algorithm, taken from [12], is included in Appendix ??. (For the non-ground representation such an algorithm cannot be written declaratively; non-declarative features, like var/1 and =../2, have to be used.) For instance, unify(p(var(1),a), p(var(2),var(2)), Sub) yields an explicit representation of the unifier in Sub, which can then be applied to other expressions. In contrast to the non-ground representation, the original atoms p(var(1),a) and p(var(2),var(2)) remain accessible in their original form and can thus be used again to unify with other atoms. Writing a declarative breadth-first like or a collecting meta-interpreter poses no problems.
Standardising apart and dynamic meta-programming
To standardise apart object program clauses in the non-ground representation, we can again use the underlying mechanism. For this we simply have to store the object program explicitly in meta-program clauses. For instance, if we represent the object level clause
anc(X, Y ) ← parent(X, Y )
by the meta-level fact
clause(1, anc(X, Y ), [parent(X, Y )])
²Note that the findall/3 built-in is non-declarative, in the sense that the meaning of programs using it may depend on the selection rule. For example, given a program containing just the fact p(a), we have that findall(X, p(X), [A]), X = b succeeds (with the answer {A/a, X/b}) when executed left-to-right but fails when executed right-to-left.
we can obtain a standardised apart version of the clause simply by calling clause(1, Hd, Bdy). Similarly, we can resolve this clause with the atom anc(a, B) by calling clause(C, anc(a, B), Bdy).³
The disadvantage of this method, however, is that the object program is fixed, making it impossible to do dynamic meta-programming (i.e. dynamically change the object program, see [21]); this can be remedied by using a mixed meta-interpreter, as we will explain in Subsection ?? below. So, unless we resort to such extra-logical built-ins as assert and retract, the object program has to be represented by a term in order to do dynamic meta-programming. This in turn means that non-logical built-ins like copy/2 have to be used to perform the standardising apart. Figure 6.2 illustrates these two possibilities. Note that without the copy in Figure 6.2, the second meta-interpreter would incorrectly fail for the given query. For our application this means that, on the one hand, using the non-logical copying approach unduly complicates the specialisation task while at the same time leading to a serious efficiency bottleneck. On the other hand, using the clause representation implies that representing updates to a database becomes much more cumbersome. Basically, we also have to encode the updates explicitly as meta-program clauses, thereby making dynamic meta-programming impossible.
1. Using a Clause Representation:
solve([])
solve([H|T]) ← clause(H, B), solve(B), solve(T)
clause(p(X), [])
query: solve([p(a), p(b)])

2. Using a Term Representation:
solve(P, [])
solve(P, [H|T]) ← member(Cl, P), copy(Cl, cl(H, B)), solve(P, B), solve(P, T)
query: solve([cl(p(X), [])], [p(a), p(b)])

Figure 6.2: Two non-ground meta-interpreters with {p(X) ←} as object program
For the ground representation, it is again easy to write an explicit standardising apart operator in a fully declarative manner. For instance, in the programming language Gödel [22] the predicate RenameFormulas/3 serves this purpose.
Testing for variants or instances
In the non-ground representation we cannot test in a declarative manner whether two atoms are variants or instances of each other, and non-declarative built-ins, like var/1 and =../2, have to be used to that end. Indeed, suppose that we have implemented a predicate variant/2 which succeeds if its two arguments represent two atoms which are variants of each other and fails otherwise. Then variant(p(X), p(a)) must fail and variant(p(a), p(a)) must succeed. This, however, means that the query variant(p(X), p(a)), X = a fails when using a left to right computation rule and succeeds when using a right to left computation rule. Hence variant/2 cannot be declarative (the exact same reasoning holds for the predicate instance/2). Thus it is not possible to write declarative meta-interpreters which perform e.g. tabling, loop checks or subsumption checks.
Again, for the ground representation there is no problem whatsoever to write declarative predicates which perform variant or instance checks.
³However, we cannot generate a renamed apart version of anc(a, B). The copy/2 built-in has to be used for that purpose.
Specifying partial knowledge
One additional disadvantage of the non-ground representation is that it is more difficult to specify partial knowledge for partial evaluation. Suppose, for instance, that we know that a given atom (for instance the head of a fact that will be added to a deductive database) will be of the form man(T ), where T is a constant, but we don't know yet at partial evaluation time which particular constant T stands for. In the ground representation this knowledge can be expressed as struct(man, [struct(C, [])]). However, in the non-ground representation we have to write this as man(X), which is unfortunately less precise, as the variable X now no longer represents only constants but stands for any term.⁴
⁴A possible solution is to use the =../2 built-in to constrain X and represent the above atom by the conjunction man(X), X =.. [C]. This requires that the partial evaluator provides non-trivial support for the built-in =../2 and is able to specialise conjunctions instead of simply atoms, see Chapter ??.
Chapter 7
Verication Tools
7.1 Kripke Structures and Labeled Transition
Systems
A Kripke structure is a graph with labels on the nodes:
each node is a state of the system to be verified
the labels indicate which properties hold in that state.
A labeled transition system is typically a graph with labels on the edges:
each node is again a state of the system to be verified
the labels indicate which particular action/operation of the system can trigger a state change.
In this chapter we combine these two formalisms as follows:
Definition 7.1.1 A Labeled Transition System (LTS) is a tuple M = (S, T, L, I, s0) where
S is a finite set of states,
T ⊆ S × L × S is a transition relation ((s, a, t) is also written as s →a t),
L is a finite set of actions,
P is a set of basic properties,
I : S → 2^P is the node labeling function, indicating which properties hold in a particular state,
s0 is the initial state.
Encoding this as a logic program:
Program 7.1.2
37
38 CHAPTER 7. VERIFICATION TOOLS
Figure 7.1: A simple LTS for a microwave oven
start(s0).
prop(s0,open).
prop(s1,closed).
prop(s2,closed).
prop(s2,heat).
trans(s0,close_door,s1).
trans(s1,open_door,s0).
trans(s1,start,s2).
trans(s2,open_door,s0).
trans(s2,stop,s1).
Note that trans does not have to be defined by facts alone: we can plug in an interpreter!
We now try to compute reach(M), the set of reachable states of M, which is important for checking so-called safety properties of a system:
Program 7.1.3
reach(X) :- start(X).
reach(X) :- reach(Y), trans(Y,_,X).
This is elegant and logically correct, but does not really work with classical
Prolog:
| ?- reach(X).
X = s0 ? ;
X = s1 ? ;
X = s0 ? ;
X = s2 ? ;
X = s1 ? ;
X = s0 ? ;
X = s1 ? ;
X = s0 ?
...
If we try to check a simple safety property of our system, namely that the door cannot be open while the heat is on, then we get stuck in an infinite loop:
| ?- reach(X), prop(X,open), prop(X,heat).
..... infinite loop ....
The same problem persists if we change the order of the literals in the second
clause:
reach(X) :- trans(Y,_,X), reach(Y).
7.2 Bottom-Up Interpreter
Top-down interpretation: start from the goal and try to find a refutation. Used by classical Prolog: SLD-resolution.
Bottom-up: start from the facts and try to reach the goal. Used by relational and deductive databases. In logic programming this corresponds to the T_P operator of Definition 3.2.7.
Below we adapt our vanilla interpreter from [REF] to work backwards (i.e., bottom-up) and construct a table of solutions:
Program 7.2.1
:- dynamic start/1, prop/2, trans/3, reach/1.
pred(start(_)). pred(prop(_,_)). pred(trans(_,_,_)). pred(reach(_)).
:- dynamic table/1.
:- dynamic change/0.
bup(Call) :- pred(Call), clause(Call,Body), solve(Body).
solve(true).
solve(,(A,B)) :- solve(A),solve(B).
solve(Goal) :- Goal \= ,(_,_), Goal \= true,
table(Goal).
run :- retractall(table(_)), bup, print_table.
print_table :- table(X), print(X),nl,fail.
print_table :- nl.
bup :- retractall(change),
bup(Sol),
\+ table(Sol),
assert(table(Sol)),
assert(change),
fail.
bup :- print(.), flush_output, (change -> bup ; nl).
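Loading Programs 7.1.2 and 7.1.3 alongside this interpreter, run/0 saturates the table and then prints it; a sketch of the expected outcome:

```prolog
| ?- run.
/* prints (in some order) the facts of the object program together
   with the derived facts, in particular the reachable states:
     reach(s0)
     reach(s1)
     reach(s2)                                                    */
```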
7.3 Tabling and XSB Prolog
Tabling combines bottom-up with top-down evaluation. Whenever a predicate call is treated:
check if a variant of the call already has an entry in the table
if there is no entry, then add a new entry and proceed as usual; whenever a solution is found for this call it is entered into the table (unless the solution is already in the table)
if there is an entry, then look up the solutions in the table; do not evaluate the goal; if the table is currently empty then delay and try another branch first...
A system which implements tabling is XSB Prolog, available freely at:
http://xsb.sourceforge.net/.
Declaring a predicate reach with 1 argument as tabled:
:- table reach/1.
(There also exist mechanisms for XSB to decide automatically what to table.)
One can switch between variant tabling (:- use_variant_tabling p/n, the default) and subsumptive tabling (:- use_subsumptive_tabling p/n).
Other useful predicates: abolish_all_tables/0, get_calls_for_table(+PredSpec, ?Call).
This system has a very efficient tabling mechanism: an efficient way to check if a call has already been encountered, and an efficient way to propagate answers and check whether a table is complete.
Program 7.3.1
:- table reach/1.
reach(X) :- start(X).
reach(X) :- reach(Y), trans(Y,_,X).
Figure 7.2: Tabled execution of reach(X) (the table for reach/1 contains reach(s0), reach(s1), reach(s2))
| ?- reach(X).
X = s2;
X = s1;
X = s0;
no
| ?- reach(X), prop(X,open), prop(X,heat).
no
7.4 Temporal Model Checking
[Copied from LOPSTR paper; needs to be rewritten]
7.4.1 CTL syntax and semantics
The temporal logic CTL (Computation Tree Logic), introduced by Clarke and Emerson in [18], allows one to specify properties of specifications generally described as Kripke structures. The syntax and semantics of CTL are given below.
Given Prop, the set of propositions, the set of CTL formulae is inductively defined by the following grammar (where p ∈ Prop):
φ ::= true | p | ¬φ | φ ∧ φ | AXφ | EXφ | A(φ U φ) | E(φ U φ)
A CTL formula can be either true or false in a given state. For example, true is true in all states, ¬true is false in all states, and p is true in all states which contain the elementary proposition p. The symbol X is the nexttime operator and U stands for until. AXφ (resp. EXφ) intuitively means that φ holds in every (resp. some) immediate successor of the current program state.
42 CHAPTER 7. VERIFICATION TOOLS
Figure 7.3: Example of a Kripke structure.
The formula A(φ1 U φ2) (resp. E(φ1 U φ2)) intuitively means that for every (resp. some) computation path, there exists an initial prefix of the path such that φ2 holds at the last state of the prefix and φ1 holds at all other states along the prefix.
The semantics of CTL formulae is defined with respect to a Kripke structure (S, R, ℓ, s0) with S being the set of states, R (⊆ S × S) the transition relation, ℓ (S → 2^Prop) giving the propositions which are true in each state, and s0 being the initial state. Figure 7.3a gives a graphical representation of a Kripke structure with 3 states s0, s1, s2, where s0 is the initial state. The propositions p, q, r label the states.
Generally it is required that any state has at least one outgoing edge. From a Kripke structure, we can define an infinite transition tree as follows: the root of the tree is labelled by s0. Any vertex labelled by s has one son labelled by s′ with a transition s → s′