Jan Smith
Department of Computer Science
University of Göteborg
Chalmers University of Technology
S-412 96 Göteborg, Sweden
INTRODUCTION.
Bishop's book Foundations of Constructive Analysis [1] shows that practically all
of classical analysis can be rebuilt constructively. Its appearance in 1967 gave rise
to a number of formalizations of constructive mathematics. Per Martin-Löf proposed
that his formalization, intuitionistic type theory, may also be used as a programming
language. His ideas are unfolded in [11]. Viewed as a programming language, type theory
is a functional language with a very rich type structure. Compared with other functional
languages, like Hope [3], ML [5] or SASL [14], one of the main differences is that gen-
eral recursion is not allowed in type theory. The use of general recursion equations is
replaced by primitive recursion. For natural numbers, this means that the only recursive
definitions allowed are of the form
    f(0) = c
    f(s(n)) = g(n,f(n))
where s(n) denotes the successor of n; and correspondingly for lists
    f(nil) = c
    f(a.l) = g(a,l,f(l))

where a.l denotes the list obtained by adding the object a to the left of the list l.
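As an illustration, the two schemas can be transcribed into an ordinary functional style. The following Python sketch is mine, not part of the paper's formal system; `natrec` and `listrec` play the roles of the recursion constants introduced later.

```python
def natrec(n, c, g):
    """Primitive recursion on N: f(0) = c, f(s(n)) = g(n, f(n))."""
    acc = c
    for i in range(n):        # compute f(1), ..., f(n) from below
        acc = g(i, acc)
    return acc

def listrec(l, c, g):
    """Primitive recursion on lists: f(nil) = c, f(a.l) = g(a, l, f(l))."""
    if not l:
        return c
    a, rest = l[0], l[1:]
    return g(a, rest, listrec(rest, c, g))

# Examples: addition by recursion on the second argument, and length.
def add(m, n):
    return natrec(n, m, lambda _, y: y + 1)

def length(l):
    return listrec(l, 0, lambda a, rest, acc: acc + 1)
```

Note that g receives both the predecessor (or the head and tail) and the previously computed value, exactly as in the schemas above.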
The use of this restrictive form of recursion makes it possible to give a very simple
and coherent semantics for type theory, the one developed in [11]. It also makes it
easier to reason about programs; for example, the rule of type theory involving primi-
tive recursion for natural numbers corresponds closely to the induction rule for natural
numbers. This correspondence is explained below in connection with the formulation of
the rules for natural numbers.
At first sight, it may seem like a severe limitation not to have general recursion
available. But this is not so, because definition by primitive recursion may involve
functions of higher type. For instance, all the recursive functions that are provably
total in first order arithmetic are definable in type theory. One way of seeing this
is to note that the theory of primitive recursive functionals of finite type in Gödel
[6] is a subtheory of type theory, and then use the theorem that all recursive functions
provably total in first order arithmetic are definable in Gödel's theory. Of course, a
metamathematical result like this may not be of much use in actually constructing
workable programs.
Many algorithms used in programming are directly definable by primitive recursion,
but there are important exceptions, for example the program quicksort, which sorts a
list s in the following way:
If s = nil then quicksort of s is nil. If s = a.l, partition l into two lists
l≤ and l>, where l≤ consists of those elements of l that are smaller than or equal
to a and l> of those that are greater than a. Apply quicksort to l≤ and l> and
concatenate the resulting lists with a in between; this list is the value of
quicksort on s.
Quicksort is clearly a program that cannot be defined directly by primitive recursion
on lists, because the recursive calls are not made on the tail l: although each of
l≤ and l> has length at most that of l, neither l≤ nor l> need be an initial
segment of l.
However, quicksort satisfies a course-of-values recursion of the form

    f(nil) = c
    f(a.l) = g(a,l,f(d1(a,l)),f(d2(a,l)))

where

    length(d1(a,l)) ≤ length(l) and length(d2(a,l)) ≤ length(l)
For natural numbers there is a well-known method from recursion theory for reducing
course-of-values recursion in general to primitive recursion by introducing an
auxiliary function f̄(n), which stores all the values f(m) for m < n on a list; for
the details, I refer to [8]. This method clearly does not work for lists, because, in
general, if l is a list with length greater than one, there may be infinitely many
lists with length smaller than the length of l.
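The reduction just described can be sketched in Python (the names are mine): the auxiliary function, written fbar below, computes by primitive recursion the list of all earlier values f(0), ..., f(m-1), from which f(n) is then read off.

```python
def cov_natrec(n, step):
    """Course-of-values recursion on N, reduced to primitive recursion.
    step(m, prev) may inspect prev = [f(0), ..., f(m-1)]; fbar(m) returns
    that list, built by ordinary primitive recursion on m."""
    def fbar(m):
        values = []
        for i in range(m):
            values.append(step(i, values))
        return values
    return fbar(n + 1)[n]

# Example: the Fibonacci numbers, where f(n) needs the two previous values.
def fib(n):
    return cov_natrec(n, lambda i, prev: 1 if i < 2 else prev[i - 1] + prev[i - 2])
```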
In the explanation of intuitionistic predicate logic given by Heyting [7], the
logical constants are explained in terms of constructions. Martin-Löf's type theory is
a theory of constructions with such a strong type system that each proposition can be
expressed as a type, namely the type of constructions, expressing proofs of the propo-
sition, using Heyting's explanation. The purpose of this paper is to solve, in type
theory, the course-of-values equations
    f(nil) = c
    f(a.l) = g(a,l,f(d1(a,l)),...,f(dk(a,l)))

where

    length(di(a,l)) ≤ length(l)    (i = 1,...,k)

using a method which will illustrate the identification of propositions and
types. The idea is to give an informal proof of a natural course-of-values induction
rule. This proof is just a few lines long and, when formalized in type theory, a program
will be obtained which solves the equations.
It should be noted that the use of intuitionistic, or constructive, logic is here
not motivated by any philosophical argument. If we use non-constructive reasoning when
proving the existence of a program, it is in general no longer possible to obtain the
program from the proof. The idea of using a formalization of constructive mathematics
for programming is not new; it was already suggested by Bishop [2] and also by Constable
[4]. Constructive proofs are also used in the quite different context of automatic pro-
gram synthesis, see e.g. [9].
The program for the solution will be written in a completely formal notation, but
the correctness proof, i.e., the proof that the solution really satisfies the equations,
will be given by informal mathematics. However, in the proof, all steps are straight-
forward applications of rules of type theory. So, there are no problems in formalizing
the correctness proof and having it checked by a computer. This has actually been done,
using the implementation of type theory given by Petersson [13].
Finally, I will construct a program for quicksort in type theory, as an example
of the method.
The program for the solution of the course-of-values equations and the proof that
it satisfies the equations only involve a tiny part of type theory. However, since the
method I use for obtaining the solution involves the identification of propositions and
types, more rules of type theory are needed.
If we regard a type A as a proposition and we have a ∈ A, then we may read this as
"a is a proof of A"; since we are often not concerned with the details of the
construction of that proof, we may suppress the object a and write "A is true" or just "A".
If, in the rules of type theory, we read some of the types as propositions and suppress
the explicit proof-objects, all the rules of intuitionistic arithmetic may be obtained,
formulated in natural deduction. A proof of a proposition A in this formalization of
arithmetic can be mechanically transformed back into a proof in type theory, thereby
giving an object a and a derivation of a ∈ A. I will only give those rules which we
are actually going to use. In terms of propositions, these rules involve implication,
universal quantification and induction on natural numbers and lists. For further details
of type theory, I refer to Martin-Löf [10,11] and, for some programming examples, to
Nordström [12].
The objects of the types not only form proof-objects, but are also the programs
of type theory. Expressions for the objects are built up from variables by means
of various primitive forms, which will be given in connection with the rules, and by
means of abstraction (x1,...,xk)b and application b(a1,...,ak). We do not use the more
common lambda-notation for abstraction, because we want to reserve the lambda for the
objects in the function types. For abstraction and application, the rules of β- and
η-reduction hold, i.e., ((x1,...,xk)b)(a1,...,ak) ≡ b(a1,...,ak/x1,...,xk), where
b(a1,...,ak/x1,...,xk) denotes the result of substituting a1,...,ak for x1,...,xk
in b, and (x1,...,xk)(b(x1,...,xk)) ≡ b provided that x1,...,xk do not occur free
in b. The programs of type theory are always evaluated outermost first, i.e., lazy
evaluation is used, corresponding to "call by need" in a conventional programming
language.
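For readers more used to eager languages, lazy ("call by need") evaluation can be mimicked with memoized thunks; the following Python sketch is only an analogy of mine, not part of type theory.

```python
class Thunk:
    """A suspended computation, evaluated at most once when forced
    (a crude model of "call by need")."""
    def __init__(self, compute):
        self._compute = compute
        self._forced = False
        self._value = None

    def force(self):
        if not self._forced:
            self._value = self._compute()
            self._forced = True
            self._compute = None   # the closure is no longer needed
        return self._value

def lazy_if(b, then_thunk, else_thunk):
    # Like if-then-else under lazy evaluation: only the branch
    # that is actually selected gets evaluated.
    return then_thunk.force() if b else else_thunk.force()
```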
The rules are formulated in natural deduction and assumptions are written within
square brackets.
Natural numbers. We let N denote the type of natural numbers. The objects of type N
are 0 and s(a) provided a ∈ N, where s denotes the successor function. So, we have

N-introduction:
                   a ∈ N
    0 ∈ N       ----------
                 s(a) ∈ N

N-elimination:
    n ∈ N
    c ∈ C(0)
    e(x,y) ∈ C(s(x))   [x ∈ N, y ∈ C(x)]
    -------------------------------------
    rec(n,c,e) ∈ C(n)
The correspondence between the N-elimination rule and the induction rule can be
explained as follows. According to Heyting, a proposition is proved by giving a proof-
construction for it. In the antecedent of the N-elimination rule, we have c ∈ C(0), so
C(0) is true. We also have e(x,y) ∈ C(s(x)) provided that x ∈ N and y ∈ C(x), so
C(s(x)) is true provided that x ∈ N and C(x) is true. Hence, by induction, C(n) is
true for all natural numbers n.
We can also get the N-elimination rule from the induction rule. That C(0) is true,
means, according to Heyting's explanation, that we have a proof-object c and c ∈ C(0).
That C(s(x)) is true under the assumptions x ∈ N and C(x), means that we must have a
proof-object e(x,y) and e(x,y) ∈ C(s(x)) under the assumptions x ∈ N and y ∈ C(x). So,
by primitive recursion, we have an object of type C(n) for each natural number n, and
the constant rec is introduced to express this object, i.e., rec(n,c,e) ∈ C(n).
Lists. We let List(A) denote the type of lists whose elements are objects of type A.
List-introduction:
                            a ∈ A     l ∈ List(A)
    nil ∈ List(A)        -------------------------
                               a.l ∈ List(A)

List-elimination:
    l ∈ List(A)
    c ∈ C(nil)
    e(x,y,z) ∈ C(x.y)   [x ∈ A, y ∈ List(A), z ∈ C(y)]
    ----------------------------------------------------
    listrec(l,c,e) ∈ C(l)
Function application is expressed by the constant ap, which is computed in the usual
way:

    ap((λx)b(x),a) = b(a)

From this computation rule, we get the elimination rule for Π:

Π-elimination:
    a ∈ A     c ∈ (Πx ∈ A)B(x)
    ---------------------------
    ap(c,a) ∈ B(a)
⊃-introduction:
      B   [A]
    ----------
      A ⊃ B

⊃-elimination:
    A     A ⊃ B
    ------------
         B
Boolean. The type Boolean has two objects, true and false.
The expression (if b then c else d) is computed in the usual way: its value is the
value of c if b has the value true, and the value of d if b has the value false.

We now return to the course-of-values equations

    f(nil) = c
    f(a.l) = g(a,l,f(d1(a,l)),...,f(dk(a,l)))
In order to solve these equations, I will give an informal proof of the course-
of-values induction rule
    l ∈ List(A)
    C(nil)
    C(a.s)   [a ∈ A, s ∈ List(A), C(d1(a,s)),..., C(dk(a,s))]
    length(di(a,s)) ≤ length(s)   [a ∈ A, s ∈ List(A)]   (i = 1,...,k)
    -------------------------------------------------------------------
    C(l)
where C(l) is a proposition depending on l ∈ List(A). Since propositions and types are
identified in type theory, we know that for each proposition in the antecedents of the
course-of-values induction rule there must exist an explicit construction of an object
of the corresponding type. A proof of this induction rule is a method of going from
proofs of the antecedents to a proof of C(l). Hence, given a proof of this induction
rule, we can give a corresponding recursion rule in type theory, giving an object f(l)
of type C(l):
    l ∈ List(A)
    c ∈ C(nil)
    g(a,s,y1,...,yk) ∈ C(a.s)   [a ∈ A, s ∈ List(A), y1 ∈ C(d1(a,s)),..., yk ∈ C(dk(a,s))]
    length(di(a,s)) ≤ length(s)   [a ∈ A, s ∈ List(A)]   (i = 1,...,k)
    ---------------------------------------------------------------------------------------
    f(l) ∈ C(l)
To prove the course-of-values induction rule, I will first prove

    (∀l ∈ List(A))(length(l) ≤ n ⊃ C(l))    (I)

for all n ∈ N; call this proposition P(n). The proof of P(n) is by induction on
n ∈ N. P(0) is obviously true, since length(l) ≤ 0 implies l = nil
and we know that C(nil) is true. Assume that P(x) is true. We then have to show that

    length(l) ≤ s(x) ⊃ C(l)
holds for all l ∈ List(A). This will be done by list induction. The case of l = nil
is trivial since C(nil) is true. (For simplicity, I here treat the case k = 1, with a
single function d; the general case is similar.) Let l = u.v and assume
length(u.v) ≤ s(x). Then length(d(u,v)) ≤ length(v) ≤ x so, by P(x), we get

    C(d(u,v))

From this and the third premiss of the induction rule we are about to prove, we get

    C(u.v)

and thereby (I) is proved. Hence, by putting n = length(l) in (I), the course-of-values
induction rule is proved.
We now have to repeat this proof in type theory in order to get the program for
the solution. From the proof of (I) we will get a function F and a derivation of

    F(n) ∈ (Πl ∈ List(A))(length(l) ≤ n ⊃ C(l))

remembering that the Π- and ⊃-types interpret the universal quantifier and implication,
respectively. The solution f can then be defined by

    f(l) ≡ ap(ap(F(length(l)),l),...)

where the dots denote a proof, or construction, of the proposition length(l) ≤ length(l),
i.e., ... ∈ length(l) ≤ length(l). What construction the dots denote depends on the
particular definition we have chosen of the proposition m ≤ n. When we have obtained F,
it is easy to see that the construction denoted by the dots never will be used in the
computation of f(l); it is only the existence of a construction that is needed. So,
there is no reason for defining ≤ and then constructing the object denoted by the dots.
I will use the dot notation in similar situations below.
We proved P(n) by induction and the corresponding rule of type theory is N-elimi-
nation, so F(n) will be constructed by recursion. Since c ∈ C(nil) we clearly have,
by ⊃- and Π-introduction,

    (λl)(λp)c ∈ (Πl ∈ List(A))(length(l) ≤ 0 ⊃ C(l))

Assume

    n ∈ N (1), x ∈ N (2), y ∈ (Πl ∈ List(A))(length(l) ≤ x ⊃ C(l)) (3), l ∈ List(A) (4),
    u ∈ A (5), v ∈ List(A) (6) and p ∈ length(u.v) ≤ s(x) (7)

From (7) we get, since length(u.v) = s(length(v)),

    length(v) ≤ x (8)
From (8) and the premiss length(d(u,v)) ≤ length(v), we get, by the transitivity of ≤,

    length(d(u,v)) ≤ x (9)

Substituting this into the third premiss of the recursion rule we are proving gives

    (λp)g(u,v,ap(ap(y,d(u,v)),...)) ∈ length(u.v) ≤ s(x) ⊃ C(u.v) (10)
whereby the assumption (7) is discharged. Since we have (4) and a derivation of (10)
from (5) and (6), we can use List-elimination to get
    (λl)listrec(l,(λp)c,(u,v,w)(λp)g(u,v,ap(ap(y,d(u,v)),...))) ∈
        (Πl ∈ List(A))(length(l) ≤ s(x) ⊃ C(l))    (12)
By N-elimination, we can now define

    F(n) ≡ rec(n,
               (λl)(λp)c,
               (x,y)(λl)listrec(l,(λp)c,(u,v,w)(λp)g(u,v,ap(ap(y,d(u,v)),...))))

since we have (1) and a derivation of (12) from the assumptions (2) and (3).
If the type C does not depend on l ∈ List(A), we can simply take away the dependence
on the proof of length(l) ≤ n and change the definition of F to

    F(n) ≡ rec(n,(λl)c,(x,y)(λl)listrec(l,c,(u,v,w)g(u,v,ap(y,d(u,v)))))
In the general case, with the k functions d1,...,dk, we instead get

    F(n) ≡ rec(n,
               (λl)c,
               (x,y)(λl)listrec(l,c,(u,v,w)g(u,v,ap(y,d1(u,v)),...,ap(y,dk(u,v)))))
I leave out the simple proof that F(n) is of type List(A) → C when n ∈ N. The solution
f is then defined by

    f(l) ≡ ap(F(length(l)),l)    (1)

The proof that f solves the equations is based on the lemma that

    length(l) ≤ n+1 ⊃ ap(F(n+1),l) = ap(F(n+2),l)    (2)

holds for all l ∈ List(A). This will be done by list induction. By the definition of
F, we have ap(F(n+1),nil) = c = ap(F(n+2),nil), which takes care of the case l = nil.
Let l = a.s and assume that length(l) ≤ n+1. The definition of F gives

    ap(F(n+1),l) = g(a,s,ap(F(n),d(a,s)))  and  ap(F(n+2),l) = g(a,s,ap(F(n+1),d(a,s)))

and, since length(d(a,s)) ≤ length(s) ≤ n, the lemma applied to d(a,s) gives

    ap(F(n),d(a,s)) = ap(F(n+1),d(a,s))

and hence

    ap(F(n+1),l) = ap(F(n+2),l)

(1) and (2) give f(a.l) = g(a,l,f(d(a,l))), i.e., the second equation is satisfied.
Remark. The proof of the course-of-values induction rule determines the solution of the
equations. So, if we choose another proof, we will get a different program. We proved
the induction rule by proving (∀l ∈ List(A))(length(l) ≤ n ⊃ C(l)) by induction on n ∈ N
and, in the induction step, we used list induction. But this list induction can be
replaced by a separation into two cases: length(l) ≤ x and length(l) = s(x). If we work
out the details of this proof, we will get the following definition of F:
    F(n) ≡ rec(n,
               (λl)c,
               (x,y)(λl)(if length(l) ≤ x then ap(y,l)
                         else listrec(l,c,(u,v,w)g(u,v,ap(y,d(u,v))))))
This program is somewhat longer than the one we obtained above and there will also
occur superfluous steps in the computation of f(l). However, this definition of F has
one advantage; it is easier to see that f is a solution to the course-of-values equa-
tions, because the lemma now becomes trivial.
QUICKSORT.
Assume that A is a type on which a boolean-valued relation x ≤ a is given. Define

    concat(l,s) ≡ listrec(l,s,(x,y,z)(x.z))
    filter≤(a,l) ≡ listrec(l,nil,(x,y,z)(if x ≤ a then x.z else z))
    filter>(a,l) ≡ listrec(l,nil,(x,y,z)(if x ≤ a then z else x.z))

Quicksort is then the solution of the course-of-values equations

    quicksort(nil) = nil
    quicksort(a.l) = concat(quicksort(filter≤(a,l)), a.quicksort(filter>(a,l)))
I leave out the simple proofs of (1) and (2). We prove (3), the proposition
length(filter≤(a,l)) ≤ length(l), by list induction. By definition, we have
filter≤(a,nil) = nil. Assume that (3) holds for v ∈ List(A). For l = u.v, where
u ∈ A, we get two cases:

1) u ≤ a = true:
    length(filter≤(a,u.v)) = length(u.filter≤(a,v)) ≤ length(v) + 1 = length(u.v)

2) u ≤ a = false:
    length(filter≤(a,u.v)) = length(filter≤(a,v)) ≤ length(v) < length(u.v)
By the theorem, we know that our way of defining quicksort in type theory gives a func-
tion from List(A) to List(A), satisfying the claimed equations.
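As a check on the construction, the definitions of concat, the two filter functions and the quicksort equations can be transcribed directly into Python (my own sketch; `filter_le` and `filter_gt` stand for filter≤ and filter>, and Python's general recursion is used where type theory needs the course-of-values construction above):

```python
def listrec(l, c, g):
    # Primitive recursion on lists: listrec(nil,c,g) = c,
    # listrec(a.l,c,g) = g(a, l, listrec(l,c,g)).
    return c if not l else g(l[0], l[1:], listrec(l[1:], c, g))

def concat(l, s):
    return listrec(l, s, lambda x, y, z: [x] + z)

def filter_le(a, l):
    # Keep the elements of l that are <= a.
    return listrec(l, [], lambda x, y, z: [x] + z if x <= a else z)

def filter_gt(a, l):
    # Keep the elements of l that are > a.
    return listrec(l, [], lambda x, y, z: z if x <= a else [x] + z)

def quicksort(l):
    if not l:
        return []
    a, rest = l[0], l[1:]
    return concat(quicksort(filter_le(a, rest)),
                  [a] + quicksort(filter_gt(a, rest)))
```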
References.