
THE IDENTIFICATION OF PROPOSITIONS AND TYPES IN

MARTIN-LÖF'S TYPE THEORY: A PROGRAMMING EXAMPLE

Jan Smith
Department of Computer Science
University of Göteborg
Chalmers University of Technology
S-412 96 Göteborg, Sweden

INTRODUCTION.

Bishop's book Foundations of Constructive Analysis [1] shows that practically all
of classical analysis can be rebuilt constructively. Its appearance in 1967 gave rise
to a number of formalizations of constructive mathematics. Per Martin-Löf proposed
that his formalization, intuitionistic type theory, may also be used as a programming
language. His ideas are unfolded in [11]. Viewed as a programming language, type theory
is a functional language with a very rich type structure. Compared with other functional
languages, like Hope [3], ML [5] or SASL [14], one of the main differences is that gen-
eral recursion is not allowed in type theory. The use of general recursion equations is
replaced by primitive recursion. For natural numbers, this means that the only recursive
definitions allowed are of the form
    f(0) = c
    f(s(n)) = g(n,f(n))
where s(n) denotes the successor of n; and correspondingly for lists
    f(nil) = c
    f(a.l) = g(a,l,f(l))
where a.l denotes the list obtained by adding the object a to the left of the list l.
The use of this restrictive form of recursion makes it possible to give a very simple
and coherent semantics for type theory, the one developed in [11]. It also makes it
easier to reason about programs; for example, the rule of type theory involving primi-
tive recursion for natural numbers corresponds closely to the induction rule for natural
numbers. This correspondence is explained below in connection with the formulation of
the rules for natural numbers.
At first sight, it may seem like a severe limitation not to have general recursion
available. But this is not so, because definition by primitive recursion may involve
functions of higher type. For instance, all the recursive functions that are provably
total in first order arithmetic are definable in type theory. One way of seeing this
is to note that the theory of primitive recursive functionals of finite type in Gödel
[6] is a subtheory of type theory, and then use the theorem that all recursive functions
provably total in first order arithmetic are definable in Gödel's theory. Of course, a
metamathematical result like this may not be of much use in actually constructing
workable programs.
Many algorithms used in programming are directly definable by primitive recursion,

but there are important exceptions, for example the program quicksort, which sorts a list s in the following way:

    If s = nil then quicksort of s is nil. If s = a.l, partition l into two lists l≤ and l>,
    where l≤ consists of those elements of l that are smaller than or equal to a and l> of
    those that are greater than a. Apply quicksort to l≤ and l> and concatenate the
    resulting lists with a in between; this list is the value of quicksort on s.

Quicksort is clearly a program that cannot be defined directly by primitive recursion on lists, because at least one of the lists l≤ and l> must have a length less than that of l. Moreover, neither l≤ nor l> need be an initial segment of l.
However, quicksort satisfies a course-of-values recursion of the form

    f(nil) = c
    f(a.l) = g(a,l,f(d1(a,l)),f(d2(a,l)))

where

    length(d1(a,l)) ≤ length(l)  and  length(d2(a,l)) ≤ length(l)
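In a language with general recursion, such as Haskell, the description of quicksort above can be written down directly, and its two recursive calls exhibit exactly this course-of-values pattern. The following sketch is only an illustration; the helper names d1 and d2 are chosen to match the schema and are not part of the paper's formal development.

    -- Quicksort by general recursion; d1 and d2 play the roles of d1(a,l)
    -- and d2(a,l) above, and both results have length at most length l.
    quicksort :: Ord a => [a] -> [a]
    quicksort []      = []
    quicksort (a : l) = quicksort (d1 a l) ++ [a] ++ quicksort (d2 a l)
      where
        d1 x xs = [y | y <- xs, y <= x]
        d2 x xs = [y | y <- xs, y > x]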

For natural numbers there is a well-known method from recursion theory for reducing course-of-values recursion in general to primitive recursion by introducing an auxiliary function which stores all the values f(m) for m < n on a list; for the details, I refer to [8]. This method clearly does not work for lists, because, in general, if l is a list with length greater than one, there may be infinitely many lists with length smaller than the length of l.
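To illustrate the reduction for natural numbers just mentioned, here is a hedged Haskell sketch (not the construction of [8] verbatim; the names covNat and fbar, and the use of Int for natural numbers, are assumptions of the sketch). The auxiliary function fbar, defined by primitive recursion, returns the list of all earlier values, and the course-of-values function reads off its last element.

    -- Course-of-values recursion on the natural numbers:
    --   f 0     = c
    --   f (n+1) = g n [f 0, ..., f n]
    -- reduced to primitive recursion via fbar n = [f 0, ..., f n].
    covNat :: c -> (Int -> [c] -> c) -> Int -> c
    covNat c g n = last (fbar n)
      where
        -- fbar is primitive recursive: each stage extends the previous list
        fbar 0 = [c]
        fbar m = let prev = fbar (m - 1) in prev ++ [g (m - 1) prev]

    -- Example: Fibonacci numbers, where each value needs the two previous ones.
    fib :: Int -> Integer
    fib = covNat 0 (\n prev -> if n == 0 then 1 else prev !! n + prev !! (n - 1))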
In the explanation of intuitionistic predicate logic given by Heyting [7], the logical constants are explained in terms of constructions. Martin-Löf's type theory is a theory of constructions with such a strong type system that each proposition can be expressed as a type, namely the type of constructions expressing proofs of the proposition, using Heyting's explanation. The purpose of this paper is to solve, in type theory, the course-of-values equations

    f(nil) = c
    f(a.l) = g(a,l,f(d1(a,l)),...,f(dk(a,l)))

where

    length(di(a,l)) ≤ length(l)   (i = 1,...,k)
thereby using a method which will illustrate the identification of propositions and
types. The idea is to give an informal proof of a natural course-of-values induction
rule. This proof is just a few lines long and, when formalized in type theory, a program
will be obtained which solves the equations.
It should be noted that the use of intuitionistic, or constructive, logic is here
not motivated by any philosophical argument. If we use non-constructive reasoning when
proving the existence of a program, it is in general no longer possible to obtain the
program from the proof. The idea of using a formalization of constructive mathematics
for programming is not new; it was already suggested by Bishop [2] and also by Constable
[4]. Constructive proofs are also used in the quite different context of automatic pro-
gram synthesis, see e.g. [9].
The program for the solution will be written in a completely formal notation, but

the correctness proof, i.e., the proof that the solution really satisfies the equations,
will be given by informal mathematics. However, in the proof, all steps are straight-
forward applications of rules of type theory. So, there are no problems in formalizing
the correctness proof and having it checked by a computer. This has actually been done,
using the implementation of type theory given by Petersson [13].
Finally, I will construct a program for quicksort in type theory, as an example
of the method.

SOME RULES OF TYPE THEORY.

The program for the solution of the course-of-values equations and the proof that
it satisfies the equations only involve a tiny part of type theory. However, since the
method I use for obtaining the solution involves the identification of propositions and
types, more rules of type theory are needed.
If we regard a type A as a proposition and we have a ∈ A then we may read this as "a is a proof of A" and since we are often not concerned with the details of the construction of that proof, we may suppress the object a and write "A is true" or just "A". If, in the rules of type theory, we read some of the types as propositions and suppress the explicit proof-objects, all the rules of intuitionistic arithmetic may be obtained, formulated in natural deduction. A proof of a proposition A in this formalization of arithmetic can be mechanically transformed back into a proof in type theory, thereby giving an object a and a derivation of a ∈ A. I will only give those rules which we are actually going to use. In terms of propositions, these rules involve implication,
universal quantification and induction on natural numbers and lists. For further details
of type theory, I refer to Martin-Löf [10,11] and, for some programming examples, to Nordström [12].
The objects of the types not only form proof-objects; they are also the programs of type theory. Expressions for the objects are built up from variables by means of various primitive forms, which will be given in connection with the rules, and by means of abstraction (x1,...,xk)b and application b(a1,...,ak). We do not use the more common lambda-notation for abstraction, because we want to reserve the lambda for the objects in the function types. For abstraction and application, the rules of β- and η-reduction hold, i.e., ((x1,...,xk)b)(a1,...,ak) ≡ b(a1,...,ak/x1,...,xk), where b(a1,...,ak/x1,...,xk) denotes the result of substituting a1,...,ak for x1,...,xk in b, and (x1,...,xk)(b(x1,...,xk)) ≡ b provided that x1,...,xk do not occur free in b. The programs of type theory are always evaluated outermost first, i.e., lazy evaluation is used, corresponding to "call by need" in a conventional programming language.
The rules are formulated in natural deduction and assumptions are written within
square brackets.

Natural numbers. We let N denote the type of natural numbers. The objects of type N are 0 and s(a) provided a ∈ N, where s denotes the successor function. So, we have

    N-introduction:

        0 ∈ N

        a ∈ N
        ---------
        s(a) ∈ N

We want to define functions on the natural numbers by primitive recursion and we therefore introduce the constant rec, which is computed according to the rules

    rec(0,c,e) = c
    rec(s(a),c,e) = e(a,rec(a,c,e))

So, defining f by f(n) ≡ rec(n,c,e) corresponds to the introduction of f by the primitive recursion

    f(0) = c
    f(s(a)) = e(a,f(a))

Let C(x) be a type for x ∈ N. We then have the following rule, which should be obvious from the computation rules for rec

    N-elimination:

        n ∈ N     c ∈ C(0)     e(x,y) ∈ C(s(x))  [x ∈ N, y ∈ C(x)]
        -------------------------------------------------------------
        rec(n,c,e) ∈ C(n)

For instance, defining plus by plus(m,n) ≡ rec(n,m,(x,y)s(y)) we can use N-elimination, by putting C(x) ≡ N and e(x,y) ≡ s(y), to get plus(m,n) ∈ N [m ∈ N, n ∈ N].
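A hedged Haskell transcription of rec and the plus example may make the computation rules concrete; Nat, natrec and the curried plus below are illustrative names, not the paper's notation.

    data Nat = Zero | Succ Nat

    -- natrec n c e computes rec(n,c,e):
    --   rec(0,c,e)    = c
    --   rec(s(a),c,e) = e(a, rec(a,c,e))
    natrec :: Nat -> c -> (Nat -> c -> c) -> c
    natrec Zero     c _ = c
    natrec (Succ a) c e = e a (natrec a c e)

    -- plus(m,n) = rec(n, m, (x,y) s(y))
    plus :: Nat -> Nat -> Nat
    plus m n = natrec n m (\_ y -> Succ y)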
By regarding C(n) as a proposition and leaving out its proof-objects in the
N-elimination rule, we get the induction rule for natural numbers:

        n ∈ N     C(0)     C(s(x))  [x ∈ N, C(x)]
        -------------------------------------------
        C(n)

The correspondence between the N-elimination rule and the induction rule can be
explained as follows. According to Heyting, a proposition is proved by giving a proof-
construction for it. In the antecedent of the N-elimination rule, we have c ∈ C(0), so C(0) is true. We also have e(x,y) ∈ C(s(x)) provided that x ∈ N and y ∈ C(x), so C(s(x)) is true provided that x ∈ N and C(x) is true. Hence, by induction, C(n) is true for all natural numbers n.
We can also get the N-elimination rule from the induction rule. That C(0) is true means, according to Heyting's explanation, that we have a proof-object c and c ∈ C(0). That C(s(x)) is true under the assumptions x ∈ N and C(x) means that we must have a proof-object e(x,y) and e(x,y) ∈ C(s(x)) under the assumptions x ∈ N and y ∈ C(x). So, by primitive recursion, we have an object of type C(n) for each natural number n, and the constant rec is introduced to express this object, i.e., rec(n,c,e) ∈ C(n).

Lists. We let List(A) denote the type of lists whose elements are objects of type A.

    List-introduction:

        nil ∈ List(A)

        a ∈ A     l ∈ List(A)
        -----------------------
        a.l ∈ List(A)

As for natural numbers, we want to introduce functions on lists by primitive recursion. Hence, we introduce the constant listrec, which is computed according to the rules

    listrec(nil,c,e) = c
    listrec(a.l,c,e) = e(a,l,listrec(l,c,e))

So, defining f by f(l) ≡ listrec(l,c,e) corresponds to the introduction of f by the primitive list recursion

    f(nil) = c
    f(a.l) = e(a,l,f(l))
Let C(l) be a type for l ∈ List(A). We then have the following rule, which should be obvious from the computation rules for listrec,

    List-elimination:

        l ∈ List(A)     c ∈ C(nil)     e(x,y,z) ∈ C(x.y)  [x ∈ A, y ∈ List(A), z ∈ C(y)]
        -----------------------------------------------------------------------------------
        listrec(l,c,e) ∈ C(l)

For instance, defining length by length(l) ≡ listrec(l,0,(x,y,z)s(z)) we can use List-elimination, by putting C(l) ≡ N and e(x,y,z) ≡ s(z), to get length(l) ∈ N [l ∈ List(A)].
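Similarly, listrec and the length example admit a direct Haskell transcription (again a sketch with illustrative names; List(A) is rendered as the Haskell list type [a], and lengthH avoids clashing with the Prelude).

    -- listrecH l c e computes listrec(l,c,e):
    --   listrec(nil,c,e) = c
    --   listrec(a.l,c,e) = e(a, l, listrec(l,c,e))
    listrecH :: [a] -> c -> (a -> [a] -> c -> c) -> c
    listrecH []      c _ = c
    listrecH (a : l) c e = e a l (listrecH l c e)

    -- length(l) = listrec(l, 0, (x,y,z) s(z))
    lengthH :: [a] -> Int
    lengthH l = listrecH l 0 (\_ _ z -> z + 1)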
As for natural numbers, we can obtain an induction rule for lists from the elimi-
nation rule:

        l ∈ List(A)     C(nil)     C(x.y)  [x ∈ A, y ∈ List(A), C(y)]
        ----------------------------------------------------------------
        C(l)

Cartesian product of a family of types. This type is introduced in order to allow us to reason about functions and to introduce the universal quantifier and implication. If A is a type and B(x) is a type under the assumption x ∈ A, then we may introduce the Cartesian product (Πx∈A)B(x). An object of this type is a function which, when applied to an object a of type A, gives an object of type B(a). The functions are formed by means of lambda-abstraction.

    Π-introduction:

        b(x) ∈ B(x)  [x ∈ A]
        -------------------------
        (λx)b(x) ∈ (Πx∈A)B(x)

Function application is expressed by the constant ap, which is computed in the usual way:

    ap((λx)b(x),a) = b(a)

From this computation rule, we get the elimination rule for Π:

    Π-elimination:

        a ∈ A     c ∈ (Πx∈A)B(x)
        ---------------------------
        ap(c,a) ∈ B(a)

Heyting's explanation of the universal quantifier is that (∀x∈A)B(x) is true if we can give a function which, when applied to an object a of type A, gives a proof of the proposition B(a). Reading "proof of the proposition B(a)" as "object of the type B(a)", we see that the proposition (∀x∈A)B(x) corresponds to the type (Πx∈A)B(x). As before, if we suppress some of the explicit constructions in the Π-rules, we obtain the natural deduction rules for the universal quantifier.

    ∀-introduction:                    ∀-elimination:

        B(x)  [x ∈ A]                      a ∈ A     (∀x∈A)B(x)
        ---------------                    -----------------------
        (∀x∈A)B(x)                         B(a)

If B does not depend on x, we define the function type from A to B by

    A → B ≡ (Πx∈A)B

From the rules for Π, we get, as special cases,

    →-introduction:                    →-elimination:

        b(x) ∈ B  [x ∈ A]                  a ∈ A     c ∈ A → B
        -------------------                ---------------------
        (λx)b(x) ∈ A → B                   ap(c,a) ∈ B

Heyting's explanation of the implication is that A ⊃ B is true provided we can give a function which, when applied to a proof of the proposition A, gives a proof of the proposition B. Reading "proposition" as "type", we see that A ⊃ B corresponds to A → B. We get the natural deduction rules for ⊃ from the rules for →:

    ⊃-introduction:                    ⊃-elimination:

        B  [A]                             A     A ⊃ B
        --------                           -----------
        A ⊃ B                              B

Boolean. The type Boolean has two objects, true and false.

    Boolean-introduction:

        true ∈ Boolean          false ∈ Boolean

The expression (if b then c else d) is computed in the usual way, which gives

    Boolean-elimination:

        b ∈ Boolean     c ∈ C(true)     d ∈ C(false)
        -----------------------------------------------
        (if b then c else d) ∈ C(b)

COURSE-OF-VALUES RECURSION ON LISTS.

Let A and C be types and assume that we have

    c ∈ C
    g(a,l,y1,...,yk) ∈ C  [a ∈ A, l ∈ List(A), y1 ∈ C, ..., yk ∈ C]

and

    di(a,l) ∈ List(A)  [a ∈ A, l ∈ List(A)]   (i = 1,...,k)

where

    length(di(a,l)) ≤ length(l)  [a ∈ A, l ∈ List(A)]

The course-of-values equations to be solved are

    f(nil) = c
    f(a.l) = g(a,l,f(d1(a,l)),...,f(dk(a,l)))
In order to solve these equations, I will give an informal proof of the course-
of-values induction rule

        l ∈ List(A)
        C(nil)
        C(a.s)  [a ∈ A, s ∈ List(A), C(d1(a,s)), ..., C(dk(a,s))]
        length(di(a,s)) ≤ length(s)  [a ∈ A, s ∈ List(A)]   (i = 1,...,k)
        --------------------------------------------------------------------
        C(l)

where C(l) is a proposition depending on l ∈ List(A). Since propositions and types are identified in type theory, we know that for each proposition in the antecedents of the course-of-values induction rule there must exist an explicit construction of an object of the corresponding type. A proof of this induction rule is a method of going from proofs of the antecedents to a proof of C(l). Hence, given a proof of this induction rule, we can give a corresponding recursion rule in type theory, giving an object f(l) of type C(l):

        l ∈ List(A)
        c ∈ C(nil)
        g(a,s,y1,...,yk) ∈ C(a.s)  [a ∈ A, s ∈ List(A), y1 ∈ C(d1(a,s)), ..., yk ∈ C(dk(a,s))]
        length(di(a,s)) ≤ length(s)  [a ∈ A, s ∈ List(A)]   (i = 1,...,k)
        ------------------------------------------------------------------------------------------
        f(l) ∈ C(l)

When we have constructed this function f, it is possible to see that it satisfies the course-of-values equations above in the more general case when the type C depends on l ∈ List(A). Since I do not know of any interesting applications of this more general situation, I will carry out the simplifications of f that are possible when C does not depend on l ∈ List(A), and then show that the function so obtained satisfies our original equations. In order to simplify the notation, I assume that we only have one function d1, which I write d. The arguments will be valid for the case of several functions d1,...,dk without any changes.
The identification of propositions and types means that we can work most of the
time using propositions, and then in the final stage, we can introduce the explicit
constructions, which will give us programs. The course-of-values induction rule has a
very short and simple proof. To get a program from this proof is a tedious task of
transforming the details of the proof to type theory. This could, however, be automated;
it is the proof of the induction rule that is the creative part of the construction
of the program.
The course-of-values induction rule will be proved by proving

    P(n) ≡ (∀l∈List(A))(length(l) ≤ n ⊃ C(l))   (1)

by induction on n ∈ N. P(0) is obviously true, since length(l) ≤ 0 implies l = nil and we know that C(nil) is true. Assume that P(x) is true. We then have to show that

    length(l) ≤ s(x) ⊃ C(l)   (2)

holds for all l ∈ List(A). This will be done by list induction. The case of l = nil is trivial since C(nil) is true. Let l = u.v and assume

    length(u.v) ≤ s(x)

from which we get

    length(d(u,v)) ≤ x   (3)

because d is a length decreasing function and ≤ is transitive. As induction hypothesis, we have assumed that P(x) is true, which, together with (3), gives

    C(d(u,v))

From this and the third premiss of the induction rule we are about to prove, we get

    C(u.v)

and thereby (1) is proved. Hence, by putting n = length(l) in (1), the course-of-values induction rule is proved.
We now have to repeat this proof in type theory in order to get the program for the solution. From the proof of (1) we will get a function F and a derivation of

    F(n) ∈ (Πl∈List(A))(length(l) ≤ n → C(l))

remembering that the Π- and →-types interpret the universal quantifier and implication, respectively. The solution f can then be defined by

    f(l) ≡ ap(ap(F(length(l)),l),...)

where the dots denote a proof, or construction, of the proposition length(l) ≤ length(l), i.e., ... ∈ length(l) ≤ length(l). What construction the dots denote depends on the particular definition we have chosen of the proposition m ≤ n. When we have obtained F, it is easy to see that the construction denoted by the dots never will be used in the computation of F(n); it is only the existence of a construction that is needed. So, there is no reason for defining ≤ and then constructing the object denoted by the dots.
I will use the dot notation in similar situations below.
We proved P(n) by induction and the corresponding rule of type theory is N-elimination, so F(n) will be constructed by recursion. Since c ∈ C(nil) we clearly have, by →- and Π-introduction,

    (λl)(λp)c ∈ (Πl∈List(A))(length(l) ≤ 0 → C(l))   (1)


Assume

    x ∈ N   (2)    and    y ∈ (Πl∈List(A))(length(l) ≤ x → C(l))   (3)

We now have to construct an object of type (Πl∈List(A))(length(l) ≤ s(x) → C(l)) and this will be done by list recursion. Clearly

    (λp)c ∈ length(nil) ≤ s(x) → C(nil)   (4)



Assume

    u ∈ A   (5),    v ∈ List(A)   (6)    and    p ∈ length(u.v) ≤ s(x)   (7)

Since length(u.v) = s(length(v)), there obviously exists a construction of the proposition length(v) ≤ x under the assumption (7):

    ... ∈ length(v) ≤ x   (8)

From (8) and the premiss length(d(u,v)) ≤ length(v), we get, by the transitivity of ≤,

    ... ∈ length(d(u,v)) ≤ x

which, together with the induction hypothesis (3), gives

    ap(ap(y,d(u,v)),...) ∈ C(d(u,v))

Substituting this into the third premiss of the recursion rule we are proving gives

    g(u,v,ap(ap(y,d(u,v)),...)) ∈ C(u.v)   (9)

→-introduction on (9) gives

    (λp)g(u,v,ap(ap(y,d(u,v)),...)) ∈ length(u.v) ≤ s(x) → C(u.v)   (10)

whereby the assumption (7) is discharged. Since we have (4) and a derivation of (10) from (5) and (6), we can use List-elimination to get

    listrec(l,(λp)c,(u,v,w)(λp)g(u,v,ap(ap(y,d(u,v)),...))) ∈
        length(l) ≤ s(x) → C(l)   [l ∈ List(A)]   (11)

Π-introduction on (11) gives

    (λl)listrec(l,(λp)c,(u,v,w)(λp)g(u,v,ap(ap(y,d(u,v)),...))) ∈
        (Πl∈List(A))(length(l) ≤ s(x) → C(l))   (12)

We can now define F by

    F(n) ≡
      rec(n,
          (λl)(λp)c,
          (x,y)(λl)listrec(l,(λp)c,(u,v,w)(λp)g(u,v,ap(ap(y,d(u,v)),...))))

We then get, by N-elimination,

    F(n) ∈ (Πl∈List(A))(length(l) ≤ n → C(l))   [n ∈ N]

since we have (1) and a derivation of (12) from the assumptions (2) and (3).
If the type C does not depend on l ∈ List(A), we can simply take away the dependence on the proof of length(l) ≤ n and change the definition of F to

    F(n) ≡ rec(n,(λl)c,(x,y)(λl)listrec(l,c,(u,v,w)g(u,v,ap(y,d(u,v)))))

or, in the case of several functions d1,...,dk,

    F(n) ≡
      rec(n,
          (λl)c,
          (x,y)(λl)listrec(l,c,(u,v,w)g(u,v,ap(y,d1(u,v)),...,ap(y,dk(u,v)))))
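To make the shape of this program concrete, here is a hedged Haskell transcription for the case of a single function d (covListRec, natrec and listrec are names chosen for the sketch, and natural numbers are rendered as non-negative Ints). The first argument of F, here the number length l, only serves as a bound on the recursion depth.

    -- natrec n c e computes rec(n,c,e); listrec l c e computes listrec(l,c,e).
    natrec :: Int -> c -> (Int -> c -> c) -> c
    natrec 0 c _ = c
    natrec n c e = e (n - 1) (natrec (n - 1) c e)

    listrec :: [a] -> c -> (a -> [a] -> c -> c) -> c
    listrec []      c _ = c
    listrec (a : l) c e = e a l (listrec l c e)

    -- F(n) = rec(n, (λl)c, (x,y)(λl) listrec(l, c, (u,v,w) g(u,v, ap(y, d(u,v)))))
    -- f(l) = ap(F(length l), l)
    covListRec :: r                      -- c : the value at nil
               -> (a -> [a] -> r -> r)   -- g : combines a, l and f(d(a,l))
               -> (a -> [a] -> [a])      -- d : length (d a l) <= length l
               -> [a] -> r
    covListRec c g d l = bigF (length l) l
      where
        bigF n = natrec n (\_ -> c)
                   (\_ y l' -> listrec l' c (\u v _ -> g u v (y (d u v))))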

It now remains to show that f defined by f(l) ≡ ap(F(length(l)),l) satisfies the course-of-values equations; but, before we enter the details of this, let us see how the method works in a simple example. The solution to the equations

    h(nil) = 0
    h(a.l) = s(h(tail(l)))

where tail(l) ≡ listrec(l,nil,(u,v,w)v), is obtained by defining h(l) ≡ ap(H(length(l)),l) where

    H(n) ≡ rec(n,(λl)0,(x,y)(λl)listrec(l,0,(u,v,w)s(ap(y,tail(v)))))

The computation of h(3.2.4.5.nil) goes as follows:

    h(3.2.4.5.nil) = ap(H(4),3.2.4.5.nil) = s(ap(H(3),4.5.nil)) =
    s(s(ap(H(2),nil))) = s(s(0)) = 2
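The same small example can be re-run in Haskell (a sketch under the same assumptions as before: illustrative names, Ints for natural numbers); h [3,2,4,5] indeed evaluates to 2.

    natrec :: Int -> c -> (Int -> c -> c) -> c
    natrec 0 c _ = c
    natrec n c e = e (n - 1) (natrec (n - 1) c e)

    listrec :: [a] -> c -> (a -> [a] -> c -> c) -> c
    listrec []      c _ = c
    listrec (a : l) c e = e a l (listrec l c e)

    -- tail(l) = listrec(l, nil, (u,v,w) v)
    tail' :: [a] -> [a]
    tail' l = listrec l [] (\_ v _ -> v)

    -- H(n) = rec(n, (λl)0, (x,y)(λl) listrec(l, 0, (u,v,w) s(ap(y, tail(v)))))
    bigH :: Int -> [a] -> Int
    bigH n = natrec n (\_ -> 0) (\_ y l -> listrec l 0 (\_ v _ -> 1 + y (tail' v)))

    -- h(l) = ap(H(length(l)), l)
    h :: [a] -> Int
    h l = bigH (length l) l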

To show that the method of solving the course-of-values equations is correct,


the following theorem has to be proved:

Theorem. Let F be defined as above. Then F(n) ∈ List(A) → C [n ∈ N] and f(l) defined by

    f(l) ≡ ap(F(length(l)),l)

solves the course-of-values equations.

I leave out the simple proof that F(n) is of type List(A) → C when n ∈ N. The proof
that f solves the equations is based on the lemma:

Lemma. If n ∈ N and l ∈ List(A) then

    length(l) ≤ n implies ap(F(n),l) = ap(F(length(l)),l)

We will prove the lemma by proving

    (∀l∈List(A))(length(l) ≤ n ⊃ ap(F(n),l) = ap(F(n+1),l))   (1)

by induction on n ∈ N. Since length(l) ≤ 0 implies l = nil and, by the definition of F, ap(F(0),nil) = c = ap(F(1),nil), (1) holds for n = 0. As induction hypothesis, assume that (1) is true. We then have to show that

    length(l) ≤ n+1 ⊃ ap(F(n+1),l) = ap(F(n+2),l)

holds for all l ∈ List(A). This will be done by list induction. By the definition of F, we have ap(F(n+1),nil) = c = ap(F(n+2),nil), which takes care of the case l = nil. Let l = a.s and assume that length(l) ≤ n+1. The definition of F gives

    ap(F(n+1),l) = g(a,s,ap(F(n),d(a,s)))   (2)

and

    ap(F(n+2),l) = g(a,s,ap(F(n+1),d(a,s)))   (3)

Since length(d(a,s)) ≤ length(s) ≤ n, we can use the induction hypothesis to get

    ap(F(n),d(a,s)) = ap(F(n+1),d(a,s))

which, together with (2) and (3), gives

    ap(F(n+1),l) = ap(F(n+2),l)

and thereby the lemma is proved.



We can now prove that f(l) ≡ ap(F(length(l)),l) satisfies the course-of-values equations

    f(nil) = c
    f(a.l) = g(a,l,f(d(a,l)))

The first equation is an immediate consequence of the definition of F. The definition of F also gives

    f(a.l) ≡ ap(F(length(a.l)),a.l) =
    listrec(a.l,c,(u,v,w)g(u,v,ap(F(length(l)),d(u,v)))) = g(a,l,ap(F(length(l)),d(a,l)))   (1)

Because length(d(a,l)) ≤ length(l), we can apply the lemma to get

    ap(F(length(l)),d(a,l)) = ap(F(length(d(a,l))),d(a,l)) ≡ f(d(a,l))   (2)

(1) and (2) give f(a.l) = g(a,l,f(d(a,l))), i.e., the second equation is satisfied.

Remark. The proof of the course-of-values induction rule determines the solution of the equations. So, if we choose another proof, we will get a different program. We proved the induction rule by proving (∀l∈List(A))(length(l) ≤ n ⊃ C(l)) by induction on n ∈ N and in the induction step, we used list induction. But this list induction can be replaced by a separation into two cases: length(l) ≤ n and length(l) ≤ n+1. If we work out the details of this proof, we will get the following definition of F:

    F(n) ≡
      rec(n,
          (λl)c,
          (x,y)(λl)(if length(l) ≤ x then ap(y,l) else listrec(l,c,(u,v,w)g(u,v,ap(y,d(u,v))))))

This program is somewhat longer than the one we obtained above and there will also occur superfluous steps in the computation of f(l). However, this definition of F has one advantage: it is easier to see that f is a solution to the course-of-values equations, because the lemma now becomes trivial.
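The alternative F can be transcribed in the same hedged way (illustrative names, Ints for naturals); compared with the earlier covListRec sketch, the only change is the test on length l in the successor case, which lets the program fall back on the previous stage instead of always doing a list recursion.

    natrec :: Int -> c -> (Int -> c -> c) -> c
    natrec 0 c _ = c
    natrec n c e = e (n - 1) (natrec (n - 1) c e)

    listrec :: [a] -> c -> (a -> [a] -> c -> c) -> c
    listrec []      c _ = c
    listrec (a : l) c e = e a l (listrec l c e)

    covListRec' :: r -> (a -> [a] -> r -> r) -> (a -> [a] -> [a]) -> [a] -> r
    covListRec' c g d l0 = bigF (length l0) l0
      where
        -- F(n) = rec(n, (λl)c, (x,y)(λl) if length(l) <= x then ap(y,l)
        --              else listrec(l, c, (u,v,w) g(u,v, ap(y, d(u,v)))))
        bigF n = natrec n (\_ -> c)
                   (\x y l -> if length l <= x
                              then y l
                              else listrec l c (\u v _ -> g u v (y (d u v))))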

QUICKSORT.

Let A be a type and ≤ a Boolean valued function of two arguments defined on A, i.e., a ≤ b ∈ Boolean [a ∈ A, b ∈ A]. In order to give the course-of-values equations for quicksort, we have to define some functions:

    concat(l,s) ≡ listrec(l,s,(x,y,z)(x.z))
    filter≤(a,l) ≡ listrec(l,nil,(x,y,z)(if x ≤ a then x.z else z))

and

    filter>(a,l) ≡ listrec(l,nil,(x,y,z)(if x ≤ a then z else x.z))

The equations for quicksort are

    quicksort(nil) = nil
    quicksort(a.l) = concat(quicksort(filter≤(a,l)), a.quicksort(filter>(a,l)))

We must show that these equations satisfy the requirements on course-of-values equations, i.e. that

    nil ∈ List(A)   (1)
    concat(y1,a.y2) ∈ List(A)  [a ∈ A, y1 ∈ List(A), y2 ∈ List(A)]   (2)
    length(filter≤(a,l)) ≤ length(l)  [a ∈ A, l ∈ List(A)]   (3)

and

    length(filter>(a,l)) ≤ length(l)   (4)

I leave out the simple proofs of (1) and (2). We prove (3) by list induction. By definition, we have filter≤(a,nil) = nil. Assume that (3) holds for v ∈ List(A). For l = u.v, where u ∈ A, we get two cases:
1) u ≤ a = true
    length(filter≤(a,u.v)) = length(u.filter≤(a,v)) ≤ length(v) + 1 = length(u.v)
2) u ≤ a = false
    length(filter≤(a,u.v)) = length(filter≤(a,v)) ≤ length(v) < length(u.v)

(4) is proved in a similar way.


Now we can apply our method of solving course-of-values equations and define quicksort by

    quicksort(l) ≡ ap(Q(length(l)),l)

where

    Q(n) ≡
      rec(n,
          (λl)nil,
          (x,y)(λl)listrec(l,nil,(u,v,w)concat(ap(y,filter≤(u,v)), u.ap(y,filter>(u,v)))))

By the theorem, we know that our way of defining quicksort in type theory gives a func-
tion from List(A) to List(A), satisfying the claimed equations.
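As a final hedged sketch (same caveats as before: natrec, listrec, qsort and the other Haskell names are mine, Ints stand for the natural numbers, and Haskell's <= plays the role of the Boolean valued ≤), the whole construction for quicksort reads as follows.

    natrec :: Int -> c -> (Int -> c -> c) -> c
    natrec 0 c _ = c
    natrec n c e = e (n - 1) (natrec (n - 1) c e)

    listrec :: [a] -> c -> (a -> [a] -> c -> c) -> c
    listrec []      c _ = c
    listrec (a : l) c e = e a l (listrec l c e)

    -- concat(l,s), filter<=(a,l) and filter>(a,l), as defined above
    concat' :: [a] -> [a] -> [a]
    concat' l s = listrec l s (\x _ z -> x : z)

    filterLe, filterGt :: Ord a => a -> [a] -> [a]
    filterLe a l = listrec l [] (\x _ z -> if x <= a then x : z else z)
    filterGt a l = listrec l [] (\x _ z -> if x <= a then z else x : z)

    -- Q(n) = rec(n, (λl)nil, (x,y)(λl) listrec(l, nil,
    --          (u,v,w) concat(ap(y, filter<=(u,v)), u . ap(y, filter>(u,v)))))
    bigQ :: Ord a => Int -> [a] -> [a]
    bigQ n = natrec n (\_ -> [])
               (\_ y l -> listrec l []
                  (\u v _ -> concat' (y (filterLe u v)) (u : y (filterGt u v))))

    -- quicksort(l) = ap(Q(length(l)), l);  qsort [3,1,2] evaluates to [1,2,3]
    qsort :: Ord a => [a] -> [a]
    qsort l = bigQ (length l) l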

Acknowledgements. I am very grateful to Per Martin-Löf for many helpful suggestions.


I also would like to thank Bengt Nordström and Kent Petersson for many discussions on
type theory and programming.

References.

1. E. Bishop, Foundations of Constructive Analysis (McGraw-Hill 1967).
2. E. Bishop, Mathematics as a numerical language, in: Myhill, Kino and Vesley, eds., Intuitionism and Proof Theory, pp. 53-71 (North-Holland 1970).
3. R.M. Burstall, D.B. MacQueen and D.T. Sannella, Hope: An experimental applicative language, Proceedings of the 1980 LISP Conference, pp. 136-143.
4. R. Constable, Constructive Mathematics and Automatic Program Writers, Information Processing 71 (North-Holland 1972).
5. M.J. Gordon, A.J. Milner and C.P. Wadsworth, Edinburgh LCF, Lecture Notes in Computer Science 78 (Springer-Verlag 1979).
6. K. Gödel, Über eine bisher noch nicht benützte Erweiterung des finiten Standpunktes, Dialectica, Vol. 12, 1958, pp. 280-287.
7. A. Heyting, Intuitionism, an Introduction (North-Holland 1956).
8. S.C. Kleene, Introduction to Metamathematics (North-Holland 1952).
9. Z. Manna and R. Waldinger, A deductive approach to program synthesis, ACM Transactions on Programming Languages and Systems, Vol. 2, No. 1, pp. 92-121, 1980.
10. P. Martin-Löf, An Intuitionistic Theory of Types: Predicative Part, Logic Colloquium '73, Rose and Shepherdson, eds., pp. 73-118 (North-Holland 1975).
11. P. Martin-Löf, Constructive Mathematics and Computer Programming, Logic, Methodology and Philosophy of Science VI, pp. 153-175 (North-Holland 1982).
12. B. Nordström, Programming in Constructive Set Theory: Some Examples, Proceedings of the 1981 Conference on Functional Programming Languages and Computer Architecture, ACM, Portsmouth, New Hampshire, 1981.
13. K. Petersson, A programming system for type theory, LPM Memo 21, 1982, Dept. of Computer Science, Chalmers Univ. of Technology, Göteborg, Sweden.
14. D. Turner, SASL Language Manual, St. Andrews University, Technical Report, 1976.
