George Tourlakis
February 18, 2008
1 What

The Ackermann function was proposed, naturally, by Ackermann. The version here is a simplification offered by Robert Ritchie.

What the function does is to provide us with an example of a number-theoretic, intuitively computable, total function that is not in $\mathcal{PR}$.

Another thing it does is provide us with an example of a function $\lambda x.f(x)$ that is hard to compute ($f \notin \mathcal{PR}$) but whose graph, that is, the predicate $\lambda yx.y = f(x)$, is easy to compute ($\in \mathcal{PR}_*$).
Definition 1.1 The Ackermann function, $\lambda nx.A_n(x)$, is given, for all $n \geq 0$, $x \geq 0$, by the equations
$$A_0(x) = x + 2$$
$$A_{n+1}(x) = A_n^x(2)$$
where $h^x$ is function iteration (repeated composition of $h$ with itself a variable number of times, $x$):
$$\underbrace{h \circ h \circ h \circ \cdots \circ h}_{x \text{ copies of } h}$$
More precisely, for all $x, y$,
$$h^0(y) = y$$
$$h^{x+1}(y) = h\big(h^x(y)\big)$$
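The two defining equations translate directly into code. The following Python sketch (the names `iterate` and `A` are mine, not from the notes) implements $h^x$ and $A_n$ exactly as defined:

```python
def iterate(h, x, y):
    """Compute h^x(y): apply h to y, x times (so h^0(y) = y)."""
    for _ in range(x):
        y = h(y)
    return y

def A(n, x):
    """The (Ritchie-style) Ackermann function of Definition 1.1."""
    if n == 0:
        return x + 2                              # A_0(x) = x + 2
    # A_{n+1}(x) = A_n^x(2): iterate A_n, x times, starting at 2
    return iterate(lambda y: A(n - 1, y), x, 2)

print(A(0, 5), A(1, 3), A(2, 3))  # 7 8 30
```

Note that the recursion depth is only $n$; the iteration in $x$ is a loop, which is why even fast-growing values such as $A_3(2) = 65534$ are computed without deep recursion.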
Remark 1.2 An alternative way to define the Ackermann function, extracted directly from Definition 1.1, is as follows:
$$A_0(x) = x + 2$$
$$A_{n+1}(0) = 2$$
$$A_{n+1}(x+1) = A_n\big(A_{n+1}(x)\big)$$
Indeed, $A_{n+1}(0) = A_n^0(2) = 2$, and $A_{n+1}(x+1) = A_n^{x+1}(2) = A_n\big(A_n^x(2)\big) = A_n\big(A_{n+1}(x)\big)$.
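As a quick mechanical sanity check (mine, not part of the notes), the iteration-style definition and the recursion-style definition of Remark 1.2 can be compared on small arguments:

```python
def A(n, x):
    """Ackermann via iteration: A_0(x) = x + 2, A_{n+1}(x) = A_n^x(2)."""
    if n == 0:
        return x + 2
    r = 2
    for _ in range(x):
        r = A(n - 1, r)
    return r

def A_alt(n, x):
    """Ackermann via the recursion of Remark 1.2."""
    if n == 0:
        return x + 2
    if x == 0:
        return 2
    return A_alt(n - 1, A_alt(n, x - 1))

# The two definitions agree (checked on a small grid; larger n, x
# quickly exceed Python's recursion depth for A_alt).
assert all(A(n, x) == A_alt(n, x) for n in range(3) for x in range(6))
```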
² I.H. is an acronym for "Induction Hypothesis". Formally, what we are proving is $(\forall n)(\forall x)\,A_n(x) > x + 1$. Thus, as we start an induction on $n$, its I.H. is $(\forall x)\,A_n(x) > x + 1$ for a fixed, unspecified $n$.
³ I.S. is an acronym for "Induction Step". Formally, the step is to prove, from the Basis and the I.H., $(\forall x)\,A_{n+1}(x) > x + 1$ for the $n$ that we fixed in the I.H. It turns out that this is best handled by induction on $x$.
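The property these footnotes refer to, $(\forall n)(\forall x)\,A_n(x) > x + 1$ (Lemma 2.2 of these notes), is easy to spot-check numerically; the sketch below (Python, assuming the `A` of Definition 1.1) does so on a small grid:

```python
def A(n, x):
    if n == 0:
        return x + 2
    r = 2
    for _ in range(x):
        r = A(n - 1, r)
    return r

# Spot-check (for all n, x)  A_n(x) > x + 1  on a small grid.
assert all(A(n, x) > x + 1 for n in range(3) for x in range(8))
assert A(3, 2) > 3   # A_3(2) = 65534, comfortably above x + 1
```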
Lemma 2.3 $\lambda x.A_n(x) \nearrow$.

NOTE. $\lambda x.f(x) \nearrow$ means that the (total) function $f$ is strictly increasing, that is, $x < y$ implies $f(x) < f(y)$, for any $x$ and $y$. Clearly, to establish the property one just needs to check, for arbitrary $x$, that $f(x) < f(x+1)$.

Proof We handle two cases separately.

$A_0$: $\lambda x.x + 2 \nearrow$.

$A_{n+1}$: $A_{n+1}(x+1) = A_n\big(A_{n+1}(x)\big) > A_{n+1}(x) + 1$, the $>$ by Lemma 2.2.
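Lemma 2.3 can be exercised numerically; this short Python sketch (mine) checks strict monotonicity in $x$ for a few small subscripts:

```python
def A(n, x):
    if n == 0:
        return x + 2
    r = 2
    for _ in range(x):
        r = A(n - 1, r)
    return r

# Lemma 2.3: for each fixed n, x -> A_n(x) is strictly increasing;
# per the NOTE, checking A_n(x) < A_n(x+1) pointwise suffices.
for n in range(3):
    vals = [A(n, x) for x in range(8)]
    assert all(a < b for a, b in zip(vals, vals[1:]))
```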
Lemma 2.4 $\lambda n.A_n(x+1) \nearrow$.

Proof $A_{n+1}(x+1) = A_n\big(A_{n+1}(x)\big) > A_n(x+1)$, the $>$ by Lemmata 2.2 (left argument $>$ right argument) and 2.3.

NOTE. The $x+1$ in 2.4 is important, since $A_n(0) = 2$ for all $n$. Thus $\lambda n.A_n(0)$ is increasing, but not strictly (it is constant).
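Both halves of the remark, strict growth in $n$ at positive arguments and constancy at argument $0$, are visible in a quick check (Python, mine):

```python
def A(n, x):
    if n == 0:
        return x + 2
    r = 2
    for _ in range(x):
        r = A(n - 1, r)
    return r

# Lemma 2.4: for fixed x, n -> A_n(x+1) is strictly increasing ...
for x in range(3):
    vals = [A(n, x + 1) for n in range(4)]
    assert all(a < b for a, b in zip(vals, vals[1:]))

# ... while n -> A_n(0) is the constant 2.
assert all(A(n, 0) == 2 for n in range(5))
```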
Lemma 2.5 $\lambda y.A_n^y(x) \nearrow$.

Proof $A_n^{y+1}(x) = A_n\big(A_n^y(x)\big) > A_n^y(x)$, the $>$ by Lemma 2.2.
Lemma 2.6 $\lambda x.A_n^y(x) \nearrow$.

Proof Induction on $y$:

For $y = 0$ we want $\lambda x.A_n^0(x) \nearrow$, that is, $\lambda x.x \nearrow$, which is true.

We take as I.H. that
$$A_n^y(x+1) > A_n^y(x) \qquad (1)$$
We want
$$A_n^{y+1}(x+1) > A_n^{y+1}(x) \qquad (2)$$
But (2) follows from (1) by applying $A_n$ to both sides of $>$ and invoking Lemma 2.3.
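Lemmata 2.5 and 2.6 together say that the iterate $A_n^y(x)$ grows strictly in $y$ and in $x$; a small-grid check (Python, mine, with an `A_iter` helper for $A_n^y(x)$):

```python
def A(n, x):
    if n == 0:
        return x + 2
    r = 2
    for _ in range(x):
        r = A(n - 1, r)
    return r

def A_iter(n, y, x):
    """A_n^y(x): the y-th iterate of A_n at x."""
    for _ in range(y):
        x = A(n, x)
    return x

# Lemma 2.5: y -> A_n^y(x) strictly increasing;
# Lemma 2.6: x -> A_n^y(x) strictly increasing.
for n in range(2):
    for x in range(4):
        assert all(A_iter(n, y, x) < A_iter(n, y + 1, x) for y in range(4))
    for y in range(4):
        assert all(A_iter(n, y, x) < A_iter(n, y, x + 1) for x in range(4))
```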
Lemma 2.7 For all $n, x, y$, $A_{n+1}^y(x) \geq A_n^y(x)$.

Proof Induction on $y$:

For $y = 0$ we want $A_{n+1}^0(x) \geq A_n^0(x)$, that is, $x \geq x$.

We take as I.H. that
$$A_{n+1}^y(x) \geq A_n^y(x)$$
We want
$$A_{n+1}^{y+1}(x) \geq A_n^{y+1}(x)$$
This is true because
$$A_{n+1}^{y+1}(x) = A_{n+1}\big(A_{n+1}^y(x)\big) \geq A_n\big(A_{n+1}^y(x)\big) \geq A_n^{y+1}(x)$$
where the first $\geq$ holds with $=$ if $y = x = 0$, and otherwise by 2.4; the second is by 2.3 and the I.H.
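Lemma 2.7 is again spot-checkable (Python, mine; the grid is kept tiny since $A_2^y$ explodes quickly):

```python
def A(n, x):
    if n == 0:
        return x + 2
    r = 2
    for _ in range(x):
        r = A(n - 1, r)
    return r

def A_iter(n, y, x):
    for _ in range(y):
        x = A(n, x)
    return x

# Lemma 2.7: A_{n+1}^y(x) >= A_n^y(x) (equality e.g. at y = x = 0).
assert all(A_iter(n + 1, y, x) >= A_iter(n, y, x)
           for n in range(2) for y in range(3) for x in range(3))
```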
Definition 2.8 For a predicate $P(\vec x)$ we say that $P(\vec x)$ is true almost everywhere (in symbols, $P(\vec x)$ a.e.) iff the set of (vector) inputs that make the predicate false is finite. That is, the set $\{\vec x : \neg P(\vec x)\}$ is finite.

A statement such as $\lambda xy.Q(x, y, z, w)$ a.e. can also be stated, less formally, as "$Q(x, y, z, w)$ a.e. with respect to $x$ and $y$".
Lemma 2.9 $A_{n+1}(x) > x + l$ a.e. with respect to $x$.

NOTE. Thus, in particular, $A_1(x) > x + 10^{350000}$ a.e.
Proof In view of Lemma 2.4 and the note following it, it suffices to prove
$$A_1(x) > x + l \text{ a.e. with respect to } x$$
Well, since
$$A_1(x) = A_0^x(2) = \underbrace{(\cdots(((y + 2) + 2) + 2) + \cdots + 2)}_{x\ 2\text{'s}}\;\Big|_{y=2} = 2 + 2x$$
we ask: Is $2 + 2x > x + l$ a.e. with respect to $x$? You bet. It is so for all $x > l - 2$ (only $x = 0, 1, \ldots, l - 2$ fail).
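The closed form $A_1(x) = 2x + 2$ and the exact failure set $\{0, 1, \ldots, l-2\}$ can both be confirmed mechanically (Python, mine):

```python
def A(n, x):
    if n == 0:
        return x + 2
    r = 2
    for _ in range(x):
        r = A(n - 1, r)
    return r

# A_1 in closed form, as computed in the proof of Lemma 2.9 ...
assert all(A(1, x) == 2 * x + 2 for x in range(50))

# ... so A_1(x) > x + l exactly when x > l - 2.
l = 10
assert all(A(1, x) > x + l for x in range(l - 1, 100))
assert all(not A(1, x) > x + l for x in range(l - 1))
```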
Lemma 2.10 $A_{n+1}(x) > A_n^l(x)$ a.e. with respect to $x$.

Proof If one (or both) of $l$ or $n$ is $0$, then the result is trivial. For example,
$$A_0^l(x) = \underbrace{(\cdots(((x + 2) + 2) + 2) + \cdots + 2)}_{l\ 2\text{'s}} = x + 2l$$
In the preceding proof we saw that $A_1(x) = 2x + 2$. Clearly, $2x + 2 > x + 2l$ as soon as $x > 2l - 2$, that is, a.e. with respect to $x$.
Let us then assume $l \geq 1$, $n \geq 1$. We note that (straightforwardly, via Definition 1.1)
$$A_n^l(x) = A_n\big(A_n^{l-1}(x)\big) = A_{n-1}^{A_n^{l-1}(x)}(2) = A_{n-1}^{A_{n-1}^{A_n^{l-2}(x)}(2)}(2) = A_{n-1}^{A_{n-1}^{A_{n-1}^{A_n^{l-3}(x)}(2)}(2)}(2)$$
The straightforward observation that we have a ladder of $k$ $A_{n-1}$'s precisely when the top-most exponent is $A_n^{l-k}(x)$ can be ratified by induction on $k$ (not done here). Thus I state
$$A_n^l(x) = \underbrace{A_{n-1}^{A_{n-1}^{\cdots^{A_n^{l-k}(x)}}(2)\cdots}(2)}_{k\ A_{n-1}\text{'s}}$$
In particular, taking $k = l$,
$$A_n^l(x) = \underbrace{A_{n-1}^{A_{n-1}^{\cdots^{x}}(2)\cdots}(2)}_{l\ A_{n-1}\text{'s}} \qquad (*)$$
Let us now take $x > l$. By the same ladder computation (now applied to $A_n^x(2)$, whose top-most exponent after $x$ steps is $A_n^0(2) = 2$),
$$A_{n+1}(x) = A_n^x(2) = \underbrace{A_{n-1}^{A_{n-1}^{\cdots^{2}}(2)\cdots}(2)}_{x\ A_{n-1}\text{'s}} \qquad (**)$$
By comparing $(*)$ and $(**)$ we see that the first ladder is topped (after $l$ $A_{n-1}$-steps) by $x$, and the second is topped by
$$\underbrace{A_{n-1}^{A_{n-1}^{\cdots^{2}}(2)\cdots}(2)}_{x-l\ A_{n-1}\text{'s}}$$
Thus, in view of the fact that $A_n^y(x)$ increases with respect to each of the arguments $n, x, y$, we conclude by answering:

Is
$$\underbrace{A_{n-1}^{A_{n-1}^{\cdots^{2}}(2)\cdots}(2)}_{x-l\ A_{n-1}\text{'s}} > x \text{ a.e. with respect to } x\,?$$
Yes, because by $(**)$ this is the same question as "Is $A_{n+1}(x - l) > x$ a.e. with respect to $x$?", which has been answered in Lemma 2.9.
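A concrete instance of Lemma 2.10 also shows why "a.e." cannot be dropped; the check below (Python, mine) takes $n = 1$, $l = 2$, where equality actually occurs at $x = 2$:

```python
def A(n, x):
    if n == 0:
        return x + 2
    r = 2
    for _ in range(x):
        r = A(n - 1, r)
    return r

def A_iter(n, y, x):
    for _ in range(y):
        x = A(n, x)
    return x

# Lemma 2.10 with n = 1, l = 2: A_2(x) > A_1^2(x) a.e. --
# it fails at x = 2 (both sides equal 14) but holds from x = 3 on.
assert A(2, 2) == A_iter(1, 2, 2) == 14
assert all(A(2, x) > A_iter(1, 2, x) for x in range(3, 12))
```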
Lemma 2.11 For all $n, x, y$, $A_{n+1}(x + y) > A_n^x(y)$.

Proof
$$A_{n+1}(x + y) = A_n^{x+y}(2) = A_n^x\big(A_n^y(2)\big) = A_n^x\big(A_{n+1}(y)\big) > A_n^x(y)$$
the $>$ by Lemmata 2.2 and 2.6.
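Lemma 2.11, which is the key tool for the majorization argument below, checks out on small arguments (Python, mine):

```python
def A(n, x):
    if n == 0:
        return x + 2
    r = 2
    for _ in range(x):
        r = A(n - 1, r)
    return r

def A_iter(n, y, x):
    for _ in range(y):
        x = A(n, x)
    return x

# Lemma 2.11: A_{n+1}(x + y) > A_n^x(y), spot-checked on a small grid.
assert all(A(n + 1, x + y) > A_iter(n, x, y)
           for n in range(2) for x in range(4) for y in range(4))
```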
$$\leq A_m^r\Big(\big|x, \vec y, A_m^{rx}\big(A_n^k(|x, \vec y\,|)\big)\big|\Big), \quad \text{by the I.H., (5) and 2.6}$$
$$= A_m^r\Big(A_m^{rx}\big(A_n^k(|x, \vec y\,|)\big)\Big), \quad \text{by } |\vec w\,| \geq w_i \text{ and 2.6}$$
$$= A_m^{r(x+1)}\big(A_n^k(|x, \vec y\,|)\big)$$
With (5) proved, let me set $l = \max(m, n)$. By Lemma 2.7 I now get
$$f(x, \vec y\,) \leq A_l^{rx+k}(|x, \vec y\,|) \underset{\text{Lemma 2.11}}{<} A_{l+1}\big(|x, \vec y\,| + rx + k\big) \qquad (6)$$
Now, $|x, \vec y\,| + rx + k \leq (r+1)|x, \vec y\,| + k$; thus, (6) and 2.3 yield
$$f(x, \vec y\,) < A_{l+1}\big((r+1)|x, \vec y\,| + k\big) \qquad (7)$$
To simplify (7), note that there is a number $q$ such that
$$(r+1)x + k \leq A_1^q(x) \qquad (8)$$
for all $x$. Indeed, this is so since (easy induction on $y$) $A_1^y(x) = 2^y x + 2^y + 2^{y-1} + \cdots + 2$. Thus, to satisfy (8), just take $y = q$ large enough to satisfy $r + 1 \leq 2^q$ and $k \leq 2^q + 2^{q-1} + \cdots + 2$.
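The closed form for $A_1^y(x)$ used in (8) (note that $2^y + 2^{y-1} + \cdots + 2 = 2^{y+1} - 2$) is easy to confirm mechanically (Python, mine):

```python
def A(n, x):
    if n == 0:
        return x + 2
    r = 2
    for _ in range(x):
        r = A(n - 1, r)
    return r

def A1_iter(y, x):
    """A_1^y(x)."""
    for _ in range(y):
        x = A(1, x)
    return x

# Closed form used for (8): A_1^y(x) = 2^y x + 2^y + 2^{y-1} + ... + 2
#                                    = 2^y x + 2^{y+1} - 2.
assert all(A1_iter(y, x) == 2**y * x + 2**(y + 1) - 2
           for y in range(8) for x in range(8))
```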
By (8), (7) and 2.3 yield
$$f(x, \vec y\,) < A_{l+1}\big(A_1^q(|x, \vec y\,|)\big) \leq A_{l+1}^{1+q}(|x, \vec y\,|)$$
(by Lemma 2.7), which is all we want.

NB. Reading the proof carefully, we note that the subscript argument of the majorant is precisely the depth of nesting of primitive recursion. Indeed, the initial functions have a majorant with subscript 0; composition has a majorant with subscript no more than the maximum subscript of the component parts (no increase); primitive recursion has a majorant with a subscript that is bigger than the maximum subscript of the $h$- and $g$-majorants by precisely 1.
Corollary 3.2 $\lambda nx.A_n(x) \notin \mathcal{PR}$.

Proof By contradiction: If $\lambda nx.A_n(x) \in \mathcal{PR}$ then also $\lambda x.A_x(x) \in \mathcal{PR}$. By the theorem above, for some $n, k$, $A_x(x) \leq A_n^k(x)$ for all $x$; hence, by 2.10,
$$A_x(x) < A_{n+1}(x), \text{ a.e. with respect to } x \qquad (1)$$
On the other hand, $A_{n+1}(x) < A_x(x)$ a.e. with respect to $x$ (indeed, for all $x > n + 1$, by 2.4), which contradicts (1).
4 The Graph of the Ackermann function is in $\mathcal{PR}_*$
⁵ $A_n(x) = A_{n-1}\big(A_n(x - 1)\big)$.
⁶ As in "quintuples", "$n$-tuples". This word has found its way into the theoretician's dictionary, if not into general-purpose dictionaries.
⁷ Assuming that modus ponens is the only rule of inference, the proof of a formula $A$ depends, in general, on that of earlier formulae $X \to A$ and $X$, which in turn each depend on (require) earlier formulae, and so on, until we reach formulae that are axioms.
Proof We will use some notation that will be useful to make the proof more intuitive (this notation also appears in the Kleene Normal Form notes posted). Thus we introduce two predicates: $\lambda vu.v \in u$ and $\lambda vwu.v <_u w$. The first says
$$u = \langle \ldots, v, \ldots \rangle$$
and the second says
$$u = \langle \ldots, v, \ldots, w, \ldots \rangle$$
Both are in $\mathcal{PR}_*$, since
$$v \in u \equiv Seq(u) \wedge (\exists i)_{<lh(u)}\,(u)_i = v$$
and
$$v <_u w \equiv Seq(u) \wedge (\exists i)_{<lh(u)}(\exists j)_{<lh(u)}\big[(u)_i = v \wedge (u)_j = w \wedge i < j\big]$$
We can now define $Comp(u)$ by a formula that makes it clear that it is in $\mathcal{PR}_*$:
$$\begin{aligned}
Comp(u) \equiv\ & Seq(u) \wedge (\forall v)_{\leq u}\Big[v \in u \to Seq(v) \wedge lh(v) = 3 \wedge \\
& \quad \big((v)_0 = 0 \wedge (v)_2 = (v)_1 + 2 && \text{Comment: Case (i), p.8} \\
& \quad \vee\ (v)_1 = 0 \wedge (v)_2 = 2 && \text{Comment: Case (ii)} \\
& \quad \vee\ \big((v)_0 > 0 \wedge (v)_1 > 0\ \wedge && \text{Comment: Case (iii)} \\
& \qquad (\exists w)_{<v}\big(\langle (v)_0 - 1, w, (v)_2\rangle <_u v \wedge \langle (v)_0, (v)_1 - 1, w\rangle <_u v\big)\big)\big)\Big]
\end{aligned}$$
The Pause on p.8 justifies the bound on $(\exists w)$ above. Indeed, we could have used the tighter bound $(v)_2$. Clearly $Comp(u) \in \mathcal{PR}_*$.
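The three cases of $Comp$ can be exercised concretely. The sketch below (Python, mine; it sidesteps the prime-power coding by representing a computation simply as a set of triples $(n, x, z)$ with $A_n(x) = z$) records every triple used while evaluating $A_n(x)$ and then checks each one against cases (i)-(iii):

```python
def compute(n, x, triples):
    """Evaluate A_n(x), recording every triple (n, x, A_n(x)) used."""
    if n == 0:
        z = x + 2
    elif x == 0:
        z = 2
    else:
        w = compute(n, x - 1, triples)   # predecessor A_n(x-1) = w
        z = compute(n - 1, w, triples)   # predecessor A_{n-1}(w) = z
    triples.add((n, x, z))
    return z

def check(triples):
    """Every recorded triple satisfies case (i), (ii) or (iii)."""
    for (n, x, z) in triples:
        ok = (n == 0 and z == x + 2) or \
             (x == 0 and z == 2) or \
             (n > 0 and x > 0 and
              any((n, x - 1, w) in triples and (n - 1, w, z) in triples
                  for (_, _, w) in triples))
        assert ok

u = set()
assert compute(2, 3, u) == 30
check(u)
```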
Thus $A_n(x) = z$ iff $\langle n, x, z\rangle \in u$ for some $u$ that satisfies $Comp$; for short,
$$A_n(x) = z \equiv (\exists u)\big(Comp(u) \wedge \langle n, x, z\rangle \in u\big) \qquad (1)$$
If we succeed in finding a bound for $u$ that is a primitive recursive function of $n, x, z$, then we will have succeeded in showing:

Theorem 4.2 $\lambda nxz.A_n(x) = z \in \mathcal{PR}_*$.
Proof Let us focus on a computation $u$ that quits as soon as it verifies $A_n(x) = z$; that is, it codes only $\langle n, x, z\rangle$ and just the needed predecessor triples, no more. How big can such a $u$ be?

Well,
$$u = p_r^{\langle i,j,k\rangle + 1} \cdots p_l^{\langle n,x,z\rangle + 1} \qquad (2)$$
for appropriate $l$ ($= lh(u) - 1$). For example, if all we want is to verify $A_0(3) = 5$, then $u = p_0^{\langle 0,3,5\rangle + 1}$.

Similarly, if all we want to verify is $A_1(1) = 4$, then, since the recursive calls here are to $A_0(2) = 4$ and $A_1(0) = 2$, two possible $u$-values work: $u = p_0^{\langle 0,2,4\rangle + 1}\, p_1^{\langle 1,0,2\rangle + 1}\, p_2^{\langle 1,1,4\rangle + 1}$ or $u = p_0^{\langle 1,0,2\rangle + 1}\, p_1^{\langle 0,2,4\rangle + 1}\, p_2^{\langle 1,1,4\rangle + 1}$.
How big need $l$ be? No bigger than needed to provide distinct positions ($l + 1$ of them) in the computation, for all the needed triples $\langle i, j, k\rangle$. Since $z$ is the largest possible output computed (and larger than any input), there are no more than $(z+1)^3$ possible triples, so $l + 1 \leq (z+1)^3$. Therefore, (2) yields
$$u \leq p_r^{\langle z,z,z\rangle + 1} \cdots p_l^{\langle z,z,z\rangle + 1} = \Big(\prod_{i \leq l} p_i\Big)^{\langle z,z,z\rangle + 1} \leq p_l^{(l+1)(\langle z,z,z\rangle + 1)} \leq p_{(z+1)^3}^{((z+1)^3 + 1)(\langle z,z,z\rangle + 1)}$$
Setting $g = \lambda z.p_{(z+1)^3}^{((z+1)^3 + 1)(\langle z,z,z\rangle + 1)}$ we have $g \in \mathcal{PR}$, and we are done by (1):
$$A_n(x) = z \equiv (\exists u)_{\leq g(z)}\big(Comp(u) \wedge \langle n, x, z\rangle \in u\big)$$