
Numerous Proofs of $\zeta(2) = \pi^2/6$

Brendan W. Sullivan
April 15, 2013

Abstract
In this talk, we will investigate how the late, great Leonhard Euler
originally proved the identity $\zeta(2) = \sum_{n=1}^{\infty} 1/n^2 = \pi^2/6$ way back in
1735. This will briefly lead us astray into the bewildering forest of complex
analysis, where we will point to some important theorems and lemmas
whose proofs are, alas, too far off the beaten path. On our journey out
of said forest, we will visit the temple of the Riemann zeta function and
marvel at its significance in number theory and its relation to the problem
at hand, and we will bow to the uber-famously-unsolved Riemann
hypothesis. From there, we will travel far and wide through the kingdom
of analysis, whizzing through a number $N$ of proofs of the same original
fact in this talk's title, where $N$ is not to exceed 5 but is no less than
3. Nothing beyond a familiarity with standard calculus and the notion of
imaginary numbers will be presumed.

Note: These were notes I typed up for myself to give this seminar talk.
I only got through a portion of the material written down here in the
actual presentation, so I figured I'd just share my notes and let you read
through them. Many of these proofs were discovered in a survey article by
Robin Chapman (linked below). I chose particular ones to work through
based on the intended audience; I also added a section about justifying
the factoring of $\sin(\pi x)$ as an infinite product (a fact upon which two of
Euler's proofs depend) and one about the Riemann zeta function and its
use in number theory. (Admittedly, I still don't understand it, but I tried
to share whatever information I could glean!)

http://empslocal.ex.ac.uk/people/staff/rjchapma/etc/zeta2.pdf

The Basel Problem was first posed by the Italian mathematician Pietro
Mengoli in 1644. His question was simple:

What is the value of the infinite sum $\zeta(2) = \sum_{n=1}^{\infty} \dfrac{1}{n^2}$?

Of course, the connection to the Riemann zeta function came later. We'll
use the $\zeta$ notation for now and discuss where it came from, and its significance
in number theory, later. Presumably, Mengoli was interested in
infinite sums, since he had already proven not only that the harmonic
series is divergent, but also that

$$ \sum_{n=1}^{\infty} (-1)^{n+1} \frac{1}{n} = \ln 2 $$

and that the Wallis product

$$ \prod_{n=1}^{\infty} \frac{2n}{2n-1} \cdot \frac{2n}{2n+1} = \frac{2}{1} \cdot \frac{2}{3} \cdot \frac{4}{3} \cdot \frac{4}{5} \cdot \frac{6}{5} \cdot \frac{6}{7} \cdot \frac{8}{7} \cdot \frac{8}{9} \cdots = \frac{\pi}{2} $$

is correct. Let's tackle the problem from the perspective of Euler, who
first solved the problem in 1735; at least, that's when he first announced
his result to the mathematical community. A rigorous proof followed a few
years later, in 1741, after Euler had made some headway in complex analysis.
First, let's discuss his original proof and then fill in some of the gaps
with some rigorous analysis afterwards.
Theorem 1. $\zeta(2) = \dfrac{\pi^2}{6}$

Proof #1, Euler (1735). Consider the Maclaurin series for $\sin(\pi x)$:

$$ \sin(\pi x) = \pi x - \frac{(\pi x)^3}{3!} + \frac{(\pi x)^5}{5!} - \cdots = \sum_{n=0}^{\infty} (-1)^n \frac{\pi^{2n+1} x^{2n+1}}{(2n+1)!} =: p(x) $$

We know that the roots of $\sin(\pi x)$ are precisely the integers $\mathbb{Z}$. For finite polynomials
$q(x)$, we know that we can write the function as a product of linear
factors of the form $(1 - \frac{x}{a})$, where $q(a) = 0$. Euler conjectured that the
same trick would work here for $\sin(\pi x)$. Assuming, for the moment, that
this is correct, we have

$$ p(x) := \pi x \left(1 - \frac{x}{1}\right)\left(1 + \frac{x}{1}\right)\left(1 - \frac{x}{2}\right)\left(1 + \frac{x}{2}\right)\left(1 - \frac{x}{3}\right)\left(1 + \frac{x}{3}\right) \cdots $$

$$ = \pi x \left(1 - \frac{x^2}{1}\right)\left(1 - \frac{x^2}{4}\right)\left(1 - \frac{x^2}{9}\right) \cdots = \pi x \prod_{n=1}^{\infty} \left(1 - \frac{x^2}{n^2}\right) $$

Notice that we have included the leading $x$ factor to account for the root
at $0$, and the $\pi$ factor to make things work when $x = 1/2$. Now, let's examine
the coefficient of $x^3$ in this formula. By choosing the leading $\pi x$ term, and
then $-\frac{x^2}{n^2}$ from one of the factors and $1$ from all of the other factors, we
see that

$$ p(x)[x^3] = -\pi \left( \frac{1}{1} + \frac{1}{4} + \frac{1}{9} + \cdots \right) = -\pi \sum_{n=1}^{\infty} \frac{1}{n^2} $$

Comparing this to the coefficient from the Maclaurin series, $p(x)[x^3] = -\frac{\pi^3}{3!} = -\frac{\pi^3}{6}$,
we obtain the desired result!

$$ -\frac{\pi^3}{6} = -\pi \sum_{n=1}^{\infty} \frac{1}{n^2} \implies \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} $$
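Both halves of Euler's argument are easy to probe numerically. Below is a minimal Python sketch (the helper names `basel_partial` and `sin_product` are mine, not from the source) comparing a partial sum of $\sum 1/n^2$ against $\pi^2/6$, and a truncated version of Euler's product against $\sin(\pi x)$ at a sample point.

```python
import math

def basel_partial(N):
    """Partial sum sum_{n=1}^N 1/n^2; converges to pi^2/6 with error about 1/N."""
    return sum(1.0 / n**2 for n in range(1, N + 1))

def sin_product(x, N):
    """Truncated Euler product pi*x*prod_{n=1}^N (1 - x^2/n^2)."""
    p = math.pi * x
    for n in range(1, N + 1):
        p *= 1.0 - x**2 / n**2
    return p

print(basel_partial(100000), math.pi**2 / 6)
print(sin_product(0.3, 100000), math.sin(math.pi * 0.3))
```

The slow $1/N$ convergence of the partial sums is part of why a closed form was so sought after.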

So why is it that we can factor the function $\sin(\pi x)$ by using what we
know about its roots? We can appeal to the powerful Weierstrass factorization
theorem, which states that we can perform this root-factorization
process for any entire function over $\mathbb{C}$.
Definition 2. A function $f : D \to \mathbb{C}$ is said to be holomorphic on a
domain $D \subseteq \mathbb{C}$ provided that for every $z \in D$ there exists $\varepsilon > 0$ such that the derivative

$$ f'(z_0) = \lim_{y \to z_0} \frac{f(y) - f(z_0)}{y - z_0} $$

exists for all $z_0 \in B(z, \varepsilon)$. A function $f$ is said to be entire if it is holomorphic
over the domain $D = \mathbb{C}$.
There are two forms of the theorem, and they are essentially converses
of each other. Basically, an entire function can be decomposed into factors
that represent its roots (and their respective multiplicities) and a nonzero
entire function. Conversely, given a sequence of complex numbers and a
corresponding sequence of integers satisfying a specific property, we can
construct an entire function having exactly those roots.
Theorem 3 (Weierstrass factorization theorem). Let $f$ be an entire function
and let $\{a_n\}$ be the nonzero zeros of $f$ repeated according to multiplicity.
Suppose $f$ has a zero at $z = 0$ of order $m \geq 0$ (where order $0$
means $f(0) \neq 0$). Then there exist an entire function $g$ and a sequence of integers
$\{p_n\}$ such that

$$ f(z) = z^m \exp(g(z)) \prod_{n=1}^{\infty} E_{p_n}\!\left(\frac{z}{a_n}\right) $$

where

$$ E_n(y) = \begin{cases} (1 - y) & \text{if } n = 0, \\ (1 - y)\exp\left( \dfrac{y}{1} + \dfrac{y^2}{2} + \cdots + \dfrac{y^n}{n} \right) & \text{if } n = 1, 2, \ldots \end{cases} $$

This is a direct generalization of the Fundamental Theorem of Algebra.


It turns out that for $\sin(\pi z)$, the sequence $p_n = 1$ and the function $g(z) = \log(\pi)$
work. Here, we attempt to briefly explain why. We
start by using the functional representation

$$ \sin(\pi z) = \frac{1}{2i}\left( e^{i\pi z} - e^{-i\pi z} \right) $$

and recognizing that the zeros are precisely the integers $n \in \mathbb{Z}$. One of the
lemmas that provides the bulk of the proof of the factorization theorem
requires that the sum

$$ \widehat{\sum_{n=-\infty}^{+\infty}} \left( \frac{r}{|a_n|} \right)^{1+p_n} < +\infty $$

be finite for all $r > 0$, where the hat indicates that the $n = 0$ term is removed.
Since $|a_n| = |n|$, we see that $p_n = 1$ for all $n$ suffices, and so

$$ \sin(\pi z) = z \exp(g(z)) \prod_{\substack{n=-\infty \\ n \neq 0}}^{\infty} \left(1 - \frac{z}{n}\right) \exp(z/n) $$

and we can cancel the terms $\exp(z/n)\exp(-z/n)$ and combine the factors
$(1 - z/n)(1 + z/n)$ to say

$$ f(z) := \sin(\pi z) = z \exp(g(z)) \prod_{n=1}^{\infty} \left(1 - \frac{z^2}{n^2}\right) =: \exp(g(z))\, z\, h(z) $$

for some entire function $g(z)$. Now, a useful lemma states that for analytic
functions $f_n$ and a function $f = \prod_n f_n$, we have

$$ f'(z) = \sum_{k=1}^{\infty} f_k'(z) \prod_{n \neq k} f_n(z) $$

which immediately implies

$$ \frac{f'(z)}{f(z)} = \sum_{n=1}^{\infty} \frac{f_n'(z)}{f_n(z)} $$

This allows us to write

$$ \pi\cot(\pi z) = \frac{f'(z)}{f(z)} = \frac{g'(z)\exp(g(z))}{\exp(g(z))} + \frac{1}{z} + \sum_{n=1}^{\infty} \frac{-2z/n^2}{1 - z^2/n^2} $$

$$ = g'(z) + \frac{1}{z} + \sum_{n=1}^{\infty} \frac{2z}{z^2 - n^2} $$

and according to previous analysis, we know

$$ \pi\cot(\pi z) = \frac{1}{z} + \sum_{n=1}^{\infty} \frac{2z}{z^2 - n^2} $$

which is based on integrating $\oint_\Gamma (z^2 - a^2)^{-1} \pi\cot(\pi z)\, dz$ for a non-integral $a$,
where $\Gamma$ is an appropriately chosen rectangle. As we enlarge $\Gamma$, the
integral goes to $0$, and we get what we want. This means $g'(z) = 0$, so $g(z) = c$ for
some constant $c$. Putting this back into the formula above, we have

$$ \frac{\sin(\pi z)}{z} = e^c \prod_{n=1}^{\infty} \left(1 - \frac{z^2}{n^2}\right) $$

for all $0 < |z| < 1$. Taking $z \to 0$ tells us $e^c = \pi$, and we're done!
Remark 4. Notice that plugging in $z = \frac{1}{2}$ and rearranging yields the
aforementioned Wallis product for $\pi$:

$$ \frac{\pi}{2} = \prod_{n=1}^{\infty} \frac{(2n)^2}{(2n-1)(2n+1)} $$
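The Wallis product is easy to sanity-check numerically; here is a small illustrative sketch (the helper name `wallis` is mine):

```python
import math

def wallis(N):
    """Partial Wallis product prod_{n=1}^N (2n/(2n-1)) * (2n/(2n+1))."""
    p = 1.0
    for n in range(1, N + 1):
        p *= (2 * n) / (2 * n - 1) * (2 * n) / (2 * n + 1)
    return p

print(wallis(200000), math.pi / 2)
```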

Now, let's define the Riemann zeta function and discuss some of the
interesting number-theoretic applications thereof.

Definition 5. The Riemann zeta function is defined as the analytic continuation
of the function defined by the sum of the series

$$ \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} = \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \cdots \qquad \Re(s) > 1 $$

This function is holomorphic everywhere except for a simple pole at
$s = 1$ with residue $1$. A remarkable elementary result in this field is the
following.

Theorem 6 (Euler product formula). For all $s > 1$,

$$ \sum_{n=1}^{\infty} \frac{1}{n^s} = \prod_{p \text{ prime}} \frac{1}{1 - p^{-s}} $$

Sketch. Start with the sum definition for $\zeta(s)$ and subtract off the sum
$\frac{1}{2^s}\zeta(s)$:

$$ \zeta(s) = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \frac{1}{4^s} + \frac{1}{5^s} + \cdots $$

$$ \frac{1}{2^s}\zeta(s) = \frac{1}{2^s} + \frac{1}{4^s} + \frac{1}{6^s} + \frac{1}{8^s} + \frac{1}{10^s} + \cdots $$

We see that this removes all terms $\frac{1}{n^s}$ where $2 \mid n$. We repeat this process
by taking the difference between

$$ \left(1 - \frac{1}{2^s}\right)\zeta(s) = 1 + \frac{1}{3^s} + \frac{1}{5^s} + \frac{1}{7^s} + \frac{1}{9^s} + \cdots $$

$$ \frac{1}{3^s}\left(1 - \frac{1}{2^s}\right)\zeta(s) = \frac{1}{3^s} + \frac{1}{9^s} + \frac{1}{15^s} + \frac{1}{21^s} + \frac{1}{27^s} + \cdots $$

and we see that this removes all of the terms $\frac{1}{n^s}$ where $2 \mid n$ or $3 \mid n$ or
both. Continuing ad infinitum, we have

$$ \cdots \left(1 - \frac{1}{11^s}\right)\left(1 - \frac{1}{7^s}\right)\left(1 - \frac{1}{5^s}\right)\left(1 - \frac{1}{3^s}\right)\left(1 - \frac{1}{2^s}\right)\zeta(s) = 1 $$

and dividing by the factors on the left yields the desired result.
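For $s = 2$ the truncated Euler product converges quickly; here is a hedged sketch using a simple sieve (the helper names `primes_up_to` and `euler_product` are mine):

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            for q in range(p * p, limit + 1, p):
                sieve[q] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

def euler_product(s, limit):
    """Truncated product over primes p <= limit of 1/(1 - p^-s)."""
    prod = 1.0
    for p in primes_up_to(limit):
        prod *= 1.0 / (1.0 - p**(-s))
    return prod

print(euler_product(2, 10000), math.pi**2 / 6)
```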

Remark 7. To prove a neat consequence of this formula, let's consider
the case $s = 2$. We have

$$ \prod_{p \text{ prime}} \left(1 - \frac{1}{p^2}\right) = \frac{1}{\zeta(2)} = \frac{6}{\pi^2} \approx 0.6079271016 $$

Let's think about what the product on the left-hand side represents. Given
two random integers $m, n$, the probability that $2 \mid m$ is $\frac{1}{2}$ (and likewise for $2 \mid n$),
since roughly half of the integers are divisible by two. Likewise, the
probability that $3 \mid m$ is $\frac{1}{3}$ (and the same for $3 \mid n$). Thus, each term in the
product is just the probability that a prime $p$ does not divide both $m$ and
$n$. Multiplying over all primes $p$ gives us the probability that $m, n$ have no
common factors, i.e. that $m$ and $n$ are relatively prime, or $\gcd(m, n) = 1$.
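This coprimality interpretation can be checked by brute force: count the pairs $(m, n)$ in a finite box with $\gcd(m, n) = 1$ and compare the fraction to $6/\pi^2$. A rough sketch (the box size and tolerance are arbitrary choices of mine):

```python
import math

def coprime_fraction(M):
    """Fraction of pairs (m, n) in {1..M}^2 with gcd(m, n) = 1."""
    hits = sum(1 for m in range(1, M + 1)
                 for n in range(1, M + 1)
                 if math.gcd(m, n) == 1)
    return hits / (M * M)

frac = coprime_fraction(800)
print(frac, 6 / math.pi**2)
```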

The Riemann zeta function has a non-series definition, given by

$$ \zeta(s) = 2^s \pi^{s-1} \sin\!\left(\frac{\pi s}{2}\right) \Gamma(1-s)\, \zeta(1-s) $$

where

$$ \Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t} \, dt $$

is the so-called Gamma function that analytically extends the factorial
function to the complex plane. This is what allows us to think of $\zeta(s)$
as a function over the whole complex plane (except $s = 1$). Notice that
$\zeta(-2k) = 0$ for any $k \in \mathbb{N}$, since the $\sin$ term vanishes; these are the
so-called trivial zeros of the function. The famous Riemann hypothesis
asserts that

$$ \zeta(z) = 0 \text{ nontrivial} \implies \Re(z) = \frac{1}{2} $$

This problem is famously unsolved (and is worth quite a bit of fame and
money), and it is generally believed to be true; in fact, modern computer
calculations have shown that the first 10 trillion zeros lie on the critical
line (according to Wikipedia), although this is certainly not a proof,
and some have even pointed to Skewes' number (an astronomically huge
number that served as an upper bound in a proof) as an example
of phenomena occurring beyond the scope of current computing power.
The bottom line is that this is closely related to the distribution of prime
numbers in $\mathbb{N}$. Riemann's original explicit formula involves defining the
function

$$ f(x) = \pi(x) + \frac{1}{2}\pi(x^{1/2}) + \frac{1}{3}\pi(x^{1/3}) + \cdots $$

where $\pi(x)$ is the number of primes less than $x$. If we can find $f(x)$, then
we can recover

$$ \pi(x) = f(x) - \frac{1}{2}f(x^{1/2}) - \frac{1}{3}f(x^{1/3}) - \cdots $$
Riemann's huge result is that

$$ f(x) = \mathrm{Li}(x) - \sum_{\rho} \mathrm{Li}(x^{\rho}) - \log(2) + \int_x^{\infty} \frac{dt}{t(t^2 - 1)\log(t)} $$

where $\mathrm{Li}(x)$ is the logarithmic integral function

$$ \mathrm{Li}(x) = \int_0^x \frac{dt}{\log(t)} $$
and the sum $\sum_\rho$ is over the nontrivial zeros $\rho$ of $\zeta(s)$. There are a number
of results that have been shown to be true after assuming the hypothesis,
and there are a variety of conditions that have been shown to be equivalent
to the hypothesis. From Wikipedia: "In particular the error term in the
prime number theorem is closely related to the position of the zeros; for
example, the supremum of real parts of the zeros is the infimum of numbers
$\beta$ such that the error is $O(x^\beta)$." An interesting equivalence is that "$\zeta$ has
only simple zeros on the critical line" is equivalent to "its derivative has
no zeros on the critical line." Also from Wikipedia: Patterson (1988)
suggests that the most compelling reason for the Riemann hypothesis for
most mathematicians is "the hope that primes are distributed as regularly
as possible," and "the proof of the Riemann hypothesis for varieties over
finite fields by Deligne (1974) is possibly the single strongest theoretical
reason in favor of the Riemann hypothesis."

It's also possible that Riemann initially made the conjecture with the
hopes of proving the Prime Number Theorem.
Theorem 8 (Prime Number Theorem). Let $\pi(x) = |\{p \leq x : p \text{ prime}\}|$. Then

$$ \lim_{x \to \infty} \frac{\pi(x)}{x/\log x} = 1 $$

We also know that

$$ \lim_{x \to \infty} \frac{\pi(x)}{x} = 0 $$

so that the primes have density zero among the integers (far more integers
are composite than prime), which gives some heuristic credence to the result
above regarding the (relatively high) probability that 2 random integers are relatively prime.
This area is incredibly rich, deep, diverse, and mathematically stimulating,
but most of it is well beyond what we have time to discuss here.
It's about time that we move on and race through a bunch of other proofs
of Euler's original result.

At least, while we're on the tangential topic of complex analysis, let's
do a complex proof, i.e. use the calculus of residues. We'll make use of
the following ideas. We will frequently use Euler's formula

$$ \exp(i\theta) = \cos(\theta) + i\sin(\theta) $$

which also allows us to define, for $z \in \mathbb{C}$,

$$ \sin z = \frac{\exp(iz) - \exp(-iz)}{2i} \qquad \cos z = \frac{\exp(iz) + \exp(-iz)}{2} $$
Definition 9. Suppose $U \subseteq \mathbb{C}$ is open and $a \in U$ is such that $f$ is holomorphic
over $U \setminus \{a\}$. If there exist a holomorphic $g : U \to \mathbb{C}$ with $g(a) \neq 0$ and a positive integer $n$ such
that

$$ f(z) = \frac{g(z)}{(z-a)^n} \qquad \forall z \in U \setminus \{a\} $$

then $a$ is called a pole of order $n$. A pole of order $1$ is called a simple
pole.
Definition 10. For a meromorphic function $f$ with pole $a$, the residue
of $f$ at $a$, denoted by $\mathrm{Res}(f, a)$, is defined to be the unique value $R$ such
that $f(z) - \frac{R}{z-a}$ has an analytic antiderivative in some punctured disk
$0 < |z - a| < \delta$. For a simple pole, we can use the formula

$$ \mathrm{Res}(f, c) = \lim_{z \to c} (z - c) f(z). $$

An alternative definition is to use the Laurent expansion about $z = a$,

$$ f(z) = \sum_{n=-\infty}^{+\infty} a_n (z - a)^n $$

If $f$ has an isolated singularity at $z = a$, then the coefficient $a_{-1} = \mathrm{Res}(f, a)$.
Finally, the definition we will likely use is as follows: suppose
$f$ has a pole of order $m$ at $z = a$ and put $g(z) = (z - a)^m f(z)$;
then

$$ \mathrm{Res}(f; a) = \frac{1}{(m-1)!} g^{(m-1)}(a) $$
Theorem 11 (Residue theorem). Assume $f$ is analytic inside and on a closed,
piecewise smooth curve $C$, except for singularities at $z_1, \ldots, z_k$ inside $C$. Then

$$ \frac{1}{2\pi i} \oint_C f(z)\, dz = \sum_{j=1}^{k} \mathrm{Res}(f, z_j) $$

Now, we're ready for proof number 2!

Proof #2, complex analysis textbook. Define the function

$$ f(z) = z^{-2}\, \pi\cot(\pi z) $$

so the function has poles at $z \in \mathbb{Z}$. The pole at $0$ is of order $3$ because

$$ z^3 f(z) = \pi z \frac{\cos(\pi z)}{\sin(\pi z)} \to 1 \text{ as } z \to 0 $$

and the poles at $z = n \in \mathbb{Z} \setminus \{0\}$ are simple (of order $1$) because

$$ (z - n) f(z) = \frac{\pi (z - n)\cos(\pi z)}{\sin(\pi z)} \cdot \frac{1}{z^2} \to 1 \cdot \frac{1}{n^2} = \frac{1}{n^2} \text{ as } z \to n $$

So to evaluate $\mathrm{Res}(f; n)$ for $n \in \mathbb{Z} \setminus \{0\}$, we simply need to apply the limit
definition from above, which we just evaluated, to obtain $\frac{1}{n^2}$. It takes
much more work to find $\mathrm{Res}(f; 0)$; we need to consider

$$ g(z) = z^3 f(z) = \pi z \cot(\pi z) $$

and evaluate

$$ \frac{1}{2} \lim_{z \to 0} g''(z) $$

We find

$$ g'(z) = \pi\cot(\pi z) - \pi^2 z \csc^2(\pi z) $$

$$ g''(z) = -2\pi^2 \csc^2(\pi z) + 2\pi^3 z \csc^2(\pi z)\cot(\pi z) $$
and then we can evaluate

$$ \lim_{z \to 0} g''(z) = \lim_{z \to 0} \left[ -2\pi^2 \csc^2(\pi z) + 2\pi^3 z \csc^2(\pi z)\cot(\pi z) \right] $$
$$ = 2\pi^2 \lim_{z \to 0} \frac{\pi z \cot(\pi z) - 1}{\sin^2(\pi z)} \quad \text{(L'H)} $$
$$ = 2\pi^2 \lim_{z \to 0} \frac{\pi\cot(\pi z) - \pi^2 z \csc^2(\pi z)}{2\pi \sin(\pi z)\cos(\pi z)} $$
$$ = \pi^2 \lim_{z \to 0} \frac{\cos(\pi z)\sin(\pi z) - \pi z}{\sin^3(\pi z)\cos(\pi z)} \quad \text{(L'H)} $$
$$ = \pi^2 \lim_{z \to 0} \frac{\cos^2(\pi z) - \sin^2(\pi z) - 1}{3\sin^2(\pi z)\cos^2(\pi z) - \sin^4(\pi z)} \quad \text{(L'H)} $$
$$ = \pi^2 \lim_{z \to 0} \frac{-4\cos(\pi z)\sin(\pi z)}{6\sin(\pi z)\cos^3(\pi z) - 10\sin^3(\pi z)\cos(\pi z)} $$
$$ = \pi^2 \lim_{z \to 0} \frac{-4\cos(\pi z)}{6\cos^3(\pi z) - 10\sin^2(\pi z)\cos(\pi z)} = -\frac{2\pi^2}{3} $$

and so $\mathrm{Res}(f; 0) = -\frac{\pi^2}{3}$. Now, let $N \in \mathbb{N}$ and take $C_N$ to be the square
with vertices $(\pm 1 \pm i)(N + \frac{1}{2})$. By the residue theorem, we can
say

$$ -\frac{\pi^2}{3} + 2\sum_{n=1}^{N} \frac{1}{n^2} = \frac{1}{2\pi i} \oint_{C_N} f(z)\, dz =: I_N $$
The goal now is to show $I_N \to 0$ as $N \to \infty$. The trick is to bound the
quantity $|\cot(\pi z)|^2$ on the edges of the square. Write $\pi z = x + iy$.
Then we can use the identities

$$ \sin(x + iy) = \sin x \cosh y + i \cos x \sinh y $$

and

$$ \cos(x + iy) = \cos x \cosh y - i \sin x \sinh y $$

(which one can verify easily) to write

$$ |\cot(x + iy)|^2 = \left| \frac{\cos x \cosh y - i \sin x \sinh y}{\sin x \cosh y + i \cos x \sinh y} \right|^2 = \frac{\cos^2 x \cosh^2 y + \sin^2 x \sinh^2 y}{\sin^2 x \cosh^2 y + \cos^2 x \sinh^2 y} = \frac{\cos^2 x + \sinh^2 y}{\sin^2 x + \sinh^2 y} $$

using $\cosh^2 y = 1 + \sinh^2 y$.
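The closed form for $|\cot z|^2$ derived above is an exact identity, so it can be spot-checked directly against `cmath` (a quick illustrative sketch; the helper names are mine):

```python
import cmath
import math

def cot_sq_modulus(x, y):
    """|cot(x+iy)|^2 computed directly via complex arithmetic."""
    z = complex(x, y)
    return abs(cmath.cos(z) / cmath.sin(z))**2

def cot_sq_formula(x, y):
    """The closed form (cos^2 x + sinh^2 y) / (sin^2 x + sinh^2 y)."""
    return (math.cos(x)**2 + math.sinh(y)**2) / (math.sin(x)**2 + math.sinh(y)**2)

checks = [(0.7, 1.3), (2.1, -0.4), (-1.0, 2.5)]
ok = all(abs(cot_sq_modulus(x, y) - cot_sq_formula(x, y)) < 1e-9 for x, y in checks)
print(ok)
```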
Thus, if $z$ lies on the vertical edges of $C_N$ then $x = \Re(\pi z) = \pm\frac{(2N+1)\pi}{2}$,
so $\cos^2 x = 0$ and

$$ |\cot(\pi z)|^2 = \frac{\sinh^2 y}{1 + \sinh^2 y} < 1 $$

and if $z$ lies on the horizontal edges of $C_N$ then $|y| = |\Im(\pi z)| = \frac{(2N+1)\pi}{2} =: k$,
so

$$ |\cot(\pi z)|^2 \leq \frac{1 + \sinh^2 k}{\sinh^2 k} = \coth^2 k \leq \coth^2\!\left(\frac{\pi}{2}\right) =: K^2 > 1 $$

since $\coth$ is a monotone decreasing function on $(0, \infty)$. So for any $z$ on
the square $C_N$ (where $|z| \geq \frac{2N+1}{2}$), we know

$$ |f(z)| = \frac{\pi|\cot(\pi z)|}{|z|^2} \leq \pi K \left( \frac{2}{2N+1} \right)^2 $$

Since the total length of the edges of $C_N$ is $L = 4(2N+1)$, we can say

$$ |I_N| \leq \frac{1}{2\pi} \cdot 4(2N+1) \cdot \pi K \left( \frac{2}{2N+1} \right)^2 = \frac{8K}{2N+1} \to 0 $$

as $N \to \infty$. Therefore,

$$ -\frac{\pi^2}{3} + 2\lim_{N \to \infty} \sum_{n=1}^{N} \frac{1}{n^2} = 0 \implies \zeta(2) = \frac{\pi^2}{6} $$

A similar consideration of the infinite series for $\cot(\pi z)$ allows us to
conclude $\zeta(2k)$ for all values of $k$. Specifically, we have

$$ 1 + \sum_{k=1}^{\infty} (-1)^k \frac{(2\pi)^{2k} B_{2k}}{(2k)!} z^{2k} = \pi z \cot(\pi z) = 1 - 2\sum_{k=1}^{\infty} \left( \sum_{n=1}^{\infty} \frac{1}{n^{2k}} \right) z^{2k} $$

where $B_n$ are the Bernoulli numbers given by the nice linear relations

$$ 2B_1 + 1 = 0 $$
$$ 3B_2 + 3B_1 + 1 = 0 $$
$$ 4B_3 + 6B_2 + 4B_1 + 1 = 0 $$
$$ 5B_4 + 10B_3 + 10B_2 + 5B_1 + 1 = 0 $$
$$ \vdots $$

This tells us

$$ \zeta(2) = \frac{\pi^2}{6} \qquad \zeta(4) = \frac{\pi^4}{90} \qquad \zeta(6) = \frac{\pi^6}{945} \qquad \zeta(8) = \frac{\pi^8}{9450} \qquad \zeta(10) = \frac{\pi^{10}}{3^5 \cdot 5 \cdot 7 \cdot 11} $$
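The linear relations for the Bernoulli numbers are exactly the recurrence $\sum_{j=0}^{k} \binom{k+1}{j} B_j = 0$ for $k \geq 1$ with $B_0 = 1$, so one can generate the $B_n$ in exact rational arithmetic and recover the even zeta values via $\zeta(2k) = (-1)^{k+1} B_{2k} (2\pi)^{2k} / (2\,(2k)!)$. A sketch (helper names mine):

```python
import math
from fractions import Fraction

def bernoulli(m):
    """B_0..B_m via sum_{j=0}^{k} C(k+1, j) B_j = 0, matching the relations above."""
    B = [Fraction(1)]
    for k in range(1, m + 1):
        s = sum(math.comb(k + 1, j) * B[j] for j in range(k))
        B.append(-s / (k + 1))
    return B

B = bernoulli(10)

def zeta_even(k, B):
    """zeta(2k) = (-1)^(k+1) B_{2k} (2 pi)^(2k) / (2 (2k)!)."""
    return (-1)**(k + 1) * float(B[2 * k]) * (2 * math.pi)**(2 * k) / (2 * math.factorial(2 * k))

print(B[2], B[4], zeta_even(1, B), math.pi**2 / 6)
```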
Okay, so that's enough complex analysis. Let's do some good old-fashioned
calculus proofs.

Proof #3, Apostol (1983). This is taken from an article in the Mathematical
Intelligencer. First, observe that

$$ \int_0^1 \int_0^1 x^{n-1} y^{n-1} \, dx \, dy = \int_0^1 x^{n-1} \, dx \int_0^1 y^{n-1} \, dy = \frac{1}{n^2} $$

Now, we sum over all $n$ and apply the monotone convergence theorem to
say

$$ \sum_{n=1}^{\infty} \frac{1}{n^2} = \int_0^1 \int_0^1 \sum_{n=1}^{\infty} (xy)^{n-1} \, dx \, dy = \int_0^1 \int_0^1 \frac{1}{1 - xy} \, dx \, dy $$
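The double integral has only an integrable singularity at $(1,1)$, so even a crude midpoint rule (which never samples the corner) lands near $\pi^2/6$. A rough sketch; the grid size and tolerance are arbitrary choices of mine:

```python
import math

def basel_integral(n):
    """Midpoint-rule estimate of the integral of 1/(1-xy) over the unit square.
    The singularity at (1,1) is integrable, and midpoints never hit it."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            total += 1.0 / (1.0 - x * y)
    return total * h * h

approx = basel_integral(400)
print(approx, math.pi**2 / 6)
```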

Our goal now is to evaluate this integral. There are issues with the $x$-
and $y$-axes, so we transform this unit square into a different region
in the $uv$-plane. Specifically, let

$$ u = \frac{x + y}{2} \quad v = \frac{y - x}{2} \qquad \Longleftrightarrow \qquad x = u - v \quad y = u + v $$

which transforms the unit square into the square with vertices $(0, 0)$, $(1/2, -1/2)$,
$(1, 0)$, and $(1/2, 1/2)$, in order. Notice that

$$ \frac{1}{1 - xy} = \frac{1}{1 - (u^2 - v^2)} = \frac{1}{1 - u^2 + v^2} $$

and we have the Jacobian

$$ J = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} \qquad |\det J| = 2 $$

Also, since we have symmetry across the $u$-axis, we can write (recognizing
the arctan derivative in the integrands, from Calc II)

$$ \zeta(2) = 2\int_0^{1/2} \int_0^{u} \frac{2 \, dv \, du}{1 - u^2 + v^2} + 2\int_{1/2}^{1} \int_0^{1-u} \frac{2 \, dv \, du}{1 - u^2 + v^2} $$

$$ = 4\int_0^{1/2} \frac{1}{\sqrt{1 - u^2}} \left[ \arctan\left( \frac{v}{\sqrt{1 - u^2}} \right) \right]_{v=0}^{v=u} du + 4\int_{1/2}^{1} \frac{1}{\sqrt{1 - u^2}} \left[ \arctan\left( \frac{v}{\sqrt{1 - u^2}} \right) \right]_{v=0}^{v=1-u} du $$

$$ = 4\int_0^{1/2} \frac{1}{\sqrt{1 - u^2}} \arctan\left( \frac{u}{\sqrt{1 - u^2}} \right) du + 4\int_{1/2}^{1} \frac{1}{\sqrt{1 - u^2}} \arctan\left( \frac{1 - u}{\sqrt{1 - u^2}} \right) du $$

since $\arctan 0 = 0$. We now want to replace the arctan expressions above
with simpler expressions. Set up the right triangle with angle $\theta$, legs
$u$ and $\sqrt{1 - u^2}$, and hypotenuse $1$, and we see that

$$ \arctan\left( \frac{u}{\sqrt{1 - u^2}} \right) = \arcsin u $$

Next, we see that

$$ \theta = \arctan\left( \frac{1 - u}{\sqrt{1 - u^2}} \right) \implies \tan^2\theta = \frac{1 - u}{1 + u} $$

and using the identity $\tan^2\theta + 1 = \sec^2\theta$, we have

$$ \sec^2\theta = 1 + \frac{1 - u}{1 + u} = \frac{2}{1 + u} \implies u = 2\cos^2\theta - 1 = \cos(2\theta) $$

Rearranging for $\theta$ yields

$$ \theta = \frac{1}{2}\arccos u = \frac{\pi}{4} - \frac{1}{2}\arcsin u $$

and this is an expression that will be helpful in the integral above. We
can now straightforwardly evaluate

$$ \zeta(2) = 4\int_0^{1/2} \frac{\arcsin u}{\sqrt{1 - u^2}} \, du + 4\int_{1/2}^{1} \frac{\frac{\pi}{4} - \frac{1}{2}\arcsin u}{\sqrt{1 - u^2}} \, du $$

$$ = \left[ 2\arcsin^2 u \right]_0^{1/2} + \left[ \pi \arcsin u - \arcsin^2 u \right]_{1/2}^{1} $$

$$ = 2\left(\frac{\pi}{6}\right)^2 - 0 + \left( \pi \cdot \frac{\pi}{2} - \left(\frac{\pi}{2}\right)^2 \right) - \left( \pi \cdot \frac{\pi}{6} - \left(\frac{\pi}{6}\right)^2 \right) $$

$$ = \frac{\pi^2}{18} + \frac{\pi^2}{2} - \frac{\pi^2}{4} - \frac{\pi^2}{6} + \frac{\pi^2}{36} = \frac{\pi^2}{6} $$

Proof #3b, Calabi, Beukers, Kock. This is very similar to the previous
proof. Start by observing that

$$ \int_0^1 \int_0^1 x^{2m} y^{2m} \, dx \, dy = \frac{1}{(2m+1)^2} $$

and then sum, applying the monotone convergence theorem, to get

$$ \sum_{m=0}^{\infty} \frac{1}{(2m+1)^2} = \int_0^1 \int_0^1 \frac{1}{1 - x^2 y^2} \, dx \, dy $$

Make the substitution

$$ x = \frac{\sin u}{\cos v} \qquad y = \frac{\sin v}{\cos u} $$

so that

$$ u = \arctan\left( x\sqrt{\frac{1 - y^2}{1 - x^2}} \right) \qquad v = \arctan\left( y\sqrt{\frac{1 - x^2}{1 - y^2}} \right) $$

(One can check that this actually works.) This gives us the Jacobian
matrix

$$ J = \begin{pmatrix} \dfrac{\cos u}{\cos v} & \dfrac{\sin u \sin v}{\cos^2 v} \\[2ex] \dfrac{\sin u \sin v}{\cos^2 u} & \dfrac{\cos v}{\cos u} \end{pmatrix} $$

and so, magically,

$$ \det J = 1 - \frac{\sin^2 u \sin^2 v}{\cos^2 u \cos^2 v} = 1 - x^2 y^2 $$

which eliminates everything in the integrand. Now, the unit square is
transformed to the triangular region

$$ A = \left\{ (u, v) : u, v > 0 \text{ and } u + v < \frac{\pi}{2} \right\} $$

which has area $\frac{\pi^2}{8}$. It only remains to observe that

$$ \sum_{m=0}^{\infty} \frac{1}{(2m+1)^2} = 1 + \frac{1}{9} + \frac{1}{25} + \frac{1}{49} + \cdots $$

$$ = \left( 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} + \frac{1}{49} + \frac{1}{64} + \cdots \right) - \left( \frac{1}{4} + \frac{1}{16} + \frac{1}{36} + \frac{1}{64} + \cdots \right) $$

$$ = \zeta(2) - \frac{1}{4}\zeta(2) = \frac{3}{4}\zeta(2) $$

So we've shown $\frac{3}{4}\zeta(2) = \frac{\pi^2}{8} \implies \zeta(2) = \frac{\pi^2}{6}$.
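The identity $\sum 1/(2m+1)^2 = \frac{3}{4}\zeta(2) = \pi^2/8$ is easy to confirm numerically (a minimal sketch; the truncation level is my choice):

```python
import math

# Partial sum of the odd reciprocal squares; its tail is about 1/(4M).
odd_sum = sum(1.0 / (2 * m + 1)**2 for m in range(200000))
print(odd_sum, math.pi**2 / 8, 0.75 * math.pi**2 / 6)
```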

Proof #4, Fourier analysis textbook. Consider the Hilbert space

$$ X = L^2[0, 1] = \left\{ f : [0, 1] \to \mathbb{C} \; : \; \int_0^1 |f(x)|^2 \, dx < +\infty \right\} $$

with inner product

$$ \langle f, g \rangle = \int_0^1 f \bar{g} \, dx $$

One can prove that the set of functions $\{e_n\}_{n \in \mathbb{Z}}$ given by

$$ e_n(x) = \exp(2\pi i n x) = \cos(2\pi n x) + i\sin(2\pi n x) $$

form a complete orthonormal set in $X$; that is,

$$ \langle e_m, e_n \rangle = \int_0^1 e_m(x) \overline{e_n(x)} \, dx = 0 \qquad m \neq n $$
and $\|e_n\|^2 = \langle e_n, e_n \rangle = 1$ for all $n$. Note that this uses the fact that

$$ \overline{\exp(2\pi i n x)} = \exp(-2\pi i n x) $$

Basically, $\{e_n\}$ is a basis for $X$, in some sense. This allows us to apply
Parseval's formula, which says that

$$ \langle f, f \rangle = \|f\|^2 = \sum_{n \in \mathbb{Z}} |\langle f, e_n \rangle|^2 $$

In a Hilbert space, this formula is equivalent to $\{e_n\}$ being a complete orthonormal
set (also called an orthonormal basis). The trick now is to apply this
formula to the simple function $f(x) = x \in X$. It's easy to see that

$$ \langle f, f \rangle = \int_0^1 x^2 \, dx = \frac{1}{3} $$

and

$$ \langle f, e_0 \rangle = \int_0^1 x \, dx = \frac{1}{2} $$

Applying integration by parts, we can show that, for $n \neq 0$,

$$ \langle f, e_n \rangle = \int_0^1 x \exp(-2\pi i n x) \, dx = -\frac{1}{2\pi i n} $$

This completes the brunt of the work of the proof, because now we apply Parseval's
formula to say

$$ \frac{1}{3} = \frac{1}{4} + \sum_{n \neq 0} \frac{1}{4\pi^2 n^2} = \frac{1}{4} + \frac{1}{2\pi^2}\zeta(2) \implies \frac{1}{12} = \frac{1}{2\pi^2}\zeta(2) \implies \zeta(2) = \frac{\pi^2}{6} $$
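Both computed ingredients, the coefficient $\langle f, e_n \rangle$ and the Parseval sum, can be checked numerically. In this sketch (helper names mine) the coefficient for $n = 3$ is estimated by a midpoint rule and compared against the integration-by-parts value $-1/(2\pi i n) = i/(2\pi n)$:

```python
import cmath
import math

def fourier_coeff(n, samples=20000):
    """Midpoint-rule estimate of <f, e_n> = integral of x*exp(-2*pi*i*n*x) for f(x)=x."""
    h = 1.0 / samples
    return sum((k + 0.5) * h * cmath.exp(-2j * math.pi * n * (k + 0.5) * h)
               for k in range(samples)) * h

c3 = fourier_coeff(3)
predicted = complex(0, 1.0 / (6 * math.pi))   # -1/(2*pi*i*3) = i/(6*pi)

def parseval_partial(N):
    """1/4 + sum over 0 < |n| <= N of 1/(4 pi^2 n^2); should approach <f,f> = 1/3."""
    return 0.25 + sum(2.0 / (4 * math.pi**2 * n**2) for n in range(1, N + 1))

print(abs(c3 - predicted), parseval_partial(100000), 1 / 3)
```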

Remark 12. An essentially equivalent proof applies Parseval's formula
to the function $g = \chi_{[0,1/2]}$ and uses the equivalent formulation for $\frac{3}{4}\zeta(2)$
we discussed above.
For the next proof, we'll require De Moivre's formula.

Lemma 13. For any $z \in \mathbb{C}$ and $n \in \mathbb{Z}$,

$$ (\cos z + i\sin z)^n = \cos(nz) + i\sin(nz) $$

Proof. Follows directly from Euler's formula (see above), or can be proven
easily by induction.

Proof #5, Apostol. This can be found in Apostol's Mathematical Analysis.
Note that for $0 < x < \frac{\pi}{2}$ the following inequalities hold:

$$ \sin x < x < \tan x \implies \cot^2 x < \frac{1}{x^2} < \csc^2 x = 1 + \cot^2 x $$

Let $n, N \in \mathbb{N}$ with $1 \leq n \leq N$. Then $0 < \frac{n\pi}{2N+1} < \frac{\pi}{2}$, so

$$ \cot^2\left( \frac{n\pi}{2N+1} \right) < \frac{(2N+1)^2}{n^2\pi^2} < 1 + \cot^2\left( \frac{n\pi}{2N+1} \right) $$

and multiplying everything by $\frac{\pi^2}{(2N+1)^2}$ and summing from $n = 1$ to $n = N$
yields

$$ \frac{\pi^2}{(2N+1)^2} A_N < \sum_{n=1}^{N} \frac{1}{n^2} < \frac{N\pi^2}{(2N+1)^2} + \frac{\pi^2}{(2N+1)^2} A_N $$

where

$$ A_N = \sum_{n=1}^{N} \cot^2\left( \frac{n\pi}{2N+1} \right) $$
We now claim that

$$ \lim_{N \to \infty} \frac{A_N}{N^2} = \frac{2}{3} $$

Before proving this claim, notice that this finishes the proof. Taking
$N \to \infty$ in the two-way inequality above, we obtain

$$ \frac{\pi^2}{4} \cdot \frac{2}{3} \leq \zeta(2) \leq 0 + \frac{\pi^2}{4} \cdot \frac{2}{3} = \frac{\pi^2}{6} $$

Now, to prove the claim, take $1 \leq n \leq N$ as before and let

$$ \theta = \frac{n\pi}{2N+1} \implies \sin((2N+1)\theta) = 0, \quad \sin\theta \neq 0 $$

By De Moivre's formula,

$$ \sin((2N+1)\theta) = \Im\left[ (\cos\theta + i\sin\theta)^{2N+1} \right] $$

Now, if we think of actually expanding the $2N+1$ copies in the product
on the right in the line above, then we can see that

$$ \sin((2N+1)\theta) = \sum_{k=0}^{N} (-1)^k \binom{2N+1}{2k+1} \cos^{2N-2k}\theta \, \sin^{2k+1}\theta $$

where $k = 0$ corresponds to one factor of $i\sin\theta$ and the rest $\cos\theta$, $k = 1$
corresponds to $3$ factors of $i\sin\theta$, etc. But we know this is $0$, so we can
divide by the nonzero factor $\sin^{2N+1}\theta$ and distribute it into the sum to
obtain

$$ 0 = \sum_{k=0}^{N} (-1)^k \binom{2N+1}{2k+1} \cot^{2N-2k}\theta =: F(\cot^2\theta) $$

which is some polynomial in $\cot^2\theta$, where

$$ F(x) = (2N+1)x^N - \binom{2N+1}{3}x^{N-1} + \binom{2N+1}{5}x^{N-2} - \cdots + (-1)^{N-1}\binom{2N+1}{2N-1}x + (-1)^N $$

From above, we know that the roots $x$ such that $F(x) = 0$ are precisely
$\cot^2\left(\frac{n\pi}{2N+1}\right)$ for $1 \leq n \leq N$. It is a well-known result (Vieta's formulas) that $\sum r_n = -\frac{a_{N-1}}{a_N}$
for the roots $r_n$ of any degree-$N$ polynomial $\sum_k a_k x^k$. This tells us

$$ A_N = \frac{1}{2N+1}\binom{2N+1}{3} = \frac{(2N)(2N-1)}{6} = \frac{N(2N-1)}{3} \implies \frac{A_N}{N^2} \to \frac{2}{3} $$

as $N \to \infty$.
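The Vieta computation actually gives the exact finite identity $A_N = \sum_{n=1}^{N} \cot^2\!\big(\frac{n\pi}{2N+1}\big) = \frac{N(2N-1)}{3}$, which can be verified directly (a small sketch; names mine):

```python
import math

def A(N):
    """A_N = sum_{n=1}^N cot^2(n*pi/(2N+1))."""
    return sum(1.0 / math.tan(n * math.pi / (2 * N + 1))**2 for n in range(1, N + 1))

# The Vieta argument predicts A_N = N(2N-1)/3 exactly.
checks = [(N, A(N), N * (2 * N - 1) / 3) for N in (5, 50, 500)]
print(checks)
```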

The next proof is rather slick and should be mentioned, although I
won't go through the calculations at all.

Proof #6, Fourier analysis. If $f$ is continuous and of bounded variation on
$[0, 1]$, with $f(0) = f(1)$, then the Fourier series of $f$ converges to $f$ pointwise.
Using the function $f(x) = x(1 - x)$, we can calculate the Fourier
coefficients to get

$$ x(1 - x) = \frac{1}{6} - \sum_{n=1}^{\infty} \frac{\cos(2\pi n x)}{\pi^2 n^2} $$

Let $x = 0$ and we get $\zeta(2) = \frac{\pi^2}{6}$ immediately. Also, letting $x = \frac{1}{2}$ gives
us the interesting, and related, result

$$ \frac{\pi^2}{12} = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{1}{n^2} $$
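The Fourier expansion of $x(1-x)$ can be spot-checked at points where the function value is known (a minimal sketch; the truncation level is my choice):

```python
import math

def series(x, N=100000):
    """Partial Fourier series 1/6 - sum_{n<=N} cos(2 pi n x)/(pi^2 n^2)."""
    return 1.0 / 6 - sum(math.cos(2 * math.pi * n * x) / (math.pi**2 * n**2)
                         for n in range(1, N + 1))

print(series(0.25), 0.25 * (1 - 0.25))
print(series(0.0))   # should be near x(1-x) = 0
```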

Here we list a sketch of the big ideas behind some other known proofs.

Proof #7, Boo Rim Choe. Taken from a note by Boo Rim Choe in the
American Mathematical Monthly in 1987. Use the power series for arcsin:

$$ \arcsin x = \sum_{n=0}^{\infty} \frac{1 \cdot 3 \cdots (2n-1)}{2 \cdot 4 \cdots (2n)} \cdot \frac{x^{2n+1}}{2n+1} $$

which is valid for $|x| \leq 1$. Let $x = \sin t$ and apply the integral formula

$$ \int_0^{\pi/2} \sin^{2n+1} x \, dx = \frac{2 \cdot 4 \cdots (2n)}{3 \cdot 5 \cdots (2n+1)} $$

to get the $\frac{3}{4}\zeta(2)$ formulation, as before.
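The arcsin series converges rapidly for $|x| < 1$; evaluating it at $x = 1/2$ recovers $\arcsin(1/2) = \pi/6$. An illustrative sketch, with the double-factorial ratio accumulated incrementally (names mine):

```python
import math

def arcsin_series(x, terms=60):
    """Partial sum of arcsin's series; coeff tracks (2n-1)!!/(2n)!!, starting at 1."""
    total, coeff = 0.0, 1.0
    for n in range(terms):
        total += coeff * x**(2 * n + 1) / (2 * n + 1)
        coeff *= (2 * n + 1) / (2 * n + 2)
    return total

print(arcsin_series(0.5), math.asin(0.5), math.pi / 6)
```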

Proof #8. The series

$$ f(t) = \sum_{n=1}^{\infty} \frac{\cos(nt)}{n^2} $$

is uniformly convergent on $\mathbb{R}$. With some complex (i.e. imaginary numbers)
calculus, we can show that

$$ g(t) = \sum_{n=1}^{\infty} \frac{\sin(nt)}{n} $$

is uniformly convergent on $[\delta, 2\pi - \delta]$, by Dirichlet's test. Then for $t \in (0, 2\pi)$, we have

$$ f'(t) = -g(t) = \frac{t - \pi}{2} $$

Apply the FTC from $t = 0$ to $t = \pi$ to the function $f'(t)$.
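The FTC step can be confirmed numerically: with $f(t) = \sum \cos(nt)/n^2$, the derivative formula $f'(t) = (t - \pi)/2$ implies $f(\pi) - f(0) = \int_0^\pi \frac{t - \pi}{2}\, dt = -\pi^2/4$, and since $f(\pi) = -\pi^2/12$ this forces $f(0) = \zeta(2) = \pi^2/6$. A sketch of the check (truncation level mine):

```python
import math

def f(t, N=200000):
    """Partial sum of f(t) = sum_{n>=1} cos(n t)/n^2."""
    return sum(math.cos(n * t) / n**2 for n in range(1, N + 1))

# f'(t) = (t - pi)/2 on (0, 2 pi) integrates to f(pi) - f(0) = -pi^2/4.
gap = f(math.pi) - f(0.0)
print(gap, -math.pi**2 / 4)
```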

Proof #9, Matsuoka. Found in the American Mathematical Monthly, 1961.
Consider

$$ I_n = \int_0^{\pi/2} \cos^{2n} x \, dx \qquad \text{and} \qquad J_n = \int_0^{\pi/2} x^2 \cos^{2n} x \, dx $$

A well-known reduction formula says

$$ I_n = \frac{1 \cdot 3 \cdot 5 \cdots (2n-1)}{2 \cdot 4 \cdot 6 \cdots (2n)} \cdot \frac{\pi}{2} $$

and applying integration by parts to $I_n$ and simplifying yields the relation

$$ \frac{\pi}{4n^2} = \frac{4^{n-1}(n-1)!^2}{(2n-2)!} J_{n-1} - \frac{4^n n!^2}{(2n)!} J_n $$

Add this from $n = 1$ to $n = N$ (the right side telescopes, and $J_0 = \pi^3/24$) and then one just needs to show that

$$ \lim_{N \to \infty} \frac{4^N N!^2}{(2N)!} J_N = 0 $$

which is accomplished by bounding $J_N$ above using the difference $\frac{\pi^2}{4}(I_N - I_{N+1})$.
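The reduction relation as reconstructed here telescopes correctly: writing $A_n = \frac{4^n n!^2}{(2n)!} J_n$, it reads $\pi/(4n^2) = A_{n-1} - A_n$ with $A_0 = J_0 = \pi^3/24$. The sketch below checks this with a stdlib Simpson rule; all helper names are mine, and this is my own verification, not Matsuoka's computation:

```python
import math

def simpson(fn, a, b, n=2000):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = fn(a) + fn(b)
    s += 4 * sum(fn(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(fn(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

def I(n):
    return simpson(lambda x: math.cos(x)**(2 * n), 0, math.pi / 2)

def J(n):
    return simpson(lambda x: x**2 * math.cos(x)**(2 * n), 0, math.pi / 2)

def A(n):
    """A_n = 4^n n!^2 / (2n)! * J_n, so the relation reads pi/(4 n^2) = A_{n-1} - A_n."""
    return 4**n * math.factorial(n)**2 / math.factorial(2 * n) * J(n)

rel_ok = all(abs(A(n - 1) - A(n) - math.pi / (4 * n**2)) < 1e-8 for n in (1, 2, 3))
print(rel_ok, J(0), math.pi**3 / 24)
```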

Proof #10. Consider the well-known identity for the Fejér kernel

$$ f(x) := \left( \frac{\sin(nx/2)}{\sin(x/2)} \right)^2 = n + 2\sum_{k=1}^{n-1} (n - k)\cos(kx) $$

Integrate

$$ \int_0^{\pi} x f(x) \, dx $$

to obtain some series. Let $n \to \infty$.

Proof #11. Start with Gregory's formula

$$ \frac{\pi}{4} = \sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1} $$

and square both sides carefully. This is an exercise in Borwein & Borwein's
Pi and the AGM (Wiley, 1987).

Proof #12. Let $r(n)$ be the number of representations of a positive integer
$n$ as a sum of four squares. Then

$$ r(n) = 8 \sum_{m \mid n,\; 4 \nmid m} m $$

Let $R(N) = \sum_{n=0}^{N} r(n)$. Then $R(N)$ is asymptotic to the volume of the
4-dimensional ball of radius $\sqrt{N}$; i.e. $R(N) \sim \frac{\pi^2}{2} N^2$. Use $r(n)$ to relate
this to $\zeta$.
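Jacobi's four-square formula can be verified by brute force for small $n$ (an illustrative sketch; the enumeration counts the sign of the last coordinate separately):

```python
import math

def r4_brute(n):
    """Count ordered, signed quadruples (a,b,c,d) with a^2+b^2+c^2+d^2 = n."""
    m = math.isqrt(n)
    count = 0
    for a in range(-m, m + 1):
        for b in range(-m, m + 1):
            ab = a * a + b * b
            if ab > n:
                continue
            for c in range(-m, m + 1):
                rem = n - ab - c * c
                if rem < 0:
                    continue
                d = math.isqrt(rem)
                if d * d == rem:
                    count += 1 if d == 0 else 2   # d and -d when nonzero
    return count

def r4_jacobi(n):
    """Jacobi: r(n) = 8 * sum of divisors m of n with 4 not dividing m."""
    return 8 * sum(m for m in range(1, n + 1) if n % m == 0 and m % 4 != 0)

ok = all(r4_brute(n) == r4_jacobi(n) for n in range(1, 25))
print(ok)
```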

References:
Robin Chapman, Evaluating ζ(2).
http://empslocal.ex.ac.uk/people/staff/rjchapma/etc/zeta2.pdf

Ogilvy & Anderson


Wikipedia
Conway
Freitag & Busam

