
NOTES ON LINEAR ALGEBRA

LIVIU I. NICOLAESCU

CONTENTS
1. Multilinear forms and determinants 3
1.1. Multilinear maps 3
1.2. The symmetric group 4
1.3. Symmetric and skew-symmetric forms 7
1.4. The determinant of a square matrix 8
1.5. Additional properties of determinants. 11
1.6. Examples 16
1.7. Exercises 18
2. Spectral decomposition of linear operators 23
2.1. Invariants of linear operators 23
2.2. The determinant and the characteristic polynomial of an operator 24
2.3. Generalized eigenspaces 26
2.4. The Jordan normal form of a complex operator 31
2.5. Exercises 41
3. Euclidean spaces 44
3.1. Inner products 44
3.2. Basic properties of Euclidean spaces 45
3.3. Orthonormal systems and the Gram-Schmidt procedure 48
3.4. Orthogonal projections 51
3.5. Linear functionals and adjoints on Euclidean spaces 54
3.6. Exercises 62
4. Spectral theory of normal operators 65
4.1. Normal operators 65
4.2. The spectral decomposition of a normal operator 66
4.3. The spectral decomposition of a real symmetric operator 68
4.4. Exercises 69
5. Applications 70
5.1. Symmetric bilinear forms 70
5.2. Nonnegative operators 73
5.3. Exercises 76
6. Elements of linear topology 79
6.1. Normed vector spaces 79
6.2. Convergent sequences 82
6.3. Completeness 85
6.4. Continuous maps 86

Date: Started January 7, 2013. Completed on . Last modified on April 28, 2014.
These are notes for the Honors Algebra Course, Spring 2013.
6.5. Series in normed spaces 89


6.6. The exponential of a matrix 90
6.7. The exponential of a matrix and systems of linear differential equations. 92
6.8. Closed and open subsets 94
6.9. Compactness 95
6.10. Exercises 98
References 100

1. MULTILINEAR FORMS AND DETERMINANTS


In this section, we will deal exclusively with finite dimensional vector spaces over the field $\mathbb{F} = \mathbb{R}, \mathbb{C}$. If $U_1, U_2$ are two $\mathbb{F}$-vector spaces, we will denote by $\mathrm{Hom}(U_1, U_2)$ the space of $\mathbb{F}$-linear maps $U_1 \to U_2$.

1.1. Multilinear maps.


Definition 1.1. Suppose that $U_1, \ldots, U_k, V$ are $\mathbb{F}$-vector spaces. A map
\[ \Phi : U_1 \times \cdots \times U_k \to V \]
is called $k$-linear if for any $1 \le i \le k$, any vectors $u_i, v_i \in U_i$, vectors $u_j \in U_j$, $j \neq i$, and any scalar $\lambda \in \mathbb{F}$ we have
\[ \Phi(u_1, \ldots, u_{i-1}, u_i + v_i, u_{i+1}, \ldots, u_k) = \Phi(u_1, \ldots, u_{i-1}, u_i, u_{i+1}, \ldots, u_k) + \Phi(u_1, \ldots, u_{i-1}, v_i, u_{i+1}, \ldots, u_k), \]
\[ \Phi(u_1, \ldots, u_{i-1}, \lambda u_i, u_{i+1}, \ldots, u_k) = \lambda \Phi(u_1, \ldots, u_{i-1}, u_i, u_{i+1}, \ldots, u_k). \]
In the special case $U_1 = U_2 = \cdots = U_k = U$ and $V = \mathbb{F}$, the resulting map
\[ \Phi : \underbrace{U \times \cdots \times U}_{k} \to \mathbb{F} \]
is called a $k$-linear form on $U$. When $k = 2$, we will refer to 2-linear forms as bilinear forms. We will denote by $T^k(U^*)$ the space of $k$-linear forms on $U$. □

Example 1.2. Suppose that $U$ is an $\mathbb{F}$-vector space and $U^*$ is its dual, $U^* := \mathrm{Hom}(U, \mathbb{F})$. We have a natural bilinear map
\[ \langle -, - \rangle : U^* \times U \to \mathbb{F}, \quad U^* \times U \ni (\xi, u) \mapsto \langle \xi, u \rangle := \xi(u). \]
The bilinear map $\langle -, - \rangle$ is called the canonical pairing between the vector space $U$ and its dual. □

Example 1.3. Suppose that $A = (a_{ij})_{1 \le i,j \le n}$ is an $n \times n$ matrix with real entries. Define
\[ \Phi_A : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}, \quad \Phi_A(x, y) = \sum_{i,j} a_{ij} x_i y_j, \]
\[ x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}, \quad y = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}. \]
To show that $\Phi_A$ is indeed a bilinear form we need to prove that for any $x, y, z \in \mathbb{R}^n$ and any $\lambda \in \mathbb{R}$ we have
\[ \Phi_A(x + z, y) = \Phi_A(x, y) + \Phi_A(z, y), \tag{1.1a} \]
\[ \Phi_A(x, y + z) = \Phi_A(x, y) + \Phi_A(x, z), \tag{1.1b} \]
\[ \Phi_A(\lambda x, y) = \lambda \Phi_A(x, y) = \Phi_A(x, \lambda y). \tag{1.1c} \]
To verify (1.1a) we observe that
\[ \Phi_A(x + z, y) = \sum_{i,j} a_{ij}(x_i + z_i) y_j = \sum_{i,j} (a_{ij} x_i y_j + a_{ij} z_i y_j) = \sum_{i,j} a_{ij} x_i y_j + \sum_{i,j} a_{ij} z_i y_j \]
\[ = \Phi_A(x, y) + \Phi_A(z, y). \]



The equalities (1.1b) and (1.1c) are proved in a similar fashion. Observe that if $e_1, \ldots, e_n$ is the natural basis of $\mathbb{R}^n$, then
\[ \Phi_A(e_i, e_j) = a_{ij}. \]
This shows that $\Phi_A$ is completely determined by its action on the basis vectors $e_1, \ldots, e_n$. □

Proposition 1.4. For any bilinear form $\Phi \in T^2((\mathbb{R}^n)^*)$ there exists an $n \times n$ real matrix $A$ such that $\Phi = \Phi_A$, where $\Phi_A$ is defined as in Example 1.3. □

The proof is left as an exercise.
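Numerically, $\Phi_A(x, y)$ is just the matrix product $x^\top A y$. The following Python sketch (an illustration, not part of the original notes) evaluates $\Phi_A$ and spot-checks the bilinearity identities (1.1a)-(1.1c) on random vectors.

```python
import numpy as np

def phi(A, x, y):
    # Phi_A(x, y) = sum_{i,j} a_ij x_i y_j = x^T A y
    return x @ A @ y

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x, y, z = rng.standard_normal((3, 4))
lam = 2.5

# (1.1a)-(1.1c): additivity and homogeneity in each argument
assert np.isclose(phi(A, x + z, y), phi(A, x, y) + phi(A, z, y))
assert np.isclose(phi(A, x, y + z), phi(A, x, y) + phi(A, x, z))
assert np.isclose(phi(A, lam * x, y), lam * phi(A, x, y))
```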


1.2. The symmetric group. For any finite sets $A, B$ we denote by $\mathrm{Bij}(A, B)$ the collection of bijective maps $\varphi : A \to B$. We set $S(A) := \mathrm{Bij}(A, A)$. We will refer to $S(A)$ as the symmetric group on $A$ and to its elements as permutations of $A$. Note that if $\sigma, \tau \in S(A)$ then
\[ \sigma \tau, \ \sigma^{-1} \in S(A). \]
The composition of two permutations is often referred to as the product of the permutations. We denote by $\mathbb{1}$, or $\mathbb{1}_A$, the identity permutation that does not permute anything, i.e., $\mathbb{1}_A(a) = a$, $\forall a \in A$.
For any finite set $S$ we denote by $|S|$ its cardinality, i.e., the number of elements of $S$. Observe that
\[ \mathrm{Bij}(A, B) \neq \emptyset \iff |A| = |B|. \]
In the special case when $A$ is the discrete interval $A = I_n = \{1, \ldots, n\}$ we set
\[ S_n := S(I_n). \]
The collection $S_n$ is called the symmetric group on $n$ objects. We will indicate the elements $\sigma \in S_n$ by diagrams of the form
\[ \begin{pmatrix} 1 & 2 & \ldots & n \\ \sigma(1) & \sigma(2) & \ldots & \sigma(n) \end{pmatrix}. \]
Proposition 1.5. (a) If $A, B$ are finite sets and $|A| = |B|$, then
\[ |\mathrm{Bij}(A, B)| = |\mathrm{Bij}(B, A)| = |S(A)| = |S(B)|. \]
(b) For any positive integer $n$ we have $|S_n| = n! := 1 \cdot 2 \cdot 3 \cdots n$.

Proof. (a) Observe that we have a bijective correspondence
\[ \mathrm{Bij}(A, B) \ni \varphi \mapsto \varphi^{-1} \in \mathrm{Bij}(B, A) \]
so that
\[ |\mathrm{Bij}(A, B)| = |\mathrm{Bij}(B, A)|. \]
Next, fix a bijection $\varphi : A \to B$. We get a correspondence
\[ F : \mathrm{Bij}(A, A) \to \mathrm{Bij}(A, B), \quad \sigma \mapsto F(\sigma) = \varphi \circ \sigma. \]
This correspondence is injective because
\[ F(\sigma_1) = F(\sigma_2) \Rightarrow \varphi \circ \sigma_1 = \varphi \circ \sigma_2 \Rightarrow \varphi^{-1} \circ (\varphi \circ \sigma_1) = \varphi^{-1} \circ (\varphi \circ \sigma_2) \Rightarrow \sigma_1 = \sigma_2. \]
This correspondence is also surjective. Indeed, if $\psi \in \mathrm{Bij}(A, B)$ then $\varphi^{-1} \circ \psi \in \mathrm{Bij}(A, A)$ and
\[ F(\varphi^{-1} \circ \psi) = \varphi \circ (\varphi^{-1} \circ \psi) = \psi. \]
Thus, $F$ is a bijection, so that
\[ |S(A)| = |\mathrm{Bij}(A, B)|. \]
Finally we observe that
\[ |S(B)| = |\mathrm{Bij}(B, A)| = |\mathrm{Bij}(A, B)| = |S(A)|. \]
This takes care of (a).
To prove (b) we argue by induction. Observe that $|S_1| = 1$ because there exists a single bijection $\{1\} \to \{1\}$. We assume that $|S_{n-1}| = (n-1)!$ and we prove that $|S_n| = n!$. For each $k \in I_n$ we set
\[ S_n^k := \{ \sigma \in S_n ;\ \sigma(n) = k \}. \]
A permutation $\sigma \in S_n^k$ is uniquely determined by its restriction to $I_n \setminus \{n\} = I_{n-1}$, and this restriction is a bijection $I_{n-1} \to I_n \setminus \{k\}$. Hence
\[ |S_n^k| = |\mathrm{Bij}(I_{n-1}, I_n \setminus \{k\})| = |S_{n-1}|, \]
where at the last equality we used part (a). We deduce
\[ |S_n| = |S_n^1| + \cdots + |S_n^n| = \underbrace{|S_{n-1}| + \cdots + |S_{n-1}|}_{n} = n |S_{n-1}| = n(n-1)!, \]
where at the last step we invoked the inductive assumption. □

Definition 1.6. An inversion of a permutation $\sigma \in S_n$ is a pair $(i, j) \in I_n \times I_n$ with the following properties.
• $i < j$.
• $\sigma(i) > \sigma(j)$.
We denote by $|\sigma|$ the number of inversions of the permutation $\sigma$. The signature of $\sigma$ is then the quantity
\[ \mathrm{sign}(\sigma) := (-1)^{|\sigma|} \in \{-1, 1\}. \]
A permutation $\sigma$ is called even/odd if $\mathrm{sign}(\sigma) = \pm 1$. We denote by $S_n^\pm$ the collection of even/odd permutations. □

Example 1.7. (a) Consider the permutation $\sigma \in S_5$ given by
\[ \sigma = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 5 & 4 & 3 & 2 & 1 \end{pmatrix}. \]
The inversions of $\sigma$ are
(1,2), (1,3), (1,4), (1,5),
(2,3), (2,4), (2,5),
(3,4), (3,5), (4,5),
so that $|\sigma| = 4 + 3 + 2 + 1 = 10$, $\mathrm{sign}(\sigma) = 1$.
(b) For any $i \neq j$ in $I_n$ we denote by $\tau_{ij}$ the permutation defined by the equalities
\[ \tau_{ij}(k) = \begin{cases} k, & k \neq i, j \\ j, & k = i \\ i, & k = j. \end{cases} \]
A transposition is defined to be a permutation of the form $\tau_{ij}$ for some $i < j$. Observe that
\[ |\tau_{ij}| = 2|j - i| - 1, \]
so that
\[ \mathrm{sign}(\tau_{ij}) = -1, \quad \forall i \neq j. \tag{1.2} \]
□
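As a quick illustration (not from the notes), the inversion count and the signature can be computed directly from Definition 1.6; the sketch below confirms $|\sigma| = 10$ for the permutation in Example 1.7(a).

```python
from itertools import combinations

def inversions(sigma):
    # sigma is a tuple: sigma[i] is the image of i+1
    return sum(1 for i, j in combinations(range(len(sigma)), 2)
               if sigma[i] > sigma[j])

def sign(sigma):
    return (-1) ** inversions(sigma)

sigma = (5, 4, 3, 2, 1)
print(inversions(sigma), sign(sigma))  # 10, 1
```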

Proposition 1.8. (a) For any $\sigma \in S_n$ we have
\[ \mathrm{sign}(\sigma) = \prod_{1 \le i < j \le n} \frac{\sigma(j) - \sigma(i)}{j - i}. \tag{1.3} \]
(b) For any $\sigma, \tau \in S_n$ we have
\[ \mathrm{sign}(\sigma \tau) = \mathrm{sign}(\sigma) \cdot \mathrm{sign}(\tau). \tag{1.4} \]
(c) $\mathrm{sign}(\sigma^{-1}) = \mathrm{sign}(\sigma)$.

Proof. (a) Observe that the ratio $\frac{\sigma(j) - \sigma(i)}{j - i}$ is negative if and only if $(i, j)$ is an inversion. Thus the number of negative ratios $\frac{\sigma(j) - \sigma(i)}{j - i}$, $i < j$, is equal to the number of inversions of $\sigma$, so that the product
\[ \prod_{1 \le i < j \le n} \frac{\sigma(j) - \sigma(i)}{j - i} \]
has the same sign as the signature of $\sigma$. Hence, to prove (1.3) it suffices to show that
\[ \left| \prod_{1 \le i < j \le n} \frac{\sigma(j) - \sigma(i)}{j - i} \right| = |\mathrm{sign}(\sigma)| = 1, \]
i.e.,
\[ \prod_{i < j} |\sigma(j) - \sigma(i)| = \prod_{i < j} |j - i|. \tag{1.5} \]
This is now obvious because the factors in the left-hand side are exactly the factors in the right-hand side multiplied in a different order. Indeed, for any $i < j$ we can find a unique pair $i' < j'$ such that
\[ \sigma(j') - \sigma(i') = \pm (j - i). \]
(b) Observe that
\[ \mathrm{sign}(\sigma) = \prod_{i < j} \frac{\sigma(j) - \sigma(i)}{j - i} = \prod_{i < j} \frac{\sigma(\tau(j)) - \sigma(\tau(i))}{\tau(j) - \tau(i)} \]
and we deduce
\[ \mathrm{sign}(\sigma) \cdot \mathrm{sign}(\tau) = \prod_{i < j} \frac{\sigma(\tau(j)) - \sigma(\tau(i))}{\tau(j) - \tau(i)} \cdot \prod_{i < j} \frac{\tau(j) - \tau(i)}{j - i} = \prod_{i < j} \frac{\sigma(\tau(j)) - \sigma(\tau(i))}{j - i} = \mathrm{sign}(\sigma \tau). \]
To prove (c) we observe that
\[ 1 = \mathrm{sign}(\mathbb{1}) = \mathrm{sign}(\sigma \sigma^{-1}) = \mathrm{sign}(\sigma^{-1}) \cdot \mathrm{sign}(\sigma). \quad \Box \]

1.3. Symmetric and skew-symmetric forms.


Definition 1.9. Let $U$ be an $\mathbb{F}$-vector space, $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$.
(a) A $k$-linear form $\Phi \in T^k(U^*)$ is called symmetric if for any $u_1, \ldots, u_k \in U$, and any permutation $\sigma \in S_k$, we have
\[ \Phi(u_{\sigma(1)}, \ldots, u_{\sigma(k)}) = \Phi(u_1, \ldots, u_k). \]
We denote by $S^k U^*$ the collection of symmetric $k$-linear forms on $U$.
(b) A $k$-linear form $\Phi \in T^k(U^*)$ is called skew-symmetric if for any $u_1, \ldots, u_k \in U$, and any permutation $\sigma \in S_k$, we have
\[ \Phi(u_{\sigma(1)}, \ldots, u_{\sigma(k)}) = \mathrm{sign}(\sigma) \Phi(u_1, \ldots, u_k). \]
We denote by $\Lambda^k U^*$ the space of skew-symmetric $k$-linear forms on $U$. □

Example 1.10. Suppose that $\omega \in \Lambda^n U^*$, and $u_1, \ldots, u_n \in U$. The skew-symmetry implies that for any $i < j$ we have
\[ \omega(u_1, \ldots, u_{i-1}, u_i, u_{i+1}, \ldots, u_{j-1}, u_j, u_{j+1}, \ldots, u_n) = -\omega(u_1, \ldots, u_{i-1}, u_j, u_{i+1}, \ldots, u_{j-1}, u_i, u_{j+1}, \ldots, u_n). \]
Indeed, we have
\[ \omega(u_1, \ldots, u_{i-1}, u_j, u_{i+1}, \ldots, u_{j-1}, u_i, u_{j+1}, \ldots, u_n) = \omega(u_{\tau_{ij}(1)}, \ldots, u_{\tau_{ij}(k)}, \ldots, u_{\tau_{ij}(n)}) \]
and $\mathrm{sign}(\tau_{ij}) = -1$. In particular, this implies that if $i \neq j$, but $u_i = u_j$, then
\[ \omega(u_1, \ldots, u_n) = 0. \quad \Box \]
Proposition 1.11. Suppose that $U$ is an $n$-dimensional $\mathbb{F}$-vector space and $e_1, \ldots, e_n$ is a basis of $U$. Then for any scalar $c \in \mathbb{F}$ there exists a unique skew-symmetric $n$-linear form $\omega \in \Lambda^n U^*$ such that
\[ \omega(e_1, \ldots, e_n) = c. \]

Proof. To understand what is happening we consider first the special case $n = 2$. Thus $\dim U = 2$. If $\omega \in \Lambda^2 U^*$ and $u_1, u_2 \in U$ we can write
\[ u_1 = a_{11} e_1 + a_{21} e_2, \quad u_2 = a_{12} e_1 + a_{22} e_2, \]
for some scalars $a_{ij} \in \mathbb{F}$, $i, j \in \{1, 2\}$. We have
\[ \omega(u_1, u_2) = \omega(a_{11} e_1 + a_{21} e_2, a_{12} e_1 + a_{22} e_2) \]
\[ = a_{11} \omega(e_1, a_{12} e_1 + a_{22} e_2) + a_{21} \omega(e_2, a_{12} e_1 + a_{22} e_2) \]
\[ = a_{11} a_{12} \omega(e_1, e_1) + a_{11} a_{22} \omega(e_1, e_2) + a_{21} a_{12} \omega(e_2, e_1) + a_{21} a_{22} \omega(e_2, e_2). \]
The skew-symmetry of $\omega$ implies that
\[ \omega(e_1, e_1) = \omega(e_2, e_2) = 0, \quad \omega(e_2, e_1) = -\omega(e_1, e_2). \]
Hence
\[ \omega(u_1, u_2) = (a_{11} a_{22} - a_{21} a_{12}) \omega(e_1, e_2). \]
If $\dim U = n$ and $u_1, \ldots, u_n \in U$, then we can write
\[ u_1 = \sum_{i_1=1}^n a_{i_1 1} e_{i_1}, \ \ldots, \ u_k = \sum_{i_k=1}^n a_{i_k k} e_{i_k}, \]
\[ \omega(u_1, \ldots, u_n) = \omega\Big( \sum_{i_1=1}^n a_{i_1 1} e_{i_1}, \ldots, \sum_{i_n=1}^n a_{i_n n} e_{i_n} \Big) = \sum_{i_1, \ldots, i_n = 1}^n a_{i_1 1} \cdots a_{i_n n}\, \omega(e_{i_1}, \ldots, e_{i_n}). \]
Observe that if the indices $i_1, \ldots, i_n$ are not pairwise distinct then
\[ \omega(e_{i_1}, \ldots, e_{i_n}) = 0. \]
Thus, in the above sum we get contributions only from pairwise distinct choices of indices $i_1, \ldots, i_n$. Such a choice corresponds to a permutation $\sigma \in S_n$, $\sigma(k) = i_k$. We deduce that
\[ \omega(u_1, \ldots, u_n) = \sum_{\sigma \in S_n} a_{\sigma(1) 1} \cdots a_{\sigma(n) n}\, \omega(e_{\sigma(1)}, \ldots, e_{\sigma(n)}) = \sum_{\sigma \in S_n} \mathrm{sign}(\sigma)\, a_{\sigma(1) 1} \cdots a_{\sigma(n) n}\, \omega(e_1, \ldots, e_n). \]
Thus, $\omega \in \Lambda^n U^*$ is uniquely determined by its value on $(e_1, \ldots, e_n)$.
Conversely, the map
\[ (u_1, \ldots, u_n) \mapsto c \sum_{\sigma \in S_n} \mathrm{sign}(\sigma)\, a_{\sigma(1) 1} \cdots a_{\sigma(n) n}, \quad u_k = \sum_{i=1}^n a_{ik} e_i, \]
is indeed $n$-linear and skew-symmetric. The proof is notationally bushy, but it does not involve any subtle idea so I will skip it. Instead, I'll leave the proof in the case $n = 2$ as an exercise. □

1.4. The determinant of a square matrix. Consider the vector space $\mathbb{F}^n$ with canonical basis
\[ e_1 = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \ e_2 = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix}, \ \ldots, \ e_n = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}. \]
According to Proposition 1.11 there exists a unique $n$-linear skew-symmetric form $\omega$ on $\mathbb{F}^n$ such that
\[ \omega(e_1, \ldots, e_n) = 1. \]
We will denote this form by $\det$ and we will refer to it as the determinant form on $\mathbb{F}^n$. The proof of Proposition 1.11 shows that if $u_1, \ldots, u_n \in \mathbb{F}^n$,
\[ u_k = \begin{bmatrix} u_{1k} \\ u_{2k} \\ \vdots \\ u_{nk} \end{bmatrix}, \quad k = 1, \ldots, n, \]
then
\[ \det(u_1, \ldots, u_n) = \sum_{\sigma \in S_n} \mathrm{sign}(\sigma)\, u_{\sigma(1) 1} u_{\sigma(2) 2} \cdots u_{\sigma(n) n}. \tag{1.6} \]
Note that
\[ \det(u_1, \ldots, u_n) = \sum_{\sigma \in S_n} \mathrm{sign}(\sigma)\, u_{1 \sigma(1)} u_{2 \sigma(2)} \cdots u_{n \sigma(n)}. \tag{1.7} \]

Definition 1.12. Suppose that $A = (a_{ij})_{1 \le i,j \le n}$ is an $n \times n$ matrix with entries in $\mathbb{F}$, which we regard as a linear operator $A : \mathbb{F}^n \to \mathbb{F}^n$. The determinant of $A$ is the scalar
\[ \det A := \det(A e_1, \ldots, A e_n), \]
where $e_1, \ldots, e_n$ is the canonical basis of $\mathbb{F}^n$, and $A e_k$ is the $k$-th column of $A$,
\[ A e_k = \begin{bmatrix} a_{1k} \\ a_{2k} \\ \vdots \\ a_{nk} \end{bmatrix}, \quad k = 1, \ldots, n. \quad \Box \]

Thus, according to (1.6) we have
\[ \det A = \sum_{\sigma \in S_n} \mathrm{sign}(\sigma)\, a_{\sigma(1) 1} \cdots a_{\sigma(n) n} \overset{(1.7)}{=} \sum_{\sigma \in S_n} \mathrm{sign}(\sigma)\, a_{1 \sigma(1)} \cdots a_{n \sigma(n)}. \tag{1.8} \]

Remark 1.13. Consider a typical summand in the first sum in (1.8), $a_{\sigma(1) 1} \cdots a_{\sigma(n) n}$. Observe that the $n$ entries
\[ a_{\sigma(1) 1}, a_{\sigma(2) 2}, \ldots, a_{\sigma(n) n} \]
lie on different columns of $A$ and thus occupy all the $n$ columns of $A$. Similarly, these entries lie on different rows of $A$.
A collection of $n$ entries so that no two lie on the same row or the same column is called a rook placement.¹ Observe that in order to describe a rook placement, you need to indicate the position of the entry on the first column, by indicating the row $\sigma(1)$ on which it lies, then you need to indicate the position of the entry on the second column, etc. Thus, the sum in (1.8) has one term for each rook placement. □

If $A^\top$ denotes the transpose of the $n \times n$ matrix $A$, with entries
\[ a^\top_{ij} = a_{ji}, \]
we deduce that
\[ \det A^\top = \sum_{\sigma \in S_n} \mathrm{sign}(\sigma)\, a^\top_{\sigma(1) 1} \cdots a^\top_{\sigma(n) n} = \sum_{\sigma \in S_n} \mathrm{sign}(\sigma)\, a_{1 \sigma(1)} \cdots a_{n \sigma(n)} = \det A. \tag{1.9} \]

Example 1.14. Suppose that $A$ is a $2 \times 2$ matrix
\[ A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}. \]
Then
\[ \det A = a_{11} a_{22} - a_{12} a_{21}. \quad \Box \]
Proposition 1.15. If $A$ is an upper triangular $n \times n$ matrix, then $\det A$ is the product of the diagonal entries. A similar result holds if $A$ is lower triangular.

¹If you are familiar with chess, a rook controls the row and the column at whose intersection it is situated.

Proof. To keep the ideas as transparent as possible, we carry out the proof in the special case $n = 3$. Suppose first that $A$ is upper triangular. Then
\[ A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix} \]
so that
\[ A e_1 = a_{11} e_1, \quad A e_2 = a_{12} e_1 + a_{22} e_2, \quad A e_3 = a_{13} e_1 + a_{23} e_2 + a_{33} e_3. \]
Then
\[ \det A = \det(A e_1, A e_2, A e_3) = \det(a_{11} e_1, a_{12} e_1 + a_{22} e_2, a_{13} e_1 + a_{23} e_2 + a_{33} e_3) \]
\[ = \det(a_{11} e_1, a_{12} e_1, a_{13} e_1 + a_{23} e_2 + a_{33} e_3) + \det(a_{11} e_1, a_{22} e_2, a_{13} e_1 + a_{23} e_2 + a_{33} e_3) \]
\[ = \underbrace{a_{11} a_{12} \det(e_1, e_1, a_{13} e_1 + a_{23} e_2 + a_{33} e_3)}_{=0} + a_{11} a_{22} \det(e_1, e_2, a_{13} e_1 + a_{23} e_2 + a_{33} e_3) \]
\[ = a_{11} a_{22} \Big( \underbrace{\det(e_1, e_2, a_{13} e_1)}_{=0} + \underbrace{\det(e_1, e_2, a_{23} e_2)}_{=0} + \det(e_1, e_2, a_{33} e_3) \Big) \]
\[ = a_{11} a_{22} a_{33} \det(e_1, e_2, e_3) = a_{11} a_{22} a_{33}. \]
This proves the proposition when $A$ is upper triangular. If $A$ is lower triangular, then its transpose $A^\top$ is upper triangular and we deduce
\[ \det A = \det A^\top = a^\top_{11} a^\top_{22} a^\top_{33} = a_{11} a_{22} a_{33}. \quad \Box \]

Recall that we have a collection of elementary column (row) operations on a matrix. The next result explains the effect of these operations on the determinant of a matrix.

Proposition 1.16. Suppose that $A$ is an $n \times n$ matrix. The following hold.
(a) If the matrix $B$ is obtained from $A$ by multiplying the elements of the $i$-th column of $A$ by the same nonzero scalar $\lambda$, then
\[ \det B = \lambda \det A. \]
(b) If the matrix $B$ is obtained from $A$ by switching the order of the columns $i$ and $j$, $i \neq j$, then
\[ \det B = -\det A. \]
(c) If the matrix $B$ is obtained from $A$ by adding to the $i$-th column the $j$-th column, $j \neq i$, then
\[ \det B = \det A. \]
(d) Similar results hold if we perform row operations of the same type.

Proof. (a) We have
\[ \det B = \det(B e_1, \ldots, B e_n) = \det(A e_1, \ldots, \lambda A e_i, \ldots, A e_n) = \lambda \det(A e_1, \ldots, A e_i, \ldots, A e_n) = \lambda \det A. \]
(b) Observe that for any $\sigma \in S_n$ we have
\[ \det(A e_{\sigma(1)}, \ldots, A e_{\sigma(n)}) = \mathrm{sign}(\sigma) \det(A e_1, \ldots, A e_n) = \mathrm{sign}(\sigma) \det A. \]
Now observe that the columns of $B$ are
\[ B e_1 = A e_{\tau_{ij}(1)}, \ \ldots, \ B e_n = A e_{\tau_{ij}(n)} \]
and $\mathrm{sign}(\tau_{ij}) = -1$.
For (c) we observe that
\[ \det B = \det(A e_1, \ldots, A e_{i-1}, A e_i + A e_j, A e_{i+1}, \ldots, A e_j, \ldots, A e_n) \]
\[ = \det(A e_1, \ldots, A e_{i-1}, A e_i, A e_{i+1}, \ldots, A e_j, \ldots, A e_n) + \underbrace{\det(A e_1, \ldots, A e_{i-1}, A e_j, A e_{i+1}, \ldots, A e_j, \ldots, A e_n)}_{=0} \]
\[ = \det A. \]
Part (d) follows by applying (a), (b), (c) to the transpose of $A$, observing that the rows of $A$ are the columns of $A^\top$, and then using the equality $\det C = \det C^\top$. □

The above result suggests one efficient method for computing determinants, because we know that by performing elementary row operations on a square matrix we can reduce it to upper triangular form.
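Here is a sketch of that method in Python (an illustration, not part of the notes): reduce to upper triangular form by row operations, tracking the sign flips from row swaps, then multiply the diagonal entries (Proposition 1.15).

```python
import numpy as np

def det_by_elimination(A):
    A = np.array(A, dtype=float)
    n = len(A)
    sign = 1.0
    for k in range(n):
        # pivot: bring a row with a nonzero entry into position k (flips the sign)
        p = next((i for i in range(k, n) if A[i, k] != 0), None)
        if p is None:
            return 0.0                      # singular matrix => det A = 0
        if p != k:
            A[[k, p]] = A[[p, k]]
            sign = -sign
        # adding multiples of row k to the rows below leaves det unchanged
        for i in range(k + 1, n):
            A[i, k:] -= (A[i, k] / A[k, k]) * A[k, k:]
    return sign * float(np.prod(np.diag(A)))
```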
Here is a first application of determinants.
Proposition 1.17. Suppose that $A$ is an $n \times n$ matrix with entries in $\mathbb{F}$. Then the following statements are equivalent.
(a) The matrix $A$ is invertible.
(b) $\det A \neq 0$.

Proof. A matrix $A$ is invertible if and only if by performing elementary row operations we can reduce it to an upper triangular matrix $B$ whose diagonal entries are nonzero, i.e., $\det B \neq 0$. By performing elementary row operations the determinant changes by a nonzero factor, so that
\[ \det A \neq 0 \iff \det B \neq 0. \quad \Box \]

Corollary 1.18. Suppose that $u_1, \ldots, u_n \in \mathbb{F}^n$. The following statements are equivalent.
(a) The vectors $u_1, \ldots, u_n$ are linearly independent.
(b) $\det(u_1, \ldots, u_n) \neq 0$.

Proof. Consider the linear operator $A : \mathbb{F}^n \to \mathbb{F}^n$ given by $A e_i = u_i$, $i = 1, \ldots, n$. We can tautologically identify it with a matrix, and we have
\[ \det(u_1, \ldots, u_n) = \det A. \]
Now observe that $(u_1, \ldots, u_n)$ are linearly independent if and only if $A$ is invertible and, according to the previous proposition, this happens if and only if $\det A \neq 0$. □

1.5. Additional properties of determinants.

Proposition 1.19. If $A, B$ are two $n \times n$ matrices, then
\[ \det AB = \det A \cdot \det B. \tag{1.10} \]

Proof. We have
\[ \det AB = \det(AB e_1, \ldots, AB e_n) = \det\Big( \sum_{i_1=1}^n b_{i_1 1} A e_{i_1}, \ldots, \sum_{i_n=1}^n b_{i_n n} A e_{i_n} \Big) = \sum_{i_1, \ldots, i_n = 1}^n b_{i_1 1} \cdots b_{i_n n} \det(A e_{i_1}, \ldots, A e_{i_n}). \]
In the above sum, the only nontrivial terms correspond to choices of pairwise distinct indices $i_1, \ldots, i_n$. For such a choice, the sequence $i_1, \ldots, i_n$ describes a permutation of $I_n$. We deduce
\[ \det AB = \sum_{\sigma \in S_n} b_{\sigma(1) 1} \cdots b_{\sigma(n) n} \underbrace{\det(A e_{\sigma(1)}, \ldots, A e_{\sigma(n)})}_{= \mathrm{sign}(\sigma) \det A} = \det A \sum_{\sigma \in S_n} \mathrm{sign}(\sigma)\, b_{\sigma(1) 1} \cdots b_{\sigma(n) n} = \det A \cdot \det B. \quad \Box \]

Corollary 1.20. If $A$ is an invertible matrix, then
\[ \det A^{-1} = \frac{1}{\det A}. \]

Proof. Indeed, we have
\[ A \cdot A^{-1} = \mathbb{1} \]
so that
\[ \det A \cdot \det A^{-1} = \det \mathbb{1} = 1. \quad \Box \]

Proposition 1.21. Suppose that $m, n$ are positive integers and $S$ is an $(m+n) \times (m+n)$ matrix that has the block form
\[ S = \begin{bmatrix} A & C \\ 0 & B \end{bmatrix}, \]
where $A$ is an $m \times m$ matrix, $B$ is an $n \times n$ matrix and $C$ is an $m \times n$ matrix. Then
\[ \det S = \det A \cdot \det B. \]

Proof. We denote by $s_{ij}$ the $(i, j)$-entry of $S$, $i, j \in I_{m+n}$. From the block description of $S$ we deduce that
\[ j \le m \text{ and } i > m \Rightarrow s_{ij} = 0. \tag{1.11} \]
We have
\[ \det S = \sum_{\sigma \in S_{m+n}} \mathrm{sign}(\sigma) \prod_{i=1}^{m+n} s_{\sigma(i) i}. \]
From (1.11) we deduce that in the above sum the nonzero terms correspond to permutations $\sigma \in S_{m+n}$ such that
\[ \sigma(i) \le m, \quad \forall i \le m. \tag{1.12} \]
If $\sigma$ is such a permutation, then its restriction to $I_m$ is a permutation $\varphi$ of $I_m$ and its restriction to $I_{m+n} \setminus I_m$ is a permutation of this set, which we regard as a permutation $\psi$ of $I_n$. Conversely, given $\varphi \in S_m$ and $\psi \in S_n$ we obtain a permutation $\sigma = \varphi \oplus \psi \in S_{m+n}$ satisfying (1.12), given by
\[ (\varphi \oplus \psi)(i) = \begin{cases} \varphi(i), & i \le m, \\ m + \psi(i - m), & i > m. \end{cases} \]
Observe that
\[ \mathrm{sign}(\varphi \oplus \psi) = \mathrm{sign}(\varphi) \cdot \mathrm{sign}(\psi), \]
and we deduce
\[ \det S = \sum_{\varphi \in S_m, \ \psi \in S_n} \mathrm{sign}(\varphi \oplus \psi) \prod_{i=1}^{m+n} s_{(\varphi \oplus \psi)(i) i} = \Big( \sum_{\varphi \in S_m} \mathrm{sign}(\varphi) \prod_{i=1}^m s_{\varphi(i) i} \Big) \Big( \sum_{\psi \in S_n} \mathrm{sign}(\psi) \prod_{j=1}^n s_{m + \psi(j), j + m} \Big) = \det A \cdot \det B. \quad \Box \]

Definition 1.22. If $A$ is an $n \times n$ matrix and $i, j \in I_n$, we denote by $A(i, j)$ the matrix obtained from $A$ by removing the $i$-th row and the $j$-th column. □

Corollary 1.23. Suppose that the $j$-th column of an $n \times n$ matrix $A$ is sparse, i.e., all the elements on the $j$-th column, with the possible exception of the element on the $i$-th row, are equal to zero. Then
\[ \det A = (-1)^{i+j} a_{ij} \det A(i, j). \]

Proof. Observe that if $i = j = 1$ then $A$ has the block form
\[ A = \begin{bmatrix} a_{11} & * \\ 0 & A(1, 1) \end{bmatrix} \]
and the result follows from Proposition 1.21.
We can reduce the general case to this special case by permuting rows and columns of $A$. If we switch the $j$-th column with the $(j-1)$-th column, we can arrange that the $(j-1)$-th column is the sparse column. Iterating this procedure, we deduce after $(j-1)$ such switches that the first column is the sparse column.
By performing $(i-1)$ row switches we can arrange that the nontrivial element on this sparse column is situated on the first row. Thus, after a total of $i + j - 2$ row and column switches we obtain a new matrix $A'$ with the block form
\[ A' = \begin{bmatrix} a_{ij} & * \\ 0 & A(i, j) \end{bmatrix}. \]
We have
\[ (-1)^{i+j} \det A = \det A' = a_{ij} \det A(i, j). \quad \Box \]

Corollary 1.24 (Row and column expansion). Fix $j \in I_n$. Then for any $n \times n$ matrix $A$ we have
\[ \det A = \sum_{i=1}^n (-1)^{i+j} a_{ij} \det A(i, j) = \sum_{k=1}^n (-1)^{j+k} a_{jk} \det A(j, k). \]
The first equality is referred to as the $j$-th column expansion of $\det A$, while the second equality is referred to as the $j$-th row expansion of $\det A$.

Proof. We prove only the column expansion. The row expansion is obtained by applying the column expansion to the transpose matrix. For simplicity we assume that $j = 1$. We have
\[ \det A = \det(A e_1, A e_2, \ldots, A e_n) = \det\Big( \sum_{i=1}^n a_{i1} e_i, A e_2, \ldots, A e_n \Big) = \sum_{i=1}^n a_{i1} \det(e_i, A e_2, \ldots, A e_n). \]
Denote by $A_i$ the matrix whose first column is the basic column vector $e_i$, and whose other columns are the corresponding columns of $A$, namely $A e_2, \ldots, A e_n$. We can rewrite the last equality as
\[ \det A = \sum_{i=1}^n a_{i1} \det A_i. \]
The first column of $A_i$ is sparse, and the submatrix $A_i(i, 1)$ is equal to the submatrix $A(i, 1)$. We deduce from the previous corollary that
\[ \det A_i = (-1)^{i+1} \det A_i(i, 1) = (-1)^{i+1} \det A(i, 1). \]
This completes the proof of the column expansion formula. □

Corollary 1.25. If $k \neq j$ then
\[ \sum_{i=1}^n (-1)^{i+j} a_{ik} \det A(i, j) = 0. \]

Proof. Denote by $A'$ the matrix obtained from $A$ by replacing the $j$-th column with the $k$-th column of $A$. Thus, in the new matrix $A'$ the $j$-th and the $k$-th columns are identical, so that $\det A' = 0$. On the other hand, $A'(i, j) = A(i, j)$. Expanding $\det A'$ along the $j$-th column we deduce
\[ 0 = \det A' = \sum_{i=1}^n (-1)^{i+j} a'_{ij} \det A(i, j) = \sum_{i=1}^n (-1)^{i+j} a_{ik} \det A(i, j). \quad \Box \]

Definition 1.26. For any $n \times n$ matrix $A$ we define the adjoint matrix $\hat{A}$ to be the $n \times n$ matrix with entries
\[ \hat{a}_{ij} = (-1)^{i+j} \det A(j, i), \quad i, j \in I_n. \quad \Box \]

From Corollary 1.24 we deduce that for any $j$ we have
\[ \sum_{i=1}^n \hat{a}_{ji} a_{ij} = \det A, \]
while Corollary 1.25 implies that for any $j \neq k$ we have
\[ \sum_{i=1}^n \hat{a}_{ji} a_{ik} = 0. \]
The last two identities can be rewritten in the compact form
\[ \hat{A} A = (\det A) \mathbb{1}. \tag{1.13} \]
If $A$ is invertible, then from the above equality we conclude that
\[ A^{-1} = \frac{1}{\det A} \hat{A}. \tag{1.14} \]
Example 1.27. Suppose that $A$ is a $2 \times 2$ matrix
\[ A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}. \]
Then
\[ \det A = a_{11} a_{22} - a_{12} a_{21}, \]
\[ A(1,1) = [a_{22}], \quad A(1,2) = [a_{21}], \quad A(2,1) = [a_{12}], \quad A(2,2) = [a_{11}], \]
\[ \hat{a}_{11} = \det A(1,1) = a_{22}, \quad \hat{a}_{12} = -\det A(2,1) = -a_{12}, \]
\[ \hat{a}_{21} = -\det A(1,2) = -a_{21}, \quad \hat{a}_{22} = \det A(2,2) = a_{11}, \]
so that
\[ \hat{A} = \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix} \]
and we observe that
\[ \hat{A} A = \begin{bmatrix} \det A & 0 \\ 0 & \det A \end{bmatrix}. \quad \Box \]
Proposition 1.28 (Cramer's Rule). Suppose that $A$ is an invertible $n \times n$ matrix and $u, x \in \mathbb{F}^n$ are two column vectors such that
\[ A x = u, \]
i.e., $x$ is a solution of the linear system
\[ \begin{cases} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = u_1 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = u_2 \\ \qquad \vdots \\ a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n = u_n. \end{cases} \]
Denote by $A_j(u)$ the matrix obtained from $A$ by replacing the $j$-th column with the column vector $u$. Then
\[ x_j = \frac{\det A_j(u)}{\det A}, \quad j = 1, \ldots, n. \tag{1.15} \]

Proof. By expanding along the $j$-th column of $A_j(u)$ we deduce
\[ \det A_j(u) = \sum_{k=1}^n (-1)^{j+k} u_k \det A(k, j). \tag{1.16} \]
On the other hand,
\[ (\det A) x = (\hat{A} A) x = \hat{A} u. \]
Hence
\[ (\det A) x_j = \sum_{k=1}^n \hat{a}_{jk} u_k = \sum_k (-1)^{k+j} u_k \det A(k, j) \overset{(1.16)}{=} \det A_j(u). \quad \Box \]
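A direct transcription of (1.15) in Python (illustrative only; in practice one solves linear systems by elimination, since this approach computes $n + 1$ determinants):

```python
import numpy as np

def cramer_solve(A, u):
    d = np.linalg.det(A)
    # x_j = det A_j(u) / det A, where A_j(u) has column j replaced by u
    x = np.empty(len(u))
    for j in range(len(u)):
        Aj = A.astype(float).copy()
        Aj[:, j] = u
        x[j] = np.linalg.det(Aj) / d
    return x

A = np.array([[3.0, -2.0], [2.0, -1.0]])
u = np.array([1.0, 2.0])
print(cramer_solve(A, u), np.linalg.solve(A, u))  # should agree
```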

1.6. Examples. To any list of complex numbers $(x_1, \ldots, x_n)$ we associate the $n \times n$ matrix
\[ V(x_1, \ldots, x_n) = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1} \end{bmatrix}. \tag{1.17} \]
This matrix is called the Vandermonde matrix associated to the list of numbers $(x_1, \ldots, x_n)$. We want to compute its determinant. Observe first that
\[ \det V(x_1, \ldots, x_n) = 0 \]
if the numbers $x_1, \ldots, x_n$ are not distinct. Observe next that
\[ \det V(x_1, x_2) = \det \begin{bmatrix} 1 & 1 \\ x_1 & x_2 \end{bmatrix} = x_2 - x_1. \]
Consider now the $3 \times 3$ situation. We have
\[ \det V(x_1, x_2, x_3) = \det \begin{bmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ x_1^2 & x_2^2 & x_3^2 \end{bmatrix}. \]
Subtract from the 3rd row the second row multiplied by $x_1$ to deduce
\[ \det V(x_1, x_2, x_3) = \det \begin{bmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ 0 & x_2^2 - x_1 x_2 & x_3^2 - x_3 x_1 \end{bmatrix} = \det \begin{bmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ 0 & x_2(x_2 - x_1) & x_3(x_3 - x_1) \end{bmatrix}. \]
Subtract from the 2nd row the first row multiplied by $x_1$ to deduce
\[ \det V(x_1, x_2, x_3) = \det \begin{bmatrix} 1 & 1 & 1 \\ 0 & x_2 - x_1 & x_3 - x_1 \\ 0 & x_2(x_2 - x_1) & x_3(x_3 - x_1) \end{bmatrix} = \det \begin{bmatrix} x_2 - x_1 & x_3 - x_1 \\ x_2(x_2 - x_1) & x_3(x_3 - x_1) \end{bmatrix} \]
\[ = (x_2 - x_1)(x_3 - x_1) \det \begin{bmatrix} 1 & 1 \\ x_2 & x_3 \end{bmatrix} = (x_2 - x_1)(x_3 - x_1) \det V(x_2, x_3) = (x_2 - x_1)(x_3 - x_1)(x_3 - x_2). \]
We can write the above equalities in a more compact form
\[ \det V(x_1, x_2) = \prod_{1 \le i < j \le 2} (x_j - x_i), \quad \det V(x_1, x_2, x_3) = \prod_{1 \le i < j \le 3} (x_j - x_i). \tag{1.18} \]
A similar row manipulation argument (left to you as an exercise) shows that
\[ \det V(x_1, \ldots, x_n) = (x_2 - x_1) \cdots (x_n - x_1) \det V(x_2, \ldots, x_n). \tag{1.19} \]
We have the following general result.

Proposition 1.29. For any integer $n \ge 2$ and any complex numbers $x_1, \ldots, x_n$ we have
\[ \det V(x_1, \ldots, x_n) = \prod_{1 \le i < j \le n} (x_j - x_i). \tag{1.20} \]

Proof. We will argue by induction on $n$. The case $n = 2$ is contained in (1.18). Assume now that (1.20) is true for $n - 1$. This means that
\[ \det V(x_2, \ldots, x_n) = \prod_{2 \le i < j \le n} (x_j - x_i). \]
Using this in (1.19) we deduce
\[ \det V(x_1, \ldots, x_n) = (x_2 - x_1) \cdots (x_n - x_1) \prod_{2 \le i < j \le n} (x_j - x_i) = \prod_{1 \le i < j \le n} (x_j - x_i). \quad \Box \]
Here is a simple application of the above computation.

Corollary 1.30. If $x_1, \ldots, x_n$ are distinct complex numbers, then for any complex numbers $r_1, \ldots, r_n$ there exists a polynomial of degree $\le n - 1$ uniquely determined by the conditions
\[ P(x_1) = r_1, \ \ldots, \ P(x_n) = r_n. \tag{1.21} \]

Proof. The polynomial $P$ must have the form
\[ P(x) = a_0 + a_1 x + \cdots + a_{n-1} x^{n-1}, \]
where the coefficients $a_0, \ldots, a_{n-1}$ are to be determined. We will do this using (1.21), which can be rewritten as a system of linear equations in which the unknowns are the coefficients $a_0, \ldots, a_{n-1}$,
\[ \begin{cases} a_0 + a_1 x_1 + \cdots + a_{n-1} x_1^{n-1} = r_1 \\ a_0 + a_1 x_2 + \cdots + a_{n-1} x_2^{n-1} = r_2 \\ \qquad \vdots \\ a_0 + a_1 x_n + \cdots + a_{n-1} x_n^{n-1} = r_n. \end{cases} \]
We can rewrite this in matrix form
\[ \underbrace{\begin{bmatrix} 1 & x_1 & \cdots & x_1^{n-1} \\ 1 & x_2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & \cdots & x_n^{n-1} \end{bmatrix}}_{= V(x_1, \ldots, x_n)^\top} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \end{bmatrix} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix}. \]
Because the numbers $x_1, \ldots, x_n$ are distinct, we deduce from (1.20) and (1.9) that
\[ \det V(x_1, \ldots, x_n)^\top = \det V(x_1, \ldots, x_n) = \prod_{1 \le i < j \le n} (x_j - x_i) \neq 0. \]
Hence the above linear system has a unique solution $a_0, \ldots, a_{n-1}$. □
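In Python the proof becomes a three-line recipe (a sketch for illustration; for large $n$ the Vandermonde system is ill-conditioned and dedicated interpolation schemes are preferred):

```python
import numpy as np

def interpolate(xs, rs):
    # rows: [1, x_i, x_i^2, ..., x_i^{n-1}], the transpose of (1.17)
    V = np.vander(xs, increasing=True)
    return np.linalg.solve(V, rs)   # coefficients a_0, ..., a_{n-1}

xs = np.array([0.0, 1.0, 2.0])
rs = np.array([1.0, 3.0, 9.0])
print(interpolate(xs, rs))  # P(x) = 1 + 0*x + 2*x^2 fits these values
```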

1.7. Exercises.

Exercise 1.1. Prove that the map in Example 1.2 is indeed a bilinear map. □

Exercise 1.2. Prove Proposition 1.4. □

Exercise* 1.3. (a) Show that for any $i \neq j \in I_n$ we have $\tau_{ij} \tau_{ij} = \mathbb{1}_{I_n}$.
(b) Prove that for any permutation $\sigma \in S_n$ there exists a sequence of transpositions $\tau_{i_1 j_1}, \ldots, \tau_{i_m j_m}$, $m < n$, such that
\[ \tau_{i_m j_m} \cdots \tau_{i_1 j_1} \sigma = \mathbb{1}_{I_n}. \]
Conclude that any permutation is a product of transpositions. □

Exercise 1.4. Decompose the permutation
\[ \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 5 & 4 & 3 & 2 & 1 \end{pmatrix} \]
as a composition of transpositions. □

Exercise 1.5. Suppose that $\Phi \in T^2(U^*)$ is a symmetric bilinear map. Define $Q : U \to \mathbb{F}$ by setting
\[ Q(u) = \Phi(u, u), \quad \forall u \in U. \]
Show that for any $u, v \in U$ we have
\[ \Phi(u, v) = \frac{1}{4} \big( Q(u + v) - Q(u - v) \big). \quad \Box \]

Exercise 1.6. Prove that the map
\[ \Phi : \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}, \quad \Phi(u, v) = u_1 v_2 - u_2 v_1, \]
is bilinear and skew-symmetric. □

Exercise 1.7. (a) Show that a bilinear form $\Phi : U \times U \to \mathbb{F}$ is skew-symmetric if and only if
\[ \Phi(u, u) = 0, \quad \forall u \in U. \]
Hint: Expand $\Phi(u + v, u + v)$ using the bilinearity of $\Phi$.
(b) Prove that an $n$-linear form $\Phi \in T^n(U^*)$ is skew-symmetric if and only if for any $i \neq j$ and any vectors $u_1, \ldots, u_n \in U$ such that $u_i = u_j$ we have
\[ \Phi(u_1, \ldots, u_n) = 0. \]
Hint: Use the trick in part (a) and Exercise 1.3. □

Exercise 1.8. Compute the determinant of the following $5 \times 5$ matrix
\[ \begin{bmatrix} 1 & 2 & 3 & 4 & 5 \\ 2 & 1 & 2 & 3 & 4 \\ 3 & 2 & 1 & 2 & 3 \\ 4 & 3 & 2 & 1 & 2 \\ 5 & 4 & 3 & 2 & 1 \end{bmatrix}. \]

Exercise 1.9. Fix complex numbers $x$ and $h$. Compute the determinant of the matrix
\[ \begin{bmatrix} 1 & 1 & 0 & 0 \\ x & h & 1 & 0 \\ x^2 & hx & h & 1 \\ x^3 & hx^2 & hx & h \end{bmatrix}. \]
Can you generalize this example? □

Exercise 1.10. Prove the equality (1.19). □

Exercise 1.11. (a) Consider a degree $n - 1$ polynomial
\[ P(x) = a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + \cdots + a_1 x + a_0, \quad a_{n-1} \neq 0. \]
Compute the determinant of the following matrix.
\[ V = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{n-2} & x_2^{n-2} & \cdots & x_n^{n-2} \\ P(x_1) & P(x_2) & \cdots & P(x_n) \end{bmatrix}. \]
(b) Compute the determinants of the following $n \times n$ matrices
\[ A = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{n-2} & x_2^{n-2} & \cdots & x_n^{n-2} \\ x_2 x_3 \cdots x_n & x_1 x_3 x_4 \cdots x_n & \cdots & x_1 x_2 \cdots x_{n-1} \end{bmatrix}, \]
and
\[ B = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{n-2} & x_2^{n-2} & \cdots & x_n^{n-2} \\ (x_2 + x_3 + \cdots + x_n)^{n-1} & (x_1 + x_3 + x_4 + \cdots + x_n)^{n-1} & \cdots & (x_1 + x_2 + \cdots + x_{n-1})^{n-1} \end{bmatrix}. \]
Hint: To compute $\det B$ it is wise to write $S = x_1 + \cdots + x_n$, so that $x_2 + x_3 + \cdots + x_n = S - x_1$, $x_1 + x_3 + \cdots + x_n = S - x_2$, etc. Next observe that $(S - x)^k$ is a polynomial of degree $k$ in $x$. □

Exercise 1.12. Suppose that $A$ is a skew-symmetric $n \times n$ matrix, i.e.,
\[ A^\top = -A. \]
Show that $\det A = 0$ if $n$ is odd. □

Exercise 1.13. Suppose that $A = (a_{ij})_{1 \le i,j \le n}$ is an $n \times n$ matrix with complex entries.
(a) Fix complex numbers $x_1, \ldots, x_n, y_1, \ldots, y_n$ and consider the $n \times n$ matrix $B$ with entries
\[ b_{ij} = x_i y_j a_{ij}. \]
Show that
\[ \det B = (x_1 y_1 \cdots x_n y_n) \det A. \]
(b) Suppose that $C$ is the $n \times n$ matrix with entries
\[ c_{ij} = (-1)^{i+j} a_{ij}. \]
Show that $\det C = \det A$. □

Exercise 1.14. (a) Suppose we are given three sequences of numbers $a = (a_k)_{k \ge 1}$, $b = (b_k)_{k \ge 1}$ and $c = (c_k)_{k \ge 1}$. To these sequences we associate a sequence of Jacobi matrices
\[ J_n = \begin{bmatrix} a_1 & b_1 & 0 & 0 & \cdots & 0 \\ c_1 & a_2 & b_2 & 0 & \cdots & 0 \\ 0 & c_2 & a_3 & b_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & c_{n-1} & a_n \end{bmatrix}. \tag{J} \]
Show that
\[ \det J_n = a_n \det J_{n-1} - b_{n-1} c_{n-1} \det J_{n-2}. \tag{1.22} \]
Hint: Expand along the last row.
(b) Suppose that above we have
\[ c_k = 1, \quad b_k = 2, \quad a_k = 3, \quad \forall k \ge 1. \]
Compute $\det J_1$, $\det J_2$. Using (1.22) determine $\det J_3$, $\det J_4$, $\det J_5$, $\det J_6$, $\det J_7$. Can you detect a pattern? □

Exercise 1.15. Suppose we are given a sequence of polynomials with complex coefficients $(P_n(x))_{n \ge 0}$, $\deg P_n = n$ for all $n \ge 0$,
\[ P_n(x) = a_n x^n + \cdots, \quad a_n \neq 0. \]
Denote by $V_n$ the space of polynomials with complex coefficients and degree $\le n$.
(a) Show that the collection $\{P_0(x), \ldots, P_n(x)\}$ is a basis of $V_n$.
(b) Show that for any $x_1, \ldots, x_n \in \mathbb{C}$ we have
\[ \det \begin{bmatrix} P_0(x_1) & P_0(x_2) & \cdots & P_0(x_n) \\ P_1(x_1) & P_1(x_2) & \cdots & P_1(x_n) \\ \vdots & \vdots & \ddots & \vdots \\ P_{n-1}(x_1) & P_{n-1}(x_2) & \cdots & P_{n-1}(x_n) \end{bmatrix} = a_0 a_1 \cdots a_{n-1} \prod_{i < j} (x_j - x_i). \quad \Box \]
Exercise 1.16. To any polynomial $P(x) = c_0 + c_1 x + \cdots + c_{n-1} x^{n-1}$ of degree $\le n - 1$ with complex coefficients we associate the $n \times n$ circulant matrix
\[ C_P = \begin{bmatrix} c_0 & c_1 & c_2 & \cdots & c_{n-2} & c_{n-1} \\ c_{n-1} & c_0 & c_1 & \cdots & c_{n-3} & c_{n-2} \\ c_{n-2} & c_{n-1} & c_0 & \cdots & c_{n-4} & c_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ c_1 & c_2 & c_3 & \cdots & c_{n-1} & c_0 \end{bmatrix}. \]
Set
\[ \rho = e^{\frac{2\pi i}{n}}, \quad i = \sqrt{-1}, \]
so that $\rho^n = 1$. Consider the $n \times n$ Vandermonde matrix $V = V(1, \rho, \ldots, \rho^{n-1})$ defined as in (1.17).
(a) Show that for any $j = 1, \ldots, n - 1$ we have
\[ 1 + \rho^j + \rho^{2j} + \cdots + \rho^{(n-1)j} = 0. \]
(b) Show that
\[ C_P V = V \operatorname{Diag}\big( P(1), P(\rho), \ldots, P(\rho^{n-1}) \big), \]
where $\operatorname{Diag}(a_1, \ldots, a_n)$ denotes the diagonal $n \times n$ matrix with diagonal entries $a_1, \ldots, a_n$.
(c) Show that
\[ \det C_P = P(1) P(\rho) \cdots P(\rho^{n-1}). \quad \Box \]
(d) Suppose that $P(x) = 1 + 2x + 3x^2 + 4x^3$, so that $C_P$ is a $4 \times 4$ matrix with integer entries and thus $\det C_P$ is an integer. Find this integer. Can you generalize this computation?
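A quick numerical check of part (c) for the polynomial in part (d) (a sketch for illustration, not a solution):

```python
import numpy as np

c = np.array([1, 2, 3, 4])                  # P(x) = 1 + 2x + 3x^2 + 4x^3
n = len(c)
# build the circulant: row i is c cyclically shifted right i times
C = np.array([np.roll(c, i) for i in range(n)])
rho = np.exp(2j * np.pi / n)
prod = np.prod([np.polyval(c[::-1], rho**j) for j in range(n)])
print(np.linalg.det(C).real, prod.real)     # both should give the same integer
```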
Exercise 1.17. Consider the $n \times n$ matrix
\[ A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & 0 & \cdots & 0 & 0 \end{bmatrix}. \]
(a) Find the matrices
\[ A^2, A^3, \ldots, A^n. \]
(b) Compute $(I - A)(I + A + \cdots + A^{n-1})$.
(c) Find the inverse of $(I - A)$. □

Exercise 1.18. Let
\[ P(x) = x^d + a_{d-1} x^{d-1} + \cdots + a_1 x + a_0 \]
be a polynomial of degree $d$ with complex coefficients. We denote by $\mathcal{S}$ the collection of sequences of complex numbers, i.e., functions
\[ f : \{0, 1, 2, \ldots\} \to \mathbb{C}, \quad n \mapsto f(n). \]
This is a complex vector space in a standard fashion. We denote by $\mathcal{S}_P$ the subcollection of sequences $f \in \mathcal{S}$ satisfying the recurrence relation
\[ f(n + d) + a_{d-1} f(n + d - 1) + \cdots + a_1 f(n + 1) + a_0 f(n) = 0, \quad \forall n \ge 0. \tag{R_P} \]
(a) Show that $\mathcal{S}_P$ is a vector subspace of $\mathcal{S}$.
(b) Show that the map $I : \mathcal{S}_P \to \mathbb{C}^d$ which associates to $f \in \mathcal{S}_P$ its initial values $If$,
\[ If = \begin{bmatrix} f(0) \\ f(1) \\ \vdots \\ f(d-1) \end{bmatrix} \in \mathbb{C}^d, \]
is an isomorphism of vector spaces.
(c) For any $\lambda \in \mathbb{C}$ we consider the sequence $f_\lambda$ defined by
\[ f_\lambda(n) = \lambda^n, \quad \forall n \ge 0. \]
(Above it is understood that $\lambda^0 = 1$.) Show that $f_\lambda \in \mathcal{S}_P$ if and only if $P(\lambda) = 0$, i.e., $\lambda$ is a root of $P$.
(d) Suppose $P$ has $d$ distinct roots $\lambda_1, \ldots, \lambda_d \in \mathbb{C}$. Show that the collection of sequences $f_{\lambda_1}, \ldots, f_{\lambda_d}$ is a basis of $\mathcal{S}_P$.
(e) Consider the Fibonacci sequence $\big( f(n) \big)_{n \ge 0}$ defined by
\[ f(0) = f(1) = 1, \quad f(n + 2) = f(n + 1) + f(n), \quad \forall n \ge 0. \]
Thus,
\[ f(2) = 2, \ f(3) = 3, \ f(4) = 5, \ f(5) = 8, \ f(6) = 13, \ldots. \]
Use the results (a)-(d) above to find a short formula describing $f(n)$. □

Exercise 1.19. Let $b, c$ be two distinct complex numbers. Consider the $n \times n$ Jacobi matrix
\[ J_n = \begin{bmatrix} b + c & b & 0 & \cdots & 0 & 0 \\ c & b + c & b & \cdots & 0 & 0 \\ 0 & c & b + c & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & b + c & b \\ 0 & 0 & 0 & \cdots & c & b + c \end{bmatrix}. \]
Find a short formula for $\det J_n$.
Hint: Use the results in Exercises 1.14 and 1.18. □

2. SPECTRAL DECOMPOSITION OF LINEAR OPERATORS


2.1. Invariants of linear operators. Suppose that $U$ is an $n$-dimensional $\mathbb{F}$-vector space. We denote by $\mathcal{L}(U)$ the space of linear operators (maps) $T : U \to U$. We already know that once we choose a basis $e = (e_1, \ldots, e_n)$ of $U$ we can represent $T$ by a matrix
\[ A = \mathcal{M}(e, T) = (a_{ij})_{1 \le i,j \le n}, \]
where the elements of the $k$-th column of $A$ describe the coordinates of $T e_k$ in the basis $e$, i.e.,
\[ T e_k = a_{1k} e_1 + \cdots + a_{nk} e_n = \sum_{j=1}^n a_{jk} e_j. \]
A priori, there is no good reason for choosing the basis $e = (e_1, \ldots, e_n)$ over another $f = (f_1, \ldots, f_n)$. With respect to this new basis the operator $T$ is represented by another matrix
\[ B = \mathcal{M}(f, T) = (b_{ij})_{1 \le i,j \le n}, \quad T f_k = \sum_{j=1}^n b_{jk} f_j. \]
The basis $f$ is related to the basis $e$ by a transition matrix
\[ C = (c_{ij})_{1 \le i,j \le n}, \quad f_k = \sum_{j=1}^n c_{jk} e_j. \]
Thus, the $k$-th column of $C$ describes the coordinates of the vector $f_k$ in the basis $e$. Then $C$ is invertible and
\[ B = C^{-1} A C. \tag{2.1} \]
The space $U$ has lots of bases, so the same operator $T$ can be represented by many different matrices. The question we want to address in this section can be loosely stated as follows.

Find bases of $U$ so that, in these bases, the operator $T$ is represented by very simple matrices.

We will not define what a very simple matrix is, but we will agree that the more zeros a matrix has, the simpler it is. We already know that we can find bases in which the operator $T$ is represented by upper triangular matrices. These have lots of zero entries, but it turns out that we can do much better than this.
The above question is closely related to the concept of invariant of a linear operator. An invariant is, roughly speaking, a quantity naturally associated to the operator that does not change when we change bases.

Definition 2.1. (a) A subspace $V \subset U$ is called an invariant subspace of the linear operator $T \in \mathcal{L}(U)$ if
\[ T v \in V, \quad \forall v \in V. \]
(b) A nonzero vector $u_0 \in U$ is called an eigenvector of the linear operator $T$ if and only if the linear subspace spanned by $u_0$ is an invariant subspace of $T$. □

Example 2.2. (a) Suppose that $T : U \to U$ is a linear operator. Its null space or kernel
\[ \ker T := \{ u \in U ;\ T u = 0 \} \]
is an invariant subspace of $T$. Its dimension, $\dim \ker T$, is an invariant of $T$ because in its definition we have not mentioned any particular basis. We have already encountered this dimension under a different guise.
If we choose a basis $e = (e_1, \ldots, e_n)$ of $U$ and use it to represent $T$ as an $n \times n$ matrix $A = (a_{ij})_{1 \le i,j \le n}$, then $\dim \ker T$ is equal to the nullity of $A$, i.e., the dimension of the vector space of solutions of the linear system
\[ A x = 0, \quad x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \in \mathbb{F}^n. \]
The range
\[ R(T) = \{ T u ;\ u \in U \} \]
is also an invariant subspace of $T$. Its dimension $\dim R(T)$ can be identified with the rank of the matrix $A$ above. The rank-nullity theorem implies that
\[ \dim \ker T + \dim R(T) = \dim U. \tag{2.2} \]
(b) Suppose that $u_0 \in U$ is an eigenvector of $T$. Then $T u_0 \in \mathrm{span}(u_0)$, so that there exists $\lambda \in \mathbb{F}$ such that
\[ T u_0 = \lambda u_0. \quad \Box \]
2.2. The determinant and the characteristic polynomial of an operator. Assume again that $U$ is an $n$-dimensional $\mathbb{F}$-vector space. A more subtle invariant of an operator $T \in \mathcal{L}(U)$ is its determinant. This is a scalar $\det T \in \mathbb{F}$. Its definition requires a choice of a basis of $U$, but the end result is independent of any choice of basis. Here are the details.
Fix a basis
\[ e = \{e_1, \ldots, e_n\} \]
of $U$. We use it to represent $T$ as an $n \times n$ matrix $A = (a_{ij})_{1 \le i,j \le n}$. More precisely, this means that
\[ T e_j = \sum_{i=1}^n a_{ij} e_i, \quad j = 1, \ldots, n. \]
If we choose another basis of $U$,
\[ f = (f_1, \ldots, f_n), \]
then we can represent $T$ by another $n \times n$ matrix $B = (b_{ij})_{1 \le i,j \le n}$, i.e.,
\[ T f_j = \sum_{i=1}^n b_{ij} f_i, \quad j = 1, \ldots, n. \]
As we have discussed above, the basis $f$ is obtained from $e$ via a change-of-basis matrix $C = (c_{ij})_{1 \le i,j \le n}$, i.e.,
\[ f_j = \sum_{i=1}^n c_{ij} e_i, \quad j = 1, \ldots, n. \]
Moreover, the matrices $A, B, C$ are related by the transition rule (2.1),
\[ B = C^{-1} A C. \]
Thus
\[ \det B = \det(C^{-1} A C) = \det C^{-1} \cdot \det A \cdot \det C = \det A. \]
The upshot is that the matrices $A$ and $B$ have the same determinant. Thus, no matter what basis of $U$ we choose to represent $T$ as an $n \times n$ matrix, the determinant of that matrix is independent of the basis used. This number, denoted by $\det T$, is an invariant of $T$ called the determinant of the operator $T$. Here is a simple application of this concept.
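A quick numerical illustration (not from the notes) of the basis-independence just proved: conjugating $A$ by any invertible $C$ leaves the determinant, and in fact the whole spectrum, unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4))          # invertible with probability 1
B = np.linalg.inv(C) @ A @ C             # the transition rule (2.1)

print(np.linalg.det(A), np.linalg.det(B))          # equal up to rounding
print(np.linalg.eigvals(A), np.linalg.eigvals(B))  # same eigenvalues
```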
LINEAR ALGEBRA 25

Corollary 2.3.
\[ \ker T \neq 0 \iff \det T = 0. \quad \Box \]

More generally, for any $x \in \mathbb{F}$ consider the operator
\[ x \mathbb{1} - T : U \to U, \]
defined by
\[ (x \mathbb{1} - T) u = x u - T u, \quad \forall u \in U. \]
We set
\[ P_T(x) = \det(x \mathbb{1} - T). \]

Proposition 2.4. The quantity $P_T(x)$ is a polynomial of degree $n = \dim U$ in the variable $x$.
Proof. Choose a basis $e = (e_1, \ldots, e_n)$. In this basis $T$ is represented by an $n \times n$ matrix $A = (a_{ij})_{1 \le i,j \le n}$ and the operator $x \mathbb{1} - T$ is represented by the matrix
\[ xI - A = \begin{bmatrix} x - a_{11} & -a_{12} & -a_{13} & \cdots & -a_{1n} \\ -a_{21} & x - a_{22} & -a_{23} & \cdots & -a_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -a_{n1} & -a_{n2} & -a_{n3} & \cdots & x - a_{nn} \end{bmatrix}. \]
As explained in Remark 1.13, the determinant of this matrix is a sum of products of certain choices of $n$ entries of this matrix, namely the entries that form a rook placement. Since there are exactly $n$ entries in this matrix that contain the variable $x$, we see that each product associated to a rook placement of entries is a polynomial in $x$ of degree $\le n$. There exists exactly one rook placement such that each of the entries of this placement contains the variable $x$. This placement is easily described: it consists of the entries situated on the diagonal of this matrix, and the product associated to these entries is
\[ (x - a_{11}) \cdots (x - a_{nn}). \]
Any other rook placement contains at most $n - 1$ entries that involve the variable $x$, so the corresponding product of these entries is a polynomial of degree at most $n - 1$. Hence
\[ \det(xI - A) = (x - a_{11}) \cdots (x - a_{nn}) + \text{polynomial of degree} \le n - 1. \]
Hence $P_T(x) = \det(xI - A)$ is a polynomial of degree $n$ in $x$. □

Definition 2.5. The polynomial $P_T(x)$ is called the characteristic polynomial of the operator $T$. □

Recall that a number $\lambda \in \mathbb{F}$ is called an eigenvalue of the operator $T$ if and only if there exists $u \in U \setminus 0$ such that $T u = \lambda u$, i.e.,
\[ (\lambda \mathbb{1} - T) u = 0. \]
Thus $\lambda$ is an eigenvalue of $T$ if and only if $\ker(\lambda \mathbb{1} - T) \neq 0$. Invoking Corollary 2.3 we obtain the following important result.

Corollary 2.6. A scalar $\lambda \in \mathbb{F}$ is an eigenvalue of $T$ if and only if it is a root of the characteristic polynomial of $T$, i.e., $P_T(\lambda) = 0$. □

The collection of eigenvalues of an operator $T$ is called the spectrum of $T$ and it is denoted by $\mathrm{spec}(T)$. If $\lambda \in \mathrm{spec}(T)$, then the subspace $\ker(\lambda \mathbb{1} - T) \subset U$ is called the eigenspace corresponding to the eigenvalue $\lambda$.
From the above corollary and the fundamental theorem of algebra we obtain the following important consequence.

Corollary 2.7. If $T : U \to U$ is a linear operator on a complex vector space $U$, then $\mathrm{spec}(T) \neq \emptyset$. □

We say that a linear operator $T : U \to U$ is triangulable if there exists a basis $e = (e_1, \ldots, e_n)$ of $U$ such that the matrix $A$ representing $T$ in this basis is upper triangular. We will refer to $A$ as a triangular representation of $T$. Triangular representations, if they exist, are not unique. We already know that any linear operator on a complex vector space is triangulable.

Corollary 2.8. Suppose that $T : U \to U$ is a triangulable operator. Then for any basis $e = (e_1, \ldots, e_n)$ of $U$ such that the matrix $A = (a_{ij})_{1 \le i,j \le n}$ representing $T$ in this basis is upper triangular, we have
\[ P_T(x) = (x - a_{11}) \cdots (x - a_{nn}). \]
Thus, the eigenvalues of $T$ are the elements along the diagonal of any triangular representation of $T$. □

2.3. Generalized eigenspaces. Suppose that $T : U \to U$ is a linear operator on the $n$-dimensional $\mathbb{F}$-vector space $U$. Suppose that $\mathrm{spec}(T) \neq \emptyset$. Choose an eigenvalue $\lambda \in \mathrm{spec}(T)$.

Lemma 2.9. Let $k$ be a positive integer. Then
\[ \ker(\lambda \mathbb{1} - T)^k \subset \ker(\lambda \mathbb{1} - T)^{k+1}. \]
Moreover, if $\ker(\lambda \mathbb{1} - T)^k = \ker(\lambda \mathbb{1} - T)^{k+1}$, then
\[ \ker(\lambda \mathbb{1} - T)^k = \ker(\lambda \mathbb{1} - T)^{k+1} = \ker(\lambda \mathbb{1} - T)^{k+2} = \ker(\lambda \mathbb{1} - T)^{k+3} = \cdots. \]

Proof. Observe that if $(\lambda \mathbb{1} - T)^k u = 0$, then
\[ (\lambda \mathbb{1} - T)^{k+1} u = (\lambda \mathbb{1} - T)(\lambda \mathbb{1} - T)^k u = 0, \]
so that $\ker(\lambda \mathbb{1} - T)^k \subset \ker(\lambda \mathbb{1} - T)^{k+1}$.
Suppose that
\[ \ker(\lambda \mathbb{1} - T)^k = \ker(\lambda \mathbb{1} - T)^{k+1}. \]
To prove that $\ker(\lambda \mathbb{1} - T)^{k+1} = \ker(\lambda \mathbb{1} - T)^{k+2}$ it suffices to show that
\[ \ker(\lambda \mathbb{1} - T)^{k+2} \subset \ker(\lambda \mathbb{1} - T)^{k+1}. \]
Let $v \in \ker(\lambda \mathbb{1} - T)^{k+2}$. Then
\[ (\lambda \mathbb{1} - T)^{k+1} (\lambda \mathbb{1} - T) v = 0, \]
so that $(\lambda \mathbb{1} - T) v \in \ker(\lambda \mathbb{1} - T)^{k+1} = \ker(\lambda \mathbb{1} - T)^k$, so that
\[ (\lambda \mathbb{1} - T)^k (\lambda \mathbb{1} - T) v = 0, \]
i.e., $v \in \ker(\lambda \mathbb{1} - T)^{k+1}$. We have thus shown that
\[ \ker(\lambda \mathbb{1} - T)^{k+1} = \ker(\lambda \mathbb{1} - T)^{k+2}. \]
The remaining equalities $\ker(\lambda \mathbb{1} - T)^{k+2} = \ker(\lambda \mathbb{1} - T)^{k+3} = \cdots$ are proven in a similar fashion. □

Corollary 2.10. For any $m \ge n = \dim U$ we have
\[ \ker(\lambda \mathbb{1} - T)^m = \ker(\lambda \mathbb{1} - T)^n, \tag{2.3a} \]
\[ R\big( (\lambda \mathbb{1} - T)^m \big) = R\big( (\lambda \mathbb{1} - T)^n \big). \tag{2.3b} \]

Proof. Consider the sequence of positive integers
\[ d_1(\lambda) = \dim_{\mathbb{F}} \ker(\lambda \mathbb{1} - T), \ \ldots, \ d_k(\lambda) = \dim_{\mathbb{F}} \ker(\lambda \mathbb{1} - T)^k, \ \ldots. \]
Lemma 2.9 shows that
\[ d_1(\lambda) \le d_2(\lambda) \le \cdots \le n = \dim U. \]
Thus there must exist $k$ such that $d_k(\lambda) = d_{k+1}(\lambda)$. We set
\[ k_0 = \min\{ k ;\ d_k(\lambda) = d_{k+1}(\lambda) \}. \]
Thus
\[ d_1(\lambda) < \cdots < d_{k_0}(\lambda) \le n, \]
so that $k_0 \le n$. On the other hand, since $d_{k_0}(\lambda) = d_{k_0+1}(\lambda)$ we deduce that
\[ \ker(\lambda \mathbb{1} - T)^{k_0} = \ker(\lambda \mathbb{1} - T)^m, \quad \forall m \ge k_0. \]
Since $n \ge k_0$ we deduce
\[ \ker(\lambda \mathbb{1} - T)^n = \ker(\lambda \mathbb{1} - T)^{k_0} = \ker(\lambda \mathbb{1} - T)^m, \quad \forall m \ge k_0. \]
This proves (2.3a). To prove (2.3b) observe that if $m > n$, then
\[ R\big( (\lambda \mathbb{1} - T)^m \big) = (\lambda \mathbb{1} - T)^n \big( (\lambda \mathbb{1} - T)^{m-n} U \big) \subset (\lambda \mathbb{1} - T)^n U = R\big( (\lambda \mathbb{1} - T)^n \big). \]
On the other hand, the rank-nullity formula (2.2) implies that
\[ \dim R\big( (\lambda \mathbb{1} - T)^n \big) = \dim U - \dim \ker(\lambda \mathbb{1} - T)^n = \dim U - \dim \ker(\lambda \mathbb{1} - T)^m = \dim R\big( (\lambda \mathbb{1} - T)^m \big). \]
This proves (2.3b). □

Definition 2.11. Let $T : U \to U$ be a linear operator on the $n$-dimensional $\mathbb{F}$-vector space $U$. Then for any $\lambda \in \mathrm{spec}(T)$ the subspace $\ker(\lambda \mathbb{1} - T)^n$ is called the generalized eigenspace of $T$ corresponding to the eigenvalue $\lambda$, and it is denoted by $E_\lambda(T)$. We will denote its dimension by $m_\lambda(T)$, or $m_\lambda$, and we will refer to it as the multiplicity of the eigenvalue $\lambda$. □

Proposition 2.12. Let $T \in \mathcal{L}(U)$, $\dim_{\mathbb{F}} U = n$, and $\lambda \in \mathrm{spec}(T)$. Then the generalized eigenspace $E_\lambda(T)$ is an invariant subspace of $T$.

Proof. We need to show that $T E_\lambda(T) \subset E_\lambda(T)$. Let $u \in E_\lambda(T)$, i.e.,
\[ (\lambda \mathbb{1} - T)^n u = 0. \]
Clearly $\lambda u - T u \in \ker(\lambda \mathbb{1} - T)^{n+1} = E_\lambda(T)$, where the last equality follows from (2.3a). Since $u \in E_\lambda(T)$ we deduce that
\[ T u = \lambda u - (\lambda u - T u) \in E_\lambda(T). \quad \Box \]

Theorem 2.13. Suppose that $T : U \to U$ is a triangulable operator on the $n$-dimensional $\mathbb{F}$-vector space $U$. Then the following hold.
(a) For any $\lambda \in \mathrm{spec}(T)$ the multiplicity $m_\lambda$ is equal to the number of times $\lambda$ appears along the diagonal of a triangular representation of $T$.
(b)
\[ \det T = \prod_{\lambda \in \mathrm{spec}(T)} \lambda^{m_\lambda(T)}, \tag{2.4a} \]
\[ P_T(x) = \prod_{\lambda \in \mathrm{spec}(T)} (x - \lambda)^{m_\lambda(T)}, \tag{2.4b} \]
\[ \sum_{\lambda \in \mathrm{spec}(T)} m_\lambda(T) = \deg P_T = \dim U = n. \tag{2.4c} \]

Proof. To prove (a) we will argue by induction on $n$. For $n = 1$ the result is trivially true. For the inductive step we assume that the result is true for any triangulable operator on an $(n-1)$-dimensional $\mathbb{F}$-vector space $V$, and we will prove that the same is true for triangulable operators acting on an $n$-dimensional space $U$.
Let $T \in \mathcal{L}(U)$ be such an operator. We can then find a basis $e = (e_1, \ldots, e_n)$ of $U$ such that, in this basis, the operator $T$ is represented by the upper triangular matrix
\[ A = \begin{bmatrix} \lambda_1 & * & \cdots & * & * \\ 0 & \lambda_2 & \cdots & * & * \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \lambda_{n-1} & * \\ 0 & 0 & \cdots & 0 & \lambda_n \end{bmatrix}. \]
Suppose that $\lambda \in \mathrm{spec}(T)$. For simplicity we assume $\lambda = 0$. Otherwise, we carry out the discussion for the operator $T - \lambda \mathbb{1}$. Let $\nu$ be the number of times $0$ appears on the diagonal of $A$; we have to show that
\[ \nu = \dim \ker T^n. \]
Denote by $V$ the subspace spanned by the vectors $e_1, \ldots, e_{n-1}$. Observe that $V$ is an invariant subspace of $T$, i.e., $T V \subset V$. If we denote by $S$ the restriction of $T$ to $V$, we can regard $S$ as a linear operator $S : V \to V$.
The operator $S$ is triangulable because in the basis $(e_1, \ldots, e_{n-1})$ of $V$ it is represented by the upper triangular matrix
\[ B = \begin{bmatrix} \lambda_1 & * & \cdots & * \\ 0 & \lambda_2 & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_{n-1} \end{bmatrix}. \]
Denote by $\nu'$ the number of times $0$ appears on the diagonal of $B$. The induction hypothesis implies that
\[ \nu' = \dim \ker S^{n-1} = \dim \ker S^n. \]
Clearly $\nu' \le \nu$. Note that
\[ \ker S^n \subset \ker T^n \]
so that
\[ \nu' = \dim \ker S^n \le \dim \ker T^n. \]
We distinguish two cases.

1. $\lambda_n \neq 0$. In this case we have $\nu = \nu'$, so it suffices to show that
\[ \ker T^n \subset V. \]
Indeed, if that were the case, we would conclude that $\ker T^n \subset \ker S^n$, and thus
\[ \dim \ker T^n = \dim \ker S^n = \nu' = \nu. \]
We argue by contradiction. Suppose that there exists $u \in \ker T^n$ such that $u \notin V$. Thus, we can find $v \in V$ and $c \in \mathbb{F} \setminus 0$ such that
\[ u = v + c e_n. \]
Note that $T^n v \in V$ and
\[ T e_n = \lambda_n e_n + \text{vector in } V. \]
Thus
\[ T^n (c e_n) = c \lambda_n^n e_n + \text{vector in } V, \]
so that
\[ T^n u = c \lambda_n^n e_n + \text{vector in } V \neq 0. \]
This contradiction completes the discussion of Case 1.

2. $\lambda_n = 0$. In this case we have $\nu = \nu' + 1$, so we have to show that
\[ \dim \ker T^n = \nu' + 1. \]
We need an auxiliary result.

Lemma 2.14. There exists $u \in U \setminus V$ such that $T^n u = 0$, so that
\[ \dim(V + \ker T^n) \ge \dim V + 1 = n. \tag{2.5} \]

Proof. Set
\[ v_n := T e_n. \]
Observe that $v_n \in V$. From (2.3b) we deduce that $R(S^{n-1}) = R(S^n)$, so that there exists $v_0 \in V$ such that
\[ S^{n-1} v_n = S^n v_0. \]
Set $u := e_n - v_0$. Note that $u \in U \setminus V$,
\[ T u = v_n - T v_0 = v_n - S v_0, \]
\[ T^n u = T^{n-1}(v_n - S v_0) = S^{n-1} v_n - S^n v_0 = 0. \quad \Box \]

Now observe that
\[ n = \dim U \ge \dim(V + \ker T^n) \overset{(2.5)}{\ge} n, \]
so that
\[ \dim(V + \ker T^n) = n. \]
We conclude that
\[ n = \dim(V + \ker T^n) = \dim(\ker T^n) + \underbrace{\dim V}_{n-1} - \underbrace{\dim(V \cap \ker T^n)}_{= \dim \ker S^n = \nu'} = \dim(\ker T^n) + n - 1 - \nu', \]
which shows that
\[ \dim \ker T^n = \nu' + 1 = \nu. \]
This proves (a). The equalities (2.4a), (2.4b), (2.4c) follow easily from (a). □

In the remainder of this section we will assume that $\mathbb{F}$ is the field of complex numbers, $\mathbb{C}$.
Suppose that $U$ is a complex vector space and $T \in \mathcal{L}(U)$ is a linear operator. We already know that $T$ is triangulable, and we deduce from the above theorem the following important consequence.

Corollary 2.15. Suppose that $T$ is a linear operator on the complex vector space $U$. Then
\[ \det T = \prod_{\lambda \in \mathrm{spec}(T)} \lambda^{m_\lambda(T)}, \quad P_T(x) = \prod_{\lambda \in \mathrm{spec}(T)} (x - \lambda)^{m_\lambda(T)}, \quad \sum_{\lambda \in \mathrm{spec}(T)} m_\lambda(T) = \dim U. \quad \Box \]

For any polynomial with complex coefficients
\[ p(x) = a_0 + a_1 x + \cdots + a_n x^n \in \mathbb{C}[x] \]
and any linear operator $T$ on a complex vector space $U$ we set
\[ p(T) = a_0 \mathbb{1} + a_1 T + \cdots + a_n T^n. \]
Note that if $p(x), q(x) \in \mathbb{C}[x]$, and if we set $r(x) = p(x) q(x)$, then
\[ r(T) = p(T) q(T). \]

Theorem 2.16 (Cayley-Hamilton). Suppose $T$ is a linear operator on the complex vector space $U$. If $P_T(x)$ is the characteristic polynomial of $T$, then
\[ P_T(T) = 0. \]

Proof. Fix a basis $e = (e_1, \ldots, e_n)$ in which $T$ is represented by the upper triangular matrix
\[ A = \begin{bmatrix} \lambda_1 & * & \cdots & * & * \\ 0 & \lambda_2 & \cdots & * & * \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \lambda_{n-1} & * \\ 0 & 0 & \cdots & 0 & \lambda_n \end{bmatrix}. \]
Note that
\[ P_T(x) = \det(x \mathbb{1} - T) = \prod_{j=1}^n (x - \lambda_j) \]
so that
\[ P_T(T) = \prod_{j=1}^n (T - \lambda_j \mathbb{1}). \]
For $j = 1, \ldots, n$ we define
\[ U_j := \mathrm{span}\{e_1, \ldots, e_j\}, \]
and we set $U_0 = \{0\}$. Note that for any $j = 1, \ldots, n$ we have
\[ (T - \lambda_j \mathbb{1}) U_j \subset U_{j-1}. \]
Thus
\[ P_T(T) U = \prod_{j=1}^n (T - \lambda_j \mathbb{1}) U_n = \prod_{j=1}^{n-1} (T - \lambda_j \mathbb{1}) \big( (T - \lambda_n \mathbb{1}) U_n \big) \]
\[ \subset \prod_{j=1}^{n-1} (T - \lambda_j \mathbb{1}) U_{n-1} \subset \prod_{j=1}^{n-2} (T - \lambda_j \mathbb{1}) U_{n-2} \subset \cdots \subset (T - \lambda_1 \mathbb{1}) U_1 \subset \{0\}. \]
In other words,
\[ P_T(T) u = 0, \quad \forall u \in U. \quad \Box \]

Example 2.17. Consider the $2 \times 2$ matrix
\[ A = \begin{bmatrix} 3 & -2 \\ 2 & -1 \end{bmatrix}. \]
Its characteristic polynomial is
\[ P_A(x) = \det(xI - A) = \det \begin{bmatrix} x - 3 & 2 \\ -2 & x + 1 \end{bmatrix} = (x - 3)(x + 1) + 4 = x^2 - 2x - 3 + 4 = x^2 - 2x + 1. \]
The Cayley-Hamilton theorem shows that
\[ A^2 - 2A + I = 0. \]
Let us verify this directly. We have
\[ A^2 = \begin{bmatrix} 5 & -4 \\ 4 & -3 \end{bmatrix} \]
and
\[ A^2 - 2A + I = \begin{bmatrix} 5 & -4 \\ 4 & -3 \end{bmatrix} - \begin{bmatrix} 6 & -4 \\ 4 & -2 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = 0. \]
We can rewrite the last equality as
\[ A^2 = 2A - I \]
so that
\[ A^{n+2} = 2A^{n+1} - A^n. \]
We can rewrite this as
\[ A^{n+2} - A^{n+1} = A^{n+1} - A^n = A^n - A^{n-1} = \cdots = A - I. \]
Hence
\[ A^n = \underbrace{(A^n - A^{n-1}) + (A^{n-1} - A^{n-2}) + \cdots + (A - I)}_{= n(A - I)} + I = nA - (n-1)I. \quad \Box \]
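A numerical check of this example (illustrative only):

```python
import numpy as np

A = np.array([[3.0, -2.0], [2.0, -1.0]])
I = np.eye(2)

print(A @ A - 2 * A + I)                 # Cayley-Hamilton: the zero matrix
n = 7
print(np.linalg.matrix_power(A, n) - (n * A - (n - 1) * I))  # zero again
```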

2.4. The Jordan normal form of a complex operator. Let $U$ be a complex $n$-dimensional vector space and $T : U \to U$ a linear operator. For each eigenvalue $\lambda \in \mathrm{spec}(T)$ we denote by $E_\lambda(T)$ the corresponding generalized eigenspace, i.e.,
\[ u \in E_\lambda(T) \iff \exists k > 0 : (T - \lambda \mathbb{1})^k u = 0. \]
From Proposition 2.12 we know that $E_\lambda(T)$ is an invariant subspace of $T$. Suppose that the spectrum of $T$ consists of $\ell$ distinct eigenvalues,
\[ \mathrm{spec}(T) = \{\lambda_1, \ldots, \lambda_\ell\}. \]

Proposition 2.18.
\[ U = E_{\lambda_1}(T) \oplus \cdots \oplus E_{\lambda_\ell}(T). \]

Proof. It suffices to show that
\[ U = E_{\lambda_1}(T) + \cdots + E_{\lambda_\ell}(T), \tag{2.6a} \]
\[ \dim U = \dim E_{\lambda_1}(T) + \cdots + \dim E_{\lambda_\ell}(T). \tag{2.6b} \]
The equality (2.6b) follows from (2.4c) since
\[ \dim U = m_{\lambda_1}(T) + \cdots + m_{\lambda_\ell}(T) = \dim E_{\lambda_1}(T) + \cdots + \dim E_{\lambda_\ell}(T), \]
so we only need to prove (2.6a). Set
\[ V := E_{\lambda_1}(T) + \cdots + E_{\lambda_\ell}(T) \subset U. \]
We have to show that $V = U$.
Note that since each of the generalized eigenspaces $E_\lambda(T)$ is an invariant subspace of $T$, so is their sum $V$. Denote by $S$ the restriction of $T$ to $V$, which we regard as an operator $S : V \to V$.
If $\lambda \in \mathrm{spec}(T)$ and $v \in E_\lambda(T) \subset V$, then
\[ (S - \lambda \mathbb{1})^k v = (T - \lambda \mathbb{1})^k v = 0 \]
for some $k \ge 0$. Thus $\lambda$ is also an eigenvalue of $S$ and $v$ is also a generalized eigenvector of $S$. This proves that
\[ \mathrm{spec}(T) \subset \mathrm{spec}(S), \]
and
\[ E_\lambda(T) \subset E_\lambda(S), \quad \forall \lambda \in \mathrm{spec}(T). \]
In particular, this implies that
\[ \dim U = \sum_{\lambda \in \mathrm{spec}(T)} \dim E_\lambda(T) \le \sum_{\lambda \in \mathrm{spec}(S)} \dim E_\lambda(S) = \dim V \le \dim U. \]
This shows that $\dim V = \dim U$ and thus $V = U$. □

For any $\lambda \in \mathrm{spec}(T)$ we denote by $S_\lambda$ the restriction of $T$ to the generalized eigenspace $E_\lambda(T)$. Since this is an invariant subspace of $T$, we can regard $S_\lambda$ as a linear operator
\[ S_\lambda : E_\lambda(T) \to E_\lambda(T). \]
Arguing as in the proof of the above proposition, we deduce that $E_\lambda(T)$ is also a generalized eigenspace of $S_\lambda$. Thus, the spectrum of $S_\lambda$ consists of a single eigenvalue $\lambda$ and
\[ E_\lambda(T) = E_\lambda(S_\lambda) = \ker(\lambda \mathbb{1} - S_\lambda)^{\dim E_\lambda(T)} = \ker(\lambda \mathbb{1} - S_\lambda)^{m_\lambda(T)}. \]
Thus, for any $u \in E_\lambda(T)$ we have
\[ (\lambda \mathbb{1} - S_\lambda)^{m_\lambda(T)} u = 0, \]
i.e.,
\[ (S_\lambda - \lambda \mathbb{1})^{m_\lambda(T)} = (-1)^{m_\lambda(T)} (\lambda \mathbb{1} - S_\lambda)^{m_\lambda(T)} = 0. \]

Definition 2.19. A linear operator $N : U \to U$ is called nilpotent if $N^k = 0$ for some $k > 0$. □

If we set $N_\lambda = S_\lambda - \lambda \mathbb{1}$, we deduce that the operator $N_\lambda$ is nilpotent.

Definition 2.20. Let $N : U \to U$ be a nilpotent operator on a finite dimensional complex vector space $U$. A tower of $N$ is an ordered collection $\mathcal{T}$ of vectors
\[ u_1, u_2, \ldots, u_k \in U \]
satisfying the equalities
\[ N u_1 = 0, \ N u_2 = u_1, \ \ldots, \ N u_k = u_{k-1}. \]
The vector $u_1$ is called the bottom of the tower, the vector $u_k$ is called the top of the tower, while the integer $k$ is called the height of the tower. □

In Figure 1 we depicted a tower of height 4. Observe that the vectors in a tower are generalized eigenvectors of the corresponding nilpotent operator.

[FIGURE 1. Pancaking a tower of height 4: $u_4 \overset{N}{\mapsto} u_3 \overset{N}{\mapsto} u_2 \overset{N}{\mapsto} u_1$.]

Towers interact in a rather pleasant way.

Proposition 2.21. Suppose that $N : U \to U$ is a nilpotent operator on a complex vector space $U$ and $\mathcal{T}_1, \ldots, \mathcal{T}_r$ are towers of $N$ with bottoms $b_1, \ldots, b_r$.
If the bottom vectors $b_1, \ldots, b_r$ are linearly independent, then the following hold.
(i) The towers $\mathcal{T}_1, \ldots, \mathcal{T}_r$ are mutually disjoint, i.e., $\mathcal{T}_i \cap \mathcal{T}_j = \emptyset$ if $i \neq j$.
(ii) The union
\[ \mathcal{T} = \mathcal{T}_1 \cup \cdots \cup \mathcal{T}_r \]
is a linearly independent family of vectors.

Proof. Denote by $k_i$ the height of the tower $\mathcal{T}_i$ and set
\[ k = k_1 + \cdots + k_r. \]
We will argue by induction on $k$, the sum of the heights of the towers.
For $k = 1$ the result is trivially true. Assume the result is true for all collections of towers with total height $< k$ and linearly independent bottoms, and we will prove that it is true for collections of towers with total height $= k$.
Denote by $V$ the subspace spanned by the union $\mathcal{T}$. It is an invariant subspace of $N$, and we denote by $S$ the restriction of $N$ to $V$. We regard $S$ as a linear operator $S : V \to V$.
Denote by $\mathcal{T}_i'$ the tower obtained by removing the top of the tower $\mathcal{T}_i$ and set (see Figure 2)
\[ \mathcal{T}' = \mathcal{T}_1' \cup \cdots \cup \mathcal{T}_r'. \]
Note that
\[ R(S) = \mathrm{span}(\mathcal{T}'). \tag{2.7} \]
The collection of towers $\mathcal{T}_1', \ldots, \mathcal{T}_r'$ has total height
\[ k' = (k_1 - 1) + \cdots + (k_r - 1) = k - r < k. \]
Moreover, the collection of bottoms is a subcollection of $\{b_1, \ldots, b_r\}$, so it is linearly independent.

[FIGURE 2. A group of 5 towers. The tops are shaded and about to be removed.]

The inductive assumption implies that the towers $\mathcal{T}_1', \ldots, \mathcal{T}_r'$ are mutually disjoint and their union $\mathcal{T}'$ is a linearly independent family of vectors. Hence $\mathcal{T}'$ is a basis of $R(S)$, and thus
\[ \dim R(S) = k' = k - r. \]
On the other hand,
\[ S b_j = N b_j = 0, \quad \forall j = 1, \ldots, r. \]
Since the vectors $b_1, \ldots, b_r$ are linearly independent, we deduce that
\[ \dim \ker S \ge r. \]
From the rank-nullity theorem we deduce that
\[ \dim V = \dim \ker S + \dim R(S) \ge r + k - r = k. \]
On the other hand,
\[ \dim V = \dim \mathrm{span}(\mathcal{T}) \le |\mathcal{T}| \le k. \]
Hence
\[ |\mathcal{T}| = \dim V = k. \]
This proves that the towers $\mathcal{T}_1, \ldots, \mathcal{T}_r$ are mutually disjoint and their union is linearly independent. □

Theorem 2.22 (Jordan normal form of a nilpotent operator). Let $N : U \to U$ be a nilpotent operator on an $n$-dimensional complex vector space $U$. Then $U$ has a basis consisting of a disjoint union of towers of $N$.

Proof. We will argue by induction on the dimension $n$ of $U$. For $n = 1$ the result is trivially true. We assume that the result is true for any nilpotent operator on a space of dimension $< n$ and we prove it is true for any nilpotent operator $N$ on a space $U$ of dimension $n$.
Observe that $V = R(N)$ is an invariant subspace of $N$. Moreover, since $\ker N \neq 0$, we deduce from the rank-nullity formula that
\[ \dim V = \dim U - \dim \ker N < \dim U. \]
Denote by $M$ the restriction of $N$ to $V$. We view $M$ as a linear operator $M : V \to V$. Clearly $M$ is nilpotent. The induction assumption implies that there exists a basis of $V$ consisting of mutually disjoint towers of $M$,
\[ \mathcal{T}_1, \ldots, \mathcal{T}_r. \]
For any $j = 1, \ldots, r$ we denote by $k_j$ the height of $\mathcal{T}_j$, by $b_j$ the bottom of $\mathcal{T}_j$ and by $t_j$ the top of $\mathcal{T}_j$. By construction
\[ \dim R(N) = k_1 + \cdots + k_r. \]
Since $t_j \in V = R(N)$, there exists $u_j \in U$ such that (see Figure 3)
\[ t_j = N u_j. \]

[FIGURE 3. Towers in R(N).]

Next observe that the bottoms $b_1, \ldots, b_r$ belong to $\ker N$ and are linearly independent, because they are a subfamily of the linearly independent family $\mathcal{T}_1 \cup \cdots \cup \mathcal{T}_r$. We can therefore extend the family $\{b_1, \ldots, b_r\}$ to a basis of $\ker N$,
\[ b_1, \ldots, b_r, v_1, \ldots, v_s, \quad r + s = \dim \ker N. \]
We obtain new towers $\hat{\mathcal{T}}_1, \ldots, \hat{\mathcal{T}}_r, \hat{\mathcal{T}}_{r+1}, \ldots, \hat{\mathcal{T}}_{r+s}$ defined by (see Figure 3)
\[ \hat{\mathcal{T}}_1 := \mathcal{T}_1 \cup \{u_1\}, \ \ldots, \ \hat{\mathcal{T}}_r := \mathcal{T}_r \cup \{u_r\}, \ \hat{\mathcal{T}}_{r+1} := \{v_1\}, \ \ldots, \ \hat{\mathcal{T}}_{r+s} := \{v_s\}. \]
The sum of the heights of these towers is
\[ (k_1 + 1) + (k_2 + 1) + \cdots + (k_r + 1) + \underbrace{1 + \cdots + 1}_{s} = \underbrace{(k_1 + \cdots + k_r)}_{= \dim R(N)} + \underbrace{(r + s)}_{= \dim \ker N} = \dim U. \]
By construction, their bottoms are linearly independent and Proposition 2.21 implies that they are mutually disjoint and their union is a linearly independent collection of vectors. The above computation shows that the number of elements in the union of these towers is equal to the dimension of $U$. Thus, this union is a basis of $U$. □

Definition 2.23. A Jordan basis of a nilpotent operator $N : U \to U$ is a basis of $U$ consisting of a disjoint union of towers of $N$. □

Example 2.24. (a) Suppose that the nilpotent operator $N : U \to U$ admits a Jordan basis consisting of a single tower
\[ e_1, \ldots, e_n. \]
Denote by $C_n$ the matrix representing $N$ in this basis. We use this basis to identify $U$ with $\mathbb{C}^n$ and thus
\[ e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \ e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \ \ldots, \ e_n = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}. \]
From the equalities
\[ N e_1 = 0, \ N e_2 = e_1, \ N e_3 = e_2, \ \ldots \]
we deduce that the first column of $C_n$ is trivial, the second column is $e_1$, the 3rd column is $e_2$, etc. Thus $C_n$ is the $n \times n$ matrix
\[ C_n = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & 0 & \cdots & 0 & 0 \end{bmatrix}. \]
The matrix $C_n$ is called a (nilpotent) Jordan cell of size $n$.
(b) Suppose that the nilpotent operator $N : U \to U$ admits a Jordan basis consisting of mutually disjoint towers $\mathcal{T}_1, \ldots, \mathcal{T}_r$ of heights $k_1, \ldots, k_r$. For $j = 1, \ldots, r$ we set
\[ U_j = \mathrm{span}(\mathcal{T}_j). \]
Observe that $U_j$ is an invariant subspace of $N$, $\mathcal{T}_j$ is a basis of $U_j$, and we have a direct sum decomposition
\[ U = U_1 \oplus \cdots \oplus U_r. \]
The restriction of $N$ to $U_j$ is represented in the basis $\mathcal{T}_j$ by the Jordan cell $C_{k_j}$, so that in the basis $\mathcal{T} = \mathcal{T}_1 \cup \cdots \cup \mathcal{T}_r$ the operator $N$ has the block-matrix description
\[ \begin{bmatrix} C_{k_1} & 0 & \cdots & 0 \\ 0 & C_{k_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & C_{k_r} \end{bmatrix}. \quad \Box \]
We want to point out that the sizes of the Jordan cells correspond to the heights of the towers in a Jordan basis. While there may be several Jordan bases, the heights of the towers are the same in all of them; see Remark 2.26. In other words, these heights are invariants of the nilpotent operator. □

Definition 2.25. The Jordan invariant of a nilpotent operator $N$ is the nonincreasing list of the sizes of the Jordan cells that describe the operator in a Jordan basis. □

Remark 2.26 (Algorithmic construction of a Jordan basis). Here is how one constructs a Jordan basis
of a nilpotent operator N : U U on a complex vector space U of dimension n.
(i) Compute N 2 , N 3 , . . . and stop at the moment m when N m = 0. Set
R0 = U , R1 = R(N ), R2 = R(N 2 ), . . . , Rm = R(N m ) = {0}.
Observe that R1 , R2 , . . . , Rm are invariant subspaces of N , satisfying
R0 R1 R2 ,
(ii) Denote by Nk the restriction of N to Rk , viewed as an operator Nk : Rk Rk . Note that
Nm1 = 0, N0 = N and
Rk = R(Nk1 ), k = 1, . . . , m.
Set rk = dim Rk so that dim ker Nk = rk rk+1 .
(iii) Construct a basis Bm1 of Rm1 = ker Nm1 . Bm1 consists of rm1 vectors.
(iv) For each b Bm1 find a vector t(b) U such that
b = N m1 t(b).
For each b Bm1 we thus obtain a tower of height m
Tm1 (b) = b = N m1 t(b), N m2 t(b), . . . , N t(b), t(b) , b Bm1 .


(v) Extend B_{m−1} to a basis
B_{m−2} = B_{m−1} ∪ C_{m−2}
of ker N_{m−2} ⊂ R_{m−2}.
(vi) For each b ∈ C_{m−2} ⊂ R_{m−2} find t = t(b) ∈ U such that N^{m−2}t = b. For each b ∈ C_{m−2} we thus obtain a tower of height m − 1,
T_{m−2}(b) = { b = N^{m−2}t(b), ..., N t(b), t(b) }, b ∈ C_{m−2}.
(vii) Extend B_{m−2} to a basis
B_{m−3} = B_{m−2} ∪ C_{m−3}
of ker N_{m−3} ⊂ R_{m−3}.
(viii) For each b ∈ C_{m−3} ⊂ R_{m−3}, find t(b) ∈ U such that N^{m−3}t(b) = b. For each b ∈ C_{m−3} we thus obtain a tower of height m − 2,
T_{m−3}(b) = { b = N^{m−3}t(b), ..., N t(b), t(b) }, b ∈ C_{m−3}.
(ix) Iterate the previous two steps.
(x) In the end we obtain a basis
B₀ = B_{m−1} ∪ C_{m−2} ∪ · · · ∪ C₀
of ker N₀ = ker N, vectors t(b), b ∈ C_j, and towers
T_j(b) = { b = N^j t(b), ..., N t(b), t(b) }, j = 0, ..., m−1, b ∈ C_j.
These towers form a Jordan basis of N.
(xi) For uniformity set C_{m−1} = B_{m−1}, and for any j = 1, ..., m denote by c_j the cardinality of C_{j−1}. In the above Jordan basis the operator N will be a direct sum of c₁ cells of dimension 1, c₂ cells of dimension 2, etc. We obtain the identities
n = c₁ + 2c₂ + · · · + mc_m,
r₁ = c₂ + 2c₃ + · · · + (m−1)c_m, r₂ = c₃ + 2c₄ + · · · + (m−2)c_m, ...,
r_j = c_{j+1} + 2c_{j+2} + · · · + (m−j)c_m, ∀j = 0, ..., m−1, (2.8)
where r₀ = n.
If we treat the equalities (2.8) as a linear system with unknowns c₁, ..., c_m, we see that the matrix
of this system is upper triangular with only 1-s along the diagonal. It is thus invertible so that the
numbers c1 , . . . , cm are uniquely determined by the numbers rj which are invariants of the operator
N . This shows that the sizes of the Jordan cells are independent of the chosen Jordan basis.
If you are interested only in the sizes of the Jordan cells, all you have to do is find the integers m,
r1 , . . . , rm1 and then solve the system (2.8). Exercise 2.13 explains how to explicitly express the
cj -s in terms of the rj -s. t
u

Remark 2.27. To find the Jordan invariant of a nilpotent operator N on a complex n-dimensional
space proceed as follows.
(i) Find the smallest integer m such that N^m = 0.
(ii) Find the ranks r_j of the matrices N^j, j = 0, ..., m−1, where N⁰ = I.
(iii) Find the nonnegative integers c₁, ..., c_m by solving the linear system (2.8).
(iv) The Jordan invariant of N is the list
$$\underbrace{m,\dots,m}_{c_m},\ \underbrace{m-1,\dots,m-1}_{c_{m-1}},\ \dots,\ \underbrace{1,\dots,1}_{c_1}.$$
t
u
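The recipe of Remark 2.27 can be carried out mechanically. Below is a minimal numerical sketch (Python with NumPy; the helper name `jordan_invariant` is ours, not from the notes). Subtracting consecutive instances of (2.8) twice yields the closed form c_j = r_{j−1} − 2r_j + r_{j+1}, which the sketch uses to solve the system.

```python
import numpy as np

def jordan_invariant(N, tol=1e-10):
    """Sizes of the Jordan cells of a nilpotent matrix N (Remark 2.27).

    Computes r_j = rank N^j until N^m = 0, then recovers c_j (the number
    of cells of size j) from the system (2.8) via
        c_j = r_{j-1} - 2 r_j + r_{j+1}.
    """
    n = N.shape[0]
    ranks = [n]                      # r_0 = n
    P = np.eye(n)
    while ranks[-1] > 0:
        P = P @ N                    # P = N^j
        ranks.append(np.linalg.matrix_rank(P, tol=tol))
    m = len(ranks) - 1               # smallest m with N^m = 0
    ranks.append(0)                  # pad with r_{m+1} = 0
    cells = []
    for j in range(1, m + 1):
        c_j = ranks[j - 1] - 2 * ranks[j] + ranks[j + 1]
        cells.extend([j] * c_j)
    return sorted(cells, reverse=True)   # the nonincreasing list

# A direct sum of one 2-cell and two 1-cells:
N = np.zeros((4, 4)); N[0, 1] = 1.0
print(jordan_invariant(N))           # [2, 1, 1]
```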

If T : U → U is an arbitrary linear operator on a complex n-dimensional space U with spectrum
spec(T) = {λ₁, ..., λ_m},
then we have a direct sum decomposition
U = E_{λ_1}(T) ⊕ · · · ⊕ E_{λ_m}(T),
where E_{λ_j}(T) denotes the generalized eigenspace of T corresponding to the eigenvalue λ_j. The generalized eigenspace E_{λ_j}(T) is an invariant subspace of T and we denote by S_j the restriction of T to E_{λ_j}(T). The operator N_j = S_j − λ_j 1 is nilpotent.
A Jordan basis of U is a basis obtained as a union of Jordan bases of the nilpotent operators N₁, ..., N_m. The matrix representing T in a Jordan basis is a direct sum of elementary Jordan λ-cells
$$C_n(\lambda) = C_n + \lambda I = \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 & 0\\ 0 & \lambda & 1 & \cdots & 0 & 0\\ \vdots & \vdots & \ddots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & \lambda & 1\\ 0 & 0 & 0 & \cdots & 0 & \lambda \end{pmatrix}.$$
Definition 2.28. The Jordan invariant of a complex operator T is a collection of lists, one list for every eigenvalue of T. The list L_λ corresponding to the eigenvalue λ is the Jordan invariant of the nilpotent operator N_λ, the restriction of T − λ1 to E_λ(T), arranged in nonincreasing order. t
u
Example 2.29. Consider the 4 × 4 matrix
$$A = \begin{pmatrix} 1 & 0 & 0 & 0\\ 1 & -1 & 1 & 0\\ 2 & -4 & 3 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix},$$
viewed as a linear operator ℂ⁴ → ℂ⁴.
Expanding along the first row and then along the last column we deduce that
$$P_A(x) = \det(xI - A) = (x-1)^2 \det\begin{pmatrix} x+1 & -1\\ 4 & x-3 \end{pmatrix} = (x-1)^4.$$
Thus A has a single eigenvalue λ = 1 which has multiplicity 4. Set
$$N := A - I = \begin{pmatrix} 0&0&0&0\\ 1&-2&1&0\\ 2&-4&2&0\\ 0&0&0&0 \end{pmatrix}.$$
The matrix N is nilpotent. In fact we have
N² = 0.
Upon inspecting N we see that each of its columns is a multiple of the first column. This means that the range of N is spanned by the vector
$$u_1 := N e_1 = \begin{pmatrix}0\\1\\2\\0\end{pmatrix},$$
where e₁, ..., e₄ denotes the canonical basis of ℂ⁴.
The vector u1 is a tower in R(N ) which we can extend to a taller tower of N
T1 = (u1 , u2 ), u2 = e1 .
Next, we need to extend the basis {u₁} of R(N) to a basis of ker N. The rank-nullity theorem tells us
that
dim R(N ) + dim ker N = 4,
so that dim ker N = 3. Thus, we need to find two more vectors v 1 , v 2 so that the collection
{u1 , v 1 , v 2 } is a basis of ker N .
To find ker N we need to solve the linear system
N x = 0, x = (x₁, x₂, x₃, x₄)ᵀ,
which we do using Gauss elimination, i.e., row operations on N. Observe that
$$N = \begin{pmatrix} 0&0&0&0\\ 1&-2&1&0\\ 2&-4&2&0\\ 0&0&0&0 \end{pmatrix} \longrightarrow \begin{pmatrix} 1&-2&1&0\\ 2&-4&2&0\\ 0&0&0&0\\ 0&0&0&0 \end{pmatrix} \longrightarrow \begin{pmatrix} 1&-2&1&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{pmatrix}.$$
Hence
ker N = { x ∈ ℂ⁴ ; x₁ − 2x₂ + x₃ = 0 },
and thus a basis of ker N is
$$f_1 = \begin{pmatrix}2\\1\\0\\0\end{pmatrix},\quad f_2 = \begin{pmatrix}-1\\0\\1\\0\end{pmatrix},\quad f_3 := \begin{pmatrix}0\\0\\0\\1\end{pmatrix}.$$
Observe that
u₁ = f₁ + 2f₂,
and thus the collection {u₁, f₂, f₃} is also a basis of ker N.
We now have a Jordan basis of N consisting of the towers
T1 = {u1 , u2 }, T2 = {f 2 }, T3 = {f 3 }.
In this basis the operator is described as a direct sum of three Jordan cells: a cell of dimension 2, and
two cells of dimension 1. Thus the Jordan invariant of A consists of a single list L₁ corresponding to
the single eigenvalue λ = 1. More precisely,
L₁ = 2, 1, 1. t
u
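As a sanity check, the rank computation of Remark 2.27 can be run on this concrete matrix. A minimal NumPy sketch (the sign reconstruction of A above is assumed):

```python
import numpy as np

# The matrix of Example 2.29 and its nilpotent part N = A - I.
A = np.array([[1.,  0., 0., 0.],
              [1., -1., 1., 0.],
              [2., -4., 3., 0.],
              [0.,  0., 0., 1.]])
N = A - np.eye(4)

print(np.allclose(N @ N, 0))     # True: N^2 = 0, so m = 2
r1 = np.linalg.matrix_rank(N)    # r_1 = 1 (and r_2 = 0)
c2 = r1                          # cells of size 2: r_1 - 2 r_2 + r_3
c1 = 4 - 2 * r1                  # cells of size 1: r_0 - 2 r_1 + r_2
print(c2, c1)                    # 1 2  ->  Jordan invariant (2, 1, 1)
```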
2.5. Exercises.
Exercise 2.1. Denote by U₃ the space of polynomials of degree ≤ 3 with real coefficients in one variable x. We denote by B the canonical basis of U₃,
B = {1, x, x², x³}.
(a) Consider the linear operator D : U₃ → U₃ given by
U₃ ∋ p ↦ Dp = dp/dx ∈ U₃.
Find the matrix that describes D in the canonical basis B.
(b) Consider the operator ∆ : U₃ → U₃ given by
(∆p)(x) = p(x + 1) − p(x).
Find the matrix that describes ∆ in the canonical basis B.
(c) Show that for any p ∈ U₃ the function
$$x \mapsto (Lp)(x) = \int_0^\infty e^{-t}\,\frac{dp(x+t)}{dx}\,dt$$
is also a polynomial of degree ≤ 3, and then find the matrix that describes L in the canonical basis B. t
u

Exercise 2.2. (a) For any n × n matrix A = (a_{ij})_{1≤i,j≤n} we define the trace of A as the sum of the diagonal entries of A,
$$\operatorname{tr} A := a_{11} + \cdots + a_{nn} = \sum_{i=1}^n a_{ii}.$$
Show that if A, B are two n n matrices then
tr AB = tr BA.
(b) Let U be an n-dimensional F-vector space, and e, f be two bases of U . Suppose that T : U U
is a linear operator represented in the basis e by the matrix A and the basis f by the matrix B. Prove
that
tr A = tr B.
(The common value of these traces is called the trace of the operator T and it is denoted by tr T .)
Hint: Use part (a) and (2.1).
(c) Consider the operator A : F² → F² described by the matrix
$$A = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix}.$$
Show that
P_A(x) = x² − (tr A)x + det A.
(d) Let T be a linear operator on the n-dimensional F-vector space. Prove that the characteristic polynomial of T has the form
P_T(x) = det(x·1 − T) = xⁿ − (tr T)x^{n−1} + · · · + (−1)ⁿ det T. t
u
Exercise 2.3. Suppose T : U U is a linear operator on the F-vector space U , and V 1 , V 2 U
are invariant subspaces of T . Show that V 1 V 2 and V 1 + V 2 are also invariant subspaces of T . t
u
Exercise 2.4. Consider the Jacobi matrix
$$J_n = \begin{pmatrix} 2 & -1 & 0 & 0 & \cdots & 0 & 0\\ -1 & 2 & -1 & 0 & \cdots & 0 & 0\\ 0 & -1 & 2 & -1 & \cdots & 0 & 0\\ \vdots & & & \ddots & & & \vdots\\ 0 & 0 & 0 & 0 & \cdots & -1 & 2 \end{pmatrix}.$$
(a) Let Pn (x) denote the characteristic polynomial of Jn ,
Pn (x) = det(xI Jn ).
Show that
P₁(x) = x − 2, P₂(x) = x² − 4x + 3 = (x − 1)(x − 3),
P_n(x) = (x − 2)P_{n−1}(x) − P_{n−2}(x), ∀n ≥ 3.
(b) Show that all the eigenvalues of J₄ are real and distinct, and then conclude that the matrices
1, J₄, J₄², J₄³ are linearly independent. t
u

Exercise 2.5. Consider the n × n matrix A = (a_{ij})_{1≤i,j≤n}, where a_{ij} = 1 for all i, j, which we regard as a linear operator ℂⁿ → ℂⁿ. Consider the vector
$$c = \begin{pmatrix}1\\1\\\vdots\\1\end{pmatrix} \in \mathbb{C}^n.$$
(a) Compute Ac.
(b) Compute dim R(A), dim ker A and then determine spec(A).
(c) Find the characteristic polynomial of A. t
u

Exercise 2.6. Find the eigenvalues and the eigenvectors of the circulant matrix described in Exercise
1.16. t
u

Exercise 2.7. Prove the equality (2.7) that appears in the proof of Proposition 2.21. t
u

Exercise 2.8. Let T : U U be a linear operator on the finite dimensional complex vector space
U. Suppose that m is a positive integer and u ∈ U is a vector such that
T^{m−1}u ≠ 0 but T^m u = 0.
Show that the vectors
u, Tu, ..., T^{m−1}u
are linearly independent. t
u

Exercise 2.9. Let T : U U be a linear operator on the finite dimensional complex vector space
U . Show that if
dim ker T^{dim U − 1} ≠ dim ker T^{dim U},
then T is a nilpotent operator and
dim ker T^j = j, ∀j = 1, ..., dim U.
Can you construct an example of operator T satisfying the above properties? t
u
Exercise 2.10. Let S, T be two linear operators on the finite dimensional complex vector space U .
Show that ST is nilpotent if and only if T S is nilpotent. t
u

Exercise 2.11. Find a Jordan basis of the linear operator ℂ⁴ → ℂ⁴ described by the 4 × 4 matrix
$$A = \begin{pmatrix} 1 & 0 & 1 & 1\\ 0 & 1 & 1 & 1\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}. $$ t
u
Exercise 2.12. Find a Jordan basis of the linear operator ℂ⁷ → ℂ⁷ given by the matrix
$$A = \begin{pmatrix} 3&2&1&1&0&3&1\\ 1&2&0&0&1&1&1\\ 1&1&3&0&6&5&1\\ 0&2&1&4&1&3&1\\ 0&0&0&0&3&0&0\\ 0&0&0&0&2&5&0\\ 0&0&0&0&2&0&5 \end{pmatrix}. $$ t
u
Exercise 2.13. (a) Find the inverse of the m × m matrix
$$A = \begin{pmatrix} 1 & 2 & 3 & \cdots & m-1 & m\\ 0 & 1 & 2 & \cdots & m-2 & m-1\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 1 & 2\\ 0 & 0 & 0 & \cdots & 0 & 1 \end{pmatrix}.$$
(b) Find the Jordan normal form of the above matrix. t
u
3. EUCLIDEAN SPACES
In the sequel F will denote either the field ℝ of real numbers, or the field ℂ of complex numbers. Any complex number has the form z = a + bi, i = √−1, so that i² = −1. The real number a is called the real part of z and it is denoted by Re z. The real number b is called the imaginary part of z and it is denoted by Im z.
The conjugate of a complex number z = a + bi is the complex number
z̄ = a − bi.
In particular, any real number is equal to its conjugate. Note that
z + z̄ = 2 Re z, z − z̄ = 2i Im z.
The norm or absolute value of a complex number z = a + bi is the real number
$$|z| = \sqrt{a^2 + b^2}.$$
Observe that
|z|² = a² + b² = (a + bi)(a − bi) = z z̄.
In particular, if z ≠ 0 we have
$$\frac{1}{z} = \frac{1}{|z|^2}\,\bar z.$$

3.1. Inner products. Let U be an F-vector space.


Definition 3.1. An inner product on U is a map
⟨−,−⟩ : U × U → F, U × U ∋ (u₁, u₂) ↦ ⟨u₁, u₂⟩ ∈ F,
satisfying the following conditions.
(i) Linearity in the first variable, i.e., ∀u, v, u₂ ∈ U, ∀x, y ∈ F we have
⟨xu + yv, u₂⟩ = x⟨u, u₂⟩ + y⟨v, u₂⟩.
(ii) Conjugate linearity in the second variable, i.e., ∀u₁, u, v ∈ U, ∀x, y ∈ F we have
⟨u₁, xu + yv⟩ = x̄⟨u₁, u⟩ + ȳ⟨u₁, v⟩.
(iii) Hermitian property, i.e., ∀u, v ∈ U we have
$$\langle v, u\rangle = \overline{\langle u, v\rangle}.$$
(iv) Positive definiteness, i.e.,
⟨u, u⟩ ≥ 0, ∀u ∈ U,
and
⟨u, u⟩ = 0 ⟺ u = 0.
A vector space U equipped with an inner product is called an Euclidean space. t
u
u

Example 3.2. (a) The standard real n-dimensional Euclidean space. The vector space Rn is
equipped with a canonical inner product
h, i : Rn Rn R.
More precisely, if
$$u = \begin{pmatrix}u_1\\\vdots\\u_n\end{pmatrix},\quad v = \begin{pmatrix}v_1\\\vdots\\v_n\end{pmatrix} \in \mathbb{R}^n,$$
then
$$\langle u, v\rangle = u_1v_1 + \cdots + u_nv_n = \sum_{k=1}^n u_kv_k = u^\top v.$$
You can verify that this is indeed an inner product, i.e., it satisfies the conditions (i)-(iv) in Definition
3.1.
(b) The standard complex n-dimensional Euclidean space. The vector space Cn is equipped
with a canonical inner product
h, i : Cn Cn C.
More precisely, if
$$u = \begin{pmatrix}u_1\\\vdots\\u_n\end{pmatrix},\quad v = \begin{pmatrix}v_1\\\vdots\\v_n\end{pmatrix} \in \mathbb{C}^n,$$
then
$$\langle u, v\rangle = u_1\bar v_1 + \cdots + u_n\bar v_n = \sum_{k=1}^n u_k\bar v_k.$$
You can verify that this is indeed an inner product.
(c) Denote by P_n the vector space of polynomials with real coefficients and degree ≤ n. We can define an inner product on P_n,
⟨−,−⟩ : P_n × P_n → ℝ,
by setting
$$\langle p, q\rangle = \int_0^1 p(x)q(x)\,dx,\quad \forall p, q \in P_n.$$
You can verify that this is indeed an inner product
(d) Any finite dimensional F-vector space U admits an inner product. Indeed, if we fix a basis (e₁, ..., eₙ) of U, then we can define
⟨−,−⟩ : U × U → F
by setting
$$\big\langle u_1e_1 + \cdots + u_ne_n,\ v_1e_1 + \cdots + v_ne_n\big\rangle := u_1\bar v_1 + \cdots + u_n\bar v_n. $$ t
u
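The inner product of Example 3.2(c) is easy to experiment with numerically. A minimal sketch (the helper `ip` is ours) that verifies symmetry and positivity on two sample polynomials:

```python
import numpy as np
from numpy.polynomial import Polynomial as Poly

def ip(p, q):
    """<p, q> = integral of p(x) q(x) over [0, 1] (Example 3.2(c))."""
    r = (p * q).integ()          # an antiderivative of the product
    return r(1.0) - r(0.0)

p = Poly([0, 1])                 # p(x) = x
q = Poly([1, 0, 1])              # q(x) = 1 + x^2

print(ip(p, q))                  # integral of x + x^3 = 1/2 + 1/4 = 0.75
print(np.isclose(ip(p, q), ip(q, p)))   # symmetry: True
print(ip(p, p) >= 0)             # positivity: True
```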

3.2. Basic properties of Euclidean spaces. Suppose that (U, ⟨−,−⟩) is an Euclidean vector space. We define the norm or length of a vector u ∈ U to be the nonnegative number
$$\|u\| := \sqrt{\langle u, u\rangle}.$$
Example 3.3. In the standard Euclidean space ℝⁿ of Example 3.2(a) we have
$$\|u\| = \sqrt{\sum_{k=1}^n u_k^2}. $$ t
u
Theorem 3.4 (Cauchy-Schwarz inequality). For any u, v ∈ U we have
|⟨u, v⟩| ≤ ∥u∥ · ∥v∥.
Moreover, the equality is achieved if and only if the vectors u, v are linearly dependent, i.e., collinear.
Proof. If ⟨u, v⟩ = 0, then the inequality is trivially satisfied. Hence we can assume that ⟨u, v⟩ ≠ 0. In particular, u, v ≠ 0.
From the positive definiteness of the inner product we deduce that for any x ∈ F we have
$$0 \le \|u - xv\|^2 = \langle u - xv, u - xv\rangle = \langle u, u - xv\rangle - x\langle v, u - xv\rangle$$
$$= \|u\|^2 - \bar x\langle u, v\rangle - x\langle v, u\rangle + |x|^2\|v\|^2.$$
If we let
$$x_0 = \frac{1}{\|v\|^2}\langle u, v\rangle, \tag{3.1}$$
then
$$\bar x_0\langle u, v\rangle + x_0\langle v, u\rangle = \frac{1}{\|v\|^2}\overline{\langle u, v\rangle}\langle u, v\rangle + \frac{1}{\|v\|^2}\langle u, v\rangle\overline{\langle u, v\rangle} = 2\,\frac{|\langle u, v\rangle|^2}{\|v\|^2},\qquad |x_0|^2\|v\|^2 = \frac{|\langle u, v\rangle|^2}{\|v\|^2},$$
and thus
$$0 \le \|u - x_0v\|^2 = \|u\|^2 - 2\,\frac{|\langle u, v\rangle|^2}{\|v\|^2} + \frac{|\langle u, v\rangle|^2}{\|v\|^2} = \|u\|^2 - \frac{|\langle u, v\rangle|^2}{\|v\|^2}. \tag{3.2}$$
Thus
$$\frac{|\langle u, v\rangle|^2}{\|v\|^2} \le \|u\|^2,$$
so that
|⟨u, v⟩|² ≤ ∥u∥² ∥v∥².
Note that if 0 = |⟨u, v⟩| = ∥u∥ · ∥v∥, then at least one of the vectors u, v must be zero.
If 0 ≠ |⟨u, v⟩| = ∥u∥ · ∥v∥, then by choosing x₀ as in (3.1) we deduce as in (3.2) that
∥u − x₀v∥ = 0.
Hence u = x₀v. t
u

Remark 3.5. The Cauchy-Schwarz theorem is a rather nontrivial result, which in skilled hands can
produce remarkable consequences. Observe that if U is the standard real Euclidean space of Example
3.2(a), then the Cauchy-Schwarz inequality implies that for any real numbers u1 , v1 , . . . , un , vn we
have
n v n
v
u n
X u u X uX

uk vk
t 2
uk t vk2


k=1 k=1 k=1
If we square both sides of the above inequality we deduce
n
!2 n
! n
!
X X X
uk vk u2k vk2 . (3.3)
k=1 k=1 k=1
t
u
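A quick numerical sanity check of (3.3) on random data; a minimal NumPy sketch:

```python
import numpy as np

# Check the scalar Cauchy-Schwarz inequality (3.3) on random vectors.
rng = np.random.default_rng(0)
u = rng.standard_normal(10)
v = rng.standard_normal(10)

lhs = np.sum(u * v) ** 2
rhs = np.sum(u ** 2) * np.sum(v ** 2)
print(lhs <= rhs)                        # True
print(np.isclose(np.sum(u * v), u @ v))  # the sum is just the dot product
```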
Observe that if u, v are two nonzero vectors in an Euclidean space (U, ⟨−,−⟩), then the Cauchy-Schwarz inequality implies that
$$\frac{\operatorname{Re}\langle u, v\rangle}{\|u\|\cdot\|v\|} \in [-1, 1].$$
Thus there exists a unique θ ∈ [0, π] such that
$$\cos\theta = \frac{\operatorname{Re}\langle u, v\rangle}{\|u\|\cdot\|v\|}.$$
This angle is called the angle between the two nonzero vectors u, v. We denote it by ∠(u, v). In particular, we have, by definition,
$$\cos\angle(u, v) = \frac{\operatorname{Re}\langle u, v\rangle}{\|u\|\cdot\|v\|}. \tag{3.4}$$
Note that if the two vectors u, v are perpendicular in the classical sense, i.e., ∠(u, v) = π/2, then Re⟨u, v⟩ = 0. This justifies the following notion.
Definition 3.6. Two vectors u, v in an Euclidean vector space (U, ⟨−,−⟩) are said to be orthogonal if ⟨u, v⟩ = 0. We will indicate the orthogonality of two vectors u, v using the notation u ⊥ v. t
u

+ In the remainder of this subsection we fix an Euclidean space (U , h, i).

Theorem 3.7 (Pythagoras). If u ⊥ v, then
∥u + v∥² = ∥u∥² + ∥v∥².
Proof. We have
$$\|u + v\|^2 = \langle u + v, u + v\rangle = \|u\|^2 + \underbrace{\langle u, v\rangle}_{=0} + \underbrace{\langle v, u\rangle}_{=0} + \|v\|^2 = \|u\|^2 + \|v\|^2.$$
t
u

Theorem 3.8 (Triangle inequality).
∥u + v∥ ≤ ∥u∥ + ∥v∥, ∀u, v ∈ U.
Proof. Observe that the inequality can be rewritten equivalently as
∥u + v∥² ≤ (∥u∥ + ∥v∥)².
Observe that
$$\|u + v\|^2 = \langle u + v, u + v\rangle = \langle u, u\rangle + \langle u, v\rangle + \langle v, u\rangle + \langle v, v\rangle$$
$$= \|u\|^2 + \|v\|^2 + 2\operatorname{Re}\langle u, v\rangle \le \|u\|^2 + \|v\|^2 + 2|\langle u, v\rangle|$$
(use the Cauchy-Schwarz inequality)
$$\le \|u\|^2 + \|v\|^2 + 2\|u\|\cdot\|v\| = (\|u\| + \|v\|)^2.$$
t
u
3.3. Orthonormal systems and the Gramm-Schmidt procedure. In the sequel U will denote an n-dimensional Euclidean F-vector space. We will denote the inner product on U by ⟨−,−⟩.
Definition 3.9. A family of nonzero vectors
{u₁, ..., u_k} ⊂ U
is called orthogonal if
uᵢ ⊥ uⱼ, ∀i ≠ j.
An orthogonal family
{u₁, ..., u_k} ⊂ U
is called orthonormal if
∥uᵢ∥ = 1, ∀i = 1, ..., k.
A basis of U is called orthogonal (respectively orthonormal) if it is an orthogonal (respectively
orthonormal) family. t
u

Proposition 3.10. Any orthogonal family in U is linearly independent.


Proof. Suppose that
{u₁, ..., u_k} ⊂ U
is an orthogonal family. If x₁, ..., x_k ∈ F are such that
x₁u₁ + · · · + x_ku_k = 0,
then taking the inner product with uⱼ of both sides in the above equality we deduce
0 = x₁⟨u₁, uⱼ⟩ + · · · + x_{j−1}⟨u_{j−1}, uⱼ⟩ + xⱼ⟨uⱼ, uⱼ⟩ + x_{j+1}⟨u_{j+1}, uⱼ⟩ + · · · + x_k⟨u_k, uⱼ⟩
(⟨uᵢ, uⱼ⟩ = 0, ∀i ≠ j)
= xⱼ∥uⱼ∥².
Since uⱼ ≠ 0 we deduce xⱼ = 0. This happens for any j = 1, ..., k, proving that the family is
linearly independent. t
u

Theorem 3.11 (Gramm-Schmidt). Suppose that
{u₁, ..., u_k} ⊂ U
is a linearly independent family. Then there exists an orthonormal family
{e₁, ..., e_k} ⊂ U
such that
span{u₁, ..., uⱼ} = span{e₁, ..., eⱼ}, ∀j = 1, ..., k.

Proof. We will argue by induction on k. For k = 1, if {u₁} ⊂ U is a linearly independent family, then u₁ ≠ 0 and we set
e₁ := (1/∥u₁∥) u₁.
Clearly {e1 } is an orthonormal family spanning the same subspace as {u1 }.
Suppose that the result is true for any linearly independent family consisting of (k − 1) vectors. We need to prove that the result is true for linearly independent families consisting of k vectors. Let
{u₁, ..., u_k} ⊂ U
be such a family. The induction assumption implies that we can find an orthonormal system
{e₁, ..., e_{k−1}}
such that
span{u₁, ..., uⱼ} = span{e₁, ..., eⱼ}, ∀j = 1, ..., k−1.
Define
v_k := ⟨u_k, e₁⟩e₁ + · · · + ⟨u_k, e_{k−1}⟩e_{k−1}, f_k := u_k − v_k.
Observe that
v_k ∈ span{e₁, ..., e_{k−1}} = span{u₁, ..., u_{k−1}}.
Since {u₁, ..., u_k} is a linearly independent family we deduce that
u_k ∉ span{e₁, ..., e_{k−1}},
so that f_k = u_k − v_k ≠ 0. We can now set
e_k := (1/∥f_k∥) f_k.
By construction ∥e_k∥ = 1. Also note that if 1 ≤ j < k, then
$$\langle f_k, e_j\rangle = \langle u_k - v_k, e_j\rangle = \langle u_k, e_j\rangle - \langle v_k, e_j\rangle = \langle u_k, e_j\rangle - \langle u_k, e_j\rangle\langle e_j, e_j\rangle = 0.$$
This proves that {e₁, ..., e_{k−1}, e_k} is an orthonormal family.
Finally observe that
u_k = v_k + f_k = v_k + ∥f_k∥e_k.
Since
v_k ∈ span{e₁, ..., e_{k−1}}
we deduce
u_k ∈ span{e₁, ..., e_{k−1}, e_k},
and thus
span{u₁, ..., u_k} = span{e₁, ..., e_k}.
t
u

Remark 3.12. The strategy used in the proof of the above theorem is as important as the theorem itself. The procedure we used to produce the orthonormal family {e₁, ..., e_k} goes by the name of the Gramm-Schmidt procedure. To understand how it works we consider a simple case, when U is the space ℝ² equipped with the canonical inner product.
Suppose that
$$u_1 = \begin{pmatrix}3\\4\end{pmatrix},\quad u_2 := \begin{pmatrix}5\\0\end{pmatrix}.$$
Then
∥u₁∥² = 3² + 4² = 9 + 16 = 25,
so that ∥u₁∥ = 5 and we set
$$e_1 := \frac{1}{\|u_1\|}u_1 = \begin{pmatrix}3/5\\4/5\end{pmatrix}.$$
Next,
$$v_2 = \langle u_2, e_1\rangle e_1 = 3e_1 = \begin{pmatrix}9/5\\12/5\end{pmatrix},\quad f_2 = u_2 - v_2 = \begin{pmatrix}5\\0\end{pmatrix} - \begin{pmatrix}9/5\\12/5\end{pmatrix} = \begin{pmatrix}16/5\\-12/5\end{pmatrix}.$$
We see that ∥f₂∥ = 4 and thus
$$e_2 = \begin{pmatrix}4/5\\-3/5\end{pmatrix}. $$ t
u
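The procedure translates directly into code. Here is a minimal NumPy sketch of the Gramm-Schmidt procedure of Theorem 3.11 (the function name `gram_schmidt` is ours); it reproduces the vectors e₁, e₂ of Remark 3.12.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent family (Theorem 3.11).

    vectors: list of 1-d arrays u_1, ..., u_k.  Returns e_1, ..., e_k with
    span{u_1,...,u_j} = span{e_1,...,e_j} for every j.
    """
    basis = []
    for u in vectors:
        # v_k = sum_j <u_k, e_j> e_j; np.vdot(e, u) = <u, e> in the
        # notes' convention (conjugation on the second argument).
        v = sum(np.vdot(e, u) * e for e in basis)
        f = u - v                                # f_k = u_k - v_k
        basis.append(f / np.linalg.norm(f))      # e_k = f_k / ||f_k||
    return basis

e1, e2 = gram_schmidt([np.array([3.0, 4.0]), np.array([5.0, 0.0])])
print(e1)   # [0.6 0.8]        = (3/5, 4/5)
print(e2)   # [ 0.8 -0.6]      = (4/5, -3/5)
```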

The Gramm-Schmidt theorem has many useful consequences. We will discuss a few of them.
Corollary 3.13. Any finite dimensional Euclidean vector space (over F = R, C) admits an orthonor-
mal basis.
Proof. Apply Theorem 3.11 to a basis of the vector space. t
u

The orthonormal bases of an Euclidean space have certain computational advantages. Suppose that
e1 , . . . , en
is an orthonormal basis of the Euclidean space U . Then the coordinates of a vector u U in this
basis are easily computed. More precisely, if
u = x₁e₁ + · · · + xₙeₙ, x₁, ..., xₙ ∈ F, (3.5)
then
xⱼ = ⟨u, eⱼ⟩, ∀j = 1, ..., n. (3.6)
Indeed, the equality (3.5) implies that
⟨u, eⱼ⟩ = ⟨x₁e₁ + · · · + xₙeₙ, eⱼ⟩ = xⱼ⟨eⱼ, eⱼ⟩ = xⱼ,
where at the last step we used the orthonormality condition, which translates to
$$\langle e_i, e_j\rangle = \begin{cases} 1, & i = j\\ 0, & i \ne j.\end{cases}$$
Applying Pythagoras' theorem we deduce
$$\|x_1e_1 + \cdots + x_ne_n\|^2 = |x_1|^2 + \cdots + |x_n|^2 = |\langle u, e_1\rangle|^2 + \cdots + |\langle u, e_n\rangle|^2. \tag{3.7}$$
Example 3.14. Consider the orthonormal basis e₁, e₂ of ℝ² constructed in Remark 3.12. If
$$u = \begin{pmatrix}2\\1\end{pmatrix},$$
then the coordinates x₁, x₂ of u in this basis are given by
x₁ = ⟨u, e₁⟩ = 6/5 + 4/5 = 2,
x₂ = ⟨u, e₂⟩ = 8/5 − 3/5 = 1,
so that
u = 2e₁ + e₂. t
u
Corollary 3.15. Any orthonormal family
{e₁, ..., e_k}
in a finite dimensional Euclidean vector space can be extended to an orthonormal basis of that space.
Proof. According to Proposition 3.10, the family
{e₁, ..., e_k}
is linearly independent. Therefore, we can extend it to a basis of U,
{e₁, ..., e_k, u_{k+1}, ..., uₙ}.
If we apply the Gramm-Schmidt procedure to the above linearly independent family we obtain an orthonormal basis that extends our original orthonormal family.² t
u

3.4. Orthogonal projections. Suppose that (U, ⟨−,−⟩) is a finite dimensional Euclidean F-vector space. If X is a subset of U then we set
X^⊥ := { u ∈ U ; ⟨u, x⟩ = 0, ∀x ∈ X }.
In other words, X^⊥ consists of the vectors orthogonal to all the vectors in X. For this reason we will often write u ⊥ X to indicate that u ∈ X^⊥. The following result is left to the reader.
Proposition 3.16. (a) The subset X^⊥ is a vector subspace of U.
(b)
X ⊂ Y ⟹ Y^⊥ ⊂ X^⊥.
(c)
X^⊥ = (span(X))^⊥. t
u

Theorem 3.17. If V is a subspace of U, then
U = V ⊕ V^⊥.
Proof. We need to check two things:
V ∩ V^⊥ = 0, (3.8a)
U = V + V^⊥. (3.8b)
Proof of (3.8a). If x ∈ V ∩ V^⊥ then
0 = ⟨x, x⟩,
which implies that x = 0.
Proof of (3.8b). Fix an orthonormal basis of V,
B := {e₁, ..., e_k}.
Extend it to an orthonormal basis of U,
e₁, ..., e_k, e_{k+1}, ..., eₙ.
By construction,
e_{k+1}, ..., eₙ ∈ B^⊥ = (span(B))^⊥ = V^⊥,
so that
span{e_{k+1}, ..., eₙ} ⊂ V^⊥.
²Exercise 3.5 asks you to verify this claim.
Clearly any vector u ∈ U can be written as a sum of two vectors
u = v + w, v ∈ span(B) = V, w ∈ span{e_{k+1}, ..., eₙ} ⊂ V^⊥.
t
u

Corollary 3.18. If V is a subspace of U, then
V = (V^⊥)^⊥.
Proof. Theorem 3.17 implies that for any subspace W of U we have
dim W^⊥ = dim U − dim W.
If we let W = V we deduce that
dim V^⊥ = dim U − dim V.
If we let W = V^⊥ we deduce
dim (V^⊥)^⊥ = dim U − dim V^⊥ = dim V.
Hence
dim V = dim (V^⊥)^⊥,
so it suffices to show that
V ⊂ (V^⊥)^⊥,
i.e., we have to show that any vector v in V is orthogonal to any vector w in V^⊥. Since w ∈ V^⊥ we
have w ⊥ v, so that v ⊥ w. t
u

Suppose that V is a subspace of U. Then any u ∈ U admits a unique decomposition
u = v + w, v ∈ V, w ∈ V^⊥.
We set
P_V u := v.
Observe that if
u₀ = v₀ + w₀, u₁ = v₁ + w₁, v₀, v₁ ∈ V, w₀, w₁ ∈ V^⊥,
then
(u₀ + u₁) = (v₀ + v₁) + (w₀ + w₁), v₀ + v₁ ∈ V, w₀ + w₁ ∈ V^⊥,
and we deduce
P_V(u₀ + u₁) = v₀ + v₁ = P_V u₀ + P_V u₁.
Similarly, if λ ∈ F and u ∈ U, then
λu = λv + λw,
and we deduce
P_V(λu) = λv = λP_V u.
We have thus shown that the map
P_V : U → U, u ↦ P_V u
is a linear operator. It is called the orthogonal projection onto the subspace V. Observe that
R(P_V) = V, ker P_V = V^⊥. (3.9)
Note that P_V u is the unique vector v in V with the property that (u − v) ∈ V^⊥.
Proposition 3.19. Suppose V is a subspace of U and e₁, ..., e_k is an orthonormal basis of V. Then
P_V u = ⟨u, e₁⟩e₁ + · · · + ⟨u, e_k⟩e_k, ∀u ∈ U.
Proof. It suffices to show that the vector
w = u − (⟨u, e₁⟩e₁ + · · · + ⟨u, e_k⟩e_k)
is orthogonal to all the vectors e₁, ..., e_k, because then it will be orthogonal to any linear combination of these vectors. We have
⟨w, eⱼ⟩ = ⟨u, eⱼ⟩ − (⟨u, e₁⟩⟨e₁, eⱼ⟩ + · · · + ⟨u, e_k⟩⟨e_k, eⱼ⟩).
Since e₁, ..., e_k is an orthonormal basis of V we deduce that
$$\langle e_i, e_j\rangle = \begin{cases} 1, & i = j\\ 0, & i \ne j.\end{cases}$$
Hence
⟨u, e₁⟩⟨e₁, eⱼ⟩ + · · · + ⟨u, e_k⟩⟨e_k, eⱼ⟩ = ⟨u, eⱼ⟩⟨eⱼ, eⱼ⟩ = ⟨u, eⱼ⟩.
t
u
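Proposition 3.19 is also an algorithm. A minimal NumPy sketch (the helper `proj` is ours) that projects onto a subspace given an orthonormal basis and checks the defining property u − P_V u ∈ V^⊥:

```python
import numpy as np

def proj(u, onb):
    """P_V u, where onb is an orthonormal basis e_1, ..., e_k of V
    (Proposition 3.19): P_V u = <u,e_1> e_1 + ... + <u,e_k> e_k."""
    return sum(np.vdot(e, u) * e for e in onb)

# V = the xy-plane in R^3, with its standard orthonormal basis.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
u  = np.array([3.0, -2.0, 7.0])

p = proj(u, [e1, e2])
print(p)                                          # [ 3. -2.  0.]
print(np.vdot(u - p, e1), np.vdot(u - p, e2))     # 0.0 0.0, i.e. u - P_V u is orthogonal to V
```

By Theorem 3.20 below, the vector `p` is also the point of V closest to `u`.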

Theorem 3.20. Let V be a subspace of U. Fix u₀ ∈ U. Then
∥u₀ − P_V u₀∥ ≤ ∥u₀ − v∥, ∀v ∈ V,
and we have equality if and only if v = P_V u₀. In other words, P_V u₀ is the vector in V closest to u₀.
Proof. Set v₀ := P_V u₀, w₀ := u₀ − P_V u₀ ∈ V^⊥. Then for any v ∈ V we have
u₀ − v = (v₀ − v) + w₀.
Since (v₀ − v) ⊥ w₀ we deduce from Pythagoras' Theorem that
∥u₀ − v∥² = ∥v₀ − v∥² + ∥w₀∥² ≥ ∥w₀∥² = ∥u₀ − P_V u₀∥².
Hence
∥u₀ − P_V u₀∥ ≤ ∥u₀ − v∥,
and we have equality if and only if v = v₀ = P_V u₀. t
u

Proposition 3.21. Let V be a subspace of U. Then
P_V² = P_V,
and
∥P_V u∥ ≤ ∥u∥, ∀u ∈ U.
Proof. By construction
P_V v = v, ∀v ∈ V.
Hence
P_V(P_V u) = P_V u, ∀u ∈ U,
because P_V u ∈ V.
Next, observe that for any u ∈ U we have P_V u ⊥ (u − P_V u), so that
∥u∥² = ∥P_V u∥² + ∥u − P_V u∥² ≥ ∥P_V u∥².
t
u
3.5. Linear functionals and adjoints on Euclidean spaces. Suppose that U is a finite dimensional F-vector space, F = ℝ, ℂ. The dual of U is the F-vector space of linear functionals on U, i.e., linear maps
α : U → F.
The dual of U is denoted by U*. The vector space U* has the same dimension as U and thus they are isomorphic. However, there is no distinguished isomorphism between these two vector spaces! We want to show that if we fix an inner product on U, then we can construct in a concrete way an isomorphism between these spaces. Before we proceed with this construction let us first observe that there is a natural bilinear map
B : U* × U → F, B(α, u) = α(u).
Theorem 3.22 (Riesz representation theorem: the real case). Suppose that U is a finite dimensional real vector space equipped with an inner product
⟨−,−⟩ : U × U → ℝ.
To any u ∈ U we associate the linear functional u† ∈ U* defined by the equality
B(u†, x) = u†(x) := ⟨x, u⟩, ∀x ∈ U.
Then the map
U ∋ u ↦ u† ∈ U*
is a linear isomorphism.
Proof. Let us show that the map u ↦ u† is linear.
For any u, v, x ∈ U we have
(u + v)†(x) = ⟨x, u + v⟩ = ⟨x, u⟩ + ⟨x, v⟩ = u†(x) + v†(x) = (u† + v†)(x),
which shows that
(u + v)† = u† + v†.
For any u, x ∈ U and any t ∈ ℝ we have
(tu)†(x) = ⟨x, tu⟩ = t⟨x, u⟩ = t u†(x),
i.e., (tu)† = t u†. This proves the claimed linearity. To prove that it is an isomorphism, we need to show that it is both injective and surjective.
Injectivity. Suppose that u ∈ U is such that u† = 0. This means that u†(x) = 0, ∀x ∈ U. If we let x = u we deduce
0 = u†(u) = ⟨u, u⟩ = ∥u∥²,
so that u = 0.
Surjectivity. We have to show that for any ξ ∈ U* there exists u ∈ U such that ξ = u†.
Let n = dim U. Fix an orthonormal basis e₁, ..., eₙ of U. The linear functional ξ is uniquely determined by its values on the eᵢ,
ξᵢ = ξ(eᵢ).
Define
u := ξ₁e₁ + · · · + ξₙeₙ.
Then
u†(eᵢ) = ⟨eᵢ, u⟩ = ⟨eᵢ, ξ₁e₁⟩ + · · · + ⟨eᵢ, ξᵢeᵢ⟩ + · · · + ⟨eᵢ, ξₙeₙ⟩ = ξᵢ.
Thus
u†(eᵢ) = ξ(eᵢ), ∀i = 1, ..., n,
so that ξ = u†. t
u

Theorem 3.23 (Riesz representation theorem: the complex case). Suppose that U is a finite dimensional complex vector space equipped with an inner product
⟨−,−⟩ : U × U → ℂ.
To any u ∈ U we associate the linear functional u† ∈ U* defined by the equality
B(u†, x) = u†(x) := ⟨x, u⟩, ∀x ∈ U.
Then the map
U ∋ u ↦ u† ∈ U*
is bijective and conjugate linear, i.e., for any u, v ∈ U and any z ∈ ℂ we have
(u + v)† = u† + v†, (zu)† = z̄ u†.
Proof. Let us first show that the map u ↦ u† is conjugate linear.
For any u, v, x ∈ U we have
(u + v)†(x) = ⟨x, u + v⟩ = ⟨x, u⟩ + ⟨x, v⟩ = u†(x) + v†(x) = (u† + v†)(x),
which shows that
(u + v)† = u† + v†.
For any u, x ∈ U and any z ∈ ℂ we have
(zu)†(x) = ⟨x, zu⟩ = z̄⟨x, u⟩ = z̄ u†(x),
i.e., (zu)† = z̄ u†. This proves the claimed conjugate linearity. We now prove the bijectivity claim.
Injectivity. Suppose that u ∈ U is such that u† = 0. This means that u†(x) = 0, ∀x ∈ U. If we let x = u we deduce
0 = u†(u) = ⟨u, u⟩ = ∥u∥²,
so that u = 0.
Surjectivity. We have to show that for any ξ ∈ U* there exists u ∈ U such that ξ = u†.
Let n = dim U. Fix an orthonormal basis e₁, ..., eₙ of U. The linear functional ξ is uniquely determined by its values on the eᵢ,
ξᵢ = ξ(eᵢ).
Define
u := ξ̄₁e₁ + · · · + ξ̄ₙeₙ.
Then
u†(eᵢ) = ⟨eᵢ, u⟩ = ⟨eᵢ, ξ̄₁e₁⟩ + · · · + ⟨eᵢ, ξ̄ᵢeᵢ⟩ + · · · + ⟨eᵢ, ξ̄ₙeₙ⟩ = ξᵢ⟨eᵢ, eᵢ⟩ = ξᵢ.
Thus
u†(eᵢ) = ξ(eᵢ), ∀i = 1, ..., n,
so that ξ = u†. t
u
Suppose that U and V are two F-vector spaces equipped with inner products ⟨−,−⟩_U and respectively ⟨−,−⟩_V. Next, assume that T : U → V is a linear map.
Theorem 3.24. There exists a unique linear map S : V → U satisfying the equality
⟨Tu, v⟩_V = ⟨u, Sv⟩_U, ∀u ∈ U, v ∈ V. (3.10)
This map is called the adjoint of T with respect to the inner products ⟨−,−⟩_U, ⟨−,−⟩_V and it is denoted by T*. The equality (3.10) can then be rewritten
⟨Tu, v⟩_V = ⟨u, T*v⟩_U, ⟨T*v, u⟩_U = ⟨v, Tu⟩_V, ∀u ∈ U, v ∈ V. (3.11)

Proof. Uniqueness. Suppose there are two linear maps S₁, S₂ : V → U satisfying (3.10). Thus
0 = ⟨u, S₁v⟩_U − ⟨u, S₂v⟩_U = ⟨u, (S₁ − S₂)v⟩_U, ∀u ∈ U, v ∈ V.
For fixed v ∈ V we let u = (S₁ − S₂)v and we deduce from the above equality
0 = ⟨(S₁ − S₂)v, (S₁ − S₂)v⟩_U = ∥(S₁ − S₂)v∥²_U,
so that (S₁ − S₂)v = 0, for any v in V. This shows that S₁ = S₂, thus proving the uniqueness part of the theorem.
Existence. Any v ∈ V defines a linear functional
L_v : U → F, L_v(u) = ⟨Tu, v⟩_V.
Thus there exists a unique vector Sv ∈ U such that
L_v = (Sv)†, i.e., ⟨Tu, v⟩_V = ⟨u, Sv⟩_U, ∀u ∈ U.
One can verify easily that the correspondence V ∋ v ↦ Sv ∈ U described above is a linear map; see Exercise 3.13. t
u

Example 3.25. Let T : U V be as in the statement of Theorem 3.24. Assume m = dimF U ,


n = dimF V . Fix an orthonormal basis
e := {e1 , . . . , em }
of U and an orthonormal basis
f := {f 1 , . . . , f n }
of V. With respect to these bases the operator T is represented by an n × m matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m}\\ a_{21} & a_{22} & \cdots & a_{2m}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{pmatrix},$$
while the adjoint operator T* : V → U is represented by an m × n matrix
$$A^* = \begin{pmatrix} a^*_{11} & a^*_{12} & \cdots & a^*_{1n}\\ a^*_{21} & a^*_{22} & \cdots & a^*_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a^*_{m1} & a^*_{m2} & \cdots & a^*_{mn} \end{pmatrix}.$$
The j-th column of A describes the coordinates of the vector Teⱼ in the basis f,
Teⱼ = a_{1j}f₁ + · · · + a_{nj}fₙ.
We deduce that
⟨Teⱼ, fᵢ⟩_V = a_{ij}.
Hence
$$\langle f_i, Te_j\rangle_V = \overline{\langle Te_j, f_i\rangle_V} = \overline{a_{ij}}.$$
On the other hand, the i-th column of A* describes the coordinates of T*fᵢ in the basis e, so that
T*fᵢ = a*_{1i}e₁ + · · · + a*_{mi}eₘ,
and we deduce that
⟨T*fᵢ, eⱼ⟩ = a*_{ji}.
Using (3.11) we obtain
$$a^*_{ji} = \langle T^*f_i, e_j\rangle = \langle f_i, Te_j\rangle_V = \overline{a_{ij}}.$$
Thus, A* is the conjugate transpose of A. In other words, the entries of A* are the conjugates of the corresponding entries of the transpose of A,
$$A^* = \overline{A^\top}. $$ t
u
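A quick numerical confirmation of Example 3.25 on the standard complex Euclidean spaces; a minimal NumPy sketch:

```python
import numpy as np

# With respect to orthonormal bases the adjoint is the conjugate transpose:
# <Tu, v> = <u, A* v>, where <x, y> = sum_k x_k conj(y_k) on C^n.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
u = rng.standard_normal(2) + 1j * rng.standard_normal(2)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

ip = lambda x, y: np.vdot(y, x)        # <x, y> with conjugation on y
A_star = A.conj().T                    # the conjugate transpose of A

print(np.isclose(ip(A @ u, v), ip(u, A_star @ v)))   # True
```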

Definition 3.26. Let U be a finite dimensional Euclidean F-space with inner product ⟨−,−⟩. A linear operator T : U → U is called selfadjoint or symmetric if T = T*, i.e.,
⟨Tu₁, u₂⟩ = ⟨u₁, Tu₂⟩, ∀u₁, u₂ ∈ U. t
u
Example 3.27. (a) Consider the standard real Euclidean n-dimensional space ℝⁿ. Any real n × n matrix A can be identified with a linear operator T_A : ℝⁿ → ℝⁿ. The operator T_A is selfadjoint if and only if the matrix A is symmetric, i.e.,
a_{ij} = a_{ji}, ∀i, j,
or, equivalently, A = Aᵀ = the transpose of A.
(b) Consider the standard complex Euclidean n-dimensional space ℂⁿ. Any complex n × n matrix A can be identified with a linear operator T_A : ℂⁿ → ℂⁿ. The operator T_A is selfadjoint if and only if the matrix A is Hermitian, i.e.,
$$a_{ij} = \overline{a_{ji}},\ \forall i, j,$$
or, equivalently, A = A* = the conjugate transpose of A.
(c) Suppose that V is a subspace of the finite dimensional Euclidean space U. Then the orthogonal projection P_V : U → U is a selfadjoint operator, i.e.,
⟨P_V u₁, u₂⟩ = ⟨u₁, P_V u₂⟩, ∀u₁, u₂ ∈ U.
Indeed, let u₁, u₂ ∈ U. They decompose uniquely as
u₁ = v₁ + w₁, u₂ = v₂ + w₂, v₁, v₂ ∈ V, w₁, w₂ ∈ V^⊥.
Then P_V u₁ = v₁, so that
⟨P_V u₁, u₂⟩ = ⟨v₁, v₂ + w₂⟩ = ⟨v₁, v₂⟩ + ⟨v₁, w₂⟩ = ⟨v₁, v₂⟩,
since w₂ ⊥ v₁. Similarly, P_V u₂ = v₂ and we deduce
⟨u₁, P_V u₂⟩ = ⟨v₁ + w₁, v₂⟩ = ⟨v₁, v₂⟩ + ⟨w₁, v₂⟩ = ⟨v₁, v₂⟩,
since w₁ ⊥ v₂. t
u
Proposition 3.28. Suppose that U is a finite dimensional Euclidean F-space and T : U → U is a selfadjoint operator. Then
spec(T) ⊂ ℝ.
In other words, the eigenvalues of a selfadjoint operator are real.
Proof. Let λ be an eigenvalue of T and u ≠ 0 an eigenvector of T corresponding to the eigenvalue λ. We have
Tu = λu,
so that
λ∥u∥² = ⟨λu, u⟩ = ⟨Tu, u⟩, i.e., λ = (1/∥u∥²)⟨Tu, u⟩.
On the other hand,
$$\langle Tu, u\rangle = \overline{\langle u, Tu\rangle}.$$
Since T is selfadjoint we deduce
⟨u, Tu⟩ = ⟨Tu, u⟩,
so that
$$\langle Tu, u\rangle = \overline{\langle Tu, u\rangle}.$$
Hence the inner product ⟨Tu, u⟩ is a real number. From the equality
λ = (1/∥u∥²)⟨Tu, u⟩
we deduce that λ is a real number as well. t
u

Corollary 3.29. If A is an n × n complex matrix such that A = A*, then all the roots of the characteristic polynomial P_A(λ) = det(λ1 − A) are real.
Proof. The roots of P_A(λ) are the eigenvalues of the linear operator T_A : ℂⁿ → ℂⁿ defined by A. Since A = A* we deduce that T_A is selfadjoint with respect to the natural inner product on ℂⁿ, so that all its eigenvalues are real. t
u

Theorem 3.30. Suppose that U, V are two finite dimensional Euclidean F-vector spaces and T : U → V is a linear operator. Then the following hold.
(a) (T*)* = T.
(b) ker T* = R(T)^⊥.
(c) ker T = R(T*)^⊥.
(d) R(T*) = (ker T)^⊥.
(e) R(T) = (ker T*)^⊥.
Proof. (a) The operator (T*)* is a linear operator U → V. We need to prove that, for any u ∈ U, we have x := (T*)*u − Tu = 0.
Because (T*)* is the adjoint of T* we deduce from (3.11) that
⟨(T*)*u, v⟩_V = ⟨u, T*v⟩_U.
Because T* is the adjoint of T we deduce from (3.11) that
⟨u, T*v⟩_U = ⟨Tu, v⟩_V.
Hence, for any v ∈ V we have
0 = ⟨(T*)*u, v⟩_V − ⟨Tu, v⟩_V = ⟨x, v⟩_V.
By choosing v = x we deduce x = 0.
(b) We need to prove that
u ∈ ker T* ⟺ u ⊥ R(T).
Let u ∈ ker T*, i.e., T*u = 0. To prove that u ⊥ R(T) we need to show that u ⊥ Tv, ∀v ∈ U. For v ∈ U we have
⟨u, Tv⟩ = ⟨T*u, v⟩ = 0,
by (3.11), so that u ⊥ Tv for any v ∈ U.
Conversely, let us assume that u ⊥ Tv for any v ∈ U. We have to show that x = T*u = 0. Observe that x ∈ U, so that u ⊥ Tx. We deduce
0 = ⟨u, Tx⟩ = ⟨T*u, x⟩ = ⟨x, x⟩ = ∥x∥².
Hence x = 0.
(c) Set S := T*. From (b) we deduce
ker S* = R(S)^⊥.
From (a) we deduce S* = (T*)* = T and (c) is now obvious.
Part (d) follows from (c) and Corollary 3.18, while (e) follows from (b) and Corollary 3.18. t
u

Corollary 3.31. Suppose that U is a finite dimensional Euclidean vector space, and T : U → U is a selfadjoint operator. Then
ker T = R(T)^⊥, R(T) = (ker T)^⊥, U = ker T ⊕ R(T). t
u

Proposition 3.32. (a) Suppose that U, V, W are finite dimensional Euclidean F-spaces. If
T : U → V, S : V → W
are linear operators, then
(ST)* = T*S*.
(b) Suppose that T : U → V is a linear operator between two finite dimensional Euclidean F-spaces. Then T is invertible if and only if the adjoint T* : V → U is invertible. Moreover, if T is invertible, then
(T⁻¹)* = (T*)⁻¹.
(c) Suppose that S, T : U → V are linear operators between two finite dimensional Euclidean F-spaces. Then
(S + T)* = S* + T*, (zS)* = z̄S*, ∀z ∈ F. t
u

The proof is left to you as Exercise 3.15.


Proposition 3.33. Suppose that U is a finite dimensional Euclidean F-space and S, T : U → U are two selfadjoint operators. Then
S = T ⟺ ⟨Su, u⟩ = ⟨Tu, u⟩, ∀u ∈ U.
Proof. The implication ⟹ is obvious, so it suffices to prove that
⟨Su, u⟩ = ⟨Tu, u⟩, ∀u ∈ U ⟹ S = T.
We set A := S − T. Then A is a selfadjoint operator and it suffices to show that
⟨Au, u⟩ = 0, ∀u ∈ U ⟹ A = 0.
We distinguish two cases.
(a) F = ℝ. For any u, v ∈ U we have
$$0 = \langle A(u + v), u + v\rangle = \underbrace{\langle Au, u\rangle}_{=0} + \langle Au, v\rangle + \langle Av, u\rangle + \underbrace{\langle Av, v\rangle}_{=0} = \langle Au, v\rangle + \langle v, Au\rangle = 2\langle Au, v\rangle.$$
Hence
⟨Au, v⟩ = 0, ∀u, v ∈ U.
If in the above equality we let v = Au we deduce
∥Au∥² = 0, ∀u ∈ U,
i.e., A = 0.
(b) F = ℂ. For any u, v ∈ U we have
$$0 = \langle A(u + v), u + v\rangle = \underbrace{\langle Au, u\rangle}_{=0} + \langle Au, v\rangle + \langle Av, u\rangle + \underbrace{\langle Av, v\rangle}_{=0} = \langle Au, v\rangle + \overline{\langle Au, v\rangle} = 2\operatorname{Re}\langle Au, v\rangle.$$
Hence
Re⟨Au, v⟩ = 0, ∀u, v ∈ U. (3.12)
Similarly, for any u, v ∈ U we have
$$0 = \langle A(u + iv), u + iv\rangle = \langle Au, u\rangle - i\langle Au, v\rangle + i\langle Av, u\rangle - i^2\langle Av, v\rangle = -i\big(\langle Au, v\rangle - \overline{\langle Au, v\rangle}\big) = 2\operatorname{Im}\langle Au, v\rangle.$$
Hence
Im⟨Au, v⟩ = 0, ∀u, v ∈ U. (3.13)
Putting together (3.12) and (3.13) we deduce that
⟨Au, v⟩ = 0, ∀u, v ∈ U.
If we now let v = Au in the above equality we deduce, as in the real case, that Au = 0, ∀u ∈ U.
t
u

Definition 3.34. Let U, V be two finite dimensional Euclidean F-vector spaces. A linear operator T : U → V is called an isometry if for any u ∈ U we have
∥Tu∥_V = ∥u∥_U. t
u
Proposition 3.35. A linear operator T : U → V between two finite dimensional Euclidean vector spaces is an isometry if and only if
T*T = 1_U. t
u

The proof is left to you as Exercise 3.16.


Definition 3.36. Let U be a finite dimensional Euclidean F-space. A linear operator T : U U
is called an orthogonal operator if T is an isometry. We denote by O(U ) the space of orthogonal
operators on U . t
u

Proposition 3.37. Let U be a finite dimensional Euclidean F-space. Then
T ∈ O(U) ⟺ T*T = TT* = 1_U.
Proof. The implication ⟸ follows from Proposition 3.35. To prove the opposite implication, assume that T is an orthogonal operator. Hence
∥Tu∥ = ∥u∥, ∀u ∈ U.
This implies in particular that ker T = 0, so that T is invertible. If we let u = T⁻¹v in the above equality we deduce
∥T⁻¹v∥ = ∥v∥, ∀v ∈ U.
Hence T⁻¹ is also an isometry, so that
(T⁻¹)*T⁻¹ = 1_U.
Using Proposition 3.32 we deduce (T⁻¹)* = (T*)⁻¹. Hence
(T*)⁻¹T⁻¹ = 1_U.
Taking the inverses of both sides of the above equality we deduce
1_U = ((T*)⁻¹T⁻¹)⁻¹ = (T⁻¹)⁻¹((T*)⁻¹)⁻¹ = TT*.
t
u
3.6. Exercises.
Exercise 3.1. Prove the claims made in Example 3.2 (a), (b), (c). t
u


Exercise 3.2. Let (U, ⟨−,−⟩) be an Euclidean space.
(a) Show that for any u, v ∈ U we have
∥u + v∥² + ∥u − v∥² = 2(∥u∥² + ∥v∥²).
(b) Let u₀, ..., uₙ ∈ U. Prove that
∥u₀ − uₙ∥ ≤ ∥u₀ − u₁∥ + ∥u₁ − u₂∥ + · · · + ∥uₙ₋₁ − uₙ∥. t
u
Exercise 3.3. Show that for any complex numbers z₁, ..., zₙ we have
(|z₁| + · · · + |zₙ|)² ≤ n(|z₁|² + · · · + |zₙ|²)
and
|z₁|² + · · · + |zₙ|² ≤ (|z₁| + · · · + |zₙ|)².
Exercise 3.4. Consider the space P₃ of polynomials with real coefficients and degree ≤ 3, equipped with the inner product
$$\langle P, Q\rangle = \int_0^1 P(x)Q(x)\,dx,\quad P, Q \in P_3.$$
Construct an orthonormal basis of P3 by using the Gramm-Schmidt procedure applied to the basis of
P3 given by
E0 (x) = 1, E1 (x) = x, E2 (x) = x2 , E3 (x) = x3 . t
u
Exercise 3.5. Fill in the missing details in the proof of Corollary 3.15. t
u

Exercise 3.6. Suppose that T : U U is a linear operator on a finite dimensional complex Eu-
clidean vector space. Prove that there exists an orthonormal basis of U , such that, in this basis T is
represented by an upper triangular matrix. t
u

Exercise 3.7. Prove Proposition 3.16. t


u

Exercise 3.8. Consider the standard Euclidean space R3 . Denote by e1 , e2 , e3 the canonical or-
thonormal basis of R3 and by V the subspace generated by the vectors
v 1 = 12e1 + 5e3 , v 2 = e1 + e2 + e3 .
Find the matrix representing the orthogonal projection PV : R3 R3 in the canonical basis
e1 , e2 , e3 . t
u

Exercise 3.9. Consider the space P₃ of polynomials with real coefficients and degree ≤ 3. Find p ∈ P₃ such that p(0) = p′(0) = 0 and
$$\int_0^1 \big(2 + 3x - p(x)\big)^2\,dx$$
is as small as possible.
Hint: Observe that the set
V = { p ∈ P₃ ; p(0) = p′(0) = 0 }
is a subspace of P₃. Then compute P_V, the orthogonal projection onto V with respect to the inner product
$$\langle p, q\rangle = \int_0^1 p(x)q(x)\,dx,\quad p, q \in P_3.$$
The answer will be P_V q, q = 2 + 3x ∈ P₃. t
u

Exercise 3.10. Suppose that (U, ⟨−,−⟩) is a finite dimensional real Euclidean space and P : U → U is a linear operator such that
P² = P, ∥Pu∥ ≤ ∥u∥, ∀u ∈ U. (P)
Show that there exists a subspace V ⊂ U such that P = P_V = the orthogonal projection onto V.
Hint: Let V = R(P) = P(U), W := ker P. Using (P) argue by contradiction that V ⊥ W and then conclude that P = P_V. t
u

Exercise 3.11. Let P₂ denote the space of polynomials with real coefficients and degree ≤ 2. Describe the polynomial p₀ ∈ P₂ uniquely determined by the equalities
$$\int \cos x\, q(x)\,dx = \int p_0(x)q(x)\,dx,\quad \forall q \in P_2. $$ t
u

Exercise 3.12. Let k be a positive integer, and denote by P_k the space of polynomials with real coefficients and degree ≤ k. For k = 2, 3, 4, describe the polynomial p_k ∈ P_k uniquely determined by the equalities
$$q(0) = \int_{-1}^{1} p_k(x)q(x)\,dx,\quad \forall q \in P_k. $$ t
u
Exercise 3.13. Finish the proof of Theorem 3.24. (Pay special attention to the case when F = C.) t
u

Exercise 3.14. Let P₂ denote the space of polynomials with real coefficients and degree ≤ 2. We equip it with the inner product
$$\langle p, q\rangle = \int_0^1 p(x)q(x)\,dx.$$
Consider the linear operator T : P2 P2 defined by
dp
Tp = , p P2 .
dx
Describe the adjoint of T . t
u

Exercise 3.15. Prove Proposition 3.32. t


u

Exercise 3.16. Suppose that U is a finite dimensional Euclidean F-space and T : U → U is an orthogonal operator. Prove that
λ ∈ spec(T) ⟹ |λ| = 1. t
u

Exercise 3.17. Let U be a finite dimensional Euclidean F-vector space and T : U → U a linear operator. Prove that the following statements are equivalent.
(i) The operator T : U U is orthogonal.
(ii) hT u, T vi = hu, vi, u, v U .
(iii) For any orthonormal basis e1 , . . . , en of U , the collection T e1 , . . . , T en is also an orthonor-
mal basis of U .
t
u
4. SPECTRAL THEORY OF NORMAL OPERATORS
4.1. Normal operators. Let U be a finite dimensional complex Euclidean space. A linear operator T : U → U is called normal if
TT* = T*T.
Example 4.1. (a) A selfadjoint operator T : U → U is a normal operator. Indeed, we have T = T*, so that
TT* = T² = T*T.
(b) An orthogonal operator T : U → U is a normal operator. Indeed, Proposition 3.37 implies that
TT* = T*T = 1_U.
(c) If T : U → U is a normal operator and λ ∈ ℂ, then λ1_U − T is also a normal operator. Indeed,
(λ1_U − T)* = (λ1_U)* − T* = λ̄1_U − T*,
and we have
(λ1_U − T)(λ1_U − T)* = (λ1_U − T)*(λ1_U − T). t
u

Proposition 4.2. If T : U → U is a normal operator then so are all of its powers T^k, k > 0.
Proof. Invoking Proposition 3.32 we deduce
(T^k)* = (T*)^k.
Then
(T^k)*T^k = (T* · · · T*)(T · · · T).
Since T*T = TT*, we may move each factor T* past each factor T one at a time, and after all the swaps we obtain
(T^k)*T^k = (T · · · T)(T* · · · T*) = T^k(T*)^k = T^k(T^k)*.
t
u

Definition 4.3. Let U be a finite dimensional Euclidean F -space. A linear operator T : U U is


called orthogonally diagonalizable if there exists an orthonormal basis of U such that, in this basis
the operator T is represented by a diagonal matrix. t
u

We can unravel a bit the above definition and observe that a linear operator T on an n-dimensional
Euclidean F-space is orthogonally diagonalizable if and only if there exists an orthonormal basis
e1 , . . . , en and numbers a1 , . . . , an F such that
T ek = ak ek , k.
Thus the above basis is rather special: it is orthonormal, and it consists of eigenvectors of T . The
numbers ak are eigenvalues of T .
Note that the converse is also true. If U admits an orthonormal basis consisting of eigenvectors of T, then T is orthogonally diagonalizable.
Proposition 4.4. Suppose that U is a complex Euclidean space of dimension n and T : U U is
orthogonally diagonalizable. Then T is a normal operator.
Proof. Fix an orthonormal basis
e = (e₁, ..., eₙ)
of U such that, in this basis, the operator T is represented by the diagonal matrix
D = Diag(a₁, ..., aₙ), a₁, ..., aₙ ∈ ℂ.
The computations in Example 3.25 show that the operator T* is represented in the basis e by the matrix D*. Clearly
DD* = D*D = Diag(|a₁|², ..., |aₙ|²).
t
u

4.2. The spectral decomposition of a normal operator. We want to show that the converse of
Proposition 4.4 is also true. This is a nontrivial and fundamental result of linear algebra.
Theorem 4.5 (Spectral Theorem for Normal Operators). Let U be an n-dimensional complex Eu-
clidean space and T : U U a normal operator. Then T is orthogonally diagonalizable, i.e., there
exists an orthonormal basis of U consisting of eigenvectors of T .
Proof. The key fact behind the Spectral Theorem is contained in the following auxiliary result.
Lemma 4.6. Let λ ∈ spec(T). Then
ker(λ1_U − T)² = ker(λ1_U − T).

We first complete the proof of the Spectral Theorem assuming the validity of the above result.
Invoking Lemma 2.9 we deduce that
ker(λ1_U − T) = ker(λ1_U − T)² = ker(λ1_U − T)³ = · · ·
so that the generalized eigenspace of T corresponding to an eigenvalue λ coincides with the eigenspace ker(λ1_U − T),
E_λ(T) = ker(λ1_U − T).
Suppose that
spec(T) = {λ₁, ..., λ_ℓ}.
From Proposition 2.18 we deduce that
U = ker(λ₁1_U − T) ⊕ · · · ⊕ ker(λ_ℓ1_U − T). (4.1)
The next crucial observation is contained in the following elementary result.
Lemma 4.7. Suppose that λ, μ are two distinct eigenvalues of T, and u, v ∈ U are eigenvectors:
Tu = λu, Tv = μv.
Then
T*u = λ̄u, T*v = μ̄v,
and
u ⊥ v.

Proof. Let S_λ = T − λ1_U, so that S_λu = 0. Note that S_λ* = T* − λ̄1_U, so that we have to show that S_λ*u = 0. As explained in Example 4.1(c), the operator S_λ is normal. We deduce that
0 = S_λ*S_λu = S_λS_λ*u.
Hence
0 = ⟨S_λS_λ*u, u⟩ = ⟨S_λ*u, S_λ*u⟩ = ∥S_λ*u∥².
This proves that T*u = λ̄u. A similar argument shows that T*v = μ̄v.
From the equality T*v = μ̄v we deduce
λ⟨u, v⟩ = ⟨Tu, v⟩ = ⟨u, T*v⟩ = ⟨u, μ̄v⟩ = μ⟨u, v⟩.
Hence
(λ − μ)⟨u, v⟩ = 0.
Since λ ≠ μ we deduce ⟨u, v⟩ = 0. t
u

From the above result we conclude that the direct summands in (4.1) are mutually orthogonal. Set d_k = dim ker(λ_k1_U − T). We fix an orthonormal basis
e(k) = { e₁(k), ..., e_{d_k}(k) }
of ker(λ_k1_U − T). By construction, the vectors in this basis are eigenvectors of T. Since the spaces ker(λ_k1_U − T) are mutually orthogonal, we deduce from (4.1) that the union of the orthonormal bases e(k) is an orthonormal basis of U consisting of eigenvectors of T. This completes the proof of the Spectral Theorem, modulo Lemma 4.6. t
u

Proof of Lemma 4.6. The operator S = λ1_U − T is normal, so the conclusion of the lemma follows if we prove that for any normal operator S we have
ker S² = ker S.
Note that ker S ⊂ ker S², so it suffices to show that ker S² ⊂ ker S.
Let u ∈ U such that S²u = 0. We have to show that Su = 0. Note that
0 = (S*)²S²u = S*S*SSu = S*SS*Su.
Set A := S*S. Note that A is selfadjoint, A = A*, and we can rewrite the above equality as 0 = A²u. Hence
0 = ⟨A²u, u⟩ = ⟨Au, Au⟩ = ∥Au∥².
The equality Au = 0 now implies
0 = ⟨Au, u⟩ = ⟨S*Su, u⟩ = ⟨Su, Su⟩ = ∥Su∥².
This completes the proof of Lemma 4.6. t
u
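Lemma 4.7 is easy to watch in action numerically. In the minimal NumPy sketch below, the rotation matrix is normal (it is orthogonal), so eigenvectors for its two distinct eigenvalues ±i come out orthogonal in ℂ², and the matrix is orthogonally diagonalizable as Theorem 4.5 predicts.

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])                 # a rotation by 90 degrees
print(np.allclose(A @ A.T, A.T @ A))        # True: A is normal

w, V = np.linalg.eig(A)                     # eigenvalues i and -i
print(np.isclose(np.vdot(V[:, 0], V[:, 1]), 0))   # True: eigenvectors orthogonal

V = V / np.linalg.norm(V, axis=0)           # normalize the columns
print(np.allclose(V.conj().T @ A @ V, np.diag(w)))  # True: diagonal in this basis
```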
4.3. The spectral decomposition of a real symmetric operator. We begin with the real counterpart
of Proposition 4.4
Proposition 4.8. Suppose that U is a real Euclidean space of dimension n and T : U U is
orthogonally diagonalizable. Then T is a symmetric operator.
Proof. Fix an orthonormal basis
e = (e₁, ..., eₙ)
of U such that, in this basis, the operator T is represented by the diagonal matrix
D = Diag(a₁, ..., aₙ), a₁, ..., aₙ ∈ ℝ.
Clearly D = D* and the computations in Example 3.25 show that T is a selfadjoint operator. t
u

We can now state and prove the real counterpart of Theorem 4.5
Theorem 4.9 (Spectral Theorem for Real Symmetric Operators). Let U be an n-dimensional real
Euclidean space and T : U U a symmetric operator. Then T is orthogonally diagonalizable, i.e.,
there exists an orthonormal basis of U consisting of eigenvectors of T .
Proof. We argue by induction on n = dim U. For n = 1 the result is trivially true. We assume that the result is true for real symmetric operators acting on Euclidean spaces of dimension < n and we prove that it holds for a symmetric operator T on a real n-dimensional Euclidean space U.
To begin with, let us observe that T has at least one real eigenvalue. Indeed, if we fix an orthonormal basis e₁, ..., eₙ of U, then in this basis the operator T is represented by a symmetric n × n real matrix A. As explained in Corollary 3.29, all the roots of the characteristic polynomial det(λ1 − A) are real, and they coincide with the eigenvalues of T.
Fix one such eigenvalue λ ∈ spec(T) ⊂ ℝ and denote by E_λ the corresponding eigenspace
E_λ := ker(λ1 − T) ⊂ U.
Lemma 4.10. The orthogonal complement E_λ^⊥ is an invariant subspace of T, i.e.,
u ∈ E_λ^⊥ ⟹ Tu ∈ E_λ^⊥.
Proof. Let u ∈ E_λ^⊥. We have to show that Tu ∈ E_λ^⊥, i.e., Tu ⊥ v, ∀v ∈ E_λ. Given such a v we have
Tv = λv.
Next observe that u ⊥ v since u ∈ E_λ^⊥. Hence
⟨Tu, v⟩ = ⟨u, Tv⟩ = λ⟨u, v⟩ = 0.
t
u

The restriction S of T to E_λ^⊥ is a symmetric operator E_λ^⊥ → E_λ^⊥ and the induction hypothesis implies that we can find an orthonormal basis ω of E_λ^⊥ such that, in this basis, the operator S is represented by a diagonal matrix D′. Fix an arbitrary orthonormal basis e of E_λ. The union of e with ω is an orthonormal basis of U. In this basis T is represented by the block matrix
$$\begin{pmatrix} \lambda 1_{E_\lambda} & 0\\ 0 & D' \end{pmatrix}.$$
The above matrix is clearly diagonal. t
u
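Numerically, Theorem 4.9 is exactly what the symmetric eigensolvers deliver; a minimal NumPy sketch:

```python
import numpy as np

# np.linalg.eigh diagonalizes a real symmetric matrix by an orthogonal
# change of basis, as guaranteed by Theorem 4.9.
rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                        # a random symmetric matrix

w, S = np.linalg.eigh(A)                 # A S = S Diag(w)
print(np.allclose(S.T @ S, np.eye(4)))   # True: columns form an orthonormal basis
print(np.allclose(S.T @ A @ S, np.diag(w)))  # True: T is diagonal in that basis
```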
4.4. Exercises.
Exercise 4.1. Let U be a finite dimensional complex Euclidean vector space and T : U → U a normal operator. Prove that for any complex numbers a₀, ..., a_k the operator
a₀1_U + a₁T + · · · + a_kT^k
is a normal operator. t
u

Exercise 4.2. Let U be a finite dimensional complex vector space and T : U U a normal operator.
Show that the following statements are equivalent.
(i) The operator T is orthogonal.
(ii) If is an eigenvalue of T , then || = 1. t
u

Exercise 4.3. (a) Prove that the product of two orthogonal operators on a finite dimensional Euclidean
space is an orthogonal operator.
(b) Is it true that the product of two selfadjoint operators on a finite dimensional Euclidean space is
also a selfadjoint operator? t
u

Exercise 4.4. Suppose that U is a finite dimensional Euclidean space and P : U U is a linear
operator such that P 2 = P . Show that the following statements are equivalent.
(i) P is the orthogonal projection onto a subspace V U .
(ii) P = P .
t
u

Exercise 4.5. Suppose that U is a finite dimensional complex Euclidean space and T : U → U is a normal operator. Show that
R(T) = R(T*). t
u
Exercise 4.6. Does there exist a symmetric operator T : ℝ³ → ℝ³ such that
$$T\begin{pmatrix}1\\1\\1\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix},\quad T\begin{pmatrix}1\\2\\3\end{pmatrix} = \begin{pmatrix}1\\2\\3\end{pmatrix}?$$ t
u
Exercise 4.7. Show that a normal operator on a complex Euclidean space is selfadjoint if and only if
all its eigenvalues are real. t
u

Exercise 4.8. Suppose that T is a normal operator on a complex Euclidean space U such that T 7 =
T 5 . Prove that T is selfadjoint and T 3 = T . t
u

Exercise 4.9. Let U be a finite dimensional complex Euclidean space and T : U → U be a selfadjoint operator. Suppose that there exists a vector u ≠ 0, a complex number λ, and a number ε > 0 such that
∥Tu − λu∥ < ε∥u∥.
Prove that there exists an eigenvalue μ of T such that |λ − μ| < ε. t
u
5. APPLICATIONS
5.1. Symmetric bilinear forms. Suppose that U is a finite dimensional real vector space. Recall
that a symmetric bilinear form on U is a bilinear map
Q : U × U → ℝ
such that
Q(u₁, u₂) = Q(u₂, u₁), ∀u₁, u₂ ∈ U.
We denote by Sym(U ) the space of symmetric bilinear forms on U .
Suppose that e = (e₁, ..., eₙ) is a basis of the real vector space U. This basis associates to any symmetric bilinear form Q ∈ Sym(U) a symmetric matrix
A = (a_{ij})_{1≤i,j≤n}, a_{ij} = Q(eᵢ, eⱼ) = Q(eⱼ, eᵢ) = a_{ji}.
Note that the form Q is completely determined by the matrix A. Indeed, if
$$u = \sum_{i=1}^n u_ie_i,\quad v = \sum_{j=1}^n v_je_j,$$
then
$$Q(u, v) = Q\Big(\sum_{i=1}^n u_ie_i,\ \sum_{j=1}^n v_je_j\Big) = \sum_{i,j=1}^n u_iv_j\,Q(e_i, e_j) = \sum_{i,j=1}^n a_{ij}u_iv_j.$$

The matrix A is called the symmetric matrix associated to the symmetric form Q in the basis e.
Conversely, any symmetric n × n matrix A defines a symmetric bilinear form Q_A ∈ Sym(ℝⁿ) given by
$$Q_A(u, v) = \sum_{i,j=1}^n a_{ij}u_iv_j,\quad u, v \in \mathbb{R}^n.$$
If ⟨−,−⟩ denotes the canonical inner product on ℝⁿ, then we can rewrite the above equality in the more compact form
Q_A(u, v) = ⟨u, Av⟩.
Definition 5.1. Let Q ∈ Sym(U) be a symmetric bilinear form on the finite dimensional real space U. The quadratic form associated to Q is the function
Φ_Q : U → ℝ, Φ_Q(u) = Q(u, u), ∀u ∈ U. t
u
Observe that if Q ∈ Sym(U), then we have the polarization identity
$$Q(u, v) = \frac{1}{4}\big(\Phi_Q(u + v) - \Phi_Q(u - v)\big).$$
This shows that a symmetric bilinear form is completely determined by its associated quadratic form.
Proposition 5.2. Suppose that Q ∈ Sym(U) and
e = {e₁, ..., eₙ}, f = {f₁, ..., fₙ}
are two bases of U. Denote by S the matrix describing the transition from the basis e to the basis f. In other words, the j-th column of S describes the coordinates of fⱼ in the basis e, i.e.,
$$f_j = \sum_{i=1}^n s_{ij}e_i.$$
Denote by A (respectively B) the matrix associated to Q by the basis e (respectively f). Then
B = SᵀAS. (5.1)
Proof. We have
$$b_{ij} = Q(f_i, f_j) = Q\Big(\sum_{k=1}^n s_{ki}e_k,\ \sum_{\ell=1}^n s_{\ell j}e_\ell\Big) = \sum_{k,\ell=1}^n s_{ki}s_{\ell j}\,Q(e_k, e_\ell) = \sum_{k,\ell=1}^n s_{ki}a_{k\ell}s_{\ell j}.$$
If we denote by s^⊤_{ij} the entries of the transpose matrix Sᵀ, s^⊤_{ij} = s_{ji}, we deduce
$$b_{ij} = \sum_{k,\ell=1}^n s^\top_{ik}a_{k\ell}s_{\ell j}.$$
The last equality shows that b_{ij} is the (i, j)-entry of the matrix SᵀAS. t
u

+ We strongly recommend the reader to compare the change of base formula (5.1) with the change
of base formula (2.1).

Theorem 5.3. Suppose that Q is a symmetric bilinear form on a finite dimensional real vector space U. Then there exists at least one basis of U such that the matrix associated to Q by this basis is a diagonal matrix.
Proof. We will employ the spectral theory of real symmetric operators. For this reason we fix an Euclidean inner product ⟨−,−⟩ on U and then choose a basis e of U which is orthonormal with respect to this inner product. We denote by A the symmetric matrix associated to Q by this basis, i.e.,
a_{ij} = Q(eᵢ, eⱼ).
This matrix defines a symmetric operator T_A : U → U by the formula
$$T_Ae_j = \sum_{i=1}^n a_{ij}e_i.$$
Let us observe that
Q(u, v) = ⟨u, T_Av⟩, ∀u, v ∈ U. (5.2)
To verify the above equality we first notice that both sides of the above equalities are bilinear in u, v, so that it suffices to check the equality in the special case when the vectors u, v belong to the basis e. We have
$$\langle e_i, T_Ae_j\rangle = \Big\langle e_i, \sum_k a_{kj}e_k\Big\rangle = a_{ij} = Q(e_i, e_j).$$
The spectral theorem for real symmetric operators implies that there exists an orthonormal basis f of U such that the matrix B representing T_A in this basis is diagonal. If S denotes the matrix describing the transition from the basis e to the basis f, then the equality (2.1) implies that
B = S⁻¹AS.
Since the bases e and f are orthonormal, we deduce from Exercise 3.17 that the matrix S is orthogonal, i.e., SᵀS = SSᵀ = 1. Hence S⁻¹ = Sᵀ and we deduce that
B = SᵀAS.
The above equality and Proposition 5.2 imply that the diagonal matrix B is also the matrix associated to Q by the basis f. t
u
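The proof of Theorem 5.3 is constructive, and a minimal NumPy sketch makes it concrete: the orthogonal eigenvector matrix S of A satisfies the change-of-base formula (5.1) with a diagonal B.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])             # the form Q(u, v) = u1 v2 + u2 v1

w, S = np.linalg.eigh(A)               # S orthogonal, columns = eigenvectors
B = S.T @ A @ S                        # the change-of-base formula (5.1)
print(np.allclose(B, np.diag(w)))      # True: B is diagonal
print(w)                               # [-1.  1.]  ->  indices of inertia (1, 1, 0)
```

The signs of the diagonal entries here illustrate the invariants discussed in Theorem 5.4 below.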

Theorem 5.4 (The law of inertia). Let U be a real vector space of dimension n and Q a symmetric
bilinear form on U . Suppose that e, f are bases of U such that the matrices associated to Q by these
bases are diagonal.3 Then these matrices have the same number of positive (respectively negative,
respectively zero) entries on their diagonal.
Proof. We will show that these two matrices have the same number of positive elements and the same
number of negative entries on their diagonals. Automatically then they must have the same number
of trivial entries on their diagonals.
We take care of the positive entries first. Denote by A the matrix associated to Q by e and by B
the matrix associated by f . We denote by p the number of positive entries on the diagonal of A and
by q the number of positive entries on the diagonal of B. We have to show that p = q. We argue by
contradiction and we assume that p 6= q, say p > q.
We can label the elements in the basis e so that
aᵢᵢ = Q(eᵢ, eᵢ) > 0, ∀i ≤ p, aⱼⱼ = Q(eⱼ, eⱼ) ≤ 0, ∀j > p. (5.3)
Observe that since A is diagonal we have
Q(eᵢ, eⱼ) = 0, ∀i ≠ j. (5.4)
Similarly we can label the elements in the basis f so that
b_{kk} = Q(f_k, f_k) > 0, ∀k ≤ q, b_{ℓℓ} = Q(f_ℓ, f_ℓ) ≤ 0, ∀ℓ > q. (5.5)
Since B is diagonal we have
Q(fᵢ, fⱼ) = 0, ∀i ≠ j. (5.6)
Denote by V the subspace spanned by the vectors eᵢ, i = 1, ..., p, and by W the subspace spanned by the vectors f_{q+1}, ..., fₙ. From the equalities (5.3), (5.4), (5.5), (5.6) we deduce that
Q(v, v) > 0, ∀v ∈ V \ 0, (5.7a)
Q(w, w) ≤ 0, ∀w ∈ W \ 0. (5.7b)
On the other hand, we observe that dim V = p, dim W = n − q. Hence
dim V + dim W = n + p − q > n = dim U,
so that there exists a vector
u ∈ (V ∩ W) \ 0.
The vector u cannot simultaneously satisfy both inequalities (5.7a) and (5.7b). This contradiction implies that p = q.
Using the above argument for the form −Q we deduce that A and B have the same number of
negative elements on their diagonals. t
u

³Such bases are called diagonalizing bases of Q.
The above theorem shows that no matter what diagonalizing basis of Q we choose, the diagonal matrix representing Q in that basis will have the same number of positive, negative, and zero elements on its diagonal. We will denote these common numbers by ν₊(Q), ν₋(Q) and respectively ν₀(Q). These numbers are called the indices of inertia of the symmetric form Q. The integer ν₋(Q) is called the Morse index of the symmetric form Q and the difference
τ(Q) = ν₊(Q) − ν₋(Q)
is called the signature of the form Q.
Definition 5.5. A symmetric bilinear form Q ∈ Sym(U) is called positive definite if
Q(u, u) > 0, ∀u ∈ U \ 0.
It is called positive semidefinite if
Q(u, u) ≥ 0, ∀u ∈ U.
It is called negative (semi)definite if −Q is positive (semi)definite. t
u

We observe that a symmetric bilinear form on an n-dimensional real space U is positive definite if and only if ν₊(Q) = n = dim U.
Definition 5.6. A real symmetric nn matrix A is called positive definite if and only if the associated
symmetric bilinear form QA Sym(Rn ) is positive definite. t
u

5.2. Nonnegative operators. Suppose that U is a finite dimensional Euclidean F-vector space.
Definition 5.7. A linear operator T : U → U is called nonnegative if the following hold.
(i) T is selfadjoint, T = T*.
(ii) ⟨Tu, u⟩ ≥ 0, for all u ∈ U.
The operator is called positive if it is nonnegative and ⟨Tu, u⟩ = 0 ⟹ u = 0. t
u

Example 5.8. Suppose that T : U → U is a linear operator. Then the operator S := T*T is nonnegative. Indeed, it is selfadjoint and
⟨Su, u⟩ = ⟨T*Tu, u⟩ = ⟨Tu, Tu⟩ = ∥Tu∥² ≥ 0.
Note that S is positive if and only if ker T = 0, i.e., if and only if T is injective. t
u

Definition 5.9. Suppose that T : U U is a linear operator on the finite dimensional F-space U .
A square root of T is a linear operator S : U U such that S 2 = T . t
u

Theorem 5.10. Let T : U → U be a linear operator on the n-dimensional Euclidean F-space. Then the following statements are equivalent.
(i) The operator T is nonnegative.
(ii) The operator T is selfadjoint and all its eigenvalues are nonnegative.
(iii) The operator T admits a nonnegative square root.
(iv) The operator T admits a selfadjoint square root.
(v) There exists an operator S : U → U such that T = S*S.
Proof. (i) ⇒ (ii) The operator T, being nonnegative, is also selfadjoint. Hence all its eigenvalues are real. If λ is an eigenvalue of T and u ∈ ker(λ1U − T) \ 0, then
λ‖u‖² = ⟨Tu, u⟩ ≥ 0.
This implies λ ≥ 0.
(ii) ⇒ (iii) Since T is selfadjoint, there exists an orthonormal basis e = {e1, . . . , en} of U such that, in this basis, the operator T is represented by the diagonal matrix
A = Diag(λ1, . . . , λn),
where λ1, . . . , λn are the eigenvalues of T. They are all nonnegative so we can form a new diagonal matrix
B = Diag(√λ1, . . . , √λn).
The matrix B defines a selfadjoint linear operator S on U, represented by the matrix B in the basis e. More precisely,
Sei = √λi ei, i = 1, . . . , n.
If u = Σ_{i=1}^n ui ei, then
Su = Σ_{i=1}^n √λi ui ei, ⟨Su, u⟩ = Σ_{i=1}^n √λi |ui|² ≥ 0.
Hence S is nonnegative. From the obvious equality A = B² we deduce T = S², so that S is a nonnegative square root of T.
The implication (iii) ⇒ (iv) is obvious because any nonnegative square root of T is automatically a selfadjoint square root. To prove the implication (iv) ⇒ (v) observe that if S is a selfadjoint square root of T, then
T = S² = S*S.
The implication (v) ⇒ (i) was proved in Example 5.8. □

Proposition 5.11. Let U be a finite dimensional Euclidean F-space. Then any nonnegative operator T : U → U admits a unique nonnegative square root.

Proof. We have an orthogonal decomposition
U = ⊕_{λ∈spec(T)} ker(λ1U − T),
so that any vector u ∈ U can be written uniquely as
u = Σ_{λ∈spec(T)} uλ, uλ ∈ ker(λ1U − T). (5.8)
Moreover,
Tu = Σ_{λ∈spec(T)} λ uλ.
Suppose that S is a nonnegative square root of T. If μ ∈ spec(S) and u ∈ ker(μ1U − S), then
Tu = S²u = S(Su) = S(μu) = μ²u.
Hence
μ² ∈ spec(T)
and
ker(μ1U − S) ⊂ ker(μ²1U − T).
We have a similar orthogonal decomposition
U = ⊕_{μ∈spec(S)} ker(μ1U − S) ⊂ ⊕_{μ∈spec(S)} ker(μ²1U − T) ⊂ ⊕_{λ∈spec(T)} ker(λ1U − T) = U.
This implies that
spec(T) = { μ²; μ ∈ spec(S) }, ker(μ1U − S) = ker(μ²1U − T), ∀μ ∈ spec(S).
Since all the eigenvalues of S are nonnegative we deduce that for any λ ∈ spec(T) we have √λ ∈ spec(S). Thus if u is decomposed as in (5.8),
u = Σ_{λ∈spec(T)} uλ,
then
Su = Σ_{λ∈spec(T)} √λ uλ.
The last equality determines S uniquely. □

Definition 5.12. If T is a nonnegative operator, then its unique nonnegative square root is denoted by √T. □
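The proof of Theorem 5.10 is effectively an algorithm for √T: diagonalize T in an orthonormal basis and take the square roots of the eigenvalues. A minimal numpy sketch for a real symmetric nonnegative matrix (the helper name and the round-off clipping are ad-hoc choices):

import numpy as np

def sqrt_psd(T):
    # Orthonormal eigendecomposition T = Q diag(lam) Q^T with lam >= 0;
    # the nonnegative square root is Q diag(sqrt(lam)) Q^T.
    lam, Q = np.linalg.eigh(T)
    lam = np.clip(lam, 0.0, None)   # clear tiny negative round-off
    return Q @ np.diag(np.sqrt(lam)) @ Q.T

T = np.array([[2.0, 1.0], [1.0, 2.0]])
S = sqrt_psd(T)
print(np.allclose(S @ S, T))  # True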

5.3. Exercises.
Exercise 5.1. Let Q be a symmetric bilinear form on an n-dimensional real vector space U. Prove that there exists a basis of U such that the matrix associated to Q by this basis is diagonal and all the entries belong to {−1, 0, 1}. □

Exercise 5.2. Let Q be a symmetric bilinear form on an n-dimensional real vector space U with indices of inertia ν+, ν−, ν0. We say Q is positive definite on a subspace V ⊂ U if
Q(v, v) > 0, ∀v ∈ V \ 0.
(a) Prove that if Q is positive definite on a subspace V, then dim V ≤ ν+.
(b) Show that there exists a subspace of dimension ν+ on which Q is positive definite.
Hint: (a) Choose a diagonalizing basis e = {e1, . . . , en} of Q. Assume that Q(ei, ei) > 0 for i = 1, . . . , ν+ and Q(ej, ej) ≤ 0 for j > ν+. Argue by contradiction, using that
dim V + dim span{ej; j > ν+} ≤ dim U = n. □
Exercise 5.3. Let Q be a symmetric bilinear form on an n-dimensional real vector space U with indices of inertia ν+, ν−, ν0. Define
Null(Q) := { u ∈ U; Q(u, v) = 0, ∀v ∈ U }.
Show that Null(Q) is a vector subspace of U of dimension ν0. □

Exercise 5.4 (Jacobi). For any n × n matrix M we denote by Mi the i × i matrix determined by the first i rows and columns of M.
Suppose that Q is a symmetric bilinear form on the real vector space U of dimension n. Fix a basis e = {e1, . . . , en} of U. Denote by A the matrix associated to Q by the basis e and assume that
Δi := det Ai ≠ 0, ∀i = 1, . . . , n.
(a) Prove that there exists a basis f = (f1, . . . , fn) of U with the following properties.
(i) span{e1, . . . , ei} = span{f1, . . . , fi}, ∀i = 1, . . . , n.
(ii) Q(fk, ei) = 0 for 1 ≤ i < k ≤ n, and Q(fk, ek) = 1, ∀k = 1, . . . , n.
Hint: For fixed k, express the vector fk in terms of the vectors ei,
fk = Σ_{i=1}^n sik ei,
and then show that the conditions (i) and (ii) above uniquely determine the coefficients sik, again with k fixed.
(b) If f is the basis found above, show that
Q(fk, fi) = 0, ∀i ≠ k,
Q(fk, fk) = Δ_{k−1}/Δ_k, ∀k = 1, . . . , n, Δ0 := 1.
(c) Show that the Morse index ν−(Q) is the number of sign changes in the sequence
1, Δ1, Δ2, . . . , Δn. □
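Part (c) lends itself to a numerical sanity check: count the sign changes in the sequence 1, Δ1, . . . , Δn and compare with the number of negative eigenvalues. A numpy sketch, assuming all leading minors are nonzero (the helper name is mine):

import numpy as np

def morse_index_jacobi(A):
    # Sign changes in 1, det(A_1), ..., det(A_n); valid only when
    # every leading principal minor is nonzero.
    minors = [1.0] + [np.linalg.det(A[:i, :i]) for i in range(1, len(A) + 1)]
    return sum(1 for a, b in zip(minors, minors[1:]) if a * b < 0)

A = np.array([[1.0, 2.0], [2.0, 1.0]])            # eigenvalues 3 and -1
print(morse_index_jacobi(A))                      # 1
print(int(np.sum(np.linalg.eigvalsh(A) < 0)))     # 1, in agreement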
Exercise 5.5 (Sylvester). For any n × n matrix M we denote by Mi the i × i matrix determined by the first i rows and columns of M.
Suppose that Q is a symmetric bilinear form on the real vector space U of dimension n. Fix a basis e = {e1, . . . , en} of U and denote by A the matrix associated to Q by the basis e. Prove that the following statements are equivalent.
(i) Q is positive definite.
(ii) det Ai > 0, ∀i = 1, . . . , n. □

Exercise 5.6. (a) Let f : [0, 1] → [0, ∞) be a continuous function which is not identically zero. For any k = 0, 1, 2, . . . we define the k-th moment of f to be the real number
μk := μk(f) = ∫₀¹ x^k f(x) dx.
Prove that the symmetric (n + 1) × (n + 1) matrix
A = (aij)0≤i,j≤n, aij = μi+j,
is positive definite.
Hint: Associate to any vector u = (u0, u1, . . . , un)ᵀ the polynomial Pu(x) = u0 + u1x + · · · + un x^n and then express the integral
∫₀¹ Pu(x)Pv(x)f(x) dx
in terms of A.
(b) Prove that the symmetric n × n matrix
B = (bij)1≤i,j≤n, bij = 1/(i + j),
is positive definite.
(c) Prove that the symmetric (n + 1) × (n + 1) matrix
C = (cij)0≤i,j≤n, cij = (i + j)!,
is positive definite, where 0! := 1, n! = 1 · 2 · · · n.
Hint: Show that for any k = 0, 1, 2, . . . we have
∫₀^∞ x^k e^{−x} dx = k!,
and then use the trick in (a). □
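For f ≡ 1 the moments are μk = 1/(k + 1), so the matrix A in part (a) is a Hilbert-type matrix. The following numpy snippet verifies (but of course does not prove) its positive definiteness for a small n:

import numpy as np

n = 4
# a_ij = mu_{i+j} = integral_0^1 x^(i+j) dx = 1/(i+j+1), 0 <= i, j <= n
A = np.array([[1.0 / (i + j + 1) for j in range(n + 1)] for i in range(n + 1)])
print(np.all(np.linalg.eigvalsh(A) > 0))  # True: all eigenvalues positive
np.linalg.cholesky(A)                     # succeeds, so A is positive definite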

Exercise 5.7. (a) Show that the 2 × 2 symmetric matrix
A = [ 2 1
      1 2 ]
is positive definite.
(b) Denote by QA the symmetric bilinear form on R² defined by the above matrix A. Since QA is positive definite, it defines an inner product ⟨−, −⟩A on R²,
⟨u, v⟩A = QA(u, v) = ⟨u, Av⟩,
where ⟨−, −⟩ denotes the canonical inner product on R². Let T : R² → R² be a linear operator and denote by T# the adjoint of T with respect to the inner product ⟨−, −⟩A, i.e.,
⟨Tu, v⟩A = ⟨u, T#v⟩A, ∀u, v ∈ R². (#)
Show that
T# = A⁻¹ T* A,
where T* is the adjoint of T with respect to the canonical inner product ⟨−, −⟩. What does this formula tell you in the special case when T is described by the symmetric matrix
[ 1 2
  2 3 ] ?
Hint: In (#) use the equalities ⟨x, y⟩A = ⟨x, Ay⟩, ⟨Tx, y⟩ = ⟨x, T*y⟩, ∀x, y ∈ R². □

Exercise 5.8. Suppose that U is a finite dimensional Euclidean F-space and T : U → U is an invertible operator. Prove that √(T*T) is invertible and the operator S = T(√(T*T))⁻¹ is orthogonal. □

Exercise 5.9. Suppose that U is a finite dimensional real Euclidean space and Q ∈ Sym(U) is a positive definite symmetric bilinear form. Prove that there exists a unique positive operator
T : U → U
such that
Q(u, v) = ⟨Tu, Tv⟩, ∀u, v ∈ U. □

6. ELEMENTS OF LINEAR TOPOLOGY


6.1. Normed vector spaces. As usual, F will denote one of the fields R or C.
Definition 6.1. A norm on the F-vector space U is a function
‖ · ‖ : U → [0, ∞)
satisfying the following conditions.
(i) (Nondegeneracy) ‖u‖ = 0 ⇔ u = 0.
(ii) (Homogeneity) ‖cu‖ = |c| ‖u‖, ∀u ∈ U, c ∈ F.
(iii) (Triangle inequality) ‖u + v‖ ≤ ‖u‖ + ‖v‖, ∀u, v ∈ U.
A normed F-space is a pair (U, ‖ · ‖) where U is an F-vector space and ‖ · ‖ is a norm on U. □

The distance between two vectors u, v in a normed space (U, ‖ · ‖) is the quantity
dist(u, v) := ‖u − v‖.
Note that the triangle inequality implies that
dist(u, w) ≤ dist(u, v) + dist(v, w), ∀u, v, w ∈ U, (6.1a)
| ‖u‖ − ‖v‖ | ≤ ‖u − v‖, ∀u, v ∈ U. (6.1b)
Example 6.2. (a) The absolute value | − | : R → [0, ∞) is a norm on the one-dimensional real vector space R.
(b) The absolute value | − | : C → [0, ∞), |x + iy| = √(x² + y²), is a norm on the one-dimensional complex vector space C.
(c) If U is a Euclidean F-space with inner product ⟨−, −⟩, then the function
| − | : U → [0, ∞), |u| = √⟨u, u⟩,
is a norm on U called the norm determined by the inner product.
(d) Consider the space R^N with canonical basis e1, . . . , eN. The function
| − |2 : R^N → [0, ∞), |u1e1 + · · · + uN eN|2 = √(|u1|² + · · · + |uN|²),
is the norm associated to the canonical inner product on R^N.
The functions
| − |1, | − |∞ : R^N → [0, ∞)
defined by
|u1e1 + · · · + uN eN|∞ := max_{1≤k≤N} |uk|, |u|1 = Σ_{k=1}^N |uk|,
are also norms on R^N. The proof of this fact is left as an exercise. The norm | − |1 is sometimes known as the taxicab norm. One can define similarly norms | − |2 and | − |∞ on C^N.
(e) Let U denote the space C([0, 1]) of continuous functions u : [0, 1] → R. Then the function
‖ − ‖ : C([0, 1]) → [0, ∞), ‖u‖ := sup_{x∈[0,1]} |u(x)|,
is a norm on U. The proof is left as an exercise.



(f) If (U0, ‖ · ‖0) and (U1, ‖ · ‖1) are two normed F-spaces, then their cartesian product U0 × U1 is equipped with a natural norm
‖(u0, u1)‖ := ‖u0‖0 + ‖u1‖1, ∀u0 ∈ U0, u1 ∈ U1. □
Theorem 6.3 (Minkowski). Consider the space R^N with canonical basis e1, . . . , eN. Let p ∈ (1, ∞). Then the function
| − |p : R^N → [0, ∞), |u1e1 + · · · + uN eN|p = ( Σ_{k=1}^N |uk|^p )^{1/p},
is a norm on R^N.
Proof. The nondegeneracy and homogeneity of | − |p are obvious. The tricky part is the triangle inequality. We will carry out the proof of this inequality in several steps. Define q ∈ (1, ∞) by the equality
1 = 1/p + 1/q ⇔ q = p/(p − 1).
Step 1. Young's inequality. We will prove that if a, b are nonnegative real numbers then
ab ≤ a^p/p + b^q/q. (6.2)
The inequality is obviously true if ab = 0, so we assume that ab ≠ 0. Set α := 1/p and define
f : (0, ∞) → R, f(x) = x^α − αx + α − 1.
Observe that
f′(x) = α(x^{α−1} − 1).
Hence f′(x) = 0 ⇔ x = 1. Moreover, since α < 1 we deduce
f″(1) = α(α − 1) < 0,
so 1 is a global max of f(x), i.e.,
f(x) ≤ f(1) = 0, ∀x > 0.
Now let x = a^p/b^q. We deduce
(a^p/b^q)^{1/p} ≤ (1/p)(a^p/b^q) + 1/q,
and multiplying both sides by b^q we obtain
a b^{q − q/p} ≤ a^p/p + b^q/q.
This is precisely (6.2) since q − q/p = q(1 − 1/p) = 1.
Step 2. Hölder's inequality. We prove that for any x, y ∈ R^N we have
|⟨x, y⟩| ≤ |x|p |y|q, (6.3)
where ⟨x, y⟩ is the canonical inner product of the vectors x, y ∈ R^N,
⟨x, y⟩ = x1y1 + · · · + xN yN.
Clearly it suffices to prove the inequality only in the case x, y ≠ 0. Using Young's inequality with
a = |xk|/|x|p, b = |yk|/|y|q,
we deduce
|xk yk| / (|x|p |y|q) ≤ (1/p) |xk|^p/|x|p^p + (1/q) |yk|^q/|y|q^q, ∀k = 1, . . . , N.
Summing over k we deduce
(1/(|x|p |y|q)) Σ_{k=1}^N |xk yk| ≤ (1/(p|x|p^p)) Σ_{k=1}^N |xk|^p + (1/(q|y|q^q)) Σ_{k=1}^N |yk|^q = 1/p + 1/q = 1.
Hence
Σ_{k=1}^N |xk yk| ≤ |x|p |y|q.
Now observe that
|⟨x, y⟩| = | Σ_{k=1}^N xk yk | ≤ Σ_{k=1}^N |xk yk|.
Step 3. Conclusion. We have
|u + v|p^p = Σ_{k=1}^N |uk + vk|^p = Σ_{k=1}^N |uk + vk| · |uk + vk|^{p−1}
≤ Σ_{k=1}^N |uk| · |uk + vk|^{p−1} + Σ_{k=1}^N |vk| · |uk + vk|^{p−1}.
Using Hölder's inequality and the equality q(p − 1) = p, we deduce
Σ_{k=1}^N |uk| · |uk + vk|^{p−1} ≤ ( Σ_{k=1}^N |uk|^p )^{1/p} ( Σ_{k=1}^N |uk + vk|^{q(p−1)} )^{1/q} = |u|p |u + v|p^{p/q}.
Similarly, we have
Σ_{k=1}^N |vk| · |uk + vk|^{p−1} ≤ |v|p |u + v|p^{p/q}.
Hence
|u + v|p^p ≤ |u + v|p^{p/q} ( |u|p + |v|p ),
so that
|u + v|p = |u + v|p^{p − p/q} ≤ |u|p + |v|p,
since p − p/q = p(1 − 1/q) = 1. □
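A small numerical illustration of the | − |p norms and of Minkowski's inequality: numpy's vector norm implements exactly the formula above, and the check below samples a few vectors (a sanity check, not a proof):

import numpy as np

def p_norm(u, p):
    # |u|_p = (sum |u_k|^p)^(1/p); p = np.inf gives max |u_k|
    return np.linalg.norm(u, ord=p)

rng = np.random.default_rng(1)
u, v = rng.standard_normal(5), rng.standard_normal(5)
for p in (1, 1.5, 2, 3, np.inf):
    # triangle inequality |u+v|_p <= |u|_p + |v|_p, up to round-off
    assert p_norm(u + v, p) <= p_norm(u, p) + p_norm(v, p) + 1e-12
print("triangle inequality holds for the sampled vectors")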

Definition 6.4. Suppose that ‖ · ‖0 and ‖ · ‖1 are two norms on the same vector space U. We say that ‖ · ‖1 is stronger than ‖ · ‖0, and we denote this by ‖ · ‖0 ≺ ‖ · ‖1, if there exists C > 0 such that
‖u‖0 ≤ C‖u‖1, ∀u ∈ U. □
The relation ≺ is a partial order on the set of norms on a vector space U. This means that if ‖ · ‖0, ‖ · ‖1, ‖ · ‖2 are three norms such that
‖ · ‖0 ≺ ‖ · ‖1, ‖ · ‖1 ≺ ‖ · ‖2,
then
‖ · ‖0 ≺ ‖ · ‖2.

Definition 6.5. Two norms ‖ · ‖0 and ‖ · ‖1 on the same vector space U are called equivalent, and we denote this by ‖ · ‖0 ∼ ‖ · ‖1, if
‖ · ‖0 ≺ ‖ · ‖1 and ‖ · ‖1 ≺ ‖ · ‖0. □
Observe that
‖ · ‖0 ∼ ‖ · ‖1 ⇔ ∃c, C > 0 : c‖u‖1 ≤ ‖u‖0 ≤ C‖u‖1, ∀u ∈ U. (6.4)
The relation ∼ is an equivalence relation on the space of norms on a given vector space.
Proposition 6.6. The norms | − |p, p ∈ [1, ∞], on R^N are all equivalent with the norm | − |∞.
Proof. Indeed, for u ∈ R^N we have
|u|p = ( Σ_{k=1}^N |uk|^p )^{1/p} ≤ ( N max_{1≤k≤N} |uk|^p )^{1/p} = N^{1/p} |u|∞,
so that | − |p ≺ | − |∞. Conversely,
|u|∞ = ( max_{1≤k≤N} |uk|^p )^{1/p} ≤ ( Σ_{k=1}^N |uk|^p )^{1/p} = |u|p. □

Proposition 6.7. The norm | − |∞ on R^N is stronger than any other norm ‖ · ‖ on R^N.
Proof. Let
C = max_{1≤k≤N} ‖ek‖,
where e1, . . . , eN is the canonical basis in R^N. We have
‖u‖ = ‖u1e1 + · · · + uN eN‖ ≤ |u1| ‖e1‖ + · · · + |uN| ‖eN‖ ≤ NC |u|∞. □
Corollary 6.8. For any p ∈ [1, ∞], the norm | − |p on R^N is stronger than any other norm ‖ · ‖ on R^N.
Proof. Indeed, the norm | − |p is stronger than | − |∞, which is stronger than ‖ · ‖. □

6.2. Convergent sequences.


Definition 6.9. Let (U, ‖ · ‖) be a normed space. A sequence (u(n))n≥0 of vectors in U is said to converge to u∗ in the norm ‖ · ‖ if
lim_{n→∞} ‖u(n) − u∗‖ = 0,
i.e.,
∀ε > 0 ∃ν = ν(ε) > 0 such that ‖u(n) − u∗‖ < ε, ∀n ≥ ν.
We will use the notation
u∗ = lim_{n→∞} u(n)
to indicate that the sequence u(n) converges to u∗. □

Proposition 6.10. A sequence of vectors
u(n) = (u1(n), . . . , uN(n))ᵀ ∈ R^N
converges to
u∗ = (u∗1, . . . , u∗N)ᵀ ∈ R^N
in the norm | − |∞ if and only if
lim_{n→∞} uk(n) = u∗k, ∀k = 1, . . . , N.
Proof. We have
lim_{n→∞} |u(n) − u∗|∞ = lim_{n→∞} max_{1≤k≤N} |uk(n) − u∗k| = 0 ⇔ lim_{n→∞} |uk(n) − u∗k| = 0, ∀k = 1, . . . , N. □

Proposition 6.11. Let (U, ‖ · ‖) be a normed space. If the sequence (u(n))n≥0 in U converges to u∗ ∈ U in the norm ‖ · ‖, then
lim_{n→∞} ‖u(n)‖ = ‖u∗‖.
Proof. Using (6.1b) we have
| ‖u(n)‖ − ‖u∗‖ | ≤ ‖u(n) − u∗‖ → 0 as n → ∞. □

The proof of the next result is left as an exercise.

Proposition 6.12. Let (U, ‖ · ‖) be a normed F-vector space. If the sequences (u(n))n≥0, (v(n))n≥0 converge in the norm ‖ · ‖ to u∗ and respectively v∗, then their sum u(n) + v(n) converges in the norm ‖ · ‖ to u∗ + v∗. Moreover, for any scalar λ ∈ F, the sequence λu(n) converges in the norm ‖ · ‖ to the vector λu∗. □

Proposition 6.13. Let U be a vector space and ‖ · ‖0, ‖ · ‖1 be two norms on U. The following statements are equivalent.
(i) ‖ · ‖0 ≺ ‖ · ‖1.
(ii) If a sequence converges in the norm ‖ · ‖1, then it converges to the same limit in the norm ‖ · ‖0 as well.
Proof. (i) ⇒ (ii). We know that there exists a constant C > 0 such that ‖u‖0 ≤ C‖u‖1, ∀u ∈ U. We have to show that if the sequence (u(n))n≥0 converges in the norm ‖ · ‖1 to u∗, then it also converges to u∗ in the norm ‖ · ‖0. We have
‖u(n) − u∗‖0 ≤ C‖u(n) − u∗‖1 → 0 as n → ∞.

(ii) ⇒ (i). We argue by contradiction and we assume that for any n > 0 there exists v(n) ∈ U \ 0 such that
‖v(n)‖0 ≥ n‖v(n)‖1. (6.5)
Set
u(n) := (1/‖v(n)‖0) v(n), ∀n > 0.
Multiplying both sides of (6.5) by 1/‖v(n)‖0 we deduce
1 = ‖u(n)‖0 ≥ n‖u(n)‖1, ∀n > 0.
Hence u(n) → 0 in the norm ‖ · ‖1 and thus u(n) → 0 in the norm ‖ · ‖0. Using Proposition 6.11 we deduce ‖u(n)‖0 → 0. This contradicts the fact that ‖u(n)‖0 = 1 for any n > 0. □

Corollary 6.14. Two norms on the same vector space are equivalent if and only if any sequence that converges in one norm converges to the same limit in the other norm as well. □

Theorem 6.15. Any norm ‖ · ‖ on R^N is equivalent to the norm | − |∞.

Proof. We already know that ‖ · ‖ ≺ | − |∞, so it suffices to show that | − |∞ ≺ ‖ · ‖. Consider the sphere
S := { u ∈ R^N; |u|∞ = 1 }.
We set
m := inf_{u∈S} ‖u‖.
We claim that
m > 0. (6.6)
Let us observe that the above inequality implies that | − |∞ ≺ ‖ · ‖. Indeed, for any u ∈ R^N \ 0 we have
ū := (1/|u|∞) u ∈ S,
so that
‖ū‖ ≥ m.
Multiplying both sides of the above inequality with |u|∞ we deduce
‖u‖ ≥ m|u|∞ ⇔ |u|∞ ≤ (1/m)‖u‖, ∀u ∈ R^N \ 0, ⇒ | − |∞ ≺ ‖ · ‖.
Let us prove the claim (6.6). Choose a sequence u(n) ∈ S such that
lim_{n→∞} ‖u(n)‖ = m. (6.7)
Since u(n) ∈ S, the coordinates uk(n) of u(n) satisfy the inequalities
uk(n) ∈ [−1, 1], ∀k = 1, . . . , N, ∀n.
The Bolzano-Weierstrass theorem implies that we can extract a subsequence (u(ν)) of the sequence (u(n)) such that each sequence (uk(ν)) converges to some u∗k ∈ R, for any k = 1, . . . , N. Let u∗ ∈ R^N be the vector with coordinates u∗1, . . . , u∗N. Using Proposition 6.10 we deduce that u(ν) → u∗ in the norm | − |∞. Since |u(ν)|∞ = 1, ∀ν, we deduce from Proposition 6.11 that |u∗|∞ = 1, i.e., u∗ ∈ S.
On the other hand, ‖ · ‖ ≺ | − |∞ and we deduce that u(ν) converges to u∗ in the norm ‖ · ‖ as well. Hence, by (6.7),
‖u∗‖ = lim_{ν→∞} ‖u(ν)‖ = m.
Since |u∗|∞ = 1 we deduce that u∗ ≠ 0, so that m = ‖u∗‖ > 0. □

Corollary 6.16. On a finite dimensional real vector space U any two norms are equivalent.
Proof. Let N = dim U. We can then identify U with R^N and invoke Theorem 6.15, which implies that any two norms on R^N are equivalent. □

Corollary 6.17. Let U be a finite dimensional real vector space. A sequence in U converges in some norm if and only if it converges to the same limit in any norm. □

6.3. Completeness.
Definition 6.18. Let (U, ‖ · ‖) be a normed space. A sequence (u(n))n≥0 of vectors in U is said to be a Cauchy sequence (in the norm ‖ · ‖) if
lim_{m,n→∞} ‖u(m) − u(n)‖ = 0,
i.e.,
∀ε > 0 ∃ν = ν(ε) > 0 such that dist(u(m), u(n)) = ‖u(m) − u(n)‖ < ε, ∀m, n > ν. □
Proposition 6.19. Let (U, ‖ · ‖) be a normed space. If the sequence (u(n)) is convergent in the norm ‖ · ‖, then it is also Cauchy in the norm ‖ · ‖.
Proof. Denote by u∗ the limit of the sequence (u(n)). For any ε > 0 we can find ν = ν(ε) > 0 such that
‖u(n) − u∗‖ < ε/2, ∀n > ν(ε).
If m, n > ν(ε), then
‖u(m) − u(n)‖ ≤ ‖u(m) − u∗‖ + ‖u∗ − u(n)‖ < ε/2 + ε/2 = ε. □

Definition 6.20. A normed space (U, ‖ · ‖) is called complete if any Cauchy sequence is also convergent. A Banach space is a complete normed space. □

Proposition 6.21. Let ‖ · ‖0 and ‖ · ‖1 be two equivalent norms on the same vector space U. Then the following statements are equivalent.
(i) The space U is complete in the norm ‖ · ‖0.
(ii) The space U is complete in the norm ‖ · ‖1.
Proof. We prove only that (i) ⇒ (ii). The opposite implication is completely similar. Thus, we know that U is ‖ · ‖0-complete and we have to prove that it is also ‖ · ‖1-complete.
Suppose that (u(n)) is a Cauchy sequence in the norm ‖ · ‖1. We have to prove that it converges in the norm ‖ · ‖1. Since ‖ · ‖0 ≺ ‖ · ‖1, we deduce that there exists a constant C > 0 such that
‖u(n) − u(m)‖0 ≤ C‖u(n) − u(m)‖1 → 0 as m, n → ∞.
Hence the sequence (u(n)) is also Cauchy in the norm ‖ · ‖0. Since U is ‖ · ‖0-complete, we deduce that the sequence (u(n)) converges in the norm ‖ · ‖0 to some vector u∗ ∈ U.
Using the fact that ‖ · ‖1 ≺ ‖ · ‖0 we deduce from Proposition 6.13 that the sequence converges to u∗ in the norm ‖ · ‖1 as well. □

Theorem 6.22. Any finite dimensional normed space (U, ‖ · ‖) is complete.

Proof. For simplicity we assume that U is a real vector space of dimension N. Thus we can identify it with R^N. Invoking Proposition 6.21 we see that it suffices to prove that U is complete with respect to any one norm equivalent to ‖ · ‖. Invoking Theorem 6.15 we see that it suffices to prove that R^N is complete with respect to the norm | − |∞.
Suppose that (u(n)) is a sequence in R^N which is Cauchy with respect to the norm | − |∞. As usual, we denote by u1(n), . . . , uN(n) the coordinates of u(n) ∈ R^N. From the inequalities
|uk(m) − uk(n)| ≤ |u(m) − u(n)|∞, ∀k = 1, . . . , N,
we deduce that for any k = 1, . . . , N the sequence of coordinates (uk(n)) is a Cauchy sequence of real numbers. Therefore it converges to some real number u∗k. Invoking Proposition 6.10 we deduce that the sequence (u(n)) converges to the vector u∗ ∈ R^N with coordinates u∗1, . . . , u∗N. □

6.4. Continuous maps. Let (U, ‖ · ‖U) and (V, ‖ · ‖V) be two normed vector spaces and S ⊂ U.
Definition 6.23. A map f : S → V is said to be continuous at a point s∗ ∈ S if for any sequence (s(n)) of vectors in S that converges to s∗ in the norm ‖ · ‖U, the sequence (f(s(n))) of vectors in V converges in the norm ‖ · ‖V to the vector f(s∗). The function f is said to be continuous (on S) if it is continuous at any point s∗ ∈ S. □

Remark 6.24. The notion of continuity depends on the choice of norms on U and V only through the notion of convergence in these norms. Hence, if we replace these norms with equivalent ones, the notion of continuity does not change. □

Example 6.25. (a) Let (U, ‖ · ‖) be a normed space. Proposition 6.11 implies that the function f : U → R, f(u) = ‖u‖, is continuous.
(b) Suppose that U, V are two normed spaces, S ⊂ U and f : S → V is a continuous map. Then the restriction of f to any subset S′ ⊂ S is also a continuous map f|S′ : S′ → V. □

Proposition 6.26. Let U, V and S be as above, f : S → V a map and s∗ ∈ S. Then the following statements are equivalent.
(i) The map f is continuous at s∗.
(ii) For any ε > 0 there exists δ = δ(ε) > 0 such that if s ∈ S and ‖s − s∗‖U < δ, then ‖f(s) − f(s∗)‖V < ε.

Proof. (i) ⇒ (ii) We argue by contradiction and we assume that there exists ε0 > 0 such that for any n > 0 there exists sn ∈ S such that
‖sn − s∗‖U < 1/n and ‖f(sn) − f(s∗)‖V ≥ ε0.
From the first inequality above we deduce that the sequence (sn) converges to s∗ in the norm ‖ · ‖U. The continuity of f implies that
lim_{n→∞} ‖f(sn) − f(s∗)‖V = 0.
The last equality contradicts the fact that ‖f(sn) − f(s∗)‖V ≥ ε0 for any n.
(ii) ⇒ (i) Let (s(n)) be a sequence in S that converges to s∗. We have to show that (f(s(n))) converges to f(s∗).
Let ε > 0. By (ii), there exists δ(ε) > 0 such that if s ∈ S and ‖s − s∗‖U < δ(ε), then ‖f(s) − f(s∗)‖V < ε. Since s(n) converges to s∗, we can find ν = ν(ε) such that if n ≥ ν(ε), then ‖s(n) − s∗‖U < δ(ε). In particular, for any n ≥ ν(ε) we have
‖f(s(n)) − f(s∗)‖V < ε.
This proves that (f(s(n))) converges to f(s∗). □

Definition 6.27. A map f : S → V is said to be Lipschitz if there exists L > 0 such that
‖f(s1) − f(s2)‖V ≤ L‖s1 − s2‖U, ∀s1, s2 ∈ S.
Observe that if f : S → V is a Lipschitz map, then
sup{ ‖f(s1) − f(s2)‖V / ‖s1 − s2‖U ; s1, s2 ∈ S, s1 ≠ s2 } < ∞.
The above supremum is called the Lipschitz constant of f. □
Proposition 6.28. A Lipschitz map f : S → V is continuous.
Proof. Denote by L the Lipschitz constant of f. Let s∗ ∈ S and (s(n)) a sequence in S that converges to s∗. Then
‖f(s(n)) − f(s∗)‖V ≤ L‖s(n) − s∗‖U → 0 as n → ∞. □

Proposition 6.29. Let (U, ‖ · ‖U) and (V, ‖ · ‖V) be two normed vector spaces and T : U → V a linear map. Then the following statements are equivalent.
(i) The map T : U → V is continuous on U.
(ii) There exists C > 0 such that
‖Tu‖V ≤ C‖u‖U, ∀u ∈ U. (6.8)
(iii) The map T : U → V is Lipschitz.

Proof. (i) ⇒ (ii) We argue by contradiction. We assume that for any positive integer n there exists a vector v(n) ∈ U such that
‖Tv(n)‖V ≥ n‖v(n)‖U.
Set
u(n) := (1/(n‖v(n)‖U)) v(n).
Then
‖u(n)‖U = 1/n, (6.9a)
‖Tu(n)‖V ≥ n‖u(n)‖U = 1. (6.9b)
The equality (6.9a) implies that u(n) → 0. Since T is continuous we deduce ‖Tu(n)‖V → 0. This contradicts (6.9b).
(ii) ⇒ (iii) For any u1, u2 ∈ U we have
‖Tu1 − Tu2‖V = ‖T(u1 − u2)‖V ≤ C‖u1 − u2‖U.
This proves that T is Lipschitz. The implication (iii) ⇒ (i) follows from Proposition 6.28. □

Definition 6.30. Let (U, ‖ · ‖U) and (V, ‖ · ‖V) be two normed vector spaces and T : U → V a linear map. The norm of T, denoted by ‖T‖ or ‖T‖U,V, is the infimum of all constants C > 0 such that (6.8) is satisfied. If there is no such C we set ‖T‖ := ∞. Equivalently,
‖T‖ = sup_{u∈U\0} ‖Tu‖V/‖u‖U = sup_{‖u‖U=1} ‖Tu‖V.
We denote by B(U, V) the space of linear operators such that ‖T‖ < ∞, and we will refer to such operators as bounded. When U = V and ‖ · ‖U = ‖ · ‖V we set
B(U) := B(U, U). □
Observe that we can rephrase Proposition 6.29 as follows.
Corollary 6.31. Let (U, ‖ · ‖U) and (V, ‖ · ‖V) be two normed vector spaces. A linear map T : U → V is continuous if and only if it is bounded. □
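When U = R^N and V = R^M both carry the Euclidean norm | − |2, the operator norm of a matrix T coincides with its largest singular value. A numpy sanity check comparing it with a crude estimate of the supremum over unit vectors:

import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 4))
# For Euclidean norms, ord=2 of a matrix is its largest singular value,
# which equals the operator norm.
op_norm = np.linalg.norm(T, ord=2)
# Crude lower-bound estimate: sup |Tu| over random unit vectors u.
us = rng.standard_normal((4, 10000))
us /= np.linalg.norm(us, axis=0)
estimate = np.max(np.linalg.norm(T @ us, axis=0))
print(op_norm, estimate)  # estimate <= op_norm, and close to it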

Let us observe that if T : U → V is a continuous operator, then
‖Tu‖V ≤ ‖T‖ ‖u‖U, ∀u ∈ U. (6.10)
We have the following useful result whose proof is left as an exercise.
Proposition 6.32. Let (U, ‖ · ‖) be a normed F-vector space and consider the space B(U) of bounded linear operators U → U. If S, T ∈ B(U) and λ ∈ F, then
‖λS‖ = |λ| ‖S‖, ‖S + T‖ ≤ ‖S‖ + ‖T‖, ‖S · T‖ ≤ ‖S‖ · ‖T‖.
In particular, the map B(U) ∋ T ↦ ‖T‖ ∈ [0, ∞) is a norm on the vector space of bounded linear operators U → U. For T ∈ B(U) the quantity ‖T‖ is called the operator norm of T. □

Corollary 6.33. If (Sn) and (Tn) are sequences in B(U) which converge in the operator norm to S ∈ B(U) and respectively T ∈ B(U), then the sequence (SnTn) converges in the operator norm to ST.
Proof. We have
‖SnTn − ST‖ = ‖(SnTn − STn) + (STn − ST)‖ ≤ ‖(Sn − S)Tn‖ + ‖S(Tn − T)‖
≤ ‖Sn − S‖ ‖Tn‖ + ‖S‖ ‖Tn − T‖.
Observe that
‖Sn − S‖ → 0, ‖Tn‖ → ‖T‖, ‖S‖ ‖Tn − T‖ → 0.
Hence
‖Sn − S‖ ‖Tn‖ + ‖S‖ ‖Tn − T‖ → 0,
and thus ‖SnTn − ST‖ → 0. □

Theorem 6.34. Let (U, ‖ · ‖U) and (V, ‖ · ‖V) be two finite dimensional normed vector spaces. Then any linear map T : U → V is continuous.
Proof. For simplicity we assume that the vector spaces are real vector spaces, N = dim U, M = dim V. By choosing bases in U and V we can identify them with the standard spaces U = R^N, V = R^M.
The notion of continuity is based only on the notion of convergence of sequences and, as we know, on finite dimensional spaces the notion of convergence of sequences is independent of the norm used. Thus we can assume that the norm on both U and V is | − |∞.

The operator T can be represented by an M × N matrix
A = (akj)1≤k≤M, 1≤j≤N.
Set
C := Σ_{j,k} |akj|.
In other words, C is the sum of the absolute values of all the entries of A. If u = (u1, . . . , uN)ᵀ ∈ R^N, then the vector v = Tu ∈ R^M has coordinates
vk = Σ_{j=1}^N akj uj, k = 1, . . . , M.
We deduce that for any k = 1, . . . , M we have
|vk| ≤ Σ_{j=1}^N |akj| |uj| ≤ max_{1≤j≤N} |uj| Σ_{j=1}^N |akj| = |u|∞ Σ_{j=1}^N |akj| ≤ C|u|∞.
Hence, for any u ∈ R^N we have
|Tu|∞ = |v|∞ = max_{1≤k≤M} |vk| ≤ C|u|∞.
This proves that T is continuous. □

6.5. Series in normed spaces. Let (U, ‖ · ‖) be a normed space.

Definition 6.35. A series in U is a formal infinite sum of the form
Σ_{n≥0} u(n) = u(0) + u(1) + u(2) + · · · ,
where u(n) is a vector in U for any n ≥ 0. The n-th partial sum of the series is the finite sum
Sn = u(0) + u(1) + · · · + u(n).
The series is called convergent if the sequence (Sn) is convergent. The limit of this sequence is called the sum of the series. The series is called absolutely convergent if the series of nonnegative real numbers
Σ_{n≥0} ‖u(n)‖
is convergent. □

Proposition 6.36. Let (U, ‖ · ‖) be a Banach space, i.e., a complete normed space. If the series in U
Σ_{n≥0} u(n) (6.11)
is absolutely convergent, then it is also convergent. Moreover,
‖ Σ_{n≥0} u(n) ‖ ≤ Σ_{n≥0} ‖u(n)‖. (6.12)

Proof. Denote by Sn the n-th partial sum of the series (6.11), and by Sn⁺ the n-th partial sum of the series
Σ_{n≥0} ‖u(n)‖.
Since U is complete, it suffices to show that the sequence (Sn) is Cauchy. For m < n we have
‖Sn − Sm‖ = ‖u(m + 1) + · · · + u(n)‖ ≤ ‖u(m + 1)‖ + · · · + ‖u(n)‖ = Sn⁺ − Sm⁺.
Since the sequence (Sn⁺) is convergent, we deduce that it is also Cauchy, so that
lim_{m,n→∞} (Sn⁺ − Sm⁺) = 0.
This forces
lim_{m,n→∞} ‖Sn − Sm‖ = 0.
The inequality (6.12) is obtained by letting n → ∞ in the inequality
‖u(0) + · · · + u(n)‖ ≤ ‖u(0)‖ + · · · + ‖u(n)‖. □

Corollary 6.37 (Comparison trick). Let
Σ_{n≥0} u(n)
be a series in the Banach space (U, ‖ · ‖). If there exists a convergent series of positive real numbers
Σ_{n≥0} cn
such that
‖u(n)‖ ≤ cn, ∀n ≥ 0, (6.13)
then the series Σ_{n≥0} u(n) is absolutely convergent and thus convergent.
Proof. The inequality (6.13) implies that the series of nonnegative real numbers Σ_{n≥0} ‖u(n)‖ is convergent. □

6.6. The exponential of a matrix. Suppose that A is an N × N complex matrix. We regard it as a linear operator A : C^N → C^N. As such, it is a continuous map, in any norm we choose on C^N. For simplicity, we will work with the Euclidean norm
|u| = |u|2 = √(|u1|² + · · · + |uN|²).
The operator norm of A is then the quantity
‖A‖ = sup_{u∈C^N\0} |Au|/|u|.
We denote by B_N the vector space of continuous linear operators C^N → C^N equipped with the operator norm defined above. The space B_N is a finite dimensional complex vector space (dim_C B_N = N²) and thus it is complete. Consider the series in B_N
Σ_{n≥0} (1/n!) A^n = 1 + (1/1!)A + (1/2!)A² + · · · .
Observe that
‖A^n‖ ≤ ‖A‖^n,
so that
(1/n!) ‖A^n‖ ≤ (1/n!) ‖A‖^n.
The series of nonnegative numbers
Σ_{n≥0} (1/n!) ‖A‖^n = 1 + (1/1!)‖A‖ + (1/2!)‖A‖² + · · ·
is convergent, with sum e^‖A‖. Thus the series Σ_{n≥0} (1/n!) A^n is absolutely convergent and hence convergent.
Definition 6.38. For any complex N × N matrix A we denote by e^A the sum of the convergent (and absolutely convergent) series
1 + (1/1!)A + (1/2!)A² + · · · .
The N × N matrix e^A is called the exponential of A. □

Note that
‖e^A‖ ≤ e^‖A‖.
Example 6.39. Suppose that A is a diagonal matrix,
A := Diag(λ1, . . . , λN).
Then
A^n = Diag(λ1^n, . . . , λN^n),
and thus
e^A = Diag( Σ_{n≥0} (1/n!) λ1^n, . . . , Σ_{n≥0} (1/n!) λN^n ) = Diag(e^{λ1}, . . . , e^{λN}). □
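The definition suggests a direct computation: sum the series until the terms become negligible. The sketch below compares the partial sums with scipy.linalg.expm (scipy is assumed available; 30 terms is an ad-hoc truncation that works for matrices of small norm):

import numpy as np
from scipy.linalg import expm

def expm_series(A, n_terms=30):
    # Partial sum 1 + A/1! + A^2/2! + ... of the exponential series.
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, n_terms):
        term = term @ A / n          # term is now A^n / n!
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.allclose(expm_series(A), expm(A)))  # True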

Before we explain the uses of the exponential of a matrix, we describe a simple strategy for computing it.
Proposition 6.40. Let A be a complex N × N matrix and S an invertible N × N matrix. Then
S⁻¹ e^A S = e^{S⁻¹AS}. (6.14)

Proof. Set A_S := S⁻¹AS. We have to prove that
S⁻¹ e^A S = e^{A_S}.
Set
Sn := 1 + (1/1!)A + · · · + (1/n!)A^n, Sn′ := 1 + (1/1!)A_S + · · · + (1/n!)A_S^n.
Observe that for any positive integer k
A_S^k = (S⁻¹AS)(S⁻¹AS) · · · (S⁻¹AS) = S⁻¹ A^k S.
Moreover, for any two N × N matrices B, C we have
S⁻¹(B + C)S = S⁻¹BS + S⁻¹CS.
We deduce that
S⁻¹ Sn S = Sn′, ∀n ≥ 0.
Since Sn → e^A, we deduce from Corollary 6.33 that
Sn′ = S⁻¹ Sn S → S⁻¹ e^A S.
On the other hand, Sn′ → e^{A_S} and this implies (6.14). □

We can rewrite (6.14) in the form
e^A = S e^{S⁻¹AS} S⁻¹. (6.15)
If we can choose S cleverly so that S⁻¹AS is not too complicated, then we have a good shot at computing e^{S⁻¹AS}, which then leads via (6.15) to a description of e^A.
Here is one such instance. Suppose that A is normal, i.e., A*A = AA*. The spectral theorem for normal operators then implies that there exists an invertible matrix S such that
S⁻¹AS = Diag(λ1, . . . , λN),
where λ1, . . . , λN ∈ C are the eigenvalues of A. Moreover, the matrix S has an explicit description: its j-th column is an eigenvector of Euclidean norm 1 of A corresponding to the eigenvalue λj. The computations in Example 6.39 then lead to the formula
e^A = S Diag(e^{λ1}, . . . , e^{λN}) S⁻¹. (6.16)
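For a real symmetric (hence normal) matrix this recipe is a one-liner with numpy's eigendecomposition; here the columns of S are orthonormal, so S⁻¹ = S*. A sketch of (6.16):

import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0], [1.0, 2.0]])      # symmetric, hence normal
lam, S = np.linalg.eigh(A)                  # columns of S: orthonormal eigenvectors
eA = S @ np.diag(np.exp(lam)) @ S.conj().T  # S^{-1} = S* since S is unitary
print(np.allclose(eA, expm(A)))             # True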

6.7. The exponential of a matrix and systems of linear differential equations. The usual exponential function e^x satisfies the very important property
e^{x+y} = e^x e^y, ∀x, y.
If A, B are two N × N matrices, then the equality
e^{A+B} = e^A e^B
no longer holds in general, because the multiplication of matrices is not commutative. Something weaker does hold.
Theorem 6.41. Suppose that A, B are complex N × N matrices that commute, i.e., AB = BA. Then
e^{A+B} = e^A e^B.

Proof. The proof relies on the following generalization of Newton's binomial formula, whose proof is left as an exercise.
Lemma 6.42 (Newton's binomial formula: the matrix case). If A, B are two commuting N × N matrices, then for any positive integer n we have
(A + B)^n = Σ_{k=0}^n C(n, k) A^{n−k} B^k = C(n, 0) A^n + C(n, 1) A^{n−1} B + C(n, 2) A^{n−2} B² + · · · ,
where C(n, k) denotes the binomial coefficient
C(n, k) := n!/(k!(n − k)!), 0 ≤ k ≤ n. □
We set
Sn(A) = 1 + (1/1!)A + · · · + (1/n!)A^n, Sn(B) = 1 + (1/1!)B + · · · + (1/n!)B^n,
Sn(A + B) = 1 + (1/1!)(A + B) + · · · + (1/n!)(A + B)^n.
We have
Sn(A) → e^A, Sn(B) → e^B, Sn(A + B) → e^{A+B} as n → ∞,
and we will prove that
lim_{n→∞} ‖Sn(A)Sn(B) − Sn(A + B)‖ = 0,
which then implies immediately the desired conclusion. Observe that
Sn(A)Sn(B) = ( 1 + (1/1!)A + · · · + (1/n!)A^n )( 1 + (1/1!)B + · · · + (1/n!)B^n )
= Σ_{k=0}^n Σ_{i=0}^k (1/(i!(k − i)!)) A^i B^{k−i} + Σ_{k=n+1}^{2n} Σ_i (1/(i!(k − i)!)) A^i B^{k−i}
= Σ_{k=0}^n (1/k!) Σ_{i=0}^k (k!/(i!(k − i)!)) A^i B^{k−i} + Σ_{k=n+1}^{2n} (1/k!) Σ_i (k!/(i!(k − i)!)) A^i B^{k−i}
= Σ_{k=0}^n (1/k!) (A + B)^k + Σ_{k=n+1}^{2n} (1/k!) Σ_i (k!/(i!(k − i)!)) A^i B^{k−i},
where in the sums over k > n the index i runs over k − n ≤ i ≤ n. Hence
Sn(A)Sn(B) − Sn(A + B) = Σ_{k=n+1}^{2n} (1/k!) Σ_i (k!/(i!(k − i)!)) A^i B^{k−i} =: Rn,
and we deduce
‖Rn‖ ≤ Σ_{k=n+1}^{2n} (1/k!) Σ_i (k!/(i!(k − i)!)) ‖A^i B^{k−i}‖ ≤ Σ_{k=n+1}^{2n} (1/k!) Σ_i (k!/(i!(k − i)!)) ‖A‖^i ‖B‖^{k−i}
≤ Σ_{k=n+1}^{2n} (1/k!) Σ_{i=0}^k (k!/(i!(k − i)!)) ‖A‖^i ‖B‖^{k−i} = Σ_{k=n+1}^{2n} (1/k!) (‖A‖ + ‖B‖)^k.
Set C := ‖A‖ + ‖B‖, so that
‖Rn‖ ≤ Σ_{k=n+1}^{2n} (1/k!) C^k. (6.17)
Fix a positive integer n0 such that
n0 > 2C.
For k > n0 we have
k! = 1 · 2 · · · n0 · (n0 + 1) · · · k ≥ n0^{k−n0},
so that
1/k! ≤ 1/n0^{k−n0}, ∀k > n0.
We deduce that if n > n0, then
Σ_{k=n+1}^{2n} (1/k!) C^k ≤ C^{n0} Σ_{k=n+1}^{2n} (C/n0)^{k−n0} ≤ C^{n0} Σ_{k=n+1}^{2n} (1/2)^{k−n0}
= (2C)^{n0} Σ_{k=n+1}^{2n} 1/2^k = (2C)^{n0} ( 1/2^{n+1} + · · · + 1/2^{2n} )
= ((2C)^{n0}/2^{n+1}) ( 1 + 1/2 + · · · + 1/2^{n−1} ) ≤ 2 · (2C)^{n0}/2^{n+1} = (2C)^{n0}/2^n.
Using (6.17) we deduce
‖Rn‖ ≤ (2C)^{n0}/2^n → 0 as n → ∞. □
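A quick numerical illustration: any polynomial in A commutes with A, so the identity holds for such a pair, while a non-commuting pair provides a counterexample (scipy assumed available):

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = 2 * A + A @ A                                    # polynomial in A: AB = BA
print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # True

C = np.array([[0.0, 1.0], [0.0, 0.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0]])               # CD != DC
print(np.allclose(expm(C + D), expm(C) @ expm(D)))   # False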

6.8. Closed and open subsets.


Definition 6.43. (a) Let U be a vector space. A subset C ⊂ U is said to be closed in the norm ‖ · ‖ on U if for any sequence (u(n)) of vectors in C which converges in the norm ‖ · ‖ to a vector u∗ ∈ U we have u∗ ∈ C.
(b) A subset O ⊂ U is said to be open in the norm ‖ · ‖ if the complement U \ O is closed in the norm ‖ · ‖. □

Remark 6.44. The notion of closed set with respect to a norm is based only on the notion of convergence with respect to that norm. Thus, if a set is closed in a given norm, it is closed in any equivalent norm. In particular, the notion of closed set in a finite dimensional normed space is independent of the norm used. A similar observation applies to open sets. □

Proposition 6.45. Suppose that (U, ‖ · ‖) is a normed vector space and O ⊂ U. Then the following statements are equivalent.
(i) The set O is open in the norm ‖ · ‖.
(ii) For any u ∈ O there exists ε > 0 such that any v ∈ U satisfying ‖v − u‖ < ε belongs to O.

Proof. (i) ⇒ (ii) We argue by contradiction. We assume that there exists u∗ ∈ O so that for any n > 0 there exists vn ∈ U satisfying
‖vn − u∗‖ < 1/n but vn ∉ O.
Set C := U \ O, so that C is closed and vn ∈ C for any n > 0. The inequality ‖vn − u∗‖ < 1/n implies vn → u∗. Since C is closed we deduce that u∗ ∈ C. This contradicts the initial assumption that u∗ ∈ O = U \ C.
(ii) ⇒ (i) We have to prove that C = U \ O is closed, given that O satisfies (ii). Again we argue by contradiction. Suppose that there exists a sequence (u(n)) in C which converges to a vector u∗ ∈ U \ C = O. Then there exists ε > 0 such that any v ∈ U satisfying ‖v − u∗‖ < ε does not belong to C. This contradicts the fact that u(n) → u∗, because at least one of the terms u(n) satisfies ‖u(n) − u∗‖ < ε, and yet it belongs to C. □

Definition 6.46. Let (U, ‖ · ‖) be a normed space. The open ball of center u and radius r is the set
B(u, r) := { v ∈ U; ‖v − u‖ < r }. □
We can rephrase Proposition 6.45 as follows.
Corollary 6.47. Suppose that (U, ‖ · ‖) is a normed vector space. A subset O ⊂ U is open in the norm ‖ · ‖ if and only if for any u ∈ O there exists ε > 0 such that B(u, ε) ⊂ O. □

The proof of the following result is left as an exercise.

Theorem 6.48. Suppose that (U, ‖ · ‖U), (V, ‖ · ‖V) are normed spaces and f : U → V is a map. Then the following statements are equivalent.
(i) The map f is continuous.
(ii) For any subset O ⊂ V that is open in the norm ‖ · ‖V, the preimage f⁻¹(O) ⊂ U is open in the norm ‖ · ‖U.
(iii) For any subset C ⊂ V that is closed in the norm ‖ · ‖V, the preimage f⁻¹(C) ⊂ U is closed in the norm ‖ · ‖U. □

6.9. Compactness. We want to discuss a central concept in modern mathematics which is often used in proving existence results.
Definition 6.49. Let (U, ‖ · ‖) be a normed space. A subset K ⊂ U is called compact if any sequence of vectors in K has a subsequence that converges to some vector in K. □

Remark 6.50. Since the notion of compactness is expressed only in terms of convergence of sequences, we deduce that if a set is compact in some norm, it is compact in any equivalent norm. In particular, on finite dimensional vector spaces the notion of compactness is independent of the norm. □
Example 6.51 (Fundamental Example). For any real numbers a < b the closed interval [a, b] is a compact subset of R. □

The next existence result illustrates our claim about the usefulness of compactness in establishing existence results.
Theorem 6.52 (Weierstrass). Let (U, ‖ · ‖) be a normed space and f : K → R a continuous function defined on the compact subset K ⊂ U. Then there exist u_∗, u^∗ ∈ K such that
f(u_∗) ≤ f(u) ≤ f(u^∗), ∀u ∈ K,
i.e.,
f(u_∗) = inf_{u∈K} f(u), f(u^∗) = sup_{u∈K} f(u).
In particular, f is bounded both from above and from below.
Proof. Let
m := inf_{u∈K} f(u) ∈ [−∞, ∞).
There exists a sequence (u(n)) in K such that⁴
lim_{n→∞} f(u(n)) = m.
Since K is compact, there exists a subsequence (u(ν)) of (u(n)) that converges to some u_∗ ∈ K. Observing that
lim_{ν→∞} f(u(ν)) = lim_{n→∞} f(u(n)) = m,
we deduce from the continuity of f that
f(u_∗) = lim_{ν→∞} f(u(ν)) = m.
The existence of a vector u^∗ such that f(u^∗) = sup_{u∈K} f(u) is proved in a similar fashion. □

⁴Such a sequence is called a minimizing sequence of f.

Definition 6.53. Let (U, ‖ · ‖) be a normed space. A set S ⊂ U is called bounded in the norm ‖ · ‖ if there exists M > 0 such that
‖s‖ ≤ M, ∀s ∈ S.
In other words, S is bounded if and only if
sup_{s∈S} ‖s‖ < ∞. □
Let us observe again that if a set S is bounded in a norm ‖ · ‖, it is also bounded in any other equivalent norm.
Proposition 6.54. Let (U, ‖ · ‖) be a normed space and K ⊂ U a compact subset. Then K is both bounded and closed.
Proof. We first prove that K is bounded. Indeed, Example 6.25 shows that the function f : K → R, f(u) = ‖u‖, is continuous, and Theorem 6.52 implies that sup_{u∈K} f(u) < ∞. This proves that K is bounded.
To prove that K is also closed, consider a sequence (u(n)) in K which converges to some u∗ in U. We have to show that in fact u∗ ∈ K. Indeed, since K is compact, the sequence (u(n)) contains a subsequence which converges to some point in K. On the other hand, since (u(n)) is convergent, the limit of this subsequence must coincide with the limit of (u(n)), which is u∗. Hence u∗ ∈ K. □

Theorem 6.55. Let (U, ‖ · ‖) be a finite dimensional normed space, and K ⊂ U. Then the following statements are equivalent.
(i) The set K is compact.
(ii) The set K is bounded and closed.

Proof. The implication (i) ⇒ (ii) is true for any normed space, so we only have to concentrate on the opposite implication. For simplicity we assume that U is a real vector space of dimension N. By fixing a basis of U we can identify it with R^N. Since the notions of compactness, boundedness and closedness are independent of the norm we use on R^N, we can assume that ‖ · ‖ is the norm | − |∞.
Since K is bounded, we deduce that there exists M > 0 such that
|u|∞ ≤ M, ∀u ∈ K.
In particular we deduce that
uk ∈ [−M, M], ∀k = 1, . . . , N, ∀u ∈ K,
where as usual we denote by u1, . . . , uN the coordinates of a vector u ∈ R^N.
Suppose now that (u(n)) is a sequence in K. We have to show that it contains a subsequence that converges to a vector in K. We deduce from the above that
uk(n) ∈ [−M, M], ∀k = 1, . . . , N, ∀n.
Since the interval [−M, M] is a compact subset of R, we can find a subsequence (u(ν)) of (u(n)) such that each of the sequences (uk(ν)), k = 1, . . . , N, converges to some u∗k ∈ [−M, M]. Denote by u∗ the vector in R^N with coordinates u∗1, . . . , u∗N. Invoking Proposition 6.10 we deduce that the subsequence (u(ν)) converges in the norm | − |∞ to u∗. Since K is closed and the sequence (u(ν)) is in K, we deduce that its limit u∗ is also in K. □

6.10. Exercises.
Exercise 6.1. (a) Consider the space R^N with canonical basis e1, . . . , eN. Prove that the functions | − |1, | − |∞ : R^N → [0, ∞) defined by
|u1e1 + · · · + uN eN|∞ := max_{1≤k≤N} |uk|, |u|1 = Σ_{k=1}^N |uk|,
are norms on R^N.
(b) Let U denote the space C([0, 1]) of continuous functions u : [0, 1] → R. Prove that the function
‖ − ‖ : C([0, 1]) → [0, ∞), ‖u‖ := sup_{x∈[0,1]} |u(x)|,
is a norm on U. □

Exercise 6.2. Prove Proposition 6.12. □

Exercise 6.3. (a) Prove that the space U of continuous functions u : [0, 1] → R equipped with the norm
‖u‖ = max_{x∈[0,1]} |u(x)|
is a complete normed space.
(b) Prove that the above vector space U is not finite dimensional.
For part (a) you need to use the concept of uniform convergence of functions. For part (b) consider the linear functionals Ln : U → R, Ln(u) = u(1/n), ∀u ∈ U, n > 0, and then show they are linearly independent. □

Exercise 6.4. Prove Proposition 6.32. □

Exercise 6.5. Prove Theorem 6.48. □

Exercise 6.6. Let (U, ‖ · ‖) be a normed space.
(a) Show that if C1, . . . , Cn are closed subsets of U, then so is their union C1 ∪ · · · ∪ Cn.
(b) Show that if (Ci)i∈I is an arbitrary family of closed subsets of U, then their intersection ∩_{i∈I} Ci is also closed.
(c) Show that if O1, . . . , On are open subsets of U, then so is their intersection O1 ∩ · · · ∩ On.
(d) Show that if (Oi)i∈I is an arbitrary family of open subsets of U, then their union ∪_{i∈I} Oi is also open. □

Exercise 6.7. Suppose that (U, ‖ · ‖) is a normed space. Prove that for any u ∈ U and any r > 0 the ball B(u, r) is an open subset. □

Exercise 6.8. (a) Show that the subset [0, ∞) ⊂ R is closed.
(b) Use (a), Theorem 6.48 and Exercise 6.6 to show that the set
P_N := { λ ∈ R^N; λ1 + · · · + λN = 1, λk ≥ 0, ∀k = 1, . . . , N }
is closed. Draw a picture of P_N for N = 1, 2, 3.
Hint: Use Theorem 6.34 to prove that the maps f : R^N → R and ℓk : R^N → R given by
f(u) = u1 + · · · + uN, ℓk(u) = uk,
are continuous. □

Exercise 6.9. Consider the 2 × 2 matrix
J = [ 0 −1
      1  0 ].
(a) Compute J², J³, J⁴, J⁵.
(b) Let t ∈ R. Compute e^{tJ}. Describe the behavior of the point
u(t) = e^{tJ} (1, 0)ᵀ ∈ R²
as t varies in the interval [0, 2π]. □

Exercise 6.10. Prove Newton's binomial formula, Lemma 6.42, using induction on n. □


DEPARTMENT OF MATHEMATICS, UNIVERSITY OF NOTRE DAME, NOTRE DAME, IN 46556-4618.

E-mail address: nicolaescu.1@nd.edu
URL: http://www.nd.edu/lnicolae/
