Documenti di Didattica
Documenti di Professioni
Documenti di Cultura
L
aszl
o Szil
ard Csaba
Cluj-Napoca
2014
Viorel Adrian
Contents
Introduction
iii
1 Matrices
28
53
CONTENTS
ii
72
95
126
145
CONTENTS
Bibliography
iii
155
Introduction
The aim of this book is to give an introduction in linear algebra, and at the same
time to provide some applications that might be useful both in practice and theory.
Hopefully this book will be a real help for graduate level students to understand
the basics of this beautiful mathematical eld called linear algebra. Our scope is
twofold: one hand we give a theoretical introduction to this eld, which is more
than exhaustive for the need and understanding capability of graduate students, on
the other hand we present fully solved examples and problems, that might be helpful in preparing to exams and also show the techniques used in the art of problem
solving on this eld. At the end of every chapter, this work contains several proposed problems that can be solved using the theory and solved examples presented
previously. The book is structured on seven chapters, we begin with the study of
matrices and determinants and we end with the study of quadratic forms with some
useful applications in analytical geometry.
Our hope is that our students, and not just them, will enjoy studying this book
and they will nd very helpful in preparing for exams.
iv
1
Matrices
1.1
a12
a
11
a21 a22
A
..
..
.
.
am1 am2
. . . a1n
. . . a2n
.. .
.
...
. . . amn
Hence, the elements of a matrix A are denoted by aij , where aij stands for the
number that appears in the ith row and the j th column of A (this is called the pi, jq
entry of A) and the matrix is represented as A paij qi1,m .
j1,n
We will denote the set of all m n matrices with entries in F by Mm,n pFq
respectively, when m n by Mn pFq. It is worth mentioning that the elements of
1
Mn pFq are called square matrices. In what follows, we provide some examples.
Example 1.2. Consider the matrices
1 2 3
i 2`i
0
,
A 4 5 6 , respectively B
?
2 1 ` 3i
3
7 8 9
where i is the imaginary unit. Then A P M3 pRq, or in other words, A is a real valued
square matrix, meanwhile B P M2,3 pCq, or in other words, B is a complex valued
matrix with two rows and three columns.
In what follows we present some special matrices.
Example 1.3. Consider the matrix In paij qi,j1,n P Mn pFq, aij 1, if i
j and aij 0 otherwise. Here 1 P F, respectively 0 P F are the multiplicative
identity respectively the zero element of the eld F.
Then
0
In
..
.
0 ... 0
1 . . . 0
..
..
. . . . .
0 ... 1
0
O
..
.
j1,n
0 ... 0
0 . . . 0
..
..
. . . . .
0 ... 0
a
a
...
11 12
0 a22 . . .
A
..
..
.
. ...
0
0 ...
a
a1n
0
11
a
a2n
a
, respectively A 21 22
..
..
.
..
.
.
an1 an2
ann
...
... 0
..
.
...
. . . ann
a
0
11
0 a22
A
..
..
.
.
0
0
...
...
...
0
0
..
.
. . . ann
Addition of Matrices.
If A and B are m n matrices, the sum of A and B is dened to be the m n
matrix A ` B obtained by adding corresponding entries. Hence, the addition
operation is a function
` : Mm,n pFq Mm,n pFq Mm,n pFq,
paij qi1,m ` pbij qi1,m paij ` bij qi1,m , @ paij qi1,m , pbij qi1,m P Mm,n pFq.
j1,n
j1,n
j1,n
j1,n
where cij aij ` bij for all i P t1, 2, . . . , mu, j P t1, 2, . . . , nu.
j1,n
X pxij qi1,m P Mm,n pFq. For every A, B, C P Mm,n pFq the following properties
hold:
j1,n
j1,n
j1,n
j1,n
3. pA ` Bq A ` B (distributive property).
4. p ` qA A ` A (distributive property).
5. 1A A, where 1 is the multiplicative identity of F (identity property).
Of course that we listed here only the left multiplication of matrices by scalars. By
dening A A we obtain the right multiplication of matrices by scalars.
1 0 2
1 1 1
Example 1.7. If A 0
2 1 and B 1 1 1 , then
0 1 2
2 2
0
3 2 0
1 2 4
2A B 1 5 3 and 2A ` B 1
3 1 .
4 5 2
4 3
2
Transpose.
The transpose of a matrix A P Mm,n pFq is dened to be a matrix AJ P Mn,m pFq
obtaining by interchanging rows and columns of A. Locally, if A paij qi1,m , then
j1,n
AJ paji q j1,n .
i1,m
It is clear that pAJ qJ A. A matrix, that has many columns, but only one row, is
called a row matrix. Thus, row matrix A with n columns is an 1 n matrix, i.e.
A pa1 a2 a3 . . . an q.
A matrix, that has many rows, but only one column, is called a column matrix.
Thus, a column matrix A with m rows is an m 1 matrix, i.e.
a1
a2
A .
.
..
am
Obviously, the transpose of a row matrix is a column matrix and viceversa, hence,
in inline text a column matrix A is represented as
A pa1 a2 . . . am qJ .
Conjugate Transpose.
Let A P Mm,n pCq. Dene the conjugate transpose of A paij qi1,m P Mm,n pCq by
j1,n
1 2 4
4
3 2
0
1 3
3 3 0
1
1`i
i
The matrix C 1 i
3
3 2i is a hermitian matrix, meanwhile the
i 3 ` 2i
2
i
2i
3i
matrix D 2 i
i
2 ` 3i is a skew-hermitian matrix.
3i 2 ` 3i
0
Matrix multiplication.
For a matrix X pxij qi1,m P Mm,n pFq we denote by Xi its ith row, i.e. the row
matrix
j1,n
respectively
pX J qj pXj qJ .
We say that the matrices A and B are conformable for multiplication in the order
AB, whenever A has exactly as many columns as B has rows, that is A P Mm,p pFq
and B P Mp,n pFq.
For conformable matrices A paij qi1,m and B pbjk q j1,p the matrix product AB
j1,p
k1,n
k1,n
cik Ai Bk
aij bjk .
j1
In the case that A and B failed to be conformable, the product AB is not dened.
Remark 1.9. Note, the product is not commutative, that is, in general,
AB BA even if both products exists and have the same shape.
1 1
1 0 1
and B 0
Example 1.10. Let A
1 .
1 1 0
1 1
2 1 1
2 0
and BA
Then AB
1 1
0 .
1 2
0
1
1
Rows and columns of a product.
Suppose that A paij qi1,m P Mm,p pFq and B pbij q i1,p P Mp,n pFq.
j1,p
j1,n
There are various ways to express the individual rows and columns of a matrix
product. For example the ith row of AB is
Ci rABsi rAi B1 Ai B2 . . . Ai Bn s Ai B
B
1
B2
b2j
A1 A2 . . . Ap
..
.
bpj
Consequently, we have:
1. rABsi Ai B
pith
row of AB).
2. rABsj ABj
pj th
column of AB).
aik Bk .
Ak bkj .
k1
k1
The last two equations has both theoretical and practical importance. They shows
that the rows of AB are combinations of rows of B, while columns of AB are
combinations of columns of A. So it is waisted time to compute the entire product
when only one row or column is needed.
10
k1
bki ajk
Aj Bi
k1
k1
ajk bki
11
Exercise. Prove, that for every matrix A paij qi1,m P Mm,n pFq the matrices
j1,n
1 0
0 1
0 1
, A3
then A2
Example 1.12. If A
0 1
1 0
1 0
1 0
I2 . Hence Am Amp mod q4 .
and A4
0 1
Trace of a product. Let A be a square matrix of order n. The trace of A is the
sum of the elements of the main diagonal, that is
trace A
aii .
i1
Proposition 1.13. For A P Mm,n pCq and B P Mn,m pCq one has
trace AB trace BA.
Proof. We have
trace AB
rABsii
i1
m
n
i1 k1
bki aik
pAqi pBqi
i1
n
m
k1 i1
bki aik
m
n
aik bki
i1 k1
n
k1
A
A12
11
A21 A22
A
..
..
.
.
As1 As2
B
A1r
B12
11
B
A2r
B22
and B 21
..
..
..
.
.
.
Br1 Br2
Asr
12
...
...
...
...
...
B1t
. . . B2r
..
.
...
. . . Brt
We say that the partitioned matrices are conformable partitioned if the pairs
pAik , Bkj q are conformable matrices, for every indices i, j, k. In this case the
product AB is formed by combining blocks exactly the same way as the scalars are
combined in ordinary matrix multiplication. That is, the pi, jq block in the
product AB is
Ai1 B1j ` Ai2 B2j ` . . . Air Brj .
Matrix Inversion.
For a square matrix A P Mn pFq, the matrix B P Mn pFq that satises
AB In and BA In
(if exists) is called the inverse of A and is denoted by B A1 . Not all square
matrices admits an inverse (are invertible). An invertible square matrix is called
nonsingular and a square matrix with no inverse is called singular matrix.
Although not all matrices are invertible, when an inverse exists, it is unique.
Indeed, suppose that X1 and X2 are both inverses for a nonsingular matrix A.
Then
X1 X1 In X1 pAX2 q pX1 AqX2 In X2 X2
which implies that only one inverse is possible.
Properties of Matrix Inversion. For nonsingular matrices A, B P Mn pFq, the
following statements hold.
13
1. pA1 q1 A
2. The product AB is nonsingular.
3. pABq1 B 1 A1 .
4. pA1 qJ pAJ q1 and pA1 q pA q1 .
One can easily prove the following statements.
Products of nonsingular matrices are nonsingular.
If A P Mn pFq is nonsingular, then there is a unique solution X P Mn,p pFq for the
equation
AX B, where B P Mn,p pFq,
and the solution is X A1 B.
A system of n linear equations in n unknowns can be written in the form Ax b,
with x, b P Mn,1 pFq, so it follows when A is nonsingular, that the system has a
unique solution x A1 b.
1.2
Determinants.
For every square matrix A paij q i1,n P Mn pFq one can assign a scalar denoted
j1,n
detpAq .
..
..
.. .
.
.
.
.
.
14
PSn
It can easily be computed, that for A paij q i1,2 P M2 pFq, one has
j1,2
j1,3
detpAq
a11 a22 a33 ` a13 a21 a32 ` a12 a23 a31 a13 a22 a31 a11 a23 a32 a12 a21 a33 .
1 2 3
7 8 9
detpAq 1 5 9 ` 3 4 8 ` 2 6 7 3 5 7 1 6 8 2 4 9 0.
15
Laplaces theorem.
Let A P Mn pFq and let k be an integer, 1 k n. Consider the rows i1 . . . ik and
the columns j1 . . . jk of A. By deleting the other rows and columns we obtain a
submatrix of A of order k, whose determinant is called a minor of A and is
...jk
denoted by Mij11...i
. Now let us delete the rows i1 . . . ik and the columns j1 . . . jk of
k
pAji q i1,n
, that is
j1,n
adjpAq
A11
A12
A1n
A21
A22
A2n
..
.
..
.
An1 An2
..
.
n
An
1
adjpAq.
detpAq
16
Theorem 1.18.
detpAq
...jk j1 ...jk
Ai1 ...ik , where
Mij11...i
k
(ii) detpAq
k1
k1
1
.
detpAq
17
18
1 2 0
0 0 3
elementary row operations.
1 0 0
1 2 0
p 1 A3 `A2 q
We write 0 2 1 0 1 0
0 0 1
0 0 3
19
2 0
1 0 0
pA `A q
2
1
2 0 0 1 3
0 3
0 0 1
1 1 13
0 0
p 1 A2 , 1 A3 q
2
3
1
2 0
0 1 3
0 3
0 0
1
1
1 1 3
1
1
1
Hence A 0 2 6 .
1
0 0
3
Recall that a matrix is in row echelon form if
1 0 0
0 1 0
0 0 1
1 1
1
3
1
2
16
1
3
(1) All nonzero rows are above any rows of all zeroes.
(2) The rst nonzero element (leading coecient) of a nonzero row is always
strictly to the right of the rst nonzero element of the row above it.
If supplementary the condition
p3q Every leading coecient is 1 and is the only nonzero entry in its column, is
also satised, we say that the matrix is in reduced row echelon form.
An arbitrary matrix can be put in reduced row echelon form by applying a nite
sequence of elementary row operations. This procedure is called the Gauss-Jordan
elimination procedure.
Existence of an inverse. For a square matrix A P Mn pFq the following
statements are equivalent.
1. A1 exists (A is nonsingular).
2. rank pAq n.
3. A is transformed by Gauss Jordan in In .
20
4. Ax 0 implies that x 0.
Systems of linear equations.
Recall that system of m linear equations in n unknowns can be written as
$
& a x ` a x ` a x b
21 1
2n n
22 2
% a x ` a x ` a x b .
m1 1
m2 2
mn n
m
Here x1 , x2 , . . . , xn are the unknowns, a11 , a12 , . . . , amn are the coecients of the
system, and b1 , b2 , . . . , bm are the constant terms. Observe that a systems of linear
equations may be written as Ax b, with A paij qi1,m P Mm,n pFq, x P Mn,1 pFq
j1,n
and b P Mm,1 pFq. The matrix A is called the coecient matrix, while the matrix
rA|bs P Mm,n`1 pFq,
$
& a if j n ` 1
ij
rA|bsij
% b if j n ` 1
i
21
&
x1 x2 ` 2x4 2
2x1 ` x2 x3 4
x1 x2 2x3 ` x4 1
% x ` x ` x 1.
2
3
4
We
1 1
2 1 1 0
4 p2A1 `A2 ,A1 `A3 q
have rA|bs
1 1 2 1
1
0 1
1 1
1
2
1 1 0
2
8 pA2 A4 q
0 3 1 4
3
0 0 2 1
1
0 1
1
1
2
1 1 0
2
3
0 0 2 1
8
0 3 1 4
3
1 0 1
3
0 1 1
1 p 12 A3 `A1 , 21 A3 `A.21 ,2A3 `A4 q
1
0 0 2 1
3
11
0 0 4 7
5
3
2
1 0 0
2
1
1
p 2 A4 `A1 , 101 A4 `A2 , 15 A4 `A3 q
0 1 0
2
2
0 0 2 1
3
0 0 0 5
5
1 0 0 0
1 0 0
0
1
1
1
0 1 0
0
1 p 2 A3 , 5 A4 q 0 1 0 0
1
0 0 1 0
0 0 2 0
2
1
0 0 0 1
0 0 0 5
5
1
One can easily read the solution x1 1, x2 1, x3 1, x4 1.
Recall that a system of linear equations is called homogeneous if b p0 0 0qJ
that is
& a x ` a x ` a x 0
21 1
22 2
2n n
% a x ` a x ` a x 0.
m1 1
m2 2
mn n
22
Problems
1.3
23
Problems
Problem
D1 0
2 1 0 0 0 0
2 3 4 5
1
2
1
0
0
0
1 2 3 4
0 1 2 1 0 0
2 1 2 3 , D2
0 0 1 2 1 0
0 2 1 2
0 0 0 1 2 1
0 0 2 1
0 0 0 0 1 2
1 2 3
2
3
1
, where P C such that the relation 2 ` ` 1 0 holds.
a)
2
3
1
2
1
1 1
1
...
1
1
. . . n1
2
4
2pn1q
` i sin 2
.
b) 1 2
, where cos 2
...
n
n
..
..
..
..
..
.
.
.
.
.
2
1 n1 2pn1q . . . pn1q
Problem 1.3.3. Let A paij q i1,n P Mn pCq and let us denote
j1,n
a) detpAq detpAq.
b) If aij aji , i, j P t1, 2, . . . , nu then detpAq P R.
Problem 1.3.4. Let a1 , a2 , . . . an P C. Compute the following determinants.
Problems
1
1
a1
a2
a) a21
a22
..
..
.
.
n1 n1
a1
a2
a1 a2 a3
an a1 a2
b) .
.. ..
..
. .
a2 a3 a4
24
...
a3
...
an
a23
..
.
...
..
.
a2n
..
.
an1
. . . an1
n
3
. . . an
. . . an1
.. .
..
.
.
. . . a1
7
4
a b
, A
, a, b P R.
a) A
9 5
b a
1 3 5
a b b
b) A 0 1 3 , A b a b , a, b P R.
0 0 1
b b a
Problem 1.3.6. Compute the rank of the following matrices by using the
Gauss-Jordan elimination method.
1
2 2 3 2
0
1 2 3 5
6 1 1
2
3
3 1 1 3 4
a)
,
2 4
3
2
1 2 1
0
1 1
3 0
2
1
2
2
0
0 1 0
Problems
25
1 2 3 5 3 6
0 1 2 3 4 7
b) 2 1 3 3 2 5 .
5 0 9 11 7 16
2 4 9 12 10 26
Problem 1.3.7. Find the inverses of the following matrices by using the
Gauss-Jordan elimination method.
2 1 1
1 1
, B 1 2
a) A
3 .
1 3
3 1 1
b) A paij q i1,n
j1,n
$
& 1 if i j
P Mn pRq, where aij
% 0 otherwise.
Problem 1.3.8. Prove that if A and B are square matrices of the same size, both
invertible, then:
a) ApI ` Aq1 pI ` A1 q1 ,
b) pA ` BB J q1 B A1 BpI ` B J A1 Bq1 ,
c) pA1 ` B 1 q1 ApA ` Bq1 B,
d) A ApA ` Bq1 A B BpA ` Bq1 B,
e) A1 ` B 1 A1 pA ` BqB 1
f) pI ` ABq1 I ApI ` BAq1 B,
g) pI ` ABq1 A ApI ` BAq1 .
Problems
26
Problem 1.3.9. For every matrix A P Mm,n pCq prove that the product A A and
AA are hermitian matrices.
Problem 1.3.10. For a quadratic matrix A of order n explain why the equation
AX XA I
has no solution.
Problem 1.3.11. Solve the following systems of linear equations by using
Gauss-Jordan elimination procedure.
a)
$
&
% x ` 4x 2x ` 2x 12.
1
2
3
4
b)
$
&
x1 x2 ` x3 x4 ` x5 x6 1
x1 ` x2 ` x3 ` x4 ` x5 ` x6 1
2x1 ` x3 x5 1
x2 3x3 ` 4x4 4
x1 ` 3x2 ` 5x3 x6 1
% x ` 2x ` 3x ` 4x ` 5x ` 6x 2
1
2
3
4
5
6
Problem 1.3.12. Find m, n, p P R such that the following systems be consistent,
and then solve the systems.
Problems
27
a)
$
2x y z 0
x ` 2y 3z 0
& 2x ` 3y ` mz 0
nx ` y ` z 0
x ` py ` 6z 0
% 2ex y ` z ` 2.
b)
$
2x y ` z 0
x ` 2y ` z 0
& mx y ` 2z 0
x ` ny 2z 0
3x ` y ` pz 0
% x2 ` y 2 ` x2 3.
2
Vector Spaces
2.1
Denition 2.1. A vector space V over a eld F (or F vector space) is a set V
with an addition ` (internal composition law) such that pV, `q is an abelian group
and a scalar multiplication : F V V, p, vq v v, satisfying the
following properties:
1. pv ` wq v ` w, @ P F, @v, w P F
2. p ` qv v ` v, @, P F, @v P V
3. pvq pqv
4. 1 v v, @v P V
The elements of V are called vectors and the elements of F are called scalars. The
scalar multiplication depends upon F. For this reason when we need to be exact
28
29
we will say that V is a vector space over F, instead of simply saying that V is a
vector space. Usually a vector space over R is called a real vector space and a
vector space over C is called a complex vector space.
Remark. From the denition of a vector space V over F the following rules for
calculus are easily deduced:
0V 0
0F v 0V
v 0V 0F or v 0V .
Examples. We will list a number of simple examples, which appear frequently in
practice.
V Cn has a structure of R vector space, but it also has a structure of C
vector space.
V FrXs, the set of all polynomials with coecients in F with the usual
addition and scalar multiplication is an F vector space.
Mm,n pFq with the usual addition and scalar multiplication is a F vector space.
Cra,bs , the set of all continuous real valued functions dened on the interval
ra, bs, with the usual addition and scalar multiplication is an R vector space.
2.2
It is natural to ask about subsets of a vector space V which are conveniently closed
with respect to the operations in the vector space. For this reason we give the
following:
30
31
32
The subspace U X W is called the intersection vector subspace, while the subspace
U ` W is called the sum vector subspace. Of course that these denitions can be
also given for nite intersections (respectively nite sums) of subspaces.
Proposition 2.7. Let V be a vector space over F and S V nonempty. The set
(
n
xSy
i1 i vi : i P F and vi P S, for all i 1, n, n P N is a vector subspace
over F of V .
Proof. The proof is straightforward in virtue of Proposition 2.4.
The above vector space is called the vector space generated by S, or the linear hull
of the set S and is often denoted by spanpSq. It is the smallest subspace of V
which contains S, in the sense that for every U subspace of V with S U it
follows that xSy U.
Now we specialize the notion of sum of subspaces, to direct sum of subspaces.
Denition 2.8. Let V be a vector space and Ui V subspaces, i 1, n. The sum
U1 ` ` Un is called direct sum if for every v P U1 ` ` Un , from
v u1 ` ` un w1 ` ` wn with ui , wi P Ui , i 1, n it follows that
ui wi , for every i 1, n.
The direct sum of the subspaces Ui , i 1, n will be denoted by U1 Un . The
previous denition can be reformulated as follows. Every u P U1 ` ` Un can be
written uniquely as u u1 ` u2 ` . . . ` un where ui P Ui , i 1, n.
The next proposition characterizes the direct sum of two subspaces.
Proposition 2.9. Let V be a vector space and U, W V be subspaces. The sum
U ` W is a direct sum i U X W t0V u.
Proof. Assume that U ` W is a direct sum and there exists s P U X W, s 0V .
But then every x P U ` W, x u ` w can be written as
33
Basis. Dimension.
34
2.3
Basis. Dimension.
Up to now we have tried to explain some properties of vector spaces in the large.
Namely we have talked about vector spaces, subspaces, direct sums, factor space.
The Proposition 2.7 naturally raises some questions related to the structure of a
vector space V . Is there a set S which generates V (that is xSy V )? If the
answer is yes, how big should it be? Namely how big should a minimal one
(minimal in the sense of cardinal numbers) be? Is there a nite set which generates
V ? We will shed some light on these questions in the next part of this chapter.
Why are the answers to such questions important? The reason is quite simple. If
we control (in some way) a minimal system of generators, we control the whole
space.
Denition 2.11. Let V be a F vector space. A nonempty set S V is called
system of generators for V if for every v P V there exists a nite subset
tv1 , . . . , vn u V and the scalars 1 , . . . , n P F such that v 1 v1 ` ` n vn (it
is also said that V is a linear combination of v1 , . . . , vn with scalars in F). V is
called dimensionally nite, or nitely generated, if it has a nite system of
generators.
Basis. Dimension.
35
2 ` 22 0
&
1 ` 22
% 2
`
1
has only the trivial solution p1 , 2 , 3 q p0, 0, 0q or not. But we can easily
compute the rank of the matrix, which is 3 due to
0 1 2
1 2 0 9 0,
2 0 1
Basis. Dimension.
36
to see that, indeed, the system has only the trivial solution, and hence the three
vectors are linearly independent.
We have the following theorem.
Theorem 2.13. (Existence of basis) Every vector space V has a basis.
We will not prove this general theorem here, instead we will restrict to nite
dimensional vector spaces.
Theorem 2.14. Let V t0u be e nitely generated vector space over F. From
every nite system of generators one can extract a basis.
Proof. Let S tv1 , . . . , vr u be a nite generators system. It is clear that there are
nonzero vectors in S (otherwise V t0u). Let 0 v1 P S. The set tv1 u is linearly
independent (because v1 0 0 from v1 0). That means that S contains
linearly independent subsets. Now P pSq is nite (S being nite), and in a nite
number of steps we can extract a maximal linearly independent system, let say
B tv1 , . . . , vn u, 1 n r in the following way:
v2 P Szxv1 y,
v3 P Szxtv1 , v2 uy
..
.
vn P Szxtv1 , v2 , . . . , vn1 uy.
We prove that B is a basis for V . It is enough to show that B generates V ,
because B is linearly independent by the choice of it. Let v P V . S being a system
of generators it follows that it is enough to show that every vk P S, n k r is a
linear combination of vectors from B. Suppose, by contrary, that vk is not a linear
combination of vectors from B. It follows that the set B Y tvk u is linearly
independent, contradiction with the maximality of B.
Basis. Dimension.
37
Because B is a basis the vectors e1i can be uniquely written as e1i nj1 aij ej ,
1
1 i m. If B1 is linearly independent, then it follows that m
i1 i ei 0 implies
Basis. Dimension.
38
choice of the basis, and it is denoted by dim F V ). The vector space V is said to be
of nite dimension. For V t0u , dim F V 0.
Remark 2.19. According to the proof of Theorem 2.17, if dim F V n then any
set of m n vectors is linear dependent.
Corollary 2.20. Let V be a vector space over F of nite dimension, dim F V n.
1. Any linearly independent system of n vectors is a basis. Any system of m
vectors, m n is linearly dependent.
2. Any system of generators of V which consists of n vectors is a basis. Any
system of m vectors, m n is not a system of generators
Proof. a) Consider L tv1 , . . . , vn u a linearly independent system of n vectors.
From the completion theorem (Theorem 2.16) it follows that L can be completed
to a basis of V . It follows from the cardinal basis theorem (Theorem 2.17) that
there is no need to complete L, so L is a basis.
Let L1 be a system of m vectors, m n. If L1 is linearly independent it follows that
L1 can be completed to a basis (Theorem 2.16), so dim F V m n, contradiction.
b) Let S tv1 , . . . , vn u be a system of generators which consists of n vectors.
From the Theorem 2.14 it follows that a basis can be extracted from its n vectors.
Again from the basis Theorem 2.17 it follows that there is no need to extract any
vector, so S is a basis.
Let S 1 be a generators system which consists of m vectors, m n. From the
Theorem 2.14 it follows that from S 1 one can extract a basis, so dim F V m n,
contradiction.
Remark 2.21. The dimension of a nite dimensional vector space is equal to any
of the following:
Basis. Dimension.
39
px, y, zq P R4 |x ` y ` z 0
tpx, y, x yq |x, y P Ru
tx p1, 0, 1q ` y p0, 1, 1q |x, y P Ru
span tp1, 0, 1q , p0, 1, 1qu .
The vectors p1, 0, 1q and p0, 1, 1q are linearly independent so they form a basis
of S.
Theorem 2.23. Every linearly independent list of vectors in a nite dimensional
vector space can be extended to a basis of the vector space.
Proof. Suppose that V is nite dimensional and tv1 , . . . , vm u is linearly
independent. We want to extend this set to a basis of V . V being nite
dimensional, there exists a nite set tw1 , . . . , wn u, a list of vectors which spans V .
If w1 is in the span of tv1 , . . . , vm u, let B tv1 , . . . , vm u. If not, let
B tv1 , . . . , vm , w1 u.
Basis. Dimension.
40
Basis. Dimension.
41
so
a1 u1 ` ` am um b1 w1 bn wm 0.
But this is a contradiction with the fact that tu1 , . . . , um , w1 , . . . , wn u is a basis of
V , so we obtain the contradiction, i.e. U X W t0u.
The next theorem relates the dimension of the sum and the intersection of two
subspaces with the dimension of the given subspaces:
Theorem 2.25. If U and W are two subspaces of a nite dimensional vector
space V , then
dim pU ` W q dim U ` dim W dim pU X W q .
Proof. Let tu1 , . . . , um u be a basis of U X W , so dim U X W m. This is a linearly
independent set of vectors in U and W respectively, so it can be extended to a
basis tu1 , . . . , um , v1 . . . vi u of U and a basis tu1 , . . . , um , w1 , . . . wj u of W , so
dim U m ` i and dim W m ` j. The proof will be complete if we show that
tu1 , . . . , um , v1 . . . , vi , w1 , . . . , wj u is a basis for U ` W , because in this case
dim pU ` W q m ` i ` j
pm ` iq ` pm ` jq m
dim U ` dim W dimpU X W q
The set spantu1 , . . . , um , v1 . . . , vi , w1 , . . . , wj u contains U and W , so it contains
U ` W . That means that to show that it is a basis for U ` W it is only needed to
show that it is linearly independent. Suppose that
a1 u1 ` ` am um ` b1 v1 ` ` bi vi ` c1 w1 ` ` cj wj 0 .
We have
c1 w1 ` ` cj wj a1 u1 am um b1 v1 bi vi
Basis. Dimension.
42
Basis. Dimension.
43
Then
V U1 Un .
Proof. One can choose a basis for each Ui . By putting all these bases in one list,
we obtain a list of vectors which spans V (by the rst property in the theorem),
and it is also a basis, because by the second property, the number of vectors in this
list is dim V .
Suppose that we have ui P Ui , i 1, n, such that
0 u1 ` ` un .
Every ui is represented as the sum of the vectors of basis of Ui , and because all
these bases form a basis of V , it follows that we have a linear combination of the
vectors of a base of V which is zero. So all the scalars are zero, that is all ui are
zero, so the sum is direct.
We end the section with two important observations. Let V be a vector space over
F (not necessary nite dimensional). Consider a basis B pei qiPI of V .
We have the rst representation theorem:
Theorem 2.27. Let V be a vector space over F (not necessary nite dimensional).
Let us consider a basis B pei qiPI . For every v P V, v 0 there exist a unique
subset B1 B, B1 tei1 , . . . , eik u and the nonzero scalars ai1 , . . . , aik P F , such
that
v
j1
ji eji
ki eki , ji 0, i 1, n, ki 0, i 1, m.
v
i1
i1
Local computations
44
ji eji
i1
ki eki , ji 0, i 1, n, ki 0, i 1, n.
i1
i1
ji eji
i1
2.4
Local computations
In this section we deal with some computations related to nite dimensional vector
spaces.
Let V be an F nite dimensional vector space, with a basis B te1 , . . . , en u. Any
vector v P V can be uniquely represented as
v
i1
ai ei a1 e1 ` ` an en .
Local computations
45
The scalars pa1 , . . . , an q are called the coordinates of the vector v in the basis B. It
1
is obvious that if we have another basis B , the coordinates of the same vector in
the new basis change. How we can measure this change? Let us start with a
situation that is a bit more general.
Theorem 2.29. Let V be a nite dimensional vector space over F with a basis
1
e1 a11 e1 ` ` a1n en
...
1
em am1 e1 ` ` amn en
Denote by A paij qi1,m the matrix formed by the coecients in the above
j1,n
equations. The dimension of the subspace xSy is eqaul to the rank of the matrix A,
i.e. dimxSy rankA.
Proof. Let us denote by Xi pai1 , . . . , ain q P Fn , i 1, m the coordinates of
1
1
ei , i 1, m in B. Then, the linear combination m
i1 i ei has its coordinates
m
i1 i Xi in B. Hence the set of all coordinate vectors of elements of xSy equals
1
X1
..
. A.
Xm
Local computations
46
1
Consider now the case of m n in the above discussion. The set S te1 , . . . , en u
is a basis i rankA n We have now
1
e1 a11 e1 ` ` a1n en
1
e2 a21 e1 ` ` a2n en
...
1
en an1 e1 ` ` ann en ,
1
representing the relations that change from the basis B to the new basis B S.
The matrix AJ is denoted by
a11 a21 . . .
P pe,e q
a12 a22 . . .
a1n a2n . . .
an1
an2
...
ann
.
The columns of this matrix are given by the coordinates of the vectors
1
1
e
e
1
1
1
e
e2
A 2
...
...
1
en
en
Consider the change of the basis from B to B with the matrix P pe,e q and
1
the change of the basis from B to B with the matrix P pe ,e q . We can think
Local computations
47
at the composition of these two changes, i.e. the change of the basis from
2
P pe,e q P pe ,e q P pe,e
P pe,e q P pe ,eq In ,
that is
1
pP pe ,eq q1 P pe,e q
At this step we try to answer the next question, which is important in
applications. If we have two basis, a vector can be represented in both of them.
What is the relation between the coordinates in the two basis?
Let us x the setting rst. Consider the vector space V , with two basis
1
B te1 , . . . , en u and B te1 , . . . , en u and P pe,e q the matrix of the change of basis.
Let v P V . We have
1
v a1 e1 ` ` an en b1 e1 ` ` bn en ,
where pa1 , . . . an q and pb1 , . . . bn q are the coordinates of the same vector in the two
basis. We can write
pvq
a1 a2 . . .
e
e
1
1
e2
e1
2
b b ... b
an
1
2
n
...
...
1
en
en
Local computations
48
Denote
a
1
a2
pvqe
...
an
and
b
1
b2
.
pvqe1
...
bn
the matrices of the coordinates of v in the two basis.
Denote further the basis columns
peq1n
e
1
e2
...
en
pe q1n
e
1
1
e2
...
1
en
J
J
pe,e q J
v pvqJ
q peq1n
e peq1n pvqe1 pe q1n pvqe1 pP
pP pe,e q qJ pvqJ
pvqJ
e ,
e1
Problems
49
or
1
2.5
Problems
0 1 2
,
2 4 1 2 1 1 ,
0 1
1
3 1 1
1 2
2 2 1 .
1 2 1
Problem 2.5.3. Let V be a nite dimensional vector space dim V n. Show that
there exist one dimensional subspaces U1 , . . . , Un , such that
V U1 Un .
Problem 2.5.4. Find three distinct subspaces U, V, W of R2 such that
R2 U V V W W U.
Problem 2.5.5. Let U, W be subspaces of R8 , with dim U 3, dim W 5 and
dim U ` W 8. Show that U X W t0u.
Problem 2.5.6. Let U, W be subspaces of R9 with dim U dim W 5. Show
that U X W t0u.
Problems
50
Problem 2.5.7. Let U and W be subspaces of a vector space V and suppose that
each vector v P V has a unique expression of the form v u ` w where u belongs
to U and w to W. Prove that
V U W.
Problem 2.5.8. In Cra, bs nd the dimension of the subspaces generated by the
following sets of vectors:
a) t1, cos 2x, cos2 xu,
b) tea1 x , . . . , ean x u, where ai aj for i j
Problem 2.5.9. Find the dimension and a basis in the intersection and sum of
the following subspaces:
U spantp2, 3, 1q, p1, 2, 2, q, p1, 1, 3qu,
V spantp1, 2, 1q, p1, 1, 1q, p1, 3, 3qu.
U spantp1, 1, 2, 1q, p0, 1, 1, 2q, p1, 2, 1, 3u,
V spantp2, 1, 0, 1q, p2, 1, 1, 1q, p3, 0, 2, 3qu.
Problem 2.5.10. Let U, V, W be subspaces of some vector space and suppose that
U W. Prove that
pU ` V q X W U ` pV X W q.
Problem 2.5.11. In R4 we consider the following subspace
V spantp2, 1, 0, 1q, p2, 1, 1, 1q, p3, 0, 2, 3qu. Find a subspace W of R4 such
that R4 V W .
Problem 2.5.12. Let V, W be two vector spaces over the same eld F. Find the
dimension and a basis of V W.
Problems
51
1 3 5 3 6
1 2 3 4 7
M 1 3 3 2 5 .
0 9 11 7 16
4 9 12 10 26
Let U and W be the subspaces of R5 generated by rows 1, 2 and 5 of M, and by
rows 3 and 4 of M respectively. Find the dimensions of U ` W and U X W.
Problems
52
Problem 2.5.19. Find bases for the sum and intersection of the subspaces U and
W of R4 rXs generated by the respective sets of polynomials
t1 ` 2x ` x3 , 1 x x2 u and tx ` x2 3x3 , 2 ` 2x 2x3 u.
3
Linear maps between vector spaces
Up to now we met with vector spaces. It is natural to ask about maps between
them, which are compatible with the linear structure of a vector space. These are
called linear maps, special maps which also transport the linear structure. They
are also called morphisms of vector spaces or linear transformations.
Denition 3.1. Let V and W be two vector spaces over the same eld F. A linear
map from V to W is a map f : V W which has the property that
f pv ` uq f pvq ` f puq for all v, u P V and , P F.
The class of linear maps between V and W will be denoted by LF pV, W q or
HomF pV, W q.
From the denition it follows that f p0V q 0W and
fp
i1
i vi q
i f pvi q, @ i P F, @vi P V, i 1, n.
i1
We shall dene now two important notions related to a linear map, the kernel and
the image.
Consider the sets:
ker f f 1 p0W q tv P V |f pvq 0w u, and
53
54
(
px, yq P R2 |T px, yq p0, 0q
(
px, yq P R2 | px ` y, x ` yq p0, 0q
(
px, yq P R2 |x ` y 0 .
55
k f pek q 0W ,
km`1
fp
k ek q 0W .
km`1
Hence
1
k ek P ker f
km`1
and v 1 can be written in terms of e1 , . . . , em . This is only compatible with the fact
that e1 , . . . , en form a basis of V if m`1 n 0, which implies the linear
independence of the vectors f pem`1 q, . . . , f pen q.
Theorem 3.6. Let f : V W be a linear mapping between vector spaces V and
W , and dim V dim W 8. Then, f pV q W i ker f t0V u. In particular f is
onto i f is one to one.
Proof. Suppose that ker f t0V u. Since f pV q is a subspace of W it follows that
dim V dim f pV q dim W , which forces dim f pV q dim W , and this implies
that f pV q W .
56
The fact that f pV q W implies that ker f t0V u follows by reversing the
arguments.
Proposition 3.7. Let f : V W be a linear map between vector spaces V, W over
F. If f is a bijection, it follows that its inverse f 1 : W V is a linear map.
Proof. Because f is a bijection @w1 , w2 P W , D! v1 , v2 P V , such that
f pvi q wi , i 1, 2. Because f is linear, it follows that
1 w1 ` 2 w2 1 f pv1 q ` 2 f pv2 q f p1 v1 ` 2 v2 q.
It follows that 1 v1 ` 2 v2 f 1 p1 w1 ` 2 w2 q, so
f 1 p1 w1 ` 2 w2 q 1 f 1 pw1 `q ` 2 f 1 pw2 q.
Properties of LpV, W q
57
3.1
Properties of LpV, W q
In this section we will prove some properties of linear maps and of LpV, W q.
Proposition 3.10. Let f : V W be a linear map between the linear spaces V, W
over F.
1. If V1 V is a subspace of V , then f pV1 q is a subspace of W .
2. If W1 W is a subspace of W , then f 1 pW1 q is a subspace of V .
Proof. 1. Let w1 , w2 be in f pV1 q. It follows that there exist v1 , v2 P V1 such that
f pvi q wi , i 1, 2. Then, for every , P F we have
w1 ` w2 f pv1 q ` f pv2 q f pv1 ` v2 q P f pV1 q.
2. For v1 , v2 P f 1 pW1 q we have that f pv1 q, f pv2 q P W1 , so
@ , P F, f pv1 q ` f pv2 q P W1 . Because f is linear
f pv1 q ` f pv2 q f pv1 ` v2 q v1 ` v2 P f 1 pW1 q.
The next proposition shows that the kernel and the image of a linear map
characterize the injectivity and surjectivity properties of the map.
Proposition 3.11. Let f : V W be a linear map between the linear spaces V, W .
Properties of LpV, W q
58
Properties of LpV, W q
59
i P F, i 1, n such that
i1
i vi v. It follows that
w f pvq f p
i1
i vi q
i f pvi q.
i1
3. Because f is bijective and S is a basis for V , it follows that both 1. and 2. hold,
that is f pSq is a basis for W .
Denition 3.13. Let f, g : V W be linear maps between the linear spaces V
and W over F, and P F. We dene
1. f ` g : V W by pf ` gqpvq f pvq ` gpvq, @ v P V , the sum of the linear
maps, and
2. f : V W by pf qpvq f pvq, @ v P V, @ P F, the scalar multiplication
of a linear map.
Proposition 3.14. With the operations dened above LpV, W q becomes a vector
space over F.
The proof of this statement is an easy verication.
In the next part we specialize in the study of the linear maps, namely we consider
the case V W .
Denition 3.15. The set of endomorphisms of a linear space V is:
EndpLq tf : V V | f linear u.
By the results from the previous section, EndpV q is an F linear space.
Let W, U be two other linear spaces over the same eld F, f P LpV, W q and
g P LpW, Uq. We dene the product (composition) of f and g by
h g f : V U,
hpvq gpf pvqq, @ v P V.
Properties of LpV, W q
60
Properties of LpV, W q
61
The group of automorphisms of a linear space is called the general linear group
and is denoted by GLpV q.
Example 3.20.
Properties of LpV, W q
62
3.2
63
Let V and W be two vector spaces over the same led F, dim V m, dim W n,
and e te1 , . . . , em u and f tf1 , . . . , fn u be bases in V and W respectively. A
linear map T P LpV, W q is uniquely determined by the values of the basis e.
We have
T pe1 q a11 f1 ` ` a1n fn ,
T pe2 q a21 f1 ` ` a2n fn ,
..
.
T pem q am1 f1 ` ` amn fn ,
or, in the matrix notation
T pe1 q
T pe2 q
..
T pem q
f1
f2
..
.
j1,n
fn
pf,eq
64
Now we want to see how the image of a vector by a linear map can we expressed.
J
Let v P V, v m
i1 vi ei , or in the matrix notation pvqe peq1m , where, as usual
v1
pvqe
v2
..
.
vn
and
e1
peq1m
e2
..
.
em
Now denote T pvq w
j1 wj ej
P W , we have
T pvq pwqJ
f pf q1n .
T being linear, we have T pvq
i1
T pvq pvqJ
e pT peqq1m .
pf,eq
it follows that
pf,eq J
pT peqq1m pMT
q pf q1n .
So nally we have
pf,eq J
J
pwqJ
f pf q1n pvqe pMT
q pf q1n .
J
pwqJ
f pvqe pMT
q .
pwqf pMT
qpvqe .
65
3 0 2
3
3
Example 3.23. Let T : R R , T 1 1 0 . Find a basis in ker T and
2 1 2
3
nd the dimension of T pR q .
Observe that the kernel of T ,
$
&
ker T px, y, zq P |T
,
/
/
/
.
y 0 ,
/
/
/
z
0
3x
` 2z 0
&
x ` y
0
% 2x ` y ` 2z 0,
the matrix of the system being exactly T . To solve this system we need to
compute the rank of the matrix T . We
get that
0 2
1 0 0
1 2
y x,
3
z x.
2
*
"
"
*
3
3
1, 1,
, , | P R span
2
2
(3.1)
66
e1 te11 , . . . , e1m u and f 1 tf11 , . . . , fn1 u the matrix of T with respect to these bases
pf 1 ,e1 q
will be MT
pf,eq
P pf ,f q MT
P pe,e q .
Problems
67
pwqf 1 MT
pf 1 ,e1 q
pvqe1 MT
P pe ,eq pvqe .
pf,eq
pvqe .
MT
pf,eq
P pf ,f q MT
pf,eq
pP pe ,eq q1 P pf ,f q MT
P pe,e q .
and A1 MT
3.3
Problems
Problems
68
3 1
MT1 0 2
1 2
the matrices
1 ,
respectively
MT2
1 4 2
0 4 1
0 0 5
Problems
69
M 1
0
1
2
1
0
1 .
1 1 3
Find a basis in ker T, im T and the dimension of the spaces V, W, ker T and im T .
Problem 3.3.6. Show that a linear transformation T : V W is injective if and
only if it has the property of mapping linearly independent subsets of V to linearly
independent subsets of W.
Problem 3.3.7. Show that a linear transformation T : V W is surjective if and
only if it has the property of mapping any set of generators of V to a set of
generators of W.
Problem 3.3.8. Let T : V W be a linear mapping represented by the matrix
1
1
1
2
M 1 1
1
1 .
0 2 2 3
Compute dim V, dim W and nd a basis in im T and ker T .
Problem 3.3.9. Find all the linear mappings T : R R with the property
im T ker T .
Find all n P N such that there exists a linear mapping T : Rn Rn with the
property im T ker T .
Problem 3.3.10. Let V , respectively Vi , i 1, n be vector spaces over C. Show
that, if T : V1 V2 Vn V is a linear mapping then there exist and they
are unique the linear mappings Ti : Vi V , i 1, n such that
T pv1 , . . . , vn q T1 pv1 q ` T2 pv2 q ` ` Tn pvn q.
Problems
70
1 2 0 1
3 0 1 2
MT
2 5 3 1
1 2 1 3
in the canonical basis te1 , e2 , e3 , e4 u of R4 .
Find the matrix of T with respect to the following basis.
Problems
a) te1 , e3 , e2 , e4 u.
b) te1 , e1 ` e2 , e1 ` e2 ` e3 , e1 ` e2 ` e3 ` e4 u.
c) te4 e1 , e3 ` e4 , e2 e4 , e4 u.
Problem 3.3.16. A linear transformation T : R3 R2 is dened by
T px, x2 , x3 q px1 x2 x3 , x1 ` x3 q. Let e tp2, 0, 0q, p1, 2, 0q, p1, 1, 1qu and
f tp0, 1q, p1, 2qu be bases in R3 and R2 respectively. Find the matrix that
represents T with respect to these bases.
71
4
Proper vectors and the Jordan canonical
form
4.1
In this part we shall further develop the theory of linear maps. Namely we are
interested in the structure of an operator.
Let us begin with a short description of what we expect to obtain.
Suppose that we have a vector space V over a eld F and a linear operator
T P EndpV q. Suppose further that we have the direct sum decomposition:
V
i1
Ui ,
72
73
However we have a problem: if we want to apply tools which are commonly used in
the theory of linear maps (such as taking powers for example) the problem is that
generally T may not map Uj into itself, in other words T |Uj may not be an
operator on Uj . For this reason it is natural to consider only that kind of
decomposition for which T maps every Uj into itself.
Denition 4.1. Let V be an operator on the vector space V over F and U a
subspace of V . The subspace U is called invariant under T if T pUq U, in other
words T |U is an operator on U.
Of course that another natural question arises when dealing with invariant
subspaces. How does an operator behave on an invariant subspace of dimension
one? Every one dimensional subspace is of the form U tu| P Fu. If U is
invariant by T it follows that T puq should be in U, and hence there should exist a
scalar P F such that T puq u. Conversely if a nonzero vector u exists in V
such that T puq u, for some P F, then the subspace U spanned by u is
invariant under T and for every vector v in U one has T pvq v. It seems
reasonable to give the following denition:
Denition 4.2. Let T P EndpV q be an operator on a vector space over the eld F.
A scalar P F is called eigenvalue (or proper value) for T if there exists a nonzero
vector v P V such that T pvq v. A corresponding vector satisfying the above
equality is called eigenvector (or proper vector) associated to the eigenvalue .
The set of eigenvectors of T corresponding to an eigenvalue forms a vector space,
denoted by Epq, the proper subspace corresponding to the proper value . It is
clear that Epq kerpT IV q
For the nite dimensional case let MT be the matrix of T in some basis. The
equality T pvq v is equivalent with MT v v or, with pMT In qv 0, which
74
1
detpMT Iq detpP q detpMT Iq,
detpP q
75
4.2
76
The main reason for which there exists a richer theory on operators than for linear
maps is that operators can be raised to powers (we can consider the composition of
an operator with itself).
Let V be an n-dimensional vector space over a eld F and T : V V be a linear
operator.
Now, LpV, V q EndpV q is an n2 dimensional vector space. We can consider
T 2 T T and of course we obtain T n T n1 T inductively. We dene T 0 as
being the identity operator I IV on V . If T is invertible (bijective), then there
exists T 1 , so we dene T m pT 1 qm . Of course that
T m T n T m`n , for m, n P Z.
For T P EndpV q and p P FrXs a polynomial given by
ppzq a0 ` a1 z ` . . . am z m , z P F
we dene the operator ppT q given by
ppT q a0 I ` a1 T ` . . . am T m .
This is a new use of the same symbol p, because we are applying it to operators
not only to elements in F. If we x the operator T we obtain a function dened on
FrXs with values in EndpV q, given by p ppT q which is linear. For p, q P FrXs we
dene the operator pq given by ppqqpT q ppT qqpT q.
Now we begin the study of the existence of eigenvalues and of their properties.
Theorem 4.6. Every operator over a nite dimensional, nonzero, complex vector
space has an eigenvalue.
77
78
aik ei ,
i1
79
Proof. 12 obviously follows from a moments tought and the denition. Again
32. It remains only to prove that 23.
So, suppose 2 holds. Fix k P t1, . . . , nu. From 2 we have
T pe1 q P
spante1 u
spante1 , . . . , ek u
T pe2 q P
..
.
spante1 , e2 u
spante1 , . . . , ek u
T pek q P
spante1 , . . . , ek u
spante1 , . . . , ek u.
80
1 . . .
0
MT
...
...
81
We will prove that T is not invertible i one of the k s equals zero. If 1 0, then
T pv1 q 0, so T is not invertible, as desired.
Suppose k 0, 1 k n. The operator T maps the vectors e1 , . . . , ek1 in
spante1 , . . . , ek1 u and, because k 0, T pek q P te1 , . . . , ek1 u. So, the vectors
T pe1 q, . . . , T pek q are linearly dependent (they are k vectors in a k 1 dimensional
vector space, spante1 , . . . , ek1 u. Consequently T is not injective, and not
invertible.
Suppose that T is not invertible. Then ker T t0u, so v P V, v 0 exists such
that T pvq 0. Let
v a1 e1 ` ` an en
and let k be the largest integer with ak 0. Then
v a1 e1 ` ` ak ek ,
and
0 T pvq,
0 T pa1 e1 ` ` ak ek q,
0 pa1 T pe1 q ` ` ak1 T pek1 qq ` ak T pek q.
The term pa1 T pe1 q ` ` ak1 T pek1 qq is in spante1 , . . . , ek1 u, because of the
form of MT . Finally T pek q P spante1 . . . , ek1 u. Thus when T pek q is written as a
linear combination of the basis te1 , . . . , en u, the coecient of ek will be zero. In
other words, k 0.
Theorem 4.13. Suppose that T P EndpV q has an upper triangular matrix with
respect to some basis of V . Then the eigenvalues of T are exactly of the entries on
the diagonal of the upper triangular matrix.
Proof. Suppose that we have a basis te1 , . . . , en u such that the matrix of T is
upper triangular in this basis. Let P F, and consider the operator T I. It has
Diagonal matrices
82
the same matrix, except that on the diagonal the entries are i if those in the
matrix of T are j . It follows that T I is not invertible i is equal with some
j . So is proper value as desired.
4.3
Diagonal matrices
2
..
Diagonal matrices
83
Diagonal matrices
84
Let te11 , . . . , e1i1 u, . . . , ten1 , . . . , enin u bases in kerpT 1 Iq, . . . , kerpT n Iq. Then
dim V i1 ` ` in , and te11 , . . . , e1i1 , . . . , en1 , . . . , enin u are linearly independent.
Hence V spante11 , . . . , e1i1 , . . . , en1 , . . . , enin u which shows that 2 holds.
2 1 1
1 1 0
diagonalizable and nd the diagonal matrix similar to A.
The characteristic polynomial of A is
detpA Iq 3 ` 42 6 p ` 1qp 2qp 3q.
Hence, the eigenvalues of A are 1 1, 2 2 and 3 3. To nd the
corresponding eigenvectors, we have to solve the three linear systems pA ` Iqv 0,
pA 2Iqv 0 and pA 3Iqv 0. On solving these systems, we nd that the
solution spaces are
tp, , 2q : P Ru,
tp, , q : P Ru,
respectively
tp, , 0q : P Ru.
Hence, the corresponding eigenvectors associated to 1 , 2 and 3 respectively, are
v1 p1, 1, 2q, v2 p1, 1, 1q and v3 p1, 1, 0q respectively. There exists 3 linear
independent eigenvectors, thus A is diagonalizable.
1 1
1
2 1 0
Diagonal matrices
85
1 1
We have P 1 16 2 2
3 3
Hence, the diagonal matrix
2 .
0
similar to A is
1 0 0
D P 1AP 0 2 0 .
0 0 3
Obviously one may directly compute D, by knowing, that D is the diagonal matrix
having the eigenvalues of A on its main diagonal.
Proposition 4.17. If is a proper value for an operator (endomorphism) T , and
v 0, v P V is a proper vector then one has:
1. @k P N, k is a proper value for T k T T (k times) and v is a proper
vector of T k .
2. If p P FrXs is a polynomial with coecients in F, then ppq is an eigenvalue
for ppT q and v is a proper vector of ppT q.
3. For T automorphism (bijective endomorphism), 1 is a proper value for T 1
and v is an eigenvector for T 1 .
Proof. 1. We have T pvq v, hence T T pvq T pvq T pvq 2 v. Assume,
that T k1pvq k1 v. Then T k pvq
T T k1 pvq T pT k1 pvqq T pk1 vq k1T pvq k1 v k v.
2. Let p a0 ` a1 x ` ` an xn P FrXs. Then ppT qpvq
a0 Ipvq ` a1 T pvq ` ` an T n pvq a0 v ` a1 pvq ` ` an pn vq ppqv.
86
` 2
T ` T v v,
or, equivalently
`
T 2 ` T ` I v 0.
Now, we apply the linear map T I (recall that the linear maps form a vector
space, so the sum or dierence of two linear maps is still linear) to the above
relation to get
`
pT Iq T 2 ` T ` I v 0.
Here we have used that, by linearity, pT Iq 0 0.
Finally, simple algebra yields pT Iq pT 2 ` T ` Iq T 3 I, so the above equation
shows that
T 3 v v,
as desired.
4.4
87
Let V be a vector space of nite dimension n over a eld F. Let T : V V and let
0 be an eigenvalue of T . Consider the matrix form of the endomorphism in a
given basis, T pvq MT v. The eigenvalues are the roots of the characteristic
polynomial detpMt In q 0. It can be proved that this polynomial does not
depend on the basis and of the matrix MT . So, it will be called the characteristic
polynomial of the endomorphism T , and it will be denoted by P pq, and of course
deg P n. Sometimes it is called the characteristic polynomial of the matrix, but
we understand that is the matrix associated to an operator.
Denote by mp0 q the multiplicity of 0 as a root of this polynomial. Associated to
the proper value 0 we consider the proper subspace corresponding to 0 :
Ep0 q tv P V |T pvq 0 vu.
Consider a basis of V and let MT be the matrix of T with respect to this basis. We
havev that:
Theorem 4.19. With the above notations, the following holds
dim Ep0 q n rank pMT 0 Iq mp0 q.
Proof. Obviously is enough to prove the claim in V Rn . Let x1 , x2 , . . . , xr be
linearly independent eigenvectors associated to 0 , so that dim Ep0 q r.
Complete this set with xr`1 , . . . xn to a basis of Rn . Let P be the matrix whose
columns are xi , i 1, n. We have MT P r0 x1 | . . . |0 xr | . . . s. We get that the
rst r columns of P 1MT P are diagonal with 0 on the diagonal, but that the rest
of the columns are indeterminable. We prove next that P 1 MT P has the same
characteristic polynomial as MT . Indeed
detpP 1MT P Iq detpP 1MT P P 1 pIqP q
88
1
detpMT Iq detpP q detpMT Iq.
detpP q
But since the rst few columns of P 1 MT P are diagonal with 0 on the diagonal
we have that the characteristic polynomial of P 1MT P has a factor of at least
p0 qr , so the algebraic multiplicity of 0 is at least r.
The value dim Ep0 q is called the geometric multiplicity of the eigenvalue 0 .
Let T P EndpV q, and suppose that the roots of the characteristic polynomial are in
F. Let be a root of the characteristic polynomial, i.e. an eigenvalue of T .
Consider m the algebraic multiplicity of and q dim Epq, the geometric
multiplicity of .
It is possible to nd q eigenvectors and m q principal vectors (also called
generalized eigenvectors), all of them linearly independent, and an eigenvector v
and the corresponding principal vectors u1 , . . . , ur satisfy
T pvq v, T pu1q u1 ` v, . . . , T pur q ur ` ur1
The precedent denition can equivalently be stated as
A nonzero vector u is called a generalized eigenvector of rank r associated with the
eigenvalue if and only if pT Iqr puq 0 and pT Iqr1 puq 0. We note that
a generalized eigenvector of rank 1 is an ordinary eigenvector. The previously
dened principal vectors u1 , . . . , ur are generalized eigenvectors of rank 2, . . . , r ` 1.
It is known that if is an eigenvalue of algebraic multiplicity m, then there are m
linearly independent generalized eigenvectors associated with .
These eigenvectors and principal vectors associated to T by considering all the
eigenvalues of T form a basis of V , called the Jordan basis with respect to T . The
89
matrix of T relative to a Jordan basis is called a Jordan matrix, and it has the form
J2
..
.
Jp
The Js are matrices, called Jordan cells. Each cell represents the contribution of
an eigenvector v, and the corresponding principal vectors, u1 , . . . ur , and it has the
form
1
..
. 1
P Mr`1 pFq
It is easy to see that the Jordan matrix is a diagonal matrix i there are no
principal vectors i mpq dim Epq for each eigenvalue .
Let MT be the matrix of T with respect to a given basis B, and J be the Jordan
matrix with respect to a Jordan basis B 1 . Late P be the transition matrix from B
to B 1 , hence it have columns consisting of either eigenvectors or generalized
eigenvectors. Then J P 1MT P , hence MT P JP 1 .
Example 4.20. (algebraic multiplicity
0 1
2 1
transition matrix of A.
90
1 0 1
3 1 1
2 1 0
J P 1 AP 0 2 0 .
0 0 2
Example 4.21. (algebraic multiplicity
3, geometric
multiplicity 1) Consider the
1 18 7
operator with the matrix A 1 13 4 . Find the Jordan matrix and the
1 25
8
transition matrix of A.
The characteristic polynomial of A is detpA Iq p ` 2q3 , hence 2 is an
eigenvalue with algebraic multiplicity 3. By solving the homogenous system
pA ` 2Iqv 0 we obtain the solution space
Problems
91
5 3 4
7 4
5
2 1 2
P 1 1 3 2 , hence
2 1 1
2 1
0
1
J P AP 0 2 1 .
0
0 2
4.5
Problems
f 1 pxq
.
xex2
Problems
92
1 5
3 3
1 2 1
bq 1 0
1 .
4 4 5
Problem 4.5.3. Find the Jordan canonical form and the transition matrix for the
matrix
2 1 1
3 2 3 .
2 2 3
Problem 4.5.4. Prove that a square matrix and its transpose have the same
eigenvalues.
Problem 4.5.5. Find the Jordan canonical form and the transition matrix for the
matrix
6 6 15
1 5 5 .
1 2 2
Problem 4.5.6. Find the eigenvalues and eigenvectors of the operator
T : Cr, s Cr, s,
T pf qpxq
Problem 4.5.7. Find the Jordan canonical form and the transition matrix for the
matrix
2 2 2 .
1 1 4
Problems
93
Problem 4.5.9. Find the Jordan canonical form and the transition matrix for the
matrix
7 12 6
10 19 10 .
12 24 13
Problem 4.5.10. Find the eigenvalues and eigenvectors of the operator
f 1 pxq
.
sin2 x
1 1
.
Problem 4.5.11. Triangularize the matrix A
1 3
Problem 4.5.12. Find the Jordan canonical form and the transition matrix for
the matrix
4 5 2
5 7 3 .
6 9 4
f 1 pxq
.
tan2 x
Problem 4.5.14. Find the Jordan canonical form and the transition matrix for
the matrix
4 2 1 .
4
1 2
Problems
94
1 3 3
4 6 15
2 6 13 , 1 3 5 .
1 4 8
1 2 4
2 6 15
1 1 5 .
1 2 6
5
Inner product spaces
5.1
Up to now we have studied vector spaces, linear maps, special linear maps.
We can measure if two vectors are equal, but we do not have something like
length, so we cannot compare two vectors. Moreover we cannot say anything
about the position of two vectors.
In a vector space one can dene the norm of a vector and the inner product of two
vectors. The notion of the norm permits us to measure the length of the vectors,
and compare two vectors. The inner product of two vectors, on one hand induces a
norm, so the length can be measured, and on the other hand (at least in the case
of real vector spaces), lets us measure the angle between two vectors, so a full
geometry can be constructed there. Nevertheless in the case of complex vector
spaces, the angle of two vectors is not clearly dened, but the orthogonality is.
Denition 5.1. An inner product on a vector space V over the eld F is a
function (bilinear form) x, y : V V R with the properties:
(positivity and deniteness) xv, vy 0 and xv, vy 0 i v 0.
95
96
a b
be a positive denite matrix, that
Example 5.2. Let A P M2 pRq, A
b c
2
is a 0, detpAq
for every u pu1 , u2q, v pv1 , v2 q P R we dene
0. Then
xu, vy pv1 v2 qA
u1
u2
It can easily be veried that x, y is an inner product on the real linear space R2 .
97
98
In this course we are mainly interested in the inner product spaces. But we should
a
point out that an inner product on V denes a norm, by }v} xv, vy for v P V ,
and a norm on V denes a metric by dpv, wq }w v}, for v, w P V .
On the other hand, from their generality point of view the metrics are the most
general ones (can be dened on any set), followed by norms (which assumes the
linearity of the space where is dened) and on the last position is the inner
product. It should be pointed that every inner product generates a norm, but not
every norm comes from an inner product, as is the case for the max norm dened
above.
For an inner product space pV, x, yq the following identity is true:
C
i1
i vi ,
j1
G
j wj
m
n
i j xvi , wj y.
i1 j1
99
We have
{
vKw xv, wy 0 pv,
wq .
2
Theorem 5.8. (Parallelogram law) Let V be an inner product space and
u, v P V . Then
}u ` v}2 ` }u v}2 2p}u}2 ` }v}2 q.
Proof.
}u ` v}2 ` }u v}2 xu ` v, u ` vy ` xu v, u vy xu, uy ` xu, vy ` xv, uy ` xv, vy
`xu, uy xu, vy xv, uy ` xv, vy
2p}u}2 ` }v}2q.
Now we are going to prove one of the most important inequalities in mathematics,
namely the Cauchy-Schwartz inequality. There are several methods of proof for
this, we will give one related to our aims.
100
xu,vy
,
}v}2
xu, vy
xu, vy
u
v` u
v .
}v}2
}v}2
Theorem 5.10. Cauchy-Schwartz Inequality Let V be an inner product space
and u, v P V . Then
|xu, vy| }u} }v}.
The equality holds i one of u, v is a scalar multiple of the other (u and v are
collinear).
Proof. Let u, v P V . If v 0 both sides of the inequality are 0 and the desired
xu,vy
xu,vy
result holds. Suppose that v 0. Write u }v}2 v ` u }v}2 v . Taking into
account that the vectors
xu,vy
v
}v}2
and u
xu,vy
v
}v}2
theorem we obtain
}u}
2
xu, vy 2
xu,
vy
` u
v
v
}v}2
}v}2
2
|xu, vy|2
xu, vy
` u
v
}v}2
}v}2
|xu, vy|2
,
}v}2
xu,vy
v
}v}2
Orthonormal Bases
5.2
101
Orthonormal Bases
Denition 5.11. Let pV, x, yq an inner product space and let I be an arbitrary
index set. A family of vectors A tei P V |i P Iu is called an orthogonal family, if
xei , ej y 0 for every i, j P I, i j. The family A is called orthonormal if it is
orthogonal and }ei } 1 for every i P I.
One of the reason that one studies orthonormal families is that in such special
bases the computations are much more simple.
Proposition 5.12. If pe1 , e2 , . . . , em q is an orthonormal family of vectors in V ,
then
}1 e1 ` 2 e2 ` ` m em }2 |1 |2 ` |2 |2 ` ` |m |2
for all 1 , 2 , . . . , m P F.
Proof. Apply Pythagorean Theorem, that is
}1 e1 ` 2 e2 ` ` m em }2 |1 |2 }e1 }2 ` |2 |2 }e2 }2 ` ` |m |2 }en }2 .
The conclusion follows taking into account that }ei } 1, i 1, n.
Corollary 5.13. Every orthonormal list of vectors is linearly independent.
Proof. Let pe1 , e2 , . . . , em q be an orthonormal list of vectors in V and
1 , 2 , . . . , m P F with
1 e1 ` 2 e2 ` ` m em 0.
It follows that }1 e1 ` 2 e2 ` ` m em }2 |1 |2 ` |2 |2 ` ` |m |2 0, that is
j 0, j 1, m.
Orthonormal Bases
102
|xv, ei y|2
i1
|xv, ei y|2 .
i1
Orthonormal Bases
103
such that
spanpv1 , v2 , . . . , vk q spanpe1 , e2 . . . , ek q
for every k P t1, 2, . . . , mu.
Proof. Let pv1 , v2 , . . . , vm q be a linearly independent set of vectors. The family of
orthonormal vectors pe1 , e2 . . . , em q will be constructed inductively. Start with
e1
v1
.
}v1 }
Orthonormal Bases
104
Both lists being linearly independent (the rst one by hypothesis and the second
one by orthonormality), it follows that the generated subspaces above have the
same dimension j, so they are equal.
Remark 5.16. If in the Gram-Schmidt process we do not normalize the vectors
we obtain an orthogonal basis instead of an orthonormal one.
Example 5.17. Orthonormalize the following list of vectors in $\mathbb{R}^4$:
$$\{v_1 = (0,1,1,0),\ v_2 = (0,4,0,1),\ v_3 = (1,-1,1,0),\ v_4 = (1,3,0,1)\}.$$
First we orthogonalize by using the Gram-Schmidt procedure. Let $u_1 = v_1 = (0,1,1,0)$. Then
$$u_2 = v_2 - \frac{\langle v_2, u_1\rangle}{\langle u_1, u_1\rangle}\, u_1 = (0,4,0,1) - \frac{4}{2}(0,1,1,0) = (0,2,-2,1),$$
$$u_3 = v_3 - \frac{\langle v_3, u_1\rangle}{\langle u_1, u_1\rangle}\, u_1 - \frac{\langle v_3, u_2\rangle}{\langle u_2, u_2\rangle}\, u_2 = \left(1, -\frac{1}{9}, \frac{1}{9}, \frac{4}{9}\right),$$
$$u_4 = v_4 - \frac{\langle v_4, u_1\rangle}{\langle u_1, u_1\rangle}\, u_1 - \frac{\langle v_4, u_2\rangle}{\langle u_2, u_2\rangle}\, u_2 - \frac{\langle v_4, u_3\rangle}{\langle u_3, u_3\rangle}\, u_3 = \left(\frac{1}{11}, \frac{1}{22}, -\frac{1}{22}, -\frac{2}{11}\right).$$
It can easily be verified that the list $\{u_1, u_2, u_3, u_4\}$ is orthogonal. Take now $w_i = \frac{u_i}{\|u_i\|}$, $i = \overline{1,4}$. We obtain
$$w_1 = \left(0, \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0\right), \qquad w_2 = \left(0, \frac{2}{3}, -\frac{2}{3}, \frac{1}{3}\right),$$
$$w_3 = \left(\frac{3}{\sqrt{11}}, -\frac{1}{3\sqrt{11}}, \frac{1}{3\sqrt{11}}, \frac{4}{3\sqrt{11}}\right), \qquad w_4 = \left(\frac{\sqrt{22}}{11}, \frac{\sqrt{22}}{22}, -\frac{\sqrt{22}}{22}, -\frac{2\sqrt{22}}{11}\right).$$
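The procedure of the example translates directly into code. The following Python sketch (assuming numpy, and using the sign conventions of the list as reconstructed above) orthonormalizes the same vectors and checks the result:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors (Theorem 5.15)."""
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for e in basis:
            u -= (u @ e) * e              # subtract the projection <v, e> e
        basis.append(u / np.linalg.norm(u))
    return basis

W = gram_schmidt([(0, 1, 1, 0), (0, 4, 0, 1), (1, -1, 1, 0), (1, 3, 0, 1)])
for i in range(4):
    for j in range(i):
        assert abs(W[i] @ W[j]) < 1e-12   # pairwise orthogonality
    assert np.isclose(np.linalg.norm(W[i]), 1.0)  # unit norms
print(W[0])                               # [0, 1/sqrt(2), 1/sqrt(2), 0]
```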
5.3 Orthogonal complement

Definition 5.21. Let U be a subset of the inner product space V. The orthogonal complement of U, denoted $U^\perp$, is the set of those vectors in V that are orthogonal to every vector in U, i.e.:
$$U^\perp = \{v \in V \mid \langle v, u\rangle = 0,\ \forall u \in U\}.$$
It can easily be verified that $U^\perp$ is a subspace of V, that $V^\perp = \{0\}$ and $\{0\}^\perp = V$, as well as that $U_1 \subseteq U_2 \Rightarrow U_2^\perp \subseteq U_1^\perp$.
Theorem 5.22. If U is a subspace of V, then
$$V = U \oplus U^\perp.$$
Proof. Suppose that U is a subspace of V. We will show first that
$$V = U + U^\perp.$$
Let $\{e_1, \ldots, e_m\}$ be an orthonormal basis of U and $v \in V$. We have
$$v = (\langle v, e_1\rangle e_1 + \cdots + \langle v, e_m\rangle e_m) + (v - \langle v, e_1\rangle e_1 - \cdots - \langle v, e_m\rangle e_m).$$
Denote the first vector by u and the second by w. Clearly $u \in U$. For each $j \in \{1, 2, \ldots, m\}$ one has
$$\langle w, e_j\rangle = \langle v, e_j\rangle - \langle v, e_j\rangle = 0.$$
Thus w is orthogonal to every vector in the basis of U, that is, $w \in U^\perp$; consequently
$$V = U + U^\perp.$$
We will show now that $U \cap U^\perp = \{0\}$. Suppose that $v \in U \cap U^\perp$. Then v is orthogonal to every vector in U, hence $\langle v, v\rangle = 0$, that is, $v = 0$. The relations $V = U + U^\perp$ and $U \cap U^\perp = \{0\}$ imply the conclusion of the theorem.
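The decomposition used in the proof is easy to compute explicitly. A short Python sketch (assuming numpy), with the orthonormal basis of U stored as the columns of a matrix E:

```python
import numpy as np

def decompose(v, E):
    """Split v = u + w with u in U and w in the orthogonal complement of U;
    the columns of E form an orthonormal basis of U."""
    u = E @ (E.T @ v)          # u = sum_j <v, e_j> e_j
    return u, v - u

# an orthonormal basis of a 2-dimensional subspace of R^4 (via QR)
E = np.linalg.qr(np.random.default_rng(1).standard_normal((4, 2)))[0]
v = np.array([1.0, 2.0, 3.0, 4.0])
u, w = decompose(v, E)
assert np.allclose(E.T @ w, 0)    # w is orthogonal to U
assert np.allclose(u + w, v)      # v = u + w
```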
Proposition 5.23. If $U_1, U_2$ are subspaces of V, then
a) $U_1 = (U_1^\perp)^\perp$;
b) $(U_1 + U_2)^\perp = U_1^\perp \cap U_2^\perp$;
c) $(U_1 \cap U_2)^\perp = U_1^\perp + U_2^\perp$.
Proof. a) We show first that $U_1 \subseteq (U_1^\perp)^\perp$. Let $u_1 \in U_1$. Then for all $v \in U_1^\perp$ one has $v \perp u_1$. In other words, $\langle u_1, v\rangle = 0$ for all $v \in U_1^\perp$. Hence $u_1 \in (U_1^\perp)^\perp$.
Assume now that $(U_1^\perp)^\perp \neq U_1$. Then there exists $u_2 \in (U_1^\perp)^\perp \setminus U_1$. Since $V = U_1 \oplus U_1^\perp$, we obtain that there exists $u_1 \in U_1$ such that $u_2 - u_1 \in U_1^\perp$ (*). On the other hand, according to the first part of the proof, $u_1 \in (U_1^\perp)^\perp$, and $(U_1^\perp)^\perp$ is a linear subspace, hence $u_2 - u_1 \in (U_1^\perp)^\perp$. Hence, for all $v \in U_1^\perp$ we have $(u_2 - u_1) \perp v$ (**).
(*) and (**) imply that $(u_2 - u_1) \perp (u_2 - u_1)$, that is, $\langle u_2 - u_1, u_2 - u_1\rangle = 0$, which leads to $u_1 = u_2$, a contradiction.
b) For $v \in (U_1 + U_2)^\perp$ one has $\langle v, u_1 + u_2\rangle = 0$ for all $u_1 + u_2 \in U_1 + U_2$. By taking $u_2 = 0$ we obtain that $v \in U_1^\perp$, and by taking $u_1 = 0$ we obtain that $v \in U_2^\perp$. Hence $(U_1 + U_2)^\perp \subseteq U_1^\perp \cap U_2^\perp$.
Conversely, let $v \in U_1^\perp \cap U_2^\perp$. Then $\langle v, u_1\rangle = 0$ for all $u_1 \in U_1$ and $\langle v, u_2\rangle = 0$ for all $u_2 \in U_2$. Hence $\langle v, u_1 + u_2\rangle = 0$ for all $u_1 \in U_1$ and $u_2 \in U_2$, that is, $v \in (U_1 + U_2)^\perp$.
c) According to a), $((U_1 \cap U_2)^\perp)^\perp = U_1 \cap U_2$. According to b) and a), $(U_1^\perp + U_2^\perp)^\perp = (U_1^\perp)^\perp \cap (U_2^\perp)^\perp = U_1 \cap U_2$. Hence $((U_1 \cap U_2)^\perp)^\perp = (U_1^\perp + U_2^\perp)^\perp$, which leads to $(U_1 \cap U_2)^\perp = U_1^\perp + U_2^\perp$.
Example 5.24. Let $U = \{(x_1, x_2, x_3, x_4) \in \mathbb{R}^4 \mid x_1 - x_2 + x_3 - x_4 = 0\}$. Knowing that U is a subspace of $\mathbb{R}^4$, compute $\dim U$ and $U^\perp$.
We have
$$U = \{(x_1, x_2, x_3, x_4) \in \mathbb{R}^4 \mid x_4 = x_1 - x_2 + x_3\} = \{(x_1, x_2, x_3, x_1 - x_2 + x_3) \mid x_1, x_2, x_3 \in \mathbb{R}\}$$
$$= \{x_1(1,0,0,1) + x_2(0,1,0,-1) + x_3(0,0,1,1) \mid x_1, x_2, x_3 \in \mathbb{R}\}$$
$$= \mathrm{span}\{(1,0,0,1), (0,1,0,-1), (0,0,1,1)\}.$$
The three vectors $(1,0,0,1), (0,1,0,-1), (0,0,1,1)$ are linearly independent (the rank of the matrix they form is 3), so they form a basis of U and $\dim U = 3$. The dimension formula
$$\dim U + \dim U^\perp = \dim \mathbb{R}^4$$
tells us that $\dim U^\perp = 1$, so $U^\perp$ is generated by a single vector. A vector that generates $U^\perp$ is $u^\perp = (1,-1,1,-1)$, the vector formed by the coefficients that appear in the linear equation that defines U. This is true because the left-hand side of the equation is exactly the scalar product between $u^\perp = (1,-1,1,-1)$ and a vector $v = (x_1, x_2, x_3, x_4) \in U$.
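Numerically, both U and $U^\perp$ can be read off the coefficient matrix of the defining equation; a sketch assuming numpy and scipy are available:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, -1.0, 1.0, -1.0]])   # x1 - x2 + x3 - x4 = 0
U = null_space(A)                        # columns: an orthonormal basis of U
print(U.shape[1])                        # dim U = 3

u_perp = A[0]                            # the coefficient vector spans U^perp
assert np.allclose(U.T @ u_perp, 0)      # orthogonal to every vector of U
```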
5.4 Linear manifolds

Recall that a linear manifold in V is a set of the form $L = v_0 + V_L = \{v_0 + v \mid v \in V_L\}$, where $v_0 \in V$ and $V_L$ is a subspace of V, called the director subspace of L. The following properties are easily verified:
- if $v_0 \in V_L$ then $L = V_L$;
- $v_0 \in L$, because $v_0 = v_0 + 0 \in v_0 + V_L$;
- for $v_1, v_2 \in L$ we have $v_1 - v_2 \in V_L$;
- for every $v_1 \in L$ we have $L = v_1 + V_L$;
- $L_1 = L_2$, where $L_1 = v_0 + V_{L_1}$ and $L_2 = v_0' + V_{L_2}$, if and only if $V_{L_1} = V_{L_2}$ and $v_0 - v_0' \in V_{L_1}$.
Definition 5.27. We would like to emphasize that:
1. The dimension of a linear manifold is the dimension of its director subspace.
2. Two linear manifolds $L_1$ and $L_2$ are called orthogonal if $V_{L_1} \perp V_{L_2}$.
3. Two linear manifolds $L_1$ and $L_2$ are called parallel if $V_{L_1} \subseteq V_{L_2}$ or $V_{L_2} \subseteq V_{L_1}$.
Let $L = v_0 + V_L$ be a linear manifold in a finite dimensional vector space V. For $\dim L = k \le n = \dim V$ one can choose in the director subspace $V_L$ a finite basis $\{v_1, \ldots, v_k\}$. We have
$$L = \{v = v_0 + \lambda_1 v_1 + \cdots + \lambda_k v_k \mid \lambda_i \in F,\ i = \overline{1,k}\}.$$
We can consider an arbitrary (fixed) basis in V, let's say $E = \{e_1, \ldots, e_n\}$, and if we use column vectors for the coordinates in this basis, i.e. $v_{[E]} = (x_1, \ldots, x_n)^\top$, $v_{0[E]} = (x_{01}, \ldots, x_{0n})^\top$, $v_{j[E]} = (x_{1j}, \ldots, x_{nj})^\top$, $j = \overline{1,k}$, one has the parametric equations of the linear manifold:
$$x_i = x_{0i} + \lambda_1 x_{i1} + \cdots + \lambda_k x_{ik}, \quad i = \overline{1,n},$$
where the vectors $v_1, \ldots, v_k$ are linearly independent.
Remark 5.29. In fact the map constructed in the previous theorem is nothing but the projection on U parallel to the space $\mathrm{span}\{e_1, \ldots, e_k\}$.
Theorem 5.30. Let V, U be two linear spaces over the same field F. If $T: V \to U$ is a surjective linear map, then for every $u_0 \in U$ the set $L = \{v \in V \mid T(v) = u_0\}$ is a linear manifold.
Proof. T being surjective, there exists $v_0 \in V$ with $T(v_0) = u_0$. We will show that
$$\{v - v_0 \mid v \in L\} = \ker T.$$
Let $v \in L$. We have $T(v - v_0) = T(v) - T(v_0) = 0$, so $\{v - v_0 \mid v \in L\} \subseteq \ker T$. Let $v_1 \in \ker T$, i.e. $T(v_1) = 0$. Write $v_1 = (v_1 + v_0) - v_0$. Since $T(v_1 + v_0) = u_0$, we have $v_1 + v_0 \in L$. Hence $v_1 \in \{v - v_0 \mid v \in L\}$, or, in other words, $\ker T \subseteq \{v - v_0 \mid v \in L\}$. Consequently $L = v_0 + \ker T$, which shows that L is a linear manifold.
The previous theorems give rise to the next:
Theorem 5.31. Let V be a linear space of dimension n. Then, for every linear manifold $L \subseteq V$ of dimension $\dim L = k \le n$, there exists an $(n-k)$-dimensional vector space U, a surjective linear map $T: V \to U$ and a vector $u \in U$ such that $L = \{v \in V \mid T(v) = u\}$.
Proof. Indeed, consider $L = v_0 + V_L$, where the dimension of the director subspace $V_L$ is k. Choose a basis $\{e_1, \ldots, e_k\}$ in $V_L$ and complete it to a basis $\{e_1, \ldots, e_k, e_{k+1}, \ldots, e_n\}$ of V. Consider $U = \mathrm{span}\{e_{k+1}, \ldots, e_n\}$. Obviously $\dim U = n - k$. According to a previous theorem the linear map
$$T: V \to U, \quad T(\alpha_1 e_1 + \cdots + \alpha_k e_k + \alpha_{k+1} e_{k+1} + \cdots + \alpha_n e_n) = \alpha_{k+1} e_{k+1} + \cdots + \alpha_n e_n$$
is surjective and $\ker T = V_L$. Let $T(v_0) = u$. Then, according to the proof of the previous theorem, $L = \{v \in V \mid T(v) = u\}$.
Linear manifolds
112
Remark 5.32. If we choose in V and U two bases and we write the linear map by
matrix notation MT v u we have the implicit equations of the linear manifold L,
$
&
..
.
xv, v1 y u1
&
..
.
%xv, vp y up
113
where the vectors v1 , . . . vp are linearly independent. The director subspace is given
by
$
xv, v1 y 0
&
..
.
%xv, vp y 0
so, the vectors v1 , . . . , vp are orthogonal to the director subspace VL .
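In coordinates, a manifold given by implicit equations is therefore a particular solution plus the kernel of the coefficient matrix. A Python sketch (assuming numpy and scipy; the system chosen here happens to describe the manifold L of Example 5.39 below):

```python
import numpy as np
from scipy.linalg import null_space

M = np.array([[1.0,  1.0, 0.0, 1.0],       # rows play the role of v_1, ..., v_p
              [1.0, -2.0, 1.0, 1.0]])
u = np.array([2.0, 3.0])

v0 = np.linalg.lstsq(M, u, rcond=None)[0]  # one particular solution
VL = null_space(M)                         # basis of the director subspace
assert np.allclose(M @ v0, u)

t = np.array([0.7, -1.3])                  # arbitrary parameters
assert np.allclose(M @ (v0 + VL @ t), u)   # v0 + VL t stays on the manifold
```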
5.5 Distances

In this section we will explain how to measure the distance between certain linear sets, namely linear manifolds.
Let $(V, \langle\cdot,\cdot\rangle)$ be an inner product space and consider the vectors $v_i \in V$, $i = \overline{1,k}$. The determinant
$$G(v_1, \ldots, v_k) = \begin{vmatrix} \langle v_1, v_1\rangle & \langle v_1, v_2\rangle & \ldots & \langle v_1, v_k\rangle \\ \langle v_2, v_1\rangle & \langle v_2, v_2\rangle & \ldots & \langle v_2, v_k\rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle v_k, v_1\rangle & \langle v_k, v_2\rangle & \ldots & \langle v_k, v_k\rangle \end{vmatrix}$$
is the Gram determinant of the vectors $v_1, \ldots, v_k$. If these vectors are linearly dependent, a nontrivial combination $v = x_1 v_1 + \cdots + x_k v_k = 0$ yields the homogeneous system
$$\begin{cases} \langle v_1, v\rangle = 0 \\ \quad\vdots \\ \langle v_k, v\rangle = 0 \end{cases}$$
with the nontrivial solution $(x_1, \ldots, x_k)$, hence $G(v_1, \ldots, v_k) = 0$.
The distance between a vector $v \in V$ and a nonempty set $U \subseteq V$ is defined as
$$d(v, U) = \inf_{w \in U} \|v - w\|.$$
Remark 5.36. The linear structure implies a very simple but useful fact:
$$d(v, U) = d(v + w, w + U)$$
for every $v, w \in V$ and $U \subseteq V$; that is, the linear structure implies that the distance is invariant under translations.
We are interested in the special case when U is a subspace.
Proposition 5.37. The distance between a vector $v \in V$ and a subspace U is given by
$$d(v, U) = \|v^\perp\| = \sqrt{\frac{G(e_1, \ldots, e_k, v)}{G(e_1, \ldots, e_k)}},$$
where $\{e_1, \ldots, e_k\}$ is a basis of U and $v = v_1 + v^\perp$ with $v_1 \in U$, $v^\perp \in U^\perp$.
Proof. Let $u \in U$. We have
$$\|v - u\|^2 = \langle v^\perp + v_1 - u, v^\perp + v_1 - u\rangle = \langle v^\perp, v^\perp\rangle + \langle v_1 - u, v_1 - u\rangle \ge \langle v^\perp, v^\perp\rangle,$$
with equality for $u = v_1$; hence $d(v, U) = \|v^\perp\|$. The second part of the equality, i.e. $\|v^\perp\| = \sqrt{\frac{G(e_1, \ldots, e_k, v)}{G(e_1, \ldots, e_k)}}$, follows from the previous remark.
Definition 5.38. If $e_1, \ldots, e_k$ are vectors in V, the volume of the k-parallelepiped constructed on the vectors $e_1, \ldots, e_k$ is defined by $V_k(e_1, \ldots, e_k) = \sqrt{G(e_1, \ldots, e_k)}$.
For a linear manifold $L = v_0 + V_L$ one has
$$d(v, L) = \inf_{w \in L} \|v - w\| = \inf_{v_L \in V_L} d(v - v_0, v_L) = d(v - v_0, V_L).$$
Finally,
$$d(v, L) = d(v - v_0, V_L) = \sqrt{\frac{G(e_1, \ldots, e_k, v - v_0)}{G(e_1, \ldots, e_k)}},$$
where $e_1, \ldots, e_k$ is a basis in $V_L$.
Example 5.39. Consider the linear manifolds
$$L = \{(x,y,z,t) \in \mathbb{R}^4 \mid x + y + t = 2,\ x - 2y + z + t = 3\},$$
$$K = \{(x,y,z,t) \in \mathbb{R}^4 \mid x + y + z - t = 1,\ x + y + z + t = 3\}.$$
Find the director subspaces $V_L$, $V_K$ and a basis in $V_L \cap V_K$. Find the distance of $v = (1, 0, 2, 2)$ from L, respectively K, and show that the distance between L and K is 0.
Since $L = v_0 + V_L$ and $K = u_0 + V_K$, it follows that $V_L = L - v_0$ and $V_K = K - u_0$ for some $v_0 \in L$, $u_0 \in K$. By taking $x = y = 0$ in the equations that describe L we obtain $v_0 = (0, 0, 1, 2)$; similarly, $u_0 = (0, 0, 2, 1) \in K$. The director subspaces are the solution sets of the corresponding homogeneous systems:
$$V_L = \mathrm{span}\{e_1 = (1,0,0,-1),\ e_2 = (0,1,3,-1)\}, \qquad V_K = \mathrm{span}\{e_3 = (1,0,-1,0),\ e_4 = (0,1,-1,0)\}.$$
Moreover, $V_L \cap V_K = \{0\}$: a common vector has the form $(a, b, 3b, -a-b)$ and must satisfy $-a - b = 0$ and $3b = -a - b$, forcing $a = b = 0$. Hence
$$d(v, L) = d(v - v_0, V_L) = \sqrt{\frac{G(e_1, e_2, v - v_0)}{G(e_1, e_2)}} = \sqrt{\frac{19}{21}},$$
meanwhile
$$d(v, K) = d(v - u_0, V_K) = \sqrt{\frac{G(e_3, e_4, v - u_0)}{G(e_3, e_4)}} = \sqrt{\frac{4}{3}}.$$
118
x`y`t2
& x 2y ` z ` t 3
is
It is obvious that K X L H, since the system
x`y`zt1
% x`y`z`t3
consistent, having solution p1, 0, 1, 1q, hence we must have
dpL, Kq 0.
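The Gram-determinant formula makes these computations mechanical; a Python sketch (assuming numpy) that reproduces both distances of the example:

```python
import numpy as np

def gram(*vectors):
    """Gram determinant G(v_1, ..., v_k)."""
    V = np.array(vectors, dtype=float)
    return np.linalg.det(V @ V.T)

def dist(v, v0, basis):
    """d(v, L) for the manifold L = v0 + span(basis)."""
    return np.sqrt(gram(*basis, v - v0) / gram(*basis))

v = np.array([1.0, 0, 2, 2])
dL = dist(v, np.array([0.0, 0, 1, 2]),
          [np.array([1.0, 0, 0, -1]), np.array([0.0, 1, 3, -1])])
dK = dist(v, np.array([0.0, 0, 2, 1]),
          [np.array([1.0, 0, -1, 0]), np.array([0.0, 1, -1, 0])])
assert np.isclose(dL, np.sqrt(19 / 21))
assert np.isclose(dK, np.sqrt(4 / 3))
```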
Let us consider now the hyperplane H of equation
$$\langle v - v_0, n\rangle = 0.$$
The director subspace is $V_H = \{v \in V \mid \langle v, n\rangle = 0\}$ and the distance is
$$d(v, H) = d(v - v_0, V_H).$$
One can decompose $v - v_0 = \lambda n + v_H$, where $v_H$ is the orthogonal projection of $v - v_0$ on $V_H$ and $\lambda n$ is the normal component of $v - v_0$ with respect to $V_H$. It means that
$$d(v, H) = \|\lambda n\| = |\lambda|\,\|n\|.$$
Let us compute a little now, taking into account the previous observations about the tangential and normal parts:
$$\langle v - v_0, n\rangle = \langle \lambda n + v_H, n\rangle = \lambda\langle n, n\rangle + \langle v_H, n\rangle = \lambda\|n\|^2 + 0.$$
So we obtained
$$|\lambda|\,\|n\| = \frac{|\langle v - v_0, n\rangle|}{\|n\|},$$
that is,
$$d(v, H) = \frac{|\langle v - v_0, n\rangle|}{\|n\|}. \tag{5.1}$$
In the case that we have an orthonormal basis at hand, the equation of the hyperplane H is
$$a_1 x_1 + \cdots + a_k x_k + b = 0,$$
so the relation is now
$$d(v, H) = \frac{|a_1 v_1 + \cdots + a_k v_k + b|}{\sqrt{a_1^2 + \cdots + a_k^2}}. \tag{5.2}$$
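Formula (5.2) in code, for an assumed plane $x + 2y + 2z - 9 = 0$ in $\mathbb{R}^3$ (a sketch using numpy; the plane is hypothetical illustration data):

```python
import numpy as np

def dist_to_hyperplane(v, a, b):
    """d(v, H) for the hyperplane H: a_1 x_1 + ... + a_k x_k + b = 0,
    i.e. formula (5.2)."""
    return abs(a @ v + b) / np.linalg.norm(a)

a, b = np.array([1.0, 2.0, 2.0]), -9.0        # assumed plane x + 2y + 2z - 9 = 0
print(dist_to_hyperplane(np.zeros(3), a, b))  # |0 - 9| / 3 = 3.0
```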
In $\mathbb{R}^n$ with the standard scalar product $\langle x, y\rangle = \sum_{i=1}^{n} x_i y_i$, the Gram determinant yields all the classical distance formulas of analytic geometry. For a point M with position vector $x_M$, lines $D_1, D_2$ passing through the points $x_1, x_2$ with direction vectors $d_1, d_2$, a plane P through $x_P$ spanned by $v_1, v_2$, and a hyperplane H of equation $\langle x, n\rangle + b = 0$, one has:
$$d(M, D_1) = \sqrt{\frac{G(x_M - x_1, d_1)}{G(d_1)}};$$
$$d(M, P) = \sqrt{\frac{G(x_M - x_P, v_1, v_2)}{G(v_1, v_2)}};$$
$$d(D_1, D_2) = \sqrt{\frac{G(x_1 - x_2, d_1, d_2)}{G(d_1, d_2)}} \ \text{ if } D_1 \not\parallel D_2, \qquad d(D_1, D_2) = \sqrt{\frac{G(x_1 - x_2, d_1)}{G(d_1)}} \ \text{ if } D_1 \parallel D_2;$$
$$d(M, H) = \frac{|\langle x_M, n\rangle + b|}{\|n\|};$$
$$d(D_1, P) = \sqrt{\frac{G(x_1 - x_P, d_1, v_1, v_2)}{G(d_1, v_1, v_2)}} \ \text{ if } D_1 \not\parallel P;$$
$$d(D, H) = \frac{|\langle x_1, n\rangle + b|}{\|n\|} \ \text{ if } D \parallel H,$$
where in the last formula $x_1$ is any point of D.
5.6 Problems
Problem 5.6.5. Prove that
$$\left(\sum_{i=1}^{n} a_i b_i\right)^2 \le \left(\sum_{i=1}^{n} i\, a_i^2\right)\left(\sum_{i=1}^{n} \frac{1}{i}\, b_i^2\right),$$
for all $a_i, b_i \in \mathbb{R}$, $i = \overline{1,n}$.
Problem 5.6.6. Let S be the subspace of the inner product space $\mathbb{R}_3[X]$, the space of polynomials of degree at most 3, generated by the polynomials $1 - x^2$ and $2 - x + x^2$, where $\langle f, g\rangle = \int_0^1 f(x)g(x)\,dx$. Find a basis for the orthogonal complement of S.
Problem 5.6.7. Let V be a real inner product space. Show that
$$\langle u, v\rangle = \frac{\|u+v\|^2 - \|u-v\|^2}{4}, \quad \forall u, v \in V.$$
Problem 5.6.8. Show that the family of functions
$$\left\{\frac{1}{\sqrt{2\pi}},\ \frac{\cos nx}{\sqrt{\pi}},\ \frac{\sin nx}{\sqrt{\pi}} \ \middle|\ n \ge 1\right\}$$
is orthonormal in $C[-\pi, \pi]$, endowed with the scalar product
$$\langle f, g\rangle = \int_{-\pi}^{\pi} f(x)g(x)\,dx.$$
Problem 5.6.13. Show that the set of all vectors in $\mathbb{R}^n$ which are orthogonal to a given vector $v \in \mathbb{R}^n$ is a subspace of $\mathbb{R}^n$. What will its dimension be?
Problem 5.6.14. If S is a subspace of a finite dimensional real inner product space V, prove that $S^\perp \cong V/S$.
Problem 5.6.15. Let V be an inner product space and let $\{v_1, \ldots, v_m\}$ be a list of linearly independent vectors from V. How many orthonormal families $\{e_1, \ldots, e_m\}$ can be constructed by using the Gram-Schmidt procedure, such that
$$\mathrm{span}\{v_1, \ldots, v_i\} = \mathrm{span}\{e_1, \ldots, e_i\}, \quad \forall\, i = \overline{1,m}?$$
Problem 5.6.16. Orthonormalize the following list of vectors in $\mathbb{R}^4$:
$$\{(1, 11, 0, 1), (1, 2, 1, 1), (1, 1, 1, 0), (1, 1, 1, 1)\}.$$
Problem 5.6.17. Let V be a finite dimensional inner product space and let $U \subseteq V$ be a subspace. Show that
$$\dim U^\perp = \dim V - \dim U.$$
Problem 5.6.18. Let $\{e_1, \ldots, e_m\}$ be an orthonormal list in the inner product space V. Show that
$$\|v\|^2 = |\langle v, e_1\rangle|^2 + \cdots + |\langle v, e_m\rangle|^2$$
if and only if $v \in \mathrm{span}\{e_1, \ldots, e_m\}$.
Problem 5.6.19. Let V be a finite-dimensional real inner product space with a basis $\{e_1, \ldots, e_n\}$. Show that for any $u, w \in V$ it holds that
$$\langle u, w\rangle = [u]^\top G(e_1, \ldots, e_n)[w],$$
where $[u]$ is the coordinate vector (represented as a column matrix) of u with respect to the given basis and $G(e_1, \ldots, e_n)$ is the matrix having the same entries as the Gram determinant of $\{e_1, \ldots, e_n\}$.
Problem 5.6.20. Find the distance between the following linear manifolds.
6 Operators on inner product spaces

6.1 The adjoint of an operator

Theorem 6.1. (Riesz representation theorem) Let $(V, \langle\cdot,\cdot\rangle)$ be a Hilbert space over F and let $f: V \to F$ be a continuous linear functional. Then there exists a unique vector $v \in V$ such that
$$f(u) = \langle u, v\rangle \quad \forall u \in V.$$
Proof. We will present the proof only in the finite dimensional case. We show first that there is a vector $v \in V$ such that $f(u) = \langle u, v\rangle$. Let $\{e_1, \ldots, e_n\}$ be an orthonormal basis of V. One has
$$f(u) = f\left(\sum_{i=1}^{n} \langle u, e_i\rangle e_i\right) = \sum_{i=1}^{n} \langle u, e_i\rangle f(e_i) = \left\langle u, \sum_{i=1}^{n} \overline{f(e_i)}\, e_i \right\rangle,$$
so $v = \sum_{i=1}^{n} \overline{f(e_i)}\, e_i$ has the required property.
Let us consider another vector space W over F, with an inner product on it such that $(W, \langle\cdot,\cdot\rangle)$ becomes a Hilbert space. Let $T \in L(V, W)$ be a continuous operator in the topologies induced by the norms $\|v\|_V = \sqrt{\langle v, v\rangle_V}$, respectively $\|w\|_W = \sqrt{\langle w, w\rangle_W}$ (continuity in the sense used in analysis). We define now the adjoint of T as follows. Fix $w \in W$ and consider the linear functional on V which maps v into $\langle T(v), w\rangle_W$. It follows that there exists a unique vector $T^*(w) \in V$ such that
$$\langle v, T^*(w)\rangle_V = \langle T(v), w\rangle_W \quad \forall v \in V.$$
Among the basic properties of the adjoint, let us record the relation between kernel and image:
$$w \in \ker T^* \Leftrightarrow T^*(w) = 0 \Leftrightarrow \langle v, T^*(w)\rangle = 0 \ \forall v \in V \Leftrightarrow \langle T(v), w\rangle = 0 \ \forall v \in V \Leftrightarrow w \in (\mathrm{im}\, T)^\perp,$$
that is, $\ker T^* = (\mathrm{im}\, T)^\perp$.
Normal operators
132
pj, kq of MT the entry is xT pfk q, ej y, which equals to xfk , T pej qy, which equals to
xT pej q, fk y. In others words, MT equals to the complex conjugate of MT .
6.2 Normal operators

An operator $T \in L(V)$ is called normal if it commutes with its adjoint, i.e. $T T^* = T^* T$. For instance, the operator on $F^2$ whose matrix is
$$\begin{pmatrix} 2 & -3 \\ 3 & 2 \end{pmatrix}$$
is normal. Normality can be characterized as follows:
$$T T^* - T^* T = 0 \Leftrightarrow \langle (T T^* - T^* T)(v), v\rangle = 0 \text{ for all } v \in V$$
$$\Leftrightarrow \langle T T^*(v), v\rangle = \langle T^* T(v), v\rangle \text{ for all } v \in V \Leftrightarrow \|T^*(v)\|^2 = \|T(v)\|^2 \text{ for all } v \in V.$$
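The characterization is easy to test numerically; a Python sketch (assuming numpy) with the normal matrix mentioned above:

```python
import numpy as np

A = np.array([[2.0, -3.0],
              [3.0,  2.0]])
# A is normal: it commutes with its conjugate transpose
assert np.allclose(A @ A.conj().T, A.conj().T @ A)

v = np.array([1.0, 4.0])
# hence ||T v|| = ||T* v|| for every vector v
assert np.isclose(np.linalg.norm(A @ v), np.linalg.norm(A.conj().T @ v))
```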
for all $w \in E(\lambda_0)$. The first term in the inner product lies in $E(\lambda_0)$ by the previous statement. Take $w = T^*(v_0) - \overline{\lambda_0} v_0$ and it follows that $T^*(v_0) = \overline{\lambda_0} v_0$, i.e. the second assertion of the theorem.
Now follows the last statement. One has $T(v) = \lambda v$ and $T(w) = \mu w$. By the previous point $T^*(w) = \overline{\mu} w$, so
$$\lambda\langle v, w\rangle = \langle T(v), w\rangle = \langle v, T^*(w)\rangle = \langle v, \overline{\mu} w\rangle = \mu\langle v, w\rangle$$
(definition of the adjoint), which implies $(\lambda - \mu)\langle v, w\rangle = 0$. Since $\lambda \neq \mu$, it follows that $\langle v, w\rangle = 0$.
Proposition 6.11. If U is a T-invariant subspace of V, then $U^\perp$ is a $T^*$-invariant subspace of V.
Proof. Let $w \in U^\perp$ and $v \in U$. Then $T(v) \in U$, hence
$$\langle v, T^*(w)\rangle = \langle T(v), w\rangle = 0.$$
That is, $T^*(w) \in U^\perp$.
To prove the other direction, suppose that T is normal. Then there is an orthonormal basis $\{e_1, \ldots, e_n\}$ of V with respect to which the matrix of T is upper triangular,
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} & \ldots & a_{1,n} \\ 0 & a_{2,2} & \ldots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & a_{n,n} \end{pmatrix}.$$
We see that
$$\|T(e_1)\|^2 = |a_{1,1}|^2 \quad\text{and}\quad \|T^*(e_1)\|^2 = |a_{1,1}|^2 + \cdots + |a_{1,n}|^2.$$
Since T is normal, $\|T(e_1)\| = \|T^*(e_1)\|$, hence $a_{1,2} = \cdots = a_{1,n} = 0$. Similarly,
$$\|T(e_2)\|^2 = |a_{2,2}|^2 \quad\text{and}\quad \|T^*(e_2)\|^2 = |a_{2,2}|^2 + \cdots + |a_{2,n}|^2,$$
so $a_{2,3} = \cdots = a_{2,n} = 0$. Continuing in this way we conclude that A is diagonal.
6.3 Isometries

Theorem. Let V be a real inner product space and let $T \in L(V)$ be an isometry. Then there exists an orthonormal basis of V with respect to which T has a block diagonal matrix, each block being either $(1)$, $(-1)$, or a rotation block
$$\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
with $\theta \in (0, \pi)$.
Proof. The eigenvalues of T have modulus 1, hence they are of the form $\pm 1$ or $\cos\theta \pm i\sin\theta$. On the other hand, the matrix of T is similar to a diagonal matrix whose diagonal entries are the eigenvalues.
6.4 Self-adjoint operators

An operator $T \in L(V)$ is called self-adjoint if $T = T^*$. Consider, for example, the operator $T \in L(F^2)$ whose matrix is
$$\begin{pmatrix} 2 & b \\ 3 & 5 \end{pmatrix}.$$
Then T is self-adjoint if and only if $b = 3$. Indeed, for $(x, y) \in F^2$ one has $T(x, y) = (2x + by, 3x + 5y)$, hence for $(u, v) \in F^2$ it holds that
$$\langle T(x, y), (u, v)\rangle = (2x + by)u + (3x + 5y)v = \langle (x, y), (2u + 3v, bu + 5v)\rangle.$$
Thus $T^*(x, y) = (2x + 3y, bx + 5y)$. In conclusion, T is self-adjoint, i.e. $T = T^*$, if and only if $b = 3$.
It can easily be verified that the sum of two self-adjoint operators and the product of a self-adjoint operator by a real scalar is again a self-adjoint operator.
6.5 Problems
Problem 6.5.1. Suppose that A is a complex matrix with real eigenvalues which
can be diagonalized by a unitary matrix. Prove that A must be hermitian.
Problem 6.5.2. Prove or give a counterexample: the product of any two self-adjoint operators on a finite dimensional inner product space is self-adjoint.
Problem 6.5.3. Show that an upper triangular matrix is normal if and only if it
is diagonal.
Problem 6.5.4. Suppose $p \in L(V)$ is such that $p^2 = p$. Prove that p is an orthogonal projection if and only if p is self-adjoint.
Problem 6.5.5. Show that if V is a real inner product space, then the set of self-adjoint operators on V is a subspace of L(V). Show that if V is a complex inner product space, then the set of self-adjoint operators on V is not a subspace of L(V).
Problem 6.5.6. Show that if $\dim V \ge 2$, then the set of normal operators on V is not a subspace of L(V).
Problem 6.5.7. Let A be a normal matrix. Prove that A is unitary if and only if all its eigenvalues $\lambda$ satisfy $|\lambda| = 1$.
Problem 6.5.8. Let v be any unit vector in $\mathbb{C}^n$ and put $A = I_n - 2vv^*$. Prove that A is both hermitian and unitary. Deduce that $A = A^{-1}$.
Problem 6.5.9. Suppose V is a complex inner product space and $T \in L(V)$ is a normal operator such that $T^9 = T^8$. Prove that T is self-adjoint and $T^2 = T$.
Problem 6.5.10. Let A be a normal matrix. Show that A is hermitian if and
only if all its eigenvalues are real.
Problem 6.5.11. Prove that if $T \in L(V)$ is normal, then
$$\mathrm{im}\, T = \mathrm{im}\, T^*, \qquad \ker T^k = \ker T \quad\text{and}\quad \mathrm{im}\, T^k = \mathrm{im}\, T$$
for every positive integer k.
Problem 6.5.12. A complex matrix A is called skew-hermitian if $A^* = -A$. Prove the following statements.
7 Elements of geometry
(following joint notes with I. Rasa and D. Inoan)

7.1 Quadratic forms
Let V be a real vector space with a basis $E = \{e_1, \ldots, e_n\}$. A quadratic form on V is a map $Q: V \to \mathbb{R}$ given by
$$Q(x) = \sum_{i,j=1}^{n} a_{ij} x_i x_j = X^\top A X,$$
where
$$X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \quad\text{and}\quad A = \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{12} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \ldots & a_{nn} \end{pmatrix}.$$
The symmetric matrix A (notice that $a_{ij} = a_{ji}$) is called the matrix of the quadratic form. Being symmetric (and real), A is the matrix of a self-adjoint operator with respect to the basis E. This operator, which we call T, is diagonalizable and there exists a basis $B = \{b_1, \ldots, b_n\}$ formed by eigenvectors, with respect to which T has a diagonal matrix consisting of eigenvalues (also denoted by T):
$$T = \mathrm{diag}\{\lambda_1, \ldots, \lambda_n\}.$$
Let C be the transition matrix from E to B and let
$$X' = \begin{pmatrix} x_1' \\ x_2' \\ \vdots \\ x_n' \end{pmatrix}$$
be the coordinates of the initial vector written in B. We have
$$X = C X'.$$
Knowing that $T = C^{-1} A C$ and that $C^{-1} = C^\top$, we can compute
$$Q(x) = X^\top A X = (C X')^\top A (C X') = X'^\top C^\top A C X' = X'^\top T X' = \lambda_1 x_1'^2 + \cdots + \lambda_n x_n'^2.$$
In the study of quadratic forms an important role is played by the leading principal minors of A,
$$D_1 = a_{11}, \quad D_2 = \begin{vmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{vmatrix}, \quad \ldots, \quad D_n = \det A.$$
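This diagonalization is exactly what a symmetric eigendecomposition computes; a Python sketch (assuming numpy) on a sample $2 \times 2$ form:

```python
import numpy as np

A = np.array([[5.0, 2.0],
              [2.0, 8.0]])              # symmetric matrix of a quadratic form
lams, C = np.linalg.eigh(A)             # C is orthogonal, C^T A C is diagonal
assert np.allclose(C.T @ A @ C, np.diag(lams))

X = np.array([1.0, 2.0])
Xp = C.T @ X                            # new coordinates X' = C^T X
assert np.isclose(X @ A @ X,            # Q(x) = X^T A X
                  lams @ Xp**2)         # = lambda_1 x1'^2 + lambda_2 x2'^2
```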
7.2 Quadrics
The reduction to canonical form begins with a rotation: if R is the orthogonal matrix whose columns are normed eigenvectors of the matrix of the quadratic part, the coordinates change according to
$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = R \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}.$$
We know that with respect to the new coordinates
$$Q = \lambda_1 x'^2 + \lambda_2 y'^2 + \lambda_3 z'^2,$$
and thus the equation of the quadric reduces to the simpler form
$$\lambda_1 x'^2 + \lambda_2 y'^2 + \lambda_3 z'^2 + 2a'_{14} x' + 2a'_{24} y' + 2a'_{34} z' + a_{44} = 0.$$
To obtain the canonical form of the quadric we still have to perform another transformation, namely a translation. To complete this step we investigate three cases: (A) when A has three nonzero eigenvalues, (B) when one eigenvalue is zero, and (C) when two eigenvalues are equal to zero.
(A) For $\lambda_i \neq 0$, completing the squares gives
$$\lambda_1 (x' - x_0)^2 + \lambda_2 (y' - y_0)^2 + \lambda_3 (z' - z_0)^2 + a'_{44} = 0.$$
Consider the translation defined by
$$x'' = x' - x_0, \qquad y'' = y' - y_0, \qquad z'' = z' - z_0.$$
In the new coordinates the equation of the quadric reduces to the canonical form
$$\lambda_1 x''^2 + \lambda_2 y''^2 + \lambda_3 z''^2 + a'_{44} = 0.$$
The cases (B) and (C) can be treated similarly.
7.3 Conics
Conics
Studied since the time of ancient greek geometers, conic sections (or just conics)
are obtained, as their name shows, by intersecting a cone with a sectioning plane.
They have played a crucial role in the development of modern science, especially in
astronomy, the motion of earth round the sun taking place on a particular conic
called ellipse. Also, we point out the fact that the circle is a conic section, a special
case of ellipse.
The general equation of a conic is
$$a_{11} x^2 + 2a_{12} xy + a_{22} y^2 + 2a_{13} x + 2a_{23} y + a_{33} = 0.$$
The following two determinants obtained from the coefficients of the conic play a crucial role in the classification of conics:
$$\Delta = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{vmatrix} \quad\text{and}\quad D_2 = \begin{vmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{vmatrix}.$$
Notice that the second determinant corresponds to the quadratic form defined by the first three terms.
Conic sections can be classified as follows:
Degenerate conics, for which $\Delta = 0$. These include: two intersecting lines (when $D_2 < 0$), two parallel lines or one line (when $D_2 = 0$), and one point (when $D_2 > 0$).
Nondegenerate conics, for which $\Delta \neq 0$. Depending on $D_2$ we distinguish between the:
Ellipse $(D_2 > 0)$, whose canonical equation is
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1;$$
Hyperbola $(D_2 < 0)$, whose canonical equation is
$$\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1;$$
Parabola $(D_2 = 0)$, whose canonical equation is $y^2 = 2px$.
The reduction of a conic section to its canonical form is very similar to the procedure presented in the previous section for quadrics. Again, we must perform a rotation and a translation. We show how the reduction can be carried out by means of an example.
Example 7.1. Find the canonical form of $5x^2 + 4xy + 8y^2 - 32x - 56y + 80 = 0$.
The matrix of the quadratic part of this conic is
$$\begin{pmatrix} 5 & 2 \\ 2 & 8 \end{pmatrix}$$
and its eigenvalues are the roots of $\lambda^2 - 13\lambda + 36 = 0$. So $\lambda_1 = 9$ and $\lambda_2 = 4$, while two normed eigenvectors are $v_1 = \frac{1}{\sqrt{5}}(1, 2)$ and $v_2 = \frac{1}{\sqrt{5}}(-2, 1)$. The rotation matrix is thus
$$R = \begin{pmatrix} \frac{1}{\sqrt{5}} & -\frac{2}{\sqrt{5}} \\ \frac{2}{\sqrt{5}} & \frac{1}{\sqrt{5}} \end{pmatrix}.$$
We perform the rotation
$$\begin{pmatrix} x \\ y \end{pmatrix} = R \begin{pmatrix} x' \\ y' \end{pmatrix},$$
that is,
$$x = \frac{1}{\sqrt{5}}(x' - 2y'), \qquad y = \frac{1}{\sqrt{5}}(2x' + y').$$
By substituting these expressions in the initial equation we get
$$9x'^2 + 4y'^2 - \frac{144\sqrt{5}}{5}x' + \frac{8\sqrt{5}}{5}y' + 80 = 0.$$
To see the translation that we need to perform, we rewrite the above equation as follows:
$$9\left(x'^2 - 2\cdot\frac{8\sqrt{5}}{5}x' + \left(\frac{8\sqrt{5}}{5}\right)^2\right) + 4\left(y'^2 + 2\cdot\frac{\sqrt{5}}{5}y' + \left(\frac{\sqrt{5}}{5}\right)^2\right) - 9\left(\frac{8\sqrt{5}}{5}\right)^2 - 4\left(\frac{\sqrt{5}}{5}\right)^2 + 80 = 0.$$
Finally, since $-9\cdot\frac{64}{5} - 4\cdot\frac{1}{5} + 80 = -36$, we obtain
$$9\left(x' - \frac{8\sqrt{5}}{5}\right)^2 + 4\left(y' + \frac{\sqrt{5}}{5}\right)^2 - 36 = 0.$$
Thus, the translation $x'' = x' - \frac{8\sqrt{5}}{5}$, $y'' = y' + \frac{\sqrt{5}}{5}$ brings the conic to the canonical form
$$\frac{x''^2}{4} + \frac{y''^2}{9} = 1,$$
which is an ellipse.
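The whole reduction can be verified numerically: points of the canonical ellipse, mapped back through the translation and the rotation, must satisfy the original equation. A Python sketch (assuming numpy):

```python
import numpy as np

def conic(x, y):
    return 5*x**2 + 4*x*y + 8*y**2 - 32*x - 56*y + 80

s5 = np.sqrt(5.0)
for t in np.linspace(0.0, 2*np.pi, 7):
    xpp, ypp = 2*np.cos(t), 3*np.sin(t)   # point on x''^2/4 + y''^2/9 = 1
    xp, yp = xpp + 8/s5, ypp - 1/s5       # undo the translation
    x = (xp - 2*yp) / s5                  # undo the rotation
    y = (2*xp + yp) / s5
    assert abs(conic(x, y)) < 1e-8        # lies on the original conic
```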
7.4 Problems
Problem 7.4.1. Find the canonical form of the following quadric surfaces:
a) $2y^2 + 4xy - 8xz - 6x + 8y + 8 = 0$;
b) $3x^2 + y^2 + z^2 - 2x - 4z - 4 = 0$;
c) $xz = y$;
d) $x^2 + y^2 + 5z^2 - 6xy + 2xz - 2yz - 4x + 8y - 12z + 14 = 0$.
Problem 7.4.2. Find the canonical form of the following conics:
a) $x^2 - 6xy + 9y^2 + 20 = 0$;
b) $3y^2 - 4xy - 2y + 4x - 3 = 0$;
c) $5x^2 + 6xy + 2y^2 + 2y - 1 = 0$;
d) $xy = 1$.
Problem 7.4.3. Given the ellipsoid
$$(E): \frac{x^2}{4} + \frac{y^2}{3} + z^2 - 1 = 0,$$
write the equation of the tangent plane at the point $M\left(\frac{1}{2}, \frac{1}{2}, \frac{\sqrt{2}}{2}\right)$.
Problem 7.4.5. Find the equations of the planes that contain the line
$$\begin{cases} x = 1 \\ y = 0 \end{cases}$$
and are tangent to the sphere
$$(x - 1)^2 + y^2 + z^2 = 1.$$
Problem 7.4.6. Consider the hyperboloid of one sheet
$$\frac{x^2}{16} + \frac{y^2}{9} - \frac{z^2}{4} = 1.$$
Determine the straight lines that belong to this surface and pass through the point $P(4, 3, 2)$.
Problem 7.4.7. Find the equation of the circle whose center is located at $C(1, 2)$ and whose radius is $R = 2$.
Problem 7.4.8. Determine the center and the radius of the circle of equation $x^2 + y^2 + 2x - 4y - 4 = 0$.
Problem 7.4.9. Write the equation of the circle that passes through the points $A(1, 1)$, $B(1, 5)$, $C(4, 1)$.