
Linear Algebra

Department of Mathematics
Indian Institute of Technology Guwahati

January – May 2019

MA 102 (RA, RKS, MGPP, KVK)

Similarity and diagonalization

Topics:
Similarity transformation
Diagonalization of matrices and operators
Triangularization of complex matrices

Similar matrices

We have seen that diagonal and triangular matrices are nice in the
sense that their eigenvalues are transparently displayed.

It would be desirable to transform a square matrix to a diagonal or
triangular matrix in such a way that the two matrices have exactly
the same eigenvalues.

Gaussian elimination reduces a matrix to a triangular matrix.
Unfortunately, this process does not preserve the eigenvalues.

Definition: Let A, B ∈ Mn(F). Then A is said to be similar to B if
there is an invertible matrix P ∈ Mn(F) such that P⁻¹AP = B.
We write A ∼ B when A is similar to B.

The map A ↦ P⁻¹AP is called a similarity transformation of A.

Note that similarity of matrices is an equivalence relation on
Mn(F): it is reflexive, symmetric and transitive.
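The slides contain no code, but the key fact — a similarity transformation preserves the characteristic polynomial, hence the eigenvalues — is easy to check numerically. The matrices A and P below are illustrative choices, not from the slides; NumPy is assumed:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # illustrative matrix (not from the slides)
P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])   # upper triangular with det = 1, so invertible
B = np.linalg.inv(P) @ A @ P      # similarity transformation of A

# Similar matrices share the characteristic polynomial, hence the eigenvalues.
assert np.allclose(np.poly(A), np.poly(B))
assert np.allclose(np.trace(A), np.trace(B))
```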
Diagonalization

Remark: Note that A and B := P⁻¹AP have the same eigenvalues,
and Bv = λv ⇔ Au = λu, where u := Pv.

Example: Let A = [1 2; 0 −1] and B = [1 0; −2 −1] (rows separated
by semicolons). Then A ∼ B, since AP = PB, where P = [1 −1; 1 1].

Definition: A matrix A ∈ Mn(F) is said to be diagonalizable if
there is an invertible matrix P ∈ Mn(F) such that D := P⁻¹AP is
a diagonal matrix.
An LT T ∈ L(V) with dim(V) = n is said to be diagonalizable if
there exists an ordered basis B of V such that [T]_B is a diagonal
matrix.

Theorem: Let A ∈ Mn(F). Then A is diagonalizable ⇔ A has n
linearly independent eigenvectors. In particular, if A has n distinct
eigenvalues, then A is diagonalizable. (Is the converse true?)
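The example above can be verified directly: AP = PB is equivalent to P⁻¹AP = B, and the shared eigenvalues of A and B are 1 and −1. A short NumPy check (assumed dependency, not part of the slides):

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, -1.0]])
B = np.array([[1.0, 0.0], [-2.0, -1.0]])
P = np.array([[1.0, -1.0], [1.0, 1.0]])

# AP = PB, i.e. B = P^{-1} A P, so A ~ B.
assert np.allclose(A @ P, P @ B)
assert np.allclose(np.linalg.inv(P) @ A @ P, B)

# Similar matrices have the same eigenvalues (here 1 and -1).
assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B)))
```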
Diagonalization

Proof: Suppose A is diagonalizable and
P⁻¹AP = D = diag(λ1, …, λn). Then AP = PD.
Now, if P = [v1 v2 ··· vn], then {v1, …, vn} is linearly
independent (why?) and

[λ1 v1  λ2 v2  ···  λn vn] = PD = AP = [Av1  Av2  ···  Avn],

which shows that Avj = λj vj for j = 1 : n. Hence each λj is an
eigenvalue of A and vj is an eigenvector corresponding to λj.

Conversely, if Avi = λi vi for i = 1 : n and {v1, …, vn} is linearly
independent, then A[v1, …, vn] = [v1, …, vn] diag(λ1, …, λn), so
P⁻¹AP is a diagonal matrix, where P = [v1, …, vn]. □

Exercise: Let A ∈ Mn(F). Show that the geometric multiplicity of
each eigenvalue of A is less than or equal to its algebraic
multiplicity.
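The proof's construction can be sketched numerically: stacking n linearly independent eigenvectors as the columns of P gives P⁻¹AP = D. The 2 × 2 matrix below is an illustrative choice with distinct eigenvalues 5 and 2:

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # illustrative matrix (assumed here)
lam, P = np.linalg.eig(A)                # columns of P are eigenvectors
D = np.linalg.inv(P) @ A @ P

assert np.allclose(D, np.diag(lam))            # P^{-1} A P is diagonal
assert np.allclose(A @ P, P @ np.diag(lam))    # A v_j = lambda_j v_j, columnwise
```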
Characterization

Exercise: Let λ1, λ2, …, λk be distinct eigenvalues of a matrix A.
Suppose Bi is a basis for the eigenspace Eλi. Then
B = B1 ∪ B2 ∪ ··· ∪ Bk is a linearly independent set.

Theorem: Let λ1, …, λm be distinct eigenvalues of A. Then A is
diagonalizable ⇔ dim(Eλi) equals the algebraic multiplicity of λi
for i = 1 : m.

The Diagonalization Theorem: For an n × n matrix A, the
following statements are equivalent:
1. A is diagonalizable.
2. The union B of the bases of the eigenspaces of A (as in the
   Exercise above) contains n vectors.
3. The algebraic multiplicity of each eigenvalue of A equals its
   geometric multiplicity.
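The geometric multiplicity dim(Eλ) = dim null(A − λI) = n − rank(A − λI) can be computed numerically. A minimal sketch (the helper `geometric_multiplicity` is introduced here for illustration; the 2 × 2 Jordan block shows the multiplicities disagreeing):

```python
import numpy as np

def geometric_multiplicity(A, lam, tol=1e-9):
    """dim E_lam = dim null(A - lam*I) = n - rank(A - lam*I)."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)

# The 2x2 Jordan block: 1 is the only eigenvalue, with algebraic
# multiplicity 2 but geometric multiplicity 1 -- not diagonalizable.
J = np.array([[1.0, 1.0], [0.0, 1.0]])
assert np.allclose(np.linalg.eigvals(J), [1.0, 1.0])
assert geometric_multiplicity(J, 1.0) == 1

# With distinct eigenvalues the multiplicities must agree (all equal 1).
D = np.array([[2.0, 0.0], [0.0, 5.0]])
assert geometric_multiplicity(D, 2.0) == 1
assert geometric_multiplicity(D, 5.0) == 1
```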
Examples

Example: The matrix A = [1 2 3; 0 4 5; 0 0 6] is diagonalizable
because 1, 4, 6 are (distinct) eigenvalues of A.
You can easily find that the eigenvalues 1, 4, 6 have eigenvectors
[1, 0, 0]ᵀ, [2, 3, 0]ᵀ and [16, 25, 10]ᵀ, respectively.

Example: The matrix A = [1 1 1; 0 1 1; 0 0 1] is not diagonalizable
because 1 is the only eigenvalue, with algebraic multiplicity 3 but
geometric multiplicity 1. Indeed, the eigenspace E1 is given by

E1 = null([0 1 1; 0 0 1; 0 0 0]) = span{[1, 0, 0]ᵀ}.
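Both examples can be checked by direct computation: each claimed eigenpair should satisfy Av = λv, and the second matrix should have a one-dimensional eigenspace for 1. A NumPy sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0], [0.0, 4.0, 5.0], [0.0, 0.0, 6.0]])
# The claimed eigenpairs from the slide: check A v = lambda v directly.
pairs = [(1, [1, 0, 0]), (4, [2, 3, 0]), (6, [16, 25, 10])]
for lam, v in pairs:
    v = np.array(v, dtype=float)
    assert np.allclose(A @ v, lam * v)

B = np.array([[1.0, 1.0, 1.0], [0.0, 1.0, 1.0], [0.0, 0.0, 1.0]])
# dim E_1 = 3 - rank(B - I) = 1, so B is not diagonalizable.
assert 3 - np.linalg.matrix_rank(B - np.eye(3)) == 1
```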
Example

Example: Let T : R2[x] → R2[x] be given by

(Tp)(x) := p(x) + (x + 1)p′(x).

Find the eigenvalues of T. Is T diagonalizable?

Solution: Consider the ordered basis B := [1, x, x²]. Then
[T]_B = [1 1 0; 0 2 2; 0 0 3].

Hence 1, 2, 3 are the eigenvalues of T; since they are distinct,
T is diagonalizable.

Write A := [T]_B. Now A − I = [0 1 0; 0 1 2; 0 0 2]
⇒ null(A − I) = span{e1},

A − 2I = [−1 1 0; 0 0 2; 0 0 1] ⇒ null(A − 2I) = span{[1, 1, 0]ᵀ}, and
Example

A − 3I = [−2 1 0; 0 −1 2; 0 0 0] ⇒ null(A − 3I) = span{[1, 2, 1]ᵀ}.

Hence p1(x) := 1, p2(x) := 1 + x and p3(x) := 1 + 2x + x² are
eigenvectors of T corresponding to the eigenvalues 1, 2 and 3,
respectively.

Consider the ordered basis C := [p1, p2, p3]. Then we have
[T]_C = diag(1, 2, 3). □

Exercise: Let T ∈ L(V) and dim(V) = n. Define the determinant
of T by det(T) := det([T]_B) for any ordered basis B of V.
Show that det(T) is independent of the choice of ordered basis.
Also show that T is invertible ⇔ det(T) ≠ 0. Further show
that det(T − λI_V) = det([T]_B − λI_n) for any λ ∈ F.
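The whole worked example can be replayed in coordinates: the columns of C below are the coordinate vectors of p1, p2, p3 in the basis [1, x, x²], and the change of basis diagonalizes [T]_B. A NumPy sketch:

```python
import numpy as np

# (Tp)(x) = p(x) + (x+1)p'(x) on R2[x], in the ordered basis B = [1, x, x^2]:
# T(1) = 1, T(x) = 1 + 2x, T(x^2) = 2x + 3x^2 give the columns below.
TB = np.array([[1.0, 1.0, 0.0], [0.0, 2.0, 2.0], [0.0, 0.0, 3.0]])
assert np.allclose(np.sort(np.linalg.eigvals(TB)), [1.0, 2.0, 3.0])

# Coordinate vectors of p1 = 1, p2 = 1 + x, p3 = 1 + 2x + x^2 ...
C = np.array([[1.0, 1.0, 1.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]])
# ... diagonalize T: [T]_C = C^{-1} [T]_B C = diag(1, 2, 3).
assert np.allclose(np.linalg.inv(C) @ TB @ C, np.diag([1.0, 2.0, 3.0]))
```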
Triangularization

Theorem: Let A ∈ Mn(C). Then there is an invertible matrix
P ∈ Mn(C) such that U := P⁻¹AP is upper triangular.

Proof: Apply induction on n. The result holds for n = 1. Assume
that the result holds for n − 1.

Let λ be an eigenvalue of A and v a corresponding eigenvector, so
that Av = λv. Choose an n × (n − 1) matrix V such that
P1 := [v, V] is invertible. Then

P1⁻¹AP1 = [λ h; 0 Â]

for some row vector h and some (n − 1) × (n − 1) matrix Â.

By the induction hypothesis, there exists an invertible P̂ such that
Û := P̂⁻¹ÂP̂ is upper triangular. Set

P := P1 [1 0; 0 P̂].

Then

P⁻¹AP = [λ hP̂; 0 Û],

which is upper triangular. □
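The inductive construction in the proof translates almost line for line into a recursive procedure. This is a numerical sketch only (it relies on floating-point eigenvectors, and uses a complete QR factorization of v to supply the invertible extension P1 = [v, V]); the function name `triangularize` is introduced here for illustration:

```python
import numpy as np

def triangularize(A):
    """Return an invertible P such that P^{-1} A P is upper triangular,
    following the inductive construction in the proof above."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    if n == 1:
        return np.eye(1, dtype=complex)     # base case: 1x1 is triangular
    _, vecs = np.linalg.eig(A)
    v = vecs[:, [0]]                        # one eigenvector of A
    # Complete QR of v gives a unitary matrix whose first column is
    # parallel to v, i.e. an invertible P1 = [v, V].
    P1, _ = np.linalg.qr(v, mode="complete")
    M = np.linalg.inv(P1) @ A @ P1          # = [[lam, h], [0, A_hat]] up to rounding
    P2 = np.eye(n, dtype=complex)
    P2[1:, 1:] = triangularize(M[1:, 1:])   # induction step on A_hat
    return P1 @ P2

A = np.array([[0.0, 1.0], [-1.0, 0.0]])     # rotation: complex eigenvalues
P = triangularize(A)
U = np.linalg.inv(P) @ A @ P
assert np.allclose(np.tril(U, k=-1), 0)     # strictly lower part vanishes
```

In practice one would use a Schur decomposition for this, but the recursion above mirrors the proof's structure directly.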
Trace and eigenvalues

Theorem: Let A ∈ Mn(C) with eigenvalues λ1, …, λn (counted
according to their algebraic multiplicities). Then

Trace(A) = λ1 + ··· + λn  and  det(A) = λ1 λ2 ··· λn.

Proof: By the triangularization theorem, there is a nonsingular
matrix P such that U := P⁻¹AP is upper triangular; the diagonal
entries of U are λ1, …, λn. Hence det(A) = det(U) = λ1 λ2 ··· λn.
Now Trace(A) = Trace(PUP⁻¹) = Trace(U) = λ1 + ··· + λn. □

Exercise: Let A, B ∈ Mn(F). Show that Trace(AB) = Trace(BA).
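Both identities in the theorem, and the exercise, can be checked on a random matrix (an illustrative choice; a real matrix may have complex eigenvalues, which NumPy returns with multiplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # illustrative random matrix
lam = np.linalg.eigvals(A)               # eigenvalues, counted with multiplicity

assert np.allclose(np.trace(A), lam.sum())        # trace = sum of eigenvalues
assert np.allclose(np.linalg.det(A), lam.prod())  # det = product of eigenvalues

# The exercise: Trace(AB) = Trace(BA), even though generally AB != BA.
B = rng.standard_normal((4, 4))
assert np.allclose(np.trace(A @ B), np.trace(B @ A))
```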
