
Diagonalization: examples Indistinct eigenvalues Eigenvectors and linear transformations

MATH 4A - Linear Algebra with Applications


Lecture 22: Diagonalization examples, and eigenvalues of linear
transformations

22 May 2019

Reading: §5.3-5.5
Recommended problems from §5.3: 1, 3, 5, 7, 11, 19, 21-32
Recommended problems from §5.4: 1-17 odd, 19-24

Lecture plan

1 Diagonalization: examples

2 Indistinct eigenvalues

3 Eigenvectors and linear transformations



iClicker 1

Is the following matrix diagonalizable?

   32   −1    π
    0  420  542
    0    0   −1

(a) Yes
(b) No

General procedure

1 Find the eigenvalues.


2 Find eigenvectors.
3 Construct P from the eigenvectors.
4 Construct D from the eigenvalues.
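As a numerical aside (not part of the lecture notes), the four steps above can be sketched in NumPy, where np.linalg.eig performs steps 1-3 at once; the matrix here is an illustrative choice:

```python
import numpy as np

# A small matrix with distinct eigenvalues (chosen for illustration).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps 1-2: eigenvalues and eigenvectors together.
eigenvalues, P = np.linalg.eig(A)   # step 3: columns of P are eigenvectors

# Step 4: D has the eigenvalues on its diagonal, in matching order.
D = np.diag(eigenvalues)

# The factorization A = P D P^{-1} holds when P is invertible.
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```

Note that np.linalg.eig orders the columns of P to match the order of the eigenvalues, so the factorization check works without any reshuffling.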

Example

Let

        1   3   3
A =    −3  −5  −3
        3   3   1

1. The characteristic polynomial is −(λ − 1)(λ + 2)², so the eigenvalues
are 1 with multiplicity 1, and −2 with multiplicity 2. Since one of the
eigenvalues has multiplicity greater than 1, A may not be
diagonalizable. We have to do more work.

Example
2. We should try to find bases for the two eigenspaces, which we
do by solving (A − λI )x = 0 for each of the two eigenvalues. If
the dimension of the eigenspace isn’t equal to the multiplicity,
then A is not diagonalizable.

Basis for the λ = 1 eigenspace:

v1 = (1, −1, 1)^T.

Basis for the λ = −2 eigenspace:

v2 = (−1, 1, 0)^T,   v3 = (−1, 0, 1)^T.

Since the eigenspace for λ = −2 is 2-dimensional, matching the
multiplicity of −2, we know A is diagonalizable.

Example

3. Let

P = [ v1 v2 v3 ] =
     1  −1  −1
    −1   1   0
     1   0   1

4. Let

D =
    1   0   0
    0  −2   0
    0   0  −2

It is easy to check that

A = PDP⁻¹.
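That check can also be done numerically; the following NumPy snippet is just a sanity check of the example above, not part of the lecture:

```python
import numpy as np

# The matrix A from the example and the P, D built from its eigendata.
A = np.array([[ 1,  3,  3],
              [-3, -5, -3],
              [ 3,  3,  1]], dtype=float)
P = np.array([[ 1, -1, -1],    # columns: v1 = (1,-1,1),
              [-1,  1,  0],    #          v2 = (-1,1,0),
              [ 1,  0,  1]], dtype=float)  # v3 = (-1,0,1)
D = np.diag([1.0, -2.0, -2.0])  # eigenvalues 1, -2, -2

# Confirm the diagonalization A = P D P^{-1}.
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```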

Non-example

The matrix

        2   4   3
B =    −4  −6  −3
        3   3   1

is not diagonalizable. Its characteristic polynomial is also
−(λ − 1)(λ + 2)², but, if we try, we can only find a
one-dimensional space of solutions to (B + 2I)x = 0.
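One way to see this failure numerically (a sketch, not part of the slides): by rank-nullity, the dimension of the λ = −2 eigenspace is n minus the rank of B + 2I.

```python
import numpy as np

# The non-example matrix B from the slide.
B = np.array([[ 2,  4,  3],
              [-4, -6, -3],
              [ 3,  3,  1]], dtype=float)

# Eigenspace for λ = -2 is the null space of B + 2I; its dimension is
# n - rank(B + 2I) by the rank-nullity theorem.
n = B.shape[0]
dim_eigenspace = n - np.linalg.matrix_rank(B + 2 * np.eye(n))

# The multiplicity of -2 is 2, but the eigenspace is only 1-dimensional,
# so B cannot be diagonalizable.
assert dim_eigenspace == 1
```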

The problem: iClicker 2

Having an eigenvalue with multiplicity more than 1 is a source of
trouble. We know that if an n × n matrix A has n distinct
eigenvalues, then A is diagonalizable. Is the converse true?

That is, does every diagonalizable matrix have distinct


eigenvalues?
(a) Yes
(b) No

OF COURSE NOT

The n × n identity matrix In is a diagonal matrix, so it’s


diagonalizable, and of course In has only one eigenvalue: 1 with
multiplicity n.

So the answer to the question of whether a matrix with an eigenvalue
of multiplicity > 1 is diagonalizable must be subtle.

Theorem
Let A be an n × n matrix whose distinct eigenvalues are λ1 , . . . , λp , p ≤ n.
a. For each k = 1, . . . , p, the eigenspace of λk has dimension at
most the multiplicity of λk .
b. A is diagonalizable if and only if the sum of the dimensions of
the eigenspaces equals n. The latter happens if and only if the
characteristic polynomial factors into linear factors and the
dimension of the eigenspace for each λk equals the multiplicity
of λk (that is, the number of times the factor (λ − λk ) appears
in the characteristic polynomial’s factorization).
c. If A is diagonalizable and Bk is a basis for the eigenspace of λk ,
then the set of all vectors in B1 , . . . , Bp forms a basis of Rn .

Recall: coordinate mappings

If V is an n-dimensional vector space with basis B = {v1 , . . . , vn },


the coordinate mapping with respect to B is a linear transformation

V → Rn

x ↦ [x]B = (c1 , . . . , cn )^T

where
x = c1 v1 + · · · + cn vn .
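Concretely, when V = Rn the coordinates c1, . . . , cn solve a linear system whose coefficient matrix has the basis vectors as columns. A small numerical illustration (the basis here is a hypothetical choice, not from the slides):

```python
import numpy as np

# A hypothetical basis B = {b1, b2} of R^2.
b1 = np.array([1.0, 0.0])
b2 = np.array([1.0, 1.0])
P_B = np.column_stack([b1, b2])   # basis vectors as columns

x = np.array([3.0, 2.0])

# [x]_B solves c1*b1 + c2*b2 = x, i.e. the linear system P_B @ c = x.
x_B = np.linalg.solve(P_B, x)

assert np.allclose(x_B, [1.0, 2.0])   # x = 1*b1 + 2*b2
assert np.allclose(P_B @ x_B, x)      # reconstruct x from its coordinates
```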

Matrices for linear transformations


If V and W are two vector spaces with bases B = {v1 , . . . , vn } and
C = {w1 , . . . , wm }, respectively, and if T : V → W is a linear
transformation, we can use the two coordinate mappings to find a
matrix description of T . Let
x = c1 v1 + · · · + cn vn ,

so

[x]B = (c1 , . . . , cn )^T.

By linearity of T ,

T (x) = c1 T (v1 ) + · · · + cn T (vn ),

and by linearity of the C coordinate mapping,

[T (x)]C = c1 [T (v1 )]C + · · · + cn [T (vn )]C .

Thus, if we construct the m × n matrix

M = [ [T (v1 )]C  [T (v2 )]C  · · ·  [T (vn )]C ],

we get the identity

[T (x)]C = M[x]B .

We call M the matrix for T relative to B and C.

Intuition for matrices of linear transformations

The matrix M makes a square of maps commute: on top, T sends
x ∈ V to T (x) ∈ W ; on the bottom, multiplication by M sends
[x]B ∈ Rn to M[x]B = [T (x)]C ∈ Rm. The vertical maps are the B-
and C-coordinate mappings.

Example

Suppose B = {b1 , b2 } is a basis for V and C = {c1 , c2 , c3 } is a


basis for W . Let T : V → W be the linear transformation such
that

T (b1 ) = 3c1 − 2c2 + 5c3 and T (b2 ) = 4c1 + 7c2 − c3 .

Then the matrix for T relative to B and C is

    3   4
   −2   7
    5  −1

Matrices for transformations with the same domain and codomain

We’re usually most interested in the case where T : V → V is a linear
transformation from a vector space back to itself. In this case, as
soon as we have a basis for the domain, we also have one for the
codomain! So we can talk about the matrix for T relative to just B
(as opposed to B and C).

iClicker

Consider P2 , the polynomials of degree at most 2, and the linear
transformation

T (a0 + a1 t + a2 t²) = a1 + 2a2 t.

What is the (2, 3) entry of the matrix of T with respect to the
standard basis {1, t, t²}?
(a) a2
(b) 2a2
(c) 0
(d) 1
(e) 2
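The answer can be checked by building the matrix column by column from the coordinates of T(1), T(t), T(t²) (a sketch to confirm the iClicker answer, not part of the slides):

```python
import numpy as np

# Matrix of T(a0 + a1 t + a2 t^2) = a1 + 2 a2 t relative to {1, t, t^2}.
# Columns are the coordinate vectors of the images of the basis:
#   T(1) = 0,  T(t) = 1,  T(t^2) = 2t.
M = np.column_stack([
    [0, 0, 0],   # [T(1)]   = coordinates of 0
    [1, 0, 0],   # [T(t)]   = coordinates of 1
    [0, 2, 0],   # [T(t^2)] = coordinates of 2t
])

# The (2, 3) entry (row 2, column 3; zero-indexed as M[1, 2]) is 2,
# so the answer is (e).
assert M[1, 2] == 2
```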

Recall: motivation

To make abstract vector spaces and linear transformations more


concrete, we need to pick a basis (“choose coordinates”) so we can
use matrix algebra, which we know is super useful.

In fact, this philosophy is useful even for linear transformations


from Rn to itself. As we will now see, picking different
coordinates/using a different basis (instead of the standard basis
B = {e1 , . . . , en }) can sometimes lead us to find a different matrix
that describes the same transformation in a much simpler way with
respect to this different basis.

Similarity and change of basis

Let A be an n × n matrix similar to C , so there exists an invertible


matrix P such that A = PCP −1 . Write

P = [ v1 · · · vn ].

Since P is invertible, we know the set of columns C = {v1 , . . . , vn }


forms a basis of Rn .

I claim C is the matrix for the transformation x 7→ Ax with respect
to the basis C = {v1 , . . . , vn }. (Compare this with the fact that A is the
matrix for the transformation x 7→ Ax with respect to the standard
basis.)

Interpreting the P in A = PCP −1

Remember I said that when studying a linear transformation with


the same domain and codomain T : V → V , we usually use the
same basis for both. On this slide, I won’t do that.

Let’s consider the identity transformation I : Rn → Rn . Of course,


with respect to the standard basis B = {e1 , . . . , en }, the matrix is
just the n × n identity matrix In .

Recall P = [ v1 · · · vn ], where C = {v1 , . . . , vn } is a basis. Then
P is the matrix for the identity transformation of Rn → Rn , but
where the domain has basis B and the codomain has basis C.

Diagonal matrix representations

Everything we just discussed is especially useful if we can make A


similar to a diagonal matrix D, because matrix multiplication with
D is super easy. So let’s spell it out as a theorem in this special
case:
Theorem
Suppose A = PDP −1 , where D is a diagonal n × n matrix. If B is
the basis of Rn formed from the columns of P, then D is the
B-matrix for the transformation x 7→ Ax.
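The theorem can be verified numerically on a small example (the matrices here are an illustrative choice, not from the slides): the B-matrix of x ↦ Ax is P⁻¹AP, which should come out equal to D.

```python
import numpy as np

# Illustrative choice of basis matrix P and diagonal D.
P = np.array([[1.0,  1.0],
              [1.0, -1.0]])        # columns form the basis B
D = np.diag([3.0, -1.0])
A = P @ D @ np.linalg.inv(P)       # so A = P D P^{-1} by construction

# The B-matrix of the transformation x -> Ax is P^{-1} A P,
# and the theorem says this equals D.
B_matrix = np.linalg.inv(P) @ A @ P
assert np.allclose(B_matrix, D)
```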

Intuition

Multiplication by A sends x to Ax = PDP⁻¹ x. In B-coordinates the
same map is multiplication by D: first P⁻¹ converts x to its
coordinate vector [x]B = P⁻¹ x, then D acts to give DP⁻¹ x, and
finally P converts back to Ax.
