
Homework # 8 Solutions

Math 102, Winter 2016

Exercise 1 Solve the system of differential equations

\[
y' = Ay, \qquad y(0) = c,
\]

where

\[
A = \begin{pmatrix} 3 & 1 & 1 \\ 0 & 2 & 0 \\ 1 & 1 & 3 \end{pmatrix}
\quad \text{and} \quad
c = \begin{pmatrix} 2 \\ 2 \\ 2 \end{pmatrix}.
\]

Solution Well,
\[
\det(A - \lambda I) = (2 - \lambda)\big((3 - \lambda)^2 - 1\big)
= (2 - \lambda)(\lambda^2 - 6\lambda + 8)
= (2 - \lambda)^2 (4 - \lambda),
\]
so the eigenvalues of A are 2 and 4. By solving (A − 2I)x = 0 and (A − 4I)x = 0, we find
\[
\operatorname{Nul}(A - 2I) = \operatorname{span}\left\{
\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix},
\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} \right\}
\quad \text{and} \quad
\operatorname{Nul}(A - 4I) = \operatorname{span}\left\{
\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \right\},
\]
so
\[
A = \begin{pmatrix} 1 & 1 & 1 \\ 0 & -1 & 0 \\ -1 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 4 \end{pmatrix}
\begin{pmatrix} 1 & 1 & 1 \\ 0 & -1 & 0 \\ -1 & 0 & 1 \end{pmatrix}^{-1}.
\]

With the diagonalization of A in hand, we may solve the differential equations
by finding c as a linear combination of the eigenvectors of A, namely by solving
\[
c = \begin{pmatrix} 2 \\ 2 \\ 2 \end{pmatrix}
= c_1 \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}
+ c_2 \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}
+ c_3 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}
\]
to find c1 = 1, c2 = −2, c3 = 3. Then the solution to the system of differential
equations is
\[
y = e^{2t} \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}
- 2e^{2t} \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}
+ 3e^{4t} \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}
= \begin{pmatrix} -e^{2t} + 3e^{4t} \\ 2e^{2t} \\ -e^{2t} + 3e^{4t} \end{pmatrix},
\]

and we verify
\[
y' = Ay = \begin{pmatrix} -2e^{2t} + 12e^{4t} \\ 4e^{2t} \\ -2e^{2t} + 12e^{4t} \end{pmatrix}
\]
and
\[
y(0) = \begin{pmatrix} 2 \\ 2 \\ 2 \end{pmatrix}.
\]
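As a sanity check, the computation above can also be reproduced numerically. The following NumPy sketch (an illustration, not part of the original solution) verifies the diagonalization, the coefficients c1, c2, c3, and that y(t) satisfies both the equation and the initial condition:

```python
import numpy as np

# The matrix A and initial condition c from Exercise 1.
A = np.array([[3.0, 1.0, 1.0],
              [0.0, 2.0, 0.0],
              [1.0, 1.0, 3.0]])
c = np.array([2.0, 2.0, 2.0])

# Eigenvector matrix P and diagonal D from the hand computation.
P = np.array([[ 1.0,  1.0, 1.0],
              [ 0.0, -1.0, 0.0],
              [-1.0,  0.0, 1.0]])
lam = np.array([2.0, 2.0, 4.0])
D = np.diag(lam)

# A should equal P D P^{-1}.
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# The coefficients solve P @ coeffs = c; expect (1, -2, 3).
coeffs = np.linalg.solve(P, c)
assert np.allclose(coeffs, [1.0, -2.0, 3.0])

# y(t) = sum_i coeffs_i * exp(lam_i t) * v_i, written as a matrix product.
def y(t):
    return P @ (np.exp(lam * t) * coeffs)

# Check y' = A y at an arbitrary time, and y(0) = c.
t = 0.7
y_prime = P @ (lam * np.exp(lam * t) * coeffs)
assert np.allclose(y_prime, A @ y(t))
assert np.allclose(y(0.0), c)
```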

Exercise 2 Prove that the trace of a matrix is equal to the sum of its eigenvalues.
Solution We begin by supposing A ∈ Rn×n is a matrix with eigenvalues
λ1 , . . . , λn . Following the proof suggested by the textbook, we consider that
det(A − λI) is a polynomial whose roots are the eigenvalues of A, and whose
coefficient of λ^n is (−1)^n; in other words,

\[
\det(A - \lambda I) = (\lambda_1 - \lambda)(\lambda_2 - \lambda) \cdots (\lambda_n - \lambda)
= (-\lambda)^n + (-\lambda)^{n-1}(\lambda_1 + \cdots + \lambda_n) + \cdots + \lambda_1 \lambda_2 \cdots \lambda_n,
\]

where we neglect the terms of order λ^1 through λ^{n−2}. Consequently, the coefficient
of (−λ)^{n−1} in the characteristic polynomial of A must be the sum of the
eigenvalues of A.
On the other hand, det(A − λI) may also be calculated by cofactor expansion.
Going along the first row, we have

\[
\det(A - \lambda I) = (a_{11} - \lambda)
\begin{vmatrix}
a_{22} - \lambda & \cdots & a_{2n} \\
\vdots & \ddots & \vdots \\
a_{n2} & \cdots & a_{nn} - \lambda
\end{vmatrix}
- a_{12}
\begin{vmatrix}
a_{21} & \cdots & a_{2n} \\
\vdots & \ddots & \vdots \\
a_{n1} & \cdots & a_{nn} - \lambda
\end{vmatrix}
+ \cdots.
\]

It may be observed that the only term here capable of producing a “λ^{n−1}” is the
first term, because each of the remaining terms contains at most (n − 2) factors
involving λ. Similarly, considering

\[
\begin{vmatrix}
a_{22} - \lambda & \cdots & a_{2n} \\
\vdots & \ddots & \vdots \\
a_{n2} & \cdots & a_{nn} - \lambda
\end{vmatrix}
= (a_{22} - \lambda)
\begin{vmatrix}
a_{33} - \lambda & \cdots & a_{3n} \\
\vdots & \ddots & \vdots \\
a_{n3} & \cdots & a_{nn} - \lambda
\end{vmatrix}
- a_{23}
\begin{vmatrix}
a_{32} & \cdots & a_{3n} \\
\vdots & \ddots & \vdots \\
a_{n2} & \cdots & a_{nn} - \lambda
\end{vmatrix}
+ \cdots,
\]

the only term here capable of producing (in conjunction with the (a_{11} − λ) factor)
a λ^{n−1} is the first term. By continuing this process all the way down the
determinants of the nested submatrices, we find that the coefficient of λ^{n−1} in
det(A − λI) comes entirely from the term

\[
(a_{11} - \lambda)(a_{22} - \lambda) \cdots (a_{nn} - \lambda)
= (-\lambda)^n + (-\lambda)^{n-1}(a_{11} + \cdots + a_{nn}) + \cdots.
\]

Therefore, the coefficient of (−λ)^{n−1} in det(A − λI) is (a11 + · · · + ann) = Tr(A).

Since the coefficient of (−λ)^{n−1} in det(A − λI) is both Tr(A) and λ1 + · · · + λn,
it follows that these two numbers are equal!

If this proof is difficult to read, consider the 2 × 2 case:
\[
\det \begin{pmatrix} a - \lambda & b \\ c & d - \lambda \end{pmatrix}
= (a - \lambda)(d - \lambda) - bc = \lambda^2 - \lambda(a + d) + (ad - bc),
\]
whereas of course
\[
\det(A - \lambda I) = (\lambda_1 - \lambda)(\lambda_2 - \lambda)
= \lambda^2 - \lambda(\lambda_1 + \lambda_2) + \lambda_1 \lambda_2;
\]
comparing coefficients, we see λ1 + λ2 = Tr(A) and λ1 · λ2 = det(A).
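The identities above are easy to test numerically. This NumPy sketch (an assumed illustration using random matrices, not part of the original proof) compares the trace with the sum of the eigenvalues, and, in the 2 × 2 case, the determinant with their product:

```python
import numpy as np

# Trace equals the sum of the eigenvalues for many random matrices.
rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((4, 4))
    eigvals = np.linalg.eigvals(A)  # may be complex; complex ones come in conjugate pairs
    assert np.isclose(np.trace(A), eigvals.sum().real)
    assert abs(eigvals.sum().imag) < 1e-9

# In the 2x2 case, the determinant equals the product of the eigenvalues.
B = rng.standard_normal((2, 2))
l1, l2 = np.linalg.eigvals(B)
assert np.isclose(np.linalg.det(B), (l1 * l2).real)
```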


Exercise 3 Explain what eigenvalues and eigenvectors are. Why does the
characteristic equation determine the eigenvalues? Explain why a matrix has
zero as an eigenvalue if and only if it is non-invertible. Finally, explain why
invertibility does not imply diagonalizability, nor vice versa.
Solution Given a square matrix A ∈ Rn×n , an eigenvalue of A is any number
λ such that, for some non-zero x ∈ Rn , Ax = λx. Any such vector is called an
eigenvector of A corresponding to the eigenvalue λ.
Notice that Ax = λx for a non-zero x if and only if (A − λI)x = 0 for the
same, non-zero x! This means that λ is an eigenvalue of A if and only if A − λI
has linearly dependent columns, which is to say that A − λI is non-invertible!
Because we know that a matrix is singular if and only if its determinant is zero,
this means that λ is an eigenvalue of A if and only if det(A − λI) = 0, which is
the characteristic equation.
If 0 is an eigenvalue of A, then Ax = 0 · x = 0 for some non-zero x, which
clearly means A is non-invertible. On the other hand, if A is non-invertible,
then Ax = 0 for some non-zero x, which is to say that Ax = 0 · x for some
non-zero x, which obviously means that 0 is an eigenvalue of A.
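This equivalence can be illustrated with a quick numerical check (a hypothetical example, not from the text): a singular matrix has 0 among its eigenvalues, while an invertible one does not.

```python
import numpy as np

# A rank-1, hence singular, matrix: its second row is twice its first.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.isclose(np.linalg.det(S), 0.0)
assert np.any(np.isclose(np.linalg.eigvals(S), 0.0))  # 0 is an eigenvalue

# An invertible matrix (det = 6): no eigenvalue equals 0.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
assert not np.isclose(np.linalg.det(M), 0.0)
assert not np.any(np.isclose(np.linalg.eigvals(M), 0.0))
```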
Invertibility and diagonalizability are independent properties because the in-
vertibility of A is determined by whether or not 0 is an eigenvalue of A, whereas
diagonalizability of A is determined by whether it has n linearly independent
eigenvectors. Not only is there no immediate reason to connect these two prop-
erties, but there are examples of matrices that have any combination of the two!
You may see examples in the following table:
                   Not diagonalizable        Diagonalizable

Not invertible     ( 0 1 )                   ( 1 0 )
                   ( 0 0 )                   ( 0 0 )

Invertible         ( 1 1 )                   ( 1 0 )
                   ( 0 1 )                   ( 0 1 )
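The four example matrices can also be checked programmatically. The sketch below (an assumption-laden illustration: diagonalizability is tested via the rank of the numerically computed eigenvector matrix, which works for these small examples but is not a robust general-purpose test) confirms each combination:

```python
import numpy as np

def is_invertible(A):
    # A square matrix is invertible iff its determinant is non-zero.
    return not np.isclose(np.linalg.det(A), 0.0)

def is_diagonalizable(A):
    # Heuristic for these small examples: A is diagonalizable iff the
    # eigenvector matrix returned by np.linalg.eig has full rank.
    _, V = np.linalg.eig(A)
    return np.linalg.matrix_rank(V, tol=1e-8) == A.shape[0]

# The four matrices from the table, with their expected properties.
cases = [
    (np.array([[0.0, 1.0], [0.0, 0.0]]), False, False),  # not inv, not diag
    (np.array([[1.0, 0.0], [0.0, 0.0]]), False, True),   # not inv, diag
    (np.array([[1.0, 1.0], [0.0, 1.0]]), True,  False),  # inv, not diag
    (np.array([[1.0, 0.0], [0.0, 1.0]]), True,  True),   # inv, diag
]
for A, inv_expected, diag_expected in cases:
    assert is_invertible(A) == inv_expected
    assert is_diagonalizable(A) == diag_expected
```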
