
MAST10008 Accelerated Mathematics 1 Version 2

Tutorial 8: Solutions

1. (a) Calculating Av we have
\[
A \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}
= \begin{pmatrix} 3 & 0 & -1 \\ 0 & 2 & 0 \\ 2 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}
= \begin{pmatrix} 2 \\ 0 \\ 2 \end{pmatrix}
= 2 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}
\]
so this is an eigenvector with eigenvalue λ = 2.

(b)
\[
A \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
= \begin{pmatrix} 3 & 0 & -1 \\ 0 & 2 & 0 \\ 2 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
= \begin{pmatrix} 3 \\ 0 \\ 2 \end{pmatrix}
\ne \lambda \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
\]
for any λ. Hence (1, 0, 0) is not an eigenvector of A.

2. (a) The eigenvalues are given by
\[
\det(A - \lambda I)
= \begin{vmatrix} \tfrac{1}{2} - \lambda & \tfrac{1}{4} \\[2pt] \tfrac{1}{2} & \tfrac{3}{4} - \lambda \end{vmatrix}
= \left(\tfrac{1}{2} - \lambda\right)\left(\tfrac{3}{4} - \lambda\right) - \tfrac{1}{8}
= \lambda^2 - \tfrac{5}{4}\lambda + \tfrac{1}{4}
= (\lambda - 1)\left(\lambda - \tfrac{1}{4}\right) = 0
\]
so λ = 1 or λ = 1/4.

For λ = 1: eigenvectors satisfy (A − λI)v = (A − I)v = 0. This has augmented matrix
\[
\begin{pmatrix} -\tfrac{1}{2} & \tfrac{1}{4} & 0 \\[2pt] \tfrac{1}{2} & -\tfrac{1}{4} & 0 \end{pmatrix}
\sim
\begin{pmatrix} -2 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
\]
Since there is no leading entry for v2 we set v2 = t and then v1 = t/2. The solution space is therefore {t(1/2, 1) : t ∈ R}. Taking t = 2 gives (1, 2) for the corresponding eigenvector.

For λ = 1/4: eigenvectors satisfy (A − λI)v = (A − (1/4)I)v = 0. This has augmented matrix
\[
\begin{pmatrix} \tfrac{1}{4} & \tfrac{1}{4} & 0 \\[2pt] \tfrac{1}{2} & \tfrac{1}{2} & 0 \end{pmatrix}
\sim
\begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
\]
Since there is no leading entry for v2 we set v2 = t and then v1 = −t. The solution space is therefore {t(−1, 1) : t ∈ R}. Taking t = 1 gives (−1, 1) for the corresponding eigenvector.

As there are 2 linearly independent eigenvectors (coming from 2 distinct eigenvalues), the matrix is diagonalisable.
\[
D = \begin{pmatrix} 1 & 0 \\ 0 & \tfrac{1}{4} \end{pmatrix}
\quad \text{and} \quad
P = \begin{pmatrix} 1 & -1 \\ 2 & 1 \end{pmatrix}.
\qquad \text{Check: } AP = PD = \begin{pmatrix} 1 & -\tfrac{1}{4} \\[2pt] 2 & \tfrac{1}{4} \end{pmatrix}.
\]

(b) Eigenvalues are given by
\[
\det(B - \lambda I) = \begin{vmatrix} 2 - \lambda & -3 \\ 0 & 2 - \lambda \end{vmatrix} = (2 - \lambda)^2 = 0
\]
so λ = 2.

For λ = 2: eigenvectors satisfy (B − λI)v = (B − 2I)v = 0. This has augmented matrix
\[
\begin{pmatrix} 0 & -3 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\sim
\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
\]
Since there is no leading entry for v1 we set v1 = t and then v2 = 0. The solution space is therefore {t(1, 0) : t ∈ R}. Taking t = 1 gives (1, 0) for the corresponding eigenvector.

As there is only one linearly independent eigenvector, the matrix is not diagonalisable.

(c) Eigenvalues are given by det(C − λI) = 0. We expand by cofactors along the first column to obtain:
\[
\begin{vmatrix} 4 - \lambda & -2 & -1 \\ -2 & 4 - \lambda & -1 \\ -1 & -1 & 1 - \lambda \end{vmatrix}
= (4 - \lambda)\big((4 - \lambda)(1 - \lambda) - 1\big) - (-2)\big({-2}(1 - \lambda) - 1\big) - 1\big(2 + (4 - \lambda)\big)
\]
\[
= (4 - \lambda)(\lambda^2 - 5\lambda + 3) - 4 + 4\lambda - 2 - 2 - 4 + \lambda
= -\lambda^3 + 9\lambda^2 - 18\lambda
= -\lambda(\lambda - 3)(\lambda - 6) = 0
\]
giving eigenvalues λ = 0, 3, 6.

To find eigenvectors we substitute each λ in (C − λI)v = 0 and row reduce.

For λ = 0:
\[
\begin{pmatrix} 4 & -2 & -1 \\ -2 & 4 & -1 \\ -1 & -1 & 1 \end{pmatrix}
\sim
\begin{pmatrix} 1 & 1 & -1 \\ 0 & -6 & 3 \\ 0 & 6 & -3 \end{pmatrix}
\sim
\begin{pmatrix} 1 & 1 & -1 \\ 0 & 2 & -1 \\ 0 & 0 & 0 \end{pmatrix}
\sim
\begin{pmatrix} 2 & 0 & -1 \\ 0 & 2 & -1 \\ 0 & 0 & 0 \end{pmatrix}.
\]
Let v3 = 2t, then v1 = t and v2 = t. Thus an eigenvector is (1, 1, 2).

For λ = 3:
\[
\begin{pmatrix} 1 & -2 & -1 \\ -2 & 1 & -1 \\ -1 & -1 & -2 \end{pmatrix}
\sim
\begin{pmatrix} 1 & -2 & -1 \\ 0 & -3 & -3 \\ 0 & -3 & -3 \end{pmatrix}
\sim
\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}.
\]
Let v3 = t, then v1 = −t and v2 = −t. Thus an eigenvector is (−1, −1, 1).

For λ = 6:
\[
\begin{pmatrix} -2 & -2 & -1 \\ -2 & -2 & -1 \\ -1 & -1 & -5 \end{pmatrix}
\sim
\begin{pmatrix} 1 & 1 & 5 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{pmatrix}
\sim
\begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.
\]
Let v2 = t, then v1 = −t and v3 = 0. Thus an eigenvector is (−1, 1, 0).

As there are 3 linearly independent eigenvectors (coming from 3 distinct eigenvalues) the matrix is diagonalisable.
\[
D = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 6 \end{pmatrix}
\quad \text{and} \quad
P = \begin{pmatrix} 1 & -1 & -1 \\ 1 & -1 & 1 \\ 2 & 1 & 0 \end{pmatrix}.
\qquad \text{Check: } CP = PD = \begin{pmatrix} 0 & -3 & -6 \\ 0 & -3 & 6 \\ 0 & 3 & 0 \end{pmatrix}.
\]

3. Using Q2(a) we have
\[
A = PDP^{-1}
= \begin{pmatrix} 1 & -1 \\ 2 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ 0 & \tfrac{1}{4} \end{pmatrix}
\begin{pmatrix} 1 & -1 \\ 2 & 1 \end{pmatrix}^{-1}
\quad \text{where} \quad
\begin{pmatrix} 1 & -1 \\ 2 & 1 \end{pmatrix}^{-1}
= \frac{1}{3}\begin{pmatrix} 1 & 1 \\ -2 & 1 \end{pmatrix}.
\]
Thus
\[
A^k = PD^kP^{-1}
= \frac{1}{3}\begin{pmatrix} 1 & -1 \\ 2 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ 0 & \tfrac{1}{4^k} \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ -2 & 1 \end{pmatrix}
= \frac{1}{3}\begin{pmatrix} 1 & -\tfrac{1}{4^k} \\[2pt] 2 & \tfrac{1}{4^k} \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ -2 & 1 \end{pmatrix}
= \frac{1}{3}\begin{pmatrix} 1 + \tfrac{2}{4^k} & 1 - \tfrac{1}{4^k} \\[2pt] 2 - \tfrac{2}{4^k} & 2 + \tfrac{1}{4^k} \end{pmatrix}.
\]
As k → ∞ we have 1/4^k → 0, so
\[
A^k \to \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix}.
\]
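The computations in Q2(c) and Q3 are easy to sanity-check numerically. The following sketch (assuming numpy is available; it is not part of the original solutions) verifies the eigenvalues of C, the relation CP = PD, and the limit of A^k:

```python
import numpy as np

# Matrix C from Q2(c); its eigenvalues should be 0, 3, 6.
C = np.array([[4.0, -2.0, -1.0],
              [-2.0, 4.0, -1.0],
              [-1.0, -1.0, 1.0]])
evals = np.sort(np.linalg.eigvals(C).real)
print(evals)

# Diagonalisation check: CP = PD for the P and D found above.
P = np.array([[1.0, -1.0, -1.0],
              [1.0, -1.0, 1.0],
              [2.0, 1.0, 0.0]])
D = np.diag([0.0, 3.0, 6.0])
print(np.allclose(C @ P, P @ D))  # True

# Q3: A^k should approach (1/3) [[1, 1], [2, 2]] as k grows.
A = np.array([[0.5, 0.25],
              [0.5, 0.75]])
Ak = np.linalg.matrix_power(A, 50)
print(np.allclose(Ak, np.array([[1.0, 1.0], [2.0, 2.0]]) / 3))  # True
```

Taking k = 50 is already far into the limit, since 1/4^50 is negligible at double precision.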

Mathematics and Statistics, University of Melbourne

4. (a) Vectors on the plane map to themselves, while vectors perpendicular to the plane map to
(0, 0, 0).
So the plane x + y + z = 0 is a 2-d eigenspace with corresponding eigenvalue 1.
Another eigenspace is the line {(t, t, t) | t ∈ R} with corresponding eigenvalue 0. As the
dimensions of the two eigenspaces add to 3 = dim R3 we are done.
(b) Again vectors on the plane map to themselves so x + y + z = 0 is a 2-d eigenspace with
eigenvalue 1.
Vectors perpendicular to the plane are multiplied by −1 so the line {(t, t, t) | t ∈ R} is a 1-d
eigenspace with corresponding eigenvalue −1.
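The geometric reasoning above can be confirmed by building the two maps explicitly. A minimal sketch (assuming numpy; the standard formulas I − nnᵀ/‖n‖² for the projection and I − 2nnᵀ/‖n‖² for the reflection are used here, not taken from the solutions):

```python
import numpy as np

n = np.array([1.0, 1.0, 1.0])        # normal to the plane x + y + z = 0
N = np.outer(n, n) / (n @ n)         # orthogonal projection onto span{n}

Proj = np.eye(3) - N                 # (a) projection onto the plane
Refl = np.eye(3) - 2 * N             # (b) reflection in the plane

# (a) has eigenvalues 0, 1, 1 and (b) has eigenvalues -1, 1, 1.
print(np.sort(np.linalg.eigvalsh(Proj)))
print(np.sort(np.linalg.eigvalsh(Refl)))

# (1, 1, 1) spans the eigenspace for eigenvalue 0 (projection) and -1 (reflection).
print(np.allclose(Proj @ n, 0 * n), np.allclose(Refl @ n, -n))
```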
5. T(sin kx) = −k² sin kx, so sin kx is an eigenvector with corresponding eigenvalue −k².
T(cos kx) = −k² cos kx, so cos kx is also an eigenvector with corresponding eigenvalue −k².
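As a numerical illustration (not part of the original solutions), a central-difference approximation to T = d²/dx² applied to sin kx should return approximately −k² sin kx; the choice of k, the grid, and the step size h below are arbitrary:

```python
import numpy as np

k = 3.0
h = 1e-4                              # finite-difference step (an assumption of this sketch)
x = np.linspace(0.1, 1.0, 10)

# Central difference: f''(x) ~ (f(x+h) - 2 f(x) + f(x-h)) / h^2
second_deriv = (np.sin(k * (x + h)) - 2 * np.sin(k * x) + np.sin(k * (x - h))) / h**2

# Should agree with -k^2 sin(kx) up to discretisation error.
print(np.max(np.abs(second_deriv - (-k**2 * np.sin(k * x)))))
```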
6. We first find the characteristic equation for A.
\[
|A - \lambda I| = \begin{vmatrix} 6 - \lambda & 0 & 1 \\ 3 & -2 - \lambda & 0 \\ -8 & 0 & -3 - \lambda \end{vmatrix}
= (-2 - \lambda)\big((6 - \lambda)(-3 - \lambda) + 8\big) = -(\lambda^3 - \lambda^2 - 16\lambda - 20).
\]
So the characteristic equation is λ³ − λ² − 16λ − 20 = 0.
By the Cayley-Hamilton theorem, A satisfies this equation. So A³ − A² − 16A − 20I = 0.
Rearranging this gives (1/20)(A³ − A² − 16A) = I, or (1/20)(A² − A − 16I) · A = I. Hence
\[
A^{-1} = \frac{1}{20}\left(A^2 - A - 16I\right).
\]
Now
\[
A^2 = \begin{pmatrix} 28 & 0 & 3 \\ 12 & 4 & 3 \\ -24 & 0 & 1 \end{pmatrix}
\]
giving
\[
A^{-1} = \frac{1}{20}\begin{pmatrix} 28 - 6 - 16 & 0 - 0 & 3 - 1 \\ 12 - 3 & 4 + 2 - 16 & 3 + 0 \\ -24 + 8 & 0 - 0 & 1 + 3 - 16 \end{pmatrix}
= \frac{1}{20}\begin{pmatrix} 6 & 0 & 2 \\ 9 & -10 & 3 \\ -16 & 0 & -12 \end{pmatrix}.
\]
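Both steps of this argument can be checked numerically. The sketch below (assuming numpy; not part of the original solutions) confirms that A satisfies its characteristic equation and that the resulting formula reproduces the inverse:

```python
import numpy as np

A = np.array([[6.0, 0.0, 1.0],
              [3.0, -2.0, 0.0],
              [-8.0, 0.0, -3.0]])
I = np.eye(3)

# Cayley-Hamilton: A^3 - A^2 - 16A - 20I should be the zero matrix.
print(np.allclose(A @ A @ A - A @ A - 16 * A - 20 * I, 0))  # True

# Inverse from the rearranged characteristic equation.
A_inv = (A @ A - A - 16 * I) / 20
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
print(20 * A_inv)  # integer entries, matching the hand computation
```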
7. If λ is an eigenvalue of A, then there exists a vector v ≠ 0 such that Av = λv. (*)
Let P(n) be the statement: "Aⁿv = λⁿv".
P(1) is true from (*).
Assume P(k) is true for some k ∈ N. Then Aᵏv = λᵏv.
LHS of P(k + 1) = Aᵏ⁺¹v
= Aᵏ(Av)
= Aᵏλv by (*)
= λAᵏv
= λ(λᵏv) by the inductive assumption
= λᵏ⁺¹v = RHS of P(k + 1)
So P(k) ⇒ P(k + 1), and by the principle of mathematical induction P(n) is true for all n ∈ N.
This shows that v is an eigenvector of Aⁿ with eigenvalue λⁿ for all n ∈ N.
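The conclusion can be illustrated numerically with the matrix C from Q2(c), whose eigenvector (−1, −1, 1) has eigenvalue 3 (this particular pairing comes from Q2(c), not from Q7 itself):

```python
import numpy as np

C = np.array([[4.0, -2.0, -1.0],
              [-2.0, 4.0, -1.0],
              [-1.0, -1.0, 1.0]])
v = np.array([-1.0, -1.0, 1.0])   # eigenvector of C with eigenvalue 3
lam = 3.0

# C^n v should equal lam^n v for every n in N.
for n in range(1, 6):
    lhs = np.linalg.matrix_power(C, n) @ v
    print(n, np.allclose(lhs, lam**n * v))   # True for each n
```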

