
MATH110 Homework 15 & 16

Taylor Tam

Outline
HW15

Section 5.2: Diagonalizability


Notes: Focus on Theorems 5.8, 5.9, Test for Diagonalization; Examples 5-8; pp. 273-4
Problems: #2(e)(g), 3(c)(d)(f), 4-6, 14(b)(c)
Challenge Problems: #18, 19

HW16
Section 5.4: Invariant Subspaces and the Cayley-Hamilton Theorem (pp. 313-318)
Notes
Problems: #1, 2(b)(e), 4, 5, 6(b)(d), 7, 8, 10, 17, 18, 21
Challenge Problems: #13-15, 19, 23, 24, 26

Section 5.2: Diagonalizability


Notes
Theorem 5.5 establishes that eigenvectors corresponding to distinct eigenvalues are linearly
independent. However, two eigenvectors corresponding to the same eigenvalue may be linearly dependent.
Theorem 5.8: Let T be a linear operator on a vector space V, and let λ_1, λ_2, ..., λ_k be distinct
eigenvalues of T. For each i = 1, 2, ..., k, let S_i be a finite linearly independent subset of the
eigenspace E_{λ_i}. Then S = S_1 ∪ S_2 ∪ ... ∪ S_k is a linearly independent subset of V.

Proof. Uses previous results on linear independence together with a lemma. The lemma establishes
that if v_1 + v_2 + ... + v_k = 0 with v_i ∈ E_{λ_i}, then v_i = 0 for all i, and Theorem 5.5
establishes that eigenvectors corresponding to distinct eigenvalues are linearly independent.

Theorem 5.9: Let T be a linear operator on a finite-dimensional vector space V such that the
characteristic polynomial of T splits. Let λ_1, λ_2, ..., λ_k be the distinct eigenvalues of T. Then:

1. T is diagonalizable if and only if the multiplicity of λ_i equals dim(E_{λ_i}) for all i.

2. If T is diagonalizable and β_i is an ordered basis for E_{λ_i} for each i, then
β = β_1 ∪ β_2 ∪ ... ∪ β_k is an ordered basis for V consisting of eigenvectors of T.
Tests for Diagonalization:

1. The characteristic polynomial of T splits.

2. For each eigenvalue λ of T, the multiplicity of λ equals n - rank(T - λI).
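
As a sanity check, here is a minimal sketch of this test in Python with numpy; the matrix A is a
made-up example whose characteristic polynomial splits over R:

    import numpy as np

    # Made-up example: eigenvalue 3 has multiplicity 2 but a 1-dimensional
    # eigenspace, so the test should fail.
    A = np.array([[3., 1., 0.],
                  [0., 3., 0.],
                  [0., 0., 2.]])
    n = A.shape[0]

    # Algebraic multiplicities: count repeated eigenvalues (rounded to
    # tame floating-point noise).
    vals, mults = np.unique(np.round(np.linalg.eigvals(A), 8), return_counts=True)

    # Diagonalizable iff the multiplicity of each lambda equals n - rank(A - lambda*I).
    diagonalizable = all(
        n - np.linalg.matrix_rank(A - lam * np.eye(n)) == mult
        for lam, mult in zip(vals, mults)
    )
    print(diagonalizable)   # False: rank(A - 3I) = 2, so dim(E_3) = 1 < 2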

Direct Sums

Sums of Subspaces are defined as Σ W_i = {v_1 + v_2 + ... + v_k : v_i ∈ W_i}.
Direct Sums of Subspaces are defined by V = Σ W_i and W_j ∩ Σ_{i≠j} W_i = {0} for each j.
Theorem 5.10: Let W_1, W_2, ..., W_k be subspaces of a finite-dimensional vector space V. The
following conditions are equivalent:
1. V = W_1 ⊕ W_2 ⊕ ... ⊕ W_k.
2. V = Σ W_i and, for any vectors v_i ∈ W_i, if v_1 + v_2 + ... + v_k = 0, then v_i = 0 for all i.
3. Each vector v ∈ V can be uniquely written as v = v_1 + v_2 + ... + v_k, where v_i ∈ W_i.
4. If γ_i is an ordered basis for W_i for each i, then γ_1 ∪ γ_2 ∪ ... ∪ γ_k is an ordered basis for V.
5. There exists an ordered basis γ_i for each W_i such that γ_1 ∪ γ_2 ∪ ... ∪ γ_k is an ordered basis for V.
Theorem 5.11: A linear operator T on a finite-dimensional vector space V is diagonalizable if and
only if V is the direct sum of the eigenspaces of T.
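For example, for the 2 × 2 matrix with rows (1, 1) and (0, 1), the only eigenvalue is λ = 1 and
E_1 = span{(1, 0)^t}; the sum of the eigenspaces is one-dimensional, so R^2 is not the direct sum
of the eigenspaces, and indeed this matrix is not diagonalizable.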

Problems
#2(e)(g), 3(c)(d)(f), 4-6, 14(b)(c)

Challenge Problems
#18, 19

Section 5.3: Matrix Limits and Markov Chains


Notes
Convergence of a Matrix: A sequence A_1, A_2, ... of matrices with complex entries converges to
the matrix L if

lim_{m→∞} (A_m)_{ij} = L_{ij}   for all i and j.

Theorem 5.12: Let A_1, A_2, ... be a sequence of matrices with complex entries converging to the
matrix L. Then, for any matrices P and Q of compatible sizes,

lim_{m→∞} P A_m = PL   and   lim_{m→∞} A_m Q = LQ.

Corollary: Let A ∈ M_{n×n}(C) be such that lim_{m→∞} A^m = L. Then for any invertible matrix
Q ∈ M_{n×n}(C),

lim_{m→∞} (QAQ^{-1})^m = QLQ^{-1}.
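
A quick numerical check of this corollary; the diagonal matrix A and the invertible Q below are
made-up examples:

    import numpy as np

    # Eigenvalues 1 and 1/2 lie in S, so lim A^m exists and equals diag(1, 0).
    A = np.diag([1.0, 0.5])
    L = np.diag([1.0, 0.0])
    Q = np.array([[1., 2.],
                  [0., 1.]])      # any invertible matrix

    B = Q @ A @ np.linalg.inv(Q)
    Bm = np.linalg.matrix_power(B, 50)                # (Q A Q^{-1})^m for large m
    print(np.allclose(Bm, Q @ L @ np.linalg.inv(Q)))  # True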

Unit Disc: The set

S = {λ ∈ C : |λ| < 1 or λ = 1}.

This set consists of the complex number 1 together with the interior of the unit disk.
Theorem 5.13: Let A be a square matrix with complex entries. Then lim_{m→∞} A^m exists iff both of
the following hold:
1. Every eigenvalue of A is contained in the set S.
2. If 1 is an eigenvalue of A, then the dimension of the eigenspace corresponding to 1 equals the
multiplicity of 1 as an eigenvalue of A.

2
Theorem 5.14: Let A ∈ M_{n×n}(C) satisfy the following two conditions:
1. Every eigenvalue of A is contained in S.
2. A is diagonalizable.
Then lim_{m→∞} A^m exists.
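
Conversely, if A has an eigenvalue on the unit circle other than 1, the powers of A cannot
converge. A minimal sketch with a made-up diagonal example:

    import numpy as np

    # Eigenvalues are 0.9 and -1: |-1| = 1 but -1 != 1, so -1 is not in S
    # and lim A^m cannot exist.
    A = np.diag([0.9, -1.0])
    print(np.linalg.matrix_power(A, 50))   # lower-right entry is (-1)^50 = +1
    print(np.linalg.matrix_power(A, 51))   # lower-right entry is (-1)^51 = -1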

Applications of Matrix Limits

Transition (Stochastic) Matrices: In an n × n transition matrix, rows and columns correspond to
the states of a process, and A_{ij} is the probability of moving from state j to state i (so each
column sums to 1).
To find the eventual outcome of a stochastic process: (1) diagonalize the matrix, (2) take the
limit of its powers (A^m = Q D^m Q^{-1}), (3) matrix multiply to solve for the eventual outcome.
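
A minimal sketch of this three-step recipe in Python with numpy; the 2-state transition matrix A
below is a made-up example (each column sums to 1):

    import numpy as np

    # Made-up 2-state transition matrix: column j holds the probabilities
    # of moving from state j to each state.
    A = np.array([[0.9, 0.5],
                  [0.1, 0.5]])

    # (1) Diagonalize: A = Q D Q^{-1}.
    eigvals, Q = np.linalg.eig(A)
    Qinv = np.linalg.inv(Q)

    # (2) Take the limit of D^m entrywise: 1^m -> 1 and |lambda| < 1 -> 0.
    Dlim = np.diag([1.0 if np.isclose(lam, 1.0) else 0.0 for lam in eigvals])

    # (3) Multiply back and apply to an initial probability vector.
    L = Q @ Dlim @ Qinv
    p0 = np.array([0.0, 1.0])     # start in state 2 with probability 1
    print(L @ p0)                 # eventual distribution, about [0.833, 0.167]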
Theorem 5.15: Let M be an n × n matrix with real nonnegative entries, let v be a column vector
in R^n with nonnegative coordinates, and let u ∈ R^n be the column vector in which each
coordinate equals 1. Then:

1. M is a transition matrix iff M^t u = u.

2. v is a probability vector iff u^t v = 1.

Corollary:
1. The product of two n n transition matrices is an n n transition matrix. In particular, any
power of a transition matrix is a transition matrix.
2. The product of a transition matrix and a probability vector is a probability vector.
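
Both parts of the theorem, and the corollary, are easy to verify numerically. A small sketch with
a made-up 3 × 3 transition matrix M and probability vector v:

    import numpy as np

    u = np.ones(3)
    M = np.array([[0.2, 0.3, 0.5],
                  [0.5, 0.3, 0.1],
                  [0.3, 0.4, 0.4]])     # made-up: each column sums to 1
    v = np.array([0.1, 0.2, 0.7])       # made-up probability vector

    print(np.allclose(M.T @ u, u))       # True: M is a transition matrix
    print(np.isclose(u @ v, 1.0))        # True: v is a probability vector
    print(np.allclose((M @ M).T @ u, u)) # True: M^2 is a transition matrix
    print(np.isclose(u @ (M @ v), 1.0))  # True: M v is a probability vector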

Markov Processes and Markov Chains: A Markov process models the probabilities with which an
object changes state over time. We use the process above to determine the long-run outcomes of
such chains.
Regular Transition Matrix: A transition matrix is regular if some power of the matrix contains
only positive entries (no zero entries).

For regular transition matrices, the limit of the sequence of powers of A exists and has identical
columns.
Column Sum and Row Sum: The jth column sum of A is

ν_j(A) = Σ_{i=1}^n |A_{ij}|,

and the ith row sum is

ρ_i(A) = Σ_{j=1}^n |A_{ij}|.

The largest of these are denoted ν(A) = max_j ν_j(A) and ρ(A) = max_i ρ_i(A).

Gerschgorin's Disk: For an n × n matrix A, the ith Gerschgorin disk is

C_i = {z ∈ C : |z - A_{ii}| ≤ r_i},   where r_i = Σ_{j≠i} |A_{ij}|.

Theorem 5.16 (Gerschgorin's Disk Theorem): For A ∈ M_{n×n}(C), every eigenvalue of A is
contained in some Gerschgorin disk.
Corollary 1: Let λ be any eigenvalue of A. Then |λ| ≤ ρ(A).
Corollary 2: Let λ be any eigenvalue of A. Then
|λ| ≤ min{ρ(A), ν(A)}.
Corollary 3: If λ is an eigenvalue of a transition matrix, then |λ| ≤ 1.
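
A small numerical sketch of the disks and these bounds, using a made-up matrix:

    import numpy as np

    # Made-up example matrix.
    A = np.array([[4.0, 1.0, 0.5],
                  [0.2, 2.0, 0.3],
                  [0.1, 0.1, 1.0]])

    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)  # r_i = sum_{j != i} |A_ij|

    rho = np.max(np.sum(np.abs(A), axis=1))   # largest row sum, rho(A)
    nu = np.max(np.sum(np.abs(A), axis=0))    # largest column sum, nu(A)

    for lam in np.linalg.eigvals(A):
        in_some_disk = np.any(np.abs(lam - centers) <= radii)
        print(lam, in_some_disk, abs(lam) <= min(rho, nu))  # both True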
Theorem 5.17: Every transition matrix has 1 as an eigenvalue.
Theorem 5.18: Suppose A ∈ M_{n×n}(C) is a matrix with positive entries, and let λ be an eigenvalue
of A such that |λ| = ρ(A). Then λ = ρ(A), and {u} is a basis for E_λ, where u ∈ C^n is the column
vector in which each coordinate equals 1.
Corollary 1: Let A ∈ M_{n×n}(C) be a matrix in which each entry is positive, and let λ be an
eigenvalue of A such that |λ| = ρ(A). Then λ = ρ(A) and dim(E_λ) = 1.
Corollary 2: Let A ∈ M_{n×n}(C) be a transition matrix in which each entry is positive, and let λ
be any eigenvalue of A other than 1. Then |λ| < 1. Moreover, the eigenspace corresponding to the
eigenvalue 1 has dimension 1.
Theorem 5.19: Let A be a regular transition matrix, and let λ be an eigenvalue of A. Then:
1. |λ| ≤ 1.
2. If |λ| = 1, then λ = 1 and dim(E_λ) = 1.

Corollary: If A is a regular transition matrix that is diagonalizable, then lim_{m→∞} A^m exists.

Theorem 5.20: Let A be a regular transition matrix. Then:

1. The multiplicity of 1 as an eigenvalue of A is 1.
2. lim_{m→∞} A^m exists.
3. L = lim_{m→∞} A^m is a transition matrix.
4. AL = LA = L.
5. The columns of L are identical. In fact, each column of L is equal to the unique probability
vector v that is also an eigenvector of A corresponding to the eigenvalue 1.
6. For any probability vector w, lim_{m→∞} (A^m w) = v.

This vector v is known as the fixed probability vector.
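
A sketch of parts 5 and 6 with a made-up regular transition matrix: the fixed probability vector v
is recovered both as the eigenvector for the eigenvalue 1 and as the limit of the powers:

    import numpy as np

    # Made-up regular transition matrix (all entries positive, columns sum to 1).
    A = np.array([[0.7, 0.2, 0.4],
                  [0.2, 0.6, 0.3],
                  [0.1, 0.2, 0.3]])

    # Fixed probability vector: the eigenvector for eigenvalue 1, scaled to sum to 1.
    eigvals, eigvecs = np.linalg.eig(A)
    v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    v = v / v.sum()

    L = np.linalg.matrix_power(A, 100)
    print(np.allclose(L, np.column_stack([v, v, v])))  # True: identical columns
    w = np.array([1.0, 0.0, 0.0])                      # any probability vector
    print(np.allclose(L @ w, v))                       # True: A^m w -> v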

Problems
#2(e)(g), 3(c)(d)(f), 4-6, 14(b)(c)

Challenge Problems
#18, 19

Section 5.4: Invariant Subspaces and the Cayley-Hamilton Theorem (pp. 313-318)
Notes
T-Invariant Subspace: Given a linear operator T on V, a subspace W of V is a T-invariant subspace
of V if T(W) ⊆ W.
The following are always T-invariant:

{0}, V, R(T), N(T), and E_λ for any eigenvalue λ of T.

T-Cyclic Subspace of V Generated by x: Given a linear operator T: V → V and a vector x ∈ V, the
subspace W defined below is T-invariant; it is called the T-cyclic subspace of V generated by x:

W = span({x, T(x), T^2(x), ...}).

Theorem 5.21: Let T be a linear operator on a finite-dimensional vector space V, and let W be a
T-invariant subspace of V. Then the characteristic polynomial of T_W divides the characteristic
polynomial of T.
Theorem 5.22: Let T: V → V be a linear operator, and let W be the T-cyclic subspace of V generated
by a nonzero vector v, with dim(W) = k. Then the following hold:

1. {v, T(v), T^2(v), ..., T^{k-1}(v)} is a basis for W.

2. If a_0 v + a_1 T(v) + ... + a_{k-1} T^{k-1}(v) + T^k(v) = 0, then the characteristic polynomial
of T_W is f(t) = (-1)^k (a_0 + a_1 t + ... + a_{k-1} t^{k-1} + t^k).
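
A minimal computational sketch of this theorem; the matrix A (standing in for T) and the
generator x are made-up examples:

    import numpy as np

    # Made-up operator on R^3 and a generating vector x.
    A = np.array([[0., 0., 2.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
    x = np.array([1., 0., 0.])

    # Build x, T(x), T^2(x), ... until they become dependent; k = dim(W).
    vecs = [x]
    while np.linalg.matrix_rank(np.column_stack(vecs + [A @ vecs[-1]])) > len(vecs):
        vecs.append(A @ vecs[-1])
    k = len(vecs)

    # Solve a_0 x + a_1 T(x) + ... + a_{k-1} T^{k-1}(x) = -T^k(x) for the a_i;
    # the characteristic polynomial of T_W is (-1)^k (a_0 + a_1 t + ... + t^k).
    a = np.linalg.lstsq(np.column_stack(vecs), -(A @ vecs[-1]), rcond=None)[0]
    print(k, a)   # k = 3, a = [-2, -1, 0]: f(t) = -(t^3 - t - 2)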
Theorem 5.23 (Cayley-Hamilton): Let T be a linear operator on a finite-dimensional vector space V,
and let f(t) be the characteristic polynomial of T. Then f(T) = T_0, the zero transformation; that
is, T satisfies its characteristic polynomial.
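
A quick numerical verification with a made-up 2 × 2 matrix; np.poly computes the coefficients of
the (monic) characteristic polynomial from the eigenvalues of A:

    import numpy as np

    A = np.array([[2., 1.],
                  [0., 3.]])     # made-up example

    # Monic characteristic polynomial, highest degree first: t^2 - 5t + 6.
    coeffs = np.poly(A)

    # Evaluate f(A) = A^2 - 5A + 6I; Cayley-Hamilton says it is the zero matrix.
    f_of_A = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - i)
                 for i, c in enumerate(coeffs))
    print(np.allclose(f_of_A, np.zeros_like(A)))   # True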

Problems
#1, 2(b)(e), 4, 5, 6(b)(d), 7, 8, 10, 17, 18, 21

Challenge Problems
#13-15, 19, 23, 24, 26
