Math 551, Duke University Part 0 Lecture 1

Extending the concepts from Linear Algebra (part I) [Haberman Sect 5.5 App]

Inner products are extensions of the vector dot product: $\langle u, v \rangle$


For real vectors, $u, v \in \mathbb{R}^n$, define the inner product: $\langle u, v \rangle \equiv u \cdot v = u^T v$
Other definitions for the inner product are made for other classes of problems
(ODEs, PDEs, ...). They all share common properties:
Inner products are used to define orthogonality, adjoints, norms.
The linearity property of inner products: for all vectors $u, v, w$ and scalars $a, b$:

$\langle au + bv, w \rangle = \langle au, w \rangle + \langle bv, w \rangle = a\langle u, w \rangle + b\langle v, w \rangle$

inner prod. of linear combination of vectors = lin. combo. of inner prods.


(i.e. can expand-out sums, and factor-out scalar multiples)
The norm property of inner products: for all vectors $u$:

$\|u\|^2 \equiv \langle u, u \rangle \geq 0$ and $\|u\| = 0$ if and only if $u = \vec{0}$

Orthogonality of vectors is defined by the inner product:


$u \perp v \iff \langle u, v \rangle = 0$
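
These three properties can be checked numerically; a minimal sketch with numpy (the vectors and scalars are illustrative choices, not from the lecture):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, -1.0, 0.0])   # chosen so that <u, v> = 0
w = np.array([0.5, 1.0, -3.0])
a, b = 2.0, -3.0

# Linearity: <a u + b v, w> = a <u, w> + b <v, w>
print(np.isclose(np.dot(a*u + b*v, w), a*np.dot(u, w) + b*np.dot(v, w)))  # True

# Norm property: <u, u> = |u|^2 >= 0, with equality only for the zero vector
print(np.dot(u, u))              # 9.0 = |u|^2

# Orthogonality: u is perpendicular to v since <u, v> = 0
print(np.dot(u, v))              # 0.0
```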
Linear Operators are a generalization of matrices
Calculating Lu produces another vector, v = Lu (L for Linear operator)

Eigenvalues and eigenvectors of L are defined by:

$L\phi = \lambda\phi \quad$ with $\|\phi\| \neq 0$

Eigen-vals/vecs calculated using: $\det(L - \lambda I) = 0$, then $\{(L - \lambda_k I)\phi_k = 0\}$
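
Numerically, both steps are packaged into one library call; a minimal numpy sketch (the matrix is an illustrative choice):

```python
import numpy as np

L = np.array([[2.0, 1.0],
              [0.0, 3.0]])      # illustrative 2x2 operator

# np.linalg.eig solves det(L - lambda I) = 0, then (L - lambda_k I) phi_k = 0
lam, Phi = np.linalg.eig(L)     # columns of Phi are the eigenvectors phi_k
print(lam)                      # [2. 3.]

# Check L phi_k = lambda_k phi_k for each eigenpair
for k in range(len(lam)):
    print(np.allclose(L @ Phi[:, k], lam[k] * Phi[:, k]))   # True, True
```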


The adjoint operator $L^\star$ is defined by the inner product adjoint relation:

$\langle v, Lu \rangle = \langle L^\star v, u \rangle$ for any $u, v$

The adjoint eigenvalues and adjoint eigenvectors are defined by:

$L^\star \psi = \lambda^\star \psi$ with $\|\psi\| \neq 0 \;\Rightarrow\; \det(L^\star - \lambda^\star I) = 0$, $\{(L^\star - \lambda^\star_j I)\psi_j = 0\}$

For real vectors, with $\langle u, v \rangle = u^T v$ and a real matrix $L$, the adjoint relation
gives (using $c = c^T$ for scalars $c$, $(ABC)^T = C^T B^T A^T$, and $\langle u, v \rangle = \langle v, u \rangle$):

$\langle v, Lu \rangle = v^T (Lu) = (v^T L u)^T = u^T L^T v = u^T (L^T v) = \langle u, L^T v \rangle$

So the adjoint operator is $L^\star = L^T$ (the transpose matrix).
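
A quick numerical sanity check of the adjoint relation, using a random real matrix (an illustrative sketch, not part of the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((4, 4))  # a random real 4x4 operator
u = rng.standard_normal(4)
v = rng.standard_normal(4)

# Adjoint relation for the real dot product: <v, L u> = <L^T v, u>
print(np.isclose(np.dot(v, L @ u), np.dot(L.T @ v, u)))     # True
```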


Further results for Linear Operators on real vectors:
Relations between regular eigenvalues and adjoint eigenvalues:

$\det(L - \lambda I) = 0$ vs. $\det(L^T - \lambda^\star I) = 0$

Use properties: $\det(M) = \det(M^T)$, $(M + N)^T = M^T + N^T$, $(L^T)^T = L$

$\det(L^T - \lambda^\star I) = \det\big((L^T - \lambda^\star I)^T\big) = \det(L - \lambda^\star I) = 0$

So the adjoint eigenvalues satisfy the same eig-val eqn, and are the same: $\lambda^\star = \lambda$
But the eig-vecs $\phi_k$ and adj-eig-vecs $\psi_j$ are generally different...
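
Both statements are easy to confirm numerically (a minimal sketch, reusing the illustrative matrix from above):

```python
import numpy as np

L = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Same eigenvalues for L and L^T (sorted so the orderings match)
print(np.sort(np.linalg.eigvals(L)))     # [2. 3.]
print(np.sort(np.linalg.eigvals(L.T)))   # [2. 3.]

# ...but the eigenvectors differ
_, Phi = np.linalg.eig(L)     # eigenvectors phi_k of L
_, Psi = np.linalg.eig(L.T)   # adjoint eigenvectors psi_j of L^T
print(Phi)
print(Psi)
```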
Relations between Eigenvectors and adjoint eigenvectors:
Claim: If $\lambda_k \neq \lambda_j$ then $\psi_j \perp \phi_k$ (the bi-orthogonality of the sets of eigenvectors)

Start with $\lambda_k \langle \psi_j, \phi_k \rangle = \langle \psi_j, \lambda_k \phi_k \rangle$ [via linearity]

$= \langle \psi_j, L\phi_k \rangle$ [via $L\phi_k = \lambda_k \phi_k$]

$= \langle L^T \psi_j, \phi_k \rangle$ [via adjoint relation]

$= \langle \lambda_j \psi_j, \phi_k \rangle$ [via $L^T \psi_j = \lambda_j \psi_j$]

$= \lambda_j \langle \psi_j, \phi_k \rangle$ [via linearity, End here]

(Start) $-$ (End) $= 0 \;\Rightarrow\; (\lambda_k - \lambda_j)\langle \psi_j, \phi_k \rangle = 0 \;\Rightarrow\; \langle \psi_j, \phi_k \rangle = 0$
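
Here is a small numerical check of bi-orthogonality (the matrix is an illustrative choice with distinct real eigenvalues; the sort is just to pair up the two eigen-decompositions):

```python
import numpy as np

L = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])     # non-symmetric, distinct real eigenvalues

lam, Phi = np.linalg.eig(L)         # eigenvectors phi_k (columns)
mu, Psi = np.linalg.eig(L.T)        # adjoint eigenvectors psi_j (columns)

# Sort both decompositions by eigenvalue so column j pairs with column k = j
Phi = Phi[:, np.argsort(lam)]
Psi = Psi[:, np.argsort(mu)]

# Bi-orthogonality: <psi_j, phi_k> = 0 for j != k, so Psi^T Phi is diagonal
print(np.round(Psi.T @ Phi, 10))    # off-diagonal entries are zero
```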
Completeness: The set $\{\phi_k\}$ is called a complete basis set if every given vector
$w \in \mathbb{R}^n$ can be written as a linear combination of the $\phi_k$ basis vectors:

$w = \sum_{k=1}^{n} c_k \phi_k$

How do we determine the $c_k$ coefficients?

Key step: Orthogonal projection. This universal problem-solving approach
always starts with taking the inner product of both sides of the equation with each
of the adjoint eigenvectors $\psi_j$ for $j = 1, 2, 3, \ldots, n$: $\langle \psi_j, \text{Eqn} \rangle$
$\langle \psi_j, w \rangle = \Big\langle \psi_j, \sum_k c_k \phi_k \Big\rangle$ [via $\langle \psi_j, \text{LHS} \rangle = \langle \psi_j, \text{RHS} \rangle$]

$= \sum_k c_k \langle \psi_j, \phi_k \rangle$ [via linearity]

$= c_j \langle \psi_j, \phi_j \rangle$ [via bi-orthogonality for $j \neq k$]

and finally, swap letters (j, k) to get a formula for each coefficient
$c_k = \frac{\langle \psi_k, w \rangle}{\langle \psi_k, \phi_k \rangle} \;\Rightarrow\; w = \sum_{k=1}^{n} \frac{\langle \psi_k, w \rangle}{\langle \psi_k, \phi_k \rangle}\, \phi_k$
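
As a numerical check of this coefficient formula (continuing the illustrative example above; $w$ is an arbitrary test vector):

```python
import numpy as np

L = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])     # same illustrative matrix as before

lam, Phi = np.linalg.eig(L)
mu, Psi = np.linalg.eig(L.T)
Phi = Phi[:, np.argsort(lam)]       # eigenvectors phi_k
Psi = Psi[:, np.argsort(mu)]        # adjoint eigenvectors psi_k

w = np.array([1.0, -2.0, 4.0])      # arbitrary vector to expand

# c_k = <psi_k, w> / <psi_k, phi_k>  (the orthogonal projection formula)
c = (Psi.T @ w) / np.diag(Psi.T @ Phi)

# Reconstruct: w = sum_k c_k phi_k
print(np.allclose(Phi @ c, w))      # True
```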
