Inner product

Definition 1 (Inner product) Let V be a real vector space. An
inner product on V is a function ( , ) : V × V → ℝ that is
symmetric, linear in each argument, and positive definite, that is,
(u, u) ≥ 0 for all u ∈ V, with (u, u) = 0 only for u = 0. A vector
space V together with an inner product ( , ) is called an inner
product space.
Math 20F Linear Algebra, Lecture 25
Examples

• The Euclidean inner product in ℝ². Let V = ℝ², and {e₁, e₂}
  be the standard basis. Given two arbitrary vectors
  x = x₁e₁ + x₂e₂ and y = y₁e₁ + y₂e₂, then

      (x, y) = x₁y₁ + x₂y₂.
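As a quick sketch, the Euclidean inner product of the example above can be computed componentwise; this is a minimal pure-Python illustration, and the function name `inner` is ours:

```python
def inner(x, y):
    """Euclidean inner product: (x, y) = sum of x_i * y_i."""
    assert len(x) == len(y)
    return sum(xi * yi for xi, yi in zip(x, y))

# (x, y) = x1*y1 + x2*y2 for x = (1, 2), y = (3, 4):
print(inner([1, 2], [3, 4]))  # 1*3 + 2*4 = 11
```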
• On the space C([0, 2π]) of continuous functions on [0, 2π],

      (f, g) = ∫₀^{2π} f(x) g(x) dx

  defines an inner product. (This inner product is used below to
  show that cos(x) and sin(x) are orthogonal.)
Norm

An inner product space induces a norm, that is, a notion of length
of a vector.

Definition 2 (Norm) Let V, ( , ) be an inner product space. The
norm function, or length, is a function V → ℝ denoted ‖ ‖, and
defined as

      ‖u‖ = √(u, u).

Example:

• The Euclidean norm in ℝ² is given by

      ‖x‖ = √(x, x) = √(x₁² + x₂²).
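The induced norm can be sketched directly from the definition ‖u‖ = √(u, u), reusing a componentwise Euclidean inner product (helper names are illustrative):

```python
import math

def inner(x, y):
    """Euclidean inner product."""
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(u):
    """Induced norm: ||u|| = sqrt((u, u))."""
    return math.sqrt(inner(u, u))

print(norm([3, 4]))  # sqrt(9 + 16) = 5.0
```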
Distance

A norm on a vector space, in turn, induces a notion of distance
between two vectors, defined as the length of their difference.

Definition 3 (Distance) Let V, ( , ) be an inner product space,
and ‖ ‖ be its associated norm. The distance between u and v ∈ V
is given by

      dist(u, v) = ‖u − v‖.

Example:

• The Euclidean distance between two points x and y ∈ ℝ³ is

      ‖x − y‖ = √((x₁ − y₁)² + (x₂ − y₂)² + (x₃ − y₃)²).
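A sketch of dist(u, v) = ‖u − v‖ in ℝ³, with the same illustrative Euclidean helpers:

```python
import math

def dist(u, v):
    """dist(u, v) = ||u - v||, with the Euclidean norm."""
    diff = [ui - vi for ui, vi in zip(u, v)]
    return math.sqrt(sum(d * d for d in diff))

# Distance between x = (1, 2, 3) and y = (1, 2, 0):
print(dist([1, 2, 3], [1, 2, 0]))  # sqrt(0 + 0 + 9) = 3.0
```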
Orthogonal vectors

      ‖u + v‖ = ‖u − v‖  ⇔  (u, v) = 0.

Proof:

      ‖u + v‖² = (u + v, u + v) = ‖u‖² + ‖v‖² + 2(u, v),
      ‖u − v‖² = (u − v, u − v) = ‖u‖² + ‖v‖² − 2(u, v),

then,

      ‖u + v‖² − ‖u − v‖² = 4(u, v),

so the two lengths agree precisely when (u, v) = 0.
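The equivalence can be checked numerically for the orthogonal pair u = [1, 2]ᵀ, v = [2, −1]ᵀ that appears below (a sketch with illustrative helper names):

```python
import math

def inner(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(u):
    return math.sqrt(inner(u, u))

u, v = [1, 2], [2, -1]
plus = norm([a + b for a, b in zip(u, v)])   # ||u + v||
minus = norm([a - b for a, b in zip(u, v)])  # ||u - v||
print(inner(u, v), plus, minus)  # (u, v) = 0, so the two norms agree
```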
Math 20F Linear Algebra, Lecture 26
Orthogonal vectors

Definition 4 (Orthogonal vectors) Two vectors u, v in an inner
product space V, ( , ) are said to be orthogonal, denoted u ⊥ v, iff

      (u, v) = 0.
Example

• Vectors u = [1, 2]ᵀ and v = [2, −1]ᵀ in ℝ² are orthogonal with
  the inner product (u, v) = u₁v₁ + u₂v₂, because

      (u, v) = 2 − 2 = 0.

• The vectors cos(x), sin(x) ∈ C([0, 2π]) are orthogonal with the
  inner product (f, g) = ∫₀^{2π} f g dx, because

      (cos(x), sin(x)) = ∫₀^{2π} sin(x) cos(x) dx = (1/2) ∫₀^{2π} sin(2x) dx,

      (cos(x), sin(x)) = −(1/4) cos(2x)|₀^{2π} = 0.
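The function-space computation above can be checked numerically with a simple Riemann sum; this is a rough sketch of the integral inner product, not part of the lecture:

```python
import math

def l2_inner(f, g, a=0.0, b=2 * math.pi, n=100000):
    """Approximate (f, g) = integral of f*g over [a, b] by a midpoint sum."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n)) * h

val = l2_inner(math.cos, math.sin)
print(val)  # approximately 0: cos and sin are orthogonal on [0, 2*pi]
```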
Orthogonal projection along a vector

Theorem (Orthogonal projection along a vector) Let V, ( , ) be an
inner product space and u ∈ V a nonzero vector. Then any x ∈ V can
be decomposed as

      x = x̂ + x₀,

where

      x̂ = ((x, u) / ‖u‖²) u,   x₀ = x − x̂.

Therefore, x̂ is proportional to u, and (x̂, x₀) = 0.
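The decomposition x = x̂ + x₀ can be sketched directly from the formula (pure Python, illustrative names):

```python
def inner(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def project_along(x, u):
    """Split x = xhat + x0 with xhat = ((x, u)/||u||^2) u and (xhat, x0) = 0."""
    c = inner(x, u) / inner(u, u)          # c = (x, u) / ||u||^2
    xhat = [c * ui for ui in u]
    x0 = [xi - xh for xi, xh in zip(x, xhat)]
    return xhat, x0

xhat, x0 = project_along([3, 4], [1, 0])
print(xhat, x0)         # [3.0, 0.0] [0.0, 4.0]
print(inner(xhat, x0))  # 0.0
```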
Orthogonal projection along a vector

Proof: Introduce x̂ = cu, and then write x = cu + x₀. The condition
(x̂, x₀) = 0 implies that (u, x₀) = 0, so

      (x, u) = c(u, u)  ⇒  c = (x, u) / ‖u‖²,

hence

      x̂ = ((x, u) / ‖u‖²) u,   x₀ = x − x̂.

This decomposition is unique because, given a second decomposition
x = ŷ + y₀ with ŷ = du and (ŷ, y₀) = 0, then (u, y₀) = 0, and taking
the inner product of

      cu + x₀ = du + y₀

with u gives c = d, and therefore x₀ = y₀.
Orthogonal bases

Definition (Orthogonal basis) A basis {u₁, · · · , uₙ} of an inner
product space V, ( , ) is called orthogonal iff (uᵢ, uⱼ) = 0 for
all i ≠ j, where i, j = 1, · · · , n.
Orthogonal bases

Theorem 5 Let V, ( , ) be an n-dimensional inner product vector
space, and {u₁, · · · , uₙ} be an orthogonal basis. Then, any x ∈ V
can be written as

      x = c₁u₁ + · · · + cₙuₙ,

where the coefficients have the form

      cᵢ = (x, uᵢ) / ‖uᵢ‖²,   i = 1, · · · , n.

Proof: Take the inner product of x with uᵢ; by orthogonality all
cross terms vanish, so

      (x, uᵢ) = cᵢ(uᵢ, uᵢ).
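The coefficient formula of Theorem 5 can be sketched as follows (note the basis must be orthogonal, not necessarily orthonormal; names are illustrative):

```python
def inner(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def coefficients(x, basis):
    """c_i = (x, u_i) / ||u_i||^2 for an orthogonal basis {u_1, ..., u_n}."""
    return [inner(x, u) / inner(u, u) for u in basis]

# Orthogonal (not orthonormal) basis of R^2: u1 = [1, 1], u2 = [1, -1].
# x = [3, 1] = 2*u1 + 1*u2:
print(coefficients([3, 1], [[1, 1], [1, -1]]))  # [2.0, 1.0]
```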
Orthogonal projections onto subspaces

Notice: To write x in an orthogonal basis means to do an orthogonal
decomposition of x along each basis vector. (All this holds for
vector spaces of functions.)

Theorem 6 Let V, ( , ) be an n-dimensional inner product vector
space, and W ⊂ V be a p-dimensional subspace. Let {u₁, · · · , uₚ} be
an orthogonal basis of W. Then, any x ∈ V can be decomposed as

      x = x̂ + x₀,

with (x̂, x₀) = 0 and x̂ = c₁u₁ + · · · + cₚuₚ, where the coefficients
cᵢ are given by

      cᵢ = (x, uᵢ) / ‖uᵢ‖²,   i = 1, · · · , p.
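Theorem 6 amounts to summing the one-vector projections over an orthogonal basis of W; a minimal sketch (helper names are ours):

```python
def inner(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def project_onto_subspace(x, basis):
    """xhat = sum over an orthogonal basis of ((x, u_i)/||u_i||^2) u_i."""
    xhat = [0.0] * len(x)
    for u in basis:
        c = inner(x, u) / inner(u, u)
        xhat = [xh + c * ui for xh, ui in zip(xhat, u)]
    x0 = [xi - xh for xi, xh in zip(x, xhat)]
    return xhat, x0

# Project x onto the plane W = span{e1, e2} in R^3:
xhat, x0 = project_onto_subspace([1.0, 2.0, 3.0], [[1, 0, 0], [0, 1, 0]])
print(xhat, x0)  # [1.0, 2.0, 0.0] [0.0, 0.0, 3.0]
```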
Math 20F Linear Algebra, Lecture 27
Gram-Schmidt orthogonalization

Given a basis {u₁, · · · , uₙ} of V, ( , ), the vectors

      v₁ = u₁,
      v₂ = u₂ − ((u₂, v₁) / ‖v₁‖²) v₁,
      v₃ = u₃ − ((u₃, v₁) / ‖v₁‖²) v₁ − ((u₃, v₂) / ‖v₂‖²) v₂,
      ⋮
      vₙ = uₙ − Σ_{i=1}^{n−1} ((uₙ, vᵢ) / ‖vᵢ‖²) vᵢ

form an orthogonal basis of V.
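The recursion above subtracts, from each uₖ, its projections along the previously built vᵢ; a minimal sketch in pure Python (function names are ours):

```python
def inner(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def gram_schmidt(us):
    """Orthogonalize: v_k = u_k - sum_i ((u_k, v_i)/||v_i||^2) v_i."""
    vs = []
    for u in us:
        v = list(map(float, u))
        for w in vs:
            c = inner(u, w) / inner(w, w)
            v = [vi - c * wi for vi, wi in zip(v, w)]
        vs.append(v)
    return vs

vs = gram_schmidt([[1, 1, 0], [1, 0, 1]])
print(vs)                   # [[1.0, 1.0, 0.0], [0.5, -0.5, 1.0]]
print(inner(vs[0], vs[1]))  # 0.0
```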
Least-squares approximation

Definition (Least-squares solution) A least-squares solution of the
linear system

      Ax = b

is a vector x̂ such that

      ‖Ax̂ − b‖

is as small as possible, that is, ‖Ax̂ − b‖ ≤ ‖Ax − b‖ for all x.
Least-squares approximation

Theorem x̂ is a least-squares solution of Ax = b if and only if

      (Ax̂ − b) ⊥ Range(A).

Equivalently, Aᵀ(Ax̂ − b) = 0, that is, x̂ solves the normal
equations AᵀA x̂ = Aᵀb.
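As a concrete sketch, fitting a line y ≈ c₀ + c₁x to data points is a least-squares problem whose normal equations AᵀA c = Aᵀy are 2×2 and can be solved by hand; the helper below is illustrative and not from the lecture:

```python
def lstsq_line(xs, ys):
    """Fit y ~ c0 + c1*x by solving the 2x2 normal equations A^T A c = A^T y,
    where A has rows [1, x_i]."""
    n = len(xs)
    sx, sxx = sum(xs), sum(x * x for x in xs)
    sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx          # det(A^T A), nonzero if xs are not all equal
    c0 = (sxx * sy - sx * sxy) / det
    c1 = (n * sxy - sx * sy) / det
    return c0, c1

# Points lying exactly on y = 1 + 2x are recovered exactly:
print(lstsq_line([0, 1, 2], [1, 3, 5]))  # (1.0, 2.0)
```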
Least-squares approximation

Proof: For any x, write

      Ax − b = Ax̂ − b + Ax − Ax̂ = (Ax̂ − b) + A(x − x̂).

Since (Ax̂ − b) ⊥ Range(A) and A(x − x̂) ∈ Range(A), the two terms
are orthogonal, so

      ‖Ax − b‖² = ‖Ax̂ − b‖² + ‖A(x − x̂)‖² ≥ ‖Ax̂ − b‖²,

so x̂ is a least-squares solution.