
MATH 3795

Lecture 9. Linear Least Squares. Using SVD Decomposition.
Dmitriy Leykekhman

Fall 2008

Goals
- SVD decomposition.
- Solving LLS with the SVD decomposition.

D. Leykekhman - MATH 3795 Introduction to Computational Mathematics

Linear Least Squares

SVD Decomposition.
For any matrix A ∈ R^(m×n) there exist orthogonal matrices U ∈ R^(m×m),
V ∈ R^(n×n) and a diagonal matrix Σ ∈ R^(m×n), i.e.,

        [ σ_1           ]
        [     σ_2       ]
        [         ...   ]
    Σ = [           σ_n ]    for m ≥ n,
        [  0   ...   0  ]
        [  0   ...   0  ]

with diagonal entries

    σ_1 ≥ ... ≥ σ_r > σ_(r+1) = ... = σ_min{m,n} = 0,

such that A = U Σ V^T.
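The decomposition above can be checked numerically. A minimal NumPy sketch (the slides use Matlab; the example matrix here is an assumption for illustration):

```python
import numpy as np

# Compute the SVD of a small m x n matrix with m >= n and verify
# that U, V are orthogonal and A = U * Sigma * V^T.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])                 # m = 3, n = 2

# full_matrices=True returns U (m x m) and V^T (n x n), as in the theorem
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Build the m x n diagonal matrix Sigma from the singular values
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)

assert np.allclose(U @ U.T, np.eye(3))     # U orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(2))   # V orthogonal
assert np.allclose(U @ Sigma @ Vt, A)      # A = U Sigma V^T
```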

SVD Decomposition.
For m ≤ n the diagonal matrix Σ ∈ R^(m×n) has the form

    Σ = [ σ_1                 0 ... 0 ]
        [     σ_2             0 ... 0 ]
        [         ...         0 ... 0 ]
        [             σ_m     0 ... 0 ]    for m ≤ n,

with diagonal entries

    σ_1 ≥ ... ≥ σ_r > σ_(r+1) = ... = σ_min{m,n} = 0,

such that A = U Σ V^T.


SVD Decomposition.
- The decomposition
      A = U Σ V^T
  is called the Singular Value Decomposition (SVD). It is a very
  important decomposition of a matrix and tells us a lot about its
  structure. It can be computed using the Matlab command svd.
- The diagonal entries σ_i of Σ are called the singular values of A.
  The columns of U are called left singular vectors and the columns of
  V are called right singular vectors.
- Using the orthogonality of V we can write the decomposition in the
  form
      A V = U Σ.
  We can interpret it as follows: there exists a special orthonormal
  set of vectors (the columns of V) that is mapped by the matrix A
  into an orthogonal set of vectors (the columns of U, scaled by the
  singular values).
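This mapping property, A v_i = σ_i u_i, is easy to verify numerically. A NumPy sketch (the random test matrix is an assumption):

```python
import numpy as np

# Verify that A maps each right singular vector v_i to sigma_i * u_i,
# i.e. the column-wise statement of A V = U Sigma.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=True)
V = Vt.T

for i in range(len(s)):
    # i-th right singular vector is sent to sigma_i times the i-th left one
    assert np.allclose(A @ V[:, i], s[i] * U[:, i])
```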


Applications of SVD Decomposition.

Given the SVD decomposition of A,
    A = U Σ V^T
with
    σ_1 ≥ ... ≥ σ_r > σ_(r+1) = ... = σ_min{m,n} = 0,
one may conclude the following:
- rank(A) = r,
- R(A) = R([u_1, ..., u_r]),
- N(A) = R([v_(r+1), ..., v_n]),
- R(A^T) = R([v_1, ..., v_r]),
- N(A^T) = R([u_(r+1), ..., u_m]).
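The facts above give a practical way to compute the rank and a basis of the null space. A NumPy sketch (the rank-deficient example matrix and the eps-based tolerance are assumptions, mirroring the Matlab code later in the slides):

```python
import numpy as np

# Read off rank(A) and a basis of N(A) from the singular values/vectors.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank 1: second row = 2 * first row

U, s, Vt = np.linalg.svd(A)
# treat singular values below an eps-based threshold as zero
tol = max(A.shape) * np.finfo(float).eps * s[0]
r = int(np.sum(s > tol))             # rank(A) = number of nonzero sigma_i

null_basis = Vt[r:].T                # columns v_{r+1}, ..., v_n span N(A)
assert r == 1
assert np.allclose(A @ null_basis, 0)
```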


Applications of SVD Decomposition.

Moreover, if we denote
    U_r = [u_1, ..., u_r],   Σ_r = diag(σ_1, ..., σ_r),   V_r = [v_1, ..., v_r],
then we have
    A = U_r Σ_r V_r^T = ∑_{i=1}^r σ_i u_i v_i^T.
This is called the dyadic decomposition of A; it decomposes the matrix A
of rank r into a sum of r matrices of rank 1.
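The dyadic decomposition can be reproduced term by term. A NumPy sketch (random full-rank test matrix assumed):

```python
import numpy as np

# Rebuild A as the sum of r rank-one matrices sigma_i * u_i * v_i^T.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
U, s, Vt = np.linalg.svd(A)
r = len(s)                           # full column rank here, r = 3

# dyadic decomposition: each outer product u_i v_i^T has rank 1
A_sum = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(r))
assert np.allclose(A_sum, A)
```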


Applications of SVD Decomposition.

- The 2-norm and the Frobenius norm of A can be easily computed from
  the SVD decomposition:
      ||A||_2 = sup_{x ≠ 0} ||Ax||_2 / ||x||_2 = σ_1,
      ||A||_F = sqrt( ∑_{i=1}^m ∑_{j=1}^n a_ij^2 ) = sqrt( σ_1^2 + ... + σ_p^2 ),   p = min{m, n}.
- From the SVD decomposition of A it also follows that
      A^T A = V Σ^T Σ V^T   and   A A^T = U Σ Σ^T U^T.
  Thus σ_i^2, i = 1, ..., p, are the eigenvalues of the symmetric
  matrices A^T A and A A^T, and v_i and u_i are the corresponding
  eigenvectors.
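Both claims are easy to confirm numerically. A NumPy sketch (random test matrix assumed):

```python
import numpy as np

# Check ||A||_2 = sigma_1, ||A||_F = sqrt(sum sigma_i^2), and that the
# sigma_i^2 are the eigenvalues of the symmetric matrix A^T A.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
s = np.linalg.svd(A, compute_uv=False)      # singular values only

assert np.isclose(np.linalg.norm(A, 2), s[0])
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(s**2)))

# eigvalsh returns eigenvalues in ascending order; reverse to match s
eigs = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(eigs, s**2)
```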


Applications of SVD Decomposition.

Theorem
Let the SVD of A ∈ R^(m×n) be given by
    A = U_r Σ_r V_r^T = ∑_{i=1}^r σ_i u_i v_i^T
with r = rank(A). If k < r and
    A_k = ∑_{i=1}^k σ_i u_i v_i^T,
then
    min_{rank(D)=k} ||A - D||_2 = ||A - A_k||_2 = σ_(k+1),
and
    min_{rank(D)=k} ||A - D||_F = ||A - A_k||_F = sqrt( ∑_{i=k+1}^p σ_i^2 ),   p = min{m, n}.
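The theorem's error formulas can be verified directly on a truncated SVD. A NumPy sketch (random test matrix and k = 2 assumed):

```python
import numpy as np

# Best rank-k approximation: truncate the SVD after k terms and check
# that the 2-norm error is sigma_{k+1} and the Frobenius error is
# sqrt(sigma_{k+1}^2 + ... + sigma_p^2).
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]        # rank-k truncation
assert np.linalg.matrix_rank(A_k) == k
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])   # = sigma_{k+1}
assert np.isclose(np.linalg.norm(A - A_k, 'fro'),
                  np.sqrt(np.sum(s[k:]**2)))
```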

Solving LLS with SVD Decomposition.

Consider the LLS problem
    min_x ||Ax - b||_2^2.
Let A = U Σ V^T be the SVD of A ∈ R^(m×n).

Using the orthogonality of U and V we have
    ||Ax - b||_2^2 = ||U^T (A V V^T x - b)||_2^2 = ||Σ z - U^T b||_2^2
                   = ∑_{i=1}^r (σ_i z_i - u_i^T b)^2 + ∑_{i=r+1}^m (u_i^T b)^2,
where z = V^T x.
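This splitting of the residual can be checked numerically. A NumPy sketch (random full-rank A, b, x assumed):

```python
import numpy as np

# Verify ||Ax - b||^2 = sum_{i<=r} (sigma_i z_i - u_i^T b)^2
#                       + sum_{i>r} (u_i^T b)^2,  with z = V^T x.
rng = np.random.default_rng(4)
m, n = 5, 3
A = rng.standard_normal((m, n))      # full column rank, r = n
b = rng.standard_normal(m)
x = rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A, full_matrices=True)
r = len(s)
z = Vt @ x                           # z = V^T x
c = U.T @ b                          # c_i = u_i^T b

lhs = np.linalg.norm(A @ x - b) ** 2
rhs = np.sum((s * z[:r] - c[:r]) ** 2) + np.sum(c[r:] ** 2)
assert np.isclose(lhs, rhs)
```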


Solving LLS with SVD Decomposition.

- Thus,
      min_x ||Ax - b||_2^2 = min_z [ ∑_{i=1}^r (σ_i z_i - u_i^T b)^2 + ∑_{i=r+1}^m (u_i^T b)^2 ].
- The solution is given by
      z_i = u_i^T b / σ_i,   i = 1, ..., r,
      z_i = arbitrary,       i = r+1, ..., n.
- As a result,
      min_x ||Ax - b||_2^2 = ∑_{i=r+1}^m (u_i^T b)^2.


Solving LLS with SVD Decomposition.

Recall that z = V^T x. Since V is orthogonal, we find that
    ||x||_2 = ||V V^T x||_2 = ||V^T x||_2 = ||z||_2.
All solutions of the linear least squares problem are given by x = V z
with
    z_i = u_i^T b / σ_i,   i = 1, ..., r,
    z_i = arbitrary,       i = r+1, ..., n.


Solving LLS with SVD Decomposition. Minimum norm solution.

The minimum norm solution of the linear least squares problem is given
by
    x* = V z*,
where z* ∈ R^n is the vector with entries
    z*_i = u_i^T b / σ_i,   i = 1, ..., r,
    z*_i = 0,               i = r+1, ..., n.
The minimum norm solution is
    x* = ∑_{i=1}^r (u_i^T b / σ_i) v_i.
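The minimum norm solution coincides with the pseudoinverse solution, which gives an easy cross-check. A NumPy sketch (the rank-deficient test matrix and the tolerance are assumptions):

```python
import numpy as np

# Build the minimum norm solution from the SVD and compare with the
# pseudoinverse solution pinv(A) @ b, which is the same vector.
rng = np.random.default_rng(5)
m, n = 6, 4
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))   # rank 2
b = rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A)
tol = max(m, n) * np.finfo(float).eps * s[0]
r = int(np.sum(s > tol))

# z_i = u_i^T b / sigma_i for i <= r, zero otherwise; x* = V z*
z = np.zeros(n)
z[:r] = (U[:, :r].T @ b) / s[:r]
x_star = Vt.T @ z

assert np.allclose(x_star, np.linalg.pinv(A) @ b)
```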


Solving LLS with SVD Decomposition. MATLAB code.

% compute the SVD:
[U,S,V] = svd(A);
s = diag(S);
% determine the effective rank r of A using the singular values
r = 1;
while( r < size(A,2) && s(r+1) >= max(size(A))*eps*s(1) )
    r = r+1;
end
% minimum norm solution x = V*z with z(i) = u_i'*b/s(i) for i <= r
d = U'*b;
x = V*( [d(1:r)./s(1:r); zeros(size(A,2)-r,1)] );
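For readers working outside Matlab, the same algorithm can be sketched in NumPy (function name and test data are assumptions, not part of the slides):

```python
import numpy as np

# NumPy sketch of the Matlab fragment above: solve the LLS problem via
# the SVD, treating singular values below an eps-based threshold as zero.
def lls_svd(A, b):
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A)
    # effective rank: sigma_i above max(m, n) * eps * sigma_1
    tol = max(m, n) * np.finfo(float).eps * s[0]
    r = int(np.sum(s > tol))
    d = U.T @ b
    z = np.concatenate([d[:r] / s[:r], np.zeros(n - r)])
    return Vt.T @ z                  # minimum norm solution x = V z

rng = np.random.default_rng(6)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
# for full-rank A this agrees with the standard least squares solution
assert np.allclose(lls_svd(A, b), np.linalg.lstsq(A, b, rcond=None)[0])
```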



Conditioning of a Linear Least Squares Problem.

- Suppose that the data b are
      b = b_ex + δb,
  where δb represents the measurement error.
- The minimum norm solution of min_x ||Ax - (b_ex + δb)||_2^2 is
      x = ∑_{i=1}^r (u_i^T b / σ_i) v_i
        = ∑_{i=1}^r ( u_i^T b_ex / σ_i + u_i^T (δb) / σ_i ) v_i.
- If a singular value σ_i is small, then u_i^T (δb) / σ_i could be
  large, even if u_i^T (δb) is small. This shows that errors δb in the
  data can be magnified by small singular values σ_i.
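The magnification factor 1/σ_i can be observed directly by perturbing the data along a singular direction. A NumPy sketch (the constructed A with a tiny σ_3 and the perturbation size are assumptions for illustration):

```python
import numpy as np

# Construct A = U0 diag(s) V0^T with one very small singular value and
# perturb b along u_3: the solution error is ||delta_b|| / sigma_3.
U0, _ = np.linalg.qr(np.random.default_rng(7).standard_normal((3, 3)))
V0, _ = np.linalg.qr(np.random.default_rng(8).standard_normal((3, 3)))
s = np.array([1.0, 1e-2, 1e-8])          # sigma_3 is tiny
A = U0 @ np.diag(s) @ V0.T

b = A @ np.ones(3)                       # exact data, x_ex = (1, 1, 1)
delta_b = 1e-6 * U0[:, 2]                # small error along u_3

x = np.linalg.pinv(A) @ (b + delta_b)
err = np.linalg.norm(x - np.ones(3))
# the 1e-6 perturbation is magnified by 1/sigma_3 = 1e8
assert np.isclose(err, 1e-6 / s[2])
```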


Conditioning of a Linear Least Squares Problem.

% Compute A
t = 10.^(0:-1:-10)';
A = [ ones(size(t)) t t.^2 t.^3 t.^4 t.^5 ];
% compute SVD of A
[U,S,V] = svd(A); sigma = diag(S);
% compute exact data
xex = ones(6,1); bex = A*xex;
for i = 1:10
    % data perturbation
    deltab = 10^(-i)*(0.5-rand(size(bex))).*bex;
    b = bex+deltab;
    % solution of perturbed linear least squares problem
    w = U'*b;
    x = V*( w(1:6)./sigma );
    errx(i) = norm(x - xex); errb(i) = norm(deltab);
end
loglog(errb,errx,'*');
ylabel('||x^{ex} - x||_2'); xlabel('||\delta b||_2')

Conditioning of a Linear Least Squares Problem.

- The singular values of A in the above Matlab example are:
      σ_1 ≈ 3.4,        σ_2 ≈ 2.1,        σ_3 ≈ 8.2 · 10^-2,
      σ_4 ≈ 7.2 · 10^-4,  σ_5 ≈ 6.6 · 10^-7,  σ_6 ≈ 5.5 · 10^-11.
- [Figure: the error ||x_ex - x||_2 for different values of ||δb||_2,
  on a log-log scale.]
- We see that small perturbations δb in the measurements can lead to
  large errors in the solution x of the linear least squares problem
  if the singular values of A are small.
