Ayush Bhandari
(ayush@MIT.edu)
Linear Inverse Problems: Ax = b

[Diagram: the full-rank, under-determined, and constrained & regularized least-squares settings, illustrated geometrically with solution sets $S_1$, $S_2$ and the projections $P_{S_1}$, $P_{S_2}$, $P_{S_1}(x)$ of a point $x$.]
Inverse Problems Regularized By Sparse Priors...
[Diagram: the surrounding fields: data analytics, imaging, probability, signal processing, communication, algorithms, optimization, and theory.]
Topics in Sparse Approximation and Regularization
[Timeline: 16XX, 1715, 1807, 1910, 1941, 1946, 1948, 1975, 1980s, 2000: from "Fourier and the world", through parametrized waveforms and dictionaries, to the wavelet revolution and sparse regularization.]
Signals as weighted sums of basis functions:

$$x(t) = \alpha_1\,\phi_1(t) + \alpha_2\,\phi_2(t) + \alpha_3\,\phi_3(t) + \cdots = \sum_{k} \alpha_k\,\phi_k(t) \qquad \text{(Fourier, Maclaurin, ...)}$$

where the $\alpha_k$ are the weights and the $\phi_k(t)$ are the basis functions.
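As a quick illustration of the expansion above, the sketch below builds a generic orthonormal basis (via QR, standing in for a Fourier-type basis, so the setup is hypothetical) and recovers the weights by inner products:

```python
import numpy as np

# Sketch of x = sum_k alpha_k * phi_k: a generic orthonormal basis (built by QR)
# stands in for a Fourier/Maclaurin-type basis; the weights alpha_k are then
# recovered by inner products <x, phi_k>.
rng = np.random.default_rng(0)
N = 64
Phi, _ = np.linalg.qr(rng.standard_normal((N, N)))   # columns phi_k, orthonormal

alpha = np.zeros(N); alpha[[1, 3, 7]] = [2.0, -1.0, 0.5]   # a few active weights
x = Phi @ alpha                                            # x = sum_k alpha_k phi_k

alpha_rec = Phi.T @ x                                      # <x, phi_k> for each k
print(np.allclose(alpha_rec, alpha))                       # -> True
```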
Linear Art

$$x(t) = \sum_{k\in\mathbb{Z}} \sum_{j\in\mathbb{Z}} c(j,k)\,\psi_{j,k}(t), \quad \text{with } c(j,k) = \big\langle x(t),\, \tilde{\psi}_{j,k}(t) \big\rangle$$

Many fathers: Haar, Strömberg, ... Morlet, Grossmann, Mallat, Meyer, Daubechies
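A minimal sketch of the same idea with the simplest wavelet, the Haar pair of averages and differences (a single decomposition level only, not the symmlets or Meyer wavelets named above):

```python
import numpy as np

# One-level Haar wavelet split: averages play the role of coarse (scaling)
# coefficients and differences play the role of detail (wavelet) coefficients.
def haar_level(x):
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # scaling (coarse) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # wavelet (detail) coefficients
    return s, d

def haar_inverse(s, d):
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

x = np.array([4.0, 4.0, 4.0, 4.0, 1.0, 1.0, 0.0, 7.0])
s, d = haar_level(x)
print(d)                                     # details vanish on flat stretches
print(np.allclose(haar_inverse(s, d), x))    # perfect reconstruction -> True
```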
Shannon's Sampling Theory: Approximation of Signals
Thanks Wiki!

[Figure: the same set of samples interpolated in four different ways, illustrating the approximation of a signal from its samples; plots from Wikipedia.]
Approximation of Functions: Goes back to ...

[Timeline: interpolation and approximation from antiquity until the beginning of the 20th century and beyond:
- 190/120 BC to 140 AD: astronomy and physics (Claudius Ptolemy)
- 600 to 625 AD: first- and second-order interpolation in early-medieval China and India (Brahmagupta)
- ~1600 to 1800: mathematical theory (Isaac Newton, Euler, Lagrange)
- 1900 to 1930-1950: cardinal series expansions and smoothness/bandlimited signal priors (Borel, E. T. Whittaker, de la Vallée Poussin, J. M. Whittaker, Kotel'nikov '33, Raabe '39)
- 1976: cubic kernels (Schaum, Mitchell/Netravali)
- 1990: subspace fitting meets approximation theory]
Compressed Sensing by B. S. Kashin 1970s ...
Kolmogorov Widths
[Timeline recap: from Fourier and sampling theory, through parametrized waveforms, dictionaries, and the wavelet revolution, to sparse regularization.]
The World According to Ax = b !
[Block diagram: $x \to A \to Ax = b$ (forward model) and $b \to$ inverse map $\to x$ (inverse problem).]

$$b(t) = \underbrace{\int A(t,z)\, x(z)\, dz}_{\text{Continuous Models}} \qquad\qquad b_m = \underbrace{\sum_{n} A_{m,n}\, x_n}_{\text{Discrete Models}}$$
Example 1: Camera and Photography
Cameras, CCD and the MegaPixel Game
[Figure: the scene "Harvard in Spring" passes through camera stages $A_1$, $A_2$, $A_3$; the recorded image is modeled as $Ax = b$.]
Whittaker-Nyquist-Bennett-Gabor
Optimized Interpolations
Shift-invariant linear approximation methods
[Block diagram: $f(t)$ passes through an antialiasing filter and is sampled with step $h$; the approximation $Q_h f(x)$ is rebuilt from shifted basis functions $\varphi(x - kh)$, $k \in \mathbb{Z}$; the resulting coefficients feed compression, transmission, and coding.]
[Figure: the imaging model written as a matrix equation, $A\,x = b$.]

STochastic Optical Reconstruction Microscopy (STORM)
http://nikon.com; Zhuang et al.
Example 4: Scientific Imaging
[Figures: further scientific-imaging examples written as $A\,x = b$; images from http://www.cis.rit.edu/class/simg217/ and nasa.gov.]
Examples of Sparse Representations
[Figure 9.2 from Chapter 9, "Approximations in Bases" (p. 458): (a) original signal $f(t)$; (b) each Dirac corresponds to one of the largest $M = 0.15\,N$ wavelet coefficients, calculated with a symmlet 4; (c) nonlinear approximation $f_M(t)$ recovered from the $M$ largest wavelet coefficients shown in (b), with $\|f - f_M\| / \|f\| = 5.1 \times 10^{-3}$.]

Sparse Approximation: Reconstruction from Modulus Maxima
Different Questions: Same Answer!
Least-Squares World: $\arg\min_x \|Ax - b\|_{\ell_2}^2$ (MAP Estimation + Gaussian)

[Diagram: the linear inverse problem $Ax = b$ in its over-determined (full-rank) and under-determined variants, handled by constrained & regularized least squares.]
Inverse Problems
Inverse Problems Regularized By Sparse Priors...
Classical Methods, Tikhonov and Followers: $\min_x \|Ax - b\|^2 + \lambda\|x\|_2^2$

Recent Breakthrough: $\ell_1$ Minimization and Convex Relaxation
Linear Measurements and Linear vs Non-Linear Decoding
$$\underbrace{\begin{bmatrix} 1 & -1 \\ \vdots & \vdots \end{bmatrix}}_{A} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = b, \quad x \in \mathbb{R}^2, \qquad Ax = b \;\Rightarrow\; \underbrace{x_1 - x_2}_{\text{Measurement}}$$

Fewer Measurements
Geometrical View for Non-Linear Decoding
$$\underbrace{\begin{bmatrix} 1 & -1 \\ \vdots & \vdots \end{bmatrix}}_{A} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = b, \quad x \in \mathbb{R}^2$$

[Diagram: in the $(x_1, x_2)$ plane, the solution set $S_A \subset \mathbb{R}^2$ is a line; candidate points are mapped onto it by the projections $P^{\perp}_{S_A}(x_1)$ and $P^{\perp}_{S_A}(x_2)$.]
How Gauss would have solved it! (Or even most of us before 2005)
$$\underbrace{\begin{bmatrix} 1 & -1 \\ \vdots & \vdots \end{bmatrix}}_{A} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = b, \quad x \in \mathbb{R}^2$$

$$\min_x \|x\|_2^2 \;\;\text{s.t.}\;\; Ax = b \;\Rightarrow\; x^{\star} = A^{\top}\big(AA^{\top}\big)^{-1} b$$

$$x^{\star} = \frac{1}{2}\begin{bmatrix} +1 \\ -1 \end{bmatrix} b$$
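A numerical sketch of this minimum-norm solution for the toy one-equation system (the measurement value below is hypothetical):

```python
import numpy as np

# Minimum-norm solution x* = A^T (A A^T)^{-1} b for the toy one-equation,
# two-unknown system above.
A = np.array([[1.0, -1.0]])          # a single measurement of x1 - x2
b = np.array([3.0])                  # observed value of x1 - x2

x_star = A.T @ np.linalg.inv(A @ A.T) @ b
print(x_star)                        # -> [ 1.5 -1.5], i.e. (1/2) * [+1, -1] * 3

# np.linalg.lstsq returns the same minimum-l2-norm solution for wide systems:
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_star, x_lstsq))  # -> True
```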
Geometrical View for Linear Decoding
$$\underbrace{\begin{bmatrix} 1 & -1 \\ \vdots & \vdots \end{bmatrix}}_{A} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = b, \qquad x^{\star} = A^{\top}\big(AA^{\top}\big)^{-1} b = \frac{1}{2}\begin{bmatrix} +1 \\ -1 \end{bmatrix} b$$

[Diagram: the minimum-norm solution $x^{\star}$ is the orthogonal projection of the origin onto the solution line $S_A \subset \mathbb{R}^2$; the projections $P^{\perp}_{S_A}(x_1)$, $P^{\perp}_{S_A}(x_2)$ of other points also land on $S_A$.]
[Diagram recap: linear inverse problems $Ax = b$: full-rank, under-determined, and constrained & regularized least squares, with the projections $P_{S_1}(x)$, $P_{S_2}$ onto the solution sets.]
Euclidean Norm Function
$$\|x\|_p \triangleq \left(\sum_{k=1}^{n} |x_k|^p\right)^{\frac{1}{p}} \quad \text{(Vectors)} \qquad\qquad \|f\|_p \triangleq \left(\int |f(z)|^p\, dz\right)^{\frac{1}{p}} \quad \text{(Functions)}$$
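In numpy the vector version can be sketched as follows (the test vector and the values of p are arbitrary):

```python
import numpy as np

# Vector p-norms ||x||_p = (sum |x_k|^p)^(1/p).
x = np.array([3.0, -4.0, 0.0, 1.0])

for p in (1, 2, np.inf):
    print(p, np.linalg.norm(x, ord=p))

# The same quantity written out explicitly for a finite p:
p = 1.5
print((np.abs(x) ** p).sum() ** (1.0 / p))
```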
$$x_1 = [1, 0, \ldots, 0]^{\top} \qquad\qquad x_2 = \frac{1}{\sqrt{N}}\,[1, 1, \ldots, 1]^{\top} = \Big[\tfrac{1}{\sqrt{N}}, \tfrac{1}{\sqrt{N}}, \ldots, \tfrac{1}{\sqrt{N}}\Big]^{\top}$$
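Both vectors have unit $\ell_2$ norm, but their other norms differ sharply; a small sketch (with a hypothetical N) makes the contrast explicit:

```python
import numpy as np

N = 100
x1 = np.zeros(N); x1[0] = 1.0          # "spike": maximally sparse
x2 = np.full(N, 1.0 / np.sqrt(N))      # "flat": energy spread over all entries

for name, x in (("x1 (spike)", x1), ("x2 (flat)", x2)):
    print(name,
          "l0 =", np.count_nonzero(x),
          "l1 =", np.abs(x).sum(),
          "l2 =", np.linalg.norm(x))
# Both have unit l2 norm, but the l1 norm grows from 1 to sqrt(N) as the
# energy spreads out; this is why minimizing the l1 norm promotes sparsity.
```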
What about other norms? OR What can we optimize?
[Figure: $\ell_p$ "wells": the penalty $\sum_{n=1}^{N} |x_n|^p$ plotted over $x_0, x_1, \ldots, x_N$ for $p = 2$, $p = 1$, and $p = 0.8$.]
Convexity of Norms
[Figure: the $\ell_p$ wells again for $p = 2$, $1$, $0.8$; the penalty $\sum_n |x_n|^p$ is convex for $p \geq 1$ and non-convex for $p < 1$.]
Exemplary Problem: Sparse Deconvolution
[Figure: the measurements and the convolution kernel for the sparse-deconvolution problem.]
[Block diagram: $x \to H \to Hx \to f(H) \to x^{\star}$, with reconstruction error $\|x - x^{\star}\|_2^2$ and $f(H) = \big(H^{\top}H + \lambda A^{\top}A\big)^{-1}H^{\top}$.]

For Toeplitz $H$, the product $Hx$ amounts to filtering $x$ with the filter $h$. This is also linked with Wiener filters.
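A quick check of the Toeplitz remark: the sketch below builds a small convolution matrix H by hand (kernel and sizes are hypothetical) and verifies that H x matches direct filtering:

```python
import numpy as np

# For a (Toeplitz) convolution matrix H, the product H @ x is the same as
# filtering x with the kernel h (here: 'full' convolution).
h = np.array([1.0, 2.0, 1.0]) / 4.0        # hypothetical smoothing kernel
n = 8                                       # length of the sparse signal x

# Build H column by column: column k holds h shifted down by k samples.
H = np.zeros((n + len(h) - 1, n))
for k in range(n):
    H[k:k + len(h), k] = h

x = np.zeros(n); x[2] = 1.0; x[6] = -0.5    # sparse spike train
print(np.allclose(H @ x, np.convolve(h, x)))  # -> True
```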
Linear Systems: Overdetermined | Underdetermined
[Diagram: $x \to H \to Hx$; a tall $H$ (more equations than unknowns) gives an overdetermined system, a wide $H$ (fewer equations than unknowns) gives an underdetermined one.]
$$\|x\|_p \triangleq \left(\sum_{k=1}^{n} |x_k|^p\right)^{\frac{1}{p}} \quad \text{(Vectors)} \qquad\qquad \|f\|_p \triangleq \left(\int |f(z)|^p\, dz\right)^{\frac{1}{p}} \quad \text{(Functions)}$$
L2-norms
$$\frac{\partial J}{\partial x} = 0 \;\Rightarrow\; x^{\star} = \big(H^{\top}H\big)^{-1}H^{\top}y \qquad \text{(Pseudo-Inverse)}$$

$$x^{\star} = \big(H^{\top}WH\big)^{-1}H^{\top}Wy \qquad \text{(Weighted, Overdetermined)}$$
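Both closed forms are easy to sketch in numpy on synthetic data (the sizes, noise level, and weights below are hypothetical):

```python
import numpy as np

# Overdetermined least squares: the normal-equation (pseudo-inverse) formula
# compared against np.linalg.lstsq on synthetic data.
rng = np.random.default_rng(0)
H = rng.standard_normal((50, 3))               # tall, full column rank
y = H @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(50)

x_normal = np.linalg.solve(H.T @ H, H.T @ y)   # (H^T H)^{-1} H^T y
x_lstsq, *_ = np.linalg.lstsq(H, y, rcond=None)
print(np.allclose(x_normal, x_lstsq))          # -> True (up to round-off)

# Weighted variant (H^T W H)^{-1} H^T W y with a diagonal weight matrix W:
w = rng.uniform(0.5, 2.0, size=50)
W = np.diag(w)
x_weighted = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)
```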
Linear Systems: Big Picture
[Block diagram: $x \to H \to Hx \to f(H) \to x^{\star}$, reconstruction error $\|x - x^{\star}\|_2^2$, with $f(H) = \big(H^{\top}H + \lambda A^{\top}A\big)^{-1}H^{\top}$.]
a = 0

$$y(x) = \underbrace{a x^2 + b x + c}_{f(x)} + \underbrace{\epsilon \sin(\omega x)}_{\epsilon(x)}$$

$$\begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} 1 & x_1 & x_1^2 \\ \vdots & \vdots & \vdots \\ 1 & x_n & x_n^2 \end{bmatrix}\begin{bmatrix} c \\ b \\ a \end{bmatrix}, \qquad y = P\,x$$
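A sketch of this fit with a numpy design matrix P = [1, x, x^2] on synthetic data (the coefficients and the sinusoidal perturbation below are hypothetical):

```python
import numpy as np

# Fitting y(x) ~ a x^2 + b x + c by least squares with the design matrix P.
rng = np.random.default_rng(1)
xs = np.linspace(0.0, 1.0, 40)
ys = 2.0 * xs**2 - 1.0 * xs + 0.5 + 0.05 * np.sin(20 * xs)  # model + perturbation

P = np.column_stack([np.ones_like(xs), xs, xs**2])   # columns: 1, x, x^2
coef, *_ = np.linalg.lstsq(P, ys, rcond=None)        # coef = [c, b, a]
print(coef)                                          # close to [0.5, -1.0, 2.0]
```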
Linear Systems: Underdetermined Processing
[Diagram: $x \to H \to Hx$ with a wide (underdetermined) $H$.]

Underdetermined System (Minimum Norm Solution)

Objective: $\min_x \|x\|_2^2$, Constraint: $y = Hx$

$$\mathcal{L}(x, \lambda) = \|x\|_2^2 + \lambda^{\top}(y - Hx)$$

$$\frac{\partial \mathcal{L}(x,\lambda)}{\partial x} = 2x - H^{\top}\lambda, \qquad \frac{\partial \mathcal{L}(x,\lambda)}{\partial \lambda} = y - Hx$$

$$\lambda \mapsto 2\big(HH^{\top}\big)^{-1}y \;\Rightarrow\; x^{\star} = H^{\top}\big(HH^{\top}\big)^{-1}y$$
[Diagram: $x \to H \to Hx$, and $y \to \big(H^{\top}H + \lambda I\big)^{-1}H^{\top} \to x^{\star}$.]

$$\mathcal{L}(x, \lambda) = c_1\|y - Hx\|_2^2 + c_2\|x\|_2^2$$

$$x^{\star} = \underbrace{\big(H^{\top}H + \lambda I\big)^{-1}H^{\top}}_{H^{\star}}\, y, \qquad \lambda = c_2/c_1$$
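A minimal sketch of this closed-form ridge/Tikhonov solution (the sizes and the value of lam are hypothetical):

```python
import numpy as np

# Ridge / Tikhonov-regularized least squares: x* = (H^T H + lam*I)^{-1} H^T y.
rng = np.random.default_rng(2)
H = rng.standard_normal((30, 10))
y = rng.standard_normal(30)
lam = 0.1                                  # regularization weight (c2/c1)

x_star = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)

# As lam -> 0 this tends to the plain least-squares solution;
# larger lam shrinks x_star toward zero.
```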
Linear Systems: Weighted Regularization / Tikhonov
[Diagram: $x \to H \to Hx$, with a regularization operator $A$.]

$$\mathcal{L}(x, \lambda) = c_1\|y - Hx\|_2^2 + c_2\|Ax\|_2^2$$

$$x^{\star} = \underbrace{\big(H^{\top}H + \lambda A^{\top}A\big)^{-1}H^{\top}}_{H^{\star}}\, y$$
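The same recipe with a non-trivial regularization operator; in the sketch below a first-difference matrix stands in for A (a common but here hypothetical choice that penalizes rough solutions):

```python
import numpy as np

# Weighted / generalized Tikhonov: x* = (H^T H + lam * A^T A)^{-1} H^T y.
rng = np.random.default_rng(3)
n = 20
H = rng.standard_normal((15, n))            # under-determined forward model
y = rng.standard_normal(15)
lam = 1.0

A = np.diff(np.eye(n), axis=0)              # (n-1) x n first-difference operator
x_star = np.linalg.solve(H.T @ H + lam * A.T @ A, H.T @ y)
```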
$$\min_x \|b - Ax\|_{\ell_2}^2 + \lambda\|x\|_{\ell_1} \quad\Longleftrightarrow\quad \min_{\|x\|_{\ell_1} = K_1} \|b - Ax\|_{\ell_2}^2$$

$$\underbrace{A^{\top}A}_{G_A} = \begin{bmatrix} \|a_1\|^2 & \langle a_1, a_2\rangle \\ \langle a_1, a_2\rangle & \|a_2\|^2 \end{bmatrix}, \qquad \tilde{b} = A^{-1}b$$

$$\|b - Ax\|_{\ell_2}^2 = \big(x - \tilde{b}\big)^{\top}\underbrace{A^{\top}A}_{\succeq\,0}\big(x - \tilde{b}\big), \qquad \tilde{b}: \text{Least Squares}$$

[Diagram: in the $(x_1, x_2)$ plane, the elliptical level sets of $\|b - Ax\|_{\ell_2}^2$ are shown together with the $\ell_2$ ball $\|x\|_{\ell_2} = K_2$ and the $\ell_1$ ball $\|x\|_{\ell_1} = K_1$.]
Key Idea Behind Enforcing Sparsity
$$\underbrace{A^{\top}A}_{G_A} = \begin{bmatrix} \|a_1\|^2 & \langle a_1, a_2\rangle \\ \langle a_1, a_2\rangle & \|a_2\|^2 \end{bmatrix}, \qquad G_A = \begin{bmatrix} g_1 & g_2 \\ g_3 & g_4 \end{bmatrix} = A^{\top}A, \qquad \tilde{b} = A^{-1}b$$

$$x^{\top}G_A\,x = \underbrace{x_1^2\,g_1 + x_1 x_2\,(g_2 + g_3) + x_2^2\,g_4}_{\text{Ellipse}}$$

$$\|b - Ax\|_{\ell_2}^2 = \big(x - \tilde{b}\big)^{\top}\underbrace{A^{\top}A}_{\succeq\,0}\big(x - \tilde{b}\big), \qquad \tilde{b}: \text{Least Squares}$$

[Diagram: the elliptical level sets centered at $\tilde{b}$ typically first touch the $\ell_1$ ball $\|x\|_{\ell_1} = K_1$ at a corner, i.e. at a sparse point, whereas they touch the $\ell_2$ ball $\|x\|_{\ell_2} = K_2$ at a generic, non-sparse point.]
$$\mathcal{L}(x, \lambda) = \|b - x\|_{\ell_2}^2 + \lambda\|x\|_{\ell_1}$$

$$= (b_0 - x_0)^2 + \lambda|x_0| + \cdots + (b_{N-1} - x_{N-1})^2 + \lambda|x_{N-1}|$$

$$= \sum_{n=0}^{N-1} (b_n - x_n)^2 + \lambda|x_n|$$

$$\partial_{x_n}\mathcal{L}(x, \lambda) = -2\,(b_n - x_n) + \lambda\,\partial|x_n|$$

so the objective separates over the entries and can be minimized entry by entry.
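Setting each per-entry (sub)gradient to zero gives the soft-thresholding rule; a minimal numerical sketch (the numbers below are hypothetical) checks it against a brute-force grid search:

```python
import numpy as np

def soft_threshold(b, tau):
    """Entry-wise minimizer of (b - x)^2 + 2*tau*|x|: shrink b toward 0 by tau."""
    return np.sign(b) * np.maximum(np.abs(b) - tau, 0.0)

# For L(x, lam) = ||b - x||_2^2 + lam * ||x||_1 the minimizer is obtained
# entry-wise with threshold lam/2; a quick check on a single entry:
b, lam = 0.8, 1.0
xs = np.linspace(-2, 2, 4001)
x_grid = xs[np.argmin((b - xs) ** 2 + lam * np.abs(xs))]
print(x_grid, soft_threshold(b, lam / 2))   # both close to 0.3
```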
Iterated Soft-Thresholding Algorithm
$$\mathcal{L}(x, \lambda) = \|b - Ax\|_{\ell_2}^2 + \lambda\|x\|_{\ell_1}$$

$$M_k(x) = \mathcal{L}(x, \lambda) + \underbrace{\big(x - x^{(k)}\big)^{\top}\big(I - A^{\top}A\big)\big(x - x^{(k)}\big)}_{\text{Majorizer term, } I - A^{\top}A \,\succeq\, 0}$$

$$M_k(x) = \Big\|x^{(k)} + A^{\top}\big(b - Ax^{(k)}\big) - x\Big\|^2 + \lambda\|x\|_{\ell_1} + C_0$$

$$x^{(k+1)} = \mathcal{T}_{\lambda/2}\Big(x^{(k)} + A^{\top}\big(b - Ax^{(k)}\big)\Big)$$

where $\mathcal{T}_{\tau}$ is the entry-wise soft-thresholding operator from the previous slide.
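A compact sketch of the resulting iteration, assuming A has been rescaled so that the largest eigenvalue of A^T A is at most 1 (the sparse-recovery test problem below is hypothetical):

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterated soft-thresholding for min_x ||b - A x||_2^2 + lam * ||x||_1.

    A sketch assuming the spectral norm of A is <= 1 (rescale A and b
    otherwise); lam and n_iter are knobs to tune per problem.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x + A.T @ (b - A @ x)                               # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / 2, 0.0)   # soft threshold
    return x

# Hypothetical sparse-recovery example with a rescaled random A.
rng = np.random.default_rng(4)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, 2)                   # ensure spectral norm <= 1
x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = ista(A, b, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.05)) # large entries should sit near {5, 37, 80}
```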