Adaptive Signal Processing : Lecture 7

Contents
- Revision on the adaptive filtering algorithm
- The Least Squares Method
- The RLS Algorithm
- Computational Complexity
- Performance Analysis
- Comparison between LMS and RLS
- The Fast RLS (FRLS) Algorithms
- Summary
N. Tangsangiumvisai
Revision (I)

Fig. 1 Generic form of an adaptive filter

The estimation error is
e(n) = d(n) − d̂(n|X_n)
     = d(n) − w^H(n) x(n)    (1)

- The adaptive filtering algorithm is employed to control the coefficients so as to minimize some cost function.
- The usual form of the update equation:
w(n+1) = w(n) + Δw(n)

Revision (II)

If x(n), d(n) are zero-mean and WSS random processes, the cost function (MSE) at time n is
J(n) = σ_d² − w^H(n) p − p^H w(n) + w^H(n) R w(n)    (2)

At the minimum point of the error-performance surface, w(n) = w_opt; hence, the minimum MSE is
J_min = σ_d² − p^H w_opt    (3)
since
w_opt = R⁻¹ p    (4)
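As a quick numerical illustration (not part of the original slides), the Wiener solution of eq. (4) and the minimum MSE of eq. (3) can be checked in a few lines of Python; the values of R, p and σ_d² below are made up for the example.

```python
import numpy as np

# Minimal check of the Wiener solution w_opt = R^{-1} p  (eq. 4).
# R, p and sigma_d2 are illustrative values, not from the lecture.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])      # input autocorrelation matrix
p = np.array([0.7, 0.3])        # cross-correlation vector E{x(n) d*(n)}

w_opt = np.linalg.solve(R, p)   # preferred over forming R^{-1} explicitly

sigma_d2 = 1.0                  # assumed variance of d(n)
J_min = sigma_d2 - p @ w_opt    # minimum MSE (eq. 3)
print(w_opt, J_min)
```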
Revision (III)

Method of the steepest descent
- a recursive method to find the minimum-MSE coefficients (the Wiener solution) at the global minimum
- uses an exact measurement of the gradient vector

w(n+1) = w(n) + ½ μ (−∇_w J(n))    (5)

Revision (IV)

Steepest Descent algorithm → true gradient
Least Mean Square algorithm → stochastic gradient: the gradient vector is estimated from the available data, i.e. instantaneous estimates of R and p are used,
R̂(n) = x(n) x^H(n),  p̂(n) = x(n) d*(n)    (6)
so that
∇̂J(n) = −2 x(n) d*(n) + 2 x(n) x^H(n) w(n)    (7)
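A minimal sketch of the steepest-descent recursion of eq. (5), assuming R and p are known exactly (true gradient ∇J(n) = −2p + 2R w(n)); the matrices are made-up examples.

```python
import numpy as np

# Steepest descent (eq. 5) with the *true* gradient:
# grad J(n) = -2p + 2 R w(n), so  w(n+1) = w(n) + mu (p - R w(n)).
R = np.array([[1.0, 0.5], [0.5, 1.0]])
p = np.array([0.7, 0.3])
mu = 0.1                          # step size; needs 0 < mu < 2/lambda_max(R)
w = np.zeros(2)
for _ in range(200):
    w = w + mu * (p - R @ w)      # descends toward the Wiener solution
print(w, np.linalg.solve(R, p))   # the two vectors should agree closely
```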
Revision (V)

The update equation of LMS :
w(n+1) = w(n) + μ x(n) [d*(n) − x^H(n) w(n)] = w(n) + μ x(n) e*(n)    (8)

The update equation of NLMS :
w(n+1) = w(n) + (μ̃ / (ν + ‖x(n)‖²)) x(n) e*(n)    (9)
where ν is a small positive constant.

LMS / NLMS uses the instantaneous estimate of the gradient vector → excess MSE → misadjustment.
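A runnable sketch of the NLMS update of eq. (9) on a toy system-identification problem; the unknown system, signal lengths and step size are illustrative assumptions, and dropping the normalisation term recovers plain LMS (eq. 8).

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
w_true = rng.standard_normal(M)          # hypothetical unknown system
N = 5000
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

def nlms(x, d, M, mu=0.5, nu=1e-6):
    """NLMS update (eq. 9); remove the normalisation for plain LMS (eq. 8)."""
    w = np.zeros(M)
    for n in range(M, len(x)):
        xn = x[n:n-M:-1]                 # tap-input vector [x(n) ... x(n-M+1)]
        e = d[n] - w @ xn                # a priori error e(n)
        w = w + (mu / (nu + xn @ xn)) * xn * e
    return w

print(nlms(x, d, M))                     # should approach w_true
```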
The Least Squares Method
RLS family
- Least Squares technique
- exact minimization of the sum of squared errors
- use of time averages of data
The desired signal is assumed to obey the linear regression model
d(n) = Σ_{k=0}^{M−1} w_k x(n−k) + e_m(n)    (10)
or, in vector form,
d(n) = w^T x(n) + e_m(n)    (11)
where e_m(n) is the measurement error, with
E{e_m(n)} = 0,  ∀n    (12)
and
E{e_m(n) e_m*(k)} = σ_m²  for n = k,  0  for n ≠ k    (13)

To estimate the unknown parameters, a transversal filter with tap-weight vector w(n) = [w_0 w_1 … w_{M−1}]^T is used; the estimated desired signal is equal to
d̂(n) = Σ_{k=0}^{M−1} w_k x(n−k)    (14)

[Figure: transversal filter structure. The delayed inputs x(n), x(n−1), …, x(n−M+1), produced by a chain of z⁻¹ elements, are weighted by w_0, w_1, …, w_{M−1} and summed to form d̂(n).]

The estimation error is
e(n) = d(n) − d̂(n)
and the error energy over the observation interval [t_1, t_2] is
E(t_1, t_2) = Σ_{n=t_1}^{t_2} e²(n)    (17)
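A small sketch of eqs. (14) and (17), assuming the pre-windowing convention (x(n) = 0 for n < 0); the weights and data below are made up.

```python
import numpy as np

# Error energy (eq. 17) of a transversal filter whose output is
# d_hat(n) = sum_k w_k x(n-k)  (eq. 14); data and weights are illustrative.
rng = np.random.default_rng(1)
M, N = 3, 100
x = rng.standard_normal(N)
d = rng.standard_normal(N)
w = np.array([0.5, -0.2, 0.1])

d_hat = np.convolve(x, w)[:N]            # pre-windowed: x(n) = 0 for n < 0
e = d - d_hat                            # e(n) = d(n) - d_hat(n)
t1, t2 = M - 1, N - 1                    # summation limits (t1, t2)
E = np.sum(e[t1:t2 + 1] ** 2)            # error energy over n = t1..t2
print(E)
```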
The optimum tap weights minimize the error energy
E(t_1, t_2) = Σ_{n=t_1}^{t_2} e²(n)    (18)
Setting the derivative of E(t_1, t_2) with respect to each tap weight to zero yields
Σ_{n=t_1}^{t_2} x(n−k) e(n) = 0,  k = 0, 1, …, M−1    (19)
Defining the time-averaged correlations
φ(k, i) = Σ_{n=t_1}^{t_2} x(n−k) x(n−i),  z(i) = Σ_{n=t_1}^{t_2} x(n−i) d(n),  i, k = 0, 1, …, M−1    (20)
gives the normal equations
Σ_{k=0}^{M−1} ŵ_k φ(k, i) = z(i),  i = 0, 1, …, M−1    (21)
In matrix form, with the desired-signal vector
d(n) = [d(n) d(n−1) … d(1)]^T    (24)
the (n × 1) error vector is
e(n) = [e(n) e(n−1) … e(1)]^T = d(n) − X(n) w(n)    (25)
where X(n) is the data matrix whose rows are the tap-input vectors x^T(i), i = 1, …, n.

The normal equations become
(X^T(n) X(n)) ŵ(n) = X^T(n) d(n)
so that
ŵ(n) = (X^T(n) X(n))⁻¹ X^T(n) d(n) = Φ⁻¹(n) z(n)    (26)
with Φ(n) = X^T(n) X(n) and z(n) = X^T(n) d(n).

At the least-squares solution, the error vector is orthogonal to the data (principle of orthogonality):
e^T(n) X(n) = 0^T    (27)
Therefore, the minimum error energy is
E_min = d^T(n) d(n) − z^T(n) Φ⁻¹(n) z(n)    (28)
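A sketch of the block least-squares solution of eq. (26) in Python, on a made-up identification problem; lstsq is used instead of an explicit inverse for numerical robustness, and the orthogonality of eq. (27) is verified at the end.

```python
import numpy as np

# Block least squares: w = (X^T X)^{-1} X^T d  (eq. 26), illustrative data.
rng = np.random.default_rng(2)
M, N = 4, 200
w_true = rng.standard_normal(M)          # hypothetical unknown weights
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

# Data matrix X whose n-th row is [x(n), x(n-1), ..., x(n-M+1)], pre-windowed
X = np.column_stack([np.concatenate([np.zeros(k), x[:N - k]]) for k in range(M)])

w_ls, *_ = np.linalg.lstsq(X, d, rcond=None)   # solves min ||d - X w||^2
print(w_ls)                                    # should be close to w_true

# Orthogonality check (eq. 27): the residual is orthogonal to the data
e = d - X @ w_ls
print(X.T @ e)                                 # ~ zero vector
```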
The RLS Algorithm
When the new data pair {x(n+1), d(n+1)} arrives, the ((n+1) × M) data matrix grows by one row,
X(n+1) = [ X(n) ; x^T(n+1) ]
so the correlation matrix can be computed recursively:
Φ(n+1) = X^T(n+1) X(n+1) = Σ_{i=1}^{n+1} x(i) x^T(i)    (31)
       = Φ(n) + x(n+1) x^T(n+1)    (32)
By the matrix inversion lemma, if
A = B + C D⁻¹ C^T
then
A⁻¹ = B⁻¹ − B⁻¹ C (C^T B⁻¹ C + D)⁻¹ C^T B⁻¹    (34)

If we select
A = Φ(n+1),  B = Φ(n),  C = x(n+1),  D = 1
then
Φ⁻¹(n+1) = Φ⁻¹(n) − [Φ⁻¹(n) x(n+1) x^T(n+1) Φ⁻¹(n)] / (1 + Δ(n+1))    (35)
where the scalar
Δ(n+1) = x^T(n+1) Φ⁻¹(n) x(n+1)
and the gain vector is defined as
k(n+1) = Φ⁻¹(n) x(n+1) / (1 + Δ(n+1))    (38)
so that (35) can be written compactly as
Φ⁻¹(n+1) = Φ⁻¹(n) − k(n+1) x^T(n+1) Φ⁻¹(n)
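A quick numerical check (my addition, with made-up matrices) that the rank-1 update of eq. (35) really produces the inverse of Φ(n) + x x^T:

```python
import numpy as np

# Verify the rank-1 inverse update (eq. 35) against a direct inversion.
rng = np.random.default_rng(3)
M = 4
A = rng.standard_normal((M, M))
Phi = A @ A.T + np.eye(M)        # some positive-definite Phi(n)
P = np.linalg.inv(Phi)           # Phi^{-1}(n)
x = rng.standard_normal(M)       # new input vector x(n+1)

delta = x @ P @ x                # scalar x^T Phi^{-1}(n) x
k = (P @ x) / (1.0 + delta)      # gain vector (eq. 38)
P_new = P - np.outer(k, x @ P)   # Phi^{-1}(n+1) via the lemma

print(np.allclose(P_new, np.linalg.inv(Phi + np.outer(x, x))))  # True
```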
Similarly, the cross-correlation vector can be updated recursively:
z(n+1) = Σ_{i=1}^{n+1} d(i) x(i)    (40)
       = z(n) + d(n+1) x(n+1)    (41)

To solve
w(n+1) = Φ⁻¹(n+1) z(n+1)
substitute (35) and (41); the tap-weight update then becomes
w(n+1) = w(n) + k(n+1) α(n+1)    (42)
where the a priori estimation error is given by
α(n+1) = d(n+1) − x^T(n+1) w(n)    (43)
RLS Initialisation

Assumption on the input data : the pre-windowing method is used, i.e. data prior to n = 0 are zero.

To keep Φ(n) invertible during start-up, re-define
Φ(n) = Σ_{i=1}^{n} x(i) x^T(i) + δ I    (44)
where δ is a small positive regularisation constant, giving the initial conditions
Φ⁻¹(0) = δ⁻¹ I,  w(0) = 0    (45)
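Putting eqs. (35), (38) and (42)-(45) together, a minimal Python sketch of the (growing-window) RLS algorithm; the test system and parameter values are illustrative assumptions.

```python
import numpy as np

def rls(x, d, M, delta=0.01):
    """Growing-window RLS: eqs. (35), (38), (42)-(45); illustrative sketch."""
    w = np.zeros(M)                       # w(0) = 0        (eq. 45)
    P = np.eye(M) / delta                 # Phi^{-1}(0) = delta^{-1} I
    for n in range(M, len(x)):
        xn = x[n:n-M:-1]                  # tap-input vector (pre-windowed)
        k = P @ xn / (1.0 + xn @ P @ xn)  # gain vector     (eq. 38)
        alpha = d[n] - xn @ w             # a priori error  (eq. 43)
        w = w + k * alpha                 # weight update   (eq. 42)
        P = P - np.outer(k, xn @ P)       # inverse update  (eq. 35)
    return w

# Toy system-identification run with made-up data
rng = np.random.default_rng(4)
M, N = 4, 500
w_true = rng.standard_normal(M)
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)
print(rls(x, d, M))                       # should approach w_true
```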
Computational Complexity

Per iteration, the RLS algorithm performs the following steps, each costing O(M) or O(M²) multiplications (×) and additions/subtractions (+/−) for a filter of length M:

Δ(n+1) = x^T(n+1) Φ⁻¹(n) x(n+1)
k(n+1) = Φ⁻¹(n) x(n+1) / (1 + Δ(n+1))
α(n+1) = d(n+1) − x^T(n+1) w(n)
w(n+1) = w(n) + k(n+1) α(n+1)
Φ⁻¹(n+1) = Φ⁻¹(n) − k(n+1) x^T(n+1) Φ⁻¹(n)

Total : O(M²) operations per iteration.

Non-stationary environment

Previously, we assumed a statistically stationary environment for the RLS algorithm. In a non-stationary environment, an exponential weighting (forgetting) factor λ, 0 < λ ≤ 1, is introduced into the cost function:
J(n) = Σ_{i=1}^{n} λ^{n−i} e²(i)    (46)
The recursions of the exponentially-weighted RLS algorithm become

Δ(n+1) = x^T(n+1) Φ⁻¹(n) x(n+1)
k(n+1) = λ⁻¹ Φ⁻¹(n) x(n+1) / (1 + λ⁻¹ Δ(n+1))
α(n+1) = d(n+1) − x^T(n+1) w(n)
w(n+1) = w(n) + k(n+1) α(n+1)
Φ⁻¹(n+1) = λ⁻¹ Φ⁻¹(n) − λ⁻¹ k(n+1) x^T(n+1) Φ⁻¹(n)
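Relative to the growing-window sketch above, only the gain and the inverse update change; a hedged one-step sketch of the exponentially-weighted recursions (the function name and λ value are my own choices):

```python
import numpy as np

def ew_rls_step(w, P, xn, dn, lam=0.99):
    """One iteration of exponentially-weighted RLS (cost of eq. 46).
    Algebraically, k = P x / (lam + x^T P x) equals the lambda^{-1} form."""
    Pi = P @ xn                              # Phi^{-1}(n) x(n+1)
    k = Pi / (lam + xn @ Pi)                 # gain with forgetting factor
    alpha = dn - xn @ w                      # a priori error
    w = w + k * alpha                        # weight update
    P = (P - np.outer(k, xn @ P)) / lam      # lambda^{-1}[...] inverse update
    return w, P
```

Usage mirrors the rls() loop above: initialise w = 0 and P = δ⁻¹I, then call ew_rls_step for each new data pair.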
Performance Analysis

By writing the desired signal as
d(n) = w_o^T x(n) + e_m(n)    (47)
where e_m(n) is the measurement error, of zero mean and variance σ_m², and is independent of the input signal.

By defining the weight-error vector as
ε(n) = w(n) − w_o    (48)
the a priori estimation error can then be expressed as
α(n) = e_m(n) − (w(n−1) − w_o)^T x(n) = e_m(n) − ε^T(n−1) x(n)    (49)

The MSE learning curve J(n) (50) can then be analysed in terms of the input correlation matrix R(n) = E{x(n) x^T(n)} and the weight-error correlation matrix K(n) = E{ε(n) ε^T(n)}    (51)

The steady-state MSE approaches the variance σ_m² of the measurement error as n → ∞ (in a stationary environment).
Comparison between LMS and RLS

Convergence Rate
- LMS : of the order of 20M iterations
- RLS : of the order of 2M iterations
The convergence rate of RLS does not depend on the condition number of the input data, as it does in the case of LMS.

Computational Complexity
- LMS : O(2M)
- RLS : O(M²)
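A rough side-by-side simulation of the convergence-rate claim (my addition); the coloured input makes the autocorrelation matrix ill-conditioned, which slows (N)LMS but not RLS, and all parameter values are illustrative.

```python
import numpy as np

# LMS-vs-RLS convergence on a made-up identification task with a
# correlated (ill-conditioned) input; parameters are illustrative only.
rng = np.random.default_rng(5)
M, N = 8, 2000
w_true = rng.standard_normal(M)
x = np.convolve(rng.standard_normal(N), [1.0, 0.9])[:N]   # coloured input
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

w_lms, w_rls = np.zeros(M), np.zeros(M)
P = np.eye(M) / 0.01                      # Phi^{-1}(0) = delta^{-1} I
err_lms, err_rls = [], []
for n in range(M, N):
    xn = x[n:n-M:-1]
    e = d[n] - w_lms @ xn                 # NLMS branch (eq. 9)
    w_lms = w_lms + 0.5 / (1e-6 + xn @ xn) * xn * e
    err_lms.append(e**2)
    a = d[n] - w_rls @ xn                 # RLS branch (eqs. 38, 42, 43, 35)
    k = P @ xn / (1.0 + xn @ P @ xn)
    w_rls = w_rls + k * a
    P = P - np.outer(k, xn @ P)
    err_rls.append(a**2)

# Squared a priori error averaged over the first 10*M iterations:
# the RLS figure should already be near the measurement-noise floor.
print(np.mean(err_lms[:10*M]), np.mean(err_rls[:10*M]))
```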
Robustness
- Leakage LMS : can operate in a fixed-point implementation.
- RLS : numerically sensitive to rounding errors when λ < 1.

Tracking Performance
- LMS : tracking behaviour governed by the step size μ.
- RLS : tracking behaviour governed by the forgetting factor, through (1 − λ).
The Fast RLS (FRLS) Algorithms
Summary

- The cost function of RLS requires no statistical information about the input or desired signals.
- RLS achieves an improvement in performance as compared to LMS, but at a higher computational cost.

Next lecture, Lecture 8 : Frequency-domain algorithms